Multi-mission view of low-luminosity 'obscured' phase of GRS 1915+105
Athulya M. P., Anuj Nandi
arXiv:2307.04206v1 [astro-ph.HE], published 9 July 2023
GRS 1915+105 has been observed in an `obscured' phase since May 2019, exhibiting steady and low X-ray luminosities interrupted by sporadic re-brightenings. In this work, we perform a comprehensive, wide-band analysis of the spectral and timing properties of the source during the period 2019–2021 using AstroSat (SXT: 0.5–8 keV; LAXPC: 3–60 keV), NICER (0.5–12 keV), and NuSTAR (3–60 keV) observations. Spectral analysis reveals the presence of a highly variable obscurer (N_ H_ 1 ∼ 10^22–10^24 atoms cm^-2) throughout the observation period. The source is detected in the Low/Hard state for most of the period, with the spectra described by a Comptonised component (Γ ∼ 1.16–1.79, kT_ e ∼ 2–31 keV). The spectra steepen (Γ ∼ 2.5), indicating spectral softening during the rise of the re-brightenings. Emission and absorption lines corresponding to neutral Fe Kα, Fe XXV Kα, Fe XXVI Kα, and Ni XXVIII Kα were detected, with equivalent widths varying between 70 eV and 3.5 keV. The column density of the absorbing plasma varied between 10^16 and 10^18 atoms cm^-2 at a distance ≤ 2×10^10 cm. Interestingly, the source is also seen exhibiting various variability classes (ρ, λ, δ, χ) at relatively low luminosities (∼0.01 L_Edd) during the re-brightening phases. Different variability classes show signatures of QPOs (ν_ QPO: 20–180 mHz, rms_ QPO: 7.5%–16%). The source showed a maximum bolometric luminosity (L_bol) of ∼0.01 L_Edd (re-brightening phases) and a minimum L_bol of 0.004 L_Edd (quiet phase) during this period. We discuss the possible disc dynamics around the black hole during this low-luminosity `obscured' phase.
X-ray binaries - accretion, accretion discs - black hole physics - stars: black holes - radiation mechanisms: general - stars: individual: GRS 1915+105
§ INTRODUCTION
GRS 1915+105 is a unique Low Mass X-ray Binary (LMXB), hosting a massive (12.4_-1.8^+2.0 M_⊙; ) and maximally rotating (a^* > 0.98^+0.01_-0.01; ) black hole at its centre, accreting matter from a K-giant companion <cit.>. GRS 1915+105 is the only LMXB that has exhibited 15 unique variability classes (α, β, γ, δ, θ, κ, λ, μ, ν, ρ, ϕ, χ, ω, η, ξ) so far (<cit.>, see also ). Each of these classes exhibits variability on timescales ranging from a few seconds to many hours, thereby providing a captivating illustration of the various instabilities, and the timescales on which they develop, in the accretion disc around a stellar-mass black hole ( and the references therein). GRS 1915+105 exhibits steady radio jets in the hard state <cit.>, while transient and discrete radio jets <cit.> are seen during the transition of the source from the hard state to the soft state. Besides jet events, disc winds have also been a prominent ejection phenomenon in GRS 1915+105. <cit.> detected various absorption features that indicated ionized outflowing winds, which also acted as a jet-suppression mechanism by diverting the inflowing disc matter away from the radio jets <cit.>. Owing to its eccentric inflow and outflow phenomena, GRS 1915+105 makes an exemplary case study for analysing astrophysical phenomena around a compact object.
After more than 25 years of high X-ray activity, GRS 1915+105 began to show a decrease in X-ray flux <cit.> from May 2018, leading to the presumption that the source was finally approaching quiescence. Yet, exceptionally again, the source started exhibiting a series of non-generic activities from April 2019 <cit.>. The detection of multiple absorption lines in the energy spectrum of GRS 1915+105 <cit.> and the requirement of an additional absorption model to address the soft excess <cit.> indicated the presence of a local obscuration in the system. A decrease in the bolometric luminosity of the source, caused by the local obscuration, was also detected. GRS 1915+105 is, therefore, perceived to have entered a new accretion state called the `obscured state'. Obscuration, although a commonly observed phenomenon in Active Galactic Nuclei ( and the references therein), is rarely observed in LMXBs. The models developed to explain the cause of obscuration in X-ray binaries include disc flaring <cit.>, obscuring winds caused by stellar activity on the secondary star <cit.>, and a slim disc in the close vicinity of the compact object <cit.>. However, a recent work on GRS 1915+105 detected three ionization zones layered at a distance of ∼10^11 cm around the outer disc. That study suggested that the outer disc expands vertically and acts as the obscuring medium. Meanwhile, another study estimated the wind launch radius (r < ∼10^9 cm), the wind velocity (350 km s^-1), and the magnetic field strength required to drive the winds away from the compact object. Those results revealed a wind that failed to escape the system, eventually enshrouding the compact object and causing the obscuration.
Over the period of obscuration, the source also displayed several re-brightenings <cit.>, either in the form of a quick flare or a prolonged re-brightening. A few quick flares were reported to be sequels to radio flares <cit.>, whereas during the prolonged re-brightenings the ALMA observations (15.5 GHz) of the source showed a decrease in radio activity (). Quasi-simultaneous radio/X-ray flares on short timescales (∼1400 s) were also observed in Cyg X–1 <cit.>, where the X-ray emission is hypothesised to originate at the base of the jet. Prolonged re-brightenings (also called re-flares, mini-outbursts, or failed outbursts) have been observed in many LMXBs. The nature of the mini-outbursts varies from source to source, with a few sources exhibiting only one spectral state throughout the mini-outburst (e.g., MAXI J1659–152 <cit.>, IGR J17379–3747 <cit.>, XTE J1650–500 <cit.>), while a few others exhibited different spectral states during the mini-outburst (MAXI J1535–571 <cit.> and GRS 1739–278 <cit.>). Irrespective of the nature of the outbursts, the cause and onset of the mini-outbursts are not clearly understood. Augmented mass transfer due to irradiation of the companion <cit.> is one of the models commonly invoked to explain the cause of a mini-outburst.
GRS 1915+105 has been extensively studied throughout its 26-year-long outburst. However, only a few attempts, using scattered observations of the source, have been made to understand its characteristics after the source descended into the low-luminosity `obscured' phase. In this manuscript, we perform, for the first time, an in-depth and cohesive analysis of the spectral and timing properties of the source during the period March 2019 to November 2021, using observations from AstroSat, NICER and NuSTAR. Through our results, we describe the attributes of the obscuration in the system. The observations also reveal multiple re-brightening phases, during which the source displays its characteristic variability classes and transitions between classes during the prolonged re-brightenings. We therefore characterize the source properties and spectral state transitions observed during both the re-brightening phases and the quiet phase.
This paper is structured as follows: <ref> briefly describes the data reduction procedures for all the observations obtained from SXT & LAXPC onboard AstroSat, NICER and NuSTAR. In <ref>, we explain the modeling techniques and the procedure of spectral and timing analysis. In <ref>, we present the results obtained through our analysis and in <ref>, we discuss the overall behavioural pattern of the source. Finally, in <ref>, we conclude with a summary of our results.
§ OBSERVATIONS AND DATA REDUCTION
We use the data obtained from AstroSat, NICER and NuSTAR from March 2019 to November 2021 to perform a coordinated and wide-band study of the spectral and timing properties of the source. Table <ref> lists the AstroSat observations of the source together with the simultaneous NICER – NuSTAR observations available during these Epochs. All of them are also indicated in Figure <ref> with vertical dashed lines. In addition, Table <ref> gives the log of further NICER observations for most of the re-brightening phases observed between March 2019 – November 2021. Below, we briefly describe the reduction procedures for the data obtained from all three instruments.
§.§ AstroSat
AstroSat <cit.>, India's first dedicated astronomy mission, observes celestial sources simultaneously over a broad energy band, ranging from the near-UV to hard X-rays. In our work, we use the observations made by two instruments on board AstroSat: the Soft X-ray Telescope (SXT) <cit.>, covering the energy band 0.3 – 8 keV, and the Large Area X-ray Proportional Counter (LAXPC) <cit.>, operating in the 3 – 80 keV energy band. AstroSat made five observations of the source during our study period. Level-2 SXT data and Level-1 LAXPC20 data are obtained from the ISSDC data dissemination archive[<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>]. To perform simultaneous analysis, we choose one segment each from SXT and LAXPC where the observations made by the two instruments occur at almost the same time, with exposure times ≥ 1 ks for both instruments. The SXT pipeline is used for Level 2 data analysis. The light curve and the spectral files are further extracted using the . The background spectrum and the response files for the SXT data are distributed by TIFR-POC[<https://www.tifr.res.in/ astrosat_sxt/sxtpipeline.html>]. The ARF file for each observation is generated using the tool. The LAXPC data processing, from Level 1 to Level 2, is carried out using the software . The background spectrum for each LAXPC spectrum is generated using the code , while we use the pre-computed response files (version v1.0) provided by the LAXPC team[<https://www.tifr.res.in/ astrosat_laxpc/LaxpcSoft.html>]. A detailed description of the standard reduction and extraction procedures is provided in (see also ). Additionally, a circular region of radius 12^' is used while extracting the SXT data (see Figure <ref>). We use a combination of the top-layer and all events during the extraction of the LAXPC data. The energy spectra thus obtained are grouped to 25 counts per bin using the , and a systematic error of 3% <cit.> is additionally included to account for the uncertainty in the spectral response.
§.§ NuSTAR
The Nuclear Spectroscopic Telescope Array (NuSTAR) is sensitive to X-rays in the energy range 3 – 78 keV <cit.>. In this paper, we consider all 11 observations of the source made by the FPMA and FPMB modules onboard NuSTAR during the period 2019 – 2021. These data were obtained from the HEASARC database[<https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl>]. The NuSTAR Data Analysis Software (NuSTARDAS) and CALDB v20191219 are used to generate the cleaned event files <cit.>. A circular region of radius 60^'' is used to extract the source events. Similarly, a 60^''-radius region free of source photons is chosen for the background extraction (see Figure <ref>). The cleaned events and the region files thus obtained are used to extract the source products using the module . The extracted spectra were uniformly grouped to 25 counts per energy bin and modeled in the 3 – 60 keV energy range.
§.§ NICER
The Neutron star Interior Composition Explorer (NICER) <cit.> has persistently observed GRS 1915+105 during the obscured phase using its primary scientific instrument, the X-ray Timing Instrument (XTI), which covers the energy band 0.2 – 12 keV. In our work, we study 23 NICER-XTI observations, some of which lack a simultaneous high-energy observation. We also performed a thorough spectral analysis of 40 additional NICER observations made between MJD 58610–58650, 59050–59150 and 59375–59500. However, we tabulate only 23 observations, as they sufficiently describe the spectral and timing evolution of the source parameters. These observations were obtained from the HEASARC database[<https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl>]. The data are processed using the latest version of the NICER software (NICERDAS ver 10). The Level 2 analysis is performed using the task. The Level 3 data analysis is performed using the new extraction task (recently made available with released in November 2022). This task concurrently generates the spectrum, background, ancillary and response files from the un-barycentered merged event file. Additionally, we set the background model type to 3C50 while running the script. All the spectra thus obtained were uniformly grouped to 25 counts per bin.
§ ANALYSIS & MODELING
§.§ Timing Analysis
Light curves for the AstroSat-LAXPC and NICER observations were initially generated with a time bin of 10 s in the energy ranges 3 – 60 keV and 0.5 – 12 keV, respectively. In addition, the NICER light curves corresponding to each re-brightening phase observation were separately extracted in three individual energy ranges, 0.3 – 3 keV, 3 – 6 keV and 6 – 12 keV, with a time bin of 10 s, in order to plot the Color-Color Diagram (CCD) (see Figures <ref>, <ref> and <ref>). The CCD plots HR1 (Hardness Ratio 1) on the X-axis and HR2 (Hardness Ratio 2) on the Y-axis, where HR1 is the ratio of the count rate in 3 – 6 keV to that in 0.3 – 3 keV and HR2 is the ratio of the count rate in 6 – 12 keV to that in 0.3 – 3 keV.
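As an illustration of this step, the sketch below computes HR1 and HR2 from band-resolved light curves; it assumes the 10-s binned count rates in the three bands are already available as arrays (the numbers shown are placeholders, not measured values).

```python
import numpy as np

def hardness_ratios(rate_soft, rate_mid, rate_hard):
    """CCD hardness ratios from band-resolved count rates (counts/s).

    rate_soft : 0.3-3 keV, rate_mid : 3-6 keV, rate_hard : 6-12 keV
    HR1 = (3-6 keV)/(0.3-3 keV), HR2 = (6-12 keV)/(0.3-3 keV)
    """
    rate_soft = np.asarray(rate_soft, dtype=float)
    hr1 = np.asarray(rate_mid, dtype=float) / rate_soft
    hr2 = np.asarray(rate_hard, dtype=float) / rate_soft
    return hr1, hr2

# Placeholder 10-s binned count rates for three time bins
soft = np.array([120.0, 115.0, 130.0])
mid = np.array([40.0, 42.0, 55.0])
hard = np.array([12.0, 11.0, 20.0])
hr1, hr2 = hardness_ratios(soft, mid, hard)
print(hr1, hr2)   # points to be placed on the CCD
```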
The Power Density Spectrum (PDS) was generated using light curves with a time resolution of 10 ms, obtained from the AstroSat-LAXPC, NICER and NuSTAR observations. Although a Nyquist frequency of 50 Hz was obtained, the PDS was dominated by noise above 1 Hz. The data points were binned into 32768 bins, resulting in a lowest frequency of 0.003 Hz (1/(32768×10 ms)). All the PDS are rms-normalized <cit.> with a geometrical re-bin factor of 1.05. The noise associated with the PDS is fitted using a power law. The narrow features in the PDS, known as Quasi-periodic Oscillations (QPOs), are described using the Lorentzian profile,
L(f) = (K Q f_0/π) / [f_0^2 + Q^2(f-f_0)^2],
where, f_0 is the frequency of the QPO, K is the normalization that defines the total strength of the oscillation and Q (= f_0/Δ, where Δ is the half width at half maximum) is the quality factor that denotes the coherence of the variation.
We, therefore, use a Lorentzian model to fit the QPO feature ( and the references therein). Details of the method used to obtain the model-fitted parameters are mentioned in . Narrow features with a Q-factor ≥ 2 <cit.> and significance (σ) > 3 <cit.> are considered QPOs. The total rms variability for the frequency range 0.003 – 1 Hz is estimated using the rectangle-rule integration method, where rms (in %) = √(Σ P(ν)×δν)×100, P(ν) is in units of rms^2 Hz^-1 and δν is the interval width in Hz (see and the references therein).
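For reference, a minimal sketch of the Lorentzian QPO fit and the rectangle-rule rms estimate described above; the PDS arrays and starting values are synthetic placeholders, and the actual PDS construction from the event data is assumed to have been done elsewhere.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, K, Q, f0):
    """L(f) = (K Q f0 / pi) / (f0^2 + Q^2 (f - f0)^2)."""
    return (K * Q * f0 / np.pi) / (f0**2 + Q**2 * (f - f0)**2)

def total_rms(power, dnu):
    """Rectangle-rule rms (per cent) from an rms-normalized PDS, P(nu) in rms^2/Hz."""
    return np.sqrt(np.sum(power * dnu)) * 100.0

# Synthetic rms-normalized PDS with a ~23 mHz QPO (placeholder, for illustration)
freq = np.linspace(0.003, 1.0, 400)
power = lorentzian(freq, K=0.01, Q=6.0, f0=0.023)
dnu = np.gradient(freq)

popt, _ = curve_fit(lorentzian, freq, power, p0=[0.02, 4.0, 0.02])
K_fit, Q_fit, f0_fit = popt
print("nu_QPO = %.1f mHz, Q = %.1f, total rms = %.1f%%"
      % (1e3 * f0_fit, Q_fit, total_rms(power, dnu)))
```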
§.§ Spectral Analysis and Modeling
The spectral analyses for all the observations were carried out using and . We perform the broadband spectral analysis and modeling across 0.7 – 60 keV using simultaneous data from SXT (0.7 – 7 keV) + LAXPC (3 – 60 keV) and NICER (0.7 – 12 keV) + NuSTAR (3 – 60 keV). The difference in flux normalization between two instruments in a simultaneous fit is accounted for by the multiplicative constant model. Absorption due to the interstellar medium (ISM) of the Galaxy has been modeled using the TBabs model, with all the abundance parameters set to the defaults mentioned in .
The combined spectra were initially fitted using the multi-temperature disc blackbody (diskbb) <cit.> model and the Comptonisation (nthComp) <cit.> model individually. The spectra corresponding to Epochs 6 – 9, 11, 12, 17 and 18 showed acceptable fits using the nthComp model alone, whereas Epochs 1 – 5, 10 and 13 – 16 required the combination of both models, diskbb+nthComp, for the continuum. We also use a partial covering absorption model, TBpcf, along with the continuum for all the observations, considering the recent evolution of the source into the `obscured' phase (see ). TBpcf addresses the edge at ∼7 keV, which is interpreted as the neutral iron K photoelectric edge. This spectral feature, conventionally seen in AGNs, is described by partial covering absorption by winds/gas clouds <cit.>. TBpcf quantifies the equivalent hydrogen column density local to the source (N_ H_ 1) and the covering fraction (PCF) of the obscuration (see also ). Henceforth, the model combinations TBabs(TBpcf(diskbb+nthComp)) and TBabs(TBpcf(nthComp)) will be referred to as Model-1 and Model-2, respectively. Along with the above-mentioned model combinations, a few additional models were required to address the absorption and emission features between 6 – 9 keV. For example, the gaussian model was used to address the emission line between 6 – 8 keV. Epochs 13 and 17 showed a narrow absorption feature in the same energy range, which was addressed by the gabs <cit.> model. A broad absorption feature was also present in a few Epochs (Epochs 6, 7, 8, 11, 12 and 13) between 7 – 9 keV, for which the smedge <cit.> model was used. The additional Si and Au edges in the SXT spectra were addressed using the command (see and the references therein), with the slope set to 1. We use the edge model to address the instrumental Xenon edge (∼32 keV) observed in the LAXPC spectrum <cit.>.
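As a concrete illustration of this model set-up, the following PyXspec sketch loads a single grouped spectrum and fits a Model-2-like combination; the file name is a placeholder, only one data group is loaded for brevity (a simultaneous fit would load each instrument as its own data group with a constant factor), and the instrument-specific edges, gain corrections and additional line components described above are omitted.

```python
import xspec

xspec.AllData.clear()
spec = xspec.Spectrum("nicer_grp.pha")     # grouped spectrum; placeholder file name
spec.ignore("**-0.7,12.0-**")              # restrict the fit to 0.7-12 keV

# Model-2 plus an Fe-K Gaussian:
# Galactic absorption * partial-covering absorber * (gaussian + Comptonisation)
model = xspec.Model("TBabs*TBpcf*(gaussian + nthComp)")

xspec.Fit.statMethod = "chi"
xspec.Fit.query = "yes"
xspec.Fit.perform()
print("chi^2 / dof = %.1f / %d" % (xspec.Fit.statistic, xspec.Fit.dof))
```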
The NICER spectra (0.7 – 12 keV) corresponding to the re-brightening phases (Table <ref>) were initially fitted using Model-1, considering the requirement of disc and Comptonization model components to produce the best fit for the broadband observations. Nonetheless, all the NICER observations required only a single continuum component, except Obs. 1 corresponding to RB_ I, which showed an improved fit with the combination of diskbb and nthComp model components (Model-1). However, for this observation, we had to freeze both disc parameters to values close to those obtained from the broadband best fits. We then tried to fit the remaining NICER spectra using the diskbb model component along with TBabs and TBpcf. While a few of the observations produced good fits, the rest yielded non-physical disc temperatures. Following <cit.>, we also attempted to fit the NICER spectra using either the powerlaw model or the bbody model. Although these models produced satisfactory fits for a few observations, they proved non-viable for most. Additionally, we tried simpl*diskbb; unfortunately, that combination also did not work for most of the observations, and for the ones it did, it returned an extremely steep photon index (Γ > 4) with large errors. At face value, it can be broadly inferred that the 0.7 – 12 keV NICER spectra of the source during the obscured phase are best described by a single Comptonized component and do not require an additional disc component. We infer that this could be because the disc (seed-photon) temperature in the nthComp model component adequately describes the moderately faint disc flux, without an explicit disc component being needed. We, therefore, proceeded to fit all the NICER observations using Model-2, which produced acceptable fits for all of them. All the NICER spectra also showed additional absorption and emission features, which are addressed using the gaussian, gabs and smedge models. Errors for the parameters are estimated using the error command in . All the error values are quoted at the 90% confidence level. However, the error values of certain parameters were too small (< 5%) and are therefore tagged with a dagger symbol in the tables (Tables <ref>, <ref>, <ref> and <ref>) to indicate that the errors are negligible.
We computed the unabsorbed bolometric luminosity (L_ bol) in the 0.3 – 100 keV energy range using the relation L_ bol = 4πD^2 F, where D is the distance of the source (D = 8.2 kpc <cit.>; see also and the references therein) and F is the unabsorbed intrinsic flux (0.3 – 100 keV), which is estimated by incorporating the cflux model along the continuum. The partial covering absorption model (TBpcf) is excluded while integrating the cflux model along with the continuum components; the flux value thus obtained is therefore not affected by the obscuration. For example, we used the resultant model combination TBabs(TBpcf(cflux(diskbb+gauss+nthComp))) to estimate the unabsorbed intrinsic flux during Epoch 10.
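As a quick arithmetic check of this luminosity scale, a sketch with an assumed (placeholder) unabsorbed flux; the distance and black-hole mass are the values quoted in the text, and the Eddington luminosity assumes the standard hydrogen-accretion value of 1.26×10^38 erg s^-1 per solar mass.

```python
import math

D_KPC = 8.2              # source distance adopted in the text (kpc)
KPC_CM = 3.086e21        # centimetres per kiloparsec
M_BH = 12.4              # black-hole mass in solar masses
L_EDD = 1.26e38 * M_BH   # Eddington luminosity (erg/s)

def l_bol(flux_unabs):
    """Bolometric luminosity (erg/s) from an unabsorbed 0.3-100 keV flux (erg/cm^2/s)."""
    d_cm = D_KPC * KPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_unabs

flux = 2.0e-10                       # placeholder cflux measurement (erg/cm^2/s)
L = l_bol(flux)
print("L_bol = %.2e erg/s = %.4f L_Edd" % (L, L / L_EDD))
```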
In the subsequent section, we present the results obtained from our analysis.
§ RESULTS
§.§ Prolonged Low Luminosity Phase
In Figure <ref>, we show the flux variation of GRS 1915+105, starting from March 2019 to November 2021, as observed by multiple instruments both in the X-rays and the radio bands. The top four panels display the source flux as observed by MAXI (2 – 20 keV), BAT (15 – 50 keV), NICER (0.3 – 10 keV) and RATAN-600 radio telescope (11.2 GHz) respectively, while the bottom-most panel shows the Hardness Ratio (HR = (6 – 20 keV) / (2 – 6 keV)) of the source obtained from MAXI light curve.
GRS 1915+105 is observed in the decay phase at the beginning of the observation period, where the MAXI flux decreased from ∼1 ph cm^-2 s^-1 around MJD 58250 to ∼0.4 ph cm^-2 s^-1 around MJD 58400. This pattern is also reflected in the BAT light curve, where the flux decreases only marginally, from 0.3 cts cm^-2 s^-1 to 0.15 cts cm^-2 s^-1. There is also a gradual increase in the HR from 0.4 to 0.7 during this period. The source exhibited a consistently low flux from MJD 58600 onwards. However, this low-luminosity phase is intermittently perturbed by strong and sudden re-brightenings in X-rays. We refer to these sudden re-brightenings as the re-brightening (RB) phases. The six re-brightening phases, referred to as RB_ I, RB_ II, RB_ III, RB_ IV, RB_ V, and RB_ VI, are shown in gold, grey, cyan, olive-green, blue, and pink colour-shaded regions respectively in Figure <ref>.
The MAXI light curve of GRS 1915+105 showed a sequence of oscillations, with the flux varying between 0.02 and 0.4 ph cm^-2 s^-1, during re-brightening phase I (RB_ I) between MJD 58600 and 58633.
RB_ II (MJD 58799) and RB_ IV (MJD 58994) represent quick flares lasting for ∼900 and 2500 sec, respectively. RB_ III (MJD 58891) is also recognized as a quick flare, but due to the lack of observations of the source during RB_ III, its exact duration could not be determined. The source exhibited a sudden increase in the MAXI flux from ∼0.02 to 0.5 ph cm^-2 s^-1 during these quick flares. The available radio data also reveal that RB_ I, RB_ II and RB_ III were precursory to the radio flares (see panel d of Figure <ref>). In addition to the quick flares, GRS 1915+105 also exhibited two relatively prolonged re-brightenings (RB_ V and RB_ VI), which lasted for ∼100 days and ∼150 days respectively. The source showed a gradual rise and decline in flux, analogous to a mini-outburst <cit.>, with the average NICER flux varying from ∼20 cts s^-1 at the beginning to ∼120 cts s^-1 at the peak of RB_ V, and from ∼30 cts s^-1 to ∼250 cts s^-1 from the beginning to the peak of RB_ VI (see Table <ref>). The drop in HR during both RB_ V and RB_ VI indicates a slow evolution of the source towards softer states during these two re-brightenings. All these re-brightening phases are extensively studied using the NICER observations. RB_ I, RB_ V and RB_ VI have also been observed in the broad energy band by AstroSat and NuSTAR, whereas RB_ II and RB_ IV, being quick and sudden flares, could not be monitored at broadband energies. RB_ III is not studied in this paper due to the lack of observations.
§.§ The Re-brightening Phases
§.§.§ Re-brightening phase I (RB_ I)
The MAXI light curve in the top-panel of Figure <ref>a shows the flux variation of the source during RB_ I, starting from MJD 58600 to MJD 58633. The 5 color-shaded regions in the top-panel represent the 5 NICER observations that we considered to study RB_ I, where each observation corresponds to a different phase of the re-brightening. Obs. 1 corresponds to the low luminosity phase (blue color-shaded region in the top-panel of Figure <ref>a). Obs. 2, 3, 4 and 5 correspond to the rise (shaded in brown), flare (shaded in green), decay (shaded in orange) and low phase after the decay (shaded in cyan) respectively. The NICER light curves corresponding to all the 5 observations are shown in panels b, d, f, h & j respectively, while the corresponding CCDs are shown in the adjacent panels (c, e, g, i & k respectively).
The above-mentioned colour scheme is followed throughout this paper while plotting the NICER data points corresponding to each phase. The NICER flux varied from 7 cts s^-1 in the low phase to 1200 cts s^-1 at the peak of the re-brightening. The variation in the average count rate during each observation is shown in Table <ref>. In the CCD, during Obs. 1, 2, 4 and 5, HR1 and HR2 varied within the limits 0.1 ≤ HR1 ≤ 5 and 0.2 ≤ HR2 ≤ 2. However, during Obs. 3 (flare), the upper ends of the ranges of HR1 and HR2 increased significantly, to 9 and 6 respectively. This model-independent analysis does not explicitly reveal any variability classes in the source during RB_ I.
In the top panel of Figure <ref>b, we show an overplot of the PDS obtained from Obs. 1 and 3, plotted in blue and green points respectively. The PDS during Obs. 2, 4 and 5 is dominated by broadband noise beyond 0.05 Hz, similar to the PDS corresponding to Obs. 1 shown in the figure. The PDS obtained from Obs. 3 showed a power-law distribution. The total rms varied between 9_-2^+1% and 44_-4^+3% in the 0.003 – 1 Hz frequency range. The bottom panel of Figure <ref>b shows an overplot of the spectra obtained from all 5 observations, along with the residuals from the best-fit model. The spectral analysis for all 5 observations is carried out with Model-2. The photon index (Γ) varies between 1.29_-0.04^+0.02 and 1.47_-0.02^+0.04 throughout RB_ I. The N_ H value varied between 5.0_-0.2^+0.1 and 5.5_-0.3^+0.2× 10^22 atoms cm^-2. Along with the interstellar absorption (N_ H), all 5 observations also showed effects of local obscuration. We observed an additional column density (N_ H_ 1) of 82_-4^+4×10^22 atoms cm^-2 initially, which decreased to 18_-1^+2×10^22 atoms cm^-2 during the flare. N_ H_ 1 further increased to 108_-5^+8×10^22 atoms cm^-2 at the end of RB_ I. The PCF, however, varied randomly between 0.56 and 0.77. The best-fitted timing and spectral parameters are presented in Tables <ref> and <ref> respectively.
§.§.§ Re-brightening phase II and IV (RB_ II & RB_ IV)
RB_ II and RB_ IV (grey and olive-green shaded regions in Figure <ref>) are two quick flares that spanned ∼1000 sec and 2700 sec respectively. The MAXI data corresponding to RB_ II are partly missing, while the NICER light curve showed a steady flux with an average value of 38 cts s^-1. However, the radio observations (panel d of Figure <ref>) show a simultaneous radio flare with the flux varying from ∼40 to 400 mJy. During RB_ IV, the MAXI flux is suddenly seen increasing from 0.04 to 0.5 ph cm^-2 s^-1. This flare pattern is also observed in the BAT and the NICER light curves. The hardness during both RB_ II and RB_ IV is relatively high, with HR values > 1.5 (panel e of Figure <ref>).
The power spectra obtained from RB_ II and RB_ IV are characterized by a power-law distribution. No QPO features were identified, and the PDS was dominated by broadband noise above 0.1 Hz for both re-brightening phases. Model-2 provides the best fit for the spectra of both RB_ II and RB_ IV. The source showed Γ values of 1.20_-0.03^+0.02 and 1.16_-0.06^+0.06 for RB_ II and RB_ IV respectively. A constant Galactic absorption with N_ H ∼ 5.5×10^22 atoms cm^-2 was observed during both observations. The additional column density varied drastically between RB_ II and RB_ IV, with N_ H_ 1∼16^+3_-2× 10^22 atoms cm^-2 and a PCF of 0.63_-0.02^+0.03 for RB_ II, and N_ H_ 1∼78^+4_-7× 10^22 atoms cm^-2 and a PCF of 0.71_-0.02^+0.02 for RB_ IV. The results of the fits are presented in Tables <ref> and <ref>.
§.§.§ Re-brightening phase V (RB_ V)
The prolonged re-brightening phase RB_ V spanned MJD 59050 to 59150, with the MAXI flux varying from ∼0.02 ph cm^-2 s^-1 at the beginning to ∼0.6 ph cm^-2 s^-1 at the peak of RB_ V (top panel of Figure <ref>a). The NICER light curves and the corresponding CCDs pertaining to the low (Obs. 1), rise (Obs. 2), peak (Obs. 3) and decay (Obs. 4) phases of RB_ V are shown in the bottom panels (b & c, d & e, f & g and h & i, respectively). At the beginning of RB_ V, the source displayed low flux (∼ 15 cts s^-1) with no structured variability in the light curve. Recurring burst profiles with a periodicity of ∼ 50 sec were observed in the light curve during the rise, with each flare profile showing a varying peak amplitude. These flares did not resemble the typical heart-beat profile (ρ class) as classified by . We therefore classify the source into the ρ^' variability class (defined as a variant of the ρ class; see ) during Obs. 2. At the peak of RB_ V, the source exhibited large-amplitude variability, with the flux varying between 40 cts s^-1 at the dip and 220 cts s^-1 at the peak of the variability. The source variability during Obs. 3 resembled the λ variability class (). During the decay, the source is seen exhibiting the typical ρ profile with a periodicity of ∼160 sec. Obs. 1 exhibits high HR values in the CCD, with HR2 going as high as 0.75 (panel c in Figure <ref>a), while Obs. 2, 3 and 4 show relatively lower HR values, HR2 < 0.4 (see panels e, g and i in Figure <ref>a). The relatively higher HR values and the absence of any variability structure in the light curve during Obs. 1 lead to the inference that the source exhibits the χ variability class during Obs. 1.
The top panel of Figure <ref>b shows an overplot of the PDS obtained from Obs. 2, 3 and 4, plotted in brown, green and orange respectively. The PDSs show QPOs at 26 mHz and 6 mHz during Obs. 2 and 4, corresponding to the periodicities of the ρ^' and ρ profiles in the light curves. Obs. 3 showed a power-law noise distribution in the PDS, while the PDS during Obs. 1 was dominated by noise beyond 0.1 Hz. The total rms of the source varied between 8.2_-0.8^+0.9% and 40.2_-2.1^+4.9%.
The NICER spectra corresponding to all 4 observations are well described by Model-2. Γ increases from 1.34_-0.04^+0.03 to 2.47_-0.06^+0.03 as the source proceeds from the low phase (Obs. 1) to the peak phase (Obs. 3), and decreases to 1.91_-0.01^+0.03 during Obs. 4. The source showed an almost constant Galactic hydrogen column density of N_ H ∼5.0×10^22 atoms cm^-2. A high N_ H_ 1 of 150^+12_-10×10^22 atoms cm^-2 is observed during the low phase (Obs. 1), while N_ H_ 1 drastically drops to ∼4_-0.4^+0.3 – 8_-0.1^+0.1×10^22 atoms cm^-2 during Obs. 2, 3 and 4. The PCF varied between 0.60 and 0.79_-0.03^+0.03 without any clear pattern. In the bottom panel of Figure <ref>b, we present an overplot of the fitted NICER spectra for all 4 observations along with the residuals obtained after fitting with Model-2. All the model-fitted parameters are summarized in Tables <ref> and <ref>.
§.§.§ Re-brightening phase VI (RB_ VI)
RB_ VI, observed from MJD 59350 to 59500, is also a prolonged re-brightening phase like RB_ V. However, RB_ VI portrayed a fast-rise, slow-decay light-curve profile, in contrast to RB_ V, which showed a slow-rise, fast-decay profile. The flux evolution of the source during RB_ VI is shown in the MAXI light curve (top panel of Figure <ref>a). The light curves obtained from all 5 observations did not show any periodic variability structure (see panels b, d, f, h and j in Figure <ref>a). The CCDs corresponding to Obs. 1 – 4 showed moderate HR values, with HR1 < 3 and HR2 < 0.8 (panels c, e, g and i in Figure <ref>a). However, Obs. 5 shows an increased hardness, with the upper ends of the ranges of HR1 and HR2 extending up to 9 and 6 respectively (panel k in Figure <ref>a). The average count rate corresponding to each observation is given in Table <ref>.
The PDS obtained from Obs. 2, 3, and 4 (top panel of Figure <ref>b) showed QPOs at 170, 180 and 200 mHz respectively, with the Q-factor evolving from 2.70_-0.01^+0.02 (Obs. 2) to 4.22_-0.03^+0.03 (Obs. 4). The total rms varied between 15_-1^+1% and 20_-2^+3% during Obs. 2, 3 and 4. The PDS corresponding to Obs. 1 and 5 exhibited high fractional variability (> 26%) and showed no indication of QPOs. The parameters are provided in Table <ref>.
We present an overplot of the modeled spectra corresponding to the 5 NICER observations, along with the residuals, in the bottom panel of Figure <ref>b. All 5 spectra were well fitted using Model-2. Initially, the source showed a Γ of 1.36_-0.02^+0.02 during Obs. 1. The spectra showed a steeper Γ value of 2.04_-0.04^+0.02 during Obs. 3, which again decreased to 1.37_-0.4^+0.04 during Obs. 5. A constant kT_ e of ∼ 1.9 keV was observed throughout RB_ VI. The N_ H_ 1 value was at its minimum during the rise, peak and decay phases, varying between 4.4_-0.4^+0.6 and 8.9_-0.6^+0.6×10^22 atoms cm^-2. An increased N_ H_ 1 was observed during Obs. 1 (N_ H_ 1∼98_-10^+5×10^22 atoms cm^-2) and Obs. 5 (N_ H_ 1∼44_-4^+6×10^22 atoms cm^-2). With reference to , and based on the CCD, PDS and spectral characteristics, the source could possibly belong to the hard state (or the χ class) at the beginning and the end of RB_ VI, while it exhibited the δ variability class during the rise, peak and decay phases. The best-fitted model parameters are given in Tables <ref> and <ref>.
§.§ Wide-band Observational Analysis
The intermittent re-brightenings exhibited by GRS 1915+105 during the low-luminosity period were clearly observed at soft energies (0.7 – 12 keV) by NICER. The spectral and timing properties of the source during each re-brightening have already been discussed in <ref>.
In this section, we constrain the broadband spectral and timing properties of the source by analyzing the simultaneous observations by AstroSat, NICER and NuSTAR (18 Epochs in Table <ref>). The 18 wide-band observations are divided into two categories based on the X-ray activity of the source: the Quiet Phase, when the source exhibits a steady and low X-ray flux (Epochs 6 – 9, 11, 12, 17 and 18 of Figure <ref>), and the Active Phase, when the source exhibits X-ray activity producing an enhanced flux (Epochs 1 – 5, 10 and 13 – 16 of Figure <ref>). The source exhibited a high X-ray flux during Epoch 5 (∼MJD 58649); however, due to the lack of NICER observations between MJD 58636 and MJD 58656, we classify Epoch 5 under the Active Phase. The PDS and energy spectra corresponding to all the wide-band observations are modeled and analyzed as mentioned in <ref> and <ref>. The spectral and timing properties are presented in the subsequent sections.
§.§.§ Quiet Phase (QP)
The light curves corresponding to Epochs 6 – 9, 11, 12, 17 and 18 showed no structured variability.
The average MAXI flux was ∼0.15 ph cm^-2 s^-1 (panel a of Figure <ref>). The HR values are relatively high (HR > 1, panel e of Figure <ref>). The average count rate for each Epoch is given in Table <ref>. The power spectra obtained from all these Epochs are consistent with a power-law model, and none of them show any indication of QPO features. The total rms varied between 7.1_-0.9^+0.6 and 22.2_-1.2^+1.4% in the 0.003 – 1 Hz frequency range. The timing properties corresponding to all Epochs in the QP are summarized in Table <ref>.
In Figure <ref>, we show the spectrum corresponding to the QP (plotted in red), obtained from Epoch 18. A good fit for all the energy spectra is obtained using Model-2. Γ varied between 1.13_-0.06^+0.06 and 1.73_-0.02^+0.02, while kT_ e ranged from 6.3_-0.2^+0.1 to 18.2_-3.7^+4.8 keV. All the Epochs showed obscuration, with N_ H_ 1 varying strongly between 21_-1^+1 and 545_-72^+85× 10^22 atoms cm^-2. The bolometric luminosities (L_ bol) during these Epochs varied between 0.001 L_ Edd and 0.004 L_ Edd. The fit values obtained for each of the Epochs are presented in Table <ref>.
§.§.§ Active Phase (AP)
The X-ray light curves obtained from Epochs 1 – 5, 10 and 13 - 16 showed high X-ray activity, when compared to the Epochs corresponding to the Quiet Phase. The MAXI flux during these Epochs varied from 0.2 – 0.6 ph cm^-2 s^-1 (panel a of Figure <ref>). Each of these Epochs marks an event exhibited by the source during the three-year observation period.
Epoch 1 corresponds to the decay phase of the major outburst, with a flux of 0.4 ph cm^-2 s^-1. During Epochs 2, 3 and 4, the source exhibited RB_ I and the flux oscillated between 0.1 and 0.4 ph cm^-2 s^-1. Epoch 10 corresponds to the rising phase of RB_ V, where the source exhibited the ρ^' variability class (see <ref>). The LAXPC light curve and CCD corresponding to Epoch 10 are shown in Figure <ref> (panels a and b respectively), with the flux during each flare varying from ∼150 to 250 cts s^-1. Here, HR1 in the CCD is the ratio of count rates in 6 – 15 keV and 3 – 6 keV, while HR2 is the ratio of counts in 15 – 60 keV and 3 – 6 keV. Epochs 13 – 16 sample the source activity during the rise and decay phases of RB_ VI, with the MAXI flux at ∼0.5 ph cm^-2 s^-1. The re-brightening phases (Epochs 10, 13 – 16) showed lower HR values (HR < 0.4, panel e of Figure <ref>).
The PDS corresponding to Epoch 1 shows a QPO at 2.08 Hz with an rms_ QPO of ∼12%. The power spectrum has flat-top noise with a total rms of ∼21.5%. Epochs 2, 3 and 4, pertaining to RB_ I, show high variability with rms_ Tot > 22% and do not show any QPO signatures. Epochs 10 and 13 – 16, pertaining to RB_ V and RB_ VI respectively, show QPO signatures with ν_ QPO varying from 20 to 200 mHz and rms_ QPO varying between 8.7_-0.6^+0.6% and 20.9_-1.0^+1.0%. The total rms variability was relatively lower, with values varying from 12.1_-0.3^+0.3% to 22.0_-1.2^+1.0%. Panel c of Figure <ref> shows the PDS corresponding to Epoch 10, with a QPO detected at 23 mHz having a Q-factor of 6.3_-0.8^+0.8 and an rms_ QPO of 8.7_-0.6^+0.5%. The details of the timing properties obtained from the fits are mentioned in Table <ref>.
The broadband source spectra obtained from all the Epochs in the AP are modeled using Model-1. The spectrum from Epoch 13 (plotted in black in Figure <ref>) demonstrates the typical broadband spectrum of the AP. During Epochs 1 – 5, the source exhibited a Low/Hard spectral nature, with Γ and kT_ in varying between 1.13_-0.01^+0.01 – 1.73_-0.02^+0.02 and 0.25_-0.02^+0.01 – 1.32_-0.04^+0.05 keV respectively. kT_ e varied from 8.2_-0.2^+0.2 to 16.6_-1.4^+1.7 keV. During Epochs 10 and 13 – 16, we observe the source exhibiting softer spectral states, with Γ varying from 1.8_-0.1^+0.1 to 2.8_-0.1^+0.1. kT_ in and kT_ e varied from 0.97_-0.01^+0.01 to 1.59_-0.02^+0.01 keV and from 3.1_-0.1^+0.1 to 12.9_-0.5^+0.4 keV, respectively. The best-fitted model parameters are tabulated in Table <ref>.
§.§ Absorption and Emission Features
The characteristic features in the X-ray spectrum include prominent emission and absorption lines superposed on the spectral continuum. The emission lines are essentially fluorescent line photons from the disc, originating from the illumination of the disc by hard X-ray photons <cit.>. Absorption by plasma outflowing from the accretion disc generates the absorption lines <cit.>. All the observations considered in our work (the wide-band observations, Table <ref>, and the individual NICER observations, Table <ref>) showed prominent Fe absorption and emission line features (see Figure <ref>). We use the gaussian and gabs models to estimate the properties of the emission and absorption lines respectively (as described in <ref>). Broad and narrow emission lines were detected in the energy range 6.4 – 8.3 keV (see Tables <ref> and <ref>). The centroid energies of these lines correspond to the neutral Fe Kα, Fe XXV Kα, Fe XXVI Kα, and Ni XXVIII Kα energies at 6.4 keV, 6.7 keV, 6.97 keV, and 8.10 keV respectively. The strengths of these lines are measured in terms of the Equivalent Width (EW), estimated using the standard definition,
EW = ∫_E_1^E_2 [F_c(E)-F(E)]/F_c(E) dE,
where F_ c(E) is the continuum flux, F(E) is the flux in the line at energy E, and E_1 and E_2 represent the lower and upper energy limits of the observed line (see ). Emission lines were predominantly observed in the QP and in the re-brightening phases RB_ I, RB_ II and RB_ V. The EWs of the emission lines vary from 70 to 990 eV. However, the source also exhibited broad emission lines with EWs between 1020 and 3260 eV. Emission lines with EW ≥ 1 keV are not a commonly observed feature in X-ray binaries; rather, they are considered a classic indicator of Compton-thick (N_ H_ 1≥ 10^24 atoms cm^-2) obscuration, generally seen in AGNs <cit.>. A Compton-thick obscuration suppresses the continuum beneath the neutral line, thus increasing the EW of the Fe Kα line. The absorption lines observed throughout the observation period, in contrast, were narrow, with EWs varying from 120 to 590 eV. These narrow absorption lines were observed during RB_ IV and RB_ VI in the energy range 6.4 – 7 keV. The line properties obtained from the best-fit models are quoted in Tables <ref> and <ref>. Broad emission lines during the hard state and narrow absorption lines during the relatively softer spectral states were formerly observed in GRS 1915+105 <cit.>. It is speculated that the broad emission lines originate when the inner accretion disc is illuminated by hard X-ray photons from the jet/corona, while the narrow absorption lines are due to winds from the accretion disc.
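To make the definition above concrete, the following sketch evaluates the EW integral numerically for a synthetic absorption line on a flat continuum; the line depth, width and continuum level are arbitrary placeholders, not fitted values.

```python
import numpy as np

def equivalent_width(energy, flux, flux_cont):
    """EW = integral of (F_c - F)/F_c dE over the line (positive for absorption)."""
    return np.trapz((flux_cont - flux) / flux_cont, energy)

# Synthetic absorption line at 6.97 keV on a flat continuum (placeholder numbers)
energy = np.linspace(6.5, 7.5, 501)                    # keV
continuum = np.full_like(energy, 1.0)                  # arbitrary continuum level
depth, sigma = 0.6, 0.1                                # fractional depth, width (keV)
line = continuum * (1.0 - depth * np.exp(-0.5 * ((energy - 6.97) / sigma) ** 2))

print("EW = %.0f eV" % (1e3 * equivalent_width(energy, line, continuum)))
```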
The EWs of the absorption lines enable us to estimate the column densities of the Fe XXV and Fe XXVI ions, using the relation,
W_λ = (π e^2/m_e c^2) N_j λ^2 f_ij = 8.85×10^-13 N_j λ^2 f_ij,
where W_λ is the EW of the line, λ is the wavelength in centimetres, and f_ij is the oscillator strength, equal to 0.798 and 0.416 for the Fe XXV and Fe XXVI lines, respectively (with reference to ). The energy of a line (in keV) is converted to wavelength (in cm) using the relation E = hc/λ, where h is Planck's constant (6.626× 10^-34 J s) and c is the speed of light (3×10^10 cm s^-1). The ion column density (N_j) thus obtained helps constrain the physical parameters of the absorbing plasma. Our estimates show the Fe ion column densities to vary between 10^16 and 10^18 atoms cm^-2. These moderate ion column densities suggest a kinetic temperature of the absorbing plasma (kT_ Fe) of ≥ 25 keV <cit.>. With reference to , if we assume the absorbing plasma to be in hydrodynamical equilibrium in the direction vertical to the plane, we can calculate the distance of the absorbing plasma from the centre (r) using the relation suggested in ,
(h/r)^2 (GM m_ H/r) ≃ kT_ th,
where h/r = tan(90^∘-i), with i being the inclination angle (60^∘; ), G is the gravitational constant, M is the mass of the black hole and m_ H is the mass of the hydrogen atom. kT_ th is the thermal temperature and can be estimated using the relation kT_ th = (m_ H/m_ Fe) kT_ Fe, where m_ Fe is the mass of the Fe atom <cit.>. For kT_ Fe≥25 keV, the absorbing plasma is found at a distance r ≤ 2×10^10 cm. The radius of the inner hot absorption zone, from where the winds are launched, was earlier estimated to be r < 10^9 cm.
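For illustration, a sketch of the curve-of-growth estimate above, valid in the optically thin (linear) regime; the equivalent width used here is a placeholder chosen within the observed 120 – 590 eV range, not a measured value.

```python
H_KEV_S = 4.1357e-18          # Planck constant in keV s
C_CM_S = 2.9979e10            # speed of light in cm/s

def ion_column_density(ew_kev, e_line_kev, f_ij):
    """Ion column density N_j (cm^-2) from a line EW via
    W_lambda = 8.85e-13 * N_j * lambda^2 * f_ij (optically thin limit)."""
    lam = H_KEV_S * C_CM_S / e_line_kev      # line wavelength in cm (lambda = hc/E)
    w_lam = lam * ew_kev / e_line_kev        # EW converted from energy to wavelength units
    return w_lam / (8.85e-13 * lam**2 * f_ij)

# Placeholder EW of 120 eV for the Fe XXV K-alpha line at 6.70 keV (f_ij = 0.798)
print("N(Fe XXV) ~ %.1e cm^-2" % ion_column_density(0.120, 6.70, 0.798))
```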
§ DISCUSSION
We performed a comprehensive study of the spectral and timing characteristics of the source during the low-luminosity `obscured' phase between March 2019 and November 2021. During this period, the source remained in an extended low-luminosity phase, punctuated by occasional re-brightening phases. Below, we present a cohesive explanation of the overall evolution of the source properties during each of these phases.
§.§ State Transitions during the Re-brightening Phases
GRS 1915+105 exhibited 6 major re-brightenings - RB_ I, RB_ II, RB_ III, RB_ IV, RB_ V and RB_ VI (see panel a of Figure <ref>) - throughout the three-year observation period. RB_ I was a series of flares, RB_ II and RB_ IV were quick flares (spanning a few ks), and RB_ V and RB_ VI were prolonged re-brightenings (spanning ∼100 days and ∼150 days respectively). Figure <ref> shows the overall evolution of a few important spectral parameters, namely L_ bol (in 10^38 erg s^-1), Γ, kT_ e (in keV) and N_ H_1 (atoms cm^-2), during the re-brightening phases RB_ I, RB_ V and RB_ VI. Figure <ref> also includes the results from the analysis of the additional 40 NICER observations which are not tabulated in Table <ref>, as stated in <ref>.
The source exhibited state transitions during the prolonged re-brightenings RB_ V (Figure <ref>) and RB_ VI (Figure <ref>). At the beginning of both RB_ V and RB_ VI, the source is detected in the hard spectral state during the low phase (Obs. 1), with Γ of ∼1.3 (see panel c corresponding to RB_ V and RB_ VI in Figure <ref>) and an almost constant electron temperature, kT_ e∼ 2 keV (see <ref> and <ref>). As the source progresses into the rise and peak phases of these re-brightenings, Γ is seen to increase from 1.34_-0.04^+0.03 to 2.47_-0.06^+0.03 during RB_ V and from 1.36_-0.02^+0.02 to 2.04_-0.04^+0.02 during RB_ VI. The source exhibited maximum luminosities L_ bol of 12.8×10^38 erg s^-1 and 13.4×10^38 erg s^-1 during the peaks of RB_ V and RB_ VI, respectively (see panel b in the blue and red shaded regions in Figure <ref>). The decay phase is characterized by a decrease in Γ, to 1.91_-0.01^+0.03 and 1.69_-0.05^+0.02 during the decay phases (Obs. 4) of RB_ V and RB_ VI, respectively. A further decrease in the photon index (Γ∼1.3) is observed as the source descends to the low phase after the decay (Obs. 5 in RB_ VI), indicating the hard spectral nature of the source. However, kT_ e (see panel d in the blue and red shaded regions in Figure <ref>) is found to remain constant throughout the re-brightening phases. The rise, peak and decay phases (Obs. 2, 3 and 4 respectively) are recognized as the intermediate/soft state. The total rms variability also decreases as the source progresses from the hard to the soft state (13.1_-0.8^+1.1% to 8_-0.8^+0.9% during RB_ V, 32.0_-3.2^+2.9% to 17_-1.4^+1.3% during RB_ VI).
A similar evolution pattern in the spectral and timing properties has already been observed during the 2018 mini-outburst of MAXI J1535-571 <cit.> and the 2017 mini-outburst of GRS 1739-278 <cit.>. Several other LMXBs have also previously exhibited re-brightenings <cit.>. However, spectral state transitions are not witnessed during every re-brightening.
Only two BH LMXBs - MAXI J1535-571 <cit.> and GRS 1739-278 <cit.> - have so far exhibited spectral state transitions from the hard state to the soft state during re-brightenings. A disc component is seen in both sources during the peak of the outburst, with 2.1 < Γ < 2.7 and 0.33 < kT_ in (keV) < 0.5, whereas these sources showed no indication of a disc component during the hard state, with 1.5 < Γ < 2. These two sources also showed hysteresis in the HID, thereby generating a q-track. Nevertheless, we do not observe a q-track in the HID during RB_ V and RB_ VI. Based on an analogy of the spectral and timing characteristics of GRS 1915+105 with MAXI J1535-571 and GRS 1739-278, it can be concluded that the source undergoes the following sequence of transitions: hard → intermediate → soft → intermediate → hard state (during RB_ V) and hard → soft → hard state (during RB_ VI).
The mini-outbursts/re-brightenings in MAXI J1535–571, GRS 1739–278 and GRS 1915+105 are seen to have completely different timescales. In the case of MAXI J1535–571, re-flares occurred soon after the major outburst, whereas in GRS 1739–278 the time gap between the major outburst and the mini-outburst is not clearly known due to the observational gap; however, <cit.> predicts a time gap of < 200 days between the major and the mini-outburst. In the case of GRS 1915+105, RB_ V happened 500 days after the major outburst and RB_ VI occurred 200 days after the decay of RB_ V. The periodicity of the mini-outbursts in MAXI J1535–571 and GRS 1739–278 was estimated to vary between ∼20 and 35 days, while in GRS 1915+105, RB_ V and RB_ VI lasted for ∼100 days and ∼150 days, respectively. A comparison of the timescales seen in the three sources does not lead us to a common cause that triggers these re-brightenings. Several models exist in the literature that explain the origin of re-brightenings <cit.>. Based on our results, we postulate that the re-brightenings/mini-outbursts are small-scale outbursts <cit.>, which develop and progress in a way similar to the main outburst. The instability is assumed to be triggered at some location in the outer disc, which gradually increases the disc density and temperature. The mass accretion rate increases as this instability advances and propagates inwards as a heating wave, thus causing a re-brightening or mini-outburst. The detection of the disc component in GRS 1915+105 as the spectral state softens towards the peak is consistent with this scenario.
§.§ Variability during the Re-brightenings
GRS 1915+105 has exhibited 15 variability classes since its discovery. During the major outburst, the average count rate exhibited by the source during each of the variability classes varied from 1 to 50 kcts s^-1 (<cit.> and references therein). It was also reported that the limit-cycle oscillations seen during certain variability classes disappeared at an average count rate < 5 kcts s^-1 <cit.>. However, during the recent `obscured' phase, the source has displayed the ρ, λ and δ variability classes during RB_ V and RB_ VI (Figures <ref>a and <ref>a), exhibiting average count rates of ∼30, 120 and 250 cts s^-1, respectively. GRS 1915+105 also exhibited the ρ^' variability class (<ref>), a variant of the typical ρ class (see ), during the rise phase of RB_ V. Using the Modified Hindmarsh-Rose (MHR) model, <cit.> shows that ρ^' could result from a slight modulation of the time-dependent input function (J(t)). We also categorize the source into the χ variability class during the low-luminosity phase, where it exhibited the Low/Hard spectral state (Γ∼ 1.13_-0.06^+0.06 – 1.73_-0.02^+0.02 and kT_ e∼ 6.3_-0.2^+0.1 – 18.2_-3.7^+4.8 keV, see <ref>). However, the PDS did not explicitly show any QPO features during the hard state, which could be a result of low statistics.
The source also exhibits a sequential class transition χ→λ→ρ during the rising phase of RB_ V. The time evolution of the variability classes in GRS 1915+105 depicted by the MHR model <cit.> indicates that the source makes a transition from stable states - states showing stable equilibrium patterns (classes ϕ, χ, α^'', θ, ξ and ω) - to an unstable state - a state showing an unstable equilibrium pattern (ρ class) - via a transition segment (δ, γ, λ, κ and α^' classes), as the time-dependent input function (J(t)) varies. This MHR picture is in complete accordance with the sequential χ→λ→ρ transition exhibited by the source during RB_ V.
The variability classes are a feature unique to this source. These unique variability classes depict the accretion and ejection limit cycles in an unstable disc <cit.>. Based on a statistical analysis of RXTE observations of the source between April 1996 and May 2000, it was inferred that the unique variability classes occurred when the source radiated at exorbitant luminosities and satisfied the criterion L/L_ Edd≥ 1 <cit.>. However, recently, GRS 1915+105 began exhibiting variability classes at L/L_ Edd≤ 1 <cit.>. In hindsight, this decrease in the source luminosity was anticipated, owing to the depletion of matter in the accretion disc by the persistent X-ray activity. Similarly, IGR J17091–3624 <cit.> also exhibited these variability patterns at ∼20 – 30 % L_ Edd, demonstrating that a source need not radiate at luminosities ≥ L_ Edd in order to exhibit unique variability classes.
In addition to the observations, results from time-dependent disc models and simulations predict the onset of limit-cycle instabilities at L/L_ Edd≥ 0.3 <cit.>. Nonetheless, these predictions are called into question by the recent activity of GRS 1915+105, where the source exhibited variability classes at unabsorbed luminosities varying between 0.004 and 0.01 L_ Edd, which is ≪ 0.3 L_ Edd. At these luminosities, LMXBs generally show the least activity/variability. Given the discrepancies of the disc-instability models in explaining the limit-cycle oscillation behaviour, magnetic fields can be considered as an alternative explanation for the limit-cycle instabilities. The lack of threshold large-scale magnetic fields of uniform polarity eventually fails to sustain the thermal stability of the accretion disc <cit.>, which could be a cause of the peculiar variabilities seen in GRS 1915+105. However, the key question of how the phenomenon that triggers the unique variability patterns in the system is invoked even at extremely low luminosities (L ∼ 0.01 L_ Edd) is yet to be understood.
§.§ Dynamical Obscuration in the System
The spectral analysis reveals an incessant presence of `obscuration' in GRS 1915+105 since May 2019. This obscuration is found to be highly variable and inhomogeneous, i.e. PCF < 1 (see ). The column density due to the local obscuration (N_ H_ 1) and the partial covering fraction (PCF) varied drastically within a few minutes.
During the prolonged low-luminosity phase (<ref>), the source exhibited typical Low/Hard spectral state properties. However, the obscuration in the system was highly dynamic and random, with N_ H_ 1 varying between 20 and 550× 10^22 atoms cm^-2 and the PCF varying between 0.38 and 0.96 (see Table <ref>). In the case of the prolonged re-brightening phases (RB_ V and RB_ VI), the source showed intrinsically evolving spectral characteristics as well as a systematically evolving local obscuration medium (see panel a in the blue and red shaded columns in Figure <ref>). The obscuration varied in a pattern, with the lowest column density (N_ H_ 1) detected during the active phases of the re-brightenings (rise/peak/decay) and the maximum density detected in the low phase (hard state) either before the rise or after the decay of the re-brightenings. A minimum N_ H_ 1 value of 4× 10^22 atoms cm^-2 was observed near the decay phase of RB_ V and the peak phase of RB_ VI. However, increased N_ H_ 1 values of ∼150×10^22 atoms cm^-2 and ∼100×10^22 atoms cm^-2 were observed during the low phase/hard state of RB_ V and RB_ VI, respectively. The PCF varied randomly between 0.6 – 0.79 and 0.37 – 0.93 throughout RB_ V and RB_ VI, respectively.
This varying trend in N_ H_ 1 is also observed during RB_ I (see panel a corresponding to RB_ I in Figure <ref>), where N_ H_ 1 decreased significantly to ∼18×10^22 atoms cm^-2 during the flare in comparison with the other phases of RB_ I (see Table <ref>).
An identical scenario was observed in V404 Cyg during its outburst in 2015. The Swift observations of the source revealed fast and highly variable obscuration (N_ H_ 1 ∼10^21 - 10^24 atoms cm^-2) within the system <cit.>. The authors suggest a clumpy Compton-thick outflow to explain the fast variable obscuration in the system (see Figure 10 in ). Such a clumpy Compton-thick outflow aptly describes the nature of the obscuring material in GRS 1915+105. The high EW of the neutral Fe Kα line (EW ≥ 1 keV) obtained from the analysis of GRS 1915+105 (see <ref>) can also be considered a strong indicator of the presence of Compton-thick absorbing material <cit.>.
A decrease in the density of the obscuring medium during the flares is seen in both GRS 1915+105 and V404 Cyg. <cit.> presumes that the radio flare preceding the X-ray flare in V404 Cyg drives the obscuring medium away, at least for a short span, which leads to a decreased density and PCF during the flare. This explanation can be adapted to justify the decrease in N_ H_ 1 during the flare in RB_ I, as there exists a precursory radio flare to RB_ I. However, in the case of RB_ V and RB_ VI, we observe decreased radio activity as the source moves towards the peak (ALMA observations as reported in ), which indicates that some other mechanism in the system drives away the obscuring medium. Although their results and ours show similarities to some extent, this explanation cannot be considered appropriate for GRS 1915+105, because V404 Cyg and GRS 1915+105 differ fundamentally with regard to the accretion rate. V404 Cyg accreted at Eddington/super-Eddington rates, thereby inflating the inner accretion disc (slim disc; ) and causing the clumpy, Compton-thick outflows, whereas GRS 1915+105 accretes at sub-Eddington rates, which is inadequate to generate a clumpy outflow.
In recent works, <cit.> and <cit.> identified the obscuring medium as various layers of absorbing zones extending radially outwards. <cit.> detected winds emanating from the inner hot absorption zone, at a radius < 10^9 cm and N_ H_ 1 ∼10^23 atoms cm^-2. These winds are interpreted as failed winds, which could not be launched to infinity owing to insufficient magnetic field strength. The winds eventually surround the central engine, thus obscuring the system. Alternatively, <cit.> located the obscuring medium at a distance ∼10^11 cm with a density ∼10^12-10^13 cm^-3. The radial and vertical profiles of the disc, inferred from the deduced density and radius values, suggested an inflation of the outer disc. The inflated disc acted like a torus, thereby partially or completely obscuring the inner accretion disc (see also ). However, it is not viable to capture the dynamic nature of the obscuration with models such as those suggested in <cit.>, which are based on a limited sample of observations. In our study, we consider a large sample of observations to study the fast and complex evolution of the obscuring medium. Yet, there is no consensus on the processes causing the obscuration and on its dynamics. Insight into the dynamics of the obscuring medium may be obtained by a quantitative analysis of the source using a time-dependent model for obscuration, which is beyond the scope of the present work.
§ SUMMARY
In this paper, we performed a thorough and comprehensive spectral and timing analysis of AstroSat, NICER and NuSTAR observations pertaining to the low-luminosity `obscured' phase that GRS 1915+105 has been exhibiting since May 2019. Based on the results, we present a cohesive summary of the evolution of the spectral and timing characteristics of the source.
* GRS 1915+105 exhibited multiple re-brightenings in both X-rays and radio. The bolometric luminosity (L_ bol) of the source varied from 0.004 L_ Edd during the low-luminosity phase to 0.01 L_ Edd during the peak of the re-brightening phases.
* The source exhibited state transitions during the prolonged re-brightening phases. The source was tracked making a transition from hard → intermediate → soft → intermediate → hard state during RB_ V, and hard → soft → hard state during RB_ VI.
* GRS 1915+105 displayed the characteristic variability classes ρ, λ and δ at unabsorbed luminosities of 0.01 L_ Edd, 0.02 L_ Edd and 0.02 L_ Edd, respectively. Although QPOs could not be detected, based on the spectral characteristics the source is classified as belonging to the χ variability class during the low-luminosity phases.
* The source revealed multiple Fe absorption and emission line features between 6 – 8 keV, with EWs varying between 70 eV – 3.26 keV. The Fe XXV and Fe XXVI ion column densities varied between 10^16 - 10^18 atoms cm^-2. The distance of the absorbing plasma was constrained to be ≤2×10^10 cm.
* The source exhibited a highly dynamic obscuration with column density varying between ∼10^22 - 10^24atoms cm^-2 throughout the 3-year observation period.
§ ACKNOWLEDGEMENTS
We thank the anonymous reviewer for his/her suggestions and comments that helped to improve the quality of this manuscript. AMP, AN acknowledge the financial support of Indian Space Research Organisation (ISRO) under RESPOND program Sanction order No. DS-2B-13012(2)/19/2019-Sec.II. AMP acknowledges the PI of this project, Dr. Baishali Garai, for the relentless support and guidance. AMP also thanks Dr. Dominic Walton (University of Hertfordshire, Hatfield, UK) for his support in pursuing this work. This publication uses data from the AstroSat mission of the ISRO archived at the Indian Space Science Data Centre (ISSDC). This work has been performed utilising the calibration databases and auxiliary analysis tools developed, maintained and distributed by the AstroSat-SXT team with members from various institutions in India and abroad. This research has made use of MAXI data provided by RIKEN, JAXA and the MAXI team. This research has also made use of software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC) and NASA's Astrophysics Data System Bibliographic Services. AN also thanks GH, SAG; DD, PDMSA and Director-URSC for encouragement and continuous support to carry out this research.
Facilities: AstroSat, MAXI, NICER, NuSTAR.
§ DATA AVAILABILITY
The data used for analysis in this article are available in AstroSat-ISSDC website (<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>), MAXI website (<http://maxi.riken.jp/top/index.html>) and NICER and NuSTAR observations from HEASARC database (<https://heasarc.gsfc.nasa.gov/docs/cgro/db-perl/W3Browse/w3browse.pl>).
|
http://arxiv.org/abs/2307.04352v1 | 20230710052537 | Phase Diagram and Crossover Phases of Topologically Ordered Graphene Zigzag Nanoribbons: Role of Localization Effects | [
"Hoang Anh Le",
"In Hwan Lee",
"Young Heon Kim",
"S. -R. Eric Yang"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.stat-mech",
"quant-ph"
] |
Is the formation of a fractional charge <cit.> a necessary and sufficient condition<cit.> for topologically ordered insulators <cit.> such as fractional quantum Hall systems <cit.> and interacting disordered zigzag graphene nanoribbons <cit.> (ZGNRs)
with anyonic fractional charges? This issue is related to whether the electron localization effects of doped systems destroy or enhance the topological order <cit.> and quantization of fractional charges. In a Laughlin state on a disordered sphere (no edges are present), the added electrons fractionalize and form a quasi-degenerate peak in the gap of the tunneling density of states (DOS).
Electron localization <cit.> is expected to suppress the quantum fluctuations of these fractional charges of the quasi-degenerate gap states because these localized quasi-degenerate energy states are spatially separated from each other, as explained in Ref. <cit.> (if fractional charges are delocalized they overlap and become ill-defined). However, excessive disorder is considered detrimental to topological order.
In this study, we investigate similar issues with ZGNRs <cit.>.
A recent study showed that weak randomness (disorder) in ZGNRs can generate e^-/2 fractional charges <cit.>, which is a disorder effect closely related to the change of the disorder-free symmetry-protected topological insulator of ZGNRs into a topologically ordered <cit.> Mott-Anderson insulator <cit.>. These systems have a universal value of the topological entanglement entropy (TEE) <cit.> in the weak-disorder regime <cit.>. The shape of the entanglement spectrum is also found <cit.> to be similar to the DOS of the edge states, as expected of topologically ordered systems <cit.>.
In interacting disordered ZGNRs, the gap is filled further by edge states <cit.> with an increasing strength of the disorder potential. (We call these states gap-edge states.)
The ground states have the opposite edge site spins in the absence of disorder <cit.>. In the presence of disorder
a spin reconstruction of the zigzag edges can take place <cit.>. Nonetheless, a topologically ordered ZGNR has two degenerate ground states, see Fig. <ref>(a).
Mixed chiral edge states play an important role in this effect.
A short-range disorder potential couples two nearly chiral gap-edge states residing on opposite zigzag edges <cit.>, and mixed chiral gap-edge states with split probability densities may form
to display e^-/2 fractional semion charges <cit.> (see Fig. <ref>(b)) (these states with midgap energies are solitonic, with half of the spectral weight originating from the conduction band and the other half from the valence band <cit.>). Note that a mixed chiral gap-edge state has a nonzero fractional probability at the A- and B-carbon sites. In other words, it is split into two nonlocal parts, each residing on the edges of the A or B sublattice. The formation of mixed chiral gap-edge states is a nonperturbative instanton effect <cit.>. (They are similar to the bonding and antibonding states of a double quantum well.)
It should be noted that well-defined e^-/2 fractional charges in the weak-disorder regime are emergent particles, i.e., they have new qualitative features and appear only in sufficiently long ribbons.
In a weak-disorder regime, the number of fractional charges is proportional to the length of zigzag edges.
Although weak disorder leads to the formation of fractional charges, strong disorder may destroy them. Similar to fractional quantum Hall systems, the topological order of a ZGNR is not immediately destroyed upon doping because electron localization partially suppresses quantum fluctuations between quasi-degenerate mid-gap states. The system may still be an insulator with a fractional charge. However, in the presence of strong disorder or doping, zigzag edge antiferromagnetism is expected to diminish, and thereby, the topological order.
(Away from the low doping region, a disordered anyon phase with a distorted edge spin density wave was found <cit.>.) These results suggest that there may be several topological phase transitions in the zigzag ribbons. What is the nature of these topological phase transitions and the physical properties of the ground states? Does the presence of a fractional charge imply a universal value of the TEE? Does the TEE become nonuniversal and vary <cit.> with an increase in the disorder strength or doping level?
We explored the phase diagram of ZGNRs in the parameter space comprising on-site repulsion (U), disorder strength (Γ), and doping concentration (δ N/N_s) (δ N and N_s are, respectively, the number of doped electrons and the total number of sites in the ribbon). The competition between localization and electron interactions can have detrimental effects on the topological order and lead to several different phases, including crossover phases. We found a number of different phases with a topological order, quasi-topological order, and no order. Each of these phases is defined by the value of the TEE β and its variance. These properties of β are related to the presence or absence
of charge fractionalization and charge transfer correlations between zigzag edges. When both of these properties are present, in addition to correlations leading to spin-charge separation, β is universal, with small variances. In low-doped ZGNRs the interplay between electron localization and on-site repulsion contributes to the spatial separation of quasi-degenerate gap-edge states and protects the charge fractionalization against quantum fluctuations.
There are two other types of phases with a quasi-topological order. We refer to these phases as crossover phases, in which the variance of β is significant. In one of these phases, both e^-/2 fractional charges and spin-charge separation are absent; however, the charge transfer (± e^-/2) correlations exist between the zigzag edges.
Another phase may contain stable e^-/2 fractional charges but no charge transfer correlations between the zigzag edges.
The ground state and zigzag edge properties of the various crossover and nontopological phases are explored.
§ MODEL HAMILTONIAN
The following mechanisms can all lead to fractional charges: the coupling between the valleys mediated by short-range scatterers <cit.> and the sublattice mixing facilitated by alternation of the nearest neighbor hopping parameters <cit.>. Here we will consider only the effect of short-range scatterers.
The self-consistent Hartree-Fock (HF) approximation works well for graphene systems <cit.>. The HF Hamiltonian of a ZGNR with length L and width W is
H_MF=-t∑_n.n.,σ c^†_i,σc_j,σ +∑_i,σ V_ic_i,σ^†c_i,σ
+U∑_i[ n_i,↑⟨ n_i,↓⟩ +n_i,↓⟨ n_i,↑⟩-⟨ n_i,↓⟩⟨ n_i,↑⟩]
+ ∑_i [s_ix⟨ h_ix⟩+s_iy⟨ h_iy⟩],
where the site index is given by i=(k,l) (k labels sites along the ribbon direction and l along the width), c^†_i, σ and n_i_,σ represent creation and occupation operators at site i with spin σ = {↑, ↓}, respectively (periodic boundary conditions are used along the ribbon direction). The site spin operators are given by s_i x (y) = 1/2( c^†_i, ↑, c^†_i, ↓) σ^x (y) ( c_i, ↑, c_i, ↓)^T, where σ^x (y) is the conventional Pauli matrix. The first term represents the kinetic energy with hopping parameter t, n.n implies the summation over the nearest-neighbor sites. The second term represents the short-range impurities parameterized by V_i, which is randomly chosen from the energy interval [ -Γ, Γ]. Throughout this study, the density of the impure sites is fixed at 10 %. U denotes the on-site repulsive strength. The last term in Eq. (<ref>) represents self-consistent “magnetic fields", where ⟨ h_i x⟩ = -2 U ⟨ s_i x⟩ and ⟨ h_i y⟩ = -2 U ⟨ s_i y⟩. (These fields are present only in doped ZGNRs. In the initial stage of the HF iteration, the values of ⟨ h_i x⟩ and ⟨ h_i y⟩ can be selected from small random numbers). In the presence of these fields, the HF eigenstates are mixed spin states. The HF single-particle states |k⟩ (k=1,2,…,2N_s) can be written as a linear combination of site states |i,σ⟩. In the language of second quantization this is equivalent to
a_k=∑_i,σ A_k,i,σc_i,σ.
These magnetic fields are rather small for the disorder strength and doping level considered in this study.
There may be several nearly degenerate HF ground states. We select the HF initial ground state such that ⟨ n_i,σ⟩ represents a paramagnetic state with a small spin splitting.
In addition, we choose small random numbers of ⟨ h_i x⟩ and ⟨ h_i y⟩ (they do not significantly affect the final results).
The HF matrix dimension scales with the number of carbon atoms, which is typically <50000. The HF eigenstates and eigenenergies are self-consistently computed
(this requires approximately 20 iterations). The TEE is computed using the disorder-averaged results of numerous disorder realizations. Here, we used GPUs to speed up the solution of the HF matrix. The GPU calculations were intensive and performed on a supercomputer. In the presence of disorder and in the low-doping region, the obtained HF ground-state properties with solitons are in qualitative agreement with those of the density matrix renormalization group (DMRG) in the matrix product representation <cit.>. (In this work we do not investigate the high doping region. The DMRG result is difficult to obtain in this region because the computation is rather time consuming, and therefore, it is not possible to determine which nearly degenerate HF ground state is the true ground state.)
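For orientation, a schematic of such a self-consistent loop is sketched below for the undoped, collinear case (where the ⟨ h_ix⟩ and ⟨ h_iy⟩ fields vanish); the function build_H, the fixed iteration count and the simple linear mixing are our own illustrative choices rather than the actual implementation.

import numpy as np

def scf_loop(build_H, n_up0, n_dn0, n_occ, n_iter=20, mix=0.5):
    """Schematic self-consistent HF loop for Eq. (1), collinear/undoped case.
    build_H(n_up, n_dn) must return the two spin blocks of the mean-field
    Hamiltonian for the current site densities (user-supplied)."""
    n_up, n_dn = n_up0.copy(), n_dn0.copy()
    for _ in range(n_iter):
        H_up, H_dn = build_H(n_up, n_dn)
        _, U_up = np.linalg.eigh(H_up)
        _, U_dn = np.linalg.eigh(H_dn)
        new_up = np.sum(np.abs(U_up[:, :n_occ])**2, axis=1)   # <n_{i,up}>
        new_dn = np.sum(np.abs(U_dn[:, :n_occ])**2, axis=1)   # <n_{i,dn}>
        n_up = (1.0 - mix) * n_up + mix * new_up               # damped update
        n_dn = (1.0 - mix) * n_dn + mix * new_dn
    return n_up, n_dn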
Note that the Mott gap Δ is well-developed only when L≫ W (the excitation spectrum of a ribbon with L∼ W is similar to that of a gapless two-dimensional graphene sheet <cit.>). The localization properties of ZGNRs are unusual because both localized and delocalized states can exist <cit.>. Gap-edge states with energy |E|∼Δ/2 (Δ represents the Mott gap in the absence of disorder) can have localization lengths ∼ W and overlap significantly with each other.
§ EFFECTS OF ANDERSON LOCALIZATION
Anderson localization plays a crucial role in the quantization of fractional charges <cit.>. The effects of Anderson localization can be described using self-consistent Hartree–Fock approximation (HFA) <cit.>. The first important effect is as follows:
Anderson localization reduces the correlation length. Therefore, in comparison to the nondisordered case, we can use a smaller Wilson loop <cit.> to calculate the TEE of disordered interacting ZGNRs.
The correlation length may be computed from the entanglement entropy of an area A. The entropy is computed from the correlation function restricted to A, which plays the role of the (one-particle) reduced density matrix of region A.
The HF correlation function <cit.> between i∈ A and j∈ A decays exponentially as a function of distance x between i and j
C(x) = C_i↑, j↑ = ⟨Ψ| c^†_i↑ c_j↑|Ψ⟩∼exp(- | x|/ξ),
where Ψ represents the HF ground state of the ZGNR and ξ represents the correlation length.
By inverting the relation given in Eq.(<ref>), we can write c_iσ as a linear combination of a_k. This makes it straightforward to compute C(x). To compute it accurately, the area must be larger than the correlation length <cit.>.
We compute the correlation function and determine the correlation length ξ, as shown in Fig. <ref>.
For the disordered case Γ≠ 0, the correlation length is obtained by averaging over several disorder realizations. The Anderson localization reduces the correlation length compared to that of disorder-free ribbons, as shown in Fig. <ref>(b). (Disorder-free ZGNRs have a large correlation length for small U.) In contrast, doping increases the correlation length, as shown in Fig. <ref>(c).
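As an illustration of this step, a minimal numerical sketch is given below (the array names and the simple least-squares fit are ours): given the matrix of occupied-orbital HF coefficients from Eq. (2), the one-body correlation matrix follows directly, and ξ is read off from a fit of ln|C| against separation. For Γ≠ 0 the same estimate would simply be averaged over disorder realizations, as described above.

import numpy as np

def correlation_length(A_occ, x_coords, pairs):
    """Estimate xi from C(x) = <c_i^dag c_j> ~ exp(-|x|/xi).
    A_occ[i, k]: coefficient of site i in the k-th occupied HF orbital
    (one spin sector); pairs: (i, j) site pairs along the ribbon axis."""
    C = A_occ @ A_occ.conj().T                  # C_ij = <c_i^dag c_j>
    seps, vals = [], []
    for i, j in pairs:
        cij = abs(C[i, j])
        if cij > 1e-12:
            seps.append(abs(x_coords[i] - x_coords[j]))
            vals.append(np.log(cij))
    slope, _ = np.polyfit(seps, vals, 1)        # ln|C| ~ const - |x|/xi
    return -1.0 / slope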
Another important effect of Anderson localization in the presence of on-site repulsion is that quasi-degenerate localized states
are spatially separated <cit.>, leading to well-defined fractional charges <cit.>. (In low-doped ZGNRs added electrons fractionalize and form a narrow peak in the DOS near E=0 consisting of quasi-degenerate localized states <cit.>.)
The probability densities of such two mid-gap states carrying fractional charges are shown in Fig. <ref>(d). These gap-edge states are mixed chiral states <cit.>, whose probability densities peak at the two edges and rapidly decays inside the ribbon.
Note that these states do not overlap with each other. Non-interacting electrons of disordered ZGNRs also display mixed chiral states near E=0. However, although the overlap between nearly degenerate states in the weak-disorder regime is small, it is not negligible. Thus well-defined fractional charges do not readily form in non-interacting disordered ZGNRs.
§ PHASE DIAGRAM
Topological order can be detected by investigating the TEE (β) <cit.> within the HFA <cit.>. We first select a set of values for
(L, W, w, l_zig,l_arm), as defined in Fig. <ref>(a), to compute β. Next, these quantities are increased by the same ratio and a new β is computed. This process is repeated several times (see Ref. <cit.> for details). We apply finite-size scaling analysis to extract the value of the TEE in the limit L→∞ (see Fig. <ref>(b)).
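For concreteness, the entanglement entropies entering this construction can be obtained from the one-body correlation matrix of the HF (determinant) ground state via the correlation-matrix (Peschel) relation; the sketch below assumes that route and a generic Kitaev-Preskill-type tripartition, purely as a stand-in for the actual Wilson-loop-type geometry used in the calculation. In practice the spin sectors are treated separately and summed, and the value extracted at each system size is then fed into the finite-size extrapolation described above.

import numpy as np

def entanglement_entropy(C, region):
    """Entropy of a free-fermion (determinant) state for the sites in `region`,
    from the eigenvalues of the restricted correlation matrix."""
    Cr = C[np.ix_(region, region)]
    nu = np.clip(np.linalg.eigvalsh(Cr), 1e-12, 1.0 - 1e-12)   # guard the logs
    return float(-np.sum(nu * np.log(nu) + (1.0 - nu) * np.log(1.0 - nu)))

def topological_term(C, A, B, Cc):
    """Kitaev-Preskill combination; A, B, Cc are lists of site indices.
    For a topologically ordered state this combination tends to -beta."""
    S = entanglement_entropy
    return (S(C, A) + S(C, B) + S(C, Cc)
            - S(C, A + B) - S(C, B + Cc) - S(C, A + Cc)
            + S(C, A + B + Cc))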
We divide the parameter space (Γ,U,δ N) into three-dimensional grid points, and at each grid point, we compute β (see Fig. <ref>(c)). The three-dimensional phase diagram obtained is shown in Fig. <ref>(d).
We find that β can have three types of values: (i) A universal value in the topologically ordered phase, (ii) nonuniversal values of β with large variances in the crossover phases, and (iii) a zero value of
β in the normal-disordered phase. Projections of the phase diagram, namely U-Γ,
Γ-δ N, and
U–δ N planes, are shown in Figs. <ref>(e)-(g).
In undoped ZGNRs a TO phase is found in the region Γ/U≲ 1 and U≲ t, see Fig. <ref>(c). The topological phase transition into the symmetry-protected phase at Γ=0 is abrupt, consistent with the result of Ref. <cit.> (the TEE of the symmetric phase is zero). There are also other topological phase transitions, but they are smooth transitions with crossover regions <cit.>.
Figs. <ref>(e)-(g) display the presence of crossover regimes lying beyond the TO phase with an increase in the disorder, doping, and interaction strength.
The phase boundaries between topologically ordered and normal phases are “blurred”, which indicates the presence of crossover phases (there are two types of crossover (CO) phases, labeled COI and COII). The numerical results for the TEE are shown in Fig. <ref>(c).
The error bars in this figure include, besides random fluctuations caused by disorder, the uncertainties that occur in the extrapolation step of the finite-size scaling analysis.
As Γ/U increases, β decreases (see the red line
in Fig. <ref>(c)). The value of the TEE thus changes across a crossover phase.
In such a phase, β has a large variance, but the average values are not zero, which implies that the topological order is not completely destroyed. In this regime, the TEE becomes nonuniversal and decays.
In crossover phases charge transfer correlations between the opposite zigzag edges are present but fractional charges are not well defined, or vice versa.
One can use a different but equivalent
procedure to determine the phase diagram. We verified that the same phase diagram can be obtained by analyzing the presence of fractional charges and nonlocal correlations between the opposite zigzag edges. At each grid point (Γ,U,δ N) in the parameter space, we find the ground state and investigate whether the gap-edge states display fractional charges and whether nonlocal correlations exist between the opposite zigzag edges. By utilizing this method, we have successfully recovered the phase diagram shown in Fig. <ref>(d).
§ TOPOLOGICALLY ORDERED PHASE
The universal region was investigated in Ref. <cit.>, and therefore, we do not describe this phase in detail here. However, we would like to mention some new results.
We elucidate the nature of correlations in topologically ordered ZGNRs. A ZGNR is shown in Fig. <ref>(a). It consists of 8 carbon lines labeled l = 1, 2, 3, 4, 5, 6, 7, 8.
In each pair of carbon lines (1, 8), (2, 7), (3, 6), and (4, 5), an increase/decrease in the occupation number of one line is correlated with a decrease/increase in that of the other line (see, for example, lines 1 and 8 in Fig. <ref>(c)).
It is not only the zigzag edges that are correlated in this way, but also other carbon lines inside the ribbon that are away from the edges. The corresponding site spins of the ribbon are shown in Fig.<ref> (d).
Mixed chiral gap states contribute to this effect (these gap-edge states can decay slowly from the zigzag edges, unlike the fractional edge states. A schematic picture of a mixed chiral state is shown in Fig. <ref>(b)). Changes in the occupation numbers δ n_i,↑ and δ n_i,↓ of an edge often coincide at nearly the same values of k, which labels the site position along the ribbon direction. This effect can
lead to n_i,↑≈ n_i,↓ in the presence of disorder, resulting in s_i≈ 0, i.e., the appearance of spin-charge separation around a site on one of the edges <cit.>.
The following points should also be noted. The results in Fig. <ref>(c) show that
the variance of β decreases in the singular limit Γ/U→ 0. (Additional numerical results confirm this conclusion.) This result is consistent with the previous finding that
the fractional charge of a midgap state becomes accurate in the weak-disorder regime and in the thermodynamic limit (see Ref. <cit.>).
In the opposite limit Γ/U ≫ 1, the value of the TEE is non-universal and decreases with increasing U (see Fig. <ref>(c)). In addition, the functional dependence of DOS on E in the universal region is given by an exponentially suppressed function, a linear function <cit.>, or something in between. The actual shape of the DOS is determined by the competition between the strength of disorder and the on-site repulsion <cit.>; for example, the DOS is linear for (U,Γ, δ N) = (2t, 0.5t, 0), but it is exponentially suppressed in the weak disorder limit.
§ CROSSOVER PHASE I
We describe in detail the properties of undoped ZGNRs in the COI phase, where U ≳ t and U≳Γ (the on-site repulsion U is the dominant energy in this phase). The TO phase gradually changes
into the COI phase as U increases, as illustrated in Fig. <ref>(e). In this phase, β is nonzero, but its variance is significant, as shown in Fig <ref>(a). This phase has the following properties:
(i) The disorder-induced change in the edge occupation numbers δ n_i,↑= 1/2 for one type of spin σ is entirely transferred to the opposite edge, i.e., the zigzag edges are correlated in a nontrivial manner (see Fig. <ref>(c) for Γ/U = 0.17).
However, the site positions k on the opposite zigzag edges, where the changes in δ n_i,↑ and δ n_i,↓ occur, do not coincide.
(In contrast, these positions are correlated at the nearly same values of k in the TO phase, as we mentioned before.) We believe that these edge transfer correlations between zigzag edges change the ground state entanglement pattern and yield a nonzero fluctuating TEE. The edge charge transfer correlations become weaker when the disorder is stronger (see Fig. <ref>(d) for Γ/t=4), leading to a smaller value of β (see Fig. <ref>(c)).
(ii) Although the zigzag edge changes are fractional, δ n_i,↑= 1/2 (Figs. <ref>(b) and (c)), the A- and B-probability densities of the mixed chiral states responsible for this feature overlap, see Fig. <ref>(c). Thus, fractional charges are ill-defined.
(iii) For spin-charge separation to be present, charge transfers for both spins must occur at the same values of k. These effects are not observed in the COI phase. Note that the condition S_z=1/2(n_i,↑-n_i,↓)=0 at site i is not sufficient for spin-charge separation. To fulfill the conditions, well-defined fractional charges must exist.
§ CROSSOVER PHASE II
For undoped ZGNRs, there is another CO phase for Γ≫ U but U/t≲ 1. We call this phase COII where the disorder strength Γ is the dominant energy. As Γ increases, the TO phase undergoes a gradual transition into the COII phase, as demonstrated in Figure <ref>(e). Concurrently, the gap is progressively filled with states, as depicted in the upper graph of Figure <ref>(a).
Similar to the COI phase (Fig. <ref>(a)) β is finite with a significant variance. But there are no charge-transfer correlations between the zigzag edges (see the lower graph of Figure <ref>(a)). However, some fractional charges may exist, see Fig. <ref>(b). This is consistent with the following obtained results: (i) Some changes in the edge occupation number are δ n_i,σ≈±1/2.
(ii) There are gap-edge states with q_A ≈ 1/2. (Here q_A=∑_i∈ A|ψ_iσ(E)|^2, where ψ_iσ(E) is the HF eigenstate with energy E, see Ref.<cit.>. The probability densities are summed over all sites of the A sublattice.) However, the variance of q_A in the energy interval [E-δ E,E+δ E] is large because q_A varies substantially in this interval, as shown in Fig. <ref>(a). (But the disorder averaged mean charge value of the states in this energy interval is e^-/2.) Despite this, a fractional charge of a state in the interval [E-δ E,E+δ E] near E=0 does not overlap significantly with the probability densities of other fractional and non-fractional states in the same energy interval, provided that δ E is small (for U∼ t and Γ∼ t this happens when δ E∼ 0.01t). We believe that the interplay between localization and on-site repulsion is responsible for this effect. However, since a gap is absent, the fractional charges are less stable compared to the TO phase.
We checked that several states in the same value of δ E overlap in the absence of on-site repulsion.
Thus far, we investigated the undoped case. Upon doping, the disorder-free ZGNRs exhibit edge spin density waves instead of edge ferromagnetism of undoped ribbons. If disorder is added to a doped ZGNR, the spin waves become distorted <cit.>: There is a topological phase transition from modulated ferromagnetic edges at zero doping to distorted spin-wave edges at finite doping. Our results indicated that, when doping is substantial, this phase is also a COII phase.
The dependence of the mean value of q_A of the states in the mid-gap peak on the number of doped electrons is shown in Fig. <ref>(d) (the DOS shows a sharp peak at the mid-gap energy, see Fig. <ref>(c)). At a low doping concentration, the disorder-averaged value of q_A is close to 0.5. The states in the mid-gap peak display well-defined fractional charges, as we discussed below Fig. <ref> (note that the width of the midgap peak is δ E∼ 0.005t). As the doping concentration increases further, q_A significantly deviates from 0.5, and simultaneously, the DOS mid-gap peak starts to decrease <cit.>. These findings imply that even though fractional charges can be found, their number decreases with an increase in doping. The gradual change in q_A as a function of δ N / N_s indicates that the transition from the phase of the distorted ferromagnetic edge to the phase of the distorted edge spin-wave is not sharp. Figure <ref>(e) shows how β decreased with an increase in δ N / N_s. For a large δ N / N_s, it is computationally demanding to calculate β because the correlation length is expected to be longer (see Fig. <ref>(c)) in comparison to that of undoped ZGNRs.
§ STRONGLY DISORDERED AND STRONGLY REPULSIVE PHASES
We discuss the strongly disordered phase in region Γ/U≫ 1 (see Fig. <ref>(e)). The topological order is destroyed once the disorder strength reaches a sufficiently large value (e.g., β = 0 at (U,Γ, δ N) = (t,15t, 0)). In this region the edge charge-transfer correlations and charge fractionalization are not well-defined, which implies that TEE is zero.
In Fig. <ref>(a), the site occupation numbers in the weak (Γ = 0.03 t) and strong (Γ = 15 t) disorder regimes are shown side by side to highlight the difference: those in the strong-disorder regime fluctuate strongly from site to site.
Edge magnetization is zero almost everywhere (see Fig. <ref>(b)). The occupation numbers display sharp values of n_i,σ=1 at some sites (see Fig. <ref>(b)) (they were also present in the DMRG calculations of disordered ribbons, see Ref.<cit.>).
Another phase, that is, the strongly repulsive phase (U≫ t) with no fractional charges and zigzag edge correlations, is shown in Fig. <ref>(e). In this case, β≈ 0.
The q_A–E diagram in Fig. <ref>(c)
displays the nonperturbative nature of disorder in this regime: the values q_A are scattered between 0 and 1 in the limit Γ→0, whereas they are restricted to
the four solid lines at Γ = 0.
Also, the (E, q_A) distribution indicates that in a strongly repulsive regime, even with the presence of disorder, there is still a large energy gap.
There are no states with q_A≈ 1/2 near the mid-gap energy, as shown in Fig. <ref>(c). The A and B components of the wave function of the states with q_A≈ 1/2 near the gap edges ±Δ/2 overlap (see Fig. <ref>(d)).
For a stronger disorder (larger values of Γ), the gap is filled with states such that the DOS is finite at E = 0, as shown in Fig. <ref>(e).
The main physics of this phase is illustrated by investigating the zigzag edge structure: the occupation numbers are n_i,σ=1 or 0, so charge transfers occur in units of one electron (δ n_i,σ=± 1) in the strongly repulsive phase (the total site occupation number of each site is n_i≈ 2, 1, or 0 despite a strong U). No transfer of fractional charges was observed between the zigzag edges. This is because mixed chiral gap-edge states are not present. Note that the edge magnetization displays sharp domain walls, as indicated in Figs. <ref>(f)-(g).
§ SUMMARY AND DISCUSSION
We computed the phase diagram of zigzag graphene nanoribbons as a function of the on-site repulsion U, doping δ N, and disorder strength Γ.
We identified the universal, crossover, strongly disordered, and strongly repulsive phases.
Each phase of the phase diagram was defined by the TEE value and its variance. We also investigated how the values of the TEE are related to the following physical properties: the presence of charge fractionalization and the edge charge transfer correlations between the opposite zigzag edges. When both properties are present, in addition to correlations leading to spin-charge separation,
β was universal. If only one of these properties is present, β was nonuniversal and its variance was significant. However, when both were absent, β was approximately zero. In addition, we found a strongly repulsive phase with zero TEE
in large on-site repulsion and weak disorder limits. Its ground state contains abrupt kinks in zigzag edge magnetizations without charge fractionalization, which is a consequence of the singular perturbative nature of the disorder potential.
There is another phase with zero TEE, i.e., the strongly disordered phase in regime Γ≫ U. In this phase, the edge site occupation numbers fluctuate highly from site to site, and antiferromagnetic coupling between the two edges are nearly destroyed.
Each phase of the phase diagram has a different zigzag-edge structure.
We also investigated the effect of the interplay between localization and on-site repulsion on the charge quantization. In low-doped and/or weakly disordered ZGNRs this interplay contributes to the spatial separation of quasi-degenerate gap-edge states and protects the charge fractionalization against quantum fluctuations. Even in the presence of moderately strong disorder charge fractionalization is not completely destroyed.
We briefly discuss some experimental implications. It would be interesting to observe the presence of nonlocal charge transfers between the zigzag edges of the COI phase. This can be investigated by measuring correlations between the edge site occupation numbers using a scanning tunneling microscope <cit.>.
In the COII phase, fractional e^-/2 edge charges are present; however, unusual transport and magnetic susceptibility
properties are not expected because spin-charge separation is not present. (In contrast, the TO phase is expected to display unusual transport and magnetic susceptibility because of spin-charge separation <cit.>.) Similarly, scanning tunneling microscopy can be used to verify the predicted edge occupation numbers
of the strongly repulsive and strongly disordered phases.
In addition, investigation of tunneling between zigzag edges, as in fractional quantum Hall bar systems <cit.>, may be fruitful.
[Lei13] Leinaas, J. M. & Myrheim, J. On the theory of identical particles. Nuovo Cimento B 37, 1–23, DOI: 10.1007/BF02727953 (1977).
[Wilczek03] Wilczek, F. Quantum mechanics of fractional-spin particles. Phys. Rev. Lett. 49, 957–959, DOI: 10.1103/PhysRevLett.49.957 (1982).
[Arovas] Arovas, D., Schrieffer, J. R. & Wilczek, F. Fractional statistics and the quantum Hall effect. Phys. Rev. Lett. 53, 722–723, DOI: 10.1103/PhysRevLett.53.722 (1984).
[Nakamura01] Nakamura, J., Liang, S., Gardner, G. C. & Manfra, M. J. Direct observation of anyonic braiding statistics. Nature Physics 16, 931–936, DOI: 10.1038/s41567-020-1019-1 (2020).
[Barto1] Bartolomei, H. et al. Fractional statistics in anyon collisions. Science 368, 173–177, DOI: 10.1126/science.aaz5601 (2020).
[GV2019] Girvin, S. M. & Yang, K. Modern Condensed Matter Physics (Cambridge University Press, Cambridge, 2019).
[Pach] Pachos, J. K. Introduction to Topological Quantum Computation (Cambridge University Press, Cambridge, 2012).
[Wen11] Wen, X.-G. Colloquium: Zoo of quantum-topological phases of matter. Rev. Mod. Phys. 89, 041004, DOI: 10.1103/RevModPhys.89.041004 (2017).
[NPlaugh] Laughlin, R. B. Anomalous quantum Hall effect: An incompressible quantum fluid with fractionally charged excitations. Phys. Rev. Lett. 50, 1395–1398, DOI: 10.1103/PhysRevLett.50.1395 (1983).
[Yang] S.-R. Eric Yang. Topologically Ordered Zigzag Nanoribbon (World Scientific, Singapore, 2023).
[Kitaev11] Kitaev, A. & Preskill, J. Topological entanglement entropy. Phys. Rev. Lett. 96, 110404, DOI: 10.1103/PhysRevLett.96.110404 (2006).
[Levin11] Levin, M. & Wen, X.-G. Detecting topological order in a ground state wave function. Phys. Rev. Lett. 96, 110405, DOI: 10.1103/PhysRevLett.96.110405 (2006).
[Haldane191] Li, H. & Haldane, F. D. M. Entanglement spectrum as a generalization of entanglement entropy: Identification of topological order in non-abelian fractional quantum Hall effect states. Phys. Rev. Lett. 101, 010504, DOI: 10.1103/PhysRevLett.101.010504 (2008).
[Altshuler] Altshuler, B. Introductory Anderson localization. In Advanced Workshop on Anderson Localization, Nonlinearity and Turbulence: a Cross-Fertilization (International Centre for Theoretical Physics, 2010).
[GV2000] Girvin, S. M. The quantum Hall effect: novel excitations and broken symmetries. In Aspects topologiques de la physique en basse dimension. Topological aspects of low dimensional systems, 53–175 (Springer, 1999).
[Fujita] Fujita, M., Wakabayashi, K., Nakada, K. & Kusakabe, K. Peculiar localized state at zigzag graphite edge. J. Phys. Soc. Jpn. 65, 1920–1923, DOI: 10.1143/JPSJ.65.1920 (1996).
[Brey2006] Brey, L. & Fertig, H. A. Electronic states of graphene nanoribbons studied with the Dirac equation. Phys. Rev. B 73, 235411, DOI: 10.1103/PhysRevB.73.235411 (2006).
[Lyang] Yang, L., Park, C.-H., Son, Y.-W., Cohen, M. L. & Louie, S. G. Quasiparticle energies and band gaps in graphene nanoribbons. Phys. Rev. Lett. 99, 186801, DOI: 10.1103/PhysRevLett.99.186801 (2007).
[Pisa1] Pisani, L., Chan, J. A., Montanari, B. & Harrison, N. M. Electronic structure and magnetic properties of graphitic ribbons. Phys. Rev. B 75, 064418, DOI: 10.1103/PhysRevB.75.064418 (2007).
[Cai2] Ruffieux, P. et al. On-surface synthesis of graphene nanoribbons with zigzag edge topology. Nature 531, 489–492, DOI: 10.1038/nature17151 (2016).
[Kolmer] Kolmer, M. et al. Rational synthesis of atomically precise graphene nanoribbons directly on metal oxide surfaces. Science 369, 571–575, DOI: 10.1126/science.abb8880 (2020).
[Brey] Brey, L., Seneor, P. & Tejeda, A. (eds.) Graphene Nanoribbons, 2053-2563 (IOP Publishing, 2019).
[Yang2019] Jeong, Y. H., S.-R. Eric Yang & Cha, M.-C. Soliton fractional charge of disordered graphene nanoribbon. Journal of Physics: Condensed Matter 31, 265601, DOI: 10.1088/1361-648X/ab146b (2019).
[yang1] S.-R. Eric Yang. Soliton fractional charges in graphene nanoribbon and polyacetylene: similarities and differences. Nanomaterials 9, 885, DOI: 10.3390/nano9060885 (2019).
[Yang2020] S.-R. Eric Yang, Cha, M.-C., Lee, H. J. & Kim, Y. H. Topologically ordered zigzag nanoribbon: e/2 fractional edge charge, spin-charge separation, and ground-state degeneracy. Phys. Rev. Research 2, 033109, DOI: 10.1103/PhysRevResearch.2.033109 (2020).
[Dob] Dobrosavljevic, V., Trivedi, N. & Valles, J. M., Jr. Conductor-Insulator Quantum Phase Transitions (Oxford University Press, 2012).
[Belitz] Belitz, D. & Kirkpatrick, T. R. The Anderson-Mott transition. Rev. Mod. Phys. 66, 261–380, DOI: 10.1103/RevModPhys.66.261 (1994).
[Byczuk] Byczuk, K., Hofstetter, W. & Vollhardt, D. Competition between Anderson localization and antiferromagnetism in correlated lattice fermion systems with disorder. Phys. Rev. Lett. 102, 146403, DOI: 10.1103/PhysRevLett.102.146403 (2009).
[Yang2021] Kim, Y. H., Lee, H. J. & S.-R. Eric Yang. Topological entanglement entropy of interacting disordered zigzag graphene ribbons. Phys. Rev. B 103, 115151, DOI: 10.1103/PhysRevB.103.115151 (2021).
[Yang2022] Kim, Y. H., Lee, H. J., Lee, H.-Y. & S.-R. Eric Yang. New disordered anyon phase of doped graphene zigzag nanoribbon. Scientific Reports 12, 14551, DOI: 10.1038/s41598-022-18731-6 (2022).
[Efros] Éfros, A. L. & Shklovskii, B. I. Coulomb gap and low temperature conductivity of disordered systems. Journal of Physics C: Solid State Physics 8, L49–L51, DOI: 10.1088/0022-3719/8/4/003 (1975).
[Lima2012] Lima, L. R. F., Pinheiro, F. A., Capaz, R. B., Lewenkopf, C. H. & Mucciolo, E. R. Effects of disorder range and electronic energy on the perfect transmission in graphene nanoribbons. Phys. Rev. B 86, 205111, DOI: 10.1103/PhysRevB.86.205111 (2012).
[Canri] Canright, G. S., Girvin, S. M. & Brass, A. Superconductive pairing of fermions and semions in two dimensions. Phys. Rev. Lett. 63, 2295–2298, DOI: 10.1103/PhysRevLett.63.2295 (1989).
[Heeger] Heeger, A. J., Kivelson, S., Schrieffer, J. R. & Su, W. P. Solitons in conducting polymers. Rev. Mod. Phys. 60, 781–850, DOI: 10.1103/RevModPhys.60.781 (1988).
[Zeng] Zeng, B., Chen, X., Zhou, D.-L. & Wen, X.-G. Quantum Information Meets Quantum Matter (Springer New York, 2019).
[Stau] Stauber, T. et al. Interacting electrons in graphene: Fermi velocity renormalization and optical response. Phys. Rev. Lett. 118, 266801, DOI: 10.1103/PhysRevLett.118.266801 (2017).
[Neto] Castro Neto, A. H., Guinea, F., Peres, N. M. R., Novoselov, K. S. & Geim, A. K. The electronic properties of graphene. Rev. Mod. Phys. 81, 109–162, DOI: 10.1103/RevModPhys.81.109 (2009).
[Yang1995] S.-R. Eric Yang, MacDonald, A. H. & Huckestein, B. Interactions, localization, and the integer quantum Hall effect. Phys. Rev. Lett. 74, 3229–3232, DOI: 10.1103/PhysRevLett.74.3229 (1995).
[Mac1] S.-R. Eric Yang & MacDonald, A. H. Coulomb gaps in a strong magnetic field. Phys. Rev. Lett. 70, 4110–4113, DOI: 10.1103/PhysRevLett.70.4110 (1993).
[Peschel119] Peschel, I. Calculation of reduced density matrices from correlation functions. Journal of Physics A: Mathematical and General 36, L205, DOI: 10.1088/0305-4470/36/14/101 (2003).
[Bal] Jiang, H.-C., Wang, Z. & Balents, L. Identifying topological order by entanglement entropy. Nature Phys. 8, 902–905, DOI: 10.1038/nphys2465 (2012).
[Andrei] Andrei, E. Y., Li, G. & Du, X. Electronic properties of graphene: a perspective from scanning tunneling microscopy and magnetotransport. Reports on Progress in Physics 75, 056501, DOI: 10.1088/0034-4885/75/5/056501 (2012).
[Chung] Chung, T.-C., Moraes, F., Flood, J. D. & Heeger, A. J. Solitons at high density in trans-(CH)_x: Collective transport by mobile, spinless charged solitons. Phys. Rev. B 29, 2341–2343, DOI: 10.1103/PhysRevB.29.2341 (1984).
[Kang] Kang, W., Stormer, H., Pfeiffer, L., Baldwin, K. & West, K. Tunnelling between the edges of two lateral quantum Hall systems. Nature 403, 59–61, DOI: 10.1038/47436 (2000).
[Iyang] Yang, I., Kang, W., Baldwin, K. W., Pfeiffer, L. N. & West, K. W. Cascade of quantum phase transitions in tunnel-coupled edge states. Phys. Rev. Lett. 92, 056802, DOI: 10.1103/PhysRevLett.92.056802 (2004).
§ ACKNOWLEDGEMENTS
S.R.E.Y. was supported by the Basic Science Research Program of the National Research Foundation of
Korea (NRF), funded by the Ministry of Science and ICT (MSIT) NRF-2021R1F1A1047759. Two other grants are also acknowledged: BK21 FOUR (Fostering Outstanding Universities for Research)
and KISTI Supercomputing Center with supercomputing resources including technical support KSC-2022-CRE-0345.
§ DATA AVAILABILITY
On reasonable request, the corresponding author will provide all relevant data in this paper.
§ CODE AVAILABILITY
On reasonable request, the corresponding author will provide all numerical codes in this paper.
§ AUTHOR CONTRIBUTIONS
L.H.A., Y.H.K. and I.H.L. performed the HF calculations. S.R.E.Y. conceived the project and supervised the study. All authors contributed to the writing of the manuscript.
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2307.05960v1 | 20230712070558 | An adaptive approach to remove tensile instability in SPH for weakly compressible fluids | [
"Kanishka Bhattacharya",
"Tapan Jana",
"Amit Shaw",
"L. S. Ramachandra",
"Vishal Mehera"
] | cs.CE | [
"cs.CE"
] |
Kanishka Bhattacharya^a,b, Tapan Jana^a, Amit Shaw^a,* ([email protected]), L. S. Ramachandra^a, Vishal Mehera^c
^a Civil Engineering Department, Indian Institute of Technology Kharagpur, West Bengal, India
^b CSIR-Structural Engineering Research Centre, Chennai, India
^c Bhabha Atomic Research Centre, Visakhapatnam, India
* Corresponding author
Smoothed Particle Hydrodynamics (SPH) is plagued by the phenomenon of tensile instability, which is the occurrence of short wavelength zero energy modes resulting in unphysical clustering of particles. The root cause of the instability is the shape of derivative of the compactly supported kernel function which may yield negative stiffness in the particle interaction under certain circumstances. In this work, an adaptive algorithm is developed to remove tensile instability in SPH for weakly compressible fluids. Herein, a B-spline function is used as the SPH kernel and the knots of the B-spline are adapted to change the shape of the kernel, thereby satisfying the condition associated with stability. The knot-shifting criterion is based on the particle movement within the influence domain. This enables the prevention of instability in fluid problems where excessive rearrangement of particle positions occurs. A 1D dispersion analysis of an Oldroyd B fluid material model is performed to show how the algorithm prevents instabilities for short wavelengths but ensures accuracy at large wavelengths. The efficacy of the approach is demonstrated through a few benchmark fluid dynamics simulations where a visco-elastic Oldroyd B material model and a non-viscous Eulerian fluid material model are considered.
Tensile instability, Smoothed particle hydrodynamics, B-spline, adaptive kernel, weakly compressible fluids
§ INTRODUCTION
Smoothed Particle Hydrodynamics (SPH) is a particle-based method that has gained much attention in the past few decades as an alternative to the traditional mesh-based methods. SPH was first developed by <cit.> and <cit.> to simulate astrophysical problems. Since then, SPH has been widely used in fluid dynamics problems. A lot of work has been done in the areas of incompressible flows (<cit.>,<cit.>,<cit.>,<cit.>,<cit.>), multiphase fluid flows (<cit.>,<cit.>,<cit.>,<cit.>,<cit.>), viscoelastic flows (<cit.>,<cit.>,<cit.>,<cit.>,<cit.>) and fluid-structure interaction (<cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>). In the last few years, SPH has also been used in solid mechanics problems <cit.>. These studies include fracture modeling (<cit.>,<cit.>, <cit.>), high velocity impact and blast modeling (<cit.>,<cit.>,<cit.>,<cit.>,<cit.>, <cit.>) and geotechnical simulations (<cit.>,<cit.>,<cit.>,<cit.>). Despite its potential and exploration in several areas of computational mechanics, one major drawback of SPH is the tensile instability, which, if unattended, may ruin the simulation.
Tensile instability is the occurrence of small wavelength zero energy modes which pollute the solution and sometimes even change the entire dynamics of the problem. The root of the instability has been studied by many researchers and is now well documented (<cit.>,<cit.>,<cit.>,<cit.>,<cit.>). As two SPH particles move away from each other due to negative pressure (tension), the magnitude of the gradient of the SPH kernel first increases, reaches a maximum and then decreases. The force between two SPH particles is proportional to the gradient of the kernel; consequently, the force also initially increases, reaches a maximum and then decreases. However, a decreasing force with increasing distance between two SPH particles results in negative stiffness, which ultimately causes an unphysical separation of the particles. This is the genesis of the tensile instability. The same argument can be made for positive pressure. As two SPH particles approach each other, the repulsive force first increases, but after a point starts decreasing, which results in particle clumping. <cit.> performed a detailed study of these instabilities. Via a 1D linear perturbation analysis, he arrived at an instability criterion which depends on the sign of the product of the stress and the second derivative of the SPH kernel function at the nearest neighbour.
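To make the criterion concrete, the short script below (our own illustration, using the standard cubic B-spline kernel in 1D rather than the adaptive kernel developed later in this paper) evaluates W'' at the nearest-neighbour spacing. Since W'' of this kernel changes sign at r/h = 2/3, the usual choices h ≈ 1.0–1.5Δx place the nearest neighbour where W'' > 0, so the product of stress and W'' is positive in tension, which is the unstable case identified by the criterion recalled above.

def cubic_spline_d2W(q, h):
    """Second derivative of the 1D cubic B-spline kernel (normalisation 2/(3h)),
    with q = r/h; used here only to illustrate Swegle's criterion."""
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (-3.0 + 4.5 * q) / h**2
    elif q < 2.0:
        return sigma * 1.5 * (2.0 - q) / h**2
    return 0.0

dx = 1.0                                  # illustrative particle spacing
for h in (1.0 * dx, 1.2 * dx):
    q_nn = dx / h
    print(f"h = {h:.1f} dx: q_nn = {q_nn:.2f}, W''(dx) = {cubic_spline_d2W(q_nn, h):+.3f}")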
A few remedies are available in the literature to tackle the problem of tensile instability. <cit.> proposed a kernel whose 1-st derivative monotonically increases as particles approach each other, thereby preventing the clumping of particles in compression. However, the 1-st derivative of the kernel is discontinuous, and also, the kernel will not be able to prevent the instability in tension. Some other researchers (<cit.>, <cit.>, <cit.>, <cit.>) used conservative smoothing on SPH variables, which effectively introduced a diffusive term in the conservation equations to attenuate the short wavelengths associated with the instability. <cit.> also showed how the conservative smoothing could be used as a more accurate dissipative mechanism than the standard artificial viscosity. <cit.> and <cit.>, in a 1D setting, introduced dual sets of particles: the standard SPH particles carried velocity, while `stress particles' were introduced between SPH particles, where stresses were calculated. Though this eliminated the tensile instability, carrying this forward to 2D becomes computationally intensive due to the tracking of the two different sets of particles and the mapping of properties from one set to the other (<cit.>). <cit.> and <cit.> developed the artificial stress method. To prevent the clumping of particles due to the tensile instability, they suggested the introduction of a small repulsive force between the particles. Using a dispersion analysis, they showed how the parameters associated with the repulsive force could be estimated to prohibit tensile instability as well as ensure accuracy. Because the instability was noticeable only in tension, they provided the repulsive force only to particles in tension. For the modelling of fluid flows at low and moderate Reynold's numbers, a background compressive pressure was added to ensure that the entire domain is in compression (<cit.>, <cit.>). This approach was successful in preventing the instabilities from arising in regions of negative pressure. The drawback with this approach is the setting of the background pressure, as too large a value results in numerical noise. <cit.> proposed a hyperbolic-shaped kernel to remove the instability in viscous fluids under compression. Similar to <cit.>, the value of the 1-st derivative of the kernel increases as particles approach each other. Though it has been shown that the kernel is able to remove the instability in compression, it will not be able to prevent the instability in tension. Another method to tackle tensile instability is the particle shifting method. When the equations of motion are solved, the SPH particles follow the streamlines of motion, which makes the particle distribution anisotropic, resulting in a breakdown of the solution at later stages. To tackle this, the particle shifting method was introduced in an Incompressible SPH setting (<cit.>,<cit.>). The same particle shifting technique can be utilised to tackle the instability in Weakly Compressible SPH. Fick's law of diffusion is used to shift particles from regions of high concentration to regions of low concentration (<cit.>,<cit.>), thereby effectively preventing the clumping of particles.
The corrective measures mentioned above are either computationally intensive or require some parameters which need to be judiciously chosen a-priori. Recently, we proposed an adaptive approach <cit.> where the shape of the kernel at a particle is modified on the basis of the state of stress. Using this approach, we were able to show how the issue of tensile instability can be resolved in elastic dynamics problems. Based on a similar concept, a stable SPH computational framework for the simulation of Weakly Compressible fluids is developed in this paper. A B-spline basis function constructed over a variable knot vector is taken as the kernel, and its shape is adapted by changing the location of the intermediate knots to satisfy Swegle's condition for preventing instability <cit.>. Most of the studies (<cit.>,<cit.>,<cit.>,<cit.>) have shown that compressive stresses do not show any visible signs of instability; hence the remedies aim to remove the instability in tension. In the simulations performed in this paper, too, it was the instability in tension that affected the results. Hence, in this work, the shape of the kernel is modified in a bid to satisfy Swegle's condition for tension for the farthest immediate neighbour, which automatically ensures the stability of all the other nearest neighbour points in tension. <cit.> had used a hyperbolic kernel to eliminate instability in problems involving positive pressure. Although the problem explored by <cit.> is not investigated in this paper, it is shown how the kernel used in this study can be adapted to mimic the properties of the hyperbolic kernel, thereby satisfying Swegle's condition for compression.
In this work, two benchmark problems viz. an impacting visco-elastic fluid drop and the rotation of an inviscid Eulerian fluid patch are considered. The governing equations for the visco-elastic fluid are presented in Section <ref>, and the SPH discretisation of the same equations is given in Section <ref>. A 1D perturbation analysis of the exact equations and the SPH discretised equations are performed in Section <ref>. The proposed algorithm to tackle the instability is presented in Section <ref>. The efficacy of the algorithm is demonstrated in Section <ref>. Finally, the concluding remarks are highlighted in Section <ref>.
§ GOVERNING EQUATIONS FOR A VISCO-ELASTIC FLUID
The conservation equations for a fluid in indicial notation are;
dρ/dt=-ρ∂ v^β/∂ x^β,
dv^α/dt=1/ρ∂σ^αβ/∂ x^β+g^α,
where ρ is the density, t is the time, x^β and v^β are the β^th components of the position and velocity vector respectively, σ^αβ is the (α,β)^th component of the stress tensor and g^α is the α^th component of the vector corresponding to the acceleration due to gravity. Einstein summation convention is followed, i.e. summation is taken over repeated indices.
The stress tensor is expressed as the sum of the hydrostatic pressure (P) and a deviatoric stress. For an Oldroyd B fluid, which may be considered as a polymer solution, the deviatoric stress can be decomposed into the sum of a Newtonian solvent contribution (τ_s^αβ) and a polymeric contribution (τ_p^αβ). This gives,
σ^αβ=-Pδ^αβ+τ_s^αβ+θτ_p^αβ,
where δ^αβ is the Kronecker Delta. A standard procedure in SPH is to consider a Weakly Compressible fluid with an equation of state for the calculation of the pressure as,
P=ρ_0 c_0^2/γ((ρ/ρ_0)^γ-1),
where c_0 denotes the speed of sound, ρ_0 is the initial density, and γ is taken to be 7 to make the equation stiff. The value of the speed of sound is set at least ten times the maximum fluid velocity. This keeps the Mach number (M) below 0.1, and because δρ/ρ∼ M^2, this ensures that the variation in density is less than 1%, and thus, the behaviour of the fluid is close to that of an incompressible fluid.
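As a concrete illustration, the equation of state can be evaluated with a few lines of Python; the function name and the default values of ρ_0, c_0 and γ below are placeholders chosen for this sketch and should be set per problem (with c_0 at least ten times the expected maximum fluid velocity, as noted above).

```python
import numpy as np

def tait_pressure(rho, rho0=1000.0, c0=12.5, gamma=7.0):
    """Weakly compressible equation of state: P = rho0*c0^2/gamma * ((rho/rho0)^gamma - 1)."""
    rho = np.asarray(rho, dtype=float)
    return rho0 * c0**2 / gamma * ((rho / rho0)**gamma - 1.0)
```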
The solvent contribution of the deviatoric stress is linearly related to the rate of deformation tensor d^αβ=1/2(∂ v^α/∂ x^β+∂ v^β/∂ x^α) as
τ_s^αβ=2η_s d^αβ,
where η_s is the solvent viscosity.
The polymer contribution can be obtained from the following differential equation:
τ_p^αβ+λ_1 ∇τ_p^αβ=2η_p d^αβ,
where λ_1 is the relaxation time of the fluid, η_p is the polymer contribution to the viscosity, and ∇τ_p^αβ is the upper convected derivative of τ_p^αβ which is defined as
∇τ_p^αβ=dτ_p^αβ/dt-∂ v^α/∂ x^γτ_p^γβ-∂ v^β/∂ x^γτ_p^αγ.
Substituting Equation (<ref>) in Equation (<ref>) we get
dτ_p^αβ/dt=∂ v^α/∂ x^γτ_p^γβ+∂ v^β/∂ x^γτ_p^αγ-(1/λ_1)τ_p^αβ+(2η_p/λ_1)d^αβ.
In Equation (<ref>), θ=1 gives an Oldroyd B model while θ=0 gives a Newtonian model. An inviscid Eulerian fluid may be obtained by taking θ = 0 and setting the viscosities (η_s and η_p) to 0.
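The polymeric stress rate above depends only on the local velocity gradient and the current value of τ_p, so it can be evaluated pointwise. The NumPy sketch below (function and argument names are ours) returns the right-hand side of the evolution equation for τ_p; in a simulation this rate would be integrated in time together with the continuity and momentum equations.

```python
import numpy as np

def polymer_stress_rate(tau_p, grad_v, lambda_1, eta_p):
    """d(tau_p)/dt for an Oldroyd B fluid.

    tau_p  : (dim, dim) polymeric stress tensor
    grad_v : (dim, dim) velocity gradient, grad_v[a, b] = d v^a / d x^b
    """
    d = 0.5 * (grad_v + grad_v.T)                # rate of deformation tensor
    return (grad_v @ tau_p + tau_p @ grad_v.T
            - tau_p / lambda_1
            + 2.0 * eta_p / lambda_1 * d)
```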
§ SPH EQUATIONS
In SPH, the domain is discretised into particles, and at a given particle, a local continuous field over its neighbouring particles is created through a kernel function. Following Fang et al. <cit.>, the SPH discretised form of Equations (<ref>), (<ref>) and (<ref>) may be written as;
dρ_i/dt=∑_j m_j (v^β_i - v^β_j)∂ W_ij/∂ x^β_i,
dv_i^α/dt = ∑_j m_j(σ_i^αβ/ρ_i^2+σ_j^αβ/ρ_j^2-Π_ijδ^αβ)∂ W_ij/∂ x_i^β + g^α,
τ_s,i^αβ = η_s(k_i^αβ+k_i^βα),
dτ_p,i^αβ/dt=k_i^αγτ_p,i^γβ+k_i^βγτ_p,i^γα-(1/λ_1)τ_p,i^αβ+(η_p/λ_1)(k_i^αβ+k_i^βα),
where
k_i^αβ=∂ v_i^α/∂ x^β=∑_j m_j/ρ_j(v_j^α-v_i^α)∂ W_ij/∂ x_i^β.
In Equation (<ref>), Π_ij is the artificial viscosity which is required to stabilise the computation in the presence of a shock or a sharp gradient. The following form of the artificial viscosity is used in the present study;
Π_ij=
(-γ_1c_ijμ_ij + γ_2μ^2_ij)/ρ_ij, for x_ij·v_ij < 0,
0, otherwise,
where, μ_ij= h(v_ij·x_ij)/(|x_ij|^2 + ϵ h^2); c_ij = (c_i + c_j)/2; ρ_ij = (ρ_i + ρ_j)/2; γ_1 and γ_2 are parameters which control the intensity of the artificial viscosity; ϵ is a small number to avoid singularity when two interacting particles (i and j) are close to each other; c_i and c_j are the wave propagation speeds evaluated at the i-th and j-th particles respectively; and v_ij = v_i- v_j and x_ij = x_i- x_j indicate the relative velocity and position of the i-j particle pair.
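For reference, the pairwise artificial viscosity term may be coded as in the following sketch; the defaults γ_1 = γ_2 = 0.5 and ϵ = 0.01 are example values only, not prescriptions.

```python
import numpy as np

def artificial_viscosity(x_i, x_j, v_i, v_j, rho_i, rho_j, c_i, c_j,
                         h, gamma_1=0.5, gamma_2=0.5, eps=0.01):
    """Monaghan-type artificial viscosity Pi_ij for an approaching particle pair."""
    x_ij = np.asarray(x_i) - np.asarray(x_j)
    v_ij = np.asarray(v_i) - np.asarray(v_j)
    vx = float(np.dot(v_ij, x_ij))
    if vx >= 0.0:                                  # particles separating: no dissipation
        return 0.0
    mu_ij = h * vx / (float(np.dot(x_ij, x_ij)) + eps * h**2)
    c_ij = 0.5 * (c_i + c_j)
    rho_ij = 0.5 * (rho_i + rho_j)
    return (-gamma_1 * c_ij * mu_ij + gamma_2 * mu_ij**2) / rho_ij
```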
§ DISPERSION ANALYSIS
From the dispersion relation, one can identify the wavelengths that correspond to Zero Energy Modes, from which the instabilities in the system arise. The exact and the SPH dispersion relations for an Oldroyd B fluid are derived in this section. These relations are later on used in Section <ref> to show how the approach outlined in this paper can prevent tensile instability.
§.§ The Exact Dispersion Analysis
First, the exact dispersion relation is derived for an Oldroyd B fluid. A 1D infinite expanse of fluid is considered, which is initially at rest. It is assumed that this 1D continuum has initial uniform stress σ̄=-P̄+τ̄_p. From Equation (<ref>) and Equation (<ref>), it can be understood that theoretically, a 1D continuum at rest cannot have non-zero values of τ_s, but can have non-zero values of τ_p. A perturbation is given to the initial state, and the resulting variables are
v=Ve^i(kx-ω t),
ρ = ρ̄+ δρ,
δρ = De^i(kx-ω t),
P = P̄+Mδρ,
M = c_0^2(ρ̄/ρ_0)^(γ-1),
τ_s = T_se^i(kx-ω t),
τ_p = τ̄_p + T_pe^i(kx-ω t),
where the initial state variables are denoted by a bar on the top. x is the spatial coordinate at the initial state. V, D, M, T_s and T_p are the amplitudes of the perturbations to v, ρ, P, τ_s and τ_p respectively. Substituting these perturbed variables in the continuity equation (Equation (<ref>)) yields
D = (ρ̄ k/ω)V.
The linear momentum conservation equation (Equation (<ref>)) upon perturbation becomes
ρ̄ω V = k(MD-T_s-θ T_p).
Upon substituting the perturbed variables from Equation (<ref>) in the equation for the solvent contribution, τ_s (Equation (<ref>)) and the polymer contribution τ_p (Equation (<ref>)) of the deviatoric stress, we obtain;
T_s = 2iη_s kV,
T_p = 2(τ̄_p+η_p/λ_1)/(1/λ_1-iω) ikV.
Upon using T_p from Equation (<ref>) in Equation (<ref>), an analytical expression for the dispersion relation cannot be obtained. Now, the exact dispersion relation is going to be used to validate the accuracy of the SPH dispersion relation for long wavelengths, i.e. k → 0. From ω = c k, we see that if k → 0, then ω→ 0. Now, λ_1 = 0.02 for the impact drop problem in Section <ref>, hence we can say, |iω| << |1/λ_1| for large wavelengths, and obtain a simplified equation for T_p;
T_p = 2ikV(τ̄_p+η_p/λ_1)λ_1.
Finally, upon substitution of Equations (<ref>), (<ref>) and (<ref>) in Equation (<ref>) we obtain a quadratic equation in ω as,
ρ̄ω^2+2i k^2 Z ω -Mρ̄k^2=0,
where Z = (η_s + θ (τ̄_p+η_p/λ_1)λ_1). Solving for ω we get
ω = -(k^2Z/ρ̄)i ±√(Mk^2-k^4Z^2/ρ̄^2).
So, we obtain ω in the form ω=Re(ω)+iIm(ω). Now, in the perturbation of the velocity, we get v=Ve^i(kx-Re(ω)t)e^Im(ω)t. From the harmonic component of the perturbation, we obtain the wave speed as
c=Re(ω)/k=√(M-k^2Z^2/ρ̄^2),
which is the exact dispersion relation for a 1D Oldroyd B continuum.
§.§ The SPH Dispersion Analysis
In this section, the SPH Dispersion relation is derived. A 1D infinite expanse of SPH particles with uniform spacing Δ p, at rest, is considered. Similar to the exact dispersion analysis, it is assumed that this 1D continuum has initial uniform stress σ̄=-P̄+τ̄_p. Now, a harmonic perturbation is given to these SPH particles. The perturbation in position and velocity of particle a is
x_a = x̄_a + δ x_a,
δ x_a = Xe^i(kx_a-ω t),
δ v_a = Ve^i(kx_a-ω t).
The perturbation in density, pressure and stresses are the same as in Equation (<ref>) with a subscript a, denoting the variable value at particle a. Here x̄_a denotes the initial position of particle a. The continuity equation (Equation (<ref>)) upon perturbation is
d(δρ_a)/dt=-∑_b ρΔ p(δ v_b - δ v_a)(∂ W_ab/∂x_a+∂^2 W_ab/∂x^2_a(δ x_a - δ x_b)),
where the summation is over particles b within the domain of a.
It is assumed that the 1D bar has a unit cross-sectional area, i.e. m=ρΔ p. Considering only the first-order terms and substituting for the perturbed variables, we obtain
D = ρΔ p V/ω∑_b sin kξ∂ W_ab/∂x_a
where ξ = x_a - x_b. The linear momentum conservation equation (Equation (<ref>)) reads
(dδ v_a)/dt=∑_b ρΔ p[σ+δσ_a/(ρ +δρ_a)^2+σ+δσ_b/(ρ +δρ_b)^2][∂ W_ab/∂x_a+∂^2 W_ab/∂x^2_a(δ x_a - δ x_b)].
Only keeping the first-order terms in Equation (<ref>) gives
(-iωδ v_a) =2σΔ p/ρ∑_b (δ x_a - δ x_b)∂^2 W_ab/∂x^2_a+Δ p/ρ∑_b δσ_b∂ W_ab/∂x_a - 2σΔ p/ρ^2∑_b δρ_b∂ W_ab/∂x_a
=[2σΔ p/ρX∑_b (1-cos kξ)∂^2 W_ab/∂x^2_a+Δ p/ρ(-MD+T_s+θ T_p)∑_b isin kξ∂ W_ab/∂x_a
-2σΔ pD/ρ^2∑_b isin kξ∂ W_ab/∂x_a]e^i(kx_a-ω t).
In the above equation, δσ_b=-δ P_b + δτ_s,b +θδτ_p,b is used. By substituting the perturbed variables in Equation (<ref>) and (<ref>) we arrive at
T_s = 2η_sΔ pVi∑_b sin kξ∂ W_ab/∂x_a,
T_p=2Δ p(τ̄_p+η_p/λ_1)/(1/λ_1-iω)Vi∑_b sin kξ∂ W_ab/∂x_a.
Now, similar to the discussion in the previous section, if T_p from Equation (<ref>) is used, an analytical expression of the dispersion relation may not be possible. The SPH dispersion relation is being derived to compare its accuracy with the exact dispersion relation for long wavelength modes and also to investigate the tensile instability for short wavelength modes. As already discussed in Section <ref>, the approximation |iω| << |1/λ_1| can be used for long wavelength modes.
Now, the shortest wavelength is λ=2Δ p, which gives us ∑_b sin kξ∂ W_ab/∂x_a=0. Hence from Equation (<ref>) we arrive at ω = √(-2σ̄Δ p B/ρ̄), where B=∑_b (1-cos kξ)∂^2 W_ab/∂x^2_a. Without any loss of generalisation, τ̄_p is ignored, i.e. σ̄=-P̄. Now, for a range of density ratios (ρ̄/ρ_0) from 0.95 - 1.05, and for a range of h/Δ p from 0.8 - 2, from ω = √(-2σ̄Δ p B/ρ̄) we calculate that the magnitude of ω is at least one order of magnitude less than 1/λ_1. Hence for short wavelengths as well, the approximation |iω| << |1/λ_1| can be used. Hence we obtain the modified equation for T_p as;
T_p=2Δ p (τ̄_p+η_p/λ_1) λ_1 Vi∑_b sin kξ∂ W_ab/∂x_a.
Substituting Equations (<ref>), (<ref>) and (<ref>) in Equation (<ref>) we obtain a quadratic equation in ω;
ρω^2+2iA^2(Δ p)^2Zω-Mρ(Δ p)^2A^2 - 2σ(Δ p)^2A^2 + 2σΔ pB=0,
where A=∑_b sin kξ∂ W_ab/∂x_a. Solving the quadratic equation gives
ω=-ZΔ p^2A^2i/ρ±√(-Z^2Δ p^4A^4/ρ^2-2σΔ pB/ρ+A^2Δ p^2(M+2σ/ρ)).
Hence, the wave speed is obtained as
c=1/k√(-Z^2Δ p^4A^4/ρ^2-2σΔ pB/ρ+A^2Δ p^2(M+2σ/ρ)).
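The relation above is convenient to scan numerically: a wave number k at which the real part of c vanishes signals a zero-energy mode. The Python sketch below evaluates the SPH wave speed on a uniform 1D lattice; the radial kernel derivatives dW and d2W are assumed to be supplied by the caller, and the remaining arguments follow the symbols of the text (initial density, M = dP/dρ, initial stress and the combined viscous term Z).

```python
import numpy as np

def sph_wave_speed(k, dW, d2W, dp, h, rho_bar, M, sigma_bar, Z):
    """SPH wave speed c(k) for a uniform 1D particle lattice.

    dW(r), d2W(r): first and second radial derivatives of the 1D kernel.
    Returns a complex number; a (numerically) zero real part flags a
    zero-energy mode at this wave number.
    """
    n_side = int(np.floor(2.0 * h / dp))           # neighbours inside the 2h support
    r = dp * np.arange(1, n_side + 1)
    A = 2.0 * np.sum(np.sin(k * r) * dW(r))        # A = sum_b sin(k xi) dW_ab/dx_a
    B = 2.0 * np.sum((1.0 - np.cos(k * r)) * d2W(r))
    disc = (-Z**2 * dp**4 * A**4 / rho_bar**2
            - 2.0 * sigma_bar * dp * B / rho_bar
            + A**2 * dp**2 * (M + 2.0 * sigma_bar / rho_bar))
    return np.sqrt(disc + 0j) / k
```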
§ ADAPTIVE ALGORITHM FOR STABLE SPH COMPUTATION
As mentioned in the Introduction, Swegle's stability analysis <cit.> constitutes the premise of the adaptive algorithm developed in this work. Herein, the shape of the kernel at a given particle location is continuously modified, such that the condition which may cause instability does not arise. However, while doing so, it is also important to ensure that the adaptive exercise does not become computationally intensive. To this end, a B-spline basis function defined over a set of variable knots is considered as the kernel. The advantage of a B-Spline basis function is that the shape of the kernel can be modified by changing the position of the knots. The algorithm and its implementation steps are discussed in this section.
First, the B-Spline basis function for a variable knot vector is presented in Section <ref>. Using this basis function as the kernel, it is shown how the adaptive algorithm works in Section <ref>. In Section <ref>, it is shown how the farthest immediate neighbour is estimated. Finally, in Section <ref>, the 1D dispersion relation for the Oldroyd B material is plotted to show how the zero energy modes can be eliminated.
§.§ B-Spline Basis Function as Kernel
We use the deBoor, Cox and Mansfield recurrence formula (<cit.>) to define the B-Spline basis functions. Let Ξ={ζ_1,ζ_2, ζ_3,...,ζ_m | ζ_I∈ℝ} be a non-decreasing sequence of real numbers called as the knot vector with ζ_I being the position of the I-th knot. The I-th B-Spline basis function of P-th degree denoted by N_I,P(ζ) is defined as;
N_I,0(ζ)=
1, if ζ_I≤ζ<ζ_I+1,
0, otherwise,
N_I,P(ζ) =(ζ-ζ_I)/(ζ_I+P-ζ_I) N_I,P-1(ζ)+(ζ_I+P+1-ζ)/(ζ_I+P+1-ζ_I+1) N_I+1,P-1(ζ).
The local support property of the B-spline basis function gives N_I,P(ζ)>0 ∀ ζ ∈ [ζ_I,ζ_I+P+1). The shape of N_I,P(ζ), within its support [ζ_I,ζ_I+P+1), can be modified by changing the position of intermediate knots {ζ_I+1,...ζ_I+P}. The support of N_I,P(ζ) can be changed by changing the positions of the extreme knots {ζ_I,ζ_I+P+1}. Herein, we take a symmetric knot vector Ξ={-b,-a,0,a,b} and the basis function N_0,3 to construct a symmetric cubic spline kernel. The resulting kernel we get is;
W(q,h)=α_c ×
((a+b)q^3-3abq^2+a^2b^2)/(a^2b(a+b)), if 0≤ q<a
(b-q)^3/(b(b^2-a^2)), if a≤ q<b
0, if b≤ q
where α_c is obtained from the normalising condition for the kernel, i.e. ∫_Ω W(x-x',h)dx'=1. α_c=2/(bh) for 1D and α_c=10(a+b)/(π b(a^2+ab+b^2)h^2) for 2D. As shown in Figure <ref>, changing the position of the knots results in a change in the shape of the kernel, which is the basis of the adaptive algorithm, as explained in the next section.
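A direct vectorised transcription of this kernel is sketched below; setting a = 1 and b = 2 should recover the standard cubic B-spline, while other knot values reshape the kernel as required by the adaptive algorithm. The function name and interface are our own choices.

```python
import numpy as np

def adaptive_bspline_kernel(q, h, a=1.0, b=2.0, dim=2):
    """Cubic B-spline kernel built on the knot vector {-b, -a, 0, a, b}."""
    if dim == 1:
        alpha_c = 2.0 / (b * h)
    elif dim == 2:
        alpha_c = 10.0 * (a + b) / (np.pi * b * (a**2 + a * b + b**2) * h**2)
    else:
        raise ValueError("only the 1D and 2D normalisations are given here")

    q = np.atleast_1d(np.asarray(q, dtype=float))
    W = np.zeros_like(q)
    inner = q < a
    outer = (q >= a) & (q < b)
    W[inner] = ((a + b) * q[inner]**3 - 3.0 * a * b * q[inner]**2
                + a**2 * b**2) / (a**2 * b * (a + b))
    W[outer] = (b - q[outer])**3 / (b * (b**2 - a**2))
    return alpha_c * W
```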
§.§ Adaptive Algorithm
In a 1D stability analysis, Swegle had shown that to remove tensile instability, W^” at the nearest neighbour should be less than zero for a state of tension and greater than zero for a state of compression.
However, from most of the studies (<cit.>,<cit.>,<cit.>,<cit.>), one is led to understand that the instability in tension is more prominent and can severely pollute the solution. In <cit.>, the author provided an artificial pressure primarily when the material was under negative pressure (i.e., tension), and in <cit.>, the authors provided an artificial stress only along the principal direction in tension. In <cit.>, and <cit.>, the authors provided a background pressure to ensure the pressure of the entire domain is positive at all times. In the simulations performed in this work too, it is shown that satisfying Swegle's condition for tension is sufficient to prevent instability.
Though the proposed adaptive algorithm is applicable for any quasi-uniform particle distribution, for a better comprehension, the steps involved in the method are demonstrated through a particle arrangement following a rectangular grid as shown in Figure <ref>. The smoothing length h is taken as 2Δ p, where Δ p is the particle spacing. The influence domain of a particle, say i-th particle with position x_i, is defined as ℕ^i = { j ∈ℤ^+ | ||x_i-x_j|| < bh and i ≠ j }, with b being the cutoff of the kernel as defined in Equation (<ref>). Let ℕ̂^i⊆ℕ^i be the set of immediate neighbours. For the given particle arrangement in Figure <ref>, the immediate neighbours are highlighted in red. For simplicity, we are going to assume that tensile stress acts along the x axis and compressive stress along the y axis. To prevent the tensile instability from arising at the i-th particle, we have to ensure that, in the direction of tension, W^”_ij<0 ∀ j ∈ℕ̂^i. Essentially, we have to track the farthest immediate neighbour and ensure that W^”<0 at that position. The approach adopted in this work is described next.
§.§.§ a-adaptive
For a cubic spline kernel (Equation (<ref>)) with smoothing length h, the position of the extremum of W^' is at (ab/(a+b))h. Let r_i=max_j ∈ℕ̂^i{∥x_i-x_j ∥} be the distance of the farthest immediate neighbour (say j) from particle i. For the extremum of W^' to be positioned at j, the value of knot a should be: a=br_i/(bh-r_i). Now, if the position of the extremum of W^' is slightly beyond j, then the condition W^”<0 will be satisfied at all immediate neighbours. Hence the value of knot a should be such that:
a = br^*/(bh-r^*), where r^* = Ar_i.
In Equation <ref>, A>1 is a multiplying constant which ensures that the stable zone of the kernel always covers the farthest immediate neighbour. Equation (<ref>), with a = 1 and b=2, reproduces the commonly used Cubic B-spline kernel in the literature. In the present study, we also take b=2 unless large tensile strains occur, which is discussed in the next sub-section. The intermediate knot a ∈ (0,b) is adjusted according to Equation <ref>. It is to be noted that A in Equation <ref> does not require any tuning or calibration. The sole purpose of taking a value of A greater than 1 is to ensure that the extremum of W^' is always slightly ahead of the farthest immediate neighbour and thereby Swegle's criterion for preventing tensile instability is effectively satisfied. It is observed in the simulations of this paper that values of A from 1.05 to 1.1 serve the purpose. The concept is demonstrated in Figure <ref>, where it can be observed how the extremum of W^' is always slightly ahead of the farthest immediate neighbour when a is estimated from Equation (<ref>).
§.§.§ ab-adaptive
Now, consider a situation where the neighbourhood of a particle is under continuous tension. This causes the farthest immediate neighbour to continuously move away from the centre particle, the i-th particle in this case. As r_i increases, the intermediate knot a is also increased as per Equation <ref>. When r^* ≈ h or r_i ≈ 0.95h (for A=1.05), Equation <ref> yields a ≈ 2, which is also the value of b. This is the limiting situation beyond which the further shifting of a is not possible as long as b is fixed at 2. Now suppose the farthest immediate neighbour further moves away due to continued tension. However, since a has already reached its limiting value (i.e., b), the kernel shape cannot be further adjusted through a. This may cause the farthest immediate neighbour to cross the extremum of W^' and leave the stable zone of the kernel. In such a situation, to prohibit the instability from occurring, both the values of a and b are allowed to increase such that the position of the extremum can be shifted along with the farthest immediate neighbour.
Hence, if a reaches a value close to 2, the following algorithm is used: if a>1.95
b = 2.05×r^*/h [from Equation <ref> with a = 0.95b],
a = 0.95b.
Increasing the value of a and b both allows the extremum of W^' to shift along with r_i (when r_i > h), as can be seen in Figure <ref>. However, naively letting a and b increase with r_i poses some problems. An increase in b results in an increase of the support domain, thereby allowing more particles to interact with particle i. This not only results in an increased computational time but also leads to an artificial smoothening of results. But, a more serious drawback is that the tensile instability might not be eliminated. Swegle's condition says that in the case of tension, a positive value of the second derivative of the kernel contributes towards instability. Suppose the support domain is allowed to increase with increasing b. In that case, it can be understood from Figure <ref> that the particles in between the regions of radius 2h and bh will have positive values of W^”. This will, in fact, result in tensile instability. Therefore, in our approach, the kernel is truncated with the support domain having a constant radius of 2h, as can be seen from Figure <ref> and Figure <ref>. Figure <ref> shows the 1D kernel for a situation with h=1 and r_i=1.75. From Equation (<ref>) for A=1.05 we obtain a=3.58 and b=3.77. Figure <ref> shows the kernel with a support domain of radius bh and also the truncated kernel whose support domain is of radius 2h. The truncated kernel is again shown in Figure <ref> where it has been normalized such that ∫_Ω W(x-x',h)dx'=1. Figure <ref> shows the 1-st derivative of the kernel with support domain of radius bh and also the truncated 1-st derivative. Figure <ref> shows the 1-st derivative of the normalized truncated kernel. Truncation of the kernel may cause inconsistency in the approximation. To ensure consistency, gradient correction is used where the first derivative of the kernel function is modified as ∂ W^C_ij/∂ x^α=𝐌^αβ∂ W_ij/∂ x^β, where 𝐌 is a symmetric re-normalisation matrix obtained as 𝐌^-1_i = -∑_j∈ N^im_j/ρ_jx_ij⊗▽ W_ij.
Figure <ref> presents a flow chart of the algorithm to estimate the knot values of a and b as discussed in the previous paragraphs. It is shown in the flowchart that for particle i, if a_i>1.95, one might choose to extend a_i and b_i beyond 2 or one might assign a_i=1.95 and b_i=2. Of the two numerical simulations performed, in the Impacting drop problem in Section <ref>, it was required to increase the values of a and b beyond 2 to prevent instability. But for the rotation of the fluid patch problem in Section <ref>, the same was not required to prevent instability, as discussed in Section <ref>.
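The a-adaptive and ab-adaptive branches described above reduce to a short per-particle update, sketched below; the thresholds 1.95 and 2.05 and the default A = 1.05 are the values quoted in the text, and the optional choice of capping the knots at a = 1.95, b = 2 (as in the flowchart) is indicated in a comment. For h = 1 and r_i = 1.75 this should reproduce a ≈ 3.58 and b ≈ 3.77, the values quoted above.

```python
def update_knots(r_i, h, A=1.05, allow_extension=True):
    """Return the knot values (a, b) for a particle whose farthest
    immediate neighbour lies at distance r_i."""
    r_star = A * r_i
    b = 2.0
    # a-adaptive: place the extremum of W' slightly beyond the farthest neighbour
    a = b * r_star / (b * h - r_star) if r_star < b * h else b
    if a > 1.95:
        if allow_extension:
            # ab-adaptive: shift both knots so the extremum follows r_i
            b = 2.05 * r_star / h
            a = 0.95 * b
        else:
            # alternative branch of the flowchart: cap the knots
            a, b = 1.95, 2.0
    return a, b
```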
§.§.§ Instability in compression and Hyperbolic kernel
It is of relevance to mention the work by <cit.> wherein the authors used a hyperbolic kernel whose second derivative is non-negative over the full support and argued that the hyperbolic kernel can prevent the instability from occurring for positive pressure. This was demonstrated through an example of viscous liquid drops. Though the simulation performed in <cit.> is beyond the scope of the present work, it can be shown how in a situation of positive pressure, the kernel proposed in this work can be adapted to ensure that the second derivative is non-negative for all the particles within the support.
In a 1D setting, as shown in Figure <ref>, for values of a ≤ 0, the second derivative is non-negative everywhere in the support domain.
§.§ Estimation of the farthest immediate neighbour
One of the most vital steps in the algorithm is the estimation of r_i, i.e. the distance of the farthest immediate neighbour. A possible method is to perform an inverse mapping of points in the candidate list and, in the mapped space, estimate the convex hull to get the nearest neighbours (<cit.>). This exercise may make the adaptive algorithm computationally intensive, especially in problems involving a large number of particles. Development of a computationally efficient and easy to implement technique to determine r_i requires further exploration. Nevertheless, to not deviate from the main focus of this work, which is to test the applicability of our approach in tackling tensile instability, a simpler process to estimate r_i based on strain is adopted here. This is found to work well, at least for the problems considered in the present study. The strategy is described below.
The measure of strain at a point should provide information about the position of points in its neighbourhood. Consider a 2D initially rectangular grid of particles with spacing Δ X^0 and Δ Y^0 in the x and y directions, respectively. At any subsequent time step t, the line segment Δ X^0 changes its length to Δ X^t_i = Δ X^0 + ∑_tϵ̇_i,t^xxΔ X^t-1_i dt, and the relative displacement of one end of the line segment with respect to the other is ∑_tϵ̇_i,t^xyΔ X^t-1_i dt. Similarly, the length of the line segment Δ Y^0 changes to Δ Y^t_i = Δ Y^0 + ∑_tϵ̇_i,t^yyΔ Y^t-1_i dt, and the relative displacement of one end of the line segment with respect to the other is ∑_tϵ̇_i,t^yxΔ Y^t-1_i dt, where, ϵ̇_i,t^xx, ϵ̇_i,t^yy and ϵ̇_i,t^xy are the components of strain rate at particle i at the t-th time step. It is to be noted that because SPH is an updated lagrangian method, the strain rates at the t-th step are calculated with respect to the configuration at time t-1 as,
ϵ̇_i,t^xx =∑_j m_j/ρ^t-1_j(u^t-1_j-u^t-1_i)∂ W_ij/∂ x_i,
ϵ̇_i,t^yy =∑_j m_j/ρ^t-1_j(v^t-1_j-v^t-1_i)∂ W_ij/∂ y_i,
ϵ̇_i,t^xy =1/2∑_j m_j/ρ^t-1_j[ (v^t-1_j-v^t-1_i)∂ W_ij/∂ x_i + (u^t-1_j-u^t-1_i)∂ W_ij/∂ y_i].
Now, the rectangle of sides Δ X^0 and Δ Y^0 becomes a rhombus whose diagonals (S_1 and S_2) may be determined as,
S_1 = √((Δ X^t _i + ∑ϵ̇_i,t^xyΔ Y^t-1 dt)^2 + (Δ Y^t_i + ∑ϵ̇_i,t^yxΔ X^t-1 dt)^2)
S_2 = √((Δ X^t_i - ∑ϵ̇_i,t^xyΔ Y^t-1 dt)^2 + (Δ Y^t_i - ∑ϵ̇_i,t^yxΔ X^t-1 dt)^2).
The farthest immediate neighbour of the i-th particle at time step t is considered to be at a distance max(S_1, S_2) away.
For inviscid fluids, as in the case of the second example (Section <ref>), the contribution of shear strain is ignored, and the distance of the farthest immediate neighbour is determined as,
S = √((Δ X^t _i)^2 + (Δ Y^t_i)^2).
To verify that the above approach calculates r_i reasonably well, consider Figure <ref>, which shows two views of the impacting drop problem from Section <ref>. The red circle shows the SPH horizon, and the black circle has a radius r^*. r_i is calculated using Equation <ref> and r^*=Ar_i, where A is taken as 1.05. Figure <ref> shows how the r_i obtained is able to track the farthest immediate neighbour with reasonable accuracy.
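In code, the strain-based estimate described above amounts to tracking the deformed segment lengths and the accumulated shear terms per particle, as in the sketch below (the bookkeeping of the running sums is assumed to be done by the caller; names are ours). Setting shear=False reproduces the simplification used for the inviscid case.

```python
import numpy as np

def farthest_neighbour_distance(dX, dY, shear_xy=0.0, shear_yx=0.0, shear=True):
    """Estimate r_i from local strains.

    dX, dY             : current lengths of the deformed grid segments
    shear_xy, shear_yx : accumulated terms  sum_t eps_xy*dY*dt  and  sum_t eps_yx*dX*dt
    """
    if not shear:
        return float(np.hypot(dX, dY))
    S1 = np.hypot(dX + shear_xy, dY + shear_yx)
    S2 = np.hypot(dX - shear_xy, dY - shear_yx)
    return float(max(S1, S2))
```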
§.§ Plot of 1D Dispersion Relation
In this section, by plotting the dispersion relations, we show how the adaptive algorithm helps in alleviating the tensile instability. Towards this, a 1D bar is considered, and perturbations are provided to obtain the wave speeds for the exact Oldroyd B material (Equation <ref>) and for the SPH approximations (Equation <ref>) in Section <ref>. For the 1D SPH bar, the horizon size is kept constant at 2 units. Three different particle spacings are considered; 1.5, 2.5 and 3.5 units. ρ/ρ_0 is taken as 0.99, as the stiff equation of state (Equation <ref>) controls the density variation to 1%. _p is taken as 2000 Pa (the maximum _p obtained in the Impact drop simulation (Section <ref>) is around 800 Pa). In Figure <ref>, <ref> and <ref>, are plotted the exact dispersion relation and the SPH dispersion relation with the standard cubic B-spline kernel for the three particle spacings. The wave numbers for which Re(ω)=0 represent instability, i.e. zero energy modes. Now, if the adaptive approach proposed in Section <ref> is used, then the instabilities are eliminated. Figure <ref>, <ref>, and <ref> show the SPH dispersion relation using the adaptive algorithm, with A=1.05. For spacing of 1.5 units, the value of a is 1.3, and of b is 2. For spacing of 2.5 and 3.5, the values of a and b need to be greater than 2 to prevent instability. For spacing of 2.5, a = 2.56, b = 2.69; and for a spacing of 3.5, a = 3.58 and b = 3.77. From Figure <ref>, it can be seen that for long wavelength modes, i.e. k → 0, there is a good accuracy between the exact and the SPH dispersion relations.
§ NUMERICAL SIMULATIONS
In the literature, the tensile instability of SPH in the simulation of weakly compressible fluids is discussed through several examples. Among them, two benchmark problems, viz., a liquid drop impacting a rigid surface and the rotation of a fluid patch, are taken in this section to demonstrate the proposed algorithm's efficacy in alleviating tensile instability. For the first example, i.e., the impacting liquid drop problem, a no-slip boundary condition needs to be enforced between the liquid drop and the rigid surface. The approach adopted for the no-slip boundary condition is discussed in the following sub-section.
After obtaining the values of a and b for all the particles using the algorithm described in Section <ref>, two different approaches are adopted in this study to solve Equations (<ref>)-(<ref>).
The shape of the kernel centred at a given particle, say i-th, depends on the value of a_i and b_i, which in turn depend on the value of r^*. Since particles in a domain will have different values of r^*, the knot positions and, consequently, the shape of the kernel will not be the same for all the particles. If differently shaped kernels at particles i and j are used for estimating the interaction between the particle pair, energy conservation will be violated. So, in one approach, the interaction between any particle pair i-j is performed through a kernel function constructed with average values of a_ij=(a_i+a_j)/2 and b_ij=(b_i+b_j)/2, which ensures that the conservativeness of the method is restored. However, consider another particle k in the vicinity of i. a_ik and b_ik will, in general, be different from a_ij and b_ij, which makes the kernel centred at i, lose its symmetry.
Hence in the other approach, to ensure that the kernel at any particle i is symmetric, Equations (<ref>)-(<ref>) at particle i are solved by constructing the kernel from a_i and b_i, instead of considering averaged knot values. Though there will be a non-conservation of energy, it is quite small, as shown in the numerical simulations. The results from the numerical simulations show a minor difference between the two approaches.
§.§ No-slip boundary Conditions
Implementing boundary conditions in SPH sometimes poses problems because of particle deficiency at or near the boundaries. In the 1-st numerical simulation in Section <ref>, the boundary to be simulated is a solid wall. Following the approach of <cit.>, two types of boundary particles are considered, wall boundary particles and dummy particles. The wall boundary particles are placed along the solid wall, as shown in Figure <ref>. The dummy particles are arranged in a grid just outside the solid wall, and they fill a domain with a depth of at least 2h. Because this is a fixed wall, the wall boundary particles are fixed in position, and the pressure of each of these particles (i) is calculated as
P_i=∑_j(m_j/ρ_j)P_jW_ij/∑_j(m_j/ρ_j)W_ij, where j is the index of liquid particles in the support domain of i. The dummy particles are also stationary, and the pressure of each particle is set as the pressure of the closest wall particle. Instead of providing repulsive forces to the fluid particles (<cit.>, <cit.>), the wall boundary and the dummy particles interact with the fluid particles by contributing to the expressions of continuity (Equation (<ref>)) and the conservation of linear momentum (Equation (<ref>)). A combination of these wall boundary and dummy particles solves the problem of particle deficiency of the fluid particles near the boundaries and also enforces a no-slip boundary condition.
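The interpolation of the wall pressure is a standard Shepard sum; a minimal sketch is given below, assuming the input arrays gather the fluid particles j lying inside the support of the wall particle.

```python
import numpy as np

def wall_particle_pressure(P_fluid, m_fluid, rho_fluid, W_ij):
    """Pressure of a wall boundary particle from the neighbouring fluid particles."""
    w = (np.asarray(m_fluid) / np.asarray(rho_fluid)) * np.asarray(W_ij)
    denom = np.sum(w)
    return float(np.sum(w * np.asarray(P_fluid)) / denom) if denom > 0.0 else 0.0
```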
§.§ Example 1: Impacting Liquid Drop
As the first example, the numerical simulation of a visco-elastic drop impacting a rigid surface is investigated. In 2D, a disc of initial radius R = 1 cm is dropped onto a rigid surface from a height of 4 cm. The drop is given an initial downward speed of V = 1 m/s. The acceleration due to gravity is taken as -9.81 m/s^2. For the discretisation of the drop, a rectangular grid of particles with spacing 0.02 cm in both the x and y directions is considered, and then a circle of radius 1 cm is drawn. The particles within and on the periphery of the circle are retained, and the rest are deleted. This gives a total of 7957 particles. The particles within the circle have volume 4× 10^-8 m^3, while for the particles on the periphery of the circle, volumes are distributed such that the total volume of the drop is π× 0.01^2 m^3. The impacting surface is modelled as a solid wall, and no-slip boundary conditions are implemented, as explained in Section <ref>. The speed of sound c_0 in the equation of state (Equation <ref>) is taken as 12.5 m/s <cit.>. The smoothing length h is taken as 2Δ p, where Δ p is the initial interparticle spacing (0.02 cm in this case). A time step of 4× 10^-6 s is considered for numerical stability. It will be shown subsequently that the artificial viscosity alone cannot prevent the tensile instability from occurring. Nevertheless, as mentioned in <cit.> and <cit.>, without the artificial viscosity, the solution may blow up and diverge. Thus some amount of artificial viscosity with parameters γ_1=0.5 and γ_2=0.5 is used in all the simulations. Results are plotted against a non-dimensional time T=tV/(2R). In the following simulations, averaged knot values a_ij and b_ij are considered when estimating the interaction between a particle pair i-j.
First, the simulation is performed for a Newtonian fluid with θ = 0 and η_s = 4.0 Pa s. As reported in <cit.>, our simulations also show that the Newtonian drop does not exhibit any tensile instability. Hence, even though the adaptive kernel is used in the simulation, it is not required that a be increased beyond 1.95 once it reaches that value. The plot of the time history of the width of the fluid drop is presented in Figure <ref>, where the result is compared with the SPH simulations of <cit.> and the FDM simulations of <cit.>. It can be seen that the present SPH results match the results from the literature, and particularly, there is a decent agreement with the FDM result. It can be understood that after impact, the Newtonian drop spreads uniformly, and it continues to spread with time. Because the tensile instability is not visibly evident even in standard SPH, the profiles of the drop at different times are not shown.
Next, the simulation is performed for an Oldroyd B fluid. In this case, the solvent viscosity is η_s = 0.4 Pa s, the polymer contribution to the dynamic viscosity is η_p = 3.6 Pa s, and the relaxation time of the fluid is λ_1 = 0.02 s. Figure <ref> shows the drop profiles at two different times (T=1.94 and T=2.29) for three simulations, one with the standard SPH and two with the adaptive approach presented here. Figures <ref> and <ref> show the drop profiles when the standard B spline kernel is used. The tensile instability in the solution is clearly evident. As mentioned previously, even though artificial viscosity with γ_1 = 0.5 and γ_2 = 0.5 is used in these simulations, tensile instability prevails. In the next set of simulations, the adaptive kernel proposed in this work is used, and the value of knot a is obtained from Equation <ref>. However, once the value of a has reached 1.95, a and b are not modified according to Equation <ref>. Hence the maximum value that a can take in this simulation is 1.95. The drop profiles are shown in Figures <ref> and <ref>. Though the instability is prevented at the initial stages (Figure <ref>), at later stages of the simulation, the instability appears (Figure <ref>). This is because at the later stages of the simulation, r_i becomes greater than h, but the values of a and b are not increased in order that the extremum of W^' shifts along with r_i. Hence, for the last simulation, the adaptive algorithm is again used, but now once a exceeds the value of 1.95, a and b are modified according to Equation <ref>.
It can be seen from Figures <ref> and <ref> how the problem of tensile instability is completely alleviated in this simulation.
For the simulation with the adaptive approach, where a and b are modified according to Equation <ref>, the drop profiles for the Oldroyd B drop for different times are shown in Figure <ref>. It can be observed how the instability is prevented from occurring at all times. In Figure <ref>, the evolution of the width of the drop with time is shown, and the result is compared with that given in the literature (<cit.>). The present SPH results are comparable with the results from the literature, and furthermore, there is a decent agreement with the FDM result in <cit.>. The spatial distribution of a and b at different time instants are shown in Figure <ref>. Finally, a simulation is performed without the averaging of knot values between an interacting particle pair i-j to compare with the previous simulations where the averaging has been considered. From the evolution of the drop width shown in Figure <ref>, it can be seen that there is a marginal difference in the results of the two approaches.
§.§ Example 2: Rotation of a fluid patch
In this section, the example of a rotating, initially square patch (side length L) of fluid is considered. This test is considered a benchmark for demonstrating instability in particle-based methods (<cit.>). Figure <ref> shows the initial configuration of the fluid patch, which is given an initial velocity field as,
u(x,y,t=0) = +ω y; v(x,y,t=0) = -ω x,
where ω is the angular velocity. The initial pressure field consistent with the initial velocity field (Equation <ref>) is given by,
P_0(x,y) = ρ∑_m^∞∑_n^∞ [-32ω^2/(mnπ^2)]/[(nπ/L)^2+(mπ/L)^2] sin(mπ x^*/L)sin(nπ y^*/L), m,n ∈ℕ^odd,
where x^*=x+L/2 and y^*=y+L/2. The initial pressure is calculated from Equation <ref> and applied to the fluid patch as shown in Figure <ref>. In the literature, the simulation of this problem is performed without any gradient correction for consistency, and therefore, here also, the gradient correction is not implemented.
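A truncated evaluation of this double series is sketched below; the truncation level n_terms is an assumption of the sketch and should be increased until the pressure field is converged.

```python
import numpy as np

def initial_patch_pressure(x, y, L, omega, rho, n_terms=25):
    """Series solution P_0(x, y) for the rotating square patch (odd m, n only)."""
    xs, ys = x + L / 2.0, y + L / 2.0
    P = 0.0
    for m in range(1, 2 * n_terms, 2):
        for n in range(1, 2 * n_terms, 2):
            coeff = -32.0 * omega**2 / (m * n * np.pi**2)
            denom = (n * np.pi / L)**2 + (m * np.pi / L)**2
            P += (coeff / denom
                  * np.sin(m * np.pi * xs / L) * np.sin(n * np.pi * ys / L))
    return rho * P
```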
The value of c_0 is taken as 7ω L <cit.>. The density is periodically reinitialised as is done in <cit.>. At every 20^th time step, the density is recalculated as,
ρ_i=∑_jm_j W_j^MLS(x_i),
where the moving least square kernel W_j^MLS is calculated through the following equations;
W_j^MLS(x_i)=[β_0(x_i)+β_1(x_i)(x_i-x_j)+β_2(x_i)(y_i-y_j)]W_ij,
β(x_i)=[ β_0; β_1; β_2 ]=A^-1(x_i)[ 1; 0; 0 ],
A(x_i)=∑_jW_j(x_i)Ã_ij,
Ã_ij=[ 1 (x_i-x_j) (y_i-y_j); (x_i-x_j) (x_i-x_j)^2 (y_i-y_j)(x_i-x_j); (y_i-y_j) (y_i-y_j)(x_i-x_j) (y_i-y_j)^2 ].
The advantage of using density reinitialisation is that a more regular pressure distribution can be obtained.
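The MLS reinitialisation can be written per particle as in the following sketch; it assumes a precomputed kernel matrix W with W[i, j] = W_ij (zero outside the support) and uses a direct solve for β, which could be replaced by a pseudo-inverse where A(x_i) is ill-conditioned.

```python
import numpy as np

def mls_density_reinit(x, y, m, W):
    """Reinitialised densities rho_i = sum_j m_j W^MLS_j(x_i)."""
    n = len(x)
    rho = np.zeros(n)
    for i in range(n):
        nbr = np.nonzero(W[i])[0]                         # particles in the support of i
        dx, dy = x[i] - x[nbr], y[i] - y[nbr]
        P = np.stack([np.ones_like(dx), dx, dy], axis=1)  # rows [1, x_i-x_j, y_i-y_j]
        A = np.einsum('j,ja,jb->ab', W[i, nbr], P, P)     # A(x_i) = sum_j W_ij A~_ij
        beta = np.linalg.solve(A, np.array([1.0, 0.0, 0.0]))
        W_mls = (P @ beta) * W[i, nbr]
        rho[i] = np.sum(m[nbr] * W_mls)
    return rho
```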
In the following simulations, the averaging of the knot values between a pair of interacting particles is not considered when solving Equations (<ref>)-(<ref>). The number of SPH particles considered is 90,000. The smoothing length h is taken as 2Δ p, where Δ p is the initial interparticle spacing. First, the simulations are performed with standard SPH with artificial viscosity coefficients γ_1 = 0.8 and γ_2 = 0.8. Because of the -ve pressures in the fluid patch (Figure <ref>), tensile instability occurs in standard SPH. This results in unphysical fragmentation, as can be seen from the deformed shapes in Figure <ref>. Even for higher values of γ_1 and γ_2, the instability could not be prevented. Next, the simulations are performed using the adaptive kernel approach proposed in this work. As mentioned in Section <ref>, in this case, r_i is calculated via Equation <ref> where the contribution of shear strains is not considered. Furthermore, in this simulation, the maximum value of a in the unstable zone (i.e., the central part of the patch) is found to be always less than 1.95, and therefore the a-adaptive approach is sufficient to tackle the tensile instability. The value of a reaches close to 1.95 only near the legs, but the pressure in these regions is close to zero; hence no tensile instability is observed in the legs.
In this simulation, the adaptive kernel approach alone could not prevent instability; artificial viscosity is needed in conjunction with the adaptive approach in controlling the tensile instability. It is documented in <cit.> that artificial viscosity cannot eliminate the instability, but it can reduce its growth rate. In this problem, we can observe how it aids the adaptive approach in eliminating instability, but artificial viscosity alone cannot eliminate it. The minimum values of the artificial viscosity parameters required for the simulation are γ_1 = 0.8 and γ_2 = 0.8. The deformed shapes of the rotating patch at times tω = 2 and tω = 4 are shown in Figure <ref>. The deformed shapes are compared with the numerical solutions obtained by the Boundary Element Method (<cit.>, <cit.>). Though the tensile instability is prevented, slight deviations of the deformed shapes with respect to the reference solution can be observed. The reason for this is the comparatively high values of γ_1 and γ_2 adopted. The tensile instability occurs in regions of high negative pressure; hence it is in those regions where the high artificial viscosity is needed. In the regions where the pressure is positive or where the magnitude of negative pressure is low, the values of γ_1 and γ_2 can be reduced. Hence in the next set of simulations, γ_1 and γ_2 are varied according to the distribution of pressure.
At the beginning of the simulation, the artificial viscosity parameters are set such that the points with the maximum tensile pressure are assigned (γ_1, γ_2)=(0.8,0.8), and the points where the pressure is compressive the (γ_1, γ_2) are reduced to (0.1,0.1). For points with a pressure between zero and the maximum tensile pressure, the (γ_1, γ_2) is linearly varied in between (0.1,0.1) and (0.8,0.8). Using this approach, we can see from Figure <ref> that there is a reasonably good agreement with the reference solutions.
Figure <ref> shows the plot of the pressure at the centre of the patch, and it is compared with the BEM solution, and the SPH solution from <cit.>. Similar to the result of <cit.>, the pressure has an oscillation. The mean of the pressure distribution starts from a slightly higher value as compared to the BEM result. Nevertheless, the result obtained has a considerable agreement with the reference solutions. As the averaging of knot values has not been considered, it should lead to a violation of the conservation of energy. In Figure <ref>, the total energy is plotted against time. The deviation in total energy is found to be less than 0.5%. As the final simulation, the averaging of the knot values between an interacting particle pair is considered. The deformed shapes are plotted in Figure <ref>, and a comparison with Figure <ref> shows that there is a negligible difference between the two approaches. Figure <ref> shows that the energy is conserved in this approach.
§ DISCUSSION AND CONCLUSIONS
The results can be summarised as follows:
* In this study, first, a 1D perturbation analysis is performed for an Oldroyd B material, both for the exact equations and for the SPH approximations to the exact equations. For the short wavelengths, the SPH dispersion relation shows Zero Energy Modes, which result in the clumping of particles in numerical simulations.
* A B-spline basis function is proposed as the SPH kernel, and the knot positions of the basis function can be moved to change the shape of the kernel. By changing the position of the knots, the extremum of the first derivative of the kernel function can be moved in order to satisfy Swegle's condition <cit.>, and prevent tensile instability.
* Though theoretically, the Zero Energy Modes can be present in the case of tension and compression both, the literature suggests preventing the instability in the direction of tensile stress (<cit.>,<cit.>) or ensuring positive pressure in the entire domain (<cit.>,<cit.>), to eliminate the instability. Hence, in our study, a technique is adopted where the farthest immediate neighbour is tracked, and the intermediate knot position of the kernel, a, is moved (a-adaptive) such that the extremum of the 1-st derivative of the SPH kernel is just beyond the farthest immediate neighbour. If the strain in the direction of tension is large, then both knot positions a and b are shifted (ab-adaptive), keeping the support size the same. This satisfies Swegle's condition in the direction of tension. An approach to estimate the farthest immediate neighbour from the local strains is demonstrated.
* Via the perturbation analysis, it is shown how this proposed technique eliminates the short wavelength Zero Energy Modes but at the same time ensures the accuracy in the long wavelength range.
* Numerical simulations of a Newtonian drop and an Oldroyd B drop impacting a rigid surface show how this new algorithm prevents tensile instability and gives good agreement with the literature results. In the rotation of a fluid patch problem, it is shown that the artificial viscosity is required in conjunction with the adaptive algorithm to fully eliminate the tensile instability. It is also shown that the artificial viscosity alone cannot eliminate the instability. Because the artificial viscosity results in diffusion of the results, the parameters of the viscosity are varied spatially such that the values are maximum where the tensile pressure is the highest and the values have been reduced proportionally as the pressure reduces. A good comparison with the literature results is obtained.
§ ACKNOWLEDGEMENT
The authors would like to acknowledge the Naval Research Board, DRDO, India for their support of this work.
|
http://arxiv.org/abs/2307.07383v1 | 20230714144852 | Higher-order topological kernels via quantum computation | [
"Massimiliano Incudini",
"Francesco Martini",
"Alessandra Di Pierro"
] | quant-ph | [
"quant-ph",
"cs.LG"
] |
Higher-order topological kernels
via quantum computation
MI and APD are partially supported by the Istituto Nazionale di Alta Matematica “Francesco Severi”. FM is supported by the grant AdR4029/22 cofinanced by TAS Group and the University of Verona.
Massimiliano Incudini0000-0002-9389-5370
Department of Computer Science
University of Verona
Verona, Italy
[email protected]
Francesco Martini0000-0003-2651-140X
Department of Computer Science
University of Verona
Verona, Italy
[email protected]
Alessandra Di Pierro0000-0003-4173-7941
Department of Computer Science
University of Verona
Verona, Italy
[email protected]
July 14, 2023
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data. TDA enhances the analysis of objects by embedding them into a simplicial complex and extracting useful global properties such as the Betti numbers, i.e. the number of multidimensional holes, which can be used to define kernel methods that are easily integrated with existing machine-learning algorithms. These kernel methods have found broad applications, as they rely on powerful mathematical frameworks which provide theoretical guarantees on their performance. However, the computation of higher-dimensional Betti numbers can be prohibitively expensive on classical hardware, while quantum algorithms can approximate them in polynomial time in the instance size. In this work, we propose a quantum approach to defining topological kernels, which is based on constructing Betti curves, i.e. topological fingerprints of filtrations of increasing order. We exhibit a working prototype of our approach implemented on a noiseless simulator and show its robustness by means of some empirical results suggesting that topological approaches may offer an advantage in quantum machine learning.
Quantum topological data analysis, quantum kernel, quantum machine learning, topological kernel, Betti curve, Betti number.
§ INTRODUCTION
Quantum computing holds the promise of revolutionizing the field of machine learning due to the increase in computation power <cit.>. Despite the numerous efforts made by researchers, no clear direction for achieving any advantage has been discovered yet.
One of the initial techniques developed is the quantum kernel, which computes a kernel function via a quantum device <cit.>. Its success is attributed to the powerful mathematical framework of kernel methods, which is based on functional analysis and statistical learning and provides theoretical guarantee on their performance
<cit.>. The most basic type of quantum kernel essentially consists of a quantum embedding, a parametric unitary function that encodes classical data into rotational angles of quantum gates. The embedded data is processed using the fidelity test or the SWAP test to estimate the inner product. This kernel family has gained popularity in the quantum realm due to its immediate applicability to the current generation of devices <cit.>. This approach demonstrates superior performance compared to conventional kernels in certain physics-related practical applications <cit.>. However, its use is hindered by unfavorable properties such as the flat distribution of exponentially small eigenvalues <cit.>, which prevents the learning of associated components, and the exponential concentration of coefficients, which makes kernel calls indistinguishable <cit.>.
An alternative method of constructing quantum kernels involves the use of fault-tolerant hardware. In <cit.>, a quantum kernel is proposed that uses the Shor algorithm and applies to problems involving single-feature data x ∈ℤ_p that becomes linearly classifiable, i.e. trivial to classify, after being transformed via the mapping x ↦log_g(x) with g ∈ℤ_p. This approach allows for an exponential speedup of the quantum algorithm over the classical one, given widely accepted cryptographic assumptions. While this technique provides strong guarantees for this particular problem, it is highly artificial and not applicable to most real-world problems. Following a similar approach, in this paper we suggest the use of the Lloyd-Garnerone-Zanardi (LGZ) <cit.> algorithm to estimate the topological feature of complex data for the construction of quantum kernels which offer both theoretical guarantees and practical applicability to real-world tasks. The LGZ is exponentially faster than the best classical method identified to date, and no attempts to invalidate this advantage via dequantization techniques have succeeded <cit.>. While classical techniques are efficient for computing topological features of lower dimensions, the quantum framework seems to be indispensable for higher-dimensional ones. Fig. <ref>(a) illustrates the scenario where quantum hardware can outperform its classical counterpart.
In the past decades, Topological Data Analysis (TDA) <cit.>, i.e. the extraction of useful topological features from data, has been widely studied.
In TDA, data is typically transformed into a point cloud, and a skeleton graph is generated by connecting vertices whose Euclidean distance is less than a threshold value ϵ. The next step involves creating an abstract simplicial complex (ABS) from this skeleton graph, which captures the higher-order relationships between points (simplices). One common type of ABS is the Vietoris-Rips complex, which reveals all the k-cliques in the graph, potentially up to a certain order. By analyzing the ABS, we can determine the Betti number of order k, denoted β_k, which represents the number of k-dimensional holes in the structure. Additionally, a filtration can be created by ordering a collection of ABS based on increasing values of ϵ, resulting in a sequence of nested structures. Persistent Betti numbers can then be calculated from the filtration, tracking the birth and death of each multidimensional hole, i.e., the minimum threshold for which the hole appears and the maximum threshold after which it disappears. Fig. <ref>(b) provides an illustration of the filtration construction process.
Topological Data Analysis has the potential to enhance various aspects of machine learning, such as data exploration, feature engineering, and visual representation <cit.>. In the context of feature engineering, TDA provides an alternative approach to traditional techniques that rely on distance measures in vector spaces. However, the integration process requires particular care. A topological fingerprint of the data (e.g. vector of Betti numbers) can be extracted and fed as additional features to most machine learning models, such as neural networks. Alternatively, the same aspects can be included in the definition of a kernel function, allowing the use of this powerful mathematical framework and integration with most distance-based machine-learning algorithms. Authors in <cit.> have proposed a kernel technique based on the persistence diagram, a two-dimensional representation of the topological aspects of a filtration, in which each point (b_i, d_i) corresponds to the birth and death of the i-th multidimensional hole. Such representation can be embedded into a Hilbert space, whose inner product allows the construction of the similarity measure. Furthermore, it can be proven that this approach is stable with respect to the 1-Wasserstein distance, i.e. robust to small perturbations of the dataset <cit.>. An alternative definition of the kernel can be based on a more succinct representation of the topological features, the Betti curves, introduced in <cit.>. These consider only the total number of k-dimensional holes for a given filtration, without the need to compute persistent Betti numbers. It is possible to obtain a Betti curve given the corresponding persistence diagram, although such a transformation is not injective. Finally, defining a distance (or similarity) measure between two Betti curves is straightforward because they are essentially bounded truncated piecewise linear functions.
In this work, we incorporate the data obtained from the LGZ algorithm into a kernel machine, show its efficacy, and explore the possibility of using our method in large-scale applications.
To accomplish this, we introduce the concept of multidimensional Betti curves, which extend the Betti curves to explicitly consider various degrees of Betti numbers. This technique enables us to leverage the topological information provided by the LGZ algorithm without relying on the persistent Betti numbers. We demonstrate how multidimensional Betti curves can be used to define a kernel function and implement our approach using Python and the Qiskit platform. To evaluate the performance of our approach, we conduct experiments on the shape classification problem, a well-known benchmark. We compare the accuracy of a kernel machine with the higher-order topological kernel to conventional kernel functions, including the Gaussian kernel and polynomial kernels in various configurations. We calculate the topological kernels using either the classical exact procedure or by executing the LGZ algorithm on a noiseless quantum simulator to imply that such an algorithm requires fault-tolerant hardware. We also conduct further experiments to demonstrate the approach's robustness to errors introduced by the Hamiltonian simulation and finite sampling. In most cases, the quantum (approximated) technique closely resembles the classical (exact, but computationally expensive) method, even when using stochastic Hamiltonian simulation techniques such as qDrift. Finally, we discuss potential applications based on time series analysis that could benefit from higher-degree Betti numbers and the advantages that quantum techniques may provide.
§ BACKGROUND
This section provides a brief overview of the relevant background for this work.
§.§ Kernel methods
The application of kernel methods in both supervised and unsupervised machine learning is widespread and essential in these domains. These methods are described in detail in various sources, including <cit.>. Kernel methods can be integrated with most similarity-based algorithms, including Ridge regression and Support Vector Machine, Principal Component Analysis, k-means, and DBSCAN. In this section, we present a concise summary of these ideas.
A kernel function κ : X × X →ℝ extends the concept of similarity for a non-empty set X. If X is an inner product space, the inner product can be used as a similarity measure for a distance-based machine learning algorithm. However, if the inner product fails to capture the relationship between the data points or is unavailable, a kernel function can be used, which maps elements of X into a higher-dimensional Hilbert space ℋ using a feature map ϕ: X →ℋ,
κ(x_1, x_2) = ⟨ϕ(x_1), ϕ(x_2) ⟩_ℋ.
Equivalently, κ is a kernel if it satisfies positive definiteness and, by Mercer's theorem, this guarantees that there exists a feature map ϕ and Hilbert space ℋ satisfying Equation (<ref>).
Considering the supervised learning problem
f^* = min_f ∈ℋ∑_i=1^m ℓ(f(x^(i)), y^(i)) + λ‖ f ‖_ℋ^2
where { (x^(i), y^(i)) }_i=1^m is the training set, ℓ a convex loss function and λ > 0, the Representer Theorem guarantees that the learning problem is convex, m-dimensional and its solution can be expressed as f^*(x) = ∑_i=1^m α_i κ(x^(i), x) with α_i the parameters of the model. This contrasts with neural networks and most machine learning models, whose training is non-convex and provides no guarantee of finding the optimal solution.
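In practice, such a kernel enters a learning algorithm only through its Gram matrix, so any kernel machine can consume a topological (or quantum-estimated) kernel that is evaluated elsewhere. A minimal scikit-learn sketch with a precomputed kernel is shown below.

```python
import numpy as np
from sklearn.svm import SVC

def fit_precomputed_kernel_svm(K_train, y_train):
    """Train an SVM from a precomputed Gram matrix K_train[i, j] = kappa(x_i, x_j)."""
    clf = SVC(kernel="precomputed")
    clf.fit(K_train, y_train)
    return clf

# Prediction uses the rectangular block K_test[i, j] = kappa(x_test_i, x_train_j):
#     y_pred = clf.predict(K_test)
```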
§.§ Topological data analysis
We provide the necessary background in algebraic topology to comprehend the quantum algorithm. For a more detailed overview, we refer readers to <cit.>.
A k-simplex σ = ∑_i=0^k α_i p_i is the convex hull of k+1 independent points p_0, ..., p_k ∈^d, with α_i > 0 and ∑_i=0^k α_i = 1. The dimension of σ is k. A face of σ is the convex hull of a subset of σ's points. A simplicial complex S is a set of simplices such that σ∈ S, τ is a face of σ implies τ∈ S, and given σ, τ∈ S, their intersection σ∩τ is either the empty set or a face of both.
The simplicial complex S has a corresponding purely combinatorial entity known as the abstract simplicial complex 𝒦 = (V, Σ) where V = {0, ..., k} is the set of vertices and Σ is a collection of subsets of V such that σ∈Σ and ∅≠τ⊆σ implies τ∈Σ. The elements of Σ are called simplices.
A p-chain c is a sum of p-simplices σ_i ∈𝒦, each multiplied by a coefficient α_i. In this context, the coefficients are taken from the ring ℤ_2, and chains can be treated as sets. The collection of p-chains, with addition modulo two, forms the chain group C_p(𝒦) or simply C_p. For a p-simplex σ, the boundary operator ∂_p: C_p → C_p-1 is given by
∂_p σ = ∑_j=0^k (-1)^j { v_0, ..., v_k }∖{ v_j }.
It holds that ∂_p-1∘∂_p = 0. A p-chain c is called a p-cycle if ∂ c = 0. The collection of all p-cycles forms a group under addition modulo two, known as the cycle group Z_p.
Applying the boundary operator ∂_p+1 to all p+1-chains in C_p+1 produces the boundary group B_p. The p-th homology group is the quotient group Z_p / B_p, which is also a vector space of dimension β_p = dim H_p.
The number β_p is called the p-th Betti number. In addition, there is a relationship between H_p and the Hodge Laplacian Δ_p = ∂_p^†∂_p + ∂_p+1∂_p+1^† provided by Hodge theory, H_p = kerΔ_k.
A filtration of a simplicial complex , () or simply , is a nested sequence of its subcomplex : ∅ = _0 ⊆ ... ⊆_n =. Every filtration gives rise to a sequence of homomorphisms h^i,j_p : H_p(_i) → H_p(_j). The p-th persistent homology group is H_p^i,j = im h^i,j_p and dim H_p^i,j is the p-th persistent Betti number.
Given a metric space with distance d and r > 0, the abstract simplicial complex is a Vietoris-Rips complex when σ∈ if and only if d(p, q) ≤ 2r for all vertices p, q ∈σ. In the case of Vietoris-Rips complexes, we can represent using a graph G = (V, E), with vertices representing 0-simplices, edges representing 1-simplices, and (k+1)-cliques representing k-simplices.
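To make these notions concrete, the following purely classical sketch (ours, and independent of the quantum algorithm discussed next) builds a Vietoris-Rips complex as the clique complex of the distance graph and computes Betti numbers over ℤ_2 from the ranks of the boundary operators, using β_p = dim C_p - rank ∂_p - rank ∂_p+1.

from itertools import combinations
import numpy as np

def rips_simplices(points, r, max_dim):
    """Return {p: list of p-simplices (as sorted vertex tuples)} of the VR complex."""
    pts = [np.asarray(q, dtype=float) for q in points]
    n = len(pts)
    def edge(i, j):
        return np.linalg.norm(pts[i] - pts[j]) <= 2 * r   # the "2r" convention used above
    simplices = {0: [(i,) for i in range(n)]}
    for dim in range(1, max_dim + 2):                      # one extra level for d_{max_dim+1}
        simplices[dim] = [s for s in combinations(range(n), dim + 1)
                          if all(edge(i, j) for i, j in combinations(s, 2))]
    return simplices

def rank_mod2(M):
    """Rank of a 0/1 matrix over Z_2, by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((row for row in range(rank, M.shape[0]) if M[row, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for row in range(M.shape[0]):
            if row != rank and M[row, col]:
                M[row] ^= M[rank]
        rank += 1
    return rank

def boundary_rank(simplices, p):
    """Rank over Z_2 of the boundary operator d_p: C_p -> C_{p-1}."""
    if p == 0 or not simplices.get(p):
        return 0
    index = {s: i for i, s in enumerate(simplices[p - 1])}
    M = np.zeros((len(simplices[p - 1]), len(simplices[p])), dtype=np.int64)
    for j, s in enumerate(simplices[p]):
        for face in combinations(s, p):                    # drop one vertex at a time
            M[index[face], j] = 1
    return rank_mod2(M)

def betti(points, r, p):
    """p-th Betti number of the Vietoris-Rips complex at scale r."""
    S = rips_simplices(points, r, p)
    return len(S[p]) - boundary_rank(S, p) - boundary_rank(S, p + 1)

# Four points on a unit square: at r = 0.5 the complex is a 4-cycle, so beta_1 = 1.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(betti(square, 0.5, 0), betti(square, 0.5, 1))        # expected: 1 1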
§ QUANTUM COMPUTATION OF BETTI NUMBERS
The authors in <cit.> proposed a quantum algorithm for approximating β_k of a simplicial complex 𝒦. In this section, we briefly discuss this algorithm and explore both its potential benefits and limitations.
§.§ LGZ algorithm
The LGZ algorithm takes in a Vietoris-Rips complex (or equivalently its graph G) and a non-negative integer k, and returns an approximation of the Betti number β_k. The number of vertices of G=(V, E) is denoted by |V| = n. We adopt the notation from <cit.> by using Cl_k(G) ⊂{ 0, 1 }^n to denote the set of (k+1)-cliques of G, encoded as binary strings of Hamming weight k+1 whose support corresponds to the vertices of the clique. The algorithm works as follows.
Firstly, we prepare the state
ρ_k^G = 1/|Cl_k(G)|∑_j ∈ Cl_k(G)|j⟩⟨j|.
This is achieved by encoding the uniform superposition of (k+1)-cliques in G using an n-qubit register, which can be accomplished either by utilizing Grover's algorithm as outlined in <cit.> or by employing a state preparation algorithm <cit.>. While Grover's algorithm can be useful for preparing states in general, the overhead it introduces may make the use of a state preparation routine more convenient for proof-of-concept implementations with small-scale systems. Next, a CNOT gate is applied from each qubit of the state, acting as control, to a separate ancilla; the n ancillae are then measured and their results discarded, which yields the mixed state ρ_k^G.
We now perform quantum phase estimation over the eigenvector ρ_k^G and unitary operator exp(-i Δ), with Δ the Hodge-Laplacian operator. Note that, instead of simulating Δ, we can instead use the Dirac operator B,
B =
[ 0         ∂_1                                   ]
[ ∂_1^†     0         ∂_2                         ]
[           ∂_2^†     0         ⋱                 ]
[                     ⋱         ⋱        ∂_n-1    ]
[                               ∂_n-1^†  0        ],
which is the square root of the Laplacian operator and is n-sparse thus easier to implement. As we are interested in kerΔ_k^G rather than kerΔ, we modify the routine by evolving the operator I ⊗ B on the state |k+1⟩⟨k+1|⊗ρ_k^ using ⌈log_2(n+1) ⌉ more qubits. This ensures that the estimated eigenvalue is nonzero for |j⟩∉kerΔ_k^G <cit.>. Furthermore, to prevent the estimation of eigenvalues that are multiples of 2π, we can rescale B by the factor λ_max^-1, where λ_max is the highest eigenvalue of B which is bounded by O(n) due to the Gershgorin circle theorem. The operator exp(-i B) can be implemented using a Hamiltonian simulation algorithm, such as Trotter decomposition <cit.> or stochastic techniques like qDrift <cit.>. These approaches provide varying levels of resource requirements and error rates, with stochastic techniques being less resource-intensive but producing a larger error. The use of importance sampling in the stochastic selection of the operator terms <cit.> can lead to further resource reduction.
Finally, we perform the estimation process M ∈ O(ϵ^-2) times to achieve a precision of ϵ. We use the obtained phases θ^(1), ..., θ^(M) to compute
β_k ≈β̃_k = |{θ^(i)|θ^(i) = 0 }|/M× |Cl_k(G)|.
To distinguish the phase 0 from the smallest eigenvalue we require a precision of κ = λ_max / λ_min.
However, no bound is known for λ_min. Nonetheless, if we fix a precision that cannot distinguish λ_min from 0, LGZ will still provide useful topological information <cit.>, even though such information does not correspond to the normalized Betti numbers. The circuit is shown in Fig. <ref>.
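The following classical sketch (no quantum simulator involved; all names and toy values are ours) makes the objects above concrete for a hollow triangle: it assembles the Dirac operator B from the signed boundary matrix, reads the Hodge Laplacian off B·B, and mimics the post-processing step above, in which the fraction of zero phases is rescaled by |Cl_k(G)|.

import numpy as np

# Hollow triangle: vertices 0, 1, 2 and edges (0,1), (0,2), (1,2); no 2-simplex.
edges = [(0, 1), (0, 2), (1, 2)]
d1 = np.zeros((3, len(edges)))                 # signed boundary operator d_1: C_1 -> C_0
for j, (a, b) in enumerate(edges):
    d1[a, j], d1[b, j] = -1.0, 1.0             # boundary of (a, b) is (b) - (a)

B = np.block([[np.zeros((3, 3)), d1],          # Dirac operator: d_1 and its adjoint
              [d1.T, np.zeros((3, 3))]])       # sit on the off-diagonal blocks
delta1 = (B @ B)[3:, 3:]                       # Hodge Laplacian Delta_1 = d_1^T d_1 here

# Its eigenvalues play the role of the phases measured by phase estimation (up to rescaling);
# the maximally mixed state over the 3 edges samples each eigenvector with probability 1/3.
eigvals = np.linalg.eigvalsh(delta1)
rng = np.random.default_rng(0)
samples = rng.choice(eigvals, size=1000)
zero_fraction = np.mean(np.abs(samples) < 1e-9)
print(zero_fraction * len(edges))              # estimate of beta_1; the exact value is 1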
§.§ Limitations of the LGZ algorithm
While the LGZ algorithm has the potential to achieve a superpolynomial speedup, it is crucial to address its limitations.
Clearly, this algorithm requires fault-tolerant quantum computers, although a cheaper variation that can be executed on NISQ devices exists <cit.>.
According to <cit.>, we can estimate the normalized Betti number β_k / |Cl_k(G)| up to an additive error ε in time
O( (n^3 κ + n k^2 ζ_k^-1/2) / ε^2 ).
Firstly, the state preparation is performed via Grover's algorithm using the oracle
f_k(j) = 1 if j ∈ Cl_k(G), and f_k(j) = 0 if j ∉ Cl_k(G),
which can be implemented in O(k^2) and requires O(ζ_k^-1/2) oracle calls, where ζ_k = |Cl_k(G)| / (n choose k+1). The cost of applying CNOTs for constructing the mixed state is O(n). Secondly, quantum phase estimation of an n-sparse operator requires O(n^3) gates and a precision of O(κ), where κ = λ_max/λ_min. Thirdly, the cost of sampling is ε^-2.
Imposing a multiplicative error of δ on the actual Betti numbers, obtained fixing ε = δβ_k/|Cl_k(G)|, results in a runtime of
O( (|Cl_k(G)|^2/β_k^2) (n^3 κ + n k^2 ζ_k^-1/2) / δ^2 ).
For the LGZ algorithm, a polynomial runtime is maintained only when ζ^-1∈ O(poly(n)), which is the case for a graph that is k-clique dense. Additionally, it is necessary for κ to be polynomial. Although λ_max can be bounded by O(n), there is currently no known lower bound for λ_min, meaning it could potentially be exponentially small. It is worth noting that <cit.> has demonstrated that both exact and approximate Betti number calculations are NP-hard, even for clique-dense graphs, which implies that it is necessary to investigate alternative definitions of simplicial complexes that are not based on clique complexes.
LGZ is not capable of estimating persistent Betti numbers. Nonetheless, there are other quantum algorithms available for this task, such as the one proposed in <cit.>. Authors in <cit.> introduced an alternative method that can achieve a speedup of up to quintic compared to deterministic classical TDA techniques.
§ HIGHER-ORDER TOPOLOGICAL KERNELS
The topological kernel method embeds data into a filtration, extracts topological features, and uses them to define a distance function, which can immediately be transformed into a kernel. The term higher-order denotes that the kernel captures information concerning Betti numbers of orders greater than 1. We denote by β_k(𝒦) the k-th Betti number of a given simplicial complex 𝒦.
The Betti curve, also known as the Betti sequence, can be used to represent topological information using non-persistent Betti numbers <cit.>. Given a filtration ℱ: 𝒦_0 ⊂ ... ⊂𝒦_q, with ϵ_0, ..., ϵ_q the corresponding thresholds, and an integer k ≥ 0, the Betti curve β_k^ℱ: ℝ→ℕ is defined by:
β_k^ℱ(ϵ) = β_k(𝒦_j), for ϵ∈[ϵ_j, ϵ_j+1).
Note that the Betti curve provides less information compared to other topological representations, such as persistence diagrams, which are based on persistent Betti numbers. However, one advantage of the Betti curve representation is that we can exploit the LGZ algorithm as a procedure to extract the Betti numbers. Furthermore, the authors in <cit.> have shown that the 1-norm of Betti curves is stable against small perturbations of the dataset.
The concept of Betti curves can be extended to include multiple orders of Betti numbers k, resulting in the multivariate Betti curve β_≤k^ℱ: ℕ×ℝ→ℕ,
β_≤k^ℱ(k', ϵ) = β_k'(𝒦_j) if k' ≤ k and ϵ∈[ϵ_j, ϵ_j+1), and β_≤k^ℱ(k', ϵ) = 0 if k' > k or ϵ∉[ϵ_0, ϵ_q].
We can represent such information by means of a matrix of real elements B_≤k^ℱ∈ℝ^(k+1) × (q+1),
(B_≤k^ℱ)_i,j = β_i(𝒦_j); 0 ≤ i ≤ k, 0 ≤ j ≤ q.
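For reference, the matrix B_≤k^ℱ can be assembled classically as follows; this sketch assumes the betti(points, r, p) helper from the earlier sketch is in scope (in the quantum variant each entry would instead be an LGZ estimate).

import numpy as np

def betti_curve_matrix(points, thresholds, k_max):
    """Rows: homology orders 0..k_max; columns: filtration thresholds."""
    B = np.zeros((k_max + 1, len(thresholds)))
    for j, eps in enumerate(thresholds):
        for i in range(k_max + 1):
            B[i, j] = betti(points, eps / 2.0, i)   # eps/2 because edges use d <= 2r
    return B

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
thresholds = [0.5, 1.0, 1.5]                        # in practice: sorted pairwise distances
print(betti_curve_matrix(square, thresholds, k_max=1))
# beta_0 row: [4, 1, 1]; beta_1 row: [0, 1, 0]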
We can establish an upper bound on the error of B̃_≤k^ℱ, which is the approximation of B_≤k^ℱ obtained by using LGZ to estimate the Betti numbers. To derive this bound, we assume that we are operating in the optimal regime, i.e. that the graphs are k'-clique dense for all k' ≤ k.
Consider the Vietoris-Rips filtration ℱ: 𝒦_0 ⊂ ... ⊂𝒦_q, G_0, ..., G_q the corresponding graphs, ϵ_0, ..., ϵ_q the corresponding thresholds, and an integer k ≥ 0. Use LGZ to estimate (non-normalized) Betti numbers up to a multiplicative error δ > 0. Then, we can calculate B̃_≤k^ℱ, the approximation of B_≤k^ℱ, in time
O( qk × (|Cl_k(G)|^2/β_k^2) (n^3 κ + n k^2 ζ_k^-1/2) / δ^2 ),
with relative Frobenius norm upper bounded by δ,
‖B̃_≤k^ℱ - B_≤k^ℱ‖_F / ‖ B_≤k^ℱ‖_F ≤δ.
The runtime is obtained by separately estimating all the elements of the matrix, which adds an overhead proportional to qk. The upper bound on the relative Frobenius norm comes from the properties of the norm and elementary arithmetic rules,
‖B̃_≤k^ℱ - B_≤k^ℱ‖_F ≤‖ (1+δ) B_≤k^ℱ - B_≤k^ℱ‖_F ≤ |δ| ·‖ B_≤k^ℱ‖_F.
If we consider a dataset of m elements, where each element corresponds to a cloud of n points, we can place an upper bound of m·n(n-1)/2 on the number of distinct thresholds q. This is due to the fact that each point cloud determines at most n(n-1)/2 pairwise distances, which can be read off the distance matrix of its points. Since the matrix is symmetric and the diagonal contains all zeros, we only need the values in the upper triangular part without the principal diagonal. In practical scenarios, the actual number of thresholds can be significantly smaller than this upper bound.
We can define a distance between multidimensional Betti curves: given B_≤k^ℱ,(1) and B_≤k^ℱ,(2) defined on the same sequence of thresholds ϵ_0, ..., ϵ_q,
d(B_≤k^ℱ,(1), B_≤k^ℱ,(2)) = ( ∑_k'=0^k ∑_j=0^q-1 (ϵ_j+1-ϵ_j) ( (B_≤k^ℱ,(1))_k',j - (B_≤k^ℱ,(2))_k',j )^p )^1/p.
It should be noted that Equation (<ref>) represents an instance of the weighted Minkowski distance; for p=2, it becomes a weighted Euclidean distance.
The concepts of similarity (kernel) and distance measures are closely related. One example is the use of a kernel function κ to induce a distance measure between two objects,
d(a, b) = √(κ(a, a) + κ(b, b) - 2κ(a, b)).
Conversely, it is also possible to define a kernel function given a distance measure d. One common approach is to use the Gaussian kernel,
κ(B_≤k^ℱ,(1), B_≤k^ℱ,(2)) = exp(- γ d(B_≤k^ℱ,(1), B_≤k^ℱ,(2)))
where γ > 0. Here, the distance function d corresponds to the one defined in Equation (<ref>).
The function in (<ref>) is a Mercer kernel.
This follows from the definition of the Gaussian kernel <cit.>.
The kernel defined by (<ref>) is tailored for use with the LGZ algorithm, which does not require calculating persistent Betti numbers, in contrast to the ones previously proposed in the classical machine learning literature <cit.>. Nevertheless, quantum algorithms capable of estimating persistent Betti numbers have the ability to generate topological kernels based on alternative representations, such as persistence diagrams (as discussed in Sec. <ref>).
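For concreteness, one possible implementation of the distance and of the induced kernel could look as follows; the function names, hyperparameters, and toy matrices are ours and only meant as a sketch.

import numpy as np

def betti_curve_distance(B1, B2, thresholds, p=2):
    """Weighted Minkowski distance between two (k+1) x (q+1) Betti-curve matrices."""
    weights = np.diff(np.asarray(thresholds, dtype=float))   # eps_{j+1} - eps_j
    diff = np.abs(B1[:, :-1] - B2[:, :-1]) ** p              # the last column carries no weight
    return float(np.sum(diff * weights[None, :]) ** (1.0 / p))

def topological_kernel(B1, B2, thresholds, gamma=1.0, p=2):
    return float(np.exp(-gamma * betti_curve_distance(B1, B2, thresholds, p)))

B_triangle = np.array([[3.0, 1.0, 1.0], [0.0, 1.0, 0.0]])    # toy Betti-curve matrices
B_square = np.array([[4.0, 1.0, 1.0], [0.0, 1.0, 0.0]])
thresholds = [0.5, 1.0, 1.5]
print(topological_kernel(B_triangle, B_square, thresholds))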
§ EXPERIMENTAL ASSESSMENT
We show a working prototype of our approach and test it over the shape classification problem, a small-scale synthetic benchmark feasible to solve on a quantum simulator. To generate the dataset, we sampled points from the perimeter of various shapes. We then created Vietoris-Rips complexes for different values of ϵ and extracted the Betti numbers for various values of k to construct the kernel as defined in (<ref>). After applying this kernel to a Support Vector Machine (SVM), we measured its accuracy and compared it to the accuracy of an SVM that utilizes a conventional classical kernel. The machine learning pipeline used in this experiment is pictured in Fig. <ref>(c).
Furthermore, we investigate the robustness of our approach by examining how much the topological kernel computed using the quantum algorithm deviates from the exact result computed on classical hardware. To test this, we vary the Hamiltonian simulation techniques (Trotter and qDrift), the number of repetitions, and the number of shots used for sampling the results.
§.§ Setup
The dataset used for shape classification is created procedurally. The objective of this classification problem is to differentiate between a triangle and a sliced quadrangle, given a set of points sampled uniformly from their perimeters. It is worth noting that these two shapes are not topologically equivalent and therefore possess distinct topological characteristics. Alternatively, one could differentiate the digits zero and eight in datasets such as MNIST, which are topologically equivalent to a triangle and a sliced quadrangle, respectively.
To create each triangle, we describe it in ℝ^2 using the points {(0,0), (0,1), (1,1)}. We then apply an affine transformation that rotates the shape by a randomly generated angle between 0 and 2π radians and introduces a skew by randomly generating shear angles along the x and y axes between 0 and π/4. Similarly, we construct each square using the points {(0,0), (0,1), (1,1), (1,0)} and define its perimeter as the external border joined with the segment from (0,0) to (1,1). We then apply the same transformations as for the triangles to generate various instances of the square. Once the dataset of shapes is generated, we sample an increasing number of points from their perimeters.
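A possible implementation of this generator is sketched below; the exact transformation conventions may differ in detail from the ones actually used, and the sampler shown covers only the closed outer perimeter (for the sliced quadrangle the diagonal segment would be appended to the list of segments).

import numpy as np

def random_affine(rng):
    theta = rng.uniform(0.0, 2.0 * np.pi)                  # random rotation angle
    sx, sy = rng.uniform(0.0, np.pi / 4.0, size=2)         # shear angles along x and y
    rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    shear = np.array([[1.0, np.tan(sx)], [np.tan(sy), 1.0]])
    return rot @ shear

def sample_perimeter(vertices, n_points, rng):
    """Sample n_points uniformly on the closed polyline through `vertices`."""
    V = np.vstack([vertices, vertices[:1]])                # close the polygon
    seg = np.diff(V, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(lengths)])
    t = rng.uniform(0.0, cum[-1], size=n_points)
    idx = np.searchsorted(cum, t, side="right") - 1
    frac = (t - cum[idx]) / lengths[idx]
    return V[idx] + frac[:, None] * seg[idx]

rng = np.random.default_rng(42)
triangle = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cloud = sample_perimeter(triangle @ random_affine(rng).T, n_points=10, rng=rng)
print(cloud.shape)                                         # (10, 2)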
The dataset created has 100 items and is balanced. It has been randomly split into training and testing sets so that each subset is balanced too. We have sampled clouds of points from this dataset, with the number of points ranging from 5 to 20 for topological kernels calculated using classical methods. Furthermore, we have defined a smaller dataset of m=20 items and sampled 5 points from each item.
Firstly, we have compared the performance of the topological kernel (<ref>) on the larger dataset having γ = 1.0, p = 2 with the Gaussian kernel (hyperparameter γ = 1e-4, 1e-3, 1e-2, 1e-1, 1), Laplacian kernel (hyperparameter γ = 1e-4, 1e-3, 1e-2, 1e-1, 1), and the polynomial kernel (degree d = 1, 2, 3, 4, 5). The metric chosen for the comparison is the accuracy with respect to the testing set. In this case, the topological kernel has been calculated classically and the result is exact.
Secondly, we have used the smaller dataset to tackle the shape classification problem through the use of topological kernels, which are computed using both classical exact methods and a quantum algorithm on a classical simulator. Then, we have compared the difference between the two versions by means of the root mean square error (RMSE), defined as RMSE(K, K̃) = √( ∑_i,j=1^m ((K)_i,j - (K̃)_i,j)^2 / m^2 ). To implement LGZ, we have used Hamiltonian simulation techniques such as Trotter (with a Trotter number spanning 2, 3, 4) and qDrift (with a repetition number of 1, 2, 5, 10), along with varying numbers of shots. This allows us to analyze the impact of these factors on the solution.
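The evaluation utilities can be sketched as follows; the Gram matrices themselves are assumed to be produced by the topological kernel defined above (classically or via LGZ), and the toy matrices only illustrate the RMSE computation.

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def rmse(K, K_tilde):
    K, K_tilde = np.asarray(K, dtype=float), np.asarray(K_tilde, dtype=float)
    return float(np.sqrt(np.mean((K - K_tilde) ** 2)))

def accuracy_precomputed(K_train, y_train, K_test, y_test):
    """K_train: (n_train, n_train) Gram matrix; K_test: (n_test, n_train) against the training set."""
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    return accuracy_score(y_test, clf.predict(K_test))

K_exact = np.array([[1.0, 0.2], [0.2, 1.0]])   # hand-made stand-ins for the two Gram matrices
K_noisy = K_exact + 0.01 * np.eye(2)
print(rmse(K_exact, K_noisy))                  # about 0.007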
§.§ Results
In Figure <ref>(a), we can observe the accuracy of kernel machines utilizing various kernel techniques. Notably, the topological kernel yields the best performance compared to the conventional kernels. This can be explained by the inductive bias of topological data analysis. TDA assumes that the data has an underlying topological structure that can be analyzed and is informative for understanding its properties. This assumption is certainly true for the shape classification problem.
More importantly, the topological technique is the only one that exhibits improvement in performance with an increase in sampled points. This behavior cannot be seen for the other kernels, particularly the RBF kernel which shows the second-best performance. The ability of our kernel to remain robust in relation to the set of sampled points of our figures is especially significant and serves as a strong motivation for utilizing the topological kernel over the other approaches.
In Figure <ref>(b), we provide empirical evidence of the robustness of our approach regarding the choice of Hamiltonian simulation technique and the number of shots. It is noteworthy that the performance of the topological kernel created using LGZ closely resembles the exact classical calculations. The qDrift technique introduces stochasticity in the simulation, thus requiring a greater number of repetitions to attain satisfactory performance. This is in contrast to the Trotter technique, which is more resource-intensive, yet yields the best performance (zero RMSE) with only two repetitions. The number of shots needed is relatively unimportant, as 200 shots are sufficient to accurately estimate the coefficients.
§ CONCLUSION AND FUTURE WORK
We have introduced a technique for generating a topological kernel using the LGZ quantum algorithm. We have established an upper bound on the error that may arise from the use of this algorithm and demonstrated that the resulting kernel satisfies the criteria of a Mercer kernel. Finally, we have provided a working prototype of our method and tested it on a synthetic benchmark for shape classification. Our approach is feasible when using Trotter and qDrift as Hamiltonian simulation techniques (the latter only with a large enough number of repetitions of the circuit) and with a modest number of shots.
The work we have presented has the potential to be extended in several ways. Firstly, the literature lacks sufficient evidence regarding the impact of higher-order Betti numbers on solving problems. This is because such features could not be computed efficiently before the introduction of quantum algorithms, which made the question of their relevance moot. To gain a better understanding of which use cases are suitable for quantum computation, we must analyze various scenarios. To this end, one promising area is time series processing, which can be efficiently encoded as a point cloud in a d-dimensional space using Takens' embedding. Tuning the parameter d to generate simplicial complexes with the highest order of Betti numbers could potentially enable us to reach a regime in which the quantum algorithm performs optimally. Time series analysis has several applications in biology, engineering, and finance, including fraud detection <cit.>.
Secondly, an area of further exploration could be the potential combination of the proposed topological kernel with conventional (Gaussian, Laplacian, polynomial) kernels. This approach has the potential to leverage the flexibility of kernel methods and utilize the property that the sum, product, and limits of a sequence of Mercer kernels are also Mercer kernels. Consequently, it may be possible to create a hybrid kernel that incorporates both geometrical and topological information.
Thirdly, the skeleton of the Vietoris-Rips complex is constructed using a distance function, typically Euclidean. However, it is possible to substitute this distance function with one induced by a quantum kernel. Doing so may enable us to uncover relationships between data that are not apparent using traditional methods. An example of the potential advantages of these quantum kernels has been demonstrated in the context of physics-related problems <cit.>.
Fourthly, all the tested methods have the potential to be enhanced by optimizing the kernel parameters. This involves adjusting the parameter γ > 0 for the Gaussian and Laplacian kernels, the degree d for the polynomial kernel, while for the topological kernel, both the γ > 0 parameter and the maximum order of Betti number k might be optimized. It is also possible to enhance the approach by restricting the multivariate Betti curves, represented in matrix form in Equation (<ref>), to a randomly sampled subset of the thresholds ϵ_0, ..., ϵ_q.
Fifthly, the stability of the multivariate Betti curve can be intuitively inferred from the proof of the stability of the univariate Betti curve presented in <cit.>, but this has to be formally proven.
§ CODE AND DATA AVAILABILITY
The code and data used in this study are available from the authors upon request.
§ ACKNOWLEDGMENT
We acknowledge the CINECA award under the ISCRA initiative, for the availability of high-performance computing resources and support. MI thanks Oriel Kiss for the insightful discussion.
99
biamonte2017quantum J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost,
N. Wiebe, and S. Lloyd, “Quantum machine learning,”
Nature, vol. 549, no. 7671, pp. 195–202, 2017.
havlivcek2019supervised V. Havlicek, A. D. Corcoles, K. Temme, A. W. Harrow,
A. Kandala, J. M. Chow, et al., “Supervised
learning with quantum-enhanced feature spaces,”
Nature, vol. 567, no. 7747, pp. 209–212, 2019.
canatar2021spectral
A. Canatar, B. Bordelon, and C. Pehlevan, “Spectral
bias and task-model alignment explain generalization in
kernel regression and infinitely wide neural networks,”
Nature communications, vol. 12, no. 1,
p. 2914, 2021.
preskill2018quantum J. Preskill, “Quantum computing in the nisq era and
beyond,” Quantum, vol. 2, p. 79, 2018.
wozniak2023quantum K. A. Woźniak, V. Belis, E. Puljak, P. Barkoutsos,
G. Dissertori, M. Grossi, et al., “Quantum
anomaly detection in the latent space of proton collision
events at the lhc,” Unpublished, available as preprint
arXiv:2301.10780, 2023.
kubler2021inductive J. Kübler, S. Buchholz, and B. Schölkopf, “The
inductive bias of quantum kernels,” Advances in
Neural Information Processing Systems, vol. 34,
pp. 12 661–12 673, 2021.
thanasilp2022exponential S. Thanasilp, S. Wang, M. Cerezo, and Z. Holmes,
“Exponential concentration and untrainability in
quantum kernel methods,” Unpublished, available as
preprint arXiv:2208.11060, 2022.
liu2021rigorous Y. Liu, S. Arunachalam, and K. Temme, “A rigorous
and robust quantum speed-up in supervised machine
learning,” Nature Physics, vol. 17, no. 9,
pp. 1013–1017, 2021.
lloyd2016quantum S. Lloyd, S. Garnerone, and P. Zanardi, “Quantum
algorithms for topological and geometric analysis of
data,” Nature communications, vol. 7, no. 1,
p. 10 138, 2016.
chia2022sampling N.-H. Chia, A. P. Gilyén, T. Li, H.-H. Lin, E. Tang, and
C. Wang, “Sampling-based sublinear low-rank matrix
arithmetic framework for dequantizing quantum
machine learning,” Journal of the ACM,
vol. 69, no. 5, pp. 1–72, 2022.
chazal2021introduction F. Chazal and B. Michel, “An introduction to
topological data analysis: Fundamental and practical
aspects for data scientists,” Frontiers in artificial
intelligence, vol. 4, p. 667 963, 2021.
hensel2021survey
F. Hensel, M. Moor, and B. Rieck, “A survey of
topological machine learning methods,”
Frontiers in Artificial Intelligence, vol. 4,
p. 681 108, 2021.
reininghaus2015stable J. Reininghaus, S. Huber, U. Bauer, and R. Kwitt, “A
stable multi-scale kernel for topological machine
learning,” in Proceedings of the IEEE
conference on computer vision and pattern recognition,
2015, pp. 4741–4748.
carriere2017sliced M. Carriere, M. Cuturi, and S. Oudot, “Sliced
wasserstein kernel for persistence diagrams,” in
International conference on machine learning,
PMLR, 2017, pp. 664–673.
umeda2017time Y. Umeda, “Time series classification via topological
data analysis,” Information and Media
Technologies, vol. 12, pp. 228–239, 2017.
steinwart2008support I. Steinwart and A. Christmann, Support vector
machines. Springer Science & Business Media, 2008.
rieck2020topological B. Rieck, F. Sadlo, and H. Leitte, “Topological machine
learning with persistence indicator functions,” in
Topological Methods in Data Analysis and
Visualization V: Theory, Algorithms, and Applications
7, Springer, 2020, pp. 87–101.
gyurik2022towards C. Gyurik, C. Cade, and V. Dunjko, “Towards quantum
advantage via topological data analysis,”
Quantum, vol. 6, p. 855, Nov. 2022.
gunn2019review S. Gunn and N. Kornerup, “Review of a quantum
algorithm for betti numbers,” Unpublished, available as
preprint arXiv:1906.07673, 2019.
ameneyro2022quantum B. Ameneyro, G. Siopsis, and V. Maroulas, “Quantum
persistent homology for time series,” in
IEEE/ACM 7th Symposium on Edge Computing
(SEC), IEEE, 2022, pp. 387–392.
childs2021theory A. M. Childs, Y. Su, M. C. Tran, N. Wiebe, and S. Zhu,
“Theory of trotter error with commutator scaling,”
Physical Review X, vol. 11, no. 1, p. 011 020,
2021.
campbell2019random E. Campbell, “Random compiler for fast hamiltonian
simulation,” Physical review letters, vol. 123,
no. 7, p. 070 503, 2019.
kiss2023importance O. Kiss, M. Grossi, and A. Roggero, “Importance
sampling for stochastic quantum simulations,”
Quantum, vol. 7, p. 977, 2023.
akhalwaya2022exponential I. Y. Akhalwaya, S. Ubaru, K. L. Clarkson,
M. S. Squillante, V. Jejjala, Y.-H. He, et al.,
“Towards quantum advantage on noisy quantum
computers,” arXiv preprint arXiv:2209.09371,
2022.
schmidhuber2022complexity A. Schmidhuber and S. Lloyd, “Complexity-theoretic
limitations on quantum algorithms for topological data
analysis,” Unpublished, available as preprint
arXiv:2209.14286, 2022.
hayakawa2022quantum R. Hayakawa, “Quantum algorithm for persistent betti
numbers and topological data analysis,”
Quantum, vol. 6, p. 873, 2022.
mcardle2022streamlined S. McArdle, A. Gilyén, and M. Berta, “A streamlined
quantum algorithm for topological data analysis with
exponentially fewer qubits,” Unpublished, available as
preprint arXiv:2209.12887, 2022.
dipierro2021quantum A. Di Pierro and M. Incudini, “Quantum machine
learning and fraud detection,” in Protocols,
Strands, and Logic: Essays Dedicated to Joshua
Guttman on the Occasion of his 66.66 th Birthday,
Springer, 2021, pp. 139–155.
|
http://arxiv.org/abs/2307.06245v2 | 20230712153529 | Large deviations of the stochastic area for linear diffusions | [
"Johan du Buisson",
"Thamu D. P. Mnyulwa",
"Hugo Touchette"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"math.PR"
] | =1
|
http://arxiv.org/abs/2307.04315v2 | 20230710025718 | Movement of branch points in Ahlfors' theory of covering surfaces | [
"Yun-Ling Chen",
"Tian-Run Lin",
"Guang-Yuan Zhang"
] | math.CV | [
"math.CV",
"[2020] 30D35, 30D45, 52B60"
] |
Movement of branch points in Ahlfors' theory of covering surfaces
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.
R. China
[email protected],
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.
R. China
[email protected]
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.
R. China
[email protected]
Project 10971112 and 12171264 supported by NSFC
In this paper, we will prove a result which is asserted in <cit.> and is
used in the proof of the existence of extremal surfaces in <cit.>.
[2020] 30D35, 30D45, 52B60
Guangyuan Zhang
§ INTRODUCTION
In 1935, Lars Ahlfors <cit.> introduced the theory of covering surfaces and gave a geometric illustration of Nevanlinna's value distribution theory. Relying on the Length-Area principle (<cit.>, p.14), Ahlfors' theory has a metric-topological nature. The most crucial result in the theory of covering surfaces is Ahlfors' Second Fundamental Theorem (SFT), which corresponds to Nevanlinna's Second Main Theorem. However, the precise bound of the most important constant in Ahlfors' SFT, the constant H(E_q) (we will give the definition later), has not been sufficiently studied yet. This motivates our work.
We start with several definitions and elementary facts in the theory of
covering surfaces. The unit sphere S is identified with the extended complex
plane ℂ under the stereographic projection
P:S→ℂ as in <cit.>. Endowed with the spherical
metric on S, the spherical length L and the spherical area A on S
have natural interpretations on ℂ as
dL = 2|dz|/(1+|z|^2),
and
dA =4dxdy/(1+|z|^2)^2
for any z∈ℂ.
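For instance, for the unit disk {|z|<1} (which corresponds to a hemisphere of S under this identification) these normalizations give
L = ∫_|z|=1 2|dz|/(1+|z|^2) = 2π and A = ∫_0^2π∫_0^1 4r/(1+r^2)^2 dr dθ = 2π,
so that the whole sphere S has area 4π.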
For a closed set K on ℂ, a mapping f:K→ S
is called continuous and open if f can be extended to a continuous and open
mapping from a neighborhood of K to S. Now we can define the covering surface.
Let U be a domain on ℂ whose boundary
consists of a finite number of disjoint Jordan curves α_1
,…,α_n. Let f:U→ S be an
orientation-preserving, continuous, open, and finite-to-one map (OPCOFOM).
Then the pair Σ=(f,U) is called a covering surface over S,
and the pair ∂Σ=(f,∂ U) is called the boundary of
Σ.
For each point w ∈ S, the covering number n(f,w) is defined
as the number of all w-points of f in U without counting multiplicity.
That is, n(f,w) = n(Σ,w) = ♯{f^-1(w)∩
U}.
All surfaces in this paper are covering surfaces defined above.
The area of a surface Σ=(f,U) is defined as the spherical
area of f:U→ S, say,
A(Σ)=A(f,U) = ∫∫_S n(Σ,w) dA(w) = ∫∫_ℂ (4/(1+u^2+v^2)^2) n(Σ,u+√(-1)v) du dv.
And the perimeter of Σ=(f,U) is defined as the spherical
length of f:∂U→ S and write
L(∂Σ) = L(f, ∂ U).
Let Σ=(f,U) be a covering surface.
(1) Σ is called a closed surface, if U=S. For a closed surface
Σ, we have ∂Σ=∅, and then L(∂Σ)=0.
(2) Σ is called a simply-connected surface, if U is a
simply connected domain.
(3) 𝐅 denotes all surfaces such that for each Σ=(
f,U) ∈𝐅, U is a Jordan domain.
(A) Let K_1 and K_2 be two domains or two
closed domains on S, such that ∂ K_1 and ∂ K_2 are
both consisted of a finite number of disjoint Jordan curves. A mapping
f:K_1→ K_2 is called a complete covering
mapping (CCM), if (a) for each p∈ K_2 there exists a neighborhood V
of p in K_2 such that f^-1(V)can be expressed as a union
∪_j∈𝒜U_j of disjoint (relative) open sets of K_1, and
(b) f|_U_j:U_j→ V is a homeomorphism for each j∈𝒜.
(B) We call f a branched
complete covering mapping (BCCM), if all conditions of (A) hold, except that
(b) is replaced with (b1) or (b2): (b1) If both K_1 and K_2 are
domains, then for each j∈𝒜, U_j∩ f^-1(p) contains only
one point a_j of f^-1(p), and there exist two homeomorphisms
φ_j:U_j→Δ,ψ_j:V→Δ with
φ_j( a_j) =ψ_j( p) =0, such that
ψ_j∘ f|_U_j∘φ_j^-1(ζ)=ζ^k_j,ζ∈Δ,where k_j is a positive integer; or (b2) if both K_1 and
K_2 are closed domains, then f|_K_1^∘:K_1^∘→
K_2^∘ satisfies (b1) and moreover, f restricted to a neighborhood
of ∂ K_1 in K_1 is a CCM onto a neighborhood of ∂
K_2 in K_2.
(C) For a surface Σ=( f,U) over S, f is in general not a CCM or BCCM. When f( z) =z^2, both f:Δ→Δ and its extension to the closed unit disk are BCCMs, but when f( z) =z( (z-a)/(1-a̅z)) ^2 with 0<|a|<1, f:Δ→ f(Δ) is neither a CCM nor a BCCM.
Ahlfors' Second Fundamental Theorem gives the relationship between A(Σ
), n(Σ) and L(∂Σ).
Given an integer q≥3, let
E_q={a_1,…,a_q} be a set of distinct q points on S. Then
there exists a positive constant h depending only on E_q, such that for
any covering surface Σ= (f,U)∈𝐅, we have
(q-2)A(Σ) ≤4π∑_j=1^qn
(Σ,a_j) + h L(∂Σ).
In particular, if f(U) ∩{0,1,∞}=∅, then we
have
A(Σ) ≤ h L(∂Σ).
It is a natural question that whether we can find a precise lower bound for
the constants h in Theorem <ref>. For this purpose, we need to define
the remainder-perimeter ratio H(Σ) as follows.
For a covering surface Σ=(f,U)∈𝐅 and
a set E_q={a_1,…,a_q} on S, we define the total covering
number over E_q as
n(f,E_q) = n(Σ,E_q) = ∑_j=1^qn(Σ,a_j) = ♯{f^-1(E_q)∩ U},
the remainder as
R(Σ,E_q)=(q-2)A(Σ) - 4πn(Σ,E_q),
and the remainder-perimeter ratio as
H(Σ,E_q) = R(Σ,E_q)/L(∂Σ).
In the sequel, we always use R(Σ) and H(Σ) without emphasizing
the set E_q.
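For a simple illustration of these quantities, take q=3, E_3={0,1,∞} and the one-sheeted hemisphere Σ_0=(id,Δ), where Δ={|z|<1}: then A(Σ_0)=2π, L(∂Σ_0)=2π and n(Σ_0,E_3)=1, since 0 is the only point of E_3 lying in Δ, so that
R(Σ_0,E_3) = (3-2)·2π - 4π·1 = -2π and H(Σ_0,E_3) = -1.
In particular, the remainder, and hence the ratio H, may be negative for an individual surface.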
We can observe that to estimate the constants h in Theorem <ref>, we are
supposed to give an upper bound of H(Σ). In <cit.>, the last author
developed an innovative method to compute the precise value of the constant
h in (<ref>).
For any surface Σ=(f,U
)∈𝐅 with f(U) ∩{0,1,∞}=∅, we have
A(Σ)< h_0L(∂Σ),
where
h_0=max_θ∈[ 0,π/2] {(
π+θ) √(1+sin^2θ)/arctan√(1+sin
^2θ)cosθ-sinθ} .
Moreover, the constant h_0 is sharp: there exists a sequence of covering
surface {Σ_n} in 𝐅 with f(U_n)
∩{0,1,∞}=∅ such that A(Σ_n)/L(∂Σ
_n)→ h_0 as n→∞.
However, in general it is very difficult to estimate the precise bound of the constant h. Since the branch points (see the definition in Remark <ref>) of a surface lying outside f^-1(E_q) cause considerable difficulties in the research, Sun and the last author tried to overcome such problems in <cit.>. Unfortunately, we observe that the published result in <cit.> is not strong enough for our purpose. Before establishing our main theorem, we introduce more terminology and definitions.
All paths and curves considered in this paper are oriented and any subarc of a
path or closed curve inherits this orientation. Sometimes paths and curves
will be regarded as sets, but only when we use specific set operations and set
relations. For an oriented circular arc c, the circle C containing c and
oriented by c is called the circle determined by c.
For any two non-antipodal
points p and q on S, pq is the geodesic on S from p to
q: the shorter of the two arcs with endpoints p and q of the great
circle on S passing through p and q. Thus d(p,q)<π and
pq is uniquely determined by p and q. An arc of a great
circle on S is called a line segment on S, and to emphasize this,
we also refer to it as a straight line segment. For the notation
pq, when p and q are explicit complex numbers we write
p,q, to avoid ambiguity such as 123=12,3
or 1,23. When p and q are two antipodal points of S,
pq is not unique and d( p,q) =π. To avoid
confusions, when we write pq, or say pq is well
defined, we always assume d( p,q) <π.
(1) For a Jordan domain D in ℂ, let h be a
Möbius transformation with h(D)⊂Δ. Then ∂ D is
oriented by h and the anticlockwise orientation of ∂ h(D). The
boundary of every Jordan domain on S is oriented in the same way, via
stereographic projection.
(2) For a Jordan curve C on ℂ or S, the domain
T_C bounded by C is called enclosed by C if the boundary
orientation of T_C agrees with the orientation of C.
(3) A domain D on S is called convex if for any two points q_1
and q_2 in D with d(q_1,q_2)<π, q_1q_2⊂ D; a Jordan curve on S is called convex if it encloses a convex
domain on S; a path on S is called convex if it is an arc of a
convex Jordan curve.
(4) Let γ:[a,b]→ S be a path on S and p_0∈(a,b).
γ is called convex at p_0, if γ restricted to a
neighborhood (p_0-δ,p_0+δ) of p_0 in (a,b)
is a convex Jordan path, with respect to the parametrization giving
γ (t increases). γ is called strictly convex at p_0
if γ is convex at p_0 and restricted to a neighborhood N_p_0
of p_0 in (a,b) is contained in some closed hemisphere S_1 on S
with γ_N_p_0∩ S_1=γ(p_0).
Recall that 𝐅 is the space of covering surfaces Σ
=(f,U),where U is a Jordan domain on ℂ.
Before introducing a subspace of 𝐅, we need to give the definition
of partition. For a Jordan curve α in ℂ, its partition is a
collection {α_j}_j=1^n of its subarcs such that α
=∪_j=1^nα_j and α_j^∘ are disjoint and arranged
anticlockwise. In this setting we write α=α_1+α_2
+⋯+α_n. Here α_j^∘ is the interior of α_j, which is α_j without endpoints. A partition
∂Σ=γ_1+γ_2+⋯+γ_n
of ∂Σ for a surface Σ=(f,U)∈𝐅 is
equivalent to a partition
∂ U=α_1+α_2+⋯+α_n
of ∂ U such that γ_j=(f,α_j) for j=1,…,n.
We denote by ℱ the subspace of 𝐅 such that for each
Σ=( f,U), ∂Σ has a partition
∂Σ=c_1+c_2+…+c_n,
where c_1,…,c_n are simple convex circular (SCC) arcs. This means
that ∂ U has a partition
∂ U=α_1+α_2+…+α_n
such that α_j,1≤ j≤ n, are arranged anticlockwise and f
restricted to each α_j is a homeomorphism onto the convex circular
arc c_j.
Now we introduce some subspaces of ℱ which can describe some
properties of the covering surfaces precisely.
For given positive number L, ℱ(L) denotes the
subspace of ℱ in which every surface has boundary length
L(∂Σ)≤ L.
𝒞(L,m) denotes the subspace of ℱ(L) such that
Σ=( f,Δ) ∈𝒞(L,m) if and only
if ∂Δ and ∂Σ have 𝒞(L,m)-partitions.
This means that ∂Δ and ∂Σ have partitions
∂Δ=α_1( a_1,a_2) +α_2(
a_2,a_3) +…+α_m( a_m,a_1)
and
∂Σ=c_1( q_1,q_2) +c_2( q_2
,q_3) +…+c_m( q_m,q_1)
respectively, such that c_j( q_j,q_j+1) =(
f,α_j( a_j,a_j+1) ) is an SCC arc for each
j=1,…,m.
Given q≥3, let E_q={a_1,…,a_q} be a set of q distinct
points. 𝒞^∗(L,m) denotes the subspace of 𝒞
(L,m)such that Σ=( f,Δ)
∈𝒞^∗( L,m) if and only if ∂Δ and
∂Σ have 𝒞^∗(L,m)-partitions. That is, the
partitions are 𝒞(L,m)-partitions in (<ref>) and (<ref>)
so that f has no branch points in α_j^∘∩ f^-1(E_q) for
every j=1,…,m.
ℱ(L,m) denotes the subspace of 𝒞(L,m) such
thatΣ=( f,Δ) ∈ℱ
( L,m) if and only if ∂Δ and ∂Σ
have ℱ(L,m)-partitions (<ref>) and (<ref>), that is,
the partitions are 𝒞(L,m)-partitions such that, for each
j=1,2,…,m, f has no branch point in α_j^∘.
ℱ_r denotes the subspace of ℱ such that
Σ=( f,Δ) ∈ℱ_r if and only if
f has no branch point in Δ\ f^-1(E_q), say,
C_f^∗( Δ) =∅, and define
ℱ_r(L)=ℱ_r∩ℱ(L),
ℱ_r(L,m)=ℱ_r∩ℱ(L,m).
The condition in the definition of ℱ(L,m) is equivalent to say
that, for each j=1,…,m, f restricted to a neighborhood of α
_j^∘ in Δ is a homeomorphism onto a one-side
neighborhood of c_j^∘, which is the part of a neighborhood of
c_j^∘ contained in the closed disk enclosed by the circle determined
by c_j.
By definition, we have
ℱ_r( L,m) ⫋ℱ( L,m)
⫋𝒞^∗( L,m) ⫋𝒞(
L,m) ,
and
ℱ(L)=∪_m=1^∞ℱ( L,m) =∪
_m=1^∞𝒞( L,m) .
For each Σ∈𝒞( L,m) , there exists an integer
m_1>m such that Σ∈ℱ( L,m_1) .
Analogous to Definition<ref>, we define the Ahlfors' constants in
different subspaces of covering surfaces.
Given q≥3, for any set E_q={a_1,…,a_q} of q
distinct points, we define
H_0=sup_Σ∈ℱH(Σ)=sup_Σ∈ℱ
H(Σ,E_q),
H_L=H_L(E_q)=sup_Σ∈ℱ(L)H(Σ)=sup_Σ∈ℱ(L)H(Σ,E_q),
H_L,m=sup_Σ∈ℱ(L,m)H(Σ)=sup_Σ∈ℱ(L,m)H(Σ,E_q),
For any surface Σ∈ℱ and any ε>0, to
estimate H(Σ) we may assume L(∂Σ)<+∞. Otherwise, we
have H(Σ)=0.
Let ℒ be the set of continuous points of H_L
=H_L(E_q), with respect to L.
By Ahlfors' SFT, we can see that
H_0=lim_L→+∞H_L<+∞.
Since H_L increases with respect to L, it is clear that (0,+∞)\ℒ is at most a countable set. Thus for
each L∈ℒ, there exists a positive number δ_L such that
for each L^'∈(L-δ_L,L+δ_L), we have
H_L-π/2L<H_L^'<H_L+π/2L.
Now we can state our main theorem as follows.
Let L∈ℒ and let Σ=(
f,Δ) be a covering surface in 𝒞^∗(L,m). Assume that
H(Σ)>H_L-π/2L(∂Σ).
Then there exists a surface Σ^'=( f^',Δ) such that
(i) Σ^'∈ℱ_r(L,m).
(ii) H(Σ^')≥ H(Σ) and L(∂Σ^')≤
L(∂Σ). Moreover, at least one of the inequalities is strict if
Σ∉ℱ_r(L,m).
(iii) When L(∂Σ^')=L(∂Σ), we have
∂Σ^'=∂Σ and they share the same
ℱ(L,m)-partitions (<ref>) and (<ref>).
Now we outline the structure of this paper. Section 2 introduces some
fundamental properties of covering surfaces, especially the surgeries to sew
two surfaces along the equivalent boundary arcs. In Section 3, we remove the
non-special branch points of the given surface, and in Section 4 we finish our
proof of the main theorem.
§ ELEMENTARY PROPERTIES OF COVERING SURFACES
This section consists of some useful properties of covering surfaces. For a
path Γ on S given by z=z(t),t∈ t_1,t_2], -Γ is
the opposite path of Γ given by z=z(-t),t∈-t_2,-t_1].
A convex domain enclosed by a convex circular arc c and its
chord I is called a lune and is denoted by 𝔇^'( I,c) ,𝔇^'( I,θ(c)) ,
𝔇^'( I,L(c)) , or 𝔇^'( I,k(c)) where θ is the interior angle at the two
cusps, k is the curvature of c and I is oriented such that[The
initial and terminal points of I and c are the same, respectively, in the
notation 𝔇^'(I,θ), in other words, 𝔇
^'(I,θ) is on the right hand side of I.] ∂𝔇^'( I,θ) =c-I.
For two lunes 𝔇^'( I,θ_1) and
𝔇^'( -I,θ_2) sharing the common chord
I we write
𝔇( I,θ_1,θ_2) =𝔇^'( I,θ_1) ∪ I^∘∪𝔇^'(
-I,θ_2)
and called the Jordan domain 𝔇( I,θ_1,θ
_2) a lens. Then the notations 𝔇( I,l_1
,l_2), 𝔇( I,c_1,c_2)and
𝔇( I,k_1,k_2) are in sense and denote the same
lens, when l_j=L(c_j) and k_j is the curvature of c_j, j=1,2,
say,
𝔇( I,c_1,c_2) =𝔇(
I,l_1,l_2) =𝔇( I,k_1,k_2)
=𝔇^'( I,l_1) ∪ I^∘∪𝔇^'( -I,l_2)
=𝔇^'( I,c_1) ∪ I^∘∪𝔇^'( -I,c_2) =𝔇^'(
I,k_1) ∪ I^∘∪𝔇^'( -I,k_2
) .
For a lune 𝔇^'( I,τ) , whether τ
denotes the length l, the angle θ, or the curvature k is always
clear from the context, and so is for the lens 𝔇( I,τ
_1,τ_2) . By definition, we have 0<θ_j≤π for
j=1,2, but for the domain 𝔇( I,θ_1,θ
_2) it is permitted that θ_1 or θ_2 is zero, say
𝔇( I,θ_1,θ_2) reduces to
𝔇^'( I,θ_1) or 𝔇^'( -I,θ_2) . By definition of 𝔇
(I,θ,θ) we have
𝔇(I,θ,θ)=𝔇^'( I,θ)
∪𝔇^'( -I,θ) ∪ I^∘,
and θ∈(0,π]. If I=1,0,-1 and θ=π/2, for
example, 𝔇( I,θ,θ) =Δ and
𝔇^'( I,θ) =Δ^+ is the upper half
disk of Δ.
Let Σ=( f,U) ∈ℱ and let
p∈∂ U. If f is injective near p, then f is homeomorphic in a
closed Jordan neighborhood N_p of p in U, and then f(N_p) is a
closed Jordan domain on S whose boundary near f(p) is an SCC arc, or two
SCC arcs joint at f(p), and thus the interior angle of f(N_p) at f(p)
is well defined, called the interior angle of Σ at p and denoted by
∠(Σ,p).
In general, we can draw some paths {β_j}_j=1^k in U
with ∪_j=1^kβ_j\{p}⊂ U and β_j∩β_i={p}if i≠ j, such that each ( f,β_j)
is a simple line segment on S, ∪_j=1^kβ_j divides a closed
Jordan neighborhood N_p of p in U into k+1 closed Jordan
domains U_jwith p∈U_j,j=1,…,k+1, and
U_i∩ U_j=∅if i≠ j, and f restricted to
U_j is a homeomorphism with ( f,U_j)
∈ℱ for each j. Then the interior angle of Σ at p is
defined by
∠( Σ,p) =∑_j=1^k+1∠( (
f,U_j) ,p) .
(i). (Stoilow's Theorem <cit.>
pp.120–121) Let U be a domain on ℂ and let
f:U→ S be an open, continuous and discrete mapping. Then there
exist a domain V on ℂ and a homeomorphism
h:V→ U, such that f∘ h:V→ S is a holomorphic mapping.
(ii). Let Σ=(f,U) be a surface
where U is a domain on ℂ. Then there exists a domain
V on ℂ and an OPH h:V→U such that f∘ h:V→ S is a holomorphic mapping.
(iii) Let Σ=(f,U)∈𝐅.
Then there exists an OPH φ:U→U such
that f∘φ is holomorphic on U.
What f is discrete means that f^-1(w)∩ K is finite for any compact
subset K of U.
Let Σ=(f,U) be a
surface where U is a domain on ℂ. Then f:U→ S is the restriction of an OPCOFOM g defined in a
neighborhood U_1 of U, and thus by Stoilow's theorem, there
exists a domain V_1 on ℂ and an OPH h:V_1
→ U_1 such that g∘ h is holomorphic on V_1 and then for
V=h^-1(U), f∘ h is holomorphic on
V, and thus (ii) holds.
Continue the above discussion and assume U is a
Jordan domain. Then V is also a Jordan domain and by Riemann mapping theorem
there exists a conformal mapping h_1 from U onto V and by
Caratheodory's extension theorem h_1 can be extended to be homeomorphic
from U onto V, and thus the extension of h∘
h_1 is the desired mapping φ in (iii).
For two curves (α_1,[a_1,b_1]) and (α
_2,[a_2,b_2]) on S, we call they equivalent and write
(α_1,[a_1,b_1])∼(α_2,[a_2,b_2])
if there is an increasing homeomorphism τ:[a_1,b_1]→
a_2,b_2] such that α_2∘τ=α_1. For two surfaces
(f_1,U_1) and (f_2,U_2), we call they
equivalent and write (f_1,U_1)∼(f_2,U_2)
if there is an orientation-preserving homeomorphism (OPH) ϕ:U_1→U_2 such that f_2∘ϕ=f_1.
By our convention , for any covering surface Σ=(f,U) over
S, f is the restriction of an OPCOFOM f defined on a Jordan
neighborhood V of U. By Theorem <ref>, there is a
self-homeomorphism h of V such that f∘ h is holomorphic on V.
Thus, Σ is equivalent to the covering surface (g,U_1),
where U_1=h^-1(U) and g=f∘ h is holomorphic
on U_1. For any two equivalent surfaces Σ_1
=(f_1,U_1) and Σ_2=(f_2,U_2), we have
A(f_1,U_1)=A(f_2,U_2), L(f_1,∂U_1)=L(f_2,∂U_2) and n
(f_1,E_q)=n(f_2,E_q) for a fixed set E_q. Thus we can
identify the equivalent surfaces and for any surface Σ=(f,U
), we may assume f is holomorphic in U.
Theorem <ref> is a powerful tool to explain the connection between OPCOFOM
and the holomorphic map. The following lemma is a consequence of Theorem
<ref>. We shall denote by D(a,δ) the disk on S with center a and
spherical radius δ. Then Δ⊂ S is the disk D(0,π/2).
Let (f,U)be a surface, U be a domain on
ℂ bounded by a finite number of Jordan curves and (
f,∂ U) is consisted of a finite number of simple circular arcs
and let q∈ f(U). Then, for sufficiently small disk
D(q,δ) on Swith δ<π/2, f^-1(D(q,δ)
)∩U is a finite union of disjoint sets {U_j
}_1^n in U, where each U_j is a Jordan domain in U,
such that for each j, U_j∩ f^-1(q) contains exactly one
point x_j and (A) or (B) holds:
(A) x_j∈ U_j⊂U_j⊂ U and f:U_j
→D(q,δ) is a BCCM such that x_j is the only
possible branch point.
(B) x_j∈∂ U, f is locally homeomorphic on U_j
\{x_j}, and when ( f,U) ∈ℱ, the following conclusions (B1)–(B3) hold:
(B1) The Jordan curve ∂ U_j has a partition α_1(
p_1,x_j) +α_2( x_j,p_2) +α_3(
p_2,p_1) such that α_1+α_2=( ∂
U) ∩∂ U_j is an arc of ∂ U, α_3^∘⊂ U, c_j=( f,α_j) is an SCC arc for j=1,2,
and c_3=( f,α_3) is a locally SCC[The
condition δ<π/2 makes ∂ D( q,δ) strictly
convex, and it is possible that ( f,α_3^∘) may
describes ∂ D( q,δ) more than one round, and in
this case ( f,α_3^∘) is just locally SCC.] arc in
∂ D( q,δ) from q_2=f( p_2) to
q_1=f( p_1). Moreover, f is homeomorphic in a
neighborhood of α_j\{x_j} in U for j=1,2,
and
∂( f,U_j) =( f,∂ U_j)
=c_1+c_2+c_3.
(B2) The interior angle of ( f,U_j) at p_1
and p_2 are both contained in [7π/16,9π/16].
(B3) There exists a rotation ψ of S with ψ(q)=0 such that one of
the following holds:
(B3.1) q_1=q_2,( f,α_1) =q_1q
=q_2q=-( f,α_2) , say, ( f,α
_1+α_2) =q_1q+qq_1, and (
ψ∘ f,U_j) is equivalent to the
surface[Here δ z^ω_j is regarded as the mapping
z↦δ z^ω_j∈ S,z∈Δ^+, via the
stereographic projection P.] ( δ z^ω_j:Δ^+) on S so that
( δ z^ω_j,[-1,1]) =a_δ
,0+0,a_δ,
where ω_j is an even positive integer and a_δ∈(
0,1) with d( 0,a_δ) =δ.
(B3.2) q_1≠ q_2, as sets c_1∩ c_2={q}, and (
ψ∘ f,U_j) is equivalent to the the surface
( F,Δ^+∪𝔇_1^'
∪𝔇_2^') so that the following holds.
(B3.2.1) 𝔇_1^'=𝔇^'(
-1,0,θ_1)and 𝔇_2^'=𝔇^'( 0,1,θ_2), such that
for each j=1,2,θ_j∈0,π/4]. Moreover θ_1=0
(or θ_2=0) when c_1=q_1q (or c_2=qq_2), and in this case 𝔇_1^'=∅ (or
𝔇_2^'=∅). See Definition <ref> for the
notation 𝔇^'( ·,·) .
(B3.2.2) ( F,Δ^+) is the surface T=(
δ z^ω_j,Δ^+), where ω_jis
a positive number which is not an even number and even may not be an integer,
( F,𝔇_1^') is the lune
ψ( 𝔇^'( q_1q
,c_1) ) and ( F,𝔇_2^'
) is the lune ψ( 𝔇^'(
qq_2,c_2) ) . That is to say, (
f,U_j) is obtained by sewing the sector
ψ^-1( T) with center angle[This angle maybe
larger than 2π as the sector ( z^3,Δ^+) at 0.] ω_jπ, and the closed lunes 𝔇
^'( q_1q,c_1) and 𝔇^'( qq_2,c_2) along
q_1q and qq_2 respectively.
(A) follows from Stoilow's theorem directly when x_j∈ U. (B) follows
from (A) and the assumption ( f,U) ∈ℱ,
by considering the extension of f which is an OPCOFOM in a neighborhood of
x_j in ℂ.
We list more elementary conclusions deduced from the previous
lemma directly and more notations. Let Σ=(f,U)∈ℱ, q∈ f(U), δ, x_j, U_j and α_1
+α_2 be given as in Lemma <ref>.
(A) If for some j, x_j∈Δ, then by Lemma <ref> (A), f is a
BCCM in the neighborhood U_j of x_j in Δ, and the order
v_f(x_j) of f at x_j is well defined, which is a positive integer,
and f is a v_f(x_j)-to-1 CCM on U_j\{x_j}.
(B) If for some j,x_jis contained in ∂Δ, then, using
notations in Lemma <ref> (B), there are two possibilities:
(B1) q_1=q_2, the interior angle of Σ at x_j equals
ω_jπ, and the order v_f( x_j) is defined to be
ω_j/2, which is a positive integer.
(B2) q_1≠ q_2,c_1+c_2 is a simple arc from q_1 to q, and
then to q_2. In this case the interior angle of Σ at x_j equals
ω_jπ+φ_1+φ_2, where φ_1 and φ_2
are the interior angles of 𝔇^'( q_1
q,c_1) and 𝔇^'( qq_2
,c_2) at the cusps, and we defined the order of f at x_j to
be the least integer v_f( x_j) with v_f(
x_j) ≥( ω_jπ+φ_1+φ_2) /2π. Since ω_jπ+φ_1+φ_2≥ω_jπ>0, we have
v_f( x_j) ≥1 and f is injective on U_j
\{ c_1+c_2} iff v_f( x_j) =1.
This is also easy to see by Corollary <ref> (v).
(C) The number v_f(x_j) can be used to count path lifts with the same
initial point x_j: when x_j∈Δ, any sufficiently short line
segment on S starting from q=f(x_j)has exactly v_f(
x_j) f-lifts starting from x_j and disjoint in Δ\{x_j}; and when x_j∈∂Δ, for each arc β
of the two sufficiently short arcs of ∂Δ with initial point
x_j, (f,β) is simple and has exactly v_f(x_j)-1 f-lifts
{β_j} _j=1^v_f( x_j) -1 with the
same initial point x_j, β_j\{x_j}⊂Δ for
each j and they are disjoint in Δ. This is also easy to see by
Corollary <ref> (v).
(D) A point x∈U is called a branch point of f (or
Σ) if v_f(x)>1, or otherwise called a regular point if
v_f( x) =1. We denote by C_f the set of all branch points
of f, and CV_f the set of all branch values of f. For a set
A⊂U, we denote by C_f( A) =C_f∩ A the
set of branch points of f located in A, and by CV_f(K)=CV_f∩
K the set of branch values of f located in K⊂ S. We will
write
C_f^∗( A) =C_f( A) \ f^-1
(E_q) and C_f^∗=C_f\ f^-1(E_q)=C_f(
U) \ f^-1(E_q).
(E) For each x∈U, b_f( x) =v_f(
x) -1is called the branch number of f at x, and for a set
A⊂U we write B_f( A) =∑_x∈ A
b_f( x) . Then we have b_f( x) ≠0 iff
C_f( x) ={x}, and B_f( A) =∑_x∈
C_f( A) b_f(x). We also define
B_f^∗( A) =B_f( A\ f^-1(E_q))
.
Then B_f^∗( A) ≥0, equality holding iff C_f^∗( A) =∅. When A=U is the domain of
definition of f, we write
B_f=B_f( U) and B_f^∗
=B_f^∗( U) .
Now we can state a direct Corollary to Lemma <ref>.
Let Σ=( f,U) ∈ℱ and let ( x_1,U_1) be a disk
of Σ with radius δ_1. Then, the following hold.
(i) f is locally homeomorphic on U_1\{x_1}; and
if ( x_1,U_1^') is another disk of Σ with
radius δ_1^'>δ_1, then U_1⊂
U_1^', whether x_1is in ∂ U or U.
(ii) If f is homeomorphic in some neighborhood of x_1 in U
(which may be arbitrarily small), or if f locally homeomorphic on
U, then the disk ( x_1,U_1) is a
one sheeted closed domain of Σ, say, f restricted to U_1 is a homeomorphism onto f( U_1) .
(iii) For each x_2∈ U_1\{x_1}, any closed disk (
x_2,U_2) of Σ is a one sheeted closed domain of
Σ, moreover, U_2⊂ U_1 when the radius of
( x_2,U_2) is smaller than δ-d( f(x_1
),f(x_2)) .
(iv) If x_1∈∂ U, f is regular at x_1 and (
f,∂ U) is circular near x_1, then ( f,U_1) is a convex and one sheeted closed domain of
Σ, which is in fact the closed lens 𝔇(
I,c_1,c_1^'), where c_1 and c_1^' are
circular subarcs of ∂Σ and the circle ∂ D(
f(x_1),δ_1) , I is the common chord, and the three paths
c_1,-c_1^',I have the same initial point. Moreover, if
∂Σ is straight at x_1, then f(U_1
)=𝔇^'( -I,c_1^')
=𝔇^'( -c_1,c_1^')is
half of the disk D(f(x_1),δ_1) on the left hand side of
diameter c_1 (see Definition <ref> for lenses and lunes).
(v) For any x∈U_1, there exists a path I(
x_1,x) in U_1 from x_1 to x such that
I( x_1,x) is the unique f-lift of f(x_1
)f(x). That is to say, ( f,U_1) can be foliated
by the family of straight line segments {( f,I( x_1,x)
) :x∈∂ U_1} which are disjoint in U_1
\{x_1}.
(vi) For each x∈∂ U, the interior angle of Σ at x is positive.
Lemma <ref> also implies a criterion of regular point.
Let (f,U)∈ℱ. Then the following hold.
(A) For each a∈U,f restricted to some neighborhood of p in
U is a homeomorphism if one of the following alternatives holds.
(A1) p∈ U and p is a regular point of f.
(A2) p∈∂ U, p is a regular point of f and (f,∂ U) is
simple in a neighborhood of p on ∂ U.
(B) For any SCC arc ( f,α) of ∂Σ=(
f,∂ U) , f restricted to a neighborhood of α^∘
in U is a homeomorphic if and only if h has no branch point on
α^∘. Here α^∘ means the interior of the arc α.
The hypothesis in condition (A2) that (f,∂ U) is simple cannot be
ignored. See the following example.
Take f(z)=z^2 for any z∈Δ^+. Then f is regular at
z=0 but not injective in any neighborhood of 0 in Δ^+.
The following lemma shows that how to sew two surfaces together into one
surface along the equivalent curves. This is an important tool in Section 4.
For j=1,2, let Σ_j=(f_j,U_j) be a surface and let
α_j=α_j( x_j1,x_j2) be a proper arc of
∂ U_j such that ( f_j,α_j) is a simple arc
with distinct endpoints. If
(f_1,α_1)∼-(f_2,α_2),
then (f_1,U_1) and (f_2,U_2)can be
sewn along ( f_1,α_1) to become a
surface Σ_3=(f_3,Δ), such that the following hold:
(i) There exist orientation-preserving homeomorphisms (OPHs)
h_1:U_1→Δ^+ and h_2
:U_2→Δ^-, called
identification mappings (IMs), such that
(h_1,α_1)∼-1,1]∼-( h_2,α_2)
=( h_2,-α_2) ,
f_1∘ h_1^-1( x) =f_2∘ h_2^-1(x),∀
x∈-1,1],
and
f_3(z) = { f_1∘ h_1^-1(z) for z∈Δ^+;  f_2∘ h_2^-1(z) for z∈Δ^-∖[-1,1] }
is a well defined OPCOFOM, and we have the equivalent relations
(f_3,Δ^+)∼(f_1,U_1),(f_3
,Δ^-)∼(f_2,U_2),
∂Σ_3=( f_3,( ∂Δ) ^+)
+( f_3,( ∂Δ) ^-) ∼(
f_1,( ∂ U_1) \α_1^∘)
+( f_2,( ∂ U_2) \α_2^∘) ,
and
(f_3,[-1,1])∼(f_1,α_1)∼(f_2,-α_2).
(ii)
L(∂Σ_3)=L(∂Σ_1)+L(∂Σ_2
)-2L(f_2,α_2),
A(Σ_3)=A( Σ_1) +A(Σ_2),
n( Σ_3) =n( Σ_1)
+n( Σ_2) +#( γ^∘∩
E_q) ,
and
R(Σ_3)=R(Σ_1)+R(Σ_2)-4π#( γ^∘∩
E_q) .
(iii) z∈ C_f_3( Δ\{-1,1}) if
and only if h_1^-1(z)∈ C_f_1( U_1\∂α_1) or h_2^-1(z)∈ C_f_2(
U_2\∂α_2). In particular, if
f_1(∂α_1)⊂ E_q, then f_2(∂α
_2)⊂ E_q and in addition
CV_f_3(S\ E_q)=CV_f_1(S\ E_q)∪ CV_f_2
(S\ E_q).
The conclusion (i) in fact gives a routine how to sew Σ
_1 and Σ_2. By (<ref>), there exists an OPH[Note
that -α_2 is the same path with opposite direction, not the set
{-y:y∈α_2}.] φ:α_1→-α_2 such
that
( f_1,α_1) =( f_2∘φ,α_1)
,
that is
f_2( φ(x)) ≡ f_1(x),∀ x∈α_1.
Let h_1:U_1→Δ^+ be any OPH such
that h_1(α_1)=[-1,1]. Then let h_2:U_2
→Δ^- be an OPH such that
h_2( y) ≡ h_1( φ^-1(y) ),∀
y∈α_2.
In fact, h_2|_α_2 defined by (<ref>) is an OPH from
α_2 onto [1,-1] and can be extended to be an OPH h_2 from
U_2 onto Δ^-. The pair of h_1 and
h_2 are the desired mappings satisfying (i). Then (ii) is trivial to verify.
To prove (iii) we may assume that Σ_1 and Σ_2 are the
surfaces Σ_±=(f_±,Δ^±) such that f_±
agree on [-1,1], and then f_3 defined by f_± on Δ^±is an OPLM. When x∈(-1,1) is a branch point of f_+ or
f_-, x is obviously a branch point of f_3. Since f_± are the
restrictions of f_3 to Δ^±, and (
f_+,[-1,1]) and ( f_-,[1,-1]) are simple with
opposite direction, if x∈( -1,1) is not a branch point of
f_±, then f_± are homeomorphisms in neighborhoods V^± of x
in Δ^± and the simple arc ( f_3,[-1,1]) separates f_+(V^+\-1,1]) and f_-(V^-
\-1,1]), and thus f_3 is homeomorphic on a neighborhood
of x and so x cannot be a branch point of f_3. Therefore x∈
C_f_3 iff x∈ C_f_1∪ C_f_2. In consequence we have
C_f_3( Δ\{-1,1}) =C_f_1
( Δ^+\{-1,1}) ∪ C_f_2(
Δ^-\{-1,1}) , and (iii) follows.
The condition (f_1,α_1)∼(f_2,-α_2) is crucial. Two
copies of the hemisphere S^+ cannot be sewn along their common
edge 0,1⊂ S to become a surface in ℱ, but
S^+ and S^-, with natural edges 0,1
and -0,1=1,0 respectively, can be sewn along
0,1 to become a surface in ℱ.
Lemma <ref> will be used frequently when we patch the covering surfaces.
The condition in this lemma that α_j are proper arcs of ∂
U_j can be replaced by that one of the curves α_1 and α_2
is proper. Indeed, if only α_1 is proper, then we can find partitions
α_1=α_11+α_12 and α_2=α_21+α_22
so that α_11∼α_21 and α_12∼α_22, and we
can use Lemma <ref> twice.
For a surface Σ=(f,U) and an arc β on S, we define
the lift of β by f as an arc α in U satisfying that (f,
α) ∼β. By Remark <ref>, for any point p∈ U, a
sufficiently short path β from f(p) has exactly v_f(p) lifts from
p.
Let Σ=(f,U
)∈ℱ, p_0∈U and β a polygonal simple path
on S with distinct endpoints. Assume that β has two f-lifts
α_j, j=1,2, with initial point p_0, such that α
_1^∘∩α_2^∘=∅. Then
(i) f(U)=S if α_1 and α_2 terminate at the same
point; moreover, ( f,U) can be sewn along (
f,α_1) ∼( f,α_2) becoming a closed
surface ( f_0,S) .
(ii) If α_1∪α_2 is a proper arc of ∂ U, then the
following (ii1) and (ii2) hold.
(ii1) (f,U) can be sewn along β to become a covering
surface Σ_1=(g,Δ)∈ℱ, such that
A(g,Δ) =A(f,U),
L(g,∂Δ) =L(f,∂ U)-L(f,α_1∪α_2)
=L(f,∂ U)-2L(β),
n( Σ_1,E_q) =n(
Σ,E_q) +#{f( ( α_1∪α_2)
^∘) ∩ E_q}
=n( Σ,E_q) +#{[β^∘∪{f(p_0)}]∩ E_q}.
(ii2) ( f,N) and ( g,N_1) are
equivalent surfaces, where N=U\( α_1
∪α_2) and N_1=Δ\[0,1], and
thus (f, (∂ U) \( α_1∪α_2)
^∘ ), regarded as a closed curve, is equivalent to (g,∂Δ).
(iii) If α_1⊂∂ U,α_2\{
p_0}⊂ U, the terminal point of β is in E_q but all
other points of β are outside of E_q, then there exists a covering
surface Σ_1 such that
R(Σ_1) =R(Σ)+4π,
L(∂Σ_1) =L(∂Σ),
and ∂Σ_1 is equivalent to the closed curve ∂Σ.
We first consider that α_1 and α_2 have the same terminal
point. Then they bound a Jordan domain V in U, and thus f(U)⊃ f(V)=S by the argument principle. On the other hand,
we can sew the closed domain V by identifying α_1 and
α_2 so that the points x∈α_1 and y∈α_2 are
identified if and only if f(x)=f(y), to obtain the surface S. Then
(f,V) becomes a closed surface ( f_0,S). So
(i) holds true.
To prove (ii), we may assume that α_1 and ∂ U have the same
orientation. Then α_2 and ∂ U have opposite orientations,
and there exists an orientation-preserving homeomorphism ϕ:Δ^+→U with ϕ([0,1])=α_1,
ϕ([-1,0])=-α_2 and f∘ϕ(x)=f∘ϕ(-x) for any
x∈[0,1]. Let g(z)=f∘ϕ(re^iθ/2) with z=re^iθ∈Δ, θ∈[0,2π]. Then Σ_1=(g,Δ)∈ℱ is a covering surface which satisfies the conclusion
of (ii).
To prove (iii), let h be an OPCOFOM map from Δ^+ onto
U such that h restricted to Δ^+
∖-1,1] is a homeomorphism onto U\α_2. Moreover, we assume that h maps both [-1,0] and [0,1]
homeomorphically onto α_2 with opposite direction, and maps the arc
α_1^'={ e^√(-1)θ:θ∈0,π/2]} homeomorphically onto α_1. Then we consider the
surface Σ^'=( f∘ h,Δ^+).
After rescaling the parameter of ∂Σ^', we may assume that
Σ^' satisfies (ii), with α_1 and α_2 of (ii)
being replaced by [0,1] and α_1^'. Then by identifying
α_1^' and [0,1] as in (ii), we can sew Σ^'
to obtain a new surface Σ_1. It is clear that n
(Σ_1,E_q)=n(Σ_1^',E_q)=n
(Σ,E_q)-1, and thus Σ_1 satisfies (iii).
(<cit.> p. 32–35) Let Σ=(f,Δ
)∈ℱ and β be a path on S with initial point q_1.
Assume that α⊂Δ is an f-lift of some subarc of
β from q_1, and α^∘⊂Δ. Then α can be
extended to an f-lift α^' of a longer subarc of β with
α^'∘⊂Δ, such that either α^'
terminates at a point on ∂Δ, or α^' is the
f-lift of the whole path β.
The following lemma is obvious, which states that two different interior
branch points can be exchanged.
Let Σ=(f,Δ)∈ℱ, b∈Δ be a branch
point of f with v_f(b)=d, and δ>0 be a sufficiently small number.
Then there exists a Jordan neighborhood V of b in Δ such that
f:V→D(f(b),δ) is a d-to-1 BCCM so that b
is the unique branch point, and for any y_1 with d(f(b),y_1)<δ and any b_1∈ V, there exists a surface Σ_1=(f_1,Δ)∈ℱ such that f_1 restricted to Δ\ V equals f and f_1:V→D(f(b),δ) is a d-to-1 branched covering map such that b_1 becomes the unique
branch point of f_1 in V, y_1=f_1(b_1) and v_f_1
(b_1)=v_f(b).
The following results are essentially consequences of argument principle.
Let (f,Δ)∈ℱ and let D be a
Jordan domain on S such that f^-1 has a univalent branch g defined on
D. Then g can be extended to a univalent branch of f^-1 defined on
D̅.
The proof of this lemma is almost the same as that of Lemma 5.2 in <cit.>.
Let D_1 and D_2 be Jordan domains on ℂ or S
and let f:D̅_1→D̅_2 be a map such that
f:D_1→ f(D_1) is a homeomorphism. If
f(∂ D_1)⊂∂ D_2, then f(D_1
)=D_2.
§ REMOVING BRANCH POINTS OUTSIDE F^-1(E_Q)
In this section, we will introduce the surgeries to remove branch points
outside f^-1(E_q). Before the key techniques, we remark some properties
of the partitions of covering surface Σ=( f,Δ) ∈𝒞( L,m) .
Let Γ=( f,∂Δ) be a closed curve
in S which consists of a finite number of SCC arcs. We define 𝔏
( Γ) to be the minimal integer m with the following
property: there exists closed arcs γ_j1,γ_j2,j=1,…,m, such
that
Γ =γ_02=γ_11+γ_12;
γ_12 =γ_21+γ_22;
…
γ_m-1,2 =γ_m1,
in which for each j=1,2,…,m, γ_j1 is either a simple closed arc
of γ_j-1,2, or a folded path I+( -I) where I is a
maximal simple arc such that I+( -I) is a folded arc of
γ_j-1,2. Note that the same closed curve γ_jk may have
different initial point in different places.
Note that 𝔏( ∂Σ) <+∞ if Σ∈𝒞( L,m). The following examples give an intuitive
explanation of 𝔏( Γ).
(1) If Γ is simple or Γ=ab+ba, then
𝔏( Γ) =1. When Γ is a point, we write
𝔏( Γ) =0.
(2) For the closed curve Γ in Figure <ref> (1) we have
𝔏( Γ) =2.
(3) The closed curve Γ=ABCDEFGHIJKLMNOPQA in Figure <ref> (2), in
which CD,GH,KLM,LM,NO are five straight line segments on S (KLM is
straight), contains no simple closed arcs, but it contains four maximal folded
closed arcs CDE, GHI, LMN, and NOP, and thus 𝔏(
Γ) =5.
The following lemma is trivial from Definition <ref>.
Let Γ be a closed curve on S which consists of a finite
number of simple circular arcs. If Γ has a partition Γ=γ
_1+γ_2 such that γ_1 is a simple closed arc, or a maximal
folded closed arc, then
𝔏( γ_2) =𝔏(Γ)-1.
Now we start to introduce some lemmas to deal with the non-special branch
points, i.e. the branch points not over E_q (correspondingly, the special
branch points mean the branch points over E_q). It is essentially similar
to previous results in <cit.>. We first establish a lemma to remove the
non-special branch points in the interior, that is, the branch points in
C_f^∗(Δ) (Recall Remark <ref> for the notations).
Let Σ=( f,Δ)
∈𝒞^∗( L,m) and assume that (<ref>)
holds. If C_f^∗(Δ)≠∅, then there exists a surface
Σ_1=( f_1,Δ) ∈𝒞^∗( L,m) such that
H(Σ_1)≥ H( Σ) ,L(∂Σ_1)≤
L(∂Σ),
and
B_f_1^∗( Δ) ≤ B_f^∗( Δ)
-1.
Moreover, L(∂Σ_1)=L(∂Σ) if and only if
∂Σ_1=∂Σ,H(Σ_1)≥ H( Σ)
and B_f_1^∗( ∂Δ) >B_f^∗(
∂Δ) .
Corresponding to Definition <ref>, we assume ∂Δ and
∂Σ have 𝒞^∗(L,m)-partitions
∂Δ=α_1( a_1,a_2) +α_2(a_2
,a_3)+…+a_m( a_m,a_1)
and
∂Σ=c_1( q_1,q_2) +c_2(q_2,q_3
)+…+c_m( q_m,q_1) ,
where q_j=f( a_j) and c_j( q_j,q_j+1)
=( f,α_j( a_j,a_j+1) ) ,j=1,…,m. By
definition of 𝒞^∗(L,m), f has no branch points in
α_j^∘∩ f^-1(E_q) for each j=1,2,…,m.
Let p_0∈C_f^∗( Δ), say, p_0 is a
non-special branch point of f with order v and let b_0=f(p_0). Let
b be a point in E_q such that d( b_0,b) <π. Then
there is a polygonal simple path η=η( b_0,b) on S
from b_0 to b such that
η^∘∩ E_q=∅, η^∘∩{q_j}_j=1^m=∅, and η^∘ contains no branch value of
f. Moreover, η^∘ intersects ∂Σ perpendicularly and
η∩∂Σ contains only finitely many points.
We can extract a maximal subarc η_1=η( b_0,b_1) of η with b_1∈η\{b_0} such that η_1 has
v distinct f-lifts β_l=β_l( p_0,p_l)
,l=1,2,…,v, starting from p_0 with,
β_l^∘⊂Δ, l=1,…,v,
and that
β_l_1^∘∩β_l_2^∘=∅, 1≤
l_1<l_2≤ v.
The maximality of η_1 means that either b_1=b∈ E_q, or some of
{p_l}_l=1^v are contained in ∂Δ. We write
A=∪_l=1^vβ_l, and assume that β_l are arranged
anticlockwise around the common initial point p_0. Thus, by Condition
<ref>, the following claim holds.
(i) { p_l} _l=1^v⊂Δ only if b_1=b;
(ii) { p_l} _l=1^v⊂ f^-1(E_q) if and only if
b_1=b∈ E_q;
(iii) p_l_1=p_l_2 for some l_1≠ l_2 if and only if
p_l_1 is also a branch point and b_1=b.
Then we have only five possibilities:
Case (1). p_l_1=p_l_2 for some l_1≠ l_2
and p_l_1∈∂Δ.
Case (2). p_l_1=p_l_2 for some l_1≠ l_2
and p_l_1∈Δ.
Case (3). p_l,l=1,…,v, are distinct from each other
and {p_l}_l=2^v⊂Δbut p_1∈∂Δ.
Case (4). p_l,l=1,…,v, are distinct from each other
and {p_l}_l=1^v⊂Δ.
Case (5). p_l,l=1,…,v, are distinct from each other
and there exist some distinct l_1and l_2 such that both p_l_1
and p_l_2 are contained in ∂Δ.
Now we will discuss the above cases one by one.
Cases (1) and (2) cannot occur.
Assume Case (1) occurs. By Claim <ref> (iii), p_l_1(=p_l_2)
is a branch point in f^-1(E_q) and b_1=b. Since {β_j
}_j=1^v are arranged anticlockwise, we can derive that p_l_1
=p_l_2=p_l_1+1, which means that there exist two adjacent f-lifts
β_l_1 and β_l_1+1 whose terminal points coincide. The
f-lift β_l_1-β_l_1+1 encloses a domain D⊂Δ.
Thus we can cut D off Δ along its boundary and sew the remained part
to obtain a new surface Σ_1=( f_1,Δ)
such that f_1=f in a neighborhood of ∂Δ\{p_l_1
} in Δ. Then ∂Σ_1=∂Σ. We also
have p_l_1∈{a_j}_j=1^m since p_l_1 is a branch point in
f^-1(E_q) and f has no branch points in α_j^∘∩
f^-1(E_q) for j=1,2,…,m. Thus (<ref>) and (<ref>) are
𝒞^∗( L,m) partitions of ∂Σ_1
which implies that Σ_1∈𝒞^∗( L,m) . By
Lemma <ref> (i) we have:
(f,D) can be sewn along its boundary
(f,β_l_1)∼(f,β_l_2)=η, resulting in a closed surface
Σ_0=(f_0,S).
Assume the degree of Σ_0 is d, then by Riemann-Hurwitz formula we
have
n( Σ_0,E_q) =qd-∑_x∈
f_0^-1(E_q)(v_f_0(x)-1)
≥ qd-∑_x∈ S(v_f_0(x)-1)
≥(q-2)d+2.
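For orientation, note that this bound is sharp in the simplest case: if d=1, then f_0 is a homeomorphism of S, so every point of E_q has exactly one preimage and there is no branching; hence n( Σ_0,E_q) =q=(q-2)· 1+2 and the inequality becomes an equality.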
On the other hand, (∂ D)∩ f^-1(E_q)={p_l_1}. Thus we
have n( Σ_1) =n(
Σ) -n( Σ_0) +1≤n( Σ) -(q-2)d-1. It is clear that A( Σ
_1) =A(Σ)-4dπ. Then we have
R(Σ_1) =( q-2) A( Σ_1)
-4πn( Σ_1,E_q)
≥( q-2) A( Σ) -4π( q-2)
d-4πn( Σ,E_q) +4π(q-2)d+4π
=R(Σ)+4π,
and thus H(Σ_1)=H(Σ)+4π/L(∂Σ_1), which
with ∂Σ_1=∂Σ and (<ref>) implies a
contradiction:
H_L≥ H(Σ_1)≥ H_L-π/2L(∂Σ)+4π/L(∂Σ_1)=H_L+7π/2L(∂Σ_1).
Thus Case (1) cannot occur.
Following the same arguments, one can show that Case (2) also cannot occur.
Discussion of Case (5).
Assume Case (5) occurs. Then the f-lift -β_l_1+β_l_2
divides Δ into two Jordan domains Δ_1 and Δ_2 with
∂Δ_1=-β_l_2+β_l_1+τ_1, ∂Δ_2=-β_l_1+β_l_2+τ_2,
where τ_1 is the arc of ∂Δ from p_l_1 to p_l_2, and τ_2=( ∂Δ) \τ_1^∘.
Then by Lemma <ref>, we can sew ( f,Δ_1) and ( f,Δ_2) along -β_l_1
+β_l_2 respectively to obtain two new surfaces Σ_1=(
f_1,Δ) and Σ_2=( f_2,Δ) such that
∂Σ=∂Σ_1+∂Σ_2,
R( Σ_1) +R( Σ_2) =R(Σ),
and that Σ_1 and Σ_2 satisfy the following condition.
τ_1^∘ (resp. τ_2^∘) has a neighborhood
N_1 (resp. N_2) in Δ_1 (resp. Δ_2). And ( ∂Δ) \{1} has a
neighborhood N_1^' (resp. N_2^') in Δ,
such that ( f_1,N_1^') (resp. ( f_2
,N_2^')) is equivalent to ( f_1,N_1)
(resp. ( f_2,N_2)).
Since each arc in partition (<ref>) is SCC and ( f,τ_1) (resp.( f,τ_2)) is closed, we may assume p_l_1
∈α_i_1( a_i_1,a_i_1+1) \{a_i_1
+1} and p_l_2∈α_i_1+k( a_i_1+k,a_i_1
+k+1) \{a_i_1+k+1} for some 0≤ k≤ m.
We shall show that 0<k<m. Otherwise, p_l_1 and p_l_2 are both
contained in α_i_1\{a_i_1+1} when k=0 or m. But
f is injective on α_j\{a_j+1} for each j, and thus
p_l_1=p_l_2, contradicting the assumption. Then
τ_1=α_i_1( p_l_1,a_i_1+1) +α_i_1
+1( a_i_1+1,a_i_1+2) +…+α_i_1+k(
a_i_1+k,p_l_2) ,
and
τ_2=α_i_1+k( p_l_2,a_i_1+k+1)
+α_i_1+k+1( a_i_1+k+1,a_i_1+k+2) +…
+α_i_1+m( a_i_1+m,p_l_1) ,
where a_i_1+j=a_i_1+j-m and α_i_1+j=α_i_1+j-m if
i_1+j>m, and either of the two partitions (<ref>) and (<ref>)
contains at most m terms.
We first show that Σ_1∈𝒞^∗( L,m) . We
may assume ∂Δ has a partition
∂Δ =α_1^'+α_2^'+…+α
_k+1^'
=α_1^'( a_1^',a_2^')
+α_2^'( a_2^',a_3^')
+…+α_k+1^'( a_k+1^',a_1^')
,
such that
( f,α_i_1( p_l_1,a_i_1+1) )
=( f_1,α_1^') ,
(f,α_i_1+1( a_i_1+1,a_i_1+2) ) =(
f_1,α_2^') ,
…
( f,α_i_1+k( a_i_1+k,p_l_2) )
=( f_1,α_k+1^'( a_k+1^',a_1^') ) .
Note that ∂Δ=α_1^' if and only if p_l_1
=a_i_1, p_l_2=a_i_1+1, c_i_1=( f,α_i_1
) is a whole circle, and (f_1,∂Δ)=( f,τ
_1). In this way, L(∂Σ_1)<L(∂Σ). It
follows from (<ref>), (<ref>) and Condition <ref> that, the
partition (<ref>) is a 𝒞^∗( L,m)-partition.
Similarly, Σ_2 also has a 𝒞^∗( L,m)-partition.
It is clear that
max{B_f_1^∗(Δ),B_f_2^∗(Δ)}≤ B_f_1^∗(Δ)+B_f_2^∗(Δ)=B_f^∗( Δ) -1.
Recalling the condition (<ref>), we deduce that max{H(Σ
_1),H(Σ_2)}≥ H(Σ). We may assume H(Σ_1)≥
H(Σ_2), otherwise we replace Σ_1 with Σ_2. Then
Σ_1 is the desired surface in Case (5) and in this case,
L(∂Σ_1)<L(∂Σ).
Discussion of Cases (3). Let A=∪_l=1^vβ_l and
Δ_1=Δ\ A. Then we obtain a surface F whose interior
is ( f,Δ_1) and whose boundary is
γ_1 =( ∂Δ) -β_1( p_0
,p_1) +β_2( p_0,p_2) -β_2(
p_0,p_2)
+…+β_v( p_0,p_v) -β_v( p_0
,p_v) +β_1( p_0,p_1) ,
where ∂Δ is regarded as a closed path from p_1 to p_1.
See (1) of Figure <ref> for the case v=3. Now we split A into a
simple path
γ =-β_1^''( p_0^2,p_1) +β
_2^'( p_0^2,p_2) -β_2^''(
p_0^3,p_2)
+…+β_v^'( p_0^v,p_v) -β_v
^''( p_0^1,p_v) +β_1^'(
p_0^1,p_1) ,
as in Figure <ref> (2). Via a homeomorphism from Δ_1^'
onto Δ_1, we obtain the surface F=( g,Δ
_1^') whose interior is equivalent to ( f,Δ
_1) and whose boundary ∂ F=( g,∂Δ
_1^') is equivalent to ( f,γ_1) . Then
it is easy to see that Σ can be recovered by sewing F along
β_l^' and β_l^'', which means by
identifying β_l^' and β_l^'',
l=1,2,…,v.
It is interesting that, by Lemma <ref> (ii), we can sew F by
identifying β_l^'' with β_l+1^', for
l=1,2,…,v-1, and β_v^'' with β_1^',
to obtain a new surface Σ_1=( f_1,Δ) .
Indeed, we can deform Δ_1^' as in Figure
<ref> (2) into Δ_1^'' as in Figure
<ref> (3) with p_1 fixed, and then deform Δ_1^'' homeomorphically onto the disk Δ omitting the union B of the
v line segments p_0^lp_1 for l=1,2,…,v, as in
Figure <ref> (4).
It is clear that A( Σ) =A( Σ_1) and
L(∂Σ)=L(∂Σ_1). When b_1=b, we see by b∈
E_q that {p_j}_j=1^v⊂ f^-1(E_q) and when b_1≠ b
we have A∩ E_q=∅. Thus
n( F,E_q) =n( f_1,E_q)
=#{f^-1(E_q)∩( Δ\ A) }=n( Σ) -( v-1) χ_E_q(
b_1) ,
where χ_E_q( b_1) =1 when b_1∈ E_q and
χ_E_q( b_1) =0 when b_1∉ E_q. Clearly, we
have ∂Σ∼∂Σ_1. Thus Σ_1∈𝒞( L,m) and
H( Σ_1) =H( Σ) +4π(
v-1) χ_E_q( b_1) /L(∂Σ).
If b_1=b, then by (<ref>) and (<ref>), we obtain a
contradiction that
H_L≥ H(Σ_1)>H_L-π/2L(∂Σ)+4π(
v-1) /L(∂Σ)>H_L.
Thus we have
b_1≠ b and A∩ f^-1(E_q)=∅,
which induces that Σ_1∈𝒞^∗( L,m) , and
that H( Σ_1) =H( Σ) .
After above deformations, all p_0^l, l=1,2,…,v, are regular points
of f_1. Thus
∑_l=1^v( v_f_1( p_0^l) -1)
=v_f( p_0) -v=0.
On the other hand, p_0 and { p_l} _l=2^v are the
only possible branch points of f on A∩Δ, and the cut B inside
Δ contains no branch point of f_1. Thus we have
B_f( {p_0,p_2,…,p_v}) ≥ B_f(
p_0) =v_f( p_0) -1=v-1,
and
B_f^∗( Δ) =B_f^∗( (
Δ\ A) ) +B_f^∗( {p_0,p_2
,…,p_v})
≥ B_f^∗( ( Δ\ A) ) +v-1
=B_f_1^∗( Δ\∪_l=1^vp_1
p_0^l) +v-1
=B_f_1^∗( Δ) +v-1
≥ B_f_1^∗( Δ) +1.
It is clear that
v_f_1( p_1) =v_f( p_1) +v_f(
p_2) +…+v_f( p_v) ≥ v_f(
p_1) +v-1>v_f( p_1) +1.
and thus we have by (<ref>) B_f_1^∗( p_1)
>B_f^∗( p_1) . On the other hand, we have b_f(
z) ≡ b_f_1( z) for all z∈(
∂Δ) \{p_1}. Thus we have
B_f_1^∗( ∂Δ) >B_f^∗(
∂Δ) .
This completes the proof of Case (3).
Case (4) cannot occur. In this case, b_1=b, {
p_l} _l=1^v⊂ f^-1(E_q) and A⊂Δ. The
discussion is similar to that of Case (3) with b_1=b, and we can deduce a
contradiction. Then, as in Figure <ref>, we can cut and split Δ
along A to obtain an annulus Δ_1=Δ\D with
∂Δ_1=∂Δ-∂ D, where ∂ D=β
_1^'-β_1^''+β_2^'-β_2
^''+…+β_v^'-β_v^''. Repeating
the same strategies in Case (3), we can obtain a new surface Σ
_1=( f_1,Δ) so that f_1 and f
coincide on a neighborhood of ∂Δ in Δ, which
implies that Σ_1∈𝒞^∗( L,m) . In Figure
<ref> (4), B=p_1p_0^1∪p_1p_0^2
∪p_1p_0^3∪…∪p_1p_0^v contains
only one point p_1 of f_1^-1(E_q), and thus
#[f^-1(E_q)∩Δ] =#[ f^-1(E_q)∩Δ\{p_l}_l=1^v] +v
=#[ f_1^-1(E_q)∩Δ\ B] +#[f_1
^-1(E_q)∩ B]+v-1
=#[f_1^-1(E_q)∩Δ]+v-1,
which implies
n( Σ_1) =n( Σ)
-v+1≤n( Σ) -1.
From the above arguments, we derive H(Σ_1)≥ H( Σ)
+4π/L(∂Σ). This again implies a contradiction.
Now our proof has been completed.
For a branch point a of f, we call ( a,f(a)) a branch pair
of f. In Case (3) of previous proof, f_1 can be understood as a movement
of the branch pair ( p_0,f(p_0)) of f to the branch pair
( p_1,f_1( p_1) ) of f_1 along the
curve β_1( p_0,p_1). Then ( p_0
,f(p_0)) is split into v regular pairs ( p_0^l
,f_1( p_0^l) ) =( p_0^l,f(
p_0) ), l=1,…,v, and ( p_1,f(p_1)
) becomes a branch pair of f_1 at the boundary point p_1, whose order
v_f_1( p_1) =∑_l=1^vv_f(
p_l). Meanwhile, all other branch pairs ( x,f(x))
remain unchanged, saying that there exists a homeomorphism h from
Δ\ A onto Δ\ B such that
( f,Δ\ A) is equivalent to (
f_1∘ h,Δ\ A) .
Let Σ=( f,Δ)
∈𝒞^∗( L,m), and assume that (<ref>)
holds. Then there exists a surface Σ_1=( f_1,Δ)
∈𝒞^∗( L,m) satisfying (<ref>) such that
C_f_1^∗( Δ) =∅,
H(Σ_1)≥ H( Σ) ,L(∂Σ_1)≤ L(
∂Σ) ,
and (i) or (ii) holds:
(i) C_f^∗( Δ) ≠∅and L(∂Σ_1)<L( ∂Σ).
(ii) H(Σ_1)=H( Σ) ,L(∂Σ_1)=L(
∂Σ) ,∂Σ_1=∂Σ; and moreover
B_f_1^∗( ∂Δ) >B_f^∗(
∂Δ)if and only if C_f^∗( Δ)
≠∅.
When C_f^∗( Δ) =∅, then Σ_1=Σ
is the desired surface and (ii) holds. So we assume C_f^∗(
Δ) ≠∅. Then by Lemma <ref>, there exists a
surface Σ_1^'=( f_1^',Δ)
∈𝒞^∗( L,m) such that
H(Σ_1^')≥ H( Σ) ,L(∂Σ
_1^')≤ L(∂Σ),
and
B_f_1^'^∗( Δ) ≤ B_f^∗(
Δ) -1.
Moreover, L(∂Σ_1^')=L(∂Σ) if and only if
∂Σ_1^'=∂Σ,H(Σ_1^')=H(
Σ) and C_f_1^'^∗( ∂Δ)
>C_f^∗( ∂Δ) . It is clear that Σ
_1^' again satisfies the inequality (<ref>). Repeating this
procedure at most B_f_1^'^∗( Δ) times, we
can obtain the desired surface Σ_1.
Next, we will establish some lemmas to remove the branch point on the boundary.
Let Σ=( f,Δ) ∈𝒞^∗( L,m) be a surface satisfying the inequality
(<ref>) with the 𝒞^∗( L,m)-partitions
(<ref>) and (<ref>). Suppose that
(A) f has no branch points in Δ\ f^-1(E_q);
(B) For the first term α_1( a_1,a_2) of (<ref>),
α_1( a_1,a_2) \{a_2} contains a branch
point p_0 of f with p_0∉ f^-1(E_q). p_1is a point in
α_1( p_0,a_2) such that f( p_0)
≠ f(p_1), [ α_1( p_0,p_1) \{p_1}] ∩ f^-1(E_q)=∅ and that α_1^∘( p_0,p_1) contains no branch point of f;
(C) For b_0=f( p_0) and b_1=f( p_1) ,
the subarc c_1^'=c_1( b_0,b_1) of c_1 has
v=v_f( p_0) distinct f-lifts β_1(
p_0,p_1) ,β_2( p_0,p_2) ,…,β
_v( p_0,p_v) , arranged anticlockwise around p_0, such
that β_l\{p_0,p_l}⊂Δ for l=2,…,v.
Then there exists a surface Σ_1=( f_1,Δ) ∈𝒞^∗( L,m) such that there is no
branch points of f_1 in Δ\ f_1^-1(E_q), and one of
the following alternatives (i) and (ii) holds:
(i) The partition number m≥2,
H(Σ_1)≥ H( Σ) ,L(∂Σ_1)<L(∂Σ),
and
#( ∂Δ) ∩ f_1^-1(E_q)≤#(
∂Δ) ∩ f^-1(E_q).
Moreover
#C_f_1^∗( ∂Δ) ≤#C_f^∗(
∂Δ) ,
with equality only if one of the following relations (<ref>
)–(<ref>) holds:
B_f_1^∗( ∂Δ) ≤ B_f^∗(
∂Δ) -1,
𝔏( ∂Σ_1) ≤𝔏(
∂Σ) -1,
Σ_1=( f_1,Δ) ∈𝒞^∗( L,m-1) ,
A(Σ_1)≤ A( Σ) -4π.
(ii) p_l,l=1,2,…,v, are distinct, { p_l} _l=2
^v⊂Δ, p_1∉ f^-1(E_q), ∂Σ_1
=∂Σ,H(Σ_1)=H(Σ), v_f(x)=v_f_1(x) for all
x∈( ∂Δ) \{p_0,p_1}, v_f_1
(p_0)=1 and
v_f_1( p_1) =v_f( p_1) +v-1,
and moreover, (<ref>) and (<ref>) are still 𝒞^∗(
L,m)-partitions of ∂Σ_1,
B_f_1^∗( ∂Δ) =B_f^∗(
∂Δ) ,
and
#C_f_1^∗( ∂Δ) ≤#C_f^∗(
∂Δ) ,
equality holding if and only if p_1∉ C_f^∗(
∂Δ) ∪ f^-1(E_q).
By (C) we have that β_1=α_1(p_0,p_1). Write A=∪
_l=1^vβ_l. We will imitate the arguments in the proof of Lemma
<ref>. Under the partitions (<ref>) and (<ref>), we first
consider the case that p_l_1=p_l_2 for some pair 1≤ l_1
<l_2≤ v. Then β_l_1-β_l_2 bounds a Jordan domain D
contained in Δ, and we may face the following three Cases.
Case (1). l_1=1 and l_2=2(see Figure <ref> (1)).
Case (2). 1<l_1and p_l_1∈∂Δ (see
Figure <ref> (3)).
Case (3). 1<l_1 and p_l_1∈Δ(see
Figure <ref> (5)).
We show that none of the above three cases can occur, by deducing the
contradiction H_L>H_L.
When Case (1) occurs, we put h_1 to be a homeomorphism from Δ onto Δ_1=Δ\ D so that
h_1 is an identity on ( ∂Δ_1) ∩∂Δ. Then put Σ_1=( f_1,Δ) with f_1=f∘ h_1 (See Figure <ref> (1) and (2)).
When Case (2) occurs, ∂ D divides Δ into three Jordan domains
Δ_1, D and Δ_2 as in Figure <ref> (3). We can glue
the surfaces ( f|_Δ_1,Δ_1) and ( f|_Δ_2,Δ_2)
together along the boundary (f,β_l_1)∼(f,β_l_2) to obtain
a new surface Σ_1=( f_1,Δ). Indeed,
we can take a continuous mapping h_2:Δ\
D→Δ so that h_2|_Δ_1:Δ_1→Δ_1^' (resp. h_2
|_Δ_2:Δ_2→Δ
_2^') is an orientation-preserving homeomorphism, f(h_2
^-1(y)) is a singleton for all y∈β, and h_2 is an identity on a
neighborhood of ( ∂Δ) \{p_0,p_l_1}
in Δ. Then we define Σ_1=( f_1
,Δ) with f_1=f∘ h_2^-1 (See Figure
<ref> (3) and (4)).
When Case (3) occurs, ( β_l_1∪β_l_2)
\{p_0}⊂Δ, and Δ_1=Δ\D is a domain as in Figure <ref> (5) when l_1=1 and
l_2=2. We can sew ( f,Δ\ D) along
(f,β_l_1)∼(f,β_l_2) to obtain a surface (
f_1,Δ) so that β_l_1-β_l_2
becomes a simple path β, the line segment from p_0 to p_l_1 as
in Figure <ref> (5) and (6). In fact we can define f_1:=f∘
h_3^-1, where h_3:Δ_1→Δ
is an OPCOFOM so that β_l_1 and β_l_2 are mapped
homeomorphically onto β, h_3(p_l_1)=p_l_1, h_3
(p_0)=p_0, h_3 is an identity on ∂Δ and on a
neighborhood of ∂Δ\{p_0} in Δ,
and h_3:Δ_1→Δ_1^' is a homeomorphism.
In the above Cases (1)–(3), it is clear that Σ_1 also has
𝒞^∗( L,m)-partitions as (<ref>) and
(<ref>), and the interior angle of (f,D) at p_l_1 is a
positive multiple of 2π. Then we have v_f_1( p_l_1)
≤ v_f( p_l_1) -1. Then ∂ D∩ f^-1
(E_q)={p_1} or ∅.
As in the proof of Claim <ref>, (f,D) can be sewn to be a
closed surface Σ_0=(f_0,S) along the equivalent paths
(f,β_l_1) and (f,β_l_2). Assume that the degree of f_0
is d_0. Then we have in any case of Cases (1), (2) and (3),
n( Σ_1,E_q) ≤n(
Σ,E_q) -n( Σ_0,E_q) +1.
On the other hand, as in the proof of Claim <ref>, by Riemann-Hurwitz
formula, we have n( Σ_0,E_q) ≥
(q-2)d_0+2 with the equality holding if and only if C_f_0(V)⊂
f_0^-1(E_q). Then we have
n( Σ_1,E_q) ≤n(
Σ,E_q) -n( Σ_0,E_q)
+1≤n( Σ,E_q) -(q-2)d_0-1.
Now we have A(Σ)=A(Σ_0)+A(Σ_1) and A(Σ_0)=4π
d_0. Then
R( Σ_1) =( q-2) A(Σ_1
)-4πn( Σ_1)
≥( q-2) (A(Σ)-4π d_0)-4π[ n( Σ) -(q-2)d_0-1]
=R( Σ) +4π.
On the other hand, we have L(∂Σ)=L(∂Σ_1). Then we
derive
H(Σ_1)≥R(Σ)+4π/L(∂Σ)=H(Σ)+4π/L(∂Σ),
which with (<ref>) implies the contradiction that H_L≥
H(Σ_1)>H_L. Hence Cases (1)–(3) can not occur.
There are still two cases left.
Case (4). p_l,l=1,…,v, are distinct from each other
and {p_l}_l=2^v⊂Δ.
Case (5). p_l,l=1,…,v, are distinct from each other
and p_l_1∈∂Δ for some 2≤ l_1≤ v. In particular,
{p_2,…,p_l_1-1}⊂Δ when l_1>2, and it is possible
that p_l_2∈∂Δ for some l_1<l_2≤ v.
Assume Case (4) occurs. Except for a few differences, the following discussion
is similar to the Cases (3) and (4) in the proof of Lemma <ref>.
Here, we just present the arguments for v=3, as in Figure <ref>. Cut
Δ along the lifts β_2 and β_3 and split β_2 and
β_3 via an OPCOFOM h from a closed Jordan domain Δ_1^' as in Figure <ref> (2) onto Δ
such that h:Δ_1^'→Δ_1=Δ\
(β_2∪β_3) is a homeomorphism.
Then we obtain a surface Σ_1^'=( f_1^'
,Δ_1^') such that
( f_1,β_1^') ∼( f_1,β_2^') ∼( f_1,β_2^'') ∼(
f_1,β_3^') ∼( f_1,β_3^'') .
It is clear that we can recover the surface Σ when we identify
β_2^' with β_2^'' and β_3^'
with β_3^''. However, by Lemma <ref> (ii), we can
also identify β_1^' with β_2^', and β
_2^'' with β_3^', by deformations in Figure
<ref> (2)-(4), resulting in a new surface Σ_1=(
f_1,Δ) . On the other hand, since β_l^∘,l=1,…,v, contains no point of f^-1(E_q) and C_f^∗(
Δ) =∅, f is homeomorphic in neighborhoods of β
_j^∘,j=2,…,v. Thus we can conclude the following.
∂Σ_1∼∂Σ. There exists a
neighborhood N_1 of ( ∂Δ) \{p_0,p_1} in Δ and a neighborhood N_1^' of
( ∂Δ) \{p_0^3,p_1^'} in
Δ such that ( f,N_1) ∼( f_1
,N_1^') . In fact as in Figure <ref> (2) and (3),
β_1^'∘,β_2^''∘,β_3^''∘ have neighborhoods in Δ_1^' so that
the restrictions of f_1^' to them, respectively, are equivalent to
the restriction of f to a neighborhood of β_1 in Δ. Thus we may replace p_1^' and p_0^3 by p_1 and
p_0, and make ∂Σ_1=∂Σ via a homeomorphism of
Δ. Then partitions (<ref>) and (<ref>) are both
𝒞^∗( L,m)-partitions of ∂Σ_1if and only if p_1∉ f^-1(E_q)∩α_1^∘, and in
general Σ_1∈𝒞^∗( L,m+1) ⊂ℱ( L) .
It is clear that A(Σ_1)=A(Σ)and L(∂Σ
_1)=L(∂Σ). We can also see that {p_0^l}_l=1^v
become regular points of f_1 and
v_f_1(p_1)=v_f( p_1) +v_f( p_2)
+…+v_f( p_v) .
It implies that
n( Σ_1) ={[ n( Σ) , if p_1∉ f^-1
(E_q),; n( Σ) -v+1, if p_1∈ f^-1
(E_q). ].
Thus in the case p_1∈ f^-1(E_q), we have
R(Σ_1)=R(Σ)+( v-1) 4π≥ R(Σ)+4π,
which with (<ref>) implies a contradiction that H_L≥ H(Σ
_1)≥ H(Σ)+4π/L(∂Σ_1)>H_L. So we have to
assume p_1∉ f^-1(E_q), which implies Σ_1∈𝒞^∗( L,m) and H(Σ_1)=H(Σ), and
moreover
{p_l}_l=1^v∩ f^-1(E_q)=∅.
Then each p_l∉ C_f( Δ) and v_f_1
(p_1)=v_f( p_1) +v-1, say
b_f_1(p_1)-b_f( p_1) =v-1.
On the other hand we have v_f_1(p_0)=1 and v_f(p_0)=v, which
implies
b_f_1( p_0) -b_f(p_0)=-v+1,
and thus by (<ref>) we have
B_f_1^∗( { p_0,p_1}) =B_f^∗( { p_0,p_1}) .
It is clear that, by Summary <ref>, B_f_1^∗( (
∂Δ) \{p_0,p_1}) =B_f^∗(
( ∂Δ) \{p_0,p_1}) . Then we
have by (<ref>)
B_f_1^∗( ∂Δ) =B_f^∗(
∂Δ) .
On the other hand, we have #C_f_1^∗( { p_0
,p_1}) =#C_f_1^∗( p_1) =1 and
#C_f^∗( { p_0,p_1}) =1 if and only
if p_1 is not a branch point (note that we are in the environment of
p_1∉ f^-1(E_q), which implies p_1∉ f_1^-1(E_q)).
Thus we have #C_f_1^∗( ∂Δ) ≤#C_f^∗( ∂Δ) equality holding if and only if
p_1∉ C_f^∗( ∂Δ) ∪ f^-1(E_q).
Hence, all conclusions in (ii) hold in Case (4).
Assume Case (5) occurs. When m=1,∂Σ=c_1( q_1
,q_1) is a simple circle and thus f^-1(b_1)∩∂Δ={p_1}. This case can not occur, since in Case (5), {p_1
,p_l_1}⊂ f^-1(b_1)∩∂Δ and p_1≠ p_l_1. So we have m≥2.
It is clear that f restricted to a neighborhood of β_l_1^∘
is homeomorphic and β_l_1 divides Δ into two Jordan domains
Δ_1 and Δ_2. Denote by Δ_1 the domain on the right
hand side of β_l_1. Let γ_1 be the arc of ∂Δ
from p_1 to p_l_1 and γ_2 be the complement arc of
γ_1 in ∂Δ, both oriented anticlockwise. Recall that
β_1,β_2,…,β_v are arranged anticlockwise around
p_0. Then we have ∪_l=2^l_1-1β_l\{p_0
}⊂Δ_1, while {β_l} _l=l_1+1^v is
contained in Δ_2. Based on (<ref>) and (<ref>), we
also have the partitions
γ_1=α_1( p_1,a_2) +α_2+…
+α_k-1+α_k( a_k,p_l_1) ,
and
γ_2=α_k( p_l_1,a_k+1) +α_k+1
+…+α_m+α_1( a_1,p_1) ,
where
p_l_1∈α_k( a_k,a_k+1) \{a_k+1}.
We can see that
v_f|_Δ_1( p_0) =l_1-1,
v_f|_Δ_2( p_0) =v-l_1+1.
Considering p_0∉ f^-1(E_q), we have
B_f|_Δ_1^∗( p_0) =l_1-2,
B_f|_Δ_2^∗( p_0) =v-l_1,
and
B_f^∗( p_0) -B_f|_Δ_1^∗( p_0) -B_f|_Δ_2^∗(
p_0) =1.
Now we shall consider Δ_1 and Δ_2 separately.
Firstly, let h_2 be a homeomorphism from Δ_2 onto
Δ such that h_2|_γ_2∖β_1=id and
h_2|_β_l_1=β_1^∘+γ_1. Recall that β
_1=α_1( p_0,p_1). Then we can construct a new
surface as
Σ_2^'=( f_2^',Δ) =(
f∘ h_2^-1,Δ) ,
with
L(f_2^',∂Δ) =L(f,(∂Δ_2)\β_l_1)+L(f,β_l_1)
=L(f,(∂Δ_2)\β_l_1)+L(f,β_1)
=L(γ_2)<L.
Since p_1≠ p_l_1, f(p_1)=f(p_l_1)=b_1 and f is
injective on each α_k( a_k,a_k+1) \{a_k+1}, we conclude that either of the two partitions (<ref>) and
(<ref>) contains at least two terms. Since the sum of terms of (<ref>)
and (<ref>) is at most m+2, we conclude that either of (<ref>) and
(<ref>) contains at most m terms. Thus we have Σ_2^'
∈𝒞^∗( L,m) . Hence, summarizing the above
discussion, we have
C_f_2^'^∗( Δ) =∅,
Σ_2^'∈𝒞^∗( L,m), and moreover,
by definition of f_2^',
#C_f_2^'^∗( ∂Δ) =#C_f|_Δ_2^∗( ∂Δ_2) =#C_f|_Δ_2^∗( γ_2) ≤#C_f^∗(
∂Δ) ,
#( ∂Δ) ∩ f_2^'-1( E_q)
=#γ_2∩ f^-1( E_q) ≤#( ∂Δ) ∩ f^-1( E_q) ,
B_f_2^'^∗( ∂Δ) =#B_f|_Δ_2^∗( ∂Δ_2) ≤#B_f^∗( ∂Δ) -1.
Next, we construct a new surface Σ_1^'=( f_1^',Δ) as follows. Denote by Δ_1^1=Δ
_1\∪_l=2^l_1-1β_l, which is a simply connected
domain. Cutting Δ_1^1 along the paths ∪_l=1^l_1-1
β_j, we can obtain a Jordan domain Δ_1^2 as in Figure
<ref> (2) where l_1=3. Indeed, there exists an OPCOFOM
h_1:Δ_1^2→Δ_1^1 such
that the restrictions
h_1:Δ_1^2→Δ_1^1, h_1:β_l^'→β_l, h_1:β_l^''→β_l
are homeomorphisms for l=2,…,l_1-1. Then the surface F_1:=(
g_1,Δ_1^2) =( f∘ h_1,Δ_1^2) is simply connected and we can recover the surface
( f|_Δ_1,Δ_1) when we
glue F_1 along the pairs ( g_1,β_l^') and
( g_1,β_l^'') for l=2,…,l_1-1.
Since
( g_1,β_1^') ∼( g_1,β_2^') ,( g_1,β_2^'') ∼(
g_1,β_3^') ,⋯,( g_1,β_l_1-1
^'') ∼( g_1,β_l_1^') ,
we can also glue F_1 along the above equivalent pairs and obtain a new
surface Σ_1^'=( f_1^',Δ)
, as the deformations described in Figure <ref> (2)–(4). In this way,
p_1,…,p_l_1 are glued into a single point p_1^'
∈∂Δ. It is clear that we have
v_f_1^'( p_1^') ≤ v_f(
p_1) +v_f( p_2) +⋯+v_f( p_l_1
-1) +v_f|_Δ_1( p_l_1) .
When b_1∉ E_q, by condition (A) of Lemma <ref> we have
v_f( p_2) =⋯=v_f( p_l_1-1) =1.
Thus
v_f_1^'( p_1^') ≤ v_f(
p_1) +v_f|_Δ_1( p_l_1)
+l_1-2.
As in Figure <ref> (2) or (3), p_0^1,…,p_0^l_1-1 are
regular points of g_1, and g_1 is homeomorphic on some neighborhoods
of (β_j^')^∘ and (β_j^'')^∘ in
Δ_1^2 for j=1,…,l_1-1. Thus f_1^' is
homeomorphic on some neighborhood of β_j^'\{p_1^'} for j=1,…,l_1-1. Therefore by (<ref>) we have that
C_f_1^'^∗( Δ) =∅,
( f_1^',∂Δ) ∼( f,γ
_1) , (<ref>) is an ℱ(L,k)-partition of
∂Σ_1^' and moreover
B_f_1^'^∗( p_1^') =0, if
p_1∈ f^-1(E_q);
B_f_1^'^∗( p_1^') =v_f_1^'( p_1^') -1≤ B_f^∗( p_1)
+B_f|_Δ_1^∗( p_l_1) +l_1
-1 if p_1∉ f^-1(E_q).
Now we will apply Claims <ref> and <ref> to verify the conclusion (i).
There is no doubt that A(Σ_1^')+A(Σ_2^'
)=A(Σ) and L(Σ_1^')+L(Σ_2^')=L(
Σ) . We can deduce from the previous constructions that
n( Σ) =n( Σ_1^') +n( Σ_2^') +(
l_1-2) χ_E_q( f( p_1) ) ,
where χ_E_q( f( p_1) ) =1 if p_1∈
f^-1(E_q) and χ_E_q( f( p_1) ) =0
otherwise. Then we have
R(Σ_1^')+R( Σ_2^') =R(Σ
)+4π( l_1-2) χ_E_q( f( p_1)
) .
Take Σ_1=Σ_1^' or Σ_2^' such that
H(Σ_1)=max{ H( Σ_1^') ,H(
Σ_2^') } . Then we have
H(Σ_1)≥ H(Σ)+4π( l_1-2) χ_E_q
( f( p_1) ) /L( ∂Σ) .
By the restriction of inequality (<ref>), however, we can obtain the
contradiction H( Σ_1) >H_L when l_1>2 and
p_1∈ f^-1(E_q). Then in the sequel we assume that
l_1=2 or f( p_1) ∉ E_q.
If Σ_1=Σ_2^', then by Claim <ref>, Σ_1
satisfies (i). Thus in the sequel, we assume that
Σ_1=( f_1,Δ) =Σ_1^'=(
f_1^',Δ) ,
say, f_1=f_1^'. Then by condition f( p_1) ∉
E_q it is trivial that
#( ∂Δ) ∩ f_1^-1( E_q)
=#γ_1∩ f^-1( E_q) ≤#( ∂Δ) ∩ f^-1( E_q) ,
and
#C_f_1^∗( ∂Δ) =#C_f_1^∗(
( ∂Δ) \{p_1^'})
+#C_f_1^∗( p_1^') =#C_f^∗(
γ_1^∘) +#C_f_1^∗( p_1^') .
Thus, by the relations γ_2=( ∂Δ)
\γ_1^∘ and γ_2⊃{p_0,p_1,p_l_1
}, we have
#C_f^∗( ∂Δ) -#C_f_1^∗(
∂Δ) =#C_f^∗( ∂Δ)
-#C_f^∗( γ_1^∘) -#C_f_1^∗(
p_1^')
=#C_f^∗( γ_2) -#C_f_1^∗(
p_1^')
≥#C_f^∗( p_0) +#C_f^∗(
p_1) +#C_f^∗( p_l_1) -#C_f_1^∗( p_1^')
=1+#C_f^∗( p_1) +#C_f^∗( p_l_1
) -#C_f_1^∗( p_1^')
≥1+0+0-#C_f_1^∗( p_1^') ≥0.
Therefore, (<ref>) holds, equality holding only if
#C_f^∗( p_1) =#C_f^∗( p_l_1)
=0,
and
#C_f^∗( γ_2) =#C_f^∗( p_0)
=#C_f_1^∗( p_1^') =1,
which implies
f_1(p_1^')=f( { p_l} _l=1^v)
=b_1∉ E_q.
Assume that the equality in (<ref>) holds. Then (<ref>
)–(<ref>) hold and imply
B_f^∗( p_1) =B_f^∗( p_l_1)
=0 but B_f_1^∗( p_1^') ≥1.
By (<ref>), (<ref>) and (<ref>), considering that ∂Δ=γ_1+γ_2 we have
B_f^∗( ∂Δ) =B_f^∗( γ
_1^∘) +B_f^∗( γ_2) =B_f^∗( γ_1^∘) +B_f^∗( p_0) ,
B_f_1^∗( ∂Δ) =B_f^∗( γ
_1^∘) +B_f_1^∗( p_1^') ,
and
B_f_2^'^∗( ∂Δ) =B_f|_Δ_2^∗( p_0) ;
and then
B_f^∗( ∂Δ) -B_f_1^∗(
∂Δ) =B_f^∗( p_0) -B_f_1^∗( p_1^') .
Thus, by Claim <ref>, (<ref>), and the assumption v_f^∗( p_0) =v, we have
B_f^∗( ∂Δ) -B_f_1^∗(
∂Δ)
≥ B_f^∗( p_0) -B_f^∗( p_1)
-B_f|_Δ_1^∗( p_l_1) -l_1+1
=v-1-0-0-l_1+1=v-l_1,
and then by (<ref>) and the assumption v_f^∗( p_0)
=v we have
B_f_1^∗( ∂Δ) ≤ B_f^∗(
∂Δ) -v+l_1≤ B_f^∗( ∂Δ)
,
with equality only if l_1=v.
Now we assume the equality in (<ref>) holds while (<ref>) does
not hold, which implies l_1=v by (<ref>).
Since f is injective on
α_1( a_1,a_2) \{a_1},p_1∈α
_1( a_1,a_2) \{a_1}, p_1≠ p_l_1 in
Case (5) and f(p_1)=f( p_l_1) , we have p_l_1
∉α_1( a_1,a_2) \{a_1}, which
implies a_2∈γ_1and p_l_1∉β_1. Thus
a_1∉γ_1^∘ and a_1∈γ_2=( ∂Δ) \γ_1^∘, say, #[ γ_2
∩{a_j}_j=1^m] ≥1, with equality if and only if
γ_2∩{a_j}_j=1^m={ a_1} .
(a) If γ_2 contains two points of {
a_j} _j=1^m, then Σ_1=Σ_1^'∈𝒞^∗( L,m-1) . In fact in this case, (<ref>)
contains at most k≤ m-1 terms, and thus by Claim <ref> (<ref>)
Σ_1∈ℱ(L,k)⊂ℱ(L,m-1).
(b) If γ_2 contains only one point of { a_j}
_j=1^m, say, γ_2∩{a_j}_j=1^m={ a_1}
, then p_l_1∈α_m( a_m,a_1) \{a_m} and then either ( f,γ_2) is a simple closed
arc of ∂Σ_1, or it is a folded arc, say,
( f,γ_2) =c_m( f(p_l_1),f(a_1))
+c_1( f(a_1),f( p_l_1) ) =c_m(
f(p_l_1),f(a_1)) -c_m( f( p_l_1)
,f(a_1)) .
Hence either
𝔏( ∂Σ_1) ≤𝔏(
∂Σ) -1
by Lemma <ref>; or ( f,Δ_2) =S by Lemma
<ref> (i), and thus
A(Σ_1)≤ A(Σ)-4π.
Summarizing (<ref>) and Discussion <ref>, we can derive that the
equality in (<ref>) holds only if at least one of (<ref>
)-(<ref>) holds. Then (i) holds in Case (5), and we have finished the proof.
When (ii) holds, say, in Case (4), f_1 plays the role that
moves the branch property of p_0 to p_1, so that H(
Σ) ,R(Σ),∂Σ,n( Σ)
and the branch property of all other points, say, points in (
∂Δ) \{p_0,p_1}, remain unchanged, while
p_0 becomes a regular point and p_1 becomes a branch point with
v_f_1( p_1) =v_f( p_1) +v_f(
p_0) -1(note that the interior α_1( p_0
,p_1) ^∘ of α_1( p_0,p_1) contains
no branch point of f and contains no point of f^-1(E_q)). Such
movement fails in Case (5), and in this case, (i) holds.
Let Σ=( f,Δ)be a
surface in 𝒞^∗(L,m)with the 𝒞^∗(
L,m)-partitions (<ref>) and (<ref>). Assume that condition (A)
of Lemma <ref> holds, say C_f^∗( Δ)
=∅, and assume (<ref>) holds. Write
ℰ_f:=C_f^∗( ∂Δ) ∪(
∂Δ∩ f^-1(E_q)) ={ p_0^'
,p_1^',…,p_s-1^'} ,
and assume p_0^'∈ C_f^∗( ∂Δ) ,
s≥2 and p_0^',…,p_s-1^' are arranged on
∂Δ anticlockwise. Then there exists a surface Σ
_1=( f_1,Δ) ∈𝒞^∗(
L,m) such that C_f_1^∗( Δ) =∅
and one of the followings holds.
(a) The conclusion (i) of Lemma <ref> holds. Thus L(∂Σ_1)<L( ∂Σ) , #ℰ_f_1
≤#ℰ_f and either #C_f_1^∗( ∂Δ) ≤#C_f^∗( ∂Δ) -1, or
#C_f_1^∗( ∂Δ) =#C_f^∗(
∂Δ)and one of (<ref>)–(<ref>) holds.
(b) p_1^'∈ C_f^∗( ∂Δ) , H(
Σ_1) =H(Σ), ∂Σ_1=∂Σ,
#ℰ_f_1={ p_1^',…,p_s-1^'}
=#ℰ_f-1, and B_f_1^∗( ∂Δ)
=B_f^∗( ∂Δ) .
Let p_0^'∈ C_f^∗( ∂Δ) and
p_0^',p_1^',…,p_s-1^',s≥2, be all points
of ℰ_f arranged anticlockwise on ∂Δ. Then
ℰ_f gives a partition of ∂Δ as
∂Δ=β_1^'( p_0^',p_1^')
+β_2^'( p_1^',p_2^') +…
+β_s^'( p_s-1^',p_0^') .
Without loss of generality, we assume that p_0^'∈α_1(
a_1,a_2) \{a_2} is the first point of C_f^∗( ∂Δ) in α_1( a_1,a_2) ,
say,
C_f^∗( ∂Δ) ∩α_1( a_1
,p_0^') ={p_0^'}.
Firstly, we consider the simple case that
β_1^'( p_0^',p_1^') ⊂α_1( p_0^',a_2) .
We may further assume f( p_0^') ≠ f(p_1^'). Otherwise, we must have that p_0^'=a_1,p_1^'=a_2
and that c_1=c_1( q_1,q_2) =( f,β_1^'( p_0^',p_1^') ) =( f,α
_1) is a circle with α_1∩ f^-1(E_q)=∅, and
then we can discuss based on the following argument.
Consider a proper subarc β_1=β_1( p_0
^',p_01^')of β_1^'=β_1^'( p_0^',p_1^'), with f(p_01^')≠
f(p_0^') and p_01^'∉ f^-1(E_q), so that
f( β_1) has other v-1 f-lifts β_2(
p_0^',p_02^') ,…,β_v( p_0^',p_0v^') so that p_0^',{ p_0l^'} _l=1^vand {β_l}_l=1^v satisfy all conditions
of Lemma <ref> and the condition of Case 4 in the proof of Lemma
<ref>. Then by Lemma <ref> there exists a surface
Σ_1^'=( f_1^',Δ)
∈𝒞^∗( L,m) so that C_f_1^∗(
Δ) =∅and Lemma <ref> (ii) holds, say,
∂Σ_1^'=∂Σ,H(Σ_1^')=H(Σ),
p_01^'∈ C_f_1^'^∗,ℰ_f_1^'
={ p_01^',p_1^',…,p_s-1^'}and B_f_1^'^∗( ∂Δ) =B_f^∗( ∂Δ) , and moreover, (<ref>) and (<ref>) are
still 𝒞^∗( L,m)-partitions of ∂Σ_1^'. Then we can replace Σ with Σ_1^'
to continue our proof under (<ref>).
Now we may assume f( p_0^') ≠ f(p_1^') and
forget Argument <ref>. Let β_1=β_01^'(
p_0^',p_01^') be the longest subarc of β
_1^'( p_0^',p_1^') such that (B) and
(C) of Lemma <ref> are satisfied by β_1. There is nothing to
show when conclusion (i) of Lemma <ref> holds.
Assume conclusion (ii) of Lemma <ref> holds for β_1. Then
only Case (4) ocurs and p_01^'∉ f^-1(E_q). If
p_01^'≠ p_1^', then we can extend β_1 longer so
that it still is a subarc of β_1^'( p_0^'
,p_1^') ⊂α_1( p_0^',a_2) and satisfies (B) and (C), which contradicts definition of β_1.
Then, p_01^'=p_1^'∈ C_f^∗( ∂Δ) , and by Lemma <ref> (ii) we have
(c) There exists a surface Σ_1=( f_1,Δ)
∈𝒞^∗( L,m) such that C_f_1^∗(
Δ) =∅ and (b) holds, and moreover (<ref>) and
(<ref>) are still 𝒞^∗( L,m)-partitions of
∂Σ_1.
The corollary is proved under the condition (<ref>). When
p_1^'∉ C_f^∗( ∂Δ) , we have
p_1^'∈ f^-1(E_q), and then only (a) holds.
Next, we show what will happen if (<ref>) fails. Then a_2∈β
_1^'∘ and so a_2∉ℰ_f. Assume that
p_1^'∈α_j_0( a_j_0,a_j_0+1)
\{a_j_0},
for some j_0>1. Then we can find a point p_1∈α_1(
p_0^',a_2) so that β_1=α_1(
p_0^',p_1) is a maximal subarc of α_1(
p_0,a_2) satisfying conditions (B) and (C) of Lemma
<ref>. Then p_1∉ℰ_f and according to the above
proof only Case 4 or Case 5 occurs. If Case 5 occurs, then the proof for Case
(5) deduces the conclusion (i) of Lemma <ref>, and so does (a). If
Case 4 occurs, then by the condition C_f^∗( Δ)
=∅, the maximal property of β_1 and Lemma <ref>, we have
p_1=a_2. Then the proof of Case 4 again deduces that (ii) in Lemma
<ref> holds, and we obtain a surface Σ_1=(
f_1,Δ) ∈𝒞^∗( L,m)
such that H(Σ_1)=H(Σ), ∂Σ_1=∂Σ,
C_f_1^∗( Δ) =∅, p_1∉ f_1
^-1(E_q), B_f_1^∗( ∂Δ) =B_f^∗( ∂Δ) and
ℰ_f_1={p_1,p_1^',…,p_s-1^'
}={a_2,p_1^',…,p_s-1^'}.
Thus using Lemma <ref> repeatedly, we can either prove (a) holds, or
obtain a surface Σ_j_0=( f_j_0,Δ)
such that H(Σ_j_0)=H(Σ), ∂Σ_j_0
=∂Σ, C_f_j_0^∗( Δ) =∅,
a_j_0∉ f_j_0^-1(E_q), B_f_j_0^∗(
∂Δ) =B_f^∗( ∂Δ) and
ℰ_f_j_0={a_j_0,p_1^',…,p_s-1^'} and B_f_j_0^∗( ∂Δ)
=B_f^∗( ∂Δ) .
Note that a_j_0and p_1^' are both contained in the same arc
α_j_0. Then we can go back to condition (<ref>) to show that
either (a) or (b) holds, and moreover, by Remark <ref>, (b) holds only
if p_1^'∈ C_f_j_0^∗( ∂Δ) ,
which implies p_1^'∉f_j_0^-1( E_q) .
Let Σ_0=( f_0,Δ)be a surface in 𝒞^∗(L,m)with the 𝒞^∗( L,m)-partitions (<ref>) and (<ref>). Assume that
condition (A) of Lemma <ref> holds, say C_f_0^∗(
Δ) =∅, and that (<ref>) holds. Then there exists a
surface Σ_1=( f_1,Δ) ∈𝒞
^∗( L,m) , such that C_f_1^∗(
Δ) =∅,
H( Σ_1) ≥ H(Σ_0),L(∂Σ_1)≤
L(∂Σ_0),
that H( Σ_1) >H(Σ_0)implies L(∂Σ_1)<L(∂Σ_0). Moreover one of the following conclusions
(I)–(II) holds.
(I) Σ_1∈ℱ_r( L,m) and, in this case,
L(∂Σ_1)<L(∂Σ_0) if and only if C_f_0^∗( ∂Δ) ≠∅.
(II) Σ_1∈𝒞^∗( L,m) , and both
ℰ_f_1 and C_f_1^∗( ∂Δ)
are the same singleton, say, ℰ_f_1=C_f_1^∗(
∂Δ) is a singleton outside f_1^-1(E_q). Moreover,
if #C_f_0^∗( ∂Δ) ≠∅ and
f_0^-1(E_q)∩∂Δ≠∅, then L(∂Σ
_1)<L(∂Σ_0) and either #C_f_1^∗(
∂Δ) ≤#C_f_0^∗( ∂Δ)
-1 holds, or #C_f_1^∗( ∂Δ) =#C_f_0
^∗( ∂Δ) and one of (<ref>
)–(<ref>) hold with f=f_0.
We will prove this by induction on #C_f_0^∗( ∂Δ) . If C_f_0^∗( ∂Δ)
=∅, then (I) holds for Σ_1=Σ_0.
If C_f_0^∗( ∂Δ) is a singleton and
ℰ_f_0=C_f_0^∗( ∂Δ) , then
f_0^-1(E_q)∩∂Δ=∅ and so (II) holds with
Σ_1=Σ_0.
Now, assume that C_f_0^∗( ∂Δ) ≠∅ and #ℰ_f_0≥2. Then we can write
ℰ_f_0={ p_0^0,p_1^0,…,p_s_0-1
^0}
so that p_j^0,j=0,1,…,s_0-1, are arranged anticlockwise on
∂Δand p_0^0∈ C_f_0^∗( ∂Δ) . Then by Corollary <ref>, there exists a
surface Σ_1∈𝒞^∗(L,m) such that the following
conclusion (a)-(n-1,n) or (b)-(n-1,n) holds for n=1.
(a)-(n-1,n): C_f_n^∗( Δ) =∅ and the
conclusion (i) of Lemma <ref> holds, and thus H(Σ_1)≥
H( Σ_0) ,L(∂Σ_1)<L(∂Σ_0),
#ℰ_f_1≤#ℰ_f_0 and either #C_f_1
^∗( ∂Δ) ≤#C_f_0^∗(
∂Δ) -1 holds or #C_f_1^∗( ∂Δ) =#C_f_0^∗( ∂Δ) and one of
(<ref>)–(<ref>) holds with f=f_0.
(b)-(n-1,n): C_f_n^∗( Δ) =∅,
ℰ_f_n={ p_1^n-1,…,p_s_n-1-1^n-1}
with p_1^n-1∈ C_f_n-1^∗( ∂Δ) ,
H( Σ_n) =H(Σ_n-1), ∂Σ_n
=∂Σ_n-1, and B_f_n^∗( ∂Δ)
=B_f_n-1^∗( ∂Δ) .
If Σ_1∈ℱ_r(L,m), then (b)-(0,1) does not hold, and
then (a)-(0,1) holds, which implies (I). Note that when Σ_1
∈ℱ_r(L,m) and H( Σ_1) >H(Σ_0)
hold, we must have C_f_0^∗( ∂Δ)
≠∅.
Assume Σ_1 is not in ℱ_r(L,m) and ℰ_f_1
=C_f_1^∗( ∂Δ) is a singleton. Then
(b)-( 0,1) holds and so C_f_1^∗(
∂Δ) =ℰ_f_1={ p_1^0,…
,p_s_0-1^0} ={ p_1^0}∈ C_f_n-1^∗( ∂Δ) . In this case, we must have f_0
^-1(E_q)∩∂Δ=∅. Thus (II) holds.
Now, assume that Σ_1 is not in ℱ_r(L,m) but
ℰ_f_1 contains at least two points. Then C_f_1^∗( ∂Δ) ≠∅, and we can iterate the above
discussion to obtain surfaces Σ_j=( f_j,Δ) ,j=1,2,…,n_0, so that Σ_n_0 no longer can be
iterated. Then for each Σ_n, n=1,…,n_0, (a)-(
n-1,n) or (b)-( n-1,n) holds, and thus one of the
following holds.
(c) C_f_n_0^∗( ∂Δ) is empty and
(a)-( n_0-1,n_0) holds.
(d) C_f_n_0^∗( ∂Δ) is a singleton and
(a)-( n_0-1,n_0) holds.
(e) C_f_n_0^∗( ∂Δ) is a singleton and
(b)-( n_0-1,n_0) holds.
We show that Σ_n_0 is a desired surface. First of all, we have
#C_f_n_0^∗( Δ) =∅.
If (c) holds, then Σ_n_0∈ℱ_r(L,m) and we obtain (I).
Assume (d) holds. Then we have L(∂Σ_n_0)<L(∂Σ_n_0-1)≤ L(∂Σ_0) and C_f_0^∗(
∂Δ) ≠∅. Then (II) holds in this case, no matter whether
f_0^-1(E_q)∩∂Δ is empty or not.
Assume (e) holds. If all conditions (b)-( n-1,n) hold for
n=1,…,n_0, then s_0=n_0+1, ∂Σ_n_0
=∂Σ_n_0-1=…=∂Σ_0 and ℰ_f_n
=C_f_1^∗( ∂Δ) ={p_n^0,p_n+1
^0,…,p_n_0^0} for n=0,1,2,…,n_0, say, f_0^-1
(E_q)∩∂Δ is empty. Thus, when f_0^-1(E_q)∩∂Δ≠∅, (a)-( n_0^'-1,n_0^') has to be satisfied for some n_0^'<n_0. Thus all
conclusions in (II) hold.
§ PROOF OF THE MAIN THEOREM
Now, we can complete the proof of the main theorem, Theorem <ref>.
Let Σ=( f,Δ) ∈ℱ. For
any two points a and b in Δ, define their d_f-distance d_f( a,b) by
d_f(a,b)=inf{ L(f,I):I is a path in Δ from a to b} .
For any two sets A and B in Δ, define their d_f-distance by
d_f( A,B) =inf{ d_f( a,b) :a∈ A,b∈
B} .
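Note that, since every path I in Δ from a to b satisfies L(f,I)≥ d(f(a),f(b)), the spherical distance between the endpoints of its image, we always have d_f(a,b)≥ d(f(a),f(b)); we record this elementary observation here for later comparison of d_f-distances with distances on S.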
Let ℒ={L^'>0:H_L is continuous at
L}, L∈ℒ and let L_0∈(0,L]. Then there exists a positive
number δ_L_0 such that
d_f(Δ∩ f^-1(E_q),∂Δ)>δ_L_0
holds for all surfaces Σ=( f,Δ) in
ℱ(L) with L(∂Σ)≥ L_0 and with (<ref>).
This is proved in <cit.>. In fact, if this fails, then for any
ε>0, there exists a surface Σ∈ℱ(L) with
L(∂Σ)≥ L_0 such that d_f(Δ∩ f^-1
(E_q),∂Δ)<ε/3 and Δ∩ f^-1(E_q
)≠∅. Then one can cut Σ from a boundary point on
∂Δ to a point in f^-1(E_q)∩Δ, along a path
I_ε⊂Δ so that ( f,I_ε) is polygonal and that 2L(f,I_ε)<ε,
obtaining a surface Σ_ε∈ℱ( L+ε) with L(∂Σ_ε)=L(∂Σ
)+2L(f,I_ε), A(Σ_ε)=A(Σ) and
n( Σ_ε) ≤n(
Σ) -1. Then we have R( Σ_ε) ≥
R(Σ)+4π and thus
H(Σ_ε)=R( Σ_ε)
/L(∂Σ_ε)≥R(Σ)+4π/L(∂Σ)+ε=R(Σ)/L(∂Σ)+4π/L(∂Σ
)/1+ε/L(∂Σ).
This and (<ref>) deduce that H_L+ε≥ H(Σ
_ε)>H_L+π/2L(∂Σ) when ε is
small enough. But this contradicts the assumption L∈ℒ, which
implies that H_L+ε→ H_L as ε→0.
Let Σ=( f,Δ) ∈𝒞^∗(L,m) be a covering surface such that
(<ref>) holds. If C_f^∗=C_f^∗( Δ̅) =∅, then Σ^'=Σ itself is the desired
surface in Theorem <ref>.
If C_f^∗( Δ) =∅, but C_f^∗(
∂Δ) ≠∅, then by Corollary <ref>
, either the conclusion of Theorem <ref> holds with L(∂Σ
_1)<L(∂Σ_0), or
(III) there exists a surface Σ_1=( f_1,Δ) ∈𝒞^∗(L,m) such that C_f_1^∗(
Δ) =∅, H( Σ_1) ≥ H(
Σ), L( ∂Σ_1) ≤ L(∂Σ), H( Σ_1) >H( Σ) only if
L( ∂Σ_1) <L(∂Σ); and both
ℰ_f_1 and C_f_1^∗( ∂Δ)
are the same singleton. Moreover, if #C_f^∗( ∂Δ) ≠∅ and f^-1(E_q)∩∂Δ≠∅, then L(∂Σ_1)<L(∂Σ_0) and either
#C_f_1^∗( ∂Δ) ≤#C_f^∗(
∂Δ) -1 holds, or #C_f_1^∗( ∂Δ) =#C_f^∗( ∂Δ) and one of
(<ref>)–(<ref>) holds.
Assume C_f^∗( Δ) ≠∅. Then by Corollary
<ref>, we have
There exists a surface Σ_0=(f_0,Δ)∈𝒞
^∗( L,m) such that C_f_0^∗( Δ)
=∅, H( Σ_0) ≥ H( Σ) and
L( ∂Σ_0) ≤ L(∂Σ). Moreover,
L(∂Σ_0)=L(∂Σ) holds if and only if H(Σ
_0)=H(Σ), ∂Σ_0=∂Σ and B_f_0^∗( ∂Δ) >B_f^∗( ∂Δ) hold.
If C_f_0^∗( ∂Δ) =∅, then
B_f_0^∗( ∂Δ) =0, Σ_0∈ℱ_r(L,m) and by the claim, L(∂Σ_0)<L(
∂Σ), and thus Σ^'=Σ_0 satisfies the
conclusion of Theorem <ref>.
If #C_f_0^∗( ∂Δ) =1 and ℰ
_f_0=C_f_0^∗( ∂Δ) ={p_0} is a
singleton, then (III) holds with Σ_1=Σ_0 by the claim.
Now assume that #C_f_0^∗( ∂Δ) ≥1 and
#ℰ_f_0≥2. Then there exists a surface Σ_1=(
f_1,Δ) ∈𝒞^∗( L,m)
satisfying all conclusions of Corollary <ref> with (I), or
(II). When (I) holds, Σ^'=Σ_1 again satisfies Theorem
<ref>, and the proof finishes. If (II) holds, and (I) fails, (III) holds
again. So we may complete the proof based on Σ_1 under the assumption (III).
We will show that there exists a surface Σ^' satisfying the
conclusion of <ref> with L(∂Σ^')<L(∂Σ).
Let P and P^∗ be two antipodal points of S and let φ
_θ be a continuous rotation on S with the axis passing through P
and P^∗and rotation angle θ, which rotates anticlockwise
around P when θ increases and we view S from sinside. P and
P^∗ are chosen so that we can define θ_0=θ_0(
Σ_1) ∈(0,π), such that
φ_θ_0( ∂Σ_1) ∩ E_q≠∅,
while
φ_θ( ∂Σ_1) ∩ E_q=∅ for all θ∈(0,θ_0),
and
φ_θ_0(f_1(p_0))∉ E_q.
We may assume P^∗ and P are outside ∂Σ_1. We first
show that
There exists a surface Σ_2=( f_2,Δ) ∈𝒞^∗(L,m) such that (<ref>) holds,
H(Σ_2)=H(Σ_1),L( ∂Σ_2) =L(∂Σ_1),
∂Σ_2=( f_2,∂Δ) =(
φ_θ_1∘ f_1,∂Δ) ,
n( Σ_2,E_q) =n( Σ
_1,E_q) ,A(Σ_2)=A(Σ_1),
∂Σ_2 contains at least one point of E_q, and p_0
∈∂Δ is the unique branch point of f_2 in Δ\ f_2^-1(E_q), where θ_1∈(0,2π].
Let δ_L_0 with L_0=L(∂Σ_1) be determined by Lemma
<ref> and let δ_E_q be the smallest positive distance between
points of E_q. Then d_f_1( f_1^-1(E_q),∂Δ) >δ_L_0. Let θ_1 be the maximal number in
(0,θ_0) such that for each θ∈(0,θ_1)
max_𝔞∈ E_qd( 𝔞,φ_θ(𝔞) )<δ_L_0^'=min(δ_E_q
,δ_L_0)/3.
Let b_1,b_2,…,b_n,n=n(
Σ_1,E_q) , be all distinct points in f_1(Δ)∩
E_q. Then for each j≤n there exists a Jordan domain U_j
containing b_j with j=1,…,n and U_j
⊂Δ, such that f_1 restricted to U_j is a BCCM
onto the closed disk V_j=D( f_1( b_j)
,δ_L_0^') , with U_i∩U_j
=∅ if i≠ j and b_j is the unique possible branch point of
f_1 in U_j.
Let g_1=φ_θ_1∘ f_1:Δ→ S,
and let ϕ_j be the homeomorphism from φ_θ_1
(V_j) onto itself, which is an identity on ∂φ_θ_1(V_j) and maps φ_θ_1(
f_1(b_j)) to f(b_j). Note that both f_1(
b_j) and ϕ_j( f_1( b_j) ) are
both contained in φ_θ_1( V_j) . Let
g_1^' be the mapping given by g_1 on Δ\( ∪_j=1^nU_j) and by ϕ
_j∘ g_1 on U_j. Then g_1^' is an OPLM so
that G_1=( g_1^',Δ) is contained in
𝒞^∗( L,m) with C_g_1^'^∗(Δ)={p_0}, and that, for each j=1,…,n,
b_j is the only possible branch point of g_1^' in
U_j with g_1^'(b_j)=f(b_j) and v_g_1
^'(b_j)=v_f_1(b_j). Thus it is clear that (<ref>
)–(<ref>) hold for Σ_2=G_1. Therefore, in the case
θ_1=θ_0 we have ( ∂Δ) ∩
g_1^'-1(E_q)≠∅, and we proved Claim <ref> when
θ_1=θ_0.
Assume that θ_1<θ_0. Then G_1 satisfies all
assumptions of the (III), and additionally satisfies (<ref>), and then
we still have d_g_1^'( g_1^'-1(E_q),∂Δ) >δ_L_0. Moreover, we have θ_0(
G_1) =θ_0( Σ_1) -θ_1. Then we can
repeat the above arguments at most k-1=[ θ_0(
Σ_1) -θ_1/θ_1] +1 times to obtain a
surface Σ_2=( f_2,Δ) =G_k=(
g_k^',Δ) satisfying Claim <ref>. The
existence of Σ_2 is proved.
Now we can write ℰ_f_2={p_0,p_1,…,p_s-1},s≥2,
so that p_0∈ C_f_2^∗( ∂Δ) and
{p_1,…,p_s-1}⊂ f_2^-1(E_q).
Then by Corollary <ref> there exists a surface Σ
_3=( f_3,Δ) ∈𝒞^∗(
L,m) such that C_f_3^∗( Δ) =∅,
H( Σ_3) ≥ H(Σ_2),L(Σ_3)<L(∂Σ_1),
and moreover the following conclusion (I) or (II) holds true.
(I) Σ_3∈ℱ_r( L,m) .
(II) Σ_3∈𝒞^∗( L,m) , and both
ℰ_f_3 and C_f_3^∗( ∂Δ)
are the same singleton, and either #C_f_3^∗( ∂Δ) ≤#C_f_2^∗( ∂Δ) -1 holds,
or #C_f_1^∗( ∂Δ) =#C_f_0^∗( ∂Δ) and one of (<ref>)–(<ref>) hold
with f=f_2.
If (I) holds, the proof is completed.
If (II) holds, then we repeat the same argument which deduces Σ_3
from Σ_1. This iteration can only be executed a finite number of times
by (II), and at last we obtain a surface Σ_k=( f_k,Δ) ∈𝒞^∗(L,m) such that
H( Σ_k) ≥ H(Σ),L(Σ_k)<L(∂Σ),
and one of the following two alternatives holds:
(𝔞) Σ_k∈𝒞^∗( L,1) ,
𝔏( ∂Σ_k) =1, A(
Σ_k) <4πand C_f_k^∗( Δ) =C_f_k^∗( ∂Δ) =∅.
(𝔟) #C_f_k^∗( Δ)
=#C_f_k^∗(∂Δ)=#ℰ_f_k=1, H(Σ
_k)≥ H(Σ),L(∂Σ_k)<L( ∂Σ) ,
and one of the three alternatives holds: Σ_k∈𝒞^∗( L,m-1) ,A( Σ_k) <A( f)
-4π,L( ∂Σ_k) <L( ∂Σ).
If (𝔞) holds, then ∂Σ_k is a simple convex circle
and, f_k( Δ) as a set, is the closed disk on
S enclosed by ∂Σ_k, and by argument principle we have
Σ_k∈ℱ_r( L,1) ⊂ℱ_r(L,m).
If (𝔟) holds, then we can repeat the whole above argument, which
deduces Σ_1 first from Σ and then deduces Σ_k from
Σ_1, to obtain a surface Σ_s from Σ_k satisfying
(𝔞) or (𝔟). But this iteration can only be executed a
finite number of times by (𝔟) and at last we obtain a surface
Σ_t satisfying (𝔞).
99
Ah0L. Ahlfors, Complex analysis, McGraw-Hill, third edition, 1979.
AhL. Ahlfors, Zur Theorie der Überlagerungsflächen, Acta
Math., 65 (1935), 157-194.
BerF. Bernstein, Über die isoperimetrische Eigenschaft des
Kreises auf der Kugeloberfläche und in der Ebene, Math. Ann., vol. 60
(1905), pp. 117-136.
DrD. Drasin, The impact of Lars Ahlfors' work in value-distribution
theory, Ann. Acad. Sci. Fenn. Ser. A I Math. 13 (1988), no. 3, 329–353.
DuJ. Dufresnoy, Sur les domaines couverts par les valeurs d'une
fonction méromorphe ou algébroïde, Ann. Sci. École. Norm.
Sup. 58. (1941), 179-259.
EreA. Eremenko, Ahlfors' contribution to the theory of meromorphic
functions, Lectures in memory of Lars Ahlfors (Haifa, 1996), 41–63, Israel
Math. Conf. Proc., 14, Bar-Ilan Univ., Ramat Gan, 2000.
HaW.K. Hayman, Meromorphic functions, Oxford, 1964.
NR. Nevanlinna, Zur Theorie der meromorphen Funktionen. Acta Math.
46, 1-99 (1925)
RT. Rado, The isoperimetric inequality on the sphere.
Am.J.Math.57(4), 765-770 (1935)
RiS. Rickman, Quasiregular mappings. Springer,
Berlin (1993). Ergebnisse der Mathematik und ihrer Grenzgebiete 3. Folge.
SS. Stoilow, Leçons sur les Principes Topologiques de la Théorie
des Fonctions Analytiques. Gauthier-Villars, Paris (1956)
S-ZZ.H. Sun & G.Y. Zhang, Branch values in Ahlfors' theory of
covering surfaces, Science China Mathematics, Vol. 63 No. 8: 1535-1558.
TI. Todhunter, Spherical Trigonometry (5th ed.). MacMillan. (1886),
pp. 76.
YL. Yang, Value Distribution Theory. Springer, Berlin (1993)
Z1G.Y. Zhang, Curves, Domains and Picard's Theorem. Bull. London.
Math. Soc. 34(2),205-211(2002)
Zh1G.Y. Zhang, The precise bound for the area-length ratio in
Ahlfors' theory of covering surfaces. Invent. Math. 191:197-253 (2013)
Zh2G.Y. Zhang, The precise form of Ahlfors' Second Fundamental
Theorem, https://doi.org/10.48550/arXiv.2307.04623
|
http://arxiv.org/abs/2307.04491v2 | 20230710112824 | Thermal Corrections to Rényi Entropy in BMS Field Theory | [
"Yuan Zhong"
] | hep-th | [
"hep-th"
] |
§ INTRODUCTION
On the journey of understanding quantum gravity, one of the most remarkable ideas is the holographic principle <cit.>, which relates (d+1)-dimensional quantum gravity to a d-dimensional quantum field theory. The most fruitful incarnation of the holographic principle is the AdS/CFT correspondence <cit.>, which equates quantum gravity on a (d+1)-dimensional asymptotically anti-de Sitter (AdS) spacetime with the d-dimensional conformal field theory (CFT) on its asymptotic boundary. An important entry in the holographic dictionary is that the asymptotic symmetry of the bulk theory agrees with the symmetry of the boundary theory. Such symmetry constraints are powerful, and many universal results can be obtained from them in a general way together with other constraints.
In the study of the holographic description of asymptotically flat gravity, inspired by the role that asymptotic symmetry plays in the AdS/CFT correspondence, the asymptotic symmetry of asymptotically flat spacetime, known as the Bondi–van der Burg–Metzner–Sachs (BMS) symmetry <cit.>, has received much interest in the last few years. A simpler setting is three-dimensional asymptotically flat gravity, whose asymptotic symmetry is the BMS_3 symmetry. Based on the BMS_3 symmetry, three-dimensional flat holography was proposed <cit.>: three-dimensional asymptotically flat gravity is holographically described by a two-dimensional quantum field theory governed by the BMS_3 symmetry, known as the BMS field theory (BMSFT) or Carrollian conformal field theory, since the BMS_3 algebra is isomorphic to the Carrollian conformal algebra. This is an infinite-dimensional algebra, and the constraints it imposes are powerful in the study of BMS field theories.
One important probe in the AdS/CFT correspondence is the holographic entanglement entropy. The Ryu-Takayanagi formula <cit.> proposed that the entanglement entropy in the boundary corresponds to the area of a minimal surface in the bulk. In the case of flat holography, the analogue of the Ryu-Takayanagi formula was proposed in <cit.>. On the BMS field theory side, the entanglement entropy for a single interval on the cylinder or on the plane in the vacuum state can be obtained with the help of the replica trick <cit.>.
The entanglement entropy is a good measure of entanglement only when the system is in a pure state. In practice, however, it is always thermally polluted. In this paper, we are interested in the entanglement entropy for a single interval in the thermal state. Since there is a thermal circle and a spatial circle, this task is generally very difficult. However, in the low-temperature limit β_ϕ≫ L,β_u/β_ϕ≤ O(1), the leading thermal correction to the Rényi entropy is dominated by the first excited state and calculable. Here, L is the circumference of the cylinder coordinated by ϕ and u, and β_ϕ and β_u are the lengths of the thermal circle along the ϕ- and u-directions. Inspired by the universal results of the thermal correction to the entanglement entropy in the low-temperature limit in CFT <cit.>, we use the replica trick to rewrite the leading term in the thermal correction as a correlation function on the branched covering space and work it out with the help of the uniformizing map. It turns out that the leading thermal correction to the Rényi entropy takes a universal form
δ S_n =n/1-n[ ( sinπ l_ϕ/L/n sinπ l_ϕ/n L)^2Δ e^2π l_u ξ/L( cotπ l_ϕ/L -1/ncotπ l_ϕ/nL) -1] e^-2πβ_ϕΔ/L -2πβ_uξ/L,
which only depends on the scaling dimension Δ and the boost charge ξ of the first excited state and the geometric configuration of the entanglement interval. The thermal correction to the entanglement entropy is obtained by δ S_E = δ S_n→ 1.
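Note that when l_u=0 and β_u=0 all the ξ-dependence drops out and the correction reduces to
δ S_n =n/1-n[ ( sinπ l_ϕ/L/n sinπ l_ϕ/nL)^2Δ -1] e^-2πβ_ϕΔ/L,
matching the structure of the familiar low-temperature correction in a two-dimensional CFT reviewed below.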
As a double check, we also use the entanglement first law to translate the calculation of the variation δ S_E of the entanglement entropy to the variation δ⟨ K⟩ of the expectation value of the modular Hamiltonian. The latter can be calculated directly, since the modular Hamiltonian for a single interval on the cylinder in the pure state can be written explicitly. We show that these two approaches agree.
This paper is organized as follows. In Sec. 2, we give a quick review on BMS field theory. In Sec. 3, we calculate the thermal correction to the Rényi entropy in a type of low-temperature limit with the help of the replica trick and the uniformizing map. We also provide an alternative way to calculate the thermal correction to the entanglement entropy from the modular Hamiltonian and the entanglement first law as a double check. We conclude in Sec. 4 with a summary and some future directions.
§ REVIEW ON THE BMS FIELD THEORY
In this section, we give a quick review on some aspects of the BMS field theory.
∙ BMSFT on the cylinder
A BMSFT on a cylinder (ϕ,u) with a circumference
ϕ∼ϕ+L
is a two-dimensional quantum field theory that is invariant under the following BMS transformations
ϕ→ f(ϕ),
u → f'(ϕ) u +g(ϕ).
Here, f(ϕ) and g(ϕ) are periodic functions in ϕ with the periodicity L. Then, the infinitesimal BMS transformation generators are obtained by taking the Fourier modes
l_n = i L/2π e^i n 2π/Lϕ∂_ϕ -n e^i n 2π/Lϕ u∂_u,
m_n =i L/2π e^i n 2π/Lϕ∂_u.
∙ BMSFT on the plane
The BMSFT on the (x,y)-plane is obtained from the following plane-to-cylinder transformation
x =e^2π i /Lϕ,
y = 2π i /L e^2π i/Lϕ u.
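Note that this change of coordinates is itself of the BMS form given above: with f(ϕ)=e^2π i ϕ/L and g=0 we have x=f(ϕ) and y=f'(ϕ) u, so the plane description is related to the cylinder one by a finite BMS transformation.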
The infinitesimal symmetry generators on the plane are
l_n =-x^n+1∂_x -(n+1) x^n y ∂_y,
m_n = -x^n+1∂_y.
They form the BMS algebra without a central term via the Lie bracket
[l_n ,l_m] =(n-m) l_m+n,
[l_n, m_m] =(n-m) m_m+n,
[m_n, m_m] =0.
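As a quick consistency check of the middle bracket, one can verify it directly from the plane generators: for vector fields [X,Y]=X(Y^a)∂_a-Y(X^a)∂_a, and taking X=l_n=-x^n+1∂_x-(n+1)x^n y∂_y and Y=m_m=-x^m+1∂_y, the only nonvanishing contributions are X(-x^m+1)∂_y=(m+1)x^n+m+1∂_y and -Y(-(n+1)x^n y)∂_y=-(n+1)x^n+m+1∂_y, so that
[l_n,m_m] =(m-n)x^n+m+1∂_y =(n-m)( -x^n+m+1∂_y) =(n-m) m_n+m,
in agreement with the second bracket.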
At the quantum level, these symmetry generators l_n and m_n will become operators L_n and M_n which act on the state space. They form the BMS algebra with central charges c_M and c_L as
[L_n ,L_m] =(n-m) L_m+n +c_L/12n(n^2-1)δ_m+n,
[L_n, M_m] =(n-m) M_m+n+c_M/12n(n^2-1)δ_m+n,
[M_n, M_m] =0.
A primary operator ψ of the boost charge ξ and the conformal dimension Δ is specified by the following conditions
[L_0, ψ] =Δψ,
[M_0,ψ] =ξψ,
[L_n, ψ] =0, n>0,
[M_n, ψ] =0, n>0.
Under a BMS transformation
x̃ =f(x),
ỹ = f'(x)y +g(x),
a primary operator ψ transforms as
ψ̃(x̃,ỹ) =(f')^-Δ e^-ξ(y f''+g')/f'ψ(x,y).
On the plane, the currents J(x) and P(x) admit the following mode expansions
J(x) = ∑_n L_n x^-n-2,
P(x) =∑_n M_n x^-n-2.
Under the BMS transformation (<ref>) and (<ref>), the currents J(x) and P(x) transform as <cit.> P̃(x̃) =( ∂ f/∂ x)^-2( P(x) -c_M/12{f,x}),
J̃(x̃) =( ∂ f/∂ x)^-2( J(x) -c_L/12{f,x}) + ( ∂ g/∂ x)^-2( P(x) -c_M/12{g,x}) .
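As an illustration of how these transformation rules are used (with the same conventions, writing x̃=f(x) for the new coordinate), consider the plane-to-cylinder map, for which ϕ=(L/2π i)log x. A direct computation gives the Schwarzian derivative
{ϕ,x} =ϕ'''/ϕ' -3/2( ϕ''/ϕ')^2 =2/x^2 -3/(2x^2) =1/(2x^2),
so that
P̃(ϕ) =( ∂ϕ/∂ x)^-2( P(x) -c_M/12·1/(2x^2)) =-( 2π/L)^2( x^2 P(x) -c_M/24).
It is this constant Schwarzian that is responsible for the -c_L/24 and -c_M/24 shifts in the relation between the cylinder charges and the canonical BMS generators quoted in the next section.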
∙ State-operator correspondence
On the (x,y)-plane, the in-state corresponds to an operator inserted at x=0. From the plane-to-cylinder map (<ref>), in the cylinder coordinate, the in-state is inserted at ϕ=i∞. Similarly, the out-state is inserted at ϕ=-i∞ in the cylinder coordinate.
§ THERMAL CORRECTIONS TO THE RÉNYI ENTROPY
In this section, we use the replica trick and the uniformizing map to calculate the thermal correction to the Rényi entropy in the BMSFT for a single interval on the cylinder with circumference L.
§.§ Thermal Corrections to Rényi Entropy in CFT_2
Before we continue our calculation of the thermal correction to the Rényi entropy for a single interval on the cylinder in the BMSFT, we would like to first review the similar calculation in the case of CFT_2 <cit.>.
We assume that the theory is put on a cylinder with the circumference L, coordinatized by w=x-it; the thermal density matrix written in terms of a complete set of states is
ρ =1/tr(e^-β H)∑_|ϕ⟩ |ϕ⟩⟨ϕ| e^-β E_ϕ.
The Hamiltonian on the cylinder in the CFT is the combination of the left- and the right-moving zeroth-level Virasoro generators and the central charge,
H =2π/L( L_0 +L̅_0 -c/12).
Here, we have assumed that c_L=c_R=c. With the assumptions that there exists a unique ground state |0⟩, and that the spectrum of conformal dimensions Δ=h+h̅ is positive and gapped from the smallest positive value, there should exist an operator ψ of conformal weights (h,h̅) carrying this smallest Δ. This ψ has the smallest energy E_ψ=2π/L(Δ -c/12). Then, in the low-temperature limit β≫ L, the thermal density matrix admits the following expansion
ρ= ( |0⟩⟨0| +|ψ⟩⟨ψ|e^-2πΔβ/L+⋯) /( 1 +e^-2πΔβ/L +⋯).
We consider the entanglement region to be a single interval with two endpoints
∂_- : w=w̅=w_1, ∂_+ : w=w̅=w_2.
For convenience, we also introduce the rescaled endpoints
θ_1,2 = 2πw_1,2/L
and their difference
l=w_2-w_1.
The trace of the reduced density matrix ρ_A can be expanded according to the expansion (<ref>) of the thermal density matrix as
tr ρ_A^n = [ tr_B( |0⟩⟨0| +|ψ⟩⟨ψ|e^-2πΔβ/L+⋯) ]^n/(1 +e^-2πΔβ/L +⋯)^n
=tr( tr_B|0⟩⟨0|)^n [1+ ( tr( tr_B|ψ⟩⟨ψ| ( tr_B|0⟩⟨0|)^n-1)/tr( tr_B|0⟩⟨0|)^n -1 ) n e^-2πΔβ/L +⋯],
where tr_B denotes the partial trace over the complement B of the entanglement interval. The first term tr( tr_B|0⟩⟨0|)^n just gives the zero-temperature Rényi entropy. And the expression in the second term
tr( tr_B|ψ⟩⟨ψ| ( tr_B|0⟩⟨0|)^n-1)/tr( tr_B|0⟩⟨0|)^n,
which determines the leading thermal correction, can be recast as a 2-point function of the operator ψ(w) on an n-sheeted copy C_n of the cylinder branched over the interval, via the state-operator correspondence |ψ⟩∼lim_t→-∞ψ(x,t)|0⟩ and ⟨ψ|∼lim_t→∞⟨0|ψ(x,t), as
tr( tr_B|ψ⟩⟨ψ| ( tr_B|0⟩⟨0|)^n-1)/ tr( tr_B|0⟩⟨0|)^n =lim_t_2 →∞, t_1 → -∞⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n/⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_1.
To calculate the 2-point function ⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n on the n-sheeted copy C_n, we can use the following uniformizing map
ζ^(n) =( e^2π i w/L -e^iθ_2/e^2π i w/L -e^iθ_1)^1/n
to send C_n to the ζ-plane. The 2-point function on the plane in the CFT is just
⟨ψ(ζ^(n)_2,ζ̅^(n)_2)ψ(ζ^(n)_1,ζ̅^(n)_1)⟩ =1/(ζ^(n)_21)^2h(ζ̅^(n)_21)^2h̅.
Mapping it back to the n-sheeted copy C_n along the uniformizing map (<ref>), we obtain the expression of the 2-point function on C_n as
⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n =(d ζ_1/d w_1d ζ_2/d w_2)^h/ζ_12^2h(d ζ̅_1/d w̅_1d ζ̅_2/d w̅_2)^h̅/ζ̅_12^2h̅.
Substituting this into (<ref>), we have
⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n/⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_1 = [ 1/n^2h( ζ_1^(n)ζ_2^(n)/ζ_1^(1)ζ_2^(1))^h( ζ_2^(1) -ζ_1^(1)/ζ_2^(n) -ζ_1^(n))^2h] · [complex conjugate].
After taking the limit t_1→-∞ and t_2→∞, we have
⟨ψ(i∞)ψ(-i∞)⟩_C_n/⟨ψ(i∞)ψ(-i∞)⟩_C_1 = 1/n^2Δ( sin(θ_2 -θ_1/2)/sin(θ_2 -θ_1/2n))^2Δ.
Then, from (<ref>) and the definition of the Rényi entropy, we obtain the leading thermal correction to the Rényi entropy as
δ S_n = 1/1-n( sin^2Δ(π l/L)/n^2Δ-1sin^2Δ(π l/nL)-n ) e^-2πΔβ/L+ o(e^-2πΔβ/L).
In this calculation, suitable assumptions about the spectrum have been made so that the leading contribution to the thermal correction of the Rényi entropy is captured by the correlation function of the lightest operator on the branched covering space. The latter is further worked out with the help of the uniformizing map that sends this n-sheeted copy space to the plane.
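As a quick consistency check of the expression above, its n→1 limit can be evaluated symbolically and compared with the known CFT_2 result 2Δ(1-(π l/L)cot(π l/L)) e^-2πΔβ/L. A minimal sympy sketch (the symbol names are illustrative, not from the original derivation); it uses that the bracket vanishes at n=1, so the limit equals minus its derivative there:

import sympy as sp

n, Delta, l, L = sp.symbols('n Delta l L', positive=True)
x = sp.pi*l/L
# bracket of delta S_n; the overall factor e^{-2 pi Delta beta / L} is dropped
g = sp.sin(x)**(2*Delta)/(n**(2*Delta - 1)*sp.sin(x/n)**(2*Delta)) - n
# delta S_E = lim_{n->1} g(n)/(1-n) = -g'(1), since g(1) = 0
dS_E = -sp.diff(g, n).subs(n, 1)
known = 2*Delta*(1 - x*sp.cos(x)/sp.sin(x))
print(sp.simplify(dS_E - known))   # expected output: 0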
§.§ Thermal Correction Dominated by the Singlet Primary
Consider a two-dimensional BMS field theory on the cylinder coordinatized by (ϕ, u) with circumference L, i.e. ϕ∼ϕ +L.
To introduce the temperature, we consider the following thermal identification [Here, we consider the case that β_u takes the same sign as β_ϕ, because we are going to assume the boost charge ξ is bounded from below. If ξ is bounded from above instead, then we should consider (ϕ, u) ∼ (ϕ +iβ_ϕ, u -iβ_u ) instead.]
(ϕ, u) ∼ (ϕ +iβ_ϕ, u +iβ_u ),
the corresponding thermal density matrix is
ρ = e^-β_ϕ L_0^cyl -β_u M_0^cyl/Tr( e^-β_ϕ L_0^cyl -β_u M_0^cyl) .
Here, L_0^cyl and M_0^cyl are charges generating translations along the ϕ and u directions, respectively. Under the plane-to-cylinder transformation of the currents (<ref>), these cylinder translation generators are related to the canonical BMS generators L_0 and M_0 as
L_0^cyl =2π/L( L_0-c_L/24), M_0^cyl=2π/L( M_0-c_M/24).
Substituting this back into (<ref>), the thermal density matrix written in terms of the canonical BMS generators is
ρ = e^-β_ϕ2π/L( L_0-c_L/24) -β_u 2π/L( M_0-c_M/24)/Tr( e^-β_ϕ2π/L( L_0-c_L/24) -β_u 2π/L( M_0-c_M/24))
= e^-β_ϕ2π/L L_0 -β_u 2π/LM_0/Tr( e^-β_ϕ2π/LL_0 -β_u 2π/LM_0).
∙ Low Temperature Expansion
We consider the BMSFT whose spectrum satisfies the following conditions so that the low-temperature expansion of the thermal density matrix is dominated by the first excited state.
– There exists a unique ground state |0⟩, around which we can turn on a small temperature and expand the thermal density matrix.
– In the spectrum both the conformal weight Δ and the boost charge ξ are bounded from below.
– There exists a gap between the ground state |0⟩ and the lightest state |ψ⟩ corresponding to the primary operator ψ labelled by (Δ,ξ).
The last condition requires more explanation. As we turn on a small temperature, there might be several candidate lightest states above the ground state. Depending on the approach to the low-temperature limit, the operator ψ with the smallest Δ+(β_u/β_ϕ)ξ excites first.
There are still several difficulties in obtaining an expansion dominated by ψ. First, due to the non-unitary nature, although M_0 is self-adjoint, it is not diagonalizable. For example, there are two descendants of ψ at level 1, M_-1|ψ⟩ and L_-1|ψ⟩. M_0 acts on them non-diagonally as a Jordan block
M_0 [ M_-1|ψ⟩; L_-1|ψ⟩ ] = [ ξ 0; 1 ξ ][ M_-1|ψ⟩; L_-1|ψ⟩ ].
As a consequence, the thermal density matrix ρ is also non-diagonalizable, and it is not possible to expand ρ in terms of eigenstates {Φ} of L_0 and M_0 such as
ρ∝∑_Φ e^-2π/L(β_ϕ L_0^Φ +β_u M_0^Φ) |Φ|⟨%s|⟩Φ| .
Another problem is that there are infinitely many descendants created by the M_-k's with the same boost charge ξ as ψ itself, because the M_-k all commute with M_0. So, in a low-temperature limit with β_u ≫β_ϕ, these descendants will not be suppressed.
At this point, we will not try to answer the interesting question of the meaning of a non-diagonalizable density matrix. Instead, we restrict to a particular type of low-temperature limit to avoid the above difficulties.
– Consider the following low-temperature limit
β_ϕ≫ L, β_u/β_ϕ≤ O(1).
Then, the primary operator ψ dominates the thermal density matrix expansion.
Under these assumptions, the thermal density matrix is dominated by ψ at this low temperature as
ρ = |0⟩⟨ 0| +|ψ⟩⟨ψ| e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯/1 +e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯.
∙ Entanglement measurements
Consider the entanglement region A; the reduced density matrix on A is
ρ_A = Tr_B ρ,
where B denotes the complement of A. We are interested in the following entanglement measurements: the Rényi entropy
S_n = 1/1-nlog Tr(ρ_A^n)
and the entanglement entropy
S_E= -Tr(ρ_A logρ_A) =S_n→ 1.
Concretely, we consider the entanglement region to be a single interval A specified by its endpoints
∂_- A =(ϕ_-,u_-), ∂_+ A =(ϕ_+,u_+).
For convenience, let us introduce the range of the interval A in the ϕ- and the u-directions as
l_ϕ =ϕ_+ -ϕ_-, l_u=u_+ -u_-.
Under the above low-temperature expansion (<ref>), Tr ρ_A^n can be expanded as
Tr ρ_A^n =Tr_A[Tr_B(|0⟩⟨ 0| +|ψ⟩⟨ψ| e^-2πβ_ϕΔ/L -2πβ_uξ/L +⋯)]^n /(1 +e^-2πβ_ϕΔ/L -2πβ_uξ/L +⋯)^n
=Tr_A(Tr_B |0⟩⟨ 0|)^n [1+ ( Tr_A[Tr_B |ψ⟩⟨ψ| (Tr_B |0⟩⟨ 0|)^n-1]/ Tr_A(Tr_B |0⟩⟨ 0|)^n -1 ) n e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯].
The first term Tr_A(Tr_B |0⟩⟨ 0|)^n corresponds to the ground-state Rényi entropy. The second term determines the leading contribution to the low-temperature thermal correction. To calculate this term, we use the replica trick and the state-operator correspondence to replace it by a 2-point function of ψ on the n-sheeted copy C_n of the original space branched over ∂A. Using the state-operator correspondence, the in-state |ψ⟩ corresponds to
|ψ⟩=lim_ϕ→ i∞ψ(ϕ,u)|0⟩,
and the out-state ⟨ψ| corresponds to
⟨ψ|=lim_ϕ→ -i∞⟨0|ψ(ϕ,u).
Together with the replica trick, the coefficient in the thermal correction term can be written as
Tr_A[Tr_B |ψ⟩⟨ψ| (Tr_B |0⟩⟨0|)^n-1]/ Tr_A(Tr_B |0⟩⟨0|)^n =lim_ϕ_1→ +i∞, ϕ_2→ -i∞ Tr_A[Tr_B( ψ(ϕ_1,u_1)|0⟩⟨0|ψ(ϕ_2,u_2) ) (Tr_B |0⟩⟨0|)^n-1]/ Tr_A(Tr_B |0⟩⟨0|)^n
= lim_ϕ_1→ +i∞, ϕ_2→ -i∞⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_C_n/⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_C_1.
Now, we can use the uniformizing map to calculate this 2-point function of ψ on C_n.
∙ Uniformizing Map
To calculate the 2-point function on C_n, we use the following uniformizing map from C_n to the plane,
x =(e^2π i ϕ/L -e^2π i ϕ_-/L/e^2π i ϕ/L-e^2π i ϕ_+/L)^1/n =: f^(n)(ϕ)
y = ( u -l_u/2sinπ l_ϕ/Lsinπ( 2ϕ -ϕ_- -ϕ_+)/L)d/d ϕ f^(n)(ϕ).
This transformation can be decomposed into several steps. In the x-direction, the plane-to-cylinder map z=e^2πiϕ/L maps the S^1-coordinate ϕ to the analytically continued complex z-plane. Then, on this complex plane, the z-coordinate of ∂A becomes z_±=e^2πi ϕ_±/L. To introduce the n-sheeted copy of this analytically continued space branched over z_±, we apply an SL(2,ℂ) transformation w=(z-z_-)/(z-z_+) which sends z_- to 0 and z_+ to ∞, and take the n-th root of it. In the y-direction, the subtraction ( u -l_u/2sinπ l_ϕ/Lsinπ( 2ϕ -ϕ_- -ϕ_+)/L) cancels the range l_u of the interval A in the u-direction.
The 2-point function of the primary operators on the plane is determined by the symmetry up to a normalization factor N,
⟨ψ(x_1,y_1)ψ(x_2,y_2)⟩ =N x_12^-2Δ e^-2ξ y_12/x_12.
Mapped to the cylinder coordinates along (<ref>), the primary operator ψ transforms as
ψ(ϕ, u) =( d x/d ϕ)^Δe^-ξyd^2ϕ/dx^2/dϕ/dx -ξd/dϕ( l_u/2sinπ l_ϕ/Lsinπ( 2ϕ -ϕ_- -ϕ_+)/L)ψ(x,y)
=f^(n)'Δ(ϕ) e^-ξ( u f^(n)'(ϕ)d(f^(n)'(ϕ)^-1)/dϕ +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ -ϕ_- -ϕ_+)/L)ψ(x,y).
Thus, the correlation function on C_n becomes
⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_C_n
= N f^(n)'Δ(ϕ_2) e^-ξ( u_2 f^(n)'(ϕ)d(f^(n)'(ϕ_2)^-1)/dϕ_2 +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ_2 -ϕ_- -ϕ_+)/L)
× f^(n)'Δ(ϕ_1) e^-ξ( u f^(n)'(ϕ_1)d(f^(n)'(ϕ_1)^-1)/dϕ_1 +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ_1 -ϕ_- -ϕ_+)/L) x_12^-2Δe^-2ξy_12/x_12.
Substituting this into (<ref>) and taking the limit, we obtain the correction term
Tr_A[Tr_B |ψ⟩⟨ψ| (Tr_B |0⟩⟨0|)^n-1]/ Tr_A(Tr_B |0⟩⟨0|)^n = lim_ϕ_1→ +i∞, ϕ_2→ -i∞⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_C_n/⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_C_1
= ( sin(π l_ϕ/L)/(n sin(π l_ϕ/n L)))^2Δ e^(2 π l_u ξ/L)( cot(π l_ϕ/L) -(1/n)cot(π l_ϕ/nL)) .
In the explicit calculation, to take the limit ϕ_1→+i∞, ϕ_2→-i∞, we have set ϕ_1=i T_1 and ϕ_2 =-i T_2 and expanded the above in orders of ϵ_1=e^-2πT_1/L and ϵ_2=e^-2πT_2/L.
Substituting this back into the definition (<ref>) of the Rényi entropy, we obtain the thermal correction to the Rényi entropy
δ S_n =n/(1-n)[( sin(π l_ϕ/L)/(n sin(π l_ϕ/n L)))^2Δ e^(2 π l_u ξ/L)( cot(π l_ϕ/L) -(1/n)cot(π l_ϕ/nL)) -1] e^-2πβ_ϕΔ/L -2πβ_uξ/L.
The thermal correction to the entanglement entropy can be obtained by taking the n→1 limit,
δ S_E =[ 2Δ(1-(π l_ϕ/L)cot(π l_ϕ/L)) + 2 ξ( π^2 l_u l_ϕ/(L^2 sin^2(π l_ϕ/L)) -(π l_u/L)cot(π l_ϕ/L)) ] e^-2πβ_ϕΔ/L -2πβ_uξ/L.
For a pure state, S_n(A)=S_n(B). However, the thermal correction violates this equality. The complement B of A is an interval of range L-l_ϕ in the ϕ-direction and -l_u in the u-direction. Since δS_n(L-l_ϕ,-l_u)≠δS_n (l_ϕ,l_u), the Rényi entropy is indeed thermally polluted.
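The n→1 limit quoted above can also be checked symbolically; a minimal sympy sketch (symbol names are illustrative, and the thermal suppression factor is dropped since it does not depend on n):

import sympy as sp

n, Delta, xi, lphi, lu, L = sp.symbols('n Delta xi l_phi l_u L', positive=True)
x = sp.pi*lphi/L
# bracket of delta S_n; cotangents written as cos/sin to help simplification
g = (sp.sin(x)/(n*sp.sin(x/n)))**(2*Delta) \
    * sp.exp(2*sp.pi*lu*xi/L*(sp.cos(x)/sp.sin(x) - sp.cos(x/n)/(n*sp.sin(x/n)))) - 1
# delta S_E = lim_{n->1} n/(1-n) * g(n) = -g'(1), since g(1) = 0
dS_E = -sp.diff(g, n).subs(n, 1)
known = 2*Delta*(1 - x*sp.cos(x)/sp.sin(x)) \
        + 2*xi*(sp.pi**2*lu*lphi/(L**2*sp.sin(x)**2) - sp.pi*lu/L*sp.cos(x)/sp.sin(x))
print(sp.simplify(dS_E - known))   # expected output: 0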
§.§ Thermal Correction Dominated by the Multiplet Primary
Previously, we obtained the thermal correction to the Rényi entropy and the entanglement entropy in the case that a singlet primary dominates the thermal correction. Now, we consider the case that a multiplet primary dominates the thermal correction. As we will see, the thermal correction to the Rényi entropy is just that of a singlet multiplied by the rank of the multiplet. However, this seemingly intuitive result is not that trivial. Actually, the off-diagonal terms dominate the expansion of the thermal density matrix, but they just do not contribute to the thermal correction to the Rényi entropy.
M_0 acts on a rank-r primary multiplet O=(O_0,O_1,⋯,O_r-1)^T as
M_0 | O_a ⟩= ξ | O_a ⟩+ | O_a-1⟩, a=1,⋯,r-1,
M_0 | O_0 ⟩= ξ | O_0 ⟩, a=0.
Or in a more compact form, M_0 O =(ξ 1_r +J_r) O. Here, 1_r is the rank-r identity matrix, and J_r is the rank-r Jordan cell
J_r=
[ 0 ; 1 0 ; ⋱ ⋱; 1 0; ]_r× r,
which is nilpotent, (J_r)^r=0. The action of e^-β_ϕ2π/LL_0-β_u 2π/LM_0 on the primary part of this multiplet becomes e^-β_u 2π/LJ_r e^-2πβ_ϕ/LΔ-2πβ_u/Lξ. The matrix part e^-β_u 2π/LJ_r can be expanded into finitely many terms as
e^-β_u 2π/LJ_r =∑_k=0^r-1(-β_u 2π/L)^k/k! (J_r)^k.
Since β_u ≫ L, it seems that the k=r-1 term dominates the expansion (<ref>). However, as we will see later, although this (J_r)^r-1 term dominates the expansion of the matrix, it does not contribute to the thermal correction to the Rényi entropy after taking the trace. It is the (J_r)^0 term that dominates the thermal correction. Explicitly, the k=r-1 term is
(-β_u 2π/L)^r-1/(r-1)! (J_r)^r-1= (-β_u 2π/L)^r-1/(r-1)![ 0 0; ⋮ ⋱; 0 ⋱; 1 0 ⋯ 0 ]_r× r
= (-β_u 2π/L)^r-1/(r-1)! |O_0⟩⟨O_r-1^∨|.
Here, the dual basis ⟨O_a^∨| is defined by
⟨O_a^∨ |O_b⟩ =δ_a,b.
Putting everything together, the multiplet version of the low-temperature expansion of density matrix (<ref>) is
ρ = ( |0⟩⟨ 0| +|O_0⟩⟨O_r-1^∨| (-2πβ_u/L)^r-1/(r-1)! e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯ )/( 1 +r e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯ ).
We can use the inner product between the in-state and the out-state within a multiplet <cit.>, ⟨O_a | O_b ⟩ =δ_a+b,r-1,
to transform from the dual basis to the out-states,
⟨O_a^∨| =⟨O_r-1-a|.
Then, the density matrix can be written as
ρ = ( |0⟩⟨ 0| +|O_0⟩⟨O_0| (-2πβ_u/L)^r-1/(r-1)! e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯ )/( 1 +r e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯ ).
The correlation function <cit.> among the rank-r multiplet is
⟨ O_a(x_2,y_2)O_b(x_1,y_1)⟩ = 0 for q<0,
⟨ O_a(x_2,y_2)O_b(x_1,y_1)⟩ = d_r x_12^-2Δ e^-2ξ y_12/x_12 (1/q!)(-2y_12/x_12)^q for q≥ 0, where q=a+b-r+1.
In particular, for r>1, ⟨O_0(x,y) O_0(x',y')⟩ =0.
So, we see this (J_r)^r-1 term does not contribute to the thermal correction term at all. Moreover, it turns out that all the off-diagonal terms do not contribute to the leading correction to the Rényi entropy. To see this, consider the (J_r)^k summand in (<ref>) written in the basis
(J_r)^k = ∑_a=0^r-1-k |O_a⟩⟨O_a+k^∨| =∑_a=0^r-1-k |O_a⟩⟨O_r-1-a-k|.
Since q=a +(r-1-a-k) -r+1 =-k ≤ 0, with equality only if k=0,
the correlation function ⟨O_a(x,y) O_r-1-a-k(x',y')⟩ vanishes for any k>0 because of (<ref>). Only for k=0 does the correlation function not vanish, i.e.,
⟨O_a(x_2,y_2) O_r-1-a(x_1,y_1)⟩ =N x_12^-2Δ e^-2ξ y_12/x_12, a=0,⋯,r-1,
which is the same as the correlation function of a singlet (<ref>). So, the thermal correction to the Rényi entropy is just that of a singlet multiplied by r,
δS_n =rn/(1-n)[( sin(πl_ϕ/L)/(n sin(πl_ϕ/n L)) )^2Δ e^(2 πl_u ξ/L)( cot(πl_ϕ/L) -(1/n)cot(πl_ϕ/nL)) -1] e^-2πβ_ϕΔ/L -2πβ_uξ/L.
We see that when a multiplet primary dominates the low-temperature expansion, although the off-diagonal contributions dominate the correction to the thermal density matrix, they do not contribute to the correction of the Rényi entropy. The result is just that of the singlet multiplied by the rank r. It would be interesting to find whether there exist other entanglement measurements for which the off-diagonal contributions do not vanish. We leave this to future work.
§.§ Comments on Another Limit
So far, we have considered the particular low-temperature limit (<ref>), but there is also a complementary choice to reach the low-temperature limit, in which the boost charge ξ dominates the first excited state. An extreme case is that the thermal circle is purely along the u-direction. The thermal circle is
u ∼ u+ iβ_u, β_u ≫ L.
Then, the density matrix is proportional to e^-β_u M_0. In this case, any primary ψ with boost charge ξ>0 is heavier than not only the vacuum state, but also all the descendants of the vacuum (e.g., M_-k⃗|0⟩), because these descendants of the vacuum all have boost charge ξ=0.
If in the spectrum the boost charge is gapped, then in the β_u ≫ L limit, the density matrix is dominated by the vacuum block, and all the vacuum descendants are just as heavy as the vacuum, thus a low-temperature expansion is hardly accessible. However, the result of such a thermal correction to the entanglement entropy might be even more universal than the previous case, as it depends only on the vacuum block and the algebraic structure, not on the details of the spectrum.
On the other hand, if there exist any other primary operators with boost charge 0, then the density matrix is dominated by these blocks together with the vacuum block. Since the operator e^-β_u M_0 does not care about the conformal weight at all, the results of the thermal correction might be similar to the case where only the vacuum block dominates.
To summarize, in this type of low-temperature limit, since all descendants of the vacuum are equally heavy as measured by their boost charge, an honest calculation must include them all. Even if it is still possible to expand the density matrix e^-β_u M_0 organized by the orders of the Taylor expansion and the levels of the descendants, it is still hard to trace out B and obtain the reduced density matrix on A in a workable way. However, since we expect the result to be universal, once we work it out in one explicit example, hopefully we might find a solution according to the answer. Currently, since this type of thermal circle in BMSFT is still not well understood, we leave this to future work.
§.§ Modular Hamiltonian Approach
In this subsection, we calculate the thermal correction to the entanglement entropy from the modular Hamiltonian. As a double check, the result agrees with the previous calculation (<ref>). The modular Hamiltonian for the reduced density matrix on A is defined to be
K_A = -logρ_A.
From the entanglement first law, for an infinitesimal variation of the state, the calculation of the variation of the entanglement entropy can be replaced by the variation of the expectation value of the modular Hamiltonian
δ S_A=δ⟨K_A⟩.
In general, the modular Hamiltonian K_A cannot be written down in terms of local data. Only in theories with enough symmetry does the modular Hamiltonian have an explicit formula for simple entanglement regions and special states. In particular, the modular Hamiltonian in BMSFT <cit.> can be written down explicitly for a single interval on the cylinder in the vacuum state.
For the single interval A in the vacuum state on the cylinder with circumference L, the modular Hamiltonian K_A can be written as a local integral of the modular generator ζ_A against the currents J(ϕ) and P(ϕ) as
K_A =∫_ϕ_-^ϕ_+ dϕ[ L/2π (cos(π l_ϕ/L)-cos(π(2ϕ -ϕ_+ -ϕ_-)/L))/sin(π l_ϕ/L) J(ϕ) + l_u/2 (cot(π l_ϕ/L)cos(π(2ϕ -ϕ_+ -ϕ_-)/L) -csc(π l_ϕ/L))/sin(π l_ϕ/L) P(ϕ) ].
To calculate the variation of the modular Hamiltonian, we need to calculate the variation of the currents J(ϕ) and P(ϕ),
δ⟨J⟩ =⟨J⟩_ρ -⟨J⟩_|0⟩,
δ⟨P⟩ =⟨P⟩_ρ -⟨P⟩_|0⟩.
Substituting the low-temperature expansion (<ref>) of the thermal density matrix ρ,
δ⟨J(ϕ)⟩ =e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨J(ϕ)⟩_|ψ⟩ - ⟨J(ϕ)⟩_| 0⟩),
δ⟨P(ϕ)⟩ =e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨P(ϕ)⟩_|ψ⟩ - ⟨P(ϕ)⟩_| 0⟩).
So, we need to calculate the difference of the expectation values of the currents between the primary state |ψ⟩ and the vacuum |0⟩. For this, we apply the plane-to-cylinder transformation (<ref>) and insert the primary operator ψ at the origin of the (x,y)-plane. Recall the mode expansion of the currents on the plane
J(x) =∑_n L_n x^-n-2, P(x)=∑_n M_n x^-n-2.
Thus, the expectation values of the currents on the plane under a primary state are
⟨J^pl(x)⟩ =Δ/x^2, ⟨P^pl(x)⟩ =ξ/x^2.
Applying the transformation of the currents (<ref>), the expectation values of the currents on the cylinder become
⟨J(ϕ)⟩ = ( ∂ x/∂ϕ)^2 J^pl(x) +c_L/12{x,ϕ}=-4π^2/L^2Δ +π^2/L^2c_L/6,
⟨P(ϕ)⟩ = ( ∂ x/∂ϕ)^2 P^pl(x) +c_M/12{x,ϕ}=-4π^2/L^2ξ +π^2/L^2c_M/6.
Thus, the differences of the expectation values of the currents between |ψ⟩ and |0⟩ are
⟨J(ϕ)⟩_|ψ⟩ - ⟨J(ϕ)⟩_| 0⟩ =-4π^2/L^2Δ,
⟨P(ϕ)⟩_|ψ⟩ -⟨P(ϕ)⟩_|0⟩ =-4π^2/L^2ξ.
Substituting this into (<ref>), we obtain the variation of the currents
δ⟨J(ϕ)⟩ =e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨J(ϕ)⟩_|ψ⟩ - ⟨J(ϕ)⟩_| 0⟩) =-4π^2/L^2Δ e^-2πβ_ϕ/LΔ -2πβ_u/Lξ ,
δ⟨P(ϕ)⟩ =e^-2πβ_ϕ/LΔ -2πβ_u/Lξ( ⟨P(ϕ)⟩_|ψ⟩ - ⟨P(ϕ)⟩_| 0⟩) =-4π^2/L^2ξ e^-2πβ_ϕ/LΔ -2πβ_u/Lξ.
For the modular Hamiltonian (<ref>), the variation of the modular Hamiltonian is
δ⟨K_A⟩ = ∫_ϕ_-^ϕ_+ dϕ[ L/2π (cos(π l_ϕ/L)-cos(π(2ϕ -ϕ_+ -ϕ_-)/L))/sin(π l_ϕ/L) δ⟨J(ϕ)⟩ + l_u/2 (cot(π l_ϕ/L)cos(π(2ϕ -ϕ_+ -ϕ_-)/L) -csc(π l_ϕ/L))/sin(π l_ϕ/L) δ⟨P(ϕ)⟩]
= [ 2Δ(1-(π l_ϕ/L)cot(π l_ϕ/L)) + 2 ξ( π^2 l_u l_ϕ/(L^2 sin^2(π l_ϕ/L)) -(π l_u/L)cot(π l_ϕ/L)) ]e^-2πβ_ϕ/LΔ -2πβ_u/Lξ .
This result agrees with the previous calculation (<ref>) of the variation of the entanglement entropy.
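The integral above can also be verified symbolically; a minimal sympy sketch (endpoints chosen symmetrically about ϕ=0 for simplicity, symbol names illustrative), checking that the two weights multiplying δ⟨J⟩=-4π^2Δ/L^2 and δ⟨P⟩=-4π^2ξ/L^2 integrate to the quoted result:

import sympy as sp

phi = sp.symbols('phi', real=True)
Delta, xi, lphi, lu, L = sp.symbols('Delta xi l_phi l_u L', positive=True)
x = sp.pi*lphi/L
# modular-Hamiltonian weights, with phi_- = -l_phi/2 and phi_+ = +l_phi/2
wJ = L/(2*sp.pi)*(sp.cos(x) - sp.cos(2*sp.pi*phi/L))/sp.sin(x)
wP = lu/2*(sp.cos(x)*sp.cos(2*sp.pi*phi/L)/sp.sin(x) - 1/sp.sin(x))/sp.sin(x)
IJ = sp.integrate(wJ, (phi, -lphi/2, lphi/2))
IP = sp.integrate(wP, (phi, -lphi/2, lphi/2))
dK = -4*sp.pi**2*Delta/L**2*IJ - 4*sp.pi**2*xi/L**2*IP
known = 2*Delta*(1 - x*sp.cos(x)/sp.sin(x)) \
        + 2*xi*(sp.pi**2*lu*lphi/(L**2*sp.sin(x)**2) - sp.pi*lu/L*sp.cos(x)/sp.sin(x))
print(sp.simplify(dK - known))   # expected output: 0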
§ DISCUSSION
In this paper, we consider the single interval entanglement region on the cylinder in the BMSFT. We find a suitable low-temperature limit under which an expansion of the thermal density matrix dominated by the first excited operator is possible. In this limit, we calculate the thermal correction to the Rényi entropy by the replica trick and the uniformizing map. As a double check, for the thermal correction to the entanglement entropy, we also provide an alternative calculation by the modular Hamiltonian and the entanglement first law.
Though we have provided a double check from another calculation of the entanglement entropy via the modular Hamiltonian, it would be more satisfactory to have a numerical check in a concrete model. Despite the fact that several concrete BMSFT models have been found and studied recently, it seems that we still do not have a satisfactory understanding of their underlying Hilbert space structure and of the correct way to discretize these models in a meaningful way. We leave this to future work until we have a better understanding of these concrete models. Also, a concrete model analysis might be helpful to understand the other type of low-temperature limit in Sec. <ref>.
Another interesting direction is to test this thermal correction term in the holographic entanglement proposals. At finite temperature, the calculation on the cylinder is secretly on a torus, and the replica trick fails as the covering space is of higher genus. However, a holographic calculation with temperature in the bulk is still possible using the geometric picture. Hence, a comparison between the low-temperature results in the bulk and on the boundary is possible.
I would like to thank Peng-xiang Hao, Wenxin Lai and Jun Nian for useful discussions. I would like to specially thank Jun Nian for proofreading the manuscript. This work was supported in part by the NSFC under grant No. 12147103.
JHEP |
http://arxiv.org/abs/2307.04000v1 | 20230708155126 | Synthesis of resonant modes in electromagnetics | [
"Antonello Tamburrino",
"Carlo Forestiere",
"Giovanni Miano",
"Guglielmo Rubinacci",
"Salvatore Ventre"
] | physics.optics | [
"physics.optics",
"physics.class-ph"
] |
Department of Electrical and Information Engineering M. Scarano, Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy.
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI-48824, USA.
e-mail: [email protected]
Department of Electrical Engineering and Information Technology, Università degli Studi di Napoli Federico II, via Claudio 21, Napoli, 80125, Italy
Department of Electrical and Engineering Information M. Scarano, Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy.
Resonant modes determine the response of electromagnetic devices, including dielectric and plasmonic resonators. Relying on the degrees of freedom that metamaterials provide, this contribution shows how to design, at will, the resonant modes of a dielectric object placed in an unbounded space. Specifically, the proposed method returns in analytical form the spatial distribution of the dielectric susceptibility tensor for which the object exhibits resonances at prescribed frequencies and spatial distribution of the polarization. Together with the synthesis of the material, two key concepts are introduced: the controlled tunability of the resonant modes and the number of essential modes, i.e. the number of modes that uniquely characterize the spatial distribution of the dielectric susceptibility.
Moreover, this approach can be applied to design the resonant modes of any system where the constitutive relationship is linear and local.
Synthesis of resonant modes in electromagnetics
Salvatore Ventre
August 12, 2023
================================================
Media with a spatially inhomogeneous refractive index have fascinated humankind for millennia, exhibiting counter-intuitive effects such as mirages or fata morgana. Archaeological evidence indicates that humans learned how to engineer refractive index variations to make lenses in antiquity, spanning several millennia. More recently, nano-fabrication techniques, the discovery of materials with tunable permittivity, and the introduction of the metamaterial concept <cit.> have greatly expanded the landscape of feasible permittivity distributions for electromagnetic design. Anisotropic and even continuous effective variations of the permittivity can now be implemented.
Using the degrees of freedom in the choice of the materials, it is possible to control the electromagnetic field, as shown by Pendry et al. <cit.> by introducing transformation optics <cit.>. They showed that the permittivity and permeability effectively determine a curved spatial geometry for the electromagnetic field. Thus, leveraging this analogy, they showed how anisotropic and inhomogeneous permittivity and permeability profiles can redirect the electromagnetic field in a prescribed way. Recently, several optimization methods have been introduced to design materials achieving a prescribed electromagnetic response, incorporating at the same time fabrication constraints
<cit.>.
In this manuscript, we take a fresh path to the design of the electromagnetic resonances of a scatterer, which play a central role in electromagnetic devices, e.g. <cit.>. Plasmonic and dielectric nano-resonators are an interesting example. When the resonance condition is met, the near-field and far-field characteristics of the device are dominated by the corresponding resonant mode.
We introduce a theoretical framework that enables the synthesis of the spatial distribution of the permittivity profile of a dielectric object, to design its resonant modes, i.e. polarization current density distributions. The designer preliminarily specifies, in the spatial domain occupied by the object, one or several modes, together with the corresponding resonant frequencies. Then, the synthesis process returns the possibly inhomogeneous and anisotropic permittivity profile which guarantees that the dielectric object exhibits the prescribed modes at the specified resonance frequencies. It is a direct method: it does not require any optimization approach, but explicitly returns the analytical solution in a single step. The synthesis approach leverages a formulation of the generalized eigenvalue problem where the contributions of the material and of the electromagnetic field are separated. Yet, this approach is very general: it can be applied to any system where the constitutive relationship is linear and non-spatially dispersive. For instance, it can be used to design the properties of an elastic material to control its vibrational modes.
In addition, the proposed framework allows one to clearly identify the physical feasibility and limitations inherent to the problem of the design of the modes. The main outcome is that the maximum number of modes (essential modes) that can be prescribed at a given resonance frequency, is equal to the dimension of the problem (two for a 2D problem and three for a 3D problem). These are inherent physical limits unveiled by the proposed framework.
Finally, we also address the problem of the tunability where, by scaling the dielectric susceptibility, we can change completely the resonance property in a controlled way. This feature enables the design of tunable materials, where one can adapt the response of the material dynamically, according to specific needs.
§ MODES AND EIGENVALUE PROBLEM
We consider a linear, nonmagnetic and non-spatially dispersive dielectric of finite size, shown in Fig. <ref>. We denote the space occupied by the dielectric by Ω, its boundary by ∂Ω, and the (unit vector) normal to ∂Ω that points outward by 𝐧.
Under these assumptions, the polarization density 𝐏 is given by 𝐏( 𝐫,ω) = ε_0 χ( 𝐫,ω) ·𝐄( 𝐫,ω), where χ is the dielectric susceptibility tensor, ω is the angular frequency (the e^jω t time behavior is assumed), ε_0 is the vacuum permittivity, and · corresponds to the usual dot product between tensors and vectors.
When the dielectric scatterer is excited by an external electric field 𝐄^i, the total electric field 𝐄 can be written as the sum of 𝐄^i and of the reaction field 𝐄^𝙿 due to the presence
of the polarization current density jω𝐏. The constitutive
relation can be written as
1/ε_0 γ( 𝐫, ω) ·𝐏( 𝐫, ω) - 𝐄^P( 𝐫, ω) = 𝐄^i( 𝐫, ω) in Ω,
where the tensor γ is the pointwise inverse of χ, i.e. γ( 𝐫,ω) =χ^-1( 𝐫,ω).
Let ℰ( ω)
be the operator giving the electric field produced by a prescribed polarization
density field 𝐏 radiating in the free space at frequency ω <cit.>:
𝐄^P( 𝐫) =jω∫_Ω𝐆
( 𝐫-𝐫^') 𝐏( 𝐫^') dS^'
where 𝐆 is the proper electric-electric dyadic Green function.
For any prescribed angular frequency ω, the electromagnetic scattering is
governed by the integral equation
1/ε_0 γ·𝐏 - ℰ(
ω) 𝐏=𝐄^i in Ω.
Two particularly significant auxiliary eigenvalue problems can be defined starting from Eq. <ref>, setting the exciting field to zero, and assigning the material tensor γ.
Quasi Normal Modes <cit.> (QNM) are nontrivial solutions ω and 𝐏 of
ℰ( ω) 𝐏=1/ε_0 γ·𝐏 in Ω.
QNM are often used to characterize micro- and nano- resonators <cit.>, enabling the calculation of synthetic parameters such as the quality factor, the mode volume <cit.>, and the Purcell factor. QNM are also used to expand the response of micro-nanoresonators by <cit.> highlighting the contribution of the individual modes in the overall scattering response. The eigen-frequencies ω are complex numbers, i.e. ω∈ℂ, and (ω, 𝐏) forms a (generalized)
eigenvalue/eigenvector pair.
Material Modes are nontrivial solutions ξ∈ℂ and 𝐏 of
ℰ( ω) 𝐏=ξ 1/ε_0 γ·𝐏 in Ω,
where the frequency ω∈ℂ is prescribed.
ξ and 𝐏 form a (generalized)
eigenvalue/eigenvector pair.
These modes for ω∈ℝ and a uniform and isotropic material (χ(𝐫) = χ, a scalar constant in Ω) have already been investigated in <cit.>, and have been used to expand the electromagnetic response of nano-resonators <cit.>, and also to design the scalar permittivity of a homogeneous object to achieve a prescribed scattering response, such as scattering cancellation or maximization <cit.>.
In this work χ may be non-uniform and/or non-isotropic, and ω may be complex. The characteristic feature of the eigenvalue/eigenvector pair for (<ref>) is to be a homogeneous function of χ, i.e. if χ^'=αχ then
𝐏^' =𝐏; 1/ξ^' =α1/ξ
is an eigenvalue/eigenvector pair for χ^'. Specifically, the eigenvector 𝐏 is a 0-degree homogeneous function, whereas the reciprocal of the eigenvalue ξ is a 1-degree homogeneous function.
Owing to this property, we term these modes Homogeneous Material Modes. Homogeneous Material Modes have been successfully introduced in low-frequency electromagnetism for eddy current tomography <cit.>.
A unique feature of Material Modes and, more generally, of Homogeneous Material Modes, is that, since the eigenvalue ξ and the eigenvector are homogeneous functions of χ, it is possible to tune the electromagnetic system to different resonant modes by scaling the susceptibility. This feature, which we call tunability, opens the door to a systematic design of reconfigurable materials and will be discussed in detail in a subsequent Section.
§ SYNTHESIS OF MODES (SOM)
In this Section, we introduce a theoretical framework enabling the synthesis of the dielectric susceptibility tensor χ = χ( 𝐫, ω) of the object, such that it exhibits the set of resonance modes {(ω_k,ξ_k,𝐏_k) }_k=1… N at prescribed frequencies ω_k. Each individual mode is described by the triplet ( ω_k,ξ_k,𝐏_k). Hereafter, ω_k is referred to as the frequency eigenvalue, ξ_k as the material eigenvalue, and 𝐏_k as the spatial mode. The problem consists in solving, for a proper γ_k ( 𝐫) = γ( 𝐫, ω_k ), the set of equations imposing the modes
ℰ( ω_k ) 𝐏_k=ξ_k 1/ε_0 γ_k ·𝐏_k in Ω, for k=1, …, N.
The synthesis is carried out in two steps. First, we solve the problem at each prescribed angular frequency ω_k, by evaluating γ_k as a solution of (<ref>). Then, we interpolate in the frequency domain the collection of tensors χ_1, …, χ_N, where χ_k = γ_k^-1.
Hereafter, we consider the TE_z scenario where the electromagnetic problem is x_3-invariant and the electric field is transverse to the x_3-axis. This is a 2D case where the tensor χ is of the type χ( 𝐫, ω) =∑_l,m=1^2χ_lm( 𝐫, ω) 𝐞_l 𝐞_m, the electric field is 𝐄( 𝐫, ω) =E_1( 𝐫, ω) 𝐞_1+E_2( 𝐫, ω) 𝐞_2, 𝐫=x_1𝐞_1+x_2𝐞_2, and 𝐞_1 and 𝐞_2 are the unit vectors along the x_1 and x_2 directions, respectively. The elements of the Green function are given in Appendix <ref>.
§.§ Synthesis of Modes at a prescribed angular frequency
Given a prescribed angular frequency ω_k, we distinguish two cases: (i) a single mode is prescribed or (ii) two modes are prescribed. In a 3D setting, one has to include also a third case, when three modes are prescribed. The treatment of this case is nothing but a straightforward extension of the one needed when two modes are prescribed.
Single mode case. Let ( ω_k,ξ_k,𝐏_k) be an individual prescribed resonance mode at frequency ω_k, where ω_j ≠ω_k for j≠ k. The solution of equation (<ref>) can be expressed in explicit form as
γ_k( 𝐫) = ε_0 𝐄_k( 𝐫) /( ξ_k|𝐏_k( 𝐫) | ^2) 𝐏_k^∗( 𝐫) +α_k( 𝐫) 𝐯_k( 𝐫) 𝐩_k^∗( 𝐫),
where ∗ is the complex conjugate operation, 𝐄_k=ℰ( ω_k) 𝐏_k, 𝐩_k( 𝐫) ⊥𝐏_k( 𝐫) for almost every (a.e.) 𝐫∈Ω [Here 𝐚( 𝐫) ⊥𝐛( 𝐫) means that 𝐚^∗( 𝐫) ·𝐛( 𝐫) =0.], 𝐯_k is an arbitrary vector field and α_k is an arbitrary scalar field. The solution γ_k given in equation (<ref>) can be easily verified by plugging it into equation (<ref>).
A possible choice for 𝐩_k is 𝐩_k
=ℛ𝐏_k^∗, being ℛ the 90
^∘ rotation operator in the counterclockwise direction. We notice that
ℛ𝐏_k^∗( 𝐫) =𝐏
_k^∗( 𝐫) ×𝐞_3 where 𝐞_3 is the unit vector along the x_3 direction.
Finally, we highlight that by means of the explicit solution of equation (<ref>) one can easily check whether γ_k is bounded or continuous. Specifically, we have that if 𝐄_k and 𝐏_k are continuous (piecewise continuous) and |𝐄_k|/|𝐏_k| is bounded, then γ_k is continuous (piecewise continuous).
We conclude this Section with a remark about the scalar case.
When 𝐄_k∥𝐏_k, i.e. 𝐏_k( 𝐫) =ε_0χ_k( 𝐫) 𝐄_k( 𝐫) with χ_k a scalar field, Eq. (<ref>) returns a scalar susceptibility tensor (homogeneous material):
γ_k=1/( ξ_k χ_k) ℐ,
where ℐ is the unit dyad. Indeed, Eq. (<ref>) follows from (<ref>) by choosing 𝐩_k( 𝐫) =𝐏_k^∗( 𝐫) ×𝐞_3, 𝐯_k( 𝐫) =𝐄_k^∗×𝐞_3, α_k( 𝐫) =χ_k^∗( 𝐫) /χ_k( 𝐫), and by observing that 𝐮𝐮^∗+( 𝐮^∗×𝐞_3)( 𝐮×𝐞_3) gives the (2D) unit dyad ℐ when 𝐮 is an arbitrary unit vector.
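A minimal numerical sketch of the single-mode recipe (assumed inputs, not part of the original derivation): the mode 𝐏_k and the field it radiates, 𝐄_k=ℰ(ω_k)𝐏_k, are taken as given 2-component complex fields sampled on a grid, the arbitrary term is set to zero, and the construction is verified by checking γ_k·𝐏_k = ε_0 𝐄_k/ξ_k pointwise.

import numpy as np

EPS0 = 8.8541878128e-12

def synthesize_single_mode(P, E, xi, alpha=None, v=None):
    """Pointwise gamma_k = eps0/(xi |P|^2) E P* + alpha v p*, with p = P* x e3.
    P, E: complex arrays of shape (..., 2) with the in-plane field components."""
    P = np.asarray(P, dtype=complex)
    E = np.asarray(E, dtype=complex)
    norm2 = np.sum(np.abs(P)**2, axis=-1)[..., None, None]
    gamma = EPS0/(xi*norm2)*E[..., :, None]*np.conj(P)[..., None, :]
    if alpha is not None and v is not None:
        p = np.stack([np.conj(P)[..., 1], -np.conj(P)[..., 0]], axis=-1)  # P* x e3
        gamma += alpha[..., None, None]*v[..., :, None]*np.conj(p)[..., None, :]
    return gamma

# toy check on random sample points: gamma . P must equal eps0*E/xi
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 2)) + 1j*rng.standard_normal((50, 2))
E = rng.standard_normal((50, 2)) + 1j*rng.standard_normal((50, 2))
g = synthesize_single_mode(P, E, xi=2.0)
print(np.max(np.abs(np.einsum('nij,nj->ni', g, P) - EPS0*E/2.0)))  # ~ machine precision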
Two isofrequential modes.
Let ω_1=ω_2≠ω_j for j>2, and ( ω_1,ξ_1,𝐏_1) and ( ω_2,ξ_2,𝐏_2) be the prescribed resonance modes. Let the solution be expressed as
γ_1 ( 𝐫)= ∑_l,m=1^2Γ_lm( 𝐫) 𝐔_l( 𝐫) 𝐏_m^∗( 𝐫),
where Γ_lm( 𝐫) ∈ℂ and
𝐔_l = ε_0ℰ (ω_1) 𝐏_l/ξ_l, l=1,2.
To find the unknown coefficients Γ_lm, we observe that by imposing Eq. (<ref>) on the two prescribed resonance modes we have:
𝐔_r ( 𝐫)= γ_1 ( 𝐫) ·𝐏_r ( 𝐫) for a.e. 𝐫∈Ω, and r=1,2.
Then, by left multiplying this expression by 𝐔^∗_s( 𝐫), we have
𝐔_s^∗·𝐔_t=∑_l,m=1^2(
𝐔_s^∗·𝐔_l) Γ_lm(
𝐏_m^∗·𝐏_t) in Ω, s,t=1,2,
which, in matrix form, gives
𝐆_U( 𝐫) = 𝐆_U( 𝐫) Γ( 𝐫) 𝐆_P( 𝐫),
where ( G_U) _st=𝐔_s^∗·𝐔_t,
( G_P) _ik=𝐏_i^∗·𝐏_k and
Γ is the matrix made by the unknown coefficients Γ_lm.
When both 𝐆_U and 𝐆_P are invertible at location 𝐫, the solution of (<ref>) exists, is unique and is given by
Γ( 𝐫) = 𝐆_P^-1( 𝐫).
In the remaining cases, i.e. 𝐆_P and/or 𝐆_U non invertible, the solution may not exist or be unique.
It is worth noting that matrices 𝐆_U and 𝐆_P are Gram matrices and, therefore, 𝐆_U=𝐆_U^†, 𝐆
_U≥ 0, 𝐆_P=𝐆_P^† and
𝐆_P≥ 0.
Moreover, the inverse of (<ref>) is (when it exists)
χ=∑_l,m=1^2Γ_ml^D 𝐏_m𝐔_l^∗,
where Γ^D=( 𝐆_U^I 𝐆_P) ^-1.
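A minimal numerical sketch of the two-isofrequential-mode recipe (assumed inputs, illustrative names): at each sample point build the Gram matrix 𝐆_P, take Γ = 𝐆_P^-1, assemble γ_1 = Σ_lm Γ_lm 𝐔_l 𝐏_m^∗, and verify γ_1·𝐏_r = 𝐔_r.

import numpy as np

def synthesize_two_modes(P1, P2, U1, U2):
    """Pointwise gamma = sum_lm Gamma_lm U_l P_m^*, with Gamma = G_P^{-1} where invertible.
    Inputs: complex arrays of shape (N, 2) sampling P_l and U_l = eps0 E(P_l)/xi_l."""
    P = np.stack([P1, P2], axis=1)                      # (N, 2 modes, 2 components)
    U = np.stack([U1, U2], axis=1)
    GP = np.einsum('nic,njc->nij', np.conj(P), P)       # (G_P)_ij = P_i^* . P_j
    Gamma = np.linalg.inv(GP)
    return np.einsum('nlm,nla,nmb->nab', Gamma, U, np.conj(P))

# toy check with random, pointwise independent mode samples
rng = np.random.default_rng(1)
P1, P2, U1, U2 = (rng.standard_normal((40, 2)) + 1j*rng.standard_normal((40, 2))
                  for _ in range(4))
g = synthesize_two_modes(P1, P2, U1, U2)
for Pr, Ur in ((P1, U1), (P2, U2)):
    print(np.max(np.abs(np.einsum('nab,nb->na', g, Pr) - Ur)))  # ~ machine precision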
§.§ Parameterization of the frequency response
Once the inverse of the susceptibility tensor is found at each prescribed angular frequency ω_k, we need to reconstruct the dispersion relation χ( 𝐫,ω), which has to satisfy causality through the Kramers-Kronig conditions and the Hermitian symmetry, namely χ( 𝐫,-ω)=χ^* ( 𝐫,ω). To this purpose, we parameterize the dispersion relation as follows
χ( 𝐫,ω) =∑_m=1^M 𝐚_m( 𝐫) φ_m( ω)
Lorentz-Drude type:
φ_m( ω) =ω_p,m^2/( ω_0,m
^2-ω^2) +jωβ_m,
where causality requires β_m>0.
Tensor fields 𝐚_m can be found by point matching, for instance. Within this approach, we enforce the following constraints ∀ k =1, …, N:
∑_m=1^M𝐚_m (𝐫) Re{φ_m( ω_k)} = Re{γ_k^-1 (𝐫) },
∑_m=1^M𝐚_m (𝐫) Im{φ_m( ω_k)} = Im{γ_k^-1 (𝐫) },
where Re{·} and Im{·} are the real and imaginary parts of their argument, respectively. Moreover, from (<ref>) and (<ref>), it follows that M=2 N is required to have existence and uniqueness of the solution in terms of the unknown tensor fields 𝐚_m.
We remark that the parameters ω_p,m, ω_0,m and β_m depend on the actual realization of the artificial material. For instance, ω_0,m does not need to be equal to the resonant (angular) frequency ω_m prescribed for the Synthesis of the Modes. In the remainder of the paper we select the parameters ω_p,m, ω_0,m and β_m to avoid the appearance of any resonance due to the expansion functions at the resonant frequencies prescribed for the Synthesis of the Modes.
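A minimal sketch of the point-matching step at a single point 𝐫 (assumed array shapes and parameter values, not from the original text): with M = 2N Lorentz-Drude functions, the real/imaginary constraints form a square linear system for the real tensors 𝐚_m, solved component by component.

import numpy as np

def lorentz_drude(omega, w0, wp, beta):
    """phi_m(omega) = wp^2 / ((w0^2 - omega^2) + j*omega*beta)."""
    return wp**2/((w0**2 - omega**2) + 1j*omega*beta)

def fit_dispersion(chi_targets, omegas, w0, wp, beta):
    """Point matching of chi(r, omega) = sum_m a_m(r) phi_m(omega) at one location.
    chi_targets: complex (N, 2, 2) tensors gamma_k^{-1}(r) at the N prescribed omegas.
    w0, wp, beta: length-M arrays of expansion-function parameters, M = 2N.
    Returns the real tensors a_m, shape (M, 2, 2)."""
    N, M = len(omegas), len(w0)
    Phi = np.array([[lorentz_drude(w, w0[m], wp[m], beta[m]) for m in range(M)]
                    for w in omegas])                      # (N, M)
    A = np.vstack([Phi.real, Phi.imag])                    # (2N, M), square when M = 2N
    b = np.vstack([chi_targets.real.reshape(N, 4),
                   chi_targets.imag.reshape(N, 4)])        # (2N, 4) tensor components
    a = np.linalg.solve(A, b)                              # (M, 4)
    return a.reshape(M, 2, 2)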
§ TUNABILITY AND ESSENTIAL MODES
The tunability of the resonance refers to the possibility of changing the properties of a material in a controlled manner. The Synthesis of Modes entails tunability in a natural manner via the material eigenvalues ξ_k.
Indeed, after (<ref>), we have that a material with dielectric susceptibility given by χ / ξ_k, where χ is the result of the synthesis of modes, resonates at the angular frequency ω_k. In other terms, we can control the frequency behaviour of a material (value of the frequency resonances and spatial distribution of the related mode) by simply scaling χ by a proper factor.
From another perspective, the proposed approach to the synthesis of the modes allows one to obtain the resonance frequencies and related spatial modes as a function of an individual parameter: a scaling factor in front of the synthesized χ.
This feature opens the door to a systematic design of reconfigurable materials.
The concept of essential modes refers to the maximum number of modes that can be arbitrarily prescribed at a given angular frequency ω_k. Equation (<ref>) provides the values of the Γ_lm giving the sought inverse of the dielectric susceptibility tensor in (<ref>). This equation sheds light on a special and not obvious physical feature of the modes: two modes are capable of uniquely defining the material property of the scatterer at the prescribed angular frequency. In other words, γ(·,ω_k) is in a one-to-one correspondence with two of its modes at ω_k. From another perspective, only two modes can be assigned in a completely independent manner or, equivalently, all the modes depend upon two arbitrarily selected modes, at a prescribed angular frequency.
We term two arbitrary modes in a one-to-one correspondence with χ(·,ω_k) as essential modes.
It is worth noting that the number of essential modes is two in a 2D problem and three in a 3D problem.
§ APPLICATION OF THE THEORY OF SYNTHESIS OF MODES
In this Section, we show the effectiveness of the resonance synthesis method by means of three application examples. We demonstrate (i) the capability of the method to synthesise several modes, each one having a prescribed polarization density distribution at a prescribed frequency, (ii) the tunability of the resonant response, obtained by a proper scaling of the dielectric susceptibility tensor, and (iii) the concept of essential modes. In the first two examples, the reference geometry is an indefinite cylinder with square L× L cross-section with L=10 cm under TE_z illumination. In the third example the geometry consists of a coated spherical gold nanoparticle.
The numerical model for solving the electromagnetic problem is derived from Ref. <cit.>. The parameters of the Lorentz-Drude expansion functions φ_k, introduced in Eq. (<ref>), are given in Table <ref>. The plot of each individual expansion function is shown in Figure <ref>. The positions of the peaks of the expansion function are uniformly spaced over the bandwidth of interest. We assume ω_p,k=ω_0,k and β_k=0.1ω_0,k. With this latter choice, each expansion function is localized in a neighborhood of its peak position, but does not present a sharp resonance that could hide those arising from the Synthesis of Modes. The amplitude and the shape of the expansion function are briefly discussed in Appendix <ref>.
Synthesis of the modes. In this first application, we prescribe the modes at the three angular frequencies shown in Table <ref>. Specifically, at the angular frequency ω_1 we prescribe two modes: the first one has a polarization density field 𝐏_0, whose shape resembles the number “0" and it is associated with the eigenvalue ξ_A=1; the second mode has a polarization density field 𝐏_1, whose shape resembles the number “1" and it is associated with the eigenvalue ξ_B=2. At the angular frequency ω_2, we prescribe the modes 𝐏_1 and 𝐏_2, where 𝐏_2 has a shape which resembles number 2. Modes 𝐏_1 and 𝐏_2 are associated with eigenvalues ξ_A=1 and ξ_B=2, respectively. Finally, at the angular frequency ω_3 we prescribe modes 𝐏_2 and 𝐏_0, associated with eigenvalues ξ_A=1 and ξ_B=2, respectively. Tables <ref> and <ref> summarize these choices.
The synthesis is carried out in two steps: i) we evaluate γ_i ( 𝐫) at the three prescribed frequencies; ii) we interpolate the corresponding dielectric susceptibility as in Eq. (<ref>), by solving (<ref>) and (<ref>).
In the first step, the theory for the synthesis of two isofrequential modes
is applied at each individual angular frequency using equation (<ref>): (i) for (ω_1, ξ_A, 𝐏_0) and (ω_1, ξ_B, 𝐏_1) at ω_1, (ii) for (ω_2, ξ_A, 𝐏_1) and (ω_2, ξ_B, 𝐏_2) at ω_2 and (iii) for (ω_3, ξ_A, 𝐏_2) and (ω_3, ξ_B, 𝐏_0) at ω_3.
Figures <ref>, <ref>, and <ref> show the real and imaginary part of every element of the relative dielectric permittivity tensor ε_R,k=χ_k+1, at ω_1, ω_2, and ω_3, respectively.
To validate the proposed method, we performed two tests, where the dielectric susceptibility profile is either χ^𝙰 ( 𝐫, ω ) = χ ( 𝐫, ω ) / ξ^𝙰 or χ^𝙱 ( 𝐫, ω ) = χ ( 𝐫, ω ) / ξ^𝙱, where χ ( 𝐫, ω ) is the outcome of the synthesis of modes.
The first test was a direct test and it consisted in i) computing the modes at the three frequencies and in ii) comparing them with the prescribed polarization density field. This test was passed successfully.
As second test, we evaluate the induced polarization density fields at the three frequencies ω_1, ω_2, and ω_3, when the cylinder is excited by a linearly polarized plane wave, propagating along the horizontal axis. These polarization fields are showed in Fig. <ref> (e-c) assuming a susceptibility tensor χ^𝙰(𝐫,ω) and in Fig. <ref> (d-f) for χ^𝙱.
The induced polarization density fields is very close to the prescribed modes. In quantitative terms, Table <ref>, shows the 2-norm of the relative difference between the actual 𝐏 and its projection along the subspaces generated by the prescribed modes, at each specific angular frequency:
ρ_k^i = ‖𝐏_i( ·,ω_k) -Π^i_k𝐏_i( ·, ω_k) ‖/‖𝐏_i( ·, ω_k ) ‖
with k=1,2,3 and i=𝙰,𝙱. In (<ref>), 𝐏_𝙰( ·,ω_k) and 𝐏_B ( ·,ω_k) are the polarization vectors at ω_k and for material 𝙰 and 𝙱, Π^𝙰_k and Π^𝙱_k are the projector into the linear space for the modes at the k-th angular frequency ω_k and for material 𝙰 and 𝙱. The detail about projectors Π^𝙰_ks and Π^𝙱_ks is given in Table <ref>.
We stress that
𝐏_i ( ·, ω_k) is the polarization vector for the
physical system under the prescribed illumination at ω_k.
This example clearly illustrates the concept of tunability of the resonant response: by just uniformly halving the value of the susceptibility distribution (passing from χ^𝙰 to χ^𝙱) the resonance modes in correspondence of the peaks change from the ordered sequence 0, 1, 2 to 1, 2, 0.
Tunability. In this second application we determine the dielectric susceptibility by synthesizing at the frequency ω_1 the degenerate modes 𝐏_∧ and 𝐏_∨, whose polarization density field distributions resemble the characters ∧ and ∨, respectively; and at ω_2 the degenerate modes 𝐏_- and 𝐏_|, whose prescribed field distributions resemble the characters - and |, respectively. To validate the performed synthesis, we excite the infinite cylinder with a plane wave polarized along (𝐞_1+𝐞_2)/√(2). We show the real and imaginary parts of the induced polarization field distributions at ω_1 in Figures <ref>(c), (d), and in Figures <ref>(g), (h) at ω_2. It is immediately apparent that at ω_1 the induced polarization field is a linear combination of the two prescribed degenerate modes 𝐏_∧ and 𝐏_∨, while at ω_2 the induced polarization field is a linear combination of 𝐏_- and 𝐏_|. From the quantitative perspective, the 2-norm relative difference ρ between the actual 𝐏 and its projection onto the subspaces generated by the prescribed degenerate modes is equal to 2.9908 × 10^-2 at ω_1 and 3.5310 × 10^-2 at ω_2. In this case Π_1 projects onto {𝐏_∧, 𝐏_∨}, whereas Π_2 projects onto {𝐏_-, 𝐏_| }.
Essential modes.
This final application case demonstrates a key feature of the Theory of the Synthesis of Modes, i.e. the concept of Essential Modes.
Specifically, given a scatterer operated at a prescribed angular frequency ω_1 and described by the dielectric susceptibility tensor χ(·,ω_1), we compute two resonance modes (ω_1, ξ_A,𝐏_A) and (ω_1, ξ_B,𝐏_B) and, then, we apply our Theory of the Synthesis to these modes. Since the tensor of the dielectric permittivity is in a one-to-one correspondence with two arbitrary modes, as discussed in a previous Section, we expect that the tensor χ_s(·,ω_1) of the dielectric permittivity synthesized by means of (ω_1, ξ_A,𝐏_A) and (ω_1, ξ_B,𝐏_B) via (<ref>) is equal to χ(·,ω_1).
The scatterer of this example consists of a coated (thickness 100 nm) circular (radius 200 nm) gold nanorod operated at f=500 THz (ω_1=π× 10^15 rad/s, free-space wavelength of 600 nm). The relative dielectric permittivity of the gold nanoparticle is 9.44-j 1.51, whereas that of the coating is 4.
Figures <ref> and <ref> show the real and imaginary parts for the selected modes 𝐏_A and 𝐏_B. The synthesized dielectric permittivity tensor is almost equal to that of the prescribed scatterer. As a figure of merit we evaluated the maximum relative error over the scatterer domain Ω:
e=max_𝐫∈Ω||χ(𝐫,ω_1)-χ_s(𝐫,ω_1)||_2/||χ(𝐫,ω_1)||_2,
which, in this case, is equal to 3.3 × 10^-11. In (<ref>) χ is the prescribed tensor of the dielectric susceptibility, whereas χ_s is the tensor of the synthesized dielectric susceptibility.
§ CONCLUSIONS
In this work we introduced a theoretical framework to find the permittivity profile of a dielectric object to synthesize at will its resonant modes. Specifically, we are able to control the spatial distribution of the polarization density field and the resonance frequency of a set of modes. The equations for the synthesis are straightforward and in an explicit form, making them suitable for specific customization. Moreover, we can prescribe the modes at many different frequencies.
The only limit, arising from the underlying physics, consists in the possibility of assigning at most two modes to each individual frequency and eigenvalue (up to three modes in a 3D setting). Indeed, it arises naturally from the theory of the synthesis of modes that, at a prescribed angular frequency, the dielectric susceptibility tensor is in one-to-one correspondence with two of its modes, which we termed essential modes.
We also demonstrated the concept of tunability: the proposed approach enables the design of the permittivity of a dielectric object that not only allows the synthesis at will of its resonant modes, but also allows to changes the resonant modes of the dielectric object in a controlled manner, by multiplying the designed permittivity by a proper multiplicative factor.
With this theoretical framework, future development will be aimed at designing a real-world material approximating the synthesized dielectric susceptibility. Metamaterials are the natural candidates for this purpose.
The method introduced can be transplanted to different linear physical systems, where the constitutive relationship is linear and local, including thermal and mechanical systems.
§ METHODS
All the numerical calculations have been carried out by using the numerical method of <cit.>. All the value of the parameters used for generating numerical results have been included into the article.
§ DATA AVAILABILITY
All the data supporting the conclusions of this study are included in the
article. Source data are provided with this paper.
§ CODE AVAILABILITY
The computer code and algorithm that support the findings of this
study are available from the corresponding author on request.
§ GREEN FUNCTION
The components of the Green function for the TE_z illumination are
G_11( 𝐫) =-ζ_0/4r^3[
krx_2^2H_0( kr) +( x_1^2-x_2^2)
H_1( kr) ]
G_12( 𝐫) =-ζ_0/4r^3x_1
x_2[ 2H_1( kr) -krH_0( kr) ]
G_21( 𝐫) =G_12( 𝐫)
G_22( 𝐫) =-ζ_0/4r^3[
krx_1^2H_0( kr) +( x_2^2-x_1^2)
H_1( kr) ] ,
being ζ_0 the characteristic impedance of vacuum, k=ω/c_0 the
wavenumber, and c_0 the speed of light in vacuum.
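A minimal evaluation sketch of these components (illustrative, not from the original text; H_n is taken here as the Hankel function of the second kind, an assumption consistent with the e^jω t convention used above):

import numpy as np
from scipy.special import hankel2

ZETA0 = 376.730313668   # characteristic impedance of vacuum [ohm]

def green_components(x1, x2, k):
    """G_11, G_12 (= G_21), G_22 of the 2D dyadic Green function above, for r > 0."""
    r = np.hypot(x1, x2)
    H0, H1 = hankel2(0, k*r), hankel2(1, k*r)
    pref = -ZETA0/(4*r**3)
    G11 = pref*(k*r*x2**2*H0 + (x1**2 - x2**2)*H1)
    G22 = pref*(k*r*x1**2*H0 + (x2**2 - x1**2)*H1)
    G12 = pref*x1*x2*(2*H1 - k*r*H0)
    return np.array([[G11, G12], [G12, G22]])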
§ LORENTZ-DRUDE EXPANSION FUNCTION
The (normalized) amplitude of the elementary Lorentz-Drude expansion function is:
| φ (ω) |/( ω_p / ω_0)^2 =1/√([ 1 - ( ω/ω_0)^2]^2 +( ω/ω_0)^2 ( β/ω_0)^2).
Its maximum value is
| φ (ω) |_max/( ω_p / ω_0)^2 =1/β/ω_0√(1 + 3/4( β/ω_0)).
and it is achieved at
ω/ω_0 = √(1+1/2( β/ω_0)^2)
The plot of (<ref>) for different β / ω_0 ratios is shown in Figure <ref>.
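The normalized amplitude above is straightforward to evaluate; a short sketch reproducing the trend of the figure for a few assumed β/ω_0 ratios:

import numpy as np

def normalized_amplitude(w_over_w0, beta_over_w0):
    """|phi(omega)| / (omega_p/omega_0)^2 from the expression above."""
    u = np.asarray(w_over_w0)
    return 1.0/np.sqrt((1 - u**2)**2 + (u*beta_over_w0)**2)

w = np.linspace(0.5, 1.5, 11)
for b in (0.05, 0.1, 0.2):
    print(b, np.round(normalized_amplitude(w, b), 2))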
10
engheta_metamaterials_2006
N. Engheta and R. W. Ziolkowski, Metamaterials: Physics and
Engineering Explorations.
John Wiley & Sons, June 2006.
pendry_controlling_2006
J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling Electromagnetic
Fields,” Science, vol. 312, no. 5781, pp. 1780–1782, 2006.
leonhardt_optical_2006
U. Leonhardt, “Optical Conformal Mapping,” Science, vol. 312,
no. 5781, pp. 1777–1780, 2006.
hughes_adjoint_2018
T. W. Hughes, M. Minkov, I. A. D. Williamson, and S. Fan, “Adjoint Method
and Inverse Design for Nonlinear Nanophotonic Devices,” ACS
Photonics, vol. 5, pp. 4781–4787, Dec. 2018.
Publisher: American Chemical Society.
yao_intelligent_2019
K. Yao, R. Unni, and Y. Zheng, “Intelligent nanophotonics: merging photonics
and artificial intelligence at the nanoscale,” Nanophotonics, vol. 8,
pp. 339–366, Jan. 2019.
lalanne_light_2018
P. Lalanne, W. Yan, K. Vynck, C. Sauvan, and J. . P. Hugonin, “Light
interaction with photonic and plasmonic resonances,” Laser & Photonics
Rev., vol. 12, 2018.
van_bladel_electromagnetic_2007
J. G. Van Bladel, Electromagnetic fields, vol. 19.
John Wiley & Sons, 2007.
kristensen_modes_2013
P. T. Kristensen and S. Hughes, “Modes and mode volumes of leaky optical
cavities and plasmonic nanoresonators,” ACS Photonics, vol. 1, 2013.
muljarov_brillouin-wigner_2010
E. A. Muljarov, W. Langbein, and R. Zimmermann, “Brillouin-Wigner
perturbation theory in open electromagnetic systems,” EPL (Europhysics
Letters), vol. 92, p. 50010, Dec. 2010.
Publisher: IOP Publishing.
lalanne_quasinormal_2019
P. Lalanne, W. Yan, A. Gras, C. Sauvan, J.-P. Hugonin, M. Besbes, G. Demésy,
M. D. Truong, B. Gralak, F. Zolla, A. Nicolet, F. Binkowski, L. Zschiedrich,
S. Burger, J. Zimmerling, R. Remis, P. Urbach, H. T. Liu, and T. Weiss,
“Quasinormal mode solvers for resonators with dispersive materials,” JOSA A, vol. 36, pp. 686–704, Apr. 2019.
kristensen_generalized_2012
P. T. Kristensen, C. V. Vlack, and S. Hughes, “Generalized effective mode
volume for leaky optical cavities,” Optics Letters, vol. 37,
pp. 1649–1651, May 2012.
sauvan_theory_2013
C. Sauvan, J.-P. Hugonin, I. Maksymov, and P. Lalanne, “Theory of the
spontaneous optical emission of nanosize photonic and plasmon resonators,”
Physical Review Letters, vol. 110, no. 23, p. 237401, 2013.
Publisher: APS.
muljarov_exact_2016
E. A. Muljarov and W. Langbein, “Exact mode volume and Purcell factor of
open optical systems,” Physical Review B, vol. 94, p. 235438, Dec.
2016.
Publisher: American Physical Society.
bergman_theory_1980
D. J. Bergman and D. Stroud, “Theory of resonances in the electromagnetic
scattering by macroscopic bodies,” Phys. Rev. B, vol. 22, 1980.
forestiere_material-independent_2016
C. Forestiere and G. Miano, “Material-independent modes for electromagnetic
scattering,” Phys. Rev. B, vol. 94, p. 201406, Nov. 2016.
forestiere_volume_2018
C. Forestiere, G. Miano, G. Rubinacci, A. Tamburrino, R. Tricarico, and
S. Ventre, “Volume Integral Formulation for the Calculation of
Material Independent Modes of Dielectric Scatterers,” IEEE
Transactions on Antennas and Propagation, vol. 66, pp. 2505–2514, May 2018.
pascale_full-wave_2019
M. Pascale, G. Miano, R. Tricarico, and C. Forestiere, “Full-wave
electromagnetic modes and hybridization in nanoparticle dimers,” Scientific Reports, vol. 9, p. 14524, Oct. 2019.
forestiere_nanoparticle_2017
C. Forestiere and G. Miano, “On the nanoparticle resonances in the
full-retarded regime,” Journal of Optics, vol. 19, p. 075601, June
2017.
pascale_spectral_2017
M. Pascale, G. Miano, and C. Forestiere, “Spectral theory of electromagnetic
scattering by a coated sphere,” JOSA B, vol. 34, pp. 1524–1535, July
2017.
forestiere_directional_2019
C. Forestiere, G. Miano, M. Pascale, and R. Tricarico, “Directional scattering
cancellation for an electrically large dielectric sphere,” Optics
Letters, vol. 44, pp. 1972–1975, Apr. 2019.
su_monotonicity_2017
Z. Su, S. Ventre, L. Udpa, and A. Tamburrino, “Monotonicity based imaging
method for time-domain eddy current problems,” Inverse Problems,
vol. 33, p. 125007, Nov. 2017.
tamburrino_monotonicity_2021
A. Tamburrino, G. Piscitelli, and Z. Zhou, “The monotonicity principle for
magnetic induction tomography,” Inverse Problems, vol. 37, p. 095003,
Aug. 2021.
Publisher: IOP Publishing.
Note1
Here 𝐚 ( 𝐫 ) ⊥𝐛 ( 𝐫 ) means that 𝐚^∗ ( 𝐫 ) ·𝐛 ( 𝐫 ) =0.
richmond_te-wave_1966
J. Richmond, “TE-wave scattering by a dielectric cylinder of arbitrary
cross-section shape,” IEEE Transactions on Antennas and Propagation,
vol. 14, pp. 460–464, July 1966.
|
http://arxiv.org/abs/2307.04169v1 | 20230709133123 | Heavy Higgs Searches at the LHC in the light of a Left-Right Symmetric Model | [
"Sanchari Bhattacharyya"
] | hep-ph | [
"hep-ph"
] |
Sanchari Bhattacharyya ([email protected])
University of Calcutta
92 Acharya Prafulla Chandra Road, Kolkata 700009
Heavy Higgs Searches at the LHC in the light of a Left-Right Symmetric Model
============================================================================
We investigate a Left-Right symmetric model respecting the SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R local gauge symmetry. We study the interactions of the heavy neutral and charged scalars of this model along with their production at the hadron collider and their subsequent decays. We analyze the collider searches of two heavy scalars, one of which is charge neutral and the other singly charged. In both cases we consider their associated production at the Large Hadron Collider (LHC) and finally concentrate only on the leptonic final states. We perform both cut-based and multivariate analyses using the Boosted Decision Tree algorithm for the 14 TeV as well as the 27 TeV LHC run with 3000 fb^-1 integrated luminosity. As expected, the multivariate analysis shows a better signal-background discrimination compared to the cut-based analysis. In this article, we show that a charged Higgs of mass 750 GeV and 1.2 TeV can be probed with 2.77 σ (4.58 σ) and 1.38 σ (3.66 σ) significance, respectively, at the 14 (27) TeV run of the LHC.
§ INTRODUCTION
It is well known that Standard Model (SM) of particle physics has been extremely successful in describing the interactions of the elementary particles. The discovery of Higgs boson at Large Hadron Collider (LHC), CERN <cit.> has added another feather in its cap. Despite of being so successful, it is still unable to explain some of the natural phenomena which are already experimentally established, for example the explanation of Dark Matter (DM) or tiny neutrino mass etc. It is also unknown to us that whether the discovered Higgs boson is the only scalar candidate in nature or there are also other scalars with heavier masses which are similarly responsible for Electroweak Symmetry Breaking (EWSB). All of these unexplained facts actually motivate the physicists to look beyond SM.
In the existing literature, there are several studies which actually deal with the phenomenology of extended Higgs sector <cit.>. Many of them have argued that the idea of one Higgs boson is not complete and there may be other representations also which may give rise to other required Higgs bosons having a heavier or lighter mass compared to the SM Higgs bosons. We are hopeful that with the advancement of technologies a detailed study about the properties of SM Higgs boson, for example its decay, branching ratios (BR), its couplings, precision measurements <cit.> will be possible which will make the picture of the scalar sector more clearer. Extended Higgs sector may also have some bearings on this dark matter sector, Higgs mass hierarchy or neutrino mass issues. In some models, singlet scalar has been considered as a suitable DM candidate <cit.>. The presence of charged Higgs may contribute to the radiative masses of neutrino <cit.>. In Left-Right symmetric models (LRSM), people have studied about the mass generation of neutrinos with help of extended triplet or singlet Higgs bosons <cit.>, <cit.>. Additional Higgs bosons can also play a crucial role in dealing with the flavor problems <cit.>. Although the direct searches from the LHC have not confirmed the existence of such a scalar, which actually pushes the exclusion limits on the masses of such scalars to higher and higher scales.
In quest of such a complete theory, we investigate a model which respects the SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R (32121) <cit.> local gauge symmetry. This can be obtained via a two-step symmetry breaking from the E_6 <cit.> Grand Unified group with [ SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R ] as the intermediate step. We shall only be interested in the Left-Right (LR) symmetric gauge group, 32121, and in the phenomenology of its scalar sector. This model contains the fermions from the full 27-plet of E_6, 11 of which are heavy exotic fermions.
Two of these heavy fermions, one being Dirac-like and the other Majorana-like, are suitable Dark Matter (DM) candidates <cit.>. This model gives rise to a two-component DM scenario. One of the DM candidates has a larger interaction rate than the other. The relic particle with the larger interaction rate satisfies the constraints from direct detection experiments when a dimension-6 effective four-fermion interaction is introduced with a new coupling strength. The other DM candidate, with a smaller interaction rate, is able to satisfy the relic density constraints only when the coannihilation channels between the two relic candidates open up. Together they thus present a promising DM scenario, and one can constrain the parameter space using the recent results of direct detection experiments and the relic density measurements from the PLANCK collaboration. A detailed analysis regarding the Dark Matter aspects of this model has been presented in <cit.>.
Apart from the SM gauge bosons, the gauge sector comprises three heavy BSM gauge bosons. The scalars in 32121 arise from the (1, 3, 3̅) representation of SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R. They are heavy, color-singlet scalars. One of them must have properties similar to the SM Higgs boson. Some of the BSM Higgs bosons show interesting signatures at the High Luminosity LHC (HL-LHC). In this article we mainly analyse the properties of some of the heavy Higgs bosons and their signatures at the 14 and 27 TeV high-luminosity runs of the LHC.
In this article we briefly describe the model in section <ref>, where we mainly discuss the particle content of this model with special emphasis on the scalar sector of our interest. We also discuss the properties and production mechanisms of some exotic scalars, including the heavy neutral and singly charged scalars, at the LHC. In section <ref> we perform the signal-background phenomenology of these two BSM Higgs bosons using both a cut-based and a multivariate analysis. We shall see that the signal-background discrimination is much better in the case of the multivariate analysis. Finally, we conclude in section <ref>.
§ DESCRIPTION OF 32121 MODEL
We start with a Left-Right (LR) symmetric gauge group SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R, namely 32121. A two-step symmetry breaking of E_6 can lead to 32121, though we will not be interested in this specific symmetry breaking pattern. This model is rich in particles, which are listed in Table <ref> with their corresponding gauge quantum numbers. In this article, among all the particles, we will mainly study the interactions of some of the scalars which may generate interesting signatures at the hadron collider.
The gauge bosons along with the matter fields present in this model are listed in Table <ref> with their corresponding gauge quantum numbers. The Higgs multiplets present in this table are instrumental in breaking SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R down to SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y and then to SU(3)_C ⊗ U(1)_EM. L and R denote Left and Right respectively. One can calculate the electric charge Q as Q = T_3L + T_3R + Y_L/2 + Y_R/2, where Y_L/2 and Y_R/2 are noted down in the last two columns of Table <ref> respectively.
§.§ Gauge sector
The gauge sector of the 32121 model has two charged gauge bosons and four neutral gauge bosons. In the charged sector, one has been identified with the SM W boson and the other field is the heavy W' boson. In the neutral gauge sector two fields have been identified with the SM Z and the photon. The remaining two fields are denoted Z' and A'. The masses and mixings, along with the interactions in the electroweak gauge sector, are controlled by the four gauge coupling constants g_2L, g_2R, g_1L and g_1R together with the vacuum expectation values (vevs) of the scalar fields. Following the symmetry breaking of SU(2)_R ⊗ U(1)_L ⊗ U(1)_R to U(1)_Y, one obtains
1/g_Y^2 = 1/g_2R^2 + 1/g_1L^2 + 1/g_1R^2
where g_Y denotes the U(1)_Y gauge coupling constant. g_2L is identified with the SU(2)_L gauge coupling constant of the SM, g. We have chosen g_2L = g_2R = g and g_1L = g_1R to keep our Lagrangian Left-Right symmetric. With these choices one can fix the gauge parameters of the 32121 model. On the other hand, lower limits on the vevs of the Higgs fields can be derived from the experimental lower limits on the masses of the heavy gauge bosons. A detailed study of the gauge sector of the 32121 model can be found in <cit.>.
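As a quick numerical illustration of this relation, one can invert Eq. <ref> once g_2L = g_2R = g and g_1L = g_1R are imposed. The short Python sketch below does exactly this; the electroweak inputs g ≈ 0.65 and g_Y ≈ 0.36 are indicative values at the M_Z scale and are not taken from this work.

import math

# Indicative SM electroweak couplings at the M_Z scale (assumed values, not fitted here).
g   = 0.652   # SU(2)_L coupling, identified with g_2L = g_2R
g_Y = 0.357   # U(1)_Y hypercharge coupling

# The relation above: 1/g_Y^2 = 1/g_2R^2 + 1/g_1L^2 + 1/g_1R^2 with g_1L = g_1R = g_1
# implies 2/g_1^2 = 1/g_Y^2 - 1/g^2.
g_1 = math.sqrt(2.0 / (1.0 / g_Y**2 - 1.0 / g**2))
print(round(g_1, 3))   # ~0.60 for the inputs above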
§.§ Fermion sector
As already mentioned, in 32121 model we have 27 fermions. Their chiral components are as follows,
L_L = [ ν_L; e_L ], L_R = [ ν_R; e_R ]
Q_L = [ u_L; d_L ], Q_R = [ u_R; d_R ]
Q_LS = q_SL, Q_RS = q_SR, l_S
L_B = [ N_1 E_1; E_2 N_2 ], L̃_B = [ N_2^c E_2^c; E_1^c N_1^c ]
L_L,R and Q_L,R contain the SM leptons and quarks respectively, along with a right-handed neutrino. The rest of the fields are exotic fermions. Q_LS and Q_RS form a four-component Dirac-like color-triplet quark, whereas N_1, N_2^c and E_1, E_2^c construct the neutral and singly charged Dirac-like leptons N and E respectively. l_S and l_S^c form a Majorana-like neutral fermion L_S.
The interactions between the Higgs fields and the fermions are responsible for the masses of the fermions. The relevant Yukawa Lagrangian is as follows.
ℒ_Y = y_qijQ̅_iLΦ_B Q_jR + ỹ_qijQ̅_iRΦ̃_B Q_jL + y_lijL̅_iLΦ_B L_jR + ỹ_lijL̅_iRΦ̃_B L_jL
+ y_sijQ̅_iLSΦ_S Q_jRS + y_LBij Tr [ L̅_iBL̃_jB] Φ_S^c + y_LSij/Λl̅_iS l_jS^c Φ_S Φ_S
+ y_BBij Tr[ L̅_iBΦ̃_B ] l_jS^c + y_ijBRL̅_iL L_jBΦ_R + y_ijBLL̅_iR L_jB^†Φ̃_L
+ y_ijLRSQ̅_iL Q_jRS^* Φ̃_L + y_ijRLSQ̅_iR Q_jLS^* Φ̃_R + h.c.
where i,j = 1,2,3 are generation indices and the y's are Yukawa coupling constants. Φ_S^* is the complex conjugate of Φ_S, Φ̃_B = σ_2 Φ_B^* σ_2 and L̃_B = σ_2 L_B^* σ_2.
The first line of Eq. <ref> shows the terms generating the masses of the SM fermions. The terms present in the second line of Eq. <ref> are responsible for giving masses to the heavy exotic fermions. It is to be noted that we have written a dimension-5 term for generating the Majorana mass of l_S. The rest of the terms represent the mixings among the exotic and SM fermions. Here we note that only a Dirac-like mass term can be written for the neutrino in our model. In <cit.> the fermion sector of this model is discussed in more detail.
§.§ Scalar sector of 32121
There are several scalar fields in this model. The Higgs fields which are mainly responsible for the symmetry breaking 32121 ⟶ SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y ⟶ SU(3)_C ⊗ U(1)_EM are one Higgs bi-doublet (Φ_B), one left-handed (Φ_L) and one right-handed (Φ_R) weak doublet, and a singlet Higgs boson (Φ_S). Φ_S is an SU(2) singlet but carries U(1) hypercharge. These color-singlet scalars arise from the (1, 3, 3̅) representation of the Trinification gauge group ([ SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R ]). Among these fields, Φ_R is instrumental in breaking the LR symmetry. The alignment of the Higgs fields is as follows.
Φ_B = [ 1/√(2)(k_1 + h_1^0 + i ξ_1^0) h_1^+; h_2^- 1/√(2)(k_2 + h_2^0 + i ξ_2^0) ],
Φ_L = [ h_L^+; 1/√(2)(v_L + h_L^0 + i ξ_L^0) ],
Φ_R = [ 1/√(2)(v_R + h_R^0 + i ξ_R^0); h_R^- ], Φ_S = 1/√(2)(v_S + h_S^0 + i ξ_S^0)
The Higgs potential of the 32121 model, V is composed of two parts, V_1 and V_2. It is given by,
𝒱_1 = - μ_1^2 Tr ( Φ_B^†Φ_B) - μ_3^2 ( Φ_L^†Φ_L + Φ_R^†Φ_R ) - μ_4^2 Φ_S^†Φ_S
+ λ_1 Tr [ (Φ_B^†Φ_B)^2] + λ_3 ( Tr[ Φ_B^†Φ̃_B] Tr[ Φ̃_B^†Φ_B] )
+ α_1 (Φ_S^†Φ_S)^2 + β_1 Tr[ Φ_B^†Φ_B] (Φ_S^†Φ_S) + γ_1 [ (Φ_L^†Φ_L) + (Φ_R^†Φ_R)] (Φ_S^†Φ_S)
+ ρ_1 [ (Φ_L^†Φ_L)^2 + (Φ_R^†Φ_R)^2]
+ ρ_3 [ (Φ_L^†Φ_L) (Φ_R^†Φ_R)] + c_1 Tr[ Φ_B^†Φ_B] [ (Φ_L^†Φ_L) + (Φ_R^†Φ_R)]
+ c_3 [ ( Φ_L^†Φ_B Φ_B^†Φ_L ) + ( Φ_R^†Φ_B^†Φ_B Φ_R ) ] + c_4 [ ( Φ_L^†Φ̃_B Φ̃_B^†Φ_L ) + ( Φ_R^†Φ̃_B^†Φ̃_B Φ_R ) ]
and,
𝒱_2 = μ_BS Tr [ Φ^† _BΦ̃_B] Φ_S^∗ + h.c.
The parameters in 𝒱 are considered to be real. 𝒱 is also LR symmetric and obeys the gauge symmetry of 32121 model. In the above, Φ̃_B ≡σ_2 Φ_B^* σ_2.
Apart from the above symmetries, 𝒱_1 is also symmetric under the global phase transformations like,
Φ_B → e^i θ_B Φ_B; Φ_L → e^i θ_L Φ_L; Φ_R → e^i θ_R Φ_R and Φ_S → e^i θ_S Φ_S.
In contrast, the terms present in 𝒱_2 explicitly break this symmetry. Now, if we choose both k_1 and k_2 to be non-zero, the terms proportional to λ_3 in 𝒱_1 give rise to bilinear terms like h_1^0 h_2^0 and h_1^+ h_2^-, which make 𝒱_1 break the aforementioned global symmetry spontaneously. This causes an extra, undesirable massless Goldstone mode. This issue can be avoided in two ways. One simple option is to choose either k_1 or k_2 to be zero, which makes such bilinear terms (like h_1^0 h_2^0, h_1^+ h_2^-) vanish and renders the potential 𝒱_1 invariant under the global symmetry. Another way is to consider 𝒱_2 in addition to 𝒱_1 as the scalar potential. As 𝒱_2 breaks the global symmetry explicitly, we can get rid of the extra massless mode in this way. In <cit.>, it is discussed in detail that the presence of 𝒱_2 does not affect the masses and the mixings in the scalar sector in a significant way. Hence, we choose k_2 to be zero.
A non-zero value of v_R is necessary to drive the Left-Right symmetry breaking, and v_S also needs to be non-zero as it is responsible for the U(1) symmetry breaking. A non-zero v_L, together with a non-zero v_R, would again spontaneously break the global symmetry mentioned in Eq. <ref> and give rise to an extra unwanted Goldstone mode. In order to avoid such a problem, we choose v_L = 0 <cit.>.
There are 10 real parameters in the scalar potential of this model, λ_1, λ_3, ρ_1, ρ_3, c_1, c_3, c_4, α_1, β_1 and γ_1. We accept only those values of the quartic parameters which make the scalar potential bounded from below and which are allowed by the SM-Higgs signal strengths <cit.>.
Among all the scalar fields, there are five neutral CP-even scalars, h^0, h_2^0, h_L^0, H_R^0 and H_S^0. h^0 has been identified with the SM Higgs. The neutral CP-odd scalar sector contains two physical fields, ξ_2^0 and ξ_L^0. In addition to these scalars, there are two charged Higgs fields, H_1^± and H_L^±. h_2^0 and ξ_2^0 are mass degenerate at the tree level. In a similar fashion, h_L^0 and ξ_L^0 also have the same mass. In this article, we will mainly concentrate on the scalars that belong to the Higgs bi-doublet Φ_B and discuss their properties.
∙ Scalars from Bi-doublet Higgs field:
Apart from the SM-like Higgs, the bi-doublet Higgs field Φ_B comprises some exotic scalar fields, including the neutral CP-even (h_2^0) and CP-odd (ξ_2^0) scalars and a singly charged Higgs H_1^±. At tree level, this scalar (h_2^0) and pseudoscalar (ξ^0 _2) have equal masses. With k_2 = 0,
m_h_2^0^2 = m_ξ_2^0^2 = 1/2[ 4 λ_3 k_1^2 + (c_4 - c_3) v_R^2]
The zero value of k_2 prevents h_2^0 (ξ_2^0) from coupling to a pair of other scalars or gauge bosons, but they can still interact with a pair of SM fermions (see Eq. <ref>). From Eq. <ref> it is evident that the coupling of h_2^0 (ξ_2^0) to the up-quark sector is proportional to the bottom-sector Yukawa coupling and vice versa. This implies that the coupling of h_2^0 (ξ_2^0) to a pair of bottom quarks is proportional to the top Yukawa coupling. To find the limit on the mass of h_2^0 (ξ_2^0) we have produced these heavy scalars in association with a pair of b-quarks, with a further decay to a b-quark pair. ATLAS and CMS have already performed searches for a heavy neutral scalar produced in association with a pair of b-quarks at √(s) = 13 TeV <cit.>. Using this result, we compare the σ× BR obtained in the 32121 model with the rate measured by the ATLAS Collaboration and find a lower limit on m_h_2^0 (m_ξ_2^0): it must be greater than 800 GeV <cit.>.
At the LHC, one of the dominant ways of producing h_2^0 (ξ_2^0) is gluon-gluon fusion. Unlike the SM Higgs case, here a bottom-quark triangle loop mainly controls the production cross-section <cit.>. Another dominant way to produce h_2^0 (ξ_2^0) at the hadron collider is associated Higgs production, as previously discussed. One can produce h_2^0 (ξ_2^0) in association with two bottom quarks. This large production cross-section depends sensitively on the top Yukawa coupling. This in turn motivates us to consider the associated production mechanism when generating the heavy scalars at the collider.
We present the associated production cross-section and decay branching ratios of h_2^0 (ξ_2^0) in Fig. <ref>.
We note that h_2^0 (ξ_2^0) has a dominant decay mode to bb̅ until the decay to H_1^± W^∓ becomes kinematically allowed. In this plot the mass of H_1^± has been set to 750 GeV.
In order to generate such events, we have first implemented our model in <cit.> and then generated such processes using <cit.>. We have also taken care of the QCD K-factor (∼ 1.1) following the ref. <cit.>.
We now come to the singly charged Higgs boson, H_1^±, the other scalar field of our interest. H_1^± has a mass
m_H_1^±^2 = 1/2 (c_4-c_3) (k_1^2 + v_R^2)
It can couple to SM fermions via Yukawa couplings (see Eq. <ref>) and also interacts with the SM W boson and the heavy neutral scalar h_2^0 (ξ_2^0). One dominant process for producing this charged scalar at the LHC is production in association with a top and a bottom quark. Other mechanisms include the Drell-Yan process and vector boson fusion. The ATLAS and CMS collaborations have both searched for a heavy charged Higgs boson at the 13 TeV run, followed by a decay to a top and a bottom quark <cit.>. In our analysis, we have also produced H_1^± in association with a top and a bottom, with a further decay of H_1^± again to a top and a bottom. Comparing the event rates obtained in the 32121 model with the result provided by the ATLAS collaboration for H_1^±, we find m_H_1^± > 720 GeV <cit.>.
While performing our analysis, we have considered producing H_1^± at the collider in association with t b. The leading contribution comes from g g → t̅ b H_1. In Fig. <ref>, we present the production cross-section of H_1^± at centre-of-mass energies of 14 TeV and 27 TeV along with the branching ratios of H_1^± to different final states. We observe that H_1^± mainly decays to a top and a bottom quark until the decay to h_2^0 (ξ_2^0) W^± becomes kinematically allowed. The H^± tb production cross-section varies from 0.15 (1) pb for m_H_1^± = 720 GeV to 0.005 (0.06) pb for m_H_1^± = 1500 GeV at the 14 (27) TeV LHC run.
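For orientation, the two tree-level mass relations quoted above for m_h_2^0 (m_ξ_2^0) and m_H_1^± can be evaluated numerically. The sketch below is purely illustrative: k_1 = 246 GeV is the electroweak vev, while v_R, λ_3 and (c_4 - c_3) are assumed values chosen by hand (with v_R just above the lower bound quoted in the conclusions), not parameters determined by this analysis.

import math

# Illustrative inputs (assumptions): k1 is the electroweak vev; vR is taken just
# above the quoted lower bound on the LR-breaking scale; the quartic combinations
# are arbitrary small numbers used only to exercise the formulas.
k1 = 246.0        # GeV
vR = 15000.0      # GeV
lam3 = 0.1
c4_minus_c3 = 0.006

# m_{h2}^2 = m_{xi2}^2 = (1/2) [ 4*lam3*k1^2 + (c4 - c3)*vR^2 ]
m_h2 = math.sqrt(0.5 * (4.0 * lam3 * k1**2 + c4_minus_c3 * vR**2))
# m_{H1}^2 = (1/2) (c4 - c3) (k1^2 + vR^2)
m_H1 = math.sqrt(0.5 * c4_minus_c3 * (k1**2 + vR**2))

print(round(m_h2), round(m_H1))   # ~829 GeV and ~822 GeV for these inputs

With these (assumed) inputs both states sit just above the lower limits quoted above, illustrating that for a multi-TeV v_R the quartic combination (c_4 - c_3) essentially controls both heavy-scalar masses.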
In the next section, we present the signal-background study of h_2^0 (ξ_2^0) and H_1^± production at the LHC at the 14 and 27 TeV runs with 3000 fb^-1 integrated luminosity.
§ COLLIDER PHENOMENOLOGY
In the previous section we discussed the production mechanisms and subsequent decays of the two scalars, h_2^0 (ξ_2^0) and H_1^±. The heavy neutral and charged scalars have some exotic decay channels. In this section we concentrate on the signal-background analysis of these two scalars at the LHC, taking these exotic decay channels into account.
One of the interesting channels to probe h_2^0 (ξ_2^0) at the LHC is the following (see Fig. <ref>).
p p → h_2^0 (ξ_2^0) b b̅→ (H_1^± W^∓) b b̅→ (t b̅ l^- ν̅_l) b b̅→ b b̅ b b̅ l^+ l^- ν_l ν̅_l
Similarly to look for H_1^± at the hadron collider, one may consider (see Fig. <ref>),
p p → H_1^± t b → (t b̅) t̅ b → (W^- b̅ b) W^+ b b̅→ b b̅ b b̅ l^+ l^- ν_l ν̅_l
We shall now briefly discuss these two channels, with leptonic decays of the W bosons and not-too-large backgrounds, in the context of the HL-LHC at 14 TeV and 27 TeV centre-of-mass energy.
Fig. <ref> shows the leading-order Feynman diagrams which dominate the production of the heavy neutral scalar h_2^0 (ξ_2^0) and the singly charged scalar H_1^± at the LHC.
We denote the production of h_2^0 (ξ_2^0) and H_1^± as signal 1 (S_1) and signal 2 (S_2) respectively.
Both of the signals discussed above have similar final states, with four b-jets, two oppositely charged leptons and missing transverse energy. This specific combination makes these signals distinctive, as the chances of obtaining similar final states from Standard Model processes are quite low.
Among all the background processes, t t̅ + jets production is the most dominant. Other significant backgrounds include b b̅ t t̅ production, h t t̅ production, Z t t̅ production, multijet processes, etc.
Initially we require the transverse momenta of b-tagged jets, light jets and leptons to satisfy p_T_b > 40 GeV, p_T_j > 30 GeV and p_T_l > 10 GeV respectively. We also impose an initial cut on the missing transverse energy, E_T / > 20 GeV.
We perform this analysis for four chosen benchmark points corresponding to four different sets of masses of the scalars and their decay properties.
S_1 depends on the branching ratio of h_2^0 to the H_1^± W^∓ channel, which becomes non-zero only above a certain h_2^0 mass (see Fig. <ref>). S_2 depends on the H_1 → t b branching ratio, which is non-zero throughout the mass range of H_1^± (see Fig. <ref>) but is reduced once the H_1 ⟶ h_2^0 (ξ_2^0) W decay channel opens up. In Table <ref>, the four choices of benchmark points are presented.
For BP1 and BP4, the masses of h_2^0 (ξ_2^0) and H_1^± are such that both signals, S_1 and S_2, are on, as BR(h_2^0 → H_1^± W^∓) and BR(H_1 → t b) are non-zero, whereas for BP2 and BP3 BR(h_2^0 → H_1^± W^∓) is zero, turning Signal 1 off.
Furthermore, in the case of BP1, for S_1 and S_2, h_2^0 (ξ_2^0) and H_1 dominantly decay to H_1^± W^∓ and tb̅ respectively, with the highest BR in the corresponding channels.
For BP2, BR(H_1 → t b) remains the same as in BP1, keeping Signal 2 unchanged.
However, for BP3 the H_1 dominantly decays to h_2^0 W^±, making BR(H_1 → t b) small.
In a similar fashion, for BP4 both signals are on as in BP1, but with reduced branching ratios in the corresponding channels and reduced cross-sections.
It is important to note here that BP2 and BP3 effectively correspond to the production of the charged Higgs (H_1^±) only, whereas BP1 and BP4 involve the production of both scalars.
§.§ Cut-based Analysis
In this section we present the signal-background analysis of h_2^0 (ξ_2^0) and H_1^± production at the LHC using the cut-based approach.
We have implemented our model in <cit.> and generated the signal and background events with <cit.> using the parton distribution functions of <cit.>. To account for showering and hadronization, we have passed the events through <cit.>, already built into MadGraph, and used <cit.> for detector simulation. We demand at least three b-tagged jets and at least one charged lepton (e or μ) in the final state. Such a choice effectively suppresses the multijet background, which we therefore ignore.
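A minimal sketch of this baseline selection (at least three b-tagged jets, at least one charged lepton, and the kinematic thresholds quoted earlier) is given below; the flat event dictionary used here is a simplified stand-in for the actual Delphes output and is not the analysis code of this work.

def preselect(event):
    # event: dict of object transverse momenta (GeV) and missing ET, e.g. from Delphes.
    bjets = sorted((pt for pt in event["bjet_pt"] if pt > 40.0), reverse=True)
    ljets = [pt for pt in event["ljet_pt"] if pt > 30.0]
    leptons = [pt for pt in event["lep_pt"] if pt > 10.0]
    passed = len(bjets) >= 3 and len(leptons) >= 1 and event["met"] > 20.0
    lead_b_pt = bjets[0] if bjets else 0.0
    ht = sum(bjets) + sum(ljets)          # scalar sum of visible-jet pT, used below
    return passed, lead_b_pt, ht

# Toy event with invented numbers:
evt = {"bjet_pt": [260.0, 110.0, 65.0], "ljet_pt": [48.0], "lep_pt": [35.0, 12.0], "met": 85.0}
print(preselect(evt))   # (True, 260.0, 483.0)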
With these requirements, we plot the distributions of two important variables, the transverse momentum of the leading b-tagged jet, p_T^b, and the scalar sum of the p_T of all the visible jets, H_T, at the 14 and 27 TeV HL-LHC runs with 3000 fb^-1 integrated luminosity. In Figs. <ref> and <ref>, we present such distributions for BP1 only.
In Figs. <ref> and <ref>, the distributions of the p_T of the leading b-tagged jet and of H_T are presented for each signal and background process for BP1 at 14 and 27 TeV centre-of-mass energy respectively, with an integrated luminosity of 3000 fb^-1. The processes corresponding to the different color codes are mentioned inside the plots. It is clear from the plots that appropriate cuts on the p_T of the leading b-tagged jet and on H_T can effectively reduce the background events relative to the signal events. We note that for the other benchmark points the distributions of the signals are not significantly different from what is shown for BP1. We have optimized the cuts in such a way that the significance of the signal does not vary significantly across the benchmark points.
∙ Event Selection
As already mentioned, we keep only those events with at least three b-tagged jets in the final state. Using the information from the distribution plots in Figs. <ref> and <ref>, we apply and optimize cuts on the variables we have considered, i.e., p_T^b and H_T, so that we reduce as many background events as possible while retaining the signal events. In other words, we apply cuts on these variables such that we obtain the maximum significance 𝒮, where 𝒮 is given by
𝒮 = √(2[(S+B) log ((S+B)/B)-S])
S and B stand for the number of signal and background events respectively.
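Eq. <ref> is the standard asymptotic expression for the median significance, and it is straightforward to code up. The sketch below also includes a toy cut scan in the spirit of the optimization described next; the event yields are invented for illustration and are not the yields of this analysis.

import math

def significance(S, B):
    # sqrt( 2 [ (S+B) log((S+B)/B) - S ] )
    return math.sqrt(2.0 * ((S + B) * math.log((S + B) / B) - S))

# Toy cut scan with invented yields: cut value -> (signal events, background events).
yields = {200.0: (40.0, 4000.0), 240.0: (30.0, 1800.0), 280.0: (20.0, 1100.0)}
best_cut = max(yields, key=lambda c: significance(*yields[c]))
print(best_cut, round(significance(*yields[best_cut]), 2))   # 240.0 0.7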
In Table <ref>, we show the optimized cut flow providing the maximum significance (see Eq. <ref>) for all of the benchmark points at the 14 TeV HL-LHC run. Here we select only those events with a leading b-jet transverse momentum p_T^b > 240 GeV and H_T > 990 GeV.
As both signals have similar final states, for BP1 and BP4 S is effectively the sum of the two signal event numbers, S = S_1 + S_2. In Table <ref>, we present the case of the 27 TeV LHC run with 3000 fb^-1 integrated luminosity. Here we find that we obtain the maximum significance when we select only those events that pass the criteria p_T^b > 230 GeV and H_T > 680 GeV.
From Tables <ref> and <ref>, we observe that the significance for the 14 TeV HL-LHC run is rather small and improves considerably for the 27 TeV HL-LHC run. BP1 provides a significance 𝒮 of 0.7 for the 14 TeV run, which increases to 5.1 for the 27 TeV run at the HL-LHC (Table <ref>). However, the results obtained using the cut-based approach are not very encouraging. This motivates us to explore multivariate analysis, which we discuss in the following.
§.§ Multivariate Analysis
In this section, we concentrate on the results obtained using the Boosted Decision Tree (BDT) algorithm. This part of the analysis has been performed in the TMVA framework <cit.>.
Decision trees are classifiers that separate signal-like from background-like events. A suitable variable is chosen and a proper cut on this variable separates the signal from the background as well as possible. One can choose a number of variables and train on the signal and background sample events. Modification of the weights corresponding to the sample events creates new boosted decision trees. After training and testing on the signal and background events, this method outperforms the generic cut-based analysis by providing a much better discrimination between signal and background events.
To perform the BDT analysis we have considered 11 variables, listed below, which provide the best possible signal significance.
* The transverse momentum of leading b-tagged jet, p_T^b1.
* The missing transverse energy, E_T /.
* The transverse momentum of leading lepton, p_T^l1.
* Δη^bibj between the three leading b-tagged jets.
* Δϕ^bibj between the three leading b-tagged jets.
* Δη^l1l2 between the leading and sub-leading lepton.
* Δϕ^l1l2 between the leading and sub-leading lepton.
Table <ref> shows the ranks of the above variables according to their relevance for both the 14 and 27 TeV signal-background studies. The rank of a variable is determined by how many times it is used to split decision tree nodes. In both the 14 and 27 TeV cases, E_T / is the most important variable. In our study, the important parameters of the BDT analysis have been set as follows: the number of trees is 850, the maximum depth is 3, and the boost type is AdaBoost.
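The training itself is done with TMVA; for orientation, a rough scikit-learn analogue of the quoted configuration (850 trees of depth 3 with AdaBoost, half of the sample for training and half for testing) is sketched below. The arrays X and y are placeholders for the 11 input variables and the signal/background labels, and the correspondence with the TMVA booster is only approximate.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: rows = events, columns = the 11 BDT input variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 11))
y = rng.integers(0, 2, size=5000)            # 1 = signal-like, 0 = background-like

# Approximate analogue of the TMVA setup: 850 boosted trees, maximum depth 3, AdaBoost.
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=850)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
bdt.fit(X_tr, y_tr)
response = bdt.decision_function(X_te)       # BDT response used to separate S from B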
The normalized distributions of the above variables are shown in Fig. <ref>. The blue-shaded (red-dashed) distributions are for the signal (background). It is to be mentioned that, while doing this analysis, all four backgrounds have been taken into consideration, despite the fact that tt̅ + jets production is the most dominant one.
The linear correlation matrix for the variables of our choice is shown in Fig. <ref> for benchmark point BP1 only. The correlations between any two variables are presented in % in this figure. One can see that in most cases the variables are not significantly correlated.
The signal and background events have been trained for each of the four benchmark points. Partial overtraining is quite possible for a boosted decision tree algorithm and must be avoided. It can be tested by comparing the performance on the training and testing samples. We have ensured that the effect of overtraining of signal and background is minimal in our case via the Kolmogorov-Smirnov (KS) test. In general, the KS score should be ∼ 0.1; it may be greater than 0.01 provided this value remains stable under changes in the statistics of the signal and background events. In Fig. <ref>, one can see that the KS probability is ∼ 0.187 (0.428) and ∼ 0.195 (0.184) for signal (background) for BP1 at the 14 TeV and 27 TeV HL-LHC runs.
Half of the signal and background events have been used for training and the other half of the same sample is used for testing. After successful training and testing of the signal and background samples, the BDT algorithm improves the results for both the 14 and 27 TeV HL-LHC runs compared to the cut-based analysis. The TMVA response of the classification shows a good discrimination between signal and background, which is presented in Fig. <ref> for benchmark point BP1 at both the 14 and 27 TeV HL-LHC runs.
The significance, approximately measured using the expression in Eq. <ref>, improves significantly compared to the cut-based scenario, as shown in Table <ref> for both the 14 and 27 TeV runs of the LHC for all of the benchmark points. In Fig. <ref> the signal efficiency, background efficiency and signal significance are presented for the two benchmark points BP1 and BP2 at the 14 and 27 TeV HL-LHC runs, where the results for BP2 solely correspond to probing a charged Higgs at the hadron collider.
The significances obtained from the BDT analysis for each case are given below in the form of a table (see Table <ref>). The significance obtained for BP1 is ∼ 3.87, which, as expected, is much better than that achieved in the cut-based analysis. Similar improvements are observed for the other benchmark points BP2, BP3 and BP4 at the 14 TeV run. The results obtained for the 27 TeV HL-LHC run are even more encouraging. From the results for BP2 and BP3, in the 32121 model one can hope to probe a charged Higgs of mass 750 GeV and 1.2 TeV with 2.77 σ (4.58 σ) and 1.38 σ (3.66 σ) significance respectively at the 14 (27) TeV HL-LHC run (see Figs. <ref> (b), <ref> (d) for BP2).
§ CONCLUSIONS
We start with an E_6 GUT inspired gauge theory SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R, namely 32121. This gauge group can arise after a two-step symmetry breaking of E_6. We have mainly concentrated on the Left-Right symmetry breaking from SU(3)_C ⊗ SU(2)_L ⊗ U(1)_L ⊗ SU(2)_R ⊗ U(1)_R down to SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y. The fermions in this model belong to the full 27-plet of E_6. The Higgs bosons of this model arise from the (1, 3, 3̅) representation of SU(3)^3. The vevs (k_1 = 246 GeV, v_R > 14.7 TeV, v_S > 12.61 TeV) of the scalar fields have been constrained from the masses of the W, W' and A' gauge bosons respectively.
The gauge sector of the 32121 model contains five gauge couplings whose values have been fixed following the pattern of Left-Right symmetry breaking. Apart from the SM gauge bosons this model contains the W', Z' and A' gauge bosons, where A' is the hallmark of the extra U(1) gauge symmetry. In the fermionic sector, among all the fermions of the 27-plet of E_6, two color-singlet, charge-neutral fermions are suitable DM candidates. The scalar sector of the 32121 model contains a number of Higgs bosons, one of which is the SM-like Higgs. In this article we have mainly focused on two exotic heavy Higgs bosons: a charge-neutral CP-even Higgs field, h_2^0, together with its CP-odd partner ξ_2^0 (which have similar masses and couplings), and a singly charged Higgs, H_1^±. Both h_2^0 (ξ_2^0) and H_1^± arise from the Higgs bi-doublet Φ_B. h_2^0 (ξ_2^0) dominantly decays to bb̅ until the decay channel to H_1^± W^∓ becomes kinematically accessible, and H_1^+ dominantly decays to tb̅ until the decay channel to h_2^0 (ξ_2^0) W^+ becomes kinematically allowed. We have used this information on the exotic decay channels while discussing the signatures of these scalars at the LHC. For h_2^0 (ξ_2^0) we have chosen its dominant production mechanism, which in our case is associated Higgs production. The production cross-section of h_2^0 (ξ_2^0) in association with bb̅ is 0.3 (3) pb at the 14 (27) TeV LHC run for a 1 TeV mass, whereas H_1^± has been produced in association with tb, with a cross-section of 0.04 (0.35) pb at the 14 (27) TeV LHC run for a 1 TeV scalar mass.
We have then performed a detailed signal-background analysis of these two heavy Higgs bosons, h_2^0 (ξ_2^0) and H_1^±. The associated production of both gives rise to similar final states, with three or more b-tagged jets, more than one charged lepton and missing transverse energy. The dominant background arises from tt̅ production with jets; other backgrounds arise from bb̅tt̅, htt̅ and Ztt̅ production. Depending on the masses and decay properties of the heavy neutral and charged scalars in our model, we chose four benchmark points (BP) for our analysis. We first presented our results using a cut-based analysis for the four benchmark points, applying a series of cuts on suitable variables such as the transverse momentum (p_T) of the leading b-tagged jet and the scalar sum of the p_T of all jets (H_T). For BP1, the signal significance at 14 (27) TeV is 0.7 (5.1), whereas for the other benchmark points it is somewhat lower, except for BP4 at the 14 TeV run (𝒮∼ 1.1). In order to distinguish signal events from background-like events more accurately we have used the more powerful multivariate approach, choosing the BDT method. With this method, as expected, a better significance is achieved for all of the benchmark points. For BP1, the significance at 14 (27) TeV is 3.87 (10.45), which clearly shows a better signal-background discrimination. With the results we obtained, one can hope to probe a heavy charged Higgs of mass 750 GeV in the 32121 model with 2.77 σ (4.58 σ) significance at the 14 (27) TeV LHC run with 3000 fb^-1 integrated luminosity.
Acknowledgement: SB acknowledges financial support from DST, Ministry of Science and Technology, Government of India in the form of an INSPIRE-Senior Research Fellowship. SB acknowledges Prof. Anindya Datta for his valuable suggestions throughout the analysis. SB also acknowledges Gourab Saha and Nivedita Ghosh for their help in dealing with some technical issues. SB is thankful to Prof. Partha Konar for the insightful discussions.
widestlabel
higgs_atlas ATLAS collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, https://doi.org/10.1016/j.physletb.2012.08.020Phys. Lett. B716 (2012) 1-29, arXiv: [https://arxiv.org/pdf/1207.7214.pdf1207.7214].
higgs_cms CMS collaboration, S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, https://doi.org/ 10.1016/j.physletb.2012.08.021Phys. Lett. B 716 (2012) 30, arXiv: [https://arxiv.org/pdf/1207.7235.pdf1207.7235].
Higgs-review M. Mühlleitner, M. O. P. Sampaio, R. Santos and J. wittbrodt, Phenomenological comparison of models with extended Higgs sectors, https://doi.org/10.1007/JHEP08(2017)132JHEP08 (2017) 132, arXiv: [https://arxiv.org/pdf/1703.07750.pdf1703.07750];
J. Steggemann, Extended Scalar Sectors, https://doi.org/10.1146/annurev-nucl-032620-043846Annu. Rev. Nucl. Part. Sci. 2020. 70:197–223 and references therein.
higgs-precision J. Alison et al., Higgs boson potential at colliders: status and perspectives, https://doi.org/10.1016/j.revip.2020.100045Review in Physics (2020) 100045, arXiv: [https://arxiv.org/pdf/1910.00012.pdf1910.00012];
G. Heinrich, Collider Physics at the Precision Frontier, https://doi.org/10.1016/j.physrep.2021.03.006Physics Reports, Volume 922, 2021, Pages 1-69, arXiv: [https://arxiv.org/pdf/2009.00516.pdf2009.00516].
singlet-DM C. E. Yaguna, The singlet scalar as FIMP dark matter, https://doi.org/10.1007/JHEP08(2011)060 JHEP08(2011)060, arXiv: [https://arxiv.org/pdf/1105.1654.pdf1105.1654];
R. Campbell, S. Godfrey, H. E. Logan and A. Poulin, Real singlet scalar dark matter extension of the Georgi-Machacek model, [https://doi.org/10.1103/PhysRevD.95.016005Phys. Rev. D 95, 016005];
The GAMBIT Collaboration, Status of the scalar singlet dark matter model, [https://doi.org/10.1140/epjc/s10052-017-5113-1Eur. Phys. J. C (2017) 77:568];
P. Das, M. K. Das and N. Khan, A new feasible dark matter region in the singlet scalar scotogenic model, [https://doi.org/10.1016/j.nuclphysb.2021.115307Nuclear Physics B, Vol. 964, 115307];
numass E. Ma and O. Popov, Pathways to naturally small Dirac neutrino masses, https://doi.org/10.1016/j.physletb.2016.11.027Phys. Lett. B.2016.11.027, arXiv: [https://arxiv.org/pdf/1609.02538.pdf1609.02538].
triplet-neutrinomass R. N. Mohapatra and P. B. Pal, Massive neutrinos in physics and astrophysics, https://doi.org/10.1142/5024World Sci. Lect. Notes Phys.72, 1 (2004);
N. G. Deshpande, J. F. Gunion, B. Kayser, and F. Olness, Left-right-symmetric electroweak models with triplet Higgs field, https://doi.org/10.1103/PhysRevD.44.837Phys. Rev. D 44, 837;
E. Ma and U. Sarkar, Neutrino Masses and Leptogenesis with Heavy Higgs Triplets, https://doi.org/10.1103/PhysRevLett.80.5716Phys. Rev. Lett. 80 (1998) 5716-5719, arXiv: [https://arxiv.org/pdf/hep-ph/9802445.pdfhep-ph/9802445].
Nu_mass1 C. Hati, S. Patra, P. Pritimita and U. Sarkar, Neutrino Masses and Leptogenesis in Left-Right Symmetric Models: A Review From a Model Building Perspective, https://doi.org/10.3389/fphy.2018.00019Front. Phys., 06 March 2018.
2hdm A. Vicente, Higgs Lepton Flavor Violating Decays in Two Higgs Doublet Models, https://doi.org/10.3389/fphy.2019.00174Front. Phys., fphy.2019.00174;
D. Das, P M. Ferreira, A. P. Morais, I. Padilla-Gay, R. Pasechnik and J. P. Rodrigues, A three Higgs doublet model with symmetry-suppressed flavour changing neutral currents, arXiv: [https://arxiv.org/pdf/2106.06425.pdf2106.06425];
S. Iguro, Y. Muramatsu, Y. Omura and Y. Shigekami, Flavor physics in the multi-Higgs doublet models induced by the left-right symmetry, https://doi.org/10.1007/JHEP11(2018)046JHEP11 (2018) 046, arXiv: [https://arxiv.org/pdf/1804.07478.pdf1804.07478].
32121 S. Bhattacharyya and A. Datta, Phenomenology of an E_6 inspired extension of Standard Model: Higgs sector, https://doi.org/10.1103/PhysRevD.105.075021Phys. Rev. D 105, 075021, arXiv: [https://arxiv.org/pdf/2109.08524.pdf2109.08524].
E6 Y. Achiman and B. Stech, Quark-Lepton Symmetry and mass scales in an E6 unified gauge model, https://doi.org/10.1016/0370-2693(78)90584-1Physics Letters B, 77(4-5), 389-393;
Q. Shafi, E6 as a unifying gauge symmetry, https://doi.org/10.1016/0370-2693(78)90248-4Physics Letters B, 79(3), 301-303;
F. Gursey, P. Ramond and P. Sikivie, A universal gauge theory model based on E6, https://doi.org/10.1016/0370-2693(76)90417-2Physics Letters B, 60(2), 177-180;
R. Barbieri, D. V. Nanopoulos and A. Masiero, Hierarchial fermion masses in E6, https://doi.org/10.1016/0370-2693(81)90589-XPhysics Letters B, 104(3), 194-198;
G. Dvali and Q. Shafi, On proton stability and the gauge hierarchy problem, https://doi.org/10.1016/s0370-2693(97)00395Physics Letters B, 403(1-2), 65-69.
32121DM S. Bhattacharyya and A. Datta, Dark Matter perspective of Left-Right symmetric gauge model, https://doi.org/10.1016/j.nuclphysb.2023.116197Nucl. Phys. B, 991, 116197 (2023), arXiv: [https://arxiv.org/pdf/2206.13105.pdf2206.13105].
Atlas_h2 ATLAS Collaboration, Search for heavy neutral Higgs bosons produced in association with b-quarks and decaying to b-quarks at √(s) = 13 TeV with the ATLAS detector, https://doi.org/10.1103/PhysRevD.102.032004Phys. Rev. D 102, 032004 (2020), arXiv: [https://arxiv.org/pdf/1907.02749.pdf1907.02749].
Cms_h2 CMS Collaboration, Search for beyond the standard model Higgs bosons decaying into a bb̅ pair in pp collisions at √(s) = 13 TeV, https://doi.org/10.1007/JHEP08(2018)113JHEP 08 (2018) 113, arXiv: [https://arxiv.org/pdf/1805.12191.pdf1805.12191].
feynrules A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, FeynRules 2.0 - A complete toolbox for tree-level phenomenology, https://doi.org/10.1016/j.cpc.2014.04.012Comput.Phys.Commun. 185 (2014) 2250-2300, arXiv: [https://arxiv.org/pdf/1310.1921.pdf1310.1921].
madgraph J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer et al., The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, https://doi.org/10.1007/JHEP07(2014)079JHEP07 (2014) 079, arXiv: [https://arxiv.org/pdf/1405.0301.pdf1405.0301].
h2-qcd-k-factor S. Dawson, C. B. Jackson, L. Reina and D. Wackeroth, Higgs Production in Association With Bottom Quarks at Hadron Colliders, https://doi.org/10.1142/S0217732306019256Mod.Phys.Lett. A21 (2006) 89-110, arXiv: [https://arxiv.org/pdf/hep-ph/0508293.pdfhep-ph/0508293].
b_running A. V. Bednyakov, B. A. Kniehl, A. F. Pikelner and O. L. Veretin, On the b-quark running mass in QCD and the SM, https://doi.org/10.1016/j.nuclphysb.2017.01.004Nucl.Phys. B916 (2017) 463-483, arXiv: [https://arxiv.org/pdf/1612.00660.pdf1612.00660].
AtlasCharged1 ATLAS Collaboration, Search for charged Higgs bosons decaying into a top quark and a bottom quark at √(s) = 13 TeV with the ATLAS detector, https://doi.org/10.1007/JHEP06(2021)145JHEP 06 (2021) 145, arXiv: [https://arxiv.org/pdf/2102.10076.pdf2102.10076].
AtlasCharged2 ATLAS Collaboration, Search for charged Higgs bosons decaying into top and bottom quarks at √(s) = 13 TeV with the ATLAS detector, https://doi.org/10.1007/JHEP11(2018)085JHEP 11 (2018) 085, arXiv: [https://arxiv.org/pdf/1808.03599.pdf1808.03599].
CmsCharged CMS Collaboration, Search for charged Higgs bosons decaying into a top and a bottom quark in the all-jet final state of pp collisions at √(s) = 13 TeV, https://doi.org/10.1007/JHEP07(2020)126JHEP 07 (2020) 126, arXiv: [https://arxiv.org/pdf/2001.07763.pdf2001.07763].
parton-dist R. D. Ball et al., Parton Distributions with LHC data, https://doi.org/10.1016/j.nuclphysb.2012.10.003Nucl. Phys. B867, 244 (2013), arXiv: [https://arxiv.org/pdf/1207.1303.pdf1207.1303].
pythia8 T. Sjostrand, S. Mrenna and P. Z. Skands, PYTHIA 6.4 Physics and Manual, https://doi.org/10.1088/1126-6708/2006/05/026JHEP 0605:026,2006, arXiv: [https://arxiv.org/pdf/hep-ph/0603175.pdfhep-ph/0603175].
delphes DELPHES 3 Collaboration, J. de Favereau et al., A modular framework for fastsimulation of a generic collider experiment, https://doi.org/10.1007/JHEP02(2014)057J. High Energ. Phys. 2014, 57 (2014),
arXiv: [https://arxiv.org/pdf/1307.6346.pdf1307.6346].
tmva A. Hoecker et al., TMVA - Toolkit for Multivariate Data Analysis, arXiv: https://doi.org/10.48550/arXiv.physics/0703039physics/0703039.
|
http://arxiv.org/abs/2307.03987v1 | 20230708142557 | A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation | [
"Neeraj Varshney",
"Wenlin Yao",
"Hongming Zhang",
"Jianshu Chen",
"Dong Yu"
] | cs.CL | [
"cs.CL"
] |
PCG-based Static Underground Garage Scenario Generation
Wenjin Li, Kai Li
Wenjin Li, Kai Li are with the Department of Computer Science and Technology, Southern University of Science and Technology, Shenzhen, 518055, China
August 12, 2023
============================================================================================================================================================================
Recently developed large language models have achieved remarkable success in generating fluent and coherent text. However, these models often tend to `hallucinate' which critically hampers their reliability.
In this work, we address this crucial problem and propose an approach that actively detects and mitigates hallucinations during the generation process.
Specifically,
we first identify the candidates of potential hallucination leveraging the model's logit output values, check their correctness through a validation procedure, mitigate the detected hallucinations, and then continue
with the generation process.
Through extensive experiments with the `article generation task', we first demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, the detection technique achieves a recall of ∼88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.
Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average.
In summary, our work contributes to improving the reliability and trustworthiness of large language models, a crucial step en route to enabling their widespread adoption in real-world applications.
§ INTRODUCTION
Recently developed large language models such as GPT-3 <cit.>, InstructGPT <cit.>, PaLM <cit.>, LLaMA <cit.>, and several others <cit.>
have achieved remarkable performance on a wide range of language understanding tasks.
Furthermore, they have been shown to possess an impressive ability to generate fluent and coherent text.
Despite all these abilities, their tendency to `hallucinate' critically hampers their reliability and limits their widespread adoption in real-world applications.
Hallucination in the context of language refers to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input <cit.>.
These hallucinations can lead to serious consequences such as spreading of misinformation and violation of privacy.
Thus, in this work, we focus on the crucial problem of `addressing' hallucinations of the large language models.
We propose to actively `detect' and `mitigate' hallucinations during the generation process.
This is crucial as we show that a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the input.
Thus, actively detecting and mitigating hallucinations is also important to prevent the propagation of hallucinations in the subsequently generated sentences. We divide our approach into two stages, Detection and Mitigation.
In the hallucination detection stage, we first identify the candidates of potential hallucination, i.e., the key `concepts' of the generated sentence.
Next, leveraging the logit output values of the model, we calculate model's `uncertainty' on the identified concepts.
We demonstrate that this uncertainty provides a signal for hallucination.
However, we note that this is an additional signal and not a necessary requirement for our approach.
Then, we check the correctness of the
`uncertain' concepts through a validation procedure where we:
(a) create a query that tests the correctness of the information pertaining to the concept,
(b) retrieve knowledge relevant to the validation question, (c) answer the validation question leveraging the retrieved knowledge, and verify the corresponding information in the generated sentence to detect hallucinations.
This is followed by the hallucination mitigation stage in which we
`repair' the potentially hallucinated sentence using the retrieved knowledge as evidence.
Figure <ref> illustrates the key steps of our approach.
Furthermore, we conduct a systematic and wide-ranging study exploring multiple techniques to achieve the objective of each of these steps.
We design an experimental setup where
we prompt the model to write about topics from diverse domains such as sports, politics, music, literature, etc.
Then, we annotate the correctness of the first five generated sentences for each topic.
We first demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, the detection technique achieves a recall of ∼88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.
Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average (Figure <ref>).
We conduct a thorough analysis that further
results in several interesting and important findings.
Lastly, we release our code and correctness annotations, which will also facilitate systematic future research on addressing hallucinations.
§ APPROACH
§.§ Overview
We propose to actively detect hallucinations and mitigate them during the generation process.
This is crucial as we show that
a generated sentence is hallucinated more often
when the model has already hallucinated in its previously generated sentences for the input (Section <ref>).
Similarly, a generated sentence is relatively less often hallucinated when the model has not hallucinated in its previously generated sentences.
Thus, actively detecting hallucinations and mitigating them is also important to prevent the propagation of further hallucinations in subsequently generated sentences.
To this end, we iteratively generate sentences through the model and actively detect and mitigate hallucinations.
Figure <ref> illustrates the key steps of our approach.
In section <ref>, we detail the steps of our hallucination detection approach, i.e., identifying the important `concepts' of the generated sentence, i.e., the candidates of potential hallucination (<ref>), calculating model's uncertainty on the concepts using the logit output values (<ref>), and checking the correctness by creating validation query (<ref>), finding relevant knowledge (<ref>), and verifying information leveraging the retrieved knowledge (<ref>).
We describe various techniques to achieve the objective of each of these steps and also elaborate on several important points such as
using a `self-inquiry' method to answer validation questions without using an external knowledge source and trade-off between executing the validation procedure in parallel for all the concepts and in sequential order based on their `uncertainty'.
For each step, we also indicate the most preferred technique with (*) and provide our justification.
In section <ref>, we detail our hallucination mitigation approach.
Specifically, we `repair' the hallucinated sentence by removing or substituting the hallucinated information leveraging the retrieved knowledge as evidence and also utilize the retrieved knowledge as context (prepended to the input) to generate the next sentence.
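A high-level sketch of this generate-detect-mitigate loop is given below. All helper functions (generate_next_sentence, identify_concepts, probability_score, validate, repair) are placeholders for the procedures described in the following subsections, and the probability threshold is an assumed hyperparameter rather than a value prescribed here.

def active_generation(prompt, generate_next_sentence, identify_concepts,
                      probability_score, validate, repair,
                      num_sentences=5, threshold=0.5):
    # Sketch: generate one sentence at a time, validate the uncertain concepts,
    # repair the sentence if a hallucination is detected, and reuse the
    # retrieved knowledge as context for the subsequent sentences.
    context, output = prompt, []
    for _ in range(num_sentences):
        sentence, token_probs = generate_next_sentence(context)
        concepts = sorted(identify_concepts(sentence),
                          key=lambda c: probability_score(c, token_probs))
        for concept in concepts:
            if probability_score(concept, token_probs) >= threshold:
                continue                                         # model is confident enough
            supported, evidence = validate(sentence, concept)
            if not supported:
                sentence = repair(sentence, concept, evidence)   # mitigate
                context = evidence + "\n" + context              # knowledge as added context
                break                                            # greedy exit
        output.append(sentence)
        context = context + " " + sentence
    return " ".join(output)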
§.§ Hallucination Detection
§.§.§ Identify Key Concepts
In the first step, we identify the important concepts from the generated sentence.
We identify these concepts because validating the correctness of the entire sentence at once is infeasible: a sentence may contain a number of different facets, not all of which can be verified in a single step.
On the other hand, individually validating the correctness corresponding to the concepts provides opportunities for accurately detecting hallucinations.
Thus, the objective of this step is to identify the candidates of potential hallucination.
We note that a concept or keyphrase is essentially a span of text consisting of one or more words.
We study the following techniques to identify the concepts:
Entity Extraction:
Entities are usually an important part of a sentence, thus, we use an off-the-shelf entity extraction model
to identify the concepts.
A limitation of this method is that a concept need not necessarily be an entity and can be a non-entity span also.
We address this limitation with a keyword extraction model.
Keyword Extraction:
To also identify the non-entity concepts, we explore an off-the-shelf keyword extraction model[https://huggingface.co/ml6team/keyphrase-extraction-kbir-kpcrowd].
This model uses Keyphrase Boundary Infilling with Replacement (KBIR) as its base model and fine-tunes it on the KPCrowd dataset <cit.>.
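A minimal sketch of how such an off-the-shelf keyphrase extractor can be called through the Hugging Face transformers library is given below; note that the model card for this checkpoint wraps the call in a custom pipeline class, so the generic token-classification pipeline used here is only an approximation of the intended usage, and the input sentence is an arbitrary example.

from transformers import pipeline

# KBIR fine-tuned on KPCrowd for keyphrase extraction (a token-classification model).
extractor = pipeline(
    "token-classification",
    model="ml6team/keyphrase-extraction-kbir-kpcrowd",
    aggregation_strategy="simple",    # merge sub-word tokens into keyphrase spans
)

sentence = "The Eiffel Tower was completed in 1889 and is located in Paris."
keyphrases = {span["word"].strip() for span in extractor(sentence)}
print(keyphrases)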
*Instructing the Model*:
Since state-of-the-art language models perform remarkably well on a wide range of tasks, in this technique, we directly instruct the model to identify the important concepts from the generated sentence.
An important characteristic of this technique is that it doesn't require calling a task-specific tool (entity or keyword extraction model) for this task.
Table <ref> (in Appendix <ref>) illustrates examples of concepts identified using the three techniques.
It shows that the entity extraction model misses many important concepts while the keyword extraction model identifies a lot of insignificant concepts also.
In contrast, instruction technique successfully identifies all the important concepts.
Moreover, it doesn't require calling a task-specific tool.
Thus, we represent this technique with (*), our preferred technique for this step.
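A sketch of this instruction-based variant is given below; llm_generate stands in for whatever completion endpoint is used (it is not a function defined in this work), and the exact prompt wording is illustrative rather than the prompt used in our experiments.

def identify_concepts(sentence, llm_generate):
    # Instruct the model itself to list the important concepts of the sentence.
    prompt = (
        "Identify all the important keyphrases from the sentence below and "
        "return a comma separated list.\n"
        f"Sentence: {sentence}\n"
        "Keyphrases:"
    )
    response = llm_generate(prompt)            # e.g. a GPT-3 completion call
    return [c.strip() for c in response.split(",") if c.strip()]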
§.§.§ Calculate Model's Uncertainty
GPT-3 <cit.> and several other publicly available models also provide logit output values in their prediction response.
Thus, we study if these logit output values can be utilized to
detect hallucinations.
However, we note that this is an additional source of information and not a necessary requirement for our hallucination detection method as some models that are available only via API calls do not provide these logit output values.
Recall that a concept can consist of more than one token also (note that the model provides logit output values at the level of tokens); thus, we study three different techniques for calculating a probability score for a concept.
Consider a concept consisting of n tokens and having the maximum softmax probabilities as p_1, p_2, p_3, ..., p_n for the n token positions respectively.
We obtain these probabilities by applying the softmax function over the logit values for each token position.
We study the following techniques:
Average of Token Probabilities:
In this technique, we simply take the average of the probabilities of the tokens corresponding to the concept:
score = AVG (p_1, p_2, ..., p_n)
Normalized Product of Token Probabilities:
Here, we take a normalized product of the probabilities of the tokens:
score = (p_1 × p_2 × ... × p_n)^1/n
*Minimum of Token Probabilities*:
Here, we take the minimum of probabilities as the score.
score = MIN (p_1, p_2, ..., p_n)
This is our preferred technique for this step as the other techniques average out the effect of the model's uncertainty over the tokens, while a low probability on even one token of the concept provides strong evidence that the model is uncertain.
For example, if the model is uncertain on the name of the USA president then its uncertainty on the first token (`Joe') would be high but on the next token (`Biden') would be very low as the token `Joe' is frequently followed by the token `Biden'.
Thus, averaging or normalizing the probabilities will have a limited capability to capture this signal.
Through our experiments (Section <ref>), we show that this score (especially `MIN') indeed provides a signal for hallucination, i.e., the more uncertain a model is on a concept (low probability score), the more likely it is to be hallucinating about that concept.
However, we note that this score is just a signal for hallucination and in no way provides a guarantee for presence of hallucinations.
We utilize this signal and check for hallucinations with respect to the uncertain concepts using our validation procedure (<ref>-<ref>).
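The three aggregation choices can be written down directly; in the small sketch below the token probabilities are illustrative numbers rather than actual model output.

import math

def avg_score(probs):
    return sum(probs) / len(probs)

def normalized_product_score(probs):
    return math.prod(probs) ** (1.0 / len(probs))    # (p1 * ... * pn)^(1/n)

def min_score(probs):
    return min(probs)

# Illustrative two-token concept: uncertain first token, near-certain second token.
probs = [0.30, 0.99]
print(avg_score(probs), normalized_product_score(probs), min_score(probs))
# 0.645, ~0.545, 0.30 -> only MIN clearly flags the uncertainty on the first token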
In the absence of logit output values:
For models that do not provide the logit output values, all or some heuristically selected concepts (depending on the computational and latency budget of the system) can be passed to the validation stage for detecting hallucinations.
§.§.§ Create Validation Question
We start the validation procedure for a concept by creating a question that tests the correctness of the information (in the generated sentence) pertaining to the concept.
We create Yes/No Questions, i.e., questions for which the answer is either a `Yes' or a `No'.
Table <ref> shows examples of validation questions.
For creating these questions, we explore the following two techniques:
Question Generation Tool:
Here, we use an off-the-shelf answer-aware question generation model.
*Instructing the Model*:
Here, we directly instruct the model to create a validation question checking the correctness of the information about the selected concept.
For the same reason as in the concept identification step, this is our preferred technique as it does not require calling a task-specific tool.
We note that instead of Yes/No questions, Wh-questions can also be used for validation.
We prefer Yes/No questions as it is relatively easier to check the answer for these questions.
We leave exploring Wh-questions for validation for future work.
§.§.§ Find Relevant Knowledge
*Web Search*:
In order to answer the validation question, we retrieve knowledge relevant to it which serves as additional context.
For generality and wide coverage, we use web search (via Bing search API) for retrieving this knowledge.
However, we note that any other search API or knowledge corpus can also be utilized for this purpose.
Self-Inquiry:
We also explore a self-inquiry technique where we directly prompt the model to answer the validation question.
In this technique, the model relies on its parametric knowledge to answer the validation question.
This technique has several drawbacks as compared to web search such as lack of a reliable strategy to extract the parametric knowledge from the model and staleness of the parametric knowledge.
§.§.§ Answer Validation Question
In this step, we prompt the model to answer the validation question (leveraging the retrieved knowledge as context) and verify its response.
If the validation procedure succeeds for all the uncertain concepts then we continue generating the next sentence; otherwise, we interrupt the generation process, mitigate the potential hallucination in the sentence, and then continue generation.
Order of Validation of Concepts:
Validation of different concepts can be done in a sequence (in ascending order of their calculated probability score) or in parallel.
However, running this in parallel would require starting multiple threads which may not be supported by all machines.
Thus, in this work we study only the sequential validation strategy but note that it can be made more efficient by running it in parallel.
We regard this sequential validation as a greedy exiting strategy as we proceed to the mitigation stage on detection of the first potential hallucination.
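Putting these steps together, a minimal sketch of the validation procedure is given below; web_search and llm_generate are placeholders for the retrieval call and the completion endpoint respectively, the prompt wording is illustrative, and the simple "yes"-prefix check is one possible way of reading off the answer to the Yes/No question.

def validate_concept(sentence, concept, llm_generate, web_search):
    # Returns (is_supported, evidence) for the information about `concept` in `sentence`.
    # (a) create a Yes/No validation question
    question = llm_generate(
        "Write a yes/no question that checks whether the information about "
        f"'{concept}' in the following sentence is correct.\n"
        f"Sentence: {sentence}\nQuestion:"
    )
    # (b) retrieve knowledge relevant to the validation question
    evidence = web_search(question)
    # (c) answer the validation question using the retrieved knowledge as context
    answer = llm_generate(
        f"Context: {evidence}\nQuestion: {question}\nAnswer only Yes or No.\nAnswer:"
    )
    return answer.strip().lower().startswith("yes"), evidence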
§.§ Hallucination Mitigation
For mitigating the hallucination in the generated sentence, we instruct the model to repair the generated sentence by either removing or substituting the hallucinated information using the retrieved knowledge as evidence.
Table <ref> shows the instructional prompts for different steps of our approach.
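A corresponding sketch of the repair step is given below; as before, llm_generate is a placeholder for the completion endpoint and the instruction text is illustrative, not the exact prompt listed in Table <ref>.

def repair_sentence(sentence, evidence, llm_generate):
    # Rewrite a potentially hallucinated sentence so that it is consistent with the evidence.
    prompt = (
        "Repair the sentence below by removing or substituting any information "
        "that is not supported by the evidence. Do not add new unsupported facts.\n"
        f"Evidence: {evidence}\n"
        f"Sentence: {sentence}\n"
        "Repaired sentence:"
    )
    return llm_generate(prompt).strip()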
Note: We note that the result of the validation procedure is contingent on the retrieved knowledge and the model's ability to leverage that knowledge in answering the validation question.
Thus, a case is plausible in which the validation procedure reports hallucination even though the sentence is actually not hallucinated.
However, in Section <ref>, we show that our approach performs fairly well on this task.
Moreover, it achieves a very high recall demonstrating its efficacy at detecting hallucinations.
Moreover, in Section <ref>, we show that our mitigation approach does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
§ EXPERIMENTS AND RESULTS
In this section, we first demonstrate the two findings that motivate our approach (<ref> and <ref>).
Then, we show the individual efficacy of our hallucination detection and mitigation techniques in <ref> and <ref>, respectively.
Finally, in <ref>, we show the effectiveness of the proposed active detection and mitigation approach in addressing hallucinations.
Data and Annotation:
In our experimental setup, we prompt the large language model (GPT-3: text-davinci-003) to write about various topics.
Specifically, we use a total of 150 topics from diverse domains.
Figure <ref> shows the distribution of different domains in our topic set.
In each domain, we include different kinds of topics; for instance, Sports domain consists of sports persons, administrators, teams, and games, Music consists of musicians, songs, music labels, and bands, Politics includes politicians, political parties, and elections, Film & TV includes actors, TV personalities, shows, and movies, History includes historians and events, etc.
For selecting the names of people, we use randomly sampled names from the top 20% of longest articles in WikiBio dataset <cit.> as done in <cit.>.
Similarly, for the other topics, we randomly sample from the longest Wikipedia articles.
This is done to ensure that no obscure or ambiguous concept is selected.
Equipped with the list of topics, we give the model an input prompt for each topic, asking it to write about that topic.
Following this, we (the authors) manually annotate the correctness of the first five sentences generated by the model for each topic.
For annotating the correctness, we look at search results from the web to find the relevant knowledge that either supports or contradicts the information present in the generated sentence.
In some cases, multiple web searches were required to check the correctness of different facets of a sentence.
Furthermore, in a small number of cases where we could not find information supporting or contradicting the information in the generated sentence, we mark it as a case of extrinsic hallucination.
We opt for this expert annotation strategy because, although our annotation task is a simple binary classification, checking the correctness of a given sentence requires considerable effort and cannot reliably be done via crowdsourcing.
In addition to this sentence-level annotation, we also annotate correctness at the concept-level that we will detail in <ref>.
We release both sentence-level and concept-level hallucination annotations, which will also facilitate systematic future research in this direction.
§.§ Motivating Findings
§.§.§ Hallucination Causes Further Hallucination
Recall that we consider the first five sentences generated by the model for each topic and annotate their correctness.
Since the sentences are sequentially generated, we investigate the
relationship between `hallucination in a generated sentence' and `hallucination in the previously generated sentences' for an input.
Since there are two binary variables, there exist four possibilities in this relationship, i.e.,
a sentence is hallucinated and there was hallucination in the previously generated sentences (A); the sentence is not hallucinated and there was hallucination in the previously generated sentences (B); the sentence is hallucinated and there was no hallucination in the previously generated sentences (C); and the sentence is not hallucinated and there was no hallucination in the previously generated sentences (D).
For illustration, consider a sample case for sentence 3, the two binary variables are whether sentence 3 is hallucinated and whether there was hallucination in the previously generated sentences (i.e. in sentence 1 OR sentence 2).
Figure <ref> demonstrates this relationship for sentences 2, 3, 4 and 5 aggregated over all the topics in our data.
We do not show this for sentence 1 as there is no previously generated sentence for it.
From this figure, we draw the following inferences:
(a) A > B: Cases A and B correspond to the scenario when there is hallucination in the previously generated sentences. It can be observed that A is considerably greater than B which implies that when there is hallucination in the previously generated sentences, a sentence is hallucinated more often.
Moreover, the gap keeps increasing as the sentence number increases.
(b) A > C: Cases A and C correspond to the scenario when a generated sentence is hallucinated. It can be observed that A is greater than C which implies that a generated sentence is hallucinated more when there is hallucination in the previously generated sentences as compared to when there is no previous hallucination.
(c) D > C: Cases C and D correspond to the scenario when there is no hallucination in the previously generated sentences. Here, D is greater than C which implies that when there is no hallucination in the previously generated sentences, a generated sentence is more often not hallucinated.
(d) D > B: Cases B and D correspond to the scenario when a generated sentence is not hallucinated. D is greater than B which implies that a generated sentence is not hallucinated more when there is no previous hallucination as compared to when there is previous hallucination.
This shows that hallucination in a sentence often results in further hallucinations in the subsequently generated sentences and thus actively detecting and mitigating hallucinations can not only fix the current hallucination but can also prevent its propagation in the subsequently generated sentences.
Next, we demonstrate the utility of logit output values in detecting hallucinations.
§.§.§ Logit Output Values Provide a Signal for Hallucination
In this subsection, we first show the trend of hallucination with the probability score.
Note that this score is calculated using the logit output values.
Then, we demonstrate the benefit of identifying concepts from the generated sentence in detecting hallucinations.
Finally, we compare the efficacy of different probability calculation techniques in detecting hallucinations.
Hallucination vs Probability Score:
In order to study the relationship between logit output values and hallucination, we annotate correctness at concept-level also (in addition to sentence-level annotations described earlier).
Specifically, for each identified concept, we mark whether the information about it in the generated sentence is hallucinated or not.
This can be different from sentence-level annotation as it focuses only on the correctness of the information about the concept in the sentence.
Table <ref> shows examples of both sentence-level and concept-level annotations.
Figure <ref> shows the trend of hallucination with our calculated probability scores at both sentence and concept levels.
For a sentence, we use the minimum across tokens of all its identified concepts as the probability score and for a concept, we use the minimum across all its tokens as the probability score.
It can be observed that as the probability score increases (or uncertainty decreases), tendency to hallucinate decreases.
This shows that these probability values can be utilized as a signal for hallucination, i.e., the low probability concepts in a generated sentence can be considered as candidates of potential hallucination and their correctness in the generated sentence can be validated for detecting hallucinations.
On average, we observe an absolute difference of ∼0.15 between the probabilities of concepts when the model is hallucinating vs when it is not hallucinating.
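These scores can be computed directly from the per-token log-probabilities returned by the model (the logprobs field of the legacy Completion API). The sketch below implements the 'Minimum' aggregation at the concept and sentence levels; the token-to-concept alignment is assumed to be precomputed as lists of token indices.

    import math

    def concept_probability(token_logprobs, concept_token_indices):
        # 'Minimum' technique: the score of a concept is the minimum probability
        # over the tokens that make up that concept.
        return min(math.exp(token_logprobs[i]) for i in concept_token_indices)

    def sentence_probability(token_logprobs, concepts_token_indices):
        # Sentence-level score: minimum over the tokens of all identified concepts.
        return min(
            concept_probability(token_logprobs, idxs)
            for idxs in concepts_token_indices
        )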
Benefit of Identifying Concepts from a Sentence:
Now, we demonstrate the benefit of identifying concepts from a sentence and leveraging the logit output values corresponding to their tokens for detecting hallucinations.
To this end, we plot precision-recall curves for the hallucination detection task corresponding to two methods that use the probabilities calculated from the logit output values.
The blue curve corresponds to the technique in which we use the minimum probability across all tokens of the sentence and the orange curve is for the technique in which we use the minimum over only the tokens of the identified concepts.
Figure <ref> shows the two curves.
The orange curve achieves higher area under the precision-recall curve implying that utilizing the probabilities of the concept tokens provides a stronger signal for hallucination as compared to the probabilities corresponding to all the tokens.
Comparing Probability Calculation Techniques:
Figure <ref> shows the Precision-Recall curves for the hallucination detection task (at concept-level) using the three probability calculation techniques, i.e., Minimum, Average, and Normalized (described in <ref>).
The `Minimum' technique achieves the highest area under the curve and hence is better at the hallucination detection task.
§.§ Hallucination Detection Performance
In this subsection, we demonstrate the hallucination detection performance of various techniques at both sentence and concept-levels.
Self-Inquiry vs Web Search:
Tables <ref> and <ref>
show the hallucination detection performance of the self-inquiry and web search techniques at sentence-level and concept-level, respectively.
For sentence-level results, we predict the sentence to be hallucinated if the validation procedure fails on any identified concept.
Note that in these results, we do not leverage the uncertainty score to select concepts for validation, instead we validate all the identified concepts.
We study the relationship of recall with probability thresholds in Figure <ref> (in Appendix).
From the tables, it can be observed that the web-search technique achieves considerably high recall in detecting hallucinations.
Here, we emphasize the high `recall' of the web-search technique because we show that our mitigation approach does not introduce any new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives (<ref>).
Figure <ref> shows the recall of hallucination detection vs Probability threshold plot for Self Inquiry and web search techniques at both sentence-level and concept-level.
Web-search is consistently and considerably better than self-inquiry.
§.§ Hallucination Mitigation Performance
On sentences where our validation procedure (using Web search) reports hallucinations, we apply our mitigation technique.
We note that a sentence which is reported as hallucination can either be actually hallucinated or not hallucinated, i.e., it could also be a false positive.
Table <ref> shows the result of our method.
It successfully mitigates the hallucination on 57.6% of the correctly detected hallucinations (True Positives); we refer to this metric as `success'.
Furthermore, it achieves this at minimal `deterioration' (3.06%), i.e., it incorrectly converts a minimal 3.06% of the non-hallucinated instances to hallucinated.
§.§ Active Detection and Mitigation
The two findings in Section <ref> motivate our approach of addressing hallucinations in which we actively detect hallucinations leveraging the logit output values and mitigate them during the generation process to prevent their propagation.
Specifically, using the calculated probability scores, we identify the uncertain concepts and check their correctness using our validation procedure.
We generate one sentence at a time and when our detection method reports hallucination, we fix it using our mitigation approach and continue generating the next sentence.
We demonstrated separate detection and mitigation efficacy in <ref> and <ref>, respectively.
Figure <ref> compares the percentage of hallucination in the output of GPT-3 model and our active detection and mitigation approach.
Our approach reduces the percentage of hallucinations from 47.4% to 14.53%.
In Figure <ref>, we demonstrate this comparison for different categories of hallucination.
It shows that our approach reduces hallucinations for all categories.
§ RELATED WORK
Advancements in the field of natural language processing led to the development of models that possess an impressive ability to generate fluent and coherent text. However, these models are vulnerable to a phenomenon called text hallucination.
Prior work <cit.> has categorized text hallucinations into two classes: Intrinsic (when the generated output contradicts the source content) and Extrinsic (when the generated output cannot be verified from the source content, i.e., it can neither be supported nor contradicted by the source).
One thread of research pertaining to hallucinations has focused on studying different causes of this phenomenon such as training data quality <cit.>, source-target divergence <cit.>, ill-suited modeling <cit.>, and randomness during inference <cit.>.
The other thread focuses on addressing the hallucination problem <cit.>.
<cit.> propose a sampling-based hallucination detection approach in which they first sample multiple responses from the model and then measure the information consistency between the different responses. They posit that when a language model knows a given concept well, the sampled responses are likely to be similar and contain consistent facts; on the other hand, for hallucinated facts, stochastically sampled responses are likely to diverge and may completely contradict one another.
Another recent work <cit.> leverages the LLM's internal state to identify the truthfulness of a statement. Using an annotated dataset, they train a separate classifier that takes the LLM's activation values as input and predicts its truthfulness.
<cit.> hypothesize that the randomness of sampling is more harmful to factuality when it is used to generate the latter part of a sentence than the beginning of a sentence and propose a new sampling algorithm named factual-nucleus sampling that dynamically adapts the `nucleus' p along the generation of each sentence.
<cit.> propose an approach motivated by The Society of Mind and multi-agent settings in which multiple models individually propose and jointly debate their responses and reasoning processes to arrive at a common answer.
In our approach, we leverage the logit output values and web search to actively detect and mitigate hallucinations.
§ CONCLUSION
In this work, we proposed an approach that actively `detects' and `mitigates' hallucinations of the large language models.
Through systematic and extensive experiments, we show that our approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average.
We also demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, our detection technique achieves a high recall and our mitigation technique successfully mitigates the majority of the correctly detected hallucinations.
Notably, the mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Overall, our work contributes to improving the reliability and trustworthiness of text generation systems, a crucial step en route to enabling their widespread adoption in real-world applications.
§ APPENDIX
§ APPROACH
Table <ref> shows the instructional prompts used for different steps of our approach.
We note that these techniques are the preferred techniques as they do not require calling an external task-specific tool to achieve the corresponding objectives.
§.§ Identify Key Concepts
Table <ref> shows examples of concepts identified using the three methods, i.e., Entity Extraction, Keyword Extraction, and Instructing the Model.
It shows that the entity extraction model misses many important concepts while the keyword extraction model identifies a lot of insignificant concepts also.
In contrast, the instruction technique successfully identifies the majority of the important concepts.
§.§ Create Validation Question
Table <ref> shows examples of validation questions corresponding to each concept created via instructing the model technique.
It shows examples of both the question types, i.e., Yes/No and Wh questions.
We prefer Yes/No questions as it is relatively easier to check the answer for these questions.
We leave exploring Wh-questions for validation for future work.
§ EVALUATION DATA
Table <ref> shows the statistics of the sentences generated by the GPT-3 (text-davinci-003 with temperature 0) model.
A sentence has ∼18 words on average and each sentence has ∼3.2 key concepts that are identified by our instruction technique.
Table <ref> shows examples of sentence-level and concept-level hallucination annotations.
§ RECALL OF HALLUCINATION DETECTION VS PROBABILITY THRESHOLD
Figure <ref> compares recall of hallucination detection for self-inquiry and web search techniques at different probability thresholds.
Web search considerably outperforms self-inquiry at all thresholds.
|
http://arxiv.org/abs/2307.04710v1 | 20230710171958 | Remarks on the Axion Domain Wall Problem | [
"Michael Dine"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
Remarks on the Axion Domain Wall Problem
Michael Dine
August 12, 2023
======================================================================================
§ TO DO
Consider domain walls bounded by strings, along the lines of the old paper by Sikivie et al. Is there any change in the picture, e.g. due to attractive and repulsive forces between string elements? One might imagine
there is always of order one domain per horizon before considering bias. Suppose, e.g., there are N. In the light travel time to cross the horizon, the system develops a large γ.
Ask what fraction of domain wall energy might be in hadrons and interpret in terms of final collapse.
Think about final collapse in terms of particle collisions. If principally axions, what are the collision products?
§ INTRODUCTION
A Peccei-Quinn symmetry<cit.> has the potential to solve the strong CP problem and account for the dark matter of the universe
<cit.>. Before considering cosmology, the axion decay constant, a priori,
can take a broad range of values. Stellar astrophysics places a lower bound in the range of 10^9-10^10 GeV<cit.>. Big bang cosmology, with the assumption
that the universe, in the past, was hotter than a GeV or so, places an upper limit of about 10^12 GeV. Attaining a symmetry of sufficient quality<cit.> to solve the strong CP problem, however,
is quite a challenge. String theory, and more general considerations of quantum gravity, rule out exact, continuous global symmetries. So one expects
that in the effective field theory at low energies, at the very least there will be Planck-suppressed operators which violate the symmetry. Even for the low range of f_a,
operators of very high dimension must be suppressed to account for the smallness of θ<cit.>. One might try to account for this suppression by discrete symmetries<cit.>,
but the symmetries must be quite large. String theory appears capable of avoiding this problem<cit.>, in the sense that PQ symmetries may be violated only by non-perturbative
effects, which can be extremely small if the theory generates a small coupling constant. But in this case, a value of f_a much smaller than, say, typical scales associated with coupling constant
unification, would be surprising. Larger scales are admissible if the universe was never much hotter than nucleosynthesis temperatures in the past. This might occur if there was a period
where the universe was dominated by moduli; see, for example<cit.>.
These observations arguably cast doubt on the PQ solution, and in any case, would seem to favor large values of f_a and a modified cosmology. In this paper, however,
we will adopt the conventional picture that the universe was quite hot in the past and we will assume that there was a PQ transition after inflation, and focus on the problem of domain walls.
In the scenario in which there is a PQ transition after inflation, domain walls are potentially problematic <cit.>. If the PQ symmetry is an exact, continuous global symmetry (up to anomalies), the theory has stable domain walls provided the
coefficient of the QCD anomaly is (suitably normalized) an integer different than one. The domain wall energy density falls off as 1/R, where R is the scale factor, as opposed to the radiation energy density (1/R^4)
or the matter dominated energy density 1/R^3. Typically, the domain walls dominate well before the present era, spoiling the successes of the Standard Cosmology.
Two plausible solutions to this problem were put forward in<cit.>:
* The coefficient of the anomaly is unity.
* There is a small explicit breaking of the PQ symmetry, large enough to lead to collapse of the domain wall system.
Even for the first solution, one has to consider the effects of cosmic strings<cit.>. The second solution has troubling features. The strength of the leading
symmetry breaking operator is restricted to a narrow range. First, it must be small enough that the resulting θ satisfies the current experimental bounds. Typically this requires that the leading operator
which breaks the continuous PQ symmetry is of very high dimension<cit.>. Second, the operator must be large enough that, if there are domain walls, these disappear before
they come to dominate the energy density of the universe. Naively, for interesting values
of f_a, the axion decay constant, these two conditions limit the symmetry breaking operator to a narrow range (we will take f_a = 10^12 GeV as our benchmark). If the suppression of high dimension operators is a consequence of discrete symmetries,
these symmetries must be very large, but not too large<cit.>.
Recently, the authors of <cit.> have revisited the domain wall problem. They put forward and then rule out a third possible solution: a bias in the domain wall ensemble favoring one of the ground
states. They then argue that the second solution above is unlikely to work, except for relatively small values of the axion decay constant, small enough to be problematic for stellar processes.
They argue, in particular, that the collapsing domain wall system produces too much dark matter.
Motivated by the appearance of high quality PQ symmetries in string theory, the present author has long been an advocate for high scale breaking of the PQ symmetry, which requires a breakdown of
the Standard Big Bang Cosmology at temperatures not much higher than nucleosynthesis temperature. But in this note, we consider the possibility of a post-inflationary
PQ transition and smaller axion decay constants, with a conventional
thermal history for the universe at least up to temperatures of a few GeV. We will focus on the issues associated with the second solution, small breaking of the Peccei-Quinn symmetry.
We will consider the dark matter issue, demonstrating that, until the domains collapse, most of the excess energy is converted into kinetic energy of the domain walls. Once the domains shrink to sizes of order
m_a^-1, this energy is converted to ultrarelativistic axions and hadrons. Provided that the domain walls never dominate, these objects are relatively harmless.
We will review the problem of accounting for small symmetry breaking, focussing on models where the PQ symmetry is an accident of a large
discrete symmetry<cit.>. Such symmetries, at least at first sight, are not particularly plausible. The requirements, indeed, yield extremely large symmetries, yet the symmetry
also cannot be too large if the breaking is to be sufficient to avoid a domain wall dominated universe.
Given the seeming absurdity of the requirements on the symmetry breaking, we ask whether there might be some anthropic explanation. Taking a very conservative approach to the anthropic principle, where one asks
whether the change of one particular parameter can rule out the existence of observers, anthropic constraints have the potential to restrict the strength of the symmetry breaking to the required range.
This note is organized as follows.
In the next section, we review some aspects of domain walls and the cosmic strings which bound them<cit.>. In particular, we discuss the sense in which one can systematically
construct the domain wall from the chiral lagrangian, and also provide a simple, analytic domain wall solution in a particular limit. In section <ref>, we study the system in the presence of explicit breaking.
We discuss constraints on the size of the breaking and the axion decay constant. We focus, particularly, on models where the PQ symmetry arises from a large discrete symmetry, noting that the
symmetry must be quite large to accommodate the current limits on θ but (anticipating our cosmology discussion) can't be appreciably larger than this if it is to avoid domain wall catastrophes. In section <ref> we turn to cosmology. After reviewing aspects of the cosmological domain wall problem, we demonstrate that the wall collisions principally produce gravitational waves and note that their energy density
can readily be in a suitable range. We then turn to the coincidence problem in section <ref>, arguing that it places requirements on a theory which call out, if a PQ symmetry is realized in nature
in this fashion, for
an anthropic solution. As noted above, we will see that such a solution is plausible. It is hard to see how otherwise there would be any solution at all. Our conclusions are presented in section <ref>.
§ DOMAIN WALL GENERALITIES
Suppose we have an exact Peccei-Quinn symmetry up to the anomaly. Under the PQ symmetry, the various fields transform by phases e^i q_pqα. By convention, we take the PQ charges, q_pq to be integers. θ changes, in general, by 2 π N for some integer N under this transformation. Given 2π periodicity of θ, the symmetry is in fact
Z_N. For N ≠ 1, there are domain walls.
The tension of the domain walls is of order
T ∼ m_a f_a^2 ∼ m_π f_π f_a.
We will have in mind f_a ∼ 10^11- 10^12 GeV in what follows.
It is of interest to ask whether, within the framework of chiral perturbation theory and/or large N, we can write a strict equality for the domain wall tension.
§.§ Domain Wall Solutions from the Chiral Lagrangian
Just as one can compute the axion mass using the chiral lagrangian, one can obtain the domain wall solutions in the case that the system supports axionic domain walls.
Suppose that the light quarks have PQ charges q_i (we will mainly write explicit formulas for the case of two light quarks). Suppose, as well, that the PQ symmetry has
anomaly
∂_μ j^μ_PQ = (N/32π^2) F F̃.
Then we can define an anomaly-free current, j̃^μ_PQ, by subtracting off a non-conserved current with the same anomaly:
j^5 μ = N( u̅γ^μγ^5 u + d̅γ^μγ^5 d)
So now:
∂_μj̃_PQ^μ = N( m_u u̅γ^5 u+ m_d d̅γ^5 d).
In computing the axion mass, one sometimes makes a different choice<cit.>, so that the divergence of the current does not have matrix elements between vacuum and the single pion state,
but this is not convenient for the domain wall problem, where one needs to consider a finite axion field range.
We can explore the effect of finite transformations generated by Q̃_PQ,
U(α) = e^i αQ̃_PQ.
Under this transformation the quark mass terms are not invariant; these transform as:
m_u u̅ u + m_d d̅ d → ( m_u u̅ u + m_d d̅ d) cos ( Nα) + (m_u u̅γ_5 u + m_d d̅γ_5 d)sin( Nα) .
Now consider the chiral lagrangian. Our goal is to integrate out the pion fields. We do this by replacing u̅ u and d̅ d by their expectation values as functions of the pseudogoldstone fields, and solving for the minimum of the π⃗ potential
as a function of a = α f_a. Switching to a two component notation, and letting f,g denote flavor indices:
ψ̅(x)_f ψ(x)_g = ⟨ψ̅(0) ψ(0) ⟩ ( e^{i π⃗·σ/(2 f_π)} )_fg
We only have to solve for π_0.
A particularly simple case is that of m_u = m_d. Then the potential is independent of π_0:
V(a) = -m_π^2 f_π^2 cos( Na/f_a ) = -f_a^2 m_a^2 N^-2 cos( Na/f_a ).
The presence of domains for N ≠ ±1 is manifest; the potential has N degenerate minima with
a/f_a = 2πk/N; k=0,…, N-1,
and the existence of domain wall solutions follows.
The domain wall solution can be written down explicitly; it is the static soliton of the Sine-Gordan theory:
a(x) = f_a ( 4 arctan(e^{m_a x})/N + 2πk/N ).
The tension of the domain wall satisfies:
T ∝ f_a^2 m_a ∝ f_a m_π f_π.
If m_u ≠ m_d, there is an additional contribution from the pion fields proportional to
δT ∝ [(m_u - m_d)/(m_u + m_d)] f_a m_π f_π.
So, in general, the pions make an order one contribution to the tension. In the final collapse of the domains, this will be associated with production of energetic hadrons.
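As an elementary consistency check (using the standard result that the tension of a static kink equals ∫ dx (da/dx)^2), the explicit solution above fixes the coefficient in the m_u = m_d case:
T = ∫ dx (da/dx)^2 = ∫ dx [ 2 f_a m_a/(N cosh(m_a x)) ]^2 = 8 f_a^2 m_a/N^2,
consistent with the scaling T ∝ f_a^2 m_a quoted above.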
§ DOMAIN WALL COSMOLOGY
Domain walls, if they come to dominate the energy density of the universe, are problematic<cit.>. The domain wall energy density decreases as 1/R, so it can quickly overwhelm the density of
radiation or matter, falling as 1/R^4 or 1/R^3, respectively. So it is necessary that there either never were domain walls at all, or that they disappear relatively quickly, typically by times of order a few seconds after the big bang. Reference <cit.>
considers, in addition to the two proposed solutions we mentioned earlier, a third possibility, that of a biased domain wall distribution, but rules it out. They then argue
that the constraints associated with PQ violating operators have been underestimated. We will address their critique shortly.
Reference <cit.> analyzed explicit PQ symmetry violation, tilting the axion potential
and causing all but one type of domain to collapse. Suppose the splitting between states is:
Δ V = ϵ 10^-10 m_π^2 f_π^2.
This corresponds to a potential for the axion roughly of the form:
Δ V(a) = ϵ 10^-10 m_a^2 f_a a.
ϵ cannot be extremely small if the domain wall system is not to dominate the energy of the universe before it disappears.
When the temperature is of order f_π, the domain walls form. The corresponding
Hubble parameter is H_0 = m_π f_π/M_p. Calling the corresponding scale factor R_0, the domain wall density is subsequently of order:
ρ_DW = (f_a m_π f_π) (m_π f_π/M_p) (R_0/R).
So domain walls dominate when
(R_0/R)^3 ≈ f_a/M_p
or
(R_0/R) ≈ 10^-2 ( f_a/10^12 )^{1/3}.
This corresponds to a temperature of order 1 MeV, or times of order 10 seconds. How quickly the domains collapse is the subject of the next section.
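For completeness, the intermediate step is simply (taking the radiation density at formation to be of order m_π^2 f_π^2, i.e. T_0^4 with T_0 of order f_π):
ρ_DW ≈ ρ_rad  ⇒  (f_a m_π f_π)(m_π f_π/M_p)(R_0/R) ≈ m_π^2 f_π^2 (R_0/R)^4  ⇒  (R_0/R)^3 ≈ f_a/M_p.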
§.§ Fate of the Domain Walls
As noted in the literature, when the walls collapse, their energy can be converted to kinetic energy of the domain walls, to axions, gravitational waves, electromagnetic radiation, and possibly other types of matter or radiation. We will shortly
argue that, before collapse, the energy is principally converted to kinetic energy of the walls, followed by highly relativistic axions. At the final collapse, this energy is converted to extremely relativistic
axions. Gravitational and electromagnetic radiation are minor components of the energy budget. These axions would still be highly relativistic today.
When formed, the bubbles have radius, r_0, of order:
r_0 ≈ M_p/(m_π f_π).
Initially their acceleration due to Δ V is slightly less than that due to the Hubble expansion, H:
a ∼ Δ V/(f_a f_π^2) ≈ ϵ 10^-22 f_π^2; H ∼ 10^-19 f_π^2.
They become comparable when the temperature decreases by a factor of order 10^2. Beyond this point we can, to first approximation, neglect the expansion. The velocity becomes of order one in less than a Hubble time, and collapse occurs in such a time. We can then ask: if we ignore gravitational radiation, what is the velocity of the domain wall once the domain shrinks to a microscopic size (more precisely, how large is the Lorentz γ factor for the wall)?
We take as the initial time the time when the cosmic acceleration is equal to the acceleration of the wall:
H_0 = Δ V/(f_π^2 f_a) ≈ ϵ × 10^-24 GeV.
The energy stored in a horizon sized region of one of the excited states is:
E ≈ Δ V H_0^-3 ∼ ϵ 10^58 GeV.
If there is no emission of axions, gravitational or other radiation during the collapse, then once the region size is of order m_a^-1, the γ factor is enormous. The effective mass of the system is of order
m_eff = f_a f_π^2 m_a^-2 ∼ 10^38 GeV,
so
γ ∼ ϵ 10^20.
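Explicitly, this is just the ratio of the two estimates above:
γ ∼ E/m_eff ∼ ϵ 10^58 GeV / 10^38 GeV ∼ ϵ 10^20.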
We now argue that in fact most of the energy gained is transferred to kinetic energy of the domain wall.
Initially the domain is large and the curvature of the domain wall is negligible on macroscopic scales.
To consider axion radiation, it is helpful to work in the instantaneous rest frame of the domain wall (more precisely, of a macroscopic segment of the wall). We can define what we mean as axion radiation
by considering a set of domain walls, instantaneously at rest, described by a classical field configuration,
ϕ(x⃗) = ϕ_cl(x⃗-x⃗_0).
Were it not for the symmetry-breaking potential, Δ V, these configurations would be solutions of the equations of motion if x⃗_0 = v⃗ t. But due to the potential, they are not.
Axion radiation corresponds roughly to the difference, δϕ, of the actual axion field and the would-be domain wall configuration at a given time. This is proportional to
δϕ(x⃗,t) = Cẍ_0(t)_i ·∂_i ϕ_cl (x⃗,t).
It would be challenging to compute the energy carried by δϕ, but its form, for non-relativistic motion, can be determined by simple considerations. The energy should be rotationally invariant,
translationally invariant, and time reversal invariant (up to very small effects within the Standard Model coupled to an axion). So the energy per unit time transferred to δϕ behaves as:
E = B f_a^2 |ẍ_0|^2 A
where A is the area of the domain wall, and B is an order one constant. We can write ẍ_0 in terms of Δ V and the tension of the domain wall,
ẍ_0 = Δ V/(f_π m_π f_a).
Correspondingly, in a frame boosted with Lorentz factor γ, the energy per unit time is increased by a factor of γ for the energy, but decreased by a factor of order γ from the time dilation,
so the transformation between the two frames behaves as (γ)^0.
A little more precisely, we might consider radiation in a time interval Δ t in the instantaneous rest frame of the wall. Δ t should be such that the wall is non-relativistic in that frame. For the element
of area A, there will be movement of order Δ t × v in this time period. The corresponding time elapsed in our observers frame is:
Δ t^' = γ (Δ t +v Δ z)
but the second term is, by assumption, much smaller than the first and can be neglected.
We want to compare the energy radiated by the wall per unit time with the energy the wall acquires per unit time from Δ V. This is
Δ V A,
so the condition that radiation is comparable to the energy increase of the wall per unit time due to Δ V is
Δ V/(f_π^2 m_π^2) > 1
which is never satisfied. In other words, the energy radiated in axions as the walls collapse is negligible. Similar considerations lead to suppression of electromagnetic and gravitational
radiation. So the walls are extremely relativistic when they finally shrink to microscopic size. At this point, one expects that the collapse results in production of extremely relativistic axions,
with γ factors comparable to those we discussed above, and very relativistic hadrons. The hadrons quickly thermalize with the background hadrons. The number of axions produced would be small, consistent with the fact that the final domains are microscopic in size.
As we explain below, very little of the axion energy would be degraded in collisions with hadrons. In any case, provided that the domain walls don't
dominate the energy density of the universe at the time of their collapse, their cosmological effects would be minor.
§.§ The Final Stage of Domain Wall Collapse
Because of the enormous γ factors of the axion produced in the decay of a domain, most of these axions stream through the universe. Only rarely does one interact with
quarks, gluons, or other axions. For s-channel processes, the mean time between collisions is long. Even assuming an order one coupling of these high energy axions to nucleons,
δ L = a N̅γ_5 N,
the mean free time for axion collisions with nucleons is
τ ∼ 10^10 γ m_a m_N T^-3 ∼ 10^33 (γ/10^20) (MeV/T)^3 (m_a/10^-14)
in GeV units, and we have taken n_B/n_γ ≈ 10^-10. This is an enormous time, comparable to even the current age of the universe. As we have noted, the γ
factor is actually likely to be several orders of magnitude larger than 10^25, giving further suppression.
For t-channel exchange, due to the enormous γ factor, scattering is appreciable only at extremely small angles. As a result, the mean scattering time is, again, extremely large.
So most of these axions are still around today, and are highly energetic. Their numbers, however, are not large, and their contribution to the energy budget of the present universe is smaller
than the photon contribution (assuming that the domain wall contribution was always a small fraction of the total energy density at the time of collapse). Interactions with ordinary matter are very rare.
§ ANTHROPIC CONSIDERATIONS
As we have stressed, there is one troubling feature of the Peccei-Quinn symmetry for these relatively low f_a axions in the suppression required for a Peccei-Quinn symmetry
of sufficient quality to solve the strong CP problem, and another in the requirement of sufficient tilt solution to solve the domain wall problem. One might, indeed, argue
that the requirements to obtain a PQ symmetry of sufficient quality to solve the strong CP problem are implausible, For example, if the result of a discrete symmetry, the symmetry must be very large<cit.>.
On the other hand, we have seen that to avoid a domain-wall dominated universe, the tilt must be just barely smaller than that required for the PQ symmetry. One might be inclined to discard axions with
these relatively low decay constants, but
it is also tempting to ask whether such bizarre constraints might be satisfied as a result of anthropic considerations. In this section, we will examine this possibility. We do not attempt
to understand precisely how, within some sort of landscape, this might be realized in detail. Rather we ask the simpler question: might the existence of observers be ruled out if we made a change
in a single parameter, holding others fixed.
If the tilt is controlled by a discrete symmetry, we might imagine that the symmetry violating terms in the axion potential have the form A M_p^{-n} Φ^{n+4}, where Φ = f_a e^{i a/f_a}, A is now a complex constant (not the area) and
Φ → e^{2π i/N} Φ under the Z_N symmetry, so
δ V = |A| f_a^4 ( f_a/M_p )^{N-4} cos( Na/f_a + α).
In order that θ be small enough, assuming A is of order one, N must be quite large. If f_a = 10^12 GeV, we require N ≥ 13, for example. At the same time, our discussion
of domain wall evolution implies that N can't be larger than 13.
The lower limit on the tilt (upper limit on N) is a relatively easy one to explain anthropically. Were domain walls to dominate the energy density of the universe before their collapse, the universe would not evolve to a situation with structures of the sort
we see in nature (and which support observers).
The upper limit might be understood if we assume that a dark matter density close to that observed is necessary for (or perhaps optimizes the number of) observers. Write the tilt contribution to the axion
potential as
δ V(a) = ϵ 10^-10 m_π^2 f_π^2 cos(a/f_a + α).
We require ϵ < 1. Suppose ϵ was much larger, corresponding to N=11, for example, and ϵ∼ 10^4. Then the axion mass receives a larger contribution
from δ V than from QCD, and it starts to oscillate when the temperature is of order 10^5 GeV. As a
result, there are far too few axions to constitute the dark matter. Axions only account for about 10^-20 of the energy density initially. At temperatures of order 1 eV, they are only 10^-6 of
the total energy of the universe.
The case of n=12 is different. The contribution, at zero temperature, to the axion mass from δ V is smaller than that from QCD, but we have to investigate the behavior of the
system as a function of temperature. The axion mass as a function of temperature behaves as<cit.>:
m_a(T) ≈ m_a (Λ_QCD/T)^3.7.
Without δ V, the axion begins to oscillate when
3 H = m_a(T).
This corresponds to a temperature of order 10 GeV. But m_a(T) < δ m_a down to a smaller temperature, of order 0.2 GeV if we take
the formula <ref> seriously at these temperatures. So there is significant overproduction of dark matter.
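The onset-of-oscillation condition 3H = m_a(T) is easy to evaluate numerically; the sketch below is purely illustrative, with the values of g_*, M_p, m_a, and Λ_QCD chosen only for concreteness and the formula <ref> extrapolated as above, so the crossing temperature it prints should not be read as precise.

    import numpy as np

    # Assumed inputs, in GeV; illustrative only.
    M_PLANCK = 1.2e19    # Planck mass
    G_STAR   = 60.0      # effective relativistic degrees of freedom near the QCD scale
    M_A0     = 1e-14     # zero-temperature axion mass (roughly 10^-5 eV for f_a ~ 10^12 GeV)
    LAMBDA   = 0.2       # Lambda_QCD

    def hubble(T):
        # Radiation-dominated Hubble rate, H ~ 1.66 sqrt(g*) T^2 / M_p.
        return 1.66 * np.sqrt(G_STAR) * T**2 / M_PLANCK

    def axion_mass(T):
        # m_a(T) ~ m_a (Lambda_QCD/T)^3.7 above the QCD scale, saturating at m_a below it.
        return M_A0 * min(1.0, (LAMBDA / T) ** 3.7)

    # Scan downward in temperature for the first T at which 3H(T) <= m_a(T).
    for T in np.logspace(1, -2, 2000):  # from 10 GeV down to 10 MeV
        if 3.0 * hubble(T) <= axion_mass(T):
            print(f"oscillation starts near T ~ {T:.2f} GeV (with these inputs)")
            break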
All of this is meant to establish that there is a plausible anthropic rationale for the size of δ V being such that axions constitute the dark matter, and the domain wall system collapses “just in time".
We stress again that we don't have a detailed cosmological picture for how this might be implemented.
§ CONCLUSIONS
In this paper we have adopted
the conventional view of axion cosmology, that the PQ transition occurred after inflation and that the post inflationary universe was once at a temperature well above
the scales of QCD, and focused on the resulting question of domain walls. We have recalled that the problem is generic, and admits only a small number of solutions. Among these, we
have considered the effects of small, explicit breaking of the symmetry. We have recalled the well-known issue that the constraints of obtaining small enough θ and domain wall
annihilation before domain wall dominance restrict the symmetry breaking to a narrow range. We have pointed out that it would be almost absurd for the size of this breaking to be a consequence of discrete
symmetries. We have noted that, as for other seemingly absurd phenomena actually observed in nature, one might contemplate an anthropic solution. This has the virtue that it could explain the remarkable
coincidences required.
But most of our attention has been devoted to the fate of the universe in such a picture. We have studied the question of where the energy in the domain walls goes. We have noted that
gravitational radiation leads to a limiting Lorentz factor, γ, which in turn implies that the vast majority of the domain wall energy is converted to gravitational waves. This is in contrast
to the possibility that much of the energy ends up in non-relativistic, dark matter axions<cit.>. The resulting constraints are mild, in the sense that they are not much stronger than the requirement
that the domain wall energy not dominate the energy density of the universe before annihilation.
§ ACKNOWLEDGMENTS
We thank Patrick Draper, Guido Festuccia and Pierre Sikivie for conversations and critical comments.
This work was supported in part by U.S. Department of Energy grant No. DE-FG02-04ER41286.
|
http://arxiv.org/abs/2307.05416v1 | 20230711163258 | Optimizing Scientific Data Transfer on Globus with Error-bounded Lossy Compression | [
"Yuanjian Liu",
"Sheng Di",
"Kyle Chard",
"Ian Foster",
"Franck Cappello"
] | cs.DC | [
"cs.DC",
"cs.DB"
] |
Optimizing Scientific Data Transfer on Globus with Error-bounded Lossy Compression
Yuanjian Liu1, Sheng Di2, Kyle Chard12, Ian Foster12, Franck Cappello2
1
University of Chicago, Chicago, IL, USA
2
Argonne National Laboratory, Lemont, IL, USA
[email protected], [email protected], [email protected], [email protected],
[email protected]
Corresponding author: Sheng Di, Mathematics and Computer Science Division, Argonne National Laboratory, 9700 Cass Avenue, Lemont, IL 60439, USA
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================
The increasing volume and velocity of science data
necessitate the frequent movement of enormous data volumes
as part of routine research activities. As a result,
limited wide-area bandwidth often leads to bottlenecks
in research progress. However, in many cases, consuming applications (e.g., for analysis, visualization, and machine learning) can achieve acceptable performance on reduced-precision data, and thus researchers may wish to compromise on data precision to reduce transfer and storage costs.
Error-bounded lossy compression presents a promising approach as it can
significantly reduce data volumes while preserving data integrity based on user-specified error bounds. In this paper, we propose a novel data transfer framework called Ocelot that integrates error-bounded lossy compression into the Globus data transfer infrastructure.
We note four key contributions: (1) Ocelot is the first integration of lossy compression in Globus to significantly improve scientific data transfer performance over wide area network (WAN).
(2) We propose an effective machine-learning based lossy compression quality estimation model that can predict the quality of error-bounded lossy compressors, which is fundamental to ensure that transferred data are acceptable to users.
(3) We develop optimized strategies to reduce the compression time overhead, counter the compute-node waiting time, and improve transfer speed for compressed files.
(4) We perform evaluations using many real-world scientific applications across different domains and distributed Globus endpoints. Our experiments show that Ocelot can improve dataset transfer performance substantially, and the quality of lossy compression (time, ratio and data distortion) can be predicted accurately for the purpose of quality assurance.
Lossy Compression, Performance, Data Transfer, Globus, WAN
§ INTRODUCTION
Large amounts of data are being produced by high performance computing (HPC) simulations and advanced instruments such as the Advanced Photon Source (APS) <cit.> and LCLS-II <cit.>.
These data typically need to be shared for analysis, storage, publication, and archival, and often across multiple research institutions.
However, transferring data over a wide area network (WAN) can be time-consuming, significantly delaying research progress.
Tools like Globus <cit.> have been widely adopted to improve data transfer performance; however, while transfer performance can be increased by deploying more data transfer nodes or creating parallel data streams, limited network bandwidth constrains transfer speeds.
Many scientific data are collections of floating point numbers, and often
scientific applications do not require the level of precision encoded in those data. Thus, it is possible to reduce the data size by compromising the precision to a certain level. Error-bounded lossy compression exploits this fact and offers the potential to significantly reduce data sizes.
However, optimal tuning of the compression process (i.e., for performance and quality) remains an open problem, and thus such methods are rarely used
in data transfer solutions.
Although error-bounded lossy compression can substantially reduce the volume of data with user-tolerable data distortion, existing studies focus on conventional use cases such as reducing storage space <cit.>, lowering I/O cost <cit.>, or reducing memory capacity requirements <cit.>. Li et al. <cit.> studied how to make the error-bounded lossy compressor SZ resilient to soft errors during data transfers and evaluated their approach by using a numerical analysis/simulator, but they did not systematically model and optimize data transfer performance with respect to lossy compression techniques.
Modeling and optimizing the error-bounded lossy compression based data transfer over WAN is challenging in practice. On one hand, adding compression/decompression into the transfer services introduces new complexities (compute nodes will be involved, compressors need to be configured, overall transfer performance will be influenced by the compression speed, etc.).
On the other hand, it is critical for users to understand the quality of compressed data, so that they can precisely control the data distortion and/or meet expected compression ratios for their use cases. However, scientific applications are distinct from each other and lossy compressors exhibit different characteristics/performance because of their distinct designs. It is non-trivial to predict the compression ratios and quality accurately.
In this paper, we propose an optimized data transfer model, namely Ocelot, by leveraging error-bounded lossy compression techniques in data transfers.
Our contributions are:
* We develop an efficient lossy compression quality prediction model, which is fundamental
to accurately predict the data distortion of lossy reconstructed data and compression ratio/speed.
* We propose a novel approach for efficient wide-area data transfer by combining the error-bounded lossy compression techniques, Globus <cit.>, and FuncX, a federated-Function-As-a-Service (FaaS) platform <cit.>. We also optimize the performance by developing a series of strategies to address I/O contention, compute-node waiting, and transfer slow-down for many small files.
* We evaluate Ocelot using several Globus endpoints and real-world scientific applications across different domains. Experiments show that applying parallel compression can significantly improve data transfer performance over WAN (reaching 11.2× speed-up with negligible data distortion for users).
The rest of the paper is organized as follows. In Section <ref>, we discuss related work. In Section <ref>, we present the research background. In Section <ref>, we propose the online data transfer framework Ocelot, which integrates error-bounded lossy compression technology with Globus.
In Section <ref>, we describe three capabilities of Ocelot. In Sections <ref> and <ref>, we describe
how we conduct lossy compression quality prediction and optimize data transfer performance with lossy compression techniques, respectively. In Section <ref>, we evaluate Ocelot on real-world scientific datasets and the state-of-the-art lossy compressor SZ with different compression pipelines.
Finally, we conclude the paper with a discussion of future work in Section <ref>.
§ RELATED WORK
In this section, we discuss the related works in two facets: the modern techniques in wide area data transfer and common use-cases of error-bounded lossy compression.
Many systems have been developed to improve the performance of large wide-area data transfers. One common method adopted by many commercial data transfer tools, such as FileCatalyst <cit.> and IBM Aspera <cit.>, is using User Datagram Protocol (UDP) or multiple Transmission Control Protocol (TCP) streams.
BitTorrent is a popular Peer-to-Peer (P2P) data transfer software developed at the application level over TCP/IP, which can be used to transfer big data files. BitTorrent adopts a tracker/seed mechanism to allow each data downloader to be a data uploader in a community, such that the more the users participate, the higher the data transfer speed. The BitTorrent technique, however, is unsuitable for big data transfer in the science community because a stable and secure science data-sharing service is highly required.
Globus is a research data management platform that enables high-performance, secure, and reliable third-party data transfers.
Globus builds upon the GridFTP protocol for data movement and adopts several optimization techniques such as parallel streams <cit.>, which can significantly improve data transfer performance. Transferring big data files with Globus, however, may still suffer from low performance in practice, as performance is related to multiple sophisticated factors such as the settings on Globus connect server (GCS) endpoints (concurrency, pipelining, striping, etc.) <cit.>, low quality network paths, and underprovisioned data transfer nodes (DTNs). In particular, recent studies <cit.> show that transferring big data files between Argonne Leadership Computing Facility (ALCF) and National Energy Research Scientific Computing Center (NERSC) could be slow (only hundreds of MB/s) at an inefficient concurrency setting.
Error-bounded lossy compression has been effective in significantly reducing data volumes for many use cases. However, it has rarely been used in the wide area data transfer case. Common use cases for error-bounded lossy compression include reducing storage footprint <cit.>, reducing memory capacity requirements <cit.>, mitigating I/O costs in supercomputers <cit.>, and avoiding recomputation of data <cit.>. Zhao et al. <cit.>, for instance, developed an efficient lossy compressor for molecular dynamics (MD) simulation data based on the spatio-temporal patterns of MD datasets, which aims to reduce the storage space as much as possible. Wu et al. <cit.> explored the best-qualified error-bounded lossy compression method for Intel-QS <cit.>—a full-state quantum circuit simulator developed by Intel, in order to significantly lower the required memory capacity for large-scale quantum computing simulations. Li et al. <cit.> proposed a resilient error-bounded lossy compression method, which aims to protect the data compression against potential errors such as SDCs. However, their work does not involve data transfer performance optimization, which is instead addressed in our work.
§ RESEARCH BACKGROUND
We briefly describe the critical technical components on which we build.
§.§ Error-bounded Lossy Compression
Error-bounded lossy compression has been broadly used to significantly reduce the volumes of scientific datasets produced by large-scale HPC applications or advanced instruments (with a compression ratio of several hundreds or thousands <cit.>), while effectively controlling data distortion based on the user-specified error bound. In comparison with lossy compression, lossless compression suffers from low compression ratios (≤2 in most cases <cit.>) since lossless compressors generally depend on the exactly repeated byte stream patterns while scientific datasets are often composed of floating-point data values often with diverse ending mantissa bits.
There have been many error-bounded lossy compressors developed. In general, there are two models for error-bounded lossy compression: the transform-based model and the prediction-based model. The former performs the (near)orthogonal transform to decorrelate the raw data to another coefficient data (such as by wavelet transform) and then reduce the coefficient data by specific encoders such as embedded encoding <cit.>. The typical examples are ZFP <cit.> and SSEM <cit.>. The latter uses a data predictor and linear-scale quantization to decorrelate the datasets and then uses a variable-length encoding (such as Huffman encoding <cit.>) and dictionary encoding (such as LZ77 <cit.>) to obtain a fairly high compression ratio. Examples include SZ <cit.> and MGARDx <cit.>.
We adopt SZ3 <cit.> in our work for two reasons: its modular structure, which allows us to construct many different compression pipelines (i.e., different compressors) for different datasets and use cases, and the high performance <cit.> of its default SZ-interp compression algorithm, which exhibits the highest compression ratio and quality in many cases compared with the other state-of-the-art lossy compressors including ZFP, SZ2, and MGARDx.
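For orientation, a transfer client can drive SZ3 roughly as in the sketch below; this is illustrative only, since the actual command-line interface and flags of the installed SZ3 binary vary by version and must be supplied by the user (here as a command template), and the configuration fields are placeholders.

    import subprocess
    from dataclasses import dataclass

    @dataclass
    class CompressorConfig:
        # One candidate setting, e.g., as selected by the quality predictor.
        error_bound_mode: str = "ABS"   # absolute error bound
        error_bound: float = 1e-3

    def compress_file(cfg: CompressorConfig, infile: str, outfile: str,
                      sz3_cmd_template: str) -> None:
        # sz3_cmd_template maps the configuration onto the installed SZ3 binary's
        # actual flags (version dependent), e.g. a format string containing
        # {infile}, {outfile}, {mode}, and {bound} placeholders.
        cmd = sz3_cmd_template.format(infile=infile, outfile=outfile,
                                      mode=cfg.error_bound_mode,
                                      bound=cfg.error_bound)
        subprocess.run(cmd.split(), check=True)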
§.§ Globus Data Transfer Infrastructure
Globus
is a research data management platform that is used to transfer, synchronize, and share large volumes of data.
Globus was launched in 2010, and has since managed the reliable
movement of almost two exabytes of data across 40,000 endpoints
distributed around the world.
Globus endpoints are widely deployed at universities, research laboratories, and on cloud platforms (such as Amazon S3 and Google drive).
Globus adopts the GridFTP protocol <cit.> to provide high-performance, secure, and reliable data transfer over WAN. There are many optimization strategies in GridFTP for improving data transfer performance, such as pipelining, parallelism, and concurrency. GridFTP pipelining avoids blocking/waiting on transfer-commands, which can transfer many small files very efficiently. Parallelism allows different portions of the same file to be sent by multiple channels in parallel. Concurrency supports transferring of different data files through multiple channels in parallel.
§.§ Federated Function as A Service (FuncX)
FuncX <cit.> is a distributed and scalable function execution platform. FuncX differs from traditional cloud-hosted FaaS platforms in that it combines a centralized cloud-hosted service with a collection of user-deployed and managed endpoints. Users can deploy their own endpoints on their own resources via a small Python endpoint software. They may configure that endpoint to provision resources dynamically from various backend resource providers (e.g., batch schedulers, Kubernetes clusters, cloud instances).
Users may register and execute Python functions in a similar way to cloud-hosted FaaS, by providing the function body and input arguments. However, unlike centralized FaaS, they must also select an endpoint on which to execute that function. The FuncX service relies on an OAuth-based identity and access management platform, Globus Auth <cit.>, to securely execute functions. FuncX leverages containers to package function code and resolve dependencies on endpoints, and also enables multiple optimization strategies to obtain the best performance in the remote function calls, such as container warming (avoiding/reducing the container instantiation cost), executor/user batching (amortizing costs across many function requests), and prefetching (advertising the anticipated capacity to interleave network communication with computation).
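For illustration, the remote-execution pattern used in this setting can be sketched with the funcX SDK as below; the calls (FuncXClient, register_function, run, get_result) follow the pre-Globus-Compute funcx package and may differ in newer releases, and the endpoint UUID, data path, and launch_compression.sh driver are placeholders.

    import time
    from funcx import FuncXClient   # funcX SDK; interface may vary by version

    def compress_directory(data_dir: str, error_bound: float) -> int:
        # This function body executes on the remote funcX endpoint, next to the data.
        # launch_compression.sh is a placeholder for a site-specific driver, e.g. an
        # MPI job that compresses many files in parallel.
        import subprocess
        result = subprocess.run(
            ["bash", "launch_compression.sh", data_dir, str(error_bound)],
            capture_output=True, text=True,
        )
        return result.returncode

    fxc = FuncXClient()
    func_id = fxc.register_function(compress_directory)
    task_id = fxc.run("/project/app/output", 1e-3,
                      endpoint_id="SOURCE-FUNCX-ENDPOINT-UUID",  # placeholder
                      function_id=func_id)

    while True:  # poll until the remote task completes
        try:
            print("remote compression exit code:", fxc.get_result(task_id))
            break
        except Exception:  # result not yet ready; exact behavior varies by SDK version
            time.sleep(5)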
§ OCELOT: ONLINE DATA TRANSFER WITH ERROR-BOUNDED LOSSY COMPRESSION
<ref> presents a high-level overview of Ocelot. As shown in the figure, Ocelot provides an ML-based quality prediction model for users to predict the lossy compression quality (such as data distortion and compression ratio), thus guaranteeing the integrity/validity of the lossy reconstructed data (step 1). The data then progress through five steps (2-6) during the data transfer procedure from one endpoint to another over WAN. The key difference between Ocelot and the traditional data transfer method is that we integrate an error-bounded lossy compression step, which is expected to significantly reduce the data volume before transferring the data. At the target endpoint, upon receipt of the compressed data, it is decompressed and then written to the file system. The detailed compression technologies have been discussed in Section <ref>.
Ocelot can be used remotely without needing to
manually log in to the source or destination machine to perform the compression/decompression task, because the executors have been deployed on those machines beforehand.
We present our architecture in <ref> (the colored boxes indicate the new modules we developed for Ocelot). In our design, the user connects to the Ocelot framework through a user interface (e.g., a command line or GUI). Upon receiving the user's data transfer task, Ocelot starts the quality predictor via FuncX to
obtain a suitable compressor configuration by testing a few settings very quickly with subsampling methods. FuncX allows these tasks to be executed on the remote resource on which the data reside. Ocelot then uses FuncX to initiate a compression task on the remote endpoint.
The compression is conducted by an MPI program that loads different files from the file systems and compresses them in parallel. Ocelot then
starts the transfer via Globus. The transfer moves the compressed files to the target machine once the files are ready. Some optimization is needed here because the compression tasks cannot always be scheduled immediately; we defer the detailed discussion to Section <ref>.
We design Ocelot to be flexible, enabling users to bypass the quality predictor by manually providing a compressor configuration for certain cases when they know what error bound and compressor to use. The quality predictor module is driven by our designed machine-learning model, which will be detailed in Section <ref>.
§ CRITICAL CAPABILITIES OFFERED BY OCELOT
Before diving into the technical details, we introduce three key capabilities of our framework from a user's perspective.
* Selecting best-qualified lossy compression configuration based on our proposed quality predictor.
Ocelot is able to select the most suitable lossy compression configuration according to users' requirements. Based on the estimates generated by our quality predictor, the user can select the “best” compression solution for their data. Specifically, users can view the data distortion, compression ratio, and compression time for different lossy compression pipelines or configurations, thus guiding them in selecting/optimizing the best-qualified setting.
* Reducing transfer time with parallel (de)compression.
After applying the prediction model to configure compression automatically, users can utilize Ocelot to reduce the file transfer time.
Users need only to specify data paths and start the transfer.
The compression/decompression will be performed automatically.
* Remote orchestrating (de)compression and transfer.
We incorporate FuncX and Globus Transfer API into our framework, allowing users to control the compression and transfer between endpoints on any authorized machines. Users do not have to explicitly connect to remote resources (e.g., via ssh) to submit batch jobs to do compression. Instead, they just need to run our Ocelot software on their personal computer and control the compression, transfer, and decompression remotely.
Moreover, Ocelot allows users to collect information about compression and transfer. The analytical data is stored on the user's personal computer, and can be used to further analyze the performance with graphical tools.
§ COMPRESSION QUALITY PREDICTION
In this section, we propose a prediction model to estimate the lossy compression ratio, compression speed, and peak signal-to-noise ratio (PSNR) <cit.>.
In general,
it is impossible for users to predict compression quality (such as compression ratio and data distortion level) for a particular error-bounded lossy compressor without performing the compression on the given dataset. This is because the effect of data prediction/transform and coding in the compressor varies with diverse data features.
With our prediction model, users can quickly test multiple compression settings and choose the one that best matches their use case.
We train a machine learning (ML) model on masses of sample datasets, with the aim to build a relationship between the compression-related features and the compression quality.
The model can then be used to estimate compression quality accurately based on the features extracted from the given datasets at runtime.
We derive many features as input to our model,
as illustrated in <ref>. Identifying a set of useful features is challenging, because (1) the extraction of each feature should have low computation cost, and (2) the features should form an accurate indicator of the compression quality. We consider features in one of three categories: (1) config-level features, (2) data-based features, and (3) compressor-level features.
Config-based features are configuration settings (including error bound values and the compression pipeline) specified by users. Different error bounds can yield largely different compression quality (e.g., compression ratios and compression speed). Compression quality also depends on the specific compressor, each of which has a distinct design. Prediction-based compressors <cit.>, for example, may adopt various predictors which can exhibit different performance.
We enable our model to recognize the characteristics of compressors by treating the compressor-type feature as a discrete classification variable and feeding it with profiling data.
Data-based features describe the characteristics of the datasets, which are also a key factor in determining compressibility. As shown in <ref>, even for the same application, different datasets can have very different properties such as min, max, and value range. In addition, we also use the byte-level information entropy as one feature, because it reflects the “chaos level” of a dataset. The entropy is defined as
H(X) = - ∑_{x ∈ S} p(x) log p(x) = E[-log p(X)]
where S is the set of byte values (0-255) and p denotes the probability/frequency of an element in S.
In general, the higher entropy a dataset exhibits, the more difficult it is to compress that dataset. As verified in <ref> (a) and (b), the entropy value shows a positive correlation with the compression time, especially when the error bound is relatively low. It is worth noting that when the error bound is relatively high, the entropy loses its effect (as shown in <ref> (c)), because a large error bound diminishes the data variation. Moreover, we use the average Lorenzo error (i.e., the difference between the true data value and the Lorenzo-predicted value <cit.>) as a feature to capture the “easiness of prediction” for a dataset. If the average Lorenzo error is high, the prediction stage tends to be imprecise, leading to a low compression ratio.
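As a minimal sketch, the byte-level entropy and an average Lorenzo prediction error could be computed as follows; for brevity a first-order 1D Lorenzo predictor is used here, whereas the actual compressor may use higher-dimensional variants.

import numpy as np

def byte_entropy(data: np.ndarray) -> float:
    # Entropy over the 256 possible byte values of the raw buffer (equation above).
    counts = np.bincount(np.frombuffer(data.tobytes(), dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def avg_lorenzo_error_1d(data: np.ndarray) -> float:
    # First-order Lorenzo prediction in 1D: predict x[i] from x[i-1].
    x = data.ravel().astype(np.float64)
    return float(np.mean(np.abs(x[1:] - x[:-1])))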
Compressor-based features are the properties of the intermediate data used in the course of lossy compression, which generally have the highest predictive power for compression quality. Specifically, we focus on the quantization bins, as shown in <ref>. Since the quantization bins are encoded by the subsequent lossless encoders, their characteristics closely correlate with the final compression quality. In order to control the execution overhead, the quantization bins are computed based on sampled data points. As demonstrated in <ref>, we develop four compressor-based features: p_0, P_0, the quantization entropy, and a run-length estimator. (1) p_0 denotes the percentage of the 0-value bins over all quantization bins.
In general, a large p_0 tends to yield a high compression ratio and compression speed, because a large majority of predictions are accurate in this situation. (2) P_0 denotes the fraction of the full Huffman-encoded data size occupied by the encoded `0' symbol. (3) Quantization entropy is the entropy of the quantization bins. If the prediction is accurate, the quantization bin values will mostly be near 0, and the quantization entropy will be low. (4) The run-length estimator (denoted R_rle) is derived from P_0 and p_0 by the following equation:
R_rle = 1/((1-p_0)P_0 + (1-P_0)).
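A sketch of deriving the four compressor-based features from sampled quantization bins is shown below; here the Huffman code lengths are approximated by the entropy-optimal lengths -log2(p), which is an assumption for illustration rather than the exact encoder output.

import numpy as np

def quantization_features(qbins: np.ndarray) -> dict:
    # qbins: quantization bin indices computed on sampled data points (0 = exact prediction).
    vals, counts = np.unique(qbins, return_counts=True)
    p = counts / counts.sum()
    p0 = float(p[vals == 0].sum())                          # fraction of zero bins
    code_len = -np.log2(p)                                  # approximate Huffman code lengths
    total_bits = float((p * code_len).sum()) + 1e-12
    P0 = float((p[vals == 0] * code_len[vals == 0]).sum()) / total_bits
    quant_entropy = float(-(p * np.log2(p)).sum())
    r_rle = 1.0 / ((1.0 - p0) * P0 + (1.0 - P0))
    return {"p0": p0, "P0": P0, "quant_entropy": quant_entropy, "R_rle": r_rle}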
Although p_0 and P_0 are also used in related work <cit.>, our solution is much more accurate at compression quality estimation in general cases.
The estimation of the compression ratio in <cit.> depends on the following formula: ĈR̂=1/(C_1(1-p_0)P_0 + (1-P_0)), where C_1 is an ad-hoc tuning parameter which varies across applications. As shown in <ref> (c), almost all data points are located on the line y=x (red line in the figure), which means the estimated compression ratio ĈR̂ under that formula can be very accurate in this case. This is because the formula happens to form a linear function of the compression ratio for the Nyx <cit.> application.
However, that formula is sensitive to the tuning of the C_1 parameter, which may cause unexpectedly large compression quality estimation errors in other applications. For instance, the estimator's value does not form a linear relationship with the compression ratio for the Miranda <cit.> application (as shown in <ref> (a) and (b)), which in turn leads to poor compression quality estimation (see <ref> (c)).
In comparison, our R_rle formula does not depend on C_1. In fact, R_rle serves as a feature and we feed it into the ML model along with other features (including p_0 and P_0), so the model can automatically fine-tune the coefficients applied to those features and thus maintain accurate estimation in most cases (to be shown later).
Our compressor-based features can also be used to predict the reconstructed data distortion. This is because these features are also closely correlated to the data distortion metrics such as PSNR, as verified in <ref> and <ref>.
Based on the observations above, we use a decision tree model to perform the compression quality estimation. The evaluation result will be demonstrated in Section <ref>.
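A minimal sketch of the resulting estimator with scikit-learn is given below; the feature file names and the max_depth value are assumptions for illustration.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# X: one row per (file, error bound) pair holding the config-, data-, and
# compressor-level features; y: the measured compression ratio (or time, or PSNR).
X_train = np.load("features_train.npy")      # placeholder file names
y_train = np.load("ratio_train.npy")

model = DecisionTreeRegressor(max_depth=12)  # assumed hyperparameter
model.fit(X_train, y_train)

X_new = np.load("features_test.npy")
predicted_ratio = model.predict(X_new)       # estimated quality before compressing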
§ OPTIMIZATION OF DATA TRANSFER WITH ERROR-BOUNDED LOSSY COMPRESSION
The compression performance prediction model described above provides a fast and automatic way to determine appropriate compressor settings. However, compression remains a computationally expensive process, especially with large data. In Ocelot we
utilize multiple cores/nodes to compress files in parallel.
Nonetheless, it is worth noting that there are two issues that may impede the “compress and transfer” performance.
First, for large datasets the compression task may exceed the capacity available on DTNs or login nodes, and thus require provisioning of compute nodes via a batch scheduler. Such requests may not be scheduled immediately.
Second, the number and size of files significantly influence the transfer speed because (1) each file transfer has an inevitable data handling cost in addition to data transfer time, and many small files may significantly lower the overall transfer throughput; (2) transfers with too few files cannot utilize the available concurrent transfer threads.
We describe our transfer performance optimization strategies in this section. To address the first issue, we need a strategy to transfer files when compute nodes are not immediately available. For the second issue, we need an efficient file grouping method to counter issues with many compressed small files.
§.§ Parallel Compression/Decompression
Our fundamental approach to reducing the transfer time is using compression to reduce the file sizes. However, each compression incurs a certain time cost; if we compress thousands of files sequentially, the total compression time may surpass the transfer time.
We therefore utilize parallel computing to significantly accelerate the compression process and investigate the performance of different levels of parallelization. As shown in <ref> (left), increasing the number of CPU cores significantly reduces the time needed to compress these datasets because they consist of many independent files: we let each core handle the compression of a subset of the files in parallel. The compression time cannot be reduced further once the number of cores reaches the number of files to be compressed, because the parallelism is saturated.
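A simplified sketch of this file-level parallelism is shown below; the compressor command line is hypothetical, and the real implementation is an MPI program across nodes rather than a single-node multiprocessing pool.

from multiprocessing import Pool
from pathlib import Path
import subprocess

def compress_one(path: str) -> str:
    # Hypothetical compressor invocation; the actual CLI and flags depend on the
    # installed compressor (e.g., SZ3) and the chosen error bound.
    out = path + ".sz"
    subprocess.run(["compressor", "--abs", "1e-3", path, out], check=True)
    return out

if __name__ == "__main__":
    files = sorted(str(p) for p in Path("/project/data").glob("*.f32"))
    with Pool(processes=32) as pool:         # one worker per core (assumed 32 cores)
        compressed = pool.map(compress_one, files)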
Our experiments show that decompression performance does not increase monotonically with the number of CPU cores. For instance, decompressing the CESM <cit.> dataset on Cori takes 68.7s on four nodes but more than 5 minutes on 16 nodes.
We conduct a more thorough test for parallel decompression on the Purdue Anvil machine, and the result is shown in <ref> (right). We see in this experiment that performance
degrades with more nodes. We believe this to be due to I/O contention
on a shared file system.
We can avoid the slow-down by tuning the number of cores used for decompression to match the parallel file system.
§.§ Optimization for Node Waiting Time
The uncertain wait time for compression and transfer tasks may degrade overall performance when compression is involved.
On most systems, sufficient compute nodes are rarely available immediately when users submit data compression tasks. If the compression tasks are stuck in the scheduler queue for too long, the overall transfer performance can be even worse than transferring without compression.
In order to counter the node waiting time, we run a sentinel program to monitor and schedule the transfer/compression task. As shown in <ref>, when a user submits a transfer request (with the lossy compression option turned on) that is not assigned compute nodes immediately, we start transferring the files in groups without compression. Once a file transfer is complete, we write its filename to a meta file so that the compression scheduler knows which files no longer need compression. When the compute nodes are assigned, the sentinel program notifies the transfer tasks to stop and lets the parallel compression scheduler take over the remaining files. In this way, the data transfer is not suspended while waiting for nodes, and the worst case is that all data are transferred without compression (when no nodes are assigned during the whole period).
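The sentinel logic can be summarized by the following sketch; nodes_ready, transfer_group, and hand_over_to_compression are hypothetical helpers standing in for the scheduler query, the Globus transfer submission, and the parallel compression scheduler, respectively.

import time

def sentinel(files, nodes_ready, transfer_group, hand_over_to_compression,
             group_size=64, poll_interval=5):
    # While no compute nodes are assigned, keep transferring groups of files
    # uncompressed; transfer_group() also records filenames in the meta file.
    remaining = list(files)
    while remaining and not nodes_ready():
        group, remaining = remaining[:group_size], remaining[group_size:]
        transfer_group(group)                 # transfer without compression
        time.sleep(poll_interval)
    if remaining:
        hand_over_to_compression(remaining)   # nodes arrived: compress the rest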
In production deployments, we anticipate that the Ocelot service could be deployed on dedicated cluster nodes (e.g., DTNs) with the approval of system administrator (similar to Globus service). In this case, wait time would be only dependent on other Ocelot transfers sharing those resources.
§.§ File Grouping for High Data Transfer Throughput
We propose a file grouping strategy to improve the data transfer throughput based on our observation that the number of files and file sizes may significantly affect the transfer speed (as shown in <ref>). Although the effective transfer speed fluctuates
due to network and I/O contention, we generally see that the effective network speed decreases as the number of files increases, when transferring the same amount of data.
This motivates us to optimize the file transfer speed by grouping small files together.
Grouping small compressed files can increase a single file's size and reduce the number of files, and thus improve transfer speed. As shown in <ref>, we compress files in parallel and group many compressed files to achieve a better size for transfer. We use MPI to communicate the compressed sizes among CPU cores to determine the file offset for each core to write. Each grouped file has a header and a body of connected compressed data. The header contains information about the number of compressed files in this group, the starting offset, and the size of each file. The metadata text file contains human-readable information about the number of files, grouping strategy, and the original filenames that are useful for decompression. The default strategy is to group files by the “world_size”, i.e., the available number of cores for compression, because they run in parallel and can usually finish the compression at a similar time. According to the profiling test and information provided by the administrator, we know in advance the preferred size for each file to achieve the fastest transfer speed. Thus, the compression scheduler can also determine the number of files to put in one group based on the file sizes.
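The offset exchange for grouped writing can be sketched with mpi4py as follows; compress_my_files and write_group_header are stand-ins for the real compression and header-writing steps, and the reserved header size is an assumed constant.

from mpi4py import MPI
import numpy as np

HEADER_BYTES = 4096                               # assumed space reserved for the group header
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def compress_my_files(rank):
    # Stand-in: in the real system this returns the compressed bytes of this rank's files.
    return bytes([rank % 256] * 1024)

def write_group_header(sizes):
    # Stand-in: the real header records the number of files, offsets, and sizes.
    np.array(sizes, dtype=np.int64).tofile("group_000.header")

payload = compress_my_files(rank)
sizes = comm.allgather(len(payload))              # exchange compressed sizes among ranks
offset = HEADER_BYTES + sum(sizes[:rank])         # starting offset for this rank's data

fh = MPI.File.Open(comm, "group_000.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at(offset, np.frombuffer(payload, dtype=np.uint8))
fh.Close()
if rank == 0:
    write_group_header(sizes)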
§ PERFORMANCE EVALUATION
In this section, we present our experimental testbed and performance evaluation results of our models with an in-depth analysis. We first evaluate the prediction precision on individual files with different settings and then evaluate the performance of transfer with parallel compression.
§.§ Experimental Settings
We collect performance data on three supercomputers: Bebop, NERSC Cori, and Purdue Anvil, with specs shown in <ref>. Each is located in a different region of the United States and has different network conditions. The evaluation of network transfer performance is based on the network connecting these supercomputers. We evaluate our prediction approaches on datasets generated by six scientific applications: QMCPACK <cit.>, RTM <cit.>, Miranda <cit.>, CESM <cit.>, Nyx <cit.>, and Hurricane Isabel <cit.>, as presented in Table <ref>.
The Miranda, CESM, and RTM applications contain many files and are well suited for our parallel compression tasks. Specifically, we use a fixed subset of these three applications in our parallel compression evaluation. Miranda contains 768 files, each of dimension 256×384×384; CESM contains 61 snapshots and in total 7182 files of two types of dimensions: 26×1800×3600 and 1800×3600; RTM contains 3601 snapshots, and each file is of dimension 449 × 449 × 235.
We focus on SZ2 <cit.>, SZ3 <cit.> and their variants because our compression quality prediction method is based on the prediction-based compression model. How to estimate compression quality for transform-based compression models is left to future work.
§.§ Estimation of Compression Time and Ratio
To estimate compression time and ratio, we apply a decision tree regressor to the 11 features described in Section <ref>, and train on 30% of the files from each of the applications in <ref> (the remaining 70% serves as testing data). We set 11 different error bounds from 1e-6 to 1e-1 to compress the data and collect the features for training.
The distribution of the difference between the predicted values and the real values is shown in <ref>. The green bounding box shows the 80% confidence interval, meaning 80% of the prediction errors fall into the green box. A thinner box means higher prediction accuracy.
<ref> indicates our prediction method performs very well, as the differences between predicted and actual values are very close to 0.
The prediction has a negligible overhead (around 1.7%) compared with the total compression time when we sample 1% of data (using 1 data point every 100 data points). As shown in <ref> (A), the sampling helps reduce the overhead time from more than 70% to less than 5%. The extracted compressor-based features p_0 and P_0 are different from the actual percentage of the zero quantization code because we run the Lorenzo prediction with the real data values instead of the reconstructed data values.
<ref> shows a high correlation between compression time and the compressor-level features. In fact, the datasets' compression times are similar to each other as long as they have the same dimensions (usually because they belong to the same application), as shown in <ref> (B). This pattern helps us estimate the overall compression time accurately in parallel compression: a rough estimate is the number of datasets divided by the number of cores, multiplied by the average compression time per dataset.
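This rough wall-time estimate amounts to the following one-line computation (a sketch; it ignores scheduling overhead and stragglers).

import math

def estimate_parallel_compression_time(n_files: int, n_cores: int,
                                        avg_time_per_file: float) -> float:
    # Files of the same dimensions take similar time, so the wall time is roughly
    # the number of sequential "waves" of files times the per-file average.
    return math.ceil(n_files / n_cores) * avg_time_per_file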
<ref> shows the prediction results for our datasets. We can observe from the values that the compression times cluster into groups according to the application to which the datasets belong. Moreover, we see that our model can always precisely predict the compression ratio and time at different error-bound settings. This is because the distribution of the quantization code changes according to the error bound, and our model captures this information effectively with p_0, P_0 and the quantization entropy.
§.§ Estimation of Data Quality via PSNR
We use 50% of gathered data for training, and perform the compression quality prediction test for the remaining 50% of data. <ref> shows the PSNR based on 10 data files randomly selected in the CESM application, where the root mean squared error of the PSNR prediction is 13.05. <ref> shows a similar prediction result for the ISABEL application, and the corresponding root mean squared error of PSNR is 14.23. Unlike the prediction of compression ratio/time which is fairly accurate, the prediction of PSNR is good in most cases yet still suffers relatively high errors occasionally on a few datasets. We plan to improve it in our future work.
We explain the key reasons why the PSNR is predictable as follows. On one hand, if the quantization bins often gather around zero (especially when a relatively large error bound is used), the predicted values are likely unable to be corrected by quantization bins, leading to relatively low PSNR. On the other hand, if the zero quantization bin takes a tiny percentage, this means the quantization bins are likely very small because of the small error bounds used. In this situation, many data points would be corrected by the quantization bins or stored as they are based on the SZ compression model, thus leading to relatively high PSNR.
We explain why the prediction of PSNR may not be as precise as the compression ratio's prediction as follows. When p_0 and the quantization entropy take intermediate values, most data points are still reconstructed from quantization bins, and it is unclear how far these reconstructed points are from the original data values. They can be up to an error bound away or quite close; therefore, it is unclear how they will contribute to the final PSNR based on the selected features.
With the settings shown in <ref>, we visualize the original and compressed data of three data files in <ref>. In our experience, when the PSNR is higher than 50, there is no visible difference between the original and compressed data. Therefore, when the predicted PSNR is high, we are confident that the compressed data will be of good quality for post-analysis.
§.§ Transfer Datasets with Parallel Compression
We now investigate the overall transfer performance when utilizing parallel compression on supercomputers.
We use three applications, CESM, RTM, and Miranda, to analyze parallel compression performance.
<ref> shows the time reduction achieved by our parallel compression applied in the data transfer. The compression time is measured on the Purdue Anvil machine with 16 nodes (each node uses 128 CPU cores, 2048 CPU cores in total), while decompression is measured on Bebop for experiment (1) and on Cori for experiment (2) with 8 nodes (each node has 32 CPU cores, 256 CPU cores in total). We see increased transfer speed when using parallel compression because (1) the total file size is much smaller and (2) the compression time is minimized by parallelization. The node waiting time on the Purdue Anvil machine is negligible in our experiments because compression tasks can be scheduled immediately. On Bebop and Cori, however, the node waiting time varies. When there were idle nodes, the waiting time was between 0s and 30s, but sometimes it took a few minutes or even hours to get an available compute node. The behavior is highly dependent on other users' tasks, and we could not derive any quantifiable patterns for the expected node waiting time. Our sentinel program ensures that the worst case is transferring the data without compression.
<ref> shows the comparison between direct transfer without compression and transfer involving our parallel compression method.
Because of the significant reduction in file sizes, we see an obvious reduction in the transfer time for all three applications. We notice that the effective transfer speed drops after compression without file grouping. This is because the files are smaller while the number of directories and the number of files stay constant. This result aligns with the pattern shown in <ref>. Because large files generally transfer faster over the network than small files, our file grouping strategy helps counter the speed reduction for the RTM and CESM applications. For the Miranda application, the grouped files do not transfer faster because, after grouping, there are only 8 files, which does not reach the number of concurrent threads available in the Globus Transfer Service. This result also shows that we should strategically group files into multiple groups instead of simply concatenating all compressed files into one large file. Moreover, making all cores write to the same file would cause I/O contention and add overhead to the file grouping process.
§ CONCLUSION AND FUTURE WORK
We developed a novel data transfer framework, Ocelot, that integrates Globus transfer with transparent error-bounded prediction-based lossy compression. We proposed a model to predict compression ratio/time and data quality for user defined compression settings with little overhead. Based on our evaluation on six real-world scientific datasets, we report the following key findings.
* Compression time/ratio and PSNR are predictable by using various categories of features. By doing 1% sampling, we can reduce the overhead required to finish the prediction to 1.7% of compression time—a small cost when compared with transfer time.
* Scientific data transfer performance can be greatly improved by applying parallel compression. We use FuncX to further control the node waiting time on supercomputers, and minimize the transfer time for given datasets. More than 90% of the transfer time can be reduced by this method.
* Network transfer speed can be significantly affected by file size and number of files. A few large files generally transfer faster than many small files. We can improve transfer speed by grouping smaller compressed files, and the transfer time can be reduced by more than 25% because of file grouping.
* While the use of more CPU cores can improve compression and decompression performance, I/O contention can become a problem in the decompression case. It is generally better to use more CPU cores for compression and fewer CPU cores for decompression.
We selected features that are simple to derive and fast to train and predict with, but there is still room to extract better features to improve prediction accuracy. In addition, our model requires seeing the dataset in advance to make predictions and has very limited generalization to other datasets. Moreover, we lack effective time/ratio prediction methods for transform-based compressors like ZFP <cit.> and TTHRESH <cit.>. In the future, we will look into other features, particularly those that do not require processing of the data, to see if we can make accurate predictions on datasets that have never appeared in the training set. We will also investigate additional compressor types and work to identify features that are suitable for transform-based compressors.
§ ACKNOWLEDGMENTS
The material was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR), under contract DE-AC02-06CH11357, and supported by the National Science Foundation under Grant OAC-2003709 and OAC-2104023. We acknowledge the computing resources provided on Bebop (operated by Laboratory Computing Resource Center at Argonne).
|
http://arxiv.org/abs/2307.04384v1 | 20230710074305 | Causal Neural Graph Collaborative Filtering | ["Xiangmeng Wang", "Qian Li", "Dianer Yu", "Wei Huang", "Guandong Xu"] | cs.IR | ["cs.IR"] |
Causal Neural Graph Collaborative Filtering
Xiangmeng Wang1,
Qian Li1 2,
Dianer Yu,
Wei Huang,
Guandong Xu2, Member, IEEE
X. Wang, D. Yu and G. Xu are with Data Science and Machine Intelligence Lab, Faculty of Engineering and Information Technology, University of Technology Sydney, New South Wales, Australia.
E-mail: {Xiangmeng.Wang, Dianer.Yu, Guandong.Xu}@uts.edu.au
Q. Li is with the School of Electrical Engineering, Computing and Mathematical
Sciences, Curtin University, Perth, Australia. E-mail: [email protected].
W. Huang is with RIKEN Center for Advanced Intelligence Project (AIP). E-mail: [email protected]
* Both authors contributed equally to this research.
†Corresponding author.
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Graph collaborative filtering (GCF) has gained considerable attention in recommendation systems by leveraging graph learning techniques to enhance collaborative filtering (CF) models. One classical approach in GCF is to learn user and item embeddings by modeling complex graph relations and utilizing these embeddings for CF models. However, the quality of the embeddings significantly impacts the recommendation performance of GCF models.
In this paper, we argue that existing graph learning methods are insufficient in generating satisfactory embeddings for CF models. This is because they aggregate neighboring node messages directly, which can result in incorrect estimations of user-item correlations. To overcome this limitation, we propose a novel approach that incorporates causal modeling to explicitly encode the causal effects of neighboring nodes on the target node. This approach enables us to identify spurious correlations and uncover the root causes of user preferences.
We introduce Causal Neural Graph Collaborative Filtering (CNGCF), the first causality-aware graph learning framework for CF. CNGCF integrates causal modeling into the graph representation learning process, explicitly coupling causal effects between node pairs into the core message-passing process of graph learning. As a result, CNGCF yields causality-aware embeddings that promote robust recommendations.
Our extensive experiments demonstrate that CNGCF provides precise recommendations that align with user preferences. Therefore, our proposed framework can address the limitations of existing GCF models and offer a more effective solution for recommendation systems.
Graph Representation Learning, Causal Inference, Structural Causal Model, Recommendation System
§ INTRODUCTION
Recommendation systems (RS) have become a core component of many web-based services, e.g., e-commerce, facilitating information filtering for users from overwhelming data.
Benefiting from the capability to learn from relational graph data, an emerging RS paradigm built on graph learning <cit.>, i.e., graph collaborative filtering (GCF), has been studied extensively in recent years <cit.>.
GCF enhances traditional collaborative filtering <cit.> by modeling complex user-item interactions in a graph as well as auxiliary side information, e.g., user and item attributes.
Thus, GCF has shown great potential in deriving knowledge (e.g., user behavior patterns) embedded in graphs.
Existing GCF can be categorized as random walk-based and graph representation learning-based methods.
The first branch of random walk-based methods <cit.> uses user and item similarities to build random walk models that produce user-item co-occurrence information for downstream CF models.
For instance, ItemRank <cit.> performs label propagation within an interaction graph and utilizes a probability model to compute inter-user and inter-item similarities.
The similarities are then defined as transition probabilities of a random walk model, which produces item importance to enhance a CF model.
However, the random walk model is conceptually isolated from the CF model, since it does not include model parameters to be optimized with the CF learning objective.
An alternative category of graph representation learning methods utilizes graph neural networks to analyze graph connections and construct representations, commonly known as embeddings.
The fundamental concept behind these methods is to acquire vectorized user and item embeddings through the application of graph neural networks, which can subsequently be utilized to optimize the collaborative filtering model.
For instance, NGCF <cit.> exploits a graph convolutional network (GCN) to propagate neighboring node messages in the interaction graph to obtain user and item embeddings.
The learned embeddings capture user collaborative behavior and are used to predict user preference scores for CF optimization.
Following this paradigm, subsequent works <cit.> also achieve favorable performance in different tasks, e.g., sequential recommendation <cit.>, by using auxiliary information such as interaction timestamp <cit.> for user sequential behavior modeling.
Despite the efforts, we argue that existing graph representation learning methods are not sufficient to yield satisfactory embeddings to enhance CF models.
The main reason is that
they learn user and item embeddings by directly aggregating neighboring node messages, while these messages are simple correlation signals of node pairs.
Take Figure <ref> (a) as a toy example.
Given an interaction graph, existing graph representation learning generally learns user embeddings by sampling and aggregating users' correlated neighbors.
Considering that user u_1 has a neighbor set {i_1, i_2, a_1, i_3, a_2}, which highly overlaps with user u_2's neighbor set {i_1, i_2, a_1, i_4, a_3}, the resulting embeddings of u_1 and u_2 would be very similar compared with those of other users.
The CF model takes the inner product between u_1's embedding and the embeddings of items from the item set as u_1's preference scores over items.
Similarly, u_2's preference scores are estimated based on u_2's embedding and item embeddings.
For item i_3, as u_1 and u_2's embeddings are similar, the preference scores of u_1 and u_2 on item i_3 would be similar too.
Assuming that user u_1 has previously interacted with item i_3, thereby indicating a significant preference score for i_3, the CF model would recommend i_3 to user u_2 based on this high preference score.
However, we may infer that user u_2 is truly interested in item attribute a_3, which belongs to the item i_4 that the user has interacted with.
Consequently, the item i_3, which is recommended based on attribute a_2, may not align with the personal preferences of user u_2 and thus fails to meet the user's expectations.
We claim that estimating the direct causal effects between node pairs in the graph could address this issue.
As illustrated in Figure <ref> (b), in order to determine the accurate preference of user u_2, we might consider each node within the set of neighbors of u_2 as the cause and the preference of u_2 as the effect.
For instance, we measure the causal effect of a_3 on u_2 by considering a_3 as the cause and u_2's preference as the effect.
By estimating the causal effect in each of the node-preference pairs, we can obtain the causal effect of
a_3 on u_2, i.e., 0.96, and the causal effect of a_1 on u_2, i.e., 0.91.
Given the condition that a causal effect above 0.9 indicates strong causation between cause and effect nodes, we thus conclude that a_3 and a_1 attract u_2's personal interest.
As such, we can use this causation signal to refine the user embedding of u_2 towards favoring items with a_3 and a_1 and finally enhance the CF model for user interest modeling.
Following the above intuition, we propose to inject causal modeling into graph representation learning to explicitly encode the crucial causal relations within node pairs into embeddings.
Causal modeling identifies the intrinsic cause-effect relations between a node and true user preferences <cit.>.
Considering that the message-passing mechanism suffers from ambiguous correlations of node relations within calculated messages <cit.>, modeling node-level causal relations could help estimate the true user preferences to obtain causality-aware messages.
For instance, we can estimate how a user's preference (i.e., effect) is affected by the item brand (i.e., cause).
As such, by coupling with causal modeling, we could enable graph learning to uncover the true interests under user interactions, i.e., the root causes that trigger users' interests to interact with the item.
We therefore propose the first causality-aware graph representation learning framework for collaborative filtering.
We focus on a special class of neural networks for graph learning, namely the graph convolutional network (GCN), to inject the causal relations between nodes into the core message-passing process in the GCN computation.
The underlying idea is to establish a connection between the structural causal model (SCM) and the message-passing mechanism of graph convolutional network (GCN) computation, which enables the messages to encapsulate the causal relationships between the adjacent nodes and the target node.
Specifically, we construct a causal graph that induces a SCM to describe the recommendation generation process of graph representation learning that incorporates causality.
Using the SCM, we formulate the recommendation process as a generative model, in which each component in the generative model describes a structural equation.
We propose a novel Causal Neural Graph Collaborative Filtering (CNGCF), which utilizes variational inference to quantify the components of the generative model. The CNGCF framework explicitly integrates causal relationships, as defined by the structural causal model (SCM), into the message-passing mechanism of graph convolutional network (GCN)-based graph learning. This integration facilitates the generation of accurate recommendations that uncover the true user preferences.
The contributions of this work are:
* We introduce a novel approach that leverages causal model-based graph representation learning for recommendation systems.
Our proposed CNGCF is the first of its kind to explore causal relationships underlying the graph with the aim of generating causality-aware graph embeddings.
* Our CNGCF utilizes a unified framework based on variational inference, which is driven by a causal graph encoder to model the graph topology of the causal graph and a collaborative filtering decoder to reconstruct user interactions.
* We validate the effectiveness of our proposed framework through extensive experimentation. Our experimental results demonstrate that our approach outperforms existing methods in achieving satisfactory recommendation performance.
§ RELATED WORK
§.§ Graph Collaborative Filtering
Collaborative filtering (CF) <cit.> dominates recommendation research due to its simplicity and effectiveness.
Early CF models including latent factor models <cit.> and neural-based CF <cit.> use descriptive features (e.g., IDs) to calculate user similarities, assuming that users with similar historical behaviors have similar future preferences.
For example, Bayesian personalized ranking (BPR) <cit.> learns
user and item latent vectors from the interaction matrix built by implicit user feedback, e.g., clicks.
The inner products between latent vectors are used as user-item similarities to predict user preference scores.
Neural collaborative filtering (NCF) <cit.> uses a Multi-layer perceptron (MLP) to learn a user behavior similarity function based on simple user/item one-hot encodings.
Graph CF (GCF) leverages advances in graph learning <cit.> to model user-item interaction graphs as well as rich auxiliary data (e.g., text, image), thus boosting the recommendation by augmenting complex semantics under user-item interactions.
Relevant approaches can be categorized as random walk-based and graph representation learning-based methods.
The first line of random walk-based methods
builds random walk models with calculated similarities among users and items from probability models.
The learned random walk models give probability distributions over items to produce auxiliary user-item co-occurrence information for CF models.
For instance, ItemRank <cit.> computes the stationary distribution of a random walk model based on estimating inter-user and inter-item similarities from a user-item interaction graph.
The random walk model provides item importance for a CF model, in which the final ranking of items is based on the calculated item importance.
BiRank <cit.> extends ItemRank to incorporate both item features and user preferences in recommendations.
BiRank computes a joint stationary distribution over users and items in the graph, where the probability of transitioning from an item node to a user node is based on user ratings on items.
These methods are inferior to optimization-based CF methods since they do not include model parameters that can be optimized together with the CF training.
Another line of graph representation learning-based methods usually uses deep neural networks (e.g., graph convolution network) to scrutinize complex graph relations and produce user and item representations for recommendation tasks.
Neural graph collaborative filtering (NGCF) <cit.> is one of the most representative graph representation learning-based CF approaches, which incorporates two graph convolutional networks (GCNs) to learn the collaborative signal of user interactions from a user-item interaction graph.
GC-MC <cit.> uses a GCN-based auto-encoder to learn latent features of users and items from an interaction graph and reconstructs the rating links for matrix completion.
Later, LightGCN <cit.> simplifies the application of the GCN in recommendations by only including neighborhood aggregation for calculating user and item representations, which further boosts the efficiency of subsequent GCF approaches, e.g., <cit.>.
Despite the great effort, existing GCF methods only capture correlation signals of user behaviors by modeling neighboring node messages.
This would result in the limited ability of GCF models to capture the true user preferences in the presence of spurious correlations.
In contrast, we abandon the modeling of spurious correlations and pursue the intrinsic causal relations between nodes, estimating the causal effect of a specific item on user preferences to uncover true user interests.
§.§ Causal Learning for Recommendation
Recent recommendation research has largely favored causality-driven methods.
A burst of relevant papers has been proposed to address critical issues in RS, such as data bias and model explainability, with causal learning.
Among them, two representative causal frameworks are largely adopted, i.e., the potential outcome framework (POF) from Rubin et al. <cit.> and the structural causal model (SCM) from Pearl et al. <cit.>.
POF-based recommendation directly estimates the causal effect of a treatment (e.g., item feature) on the outcome, i.e., recommendation results.
Inverse propensity weighting (IPW) <cit.> is widely adopted in POF-based recommendations.
Tobias et al. <cit.> adopt IPW to learn unbiased matrix factorization models, in which propensity scores are estimated by a separately learned propensity model.
Zhang et al. <cit.> integrate the learning of the propensity model and the recommendation model into a multi-task learning framework.
However, POF-based recommendation is less intuitive since it does not include graphical models to describe causal relations.
Besides, POF-based recommendation largely relies on the quality of propensity score estimation.
The estimator usually suffers from the “propensity overfitting” <cit.> due to the uncertainty of unseen variables, limiting the performance of POF-based recommendations.
SCM-based recommendation directly builds a graphical causal graph by extracting structural equations on causal relations between deterministic variables in recommendations.
It aims to use the causal graph to conduct causal reasoning for causal effect estimation.
Using the causal graph, most relevant approaches pursue mitigating the adverse effects of different data biases, e.g., exposure bias <cit.> and popularity bias <cit.>.
For instance, Wang et al. <cit.> mitigate exposure bias in the partially observed user-item interactions by regarding the bias as the confounder in the causal graph.
They propose a deconfounded model that performs Poisson factorization on substitute confounders (i.e., an exposure matrix) and partially observed user ratings.
Zheng et al. <cit.> relate the user conformity issue in recommendations with popularity bias, and use a causal graph to guide the disentangled learning of user interest embeddings.
Other approaches also achieve explainable recommendations.
Wang et al. <cit.> define a causal graph that shows how users' true intents are related to item semantics, i.e., attributes.
They propose a framework that produces disentangled semantics-aware user intent embeddings, in which each model component corresponds to a specific node in the causal graph.
The learned embeddings are able to disentangle users' true intents towards specific item semantics, which explains which item attributes are favored by users.
§ PRELIMINARIES
We provide key preliminaries, including the definition of graph-based recommendations utilizing graph convolutional networks, as well as basic concepts under causal inference.
§.§ Recommendation with Graph Convolutional Network
Let 𝒰 and ℐ denote the sets of users and items, respectively.
Graph-based recommendation formulates users and items with their features into a graph G=(𝒱, ℰ), where 𝒱 is the node set that absorbs all user and item nodes, with |𝒱| = |𝒰∪ℐ|, and ℰ is the edge set denoting the connections among nodes.
G induces an adjacency matrix 𝐀∈ [0,1]^N × N and a node feature matrix 𝐃∈ℝ^N × d, where N=|𝒱| is the number of nodes and d is the dimension of node features.
Each 𝐝_i ∈ℝ^d is the vector-valued sample of a specific node i ∈𝒱 containing descriptive information of the node, e.g., user/item IDs.
Using G, most graph-based recommendation models rely on graph representation learning <cit.> to scrutinize complex graph relations and produce dense vectors (a.k.a embeddings) for recommendation tasks, e.g., rating prediction.
Graph convolutional network (GCN) <cit.> is a typical method for graph representation learning.
It employs multiple graph convolutional layers to obtain the graph representation 𝐄 of G, where 𝐄∈ℝ^|𝒱| × d^'
absorbs user and item node representations as d^'-dimensional dense vectors.
Based on 𝐄, the model then infers the interaction probabilities of users over items to make recommendations.
In particular, a graph convolutional layer g(𝐃, 𝐀) calculates each representation 𝐞_i of a user/item node i based on its feature 𝐝_i ∈𝐃 and node neighbors 𝒩_i through the following equation [We present the widely used inductive graph representation learning setting with the GCN. An inductive setting abandons the reliance on the full graph Laplacian, in contrast to the transductive setting. For the comparison between inductive and transductive learning, refer to <cit.>.]:
𝐞_i=ϕ(𝐝_i, ⊕_j ∈𝒩_iψ(𝐝_i, 𝐝_j))
where 𝐞_i denotes the representation of a user/item node i, which is calculated by aggregating (⊕) the messages ψ from its neighbors within 𝒩_i.
𝒩_i is the neighbor set of i established by visiting the adjacency matrix 𝐀 and 𝐝_j is the node feature of the neighboring node j.
The calculation of messages ψ in Eq (<ref>) is known as message-passing <cit.>, which is the de facto of a class of GCN variants, e.g., graph attentional networks <cit.>.
The aggregation operator ⊕ may take various forms, e.g., element-wise mean <cit.>, max-pooling <cit.>.
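For concreteness, a minimal NumPy sketch of one such layer with mean aggregation is given below; W_self and W_neigh are assumed learnable weight matrices, and tanh stands in for the non-linearity ϕ.

import numpy as np

def gcn_layer(D: np.ndarray, A: np.ndarray, W_self: np.ndarray, W_neigh: np.ndarray):
    # D: node feature matrix (N x d); A: binary adjacency matrix (N x N).
    # Each neighbour message ψ(d_i, d_j) is a linear map of the neighbour features,
    # and ⊕ is taken as the element-wise mean over the neighborhood 𝒩_i.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)   # avoid division by zero
    neigh_msg = (A @ (D @ W_neigh)) / deg               # aggregated neighbour messages
    return np.tanh(D @ W_self + neigh_msg)              # e_i = ϕ(d_i, aggregated messages)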
§.§ Causal Inference
A causal graph <cit.> is a directed acyclic graph (DAG) G̃=({𝒱, Z}, ℰ) represents causal relations among endogenous and exogenous variables.
Here, 𝒱 is a set of endogenous variables of interest, e.g., user and item nodes in the graph learning, and user preference variables.
Z is a set of exogenous variables outside the model, e.g., item exposure.
ℰ is the edge set denoting causal relations among G̃.
Each directed edge (j → i) ∈ℰ represents a causal relation from j to i, where i ∈𝒱 and j is a parent node of i, i.e., j ∈ pa(i).
G̃ induces a user causal adjacency vector 𝐀̃_u and an item causal adjacency vector 𝐀̃_v, which specify the adjacent neighbors of a user node u and an item node v, respectively.
Each element 𝐀̃_u^j =1 if j ∈ pa(u), otherwise, 𝐀̃_u^j=0.
Similarly, 𝐀̃_v^j=1 if j ∈ pa(v).
A structural causal model (SCM) <cit.> ℳ = ⟨𝒱, Z, ℱ, P(Z)⟩ is the mathematical form of the causal graph G̃ that includes a collection of structural equations ℱ on endogenous variables 𝒱 and a distribution P(Z) over exogenous variables Z.
Each structural equation f_i∈ℱ for a variable i ∈𝒱 is a mapping from i's parents and connected exogenous variables to i:
i ← f_i(pa(i), Z_i), Z_i ∼ P(Z)
where pa(i) ⊆𝒱\ i is i's parents from the causal graph G̃.
Z_i ∈ Z is a set of exogenous variables connected with i.
An intervention <cit.> is operated with the do-operator do(i = x), which forces a variable i ∈𝒱 to take the value x.
do(i) introduces an independence of the intervened node i from its causal parents, i.e., i ⊥ pa(i).
Intervention lies at the core of causal modeling as suggested by Rubin et al. <cit.>.
Given a SCM ℳ, an intervention is to force a variable i ∈𝒱 to take a specific value x in order to observe the effect on another variable.
Through intervention, we can determine the causal relationship between endogenous variables.
For instance, in the recommendation, we want to determine the effect of a particular recommendation (e.g., a video) on user behavior (e.g., click).
We can intervene by assigning this recommendation to users, and observe users' behaviors before and after interventions.
If users who received the recommendation are more likely to click, we can conclude that the recommendation has a positive causal effect on user behaviors.
As such, interventions allow us to determine the true causal effect by intervening to recommend items, instead of passively observing user-item correlations in training data.
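The contrast between observing and intervening can be illustrated with a toy SCM; the structural equations, coefficients, and threshold below are purely illustrative and not part of our model.

import numpy as np

rng = np.random.default_rng(0)

def simulate(n: int, do_u=None) -> float:
    # Exogenous noise terms (illustrative).
    z_u, z_v = rng.normal(size=n), rng.normal(size=n)
    # do(U = x) severs U from its causes; otherwise U follows its structural equation.
    u = np.full(n, do_u) if do_u is not None else z_u
    v = 0.5 * u + z_v                              # U -> V: item exposure depends on the user
    y = (0.8 * u + 0.6 * v > 1.0).astype(float)    # Y depends on both U and V
    return float(y.mean())

observational = simulate(10_000)                   # P(Y = 1) from passive observation
interventional = simulate(10_000, do_u=1.0)        # P(Y = 1 | do(U = 1))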
§ PROBLEM FORMULATION
We put forward the causal graph for causality-aware graph-based recommendations.
We then formulate the generation process of recommendations based on structural equations under the causal graph.
§.§ A Causal View of Recommendation
Early CF resorts to user-item associative matching by assuming the causal graph in Figure <ref> (a).
They typically assume P(Y=1 | u, v) ∝𝐮^⊤𝐯, where 𝐮 and 𝐯 are user and item latent factors.
Graph CF (GCF), as shown in Figure <ref> (b), considers auxiliary data Z_u and Z_v (could be hidden) and the inner connections of users and items from their neighbors to model more complex user behavior patterns.
They first derive dense embedding vectors (i.e., E) for users and items, then use these embeddings to infer user preferences.
They assume P(Y=1 | u, v) ∝ E = NN(agg(u, z_u, msg(𝒩_u)), agg(v, z_v, msg(𝒩_v))), where 𝒩_u and 𝒩_v are neighbor sets for users and items, respectively; NN is the representation learning network (e.g., GCN), and agg and msg are the aggregation and message-passing operations, respectively.
Both Figure <ref> (a) and (b) assume the co-occurrence of users and items is independent in the observational data, i.e., there is no edge U → V or V → U.
However, this assumption is unrealistic in the real world because user behaviors are influenced by the recommended items for various reasons.
For instance, users may be more likely to click the items if they are recommended <cit.>.
Besides, the exposure of items is determined by user preferences estimated from the recommendation model <cit.>.
Thus, it is necessary to model the influence of users on items and vice versa, as shown in Figure <ref> (c), to achieve better user preference modeling.
We thus use the causal graph defined in Figure <ref> (c) for user preference modeling.
The causal graph induces a structural causal model, with structural equations defined as:
ℱ(𝒱, Z) :=
U ← f_U(U, V, Z_u)
V ← f_V(U, V, Z_v)
E ← f_E(U, V)
Y ← f_Y(E)
where {U, V, E, Y}∈𝒱 are endogenous variables in the recommendation.
f_U, f_V, f_E and f_Y are the structural equations that specify the causal modeling of U (i.e., user),
V (i.e., item), E (i.e., representation) and Y (i.e., recommendation), respectively.
For example, a user node u is characterized by the structural equation f_U, which models the direct causal relation from the set of causes pa(u) to the user node u, accounting for the effects of Z_u as indicated by Eq. (<ref>).
The ability to perform interventions lays a foundation for Eq. (<ref>), as interventions enable estimating the causal effects between endogenous variables.
For example, by using the do-operation do(·) on users, we can estimate the causal effect of user influence on items (i.e., U → V) by modeling P(y | v, do(u)).
Also, we can estimate the influence of items on users (i.e., V → U) using the u-specific causal effect P(y | u, do(v)), instead of fitting users' historical interactions by modeling P(y | u, v) without accounting for user-item causal relations.
As such, we could model user-item causal relations to allow causality-aware graph-based recommendations.
§.§ Causality-aware Recommendation Generative Process
We now present the generative process of causality-aware graph-based recommendations.
The generative process is guided by the structural equations under the causal graph (cf. Eq. (<ref>)) to capture causal relations in graph-based recommendations.
In particular,
we first assume the unobserved exogenous variables of users and items in Eq. (<ref>) are drawn from a standard Gaussian prior, denoted as d-dimensional latent vectors 𝐙_u and 𝐙_v for exogenous variables Z_u and Z_v, respectively.
For each user u, we calculate the user representation 𝐮 based on latent vectors of user exogenous variables 𝐙_u and neighbor information f_φ(U | U, V) propagated by its connected users and items.
Note that we enable the neighbor information f_φ(U | U, V) to capture the causal relations between neighboring nodes and the target node, and thus propose a causality-aware message passing operation that defines f_φ as a feedforward neural network with parameter φ.
f_ϕ is a sum-aggregator for message aggregation to give the distribution of 𝐮.
Analogously, item representation 𝐯 is given by aggregating 𝐙_v and neighbor information f_φ(V | U,V) through f_ϕ.
The latent representations 𝐮 and 𝐯 are transformed via a non-linear function f_θ_3∈ℝ^I.
The output of f_θ_3 is normalized via a softmax function to produce a preference probability vector 𝐞∈𝕊^I-1,
where 𝕊^I-1 is an (I-1)-simplex with (I-1) as the size of 𝐞 and I is the total item number.
Given the total number of interactions N=∑_i y_ui from user u, the observed user interaction vector 𝐲 follows multinomial priors based on the distribution of 𝐞.
Formally,
𝐙_u ∼ 𝒩(0, 𝐈_K), 𝐙_v ∼ 𝒩(0, 𝐈_K),
𝐮 ∝ f_U = {f_ϕ(𝐙_u, f_φ(U | U, V))}_θ_1,
𝐯 ∝ f_V = {f_ϕ(𝐙_v, f_φ(V | U, V))}_θ_2,
𝐞 ∝ f_E = softmax(f_θ_3(𝐮, 𝐯)),
𝐲 ∼ f_Y = Mult(N, 𝐞)
The generative process in Eq. (<ref>) ensures the causality-aware graph learning for recommendations by modeling causal relations induced by structural equations in Eq. (<ref>).
Later, we will use this generative process to guide our model framework design for robust recommendations.
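A NumPy sketch of one forward pass through this generative process is given below; the neighbor messages and the mapping f_θ_3 are replaced by random stand-ins, since the point is only to show how 𝐮, 𝐯, 𝐞 and 𝐲 are produced.

import numpy as np

rng = np.random.default_rng(0)
K, I, N = 16, 100, 20                     # latent dim, catalogue size, #interactions (assumed)

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

z_u, z_v = rng.normal(size=K), rng.normal(size=K)      # exogenous latents ~ N(0, I_K)
msg_u, msg_v = rng.normal(size=K), rng.normal(size=K)  # stand-ins for the causal neighbour messages
u = np.tanh(z_u + msg_u)                               # f_U: aggregate latents and messages
v = np.tanh(z_v + msg_v)                               # f_V
W = rng.normal(size=(2 * K, I)) / np.sqrt(2 * K)       # stand-in for f_theta3
e = softmax(np.concatenate([u, v]) @ W)                # preference probabilities
y = rng.multinomial(N, e)                              # observed interactions ~ Mult(N, e)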
§ METHODOLOGY
We now introduce our Causal Neural Graph Collaborative Filtering (CNGCF) framework that delivers causality-aware graph-based recommendations.
We follow Eq. (<ref>) to design each of the components in CNGCF, i.e., implementing f_U, f_V, f_E and f_Y, respectively.
We use variational autoencoders (VAEs) <cit.> to approximate the intractable posterior distributions of parameters from the four structural equations.
In particular, as shown in Figure <ref>, CNGCF devises two major components based on the VAE structure:
1) The causal graph encoder includes a semi-implicit generative model, a user encoder and an item encoder.
The semi-implicit generative model implements a causality-aware message passing to model causal relation dependencies between nodes.
The user encoder and item encoder implement f_U and f_V to output user representation 𝐮 and item representation 𝐯, respectively.
2) The collaborative filtering decoder
implements f_E to construct the user preference vector 𝐞 through collaborative filtering, from which the user's interactions 𝐲 are sampled via f_Y.
§.§ Semi-implicit Inference for Causal Graph Encoder
Our causal graph encoder aims to learn user and item representations 𝐮 and 𝐯 by using a user encoder q_θ_1(𝐮|𝐙_u, 𝐝_u, 𝐀̃_u) and an item encoder q_θ_2(𝐯|𝐙_v, 𝐝_v, 𝐀̃_v).
However, modeling q_θ_1 and q_θ_2 is not easy, since there are inherent causal relation dependencies between a user/item node and its adjacent neighbors.
Besides, as indicated by Eq. (<ref>), those causal relations should be modeled with a neural network f_φ as dependency terms of structural equations.
Thus, the true posteriors of q_θ_1 and q_θ_2 do not follow Gaussian distributions due to the existence of complex causal relation dependencies parameterized by an additional neural network.
As a result, traditional variational inference <cit.> that directly parameterizes user and item representations to simple, tractable Gaussian random vectors is not applicable in our setting.
To approximate complex posteriors, we use semi-implicit variational inference (SIVI) <cit.> that models complex distributions through the use of implicit distributions.
§.§.§ Semi-implicit Generative Model
SIVI approximates additional implicit posteriors with a generative model and integrates them with variational encoders to enable flexible mixture modeling of complex posteriors.
Inspired by SIVI, we devise a semi-implicit generative model on top of the user and item encoder to model implicit posteriors.
Notably, our semi-implicit generative model includes a causality-aware message passing to handle neighboring node dependencies of user and item nodes in the causal graph.
As a result, our causal graph encoder not only captures causal relation dependencies, but also naturally allows the mixture modeling of complex posterior distributions.
Formally, the semi-implicit generative model f_{φ, ϕ} equips causality-aware message passing with a neural network f_φ and an aggregation operator f_ϕ to learn hidden factors 𝐡_u and 𝐡_v for a user u and an item v.
Then, the user encoder q_θ_1 takes 𝐡_u as the input to output μ_u, σ_u, from which the user representation 𝐮 is sampled.
Analogously, the item encoder uses 𝐡_v for q_θ_2 to calculate the item representation 𝐯:
𝐡_u ∼ f_{φ, ϕ}, 𝐮∼ q_θ_1(𝐮|𝐡_u) = 𝒩(𝐮|μ_u, diag(σ_u^2)),
𝐡_v ∼ f_{φ, ϕ}, 𝐯∼ q_θ_2(𝐯|𝐡_v) = 𝒩(𝐯|μ_v, diag(σ_v^2))
where {φ, ϕ} parameterize the semi-implicit generative model. θ_1 and θ_2 are the parameters of the user and the item encoder.
Next, we detail the semi-implicit generative model that learns 𝐡_u and 𝐡_v by using two key components:
* Causality-aware message passing:
Causality-aware message passing models each of the dependency terms f_φ(i,j) for a node i and its neighbor j within a structural equation, such that the learned messages themselves become a descriptor of the causal relation for (i ← j).
In particular, we define f_φ(i,j) as a learnable multi-layer perception (MLP) to capture the causal relations.
Formally, for a user u, given its features 𝐝_u and its causal adjacency vector 𝐀̃_u, the messages from u's neighbors j within 𝐀̃_u is given by:
𝐦_u^(l-1) = f_φ(u,j)= ∑_j ∈𝒩_u ∝𝐀̃_u𝐡_j^(l-1)·MLP^(l)(𝐡_u^(l-1), 𝐡_j^(l-1))
=ReLU(𝐖_φ^(l)(𝐡_u^(l-1), 𝐡_j^(l-1))), for l ∈{1, ⋯, L}
where 𝐦_u^(l-1) is the neighbor message calculated for user u at the l-1-th graph learning layer [The neighbor message at the 0-th layer, i.e., 𝐦_u^(0), is initialized from a normal distribution.].
𝒩_u is a set of neighbors adjacent to user u within u's causal adjacency vector 𝐀̃_u.
𝐡_j^(l-1) and 𝐡_u^(l-1) are hidden factors for a neighbor j and the user u at the l-1-th layer [𝐡_j^(0) and 𝐡_u^(0) are initialized as node features 𝐝_j and 𝐝_u.].
𝐖_φ is the learnable weight matrix for f_φ, and (·, ·) denotes column-wise concatenation.
Analogously, we can calculate the neighbor message 𝐦_v for an item v following Eq. (<ref>).
* Aggregation:
At each graph learning layer l, we perform aggregation operation on the messages 𝐦_u and user exogenous variables 𝐙_u to obtain the hidden factor 𝐡_u^(l) for u:
𝐡_u^(l)=σ(𝐖_ϕ^(l)(𝐡_u^(l-1) || 𝐦_u^(l-1), 𝐙_u))
where 𝐡_u^(l) is the learned hidden factor for u at the l-th graph learning layer.
σ(·) is the aggregation function chosen as sum, following <cit.>; || is the concatenation operation. 𝐖_ϕ is the weight for aggregation.
At the 0-th layer, u's hidden factors 𝐡_u^(0) are initialized as the user features 𝐝_u.
Similarly, we can calculate the hidden factors 𝐡_v^(l) for an item v at the l-th graph learning layer following Eq. (<ref>).
Having obtained the hidden factors 𝐡_u^(l) for user u and 𝐡_v^(l) for item v at each graph learning layer l ∈{1,⋯, L}, we adopt a layer-aggregation mechanism <cit.> to combine the vectors from all layers into a single vector:
𝐡_u=𝐡_u^(1) + ⋯ + 𝐡_u^(L), 𝐡_v=𝐡_v^(1) + ⋯ + 𝐡_v^(L)
By performing layer aggregation, we capture higher-order connectivities of node pairs across different graph learning layers.
Finally, our semi-implicit generative model outputs 𝐡_u and 𝐡_v from Eq. (<ref>) as the semi-implicit posteriors of users and items for the latter variational encoders.
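A compact sketch of one causality-aware message-passing and aggregation layer is given below. It mirrors the equations above in structure, but the module names, the single-hidden-layer MLP used for f_φ, and all tensor shapes are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn as nn

class CausalMessagePassingLayer(nn.Module):
    """One causality-aware message-passing + aggregation layer (illustrative sketch)."""

    def __init__(self, dim):
        super().__init__()
        # dependency MLP playing the role of f_φ: scores the causal relation (i ← j)
        self.dep_mlp = nn.Sequential(nn.Linear(2 * dim, 1), nn.ReLU())
        # aggregation weights W_ϕ combining [h_i || m_i, Z_i]
        self.agg = nn.Linear(3 * dim, dim)

    def forward(self, h, adj, z):
        # h: (n, dim) hidden factors at layer l-1; adj: (n, n) causal adjacency; z: (n, dim) exogenous
        n, dim = h.shape
        h_i = h.unsqueeze(1).expand(n, n, dim)
        h_j = h.unsqueeze(0).expand(n, n, dim)
        dep = self.dep_mlp(torch.cat([h_i, h_j], dim=-1)).squeeze(-1)   # MLP(h_i, h_j), shape (n, n)
        msg = (adj * dep) @ h            # m_i = Σ_j h_j · MLP(h_i, h_j), restricted to causal neighbors
        return torch.relu(self.agg(torch.cat([h, msg, z], dim=-1)))     # h^(l)

layer = CausalMessagePassingLayer(16)
h0, z = torch.randn(5, 16), torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
h1 = layer(h0, adj, z)
h2 = layer(h1, adj, z)
h_final = h1 + h2                        # layer aggregation: h = h^(1) + ⋯ + h^(L), with L = 2 here
```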
§.§.§ User and Item Encoder
Given semi-implicit posterior 𝐡_u for a user u, the user encoder outputs the mean and variance in 𝒩(μ_u, diag(σ_u^2)), from which user representation 𝐮 is sampled:
q_θ_1(𝐮|𝐡_u) =𝒩(𝐮|μ_u, diag(σ_u^2))
where μ_u and diag(σ_u^2) are the mean and variance for user u, which are obtained by sending u's hidden factors 𝐡_u to a one-layer neural network with activation function ReLU(x)=max (0, x):
μ_u=ReLU(𝐖^μ_u_θ_1𝐡_u+b), σ_u^2=exp(ReLU(𝐖^σ_u_θ_1𝐡_u+b))
where 𝐖_θ_1 = {𝐖^μ_u_θ_1, 𝐖^σ_u_θ_1} is a hidden-to-output weight matrix for the user encoder q_θ_1.
Analogously, the item encoder follows the same paradigm as the user encoder to generate the mean and variance for item v based on v's hidden factors 𝐡_v:
q_θ_2(𝐯|𝐡_v) =𝒩(𝐯|μ_v, diag(σ_v^2)),
μ_v=ReLU(𝐖^μ_v_θ_2𝐡_v+b), σ_v^2=exp(ReLU(𝐖^σ_v_θ_2𝐡_v+b))
where 𝐖_θ_2 = {𝐖^μ_v_θ_2, 𝐖^σ_v_θ_2} is the weight matrix for the item encoder q_θ_2.
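For illustration, the user and item encoder heads can be sketched as below; the reparameterized sampling step, the module names and the layer sizes are assumptions made for this sketch rather than details taken from the released code.

```python
import torch
import torch.nn as nn

class GaussianEncoderHead(nn.Module):
    """Maps semi-implicit posteriors h to (μ, σ²) and samples a representation (sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.w_mu = nn.Linear(dim, dim)       # plays the role of W^μ
        self.w_sigma = nn.Linear(dim, dim)    # plays the role of W^σ

    def forward(self, h):
        mu = torch.relu(self.w_mu(h))                     # μ = ReLU(W^μ h + b)
        var = torch.exp(torch.relu(self.w_sigma(h)))      # σ² = exp(ReLU(W^σ h + b))
        sample = mu + torch.randn_like(mu) * var.sqrt()   # reparameterized draw from N(μ, diag(σ²))
        return sample, mu, var

user_encoder, item_encoder = GaussianEncoderHead(64), GaussianEncoderHead(64)
u, mu_u, var_u = user_encoder(torch.randn(8, 64))         # batch of 8 users
v, mu_v, var_v = item_encoder(torch.randn(8, 64))         # batch of 8 items
```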
§.§ Collaborative Filtering Decoder
Collaborative filtering is largely dominated by latent factor models, as evidenced by Koren et al. <cit.>. These models involve mapping users and items into latent factors in order to estimate the preference scores of users towards items.
We extend latent factor-based collaborative filtering into our decoder for modeling the user preference 𝐞, which is a probability vector over the entire item set for recommendations.
The predicted user interaction vector 𝐲 is assumed to be sampled from a multinomial distribution with probability 𝐞.
Formally, we define a generative function f_θ_3(𝐮, 𝐯) recovering classical latent factor-based CF to approximate user preference vector 𝐞:
𝐞 = f_θ_3(𝐮, 𝐯)=𝐮^⊤𝐯
where 𝐮 and 𝐯 are latent factors drawn from our user and item encoder in Eq. (<ref>) and Eq. (<ref>), respectively.
Then, the decoder p_θ_3(𝐞|𝐮, 𝐯) produces interaction probability 𝐲 by approximating a logistic log-likelihood:
log p_θ_3(𝐲|𝐞) =
∑_v y_uvlogσ(𝐞)+(1-y_uv) log(1-σ(𝐞))
where y_uv is the historical interaction between u and v, e.g., click. σ(𝐞)=1 /(1+exp (-𝐞)) is the logistic function.
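A minimal sketch of this decoder, assuming a pairwise (per user-item) scoring variant of e = u^⊤v followed by the logistic log-likelihood above, might look as follows; all names and shapes are illustrative.

```python
import torch

def decode_and_loglik(u, v, y):
    """Latent-factor decoding e = u^T v with the logistic log-likelihood (sketch).

    u, v: (batch, d) user/item representations; y: (batch,) observed binary interactions.
    """
    e = (u * v).sum(dim=-1)                    # preference score per user-item pair
    p = torch.sigmoid(e)                       # σ(e)
    loglik = y * torch.log(p + 1e-10) + (1 - y) * torch.log(1 - p + 1e-10)
    return e, loglik.sum()

u, v = torch.randn(8, 64), torch.randn(8, 64)
y = torch.randint(0, 2, (8,)).float()
scores, ll = decode_and_loglik(u, v, y)
```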
§.§ Optimization with Counterfactual Instances
We wish our CNGCF to be robust to unseen (unknown) user preference shift to further enhance our recommendation robustness.
Catching user preferences is at the core of any recommendation model <cit.>; however, user preferences are dynamic and may change over time <cit.>.
For example, a user may once favor items from the brand Nike but later shift their taste towards Adidas.
Such a user preference shift can be captured by actively manipulating user preference through interventions on the user preference vector 𝐞, i.e., do(𝐞= 𝐞^').
The data after interventions is termed as counterfactual instances <cit.> that, if augmented to original training instances, increase the model robustness to unseen interventions.
Following this intuition, we optimize our CNGCF by considering two different data scenarios, i.e., the clean data scenario in which our CNGCF accesses the data without interventions, and the counterfactual data scenario in which the data is generated by known interventions on user preference vectors.
Formally, for the clean data scenario, we assume that CNGCF observes only the clean data 𝐃 during training.
In this case, we retain the original value 𝐨 of user preference 𝐞 by do(𝐞=𝐨).
Then, CNGCF is trained by maximizing the likelihood function log p_θ_3(𝐲|𝐞, do(𝐞=𝐨)).
Since this marginal distribution is intractable <cit.>, we instead maximize the intervention evidence lower bound (ELBO) with do(𝐞=𝐨), i.e., max_θ_1, θ_2,θ_3 ELBO(𝐃, do(𝐞=𝐨)).
In particular,
ELBO(𝐃, do(𝐞=𝐨))
= 𝔼_θ[log p_θ_3(𝐲|𝐞, do(𝐞=𝐨)) p(𝐮)p(𝐯)/q_θ_1(𝐮|Ξ, do(𝐞=𝐨)) q_θ_2(𝐯|Ξ, do(𝐞=𝐨))]
= 𝔼_θ[log p_θ_3(𝐲|𝐞, do(𝐞=𝐨))]
- KL(q_θ_1(𝐮|Ξ) ‖ p(𝐮)) - KL(q_θ_2(𝐯|Ξ) ‖ p(𝐯))
where Ξ represents required conditions for the conditional probability distributions of q_θ_1, q_θ_2 and p_θ_3, i.e., Ξ ={𝐙_u, 𝐝_u, 𝐀̃_u} for q_θ_1,
Ξ ={𝐙_v, 𝐝_v, 𝐀̃_v} for q_θ_2 and Ξ ={𝐮, 𝐯} for p_θ_3.
θ={θ_1, θ_2, θ_3} is a set of model parameters to be trained and KL(Q ‖ P) is the KL-divergence between distributions Q and P.
For the counterfactual data scenario, we assume CNGCF accesses counterfactual data 𝐃^' generated by known interventions do(𝐞=𝐞^') on user preference vectors.
The counterfactual vectors 𝐞^' hold the same dimension with 𝐞 and are drawn from a random distribution.
Then, the ELBO of CNGCF with the counterfactual data is,
ELBO(𝐃^', do(𝐞=𝐞^'))
= 𝔼_θ[log p_θ_3(𝐲|𝐞, do(𝐞=𝐞^'))]
- KL(q_θ_1(𝐮|Ξ) ‖ p(𝐮)) - KL(q_θ_2(𝐯|Ξ) ‖ p(𝐯))
Inspired by data augmentation and adversarial training <cit.>, we augment the clean data with counterfactual instances to enhance the robustness of our CNGCF meanwhile capturing user preference shifts.
In particular, the total loss function after augmentation is as below
ℒ_aug(Θ) = λ · ELBO(𝐃, do(𝐞=𝐨)) + (1-λ) · ELBO(𝐃^', do(𝐞=𝐞^'))
where ℒ_aug (Θ) is the loss function for training our CNGCF and Θ are model parameters. λ is the trade-off parameter between the clean and the counterfactual data scenario.
During the training stage, the loss function is calculated by averaging the ELBO over all users.
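The sketch below illustrates how the two ELBO terms could be combined into the augmented loss. For simplicity it treats the posteriors as diagonal Gaussians so that the KL terms have a closed form; the actual semi-implicit posteriors would instead require a SIVI-style bound, so this is only a schematic of the trade-off controlled by λ.

```python
import torch

def gaussian_kl(mu, var):
    """KL( N(μ, diag(var)) ‖ N(0, I) ), summed over dimensions and batch."""
    return 0.5 * (var + mu.pow(2) - 1.0 - torch.log(var)).sum()

def elbo(loglik, mu_u, var_u, mu_v, var_v):
    # ELBO = E[log p(y | e, do(e=·))] - KL(q(u) ‖ p(u)) - KL(q(v) ‖ p(v))
    return loglik - gaussian_kl(mu_u, var_u) - gaussian_kl(mu_v, var_v)

def augmented_loss(elbo_clean, elbo_counterfactual, lam=0.7):
    # L_aug = λ · ELBO(D, do(e=o)) + (1-λ) · ELBO(D', do(e=e')); negated for minimization
    return -(lam * elbo_clean + (1.0 - lam) * elbo_counterfactual)
```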
§ EXPERIMENTS
We thoroughly evaluate the proposed CNGCF for the recommendation task to answer the following research questions:
* RQ1:
How does CNGCF perform as compared with state-of-the-art recommendation methods?
* RQ2: How do different components impact CNGCF's performance?
* RQ3: How do parameters in the causal graph encoder affect CNGCF?
§.§ Experimental Settings
We conduct our experiments on three real-world and one synthetic datasets to evaluate the effectiveness of CNGCF.
§.§.§ Datasets
We use three benchmark recommendation datasets from Amazon Product Reviews [https://nijianmo.github.io/amazon/index.html] <cit.> and Epinions [http://www.cse.msu.edu/ tangjili/trust.html] <cit.>:
* Amazon-Beauty and Amazon-Appliances: two sub-datasets selected from Amazon Product Reviews, which record large crawls of user reviews and product metadata (e.g., brand).
Following <cit.>, we use brand and price to build item features since other features (e.g., category) are too sparse and contain noisy information.
We build item neighbors based on co-purchased and co-viewed information from the product metadata.
The co-purchased and co-viewed information records item-to-item relationships, i.e., a user who bought/viewed item A also bought/viewed item B, reflecting the relations between item A and B.
We build user neighbors based on similar interactions from the review data, i.e., users who reviewed the same item are neighbors for each other.
* Epinions:
a social recommendation dataset recording social relations between users.
We convert user/item features from the dataset into one-hot embeddings.
We use social relations to build user neighbors, i.e., a user's social friends are the neighbors of the user.
Besides, items bought by the same user are neighbors to each other.
We follow <cit.> to build the synthetic dataset, which assumes that synthetic user-item interactions follow the causal relations in a causal graph.
In particular, given the causal graph in Figure <ref>(c),
we construct the Synthetic dataset in four steps:
* Feature generation:
We simulate |𝒰|=1,000 users and |ℐ|=1,000 items, where each user has one discrete feature (gender) and one continuous feature (income), while each item has three discrete features, i.e., type, brand and location.
For discrete features, their values in {0,1} are sampled from Bernoulli distributions.
We sample continuous features from random sampling, in which random feature values are chosen from the minimum (i.e., 0) and the maximum (i.e., 1000) feature values.
For both users and items, we assume four exogenous variables (i.e., Z_u and Z_v) drawn from Gaussian distribution 𝒩(0,1).
* Causal neighbor sampling:
As the causal graph gives causal relations U → U and V → V, we synthesize the causal relations by building user/item causal neighbors, i.e., the connected users/items, for the target user/item.
In particular, we set the causal neighbor number N_c=10.
We sample user causal neighbors (U → U) through random sampling, in which a user's causal neighbors are randomly chosen from the user set 𝒰.
For item causal neighbor sampling (V → V), we first convert items with their features generated in the first step into dense vectors through item2vec <cit.>, then calculate the Euclidean distances between two items.
Those items that have the N_c smallest Euclidean distances with the target item are chosen as causal neighbors for the target item.
* User preference estimation:
For each user u and item v, the user preference 𝐮∈ℝ^d towards item property 𝐯∈ℝ^d is generated from a multivariate Gaussian distribution 𝒩(0, 𝐈), where d and 𝐈 represent the vector size and unit matrix, respectively.
Then, the preference score y_uv between user u and item v is calculated by the inner product of 𝐮 and 𝐯.
* User interaction sampling:
Once we obtain a user u's preference scores for all items (i.e., ℐ), we normalize these preference scores by exp(r_i)/∑_i^'∈ℐexp(r_i^').
We select items with the top-k scores as the interactions for the user u ∈𝒰, where k is a constant chosen randomly from the range [20, 100].
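A brief NumPy sketch of the preference-estimation and interaction-sampling steps (steps 3 and 4 above) is shown below; the latent size d and the random seed are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 1000, 1000, 16          # d: illustrative latent size

# Step 3: user preference estimation -- u, v ~ N(0, I), preference score = <u, v>
U = rng.standard_normal((n_users, d))
V = rng.standard_normal((n_items, d))
scores = U @ V.T                               # (n_users, n_items)

# Step 4: user interaction sampling -- softmax-normalize, keep the top-k items
def sample_interactions(user_scores, k):
    p = np.exp(user_scores - user_scores.max())
    p /= p.sum()                               # exp(r_i) / Σ_i' exp(r_i')
    return np.argsort(-p)[:k]                  # indices of the k top-scored items

interactions = {u: sample_interactions(scores[u], rng.integers(20, 101))
                for u in range(n_users)}
```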
For the three real-world datasets, we regard user interactions with overall ratings above 3.0 as positive interactions.
For the synthetic dataset, we regard all user-item interactions as positive as they are top items selected based on users' preferences.
We adopt a 10-core setting, i.e., retaining users and items with at least ten interactions.
The statistics of the four datasets are shown in Table <ref>.
For model training, we split each dataset into training, validation, and test sets by the ratio of 70%, 10%, and 20%.
§.§.§ Baselines
We compare CNGCF with eight competitive recommendation methods.
* BPR <cit.>: a well-known matrix factorization-based model with a pairwise ranking loss to enable recommendation learning from implicit feedback.
* NCF <cit.>: extends the CF to neural network architecture. It maps users and items into dense vectors, then feeds user and item vectors into an MLP to predict user preference scores.
* MultiVAE <cit.>: extends the CF to VAE architecture for implicit feedback modeling.
It converts the CF learning process into a generative model and uses variational inference to model the distribution of the generative model.
* NGCF <cit.>: a graph CF that incorporates two GCNs to learn user and item representations. The learned representations are passed to a matrix factorization to capture the collaborative signal for recommendations.
* VGAE <cit.>: a representative graph learning method that extends VAE to handle graph-structured data. We use VGAE to obtain user and item representations and inner product those representations to predict user preference scores.
* GC-MC <cit.>: a graph-based auto-encoder framework for matrix completion. The encoder is a GCN that produces user and item representations. The learned representations reconstruct the rating links through a bilinear decoder.
* LightGCN <cit.>: a SOTA graph-based recommendation model that simplifies the GCN component.
It includes the essential part in GCNs, i.e., neighbor aggregation, to learn user and item representations for collaborative filtering.
* CACF <cit.>: a method that learns attention scores from individual treatment effect estimation.
The attention scores are used as user and item weights to enhance the CF model.
§.§.§ Evaluation Metrics
We use three Top-K recommendation evaluation metrics, i.e., Precision@K, Recall@K and Normalized Discounted Cumulative Gain (NDCG)@K.
The three evaluation metrics measure whether the recommended Top-K items are consistent with users' preferences in their historical interactions.
We report the average results with respect to the metrics over all users.
The Wilcoxon signed-rank test <cit.> is used to evaluate whether the improvements against baselines are significant.
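For reference, a minimal single-user computation of the three metrics, assuming binary relevance, could look as follows; this is an illustrative sketch rather than the evaluation code used in the experiments.

```python
import numpy as np

def precision_recall_ndcg_at_k(ranked_items, relevant_items, k):
    """Top-K metrics for a single user with binary relevance (sketch)."""
    topk = np.asarray(ranked_items)[:k]
    hits = np.isin(topk, list(relevant_items)).astype(float)
    precision = hits.sum() / k
    recall = hits.sum() / max(len(relevant_items), 1)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
    n_ideal = min(len(relevant_items), k)
    idcg = (1.0 / np.log2(np.arange(2, n_ideal + 2))).sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg

p, r, n = precision_recall_ndcg_at_k([3, 7, 1, 9, 4], {1, 4, 8}, k=5)
```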
§.§.§ Parameter Settings
We implement our CNGCF using Pytorch.
The latent embedding sizes of neural networks for all neural-based methods are fixed as d=64.
The input and output dimensions of the graph convolutional layers in CNGCF, NGCF, VGAE, GC-MC and LightGCN are set to 32 and 64, respectively, for graph learning.
We apply a dropout layer on top of the graph convolutional layer to prevent model overfitting for all GCN-based methods.
The Adam optimizer is applied to all methods for model optimization, where the batch size is fixed as 1024.
The hyper-parameters of all methods are chosen by the grid search, including the learning rate l_r in {0.0001,0.0005,0.001,0.005}, L_2 norm regularization in {10^-5, 10^-4, ⋯, 10^1, 10^2}, and the dropout ratio p in {0.0,0.1, ⋯, 0.8}.
We set the maximum epoch for all methods as 400 and use the early stopping strategy, i.e., terminate model training when the validation Precision@10 value does not increase for 20 epochs.
§.§ Recommendation Performance (RQ1)
We show the recommendation performance of our CNGCF and all baselines on the four datasets in Table <ref>.
By analyzing Table <ref>, we have the following findings.
* CNGCF consistently outperforms the strongest baselines on both synthetic and real-world datasets, achieving the best recommendation performance across all three evaluation metrics.
In particular, CNGCF outperforms the strongest baselines by 23.4%, 7.0%, 34.3% and 5.7% in terms of Precision@10 on Synthetic, Amazon-Beauty, Amazon-Appliances and Epinions, respectively.
Additionally, CNGCF improves Recall@10/NDCG@10 by 2.5%/3.8%, 8.4%/22.1%, 13.3%/35.9% and 10.6%/2.8% on the four datasets, respectively.
The superiority of CNGCF can be attributed to two factors: the power of neural graph learning and the modeling of causality.
Firstly, graph learning explicitly models the interactions between users and items as a graph, and uses graph convolutional networks to capture the non-linear relations from neighboring nodes.
This allows graph learning to capture more complex user behavior patterns.
Secondly, modeling causal relations allows us to identify the causal effects of different items on users, thus capturing true user preferences on items.
By injecting causal modeling into graph representation learning, our CNGCF captures more precise user preferences to produce robust recommendations against baselines.
*
CNGCF achieves the most notable improvements (e.g., 35.9% for NDCG@10 and 43.8% for NDCG@20) on the Amazon-Appliances dataset, which is a large-scale dataset with a considerable amount of user behavior data that may be noisy and challenging to model.
CNGCF's ability to inject causality into graph learning enables the model to surpass merely capturing spurious correlations among noisy data, leading to more accurate and reliable modeling of true user preferences.
* NGCF that uses graph representation learning outperforms NCF without graph learning.
This is because NGCF models user-item interactions as a graph, and uses graph convolutional networks to capture more complex user-user collaborative behavior to enhance recommendations.
In contrast, NCF uses a multi-layer perception to learn user and item similarities, which captures only linear user-item correlations from the interaction matrix.
Moreover, GC-MC and LightGCN outperform other graph learning-based baselines (i.e., NGCF, VGAE) in most cases.
This is because GC-MC and LightGCN aggregate multiple embedding propagation layers to capture higher-order connectivity within the interaction graph.
Similarly, our CNGCF incorporates layer aggregation within our causal graph encoder, enabling us to capture higher-order connectivity and produce better graph representations for improved recommendation performance.
* CNGCF outperforms all graph learning-based baselines, including NGCF, VGAE, GC-MC and LightGCN.
This is because CNGCF models causal relations within the graph learning process.
Guided by the causality-aware recommendation generative process, CNGCF is able to inject causal relations under the structural causal model into the learning process of the graph convolutional network.
This allows CNGCF to uncover the causal effect of items on users and capture user behavior patterns more accurately.
§.§ Study of CNGCF (RQ2)
We start by exploring how replacing our causal graph encoder with other graph representation learning methods, i.e., naive GCN <cit.>, Graphsage <cit.> and Pinsage <cit.>, impact CNGCF's performance.
We then analyze the influences of core components, including causality-aware message passing and counterfactual instance-aware ELBO.
§.§.§ Effect of Causal Graph Encoder
The causal graph encoder plays a pivotal role in CNGCF to model the causal relations of nodes.
To investigate its effectiveness, we replace our causal graph encoder with different encoders built by other graph learning methods.
In particular, we use GCN <cit.>, Graphsage <cit.> and Pinsage <cit.> to produce user and item embedding vectors for the decoder learning phase, and compare the performance of CNGCF before and after the replacements.
We present the experimental results in Table <ref>.
We find that the GCN- <cit.>, Graphsage- <cit.> and Pinsage-based <cit.> encoders all degrade the performance of CNGCF compared with CNGCF equipped with our proposed causal graph encoder.
For instance, CNGCF with a GCN-based encoder downgrades the NDCG@10 by 28.68% on the Amazon-Beauty.
This is because GCN, Graphsage and Pinsage cannot capture the causal relations of nodes in the interaction graph, leading to insufficient representations of users and items.
On the contrary, our causal graph encoder captures the intrinsic causal relations between nodes using the causality-aware message passing; thus learns causality-aware user and item representations to
better serve the later decoder learning.
Moreover, the GCN-based encoder downgrades the CNGCF performance most severely compared with GraphSage and Pinsage-based encoders.
This is because the naive GCN performs transductive learning that requires the full graph Laplacian, whereas GraphSage and Pinsage perform inductive learning that does not require the full graph Laplacian and therefore handle large-scale graph data well.
We thus conclude that an inductive learning setting is more desirable for our CNGCF, especially when facing large-scale graph data.
§.§.§ Effect of Causality-aware Message Passing
The causality-aware message passing models the dependency terms between each of the structural equations as the causal relations between nodes.
We present CNGCF's performance after removing the causality-aware message passing in Table <ref>.
We observe that removing the component downgrades CNGCF's performance, indicating the importance of causality-aware message passing in helping CNGCF to achieve favorable recommendation performance.
We thus conclude that modeling the causal relations between nodes within the graph-structured data is essential for graph learning-based models to uncover true user preferences for improved recommendations.
§.§.§ Effect of Counterfactual Instance-aware ELBO
The counterfactual instance-aware ELBO augments counterfactual instances for CNGCF optimization.
We present CNGCF's performance after removing the counterfactual instance-aware ELBO in Table <ref>.
Apparently, removing the counterfactual instance-aware ELBO leads to the downgraded performance of CNGCF on both datasets.
This is because our counterfactual instance-aware ELBO augments counterfactual instances, i.e., the intervened data on user preference vectors, thus facilitating better model optimization to capture user preference shifts.
§.§ Parameter Analysis of Causal Graph Encoder (RQ3)
We analyze CNGCF's performance under different embedding sizes n of the semi-implicit generative model in the causal graph encoder.
We also investigate the node dropout ratios p of the dropout layer applied in the causal graph encoder.
§.§.§ Effect of Embedding Size
Figure <ref> (a) (b) (c) report the parameter sensitivity of our CNGCF w.r.t. embedding size n with n = {16, 32, 64, 128, 256, 512, 1024, 2048}.
Apparently, the performance of CNGCF on Amazon-Beauty, Amazon-Appliances and Epinions demonstrates increasing trends from n=16, then reaches the peak when n = 512, n = 64 and n=256, respectively.
This is reasonable since n controls the dimension of the latent vectors of users and items from the semi-implicit generative model, and low-dimensional latent vectors cannot retain enough information for the encoder learning phase.
After reaching the peaks, the performance of CNGCF degrades slightly and then becomes stable.
The decrease in performance is due to the introduction of redundant information as the embedding size becomes too large, which can hinder the model's learning.
Additionally, we observe the largest Amazon-Appliances dataset requires the smallest embedding size of n = 64 to reach its peak performance compared to the other two datasets.
This is because a larger embedding size brings large-scale datasets a higher computational burden, thus limiting the model's performance.
§.§.§ Effect of Dropout Ratio
We employ a node dropout layer in the causal graph encoder to prevent model overfitting.
We show the influence of node dropout ratio p on the three datasets in Figure <ref> (d) (e) (f).
We observe that the performance of CNGCF on Amazon-Beauty, Amazon-Appliances and Epinions exhibits a decreasing trend as we increase the node dropout ratio p from 0.0 to 0.3, but recovers at p=0.4.
After p=0.4, the performance of CNGCF decreases as the dropout ratio increases.
We believe that the reduced performance could be attributed to the removal of crucial information that the model needs to learn from the data, thus impairing the CNGCF's performance.
Nevertheless, the recovered performance at p=0.4 indicates that CNGCF is robust to balance the loss of information and overfitting.
§ CONCLUSION
We propose CNGCF, the first causality-aware graph representation learning framework for collaborative filtering.
Our CNGCF injects causal relations between nodes into GCN-based graph representation learning to derive satisfactory user and item representations for the CF model.
We craft a causal graph to describe the causality-aware graph representation learning process.
Our CNGCF quantifies each of the structural equations under the causal graph, with a semi-implicit generative model enabling causality-aware message passing for graph learning.
Finally, we capture true user preferences on items by modeling node messages as dependencies of structural equations.
Extensive evaluations on four datasets demonstrate CNGCF’s ability to produce precise recommendations that interpret user preferences and uncover user behavior patterns.
§ ACKNOWLEDGMENTS
This work is supported by the Australian Research Council (ARC) under Grant No. DP220103717, LE220100078, LP170100891 and DP200101374.
§ BIOGRAPHY SECTION
Xiangmeng Wang has been a Ph.D. student at the School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS). She received her MSc degree in Computer Application Technology from Shanghai University. Her general research interests lie primarily in explainable artificial intelligence, data analysis, and causal machine learning.
Qian Li is a Lecturer at the School of Engineering, Computing and Mathematical Sciences (EECMS), Curtin University, Perth, Australia.
Her general research interests lie primarily in optimization algorithms and causal machine learning.
Dianer Yu has been a Ph.D. candidate at the School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS). He received MSc and BSc degree in Computer Science from UTS.
His general research interests lie primarily in data mining, causal inference and explainable machine learning.
Wei Huang is a postdoctoral researcher at RIKEN Center for Advanced Intelligence Project (AIP). He obtained a Ph.D. degree in Computer Science at the University of Technology Sydney (UTS). He received his Master and Bachelor degree in Statistical Physics from the University of Science and Technology of China. His research interests lie in explainable artificial intelligence, deep learning theory, and graph representation learning.
Guandong Xu is a Professor in the School of Computer Science and Advanced Analytics Institute at University of Technology Sydney. He received MSc and BSc degree in Computer Science and Engineering, and PhD in Computer Science. He currently heads the Data Science and Machine Intelligence Lab, which consists of 15+ members of academics, research fellows and HDR students. From Nov 2019, he directs the newly established Smart Future Research Centre, which is an across-disciplines industry engagement and innovation platform for AI and Data Science Application towards smart wealth management and investment, energy, food, water, living, and city.
|
http://arxiv.org/abs/2307.04116v1 | 20230709081305 | Neutron scattering and muon-spin spectroscopy studies of the magnetic triangular-lattice compounds $A_2$La$_2$NiW$_2$O$_{12}$ ($A$ = Sr, Ba) | [
"B. C. Yu",
"J. Y. Yang",
"D. J. Gawryluk",
"Y. Xu",
"Q. F. Zhan",
"T. Shiroka",
"T. Shang"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
Preprint: August 12, 2023,
These authors contributed equally
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
These authors contributed equally
Institute of High Energy Physics, Chinese Academy of Sciences (CAS), Beijing 100049, China
Spallation Neutron Source Science Center (SNSSC), Dongguan 523803, China
Laboratory for Multiscale Materials Experiments, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
Laboratory for Muon-Spin Spectroscopy, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Laboratorium für Festkörperphysik, ETH Zürich, CH-8093 Zürich, Switzerland
[Corresponding authors:
][email protected]
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
Chongqing Key Laboratory of Precision Optics, Chongqing Institute of East China Normal University, Chongqing 401120, China
We report on the geometrically frustrated two-dimensional triangular-lattice magnets A_2La_2NiW_2O_12 (A = Sr, Ba) studied mostly by means of neutron powder diffraction (NPD) and muon-spin rotation and relaxation (µSR) techniques. The chemical pressure induced by the Ba-for-Sr substitution suppresses the ferromagnetic (FM) transition from 6.3 K in the Ba-compound to 4.8 K in the Sr-compound.
We find that the R3̅ space group reproduces the NPD patterns better than the previously reported R3̅m space group. Both compounds adopt the same magnetic structure with a propagation vector k = (0, 0, 0), in which the Ni^2+ magnetic moments are aligned ferromagnetically along the c-axis. The zero-field µSR results reveal two distinct internal
fields (0.31 and 0.10 T), caused by the long-range ferromagnetic order.
The small transverse muon-spin relaxation rates reflect the homogeneous internal field
distribution in the ordered phase and, thus, further support the simple FM arrangement of the Ni^2+ moments. The small longitudinal muon-spin relaxation
rates, in both the ferromagnetic- and paramagnetic states of A_2La_2NiW_2O_12, indicate that spin fluctuations are rather weak.
Our results demonstrate that chemical pressure indeed changes the superexchange interactions in A_2La_2NiW_2O_12 compounds, with the FM interactions being dominant.
Neutron scattering and muon-spin spectroscopy studies of the magnetic triangular-lattice compounds A_2La_2NiW_2O_12 (A = Sr, Ba)
T. Shang
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
================================================================================================================================
§ INTRODUCTION
Geometric frustration occurs when a system of interacting spins is unable
to find its lowest energy state because of how the spins are arranged.
This property plays an important role at microscopic scales in solids.
In particular, in certain cases, such as in spin glasses, spin ice, and
spin liquids <cit.>, the
localized magnetic moments interact through competing exchange
interactions that cannot be simultaneously satisfied, thus giving rise
to a highly degenerate magnetic ground state.
For instance, in a spin-liquid system, the constituent spins are
highly correlated, but still strongly fluctuating down to zero
temperature <cit.>.
Such fluctuations lead to remarkable collective phenomena such as emergent gauge fields and fractional excitations <cit.>.
Most of the magnetic frustrations have a simple geometric origin <cit.>, usually occurring in materials
with a 2D triangular- or kagome lattice, or a 3D pyrochlore lattice,
etc., with the nearest-neighbor interactions being antiferromagnetic
(AFM) <cit.>.
A two-dimensional triangular lattice with antiferromagnetic interactions
provides one of the prototypes of magnetic frustration <cit.>.
The perovskite-derived compounds A_4B'B_2O_12 (A = Sr, Ba, La; B' = Mn, Co, Ni; B = Sb, Te, W, Re) represent one such system <cit.>.
Depending on the valence states of the B' and B atoms, the A site can be occupied by either a Sr^2+ (Ba^2+) or La^3+ ion, or by their combinations.
Here, the magnetic B' ions form a layered structure with a 3-fold site symmetry [see Fig. <ref>(a) for the B' = Ni^2+ case].
Since the magnetic B' layers are well separated by the nonmagnetic
A- and BO_6 layers, the former give rise to a magnetic
quasi-2D triangular lattice,
which can potentially host magnetic frustrations.
To date, different magnetic ground states have been found to occur
in the A_4B'B_2O_12 family <cit.>,
whose magnetic properties are thought to be determined mostly by the competition between the
ferromagnetic (FM-) B'-O-B-O-B' and antiferromagnetic B'-O-O-B' superexchange interactions, shown by solid- and dashed lines in Fig. <ref>(c) <cit.>. The spin state
of the magnetic B' ions plays a decisive role in the competition between the
two superexchange interactions. As a consequence, A_4CoB_2O_12
(effective spin S = 1/2 for Co^2+) and
Ba_2La_2NiW_2O_12 (S = 1 for Ni^2+) are reported to be ferromagnetic, while
Ba_2La_2MnW_2O_12 (S = 5/2 for Mn^2+) is reported to be antiferromagnetic <cit.>.
Similar superexchange interactions and their competitions have been observed in other triangular-lattice magnets, e.g., Ba_3B'Nb_2O_9 <cit.> and AAg_2B'(VO_4)_2 <cit.>.
Unsurprisingly, such closely competing interactions can be tuned by either external pressure or chemical substitution,
each of which can introduce lattice distortions and modify
the bond lengths and angles <cit.>, thus tuning the magnetic order and frustration.
For example, in A_4CoB_2O_12, the chemical pressure (i.e., the substitution of Ba with Sr and/or La, or W with Re) can tune the FM transition temperature <cit.>.
However, the effects of chemical pressure on the magnetic properties
of A_4NiB_2O_12 have not been investigated in detail.
To clarify the above issues, in this paper, we synthesized polycrystalline samples of A_2La_2NiW_2O_12 (A = Sr, Ba)
and studied their magnetic properties by means of magnetization-, specific-heat-, neutron-scattering-, and muon-spin rotation and relaxation (µSR) measurements. The chemical pressure is introduced by substituting Ba with Sr, which suppresses the
FM transition temperature from 6.3 down to 4.8 K, while the magnetic
moments of the Ni^2+ ions are ferromagnetically aligned along the c-axis in both compounds.
Our results suggest that the chemical pressure indeed changes the superexchange interactions in A_2La_2NiW_2O_12, with the B'-O-B-O-B' superexchange path dominating
the competition between the FM and AFM interactions. External
pressure on Sr_2La_2NiW_2O_12 or chemical substitution on the
Ni site may further tune the magnetic interactions and lead
to magnetic frustration.
§ EXPERIMENTAL DETAILS
The A_2La_2NiW_2O_12 (A = Sr, Ba) polycrystalline samples were prepared by
the solid-state reaction method. Stoichiometric amounts of La_2O_3,
BaCO_3, SrCO_3, NiO, and WO_3 powders were used to prepare the
materials. The La_2O_3 rare-earth oxide was annealed for
15 hours in atmosphere to remove moisture. The powders were then mixed, ground, and sintered at 1200^∘C for 24 hours. After grinding the samples again, the powders were pressed into pellets and sintered at 1200^∘C for an additional 48 hours. The magnetic-susceptibility and heat-capacity measurements were performed
on a Quantum Design magnetic property measurement system (MPMS) and
physical property measurement system (PPMS), respectively.
Neutron powder diffraction (NPD) measurements were carried out at the Swiss Neutron Source SINQ of the Paul Scherrer Institute in Villigen, Switzerland. The A_2La_2NiW_2O_12 powder samples were introduced in cylindrical vanadium cans (8 mm in diameter and 50 mm high) and mounted on a helium cryostat
stick (2–300 K). High-resolution room-temperature NPD patterns were recorded at the powder diffractometer HRPT [Ge (822), λ = 1.154 Å].
To discern the magnetic diffraction peaks, high-intensity NPD patterns were collected at 1.7 K on the DMC diffractometer using a longer wavelength [pyrolitic graphite (002), λ = 2.458 Å].
The collected NPD patterns were analyzed using the Rietveld package of the FullProf suite <cit.>.
The bulk µSR measurements were carried out at the general-purpose
surface-muon instrument (GPS) of the Swiss muon source at Paul Scherrer
Institut, Villigen, Switzerland.
In this study, we performed two types of experiments: zero-field (ZF)-, and longitudinal-field (LF) µSR measurements.
In both cases, we aimed at studying the temperature evolution of the magnetically ordered phase and the spin fluctuations.
The µSR spectra were collected upon sample heating and then analyzed by the software package <cit.>.
§ RESULTS AND DISCUSSION
§.§ Magnetic susceptibility
The A_2La_2NiW_2O_12 samples were first characterized by magnetic-susceptibility measurements. Figures <ref>(a) and (d) show the temperature-dependent magnetic susceptibility χ(T) collected in an applied magnetic field of 0.1 T using a zero-field-cooling (ZFC) protocol. χ(T) shows a sharp increase close to T_c, the
temperature where the Ni^2+ moments give rise to a FM order. The
Curie temperatures T_c can be determined from the derivative
of susceptibility with respect to temperature dχ/dT [see Fig. <ref>(c) and (f)] which, in a 0.1-T applied field, provides a T_c of 6.3 and 4.8 K for
Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively.
The magnetic susceptibility was also measured under various magnetic fields up to 6 T. As shown in Fig. <ref>(b) and (e),
as the magnetic field increases, the transition becomes broader and T_c moves to higher temperatures, both features typical of ferromagnetic materials.
The insets in Fig. <ref>(a) and (d) show the Curie-Weiss fits to
the inverse susceptibility (solid lines), which yield a Weiss temperature θ_p = 7.4 K
for Ba_2La_2NiW_2O_12 and θ_p = 8.4 K for Sr_2La_2NiW_2O_12. The positive θ_p values indicate that FM interactions are
dominant in both compounds.
The estimated effective moments are μ_eff = 3.17 μ_B and 3.13 μ_B for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively.
Both are close to the theoretical value of spin-only
Ni^2+ ions (2.83 μ_B), i.e., assuming a
quenching of the orbital moment, typical of octahedral complexes <cit.> — such as the NiO_6 units in Fig. <ref>(a).
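For completeness, a Curie-Weiss analysis of this kind can be reproduced with a simple non-linear least-squares fit. The Python sketch below uses synthetic susceptibility data (not the measured curves) and the standard molar CGS relation μ_eff ≈ √(8C) μ_B to extract θ_p and the effective moment; it only illustrates the procedure, not the analysis code used for this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta_p):
    """Molar susceptibility χ(T) = C / (T - θ_p) in the paramagnetic regime."""
    return C / (T - theta_p)

# Synthetic data (temperature in K, molar susceptibility in emu/mol), for illustration only.
T = np.linspace(50, 300, 60)
chi = curie_weiss(T, 1.25, 7.4) + np.random.default_rng(1).normal(0, 1e-4, T.size)

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(1.0, 5.0))
mu_eff = np.sqrt(8.0 * C_fit)      # effective moment in units of μ_B (molar CGS convention)
print(f"theta_p = {theta_fit:.1f} K, mu_eff = {mu_eff:.2f} mu_B")
```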
The FM ground state was further confirmed by field-dependent magnetization
measurements (see Fig. <ref>). For T < T_c, a small yet clear
magnetic hysteresis loop is observed. For both materials, the
magnetization starts to saturate for μ_0H > 5 T. After substituting
the Ba with Sr, the magnetism becomes softer. The coercive field
of Ba_2La_2NiW_2O_12 is about 67 mT, while,
in Sr_2La_2NiW_2O_12, it decreases to 4 mT.
Thus, in A_2La_2NiW_2O_12, the chemical pressure suppresses
both the magnetization and the T_c, hence suggesting an
enhancement of the magnetic competition. Nevertheless, the FM
interactions remain dominant also in Sr_2La_2NiW_2O_12.
§.§ Heat capacity
We measured the zero-field heat-capacity
of A_2La_2NiW_2O_12 from 2 to 300 K.
The low-T heat-capacity data were also collected under various
external fields, up to 9 T. As shown in Fig. <ref>,
in both compounds, there is a sharp λ-like transition at
low temperatures, typical of long-range magnetic order.
The C(T) data show a distinct peak at T_c = 6.1 and 4.7 K for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, which are consistent with the T_c values determined from magnetization data (see Fig. <ref>).
To extract the magnetic contribution, the normal-state (i.e., T ≫ T_c)
specific-heat data were fitted to C/T = γ + βT^2, where
γ≡ 0, due to the insulating nature of both compounds [see solid lines in Fig. <ref>(a) and (d)]. The derived β values are 0.0013 and 0.0012 J/mol-K^4 for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, which yield a Debye temperature θ_D = 142 and 145 K, respectively. After subtracting the phonon contribution (i.e, the βT^2 term), the magnetic specific heat C_m/T vs. temperature is plotted in Fig. <ref>(b) and (e) for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively.
Upon increasing the magnetic field, the peak at T_c becomes broader
and moves to higher temperatures, once more confirming the FM nature of
the magnetic transition in both materials.
The zero-field magnetic entropy S_m(T) obtained by
integrating C_m(T)/T is shown in Fig. <ref>(c)
and (f) for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively.
In both compounds, at temperatures close to T_c,
S_m reaches Rln(2) (corresponding to S = 1/2).
In Ba_2La_2NiW_2O_12, at temperatures above T_c,
S_m reaches Rln(3) (corresponding to S = 1), while
in Sr_2La_2NiW_2O_12, S_m is slightly smaller
than Rln(3). Such a deviation is most likely
due to an over-subtraction of the phonon contribution from
the specific-heat data. To properly subtract the phonon contribution
and estimate the magnetic entropy, heat-capacity measurements on
the non-magnetic counterparts, as e.g., A_2La_2ZnW_2O_12, are highly desirable.
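For reference, the magnetic-entropy integration S_m(T) = ∫ (C_m/T') dT' used above can be sketched as follows; the C_m(T) curve in the example is a toy peak, not the measured data, and serves only to illustrate the numerical procedure and the comparison with R ln(2S+1).

```python
import numpy as np

R = 8.314      # molar gas constant, J/(mol K)

def magnetic_entropy(T, Cm):
    """S_m(T) = ∫ (C_m / T') dT', evaluated by trapezoidal integration (sketch)."""
    integrand = Cm / T
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(steps)))

# Toy C_m(T) peak near T_c (J/(mol K)); stands in for the measured magnetic specific heat.
T = np.linspace(2.0, 20.0, 100)
Cm = 3.0 * np.exp(-((T - 6.1) / 1.5) ** 2)

S_m = magnetic_entropy(T, Cm)
print(S_m[-1] / (R * np.log(3)))   # fraction of R ln(3) expected for fully released S = 1 entropy
```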
§.§ Neutron diffraction
To determine the crystal- and magnetic structures of A_2La_2NiW_2O_12,
neutron powder diffraction patterns were collected at both the
paramagnetic (300 K)- and ferromagnetic states (1.7 K).
The room-temperature patterns were first analyzed by using the space group
R3̅m (No. 166), as reported in previous studies <cit.>.
With this model, the powder x-ray diffraction (XRD) patterns could be fitted reasonably well with a goodness of fit χ_r^2 ∼ 7.
However, in case of the NPD patterns, although the Bragg peaks were located at the right positions, the R3̅m space group yielded a fairly large χ_r^2 ∼ 18,
as evinced also from the clear discrepancy
between the observed- and calculated intensities. This indicates
that the space group R3̅m does not describe the crystal
structure of A_2La_2NiW_2O_12 compounds accurately and,
thus, further corrections to the structural model are required.
Considering that neutron diffraction is more sensitive to the oxygen
atoms than x-ray diffraction <cit.>, the oxygen positions are
most likely to require corrections. We found that the space group R3̅ (No. 148) reproduces the
NPD patterns quite well. In fact, both R3̅m and
R3̅ groups belong to the trigonal system, with the latter
exhibiting slightly different oxygen positions.
Figures <ref>(a) and (b) show the Rietveld refinements of NPD
at 300 K using the R3̅ space group for both compounds.
These refinements yield a significantly reduced χ_r^2 ∼ 2,
thus confirming that, in both cases, the R3̅ space group
is more appropriate than R3̅m.
With R3̅, the NiO_6 and WO_6 octahedra rotate in
opposite directions around the c-axis, which breaks the mirror symmetry.
A similar symmetry breaking has been observed also in the
Ba_2La_2NiTe_2O_12 compound <cit.>. The refined lattice parameters, atomic positions, and bond lengths/angles,
together with the goodness of fits are summarized in Table <ref> for A_2La_2NiW_2O_12 compounds.
To clarify the magnetic structure of Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, the NPD patterns were also collected in the magnetically ordered state
(i.e., 1.7 K) using long wavelength neutrons (λ = 2.458 Å).
The LeBail fits of the magnetic diffraction patterns
reveal a commensurate magnetic structure with a propagation
vector k = (0, 0, 0) for A_2La_2NiW_2O_12 compounds.
For such a magnetic vector, the little group G_k is identical to the
space group R3̅ and it includes the symmetry elements
1, 3^+, 3^-, 1̅, 3̅^+, and 3̅^- <cit.>.
The magnetic unit cell of A_2La_2NiW_2O_12 possesses a single orbit with only one site located at the Ni (0, 0, 0) position.
For k = (0, 0, 0), G_k has six different irreducible representations (irreps) τ1, τ2, τ3, τ4, τ5, and τ6, among which only τ1, τ3, and τ5 allow for a long-range magnetic order at the Ni site. Table <ref> summarizes the basis vectors of τ1, τ3, and τ5 irreps calculated with BasIreps.
For the R3̅ space group, the Ni atoms are located at the
3a site (0, 0, 0), invariant under all the symmetry operations.
As a consequence, all the allowed irreps generate a FM coupling with the
spins aligned along the c-axis for τ1, or lying within the ab-plane for τ3 and τ5 (see details in Table <ref>).
According to the Rietveld refinements of the 1.7-K NPD pattern [see Fig. <ref>(c) and (d)], the best fits were obtained by using the τ1 irrep, yielding the smallest
χ_r^2 = 1.93 and 2.77 for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively. The refined magnetic structure is shown in Fig. <ref>(b).
The magnetic moments of Ni atoms obtained from the refinements are 1.94(2) and 1.84(3) μ_B for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12,
consistent with their saturation magnetization (see Fig. <ref>).
§.§ ZF- and LF-µSR
The large gyromagnetic ratio of muons, combined with their
availability as 100% spin-polarized beams, makes ZF-µSR a very sensitive probe for investigating magnetic materials.
Here, to study the magnetic properties of A _2La_2NiW_2O_12
at a local level, we collected a series of ZF-µSR spectra at temperatures covering both the paramagnetic- and ferromagnetic states.
Since neutron diffraction data suggest FM ground states for both Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12
(with the Ni^2+ moments aligned along the c-axis),
for our µSR measurements we focused on Ba_2La_2NiW_2O_12
due to its slightly higher T_c value.
In a magnetic material with a long-range order, the time evolution of ZF-µSR asymmetry,
A_ZF(t), encodes both the intrinsic magnetic fields and their distribution at the muon-stopping site <cit.>.
The ZF-µSR spectra of Ba_2La_2NiW_2O_12 collected at different temperatures are shown in Fig. <ref>(a).
In the paramagnetic state (T > T_c), the ZF-µSR spectra exhibit a relatively
slow muon-spin depolarization (∼0.5–1 µs^-1 at 10 K), indicating rather weak spin fluctuations.
Considering the two muon-stopping sites in Ba_2La_2NiW_2O_12, attributed to two distinct oxygen sites (see Table <ref>),
the ZF-µSR spectra in the paramagnetic state were analyzed using the following model:
A_ZF(t)= ∑_i=1^2 A_i e^-λ^L_it.
Here, λ^L_i represent the longitudinal muon-spin relaxation rates,
while A_i are the asymmetries of the two nonequivalent muon-stopping sites.
In the FM state (T < T_c), the ZF-µSR spectra are characterized by
highly-damped oscillations, typical of long-range magnetic order.
These are clearly visible in Fig. <ref>(b), where short-time
oscillations are superimposed on a long-time slow relaxation.
The ZF-µSR spectra in the FM state were, hence, analyzed using
the following model:
A_ZF(t)= ∑_i=1^2A_i[αcos(ω_it+ϕ)e^-λ^T_it + (1-α)e^-λ^L_it].
Here, α and 1–α are the oscillating (i.e., transverse) and nonoscillating (i.e., longitudinal) fractions of the µSR signal, respectively,
whose initial total asymmetry is equal to A_1 and A_2.
In polycrystalline materials with a long-range magnetic order, one expects α = 2/3, since statistically one third of the muon spins are aligned parallel to the local field direction
(i.e., S_μ∥ B_int) and, hence, do not precess;
ω_i (=γ_μ B_i^int) represents the muon-spin precession frequency,
with γ_μ= 2π×135.5 MHz/T the muon gyromagnetic ratio and B_i^int the local field sensed by muons;
λ^T_i are the transverse muon-spin relaxation rates, reflecting the internal field distributions; ϕ is a shared initial phase.
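As an illustration of this fitting model, the contribution of a single muon-stopping site can be coded as below; the asymmetry, internal-field and relaxation-rate values used in the example call are placeholders of plausible magnitude, not the fitted parameters reported here.

```python
import numpy as np

GAMMA_MU = 2 * np.pi * 135.5    # muon gyromagnetic ratio, rad/(µs·T), i.e. 2π × 135.5 MHz/T

def zf_site(t, A, B_int, lam_T, lam_L, alpha=2/3, phi=0.0):
    """Contribution of one muon-stopping site to the ZF asymmetry in the ordered state."""
    omega = GAMMA_MU * B_int                                    # precession frequency (rad/µs)
    osc = alpha * np.cos(omega * t + phi) * np.exp(-lam_T * t)  # precessing 2/3 fraction
    tail = (1 - alpha) * np.exp(-lam_L * t)                     # non-precessing 1/3 fraction
    return A * (osc + tail)

t = np.linspace(0, 0.2, 400)    # first 0.2 µs, where the oscillations are visible
# Two-site model: the parameter values below are illustrative, not fit results.
A_zf = zf_site(t, 0.12, 0.30, 30.0, 0.5) + zf_site(t, 0.12, 0.10, 10.0, 0.5)
```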
The derived fitting parameters are summarized in Fig. <ref>(c)-(e).
The B_i^int, λ^T_i, and λ^L_i
all show a distinct anomaly at T_c.
The T_c determined from ZF-µSR is consistent with the value determined from magnetic susceptibility and heat capacity (see Figs. <ref> and <ref>).
As shown in Fig. <ref>(c), below T_c, there are two distinct internal fields, here reflecting the two different muon-stopping sites.
In the FM state, the temperature evolution of B^int_i(T)
resembles the typical mean-field curve. To estimate the zero-temperature internal field,
B^int_i(T) was analyzed by means of
a phenomenological model:
B^int_i(T) = B^int_i(0) [1-(T/T_c)^γ]^δ,
where B^int_i(0) is the zero-temperature internal field,
while γ and δ represent two empirical parameters.
As shown by solid lines in Fig. <ref>(c), the above model describes the data reasonably well, yielding B^int_1(0) = 0.30 T
and B^int_2(0) = 0.10 T
for Ba_2La_2NiW_2O_12. The resulting
power exponents are γ = 5.5(2) and δ = 0.54(2) for B_1^int(T), and γ = 4.6(2) and δ = 0.26(1) for B_2^int(T), respectively.
The lack of any anomalies in
B^int_i(T) below T_c is consistent with the simple FM structure of Ba_2La_2NiW_2O_12 (see Fig. <ref>).
In fact, in some complex magnetic materials with multiple transitions,
one observes a more complex B^int(T), since changes in
magnetic structure are reflected in the local-field distribution <cit.>.
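The phenomenological fit of B^int_i(T) can be reproduced with a standard non-linear least-squares routine, as sketched below on synthetic data points (the actual B^int_i(T) values are those plotted in the figure); the initial guesses and noise level are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def b_internal(T, B0, Tc, gamma, delta):
    """Phenomenological model B(T) = B(0) [1 - (T/Tc)^γ]^δ for the internal field."""
    bracket = np.clip(1.0 - (T / Tc) ** gamma, 0.0, None)   # keep the bracket non-negative
    return B0 * bracket ** delta

# Synthetic (T, B_int) points of plausible magnitude, standing in for the measured values.
T = np.array([1.5, 2.0, 3.0, 4.0, 5.0, 5.5, 6.0])
B = b_internal(T, 0.30, 6.3, 5.5, 0.54) + np.random.default_rng(2).normal(0, 2e-3, T.size)

popt, _ = curve_fit(b_internal, T, B, p0=(0.3, 6.3, 4.0, 0.5))
B0_fit, Tc_fit, gamma_fit, delta_fit = popt
```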
The transverse muon-spin relaxation rate λ^T reflects the static magnetic field distribution at the muon-stopping site and is also affected by dynamical effects such as spin fluctuations,
while its longitudinal counterpart λ^L is solely determined by spin fluctuations.
The λ_i^T(T) of Ba_2La_2NiW_2O_12 exhibits the typical behavior of
magnetic materials with a long-range order <cit.>, i.e., diverging at T_c and continuously decreasing well
inside the magnetic state [see Fig. <ref>(d)].
In the paramagnetic state, λ_i^T is zero, due to the lack of a magnetic moment in the absence of an external field.
The λ_i^L(T) in Fig. <ref>(e) shows a similar
behavior to the λ_i^T(T), i.e., λ_i^L(T)
diverges near T_c, followed by a significant drop at T < T_c,
indicating that spin fluctuations are the strongest close to the onset
of the FM order. Note that the absolute values of the longitudinal relaxation rates are much smaller than the transverse ones.
Thus, at 1.5 K, λ^L/λ^T∼ 0.097
and 0.002 for the two different muon-stopping sites.
In the paramagnetic state (i.e., T > 8 K), λ_i^L is
also very small, suggesting weak spin fluctuations in both the ferromagnetic and
paramagnetic states of Ba_2La_2NiW_2O_12.
Such weak spin fluctuations are further supported by LF-µSR measurements.
Figure <ref> shows the 2-K LF-µSR spectra
collected in a longitudinal field of 0.1 and 0.5 T. Once the external
field exceeds the internal field (here, ∼ 0.3 T), the µSR spectra become
almost flat. This suggests that, in Ba_2La_2NiW_2O_12,
muon spins are fully decoupled from the electronic magnetic moments
in a field of 0.5 T.
§ DISCUSSION
Although our comprehensive set of measurements suggest that
both Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12
have FM ground states, the magnetic susceptibility and neutron diffraction
results indicate that the competition between FM- and AFM couplings is indeed tuned
by the chemical
pressure induced by the substitution of Ba- with the smaller Sr ions.
To understand this, we examine the crystal-structure parameters
of A_2La_2NiW_2O_12 (see details in Table <ref>),
including the bond lengths and angles. The latter are directly
related to the magnetic superexchange interactions and, thus, control the
magnetic properties. In A_4B'B_2O_12, the B'O_6 octahedra
share their corners with the BO_6 octahedra via oxygen atoms, thus leading to two superexchange interaction paths, i.e.,
B'-O-B-O-B' and B'-O-O-B' [see details in Fig. <ref>(c)].
According to the Goodenough-Kanamori rule,
which provides the signs of the competitive interactions that are
responsible for non-collinear spin ordering <cit.>,
the B'-O-B-O-B' superexchange interaction (with ∠O-B-O ∼ 90^∘) favors a FM coupling, while the B'-O-O-B' path (with ∠B'-O-O ∼ 120-180^∘) allows for an AFM coupling. Although the R3̅ space group implies
reduced O-B-O and B'-O-O bond angles with respect to the previously
reported R3̅m space group <cit.>, the change is such
that the FM or AFM character of the superexchange interactions is maintained.
For instance, in Ba_2La_2NiW_2O_12, R3̅m gives
∠Ni-O2-O2 = 137.2^∘ and ∠O2-W-O2 = 86.7^∘; while
in R3̅, these bond angles become 121.5^∘ and 84.5^∘.
Consequently, the B'-O-B-O-B' and B'-O-O-B' superexchange
interaction paths remain
valid also in the R3̅ space group.
The competition between these FM and AFM interactions eventually determines
the magnetic ground state of A_4B'B_2O_12.
Since Sr has a smaller atomic radius than Ba, by
replacing Ba with Sr, the lattice constants along the
a- and c-axis are reduced by 1.14% and 2.81%, respectively,
the Ni-O bond length decreases from 2.064 Å to 2.051 Å, while
the Ni-O2-O2 bond angle decreases from 121.50^∘ to 120.62^∘.
By contrast, the W-O bond length and the O2-W-O2 bond angle are
less affected, most likely because the W-O2 layer is further
away from the Ba- or Sr-layers [see Fig. <ref>(a)].
The O2-W-O2 bond angle increases slightly from 84.51^∘ to
84.53^∘. The changes of Ni-O2-O2 and O2-W-O2 bond angles induced by chemical pressure (i.e., the substitution of Ba by Sr)
tune the competition between FM- and AFM superexchange interactions in A_2La_2NiW_2O_12.
The physical pressure might further
tune the competition between the FM- and AFM interactions, and yield
magnetic frustration.
Previous studies reveal that the magnetic ground states of
A_4B'B_2O_12 can also be tuned
by chemical substitution on the B sites <cit.>.
The substitution on the B'-site of Ni may enhance the B'-O-O-B'
AFM interactions and stabilize the AFM ground state. For instance,
Ba_2La_2MnW_2O_12 shows an AFM order below 1.7 K <cit.>.
The Ni^2+ ions can also be substituted by Cu^2+ ions; this case has not yet been studied, although the resulting compound may be another interesting candidate for magnetic frustration. Finally, the introduction of magnetic ions on the A site
(e.g., the substitution of Ba^2+ or Sr^2+ with Eu^2+),
whose magnetic interactions can compete with the above superexchange
interactions, may lead to exotic magnetic properties.
§ CONCLUSION
To summarize, we studied the effects of chemical pressure on the
magnetic triangular-lattice compounds A_2La_2NiW_2O_12 (A = Sr, Ba).
Their magnetic properties (due to the Ni^2+ ions) were investigated by means of magnetic susceptibility, specific heat, neutron diffraction,
and µSR spectroscopy. When replacing Ba with Sr, chemical pressure is introduced which can tune the competition between the FM- and AFM superexchange interactions. While the Curie temperature T_c is suppressed
from 6.3 K to 4.8 K, the FM interactions still persist
in Sr_2La_2NiW_2O_12. According to the refinements of neutron
diffraction patterns, in both compounds, the magnetic moments of
Ni atoms are aligned along the c-axis, with a propagation vector
k = (0, 0, 0).
By using ZF-µSR measurements, we could follow the temperature
evolution of the spin fluctuations and of the local magnetic fields.
The estimated internal fields at zero temperature for the two
different muon-stopping sites are 0.31 and 0.1 T.
The smooth transverse muon-spin relaxation rates λ_T
in the ordered phase confirm the simple FM structure of
A_2La_2NiW_2O_12. In both materials, spin fluctuations
are rather weak, reflected in a small longitudinal muon-spin relaxation rate in both the ferromagnetic- and paramagnetic states.
In the future, it could be interesting to check if the combined
physical pressure and chemical substitution on the A and B' sites can further tune the magnetic competitions in
Sr_2La_2NiW_2O_12, and eventually lead to magnetic frustration or to a quantum spin-liquid state.
This work was supported by the Natural Science Foundation of Shanghai
(Grants No. 21ZR1420500 and 21JC1402300), Natural Science
Foundation of Chongqing (Grant No. 2022NSCQ-MSX1468), and the Schweizerische
Nationalfonds zur Förderung der Wissenschaftlichen Forschung
(SNF) (Grants No. 200021_188706 and 206021_139082). Y.X. acknowledges
support from the Shanghai Pujiang Program (Grant No. 21PJ1403100) and the Natural Science Foundation of China (Grant No. 12274125).
|
http://arxiv.org/abs/2307.04193v1 | 20230709145446 | Some new constructions of optimal linear codes and alphabet-optimal $(r,δ)$-locally repairable codes | [
"Jing Qiu",
"Fang-Wei Fu"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Jing QiuChern Institute of Mathematics and LPMC, Nankai University
Tianjin, 300071, P. R. China
[email protected]
Fang-Wei FuChern Institute of Mathematics and LPMC, Nankai University
Tianjin, 300071, P. R. China
[email protected]
Some new constructions of optimal linear codes and alphabet-optimal (r,δ)-locally repairable codes
Jing Qiu Fang-Wei Fu
======================================================================================================
In distributed storage systems, locally repairable codes (LRCs) are designed to reduce disk I/O and repair costs by enabling recovery of each code symbol from a small number of other symbols.
To handle multiple node failures, (r,δ)-LRCs are introduced to enable local recovery in the event of up to δ-1 failed nodes.
Constructing optimal (r,δ)-LRCs has been a significant research topic over the past decade. In <cit.>, Luo et al. proposed a construction of linear codes by using unions
of some projective subspaces within a projective space. Several new classes of Griesmer codes and distance-optimal codes were constructed, and some of them were proved to be alphabet-optimal 2-LRCs.
In this paper, we first modify the method of constructing linear codes in <cit.> by considering a more general situation of intersecting projective subspaces. This modification enables us to construct good codes with more flexible parameters.
Additionally, we present the conditions for the constructed linear codes to qualify as Griesmer codes or achieve distance optimality. Next, we explore the locality of linear codes constructed by eliminating elements from a complete projective space. The novelty of our work lies in establishing the locality as (2,p-2), (2,p-1), or (2,p)-locality, in contrast to the previous literature that only considered 2-locality. Moreover, by combining analysis of code parameters and the C-M like bound for (r,δ)-LRCs, we construct some alphabet-optimal (2,δ)-LRCs which may be either Griesmer codes or not Griesmer codes. Finally, we investigate the availability and alphabet-optimality of (r,δ)-LRCs constructed from our modified framework.
§ INTRODUCTION
Let 𝔽_q be the finite field with q elements and _q^*=_q∖{0}, where q is any prime power.
In this paper, we assume that p is an odd prime, m is a positive integer, and define [m]≜{1,2,…,m}. Consider 𝔽_q^m as an m-dimensional vector space over 𝔽_q, and let _q^m*=_q^m∖{0}, where 0 denotes the zero vector.
§.§ Griesmer codes
A q-ary [n, k, d] linear code 𝒞 is a k-dimensional subspace of 𝔽_q^n with minimum distance d. For a q-ary [n,k,d] linear code, the Griesmer bound is given by <cit.>
n ≥∑_i=0^k-1⌈d/q^i⌉,
where ⌈·⌉ is the ceiling function.
A linear code achieving the Griesmer bound is called a Griesmer code. If there is no linear code with parameters [n, k, d^' > d] for an [n, k, d] linear code 𝒞, we classify 𝒞 as distance-optimal.
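For concreteness, the Griesmer sum can be evaluated mechanically. The following Python sketch (illustrative only, using nothing beyond the standard library) computes ∑_i=0^k-1⌈ d/q^i⌉ and checks whether given parameters [n,k,d]_q attain the bound.

    from math import ceil

    def griesmer_sum(q, k, d):
        # sum_{i=0}^{k-1} ceil(d / q^i)
        return sum(ceil(d / q**i) for i in range(k))

    def is_griesmer_code(q, n, k, d):
        # an [n, k, d]_q code is a Griesmer code iff n equals the Griesmer sum
        return n == griesmer_sum(q, k, d)

    # e.g. for the [20, 3, 13]_3 code appearing in a later example:
    # griesmer_sum(3, 3, 13) = 13 + 5 + 2 = 20, so the bound is attained.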
Solomon and Stiffler<cit.> utilized unions of mutually disjoint projective spaces to propose an infinite family of binary Griesmer codes.
More recently, Hyun et al. <cit.> constructed infinite families of binary Griesmer codes by utilizing unions of projective subspaces. This construction was later generalized to the p-ary case by Luo et al. <cit.>.
§.§ Locally repairable codes
To reduce the repair bandwidth in massive reliable scale distributed storage system, the concept of locally repairable codes (LRCs) <cit.> emerged. The i-th coordinate of an [n, k] linear code 𝒞 is said to have r-locality if the value at this coordinate can be recovered by accessing at most r other coordinates. If all the coordinates have r-locality, we call 𝒞 an r-LRC.
However, when multiple node failures occur, the original concept of locality may not work.
Prakash et al. <cit.> introduced the concept of (r, δ)-locality of linear codes, where δ≥ 2, which generalized the notion of r-locality. The i-th coordinate of an [n, k] linear code 𝒞 is said to have (r, δ)-locality (δ≥ 2), if there exists a subset S_i ⊂{1, 2, …, n} such that i∈ S_i, |S_i|≤ r+δ-1 and the punctured code 𝒞|_S_i has minimum distance d(𝒞|_S_i) ≥δ; the set S_i∖{i} is termed the repair set of the i-th coordinate. A code 𝒞 is said to have (r, δ)-locality or be an (r,δ)-LRC if all the coordinates of 𝒞 have (r, δ)-locality. Note that (r,δ)-locality reduces to r-locality when δ=2, and we call a code an LRC if it has r-locality or (r,δ)-locality. A Singleton-like bound for the minimum distance of an (r, δ)-LRC is given as follows <cit.>:
d() ≤ n-k- (⌈k/r⌉ -1 )(δ-1) +1.
An (r,δ)-LRC achieving the bound (<ref>) is said to be Singleton-optimal.
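This bound is a simple closed-form expression; a minimal Python sketch (illustrative only) evaluates its right-hand side.

    from math import ceil

    def singleton_like_bound(n, k, r, delta):
        # d <= n - k - (ceil(k/r) - 1) * (delta - 1) + 1
        return n - k - (ceil(k / r) - 1) * (delta - 1) + 1

    # e.g. for a [20, 3, 15]_5 code with (r, delta) = (2, 4), as in a later example,
    # singleton_like_bound(20, 3, 2, 4) = 15, so that code is Singleton-optimal.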
In the last decade, many constructions of Singleton-optimal LRCs have been proposed, for example see <cit.> .
To consider the alphabet size and address practical application needs, Cadambe and Mazumdar <cit.> introduced a new bound called the C-M bound for an [n, k, d] LRC over 𝔽_q with locality r. The C-M bound is given as follows:
k≤min_1≤ t ≤⌈k/r⌉-1{tr+k_ opt^(q)(n-t(r+1), d)},
where k_ opt^(q)(n, d) denotes the maximum dimension of a linear code over 𝔽_q of length n and minimum distance d. An r-LRC achieving the bound (<ref>) is said to be alphabet-optimal.
The C-M like bound for (r,δ)-LRCs was obtained in <cit.> as follows:
k≤min_1≤ t ≤⌈k/r⌉-1{tr+k_ opt^(q)(n-t(r+δ-1), d)}.
An (r,δ)-LRC achieving the bound (<ref>) is also said to be alphabet-optimal in the absence of ambiguity.
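Since k_opt^(q)(n,d) is hard to determine in general, the optimality proofs in this paper bound it from above through the Griesmer bound; substituting any upper bound on k_opt into (<ref>) still yields a valid upper bound on k. The following Python sketch (illustrative only; it assumes ⌈ k/r⌉≥ 2 so that the range of t is nonempty) evaluates the bound in this way.

    from math import ceil

    def kopt_upper_griesmer(q, n, d):
        # largest k with sum_{i<k} ceil(d/q^i) <= n; an upper bound on k_opt^(q)(n, d)
        k, total = 0, 0
        while n >= d and total + ceil(d / q**k) <= n:
            total += ceil(d / q**k)
            k += 1
        return k

    def cm_like_upper(q, n, k, d, r, delta):
        # right-hand side of the C-M like bound, with k_opt replaced by its Griesmer estimate
        return min(t * r + kopt_upper_griesmer(q, n - t * (r + delta - 1), d)
                   for t in range(1, ceil(k / r)))

    # e.g. cm_like_upper(5, 20, 3, 15, 2, 4) = 3, certifying the alphabet-optimality
    # of the [20, 3, 15]_5 (2, 4)-LRC constructed later in the paper.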
It was demonstrated in <cit.> that binary Simplex codes are alphabet-optimal with locality 2. Several infinite families of alphabet-optimal binary LRCs were proposed in <cit.> by considering the punctured Simplex codes. Some alphabet-optimal binary LRCs constructed from partial spreads were presented in <cit.>. Luo and Cao <cit.> constructed seven infinite families of alphabet-optimal binary LRCs by using a general framework for binary linear codes. For the nonbinary cases, Silberstein and Zeh <cit.> proposed several infinite families of alphabet-optimal p-ary LRCs with locality 2 or 3 by puncturing Simplex codes. Tan et al. <cit.> presented some infinite families of q-ary LRCs achieving the bound (<ref>), by determining the localities of some known linear codes. Very recently, Luo and Ling <cit.> proposed more infinite families of alphabet-optimal LRCs with locality 2 by employing the general framework of constructing p-ary linear codes. Note that the method in <cit.> can be also regarded as puncturing the Simplex codes. In fact, almost all the constructed alphabet-optimal LRCs can be regarded as punctured codes of the Simplex code, and most related papers focus on the r-locality. In <cit.>, Fu et al. provided some Singleton-optimal (r,δ)-LRCs from Simplex code and Cap code, but the dimensions of these codes are limited in {3,4}.
In distributed storage systems, to permit a coordinate to be accessed in multiple ways in parallel,
LRCs were generalized to LRCs with availability in <cit.> and <cit.>,
in which case a coordinate has more than one repair set. For this topic, the readers may refer to <cit.>,<cit.>, <cit.>, <cit.>, <cit.>,<cit.>,<cit.>.
§.§ Our contributions and techniques
Our contributions can be summarized as follows:
(i) We modify the method of constructing linear codes proposed in <cit.> by relaxing the restrictions on projective subspaces. This allows us to obtain some optimal codes with more flexible parameters.
(ii) We provide criteria for determining the (2, p-2), (2, p-1), and (2, p)-localities of q-ary linear codes constructed by eliminating elements from a complete projective space. We also propose constructions for p-ary alphabet-optimal (2, p-1), and (2, p)-LRCs. Notably, we prove that p-ary alphabet-optimal 2-LRCs constructed in <cit.> are also alphabet-optimal (2,p-1)-LRCs. Moreover, we point out that the criteria for determining the (r,δ)-localities of p-ary codes can be generalized to determining the (r,δ)-localities of q-ary codes, where q is a prime power.
From which, we prove that q-ary Simplex codes are alphabet-optimal (2, q)-LRCs with respect to the bound (<ref>).
(iii) We demonstrate that the new linear codes constructed from the modified framework are (r, δ)-LRCs with availability. Although we do not have the exact expression of the alphabet size related bound for (r, δ)-LRCs with availability, we can confirm that some of new constructed codes are alphabet-optimal. Specifically, we propose a sufficient condition for these codes to be alphabet-optimal. From which, infinite families of alphabet-optimal (r,δ)-LRCs with availability can be obtained. To the best of our knowledge, there has been no general construction of alphabet-optimal (r,δ)-LRCs with availability.
This paper is organized as follows. Section 2 introduces some basic results that are needed for our discussion. Section 3 is devoted to generalize the framework and constructions of optimal linear codes over 𝔽_p in <cit.>. In Section 4, we present criteria for determining (2,p-2), (2,p-1), and (2,p)-locality of p-ary linear codes constructed by eliminating elements from a complete projective space. From which we can get some alphabet-optimal LRCs over 𝔽_p. Note that in term of locality, the results can be generalized to q-ary codes without any difficulties. In Section 5, we discuss the (r,δ)-locality with availability of the linear codes constructed in Section 3, a sufficient condition for these linear codes to be alphabet-optimal is provided. Finally, Section 6 concludes this paper.
§ PRELIMINARIES
§.§ A general framework of constructing linear codes
In this subsection, we introduce a general construction of linear codes and some basic results
about additive characters and projective spaces over finite fields.
For any vector x = (x_1,…, x_n) ∈_p^n, define the Hamming weight of x as (x) = |{i ∈ [n] : x_i ≠ 0}|. For a linear code 𝒞, let A_i denote the number of codewords in 𝒞 with weight i. The sequence (A_0, A_1, …, A_n) is called the weight distribution of 𝒞. The weight enumerator of 𝒞 is defined as 1 + A_1z + A_2z^2 + … + A_nz^n.
In <cit.>, Ding et al. proposed a universal framework of constructing linear codes based on trace function and a nonempty subset D = {d_1,…, d_n}⊂_p^m. By employing this framework, a p-ary linear code of length n can be formed as follows:
𝒞_D={c_x=(tr(xd_1),…,tr(xd_n)): x∈_p^m},
where tr(·) is the trace function from 𝔽_p^m to 𝔽_p given by
tr(y)=y+y^p+…+y^p^m-1.
Here, c_x represents the codeword corresponding to the element x in the finite field _p^m. The subset D is referred to as the defining set of the linear code 𝒞_D.
Assume that ξ_p is a primitive p-th root of complex unity. For any a ∈𝔽_p^m, an additive character of 𝔽_p^m is defined as the function χ_a(x) = ξ_p^ tr(ax), where x ∈𝔽_p^m. All additive characters of 𝔽_p^m form a group of order p^m with operation χ_a+b(x)=χ_a(x)χ_b(x).
The famous orthogonal relation of additive characters is given as follows:
∑_x∈_p^mχ_a(x)={[ 0, if a≠0,; p^m, if a=0. ].
Suppose that {α_1, …, α_m} is a basis of _p^m over _p, then there exists a unique basis {β_1, …, β_m} of _p^m over _p satisfying
tr(α_iβ_j)={[ 1, if i=j,; 0, if i≠j, ].
for any 1 ≤ i , j ≤ m. We call {β_1, …, β_m} the dual basis of {α_1, …, α_m}.
For any x, y ∈_p^m, we can represent them by x = ∑^m_i=1 x_iα_i and y =∑^m_i=1 y_iβ_i, where x_i , y_i ∈_p for any i ∈ [m]. Then,
x and y can be expressed as vectors 𝐱 = (x_1, x_2, … , x_m) and 𝐲 = (y_1, y_2, … , y_m) in ^m_p, respectively.
The Euclidean inner product of 𝐱 and 𝐲 is defined as 𝐱·𝐲 = ∑^m_i=1x_i y_i.
It can be easily verified that tr(xy) = 𝐱·𝐲. Consequently, we can express the additive character χ_y(x) of _p^m as χ_y(x) = ξ_p^𝐱·𝐲.
Based on the above discussion, the previously defined linear code 𝒞_D is equivalent to
𝒞_𝒟={c_𝐱=(𝐱·𝐝_1,…,𝐱·𝐝_n): 𝐱∈_p^m},
where 𝒟 = {𝐝_1, … , 𝐝_n}⊂^m_p is referred to as the defining set of 𝒞_𝒟. The matrix G=[𝐝_1^T,𝐝_2^T,…,𝐝_n^T] can be regarded as a generator matrix of 𝒞_𝒟, and the rank of G is equal to the dimension of 𝒞_𝒟.
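For small parameters, the code 𝒞_𝒟 can be examined by brute force directly from its defining set. The following Python sketch (illustrative only; exponential in m, so intended for toy examples) represents the elements of 𝒟 as tuples over {0,…,p-1} and computes the minimum distance as the smallest nonzero codeword weight.

    from itertools import product

    def codeword_weight(x, D, p):
        # Hamming weight of c_x = (x . d mod p : d in D)
        return sum(1 for d in D if sum(a * b for a, b in zip(x, d)) % p != 0)

    def min_distance(D, p):
        # brute force over all nonzero messages x in F_p^m
        m = len(D[0])
        return min(codeword_weight(x, D, p)
                   for x in product(range(p), repeat=m) if any(x))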
In <cit.>, Luo et al. introduced new constructions of Griesmer codes and distance-optimal linear codes by considering the defining set as the complement of the union of certain projective subspaces within a projective space.
We will follow the notations in <cit.>. Let V be the m-dimensional vector space 𝔽_p^m. Two nonzero vectors 𝐱=(x_1,x_2,…, x_m) and 𝐲=(y_1 , y_2 , …, y_m) in V are said to be equivalent, denoted by x∼y, if 𝐲 = λ𝐱 for some λ in 𝔽_p^*. The relation ∼ is indeed an equivalence relation. Denote (x_1 : x_2 : …: x_m) the equivalent class consists of all nonzero scalar multiples of (x_1 , x_2 , …, x_m).
The set of all equivalent classes in V is a projective space over 𝔽_p with dimension m-1, termed the projective space of V. The elements of a projective space are called points. For every point (x_1 : x_2 : …: x_m) in the projective space of V, we can use arbitrary nonzero scalar multiple of (x_1 , x_2 , …, x_m) to express the point.
Let 𝒜 be a nonempty subset of [m]. Define an |𝒜|-dimensional vector space over _p by
L_𝒜={(a_1, … , a_m):a_i∈_p if i∈𝒜 and a_i=0 if i∉𝒜}.
Assume that P_𝒜 is the projective space of L_𝒜.
For convenience, we assign the expression of every point in P_𝒜 as the vector of _p^m in the corresponding equivalent class whose first nonzero coordinate is 1. In this way, P_𝒜 can be regarded as a subset of L_𝒜.
It is easy to check that |P_𝒜| = p^|𝒜|-1/p-1 and
L_𝒜∖{0} = ⋃_a∈_p^*aP_𝒜.
Obviously,
L_𝒜∖{0} = P_𝒜 if p = 2. For any two subsets 𝒜_1,𝒜_2 of [m], the intersection of P_𝒜_1 and
P_𝒜_2 is equal to P_𝒜_1∩𝒜_2, where P_∅ = ∅.
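The representatives of P_𝒜 fixed above (first nonzero coordinate equal to 1, support contained in 𝒜) are easy to enumerate; a short Python sketch (illustrative only, with 𝒜 given as a set of 1-based indices):

    from itertools import product

    def projective_points(A, m, p):
        # representatives of P_A inside F_p^m
        pts = []
        for v in product(range(p), repeat=m):
            if any(v) and all(v[i] == 0 for i in range(m) if (i + 1) not in A):
                if next(c for c in v if c != 0) == 1:
                    pts.append(v)
        return pts

    # |P_A| = (p^{|A|} - 1)/(p - 1); e.g. len(projective_points({1, 2}, 3, 3)) == 4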
§.§ Modifying the framework
As we can see from (<ref>), the defining set in the original framework is required to be a subset of ^m_p. In this subsection, we modify the framework by allowing defining set to be a multi-set consisting of vectors from ^m_p.
(Modified framework)
Suppose that s is a positive integer. Let 𝒟_1, 𝒟_2,…, 𝒟_s be subsets of ^m_p.
We can define a p-ary linear code by
𝒞_(𝒟_1,𝒟_2,…,𝒟_s)={c_𝐱=(𝐱 G_1,…,𝐱 G_s): x∈_p^m},
where G_i denotes the matrix whose columns are transpose of vectors of 𝒟_i for all i∈ [s].
Following the approach outlined in <cit.>, we can consider each 𝒟_i(1≤ i
≤ s) as the complement of unions of specific projective subspaces within a projective space. Consequently, the construction of good linear codes can be simplified to design suitable subsets of [m].
Suppose that t > 1 is an integer. Let E={ℰ_1, ℰ_2,…, ℰ_t} be a multi-set whose elements are nonempty subsets of [m]. We call the set
⋃_1≤ i<j≤ t(ℰ_i ∩ℰ_j)
the center of E, denoted by Center(E).
(Property I_s)
Suppose that ℓ > 1 is a positive integer, 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m]. If we can partition the multi-set A={𝒜_i}_i=1^ℓ into the form as A=B_1∪ B_2∪⋯∪ B_s,
where
B_j={ℬ_1^(j), ℬ_2^(j),…, ℬ_ℓ_j^(j)} for any 1≤ j≤ s,
s, ℓ_1,ℓ_2,…,ℓ_s are positive integers satisfying ℓ=∑_i=1^sℓ_i, such that
𝒜_i∖⋃_j=1^s Center(B_j)≠∅
for any 1≤ i ≤ℓ,
then 𝒜_1, 𝒜_2,…, 𝒜_ℓ are said to satisfy Property I_s.
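Both the center and Property I_s are finite set-theoretic conditions and can be checked mechanically. The following Python sketch (illustrative only; a block B_j containing a single set is given an empty center) tests Property I_s for a proposed partition of the multi-set {𝒜_i}_i=1^ℓ.

    def center(E):
        # Center(E): union of the pairwise intersections of the sets in E
        c = set()
        for i in range(len(E)):
            for j in range(i + 1, len(E)):
                c |= E[i] & E[j]
        return c

    def satisfies_property_Is(A_list, partition):
        # partition: a list of blocks B_1, ..., B_s, each a list of the sets A_i
        centers = set().union(*(center(B) for B in partition))
        return all(A - centers for A in A_list)

    # e.g. A1, A2, A3 = {1, 2}, {3}, {1}; the partition [[A1, A2], [A3]] has empty
    # centers, so satisfies_property_Is([A1, A2, A3], [[A1, A2], [A3]]) is True.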
In <cit.>, the authors initially required that 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying 𝒜_i ∖⋃_j∈ [ℓ]∖{i}𝒜_j≠∅ for every i ∈ [ℓ]. These requirements are in fact equivalent to Property I_s with s=1 by the following lemma.
Suppose that ℓ > 1 is a positive integer. Let 𝒜_1, 𝒜_2,…, 𝒜_ℓ be nonempty subsets of [m], and let A be the multi-set {𝒜_i}_i=1^ℓ. For any i∈ [ℓ], 𝒜_i ∖⋃_j∈ [ℓ]∖{i}𝒜_j≠∅ if and only if 𝒜_i∖ Center(A)≠∅.
For any i∈ [ℓ], we have
𝒜_i∖ Center(A) = 𝒜_i∖⋃_1≤ j<k≤ℓ(𝒜_j ∩𝒜_k)
= 𝒜_i∖((𝒜_i∩⋃_j∈ [ℓ]∖{i}𝒜_j)∪ Center(A∖{𝒜_i}) )
= 𝒜_i∖(⋃_j∈ [ℓ]∖{i}𝒜_j∪ Center(A∖{𝒜_i}) )
=𝒜_i∖⋃_j∈ [ℓ]∖{i}𝒜_j,
where the last equation comes from Center(A∖{𝒜_i}) ⊂⋃_j∈ [ℓ]∖{i}𝒜_j.
The proof is completed.
§.§ Some auxiliary lemmas
For 𝒟⊂^m_p, 𝒜⊂ [m] and 𝐱∈^m_p, let χ_𝐱(𝒟) = ∑_𝐲∈𝒟ξ_p^𝐱·𝐲 and let 𝐱_𝒜 be a vector obtained from 𝐱 by removing the coordinates in [m] ∖𝒜.
The following two lemmas play a fundamental role in <cit.> and are also relevant to our proofs.
<cit.>
Assume that 𝒜_1 and 𝒜_2 are subsets of [m] such that they do not contain each
other. Let P_𝒜_i be the projective space of L_𝒜_i defined as in (<ref>), i = 1, 2. Then, for any 𝐱∈𝔽_p^m*, we have
∑_y∈_p^*χ_𝐱(y(P_𝒜_1∪ P_𝒜_2)) ={[ p^|𝒜_1|+p^|𝒜_2|-p^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1=0,𝐱_𝒜_2=0,; p^|𝒜_1|-p^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1=0,𝐱_𝒜_2≠0,; p^|𝒜_2|-p^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1≠0,𝐱_𝒜_2=0,; -p^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1≠0,𝐱_𝒜_2≠0, 𝐱_𝒜_1∩𝒜_2=0,; -1, if 𝐱_𝒜_1≠0,𝐱_𝒜_2≠0, 𝐱_𝒜_1∩𝒜_2≠0. ].
<cit.>
Suppose that ℓ > 1 is a positive integer. Let 𝒜_1,𝒜_2, … ,𝒜_ℓ be nonempty subsets
of [m] satisfying 𝒜_i ∖⋃_j∈ [ℓ]∖{i}𝒜_j≠∅ for any i ∈ [ℓ]. Then
min{∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓP_𝒜_i)):𝐱∈_p^m*}=-1+∑_k=2^ℓ(-1)^k-1∑_1≤ i_1<…<i_k≤ℓp^|⋂_j=1^k𝒜_i_j|
.
In <cit.>, the authors also pointed out that
∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓP_𝒜_i))
=-1+∑_k=2^ℓ(-1)^k-1∑_1≤ i_1<…<i_k≤ℓp^|⋂_j=1^k𝒜_i_j|
if and only if 𝐱_𝒜_i≠0 for all i ∈ [ℓ] and 𝐱_𝒜_i_1∩𝒜_i_2 = 0 for all 1 ≤ i_1 < i_2 ≤ℓ.
Next, we generalize Lemma <ref> to the case of s≥ 1.
Suppose that ℓ > 1, s are positive integers. If 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying Property I_s, let B_j={ℬ_i^(j)}_i=1^ℓ_j for all 1≤ j≤ s are defined as in (<ref>), then
min{∑_j=1^s∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_jP_ℬ_i^(j))):𝐱∈_p^m*}
= ∑_r=1^s(-1+∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|).
From Lemma <ref> we know
min{∑_j=1^s∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_jP_ℬ_i^(j))):𝐱∈_p^m*}
= ∑_r=1^s(-1+∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|)
if and only if
∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_rP_ℬ_i^(r)))=-1+∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|
for all 1≤ r≤ s, if and only if the following conditions are satisfied simultaneously,
(i) 𝐱_ℬ_j^(i)≠0 for any j∈ [ℓ_i] and i ∈ [s],
(ii) 𝐱_ℬ_i_1^(u)∩ℬ_i_2^(u)= 0 for any 1 ≤ i_1 < i_2 ≤ℓ_u and 1≤ u≤ s.
The above conditions are equivalent to
(i)^* 𝐱_𝒜_i≠0 for any i ∈ [ℓ],
(ii)^* 𝐱_∪_j=1^s Center(B_j)= 0.
Such 𝐱 always exists due to Property I_s.
§ NEW CONSTRUCTIONS OF OPTIMAL LINEAR CODES
In <cit.>, by setting the defining set to be complement of unions of some projective subspaces within P_[m], the authors obtained some Griesmer codes and distance-optimal codes with respect to the Griesmer bound. In this section, we generalize the construction in <cit.> by using (<ref>) and subsets of [m] satisfying Property I_s. Specifically, we require each 𝒟_i to be the complement of unions of some projective subspaces within P_[m].
Let p be an odd prime, m, ℓ > 1 and s be positive integers.
Suppose that 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying Property I_s, and B_j={ℬ_i^(j)}_i=1^ℓ_j for all j
∈ [s] are defined as in (<ref>). Let 𝒟_i=P_[m]∖⋃_j=1^ℓ_i P_ℬ_j^(i), 𝒟_i^c=⋃_j=1^ℓ_i P_ℬ_j^(i) for every i
∈ [s].
If p^m-1>∑_i=1^ℓ_rp^|ℬ_i^(r)|-1 for all r
∈ [s],
then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is a linear code over _p with parameters
[sp^m-1/p-1-∑_r=1^s|𝒟_r^c|,m,sp^m-1-∑_i=1^ℓp^|𝒜_i|-1],
where
|𝒟_r^c|=∑_k=1^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1/p-1, r=1,…,s.
From the principle of inclusion and exclusion (PIE), we get the form of |𝒟_r^c| for each r ∈ [s] directly.
For any 𝐱∈_p^m*, by the orthogonal relation of additive characters, we have
wt(c_𝐱) = ∑_r=1^s(p^m-1/p-1-|𝒟_r^c|-|{𝐝∈𝒟_r: 𝐱·𝐝=0}|)
(a)=∑_r=1^s((p^m-1/p-1-|𝒟_r^c|)p-1/p+1/p+1/p∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_rP_ℬ_i^(r)))),
where (a) can be found in the proof of Theorem 3.1 of <cit.>.
It then follows from Lemma <ref> that the minimum value of wt(c_𝐱) for any 𝐱∈_p^m* is
∑_r=1^s(p^m-1- ∑_i=1^ℓ_rp^|ℬ_i^(r)|-1)=sp^m-1-∑_i=1^ℓp^|𝒜_i|-1.
So the minimum distance of 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is sp^m-1 -∑_i=1^ℓp^|𝒜_i|-1>0.
It is evident that wt(c_𝐱)= 0 if and only if 𝐱 = 0, hence 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) has dimension m.
Next, we will discuss the optimality of the linear codes given by Theorem <ref> with respect to the
Griesmer bound. We follow the notations in <cit.>.
Let 𝒜_1, 𝒜_2,…, 𝒜_ℓ be subsets of [m].
Assume that |𝒜_1| = … = |𝒜_i_1| =s_1, |𝒜_i_1+1| = …= |𝒜_i_2| = s_2,…, |𝒜_i_t-1+1| = … = |𝒜_ℓ| = s_t, where s_1 < s_2 < … < s_t and t ≤ℓ.
Then ∑_i=1^ℓ p^|𝒜_i| = ∑_i=1^ta_ip^s_i, where a_i denotes the number of subsets of size s_i in 𝒜_1, 𝒜_2,…, 𝒜_ℓ for any i ∈ [t].
Put M(𝒜_1, 𝒜_2,…, 𝒜_ℓ) = max{a_i : i = 1,…, t}. Suppose that P(∑_i=1^ℓ p^|𝒜_i|-1) =∑_i=g^hb_ip^i is the p-adic expansion of ∑_i=1^ℓ p^|𝒜_i|-1 with coefficients b_i in {0, 1,…, p-1} and b_g≠ 0, b_h ≠ 0.
Let C(∑_i=1^ℓp^|𝒜_i|-1)=∑_i=g^hb_i and let v_p(∑_i=1^ℓp^|𝒜_i|-1) be the
p-adic valuation of ∑_i=1^ℓp^|𝒜_i|-1.
It is easy to see that v_p(∑_i=1^ℓp^|𝒜_i|-1)= g.
Let the notation be the same as in Theorem <ref>.
(1)Then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is a Griesmer code if and only if ℬ_1^(r),ℬ_2^(r),
… ,ℬ_ℓ_r^(r) are mutually disjoint for each r∈ [s] and
M(𝒜_1,𝒜_2, … ,𝒜_ℓ) ≤ p- 1.
(2)If
∑_r=1^s|𝒟_r^c|> ∑_i=1^ℓ p^|𝒜_i|-C(∑_i=1^ℓ p^|𝒜_i|-1)/p-1-v_p(∑_i=1^ℓp^|𝒜_i|-1)-1,
then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is distance-optimal with respect to the Griesmer bound.
(1) Note that 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is a p-ary linear code with parameters
[sp^m-1/p-1-∑_r=1^s|𝒟_r^c|,m,sp^m-1-∑_i=1^ℓp^|𝒜_i|-1].
From
∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉ = ∑_j=0^m-1⌈sp^m-1-P(∑_i=1^ℓp^|𝒜_i|-1)/p^j⌉
=∑_j=0^m-1⌈sp^m-1-∑_i=g^hb_ip^i/p^j⌉
=s∑_j=0^m-1p^m-1-j-∑_i=g^hb_i(∑_j=0^ip^i-j)
=sp^m-1/p-1-∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1
,
we know that 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is a Griesmer code if and only if
∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1
= ∑_r=1^s(∑_k=1^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1)/p-1,
i.e.,
C(∑_i=1^ℓp^|𝒜_i|-1)=
-∑_r=1^s(∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1).
For simplicity, we use LHS to denote the left hand side of equation (<ref>), and RHS to denote the right hand side of equation (<ref>).
Then we can rewrite (<ref>) as LHS= RHS.
It can be easily seen that for each r∈ [s],
|𝒟_r|=∑_k=1^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1/p-1≤∑_i=1^ℓ_rp^|ℬ_i^(r)|-ℓ_r/p-1,
which implies that
∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1≤ -ℓ_r,
where the equality holds if and only if ℬ_1^(r),ℬ_2^(r), … ,ℬ_ℓ_r^(r) are mutually disjoint.
Hence, RHS≥∑_r=1^sℓ_r=ℓ.
On the other hand, we observe that LHS=C(∑_i=1^ℓp^|𝒜_i|-1)=∑_i=g^hb_i≤ℓ, where the equality holds if and only if M(𝒜_1,𝒜_2, … ,𝒜_ℓ) ≤ p-1.
In summary, we have ℓ≥ LHS, and RHS≥ℓ.
Therefore, (<ref>) holds if and only if RHS=ℓ and LHS=ℓ, if and only if ℬ_1^(r),ℬ_2^(r), … ,ℬ_ℓ_r^(r) are mutually disjoint for every 1≤ r≤ s and M(𝒜_1,𝒜_2, … ,𝒜_ℓ) ≤ p- 1.
(2) For any positive integer t,
∑_j=0^m-1 ⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1+t/p^j⌉
≥∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1+1/p^j⌉
=sp^m-1/p-1-∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1+v_p(∑_i=1^ℓp^|𝒜_i|-1)+1.
Due to ∑_r=1^s|𝒟_r^c|>∑_i=1^ℓ p^|𝒜_i|-C(∑_i=1^ℓ p^|𝒜_i|-1)/p-1-v_p(∑_i=1^ℓp^|𝒜_i|-1)-1, we have
sp^m-1/p-1-∑_r=1^s|𝒟_r^c| < sp^m-1/p-1-∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1+v_p(∑_i=1^ℓp^|𝒜_i|-1)+1
≤∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1+t/p^j⌉.
According to the Griesmer bound, there is no p-ary [sp^m-1/p-1-∑_r=1^s|𝒟_r^c|,m,d> sp^m-1-∑_i=1^ℓp^|𝒜_i|-1]
linear code. Therefore, the linear code 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) with parameters [sp^m-1/p-1-∑_r=1^s|𝒟_r^c|, m, sp^m-1-∑_i=1^ℓp^|𝒜_i|-1] is distance-optimal with respect to the Griesmer bound.
Theorems 3.1 and 3.2 in <cit.> can be regarded as special cases of Theorems <ref> and <ref> with s=1, respectively.
Suppose that 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying Property I_1, then for any integer t≥ 1, the t-copies of 𝒜_1, 𝒜_2,…, 𝒜_ℓ satisfy Property I_t. From Theorem <ref>, we can derive that
the r-repetition of Griesmer codes constructed from Theorem 3.2 in <cit.> are also Griesmer codes, where 1≤ r ≤⌊p-1/M(𝒜_1,𝒜_2, … ,𝒜_ℓ)⌋.
In the follows, we will give two corollaries of Theorems <ref> and <ref> by considering the case of s=2, to show that it is possible to construct good linear codes with new parameters, and that sometimes the weight distribution can be easily determined.
Let p≥ 3 be an odd prime and let m be a positive integer. Suppose that 𝒜_1, 𝒜_2 𝒜_3, 𝒜_4 are nonempty subsets of [m] such that
(i) 𝒜_3⊆𝒜_1, 𝒜_4⊆𝒜_2,
(ii) 𝒜_1∩𝒜_2=∅,
(iii) M(𝒜_1, 𝒜_2,𝒜_3, 𝒜_4)≤ p-1, and
(iv) p^m > p^|𝒜_1| + p^|𝒜_2|, p^m > p^|𝒜_3|+ p^|𝒜_4|.
If 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4),
then 𝒞_(𝒟_1,𝒟_2) constructed by (<ref>) is a p-ary [2p^m-∑_i=1^4p^|𝒜_1|+2/p-1 ,m, 2p^m-1-∑_i=1^4p^|𝒜_i|-1]
Griesmer code, whose weight distribution is listed in Table <ref>.
From (i)-(ii), we can check that 𝒜_1, 𝒜_2, 𝒜_3, 𝒜_4 satisfy Property I_2. Together with (iii)-(iv), it follows from Theorem <ref>
that 𝒞_(𝒟_1,𝒟_2) is a Griesmer code over _p. For any 𝐱∈^m*_p , the weight of a codeword c_𝐱 is
(c_𝐱)=2p^m-1-∑_i=1^4p^|𝒜_i|-1+
4/p+1/p∑_y∈_p^*χ_𝐱(y(P_𝒜_1∪ P_𝒜_2))+1/p∑_y∈_p^*χ_𝐱(y(P_𝒜_3∪ P_𝒜_4)).
According to Lemma <ref>, we have (c_𝐱) =
{[ 2p^m-1, if 𝐱_𝒜_1=0, 𝐱_𝒜_2=0,; 2p^m-1-p^|𝒜_2|-1, if 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0, 𝐱_𝒜_4=0,; 2p^m-1-p^|𝒜_2|-1-p^|𝒜_4|-1, if 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0, 𝐱_𝒜_4≠0,; 2p^m-1-p^|𝒜_1|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0,𝐱_𝒜_3=0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_3|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0,
𝐱_𝒜_3≠0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0,
𝐱_𝒜_3=0, 𝐱_𝒜_4=0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_4|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0,
𝐱_𝒜_3=0, 𝐱_𝒜_4≠0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0,
𝐱_𝒜_3≠0, 𝐱_𝒜_4=0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0,
𝐱_𝒜_3≠0, 𝐱_𝒜_4≠0. ].
The multiplicity corresponding to each weight follows.
Below we give an example to illustrate Corollary <ref>.
Let p = m = 3, 𝒜_1 = {1, 2}, 𝒜_2 = {3}, 𝒜_3 = {1}, 𝒜_4 = ∅. Let 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4), 𝒞_(𝒟_1,𝒟_2) be the 3-ary code constructed by (<ref>), from Corollary <ref>, we know 𝒞_(𝒟_1,𝒟_2) is a Griesmer code [20,3,13]_3 with weight enumerator 1+12x^13+10x^14+2x^15+2x^17.
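The stated parameters and weight enumerator can be reproduced by a direct brute-force computation; the following self-contained Python sketch (illustrative only) builds 𝒟_1 and 𝒟_2 as above and tallies the nonzero codeword weights.

    from itertools import product
    from collections import Counter

    p, m = 3, 3

    def proj_points(A):
        # representatives of P_A in F_3^3 (first nonzero coordinate 1, support in A)
        return [v for v in product(range(p), repeat=m)
                if any(v) and all(v[i] == 0 for i in range(m) if (i + 1) not in A)
                and next(c for c in v if c != 0) == 1]

    P_full = proj_points({1, 2, 3})
    bad1 = set(proj_points({1, 2})) | set(proj_points({3}))
    bad2 = set(proj_points({1}))
    D = [v for v in P_full if v not in bad1] + [v for v in P_full if v not in bad2]

    weights = Counter()
    for x in product(range(p), repeat=m):
        if any(x):
            weights[sum(1 for d in D if sum(a * b for a, b in zip(x, d)) % p != 0)] += 1
    # len(D) == 20 and weights == {13: 12, 14: 10, 15: 2, 17: 2},
    # matching the enumerator 1 + 12x^13 + 10x^14 + 2x^15 + 2x^17 stated above.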
Let p be an odd prime and m a positive integer. Suppose that 𝒜_1, 𝒜_2, 𝒜_3, 𝒜_4 are nonempty subsets of [m] such that
(i) 𝒜_3⊆𝒜_1, 𝒜_4⊆𝒜_2,
(ii) 𝒜_1∩𝒜_2≠∅,
(iii) 𝒜_3⊈𝒜_1∩𝒜_2, 𝒜_4⊈𝒜_1∩𝒜_2, and
(iv) p^m > p^|𝒜_1| + p^|𝒜_2|.
If 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4),
then 𝒞_(𝒟_1,𝒟_2) constructed by (<ref>) is a p-ary linear code with parameters
[2p^m-∑_i=1^4p^|𝒜_i|+p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1 ,m, 2p^m-1 -∑_i=1^4 p^|𝒜_i|-1].
Furthermore, without loss of generality, let |𝒜_3|=min{|𝒜_i|: i=1,2,3,4}, then 𝒞_(𝒟_1,𝒟_2) is distance-optimal with respect to the Griesmer bound if any one of the following conditions is satisfied.
(1) |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|-2/p-1, if M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)≤ p-1.
(2) |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1, if p=3, |𝒜_1|=|𝒜_2|=|𝒜_4|>|𝒜_3|.
(3) |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1-1, if p=3, |𝒜_1|>|𝒜_2|=|𝒜_3|=|𝒜_4|.
(4) |𝒜_3|>p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1-1, if p=3, |𝒜_1|=|𝒜_2|=|𝒜_3|=|𝒜_4|.
From conditions (i)-(iv) and Theorem <ref>, the parameters of 𝒞_(𝒟_1,𝒟_2) follows.
(1) When M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)≤ p-1, it is easy to see that
C(∑_i=1^4p^|𝒜_i|-1)=4, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|-1.
Due to |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|-2/p-1, we obtain that
|𝒟_1^c|+|𝒟_2^c| =∑_i=1^4p^|𝒜_i|-p^|𝒜_1∩𝒜_2|-p^|𝒜_3∩𝒜_4|-2/p-1
>∑_i=1^4p^|𝒜_i|-4/p-1-|𝒜_3|
= ∑_i=1^4 p^|𝒜_i|-C(∑_i=1^4 p^|𝒜_i|-1)/p-1-v_p(∑_i=1^4p^|𝒜_i|-1)-1.
According to Theorem <ref>, 𝒞_(𝒟_1,𝒟_2) is distance-optimal with respect to the Griesmer bound.
(2-4) Since M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)≤ 4, M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)> p-1 if and only if p=3.
M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)=3 ⇔{[ |𝒜_1|=|𝒜_2|=|𝒜_4|>|𝒜_3|; |𝒜_1|>|𝒜_2|=|𝒜_3|=|𝒜_4| ].
⇔{[ C(∑_i=1^4p^|𝒜_i|-1)=2, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|-1.; C(∑_i=1^4p^|𝒜_i|-1)=2, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|. ].
M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)=4 ⇔
|𝒜_1|=|𝒜_2|=|𝒜_4|=|𝒜_3|
⇔
C(∑_i=1^4p^|𝒜_i|-1)=2, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|.
These three circumstances corresponding to (2-4) respectively, the remaining proofs of (2-4) can be done similarly to (1).
In general, it is complicated to calculate the weight distribution of code constructed in Corollary <ref>, as we need to consider the following 15 cases:
{[ 𝐱_𝒜_1=0, 𝐱_𝒜_2=0,; 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0,{[ 𝐱_𝒜_4=0,; 𝐱_𝒜_4≠0, ].; 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0,{[ 𝐱_𝒜_3=0,; 𝐱_𝒜_3≠0, ].; 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0,
{[ 𝐱_𝒜_3=0,𝐱_𝒜_4=0,
{[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3=0,𝐱_𝒜_4≠0,
{[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3≠0, 𝐱_𝒜_4=0,
{[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3≠0, 𝐱_𝒜_4≠0,
{[ 𝐱_𝒜_3∩𝒜_4=0,
{[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3∩𝒜_4≠0,
{[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0. ]. ]. ]. ].
Correspondingly, (c_𝐱) is equal to
{[ 2p^m-1,; 2p^m-1-p^|𝒜_2|-1,; 2p^m-1-p^|𝒜_2|-1-p^|𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_3|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1+p^|𝒜_1∩𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_4|-1+p^|𝒜_1∩𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1+p^|𝒜_1∩𝒜_2|,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1+p^|𝒜_1∩𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1+p^|𝒜_3∩𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1+p^|𝒜_3∩𝒜_4|-1+p^|𝒜_1∩𝒜_2|-1. ].
We will not give the explicit expression of the weight distribution here, but we will illustrate the computation with an example.
Let p = 3, m=4 and 𝒜_1 = {1, 2, 3}, 𝒜_2 = {3, 4}, 𝒜_3 = {1, 2}, 𝒜_4 = {3, 4}. Let 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4), 𝒞_(𝒟_1,𝒟_2) be the 3-ary code constructed by (<ref>). From Corollary <ref>, we know that the parameters of 𝒞_(𝒟_1,𝒟_2) are [56,4,36]_3. Referring to the above analysis, for 𝐱∈_3^4*, we have
(c_𝐱)={[ 2·3^3-3-3, 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0,; 2·3^3-3^2, 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0, 𝐱_𝒜_3=0,; 2·3^3-3^2-3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0, 𝐱_𝒜_3≠0,; 2·3^3-3^2-3-3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3=0,𝐱_𝒜_1∩𝒜_2=0,; 2·3^3-3^2-3-3+3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3=0,𝐱_𝒜_1∩𝒜_2≠0,; 2·3^3-3^2-3-3-3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3≠0,𝐱_𝒜_1∩𝒜_2=0,; 2·3^3-3^2-3-3-3+3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3≠0,𝐱_𝒜_1∩𝒜_2≠0. ].
So 𝒞_(𝒟_1,𝒟_2) here is a 5-weight code with nonzero weights {36,39,42,45,48}.
According to the best known linear code from the Magma BKLC (_3, 56, 4) with weight enumerator 1+76x^36+4x^45, it can be seen that 𝒞_(𝒟_1,𝒟_2) is a new distance-optimal linear code.
§ ALPHABET-OPTIMAL (R,Δ)-LRCS
In this section, we will revisit the locality of codes constructed in <cit.>, i.e., the codes in Theorem <ref> with s=1. It turns out that the constructed codes in <cit.>, which are alphabet-optimal 2-LRCs, can also be characterized as alphabet-optimal (2,p-1)-LRCs. Furthermore, we will investigate the conditions under which the codes constructed from (<ref>) possess (2, p) and (2, p-2) localities. Some new alphabet-optimal (r,δ)-LRCs are also provided.
Recall that
P_[m] = {(1, a_2, a_3, … , a_m) : a_2, … , a_m ∈_p}∪
{(0, 1, a_3, … , a_m) : a_3, … , a_m ∈_p}∪…∪{(0, 0, … , 1)}.
Let P̂_[m]={(1, a_2, … , a_m) : a_2, … , a_m ∈_p^*}.
When p^m > ∑_i=1^ℓ p^|𝒜_i|, which
implies that 𝒜_1,𝒜_2, … ,𝒜_ℓ are proper subsets of [m], then P̂_[m] is a subset of 𝒟= P_[m]∖⋃_i=1^ℓP_𝒜_i, as P̂_[m]∩ P_𝒜_i=∅ for any 1≤ i≤ℓ.
Next, we will give a formal definition of (r,δ)-LRCs from the aspect of generator matrix.
Let 𝒞 be a p-ary linear code with generator matrix
G = [𝐠_1, …, 𝐠_n].
The i-th coordinate, 1 ≤ i ≤ n, of 𝒞 is said to have (r,δ)-locality if there exists a subset ℐ⊂{𝐠_1, …, 𝐠_n} containing 𝐠_i such that
(1) |ℐ| ≤ r+δ-1,
(2) any δ-1 vectors in ℐ are linear combinations of the remaining vectors in ℐ.
If all the coordinates of 𝒞 have (r,δ)-locality, then 𝒞 is called an (r,δ)-locally repairable code, or in short, (r,δ)-LRC.
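Condition (2) can be tested mechanically: it holds if and only if deleting any δ-1 vectors of ℐ does not decrease the rank of ℐ over 𝔽_p (condition (1) on |ℐ| is checked separately). A brute-force Python sketch of this test (illustrative only; vectors are tuples over {0,…,p-1} and the rank is computed by Gaussian elimination modulo p):

    from itertools import combinations

    def rank_mod_p(vectors, p):
        rows = [list(v) for v in vectors]
        if not rows:
            return 0
        rank, col, width = 0, 0, len(rows[0])
        while rank < len(rows) and col < width:
            piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
            if piv is None:
                col += 1
                continue
            rows[rank], rows[piv] = rows[piv], rows[rank]
            inv = pow(rows[rank][col], -1, p)
            rows[rank] = [(inv * a) % p for a in rows[rank]]
            for r in range(len(rows)):
                if r != rank and rows[r][col] % p:
                    f = rows[r][col]
                    rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
            rank, col = rank + 1, col + 1
        return rank

    def is_repair_group(I, p, delta):
        # any delta-1 vectors of I lie in the span of the remaining ones
        full = rank_mod_p(I, p)
        return all(rank_mod_p([v for j, v in enumerate(I) if j not in S], p) == full
                   for S in map(set, combinations(range(len(I)), delta - 1)))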
As previously mentioned, P_[m] can be considered as a subset of L_[m]. Therefore, we can refer to the elements of P_[m] as vectors without any ambiguity. In P_[m], the first nonzero coordinate of any vector is 1, which means that the linear combination of vectors in P_[m] may not necessarily belong to P_[m]. For the sake of convenience in later discussions, for any 𝐟∈ L_[m], we denote by [𝐟] the vector equivalent to 𝐟 whose first nonzero coordinate is 1.
§.§ (2,p-2)-LRCs
In this subsection, we will explore the (2,p-2)-locality of p-ary codes constructed by (<ref>) via analysing the linear dependence among vectors in the defining sets.
Let p≥ 5 be an odd prime, 𝔽_p={0,-1,α_1,α_2,…,α_p-2}.
(1) For any 𝐠∈P̂_[m], there exists 𝐡∈P̂_[m], 𝐡≠𝐠, such that
[𝐡+α_i𝐠]∈P̂_[m]
for all i ∈ [p-2]∖{j}, where j is arbitrarily chosen from [p-2].
(2) For any 𝐠∈ P_[m]∖P̂_[m], there exists 𝐡∈P̂_[m] such that
[𝐡+α_i𝐠]∈P̂_[m]
for all i ∈ [p-2].
(1) For 𝐠∈P̂_[m], we can write 𝐠 = (1, g_2,…, g_m), where g_i≠ 0 for every i∈{2,…,m}.
For any j∈ [p-2], if we choose 𝐡=(1,-α_jg_2,…,-α_jg_m)∈P̂_[m], then
[𝐡+α_i𝐠]=1/1+α_i(1+α_i, (α_i-α_j)g_2,…,(α_i-α_j)g_m)∈P̂_[m]
for all i ∈ [p-2]∖{j}.
(2) For 𝐠∈ P_[m]∖P̂_[m], denote by u the position of the first nonzero component of 𝐠, where 1≤ u≤ m. Then we can express 𝐠 as follows
𝐠 = (0,…,0^u-1,1, g_u+1,…, g_m).
Since 𝐠∉P̂_[m], there is a subset Z⊆{u+1,u+2,…,m} such that g_j=0 for j∈ Z, g_j≠ 0 for j∈{u+1,u+2,…,m}∖ Z, where u-1+|Z|≥ 1.
Let 𝐡 be an arbitrary element in
{(1,…,1^u, h_u+1,…, h_m): h_r≠ 0 if r∈ Z, h_r= g_r if r∈{u+1,u+2,…,m}∖ Z}.
It is easy to check that 𝐡∈P̂_[m] and
[𝐡+α_i𝐠]∈P̂_[m]
for all i ∈ [p-2].
Below we give an example to illustrate Lemma <ref>.
Let p=5, m=4, 𝔽_p={0,-1,α_1,α_2,…,α_p-2}={0,-1,1,2,3}.
(1) For 𝐠 = (1, 2, 3, 4), let j=3, from Lemma <ref>, we can choose 𝐡=(1,4,1,3), and one can check that [𝐡+𝐠]∈P̂_[4], [𝐡+2𝐠]∈P̂_[4], [𝐡+3𝐠]∉P̂_[4].
(2) For 𝐠 = (1, 0, 3, 4), from Lemma <ref>, we can choose 𝐡=(1,1,3,4), and one can check that [𝐡+𝐠]∈P̂_[4], [𝐡+2𝐠]∈P̂_[4], [𝐡+3𝐠]∈P̂_[4].
(3) For 𝐠 = (0, 1, 0, 4), from Lemma <ref>, we can choose 𝐡=(1,1,3,4), and one can check that [𝐡+𝐠]∈P̂_[4], [𝐡+2𝐠]∈P̂_[4], [𝐡+3𝐠]∈P̂_[4].
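Each verification in this example amounts to normalizing 𝐡+α𝐠 and checking whether every coordinate is nonzero; a short Python sketch of case (1) (illustrative only):

    p = 5

    def normalize(v):
        # the representative [v]: scale so that the first nonzero coordinate is 1
        inv = pow(next(c for c in v if c % p), -1, p)
        return tuple((inv * c) % p for c in v)

    def in_P_hat(v):
        # membership in P̂_[4]: every coordinate of the representative is nonzero
        return all(c % p for c in v)

    g, h = (1, 2, 3, 4), (1, 4, 1, 3)      # case (1), with j = 3
    checks = [in_P_hat(normalize(tuple((b + a * c) % p for b, c in zip(h, g))))
              for a in (1, 2, 3)]
    # checks == [True, True, False], exactly as stated in case (1).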
By combining Definition <ref> and Lemma <ref>, we can establish a criterion for determining the (2,p-2)-locality of codes constructed from (<ref>).
Let p≥ 5 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. Suppose that 𝒜_1, 𝒜_2, …, 𝒜_ℓ are proper subsets of [m] such that they do not contain each other. Let 𝒟= P_[m]∖ (⋃_i=1^ℓ P_𝒜_i), then 𝒞_𝒟 constructed by (<ref>) is a (2,p-2)-LRC.
Since 𝒜_1, 𝒜_2, …, 𝒜_ℓ are proper subsets of [m], P̂_[m]⊂𝒟.
From Lemma <ref>, for each 𝐠∈P̂_[m], let j=p-2, then there exists 𝐡∈P̂_[m], 𝐡≠𝐠, and a set ℐ={𝐠,𝐡,[𝐡+α_1𝐠],…, [𝐡+α_p-3𝐠]}⊂P̂_[m]⊂𝒟 of size p-1 such that any 2 vectors in ℐ can be linearly combined to get the remaining vectors in ℐ. From Definition <ref>, the coordinates occupied by P̂_[m] have (2,p-2)-locality.
Similarly, we can prove that the coordinates occupied by 𝒟∖P̂_[m] have (2,p-1)-locality. Overall, 𝒞_𝒟 is a (2,p-2)-LRC.
The part (1) of Theorem 4.2 in <cit.> can be obtained by Theorem <ref> .
§.§ (2,p-1)-LRCs
In this subsection, after adding an extra requirement to the defining sets and utilizing the results in Subsection <ref>, we will determine the (2,p-1)-locality of p-ary codes constructed by (<ref>).
Let p≥ 3 be an odd prime, ℓ≥ 1 and m≥ 2 be integers.
Suppose that 𝒜_1, 𝒜_2, …, 𝒜_ℓ are proper subsets of [m] such that they do not contain each other, let 𝒟= P_[m]∖ (⋃_i=1^ℓ P_𝒜_i). If there exists a subset 𝒜^*⊂ [m] with size m-1 such that 𝒜^*≠𝒜_i for all i∈ [ℓ],
then 𝒞_𝒟 constructed by (<ref>) is a (2,p-1)-LRC.
Let 𝔽_p={0,-1,α_1,α_2,…,α_p-2}. According to Lemma <ref> and Theorem <ref>, if we can show that for any 𝐠∈P̂_[m]⊂𝒟, there exists 𝐡∈𝒟 such that
|{𝐠,𝐡,[𝐡-𝐠], [𝐡+α_1𝐠], …, [𝐡+α_p-2𝐠]}∩𝒟|≥ p,
the proof is done.
For simplicity, define [𝐠,𝐡]:={𝐠,𝐡,[𝐡-𝐠], [𝐡+α_1𝐠], …, [𝐡+α_p-2𝐠]}. We can see that in [𝐠,𝐡], any 2 vectors can be linearly combined to obtain all the remaining vectors.
Next, we prove the theorem by considering the following two cases.
Case (i): Suppose 𝒜^*={2,3,…,m}.
For any 𝐠=(1,g_2,…,g_m)∈P̂_[m], where g_i≠ 0 for all 2≤ i≤ m, let 𝐡=(h_1,h_2,…,h_m)=g_2^-1(0,g_2,…,g_m). For each i∈ [ℓ], since 𝒜^*∖𝒜_i≠∅, there exists t_i∈𝒜^* such that t_i∉𝒜_i. As h_t_i≠ 0, we have 𝐡∉ P_𝒜_i for any i∈ [ℓ]. Hence 𝐡∉⋃_j=1^ℓP_𝒜_j, i.e., 𝐡∈𝒟.
It is easy to verify that [𝐡+(-g_2^-1𝐠)]=(1,0,…,0) is the only possible element in [𝐠,𝐡] which does not belong to 𝒟, and all the remaining elements in [𝐠,𝐡]∖{𝐡} belong to P̂_[m]⊂𝒟. So
|[𝐠,𝐡]∩𝒟|≥ p.
Case (ii): Suppose 𝒜^*=[m]∖{j} for some j∈{2, …, m}.
For any 𝐠=(1,g_2,…,g_m)∈P̂_[m], where g_i≠ 0 for all 2≤ i≤ m, let 𝐡=(1,h_2, h_3,…,h_m) with h_i=g_i for i∈𝒜^*∖{1}, and h_j=0.
Similarly, we can check that 𝐡∈𝒟,
[𝐡-𝐠] = (0,…,0^j-1,1, 0,…, 0)
is the only possible element in [𝐠,𝐡] which does not belong to 𝒟, and all the remaining elements in [𝐠,𝐡]∖{𝐡} belong to P̂_[m]⊂𝒟. So
|[𝐠,𝐡]∩𝒟|≥ p.
In summary, the proof is completed.
The part (2) of Theorem 4.2 in <cit.> is a special case of Theorem <ref> with p=3.
Below we give an example to illustrate Theorem <ref>.
Let p=5, m=4, 𝔽_p={0,-1,α_1,α_2,α_3}={0,-1,1,2,3}.
(1) For 𝒜^*={2,3,4}, 𝐠 = (1, 2, 3, 4), from Theorem <ref>, we can choose 𝐡=(0,1,4,2), and one can check that [𝐡+𝐠]∈P̂_[4], [𝐡+3𝐠]∈P̂_[4], [𝐡-𝐠]∈P̂_[4].
(2) For 𝒜^*={1,3,4}, 𝐠 = (1, 2, 3, 4), from Theorem <ref>, we can choose 𝐡=(1,0,3,4), and one can check that [𝐡+𝐠]∈P̂_[4], [𝐡+2𝐠]∈P̂_[4], [𝐡+3𝐠]∈P̂_[4].
By combining Theorems <ref> and <ref>, we can provide a construction of p-ary alphabet-optimal (2,p-1)-LRCs which are Griesmer codes.
Let p≥ 3 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. Assume 𝒜_1,𝒜_2,…,𝒜_ℓ are mutually disjoint subsets of [m] and M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1,
let 𝒟= P_[m]∖ (⋃_i=1^ℓP_𝒜_i). Then 𝒞_𝒟 constructed by (<ref>) is an alphabet-optimal (2,p-1)-LRC with parameters
[p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1 ,m, p^m-1 -∑_i=1^ℓp^|𝒜_i|-1].
Since 𝒜_1,𝒜_2,…,𝒜_ℓ are mutually disjoint
and M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1, we know that
there is at most one subset among {𝒜_i}_i=1^ℓ, say 𝒜_j, has size m-1, which means that there is at least a subset 𝒜^*⊂ [m] with size m-1 such that 𝒜^*≠𝒜_i for all i∈ [ℓ]. From Theorems <ref> and <ref>, 𝒞_𝒟 is a p-ary [n,k,d] Griesmer code with (2,p-1)-locality, where n=p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1, k=m, d=p^m-1 -∑_i=1^ℓp^|𝒜_i|-1.
Since 𝒜_1,𝒜_2,…,𝒜_ℓ are mutually disjoint, we have ∑_i=1^ℓp^|𝒜_i|-1≤ p^m-2+1, then
⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-2⌉≥ p-1.
Note that
n=∑_j=0^m-1⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉.
Thus
∑_j=0^m-2⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉= n-1,
∑_j=0^m-3⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉≤ n-p.
Thanks to the Griesmer bound, k_ opt^(p)(n -p, d) = m -2. Utilizing the bound of (<ref>) with
t = 1, we get that k ≤ 2 + k_ opt^(p)(n-p, d) = m. Therefore, the linear code 𝒞_𝒟 achieves the bound of (<ref>).
In <cit.>, the authors showed that code 𝒞_𝒟 constructed in Theorem <ref> is an alphabet-optimal 2-LRC, here we prove that 𝒞_𝒟 is actually an alphabet-optimal (2,p-1)-LRC, our results are more concise.
Next, we give a construction of alphabet-optimal (2,p-1)-LRCs which are not Griesmer codes.
Let p≥ 3 be an odd prime and m=3. Let 𝒜_1={1,2} and 𝒜_2={2,3}. Let 𝒟^c = P_𝒜_1∪ P_𝒜_2 and 𝒟= P_[m]∖𝒟^c, then C_𝒟 constructed by (<ref>) is a p-ary alphabet-optimal (2,p-1)-LRC with parameters [p^2-p ,3, p^2 - 2p].
From Theorems <ref> and <ref>, 𝒞_𝒟 is a p-ary [n,k,d] code with (2,p-1)-locality, where n=p^2-p, k=3, d=p^2 - 2p. Utilizing the bound of (<ref>) with
t = 1, we get that k ≤ 2 + k_ opt^(p)(p^2-2p, p^2-2p) = 3. Therefore, the linear code 𝒞_𝒟 achieves the bound of (<ref>).
Below we give an example to illustrate Theorem <ref>.
Let p=5, m=3. Assume that 𝒜_1={1,2} and 𝒜_2={2,3}. Let 𝒟^c = P_𝒜_1∪ P_𝒜_2 and 𝒟= P_[3]∖𝒟^c, then C_𝒟 constructed by (<ref>) is a 5-ary linear code with a generator matrix
G=[ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1; 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 0 0 0 0; 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 ].
By Magma software, we know the parameters of C_𝒟 are [20, 3, 15].
By Theorem <ref>, we can partition the matrix G into the following four submatrices
[ 1 1 1 1 1; 1 0 2 3 4; 1 1 1 1 1 ],
[ 1 1 1 1 1; 1 0 2 3 4; 2 2 2 2 2 ],
[ 1 1 1 1 1; 1 0 2 3 4; 3 3 3 3 3 ],
[ 1 1 1 1 1; 1 0 2 3 4; 4 4 4 4 4 ].
In each submatrix, any two columns can be linearly combined to get the remaining three columns, from Definition <ref>, C_𝒟 is a (2,4)-LRC.
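Using rank_mod_p and is_repair_group from the sketch following Definition <ref>, the four repair groups displayed above can be checked mechanically (illustrative only; the tuples below are the columns of G regrouped by their last coordinate):

    cols = [(1, a, c) for c in (1, 2, 3, 4) for a in (1, 0, 2, 3, 4)]
    groups = [cols[i:i + 5] for i in range(0, 20, 5)]
    # each group has 5 = r + delta - 1 columns and satisfies
    # is_repair_group(group, 5, 4), which is exactly the (2, 4)-locality above.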
It is easy to check that the codes C_𝒟 constructed in Theorem <ref> are also Singleton-optimal.
This phenomenon reminds us that it may be an interesting topic to construct LRCs that achieve both Singleton-optimality and alphabet-optimality.
§.§ (2,p)-LRCs
In Theorems <ref> and <ref>, we utilize the inherent structure of defining sets to establish the (2,p-2) and (2,p-1)-localities of p-ary linear code 𝒞_𝒟, respectively. Now, we proceed to present a theorem that allows us to determine the (2,p)-locality of a p-ary linear code, where the defining set of this code is a subset of P_[m], solely based on the cardinality of its defining set.
Let p≥2 be a prime, m > 2 an integer. Suppose that 𝒟 is a subset of P_[m], 𝒟^c= P_[m]∖𝒟. If |𝒟^c|< p^m-1-1/p-1, then
𝒞_𝒟 defined as in (<ref>) is a p-ary (2,p)-LRC .
Let _p={0,-1,α_1,α_2,…, α_p-2}.
We will show that for any nonzero 𝐠∈𝒟, there always exists a (p+1)-size set [𝐠,𝐡]:={𝐠,𝐡,[𝐡-𝐠],[𝐡+α_1𝐠],…, [𝐡+α_p-2𝐠]}⊂𝒟 for some 𝐡∈𝒟∖{𝐠}.
As any 2 elements of the set [𝐠,𝐡]∖{𝐠} could be linearly combined to get 𝐠, we call [𝐠,𝐡]∖{𝐠} a repair set of 𝐠.
For different 𝐡_i, 𝐡_j∈ P_[m]∖{𝐠}, it is easy to examine that [𝐠,𝐡_i]∩ [𝐠,𝐡_j]={𝐠}.
So, there are p^m-1/p-1-1/p=p^m-1-1/p-1 disjoint repair sets of 𝐠 in P_[m].
Since |𝒟^c| < p^m-1-1/p-1, we have |𝒟|=|P_[m]|-|𝒟^c|> p^m-1> (p-1)p^m-1-1/p-1.
According to the Pigeonhole principle, for any vector 𝐠∈𝒟, there always exists a repair set [𝐠,𝐡_0]∖{𝐠}⊂𝒟, where 𝐡_0∈𝒟 and 𝐡_0≠𝐠, which is equivalent to say that the coordinate occupied by 𝐠 has (2,p)-locality. Since the chosen of 𝐠 is arbitrary, all the coordinates of _𝒟 has (2,p)-locality.
By combining Theorems <ref> and <ref>, we can provide a construction of p-ary alphabet-optimal (2,p)-LRCs.
Let p≥ 3 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. If 𝒜_1,𝒜_2, … ,𝒜_ℓ are nonempty subsets
of [m] satisfying
(i) 𝒜_1,𝒜_2, … ,𝒜_ℓ are mutually disjoint,
(ii) M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1, and
(iii) p^m-1>∑_i=1^ℓp^|𝒜_i|.
If 𝒟= P_[m]∖ (⋃_i=1^ℓP_𝒜_i), 𝒟^c= P_[m]∖𝒟, then 𝒞_𝒟 constructed by (<ref>) is an alphabet-optimal (2,p)-LRC with parameters
[p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1 ,m, p^m-1 -∑_i=1^ℓp^|𝒜_i|-1].
From (i) and (iii), |𝒟^c|=∑_i=1^ℓ(p^|𝒜_i|-1)/p-1< p^m-1-1/p-1.
By Theorem <ref>, 𝒞_𝒟 has (2,p)-locality.
From (i)-(ii) and Theorem <ref>, 𝒞_𝒟 is a p-ary [n,k,d] Griesmer code, where n=p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1, k=m, d=p^m-1 -∑_i=1^ℓp^|𝒜_i|-1.
From (iii), we have
⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-2⌉=p.
Note that
n=∑_j=0^m-1⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉.
Thus
∑_j=0^m-2⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉=n-1,
∑_j=0^m-3⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉= n-(p+1).
Thanks to the Griesmer bound, k_ opt^(p)(n -(p+1), d) = m -2. Utilizing the bound of (<ref>) with
t = 1, we get that k ≤ 2 + k_ opt^(p)(n-(p+1), d) = m. Therefore, the linear code 𝒞_𝒟 achieves the bound of (<ref>).
From the proofs of Theorems <ref>, <ref> and <ref>, we can see that if we replace prime p with any prime power q, the statements of localities for p-ary codes can be generalized to q-ary codes without any difficulties.
Simplex codes over any finite field 𝔽_q are alphabet-optimal (2,q)-LRCs with respect to the bound (<ref>).
From Remark <ref>, we know that any q-ary Simplex code 𝒮_m has (2,q)-locality. The parameters of 𝒮_m are [n=q^m-1/q-1,k=m,d=q^m-1]. Utilizing the bound (<ref>) with t = 1, we get that k ≤ 2 + k_ opt^(q)(q^m-1/q-1-(q+1), q^m-1) (b)≤ m, where (b) is from the Plotkin bound. The proof is done.
§ ALPHABET-OPTIMAL (R,Δ)-LRCS WITH AVAILABILITY
In this section, we will investigate the locality of the codes 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) constructed in Theorem <ref> with s≥ 1.
In the absence of ambiguity, we call an [n,k,d]_q code alphabet-optimal if it achieves some upper bound for k which takes the alphabet size q into consideration.
As we can see in Section <ref>, the set P_[m] is the beacon of proofs of (r,δ)-localities. When s> 1, there are s copies of P_[m] in the generator matrix of code 𝒞_(𝒟_1,𝒟_2,…,𝒟_s). Consequently, each coordinate of 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) may possess s disjoint repair sets. Next, we will give a formal definition of (r,δ)-locality with availability from the aspect of generator matrix.
Let 𝒞 be a p-ary linear code with generator matrix
G = [𝐠_1, …, 𝐠_n].
The i-th coordinate, 1 ≤ i ≤ n, of 𝒞 is said to have (r, δ)_t-locality if there exist t pairwise disjoint sets ℐ^(i)_1 ,…, ℐ^(i)_t, which are subsets of {𝐠_1, …, 𝐠_n}∖{𝐠_i}, satisfying that for each j ∈ [t],
(1) |ℐ^(i)_j∪{𝐠_i}| ≤ r+δ-1,
(2) any δ-1 vectors in ℐ^(i)_j∪{𝐠_i} are linear combinations of the remaining vectors in ℐ_j^(i)∪{𝐠_i}.
If all the coordinates of 𝒞 have (r,δ)_t-locality, then 𝒞 is called an (r,δ)_t-locally repairable code, or in short, (r,δ)_t-LRC.
From the above definition, a p-ary [n,k,d] code with (r, δ)_t-locality is also a code with (r, δ)_i-locality, 1≤ i ≤ t-1. So, for a linear code with (r, δ)_t-locality, if it is (r, δ)-alphabet-optimal, then it is also (r, δ)_t-alphabet-optimal.
From Remark <ref>, for an (r, δ)_t-LRC, where t≥ 1, we can prove that it is (r, δ)_t-alphabet-optimal by proving that it is (r, δ)-alphabet-optimal.
Let the notation be the same as in Theorem <ref>. If ℬ_1^(j),ℬ_2^(j),…,ℬ_ℓ_j^(j) are mutually disjoint for every j∈ [s], M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1,
and
⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-1⌉< p,
then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is an alphabet-optimal (2,p-1)_s-LRC with parameters
[sp^m-∑_i=1^ℓp^|𝒜_i|+ℓ-s/p-1 ,m, sp^m-1 -∑_i=1^ℓp^|𝒜_i|-1].
From Theorem <ref>, 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is a p-ary [n,k,d] Griesmer code, where n=sp^m-∑_i=1^ℓp^|𝒜_i|+ℓ-s/p-1, k=m, d=sp^m-1 -∑_i=1^ℓp^|𝒜_i|-1.
It is evident that all the ℬ_j^(i), where 1≤ j≤ℓ_i and 1≤ i≤ s, are proper subsets of [m], so P̂_[m]⊂𝒟_r for all 1≤ r≤ s. Then from Lemma <ref> and Definition <ref>, coordinates occupied by 𝒟_i∖P̂_[m] have (2,p-1)_s-locality, where 1≤ i≤ s.
From the proof of Theorem <ref>, for each i∈ [s], there exists an 𝒜^(i)⊂ [m] with size m-1 such that 𝒜^(i)≠ℬ^(i)_j for all j∈ [ℓ_i].
Then from Theorem <ref>, for any 𝐠 in P̂_[m], we can find a 𝐡_i∈ P_𝒜^(i)⊂𝒟_i∖P̂_[m] such that there is a p-size set {𝐠,𝐡_i,[𝐡_i+α_1𝐠],…,[𝐡_i+α_p-2𝐠]}⊂𝒟_i for all 1≤ i≤ s, where 𝔽_p={0,-1,α_1,α_2,…,α_p-2}. From Theorem <ref> and Definition <ref>, the coordinates occupied by P̂_[m] have (2,p-1)_s-locality.
In summary, 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is a (2,p-1)_s-LRC, and of course a (2,p-1)-LRC.
Note that
n=∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉,
from (<ref>),
we have
∑_j=0^m-2⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉>n-p.
Since ℬ_1^(j),ℬ_2^(j),…,ℬ_ℓ_j^(j) are mutually disjoint for every j∈ [s], we can deduce that
⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-2⌉≥ s(p-1),
so
∑_j=0^m-3⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉≤ n-1-s(p-1)≤ n-p.
Thanks to the Griesmer bound, k_ opt^(p)(n -p, d) = m -2. Utilizing the bound (<ref>) with
t = 1, we can derive that k ≤ 2 + k_ opt^(p)(n-p, d) = m, which means that 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is alphabet-optimal with respect to (2,p-1)-locality. From Remark <ref>, 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is also alphabet-optimal with respect to (2,p-1)_s-locality.
From the above theorem, the key to constructing alphabet-optimal (r, δ)_s-LRCs is to make (<ref>) hold, which can be fulfilled when s is small, for example when s< p.
§ CONCLUSION
In this paper, we first proposed a construction of linear codes 𝒞_(𝒟_1,…𝒟_s) over 𝔽_p by generalizing the constructions in <cit.>. Similarly with <cit.>, a necessary and sufficient condition for the linear codes 𝒞_(𝒟_1,…𝒟_s) to be Griesmer codes, a sufficient condition for 𝒞_(𝒟_1,…𝒟_s) to be distance-optimal, were presented. From which, some new constructions of Griesmer codes and distance-optimal codes can be derived.
Secondly, we proposed criteria for determining the (2,p-2), (2,p-1), and (2,p)-localities of p-ary linear codes constructed by eliminating elements from a complete projective space, and some alphabet-optimal (2,p-1)-LRCs and (2,p)-LRCs were provided. In particular, by showing that the methods of determining (r,δ)-localities of p-ary codes can be generalized to q-ary codes for any prime power q, we proved that the q-ary Simplex codes are alphabet-optimal (2,q)-LRCs. Finally, we explored the availability of (r,δ)-LRCs constructed from the generalized framework (<ref>) with an alphabet-optimal construction. In the following research work, we
plan to explore more (r,δ)-localities of linear codes constructed from (<ref>) and (<ref>) and to propose more alphabet-optimal (r,δ)-LRCs.
99
Cadambe2015 Cadambe V. R., Mazumdar A.: Bounds on the size of locally recoverable codes. IEEE Trans. Inf. Theory 61(11), 5787–5794 (2015).
Cai2020-ava Cai H., Miao Y., Schwartz M., Tang X.: On optimal locally repairable codes with multiple disjoint repair sets. IEEE Trans. Inf. Theory 66(4), 2402–2416 (2020).
Cai2020 Cai H., Miao Y., Schwartz M., Tang X.: On optimal locally repairable codes with super-linear length. IEEE Trans. Inf. Theory 66(8), 4853–4868 (2020).
Chen2018 Chen B., Xia S.-T., Hao J., Fu F.-W.: Constructions of optimal cyclic (r,δ) locally repairable codes. IEEE Trans. Inf. Theory 64(4), 2499–2511 (2018).
Chen2019 Chen B., Fang W., Xia S.-T., Fu F.-W.: Constructions of optimal (r,δ) locally repairable codes via constacyclic codes. IEEE Trans. Commun. 67(8), 5253–5263 (2019).
Chen2021it Chen B., Fang W., Xia S.-T., Hao J. and Fu F.-W.: Improved bounds and Singleton-optimal constructions of locally repairable codes with minimum distance 5 and 6. IEEE Trans. Inf. Theory 67(1), 217–231 (2021).
Ding2007 Ding C., Niederreiter H.: Cyclotomic linear codes of order 3. IEEE Trans. Inf. Theory 53(6), 2274–2277 (2007).
Ding2008
Ding C., Luo J., Niederreiter H.: Two-weight codes punctured from irreducible cyclic codes. Ser. Coding Theory Cryptol. 4, 119–124 (2008).
Fang2018 Fang W., Fu F.-W.: Optimal cyclic (r, δ) locally repairable codes with unbounded length. Finite Fields Appl. 63, 101650 (2020).
Fang2021 Fang W., Chen B., Xia S.-T., Fu F.-W.: Singleton-optimal LRCs and perfect LRCs via cyclic codes. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 3261–3266 (2021).
Fu2020 Fu Q., Li R., Yang S.: Optimal (r,δ)-locally repairable codes from Simplex Code and Cap code. IEEE Access 8, 215414–215418 (2020).
Gopalan2012 Gopalan P., Huang C., Simitci H., Yekhanin S.: On the locality of codeword symbols. IEEE Trans. Inf. Theory 58(11), 6925–6934 (2012).
Goparaju2014 Goparaju S., Calderbank R.: Binary cyclic codes that are locally repairable. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 676–680 (2014).
Griesmer1960
Griesmer J.H.: A bound for error-correcting codes. IBM J. Res. Dev. 4(5), 532–542 (1960).
Guruswami2019 Guruswami V., Xing C., Yuan C.: How long can optimal locally repairable codes be? IEEE Trans. Inf. Theory 65(6), 3662–3670 (2019).
Huang2016 Huang P., Yaakobi E., Uchikawa H., Siegel P.: Binary linear locally repairable codes. IEEE Trans. Inf. Theory 62(11), 6268–6283 (2016).
Hyun2020 Hyun J.Y., Lee J., Lee Y.: Infinite families of optimal linear codes constructed from simplicial complexes. IEEE Trans. Inf. Theory 66(11), 6762–6773 (2020).
Jin2019 Jin L.: Explicit construction of optimal locally recoverable codes of distance 5 and 6 via binary constant weight codes. IEEE Trans. Inf. Theory 65(8), 4658–4663 (2019).
Jin2020 Jin L., Ma L., Xing C.: Construction of optimal locally repairable codes via automorphism groups of rational function fields. IEEE Trans. Inf. Theory 66(1), 210–221 (2020).
Kong2021 Kong X., Wang X., Ge G.: New constructions of optimal locally repairable codes with super-linear length. IEEE Trans. Inf. Theory 67(10), 6491–6506 (2021).
Li2019 Li X., Ma L., Xing C.: Optimal locally repairable codes via elliptic curves. IEEE Trans. Inf. Theory 65(1), 108–117 (2019).
Luo2019 Luo Y., Xing C., Yuan C.: Optimal locally repairable codes of distance 3 and 4 via cyclic codes. IEEE Trans. Inf. Theory 65(2), 1048–1053 (2019).
Luo2021 Luo G., Cao X.: Constructions of optimal binary locally recoverable codes via a general construction of linear codes. IEEE Trans. Commun. 69(8), 4987–4997 (2021).
Luo2022 Luo G., Ling S.: Application of optimal p-ary linear codes to alphabet-optimal locally repairable codes. Des. Codes Cryptogr. 90, 1271–1287 (2022).
Ma2019 Ma J., Ge G.: Optimal binary linear locally repairable codes with disjoint repair groups. SIAM J. Discret. Math. 33(4), 2509–2529 (2019).
Pamies2013
Pamies-Juarez L., Hollmann H.D., Oggier F.: Locally
repairable codes with multiple repair alternatives. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 892–896 (2013).
Prakash2012 Prakash N., Kamath G. M., Lalitha V., Kumar P. V.: Optimal linear
codes with a local-error-correction property. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 2776–2780 (2012).
Qiu2021 Qiu J., Zheng D., Fu F.-W.: New constructions of optimal cyclic (r, δ) locally repairable codes from their zeros. IEEE Trans. Inf. Theory 67(3), 1596–1608 (2021).
Rawat2015
Rawat A.S., Mazumdar A., Vishwanath, S.: Cooperative local repair in distributed storage. EURASIP J. Adv. Signal Process. 2015, 107 (2015).
Rawat2016
Rawat A. S., Papailopoulos D. S., Dimakis A. G., Vishwanath S.: Locality and availability in distributed storage. IEEE Trans. Inf. Theory 62(8), 4481–4493 (2016).
Silberstein2015 Silberstein N., Zeh A.: Optimal binary locally repairable codes via anticodes. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 1247–1251 (2015).
Silberstein2018 Silberstein N., Zeh A.: Anticode-based locally repairable codes with high availability. Des. Codes Crypt. 86, 419–445 (2018).
Silberstein2019 Silberstein N., Etzion T., Schwartz M.: Locality and availability of
array codes constructed from subspaces. IEEE Trans. Inf. Theory 65(5), 2648–2660 (2019).
Solomon1965
Solomon G., Stiffler J.J.: Algebraically punctured cyclic codes. Inf. Control 8(2), 170–179 (1965).
Sun2019 Sun Z., Zhu S., Wang L.: Optimal constacyclic locally repairable codes, IEEE Commun. Lett. 23(2), 206–209 (2019).
Tamo2014 Tamo I., Barg A.: A family of optimal locally recoverable codes. IEEE Trans. Inf. Theory, 60(8), 4661–4676 (2014).
Tamo2015 Tamo I., Barg A., Goparaju S., Calderbank R.: Cyclic LRC codes and their subfield subcodes. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 1262–1266 (2015).
Tamo2016 Tamo I., Barg A., Goparaju S., Calderbank R.: Cyclic LRC codes, binary LRC codes, and upper bounds on the distance of cyclic codes. Int. J. Inf. Coding Theory 3(4), 345–364 (2016).
Tamo2016-ava Tamo I., Barg A., Frolov A.: Bounds on the parameters of locally
recoverable codes. IEEE Trans. Inf. Theory 62(6), 3070–3083 (2016).
Tan2021
Tan P., Fan C., Ding C., Tang C., Zhou Z.: The minimum locality of linear codes. Des. Codes Cryptogr. 91, 83–114 (2023).
Wang2014
Wang A., Zhang Z.: Repair locality with multiple erasure tolerance. IEEE Trans. Inf. Theory 60(11), 6979–6987 (2014).
Xing2019 Xing C., Yuan C.: Construction of optimal (r, δ)-locally recoverable codes and connection with graph theory. IEEE Trans. Inf. Theory 68(7), 4320–4328 (2022).
|
http://arxiv.org/abs/2307.04315v2 | 20230710025718 | Movement of branch points in Ahlfors' theory of covering surfaces | [
"Yun-Ling Chen",
"Tian-Run Lin",
"Guang-Yuan Zhang"
] | math.CV | [
"math.CV",
"[2020] 30D35, 30D45, 52B60"
] |
Movement of branch points in Ahlfors' theory of covering surfaces
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.
R. China
[email protected],
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.
R. China
[email protected]
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, P.
R. China
[email protected]
Projects 10971112 and 12171264 supported by NSFC
In this paper, we will prove a result which is asserted in <cit.> and is
used in the proof of the existence of extremal surfaces in <cit.>.
[2020] 30D35, 30D45, 52B60
Guangyuan Zhang
====================
§ INTRODUCTION
In 1935, Lars Ahlfors <cit.> introduced the theory of covering surfaces and
gave a geometric illustration of Nevanlinna's value distribution theory.
Relying on the Length–Area principle (<cit.>, p. 14), Ahlfors' theory has a
metric-topological nature. The most crucial result in the theory of covering
surfaces is Ahlfors' Second Fundamental Theorem (SFT), which corresponds to
Nevanlinna's Second Main Theorem. However, the precise bound of the constant
H(E_q) (defined later), the most important constant in Ahlfors' SFT, has not
been sufficiently studied yet. This motivates our work.
We start with several definitions and elementary facts in the theory of
covering surfaces. The unit sphere S is identified with the extended complex
plane ℂ under the stereographic projection
P:S→ℂ as in <cit.>. Endowed with the spherical
metric on S, the spherical length L and the spherical area A on S
have natural interpretations on ℂ as
dL =2|dz|/(1+|z|^2),
and
dA =4dxdy/(1+|z|^2)^2
for any z∈ℂ.
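For example, under this metric the total area of S is ∫∫_ℂ 4dxdy/(1+x^2+y^2)^2 =4π, the image of the real axis (a great circle) has length ∫_ℝ 2dx/(1+x^2) =2π, and d(0,∞)=∫_0^∞ 2dr/(1+r^2) =π, so that S carries the metric of the standard unit sphere and 0, ∞ form a pair of antipodal points.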
For a closed set K on ℂ, a mapping f:K→ S
is called continuous and open if f can be extended to a continuous and open
mapping from a neighborhood of K to S. Now we can define the covering surface.
Let U be a domain on ℂ whose boundary
consists of a finite number of disjoint Jordan curves α_1
,…,α_n. Let f:U→ S be an
orientation-preserving, continuous, open, and finite-to-one map (OPCOFOM).
Then the pair Σ=(f,U) is called a covering surface over S,
and the pair ∂Σ=(f,∂ U) is called the boundary of
Σ.
For each point w ∈ S, the covering number n(f,w) is defined
as the number of all w-points of f in U without counting multiplicity.
That is, n(f,w) = n(Σ,w) = ♯{f^-1(w)∩
U}.
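As an illustration (added here, not part of the original text), for a polynomial f the covering number over the unit disk can be checked numerically. The following minimal Python sketch, with a hypothetical helper covering_number, counts the distinct roots of f(z)-w inside |z|<1 for f(z)=z^2.

    import numpy as np

    def covering_number(coeffs, w, radius=1.0, tol=1e-9):
        # Count the w-points of the polynomial f inside |z| < radius,
        # without counting multiplicity; coeffs lists f from the highest degree down.
        shifted = np.array(coeffs, dtype=complex)
        shifted[-1] -= w                      # coefficients of f(z) - w
        roots = [z for z in np.roots(shifted) if abs(z) < radius]
        distinct = []
        for z in roots:
            if all(abs(z - u) > tol for u in distinct):
                distinct.append(z)
        return len(distinct)

    # f(z) = z^2 on U = Δ: n(f, w) = 2 for 0 < |w| < 1 and n(f, 0) = 1
    print(covering_number([1, 0, 0], 0.25))   # prints 2
    print(covering_number([1, 0, 0], 0.0))    # prints 1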
All surfaces in this paper are covering surfaces defined above.
The area of a surface Σ=(f,U) is defined as the spherical
area of f:U→ S, say,
A(Σ)=A(f,U) = ∫∫_Sn(Σ,w) dA(w)
= ∫∫_ℂ4/(1+u^2+v^2)^2n
(Σ,u+√(-1)v) dudv.
And the perimeter of Σ=(f,U) is defined as the spherical
length of f:∂U→ S and write
L(∂Σ) = L(f, ∂ U).
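For f holomorphic in a neighborhood of U, these two quantities can be written as A(f,U)=∫∫_U (2|f'(z)|/(1+|f(z)|^2))^2 dxdy (the covering numbers with and without multiplicity differ only over the finitely many critical values, so the integral is unchanged) and L(f,∂ U)=∫_∂ U 2|f'(z)||dz|/(1+|f(z)|^2). The following minimal Python sketch (illustrative only, not part of the original argument) evaluates both for f(z)=z^2 on U=Δ, where the exact values are A(Σ)=4π (the hemisphere |w|<1 covered twice) and L(∂Σ)=4π (the equator traversed twice).

    import numpy as np

    # Spherical area and boundary length of the surface (z^2, Δ) by midpoint rules.
    def spherical_area(n_r=400, n_t=400):
        r = (np.arange(n_r) + 0.5) / n_r                # radii in (0, 1)
        t = 2 * np.pi * (np.arange(n_t) + 0.5) / n_t    # angles in (0, 2π)
        R, T = np.meshgrid(r, t)
        z = R * np.exp(1j * T)
        f, df = z**2, 2 * z
        density = (2 * np.abs(df) / (1 + np.abs(f)**2))**2
        return np.sum(density * R) * (1.0 / n_r) * (2 * np.pi / n_t)

    def spherical_length(n_t=4000):
        t = 2 * np.pi * (np.arange(n_t) + 0.5) / n_t
        z = np.exp(1j * t)                              # the boundary |z| = 1
        f, df = z**2, 2 * z
        return np.sum(2 * np.abs(df) / (1 + np.abs(f)**2)) * (2 * np.pi / n_t)

    print(spherical_area() / np.pi)    # ≈ 4, i.e. A(Σ) ≈ 4π
    print(spherical_length() / np.pi)  # ≈ 4, i.e. L(∂Σ) ≈ 4π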
Let Σ=(f,U) be a covering surface.
(1) Σ is called a closed surface, if U=S. For a closed surface
Σ, we have ∂Σ=∅, and then L(∂Σ)=0.
(2) Σ is called a simply-connected surface, if U is a
simply connected domain.
(3) 𝐅 denotes all surfaces such that for each Σ=(
f,U) ∈𝐅, U is a Jordan domain.
(A) Let K_1 and K_2 be two domains or two
closed domains on S, such that ∂ K_1 and ∂ K_2 are
both consisted of a finite number of disjoint Jordan curves. A mapping
f:K_1→ K_2 is called a complete covering
mapping (CCM), if (a) for each p∈ K_2 there exists a neighborhood V
of p in K_2 such that f^-1(V)can be expressed as a union
∪_j∈𝒜U_j of disjoint (relative) open sets of K_1, and
(b) f|_U_j:U_j→ V is a homeomorphism for each j∈𝒜.
(B) We call f a branched
complete covering mapping (BCCM), if all conditions of (A) hold, except that
(b) is replaced with (b1) or (b2): (b1) If both K_1 and K_2 are
domains, then for each j∈𝒜, U_j∩ f^-1(p) contains only
one point a_j of f^-1(p), and there exist two homeomorphisms
φ_j:U_j→Δ,ψ_j:V→Δ with
φ_j( a_j) =ψ_j( p) =0, such that
ψ_j∘ f|_U_j∘φ_j^-1(ζ)=ζ^k_j,ζ∈Δ,where k_j is a positive integer; or (b2) if both K_1 and
K_2 are closed domains, then f|_K_1^∘:K_1^∘→
K_2^∘ satisfies (b1) and moreover, f restricted to a neighborhood
of ∂ K_1 in K_1 is a CCM onto a neighborhood of ∂
K_2 in K_2.
(C) For a surface Σ=( f,U) over S, f is in general not a CCM or a BCCM. When f(z)=z^2, both f:Δ→Δ and f:Δ̅→Δ̅ are BCCMs, but when f(z)=z( (z-a)/(1-a̅z)) ^2, f:Δ→ f(Δ) is
neither a CCM nor a BCCM.
Ahlfors' Second Fundamental Theorem gives the relationship between A(Σ
), n(Σ) and L(∂Σ).
Given an integer q≥3, let
E_q={a_1,…,a_q} be a set of distinct q points on S. Then
there exists a positive constant h depending only on E_q, such that for
any covering surface Σ= (f,U)∈𝐅, we have
(q-2)A(Σ) ≤ 4π∑_j=1^q n(Σ,a_j) + h L(∂Σ).
In particular, if f(U) ∩{0,1,∞}=∅, then we
have
A(Σ) ≤ h L(∂Σ).
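To illustrate the quantities involved, take Σ=(f,Δ) with f(z)=z^2 and E_3={0,1,∞}. Then A(Σ)=4π (the hemisphere |w|<1 is covered twice), L(∂Σ)=4π (the equator is traversed twice) and n(Σ,E_3)=♯{f^-1({0,1,∞})∩Δ}=1 (only the point 0, since ±1∈∂Δ), so that the inequality of the theorem reads 4π≤4π+4π h, which holds for every h≥0; the term hL(∂Σ) only becomes essential for surfaces with (q-2)A(Σ)>4πn(Σ,E_q).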
It is a natural question whether we can find the precise lower bound for the constants h in Theorem <ref>. For this purpose, we need to define
the remainder-perimeter ratio H(Σ) as follows.
For a covering surface Σ=(f,U)∈𝐅 and
a set E_q={a_1,…,a_q} on S, we define the total covering
number over E_q as
n(f,E_q) = n(Σ,E_q) = ∑_j=1^qn(Σ,a_j) = ♯{f^-1(E_q)∩ U},
the remainder as
R(Σ,E_q)=(q-2)A(Σ) - 4πn(Σ,E_q),
and the remainder-perimeter ratio as
H(Σ,E_q) = R(Σ,E_q)/L(∂Σ).
In the sequel, we always use R(Σ) and H(Σ) without emphasizing
the set E_q.
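For instance, for the surface Σ=(z^2,Δ) and E_3={0,1,∞} considered above, R(Σ,E_3)=(3-2)·4π-4π·1=0 and hence H(Σ,E_3)=0.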
Observe that, in order to estimate the constants h in Theorem <ref>, it suffices to give an upper bound of H(Σ). In <cit.>, the last author
developed an innovative method to compute the precise value of the constant
h in (<ref>).
For any surface Σ=(f,U
)∈𝐅 with f(U) ∩{0,1,∞}=∅, we have
A(Σ)< h_0L(∂Σ),
where
h_0=max_θ∈[ 0,π/2] { ( π+θ) √(1+sin^2θ)/( arctan√(1+sin^2θ)cosθ-sinθ) } .
Moreover, the constant h_0 is sharp: there exists a sequence of covering surfaces {Σ_n}={(f_n,U_n)} in 𝐅 with f_n(U_n)∩{0,1,∞}=∅ such that A(Σ_n)/L(∂Σ_n)→ h_0 as n→∞.
However, in general, it is very difficult to estimate the precise bound of the constant h. Since the branch points of a surface outside of f^-1(E_q) (see Remark <ref> for the definition) cause a lot of trouble in this research, Sun and the last author tried to overcome such problems in <cit.>. Unfortunately, we observe that the published result in <cit.> does not work well enough. Before establishing our main theorem, we introduce more terminology and definitions.
All paths and curves considered in this paper are oriented and any subarc of a
path or closed curve inherits this orientation. Sometimes paths and curves
will be regarded as sets, but only when we use specific set operations and set
relations. For an oriented circular arc c, the circle C containing c and
oriented by c is called the circle determined by c.
For any two non-antipodal
points p and q on S, pq is the geodesic on S from p to
q: the shorter of the two arcs with endpoints p and q of the great
circle on S passing through p and q. Thus d(p,q)<π and
pq is uniquely determined by p and q. An arc of a great
circle on S is called a line segment on S, and to emphasize this,
we also refer to it as a straight line segment. For the notation
pq, when p and q are explicit complex numbers we write
p,q, to avoid ambiguity such as 123=12,3
or 1,23. When p and q are two antipodal points of S,
pq is not unique and d( p,q) =π. To avoid
confusions, when we write pq, or say pq is well
defined, we always assume d( p,q) <π.
(1) For a Jordan domain D in ℂ, let h be a
Möbius transformation with h(D)⊂Δ. Then ∂ D is
oriented by h and the anticlockwise orientation of ∂ h(D). The
boundary of every Jordan domain on S is oriented in the same way, via
stereographic projection.
(2) For a Jordan curve C on ℂ or S, the domain
T_C bounded by C is called enclosed by C if the boundary
orientation of T_C agrees with the orientation of C.
(3) A domain D on S is called convex if for any two points q_1
and q_2 in D with d(q_1,q_2)<π, q_1q_2⊂ D; a Jordan curve on S is called convex if it encloses a convex
domain on S; a path on S is called convex if it is an arc of a
convex Jordan curve.
(4) Let γ:[a,b]→ S be a path on S and p_0∈(a,b).
γ is called convex at p_0, if γ restricted to a
neighborhood (p_0-δ,p_0+δ) of p_0 in (a,b)
is a convex Jordan path, with respect to the parametrization giving
γ (t increases). γ is called strictly convex at p_0
if γ is convex at p_0 and, restricted to a neighborhood N_p_0 of p_0 in (a,b), is contained in some closed hemisphere S_1 on S with γ(N_p_0)∩∂ S_1={γ(p_0)}.
Recall that 𝐅 is the space of covering surfaces Σ
=(f,U),where U is a Jordan domain on ℂ.
Before introducing a subspace of 𝐅, we need to give the definition
of partition. For a Jordan curve α in ℂ, its partition is a
collection {α_j}_j=1^n of its subarcs such that α
=∪_j=1^nα_j and α_j^∘ are disjoint and arranged
anticlockwise. In this setting we write α=α_1+α_2
+⋯+α_n. Here α_j^∘ is the interior of α_j, which is α_j without endpoints. A partition
∂Σ=γ_1+γ_2+⋯+γ_n
of ∂Σ for a surface Σ=(f,U)∈𝐅 is
equivalent to a partition
∂ U=α_1+α_2+⋯+α_n
of ∂ U such that γ_j=(f,α_j) for j=1,…,n.
We denote by ℱ the subspace of 𝐅 such that for each
Σ=( f,U), ∂Σ has a partition
∂Σ=c_1+c_2+…+c_n,
where c_1,…,c_n are simple convex circular (SCC) arcs. This means
that ∂ U has a partition
∂ U=α_1+α_2+…+α_n
such that α_j,1≤ j≤ n, are arranged anticlockwise and f
restricted to each α_j is a homeomorphism onto the convex circular
arc c_j.
Now we introduce some subspaces of ℱ which can describe some
properties of the covering surfaces precisely.
For given positive number L, ℱ(L) denotes the
subspace of ℱ in which every surface has boundary length
L(∂Σ)≤ L.
𝒞(L,m) denotes the subspace of ℱ(L) such that
Σ=( f,Δ) ∈𝒞(L,m) if and only
if ∂Δ and ∂Σ have 𝒞(L,m)-partitions.
This means that ∂Δ and ∂Σ have partitions
∂Δ=α_1( a_1,a_2) +α_2(
a_2,a_3) +…+α_m( a_m,a_1)
and
∂Σ=c_1( q_1,q_2) +c_2( q_2
,q_3) +…+c_m( q_m,q_1)
respectively, such that c_j( q_j,q_j+1) =(
f,α_j( a_j,a_j+1) ) is an SCC arc for each
j=1,…,m.
Given q≥3, let E_q={a_1,…,a_q} be a set of q distinct
points. 𝒞^∗(L,m) denotes the subspace of 𝒞
(L,m)such that Σ=( f,Δ)
∈𝒞^∗( L,m) if and only if ∂Δ and
∂Σ have 𝒞^∗(L,m)-partitions. That is, the
partitions are 𝒞(L,m)-partitions in (<ref>) and (<ref>)
so that f has no branch points in α_j^∘∩ f^-1(E_q) for
every j=1,…,m.
ℱ(L,m) denotes the subspace of 𝒞(L,m) such
thatΣ=( f,Δ) ∈ℱ
( L,m) if and only if ∂Δ and ∂Σ
have ℱ(L,m)-partitions (<ref>) and (<ref>), that is,
the partitions are 𝒞(L,m)-partitions such that, for each
j=1,2,…,m, f has no branch point in α_j^∘.
ℱ_r denotes the subspace of ℱ such that
Σ=( f,Δ) ∈ℱ_r if and only if
f has no branch point in Δ\ f^-1(E_q), say,
C_f^∗( Δ) =∅, and define
ℱ_r(L)=ℱ_r∩ℱ(L),
ℱ_r(L,m)=ℱ_r∩ℱ(L,m).
The condition in the definition of ℱ(L,m) is equivalent to say
that, for each j=1,…,m, f restricted to a neighborhood of α
_j^∘ in Δ is a homeomorphism onto a one-side
neighborhood of c_j^∘, which is the part of a neighborhood of
c_j^∘ contained in the closed disk enclosed by the circle determined
by c_j.
By definition, we have
ℱ_r( L,m) ⫋ℱ( L,m)
⫋𝒞^∗( L,m) ⫋𝒞(
L,m) ,
and
ℱ(L)=∪_m=1^∞ℱ( L,m) =∪
_m=1^∞𝒞( L,m) .
For each Σ∈𝒞( L,m) , there exists an integer
m_1>m such that Σ∈ℱ( L,m_1) .
Analogously to Definition <ref>, we define the Ahlfors constants in different subspaces of covering surfaces.
Given q≥3, for any set E_q={a_1,…,a_q} of q
distinct points, we define
H_0=sup_Σ∈ℱH(Σ)=sup_Σ∈ℱ
H(Σ,E_q),
H_L=H_L(E_q)=sup_Σ∈ℱ(L)H(Σ)=sup_Σ∈ℱ(L)H(Σ,E_q),
H_L,m=sup_Σ∈ℱ(L,m)H(Σ)=sup_Σ∈ℱ(L,m)H(Σ,E_q),
For any surface Σ∈ℱ and any ε>0, to
estimate H(Σ) we may assume L(∂Σ)<+∞. Otherwise, we
have H(Σ)=0.
Let ℒ be the set of points of continuity of H_L=H_L(E_q), regarded as a function of L.
By Ahlfors' SFT, we can see that
H_0=lim_L→+∞H_L<+∞.
Since H_L increases with respect to L, it is clear that (0,+∞)\ℒ is at most a countable set. Thus for each L∈ℒ, there exists a positive number δ_L such that for each L^'∈(L-δ_L,L+δ_L), we have
H_L-π/(2L)<H_L^'<H_L+π/(2L).
Now we can state our main theorem as follows.
Let L∈ℒ and let Σ=(
f,Δ) be a covering surface in 𝒞^∗(L,m). Assume that
H(Σ)>H_L-π/(2L(∂Σ)).
Then there exists a surface Σ^'=( f^',Δ) such that
(i) Σ^'∈ℱ_r(L,m).
(ii) H(Σ^')≥ H(Σ) and L(∂Σ^')≤
L(∂Σ). Moreover, at least one of the inequalities is strict if
Σ∉ℱ_r(L,m).
(iii) When L(∂Σ^')=L(∂Σ), we have
∂Σ^'=∂Σ and they share the same
ℱ(L,m)-partitions (<ref>) and (<ref>).
Now we outline the structure of this paper. Section 2 introduces some
fundamental properties of covering surfaces, especially the surgeries to sew
two surfaces along the equivalent boundary arcs. In Section 3, we remove the
non-special branch points of the given surface, and in Section 4 we finish our
proof of the main theorem.
§ ELEMENTARY PROPERTIES OF COVERING SURFACES
This section consists of some useful properties of covering surfaces. For a
path Γ on S given by z=z(t), t∈[t_1,t_2], -Γ is the opposite path of Γ given by z=z(-t), t∈[-t_2,-t_1].
A convex domain enclosed by a convex circular arc c and its
chord I is called a lune and is denoted by 𝔇^'( I,c) ,𝔇^'( I,θ(c)) ,
𝔇^'( I,L(c)) , or 𝔇^'( I,k(c)) where θ is the interior angle at the two
cusps, k is the curvature of c and I is oriented such that[The
initial and terminal points of I and c are the same, respectively, in the
notation 𝔇^'(I,θ), in other words, 𝔇
^'(I,θ) is on the right hand side of I.] ∂𝔇^'( I,θ) =c-I.
For two lunes 𝔇^'( I,θ_1) and
𝔇^'( -I,θ_2) sharing the common chord
I we write
𝔇( I,θ_1,θ_2) =𝔇^'( I,θ_1) ∪ I^∘∪𝔇^'(
-I,θ_2)
and called the Jordan domain 𝔇( I,θ_1,θ
_2) a lens. Then the notations 𝔇( I,l_1
,l_2), 𝔇( I,c_1,c_2)and
𝔇( I,k_1,k_2) are in sense and denote the same
lens, when l_j=L(c_j) and k_j is the curvature of c_j, j=1,2,
say,
𝔇( I,c_1,c_2) =𝔇(
I,l_1,l_2) =𝔇( I,k_1,k_2)
=𝔇^'( I,l_1) ∪ I^∘∪𝔇^'( -I,l_2)
=𝔇^'( I,c_1) ∪ I^∘∪𝔇^'( -I,c_2) =𝔇^'(
I,k_1) ∪ I^∘∪𝔇^'( -I,k_2
) .
For a lune 𝔇^'( I,τ) , whether τ
denotes the length l, the angle θ, or the curvature k is always
clear from the context, and so is for the lens 𝔇( I,τ
_1,τ_2) . By definition, we have 0<θ_j≤π for
j=1,2, but for the domain 𝔇( I,θ_1,θ
_2) it is permitted that θ_1 or θ_2 is zero, say
𝔇( I,θ_1,θ_2) reduces to
𝔇^'( I,θ_1) or 𝔇^'( -I,θ_2) . By definition of 𝔇
(I,θ,θ) we have
𝔇(I,θ,θ)=𝔇^'( I,θ)
∪𝔇^'( -I,θ) ∪ I^∘,
and θ∈(0,π]. If I=1,0,-1 and θ=π/2, for
example, 𝔇( I,θ,θ) =Δ and
𝔇^'( I,θ) =Δ^+ is the upper half
disk of Δ.
Let Σ=( f,U) ∈ℱ and let
p∈∂ U. If f is injective near p, then f is homeomorphic in a
closed Jordan neighborhood N_p of p in U, and then f(N_p) is a
closed Jordan domain on S whose boundary near f(p) is an SCC arc, or two
SCC arcs joint at f(p), and thus the interior angle of f(N_p) at f(p)
is well defined, called the interior angle of Σ at p and denoted by
∠(Σ,p).
In general, we can draw some paths {β_j}_j=1^k in U
with ∪_j=1^kβ_j\{p}⊂ U and β_j∩β_i={p}if i≠ j, such that each ( f,β_j)
is a simple line segment on S, ∪_j=1^kβ_j divides a closed
Jordan neighborhood N_p of p in U into k+1 closed Jordan
domains U_jwith p∈U_j,j=1,…,k+1, and
U_i∩ U_j=∅if i≠ j, and f restricted to
U_j is a homeomorphism with ( f,U_j)
∈ℱ for each j. Then the interior angle of Σ at p is
defined by
∠( Σ,p) =∑_j=1^k+1∠( (
f,U_j) ,p) .
(i). (Stoilow's Theorem <cit.>
pp.120–121) Let U be a domain on ℂ and let
f:U→ S be an open, continuous and discrete mapping. Then there
exist a domain V on ℂ and a homeomorphism
h:V→ U, such that f∘ h:V→ S is a holomorphic mapping.
(ii). Let Σ=(f,U) be a surface
where U is a domain on ℂ. Then there exists a domain
V on ℂ and an OPH h:V→U such that f∘ h:V→ S is a holomorphic mapping.
(iii) Let Σ=(f,U)∈𝐅.
Then there exists an OPH φ:U→U such
that f∘φ is holomorphic on U.
What f is discrete means that f^-1(w)∩ K is finite for any compact
subset K of U.
Let Σ=(f,U) be a
surface where U is a domain on ℂ. Then f:U→ S is the restriction of an OPCOFOM g defined in a
neighborhood U_1 of U, and thus by Stoilow's theorem, there
exists a domain V_1 on ℂ and an OPH h:V_1
→ U_1 such that g∘ h is holomorphic on V_1 and then for
V=h^-1(U), f∘ h is holomorphic on
V, and thus (ii) holds.
Continuing the above discussion, assume that U is a Jordan domain. Then V is also a Jordan domain, and by the Riemann mapping theorem there exists a conformal mapping h_1 from U onto V; by Carathéodory's extension theorem, h_1 can be extended to a homeomorphism from U onto V, and thus the extension of h∘ h_1 is the desired mapping φ in (iii).
For two curves (α_1,[a_1,b_1]) and (α
_2,[a_2,b_2]) on S, we call they equivalent and write
(α_1,[a_1,b_1])∼(α_2,[a_2,b_2])
if there is an increasing homeomorphism τ:[a_1,b_1]→
a_2,b_2] such that α_2∘τ=α_1. For two surfaces
(f_1,U_1) and (f_2,U_2), we call they
equivalent and write (f_1,U_1)∼(f_2,U_2)
if there is an orientation-preserving homeomorphism (OPH) ϕ:U_1→U_2 such that f_2∘ϕ=f_1.
By our convention , for any covering surface Σ=(f,U) over
S, f is the restriction of an OPCOFOM f defined on a Jordan
neighborhood V of U. By Theorem <ref>, there is a
self-homeomorphism h of V such that f∘ h is holomorphic on V.
Thus, Σ is equivalent to the covering surface (g,U_1),
where U_1=h^-1(U) and g=f∘ h is holomorphic
on U_1. For any two equivalent surfaces Σ_1
=(f_1,U_1) and Σ_2=(f_2,U_2), we have
A(f_1,U_1)=A(f_2,U_2), L(f_1,∂U_1)=L(f_2,∂U_2) and n
(f_1,E_q)=n(f_2,E_q) for a fixed set E_q. Thus we can
identify the equivalent surfaces and for any surface Σ=(f,U
), we may assume f is holomorphic in U.
Theorem <ref> is a powerful tool to explain the connection between OPCOFOM
and the holomorphic map. The following lemma is a consequence of Theorem
<ref>. We shall denote by D(a,δ) the disk on S with center a and
spherical radius δ. Then Δ⊂ S is the disk D(0,π/2).
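Indeed, for every boundary point e^√(-1)φ of Δ we have d(0,e^√(-1)φ)=∫_0^1 2dr/(1+r^2)=π/2, so the closed unit disk has spherical radius π/2 about 0.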
Let (f,U) be a surface, where U is a domain on ℂ bounded by a finite number of Jordan curves, ( f,∂ U) consists of a finite number of simple circular arcs, and q∈ f(U). Then, for a sufficiently small disk D(q,δ) on S with δ<π/2, f^-1(D(q,δ))∩U is a finite union of disjoint sets {U_j
}_1^n in U, where each U_j is a Jordan domain in U,
such that for each j, U_j∩ f^-1(q) contains exactly one
point x_j and (A) or (B) holds:
(A) x_j∈ U_j⊂U_j⊂ U and f:U_j
→D(q,δ) is a BCCM such that x_j is the only
possible branch point.
(B) x_j∈∂ U, f is locally homeomorphic on U_j
\{x_j}, and when ( f,U) ∈ℱ, the following conclusions (B1)–(B3) hold:
(B1) The Jordan curve ∂ U_j has a partition α_1(
p_1,x_j) +α_2( x_j,p_2) +α_3(
p_2,p_1) such that α_1+α_2=( ∂
U) ∩∂ U_j is an arc of ∂ U, α_3^∘⊂ U, c_j=( f,α_j) is an SCC arc for j=1,2,
and c_3=( f,α_3) is a locally SCC[The
condition δ<π/2 makes ∂ D( q,δ) strictly
convex, and it is possible that ( f,α_3^∘) may
describes ∂ D( q,δ) more than one round, and in
this case ( f,α_3^∘) is just locally SCC.] arc in
∂ D( q,δ) from q_2=f( p_2) to
q_1=f( p_1). Moreover, f is homeomorphic in a
neighborhood of α_j\{x_j} in U for j=1,2,
and
∂( f,U_j) =( f,∂ U_j)
=c_1+c_2+c_3.
(B2) The interior angle of ( f,U_j) at p_1
and p_2 are both contained in [7π/16,9π/16].
(B3) There exists a rotation ψ of S with ψ(q)=0 such that one of
the following holds:
(B3.1) q_1=q_2,( f,α_1) =q_1q
=q_2q=-( f,α_2) , say, ( f,α
_1+α_2) =q_1q+qq_1, and (
ψ∘ f,U_j) is equivalent to the
surface[Here δ z^ω_j is regarded as the mapping
z↦δ z^ω_j∈ S,z∈Δ^+, via the
stereographic projection P.] ( δ z^ω_j:Δ^+) on S so that
( δ z^ω_j,[-1,1]) =a_δ
,0+0,a_δ,
where ω_j is an even positive integer and a_δ∈(
0,1) with d( 0,a_δ) =δ.
(B3.2) q_1≠ q_2, as sets c_1∩ c_2={q}, and (
ψ∘ f,U_j) is equivalent to the the surface
( F,Δ^+∪𝔇_1^'
∪𝔇_2^') so that the following holds.
(B3.2.1) 𝔇_1^'=𝔇^'(
-1,0,θ_1)and 𝔇_2^'=𝔇^'( 0,1,θ_2), such that
for each j=1,2,θ_j∈0,π/4]. Moreover θ_1=0
(or θ_2=0) when c_1=q_1q (or c_2=qq_2), and in this case 𝔇_1^'=∅ (or
𝔇_2^'=∅). See Definition <ref> for the
notation 𝔇^'( ·,·) .
(B3.2.2) ( F,Δ^+) is the surface T=(
δ z^ω_j,Δ^+), where ω_jis
a positive number which is not an even number and even may not be an integer,
( F,𝔇_1^') is the lune
ψ( 𝔇^'( q_1q
,c_1) ) and ( F,𝔇_2^'
) is the lune ψ( 𝔇^'(
qq_2,c_2) ) . That is to say, (
f,U_j) is obtained by sewing the sector
ψ^-1( T) with center angle[This angle maybe
larger than 2π as the sector ( z^3,Δ^+) at 0.] ω_jπ, and the closed lunes 𝔇
^'( q_1q,c_1) and 𝔇^'( qq_2,c_2) along
q_1q and qq_2 respectively.
(A) follows from Stoilow's theorem directly when x_j∈ U. (B) follows
from (A) and the assumption ( f,U) ∈ℱ,
by considering the extension of f which is an OPCOFOM in a neighborhood of
x_j in ℂ.
We list more elementary conclusions deduced from the previous
lemma directly and more notations. Let Σ=(f,U)∈ℱ, q∈ f(U), δ, x_j, U_j and α_1
+α_2 be given as in Lemma <ref>.
(A) If for some j, x_j∈Δ, then by Lemma <ref> (A), f is a
BCCM in the neighborhood U_j of x_j in Δ, and the order
v_f(x_j) of f at x_j is well defined, which is a positive integer,
and f is a v_f(x_j)-to-1 CCM on U_j\{x_j}.
(B) If for some j,x_jis contained in ∂Δ, then, using
notations in Lemma <ref> (B), there are two possibilities:
(B1) q_1=q_2, the interior angle of Σ at x_j equals
ω_jπ, and the order v_f( x_j) is defined to be
ω_j/2, which is a positive integer.
(B2) q_1≠ q_2,c_1+c_2 is a simple arc from q_1 to q, and
then to q_2. In this case the interior angle of Σ at x_j equals
ω_jπ+φ_1+φ_2, where φ_1 and φ_2
are the interior angles of 𝔇^'( q_1
q,c_1) and 𝔇^'( qq_2
,c_2) at the cusps, and we defined the order of f at x_j to
be the least integer v_f( x_j) with v_f(
x_j) ≥( ω_jπ+φ_1+φ_2) /2π. Since ω_jπ+φ_1+φ_2≥ω_jπ>0, we have
v_f( x_j) ≥1 and f is injective on U_j
\{ c_1+c_2} iff v_f( x_j) =1.
This is also easy to see by Corollary <ref> (v).
(C) The number v_f(x_j) can be used to count path lifts with the same
initial point x_j: when x_j∈Δ, any sufficiently short line
segment on S starting from q=f(x_j)has exactly v_f(
x_j) f-lifts starting from x_j and disjoint in Δ\{x_j}; and when x_j∈∂Δ, for each arc β
of the two sufficiently short arcs of ∂Δ with initial point
x_j, (f,β) is simple and has exactly v_f(x_j)-1 f-lifts
{β_j} _j=1^v_f( x_j) -1 with the
same initial point x_j, β_j\{x_j}⊂Δ for
each j and they are disjoint in Δ. This is also easy to see by
Corollary <ref> (v).
(D) A point x∈U is called a branch point of f (or
Σ) if v_f(x)>1, or otherwise called a regular point if
v_f( x) =1. We denote by C_f the set of all branch points
of f, and CV_f the set of all branch values of f. For a set
A⊂U, we denote by C_f( A) =C_f∩ A the
set of branch points of f located in A, and by CV_f(K)=CV_f∩
K the set of branch values of f located in K⊂ S. We will
write
C_f^∗( A) =C_f( A) \ f^-1
(E_q) and C_f^∗=C_f\ f^-1(E_q)=C_f(
U) \ f^-1(E_q).
(E) For each x∈U, b_f( x) =v_f(
x) -1is called the branch number of f at x, and for a set
A⊂U we write B_f( A) =∑_x∈ A
b_f( x) . Then we have b_f( x) ≠0 iff
C_f( x) ={x}, and B_f( A) =∑_x∈
C_f( A) b_f(x). We also define
B_f^∗( A) =B_f( A\ f^-1(E_q))
.
Then B_f^∗( A) ≥0, equality holding iff C_f^∗( A) =∅. When A=U is the domain of
definition of f, we write
B_f=B_f( U) and B_f^∗
=B_f^∗( U) .
Now we can state a direct Corollary to Lemma <ref>.
Let Σ=( f,U) ∈ℱ and let ( x_1,U_1) be a disk
of Σ with radius δ_1. Then, the following hold.
(i) f is locally homeomorphic on U_1\{x_1}; and
if ( x_1,U_1^') is another disk of Σ with
radius δ_1^'>δ_1, then U_1⊂
U_1^', whether x_1is in ∂ U or U.
(ii) If f is homeomorphic in some neighborhood of x_1 in U
(which may be arbitrarily small), or if f locally homeomorphic on
U, then the disk ( x_1,U_1) is a
one sheeted closed domain of Σ, say, f restricted to U_1 is a homeomorphism onto f( U_1) .
(iii) For each x_2∈ U_1\{x_1}, any closed disk (
x_2,U_2) of Σ is a one sheeted closed domain of
Σ, moreover, U_2⊂ U_1 when the radius of
( x_2,U_2) is smaller than δ-d( f(x_1
),f(x_2)) .
(iv) If x_1∈∂ U, f is regular at x_1 and (
f,∂ U) is circular near x_1, then ( f,U_1) is a convex and one sheeted closed domain of
Σ, which is in fact the closed lens 𝔇(
I,c_1,c_1^'), where c_1 and c_1^' are
circular subarcs of ∂Σ and the circle ∂ D(
f(x_1),δ_1) , I is the common chord, and the three paths
c_1,-c_1^',I have the same initial point. Moreover, if
∂Σ is straight at x_1, then f(U_1
)=𝔇^'( -I,c_1^')
=𝔇^'( -c_1,c_1^')is
half of the disk D(f(x_1),δ_1) on the left hand side of
diameter c_1 (see Definition <ref> for lenses and lunes).
(v) For any x∈U_1, there exists a path I(
x_1,x) in U_1 from x_1 to x such that
I( x_1,x) is the unique f-lift of f(x_1
)f(x). That is to say, ( f,U_1) can be foliated
by the family of straight line segments {( f,I( x_1,x)
) :x∈∂ U_1} which are disjoint in U_1
\{x_1}.
(vi) For each x∈∂ U, the interior angle of Σ at x is positive.
Lemma <ref> also implies a criterion of regular point.
Let (f,U)∈ℱ. Then the following hold.
(A) For each p∈U, f restricted to some neighborhood of p in U is a homeomorphism if one of the following alternatives holds.
(A1) p∈ U and p is a regular point of f.
(A2) p∈∂ U, p is a regular point of f and (f,∂ U) is
simple in a neighborhood of p on ∂ U.
(B) For any SCC arc ( f,α) of ∂Σ=(
f,∂ U) , f restricted to a neighborhood of α^∘
in U is a homeomorphism if and only if f has no branch point on α^∘. Here α^∘ means the interior of the arc α.
The hypothesis in condition (A2) that (f,∂ U) is simple cannot be
ignored. See the following example.
Take f(z)=z^2 for z∈Δ^+. Then f is regular at z=0 but is not injective in any neighborhood of 0 in Δ^+; indeed, f(x)=f(-x)=x^2 for all small real x on the boundary segment, while the interior angle of (f,Δ^+) at 0 equals 2π, so that v_f(0)=1.
The following lemma shows how to sew two surfaces together into one surface along equivalent boundary arcs. This is an important tool in Section 4.
For j=1,2, let Σ_j=(f_j,U_j) be a surface and let
α_j=α_j( x_j1,x_j2) be a proper arc of
∂ U_j such that ( f_j,α_j) is a simple arc
with distinct endpoints. If
(f_1,α_1)∼-(f_2,α_2),
then (f_1,U_1) and (f_2,U_2)can be
sewn along ( f_1,α_1) to become a
surface Σ_3=(f_3,Δ), such that the following hold:
(i) There exist orientation-preserving homeomorphisms (OPHs)
h_1:U_1→Δ^+ and h_2
:U_2→Δ^-, called
identification mappings (IMs), such that
(h_1,α_1)∼-1,1]∼-( h_2,α_2)
=( h_2,-α_2) ,
f_1∘ h_1^-1( x) =f_2∘ h_2^-1(x),∀
x∈[-1,1],
and
f_3(z)={ f_1∘ h_1^-1(z), z∈Δ^+;  f_2∘ h_2^-1(z), z∈Δ^-\[-1,1] }
is a well defined OPCOFOM, and we have the equivalent relations
(f_3,Δ^+)∼(f_1,U_1),(f_3
,Δ^-)∼(f_2,U_2),
∂Σ_3=( f_3,( ∂Δ) ^+)
+( f_3,( ∂Δ) ^-) ∼(
f_1,( ∂ U_1) \α_1^∘)
+( f_2,( ∂ U_2) \α_2^∘) ,
and
(f_3,[-1,1])∼(f_1,α_1)∼(f_2,-α_2).
(ii)
L(∂Σ_3)=L(∂Σ_1)+L(∂Σ_2
)-2L(f_2,α_2),
A(Σ_3)=A( Σ_1) +A(Σ_2),
n( Σ_3) =n( Σ_1)
+n( Σ_2) +#( γ^∘∩
E_q) ,
and
R(Σ_3)=R(Σ_1)+R(Σ_2)-4π#( γ^∘∩
E_q) .
(iii) z∈ C_f_3( Δ\{-1,1}) if
and only if h_1^-1(z)∈ C_f_1( U_1\∂α_1) or h_2^-1(z)∈ C_f_2(
U_2\∂α_2). In particular, if
f_1(∂α_1)⊂ E_q, then f_2(∂α
_2)⊂ E_q and in addition
CV_f_3(S\ E_q)=CV_f_1(S\ E_q)∪ CV_f_2
(S\ E_q).
The conclusion (i) in fact gives a routine how to sew Σ
_1 and Σ_2. By (<ref>), there exists an OPH[Note
that -α_2 is the same path with opposite direction, not the set
{-y:y∈α_2}.] φ:α_1→-α_2 such
that
( f_1,α_1) =( f_2∘φ,α_1)
,
that is
f_2( φ(x)) ≡ f_1(x),∀ x∈α_1.
Let h_1:U_1→Δ^+ be any OPH such
that h_1(α_1)=[-1,1]. Then let h_2:U_2
→Δ^- be an OPH such that
h_2( y) ≡ h_1( φ^-1(y) ),∀
y∈α_2.
In fact, h_2|_α_2 defined by (<ref>) is an OPH from
α_2 onto [1,-1] and can be extended to be an OPH h_2 from
U_2 onto Δ^-. The pair of h_1 and
h_2 are the desired mappings satisfying (i). Then (ii) is trivial to verify.
To prove (iii) we may assume that Σ_1 and Σ_2 are the
surfaces Σ_±=(f_±,Δ^±) such that f_±
agree on [-1,1], and then f_3 defined by f_± on Δ^±is an OPLM. When x∈(-1,1) is a branch point of f_+ or
f_-, x is obviously a branch point of f_3. Since f_± are the
restrictions of f_3 to Δ^±, and (
f_+,[-1,1]) and ( f_-,[1,-1]) are simple with
opposite direction, if x∈( -1,1) is not a branch point of
f_±, then f_± are homeomorphisms in neighborhoods V^± of x
in Δ^± and the simple arc ( f_3,[-1,1]) separates f_+(V^+\-1,1]) and f_-(V^-
\-1,1]), and thus f_3 is homeomorphic on a neighborhood
of x and so x cannot be a branch point of f_3. Therefore x∈
C_f_3 iff x∈ C_f_1∪ C_f_2. In consequence we have
C_f_3( Δ\{-1,1}) =C_f_1
( Δ^+\{-1,1}) ∪ C_f_2(
Δ^-\{-1,1}) , and (iii) follows.
The condition (f_1,α_1)∼(f_2,-α_2) is crucial. Two
copies of the hemisphere S^+ cannot be sewn along their common
edge 0,1⊂ S to become a surface in ℱ, but
S^+ and S^-, with natural edges 0,1
and -0,1=1,0 respectively, can be sewn along
0,1 to become a surface in ℱ.
Lemma <ref> will be used frequently when we patch the covering surfaces.
The condition in this lemma that α_j are proper arcs of ∂ U_j can be replaced by the assumption that one of the curves α_1 and α_2 is proper. Indeed, if only α_1 is proper, then we can find partitions
α_1=α_11+α_12 and α_2=α_21+α_22
so that α_11∼α_21 and α_12∼α_22, and we
can use Lemma <ref> twice.
For a surface Σ=(f,U) and an arc β on S, we define
the lift of β by f as an arc α in U satisfying that (f,
α) ∼β. By Remark <ref>, for any point p∈ U, a
sufficiently short path β from f(p) has exactly v_f(p) lifts from
p.
Let Σ=(f,U
)∈ℱ, p_0∈U and β a polygonal simple path
on S with distinct endpoints. Assume that β has two f-lifts
α_j, j=1,2, with initial point p_0, such that α
_1^∘∩α_2^∘=∅. Then
(i) f(U)=S if α_1 and α_2 terminate at the same
point; moreover, ( f,U) can be sewn along (
f,α_1) ∼( f,α_2) becoming a closed
surface ( f_0,S) .
(ii) If α_1∪α_2 is a proper arc of ∂ U, then the
following (ii1) and (ii2) hold.
(ii1) (f,U) can be sewn along β to become a covering
surface Σ_1=(g,Δ)∈ℱ, such that
A(g,Δ) =A(f,U),
L(g,∂Δ) =L(f,∂ U)-L(f,α_1∪α_2)
=L(f,∂ U)-2L(β),
n( Σ_1,E_q) =n(
Σ,E_q) +#{f( ( α_1∪α_2)
^∘) ∩ E_q}
=n( Σ,E_q) +#{[β^∘∪{f(p_0)}]∩ E_q}.
(ii2) ( f,N) and ( g,N_1) are
equivalent surfaces, where N=U\( α_1
∪α_2) and N_1=Δ\[0,1], and
thus (f, (∂ U) \( α_1∪α_2)
^∘ ), regarded as a closed curve, is equivalent to (g,∂Δ).
(iii) If α_1⊂∂ U,α_2\{
p_0}⊂ U, the terminal point of β is in E_q but all
other points of β are outside of E_q, then there exists a covering
surface Σ_1 such that
R(Σ_1) =R(Σ)+4π,
L(∂Σ_1) =L(∂Σ),
and ∂Σ_1 is equivalent to the closed curve ∂Σ.
We first consider that α_1 and α_2 have the same terminal
point. Then they bound a Jordan domain V in U, and thus f(U)⊃ f(V)=S by the argument principle. On the other hand, we can sew the closed domain V by identifying α_1 and α_2, so that points x∈α_1 and y∈α_2 are identified if and only if f(x)=f(y), to obtain the surface S. Then (f,V) becomes a closed surface ( f_0,S). So (i) holds true.
To prove (ii), we may assume that α_1 and ∂ U have the same
orientation. Then α_2 and ∂ U have opposite orientations,
and there exists an orientation-preserving homeomorphism ϕ:Δ^+→U with ϕ([0,1])=α_1,
ϕ([-1,0])=-α_2 and f∘ϕ(x)=f∘ϕ(-x) for any
x∈0,1]. Let g(z)=f∘ϕ(re^iθ/2) with z=re^iθ∈Δ, θ∈[0,2π]. Then, Σ_1=(g,Δ)∈ℱ is a covering surface which satisfies the conclusion
of (ii).
To prove (iii), let h be an OPCOFOM map from Δ^+ onto
U such that h restricted to Δ^+
∖[-1,1] is a homeomorphism onto U\α_2. Moreover, we assume that h maps both [-1,0] and [0,1] homeomorphically onto α_2 with opposite directions, and maps the arc α_1^'={ e^√(-1)θ:θ∈[0,π/2]} homeomorphically onto α_1. Then we consider the
surface Σ^'=( f∘ h,Δ^+).
After rescaling the parameter of ∂Σ^', we may assume that
Σ^' satisfies (ii), with α_1 and α_2 of (ii)
being replaced by [0,1] and α_1^'. Then by identifying
α_1^' and [0,1] as in (ii), we can sew Σ_1^'
to obtain a new surface Σ_1. It is clear that n
(Σ_1,E_q)=n(Σ_1^',E_q)=n
(Σ,E_q)-1, and thus Σ_1 satisfies (iii).
(<cit.> p. 32–35) Let Σ=(f,Δ
)∈ℱ and β be a path on S with initial point q_1.
Assume that α⊂Δ is an f-lift of some subarc of
β from q_1, and α^∘⊂Δ. Then α can be
extended to an f-lift α^' of a longer subarc of β with
α^'∘⊂Δ, such that either α^'
terminates at a point on ∂Δ, or α^' is the
f-lift of the whole path β.
The following lemma, which allows an interior branch point (together with its branch value) to be moved locally, is obvious.
Let Σ=(f,Δ)∈ℱ, b∈Δ be a branch
point of f with v_f(b)=d, and δ>0 be a sufficiently small number.
Then there exists a Jordan neighborhood V of b in Δ such that
f:V→D(f(b),δ) is a d-to-1 BCCM so that b
is the unique branch point, and for any y_1 with d(f(b),y_1)<δand any b_1∈ V, there exists a surface Σ=(f_1,Δ)∈ℱ such that f_1 restricted to Δ\ V equals f and f_1:V→D(f(b),δ) is a d-to-1 branched covering map such that b_1 becomes the unique
branch point of f_1 in V, y_1=f_1(b_1) and v_f_1
(b_1)=v_f(b).
The following results are essentially consequences of the argument principle.
Let (f,Δ)∈ℱ and let D be a
Jordan domain on S such that f^-1 has a univalent branch g defined on
D. Then g can be extended to a univalent branch of f^-1 defined on the closure of D.
The proof of this lemma is almost the same as that of Lemma 5.2 in <cit.>.
Let D_1 and D_2 be Jordan domains on ℂ or S
and let f:D_1→D_2 be a map such that
f:D_1→ f(D_1) is a homeomorphism. If
f(∂ D_1)⊂∂ D_2, then f(D_1
)=D_2.
§ REMOVING BRANCH POINTS OUTSIDE F^-1(E_Q)
In this section, we will introduce the surgeries to remove branch points
outside f^-1(E_q). Before presenting the key techniques, we recall some properties of the partitions of a covering surface Σ=( f,Δ) ∈𝒞( L,m).
Let Γ=( f,∂Δ) be a closed curve
in S which consists of a finite number of SCC arcs. We define 𝔏
( Γ) to be the minimal integer m with the following
property: there exists closed arcs γ_j1,γ_j2,j=1,…,m, such
that
Γ =γ_02=γ_11+γ_12;
γ_12 =γ_21+γ_22;
…
γ_m-1,2 =γ_m1,
in which for each j=1,2,…,m, γ_j1 is either a simple closed arc
of γ_j-1,2, or a folded path I+( -I) where I is a
maximal simple arc such that I+( -I) is a folded arc of
γ_j-1,2. Note that the same closed curve γ_jk may have
different initial point in different places.
Note that 𝔏( ∂Σ) <+∞ if Σ∈𝒞( L,m). The following examples give an intuitive
explanation of 𝔏( Γ).
(1) If Γ is simple or Γ=ab+ba, then
𝔏( Γ) =1. When Γ is a point, we write
𝔏( Γ) =0.
(2) For the closed curve Γ in Figure <ref> (1) we have
𝔏( Γ) =2.
(3) The closed curve Γ=ABCDEFGHIJKLMNOPQA in Figure <ref> (2), in
which CD,GH,KLM,LM,NO are five straight line segments on S (KLM is
straight), contains no simple closed arcs, but it contains four maximal folded
closed arcs CDE, GHI, LMN, and NOP, and thus 𝔏(
Γ) =5.
The following lemma is trivial from Definition <ref>.
Let Γ be a closed curve on S which consists of a finite
number of simple circular arcs. If Γ has a partition Γ=γ
_1+γ_2 such that γ_1 is a simple closed arc, or a maximal
folded closed arc, then
𝔏( γ_2) =𝔏(Γ)-1.
Now we start to introduce some lemmas to deal with the non-special branch points, i.e., the branch points that do not lie over E_q (correspondingly, the special branch points are the branch points over E_q). This is essentially similar to previous results in <cit.>. We first establish a lemma to remove the
non-special branch points in the interior, that is, the branch points in
C_f^∗(Δ) (Recall Remark <ref> for the notations).
Let Σ=( f,Δ)
∈𝒞^∗( L,m) and assume that (<ref>)
holds. If C_f^∗(Δ)≠∅, then there exists a surface
Σ_1=( f_1,Δ) ∈𝒞^∗( L,m) such that
H(Σ_1)≥ H( Σ) ,L(∂Σ_1)≤
L(∂Σ),
and
B_f_1^∗( Δ) ≤ B_f^∗( Δ)
-1.
Moreover, L(∂Σ_1)=L(∂Σ) if and only if
∂Σ_1=∂Σ,H(Σ_1)≥ H( Σ)
and B_f_1^∗( ∂Δ) >B_f^∗(
∂Δ) .
Corresponding to Definition <ref>, we assume ∂Δ and
∂Σ have 𝒞^∗(L,m)-partitions
∂Δ=α_1( a_1,a_2) +α_2(a_2,a_3)+…+α_m( a_m,a_1)
and
∂Σ=c_1( q_1,q_2) +c_2(q_2,q_3
)+…+c_m( q_m,q_1) ,
where q_j=f( a_j) and c_j( q_j,q_j+1)
=( f,α_j( a_j,a_j+1) ) ,j=1,…,m. By
definition of 𝒞^∗(L,m), f has no branch points in
α_j^∘∩ f^-1(E_q) for each j=1,2,…,m.
Let p_0∈C_f^∗( Δ), say, p_0 is a
non-special branch point of f with order v and let b_0=f(p_0). Let
b be a point in E_q such that d( b_0,b) <π. Then
there is a polygonal simple path η=η( b_0,b) on S
from b_0 to b such that
η^∘∩ E_q=∅, η^∘∩{q_j}_j=1^m=∅, and η^∘ contains no branch value of
f. Moreover, η^∘ intersects ∂Σ perpendicularly and
η∩∂Σ contains only finitely many points.
We can extract a maximal subarc η_1=η( b_0,b_1) of η, with b_1∈η\{b_0}, such that η_1 has v distinct f-lifts β_l=β_l( p_0,p_l), l=1,2,…,v, starting from p_0, with
β_l^∘⊂Δ, l=1,…,v,
and that
β_l_1^∘∩β_l_2^∘=∅, 1≤
l_1<l_2≤ v.
The maximum of η_1 means that either b_1=b∈ E_q, or some of
{p_l}_l=1^v are contained in ∂Δ. We write
A=∪_l=1^vβ_l, and assume that β_l are arranged
anticlockwise around the common initial point p_0. Thus, by Condition
<ref>, the following claim holds.
(i) { p_l} _l=1^v⊂Δ only if b_1=b;
(ii) { p_l} _l=1^v⊂ f^-1(E_q) if and only if
b_1=b∈ E_q;
(iii) p_l_1=p_l_2 for some l_1≠ l_2 if and only if
p_l_1 is also a branch point and b_1=b.
Then we have only five possibilities:
Case (1). p_l_1=p_l_2 for some l_1≠ l_2
and p_l_1∈∂Δ.
Case (2). p_l_1=p_l_2 for some l_1≠ l_2
and p_l_1∈Δ.
Case (3). p_l,l=1,…,v, are distinct from each other
and {p_l}_l=2^v⊂Δbut p_1∈∂Δ.
Case (4). p_l,l=1,…,v, are distinct from each other
and {p_l}_l=1^v⊂Δ.
Case (5). p_l,l=1,…,v, are distinct from each other
and there exist some distinct l_1and l_2 such that both p_l_1
and p_l_2 are contained in ∂Δ.
Now we will discuss the above cases one by one.
Cases (1) and (2) cannot occur.
Assume Case (1) occurs. By Claim <ref> (iii), p_l_1(=p_l_2)
is a branch point in f^-1(E_q) and b_1=b. Since {β_j}_j=1^v are arranged anticlockwise, we can derive that p_l_1=p_l_2=p_l_1+1, which means that there exist two adjacent f-lifts
β_l_1 and β_l_1+1 whose terminal points coincide. The
f-lift β_l_1-β_l_1+1 encloses a domain D⊂Δ.
Thus we can cut D off Δ along its boundary and sew up the remaining part
to obtain a new surface Σ_1=( f_1,Δ)
such that f_1=f in a neighborhood of ∂Δ\{p_l_1
} in Δ. Then ∂Σ_1=∂Σ. We also
have p_l_1∈{a_j}_j=1^m since p_l_1 is a branch point in
f^-1(E_q) and f has no branch points in α_j^∘∩
f^-1(E_q) for j=1,2,…,m. Thus (<ref>) and (<ref>) are
𝒞^∗( L,m) partitions of ∂Σ_1
which implies that Σ_1∈𝒞^∗( L,m) . By
Lemma <ref> (i) we have:
(f,D) can be sewn along its boundary
(f,β_l_1)∼(f,β_l_2)=η, resulting in a closed surface
Σ_0=(f_0,S).
Assume that the degree of Σ_0 is d. Then, by the Riemann–Hurwitz formula, we have
n( Σ_0,E_q) =qd-∑_x∈
f_0^-1(E_q)(v_f_0(x)-1)
≥ qd-∑_x∈ S(v_f_0(x)-1)
≥(q-2)d+2.
On the other hand, (∂ D)∩ f^-1(E_q)={p_l_1}. Thus we
have n( Σ_1) =n(
Σ) -n( Σ_0) +1≤n( Σ) -(q-2)d-1. It is clear that A( Σ
_1) =A(Σ)-4dπ. Then we have
R(Σ_1) =( q-2) A( Σ_1)
-4πn( Σ_1,E_q)
≥( q-2) A( Σ) -4π( q-2)
d-4πn( Σ,E_q) +4π(q-2)d+4π
=R(Σ)+4π,
and thus H(Σ_1)≥ H(Σ)+4π/L(∂Σ_1), which
with ∂Σ_1=∂Σ and (<ref>) implies a
contradiction:
H_L≥ H(Σ_1)> H_L-π/(2L(∂Σ))+4π/L(∂Σ_1)=H_L+7π/(2L(∂Σ_1)).
Thus Case (1) cannot occur.
Following the same arguments, one can show that Case (2) also cannot occur.
Discussion of Case (5).
Assume Case (5) occurs. Then the f-lift -β_l_1+β_l_2
divides Δ into two Jordan domains Δ_1 and Δ_2 with
∂Δ_1=-β_l_2+β_l_1+τ_1, ∂Δ_2=-β_l_1+β_l_2+τ_2,
where τ_1 is the arc of ∂Δ from p_l_1 to p_l_2, and τ_2=( ∂Δ) \τ_1^∘.
Then by Lemma <ref>, we can sew ( f,Δ_1) and ( f,Δ_2) along -β_l_1
+β_l_2 respectively to obtain two new surfaces Σ_1=(
f_1,Δ) and Σ_2=( f_2,Δ) such that
∂Σ=∂Σ_1+∂Σ_2,
R( Σ_1) +R( Σ_2) =R(Σ),
and that Σ_1 and Σ_2 satisfy the following condition.
τ_1^∘ (resp. τ_2^∘) has a neighborhood
N_1 (resp. N_2) in Δ_1 (resp. Δ_2). And ( ∂Δ) \{1} has a
neighborhood N_1^' (resp. N_2^') in Δ,
such that ( f_1,N_1^') (resp. ( f_2,N_2^')) is equivalent to ( f,N_1) (resp. ( f,N_2)).
Since each arc in partition (<ref>) is SCC and ( f,τ_1) (resp.( f,τ_2)) is closed, we may assume p_l_1
∈α_i_1( a_i_1,a_i_1+1) \{a_i_1
+1} and p_l_2∈α_i_1+k( a_i_1+k,a_i_1
+k+1) \{a_i_1+k+1} for some 0≤ k≤ m.
We now show that 0<k<m. Otherwise, p_l_1 and p_l_2 are both contained in α_i_1\{a_i_1+1} when k=0 or m. But f is injective on α_j\{a_j+1} for each j, and thus p_l_1=p_l_2, contradicting the assumption. Then
τ_1=α_i_1( p_l_1,a_i_1+1) +α_i_1
+1( a_i_1+1,a_i_1+2) +…+α_i_1+k(
a_i_1+k,p_l_2) ,
and
τ_2=α_i_1+k( p_l_2,a_i_1+k+1)
+α_i_1+k+1( a_i_1+k+1,a_i_1+k+2) +…
+α_i_1+m( a_i_1+m,p_l_1) ,
where a_i_1+j=a_i_1+j-m and α_i_1+j=α_i_1+j-m if
i_1+j>m, and either of the two partitions (<ref>) and (<ref>)
contains at most m terms.
We first show that Σ_1∈𝒞^∗( L,m) . We
may assume ∂Δ has a partition
∂Δ =α_1^'+α_2^'+…+α
_k+1^'
=α_1^'( a_1^',a_2^')
+α_2^'( a_2^',a_3^')
+…+α_k+1^'( a_k+1^',a_1^')
,
such that
( f,α_i_1( p_l_1,a_i_1+1) )
=( f_1,α_1^') ,
(f,α_i_1+1( a_i_1+1,a_i_1+2) ) =(
f_1,α_2^') ,
…
( f,α_i_1+k( a_i_1+k,p_l_2) )
=( f_1,α_k+1^'( a_k+1^',a_1^') ) .
Note that ∂Δ=α_1^' if and only if p_l_1
=a_i_1, p_l_2=a_i_1+1, c_i_1=( f,α_i_1
) is a whole circle, and (f_1,∂Δ)=( f,τ
_1). In this way, L(∂Σ_1)<L(∂Σ). It
follows from (<ref>), (<ref>) and Condition <ref> that, the
partition (<ref>) is a 𝒞^∗( L,m)-partition.
Similarly, Σ_2 also has a 𝒞^∗( L,m)-partition.
It is clear that
max{B_f_1^∗(Δ),B_f_2^∗(Δ)}≤ B_f_1^∗(Δ)+B_f_2^∗(Δ)=B_f^∗( Δ) -1.
Recalling the condition (<ref>), we deduce that max{H(Σ
_1),H(Σ_2)}≥ H(Σ). We may assume H(Σ_1)≥
H(Σ_2), otherwise we replace Σ_1 with Σ_2. Then
Σ_1 is the desired surface in Case (5) and in this case,
L(∂Σ_1)<L(∂Σ).
Discussion of Case (3). Let A=∪_l=1^vβ_l and
Δ_1=Δ\ A. Then we obtain a surface F whose interior
is ( f,Δ_1) and whose boundary is
γ_1 =( ∂Δ) -β_1( p_0
,p_1) +β_2( p_0,p_2) -β_2(
p_0,p_2)
+…+β_v( p_0,p_v) -β_v( p_0
,p_v) +β_1( p_0,p_1) ,
where ∂Δ is regarded as a closed path from p_1 to p_1.
See (1) of Figure <ref> for the case v=3. Now we split A into a
simple path
γ =-β_1^''( p_0^2,p_1) +β
_2^'( p_0^2,p_2) -β_2^''(
p_0^3,p_2)
+…+β_v^'( p_0^v,p_v) -β_v
^''( p_0^1,p_v) +β_1^'(
p_0^1,p_1) ,
as in Figure <ref> (2). Via a homeomorphism from Δ_1^'
onto Δ_1, we obtain the surface F=( g,Δ
_1^') whose interior is equivalent to ( f,Δ
_1) and whose boundary ∂ F=( g,∂Δ
_1^') is equivalent to ( f,γ_1) . Then
it is easy to see that Σ can be recovered by sewing F along
β_l^' and β_l^'', which means by
identifying β_l^' and β_l^'',
l=1,2,…,v.
It is interesting that, by Lemma <ref> (ii), we can sew F by
identifying β_l^'' with β_l+1^', for
l=1,2,…,v-1, and β_v^'' with β_1^',
to obtain a new surface Σ_1=( f_1,Δ) .
Indeed, we can deform Δ_1^' as in Figure
<ref> (2) into Δ_1^'' as in Figure
<ref> (3) with p_1 fixed, and then deform Δ_1^'' homeomorphically onto the disk Δ omitting the union B of the
v line segments p_0^lp_1 for l=1,2,…,v, as in
Figure <ref> (4).
It is clear that A( Σ) =A( Σ_1) and
L(∂Σ)=L(∂Σ_1). When b_1=b, we see by b∈
E_q that {p_j}_j=1^v⊂ f^-1(E_q) and when b_1≠ b
we have A∩ f^-1(E_q)=∅. Thus
n( F,E_q) =n( f_1,E_q)
=#{f^-1(E_q)∩( Δ\ A) }=n( Σ) -( v-1) χ_E_q(
b_1) ,
where χ_E_q( b_1) =1 when b_1∈ E_q and
χ_E_q( b_1) =0 when b_1∉ E_q. Clearly, we
have ∂Σ∼∂Σ_1. Thus Σ_1∈𝒞( L,m) and
H( Σ_1) =H( Σ) +4π(
v-1) χ_E_q( b_1) /L(∂Σ).
If b_1=b, then by (<ref>) and (<ref>), we obtain a
contradiction that
H_L≥ H(Σ_1)>H_L-π/(2L(∂Σ))+4π(v-1)/L(∂Σ)>H_L.
Thus we have
b_1≠ b and A∩ f^-1(E_q)=∅,
which implies that Σ_1∈𝒞^∗( L,m), and
that H( Σ_1) =H( Σ) .
After the above deformations, all p_0^l, l=1,2,…,v, are regular points
of f_1. Thus
∑_l=1^v( v_f_1( p_0^l) -1)
=v_f( p_0) -v=0.
On the other hand, p_0 and { p_l} _l=2^v are the
only possible branch points of f on A∩Δ, and the cut B inside
Δ contains no branch point of f_1. Thus we have
B_f( {p_0,p_2,…,p_v}) ≥ B_f(
p_0) =v_f( p_0) -1=v-1,
and
B_f^∗( Δ) =B_f^∗( (
Δ\ A) ) +B_f^∗( {p_0,p_2
,…,p_v})
≥ B_f^∗( ( Δ\ A) ) +v-1
=B_f_1^∗( Δ\∪_l=1^vp_1
p_0^l) +v-1
=B_f_1^∗( Δ) +v-1
≥ B_f_1^∗( Δ) +1.
It is clear that
v_f_1( p_1) =v_f( p_1) +v_f( p_2) +…+v_f( p_v) ≥ v_f( p_1) +v-1≥ v_f( p_1) +1.
and thus we have by (<ref>) B_f_1^∗( p_1)
>B_f^∗( p_1) . On the other hand, we have b_f(
z) ≡ b_f_1( z) for all z∈(
∂Δ) \{p_1}. Thus we have
B_f_1^∗( ∂Δ) >B_f^∗(
∂Δ) .
This completes the proof of Case (3).
Case (4) cannot occur. In this case, b_1=b, { p_l} _l=1^v⊂ f^-1(E_q) and A⊂Δ. The discussion is similar to that of Case (3) with b_1=b, and we can deduce a
contradiction. Then, as in Figure <ref>, we can cut and split Δ
along A to obtain an annulus Δ_1=Δ\D with
∂Δ_1=∂Δ-∂ D, where ∂ D=β
_1^'-β_1^''+β_2^'-β_2
^''+…+β_v^'-β_v^''. Repeating
the same strategies in Case (3), we can obtain a new surface Σ
_1=( f_1,Δ) so that f_1 and f
coincide on a neighborhood of ∂Δ in Δ, which
implies that Σ_1∈𝒞^∗( L,m) . In Figure
<ref> (4), B=p_1p_0^1∪p_1p_0^2
∪p_1p_0^3∪…∪p_1p_0^v contains
only one point p_1 of f_1^-1(E_q), and thus
#[f^-1(E_q)∩Δ] =#[ f^-1(E_q)∩Δ\{p_l}_l=1^v] +v
=#[ f_1^-1(E_q)∩Δ\ B] +#[f_1
^-1(E_q)∩ B]+v-1
=#[f_1^-1(E_q)∩Δ]+v-1,
which implies
n( Σ_1) =n( Σ)
-v+1≤n( Σ) -1.
From the above arguments, we derive H(Σ_1)≥ H( Σ) +4π/L(∂Σ), which again leads to a contradiction.
This completes the proof.
For a branch point a of f, we call ( a,f(a)) a branch pair
of f. In Case (3) of previous proof, f_1 can be understood as a movement
of the branch pair ( p_0,f(p_0)) of f to the branch pair
( p_1,f_1( p_1) ) of f_1 along the
curve β_1( p_0,p_1). Then ( p_0
,f(p_0)) is split into v regular pairs ( p_0^l
,f_1( p_0^l) ) =( p_0^l,f(
p_0) ), l=1,…,v, and ( p_1,f(p_1)
) becomes a branch pair of f_1 at the boundary point p_1, whose order
v_f_1( p_1) =∑_l=1^vv_f(
p_l). Meanwhile, all other branch pairs ( x,f(x))
remain unchanged, saying that there exists a homeomorphism h from
Δ\ A onto Δ\ B such that
( f,Δ\ A) is equivalent to (
f_1∘ h,Δ\ A) .
Let Σ=( f,Δ)
∈𝒞^∗( L,m), and assume that (<ref>)
holds. Then there exists a surface Σ_1=( f_1,Δ)
∈𝒞^∗( L,m) satisfying (<ref>) such that
C_f_1^∗( Δ) =∅,
H(Σ_1)≥ H( Σ) ,L(∂Σ_1)≤ L(
∂Σ) ,
and (i) or (ii) holds:
(i) C_f^∗( Δ) ≠∅and L(∂Σ_1)<L( ∂Σ).
(ii) H(Σ_1)=H( Σ) ,L(∂Σ_1)=L(
∂Σ) ,∂Σ_1=∂Σ; and moreover
B_f_1^∗( ∂Δ) >B_f^∗(
∂Δ)if and only if C_f^∗( Δ)
≠∅.
When C_f^∗( Δ) =∅, then Σ_1=Σ
is the desired surface and (ii) holds. So we assume C_f^∗(
Δ) ≠∅. Then by Lemma <ref>, there exists a
surface Σ_1^'=( f_1^',Δ)
∈𝒞^∗( L,m) such that
H(Σ_1^')≥ H( Σ) ,L(∂Σ
_1^')≤ L(∂Σ),
and
B_f_1^'^∗( Δ) ≤ B_f^∗(
Δ) -1.
Moreover, L(∂Σ_1^')=L(∂Σ) if and only if
∂Σ_1^'=∂Σ,H(Σ_1^')=H(
Σ) and B_f_1^'^∗( ∂Δ) >B_f^∗( ∂Δ). It is clear that Σ
_1^' again satisfies the inequality (<ref>). Repeating this
procedure at most B_f_1^'^∗( Δ) times, we
can obtain the desired surface Σ_1.
Next, we will establish some lemmas to remove the branch points on the boundary.
Let Σ=( f,Δ) ∈𝒞^∗( L,m) be a surface satisfying the inequality
(<ref>) with the 𝒞^∗( L,m)-partitions
(<ref>) and (<ref>). Suppose that
(A) f has no branch points in Δ\ f^-1(E_q);
(B) For the first term α_1( a_1,a_2) of (<ref>),
α_1( a_1,a_2) \{a_2} contains a branch
point p_0 of f with p_0∉ f^-1(E_q). p_1 is a point in
α_1( p_0,a_2) such that f( p_0)
≠ f(p_1), [ α_1( p_0,p_1) \{p_1}] ∩ f^-1(E_q)=∅ and that α_1^∘( p_0,p_1) contains no branch point of f;
(C) For b_0=f( p_0) and b_1=f( p_1) ,
the subarc c_1^'=c_1( b_0,b_1) of c_1 has
v=v_f( p_0) distinct f-lifts β_1(
p_0,p_1) ,β_2( p_0,p_2) ,…,β
_v( p_0,p_v) , arranged anticlockwise around p_0, such
that β_l\{p_0,p_l}⊂Δ for l=2,…,v.
Then there exists a surface Σ_1=( f_1,Δ) ∈𝒞^∗( L,m) such that there is no
branch points of f_1 in Δ\ f_1^-1(E_q), and one of
the following alternatives (i) and (ii) holds:
(i) The partition number m≥2,
H(Σ_1)≥ H( Σ) ,L(∂Σ_1)<L(∂Σ),
and
#( ∂Δ) ∩ f_1^-1(E_q)≤#(
∂Δ) ∩ f^-1(E_q).
Moreover
#C_f_1^∗( ∂Δ) ≤#C_f^∗(
∂Δ) ,
with equality only if one of the following relations (<ref>
)–(<ref>) holds:
B_f_1^∗( ∂Δ) ≤ B_f^∗(
∂Δ) -1,
𝔏( ∂Σ_1) ≤𝔏(
∂Σ) -1,
Σ_1=( f_1,Δ) ∈𝒞^∗( L,m-1) ,
A(Σ_1)≤ A( Σ) -4π.
(ii) p_l,l=1,2,…,v, are distinct, { p_l} _l=2
^v⊂Δ, p_1∉ f^-1(E_q), ∂Σ_1
=∂Σ,H(Σ_1)=H(Σ), v_f(x)=v_f_1(x) for all
x∈( ∂Δ) \{p_0,p_1}, v_f_1
(p_0)=1 and
v_f_1( p_1) =v_f( p_1) +v-1,
and moreover, (<ref>) and (<ref>) are still 𝒞^∗(
L,m)-partitions of ∂Σ_1,
B_f_1^∗( ∂Δ) =B_f^∗(
∂Δ) ,
and
#C_f_1^∗( ∂Δ) ≤#C_f^∗(
∂Δ) ,
equality holding if and only if p_1∉ C_f^∗(
∂Δ) ∪ f^-1(E_q).
By (C) we have that β_1=α_1(p_0,p_1). Write A=∪
_l=1^vβ_l. We will imitate the arguments in the proof of Lemma
<ref>. Under the partitions (<ref>) and (<ref>), we first
consider the case that p_l_1=p_l_2 for some pair 1≤ l_1
<l_2≤ v. Then β_l_1-β_l_2 bounds a Jordan domain D
contained in Δ, and we may face the following three Cases.
Case (1). l_1=1 and l_2=2(see Figure <ref> (1)).
Case (2). 1<l_1and p_l_1∈∂Δ (see
Figure <ref> (3)).
Case (3). 1<l_1 and p_l_1∈Δ(see
Figure <ref> (5)).
We show that none of the above three cases can occur, by deducing the contradiction H_L>H_L.
When Case (1) occurs, we take h_1 to be a homeomorphism from Δ onto Δ_1=Δ\ D so that h_1 is the identity on ( ∂Δ_1) ∩∂Δ. Then put Σ_1=( f_1,Δ) with f_1=f∘ h_1 (see Figure <ref> (1) and (2)).
When Case (2) occurs, ∂ D divides Δ into three Jordan domains
Δ_1, D and Δ_2 as in Figure <ref> (3). We can glue
the surfaces ( f|_Δ_1,Δ_1) and ( f|_Δ_2,Δ_2)
together along the boundary (f,β_l_1)∼(f,β_l_2) to obtain
a new surface Σ_1=( f_1,Δ). Indeed,
we can take a continuous mapping h_2:Δ\
D→Δ so that h_2|_Δ_1:Δ_1→Δ_1^' (resp. h_2
|_Δ_2:Δ_2→Δ
_2^') is an orientation-preserving homeomorphism, f(h_2
^-1(y)) is a singleton for all y∈β, and h_2 is an identity on a
neighborhood of ( ∂Δ) \{p_0,p_l_1}
in Δ. Then we define Σ_1=( f_1
,Δ) with f_1=f∘ h_2^-1 (See Figure
<ref> (3) and (4)).
When Case (3) occurs, ( β_l_1∪β_l_2)
\{p_0}⊂Δ, and Δ_1=Δ\D is a domain as in Figure <ref> (5) when l_1=1 and
l_2=2. We can sew ( f,Δ\ D) along
(f,β_l_1)∼(f,β_l_2) to obtain a surface (
f_1,Δ) so that β_l_1-β_l_2
becomes a simple path β, the line segment from p_0 to p_l_1 as
in Figure <ref> (5) and (6). In fact we can define f_1:=f∘
h_3^-1, where h_3:Δ_1→Δ
is an OPCOFOM so that β_l_1 and β_l_2 are mapped
homeomorphically onto β, h_3(p_l_1)=p_l_1, h_3
(p_0)=p_0, h_3 is an identity on ∂Δ and on a
neighborhood of ∂Δ\{p_0} in Δ,
and h_3:Δ_1→Δ_1^' is a homeomorphism.
In the above Cases (1)–(3), it is clear that Σ_1 also has
𝒞^∗( L,m)-partitions as (<ref>) and
(<ref>), and the interior angle of (f,D) at p_l_1 is a
positive multiple of 2π. Then we have v_f_1( p_l_1)
≤ v_f( p_l_1) -1. Then ∂ D∩ f^-1
(E_q)={p_1} or ∅.
As in the proof of Claim <ref>, (f,D) can be sewn to be a
closed surface Σ_0=(f_0,S) along the equivalent paths
(f,β_l_1) and (f,β_l_2). Assume that the degree of f_0
is d_0. Then we have in any case of Cases (1), (2) and (3),
n( Σ_1,E_q) ≤n(
Σ,E_q) -n( Σ_0,E_q) +1.
On the other hand, as in the proof of Claim <ref>, by the Riemann–Hurwitz formula, we have n( Σ_0,E_q) ≥(q-2)d_0+2, with equality holding if and only if C_f_0(S)⊂ f_0^-1(E_q). Then we have
n( Σ_1,E_q) ≤n(
Σ,E_q) -n( Σ_0,E_q)
+1≤n( Σ,E_q) -(q-2)d_0-1.
Now we have A(Σ)=A(Σ_0)+A(Σ_1) and A(Σ_0)=4π
d_0. Then
R( Σ_1) =( q-2) A(Σ_1
)-4πn( Σ_1)
≥( q-2) (A(Σ)-4π d_0)-4π[ n( Σ) -(q-2)d_0-1]
=R( Σ) +4π.
On the other hand, we have L(∂Σ)=L(∂Σ_1). Then we
derive
H(Σ_1)≥(R(Σ)+4π)/L(∂Σ)=H(Σ)+4π/L(∂Σ),
which with (<ref>) implies the contradiction that H_L≥
H(Σ_1)>H_L. Hence Cases (1)–(3) cannot occur.
There are still two cases left.
Case (4). p_l,l=1,…,v, are distinct from each other
and {p_l}_l=2^v⊂Δ.
Case (5). p_l,l=1,…,v, are distinct from each other
and p_l_1∈∂Δ for some 2≤ l_1≤ v. In particular,
{p_2,…,p_l_1-1}⊂Δ when l_1>2, and it is possible
that p_l_2∈∂Δ for some l_1<l_2≤ v.
Assume Case (4) occurs. Except for a few differences, the following discussion
is similar to the Cases (3) and (4) in the proof of Lemma <ref>.
Here, we just present the arguments for v=3, as in Figure <ref>. Cut
Δ along the lifts β_2 and β_3 and split β_2 and
β_3 via an OPCOFOM h from a closed Jordan domain Δ_1^' as in Figure <ref> (2) onto Δ
such that h:Δ_1^'→Δ_1=Δ\
(β_2∪β_3) is a homeomorphism.
Then we obtain a surface Σ_1^'=( f_1^'
,Δ_1^') such that
( f_1^',β_1^') ∼( f_1^',β_2^') ∼( f_1^',β_2^'') ∼( f_1^',β_3^') ∼( f_1^',β_3^'') .
It is clear that we can recover the surface Σ when we identify
β_2^' with β_2^'' and β_3^'
with β_3^''. However, by Lemma <ref> (ii), we can
also identify β_1^' with β_2^', and β
_2^'' with β_3^', by deformations in Figure
<ref> (2)-(4), resulting a new surface Σ_1=(
f_1,Δ) . On the other hand, since β_l^∘,l=1,…,v, contains no point of f^-1(E_q) and C_f^∗(
Δ) =∅, f is homeomorphic in neighborhoods of β
_j^∘,j=2,…,v. Thus we can conclude the following.
∂Σ_1∼∂Σ. There exists a
neighborhood N_1 of ( ∂Δ) \{p_0,p_1} in Δ and a neighborhood N_1^' of
( ∂Δ) \{p_0^3,p_1^'} in
Δ such that ( f,N_1) ∼( f_1
,N_1^') . In fact as in Figure <ref> (2) and (3),
β_1^'∘,β_2^''∘,β_3^''∘ have neighborhoods in Δ_1^' so that
the restrictions of f_1^' to them, respectively, are equivalent to
the restriction of f to a neighborhood of β_1 in Δ. Thus we may replace p_1^' and p_0^3 by p_1 and
p_0, and make ∂Σ_1=∂Σ via a homeomorphism of
Δ. Then partitions (<ref>) and (<ref>) are both
𝒞^∗( L,m)-partitions of ∂Σ_1if and only if p_1∉ f^-1(E_q)∩α_1^∘, and in
general Σ_1∈𝒞^∗( L,m+1) ⊂ℱ( L) .
It is clear that A(Σ_1)=A(Σ)and L(∂Σ
_1)=L(∂Σ). We can also see that {p_0^l}_l=1^v
become regular points of f_1 and
v_f_1(p_1)=v_f( p_1) +v_f( p_2)
+…+v_f( p_v) .
It implies that
n( Σ_1) = n( Σ) if p_1∉ f^-1(E_q), and n( Σ_1) =n( Σ) -v+1 if p_1∈ f^-1(E_q).
Thus in the case p_1∈ f^-1(E_q), we have
R(Σ_1)=R(Σ)+( v-1) 4π≥ R(Σ)+4π,
which with (<ref>) implies a contradiction that H_L≥ H(Σ
_1)≥ H(Σ)+4π/L(∂Σ_1)>H_L. So we have to
assume p_1∉ f^-1(E_q), which implies Σ_1∈𝒞^∗( L,m) and H(Σ_1)=H(Σ), and
moreover
{p_l}_l=1^v∩ f^-1(E_q)=∅.
Then each p_l∉ C_f( Δ) and v_f_1
(p_1)=v_f( p_1) +v-1, say
b_f_1(p_1)-b_f( p_1) =v-1.
On the other hand we have v_f_1(p_0)=1 and v_f(p_0)=v, which
implies
b_f_1( p_0) -b_f(p_0)=-v+1,
and thus by (<ref>) we have
B_f_1^∗( { p_0,p_1}) =B_f^∗( { p_0,p_1}) .
It is clear that, by Summary <ref>, B_f_1^∗( (
∂Δ) \{p_0,p_1}) =B_f^∗(
( ∂Δ) \{p_0,p_1}) . Then we
have by (<ref>)
B_f_1^∗( ∂Δ) =B_f^∗(
∂Δ) .
On the other hand, we have #C_f_1^∗( { p_0
,p_1}) =#C_f_1^∗( p_1) =1 and
#C_f^∗( { p_0,p_1}) =1 if and only
if p_1 is not a branch point (note that we are in the environment of
p_1∉ f^-1(E_q), which implies p_1∉ f_1^-1(E_q)).
Thus we have #C_f_1^∗( ∂Δ) ≤#C_f^∗( ∂Δ) equality holding if and only if
p_1∉ C_f^∗( ∂Δ) ∪ f^-1(E_q).
Hence, all conclusions in (ii) hold in Case (4).
Assume Case (5) occurs. If m=1, then ∂Σ=c_1( q_1,q_1) is a simple circle and thus f^-1(b_1)∩∂Δ={p_1}; this is impossible, since in Case (5) we have {p_1,p_l_1}⊂ f^-1(b_1)∩∂Δ and p_1≠ p_l_1. So we have m≥2.
It is clear that f restricted to a neighborhood of β_l_1^∘
is homeomorphic and β_l_1 divides Δ into two Jordan domains
Δ_1 and Δ_2. Denote by Δ_1 the domain on the right
hand side of β_l_1. Let γ_1 be the arc of ∂Δ
from p_1 to p_l_1 and γ_2 be the complement arc of
γ_1 in ∂Δ, both oriented anticlockwise. Recall that
β_1,β_2,…,β_v are arranged anticlockwise around
p_0. Then we have ∪_l=2^l_1-1β_l\{p_0
}⊂Δ_1, while {β_l} _l=l_1+1^v is
contained in Δ_2. Based on (<ref>) and (<ref>), we
also have the partitions
γ_1=α_1( p_1,a_2) +α_2+…
+α_k-1+α_k( a_k,p_l_1) ,
and
γ_2=α_k( p_l_1,a_k+1) +α_k+1
+…+α_m+α_1( a_1,p_1) ,
where
p_l_1∈α_k( a_k,a_k+1) \{a_k+1}.
We can see that
v_f|_Δ_1( p_0) =l_1-1,
v_f|_Δ_2( p_0) =v-l_1+1.
Considering p_0∉ f^-1(E_q), we have
B_f|_Δ_1^∗( p_0) =l_1-2,
B_f|_Δ_2^∗( p_0) =v-l_1,
and
B_f^∗( p_0) -B_f|_Δ_1^∗( p_0) -B_f|_Δ_2^∗(
p_0) =1.
Now we shall consider Δ_1 and Δ_2 separately.
Firstly, let h_2 be a homeomorphism from Δ_2 onto
Δ such that h_2|_γ_2∖β_1=id and
h_2|_β_l_1=β_1^∘+γ_1. Recall that β
_1=α_1( p_0,p_1). Then we can construct a new
surface as
Σ_2^'=( f_2^',Δ) =(
f∘ h_2^-1,Δ) ,
with
L(f_2^',∂Δ) =L(f,(∂Δ_2)\β_l_1)+L(f,β_l_1)
=L(f,(∂Δ_2)\β_l_1)+L(f,β_1)
=L(γ_2)<L.
Since p_1≠ p_l_1, f(p_1)=f(p_l_1)=b_1 and f is
injective on each α_k( a_k,a_k+1) \{a_k+1}, we conclude that either of the two partitions (<ref>) and
(<ref>) contains at least two terms. Since the sum of terms of (<ref>)
and (<ref>) is at most m+2, we conclude that either of (<ref>) and
(<ref>) contains at most m terms. Thus we have Σ_2^'
∈𝒞^∗( L,m) . Hence, summarizing the above
discussion, we have
C_f_2^'^∗( Δ) =∅,
Σ_2^'∈𝒞^∗( L,m), and moreover,
by definition of f_2^',
#C_f_2^'^∗( ∂Δ) =#C_f|_Δ_2^∗( ∂Δ_2) =#C_f|_Δ_2^∗( γ_2) ≤#C_f^∗(
∂Δ) ,
#( ∂Δ) ∩ f_2^'-1( E_q)
=#γ_2∩ f^-1( E_q) ≤#( ∂Δ) ∩ f^-1( E_q) ,
B_f_2^'^∗( ∂Δ) =B_f|_Δ_2^∗( ∂Δ_2) ≤ B_f^∗( ∂Δ) -1.
Next, we construct a new surface Σ_1^'=( f_1^',Δ) as follows. Denote by Δ_1^1=Δ
_1\∪_l=2^l_1-1β_l, which is a simply connected
domain. Cutting Δ_1^1 along the paths ∪_l=1^l_1-1
β_j, we can obtain a Jordan domain Δ_1^2 as in Figure
<ref> (2) where l_1=3. Indeed, there exists an OPCOFOM
h_1:Δ_1^2→Δ_1^1 such
that the restrictions
h_1:Δ_1^2→Δ_1^1, h_1:β_l^'→β_l, h_1:β_l^''→β_l
are homeomorphisms for l=2,…,l_1-1. Then the surface F_1:=(
g_1,Δ_1^2) =( f∘ h_1,Δ_1^2) is simply connected and we can recover the surface
( f|_Δ_1,Δ_1) when we
glue F_1 along the pairs ( g_1,β_l^') and
( g_1,β_l^'') for l=2,…,l_1-1.
Since
( g_1,β_1^') ∼( g_1,β_2^') ,( g_1,β_2^'') ∼(
g_1,β_3^') ,⋯,( g_1,β_l_1-1
^'') ∼( g_1,β_l_1^') ,
we can also glue F_1 along the above equivalent pairs and obtain a new
surface Σ_1^'=( f_1^',Δ)
, as the deformations described in Figure <ref> (2)–(4). In this way,
p_1,…,p_l_1 are glued into a single point p_1^'
∈∂Δ. It is clear that we have
v_f_1^'( p_1^') ≤ v_f(
p_1) +v_f( p_2) +⋯+v_f( p_l_1
-1) +v_f|_Δ_1( p_l_1) .
When b_1∉ E_q, by condition (A) of Lemma <ref> we have
v_f( p_2) =⋯=v_f( p_l_1-1) =1.
Thus
v_f_1^'( p_1^') ≤ v_f(
p_1) +v_f|_Δ_1( p_l_1)
+l_1-2.
As in Figure <ref> (2) or (3), p_0^1,…,p_0^l_1-1 are
regular points of g_1, and g_1 is homeomorphic on some neighborhoods
of (β_j^')^∘ and (β_j^'')^∘ in
Δ_1^2 for j=1,…,l_1-1. Thus f_1^' is
homeomorphic on some neighborhood of β_j^'\{p_1^'} for j=1,…,l_1-1. Therefore by (<ref>) we have that
C_f_1^'^∗( Δ) =∅,
( f_1^',∂Δ) ∼( f,γ
_1) , (<ref>) is an ℱ(L,k)-partition of
∂Σ_1^' and moreover
B_f_1^'^∗( p_1^') =0, if
p_1∈ f^-1(E_q);
B_f_1^'^∗( p_1^') =v_f_1^'( p_1^') -1≤ B_f^∗( p_1)
+B_f|_Δ_1^∗( p_l_1) +l_1
-1 if p_1∉ f^-1(E_q).
Now we will apply Claims <ref> and <ref> to verify the conclusion (i).
There is no doubt that A(Σ_1^')+A(Σ_2^'
)=A(Σ) and L(Σ_1^')+L(Σ_2^')=L(
Σ) . We can deduce from the previous constructions that
n( Σ) =n( Σ_1^') +n( Σ_2^') +(
l_1-2) χ_E_q( f( p_1) ) ,
where χ_E_q( f( p_1) ) =1 if p_1∈
f^-1(E_q) and χ_E_q( f( p_1) ) =0
otherwise. Then we have
R(Σ_1^')+R( Σ_2^') =R(Σ
)+4π( l_1-2) χ_E_q( f( p_1)
) .
Take Σ_1=Σ_1^' or Σ_2^' such that
H(Σ_1)=max{ H( Σ_1^') ,H(
Σ_2^') } . Then we have
H(Σ_1)≥ H(Σ)+4π( l_1-2) χ_E_q
( f( p_1) ) /L( ∂Σ) .
By the restriction of inequality (<ref>), however, we can obtain the
contradiction H( Σ_1) >H_L when l_1>2 and
p_1∈ f^-1(E_q). Then in the sequel we assume that
l_1=2 or f( p_1) ∉ E_q.
If Σ_1=Σ_2^', then by Claim <ref>, Σ_1
satisfies (i). Thus in the sequel, we assume that
Σ_1=( f_1,Δ) =Σ_1^'=(
f_1^',Δ) ,
say, f_1=f_1^'. Then by condition f( p_1) ∉
E_q it is trivial that
#( ∂Δ) ∩ f_1^-1( E_q)
=#γ_1∩ f^-1( E_q) ≤#( ∂Δ) ∩ f^-1( E_q) ,
and
#C_f_1^∗( ∂Δ) =#C_f_1^∗(
( ∂Δ) \{p_1^'})
+#C_f_1^∗( p_1^') =#C_f^∗(
γ_1^∘) +#C_f_1^∗( p_1^') .
Thus, by the relations γ_2=( ∂Δ)
\γ_1^∘ and γ_2⊃{p_0,p_1,p_l_1
}, we have
#C_f^∗( ∂Δ) -#C_f_1^∗(
∂Δ) =#C_f^∗( ∂Δ)
-#C_f^∗( γ_1^∘) -#C_f_1^∗(
p_1^')
=#C_f^∗( γ_2) -#C_f_1^∗(
p_1^')
≥#C_f^∗( p_0) +#C_f^∗(
p_1) +#C_f^∗( p_l_1) -#C_f_1^∗( p_1^')
=1+#C_f^∗( p_1) +#C_f^∗( p_l_1
) -#C_f_1^∗( p_1^')
≥1+0+0-#C_f_1^∗( p_1^') ≥0.
Therefore, (<ref>) holds, equality holding only if
#C_f^∗( p_1) =#C_f^∗( p_l_1)
=0,
and
#C_f^∗( γ_2) =#C_f^∗( p_0)
=#C_f_1^∗( p_1^') =1,
which implies
f_1(p_1^')=f( { p_l} _l=1^v)
=b_1∉ E_q.
Assume that the equality in (<ref>) holds. Then (<ref>
)–(<ref>) hold and imply
B_f^∗( p_1) =B_f^∗( p_l_1)
=0 but B_f_1^∗( p_1^') ≥1.
By (<ref>), (<ref>) and (<ref>), considering that ∂Δ=γ_1+γ_2 we have
B_f^∗( ∂Δ) =B_f^∗( γ
_1^∘) +B_f^∗( γ_2) =B_f^∗( γ_1^∘) +B_f^∗( p_0) ,
B_f_1^∗( ∂Δ) =B_f^∗( γ
_1^∘) +B_f_1^∗( p_1^') ,
and
B_f_2^'^∗( ∂Δ) =B_f|_Δ_2^∗( p_0) ;
and then
B_f^∗( ∂Δ) -B_f_1^∗(
∂Δ) =B_f^∗( p_0) -B_f_1^∗( p_1^') .
Thus, by Claim <ref>, (<ref>), and the assumption v_f^∗( p_0) =v, we have
B_f^∗( ∂Δ) -B_f_1^∗(
∂Δ)
≥ B_f^∗( p_0) -B_f^∗( p_1)
-B_f|_Δ_1^∗( p_l_1) -l_1+1
=v-1-0-0-l_1+1=v-l_1,
and then by (<ref>) and the assumption v_f^∗( p_0)
=v we have
B_f_1^∗( ∂Δ) ≤ B_f^∗(
∂Δ) -v+l_1≤ B_f^∗( ∂Δ)
,
with equality only if l_1=v.
Now we assume the equality in (<ref>) holds while (<ref>) does
not hold, which implies l_1=v by (<ref>).
Since f is injective on
α_1( a_1,a_2) \{a_1},p_1∈α
_1( a_1,a_2) \{a_1}, p_1≠ p_l_1 in
Case (5) and f(p_1)=f( p_l_1) , we have p_l_1
∉α_1( a_1,a_2) \{a_1}, which
implies a_2∈γ_1 and p_l_1∉β_1. Thus
a_1∉γ_1^∘ and a_1∈γ_2=( ∂Δ) \γ_1^∘, say, #[ γ_2
∩{a_j}_j=1^m] ≥1, with equality if and only if
γ_2∩{a_j}_j=1^m={ a_1} .
(a) If γ_2 contains two points of {
a_j} _j=1^m, then Σ_1=Σ_1^'∈𝒞^∗( L,m-1) . In fact in this case, (<ref>)
contains at most k≤ m-1 terms, and thus by Claim <ref> (<ref>)
Σ_1∈ℱ(L,k)⊂ℱ(L,m-1).
(b) If γ_2 contains only one point of { a_j}
_j=1^m, say, γ_2∩{a_j}_j=1^m={ a_1}
, then p_l_1∈α_m( a_m,a_1) \{a_m} and then either ( f,γ_2) is a simple closed
arc of ∂Σ_1, or it is a folded arc, say,
( f,γ_2) =c_m( f(p_l_1),f(a_1))
+c_1( f(a_1),f( p_l_1) ) =c_m(
f(p_l_1),f(a_1)) -c_m( f( p_l_1)
,f(a_1)) .
Hence either
𝔏( ∂Σ_1) ≤𝔏(
∂Σ) -1
by Lemma <ref>; or ( f,Δ_2) =S by Lemma
<ref> (i), and thus
A(Σ_1)≤ A(Σ)-4π.
Summarizing (<ref>) and Discussion <ref>, we can derive that the
equality in (<ref>) holds only if at least one of (<ref>
)-(<ref>) holds. Then (i) holds in Case (5), and we have finished the proof.
When (ii) holds, say, in Case (4), f_1 plays the role that
moves the branch property of p_0 to p_1, so that H(
Σ) ,R(Σ),∂Σ,n( Σ)
and the branch property of all other points, say, points in (
∂Δ) \{p_0,p_1}, remain unchanged, while
p_0 becomes a regular point and p_1 becomes a branch point with
v_f_1( p_1) =v_f( p_1) +v_f(
p_0) -1 (note that the interior α_1( p_0
,p_1) ^∘ of α_1( p_0,p_1) contains
no branch point of f and contains no point of f^-1(E_q)). Such
movement fails in Case (5), and in this case, (i) holds.
Let Σ=( f,Δ) be a
surface in 𝒞^∗(L,m) with the 𝒞^∗(
L,m)-partitions (<ref>) and (<ref>). Assume that condition (A)
of Lemma <ref> holds, say C_f^∗( Δ)
=∅, and assume (<ref>) holds. Write
ℰ_f:=C_f^∗( ∂Δ) ∪(
∂Δ∩ f^-1(E_q)) ={ p_0^'
,p_1^',…,p_s-1^'} ,
and assume p_0^'∈ C_f^∗( ∂Δ) ,
s≥2 and p_0^',…,p_s-1^' are arranged on
∂Δ anticlockwise. Then there exists a surface Σ
_1=( f_1,Δ) ∈𝒞^∗(
L,m) such that C_f_1^∗( Δ) =∅
and one of the followings holds.
(a) The conclusion (i) of Lemma <ref> holds. Thus L(∂Σ_1)<L( ∂Σ) , #ℰ_f_1
≤#ℰ_f and either #C_f_1^∗( ∂Δ) ≤#C_f^∗( ∂Δ) -1, or
#C_f_1^∗( ∂Δ) =#C_f^∗(
∂Δ)and one of (<ref>)–(<ref>) holds.
(b) p_1^'∈ C_f^∗( ∂Δ) , H(
Σ_1) =H(Σ), ∂Σ_1=∂Σ,
#ℰ_f_1={ p_1^',…,p_s-1^'}
=#ℰ_f-1, and B_f_1^∗( ∂Δ)
=B_f^∗( ∂Δ) .
Let p_0^'∈ C_f^∗( ∂Δ) and
p_0^',p_1^',…,p_s-1^',s≥2, be all points
of ℰ_f arranged anticlockwise on ∂Δ. Then
ℰ_f gives a partition of ∂Δ as
∂Δ=β_1^'( p_0^',p_1^')
+β_2^'( p_1^',p_2^') +…
+β_s^'( p_s-1^',p_0^') .
Without loss of generality, we assume that p_0^'∈α_1(
a_1,a_2) \{a_2} is the first point of C_f^∗( ∂Δ) in α_1( a_1,a_2) ,
say,
C_f^∗( ∂Δ) ∩α_1( a_1
,p_0^') ={p_0^'}.
Firstly, we consider the simple case that
β_1^'( p_0^',p_1^') ⊂α_1( p_0^',a_2) .
We may further assume f( p_0^') ≠ f(p_1^'). Otherwise, we must have that p_0^'=a_1,p_1^'=a_2
and that c_1=c_1( q_1,q_2) =( f,β_1^'( p_0^',p_1^') ) =( f,α
_1) is a circle with α_1∩ f^-1(E_q)=∅, and
then we can discuss based on the following argument.
Consider a proper subarc β_1=β_1( p_0
^',p_01^')of β_1^'=β_1^'( p_0^',p_1^'), with f(p_01^')≠
f(p_0^') and p_01^'∉ f^-1(E_q), so that
f( β_1) has other v-1 f-lifts β_2(
p_0^',p_02^') ,…,β_v( p_0^',p_0v^') so that p_0^',{ p_0l^'} _l=1^v and {β_l}_l=1^v satisfy all conditions
of Lemma <ref> and the condition of Case 4 in the proof of Lemma
<ref>. Then by Lemma <ref> there exists a surface
Σ_1^'=( f_1^',Δ)
∈𝒞^∗( L,m) so that C_f_1^∗(
Δ) =∅and Lemma <ref> (ii) holds, say,
∂Σ_1^'=∂Σ,H(Σ_1^')=H(Σ),
p_01^'∈ C_f_1^'^∗,ℰ_f_1^'
={ p_01^',p_1^',…,p_s-1^'} and B_f_1^'^∗( ∂Δ) =B_f^∗( ∂Δ), and moreover, (<ref>) and (<ref>) are
still 𝒞^∗( L,m)-partitions of ∂Σ_1^'. Then we can replace Σ with Σ_1^'
to continue our proof under (<ref>).
Now we may assume f( p_0^') ≠ f(p_1^') and
forget Argument <ref>. Let β_1=β_01^'(
p_0^',p_01^') be the longest subarc of β
_1^'( p_0^',p_1^') such that (B) and
(C) of Lemma <ref> are satisfied by β_1. There is nothing to
show when conclusion (i) of Lemma <ref> holds.
Assume conclusion (ii) of Lemma <ref> holds for β_1. Then
only Case (4) occurs and p_01^'∉ f^-1(E_q). If
p_01^'≠ p_1^', then we can extend β_1 longer so
that it still is a subarc of β_1^'( p_0^'
,p_1^') ⊂α_1( p_0^',a_2) and satisfies (B) and (C), which contradicts definition of β_1.
Then, p_01^'=p_1^'∈ C_f^∗( ∂Δ) , and by Lemma <ref> (ii) we have
(c) There exists a surface Σ_1=( f_1,Δ)
∈𝒞^∗( L,m) such that C_f_1^∗(
Δ) =∅ and (b) holds, and moreover (<ref>) and
(<ref>) are still 𝒞^∗( L,m)-partitions of
∂Σ_1.
The corollary is proved under the condition (<ref>). When
p_1^'∉ C_f^∗( ∂Δ) , we have
p_1^'∈ f^-1(E_q), and then only (a) holds.
Next, we show what will happen if (<ref>) fails. Then a_2∈β
_1^'∘ and so a_2∉ℰ_f. Assume that
p_1^'∈α_j_0( a_j_0,a_j_0+1)
\{a_j_0},
for some j_0>1. Then we can find a point p_1∈α_1(
p_0^',a_2) so that β_1=α_1(
p_0^',p_1) is a maximal subarc of α_1(
p_0,a_2) satisfying conditions (B) and (C) of Lemma
<ref>. Then p_1∉ℰ_f and according to the above
proof only Case 4 or Case 5 occurs. If Case 5 occurs, then the proof for Case
(5) deduces the conclusion (i) of Lemma <ref>, and so does (a). If
Case 4 occurs, then by the condition C_f^∗( Δ)
=∅, the maximal property of β_1 and Lemma <ref>, we have
p_1=a_2. Then the proof of Case 4 again deduces that (ii) in Lemma
<ref> holds, and we obtain a surface Σ_1=(
f_1,Δ) ∈𝒞^∗( L,m)
such that H(Σ_1)=H(Σ), ∂Σ_1=∂Σ,
C_f_1^∗( Δ) =∅, p_1∉ f_1
^-1(E_q), B_f_1^∗( ∂Δ) =B_f^∗( ∂Δ) and
ℰ_f_1={p_1,p_1^',…,p_s-1^'
}={a_2,p_1^',…,p_s-1^'}.
Thus using Lemma <ref> repeatedly, we can either prove (a) holds, or
obtain a surface Σ_j_0=( f_j_0,Δ)
such that H(Σ_j_0)=H(Σ), ∂Σ_j_0
=∂Σ, C_f_j_0^∗( Δ) =∅,
a_j_0∉ f_j_0^-1(E_q), B_f_j_0^∗(
∂Δ) =B_f^∗( ∂Δ) and
ℰ_f_j_0={a_j_0,p_1^',…,p_s-1^'} and B_f_j_0^∗( ∂Δ)
=B_f^∗( ∂Δ) .
Note that a_j_0 and p_1^' are both contained in the same arc
α_j_0. Then we can go back to condition (<ref>) to show that
either (a) or (b) holds, and moreover, by Remark <ref>, (b) holds only
if p_1^'∈ C_f_j_0^∗( ∂Δ) ,
which implies p_1^'∉f_j_0^-1( E_q) .
Let Σ_0=( f_0,Δ) be a surface in 𝒞^∗(L,m) with the 𝒞^∗( L,m)-partitions (<ref>) and (<ref>). Assume that
condition (A) of Lemma <ref> holds, say C_f_0^∗(
Δ) =∅, and that (<ref>) holds. Then there exists a
surface Σ_1=( f_1,Δ) ∈𝒞
^∗( L,m) , such that C_f_1^∗(
Δ) =∅,
H( Σ_1) ≥ H(Σ_0),L(∂Σ_1)≤
L(∂Σ_0),
that H( Σ_1) >H(Σ_0)implies L(∂Σ_1)<L(∂Σ_0). Moreover one of the following conclusions
(I)–(II) holds.
(I) Σ_1∈ℱ_r( L,m) and, in this case,
L(∂Σ_1)<L(∂Σ_0) if and only if C_f_0^∗( ∂Δ) ≠∅.
(II) Σ_1∈𝒞^∗( L,m) , and both
ℰ_f_1 and C_f_1^∗( ∂Δ)
are the same singleton, say, ℰ_f_1=C_f_1^∗(
∂Δ) is a singleton outside f_1^-1(E_q). Moreover,
if #C_f_0^∗( ∂Δ) ≠∅ and
f_0^-1(E_q)∩∂Δ≠∅, then L(∂Σ
_1)<L(∂Σ_0) and either #C_f_1^∗(
∂Δ) ≤#C_f_0^∗( ∂Δ)
-1 holds, or #C_f_1^∗( ∂Δ) =#C_f_0
^∗( ∂Δ) and one of (<ref>
)–(<ref>) hold with f=f_0.
We will prove this by induction on #C_f_0^∗( ∂Δ) . If C_f_0^∗( ∂Δ)
=∅, then (I) holds for Σ_1=Σ_0.
If C_f_0^∗( ∂Δ) is a singleton and
ℰ_f_0=C_f_0^∗( ∂Δ) , then
f_0^-1(E_q)∩∂Δ=∅ and so (II) holds with
Σ_1=Σ_0.
Now, assume that C_f_0^∗( ∂Δ) ≠∅ and #ℰ_f_0≥2. Then we can write
ℰ_f_0={ p_0^0,p_1^0,…,p_s_0-1
^0}
so that p_j^0,j=0,1,…,s_0-1, are arranged anticlockwise on
∂Δand p_0^0∈ C_f_0^∗( ∂Δ) . Then by Corollary <ref>, there exists a
surface Σ_1∈𝒞^∗(L,m) such that the following
conclusion (a)-(n-1,n) or (b)-(n-1,n) holds for n=1.
(a)-(n-1,n): C_f_n^∗( Δ) =∅ and the
conclusion (i) of Lemma <ref> holds, and thus H(Σ_1)≥
H( Σ_0) ,L(∂Σ_1)<L(∂Σ_0),
#ℰ_f_1≤#ℰ_f_0 and either #C_f_1
^∗( ∂Δ) ≤#C_f_0^∗(
∂Δ) -1 holds or #C_f_1^∗( ∂Δ) =#C_f_0^∗( ∂Δ) and one of
(<ref>)–(<ref>) holds with f=f_0.
(b)-(n-1,n): C_f_n^∗( Δ) =∅,
ℰ_f_n={ p_1^n-1,…,p_s_n-1-1^n-1}
with p_1^n-1∈ C_f_n-1^∗( ∂Δ) ,
H( Σ_n) =H(Σ_n-1), ∂Σ_n
=∂Σ_n-1, and B_f_n^∗( ∂Δ)
=B_f_n-1^∗( ∂Δ) .
If Σ_1∈ℱ_r(L,m), then (b)-(0,1) does not hold, and
then (a)-(0,1) holds, which implies (I). Note that when Σ_1
∈ℱ_r(L,m) and H( Σ_1) >H(Σ_0)
hold, we must have C_f_0^∗( ∂Δ)
≠∅.
Assume Σ_1 is not in ℱ_r(L,m) and ℰ_f_1
=C_f_1^∗( ∂Δ) is a singleton. Then
(b)-( 0,1) holds and so C_f_1^∗(
∂Δ) =ℰ_f_1={ p_1^0,…
,p_s_0-1^0} ={ p_1^0}∈ C_f_n-1^∗( ∂Δ) . In this case, we must have f_0
^-1(E_q)∩∂Δ=∅. Thus (II) holds.
Now, assume that Σ_1 is not in ℱ_r(L,m) but
ℰ_f_1 contains at least two points. Then C_f_1^∗( ∂Δ) ≠∅, and we can iterate the above
discussion to obtain surfaces Σ_j=( f_j,Δ) ,j=1,2,…,n_0, so that Σ_n_0 no longer can be
iterated. Then for each Σ_n, n=1,…,n_0, (a)-(
n-1,n) or (b)-( n-1,n) holds, and thus one of the
following holds.
(c) C_f_n_0^∗( ∂Δ) is empty and
(a)-( n_0-1,n_0) holds.
(d) C_f_n_0^∗( ∂Δ) is a singleton and
(a)-( n_0-1,n_0) holds.
(e) C_f_n_0^∗( ∂Δ) is a singleton and
(b)-( n_0-1,n_0) holds.
We show that Σ_n_0 is a desired surface. First of all, we have
#C_f_n_0^∗( Δ) =∅.
If (c) holds, then Σ_n_0∈ℱ_r(L,m) and we obtain (I).
Assume (d) holds. Then we have L(∂Σ_n_0)<L(∂Σ_n_0-1)≤ L(∂Σ_0) and C_f_0^∗(
∂Δ) ≠∅. Then (II) holds in this case, no matter
f_0^-1(E_q)∩∂Δ is empty or not.
Assume (e) holds. If all conditions (b)-( n-1,n) hold for
n=1,…,n_0, then s_0=n_0+1, ∂Σ_n_0
=∂Σ_n_0-1=…=∂Σ_0 and ℰ_f_n
=C_f_1^∗( ∂Δ) ={p_n^0,p_n+1
^0,…,p_n_0^0} for n=0,1,2,…,n_0, say, f_0^-1
(E_q)∩∂Δ is empty. Thus, when f_0^-1(E_q)∩∂Δ≠∅, (a)-( n_0^'-1,n_0^') has to be satisfied for some n_0^'<n_0. Thus all
conclusions in (II) hold.
§ PROOF OF THE MAIN THEOREM
Now, we can complete the proof of the main theorem, Theorem <ref>.
Let Σ=( f,Δ) ∈ℱ. For
any two points a and b in Δ, define their d_f-distance d_f( a,b) by
d_f(a,b)=inf{ L(f,I):I is a path in Δ from a to b} .
For any two sets A and B in Δ, define their d_f-distance by
d_f( A,B) =inf{ d_f( a,b) :a∈ A,b∈
B} .
Let ℒ={L^'>0:H_L is continuous at
L}, L∈ℒ and let L_0∈(0,L]. Then there exists a positive
number δ_L_0 such that
d_f(Δ∩ f^-1(E_q),∂Δ)>δ_L_0
holds for all surfaces Σ=( f,Δ) in
ℱ(L) with L(∂Σ)≥ L_0 and with (<ref>).
This is proved in <cit.>. In fact, if this fails, then for any
ε>0, there exists a surface Σ∈ℱ(L) with
L(∂Σ)≥ L_0 such that d_f(Δ∩ f^-1
(E_q),∂Δ)<ε/3 and Δ∩ f^-1(E_q
)≠∅. Then one can cut Σ from a boundary point on
∂Δ to a point in f^-1(E_q)∩Δ, along a path
I_ε⊂Δ so that ( f,I_ε) is polygonal and that 2L(f,I_ε)<ε,
obtaining a surface Σ_ε∈ℱ( L+ε) with L(∂Σ_ε)=L(∂Σ
)+2L(f,I_ε), A(Σ_ε)=A(Σ) and
n( Σ_ε) ≤n(
Σ) -1. Then we have R( Σ_ε) ≥
R(Σ)+4π and thus
H(Σ_ε)=R( Σ_ε)
/L(∂Σ_ε)≥R(Σ)+4π/L(∂Σ)+ε=R(Σ)/L(∂Σ)+4π/L(∂Σ
)/1+ε/L(∂Σ).
This and (<ref>) deduce that H_L+ε≥ H(Σ
_ε)>H_L+π/2L(∂Σ) when ε is
small enough. But this contradicts the assumption L∈ℒ, which
implies that H_L+ε→ H_L as ε→0.
Let Σ=( f,Δ) ∈𝒞^∗(L,m) be a covering surface such that
(<ref>) holds. If C_f^∗=C_f^∗( Δ) =∅, then Σ^'=Σ itself is the desired
surface in Theorem <ref>.
If C_f^∗( Δ) =∅, but C_f^∗(
∂Δ) ≠∅, then by Corollary <ref>
, either the conclusion of Theorem <ref> holds with L(∂Σ
_1)<L(∂Σ_0), or
(III) there exists a surface Σ_1=( f_1,Δ) ∈𝒞^∗(L,m) such that C_f_1^∗(
Δ) =∅, H( Σ_1) ≥ H(
Σ), L( ∂Σ_1) ≤ L(∂Σ), H( Σ_1) >H( Σ) only if
L( ∂Σ_1) <L(∂Σ); and both
ℰ_f_1 and C_f_1^∗( ∂Δ)
are the same singleton. Moreover, if #C_f^∗( ∂Δ) ≠∅ and f^-1(E_q)∩∂Δ≠∅, then L(∂Σ_1)<L(∂Σ_0) and either
#C_f_1^∗( ∂Δ) ≤#C_f^∗(
∂Δ) -1 holds, or #C_f_1^∗( ∂Δ) =#C_f^∗( ∂Δ) and one of
(<ref>)–(<ref>) holds.
Assume C_f^∗( Δ) ≠∅. Then by Corollary
<ref>, we have
There exists a surface Σ_0=(f_0,Δ)∈𝒞
^∗( L,m) such that C_f_0^∗( Δ)
=∅, H( Σ_0) ≥ H( Σ) and
L( ∂Σ_0) ≤ L(∂Σ). Moreover,
L(∂Σ_0)=L(∂Σ) holds if and only if H(Σ
_0)=H(Σ), ∂Σ_0=∂Σ and B_f_0^∗( ∂Δ) >B_f^∗( ∂Δ) hold.
If C_f_0^∗( ∂Δ) =∅, then
B_f_0^∗( ∂Δ) =0, Σ_0∈ℱ_r(L,m) and by the claim, L(∂Σ_0)<L(
∂Σ), and thus Σ^'=Σ_0 satisfies the
conclusion of Theorem <ref>.
If #C_f_0^∗( ∂Δ) =1 and ℰ
_f_0=C_f_0^∗( ∂Δ) ={p_0} is a
singleton, then (III) holds with Σ_1=Σ_0 by the claim.
Now assume that #C_f_0^∗( ∂Δ) ≥1 and
#ℰ_f_0≥2. Then there exists a surface Σ_1=(
f_1,Δ) ∈𝒞^∗( L,m)
satisfying all conclusions of Corollary <ref> with (I), or
(II). When (I) holds, Σ^'=Σ_1 again satisfies Theorem
<ref>, and the proof finishes. If (II) holds, and (I) fails, (III) holds
again. So we may complete the proof based on Σ_1 under the assumption (III).
We will show that there exists a surface Σ^' satisfying the
conclusion of Theorem <ref> with L(∂Σ^')<L(∂Σ).
Let P and P^∗ be two antipodal points of S and let φ
_θ be a continuous rotation on S with the axis passing through P
and P^∗and rotation angle θ, which rotates anticlockwise
around P when θ increases and we view S from sinside. P and
P^∗ are chosen so that we can define θ_0=θ_0(
Σ_1) ∈(0,π), such that
φ_θ_0( ∂Σ_1) ∩ E_q≠∅,
while
φ_θ( ∂Σ_1) ∩ E_q=∅ for all θ∈(0,θ_0),
and
φ_θ_0(f_1(p_0))∉ E_q.
We may assume P^∗ and P are outside ∂Σ_1. We first
show that
There exists a surface Σ_2=( f_2,Δ) ∈𝒞^∗(L,m) such that (<ref>) holds,
H(Σ_2)=H(Σ_1),L( ∂Σ_2) =L(∂Σ_1),
∂Σ_2=( f_2,∂Δ) =(
φ_θ_1∘ f_1,∂Δ) ,
n( Σ_2,E_q) =n( Σ
_1,E_q) ,A(Σ_2)=A(Σ_1),
∂Σ_2 contains at least one point of E_q, and p_0
∈∂Δ is the unique branch point of f_2 in Δ\ f_2^-1(E_q), where θ_1∈(0,2π].
Let δ_L_0 with L_0=L(∂Σ_1) be determined by Lemma
<ref> and let δ_E_q be the smallest positive distance between
points of E_q. Then d_f_1( f_1^-1(E_q),∂Δ) >δ_L_0. Let θ_1 be the maximal number in
(0,θ_0) such that for each θ∈(0,θ_1)
max_𝔞∈ E_qd( 𝔞,φ_θ(𝔞) )<δ_L_0^'=min(δ_E_q
,δ_L_0)/3.
Let b_1,b_2,…,b_n,n=n(
Σ_1,E_q) , be all distinct points in f_1(Δ)∩
E_q. Then for each j≤n there exists a Jordan domain U_j
containing b_j with j=1,…,n and U_j
⊂Δ, such that f_1 restricted to U_j is a BCCM
onto the closed disk V_j=D( f_1( b_j)
,δ_L_0^') , with U_i∩U_j
=∅ if i≠ j and b_j is the unique possible branch point of
f_1 in U_j.
Let g_1=φ_θ_1∘ f_1:Δ→ S,
and let ϕ_j be the homeomorphism from φ_θ_1
(V_j) onto itself, which is an identity on ∂φ_θ_1(V_j) and maps φ_θ_1(
f_1(b_j)) to f(b_j). Note that both f_1(
b_j) and ϕ_j( f_1( b_j) ) ar
both contained in φ_θ_1( V_j) . Let
g_1^' be the mapping given by g_1 on Δ\( ∪_j=1^nU_j) and by ϕ
_j∘ g_1 on U_j.Then g_1^' is an OPLM so
that G_1=( g_1^',Δ) is contained in
𝒞^∗( L,m) with C_g_1^'^∗(Δ)={p_0}, and that, for each j=1,…,n,
b_j is the only possible branch point of g_1^' in
U_j with g_1^'(b_j)=f(b_j) and v_g_1
^'(b_j)=v_f_1(b_j). Thus it is clear that (<ref>
)–(<ref>) hold for Σ_2=G_1. Therefore, in the case
θ_1=θ_0 we have ( ∂Δ) ∩
g_1^'-1(E_q)≠∅, and we proved Claim <ref> when
θ_1=θ_0.
Assume that θ_1<θ_0. Then G_1 satisfies all
assumptions of (III), and additionally satisfies (<ref>), and then
we still have d_g_1^'( g_1^'-1(E_q),∂Δ) >δ_L_0. Moreover, we have θ_0(
G_1) =θ_0( Σ_1) -θ_1. Then we can
repeat the above arguments at most k-1=[ θ_0(
Σ_1) -θ_1/θ_1] +1 times to obtain a
surface Σ_2=( f_2,Δ) =G_k=(
g_k^',Δ) satisfying Claim <ref>. The
existence of Σ_2 is proved.
Now we can write ℰ_f_2={p_0,p_1,…,p_s-1},s≥2,
so that p_0∈ C_f_2^∗( ∂Δ) and
{p_1,…,p_s-1}⊂ f_2^-1(E_q).
Then by Corollary <ref> there exists a surface Σ
_3=( f_3,Δ) ∈𝒞^∗(
L,m) such that C_f_3^∗( Δ) =∅,
H( Σ_3) ≥ H(Σ_2),L(Σ_3)<L(∂Σ_1),
and moreover the following conclusion (I) or (II) holds true.
(I) Σ_3∈ℱ_r( L,m) .
(II) Σ_3∈𝒞^∗( L,m) , and both
ℰ_f_3 and C_f_3^∗( ∂Δ)
are the same singleton, and either #C_f_3^∗( ∂Δ) ≤#C_f_2^∗( ∂Δ) -1 holds,
or #C_f_1^∗( ∂Δ) =#C_f_0^∗( ∂Δ) and one of (<ref>)–(<ref>) hold
with f=f_2.
If (I) holds, the proof is completed.
If (II) holds, then we repeat the same argument which deduces Σ_3
from Σ_1. This iteration can only be executed a finite number of times
by (II), and at last we obtain a surface Σ_k=( f_k,Δ) ∈𝒞^∗(L,m) such that
H( Σ_k) ≥ H(Σ),L(Σ_k)<L(∂Σ),
and one of the following two alternatives holds:
(𝔞) Σ_k∈𝒞^∗( L,1) ,
𝔏( ∂Σ_k) =1, A(
Σ_k) <4πand C_f_k^∗( Δ) =C_f_k^∗( ∂Δ) =∅.
(𝔟) #C_f_k^∗( Δ)
=#C_f_k^∗(∂Δ)=#ℰ_f_k=1, H(Σ
_k)≥ H(Σ),L(∂Σ_k)<L( ∂Σ) ,
and one of the three alternatives holds: Σ_k∈𝒞^∗( L,m-1) ,A( Σ_k) <A( f)
-4π,L( ∂Σ_k) <L( ∂Σ).
If (𝔞) holds, then ∂Σ_k is a simple convex circle
and, f_k( Δ) as a set, is the closed disk on
S enclosed by ∂Σ_k, and by argument principle we have
Σ_k∈ℱ_r( L,1) ⊂ℱ_r(L,m).
If (𝔟) holds, then we can repeat the whole above argument, which
deduces Σ_1 first from Σ and then deduces Σ_k from
Σ_1, to obtain a surface Σ_s from Σ_k satisfying
(𝔞) or (𝔟). But this iteration can only be executed a
finite number of times by (𝔟) and at last we obtain a surface
Σ_t satisfying (𝔞).
[Ah0] L. Ahlfors, Complex Analysis, McGraw-Hill, third edition, 1979.

[Ah] L. Ahlfors, Zur Theorie der Überlagerungsflächen, Acta Math., 65 (1935), 157-194.

[Ber] F. Bernstein, Über die isoperimetrische Eigenschaft des Kreises auf der Kugeloberfläche und in der Ebene, Math. Ann., 60 (1905), 117-136.

[Dr] D. Drasin, The impact of Lars Ahlfors' work in value-distribution theory, Ann. Acad. Sci. Fenn. Ser. A I Math., 13 (1988), no. 3, 329-353.

[Du] J. Dufresnoy, Sur les domaines couverts par les valeurs d'une fonction méromorphe ou algébroïde, Ann. Sci. École Norm. Sup., 58 (1941), 179-259.

[Ere] A. Eremenko, Ahlfors' contribution to the theory of meromorphic functions, Lectures in memory of Lars Ahlfors (Haifa, 1996), 41-63, Israel Math. Conf. Proc., 14, Bar-Ilan Univ., Ramat Gan, 2000.

[Ha] W. K. Hayman, Meromorphic Functions, Oxford, 1964.

[N] R. Nevanlinna, Zur Theorie der meromorphen Funktionen, Acta Math., 46 (1925), 1-99.

[R] T. Rado, The isoperimetric inequality on the sphere, Amer. J. Math., 57 (1935), no. 4, 765-770.

[Ri] S. Rickman, Quasiregular Mappings, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), Springer, Berlin, 1993.

[S] S. Stoilow, Leçons sur les Principes Topologiques de la Théorie des Fonctions Analytiques, Gauthier-Villars, Paris, 1956.

[S-Z] Z. H. Sun and G. Y. Zhang, Branch values in Ahlfors' theory of covering surfaces, Science China Mathematics, 63 (2020), no. 8, 1535-1558.

[T] I. Todhunter, Spherical Trigonometry, 5th ed., MacMillan, 1886, p. 76.

[Y] L. Yang, Value Distribution Theory, Springer, Berlin, 1993.

[Z1] G. Y. Zhang, Curves, domains and Picard's theorem, Bull. London Math. Soc., 34 (2002), no. 2, 205-211.

[Zh1] G. Y. Zhang, The precise bound for the area-length ratio in Ahlfors' theory of covering surfaces, Invent. Math., 191 (2013), 197-253.

[Zh2] G. Y. Zhang, The precise form of Ahlfors' Second Fundamental Theorem, https://doi.org/10.48550/arXiv.2307.04623.
|
http://arxiv.org/abs/2307.04572v1 | 20230710140659 | M1 neutrino transport within the numerical-relativistic code BAM with application to low mass binary neutron star mergers | ["Federico Schianchi", "Henrique Gieg", "Vsevolod Nedora", "Anna Neuweiler", "Maximiliano Ujevic", "Mattia Bulla", "Tim Dietrich"] | gr-qc | ["gr-qc", "astro-ph.HE"] |
^1Institut für Physik und Astronomie, Universität Potsdam, Haus 28, Karl-Liebknecht-Str. 24/25, 14476, Potsdam, Germany
^2Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-170, Santo André, São Paulo, Brazil
^3Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany
^4Department of Physics and Earth Science, University of Ferrara, via Saragat 1, I-44122 Ferrara, Italy
^5INFN, Sezione di Ferrara, via Saragat 1, I-44122 Ferrara, Italy
^6INAF, Osservatorio Astronomico d’Abruzzo, via Mentore Maggini snc, 64100 Teramo, Italy
Neutrino interactions are essential for an accurate understanding of the binary neutron star merger process. In this article, we extend the code infrastructure of the well-established numerical-relativity code BAM that until recently neglected neutrino-driven interactions. In fact, while previous work allowed already the usage of nuclear-tabulated equations of state and employing a neutrino leakage scheme, we are moving forward by implementing a first-order multipolar radiation transport scheme (M1) for the advection of neutrinos.
After testing our implementation on a set of standard scenarios, we apply it to the evolution of four low-mass binary systems, and we perform an analysis of ejecta properties. We also show that our new ejecta analysis infrastructure is able to provide numerical relativity-informed inputs for the codes and , for the computation of kilonova lightcurves and nucleosynthesis yields, respectively.
M1 neutrino transport within the numerical-relativistic code BAM with application to low mass binary neutron star mergers
Tim Dietrich^1,3
August 12, 2023
=========================================================================================================================
§ INTRODUCTION
Simulations of binary neutron star (BNS) mergers are a fundamental tool to support interpretations of multimessenger observations combining gravitational waves (GWs) and electromagnetic (EM) signals produced by the same transient event, allowing, among others, the study of matter at supranuclear densities e.g.,<cit.>, the expansion rate of the Universe <cit.>, and the production of heavy elements, e.g. <cit.>.
The strong interest in BNS mergers is
partially caused by the myriad of observational data that has recently become available with the detection of the GW signal GW170817 <cit.> by advanced LIGO <cit.> and advanced Virgo <cit.> and its associated EM counterparts: the kilonova AT2017gfo <cit.> and the short γ-ray burst GRB170817A <cit.>, with long-lived signatures of its afterglow <cit.>.
With the start of the O4 observation run of the LIGO-Virgo-Kagra collaboration in May 2023, more events of this kind are expected to be detected, e.g., <cit.>.
In the neutron-rich matter outflow, r-process nucleosynthesis can set in, which can power transient EM phenomena due to heating caused by radioactive decay of newly synthesized nuclei in a wide range of atomic numbers <cit.>, corroborating the hypothesis that kilonovae are connected to the production of heavy nuclei <cit.>.
Analysis of AT2017gfo has shown that kilonovae can consist of multiple components, each one generated by ejecta with different electron fractions and entropy <cit.>.
Studies of ejecta based on numerical-relativity (NR) simulations of BNS mergers suggest that the properties of the ejecta depend on the different ejection mechanisms during and after the merger, e.g., Refs. <cit.>.
Most NR simulations of BNS mergers are relatively short (≤100ms after the merger) and thus provide information on the early time, dynamical ejecta, which is generally divided into a tidal component (driven by tidal torques) and a shocked component (driven by shocks launched during NS core bounces)
<cit.>. In equal-mass mergers, the shocked component is found to be up to a factor
∼10 more massive than the tidal one <cit.>.
However, the dynamical ejecta found in NR simulations cannot account alone for the
bright blue and late red components of the observed kilonova in AT2017gfo <cit.>.
Winds powered by neutrino absorption and angular momentum transport can unbind 𝒪(0.1 M_⊙) from the disk surrounding the remnant on timescales of 𝒪(0.1-1 s) and could (if present) give the largest contribution to the kilonova signal <cit.>. Until recent years, these winds have been mostly studied by means of long-term simulations of neutrino-cooled disks <cit.>.
Ab-initio NR simulations of the merger with advanced neutrino-transport and
magnetohydrodynamics were not yet fully developed at sufficiently long timescales <cit.>, but large progress has been made recently, e.g., <cit.>.
Additionally, shorter (up to 100 ms post-merger) NR simulations
pointed out the existence of a moderately neutron-rich spiral-wave wind
that is sufficiently massive and fast to contribute to the early blue kilonova emission <cit.>.
Another contribution to post-merger ejecta can come from neutrino-driven winds that can lead to ∼ 10^-4-10^-3M_⊙ ejecta with high electron fraction <cit.>.
To perform multimessenger analyses of future GW and EM detections associated with BNS mergers, NR simulations, including microphysical modeling, are essential. In particular, for the estimation of nucleosynthetic yields and kilonova light curves, it is important to account for the interaction of nuclear matter with neutrinos. This is because neutrino emission and absorption are responsible for determining the electron fraction of the ejecta, which influences kilonova light curves and nucleosynthesis strongly.
In the past, several attempts were made to map ejecta properties to binary parameters like deformability and mass ratio, e.g., <cit.>, with the aim of building phenomenological fits for Bayesian analysis of kilonova light curves and GW signal simultaneously. These studies showed that neutrino radiation treatment plays an important role in determining the mass, composition, and geometry of the ejecta. The extension and improvements of such fits with new data require the use of an advanced scheme to include neutrino radiation.
The first attempt to include neutrino interactions in a BNS merger simulation was made more than 20 years ago in <cit.> by means of a neutrino leakage scheme (NLS). NLS employs an effective neutrino emissivity assigned to each fluid element according to its thermodynamical configuration and the optical depth of the path from it to infinity. This effective emission represents the rate of neutrino energy/number that escapes a fluid element. Hence, NLS is limited to model neutrino cooling. Unfortunately, this quantity is only known in the diffusive and free-streaming regimes, and phenomenological interpolation is used for gray zones. The main issue of NLS is the fact that, by neglecting the neutrino heating and pressure on the nuclear matter, it leads to a significant underestimation of the ejecta's electron fraction <cit.> and affects the matter dynamics. The more recent development of an advanced spectral leakage (ASL) scheme tried to solve this issue by phenomenological modeling of neutrino flux anisotropies <cit.>.
A more accurate theoretical approach to incorporate neutrino effects would require evolving the neutrinos distribution function according to the General Relativistic Boltzmann equation <cit.>. In principle, it is possible to follow this approach in a conservative 3+1 formulation, e.g., <cit.>. However, since the distribution function is defined in the 6+1-dimensional one-particle phase space, the computational cost of such an approach is prohibitive. Therefore, in recent years, more computationally efficient neutrino radiation transport approaches have become increasingly popular. Amongst them is the so-called moment scheme, which is based on a multipolar expansion of the moments of the radiation distribution function <cit.>. The 3+1 decomposition of such a formalism has been first studied in <cit.>. The basic idea of this framework is to dynamically evolve the distribution function of neutrino intensity in a base of multipoles up to a certain rank and evolve them as field variables. Most of the radiation transport codes used in NR consider the transport of the zeroth and first-rank moments, thus referred to as M0 scheme <cit.> or M1 moments scheme <cit.>. It is worth noting that the aforementioned M1 implementations rely on the grey approximation, i.e., the considered moments are frequency integrated. This description makes the computation significantly less expensive but less accurate regarding the matter-neutrino interaction rates, which are strongly dependent on the neutrino energy <cit.>. We want to point out that not only in BNS simulations but also in core-collapse supernovae and disk simulations, multipolar radiation transport schemes are regularly used. In most cases, even in more sophisticated versions, like non-gray, energy-dependent schemes, e.g., <cit.>.
One important artifact of multipolar radiation transport schemes is the well-known unphysical interaction of crossing beams <cit.>, which is due to the inability of the M1 scheme to treat higher-order moments of the distribution. The crossing beams and energy-dependent interaction rates issues are both cured by Monte-Carlo radiation transport. In the latter, radiation is modeled by an arbitrary number of neutrino packets, each one with its own energy and momentum. This scheme has recently been adapted to NR simulation <cit.>.
Another scheme that can, in principle, solve these issues is the relativistic Lattice-Boltzmann <cit.>, where the momentum-space part of the Boltzmann equation is solved at every spatial point on a discretized spherical grid in order to model the transport even in the presence of higher-order moments. Overall, we refer to <cit.> for a detailed review of neutrino transport methods.
Among all the mentioned schemes, we decided to implement M1 transport because of its ability to treat neutrino heating and pressure with a reasonable computational cost and for being well-tested in NR simulations of BNS mergers.
This article is structured as follows: In Sec. <ref>, we recap the governing equations of General Relativistic Radiation Hydrodynamics (GRRHD) and M1 transport. In Sec. <ref>, we discuss the numerical methods used to integrate M1 transport equations, paying particular attention to the stiff source terms and the advection of radiation in the trapped regime. In Sec. <ref>, we show the results of the tests we performed to validate the code in different regimes. In Sec. <ref>, we present the application of our newly developed code to the merger of binary neutron stars with two different EoSs and mass ratios, with a description of ejecta geometry, neutrino luminosity, GW signal, nucleosynthesis yields, and kilonova light curves.
Throughout this article, we will use the Einstein notation for index summation with the (-,+,+,+) signature of the metric and (unless differently specified) geometric units, i.e., G=c=M_⊙=1. Also, the Boltzmann constant is κ_B = 1.
§ GOVERNING EQUATIONS
§.§ 3+1-Decomposition and spacetime evolution
The spacetime dynamics is considered by numerically solving Einstein's field equations in 3+1 formulation, for which the line element reads
ds^2 = -α^2dt^2 + (dx^i + β^i dt)(dx^j + β^j dt) γ_ij,
where α is the lapse function, β^i is the shift vector, and γ_ij is the 3-dimensional spatial metric (or 3-metric) induced on the 3-dimensional slices of the 4-dimensional spacetime, identified by t = constant.
The 3-metric is given by
γ_αβ = g_αβ + n_αn_β,
where g_αβ is the 4-dimensional spacetime metric and n^α is the timelike, normal vector field. By construction, n^α is future-directed, normal to each point of a given t = constant slice and normalized to n^μ n_μ = -1.
In the coordinate system given by the line element of Eq. (<ref>), the normal vector field has components
n^α = ( 1/α, -β^k/α), n_α = ( -α, 0 ).
In this framework, the BAM code <cit.> can solve for the Einstein field equations. In this work, we do so using the Z4c formulation with constraint damping terms <cit.> as implemented in <cit.>.
§.§ General Relativistic Hydrodynamics
As in the previous version of BAM that included a neutrino leakage scheme <cit.>, we solve general-relativistic radiation hydrodynamics equations arising from the conservation of stress-energy tensor of matter with source terms representing neutrino interactions. Furthermore, the conservation of baryon number and transport of electron fraction lead to:
∇_μ (ρ u^μ) = 0,
∇_μ T_ matter^μν = -S^ν,
∇_μ (ρ Y_ e u^μ) = m_ bℛ,
with ρ being the rest mass density, T^ matter_μν the stress energy tensor of matter, u^μ its four-velocity, m_ b the baryon mass, and Y_ e = n_ p/n_ b = (n_ e^- - n_ e^+)/n_ b the electron fraction. n_ e^-, n_ e^+, n_ p, and n_ b are the number densities of electrons, positrons, protons, and baryons, respectively. The source terms S^μ and ℛ represent the interaction of the fluid with neutrinos, i.e., neutrino cooling and heating, and the lepton number deposition rate, respectively.
We employ the usual decomposition of the fluid's 4-velocity as follows:
u^α = W(n^α + v^α), n^αv_α=0,
with W = -n^αu_α=1/√(1-v^iv_i), being the Lorentz factor.
Assuming matter to be an ideal fluid, its stress-energy tensor can be decomposed as
T_ matter^μν = ρ h u^μu^ν + pg^μν,
where h and p are the specific enthalpy and the pressure of the fluid, respectively. Equations (<ref>), (<ref>), and (<ref>) are expressed as conservative transport equations following the standard Valencia formulation <cit.> as already implemented in previous versions of BAM, e.g., <cit.>, which is based on the evolution of the conservative variables
D = √(γ) W ρ,
τ = √(γ) (W^2 h ρ - p) - D,
𝒮_i = √(γ) W^2 h ρ v_i,
D_Y = √(γ) W ρ Y_e.
As an upgrade, in comparison to our previous implementation, we modified the source terms S^μ and ℛ to adapt them to the new neutrino scheme, as we will discuss in the following.
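For orientation, the map from primitive to conservative hydrodynamic variables defined above can be sketched as follows. This is a minimal illustration and not the actual BAM routine; the function and variable names are placeholders, and the standard relation h = 1 + ε + p/ρ is assumed for the specific enthalpy.

```python
import numpy as np

def prim_to_cons(rho, v_up, v_down, p, eps, Y_e, sqrt_gamma):
    """Illustrative conversion from primitive to conservative hydro variables.

    rho        : rest-mass density
    v_up/v_down: contravariant/covariant 3-velocity components, shape (3,)
    p, eps     : pressure and specific internal energy
    Y_e        : electron fraction
    sqrt_gamma : square root of the 3-metric determinant
    """
    v2 = np.dot(v_up, v_down)            # v^i v_i
    W = 1.0 / np.sqrt(1.0 - v2)          # Lorentz factor
    h = 1.0 + eps + p / rho              # specific enthalpy (assumed relation)
    D = sqrt_gamma * W * rho
    tau = sqrt_gamma * (W**2 * h * rho - p) - D
    S_i = sqrt_gamma * W**2 * h * rho * v_down
    D_Y = sqrt_gamma * W * rho * Y_e
    return D, tau, S_i, D_Y
```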
§.§ Multipolar formulation for radiation transport
In this article, we implement a first-order multipolar radiation transport scheme following the formulation of Ref. <cit.>. The multipolar formulation was originally developed to reduce the dimensionality of the general-relativistic Boltzmann equation for the neutrino distribution function in the phase space
df(x^μ,p^μ)/dl = S_ coll(x^μ,p^μ,f),
with f being the distribution of neutrinos, l being the proper length traveled by neutrinos in a fiducial observer frame, and S_ coll being a collisional term that takes into account the interaction of neutrinos with matter, i.e., emission, absorption, and scattering. The derivative d/dl is along the trajectory in the phase space of the neutrinos, so it will have a component in physical spacetime and one in momentum space. Since neutrinos are assumed to travel on light-like geodesics, their momentum has to satisfy the constraint p^αp_α=0. This reduces the dimensionality of the problem by one and allows us to describe the 4-momentum via the variables Ω and ν, representing the space direction of particles on a solid angle and their frequency in the fiducial observer frame. To make the evaluation of collisional sources easier, we chose the fluid frame as the fiducial frame. In the following text, ν will always be the frequency of neutrinos as measured in the fluid frame.
At this point, we still have to handle a 6+1 dimensional problem that, if we want to ensure a sufficient resolution and accuracy for proper modeling, would computationally be too expensive.
Hence, we need to work out a partial differential equation on the physical 3D space that can capture the main features of radiation even without fully solving for f in the momentum space.
In this regard, Thorne <cit.> showed that it is convenient to decompose the intensity of radiation I=ν^3 f and the source S_ coll in multipoles of the radiation momentum p^α
M^α_1 ... α_k (x^β) := ∫_0 ^∞dν ν^3 ∫dΩf(ν,Ω,x^μ)/ν^kp^α_1...p^α_k ,
S^α_1 ... α_k (x^β) := ∫_0 ^∞dν ν^3 ∫dΩS_ coll(ν,Ω,x^μ,f)/ν^kp^α_1...p^α_k .
Plugging this ansatz into the Boltzmann Equation, Eq. (<ref>), one can derive the following evolution equations for every radiation moment M^A_k:
∇_β M^A_k β - (k-1)M^A_k βγ∇_γu_β = S^A_k,
where A_k is a multi-index of order k; cf. Ref. <cit.>.
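As a concrete special case, added here only for orientation, setting k=1 in this hierarchy removes the velocity-gradient term, since its prefactor (k-1) vanishes:

∇_β M^αβ - (1-1) M^αβγ∇_γ u_β = S^α, i.e., ∇_β M^αβ = S^α,

so the second-rank moment, which coincides with the radiation stress-energy tensor, is sourced solely by the collisional term; it is this equation whose 3+1 form is evolved below.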
In this work, we will focus on the second-order multipole M^αβ, which is known to be equal to the stress-energy tensor of radiation. It can be decomposed employing the laboratory frame or employing the fluid frame by choosing two different decompositions of the radiation momentum p^α:
fluid frame: p^α = ν (u^α + ℓ^α),
laboratory frame: p^α = ν' (n^α + l^α),
where ν and ν' are neutrino frequency in the fluid and lab frame, respectively[Note that ν appearing in the integrals of Eqs. (<ref>) and (<ref>) is always the frequency in the fluid's frame.], with the constraints u^αℓ_α = n^αl_α = 0 and l^α l_α = ℓ^αℓ_α = 1.
In such a way, we can express the radiation stress-energy tensor in the fluid frame as
T_ rad^αβ = M^αβ = Ju^αu^β + u^αH^β + H^αu^β + 𝒦^αβ,
with u^αH_α=S^αβu_α=0 and
J := ∫_o^∞dν ν^3 ∫dΩ f(ν,Ω, x^μ),
H^α := ∫_o^∞dν ν^3 ∫dΩ f(ν,Ω, x^μ) ℓ^α,
𝒦^αβ := ∫_o^∞dν ν^3 ∫dΩ f(ν,Ω, x^μ) ℓ^αℓ^β,
representing respectively the energy, the momentum, and the stress-energy tensor measured in the fluid frame. We will use these variables to express source terms since interaction rates of radiation with matter are usually evaluated in the fluid frame, in particular, we will write the 1st source multipole following Shibata et. al. <cit.> as
S^α = η u^α -κ_a J u^α -(κ_a + κ_s)H^α,
with κ_a being the absorption opacity, κ_s being the scattering opacity, and η being the emissivity.
In our work, we incorporate neutrino emission, absorption, and elastic scattering into the source term, but we neglect inelastic scattering.
In general, the emissivity η and the opacities κ_a, κ_s depend both on the fluid properties, namely on the density of the matter ρ, the temperature T, and the electron fraction Y_e, but also on the neutrino spectrum.
Unfortunately, the latter information is not available in our formalism since we only evolve averaged quantities. This represents one of the weaknesses of the employed scheme. Hence, to enable dynamical simulations, we have to employ additional assumptions that we will outline in the following.
§.§ M1 evolution equations
Ref. <cit.> showed that fluid-frame variables are not suitable for obtaining a well-posed system of partial differential equations in conservative form. For such a purpose, we need to perform a decomposition in the laboratory frame as
T_ rad^αβ = M^αβ = E n^α n^β + F^αn^β + F^βn^α + P^αβ,
with n^αF_α = P^αβn_α = F^t = P^tα = 0, in this case, E, F^i, and P^ij represent, respectively, the energy density, momentum density, and stress tensor as measured in the lab frame and are defined in an analogous way as their fluid frame equivalents.
We can work out laboratory frame variables starting from the fluid frame ones and vice versa performing different projections of T_rad^αβ. For our work, we will use:
J = W^2 E - 2 W F^iu_i + P^iju_i u_j,
H^α = (EW - F^iu_i)h^α_ βn^β + Wh^α_βF^β - h^α_iu_jP^ij,
where we have defined the 3-metric in the fluid frame as
h^αβ = g^αβ + u^αu^β.
Ref. <cit.> showed that by decomposing M^αβ as in Eq. (<ref>) and plugging it into Eq. (<ref>) with k=1, we can get the following conservative evolution equations for the energy and momentum of neutrinos:
∂_t Ẽ + ∂_j (αF̃^j - β^j Ẽ)
= α (P̃^ij K_ij - F̃^j∂_j ln(α) - S̃^αn_α),
∂_tF̃_i + ∂_j(αP̃^j_i - β^jF̃_i)
= (-Ẽ∂_i α + F̃_k∂_iβ^k + α/2P̃^jk∂_i γ_jk + αS̃^αγ_iα),
where K_ij is the extrinsic curvature of the spatial hypersurface, and we defined the densitized variables Ẽ=√(γ)E, F̃^i=√(γ)F^i, P̃^ij=√(γ)P^ij and S̃^α = √(γ)S^α. Here, we can clearly see that the source terms of this equation can be divided into two categories, the gravitational ones, proportional to the first derivatives of the metric and the gauge, and the collisional ones, proportional to S̃^α. While the former is responsible for effects like neutrino path bending and gravitational blueshift/redshift, the latter describes neutrino emission, absorption, and scattering by the fluid.
§.§ Closure relation
Since we do not have an evolution equation for P_ij, we can only estimate it from E and F^i. We follow the prescription discussed in <cit.>:
P^ij = 1/2(3χ(ζ)-1)P^ij_ thin + 3/2(1-χ(ζ))P^ij_ thick,
with P_ thin and P_ thick being the closures in the optically thin and thick regimes, respectively, and the quantity χ is called Eddington factor. The Eddington factor was introduced to model the transition from a trapped radiation regime (χ=1/3) to a free streaming radiation regime (χ=1). In this work, we use the so-called Minerbo closure <cit.>
χ(ζ) = 1/3 + ζ^2 6 - 2ζ + 6ζ^2/15,
where
ζ^2 = H^αH_α/J^2,
is the closure parameter. We expect ζ→ 1 for free streaming radiation and ζ→ 0 for trapped radiation.
The free streaming closure can be expressed as <cit.>:
P^ij_ thin = E F^i F^j/F^2.
The computation of P^ij_ thick is more elaborated since the thick closure must be defined in such a way to be isotropic in the fluid frame, i.e, we want
𝒦^αβ_ thick = 1/3J h^αβ.
Refs. <cit.> showed that 𝒦^αβ_ thick of Eq. (<ref>) leads to
P^ij_ thick = 4/3 J_ thick W^2 v^i v^j + 2 W v^(iγ^j)_αH^α_ thick + 1/3J_ thickγ^ij,
with
J_ thick = 3/2W^2+1 [ E(2W^2-1) - 2W^2F^iv_i ],
γ ^i_α H_ thick^α = F^i/W - 4/3J_ thickWv^i + W[F^i v_i -E + J_ thick]v^i.
The system composed of Eqs. (<ref>) and (<ref>) is proven to be strongly hyperbolic using the closure (<ref>) as long as the causality constraint |F| ≤ E is satisfied. Equation (<ref>) also guarantees that the characteristic velocities of the system (<ref>, <ref>) are not superluminal.
Similar to other implementations of this scheme, e.g., <cit.>, we divide neutrinos into three species: ν_e, ν̅_e, and ν_x, with this last species collecting all heavy neutrinos and respective anti-neutrinos together. In this way, we are solving three M1 systems (<ref>, <ref>) coupled to each other only through the fluid.
§.§ Neutrino number density
The previously described scheme still misses any information about the neutrino energy spectrum, which is important for an accurate estimate of the fluid's neutrino opacities κ_a and κ_s (since the cross sections of the involved processes depend strongly on the neutrino energy). The simplest way to improve the previous scheme in this sense is to add the evolution of the neutrino number density, so that the average neutrino energy ⟨ϵ_ν_i⟩ can be obtained at every point.
In our implementation, we set up the neutrino number evolution following <cit.> and <cit.>,
i.e., through the transport equation
∇_α (n f^α) = η_n - κ_n n,
where n is the neutrino number density in the fluid frame and η_n and κ_n are the neutrino number emissivity and opacity respectively and nf^α is the 4 dimensional number flux. According to Ref. <cit.> we chose
f^α = u^α + H^α/J,
in such a way that the projection of nf^α along u^α gives the neutrino number density in the fluid frame:
n = -nf^α u_α.
Expressed in slice adapted coordinates, Eq. (<ref>) reads
∂_t (α√(γ) n f^0) + ∂_i(α√(γ)nf^i) = α√(γ)(η_n - κ_n n),
which is a transport equation for the conservative variable
N := α√(γ)nf^0.
Finally, we can find f^0 and f^i using the definition of slice adapted coordinates:
α f^0 = -f^αn_α = W - H^αn_α/J,
f^i = Wv^i + γ^i_αH^α/J - β^i f^0.
We solve Eq. (<ref>) together with Eq. (<ref>) and Eq. (<ref>) to get a complete and closed system of hyperbolic transport equations in conservative form.
Note that in this formulation, the average energy of neutrinos in the fluid frame can be simply obtained by:
⟨ϵ_ν⟩ = J/n.
§.§ Coupling to hydrodynamics
To model the exchange of energy and momentum between neutrinos and the fluid, we modify the conservation of the matter's stress-energy tensor into
∇_βT_ matter^βα = - ∑_ν_i S^α_ν_i,
where the sum runs over all three neutrino species. This means
∂_t τ = standard hydro rhs + ∑_ν_iα n^αS̃_α,ν_i,
∂_t 𝒮_i = standard hydro rhs - ∑_ν_iαγ_i^αS̃_α,ν_i,
where τ and 𝒮_i are the conservative internal energy and momentum in the standard Valencia formulation of GRHD.
We also take the variation of the electron fraction of the fluid into account and solve the transport equation
∇_α(ρ Y_e u^α) = m_b ℛ,
with a source term given by interaction with neutrinos
ℛ = -∑_ν_isign(ν_i) (η_n,ν_i - κ_n,ν_i n_ν_i),
where
sign(ν_i)=
1, if ν_i = ν_e,
-1, if ν_i = ν̅_e,
0, if ν_i = ν_x,
is a function that accounts for the different signs of the contributions given by the different neutrino species.
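As an illustration of how these source terms enter the hydrodynamics right-hand side, the following Python sketch mirrors the structure of the equations above. All names, the data layout, and the handling of densitization factors are assumptions of this sketch and not the actual BAM implementation.

```python
import numpy as np

SIGN = {"nu_e": +1.0, "nu_ae": -1.0, "nu_x": 0.0}   # sign(nu_i) of the text

def add_neutrino_sources(rhs, fields, alpha, n_up, gamma_mix, m_b):
    """Add collisional neutrino sources to the hydro right-hand side.

    rhs       : dict with entries 'tau', 'S_i', 'D_Y'
    fields    : per-species dict with 'S_tilde_down' (densitized S~_alpha,
                shape (4,)), 'eta_n', 'kappa_n', 'n'
    n_up      : normal vector n^alpha, shape (4,)
    gamma_mix : projector gamma_i^alpha, shape (3, 4)
    """
    R = 0.0
    for species, f in fields.items():
        S_down = f["S_tilde_down"]
        rhs["tau"] += alpha * np.dot(n_up, S_down)    # energy exchange
        rhs["S_i"] -= alpha * (gamma_mix @ S_down)    # momentum exchange
        R -= SIGN[species] * (f["eta_n"] - f["kappa_n"] * f["n"])
    rhs["D_Y"] += m_b * R     # lepton-number source (densitization omitted)
    return rhs
```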
§.§ Opacities and emissivities
Within our gray scheme, we evolve energy-integrated variables and lose information about the neutrino spectrum. This means opacities contained in Eqs. (<ref>) and (<ref>) represent effective frequency-averaged quantities.
In the case of neutrinos in thermal equilibrium with the fluid, we can define the equilibrium opacities as
κ^ eq_a,s = ∫_0^+∞κ_a,s(ϵ) I^ eq(ϵ, T, μ) dϵ/∫_0^+∞ I^ eq(ϵ, T, μ) dϵ,
κ^ eq_n = ∫_0^+∞κ_a(ϵ) n^ eq(ϵ, T, μ) dϵ/∫_0^+∞ n^ eq(ϵ, T, μ) dϵ,
where ϵ is the neutrino energy, I^ eq and n^ eq are the spectral energy density and number density at equilibrium, respectively. T is the fluid's temperature and μ is the neutrino chemical potential at equilibrium. We assume I^ eq∼ϵ^3 f_ FD(ϵ, T, μ) and n^ eq∼ϵ^2 f_ FD(ϵ, T, μ) with f_ FD being the ultrarelativistic Fermi-Dirac distribution function.
The fluid's temperature T is one of the primitive variables provided by the hydrodynamic sector in our new BAM implementation <cit.> while the chemical potential at equilibrium μ is obtained by the nuclear EoS table. The latter actually provides the chemical potential for e^-, n, and p. Based on these, we can compute the potentials for neutrinos assuming β-equilibrium, i.e.
μ_ν_e = μ_e^- + μ_p - μ_n, μ_ν̅_e = - μ_ν_e, μ_ν_x = 0.
The frequency-dependent opacities κ_a,s(ϵ) are obtained from the open source code <cit.> available at <http://www.nulib.org>. For a given EoS, they are evaluated as functions of the fluid's rest mass density ρ, temperature T_f, and electron fraction Y_e and given in the form of a 4D table.
For every value of ϵ in the opacity table, we perform a 3D interpolation with respect to the other three variables (ρ, T, Y_e) to get κ_a,s(ϵ). Finally, we use those values to discretize and evaluate the integrals in Eqs. (<ref>-<ref>) to obtain the desired opacities.
For our work, we use 400 points for ρ, 180 for T, 60 for Y_e, and 24 for ϵ.
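The frequency averages defined above can be sketched as follows. This is illustrative only: the binning, interpolation, and function names are assumptions of this sketch, and the actual code evaluates the integrals on the 24 tabulated energy bins.

```python
import numpy as np

def fermi_dirac(eps, T, mu):
    """Ultrarelativistic Fermi-Dirac occupation (k_B = 1)."""
    return 1.0 / (np.exp((eps - mu) / T) + 1.0)

def average_opacity(eps_bins, kappa_of_eps, T, mu, weight_power):
    """Spectral average of a tabulated opacity kappa(eps).

    weight_power = 3 gives the energy-weighted average (kappa_a, kappa_s),
    weight_power = 2 gives the number-weighted average (kappa_n).
    eps_bins and kappa_of_eps are assumed to come from a 1D interpolation
    of the opacity table at the local (rho, T, Y_e).
    """
    w = eps_bins**weight_power * fermi_dirac(eps_bins, T, mu)
    return np.trapz(kappa_of_eps * w, eps_bins) / np.trapz(w, eps_bins)
```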
Table <ref> lists all reactions taken into account for the calculation of the spectral opacities and the emissivities, with related references for the calculation method. We note that, in principle, the code could include more reactions.
As also reported in Ref. <cit.>, the tables give an unphysically high opacity in regions with ρ<10^11 g/cm^3 and T ≲ 0.35 MeV. This is because of blocking factors that are applied to the absorption opacities for ρ>10^11 g/cm^3. Unfortunately, the application of blocking factors in lower-density regions leads to numerical issues for 1 MeV ≲ T ≲ 30 MeV.
Therefore, we modified the original code to extend the domain of application of absorption blocking factors to the regions where T<0.35 MeV and Y_e ≶ 0.4 (> for ν_e, < for ν_e) independently on ρ, in addition to the region ρ>10^11 g/cm^3. This ensures that we obtain a smooth table that is free of unphysical absorption opacities that were previously affecting the low-density and low-temperature regions.
So far, we assumed neutrinos to be in equilibrium with the fluid. However, this is, in general, not the case. Since the cross sections of neutrinos scale with ϵ^2, the assumption of neutrinos at equilibrium with the fluid would lead to an underestimate of opacities when hot neutrinos out of equilibrium cross a region of cooler fluid. To take this energy dependence into account, we apply the correction
κ_a,s,n = κ_a,s,n^eq( T^ν_ eff/T)^2,
which is also used in most other grey M1 implementations <cit.>. Where T^ν_ eff is the effective temperature of neutrinos.
To obtain T^ν_ eff, we assume the neutrino spectrum to be Planckian with temperature T^ν_ eff and reduced chemical potential η_ν = μ/T_f. We can then evaluate the average neutrino energy as
⟨ϵ_ν⟩ = F_3(η_ν)/F_2(η_ν) T^ν_ eff,
with F_k being the Fermi integral of order k. Since we know ⟨ϵ_ν⟩ = J/n, we can solve Eq. (<ref>) for
T^ν_ eff = F_2(η_ν)/F_3(η_ν)J/n,
and plug T^ν_ eff into Eq. (<ref>) to obtain the corrected opacity.
EoS tables only provide chemical potentials of neutrinos at thermal equilibrium with the fluid. When neutrinos get decoupled from the latter, we expect to approach a distribution with zero chemical potential, i.e., the distribution describing a fixed number of particles. As in <cit.>, to qualitatively account for this transition, we are evaluating the reduced chemical potentials of Eq. (<ref>) in the following way:
η_ν =μ/T(1 - e^-τ),
where τ is the optical depth provided by the NLS of <cit.>.
Finally, once we have set κ_a,s,n, we remain with η and η_n to be set. For that, we assume the Kirchhoff law
η = κ^eq_a 4π/(hc)^3 F_3(η_ν) T^4,
η_n = κ^eq_n 4π/(hc)^3 F_2(η_ν) T^3.
Such a choice ensures that neutrinos thermalize with the fluid and reach the expected thermal equilibrium state when trapped.
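A sketch of the out-of-equilibrium correction and of the Kirchhoff emissivity may look as follows. The Fermi integrals are evaluated numerically here, the prefactor of the emissivity is left as an input since it depends on the unit system, and all names are illustrative rather than those of the actual code.

```python
import numpy as np
from scipy.integrate import quad

def fermi_integral(k, eta):
    """F_k(eta) = int_0^inf x^k / (exp(x - eta) + 1) dx, evaluated numerically."""
    return quad(lambda x: x**k / (np.exp(x - eta) + 1.0), 0.0, np.inf)[0]

def corrected_opacity(kappa_eq, J, n, T_fluid, eta_nu):
    """Out-of-equilibrium correction kappa = kappa_eq * (T_eff / T)^2."""
    T_eff = fermi_integral(2, eta_nu) / fermi_integral(3, eta_nu) * (J / n)
    return kappa_eq * (T_eff / T_fluid) ** 2

def kirchhoff_emissivity(kappa_a_eq, T, eta_nu, prefac):
    """Energy emissivity from Kirchhoff's law; prefac stands for 4*pi/(h*c)^3
    in whatever unit system is used (an assumption of this sketch)."""
    return kappa_a_eq * prefac * fermi_integral(3, eta_nu) * T**4
```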
§ NUMERICAL SCHEME
We implemented the above multipolar formalism for radiation transport as a new module in the BAM code <cit.>, and will provide implementation details below.
§.§ Closure factor
Equation (<ref>) cannot be evaluated directly since H^α and J are functions of P^ij (and so of ζ); cf. Eqs. (<ref>) and (<ref>).
Hence, the closure factor must be found by solving the following implicit equation
ζ^2 J^2(ζ) - H_α(ζ)H^α(ζ)/E^2 = 0
using a root finder algorithm.
In our implementation, we solve Eq. (<ref>) for ζ using a Dekker algorithm <cit.>, which improves the convergence speed compared to the bisection scheme.
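The following sketch illustrates the closure-factor solve. Brent's method, a close relative of the Dekker scheme used in the code, is employed for brevity, and the callables that assemble J(ζ) and H_αH^α(ζ) from E, F_i, and v^i are assumptions of this sketch rather than actual BAM routines.

```python
import numpy as np
from scipy.optimize import brentq

def chi_minerbo(zeta):
    """Minerbo Eddington factor."""
    return 1.0/3.0 + zeta**2 * (6.0 - 2.0*zeta + 6.0*zeta**2) / 15.0

def closure_factor(E, J_of_zeta, H2_of_zeta):
    """Solve zeta^2 J(zeta)^2 - H_alpha H^alpha(zeta) = 0 for zeta in [0, 1].

    J_of_zeta and H2_of_zeta are callables that build the fluid-frame energy
    and squared momentum from the evolved variables using the interpolated
    closure with Eddington factor chi_minerbo(zeta); they stand in for the
    corresponding projections in the code.
    """
    def residual(zeta):
        return (zeta**2 * J_of_zeta(zeta)**2 - H2_of_zeta(zeta)) / max(E**2, 1e-300)

    if residual(0.0) * residual(1.0) > 0.0:   # no sign change: take the better boundary
        return 0.0 if abs(residual(0.0)) < abs(residual(1.0)) else 1.0
    return brentq(residual, 0.0, 1.0, xtol=1e-12)
```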
§.§ Fluxes
To evaluate the numerical fluxes at cell interfaces, we follow Ref. <cit.>. Given a field variable u and its flux ℱ(u), we employ a linear combination of a low-order diffusive flux ℱ_ LO and a second-order non-diffusive flux ℱ_ HO so that:
ℱ_i+1/2 = ℱ_ HO(u_i+1/2) - A [ ℱ_ HO(u_i+1/2) - ℱ_ LO(u_i+1/2) ] ,
where A = min(1, 1/κΔ x) and κ = (κ_s,i + κ_s,i+1 + κ_a,i + κ_a,i+1)/2.
This ansatz leads to ℱ_i+1/2 = ℱ_ LO in the free streaming regime and to ℱ_i+1/2≃ℱ_ HO in the scattering/absorption regime.
The low-order diffusive flux is computed using fluxes at the cell center, and a local Lax-Friedrichs (LLF) Riemann solver <cit.>:
ℱ_ LO(u_i+1/2) = ℱ(u_i) + ℱ(u_i+1)/2 - λ_ amaxu_i+1 - u_i/2,
with
λ_ amax = max _a ∈{i, i+1} b ∈ [1,2] { |λ^b_a| },
where λ^b are the characteristic velocities of the system. This choice ensures the monotonicity preservation of the solution in case of shocks and leads to better stability in the free streaming regime due to an increased numerical dissipation.
Certainly, in some cases, the latter can also be a disadvantage, e.g., in the case of radiation in an optically thick medium, it would introduce an unphysical diffusion, leading to a wrong estimation of the neutrinos diffusion rate. To avoid this effect, we employ the following non-diffusive scheme in optically thick regions:
ℱ_ HO(u_i+1/2) = 1/2 [ ℱ(u_i) + ℱ(u_i+1) ].
Our choice of ℱ_ HO cures the unphysical diffusion of ℱ_ LO, but in case of shocks, it can violate monotonicity preservation.
To make the scheme described by Eq. (<ref>) able to handle shocks in a thick regime without adding unphysical diffusion in smooth regions, we first compute ℱ in every point using Eq. (<ref>) and then set ℱ = ℱ_ LO if one of the following conditions is satisfied:
* Δ^n_i-1Δ^n_i < 0 or Δ^n_i Δ^n_i+1 < 0, i.e., if the solution at the current time step shows an extremum.
* Ẽ^n+1_i≤ 0 or Ẽ^n+1_i+1≤ 0, i.e., if the energy solution at the next time step would be overshoot to a negative value.
* Δ^n+1_i-1/Δ^n+1_i < 1/4 or Δ^n+1_i/Δ^n+1_i+1 < 1/4, i.e., if the solution at next time step would develop an extremum or if the change in the slope happens too quickly,
where
Δ_i^n = u^n_i+1 - u^n_i,
u^n+1_i = u_i^n - Δ t/Δ x ( ℱ^n_i - ℱ^n_i-1 ).
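A minimal sketch of the flux blending at a single interface is given below; in a second pass the interface flux is reset to ℱ_LO wherever one of the three conditions above triggers. The scalar (single-variable) form and all names are assumptions of this sketch.

```python
def blended_flux(u_L, u_R, f_L, f_R, lam_max, kappa_sum, dx):
    """Interface flux F = F_HO - A (F_HO - F_LO) described in the text.

    u_L, u_R : cell-centered states left/right of the interface
    f_L, f_R : analytic fluxes evaluated at those states
    lam_max  : maximum characteristic speed among the two cells
    kappa_sum: (kappa_a + kappa_s) averaged over the two cells
    """
    F_LO = 0.5 * (f_L + f_R) - 0.5 * lam_max * (u_R - u_L)   # local Lax-Friedrichs
    F_HO = 0.5 * (f_L + f_R)                                  # non-diffusive average
    A = min(1.0, 1.0 / max(kappa_sum * dx, 1e-30))
    return F_HO - A * (F_HO - F_LO)
```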
Characteristic velocities are constructed as a linear combination of thin and thick velocities with the same coefficients used to compose the closure in Eq. (<ref>). We chose to employ the following velocities in a generic i^th-direction:
λ_ thin ^ 1,2 = -β^i ±α |F^i|/|F|,
λ_ thick ^1,2 = -β^i ±α√(γ^ii/3).
In our tests, this particular choice increases the stability of our scheme without affecting the accuracy. Thin velocities are the same as employed in <cit.> while the thick ones are taken from <cit.>. For a complete list and discussion of the characteristic velocity, we refer to <cit.>.
We note that, in principle, Eq. (<ref>) is second-order accurate in the diffusive region far away from shocks or solution's extrema and first-order accurate in the free streaming regime.
§.§ Implicit-explicit time step
Scattering and absorption opacities, even in the geometrized units handled by BAM, can reach large values up to 10^3Δ t in very thick regions, e.g., in the neutron star interior. For such values, the collisional source terms S^α of Eqs. (<ref>, <ref>) become stiff, i.e., we have to treat the terms through an implicit scheme. Fluxes and gravitational sources are instead handled via a second-order explicit-implicit method.
A full time step of our radiation evolution algorithm is given by:
q^* - q^n/Δ t = - ∂_i F^i(q^n) + G(q^n) + S_coll(q^*),
q^n+1 - q^n/Δ t = - ∂_i F^i(q^*) + G(q^*) + S_coll(q^n+1),
where q = (Ẽ, F̃_i, N), and F^i, G, and S_coll represent fluxes, gravitational-source terms, and collisional source terms of Eq. (<ref>), (<ref>), and (<ref>), respectively. This method is 2nd-order accurate in fluxes and gravitational terms but only 1st-order accurate in the implicit terms.
Hydrodynamics variables are kept constant during the radiation substep of Eq. (<ref>), and are updated after the second step using S_coll(q^n+1). Analogously radiation variables are kept constant during the hydrodynamics and spacetime evolution substeps, which are performed ignoring radiation-fluid interactions.
The application of a partially implicit method requires the solution of a system in Ẽ^n+1, F̃^n+1_i, and N^n+1.
The last variable is decoupled from the rest of the system since its implicit time step can be written as
N^n+1 - N^n/Δ t = -∂_i ℱ^i_N(N^n) + α√(γ)η_n - κ_n N^n+1/f^0,
which can be solved straightforwardly for N^n+1 as:
N^n+1 = N^n -∂_i ℱ^i_N(N^n) Δ t + α√(γ)η_n Δ t/1 + k_n/f^0Δ t .
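In code, this closed-form update is essentially a one-liner; the sketch below is illustrative, with placeholder names, and div_flux denotes the discretized divergence of the number flux at the old time level.

```python
def update_neutrino_number(N_old, div_flux, alpha, sqrt_gamma,
                           eta_n, kappa_n, f0, dt):
    """Closed-form implicit update of the densitized neutrino number N."""
    num = N_old - div_flux * dt + alpha * sqrt_gamma * eta_n * dt
    den = 1.0 + (kappa_n / f0) * dt
    return num / den
```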
Solving for the neutrino momenta, unfortunately, requires more effort. We follow the linearized scheme of <cit.>. Plugging the expression of J and H^α in Eqs. (<ref>) and (<ref>), into the definition of S^α in Eq. (<ref>), and linearizing assuming ζ and F^i/|F| to be constant, we can write
S̃_α^n+1 = Ẽ^n+1 A_α + F̃^n+1_iB^i_α + √(γ)η u_α,
where A^α and B^iα are tensor functions of κ_a, κ_s, ζ, F^i/|F|, and u^α only. In our implementation, we solve for the implicit time step using the value of ζ and F^i/|F| at time step n. This is necessary for obtaining a linear system in (Ẽ^n+1,F̃_i^n+1). Plugging this expression into Eq. (<ref>), we get a system of four linear equations for four variables, whose analytical solution can be found in Appendix <ref>.
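Schematically, the linearized implicit step amounts to solving a 4x4 linear system for (Ẽ, F̃_i). In the sketch below the matrix C and vector s are assumed to collect the projections of A_α, B^i_α, and the emission term onto n^α and γ_i^α, so that the step reads x = x_expl + Δt (C x + s); the code instead uses the analytic solution mentioned above, and a dense solver is shown here only for illustration.

```python
import numpy as np

def implicit_collision_step(x_expl, C, s, dt):
    """Linearized implicit collisional update for x = (E~, F~_1, F~_2, F~_3).

    x_expl already contains the explicit fluxes and gravitational sources;
    C (4x4) and s (4,) encode the linearized collisional source, so that
    x = x_expl + dt*(C x + s) is solved as (I - dt*C) x = x_expl + dt*s.
    """
    M = np.eye(4) - dt * C
    return np.linalg.solve(M, x_expl + dt * s)
```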
We point out that the scheme used for the collisional terms is not fully implicit, since A^α and B^iα also depend on neutrino variables, and the values of the fluid's opacities are not updated according to the radiation-fluid interaction at each substep.
However, the solution of a fully coupled implicit system would require a non-linear root finder and an update of fluid variables at each substep with a significantly higher computational cost. For this reason, we limit ourselves to this linearized implicit scheme, which we found to be enough to ensure the stability of the code.
Finally, it is worth mentioning that other M1 implementations <cit.> treat the nonlinear terms of S_coll implicitly. This is equivalent to treating the ratio f^i = F^i/|F| as a variable at time n+1 and solving for a four-dimensional nonlinear root-finding problem. This scheme is believed to be more accurate in describing the interaction of radiation with a fast-moving fluid since it handles better the terms proportional to v · f. However, in the next section, we will show that our linearized scheme properly captures the advection of trapped radiation by a moving fluid, which is a stringent test that must be satisfied by a radiation transport code oriented to the simulation of BNS mergers.
§.§ Neutrino right-hand side routine
In the following, we summarize the steps followed to evaluate the full right-hand side (RHS) of the neutrino sector from E, F^i, and N, i.e., Eqs. (<ref>, <ref>, <ref>):
* We check the causality constraint |F| ≤ E; if it is violated, we rescale F^i such that E=|F|. This step is necessary to ensure the causality and hyperbolicity of the scheme (a sketch of this and of the following step is given after this list).
* We evaluate the closure factor ζ solving Eq. (<ref>). We use ζ to evaluate the fluid frame energy J and the neutrino number density n.
* We set opacities using the scheme described in the previous section.
* We compute fluxes at the cell's interfaces using Eq. (<ref>).
* We check whether the reconstructed fluxes satisfy one of the conditions listed above. If they do, we recompute them using Eq. (<ref>).
* We add the fluxes divergence to the RHS.
* We evaluate the gravitational source terms of Eqs. (<ref>) and (<ref>) and add them to the RHS.
* We solve Eq. (<ref>) for E^n+1, F_i^n+1, and N^n+1 using the explicit part of the RHS evaluated in the previous steps.
All the steps listed above are repeated for the three neutrino species.
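The first two steps of this list can be made concrete as in the sketch below; note that the Minerbo (maximum-entropy) Eddington factor is used here purely as an illustrative algebraic choice for χ(ζ), while the closure actually implemented is the one referenced in the text, and all function names are ours:

    import numpy as np

    def enforce_causality(E, F):
        """Step 1: rescale F^i such that |F| <= E."""
        F = np.asarray(F, dtype=float)
        Fnorm = np.linalg.norm(F)
        if Fnorm > E and Fnorm > 0.0:
            F = F * (E / Fnorm)
        return E, F

    def eddington_factor(zeta):
        """Illustrative chi(zeta): Minerbo maximum-entropy closure."""
        return 1.0 / 3.0 + zeta**2 * (6.0 - 2.0 * zeta + 6.0 * zeta**2) / 15.0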
§ NUMERICAL TESTS
§.§ Geodesics
To test the fluxes and gravitational sources, we set up a test employing Kerr-Schild spacetime with zero angular momentum, where we shoot a beam of free streaming neutrinos with Ẽ = |F̃| = 1 from left to the right of our numerical domain.
In Fig. <ref>, the neutrino beam is injected in the simulation tangentially to the black hole (BH) horizon at a coordinate distance 5M to 5.5M from the singularity (which is located in the origin of the coordinate system). The red lines in the figure show light-like geodesics that neutrinos at the top and bottom of the beam are supposed to follow. The whole beam should then be contained between these two lines.
We observe that most of the neutrino energy remains confined between the two geodesics, with a small part dispersed outside, mostly because of the low-order reconstruction scheme employed to handle the free streaming region, which introduces numerical dispersion. This interpretation is strengthened by the fact that the dispersion happens on both sides of the beam and decreases with increasing grid resolution. However, we expect that this is not an issue in BNS simulations, since we do not expect sharp variations of the energy density in free streaming regions as in this test case.
§.§ Absorption
To test the collisional source terms that model the neutrino-fluid interaction in the pure absorption regime, we set up two tests on a flat spacetime, one with a static and one with a stationary moving fluid. In both tests, we shoot a beam of neutrinos similar to the previous case.
Since, in these conditions, neutrinos should be only absorbed and not scattered, we expect their regime to remain purely thin and the momentum vectors to remain parallel to each other.
In Fig. <ref>, we show a wide beam of neutrinos moving from left to right encountering a sphere of matter with κ_a=0.5 in the center and decreasing radially as a Gaussian. As expected, neutrino momenta remain parallel to each other, and as a consequence, the region behind the sphere receives a much smaller amount of radiation when compared to regions on the sides, projecting a very clear shadow on the right edge of the simulation domain.
In the second absorption test, shown in Fig. <ref>, we distribute matter in a vertical tube with homogeneous properties, κ_a=0.05 and v_y = 0.5. As expected, we observe that a part of the radiation is absorbed by the fluid, and another part passes through it without being scattered. In contrast to the previous test, the fluid is not at rest. Hence, the test is well suited to probe the transformation between the fluid frame and the laboratory frame, which coincided in the previous test, where ζ=1 was trivially satisfied along the beam.
In this new test, we still expect to find ζ=1. However, since now H^αH_α≠ F_i F^i and E ≠ J, this is not trivial anymore.
Based on the success of the test, we can conclude that the root finder algorithm used to evaluate ζ is converging to the correct solution.
§.§ Advection
Advection of trapped radiation in a moving fluid is one of the most challenging situations that our code has to handle. To test such a scenario, we set up a test similar to the one shown in Sec. 4 of Ref. <cit.>, i.e., we evolve a one-dimensional Gaussian neutrino packet trapped in a homogeneous fluid moving at mildly relativistic velocity with stiff, pure-scattering opacity.
As initial conditions, we chose
Ẽ(t=0,x) = e^-x^2, J = 3Ẽ/(4W^2-1), F̃_i = (4/3)JW^2 v_i.
As shown in <cit.>, this condition for F_α ensures that H^α=0, i.e., we model a fully thick regime. For the fluid, we chose κ_s=10^3 and |v|=v_x=0.5. We use a single uniform grid with Δ x = 0.05 and employ a Courant-Friedrichs-Lewy (CFL) factor of 0.25. We test two different flux reconstruction schemes to check whether they can capture the correct diffusion rate in the regime κ_s Δ x ≫ 1: a constant reconstruction (u_i+1/2=u_i) with an LLF Riemann solver (Eq. (<ref>)) and the composed flux of Eq. (<ref>) proposed in <cit.>.
Results are shown in Fig. <ref> together with the reference solution, which we assume to be the advected solution of the diffusion equation
Ẽ(x,t) = 1/√(1+4Dt) exp[-(x-v_xt)^2/(1+4Dt)],
with D=1/(3κ_s) being the diffusivity. We see that the lowest-order reconstruction scheme [Eq. (<ref>)] fails to reproduce the correct diffusion rate because of its intrinsic numerical dispersion. The scheme used in <cit.>, instead, performs better, except near the maximum, where it reduces again to the lower-order one. Moreover, we observe no unphysical amplification of the packet, contrary to the test performed using the library <cit.> in <cit.>. In our implementation, we find that both the neutrino energy and the neutrino number are advected with the correct velocity.
To test the robustness of the scheme, we performed an additional test with identical fluid configuration but neutrinos' initial data given by a step function. Results at time t=4 are shown in Fig. <ref> for different resolutions together with the reference solution
Ẽ(x,t) = 1/2 [ 1 - erf( (x-v_x t)/(2√(Dt)) ) ],
with erf being the error function.
This test shows that the flux reconstruction of Eq. (<ref>), together with the linearized collisional sources of Eq. (<ref>), can handle shocks even in the presence of stiff source terms, preserving the monotonicity of the solution with a numerical dispersion that decreases with increasing resolution.
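Both reference solutions are inexpensive to evaluate when comparing with the numerical output; a small helper of ours (not part of BAM) is:

    import numpy as np
    from scipy.special import erf

    def gaussian_reference(x, t, v_x=0.5, kappa_s=1.0e3):
        """Advected diffusion solution for the Gaussian initial profile."""
        D = 1.0 / (3.0 * kappa_s)
        s = 1.0 + 4.0 * D * t
        return np.exp(-(x - v_x * t) ** 2 / s) / np.sqrt(s)

    def step_reference(x, t, v_x=0.5, kappa_s=1.0e3):
        """Advected diffusion solution for the step-function initial profile."""
        D = 1.0 / (3.0 * kappa_s)
        return 0.5 * (1.0 - erf((x - v_x * t) / (2.0 * np.sqrt(D * t))))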
§.§ Uniform Sphere
The uniform sphere test is the closest configuration to an idealized star for which we have an analytical solution of the Boltzmann equations <cit.>. For this reason, several groups have shown such simulations to test their implementations, e.g., Refs. <cit.>. It consists of a sphere of radius r_s=1. In its interior, we set κ_a=η= constant and κ_s=0. We set up this test on a 3-dimensional Cartesian grid with Δ x = Δ y = Δ z = 0.05 imposing reflection symmetry with respect to x-y, x-z and y-z planes. Evolution is performed using a RK3 algorithm with a CFL factor of 0.25. We perform this test with two different opacities κ_1 = 5 and κ_2=10^10 to test different regimes; cf. <cit.>.
Figure <ref> shows both the numerical and analytical Ẽ as a function of the radius for the opacities. The numerical solution is taken at t=12 along the diagonal x=y=z. Overall, we find a good agreement between our numerical result and the analytical solution of the Boltzmann equation, comparable to the results obtained in other works. However, we point out that one cannot expect to converge to the exact solution since the M1 scheme is only an approximation to the Boltzmann equation and is only exact in the fully trapped or free streaming regimes (without crossing beams).
§.§ Single isolated hot star
We evolve a single isolated hot neutron star employing the SFHo EoS <cit.>. Initial data are constructed by solving the TOV equations with the assumption of constant entropy and beta-equilibrium as in <cit.>. For the integration of the TOV equation, we choose ρ_c = 8.65 × 10^14 g/cm^3 and an entropy per baryon s = 1k_B. This leads to a total baryonic mass M_ bar = 1.64 M_⊙, which corresponds to a gravitational mass of 1.52 M_⊙, a coordinate radius R=9.8 km, and a central temperature of 27.8 MeV. We evolve the system on a grid with a grid spacing of Δ x = Δ y = Δ z = 182 m using a CFL factor of 0.25. In this test, we evolve the hydrodynamics with the module of <cit.> using 4th order Runge-Kutta (RK4) integration algorithm and WENOZ <cit.> primitive reconstruction with LLF Riemann solver for the fluxes at cell interfaces.
Fig. <ref> shows the transition of neutrinos from trapped to the free streaming regime on the surface of the star. As expected, the neutrino energy density reaches its peak in the star's core due to the higher density and temperature of the fluid in this region. Moreover, we can observe that neutrinos inside the star have a zero average momentum since they constitute a particle gas in thermal equilibrium with the fluid, and the transport phenomena are negligible. When the optical depth τ drops below 2/3, interactions with the fluid start becoming subdominant, and neutrinos start traveling freely, developing an average momentum in the radial direction.
Another important consequence of the neutrino-baryon decoupling can be seen in Fig. <ref>, where we can observe all three species of neutrinos being thermalized with the fluid in the inner part of the star and decoupling near their respective neutrinospheres at three different temperatures. After decoupling, the neutrino temperature remains constant due to the lack of interactions with the fluid. The average energy hierarchy is, as reported in the literature, ⟨ϵ_ν_e⟩ < ⟨ϵ_ν̄_e⟩ < ⟨ϵ_ν_x⟩ <cit.>.
§ BINARY NEUTRON STAR MERGERS
§.§ Configurations and Setup
We run 10 different BNS configurations employing two different EoSs (SFHo <cit.> and DD2 <cit.>) with the same total baryonic mass of 2.6 M_⊙, and two different mass ratios of q=M_1/M_2=1 and q=1.2, where M_i is the gravitational mass of the i-th star. All binary systems are considered to be irrotational, i.e., the stars are non-spinning. Further details about the setups are given in Table <ref>. We run the simulations with SFHo EOS and neutrino transport at two different resolutions: R1 with 96 points per dimension in each of the two finest boxes covering the stars. This corresponds to a grid spacing in the finest level of Δ x_ min = 248 m and Δ x_ max = 31.8 km in the coarsest one. R2 with 128 points on each finest box for Δ x_ min = 186 m and Δ x_ max = 23.8 km on the coarsest level. Initial data was produced using the pseudo-spectral code SGRID <cit.> under the assumption that matter is in beta-equilibrium with a constant initial temperature of T=0.1 MeV; cf. <cit.>.
The proper initial distance between the stars' centers is set to 38 km. This corresponds to about three orbits before the merger of the stars. Given that we will primarily focus on the post-merger evolution, we did not perform any eccentricity reduction procedure. Both spacetime and hydrodynamics variables are evolved using a method of lines with RK4 algorithm with a CFL factor of 0.25.
Time evolution is performed using a Berger-Oliger algorithm with eight refinement levels. The two finest refinement levels are composed of two moving boxes centered around the stars.
Spacetime is evolved employing the Z4c formulation <cit.>. It is discretized using a finite difference scheme with a fourth-order centered stencil for numerical derivatives. Lapse and shift are evolved using 1+log slicing <cit.> and gamma-driver conditions <cit.> respectively.
For hydrodynamic variables we use a finite volume scheme with WENOZ <cit.> reconstruction of primitives at cell interfaces and HLL Riemann solver <cit.> for computing numerical fluxes. We apply the flux corrections of the conservative adaptive mesh refinement <cit.> to the conservative hydrodynamics variables but not to the radiation fields.
§.§ Ejecta
We compute ejecta properties using a series of concentric spheres centered around the coordinate origin with radii varying from 300 km to 1000 km. On each sphere, the total flux of mass, energy, and momentum of outgoing, unbound matter is computed. On such extraction spheres, the matter is assumed to be unbound according to the geodesic criterion <cit.>, i.e., if
u_t < -1 and u_r>0.
From now on, we will always refer to the unbound mass as the one that satisfies this criterion unless stated otherwise. Differently from previous BAM versions, the spheres with radius 450 km and 600 km also save the angular coordinates (θ,ϕ) of the matter flux together with u_t, ρ, T, and Y_e. This allows a more detailed analysis of the ejecta that includes its geometry and thermodynamical properties, e.g., the use of the Bernoulli criterion <cit.> for determining unbound mass, i.e.,
h u_t < -1 and u_r>0.
Since u_t [or hu_t for Bernoulli] is assumed to be conserved and at infinity u_t=-W [or hu_t=-W], it is also possible to compute the asymptotic velocity v_∞ of each fluid element as v_∞ = √(1 - 1/u_t^2) [or √(1-1/(hu_t)^2)].
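For a single fluid element, the two criteria and the asymptotic velocity reduce to a few lines; the following helper of ours mirrors the formulas above:

    import numpy as np

    def is_unbound(u_t, u_r, h=1.0, criterion="geodesic"):
        """Geodesic (h ignored) or Bernoulli unboundedness criterion."""
        energy = u_t if criterion == "geodesic" else h * u_t
        return (energy < -1.0) and (u_r > 0.0)

    def v_infinity(u_t, h=1.0, criterion="geodesic"):
        """Asymptotic velocity of an unbound fluid element."""
        energy = u_t if criterion == "geodesic" else h * u_t
        return np.sqrt(max(0.0, 1.0 - 1.0 / energy**2))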
§.§.§ Ejecta Mass
Mass ejection from BNS mergers within a dynamical timescale 𝒪(10 ms) has already been the subject of several detailed studies, e.g., <cit.>. There is a general consensus on dividing dynamical ejecta into two components: tidal tail and shocked ejecta. The former is composed of matter shed from the star's surface right before the merger due to tidal forces. Since this matter does not undergo any shock heating or weak interaction, it has a low Y_e, comparable to the one of neutron stars' outer layers, and a low entropy ≲ 10 k_B. Shocked ejecta, by contrast, is launched by the high pressure developed in the shock formed at the stars' surface during the plunge. It has significantly higher entropy and Y_e with respect to the tidal tails. It is produced later but with higher velocity, rapidly reaching the tidal tails and interacting with them <cit.>.
In Fig. <ref>, we show the mass of the unbound matter moving through the detection sphere at r≃450 km as a function of time for both geodesic and Bernoulli criteria. There is an important qualitative difference between simulations where neutrinos are neglected and the ones including neutrino transport. While in the former case, the ejecta mass saturates within 20 ms after the merger, in the latter one, we observe a non-negligible matter outflow continuing for the whole duration of the simulation, although with decreasing intensity. Such a phenomenon has been observed in other BNS simulations with M1 transport in <cit.>, where a very similar numerical implementation of M1 is used, and in much smaller amount also in <cit.>. We attribute it to the neutrinos emitted from the remnant. Through scattering/absorption processes in the upper parts of the disk, they can indeed accelerate material, making it gravitationally unbound. This hypothesis is consistent with what we see in Fig. <ref>, where we show the conserved mass density for bound and unbound matter on the xz-plane roughly 45 ms after the merger. We denote by D_u the conserved mass density of unbound matter, i.e., D_u=D where matter is unbound and D_u=0 otherwise. Most of the unbound matter is concentrated in the inner part of the upper edge of the disk, as we would expect from a neutrino wind mechanism powered by the remnant emission.
In particular, in <cit.>, equal mass simulations using the SFHo and DD2 EoSs are performed, and an early neutrino wind mechanism is also observed. However, such simulations only show results up to ≃ 10 ms after the merger.
For both EoSs, the ejecta mass is higher and more rapidly growing for asymmetric configurations. This is in agreement with the higher amount of tidal tails ejecta that asymmetric binaries are known to produce. In the same figure, the amount of ejecta according to the Bernoulli criterion is also shown.
Bernoulli-criterion ejecta corresponds to the geodesic one in the very initial phase of the matter outflow but predicts a significantly higher mass after the dynamical phase. More importantly, Bernoulli ejecta is not close to saturation at the end of the simulation time. These features are comparable with the results of other works, e.g.,<cit.>. This continuous matter outflow is attributed to the so-called spiral wave wind, i.e., the outward transport of angular momentum through the disk due to the shocks.
§.§.§ Electron fraction and velocity
Figure <ref> shows the average of Y_e and v_∞ (⟨ Y_e ⟩ and ⟨ v_∞⟩ respectively), of matter flowing through the detection sphere, located at r≃ 450 km, as a function of time. These quantities are defined as:
⟨ Y_e ⟩ (t) = ∫ dΩ F_D_u (t, Ω) Y_e(t,Ω)/∫ dΩ F_D_u (t, Ω),
⟨ v_∞⟩ (t) = ∫ dΩ F_D_u (t, Ω) v_∞(t,Ω)/∫ dΩ F_D_u (t, Ω),
where r is the radius of the extraction sphere and F_D_u = D_u (α v^r - β^r) is the local radial flux of unbound matter through the detection sphere.
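On the discrete (θ, φ) grid stored on the detection sphere, these averages become flux-weighted sums with solid-angle weights sin θ Δθ Δφ (the grid spacings cancel in the ratio for a uniform grid); a schematic implementation with our own array layout is:

    import numpy as np

    def flux_weighted_average(quantity, flux_Du, theta):
        """Flux-weighted average <q>(t) over the detection sphere at a fixed time.

        quantity, flux_Du : arrays of shape (n_theta, n_phi)
        theta             : polar angles, shape (n_theta,)
        """
        sin_t = np.sin(theta)[:, None]            # broadcast over phi
        num = np.sum(sin_t * flux_Du * quantity)
        den = np.sum(sin_t * flux_Du)
        return num / den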
All simulations show an overall monotonically increasing electron fraction since matter ejected later remains longer next to the remnant, having more time for protonizing due to neutrino absorption. In addition, most systems show a more or less pronounced plateau at about 5-15 ms after the merger with a visible dependence on the mass ratio. This is likely due to tidal tails containing material with an almost uniform and low ⟨ Y_e ⟩ of ≃ 0.1. Tidal tails are then reached and partially reprocessed by the faster and more proton-rich shocked ejecta, giving rise to the plateau we observe[Note that this plateau is absent for SFHo_q1_M1 due to the smaller amount of tidal ejecta for this equal mass, soft-EoS configuration.].
⟨ v_∞⟩ shows a sharp peak at early times followed by slow late-time ejecta. The fact that the initial peak does not show a bimodal shape is another indication that tidal and shocked ejecta have already merged at the extraction radius. Finally, we see that asymmetric binaries present a higher velocity peak of the early ejecta. This feature is consistent with Fig. <ref> and is responsible for the tails with v∼ 0.5c - 0.7c. The velocity histogram in the same figure shows no dependence on the EoS, with the mass ratio being the only feature determining the velocity profile.
§.§.§ Angular dependence
In the upper panel of Fig. <ref>, we show the normalized polar angle distribution of the ejecta defined as
m_ ej(θ) = r^2 ∫^T_0 ∫_0^2π F_D_u(t,θ,ϕ) dt dϕ,
with T being the final time of the simulation and r the radius of the detection sphere (in this case ≃ 450 km). According to this definition M_ ej = ∫_0^πsin(θ) m_ ej(θ) dθ. Then m_ ej(θ) is normalized by the total mass of the ejecta M_ ej^ tot.
The peak at θ≃ 0.5 due to the post-merger neutrino wind is immediately visible. At lower latitudes, the neutrino wind mechanism is indeed heavily suppressed by the disk, which is cold and optically thick and stops the neutrinos emitted by the remnant (see Fig. <ref>). For asymmetric binaries, there is also a peak at low latitudes visible, caused by the tidal tail ejecta.
The effect of such a component on the electron fraction is visible in the lower panels of the same figure. It is responsible for the lower ⟨ Y_e ⟩ of the equatorial region, and, as expected, it is more evident for asymmetric binaries.
In the same panel, we can also observe that when the neutrino wind is included in the ejecta, the ⟨ Y_e ⟩ of the polar regions increases significantly, reaching up to 0.5, while regions with polar angles above one radian are unchanged by the phenomenon. This is due to the intense neutrino irradiation that this matter received, which increased its electron fraction. The fact that dynamical ejecta from equal mass binaries has an overall higher ⟨ Y_e ⟩ can be explained by the higher amount of shocked ejecta that such configurations are known to produce. Shocked ejecta is indeed supposed to have a higher entropy and electron fraction with respect to tidal tail ejecta and is more isotropically distributed. The last characteristic can explain why symmetric binaries give a higher ⟨ Y_e ⟩ than their respective asymmetric counterparts at lower latitudes.
In Fig. <ref>, a histogram of the ejecta's ⟨ Y_e ⟩ is shown. Dynamical ejecta of equal mass binaries produces a fairly uniform distribution of mass with a drop for ⟨ Y_e ⟩≲ 0.1. In the unequal mass scenario, the situation changes. Here, we have indeed a clear peak at ⟨ Y_e ⟩≃ 0.1 produced by tidal tails. In both cases, the inclusion of neutrino wind leads to an increase of ejecta with 0.3 ≲⟨ Y_e ⟩≲ 0.6.
Another important feature of the ejecta that has been investigated in the literature is the correlation between Y_e and entropy (s/k_B). In Fig. <ref>, we show a 2D histogram of the total ejecta in these two variables. Most of the ejecta mass lies within a main sequence with a positive monotonic correlation between entropy and Y_e. This is a consequence of the fact that fluid with a higher entropy is characterized by a more proton-rich thermodynamical equilibrium configuration. The exception to this rule is matter with Y_e ≲ 0.3 and entropy in a very wide range going up to s ∼ 100 k_B. This matter is present in every simulation and is believed to be a consequence of the interaction between tidal tails and shocked ejecta <cit.>. When the latter hits the former, it indeed generates a violent shock that increases the fluid's entropy. Since this happens at low density, when the neutrino-matter interaction timescale is longer than the dynamical one, it does not leave time for the fluid to settle to an equilibrium configuration with higher Y_e.
The average ejecta properties of our simulations are summarized in Table <ref>. Here we see, as expected, a strong dependence of ⟨ Y_e ⟩ on the mass ratio, with asymmetric binaries producing a more neutron-rich and overall more massive outcome. An imprint of the tidal deformability can also be observed, with the more deformable EoS (DD2) producing less massive but more neutron-rich ejecta. For SFHo simulations, the dependence of the ejecta mass and ⟨ Y_e ⟩ on the mass ratio is consistent through all resolutions. We do not observe any significant dependence of the average asymptotic velocity on the mass ratio or tidal deformability.
§.§ Neutrino luminosity
We determine the neutrino luminosity as the total flux of neutrino energy Ẽ through the same series of spheres used for the analysis of the ejecta, i.e., L_ν = r^2 ∫ dΩ (αF̃^r - Ẽβ^r). Similarly to the ejecta detection, the two spheres located at 450 km and 600 km also save the angular direction of the flux together with the corresponding values of J and n, enabling a more detailed study that includes the geometry of the neutrino luminosity and its average energy.
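The surface integral for the luminosity can be discretized on the same stored (θ, φ) grid; a minimal sketch, assuming uniform angular spacing and our own array layout, reads:

    import numpy as np

    def neutrino_luminosity(E, F_r, alpha, beta_r, r, theta, dphi):
        """L_nu = r^2 * surface integral of (alpha F^r - E beta^r) dOmega.

        E, F_r, alpha, beta_r : arrays of shape (n_theta, n_phi) on the sphere
        """
        dtheta = theta[1] - theta[0]
        dOmega = np.sin(theta)[:, None] * dtheta * dphi
        return r**2 * np.sum((alpha * F_r - E * beta_r) * dOmega)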
Looking at the total neutrino luminosity in the left panel of Fig. <ref>, we find that the ν̄_e emission is brighter in the early post-merger with respect to the other species. Its peak luminosity of ∼ 10^53 erg/s is consistent with results obtained by similar simulations <cit.>. The initial ν̄_e burst is a consequence of the fast protonization that the material undergoes right after the merger, when the beta equilibrium is broken and the system evolves toward a new meta-stable configuration characterized by a higher entropy and Y_e. Approximately 10 ms after the merger, the ν̄_e luminosity starts decreasing and approaches the luminosity of ν_e a few tens of ms later. This is a signal that the system is approaching the weak equilibrium configuration at late simulation times. Both ν_e and ν_x show similar behavior, with a peak at ∼ 10 ms and roughly half of the intensity of ν̄_e. In the early post-merger, we have, as reported in the literature, L_ν̄_e > L_ν_x > L_ν_e. The brightness oscillations that appear in this phase for every neutrino species are due to the remnant oscillations, which cause shocks propagating outward and perturbing the surface of the neutrinosphere. The last inequality is inverted after the luminosity peak: L_ν_x drops faster because of the remnant's cooling. ν_x interactions indeed include only thermal processes that are independent of Y_e. This makes the heavy-neutrino emission more sensitive to temperature with respect to the other species. The right panel of Fig. <ref> shows the average energy of neutrinos flowing through the detection sphere at r = 450 km as a function of time. As reported in the literature, ν_x have significantly higher energies with respect to the other two species in the early post-merger. This is an expected feature, since heavy neutrinos interact less with matter and decouple at higher densities, where matter is usually also hotter. All the features described above have already been explored in more detail in, e.g., <cit.>.
Finally, in Fig. <ref>, we show the total luminosity for all four configurations at resolution R2. The first observation is that SFHo systems emit significantly more neutrinos than their DD2 counterparts due to the temperature difference visible in Fig. <ref>, with a difference of almost 50% at the brightness peak. Such an important difference could explain, or at least contribute to, the significant difference in the neutrino wind emission between the two EoSs.
§.§ Remnant properties
We begin the analysis of the remnant by looking at Fig. <ref>, showing the evolution of the density and temperature maxima. In the left panel, we can observe that the maximum density is not significantly affected by neutrino radiation, with differences rarely exceeding 5% during the post-merger oscillations and settling to smaller values after ≃ 15 ms. Considering the temperature evolution, we find that the maximum temperature for the simulations using SFHo is indeed lower for systems using M1 compared to simulations without evolving the neutrinos. In contrast, the setups employing the DD2 EoS show an almost unchanged maximum temperature.
The difference between M1 and neutrinoless simulations is more pronounced for SFHo EoS because of the higher amount of neutrino energy involved. We explain this result as an indirect effect of neutrino cooling affecting the remnant in the early post-merger.
Since all the binary simulations performed in this work produce a stable massive neutron star (MNS) surrounded by a disk, we adopt the usual convention of defining the disk of a MNS+disk system as the region where matter is gravitationally bound and ρ<10^13 g/cm^3, while the MNS is defined by ρ>10^13 g/cm^3; <cit.>. This allows us to provide an estimate of the mass of the disk and the MNS.
In Fig. <ref>, we show the masses of the disk and the MNS as a function of post-merger time. After an initial phase in which the disk grows fast, acquiring mass from the remnant, the disk mass stabilizes at ≃ 20 ms after the merger. Such disk accretion phenomena are usually sustained by viscous angular momentum transport and by shocks generated by the m=1 bar-mode oscillations of the central object <cit.>, and counteracted by the gravitational pull of the central object. The effect of neutrino transport on the disk's mass for SFHo simulations is negligible (see Table <ref>), and the results look robust also at lower resolutions. This is an expected result since the disk formation takes place at times when neutrino cooling is not the dominant source of energy loss.
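With the density threshold above, the disk and MNS masses reduce to masked sums of the conserved rest-mass density over the Cartesian grid; an illustrative version (the boundedness mask and the uniform cell volume are assumed to be available) is:

    import numpy as np

    RHO_THRESHOLD = 1.0e13  # g/cm^3, separates MNS and disk

    def disk_and_mns_mass(D, rho, is_bound, cell_volume):
        """Baryonic masses of disk and MNS from the conserved density D."""
        disk_mask = is_bound & (rho < RHO_THRESHOLD)
        mns_mask = rho >= RHO_THRESHOLD
        return np.sum(D[disk_mask]) * cell_volume, np.sum(D[mns_mask]) * cell_volume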
§.§ Nucleosynthesis
The nucleosynthesis calculations are performed in postprocessing following the same approach as in <cit.>, employing the results from the nuclear reaction network of <cit.>.
In Fig. <ref>, we show the abundances as a function of the mass number A of the different isotopes synthesized by the r-process in the ejecta 32 years after the merger.
To compare the results for different simulations, we shift the abundances from all models such that they are always the same as the solar one for A=195. The solar residual r-process abundances are taken from <cit.> (for a review of the solar system abundances, see <cit.>).
The normalization to A_ sol=195 is chosen as nucleosynthesis in neutron-rich ejecta from BNS mergers was shown to robustly reproduce the third r-process peak <cit.>. We also consider normalization to A_ sol=135 and A_ sol=152 commonly considered in literature <cit.>. The former leads to only a minor qualitative change while the latter leads to the overall overestimation of the abundances at both, second and third r-process peaks.
As the mass-averaged electron fraction of the dynamical ejecta from most models (except the SFHo q=1 model) is small (see Fig. <ref>), the r-process nucleosynthesis results in the underproduction of the lighter, 1st- and 2nd-peak elements. Additionally, the elements around the rare-earth peak are underproduced. This can also be attributed to the systematic uncertainties in the simplified method we employ to compute nucleosynthesis yields. The simulation with the SFHo EOS and mass ratio q=1 displays a flatter electron fraction distribution in its ejecta, and the relative abundances at the 2nd peak are consistent with solar.
The Bernoulli ejecta displays on average higher electron fraction, as it undergoes strong neutrino irradiation, being ejected on a longer timescale. Higher Y_e leads to a larger amount of lighter elements produced. However, the overall underproduction of 1st r-process elements for all simulations but the SFHo q=1 model remains.
§.§ Gravitational waves
While neutrinos are supposed not to play any role during the inspiral, they could, in principle, be relevant in the post-merger dynamics, e.g., through the additional cooling channel of the formed remnant, which might change the compactness of the remnant and, therefore, the post-merger GW frequency and the time until black-hole formation. We investigate this possibility in the following subsection by comparing the GW signal produced by each simulation and its `neutrinoless' counterpart.
We compute the GW strain h on a series of concentric spheres using the Ψ_4 Newman-Penrose scalar <cit.>, following the method of <cit.>.
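One common way to carry out the required double time integration (mentioned here only as an illustration, not necessarily the exact procedure of the cited reference) is fixed-frequency integration, where the division by ω^2 is performed in the frequency domain and frequencies below a user-chosen cutoff f_0 are saturated to suppress low-frequency drifts:

    import numpy as np

    def strain_from_psi4(psi4, dt, f0):
        """Fixed-frequency integration of a Psi4 mode time series into the strain h.

        Assumes the convention d^2 h / dt^2 = Psi4 for the complex combination
        h = h_plus - i h_cross.
        """
        n = len(psi4)
        freqs = np.fft.fftfreq(n, d=dt)
        psi4_f = np.fft.fft(psi4)
        omega2 = (2.0 * np.pi * np.maximum(np.abs(freqs), f0)) ** 2
        return np.fft.ifft(-psi4_f / omega2)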
In Fig. <ref>, we show the GW strain h and its frequency for the dominant (2,2) mode of each simulation.
Overall, one can observe only minimal changes in the GW amplitude and frequency caused by neutrino cooling[We note that the spike at 6 ms for DD2_q12 is due to numerical inaccuracies when computing the instantaneous GW frequency for a GW signal with almost vanishing amplitude.].
Given the large challenge in measuring the post-merger GW signal in future detections <cit.> and the presumably large uncertainties regarding the extracted post-merger frequencies, we expect that the differences visible here are not measurable, potentially not even with the next generation of detectors.
However, a more systematic study involving Bayesian parameter estimation is needed to verify this hypothesis.
§.§ Lightcurves
To compute the kilonova signal associated with the extracted ejecta profiles from the performed simulations, we use the 3D Monte Carlo radiative transfer code <cit.>. The code allows us to use the 3D simulation output of the unbound rest-mass density D_u and the electron fraction Y_e of the ejecta as input. The required input data represents a snapshot at a reference time t_0 and is subsequently evolved following a homologous expansion, i.e., the velocity v^i of each fluid cell remains constant.
In Appendix. <ref>, we outline the exact procedure employed to obtain input data.
For the generation of photon packets (assigned energy, frequency, and direction) at each time step, the code employs the heating rate libraries from <cit.> and computes the thermalization efficiencies as in <cit.>. The photon packets are then propagated through the ejecta, taking into account interactions with matter via electron scattering and bound-bound absorption. The code uses wavelength- and time-dependent opacities from <cit.> as a function of the local densities, temperatures, and electron fraction within the ejecta. We perform the radiative transfer simulations with a total of N_ ph = 10^6 photon packets.
In contrast to previous works in which we used <cit.>, we are now able to use the electron fraction of the material directly and do not have to approximate it through the computation of the fluid's entropy. This is an important improvement since this quantity is fundamental in determining the kilonova luminosity and spectrum. Matter with low Y_e (like tidal tails) can indeed synthesize Lanthanides and Actinides, which have high absorption opacities in the blue (ultraviolet-optical) spectrum, making the EM signal redder. On the contrary, high Y_e material (like shocked ejecta and winds) synthesizes lighter elements that have a smaller opacity and are more transparent to high-frequency radiation, i.e., it will produce a bluer kilonova.
In Fig. <ref>, we show the bolometric luminosity for each simulation for five different observation angles: For the pole with Θ = 0^∘, and in the orbital plane with Θ = 90^∘ for Φ = 0^∘, Φ = 90^∘, Φ = 180^∘, and Φ = 270^∘.
In general, we find that the luminosity at the pole is higher than in the equatorial plane, because of the smaller opacities and the higher amount of mass. At the same time, light curves obtained for the four angles in the orbital plane are rather similar in the q=1 simulations. For the systems with unequal mass, the differences are more prominent, but they tend to decrease in time within a timescale of a few days.
This can be explained by the fact that the ejecta input in for these systems is less axisymmetric than for the systems with equal masses (see ejecta maps in Appendix <ref>).
Furthermore, we show in Fig. <ref> the light curves for the four systems in different frequency bands, ranging from ultraviolet to optical and infrared. We focus on one Φ-angle only, i.e., Φ=0^∘. Still, we want to note here that the results for other Φ angles for the systems with unequal masses differ up to about ∼ 1 mag in the first two days after the merger.
We observe that the magnitude difference between polar angles is more pronounced in the ultraviolet and optical bands than in the infrared bands, particularly, in the J- and K-bands.
The light curves for the systems with the SFHo EoS are on average brighter than those of systems employing the DD2 EoS (at the same mass ratio) due to the larger ejecta mass. Even more importantly, we observe that the ratio between the blue and the red component of the kilonova is strongly affected by both the EoS and the mass ratio, with the more deformable EoS (DD2) and asymmetric configurations giving redder kilonovae due to the larger amount of tidal tails with respect to shocked ejecta.
Moreover, we find that in the orbital plane (Θ=90^∘) the infrared bands are generally more dominant. This is due to the neutron-rich matter of tidal tails located at low latitude, which absorbs most of the radiation at high frequencies. In contrast, for an observer at the pole (Θ = 0^∘), the ultraviolet and optical bands are brighter in the first two days. However, these diminish rapidly, and at later times the red and infrared bands dominate the kilonova signal here as well. Accordingly, a blue kilonova will be observed in the first days, shifting to the red spectra in the following days. These observations indicate again the need for quick follow-up observations of GW signals with upcoming UV-satellites, e.g., <cit.>.
§ CONCLUSIONS
In this article, we implemented a gray M1 multipolar radiation transport scheme following <cit.> in the BAM code. The main features of the implementation are summarized in Tab. <ref>.
We performed a series of standard tests: transport along lightlike geodesics in vacuum, absorption by static and moving fluid, advection by a moving fluid in the scattering-dominated regime, and emission by a thick uniform sphere.
The main difficulty was to properly account for the collisional sources implicitly and to suppress artificial dissipation in the trapped regime in order to capture the correct diffusion rate. We show that our implementation is able to correctly handle all these regimes employing linearized implicit sources of <cit.> and the flux reconstruction of <cit.>.
In addition, we also performed simulations of a single, isolated, hot neutron star. In this case, both the spacetime and the fluid are dynamically evolved. Opacities are motivated by nuclear physics theory and computed using the library. In this last test, we show that neutrinos correctly thermalize inside the star, where they form a gas in thermal equilibrium with the nuclear matter, and decouple at the star's surface at different temperatures according to their species (with the hierarchy T^ν_e_eff < T^ν̄_e_eff < T^ν_x_eff).
Moreover, we showed neutrinos correctly start developing a non-zero average momentum at the neutrinosphere τ=2/3. In the last part of the article, we simulated four different low-mass BNS configurations using two different EoS and two mass ratios.
Ejecta from our simulations had the following properties: masses of the order of ∼ 10^-3 M_⊙ with ⟨ V_∞⟩ = 0.1c - 0.2c and ⟨ Y_e ⟩ = 0.2-0.4, the latter with a strong dependence on the mass ratio. In general, more asymmetric systems and systems with a stiffer EoS (DD2) produce lower ⟨ Y_e ⟩ due to the larger mass of tidal tail ejecta, with the lowest ⟨ Y_e ⟩ given by the asymmetric DD2 configuration.
We also illustrated that, on average, more asymmetric binaries produce more ejecta with respect to their symmetric counterparts for both EoSs. Softer EoS (SFHo) eject more than stiffer ones due to the more violent impact of the merger.
Overall, the mechanisms we identified in our simulations are consistent with those reported in the literature for the dynamical ejecta.
Moreover, similar to <cit.>, we found a neutrino wind ejecta component in the polar region during the whole duration of the simulation, albeit with decreasing matter flux. Such a component is significantly more important for softer EOSs, in our case SFHo, due to the higher outflow of neutrino energy. It can contribute up to 50% of the total ejecta mass and significantly increase ⟨ Y_e ⟩. This component could get even more dominant if the simulation is run for longer.
All our simulations produce a MNS remnant surrounded by a massive, neutrino-thick disk with baryonic mass M_ disk∼ 10^-1 M_⊙. The mass of the disk increases with the mass ratio for SFHo EoS while having the opposite behavior for the stiffer DD2. The results summarized so far are valid for all resolutions.
Finally, we used our new ejecta analysis tools to employ the NR-extracted ejecta properties as inputs for the nucleosynthesis and radiative-transfer codes, which we used to compute nucleosynthesis yields and kilonova lightcurves, respectively. The use of Y_e obtained directly from the NR simulations produces much more realistic results with respect to the previous approximation based on the fluid's entropy.
We plan to use the implementation described in this article as the standard for our future BNS simulations oriented to the study of ejecta properties and post-merger dynamics and of the associated kilonova light curves and nucleosynthesis yields.
§ ACKNOWLEDGEMENTS
We thank H. Andresen, S. Bernuzzi, B. Brügmann, F. Foucart, E. O'Connor, M. Shibata, and W. Tichy for helpful discussions.
FS and TD acknowledge funding from the EU Horizon under ERC Starting Grant, no. SMArt-101076369. TD and AN acknowledge support from the Deutsche Forschungsgemeinschaft, DFG, project number DI 2553/7.
TD and VN acknowledge support through the Max Planck Society funding the Max Planck Fellow group `Multi-messenger Astrophysics of Compact Binaries'. HG acknowledges funding by FAPESP grant number 2019/26287-0. MU acknowledges support through the UP Reconnect Program from the Alumni Researcher Program of the University of Potsdam.
The simulations were performed on the national supercomputer HPE Apollo Hawk at the High Performance Computing (HPC) Center Stuttgart (HLRS) under the grant number GWanalysis/44189, on the GCS Supercomputer SuperMUC_NG at the Leibniz Supercomputing Centre (LRZ) [project pn29ba], and on the HPC systems Lise/Emmy of the North German Supercomputing Alliance (HLRN) [project bbp00049].
§ LINEARIZED IMPLICIT TIMESTEP SOLUTION
The projections of tensors A^α and B^α_i of Eq. (<ref>) perpendicular to the spacelike hypersurface Σ_t can be written as:
n^αA_α = k_a A_(J) - (k_a+k_s) A_(H),
with
A_(J) = W[W^2 + a W^2 (v · f)^2 + b(W^2-1)/(2W^2+1)(3-2W^2)],
A_(H) = W[-1 + W^2 + a W^2(v · f)^2 + b(W^2-1)/(2W^2+1)(3-2W^2)],
and
n^αB^i_α = k_a B^i_(J) - (k_a+k_s) B^i_(H),
where we define
B^i_(J) = W[-2W + 4W^2 b(W^2-1)/(2W^2+1)]v^i,
B^i_(H) = W [1 - 2W^2 + 4W^2 b(W^2-1)/(2W^2+1)]v^i.
While for the parallel component, we have
γ^α_i A_α = k_a A_i,(J) - (k_a+k_s) A_i,(H),
with
A_i,(J) = -W[W^2 + aW^2(v · f)^2 + b(W^2-1)/(2W^2+1)(3-2W^2)]v_i,
A_i,(H) = -[W^3 + aW^3(v · f)^2 + b W W^2/(2W^2+1)(3-2W^2)]v_i - a W(v · f) f_i,
and
γ^α_i B_α^j = k_a B^j_i,(J) - (k_a + k_s) B^j_i,(H),
with
B^j_i,(J) = W[2W^2 - 4W^2 b(W^2-1)/(2W^2+1)]v_iv^j,
B^j_i,(H) = [2W^3 - 4W^2 bW(W^2-1)/(2W^2+1) - bW(2W^2-1)/(2W^2+1)]v_iv^j + (1-bv^2)Wδ_i^j,
where a = (3χ-1)/2 and b = 1- a are the thin and thick closure coefficients respectively and f^i=F^i/|F|.
Using these projections we can write Ẽ and F̃_̃ĩ at time n+1 as:
F̃_i^n+1 = (M^-1)_i^j S_j,
Ẽ^n+1 = 1/(1 + αΔ t n^α A_α) [ Ẽ^n + Δ t (-∂_i ℱ^i_E + G_E) + αΔ t (η√(γ) W - n^αB^i_αF̃_i^n+1) ],
with
M_i^j = δ_i^j - αΔ t γ_i^α B_α^j + α^2Δ t^2/(1+αΔ t n^αA_α) A_αγ^α_i n^βB_β^j,
and
S_i = F̃_i^n + Δ t ( -∂_j ℱ^j_F_i + G_F_i) + Δ t [ α√(γ)η W v_i + α A_αγ_i^α/(1+αΔ t n^αA_α) ( Ẽ^n + Δ t (-∂_i ℱ^i_E + G_E) + αΔ t √(γ)Wη ) ],
where G_E and G_F_i represent the gravitational sources of Eq. (<ref>) and Eq. (<ref>), respectively, computed using Ẽ^n and F̃_i^n.
Since M_i^j and S_i only depend on variables at time n, F̃_i^n+1 must be first computed and then plugged into the expression of Ẽ^n+1 to complete the solution.
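Given the four projections n^αA_α, n^αB^i_α, γ^α_iA_α, and γ^α_iB^j_α evaluated from the expressions above, the assembly and solve can be transcribed directly; the sketch below is a hedged numpy version with all explicit contributions (flux divergence plus gravitational sources) bundled into rhsE and rhsF:

    import numpy as np

    def implicit_radiation_update(E_n, F_n, rhsE, rhsF, nA, nB, gA, gB,
                                  alpha, dt, sqrt_gamma, eta, W, v):
        """Linearized implicit update for (E, F_i).

        nA : scalar n^alpha A_alpha         nB : (3,) n^alpha B^i_alpha
        gA : (3,)  gamma^alpha_i A_alpha    gB : (3,3) gamma^alpha_i B^j_alpha
        v  : covariant fluid three-velocity v_i
        """
        denom = 1.0 + alpha * dt * nA
        # matrix M_i^j and source vector S_i as given above
        M = (np.eye(3) - alpha * dt * gB
             + (alpha**2 * dt**2 / denom) * np.outer(gA, nB))
        S = (F_n + dt * rhsF
             + dt * (alpha * sqrt_gamma * eta * W * v
                     + (alpha * gA / denom)
                     * (E_n + dt * rhsE + alpha * dt * sqrt_gamma * W * eta)))
        F_np1 = np.linalg.solve(M, S)
        E_np1 = (E_n + dt * rhsE
                 + alpha * dt * (eta * sqrt_gamma * W - nB @ F_np1)) / denom
        return E_np1, F_np1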
§ HAMILTONIAN CONSTRAINT VIOLATION
In Fig. <ref>, we show the L_2 norm of the Hamiltonian constraint as a function of the time. The latter follows the same qualitative evolution as in Ref. <cit.>. When initial data are interpolated from sgrid, the Hamiltonian constraint is of the order of 10^-8. The evolution with the Z4c formulation reduces this value order 10^-10 due to its constraint-damped properties. At merger time, the value increases, due to the formation of shocks in the hydrodynamics variables, reaching a peak shortly after. After the peak, the value decreases and stabilizes between 10^-9 and 10^-10. Simulations including neutrino transport systematically show a bigger violation of the Hamiltonian constraint after the merger. One of the reasons might be that, as common in the literature, the neutrino's stress-energy tensor is not included in the matter term of the spacetime evolution equations. This leads to a mathematical violation of General Relativity constraints proportional to neutrino's stress-energy tensor. However, we can observe that the value of the Hamiltonian constraint is always lower than its initial value.
§ EJECTA DATA FOR
Given the limited length of our simulations and the issue of covering both early-time and postmerger ejecta with individual snapshots, we employ 3D snapshots together with information from the detection sphere at r ≃ 450 km. The detailed procedure is as follows:
* We find the latest 3D snapshot in which all the ejecta is contained within the simulation domain. We mark the time of this snapshot as t_ cut. From it, we cut out the matter still contained within the detection sphere. This component includes most of the ejecta mass, including the tidal tails and the shocked component.
* We rescale the ejecta from the previous step assuming homologous expansion, in the same way the radiative transfer code does, i.e., assuming every fluid element moves with a constant velocity v^i = x^i/(t-t_ merger) (see the sketch after this list). This is equivalent to defining a scale factor α(t) = (t - t_ merger)/ (t_ cut - t_ merger) and rescaling coordinates and mass density as x^i →α(T) x^i, ρ→ρ / α^3(T), where T is the final time of the simulation. After this step, the radius of the inner cut (initially corresponding to the detection sphere) has moved outwards, leaving a gap between the ejecta and the detection sphere that we are going to fill using data from the sphere itself.
* From the sphere we select data with t ∈ [t_ cut, T]. Assuming homologous expansion like for the 3D data, we can map the time into a radius by R(t) = r (T-t_ merger)/(t-t_ merger)=r α(T)/α(t) where r is the fixed coordinate radius of the detection sphere. At the same time, we rescale the mass density by ρ(t,θ,ϕ) →ρ(t,θ,ϕ) (α(t)/α(T))^3. After this procedure, we will have the ρ(R,θ,ϕ), and we interpolate it into the Cartesian grid, where the ejecta from 3D data is defined. This way, we fill the gap between the ejecta and the detection sphere left by the previous rescaling step.
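The rescalings in steps 2 and 3 reduce to a few lines under the homologous assumption; the helper below (variable names ours) is a sketch of the coordinate, density, and time-to-radius mappings:

    import numpy as np

    def homologous_rescale(coords, rho, t_cut, t_final, t_merger):
        """Rescale a 3D ejecta snapshot from t_cut to t_final assuming v^i = x^i/(t - t_merger)."""
        a = (t_final - t_merger) / (t_cut - t_merger)   # scale factor alpha(T)
        return coords * a, rho / a**3

    def sphere_time_to_radius(t, r_sphere, t_final, t_merger):
        """Map the time of a detection-sphere sample to its radius at t_final."""
        return r_sphere * (t_final - t_merger) / (t - t_merger)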
It is important to point out that the ejecta at the detection sphere is not fully homologous, and assuming a constant velocity with v^i = x^i/(t-t_ merger) might introduce biases. This is due to the different velocities of components ejected at different times, with shock ejecta that is faster than tidal tails, although it is ejected later. Although deviations from homologous expansion are shown to be present even at 𝒪(100 ms) after the merger <cit.>, it has been shown that their influence for the light curve computation using is negligible, i.e., within the range of Monte Carlo noise, if the ejecta is extracted at t > 80 ms after the merger <cit.>. (In <cit.>, only the dynamical ejecta was included, and GRHD simulations were performed without the evolution of the electron fraction. The inclusion of other ejecta components or neutrino radiation probably leads to a delay in reaching the homologous phase.)
Because of this reason, we let the ejecta evolve as long as possible out of the detection sphere before assuming homologous expansion and starting the procedure described above. In order to alleviate the issue, an even longer evolution would be required to produce accurate lightcurves.
The resulting input data for the radiative transfer simulations are shown in Fig. <ref> and Fig. <ref>.
In Fig. <ref>, we present maps in the v_y-v_z plane of the matter density ρ, electron fraction Y_e, and temperature T used in and computed at 1 day after the merger for all four BNS systems using the M1 scheme. In addition, we show in Fig. <ref> the distribution of density and electron fraction in the v_x-v_y plane to show how the configurations deviate from axisymmetry.
|
http://arxiv.org/abs/2307.06341v1 | 20230712105829 | Assessment of the suitability of degradation models for the planning of CCTV inspections of sewer pipes | [
"Fidae El Morer",
"Stefan Wittek",
"Andreas Rausch"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
This is a preprint that has been submitted to the Urban Water Journal. This article is undergoing peer-review and is not accepted for publication. Feel free to contact the corresponding author and visit the repository containing the data and the results of this project.
Assessment of the suitability of degradation models for the planning of CCTV inspections of sewer pipes
========================================================================
The degradation of sewer pipes poses significant economical, environmental and health concerns. The maintenance of such assets requires structured plans to perform inspections, which are more efficient when structural and environmental features are considered along with the results of previous inspection reports. The development of such plans requires degradation models that can be based on statistical and machine learning methods. This work proposes a methodology to assess their suitability to plan inspections considering three dimensions: accuracy metrics, ability to produce long-term degradation curves and explainability. Results suggest that although ensemble models yield the highest accuracy, they are unable to infer the long-term degradation of the pipes, whereas the Logistic Regression offers a slightly less accurate model that is able to produce consistent degradation curves with a high explainability. A use case is presented to demonstrate this methodology and the efficiency of model-based planning compared to the current inspection plan.
machine learning; sewer deterioration modeling; statistical analysis; simulation;
§ INTRODUCTION
§.§ Problem statement
Physical assets in wastewater systems suffer from degradation over time, which translates into a constant loss from a financial and operational perspective. This deterioration can lead to damages that have health and environmental impacts due to exfiltrations that degrade the groundwater quality <cit.>, sewer blockages that can lead to overflows <cit.>, as well as interactions with other infrastructures such as roads <cit.>, among others.
A key component to prevent such impacts is an efficient operation and maintenance of sewer networks, which can be achieved with the definition of appropriate inspection strategies. Two main approaches to maintenance can be considered: reactive and proactive. Reactive techniques are based on intervening the assets only when they stop working, whereas proactive ones use preventive and predictive tools that anticipate the occurrence of failures <cit.>. <cit.> suggests that proactive techniques have greater “up-front" costs for the inspection, given the need of developing planning strategies to guide the decision making process, while greater “follow-up" costs are derived from reactive strategies because failures might be already present in the assets when inspected. Therefore, a correct application and performance of proactive maintenance strategies can be more cost-efficient than the traditional reactive approach <cit.>.
The development of proactive maintenance strategies can also be seen as a planning system to prioritise what assets require to be inspected. Several authors have worked with different methodologies to establish prioritisation strategies for sewer asset maintenance. Many sewer network operators develop proactive planning strategies based on defining a fixed interval of years between subsequent inspections. In the case of Germany, the recommendations for the definition of inspection plans are set by the DIN EN 13508-1 <cit.>, but they are further developed by the states. In the case of the state of Nordrhein-Westfalen (Germany), the norm recommends to carry out the first inspection when the pipe is installed, another one after 10 years, and the rest of the inspections are performed every 15 years <cit.>.
This interval-based proactive or static planning can be restrictive, given that robust or resilient pipes are inspected when it is not strictly required, while critical or frail pipes are inspected only after a failure has already occurred. Furthermore, static planning does not take into consideration specific information about the structural or environmental features of the pipe, and it leaves out valuable information that arises from CCTV inspections. Therefore, a dynamic planning or prioritisation system should be defined to take into account different factors that could cause the pipe to fail, as well as the information obtained from previous inspections, which shall be introduced as Dynamic Maintenance (DM). DM can be defined as a set of methods that use a priori information, such as the asset's age or the result of previous inspections, to update the maintenance plan <cit.>. To develop a DM plan for physical assets, a deterioration model is required.
§.§ Objectives
Many statistical and machine learning-based degradation models have been presented over time, but most of them set their focus only on the accuracy metrics, without evaluating the ability of their models to produce long-term predictions of the deterioration of the assets. In order to develop DM plans, a long-term aging behaviour should be inferred from the results of the degradation model. Few examples can be found of degradation models where this property is assessed, but the results yield unrealistic behaviours where failure is never reached by the pipes <cit.>, or the long-term simulations do not show a monotonic deterioration of the assets <cit.>, which is an inherent property of civil infrastructure systems where no maintenance is considered <cit.>.
Additionally, the interpretability of the models should be taken into consideration. Although significant efforts have been made in recent years to elaborate methodologies that would allow machine learning models to be interpretable and go beyond the black-box paradigm <cit.>, the rationale behind the predictions cannot be understood and the internal logic is not transparent to the user or analyst <cit.>. Given the lack of interpretability of black-box models, authors such as <cit.> argue in favor of using inherently interpretable models in high-stakes decisions, so that the analyst or the user can have a transparent tool to decide whether to trust the predictions of the model or not.
Therefore, this work aims to provide a framework for the development of sewer deterioration models that goes beyond fitness or accuracy metrics. Two additional aspects should be considered to select a model for the planning of inspections, which include the generation of consistent long-term simulations that represent the probability of failure of the pipes along time, as well as its ability to produce interpretable and transparent results.
The main requirements that will be considered for the development of a satisfactory model are that a) it should accurately predict the condition of sewer pipes given a set of structural and environmental factors, b) the result of simulating single pipes over time must show a monotonic behavior, given that the condition of the pipes cannot improve if no maintenance is considered, and c) the model should allow a certain level of interpretability in order to be able to explain the predictions conditioned on the inputs of the model. An additional contribution of this research paper is the inclusion of the length of the upstream network for every sewer pipe, which can be considered as a surrogate variable that accounts for the volume of water that flows through the pipes.
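As an illustration of requirement b), a degradation curve for a single pipe can be produced by sweeping its age while keeping the remaining attributes fixed, and monotonicity can then be checked numerically; the sketch below uses scikit-learn's LogisticRegression and entirely hypothetical, numerically encoded feature names:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def degradation_curve(model, pipe_features, ages):
        """Failure probability of one pipe as a function of age (columns must match training)."""
        rows = pd.DataFrame([{**pipe_features, "age": a} for a in ages])
        return model.predict_proba(rows)[:, 1]

    def is_monotonic(curve, tol=1e-9):
        """Requirement b): the failure probability must not decrease with age."""
        return bool(np.all(np.diff(curve) >= -tol))

    # Hypothetical usage, given a numeric training set X (same columns) and binary labels y:
    # model = LogisticRegression(max_iter=1000).fit(X, y)
    # curve = degradation_curve(model, {"material": 1, "diameter": 300,
    #                                   "upstream_length": 1200.0}, np.arange(0, 101))
    # print(is_monotonic(curve))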
The resulting model should be a useful tool for decision-makers and asset managers to schedule new CCTV inspections based on physical and environmental attributes of the sewer pipes and the result of previous inspection reports. Based on the probability of failure of each pipe, the decision-makers can elaborate sewer inspection plans with different levels of risk. To demonstrate the proposed methodology, a case study of a German urban area in the state of Nordrhein-Westfalen is presented.
The rest of this work is structured as follows: Section 2 is a literature review that covers the main contributions of previous works to the development of degradation models, focusing on statistical and machine learning classification models. In Section 3 we present the data for the use case and the methodology used to define the most suitable model. Section 4 covers the results of the comparison, as well as an example of the possible use of the resulting model. Section 5 presents the conclusions of this work.
§ RELATED WORK
Several authors have suggested consequence-based scoring systems that evaluate the effect of asset failures on the surrounding environment or on the operation of the sewer network itself. The higher the score given by such a rating system, the greater the need to inspect and maintain a specific pipe. These methods use many factors, such as the structural and physical characteristics of the pipes, the proximity of the assets to other critical infrastructure, or their importance within the network, and every variable is assigned a weight that reflects its assumed relevance to the degradation process. As stated by their proponents, the main limitation of this approach is that it relies heavily on the subjectivity introduced by the developers of the model. These works include the ones presented by <cit.>, <cit.>, <cit.> or <cit.>.
Predictive models can overcome this drawback, since no predefined weights or influences need to be built into the system. Just like the aforementioned scoring systems, predictive models can include a myriad of factors that may cause the degradation of the assets. These models map explanatory variables, such as the physical attributes or the environmental information of the pipes, to a scoring system that defines the condition of the pipe.
§.§ Logistic Regression
Logistic Regression (LR) models have been widely used in the literature to tackle the sewer pipe degradation problem. The works of authors such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.> or <cit.> concluded that LR models are outperformed by more sophisticated machine learning methodologies, although the advantage of this type of statistical model is its transparency and the explainability provided by its coefficients. To look into the estimation of the coefficients of the LR model, <cit.> used a Bayesian approach and concluded that sewer age and length were the dominant drivers of the degradation of cementitious and clay pipes. On the prediction side, <cit.> proposed the use of LR for the development of degradation curves by simulating the life cycle of single pipes. The authors indicate that the degradation profiles show unrealistic behavior for some materials, as their probability of failure in some cases reaches 50% only after 200 or 300 years.
§.§ Random Forest
Many authors have used Random Forests (RF) to classify both dichotomous and multiclass response variables that represent the condition of sewer pipes. The main proponents of this model are <cit.>, <cit.> and <cit.>. <cit.> compared the performance of different models to predict the condition of sewer pipes using three categories for the response variable. The authors performed a long-term simulation of the degradation behavior of individual pipes, noting that the predicted probability of failure decreased in certain periods of the simulations. They concluded that the interpretations drawn from such a simulation could be misleading, as they would imply that the physical condition of pipes could improve over time even if no maintenance was carried out. Therefore, the authors recommend using this approach only for ad-hoc classification.
§.§ Artificial Neural Networks
Different architectures of Artificial Neural Networks (ANN) have been proposed by several authors to model the degradation behavior, including <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. Of these works, only <cit.> present deterioration curves for single pipes. The authors show several examples of long-term simulations for individual pipes and, as previously noted regarding the conclusions presented by <cit.>, the degradation curves that result from this model do not show a continuously increasing deterioration of the pipes.
§.§ Other models
Additional machine learning techniques have been proposed by other authors, although no degradation curves have been produced. Gradient Boosting models were used by <cit.> and <cit.>. The latter indicate that this model outperforms the other prediction models under comparison, namely LR, RF and Decision Trees (DT). Support Vector Machines (SVM) were presented by <cit.>, <cit.> and <cit.>, who concluded that although this algorithm showed high potential for predicting the condition of sewer pipes, ANNs yielded better results.
As shown in the previous paragraphs, the use of statistical and machine learning models to predict the condition of sewer pipes has been widely explored and compared, with satisfactory results in terms of accuracy metrics, but there is still a gap in assessing the suitability of such tools for degradation models that support the development of dynamic maintenance (DM) plans. In other words, it remains necessary to investigate the capacity of the proposed models to generate reliable and understandable outcomes, as well as consistent long-term simulations describing the deterioration of sewer pipes.
<Ref> shows a collection of the explanatory variables used by the mentioned authors in order to model the degradation of sewer pipes. For a more detailed review of the most influential factors in this field, we recommend the reviews conducted by <cit.> and <cit.>.
§ MATERIALS AND METHODS
§.§ Data
The use case presented in this study is based on an urban area in the state of Nordrhein-Westfalen (Germany) with a population of around 25,000 inhabitants. The dataset comprises two main components, namely the physical and environmental attributes of the individual pipes, and the assessment of the condition of the sewer pipes carried out by experts based on CCTV inspections performed between the years 2000 and 2021.
The database initially consisted of 12,832 inspections corresponding to 11,650 sewer pipe segments. Incomplete assessments and reports containing missing values were left out of the analysis. House connections were not taken into consideration because, although an inspection was carried out, no assessment of their condition was performed; they account for 40.93% of the inspections and 49.18% of the pipes. Materials with fewer than 5 samples were excluded from the analysis, as no generalization could be drawn from such small groups. Finally, pipes that were given a very negative score despite being recently installed were dismissed, as were pipes that were installed 80 years prior to the inspection but were given the highest condition score (1.24% of the inspections, 1.32% of the pipes). These considerations resulted in a dataset with 6,279 inspections corresponding to 4,899 sewer pipe segments.
§.§.§ Variable selection
The list of variables considered for the development of the degradation model is shown in <ref>. Many of them, such as the pipe length, the material or the average depth, have been used previously by several authors. Additionally, this work proposes the use of the geographical coordinates of the sewer pipes' centroids as a surrogate variable for unobservable covariates such as groundwater fluctuations, soil compaction or interaction with infrastructure present at the surface, as suggested by <cit.>. To add further information about unobserved phenomena, this work includes the count and the length of upstream pipes, which can be considered a surrogate variable for the flow running through the pipes. Before training the models, the numerical variables were scaled using min-max scaling. <Ref> shows the main descriptive statistics of the explanatory variables selected for this work. Note that the coordinates of the centroids of the pipes have been anonymized.
§.§.§ Response variable
The output variable is modelled based on the results of inspections carried out by experts. These inspections are performed according to the methodology provided by the ATV-M143-2 <cit.> and the DIN EN 13508-2 <cit.>, which state the guidelines for the interpretation and coding of damages observed in CCTV inspections. Based on these coding systems, the data provider uses an internal classification scale from 1 to 6, where 6 indicates that the pipe is as good as new, and 1 means that the pipe should be replaced immediately. To simplify the modelling of this variable and to overcome the problem of class imbalance, the output has been binarized such that classes 5 and 6 are considered non-defective, and the rest correspond to defective pipes. The binarization of classes corresponding to different levels of structural or operational damage of sewer pipes can be found in previous works <cit.>. <Ref> shows the result of this binarization, where a clear correlation between pipe age and damage class can be seen. Damage classes 5 and 6 account for 41% of the observations, as seen on the right-hand side of <ref>. Considering these two categories as a single class (non-defective) helps to overcome the problem of class imbalance.
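As an illustration, the binarization step can be expressed in a few lines of pandas; the column names used here are placeholders for illustration, not the actual fields of the provider's database.

import pandas as pd

# Hypothetical inspection records: 6 = as good as new, 1 = replace immediately.
inspections = pd.DataFrame({
    "pipe_id": [101, 102, 103, 104],
    "condition_class": [6, 3, 5, 1],
})

# Classes 5 and 6 -> non-defective (0); classes 1-4 -> defective (1).
inspections["defective"] = (inspections["condition_class"] <= 4).astype(int)
print(inspections)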
The main descriptive statistics can be seen in <ref>. <Ref>(a) shows that the dataset mainly consists of concrete (63.53%) and clay (25.20%) pipes. As for the age of the pipes (<ref>c), 68.15% of the samples were inspected before age 40, and only 3.69% of the inspections correspond to pipes that were inspected after age 60, which implies a considerable bias towards pipes that were inspected shortly or moderately after their installation.
§.§ Methodology
As stated in previous sections of this work, the aim is to provide a predictive model that uses the physical and environmental attributes of sewer pipes, as well as the results of prior assessments based on CCTV inspections, and that is able to produce long-term degradation curves for the development of DM strategies. To carry out such a task, two main assumptions are made: a) the model should be able to accurately predict the condition of sewer pipes given the specified attributes and the response variable, and b) given that no maintenance, repair or rehabilitation works are reflected in the available inspections, the degradation curves that result from simulating the life cycle of the pipes should increase monotonically. Additionally, the resulting model should be able to produce interpretable predictions based on its inputs. <Ref> shows a flowchart of the proposed methodology.
To achieve this goal, a set of statistical and machine learning models is trained on the processed dataset. The performance of the models is assessed under two criteria, namely the classification metrics specified in <ref> and the temporal consistency of the degradation curves produced by the models.
§.§.§ Models
§.§.§ Logistic Regression
Logistic Regression (LR) is a statistical model that applies the logistic (inverse logit) function to map a linear estimator to a binary outcome, yielding the probability P that a sample x_i belongs to the positive class (in this work, the defective class), given a set of coefficients β. The linear estimator is composed of a matrix X that contains the values of the variables for each sample and a column vector β of coefficients. A link function σ is applied to it, so that the result of the estimator is constrained to the [0, 1] domain.
P(x_i; β) = 1/(1 + e^{-β x_i})
LR models are inherently explainable, and they provide information about the statistical power of the explanatory variables as well as their effect on the response variable. Assuming that the model shows global significance, i.e., at least one of the coefficients is non-zero according to the result of a chi-square test, the significance of the individual variables must be taken into consideration. The significance of the explanatory variables is obtained by applying a z-test to the standardized coefficients, and it reflects the statistical power that a specific factor has to explain the outcome.
Once a variable is considered significant, the coefficients can be interpreted by means of the Odds Ratio (OR). For an input variable j with a coefficient β_j, the OR is exp(β_j), and it can be interpreted as the odds that an outcome will occur given the presence of a specific factor, compared to the odds of the outcome occurring without that factor being present <cit.>. For a variable with an OR>1, an increase in 1 unit of that factor will increase the probability of occurrence of the outcome. A formal definition and the interpretation of the results of the LR model can be found in <cit.>.
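The sketch below shows one way to obtain the coefficients, odds ratios, and z-test p-values of an LR model with statsmodels; whether the study used this library is not stated, and the feature names and synthetic response exist only to make the example self-contained.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical predictors mirroring the kind of variables used in this study.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "pipe_age": rng.uniform(0, 80, 500),
    "length": rng.uniform(5, 60, 500),
    "upstream_length": rng.uniform(0, 2000, 500),
})
# Synthetic binary response only so the snippet runs end to end.
logits = 0.05 * X["pipe_age"] + 0.01 * X["length"] - 3.0
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-logits))).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
summary = pd.DataFrame({
    "coefficient": model.params,
    "odds_ratio": np.exp(model.params),  # OR_j = exp(beta_j)
    "p_value": model.pvalues,            # z-test on the coefficients
})
print(summary)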
§.§.§ Decision Trees
Decision Trees (DT) are sequential models introduced by <cit.> that perform a series of tests to find the optimal decision threshold for each variable in order to classify a sample. Each test is performed at a node, and each possible outcome of the test points to a child node, where another test might be carried out. Subsequent tests are performed until a leaf is reached, i.e., a node without children <cit.>.
The tests carried out at the nodes can be viewed as yes-no questions, which makes the logical rules followed by the model easy to understand. Therefore, DTs can be considered inherently explainable models, as the logical process they follow to produce results is explicit <cit.>.
§.§.§ Random Forest Classifier
A Random Forest (RF) is an ensemble method introduced by <cit.> that combines the predictions of several decision trees by averaging them. In binary classification problems, RFs are constructed using a set of tree-structured predictors that each cast a unit vote, and the output falls into one of the two possible categories {0, 1}. For every input x_i from the collection of samples X, the most popular predicted class ŷ_i among the tree classifiers is assigned.
RF models use Variable Importance (VI) as a measure of the relevance of an explanatory variable. A popular VI criterion is the Gini impurity, which is a metric used to decide the splits of the tree-structured predictors. Relevant predictors will have a higher decrease of the Gini impurity, and therefore, will have a higher VI <cit.>. For a formal definition of this model, we recommend the works of <cit.> and <cit.>.
§.§.§ Extreme Gradient Boosting
Gradient Boosting (GB) machines belong to the family of boosting methods. While classical ensemble techniques like RFs build predictions from independently trained weak estimators, boosting methods add new models to the ensemble sequentially <cit.>. In this sense, the model initially proposed by <cit.> sequentially builds new base-learners that are maximally correlated with the negative gradient of the loss function. The GB model used in this work is based on the XGBoost library, developed by <cit.>, which provides an efficient and scalable implementation of this technique.
Similarly to RFs, GB models can give a measure of the relevance of the inputs for generating the output variable. This is done through the gain, a metric used to optimize the splits of the boosted trees. A variable that produces a larger increase in gain is more decisive for the development of the model and, therefore, more relevant to explaining the output.
§.§.§ Support Vector Machine
Support Vector Machines (SVM) were initially introduced by <cit.> as an algorithm to find the optimal decision boundary between classes. For the two-class discrimination problem, SVMs determine a separating hyperplane (or decision boundary) in a high-dimensional space by maximizing the margin, i.e., the minimal distance between the hyperplane and the data points closest to it <cit.>. An advantage of such a model is the possibility of selecting different kernels, which are mathematical devices that project the data samples from a low-dimensional space to a space of higher dimension. This transformation allows the data to become separable in the higher-dimensional space by means of the aforementioned hyperplane <cit.>.
§.§.§ Artificial Neural Networks
Artificial Neural Networks (ANN) belong to the family of deep learning techniques and are widely used for pattern recognition problems. The structure of an ANN is composed of an input layer where the features of the data samples are introduced, a set of hidden layers, and an output layer, where the target value is approximated. These layers are made of neurons, i.e., computational units that apply linear or non-linear transformations (activation functions) to the information coming from previous layers during the feedforward step. The parameters of the ANN are optimized during the backpropagation step, which takes into account the prediction error of the feedforward step and updates the parameter values to yield a better estimate of the outputs given the inputs. For a better understanding of this type of model, we recommend the work of <cit.>, and for a formal definition of neural networks, we suggest <cit.>.
In the context of this work, an ANN with two hidden layers of 100 and 50 neurons, respectively, and a Rectified Linear Unit (ReLU) activation function was used. The output layer consists of a single neuron with a sigmoid activation function, since the aim of the model is to discriminate between two classes.
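For illustration, the compared classifiers can be instantiated as follows; apart from the ANN architecture stated above and the use of the XGBoost library, the hyperparameters shown are illustrative defaults rather than the exact settings of this study.

from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

# One possible instantiation of the six compared classifiers.
models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(n_estimators=200),
    "XGB": XGBClassifier(),
    "SVM": SVC(probability=True),  # probability=True enables predict_proba
    "ANN": MLPClassifier(hidden_layer_sizes=(100, 50), activation="relu"),
}

# Each model exposes fit(X, y) and predict_proba(X), so the same evaluation
# loop (cross-validation, metrics, long-term simulation) can be reused for all.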
§.§.§ Model quality metrics
Several classification metrics have been used to compare the performance of the models. Given a binary outcome, 4 possible prediction outcomes can arise after training a model, namely true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), where a positive value represents a defective pipe.
The accuracy (<ref>) represents the proportion of correct predictions with respect to the sample size. It is a good overall estimator of the performance of a model, but it does not give information about whether the model leans towards FNs or FPs.
Accuracy = (TN+TP)/(TN+TP+FN+FP)
The precision (<ref>) or positive predictive value is the proportion of TPs over the total positive predictions. That is, in this context, the precision would represent the rate of samples that were correctly predicted as damaged with respect to the total amount of samples that were considered damaged by the model.
Precision = TP/(TP+FP)
The recall (<ref>) or true positive rate shows the proportion of TPs with respect to the known positives. In the context of this work, it would represent the rate of observations that were considered damaged (positive) with respect to all the samples that were actually damaged.
Recall = TP/(TP+FN)
Finally, the Area Under the Curve (AUC) is used as a metric for the performance of the models. This metric is derived from the Receiver Operating Characteristic (ROC) curve, which shows, for different thresholds, the relationship between the TP rate and the FP rate. A perfect classifier would have a ROC curve that simultaneously reaches a value of 1 for the TP rate and 0 for the FP rate, and therefore an AUC of 1. For a more detailed description of the presented metrics, we suggest the review presented by <cit.>.
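The four metrics can be computed directly with scikit-learn, as in the following sketch; the toy labels and scores are purely illustrative.

from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

# Toy predictions: 1 = defective (positive class), 0 = non-defective.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred  = [1, 0, 0, 1, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]  # predicted P(defective)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))  # area under the ROC curve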
§.§.§ Monotonicity
As stated in previous sections, degradation curves are expected to increase monotonically with respect to time, given that no maintenance tasks are considered. To check whether this condition is fulfilled by the tested models, a simple algorithm is run to check whether, at a certain age t, the probability of being defective P(x_t) is at least as high as the same probability one year before. If P(x_t) < P(x_t-1), the behavior is not considered monotonic.
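A minimal sketch of this check is given below; the input is assumed to be the sequence of predicted failure probabilities for consecutive simulated ages.

import numpy as np

def is_monotonic(failure_probabilities):
    # failure_probabilities: P(defective) for ages t = 0, 1, 2, ...
    # Returns False if P(x_t) < P(x_{t-1}) for any t, i.e., the curve would
    # imply an improvement of the pipe condition without maintenance.
    p = np.asarray(failure_probabilities)
    return bool(np.all(np.diff(p) >= 0))

# Example: a curve with a dip at age 3 is rejected.
print(is_monotonic([0.05, 0.10, 0.20, 0.15, 0.40]))  # False
print(is_monotonic([0.05, 0.10, 0.20, 0.20, 0.40]))  # True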
§ RESULTS AND DISCUSSION
§.§ Performance metrics
The performance of the proposed models is compared using cross-validation. 90% of the data is selected for training and validation purposes, and 10% is held out to assess the models' ability to generalize to unseen samples. Cross-validation is applied to the first batch (training and validation set) so that 70% of the samples are used for training and 30% for validation. This process is repeated across 10 folds, and in every iteration the performance metrics are calculated on the held-out (test) set. The cross-validation results (<ref>) show a significant difference between the performance of the ensemble models (XGB and RF) and the rest of the tested techniques, as suggested by authors such as <cit.> or <cit.>. These models show a higher accuracy, but also a higher variance, than the other models. No significant difference can be seen on average between the ANN and the SVM models, although the SVM shows a much lower variance. The LR shows high robustness in its predictions, but its accuracy is lower than that of the SVM and the ANN. Finally, the DT shows the lowest accuracy on average, and it presents a variance comparable to that of the ensemble models or the ANN.
A similar pattern can be observed for the rest of the performance metrics. RF shows a higher recall, precision and AUC than the rest of the models, followed closely by the XGB. This means that not only do the ensemble models outperform the rest in terms of accuracy, but they also provide more reliable predictions, given the balance between the rates of FNs and FPs. SVM shows a recall similar to that of the ensemble methods, but it has the lowest precision, which means that the model is biased towards predicting more FPs than FNs. This implies that the SVM would be prone to suggesting that a pipe is defective when it is not. As seen in <ref>, the LR shows an accuracy comparable to the DT, the SVM or the ANN, although it outperforms the latter two models in terms of precision. The LR model also shows a lower variance in the performance metrics, thus rendering it more robust in terms of its predictions.
Despite the inherent difficulty of comparing results across different studies (different target values, uncertainty of the pipe condition inspections and metrics, different input variables, etc.), the results obtained in this research work are consistent with the literature review. <cit.> and <cit.> show that ensemble models such as XGB or RF outperform simpler models like the LR.
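A sketch of the evaluation protocol described above (a 90/10 hold-out split plus ten 70/30 shuffle splits) could look as follows; the exact implementation, random seeds, and the handling of metrics other than accuracy are assumptions.

import numpy as np
from sklearn.model_selection import ShuffleSplit, train_test_split
from sklearn.metrics import accuracy_score

def evaluate(model, X, y, n_folds=10):
    # X and y are assumed to be numpy arrays of the preprocessed features and
    # the binarized condition; `model` is any classifier from the comparison.
    X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.10, stratify=y)
    splitter = ShuffleSplit(n_splits=n_folds, test_size=0.30)  # 70/30 train/validation
    scores = []
    for train_idx, _ in splitter.split(X_dev):
        model.fit(X_dev[train_idx], y_dev[train_idx])
        scores.append(accuracy_score(y_test, model.predict(X_test)))  # held-out test set
    return np.mean(scores), np.std(scores)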
§.§ Degradation curves
To illustrate the differences between the models in terms of their capability to generate degradation curves, <ref> shows the results of simulating 100 years of the life cycle of 4 different pipes. These pipes were selected after carrying out the monotonicity test (<ref>) and represent the samples with the highest number of decreases in the probability of failure over time. <Ref>(a) shows the sewer pipe where the DT yields the most shifts, <ref>(b) represents the same for the SVM model, <ref>(c) for the XGB and <ref>(d) for the RF.
The DT model only captures the extreme probability values, i.e., 1 and 0, which makes it unsuitable for the prediction of probabilities, as it only produces binary values that are not consistent with the aging behavior.
The ensemble models, i.e., XGB and RF, show similar behaviors as the probability of failure increases over time, but both of them fail to produce a monotonic degradation curve. XGB shows a spiky curve with a sudden drop in the probability of failure after 70 years, which after a short period rises again to reach a probability of failure of 100%. As for the RF, the probability of failure only reaches 100% in the case of <ref>(b), and even though it shows a general upward trend, the model suggests an improvement of the condition of the pipes at different ages. This result is in line with the findings presented by <cit.>, where the authors indicate that such a long-term forecast could be misleading, since it would suggest that the pipe will improve its structural and operational condition over time.
The LR and SVM models show a similar pattern in the predicted degradation behavior. Both models generate S-shaped curves with a smooth increase in the probability of failure, although the predictions produced by the SVM do not always reach a probability of failure of 100%, and the curve shown in <ref>(b) shows a decay in the degradation rate, which would imply an improvement in the condition of the asset.
As seen in <ref>, the ANN yields monotonic degradation curves for all the simulations, although the predicted behavior is more irregular than that shown by the LR or the SVM.
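Such degradation curves can be produced by sweeping the age of a pipe while holding its other attributes fixed; a minimal sketch is shown below, assuming a fitted classifier with predict_proba, a one-row feature DataFrame, and an age column name that is purely illustrative.

import numpy as np
import pandas as pd

def simulate_life_cycle(model, pipe_features, age_column="pipe_age", horizon=100):
    # Sweep the age from 0 to `horizon` years for a single pipe and record
    # P(defective). Any scaling is assumed to happen inside `model`
    # (e.g., via a sklearn Pipeline).
    ages = np.arange(horizon + 1)
    frames = pd.concat([pipe_features] * len(ages), ignore_index=True)
    frames[age_column] = ages
    return ages, model.predict_proba(frames)[:, 1]  # probability of the defective class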
§.§ Interpretability
Among the two models that produce degradation curves with a monotonic increase of the probability of failure, the LR is the only one that yields an interpretable result based on the coefficients of its linear estimator. By means of these coefficients, it is possible to know the size of the effect of the input variables on the output, as well as its sign and its statistical significance. <Ref> shows the coefficients obtained from training the LR model.
As stated in <ref>, the interpretation of the LR can be done by analyzing its coefficients and the ORs. The results of the analysis show that, as reported in previous studies, pipe age and structural features such as length or size are highly significant factors when it comes to sewer pipe degradation. Observing the OR of the pipe age, it can be seen that an increase of 1 year of age raises the odds of the pipe being defective by a factor of 1.095.
The function of the pipe only appears to be significant for stormwater pipes. The negative coefficient and the OR < 1 indicate that mixed-use and sewage pipes are more prone to degradation than stormwater pipes. This result is highly dependent on the maintenance strategies carried out by the water utility. As seen in <cit.> and <cit.>, the degradation of mixed-use sewers is lower due to higher engineering, construction and maintenance efforts, whereas the results obtained by <cit.> show that sanitary pipes are more resilient to deterioration. The size of the pipe, represented in this analysis by the height, shows a statistically significant negative effect on the outcome, suggesting that bigger pipes are more resilient, which is in line with the conclusions of authors such as <cit.> or <cit.>.
The length of the pipes slightly increases the probability of failure. Authors such as <cit.>, <cit.> or <cit.> explain this effect by arguing that longer pipes have more joints, which are vulnerable to failure, and are more exposed to structural defects such as bending. The length of the upstream pipes has a similar effect on the outcome, showing that the probability of failure could be correlated with the volume of water flowing through the pipes, considering that downstream pipes receive a higher volume. As stated previously, this result depends on the particular characteristics of the studied network and the asset management strategies of the water utility, and it should not be confused with the effect of the flow rate on the degradation of the pipes. According to authors like <cit.> or <cit.>, steep slopes cause higher flow rates, which lead to higher deterioration rates, whereas lower slopes can cause sedimentation due to the low velocity of the water <cit.>. Given the lack of statistical significance of the slope in the presented experiment, no conclusion about the correlation between flow rate and sewer degradation can be drawn for this particular use case.
Finally, the X and Y coordinates of the centroid of the pipe show opposite effects on the response variable. The OR of the X coordinate suggests that an increase of 1 unit in this variable lowers the odds of the pipe being defective (OR < 1), meaning that pipes situated in the eastern parts of the study area show a slower degradation rate. The coefficient related to the Y coordinate indicates exactly the opposite: pipes situated in the north are more prone to failure (OR > 1). This difference can be better explained by looking at <ref>, where it can be clearly seen that region A, which lies in the northwest of the study area, has a higher population density and, therefore, a higher density of sewer pipes and a higher volume of water. On the contrary, region C (southwest) is less populated than its counterparts and has a less complex sewer network. For the same simulated age, this area shows lower probabilities of failure, confirming the intuition behind the size and sign of the coefficients of the centroid coordinates and the length of the upstream pipes.
§.§ Current inspection strategy vs. model-based strategy
Once the best option is selected among the proposed models, a comparison can be made between the current inspection plan and the one that can be drawn from exploiting the model. The advantage of using the proposed model is that it provides flexibility in setting probability thresholds. By adjusting the threshold, the planner can determine the acceptable level of risk and allocate inspection resources accordingly. This flexibility allows for a more efficient inspection plan, focusing resources on pipes with higher probabilities of failure.
<Ref>a shows a comparison of 4 different scenarios, in which 3 possible probability thresholds are compared against a scenario where no model is used. For example, in Scenario 1, a conservative threshold is set, resulting in a large number of pipes being inspected. This approach prioritizes safety but may lead to unnecessary inspections and increased costs. In Scenario 2, a moderate threshold is used, reducing the number of inspections compared to Scenario 1 while still maintaining an acceptable level of risk. Scenario 3 represents a more risk-tolerant approach with a higher threshold, resulting in even fewer inspections.
When comparing Scenario 4 (no model) and Scenario 2 (with a probability threshold of 50%), approximately half of the network is inspected after around 27 years. By subtracting the predicted failure age of the pipes from the actual age at inspection (<ref>b), we obtain a distribution where some pipes are inspected before the predicted cutoff point (negative side) and others are inspected later than required (positive side).
In this case, according to the model and the selected probability threshold, 49.11% of the pipes are inspected later than required, which could lead to higher maintenance and repair costs. A more restrictive strategy such as the one proposed in Scenario 1 would lead to 68.23% of the pipes being inspected too late, whereas Scenario 3 would reduce this rate to 28.71%. Therefore, to optimize the operation and maintenance of the sewer network, the decision boundary (probability threshold) needs to be adjusted accordingly. This adjustment should take into account the needs and resources of the managing authority to strike a balance between timely inspections and cost-effectiveness.
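The scenario comparison can be reproduced, in spirit, by deriving a model-based inspection age from each degradation curve and comparing it with the actual inspection age; the following sketch illustrates the idea, with function and variable names that are hypothetical.

import numpy as np

def model_based_inspection_age(prob_by_age, threshold):
    # First simulated age at which P(defective) reaches the chosen threshold.
    above = np.where(np.asarray(prob_by_age) >= threshold)[0]
    return int(above[0]) if above.size else len(prob_by_age)

def share_inspected_too_late(curves, actual_inspection_ages, threshold):
    # curves: per-pipe degradation curves (P(defective) for ages 0, 1, 2, ...);
    # actual_inspection_ages: age of each pipe at its latest CCTV inspection.
    cutoffs = np.array([model_based_inspection_age(c, threshold) for c in curves])
    return float(np.mean(np.asarray(actual_inspection_ages) > cutoffs))

Lower thresholds inspect earlier but more often, while higher thresholds accept more risk and inspect later, mirroring the trade-off between the scenarios discussed above.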
§ CONCLUSIONS
This work presented a comparison of different statistical and machine learning methods to assess their suitability to tackle the problem of modelling the degradation of sewer pipes. The analysis has been carried out considering three main elements, namely the accuracy of the models, their ability to produce consistent long-term simulations based on the probability of failure of single pipes, and their interpretability.
The results showed that ensemble methods such as Random Forests or Gradient Boosting Trees yield the best results in terms of accuracy metrics, but their long-term simulations do not produce monotonic degradation curves, which implies that they cannot be used to develop reliable dynamic maintenance plans in the presented scenario. Support Vector Machines and Artificial Neural Networks show similar accuracy metrics, but the former is not able to generate coherent long-term simulations, and the latter lacks the interpretability sought in the requirements of this work. The Logistic Regression showed slightly less accurate results, but it produced degradation curves that fulfil the monotonicity requirement and is inherently explainable by means of its coefficients, rendering it the most suitable model for the development of dynamic inspection plans for the presented use case.
Following these findings, a simulation was conducted to compare the existing situation (without a model) with three alternative scenarios employing various thresholds for the probability of failure of single pipes. This simulation demonstrated the effectiveness of a data-driven model in preventing a high proportion of the pipes in the network from being inspected later than required.
This study has provided a framework to assess different statistical and machine learning models for creating inspection plans that consider long-term failure simulations and model interpretability. However, further research is needed to make the methodology more reliable. This can be achieved by analyzing larger datasets that include more variables affecting sewer pipe deterioration and comparing the costs of different inspection plans to the current scenario.
§ ACKNOWLEDGEMENTS
This research was funded in part by the German Federal Ministry of Education and Research (BMBF) under the project KIKI (grant number 02WDG1594A). The authors would like to acknowledge the support and collaboration of August-Wilhelm Scheer Institut für digitale Produkte und Prozesse GmbH, IBAK Helmut Hunger GmbH & Co. KG, Eurawasser GmbH & Co. KG, AHT AquaGemini GmbH and Entsorgungsverband Saar.
§ DATA AVAILABILITY
The data used in this study is available upon request from the corresponding author or can be accessed through the following GitHub repository: <https://github.com/Fidaeic/sewer-pred>.
|
http://arxiv.org/abs/2307.04513v1 | 20230710122005 | CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation | [
"Yicheng Wu",
"Zhonghua Wu",
"Hengcan Shi",
"Bjoern Picker",
"Winston Chong",
"Jianfei Cai"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
CoactSeg for New MS Lesion Segmentation
Yicheng Wu et al.
1 Department of Data Science & AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3168, Australia
[email protected]
2 SenseTime Research, Singapore, 069547, Singapore
3 Alfred Health Radiology, Alfred Health, Melbourne, VIC 3004, Australia
4 Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC 3800
CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation
Yicheng Wu1() Zhonghua Wu 2 Hengcan Shi 1 Bjoern Picker 3,4 Winston Chong 3,4 Jianfei Cai1
August 12, 2023
==============================================================================================
New lesion segmentation is essential to estimate the disease progression and therapeutic effects during multiple sclerosis (MS) clinical treatments. However, the expensive data acquisition and expert annotation restrict the feasibility of applying large-scale deep learning models. Since single-time-point samples with all-lesion labels are relatively easy to collect, exploiting them to train deep models is highly desirable to improve new lesion segmentation.
Therefore, we proposed a coaction segmentation (CoactSeg) framework to exploit the heterogeneous data (i.e., new-lesion annotated two-time-point data and all-lesion annotated single-time-point data) for new MS lesion segmentation.
The CoactSeg model is designed as a unified model, with the same three inputs (the baseline, follow-up, and their longitudinal brain differences) and the same three outputs (the corresponding all-lesion and new-lesion predictions), no matter which type of heterogeneous data is being used.
Moreover, a simple and effective relation regularization is proposed to ensure the longitudinal relations among the three outputs to improve the model learning.
Extensive experiments demonstrate that utilizing the heterogeneous data and the proposed longitudinal relation constraint can significantly improve the performance for both new-lesion and all-lesion segmentation tasks.
Meanwhile, we also introduce an in-house MS-23v1 dataset, including 38 Oceania single-time-point samples with all-lesion labels. Codes and the dataset are released at <https://github.com/ycwu1997/CoactSeg>.
§ INTRODUCTION
Multiple sclerosis (MS) is a common inflammatory disease of the central nervous system (CNS), affecting millions of people worldwide <cit.> and even leading to disability in young populations <cit.>. During the clinical treatment of MS, lesion changes, especially the emergence of new lesions, are crucial criteria for estimating the effects of anti-inflammatory disease-modifying drugs <cit.>. However, MS lesions are usually small and numerous, and appear similar to gliosis or other types of brain lesions, e.g., ischemic vasculopathy <cit.>. Identifying MS lesion changes across multi-time-point data is still a heavy burden for clinicians. Therefore, automatically quantifying MS lesion changes is essential for constructing a computer-aided diagnosis (CAD) system for clinical applications.
Deep learning has been widely used for MS lesion segmentation from brain MRI sequences <cit.>. For example, the icobrain 5.1 framework <cit.> combined supervised and unsupervised approaches and designed manual rules to fuse the final segmentation results. Some works <cit.> further studied the complementary features from other MRI modalities for MS lesion segmentation. Meanwhile, to train a better deep model, class-imbalance issues <cit.> and prior brain structures <cit.> have been respectively investigated to improve the performance.
Given the impressive performance achieved by existing pure MS lesion segmentation methods <cit.>, recent attention has shifted to analyzing longitudinal MS changes <cit.>, such as stable, new, shrinking, and enlarging lesions, with a focus on new MS lesion segmentation <cit.>.
However, collecting adequate well-labeled longitudinal MS lesion data for model learning is highly challenging, since it requires multi-time-point data from the same set of patients as well as costly and time-consuming expert annotations.
Fig. <ref> shows the three types of heterogeneous MS lesion data: new-lesion annotated two-time-point data, all-lesion annotated two-time-point data, and all-lesion annotated single-time-point data, each of which is associated with different costs. New-lesion annotated two-time-point data is the ideal choice for learning new lesion segmentation, but comes with the highest data acquisition and annotation costs. Annotating all lesions in two-time-point data can reduce the annotation cost, but it requires accurate brain registration and rule-based post-processing to identify lesion changes, which cannot avoid noise accumulation and often leads to sub-optimal performance. All-lesion annotated single-time-point data has the lowest data acquisition and annotation costs. This motivates us to raise the question: “Can we leverage all-lesion annotated single-time-point data to promote new MS lesion segmentation?”
Therefore, in this paper, we proposed a deep Coaction Segmentation (CoactSeg) model that can unify heterogeneous data and annotations for the new MS lesion segmentation task. Specifically, CoactSeg takes three-channel inputs, including the baseline, follow-up, and corresponding differential brains, and produces all-lesion and new-lesion segmentation results at the same time.
Moreover, a longitudinal relation constraint (e.g., new lesions should only appear in the follow-up scans) is proposed to regularize the model learning in order to integrate the two tasks (new and all lesion segmentation) and let them boost each other. Extensive experiments on two MS datasets demonstrate that our proposed CoactSeg model achieves superior performance for both new and all MS lesion segmentation, e.g., obtaining 63.82% Dice on the public MICCAI-21 dataset <cit.> and 72.32% Dice on our in-house MS-23v1 dataset, respectively. It even outperforms two neuro-radiologists on MICCAI-21.
Overall, the contributions of this work are three-fold:
* We propose a simple unified model CoactSeg that can be trained on both new-lesion annotated two-time-point data and all-lesion annotated single-time-point data in the same way, with the same input and output format;
* We design a relation regularizer to ensure the longitudinal relations among all and new lesion predictions of the baseline, follow-up, and corresponding differential brains;
* We construct an in-house MS-23v1 dataset, which includes 38 Oceania single-time-point 3D FLAIR scans with manual all-lesion annotations by experienced human experts. We will release this dataset publicly.
§ DATASETS
We trained and evaluated our CoactSeg model on two MS segmentation datasets, as shown in Table <ref>. On the public MICCAI-21 dataset[<https://portal.fli-iam.irisa.fr/msseg-2/>], we only use its training set since it does not provide official labels of testing samples. Specifically, 40 two-time-point 3D FLAIR scans are captured by 15 MRI scanners at different locations. Among them, 11 scans do not contain any new MS lesions. The follow-up data were obtained around 1-3 years after the first examination. Four neuro-radiologists from different centers manually annotated new MS lesions, and a majority voting strategy was used to obtain the final ground truth. For pre-processing, the organizers only performed a rigid brain registration, and we further normalized all MRI scans to a fixed resolution of [0.5, 0.75, 0.75] mm.
Since the public MS lesion data is not adequate <cit.>, we further collected 38 single-time-point 3D FLAIR sequences as a new MS dataset (MS-23v1). Specifically, all samples were anonymized and captured by a 3T Siemens scanner in Alfred Health, Australia. To the best of our knowledge, this will be the first open-source dataset from Oceania for MS lesion segmentation, contributing to the diversity of existing public MS data. Two neuro-radiologists and one senior neuro-scientist segmented all MS lesions individually and in consensus using the MRIcron segmentation tool[<https://www.nitrc.org/projects/mricron/>]. The voxel spacing of all samples is then normalized to an isotropic resolution of [0.8, 0.8, 0.8] mm.
Finally, when conducting the mixed training, we used a fixed data split in this paper (i.e., 62 samples for training and 16 for validation in total). Note that we followed the setting of the public challenge <cit.>, which selects the new validation set from MICCAI-21 that does not include samples without any new MS lesions.
§ METHOD
§.§ Overview
Fig. <ref> illustrates the overall pipeline of our proposed CoactSeg model F_θ. We construct a quadruple set (X_b, X_fu, X_d, Y) for the model training. Here, the longitudinal difference map x_d ∈ X_d is obtained by subtracting the baseline brain x_b ∈ X_b from its follow-up x_fu ∈ X_fu (i.e., x_d = x_fu - x_b). Therefore, given heterogeneous annotations, i.e., all-lesion labels y_al^s ∈ Y_al^s in single-time-point data and new-lesion labels y_nl^t ∈ Y_nl^t in two-time-point data, the CoactSeg model F_θ is designed to exploit both for model training.
§.§ Multi-head Architecture
Fig. <ref> shows that new-lesion regions are highlighted in the brain difference map x_d. Hence, besides x_b and x_fu, CoactSeg also receives x_d as inputs. It generates all-lesion and new-lesion predictions as
p_al^s1, p_al^s2, p_nl^s = F_θ(x_b^s, x_fu^s, x_d^0)
p_al^t1, p_al^t2, p_nl^t = F_θ(x_b^t, x_fu^t, x_d^t).
For single-time-point samples x^s ∈ X^s, x_b^s and x_fu^s are identical to x^s, and the difference map becomes an all-zero matrix x_d^0, with p_al^s1, p_al^s2 and p_nl^s being the corresponding all-lesion and new-lesion predictions of x^s. For two-time-point data x^t ∈ X^t,
x_b^t and x_fu^t respectively denote the first and second time-point samples, with p_al^t1, p_al^t2 and p_nl^t being the all-lesion segmentation results at the first and second time points and the new-lesion results of x^t, respectively.
In this way, we unify the learning of both single and two-time-point data with heterogeneous annotations by using the same model F_θ, with the same input and output formats.
Note that, inspired by semi-supervised learning <cit.>, we mix x^s and x^t samples into each batch for training. Given the heterogeneous annotations, i.e., all-lesion labels for single-time-point data and new-lesion labels for two-time-point data, we apply the following corresponding supervisions:
L_al = Dice(p_al^s1, y_al^s) + Dice(p_al^s2, y_al^s)
L_nl = Dice(p_nl^t, y_nl^t)
where Dice refers to the common Dice loss for medical segmentation tasks. We use a 3D VNet <cit.> as the backbone of F_θ, and the three prediction heads are designed as individual convolutional blocks. Note that the last prediction head also receives the features from the first two in order to capture the all-lesion information. Compared to the recent work <cit.> on exploiting heterogeneous data, our architecture avoids the complicated design of dynamic prediction heads.
§.§ Longitudinal Relation Regularization
Human experts usually identify new MS lesions by comparing the brain MRI scans at different time points. Inspired by this, we further propose a longitudinal relation constraint to compare samples from different time points:
L_rr = ||p_al^s1, p_al^s2||_2 + ||p_al^t1 ⊗ y_nl^t, 0||_2 + ||p_al^t2 ⊗ y_nl^t, 1||_2
where ⊗ is a masking operation. The first term in (<ref>) is to encourage the all-lesion predictions p_al^s1 and p_al^s2 to be the same since there is no brain difference for single-time-point data. The second and third terms in (<ref>) are to ensure that the new-lesion region can be correctly segmented as the foreground in p_al^t2 and as the background in p_al^t1 in two-time-point data with only new lesion labels y_nl^t.
Finally, the overall loss function to train our CoactSeg model becomes a weighted sum of L_al, L_nl, and the regularization L_rr:
L = L_al + λ_1 × L_nl +λ_2 × L_rr
where λ_1 and λ_2 are constants to balance different tasks.
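A minimal PyTorch sketch of how the three loss terms can be combined is given below; the soft Dice formulation, the use of mean squared error for the relation terms, and the per-sample handling of the two data types are simplifications and assumptions, not the authors' exact implementation.

import torch

def soft_dice(pred, target, eps=1e-5):
    # A common soft Dice loss on probability maps (illustrative formulation).
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def coactseg_loss(p_al1, p_al2, p_nl, y_al, y_nl, is_two_time_point, lam1=1.0, lam2=1.0):
    # Sketch of L = L_al + lam1 * L_nl + lam2 * L_rr for one sample.
    if is_two_time_point:          # only new-lesion labels y_nl are available
        l_nl = soft_dice(p_nl, y_nl)
        l_al = torch.tensor(0.0)
        # New-lesion voxels must be background at time point 1 and foreground at time point 2.
        l_rr = ((p_al1 * y_nl) ** 2).mean() + (((p_al2 * y_nl) - y_nl) ** 2).mean()
    else:                          # single-time-point sample with all-lesion labels y_al
        l_al = soft_dice(p_al1, y_al) + soft_dice(p_al2, y_al)
        l_nl = torch.tensor(0.0)
        l_rr = ((p_al1 - p_al2) ** 2).mean()  # both all-lesion heads should agree
    return l_al + lam1 * l_nl + lam2 * l_rr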
§ RESULTS
§.§.§ Implementation Details.
For training, we normalize all inputs to zero mean and unit variance. Then, among common augmentation operations, we use random flips and rotations to perturb the inputs. Since MS lesions are typically small, we apply a weighted cropping strategy to extract 3D patches of size 80×80×80 in order to relieve the class-imbalance problem <cit.>. Specifically, if the input sample contains foreground, we randomly select one of the foreground voxels as the patch center and shift the patch by a maximum margin of [-10, 10] voxels. Otherwise, we randomly crop 3D patches. The batch size is set to eight (i.e., four new-lesion two-time-point samples and four all-lesion single-time-point samples). We apply the Adam optimizer with a learning rate of 1e-2. The model is trained for 20k iterations in total. In the first 10k iterations, λ_1 and λ_2 are set to 1 and 0, respectively, in order to train the model for segmenting MS lesions at the early training stage. After that, we set λ_2 to 1 to apply the relation regularization. During testing, we extract overlapped patches with a stride of 20×20×20 and then re-compose them into the final results.
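The weighted cropping strategy can be sketched as follows; the border handling and the jitter distribution are assumptions for illustration.

import numpy as np

def weighted_crop(volume, label, size=80, max_shift=10):
    # volume, label: 3D arrays of identical shape.
    fg = np.argwhere(label > 0)
    if fg.size:  # center the patch on a random foreground voxel, then jitter it
        center = fg[np.random.randint(len(fg))] + np.random.randint(-max_shift, max_shift + 1, size=3)
    else:        # no lesions: fall back to a purely random crop
        center = np.array([np.random.randint(s) for s in volume.shape])
    start = np.clip(center - size // 2, 0, np.maximum(np.array(volume.shape) - size, 0))
    sl = tuple(slice(int(s), int(s) + size) for s in start)
    return volume[sl], label[sl]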
Note that we follow <cit.> to mask the non-brain regions and all experiments are only conducted in the brain regions with the same environment (Hardware: Single NVIDIA Tesla V100 GPU; Software: PyTorch 1.8.0, Python 3.8.10; Random Seed: 1337). The computational complexity of our model is 42.34 GMACs, and the number of parameters is 9.48 M.
§.§.§ Performance for MS Lesion Segmentation.
Two MS tasks (i.e., new-lesion segmentation on MICCAI-21 and all-lesion segmentation on our MS-23v1 dataset) are used to evaluate the proposed CoactSeg. Besides common segmentation metrics <cit.>, including Dice, Jaccard, 95% Hausdorff Distance (95HD), and Average Surface Distance (ASD), we further follow <cit.> in using the instance-level F1 score (F1) to measure the lesion-wise segmentation performance. Here, tiny lesions (i.e., fewer than 11 voxels) are not included in the F1 calculation, as in <cit.>.
Fig. <ref> illustrates that our proposed CoactSeg accurately segments the tiny new lesions on MICCAI-21. Compared to the recent work <cit.>, our model can even predict new lesions with low contrast (indicated by the enlarged yellow rectangles in Fig. <ref>). Table <ref> gives the quantitative results on MICCAI-21. We can see that: 1) Our model achieves good segmentation performance for new MS lesion segmentation and outperforms the second-best method <cit.> by 7.01% in Dice; 2) Compared with human experts, our proposed model also outperforms two of them (i.e., #3 and #4) in terms of the segmentation and the shape-related metrics; 3) For the lesion-wise F1 score, our method
significantly reduces the performance gap between deep models and human experts, achieving a comparable F1 with expert #3 (i.e., 61.96% vs. 62.88%).
Fig. <ref> shows the all-lesion segmentation results of our CoactSeg model on our in-house MS-23v1 dataset. It can be seen that CoactSeg is able to segment most MS lesions, even for very tiny ones (highlighted by red arrows). Moreover, we can see that the segmentation results of the first two prediction heads are relatively consistent (i.e., the 2nd and 3rd columns of Fig. <ref>), demonstrating the effectiveness of our proposed relation regularization.
§.§.§ Ablation Study.
Table <ref> further shows the ablation study for both new and all MS lesion segmentation tasks. It reveals that: 1) introducing the heterogeneous data significantly improves the performance of new-lesion segmentation on MICCAI-21, with an average Dice gain of 2.64%; 2) exploiting the relation regularization for mixed training further improves the performance on the two datasets; 3) the simple stage-by-stage training strategy (see Implementation Details, <ref>) better balances the two tasks and achieves the overall best segmentation performance for both.
§ CONCLUSION
In this paper, we have presented a unified model CoactSeg for new MS lesion segmentation, which can predict new MS lesions according to the two-time-point inputs and their differences while at the same time segmenting all MS lesions. Our model effectively exploits heterogeneous data for training via a multi-head architecture and a relation regularization. Experimental results demonstrated that introducing all-lesion single-time-point data can significantly improve the new-lesion segmentation performance. Moreover, the relation constraint also facilitates the model to capture the longitudinal MS changes, leading to a further performance gain. Our in-house MS-23v1 dataset will be made public to help the MS lesion research.
Future work will explore more longitudinal relations to study fine-grained MS changes, as well as more powerful constraints to address the domain gap <cit.> and fairness <cit.> problems. Moreover, we plan to collect and annotate more MS lesion data to improve the feasibility of training large-scale deep models for clinical applications <cit.>.
§.§.§ Acknowledgement.
This work was supported in part by the Monash FIT Start-up Grant, in part by the Novartis (ID: 76765455), and in part by the Monash Institute of Medical Engineering (MIME) Project: 2022-13. We here appreciate the public repositories of SNAC <cit.> and Neuropoly <cit.>, and also thanks for the efforts to collect and share the MS dataset <cit.> and the MS-23v1 dataset from Alfred Health, Australia.
|
http://arxiv.org/abs/2307.03958v1 | 20230708114851 | Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact | [
"Markus Dahlmanns",
"Constantin Sander",
"Robin Decker",
"Klaus Wehrle"
] | cs.CR | [
"cs.CR",
"cs.NI"
] |
Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact
Markus Dahlmanns, Constantin Sander, Robin Decker, Klaus Wehrle
Communication and Distributed Systems, RWTH Aachen University, Aachen, Germany
{dahlmanns, sander, decker, wehrle}@comsys.rwth-aachen.de
Containerization allows bundling applications and their dependencies into a single image.
The containerization framework Docker eases the use of this concept and enables sharing images publicly, gaining high momentum.
However, it can lead to users creating and sharing images that include private keys or API secrets—either by mistake or out of negligence.
This leakage impairs the creator's security and that of everyone using the image.
Yet, the extent of this practice and how to counteract it remains unclear.
In this paper, we analyze numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax other private registries, unveiling that pctaffectedimages of the images indeed include secrets.
Specifically, we find validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches leaked API secrets, both opening a large attack surface, i.e., putting the authentication and confidentiality of privacy-sensitive data at stake and even allowing active attacks.
We further document that these leaked keys are used in the wild:
While we discovered casignedcerts certificates relying on compromised keys being issued by public certificate authorities, further active Internet measurements show 20220901numuniquehosts TLS and SSH hosts using leaked private keys for authentication.
To counteract this issue, we discuss how our methodology can be used to prevent secret leakage and reuse.
[500]Security and privacy Network security
[500]Security and privacy Key management
Klaus Wehrle
August 12, 2023
===================
§ INTRODUCTION
While originally developed to isolate applications <cit.>, containerization has become a new cornerstone of interconnected services as it significantly eases their deployment <cit.>.
To this end, Docker, the most prominent containerization framework <cit.>, uses prebuilt images that include all software dependencies necessary to deploy an application <cit.>.
Users only need to download an image from a registry or can derive their own image by adapting its configuration and included files.
These new images can then be uploaded again, building a whole ecosystem of containerized applications.
For example, Docker Hub, the official Docker registry, comprises more than 9,000,000 images <cit.> that anybody can use.
With this level of public exposure, any mistake during image creation can have drastic consequences.
Most notably, including confidential secrets such as cryptographic keys or API secrets, by mistake or out of negligence, can introduce two security issues:
[(i)]
* attackers can misuse compromised secrets leading to potential loss of data, money, privacy, or control, and
* administrators instantiating images can rely on broken security, e.g., paving the way for Man-in-the-Middle attacks.
Aggravatingly, there is no easy tooling to show which files have been added—accidentally adding a secret is thus much easier than identifying such an incident.
Indeed, related work traced three reused private keys authenticating 6000 (Industrial) Internet of Things services back to their occurrence in a Docker image <cit.>.
Additionally, blog entries produced anecdotal evidence that Docker images include further confidential security material <cit.>.
However, comprehensive analyses on revealed security secrets at scale do not exist in this realm.
Instead, such analyses focus on GitHub repositories <cit.>.
Hence, the extent for container images is unknown.
In this paper, we thus comprehensively study whether Docker images include confidential security material and whether administrators reuse these compromised secrets at large scale by
[(i)]
* scanning publicly available Docker images for confidential security material, and
* measuring whether these secrets are used in practice on production deployments.
To this end, we analyze images available on the official and largest registry Docker Hub, and we examine the entire IPv4 address space for public registries and services basing their security on compromised secrets.
Contributions Our main contributions are as follows.
* We found privatemeasurementnumtotalmax Docker registries in the IPv4 address space that contain not only secrets but also potentially confidential software and likely allow attackers to replace images, e.g., with malware.
* After filtering test secrets, we identified totalvalidmatches leaked distinct secrets, i.e., validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets, in numaffectedimages images (pctaffectedimages of images we scanned are affected).
* We show that operators use 20220901corrFingerprint compromised private keys in practice affecting the authenticity of 20220901numuniquehosts Internet-reachable hosts providing, i.a., HTTP, AMQP, MQTT, and LDAP services.
* We discuss improvements of the Docker paradigm to prevent secret leakage and reuse in the future as well as provide our software used to find and verify secrets <cit.> to support mitigation.
§ A PRIMER ON THE DOCKER PARADIGM
In contrast to other containerization frameworks, Docker <cit.> does not only provide an isolated execution environment for applications.
Instead, Docker specifies an easy-to-use paradigm to create, share and deploy ready-to-run container images <cit.>.
These images constitute the filesystems of the containers and include all dependencies necessary for the actual applications, i.e., they can include all kinds of files added during creation.
The completeness of these images makes it possible to share them via (publicly accessible) registries.
Figure <ref> shows the structure and lifecycle of Docker images in detail, from creating images to sharing and running them.
Image Creation
To create an image, Docker uses a user-defined Dockerfile <cit.> to specify the image ingredients.
First 1, the Dockerfile references another image, the base image, which is downloaded from a registry and comprises the initial file system of the new image.
Second 2, image layers consisting of differential snapshots of the file system after running commands from the Dockerfile are created and stacked on each other <cit.>.
These commands can include shell statements to, e.g., compile an application running in the container.
Furthermore, specific commands exist to embed environment variables or to add files from the host system into the image <cit.>.
While the files can be, e.g., source code or further dependencies, image creators can also easily and accidentally include (cryptographic) secrets into the image or its environment variables, putting the service's security at risk when leaked.
Once an image has been fully created, it exists as a self-contained unit, which is ready-to-run but allows little insight into what has been added.
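To make this risk more tangible, the following minimal Python sketch flags Dockerfile instructions that commonly embed secrets (COPY/ADD of key files, ENV lines that look like tokens); the patterns and file extensions are illustrative assumptions, not the detection rules used later in this paper.

```python
import re
import sys

# Illustrative patterns only: COPY/ADD of typical key files and ENV lines
# whose name suggests a credential. Real detection rules differ.
RISKY_FILE = re.compile(r"^\s*(COPY|ADD)\s+.*\.(pem|key|p12|pfx)\b", re.IGNORECASE)
RISKY_ENV = re.compile(r"^\s*ENV\s+\w*(TOKEN|SECRET|KEY|PASSWORD)\w*\s*=?\s*\S+", re.IGNORECASE)

def audit_dockerfile(path: str) -> list[str]:
    """Return human-readable warnings for lines that may embed secrets."""
    warnings = []
    with open(path, encoding="utf-8", errors="replace") as handle:
        for number, line in enumerate(handle, start=1):
            if RISKY_FILE.search(line):
                warnings.append(f"{path}:{number}: file with key material copied into image")
            elif RISKY_ENV.search(line):
                warnings.append(f"{path}:{number}: environment variable may hold a secret")
    return warnings

if __name__ == "__main__":
    for warning in audit_dockerfile(sys.argv[1] if len(sys.argv) > 1 else "Dockerfile"):
        print(warning)
```

Such a check only covers secrets added explicitly via the Dockerfile; material generated by commands executed during the build remains invisible at this stage.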
Image Push
After generating the image, creators can push it to a registry <cit.>, e.g., the official and largest registry Docker Hub <cit.>, allowing them to easily deploy containers among their own fleet of servers, but also to share the image with other users <cit.>.
To this end, the image layers are uploaded to the registry under a repository name and tag 3.
Here, the repository name typically represents the application in the image, and the tag describes a version.
Conventionally, creators tag the newest image in a repository with latest.
Container Deployment
To run a Docker container, users pull an image from a registry.
When pulling, users first request an image manifest <cit.> from the registry, including meta information about the image and its layers.
After downloading all layers 4, Docker merges the content composing the file system for the new container 5 <cit.>.
The application then finds an unchanged file system with all content provided by the image creator, i.e., all dependencies but also potentially added secrets, and can very likely provide services to the public Internet.
Since numerous containers of various users can be based on a single image, included, and thus compromised, secrets could affect several deployments.
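For illustration, this pull flow can be reproduced directly against the Registry HTTP API v2. The sketch below assumes an anonymously accessible registry (e.g., a local one on port 5000) and a hypothetical repository name; token authentication as required by Docker Hub and multi-architecture manifest lists are omitted for brevity.

```python
import requests

# Assumed endpoint of a registry speaking the Registry HTTP API v2
# (e.g., a private registry without access control); adjust as needed.
REGISTRY = "http://localhost:5000"
REPOSITORY = "myapp"          # hypothetical repository name
TAG = "latest"

MANIFEST_TYPE = "application/vnd.docker.distribution.manifest.v2+json"

def pull_layers(registry: str, repository: str, tag: str) -> list[bytes]:
    """Fetch the image manifest and download all referenced layer blobs."""
    manifest = requests.get(
        f"{registry}/v2/{repository}/manifests/{tag}",
        headers={"Accept": MANIFEST_TYPE},
        timeout=10,
    ).json()
    layers = []
    for layer in manifest.get("layers", []):
        blob = requests.get(
            f"{registry}/v2/{repository}/blobs/{layer['digest']}",
            timeout=60,
        )
        layers.append(blob.content)   # each blob is a gzip-compressed tar of one layer
    return layers

if __name__ == "__main__":
    print(f"downloaded {len(pull_layers(REGISTRY, REPOSITORY, TAG))} layers")
```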
The Docker paradigm eases distribution and deployment of applications.
However, insight into what is added in images and up- or downloaded from a registry can be lost.
Thus, secrets can be leaked and reused, impairing Internet-reachable services at scale.
§ RELATED WORK
Three streams of research motivate our analysis of confidential security material in Docker images: studies that detect leaked security material, research on publicly available Docker images, and Internet-wide scans disclosing security weaknesses at scale.
Actively Leaked Security Material
Currently, the search for leaked security material focuses on code repositories.
Several studies detected the leakage of passwords <cit.>, SSH private keys <cit.>, Amazon Cloud API keys <cit.>, and Slack API keys <cit.>, using the built-in search of GitHub.
To allow broader searches, researchers entailed regular expressions but focused on specific file types <cit.> or code snippets <cit.>, i.e., the scale of this research was limited.
In contrast, Meli et al. performed a large-scale study without focusing on specific file types, showing that ∼3.5% of the 4 million analyzed code repositories on GitHub included leaked secrets <cit.>.
Further approaches use machine learning to improve the detection by relying on code semantics <cit.>, false-positive detection <cit.>, or both requiring further user input <cit.>.
Away from GitHub, research proposed methods to investigate various platforms <cit.> and proved the presence of secrets in publicly available Android apps <cit.>.
A recent study underlines that most developers experienced secret leakage, and guidelines are insufficient for prevention <cit.>.
While retroactively deleting leaked secrets does not help <cit.>, (non)-commercial approaches, e.g., GitGuardian <cit.>, TruffleHog <cit.>, or Gitrob <cit.>, aim at preventing secret leakage for Git.
Docker Images
Besides Git, researchers and developers, early on without evidence, assumed leaked secrets in images for virtual machines or Docker and provided countermeasures <cit.>.
Nevertheless, non-academic Web-blog studies <cit.> still find leaked secrets in images on Docker Hub.
However, these studies either limit their scale <cit.> to a few thousand images/secrets or restrict their methodology <cit.> to process large amounts of available images.
The latter study <cit.> finds 46076 affected images among 6.3 million images on Docker Hub, but only considers information available in Dockerfiles, e.g., specific file paths.
Meanwhile, SecretScanner <cit.>, a smaller secret search tool, implements a function allowing users to find secrets in Docker images.
Still, a comprehensible, large-scale, and methodology-driven analysis on introduced security weaknesses by leaked security material is missing.
Instead, large-scale studies on Docker images focused on data compression <cit.>, software vulnerabilities <cit.>, or typosquatting of image names <cit.>.
Hence, as of now, it is unclear how widespread secret leakage is in images on Docker Hub as well as private Internet-reachable registries.
Moreover, it is unknown to what extent these compromised images are then used on the Internet and whether they weaken security at scale.
Internet Measurements
For understanding deployment security at scale, Internet-wide measurements have been a valuable tool in the past.
Internet scan services, such as Shodan <cit.> or Censys <cit.>, fetch and publish meta-information, e.g., security configurations, on Internet-reachable services.
Although these services often helped researchers analyzing the security of connected devices, e.g., cars <cit.> or (insecure) Industrial IoT (IIoT) deployments <cit.>, they usually do not see all deployments <cit.>.
Hence, researchers frequently conduct own active Internet measurement, e.g., using ZMap <cit.>.
On the web, these measurements allowed to analyze the deployment of new TLS versions <cit.> and revealed wide security configuration mistakes <cit.> or implementation deficits <cit.>.
Aside the web, researchers assessed the security of SSH services <cit.> and key-value stores leaking confidential data <cit.>.
For the IoT and IIoT, research revealed many deployments relying on vulnerable software <cit.> and communicating without any security mechanism <cit.>, e.g., access control.
Even with built-in security features, operators often configure such services insecurely <cit.>.
For example, a massive reuse of certificates was traced back to a Docker image including certificates and corresponding private keys <cit.> jeopardizing the authenticity of numerous deployments.
Based on this, we claim that it is probable that there are further public Docker images that wrongly include confidential secrets and harm security on the Internet—especially when looking at the sheer size of Docker and Docker Hub.
Although the broad leakage of security secrets in code repositories is well understood, the spread of revealed secrets in Docker images and the introduced security risk for the Internet are unknown.
However, known secret leakage detection techniques and Internet measurements are predestined to shed light on these issues.
§ COMPOSING OUR DATASET
To answer whether Docker image creators actively compromise security secrets by publishing them in openly available Docker images, we set out and retrieve images from Docker Hub (Section <ref>) and publicly reachable private registries (Section <ref>).
§.§ Retrieving Images from Docker Hub
Table <ref> guides through our composition process on Docker Hub, which has three tasks:
* composing a list of repositories,
* selecting one image per repository to widely spread our analysis, and
* identifying the layers the images consist of.
§.§.§ Repositories
Since Docker Hub limits the number of image downloads <cit.> and we cannot download and analyze all 15 million images available on Docker Hub <cit.> due to runtime and bandwidth restrictions, our analysis requires a selection of repositories of interest.
Furthermore, Docker Hub does not support listing all available images to choose from.
Hence, we use specific search terms to get images users retrieve when searching via the Web interface.
Our search terms (which we elaborate in more detail in Appendix <ref>) build two query groups (Table <ref> (left));
Standard comprises mainstream communication protocol names <cit.> and frequently used technologies <cit.> for a wide analysis of images referencing current issues.
For comparison, and to focus on a specific area, we choose the Industrial Internet of Things (IIoT), as past studies showed a great susceptibility to security faults <cit.>; i.e., the IIoT group includes protocol names from this area.
We list the number of repositories covered by our analysis per query group, i.e., the sum of found repositories of all search terms of a group, in Table <ref> (column Repositories-#).
To further convey the prevalence of our search terms, we indicate the minimum, maximum, and 25-, 50-, and 75-percentiles of search results for included terms, i.e., higher values of lower percentiles would imply a higher prevalence.
While both query groups contain terms that lead to no results (min), i.e., the term is not mentioned in any repository name or description, terms in the standard group generate more results due to their closer correlation to frequently used technologies than IIoT protocols (p_25, p_50, p_75).
Docker Hub's API limits the number of results to 10000 (max).
As different search terms lead to overlapping repositories, we further report on the distinct number of repositories gradually, i.e., per query group, and overall.
In total, we gathered distinctnumrepooverall distinct repositories subject to our study of which standarddistinctpctrepopergrouponly are uniquely added by our standard search terms and iiotdistinctpctrepopergrouponly by IIoT related search queries.
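This repository collection can be approximated with a small script against Docker Hub's web search API; the endpoint path, field names, and search terms below are assumptions for illustration and not the exact crawler used in this study.

```python
import requests

SEARCH_URL = "https://hub.docker.com/v2/search/repositories/"  # assumed web-search endpoint

def search_repositories(term: str, max_pages: int = 3) -> set[str]:
    """Collect repository names returned for one search term."""
    names, url, params = set(), SEARCH_URL, {"query": term, "page_size": 100}
    for _ in range(max_pages):
        payload = requests.get(url, params=params, timeout=10).json()
        names.update(r["repo_name"] for r in payload.get("results", []) if "repo_name" in r)
        url, params = payload.get("next"), None   # follow pagination links, if any
        if not url:
            break
    return names

if __name__ == "__main__":
    terms = ["mqtt", "opc-ua", "nginx"]           # illustrative search terms only
    repositories = set().union(*(search_repositories(t) for t in terms))
    print(f"{len(repositories)} distinct repositories")
```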
§.§.§ Images
Table <ref> (column Images-#) indicates how many images were available in total over the distinct repositories of a search group.
While repositories mostly contain different images, including the same software in other versions and thereby comprising similar files, we choose to analyze one tag per repository to spread our analysis as widely as possible.
Here, we select images tagged with latest, which is used as Docker's default and typically includes the newest version of an image.
However, not all repositories contain images tagged with latest (as shown in Table <ref>, column Images-latest).
Here, we select the image with the latest changes (as reported by Docker Hub's API).
Empty repositories (Table <ref>, column Images-none), i.e., repositories with no image layers available, cannot include any secrets.
Besides the number of images that are covered by our study (column Images-analyzed), we also report on the age of the images to analyze how long they are already available on Docker Hub.
The ages of images included in both query groups roughly follow the same distribution, indicating that, although the number of images found by our IIoT-related queries is lower, image creators update their images at the same frequency as creators of images in our Standard group.
§.§.§ Layers
While we report on the number of layers included in all images (Table <ref> column Layers-#), different images often share the same layers, e.g., layers from frequently used base images.
Hence, to speed up our search for leaked secrets, we analyze each distinct layer only once.
We show the distinct number of layers gradually, i.e., per query group, and overall.
To cover all distinctnumrepooverall repositories, we analyze distinctnumlayersoverall layers (standarddistinctpctlayersgroup uniquely added by Standard-related, iiotdistinctpctlayersgroup by IIoT-related repositories).
§.§ Images from Private Docker Registries
Since image creators might upload sensitive images preferably to private registries, we want to include images from these registries in our analysis.
Table <ref> shows our steps taken to extend our dataset with images from private registries, i.e., we search private registries, and, subsequently, include a subset of available layers.
§.§.§ Find Private Registries and Repositories
To find publicly reachable Docker registries, we scan the complete IPv4 address space for services running on the standard port for Docker registries, i.e., TCP port 5000, under comprehensive ethical measures (cf. Appendix <ref>) twice to analyze short-term fluctuations (Table <ref> (left)).
Both times, we perform a TCP SYN scan using <cit.>, identifying hosts running a service behind this port and subsequently send an HTTP request as defined by Docker's Registry API <cit.> for verification.
Whenever we do not receive a valid HTTP response, we retry via HTTPS.
While we found up to privatemeasurementnumtotalmax private registries on privatemeasurementdatemax, the difference in found registries compared to our scan on privatemeasurementdatemin is due to registries in Amazon AWS-related ASes that no longer replied after our first scan.
Since these registries only contained one and the same single image (uhttpd), they might relate to another research project, e.g., implementing a registry honeypot.
Contrarily to Docker Hub's API, the API of private registries allows listing available repositories without search terms.
However, we limit our requests to receive a maximum of 100 repositories per registry to prevent any overloads.
As such, the found private registries provide privatemeasurement220801repositorysum resp. privatemeasurement220806repositorysum repositories.
Since the registries do not implement access control for read access, clients are able to download all included images.
Notably, write access is also not restricted by default <cit.>, i.e., attackers might be able to inject malware.
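A condensed version of this registry discovery step, i.e., probing the standard port and listing up to 100 repositories via the catalog endpoint, could look as follows; the timeouts, disabled certificate validation, and placeholder address are simplifications for illustration.

```python
import requests

def probe_registry(host: str, port: int = 5000, max_repos: int = 100):
    """Check whether a host answers like a Docker registry and list its repositories."""
    for scheme in ("http", "https"):                      # retry via HTTPS if HTTP fails
        base = f"{scheme}://{host}:{port}"
        try:
            # Certificate validation disabled because many private registries
            # use self-signed certificates (illustration only).
            response = requests.get(f"{base}/v2/", timeout=5, verify=False)
        except requests.RequestException:
            continue
        # Registries announce themselves via this header (status 200 or 401).
        if response.headers.get("Docker-Distribution-Api-Version", "").startswith("registry/2"):
            catalog = requests.get(f"{base}/v2/_catalog", params={"n": max_repos},
                                   timeout=5, verify=False)
            if catalog.status_code == 200:
                return catalog.json().get("repositories", [])
            return []                                     # registry found, listing denied
    return None                                           # no registry behind this port

if __name__ == "__main__":
    print(probe_registry("198.51.100.23"))                # TEST-NET address as placeholder
```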
Being publicly available on private registries yet not filtered by any search terms, the content of these images is of special interest.
Often, the repository name indicates the image's content and thus allows conclusions about widely distributed applications; over both measurements, uhttpd is the most recurring repository name (recurring privatemeasurement0sum times, but only during our first scan).
Repository names on the second and third place indicate proxy and cloud services where image creators might have included security secrets before uploading them to their registry.
Beyond the scope of security secrets, other repository names occurring less often, e.g., or , imply that image creators might include confidential software, source code, private data, or information on systems especially worthy of protection in openly available Docker images.
§.§.§ Image and Layer Selection
For all found repositories, we collect the lists of available images and their tags (Table <ref> (center)).
Although private registries typically do not implement any rate limiting like Docker Hub, we do not want to overload found registries or their Internet connections.
Hence, to spread our analysis as far as possible but limit the load on each registry, we choose one tag per image.
Similar to our selection process on Docker Hub, in each repository we typically select the image tagged as latest to download the corresponding manifest.
Whenever no latest image is available, we sort all available images naturally by their tag (to account for version numbers as tags) and select the maximum (i.e., the newest version), as the API does not provide any information on the latest changes.
Subsequently, we download the corresponding image manifests to retrieve accompanying layers.
To further limit load on Internet connections of found registries, we do not download all available layers for included secrets.
Instead, we randomly select layers of chosen images such that the sum of their sizes does not exceed 250 MB per registry and per measurement.
All in all, we added privatenumdistinctlayersselected layers from private registries to our dataset.
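The tag and layer selection can be sketched as follows; the natural sorting is reduced to a plain maximum, and the size budget (assumed to be 250 MB, in bytes below) is encoded directly, both simplifications of the procedure described above.

```python
import random
from typing import Optional

import requests

MANIFEST_TYPE = "application/vnd.docker.distribution.manifest.v2+json"

def pick_tag(base: str, repository: str) -> Optional[str]:
    """Prefer the latest tag; otherwise fall back to the lexicographically largest tag."""
    tags = requests.get(f"{base}/v2/{repository}/tags/list", timeout=5).json().get("tags") or []
    if "latest" in tags:
        return "latest"
    return max(tags, default=None)   # crude stand-in for natural version sorting

def sample_layers(base: str, repository: str, tag: str, budget: int = 250 * 2**20) -> list:
    """Randomly pick layers of one image until a size budget (in bytes) is reached."""
    manifest = requests.get(
        f"{base}/v2/{repository}/manifests/{tag}",
        headers={"Accept": MANIFEST_TYPE},
        timeout=5,
    ).json()
    layers = manifest.get("layers", [])
    random.shuffle(layers)
    selected, used = [], 0
    for layer in layers:
        if used + layer.get("size", 0) > budget:
            continue                  # skip layers that would exceed the budget
        selected.append(layer["digest"])
        used += layer.get("size", 0)
    return selected
```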
In parallel to Docker Hub numerous private registries exist providing images to the public.
Overall, we assemble a dataset of numconsideredlayersoverall layers from numnonemptyimages images subject to our future research.
Furthermore, private registries might allow attackers to, e.g., inject malware, potentially infecting container deployments at scale as well.
§ LEAKED SECRETS IN DOCKER IMAGES
Next, we search in considered images for included secrets (Section <ref>), discuss the origin of affected images to later evaluate remedies (Section <ref>), and analyze also found certificates compromised due to private key leakage to estimate arising risks (Section <ref>).
§.§ Searching for Secrets
To analyze available images for included secrets, we align our approach to established methods <cit.>, i.e., we choose and extend regular expressions identifying specific secrets and match these on files and environment variables.
Additionally, we extensively filter our matches to exclude false positives.
§.§.§ Regular Expression Selection
We base our selection of regular expressions on previous work to find secrets in code repositories <cit.> (we further elaborate on our election process and expressions in Appendix <ref>).
Table <ref> (left) names the domains of secrets that our selected expressions match and indicates how attackers could misuse these secrets.
We start with regular expressions composed by Meli et al. <cit.> due to their selection of unambiguous expressions (reducing false positives) matching secrets with a high threat when leaked.
We extend their expressions for private keys to match a larger variety, e.g., also OpenSSH private keys.
Moreover, we widen the set by expressions matching API secrets of trending technologies <cit.> based on match rules from TruffleHog <cit.>.
However, TruffleHog's rules are relatively ambiguous and incur many false positives, which TruffleHog filters by validating the API secrets against their respective endpoints.
As our ethical considerations do not allow for any further use of the secrets (cf. Appendix <ref>), we focus on rules which expect at least one fixed character and later add further filtering and verification steps.
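To illustrate the flavor of such rules, the sketch below defines two expressions in Python: a PEM private-key pattern and an API-secret rule anchored on a fixed prefix (here, AWS access key IDs). These are simplified stand-ins for the actual rule set adopted from prior work and TruffleHog.

```python
import re

# Illustrative expressions only; the study's rule set is broader.
PATTERNS = {
    "private_key": re.compile(
        r"-----BEGIN (?:RSA |DSA |EC |OPENSSH |ENCRYPTED )?PRIVATE KEY-----"
        r".*?-----END (?:RSA |DSA |EC |OPENSSH |ENCRYPTED )?PRIVATE KEY-----",
        re.DOTALL,
    ),
    # API-token rules anchor on at least one fixed character sequence to
    # reduce ambiguity, here the well-known AWS access-key-ID prefix.
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
}

def find_secrets(text: str) -> dict[str, list[str]]:
    """Return all pattern matches found in one blob of text."""
    return {name: pattern.findall(text) for name, pattern in PATTERNS.items()}

if __name__ == "__main__":
    sample = "key = AKIAABCDEFGHIJKLMNOP"   # fabricated, non-functional example
    print(find_secrets(sample))
```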
§.§.§ Matching Potential Secrets
To analyze whether image layers include secrets, we match the selected regular expressions on the images as follows (we will open-source our tool on acceptance of this paper):
We download and decompress the image layers and then match our regular expressions on the included files.
Moreover, we recursively extract archive files up to a depth of 3 and match again.
As API documentations often suggest setting secrets in environment variables and not writing them into files, we analyze set variables.
Since Docker allows downloading the small image configuration containing set variables aside of the image, i.e., potential attackers do not have to download and search through all files to find included secrets, we analyze variables separately:
As such, we only download the image configuration file and iterate our regular expression over set environment variables.
Here, we adapt the API expressions, as some expect a specific term before the secret (cf. Table <ref> in Appendix <ref>), e.g., the service name as part of a variable name.
As the variable names and values are separated in the configuration file, we also split the according expressions and match them individually.
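A condensed version of this matching step, operating on one downloaded layer blob and recursing into nested archives up to depth 3, could look as follows; the single pattern, the size limit, and the archive-name heuristic are simplifying assumptions.

```python
import io
import re
import tarfile

SECRET_RE = re.compile(rb"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----")  # illustrative
MAX_DEPTH = 3                      # nested archives are unpacked up to this depth
MAX_FILE_SIZE = 20 * 2**20         # skip very large files to bound memory use (assumption)

def scan_tar(data: bytes, depth: int = 0) -> list[str]:
    """Scan one (possibly nested) tar archive for secret matches."""
    findings = []
    try:
        archive = tarfile.open(fileobj=io.BytesIO(data))   # transparent (de)compression
    except tarfile.TarError:
        return findings
    for member in archive:
        if not member.isfile() or member.size > MAX_FILE_SIZE:
            continue
        content = archive.extractfile(member).read()
        if SECRET_RE.search(content):
            findings.append(member.name)
        # Recurse into archives shipped inside the layer, e.g., bundled .tar.gz files.
        if depth < MAX_DEPTH and member.name.endswith((".tar", ".tar.gz", ".tgz")):
            findings.extend(scan_tar(content, depth + 1))
    return findings

def scan_layer(path: str) -> list[str]:
    """Entry point for one downloaded layer blob (a gzip-compressed tar)."""
    with open(path, "rb") as handle:
        return scan_tar(handle.read())
```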
Table <ref> (center) lists for each secret domain how many matches and how many distinct matches we found in both, image content and environment variables.
Notably, while only covering two services, i.e., Facebook and Twitter, the expressions in the Social Media domain matched most often over all domains, which already indicates that API secrets of this domain are often subject to leakage.
The high redundancy of the matches, visible as the significant decrease between distinct and non-distinct matches, already hints at invalid matches, e.g., private keys or example API tokens prevalent in unit tests or documentation in several layers.
Indeed, the most recurring match (mostreoccurringnumocc times in mostreoccurringnumlayer different layers) is an example key for mostreoccurringrule from a library documentation which creators usually include in their images.
We thus validate our matches extensively.
§.§.§ Match Validation
To exclude test keys for cryptographic libraries, example API secrets, and completely invalid matches to get a near lower bound of harmful leaked secrets in Docker images, we use different filters depending on the secret type.
While we show the number of resulting valid secrets in Table <ref> (right), Figure <ref> details the filtering results separated by the match's origin, i.e., image content or environment variable and domain.
Private Keys
Our regular expressions for private keys match on PEM or XML formatted keys.
Thus, we can first exclude every match that is not parsable (filter Unparsable).
Figure <ref> shows that only a minority of all potential private keys in image layers are unparsable, underlining that image creators include and compromise private keys actually usable in final Docker containers for practical operations.
Contrarily, the single match within the environment variables is only a key fragment and thus not parsable.
Still, we expect a high number of software test keys in Docker images among found keys, as they are part of several libraries creators might include in their images, e.g., OpenSSL.
Since users will most likely not use such keys to secure their deployments, we filter out test keys that are included in kompromat <cit.>, a repository listing already compromised secrets (filter Kompromat).
More specifically, we filter keys occurring in RFCs (kompromatfoundrfcnumdistinct), libraries for software tests (kompromatfoundsoftwaretestsnumdistinct), or as special test vectors (kompromatfoundtestvectorsnumdistinct).
To also account for software test keys that are not available in kompromat, we analyze the file paths where respective keys were found (filter File).
While we do not generally exclude all paths containing signal words indicating test or example keys, as users might use such paths also for keys they generated and use in practice, we apply different measures.
For instance, based on locations of test keys identified using kompromat, we deliberately exclude matches in similar locations, i.e., keys within directories where we already detected test keys and all parent directories under which we find more than 2/3 test keys.
Last, we exclude file paths typically used by libraries (cf. Appendix <ref>), e.g., , as there is a lower chance that users adapt their keys here.
Figure <ref> shows that these filters process the largest share of excluded private key matches.
It further indicates that kompromat only includes a minority of software test keys, i.e., is not directly usable to exclude all false-positive matches.
Still, many of the found keys are not filtered and, thus, most likely, no software test keys.
In total, we found validprivatekeyvalidnumdistinctmatchestotal valid private keys potentially in use in practice (cf. Table <ref> (right)).
Since all of these keys are located in files, attackers would have to download respective image layers to get access and not only meta information to retrieve environment variables.
Still, since these keys are publicly available and thus compromised, usage in production puts authentication at stake, i.e., attackers can perform impersonation attacks.
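The private-key filtering pipeline can be condensed into a sketch like the following, where the kompromat comparison is reduced to a fingerprint lookup and the path heuristics to an illustrative subset of signal words; both are simplifications of the filters described above.

```python
import hashlib
from cryptography.hazmat.primitives import serialization

# Directory fragments that typically hold library test keys; illustrative subset only.
SUSPICIOUS_PATH_PARTS = ("/test/", "/tests/", "/example", "/node_modules/")

def parses_as_private_key(pem_bytes: bytes) -> bool:
    """Filter 'Unparsable': keep only matches that load as a PEM private key."""
    try:
        serialization.load_pem_private_key(pem_bytes, password=None)
        return True
    except (ValueError, TypeError):
        return False

def is_known_test_key(pem_bytes: bytes, kompromat_fingerprints: set[str]) -> bool:
    """Filter 'Kompromat': compare against a set of already compromised keys."""
    return hashlib.sha256(pem_bytes).hexdigest() in kompromat_fingerprints

def plausible_production_key(path: str, pem_bytes: bytes, kompromat: set[str]) -> bool:
    """Combine the three filters sketched in this section."""
    if not parses_as_private_key(pem_bytes):
        return False
    if is_known_test_key(pem_bytes, kompromat):
        return False
    return not any(part in path for part in SUSPICIOUS_PATH_PARTS)
```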
API Secrets
Since our ethical considerations deter us from validating API secrets against their service endpoints (cf. Appendix <ref>) as applied by TruffleHog <cit.>, and related methods for false positive detection focus on matches in source code <cit.>, which is not prevalent in Docker images, we need alternative measures to filter invalid matches.
By manually supervising our filtering, we ensure that the final set only includes valid-looking API secrets.
Based on invalid matches in GitHub code repositories <cit.>, we expect human-created example keys that contain keywords or consecutive character sequences, which we must exclude (filter Sequence).
To filter consecutive sequences, we search for segments consisting of ascending, descending (both with a length of four), and repeating characters (with a length of three).
Furthermore, we filter matches including sequences that occur unusually often, i.e., we create (frequencyngrammin, frequencyngrammax)-character-grams of all matches, exclude grams created over fixed parts of our regular expressions as well as grams only containing digits, and count the number of occurrences over all API matches.
To account for randomly reoccurring grams, we filter all matches that include grams occurring frequencyNgramsTimeFactor times more often than the average.
We manually ensured that our filter is not too restrictive but also not to loose leaving often reoccurring grams out.
Figure <ref> shows that this filtering excludes a large share of matches.
Interestingly, the most recurring gram could be traced back to DNA sequences in images related to bioinformatics, underpinning the large variety of different and unexpected file types occurring in Docker images.
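A compact sketch of these two filters, with placeholder values for the run lengths, gram sizes, and frequency factor, could look as follows; the exclusion of grams stemming from fixed regex parts or containing only digits is omitted for brevity.

```python
from collections import Counter

def has_trivial_sequence(candidate: str, run_len: int = 4, repeat_len: int = 3) -> bool:
    """Flag matches with ascending/descending runs (length 4) or repeated characters (length 3)."""
    asc = desc = rep = 1
    for prev, cur in zip(candidate, candidate[1:]):
        diff = ord(cur) - ord(prev)
        asc = asc + 1 if diff == 1 else 1
        desc = desc + 1 if diff == -1 else 1
        rep = rep + 1 if diff == 0 else 1
        if asc >= run_len or desc >= run_len or rep >= repeat_len:
            return True
    return False

def frequent_gram_filter(candidates: list[str], low: int = 4, high: int = 6,
                         factor: float = 10.0) -> set[str]:
    """Drop matches containing character grams that occur far more often than average.

    Gram lengths and the factor are placeholders for the values used in the study.
    """
    grams = Counter()
    for candidate in candidates:
        for n in range(low, high + 1):
            grams.update(candidate[i:i + n] for i in range(len(candidate) - n + 1))
    if not grams:
        return set(candidates)
    threshold = factor * (sum(grams.values()) / len(grams))
    suspicious = {gram for gram, count in grams.items() if count > threshold}
    return {c for c in candidates if not any(g in c for g in suspicious)}
```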
Similar to filtering private key matches by their file paths, we also filter API matches occurring in manually selected paths (filter File, cf. Appendix <ref>).
Essentially, we revisited the location and file types of all matches and excluded paths that most likely do not include any valid secrets compromised by publishing these in Docker images.
Figure <ref> indicates that the filtered paths often also include matches filtered by our sequence filter and thus that libraries include strings similar to secrets, e.g., in their documentation.
Still, after manual revision of the remaining matches, we conclude that rules which match on a fixed term before the secret, e.g., the service name, and then allow a specific length of characters are too ambiguous for usage on files in Docker images as they match on arbitrary content, e.g., on hashes with the service name in front.
We thus decide to exclude matches of these rules from our further analysis (gray in Table <ref> (left)), i.e., consider these matches invalid, to ensure the integrity of our further results.
Still, a minority of these matches might be valid, potentially enabling attackers to compromise production services or access confidential data.
Comparing the filter results of API secret matches in files and environment variables, the share of valid matches in variables is significantly higher than in files, indicating that image creators are less likely to include secret placeholders in variables.
Still, as Table <ref> (right) shows, most secrets are located within the images.
Thus, attackers have a higher chance of finding valid secrets when downloading both environment variables and image content.
In total, we found apinumdistinctmatches distinct API secrets in Docker images, mostly related to services from the cloud domain (validapicloudvalidnumdistinctmatchestotal secrets).
Although we cannot prove the functionality of these secrets, the occurrence of apicloud1numdistinctmatches secrets for the apicloud1rule or apicloud2numdistinctmatches secrets for the apicloud2rule indicate that attackers might be able to reconfigure cloud services maliciously, e.g., by editing DNS or VM options.
Additionally, we found evidence for secrets allowing attackers to access private data from social media (validapisocialmediavalidnumdistinctmatchestotal secrets), or even access financial services (validapifinancialvalidnumdistinctmatchestotal secrets, most matches: apifinancial0rule).
Notably, although we focused our image search partly on IoT terms, we found no valid secrets from selected IoT services.
§.§.§ Secrets Owned by Single Users
Based on findings over leaked secrets on GitHub <cit.>, we expect most valid secrets to reside in images of single users (as users do not share their secrets intentionally).
Contrarily, invalid matches, e.g., library test keys, would mainly reside in images of multiple owners.
Thus, to check whether the matches we identified as valid secrets are located in images of single users, we analyze the number of different owners that include a specific secret in their images.
To this end, for images from Docker Hub, we consider the repository owner (embedded in the repository name) as the owner of a secret.
For private registries, we consider the registry's IP address as the owner (assuming that owners only run a single registry and neglecting that registries might use different (dynamic) IP addresses).
Figure <ref> shows that the largest share of valid secrets indeed occurs in images of single owners.
validmatchmultiuserprivatekeyFalsepct of private keys (validmatchmultiuserprivatekeyFalsenum keys) and validmatchmultiuserapiFalsepct of API secrets (validmatchmultiuserapiFalsenum secrets) reside in images of single owners underpinning that these should be protected.
Moreover, we can trace validmatchmultiuserlayer0privatekeyTruenum private keys and validmatchmultiuserlayer0apiTruenum API secrets of multiple owners back to inheritance.
These secrets were already included in the base image, but w.r.t. the overall occurrence, we conclude that secret spread due to inheritance is no major problem.
To responsibly inform image creators about leaked secrets in their images, we reach out to them whenever possible (numemaildisclosure extractable and valid e-mail addresses) and also contacted the operator of Docker Hub (cf. Appendix <ref>).
Early on, we received notifications from creators that removed found secrets from their images.
totalvalidmatches found secrets show that image creators publish confidential information in their publicly available Docker images.
As attackers have access to these secrets, authentication and other security mechanisms relying on them are futile, potentially leading to compromised servers or leaked privacy-sensitive data.
§.§ Origin of Leaked Secrets
Next, we analyze where the validated secrets stem from to see whether specific images are more affected and why.
To this end, we examine the distribution of affected images and compare between private registries and Docker Hub, as well as IIoT specific and Standard images.
Moreover, we evaluate which operation in the original Dockerfile led to the insertion of secrets and inspect the file paths where they reside to get an intuition for their usage.
§.§.§ Docker Hub Leads Before Private Registries
We already discovered that private registries include potentially sensitive images.
However, until now, it remains unclear whether images on these registries are more often subject to secret leakage than images from Docker Hub, e.g., due to creators believing that these are unavailable for the public.
Thus, we analyze whether leaked secrets occur more often in images from Docker Hub or from private registries.
Overall, we found that numaffectedimages images (pctaffectedimages of images analyzed) contain valid secrets; pctaffectedimagesdockerhub of images from Docker Hub and pctaffectedimagesprivate of images from private registries are affected.
Thus, creators upload secrets to Docker Hub more often than to private registries indicating that private registry users may have a better security understanding, maybe due to a deeper technical understanding required for hosting a registry.
Yet, both categories are far from being leak-free.
For Docker Hub, besides the increased fraction of leaked secrets, we see an issue for others, i.e., other users can easily deploy containers based on these images.
Thus, there is a higher chance that their containers base their security on included and compromised secrets.
For example, a shared certificate private key could lead to an impersonation attack.
In case of shared API secrets, all deployed containers might use the same API token leading to exhausted rate limits in the best case, but maybe also to overwritten or insufficiently secured private data.
As a single API token does not allow fine-granular exclusions, i.e., it is either valid or revoked for all users, a revocation would also interfere with benign users.
Independent of their origin, attackers could equally misuse the secrets we found to leverage authentication or access privacy- or security-sensitive data.
As such, both user groups of Docker Hub and private registries leak sensitive information, be it through unawareness or a deceptive feeling of security.
§.§.§ Domains are Similarly Affected
For our image selection on Docker Hub, we specifically included search terms relating to the IIoT, as past research has shown significant security shortcomings in this area.
However, until now it is open whether images of a certain domain are subject to revealed secrets more frequently than other images.
To answer this question, we trace images that include secrets back to the query group that led to their inclusion.
We discovered that affectedstandardrepositorypct of the images only found using queries from the Standard query group and affectediiotrepositorypct of images only from the IIoT group include valid secrets[Images found by both query groups are not included.].
Thus, in case of secret leakage via Docker images and based on our selected search terms, the IIoT domain does not perform worse than our Standard domain.
However, it underpins that the problem of secret leakage in Docker images is a prominent issue for all domains.
§.§.§ Fresh Private Keys and Copied API Secrets
To find countermeasures against secret leakage in Docker images, it is important to understand how these leaked secrets became part of Docker images.
More specifically, for private keys, it is unclear whether creators execute commands in the Dockerfile to create fresh keys, which are then published in images, or whether they manually add them, i.e., using COPY or ADD in a Dockerfile.
Additionally, both, private keys and API secrets, could be indirectly included through other means, e.g., by cloning Git repositories or downloading further data.
Figure <ref> shows that while most API secrets are typically inserted by file operations (File), e.g., copied from the image creator's host system, private keys are predominantly included by executing a command within the Dockerfile (Exec.)[Secrets can be associated with both File and Exec. operations, e.g., when a secret is first added to the image and then copied or moved internally.].
Thus, private keys might be either downloaded or generated during the creation process.
To further trace the insertion of secrets in Exec. layers back to the responsible executed commands, we analyze these commands.
Since image creators often concatenate several bash commands whose output is then included in a single layer without any opportunity to associate files (and thus secrets) to a specific command, we count each of the commands related to the leakage of a secret.
We show the most prominent of all validmatchnumdistinctcommands commands associated with secret leakage in Figure <ref>.
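A simplified version of this association step can operate on the image configuration JSON, whose history entries record the command that created each layer; the classification rules and keyword list below are illustrative assumptions.

```python
import json

# Commands whose appearance in a layer's history entry we treat as potential
# sources of key material or externally fetched secrets (illustrative list).
KEY_COMMANDS = ("ssh-keygen", "openssh-server", "openssl req", "git clone", "wget", "curl")

def classify_layers(config_path: str) -> list[dict]:
    """Associate each non-empty layer with the Dockerfile operation that built it."""
    with open(config_path, encoding="utf-8") as handle:
        history = json.load(handle).get("history", [])
    layers = []
    for entry in history:
        if entry.get("empty_layer"):
            continue                                  # metadata-only instructions
        command = entry.get("created_by", "")
        layers.append({
            "command": command,
            # COPY/ADD instructions surface as file operations, everything else
            # (typically RUN) as executed commands.
            "kind": "File" if ("COPY" in command or "ADD" in command) else "Exec.",
            "key_related": [term for term in KEY_COMMANDS if term in command],
        })
    return layers
```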
In fact, privatekeyinstsshdpct of private keys were generated in layers where image creators installed the OpenSSH server.
Since the installation triggers the generation of a fresh host key pair, it is automatically included in the image.
While the procedure of automatic key generation is beneficial on real hardware, i.e., users are not tempted to reuse keys on different hosts, in published Docker images it automatically leads to compromised keys and thus puts the authenticity of all containers relying on this image in danger.
A further privatekeysshkeygenpct of found private keys were generated by a direct call of ssh-keygen, e.g., to generate fresh SSH client key material, implying that the generated but compromised key material is planned to be used in production.
Given the massive secret leakage on GitHub <cit.>, we also expect secrets to be included in images by cloning Git repositories.
However, only a minority of secrets can be associated with Git, suggesting that the sets of users leaking secrets via Docker and GitHub are distinct. Furthermore, only a minority of secrets were downloaded during image creation, both indicating that the secrets we found were most likely exclusively leaked in Docker images and underpinning that they are actually worth being protected.
§.§.§ File Paths Indicate Usage
To further reason about the usage of our found secrets, we analyze their file paths within the images assessing where secrets stem from and how services apply them.
Separated by private keys and API secrets, Figure <ref> shows the distribution of secrets throughout the directory structure of all images and focuses on the top seven paths.
We found the majority of private keys in the default directory for SSH host keys, underpinning a high prevalence of compromised SSH host keys.
Another large share occurs in the default TLS key directory, suggesting compromised keys used for host authentication via TLS.
This path is also the location for TLS default (“snakeoil”) keys that are used if no other information is provided.
They are auto-generated when the corresponding package is installed such that every host possesses a unique default key pair.
However, when installed during the creation of Docker images, the key is included in the image and, thus, compromised when shared.
Based on the key's filename, indeed, we found numsnakeoiletcssl of such keys which are potentially used to offer TLS services with broken authenticity to the public Internet.
Even more alarming, we found keys lying in PKI-related directories, indicating that included keys are associated with a Public Key Infrastructure (PKI) and thus potentially destined to offer services to a higher number of users.
Furthermore, one directory contains private keys used in relation to the IoT and, as per the repository names, for authentication using IoT protocols like CoAP and MQTT.
Thus, attackers possessing these private keys can leverage the authentication of all connections users establish to each container created based on these images.
In fact, attackers then can access or alter transmitted confidential information, e.g., privacy-sensitive user data or commands of IoT services potentially impacting cyber-physical systems.
In addition, we found keys in locations where SSH client key pairs typically reside.
Hence, these keys might enable attackers to take over SSH servers that trust these keys and to access confidential data.
Contrarily, found API secrets are distributed more evenly through the directory structure.
We found the largest share in the folder suggested for including own applications in Docker images <cit.>, underlining that image creators compromise their own applications' API secrets.
While the same holds for a similar application directory, another large share of secrets resides in Firefox profile directories, stemming from cached JavaScript files that contain Google Service API secrets.
Although these secrets are most likely usable in combination with Google Maps or Google Analytics and thus meant to be shared with website visitors, this leakage implies privacy issues:
An attacker could retrace the creator's browsing history, which apparently exists due to the cache being filled, which could show potentially sensitive information.
In addition, we found a large share of Google API secrets (both Cloud and Services) in further directories.
Since we do not use API tokens for further validation (cf. Appendix <ref>), we cannot be entirely sure whether these secrets are usable or only generated for testing purposes.
However, manual supervision of the matches and including files suggest that they could be actually in use.
pctaffectedimages of analyzed images contain and thus leak secrets.
While the majority stems from public Docker Hub images regardless of their domain, also private registries leak a significant number of secrets.
Notably, associated file paths and commands imply their production use and that various authentication mechanisms are futile.
§.§ Compromised Certificates
To further understand the severity of potentially compromised systems, we now focus on found certificates as they provide various information on their relations and use cases.
Thus, we research the trust chain, validity, and usage parameters of knowncompromizedcerts compromised certificates occurring in Docker images.
Trust Anchors
While self-signed certificates indicate the usage of certificates in controlled environments, i.e., clients need a safelist with all certificates they can trust, CA-signed certificates imply the usage at larger scale as these are trusted by all clients having a corresponding root certificate installed.
We consider certificates where the issuer and common name match as self-signed, and as CA-signed otherwise.
For CA-signed certificates, we consider those which we can validate against widespread root stores[Stores from Android, iOS/MacOS, Mozilla NSS, OpenJDK, Oracle JDK, and Windows.] as signed by a public CA, and otherwise signed by a private CA.
We discovered that the majority of found compromised certificates (selfsignedcertspct) are self-signed, but also privatecacerts private CA-signed and casignedcerts public CA-signed certificates.
While all systems relying on these certificates open the door for impersonation attacks, the occurrence of CA-signed certificates is especially alarming as such certificates are typically planned to provide authenticity to many clients/users and are universally accepted.
Thus, knowing these certificates' private keys not only allows attackers to perform Man-in-the-Middle attacks but also enables them to sign malicious software to compromise others' systems.
Validity
As a countermeasure against key leakage, the certificate's limited lifetime forces service operators to request new certificates from time to time, as clients should reject outdated certificates.
Notably, casignedvalidondownload public-CA, privatecavalidondownload private-CA, and selfsignedvalidondownload self-signed certificates were valid when we downloaded their containing image layer, showing that the authenticity of relying services is at stake, i.e., the lifetime does not help in these cases of key leakage.
Interestingly, casignedvalidonhistory public-CA, privatecavalidonhistory private-CA, and selfsignedvalidonhistory self-signed certificates were valid when added to their Docker image (as per the image's history timestamp).
While these larger numbers show that the limited lifetime of certificates helps to mitigate leaked private keys, they also indicate that key leakage in images is an ongoing issue, i.e., more and more private keys are leaked.
Usages
The usage attributes of certificates can optionally indicate the practical use-case of CA-signed certificates and, thus, further help to understand the severity of the private key leakage.
While all public-CA-signed certificates allow for authentication (digital signatures) and casignedparsedFindingextensionsextendedkeyusageserverauth are explicitly declared for server authentication, casignedparsedFindingextensionsextendedkeyusagecodesigning (private-CA: privatecaparsedFindingextensionsextendedkeyusagecodesigning) allow for code signing.
Thus, knowing the private key of these certificates not only allows attackers to perform Man-in-the-Middle attacks but also enables them to sign malicious software to compromise others' systems.
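These certificate checks (trust anchor, validity at insertion time, and usage attributes) can be sketched with the cryptography library as follows; distinguishing public from private CAs via root stores is omitted, and the plain issuer/subject comparison simplifies the common-name matching described above.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def classify_certificate(pem_bytes: bytes, added_at: datetime.datetime) -> dict:
    """Derive trust-anchor type, validity at a given (timezone-aware) time, and usage hints."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    self_signed = cert.issuer == cert.subject            # simplification of the CN comparison
    # The *_utc accessors require a recent cryptography release (>= 42).
    valid = cert.not_valid_before_utc <= added_at <= cert.not_valid_after_utc
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        code_signing = ExtendedKeyUsageOID.CODE_SIGNING in eku
        server_auth = ExtendedKeyUsageOID.SERVER_AUTH in eku
    except x509.ExtensionNotFound:
        code_signing = server_auth = False
    return {"self_signed": self_signed, "valid_when_added": valid,
            "server_auth": server_auth, "code_signing": code_signing}
```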
knowncompromizedcerts found compromised certificates show that leaked private keys can have extensive influence on the authenticity of services and software.
Thus, attackers can impersonate services, decrypt past communications, or sign malware to infect production systems.
§ SECRET USAGE IN THE WILD
Until now, it is open whether the found compromised secrets are used in practice and, if so, to what extent, i.e., whether a single compromised secret is reused due to several Docker containers stemming from the same image.
While we cannot check the validity of API secrets by using them against their destined endpoint due to our ethical guidelines (cf. Appendix <ref>), we can investigate whether hosts on the Internet use found private keys for authentication.
To assess whether Internet-reachable hosts can be subject to impersonation attacks due to secret leakage in Docker images, we check for TLS- and SSH-enabled hosts that base their authentication on compromised private keys by using the Censys database, i.e., 15 months of active Internet-wide measurement results <cit.>.
Here, we search for hosts presenting a public key, i.e., as SSH host key or within a TLS certificate, matching to one of the found compromised keys.
More specifically, we match the fingerprint of public keys in the Censys database on ones extracted from found private keys.
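A minimal sketch of this matching, assuming a pre-exported mapping from public-key fingerprints to hosts, could look as follows; the SHA-256 hash over the DER-encoded SubjectPublicKeyInfo is our assumption of a suitable common fingerprint format.

```python
import hashlib
from cryptography.hazmat.primitives import serialization

def public_fingerprint(pem_private_key: bytes) -> str:
    """SHA-256 over the DER-encoded SubjectPublicKeyInfo of the key's public half."""
    private_key = serialization.load_pem_private_key(pem_private_key, password=None)
    spki = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

def hosts_with_compromised_keys(found_keys: list[bytes],
                                observed: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each compromised key fingerprint to the hosts presenting it.

    `observed` is assumed to map fingerprints (e.g., from a scan-data export)
    to lists of host addresses.
    """
    compromised = {public_fingerprint(key) for key in found_keys}
    return {fp: hosts for fp, hosts in observed.items() if fp in compromised}
```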
In Figure <ref>, we detail how many hosts base their authenticity on found compromised private keys and how often these keys are reused.
While the total number of hosts relying on compromised keys is worrying on its own (20220901numuniquehosts hosts in Oct. 2022), their protocols, even worse, imply sensitive services.
As such, in October 2022, we find MQTT20220901numuniquehosts MQTT and AMQP20220901numuniquehosts AMQP hosts, potentially transferring privacy-sensitive ((I)IoT) data.
Moreover, FTP20220901numuniquehosts FTP, PostgreSQL20220901numuniquehosts PostgreSQL, Elasticsearch20220901numuniquehosts Elasticsearch, and MySQL20220901numuniquehosts MySQL instances serve potentially confidential data.
Regarding Internet communications, we see SIP20220901numuniquehosts SIP hosts used for telephony as well as SMTP20220901numuniquehosts SMTP, POP320220901numuniquehosts POP3, and IMAP20220901numuniquehosts IMAP servers used for email.
Since these hosts are susceptible to impersonation attacks due to their leaked private keys, attackers can eavesdrop, relay, or alter the sensitive data transmitted here.
Aggravatingly, we also find services with administrative relevance:
SSH20220901numuniquehosts SSH servers rely on SSH20220901corrFingerprint compromised host keys and Kubernetes20220901numuniquehosts Kubernetes instances use leaked keys opening doors for attacks which can lead to remote-shell access, extension of botnets or further data access.
The comparably low number of compromised keys used (compared to knowncompromizedhostkeys found SSH host keys) is probably due to a missing need for SSH servers in Docker containers, as other mechanisms, e.g., docker exec, already allow shell access.
Furthermore, we see LDAP20220901numuniquehosts LDAP instances relying on leaked secrets.
As LDAP is used as a base for user authentication on attached systems, the integrity of unknown many other clients is at stake.
For instance, attackers could grant themselves root access to a myriad of systems.
The number of actually used keys is low compared to the number of hosts which rely on them, indicating that a few Docker images lead to numerous compromised container deployments.
Thus, the simplicity of Docker to deploy services based on ready-to-use images puts the authenticity of several instances most likely operated by different users under threat.
In this regard, HTTPS hosts stand out in particular.
HTTP20220901numuniquehosts HTTPS hosts use HTTP20220901corrFingerprint different compromised private keys showing that the reuse of these keys is rampant for Web services.
Thus, attackers can perform Man-in-the-Middle attacks to alter webpages on their delivery or data sent to the server.
Figure <ref> also underpins that the key usage of compromised keys is long-lasting and rising, i.e., over the complete available period the number of compromised systems grew from 20210501numuniquehosts (relying on 20210501corrFingerprint compromised keys) to 20220901numuniquehosts hosts (20220901corrFingerprint keys) indicating that container images with compromised certificates or SSH host keys included are increasingly used.
Thus, the authenticity of more and more systems is futile, offering an ever-growing attack surface.
While our study is significantly driven by initially found compromised keys in Docker images in the area of the IIoT, Censys does not identify secured IIoT protocols other than AMQP and MQTT via TLS.
Thus, we perform own Internet-wide measurements for a deeper inspection of whether IIoT services also use compromised certificates, e.g., for authentic communication via OPC UA.
To this end, we select ten secure IIoT protocols from recent literature <cit.> and mimic the measurement strategy proposed there.
Our results show that besides the already large number of compromised AMQP and MQTT hosts, only 2 CoAP hosts use 2 different leaked keys from Docker containers.
That we do not find substantially more compromised hosts using other IIoT protocols underlines that the issue of key leakage is not an IIoT-specific hotspot but a general problem.
20220901numuniquehosts hosts use 20220901corrFingerprint compromised private keys found in Docker images for authentication on the Internet and encompass deployments using, i.a., MQTT, SMTP, and PostgreSQL.
This widespread usage allows attackers to eavesdrop on confidential information or alter sensitive data, e.g., from the IoT, webpages, or databases.
§ DISCUSSION, LIMITATIONS & MITIGATIONS
The outcome of our work has different aspects.
We have seen that numerous private keys are compromised by image creators publishing their images via Docker registries and shown that security relies on these secrets in practice.
Still, future work could investigate the limitations of our approach or implement the derived mitigation opportunities from our results.
View on Available Images
Due to rate and computation-time limits and comprehensive ethical considerations (cf. Appendix <ref>), we could not analyze all available images on Docker Hub and private registries.
Thus, we might have missed secrets included in single layers or complete images that were not subject to our study.
In this light, the absolute number of found secrets is already very alerting.
Also, in relative numbers, our results should be representative for the selected groups due to our sampling.
Yet, the selected groups, i.e., our Docker Hub search terms, might lead to skewed results overestimating the overall population.
For instance, images that are not targeted at protocols might have been created with fewer secrets.
Thus, we opted for a broad body of terms based on, i.a., public polls <cit.> to avoid any bias.
Moreover, our private registry analysis has not been targeted but included randomly sampled layers, and we still found a similar share of affected images as on Docker Hub.
As such, we believe that our relative results are—at least in their magnitude—representative for the overall population of Docker images publicly available.
Missing Methods to Check API Secrets
While relying on Internet-wide measurements was a suitable measure to assess the usage of compromised private keys for the authenticity of Internet-reachable services, we could not check whether found API secrets are functional.
The only option would be to contact the corresponding API's endpoint to check for the acceptance of found credentials.
However, due to our ethical considerations, we must not use found secrets as such usage might influence other systems or services.
Thus, we cannot validate them against their respective endpoint.
Still, the number of found secrets is worrying and looking at the usage of compromised private keys, we are convinced that many API secrets are also functional.
Causes & Mitigation Opportunities
We have seen both creators actively copying secrets from their local file system into the image, e.g., most of the API secrets but also private keys, incl. certificates, and passively generating key material during the image creation process, e.g., by installing an OpenSSH server.
Both behaviors lead to compromised secrets and affect the security of both image creators and users basing their containers on an image and already included secrets.
Most likely, creators and users are unaware of compromising or using compromised foreign secrets.
In fact, compared to GitHub, which provides a graphical interface to browse published files and potentially notice a mistakenly uploaded secret, files in Docker images and containers cannot be browsed easily, i.e., users barely get an overview on included files.
Furthermore, while Git repositories only include manually added files, images of Docker containers contain a complete system directory tree.
Thus, files with included secrets can hardly be identified.
The mitigation of these problems must be two-fold.
On the one hand, image creators must be warned that they are uploading their secrets to (publicly reachable) Docker registries.
On the other hand, when deploying containers based on downloaded images, users should be informed that included secrets, especially private keys, might already be compromised, putting the authentication of deployed services at stake.
To this end, credential-finding tools such as TruffleHog <cit.> or SecretScanner <cit.> can be integrated on both sides of the Docker paradigm.
When uploading or downloading an image, these tools could then scan all layers of the image for included secrets.
To reduce the number of false positives, for potential API secrets, the tool can also check the secret's function against the respective endpoint (we think this is also ethically correct on the user's side who downloaded the image).
For private keys, the tools could maintain a list of test keys that are usually included in libraries.
Increasing the image creator's awareness regarding the leakage of such secrets should decrease their number in uploaded images.
Additionally, performing a second check at the user deploying a container based on a downloaded image should further decrease the number of services relying on already compromised secrets.
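As one possible integration point on the creator's side, a small wrapper could scan a `docker save` export before pushing, as sketched below; the single illustrative pattern and the legacy layer.tar layout of the export are simplifying assumptions.

```python
import re
import subprocess
import sys
import tarfile

SECRET_RE = re.compile(rb"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----")  # illustrative

def scan_saved_image(tar_path: str) -> list[str]:
    """Scan the layer archives inside a `docker save` export (legacy layout assumed)."""
    hits = []
    with tarfile.open(tar_path) as archive:
        for member in archive:
            if not member.isfile() or not member.name.endswith("layer.tar"):
                continue
            layer = tarfile.open(fileobj=archive.extractfile(member))
            for entry in layer:
                if entry.isfile() and SECRET_RE.search(layer.extractfile(entry).read()):
                    hits.append(f"{member.name}:{entry.name}")
    return hits

def push_if_clean(image: str) -> int:
    """Refuse to push when the exported image contains potential secrets."""
    subprocess.run(["docker", "save", "-o", "image.tar", image], check=True)
    hits = scan_saved_image("image.tar")
    if hits:
        print("refusing to push; potential secrets in:", *hits, sep="\n  ")
        return 1
    subprocess.run(["docker", "push", image], check=True)
    return 0

if __name__ == "__main__":
    sys.exit(push_if_clean(sys.argv[1]))
```

The same scan could run after a pull, warning users that a downloaded image ships secrets that must be considered compromised.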
An additional help could be an API + graphical view for images on Docker Hub, which shows the included files.
This API could also enable third-party solutions similar to those for GitHub <cit.> to easily search for known secret file paths.
§ CONCLUSION
Containerization allows integrating applications and their dependencies in self-containing and shareable images making software deployment easy.
However, when focusing on security, sharing of secrets or using already compromised secrets breaks promises, e.g., authenticity or access control.
Thus, cryptographic secrets must not be included in publicly available container images.
Our analysis of numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax private registries revealed, however, that pctaffectedimages include secrets that should not be leaked to the public.
More specifically, we found a near-lower bound of validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets.
validapicloudvalidnumdistinctmatchestotal API secrets belonging to cloud providers, e.g., apicloud1rule (apicloud1numdistinctmatches secrets), or validapifinancialvalidnumdistinctmatchestotal secrets to financial services, e.g., apifinancial0rule (apifinancial0numdistinctmatches secrets), show that attackers can cause immediate damage knowing these secrets.
Focusing on the leaked private keys, we find that these are also in use in practice: 20220901numuniquehosts TLS and SSH hosts on the Internet rely their authentication on found keys, thus being susceptible to impersonation attacks.
Notably, many private keys are automatically generated when packages are installed during image creation.
While beneficial when running on real hardware, where every computer generates its own key, in container images this process automatically leads to compromised secrets and potentially a vast number of containers with compromised authenticity.
We further discover that especially private registries serve images with potentially sensitive software, most likely not intended to be publicly shared.
Additionally, these registries might not prevent write access enabling attackers to add malware to images.
Our work shows that secret leakage in container images is a real and non-negligible threat.
In particular, the proven use of leaked private keys in practice confirms several of the attack vectors we introduced.
As a countermeasure, the awareness of image creators and users regarding secret compromise must be increased, e.g., by integrating credential search tools into the Docker paradigm.
Funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) — Research Project VeN2uS — 03EI6053K.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy — EXC-2023 Internet of Production — 390621612.
§ ETHICAL CONSIDERATIONS
Our research curates a comprehensive archive of leaked security secrets in Docker images on Docker Hub and private registries whose leakage is again a threat to security.
Moreover, to find private registries and deployments relying their security on leaked secrets, we leverage Internet-wide measurements that can have unintended implications, e.g., high load on single network connections impacting stability or alerting sysadmins due to unknown traffic.
Thus, we base our research on several ethical considerations.
First, we take well-established guidelines <cit.> and the best practices of our institution as the basis for our research.
We handle all collected data with care and inform image creators and Docker Inc., to responsibly disclose our findings (cf. Appendix <ref>).
Moreover, we comply with recognized measurement guidelines <cit.> for our Internet-wide measurements reducing their impact (cf. Appendix <ref>).
§.§ Handling of Data & Responsibilities
During our research, we always only collect and request publicly available data, i.e., our access is limited to publicly available image repositories.
At no time do we bypass access control, e.g., by guessing passwords.
We, thus, cannot download private images.
Still, we revealed that many of the public images contain sensitive security secrets (cf. Section <ref>) which we stored for further analysis.
All found secrets are stored on secured systems.
Furthermore, we refrain from releasing our dataset including these secrets or image names, to not provide an archive of leaked secrets or pinpoints for potential attackers.
While this restriction prevents others from independently reproducing our results, we consider this decision to constitute a reasonable trade-off to protect affected users.
Responsible Disclosure
To further support affected users in removing their secrets from publicly available Docker images, we aim to responsibly disclose our findings.
To this end, we extract e-mail addresses from maintainer variables set in Dockerfiles and furthermore derive addresses from Gravatar accounts linked to affected Docker Hub accounts.
In this regard, we identified numemaildisclosure e-mail addresses we contacted to notify about our possible findings.
Already after a few hours, we received >30 answers from owners appreciating our efforts, fixing their images, or informing us that the image at hand is no longer used.
A handful informed us that no secrets were leaked helping us to refine our filtering.
Moreover, we decided to reach out to the operator of Docker Hub, i.e., Docker Inc., to discuss potential further disclosure to unidentifiable creators.
§.§ Reducing Impact of Measurements
To reduce the impact of our active Internet scans, we follow widely accepted Internet measurement guidelines <cit.>.
Coordination
We coordinate our measurements with our Network Operation Center to reduce the impact on the Internet and to react correspondingly.
Abuse e-mails are handled by informing the senders about the intent of our measurements and how to opt out of them.
As part of this opt-out process, we maintain a blocklist to exclude IPs from our measurements.
External Information
To give external operators information about our research intent, we provide rDNS records for all our scan IPs and transmit contact information in the HTTP headers of each request to the registries.
Moreover, we host a webpage on our scan IPs, which gives further information on our project and how to opt-out.
Over time, also due to other measurements, we excluded 5.8 M IP addresses (0.14% of the IPv4 address space).
Limiting Load
To limit load and stress on all systems involved (along the path and the end-host), we deliberately reduce our scan-rate.
Our scans are stretched over the course of one day and use address randomization to spread the load evenly.
We further limit the load on single private registries when downloading available images.
While we paid to increase the existing rate limiting for image downloads on Docker Hub (cf. Appendix <ref>), private registries typically do not implement any rate limiting.
Hence, to prevent our scanner from overloading registries running on resource-constrained hardware or connected via slow or volume-billed Internet connections, we decide to only download image layers randomly until their size sums up to at most 250.
Additionally, we shuffle the downloads of layers of different registries to further distribute the load.
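The following sketch illustrates this load-limiting strategy: layers of one image are picked in random order until a size budget would be exceeded. The interpretation of the "250" figure as megabytes and the manifest layout with "digest" and "size" keys are assumptions for this illustration.

```python
import random

def select_layers(layers, budget_bytes=250 * 1024 * 1024):
    """Randomly pick layers of one image until the (assumed) size budget would be exceeded.

    `layers` is assumed to be a list of dicts with 'digest' and 'size' keys, as found in
    an image manifest; the 250 MB budget mirrors the limit mentioned in the text.
    """
    shuffled = list(layers)
    random.shuffle(shuffled)
    selected, total = [], 0
    for layer in shuffled:
        if total + layer["size"] > budget_bytes:
            break  # stop once the next random layer would exceed the budget
        selected.append(layer)
        total += layer["size"]
    return selected
```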
§.§ Overall Considerations
Considered without our goals in mind, the sensitive nature and the impact of our measurements could quickly lead to the conclusion that they are not beneficial.
However, we consider it to be in the public interest, and fundamental for improving security, to know about potential security issues and how widespread they are.
The Docker paradigm does not include any mechanism that prevents image creators from (accidentally) adding security secrets to their images, and no mechanism exists that warns users relying on already compromised security secrets.
Hence, we consider it essential to know whether secrets are widely included in publicly available Docker images and whether these are in use at scale to steer future decisions for counter-measures.
To answer this question, we carefully weighed the impact of our measurements against their benefit and have taken sensible measures to reduce the risks of building a large archive of leaked security secrets and risks introduced by active Internet measurements.
§ IMAGE DOWNLOAD FROM DOCKER HUB
The limit of image manifest downloads from Docker Hub depends on the booked plan, e.g., free users are allowed to pull only 800 images per day.
Hence, for a faster analysis of images on Docker Hub, we purchased two Pro accounts, that allow 5000 image downloads per day each.
Still, we are required to perform our analysis on a subset of the available images, as downloading one image from each of the 9321726 available repositories would require 933 days under ideal conditions.
Thus, we decided to limit our analysis to two categories:
* a context of standard protocols and frequently used technologies, and
* an (Industrial) IoT context for comparison.
Both categories have communication in common, as there security can be affected at Internet scale.
Standard Context
To generate a wide view on secret leakage in Docker images, we create a list of search queries comprising standard protocols <cit.>, and frequently used technologies <cit.>.
To find related images, we employ Docker Hub's API to perform searches over all available images and retrieve results users would retrieve when using the CLI command or Docker Hub's web interface.
To ensure that different handling of special characters in technology and protocol names does not exclude any images, we include different spelling variants in our query list, i.e., we include the terms as they are, but also variants in which non-alpha-numeric characters are replaced, e.g., by a space.
Table <ref> (top) shows our constructed search queries for the standard context.
(Industrial) IoT Context
We extend our analysis to images in the (Industrial) IoT context, as deployments in this area have shown massive security deficits in the past <cit.>, in some cases traced back to security secret leakage via GitHub and Docker images <cit.>.
As search terms, we take (Industrial) IoT protocol names that were subject to recent research <cit.>.
We proceed similarly to the standard context, i.e., we include derived spellings of these terms, and show our constructed search queries for this context in Table <ref> (bottom).
§ REGULAR EXPRESSIONS
Following already established procedures to find security secrets in code repositories <cit.>, we build our secret detection in Docker Images on regular expressions, i.e., we try to match regular expressions derived from secrets on the content of included files.
Table <ref> shows our composed list of regular expressions covering a variety of secrets, i.e., asymmetric private keys and API keys, as well as accompanying material we use for our analysis, i.e., public keys and certificates.
We base our expressions on related work <cit.> and TruffleHog <cit.>, an established tool to find secrets in various sources, i.e., the local file system, Git repositories, S3 storages, and syslogs.
Specifically, we inherit Meli et al.'s <cit.> regular expressions to allow comparisons between the occurrence of leaked secrets in GitHub repositories at scale and our findings.
Furthermore, they composed their expressions comprehensibly, i.e., they included API keys of certain services based on the occurrence of the service domains in Alexa's Top 50 Global and United States lists, in combination with a list of well-known APIs manually filtered for services with a high risk of key leakage and for keys with a distinctive signature (to reduce the number of false positives).
For private keys, they focus on the most prevalent types and storage formats, i.e., RSA, elliptic curve keys, PGP, and general keys in PEM format.
To broaden our analysis and align our expressions with the scope of our search queries (cf. Appendix <ref>), we adapt our expression for private keys to match every type of private key in PEM format and, furthermore, extend the list of expressions to also match private key blocks, keys in PKCS7 format, and keys stored in XML format (due to their unambiguous signature).
Regarding API secrets, we extend our list with expressions from TruffleHog <cit.> for services that are currently trending among developers <cit.> or carry a high risk of misuse, and whose regular expressions include a unique signature (also to reduce the number of false positives).
For some services we found more than one type of secret, i.e., secrets for different API versions (GitHub v1 and v2), or different types of keys (Stripe).
Our final list contains 48 expressions which we match on the content of every file in the images part of our study.
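The general shape of such an expression-based matcher is sketched below. The two API-key patterns are illustrative stand-ins based on widely documented public key formats, not the paper's exact 48-expression list.

```python
import re

# Illustrative subset of secret patterns; the full study uses 48 expressions.
PATTERNS = {
    "pem_private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    # The two entries below are widely documented key formats, used here only as examples.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_secret": re.compile(r"\bsk_live_[0-9a-zA-Z]{24}\b"),
}

def match_secrets(text: str) -> dict:
    """Return all pattern names together with their (truncated) matches in `text`."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            # Truncate matches so that reporting does not re-leak the full secret.
            hits[name] = [m[:16] + "..." for m in found]
    return hits
```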
§ FILTERING BASED ON FILEPATHS
After matching our regular expressions on arbitrary file content available in Docker images, extensive filtering is required to exclude false positive matches, i.e., matches that do not contain any secret.
Our File filter is based on file paths derived from matches that our Kompromat filter excluded, i.e., all parent directories under which we find more than 2/3 test keys known to kompromat <cit.>, and all directories that directly include known test keys.
Additionally, it takes into account manually compiled file paths, e.g., where standard libraries reside or where package managers store their downloads, as well as extensions of database files, which we selected after manually revisiting all matches, as these locations produced a high number of false positives.
Figure <ref> shows the seven most prevalent file paths that contain matches excluded by our File filter.
Indeed, most of the exclusions are matches included in folders belonging to package managers and thus most likely test secrets.
The massive filtering of API secret matches is due to the high number of false positives of the Twitter regular expressions on database files.
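A sketch of such a path-based filter is given below; the concrete prefixes and extensions are placeholders standing in for the manually compiled lists described above.

```python
from pathlib import PurePosixPath

# Placeholder exclusion lists; the actual filter uses the manually compiled paths and
# the directories derived from the Kompromat filter.
EXCLUDED_PREFIXES = ("usr/lib/", "usr/share/doc/", "var/lib/apt/")
EXCLUDED_EXTENSIONS = (".sqlite", ".db")

def keep_match(file_path: str) -> bool:
    """Decide whether a matched file path should be kept (True) or filtered out (False)."""
    path = PurePosixPath(file_path.lstrip("/"))
    as_str = str(path)
    if any(as_str.startswith(prefix) for prefix in EXCLUDED_PREFIXES):
        return False
    if path.suffix.lower() in EXCLUDED_EXTENSIONS:
        return False
    return True
```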
| http://arxiv.org/abs/2307.05275v1 | 20230711140851 | CareFall: Automatic Fall Detection through Wearable Devices and AI Methods | [Juan Carlos Ruiz-Garcia, Ruben Tolosana, Ruben Vera-Rodriguez, Carlos Moro] | cs.LG | [cs.LG, eess.SP] |
CareFall: Automatic Fall Detection through Wearable Devices and AI Methods
Juan Carlos Ruiz-Garcia (Universidad Autonoma de Madrid, Madrid, Spain; [email protected]), Ruben Tolosana (Universidad Autonoma de Madrid, Madrid, Spain; [email protected]), Ruben Vera-Rodriguez (Universidad Autonoma de Madrid, Madrid, Spain; [email protected]), Carlos Moro (Cartronic Group, Madrid, Spain; [email protected])
The aging population has led to a growing number of falls in our society, affecting global public health worldwide. This paper presents CareFall, an automatic Fall Detection System (FDS) based on wearable devices and Artificial Intelligence (AI) methods. CareFall considers the accelerometer and gyroscope time signals extracted from a smartwatch. Two different approaches are used for feature extraction and classification: i) threshold-based, and ii) machine learning-based. Experimental results on two public databases show that the machine learning-based approach, which combines accelerometer and gyroscope information, outperforms the threshold-based approach in terms of accuracy, sensitivity, and specificity. This research contributes to the design of smart and user-friendly solutions to mitigate the negative consequences of falls among older people.
Figure: Representation of CareFall, an automatic Fall Detection System (FDS) based on wearable devices and AI methods.
§ INTRODUCTION
Population aging is increasing worldwide. The World Health Organization considers falls among the elderly to be a major global public health challenge <cit.>. In fact, falls can adversely affect the quality of life in older people, causing them serious physical, psychological, and social consequences, such as contusions, fractures, trauma, motor and neurological damage, or even death <cit.>. For this reason, it is crucial the design and deployment of user-friendly technologies to detect falls.
In recent years, solutions such as the Personal Emergency Response System (PERS) have been proposed <cit.>. PERS is a manual system whereby a person, after falling to the ground, must press a warning button (usually in a pendant or bracelet), and an emergency team is immediately dispatched to provide assistance. However, this system might not be a good solution in some cases, e.g., if the person has fainted or lost consciousness due to the fall and can not press the emergency button.
To overcome the limitations of PERS, a wide variety of Fall Detection Systems (FDS) have been proposed in the last decade, providing automatic and user-friendly solutions for elderly people <cit.>. Most FDS are based on wearable devices <cit.>, such as belts or bracelets with accelerometer sensors <cit.>, image-based devices, such as indoor surveillance cameras <cit.>, or smartphones <cit.>, among many others.
This paper presents CareFall, an automatic FDS based on wearable devices and Artificial Intelligence (AI) methods. Fig. <ref> provides a graphical representation of CareFall. CareFall considers a scenario where the smartwatch is positioned on the wrist, acquiring information related to its inertial sensors, such as the 3-axis accelerometer and gyroscope <cit.>, or heart rate monitor <cit.>. Once the information is acquired by the smartwatch, the time signals (accelerometer and gyroscope signals) are used for feature extraction and classification. Two different approaches are considered: i) threshold-based, and ii) machine learning-based. In case the FDS detects a fall, it automatically warns the emergency services.
§ METHODS
CareFall considers two of the most popular methods for fall detection in the literature <cit.>. Both are fed with the 3-axis time signals of the accelerometer and gyroscope sensors. The sampling frequency of the smartwatch is between 20 and 25 Hz. For a simple and real-time analysis, we consider separate time windows of 1 minute.
* Threshold-based: this is one of the simplest and least computationally expensive solutions to detect a fall. It is based on the extraction of additional time signals from the original accelerometer and gyroscope ones, such as the Signal Magnitude Vector (SMV), the Fall Index (FI), and the Absolute Vertical Direction (AVD), among others <cit.>. After that, a specific threshold is defined for each time sequence. If the instantaneous value of a time sequence surpasses its threshold, the output of the system is a fall. It is important to highlight that, when several time signals are considered in the analysis (e.g., SMV, FI, and AVD), the final output of the system is based on majority voting over all the time signals considered (a minimal sketch of this step is given after this list).
* Machine Learning-based: this approach automatically learns the discriminative patterns for the task using data. From the original 6 time signals (3-axis accelerometer and gyroscope) and 2 additional time signals (SMV of the accelerometer and gyroscope), we extract the following 11 global features per time window (1 minute) related to statistical information: Mean, Variance, Median, Delta, Standard Deviation, Maximum Value, Minimum Value, 25th Percentile, 75th Percentile, Power Spectral Density (PSD), and Power Spectral Entropy (PSE). In total, we obtain a feature vector with 44 global features related to the accelerometer information and 44 global features related to the gyroscope.
Once we have the feature vector with the 88 global features, we train machine learning classifiers for the task of fall detection. The most widely used algorithms are K-Nearest Neighbor (KNN) <cit.>, Support Vector Machine (SVM) <cit.>, Gradient Boosting (GB) <cit.>, Random Forest (RF) <cit.>, and Artificial Neural Network (ANN) <cit.>, among others.
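As a concrete illustration of the two approaches above, the following sketch computes the SMV from one accelerometer window, applies a simple threshold rule, and extracts a subset of the 11 statistical features. The sampling rate, the threshold value, and the spectral summaries are placeholders, not the tuned configuration of CareFall.

```python
import numpy as np
from scipy.signal import welch

FS = 25.0  # assumed sampling rate in Hz (the smartwatch delivers 20-25 Hz)

def smv(acc: np.ndarray) -> np.ndarray:
    """Signal Magnitude Vector of a (n_samples, 3) accelerometer window."""
    return np.sqrt((acc ** 2).sum(axis=1))

def threshold_fall(acc: np.ndarray, smv_threshold: float = 2.5) -> bool:
    """Threshold-based rule: flag a fall if the SMV peak exceeds the (placeholder) threshold."""
    return bool(smv(acc).max() > smv_threshold)

def window_features(signal: np.ndarray) -> np.ndarray:
    """A subset of the 11 per-signal global features (mean, variance, ..., PSD, PSE)."""
    freqs, psd = welch(signal, fs=FS, nperseg=min(256, len(signal)))
    p = psd / psd.sum()
    spectral_entropy = float(-(p * np.log2(p + 1e-12)).sum())
    return np.array([
        signal.mean(), signal.var(), np.median(signal),
        signal.max() - signal.min(),          # delta
        signal.std(), signal.max(), signal.min(),
        np.percentile(signal, 25), np.percentile(signal, 75),
        float(psd.sum()),                      # summary of the power spectral density
        spectral_entropy,
    ])
```

Applying window_features to the three axes and the SMV of each sensor yields the 44 accelerometer and 44 gyroscope features described above.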
§ EXPERIMENTAL SETUP
Two popular public databases are considered in the experimental framework of the paper: Erciyes Univesity <cit.> and UMAFall <cit.>. Table <ref> shows the most relevant information from these databases: i) the number of Activities of Daily Life (ADLs) such as walking, sitting, lying down, etc., and simulated falls (forward, backward, sideways, etc.); ii) participant information (number, gender, height, weight, and age range); iii) type of time signals captured (accelerometer and gyroscope); iv) sensor position; and v) the sampling rate. The main criteria for selecting these databases were the position of the sensor (wrist), the sampling rate of the sensors (20-25Hz), and the variability in the type of activities and falls.
Regarding the experimental protocol, both databases are divided into development (80% of participants) and final evaluation (remaining 20% of participants) datasets. As a result, different subjects are considered for the training and the final evaluation of CareFall. Regarding metrics, we consider three metrics that are popular in the literature: Sensitivity (SE), Specificity (SP), and Accuracy. SE refers to the probability of detecting a fall, SP to the probability of detecting a non-fall (i.e., ADLs), and accuracy to the overall system performance.
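These three metrics follow directly from the confusion-matrix counts, as in this short helper (falls are the positive class):

```python
def fall_detection_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Sensitivity (SE), specificity (SP) and accuracy from confusion-matrix counts."""
    se = tp / (tp + fn) if (tp + fn) else 0.0   # probability of detecting a fall
    sp = tn / (tn + fp) if (tn + fp) else 0.0   # probability of detecting an ADL
    acc = (tp + tn) / (tp + tn + fp + fn)
    return {"sensitivity": se, "specificity": sp, "accuracy": acc}
```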
§ EXPERIMENTAL RESULTS
Table <ref> (top) shows the results for the Erciyes University database over the final evaluation set. The results presented correspond to the best configuration of each fall detection approach. The overall results (accuracy) obtained with the threshold-based approach are significantly worse compared with the machine learning approach (77.3% vs. 98.4%), resulting in a higher number of false positives (non-fall activities detected as falls). This trend can be observed by looking at the specificity (68.4% vs. 96.7%). Nevertheless, it is interesting to remark that the Threshold approach outperforms the Machine Learning approach in terms of sensitivity (100% vs. 98.9%), proving to be a simple but effective approach to detecting falls. In addition, analysing the Machine Learning approach, we can see how the combination of accelerometer information (44 global features) and gyroscope information (44 global features) achieves the best results.
Finally, we can also see in Table <ref> (bottom) the results achieved for the public UMAFall database. Similar conclusions are obtained, although better results are achieved on the Erciyes database. This may be due to the quality of the device and the acquisition process. This seems to indicate that combining accelerometer and gyroscope information is a good practice for the fall detection task.
§ ACKNOWLEDGMENTS
This work has been supported by projects: INTER-ACTION (PID2021-126521OBI00 MICINN/FEDER), HumanCAIC (TED2021-131787B-I00 MICINN), and Cartronic Group.
| http://arxiv.org/abs/2307.07217v1 | 20230714082047 | Stable domains for higher order elliptic operators | [Jean-François Grosjean, Antoine Lemenant, Rémy Mougenot] | math.OC | [math.OC] |
This paper is devoted to proving that any domain satisfying a (δ_0,r_0)-capacity condition of first order is automatically (m,p)-stable for all m⩾ 1 and p⩾ 1, and for any dimension N⩾ 1. In particular, this includes regular enough domains such as 𝒞^1-domains, Lipschitz domains, and Reifenberg flat domains, but the condition is weak enough to also include cusp points. Our result extends some of the results of Hayouni and Pierre, valid only for N=2,3, and also extends the results of Bucur and Zolesio to higher order operators, with a different and simpler proof.
§ INTRODUCTION
Let Ω⊂^N be a bounded and open set. Following <cit.>, we say that Ω is (m,p)-stable if
W^m,p(^N) ∩{u =0 a.e. in Ω^c } = W^m,p_0(Ω).
This notion is related to the continuity of a 2m-order elliptic PDE with respect to domain perturbation. In particular, if Ω is (m,2)-stable, then it implies that for any sequence of domains (Ω_n)_n∈ℕ converging to Ω in a certain Hausdorff sense, one has that (u_n)_n∈ℕ converges strongly in H^m to u, where u_n is the unique solution in H^m_0(Ω_n) for the equation (-Δ)^m(u_n)=f in Ω_n, and u is the solution of the same problem in Ω. It is also equivalent to the convergence of (W^m,p_0(Ω_n))_n∈ℕ to W^m,p_0(Ω) in the sense of Mosco (see Section <ref>).
In the literature, a lot of attention has been devoted to the case m=1 and p=2 because of its relation to the Laplace operator. On the other hand, very few results are available for the higher order spaces H^m_0(Ω), related to bi-harmonic or more generally poly-harmonic equations, that have a lot of applications. The objective of this paper is to give a short and elementary proof of the fact that any domain which is “regular enough” is always (m,p)-stable for all m,p and all dimensions N.
Notice that in general, the stability for W^m,p_0(Ω) does not simply reduce to the one of W^1,p_0(Ω). To enlight this fact we recall that for every open set Ω⊂^N, we have the characterisation (see for instance <cit.>)
W^m,p_0(Ω)= W^m,p(^N)∩{∇^ku|_Ω^c=0 (m-k,p)-q.e. for all k⩽ m-1 },
where ∇^k u := (∂^α u)_|α| = k and ∂^α u is the (m-k,p)-quasicontinuous representative, which is in particular defined pointwise (m-k,p)-q.e.
If Ω is (1,p)-stable, then for any |α|⩽ m-1 and from the assumption ∂^α u=0 a.e. in Ω^c we would only deduce that ∂^α u=0 (1,p)-q.e. on Ω^c, whereas in order to prove that u∈ W^m,p_0(Ω) we would need the stronger condition ∂^α u=0 (m-|α|,p)-q.e. on Ω^c.
In <cit.>, Hayouni and Pierre exploited the compact embedding of H^2 into continuous functions in dimensions 2 and 3, in order to get some stability results for the space H^2_0. In particular, they proved that, in dimension 2 and 3, any (1,2)-stable domain is automatically (2,2)-stable (see <cit.> or <cit.>). They also proved in the same paper that, in dimensions 2 and 3, any sufficiently smooth domain is a (2,2)-stable domain.
In the present paper, we show that there is no true restriction on dimension N to obtain (m,p)-stability. Our main result says that any domain that satisfies a classical (1,p)-capacitary condition will be automatically (m,p)-stable, in any dimension, and for any m. This includes a large class of “regular” domains such as 𝒞^1-domains, Lipschitz domains, Reifenberg-flat domains, domains satisfying the so-called external corkscrew condition (see Definition <ref>), the ε-cone property, or even domains with the segment property, which allows domains with cusps, or more generally domains with the so called fat cone property <cit.>.
For the rest of this paper, we restrict ourselves to open subsets of a fixed ball D⊂ℝ^N, and we denote the set of admissible domains by
𝒪(D) := {Ω | Ω⊆ D is open}.
Let r_0>0 and δ_0>0. An open set Ω⊆ℝ^N has the (r_0, δ_0)-capacitary condition if for all x∈∂Ω and for all r⩽ r_0,
Cap_1,p(Ω̄^c∩ B(x,r))/ Cap_1,p( B(x,r))⩾δ_0.
The class of open subsets of D having the (r_0, δ_0)-capacity condition is denoted by 𝒪^δ_0,r_0_cap(D).
Here is our main statement.
If Ω∈𝒪^δ_0,r_0_cap(D) satisfies |∂Ω|=0, then Ω is (m,p)-stable for any m⩾ 1 and 1⩽ p <+∞.
Let us give some comments about the result. One of the main feature and somewhat surprising is that the condition involves only the (1,p)-capacity even if the conclusion yields (m,p)-stability for all m⩾ 1. In <cit.>, Bucur and Zolesio proved that a domain is (1,2)-stable under a very similar but weaker condition with (1,2)-capacity. More precisely, the condition in <cit.> is the same as ours but without a bar over Ω in the numerator (See Section <ref> for more details). In contrast, with the very similar and slightly stronger (1,2)-capacity condition (<ref>), we obtain (m,2)-stability for all m⩾ 1. It is worth mentioning that our proof is different and much simpler than the one <cit.>, thus provides an alternative argument which is new even for the standard case m=1.
As a consequence of our main result we get a capacitary condition which implies stability for the polyharmonic equation along a Hausdorff converging sequence of domains. We refer to Section <ref> for the definition of Hausdorff convergence, Mosco convergence and γ_m-convergence, and we give here in the introduction two different statements. In the first one (Corollary <ref>) we assume only the limiting domain Ω to be “regular” while in the second (Theorem <ref>) we assume the whole sequence to be “regular”.
Let Ω∈𝒪^δ_0,r_0_cap(D) and (Ω_n)_n∈ℕ be a sequence in 𝒪(D). If |∂Ω|=0, (Ω_n)_n∈ℕ d_H-converges to Ω, and (Ω_n)_n∈ℕ d_H^c-converges to Ω,
then the sequence (Ω_n)_n∈ℕ γ_m-converges to Ω, or equivalently, (H^m_0(Ω_n))_n∈ℕ converges to H^m_0(Ω) in the sense of Mosco.
Corollary <ref> follows from gathering together Proposition <ref> and Theorem <ref>. Let us now mention a few remarks.
* The interesting feature of Corollary <ref> is that only the limiting domain Ω is assumed to be stable (thus somehow “regular”) and nothing is assumed on the sequence (Ω_n)_n∈ℕ, which could be arbitrary open sets.
* It is worth mentioning that in <cit.> the authors assumed only Ω_nd_H^c⟶Ω to obtain the γ_m-convergence of a sequence (Ω_n)_n∈ℕ. On the other hand they assumed that every term Ω_n along the sequence satisfies a capacitary condition with uniform constants. A similar statement will be given later in Theorem <ref>.
It is easy to construct an example of a stable domain Ω (even smooth) and a sequence (Ω_n)_n∈ℕ such that Ω_nd_H^c⟶Ω and (Ω_n)_n∈ℕ does not γ_m-converge to Ω. This shows that without any other assumption on the sequence, the second assumption Ω_nd_H⟶Ω is pivotal for the result to hold true. The construction is rather classical: consider the sequence made from an enumeration x_i ∈ B(0,1) of points with rational coordinates.
Then define
Ω_n :=B(0,2)∖⋃_i=0^n {x_i}.
It is easy to see that (Ω_n)_n∈ℕ converges to Ω:= B(0,2)∖B̄(0,1) for the complementary Hausdorff distance, which is clearly an (m,2)-stable domain because the boundary is smooth. On the other hand, for dimension N⩾2m we know that Cap_m,2({x_i})=0, so it is classical that (Ω_n)_n∈ℕ does not γ_m-converge to Ω (see <cit.> for the case m=1). At the same time Ω̄_n=B̄(0,2) clearly does not Hausdorff converge to Ω̄=B̄(0,2)∖ B(0,1), which explains why Theorem <ref> does not apply.
Next, in order to get existence of shape optimisation problems for higher order equations under geometrical constraints, the following variant is more useful. Notice that here we suppose (<ref>) on the whole sequence, and in this way we can dispense with the Hausdorff convergence of the closures; complementary Hausdorff convergence alone is enough.
Let Ω∈𝒪(D) and (Ω_n)_n∈ℕ all belonging to 𝒪^δ_0,r_0_cap(D). If |∂Ω| = 0 and
(Ω_n)_n∈ℕ d_H^c-converges to Ω, then (Ω_n)_n∈ℕ γ_m-converges to Ω, or equivalently, (H^m_0(Ω_n))_n∈ℕ converges to H^m_0(Ω) in the sense of Mosco.
Since complementary Hausdorff topology is relatively compact, it is easy to get existence results for shape optimisation problems using Theorem <ref>, with additional geometrical constraints on the domain. This applies to various standard classes of domains such as uniformly Lipschitz domains, Reifenberg-flat, corkscrew, or ε-cone, as described in the last section of the paper (see Theorem <ref>).
§ PRELIMINARIES
The term domain and the symbol Ω will be reserved for an open and bounded set in the N-dimensional euclidean space ℝ^N. The norm of a point x∈ℝ^N is denoted by | x | := (∑_i=1^Nx_i^2)^1/2. If α is a multi-index, i.e. α∈ℕ^N, then the norm of α is |α| := ∑_i=1^Nα_i and we define the partial derivative operator
∂^α := ∂^|α|/∂_1^α_1⋯∂^α_k_N,
and the vector ∇^k:=(∂^α )_|α| = k.
The notations ∂Ω and Ω̄ stand for
the boundary and the closure of Ω, respectively. Let 𝒞^∞_c(Ω) be the space of smooth functions with compact support in Ω. The ball of radius r⩾ 0 and centered at x∈ℝ^N is denoted by B(x,r). For m∈ℕ and p ∈ [1, +∞[, we consider the usual Sobolev space W^m,p(Ω) endowed with the norm
‖ u ‖_W^m,p(Ω):= ( ∑_k=0^m ‖∇^k u‖_L^p(Ω)^p )^1/p,
where
‖∇^k u‖_L^p(Ω)^p := ∫_Ω|∇^k u |^p dx.
Finally, the space W_0^m,p(Ω) is the completion of 𝒞^∞_c(Ω) with respect to the norm ‖·‖_W^m,p(Ω).
When the dimension N < mp, elements of W^m,p(^N) can be represented as continuous functions. However, if N ⩾ mp, this is no longer the case and the natural way of measuring by how much the functions deviate from continuity is by means of capacity. If K ⊂^N is a compact, then we define the (m,p)-capacity of K by
Cap_m,p(K):= inf{‖φ‖^p_W^m,p | φ∈𝒞^∞_c(^N) such that φ⩾ 1 on K }.
Afterwards, for an open set Ω⊆^N we can consider
Cap_m,p(Ω):= sup{Cap_m,p(K) | K ⊆Ω is a compact}.
Finally, if E ⊆^N is an abritrary set, then we define
Cap_m,p(E):= inf{Cap_m,p(Ω) | Ω⊇ E is a open set}.
When an assertion is true except for a set of (m,p)-capacity equal to zero, we say that it is true (m,p)-quasi everywhere and denote this by (m,p)-q.e.. Let ρ∈𝒞^∞_c(B(0,1)) be a test function and consider the approximate identity (ρ_n)_n∈ℕ given by ρ_n(x):= n^Nρ(nx). For (m,p)-q.e. x ∈ℝ^N, the limit lim_n ρ_n * u(x) =: ũ(x) exists and ũ(x)=u(x) almost everywhere. Moreover, for all ε>0 there is an open set Ω_ε⊂ℝ^N such that Cap_m,p(Ω_ε)< ε and, for a subsequence, ρ_n * u converges uniformly to ũ on ℝ^N \Ω_ε. In particular, the function ũ is continuous on ℝ^N \Ω_ε and we call it the (m,p)-quasicontinuous representative of u. In the present paper, functions u in W^m,p(ℝ^N) will be assumed to be defined pointwise (m,p)-q.e. and to be (m,p)-quasicontinuous (see <cit.>).
The proof of the main result will use the following Poincaré type inequality, which can be found for instance in <cit.>. To be more precise, we apply the inequality in <cit.> to B(0,1), use the fact that Cap_1,p is homogeneous of degree N-p (see <cit.>), and then, by a simple change of variables applied to the function x⟼ u(rx), we get the following one.
Let r>0, and u ∈ W^1,p(B(0,r)). We define Z(u):={ x ∈ B(0,r) | u(x)=0}.
If Cap_1,p(Z(u))>0, then
∫_B(0,r)| u|^p dx ⩽ C r^p/ Cap_1,p(r^-1Z(u))∫_B(0,r) |∇ u|^p dx,
where C>0 depends only on p and N.
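For completeness, here is a sketch of the rescaling argument described above, with schematic constants and assuming the unit-ball form of the cited inequality stated below. If v ∈ W^1,p(B(0,1)) and Cap_1,p(Z(v))>0, then
∫_B(0,1)| v|^p dy ⩽ C/ Cap_1,p(Z(v))∫_B(0,1) |∇ v|^p dy.
Given u ∈ W^1,p(B(0,r)), set v(y):=u(ry), so that Z(v)=r^-1Z(u) and ∇ v(y)=r(∇ u)(ry). The change of variables x=ry gives
∫_B(0,1)| v|^p dy = r^-N∫_B(0,r)| u|^p dx and ∫_B(0,1)|∇ v|^p dy = r^p-N∫_B(0,r)|∇ u|^p dx,
and inserting these two identities into the unit-ball inequality yields precisely the inequality of the Lemma. The homogeneity of Cap_1,p of degree N-p is then what allows one to compare Cap_1,p(r^-1Z(u)) with r^-(N-p) Cap_1,p(Z(u)), as used in the proof below.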
§ PROOF OF THEOREM <REF>
Let Ω be a bounded domain satisfying the assumptions of Theorem <ref> and let u∈ W^m,p(^N) be given satisfying u=0 almost everywhere in Ω^c. To prove the theorem it suffices to prove that u can be approximated in the W^m,p(^N) norm by a sequence of functions in 𝒞^∞_c(Ω). To do so we will first truncate u near the boundary of Ω as follows. For all n ∈ℕ, we consider
K_n := { x ∈Ω | d(x,∂Ω)⩾ 2^-n}.
The exhaustive family of compact (K_n)_n∈ℕ satisfies K_n⊆ K_n+1 and Ω = ⋃_n∈ℕK_n.
Then take a test function ρ∈𝒞^∞_c(B(0,1)) such that ρ⩾ 0 and
∫_^Nρ(x)dx = 1.
We define ρ_ε(x):=ε^-Nρ(x/ε) and
θ_n,ε(x) := 1_K_n*ρ_ε(x) =ε^-N∫_K_nρ((x-y)/ε)dy,
which satisfies Supp (θ_n,ε) ⊆ K_n+ B(0,ε). We take ε_n :=2^-(n+1) and denote now θ_n:=θ_n,ε_n in such a way that θ_n∈𝒞^∞_c(Ω), θ_n=1 on K_n-1, θ_n = 0 on K_n+1^c,
Supp (∇^kθ_n) ⊆ K_n+1∖Int(K_n-1).
To prove the theorem it suffices to prove that
u_n:=u θ_n⟶ u in W^m,p(^N),
because then we can conclude by using the density of 𝒞^∞_c(Ω) into W^m,p(Int(K_n+2)), and a diagonal argument.
Let k⩽ m be a positive integer. To prove the claim we first estimate the L^p norm :
‖ u_n-u‖_L^p(^N)^p ⩽∫_Ω̄\ K_n-1| u |^p dx.
Using the fact that (Ω̄\ K_n)_n∈ℕ is a decreasing sequence of Lebesgue measurable sets, and thanks to the condition |Ω̄∖Ω |=0, we know that |Ω̄\ K_n|⟶ 0 as n⟶ +∞ and therefore u_n ⟶ u in L^p(^N).
Supp (∇^kθ_n) ⊆⋃_i∈ℕ B(x_i, 5· 2^-(n-2)), and ∑_i∈ℕ1_B(x_i,5·2^-(n-2))⩽ N_0,
for a universal constant N_0 ∈ℕ. In the sequel, we simply write B_n(x_i) instead of B(x_i,5·2^-(n-2)). Afterwards, we estimate
‖∇^k u_n-∇^k u‖_L^p(ℝ^N)^p ⩽ C∫_Ω̄\ K_n-1|∇^k u |^p dx + C∑_k = |β| + |γ|, γ≠ 0∫_Ω̄\ K_n-1 |∂^β u|^p |∂^γθ_n |^p dx.
The first term tends to 0 as n ⟶ +∞ for the same reasons as before. For the other term we use the following estimate
|∂^γθ_n(x)|^p ⩽ε_n^-pN∫_K_nε_n^-p|γ||∂^γρ(x-y/ε_n)|^pdy ⩽ Cε_n^-p|γ|.
The function u vanishes almost everywhere on the open set Ω̄^c, so ∂^β u is zero in 𝒟'(Ω̄^c) and vanishes almost everywhere on this open set. Hence the Poincaré inequality (<ref>) applies to all the ∂^β u for |β|<m, and for all balls B_n(x_i) such that 2^-(n-2)⩽ r_0, thanks to our capacitary condition (<ref>) we get
Cap_1,p(x_i+5^-1·2^n-2(Z(∂^β u)-x_i)) ⩾ Cε_n^-(N-p) Cap_1,p(Ω̄^c ∩ B(x_i,5·2^2-n)) ⩾ Cδ_0.
Therefore,
∫_B_n(x_i) |∂^β u|^p dx ⩽ C δ_0^-1ε_n^p ∫_B_n(x_i) |∇∂^β u|^p dx,
and using successively (k- |β|)-times the Poincaré inequality and the covering of ∂Ω, we get
∫_Ω̄\ K_n-1 |∂^β u|^p |∂^γθ_n |^p dx ⩽ Cε_n^-p|γ|∫_Ω̄\ K_n-1 |∂^β u|^p dx
⩽ Cε_n^-p|γ|∑_i∈ℕ∫_B_n(x_i) |∂^β u|^p dx
⩽ C ∑_i∈ℕ∫_B_n(x_i) |∇^k u|^p dx
⩽ CN_0∫_Ω̄\ K_n-5 |∇^k u|^p dx
and this tends to zero as n ⟶ +∞, which completes the proof.
§ EXAMPLES OF DOMAINS SATISFYING OUR CONDITION
As we said in the introduction, any smooth enough domain will satisfy our condition. For instance domains satisfying an external corkscrew condition as defined below.
Let Ω⊂^N be an open and bounded set, a∈ (0,1), and r_0>0. We say that Ω satisfies an (a,r_0)-external corkscrew condition if for every x ∈∂Ω and r⩽ r_0, one can find a ball B(y, ar ) such that
B(y,ar) ⊂ B(x,r) ∩Ω̄^c.
We give a non-exhaustive list of classes of domains included in 𝒪(D):
* 𝒪_convex(D):={Ω⊆ D | Ω open and convex}.
* 𝒪_seg^r_0(D):={Ω⊆ D | Ω open and has the r_0-external segment property}, we say Ω has the r_0-external segment property if for every x∈∂Ω, there exists a vector y_x ∈𝕊^N-1(0,r_0) such that x+ty_x ∈Ω^c for t∈(0,1). This notion can also be generalized by the “flat cone” condition as in <cit.> (see also <cit.>).
* 𝒪_Lip^λ(D):={Ω⊆ D | Ω open and is a Lipschitz domain}.
* 𝒪_Reif flat^δ_0, r_0(D):={Ω⊆ D | Ω open and is (ε_0, δ_0)-Reifenberg flat}, we say Ω is (ε_0, δ_0)-Reifenberg flat for ε_0 ∈(0,1/2) and δ_0 ∈(0,1) if for all x∈∂Ω and δ∈ (0, δ_0], there exists an hyperplan 𝒫_x(δ) of ^N such that x∈𝒫_x(δ) and
d_H(∂Ω∩B(x,δ), 𝒫_x(δ) ∩B(x,δ) ) ⩽δε_0.
Moreover for all x∈∂Ω, the set
B(x,δ_0) ∩{ x ∈^N | d(x, 𝒫_x(δ_0))⩾2δ_0 ε_0}
has two connected components; one is included in Ω, the other one in ^N \Ω.
* 𝒪^ε_cone(D):={Ω⊆ D | Ω open and has the external ε-cone condition}, we say Ω has the external ε-cone condition if there exists a cone C of angle ε such that for every x∈∂Ω, there exists a cone C_x congruent to C by rigid motion and such that x is the vertex of C_x and C_x ⊂Ω^c.
* 𝒪^a,r_0_corks(D):={Ω⊆ D | Ω open and has the (a,r_0)-external corkscrew condition}, see definition <ref>.
* 𝒪^δ_0,r_0_cap(D):={Ω⊆ D | Ω open and has the (δ_0,r_0)-capacity condition <ref>}.
It is easy to see that for some fixed parameters we have the inclusions
𝒪^ε_cone(D)⊆𝒪^a,r_0_corks(D),
and
𝒪_convex(D) ⊆𝒪_Lip^λ(D)⊆𝒪_Reif flat^δ_0,r_0(D)⊆𝒪_corks^a,r_1(D)⊆𝒪^δ_1,r_2_cap(D).
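To justify the last inclusion, here is a short sketch with schematic constants (depending only on N, p and the parameters), using only the monotonicity of Cap_1,p and its homogeneity of degree N-p recalled in Section <ref>. If Ω has the (a,r_1)-external corkscrew condition, then for every x∈∂Ω and r⩽ min(r_1,1) there is a ball B(y,ar)⊂Ω̄^c∩ B(x,r), hence
Cap_1,p(Ω̄^c∩ B(x,r)) ⩾ Cap_1,p(B(y,ar)) ≃ (ar)^N-p ≃ a^N-p Cap_1,p(B(x,r)),
so that Ω satisfies the capacity condition (<ref>) with δ_1 of order a^N-p and r_2 = min(r_1,1).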
A segment is a locally Lipschitz manifold of dimension 1, and the properties of the (1,p)-capacity imply that in dimension N<p+1 we have
𝒪_seg^r_0(D) ⊆𝒪^δ_0,r_1_cap(D).
Any C^1 domain or Lipschitz domain satisfies an external corkscrew condition. It also follows from porosity estimates that the corkscrew condition implies |∂Ω|=0, as stated in the following useful proposition.
If Ω∈𝒪_corks^a,r_0(D), then |∂Ω| =0.
For all x∈∂Ω and r⩽ r_0, there exists y∈^N such that
B(y,ar)⊂ B(x,r) ∩Ω̄^c ⊂^N\∂Ω.
In other words, ∂Ω is a σ-porous set in ^N, in the sense of <cit.>, with σ=2a. By virtue of <cit.> (see the last paragraph at the bottom of page 321 in <cit.>, or see also <cit.>), we conclude |∂Ω| =0.
If Ω lies in one of the following classes : 𝒪^ε_cone(D), 𝒪_convex(D), 𝒪_Lip^λ(D), 𝒪_Reif flat^δ_0,r_0(D), or 𝒪_corks^a,r_0(D), then Ω is (m,p)-stable for any m⩾ 1 and 1⩽ p <+∞.
§ STABILITY WITH RESPECT TO DOMAIN PERTURBATION
As before, we consider a fixed bounded domain D⊂^N. Let Ω and (Ω_n)_n∈ℕ be bounded subdomains of D such that Ω̄_n⟶Ω̄ and D\Ω_n⟶D\Ω as n ⟶ +∞ for the Hausdorff convergence. In particular, this implies the compact convergence of the sequence. In this section we verify that the (m,2)-stability of Ω implies the Mosco convergence of the sequence (H^m_0(Ω_n))_n∈ℕ towards H^m_0(Ω). This will follow from the same argument as for the classical case of H^1_0, but for the sake of completeness we give here the full details. For this purpose, we first prove the equivalence between γ_m-convergence and Mosco convergence (Proposition <ref>). Then we show that (Ω_n)_n∈ℕ γ_m-converges to Ω thanks to the (m,2)-stability of Ω (Proposition <ref>).
The sequence (Ω_n)_n∈ℕ γ_m-converges to Ω if for all f ∈ L^2(D), the sequence (u_n)_n∈ℕ strongly converges in H^m_0(D) to u, where u_n (resp. u) is the unique solution of the Dirichlet problem (-Δ)^mu_n =f (resp. (-Δ)^mu=f) in H^m_0(Ω_n) (resp. H^m_0(Ω)).
The sequence (H^m_0(Ω_n))_n ∈ℕ converges to H^m_0(Ω) in the sense of Mosco if the following holds :
* If (v_n_k)_k∈ℕ is a subsequence, where v_n_k∈ H^m_0(Ω_n_k), and weakly converges to v∈ H^m_0(D), then v∈ H^m_0(Ω).
* For all v∈ H^m_0(Ω), there exists a sequence (v_n)_n∈ℕ, where v_n ∈ H^m_0(Ω_n), which strongly converges to v in H^m_0(D).
The sequence (Ω_n)_n∈ℕ γ_m-converges to Ω if, and only if, (H^m_0(Ω_n))_n∈ℕ converges to H^m_0(Ω) in the sense of Mosco.
Suppose that the sequence (Ω_n)_n∈ℕ γ_m-converges to Ω. Let (v_n_k)_k∈ℕ be a subsequence which weakly converges to v∈ H^m_0(D), where v_n_k∈ H^m_0(Ω_n_k). Consider the L^2(D) function f:= (-Δ)^m v in such a way that v is the unique solution of the Dirichlet problem in D. The γ_m-convergence implies that (u_n_k)_k ∈ℕ strongly converges in H^m_0(D) to u ∈ H^m_0(Ω) where u_n_k satisfies (-Δ)^m u_n_k =f in Ω_n_k and u satisfies (-Δ)^mu= f in Ω. It suffices to show that u = v. For all k ∈ℕ,
∫_D∇^m u_n_k : ∇^m(u_n_k-v_n_k) dx = ∫_Ω_n_k∇^m u_n_k : ∇^m(u_n_k-v_n_k) dx
= ∫_Ω_n_kf(u_n_k-v_n_k) dx
= ∫_Df(u_n_k-v_n_k) dx,
and as k ⟶ +∞, we use the strong convergence of (u_n_k)_k∈ℕ and the weak convergence of (v_n_k)_k∈ℕ to get
∫_D∇^mu: ∇^m(u-v) dx = ∫_D f(u-v) dx.
Then according to equality f:=(-Δ)^m v, we have
∫_D |∇^m (u - v)|^2 dx = ∫_D ∇^m(u - v) : ∇^mu dx - ∫_D ∇^m(u- v) : ∇^mv dx
= ∫_D f(u-v) dx -∫_D(u - v) (-Δ)^mv dx =0.
The domain D is bounded and, due to the classical Poincaré inequality, the functions u and v are equal, and the first point of Mosco convergence follows. To prove the second one, consider v∈ H^m_0(Ω) ⊆ H^m_0(D) and let f := (-Δ)^m v. The same arguments as before imply that (u_n)_n ∈ℕ strongly converges in H^m_0(D) to u = v, where u_n ∈ H^m_0(Ω_n) is the unique solution of the Dirichlet problem with respect to f. Now suppose that (H^m_0(Ω_n))_n∈ℕ converges in the sense of Mosco to H^m_0(Ω). Consider f∈ L^2(D) and the associated solutions u_n of the Dirichlet problem in Ω_n. For all n ∈ℕ,
∫_D |∇^m u_n|^2 dx = ∫_Ω_n∇^m u_n: ∇^m u_n dx = ∫_Ω_n f u_n dx = ∫_D fu_n dx,
we infer that the sequence (u_n)_n∈ℕ is bounded in H^m_0(D) since
∫_D fu_n dx ⩽‖ f ‖_L^2(D)‖ u_n‖_L^2(D)⩽‖ f ‖_L^2(D)‖ u_n‖_H^m_0(D).
Let (u_n_k)_k ∈ℕ be a subsequence which weakly converges to a function v∈ H^m_0(D). Using the Mosco convergence, v∈ H^m_0(Ω) and for all φ∈ H^m_0(Ω), there exists a sequence (φ_k)_k ∈ℕ, with φ_k ∈ H^m_0(Ω_n_k), strongly converging to φ in H^m_0(D). Hence, for all k ∈ℕ,
∫_D ∇^m u_n_k: ∇^m φ_k dx = ∫_Ω_n_k∇^m u_n_k: ∇^m φ_k dx = ∫_Ω_n_kf φ_k dx = ∫_D f φ_k dx,
and using the strong convergence of (φ_k)_k ∈ℕ and the weak convergence of (u_n_k)_n∈ℕ as k ⟶ + ∞, we obtain
∫_Ω∇^m v :∇^m φ dx =∫_D ∇^m v :∇^m φ dx = ∫_D f φ dx = ∫_Ω f φ dx.
The uniqueness of the solution of the Dirichlet problem proves u = v. Moreover,
∫_D|∇^m u_n_k|^2 dx = ∫_Ω_nfu_n_k dx = ∫_Dfu_n_k dx,
and
∫_Dfu_n_k dx ⟶∫_D fu dx = ∫_D|∇^m u |^2 dx.
This yields
‖ u_n_k‖_H^m_0(D)⟶‖ u ‖_H^m_0(D),
and the convergence of the subsequence is strong. By uniqueness of the limit, the whole sequence is strongly converging to u in H^m_0(D).
For two closed sets A,B⊂^N, the Hausdorff distance d_H(A,B) is defined by
d_H(A,B):=max_x∈ A dist(x,B) + max_x∈ B dist(x,A).
A sequence of closed sets (A_n)_n∈ℕ converges to A for the Hausdorff distance if d_H(A_n,A) ⟶ 0 as n ⟶ +∞. In this case, we will write A_n d_H⟶ A.
Next, we define the complementary Hausdorff distance over 𝒪(D) by
d_H^c(Ω_1, Ω_2) := d_H(D\Ω_1, D\Ω_2),
and one can show that the topolgy induced on 𝒪(D) is compact.
In the sequel we will use the following well known result.
If ( Ω_n)_n∈ℕ is a sequence in 𝒪(D) such that Ω_n d_H^c⟶Ω∈𝒪(D), then for any compact set K⊂Ω there exists n_0∈ℕ depending on K such that K⊂Ω_n for all n⩾ n_0.
Since K is compact and Ω is open, we know that
inf_x∈ K dist(x,Ω^c)=:a>0.
By Hausdorff convergence of the complements, there exists n_0(a)∈ℕ such that for all n⩾ n_0(a),
Ω_n^c ⊂{y ∈^N | dist(y,Ω^c)<a/2}.
We deduce from the triangle inequality that inf_x∈ K dist(x,Ω_n^c) >0 for n large enough and in particular K⊂Ω_n.
We are now ready to state the following result that will directly imply Corollary <ref> written in the introduction.
Let ( Ω_n)_n∈ℕ be a sequence in 𝒪(D) such that Ω_nd_H⟶Ω and Ω_n d_H^c⟶Ω where Ω∈𝒪(D). If Ω is (m,2)-stable, then the sequence (Ω_n)_n∈ℕ γ_m-converges to Ω or equivalently, (H^m_0(Ω_n))_n∈ℕ converges to H^m_0(Ω) in the sense of Mosco.
Consider f ∈ L^2(D). We know that the sequence (u_n)_n ∈ℕ of the Dirichlet problem solutions associated to f is bounded in H^m_0(D). There exists a subsequence (u_n_k)_k∈ℕ which weakly converges to a function v ∈ H^m_0(D). Let φ∈𝒞_c^∞(Ω) be a test function. By complementary Hausdorff convergence, there exists an integer k_0 ∈ℕ such that for all k⩾ k_0,
Supp(φ) ⊆Ω_n_k.
Thus, for all k⩾ k_0,
∫_Ω∇^m u_n_k : ∇^m φ dx =∫_Ω_n_k∇^m u_n_k : ∇^m φ dx = ∫_Ω_n_kfφ dx =∫_Ωfφ dx,
and by weak convergence of (u_n_k)_k ⩾ k_0 in H^m_0(D) ⊇ H^m_0(Ω),
∫_Ω∇^m v : ∇^m φ dx =∫_Ωfφ dx.
Thanks to the uniqueness of the Dirichlet problem, it suffices to prove that v∈ H^m_0(Ω). In this case, v = u and the whole sequence (u_n)_n ∈ℕ strongly converges to u. Up to a subsequence, we can assume that (u_n_k)_k ∈ℕ converges almost everywhere to v. The functions u_n_k vanish (m,2)-quasi everywhere on Ω_n_k^c, hence almost everywhere. By Hausdorff convergence of the closures we know that for every compact K⊂Ω^c we have K⊂Ω_n^c for n large enough, thus finally v =0 almost everywhere in Ω^c. Using the definition of (m,2)-stability, we conclude v ∈ H^m_0(Ω).
§ PROOF OF THEOREM <REF>
In this section we give a proof of Theorem <ref> stated in the introduction. Let Ω, Ω_n ⊂ D be bounded domains as in the statement of Theorem <ref> that satisfy
Ω_n d_H^c⟶Ω,
and such that (<ref>) holds true for all Ω_n with the same δ_0>0 and r_0>0. We want to prove that Ω_n γ_m-converges to Ω. To this aim we start with a similar argument as in the proof of Proposition <ref>. Consider f ∈ L^2(D). We know that the sequence (u_n)_n ∈ℕ of the Dirichlet problem solutions associated to f in Ω_n is bounded in H^m_0(D). There exists a subsequence (u_n_k)_k∈ℕ which weakly converges to a function v ∈ H^m_0(D). Let φ∈𝒞_c^∞(Ω) be test function. By complementary Hausdorff convergence, there exists an integer k_0 ∈ℕ such that for all k⩾ k_0,
Supp(φ) ⊆Ω_n_k.
Thus, for all k⩾ k_0,
∫_Ω∇^m u_n_k : ∇^m φ dx =∫_Ω_n_k∇^m u_n_k : ∇^m φ dx = ∫_Ω_n_kfφ dx =∫_Ωfφ dx,
and by weak convergence of (u_n_k)_k ⩾ k_0 in H^m_0(D) ⊇ H^m_0(Ω),
∫_Ω∇^m v : ∇^m φ dx =∫_Ωfφ dx.
Now thanks to the uniqueness of the Dirichlet problem, it suffices to prove that v∈ H^m_0(Ω). In section <ref> we prove that the class 𝒪_cap^δ_0, r_0(D) is compact for the complementary Hausdorff convergence. Moreover, since Ω satisfies the (δ_0,r_0)-capacitary condition we know from Theorem <ref> that Ω is a (m,2)-stable domain. Thus in order to conclude the proof we are left to prove that v=0 a.e. in Ω^c. From here the proof differs from the one of Theorem <ref> because we do not know anymore that Ω_nd_H⟶Ω. Instead, we shall benefit from the fact that (<ref>) holds true for the whole sequence Ω_n and we will use a construction similar to the one used in the proof of Theorem <ref>, but on the functions u_n. From now on we will simply denote by n instead of n_k for the subsequence u_n→ v in H^m(D) as n ⟶+ ∞. Let K⊂Ω^c be an arbitrary compact set and let ε>0 be given. Our goal is to prove that v=0 a.e. on K. For a general closed set F⊂^N and λ>0 we denote by (F)_λ the λ-enlargement of F, namely,
F_λ:= {x∈^N | dist(x,F)⩽λ}.
By the Hausdorff convergence of Ω_n^c to Ω^c we know that there exists n_0(ε)∈ℕ such that for all n⩾ n_0(ε),
Ω^c ⊂ (Ω_n^c)_ε, and Ω_n^c⊂ (Ω^c)_ε.
From the above we deduce that
K⊂Ω^c ⊂ (Ω_n^c)_ε⊂ (Ω^c)_2ε.
Next, we want to construct a function, supported well inside Ω_n, which is very close to u_n in L^2 and equal to 0 on K. Let us consider the following subset of Ω_n,
A_n,ε := { x ∈Ω_n | d(x, Ω_n^c)⩾ 10 ε},
and the function
w_n,ε:=u_n 1_A_n,ε.
The main point being that w_n,ε=0 in (Ω_n^c)_ε and in virtue of (<ref>) we deduce that w_n,ε=0 on K. Now we estimate the difference w_n,ε- u_n in L^2(^N) using a covering of ∂Ω_n. More precisely, the infinite family (B(x,20 ε))_x∈∂Ω_n is a cover of Ω_n ∖ A_n,ε and by the 5B-covering Lemma there exists a countably subcover indexed by (x_i)_i∈ℕ⊂∂Ω such that (B(x_i,20ε))_i∈ℕ is a disjoint family,
Ω_n ∖ A_n,ε⊂⋃_i∈ℕ B(x_i, 100ε), and ∑_i∈ℕ1_B(x_i,100ε)⩽ N_0,
for a universal constant N_0 ∈ℕ. Then we can estimate,
∫_D |w_n,ε-u_n|^2 dx ⩽∫_Ω_n ∖ A_n,ε| u_n| ^2 dx.
The functions ∂^β u_n vanishes almost everywhere on the open set Ω_n^c, so thanks to our capacitary condition (<ref>) we have for ε small enough
(100 ε)^-(N-p) Cap_1,2(Z(∂^β u_n)) ⩾ Cε^-(N-p) Cap_1,2(Ω_n^c ∩ B(0,100ε)) ⩾ Cδ_0.
Therefore, the Poincaré inequality (<ref>) applies to ∂^β u_n in all ball B(x_i, 100ε) gives
∫_B(x_i,100ε)|∂^β u_n|^2 dx ⩽ C δ_0^-1ε^2∫_B(x_i,100ε)|∇∂^β u_n|^2 dx.
We deduce that
∫_Ω_n ∖ A_n,ε| u_n|^2 dx ⩽∑_i∈ℕ∫_B(x_i,100ε)| u_n|^2 dx
⩽ C ∑_i∈ℕε^2m∫_B(x_i,100ε) |∇^m u_n|^2 dx
⩽ CN_0 ε^2m∫_D |∇^m u_n|^2 dx
⩽ C ε^2,
because the sequence u_n is uniformly bounded in H^1(D). In conclusion we have proved the following : for each ε>0, we have n_0(ε) ∈ℕ such that for all n ⩾ n_0(ε), there exists
w_n, ε∈ L^2(D) such that w_n, ε-u_n_L^2⩽ Cε and w_n, ε=0 on K. Now for n great enough let ε= 2^-n and let w_n:=w_n_0(2^-n), 2^-n. We can assume that n_0(2^-n)→+∞. The function w_n converges to v in L^2 because u_n converges to v in L^2, and w_n=0 on K for all n ∈ℕ. Therefore, up to a subsequence, w_n converges a.e. on K and this shows that u=0 a.e. on K. Since K is arbitrary, this shows that v=0 a.e. on Ω^c, hence u∈ H^m_0(Ω) because Ω is (m,2)-stable. This achieves the proof.
§ EXISTENCE FOR SHAPE OPTIMISATION PROBLEMS UNDER GEOMETRICAL CONSTRAINTS
Let D ⊂ℝ^N be a fixed bounded open set and let 𝒪_D:={Ω⊆ D | Ω is open} denote all open subsets of D. For a shape functional F : 𝒪_D ⟶ℝ^+, it is a natural question to ask if there exist extremal points. In order to answer this question, we introduce a subfamily of 𝒪_D which is compact for the γ_m-convergence and satisfies the capacitary condition (<ref>). If F is lower semi-continuous for the γ_m-convergence, then we use Theorem <ref> to conclude.
The existence of minimizers for shape functionals has been studied in <cit.>. They showed that the class of open subsets satisfying the (r_0,δ_0)-capacitary condition
∀ x ∈∂Ω, ∀ r < r_0, Cap_1,2(Ω^c ∩ B(x,r),B(x,2r))/Cap_1,2(B(x,r),B(x,2r))⩾δ_0,
is compact for d_H^c where for any compact K,
Cap_1,2(K ∩ B(x,r),B(x,2r)):= inf‖φ‖_H^1^2
and the infimum is taken over all φ∈𝒞^∞_c(B(x,2r)) such that φ⩾ 1 on K ∩ B(x,r). This condition is weaker than (<ref>) due to the fact that it involves Ω^c instead of Ω̄^c. Therefore, the results in <cit.> imply Theorem <ref> in the case m=1.
Any class of the following list is compact for the complementary Hausdorff convergence : 𝒪_convex(D), 𝒪^ε_cone(D), 𝒪_corks^a,r_0(D).
* Case 𝒪=𝒪_convex(D),𝒪^ε_cone(D). The proof can be found in <cit.>.
* Case 𝒪=𝒪_corks^a,r_0(D). Suppose (Ω_n)_n∈ℕ is a sequence of (a,r_0)-corkscrew domains which converges to an open set Ω⊂ D. Let x∈∂Ω and r⩽ r_0. By Hausdorff complementary convergence properties, there exists a sequence (x_n)_n ∈ℕ such that x_n ∈∂Ω_n and x_n⟶ x as n⟶ +∞. By the corkscrew condition, one finds B(y_n,ar) ⊂Ω̄_n^c∩ B(x_n,r) with, up to a subsequence, y_n ⟶ y as n⟶ +∞. First of all, it is obvious that B(y,ar) ⊂ B(x,r); it remains to prove that B(y,ar) ⊂Ω̄^c. Let ε>0, from the enlargement characterisation of Hausdorff convergence, there exists N(ε)∈ℕ such that for every n ⩾ N(ε), D\Ω_n ⊂ (D\Ω)_ε where
(D\Ω)_ε := {x∈^N | dist(x,D\Ω)⩽ε}.
Thus B(y_n,ar) ⊂(D\Ω)_ε and passing to the limit as n → + ∞, then taking the intersection in ε, we get
B(y,ar) ⊂⋂_ε>0(D\Ω)_ε = D\Ω.
The ball B(y,ar) is open and contained in D\Ω, hence in its interior, so we conclude B(y,ar) ⊂Ω̄^c ∩ B(x,r).
Any class of the following list is compact for the γ_m-convergence: 𝒪_convex(D), 𝒪^ε_cone(D), 𝒪_corks^a,r_0(D).
Let 𝒪 be one of the classes of domains listed above and let (Ω_n)_n ∈ℕ be a sequence in 𝒪. Because of the compactness of the complementary Hausdorff convergence in 𝒪(D), there exists a subsequence (Ω_n)_n ∈ℕ, denoted by the same indices, which d_H^c-converges to Ω∈𝒪(D). Using Theorem <ref> and Proposition <ref>, it is sufficient to prove that Ω∈𝒪, i.e. that 𝒪 is closed for the d_H^c-convergence. Proposition <ref> concludes the proof.
Let 𝒪 be a γ_m-compact class of subset listed in Proposition <ref>. Let F : 𝒪⟶ℝ be a lower semi-continuous functional for the γ_m-convergence. There exists Ω∈𝒪 such that
F(Ω) = inf{F(ω) | ω∈𝒪}.
Let (Ω_n)_n∈ℕ be a minimising sequence in 𝒪, i.e. F(Ω_n) converges to inf{F(ω) | ω∈𝒪} as n ⟶ +∞. Using Proposition <ref>, up to a subsequence there exists an open set Ω∈𝒪 such that
(Ω_n)_n∈ℕ γ_m-converges to Ω.
Then by lower semi-continuity of the functional we get
F(Ω) ⩽n → + ∞lim infF(Ω_n) = inf{F(ω) | ω∈𝒪},
which finishes the proof.
| http://arxiv.org/abs/2307.03916v1 | 20230708064241 | Phased Geometric Controls of V-Shaped Three-Level System for Zero-field Quantum Sensing | [Zhijie Li, Xiangyu Ye, Xi Kong, Tianyu Xie, Zhiping Yang, Pengju Zhao, Ya Wang, Fazhan Shi, Jiangfeng Du] | quant-ph | [quant-ph] |
Phased Geometric Controls of V-Shaped Three-Level System for Zero-field Quantum Sensing
Affiliations: CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China; CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China; The State Key Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, 210093 Nanjing, China; Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China; School of Biomedical Engineering and Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China; School of Physics, Zhejiang University, Hangzhou 310027, China. The first two authors contributed equally to this work. Contact e-mails: [email protected], [email protected], [email protected].
Here we propose and demonstrate a phased geometric control protocol for zero-field double quantum gates in a V-shaped three-level spin system. This method utilizes linearly polarized microwave pulses and exploits the geometric qubit properties to prevent state leakage. By employing specific phased geometric controls, we realize a low-power multi-pulse zero-field sensing technique using single nitrogen-vacancy centers in diamond. Our method offers a novel approach to implement precise double quantum gate operations with an adaptable driving power, making it a valuable tool for zero-field spin-based quantum technology.
In recent years, quantum sensing techniques based on controllable quantum systems have seen significant development. One successful example is the nitrogen-vacancy (NV) center in diamond, which possesses numerous merits, including nanoscale size, biocompatibility, and long coherence time under ambient conditions <cit.>. Typically, solid-state quantum systems require a static external magnetic field to lift the degeneracy of their ground-state manifolds. However, the presence of an external magnetic field suppresses the anisotropic interactions within the target sample, resulting in the loss of anisotropic physical information and causing inhomogeneous spectral broadening. A well-known zero-field technology is the zero- to ultralow-field nuclear magnetic resonance (ZULF NMR) spectroscopy. This technique effectively mitigates the inhomogeneous broadening of the spectrum in heterogeneous environments by attenuating the broadening effects induced by magnetic susceptibility <cit.>. More zero-field scenarios can be found in the field of electromagnetic biology <cit.> and in the research of ferromagnetic film magnetization <cit.>. In order to extend the zero-field condition to solid-state quantum systems like NV centers, the implementation of high-fidelity quantum control for the three-level system (3LS) is imperative.
To address the near-degenerate quantum states in the absence of external fields, one approach is to employ circularly polarized microwave pulses <cit.>. While this method is effective when using a few pulses, it is limited in its ability to utilize double quantum (DQ) transitions with a multi-pulse method, which is crucial for sensing weak AC signals. Recent works have paved the way for realizing dynamical decoupling (DD) with linearly polarized microwave pulses at zero field by manipulating the 3LS via an effective Raman coupling <cit.>. This method enables the utilization of high-power multiple pulses, leveraging the advantage of DQ transitions at zero field to offer a significantly broader sensing bandwidth and expanded sensitivity range. However, the effectiveness of this method is compromised by the occurrence of state leakage due to the contradiction between the unavoidable hyperfine non-degeneracy and the limited driving field strength <cit.>. Subsequently, sequences that counteract the effects of the non-degeneracy detuning were proposed <cit.>. However, these methods, while relaxing the requirements for a strong driving field, lack versatility in their operations. In this study, we propose a method that prevents state leakage with a weak driving field by leveraging the geometric properties of the dressed states. Through this approach, a collection of effective DQ rotation operations can be achieved. Furthermore, we demonstrate a zero-field quantum sensing scheme utilizing single NV centers based on the proposed method.
A single NV center in diamond consists of a substitutional nitrogen and a neighboring vacancy, its electron ground states form a typical 3LS (Fig. <ref>(a)). The Hamiltonian of a single NV center driven by a linearly polarized microwave field can be given by (ħ=1) <cit.>
H= (D+d_∥Π_z) S_z^2+(Δ+δ/2)S_z+Ωcos(ω t+ϕ)S_x
+d_⊥[Π_x(S_y^2-S_x^2)+Π_y(S_xS_y+S_yS_x)],
where S=(S_x,S_y,S_z) is the spin-1 operator, D is the zero-field splitting, d_∥ and d_⊥ are the longitudinal and transverse electric dipole moment components, Δ refers to the Zeeman splitting induced by the external magnetic field along the NV center's principle axis, δ contains hyperfine couplings with the surrounding spin-1/2 nuclei, and Π=(Π_x,Π_y,Π_z) denotes the total effective electric field. Furthermore, Ω,ω and ϕ correspond to the amplitude, angular frequency, and phase of the linearly polarized microwave, respectively. Provided that the NV center's native nitrogen atom is a ^15N atom and there is no magnetic field along the NV center's symmetric axis, the splitting within each electronic state manifold is primarily attributed to hyperfine interactions and transverse electric dipole couplings. When a linearly polarized microwave pulse with angular frequency ω=D+d_∥Π_z is applied, it drives the oscillations |0⟩↔|+1⟩ and |0⟩↔|-1⟩ simultaneously. As a result, an effective Raman coupling emerges (Fig. <ref>(a)). By utilizing phase-fixed geometric controls <cit.> on the ground-state 3LS, it is possible to accumulate a geometric π phase on the state |+⟩ while keeping the state |-⟩ nearly unchanged, as long as the 2π cycle occurs rapidly compared to the detuning modulation (Fig. <ref>(b)). This approach enables the realization of a nearly π pulse within the {|+1⟩,|-1⟩} subspace. However, the presence of the hyperfine coupling δ and the transverse effective electric field (Π_x,Π_y) can induce state leakage to the |0⟩ state. Consequently, the imperfect controls in the dynamical decoupling sequence result in degraded spin coherence and distorted signal filtering, thereby diminishing the sensitivity.
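As a rough numerical illustration of the zero-field level structure described above, the following minimal Python (NumPy) sketch diagonalizes the static part of the ground-state Hamiltonian and checks that the |± 1⟩ manifold is split by δ'=√(δ^2+4d_⊥^2Π_y^2) while |0⟩ is untouched; the numerical values of δ and d_⊥Π_y below are illustrative assumptions rather than measured parameters, and the drive and Zeeman terms are switched off.

import numpy as np

# spin-1 operators in the ordered basis {|+1>, |0>, |-1>}
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

# parameters in units of 2*pi*MHz; delta stands for the 15N hyperfine splitting, while the
# transverse term d_perp*Pi_y is an assumed illustrative value; Delta = Pi_x = Pi_z = 0
D, delta, d_perp_Pi_y, d_perp_Pi_x = 2870.0, 3.0, 0.5, 0.0

H0 = (D * Sz @ Sz + (delta / 2) * Sz
      + d_perp_Pi_x * (Sy @ Sy - Sx @ Sx)
      + d_perp_Pi_y * (Sx @ Sy + Sy @ Sx))
evals = np.sort(np.linalg.eigvalsh(H0))
print("eigenvalues:", evals)                            # ~ [0, D - delta'/2, D + delta'/2]
print("|+1>-|-1> splitting:", evals[2] - evals[1])
print("expected delta'    :", np.sqrt(delta**2 + (2 * d_perp_Pi_y)**2))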
In this Letter, we introduce a novel phased geometric control method that prevents state leakage and enables a diverse range of operations. With the resonance condition ω=D+d_∥Π_z and the microwave polarization perpendicular to the transverse projection of Π, the Hamiltonian of the system can be expressed as <cit.>
H̃'(Ω,ϕ)=(Ω e^iϕ/2 |0⟩ + δ'e^iψ/2 |-⟩)⟨+|+H.c.,
where δ'=√(δ^2+4d^2_⊥Π_y^2) and ψ=arctan(-2d_⊥Π_y/δ). Set Ω=δ', a complete transition between the states |0⟩ and |-⟩ is activated (Fig. <ref>(c)). The operation U_ϕ, which enables the complete transition |0⟩↔|-⟩, is defined by the incident microwave phase ϕ. Defining |ϕ'⟩=(e^iϕ|0⟩+e^iψ|-⟩)/√(2), the Hamiltonian Eq. (<ref>) can be written as
H̃'(δ',ϕ)=δ'/√(2)(|ϕ'⟩⟨+|+|+⟩⟨ϕ'|).
In the qubit spanned by {|+⟩,|ϕ'⟩}, Eq. (<ref>) is proportional to the Pauli-X operator, and U_ϕ acts as a 2π pulse defined by the duration T'=√(2)π/δ' (Fig. <ref>(c)). In this geometric spin qubit, any 2π cycle generates a microwave-phase independent factor of -1 before |+⟩ <cit.>. Moreover, the operation U_ϕ introduces conjugate phase factors in the {|0⟩,|-⟩} subspace (Fig. <ref>(d)), i.e.
⟨ -| U_ϕ |0⟩=-e^-i(ϕ-ψ),
⟨ 0| U_ϕ |-⟩=-e^i(ϕ-ψ).
Therefore, the 4π pulse defined as G_π=U_ϕ U_ϕ+π precisely leads to |+⟩→|+⟩ and |-⟩→-|-⟩ (Fig. <ref>(e)). Consequently, the π pulse in the {|+1⟩,|-1⟩} subspace can be achieved without any leakage to the state |0⟩, directly bringing about the zero-field dynamical decoupling (ZDD) sequence with equally spaced G_π operations. Generally, phased geometric gate G_θ=U_ϕ U_ϕ+θ is equivalent to the phase gate P(θ) in the {|+⟩,|-⟩} subspace <cit.>, thus the effect of G_θ can be depicted as a rotation on the Bloch sphere (Fig. <ref>(f)). Following the scheme outlined above, arbitrary effective rotations along z-axis in the {|+⟩,|-⟩} subspace can be implemented. In addition to the G_±π gates, the G_±π/2 gates are particularly relevant in quantum sensing protocols due to their ability to convert coherence into state population in the {|+1⟩,|-1⟩} basis, which can be used to perform correlation of phases accumulated in separate DD sequences.
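The leakage-free action of G_π can be verified numerically. The short Python sketch below builds the effective Hamiltonian H̃'(Ω,ϕ) in the {|0⟩,|+⟩,|-⟩} basis with Ω=δ', exponentiates it for the duration T'=√(2)π/δ', and composes U_ϕ U_ϕ+π; the value of δ' is the one quoted later in the text, while ψ and ϕ are arbitrary choices made only for illustration.

import numpy as np
from scipy.linalg import expm

delta_p = 2 * np.pi * 3.04        # delta' in rad/us (2*pi x 3.04 MHz, as quoted in the text)
psi = 0.3                         # assumed value of psi = arctan(-2 d_perp Pi_y / delta)

def H_eff(phi, Omega=delta_p):
    """Effective Hamiltonian in the ordered basis {|0>, |+>, |->}."""
    H = np.zeros((3, 3), dtype=complex)
    H[0, 1] = Omega * np.exp(1j * phi) / 2      # (Omega e^{i phi}/2) |0><+|
    H[2, 1] = delta_p * np.exp(1j * psi) / 2    # (delta' e^{i psi}/2) |-><+|
    return H + H.conj().T

T_2pi = np.sqrt(2) * np.pi / delta_p            # duration T' of the 2*pi pulse U_phi

def U(phi):
    return expm(-1j * H_eff(phi) * T_2pi)

phi = 0.7                                       # arbitrary microwave phase
G_pi = U(phi) @ U(phi + np.pi)                  # 4*pi pulse G_pi = U_phi U_{phi+pi}

ket0, ketp, ketm = np.eye(3)
print(np.round(G_pi @ ketp, 6))                 # -> |+>, i.e. [0, 1, 0]
print(np.round(G_pi @ ketm, 6))                 # -> -|->, i.e. [0, 0, -1]; no leakage to |0>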
We use a ^12C enriched diamond chip implanted with 40 keV ^15N^+ ions for our experiments. To counterbalance the geomagnetic field, a set of permanent magnets is employed, reducing the field strength to below 0.005 mT. In this regime, we ensure that Δ/δ<<1, where δ is dominated by the intrinsic ^15N hyperfine interaction A_∥. The transverse microwave polarization is aligned perpendicular to the transverse effective electric field vector, with the polarization direction along the x-axis. The resultant non-degenerate splitting is given by δ'=√(A_∥^2+4d^2_⊥Π_y^2)=2π×3.04(1) MHz. Therefore, the manipulating microwave can be determined by Ω=δ' and ω=D+d_∥Π_z=2π× 2870.79(1) MHz. Setting ϕ=0, the |+⟩↔|0'⟩ transition is driven with the generalized angular frequency Ω̃=√(δ'^2+Ω^2)=√(2)δ', and the pulse length of the 2π operation U_ϕ is defined by T'=2π/Ω̃ (Fig. <ref>(a)). Applying the Ramsey sequence with two separate 2π pulses, oscillation of the frequency δ'/2π emerges. The envelope of this oscillation directly reflects the dephasing occurring in the {|+ 1⟩,|- 1⟩} subspace. By inserting G_π in the middle of the Ramsey sequence, coherence revival is realized (Fig. <ref>(b)). With the specific 4π pulse available, we construct the ZDD-N sequence in the form of 2π (t'/2 4π t' 4π t'/2)^N/2 2π (Fig. <ref>(a)), where t'=t-2T' is the duration of each free evolution, t denotes the pulse interval, Nt is the total evolution time, and the superscript indicates the interchange of the phases of constituent 2π pulses. This interlaced sequence is designed to compensate for fidelity errors caused by pulse imperfections up to the second order <cit.>. By applying the ZDD-N sequences, significant prolongation of the DQ coherence in the {|+ 1⟩,|0⟩,|- 1⟩} basis is observed as the pulse number N increases (Fig. <ref>(b)), indicating that there are sufficient manipulation fidelity and coherence resources available for quantum sensing purposes.
Measurements of an AC signal with a frequency of f=0.5 MHz are shown in Fig. <ref>(c, e). The ZDD-64 sensed frequency is f'=1/(2t_s)=0.499(1) MHz, corresponding to the coherence dip at t_s=1.002(1)µs (Fig. <ref>(c)). In nanoscale NMR applications, the correlation spectroscopy sequence <cit.> is utilized to achieve high-resolution spectroscopy or to mitigate the effects of unwanted harmonics <cit.>. However, conventionally performing this free precession technique at zero field is challenging due to the incomplete manipulation of the 3LS. Nevertheless, it can be implemented by inserting G_π/2 gates between separate DD sequences (Fig. <ref>(d)). The lowest order correlation reveals the signal frequency <cit.>, as expressed by
⟨sinψ_1sinψ_2⟩∼cos (2π f(2τ+t) ),
where τ is set to t_s according to the coherence dip in the ZDD spectrum, ψ_i is the phase accumulated during each individual ZDD sequence. The correlation signal of two ZDD-16 sequences for the AC field sensed in Fig. <ref>(c) is shown in Fig. <ref>(e).
In order to demonstrate the advantage of the ZDD sequence constructed with phased geometric gates, we conduct a comparison with other DD sequences. As shown in Fig. <ref>(a), state evolutions of different DD sequences with distinct driving powers are simulated in the absence of signal fields. The state evolution under normal DD sequence is significantly distorted by detuning, while the LDD and the OC sequences <cit.> which utilize detuning-resistant phase arrangements as well as optimal control techniques, effectively suppress the distortion. In comparison, the ZDD sequence ensures equivalent populations during the free evolution periods.
Measurements of the filter functions (FFs) F(t,ω) of different DD sequences at ω=0.5 MHz are presented in Fig. <ref>(b). With low driving fields, the signal filtering of the LDD and the OC sequences are distorted. However, the ZDD sequence operating with Ω=δ' exhibits a reasonable lineshape. The deviation between the ZDD-16 FF and the ideal FF is primarily caused by the finite duty cycle of the manipulating pulses. Nonetheless, this deviation is insignificant when the duty cycle is lower than 40% (Fig. <ref>(c)). In practice, the non-degenerate splitting δ' can be controlled by applying transverse strains, allowing for an adjustable duty cycle.
In this work, we introduce a phased geometric control protocol and demonstrate its application in a zero-field quantum sensing technique. The sequences employed for dynamical decoupling and correlation spectroscopy are specifically designed using phased geometric gates. Compared to previous approaches, our method provides a wider range of gate operations in sequence design and prevents the detrimental effects of state leakage by utilizing the properties of the geometric phase. In addition to the NV center, other solid spin systems such as divacancies in SiC <cit.> offer more alternatives for implementing the DQ manipulations with phased geometric gates. These systems possess a non-degenerate splitting that can be easily adjusted by strains or electric fields, enabling precise operations even with a short dephasing time. This allows for a broadened sensing bandwidth and the analysis of electric field noise. Furthermore, it is worth noting that our protocol can be extended to any other spin-based 3LS with similar energy configuration, thereby expanding its potential applications in various quantum technologies.
§ ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (Grant No. T2125011, 81788101), the National Key R&D Program of China (Grant No. 2018YFA0306600), the CAS (Grant No. XDC07000000, GJJSTD20200001, Y201984, YSBR-068), Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302200, 2021ZD0303204), the Anhui Initiative in Quantum Information Technologies (Grant No. AHY050000), Hefei Comprehensive National Science Center, and the Fundamental Research Funds for the Central Universities.
This work was partially carried out at the USTC Center for Micro and Nanoscale Research and Fabrication.
|
http://arxiv.org/abs/2307.07348v1 | 20230714135722 | Mott-Enhanced Exciton Condensation in a Hubbard bilayer | [
"Samuele Giuli",
"Adriano Amaricci",
"Massimo Capone"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.supr-con"
] |
[Correspondence email address: ][email protected]
International School for Advanced Studies (SISSA), via Bonomea 265, 34136 Trieste, Italy
CNR-IOM, via Bonomea 265, 34136 Trieste, Italy
International School for Advanced Studies (SISSA), via Bonomea 265, 34136 Trieste, Italy
CNR-IOM, via Bonomea 265, 34136 Trieste, Italy
We study the conditions to realize an excitonic condensed phase in an electron-hole bilayer system with local Hubbard-like interactions at half-filling, where we can address the interplay with Mott localization.
Using Dynamical Mean-Field Theory, we find that an excitonic state is stable in a sizeable region of a phase diagram spanned by the intra-layer (U) and inter-layer (V) interactions. The latter term is expected to favour the excitonic phase which is indeed found in a slice of the phase diagram with V >U. Remarkably, we find that when U is large enough, the excitonic region extends also for U > V in contrast with naive expectations.
The extended stability of the excitonic phase can be linked to in-layer Mott localization and inter-layer spin correlations. Using a mapping to a model with attractive inter-layer coupling, we fully characterize the condensate phase in terms of its superconducting counterpart, thereby addressing its coherence and correlation length.
Mott-Enhanced Exciton Condensation in a Hubbard bilayer
Massimo Capone
August 12, 2023
=======================================================
§ INTRODUCTION
The condensation of excitons in a macroscopic quantum state has been
proposed soon after the success of BCS theory of
superconductivity<cit.> owing to the similarities
between the Cooper pairs created by the binding of two electrons, and
the excitons, bound states formed by an electron and a hole. However,
the observation of excitonic phases has long eluded the experimental
effort, mainly because of the short lifetimes of the excitons due to
electron-hole recombination processes.
The developments in the engineering of devices and heterostructures
have provided ideal platforms to observe exciton condensation (EC),
which has been indeed proposed and reported in quantum-Hall bilayers
<cit.>, graphene double
bilayers<cit.> and semiconductor
quantum wells <cit.>. Excitonic ordering has also
been reported recently in bulk solids <cit.>.
Bilayer structures are arguably ideal platforms to observe condensation of spatially indirect excitons composed of holes and electrons belonging to different layers, for which recombination is essentially inhibited by the presence of a dielectric material between the layers.
Quantum Monte Carlo calculations for electron-hole gases coupled by the long-range Coulomb interaction<cit.> have indeed shown that an excitonic phase is stable at very low densities, a result which has been confirmed by simulations of double bilayer graphene<cit.>.
In an analogous lattice model with local interactions some indication of exciton condensation has been found away from half-filling<cit.> and in the half-filled system when the interlayer interaction is larger than the intra-layer repulsion<cit.>.
Similar models have been investigated using Dynamical Mean-Field
Theory (DMFT).
In Ref. Vanhala the competition between EC and s-wave superconductivity has been addressed in a model without intra-layer repulsion.
A variety of two-orbital models including, e.g., an energy splitting between the bands, the Hund's coupling and a non-trivial topology have also been found to host excitonic states in some regions of parameters<cit.>.
In this work we aim at identifying a generic mechanism
connecting strong correlation physics and excitonic phases which can be used to gain a deeper insight into results on more involved and richer models for specific systems. In particular,
we address the interplay between the EC and Mott physics, the most direct fingerprint of correlations, in an idealized model for an electron-hole bilayer system with local Hubbard-like interactions.
Our focus is on the relative role of the intra-layer (U) and
inter-layer (V) interactions. We consider the system at
half-filling, where a Mott transition can take place, so that our phase diagram will be characterized by the competition/interplay between Mott insulating and EC phases.
The paper is organized as follows: In Sec. II we introduce the model, our implementation of Dynamical Mean-Field Theory and the relevant observables we consider. In Sec. III we present the normal-phase results where we discard excitonic ordering, while Sec. IV is devoted to the results for the EC phase. Sec. V reports our concluding remarks.
§ MODEL AND METHOD
We consider a two-layer Hubbard model with a local interaction term:
H = - ∑_⟨ ij ⟩σ m t_m c^†_i σ m c_j σ m + H.c. - μ∑_i σ m n_i σ m + U ∑_i m n'_i ↑ m n'_i ↓ m + V ∑_i σσ^' n'_i σ A n'_i σ^' B
where c_i σ m (c^†_i σ m) is the annihilation (creation) operator of an electron on site i, layer m=A,B and with spin σ, n_i σ m is the number operator and n'_i σ m = n_i σ m-1/2 is introduced to write the model in a particle-hole symmetric form which implies that both bands are half-filled for μ =0.
We set t_A = t and t_B=α t_A. In our calculations we will consider α = -1 in order to describe an electron-like band (A) and a hole-like band (B).
U and V are both positive and they measure the intra-layer and inter-layer local screened Coulomb repulsion.
We will study an excitonic state characterized by a uniform (q=0) spin-singlet excitonic order parameter (EOP)
Δ_0 = 1/N∑_iσ⟨ c^†_i A σc_i B σ⟩
which is expected to be degenerate with its spin-triplet counterparts due to the SU(2)×SU(2) spin symmetry of our model. Models including other interaction terms and material-specific features may favour one or the other spin symmetry<cit.>.
We solve the model at zero temperature using DMFT<cit.>, a state-of-the-art method which treats different interactions non perturbatively and it is particularly well suited to study the Mott transition<cit.>, strongly correlated metallic phases as well as superconductivity and other broken-symmetry states. Within DMFT the lattice model is mapped onto an impurity model which has to be solved self-consistently requiring that the impurity Green's function coincides with the local component of the lattice Green's function. We solve the impurity model at T=0 using
Lanczos/Arnoldi exact diagonalization (ED)<cit.>.
As customary in the DMFT community, we consider a Bethe lattice with a semicircular density of states N_m(ϵ)= 2/π D_m^2√(D_m^2-ϵ^2 ), where D_m ∝ t_m is the half-bandwidth.
In order to study the EC phase, the bath of the impurity model has to include an excitonic amplitude, analogously to the superconducting case.
Using a spinorial representation where Ψ_k,σ^† =( c_k σ A^† , c_k σ B^† ), where k=0 identify the impurity and k= 1,...,N_bath the bath levels, we can write it as
H_imp^(0) = ∑_k σ[ Ψ_k σ^† Ψ_0 σ^† ][ ℋ_k σ V_k ·𝕀_2; V_k ·𝕀_2 0 ][ Ψ_k σ; Ψ_0 σ ]
where 𝕀_2 is the 2 × 2 identity and
ℋ_k σ = [ ϵ_k + M_k P_k; P_k ϵ_k -M_k; ]
where P_k is the inter-orbital excitonic hybridization term in the
bath Hamiltonian, ϵ_k + (-) M_k is the bath energy on orbital
A (B) and V_k is the hybridization between the impurity and bath
site k.
Within ED-DMFT we have to limit the number of bath sites to be able to solve the impurity model. We fixed the number of bath sites to N_bath=4 and kept the system at global half-filling ⟨∑_σ m n_σ m⟩ = 2 by imposing μ=0. Since we focus on orbitals with opposite dispersion relations, we also fixed ϵ_k=0 ∀ k; moreover, since we focus on states with orbital half-filling, for each parameter M_k on bath site k there must be another bath site k^' with opposite energy M_k^'=-M_k.
§ NORMAL STATE
We start our investigation from the normal state where we inhibit excitonic ordering, as well as any other broken-symmetry state like antiferromagnetism or staggered orbital ordering. This is a standard strategy which has helped to understand the Mott transition disentangling Mott localization from magnetic ordering<cit.>. For our model, a normal-state phase diagram has been reported in Ref. <cit.>, but we find it useful to present our results in order to emphasize the aspects which are useful to better address the excitonic phase.
The model is expected to feature two different Mott-insulating solutions that we can easily understand from the atomic (t_m =0) limit. Among all configurations with two electrons per site, the four with one electron in each layer |↑,↓⟩, |↓,↑⟩, |↑,↑⟩ and |↓,↓⟩ have energy E_11 = -1/2 U, while the two configurations with two electrons in the same layer |↑↓,0⟩ and
| 0,↑↓⟩ have energy E_20 = 1/2 U-V. Therefore the former set of states is favoured for U > V and the latter for U < V.
Hence when U and V are much larger than the hopping and U > V we expect an insulator with one electron on every site of each layer.
This state, that we label as U-Mott (U-MI) is expected to be unstable towards antiferromagnetic ordering if we allow for symmetry breaking. On the other hand, for V>U we have an insulator where every site is in a mixture between the two solutions with one doubly occupied layer. This state, henceforth V-Mott (V-MI), would be naturally unstable towards a staggered orbital (layer) ordering.
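The atomic-limit competition can be made explicit with a short enumeration. The Python sketch below evaluates the atomic energy E = U∑_m n'_↑ m n'_↓ m + V(n_A-1)(n_B-1) for all two-electron configurations on a single site; the values of U and V are arbitrary examples, chosen only to illustrate the crossover at U = V.

from itertools import combinations

U, V = 3.0, 2.0   # example couplings (units of the half-bandwidth); here U > V, so the U-MI states win

# single-particle states on one site: (layer, spin)
orbitals = [("A", "up"), ("A", "dn"), ("B", "up"), ("B", "dn")]

def atomic_energy(occ):
    """E = U * sum_m (n_up,m - 1/2)(n_dn,m - 1/2) + V * (n_A - 1)(n_B - 1)."""
    n = {o: (1 if o in occ else 0) for o in orbitals}
    e_U = U * sum((n[(m, "up")] - 0.5) * (n[(m, "dn")] - 0.5) for m in ("A", "B"))
    nA = n[("A", "up")] + n[("A", "dn")]
    nB = n[("B", "up")] + n[("B", "dn")]
    return e_U + V * (nA - 1) * (nB - 1)

# all two-electron configurations on a single site
for occ in combinations(orbitals, 2):
    print(occ, atomic_energy(occ))
# one electron per layer -> E_11 = -U/2 ; doubly occupied layer -> E_20 = U/2 - V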
In order to monitor the Mott localization we compute the quasiparticle
weight Z_m which measures the metallicity of the
system<cit.>.
The progressive destruction of the metallic state is described by a
reduction of Z_m from 1 (non-interacting limit) to 0 (correlated
insulator).
The connected local density-density correlations C_m,m^' =
⟨ n_m n_m^'⟩ - ⟨ n_m⟩⟨
n_m^'⟩ can be used to study the competition between the
two interaction terms and the approach to the atomic
limit insulators.
The orbital symmetry implies C_AA = C_BB and C_AB = C_BA.
It is easy to see from the above discussion that
the atomic U-MI has C_AA=0 and C_AB=0, while the atomic
V-MI has C_AA=1 and C_AB=-1.
In Fig. <ref> we show as dotted lines the evolution of
Z_A=Z_B and of the inter- and intra-layer correlations C_AA and
C_AB as functions of V/D for different values of U/D.
The boundaries of the U-MI and V-MI phases are marked by dotted lines
with crosses in the phase diagram of Fig. <ref>.
The cuts for U/D = 1 and 2 in Fig. <ref> clearly show a
metal-insulator transition towards the V-MI state with Z_A=0,
C_AA=1 and C_AB=-1. For U/D = 3, we find a U-MI for small
V followed by a metallic region and the V-MI as V increases.
For large U/D =4 we have only a tiny slice of V with a metallic solution
sandwiched by the two insulators.
The main feature of the normal-state phase diagram, as already pointed
out in Ref. Koga3, is the existence of a metallic region when U
and V are comparable, even when they are so large to independently drive a Mott
transition (in the absence one of the other). The region shrinks as we
increase U and V but it does not close.
In particular, for U=V we always find a metallic solution, similarly
to other models where the competition between different atomic states
leads to intermediate phases which can have either a
metallic<cit.> or an insulating<cit.>
nature.
§ EXCITONIC PHASE
We now turn to solutions where the exciton condensation is allowed. The values of Z_A, C_AA and C_AB are shown as solid lines in Fig. <ref> and compared with their normal-state counterparts. Indeed, the excitonic state is stable in a wide region of parameters and its onset makes the evolution from the U-MI to the V-MI smoother, thereby also increasing the quasiparticle weight.
Reporting this information on the phase diagram of
Fig. <ref>, where the boundaries of the excitonic
region are black solid lines, we clearly see that the EC region is
roughly centered around the normal state transition towards the
V-Mott state. The picture is simple: Increasing V, before the
interaction is large enough to drive the system insulating, it leads
to the binding of electrons and holes on different layers into excitons.
However, the effect of U changes the
position and the nature of the transition.
For small and moderate U the EC is established only when V prevails over U (above the V=U line, marked with a dashed grey line), in agreement with previous work<cit.>.
A much less expected result emerges when we increase U and we approach the boundary of the U-MI phase. Here we find that the stability region of the EC increases and, remarkably, it extends in the region where U < V signaling a non-trivial intrinsic many-body effect due to the interplay of the two interactions. As a result, for U ≳ 3D, the whole metallic region between the two Mott insulators is replaced by an excitonic state.
The positive effect of the Hubbard repulsion on the excitonic order is evident in Fig. <ref> (a), where we plot the order parameter Δ as a function of V for the same cuts of Fig. 1.
Here we show that the EC for large U is not only stable in a wider range of V, but its amplitude is also larger. For instance, for U/D=4 the maximum value of Δ is more than twice the U=0 maximum. For every value of U, the transition from the metal to the EC appears to be first order, while the transition from the EC to the V-MI state is associated with a continuously vanishing Δ.
§.§ Exciton Ordering and Mott physics
In this section we link the enhancement of the EC region for V<U and large U/D to the inter-orbital magnetic correlations near the V-MI phase, which are enhanced by the nearby U-MI phase.
The main effect of U is to drive a standard Mott localization within
each layer. Hence the double occupation on each layer d_m is
strongly reduced.
For a half-filled non-magnetic system this reflects directly in the formation of local moments as measured by ⟨ S_m^z S_m^z ⟩ = 1/4⟨ (n_m,↑ -n_m,↓)^2 ⟩ = 1/2(1/2 -d_m) which approaches 1/4.
While the spins on the two layers are uncorrelated in the normal state, when we reach the EC region and U ≳ 3D the inter-layer spin correlations ⟨ S^z_A S^z_B ⟩ become sizeable and negative eventually approaching the limit -1/4.
The local quantum state (computed from the impurity model within DMFT) approaches for large U
| ψ⟩∼1/√(2)(|↑_A ↓_B ⟩ + |↑_B ↓_A ⟩ ), for which ⟨ S^z_A S^z_A ⟩ =1/4 and ⟨ S^z_A S^z_B ⟩ =-1/4.
Note however that the interplay between Mott localization and exciton ordering is not trivial. The singlet atomic excitonic state is indeed a linear combination of
|↑_A ↓_B ⟩ and |↑_B ↓_A ⟩ which are favoured by increasing U, but also of the states |↑_A ↓_A, 0 ⟩ and
|0, ↑_B ↓_B ⟩, which are instead depleted by U. Hence, while the magnetic correlations develop approaching the U-Mott state,
they first contribute to the onset of excitonic ordering, but as we exceed a given "optimal" distance from the Mott state, the EOP decreases, leading to the existence of a bell-shaped behavior of the order parameter.
We finally notice that the spin-singlet correlations follow from our choice to study spin-singlet excitons, and we expect the same picture to hold for spin-triplet exciton. The key idea is that Mott localization within each layer leads to localized moments which are naturally prone to acquire any inter-layer correlation when exciton ordering is allowed. Finally, in the U-MI state the EOP vanishes and the SU(2)× SU(2) spin symmetry with four independent ground states is recovered.
§.§ Characterizing the Excitonic State via a mapping on Superconductivity
A particle-hole transformation on layer B:
c^†_i σ B→ c_i σ B (-1)^σ
maps our model for α= -1 onto a two-orbital model with the same form of Eq. (1) in which the two orbitals share the same hopping t_A = t_B = t and the inter-orbital interaction becomes attractive (-V), while the intra-layer remains repulsive. This model can indeed host an inter-orbital s-wave superconducting state, which maps on our excitonic state via the same particle-hole transformation (<ref>).
We can exploit this mapping to compute some observables which characterize the superconducting state and allow us to better characterize the EC.
The superfluid stiffness D_s <cit.> is a crucial parameter that controls the critical temperature. It measures the coherence of the superconducting state and its rigidity to fluctuations of the phase of the order parameter. Indeed, a superconductor with small D_s has a small critical temperature even if the zero-temperature modulus of the order parameter is large, as it happens in the strong-coupling limit in a single-orbital attractive Hubbard model <cit.>
In the effective model with inter-layer attraction -| V| obtained via the transformation (<ref>) D_s reads
D_S/π e^2= ⟨ -E_kin⟩ - χ_jj ( 𝐪→ 0 , ω =0)
where j is the current operator and E_kin is the expectation value of the hopping part of the Hamiltonian.
For a Bethe lattice we obtain<cit.>
D^ex_S/e^2 π = -4 α/β∑_ i ω_n , σ∫ dε V(ε) D(ε) |G_AB (ε , iω_n)|^2
where V(ϵ)=4t^2-ϵ^2/3 is the square of the current vertex for orbital A and α=t_B/t_A (see Appendix <ref> for the derivation).
We underline that the total current of the attractive model corresponds, in model (<ref>), to the operator
j_ex(𝐪,iω_n)=j_A(𝐪,i ω_n)- j_B(𝐪,i ω_n),
which is clearly different from the current operator associated with the total charge. Hence, the D_s can be considered a real superfluid stiffness only for the auxiliary attractive model.
Yet, D_s also provides direct information about the coherence and stability properties, which translates into analogous information about the EC phase of our model (<ref>).
The coherence length ξ has indeed naturally the same meaning in the two frameworks, namely it measures the length over which the constituents of the pair/exciton retain quantum coherence. It is given by<cit.>
ξ^2 = ∑_𝐤 | ∇_𝐤 F( 𝐤) |^2/∑_𝐤 | F( 𝐤) |^2
where
F(𝐤) = ∑_iω_n e^iω_n 0^+ G_AB(ϵ_𝐤,iω_n)
The results for D_s and ξ are reported in panels (b) and (c) of
Fig. <ref> in order to compare their behavior with the
EOP. The results for U=0 are qualitatively similar to an attractive
model and they reflect the BCS to Bose-Einstein Condensate (BEC) crossover as a function of the coupling. Indeed both D_s and ξ are maximal in the weak-coupling side and they decrease as the interaction grows.
Increasing | V| we have a progressive reduction of the coherence length, associated with more localized pairs/excitons characteristic of the BEC limit. Also D_s decreases as a result of the smaller coherence of the pairs/excitons and it actually vanishes at the continuous transition to the V-MI state.
When we introduce and increase U, we find an important difference on the "weak-coupling" side of the crossover. Indeed both D_s and ξ are depleted also close to the smallest values of V required to establish the EC. As a result, for large U the two quantities have a maximum around the U ∼ V line. These results clearly confirm the U-induced localization of the excitons that we discussed above and the crucial role of the interplay between the two interactions to induce an EC for V<U.
§ CONCLUSIONS
We used DMFT to assess the existence of an excitonic state in the zero-temperature phase diagram of a two-layer Hubbard model with intra-layer (U) and inter-layer (V) density-density repulsive interactions. Working at half filling, we can study how the excitonic long-range order is affected by the Mott physics.
We find a sizeable region of exciton ordering when the two interactions are comparable.
The transition from the EC phase to the Mott insulating phase is continuous, while the transition from the metal to the EC is first order.
For small and intermediate U, the excitonic state is present only if V > U. On the other hand, for U ≳ 3D i.e., close to a standard Mott transition within each layer, we find an exciton state also when V < U, signaling a non-trivial interplay in which quantum fluctuations play an active role.
We have indeed shown that the enlargement of the excitonic phase in the proximity of the intra-layer Mott transition can be connected with the U-driven development of local magnetic moments that, in turn, favour magnetic correlations between the two layers (singlets in our case). We expect this mechanism to be general, and in particular, to be present also for models where the exciton and the magnetic correlations have a triplet symmetry.
Exploiting a simple mapping onto a model with attractive inter-layer
interactions, we have been able to further characterize the excitonic
state. The coherence length, which has essentially the same
interpretation of that of a superconductor, shows that the proximity
to the V-driven Mott state leads to localized pairs with very short
coherence length. Analogously, the equivalent of the superconducting
superfluid stiffness shows that the coherence of the EC state tends to
vanish when the V-Mott insulator is reached. In other words, when we
approach the Mott transition, the EC state is driven towards the
strong-coupling limit, which in the superconducting language corresponds to the BEC limit<cit.>. We notice in passing that the
BEC nature and its evolution from a BCS limit can be experimentally
assessed via both thermodynamic<cit.> and spectral
properties<cit.>.
These results further strengthen our picture where the charge
localization induced by U is central in the stabilization of the
excitonic condensate for V < U and in determining its properties.
The existence of excitonic states for V < U is important because in a real bilayer system, or in a multi-orbital correlated material, we always expect V < U. We notice however that an electron-phonon coupling of the Holstein type (coupled with the total local electron density) can effectively reduce U, making in principle the effective U closer or even smaller than V<cit.>.
As we anticipated in the introduction, our model has been introduced as the minimal model for a bilayer system in which excitonic phases can be present and, at the same time, Mott physics is effective. The results we have obtained have to be considered as a basis to build the understanding of richer and more involved models including, among others, different and more complex hopping structures, energy difference and/or hybridization betweeen the two bands and a richer structure of the interactions.
§ ACKNOWLEDGEMENTS
We acknowledge funding by MUR through the PRIN 2017 (Prot. 20172H2SC4
005), PRIN 2020 (Prot. 2020JLZ52N 002) programs, National Recovery and
Resilience Plan (NRRP) MUR Project No. PE0000023-NQSTI and ICSC–Centro
Nazionale di Ricerca in High Performance Computing, Big Data and
Quantum Computing, funded by European Union – NextGenerationEU (Grant
number CN00000013) - Mission 4 Component 2 Investments 1.3 and 1.4.
§ SUPERFLUID STIFFNESS
In this appendix we provide some details of the calculation of the superfluid stiffness for the attractive model obtained through the canonical transformation (<ref>). From the definition<cit.>:
D_S/π e^2= ⟨ -E_kin⟩ - χ_jj ( 𝐪→ 0 , ω =0)
We need to compute the kinetic energy and the current-current response function. We make use of the previously defined spinorial representation to define the Green's function as:
Ĝ_σ (𝐤,τ)= ⟨ T [ c_k A σ(τ); c_k B σ(τ) ]⊗[ c^†_k A σ(0) c^†_k B σ (0) ]⟩ =
[ G_AA (𝐤 , τ ) G_AB (𝐤 , τ ); G_BA (𝐤 , τ ) G_BB (𝐤 , τ ) ]
From now on we consider it diagonal in spin, so we can avoid writing the spin index σ explicitly. In single-site DMFT, where the self-energy is local and site independent, the Dyson equation for the interacting Green's functions reads:
Ĝ_0 (𝐤,i ω_n )^-1 = Ĝ (𝐤,i ω_n )^-1 + Σ̂ ( i ω_n )
where the hat indicates that all of these are matrices as in the previous equation <ref>. This means that the diagonal and off-diagonal components are:
G_AA ( ε , i ω ) = i ω - αε -Σ_BB ( i ω)/( iω - ε - Σ_AA(i ω) ) ( i ω -αε -Σ_BB(i ω) ) - | Σ_AB(i ω) |^2
G_BB ( ε , i ω ) = i ω - ε -Σ_AA ( i ω)/( iω - ε - Σ_AA(i ω) ) ( i ω -αε -Σ_BB(i ω) ) - | Σ_AB(i ω) |^2
G_AB (ε, i ω ) = Σ_AB ( i ω)/( iω - ε - Σ_AA(i ω) ) ( i ω -αε -Σ_BB(i ω) ) - | Σ_AB(i ω) |^2 =G_BA^* (ε, i ω )
where α=t_B/t_A therefore ϵ^(A)=ϵ and ϵ^(B)=αϵ.
In this derivation we will set the energy splitting to zero (M=0) for simplicity but the results remain valid for any value of M. In DMFT the kinetic energy for orbital m can be easily computed since the Green's function is known:
E_kin^(m) = ∑_𝐤σϵ_𝐤^(m)⟨ c^†_𝐤σ m c_𝐤σ m⟩
= lim_η→ 0^+β^-1∑_i ω_n∑_𝐤σϵ_𝐤^(m) G_m m (𝐤,i ω_n ) e^i ω_n η
= lim_η→ 0^+β^-1∑_i ω_n, σ∫ d ϵ D(ϵ) ϵ^(m) G_m m (ϵ,i ω_n ) e^i ω_n η
computing it explicitly for the two orbitals and performing a partial integration using the relation -ϵ D(ϵ)=∂_ϵ [ D(ϵ) V(ϵ)] where V(ϵ)=4t^2-ϵ^2/3=(v^(A)_ϵ)^2 is the square of the current vertex in orbital A, α^2 V(ϵ)=(v^(B)_ϵ)^2 is the square of the current vertex in orbital B and D(ϵ)= 1/2π t^2√((2t)^2-ϵ^2) is the density of states:
E_kin,A = β^-1∑_ i ω_n , σ∫ dε V(ε) D(ε) G_AA^2 (ε , iω_n) [1 +α|Σ_AB(i ω_n) |^2/(i ω_n - αε - Σ_BB(i ω_n) )^2 ]
= β^-1∑_ i ω_n , σ∫ dε V(ε) D(ε) [ G_AA^2 (ε , iω_n)+α |G_AB (ε , iω_n)|^2]
E_kin,B = β^-1∑_ i ω_n , σ∫ dε V(ε) D(ε) G_BB^2 (ε , iω_n) [α^2 +α|Σ_AB(i ω_n) |^2/(i ω_n - ε - Σ_AA(i ω_n) )^2 ]
= β^-1∑_ i ω_n , σ∫ dε V(ε) D(ε) [α^2 G_BB^2 (ε , iω_n)+α |G_AB (ε , iω_n)|^2 ]
From which one can check that if there is no orbital off-diagonal self-energy and α=± 1 the kinetic energy is the same in the two orbitals. The computation of the current-current response in DMFT in infinite dimensions is simplified since all the vertex corrections are cancelled <cit.> and only the elementary bubble contributions survive, therefore:
χ_jj (𝐪 , τ) = -⟨ j_ex (𝐪, τ) j_ex(-𝐪,0) ⟩ , j_ex(𝐪, τ)=j_A(𝐪, τ)-j_B(𝐪, τ)
χ_jj (𝐪→ 0 , i ω =0) = [χ^AA_jj - χ^AB_jj - χ^BA_jj +χ^BB_jj ](𝐪→ 0 , i ω =0)
χ_jj^mm^' ( 𝐪,iω) = -β^-1∑_𝐤 , iν , σ v^(m)_𝐤σ v^(m^')_𝐤+𝐪σ G_mm^'( 𝐤, iν) G_m^' m (𝐤 + 𝐪, i ν + i ω) , m,m^'=A,B
Where the current vertex for the two orbitals are related by v^(B)= α v^(A).
Merging the DMFT results for the kinetic energy and the current-current response function, the superfluid stiffness for the selected model is:
D_S/e^2 π = -4α/β∑_ i ω_n , σ∫ dε V(ε) D(ε) |G_AB (ε , iω_n)|^2
This interesting result carries some important information. Since the superfluid stiffness has to be a positive quantity, the "naive" two-orbital Hubbard model with symmetric bands (α=1) would not allow any finite D_S. This is in agreement with results showing that local excitonic correlations are damped for α>0 <cit.> in favor of a bipartite antiferro-EC state, which corresponds to a model with a shift of the B band by the vector 𝐐 of bipartite lattices for which ϵ_𝐤 = - ϵ_𝐤+𝐐, e.g. for the square lattice in D dimensions the vector is 𝐐=(π,π,...,π). For α=0 (Falicov-Kimball model with spin) it correctly predicts no superfluid excitonic state, since one of the species is not mobile and since in this limit no excitonic phase is expected <cit.>. This special case prohibits excitonic ordering since in the limit α→ 0^+ there must be an antiferro-EC state while in the limit α→ 0^- a ferro-EC state, thus α=0 is an unstable point between these two
phases<cit.>.
Our choice of opposite bands α=-1 is therefore optimal and in this situation the Superfluid Stiffness can be rewritten as:
D_S/e^2 π = 4/β∑_σ , iω_n∫ d ε V ( ε ) D( ε ) | G_AB( ε , iω_n) |^2
This result tells us that opposite band dispersions are the optimal setting in the search for a superfluid exciton condensate.
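As a rough numerical illustration of this expression, the Python sketch below evaluates D_S/(e^2π) for α=-1 on the Bethe lattice using a toy constant anomalous self-energy Σ_AB=Δ with Σ_AA=Σ_BB=0 (an assumption made purely for illustration, not the actual DMFT self-energy of the model), performing the Matsubara sum at a low but finite temperature.

import numpy as np

t = 0.5                              # hopping; Bethe half-bandwidth D = 2t = 1
Delta_ab = 0.3                       # toy constant anomalous self-energy Sigma_AB (assumption)
beta, n_max = 200.0, 2000            # inverse temperature and Matsubara cutoff

eps = np.linspace(-2 * t, 2 * t, 1001)
deps = eps[1] - eps[0]
D_eps = np.sqrt(np.clip(4 * t**2 - eps**2, 0.0, None)) / (2 * np.pi * t**2)   # Bethe DOS
V_eps = (4 * t**2 - eps**2) / 3                                               # squared current vertex

wn = (2 * np.arange(n_max) + 1) * np.pi / beta      # positive fermionic Matsubara frequencies

# |G_AB|^2 for alpha = -1 with Sigma_AA = Sigma_BB = 0 and Sigma_AB = Delta_ab
G_AB_sq = Delta_ab**2 / (wn[:, None]**2 + eps[None, :]**2 + Delta_ab**2)**2

# D_S/(e^2 pi) = (4/beta) sum_{sigma, i w_n} \int d(eps) V(eps) D(eps) |G_AB|^2
Ds = (4.0 / beta) * 2 * 2 * np.sum((V_eps * D_eps)[None, :] * G_AB_sq) * deps  # x2 spin, x2 for -w_n
print(f"D_S/(e^2 pi) ~ {Ds:.4f}")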
§ CALCULATION OF THE COHERENCE LENGTH
For the Bethe lattice we have no access to the momenta but only to the energy; therefore we have to pass from ∇_𝐤 to something we can treat. Starting from the numerator of the coherence-length definition<cit.>:
∑_𝐤|∇_𝐤 F( 𝐤)|^2
=∑_𝐤|(∇_𝐤ϵ_𝐤)∂ F(ϵ)/∂ϵ|_ϵ=ϵ_𝐤|^2
= ∑_𝐤|(∇_𝐤ϵ_𝐤) [ 1/β∑_iω_n e^iω_n 0^+∂/∂ϵF(ϵ , iω_n) |_ϵ=ϵ_𝐤] |^2,
where F(ϵ,iω_n)=G_AB(ϵ,iω_n) as previously defined (See Appendix <ref>) and ∇_𝐤ϵ_𝐤=v_𝐤 is the group velocity of the non interacting particles (we take ħ =1). Now the dependency on 𝐤 is present only through ϵ_𝐤 via the relation |v_𝐤|=√(4t^2-ϵ_𝐤^2/3)=v(ϵ) therefore we can pass to the integral in energy and the result for the numerator is:
∑_𝐤|∇_𝐤 F( 𝐤)|^2
= ∫ dϵ D(ϵ) | 1/β∑_iω_n e^iω_n 0^+v(ϵ) G_AB^2(ϵ , iω_n)2ϵ +Σ_BB(iω_n)-Σ_AA(iω_n)/Σ_AB(iω_n) |^2
For the denominator no change is needed and the substitution of F(𝐤) gives directly
∫ dϵ D(ϵ) | 1/β∑_iω_ne^iω_n 0^+ G_AB(ϵ, iω_n) |^2
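With the same toy anomalous self-energy used above (again an illustrative assumption rather than the DMFT result), the coherence length can be estimated numerically as follows, with ∂ F/∂ϵ replaced by a finite-difference derivative.

import numpy as np

t, Delta_ab, beta, n_max = 0.5, 0.3, 200.0, 2000     # same toy parameters as above (assumptions)
eps = np.linspace(-2 * t, 2 * t, 1001)
deps = eps[1] - eps[0]
D_eps = np.sqrt(np.clip(4 * t**2 - eps**2, 0.0, None)) / (2 * np.pi * t**2)
v2_eps = (4 * t**2 - eps**2) / 3                     # |group velocity|^2 on the Bethe lattice

wn = (2 * np.arange(n_max) + 1) * np.pi / beta
# F(eps) = (1/beta) sum_{i w_n} G_AB; the toy G_AB = -Delta/(w_n^2 + eps^2 + Delta^2) decays
# as 1/w_n^2, so the e^{i w_n 0+} convergence factor can be dropped
F = -(2.0 / beta) * np.sum(Delta_ab / (wn[:, None]**2 + eps[None, :]**2 + Delta_ab**2), axis=0)

dF = np.gradient(F, deps)                            # dF/d(eps); grad_k is carried by the group velocity
xi = np.sqrt(np.sum(D_eps * v2_eps * dF**2) / np.sum(D_eps * F**2))
print(f"coherence length xi ~ {xi:.3f} lattice spacings")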
|
http://arxiv.org/abs/2307.05677v1 | 20230711180003 | Area, Delay, and Energy-Efficient Full Dadda Multiplier | [
"Muteen Munawar",
"Zain Shabbir",
"Muhammad Akram"
] | eess.SY | [
"eess.SY",
"cs.AR",
"cs.SY",
"03B80",
"B.6.0; B.7.0"
] |
Department of Electrical, Electronics and Telecommunication Engineering,
University of Engineering and Technology,
Lahore, Punjab,
Pakistan[Muteen Munawar, Zain Shabbir and Muhammad Akram are with the Department of Electrical, Electronics and Telecommunication Engineering, University of Engineering and Technology, Lahore 54890, Pakistan]
^[email protected]
^[email protected]
^[email protected]
Area, Delay, and Energy-Efficient full Dadda Multiplier
Muteen Munawar^1, Zain Shabbir^2, Muhammad Akram^3
August 12, 2023
=======================================================
The Dadda algorithm is a parallel structured multiplier, which is quite faster as compared to array multipliers, i.e., Booth, Braun, Baugh-Wooley, etc. However, it consumes more power and needs a larger number of gates for hardware implementation. In this paper, a modified-Dadda algorithm-based multiplier is designed using a proposed half-adder-based carry-select adder with a binary to excess-1 converter and an improved ripple-carry adder (RCA). The proposed design is simulated in different technologies, i.e., Taiwan Semiconductor Manufacturing Company (TSMC) 50 nm, 90 nm, and 120 nm, and on different GHz frequencies, i.e., 0.5, 1, 2, and 3.33 GHz. Specifically, the 4-bit circuit of the proposed design in TSMC's 50 nm technology consumes 25 uW of power at 3.33 GHz with 76 ps of delay. The simulation results reveal that the design is faster, more power-energy efficient, and requires a smaller number of transistors for implementation as compared to some closely related works. The proposed design can be a promising candidate for low-power and low-cost digital controllers. In the end, the design has been compared with recent relevant works in the literature.
§ INTRODUCTION
A digital multiplier is an important building block of any logical processor in a digital system, and some of the main specifications of such systems like processing speed, power consumption, and energy efficiency highly depend on it <cit.>. There is always a need to improve the performance of a multiplier to meet the requirements of fast and energy-efficient processes <cit.>. In digital image processing systems, convolution neural networks <cit.>, and general-purpose processes, the performance of a multiplier must be considered, especially where mathematical data evaluation is of higher priority <cit.>.
A general multiplication algorithm can be divided into three segments <cit.>. The first part is where two n-bit numbers are given to the inputs of AND gates to generate the partial products (PPs). Then, these PP layers are compressed using full adders (FAs) and half adders (HAs) until only two layers of binary numbers remain. Finally, these two layers are added, usually by a RCA, in order to generate the final result of multiplication <cit.>. The majority of the work in the literature is focused on the second segment, i.e., compressors, to reduce the overall delay, power consumption, and area of the multiplier <cit.>.
In the literature, many publications can be found on the implementation of multiplier algorithms to reduce delay, power and energy consumption, and layout area. For example, in <cit.> authors designed a 4-bit Dadda multiplier circuit using a reduced-split precharge-data driven dynamic sum logic (rspD3Lsum), which uses a lower number of transistors as compared to the traditional one. As a result, the power consumption and area of the chip were improved. In <cit.>, the authors modified the carry-select adder (CSA) using a binary to excess-1 (BEC1) converter and used this circuit as a compressor to implement the circuit of the Dadda multiplier, which improved the speed, area, and energy as compared to the traditional CSA-based design. To reduce power and area, an optimized adder with pass transistor logic is used to design a dadda multiplier in <cit.>, but the output voltage levels are not as strong as in CMOS logic. The authors proposed a Dadda circuit based on the carry look-ahead adder and optimized full adder in <cit.>, which was implemented using complex cells in CMOS 65 nm technology.
To increase the performance of the digital multipliers by improving the second stage of the multiplication, a number of works can be found in the literature. The parallel prefix <cit.>, the approximate adder <cit.>, attack-based, and the novel compressor <cit.> are examples. These methods try to focus on the second step of multiplication, where different layers of PPs are reduced using compressors <cit.>. Another type of multiplier implementation technique exists in the literature and is known as "approximate multipliers" <cit.>. These multipliers are typically more area- and power-efficient than exact multipliers; however, they may contain errors and are best suited for error-tolerant applications. Various designs for these approximate multipliers have been proposed in the literature <cit.>.
In this paper, a digital multiplier design is proposed that is based on the modified Dadda algorithm, also known as the "full Dadda algorithm" <cit.>. To compress the layers of PPs in our design, a novel 3:2 adder has been proposed that is faster, more area-efficient, and more power-energy efficient as compared to other traditional FAs, i.e., carry look-ahead adders, simple FAs, CSAs, etc. In the end, an improved RCA <cit.> has been used to calculate the final result. The design of the proposed circuits is validated in DSCH software, whereas the layouts are designed and simulated in Microwind software using Taiwan Semiconductor Manufacturing Company (TSMC) 50 nm, 90 nm, and 120 nm technologies. The simulation results show that the proposed design is better compared to the related works in terms of transistor count, energy efficiency, and delay.
This paper is organized as follows: the traditional Dadda algorithm and the full Dadda algorithm are explained in Section 2. In Section 3, the proposed design is explained in detail. Section 4 shows the simulation results, whereas Section 5 gives a brief conclusion to this paper.
§ FULL DADDA MULTIPLIER
A Dadda multiplier is similar to a Wallace multiplier <cit.>, but it is faster and needs a smaller number of gates for a multiplication operation. In Figures <ref>, <ref>, and <ref>, the structure of a 4-bit Dadda multiplier can be seen. After generating 16 PPs of input bits, as shown in Figure <ref>, PPs are arranged in a specific order, which can be seen in Figure <ref>. After arranging in this pattern, the Dadda multiplier uses the HAs and FAs to reduce the number of layers of PPs in such a way that each successive step reduces the number of layers by a factor of 2/3. The Dadda multiplier tries to reduce the number of gates and input/output delay in each layer, whereas the Wallace multiplier attempts to reduce the layers as much as possible. More information on the traditional dadda multiplier can be found in the references <cit.>.
The difference between a traditional Dadda and a full Dadda multiplier is that a full Dadda prefers to use FAs in the early stages of reduction while a simple Dadda uses HAs, except in the last stage where both are the same. The full Dadda provides a simple and more regular scheme as compared to the traditional one and uses a lower number of interconnections in circuit layout. As a result, full Dadda implementation requires less area as compared to a simple Dadda scheme. Some general equations to calculate the number of HAs, FAs, and carry-propagation adders (CPA) for an N-bit full Dadda multiplier are given below <cit.>:
HAs = N - 1,
FAs = (N - 1) × (N - 3), valid for N > 2,
Size of CPA = 2 × (N - 2).
More detail on the general rules and equations of the full Dadda multiplier can be found in <cit.>. A comparison of the Dadda and full Dadda pattern, from <cit.>, has been shown in Figure <ref> and <ref>, respectively.
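For quick reference, these counting rules can be evaluated with a few lines of Python; the helper below is only a bookkeeping sketch of the block counts.

def full_dadda_counts(N):
    """Block counts of an N-bit full Dadda multiplier, from the equations above (valid for N > 2)."""
    return {"HAs": N - 1, "FAs": (N - 1) * (N - 3), "CPA size": 2 * (N - 2)}

for N in (4, 8, 16):
    print(N, full_dadda_counts(N))   # e.g. N = 4 -> 3 HAs, 3 FAs and a 4-bit CPA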
§ PROPOSED WORK
The implementation design of a full Dadda multiplier is proposed, where the reduction phase has been processed using our proposed adder. Before discussing the proposed adder, it is necessary to look at a traditional CSA (Figure <ref>). A CSA calculates two results in parallel. One result considers that the incoming input carry bit is 0, while the second result considers carry bit 1. In this way, when the actual carry bit comes, this adder selects the corresponding result using a multiplexer (MUX) circuit. The benefit of using this technique is that the final result is much faster as compared to a simple carry propagation full adder because it does not have to wait for the incoming carry bit to start the addition process. However, the problem with this addition is that its implementation requires a large number of transistors, which results in a large area and high power consumption. In <cit.>, the authors modified CSA with a BEC1 converter (CSA-BEC1) (Figure <ref>) and used this adder in the implementation of the Dadda multiplier circuit. As a result, the overall circuit used a small area in the VLSI layout and consumed less power. As it can be seen in Figure <ref>, the purpose of the FAs in the second row is just to give an addition result while considering that the incoming carry bit is 1. This series of FAs can be replaced by a BEC1 converter. A BEC1 adds one bit to the result while using fewer transistors. So instead of using a simple FAs series, the authors in <cit.> used a BEC1 converter, and as a result, their multiplier achieved better results in terms of power, speed, and energy.
This section is divided into two parts. In the first part, the design and workings of the proposed adder are explained. Consequently, based on the proposed adder, the design of the full Dadda multiplier is explained in the second part.
§.§ Proposed half adder based CSA-BEC1
The CSA-BEC1 circuit is further modified in this paper, and it is then used in full daddy multiplier implementation to achieve better performance. A block diagram of the proposed modification is shown in Figure <ref>. It can be seen that there is no FA in the proposed modification. Although all FAs have been replaced by HAs, this circuit still makes use of the speed advantages of a CSA. Comparing this circuit with Figures <ref> and <ref>, it can be observed that there are no horizontal interconnections between the HA adders and the BEC1 circuit as there were in CSA and CSA-BEC1. Carry propagation overhead between FAs is shifted to MUX stages. Due to this overhead transfer, although the FAs can be replaced by HAs, the number of MUX is doubled; however, the overall number of transistors is reduced as compared to the simple CSA and CSA-BEC1. Furthermore, the MUX circuit is implemented using pass transistor logic (PTL), which only needs 2 transistors for implementation (Figure <ref>).
In summary, the FAs are replaced with HAs, which save at least 18 transistors per 1-bit in CMOS logic at a cost of only two transistors more in MUX. Hence, the total number of transistors is reduced, which will reduce the layout area and improve the delay and energy performance.
The workings of the improved adder, named HA-based CSA with BEC1 (HA-CSA-BEC1), are as follows: The HAs calculate the output considering that there is no input carry, i.e., 0, and similarly, the BEC-1 blocks calculate the addition results considering that the input carry is 1. When the actual carry comes to the input of MUX, the final selection of output is made based on the actual carry bit. Carry propagation takes place in the MUX stage rather than the FAs as it did in CSA and CSA-BEC1. However, this propagation of carry through MUXs is very fast as compared to CSA and CSA-BEC1 because there is only one transistor delay per 1-bit in the path of propagation.
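A purely behavioural Python sketch of this scheme is given below: each bit slice forms the half-adder result (carry-in assumed 0) and its BEC1-incremented version (carry-in assumed 1), and the MUX picks between them once the actual carry arrives. The function names are illustrative, the model abstracts away the gate and transistor level, and it is checked against a + b + carry-in for all 4-bit operands.

def ha_csa_bec1_bit(a, b, cin):
    """One bit slice: HA result for carry-in 0, BEC1-incremented result for carry-in 1, MUX select."""
    s0, c0 = a ^ b, a & b        # half adder (assumes incoming carry = 0)
    s1, c1 = s0 ^ 1, c0 ^ s0     # BEC1: the same 2-bit result incremented by one
    return (s1, c1) if cin else (s0, c0)

def ha_csa_bec1_add(a, b, cin=0, width=4):
    out, carry = 0, cin
    for i in range(width):       # the carry ripples only through the MUX stage
        s, carry = ha_csa_bec1_bit((a >> i) & 1, (b >> i) & 1, carry)
        out |= s << i
    return out | (carry << width)

assert all(ha_csa_bec1_add(a, b, c) == a + b + c
           for a in range(16) for b in range(16) for c in (0, 1))
print("4-bit HA-CSA-BEC1 behavioural model matches a + b + carry-in for all inputs")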
A comparison of the total number of transistors is shown in Table <ref>. The implementation of a 4-bit simple CSA, CSA-BEC1, and proposed HA-CSA-BEC1 with CMOS logic takes 256, 168, and 128 transistors, respectively. Details are given in Table <ref>.
Furthermore, in Figure <ref>, the gate-level circuit diagram of 1-bit of our proposed adder is shown, and the working of the same circuit is explained in Figure <ref>. This 1-bit version (used in the proposed multiplier design) of our proposed adder works as a FA but with high speed and better energy efficiency. In this circuit (Figure <ref>), the incoming carry bit contributes to the final sum and carry-out with only one transistor delay. In some cases, i.e., in the Dadda circuit, the incoming carry bit may be late as compared to inputs A and B. For example, in Figure <ref>, the circuit block diagram of the 4-bit proposed multiplier is shown, where an HA-CSA-BEC-1 block is outlined in red. The inputs A and B of that block come directly from AND gates, i.e., the PPs generator; however, the carry-in bit comes through another adder block, which adds some delay to this bit as compared to the A and B inputs. This gap of time adds some delay to the final results. However, in the proposed adder (Figure <ref>), the incoming carry bit can cover this delay, and the next adder doesn’t have to wait for carry-in until Step-3 (Figure <ref>), because the carry-in bit contributes only at the last stage, whereas A and B have to pass through Step-1 and/or Step-2, depending on the actual value of Cin. The output expression of the proposed adder is given by
output_sum = A ⊕ B and output_carry = A ∘ B, if C_in = 0;
output_sum = A ⊕ B ⊕ 1 and output_carry = ( A ∘ B) ⊕( A ⊕ B), if C_in = 1.
The 1-bit circuit of the proposed adder is simulated using a TSMC 50-nm library. The layout is tested on different frequencies, i.e., 0.5 GHz, 1 GHz, 2 GHz, and 3.33 GHz, and the results, i.e., delay, power consumption, and power-delay product (PDP), are shown in Table <ref>.
A 1-bit version of our proposed HA-CSA-BEC1 consumes 0.779, 1.6, 3.28, and 5.7 uW of power at 0.5, 1, 2, and 3.33 GHz, respectively. Furthermore, it can be seen from Table <ref> that the delay is 14 ps and the total number of transistors needed for the implementation is 24. Similarly, the PDP is 0.0109, 0.0224, 0.0459, and 0.0798 (fJ) at 0.5, 1, 2, and 3.33 GHz, respectively.
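These PDP figures are simply the product of the average power and the delay; a short Python sanity check reproduces the tabulated values.

delay_ps = 14
for f_GHz, power_uW in [(0.5, 0.779), (1, 1.6), (2, 3.28), (3.33, 5.7)]:
    pdp_fJ = power_uW * 1e-6 * delay_ps * 1e-12 * 1e15    # W x s, expressed in fJ
    print(f"{f_GHz} GHz: PDP ~ {pdp_fJ:.4f} fJ")          # 0.0109, 0.0224, 0.0459, 0.0798 fJ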
§.§ Proposed full Dadda multiplier with HA-CSA-BEC1
The circuit-block diagram of a proposed 4-bit multiplier is shown in Figure <ref>. The CMOS layouts of 4-bit and 8-bit are depicted in Figures <ref>, and <ref>, respectively.
In Figure <ref>, there are two 4-bit numbers available for the input of the PPs-Generator, which results in 16 multiplications, i.e., 16 AND gates. These multiplications are added to each other by following the full Dadda algorithm, i.e., Figure <ref>. By following the layer-reduction pattern of full Dadda, two 6-bit layers are obtained, which are then fed into a carry propagation adder <cit.> to compute the final 8-bit product.
For an M-bit proposed multiplier, the total number of HA-CSA-BEC1s, HAs, AND-Gates (PPs), and CPA size can be calculated using the following equations:
HA-CSA-BEC1s = (M - 1) × (M - 3), valid for M > 2,
HAs = M - 1,
AND gates for PPs = M^2,
Size of CPA = 2 × (M - 2).
To calculate the total number of MUXs (2:1) in an M-bit proposed multiplier, following equation can be used:
MUXs = 2 × (M - 1) × (M - 3).
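The resource equations for the proposed design can likewise be collected in a small bookkeeping sketch (illustrative only); for M = 4 it gives 3 HA-CSA-BEC1s, 3 HAs, 16 AND gates, a 4-bit CPA and 6 MUXs.

def proposed_multiplier_counts(M):
    """Block counts of the proposed M-bit multiplier (valid for M > 2), from the equations above."""
    return {"HA-CSA-BEC1s": (M - 1) * (M - 3),
            "HAs": M - 1,
            "AND gates (PPs)": M ** 2,
            "CPA size": 2 * (M - 2),
            "2:1 MUXs": 2 * (M - 1) * (M - 3)}

for M in (4, 8):
    print(M, proposed_multiplier_counts(M))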
§ SIMULATION RESULTS AND PERFORMANCE COMPARISON
As aforementioned, the proposed multiplier layout is designed and simulated in Microwind software using TSMC 50 nm, 90 nm, and 120 nm CMOS technologies on different frequencies, i.e., 0.5 GHz, 1 GHz, 2 GHz, and 3.33 GHz, and results are tabulated in the Tables <ref>, <ref>, and <ref>.
According to the simulation results in Table <ref>, the proposed multiplier with TSMC 50 nm, consumes 5.34, 9.083, 15.9, and 25 uW of power at 0.5, 1, 2, and 3.33 GHz frequencies, respectively. The delay is 75 ps, and the energy usage results are 0.4, 0.681, 1.208, and 1.9 fJ at 0.5, 1, 2, and 3.33 GHz, respectively. Similarly, the simulation results of the proposed design with TSMC 90 nm and TSMC 120 nm can be seen in Tables <ref> and <ref>, respectively.
The proposed design is also compared to some recent, closely related works in the literature, i.e., <cit.>, where <cit.> is an approximate multiplier-based design. It can be seen in Table <ref> that our design outperforms these in terms of delay, power consumption, energy efficiency, and transistor count. It is important to notice that each paper is being compared separately with the proposed one. The reason is that each research paper’s results have been simulated under different frequencies and technologies, i.e., 45 nm, 90 nm, and 120 nm, etc. Therefore, to make a justified comparison, our design is simulated with the same frequencies and technologies. For example, in <cit.>, TSMC-90 nm technology with a 500 MHz frequency is used, while in <cit.> TSMC-40 nm technology with a 100 MHz frequency is used, and to make a comparison with these papers, different technologies and frequencies are used correspondingly.
§ CONCLUSION
In this paper, a low-power, high-speed, and area-energy-efficient digital multiplier is designed that is based on the full Dadda algorithm and uses a newly proposed full adder named the half-adder-based carry-select adder with a binary to excess-1 converter (HA-CSA-BEC1). Specifically, the circuit CSA-BEC1 [17] is modified by replacing the FAs with HAs and transferring the overhead of carry propagation to the multiplexer stage. The performance of the proposed design is highly dependent on the HA-CSA-BEC1, which uses only 24 transistors to work as a CMOS full adder. The proposed multiplier's circuit and layout are designed and simulated in DSCH and Microwind software, respectively, using TSMC 50 nm, 90 nm, and 120 nm with GHz frequencies, i.e., 0.5, 1, 2, and 3.33. According to the simulation results, a 4-bit proposed multiplier in TSMC-50 nm consumes 5.341 uW, 9.08 uW, 15.9 uW, and 25 uW of power at 0.5 GHz, 1 GHz, 2 GHz, and 3.33 GHz, respectively, with a delay of 75 ps and an area of 338 transistors. Compared to many recent related works, the proposed design uses a smaller number of transistors and has a lower delay, PDP, and power consumption. The proposed design can be a good candidate for resource-limited digital control applications.
ref1 Sung GN, Ciou YJ, Wang CC. A power-aware 2-dimensional bypassing multiplier using cell-based design flow. 2008 IEEE Int. Symp. on Circuits and Syst. 2008; 3338-3341. doi: 10.1109/ISCAS.2008.4542173
ref2 Jiang GL, Wu TC, Chang YJ. Low power multiplier with alternative bypassing implementation. In: Proc. of the Int. Conf. on Embedded Syst., Cyber-physical Sys., and Applications (ESCS) 2012.
ref3 Cui X, Liu W, Chen X, Swartzlander EE, Lombardi F. A modified partial product generator for redundant binary multipliers. IEEE transact. on comput. 2015; 65(4): 1165-71. doi: 10.1109/TC.2015.2441711
ref4 Moss DJ, Boland D, Leong PH. A two-speed, radix-4, serial–parallel multiplier. IEEE Transac. on Very Large Scale Integration (VLSI) Systems. 2018 Dec 17;27(4):769-77. doi: 10.1109/TVLSI.2018.2883645
ref5 Ravi N, Satish A, Prasad TJ, Rao TS. A new design for array multiplier with trade off in power and area. arXiv preprint arXiv:1111.7258. 2011 Nov 30. doi: 10.48550/arXiv.1111.7258
ref6 Vaidya S, Dandekar D. Delay-power performance comparison of multipliers in VLSI circuit design. Int. J. of Comput. Networks & Commun (IJCNC). 2010 Jul;2(4):47-56.
ref7 Moons B, Verhelst M. An energy-efficient precision-scalable ConvNet processor in 40-nm CMOS. IEEE J. of solid-state Circuits. 2016 Dec 29;52(4):903-14. doi: 10.1109/JSSC.2016.2636225
ref8 Jo J, Kim S, Park IC. Energy-efficient convolution architecture based on rescheduled dataflow. IEEE Transac. on Circuits and Systems 2018 Jun 7;65(12):4196-207. doi: 10.1109/TCSI.2018.2840092
ref9 Kesava RB, Rao BL, Sindhuri KB, Kumar NU. Low power and area efficient Wallace tree multiplier using carry select adder with binary to excess-1 converter. 2016 Conf. on Adv. in Signal Process. (CASP) 2016 Jun 9 (pp. 248-253). IEEE. doi: 10.1109/CASP.2016.7746174
ref10 Bano N. VLSI Design of low power booth multiplier. Int. J. of Scientific & Eng. Res 2012 Feb;3(2):2-4.
ref11 Chang CH, Gu J, Zhang M. Ultra low-voltage low-power CMOS 4-2 and 5-2 compressors for fast arithmetic circuits. IEEE Transac. on Circuits and Sys. 2004 Oct 18;51(10):1985-97. doi: 10.1109/TCSI.2004.835683
ref12 Aliparast P, Koozehkanani ZD, Nazari F. An ultra high speed digital 4-2 compressor in 65-nm CMOS. Int. J. of Comput. Theory and Eng. 2013 Aug 1;5(4):593.
ref13 Chang YJ, Cheng YC, Lin YF, Liao SC, Lai CH, Wu TC. Imprecise 4-2 compressor design used in image processing applications. IET Circuits, Devices & Sys.2019 Oct 10;13(6):848-56.
ref14 Alexander S. A Review of Different Multipliers in Digital Circuits. Int. J. of MC Square Sci. Research. 2012 Dec 15;4(1):1-6.
ref15 Esposito D, Strollo AG, Napoli E, De Caro D, Petra N. Approximate multipliers based on new approximate compressors. IEEE Transac. on Circuits and Sys. 2018 Jun 12;65(12):4169-82. doi: 10.1109/TCSI.2018.2839266
ref16 Shabbir Z, Ghumman AR, Chaudhry SM. A Reduced-sp-D3L_sum Adder-Based High Frequency 4x4 Bit Multiplier Using Dadda Algorithm. Circuits, Sys., and Signal Process 2016 Sep;35(9):3113-34. doi: 10.1007/s00034-015-0201-7
ref17 Munawar M, Khan T, Rehman M, Shabbir Z, Daniel K, Sheraz A, Omer M. Low power and high speed Dadda multiplier using carry select adder with binary to excess-1 converter. 2020 Int. Conf. on Emerging Trends in Smart Technologies (ICETST) 2020 Mar 26 (pp. 1-4). IEEE. doi: 10.1109/ICETST49965.2020.9080739
ref18 Riaz MH, Ahmed SA, Javaid Q, Kamal T. Low power 4× 4 bit multiplier design using dadda algorithm and optimized full adder. 15th international Bhurban conf. on applied sci. and technol. (IBCAST) 2018 Jan 9 (pp. 392-396). IEEE. doi: 10.1109/IBCAST.2018.8312254
ref19 Manu V, Prakash AV, Chandra MU. Design and implementation of sixteen-bit low power and area efficient dadda multiplier. 2019 4th International Conf. on Recent Trends on Electronics, Inf., Communication & Technol. (RTEICT) 2019 May 17 (pp. 631-636). IEEE. doi: 10.1109/RTEICT46194.2019.9016834
ref20 Bharathi M, Shirur YJ. Optimized Synthesis of Dadda Multiplier Using ParallelPrefix Adders. 2019 International Conf. on Smart Sys. and Inventive Technol. (ICSSIT) 2019 Nov 27 (pp. 288-292). doi: 10.1109/ICSSIT46314.2019.8987897
ref21 Pathak KC, Sarvaiya JN, Darji AD, Diwan S, Gangadwala A, Bhatt Z, Patel A. An Efficient Dadda Multiplier using Approximate Adder. 2020 IEEE Region 10 Conf. (TENCON) 2020 Nov 16 (pp. 176-181). doi: 10.1109/TENCON50793.2020.9293737
ref22 Maddisetti L, Ravindra JV. Low-Power, High-Speed Adversarial Attack based 4:2 Compressor as Full Adder for Multipliers in FIR Digital Filters. Int. Symp. of System-on-Chip (SoC) 2019 Oct 29 (pp. 1-6). IEEE. doi: 10.1109/NORCHIP.2019.8906934
ref23 Sebastian A, Jose F, Gopakumar K, Thiyagarajan P. Design and Implementation of an Efficient Dadda Multiplier Using Novel Compressors and Fast Adder. 2020 International Symp. on Devices, Circuits and Sys. (ISDCS) 2020 Mar 4 (pp. 1-4). IEEE. doi: 10.1109/ISDCS49393.2020.9263014
ref24 Pabithra S, Nageswari S. Analysis of approximate multiplier using 15–4 compressor for error tolerant application. 2018 International Conf. on Control, Power, Commun. and Computing Technol (ICCPCCT) 2018 Mar 23 (pp. 410-415). IEEE. doi: 10.1109/ICCPCCT.2018.8574287
ref25 Kim S, Kim Y. High-performance and energy-efficient approximate multiplier for error-tolerant applications. 2017 Int. SoC Design Conf. (ISOCC) 2017 Nov 5 (pp. 278-279). IEEE. doi: 10.1109/ISOCC.2017.8368894
ref26 Yadav P, Pandey A, KJ RP, Vasantha MH, YB NK. Low Power Approximate Multipliers With Truncated Carry Propagation for LSBs. IEEE 61st Int. Midwest Symp. on Circuits and Sys. (MWSCAS) 2018 Aug 5 (pp. 500-503). IEEE. doi: 10.1109/MWSCAS.2018.8624067
ref27 Krishna TS, Riyas KS, Premson Y, Sakthivel R. 15–4 Approximate Compressor based multiplier for image processing. 2nd Int Conf. on Trends in Electronics and Inf. (ICOEI) 2018 May 11 (pp. 671-675). IEEE. doi: 10.1109/ICOEI.2018.8553734
ref28 Pandey A, Karri MR, Yadav P, YB NK, Vasantha MH. Design and analysis of approximate multipliers for error-tolerant applications. IEEE Int. Symp. on Smart Electronic Sys. (iSES)(Formerly iNiS) 2018 Dec 17 (pp. 94-97). IEEE. doi: 10.1109/iSES.2018.00029
ref29 Lavanya M, Ravindra JV. Performance Metrics of Imprecise Multipliers Based on Proximate Compressors for IIR Filters. 2018 30th Int. Conf. on Microelectronics (ICM) 2018 Dec 16 (pp. 96-99). IEEE. doi: 10.1109/ICM.2018.8704044
ref30 Savithaa N, Poornima A. A High speed Area Efficient Compression technique of Dadda multiplier for Image Blending Application. 2019 Third Int. Conf. on I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC) 2019 Dec 12 (pp. 426-430). IEEE. doi: 10.1109/I-SMAC47947.2019.9032622
ref31 Maddisetti L, Ravindra JV. Machine learning based power efficient approximate 4: 2 compressors for imprecise multipliers. 2019 32nd Int. Conf. on VLSI Design and 2019 2019 Jan 5 (pp. 221-226). IEEE. doi: 10.1109/VLSID.2019.00057
ref32 Arjun JA, Majumdar S. Development of approximate compressor based hybrid Dadda multiplier for image de-noising applications. 2019 IEEE 16th India Council Int. Conf. (INDICON) 2019 Dec 13 (pp. 1-4). IEEE. doi: 10.1109/INDICON47234.2019.9030269
ref33 Tung CW, Huang SH. Low-power high-accuracy approximate multiplier using approximate high-order compressors. 2nd Int. Conf. on Commun. Eng. and Technol. (ICCET) 2019 Apr 12 (pp. 163-167). IEEE. doi: 10.1109/ICCET.2019.8726875
ref34 Kaushik V, Saini H. The Proposed Full-Dadda Multipliers. IJIRST–Int. J. for Innovative Research in Sci. & Technol 2018.
ref35 Patel SK, Singhal SK. Area–delay and energy efficient multi-operand binary tree adder. IET Circuits, Devices & Sys. 2020 Aug 25;14(5):586-93.
ref36 Wikipedia contributors. Dadda multiplier. In Wikipedia, The Free Encyclopedia. Retrieved January 10, 2023
ref37 Bickerstaff KC, Schulte M, Swartzlander EE. Reduced area multipliers. Proceedings of Int. Conf. on Appl. Specific Array Processors (ASAP'93) 1993 Oct 25 (pp. 478-489). doi: 10.1109/ASAP.1993.397168
ref38 Bickerstaff KC, Swartzlander EE, Schulte MJ. Analysis of column compression multipliers. Proceedings 15th IEEE Symp. on Comput. Arith. ARITH-15 2001 Jun 11 (pp. 33-39). doi: 10.1109/ARITH.2001.930101
ref39 Habibi A, Wintz PA. Fast multipliers. IEEE Transac. on Comput. 1970 Feb;100(2):153-7.
ref40 Chu W, Unwala AI, Wu P, Swartzlander EE. Implementation of a high speed multiplier using carry lookahead adders. Asilomar Conf. on Signals, Sys. and Comput. 2013 Nov 3 (pp. 400-404). doi: 10.1109/ACSSC.2013.6810305
ref41 Townsend WJ, Swartzlander Jr EE, Abraham JA. A comparison of Dadda and Wallace multiplier delays. Adv. signal process. algorithms, architectures, and implementations XIII 2003 Dec 24 (Vol. 5205, pp. 552-560). doi: 10.1117/12.507012
ref42 Saha A, Pal R, Naik AG, Pal D. Novel CMOS multi-bit counter for speed-power optimization in multiplier design. AEU-Int. J. of Electronics and Commun. 2018 Oct 1;95:189-98. doi: 10.1016/j.aeue.2018.08.015
ref43 Lee SJ, Ruslan SH. Retracted: 4x4 bit Vedic multiplier using 13T hybrid full adder in 90 nm CMOS technology. J. of Fundam. and Appl. Scie. 2018;10(6S):438-50.
ref44 Mestry ST, Sankpal SV, Golatkar DN. Low Power High Performance 4 bit Vedic-multiplier in 32 nm. 2021 6th Int. Conf. for Convergence in Technol. (I2CT) 2021 Apr 2 (pp. 1-5). doi: 10.1109/I2CT51068.2021.9417974
ref45 Kuo KC, Chou CW. Low power and high speed multiplier design with row bypassing and parallel architecture. Microelectronics J. 2010 Oct 1;41(10):639-50. doi: 10.1016/j.mejo.2010.06.009
ref46 Chang YJ, Cheng YC, Liao SC, Hsiao CH. A low power radix-4 booth multiplier with pre-encoded mechanism. IEEE access. 2020 Jun 19;8:114842-53. doi: 10.1109/ACCESS.2020.3003684
ref47 Rajput RP, Swamy S. High speed, efficient area, low power novel modified booth encoder multiplier for signed-unsigned number. Artificial Intell. Perspectives in Intell. Sys. 2016 (pp. 321-333). Springer, Cham. doi: 10.1007/978-3-319-33625-1_29
ref48 Venkatachalam S, Adams E, Lee HJ, Ko SB. Design and analysis of area and power efficient approximate booth multipliers. IEEE Transac. on Comput.2019 Jul 2;68(11):1697-703. doi: 10.1109/TC.2019.2926275
ref49 Skandha Deepsita S, Dhayala Kumar M, Noor Mahammad SK. Energy Efficient Error Resilient Multiplier Using Low-power Compressors. ACM Transac. on Design Automation of Electronic Sys. 2022(3):1-26. doi:10.1145/3488837
|
http://arxiv.org/abs/2307.04977v1 | 20230711023343 | Model-Driven Sensing-Node Selection and Power Allocation for Tracking Maneuvering Targets in Perceptive Mobile Networks | ["Lei Xie", "Shenghui Song", "Yonina C. Eldar"] | cs.IT | ["cs.IT", "eess.SP", "math.IT"] |
Model-Driven Sensing-Node Selection and Power Allocation for Tracking Maneuvering Targets in Perceptive Mobile Networks
Lei Xie, Member, IEEE, Shenghui Song, Senior Member, IEEE, and Yonina C. Eldar, Fellow, IEEE
L. Xie and S. Song are with Department of Electronic and Computer Engineering, the Hong Kong University of Science and Technology, Hong Kong. e-mail: ({eelxie, eeshsong}@ust.hk). Y. C. Eldar is with the Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 7610001, Israel (e-mail: [email protected]).
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================
Maneuvering target tracking will be an important service of future wireless networks to assist innovative applications such as intelligent transportation.
However, tracking maneuvering targets by cellular networks faces many challenges.
For example, the dense network and high-speed targets make the selection of the sensing nodes (SNs), e.g., base stations, and the associated power allocation very difficult, given the stringent latency requirement of sensing applications. Existing methods have demonstrated promising tracking performance, but with very high computational complexity.
In this paper, we propose a model-driven deep learning approach for SN selection to meet the latency requirement. To this end, we first propose an iterative SN selection method by jointly exploiting the majorization-minimization (MM) framework and the alternating direction method of multipliers (ADMM). Then, we unfold the iterative algorithm as a deep neural network (DNN) and prove its convergence. The proposed model-driven method has a low computational complexity, because the number of layers is less than the number of iterations required by the original algorithm, and each layer only involves simple matrix-vector additions/multiplications.
Finally, we propose an efficient power allocation method based on fixed point (FP) water filling (WF) and solve the joint SN selection and power allocation problem under the alternative optimization framework.
Simulation results show that the proposed method achieves better performance than the conventional optimization-based methods with much lower computational complexity.
Maneuvering target tracking, perceptive mobile network, model-driven deep learning, sensing node selection, power allocation.
§ INTRODUCTION
Innovative applications such as intelligent transportation systems require high-precision sensing capabilities, which are unavailable from current cellular networks. To this end, the recently proposed integrated sensing and communication (ISAC) paradigm offers a promising way to share spectrum, hardware, and software between sensing and communication <cit.>. Perceptive mobile network (PMN) was proposed as a special type of ISAC system that adds high-precision sensing capability to the cellular networks <cit.>. There are many favorable properties of cellular networks that can facilitate sensing. For instance, the large number of sensing nodes (SNs) in PMNs enables collaborative sensing, where multiple perspectives from different SNs are exploited to sense the same target.
The SNs can be base stations (BSs) <cit.>, road side units <cit.>, remote radio units <cit.>, or target monitoring terminals <cit.>.
However, tracking maneuvering targets by PMNs faces many challenges. For example, due to the dense cellular network, selecting a proper set of SNs to track a moving target can be very difficult, because the handover from one group of SNs to another faces very stringent latency requirements. There have been promising results on SN selection and power allocation for tracking maneuvering targets <cit.>.
The authors of <cit.> proposed two SN selection methods in wireless networks to minimize the posterior Cramér-Rao lower bound (PCRLB) and maximize the mutual information between the target location and the measurements of the selected SNs, respectively. In <cit.>, a cooperative game theoretic approach was utilized to allocate power for tracking targets in a radar network.
The authors of <cit.> proposed two strategies for resource allocation with given SNs, where one maximizes the tracking accuracy with limited power budgets, and the other minimizes the power consumption with required tracking performance.
To achieve better performance, the joint SN selection and power allocation schemes were also considered <cit.>.
In <cit.>, a distributed multi-target tracking method was proposed for the networked multiple-input multiple-output (MIMO) radar system, where an alternative optimization (AO)-based method was utilized to solve the bi-variable optimization problem. The boolean constraint on the SN selection vector is one of the most critical challenges for the joint SN selection and power allocation problem.
To handle this issue, a typical method is to relax the boolean constraint to allow continuous and sparse variables <cit.>.
In <cit.>, the relaxed SN selection was formulated as a semi-definite programming (SDP) problem and solved by the CVX toolbox <cit.>.
Unfortunately, the complexity of the existing methods increases exponentially with the number of SNs, which may violate the stringent latency requirement of sensing applications when a large number of SNs exist.
To this end, model-driven deep learning (DL) offers a promising solution.
By unfolding an iterative algorithm as a neural network where each iteration is implemented by one layer with learnable parameters, model-driven methods have the potential to offer better performance with reduced computational complexity.
Some research efforts have been made to utilize model-driven deep neural networks (DNNs) to find sparse solutions for better performance and lower computational costs.
In <cit.>, an unfolded vector-approximate message passing network with random initialization was proposed to learn a denoiser identical to the statistically matched one. The authors of <cit.> unfolded the iterative algorithm, used to solve a problem with l_0 sparse regularization, to be a feed-forward neural network for faster inference and better scalability. In <cit.>, a generalized DNN was proposed to learn a sparse solution by unfolding the alternating direction method of multipliers (ADMM) with better accuracy and lower computational cost. The authors of <cit.> designed an ADMM-Net for interference removal in radar imaging, which exhibited much lower imaging error and computational cost than ADMM and CVX.
However, the inversion of high-dimensional matrices is involved in the existing ADMM-based unfolding methods, which incurs high storage and computational costs.
In this paper, to meet the stringent latency requirement of sensing applications, we propose a model-driven method for SN selection to track multiple maneuvering targets. For that purpose, we first derive an iterative algorithm for SN selection, leveraging the majorization-minimization (MM) framework and ADMM.
Then, the MM-ADMM algorithm is unfolded into a DNN where the technical challenges lie in the large number of learnable parameters and the uncertain convergence property. To this end, we design a new model-driven DNN with an additional module to exploit the first- and second-order momentum, and refer to it as deep alternating network (DAN), which has fewer learnable parameters than the directly-unfolded MM-ADMM.
The convergence proof of the proposed DAN is also given. The computational complexity of DAN is low, because the number of layers is less than the number of iterations required by the original algorithm, and each layer of DAN only involves simple matrix-vector additions/multiplications without high-dimensional matrix inverse.
Finally, we propose a fixed-point (FP) water-filling (WF)-based method for power allocation, which is derived based on the Lagrange multiplier method.
The joint SN selection and power allocation problem is solved by combining the proposed DAN and FP-WF algorithms under the AO framework. Experiment results show that the proposed method can achieve better performance than the optimization-based methods with remarkably lower computational costs.
The contributions of this paper are summarized as follows:
* We propose an iterative method based on MM and ADMM for SN selection. In particular, we exploit the MM approach to handle the non-convexity of the penalized cost functions. For each iteration of ADMM, we derive explicit expressions for the solution to the constrained optimization problem by exploiting the KKT conditions, which facilitate the development of the model-driven method.
* We design a new model-driven DNN, named DAN, by adding an additional module to the directly-unfolded MM-ADMM method, which exploits the momentum for accelerating the convergence.
Moreover, we provide the convergence proof for DAN, which achieves a similar SN selection performance as the exhaustive searching method with significantly lower computational cost.
* Inspired by the classic WF-based power allocation strategies, we propose an iterative FP-WF power allocation method. Specifically, in each water-filling step, the water level is obtained by solving an FP equation.
This approach not only reduces the computational complexity, but also provides an interesting physical insight: the power allocation strategy depends on the ratio between the Fisher information of the predictions and the measurements.
The remainder of this paper is organized as follows. Section II introduces the system model and formulates the problem. Section III derives the joint SN selection and power allocation algorithm. Section IV provides the simulation results to validate the advantage of the proposed model-driven method. Section V concludes this paper.
§ SYSTEM MODEL AND PROBLEM FORMULATION
In Fig. <ref>, we show a PMN consisting of one BS serving as the sensing signal transmitter and N SNs serving as the receivers for the echoes, which can be BSs or other types of SNs <cit.>. In each tracking frame, the BS will transmit sensing signals to the predicted positions of multiple targets, and the selected SNs will collaboratively estimate the location and velocity of the targets (motion state). The estimation results will be utilized to predict the motion state in the next tracking frame[The tracked targets are initialized and the number of the targets is known in advance. This assumption can be realized by communication or some available detection approaches, e.g., radio access technology <cit.>, PDA <cit.> or multi-frame detection <cit.> before target tracking. The targets are widely separated and each of them moves independently in the monitoring area <cit.>.].
In this paper, the SN selection and power allocation will be formulated as an optimization problem to minimize the PCRLB for the estimation error of the target motion state. To this end, we first introduce the target motion model and the signal model, which are the foundation for deriving the PCRLB.
§.§ Target Motion Model
The target motion model describes the motion behavior of the targets and affects the Fisher information of the prediction.
Assume that the target motion follows a near constant velocity model and the transition matrix 𝐆 is given by <cit.>
𝐆=𝐈_2⊗[ 1 Δ T; 0 1 ],
where 𝐈_2 denotes the 2× 2 identity matrix, ⊗ represents the Kronecker product, and Δ T denotes the time between two adjacent tracking frames.
In the kth tracking frame, there are Q point-like targets, where the qth target is located at 𝐫_q^(k)=(r_x,q^(k),r_y,q^(k)) with a velocity 𝐯_q^(k)=(v_x,q^(k),v_y,q^(k)).
The target motion state is updated by 𝐱_q^(k) = 𝐆𝐱_q^(k-1)+ 𝐳_q^(k-1),
where
𝐱_q^(k) = [r_x,q^(k),v_x,q^(k),r_y,q^(k),v_y,q^(k)]^ includes the parameters to be estimated.
Here, 𝐳_q^(k-1) denotes the state noise, which is assumed to be a zero-mean Gaussian vector with covariance matrix <cit.>
𝐐=q_s 𝐈_2⊗[ 1/3(Δ T)^3 1/2(Δ T)^2; 1/2(Δ T)^2 Δ T ],
where q_s is the intensity of the process noise.
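For concreteness, the two Kronecker constructions above can be written out directly; the snippet below is a minimal NumPy sketch of one state prediction 𝐱̂_q^(k|k-1)=𝐆𝐱̂_q^(k-1), using the ΔT and q_s values quoted later in the simulation section and the initial state of the first target. It only illustrates the motion model, not the full tracker.

import numpy as np

dT, q_s = 0.5, 5.0                                   # frame interval and noise intensity
G = np.kron(np.eye(2), np.array([[1.0, dT],
                                 [0.0, 1.0]]))       # state-transition matrix G
Q = q_s * np.kron(np.eye(2), np.array([[dT**3 / 3, dT**2 / 2],
                                       [dT**2 / 2, dT]]))   # process-noise covariance Q

x_prev = np.array([124.0, -10.0, 124.0, 0.0])        # [r_x, v_x, r_y, v_y] of target 1
x_pred = G @ x_prev                                  # predicted motion state
x_next = x_pred + np.random.multivariate_normal(np.zeros(4), Q)  # one noisy realization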
§.§ Signal Model
In the kth tracking frame, the BS will transmit the sensing signal 𝐬^(k)(t) to the targets, and the echoes will be captured by the selected SNs for sensing purposes. The location of the BS and the nth SN is given by 𝐫_BS and 𝐫_n, respectively.
Given the motion state, we can determine the measurements, i.e., the angle of arrival (AOA), the time delay, and the Doppler frequency of the q-th target with respect to the n-th SN as
θ_q,n^(k) =arccos𝐞_n^(𝐫_q^(k)-𝐫_n)/‖𝐫_q^(k)-𝐫_n‖,
τ_q,n^(k)=1/c(‖𝐫_n-𝐫_q^(k)‖+‖𝐫_BS-𝐫_q^(k)‖),
μ_q,n^(k)=𝐯_q^(𝐫_q^(k)-𝐫_n)/λ‖𝐫_q^(k)-𝐫_n‖ + 𝐯_q^(𝐫_q^(k)-𝐫_BS)/λ‖𝐫_q^(k)-𝐫_BS‖ ,
where 𝐞_n represents the unit vector parallel to the line formed by all antennas of the uniform linear array, c is the speed of light, λ is the wavelength, and ||·|| denotes the l_2 norm.
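Given a predicted motion state, the three noise-free measurements are purely geometric functions of the target, SN, and BS positions. The sketch below evaluates them for a single target-SN pair; the SN position, the array orientation 𝐞_n, and the 28 GHz carrier are assumptions consistent with the simulation setup rather than values fixed by the model.

import numpy as np

c, lam = 3e8, 3e8 / 28e9                 # speed of light and wavelength at 28 GHz
r_bs = np.array([0.0, 0.0])              # BS position
r_sn = np.array([150.0, 50.0])           # example SN position (hypothetical)
e_n  = np.array([1.0, 0.0])              # assumed ULA orientation of this SN
r_q  = np.array([124.0, 124.0])          # target position
v_q  = np.array([-10.0, 0.0])            # target velocity

d_sn, d_bs = np.linalg.norm(r_q - r_sn), np.linalg.norm(r_q - r_bs)
theta = np.arccos(e_n @ (r_q - r_sn) / d_sn)             # AOA at the SN
tau   = (d_sn + d_bs) / c                                # bistatic delay
mu    = v_q @ (r_q - r_sn) / (lam * d_sn) \
      + v_q @ (r_q - r_bs) / (lam * d_bs)                # bistatic Doppler shift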
Define the power allocation vector 𝐩^(k)=[p_1^(k),⋯,p_Q^(k)]∈ℝ^Q× 1, where p_q^(k) denotes the power allocated to the qth target.
The baseband echo of the qth target received by the nth SN is given by
𝐲_q,n^(k)(t) =√(p_q^(k))β_q,n^(k) e^j2πμ_q,n^(k)t𝐛_q,n^(k)𝐚_q,k^𝐬^(k)(t-τ_q,n^(k))
+𝐧_n^(k)(t),
where 𝐧_n^(k)(t) denotes the complex additive white Gaussian noise with zero mean and variance σ^2. The transmit and receive steering vectors are given by 𝐛_q,n^(k)=𝐛(θ_q,n^(k)) and 𝐚_q,k=𝐚(ψ_q^(k)), respectively, where ψ_q^(k) represents the angle of departure (AOD) of the qth target from the BS. β_q,n^(k) represents the complex gain of the BS-target-SN (qth target and nth SN) path, which accounts for the array gain, the propagation loss and the target radar cross section (RCS) <cit.>.
Following <cit.>, the local estimation error is modeled as a zero-mean Gaussian vector with the covariance matrix
Σ_q,n^(k)=[σ_θ_q,n^(k)^2,σ_τ_q,n^(k)^2,σ_μ_q,n^(k)^2],
where σ_θ_q,n^(k)^2, σ_τ_q,n^(k)^2, and σ_μ_q,n^(k)^2 denote the CRLBs for the estimation of the direction, range, and Doppler shift, respectively. The local estimation error affects the Fisher information of measurement, which will be utilized to derive the PCRLB in the next section.
§.§ Posterior Cramér-Rao Lower Bound
Based on the above-mentioned target motion model and signal model, we will derive the PCRLB, which gives the lower bound of the estimation error for the target motion state.
Define 𝐔^(k)=[𝐮_1^(k),⋯,𝐮_Q^(k)]∈{0,1}^N× Q as the SN selection matrix, whose (n,q)th entry u_q,n^(k) is 1 if the qth target is associated with the nth SN, and 0 otherwise.
The Fisher information matrix (FIM) for the qth target is given by <cit.>
𝐉_q^(k)(p_q^(k),𝐮_q^(k))=𝐉_P,q^(k)+𝐉_Z,q^(k),
where 𝐉_P,q^(k) and 𝐉_Z,q^(k) denote the prior and data information matrix, respectively. In particular, the prior information matrix is given by
𝐉_P,q^(k)=(𝐐+𝐆 (𝐉_q^(k-1))^-1𝐆^)^-1.
The data information matrix 𝐉_Z,q^(k) is given by
𝐉_Z,q^(k)=∑_n=1^N u_q,n^(k)(𝐇_q,n^(k))^ (Σ_q,n^(k))^-1𝐇_q,n^(k),
where
𝐇_q,n^(k)=
∂𝐠_n^(k)/∂𝐱_q^(k)|_𝐱_q^(k)=𝐱̂_q^(k|k-1),
with ∂𝐠_n^(k)/∂𝐱_q^(k) denoting the derivative of the measurements 𝐠_n^(k)=[θ_q,n^(k)(𝐱_q^(k)),τ_q,n^(k)(𝐱_q^(k)),μ_q,n^(k)(𝐱_q^(k))]^ with respect to the motion state 𝐱_q^(k).
The predicted motion state of the qth target in the kth frame is updated by 𝐱̂_q^(k|k-1)=𝐆𝐱̂_q^(k-1),
where 𝐱̂_q^(k-1) represents the estimated motion state of the qth target in the (k-1)th frame.
Note that Σ_q,n^(k) is inversely proportional to the SNR at the SN <cit.>.
Thus, we can rewrite the measurement covariance in (<ref>) as
Σ_q,n^(k)
=(p_q^(k))^-1Σ̅_q,n^(k),
where Σ̅_q,n^(k) contains the part of Σ_q,n^(k) that is independent of p_q^(k).
Then, we have 𝐉_Z,q^(k)=
p_q^(k)∑_n=1^N u_q,n^(k)𝐌_q,n^(k),
where
𝐌_q,n^(k)=(𝐇_q,n^(k))^ (Σ̅_q,n^(k))^-1𝐇_q,n^(k).
Note that p_q^(k)𝐌_q,n^(k) denotes the measurement information for the qth target at the nth SN.
The inverse of the derived FIM yields the PCRLB matrix, i.e., <cit.>
𝐂_q(p_q^(k),𝐮_q^(k))=(𝐉_q^(k)(p_q^(k),𝐮_q^(k)))^-1.
The diagonal elements of 𝐂_q(p_q^(k),𝐮_q^(k)) provide a lower bound on the variances of the estimation error of an unbiased estimator for the target motion state, i.e.,
𝔼((𝐱̂_q^(k)-𝐱_q^(k))(𝐱̂_q^(k)-𝐱_q^(k))^)≽𝐂_q(p_q^(k),𝐮_q^(k)),
where
𝐀≽𝐁 indicates 𝐀-𝐁 is a positive-semidefinite matrix.
Some functions of the diagonal elements of the PCRLB matrix, e.g., the trace <cit.> and the determinant <cit.>, have been used as the performance metric for target sensing and tracking.
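To make the chain from the selection vector and the allocated power to the performance metric explicit, the following sketch assembles the FIM and the PCRLB for one target, interpreting log 𝐂_q as the log-determinant of the PCRLB matrix. The previous-frame FIM and the per-SN matrices 𝐌_q,n are normally produced by the tracker; here they are replaced by random positive semi-definite placeholders, so the numbers are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
N, p_q = 32, 0.3                                # number of SNs and a placeholder power

A = rng.standard_normal((N, 4, 4))
M = A @ np.transpose(A, (0, 2, 1))              # placeholder PSD matrices M_{q,n}
u = np.zeros(N); u[[3, 7, 11]] = 1.0            # an example selection with N_max = 3 ones
J_prev = np.eye(4)                              # placeholder FIM of frame k-1

dT, q_s = 0.5, 5.0
G = np.kron(np.eye(2), [[1, dT], [0, 1]])
Q = q_s * np.kron(np.eye(2), [[dT**3 / 3, dT**2 / 2], [dT**2 / 2, dT]])
J_P = np.linalg.inv(Q + G @ np.linalg.inv(J_prev) @ G.T)   # prior information matrix

J_Z = p_q * np.tensordot(u, M, axes=1)          # data information of the selected SNs
C = np.linalg.inv(J_P + J_Z)                    # PCRLB matrix
cost = np.linalg.slogdet(C)[1]                  # per-target log-det term of the objective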
§.§ Problem Formulation
We want to minimize the PCRLB through SN selection and power allocation. In the kth frame, the problem is modeled as
min_𝐩^(k),𝐔^(k) ∑_q=1^Q log𝐂_q(p_q^(k),𝐮_q^(k))
s.t. ∑_q=1^Q p_q^(k)≤ P_T,
p_q^(k)≥ P_min,
1^𝐮_q^(k)≤ N_max,q=1,2,⋯,Q,
𝐔^(k)∈{0,1}^N× Q,
where constraint (<ref>) limits the total transmit power. Constraint (<ref>) indicates the minimum power allocated to each target, constraint (<ref>) limits the maximum number of SNs to track one target <cit.>, and (<ref>) gives the binary constraint on 𝐮_q^(k). The main reasons to select log(𝐂_q) as the performance metric include: 1) the determinant of 𝐂_q is proportional to the volume of the minimum achievable covariance ellipsoid, which is widely used as an important metric for parameter estimation <cit.>; and 2) if the determinant is directly used, the original problem (<ref>) is not convex, but the monotonic logarithmic transformations can render this problem convex.
§ MODEL-DRIVEN SENSING NODE SELECTION AND POWER ALLOCATION SCHEME
Note that the problem in (<ref>) has two variables. To handle this issue, we propose to update the variables alternatively based on the AO theory.
With a given feasible starting point {𝐩^(k,0), {𝐮_q^(k,0)}_q=1^Q }, we iteratively perform the following two operations:
1) updating {𝐮_q^(k,j+1)}_q=1^Q with fixed 𝐩^(k,j) via
𝐮_q^(k,j+1)= min_𝐮_q^(k)log𝐂_q(p_q^(k,j),𝐮_q^(k)),
2) updating 𝐩^(k,j+1) with fixed {𝐮_q^(k,j+1)}_q=1^Q via
𝐩^(k,j+1)=min_𝐩^(k)∑_q=1^Q log𝐂_q(p_q^(k),𝐮_q^(k,j+1)),
which decouple the SN selection and power allocation problem.
In the following, we will first derive an iterative method for SN selection by jointly exploiting the MM framework and ADMM.
To further reduce the computational complexity, we will develop a model-driven approach to solve (<ref>).
Finally, we will propose an FP-based WF method to solve (<ref>), which has much lower complexity but offers comparable performance as the traditional CVX-based method.
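The overall procedure that combines these two sub-solvers is an outer AO loop; the short sketch below illustrates it, where select_sns and allocate_power stand in for the DAN-based selection and the FP-WF allocation developed in the remainder of this section (both names and the stopping rule are placeholders introduced for illustration).

import numpy as np

def alternating_optimization(p_init, U_init, select_sns, allocate_power,
                             max_iter=10, tol=1e-3):
    # Outer AO loop: alternately refresh the SN selection and the power vector.
    p, U = np.asarray(p_init, dtype=float), U_init
    for _ in range(max_iter):
        U = [select_sns(q, p) for q in range(len(p))]   # update each u_q with p fixed
        p_new = allocate_power(U)                       # update p with the selection fixed
        if np.max(np.abs(p_new - p)) < tol:             # stop when the powers settle
            p = p_new
            break
        p = p_new
    return p, U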
§.§ MM-ADMM based Sensing Node Selection
Given 𝐩^(k,j), the problem in (<ref>) can be formulated as
min_𝐮_q^(k) ℱ_u(𝐮_q^(k))
s.t. 1^𝐮_q^(k)≤ N_max, 𝐮_q^(k)∈{0,1}^N× 1,
where ℱ_u(𝐮_q^(k))=log𝐂_q(𝐮_q^(k)|p_q^(k,j)).
In order to enforce a binary solution and simplify the problem, we introduce an l_0 pseudo-norm penalty to the objective function and relax the binary constraint <cit.>. Then, the problem in (<ref>) is relaxed as
min_𝐮_q^(k) ℱ_u(𝐮_q^(k))+ρ_q‖𝐮_q^(k)‖_0
s.t. 1^𝐮_q^(k)≤ N_max, 0≤𝐮_q^(k)≤1,
where ‖·‖_0 denotes the l_0 pseudo-norm.
In general, a larger ρ_q leads to a sparser 𝐮_q^(k). Due to the non-convex, non-continuous, and combinatorial nature of the l_0 pseudo-norm, the problem (<ref>) is NP-hard. To simplify the notation, we omit the index q hereafter unless doing so creates confusion.
Inspired by <cit.>, we approximate the l_0 pseudo-norm by a function 𝒫_γ(𝐮^(k))=∑_n=1^N(1-e^-γ u_n^(k)),
where γ is a sufficiently large constant. 𝒫_γ(𝐮^(k)) is utilized due to several favorable properties: 1) it is asymptotically equivalent to ‖𝐮^(k)‖_0, i.e.,
lim_γ→∞𝒫_γ(𝐮^(k))=∑_n=1^N(1-δ(u_n^(k)))=‖𝐮^(k)‖_0;
2) it is continuous, concave, and non-decreasing in the feasible set; and 3) it is differentiable and its gradient is easy to obtain.
§.§.§ MM framework for solving (<ref>)
The problem in (<ref>) can be approximated by
min_𝐮^(k)∈𝒮_u ℱ_u(𝐮^(k))+ρ𝒫_γ(𝐮^(k))
where 𝒮_u={𝐮^(k)|1^𝐮^(k)= N_max,0≤𝐮^(k)≤1}.
Though 𝒫_γ(𝐮^(k)) is continuous w.r.t. 𝐮^(k), the problem in (<ref>) is still hard to solve, due to the complicated form of ℱ_u(𝐮^(k)) w.r.t. 𝐮^(k). To handle this difficulty, we propose to utilize the MM framework <cit.>, based on which (<ref>) can be solved in an iterative process. At each iteration, the MM framework updates the optimization variable by minimizing a tight upperbound of the function, which is known as the surrogate function.
The next question is then how to construct a surrogate function for the objective function in (<ref>).
Since 𝒫_γ(𝐮^(k)) is differentiable and concave with respect to 𝐮^(k), it is upperbounded by its first-order Taylor expansion, i.e.,
𝒫_γ(𝐮^(k))≤𝒫_γ(𝐮^(k)|𝐮^(k,l))
≜𝒫_γ(𝐮^(k,l)) + (𝐝_γ^(k,l))^ (𝐮^(k)-𝐮^(k,l)),
where 𝐮^(k,l) denotes the optimized result at the lth iteration, 𝐝_γ^(k,l)=γ[e^-γ u_1^(k,l),e^-γ u_2^(k,l),⋯,e^-γ u_N^(k,l)]^ represents the gradient of 𝒫_γ(𝐮^(k)), and u_n^(k,l)
denotes the nth entry of 𝐮^(k,l).
An appropriate upperbound of ℱ_u(𝐮^(k)) can be obtained by
𝒢_1(𝐮^(k)|𝐮^(k,l))≜ℱ_u(𝐮^(k,l))+𝐝_u^(𝐮^(k,l))(𝐮^(k)-𝐮^(k,l))
+1/2(𝐮^(k)-𝐮^(k,l))^𝐓^(k,l)(𝐮^(k)-𝐮^(k,l)),
where 𝐝_u^(k,l)= 𝐝_u(𝐮^(k,l)) and 𝐝_u(𝐮^(k))=∂ℱ_u(𝐮^(k))/∂𝐮^(k)
denotes the gradient of ℱ_u(𝐮^(k)) w.r.t. 𝐮^(k), whose nth entry is given by d_u,n(𝐮^(k))=∂ℱ_u(𝐮^(k))/∂ u_n^(k)=-tr((𝐉^(k)(𝐮^(k)|p^(k,j)))^-1𝐌_n^(k)).
The positive-definite matrix 𝐓^(k,l) should satisfy
𝐓^(k,l)≽𝐇_u(𝐮^(k,l)),
where 𝐇_u(𝐮^(k))=∂^2ℱ_u(𝐮^(k))/∂𝐮^(k)∂(𝐮^(k))^
denotes the Hessian matrix of ℱ_u(𝐮^(k)) w.r.t. 𝐮^(k), whose (m,n)th entry is given by H_u,m,n(𝐮^(k))=∂^2ℱ_u(𝐮^(k))/∂ u_m^(k)∂ u_n^(k)= tr(𝐌_m^(k)(𝐉^(k)(𝐮^(k)|p^(k,j)))^-2𝐌_n^(k)).
Then, at the (l+1)th iteration, the selection vector can be updated by solving the problem
min_𝐮^(k)∈𝒮_u 𝒢(𝐮^(k)),
where the surrogate function 𝒢(𝐮^(k)) is defined by
𝒢(𝐮^(k))=𝒢_1(𝐮^(k)|𝐮^(k,l))+ρ𝒫_γ(𝐮^(k)|𝐮^(k,l)).
The problem in (<ref>) is convex and can be solved by using the general CVX toolbox based on the interior point method <cit.>. However, the computational complexity of CVX is about 𝒪(N^3.5), which is not suitable for PMNs with a large N.
§.§.§ ADMM-based method for solving (<ref>)
To solve (<ref>) efficiently, we exploit the ADMM, which splits the problem into two distinct parts and handles them separately <cit.>. Since (<ref>) is Lipschitz continuous, the convergence of the ADMM can be guaranteed.
By introducing an auxiliary variable 𝐯^(k), (<ref>) is equivalent to
min_𝐮^(k),𝐯^(k) 𝒢_1(𝐮^(k)|𝐮^(k,l))+ρ𝒫_γ(𝐯^(k)|𝐮^(k,l))
s.t. 1^𝐮^(k)= N_max, 0≤𝐯^(k)≤1, 𝐮^(k)=𝐯^(k),
which leads to the augmented Lagrangian function <cit.>
ℒ(𝐮^(k),𝐯^(k),𝐳^(k)) =𝒢_1(𝐮^(k)|𝐮^(k,l))+ρ𝒫_γ(𝐯^(k)|𝐮^(k,l))
+ρ_a,l/2‖𝐮^(k)-𝐯^(k)+𝐳^(k)‖^2,
where 𝐳^(k) is the dual variable and ρ_a,l is a penalty parameter at the lth iteration.
Then, at the mth iteration, the optimization variables are updated as
𝐮_m+1^(k,l) =min_𝐮^(k)ℒ(𝐮^(k),𝐯_m^(k,l),𝐳_m^(k,l)),
s.t. 1^𝐮^(k)= N_max,
𝐯_m+1^(k,l)=min_𝐯^(k)ℒ(𝐮_m+1^(k,l),𝐯^(k),𝐳_m^(k,l)),
s.t. 0≤𝐯^(k)≤1,
𝐳_m+1^(k,l)=𝐳_m^(k,l)+𝐮_m+1^(k+1,l)-𝐯_m+1^(k+1,l),
where 𝐮_m^(k,l), 𝐯_m^(k,l) and 𝐳_m^(k,l) denote 𝐮, 𝐯 and 𝐳 at the mth ADMM iteration, respectively.
a) Update 𝐮_m+1^(k,l) via (<ref>):
By utilizing the Lagrange multiplier method, (<ref>) can be reformulated as an unconstrained problem, whose Lagrange function
is given by ℒ_u(𝐮^(k))=ℒ(𝐮^(k),𝐯_m^(k,l),𝐳_m^(k,l))+ν_l(N_max-1^𝐮^(k)),
where ν_l is a Lagrange multiplier. The closed-form solution to (<ref>) is
𝐮_m+1^(k,l)=𝐮^(k,l)-Φ_l^-1(𝐝_m^(k,l)-ν_l1),
where
Φ_l=𝐓^(k,l)+ρ_a,l𝐈 and 𝐝_m^(k,l)=𝐝_u^(k,l)-ρ_a,l (𝐯_m^(k,l)-𝐳_m^(k,l)).
By substituting (<ref>) into the constraint of (<ref>), we have
ν_l=N_max-1^𝐮^(k,l)+1^Φ_l^-1𝐝_m^(k,l)/1^Φ_l^-11=1^Φ_l^-1𝐝_m^(k,l)/1^Φ_l^-11,
which follows from the fact that N_max=1^𝐮^(k,l).
Therefore, the closed-form solution to (<ref>) is given by
𝐮_m+1^(k,l)=𝐮^(k,l)-Φ_l^-1(𝐝_m^(k,l)-1^Φ_l^-1𝐝_m^(k,l)/1^Φ_l^-111).
One remaining problem is how to determine Φ_l, which is equivalent to choosing a proper 𝐓^(k,l).
Indeed, it is not difficult to find a matrix 𝐓^(k,l) that satisfies (<ref>), such as 𝐓^(k,l)= 𝐇_u(𝐮^(k,l))+ϵ𝐈, where ϵ is a positive constant to make 𝐓^(k,l) positive definite.
However, the matrix inversion of Φ_l is involved in (<ref>) when updating 𝐮_m+1^(k,l), which may be computationally complex due to the large number of SNs.
To tackle this issue, 𝐓^(k,l) is desired to be a diagonal matrix.
One feasible solution is to make 𝐓^(k,l) proportional to the identity matrix, i.e., <cit.>
𝐓^(k,l)=C_T^(k,l)𝐈,
where C_T^(k,l) is a positive constant chosen to satisfy (<ref>). For example, one feasible choice is C_T^(k,l)=λ_max(𝐇_F(𝐮^(k,l))), where λ_max(𝐗) denotes the principal eigenvalue of 𝐗.
b) Update 𝐯_m+1^(k,l) via (<ref>):
Since (<ref>) is convex, the closed-form solution 𝐯_m+1^(k,l) to (<ref>) can be obtained based on the KKT conditions, whose nth entry is given by
v_m+1,n^(k) = v_n if 0≤v_n≤ 1; 0 if v_n< 0; 1 if v_n> 1,
where v_n denotes the nth entry of 𝐯, given by
𝐯=-ρ/ρ_a,l𝐝_γ^(k,l)+𝐮_m+1^(k)+𝐳_m^(k).
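Because Φ_l is diagonal, one inner ADMM iteration reduces to a handful of element-wise vector operations. The function below is a minimal sketch of the (𝐮, 𝐯, 𝐳) update for given gradients 𝐝_u^(k,l) and 𝐝_γ^(k,l); it mirrors the closed forms above and is not the full selection routine.

import numpy as np

def admm_step(u_l, v_m, z_m, d_u, d_gamma, phi_diag, rho, rho_a):
    # One inner ADMM iteration with a diagonal Phi_l = T^(k,l) + rho_a * I.
    inv_phi = 1.0 / phi_diag                      # Phi_l^{-1} acts element-wise
    d_m = d_u - rho_a * (v_m - z_m)               # effective gradient d_m^(k,l)
    nu = (inv_phi @ d_m) / inv_phi.sum()          # multiplier enforcing 1^T u = N_max
    u_next = u_l - inv_phi * (d_m - nu)           # closed-form u-update
    v_next = np.clip(-rho / rho_a * d_gamma + u_next + z_m, 0.0, 1.0)  # clipped v-update
    z_next = z_m + u_next - v_next                # dual update
    return u_next, v_next, z_next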
The cost function will not increase over the ADMM iteration process given in (<ref>). According to the monotone bounded theorem <cit.>, the iteration will converge to a set of stationary points in the feasible set, denoted by 𝐮_(⋆)^(k), 𝐯_(⋆)^(k), and 𝐳_(⋆)^(k). The selection vector 𝐮^(k,l+1) is updated by 𝐮_(⋆)^(k).
The convergence and performance of (<ref>) depend on the selection of 𝐓^(k,l). If 𝐓^(k,l) is selected as the Hessian matrix which is usually not diagonal, (<ref>) is similar to the Newton's descent update with quadratic convergence, but high computational complexity. In (<ref>), 𝐓^(k,l) is selected as a diagonal matrix, i.e., 𝐓^(k,l)=C_T^(k,l)𝐈, and thus the update in (<ref>) moves in the opposite direction of the gradient, which resembles the gradient descent method. With a diagonal 𝐓^(k,l), the computational cost at each ADMM iteration is about 𝒪(N^2), which is much lower than that of CVX.
In general, a larger C_T^(k,l) is desired to satisfy (<ref>).
However, in this case,
the constant C_T^(k,l)+ρ_a,l
is inversely proportional to the step size. An aggressive choice of C_T^(k,l) may require more iterations to converge. Meanwhile, the choice of 𝐓^(k,l) suggested in (<ref>) may not be optimal, and a better one within a larger feasible set, i.e., diagonal but not necessarily proportional to the identity matrix, is desired.
To this end, we propose to unfold the iterative optimization method as a DNN and tune 𝐓^(k,l) with deep learning.
One feasible way is to treat the diagonal elements of 𝐓^(k,l) as the learnable parameters. In this case, the number of learnable parameters is N at each layer, which will be large due to the dense SNs. Moreover, the trained 𝐓^(k,l) may break the convergence condition (<ref>). These issues motivate us to consider another design with three desirable properties: 1) the number of learnable parameters is moderate, 2) the convergence property is guaranteed, and 3) the proposed method will be restricted to first-order methods that only require gradients, since higher-order optimization methods may cost a large amount of computing and storage resource.
§.§ Deep-Alternative-Network: DNN Based Sensing Node Selection
To derive a DNN with the above-mentioned properties, we unfold the MM-ADMM-based SN selection method and introduce an additional module. The new DNN is called DAN.
As shown in Fig. <ref>, DAN consists of L cascaded layers with some learnable parameters, where the (l+1)th layer takes the first- and second-order momentum 𝐦̂^(l-1) and 𝐯̂^(l-1), the gradients 𝐝_u^(k,l) and 𝐝_v^(k,l), and the output from the previous layer 𝐮^(k,l) as inputs, and outputs an update 𝐮^(k,l+1). In particular, the (l+1)th layer updates 𝐮_m^(k,l), 𝐯_m^(k,l), and 𝐳_m^(k,l), alternatively, as shown by the blue, green, and orange blocks in Fig. <ref>, respectively. The update of 𝐮_m+1^(k,l) is of the same form as (<ref>). But we make the following two modifications, as shown by the red block in Fig. <ref>:
1) 𝐝_m^(k,l) is constructed as
𝐝_m^(k,l)=𝐦̂_l-ρ_a,l (𝐯_m^(k,l)-𝐳_m^(k,l)),
where
𝐦̂_l=β_1,l𝐦̂_l-1+(1-β_1,l)𝐝_u^(k,l).
Here, β_1,l=β_1η_1^l, where η_1∈ (0,1) is a decay factor and β_1∈ (0,1) is a learnable hyper-parameter that prevents the momentum from diverging severely.
When β_1,l = 0, the first-order momentum 𝐦̂_l reduces to the gradient 𝐝_u^(k,l).
The momentum terms caused by non-zero β_1,l may improve the performance significantly, especially in deep learning applications.
2) Φ_l is constructed as
Φ_l=𝐓̂^(k,l)+ρ_a,l𝐈,
where 𝐓̂^(k,l)≜ diag([√(|v̂_l,1|)/α_1,l,⋯,√(|v̂_l,N|)/α_1,l]), and ρ_a,l=ρ_aη_a^l with η_a ∈ (0,1).
Here, v̂_l,i denotes the ith entry of the second-order momentum 𝐯̂_l, which is defined by
𝐯̂_l=β_2𝐯̂_l-1+(1-β_2)(𝐝_u^(k,l))^2,
where β_2 denotes a constant to control the second-order momentum
and α_1,l=α̅_1,l/√(l) with α̅_1,l∈ [α_1^-,α_1^+] representing a set of learnable parameters to control the update step size. Here, the positive constants α_1^- and α_1^+ are the lower and upper bounds of α̅_1,l.
We refer to the diagonal element of Φ_l^-1 as the learning rate of this algorithm, whose ith entry is given by ϕ_l,i^-1 = (√(|v̂_l,i|)/α_1,l +ρ_a,l)^-1.
Learning rate decay is critical for training neural networks. In the early training stage, a large learning rate can accelerate training and help the network escape spurious local minima. By the end of the iteration, a small learning rate helps the network converge to a local minimum and avoid oscillation. Therefore, we desire a set of ρ_a,l and α_1,l such that, for any l∈{2,⋯,L} and i ∈{1,⋯,N}, we have ϕ_l,i^-1≤ϕ_l-1,i^-1.
The updates are inspired by the adaptive moment estimation (Adam) method <cit.>, i.e., an algorithm for first-order gradient-based optimization.
Adam is chosen due to its favorable properties: 1) simple implementation, computationally efficient, and low memory requirements; 2) adaptability to large-scale problems; and 3) adaptation to sparse gradients <cit.>.
Based on the adaptive estimates of first- and second-order momentum, we propose a novel construction of 𝐝_m^(k,l) and 𝐓̂^(k,l) as well as its resultant Φ_l, which can meet the constraint in (<ref>) and the diagonal requirement, simultaneously.
But different from ADAM, the update has additional terms resulting from the original MM-ADMM and one learnable step size α_1,l to control the iteration process.
Compared with training all diagonal elements of 𝐓̂^(k,l), the learnable parameters in the DAN are changed to α̅_1,l and β_1. The total number of learnable parameters over all layers is reduced from L N to L+1.
The update of 𝐯_m+1^(k,l) and 𝐳_m+1^(k,l) are the same as (<ref>) and (<ref>), respectively.
With given 𝐦̂_l and Φ_l, the Lagrange function ℒ(𝐮^(k),𝐯^(k),𝐳^(k)|𝐦̂_l,Φ_l) defined in (<ref>) will not increase after updating 𝐮_m^(k,l), 𝐯_m^(k,l) and 𝐳_m^(k,l) by (<ref>), (<ref>), and (<ref>), respectively. The modified ADMM iteration will also converge at a set of station points denoted by 𝐮_(⋆)^(k), 𝐯_(⋆)^(k), and 𝐳_(⋆)^(k).
Therefore, we have
𝐮^(k,l+1)=𝐮_⋆^(k,l)=𝐮^(k,l)-Φ_l^-1(𝐝_⋆^(k,l)-ν_l1),
where
𝐝_⋆^(k,l)=𝐦̂_l-ρ_a,l (𝐯_⋆^(k,l)-𝐳_⋆^(k,l)), ν_l=1^Φ_l^-1𝐝_⋆^(k,l)/1^Φ_l^-11.
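Collecting the pieces, one DAN layer only needs the current gradient of ℱ_u, the gradient of the penalty, the running momenta, and a few scalars. The function below is an illustrative sketch of a single layer with the inner ADMM truncated to a fixed number of iterations; parameter names follow the notation above, the layer index l is assumed to start from 1, and the sketch is not the trained network itself.

import numpy as np

def dan_layer(u_l, m_prev, v_prev, d_u, d_gamma, l, alpha_bar_l,
              beta1, eta1, beta2, rho, rho_a0, eta_a, admm_iters=20):
    # One DAN layer (l >= 1): Adam-style momenta, diagonal Phi_l, inner ADMM loop.
    beta1_l = beta1 * eta1 ** l
    rho_a_l = rho_a0 * eta_a ** l
    alpha_l = alpha_bar_l / np.sqrt(l)                       # learnable step size

    m_hat = beta1_l * m_prev + (1 - beta1_l) * d_u           # first-order momentum
    v_hat = beta2 * v_prev + (1 - beta2) * d_u ** 2          # second-order momentum
    inv_phi = 1.0 / (np.sqrt(np.abs(v_hat)) / alpha_l + rho_a_l)   # diagonal of Phi_l^{-1}

    u, v, z = u_l.copy(), u_l.copy(), np.zeros_like(u_l)
    for _ in range(admm_iters):                              # inner ADMM on the surrogate
        d_m = m_hat - rho_a_l * (v - z)                      # momentum replaces the gradient
        nu = (inv_phi @ d_m) / inv_phi.sum()
        u = u_l - inv_phi * (d_m - nu)
        v = np.clip(-rho / rho_a_l * d_gamma + u + z, 0.0, 1.0)
        z = z + u - v
    return u, m_hat, v_hat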
§.§ Convergence of DAN
Until now, we have developed a new model-driven method for SN selection.
However, the obtained 𝐓̂^(k,l) may not satisfy (<ref>), which indicates that the convergence property of the MM framework is questionable.
To address this issue, we next analyze the convergence of the proposed DAN.
For any sequence {𝐮^(k,l)}_l=1^L generated by the proposed DAN, the regret function is defined as
R_L≜∑_l=1^L( 𝒢(𝐮^(k,l))-𝒢(𝐮^(k,⋆))),
where 𝐮^(k,⋆) =min_𝐮^(k)∈𝒮_u𝒢(𝐮^(k)) denotes the best stationary point in the feasible set 𝒮_u. Generally speaking, the regret function indicates the sum of the difference between 𝒢(𝐮^(k,l)) and 𝒢(𝐮^(k,⋆)), which is widely used for the convergence proof <cit.>. Note that the feasible set has bounded diameter, i.e., for all 𝐮,𝐯∈𝒮_u, ||𝐮 - 𝐯||^2 ≤ D_Δ.
Define D_u,1≜max_l ||𝐝_u^(k,l)||_1, D_ϕ≜max_l max_i ϕ_l,i^-1, D_b,1≜max_l ||𝐛̂_l||_1, and D_b,2≜max_l ||𝐛̂_l||^2,
where
𝐛̂_l = 𝐯_⋆^(k,l)-𝐳_⋆^(k,l).
Then, we have the following theorem for the convergence analysis.
Assume that, for all l∈[2,L], ϕ_l,i^-1≤ϕ_l-1,i^-1.
The regret is bounded by
R_L≤ C_1 √(L) + C_2,
where C_1 = √(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1) and C_2 is a constant defined in (<ref>).
Proof: See Appendix <ref>. ▪
Since C_1 and C_2 are constants independent of L, Theorem <ref> indicates that the DAN has a regret of 𝒪(L^1/2), which guarantees that the sequence {𝒢(𝐮^(k,l))}_l=1^L will converge to 𝒢(𝐮^(k,⋆)) with convergence rate on the order of 𝒪(L^-1/2).
§.§ Transmit Power Allocation For Multiple Targets
Given {𝐮_q^(k,j+1)}_q=1^Q, the problem in (<ref>) can be expressed as
min_𝐩^(k)∈𝒮_p ∑_q=1^Q ℱ_pa(p_q^(k)),
where ℱ_pa(p_q^(k))=log𝐂_q(p_q^(k)|𝐮_q^(k,j)) is the cost function and 𝒮_p={𝐩^(k)|∑_q=1^Q p_q^(k)≤ P_T,p_q^(k)≥ P_min, q=1,2⋯, Q} denotes the feasible set of 𝐩^(k).
This problem is convex and can be reformulated as a SDP problem, i.e.,
max_𝐩^(k) ∑_q=1^Q log(𝐐_q),
s.t. ∑_q=1^Q p_q^(k)≤ P_T, p_q^(k)≥ P_min,
𝐉_q^(k)(p_q^(k)|𝐮_q^(k,j)) ≽𝐐_q, q=1,2⋯, Q,
where {𝐐_q}_q=1^Q denotes a set of auxiliary symmetric matrices. Then, this problem can be solved by the CVX toolbox.
However, the CVX toolbox is generally time-consuming, especially when the number of targets is large.
To reduce the computational complexity and reveal more physical insights,
we propose an iterative water-filling-based power allocation method.
First, we merge the total power constraint into the cost function by the Lagrange multiplier method, i.e.,
ℒ_pa(𝐩^(k)) =∑_q=1^Q ℱ_pa(p_q^(k))
+ λ_pa(P_T-∑_q=1^Q p_q^(k)),
where λ_pa is the Lagrange multiplier.
The derivative of (<ref>) w.r.t. p_q^(k) is given by
∂ℒ_pa(𝐩^(k))/∂ p_q^(k)=tr((𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1Σ_q^(k))- λ_pa,
where Σ_q^(k)=∑_n=1^N u_q,n^(k)𝐌_q,n^(k).
By setting ∂ℒ_pa(𝐩^(k))/∂ p_q^(k)=0, we have the following fixed-point equation, i.e.,
p_q^(k)=1/λ_pa- tr((𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1𝐉_P,q^(k))/tr((𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1Σ_q^(k)).
If 𝐉_P,q^(k) and Σ_q^(k) reduce to one-dimensional constants denoted by J_P,q^(k) and Σ_q^(k), respectively, the closed-form solution of p_q^(k) can be directly obtained from (<ref>), i.e., p_q^(k)=μ_wf-J_P,q^(k)/Σ_q^(k),
where μ_wf=1/λ_pa denotes the water level. For the matrix-version 𝐉_P,q^(k) and Σ_q^(k),
we propose to obtain p_q^(k) and the water level μ_wf by an iteration process. In particular, at the ith iteration, p_q,i+1^(k) is obtained by
p_q,i+1^(k)=⌊μ_wf- tr((𝐉_P,q^(k)+p_q,i^(k)Σ_q^(k))^-1𝐉_P,q^(k))/tr((𝐉_P,q^(k)+p_q,i^(k)Σ_q^(k))^-1Σ_q^(k))⌋_P_min,
where p_q,i^(k) denotes the power for the qth target at the ith iteration and ⌊ a⌋_b=max{a,b}. Then, the water level
μ_wf is updated by setting ∑_q=1^Q p_q,i+1^(k)(μ_wf) = P_T.
According to the Rayleigh quotient, we have
λ̃_min≤tr((𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1𝐉_P,q^(k))/tr((𝐉_P,q^(k)+p_q^(k)Σ_q^(k))^-1Σ_q^(k))≤λ̃_max,
where λ̃_min and λ̃_max denote the minimum and maximum eigenvalue of (Σ_q^(k))^-1𝐉_P,q^(k), respectively. Note that 𝐉_P,q^(k) and Σ_q^(k) denote the FIM of the prediction and the measurement, respectively. Thus, the eigenvalues of (Σ_q^(k))^-1𝐉_P,q^(k) represent the ratio between the prediction information and the measurement information. Recalling (<ref>), if the eigenvalues of (Σ_q^(k))^-1𝐉_P,q^(k) are smaller, p_q^(k) will be higher. This indicates that more power will be allocated to a target if 1) the measurement provides more information than the prediction, which enables the system to improve the accuracy of the prediction, or 2) the prediction of this target is so bad that the system needs to allocate more power for better motion state estimation.
In turn, if the eigenvalues of (Σ_q^(k))^-1𝐉_P,q^(k) are larger, p_q^(k) will be lower. This indicates that a target will be assigned lower power if 1) the prediction is good enough, or 2) the measurement is too bad.
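A simple way to realize the water-level search is bisection, since the allocated powers are non-decreasing in μ_wf; this particular search strategy and the bracket used below are editorial choices rather than part of the original derivation. The sketch takes the per-target prior and measurement information matrices 𝐉_P,q and Σ_q as inputs and alternates the fixed-point sweep with the level update.

import numpy as np

def fp_wf(J_P, Sigma, P_T, P_min, n_fp=20, n_bisect=50):
    # Fixed-point water-filling over Q targets (illustrative sketch).
    Q = len(J_P)

    def powers(mu, p):
        # a few fixed-point sweeps of the per-target power update at water level mu
        for _ in range(n_fp):
            for q in range(Q):
                Jinv = np.linalg.inv(J_P[q] + p[q] * Sigma[q])
                ratio = np.trace(Jinv @ J_P[q]) / np.trace(Jinv @ Sigma[q])
                p[q] = max(mu - ratio, P_min)
        return p

    lo, hi = 0.0, 10.0 * P_T                 # assumed bracket for the water level
    p = np.full(Q, P_min, dtype=float)
    for _ in range(n_bisect):                # bisection to enforce sum_q p_q = P_T
        mu = 0.5 * (lo + hi)
        p = powers(mu, p.copy())
        if p.sum() > P_T:                    # too much total power: lower the level
            hi = mu
        else:
            lo = mu
    return p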
§ SIMULATION
In the simulation, we will show the efficiency and effectiveness of the proposed DAN and FP-WF algorithms. In the following, we first introduce the system parameters, the training details of DAN, and the benchmark algorithms.
System parameters:
We consider a mmWave system operating at a carrier frequency of 28 GHz. There is one BS acting as the transmitter, which is located at [0,0] m. The number of SNs is N=32. These SNs are uniformly distributed in the area within 400× 400 m^2. On average, there is one SN within an area of 5000 m^2. The measurement covariance defined in (<ref>) is generated by
Σ_q,n^(k)=1/SNR_q^(k)Σ̇_q,n^(k), where Σ̇_q,n^(k)=[σ̇_θ_q,n^(k)^2,σ̇_τ_q,n^(k)^2,σ̇_μ_q,n^(k)^2] with σ̇_θ_q,n^(k)=2, σ̇_τ_q,n^(k)=1, σ̇_μ_q,n^(k)=1.
The SNR is defined by SNR_q^(k) =p_q^(k)γ_0/σ^2(d_q,n^(k))^2, where γ_0=-61.4 dB denotes the pathloss at reference distance.
We set the total power at BS P=30 dBm, the minimum power for single target P_min=20 dBm, the noise power σ^2=-90 dBm, the intensity of process noise q_s=5, and Δ T =0.5 s.
Initialization of motion state:
There are three targets to be tracked, i.e., Q=3, if not otherwise specified.
The initial velocities of the targets are given as 𝐯_1=[-10,0]^ m/s, 𝐯_2=[0,-10]^ m/s, 𝐯_3=[10,0]^ m/s, respectively. The initial locations of the targets are given as 𝐱_1^(0)=[124, 124]^ m, 𝐱_2^(0)=[-134, 134]^ m, and 𝐱_3^(0)=[-144, -144]^ m, respectively.
Training details:
During training, the learnable parameters are optimized by the SGD optimizer in PyTorch with a learning rate of 5×10^-5.
In our experiment, the loss function for training is selected as
f_loss=1/L∑_l=1^L ||𝐮_ES - 𝐮̂^l||^2, where 𝐮_ES denotes the selection vector obtained by the exhaustive search (ES). The number of data for training is set as N_train=500.
The network parameters are set as ρ=1, ρ_a=10^2, γ=10^4, β_2=0.999, η_1=0.99, and η_a=0.99. The learnable parameters are initialized as β_1=0.99, and α_1 = 0.15 for all layers. The number of layers is set as L=10. The maximum number of ADMM iterations is set as 200.
Benchmark methods: The proposed methods are compared with the following algorithms for SN selection and power allocation.
1) SN selection:
We compare DAN with the following methods:
∙ `Nearest SN Selection': this method selects the subset of SNs nearest to the target;
∙ `Exhaustive Search (ES)': this method selects the subset of SNs which minimizes the cost function;
∙ `MM-CVX': the method solves (<ref>) by CVX toolbox.
∙ `MM-ADMM': the optimization-based method proposed in Sec. III.A. To show the impact of 𝐓^(k,l) in MM-ADMM, we use two different 𝐓^(k,l). Specifically, the first choice is 𝐓_1^(k,l)=tr(𝐇_F(𝐮^(k,l)))𝐈, and the second choice is 𝐓_2^(k,l)=λ_max(𝐇_F(𝐮^(k,l)))𝐈, which are denoted by `MA-I' and `MA-II', respectively. The parameters of MM-ADMM and MM-CVX are the same as those for DAN.
The maximum number of MM iterations for MM-ADMM and MM-CVX is set as 30 and 50, respectively, unless specified otherwise.
2) Power allocation:
We compare FP-WF with `CVX', which represents the method for solving (<ref>) by CVX.
§.§ Computational Cost
Table <ref> shows the running time[Configuration of this computer: CPU: Intel Core i9-9900 @3.10GHz; RAM: 16GB; Software: Python 3.10.9 in Microsoft Visual Studio Code and Matlab 2020b.] of the algorithms composed of different power allocation and SN selection methods.
It can be observed that the running time of DAN & FP-WF is 0.7724 s, which is the lowest among all combinations.
Meanwhile, we can observe that the running time of ES & CVX is 18.6242 s, which is about 24 times that of DAN & FP-WF.
To further demonstrate the low computational complexity provided by DAN and FP-WF, we study the computational cost of the SN selection and power allocation methods, respectively.
Running time of the SN selection methods: Table <ref> shows the running time of the SN selection algorithms with different N.
DAN achieves the lowest computational cost among the candidates with different N.
The computational consumption of ES is extremely large, especially when N is large.
For example, when N=128, the DAN is about 443 times faster than ES. MM-CVX is more time-consuming than MM-ADMM.
Meanwhile, the running time of DAN is less than that of the MM-ADMM.
There are two main reasons: 1) one layer of DAN has a lower computational cost than one iteration of MM-ADMM. In particular, DAN only requires the gradient, while MM-ADMM requires both the gradient and Hessian matrix, which needs more computational cost, and 2) owing to the well-trained 𝐓^(k,l), DAN can converge faster than MM-ADMM, which will be shown in the following.
Convergence of the SN selection methods:
The running time of MM-CVX, MM-ADMM and DAN is proportional to the required number of iterations/layers to converge.
Fig. <ref> shows the cost function over the number of the iterations (optimization-based methods) or the layers (DAN).
First, MM-CVX needs about 50 iterations to converge, which is more than MM-ADMM and DAN.
Meanwhile, we can observe that DAN can converge within 3 layers, while MM-ADMM needs about 15-20 iterations to converge, which leads to more running time.
This is because,
unlike MM-ADMM, DAN utilizes the momentum, which accumulates the gradient of the past layers and can thus speed up the convergence <cit.>.
Meanwhile, we see that MM-ADMM-II can converge faster than MM-ADMM-I which indicates that the convergence of MM-ADMM highly depends on the choice of 𝐓^(k,l). This is also the motivation to learn 𝐓^(k,l) in DAN.
Running time of the power allocation methods:
Table <ref> shows the running time for the power allocation algorithms versus different Q. We can observe that the running time of FP-WF is much lower than CVX for different cases. This is because FP-WF is derived based on the Lagrange multiplier method, which can solve (<ref>) more efficiently than the interior point method used by CVX.
§.§ Tracking Accuracy
The average root mean square error (RMSE) of multiple targets tracking over Q targets and K frames is selected as the performance metric for multiple target tracking, which is defined as 1/Q1/K∑_q=1^Q∑_k=1^K√(1/N_mc∑_i=1^N_mc‖𝐱_q^(k)-𝐱̂_q^(k,i)‖^2), where
𝐱̂_q^(k,i) denotes the estimated position of the target q at the kth time frame in the ith Monte-Carlo trial, and N_mc denotes the number of Monte-Carlo trials.
The number of tracking frames is set as K=10. Fig. <ref> shows the average RMSE with different power budget P.
We have several observations. First, associated with different SN selection methods, FP-WF can achieve the same performance as CVX. Recalling the results in Table <ref>, compared to CVX, FP-WF can reduce the computational cost without performance loss.
Second, we can observe that ES can achieve the best performance among the SN selection methods. However, from Table <ref>, it can be observed that the running time of ES is extremely high, which limits its real application.
Third, MM-CVX and MM-ADMM can achieve similar performance, but as shown in Table <ref>, the computational cost of MM-CVX is higher than that of MM-ADMM.
Furthermore, DAN can outperform MM-ADMM, which is because a more suitable 𝐓 is learned by DAN.
Finally, the performance of the nearest SN selection is worse than DAN. This is because the tracking performance is affected by both the distance and the angle from target to SNs. DAN takes both of them into consideration, while the nearest SN selection only considers the distance.
This will be further demonstrated in the next part.
Illustration of SN selection:
To better understand the effect of SN selection, we focus on the single target case in this section. The power allocated to the target is set as p=25 dBm. The initial state of the target is given by
𝐯=[-10,0]^ m/s and 𝐱^(0)=[124, 124]^ m.
Fig. <ref> shows the SN selection result by DAN in 4 consecutive frames.
The selection depends on the geometric relation between the target and SNs. DAN does not always choose the nearest SNs, because, besides the distance, the different perspectives to observe the target provided by different SNs will also affect the tracking performance. Fig. <ref> shows the corresponding RMSE over the tracking frames.
It can be observed that DAN consistently outperforms the Nearest SN selection and achieves comparable performance as ES.
Effect of noise power:
One of the biggest drawbacks of DL-based approaches is the performance degradation when the features (such as the noise power) in test data differ from those in training. This leads to the study of generalization in this part.
Fig. <ref> shows the performance under different noise power with N=32. When the noise power is different from that of the training data, DAN can provide a near-ES RMSE. It indicates that DAN can adapt to the change of σ^2, which makes DAN attractive in real applications.
§.§ Accuracy-Complexity Tradeoff
By adjusting the termination tolerance and the maximum number of iterations, a tradeoff between computational cost and accuracy can be achieved by MM-ADMM. Meanwhile, the proposed DAN requires a fixed number of layers and thus has a fixed running time. Fig. <ref> shows the RMSE performance of different algorithms versus the running time. It is observed that DAN can always outperform MM-ADMM in terms of both computational cost and RMSE. Moreover, though MM-ADMM-II can converge faster than MM-ADMM-I, 𝐓_2^(k,l) requires more computational cost than 𝐓_1^(k,l). Thus, given the same time cost, MM-ADMM-I outperforms MM-ADMM-II.
§ CONCLUSION
In this paper, we considered the joint SN selection and power allocation problem for tracking multiple maneuvering targets in PMNs. To meet the stringent latency requirement of sensing applications, we proposed a model-driven approach for SN selection by unfolding the optimization-based MM-ADMM method.
A novel DNN architecture was derived to speed up the convergence by exploiting momentum, and its convergence was guaranteed by deriving a regret bound. Furthermore, we proposed an efficient power allocation method based on fixed-point water filling and revealed some physical insights. Simulation results demonstrated that the proposed method achieves better performance than the existing methods with much lower computational cost. This work demonstrated that, by reducing the number of iterations and improving the effectiveness of each layer, model-driven approaches offer a promising solution to meet the stringent latency requirement of sensing applications.
§ PROOF OF THEOREM <REF>
Given that ℒ(𝐮^(k)) is convex, we have
𝒢(𝐮^(k,l))-𝒢(𝐮^(k,⋆))≤<𝐝_u^(k,l), Δ𝐮^(k,l)>,
where Δ𝐮^(k,l)=𝐮^(k,l) - 𝐮^(k,⋆).
Since R_L≤∑_l=1^L<𝐝_u^(k,l), Δ𝐮^(k,l)>, the main idea of the proof is to find an upper bound of ∑_l=1^L<𝐝_u^(k,l), Δ𝐮^(k,l)>.
Recalling from (<ref>), we have
‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2=‖Φ_l^1/2(𝐮^(k,l+1) - 𝐮^(k,⋆))‖^2
(a)=‖Φ_l^1/2(𝐮^(k,l)-Φ_l^-1(𝐝_⋆^(k,l)-ν_l1)-𝐮^(k,⋆))‖^2
(b)=‖Φ_l^1/2Δ𝐮^(k,l)-Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1)‖^2
(c)=‖Φ_l^1/2Δ𝐮^(k,l)‖^2 - 2 <(1-β_1,l)𝐝_u^(k,l),Δ𝐮^(k,l)>
- 2 <β_1,l𝐦̂_l-1-ρ_a,l𝐛̂_l-ν_l1,Δ𝐮^(k,l)>
+ ‖Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1) ‖^2,
where step (a) follows (<ref>), step (b) follows (<ref>),
and step (c) follows (<ref>).
By adding 2 <(1-β_1,l)𝐝_u^(k,l),Δ𝐮^(k,l)>-‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2 to both sides of (<ref>), and dividing both sides of (<ref>) by 2(1-β_1,l), we have
<𝐝_u^(k,l),Δ𝐮^(k,l)> =‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2/2(1-β_1,l)
-<β_1,l𝐦̂_l-1,Δ𝐮^(k,l)>/1-β_1,l+<ρ_a,l𝐛̂_l,Δ𝐮^(k,l)>/1-β_1,l
+<ν_l1,Δ𝐮^(k,l)>/1-β_1,l+‖Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1) ‖^2/2(1-β_1,l).
By using the Young's inequality for products, i.e., ± ab≤a^2/2 +b^2/2, the second, third, and fourth terms on the right-hand side of (<ref>) are upperbounded by -<β_1,l𝐦̂_l-1,Δ𝐮^(k,l)> /1-β_1,l≤‖Φ_l^-1/2𝐦̂_l-1‖^2 /2(1-β_1) +‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1),
< 𝐛̂_l,Δ𝐮^(k,l)>/1-β_1,l≤‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1) + ‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1), and
<1,Δ𝐮^(k,l)>/1-β_1,l≤‖Φ_l^-1/21‖^2/2(1-β_1) + ‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1), respectively.
By utilizing the inequality between the arithmetic mean and quadratic mean, the last term on the right-hand side of (<ref>) is upperbounded by ‖Φ_l^-1/2(𝐦̂_l-ρ_a,l𝐛̂_l-ν_l1) ‖^2/2(1-β_1,l)≤3‖Φ_l^-1/2𝐦̂_l‖^2/2(1-β_1)
+3ρ_a,l^2‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1) + 3ν_l^2‖Φ_l^-1/21‖^2/2(1-β_1).
Then, the upperbound of (<ref>) can be given by
<𝐝_u^(k,l),Δ𝐮^(k,l)>≤‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2/2(1-β_1,l)_172
+β_1,l‖Φ_l^-1/2𝐦̂_l-1‖^2/2(1-β_1)_173+ ρ_a,l‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1)_174 + ν_l‖Φ_l^-1/21‖^2/2(1-β_1)_175
+ β_1,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1)_176+ ρ_a,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1)_177 +ν_l‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1)_178
+3‖Φ_l^-1/2𝐦̂_l‖^2/2(1-β_1)_179 +3ρ_a,l^2‖Φ_l^-1/2𝐛̂_l‖^2/2(1-β_1)_180+3ν_l^2‖Φ_l^-1/21‖^2/2(1-β_1)_181.
To bound R_L, we upper-bound the summation of the terms 172-181 over the index l as follows.
§.§.§ Term 172
It can be shown that
‖Φ_l^1/2Δ𝐮^(k,l)‖^2=∑_i=1 ^Nϕ_l,i |Δ u_i^(k,l)|^2
=1/α_1,l∑_i=1 ^N√(|v̂_l,i|)· |Δ u_i^(k,l)|^2 +ρ_a,l||Δ𝐮^(k,l)||^2
(a)=1/α_1,l∑_i=1^N∑_p=1^l√(1-β_2)β_2^l-p/2 |d_u,i^(k,p)| ·|Δ u_i^(k,l)|^2 +ρ_a,l||Δ𝐮^(k,l)||^2
≤√(1-β_2)/α_1^-(1-√(β_2)) D_u,1 D_Δ√(l) + D_Δρ_aη_a^l,
where step (a) comes from (<ref>).
Then, with the decreasing learning rate ϕ_l,i^-1, we have
∑_l=1^L(‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l^1/2Δ𝐮^(k,l+1)‖^2/2(1-β_1,l))
≤‖Φ_1^1/2Δ𝐮^(k,1)‖^2/2(1-β_1)+‖Φ_L^1/2Δ𝐮^(k,L+1)‖^2/2(1-β_1)
+∑_l=2^L(‖Φ_l^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l) -‖Φ_l-1^1/2Δ𝐮^(k,l)‖^2/2(1-β_1,l))
(a)≤√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1)√(L) + ρ_aη_a^L D_Δ/1-β_1
+ √(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1) + ρ_aη_a D_Δ/1-β_1
+ ∑_l=2^L∑_i=1^N(ϕ_l,i-ϕ_l-1,i)|Δ u_i^(k,l)|^2/2(1-β_1)
(b)≤√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1)√(L) + ρ_aη_a D_Δ/1-β_1
+ √(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-β_1) + ρ_aη_a D_Δ/1-β_1 + D_Δ_u,2D_ϕ/1-β_1,
where step (a) follows (<ref>) and step (b) follows
∑_l=2^L∑_i=1^N(ϕ_l,i-ϕ_l-1,i)|Δ u_i^(k,l)|^2
≤ D_Δ_u,2∑_i=1^N∑_l=2^L(ϕ_l,i-ϕ_l-1,i)≤ 2D_Δ_u,2D_ϕ.
§.§.§ Terms 173 & 179
Since (1-β_1) is a non-zero constant, we focus on the upperbound of the terms ∑_l=1^L‖Φ_l^-1/2𝐦̂_l‖^2 and ∑_l=1^Lβ_1,l‖Φ_l^-1/2𝐦̂_l-1‖^2.
Denote m̂_l,i and d_u,i as the ith entry of 𝐦̂_l and 𝐝_u^(k,l), respectively. Then, we have
‖Φ_l^-1/2𝐦̂_l‖^2=∑_i=1 ^Nm̂_l,i^2/ϕ_l,i≤∑_i=1 ^Nm̂_l,i^2/√(|v̂_l,i|)/α_1,l
= ∑_i=1 ^N(∑_p=1^l(1-β_1,p)∏_q=1^l-pβ_1,l-q+1 d_u,i^(k,p))^2 /√(|v̂_l,i|)/α_1,l
(a)≤∑_i=1 ^Nα_1,lη_1^2l(∑_p=1^lβ_1^l-p) (∑_p=1^lβ_1^l-p (d_u,i^(k,p))^2 ) /√(∑_p=1^l(1-β_2)β_2^l-p |d_u,i^(k,p)|^2)
(b)≤α_1,lη_1^2l/(1-β_1)√(1-β_2)∑_i=1 ^N(∑_p=1^l(β_1/√(β_2))^l-p |d_u,i^(k,p)| ),
where step (a) comes from the inequality (1-β_1,p)≤ 1, ∏_q=1^l-pβ_1,l-q+1≤β_1^l-pη_1^l and the Jensen inequality, i.e., (∑_i a_i b_i/∑_i a_i)^2≤∑ a_i b_i^2/∑_i a_i, and step (b) follows the inequalities ∑_p=1^lβ_1^l-p≤1/1-β_1 and ∑_p=1^l(1-β_2)β_2^l-p |d_u,i^(k,p)|^2 ≥ (1-β_2)β_2^l-p |d_u,i^(k,p)|^2.
By summing up (<ref>) over the index l, we have
∑_l=1^L‖Φ_l^-1/2𝐦̂_l‖^2 ≤∑_l=1^Lα_1,lη_1^2l/(1-β_1)√(1-β_2)∑_i=1 ^N(∑_p=1^l(β_1/√(β_2))^l-p |d_u,i^(k,p)| )
=∑_l=1^Lα_1,lη_1^2l/(1-β_1)√(1-β_2) ||𝐝_u^(k,l)||_1 (∑_j=l^L(β_1/√(β_2))^j-l)
≤α_1^+D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)∑_l=1^Lη_1^2l/√(l)
(a)≤α_1^+D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)(1-η_1^2),
where we have utilized the property that ∑_l=1^Lη_1^2l/√(l)≤∑_l=1^L η_1^2l≤1/1-η_1^2 in step (a).
Then, we have
∑_l=1^L‖Φ_l^-1/2𝐦̂_l‖^2≤α_1^+D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)(1-η_1^2).
Similarly, we can obtain
∑_l=1^Lβ_1,l‖Φ_l^-1/2𝐦̂_l-1‖^2 ≤∑_l=1^Lβ_1,l‖Φ_l-1^-1/2𝐦̂_l-1‖^2
≤α_1^+β_1D_u,1/(1-β_1)(1-β_1/√(β_2))√(1-β_2)(1-η_1^2).
§.§.§ Terms 174 & 180
First, we have
∑_l=1^Lρ_a,l‖Φ_l^-1/2𝐛̂_l‖^2 ≤∑_l=1^Lρ_a η_a^l D_ϕ^l ||𝐛̂_l||^2 ≤ρ_a D_ϕ D_b,2/1-η_a,
where D_ϕ^l= max_i ϕ_l,i^-1.
Similarly, we can obtain
∑_l=1^Lρ_a,l^2‖Φ_l^-1/2𝐛̂_l‖^2 ≤ρ_a^2 D_ϕ D_b,2/1-η_a^2.
§.§.§ Terms 175 & 181
By the definition of ν_l in (<ref>), we have
ν_l ≤||𝐝_⋆^(k,l)||_1≤ ||𝐦̂_l||_1 + ρ_a,l ||𝐛̂_l||_1.
Similar to (<ref>), we can obtain
∑_l=1^L||𝐦̂_l||_1
≤∑_l=1^L∑_i=1 ^N(∑_p=1^l∏_q=1^l-pβ_1,l-q+1 |d_u,i^(k,p)| )
≤∑_l=1^L ||𝐝_u^(k,p)||_1 η_1^l/(1-β_1)≤ D_u,1/(1-η_1)(1-β_1).
Then, we have
∑_l=1^Lρ_a,l ||𝐛̂_l||_1=ρ_aD_b,1∑_l=1^L η_a^l≤ρ_aD_b,1/1-η_a.
By substituting (<ref>) and (<ref>) into (<ref>), we have
∑_l=1^Lν_l‖Φ_l^-1/21‖^2 ≤ D_ϕ∑_l=1^Lν_l
≤ D_ϕ( D_u,1/(1-η_1)(1-β_1)+ρ_aD_b,1/1-η_a).
It thus follows that
∑_l=1^Lν_l‖Φ_l^-1/21‖^2 ≤ D_u,1D_ϕ/(1-η_1)(1-β_1)+ρ_aD_b,1D_ϕ/1-η_a.
Similarly, we can obtain
∑_l=1^Lν_l^2‖Φ_l^-1/21‖^2 ≤2 D_u,1^2 D_ϕ/(1-η_1^2)(1-β_1)^2+2ρ_a^2D_b,1^2D_ϕ/(1-η_a^2).
§.§.§ Term 176
By (<ref>), we have
∑_l=1^Lβ_1,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2
≤∑_l=1^L(β_1√(1-β_2)/α_1^-(1-√(β_2)) D_u,1 D_Δ√(l)η_1^l + D_Δβ_1ρ_aη_1^lη_a^l)
(a)≤β_1√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-η_1)^2 + β_1ρ_aD_Δ/(1-η_1η_a),
where we have utilized the bound of the arithmetic-geometric series, i.e., ∑_l=1^L l η_1^l≤1/(1-η_1)^2 in (a).
§.§.§ Term 177
By replacing β_1,l with ρ_a,l in (<ref>), we have
∑_l=1^Lρ_a,l‖Φ_l^1/2Δ𝐮^(k,l)‖^2≤ρ_a√(1-β_2)D_u,1 D_Δ/α_1^-(1-√(β_2))(1-η_a)^2 + ρ_a^2 D_Δ/1-η_a^2.
§.§.§ Term 178
Recalling (<ref>) and (<ref>), we have
ν_l ≤ ||𝐦̂_l||_1 + ρ_a,l ||𝐛̂_l||_1≤ D_u,1/(1-β_1)η_1^l+ ρ_aD_b,1η_a^l.
Then, we can obtain
ν_l‖Φ_l^1/2Δ𝐮^(k,l)‖^2=ν_l∑_i=1 ^Nϕ_l,i |Δ u_i^(k,l)|^2
=ν_l/α_1,l∑_i=1 ^N√(|v̂_l,i|)· |Δ u_i^(k,l)|^2 +ν_lρ_a,l||Δ𝐮^(k,l)||^2
≤ν_l√(1-β_2)/α_1^-(1-√(β_2)) D_u,1 D_Δ√(l)+ ν_l D_Δρ_aη_a^l .
Similarly, we have
∑_l=1^L √(l)ν_l ≤∑_l=1^L √(l)( D_u,1/(1-β_1)η_1^l+ ρ_aD_b,1η_a^l) ≤ D_u,1/(1-β_1)(1-η_1)^2+ ρ_aD_b,1/(1-η_a)^2,
∑_l=1^L η_a^l ν_l ≤∑_l=1^L η_a^l ( D_u,1/(1-β_1)η_1^l+ ρ_aD_b,1η_a^l) ≤ D_u,1/(1-β_1)(1-η_1)(1-η_a)+ ρ_aD_b,1/(1-η_a)^2.
By substituting (<ref>) and (<ref>) into (<ref>), we have
∑_l=1^L ν_l‖Φ_l^1/2Δ𝐮^(k,l)‖^2
≤( D_u,1/(1-β_1)(1-η_1)^2+ ρ_aD_b,1/(1-η_a)^2)√(1-β_2)D_u,1 D_Δ/α_1(1-√(β_2))
+ ( D_u,1/(1-β_1)(1-η_1)(1-η_a)+ ρ_aD_b,1/(1-η_a)^2)D_Δρ_a.
By combining the upper bounds for the summations of terms 172-181, (<ref>) is proved.
8288677
F. Liu, C. Masouros, A. Li, H. Sun, and L. Hanzo, “Mu-mimo communications with
mimo radar: From co-existence to joint transmission,” IEEE Trans.
Wirel. Commun., vol. 17, no. 4, pp. 2755–2770, 2018.
liu2020radar
F. Liu, W. Yuan, C. Masouros, and J. Yuan, “Radar-assisted predictive
beamforming for vehicular links: Communication served by sensing,”
IEEE Trans. Wirel. Commun., vol. 19, no. 11, pp. 7704–7719, 2020.
9296833
A. Zhang, M. L. Rahman, X. Huang, Y. J. Guo, S. Chen, and R. W. Heath,
“Perceptive mobile networks: Cellular networks with radio vision via joint
communication and radar sensing,” IEEE Veh. Technol. Mag., vol. 16,
no. 2, pp. 20–30, 2021.
xie2022perceptive
L. Xie, P. Wang, S. Song, and K. B. Letaief, “Perceptive mobile network with
distributed target monitoring terminals: Leaking communication energy for
sensing,” IEEE Trans. Wirel. Commun., vol. 21, no. 12, pp.
10193–10207, 2022.
xie2022networked
L. Xie, S. Song, and K. B. Letaief, “Networked sensing with ai-empowered
interference management: Exploiting macro-diversity and array gain in
perceptive mobile networks,” arXiv preprint arXiv:2205.11331, 2022.
xie2023collaborative
L. Xie, S. Song, Y. C. Eldar, and K. B. Letaief, “Collaborative sensing in
perceptive mobile networks: Opportunities and challenges,” IEEE Wirel.
Commun., vol. 30, no. 1, pp. 16–23, 2023.
macsazade2010energy
E. Maşazade, R. Niu, P. K. Varshney, and M. Keskinoz, “Energy aware
iterative source localization for wireless sensor networks,” IEEE
Trans. Signal Process., vol. 58, no. 9, pp. 4824–4835, 2010.
7104065
H. Chen, S. Ta, and B. Sun, “Cooperative game approach to power allocation for
target tracking in distributed mimo radar sensor networks,” IEEE Sens.
J., vol. 15, no. 10, pp. 5423–5432, 2015.
yan2020optimal
J. Yan, W. Pu, S. Zhou, H. Liu, and M. S. Greco, “Optimal resource allocation
for asynchronous multiple targets tracking in heterogeneous radar networks,”
IEEE Trans. Signal Process., vol. 68, pp. 4055–4068, 2020.
shen2013sensor
X. Shen and P. K. Varshney, “Sensor selection based on generalized information
gain for target tracking in large sensor networks,” IEEE Trans. Signal
Process., vol. 62, no. 2, pp. 363–375, 2013.
yi2020resource
W. Yi, Y. Yuan, R. Hoseinnezhad, and L. Kong, “Resource scheduling for
distributed multi-target tracking in netted colocated mimo radar systems,”
IEEE Trans. Signal Process., vol. 68, pp. 1602–1617, 2020.
yan2015simultaneous
J. Yan, H. Liu, B. Jiu, B. Chen, Z. Liu, and Z. Bao, “Simultaneous multibeam
resource allocation scheme for multiple target tracking,” IEEE Trans.
Signal Process., vol. 63, no. 12, pp. 3110–3122, 2015.
yuan2020robust
Y. Yuan, W. Yi, R. Hoseinnezhad, and P. K. Varshney, “Robust power allocation
for resource-aware multi-target tracking with colocated mimo radars,”
IEEE Trans. Signal Process., vol. 69, pp. 443–458, 2020.
xie2017joint
M. Xie, W. Yi, T. Kirubarajan, and L. Kong, “Joint node selection and power
allocation strategy for multitarget tracking in decentralized radar
networks,” IEEE Trans. Signal Process., vol. 66, no. 3, pp. 729–743,
2017.
sun2021resource
H. Sun, M. Li, L. Zuo, and P. Zhang, “Resource allocation for multitarget
tracking and data reduction in radar network with sensor location
uncertainty,” IEEE Trans. Signal Process., vol. 69, pp. 4843–4858,
2021.
das2011submodular
A. Das and D. Kempe, “Submodular meets spectral: Greedy algorithms for subset
selection, sparse approximation and dictionary selection,” arXiv
preprint arXiv:1102.3975, 2011.
elhamifar2015dissimilarity
E. Elhamifar, G. Sapiro, and S. S. Sastry, “Dissimilarity-based sparse subset
selection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 11,
pp. 2182–2197, 2015.
grant2014cvx
M. Grant and S. Boyd, “Cvx: Matlab software for disciplined convex
programming, version 2.1,” 2014.
borgerding2017amp
M. Borgerding, P. Schniter, and S. Rangan, “Amp-inspired deep networks for
sparse linear inverse problems,” IEEE Trans. Signal Process.,
vol. 65, no. 16, pp. 4293–4308, 2017.
xin2016maximal
B. Xin, Y. Wang, W. Gao, D. Wipf, and B. Wang, “Maximal sparsity with deep
networks?” NeurIPS, vol. 29, 2016.
8550778
Y. Yang, J. Sun, H. Li, and Z. Xu, “Admm-csnet: A deep learning approach for
image compressive sensing,” IEEE Trans. Pattern Anal. Mach. Intell.,
vol. 42, no. 3, pp. 521–538, 2020.
9420308
J. Johnston, Y. Li, M. Lops, and X. Wang, “Admm-net for communication
interference removal in stepped-frequency radar,” IEEE Trans. Signal
Process., vol. 69, pp. 2818–2832, 2021.
parkvall2017nr
S. Parkvall, E. Dahlman, A. Furuskar, and M. Frenne, “Nr: The new 5g radio
access technology,” IEEE Communi. Stand. Mag., vol. 1, no. 4, pp.
24–30, 2017.
62252
C. Jauffret and Y. Bar-Shalom, “Track formation with bearing and frequency
measurements in clutter,” IEEE Trans. Aerosp. Electron. Syst.,
vol. 26, no. 6, pp. 999–1010, 1990.
grossi2014track
E. Grossi, M. Lops, and L. Venturino, “Track-before-detect for multiframe
detection with censored observations,” IEEE Trans. Aerosp. Electron.
Syst., vol. 50, no. 3, pp. 2032–2046, 2014.
xie2020recursive
L. Xie, Z. He, J. Tong, and W. Zhang, “A recursive angle-doppler channel
selection method for reduced-dimension space-time adaptive processing,”
IEEE Trans. Aerosp. Electron. Syst., vol. 56, no. 5, pp. 3985–4000,
2020.
7181639
K. L. Bell, C. J. Baker, G. E. Smith, J. T. Johnson, and M. Rangaswamy,
“Cognitive radar framework for target detection and tracking,” IEEE
J. Sel. Top. Signal Process., vol. 9, no. 8, pp. 1427–1439, 2015.
325008
J. Helferty and D. Mudgett, “Optimal observer trajectories for bearings only
tracking by minimizing the trace of the cramer-rao lower bound,” in
Proc. 32th IEEE Conf. Decis. Control, 1993, pp. 936–939 vol.1.
326097
J. Helferty, D. Mudgett, and J. Dzielski, “Trajectory optimization for minimum
range error in bearings-only source localization,” in Proceedings of
OCEANS '93, 1993, pp. II/229–II/234 vol.2.
bejar2001distributed
R. Bejar, B. Krishnamachari, C. Gomes, and B. Selman, “Distributed constraint
satisfaction in a wireless sensor tracking system,” in Workshop on
Distributed Constraint Reasoning, International Joint Conference on
Artificial Intelligence, vol. 4, 2001.
zhai2018joint
X. Zhai, Q. Shi, Y. Cai, and M. Zhao, “Joint transmit precoding and receive
antenna selection for uplink multiuser massive mimo systems,” IEEE
Trans. Commun., vol. 66, no. 11, pp. 5249–5260, 2018.
malek2016successive
M. Malek-Mohammadi, A. Koochakzadeh, M. Babaie-Zadeh, M. Jansson, and C. R.
Rojas, “Successive concave sparsity approximation for compressed sensing,”
IEEE Trans. Signal Process., vol. 64, no. 21, pp. 5657–5671, 2016.
sun2016majorization
Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in
signal processing, communications, and machine learning,” IEEE Trans.
Signal Process., vol. 65, no. 3, pp. 794–816, 2016.
boyd2011distributed
S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al.,
“Distributed optimization and statistical learning via the alternating
direction method of multipliers,” Found. Trends® Mach.
Learn., vol. 3, no. 1, pp. 1–122, 2011.
XIE2020107401
L. Xie, Z. He, J. Tong, J. Li, and H. Li, “Transmitter polarization
optimization for space-time adaptive processing with diversely polarized
antenna array,” Signal Process., vol. 169, p. 107401, 2020.
bibby_1974
J. Bibby, “Axiomatisations of the average and a further generalisation of
monotonic sequences,” Glas. Math. J., vol. 15, no. 1, p. 63–65,
1974.
kingma2014adam
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980, 2014.
|
http://arxiv.org/abs/2307.04356v1 | 20230710054920 | Reducing Information Loss for Spiking Neural Networks | [
"Yufei Guo",
"Yuanpei Chen",
"Liwen Zhang",
"Xiaode Liu",
"Xinyi Tong",
"Yuanyuan Ou",
"Xuhui Huang",
"Zhe Ma"
] | cs.NE | [
"cs.NE",
"cs.CV"
] |
Intelligent Science & Technology Academy of CASIC, Beijing 100854, China
Chongqing University, Chongqing, 400044, China
[email protected], [email protected], [email protected]
Reducing Information Loss for Spiking Neural Networks
Yufei Guo^1⋆ Yuanpei Chen^1⋆ Liwen Zhang^1 YingLei Wang^1 Xiaode Liu^1 Xinyi Tong^1 Yuanyuan Ou^2 Xuhui Huang^1 Zhe Ma^1 (^⋆Equal contribution)
August 12, 2023
======================================================================================================================================
The Spiking Neural Network (SNN) has attracted more and more attention recently. It adopts binary spike signals to transmit information. Benefiting from the information passing paradigm of SNNs, the multiplications of activations and weights can be replaced by additions, which are more energy-efficient. However, its “Hard Reset" mechanism for the firing activity would ignore the difference among membrane potentials when the membrane potential is above the firing threshold, causing information loss. Meanwhile, quantizing the membrane potential to 0/1 spikes at the firing instants will inevitably introduce quantization error, thus bringing about information loss too. To address these problems, we propose to use the “Soft Reset" mechanism for the supervised training-based SNNs, which will drive the membrane potential to a dynamic reset potential according to its magnitude, and Membrane Potential Rectifier (MPR) to reduce the quantization error via redistributing the membrane potential to a range close to the spikes. Results show that the SNNs with the “Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets.
§ INTRODUCTION
Deep Neural Networks (DNNs) have greatly improved many applications in computer vision, e.g., object detection and recognition <cit.>, object segmentation <cit.>, object tracking <cit.>, etc. In pursuit of models with better performance, more and more complex networks are proposed. However, the increasing complexity poses a new challenge to model deployment on power-constrained devices, thus becoming an impediment to the applications of these advanced complex models. There have been several approaches to address this problem, such as quantization <cit.>, pruning <cit.>, knowledge distillation <cit.>, spiking neural networks (SNNs) <cit.>, and so on. Among these approaches, SNNs, as a biology-inspired method, provide a unique way to reduce energy consumption by mimicking the spiking nature of brain neurons. A spiking neuron integrates the inputs over time and fires a spike output whenever the membrane potential exceeds the firing threshold. Using 0/1 spikes to transmit information allows SNNs to enjoy the advantage of multiplication-free inference by converting multiplications to additions. Furthermore, SNNs are energy-efficient on neuromorphic hardware, such as SpiNNaker <cit.>, TrueNorth <cit.>, Darwin <cit.>, Tianjic <cit.>, and Loihi <cit.>.
Despite the attractive benefits, there is still a huge performance gap between existing SNN models and their DNN counterparts. We argue that the reason for the low accuracy is that there exists information loss in SNNs. First, the neurons in supervised training-based SNNs generally follow the rules of the Integrate-and-Fire (IF) model or Leaky IF (LIF) model, where once a membrane potential exceeds the firing threshold, a “Hard Reset” operation will force the “residual” potential to be set to 0, i.e., once fired, all the information will be taken away. Obviously, this reset mode, which ignores the “residual” membrane potential, fails to preserve the diversity of the membrane potentials. Hence the information encoding capacity of the network is compromised, such that the risk of information loss increases accordingly. Second, although the 0/1 spike information processing paradigm enables SNNs to enjoy the advantage of high efficiency, quantizing the real-valued membrane potential to 0/1 spikes will inevitably introduce quantization error, which also brings about information loss.
To address the information loss problem, we propose a “Soft Reset”-based IF (SRIF) neuron model that retains the “residual” membrane potential by subtracting the spike value from the membrane potential at the firing instants. Hence the diversity of the membrane potentials that exceed the firing threshold will be preserved. Though the “Soft Reset” mechanism is commonly used in ANN-to-SNN conversion (ANN2SNN) methods <cit.>,
it is rarely applied in supervised SNNs <cit.> and has not been discussed in SNN enhancement from the perspective of reducing information loss. In addition, to alleviate the quantization error, the Membrane Potential Rectifier (MPR) is proposed, which is performed before the firing activity to adjust the membrane potentials towards the spike values (i.e., 0/1). With MPR, the membrane potential is decoupled into an original one and a modulated one. The original one keeps the mechanism of a neuron, and the modulated one enjoys less quantization error than the original one without suffering from any negative effects. The difference between our neuron and the vanilla neuron is illustrated in Fig. <ref>. Our main contributions are as follows:
* We propose using the SRIF model for supervised training-based SNNs. By retaining the “residual” membrane potential, SRIF enables the networks to distinguish the differences among those membrane potentials that exceed the firing threshold via subtracting their spike values thus enhancing the information encoding capacity of supervised training-based SNNs.
* We present MPR to mitigate the quantization error. By utilizing a non-linear function to modulate the membrane potential towards 0/1 before the firing activity is triggered, the gap between the potential and its corresponding 0/1 spike value is reduced while maintaining the sparse spike activation mechanism of SNNs. To the best of our knowledge, few works have noticed the quantization error in SNNs, and we present a simple but effective method to address this problem.
* Extensive experiments on both static and dynamic datasets were conducted to verify our method. Results show that the SNN trained with the proposed method is highly effective and efficient compared with other state-of-the-art SNN models, e.g., 96.49% and 79.41% top-1 accuracy are achieved on CIFAR-10 and CIFAR-100, respectively. Surprisingly, these models even outperform their DNN counterparts, which is very rare for SNNs.
§ RELATED WORK
§.§ Learning Methods of Spiking Neural Networks
The training methods of SNNs can be divided into two categories. The first one is ANN2SNN <cit.>. ANN2SNN yields the same input-output mapping for the ANN-SNN pair via approximating the continuous activation values of an ANN using ReLU by averaging the firing rate of an SNN under the rate-coding scheme. Since the ANN has achieved great success in many fields, ANN2SNN can maintain the smallest gap with ANNs in terms of performance and can be generalized to large-scale structures. However, being restricted to rate-coding, ANN2SNN usually requires dozens or even hundreds of timesteps to obtain well-performing networks. Although much effort has been devoted to reducing the long inference time, such as weight normalization <cit.>, threshold rescaling <cit.>, soft reset <cit.>, threshold shift <cit.>, and the quantization clip-floor-shift activation function <cit.>, it is still hard to obtain high-performance SNNs with ultra-low latency.
The second one is supervised learning-based SNNs. SNNs quantize the real-valued membrane potentials into 0/1 spikes via the firing activity. Since the gradient of the firing activity function is zero almost everywhere, the gradient descent-based optimizer can not be directly used for the training of SNNs. To alleviate the optimization difficulty, the approximate gradient-based strategy is commonly used, and some related approaches had been proposed to achieve trainable SNNs with high performance. For example, by regarding the SNN as a special RNN, a training method of back-propagation through time with different kinds of surrogate gradient was proposed <cit.>. The spatio-temporal back-propagation (STBP) <cit.> method enables SNNs to be trained on the ANN programming platform, which also significantly promotes the direct training research of SNNs. Differentiable spike which can match the finite difference gradient of SNNs well was proposed in <cit.>. The temporal efficient training (TET) <cit.> method with a novel loss and a gradient descent regime that succeeds in obtaining more generalized SNNs, has also attracted much attention. In RecDis-SNN <cit.>, a new perspective to understand the difficulty of training SNNs by analyzing undesired membrane potential shifts is presented and the MPD-Loss to penalize the undesired shifts is proposed. Numerous works verify that supervised learning can greatly reduce the number of timesteps and handle dynamic datasets. It has increasingly aroused researchers’ interest in recent years. In this work, we focus on improving the performance of the supervised learning-based SNNs by repressing information loss, which is rarely mentioned in other works.
§.§ Threshold-dependent Batch Normalization
Batch Normalization (BN) is one of the most widely used normalization technologies, which is initially designed for very deep Convolutional Neural Networks (CNNs). As it only focuses on normalizing the spatial feature maps, directly applying BN to SNNs would damage the temporal characteristic of SNNs, which stand with spatio-temporal feature maps, leading to low accuracy. To address this issue, some specially-designed normalization methods for SNNs were proposed recently. Typically, to simultaneously balance neural selectivity and normalize the neuron activity, NeuNorm <cit.> was proposed. Then, a more effective normalization technique that can take good care of the firing threshold, named threshold-dependent Batch Normalization (tdBN) was further proposed in <cit.>. It can normalize the feature maps of SNNs in both spatial and temporal domains <cit.>. Specifically, let X_t ∈ℝ^B× C× H× W represent the input maps at each timestep, where t=1,…,T (B: batch size; C: channel; (H, W): spatial domain). Then for each channel c, the spatio-temporal sequence X^(c) = {X_1^(c), ⋯ ,X_T^(c)} is normalized by tdBN as follows,
X̃^(c) = λ·α V_th(X^(c)-x̅^(c))/√( mean((X^(c)-x̅^(c))^2)+ϵ) + β,
where V_th is the firing threshold, α is a network-structure-dependent hyper-parameter, ϵ is a tiny constant, λ and β are two learnable parameters, x̅^(c)= mean(X^(c)) is the mean value of X^(c), X̃^(c) is the normalized maps. In this paper, tdBN is also adopted considering its spatio-temporal normalization mechanism.
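For concreteness, a simplified PyTorch sketch of the tdBN transform in the equation above is given below. It keeps only training-mode batch statistics and leaves out running statistics and the exact choice of α, so it is an illustration rather than a reference implementation; the class name `TdBN` is ours.

```python
import torch
import torch.nn as nn

class TdBN(nn.Module):
    """Simplified threshold-dependent BN over a (T, B, C, H, W) spike tensor."""

    def __init__(self, channels, v_th=0.5, alpha=1.0, eps=1e-5):
        super().__init__()
        self.v_th, self.alpha, self.eps = v_th, alpha, eps
        self.weight = nn.Parameter(torch.ones(channels))   # lambda in the equation
        self.bias = nn.Parameter(torch.zeros(channels))    # beta in the equation

    def forward(self, x):                                  # x: (T, B, C, H, W)
        # Per-channel statistics over the temporal, batch and spatial dimensions.
        mean = x.mean(dim=(0, 1, 3, 4), keepdim=True)
        var = ((x - mean) ** 2).mean(dim=(0, 1, 3, 4), keepdim=True)
        x_hat = self.alpha * self.v_th * (x - mean) / torch.sqrt(var + self.eps)
        return self.weight.view(1, 1, -1, 1, 1) * x_hat + self.bias.view(1, 1, -1, 1, 1)
```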
§ PRELIMINARY AND METHODOLOGY
To avoid the information loss in supervised training-based SNNs, we propose the “Soft Reset" IF (SRIF) model and the Membrane Potential Rectifier (MPR).
§.§ “Soft Reset" IF Model
An SNN adopts a biology-inspired spiking neuron that accumulates inputs along the time dimension as its membrane potential and fires a spike when the potential exceeds the firing threshold. This mechanism makes it much different from its DNN counterpart. For better introducing the proposed SRIF neuron, a unified form defined by a recent work <cit.>, is given to describe the dynamics of all kinds of spiking neurons as follows,
H[t] = f(U[t-1],X[t]),
O[t] = Θ(H[t]-V_th),
U[t] = H[t](1-O[t])+V_resetO[t],
where X[t], H[t], U[t], and O[t] are the input, membrane potentials before and after the trigger of a spike, and output spike at the timestep t, respectively. V_th is the firing threshold, and is usually set to 0.5. Θ(·) is the step function defined by Θ(x) = 1 for x ≥ 0 and Θ(x) = 0 for x < 0. V_reset denotes the reset potential, which is set as 0. The function f(·) describes the neuronal dynamics of spiking neuron models, for the commonly used IF neuron and LIF neuron, f(·) can be respectively defined as follows,
H[t] = U[t-1]+X[t],
H[t] = τ U[t-1]+ X[t],
where τ denotes the membrane time constant.
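A minimal sketch of one update step of this unified form, covering both the IF and the LIF choice of f(·) under the “Hard Reset” rule above, could look as follows; PyTorch, the tensor shapes, and the exact value of τ are illustrative assumptions rather than details from the paper.

```python
import torch

def hard_reset_step(u_prev, x_t, v_th=0.5, v_reset=0.0, tau=None):
    """One timestep of a 'Hard Reset' IF (tau=None) or LIF (0 < tau < 1) neuron."""
    # Charge: H[t] = f(U[t-1], X[t]).
    h_t = x_t + (u_prev if tau is None else tau * u_prev)
    # Fire: O[t] = Theta(H[t] - V_th).
    o_t = (h_t >= v_th).float()
    # Hard reset: U[t] = H[t](1 - O[t]) + V_reset * O[t].
    u_t = h_t * (1.0 - o_t) + v_reset * o_t
    return o_t, u_t
```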
Both LIF and IF neurons have some unique advantages. With the decay characteristic introduced by the membrane time constant, the LIF neuron behaves more biologically than the IF neuron, while the IF neuron is more efficient due to its addition-only processing manner. In terms of accuracy, neither of them shows an overwhelming advantage, and more detailed experimental results of these two neurons are provided in Section 4. Considering the subtle gap in performance, we prefer the LIF model due to its neurodynamic characteristic from the perspective of brain science research. Conversely, from the perspective of computer science research, we recommend the IF model, since it is more hardware-friendly.
However, both the IF model and the LIF model run a greater or lesser risk of information loss due to the “Hard Reset" mechanism, i.e., when the input membrane potentials exceed the firing threshold, the neurons force the membrane potentials to a fixed value. Such a mechanism ignores the “residual" parts of those fired membrane potentials. These “residual" parts contain the diversity of the input potentials, and we argue that a neuron model which can preserve the diversity or differences of the membrane potentials that cause the firing is more suitable.
To this end, along with the consideration of efficiency, we propose using a “Soft Reset" mechanism-based IF neuron, SRIF, which keeps the diversity of the membrane potentials by subtracting their firing spike values from them at the instants where the threshold is exceeded. Though a similar “Soft Reset” mechanism has been widely used in ANN2SNN <cit.>, few works use it in supervised learning-based SNNs <cit.>. We found its value in this field from a new perspective, namely reducing information loss.
In SRIF neuron, Eq. (<ref>) is updated as
U[t] = H[t](1-O[t])+(H[t]-O[t])O[t].
It can be further simplified as
U[t] = H[t]-O[t].
It can be seen that, similar to the IF neuron, SRIF is also an addition-only model, thus enjoying computational efficiency when implemented on hardware. Fig. <ref> compares the IF neuron and the SRIF neuron in an intuitive way. Suppose that both models receive a weighted input sequence of 1.5V_th, 1.2V_th, 1.5V_th, 0.9V_th, and 1.4V_th across 5 consecutive timesteps. As depicted in Fig. <ref>, our SRIF neuron will produce three spikes by retaining the residual potentials at the firing instants, whereas the IF neuron will produce four spikes.
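This worked example can be checked numerically. The short, self-contained sketch below (plain Python scalar arithmetic; V_th = 0.5 and a reset potential of 0 are taken from the text) simulates both reset rules on the same weighted inputs and recovers the four-versus-three spike counts.

```python
V_TH = 0.5
inputs = [1.5 * V_TH, 1.2 * V_TH, 1.5 * V_TH, 0.9 * V_TH, 1.4 * V_TH]

u_hard, u_soft = 0.0, 0.0
spikes_hard, spikes_soft = 0, 0
for x in inputs:
    # "Hard Reset" IF: U[t] = H[t](1 - O[t]) + V_reset * O[t], with V_reset = 0.
    h = u_hard + x
    o = 1 if h >= V_TH else 0
    u_hard = 0.0 if o else h
    spikes_hard += o
    # "Soft Reset" IF (SRIF): U[t] = H[t] - O[t].
    h = u_soft + x
    o = 1 if h >= V_TH else 0
    u_soft = h - o
    spikes_soft += o

print(spikes_hard, spikes_soft)   # 4 3, matching the example of Fig. 1
```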
§.§ Membrane Potential Rectificater
To further mitigate the information loss, we present a non-linear function, called MPR, to reduce the quantization error. MPR aims to redistribute the membrane potential before it is operated on by the step function. It only modulates the membrane potential that is presented to the step function but does not modify the value of the membrane potential which receives and accumulates spikes from other neurons. Specifically, we further distinguish the membrane potentials as the original one, H as in Eq. (<ref>), and the modulated one, Ĥ, which is the membrane potential that will be presented to the step function. In all previous works, H and Ĥ are treated as the same, while in this paper we would like to provide a new perspective: using a decoupling function to separate H and Ĥ can be helpful. Specifically, H manages the original tasks as in other works, while Ĥ derives from H with a non-linear function, φ(·), and is fed into the step function in a modulated form that can shrink the quantization error. With this decoupling mechanism, a neuron model can not only keep the membrane potential updating rule but also enjoy less quantization error.
Before giving the full details of the MPR, we try to formulate the quantization error first. It is clear that the quantization errors corresponding to different membrane potentials should be different. Hence, a value closer to its quantization spike, o, enjoys less quantization error. In specific, the firing threshold divides the membrane potentials into two parts, the part with smaller values is assigned to “0" spike, and the other with larger values is assigned to “1" spike. Then the quantization error depends on the margin between the membrane potential and its corresponding spike. Therefore, the quantization error can be defined as the square of the difference between the membrane potential and its corresponding quantization spike value as follows:
ℒ_q = (u-o)^2,
where u is the membrane potential and o ∈{0,1}. When u is below the firing threshold, o is 0; otherwise, o is 1.
Hence, the design of MPR should obey the following two principles:
* Spike-approaching: the modulated membrane potential, Ĥ should be closer to the 0/1 spikes than the original membrane potential, H. This principle ensures quantization error reduction.
* Firing-invariance: for the H less than V_th, the MPR should not produce the Ĥ greater than V_th and vice versa. This principle ensures the neuron output be consistent with or without using MPR.
Based on the above two principles, we define the MPR as the following symmetrical function:
φ (u) =
{[ -(1-u)^1/3+1, u < 0,; 1/2tanh(3/2) tanh(3(u-1/2))+1/2, 0≤ u≤ 1,; (u)^1/3, u > 1. ].
Fig. <ref> shows the response curve of the designed MPR function following the principles of spike-approaching and firing-invariance.
According to <cit.>, the membrane potential follows a Gaussian distribution, 𝒩(μ ; σ). Hence, to visualize the effect of the MPR, we sample 100,000 values from a Gaussian distribution 𝒩(1/2 ; 1) and present them to the MPR. The distribution of these 100,000 MPR outputs is drawn in Fig. <ref>. It can be seen that the unimodal distribution 𝒩(1/2 ; 1) is adjusted to a bimodal distribution with less quantization error, since the MPR naturally gathers the membrane potentials near “0" and “1".
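A small NumPy sketch of the piecewise MPR function above, together with the sampling experiment just described, is given below; the use of NumPy and the assertion at the end (checking the firing-invariance principle) are our own illustration choices.

```python
import numpy as np

def mpr(u):
    """Membrane Potential Rectifier phi(u), applied elementwise."""
    u = np.asarray(u, dtype=np.float64)
    out = np.empty_like(u)
    neg, pos = u < 0.0, u > 1.0
    mid = ~neg & ~pos
    out[neg] = 1.0 - np.cbrt(1.0 - u[neg])                                # -(1-u)^(1/3) + 1
    out[mid] = np.tanh(3.0 * (u[mid] - 0.5)) / (2.0 * np.tanh(1.5)) + 0.5
    out[pos] = np.cbrt(u[pos])                                            # u^(1/3)
    return out

# Redistribution experiment: 100,000 membrane potentials drawn from N(1/2, 1).
samples = np.random.normal(loc=0.5, scale=1.0, size=100_000)
rectified = mpr(samples)
# The unimodal input becomes bimodal, gathering near the spike values 0 and 1,
# while the firing decision (comparison with V_th = 0.5) is left unchanged.
assert np.all((samples >= 0.5) == (rectified >= 0.5))
```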
Moreover, it is worth noting that, the redistributed membrane potential, Ĥ by MPR is only used for narrowing the gap between the true membrane potential, H and its quantization spike. It will not replace the original H in our SRIF neuron model. Then the complete new dynamics of the SRIF model can be described as follows,
H[t] = U[t-1]+X[t],
Ĥ[t] = φ(H[t]),
O[t] = Θ(Ĥ[t]-V_th),
U[t] = H[t]-O[t].
The detailed Feed-Forward procedure for the SRIF neuron with MPR is given in Algo.1.
§ EXPERIMENT
The proposed methods were evaluated on various static datasets (CIFAR-10 <cit.>, CIFAR-100 <cit.>, ImageNet <cit.>) and one neuromorphic dataset (CIFAR10-DVS <cit.>) with widely-used spiking archetectures including ResNet20 <cit.>, VGG16 <cit.>, ResNet18 <cit.>, ResNet19 <cit.>, and ResNet34 <cit.>.
§.§ Datasets and Settings
Datasets. The CIFAR-10(100) dataset consists of 60,000 images in 10(100) classes with 32× 32 pixels. The number of training images is 50,000, and that of the test images is 10,000. The CIFAR10-DVS dataset is the neuromorphic version of the CIFAR-10 dataset. It is composed of 10,000 images in 10 classes, with 1000 images per class. The ImageNet dataset has more than 1,250,000 training images and 50,000 test images.
Preprocessing. Data normalization is applied on all static datasets to ensure that input images have 0 mean and 1 variance. Besides, the random horizontal flipping and cropping on these datasets were conducted to avoid overfitting. For CIFAR-10, the AutoAugment <cit.> and Cutout <cit.> were used for data augmentation. For the neuromorphic dataset, since the CIFAR10-DVS dataset does not separate data into training and testing sets, we split the dataset into 9000 training images and 1000 test images similar to <cit.>. For data preprocessing and augmentation, we resized the training image frames to 48× 48 as in <cit.> and adopted random horizontal flip and random roll within 5 pixels. And the test images are just resized to 48× 48 without any additional processing.
Training setup. For all the datasets, the firing threshold V_th was set as 0.5 and V_reset as 0. For static image datasets, the images were encoded to binary spikes using the first layer of the SNN, as in recent works <cit.>. This is similar to rate-coding. For the neuromorphic image dataset, we used the 0/1 spike format directly. The neuron models in the output layer accumulated the incoming inputs without generating any spike as the output, as in <cit.>. For the CIFAR-10(100) and CIFAR10-DVS datasets, we used the SGD optimizer with a momentum of 0.9 and a learning rate of 0.01, cosine-decayed <cit.> to 0. All models were trained within 400 epochs with the same batch size of 128. For the ImageNet dataset, we used the SGD optimizer with a momentum of 0.9 and a learning rate of 0.1, cosine-decayed <cit.> to 0. All models are trained within 320 epochs as in <cit.>. The batch size is set to 64.
§.§ Ablation Study for Different Neuron Models
We first conducted a set of ablation experiments to verify the effectiveness of the proposed SRIF model on CIFAR-10(100) using ResNet20 as the backbone under various timesteps without MPR. The results are shown in Tab. 1.
It can be seen that, whether on CIFAR-10 or CIFAR-100, the SRIF neuron always obtains the best result from 2 timesteps to 8 timesteps. This indicates the superiority of the SRIF neuron. On the other hand, the LIF neuron performs better than the “Hard Reset" IF neuron on CIFAR-10, while the IF neuron performs better on CIFAR-100, even though the LIF neuron is more like a biological neuron. This comparison also shows that, although SNNs are proposed to imitate biological neural networks, the implementation of large-scale networks still needs to rely on computer hardware. Hence, the characteristics of computational science should also be considered. In this respect, the SRIF neuron is more suitable due to its advantage of low power consumption and its capacity for reducing information loss.
§.§ Addition of MPR
Then, a set of ablation experiments for the MPR were conducted on CIFAR-10(100) using ResNet20 and ResNet19 as backbones within 4 timesteps. Results in Tab. 2 show that the MPR can greatly improve performance. Especially on CIFAR-100, where ResNet20 with MPR increases the accuracy by 2.73%. These results verify the effectiveness of MPR in terms of performance improvement.
We also computed the average quantization error of the first layer of the second block in the ResNet20/19 before and after MPR on the test set of CIFAR-10(100), respectively. Results in Tab. 3 show that the quantization error is obviously reduced by the MPR. The overall original membrane potential distribution and modulated membrane potential distribution by MPR of the first layer of the second block in ResNet20 on CIFAR-10 and CIFAR-100 test sets are shown in Fig. <ref>. It shows that the MPR adjusts the membrane potential distribution near “0" and “1", which is closer to its quantization spike. Put together, these results quantitatively support the effectiveness of MPR in reducing quantization error.
§.§ Comparisons with Other Methods
Our method was further compared with other state-of-the-art SNNs on static and neuromorphic datasets. Results are shown in Tab. 4, where for each run, the mean accuracy and standard deviation of 3 trials are listed. For simplification, InfLoR (, short for Information Loss Reducing) is used to denote the combination of SRIF and MPR.
CIFAR-10(100).
For CIFAR-10, our method improves network performance across all commonly used backbones in SNNs. The ResNet19-based InfLoR-SNN achieved 96.49% top-1 accuracy with 6 timesteps, outperforming its STBP-tdBN counterpart by 3.33% and even its ANN counterpart by 0.20%. The ResNet20-based InfLoR-SNN reaches 93.65%, compared with only 92.54% in <cit.>. Our VGG16-based network also shows higher accuracy than other methods with fewer timesteps. On CIFAR-100, InfLoR-SNN also performs better and achieves a 1.89% improvement on VGG16. Notably, InfLoR-SNN significantly surpasses Diet-SNN <cit.> with 7.12% higher accuracy, which is not easy to achieve in the SNN field. Again, our ResNet19 also outperforms its ANN counterpart. To the best of our knowledge, it is the first time that an SNN can outperform its ANN counterpart.
ImageNet.
For the ImageNet dataset, ResNet18 and ResNet34 were used as the backbones. Results show that our ResNet18 achieves a 1.60% improvement over SEW ResNet18 and a 2.46% improvement over Spiking ResNet18. The accuracy of our ResNet34 does not exceed that of SEW ResNet34. However, SEW ResNet34 <cit.> transmits information with integers, which is not a typical SNN. For a fair comparison, we also report the result of Spiking ResNet34 in <cit.>, which is worse than our method. Moreover, our InfLoR-based ResNet34 with 4 timesteps still obviously outperforms STBP-tdBN-based ResNet34 with 6 timesteps.
CIFAR10-DVS.
For the neuromorphic dataset, CIFAR10-DVS, InfLoR-SNN achieves the best performance with 75.50% and 75.10% top-1 accuracy in 10 timesteps with ResNet19 and ResNet18 as backbones, and obtains 7.80% improvement compared with STBP-tdBN for ResNet19. It's worth noting that, as a more complex model, ResNet19 only performs a little better than ResNet20 on CIFAR10-DVS. It might be that this neuromorphic dataset suffers much more noise than static ones, thus a more complex model is easier to overfit.
§ CONCLUSIONS
This work aims at addressing the information loss problem caused by the “Hard Reset" mechanism of neurons and the 0/1 spike quantization. To this end, the SRIF model, which drives the membrane potential to a dynamic reset potential, and the MPR, which adjusts the membrane potential to a new value closer to the quantization spikes, are proposed. A detailed analysis of why the SRIF and MPR can reduce the information loss is provided. Furthermore, abundant ablation studies of the proposed methods are given. Combining these two methods, our SNNs outperform other state-of-the-art methods.
|
http://arxiv.org/abs/2307.04341v1 | 20230710045017 | Stroke Extraction of Chinese Character Based on Deep Structure Deformable Image Registration | [
"Meng Li",
"Yahan Yu",
"Yi Yang",
"Guanghao Ren",
"Jian Wang"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
Stroke Extraction of Chinese Character Based on Deep Structure Deformable Image Registration
Meng Li, Yahan Yu, Yi Yang, Guanghao Ren, Jian Wang
August 12, 2023
================================================================================
Stroke extraction of Chinese characters plays an important role in the field of character recognition and generation.
Most existing character stroke extraction methods focus on image morphological features.
These methods usually lead to errors in cross-stroke extraction and stroke matching because they rarely use stroke semantics and prior information.
In this paper, we propose a deep learning-based character stroke extraction method that takes semantic features and prior information of strokes into consideration.
This method consists of three parts: image registration-based stroke registration that establishes the rough registration of the reference strokes and the target as prior information;
image semantic segmentation-based stroke segmentation that preliminarily separates target strokes into seven categories;
and high-precision extraction of single strokes. In the stroke registration, we propose a structure deformable image
registration network to achieve structure-deformable transformation while maintaining the stable morphology of single
strokes for character images with complex structures. In order to verify the effectiveness of the method,
we construct two datasets respectively for calligraphy characters and regular handwriting characters.
The experimental results show that our method strongly outperforms the baselines. Code is available at https://github.com/MengLi-l1/StrokeExtraction.
§ INTRODUCTION
Stroke extraction of Chinese characters refers to extracting every single stroke of the characters based on the matching with
the templates consisting of standard ordered strokes. Stroke extraction is important for certain research on Chinese characters.
In the field of character recognition, the experiments in <cit.> show that further disassembly of the stroke structure
of characters can significantly improve the accuracy of character recognition. In other search fields such as evaluation of calligraphy the
important part of traditional Chinese culture <cit.>, and character generation <cit.>, the stroke extraction is also of great significance to them.
By analyzing the characteristics of Chinese characters, the difficulties of Chinese character stroke extraction mainly include the following three aspects.
First, there are more than 7000 Chinese characters commonly used. Most of them have a complex structure.
Second, the shapes of character strokes are simple and only have structural differences.
These indistinguishable features make direct recognition of strokes difficult, especially for cross strokes within a character.
Third, the unfixed number of strokes in different Chinese characters makes it difficult to build a stroke extraction model.
Existing research on stroke extraction of Chinese characters, including some deep learning-based methods,
mostly focuses on image morphological features of strokes and radicals <cit.>.
Although these methods can achieve remarkable results, their core is only morphological analysis, which has two drawbacks or constraints:
(1) the prior information of the character stroke is rarely used, and (2) the semantic analysis of the stroke is lacking.
Rarely using prior information and stroke semantics can lead to errors in the separation of cross strokes and the matching of strokes with the template
in the stroke extraction of complex characters.
Inspired by the stroke extraction process of humans, we propose an efficient stroke extraction method of
Chinese characters which not only separates strokes but also establishes the matching with the reference template.
This method takes semantic features and prior information of strokes into consideration and mainly includes three steps:
(1) stroke registration based on the Structure Deformable Image Registration Network (SDNet);
(2) stroke segmentation based on the Image Semantic Segmentation Network (SegNet);
(3) single stroke high-precision extraction based on the Single Stroke Extraction Network (ExtractNet).
For human cognition, the prior information of Chinese character images refers to the basic knowledge of the position and shape of the strokes.
To obtain the prior information, we use the image registration to establish a rough mapping relationship between the reference character strokes and the target character.
The transformed reference strokes based on the mapping relationship are used as prior information of the target stroke positions and shapes.
The semantic features of the strokes need to be stable during the registration-based transformation for effective prior information. However,
the existing image registration methods <cit.> usually
cause the Chinese stroke to be severely distorted when characters have complex structures.
To solve the problem, we propose SDNet using multiple linear mapping planes to replace the native single mapping surface.
SDNet can maintain the stability of stroke semantic features while deformably transforming the stroke structure. Our main contributions are summarized as follows.
* We propose a novel deep learning-based stroke extraction method, which takes semantic features and prior information of strokes into consideration more
adequately and achieves significant improvement in the separation of cross strokes and matching of strokes with the template.
* We propose a structure deformable image registration method, which performs better in the registration of image structure.
§ RELATED WORK
§.§ Image Registration
Image registration is a process of establishing pixel-level correspondences between different images with matched image contents.
In the past decades, image registration usually extracted and matched feature regions first, such as closed-boundary regions, edges, corners,
and other shape features <cit.>. By evaluating the transformation model, like elastic model <cit.>
and discrete model <cit.>, the transformation relationship of these feature regions and entire image is established.
Later, some researchers began to use deep learning techniques to enhance the feature extraction and matching,
based on an iterative framework or reinforcement learning <cit.>.
Recently, with the proposal of Spatial Transformer Networks (STN) <cit.>, the grid sample block with gradient backpropagation has
facilitated the direct application of deep learning <cit.>, which effectively promotes the improvement of the image registration.
However, the existing deep learning-based image registration methods cannot maintain the local stability while the whole structure is freely transformed.
It is not suitable for stroke registration of Chinese characters with complex structures.
§.§ Stroke Extraction for Chinese Character Image
For most existing stroke extraction methods of Chinese character, analyzing the ambiguous cross region is the core and primary task.
By detecting the corners caused by the interlaced strokes, <cit.> disassembled the Kaiti characters into simple strokes for calligraphy robots.
<cit.> used chained line segments to represent the boundaries of characters and separated interlaced strokes by detecting whether these boundaries are regular.
To further improve the accuracy of stroke extraction, some template-based methods are proposed <cit.>.
The methods use stroke structure matching to establish the correspondence between sub-strokes and template strokes,
which is used to merge sub-strokes created by separating strokes with cross region.
However, the template of the existing template-based methods is not used for previous steps of cross region detection and separation.
In this case, the reference template information (prior information) is insufficiently used. In addition, character structure matching is mostly based on shape
analysis and lacks the use of stroke semantic information.
§ METHOD
The proposed stroke extraction method, shown in Figure 1, mainly includes the following three modules.
(1) The prior information acquisition module that establishes the registration between the reference stroke and the target through SDNet and uses the transformed reference strokes as prior information.
(2) The stroke separation module that separates the target character preliminarily by SegNet with the guidance of prior information.
(3) The single stroke extraction module that uses ExtractNet to extract every single stroke images of the target one by one according to the order of the reference strokes.
§.§ SDNet for Prior Information
Due to the complex stroke structure and various writing styles of Chinese characters, the position and shape of a stroke in the same character may vary greatly.
This is why a highly deformable registration model is needed. However, highly distorted strokes can destroy their own shapes and reduce the validity of prior information,
which requires the registration model to ensure that the shape of a single stroke is stable before and after transformation. To address this problem, we propose a local linear stroke
spatial transformation method that constrains the transformation of a single stroke to be linear.
The SDNet uses UNet as the main frame of the registration network similar to <cit.>.
The main frame convolves down from size 256× 256 to 8×8 and then convolves up to size 256× 256 for output.
To improve the analysis of Chinese character features, we add features from Chinese character recognition of the input characters to the last four stages of the encoder in the UNet.
The Chinese character recognition model is a simple convolution network like VGG <cit.>.
The input of SDNet consists of two parts: target character image input as Target Data 𝑡 and reference character image marked with different values for different stroke labels input as Reference Data.
The output of the model is a prediction of the offset coordinate vector for each pixel, which can be easily constructed as a registration field. The structure of The SDNet is shown in Figure 2.
Based on the existing output registration field Φ_𝑑, as shown in Figure 2, we add a branch in the up-convolution process.
This branch upsamples the data at size 32× 32 by a factor of 4 and then upsamples again after one convolution to obtain the registration field Φ_𝑒 with the same size as Φ_𝑑.
Φ_𝑒 is a fine-tuning of the original registration field, and its output weight is only 0.5 of Φ_𝑑.
The registration field used to calculate the linear transformation is represented as Φ_𝑠 (Φ_𝑠=Φ_𝑑+0.5Φ_𝑒).
Due to learning from a smaller size, Φ_𝑒 is biased towards the prediction of the overall offset of the single stroke,
which is suitable for the estimation of local linear transformation. During model training, both Φ_𝑑 and Φ_𝑠 are involved in the loss computation.
Φ_𝑠 tends to learn the local registration ability of strokes under the constraint, which weakens the learning of global registration.
Therefore, Φ_𝑑 is used to realize the global registration of character images to make up for the lack of Φ_𝑠.
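As a rough sketch of how such a registration field is applied, and of the combination Φ_s = Φ_d + 0.5 Φ_e mentioned above, the following PyTorch snippet warps an image with a dense field of per-pixel offsets through `grid_sample`; the (dx, dy) channel ordering and the variable names `phi_d`, `phi_e` are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn.functional as F

def warp(image, offsets):
    """Warp an image with a dense registration field of per-pixel offsets.

    image:   (N, C, H, W) tensor.
    offsets: (N, 2, H, W) predicted (dx, dy) offsets in pixels.
    """
    n, _, h, w = image.shape
    # Base sampling grid in the normalized [-1, 1] coordinates used by grid_sample.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)   # (N, H, W, 2)
    # Convert pixel offsets to the normalized coordinate range.
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)])
    grid = base + offsets.permute(0, 2, 3, 1) * scale
    return F.grid_sample(image, grid, mode="bilinear", padding_mode="zeros", align_corners=True)

# Combining the two predicted fields as in the text: Phi_s = Phi_d + 0.5 * Phi_e.
# warped_target = warp(target_image, phi_d + 0.5 * phi_e)
```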
In actual training, the position and shape of the reference stroke are more stable.
We use the reference stroke to mark the local region in Φ_𝑠 and calculate the linear transformation estimation of these regions.
Therefore, during model training, what we actually learn is the transformation from target to reference.
This operation can reduce the noise caused by errors in the linear estimation part, and improve the stability and efficiency of training.
During inference, the transformation from reference to target can be obtained by calculating the inverse spatial transformation for every single stroke.
§.§.§ Linear Estimation of Single Stroke Spatial Transformation
The existing linear fitting methods cannot be embedded in deep networks to achieve gradient backpropagation.
Inspired by the Taylor series, we construct a linear estimation method that can be used in deep neural networks:
Φ_s_linear = mean(Φ_s_local)+( X-P_x) × mean(∂Φ_s_local/∂ X) + ( Y-P_y) × mean(∂Φ_s_local/∂ Y),
where Φ_s_local represents the local region of Φ_s used for linear estimation.
X and Y denote coordinate matrices.
P_x and P_y denote the coordinates of the centroid of the local region, which can be calculated from the reference stroke.
Φ_s_linear denotes the linear estimation result. Equation 1 is similar to the Least Squares Linear Fitting method while the slope is
directly estimated as the average of the gradients to simplify the calculation. Due to the strong learning ability of deep learning,
this simplification will not affect the final estimation result, but it can effectively reduce the computing workload.
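The following NumPy sketch mirrors Equation 1: it takes a dense registration field and a reference-stroke mask, estimates the region's mean offset, mean spatial gradients, and centroid, and then spreads one linear transformation over the whole plane. Finite differences stand in for the partial derivatives, and the array layout is an assumption.

```python
import numpy as np

def linear_estimate(phi_s, mask):
    """Linear estimation of a single stroke's spatial transformation (Eq. 1).

    phi_s: (H, W, 2) registration field of per-pixel offset vectors.
    mask:  (H, W) boolean mask of one reference stroke (the local region).
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Spatial gradients of the field; finite differences stand in for d/dX and d/dY.
    d_dy, d_dx = np.gradient(phi_s, axis=(0, 1))
    m = mask[..., None].astype(np.float64)
    area = m.sum()
    mean_val = (phi_s * m).sum(axis=(0, 1)) / area     # mean(Phi_s_local)
    mean_dx = (d_dx * m).sum(axis=(0, 1)) / area       # mean of the gradient along X
    mean_dy = (d_dy * m).sum(axis=(0, 1)) / area       # mean of the gradient along Y
    p_x = (xs * mask).sum() / mask.sum()               # centroid (P_x, P_y) of the region
    p_y = (ys * mask).sum() / mask.sum()
    return (mean_val
            + (xs - p_x)[..., None] * mean_dx
            + (ys - p_y)[..., None] * mean_dy)
```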
§.§.§ Loss for Training
The loss of SDNet consists of two parts: the global registration loss L_global and the single-stroke registration loss L_single_linear.
L_global includes similarity loss L_sim_global and smoothing loss L_smooth of Φ_d.
L_single_linear is the average of all single stroke similarity losses.
For the operation of linear transformation estimation, we do not need to calculate the smoothing loss of Φ_s.
Traditional image registration methods usually use Normalization Mutual Information (NCC) to measure the similarity of two images.
However, NCC is not suitable for the data with simple shape such as the stroke image.
To solve this, we build a ContentNet that is trained to auto-encode stroke images with a simple Encoder-Decoder structure.
The similarity loss of the two stroke images is defined as the Euclidean distance of the encoding results with l_2-normalization:
S_c(a,b) = Dis[norm_l_2(E(a)),norm_l_2(E(b))],
where Dis denotes the calculation of Euclidean distance. E denotes the access function of the encoding result of ContentNet.
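A hedged PyTorch sketch of this similarity measure is given below; the `encoder` is assumed to be the pre-trained ContentNet encoder returning a flattenable latent code, and the batching and mean reduction are illustrative choices.

```python
import torch
import torch.nn.functional as F

def content_similarity(a, b, encoder):
    """S_c(a, b): Euclidean distance between l2-normalized ContentNet encodings.

    a, b:    (N, 1, H, W) stroke images.
    encoder: encoder part of the (pre-trained) ContentNet, returning a latent code.
    """
    za = F.normalize(encoder(a).flatten(1), p=2, dim=1)
    zb = F.normalize(encoder(b).flatten(1), p=2, dim=1)
    return torch.linalg.norm(za - zb, dim=1).mean()
```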
Final loss in train process is defined as:
L_sum(t, r, t_s, r_s, Φ_d, Φ_s) = λ L_single_linear(t_s, r_s, Φ_s) +
L_sim_global(r, t, Φ_d) + γ L_smooth(Φ_d),
L_single_linear(t_s, r_s, Φ_s) =
1/stroke_num∑_i = 1^stroke_num S_c(r^i_s, Φ_s_linear^i ∘ t^i_s),
L_sim_global(r, t, Φ_d) = S_c(r, Φ_d ∘ t),
L_smooth(Φ_d) = mean((∂Φ_d/∂ X)^2 + (∂Φ_d/∂ Y)^2),
where Φ_s_linear^i is the linear transformation estimation result corresponding to the reference single stroke r_s^i and the target single stroke t_s^i.
Considering that the global registration result will have a greater impact on Φ_s, we apply a larger weight to Φ_d,
especially to L_smooth, and set λ and γ to be 0.5 and 5, respectively, in order to ensure a stable and better registration result of Φ_d.
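Putting Equations 3–6 together, the total training loss could be sketched as follows (the helpers warp and S_c, and the precomputed linear warps of the target strokes, are assumptions on our part):

import torch

def sdnet_loss(ref, tgt, ref_strokes, warped_tgt_strokes, phi_d, warp, S_c,
               lam=0.5, gamma=5.0):
    # L_sum = lam * L_single_linear + L_sim_global + gamma * L_smooth (Equations 3-6).
    # warp(img, field) applies a deformation field, S_c is the similarity of
    # Equation 2, and warped_tgt_strokes are the target strokes already warped by
    # their per-stroke linear estimations (all three are assumed helpers).
    l_single = torch.stack([S_c(r, t)
                            for r, t in zip(ref_strokes, warped_tgt_strokes)]).mean()
    l_sim = S_c(ref, warp(tgt, phi_d))
    # mean squared spatial gradients of the dense field Phi_d
    dy = phi_d[..., 1:, :] - phi_d[..., :-1, :]
    dx = phi_d[..., :, 1:] - phi_d[..., :, :-1]
    l_smooth = (dy ** 2).mean() + (dx ** 2).mean()
    return lam * l_single + l_sim + gamma * l_smooth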
§.§ SegNet for Separating Strokes Roughly
There are 32 basic strokes for Chinese characters. However, these basic strokes are highly similar to one another, and their numbers of occurrences are seriously unbalanced.
Therefore, we manually divide the 32 basic strokes into 7 categories based on the following three rules:
(1) The number of strokes used in common Chinese characters is as balanced as possible across categories.
(2) The similarity of stroke shapes is as high as possible within a category and as low as possible between categories.
(3) The probability of strokes crossing within a category is as low as possible.
We use the network architecture adapted from the Deeplabv3 model <cit.> as the main frame of the SegNet to segment strokes guided by prior information.
Considering the cross stroke, we construct the SegNet as a multi-label model. The loss of SegNet is the average of the binary cross-entropy of output and label.
The input of SegNet consists of two parts: Target Data and Prior Data. Prior Data is composed of reference single strokes that are linearly transformed by SDNet.
Strokes in Prior Data with different categories are marked with different values.
In the training process, in order to improve the generalization of SegNet, we apply a random position offset of at most 5 pixels to every single stroke in the Prior Data.
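The two SegNet-specific details above, the multi-label binary cross-entropy loss and the random jitter of the Prior Data, could be sketched as follows (PyTorch; the tensor layouts and the use of torch.roll are our assumptions):

import torch
import torch.nn.functional as F

def segnet_loss(logits, labels):
    # Multi-label loss: average binary cross-entropy over the 7 category channels.
    return F.binary_cross_entropy_with_logits(logits, labels)

def jitter_prior(prior_strokes, max_offset=5):
    # Training-time augmentation: shift every single stroke of the Prior Data by
    # a random offset of at most max_offset pixels (realised here with torch.roll).
    out = []
    for stroke in prior_strokes:                     # each stroke: (H, W) tensor
        dy, dx = torch.randint(-max_offset, max_offset + 1, (2,))
        out.append(torch.roll(stroke, shifts=(int(dy), int(dx)), dims=(0, 1)))
    return out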
§.§ ExtractNet for Single Stroke Extraction
§.§.§ Input Data
As shown in Figure 3, ExtractNet has five inputs to provide sufficient prior information and stroke semantic information.
Table 1 shows the details of these inputs.
For ExtractNet, Segment Data and Reference Stroke Transformation Data provide the major information required for stroke extraction.
Considering the possible segmentation errors of SegNet, we add SegNet Feature and Target Data to supplement the information not included in the Segment Data.
The Reference Stroke Transformation Data can only roughly mark the location and shape of the target stroke.
For the high-precision stroke extraction work, we add Reference Segment Transformation Data to provide relative positional relationship information
with Reference Stroke Transformation Data. In this way, through spatial transformation of the STN Block,
the transformed reference information can be further registered to the target data, which can provide more accurate prior information of stroke position and shape.
Furthermore, within a Chinese character, the size of different strokes may vary greatly, which will weaken the learning of small-sized strokes.
Therefore, we adaptively scale and crop images of input data and labels to eliminate the size difference between strokes.
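A minimal sketch of this adaptive scale-and-crop step (the bounding-box format, the margin, and the output size are our assumptions; the paper only states that the size difference between strokes is eliminated):

import torch.nn.functional as F

def crop_and_scale(img, bbox, out_size=256, margin=8):
    # Adaptively crop the region of one stroke (bounding box taken from the prior
    # information) and rescale it, so that small and large strokes reach ExtractNet
    # at a comparable size.  img: (C, H, W) float tensor; bbox: (x0, y0, x1, y1).
    x0, y0, x1, y1 = bbox
    x0, y0 = max(x0 - margin, 0), max(y0 - margin, 0)
    x1, y1 = min(x1 + margin, img.shape[-1]), min(y1 + margin, img.shape[-2])
    patch = img[:, y0:y1, x0:x1].unsqueeze(0)
    patch = F.interpolate(patch, size=(out_size, out_size),
                          mode="bilinear", align_corners=False)
    return patch.squeeze(0)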
§.§.§ Structure and Loss
The structure of ExtractNet is shown in Figure 3, which mainly includes two parts: STN Block and the simple convolution network used to extract strokes.
In the beginning, we quickly compress the input to 1/4 of the original size by two layers of convolution.
The STN Block is used to further register the reference information to the target. The output is a single-channel stroke image.
We use binary cross-entropy to calculate the loss between the output and label.
§ EXPERIMENTS
§.§ Datasets and Reference Data
To evaluate our method, we construct two stroke extraction datasets for calligraphy and handwriting,
which basically cover the main application fields of the stroke extraction.
* The Chinese Calligraphy Character Stroke Extraction Dataset (CCSEDB): CCSEDB has a total of 5000 character data, consisting of calligraphy characters
and printed calligraphy characters. We carefully select the characters of CCSEDB to maintain a balance between the number of
strokes and stroke structures. Each record of data in CCSEDB contains a target image and the single-stroke label images of the target image
arranged in reference stroke order.
* The Regular Handwriting Character Stroke Extraction Dataset (RHSEDB): We construct RHSEDB referring to <cit.> based on the online
handwriting dataset CASIA-OLHWDB <cit.>, which contains 28,080 pieces of handwriting data written by 17 writers in total.
The format of each piece of data in RHSEDB is the same as in CCSEDB, while the image of the writing track of each stroke is normalized to a
width of 6 pixels (the size of the stroke image is 256×256 pixels).
* Reference data: We construct reference data for CCSEDB and RHSEDB, respectively.
For CCSEDB, due to the large stroke area, we use character images and the corresponding single stroke images of the Kaiti font as reference data.
For RHSEDB, due to the thin stroke width, we use the skeleton images and the corresponding single stroke skeleton images of the Kaiti font as reference data,
which are also normalized to a width of 6 pixels.
§.§ Implementation Detail
In the experiments, for both CCSEDB and RHSEDB, we use 90% of the data for training and 10% for testing.
The images of the training data of SDNet, SegNet, and ExtractNet have a resolution of 256×256. The training data are binarized,
except for a few that need to be marked with three-channel values. The three models are trained progressively with a batch size of 8 and for 40, 10, and 20 epochs
respectively. Their learning rates are initialized to 0.0001 and decrease by a factor of 0.5 every 10, 2, and 5 epochs respectively.
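For reference, this schedule corresponds to something like the following sketch (the optimizer type is not stated in the paper; Adam is our assumption):

import torch

def make_optimizer(model, step_epochs):
    # Initial learning rate 1e-4, halved every step_epochs epochs
    # (10 / 2 / 5 for SDNet / SegNet / ExtractNet).  Adam is an assumed choice.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=step_epochs, gamma=0.5)
    return opt, sched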
§.§ Stroke Registration Results
In this task, we build experiments on two representative image registration methods: TpsStn <cit.> and VoxelMorph <cit.>.
TpsStn uses the idea of STN to realize the TPS registration of two images.
Due to the fewer control points, this method well maintains local stability but lacks sufficient deformability.
In order to fully verify the registration effect of TpsStn, we evaluate TpsStn with different numbers of control points of 4×4 and 8×8 respectively.
VoxelMorph is a typical image registration method that can theoretically achieve the maximum deformable transformation for predicting the offset of each pixel.
In order to quantify the accuracy of the prior information constructed from the registration results, we define two metrics.
For the stroke position, we estimate the quantitative result of position with the mean centroid pixel distance mDis of the single strokes.
For the stroke shape, which is mainly reflected in the size, we estimate the quantitative result of stroke shape with the mean IOU mBIou of the bounding boxes of the single strokes.
mDis = 1/n∑_i=0^nDis(centroid(rt_s^i),centroid(t_s^i )),
mBIou = 1/n∑_i=0^nIOU(box(rt_s^i),box(t_s^i)),
In Equation 7 and Equation 8, t_s^i denotes the single stroke of the target character. rt_s^i denotes the single transformation reference stroke in prior
information by SDNet. Dis refers to the Euclidean distance. A smaller mDis and a larger mBIou indicate the higher accurate prior information.
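Equations 7 and 8 can be sketched on binary stroke masks as follows (NumPy; the mask representation is our assumption):

import numpy as np

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def bbox(mask):
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def bbox_iou(mask_a, mask_b):
    ax0, ay0, ax1, ay1 = bbox(mask_a)
    bx0, by0, bx1, by1 = bbox(mask_b)
    iw = max(min(ax1, bx1) - max(ax0, bx0) + 1, 0)
    ih = max(min(ay1, by1) - max(ay0, by0) + 1, 0)
    inter = iw * ih
    area_a = (ax1 - ax0 + 1) * (ay1 - ay0 + 1)
    area_b = (bx1 - bx0 + 1) * (by1 - by0 + 1)
    return inter / (area_a + area_b - inter)

def registration_metrics(ref_strokes, tgt_strokes):
    # mDis and mBIou of Equations 7-8 for matched pairs of binary stroke masks.
    dists = [float(np.hypot(centroid(r)[0] - centroid(t)[0],
                            centroid(r)[1] - centroid(t)[1]))
             for r, t in zip(ref_strokes, tgt_strokes)]
    ious = [bbox_iou(r, t) for r, t in zip(ref_strokes, tgt_strokes)]
    return float(np.mean(dists)), float(np.mean(ious))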
As we can see from Table 2, SDNet performs much better than the other baseline methods in the quantitative results of prior information.
This means that our method provides more precise prior information to the target character.
As shown in Figure 4, the SDNet has the best results, which is manifested in higher structural deformability and more stable single-stroke morphology.
Especially in Figure 4 (d) and (e), this advantage is more prominent when the target structure is quite different from the reference structure.
We believe that this is mainly due to the registration branch Φ_𝑒 and linear estimation of single stroke spatial transformation.
Local linear estimation can construct different transformations for every single reference stroke even for cross stroke while providing linear constraint.
This is the reason for the structural deformability, which is greatly enhanced by the registration branch Φ_𝑒.
However, TpsStn and VoxelMorph need to balance the deformation and the smoothness because each of them has only one registration field.
§.§ Stroke Extraction Results
We find that the morphological analysis method based on deep learning has a great improvement over traditional methods for stroke extraction.
Therefore we only compare the recent best deep learning-based stroke extraction methods <cit.> named Path-MatchNet and <cit.> named PathNet.
PathNet separates strokes by predicting the probability that two pixels belong to the same stroke using a deep learning-based image semantic segmentation method.
Path-MatchNet adds stroke matching on the basis of PathNet to further improve the accuracy of stroke extraction. Referring to <cit.>,
we construct two evaluation methods that employ the same evaluation strategy but differ slightly in whether the extracted stroke needs to be matched with the reference stroke.
The evaluation considering matching is defined as:
mIOU_m = ∑_i=0^nIOU(rt_s^i,t_s^i),
The evaluation without considering matching is defined as:
mIOU_um = ∑_i=0^nIOU(rt_s^i,maxCross(rt_s^i,t_s)),
In Equation 9 and Equation 10, t_s^i denotes the single stroke. t_s denotes all of the single strokes of the target character.
rt_s^i denotes the single transformation reference stroke in prior information by SDNet.
maxCross refers to obtaining the single stroke image of target character with the largest intersection area with rt_s^i in t_s.
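The two scores can be sketched as follows (binary masks assumed; as in Equations 7–8 we average over the strokes of a character):

import numpy as np

def mask_iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def miou_matched(pred_strokes, gt_strokes):
    # mIOU_m: the i-th extracted stroke is compared with the i-th reference-ordered
    # ground-truth stroke, so matching errors are penalised.
    return float(np.mean([mask_iou(p, g) for p, g in zip(pred_strokes, gt_strokes)]))

def miou_unmatched(pred_strokes, gt_strokes):
    # mIOU_um: each extracted stroke is compared with the ground-truth stroke it
    # overlaps most (maxCross), so matching errors are ignored.
    scores = []
    for p in pred_strokes:
        best = max(gt_strokes, key=lambda g: np.logical_and(p, g).sum())
        scores.append(mask_iou(p, best))
    return float(np.mean(scores))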
As shown in Table 3, our method performs much better in mIOU_um and mIOU_m than baseline methods.
As shown in Figure 5, our stroke extraction results have higher precision and are almost 100% accurate in matching with reference strokes,
which benefit from the use of prior information and stroke semantic information. As shown in Figure 5 (a) and (d),
PathNet and PathMatchNet have lower stroke extraction precision in the cross stroke area.
That is because they only use deep learning for morphological stroke trajectory analysis, and lack analysis of the semantics.
In the part of stroke matching, simple position and morphological feature similarity calculation in PathMatchNet cannot guarantee the accuracy of matching,
especially in Chinese characters with a large number of strokes, as shown in Figure 5 (a) and (b).
§.§ Ablation Study
To evaluate the effect of prior information and stroke semantic information, we design ablation experiments for SegNet and ExtractNet.
As shown in Table 4 and Figure 6, the prior information has a significant effect on the SegNet because of the high similarity between strokes.
Compared to the SegNet, the prior information has a greater effect on the ExtractNet, as shown in Table 5.
This is because that the prior information provides major information for analyzing the position and shape of the current single stroke in the ExtractNet.
Semantic information has a smaller effect on the ExtractNet. This is because the Target Data can supplement the missing information when the Segment Data is lacking.
Using semantic information can further improve the precision of ambiguous strokes, as shown in Figure 7.
§.§ Limitations
Finally, as shown in Figure 8, we show typical errors in the results of our method.
* Scribbled strokes: Scribbled strokes often lead to densely intersected strokes and indistinguishable stroke morphology,
which usually lead to failure of stroke segmentation and single-stroke extraction, such as (a) in Figure 8.
* Excessive difference in local stroke structure: As shown in Figure 8 (b), excessive local structure differences between reference and target
usually lead to large registration errors, which makes the prior information of these local strokes unusable.
§ CONCLUSION
In this paper, we propose an efficient stroke extraction model for Chinese characters which takes the semantic features and the prior information of strokes into consideration.
In our method, SDNet establishes the registration relationship between reference strokes and target strokes to provide the prior information of stroke
position and shape to the target character. The prior information can guide SegNet to roughly segment the strokes of the target character.
With the prior information and the segmentation results, every stroke is extracted with high precision through ExtractNet.
Furthermore, to solve the registration problem of characters with complex stroke structures, we propose a new method for Chinese character image registration called SDNet.
The use of the local linear stroke spatial transformation method in SDNet ensures deformability of stroke structure while maintaining the stability
of single stroke shape in transformation.
Experiments show that our method performs better in stroke extraction than the baseline methods and can be used for a wide range of characters, including calligraphic
characters and handwritten characters. In addition, SDNet performs better in the registration of image structure.
We believe that prior information and stroke semantic information are the keys to the stroke extraction of Chinese characters.
In our future work, we will pay more attention to studying new image registration methods based on SDNet and stroke segmentation methods for irregular characters.
|
http://arxiv.org/abs/2307.04492v1 | 20230710113046 | Calculating Originality of LLM Assisted Source Code | [
"Shipra Sharma",
"Balwinder Sodhi"
] | cs.SE | [
"cs.SE"
] |
Calculating Originality of LLM Assisted Source Code
Shipra Sharma
[email protected]
Balwinder Sodhi
Department of Computer Science and Engineering
Indian Institute of Technology Ropar
India
[email protected]
==========================================================================================================================================================================
The ease of using a Large Language Model (LLM) to answer a wide variety of queries and their high availability has resulted in LLMs getting integrated into various applications. LLM-based recommenders are now routinely used by students as well as professional software programmers for code generation and testing. Though LLM-based technology has proven useful, its unethical and unattributed use by students and professionals is a growing cause of concern. As such, there is a need for tools and technologies which may assist teachers and other evaluators in identifying whether any portion of a source code is LLM generated.
In this paper, we propose a neural network-based tool that instructors can use to determine the original effort (and LLM's contribution) put in by students when writing source code. Our tool is motivated by minimum description length measures like Kolmogorov complexity. Our initial experiments with moderate-sized programs (up to 500 lines of code) have shown promising results, which we report in this paper.
LLM, ChatGPT, plagiarism in education, automation in CSE education, Minimum Description Length
§ INTRODUCTION
With the advent of Large Language Models (LLMs) such as ChatGPT, several coding tasks have become easy to complete via the use of such LLMs. Such tasks include programming assignments in courses, generating subroutines and code fragments for commonly encountered algorithmic tasks, and so on. For example, programming assignments in many Computer Science and Engineering (CSE) courses can be generated in large measure <cit.> via these models. It has become very difficult for standard plagiarism detection tools such as Turnitin <cit.> to detect that such source code is LLM generated. Even a complex assignment can be broken into simpler components, and each component can be written separately using such LLMs. Given this situation, it is highly desirable to construct a tool which can detect unauthorized or unattributed LLM help taken by students in preparing their coding assignments. Usage of such LLM-assisted coding tools is recommended, as the engineers/students may be required by employers to be conversant with the use of such tools <cit.>.
Although the LLM-based coding assistant tools seem to reply correctly to complex queries akin to an expert, they still lack the conceptual understanding of the queries as well as the results generated by the tool. The major shortcoming of these tools is lack of deep reasoning and analytical skills <cit.>. Hence, before we begin to resolve the difficulties mentioned above, we should first be able to measure (at least approximately) the amount of originality in an assignment. Motivated by the above, and by potential applications in the domain of Software Engineering, we consider the following research questions in this paper.
RQ1RQ 1 Can we quantify the amount of original contribution by a student in an assignment, assuming that he/she has used an LLM such as ChatGPT for its preparation?
RQ2RQ 2 How can we detect the similarity in the original contribution portion of two separate submissions when it is known that the students can take assistance from LLM-based tools in creating the submissions?
RQ3RQ 3 How efficiently can we automate our answers to the above questions?
In this paper, we propose two scores: the originality score o(D) and the similarity score s(D) of a source code D as solutions to the above questions.
We further propose to use these scores extensively in an adaptable teaching process as follows:
* Students with less measure of original contribution in their assignments (i.e., less originality scores) may be awarded suitably reduced scores.
* Students with large amounts of overlap in their respective contributions (i.e., high similarity scores) may not be awarded extra “originality credits”.
* More credits may be allocated to the “difficult” fragments of the program (or, assignment submission), and lesser credits may be allocated to the “easier” fragments of the program (or, assignment submission).
These steps will lead to a constructive assessment of students, which encourages the students to develop original and high-depth analytic thinking.
The above discussed scenario is one of the many applications of our work. Others are its usage in software development as these LLM-based models cannot replace software engineers (as of now), but can assist them <cit.>.
§ COMPUTING ORIGINALITY SCORE OF A PROGRAM
§.§ Setting up the problem
Suppose a programmer has unlimited access to a large language model 𝒜 (𝒜 can be ChatGPT, GPT-J, etc.). The programmer constructs a software program D using (see Figure <ref>):
* the answers A_1, A_2, …, A_z to a sequence P_1, P_2, …, P_z of z prompts to 𝒜, and
* the programmer's own original contribution 𝒪.
Program D is finally constructed by combining A_1, A_2, …, A_z and 𝒪 using conventional text editing, rearrangements, etc. To be more specific, a conventional plagiarism detection software (say, Turnitin) will detect high similarity between the string D and the corpus {A_1, A_2, …, A_z, 𝒪}.
We define the following metrics:
* total effort e(D) of the programmer as the total length of all prompts and the programmer's original contribution:
e(D) = ∑_i=1^z |P_i| + |𝒪|
* originality score o(D) (0 ≤ o(D) ≤ 1) of the program:
o(D) = |𝒪|/|D|
Our assumption is that a lower originality score would imply a lower original contribution by the programmer. Any programmer or student using LLM models to assist in writing programs implicitly minimizes e(D) and in turn also minimizes o(D). This motivates the following question.
Question 1. Given a document D and LLM 𝒜, calculate the minimum originality score o(D).
(This corresponds to <ref>).
§.§ Solving <ref>
To solve Question 1 we bound the maximum number of prompts z, which is a positive integer and the maximum length L of each prompt (P_1, P_2, …, P_z). We now formulate a bounded version of Question 1 above:
Question 1.1. Compute the minimum value of the originality score o(D), under the assumption that the programmer can give at most z prompts, each of length at most L.
Let T be a conventional plagiarism detector (a trivial one to use could be the diff command in UNIX-based systems). Figure <ref> illustrates the algorithm for solving Question 1.1.
The program D in Figure <ref> forms the input to a neural network N. The output of N is of size z · L, and corresponds to the z unknown prompts to LLM 𝒜. The output of N is given as input to LLM 𝒜 to obtain answers A_1, A_2, …, A_z. A conventional plagiarism detector T is used to find the similarity percentage t between D and the output answers (A_1, A_2, …, A_z). The original contribution 𝒪 is estimated by removing the parts of D which match with the output answers. Finally, the output (originality score) u is equal to |𝒪|/|D|. If the similarity percentage between D and (A_1, A_2, …, A_z) is t, the originality score is expected to be approximately 1 - 0.01 · t[as t is a percentage score, we convert it to a number between 0 and 1 by multiplying by 0.01]. The output originality score u is given as feedback to neural network N, with the objective of minimizing u.
Remark. Please note that giving the same prompt again to an LLM can generate somewhat different answers. To cover all possibilities, our model allows for the same prompt to be repeated more than once in the sequence P_1, P_2, …, P_z.
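To make the evaluation step of this scheme concrete, the following is a minimal sketch of the originality-score computation, with a plain text diff standing in for the plagiarism detector T (difflib and the minimum match length are our choices, not part of the proposed tool):

import difflib

def originality_score(document, answers, min_match=20):
    # Estimate o(D) = |O| / |D|: remove from D the fragments matched by the LLM
    # answers and measure what remains.  difflib and the min_match threshold
    # stand in for the plagiarism detector T; both are assumed choices.
    remaining = document
    for ans in answers:
        m = difflib.SequenceMatcher(None, remaining, ans, autojunk=False)
        for block in sorted(m.get_matching_blocks(), key=lambda b: -b.a):
            if block.size >= min_match:
                remaining = remaining[:block.a] + remaining[block.a + block.size:]
    return len(remaining) / max(len(document), 1)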
§.§ Applying the minimum description length (MDL) principle
The minimum description length (MDL) principle <cit.> is a
well-known principle for model selection. The MDL principle always selects the shortest description of the given data, from the set of all possible descriptions. The quantity Γ=(P_1, P_2, …, P_z, 𝒪) (see Section <ref>) can be viewed as the content comprising the prompts plus the original code added by the student that results in the desired program as the output from an LLM. Thus, Γ can be thought of as a description of D, which can lead to the generation of the desired code. In other words, given the description Γ and LLM 𝒜, we can reconstruct program D almost completely.
Our proposed solution (see Section <ref>) can then be viewed as an application of the MDL principle. For each possible description Γ, our algorithm selects the description with minimum “length", where the length of a description Γ is defined as its originality score |𝒪|/|D|.
§ COMPUTING SIMILARITY SCORE OF TWO PROGRAMS
§.§ Setting up the problem
Suppose two programmers Alice and Bob produce programs D_1 and D_2 respectively. Both programs solve the same computational problem, and both Alice and Bob had unlimited access to LLM 𝒜 during the coding process.
Suppose Alice constructed D_1 using prompts P_1, P_2, …, P_z and original contribution 𝒪_1. Similarly, suppose Bob constructed D_2 using prompts Q_1, Q_2, …, Q_z and original contribution 𝒪_2. Let p be the similarity percentage between the two descriptions, Γ_1=(P_1, P_2, …, P_z, 𝒪_1) and Γ_2=(Q_1, Q_2, …, Q_z, 𝒪_2) using the conventional plagiarism detector T.
Then we define similarity score,
s(D_1, D_2) = 0.01 · p
We now state the second question considered in this paper:
Question 2. Given two source codes D_1 and D_2 and LLM 𝒜, calculate the similarity score s(D_1, D_2). (This corresponds to <ref>.)
§.§ Solving <ref>
In analogy with our approach for originality score, we consider a bounded version of Question 2:
Question 2.1. Given two source codes D_1 and D_2, compute the maximum value of similarity score s(D_1, D_2), under the assumption that both Alice and Bob can give at most z prompts, each of length at most L.
Figure <ref> illustrates the algorithm for solving Question 2.1:
Source codes D_1 and D_2 are the inputs to two neural networks N_1 and N_2. The output of each neural network is of size z · L. The output of N_1 corresponds to the z unknown prompts of Alice and the output of N_2 corresponds to the z unknown prompts of Bob. Next, the outputs of N_1 and N_2 are given as input to LLM 𝒜 to generate answers A_1, A_2, …, A_z and B_1, B_2, …, B_z respectively.
Using algorithm T, we compute the original contribution 𝒪_1 of Alice for prompts P_1, P_2, …, P_z and the original contribution 𝒪_2 of Bob for prompts Q_1, Q_2, …, Q_z. Finally, the similarity s between (P_1, P_2, …, P_z, 𝒪_1) and
(Q_1, Q_2, …, Q_z, 𝒪_2) is computed using T, and this is used as feedback for both neural networks N_1 and N_2. The objective of the training process is to maximize (see Question 2.1) the output similarity s.
Remark 1. In our implementation, we input (D_1, D_2) to a single neural network N, with output (P_1, P_2, …, P_z, Q_1, Q_2, …, Q_z). The intuition is that a single neural network may lead to faster convergence due to information flow along cross connections between input neurons of D_1 and D_2.
Remark 2. In terms of MDL principle, the above network tries to compute the shortest description ((P_1, P_2, …, P_z, 𝒪_1), (Q_1, Q_2, …, Q_z, 𝒪_2)) of (D_1, D_2), where the “length" of the description is defined as the similarity score of T on inputs (P_1, P_2, …, P_z, 𝒪_1) and (Q_1, Q_2, …, Q_z, 𝒪_2).
§ PREVIOUS WORK
Kolmogorov complexity and related measures. When the algorithm 𝒜 is a universal Turing machine (instead of a LLM), the minimum length description of program P is called its Kolmogorov complexity <cit.>. In <cit.>, the authors propose that neural network models such as GPT-3 have a “simplicity bias" and prefer data with low Kolmogorov complexity. Kolmogorov complexity inspired measures have a long history of application in similarity detection and compression. In <cit.>, the authors define a similarity metric called Normalized Information Distance (NID), based on Kolmogorov complexity. Since Kolmogorov complexity is non-computable, the authors further develop the notion of Normalized Compression Distance (NCD), which is an efficiently computable variant of NID using compression algorithms like gzip. More in-depth treatment of this topic is available in <cit.> and related papers.
Autoencoders. An autoencoder <cit.> is a neural network which first compresses the input using an encoder network and then tries to recover the input from the compressed code by using a decoder network <cit.>. For the use of minimum description length (MDL) principle for autoencoders, see <cit.>. In the algorithm proposed in this paper (Figure <ref>), the neural network N can be viewed as the encoder, and the LLM 𝒜 can be viewed as the decoder. Further, note that only the encoder is trained using feedback from the output.
AI-detection tools. We briefly discuss a few recent software tools for detecting whether a
text is generated by an LLM or written by a human. An AI text classifier by OpenAI, the company behind ChatGPT, is now available <cit.>. The classifier outputs the probability that a given input text is AI-generated. GPTZero <cit.> is another AI-detection tool, which also provides scores for burstiness and perplexity <cit.>. Another well-known tool is Originality.AI <cit.>.
§ PRELIMINARY EXPERIMENTS AND VISION FOR FUTURE WORK
For an initial experimental setup for the proposed ideas, we designed a prompt space 𝒫 of size 64. Each prompt in this space is defined by a tuple of three words taken from independent sets A, B, C. Each of A, B and C contains words taken from the common programming vocabulary encountered while describing programs. For our experiments we chose |A|=8, |B|=2, |C|=4. For example, if the prompt is (“insertion", “sort", “C"), it is equivalent to writing a prompt such as “write an insertion sort program in C". We generated a pool of 10 answers for each prompt using calls to ChatGPT and BLOOM. The BLOOM model was run on a Macintosh, while ChatGPT was prompted through API calls. This gave us a collection of 64 · 10 = 640 (prompt, answer) pairs. We store this set in an offline repository ℛ which we used to train a neural network N using PyTorch. For each answer the neural network was trained with the following loss function: generate two prompts independently at random from the output probability distribution and calculate their similarity with the answer.
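Such a three-word prompt space can be enumerated as in the sketch below; apart from (“insertion", “sort", “C"), the words in A, B, C and the prompt phrasing are purely hypothetical placeholders of the stated sizes:

from itertools import product

# Hypothetical placeholder vocabularies with |A| = 8, |B| = 2, |C| = 4.
A = ["insertion", "bubble", "merge", "quick", "selection", "heap", "radix", "shell"]
B = ["sort", "search"]
C = ["C", "C++", "Python", "Java"]

PROMPT_SPACE = list(product(A, B, C))        # 8 * 2 * 4 = 64 prompt tuples
assert len(PROMPT_SPACE) == 64

def to_prompt(triple):
    a, b, c = triple
    return f"write a {a} {b} program in {c}"   # assumed phrasing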
Next, we collected a test set 𝒯 of 50 programs. Each program D in 𝒯 was manually evaluated for similarity with the repository. Accordingly, an originality score o(D) was assigned to every program in 𝒯 using the formulas discussed in Section <ref>.
The neural network N takes as input a source code D∈𝒯 and the output is a probability distribution over the prompt space 𝒫. The best score provided by the neural network is the computed originality score f(D) for two prompts. We found that the mean squared error ϵ between o(D) and f(D) was 0.3 (0≤ϵ≤ 1), which is an encouraging result (<ref>).
This experiment required a considerable amount of manual effort as our goal was to prove the viability of our proposed idea. As the proposed idea shows to be implementable and valid, we propose the following research vision:
* We plan to create a prompt space that accurately maps with the internal representation of prompts for large-scale deployed LLMs such as BLOOM, ChatGPT, BARD etc.
* We plan to increase the size of repository ℛ, so that it consists of a realistic number of (prompt, answer) pairs.
* In the future, we plan to automate data cleaning, processing, and model building so that the model can be trained and updated on real-world data on a regular basis.
* We plan to increase the number of prompts in the prompt sequence to at least 20.
* Finally, we will define prompt complexity and how it constrains the originality score to always be less than 0.45. The implication is that the easier the prompt is to write to get the desired code fragment, the lower the originality score of the source code.
§ CONCLUSION
As current plagiarism detection tools use a corpus of documents obtained from various sources for comparison, we envision an originality detection tool which generates a prompt sequence and calculates the minimum originality score. The key idea we have proposed in this paper is: the tools for detecting originality of LLM generated source code need to “learn” from the LLM generated source code itself and the prompts used to generate such source code.
Rather than trying to compute the probability that a text is AI-generated or human-generated (this has its technical limitations), we feel the focus should be on computing originality score using a pool of LLMs.
Our initial results are encouraging, and our computed originality scores are in agreement with human evaluations of originality and similarity.
9
farrokhnia1
Farrokhnia, Mohammadreza, et al. A SWOT analysis of ChatGPT: Implications for educational practice and research, Innovations in Education and Teaching International (2023): 1-15.
rosenblatt2
Rosenblatt, Kalhan. ChatGPT passes MBA exam given by a Wharton professor, Retrieved Jan 25 (2023): 2023.
dwivedi3
Y.K. Dwivedi, N. Yogesh, et al., “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 71 (2023): 102642.
khalil4
Khalil, Mohammad, and Erkan Er. Will ChatGPT get you caught? Rethinking of plagiarism detection. arXiv preprint arXiv:2302.04335 (2023).
weisz5
Weisz, Justin D., et al. Better together? an evaluation of ai-supported code translation. 27th International Conference on Intelligent User Interfaces. 2022.
peng6
Peng, Sida, et al. The impact of ai on developer productivity: Evidence from github copilot. arXiv preprint arXiv:2302.06590 (2023).
anu7
Baidoo-Anu, David, and Leticia Owusu Ansah. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484 (2023).
ss8
Shipra Sharma and Balwinder Sodhi. FACT-from actual to conceptual tie-ins: a multi-level knowledge graph structured on context and semantics of software artefacts. Proceedings of the 35th Annual ACM Symposium on Applied Computing. 2020
mdl1
A. Barron, J. Rissanen and B. Yu, The minimum description length principle in coding and modeling, IEEE transactions on information theory, vol. 44, no. 6,
pp. 2743–2760, 1998, IEEE.
goldblum2023free
Micah Goldblum and Marc Finzi and Keefer Rowan and Andrew Gordon Wilson, The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning, 2023.
kolmogorovbook
Ming Li and Paul Vitányi,
An Introduction to Kolmogorov Complexity and Its Applications (2nd Ed.), ISBN: 0387948686, Springer-Verlag, Berlin, Heidelberg, 1997.
livitanyi1
Ming Li, Xin Chen, Xin Li, Bin Ma and P. M. B. Vitanyi, The similarity metric, IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3250-3264, Dec. 2004, doi: 10.1109/TIT.2004.838101.
vitanyi2
Rudi Cilibrasi and Paul M. B. Vitányi,
Clustering by compression,
CoRR:cs.CV/0312044, 2003.
vitanyi3
M. Li, J.H. Badger, X. Chen, S. Kwong, P. Kearney, and H. Zhang.
An information-based sequence distance and its application to whole mitochondrial genome phylogeny, Bioinformatics, 17:2(2001), 149–154.
cilibrasi2
R. Cilibrasi, P. Vitanyi and R. de Wolf, Algorithmic clustering of music, Proceedings of the Fourth International Conference on Web Delivering of Music, 2004. EDELMUSIC 2004., Barcelona, Spain, 2004, pp. 110-117, doi: 10.1109/WDM.2004.1358107.
deeplearningbook
Ian J. Goodfellow and Yoshua Bengio and Aaron Courville,
Deep Learning, MIT Press, Cambridge, MA, USA, 2016
openai-classifier
https://platform.openai.com/ai-text-classifier
gptzero
https://gptzero.me/
perplexity
D. M. Blei, A. Y. Ng and M. I. Jordan, Latent Dirichlet Allocation, Journal of machine Learning research, 3 Jan 2003, 993-1022.
burstiness
T. Lappas, B. Arai, M. Platakis, D. Kotsakos and D. Gunopulos, On burstiness-aware search for document sequences, InProceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining 2009 Jun 28, pp. 477-486.
originalityai
https://originality.ai/
autoencoder
C.Y. Liou, W.C. Cheng, J.W. Liou and D.R. Liou, Autoencoder for words, Neurocomputing 139:84-96, Sep 2 2014 .
hinton
G. E. Hinton and R. Zemel, Autoencoders, Minimum Description Length and Helmholtz Free Energy, Advances in Neural Information Processing Systems,
Editors: J. Cowan and G. Tesauro and J. Alspector, Vol. 6, 1993.
|
http://arxiv.org/abs/2307.06094v1 | 20230712113708 | On the Galois covers of degenerations of surfaces of minimal degree | [
"Meirav Amram",
"Cheng Gong",
"Jia-Li Mo"
] | math.AG | [
"math.AG",
"math.AT",
"math.CO",
"math.GT",
"05E15, 14J10 (Primary), 14J25 (Secondary), 14N20"
] |
1]Meirav Amram
2]Cheng Gong
2]Jia-Li Mo
[1]Shamoon College of Engineering, Ashdod, Israel
[2]Department of Mathematics, Soochow University, Shizi RD 1, Suzhou 215006, Jiangsu, China
On the Galois covers of degenerations of surfaces of minimal degree
Email address: M. Amram: [email protected]; C. Gong: [email protected]; Jia-Li Mo: [email protected];
2020 Mathematics Subject Classification. 05E15, 14J10, 14J25, 14N20.
[
====================================================================================================================================================================================================================================================
We investigate the topological structures of Galois covers of surfaces of minimal degree (i.e., degree n) in ℂℙ^n+1. We prove that for n≥ 5, the Galois covers of any surfaces of minimal degree are simply-connected surfaces of general type.
§ INTRODUCTION
The moduli space of surfaces is a topic of great interest to mathematicians; see, for example, <cit.>. The moduli space of surfaces of general type is a quasi-projective coarse moduli scheme <cit.>. Unlike the moduli of curves, it is not irreducible. Catanese <cit.> and Manetti <cit.> characterized the structures and the number of components of some moduli spaces. Not much more was thereafter known about the moduli space of surfaces of general type. Then, in <cit.>, Teicher defined some new invariants of surfaces that are stable on connected components of the moduli space. These new invariants come from the polycyclic structure of the fundamental group of the complement of the branch curve S of the generic projection from a surface X to ℂℙ^2.
The fundamental group π_1(ℂℙ^2-S) of the complement of S does not change when the complex structure of X changes continuously. In fact, all surfaces in the same component of the moduli space have the same homotopy type and therefore have the same group π_1(ℂℙ^2-S).
In <cit.> and <cit.>, Moishezon-Teicher showed that if X_Gal is the Galois cover of X, then its fundamental group π_1(X_Gal) can be obtained as a quotient group of π_1(ℂℙ^2-S). As a consequence, π_1(X_Gal) does not change when the complex structure of X changes continuously. Based on this idea, they constructed a series of simply-connected algebraic surfaces of general type with positive and zero indices, disproving the Bogomolov Conjecture, which states that an algebraic surface of general type with a positive index has an infinite fundamental group. These examples have important value in the geography of algebraic surfaces.
To compute the group π_1(X_Gal), we construct some degenerations and work with special singularities that are defined and explained below.
In <cit.> and <cit.> Zappa first studied degenerations of scrolls to unions of planes. Then, in <cit.> and <cit.>, Calabri, Ciliberto, Flamini, and Miranda considered the flat degenerations of surfaces whose general fiber is a smooth projective algebraic surface and whose central fiber is a union of planes in ℂℙ^r, r≥3 with the following singularities:
* in codimension 1, double curves that are smooth and irreducible along with two surfaces meeting transversally;
* multiple points that are locally analytically isomorphic to the vertex of a cone over the stick curve, with an arithmetic genus of either 0 or 1, which is projectively normal in the projective space it spans.
These multiple points will be called Zappatic singularities and the central fiber (a union of planes) will be called a planar Zappatic surface. A degeneration is called a planar Zappatic degeneration, that is, a smooth surface that flatly degenerates to a planar Zappatic surface, as described in Section <ref> and Figure <ref>.
The topological structure of a Zappatic degeneration is complicated. In <cit.>, <cit.>, and <cit.>, the authors discuss the effect of a Zappatic degeneration on its numerical invariants, such as the Euler-Poincaré characteristic, sectional genus, geometric genus, and Chern numbers.
We are interested in the topological structure of Galois covers of planar Zappatic degenerations, which have not been as extensively known until now. We have discussed these degenerations in <cit.>. In this paper, we continue to study the planar Zappatic degenerations and find the group π_1(X_Gal).
We focus on special planar Zappatic degenerations — the degenerations to the cone over the stick curve C_R_k (i.e., unions of lines with only nodes as singularities).
Now, let T_k be any connected tree with k≥3 vertices. This corresponds to a non-degenerate stick curve of degree k in ℂℙ^k, which we denote by C_T_k. Moreover, when the tree T_k consists of a chain R_k of length k. The curve C_R_k is the union of k lines l_1, l_2, … , l_k spanning ℂℙ^k, such that l_i ∩ l_j = ∅ iff |i - j|>1, as described in Figure <ref>.
It is well known that the smallest possible degree of an irreducible, non-degenerate surface X⊂ℂℙ^n+1 is n. Such surface is said to be a surface of minimal degree.
Because any surface X of minimal degree in ℂℙ^n+1 can be flatly degenerated to the cone over the stick curve C_T_n (<cit.>), we get our main theorem:
The Galois covers of surfaces of minimal degree in ℂℙ^n+1 are simply-connected surfaces of general type for n≥ 5.
The paper is organized as follows: In Section <ref>, we explain the methods we use and the terminology related to the paper. In Section <ref>, we deal with the special case: degenerations to the cone over the stick curve C_R_k (see Theorem <ref>) and prove that the related Galois covers are of general type.
In Section <ref>, we relate the general case on degeneration to the cone over the stick curve C_T_k to the special case in Section <ref>, and prove Theorem <ref>.
Acknowledgements:
We thank Dr. Yi Gu for useful discussions about the degeneration of surfaces. This research was supported by the NSFC and ISF-NSFC joint research program (Grant No. 2452/17). It was also partly supported by the Natural Science Foundation of Jiangsu Province (BK 20181427).
We thank two anonymous referees for great comments and suggestions.
§ METHOD AND TERMINOLOGY
In this section, we describe the methods, the fundamental background, and some terminology that we use in this paper. We will use these methods and terminology in Section <ref>. The reader can refer to <cit.> and <cit.> for more details.
We consider planar Zappatic surfaces and are interested in the Galois cover of each such surface. The fundamental group of the Galois cover is a significant invariant of the surface, as explained in the introduction, and we are going to calculate it.
To do this, we first need to understand the following setting:
Let X be a projective algebraic surface embedded in projective space ℂℙ^n, for some n. Consider a generic projection f:ℂℙ^n→ℂℙ^2. The restriction of f|_X is branched along a curve S⊂ℂℙ^2. The branch curve S can tell a lot about X, but it is difficult to describe it explicitly. To tackle this problem we consider degenerations of X, defined as follows.
Let Δ be the unit disk, and let X, X' be algebraic surfaces. Assume that f and f' are projective projections, where f: X→ℂℙ^2, f': X'→ℂℙ^2. We say that f is a projective degeneration of f' if there exists a flat family π: 𝔛→Δ (where 𝔛⊆Δ×ℂℙ^n, n≥3, is a closed subscheme of relative dimension two), and a morphism F:𝔛→Δ×ℂℙ^2, such that F composed with the first projection is π, and:
* π^-1(0)≃ X.
* There exists 0≠ p_0∈Δ such that π^-1(p_0)≃ X'.
* The family 𝔛 - π^-1(0)→Δ - {0} is smooth.
* Restricting to π^-1(0), F≃{0}× f under the identification of π^-1(0) with X.
* Restricting to π^-1(p_0), F≃{p_0}× f' under the identification of π^-1(p_0) with X'.
We construct a degeneration of X into X_0 as a sequence of partial degenerations X := X_r ⇝ X_r-1 ⇝ ⋯ ⇝ X_r-i ⇝
X_r-(i+1) ⇝ ⋯ ⇝ X_0. The degeneration X_0 is a union of planes, and each plane is projectively equivalent to ℂℙ^2 (see <cit.> for details).
Consider generic projections π^(i) : X_i→ℂℙ^2 with the branch curves S_i, for 0 ≤ i ≤ r. Note that S_i-1 is a degeneration of S_i. Because X_0 is a union of planes, its projection S_0 is a line arrangement.
One of the principal tools we use is a reverse process of degeneration, and it is called regeneration. Using this tool, which was described in <cit.> as regeneration rules, we can recover S_i from S_i-1. Applying it multiple times, we can recover the original branch curve S from the line arrangement S_0. In the following diagram, we illustrate this process.
X ⊆ ℂℙ^n   --- degeneration --->   X_0 ⊆ ℂℙ^n
    |                                   |
    | generic projection                | generic projection
    ↓                                   ↓
S ⊂ ℂℙ^2   <--- regeneration ---   S_0 ⊂ ℂℙ^2
A line in S_0 regenerates to a conic.
The resulting components of the partial regeneration are tangent to each other.
To get a transversal intersection of components, we regenerate further, and this gives us three cusps for each tangency point (see <cit.> for more details). Therefore, the regenerated branch curve S is a cuspidal curve with nodes and branch points. Local braids of such singularities are as follows:
* for a branch point, Z_j j' is a counterclockwise
half-twist of j and j' along a path below the real
axis,
* for nodes, Z^2_i, j j'=Z_i j^2 · Z_i j'^2 and Z^2_i i', j j'=Z_i' j'^2 · Z_i j'^2 · Z_i' j^2 · Z_i j^2,
* for cusps, Z^3_i, j j'=Z^3_i j· (Z^3_i j)^Z_j j'· (Z^3_i j)^Z^-1_j j'.
By the braid monodromy technique of Moishezon-Teicher, we derive the braids related to S as conjugations of the above local forms (i.e., a^b = b^-1ab). The reader can learn more about this technique in <cit.>; in the paper we give the final braids that are computed by this technique, as the computations themselves are too long and tiring.
Note that in several places we use the notation a̅ where a is a braid, which means the same braid as a but above the real axis.
Denote G:=π_1(ℂℙ^2-S) and its standard generators as Γ_1, Γ_1', …, Γ_2m, Γ_2m'. By the van Kampen Theorem <cit.> we can get a presentation of G by means of the generators {Γ_j, Γ_j'} and relations of the types:
* for a branch point, Z_j j' corresponds to the relation Γ_j = Γ_j',
* for a node, Z_i j^2 corresponds to [Γ_i,Γ_j]=Γ_iΓ_jΓ_i^-1Γ_j^-1=e,
* for a cusp, Z_i j^3 corresponds to ⟨Γ_i,Γ_j⟩=Γ_iΓ_jΓ_iΓ_j^-1Γ_i^-1Γ_j^-1=e.
To get all the relations, we write the braids in a product and collect all the relations that correspond to the different factors. To each list of relations we add the projective relation ∏_j=m^1 Γ_j'Γ_j=e. See <cit.> for a full treatment of the subject.
This method also enables us to compute the fundamental group of the Galois cover X_Gal of X.
We consider the fibered product arising from a general projection f: X →ℂℙ^2 of degree n as
X×_f⋯×_fX={(x_1, … , x_n)∈ X^n| f(x_1)=⋯=f(x_n)}.
Let the extended diagonal be
={(x_1, … , x_n)∈ X^n| x_i=x_j, for some i≠ j}.
The closure
X×_f⋯×_fX- is called the Galois cover w.r.t. the symmetric group S_n and denoted by X_Gal.
Then, there is an exact sequence
0 →π_1(X_Gal) → G_1 → S_n → 0,
where G_1:=G/⟨Γ_j^2 , Γ'_j^2 ⟩ and the map G_1 → S_n is a surjection of G_1 onto the symmetric group S_n. This epimorphism takes the generators of G_1 to transpositions in the symmetric group S_n according to the order of the edges in the degeneration. We thus obtain a presentation of the fundamental group π_1(X_Gal) of the Galois cover, as the kernel of this epimorphism. Then we simplify the relations to produce a canonical presentation that identifies with π_1(X_Gal), using the theory of Coxeter covers of the symmetric groups.
We use a proposition from <cit.>, as follows:
If
G_1/{∏_j=1^k Γ_j'Γ_j}≅ S_n,
then X_Gal is simply-connected.
§ DEGENERATION TO THE CONE OVER C_R_K
In this section, we pay attention to a special planar Zappatic degeneration — the degeneration to the cone over the stick curve C_R_k in ℂℙ^k+1.
It is clear that every plane arrangement can be represented by a triangulation
as long as no three planes meet in a line and no plane meets more than three other planes. In Figure <ref>, we depict a schematic representation of X_0, which is a cone over the stick curve C_R_k in ℂℙ^k+1. Each triangle corresponds to a plane ℂℙ^2 and each intersection of two triangles corresponds to a common edge between the two planes. The existence of such degeneration can be found in Corollary <ref>.
We give now the following definition of an outer (k-1)-point, then we will explain how it relates to Figure <ref>.
We call a (k-1)-point that is the intersection of k planes P_1,…,P_k, where P_i intersects P_j in a line iff |i-j|=1, an outer (k-1)-point. In particular, a 1-point always comes from the intersection of 2 planes.
Point O in Figure <ref> is an outer (k-1)-point. We also have k-1 vertices that are 1-points. The branch curve S_0 is an arrangement of k-1 lines (the dashed lines) that are the images of the k-1 edges through the generic projection of X_0 onto ℂℙ^2.
In Subsections <ref> and <ref>, we give the braids that are related to an outer 5-point and an outer n-point respectively. Then we can find the group π_1(X_Gal) and conclude the following theorem (see Theorem <ref>):
Let 𝔛_k→Δ be a planar Zappatic degeneration, whose central fiber X_k is the cone over the stick curve C_R_k in ℂℙ^k+1 (for k≥ 5). Then the Galois cover of X_k is a simply-connected surface of general type.
In the following subsections, we follow the notations and formulations from <cit.>. Before we continue to that part of the computations, we give some notations for simplicity and convenience, as follows:
we denote Γ_j by j and Γ'_j by j' in the group G; we use B_k to denote the braid monodromy of an outer k-point; we write F_k instead of (B_k-1)^Z_(k-1) (k-1)', k^2, where Z_(k-1) (k-1)', k^2 is a full-twist of k around
k-1 and (k-1)'; and we denote the following formula as M_k:
M_k :=Z_(k-1) (k-1)', k^3 · (Z_1 1', k^2)^Z_2 2', k^-2⋯ Z_(k-2) (k-2)', k^-2·(Z_2 2', k^2)^Z_3 3', k^-2⋯ Z_(k-2) (k-2)', k^-2
··· Z_(k-2) (k-2)', k^2·Z̅_1 1', k'^2 ·Z̅_2 2', k'^2
···Z̅_(k-2) (k-2)', k'^2 · (Z_k k')^Z_(k-1) (k-1)', k^2,
(k=1, 2 ,3 ,⋯).
§.§ The cone over C_R_6
In <cit.> we have already considered the case of k=5. In order to help the reader better understand our proof, in this subsection we consider the case of k=6, see Figure <ref>.
Let 𝔛_6→Δ be a planar Zappatic degeneration whose central fiber X_6 is the cone over the stick curve C_R_6 in ℂℙ^7. Then the Galois cover X_6,Gal of X_6 is a simply-connected surface.
The branch curve S_0 in ℂℙ^2 is an arrangement of five lines, see Figure <ref>. We regenerate each vertex in turn and compute the group G_1.
First, each of the vertices i is an outer 1-point (for i=1,…,5) that regenerates to a conic; this gives rise to the braids Z_j j' for j=1,…,5. We have the following relations in G and also in G_1:
1=1', 2=2', 3=3', 4=4', 5=5'.
We will use the relations in (<ref>) as a prerequisite when we simplify relations (<ref>)–(<ref>) in G.
Vertex O is an outer 5-point, and the related braids appear in B_5, as follows:
B_5=M_5 · F_5
=M_5 · (M_4)^Z_4 4', 5^2· (B_3)^Z_3 3', 4^2 Z_4 4', 5^2 ,
where
M_5= Z_4 4', 5^3 · (Z_1 1', 5^2)^Z_2 2', 5^-2 Z_3 3', 5^-2·
(Z_2 2', 5^2)^Z_3 3', 5^-2· Z_3 3', 5^2 ·Z̅_1 1', 5'^2 ·Z̅_2 2', 5'^2 ·Z̅_3 3', 5'^2 · (Z_5 5')^Z_4 4', 5^2,
and
F_5 = (B_4)^Z_4 4', 5^2=(M_4)^Z_4 4', 5^2· (F_4)^Z_4 4', 5^2
= ((Z_3 3', 4^3)·(Z_1 1', 4^2)^Z_2 2', 4^-2·(Z_2 2', 4^2)·(Z̅_1 1', 4'^2)·(Z̅_2 2', 4'^2)·(Z_4 4')^Z_3 3', 4^2)^Z_4 4', 5^2· (F_4)^Z_4 4', 5^2
= (Z_3 3', 4^3)^Z_4 4', 5^2· (Z_1 1', 4^2)^Z_2 2', 4^-2 Z_4 4', 5^2· (Z_2 2', 4^2)^Z_4 4', 5^2· (Z̅_1 1', 4'^2)^Z_4 4', 5^2
·(Z̅_2 2', 4'^2)^Z_4 4', 5^2· (Z_4 4')^Z_3 3', 4^2 Z_4 4', 5^2· Z_1', 2 2'^3 ·(Z_1 1')^Z_1', 2 2'^2
· (Z_2 2', 3^3)^Z_1', 2 2'^2 Z_3 3', 4^2 Z_4 4', 5^2· (Z_3 3')^Z_2 2', 3^2 Z_1', 2 2'^2 Z_3 3', 4^2 Z_4 4', 5^2· (Z_1 1', 3 3'^2)^Z_3 3' , 4^2 Z_4 4', 5^2.
The braid (Z_5 5')^Z_4 4', 5^2 is depicted in the following picture:
The braids of B_5 give rise to three parts of relations in G. We will write down the
first part of the relations, which are the relations of braids of M_5:
⟨ 4,5⟩=⟨ 4',5⟩=⟨ 4^-14'4,5⟩=e,
[3'32'212^-12'^-13^-13'^-1,5]=[3'32'21'2^-12'^-13^-13'^-1,5]=e,
[3'323^-13'^-1,5]=[3'32'3^-13'^-1,5]=e,
[3,5]=[3',5]=e,
[4'43'32'212^-12'^-13^-13'^-14^-14'^-1,5^-15'5]=
[4'43'32'21'2^-12'^-13^-13'^-14^-14'^-1,5^-15'5]=e,
[4'43'323^-13'^-14^-14'^-1,5^-15'5]=
[4'43'32'3^-13'^-14^-14'^-1,5^-15'5]=e,
[4'434^-14'^-1,5^-15'5]=[4'43'4^-14'^-1,5^-15'5]=e,
5'=54'454^-14'^-15^-1.
We simplify relations (<ref>)–(<ref>), then get the following relations in G_1:
⟨ 4, 5⟩=[1,5]=[2,5]=[3,5]=e.
Now we give the relations in G of braids from (M_4)^Z_4 4', 5:
⟨3,545^-1⟩=⟨3',545^-1⟩=⟨3^-13'3,545^-1⟩=e,
[2'212^-12'^-1,545^-1]=[2'21'2^-12'^-1,545^-1]=e,
[2,545^-1]=[2',545^-1]=e,
[3'32'212^-12'^-13^-13'^-1,54^-14'45^-1]=[3'32'21'2^-12'^-13^-13'^-1,54^-14'45^-1]=e,
[3'323^-13'^-1,54^-14'45^-1]=[3'32'3^-13'^-1,54^-14'45^-1]=e,
3^-13'^-154^-14'45^-13'3=545^-1.
We simplify (<ref>)–(<ref>), using the relations of M_5. We obtain the following relations in G_1:
⟨ 3, 4⟩=[1,4]=[2,4]=e.
We write down the relations in G that are associated with the braids in (B_3)^Z_3 3', 4^2 Z_4 4', 5^2; the elements of G will appear with conjugations, according to the conjugation on B_3, as follows:
3→ 434^-1, 3'→ 43'4^-1, 3^-1→ 43^-14^-1, 3'^-1→ 43'^-14^-1;
4→ 545^-1, 4'→ 54'5^-1, 4^-1→ 54^-15^-1, 4'^-1→ 54'^-15^-1.
We get in G the relations:
⟨ 1',2⟩=⟨ 1',2'⟩=⟨ 1',2^-12'2⟩=e,
1=2'21'2^-12'^-1,
⟨ 2'21'21'^-12^-12'^-1,545^-1354^-15^-1⟩=e,
⟨ 2'21'2'1'^-12^-12'^-1,545^-1354^-15^-1⟩=e,
⟨ 2'21'2^-12'21'^-12^-12'^-1,545^-1354^-15^-1⟩=e,
3=54^-15^-12'21'2^-12'^-11'^-12^-12'^-1545^-13^-13'3
54^-15^-12'21'2'21'^-12^-12'^-1545^-1,
[1,545^-1354^-15^-1]=[1',545^-1354^-15^-1]=e,
[1,545^-13'54^-15^-1]=[1',545^-13'54^-15^-1]=e.
We simplify (<ref>)–(<ref>), using the ones from M_5 and (M_4)^Z_4 4', 5, and get the following relations in G_1:
⟨ 1, 2⟩=⟨ 2, 3⟩=[1,3]=e.
Moreover, the projective relation
5'54'43'32'21'1=e,
is trivial in G_1.
We summarize the relations in G_1, as follows:
(1) triple relations
⟨ 1,2⟩=⟨ 2,3⟩=⟨ 3, 4⟩=⟨ 4, 5⟩=e.
(2) commutative relations
[1,3]=[1,4]=[2,4]=[1,5]=[2,5]=[3,5]=e.
It is easy to see that {1,2,3,4,5} are the generators of G_1.
These relations are the same as the relations in S_6, hence G_1≅ S_6. It follows that π_1(X_6,Gal) is trivial, and the Galois cover of X_6 is a simply-connected surface.
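This identification can also be checked by machine: in G_1 every generator satisfies Γ_j^2=e, so the triple and commutation relations above become the Coxeter relations of S_6, and a coset enumeration confirms that the presented group has order 720 = |S_6|. A small sketch using SymPy's finitely presented groups (the generator names are ours):

from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, g1, g2, g3, g4, g5 = free_group("g1, g2, g3, g4, g5")
gens = [g1, g2, g3, g4, g5]
rels = [g**2 for g in gens]                         # Gamma_j^2 = e, which holds in G_1
rels += [a*b*a*b**-1*a**-1*b**-1                    # triple relations <i, i+1> = e
         for a, b in zip(gens, gens[1:])]
rels += [a*b*a**-1*b**-1                            # commutative relations [i, j] = e, |i-j| > 1
         for i, a in enumerate(gens) for b in gens[i+2:]]
G1 = FpGroup(F, rels)
print(G1.order())                                   # 720 = |S_6|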
In the next subsection, we will prove the general theorem (Theorem <ref>) by using the same method. The reader can discern and follow the inductive steps in the transition from Subsection <ref> to Subsection <ref>.
§.§ The cone over C_R_n+1
In this subsection, we study the fundamental group of the Galois cover of a Zappatic degeneration whose central fiber is the cone over the stick curve C_R_k in ℂℙ^k+1.
In order to further clarify the expressions in the proof, we set k=n+1, see Figure <ref>.
We then have the following general theorem:
Let 𝔛_n+1→Δ be a planar Zappatic degeneration whose central fiber X_n+1 is the cone over the stick curve C_R_n+1 in ℂℙ^n+2. Then the Galois cover of X_n+1 is a simply-connected surface.
First, each of the vertices i is an outer 1-point, for i=1,…, n, that regenerates to a conic, giving rise to the braids Z_j j' for j=1,…,n. We have the following relations in G and also in G_1:
1=1', 2=2',…, n=n'.
We will use the relations in (<ref>) as a prerequisite when we simplify the following (n-2) parts of the
relations and the projective relation in G.
Vertex O is an outer n-point, and the related braids are:
B_n= M_n · F_n
= M_n · (M_n-1)^Z_(n-1) (n-1)', n^2· (M_n-2)^Z_(n-2) (n-2)', (n-1)^2 Z_(n-1) (n-1)', n^2
⋯ (M_5)^Z_5 5', 6^2 Z_6 6', 7^2 ⋯ Z_(n-1) (n-1)', n^2· (M_4)^Z_4 4', 5^2 Z_5 5', 6^2 ⋯ Z_(n-1) (n-1)', n^2
· (B_3)^Z_3 3', 4^2 Z_4 4', 5^2 ⋯ Z_(n-1) (n-1)', n^2.
In G, the braids of B_n relate to (n-2) parts of relations.
The relations from braids in M_n will be listed as the first part:
⟨ n-1,n⟩=⟨ (n-1)',n⟩=⟨ (n-1)^-1(n-1)'(n-1),n⟩=e,
[(n-2)'(n-2)⋯2'212^-12'^-1⋯(n-2)^-1(n-2)'^-1,n]=e,
[(n-2)'(n-2)⋯2'21'2^-12'^-1⋯(n-2)^-1(n-2)'^-1,n]=e,
[(n-2)'(n-2)⋯3'323^-13'^-1⋯(n-2)^-1(n-2)'^-1,n]=e,
[(n-2)'(n-2)⋯3'32'3^-13'^-1⋯(n-2)^-1(n-2)'^-1,n]=e,
⋯⋯⋯⋯
[(n-2)'(n-2)(n-3)(n-2)^-1(n-2)'^-1,n]=e,
[(n-2)'(n-2)(n-3)'(n-2)^-1(n-2)'^-1,n]=e,
[(n-2),n]=[(n-2)',n]=e,
[(n-1)'(n-1)⋯2'212^-12'^-1⋯(n-1)^-1(n-1)'^-1,n^-1n'n]=e,
[(n-1)'(n-1)⋯2'21'2^-12'^-1⋯(n-1)^-1(n-1)'^-1,n^-1n'n]=e,
[(n-1)'(n-1)⋯3'323^-13'^-1⋯(n-1)^-1(n-1)'^-1,n^-1n'n]=e,
[(n-1)'(n-1)⋯3'32'3^-13'^-1⋯(n-1)^-1(n-1)'^-1,n^-1n'n]=e,
⋯⋯⋯⋯
[(n-1)'(n-1)(n-2)(n-1)^-1(n-1)'^-1,n^-1n'n]=e,
[(n-1)'(n-1)(n-2)'(n-1)^-1(n-1)'^-1,n^-1n'n]=e,
n'=n(n-1)'(n-1)n(n-1)^-1(n-1)'^-1n^-1.
We simplify (<ref>)–(<ref>), then get in G_1 the following relations:
(1) triple relation
⟨ n-1,n⟩=e,
(2) commutative relations
[1,n]=[2,n]=⋯=[n-2,n]=e.
The relations that are associated with the braids of (M_n-1)^Z_(n-1) (n-1)', n^2 as the second part of the relations in G, are as follows:
⟨ n-2,n(n-1)n^-1⟩=⟨ (n-2)',n(n-1)n^-1⟩=e,
⟨ (n-2)^-1(n-2)'(n-2),n(n-1)n^-1⟩=e,
[(n-3)'(n-3)⋯2'212^-12'^-1⋯(n-3)^-1(n-3)'^-1,n(n-1)n^-1]=e,
[(n-3)'(n-3)⋯2'21'2^-12'^-1⋯(n-3)^-1(n-3)'^-1,n(n-1)n^-1]=e,
[(n-3)'(n-3)⋯3'323^-13'^-1⋯(n-3)^-1(n-3)'^-1,n(n-1)n^-1]=e,
[(n-3)'(n-3)⋯3'32'3^-13'^-1⋯(n-3)^-1(n-3)'^-1,n(n-1)n^-1]=e,
⋯⋯⋯⋯
[(n-3)'(n-3)(n-2)(n-3)^-1(n-3)'^-1,n(n-1)n^-1]=e,
[(n-3)'(n-3)(n-2)'(n-3)^-1(n-3)'^-1,n(n-1)n^-1]=e,
[(n-3),n(n-1)n^-1]=[(n-3)',n(n-1)n^-1]=e,
[(n-2)'(n-2)⋯2'212^-12'^-1⋯(n-2)^-1(n-2)'^-1,n(n-1)^-1(n-1)'(n-1)n^-1]=e,
[(n-2)'(n-2)⋯2'21'2^-12'^-1⋯(n-2)^-1(n-2)'^-1,n(n-1)^-1(n-1)'(n-1)n^-1]=e,
[(n-2)'(n-2)⋯3'323^-13'^-1⋯(n-2)^-1(n-2)'^-1,n(n-1)^-1(n-1)'(n-1)n^-1]=e,
[(n-2)'(n-2)⋯3'32'3^-13'^-1⋯(n-2)^-1(n-2)'^-1,n(n-1)^-1(n-1)'(n-1)n^-1]=e,
⋯⋯⋯⋯
[(n-2)'(n-2)(n-3)(n-2)^-1(n-2)'^-1,n(n-1)^-1(n-1)'(n-1)n^-1]=e,
[(n-2)'(n-2)(n-3)'(n-2)^-1(n-2)'^-1,n(n-1)^-1(n-1)'(n-1)n^-1]=e,
(n-2)^-1(n-2)'^-1n(n-1)^-1(n-1)'(n-1)n^-1(n-2)'(n-2)=n(n-1)n^-1.
Using the relations that are associated with M_n to simplify (<ref>)–(<ref>), we obtain the following relations in the group G_1:
(1)triple relation
⟨ n-2,n-1⟩=e,
(2) commutative relations
[1,n-1]=[2,n-1]=[n-3,n-1]=e.
Similarly, we can get the third part of the relations that relate to (M_n-2)^Z_(n-2) (n-2)', (n-1)^2 Z_(n-1) (n-1)', n^2, then use the relations of M_n and (M_n-1)^Z_(n-1) (n-1)', n^2 to simplify them. We get the following relations in the group G_1:
(1)triple relation
⟨ n-3,n-2⟩=e,
(2) commutative relations
[1,n-2]=[2,n-2]=⋯=[n-4,n-2]=e.
Continuing this process, we can also get the 4th, 5th, …, (n-3)th parts of the relations and simplify them, then get the following relations in G_1:
(1)triple relations
⟨ n-4,n-3⟩=⟨ n-5,n-4⟩=⋯⋯⋯=⟨ 3,4⟩=e,
(2) commutative relations
[1,n-3]=[2,n-3]=⋯=[n-5,n-3]=e,
[1,n-4]=[2,n-4]=⋯=[n-6,n-4]=e,
⋯⋯⋯
[1,4]=[2,4]=e.
Finally, we write down the (n-2)th part of the relations in G, coming from the braids in (B_3)^Z_3 3', 4^2 Z_4 4', 5^2 ⋯ Z_(n-1) (n-1)', n^2; this time they will appear with conjugated elements (i=3, …, (n-1)), as follows:
i→ (i+1) i(i+1)^-1, i'→ (i+1) i'(i+1)^-1, i^-1→ (i+1) i^-1(i+1)^-1, i'^-1→ (i+1) i'^-1(i+1)^-1.
We get the relations in G as follows:
⟨ 1',2⟩=⟨ 1',2'⟩=⟨ 1',2^-12'2⟩=e,
1=2'21'2^-12'^-1,
⟨2'21'21'^-12^-12'^-1,n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-1 4 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1⟩=e,
⟨2'21'2'1'^-12^-12'^-1,n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-1 4 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1⟩=e,
⟨2'21'2^-12'21'^-12^-12'^-1,n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-1 4 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1⟩=e,
3=n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)n^-14^-1n(n-1)^-1n^-1⋯
n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-12'21'2^-12'^-11'^-12^-12'^-1
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)n^-14n(n-1)^-1n^-1⋯
n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-13^-13'3 n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯
n(n-1)n^-14^-1n(n-1)^-1n^-1⋯ n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1
2'21'2'21'^-12^-12'^-1n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯
n(n-1)n^-14n(n-1)^-1n^-1⋯ n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1,
[1,n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-1 4 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1]=e,
[1',n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-1 4 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1]=e,
[1,n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-1 4 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13'
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1]=e,
[1',n(n-1)n^-1(n-2)n(n-1)^-1n^-1(n-3)n(n-1)n^-1(n-2)^-1
⋯ n(n-1)^-1n^-14 n(n-1)n^-1⋯ n(n-1)n^-1 (n-2)^-1n(n-1)^-1n^-13'
n(n-1)n^-1(n-2)n(n-1)^-1n^-1⋯ n(n-1)^-1n^-1 4^-1 n(n-1)n^-1⋯
(n-2)n(n-1)^-1n^-1(n-3)^-1n(n-1)n^-1(n-2)^-1n(n-1)^-1n^-1]=e.
We can use all the previous relations to simplify (<ref>)–(<ref>), and get the following relations in G_1:
(1) triple relations
⟨1,2⟩=⟨2,3⟩=e,
(2) commutative relation
[1,3]=e.
We also have the projective relation:
n'n(n-1)'(n-1)⋯3'32'21'1=e,
which is trivial in G_1.
In conclusion, we consider now all the above simplified relations in the group G_1:
(1) triple relations
⟨ 1,2⟩=⟨ 2,3⟩=⋯=⟨ n-1,n⟩=e,
(2) commutative relations
[1,3]=[1,4]=⋯=[1,n]=e,
[2,4]=[2,5]=⋯=[2,n]=e,
⋯⋯⋯
[n-3,n-1]=[n-3,n]=e,
[n-2,n]=e.
It is easy to see that {1,2,…,n} are the generators of G_1.
These relations are the same as the relations in S_n+1. Hence G_1≅ S_n+1. It is obvious that π_1(X_n+1,Gal) is trivial, and the Galois cover X_n+1,Gal of X_n+1 is simply-connected.
§.§ General type
When considering an algebraic surface X as a topological 4-manifold, it has the Chern
numbers c_1^2(X), c_2(X) as topological invariants.
In this subsection, we will prove that the Galois covers of the surfaces in Subsection <ref> are surfaces of general type by using c_1^2(X).
As a first step, we compute the Chern numbers c_1^2(X). The formula
was treated in <cit.> (the proof there is given by F. Catanese).
Let S be the branch curve of an algebraic surface X. Denote the
degree of the generic projection by d, deg S= m.
Then,
c_1^2(X_Gal)=d!/4(m-6)^2.
Note that in Subsection <ref>, d = n+1 and m = 2n. Then by Proposition <ref>, we obtain
* c_1^2(X_5,Gal)=5!·1;
* c_1^2(X_6,Gal)=6!·2^2;
* ⋯⋯⋯
* c_1^2(X_n,Gal)=n!·(n-4)^2;
* c_1^2(X_n+1,Gal)=(n+1)!·(n-3)^2.
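As a quick arithmetic check of the last item, using only d = n+1 and m = 2n in Proposition <ref>:
c_1^2(X_n+1,Gal) = (n+1)!/4 · (2n-6)^2 = (n+1)!/4 · 4(n-3)^2 = (n+1)!·(n-3)^2,
and setting n = 4 reproduces the first item, c_1^2(X_5,Gal)=5!·1.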
It is obvious that c_1^2(X_n,Gal)>0 for n ≥ 5. This means that the Galois covers are surfaces of general type, as explained in <cit.> or <cit.>.
§ PROOF OF THEOREM <REF>
In this section we prove Theorem <ref>. First, we recall the following result of Pinkham:
(<cit.>) Let X ⊂ℂℙ^n be a smooth, irreducible, and projectively
Cohen-Macaulay surface. Then X degenerates to the cone over a hyperplane section of X.
Let C be the hyperplane section of X. Suppose that C
can be degenerated to a stick curve C_0. In this case, X
can be degenerated to the cone over the stick curve C_0. Therefore:
(<cit.>) Any surface X of minimal degree (i.e., of degree n) in ℂℙ^n+1
can be degenerated to the cone over the stick curve C_T_n, for any tree T_n with n vertices.
Every nondegenerate irreducible surface of degree n (n ≥ 5) in ℂℙ^n+1 is a rational normal scroll. Any hyperplane section of such a surface is a rational normal curve. For a general point p_i on each component of C_T_n, the line bundle 𝒪_C_T_n (p_1 +…+p_n) is very ample. C_T_n has arithmetic
genus 0 and is a flat limit of rational normal curves in ℂℙ^n. C_R_n
is a flat limit of rational normal curves (including C_T_n) in ℂℙ^n.
According to Corollary <ref>, any surface X of minimal degree in ℂℙ^n+1 can be degenerated to the cone over the stick curve C_R_n.
The fundamental group of the Galois cover π_1(X_Gal) does not change when the complex structure of X changes continuously. We proved above that any surface X of minimal degree in ℂℙ^n+1
can be degenerated to the cone over the stick curve C_R_n, so we can use Theorem <ref> to get Theorem <ref>.
degree6 Amram, M., Gong, C., Sinichkin, U., Tan, S.-L., Xu, W.-Y., Yoshpe, M., Fundamental groups of Galois covers of degree 6 surfaces, Journal of Topology and Analysis, 2021, https://doi.org/10.1142/S1793525321500412.
AGTX1 Amram, M., Gong, C., Tan, S.-L., Teicher, M., Xu, W.-Y., The fundamental groups of Galois covers of planar Zappatic deformations of type E_k, Int. J. Algebra. Comput., 29, 2019, 905–925.
AGTX Amram, M., Gong, C., Teicher, M., Xu, W.-Y., Fundamental group of Galois covers of degree 5 surfaces, Turkish Journal of Mathematics, published online on 6.4.21.
A-R-T Amram, M., Lehman, R., Shwartz, R., Teicher, M., Classification of fundamental groups of Galois covers of surfaces of small degree degenerating to nice plane arrangements, in Topology of algebraic varieties and singularities, 538, Contemp. Math., 63–92, Amer. Math. Soc., Providence, RI, 2011.
B Beauville, A., Complex Algebraic Surfaces (2nd ed.), Cambridge University Press, 1996.
C-C-F-M-2 Calabri, A., Ciliberto, C., Flamini, F., Miranda, R., Degenerations of scrolls to unions of planes, Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl., 17(2), 2006, 95–123.
C-C-F-M-3 Calabri, A., Ciliberto, C., Flamini, F., Miranda, R., On the genus of reducible surfaces and degenerations of surfaces, Ann. Inst. Fourier (Grenoble), 57(2), 2007, 491–516.
C-C-F-M-4 Calabri, A., Ciliberto, C., Flamini, F., Miranda, R., On the K^2 of degenerations of surfaces and the multiple point formula, Ann. of Math., 165(2), 2007, 335–395.
C-C-F-M-5 Calabri, A., Ciliberto, C., Flamini, F., Miranda, R., On degenerations of surfaces, arXiv:math/0310009v2, 2008.
C1 Catanese, F., On the moduli spaces of surfaces of general type, J. Diff. Geom., 19, 1984, 483–515.
C2 Catanese, F., (Some) old and new results on algebraic surfaces, First European Congress of Mathematics, Birkhauser Basel, 1994, 445–490.
Gie Gieseker, D., Global moduli for surfaces of general type, Invent. Math., 41, 1977, 233–282.
Hu1 Gritsenko, V.A., Hulek, K., Sankaran, G.K., The Kodaira dimension of the moduli of K3 surfaces, Invent. Math., 169, 2007, 519–567.
Hu2 Hulek, K., Sankaran, G.K., The Kodaira dimension of certain moduli spaces of abelian surfaces, Compos. Math., 90(1), 1994, 1–35.
Ma Manetti, M., On the Chern numbers of surfaces of general type, Compos. Math., 92, 1994, 285–297.
MoTe87a Moishezon, B., Teicher, M., Galois covers in theory of algebraic surfaces, Proceedings of Symposia in Pure Math., 46, 1987, 47–65.
MoTe87 Moishezon, B., Teicher, M., Simply-connected algebraic surfaces of positive index, Invent. Math., 89, 1987, 601–643.
BGT2 Moishezon, B., Teicher, M., Braid group technique in complex geometry II, From arrangements of lines and conics to cuspidal curves, Algebraic Geometry, Lect. Notes in Math., 1479, 1991, 131–180.
19 Moishezon, B., Teicher, M., Braid group technique in complex geometry IV: Braid monodromy of the branch curve S_3 of V_3 →ℂℙ^2 and application to π_1(ℂℙ^2 - S_3, ∗), Contemporary Math., 162, 1994, 332–358.
Pi Pinkham, H.C., Deformation of cones with negative grading, J. Algebra, 30, 1974, 92–102.
Tei2 Teicher, M., New invariants for surfaces, Tel Aviv Topology Conference: Rothenberg Festschrift 1998, 271–281, Contemp. Math., 231, Amer. Math. Soc., Providence, RI, 1999.
vk van Kampen, E.R., On the fundamental group of an algebraic curve, Amer. J. Math., 55, 1933, 255–260.
Zappa Zappa, G., Su alcuni contributi alla conoscenza della struttura topologica delle superficie algebriche, dati dal metodo dello spezzamento in sistemi di piani, Acta Pont. Accad. Sci., 7, 1943, 4–8.
zg2 Zappa, G., Applicazione della teoria delle matrici di Veblen e di Poincaré allo studio delle superficie spezzate in sistemi di piani, Acta Pont. Accad. Sci., 7, 1943, 21–25.
BHPV Barth, W., Hulek, K., Peters, C., Van de Ven, A., Compact complex surfaces, Springer, 2004.
|
http://arxiv.org/abs/2307.04861v2 | 20230710191726 | Bragg-Primakoff Axion Photoconversion in Crystal Detectors | [
"James B. Dent",
"Bhaskar Dutta",
"Adrian Thompson"
] | hep-ph | [
"hep-ph"
] |
Department of Physics, Sam Houston State University, Huntsville, TX 77341, USA
Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA
Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA
Axions and axion-like pseudoscalar particles with dimension-5 couplings to photons exhibit coherent Primakoff scattering with ordered crystals at keV energy scales, making for a natural detection technique in searches for solar axions. We find that there are large suppressive corrections, potentially greater than a factor of 𝒪(10^3), to the coherent enhancement when taking into account absorption of the final state photon. This effect has already been accounted for in light-shining-through-wall experiments through the language of Darwin classical diffraction, but is missing from the literature in the context of solar axion searches that use a matrix element approach. We extend the treatment of the event rate with a heuristic description of absorption effects to bridge the gap between these two languages. Furthermore, we explore the Borrmann effect of anomalous absorption in lifting some of the event rate suppression by increasing the coherence length of the conversion. We study this phenomenon in Ge, NaI, and CsI crystal experiments and its impact on the projected sensitivities of SuperCDMS, LEGEND, and SABRE to the solar axion parameter space. Lastly, we comment on the reach of multi-tonne scale crystal detectors and strategies to maximize the discovery potential of experimental efforts in this vein.
MI-HET-804
Bragg-Primakoff Axion Photoconversion in Crystal Detectors
Adrian Thompson
§ INTRODUCTION
Axions and axion-like particles (ALPs) - potentially long-lived pseudoscalars with weak couplings to the Standard Model (SM) that may have masses from the sub-eV to the GeV - are central features in the landscape of solutions to the strong CP problem <cit.>, dark matter problem <cit.>, and in the spontaneous breaking of generic global symmetries <cit.>. In addition to being dark matter candidates, axion-like particles in the keV to sub-eV mass range produced in the sun are well motivated <cit.>. Searches were carried out by several experimental collaborations by looking for a →γ Primakoff conversion in solid crystal detectors, including DAMA <cit.> (NaI), CUORE <cit.> (TeO_2), Edelweiss-II <cit.>, SOLAX <cit.>, COSME <cit.>, CDMS <cit.>, and Majorana <cit.> (Ge). Other upcoming experiments like SuperCDMS <cit.>, LEGEND <cit.>, and SABRE <cit.> are projected to greatly expand coverage over the axion parameter space and test QCD axion solutions to the strong CP problem in the eV mass range. These experiments aim to take advantage of coherence in the conversion rate when axions satisfy the Bragg condition, enhancing the detection sensitivity by orders of magnitude relative to incoherent scattering.
Searching for solar axions via their coherent conversion in perfect crystals was first treated by Buchmüller & Hoogeveen <cit.> using the Darwin theory of classical X-ray diffraction under the Bragg condition <cit.>. The authors also alluded to potential enhancements in the signal yield when one considers the symmetrical Laue-case of diffraction for the incoming ALP waves. Yamaji et al. <cit.> treated this case thoroughly for the 220 plane of cubic crystals, also using the classical theory, and included the effect of anomalous absorption, also known as the Borrmann effect. It was shown by these authors that an enhancement to the signal yield was possible, replacing the Bragg penetration depth (L_bragg∼ 1 μm) with the Borrmann-enhanced attenuation length (ranging from 10 μm all the way to centimeter scales).
The effect of anomalous absorption of X-rays was first shown by Borrmann <cit.>, and theoretically explained by Zachariasen <cit.> and other later authors (Battermann <cit.>, Hirsch <cit.>). A quantum mechanical treatment was offered by Biagini <cit.> in which the Borrmann effect was explained by the interference of statistical ensembles of the so-called |α⟩ and |β⟩ Bloch waves. There have been numerous modern studies that utilize the Borrmann effect, notably as in photon-photon dissipation on Bragg-spaced arrays of superconduncting qubits <cit.>, and in measuring quadrupole transitions in X-ray absorption spectra <cit.>.
Now, the calculation of the event rates expected for the Primakoff conversion of solar axions coherently with a perfect crystal was treated in a more traditional, particle physics-based approach in refs. <cit.> and it was applied to derive many of the constraints set by crystal-based solar axion experiments including DAMA, CUORE, Edelweiss-II, SOLAX, COSME, CDMS, and Majorana Demonstrator <cit.>. However, absorption effects in Bragg and Laue case diffraction were not considered in refs. <cit.>; indeed, when comparing the event rates between these references and those presented in light-shining-through-wall (LSW) experiments, which used the classical Darwin theory approach (e.g. ref. <cit.> and more recently ref. <cit.>), there is a clear inconsistency. While the event rates in the LSW literature only consider the coherent volume of the crystal up to the relevant attenuation length (λ∼ 1 μm in the Bragg diffraction case or λ≲ 100 μm in the Laue-case), the solar axion searches have considered the whole volume of the crystal to exhibit coherence. In this work, we show that such effects reduce the expected event rates potentially up to the 𝒪(10^3) level depending on the assumed crystal size (and therefore, the assumed coherent volume enhancement) and material. Although this may impact the existing sensitivities set by solar axion searches in solid crystals, measures can be taken to optimize suppression of the event rate due to absorption effects and recover some or potentially all of the coherent volume.
In <ref> we re-derive the event rate formula for solar axion Primakoff scattering under the Bragg condition, and in <ref> we discuss the anomalous enhancement to the absorption length under the Borrmann effect and numerically estimate the level of suppression in the coherent sum. In <ref> we write down the event rates for a perfect crystal exposed to the solar axion flux with and without the absorption effects and discuss the relevant phenomenology. In <ref> we project the impact on sensitivities with and without absorption effects for SuperCDMS, LEGEND-200, LEGEND-1000, SABRE, and multi-tonne benchmark detector setups and discuss possibilities to restore sensitivity from coherence in <ref>. Finally, in <ref> we conclude and discuss further work.
§ COHERENCE AND ABSORPTION
In order to show how photon absorption in coherent Bragg-Primakoff scattering affects the event rate, it is worth going through a pedagogical review of what we mean by coherent scattering and first assume that no absorption takes place. For the reader who is familiar with coherence in neutrino scattering, please refer to the approach illustrated by Bednyakov and Naumov <cit.> in which coherent neutrino-nucleus scattering is calculated by taking a sum over N scattering centers in a nucleus.
Let f(k⃗,k⃗^') be the Primakoff scattering matrix element for a single atomic target, for an incoming ALP 3-momentum k⃗ and outgoing γ 3-momentum k⃗^'. Written in terms of the atomic form factor F_A,
f = ℳ_free F_A (q)
where ℳ_free is the single-atomic scattering amplitude, q is the momentum transfer, with the angle of scattering defined by k⃗·k⃗^' = E_γ k cos2θ, averaged over spins and taken in the limit k ≫ m_a, m_N ≫ k,E_γ <cit.>,
⟨|ℳ_free|^2⟩ = (8 e^2 g_aγ^2/q^4) E_γ^2 m_N^2 k^2 sin^2 2θ
for a nuclear mass m_N. The real atomic scattering form factor can be taken from ref. <cit.> which is defined such that F_A(0) = Z;
F_A(q) = Z r_0^2 q^2/(1 + r_0^2 q^2)
for atomic number Z and screening constant parameterization r_0=184.15 e^-1/2 Z^-1/3 / m_e, where m_e is the electron mass.
Similarly, we sum over the N scattering centers in a crystal;
ℳ(k⃗,k⃗^') = ∑_j=1^N f_j(k⃗,k⃗^') e^i(k⃗^' - k⃗)·r⃗_j
where e^i(k⃗^' - k⃗)·r⃗_j is a phase factor that comes from assuming plane wave solutions for the in and out states. This assumption is key; for atomic scattering in vacuum, the eigenstates of the final state photon should be a spectrum of plane waves.
If we square the total matrix element, we get
|ℳ(k⃗,k⃗^')|^2 = ∑_i=1^N | f_i|^2 + ∑_j≠ i^N ∑_i=1^N f_j^† f_i e^-iq⃗·(r⃗_i - r⃗_j)
taking q⃗≡k⃗ - k⃗^'. The first (diagonal) term is the incoherent piece, while the second term is usually suppressed by the average destructive interference of the phase factors. Using the Laue diffraction condition <cit.>, q⃗·(r⃗_i - r⃗_j) = 2π n for n∈ℤ, the phase factor in the exponential goes to one and the scattering is coherent. In this limit, the diagonal term is subdominant and the final matrix element squared tends to ℳ^2 → N^2 f^2 and we have full coherence. See appendix <ref> for a derivation of the event rate in full with this approach.
Now consider interactions of the final state γ with the crystal lattice, including the absorption and scattering effects. Pragmatically, we modify the plane wave solutions of the final state photon to that of one in a dielectric medium,
k⃗^'→n̅k⃗^', n̅ = n - i κ,
where n̅ is the complex index of refraction with real part n and imaginary part κ. Making this modification, we have
e^i n̅k⃗^'· (r⃗_i - r⃗_j) → e^i n k⃗^'· (r⃗_i - r⃗_j) e^-μ/2 |k̂^'· (r⃗_j - r⃗_i)|,
The absorption coefficient μ (which can also be expressed in terms of attenuation length or mean free path λ = 1/μ) is related to the imaginary part of the index of refraction through μ≡ 2 κ |k⃗|. Conceptually, this factor encodes the effect of a reduced coherent interference amplitude between any two scattering centers, since a photon plane wave sourced at one scattering center will have been attenuated after reaching another scattering center.
We note that Eq. <ref> and Eq. <ref> are heuristic modifications, since the attenuated plane wave solution is not a true eigenstate of the interaction Hamiltonian, but rather a simple ansatz made to estimate the phenomenology of absorption. For further convenience, we use z_ij≡|k̂^̂'̂·(r⃗_i-r⃗_j)| and λ = 1/μ. We then have
|ℳ(k⃗,k⃗^')|^2 = ∑_i=1^N | f_i|^2
+ ∑_j≠ i^N ∑_i=1^N f_j^† f_j e^-iq⃗·(r⃗_i - r⃗_j)e^-z_ij/(2λ)
After using the Laue diffraction condition q⃗·(r⃗_i - r⃗_j) = 2π n and several manipulations of the sum, we find that
|ℳ(k⃗,k⃗^')|^2 ≳ f^† f∑_j≠ i^N λ L_x L_y N/V
≳ f^† f N^2 λ/L_z
Comparing the proportionality in Eq. <ref> to the usual result ∝ N^2, we see that the coherent volume is V ×λ / L_z, and the total scattering rate is suppressed by a factor λ/L_z, which is now more consistent with Darwin theory calculations <cit.>.
The inequality above is strictly a lower limit because, as we will show in <ref>, the suppression to the coherent sum by the absorptive sum, which we label as I,
I ≡∑_j≠ i^N ∑_i=1^N e^-z_ij/(2λ),
may be mitigated under certain conditions. Therefore, the suppression factor λ / L_z serves as a pessimistic guiding estimate, but in principle we should compute the sum in Eq. <ref> explicitly.
§ ANOMALOUS ABSORPTION AND THE BORRMANN EFFECT
The suppression to the event rate can be alleviated by considering the anomalous enhancement to the absorption depth or mean free path λ, which, in crystallographic diffraction, is not strictly given by the inverse of the photon cross section times the material number density, 1/(nσ).
Take for instance ref. <cit.> in which the authors have found that for the Laue-case conversion of ALPs, the attenuation length is modified as
L_att→ L_α / β≡ 2L_att,α / β(1 - exp(-L/(2L_att,α / β)) )
where L_att,α / β = L_att/(1 ∓ϵ) and ϵ is a ratio involving the imaginary parts of the scattering form factor. These modifications come from the anomalous dispersion or anomalous absorption effect, or the Borrmann effect. It is an effect that occurs for so-called “Bloch waves" α and β that form in the crystal, discussed further in refs. <cit.>.
The total scattering form factor can be decomposed into the real and imaginary parts <cit.>;
f = f^0 + Δ f^' + i Δ f^''
where f^0 is the atomic form factor, usually given as the Fourier transform of the charge density;
f^0(q) ≡∫ d^3 x⃗ρ(x⃗) e^i q⃗·x⃗
The second term in the real part of the form factor is the anomalous form factor Δ f^', and Δ f^'' is the imaginary part of the form factor associated with absorption. From Batterman <cit.>, the anomalous absorption due to the Borrmann effect modifies the absorption coefficient μ_0 = 1/λ as
1/λ = μ_eff = μ_0 [1 - F^''(hkl)/F^''(000)]
Here F^''(hkl) is the combination of structure function and imaginary form factor, F^''(hkl) = S(hkl) Δ f^''. The ratio in the second term of the expression is the Borrmann parameter, usually denoted as ϵ[In ref. <cit.>, they use κ.]. More explicitly, studies by Wagenfield have related the Borrmann parameter to the quadrupole photoelectric cross section <cit.>;
ϵ≡ D (1 - 2 sin^2θ_B σ^Q/σ_PE) |S(h,k,l)|/|S(0,0,0)|
where D is the Debye-Waller factor accounting for thermal vibrations in anomalous absorption, D = e^-B s^2 where s = sinθ / λ and B is a temperature-dependent constant. The Debye-Waller factors for cryogenic temperatures can be found in ref. <cit.> as well as fits to Δ f^'' for several pure materials of interest. Equivalently, we can express the Borrmann factor in terms of the imaginary form factor Δ f^'' and the quadrupole form factor Δ f^''_Q (which obeys the selection rules ℓ = ℓ^'± 2);
ϵ≡ D (1 - 2 sin^2θ_B Δ f^''_Q/Δ f^'') |S(h,k,l)|/|S(0,0,0)|
and Δ f^'' is more explicitly written as <cit.>
Δ f^'' = ∑_ℓ^',m^'∑_n,ℓ,m πħ^2/m_e | ∫ψ_f^*(r) ε̂_0 ·∇ e^i k·rψ_i(r) d^3 r |^2
While fits to this form factor can be found in ref. <cit.>, we can also usefully relate it to the vectorial form factor defined in ref. <cit.> and calculated using the (Python) or (C++) codes;
Δ f^''(k) = πħ^2 m_e |f_1→2(k)|^2
For more discussion and example functional forms of the Borrmann parameter, see appendix <ref>.
While a dedicated study of the Borrmann parameter would require the calculation of the photoelectric quadrupole cross section σ^Q, Borrmann parameters for germanium crystal are already reported in the literature. We use the form factors derived in ref. <cit.> to estimate the Borrmann effect for each reciprocal lattice plane, giving us an anomalous attenuation length along the direction of travel of photons inside the detector I(k⃗,G⃗). We tabulate these and the corresponding values of ϵ in Table <ref> and plot the Borrmann parameters for Ge, Si, CsI, and NaI crystals in Fig. <ref>.
The absorptive part of the coherent sum that remains after the Laue condition is met is
I(k⃗,G⃗) ≡∑_j≠ i^N ∑_i=1^N e^-|k̂^'·(r⃗_i - r⃗_j)|/(2λ), k̂^' = (k⃗-G⃗)/|k⃗-G⃗|
which, when the Bragg condition is met, is strictly a function of k⃗ and G⃗ since the mean free path λ can be related via Eq. <ref>. Taking the Ge lattice as an example, with lattice constant d = 5.657 Å, we evaluate I(k⃗,G⃗) numerically by constructing a lattice of N Ge atoms. Since computing the full sum for a real crystal of centimeter length scale would require a huge number of evaluations (∝ N^2), we take a sparse sampling of N atoms across the physical crystal volume such that the sum is computationally feasible. The sum can then be evaluated in increments of increasing N to test for convergence. We find that a lattice of around N≃ 10^4 atoms in a cubic geometry is enough to obtain a convergent error of around 5%. Some evaluations of I(k⃗,G⃗) as a function of varying mean free path λ are shown in Fig. <ref> for several choices of scattering planes G⃗ and incoming wavevectors k⃗.
One interesting phenomenon that can be seen in Fig. <ref> is that there are certain choices of k⃗^' = k⃗ - G⃗ such that k⃗^'· (r⃗_i - r⃗_j) = 0. In this special circumstance, while many of the terms in the coherent sum will tend to zero with decreasing λ, the terms where this dot product is zero will survive. What this means physically is that the plane in which r⃗_i - r⃗_j lies will avoid the decoherence from absorption as long as it remains orthogonal to k⃗^'. This relation can be made more apparent by considering the dot product under the Bragg condition;
k̂^'· (r⃗_i - r⃗_j) = (G⃗/2 k⃗·Ĝ - G⃗/k)·(r⃗_i - r⃗_j) = 0
where we take k̂ = (cosϕsinθ, sinϕsinθ,cosθ), solving this equation for θ in the hkl = 400 case gives
θ = ^-1(n_x cos (ϕ )-n_y sin (ϕ )/n_z)+π c_1
for n_x, n_y, n_z, c_1 ∈ℤ. This defines a family of lattice points that remain in the absorption sum I even in the limit λ→ 0, resulting a lower bound on I as shown for some example choices of k̂ in Fig. <ref>. This effect is similar in nature to the Laue-case diffraction enhancements where the photoconversion occurs down the scattering planes, minimizing the absorption, as studied in ref. <cit.>.
In Fig. <ref> the absorption factor I is shown for the plane G⃗(1,1,1) as a function of azimuthal and polar angles of the incoming axion momentum θ, ϕ under the Bragg condition. This fixes k = E_γ for a given (θ, ϕ), and therefore the attenuation length λ given by Eq. <ref>. We see two prominent features of mitigated absorption in the S-shaped band (tracing out a great circle on the 2-sphere), where (i) I→ 1 as these (θ,ϕ) combinations correspond to larger energies where the photon absorption cross section falls off as we move further into the S, and (ii) there is a jump discontinuity in the S-band due to an absorption edge in the photoelectric cross section for germanium at around 11 keV.
§ EVENT RATES
The event rate for Primakoff coherent scattering with a perfect crystal was worked out in <cit.>, where full-volume coherence was assumed and there is no dependence on the attenuation length[Notice the factor of (ħ c)^3 rather than ħ c as written in ref. <cit.> for dimensional consistency.]; the event rate in an energy window [E_1, E_2] is
dN/dt = π g_aγ^2 (ħ c)^3 V/v_cell^2∑_G⃗[dΦ_a/dE_a |F_j (G⃗) S_j(G⃗) |^2/|G⃗|^2 sin^2 (2θ) 𝒲]
where S_j is the crystal structure factor (see appendix), F_j is the atomic form factor for species j, and dΦ_a / dE_a is the solar axion flux from Primakoff scattering and photon coalescence in the sun <cit.>. For the solar axion flux, we take the parameterized form appearing in ref. <cit.> which expands upon the form originally given by CAST <cit.> by accounting for the axion mass; see Eq. <ref>. The event rate in Eq. <ref> encodes the effect of detector energy resolution Δ within the function 𝒲;
𝒲(E_a, E_1, E_2, Δ) = 1/2(erf(E_a - E_1/√(2)Δ) - erf(E_a - E_2/√(2)Δ) )
The sum over the reciprocal lattice vectors G⃗ effectively counts the contributions to the coherent scattering from each set of lattice planes, illustrated in Fig. <ref>. The reader may refer to appendix <ref> for a compact description of the reciprocal lattice.
At this stage the effect of absorption will simply modify the event rate, as seen in the previous section, by replacing the full coherent volume V → V × I(k⃗,G⃗) with λ = [μ_0 (1 - ϵ(G⃗))]^-1, giving
dN/dt = π g_aγ^2 (ħ c)^3 V/v_cell^2 ∑_G⃗[dΦ_a/dE_a·I(k⃗,G⃗)/|G⃗|^2
× |F_j (G⃗) S_j(G⃗) |^2 sin^2 (2θ) 𝒲]
With sin^2 (2θ) simplifying to 4(Ĝ·k̂)^2 (1 - (Ĝ·k̂)^2) <cit.> where k̂ is the unit vector pointing toward the Sun's location, we have
dN/dt = π g_aγ^2 (ħ c)^3 V/v_cell^2 ∑_G⃗ I(k⃗,G⃗) [ dΦ_a/dE_a |F_j (G⃗) S_j(G⃗) |^2
×4(Ĝ·k̂)^2 (1 - (Ĝ·k̂)^2)/|G⃗|^2 𝒲]
At this stage, we have also used the Bragg condition E_a = ħ c|G⃗|^2 / (2 k̂·G⃗). The time dependence is encoded in the solar position, which we can express through k̂ = (cosϕsinθ, sinϕsinθ, cosθ) for θ = θ(t) and ϕ = ϕ(t). For the solar angle as a function of time and geolocation, we use the NREL solar position algorithm <cit.>.
In principle, the sum over reciprocal lattice vectors G⃗ is taken to arbitrarily large combinations (h,k,l), but due to the 1/|G⃗|^2 suppression and the upper limit of the solar axion flux of around ∼ 20 keV, we can safely truncate the sum at max{h,k,l}=5.
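As an illustration of how this truncated sum can be organized, the short sketch below enumerates the reciprocal lattice vectors G⃗ = 2π/a (h,k,l) with max{|h|,|k|,|l|} = 5 and evaluates the Bragg energies E_a = ħ c|G⃗|^2 / (2 k̂·G⃗) for a given solar direction k̂, keeping only energies inside an assumed solar axion window. The 20 keV cut and the omission of the structure factor and atomic form factor weights are simplifications on our part.

import numpy as np

HBARC_EV_ANG = 1973.27   # hbar*c in eV*Angstrom
A_GE = 5.657             # Ge conventional lattice constant in Angstrom

def bragg_energies(k_hat, a=A_GE, hkl_max=5, e_max_kev=20.0):
    # Bragg-condition energies E_a (keV) for all planes with |h|,|k|,|l| <= hkl_max
    k_hat = np.asarray(k_hat, dtype=float)
    k_hat = k_hat / np.linalg.norm(k_hat)
    hits = []
    for h in range(-hkl_max, hkl_max + 1):
        for k in range(-hkl_max, hkl_max + 1):
            for l in range(-hkl_max, hkl_max + 1):
                if (h, k, l) == (0, 0, 0):
                    continue
                G = (2.0 * np.pi / a) * np.array([h, k, l], dtype=float)
                k_dot_G = k_hat @ G
                if k_dot_G <= 0.0:
                    continue  # no forward Bragg solution for this plane orientation
                E_keV = HBARC_EV_ANG * (G @ G) / (2.0 * k_dot_G) / 1.0e3
                if E_keV <= e_max_kev:
                    hits.append(((h, k, l), E_keV))
    return sorted(hits, key=lambda t: t[1])

# In the full rate of Eq. (<ref>) each entry would additionally be weighted by
# |F_j(G) S_j(G)|^2, I(k, G), the solar flux, and the detector response W.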
The corresponding event rates for various energy windows are shown in Fig. <ref> for Ge crystal, where we compare the relative enhancements with and without the Borrmann effect to the case of full-volume coherence and to the case of incoherent scattering on an amorphous lattice[Atomic Primakoff scattering is still coherent here; we only turn off the coherence at the level of the lattice for the sake of comparison with scattering on amorphous materials, in this case, amorphous germanium.]. The fluctuating features in the event rate are the result of the sum over G⃗ which contributes to the Bragg peaks. Here we have assumed a volume of 260 cm^3 (corresponding roughly to the volumetric size of a SuperCDMS germanium module), and so the relative suppression for each G⃗ lattice plane goes like V^1/3 / λ(k⃗,G⃗), giving a suppression on the order of 10^2 compared to the full-volume coherence assumption.
The time-dependence can be visualized further by viewing the event rates as a function of incident angles integrated across the whole solar axion energy window, as shown in Fig. <ref>. Depending on the time of year, different sets of Bragg peaks will be traced over during the day, inducing an annual modulation in addition to the intra-day modulation of the signal.
Since the time of day fixes the solar zenith and azimuth (θ, ϕ), we can finally show the spectrum of the Primakoff signal as a function of energy deposition and time of day; see Fig. <ref>.
§ PROJECTED SENSITIVITIES FOR SOLAR AXION SEARCHES
We forecast the event rates for SuperCDMS <cit.>, LEGEND-200, LEGEND-1000, SABRE, in addition to envisioned multi-tonne setups, with detector specifications listed in Table <ref>. For the background-free limits, we look for the Poisson 90% CL corresponding to ≃ 3 events observed for a given exposure. The projected reach over the (g_aγ - m_a) parameter space for these detector benchmarks is shown in Fig. <ref>, where we show projections including the effects of absorption and the Borrmann enhancement to the absorption length, in addition to the projected limits assuming full volume coherence (FVC), i.e. I(k⃗,G⃗)→1, indicated by the arrows and dotted lines.
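Because the Bragg-Primakoff signal of a solar axion search scales as g_aγ^4 (one factor of g_aγ^2 from production and one from detection), the background-free limit can be rescaled from any reference calculation without rerunning it. A small helper of our own illustrating this bookkeeping, using the ≃3-event criterion quoted above:

def g_limit_90cl(g_ref, n_events_ref, n_cl=3.0):
    # background-free Poisson criterion: solve N(g) = n_cl with N proportional to g^4
    return g_ref * (n_cl / n_events_ref) ** 0.25

# e.g. a reference coupling yielding 3e4 expected events over the exposure gives
# a limit a factor (3/3e4)**0.25 ~ 0.1 below the reference coupling.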
The QCD axion parameter space is shown (yellow band) for the Kim-Shifman-Vainshtein-Zakharov (KSVZ) type <cit.> and Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) type benchmark models <cit.>, where the range is defined by taking the anomaly ratios of E/N = 44/3 to E/N = 2 <cit.>, although the space of heavier masses is also possible in high-quality axion models and other scenarios <cit.>. To probe this model parameter space beyond the existing bounds from CAST and horizontal branch (HB) stars, when FVC is maintained, multi-tonne scale experiments are needed. Additionally, the stellar cooling hints that could be explained by ALPs with g_aγ≲ 10^-11 GeV^-1 (and for non-vanishing g_ae≃ 10^-13), are also shown in Fig. <ref>, indicated by the gray band (1σ) and down to vanishing g_aγ <cit.>. These hints, though mild, could be tested by the multi-tonne setups with FVC restored.
With the effects of absorption included, we project SuperCDMS, LEGEND, and SABRE to test parameter space unexplored by laboratory-based probes beyond the CAST and XENONnT constraints for m_a ≳ 1 eV, but already excluded by HB stars constraints. However, multi-tonne CsI and NaI setups would extend this to nearly cover the HB stars exclusion. Similar reach could in principle be found when considering the joint parameter space of multiple ALP couplings to photons, electrons, and nucleons <cit.>. For instance, by considering the ^57Fe solar axion flux, one could look for 14.4 keV energy signatures and their Bragg-Primakoff peaks, although the sensitivity would likely contend with astrophysics constraints as well <cit.>.
The existing bounds from DAMA <cit.>, CUORE <cit.>, Edelweiss-II <cit.>, SOLAX <cit.>, COSME <cit.>, CDMS <cit.>, and Majorana <cit.> are not shown here, but their exclusions would necessarily shift to larger coupling values to account for absorption effects in the Bragg-Primakoff rates, depending on the detector volume and material. Note that the relative reach between NaI and CsI crystals is relatively suppressed when absorption is included here, due to the behavior of the imaginary form factor for CsI giving more modest Borrmann enhancements at the lower reciprocal lattice planes; see Fig. <ref>. In order to push the sensitivity envelope beyond the current bounds by CAST and HB stars, even with multi-tonne setups, the absorption effects need to be mitigated. Some possibilities are discussed in the next section.
§ RESTORING COHERENCE
There may be ways to recover the sensitivity initially projected in the case of full-volume coherence by mitigating the loss of coherence due to absorption. These are, of course, speculative routes; some of them, left for future work, are enumerated below:
* Since the attenuation of the coherent volume is direction-dependent, as shown in Fig. <ref>, one could imagine optimizing a detector geometry such that the size and orientation relative to the incoming flux of axions is ideal, maximizing use of the Laue-type scattering and Borrmann effect to minimize the absorption. This would require precise knowledge of the crystal purity and plane orientation obtained from X-ray measurements.
* Along a similar vein, since the effects of absorption are minimized when the detector scale V^1/3 becomes comparable to the photon mean free path λ, one could instead prefer to use smaller detector volumes but with a large total mass partitioned into many individual modules. As long as each module is optically insulated from the others, the loss of coherence due to absorption will be contained within each module and the suppression to the event rate can be mitigated.
* It might be possible to apply the principles in this work to radioisotope experiments like those proposed in ref. <cit.>, where a keV-scale nuclear transition line (e.g. the 14.4 keV line of ^57Fe) could source ALPs through a coupling to nucleons. Subsequent detection by an array of crystals encasing the radioactive source searching for transition photons of known energy Primakoff-converting in the crystal would leave a missing energy signature in the detector. By looking for disappearing keV-scale transitions the signal rate would enjoy the coherent enhancement relative to the incoherent scattering considered in ref. <cit.>.
* A dedicated keV photon source that would impinge on a crystal detector could fire at a fixed angle of incidence such that the event rate enhancement from the Borrmann effect and Laue effects are optimized and full volume coherence is restored as best as possible. One might achieve this with a keV laser <cit.> or synchrotron sources in a similar fashion to LSW experiments <cit.>. By performing a similar “missing” photon search as the one discussed above, the event rate for the detection of missing energy will be proportional to g_aγ^2, rather than g_aγ^4 as in solar axion searches, greatly enhancing the sensitivity.
In the case where we assume full volume coherence (dotted lines in Fig. <ref>), ton-scale setups like LEGEND-200 and LEGEND-1000 can reach significantly smaller couplings, probing values of g_aγ beyond the existing bounds from HB Stars <cit.> and CAST <cit.> for masses m_a ≲ 10 keV, losing sensitivity for higher masses for which the axion production rates from photon coalescence and Primakoff scattering are diminished (see also Fig. <ref>). These reach more than an order of magnitude lower in the coupling than previous Bragg-Primakoff solar axion searches.
§ CONCLUSIONS
In this work, we have taken into account a more proper estimate of the effects of anomalous absorption into the event rate, i.e. via the Borrmann effect on the coherence condition of Bragg-Primakoff photoconversion of solar axions. The sensitivity of crystal technologies used in the SuperCDMS, LEGEND, and SABRE setups has been demonstrated, and we find that the inclusion of absorption effects even with Borrmann-enhanced signal rates still would require multi-tonne scale detectors to surpass the existing astrophysical constraints in sensitivity to ALPs. However, a dedicated study with a thorough and careful treatment of the absorption suppression and Borrmann effects is definitely needed to better understand its impact on experiments that utilize Bragg-Primakoff conversion. In particular, the evaluation of the imaginary form factor in other crystals (namely, PbWO_4 may be an interesting option) would help determine potential enhancements to the anomalous absorption effect in other detector materials.
Crystal detector technologies are also necessary tools to discriminate axion-like particle signals from other types of BSM and neutrino signatures, with high sensitivity to time modulation from the directional sensitivity of Bragg-Primakoff scattering. This is a powerful tool for background rejection as well, and ideally a joint analysis of multiple detectors situated at different latitudes and longitudes would benefit greatly from leveraging the time modulation of the signal. They are also complimentary to future helioscope experiments like IAXO; while the projected reach for IAXO over the axion-photon coupling parameter space is vast, the sensitivity to solar axions with masses m_a ≳ 1 eV becomes weaker to coherent Primakoff conversion in magnetic field helioscopes. Sensitivity to this region of parameter space is necessary in order to test QCD axions, especially in non-traditional models of high quality axions and the like, which have parametrically larger masses <cit.>. It was shown in ref. <cit.> that future liquid noble gas detectors for dark matter direct detection at kiloton-year scales could begin to probe couplings beyond the astrophysics constraints for axion-like particles, while in this work we find that equivalent reach is possible at ton-year exposures with crystal detector technology, if utilized to its fullest potential. The presence of complimentary searches at these mass scales is essential for a complete test of the axion solution to the strong CP problem and the broader space of ALPs.
§ ACKNOWLEDGEMENTS
We are very grateful to Imran Alkhatib, Miriam Diamond, Amirata Sattari Javid, and John Sipe for the vigorous discussions and studies on the theoretical treatment of coherent Primakoff scattering in crystals and the comparison of numerical computations. We graciously thank Tomohiro Yamaji for the insight on Laue-type diffraction, Timon Emken for the technical correspondence on the package, and Alexander Poddubny for the useful comments on Biagini's theory of anomalous absorption. The work of BD and AT is supported by the DOE Grant No. DE-SC0010813. JBD acknowledges support from the National Science Foundation under grant no. PHY-2112799. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. We also thank the Center for Theoretical Underground Physics and Related Areas (CETUP*) and SURF for facilitating portions of this research.
§ CRYSTAL STRUCTURE
For convenience of the reader we repeat the standard discussion on the description of the lattice vector space for the crystals we have considered, much of which can be found in <cit.> and other canonical literature. The α⃗_j describe the positions of each atom within the cell, while the basis vectors a⃗_i describe the Bravais lattice. The linear combination of the two is used to translate anywhere on the lattice by stepping in integer multiples of these basis vectors;
r⃗_i = n_1 a⃗_1 + n_2 a⃗_2 + n_3 a⃗_3 + α⃗_i
We can then introduce the reciprocal lattice, giving reciprocal lattice basis vectors b⃗_i which satisfy b⃗_i ·a⃗_j = 2πδ_ij. In general the transformations give
b⃗_1 = 2π (a⃗_2 ×a⃗_3)/|a⃗_1 · (a⃗_2 ×a⃗_3)|
b⃗_2 = 2π (a⃗_3 ×a⃗_1)/|a⃗_1 · (a⃗_2 ×a⃗_3)|
b⃗_3 = 2π (a⃗_1 ×a⃗_2)/|a⃗_1 · (a⃗_2 ×a⃗_3)|
The reciprocal lattice basis vectors are used to construct the reciprocal lattice vector G⃗ that point along the surface normals of the scattering planes. In terms of integers m_1, m_2, and m_3, each scattering plane is defined;
G⃗ = m_1 b⃗_1 + m_2 b⃗_2 + m_3 b⃗_3
Sometimes the integers h,k,l are used instead, and in some contexts one can use this basis to express G⃗ as
G⃗(hkl) = 2π/a (h, k, l)
The lattice constants, cell volumes, and basis vectors for a few examples (Ge, Si, CsI, and NaI) are listed in Table <ref>.
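A short numerical check of these definitions (our own illustration, using the fcc basis vectors listed in Table <ref> and the Ge lattice constant quoted in the text) that verifies the duality condition b⃗_i ·a⃗_j = 2πδ_ij:

import numpy as np

a = 5.657  # Ge lattice constant in Angstrom
a1 = 0.5 * a * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a * np.array([1.0, 1.0, 0.0])

vol = abs(a1 @ np.cross(a2, a3))              # primitive cell volume |a1 . (a2 x a3)|
b1 = 2.0 * np.pi * np.cross(a2, a3) / vol
b2 = 2.0 * np.pi * np.cross(a3, a1) / vol
b3 = 2.0 * np.pi * np.cross(a1, a2) / vol

B = np.vstack([b1, b2, b3])
A = np.vstack([a1, a2, a3])
assert np.allclose(B @ A.T, 2.0 * np.pi * np.eye(3))   # b_i . a_j = 2*pi*delta_ij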
§ DERIVATION OF THE EVENT RATE
Let f(k⃗,k⃗^') be the Primakoff scattering matrix element for a single atomic target, for an incoming ALP 3-momentum k⃗ and outgoing γ 3-momentum k⃗^';
f = ℳ_free F_A (q)
where ℳ_free is the single-atomic scattering amplitude with the angle of scattering defined by k⃗_a ·k⃗_γ = E_γ k cos2θ, averaged over spins and taken in the limit k ≫ m_a, m_N ≫ k,E_γ;
⟨|ℳ_free|^2⟩ = (8 e^2 g_aγ^2/q^4) E_γ^2 m_N^2 k^2 sin^2 2θ
We sum over the N scattering centers in a crystal;
ℳ(k⃗,k⃗^') = ∑_j=1^N f_j(k⃗,k⃗^') e^i(k⃗^' - k⃗)·r⃗_j
where e^i(k⃗^' - k⃗)·r⃗_j is a phase factor that comes from assuming plane wave solutions for the in and out states. The position vector r⃗_j can be expressed in terms of the Bravais lattice basis vectors and the primitive basis vectors for each unit cell of the crystal. For germanium crystal with lattice constant a, we have primitive basis vectors
α⃗_0 = (0,0,0)
α⃗_1 = a/4 (1,1,1)
while the basis vectors of the Bravais lattice are described by a⃗_1, a⃗_2, and a⃗_3;
a⃗_1 = a/2(0,1,1)
a⃗_2 = a/2 (1,0,1)
a⃗_3 = a/2 (1,1,0)
we can represent any scattering site as a linear combination of the a's and either the first or second primitive;
r⃗_i,0 = R⃗_i + α⃗_0 = n_1 a⃗_1 + n_2 a⃗_2 + n_3 a⃗_3 + α⃗_0
r⃗_i,1 = R⃗_i + α⃗_1 = n_1 a⃗_1 + n_2 a⃗_2 + n_3 a⃗_3 + α⃗_1
where the index i maps to a unique combination (n_1, n_2, n_3).
If we square this, we get
|ℳ(k⃗,k⃗^')|^2 = ∑_i=1^N | f_i|^2 + ∑_j≠ i^N ∑_i=1^N f_j^† f_i e^-iq⃗·(r⃗_i - r⃗_j)
taking q⃗≡k⃗ - k⃗^'. Rewriting in terms of a sum over N_c cells and the cell primitives, the coherent part (second term) is
|ℳ(k⃗,k⃗^')|^2 = ∑_j≠ i^N_c∑_i=1^N_c∑_μ = 0^1∑_ν = 0^1 f_j^† f_i e^-iq⃗·(R⃗_i - R⃗_j + α⃗_μ - α⃗_ν)
When the Laue condition is met, we have q⃗ = G⃗ and G⃗·R⃗_i is a 2π integer multiple;
|ℳ|^2 ≡∑_j≠ i^N_c∑_i=1^N_c∑_μ,ν = 0^1 f_j^† f_i e^-iG⃗·(α⃗_μ - α⃗_ν)
Now we can factorize the sum over primitives, and since we are considering a monoatomic crystal we can also take the f_i = f_j, simplifying things;
|ℳ|^2 = N_c^2 f^† f ∑_μ,ν = 0^1 e^-iG⃗·(α⃗_μ - α⃗_ν)
In Eq. <ref> the structure function can be substituted, which is nothing but the sum over primitives;
S(G⃗) = ∑_μ e^i G⃗·α_μ
and we have no need for a species index j on S_j(G⃗) since we only have one atomic species, but it is trivial to extend this derivation to include it - we just need to add another index to the primitive basis vectors and sum over it. With this identification and also taking f^† f = |ℳ_free|^2 F^2_A (G⃗), we have
|ℳ|^2 = N_c^2 |ℳ_free|^2 |F_A (G⃗) S(G⃗)|^2
Now let's write down the cross section.
dσ = 1/(4 E_a m_N v_a) |ℳ|^2 d^3 k^'/((2π)^3 2E_γ) d^3 p^'/((2π)^3 2E_p^') (2π)^4 δ^4 (k + p - k^' - p^')
Taking the ALP velocity v_a = 1, momentum transfer minimal such that E_p^' = m_N, and integrating out the δ^3 we get
dσ = 1/(64 π^2 E_a E_γ m_N^2) |ℳ|^2 d^3 k^'δ(E_a - E_γ)
Performing a change of variables to d^3k^'→ d^3q (since q = k - k^' and k is fixed), we would integrate this over q⃗. Since we have q⃗ = G⃗ at this stage, we should replace the integral with a sum;
∫ d^3 q →(2π)^3/V∑_G⃗
The event rate formula is constructed from a convolution of the detector response, axion flux Φ_a, and cross section;
dN/dt = ∫_E_1^E_2 dE_ee∫_0^∞ dE_a (2π)^3/V∑_G⃗ dΦ_a/dE_a 1/(64 π^2 E_a E_γ m_N^2) |ℳ|^2 δ(E_a - E_γ) ·( 1/(Δ√(2π)) e^-(E_ee - E_γ)^2/(2Δ^2))
Putting in the definition of |ℳ|^2 that we worked out and substituting the free Primakoff cross section, integrating over the energy delta function (and identifying E_a = E_γ = E for simplicity), and integrating over dE_ee we get
dN/dt = (2π)^3 e^2 g_aγ^2/(8 π^2) V/v_cell^2∑_G⃗ dΦ_a/dE k^2 sin^2 (2θ)/|G⃗|^4 |F_A(G⃗)S(G⃗)|^2 𝒲(E_1, E_2, E)
This is almost identical to the rate in ref. <cit.>, which uses a different definition of the atomic form factor up to a factor of q^2/e k^2. After some algebra, the event rate in Eq. <ref> is still different than that given in ref. <cit.> up to a factor of 4sin^2(θ). However, the event rate formula derived here is consistent with the calculation performed in refs. <cit.>. After rederiving the coherent sum using the replacements in Eqns. <ref>-<ref>, the event rate becomes
dN/dt = (2π)^3 e^2 g_aγ^2/(8 π^2) V/v_cell^2∑_G⃗ I(k⃗,G⃗) dΦ_a/dE k^2 sin^2 (2θ)/|G⃗|^4 |F_A(G⃗)S(G⃗)|^2 𝒲(E_1, E_2, E)
§ SOLAR AXION FLUX
We use the parameterization appearing in ref. <cit.> for massive axion production in the sun; the flux parameterizations are repeated here for convenience
dΦ_γ→ a/dE_a = 4.20· 10^10cm^-2s^-1keV^-1(g_aγ/10^-10GeV^-1)^2 E_a p_a^2/(e^E_a/1.1 - 0.7) (1 + 0.02 m_a)
dΦ_γγ→ a/dE_a = 1.68· 10^9cm^-2s^-1keV^-1(g_aγ/10^-10GeV^-1)^2 m_a^4 p_a (1 + 0.0006 E_a^3 + 10/(E_a^2 + 0.2)) e^-E_a
where Φ_γ→ a is the Primakoff solar flux and Φ_γγ→ a is the flux resulting from resonant photon coalescence, both in units of cm^-2s^-1keV^-1, given for axion energy and momentum E_a and p_a in keV, and for the coupling g_aγ in GeV^-1. The solar axion flux from photon coalescence and Primakoff conversion is shown in Fig. <ref> for several benchmark axion masses.
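For convenience, a direct transcription of these two parameterizations into a small Python helper (our own coding of the equations above; energies and masses in keV, g_aγ in GeV^-1, output in cm^-2 s^-1 keV^-1, valid for E_a ≥ m_a):

import numpy as np

def solar_axion_flux(E_a, m_a, g_agamma):
    # Primakoff and photon-coalescence solar ALP fluxes in cm^-2 s^-1 keV^-1
    g10 = g_agamma / 1.0e-10                              # coupling in units of 1e-10 GeV^-1
    E_a = np.asarray(E_a, dtype=float)
    p_a = np.sqrt(np.clip(E_a**2 - m_a**2, 0.0, None))    # ALP momentum in keV
    primakoff = (4.20e10 * g10**2 * E_a * p_a**2
                 / (np.exp(E_a / 1.1) - 0.7) * (1.0 + 0.02 * m_a))
    coalescence = (1.68e9 * g10**2 * m_a**4 * p_a
                   * (1.0 + 0.0006 * E_a**3 + 10.0 / (E_a**2 + 0.2))
                   * np.exp(-E_a))
    return primakoff, coalescence

# e.g. solar_axion_flux(np.linspace(1.0, 20.0, 40), m_a=1.0e-3, g_agamma=1.0e-10)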
§ UTILIZING / FOR CALCULATION OF THE ABSORPTIVE FORM FACTOR
Wagenfield's form factor for the anomalous dispersion of X-rays with incoming and outgoing momenta and polarizations k, ε̂_0, k^', ε̂_0^' is <cit.>
Δ f^'' = πħ^2/m_e( ∫ψ_f^*(r) ε̂_0 ·∇ e^i k·rψ_i(r) d^3 r ) ( ∫ψ_f(r) ε̂^'_0 ·∇ e^-i k^'·rψ_i^*(r) d^3 r )
Applying the gradient and expanding, we get some terms proportional to ε̂_0 ·k which vanish, leaving us with
Δ f^'' = πħ^2/m_e( ε̂_0 ·∫ψ_f^*(r) e^i k·r∇ψ_i(r) d^3 r ) ( ε̂^'_0 ·∫ψ_f(r) e^-i k^'·r∇ψ_i^*(r) d^3 r )
Referring to Catena et al <cit.>, we can then apply the definition of the vectorial form factor (eq B18, but with some changes made to keep the notation more consistent),
f_1→2(q) = ∫ d^3 r ψ^*_f (r) e^i q·r (i ∇/m_e) ψ_i (r).
Here the final state and initial state wave functions have quantum numbers i = n,ℓ,m and f = p^',ℓ^', m^' where p^' is the final state electron momentum, and {n,ℓ,m},{ℓ^',m^'} are the initial and final quantum numbers, respectively. Applying this definition, we have
Δ f^'' = πħ^2/m_e( ε̂_0 · (-i m_e) f_1→2(k) ) ( ε̂^'_0 · (i m_e) f^*_1→2(k^') )
=πħ^2 m_e (ε̂_0 ·f_1→2(k)) (ε̂_0^'·f^*_1→2(k^'))
If our photons are unpolarized, then we can take a sum over the helicity states, giving the completeness relation ∑_s (ε̂_0(s))_i (ε̂_0^'(s))_j = δ_ij. Taking k^' = k - q, this reduces the polarization-summed imaginary form factor to
Δ f^''(k,q) = πħ^2 m_e (f_1→2(k) ·f^*_1→2(k - q))
|
http://arxiv.org/abs/2307.05242v1 | 20230711131630 | Relativistic Real-Time Methods | [
"Marius Kadek",
"Lukas Konecny",
"Michal Repisky"
] | physics.chem-ph | [
"physics.chem-ph"
] |
Marius Kadek (inst1, inst2)
Lukas Konecny (inst1, inst3)
Michal Repisky (inst1, inst4; corresponding author, www.respectprogram.org/michalrepisky, [email protected])
inst1: Hylleraas Centre for Quantum Molecular Sciences, Department of Chemistry, UiT The Arctic University of Norway, 9037 Tromsø, Norway
inst2: Department of Physics, College of Science, Northeastern University, Boston, Massachusetts 02115, USA
inst3: Max Planck Institute for the Structure and Dynamics of Matter, Center for Free Electron Laser Science, 22761 Hamburg, Germany
inst4: Department of Physical and Theoretical Chemistry, Faculty of Natural Sciences, Comenius University, 84104 Bratislava, Slovakia
Recent advances in laser technology enable to follow electronic motion at
its natural time-scale with ultrafast pulses, leading the way towards atto- and
femtosecond spectroscopic experiments of unprecedented resolution.
Understanding of these laser-driven processes, which almost inevitably involve
non-linear light–matter interactions and non-equilibrium electron dynamics,
is challenging and requires a common effort of theory and experiment.
Real-time electronic structure methods provide the most straightforward way to
simulate experiments and to gain insights into non-equilibrium electronic processes.
In this Chapter, we summarize the fundamental theory underlying the relativistic particle–field interaction Hamiltonian as well as the equation of motion for the exact-state wave function in terms of the one- and two-electron reduced density matrix. Further, we discuss the relativistic real-time electron dynamics mean-field methods with an emphasis on Density-Functional Theory and Gaussian basis, starting from the four-component (Dirac) picture and continuing to the two-component (Pauli) picture, where we introduce various flavours of modern exact two-component (X2C) Hamiltonians for real-time electron dynamics. We also overview several numerical techniques for real-time propagation and signal processing in quantum electron dynamics. We close this Chapter by listing selected applications of real-time electron dynamics to frequency-resolved and time-resolved spectroscopies.
real-time, electron dynamics, Liouville-von Neumann equation, reduced density matrix, density functional theory, noncollinearity, relativistic theory, Dirac Hamiltonian, X2C Hamiltonian, Fourier transformation, absorption, circular dichroism, nonlinear spectroscopy, pump-probe spectroscopy, time-resolved spectroscopy
§ OBJECTIVES BOX
The principal objectives of this Chapter are:
* Introduction to the fundamental theory leading to the relativistic particle–field interaction Hamiltonian.
* Discussion of the equations-of-motion for exact-state wave function in terms of the one-electron and two-electron reduced density matrix.
* Introduction to the relativistic four-component real-time electron dynamics mean-field methods with an emphasis on Density-Functional Theory and Gaussian basis.
* Detailed overview of various exact two-component (X2C) transformations towards the relativistic two-component real-time electron dynamics.
* Overview of numerical techniques for real-time propagation and signal processing in quantum electron dynamics.
* Selected application of real-time electron dynamics to frequency-resolved and time-resolved spectroscopies.
§ INTRODUCTION
The rapid advancement of laser technology in the past decades allows us to probe matter on
spatiotemporal scales that approach the characteristic time and length scales of the electron,
opening the field of attosecond science <cit.>. This development has forced quantum chemists to shift their attention from the time-independent to the time-dependent Schrödinger or Dirac equation. Real-time electronic structure theory thus describes the explicit time-evolution of the wave function or the electron density driven by non-equilibrium condition of the Hamiltonian under external perturbation(s). All physical quantities of a molecular system are then extracted non-perturbatively from the time-varying part of the wave function or the electron density. Due to non-perturbative nature, real-time methods represent the most straightforward approach to dynamical property calculations and enable the use of external perturbations of arbitrary strength, shape and duration, capturing in general both linear and non-linear effects within a wide spectral window from a single run <cit.>. This distinguishes real-time electronic structure theory from response theory where all physical quantities are obtained in the frequency domain using perturbation expansion <cit.>.
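As a purely schematic illustration of this workflow (a toy model of our own construction, not the working equations of any particular relativistic implementation discussed below), the following Python sketch propagates a two-level density matrix after a weak delta-kick and Fourier transforms the induced dipole to obtain an absorption-like spectrum:

import numpy as np

# toy field-free Hamiltonian and dipole operator in atomic units
H0 = np.diag([0.0, 0.3])                  # model excitation energy of 0.3 a.u.
mu = np.array([[0.0, 1.0], [1.0, 0.0]])
kappa = 1.0e-3                            # delta-kick strength

# ground-state density matrix, boosted by the kick: D -> exp(i*kappa*mu) D exp(-i*kappa*mu)
D = np.diag([1.0, 0.0]).astype(complex)
mvals, mvecs = np.linalg.eigh(mu)
kick = mvecs @ np.diag(np.exp(1j * kappa * mvals)) @ mvecs.conj().T
D = kick @ D @ kick.conj().T

# unitary propagator for the time-independent Hamiltonian: U = exp(-i*H0*dt)
dt, nsteps = 0.1, 20000
evals, evecs = np.linalg.eigh(H0)
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

dipole = np.empty(nsteps)
for n in range(nsteps):
    dipole[n] = np.real(np.trace(D @ mu))   # induced dipole moment <mu>(t)
    D = U @ D @ U.conj().T                  # Liouville-von Neumann step

# damped Fourier transform of the induced dipole gives an absorption-like spectrum
t = np.arange(nsteps) * dt
signal = (dipole - dipole[0]) * np.exp(-t / 500.0)
omega = 2.0 * np.pi * np.fft.rfftfreq(nsteps, d=dt)
spectrum = omega * np.imag(np.fft.rfft(signal)) / kappa

The resulting spectrum peaks at the model excitation energy (0.3 a.u.), illustrating how frequency-resolved information is extracted non-perturbatively from a single real-time run.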
Historically, an early work on explicit time-propagation of the electronic wave function dates back to 1990 when Cederbaum and coworkers developed the nonrelativistic multiconfigurational time-dependent Hartree (MCTDH) method <cit.>. Shortly after, Micha and Runge developed a real-time time-dependent Hartree–Fock (RT-TDHF) approach that couples electronic and nuclear motions <cit.>, whereas Theilhaber <cit.>, and Yabana and Bertsch <cit.>, introduced the first-ever real-time time-dependent density functional theory (RT-TDDFT) combining the local density approximation with real-space grid methodology. In the condensed matter physics community, these pioneering works led to several implementations of the time-propagation formalism using either localized basis sets <cit.>, plane waves <cit.>, or real-space grids <cit.>. Advancements in computing power and numerical algorithms have enabled performing large-scale RT-TDDFT simulations even on periodic solids <cit.>. In the quantum chemistry community, the first nonrelativistic RT-TDDFT implementation based on popular Gaussian-type atomic orbitals was pioneered by Isborn and coworkers in 2007 <cit.> and later adopted by several other groups <cit.>. The extension of RT-TDDFT to the relativistic four-component (4c) realm was presented by Repisky and coworkers in 2015 <cit.>. As shown by these authors, significant gains in computer time were obtained by transforming the parent 4c RT-TDDFT to an exact two-component (X2C) form <cit.>, although the accuracy of reference 4c results was achieved only after inclusion of the two-electron and exchange–correlation picture-change corrections <cit.>. Beyond DFT, there has been growing interest in explicit time propagation of correlated methods such as multiconfigurational self-consistent-field <cit.>, configuration interaction <cit.>, algebraic diagrammatic construction <cit.>, density matrix renormalization group <cit.>, Møller-Plesset <cit.>, and coupled cluster <cit.> theories.
At the non-relativistic level of theory, a plethora of applications of real-time methods has been presented including
UV/Vis absorption spectroscopy <cit.>,
excited-state absorption <cit.>,
photoionization <cit.>,
X-ray absorption <cit.>,
chiroptical spectroscopies <cit.>,
non-linear optical properties <cit.>,
spin and magnetization dynamics <cit.>,
molecular conductance <cit.>,
pump-probe spectroscopy <cit.>,
photoinduced electric currents <cit.>,
plasmon resonances <cit.>,
singlet–triplet transitions <cit.>,
and magnetic circular dichroism <cit.>.
This list of applications is by no means exhaustive and a reader interested in a more
thorough exploration of the use of non-relativistic real-time methods is referred
to the recent review <cit.> and references therein.
The advent of soft X-ray free electron laser pulses with subfemtosecond temporal widths has opened new ways to investigate time-resolved dynamics involving inner-shell electrons. A prerequisite for reliable quantum-chemical modeling of these processes is the inclusion of relativistic effects, defined as differences between the exact Dirac (four-component) description of matter and an approximate Schrödinger (one-component) description. This requirement stems from the fact that the inner-shell orbitals involved in X-ray absorption/emission processes are most affected by relativity, manifestations of which are frequency shifts of spectral lines due to the scalar (SC) relativistic effects as well as spectral fine-structure splitting arising from the spin-orbit (SO) coupling <cit.>. The relativistic effects are significant even in light (third row) elements <cit.> and increase in importance for heavier elements <cit.>, highlighting the need for a relativistic description across the Periodic Table. Therefore, the most accurate way to perform real-time simulations is the use of the full four-component (4c) Dirac formalism where both the SC and SO relativistic effects are included variationally. The first 4c extension of the time-propagation formalism was presented by Repisky and coworkers at the RT-TDDFT level <cit.>, involving the program package ReSpect <cit.>. Recently, De Santis and coworkers reported a similar 4c RT-TDDFT implementation in the BERTHA code <cit.>. While advancements in computing power and numerical algorithms have enabled performing fairly large 4c real-time electron dynamics simulations <cit.>, there is still interest in developing approximate two-component (2c) methods that maintain the accuracy of the parent 4c method at a fraction of its computational cost. In this respect, the X2C Hamiltonian has gained wide popularity in the quantum chemistry community as it reduces the original 4c problem by half at the expense of only a few simple algebraic manipulations <cit.>. As shown independently by Konecny <cit.> and Goings <cit.>, the central idea of the X2C transformation can be extended to the real-time electron dynamics framework, provided the X2C decoupling matrix satisfies an adiabatic approximation <cit.>.
However, both real-time X2C implementations utilize a crude one-electron X2C (1eX2C) Hamiltonian model, which typically leads to absolute errors for core spinor energies of heavier elements of the order of tens of Hartree <cit.>. As shown by Knecht and coworkers <cit.>, the accuracy of X2C Hamiltonians depends severely on the two-electron and exchange-correlation picture-change correction models employed, and the resulting errors can vary by as much as 5–6 orders of magnitude for core-shell energies. As a remedy, the authors introduced two simple yet computationally efficient and numerically accurate X2C Hamiltonian models, dubbed amfX2C and e(xtended)amfX2C, to correct both SC and SO two-electron and exchange-correlation picture-change effects using simple atomic mean-field quantities, achieving a consistent ≈10^-5 Hartree/atom accuracy <cit.>. The theoretical extension and numerical assessment of the (e)amfX2C Hamiltonian models were recently performed for conventional and time-resolved TDDFT by Repisky and coworkers <cit.>. In addition to the previous works, this Chapter also provides an in-depth discussion on the transformation of the original 4c equation-of-motion to its 2c form, particularly focusing on the modern exact two-component (X2C) formalism.
At the relativistic level, the real-time applications are scarcer due to fewer computer programs providing such functionality. These programs include ReSpect <cit.>,
Gaussian <cit.>,
Chronus Quantum <cit.>,
PyBerthaRT <cit.>,
BDF <cit.>,
and FHI-aims <cit.>.
The molecular properties addressed at the relativistic level include
UV/Vis absorption spectroscopy <cit.>,
X-ray absorption <cit.>,
non-linear optical properties <cit.>,
chiroptical spectroscopies <cit.>,
high harmonic generation <cit.>,
and pump-probe spectroscopy <cit.>.
Before closing this Section, let us emphasize that we restrict ourselves to (i) the Born–Oppenheimer approximation, so that coupled electron–nuclear dynamics is not considered here; and (ii) the semiclassical approximation, where electronic degrees of freedom are described quantum mechanically, while electromagnetic fields are treated classically. In the next Sections, we introduce the fundamental theory behind the relativistic particle–field interaction Hamiltonian, and discuss the equation-of-motion for the exact-state wave function in terms of the one-electron and two-electron reduced density matrices. Later, we dive into the relativistic four-component real-time electron dynamics mean-field methods with an emphasis on Density-Functional Theory and Gaussian basis sets, followed by a detailed overview of various exact two-component (X2C) transformation models within the time domain. Finally, we offer a brief overview of numerical techniques for real-time propagation and signal processing, and close this Chapter by listing selected applications in relativistic quantum electron dynamics.
§ RELATIVISTIC PARTICLE–FIELD INTERACTION HAMILTONIANS
For the theoretical description of spectroscopic processes, quantum chemistry commonly employs a semiclassical theory. In this framework, the molecules are described quantum-mechanically, whereas the electromagnetic (photon) field is treated classically. This assumption is justified in the limit of large photon numbers – to be specific, when the photon density exceeds one per cubic wavelength (for a discussion, see Ref. []). If this is not the case, one may need to quantize the photon field as well and work within the framework of quantum electrodynamics <cit.>. To provide an illustrative example, let us consider a laser pulse with intensity 10^14W/cm^2 and wavelength λ=1064nm. The number of photons per cubic wavelength is then given by (ħ and c denote the reduced Planck constant and the speed of light, respectively):
energy flux/ħω × V/c = (10^14 W cm^-2 / 1.86×10^-19 J) × (1.064^3×10^-12 cm^3 / 3×10^10 cm s^-1) ≈ 2×10^10,
which is obviously much greater than one. Therefore, this semiclassical theoretical framework is appropriate for absorption and emission processes, and we rely on this framework throughout this Chapter.
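For readers who wish to reproduce this estimate, a minimal Python sketch is given below; it assumes only the physical constants shipped with scipy, and the pulse parameters are the ones quoted above.

from scipy.constants import h, c

intensity = 1.0e14 * 1.0e4          # 10^14 W/cm^2 converted to W/m^2
wavelength = 1064.0e-9              # 1064 nm in m

photon_energy = h * c / wavelength              # ~1.87e-19 J per photon
photon_flux = intensity / photon_energy         # photons per m^2 per s
photons_per_cubic_wavelength = photon_flux * wavelength**3 / c

print(f"{photons_per_cubic_wavelength:.1e}")    # ~2e10, i.e. far more than one photon per cubic wavelength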
Before we actually dive into particle–field interactions, let us first consider a N-electron system alone, i.e. in the absence of any electromagnetic (photon) field. In this case, the system is governed by the relativistic electronic Hamiltonian
Ĥ
=
∑_i^Nĥ^D_i
+
1/2∑_i≠ j^Nĝ_ij
.
Here, ĥ^D_i is the famous relativistic Dirac Hamiltonian of a single electron i, while ĝ_ij is the interaction Hamiltonian between electrons i and j. A factor one half in front of ĝ corrects for double counting of the two-electron interactions. ĥ^D describes the relativistic kinetic energy of an electron as well as its interaction energy with the electrostatic scalar potential ϕ_0(r⃗) due to the fixed atomic nuclei. It bears the 4×4 matrix form <cit.>
ĥ^D_i =
β'_im_ec^2
+
c(α⃗_i·p⃗_i)
-
eϕ_0(r⃗_i)𝕀_4
.
Here, r⃗_i and p⃗_i=-iħ∇⃗_i refer to the position and canonical momentum of the ith electron, respectively. 𝕀_4 is a 4×4 identity matrix, and -e, m_e and c are constants referring to the electron charge, electron mass and the speed of light in vacuum. When compared to the original expression of Dirac <cit.>, ĥ^D utilizes the reduced rest mass energy β'm_ec^2 with β'≡β-𝕀_4 to align the relativistic and non-relativistic energy scales. β is one of four new 4×4 matrix variables
β =
[ 𝕀_2 0_2; 0_2 -𝕀_2 ];
α⃗=
[ 0_2 σ⃗; σ⃗ 0_2 ],
introduced by Dirac to formulate relativistic quantum-mechanical equations of motion for spin-1/2 particles that are linear in space and time <cit.>. These variables fulfill the anti-commutation relations
[α_k,β]_+ = 0_4;
[α_k,α_l]_+ = 2δ_kl𝕀_4;
k,l∈ x,y,z,
and are customarily written in terms of the two-component Pauli spin matrices
σ_x =
[ 0 1; 1 0 ];
σ_y =
[ 0 -i; i 0 ];
σ_z =
[ 1 0; 0 -1 ].
For further reading on the properties and physical interpretation of the Dirac Hamiltonian, the reader is referred to several excellent quantum chemistry textbooks <cit.>.
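As a small numerical illustration of the algebra above, the following Python/NumPy sketch builds β and the three α_k matrices from the Pauli matrices and verifies the anticommutation relations quoted above; it is a standalone check of the matrix algebra, not tied to any particular quantum chemistry code.

import numpy as np

I2 = np.eye(2)
sigma = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# beta and alpha_k in the standard (Dirac) representation
beta = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
alpha = {k: np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]]) for k, s in sigma.items()}

def anticommutator(a, b):
    return a @ b + b @ a

for k in "xyz":
    assert np.allclose(anticommutator(alpha[k], beta), 0)
    for l in "xyz":
        expected = 2 * np.eye(4) if k == l else np.zeros((4, 4))
        assert np.allclose(anticommutator(alpha[k], alpha[l]), expected)
print("Dirac anticommutation relations verified")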
Now, let us subject the N-electron system to classical electromagnetic radiation characterized by the fundamental electromagnetic field vectors: the electric field E⃗≡E⃗(r⃗,t) and the magnetic field B⃗≡B⃗(r⃗,t). These vectors satisfy the microscopic Maxwell's equations, which are the basic equations of motion of electromagnetism where charged particles appear as sources. Here, we apply the perturbation theory viewpoint of quantum electrodynamics: to first order it is assumed that the particles whose motion is being studied do not affect the radiation field, which thus appears as a "driving field" <cit.>. Therefore, assuming that the sources of the radiation field are sufficiently remote from a molecule of interest, the E⃗ and B⃗ fields are source- and divergence-free, and conveniently described in terms of the scalar potential ϕ≡ϕ(r⃗,t) and the vector potential A⃗≡A⃗(r⃗,t), satisfying <cit.>:
E⃗(r⃗,t) = -∇⃗ϕ(r⃗,t) - ∂A⃗(r⃗,t)/∂ t
; ∇⃗·E⃗(r⃗,t) = 0;
B⃗(r⃗,t) = ∇⃗×A⃗(r⃗,t)
; ∇⃗·B⃗(r⃗,t) = 0.
In fact, both electromagnetic potentials enter the Dirac Hamiltonian and describe the coupling of an electron to the classical electromagnetic field as <cit.>
ĥ^D_i(t)
=
β'_im_ec^2
+
c(α⃗_i·p⃗_i)
-
eϕ_0(r⃗_i)𝕀_4
-
eϕ(r⃗_i,t)𝕀_4
+
ec(α⃗_i·A⃗(r⃗_i,t))
.
When compared to the field-free Dirac Hamiltonian in Eq. (<ref>), the scalar electrostatic potential due to the nuclei ϕ_0(r⃗) is substituted by the time-dependent potential: ϕ_0(r⃗) →ϕ_0(r⃗) + ϕ(r⃗,t), and the canonical momentum of an electron p⃗ is substituted by the mechanical momentum: p⃗→p⃗+eA⃗(r⃗,t). The latter substitution is known in the literature as the principle of minimal electromagnetic coupling <cit.>.
To gain insights into the physical interpretation of the matter–field interaction, let us consider the expectation value of the relativistic one-electron interaction Hamiltonian given by last two terms in Eq. (<ref>)
∫ψ^†(r⃗,t)
[
-
eϕ(r⃗,t)𝕀_4
+
ec(α⃗·A⃗(r⃗,t))
]
ψ(r⃗,t)
d^3r⃗
=
∫[
ρ(r⃗,t)ϕ(r⃗,t)
-
j⃗(r⃗,t)·A⃗(r⃗,t)
]
d^3r⃗
.
Assuming multiplicative potentials ϕ and A⃗, the second equation reveals that the scalar potential is coupled to the electron charge density ρ – i.e., the charge of the electron times its probability distribution
ρ(r⃗,t)
=
-eψ^†(r⃗,t)
𝕀_4ψ(r⃗,t)
,
whereas the vector potential is coupled to the electron current density j⃗ – i.e. the charge of the electron times its velocity distribution
j⃗(r⃗,t)
=
-eψ^†(r⃗,t)
cα⃗ψ(r⃗,t)
.
In order to write the interaction Hamiltonian in its explicit form, we need to know analytical expressions for both potentials ϕ and A⃗. By the use of Maxwell's equations for the source-free field, it can be shown that these potentials satisfy <cit.>
∇^2ϕ
+
∂/∂ t(∇⃗·A⃗)
= 0,
∇^2A⃗ - 1/c^2∂^2A⃗/∂ t^2
-
∇⃗( ∇⃗·A⃗ + 1/c^2∂ϕ/∂ t)
= 0.
However, there exists a certain arbitrariness in the definition of the potentials, in that it is possible to shift them by the transformation
ϕ→ϕ' = ϕ - ∂χ/∂ t
; A⃗→A⃗' = A⃗ + ∇⃗χ
,
where χ≡χ(r⃗,t) is an arbitrary scalar function of space and time coordinates called a gauge function. Since the physics, i.e. the force law and Maxwell's equations, is sensitive only to the electric field E⃗ and the magnetic field B⃗, the transformation of potentials, called a gauge transformation, does not affect it. This is known in physics as gauge invariance and may be readily verified by inserting two pairs of potentials (ϕ,A⃗) and (ϕ',A⃗') into the expression in Eq. (<ref>). In addition, gauge invariance may be exploited to simplify Eq. (<ref>), and this means also the interaction Hamiltonian.
In quantum chemistry, the gauge freedom is fixed by choosing the so-called Coulomb gauge defined by the condition <cit.>
∇⃗·A⃗(r⃗,t) = 0.
With this condition and the fact that the electric field is divergence-free in free space (<ref>), the scalar potential is a constant, i.e. ϕ(r⃗,t)=ϕ, and may be taken as zero to satisfy |ϕ|→0 at spatial infinity. In this case, the equations of motion for electromagnetic potentials (<ref>) simplify to
∇^2ϕ = ϕ = 0,
( ∇^2 - 1/c^2∂^2/∂ t^2) A⃗(r⃗,t) = 0.
The wave equation for the vector potential is identical in form to those encountered in many other problems of wave motion, and admits a real solution in the form of a monochromatic, linearly polarized electromagnetic plane wave <cit.>
A⃗(r⃗,t)
=
A⃗_0cos(k⃗·r⃗ - ω t)
,
where A⃗_0 is a constant real vector called the amplitude factor. The argument of the cosine function is called the phase of A⃗ and is given in terms of the wave vector k⃗ (characterizing the direction of wave propagation) and the angular frequency ω. Note that the phase sometimes contains a phase constant, by manipulation of which the cosine function can be converted into a sine function. By substitution of the solution Eq. (<ref>) back into the wave equation Eq. (<ref>), we find that the magnitude of k⃗ obeys |k⃗| = k = ω/c. In addition, noting that the angular frequency is ω=2πν=2π c/λ with the frequency ν and wavelength λ, we also find |k⃗| = k = 2π/λ.
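The substitution argument can also be checked symbolically. The short sympy sketch below, with the wave taken to propagate along z purely for brevity, confirms that the plane wave above satisfies the Coulomb-gauge wave equation once |k⃗| = ω/c is imposed.

import sympy as sp

x, y, z, t = sp.symbols("x y z t", real=True)
omega, c, A0 = sp.symbols("omega c A_0", positive=True)

k = omega / c                                   # dispersion relation |k| = omega/c
A = A0 * sp.cos(k * z - omega * t)              # plane wave propagating along z

laplacian = sp.diff(A, x, 2) + sp.diff(A, y, 2) + sp.diff(A, z, 2)
residual = laplacian - sp.diff(A, t, 2) / c**2  # left-hand side of the wave equation
print(sp.simplify(residual))                    # prints 0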
To summarize, the time evolution of an N-electron system subjected to a classical electromagnetic field is governed by the electronic Hamiltonian
Ĥ(t)
=
∑_i^Nĥ^D_i(t)
+
1/2∑_i≠ j^Nĝ_ij
,
where its one-electron part, given in Dirac's relativistic formalism as
ĥ^D_i(t)
=
β'_im_ec^2
+
c(α⃗_i·p⃗_i)
-
eϕ_0(r⃗_i)𝕀_4
+
ĥ^(v)(r⃗_i,t)
,
also contains the electron–field interaction Hamiltonian, which in the Coulomb gauge is characterized entirely by the vector potential
ĥ^(v)(r⃗_i,t)
=
ecα⃗_i·A⃗(r⃗_i,t)
=
ec(α⃗_i·A⃗_0)cos(k⃗·r⃗_i - ω t)
.
In the literature, ĥ^(v) is known as the one-electron interaction Hamiltonian in the velocity representation, which we shall label with the superscript (v).
Note that the spatial phase of ĥ^(v) can be simplified by considering that wavelengths of electromagnetic waves in the ultraviolet or visible range are very large compared with the spatial extent of typical molecular systems under study. To provide an illustrative example, let us consider a laser pulse with wavelength λ=1064nm applied to a molecule of size |r⃗| = r = 10Å. Hence,
k⃗·r⃗ ≤ kr = (2π/1064 nm) × (1 nm) ≈ 6×10^-3 ≪ 1,
which implies that the spatial phase of an oscillating electromagnetic wave can be approximated by a constant over the length scale of a molecule (or more precisely over the mean-value of an electronic position), i.e.
exp[i(k⃗·r⃗)]
=
1 + i(k⃗·r⃗) - 1/2(k⃗·r⃗)^2 + ...
≈ 1
.
The independence of the electromagnetic wave from the spatial coordinate is known as the dipole (or long-wavelength) approximation, which brings the velocity interaction Hamiltonian in Eq. (<ref>) into a particularly simple form, labelled here as (vd)
ĥ^(v)(r⃗_i,t)
≈ĥ^(vd)_i(t)
=
ecα⃗_i·A⃗(t)
=
ec/2(α⃗_i·A⃗_0)
[exp(-iω t) + exp(iω t)]
.
Here, we used cos(x)=[exp(ix)+exp(-ix)]/2.
However, special care has to be taken for short wavelengths used for instance in hard X-ray spectroscopy where the dipole approximation may not be adequate. In particular, this is true for heavy-element K-edge X-ray absorption spectroscopy <cit.>. By including higher-order powers of k⃗·r⃗ in the expansion, one gets multipolar contributions known as electric-quadrupole, magnetic-dipole, etc., and there exist techniques to include these contributions into quantum-chemical calculations <cit.>.
Before we close this section, let us mention that there exists a unitary transformation of the wave function which yields an altered form of the interaction Hamiltonian that may be more useful for practical calculations. Let us start from the time-dependent Schrödinger/Dirac equation with the electronic Hamiltonian Ĥ containing the velocity-dipole interaction Hamiltonian (ĥ^(vd)) given by Eq. (<ref>):
( iħ∂/∂ t - Ĥ) Ψ = 0
; Ĥ≡Ĥ(t)
=
∑_i^N[ ĥ^D_i + ĥ^(vd)_i(t) ]
+
1/2∑_i≠ j^Nĝ_ij
.
The wave function Ψ≡Ψ(t) can undergo a unitary (gauge) transformation with a freely chosen function Λ≡Λ(t)
Ψ = exp(-iΛ)Ψ'.
The new wave function Ψ'≡Ψ'(t) is as physically meaningful as the old one, provided
( iħ∂/∂ t - Ĥ' ) Ψ' = 0
; Ĥ' ≡Ĥ'(t)
=
exp(iΛ)Ĥ(t)exp(-iΛ) - ħ∂Λ/∂ t
.
Now, by selecting Λ as
Λ(t)
=
e/ħ∑_i^N𝕀_4r⃗_i·A⃗(t)
,
one replaces the velocity-dipole interaction Hamiltonian in the original electronic Hamiltonian Ĥ by a new interaction Hamiltonian ĥ^(ld) in the so-called length-dipole representation (ld) in the new electronic Hamiltonian Ĥ':
H'(t)
=
∑_i^N[ ĥ^D_i + ĥ^(ld)_i(t) ]
+
1/2∑_i≠ j^Nĝ_ij
; ĥ^(ld)_i(t)
=
e𝕀_4r⃗_i·E⃗(t)
.
Physically, ĥ^(ld) couples the classical electric field E⃗(t) defined as
E⃗(t) = - ∂/∂ tA⃗(t)
,
to the quantum-mechanical system characterized by the sum of the electric dipole moment operators of individual electrons (μ⃗_i=-er⃗_i𝕀_4). This gauge transformation was first discussed at the nonrelativistic level by Göppert-Mayer in 1931 <cit.> and therefore it is often named after her. Further reading on quantum-mechanical gauge invariance and general unitary transformations for atoms and molecules interacting with radiation can be found in Ref. [].
§ EQUATIONS-OF-MOTION FOR EXACT-STATE WAVE FUNCTION
§.§ Time-dependent Schrödinger equation
In the most general case, the time evolution of a quantum-mechanical system is governed by the time-dependent equation-of-motion
iħ∂Ψ(t)/∂ t = Ĥ(t)Ψ(t),
where Ĥ(t) is the Hamiltonian operator that is explicitly dependent on time via external electromagnetic fields. If we are interested in the response of molecules subjected to ultrafast laser pulses and similar processes occurring at atto- or femtosecond time scales, we can study electron dynamics decoupled from the nuclear motion. However, molecular vibrations and nuclear relaxation occur on a time scale of 10–100 fs, and in principle should be included in the computation. Nevertheless, performing electron dynamics simulations with a fixed nuclear configuration at this time scale is still beneficial for improving the spectral resolution and aids the analysis of electron excitations without the effect of nuclear dynamics. In such a case, Ψ(t) in Eq. (<ref>) is the many-electron wave function depending on the position r⃗_i and spin of all electrons, and the Hamiltonian Ĥ is the many-electron Hamiltonian defined in Eq. (<ref>) containing the electron kinetic operator, Coulomb interactions between electrons and nuclei with fixed positions, and interactions between the system and external electromagnetic fields discussed in more detail in Section <ref>.
Eq. (<ref>) also remains valid in the relativistic case, provided additional approximations are assumed. The Hamiltonian Ĥ needs to be treated as a multicomponent operator acting on multicomponent wave functions to reflect the fact that in relativistic theory, electron spin and orbital degrees of freedom interact with each other via the spin–orbit coupling terms. However, in a truly relativistic picture, we would need to consider multiple time variables associated with each electron's frame of reference. Such effects arising from the relative time are always neglected when studying molecular systems, and Eq. (<ref>) thus assumes the absolute time approximation, which leads to a single time variable t. For further discussion on the relativistic theory of many electrons, see Ref. <cit.>.
Response of the system to external time-dependent electromagnetic fields can be studied by solving Eq. (<ref>). This can be achieved by using the formalism of response theory <cit.>, in case the external fields are weak and can be regarded as small perturbations to the system compared to the intrinsic unperturbed Hamiltonian. Alternatively, the equation can be solved numerically by propagating the wave function in real time, which facilitates studying processes that involve arbitrarily strong fields.
§.§ Reduced density matrices
Since the many-electron wave function is a complicated object that depends on the spatial coordinates of each electron, for the forthcoming discussion, it will be more convenient to work in the formalism of reduced density matrices (RDMs) <cit.>. In the time domain, we can define the one-electron and two-electron RDMs, respectively, as
D(r⃗_1;r⃗'_1;t) = N ∫Ψ(r⃗_1,x_2,…,x_N,t) Ψ^†(r⃗'_1,x_2,…,x_N,t) dx_2… dx_N,
and
Γ(r⃗_1,r⃗_2;r⃗'_1,r⃗'_2;t) = N(N-1) ∫Ψ(r⃗_1,r⃗_2,x_3,…,x_N,t)
×Ψ^†(r⃗'_1,r⃗'_2,x_3,…,x_N,t) dx_3… dx_N,
where N is the number of electrons. We note here, that whereas r⃗_i represents spatial coordinates in three-dimensional space, x_i ≡ (r⃗_i,τ_i) denotes both the position r⃗_i and the spin τ_i of the i-th electron, and the integration symbolically also labels the summation over the spin degrees of freedom in addition to the integration over the spatial variables. In the relativistic theory with SOC, it is convenient to keep the indices associated with r⃗_1 and r⃗_2 free. As a consequence, D and Γ are still multicomponent tensors, for instance, in case of the Dirac theory, D and Γ have the dimensions of 4× 4 and 4× 4× 4× 4, respectively. Hence, the scalar electron charge density is obtained as
ρ(r⃗,t) = -e Tr D(r⃗;r⃗;t),
where Tr indicates the trace over the bispinor components. Likewise, for the four-component current density, it follows that
j⃗(r⃗,t) = -ec Tr[ α⃗ D(r⃗;r⃗;t) ].
Eqs. (<ref>) and (<ref>) generalize the one-electron definitions of the charge and current densities in Eqs. (<ref>) and (<ref>) for many-electron wave functions, since they are agnostic to the method that was used to calculate the one-electron RDM.
Exact time propagation determined by Eq. (<ref>) can equivalently be formulated in the language of RDMs, which avoids the use of the cumbersome many-electron wave function. Let us assume that we have a set of orthonormal spin-orbitals φ_p(r⃗). The one- and two-electron RDM matrices in the spin-orbital basis then become
D_pq(t) = ∫ d^3r⃗_1 ∫ d^3r⃗'_1 φ^†_p(r⃗_1) D(r⃗_1;r⃗'_1;t) φ_q(r⃗'_1),
and
Γ_pqrs(t) = ∫ d^3r⃗_1 … d^3r⃗'_2 φ^†_p(r⃗_1) φ^†_r(r⃗_2) Γ(r⃗_1,r⃗_2;r⃗'_1,r⃗'_2;t) φ_q(r⃗'_1) φ_s(r⃗'_2),
respectively. The time-dependent one-electron RDM can be obtained by solving the equation of motion of Liouville-von Neumann (LvN) type <cit.>
iħ∂𝐃(t)/∂ t = [𝐡(t),𝐃(t)] + 1/2 Tr_1[𝐆,Γ(t)],
where [,] denotes the commutator, 𝐡(t) and 𝐆 are matrices of one- and antisymmetrized two-electron integrals
h_pq(t) ≡∫φ^†_p(r⃗) ĥ^D(t) φ_q(r⃗) d^3r⃗,
G_pqrs≡ℐ_pqrs - ℐ_psrq;
ℐ_pqrs≡∬φ^†_p(r⃗_1)φ_q(r⃗_1) r^-1_12φ^†_r(r⃗_2)φ_s(r⃗_2) d^3r⃗_1 d^3r⃗_2,
and
(𝐆Γ)_pqrs ≡ G_perfΓ_eqfs,
(Tr_1 𝐗)_pq ≡ X_pqrr
for any two-electron matrix 𝐗. Upon inspecting Eq. (<ref>), we can see that the exact time evolution of the one-electron RDM also depends on the two-electron RDM Γ(t), which is also not known. Likewise, we could proceed by writing the equation of motion for the two-electron RDM. However, in general, the equation of motion for the N-electron RDM will contain the RDM of the order N+1, leading to an infinite hierarchy of coupled equations for RDMs, mirroring the same situation that occurs in the theory of Green's functions <cit.>. Solving the resulting system of equations is impractical, hence, approximations to the higher-order second term that decouple the equations are sought. In the following sections, we will describe the LvN equation for the one-electron RDM where the second term containing Γ(t) is approximated in the mean-field manner using only the one-electron RDM in the framework of time-dependent Hartree–Fock theory and density functional theory.
§.§ Time-reversal symmetry
One of the most important properties of quantum-mechanical equations of motion (and all microscopic laws) is their symmetry with respect to the reversal of time. Let us use the shorthand notation for the many-electron wave function Ψ(t) ≡Ψ(x_1,…,x_N,t). Replacing t→ -t in Eq. (<ref>) gives
-iħ∂Ψ(-t)/∂ t = Ĥ(-t)Ψ(-t).
This equation differs from the original one in two ways. First, the Hamiltonian is expressed in the inverted time -t. Second, there is an extra minus sign on the left hand side of the equation. Let us assume we have an antiunitary operator 𝒦 that is unitary (𝒦𝒦 = 𝕀) and antilinear
𝒦i = -i𝒦.
Letting this operator act from the left on the Eq. (<ref>), and denoting
Ψ̅(t) := 𝒦Ψ(-t),
H̅(t) := 𝒦Ĥ(-t)𝒦,
we obtain
iħ∂Ψ̅(t)/∂ t = H̅(t)Ψ̅(t).
In principle, this is a new equation of motion with a new solution, however, if we can assume that the Hamiltonian satisfies H̅(t) = Ĥ(t), i.e.
𝒦Ĥ(t) = Ĥ(-t)𝒦,
then Eqs. (<ref>) and (<ref>) represent the same equation, for which we obtained a pair of solutions Ψ(t) and Ψ̅(t). The condition in Eq. (<ref>) is known as time-reversal symmetry (TRS).
Due to the requirement in Eq. (<ref>), the operator 𝒦 must at least contain complex conjugation. This is a sufficient condition for scalar wave functions in nonrelativistic theory, where Eq. (<ref>) reduces to Ĥ^*(t) = Ĥ(-t), which for a time-independent Hamiltonian amounts to the condition that it is real-valued. However, for spinor wave functions and multicomponent relativistic theories, 𝒦 can have a more complicated matrix form. For instance, in the case of the Dirac four-component one-electron Hamiltonian, the operator 𝒦 takes the form <cit.>
𝒦 = -i[ σ_y 0_2; 0_2 σ_y ]𝒦_0,
where 𝒦_0 denotes the complex conjugation, and σ_y is the y-th 2× 2 Pauli matrix.
We conclude this section by noting that the condition in Eq. (<ref>) is satisfied for nonrelativistic as well as relativistic Hamiltonians. Neither internal electromagnetic interactions nor spin–orbit coupling terms break TRS, i.e. the Hamiltonian consists of symmetric operators (𝒦Â𝒦=Â) or bilinear products of antisymmetric (𝒦Â𝒦=-Â) operators, such as σ⃗·p⃗, that are again symmetric. However, if external fields are introduced, they can break the TRS, for instance, an electric field with the time dependence given by an odd function of t. More importantly, it is often discussed in the literature that the presence of a magnetic field breaks TRS. This is only true if the magnetic field B⃗ is considered as external and does not change its orientation upon time reversal, i.e. 𝒦 only acts on the electronic degrees of freedom. In such situations, terms like B⃗·Ŝ⃗̂, where Ŝ⃗̂ denotes the electron spin, become antisymmetric with respect to the time reversal, because the operator 𝒦 only acts on Ŝ⃗̂ (𝒦Ŝ⃗̂𝒦=-Ŝ⃗̂) and not on B⃗.
§ EQUATIONS-OF-MOTION FOR APPROXIMATE-STATE WAVE FUNCTIONS
The previous section dealt with exact state theory. In practical calculations,
model quantum chemistries are used to treat systems containing many particles.
The theory presented here focuses on both time-dependent Hartree–Fock (TDHF) theory
and time-dependent density functional theory (TDDFT) in the time-dependent Kohn–Sham (TDKS) framework.
From the practical point of view, both TDHF and KS TDDFT are mean-field
theories solving equations for one-electron molecular orbitals. Therefore, we use the term
time-dependent self-consistent field (TDSCF) when referring to both methods together.
In the following text, we sketch the derivation of working equations for TDHF and TDKS theories.
Since the final form of the equations is the same for both methods, the rest of this chapter
concerning propagators, evaluation of molecular properties, and analysis applies equally
to both of them.
§.§ Time-dependent Hartree–Fock theory
The main idea of the TDHF method is to approximate the many-electron time-dependent wave function
Ψ(x_1,x_2,…;t) by a single Slater determinant built from time-dependent molecular
spin-orbitals (MO) φ_i (x,t), where we grouped the electron's spatial and spin degrees of freedom
into a single variable x≡(r⃗,τ). Hence, the ansatz reads
Ψ(x_1,x_2,…;t)
=
1/√(N!)
φ_1 (x_1,t) φ_1 (x_2,t) ⋯ φ_1 (x_N,t)
φ_2 (x_1,t) φ_2 (x_2,t) ⋯ φ_2 (x_N,t)
⋮ ⋮ ⋱ ⋮
φ_N (x_1,t) φ_N (x_2,t) ⋯ φ_N (x_N,t)
=
1/√(N!) ∑_{P} (-1)^|P| φ_P(1) (x_1,t) φ_P(2) (x_2,t) …φ_P(N) (x_N,t)
,
where P denotes a permutation of indices, P(i) is the new index after permutation, and ∑_{P} is the sum over all possible permutations of MO indices. The prefactor (-1)^|P| is the sign ± 1 of the permutation based on the permutation length |P|. This ansatz uses complex spin-orbitals instead of real scalar orbitals and facilitates a direct extension of the nonrelativistic HF theory into the relativistic domain. Furthermore, in the TDHF theory, we assume that the many-electron wave function retains the form of a single
Slater-determinant during the entire time evolution.
The working equations of TDHF can be derived using the time-dependent variational principle.
Several functionals to be minimized have been formulated, such as the Dirac–Frenkel functional <cit.>
I^DF = ∫ dt ⟨Ψ(t)| iħ∂/∂ t - Ĥ |Ψ(t)⟩,
or the McLachlan functional
I^ML(t) = ⟨(iħ∂/∂ t - Ĥ)Ψ(t)|(iħ∂/∂ t - Ĥ)Ψ(t)⟩,
where we used the bra-ket notation ⟨⋯|⋯⟩ to indicate the integration over the degrees of freedom (spin and spatial) of all electrons. These functionals can be used to derive the final form of the TDHF equations for MOs <cit.>
iħ∂/∂ tφ_i(r⃗,t)
=
F̂_HF[{φ_j(r⃗,t)}](r⃗,t) φ_i(r⃗,t)
,
where F̂_HF is the Fock operator known from time-independent HF theory. Here, F̂_HF contains the Coulomb interaction of the electron with the mean field of the other electrons, the Fock exchange operator, and the one-electron Dirac Hamiltonian ĥ^D(t) that includes the interaction with external time-dependent electromagnetic fields. As a consequence, in the four-component Dirac theory, F̂_HF is a 4× 4 operator acting on bispinor orbitals φ_i. The presence of the explicitly time-dependent external fields in the one-electron part of F̂_HF and the dependence of the mean-field and exchange terms on the spin-orbitals, which are now time-dependent, represent the most distinct differences of the Fock operator in the TDHF theory from its time-independent counterpart.
§.§ Time-dependent Kohn–Sham DFT
Analogously to the static case, the idea of time-dependent density functional
theory (TDDFT)<cit.>
is to replace the many-electron wave function of 3N spatial variables and time with a simpler object, the electron density ρ(r⃗,t).
In the nonrelativistic framework, the theoretical foundations of TDDFT are provided by two theorems. The first one,
the Runge–Gross theorem <cit.>,
is a time-dependent analogue of the Hohenberg–Kohn
theorem connecting the time-dependent external potential and time-dependent
density. The second one, the Van Leeuwen theorem <cit.>,
connects the real system with a fictitious system with different interaction potential.
Application of these theorems allows for introducing a fictitious KS system of non-interacting electrons for which the many-electron wave function is a single Slater determinant built of one-particle functions called KS orbitals. The final TDKS equations are similar to Eq. (<ref>),
iħ∂/∂ tφ_i(r⃗,t)
=
F̂_KS[{φ_j(r⃗,t)}](r⃗,t) φ_i(r⃗,t)
,
except that the Fock operator F̂_KS also contains the exchange–correlation (XC) potential derived from the approximation to the XC energy functional instead of the exact HF exchange term. This XC term links the fictitious KS system to the studied real system. Hybrid DFT functionals <cit.> allow a fraction of the HF exchange contribution to also enter F̂_KS, bringing the very important element of (exact) antisymmetry of the many-electron wave function to DFT.
Approximating the XC functional in the time domain is more challenging than in time-independent theory.
In principle, TDDFT requires the development and use of special time-dependent XC potentials that may
generally depend on the density at previous times.
However, a widespread practice is to simply use potentials from
time-independent DFT, with the time variable only entering via the time-dependence of the density (and its gradient).
This local-in-time approximation is known as the adiabatic approximation in TDDFT <cit.>.
The term adiabatic approximation is actually used to label a combination of two approximations: firstly the
adiabatic approximation itself <cit.> and secondly the approximations that were used in the construction of the time independent XC functional <cit.>.
The adiabatic approximation is valid when a system remains in its instantaneous eigenstate for slowly varying perturbations that act on it <cit.> and is widely used in TDDFT due to the lack of accepted time-dependent (memory-dependent) functionals. The memory effects were also shown to be negligible in the context of nonlinear processes and strong-field excitations studied in non-perturbative electron dynamics <cit.>. However, non-adiabatic effects in the XC functional become important for high-frequency oscillations <cit.>, as well as for double and charge-transfer excitations <cit.>. Extending the XC potential beyond the adiabatic approximation while still exploiting the local gradient expansion can be achieved if the current density is used as a central variable <cit.>. A more general framework for time-dependent functionals with memory in TDDFT introduces viscoelastic stresses known from the hydrodynamics of the electron liquid <cit.> or formulates TDDFT in a comoving Lagrangian reference frame <cit.>.
In a similar manner, the extension of DFT to the relativistic domain <cit.> also makes use of non-relativistic XC potentials that take relativistic densities as input. Linear-response TDDFT has been extended to the relativistic approximate two-component framework and applied to calculate absorption spectra of solids <cit.>, however, proper theoretical foundations that incorporate both effects of time-dependent fields as well as relativity and generalize the Runge–Gross and Van Leeuwen theorems to the relativistic domain do not exist. Despite this, the relativistic real-time and linear-response TDDFT has been applied to study a number of molecular properties, as will be discussed in more detail in Section <ref>.
§.§ Liouville–von Neumann equation in four-component framework
As discussed in Section <ref>, it is often more practical to work in the formalism of density matrices. This is especially the case for theories that express the many-electron wave function as a single Slater determinant, such as TDHF and TDKS, where the two-electron RDM is not needed, and the entire information about the time evolution of a quantum state of the many-electron system is encoded in the one-electron RDM.
Let us express the time-dependent spin-orbitals φ_i(r⃗,t) appearing in Eqs. (<ref>) and (<ref>) using a set of n static orthonormal functions {X(r⃗)}. Then
φ_i(r⃗,t) = ∑_μ^nX_μ(r⃗) C_μ i(t),
where C_μ i(t) are the complex-valued expansion coefficients. For purposes of this Chapter, {X(r⃗)} shall refer to orthonormal atomic orbitals (AOs). For cases where the wave function Ψ(t) is a Slater determinant, the one-electron RDM from Eq. (<ref>) can be expressed through the occupied (occ) spin orbitals as
D(r⃗;r⃗';t) = ∑_i^occφ_i(r⃗,t)φ^†_i(r⃗',t).
Inserting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) gives the following orthonormal AO representation of the RDM
D_μν(t) = ∑_i^occ C_μ i(t)C^*_ν i(t).
Introducing the matrices 𝐃(t) and 𝐂(t) with elements D_μν(t) and C_μ i, respectively, we can write
𝐃(t) = 𝐂(t)𝐂^†(t).
Taking the time derivative of this equation and using the time-dependent equations for spin-orbitals (Eqs. (<ref>) or (<ref>)) in combination with the expansion in Eq. (<ref>), we obtain the Liouville–von Neumann (LvN) equation of motion (EOM) for the RDM
iħ∂𝐃(t)/∂ t
=
[𝐅(t),𝐃(t)]
,
where we dropped the HF and KS labels on the Fock matrix 𝐅(t). We note that, in the HF theory, this equation coincides with the general Eq. (<ref>), since the two-electron RDM is approximated as
Γ_μνκλ(t) = D_μν(t)D_κλ(t) - D_μλ(t)D_κν(t).
Using this factorization of the two-electron RDM in Eq. (<ref>) gives rise to both the mean-field Coulomb and the exact exchange terms that complement the one-electron Hamiltonian in the Fock matrix of HF theory. The advantage of solving the LvN equation over the respective equations for spin orbitals is that the RDM is gauge invariant with respect to orbital rotations φ_p →∑_q φ_q V_qp, i.e. unitary transformations 𝐕 that do not mix the occupied and virtual spin orbitals. This gauge freedom of the orbitals was utilized in the work of Jia et al. <cit.> to allow for the much larger time steps used in real-time simulations based on solving the EOM for the orbitals in the parallel-transport gauge.
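The relation 𝐃(t) = 𝐂(t)𝐂^†(t) restricted to occupied columns, together with the idempotence and trace properties it implies for a single determinant, can be illustrated with a few lines of NumPy; the dimensions and the random coefficients below are of course hypothetical.

import numpy as np

n_ao, n_occ = 6, 3
rng = np.random.default_rng(1)

# hypothetical occupied MO coefficients: orthonormal columns obtained from a QR factorization
raw = rng.normal(size=(n_ao, n_occ)) + 1j * rng.normal(size=(n_ao, n_occ))
C_occ, _ = np.linalg.qr(raw)

D = C_occ @ C_occ.conj().T                 # one-electron RDM in an orthonormal AO basis

assert np.allclose(D, D.conj().T)          # Hermitian
assert np.allclose(D @ D, D)               # idempotent for a single determinant
assert np.isclose(D.trace().real, n_occ)   # trace equals the number of electrons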
Real-time methods are based on directly solving Eq. (<ref>) in the time domain by numerically propagating the RDM (see Section <ref>). Since the Fock matrix is Hermitian, the time evolution must be unitary. However, Eq. (<ref>) is sometimes augmented by an extra term to model the relaxation of the system to the equilibrium (ground) state D_eq with an empirical rate of relaxation matrix γ. The LvN equation then reads
iħ∂𝐃(t)/∂ t
=
[𝐅(t),𝐃(t)]
-
iħγ( D(t) - D_eq)
.
At the level of theory presented here, the matrix γ is phenomenological and is commonly approximated by a single parameter, referred to as a damping parameter. In this case, the LvN equation can be solved without the damping parameter and its application is postponed to a post-processing step (see discussion below). We note that if the damping term is included in the LvN equation, the time propagation is no longer unitary and the energy of the system is not conserved even in the absence of external field(s).
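As a rough illustration of the post-processing route mentioned above, the sketch below damps a synthetic induced-dipole-like signal with a single exponential window before Fourier transforming it, which is the usual way a scalar damping parameter enters in practice and turns discrete excitations into Lorentzian lines; the time step, damping constant, and frequencies are hypothetical values in atomic units.

import numpy as np

dt, n_steps, gamma = 0.05, 4000, 0.005          # hypothetical time step, length, and damping (a.u.)
t = np.arange(n_steps) * dt

# hypothetical induced-dipole signal with two undamped oscillation frequencies
signal = 0.8 * np.sin(0.35 * t) + 0.3 * np.sin(0.52 * t)

damped = signal * np.exp(-gamma * t)            # exponential window standing in for the relaxation term above
spectrum = np.abs(np.fft.rfft(damped)) * dt
freqs = np.fft.rfftfreq(n_steps, d=dt) * 2.0 * np.pi   # angular frequencies

peak = freqs[np.argmax(spectrum)]
print(f"dominant peak near omega = {peak:.3f} a.u.")    # close to 0.35, broadened into a Lorentzian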
Within a finite time window, the solution of the LvN equation Eq. (<ref>) reduces to the evaluation of the time-dependent Fock matrix at discrete time steps, and to the propagation of the density matrix in time. Here, we briefly outline the main features of the Fock matrix evaluation, assuming the full four-component level of theory. By following the previous discussion, the Fock matrix in Eq. (<ref>) is given in a set of n orthonormal AOs {X(r⃗)},
F_μν(t)
=
X_μ(r⃗)|F̂[{φ(r⃗,t)}](r⃗,t)|X_ν(r⃗)
.
Of particular interest in this Chapter are applications where molecular systems are irradiated by classical time-dependent electric field(s). In this case,
the 4c Fock matrix can easily be derived from the electronic Hamiltonian in the
length-dipole representation (see Eq. (<ref>) in Section <ref>) <cit.>
F^4c_μν(t)
=
F^4c_μν[ℰ,ℱ](t)
=
h^D_μν
+
∑_κλ^n
G^4c_μν,κλ
D^4c_λκ(t,ℰ,ℱ)
+
∑_u∈0,x,y,z∫ v^xc_u[ρ^4c(r⃗,t,ℰ,ℱ)]
Ω_u,μν^4c(r⃗) d^3r⃗
-
∑_u∈ x,y,z
P^4c_u,μνℰ_u(t)
-
∑_u∈ x,y,z
P^4c_u,μνℱ_u(t)
.
The right-hand side includes the matrix representation of the one-electron Dirac operator, the two-electron (2e) Coulomb interaction operator, the exchange–correlation (xc) operator, and the particle–field interaction operators. For generality we involve two time-dependent electric fields ℰ(t) and ℱ(t) which are coupled to the molecular system via the electric dipole moment operator matrix (𝐏^4c_u).
Computationally most demanding is the 2e contribution as it requires the evaluation of generalized anti-symmetrized electron repulsion integrals (ERIs) <cit.>
G^4c_μν,κλ
=
ℐ^4c_μν,κλ
-
ζℐ^4c_μλ,κν
; ℐ^4c_μν,κλ
=
∬Ω_0,μν^4c(r⃗_1)
r_12^-1Ω_0,κλ^4c(r⃗_2)
d^3r⃗_1d^3r⃗_2
,
in terms of 4c charge distribution functions
Ω^4c_0,μν(r⃗)
=
X_μ^†(r⃗)
X_ν(r⃗)
.
Here, each 4c basis function X_μ(r⃗)≡{X^L_μ(r⃗)⊕ X^S_μ(r⃗)} consists of the direct sum of the large 2c function X^L_μ(r⃗) and the small 2c function X^S_μ(r⃗), related to each other to the lowest order in c^-1 by the restricted kinetically balanced (RKB) relation <cit.>: X^S_μ≃(σ⃗·p⃗/2m_ec)X^L_μ. The obvious computational cost and complexity of 4c ERIs arise from the presence of the canonical momentum operator (p⃗) as well as the Pauli spin operator (σ⃗) in the expression for the small-component basis. Therefore, as discussed in Ref. <cit.>, a single 4c ERI requires, even in the most compact formalism of real quaternions, the simultaneous evaluation and processing of 25 times more real scalar integrals than in the simpler 1c or 2c cases. This ratio further increases when RKB is substituted by the restricted magnetically balanced (RMB) relation <cit.>, which is needed for handling interactions with magnetic fields and requires the ERI evaluation formalism to be based on complex quaternions <cit.>.
In addition to the charge distribution function Ω_0^4c(r⃗) used in Eq. (<ref>), one can define three spin distribution functions along the Cartesian directions
Ω^4c_k,μν(r⃗)
=
X_μ^†(r⃗)
Σ_k
X_ν(r⃗)
;
Σ_k =
[ σ_k 0_2; 0_2 σ_k ]
;
k∈ x,y,z
,
in terms of which the 4c electron charge density (ρ_0^4c) as well as the electron spin densities (ρ_x^4c,ρ_y^4c,ρ_z^4c) have a particularly simple form
ρ_k^4c
=
ρ_k^4c(r⃗,t)
=
∑_μν^nΩ_k,μν^4c(r⃗) D^4c_νμ(t)
;
k∈0,x,y,z
,
where Σ_k is the Dirac spin operator. Note that all current noncollinear extensions of nonrelativistic xc functionals employ those four densities (alongside of their gradients) as basic variables <cit.>. In the relativistic 2c and 4c theory, the use of a noncollinear formalism is necessary since the spatial and spin degrees of freedom are no longer independent and are coupled by the spin-orbit interaction. This coupling results in a lack of rotational invariance of the xc energy if the energy is calculated collinearly through the z spin-component only <cit.>. A common way to circumvent this variance problem is to formulate the nonrelativistic exchange–correlation functionals noncollinearly. Therefore, we utilize in our real-time TDSCF implementation the noncollinear variables of Scalmani and Frisch <cit.> and evaluate the noncollinear xc potential v_k^xc in Eq. (<ref>) within a generalized gradient approximation as
v^xc_k[ρ^4c(t)]
=
∂ε^xc/∂ρ_k^4c(t)
-
( ∇·∂ε^xc/∂∇ρ_k^4c(t))
;
k∈0,x,y,z
.
Here, ε^xc and ρ⃗^4c refer to a nonrelativistic xc energy density and an electron density vector consisting of the electron charge and spin densities (together with their gradients). For further details on our noncollinearity implementation, the reader is referred to Refs. <cit.>.
§.§ Reduction of the Liouville–von Neumann equation to the exact two-component (X2C) form
While full four-component (4c) relativistic real-time electron dynamics simulations are nowadays feasible <cit.>, there is interest in developing approximate methods enabling these simulations to be performed more efficiently at the two-component (2c) level while maintaining the accuracy of the parent 4c regime. Therefore, we shall discuss the transformation of the original 4c Liouville-von Neumann (LvN) equation to its 2c form, with a particular focus on the modern exact two-component (X2C) formalism.
The X2C Hamiltonian model has gained wide popularity in recent years as it reduces the original 4c problem by half while requiring only a few simple algebraic manipulations <cit.>. However,
the accuracy of this Hamiltonian depends strongly on the two-electron (2e) and exchange–correlation (xc) picture-change correction models employed <cit.>, and the resulting errors can vary by as much as 5–6 orders of magnitude for core-shell energies. Since the pioneering X2C RT-TDDFT implementations <cit.> utilize a crude one-electron X2C (1eX2C) Hamiltonian model where the picture-change corrections are entirely neglected, the inner-shell spinors (and their energies) substantially differ from the reference 4c results <cit.>. Therefore, our focus here is to provide theoretical insights into three numerically accurate X2C Hamiltonian models <cit.>, dubbed amfX2C, eamfX2C and mmfX2C, that account for the two-electron and exchange-correlation picture-change effects.
By following the matrix-algebraic approach of X2C, let us assume that at an arbitrary time t there exists a unitary transformation matrix 𝐔(t) that block-diagonalizes/decouples the 4c Fock matrix
𝐅^4c(t)
→𝐅̃^4c(t)
=
𝐔^†(t)𝐅^4c(t)𝐔(t)
=
(
[ 𝐅̃^LL(t) 0_2; 0_2 𝐅̃^SS(t); ])
.
Note that: (i) we use tildes to indicate all transformed quantities; (ii) 𝐅^4c(t) and 𝐔(t) also depend on the electric field
ℰ(t) and ℱ(t), but for clarity of presentation this dependence is omitted now.
Under the X2C transformation, the parent 4c EOM for MO coefficients becomes
iħ∂C̃^4c_i(t)/∂ t
=
𝐅̃^4c(t)C̃^4c_i(t)
+
iħ(∂𝐔^†(t)/∂ t) 𝐔(t) C̃^4c_i(t)
,
where
C̃^4c_i(t)
=
𝐔^†(t)C^4c_i(t)
.
A similar relation also holds for the X2C transformed LvN equation
iħ∂𝐃̃^4c(t)/∂ t
=
[𝐅̃^4c(t), 𝐃̃^4c(t)]
+
iħ[ (∂𝐔^†(t)/∂ t) 𝐔(t), 𝐃̃^4c(t) ]
,
with the density matrix
𝐃̃^4c(t)
=
∑_i^occC̃^4c_i(t)(C̃^4c_i(t))^†
=
𝐔^†(t)𝐃^4c(t)𝐔(t)
.
The right hand side of Eqs. (<ref>) and (<ref>) involves
the matrix product 𝐔̇^†(t)𝐔(t) which has nonzero off-diagonal blocks that prevent expressing these equations in the complete decoupled (block-diagonal) form. However, as discussed in Ref. <cit.> for the case of a single electric field ℰ(t), the matrix values of 𝐔̇^†(t) are of the order 𝒪(|ℰ|ω c^-1), and therefore become negligibly small within a weak-field limit (|ℰ|≪1) and a dipole approximation (r ω c^-1≪1). As a result, the X2C transformation matrix remains approximately constant in time, i.e. 𝐔(t)≈𝐔, and Eqs. (<ref>) and (<ref>) reduce to the simple form
iħ∂C̃^4c_i(t)/∂ t
=
𝐅̃^4c(t)C̃^4c_i(t)
;
iħ∂𝐃̃^4c(t)/∂ t
=
[𝐅̃^4c(t), 𝐃̃^4c(t)]
.
This time-independence of matrix 𝐔 is generally denoted as the adiabatic X2C transformation <cit.>.
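To make the decoupling step above concrete, the sketch below performs a one-step, 1eX2C-style construction on a random Hermitian matrix standing in for the 4c matrix in an orthonormal basis: the coupling matrix X is built from the upper half of the spectrum (playing the role of the positive-energy solutions), the unitary U is assembled from X and the renormalization factors, and U is then verified to be unitary and to block-diagonalize the matrix. This is only an illustration of the matrix algebra, under the stated assumptions, and not of any production X2C implementation.

import numpy as np
from scipy.linalg import sqrtm

n = 4
rng = np.random.default_rng(2)
h4c = rng.normal(size=(2 * n, 2 * n)) + 1j * rng.normal(size=(2 * n, 2 * n))
h4c = 0.5 * (h4c + h4c.conj().T)       # Hermitian stand-in for the 4c matrix

eps, C = np.linalg.eigh(h4c)
C_pos = C[:, n:]                       # "positive-energy" solutions (upper half of the spectrum)
C_L, C_S = C_pos[:n, :], C_pos[n:, :]  # large- and small-component blocks
X = C_S @ np.linalg.inv(C_L)           # coupling matrix (assumes C_L is nonsingular, true generically)

R_L = np.linalg.inv(sqrtm(np.eye(n) + X.conj().T @ X))   # renormalization factors
R_S = np.linalg.inv(sqrtm(np.eye(n) + X @ X.conj().T))
U = np.block([[R_L, -X.conj().T @ R_S], [X @ R_L, R_S]])

assert np.allclose(U.conj().T @ U, np.eye(2 * n))        # U is unitary
h_tilde = U.conj().T @ h4c @ U
assert np.allclose(h_tilde[:n, n:], 0)                   # off-diagonal blocks vanish
assert np.allclose(np.linalg.eigvalsh(h_tilde[:n, :n]), eps[n:])  # upper block reproduces the upper spectrum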
The best possible transformation matrix 𝐔 can be obtained from a so-called mmfX2C approach <cit.>. In this approach, 𝐔 is obtained a posteriori from converged 4c SCF HF/KS solutions (MO coefficients) by applying, for instance, the one-step X2C transformation of Ilias and Saue <cit.>. From the real-time dynamics point of view, these solutions are associated with the initial simulation time t_0. An important observation is that at t_0 the 4c occupied positive-energy MO coefficients C⃗^4c_i as well as the 4c density matrix 𝐃^4c can be expressed in terms of their 2c counterparts,
C⃗^4c_i(t_0)
=
𝐔C̃^2c_i(t_0)
⇒ [ C^4c]^X_μ i
=
∑_ν[ U ]^XL_μν[ C̃^2c]_ν i
𝐃^4c(t_0)
=
𝐔𝐃̃^2c(t_0) 𝐔^† ⇒ [ D^4c]^XY_μν
=
∑_κλ[ U ]^XL_μκ[ D̃^2c]_κλ[ U^†]^LY_λν
.
Here, X and Y refer to the large-component (L) and small-component (S) subsets of the orthonormal AO basis. Within the adiabatic X2C transformation it is assumed that the relations (<ref>) remain valid also at an arbitrary future time t>t_0, and therefore 4c real-time dynamics results can be obtained just from the solution of the simple 2c EOMs
iħ∂C̃^2c_i(t)/∂ t
=
𝐅̃^2c(t)C̃^2c_i(t)
;
iħ∂𝐃̃^2c(t)/∂ t
=
[𝐅̃^2c(t),𝐃̃^2c(t)]
.
However, as shown by Knecht and coworkers for the static SCF case [], the correctly transformed 2c Fock matrix 𝐅̃^2c involves a so-called picture-change transformation of the density matrix, the overlap distribution matrix, and the one- and two-electron integrals. Repisky and coworkers extended this observation to the time domain and derived <cit.>:
F̃^2c_μν(t)
=
[𝐔^†𝐅^4c(t)𝐔]^LL_μν =
h̃^2c_μν
+
∑_κλG̃^2c_μν,κλD̃^2c_λκ(t,ℰ,ℱ)
+
∑_u∈0,x,y,z∫ v^xc_u[ρ̃^2c(r⃗,t,ℰ,ℱ)]
Ω̃_u,μν^2c(r⃗) d^3r⃗
-
∑_u∈ x,y,zP̃^2c_u,μνℰ_u(t)
-
∑_u∈ x,y,zP̃^2c_u,μνℱ_u(t)
.
There are two important points to note here: (i) all transformed quantities are marked with tilde; (ii) the presence of the picture-change transformed charge distribution matrix (Ω̃^2c) in both 2e and xc interaction terms makes the evaluation of 𝐅̃^2c computationally more demanding than the original 4c Fock matrix.
Therefore, it is desirable to seek an approximation that enables us to carry out electron dynamics simulations in 2c mode such that they are computationally efficient and reproduce the reference 4c results as closely as possible. Keeping this in mind, one can compare Eq. (<ref>) with an approximate and computationally efficient form of the Fock matrix built with untransformed (without the tilde) two-electron integrals 𝐆^2c and
overlap distribution matrix Ω^2c; that is
F^2c_μν(t)
=
h̃^2c_μν
+
∑_κλ
G^2c_μν,κλD̃^2c_λκ(t,ℰ,ℱ)
+
∑_u∈0,x,y,z∫ v^xc_u[ρ^2c(r⃗,t,ℰ,ℱ)]
Ω_u,μν^2c(r⃗) d^3r⃗
-
∑_u∈ x,y,zP̃^2c_u,μνℰ_u(t)
-
∑_u∈ x,y,zP̃^2c_u,μνℱ_u(t)
.
Here, it is important to emphasize that ρ^2c also remains untransformed in the sense that an untransformed Ω_u^2c is used but with the correctly transformed density matrix 𝐃̃^2c. We immediately find
that the difference between these two Fock matrices expresses the picture-change corrections associated with the two-electron integrals and the xc contribution
ΔF̃^2c_μν(t)
=
F̃^2c_μν(t)
-
F^2c_μν(t)
=
∑_κλΔG̃^2c_μν,κλD̃^2c_λκ(t)
+
ΔF̃^2c,xc_μν(t)
,
where
ΔG̃^2c_μν,κλ =
G̃^2c_μν,κλ
-
G^2c_μν,κλ,
ΔF̃^2c,xc_μν(t)
=
∫ v_k^xc[ρ̃^2c(r⃗,t)]
Ω̃_k,μν^2c(r⃗) d^3r⃗
-
∫ v_k^xc[ρ^2c(r⃗,t)]
Ω_k,μν^2c(r⃗) d^3r⃗
.
Here, we dropped the dependence on ℰ and ℱ for clarity. The central idea of X2C real-time electron dynamics is the solution of the 2c LvN equation (<ref>) with the Fock matrix
F̃^2c_μν(t)
=
h̃^2c_μν
+
ΔF̃^2c_μν(t)
+
∑_κλ
G^2c_μν,κλD̃^2c_λκ(t,ℰ,ℱ)
+
∑_u∈0,x,y,z∫ v^xc_u[ρ^2c(r⃗,t,ℰ,ℱ)]
Ω_u,μν^2c(r⃗) d^3r⃗
-
∑_u∈ x,y,zP̃^2c_u,μνℰ_u(t)
-
∑_u∈ x,y,zP̃^2c_u,μνℱ_u(t)
,
where Δ𝐅̃^2c(t) accounts for the picture-change corrections associated with the 2e integrals and the xc contribution.
Note that 𝐅̃^2c(t) in Eqs. (<ref>) and (<ref>) are equal, and all differences between various flavours of X2C are due to approximations in Δ𝐅̃^2c(t).
In the simplest but least accurate case, dubbed one-electron X2C (1eX2C), Δ𝐅̃^2c(t) in Eq. (<ref>)
is completely discarded, while the decoupling matrix 𝐔 is obtained simply from the parent one-electron Dirac Hamiltonian. This approach was employed in pioneering X2C RT-TDDFT implementations <cit.>.
Due to its simplicity the 1eX2C Hamiltonian still remains very popular, but caution is needed when applying this model beyond valence electric properties as shown for instance in Ref. <cit.>.
In the second model, coined as molecular mean-field X2C (mmfX2C), Δ𝐅̃^2c(t) in Eq. (<ref>) is approximated
by a static model Δ𝐅̃^2c, which is evaluated according to Eqs. (<ref>) and (<ref>) only once using the converged 4c molecular self-consistent field solutions <cit.>. Similarly, 𝐔 is determined from the same 4c solutions. For a theoretical and numerical justification of the static approximation used in the real-time and response mmfX2C theory, readers are referred to the original publication <cit.>. Due to the late X2C transformation (post-SCF), the mmfX2C approach was found to be the most accurate among all X2C Hamiltonian models <cit.>, though the price for this accuracy is the implementation and execution of 4c molecular SCFs.
In line with the idea of Knecht et al. <cit.> on the amfX2C Hamiltonian for time-independent Hartree-Fock and Kohn-Sham mean-field theories, one may exploit the local atomic nature of the static picture-change correction matrix
Δ𝐅̃^2c discussed in the previous paragraph. In the third model, dubbed as atomic mean-field X2C (amfX2C) <cit.>,
Δ𝐅̃^2c(t) in Eq. (<ref>)
is approximated by a static model Δ𝐅̃^amfX2C_⊕ obtained by a superposition of converged atomic quantities rather than the converged molecular one, i.e.
Δ𝐅̃^2c(t)
≈Δ𝐅̃^amfX2C_⊕
=
⊕_K=1^MΔ𝐅̃^2c_K[𝐃̃_K^2c]
.
Here, K runs over all atoms in an M-atomic system.
The main advantage of the amfX2C approach is that it introduces picture-change corrections to both spin-independent and spin-dependent parts of the two-electron and xc interaction just from simple atomic quantities. On the other hand, the fact that
Δ𝐅̃^2c_⊕ has only atomic diagonal blocks means that, for instance, the off-diagonal electron-nucleus contribution
will not cancel out with the direct electron-electron contribution at long distances from the atomic centers. This becomes problematic in solid-state calculations, where the exact cancellation of these contributions is essential at long distances. In fact, this motivated Knecht and coworkers <cit.> to introduce our last X2C Hamiltonian model, called extended amfX2C (eamfX2C). The generalization of eamfX2C to the time domain was recently discussed by Konecny and coworkers <cit.>, and it requires approximating
Δ𝐅̃^2c(t) in Eq. (<ref>) by a static model Δ𝐅̃^eamfX2C_⊕ obtained from the time-independent version of equation (<ref>)
ΔF̃^eamfX2C_⊕,μν
=
F̃^2c_⊕,μν
-
F^2c_⊕,μν
=
∑_κλΔG̃^2c_μν,κλD̃^2c_⊕,λκ
+
ΔF̃^2c,xc_⊕,μν
,
with elements on the right hand side given in Eq. (<ref>). The picture-change corrections associated with the two-electron integrals and the xc contribution involve the 2c density matrix 𝐃̃^2c_⊕
obtained from a superposition of converged 4c atomic density matrices 𝐃^4c_K, i.e.
𝐃̃^2c_⊕
=
⊕_K=1^M[
𝐔_K^†𝐃^4c_K𝐔_K]^LL
.
Here, K runs over all atoms in an M-atomic system.
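Computationally, the direct sum ⊕_K in the last two equations is just a block-diagonal placement of precomputed atomic blocks in the molecular AO ordering; a minimal sketch with hypothetical atoms and block sizes could look as follows.

import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)

def hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return 0.5 * (a + a.conj().T)

# hypothetical per-element 2c correction blocks Delta F_K (sizes chosen arbitrarily here)
atomic_corrections = {"O": hermitian(9), "H": hermitian(2)}

molecule = ["O", "H", "H"]     # atom ordering must match the molecular AO ordering
delta_F_molecular = block_diag(*(atomic_corrections[atom] for atom in molecule))

print(delta_F_molecular.shape)  # (13, 13): atomic blocks on the diagonal, zeros elsewhere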
§ REAL-TIME PROPAGATION
§.§ Evolution operator
The solution of the TDHF and TDKS equations for an arbitrary time t can be written in the compact form by defining the evolution operator U(t,t') that propagates the state from time t' to time t as
φ_i(r⃗,t) = U(t, t') φ_i(r⃗,t')
,
or, equivalently, in the matrix form
C(t) = U(t,t') C(t').
We can also use the same evolution operator to obtain the solution of the LvN equation in the language of the RDM as
D(t) = U(t,t')D(t')U^†(t,t')
.
It is required that the evolution operator is unitary, i.e. U^†(t,t')U(t,t') = 𝕀, so that the time evolution preserves the norm of the φ_i as well as the idempotence and trace of the density matrix. It follows from the definition that
U(t,t) = 𝕀,
U(t_3,t_1) = U(t_3,t_2)U(t_2,t_1),
U^-1(t_1,t_2) = U(t_2,t_1).
The last property is related to the TRS described in Section <ref> and only holds when no external magnetic fields are present.
Inserting the definition of 𝐔 in Eq. (<ref>) into the LvN equation recasts the problem of time propagation into determining 𝐔 by solving
iħ∂/∂ t𝐔(t,t') = 𝐅(t)𝐔(t,t').
It is possible to write a closed-form solution of this equation in the form of the Dyson series as
𝐔(t,t')
=
∑_n=0^∞(-i/ħ)^n/n!∫_t'^t dt_1…∫_t'^t dt_n 𝒯{𝐅(t_1)…𝐅(t_n)},
where 𝒯 represents the time-ordering of the product such that the leftmost term has the latest time, and each following term is applied at an earlier time than the one before it. The time-ordering is necessary since [𝐅(t_1),𝐅(t_2)] ≠ 0. Except for the time-ordering, this series represents the expansion of the exponential function and thus is often written in the short-hand form
𝐔(t,t') = 𝒯exp[-i/ħ∫_t'^t 𝐅(τ)dτ].
This expression, albeit formally exact, requires truncation of the series in numerical implementations. Such truncation inevitably leads to the loss of the unitary property of 𝐔, and consequently the idempotence and trace of the density matrix, which can result in numerically unstable time propagation <cit.>.
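The loss of unitarity upon truncation is easy to demonstrate numerically: in the sketch below (a random Hermitian Fock-like matrix, atomic units with ħ = 1), the first-order truncation of the series deviates from unitarity at order Δt², while the exponential of the same generator stays unitary to machine precision.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
F = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
F = 0.5 * (F + F.conj().T)          # Hermitian Fock-like matrix (hypothetical)
dt = 0.05

U_truncated = np.eye(8) - 1j * F * dt          # first-order truncation of the series
U_exponential = expm(-1j * F * dt)             # exponential of the same generator

def unitarity_error(U):
    return np.linalg.norm(U.conj().T @ U - np.eye(8))

print(f"truncated  : {unitarity_error(U_truncated):.2e}")    # O(dt^2), clearly nonzero
print(f"exponential: {unitarity_error(U_exponential):.2e}")  # ~1e-15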
§.§ Magnus expansion
As an alternative to the Dyson expansion, the evolution operator can be written as a true exponential function that does not require the time ordering, i.e. in the form of the exponent of the infinite series as
𝐔(t,t') = e^𝐀(t,t'),
where
𝐀(t,t') = ∑_n=1^∞𝐀_n(t,t').
This form was proposed by Magnus in 1954 with the first terms given by <cit.>
𝐀_1(t,t')
= 1/iħ∫_t'^t dt_1 𝐅(t_1),
𝐀_2(t,t')
=
-1/2(1/iħ)^2 ∫_t'^t dt_2 ∫_t'^t_2 dt_1
[𝐅(t_1),𝐅(t_2)],
𝐀_3(t,t')
=
-1/6(1/iħ)^3 ∫_t'^t dt_3 ∫_t'^t_3 dt_2∫_t'^t_2 dt_1(
[𝐅(t_1),[𝐅(t_2),𝐅(t_3)]]
.
+
.[[𝐅(t_1),𝐅(t_2)],𝐅(t_3)]).
Both Dyson and Magnus expansions are equivalent if their respective series are considered in the infinite limit; however, the Magnus expansion has an advantage in numerical implementations, as it retains the unitary property of the evolution operator even if truncated after any number of terms. Higher-order propagators based on the Magnus expansion require the evaluation of multiple commutators of the Hamiltonian (Fock) matrix, e.g. in Eqs. (<ref>) and (<ref>). A commutator-free version based on the Magnus series can be obtained by writing the evolution operator as an (infinite) product of exponentials <cit.>, which leads to a powerful approach for deriving higher-order commutator-free exponential time propagators.
§.§ Approximate evolution
Real-time simulations typically start from an initial state defined at t=0 by 𝐃(t=0). This initial state is in most cases obtained from a converged ground-state optimization procedure, though starting the evolution from approximate excited states is also possible <cit.>. Once the evolution operator is determined, the orbitals and RDM at arbitrary times can be calculated by applying Eqs. (<ref>) and (<ref>). For instance, 𝐃(t) is obtained as follows
𝐃(t) = 𝐔(t,0)𝐃(0)𝐔^†(t,0).
In practice, however, the global propagator 𝐔(t,0) is not known – this is the case even if the infinite series (Dyson or Magnus) from the previous section are truncated. Numerical implementations require that the time is discretized into a finite number of time steps N of size Δ t, and the evolution operator is factored using Eq. (<ref>) as
𝐔(NΔ t,0) = ∏_i=0^N-1𝐔((i+1)Δ t, iΔ t).
Thus, the propagation over one time step is achieved by the application of 𝐔(t+Δ t,t) on the density matrix or orbitals, where t≡ iΔ t. This short-time propagation allows us to approximate the integrals in 𝐔(t+Δ t,t).
The most commonly used approximation for U is the midpoint Magnus propagator <cit.>
𝐔(t+Δ t,t)
≈exp[ 1/iħ𝐅(t+Δ t/2) Δ t ]
,
where only the first term A_1 in Eq. (<ref>) of the Magnus expansion is considered, and the integral ∫_t^t+Δ tdt is approximated using the midpoint quadrature. This integrator is of the second order as it is correct to 𝒪(Δ t^2). Eq. (<ref>) is also the basis of the modified midpoint unitary transformation (MMUT) method in the literature <cit.>, where the time step is modified to 2Δ t. A fourth-order Magnus propagator can be constructed by taking the first two terms in the Magnus series A⃗_1 and A⃗_2 and approximating the integrals using a two-point Gaussian quadrature <cit.>. A comparison of the performance of the second- and fourth-order Magnus propagators with the predictor–corrector scheme was presented in the recent study of Müller, Sharma, and Sierka
<cit.> based on their efficient implementation of RT-TDDFT. The matrix exponential in Eq. (<ref>) can be evaluated directly by diagonalizing the Fock matrix and constructing the exponent from its eigenvalues. However, techniques that circumvent the expensive diagonalization step by using a series of matrix multiplications, such as the Baker-Campbell-Hausdorff formula <cit.> or the Chebyshev expansion <cit.>, lower the computational cost as well as improve parallelization.
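To make the midpoint scheme concrete, the following minimal numpy sketch performs one exponential-midpoint (second-order Magnus) step of Eq. (<ref>) by diagonalizing the midpoint Fock matrix. It assumes an orthonormal basis and a Hermitian Fock matrix; the function name `magnus2_step` is purely illustrative and not taken from any particular package.

```python
import numpy as np

def magnus2_step(D, F_mid, dt, hbar=1.0):
    """One exponential-midpoint (second-order Magnus) step.

    D     : density matrix at time t (Hermitian, orthonormal basis)
    F_mid : Fock matrix evaluated at t + dt/2
    Returns the density matrix at t + dt.
    """
    # Diagonalize the Hermitian midpoint Fock matrix: F = C diag(e) C^+
    e, C = np.linalg.eigh(F_mid)
    # U = exp(-i F dt / hbar), assembled from the eigendecomposition
    U = (C * np.exp(-1j * e * dt / hbar)) @ C.conj().T
    # The unitary similarity transform preserves the trace and idempotency of D
    return U @ D @ U.conj().T
```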
Integrators that are not based on the Magnus series, such as the Runge-Kutta method <cit.> or the Crank–Nicolson propagator <cit.>
𝐔(t+Δ t,t)
≈ [1 - i𝐅(t+Δ t/2) Δ t/(2ħ)] / [1 + i𝐅(t+Δ t/2) Δ t/(2ħ)],
can also be used. However, unlike the midpoint Magnus or the Crank–Nicolson, the Runge-Kutta integrator does not preserve the unitary property of U and can lead to instabilities in the time evolution, since neither the electron number nor the total energy is strictly conserved during a time propagation that is not unitary <cit.>. On another note, the exponential Runge-Kutta method <cit.> was successfully used to solve equations of motion in the time-dependent coupled-cluster theory <cit.> which is based on the exponential coupled-cluster parametrization of the wave function <cit.>.
The issue of stable time propagation requires even more attention in the TDHF and TDKS theories. Due to the mean-field, XC, and HF exchange terms, the Fock matrix in the LvN equation depends on the density matrix D(t) that is not known at the time t. Hence, the LvN equation with this Fock matrix is nonlinear and must be solved self-consistently, i.e. the Fock matrix is constructed from the density matrix from the previous iterations (referred to as microiterations in the context of time domain methodology). Unless the loop over microiterations is introduced, this implicit time-dependence of the Fock matrix on the unknown density matrix results in the inability to express the future time midpoint Fock matrix 𝐅(t+Δ t/2) that appears in most approximations for the evolution operator. This issue is commonly mitigated by using predictor–corrector or extrapolation–interpolation schemes <cit.>, where the unknown midpoint Fock matrix is first constructed from the previous time step using the linear extrapolation
F(t+Δ t/2) = 2F(t) - F(t-Δ t/2).
Once the D(t+Δ t) is obtained by applying the evolution operator U(t+Δ t,t), a new Fock matrix at t+Δ t is formed. The two Fock matrices at t and t+Δ t are then linearly interpolated to create the updated midpoint Fock matrix
F(t+Δ t/2) = 1/2F(t) + 1/2F(t+Δ t),
and the time propagation restarts from the initial time t. This process is repeated until the self-consistence is reached. A thorough comparison of various propagation schemes in the context of nonrelativistic TDKS can be found in the study of Pueyo et al. <cit.>.
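A schematic version of this extrapolation–interpolation loop is sketched below. It reuses the `magnus2_step` routine from the previous sketch and assumes a user-supplied `build_fock(D)` callback (a hypothetical name) that assembles the Fock matrix from a given density matrix; convergence thresholds and iteration counts are illustrative.

```python
import numpy as np

def pc_step(D_t, F_prev_half, build_fock, dt, tol=1e-8, max_iter=20):
    """Predictor-corrector propagation from t to t + dt.

    D_t         : density matrix at time t
    F_prev_half : Fock matrix at t - dt/2 (kept from the previous step)
    build_fock  : callable D -> F (mean-field/XC/HF-exchange build), assumed given
    """
    F_t = build_fock(D_t)
    # Predictor: linear extrapolation to the midpoint, F(t+dt/2) = 2F(t) - F(t-dt/2)
    F_half = 2.0 * F_t - F_prev_half
    for _ in range(max_iter):
        D_new = magnus2_step(D_t, F_half, dt)           # propagate with the guessed midpoint Fock
        F_new = build_fock(D_new)                       # Fock matrix at t + dt
        F_half_upd = 0.5 * (F_t + F_new)                # corrector: midpoint interpolation
        if np.linalg.norm(F_half_upd - F_half) < tol:   # self-consistency reached
            return D_new, F_half_upd
        F_half = F_half_upd                             # restart the propagation from time t
    return D_new, F_half
```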
A detailed analysis is provided in the chapter by Ye et al. <cit.>.
Simulations of X-ray absorption spectra are particularly sensitive to the size of the time step as the core excitations appearing in the high energy region typically involve rapid oscillations of the wave functions that require a very small time step to be described properly <cit.>. For such studies, Ye et al. <cit.> presented a relativistic X2C approach based on the fourth-order commutator-free Magnus propagator that adaptively chooses optimal time step and simulation time. The discussion of the approximate time evolution is expanded further in this volume in the chapter by L. Ye, H. Wang, Y. Zhang, Y. Xiao and W. Liu entitled “Real-Time Time-Dependent Density Functional Theories with Large Time Step and Short Simulation Time”.
§ SIGNAL PROCESSING
RT-TDSCF solves the EOM by direct propagation in the time domain.
This allows one to directly simulate time-resolved experiments and to obtain the entire spectral information from a single real-time calculation.
On the other hand, physical quantities of experimental interest are often defined
in the frequency domain, which creates a demand for techniques that efficiently extract the frequency-domain quantity from the simulations that are carried out with finite numerical accuracy, time step length, and simulation time.
A time-dependent property f(t) can be translated to the frequency domain as the Fourier integral
f̃(ω) = ∫^∞_-∞ f(t) e^iω t dt.
However, due to the presence of periodic oscillations in f(t) that originate in the quantum mechanical evolution containing the excitation energies, the integral in Eq. (<ref>) leads to δ-functions in the frequency domain. Such stick spectra are difficult to describe in numerical simulations, hence, the Fourier integral is replaced with the Laplace transform
f̃(ω) = ∫^∞_0 f(t) e^iω t-γ t dt,
where we assumed that there is no response of the studied system before the perturbation is applied at time t=0, and we introduced a phenomenological damping parameter γ>0. This damping parameter accounts for the fact that practical real-time simulations are performed with a finite simulation time, and truncating the periodically oscillating signal that is not damped results in undesirable features in the spectrum. To this end, γ is empirically set to a value inversely proportional to the simulation time to ensure that the oscillations diminish before the simulation is terminated. Analytic evaluation of Eq. (<ref>) for periodic signals leads to a series of Lorentzian peaks with finite width in the spectrum.
Signal processing techniques serve as a means to extract the frequency-dependent molecular properties f̃(ω) from the simulation results f(t), i.e. to approximate the integrals in Eqs. (<ref>) and (<ref>).
In this section we present an overview of approaches used in the context
of RT-TDSCF, starting with the simplest one, the Discrete Fourier transform, that is
instructive in order to explain the relationship between the time and frequency domains,
and then moving to the more sophisticated methods that achieve better resolution
from a shorter simulation.
In general, spectra with a high density of states pose a bigger challenge to
the signal processing methods.
§.§ Discrete Fourier transform
In practice, RT-TDSCF simulations are performed in a series of discrete time steps t_j for which the induced dipole moment is calculated from a trace of the
dipole moment matrix and the time-dependent density matrix
μ^ind(t_j) = Tr[𝐏𝐃(t_j)] - μ^static,
where the static dipole moment is calculated as μ^static = Tr[𝐏𝐃_0].
The most straightforward approximation to Eq. (<ref>) is the discrete Fourier transform
f̃_k
=
∑_j=0^n-1Δ t f_j e^2π i jk/n - γ jΔ t
.
Here, k = 0, 1, …, n-1 where n is the number of time steps and ω_k = 2π k / (nΔ t)
is the k-th frequency point. The coefficients f_j ≡μ^ind(t_j) and f̃_k ≡μ^ind(ω_k) represent the components of the induced dipole moment in time and frequency domains, respectively.
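As an illustration, the damped transform of Eq. (<ref>) can be evaluated with an off-the-shelf FFT. The sketch below assumes a signal sampled on a uniform time grid and uses numpy's inverse FFT, whose phase convention matches the e^{+2πijk/n} factor above; the function name is illustrative.

```python
import numpy as np

def damped_dft(f_t, dt, gamma):
    """Damped discrete Fourier transform of Eq. (<ref>).

    f_t   : time-domain samples f_j = f(t_j), t_j = j*dt (e.g. the induced dipole)
    gamma : phenomenological damping parameter
    Returns (omega_k, f_omega_k) on the grid omega_k = 2*pi*k/(n*dt).
    """
    n = len(f_t)
    j = np.arange(n)
    damped = np.asarray(f_t) * dt * np.exp(-gamma * j * dt)
    # numpy's ifft uses the e^{+2*pi*i*j*k/n} phase convention of Eq. (<ref>);
    # its 1/n prefactor is undone by multiplying with n
    f_omega = n * np.fft.ifft(damped)
    omega = 2.0 * np.pi * j / (n * dt)
    return omega, f_omega
```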
§.§ Relationship between the time and frequency domains
The frequency-domain results are obtained by discrete Fourier transform of the time-domain
results. If we perform a time-domain simulation that consists of n steps of length
Δ t, the Fourier transform yields a frequency-domain interval of length
Ω
=
2π/Δ t
.
Since the number of points in both domains is the same, the resolution in the frequency
domain is
Δω
=
2π/n Δ t
.
This relationship tells us that in order to increase the resolution in calculated spectra
we need to increase the total simulation length nΔ t by increasing the number of
time steps (which makes the simulation more time consuming) or increasing the size of the
time step (which puts extra demands on the solver). However, because the frequency-domain
interval depends inversely on the time-step length, see Eq. (<ref>),
in order to describe high-frequency parts of spectra, such as in X-ray spectroscopies,
shorter time steps are required. Therefore, a balance between the resolution, frequency
range and computational cost has to be achieved by choosing suitable simulation parameters.
Eq. (<ref>) represents the major limitation for obtaining spectra with high resolution when using the discrete Fourier transform.
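For orientation, the two relations can be combined into a quick back-of-the-envelope check; the numbers below are purely illustrative choices (in atomic units), not recommendations for any particular application.

```python
import numpy as np

# Illustrative check of the frequency window and resolution for a given simulation setup
dt = 0.05                        # assumed time-step length (a.u.)
n = 40000                        # assumed number of time steps
Omega = 2 * np.pi / dt           # accessible frequency window, ~126 a.u.
d_omega = 2 * np.pi / (n * dt)   # frequency resolution, ~3.1e-3 a.u. (roughly 0.09 eV)
print(Omega, d_omega)
```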
§.§ Padé approximants
The use of the Padé approximation as a signal processing technique <cit.> was introduced to RT-TDSCF
by Bruner, LaMaster, and Lopata <cit.>
and quickly gained popularity due to its advantage compared to the widespread Fourier transform.
In the Padé approximants, the expression for the Fourier components f̃(ω) (e.g. induced dipole moment in frequency domain)
f̃(ω) = ∑_j=0^M f(t_j) Δ t e^iω jΔ t e^-γ jΔ t
,
is understood as a power series
f̃(ω) = ∑_j=0^M c_j (z_ω)^j
,
where z_ω = e^iωΔ t and c_j = f(t_j) Δ t e^-γ jΔ t
that can be approximated as a division of two other power series using the Padé ansatz
f̃(z) = ∑_k=0^N a_k z^k/∑_k'=0^N b_k' z^k'
,
where N=M/2.
The comparison of Eqs. (<ref>) and (<ref>) leads to a system of
equations, or a matrix equation, for the coefficients b
b = G^-1d
,
where G_km = c_N-m+k and d_k = -c_N+k.
The system is overdetermined, leading to a customary choice of setting b_0 = 1.
The knowledge of the b-coefficients can then be used to determine the a-coefficients
from a_k = ∑_m=0^k b_m c_k-m. The a and b coefficients are subsequently
used to approximate the Fourier transform f̃(ω) with the advantage that
a and b are not functions of frequency. Hence, the frequency can be chosen
at will without the limitation of Eq. (<ref>), which allows
the spectrum to be evaluated with arbitrary frequency resolution.
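A compact sketch of this procedure is given below. It follows the equations above literally (solve for b with b_0 = 1, build a, evaluate the ratio at arbitrary z = e^{iωΔt}); the function name and the dense linear solve are implementation choices for illustration, not the algorithm of any specific code.

```python
import numpy as np

def pade_spectrum(c, omega_grid, dt):
    """Evaluate the Pade approximant on an arbitrary frequency grid.

    c          : damped time-domain coefficients c_j = f(t_j)*dt*exp(-gamma*j*dt), j = 0..M
    omega_grid : frequencies at which the spectrum is wanted (not tied to 2*pi*k/(n*dt))
    """
    M = len(c) - 1
    N = M // 2
    # Solve G b = d for b_1..b_N (b_0 = 1 by convention); G_km = c_{N-m+k}, d_k = -c_{N+k}
    G = np.array([[c[N - m + k] for m in range(1, N + 1)] for k in range(1, N + 1)])
    d = np.array([-c[N + k] for k in range(1, N + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(G, d)))
    # a_k = sum_{m=0}^{k} b_m c_{k-m}
    a = np.array([sum(b[m] * c[k - m] for m in range(k + 1)) for k in range(N + 1)])
    z = np.exp(1j * np.asarray(omega_grid) * dt)
    num = sum(a[k] * z**k for k in range(N + 1))
    den = sum(b[k] * z**k for k in range(N + 1))
    return num / den
```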
In practice, the Padé approximation can suffer from numerical instabilities.
To mitigate this, the original work <cit.> suggested to combine it with a MO-based decomposition <cit.>,
where each occupied–virtual MO pair that contributes to the net frequency-dependent polarizability
is transformed into the frequency domain using its own Padé approximation, i.e.
the coefficients a_k and b_k become different for each MO pair.
A spectrum constructed from an individual MO pair is typically sparser and thus
less prone to defective behaviour.
The final spectrum is then a sum of spectra over all MO pairs.
§.§ Compressed sensing
Compressed sensing is a technique based on the observation that a small number of points in time domain is sufficient to sample a frequency domain signal that is sparse, i.e. when many Fourier-domain coefficients f̃_k in Eq. (<ref>) are near zero. The application of compressed sensing in RT-TDDFT was first explored by Andrade, Sanders, and Aspuru-Guzik <cit.> in the context of electronic and nuclear dynamics for the calculation of vibrational and optical spectra. The compressed sensing method recasts the problem of finding the Fourier coefficients into solving a system of linear equations
Af̃⃗̃ = f⃗,
where f_j ≡ (f⃗)_j and f̃_k ≡ (f̃⃗̃)_k are vectors with components in time and frequency domains, respectively, and A is the matrix containing the complex phase factors. This system allows for a different number of time and frequency points j=0,1,…,n_t-1 and k=0,1,…,n_ω-1. For a small number of time points n_t < n_ω the system is underdetermined with infinitely many solutions f̃⃗̃. The sparse solution with the largest number of zero coefficients is then obtained by finding f̃⃗̃ that minimize the norm |f̃| while satisfying
|Af̃⃗̃ - f⃗| < η,
where η≪ 1 accounts for a certain amount of numerical noise in the signal. Even though the benefits of compressed sensing are expected to be lower for dense signals, for systems with low density-of-states, significant savings can be obtained by reconstructing the spectra from shorter time simulations <cit.>.
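As a self-contained stand-in for a basis-pursuit solver, the sketch below uses plain iterative soft-thresholding (ISTA) to obtain a sparse frequency-domain vector that approximately reproduces the time samples. This is one possible solver choice for the sparse-recovery problem and not necessarily the one used in the cited implementation; the regularization strength and iteration count are illustrative.

```python
import numpy as np

def ista_sparse_spectrum(A, f_t, lam=1e-3, n_iter=500):
    """Sparse recovery of frequency-domain coefficients from few time samples.

    Solves min_x 0.5*||A x - f_t||^2 + lam*||x||_1 by iterative soft-thresholding;
    A is the (n_t x n_omega) matrix of complex phase factors, f_t the time signal.
    """
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the least-squares gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - f_t)   # gradient of the least-squares term
        z = x - grad / L
        # complex soft-thresholding: shrink the modulus, keep the phase
        mag = np.abs(z)
        x = np.where(mag > lam / L, (1.0 - lam / (L * mag + 1e-30)) * z, 0.0)
    return x
```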
§.§ Filter diagonalization
Time signals often take the form of a sum of damped oscillations, i.e.
f(t) = ∑_m d_m e^-iω_m t,
where the frequencies ω_m can be considered complex to account for the damping factors. In an ideal case (with no numerical noise), this is also the form obtained from the quantum mechanical time propagation discussed in Section <ref>, with ω_m and d_m corresponding to excitation energies and oscillator strengths, respectively. Determining the values of ω_m and d_m from the known signal f(t) is referred to as the harmonic inversion problem <cit.>, and it was the connection to the quantum mechanical evolution that led to the formulation of harmonic inversion as an eigenvalue problem <cit.>. The signal f(t) in Eq. (<ref>) can be considered a correlation function
f(t) = ⟨ψ_0|e^-iΩ̂ t|ψ_0⟩,
for some unknown Hamiltonian Ω̂ and an initial state ψ_0. The frequencies ω_m are the eigenvalues of Ω̂ and are obtained by solving the generalized eigenvalue equation
Ub⃗_m = u_m Sb⃗_m,
where Û = e^-iΩ̂Δ t, u_m = e^-iω_mΔ t, and Δ t is the time step. This equation can be formulated in Krylov basis constructed by a consecutive application of the operator Û on a reference vector v⃗_0 as v⃗_j := Û^j v⃗_0. In such a basis, the matrices U and S take simple forms of U_jj' = f_j+j'+1 and S_jj' = f_j+j', respectively, obtained from the time signal as f_j ≡ f(jΔ t). The coefficients d_m are calculated using the eigenvectors b⃗_m as √(d_m) = b⃗_m^Tf⃗. Note, that this method assumes that the time signal has the form of Eq. (<ref>), but it is not necessary that the signal was generated by an actual propagation with a quantum mechanical Hamiltonian, nor do we need to know the explicit forms of Ω̂ or ψ_0.
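In compact form, the Krylov-basis matrices and the generalized eigenvalue problem can be built directly from the sampled signal. The following sketch assumes a (noise-free) signal of the damped-oscillation form, requires at least 2*n_basis samples, and leaves out the normalization details of the eigenvectors; names and defaults are illustrative.

```python
import numpy as np
from scipy.linalg import eig

def harmonic_inversion(f, dt, n_basis):
    """Small-scale harmonic inversion in the Krylov basis.

    f       : time signal f_j = f(j*dt), assumed of damped-oscillation form, len(f) >= 2*n_basis
    Returns complex frequencies omega_m and amplitudes d_m.
    """
    f = np.asarray(f)
    j = np.arange(n_basis)
    S = f[j[:, None] + j[None, :]]       # S_{jj'} = f_{j+j'}
    U = f[j[:, None] + j[None, :] + 1]   # U_{jj'} = f_{j+j'+1}
    u, B = eig(U, S)                     # generalized eigenvalue problem U b = u S b
    omega = 1j * np.log(u) / dt          # u_m = exp(-i*omega_m*dt)
    d = (B.T @ f[:n_basis]) ** 2         # sqrt(d_m) = b_m^T f (up to eigenvector normalization)
    return omega, d
```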
Even though Eq. (<ref>) provides a formally exact solution to the harmonic inversion problem (up to the dimension of the Krylov vector space), it suffers from the cubic scaling 𝒪(n^3) of the diagonalization procedure with the number of time steps compared to 𝒪(nlog n) scaling of the Discrete Fourier Transform. The filter diagonalization method circumvents this problem by transforming the matrices in Eq. (<ref>) from the Krylov basis v⃗_j into the Fourier basis
w⃗_k = ∑_j=0^n-1 e^ijΔ t ξ_kv⃗_j,
where the frequencies ξ_k can be chosen to form an equidistant grid. Since the basis vectors w⃗_k are localized in the frequency domain, the eigenvectors of the operator Û can be expressed using a small number k=1,…, n_win≪ n of w⃗_k. Nonnegligible contributions to the m-th eigenvector arise only from basis vectors for which ξ_k≈ω_m. Thus, matrices U and S expressed in this basis exhibit large diagonal and diminishing off-diagonal terms. This enables defining a small spectral window [ω_min,ω_max] of n_win frequencies and diagonalizing the matrix U in Eq. (<ref>) in the Fourier subspace spanned by w⃗_k. The main disadvantage of the filter diagonalization method is the assumption that the time signal takes the form of Eq. (<ref>). Even though the method can efficiently circumvent the uncertainty relation in Eq. (<ref>) and provide high resolution spectra of sparsely distributed dominant peaks, practical real-time simulations are hampered by numerical noise, which complicates the use of the filter diagonalization method for extracting spectral information in highly dense regions.
§ CALCULATION OF MOLECULAR PROPERTIES USING REAL-TIME METHODS
In this section we review some of the areas where relativistic RT-TDSCF methods have been
employed to combine the advantages of both the relativistic and real-time treatments.
Moreover, our aim is to explain the physical context of the molecular properties,
to show how to construct a computational protocol for obtaining these properties,
and what aspects of the calculations to pay attention to.
First we explore the calculation of linear and non-linear response properties where real-time
propagation is an alternative to perturbation theory.
Then we focus on non-equilibrium spectroscopies, for which real-time methods are the only viable approach.
We focus on heavy-element systems and X-ray spectroscopies, where relativistic effects are paramount.
Such applications are the primary motivation behind the development of relativistic real-time methods.
§.§ Linear response properties
Linear response properties include some of the most commonly measured spectroscopies
such as electron absorption spectroscopy (EAS), including X-ray absorption spectroscopy (XAS),
and chiroptical spectroscopies such as electron circular dichroism (ECD) and optical
rotatory dispersion (ORD). <cit.>
Therefore, they are both experimentally relevant as well as
provide a good introduction to the workflow and analysis of real-time simulations.
In this section, we consider a molecular system perturbed by a single external field
ℰ⃗(t)
interacting with the molecule within a dipole approximation which induces a time-dependent
dipole response in the molecule.
The response is a physical quantity R(t) calculated as an expectation value of its operator
R̂ from the time-dependent wave function, R(t) = ⟨Ψ(t)|R̂|Ψ(t)⟩.
In cases when R̂ is a one-electron operator, such as the electric dipole operator, its expectation
value can be calculated as the trace that contains the time-dependent one-electron RDM, R(t) = Tr[D(t)R].
Restricting ourselves to electric and magnetic dipole moment operators for the perturbation and response,
different combinations lead to different linear spectroscopies.
First, we follow the induced electric dipole resulting from an electric dipole perturbation,
leading to EAS spectrum and the frequency-dependent index of refraction.
Electron absorption spectroscopy
Electron absorption spectrum at all frequencies from UV/Vis to X-ray is determined by
the complex frequency-dependent polarizability tensorα(ω).
The polarizability tensor connects the induced electric dipole moment to
an applied electric field, which in the frequency domain reads
μ^ind_u (ω)
=
α_uv (ω) ℰ_v (ω) + …
,
where ℰ_v(ω) is the Fourier transform of the external electric field ℰ⃗(t) = ℰ n⃗ F(t), defined by its amplitude ℰ,
directional unit vector n⃗, and time dependence F(t).
The external field couples to the molecular system via the electric dipole operator, resulting
in its appearance in the Fock matrix as the term
V^ext(t)
=
-ℰ F(t) n⃗·P
where P is a matrix representation of the electric dipole moment operator.
By connecting the molecular induced dipole moment, i.e. the polarization of a bulk material,
to the applied external field, the polarizability tensor determines the complex index of refraction
whose real part is the standard index of refraction while the imaginary part corresponds to the
attenuation coefficient describing the absorption of light and appearing in the Beer–Lambert law.
However, the more common way of expressing the absorption spectrum is via the photoabsorption
cross-section tensor
σ(ω) = (4πω/c) Im[ α(ω) ],
where Im denotes the imaginary part, and c is the speed of light.
The absorption spectrum is then the dipole strength function obtained from the rotational average of the tensor σ,
S(ω) = (1/3) Tr[ σ(ω) ],
where Tr is the trace over the Cartesian components.
A calculation of the absorption spectrum defined in Eq. <ref> from
a real-time simulation then proceeds in the following steps:
* Obtain the reference ground-state density matrix D_0 by solving the time-independent SCF equation.
* Perturb the ground state to obtain the initial state D(t_0).
This is usually performed by a short “kick” in the time domain
that corresponds to a broadband pulse in the frequency domain, thus exciting all molecular transitions.
A pure form of such a pulse is the Dirac δ function
ℰ⃗(t) = ℰn⃗δ(t-t_0)
which in practical simulations can be represented numerically by a narrow
Gaussian function or rectangle, or by an analytic expression
D(t_0) = e^iℰn⃗·P/ħD_0 e^-iℰn⃗·P/ħ
.
which represents an infinitesimally short time evolution by U(t_0+ε,t_0-ε) driven by the δ(t-t_0) field in the limit ε→ 0 <cit.>.
* Propagate the density matrix D(t_0) in time for n time steps of length Δ t
while recording the induced dipole moment μ⃗(t) = Tr[D(t)P] at each time step (a minimal code sketch combining steps 2-4 is given after this list).
* Transform the induced dipole moment to the frequency domain using some of the techniques
discussed in Section <ref>, i.e. calculate
μ⃗(ω) = ∫_t_0^∞ dt μ⃗(t) e^iω t - γ t,
where the damping term e^-γ t is introduced to resolve the problem that arises when
periodic signals are truncated in numerical simulations with finite time length.
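The four steps can be strung together in a few lines. The sketch below (for a single kick direction) assumes that the ground-state density matrix D0, a list of Hermitian dipole matrices P in an orthonormal basis, and a build_fock routine are supplied by a separate SCF code, and it reuses the magnus2_step, pc_step, and damped_dft helpers from the earlier sketches; the 4π/(3c) prefactor and the average over the three kick directions are omitted, and all numerical parameters are illustrative.

```python
import numpy as np

def delta_kick_spectrum(D0, P, build_fock, kick=5e-4, dt=0.1, n_steps=20000, gamma=4e-3):
    """Steps 2-4 of the EAS protocol for one kick direction (here: x), hbar = 1."""
    # Step 2: delta-kick initial state D(t0) = exp(i*kick*P_x) D0 exp(-i*kick*P_x)
    e, C = np.linalg.eigh(P[0])
    K = (C * np.exp(1j * kick * e)) @ C.conj().T
    D = K @ D0 @ K.conj().T
    mu_static = np.trace(P[0] @ D0).real
    # Step 3: propagate and record the induced dipole along x
    mu_t = np.empty(n_steps)
    F_half = build_fock(D)                       # crude initial guess for the first midpoint Fock
    for i in range(n_steps):
        D, F_half = pc_step(D, F_half, build_fock, dt)
        mu_t[i] = np.trace(P[0] @ D).real - mu_static
    # Step 4: damped Fourier transform; the returned quantity is proportional to the
    # x-contribution of S(omega), prefactors and direction averaging omitted
    omega, mu_w = damped_dft(mu_t, dt, gamma)
    return omega, omega * mu_w.imag / kick
```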
A graphical summary of these steps is shown in Figure <ref> for the case
of the EAS spectrum of the mercury atom (SVWN5 functional <cit.>, uncontracted Dyall's VDZ basis <cit.>)
calculated from a four-component RT-TDDFT simulation.
Besides illustrating the workflow of an EAS calculation from RT-TDDFT, the figure also demonstrates
the importance of including relativistic effects in such simulations by capturing the formally
forbidden singlet–triplet transition.
Even though non-relativistic approaches to these transitions have been presented employing for example
a spin-dependent perturbation, <cit.>
it is only in relativistic theories that include spin–orbit coupling that singlet–triplet transitions appear
in the spectra naturally from first principles and with correct intensities.
Therefore, several works on relativistic RT-TDSCF for electron absorption spectroscopy have focused on describing
singlet–triplet transitions in the spectra. <cit.>
X-ray absorption
X-ray absorption spectroscopy (XAS) is a subset of electron absorption spectroscopy where high-frequency X-ray radiation is absorbed in molecules
while exciting electrons from core orbitals. Therefore, in XAS the same physical quantities are evaluated.
However, relativistic effects, both scalar relativistic effects manifesting as shifts of spectral lines, as well as spin–orbit
interaction causing the splitting of spectral lines, are more pronounced in XAS necessitating the use of computational methods based
on relativistic Hamiltonians. These effects are observable even in light (3rd row) elements <cit.>, highlighting the need for
a relativistic description also in these cases.
The computational protocol for calculating XAS is the same as presented in the previous paragraph for EAS in the UV/Vis frequency range with two important caveats:
(i) X-ray absorption occurs at higher frequencies so that the settings of the simulation such as time step and
the number of time steps have to be adjusted in order to reach the desired frequencies with sufficient resolution
and numerical accuracy;
(ii) in simulations using finite atom-centered basis sets, a broadband δ-type pulse excites all molecular modes
including excitations from valence orbitals to high-lying above-ionization virtuals that may fall into the XAS frequency range,
but are non-physical relics of an improper description of continuum states, and thus have to be eliminated either in post-processing
or during the application of the external field. <cit.>
Chiroptical spectroscopies
Optical activity and circular dichroism are effects arising when chiral matter interacts with polarized light.
Chiral molecules possess a different complex index of refraction for right- and left-handed circularly polarized (CP) light.
The real part determines the different refraction of CP light and also the rotation of the plane of polarization of
linearly polarized (LP) light, while the imaginary part determines the difference in absorption of CP light and the induced ellipticity
of LP light. <cit.>
At the molecular level, the property underpinning these processes is the electric dipole–magnetic dipole tensor β (also known as the Rosenfeld tensor), which to first order connects the induced electric dipole moment μ⃗^ind to the time derivative
of the external magnetic field B⃗, as well as the induced magnetic dipole moment m⃗^ind to the time derivative of the external electric field E⃗,
μ^ind_i (ω) = β_ij (ω) Ḃ_j (ω),
m^ind_i (ω) = - β_ji (ω) Ė_j (ω).
Note that we have restricted ourselves to isotropic samples where a quadrupolar contribution that is non-zero for a single molecule
vanishes after averaging over molecular orientations.
RT-TDSCF calculations of chiroptical properties are based on Eq. (<ref>) rather than on the direct simulation
of an interaction of molecules with circularly polarized light.
The calculation proceeds analogously to the computational protocol outlined here for electron absorption spectroscopy:
a molecule in its ground state D_0 is perturbed by an external electric field in the form of a δ-pulse
and the induced magnetic dipole moment
m⃗^ind(t) = Tr[MD(t)] - m⃗^static
,
is evaluated in the course of the simulation.
In Eq. (<ref>), m⃗^static = Tr[MD_0] is the static magnetic dipole moment
and
M^4c_μν =
-1/(4c) [ 0   ⟨X_μ | (r⃗_g×σ⃗)(σ⃗·p⃗) | X_ν⟩ ;  ⟨X_μ | (σ⃗·p⃗)(r⃗_g×σ⃗) | X_ν⟩   0 ],
is the matrix representation of the magnetic dipole moment operator in the RKB basis, with r⃗_g = r⃗-R⃗_g standing for the electronic position operator relative to a fixed gauge R⃗_g.
The induced magnetic dipole moment is transformed to the frequency domain and used to calculate
the Rosenfeld tensor via
β_ji(ω)
=
- i m^ind_i (ω)/ℰ
,
where ℰ is again the amplitude of the perturbing external field.
Chiroptical properties are notoriously sensitive to different parameters of a calculations such as the choice of functional,
basis set, conformation of the molecule, solvent effects etc. <cit.>. Using relativistic RT-TDDFT it was shown <cit.>
on a series of model molecules – analogs of dimethyloxirane with the oxygen atom replaced with heavier homologues (S, Se, Te, Po, Lv),
that relativity alone can change the sign of the spectral function, i.e. the factor discriminating between the enantiomers.
An example of such a spectrum is shown in Figure <ref> for dimethylpolonirane (PBE functional <cit.>,
uncontracted Dyall's aug-cVDZ basis <cit.> for Po, and uncontracted Dunning's aug-cc-pVDZ <cit.> for light elements).
Therefore, relativistic real-time methods should be an important tool in practical calculations of chiroptical spectra,
especially of molecules containing heavy elements.
§.§ Nonlinear optical properties
For a weak external field, the spectra resulting from the real-time propagation will be equivalent to the results obtained using response theory.
However, in stronger fields, real-time simulations contain corrections of higher orders.
This is seen from comparing the perturbation expansion for the induced dipole moment,
schematically
μ⃗^ind
=
α^PTℰ⃗ + β^PTℰ⃗^2 + γ^PTℰ⃗^3 + …
,
with the way the induced dipole moment from a real-time simulation is processed,
again schematically
μ⃗^ind
=
α^RTℰ⃗
=
[ α^PT + β^PTℰ⃗ + γ^PTℰ⃗^2 + …] ℰ⃗
.
In Eqs. (<ref>) and (<ref>) the indices PT and RT refer to perturbation theory and real-time, respectively,
and the molecular properties correspond to
polarizability (α), first hyperpolarizability (β), and second hyperpolarizability (γ).
While this feature of real-time methods enables the study of strong-field effects in spectra,
the properties of higher orders are incorporated in the non-perturbative μ⃗^ind or α^RT and are not readily available for further analysis.
However, in some applications, it is desirable
to know the values of higher-order responses individually. This is also possible
to achieve using real-time methods by combining simulations with various field strengths.
To show how a method for obtaining nonlinear responses from real-time simulations can work,
let us examine more closely the Taylor expansion of the time-dependent induced dipole moment
μ_i(t) = μ_ij^(1)(t) ℰ_j + μ_ijk^(2)(t) ℰ_j ℰ_k + μ_ijkl^(3)(t) ℰ_j ℰ_k ℰ_l + …
where ℰ_j combines the amplitude and direction of the external field, i.e. ℰ⃗ = ℰ n⃗,
and we defined the n-th order contributions μ^(n) to the induced dipole moment.
These contributions are convolutions of the time-dependent (hyper)polarizability tensors with
the time dependence of the external field(s)
μ_ij^(1)(t) = ∫ dt_1 α_ij(t-t_1) F(t_1),
μ_ijk^(2)(t) = 1/2!∫ dt_1 ∫ dt_2 β_ijk(t-t_1,t-t_2) F(t_1) F(t_2),
μ_ijkl^(3)(t) = 1/3!∫ dt_1 ∫ dt_2 ∫ dt_3 γ_ijkl(t-t_1,t-t_2,t-t_3) F(t_1) F(t_2) F(t_3) .
Again, the experimentally relevant quantities are the frequency-dependent (hyper)polarizability tensors
α_ij(ω) = ∫ dt_1 α_ij(t_1) e^-i ω t_1,
β_ijk(ω_1, ω_2) = ∫ dt_1 ∫ dt_2 β_ijk(t_1, t_2) e^-i ω_1 t_1 e^-i ω_2 t_2,
γ_ijkl(ω_1, ω_2, ω_3 ) = ∫ dt_1 ∫ dt_2 ∫ dt_3 γ_ijkl(t_1,t_2,t_3) e^-i ω_1 t_1 e^-i ω_2 t_2 e^-i ω_3 t_3
.
If we choose a harmonic external field, V^ext(t) = ℰ cos(ωt) n⃗·P,
the integrals in Eqs. (<ref>) and (<ref>) can be simplified to obtain expressions
relating μ^(n) to specific nonlinear optical (NLO) properties
μ_ij^(1)(t) = α_ij(-ω;ω) cos(ω t) ,
μ_ijk^(2)(t) = 1/4[ β_ijk(-2ω;ω,ω) cos(2ω t) + β_ijk(0;ω,-ω) ] ,
μ_ijkl^(3)(t) = 1/24[ γ_ijkl(-3ω;ω,ω,ω) cos(3ω t)
+ 3γ̅_ijkl(-ω;ω,ω,-ω) cos(ω t) ] .
The frequency-dependent molecular property tensors in equations (<ref>)
are the dipole polarizability α_ij(-ω;ω),
and higher-order properties governing processes involving several photons,
namely, the second harmonic generation (SHG) coefficient β_ijk(-2ω;ω,ω),
the optical rectification (OR) coefficient β_ijk(0;ω,-ω), the third
harmonic generation (THG) coefficient γ_ijkl(-3ω;ω,ω,ω) and
the averaged degenerate four-wave mixing (DFWM) coefficient γ̅_ijkl(-ω;ω,ω,-ω). <cit.>
The workflow of the procedure for evaluating NLO properties from real-time simulations <cit.> is as follows
* Starting from a converged ground-state SCF, perform several real-time simulations employing a cosine-shaped
external field with different amplitudes of the field, for example
ℰ_1 = ℰ, ℰ_2 = 2ℰ, ℰ_3 = -ℰ and ℰ_4
= -2ℰ.
Note that to improve the stability of time evolution and smoothness of extracted responses,
the cosine function is multiplied with a linear envelope ω t / (2π) in the first period. <cit.>
Different envelopes with improved performance have also been suggested. <cit.>
* Calculate μ^(n) as derivatives of the induced dipole moment
μ_ij^(1)(t) = ∂μ_i(t)/∂ℰ_j |_ℰ=0, μ_ijk^(2)(t) = (1/2) ∂^2 μ_i(t)/∂ℰ_j ∂ℰ_k |_ℰ=0, μ_ijkl^(3)(t) = (1/6) ∂^3 μ_i(t)/∂ℰ_j ∂ℰ_k ∂ℰ_l |_ℰ=0,
by means of numerical differentiation – a finite field method in each time step.
For example, the first- and second-order responses can be calculated from simulations employing fields from step 1)
with precision of the order ℰ^4 via
μ_ij^(1)(t)
= ( 8 [ μ_i(t,ℰ_j) - μ_i(t,-ℰ_j) ] - [ μ_i(t,2ℰ_j) - μ_i(t,-2ℰ_j) ] ) / (12ℰ_j),
μ_ijj^(2)(t) = ( 16 [ μ_i(t,ℰ_j) + μ_i(t,-ℰ_j) ] - [ μ_i(t,2ℰ_j) + μ_i(t,-2ℰ_j) ] ) / (24ℰ_j^2).
* Fit the obtained n-th order induced dipole moment contributions to the analytical expressions in Eqs. (<ref>) to
evaluate numerical values of the NLO properties (a code sketch of steps 2 and 3 follows this list).
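A minimal sketch of steps 2 and 3 is given below. The function names are illustrative; the finite-difference stencils are exactly those printed above (the μ(ℰ=0) term vanishes because the induced dipole is zero at zero field), and the fit targets the cos(2ωt) and constant terms of the second-order response.

```python
import numpy as np

def nlo_responses(mu_p, mu_m, mu_2p, mu_2m, E):
    """First- and second-order dipole responses from four field amplitudes.

    mu_p, mu_m, mu_2p, mu_2m : arrays mu_i(t) recorded with fields +E, -E, +2E, -2E
    Implements the finite-difference formulas of step 2 (error of order E^4).
    """
    mu1 = (8.0 * (mu_p - mu_m) - (mu_2p - mu_2m)) / (12.0 * E)
    mu2 = (16.0 * (mu_p + mu_m) - (mu_2p + mu_2m)) / (24.0 * E**2)
    return mu1, mu2

def fit_second_order(mu2, t, w):
    """Step 3 (illustrative): least-squares fit of mu2(t) to (1/4)[b_SHG*cos(2wt) + b_OR]."""
    basis = np.column_stack([0.25 * np.cos(2.0 * w * t), 0.25 * np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(basis, mu2, rcond=None)
    return coeffs  # [beta_SHG, beta_OR]
```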
An example of such a real-time finite field procedure is depicted in Figure <ref> for the second-order response μ_xxx^(2)(t) of W(CO)5py, py = pyridine, at the 1eX2C level of theory
(B3LYP functional <cit.>, Dyall’s uncontracted valence DZ basis set <cit.> for W,
uncontracted aug-cc-pVDZ basis <cit.> for the light elements).
The fitting was used to determine the second harmonic generation and optical rectification coefficients β^SHG_xxx and β^OR_xxx,
respectively. The figure is based on data underpinning Ref. <cit.>, where it was shown that the inclusion of relativistic effects contributed about 35% of the
final value, highlighting the importance of a relativistic treatment in NLO applications where heavy metal-containing compounds are of interest due to the
favourable electronic properties of the metallic centre <cit.>.
A special category of non-linear phenomena is high harmonic generation (HHG) during which photons from
a strong laser recombine into fewer photons of higher energy via an interaction with a material.
A HHG spectrum thus contains peaks corresponding to multiples of the frequency of the laser field.
HHG has practical importance both as a spectroscopy technique as well as a means of generating
coherent high-frequency radiation.
HHG presents a challenge for theoretical modelling due to the necessity of using strong fields
– several orders of magnitude stronger than the applications discussed so far, thus requiring
a stable propagation, and due to the
requirements on basis sets that need to be able to describe electrons oscillating far from nuclei.
The first relativistic RT-TDDFT calculations of HHG were presented by De Santis et al. using
the PyBerthaRT program for Au2, capturing harmonics up to the 13th order. <cit.>
§.§ Non-equilibrium spectroscopies
So far we have discussed molecular properties where formulations in terms of perturbation theory
exist, and are usually the preferred mode of calculation. However, real-time methods are particularly well
suited for the simulation of experiments where the use of response theory would be too cumbersome.
Such is the case of non-equilibrium spectroscopies where more than one laser pulses are used to
drive the molecule. In the so-called pump–probe or transient absorption (TA) spectroscopies,
the first pulse (the pump) is used to excite the molecule into a non-equlibrium state while the
second pulse (the probe) then measures the response of the driven molecule. By varying the time
delay between the pump and the probe it is possible to follow quantum dynamics of electrons
in molecules in real time.
While a response theory-based description for pump–probe experiments exists in the form of
non-equilibrium response theory,
the ability of real-time methods to tailor the pulse shape to match the experiments
and handle strong fields offers a distinct advantage over perturbative techniques.
In the case of transient absorption spectroscopies, two external pulses are used in the simulation,
the pumpℰ(t)and the probeℱ(t)as introduced in the Fock matrix
in Eq. (<ref>).
The pump first excites the molecule
to a non-stationary excited state. This perturbed state then evolves in time and its evolution is
probed by the second pulse applied after a time delayτ.
As an example, let us consider a set-up with the pump pulse taking the form
ℰ⃗(t) = n⃗ ℰ(t) = n⃗ ℰ cos^2(π (t-t_0)/T) sin(ω_0 t + ϕ),
with amplitude ℰ, polarization direction n⃗, and the
pulse shape defined by the carrier frequency ω_0 of a sine wave, a cos^2 envelope, the carrier–envelope phase (CEP) ϕ, and time duration T.
The carrier frequency is usually tuned to an excitation energy of the molecule
which then becomes the primary excited state in the superposition state created by
the pump. However, even with relatively large amplitudes, the ground state
remains the most populated one.
For the probe, we use a broadband δ-function pulse
ℱ⃗(t) = m⃗ ℱ(t) = m⃗ ℱ_0 δ(t-(T+τ)),
applied at time τ after the pump pulse,
that similarly to the case of linear spectroscopies induces a time-dependent
dipole moment that can be processed to yield an absorption spectrum.
However, since the initial state now corresponds to the superposition state
instead of pure ground state, the spectrum contains imprints of quantum dynamics
of the non-equilibrium state.
The probe pulse can be applied while the pump is still active, overlapping regime,
or after the pump has been turned off, non-overlapping regime.
The final TA spectra are obtained within the RT-TDDFT framework from the differential induced dipole moment
Δμ^TAS_uv(t)
=
Tr{𝐏_u [𝐃_v^pp(t) - 𝐃_v^p(t)]}≡μ^ind,pp_uv(t) - μ^ind,p_uv(t),
u,v ∈{x,y,z},
where μ^ind_uv(t) denotes the expectation value of the dipole
operator. The computation of TA spectra involves performing two simulations
for recording the dipole moment at each time step; these simulations and their
quantities are denoted by p and pp subscripts, indicating that the real-time
propagation used pump-only pulse and pump together with the probe pulse,
respectively.
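Schematically, the post-processing of the two dipole traces reduces to a subtraction followed by the damped transform of Section <ref>. The sketch below reuses the damped_dft helper introduced there, assumes both traces were recorded on the same time grid, and truncates the signal at the probe time, which is one possible convention among several.

```python
import numpy as np

def transient_absorption(mu_pp, mu_p, dt, gamma, probe_time):
    """Differential TA signal from two recorded dipole traces.

    mu_pp      : induced dipole from the pump+probe simulation (array over time steps)
    mu_p       : induced dipole from the pump-only simulation on the same grid
    probe_time : T + tau, the instant at which the delta-probe is applied
    """
    start = int(round(probe_time / dt))
    delta_mu = np.asarray(mu_pp)[start:] - np.asarray(mu_p)[start:]
    omega, dmu_w = damped_dft(delta_mu, dt, gamma)   # reuse the earlier damped-DFT sketch
    return omega, dmu_w.imag
```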
As an example, let us consider the TA spectrum of thiophene
(PBE0 <cit.> functional modified to contain 40% of Hartree–Fock exchange,
uncontracted aug-cc-pVXZ, X = T (S), D (C,H), basis <cit.>)
depicted in Figure <ref>.
Here, the pump carrier frequency was set to correspond to the first excitation energy while the
X-ray absorption at sulfur L_2,3-edges was investigated after the application of the
probe. The spin–orbit splitting of the sulfur 2p orbitals is visible in the spectrum which is
thus correctly described only by relativistic methods, in this case using the 4c Dirac–Coulomb
Hamiltonian (4c) and the amfX2C Hamiltonian (2c).
The pump–probe delay τ adds an extra degree of freedom to the TA spectrum, which is then normally
plotted as a heat map where alternating low- and high-intensity signals can be observed, tracing
the dynamics of the superposition state as induced by the pump pulse.
Due to the increased computational cost of obtaining such a spectrum – several spectra need to be
calculated from simulations with differentτin order to generate such a heat map – efficient
2c relativistic methods are mandatory for these applications.
§ CONCLUSION AND PERSPECTIVES
Real-time methods are based on a direct integration of the quantum mechanical equation of motion.
In non-relativistic quantum chemistry, they gained popularity in previous decades due to their
ability to describe phenomena ranging from linear response properties to interaction with strong
laser fields and time-resolved spectroscopies – areas at the forefront of experimental research.
In relativistic quantum chemistry, the pioneering theory development, computational implementation,
and first applications arrived later and the field has yet to catch up with the breadth of the scope
of applications of its non-relativistic counterparts.
In this chapter we summarized some of the advances of relativistic real-time methods in quantum chemistry
while restricting ourselves to mean-field methods and pure electron dynamics. It has been our ambition
that our introduction explains the fundamental principles of this methodology and inspires readers to join
in this rapidly developing and promising field.
http://arxiv.org/abs/2307.07331v1 | 20230714131711 | How Different Is Stereotypical Bias Across Languages? | [
"Ibrahim Tolga Öztürk",
"Rostislav Nedelchev",
"Christian Heumann",
"Esteban Garces Arias",
"Marius Roger",
"Bernd Bischl",
"Matthias Aßenmacher"
] | cs.CL | [
"cs.CL",
"cs.CY",
"cs.LG",
"stat.ML"
] |
Department of Statistics, LMU Munich, Germany
[email protected]
{chris,esteban.garcesarias,marius.roger,bernd.bischl,matthias}
@stat.uni-muenchen.de
Smart Data Analytics (SDA), University of Bonn, Germany
[email protected]
Munich Center for Machine Learning (MCML), LMU Munich, Germany
How Different Is Stereotypical Bias Across Languages?
Ibrahim Tolga Öztürk1 Rostislav Nedelchev2 Christian Heumann1 Esteban Garces Arias1 Marius Roger1 Bernd Bischl1,3 Matthias Aßenmacher1,3
August 12, 2023
============================================================================================================================================
Recent studies have demonstrated how to assess the stereotypical bias in pre-trained English language models. In this work, we extend this branch of research in multiple different dimensions by systematically investigating (a) mono- and multilingual models of (b) different underlying architectures with respect to their bias in (c) multiple different languages. To that end, we make use of the English StereoSet data set <cit.>, which we semi-automatically translate into German, French, Spanish, and Turkish. We find that it is of major importance to conduct this type of analysis in a multilingual setting, as our experiments show a much more nuanced picture as well as notable differences from the English-only analysis. The main takeaways from our analysis are that mGPT-2 (partly) shows surprising anti-stereotypical behavior across languages, English (monolingual) models exhibit the strongest bias, and the stereotypes reflected in the data set are least present in Turkish models. Finally, we release our codebase alongside the translated data sets and practical guidelines for the semi-automatic translation to encourage a further extension of our work to other languages.
§ INTRODUCTION
Stereotypical bias in pre-trained language models (PLMs) has been an actively researched topic in contemporary natural language processing, with the concept of gender likely being the most prominent one among the examined demographic biases <cit.>. Since PLMs primarily learn from the data gathered from pages and websites open to and created by the public, they also inevitably memorize the stereotypes[A generalized belief about a particular category of people <cit.>.] present in this data. On one hand, it is infeasible to inspect individual entries one-by-one in a data set to ensure it does not possess any stereotypes, due to typically large data set sizes; on the other hand, the data set cannot be considerably downsized, as this would limit the performance of the machine learning model.
Stereotypical decisions driven by predictions derived from deep learning models can render companies or engineers liable for the stereotypical bias. Hence, the likelihood of producing stereotypical outputs must be minimized, and before that, a generic methodology to measure and evaluate the stereotypical bias in the models is essential. To this day, various approaches for stereotypical bias measurement exist in the literature. An inspired approach to measure stereotypical bias in the pre-trained language models was proposed by Nadeem et al. <cit.>, where an English data set and a methodology to measure the stereotypical bias in English language models were constructed. However, this methodology is significantly limited, as it supports only one language, whereas the current state-of-the-art multilingual models support more than 90 languages <cit.>.
Contribution In this work, we evaluate the stereotypical bias in mono- and multilingual models by creating new data sets via semi-automated translation of the StereoSet data <cit.> to four different languages. This enables us to draw comparisons across multiple different dimensions and obtain a more nuanced picture. We determine to which extent pre-trained language models exhibit stereotypical biases by carefully considering multiple different combinations: We 1) examine both mono- and multilingual models, while 2) considering the different commonly used transformer architectures (encoder, decoder, encoder-decoder) and 3) perform our experiments for languages of different families (Indo-European vs. Ural-Altaic).
In a series of experiments, we extend the code[https://github.com/moinnadeem/StereoSethttps://github.com/moinnadeem/StereoSet] published by Nadeem et al. <cit.> to a more generic version allowing for easier application to other languages and models. Additionally, we noticed and corrected some inconsistencies in this code, which we will further discuss in Section <ref>. We publish our codebase
[https://github.com/slds-lmu/stereotypes-multihttps://github.com/slds-lmu/stereotypes-multi]
to nurture further research with respect to stereotypical bias.
§ RELATED WORK
Detecting and mitigating bias and stereotypes in PLMs represents an active and relevant research field, especially since these stereotypes might actually lead to negative real-world consequences for humans. Thus, it has become common practice, to at least try to measure biases and stereotypes when pre-training a new model. The word embedding association tests (WEAT <cit.>) is one important example, showing that European-American names have more positive valence than African-American names in state-of-the-art sentiment analysis applications. Caliskan et al. <cit.> claim that this issue pertains to a much broader context than having intentional bias among different groups of people, as it is more challenging to analyze the underlying reasons for this behavior. Nadeem et al. <cit.> measure the stereotypical bias (for the English language) by creating their own data set, with WEAT being the inspiration for their so-called Context Association Test (CAT). Although this (as well as most other) work is conducted on English PLMs, there is also a notable amount of research on multi-lingual models. For instance, Stanovsky et al. <cit.> conduct an experiment on the comparison of gender bias in some of the widely used translation services. They discovered that Amazon Translate performs second best in the German language among the chosen systems. Moreover, three out of four systems attain the most satisfactory performance for German among eight different languages. A rationale for that might be German's similarity to the English source language. Lauscher and Glavas <cit.> measure different types of cross-lingual biases in seven languages from various language families. They come to the unanticipated finding that the Wikipedia corpus is more biased than a corpus of tweets. Further, their results indicate that FastText is the most biased method among the four examined embedding models. Névéol et al. <cit.> extend the CrowS-Pairs data set <cit.> to the French language and measure the bias while providing the possibility to extend to different languages.
Other than that, there is also work on the sources of bias and on mitigation (i.e., debiasing). Mehrabi et al. <cit.> divide the sources of bias into two categories: originating from the data and originating from the model. The behavior of a model overly focusing on data-related biases is called bias amplification <cit.>. Hall et al. <cit.> report a correlation between the strength of bias amplification and measures such as accuracy, model capacity, or model overconfidence. This also implies that this issue is more substantial when recognizing group membership (e.g., gender) is easier than class membership (e.g., positive). Besides introducing WEAT, Bolukbasi et al. <cit.> also propose debiasing techniques. Bartl et al. <cit.> apply counterfactual data substitution to the GAP corpus <cit.> and fine-tune BERT <cit.> to mitigate gender bias, achieving promising results for English. However, the same method yielded unsatisfactory performance for German – possibly due to grammar, since German is a gender-marking language, in contrast to English. This shows once more that bias detection and mitigation depend on the language, stressing the importance of our work. Going beyond gender, Meade et al. <cit.> also apply debiasing techniques for racial and religious biases.
§ MATERIALS AND METHODS
§.§ StereoSet data
The StereoSet data set, created by Nadeem et al. <cit.>, is designed to have two association tests (intra- and inter-sentence) for the evaluation of pre-trained models. For the intra-sentence test, the model predicts the probability for the occurrence of specific words within a sentence, which is essentially a fill-in-the-blank task. Three given candidates – where one is deemed "stereotypical", one "anti-stereotypical", and one "unrelated" – are inspected, and the predictions are used to calculate a score for the model.
Inter-sentence tests roughly correspond to BERT's Next Sentence Prediction (NSP) task. Again, three candidates belonging to the above-mentioned categories are considered, and the model's choice is expressed by ranking the three options. Examples for both tasks are depicted in Table <ref>. The "unrelated" category exists to measure the general performance of the model, i.e., to check whether the model prefers a meaningful option (i.e., stereotypical or anti-stereotypical) over the unrelated option. The final score (cf. Sec. <ref>) thus measures the biasedness as well as the language modeling capabilities.
Further, for each context sentence, the target of the stereotype (i.e., which group of people is concerned) is given. In the intra-sentence example above, the target word is "Muslim"; in the inter-sentence example, it is "Hispanic". Hence, it is possible to measure the bias for specific target groups. Nadeem et al. <cit.> used Wikidata relation triples (<subject, relation, object>) to produce these target terms, where the "relation" in these triples provides the bias type (e.g., "Gender"). Overall, there are four different bias types: gender, profession, race, and religion. Referring again to the intra-sentence example above, the bias type is religion, while for the inter-sentence example above, it is race. The categorization is important with regard to measuring the bias per type (cf. Sec. <ref>).
Overall, there are n = 2123 samples in the inter-sentence[ <cit.> only publish the development set, so our work is based on this.] and n = 2106 in the intra-sentence data set. From 79 unique target terms in the inter-sentence data set, the most common target term has 33 occurrences, and the least common has 20. For the intra-sentence data set, the target terms occur between 21 and 32 times. There are also 79 target terms for the intra-sentence test set, which makes the data set quite balanced with respect to the target terms. Regarding the bias type, there are 976 (962) examples for race, 827 (810) for profession, 242 (255) for gender, and 78 (79) for religion in the inter-sentence (intra-sentence) test sets.
§.§ Pre-Trained Models
We evaluate all three different commonly used pre-trained transformer architectures: encoder, decoder, and encoder-decoder. As a representative for the first type, we chose BERT, for the second one GPT-2 <cit.>, and for the third one T5 <cit.>. For each architecture, we evaluate monolingual models as well as their multilingual counterparts. While BERT was pre-trained using Masked Language Modeling (MLM) and the NSP objective, GPT-2 was on the language modeling objective. T5 relies on a pre-training objective similar to MLM but replaces entire corrupted spans instead of single tokens. Further, the English T5 models on huggingface <cit.> are already fine-tuned on 24 tasks. Appendix <ref> holds an overview of the specific models we evaluate. For Turkish, no pre-trained monolingual T5 model was available (as of the time of writing).
§.§ Evaluation
The model predictions are not only evaluated with respect to their biasedness but also with respect to their syntactic/semantic meaningfulness. A random model that always outputs random candidates would be non-stereotypical, but it would not have any language modeling capabilities. The ideal model should excel in language modeling while simultaneously exhibiting fair behavior. Therefore, a Language Modeling Score (LMS), as well as a Stereotype Score (SS), are calculated and combined to the Idealized Context Association Test (ICAT) score, as proposed by Nadeem et al. <cit.>.[Although the work by Nadeem et al. <cit.> serves as our main inspiration, there are differences regarding evaluation. See Appendix <ref> and <ref> for the differences and our corrections.]
Stereotype Score (SS) This score is designed to assess the potential amount of stereotypes in a model by comparing its preference of the stereotypical (x_stereo) over anti-stereotypical (x_anti) candidates, and vice versa.[Note that always preferring an anti-stereotypical candidate is also appraised as discriminatory behavior since it would also create unfairness towards the stereotypical group.] Thus, solely a model that prefers neither x_stereo nor x_anti candidates systematically is considered unbiased. SS calculation is depicted in Eq. <ref>, where a model with a score of 50% is considered unbiased.
SS = (1/n) ∑_i=1^n g(x_i) * 100,
with g(x)=
1, (x_stereo > x_anti)
0, (x_stereo < x_anti)
Language Modelling Score (LMS) Language modeling capabilities are assessed by measuring the number of cases in which the model prefers x_stereo and/or x_anti over the unrelated candidate (x_unr). The ideal model should always prefer both of them over the unrelated candidate, thus achieving an LMS of 100%. Again, we slightly deviate from <cit.>, since there are inconsistencies with their definition (cf. Appendix <ref>):
LMS = (1/(2n)) ∑_i=1^n g(x_i) * 100, with
g(x) =
2, (x_stereo > x_unr) ∧ (x_anti > x_unr)
1, (x_stereo > x_unr) ∧ (x_anti < x_unr)
1, (x_stereo < x_unr) ∧ (x_anti > x_unr)
0, (x_stereo < x_unr) ∧ (x_anti < x_unr)
Idealized CAT (ICAT) Score This score combines both SS and LMS to overcome the trade-off between the two of them and allow for a holistic evaluation:
ICAT = LMS * min(SS, 100-SS) / 50
A completely unbiased model which always prefers meaningful candidates (i.e., SS = 50, LMS = 100) would produce an ICAT score of 100, whereas an entirely random model (i.e., SS = 50, LMS = 50) would score 50. A model that always picks the stereotypical over the anti-stereotypical candidate (or vice versa) would result in ICAT = 0.
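For reference, the three scores can be computed jointly from the per-example model scores. The sketch below assumes one (stereotypical, anti-stereotypical, unrelated) score triple per example and, for simplicity, counts ties as non-stereotypical, a corner case the equations above do not specify.

```python
def icat_scores(triples):
    """Compute SS, LMS, and ICAT from per-example candidate scores.

    triples : list of (p_stereo, p_anti, p_unrelated) model scores, one per example
    """
    n = len(triples)
    ss = 100.0 * sum(1 for s, a, _ in triples if s > a) / n
    lms = 100.0 * sum((s > u) + (a > u) for s, a, u in triples) / (2 * n)
    icat = lms * min(ss, 100.0 - ss) / 50.0
    return ss, lms, icat
```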
§.§ Multi-Class Perspective
Nadeem et al. <cit.> considered the four different bias types as classes and were thus able the evaluate the models in a multi-class fashion. Nevertheless, there were some mistakes in this setting which we attempt to correct. While we define ICAT_macro as the average over the bias type-specific ICAT scores and ICAT_micro as the calculation of the ICAT over the averaged sub-scores (LMS and SS), their definition was exactly the other way round. We were in close contact with Nadeem et al. <cit.> to discuss this disagreement and they also confirmed our point of view.
§ METHODS FOR PROBABILITY PREDICTIONS
§.§ Intra-Sentence Predictions
Inferring BERT and T5 for the intra-sentence tests is trivial due to their highly similar pre-training objectives described in Section <ref>. However, GPT-2 does not have any objective related to MLM. Thus, it cannot solve this task in a discriminative manner but rather uses a generative approach. Since candidate words usually consist of multiple tokens, the probability of the whole word cannot be calculated directly. Following <cit.>, the candidate word is divided into its tokens, and each token is unmasked step-by-step from left to right. After manipulating the data set this way (cf. Fig. <ref>, Appendix <ref>), one sentence requires multiple inference steps. Nevertheless, due to efficient object-oriented handling, the inference can be accomplished batch-by-batch and with multiprocessing. Furthermore, instead of padding to a fixed length (as <cit.>), we use dynamic padding with the aim of reducing memory consumption. After acquiring the probabilities for the masked tokens, they are averaged per candidate word.
The probability distribution for each token is generated by providing their respective left context to the model. In other words, the generation is executed for every token instead of only the masked part. Due to the left-to-right nature of the model, the masked part does not affect only one token, but also the whole context on its right. The output of this operation produces a separate distribution for each token, where each distribution expresses the likelihood of the corresponding next token. Hence, the likelihood of generating a specific token is obtained by examining the likelihood distribution output of the previous token.
In order to predict the likelihood, the model-specific BOS token is used as the left context of the first token. After calculating the likelihoods for both the first token and the whole sentence, the softmax operation is performed separately over the vocabulary dimension to flatten the results into a probability space, where each of the results is between zero and one. To merge these probabilities from each token, the following formula inspired by <cit.> is used:
2^( ∑_i=1^N log_2 P(x_i|x_0,x_1,...,x_i-1) / N ),
where N is the number of tokens in the sentence.
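In code, this length-normalized combination of token probabilities is essentially a one-liner; the sketch below is generic and assumes the per-token conditional probabilities have already been read off the model's softmax outputs.

```python
import numpy as np

def merge_token_probs(token_probs):
    """Combine per-token probabilities P(x_i | x_0, ..., x_{i-1}) into one score."""
    token_probs = np.asarray(token_probs, dtype=float)
    # geometric-mean style combination: 2^(sum(log2 p_i) / N)
    return 2.0 ** (np.log2(token_probs).sum() / len(token_probs))

# example: merge_token_probs([0.2, 0.05, 0.1]) -> length-normalized candidate score
```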
§.§ Inter-Sentence Predictions
Discriminative Approach
For BERT and mBERT, inter-sentence tests can be conducted by taking advantage of the discriminative NSP objective and using it to rank the candidate sentences. However, T5 and GPT-2 models were not pre-trained on NSP and must consequently be fine-tuned using this objective (cf. Sec. <ref>). An alternative approach would be to predict the probability for each word in the next sentence, making use of the generative nature of these models. We report more experimental results on the comparison of the discriminative and the generative evaluation approach in Appendix <ref>.
Generative Approach
For the generative approach, the inference process (including tokenization) differs substantially between T5 and GPT-2 based models. In T5 models, candidate sentences are fully masked, which leaves the model with the cumbersome task of predicting the whole next sentence. The general form of the input sentence to the encoder is "<context sentence> <extra_id_0>". A specific example is "My professor is a Hispanic man. <extra_id_0>". To handle this cumbersome prediction, we use teacher forcing with the inputs to the decoder having the form "<pad> <extra_id_0> <candidate sentence>"; a specific example would be "<extra_id_0> He is a legal citizen.". After obtaining the probabilities for each token, they are combined by again applying Equation <ref>.
For inferring GPT-2, context and candidate sentences are merged, separated by whitespace "<context sentence> <candidate sentence>" (called "full sentence"). A specific example would be "My professor is a Hispanic man. He is a legal citizen.". Nadeem et al. <cit.> measure the final score by calculating the probability ratio of the candidate over the context, which does in fact not evaluate their dependence, but treats them entirely separately. Their results for this approach are not satisfying, which we suspect to be due to using a wrong ratio. We show that it is possible to achieve satisfying results using this generative approach for English (GPT-2 and mGPT-2) and German (mGPT-2).[Due to this finding, we abstain from fine-tuning any other monolingual GPT-2 model on NSP and rely solely on the (corrected) generative approach for this architecture.] For a more detailed explanation of our changes to the probability calculation, please refer to Appendix <ref>.
§ EXPERIMENTS
§.§ Data Set Translation
We translate StereoSet to German, French, Spanish, and Turkish using Amazon Web Service (AWS) translation services in Python (boto3). A crucial point in this process is translating the "BLANK" word in the context sentences in the intra-sentence data set. Since this word must be kept in the output, it is declared a special word, in the sense that it is not translated.[If left as a standard word, AWS performs various different (erroneous) translations depending on the target language/context.] We, therefore, make use of AWS's "custom terminology" approach by using the byte code [Or , , instead of for the other languages.] in Python to keep the BLANK token as is; a short sketch of this translation step follows the list below. After translation, all data sets were checked for punctuation errors and for the correct placement of the BLANK token in the different languages. We opted for these four languages since they exhibit several criteria which are deemed important:
a) German, French, and Spanish are among the most frequently spoken European languages.
b) German, French, and Spanish have multiple grammatical genders, as opposed to English. German has three grammatical genders (der, die, das), while French (le, la) and Spanish have two (el, la).
c) Turkish is a language from a different cultural background and, like English, does not have grammatical gender.
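A minimal sketch of the translation step with boto3; the terminology name, AWS region, and CSV layout are illustrative assumptions:

import boto3

translate = boto3.client("translate", region_name="eu-central-1")

# Custom terminology: copy the BLANK placeholder verbatim into the output.
translate.import_terminology(
    Name="keep-blank-de",
    MergeStrategy="OVERWRITE",
    TerminologyData={"File": b"en,de\nBLANK,BLANK\n", "Format": "CSV"},
)

def translate_sentence(text, target_lang="de"):
    response = translate.translate_text(
        Text=text,
        SourceLanguageCode="en",
        TargetLanguageCode=target_lang,
        TerminologyNames=["keep-blank-de"],
    )
    return response["TranslatedText"]

print(translate_sentence("My professor is a BLANK man."))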
§.§ NSP Fine-Tuning
Fill-in-the-blank tasks are naturally supported by all evaluated model types (cf. Sec. <ref>). Thus, no specific fine-tuning is required for the intra-sentence data set. For mGPT-2 and T5, however, we follow <cit.> by adding an NSP-head and fine-tuning these models.[We use the already fine-tuned English GPT-2 model from Nadeem et al. <cit.> and the generative approach for the other GPT-2 models. All other training processes were carried out on a Tesla V100-SXM2-16GB GPU.] We use the Wikipedia data set in English, German, and French from the library holding Wikipedia dumps extracted on March 1, 2022. Since there are no readily available data sets for Turkish and Spanish, we build them from the July 20, 2022, Wikipedia dump using the same library. After sentence-tokenizing and shuffling the data set, we add IDs to all sentences. This enables us to create consecutive sentence tuples as positive examples, while negative ones are created by drawing a random sentence.[Taking random sentences from a different article requires the model to differentiate between articles.]
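A minimal sketch of the pair construction, assuming `articles` is a list of sentence-tokenized articles:

import random

def build_nsp_pairs(articles, seed=42):
    """`articles` is a list of sentence-tokenized articles (lists of strings).
    Returns (first_sentence, second_sentence, label) tuples with label 1 for
    consecutive sentences and 0 for a randomly drawn second sentence."""
    rng = random.Random(seed)
    all_sentences = [s for article in articles for s in article]
    pairs = []
    for article in articles:
        for first, second in zip(article[:-1], article[1:]):
            pairs.append((first, second, 1))                      # positive example
            pairs.append((first, rng.choice(all_sentences), 0))   # negative example
    rng.shuffle(pairs)
    return pairs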
§.§.§ Multilingual GPT-2
For NSP fine-tuning of mGPT-2 <cit.>, we consider 110,000 Wikipedia articles (∼ 9.5M sentences) for English and German, which is a similar number of sentences used by <cit.>. Due to hardware constraints, we train with a batch size of four while using gradient accumulation over 16 steps, yielding weight updates after every 64 examples. Following <cit.>, we set the core learning rate to 5e-6 and to 1e-3 for the NSP-head. Training is carried out with half-precision (FP16) and terminated after around 1M examples since the accuracy stabilized at around 90% and the loss converged (cf. Appendix <ref>).
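A minimal sketch of this fine-tuning loop with gradient accumulation and mixed precision; `model` (the base transformer returning hidden states), `nsp_head`, and `loader` are assumed to exist, and attaching the NSP head to the last hidden state is one possible design choice, not necessarily the exact one used:

import torch

accum_steps = 16    # batch size 4 x 16 accumulation steps = 64 examples per weight update
optimizer = torch.optim.AdamW([
    {"params": model.parameters(), "lr": 5e-6},      # core language model
    {"params": nsp_head.parameters(), "lr": 1e-3},   # freshly initialized NSP head
])
scaler = torch.cuda.amp.GradScaler()

model.train()
for step, batch in enumerate(loader):
    with torch.cuda.amp.autocast():                  # FP16 forward pass
        hidden = model(**batch["inputs"]).last_hidden_state[:, -1]  # last-token state
        loss = torch.nn.functional.cross_entropy(nsp_head(hidden), batch["labels"])
    scaler.scale(loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()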
§.§.§ Monolingual T5 models and mT5
We employ the T5 base models alongside their original tokenizers, which are both of comparable size to BERT. For fine-tuning the English T5 model, we add the prefix "binary classification: " – a unique wording in the T5 tokenizer – to the start of each input sequence. After reaching satisfactory performance with mGPT-2 on only 22,000 articles (∼ 1M samples), we use the same number here. Since T5 is much smaller than mGPT-2, more samples fit into GPU memory in each training step. Thus, we train with a batch size of 24 with three gradient accumulation steps to achieve a comparable number of examples per gradient update as for mGPT-2. After experimenting with FP16, the training is conducted with full precision, since FP16 training took longer for all T5-based models – an observation that is also reported by other researchers <cit.>. Since there is no separate NSP-head in T5 fine-tuning (as explained in Section <ref>), the learning rate is only set for the core model.[Appendix <ref> holds the details on the scheduler.] Again, we reach an accuracy of roughly 90% at the end of fine-tuning with converging loss (cf. Fig. <ref> in Appendix <ref>). The accuracy does not seem to be fully converged, but again we refrain from committing to fully optimizing on this auxiliary task.
We found that fine-tuning mT5 on NSP works with a relatively high (and stable) learning rate of 1e-4. To preserve comparability to mGPT-2 fine-tuning, the training is stopped after 25% of the data set is processed, due to already achieving 92% accuracy. We train with a batch size of eight and eight gradient accumulation steps. NSP fine-tuning for the monolingual German, French, and Spanish T5 models was performed in a similar fashion.
§ RESULTS
As described in Section <ref>, we use two different evaluation techniques: in addition to evaluating a model as a whole, we also consider each target term as a class and treat the problem from a multi-class perspective. While Nadeem et al. <cit.> only consider the multi-class results, we put a greater focus on the global evaluation of the models in order to draw conclusions with respect to the different languages and architectures.
§.§ Multilingual Models
The lower part of Table <ref> holds the evaluation results for the multilingual models in the intra-sentence setting. Regarding language modeling, mGPT-2 performs much better than mBERT and mT5 in all languages, which is also reflected in its higher ICAT scores. When comparing across languages, the multilingual models exhibit the highest stereotypical bias for Spanish and English, while mBERT appears to be the least biased of the three models. The mGPT-2 model demonstrates a stereotypical bias for Spanish and English, while mT5 is quite biased for all languages. Overall, the strong mGPT-2 LMS performance leads to it also outperforming the other models with respect to ICAT, where we also observe a notable gap between English and German on the one hand and French, Spanish, and Turkish on the other hand.
Table <ref> provides inter-sentence evaluation results for all models.[As described in Section <ref>, there are two different approaches for evaluating GPT-2 and T5 models. For GPT-2, results for the generative approach are shown, while the T5 models are all fine-tuned on NSP.]
Regarding this test, mGPT-2 is outperformed by mBERT and mT5 by a large margin across languages with respect to LMS, which can probably be explained by the different pre-training regimes. Similarly, mGPT-2 behaves very differently from the other two models; while mBERT and mT5 are rather strongly biased, mGPT-2 seems to favor the anti-stereotypes across all languages.
The overall results calculated from the combination of both tests are displayed in Table <ref>. All three different types of architectures exhibit a similar LMS performance, with the German language being the exception, since mT5 outperforms the other two models by a wide margin. According to SS, mGPT-2 shows either very fair behavior (en, tur, es) or even leans towards the anti-stereotype groups (as already observed in Tab. <ref>). The other two models on average always prefer the stereotypical options, with the most stereotypical behavior for English and Spanish. With respect to the SS, the multilingual models' behavior seems to be the fairest for the Turkish language.[We suspect the employed data sets were collected to test primarily for western stereotypes, since they were prepared by people from the United States. Hence, this might be one of the reasons for the apparent unbiasedness for Turkish. Future work requires building different data sets for different cultural groups.] The overall ICAT scores also reflect these findings. According to these scores, mGPT-2 is deemed the best model for English and Spanish due to its far better SS values. For German, the two other models are able to catch up a little to mT5, since it is the most biased model (despite having the best LMS). For Turkish, all the models not only exhibit similar SS, but also similar LMS values, and hence all have similar ICAT scores. Regarding the performance on the French data, mT5 beats its two competitors by showing a competitive LMS and exhibiting a low bias.
§.§ Monolingual Models
The upper parts in Tables <ref>, <ref> and <ref> show performances for different monolingual models in each column. The most striking (and possibly least surprising) finding is that the monolingual English models exhibit the best LMS across all tables, except for GPT-2[Note that the monolingual models were not fine-tuned on NSP, but use the generative approach.] on the inter-sentence test. Similar to the multilingual setting, GPT-2 models stand out in intra-sentence LMS across languages, while they struggle in inter-sentence LMS. This leads to a more balanced overall LMS performance across models, except for BERT, which severely struggles in French and Spanish. Overall, LMS performance of most monolingual models on both tests is better compared to the multilingual ones (again, except for BERT in French and Spanish).
Regarding the biasedness of the different models, we observe that English models have the most severe stereotypical tendency; each of the three English models displays more stereotypical bias than any of the other models for any other language. Consequently, the higher LMS performance of these models comes at a price. Comparing the different architectures, GPT-2 models appear to be least biased on the inter-sentence test, while for the intra-sentence examples and overall, all the architectures exhibit stronger biases than their multilingual counterparts.
Focusing on ICAT scores, monolingual BERT and GPT-2 models outperform the multilingual versions on the inter-sentence test (except for French and Spanish BERT models), while monolingual T5 models are a bit worse. On the intra-sentence test, the picture is more nuanced: Spanish and Turkish models are better than the multilingual ones, while the performance is mixed for English and French, and German models are always worse than their multilingual counterparts. Overall, we also observe a strong performance of the multilingual models, mostly driven by the fact that they are less stereotypically biased. The strong performance for the Turkish monolingual models is noteworthy, since they are equally less biased but stronger in LMS than the multilingual models.
§.§ Multi-Class Results
Assuming that the target terms constitute separate classes, most of our findings from the above sections still hold. Thus, we only report the striking differences for the overall results in the main paper (cf. Tab. <ref>) to avoid repetition.[The results for the intra-sentence tests (cf. Tab. <ref>) and the inter-sentence tests (cf. Tab. <ref>) can be found in Appendix <ref>.] The multi-class perspective comes with two separate scores: a macro and a micro version of the ICAT (cf. Sec. <ref>).
The result that the macro ICAT score is consistently lower than the micro ICAT score (across all models and languages) can be explained by larger variations of the ICAT scores between the different classes. The most important takeaway from this observation is that the scores in the underrepresented classes (gender and religion) seem to be worse than for the larger classes (race and profession), since they receive disproportionately high weights in the macro ICAT.
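A minimal sketch of the two aggregations, assuming the usual ICAT definition icat = lms · min(ss, 100 − ss)/50 and example-count weighting for the micro score; the numbers in the example are illustrative only:

def icat(lms, ss):
    # Assumed ICAT definition: perfect LMS and SS = 50 give the maximum score.
    return lms * min(ss, 100.0 - ss) / 50.0

def micro_macro_icat(per_class):
    """`per_class` maps class name -> (lms, ss, n_examples)."""
    total = sum(n for _, _, n in per_class.values())
    micro = sum(icat(lms, ss) * n for lms, ss, n in per_class.values()) / total
    macro = sum(icat(lms, ss) for lms, ss, _ in per_class.values()) / len(per_class)
    return micro, macro

# Illustrative numbers only: the small classes (gender, religion) pull the
# macro score down more strongly than the micro score.
example = {
    "gender":     (88.0, 62.0, 255),
    "religion":   (85.0, 63.0, 79),
    "race":       (92.0, 57.0, 962),
    "profession": (91.0, 58.0, 810),
}
print(micro_macro_icat(example))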
§ DISCUSSION AND FUTURE WORK
Probably one of the most important issues that until now has not been tackled in a holistic manner is the matter of how to take into account the differences in stereotypes in different cultural groups. For the Turkish language, we observe consistently lower measurements of stereotypical bias in the models, which we suspect to potentially originate from cultural differences. Furthermore, we did not address differences between different models of the same architecture within languages. This is also an important endeavor for the future since it allows for comparisons of the biasedness of different pre-training regimes. A holistic analysis – e.g., in a similar fashion to how Choshen et al. <cit.> execute analyses for model performance across tasks – is necessary for advancing applied research in this direction.
Another undeniable shortcoming of current research with respect to the stereotypical behavior of PLMs is that there is a variety of different (English) data sets covering different aspects, but no holistic (multilingual) framework. Efforts in the direction of building something similar to what Ribeiro et al. <cit.> created for behavioral testing might be a promising goal to move forward towards. This might even become more compelling when evaluating models like the recently introduced ChatGPT <cit.>.
To conclude, we provide a blueprint for the assessment of stereotypical bias in a multilingual setting, which is easily extendable to other models and languages. Our analysis reveals insights into the differences between the different languages and architectures when evaluated with these data sets. The overall picture drawn by this analysis is, admittedly, quite heterogeneous and does not allow drawing a conclusion declaring one architecture the clear winner. Weighting both scores (LMS and SS) equally gives them similar importance, which might also be a debatable choice depending on the intended use case of the model. Taking this into account, we would argue that it is rather up to the user to decide on the preferable model by considering all aspects of the respective application. Thus, we believe, that our results can nevertheless be used as meaningful starting points for drawing tentative conclusions or for generating new research questions in this domain.
§ ETHICS STATEMENT
Limitations Most certainly, analyses like ours do not come without debatable aspects, especially when it comes to the creation as well as the translation of the employed samples. Working on this set of four bias types is non-exhaustive and should definitely be extended and refined in the future. Furthermore, translating sentences from a language with two grammatical genders to languages with three genders also comes with shortcomings, since certain grammatical constructions favor specific (anti-)stereotypical candidates in the data sets. This issue appeared to be most striking for the French language. During our semi-automated translations, we also noticed errors in the original English data sets. Still, we decided for the moment to take them as is to keep our work comparable to <cit.>. For future work, we plan to carefully re-evaluate all the data sets manually. A procedure for this might be to have native speakers of each target language check and correct every sentence of their translation of the respective data set for semantic and stylistic errors. However, this would both defeat the purpose of having it translated automatically and necessitate greater manpower than is currently available, roughly corresponding to creating the data set from scratch.
With respect to model size, our analysis is restricted to PLMs of small to medium size. Therefore it is not necessarily valid to transfer the findings to larger models, like e.g., the largest models of the GPT or T5 family. Regarding the computational requirements of our study it is important to note that assessing GPT-2 models is cheap, since the generative approach works well, whereas, for T5 models, NSP fine-tuning is recommended for the inter-sentence tests.
Ethical considerations When dealing with the concept of stereotypical bias, the question of ethical implications naturally arises. Utilizing crowd workers for annotating such data might expose such people to disturbing pieces of text. Given these considerations, our approach of semi-automatically translating the data is a step in the right direction. But still, we had to manually check the sentences afterward, which does not reduce the exposure. Further, it is important to note that such a manifold and diverse, sometimes very subtle, concept as stereotypical bias is hard to grasp in an exhaustive manner. As such, many more experiments and also more elaborated data sets, dealing with the matter on an even more granular level, are required in future research. Finally, it is important to state that making applications driven by large language models (e.g. ChatGPT <cit.>) safe for public use is one of the most important requirements before they can be made available to a broader audience. As stereotypical bias differs across languages and cultural backgrounds, focusing only on the English language here is no real alternative.
§ ACKNOWLEDGEMENTS
This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of BERD@NFDI - grant number 460037581. It has also been partially funded by the OpenGPT-X project (BMWK 68GX21007C) in cooperation with Alexander Thamm GmbH.
§ VISUALIZATION OF MGPT-2 NSP FINE-TUNING
§ VISUALIZATION OF T5 NSP FINE-TUNING
§ MODEL SPECIFICATIONS
§ DATA PREPARATION FOR INTRA-SENTENCE TESTS
§ GENERATIVE APPROACH FOR INTER-SENTENCE PREDICTIONS
The score calculation approach from <cit.> will be abbreviated as "gen_orig", while our approach, mathematically expressed as
P(cand | cont) = P(cand ∩ cont)/P(cont) ,
will simply be abbreviated as "gen". In Eq. <ref>, P(cont) is the (isolated) probability of the context sentence, which can be ignored since it is the same for all candidates. Thus, the primary focus is on P(cand ∩ cont), which is the probability of the "full sentence". This can be measured with the probabilities of the candidate sentence tokens, which are computed by considering the context sentence as their left context. Hence, this methodology implicitly captures the relationship between the context sentence and the candidate sentences, in contrast to the approach in <cit.>. Finally, these token probabilities are combined by utilizing Eq. <ref>.
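A minimal sketch of this corrected calculation with the Hugging Face GPT-2 implementation; the model name is a placeholder:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")    # placeholder model name
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def candidate_given_context_probs(context, candidate):
    """Probabilities of the candidate tokens given the BOS token and the
    context sentence as their left context (the numerator of the equation above)."""
    ctx_ids = tokenizer(tokenizer.bos_token + context, return_tensors="pt")["input_ids"]
    cand_ids = tokenizer(" " + candidate, return_tensors="pt")["input_ids"]
    full_ids = torch.cat([ctx_ids, cand_ids], dim=1)
    with torch.no_grad():
        probs = torch.softmax(model(full_ids).logits, dim=-1)
    n_ctx = ctx_ids.shape[1]
    out = []
    for j in range(cand_ids.shape[1]):
        # The distribution at position p predicts the token at position p + 1.
        out.append(probs[0, n_ctx + j - 1, full_ids[0, n_ctx + j]].item())
    return out   # merged afterwards with the length-normalized score

print(candidate_given_context_probs("My professor is a Hispanic man.",
                                    "He is a legal citizen."))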
Table <ref> holds the results for the generative approach in the inter-sentence tests for English and German models.
§ SCORE CALCULATION DIFFERENCES
For calculating the LMS, the code published by <cit.> contradicts the explanation in the paper to some extent. According to the paper, an example counts as meaningful if either the stereotypical or the anti-stereotypical candidate is preferred over the unrelated one; in the published code, however, full credit is only given if both the stereotypical and the anti-stereotypical candidates are preferred over the unrelated one. The difference becomes apparent in an example where the stereotypical candidate's probability is higher than the unrelated candidate's, which is in turn higher than the anti-stereotypical candidate's. In this case, the score would be 100% according to the paper, but only 50% according to the published code. Our approach follows the published code, since it is the code that reproduces the results reported in their publication.
§ MULTI-CLASS RESULTS FOR INTRA- AND INTER-SENTENCE TESTS
|
http://arxiv.org/abs/2307.03965v1 | 20230708123352 | Seismic Signatures of the $^{12}$C($α$, $γ$)$^{16}$O Reaction Rate in White Dwarf Models with Overshooting | [
"Morgan T. Chidester",
"F. X. Timmes",
"Ebraheem Farag"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Morgan T. Chidester (0000-0002-5107-8639), School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
F. X. Timmes (0000-0002-0474-159X), School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Ebraheem Farag (0000-0002-5794-4286), School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Corresponding author: Morgan T. Chidester ([email protected])
We consider the combined effects that overshooting and the ^12C(α, γ)^16O reaction rate have on variable white dwarf stellar models. We find that carbon-oxygen white dwarf models continue to yield pulsation signatures of the current experimental ^12C(α, γ)^16O reaction rate probability distribution function when overshooting is included in the evolution. These signatures hold because the resonating mantle region, encompassing ≃ 0.2 M_⊙ in a typical ≃ 0.6 M_⊙ white dwarf model, still undergoes radiative helium burning during the evolution to a white dwarf. Our specific models show two potential low-order adiabatic g-modes, g_2 and g_6, that signal the reaction rate probability distribution function. The two g-mode signatures induce average relative period shifts of ΔP/P = 0.44% and ΔP/P = 1.33% for g_2 and g_6, respectively. We find that g_6 is a trapped mode, and the g_2 period signature is inversely proportional to the reaction rate. The g_6 period signature generally separates the slower and faster reaction rates, and has a maximum relative period shift of ΔP/P = 3.45%.
§ INTRODUCTION
Helium burning is primarily the fusion of helium into carbon by the triple-alpha (3α) process.
All stars born with more than ≃ 0.5 M_⊙ go through this stage of energy production as they evolve beyond the main-sequence <cit.>.
Helium burning also plays a key role in transients such as
Type I X-ray bursts <cit.>,
Type Ia supernovae <cit.>, and
He-rich subdwarf O stars <cit.>.
Helium burning also impacts several classes of distribution functions,
such as the black hole mass distribution function <cit.>
including any mass gaps based on the pair-instability mechanism in the evolution of
massive stars <cit.>.
He burning is triggered by the 3α process releasing 7.5 MeV in fusion energy and producing ^12C <cit.>.
This is a unique process, setting stringent conditions for helium ignition.
The 3α process is followed by the α capture reaction ^12C(α, γ)^16O,
converting the ^12C into ^16O <cit.>.
These two isotopes are the principal products of He burning.
In addition, nearly all of a star's initial CNO abundances in the stellar interior are converted to ^22Ne at the onset of He burning <cit.>.
This marks the first time in a star's life where the core becomes neutron rich. We follow the convention that ^22Ne is the “metallicity” of a carbon-oxygen (CO) white dwarf (WD).
The interiors of CO WDs are, in principle, the best probe of the ashes of He burning.
A goal of WD seismology is to characterize the chemical profiles of principal products of He burning
<cit.>
and the chemical profile of the trace ^22Ne metallicity <cit.>.
Furthermore, regions within a CO WD model that burn helium radiatively during its prior evolution can offer potential constraints on the He burning nuclear reaction rates.
For example, <cit.> found that certain trapped adiabatic g-modes in WD models
may provide a pulsation signature that constrains the experimental reaction rate probability distribution function.
These signature g-modes were shown to resonate
with the region of the CO WD model that underwent radiative He burning during its previous evolution. The innermost boundary of this resonant cavity
corresponds to the molecular weight gradient at O→C chemical transition, and the outermost boundary to the molecular weight C→He chemical transition.
The resonating region encompasses ≃ 0.2 M_⊙ of a typical ≃ 0.6 M_⊙ WD model.
C22 cautioned that the chemical structure and resulting pulsation spectrum
is sensitive to
the width of the O→C transition <cit.>,
the experimental 3α reaction rate probability distribution functions <cit.>,
convective boundary mixing processes during core He depletion <cit.>, and
the number of thermal pulses during the Asymptotic Giant Branch (AGB) phase of evolution <cit.>.
Modeling convective boundary mixing processes at the convective-radiative interface during core He burning in low- and intermediate-mass stellar models is currently uncertain
<cit.>.
Convective overshoot occurs because the convective boundary is not the location where convective velocities are zero,
but the location where the buoyant acceleration of the fluid is zero.
An order-of-magnitude expression Δx = u Δt provides an estimate for how far convective motions overshoot <cit.>. Here Δx is the overshoot distance, u is the convective velocity, and Δt ≃ 1/N, where N is the buoyancy (Brunt–Väisälä) frequency in the stable region. There is disagreement on how to calculate Δx, but this estimate broadly shows Δx ≪ H_P in stellar environments, where H_P is the pressure scale height.
The exponential overshoot parameterization <cit.> is frequently implemented in 1D models to describe this convective boundary mixing process, treating Δ x as a free parameter.
The values of Δ x
needed to match the gravity modes found in Slowly Pulsating B-type stars <cit.> suggest Δ x / H_P ≃ 0.1, which is larger than 3D hydrodynamical simulations of low Mach number flows at stable interfaces indicate <cit.>.
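In the exponential scheme, the convective diffusive mixing is damped beyond the boundary as D(z) = D_0 exp(−2z/(f H_P)), where z is the distance past the boundary; a minimal numerical sketch, with purely illustrative values for D_0 and H_P and the conventional value f = 0.016:

import numpy as np

def d_overshoot(z, d0, f, h_p):
    """Exponential overshooting: D(z) = D0 * exp(-2 z / (f * H_P)),
    with z the distance past the convective boundary."""
    return d0 * np.exp(-2.0 * z / (f * h_p))

h_p = 1.0e9     # pressure scale height [cm], illustrative
d0 = 1.0e14     # diffusion coefficient near the boundary [cm^2 / s], illustrative
f = 0.016       # exponential overshooting parameter (conventional value)

z = np.linspace(0.0, 0.1 * h_p, 5)
print(d_overshoot(z, d0, f, h_p) / d0)   # fraction of D0 remaining at each distance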
The injection of fresh He into the convective core enhances the rate of energy production by the ^12C(α,γ)^16O reaction, increases the central ^16O mass fraction <cit.>, and modifies the lifetime of this phase of evolution.
The resulting increase in the radiative gradient can also lead to rapid growth in the convective He core boundary (a “breathing pulse”).
A consensus on breathing pulses being physical or numerical has not yet been reached <cit.>.
C22 found a pulsation signature of the reaction rate probability distribution function using evolutionary models that purposely excluded overshooting.
This article is novel in analyzing whether or not pulsation signals of the reaction rate probability distribution function
still exist when overshooting at the inner convective-radiative interface during core He burning (CHeB) is included in the models' evolution history. Here, the inner convective-radiative interface is the transition from the convective core to the exterior radiative layer.
Section <ref> describes our models,
<ref> analyzes our models,
<ref> discusses our results,
and we summarize our findings in <ref>.
Appendix A lists the microphysics used, and
Appendix B discusses variations with the number of isotopes in the reaction network and with the temporal resolution of our models.
§ STELLAR EVOLUTIONARY MODELS
We define the term “model” to mean an evolutionary sequence that begins at the pre-main sequence, progresses through CHeB, and terminates as a cold WD. We define the term “snapshot” to mean a specific instance in time or phase of evolution within a model, and the term “set” to mean a suite of models or snapshots that have identical input physics except for the value of the reaction rate.
We use MESA version r15140 <cit.> to build 2.1 M_⊙, Z = 0.0151 metallicity, Y = 0.266 He mass fraction, nonrotating models at the pre-main sequence.
We adopt the AGSS09 <cit.> abundances and use a 23 isotope nuclear reaction network with ^22Ne being the heaviest isotope[A comparison to a 30 isotope network is given in Appendix B.].
Our models employ MESA's Henyey mixing-length theory (MLT) option for convection, with an MLT parameter of α = 1.5. This is consistent with the value used in C22.
We use the Ledoux criterion, and the predictive mixing scheme.
Additional details of the microphysics are listed in Appendix A.
As in C22, we span the current experimental reaction rate probability distribution function <cit.> from σ=-3.0 to σ=+3.0 in 0.5σ steps, giving 13 σ_i reaction rates; each model is prescribed one such σ_i reaction rate value for its evolution.
We calculate one set of models without overshooting (NOV), and a second set with overshooting (OV) at the inner radiative-convective interface during the CHeB phase.
Hence, each evolutionary model differs only in its σ_i reaction rate, and NOV or OV mixing prescription. This yields 26 individual stellar evolutionary models; 13 for the NOV set and 13 for the OV set. For i=(-3.0, -2.5,...,+2.5, +3.0), we use σ_i and σ=i interchangeably to reference a given σ from the reaction rate probability distribution function.
After CHeB, the models evolve until log(L/L_⊙)=3.0, prior to the first thermal pulse on the AGB. At this snapshot, we interrupt the evolution of each model; all models therefore have a C→He transition at nearly the same mass location. We construct H-dominated atmosphere (DA) WDs by removing the H envelopes until log(M_H/M_*)<-3.5. The resulting composition profile structures are used to build 0.56 M_⊙ ab-initio WD models with wd_builder, as done in C22, and these WD models evolve until T_eff = 10,000 K. We discuss the reasoning for constructing the WDs from the post-CHeB log(L/L_⊙)=3.0 snapshot in the following section; briefly, this snapshot isolates the sensitivity to overshooting at the convective-radiative interface.
We utilized version 6.0.1 of the GYRE code <cit.> to compute the adiabatic pulsations of our WD models throughout their respective cooling tracks (from ∼ 50,000 K to 10,000 K). We tracked the pulsations for the entire WD cooling track to observe the evolution of the adiabatic modes. Further, this was the most convenient way to auto-implement pulsation calculations for multiple models (i.e., we did not have to post-process the pulsation calculations over a specified range for each of the 26 models). We emphasize that the computed pulsations are adiabatic, and that the observed instability strip for DAV WDs spans only from ∼ 13,000 K to ∼ 10,000 K. The inlist parameters were set to search for modes of harmonic degrees ℓ=1,2 and radial orders n≤25; our models were assumed to be non-rotating, hence only m=0 azimuthal orders were present. For the adiabatic mode analysis, we employed the fourth-order Gauss-Legendre collocation difference equation scheme <cit.>.
Details of the models and oscillation parameters are in the files to reproduce our results at doi:10.5281/zenodo.8126450 (https://doi.org/10.5281/zenodo.8126450).
§.§ Core Overshooting prescription during the CHeB
During the CHeB phase, we use the following core overshooting parameters in the inlist for the OV set:
= 1d-3
= `exponential'
= `any'
= `core'
= 0.016
= 0.008
= 0.01
= 0.4
Details of the specific parameters are described in the documentation[<https://docs.mesastar.org/en/latest/>].
We choose the conventional <cit.> value of .
This parameter sets the fractional distance of H_p to overshoot at the ∇_ad=∇_rad interface, for the order of magnitude estimate given in the introduction, Δ x = f_0· H_p.
The trapped mode seismic signatures found in C22 were resonating most with the region that underwent radiative He burning, defined as R2. Their inner boundary of R2 is near the molecular weight gradient at the
O→C transition (the “O drop") and their outer boundary is near the C→He transition. Mode trapping is sensitive to the location of both of these boundaries because they define the width of the resonant cavity.
One approach to analyzing the sensitivity
of the R2 trapped mode signatures is to fix one boundary and vary the other boundary. We fix the R2 outer boundary by excluding variations imposed from the thermal pulse history, hence the interruption at the post-CHeB log(L/L_⊙)=3.0 snapshot for all models. The phenomena that occur during the AGB phase are another source of model uncertainty. <cit.> found that early post-AGB pulsations can cause rapid growth of an instability that drives a super-wind which can shed much of the outer layers in a few years. Further, their 2.0 M_⊙, Z=0.02 model shows a dynamic evolutionary track, especially during the AGB, that is similar to the models in this article. <cit.> summarizes that while the preliminary results show promise regarding AGB and post-AGB phenomena, there are currently more questions than answers. We therefore leave the thermal pulse history and the particular envelope ejection phenomena on the AGB to future studies, and freeze the outermost R2 boundary before the first thermal pulse occurs. In this vein, we isolate the sensitivity of the R2 region to its inner boundary, and specifically address how core overshooting influences the pulsation signatures for the reaction rate probability distribution function.
We end this section by stating we are not advocating for a specific evolutionary model or overshooting scheme.
Rather, we are exploring one approach to quantifying the coupled uncertainty between the reaction rate probability distribution function and a common overshooting model.
§ RESULTS
§.§ Evolution of Composition Profiles
Figure <ref> shows the mass fraction profiles for both sets at three evolutionary snapshots. The top row shows the mass fraction profiles for the NOV set and the bottom row shows the mass fraction profiles for the OV set. The left most column
shows the mass fraction profiles at the post-CHeB log(L/L_⊙)>3.0 snapshot. At this point, our models have not lost much mass and are all ∼2.1 M_⊙. The middle column shows the mass fraction profiles after removing the H envelopes until log(M_H/M_*)<-3.5. This snapshot shows the initial hot WD profiles, after completing one model step in wd_builder. The profiles shift slightly in mass location, but the overall composition structure only differs from the left panel in the thickness of the H envelope. The right column is the final snapshot of the mass fraction profiles, when the models reach T_eff = 10,000 K. Diffusion was included on the WD cooling track and leads to the smoothness of the profiles in this column.
Figure <ref> accentuates the differences between the NOV (top) and OV (bottom) mass fraction profiles for the final WD structures (right column of Figure <ref>). Here, we show the abundance in mass fraction with respect to fractional radius r/R. We partition the WDs' composition profiles into four regions: R1, R2, R3, and R4. This is similar to that done in C22. The regions are defined to estimate trapping (resonant) zones. Boundaries for mode trapping are typically near composition transitions because they generally have large mean molecular weight gradients. This may lead to partial reflections for a resonant mode(s), “trapping" it within the local cavity <cit.>. The Ledoux B profile (henceforth B) captures composition gradients and can estimate trapping regions. We use B as our primary guide to define the region boundaries for a given model. The R1-R2 boundary is set at the first local maximum in B that occurs after reaching peak in a given model's chemical profile. The R2-R3 boundary is set at the second local maximum in B. The R3-R4 boundary is set at the location where X(^1H)>X(^4He).
In both NOV and OV sets, σ_i impacts the magnitude of the ^16O and ^12C profiles in R1. Core overshooting changes the structure of these profiles, especially at r/R ∼ 0.37 where the flatness of the profiles becomes disrupted. This is due to additional He fuel ingested during CHeB, from overshooting and/or convection. The fuel ingestion from overshooting and convection is a coupled effect and specific to each σ_i model. After r/R ∼ 0.37, there is some overlap in the profiles that perturbs the proportional trend with σ_i.
For both sets, the first group of vertical blue lines marks the R1-R2 boundary, with each line representing a given σ_i. The NOV set shows a steep composition gradient at the R1-R2 boundary, and the R1-R2 location is nearly the same for all σ_i. There is greater variance in the R1-R2 location for the OV set. Further, core overshooting has softened the ^16O and ^12C gradients, and the disruption of the profiles' regularity with σ_i continues into the start of the R2 region. At r/R∼0.6, the proportionality of σ_i to the ^16O and ^12C profiles is restored.
By design from stopping at the first thermal pulse, the R3 and R4 regions are almost identical between the NOV and OV sets. These regions are least affected from mixing processes in the core (e.g. overshooting).
In Figures <ref> and <ref>, the OV chemical profiles show a non-constant structure from overshooting during CHeB in the O dominated central core (below ≃0.4 ). While element diffusion is included during the white dwarf cooling phase, these chemical profiles may be further flattened by mixing processes not considered in this study such as time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing, or first-order phase separation of the CO mixture <cit.>.
§.§ Evolutionary differences after the main-sequence
How do the final WD profiles for the NOV and OV sets in Figure <ref>, and in particular the differences in their R1 and R2 regions, relate to their respective CHeB evolution histories? Figure <ref> shows the Kippenhahn diagrams for the σ = 0.0 models for NOV (left) and OV (right). This figure shows the CHeB phase until the log(L/L_⊙)>3.0 termination point, spanning ≃ 0.93–1.10 Gyr. During this period the total mass of our models is ≃ 2.1 M_⊙, but we show only the innermost ≃ 0.65 M_⊙ to capture the evolution history that ultimately defines the CO WDs.
There are immediate differences between the NOV and OV CHeB evolution histories for the σ=0.0 models. These differences are similar for any given σ_i models, and a link to an interactive figure is provided in the online journal to see each rate's OV vs. NOV comparison in greater detail.
For the NOV set, we see gradual growth of the convective core throughout the CHeB phase; the noted central mass fraction isotopes smoothly deplete/grow to reach their final mass fractions; the convective cores have no apparent splitting during the CHeB phase. Further, there is a pure radiative zone throughout the CHeB history. In comparison, the OV set shows convective cores that ebb and flow in their extent, in a saw-tooth like manner; overshooting extends past the inner convective core in a fairly consistent mass length; the OV central mass fraction isotopes ebb and flow symmetrically with the mixing phenomena at any given time.
We also see splittings of the convective core in the OV set. These splittings were not observed in any of the NOV models during the CHeB phase. We presume they are a result of overshoot inclusion. This introduces “pollution" to the pureness of the radiative burning zone, which becomes the R2 region of the WD. The pollution is seen by observing that some of the split-convection zone surpasses the log(L/)>3.0 R2 inner edge boundary. This boundary becomes the inner edge of R2 in the cool WDs. The amount of convective pollution within the OV set is minor for σ_0.0, but varies with σ_i.
Figure <ref> qualifies R2 as “Mostly Radiative" for the NOV set due to localized, short-lived, subtle convective occurrences between ≃ 0.30–0.35 near core He depletion energetics. Composition profiles are less sensitive to mixing after CHeB is complete. Any convective pollution from these brief convective periods in the NOV set are insignificant compared to the convective pollution introduced in the OV set.
For both sets, nuclear burning primarily takes place within the convective core. Both sets also show similar burning regions in the mantle outside the core, in the radiative zone. Near the end of core He depletion, nuclear burning in the core extends past the convective and overshooting core regions in the OV set, and burns into the radiative zone. This is not seen in the NOV set.
§.§ WD Adiabatic Pulsation Analysis
How do these evolutionary and WD structural differences impact the WD reaction rate pulsation signatures? We first stress the importance of the NOV models' R2 pure radiative zone during the CHeB. The trapped mode σ_i signature found in C22 resonates the most with this region.
We want to determine if this signature, or any other σ_i pulsation signature, exists when overshooting is considered at the inner R2 boundary during CHeB. First we compare the NOV WD pulsation signatures in this work to those in C22.
§.§ NOV set vs. C22
In this section we briefly describe the main differences between the NOV and C22 models. The models in C22 used a 30 isotope chemical network compared to the 23 isotope network used here. See Appendix B for a comparison. Also, the temporal resolution was greater in C22, especially through CHeB. The most important difference in the NOV models is that we terminated the evolution prior to the first thermal pulse; the models in C22 continued the evolution through the thermal pulse phase of evolution. The overall composition structure of the R1 and R2 regions in our NOV models are quite similar to those in C22.
The NOV set of models in this work found two WD g-mode signals for σ_i rather than one. This is shown in the top two panels of Figure <ref>. Both panels show snapshots of the percent period differences as a function of σ_i, at =11,500 K (bright green) and =10,000 K (blue) respectively. The y-axis label defines the period differences as (P_σ_0-P_σ_i)/P_σ_0. That is, they are normalized to the pulsation periods of the σ=0 NOV model. The first panel is the signal from g_2 and the second is the signal from g_6. In C22,
the g-mode signature was a trapped mode. Trapped modes are identified from local minima in the kinetic energy diagram <cit.>. The NOV kinetic energy diagrams for all σ_i at these snapshots are shown in the bottom left and right panels of Figure <ref>, following Equation 2 in C22
<cit.>. The figure caption explains the coloring for σ_i. At =11,500 K (bottom left panel), the first apparent trapped mode occurs at g_6 for all σ_i, with the exception of σ=0.5, which has its first local minimum of E_kin at g_5. By =10,000 K (bottom right panel), all σ_i have the first local minimum in E_kin at g_6, including σ=0.5. This is important as g_6 is one of our signature modes for σ_i. These findings are in overall agreement with C22.
The trapped g_6 mode signature is not linear with σ_i, but overall shows σ_i<0 to have longer periods than σ=0.0, and σ_i>0 to have shorter periods than σ=0.0.
The R2 contribution to the g_6 period in our NOV models was ∼ 25%. Other regions equally contributed between ∼ 20-30%, meaning that the trapped mode from our NOV set is more equitably trapped among the four regions. Thus, its credibility from R2 isn't as strong as in C22.
Nonetheless, it is not a negligible contribution and can still serve as a viable probe for σ_i.
Our other g-mode signal, g_2, does not appear to be trapped by definition (see other highlighted mode in bottom of Figure <ref>). However, the g_2 period differences are directly proportional to σ_i (first panel of Figure <ref>). This suggests that g_2 is likely distinguishing CO features in the inner regions better than other g-modes. The additional g_2 signal
was either recovered or contrived as a consequence of excluding the thermal pulse history in the evolution. This was the only procedural difference between our models and those in C22.
The direct impact of this procedural difference is expressed by the nearly uniform and profiles after the C→He transition (see Figure <ref>).
C22 showed variations in these profiles that stemmed from variations in the thermal pulse histories. Eliminating such chemical variations near the R2-R3 interface can placate the g-modes' sensitivity to the R3 and R4 regions, especially for low-order g-modes such as g_2. Figure 9 in
C22 shows g_2 distinguishes σ_i in their thinner atmosphere sequence of models. Thinner atmospheres may also lessen sensitivities to outer regions, allowing lower-order g-modes like g_2 to probe deeper into the CO interior. We therefore suspect g_2 is a viable probe for σ_i if there are uniform composition profiles at the R2-R3 boundary, and/or thinner WD atmosphere models.
We conclude that our NOV pulsation signature results are overall consistent with C22;
we find certain low-order adiabatic WD g-modes which probe the reaction rate probability distribution function. With our two signature modes established, we now discuss the impact that overshoot inclusion has on these pulsation signatures.
§.§ Detailed Analysis of Differences
We first show the pulsation periods as a function of surface temperature for all σ_i models in Figure <ref>. Black dots mark the NOV periods and grey dots mark the OV periods. G-modes with radial orders n=1-10 are annotated, all for ℓ = 1. Figure <ref> shows that there are differences in the periods between the NOV and OV sets, but there is no global systematic offset; the differences between the OV and NOV periods for any given g-mode is random. This is the case even when σ_i is constant. We find that g_6 shows the largest spread in the periods of the models. Further, the kinetic energy diagrams for all models show that g_6 was a trapped mode by =10,000 K for every model, regardless of the σ_i, NOV/OV prescription. Since g_6 is one of the signals for σ_i in the NOV models, we point out this feature in Figure <ref>. We will touch on the cause of the larger spread later, but now focus our attention on the detailed pulsation properties of the signature g_2 and g_6 modes.
Figure <ref> shows, from top to bottom, the mass fraction profiles, B, and the g_6 and g_2 mode weight functions ζ for the final WDs at =10,000 K. The left and right columns are the NOV and OV results respectively. Here, we show the comparison for σ=0.0, but an interactive figure link is provided in the online article to compare these properties for any σ_i. For all σ_i, NOV/OV comparisons, the dotted vertical lines mark the region boundary locations in each panel. This is useful to compare where the boundary locations are across multiple profile properties. For instance, the R1-R2 boundary marks the C→O transition region, the first most prominent peak in B, and the first peak-like features in g_6 ζ and g_2 ζ in the NOV case. Comparing the OV column to the NOV column, we see the global impacts from overshooting. Overall, prominent features in the NOV set are lessened in magnitude in the OV set. The C→O transition is more gradual, lessening the composition gradient at the defined boundary. This remarkably impacts the shape of B. The first prominent peak after max(O) is much less in magnitude for all σ_i, and is not the only outstanding peak near the boundary. There are now multiple, smaller peaks in B and the g_6 ζ near the R1-R2 boundary as opposed to one.
There are slight deviations between NOV and OV in these profiles for the R3 and R4 regions of the WD, but the R1→R2 region in these profiles was affected most.
The g_6 ζ and g_2 ζ panels in Figure <ref> note the weight percentages per region in the WD. This tells each region's contribution to the overall mode period (frequency). An interesting result for all σ_i is that both the g_2 and g_6 modes decrease the amount of weight in R1 when overshoot is included, and increase the amount of weight in R2. There is also a slight decrease in the weight of R3 for g_2 for all σ_i when overshoot is included. These results are important. The R2 region is the most reliable region in terms of extracting the σ_i rate signature. When overshoot is included, the R2 contribution to the overall pulsation modes in g_2 and g_6 are accentuated, implying that these modes more reliably distinguish σ_i than the NOV set. A quantitative analysis of each region's weight percentage contribution per σ_i is given for both sets in Table <ref> and Table <ref> for g_2 and g_6 respectively. Overall, Table <ref> shows that R2 and R3 are the most heavily weighted regions for g_2's period. G_6 has more equitable weight dispersed across regions, but the combined weight of R1 and R2 accounts for ∼ 50 % of the g_6 period for any given model. As identified in Figure <ref> and Figure <ref>, R1 and R2 are the most impacted regions in this study. A g-mode with about half its weight from those regions may pick up the detailed differences more so than modes weighted more in outer regions. This may explain why Figure <ref> shows a larger spread in the g_6 periods as this g-mode is likely picking up the R1 and R2 contributions to its period better than other g-modes.
g_2 Weight Function Percentages Per WD Region

            R1             R2             R3             R4
 σ_i     NOV    OV      NOV    OV      NOV    OV      NOV    OV
-3.0    0.91   0.75    40.6   41.3    57.0   56.4    1.47   1.47
-2.5    1.14   0.99    40.2   44.2    57.2   52.9    1.43   1.94
-2.0    1.05   0.52    40.2   41.1    57.2   56.9    1.54   1.53
-1.5    1.18   0.53    39.5   41.7    57.9   56.2    1.50   1.50
-1.0    1.16   0.27    40.4   41.5    56.9   56.8    1.48   1.46
-0.5    1.15   0.18    38.8   42.1    58.6   56.3    1.43   1.49
 0.0    1.25   0.38    40.6   42.0    56.6   56.1    1.52   1.47
 0.5    1.44   0.49    40.8   41.9    56.2   56.2    1.52   1.47
 1.0    1.28   0.31    40.4   41.4    56.9   56.7    1.49   1.58
 1.5    1.32   0.28    39.9   41.4    57.2   56.8    1.50   1.51
 2.0    1.35   0.19    39.4   40.8    57.8   57.5    1.50   1.49
 2.5    1.25   0.42    38.3   41.6    58.9   56.6    1.47   1.45
 3.0    1.39   2.06    40.2   39.6    56.9   56.8    1.59   1.52
g_6 Weight Function Percentages Per WD Region

            R1             R2             R3             R4
 σ_i     NOV    OV      NOV    OV      NOV    OV      NOV    OV
-3.0    25.5   20.1    25.6   32.4    21.1   19.8    27.8   27.8
-2.5    33.1   19.1    29.5   33.5    13.1   20.2    24.2   24.2
-2.0    32.3   16.6    30.8   36.3    13.9   19.7    23.0   23.0
-1.5    33.5   17.3    29.6   39.1    12.6   17.3    24.4   24.4
-1.0    33.8   13.4    30.0   43.1    12.9   17.4    23.3   23.3
-0.5    33.5   11.7    29.8   47.5    12.8   14.9    23.9   23.9
 0.0    33.2   15.4    28.9   42.8    12.0   15.5    25.9   25.9
 0.5    26.6   16.4    22.5   41.0    13.8   14.0    37.1   37.1
 1.0    31.2   14.1    27.1   43.8    12.4   16.1    29.3   29.3
 1.5    32.2   13.7    27.4   46.7    12.2   14.7    28.3   28.3
 2.0    25.5   11.7    23.0   48.1    14.1   14.3    37.3   37.3
 2.5    30.9   14.2    28.0   42.5    12.5   13.8    28.6   28.6
 3.0    30.1   32.0    25.5   26.2    12.4   13.8    32.0   32.0
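The per-region percentages in these tables can be reproduced by integrating a mode's differential weight function over each region; a minimal sketch, assuming the fractional radius x, the differential weight function dW/dx, and the region boundaries have already been extracted from the pulsation output (the synthetic arrays below are illustrative only):

import numpy as np

def region_weight_percentages(x, dW_dx, boundaries):
    """`boundaries` = [x_R1R2, x_R2R3, x_R3R4] in fractional radius;
    returns the percentage of the mode weight in R1, R2, R3, and R4."""
    edges = [x[0]] + list(boundaries) + [x[-1]]
    total = np.trapz(dW_dx, x)
    percents = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x <= hi)
        percents.append(100.0 * np.trapz(dW_dx[sel], x[sel]) / total)
    return percents

# Synthetic arrays, illustrative only.
x = np.linspace(0.0, 1.0, 1000)
dW_dx = np.exp(-((x - 0.5) / 0.2) ** 2)
print(region_weight_percentages(x, dW_dx, [0.35, 0.75, 0.98]))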
When an integer multiple q of the local radial wavelength λ_r for a given g-mode nearly matches the width of a certain region(s) in a star, the g-mode resonates with that region(s). Figure <ref> shows q·λ_r (R_⊙) as a function of radius R (R_⊙) for the g_2 and g_6 modes. The NOV set doesn't show any particular close matches for any region. But the closest matches to the R2 width were the λ_r curves of g_2, q=1, and g_6, q=2. Further, the g_2, q=2 and g_6, q=3 modes were best at resonating with R3. Larger q values may show stronger resonance with R4. The resonance with R2 is enhanced in the OV set. The g_2, q=1 and g_6, q=2 λ_r curves match much more closely to the R2 width in the OV set. This implies that overshoot has enhanced the g-mode resonance for our signature modes in the region that was constructed mainly from radiative burning (Figure <ref>). We also see stronger resonance within the R1 region with the g_2, q=1 λ_r curve.
Will the differences between the NOV and OV sets in Figure <ref> impact the WD σ_i pulsation signatures shown in Figure <ref>? Figure <ref> shows the resulting relative period percent differences, as a function of σ_i at T_eff = 11,500 K (bright green) and T_eff = 10,000 K (blue). The period differences are negative for σ_i with longer periods than the σ=0 model, and are positive for σ_i with shorter periods than the σ=0 model for the given NOV or OV set. The left of this figure shows the period differences for g_2, and the right shows the period differences for g_6. The NOV set is indicated by the dotted lines and the OV set is the solid lines.
Looking at g_2, the period differences between NOV and OV at T_eff = 11,500 K are minimal; both sets show a trend of decreasing period with increasing σ_i. At T_eff = 10,000 K, the OV set shows an overall decrease in the percent differences, and a slightly greater variation in the overall σ_i vs. g_2 period difference shape. However, at both temperatures, the same pattern of the g_2 period decreasing with increasing σ_i is sustained with overshoot inclusion.
Further, the magnitude of percent differences, ranging from ≃ -1.5 to +1.0, is within the detectable threshold <cit.>.
The OV set shows greater deviation from the NOV line of period percent differences in g_6 more so than in g_2. This is most likely because g_6 is more sensitive to changes from R1 than g_2. Nonetheless, despite the σ_-0.5 and σ_+1.0 outliers, the overall trend remains: σ_i<0 generally have longer periods than σ_0 and σ_i>0 generally have shorter periods than σ_0. Once again, the magnitude of the relative period percent differences surpasses the observable threshold.
An interesting note is that for both the g_2 and g_6 signals, the percent differences change more in the NOV set as the models cool from T_eff = 11,500 K to 10,000 K than in the OV set. The OV set showed nearly the same period differences at both temperatures.
§ DISCUSSION
C22 found pulsation signature(s) for the experimental reaction rate probability distribution function. They describe four sensitivities that may impact this result: width of the O→C transition, mixing during CHeB, thermal pulse history on the AGB, and the 3α reaction rate.
This work investigated the impact that overshoot inclusion had on the reaction rate pulsation signature(s). Doing so, we address the width of the O→C transition and mixing during CHeB. Further, by ignoring the thermal pulse history in our models, we also address the sensitivity to the number of thermal pulses, albeit, the trivial case when the number of thermal pulses is zero. In the following paragraphs, we discuss how these three sensitivities impacted our results. We further caution how our results could be impacted from further sensitivity investigations.
Including overshooting overall increased the width of the O→C transition for all σ_i cool WDs. This lessened the sharp peak in B at the O→C transition, and decreased the peak in g_6 ζ at the O→C transition. While the transition peak was lessened and dispersed into R2, widening the O→C transition shows an enhancement of both the weight contribution to the R2 region for g_2 and g_6, and the R2 resonance with λ_r for g_2 and g_6. The widening of the O→C transition was from the combined effects of overshoot inclusion and the σ_i prescription. We conclude that widening the O→C transition imposes differences in B, ζ, and the pulsation periods. Despite these changes, we still find the g_2 and g_6 relative period differences in the NOV and OV sets to distinguish the reaction rate probability distribution function. Namely, the pattern of decreasing period with increasing σ_i persisted in both NOV and OV sets. By itself, the inclusion of overshooting does not destroy the seismic signatures of the reaction rate in our WD models – which was the primary question of this study.
We caution that increasing (decreasing) the width of the O→C transition in CO WD models could potentially yield different results. Our CO WD models were informed from their evolution history, with the stated model parameters. Thus, an increase (decrease) of the width of the O→C transition may come from choosing different mixing processes, prescriptions and parameters, such as for convection and overshooting. A change in the width of the O→C transition may also come from mixing processes not considered in this study such as
time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing,
or first-order phase separations of the CO mixture <cit.>.
Ignoring the thermal pulse history gave an additional low-order adiabatic g-mode signature for σ_i, namely the g_2 signal. This signal was not found in C22, where the thermal pulse history was included. Future studies on the thermal pulse phase of evolution with different temporal and spatial resolutions are needed to determine the sustainability of the g_2 signal as a probe for σ_i.
Concurrently, future studies could also explore the interaction, if any, between the thermal pulses and overshooting during CHeB on the chemical profiles.
The CO cores of WDs are the result of the competition between the 3α and ^12C(α,γ)^16O reactions during CHeB. An experimental 3α reaction rate probability distribution function, similar to the existing one for ^12C(α,γ)^16O
<cit.>, does not yet exist to our knowledge, although a probability distribution function could be constructed using the STARLIB reaction rate library <cit.>.
Future studies involving both reaction rate probability distribution functions could probe properties of DAV WD models in the 3α rate - ^12C(α,γ)^16O rate plane. For example, the 3α reaction rate is likely to slowly modulate the central ^16O mass fraction at any ^12C(α,γ)^16O reaction rate because 3α controls the production of ^12C. The ^12C(α,γ)^16O reaction rate will likely modulate the central ^16O mass fraction more strongly at any 3α reaction rate. We speculate that the radiative region R2 will exist in all such models. We also suspect that all such models, whether terminated at the first thermal pulse or evolved through the thermal pulse phase, will show a trapped mode, with substantial trapping from R2, that best probes the ^12C(α, γ)^16O burning reaction rate (i.e. g_6 in this work, and see Figure 9 in C22). We caution that the relative period shifts we find in this work from considering the ^12C(α,γ)^16O probability distribution and overshooting may change when a 3α reaction rate probability distribution function is also considered.
<cit.> found that including overshooting impacted the ensuing WD pulsations by ∼ 2-5 s.
Their results were independent of their reaction rate uncertainty evaluation. We combined the effects of overshooting and the reaction rate sensitivities in our pulsation analysis, and likewise find period differences of similar magnitudes. Our reaction rate analysis spanned the current experimental probability distribution function, which samples different rate values than those explored in <cit.>. They concluded that the reaction rate uncertainty was less relevant than overshooting. In this study, we find that the combined effects of overshooting and the reaction rate probability distribution function yield notable differences in the structure of the CO WDs and in their pulsation periods. Despite these differences, we still find pulsation signatures of σ_i.
We conclude this section by discussing the physical meaning of our results. Overall, both the g_2 and g_6 signatures indicate that the periods decrease with increasing σ_i. Put another way, increasing the amount of ^16O in the WDs shortens the periods of these signature modes. This trend was also seen in <cit.>: as the amount of ^22Ne in the WDs was increased, the periods of all g-modes analyzed were shorter. The explanation came from analyzing the components of the frequency equation; one of the largest drivers of the period differences was an increase in pressure scale height with increasing ^22Ne abundance. If one likens pressure scale height to tension in a string, increasing the tension in a string will shorten its period. WDs are not strings, but the line of reasoning is analogous.
One might wonder why not all g-modes display this trend, and why it is only g_2 and g_6. In <cit.>, the presence or absence of ^22Ne extended throughout ∼99% of the WD's composition structure. Thus, a uniform increase (decrease) in ^22Ne impacts all regions of the WD equally, which is likely the reason for the global offsetting of the periods of all g-modes. In comparison, increasing or decreasing the reaction rate imposes a coupled effect on both ^12C and ^16O that is not uniform across the regions of a WD's structure. The R1 and R2 regions are most affected by the reaction rate, with some impact on the inner part of the R3 region. Our analysis above found that the R1 and R2 regions contribute more to the periods of the g_2 and g_6 signature modes than to those of other g-modes. This is the most probable reason why only certain modes are capable of distinguishing the reaction rate, within the conditions of the present analysis.
§ SUMMARY
We conducted a search for signatures of the current
experimental ^12C(α,γ)^16O reaction rate probability distribution function in the pulsation periods of CO WD models with the inclusion of overshooting. We found two signature adiabatic g-modes that show period differences following the reaction rate probability distribution function σ_i trend regardless of whether or not overshoot is included. The g_2 period difference signature is inversely proportional to σ_i. Without overshoot, the g_2 relative period differences span ± 0.9%. With overshoot, the g_2 relative period differences range from -1.33% to 0.47%. The average magnitude of the g_2 relative period differences was 0.46% and 0.44%, respectively. The g_6 period differences were larger in magnitude, spanning from -3.44% to 1.78% for NOV and from -2.02% to 1.58% for OV. The average magnitude of the g_6 period differences was 1.21% and 0.95%, respectively. The average magnitudes of the g_2 and g_6 period differences were thus slightly smaller in the OV set than in the NOV set.
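For concreteness, the period metric quoted above can be written as a one-line function. The Python sketch below is our own illustration and assumes that the relative period percent difference of a σ_i model is taken with respect to the σ_0 model of the same set (NOV or OV); the array names in the usage comment are hypothetical.

import numpy as np

def relative_period_percent_difference(period_sigma_i, period_sigma_0):
    # 100 * (P_i - P_0) / P_0, evaluated mode by mode (e.g. for g_2 or g_6)
    return 100.0 * (np.asarray(period_sigma_i) - period_sigma_0) / period_sigma_0

# The quoted average magnitudes then correspond to, e.g.,
# np.mean(np.abs(relative_period_percent_difference(P_g2_all_sigma_i, P_g2_sigma0)))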
We found that the R2 weight contribution to these g-modes was enhanced with overshoot inclusion. The R2 region remains the best identifying region for tracing the reaction rate probability distribution function. This is because even with overshoot inclusion, it is predominantly constructed by radiative burning during CHeB.
Regardless of whether or not overshooting is considered, we find:
* two signature g-modes, g_2 and g_6, probe σ_i
* g_2 is inversely proportional to σ_i and g_6 is a trapped mode
* the g_2 and g_6 periods are generally shorter for positive σ_i and longer for negative σ_i
* both signatures have period deviations within the detectable regime
These findings suggest that an astrophysical constraint on the reaction rate probability distribution function remains, in principle,
extractable from the period spectrum of observed variable WDs.
§ ACKNOWLEDGEMENTS
We thank James Deboer for sharing the ^12C(α,γ)^16O probability
distribution function, Josiah Schwab for sharing wd_builder,
and Pablo Marchant for sharing mkipp.
We acknowledge using ChatGPT <cit.> to polish the language of one paragraph <cit.>.
This research is supported by NASA under the Astrophysics Theory Program grant NNH21ZDA001N-ATP, and in part by the National Science Foundation under Grant No. NSF PHY-1748958.
This research made extensive use of the SAO/NASA Astrophysics Data System (ADS).
<cit.>,
20190830 <cit.>,
wd_builder <https://github.com/jschwab/wd_builder>,
<cit.>,
mkipp <https://github.com/orlox/mkipp>,
<cit.>,
<cit.>, and
<cit.>.
§ MICROPHYSICS IN MESA
The MESA EOS is a blend of the OPAL <cit.>, SCVH
<cit.>, FreeEOS <cit.>, HELM <cit.>,
PC <cit.>, and Skye <cit.> EOSes.
Radiative opacities are primarily from OPAL <cit.>, with low-temperature data from <cit.>
and the high-temperature, Compton-scattering dominated regime by
<cit.>. Electron conduction opacities are from
<cit.> and <cit.>.
Nuclear reaction rates are from JINA REACLIB <cit.>, NACRE <cit.> and
additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>.
Thermal neutrino loss rates are from <cit.>.
§ MODEL OPTIMIZATION AND RESOLUTION
§.§ Reduced Chemical Network
Our evolutionary models are computationally expensive. This paper is concerned with overshooting and the ^12C(α,γ)^16O reaction rate probability distribution function, which primarily dictate the evolutionary processes and consequences of the CHeB phase. The isotopes most impacted during CHeB are ^4He, ^12C, and ^16O, with two further isotopes impacted to a lesser degree. We thus optimize the efficiency of our models by reducing the number of isotopes in the chemical network from 30 to 23. The eliminated isotopes are ^21Ne, ^21,22,23Na, ^23,24Mg, and ^56Fe. A comparison of the resulting inner mass fraction profiles of the 5 most abundant isotopes for the two networks is shown in Figure <ref>, at the completion of CHeB. Both network models used the same temporal and spatial resolution during CHeB. The run-time was reduced from a few days to a few hours on 12 cores. All resolution studies were conducted with σ_i=0.0 and without overshoot (NOV).
Reducing the network impacted ^22Ne most, with an offset of ∼ 22% more ^22Ne in the 23 isotope network. We note that C22 used a 30 isotope network and that our overall signature results persist through variations in the heavier isotopes.
§.§ Temporal Resolution
Several timestep limiters in MESA help optimize convergence studies. In this paper, we want to limit the timestep to achieve a temporal resolution that yields a smooth evolution of the central ^4He, ^12C, and ^16O abundances during CHeB. We first utilize the delta_XC_cntr_limit limiter, which limits the amount the central ^12C abundance can change in a given timestep. To help optimize computational run-time, we begin limiting the change in central ^12C during CHeB once the central helium abundance X(^4He_c)<0.6, by setting the corresponding limiter controls in the run_star_extras.f90 file.
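Conceptually, a limiter of this kind rescales the next timestep whenever the change of the central ^12C abundance over the last step exceeds a chosen threshold. The following Python sketch is only a schematic of that logic, not MESA source code, and the numerical values are assumptions rather than the values used for our resolutions.

def next_timestep(dt, delta_XC_center, limit=1.0e-3, shrink=0.5, grow=1.05):
    # delta_XC_center: change of the central 12C mass fraction over the last step
    # if the change exceeded the limit, cut the next timestep; otherwise let it grow slowly
    if abs(delta_XC_center) > limit:
        return dt * shrink
    return dt * grow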
This temporal resolution was used for the 30 and 23 isotope network models. We refer to it as resolution A. The remaining temporal resolution studies were performed using the 23 isotope chemical network.
The next iteration of increased temporal resolution added further limiter controls in the run_star_extras.f90 file. This resolution is applied slightly earlier during CHeB, when X(^4He_c)<0.5, and adds limits on the change in central temperature and density relative to resolution A. This is resolution B.
Our third resolution iteration tightened the limiter controls in the run_star_extras.f90 file further: the limiters are set at the start of CHeB, and the limiter values are decreased from those of resolution B. This is resolution C.
A comparison of resolutions A, B, and C is shown in Figure <ref>. In each column, the solid light curves represent resolution A, the dotted curves resolution B, and the dark solid curves resolution C.
The left panel shows the evolution of the central abundances of ^4He, ^12C, and ^16O during CHeB, starting when X(^4He_c)≲0.6 until the completion of CHeB. The central abundances for resolutions A and B are nearly identical. Resolution C varies slightly, with the final central abundances reaching slightly larger values than in resolutions A and B. Further, all three resolutions show a smooth evolution of these central abundances throughout CHeB.
The middle panel of Figure <ref> shows the mass fraction profiles at the completion of CHeB for the 5 most abundant isotopes at each resolution. The ^12C and ^16O profiles for A are noticeably different from those for B and C, especially after the O→C transition. This is more apparent in the right panel of Figure <ref>, which zooms in on the ^12C and ^16O profiles of the three resolutions. Resolution B follows A in the core, but then more closely aligns with C after the O→C transition. Since resolutions B and C agree well, with only a slight difference in the central ^12C and ^16O abundances, we set resolution C as the standard temporal resolution for our 13 models.
|
http://arxiv.org/abs/2307.05004v1 | 20230711035346 | Control as Probabilistic Inference as an Emergent Communication Mechanism in Multi-Agent Reinforcement Learning | [
"Tomoaki Nakamura",
"Akira Taniguchi",
"Tadahiro Taniguchi"
] | cs.AI | [
"cs.AI",
"cs.LG",
"cs.MA"
] |
Control as Probabilistic Inference as an Emergent Communication Mechanism in Multi-Agent Reinforcement Learning
Tomoaki Nakamura, Akira Taniguchi, Tadahiro Taniguchi
August 12, 2023
===============================================================================================================
This paper proposes a generative probabilistic model integrating emergent communication and multi-agent reinforcement learning.
The agents plan their actions by probabilistic inference, called control as inference, and communicate using messages, which are latent variables estimated from the planned actions.
Through these messages, each agent can send information about its own actions and obtain information about the actions of the other agent.
Therefore, the agents adapt their actions according to the estimated messages to achieve cooperative tasks.
This inference of messages can be considered communication, and the procedure can be formulated as the Metropolis-Hastings naming game.
Through experiments in a grid world environment, we show that the proposed probabilistic generative model (PGM) can infer meaningful messages to achieve a cooperative task.
§ INTRODUCTION
Communication allows humans to engage in cooperative actions and accomplish tasks.
Furthermore, a task can be executed efficiently by creating symbols (e.g., words and gestures) that are only shared among the participants involved in the task.
In this study, we consider a symbol emergence model for a multistep cooperative task performed by two agents.
In this paper, we propose a probabilistic generative model comprising a Markov decision process that determines the actions of each agent and a message that acts as a latent variable coordinating the actions of both agents.
Figure <ref> shows an overview of the proposed model.
Each agent plans its actions using probabilistic inference based on the control as inference (CaI) framework <cit.>.
A shared latent variable is inferred based on the planned actions.
Because this latent variable is shared, each agent can obtain the information of another agent and change its plan through this latent variable.
In other words, this latent variable acts as a message and the actions of both agents are coordinated by the communication that involves exchanging the messages.
To allow messages for the cooperative task to emerge, we employed the Metropolis–Hastings naming game (MHNG) <cit.>, which is based on the Metropolis–Hastings algorithm.
In the original MHNG, two agents generate symbols representing the objects observed by them, that is, object category names.
In this study, we applied the MHNG to obtain symbols representing the states of the two agents, which are used for communicating their own states and understanding the states of each other.
Deep reinforcement learning has been used for multiagent tasks <cit.>.
In these studies, multiple agents were connected through a network, and messages were inferred by making them differentiable variables trained through backpropagation. In other words, error information computed from the internal states of other agents is directly transmitted to the agent itself, which is an unnatural modeling choice from a communication perspective.
By contrast, the MHNG-based method <cit.> avoids such unnatural assumptions and frames the inference of the message variables as natural communication.
Deep learning-based models for emergent communication have also been proposed <cit.>.
Most of these studies employed one-way communication, from the sender to the receiver, and included a referential game task wherein an appropriate target was selected through communication. However, these studies have not been applied to tasks that require multistep action selection through bidirectional communication, such as the task employed in this study.
§ PROPOSED METHOD
Figure <ref> shows a graphical model of the cooperative action generation of the two agents, and the details of each stochastic variable are listed in Table <ref>.
It was assumed that the behavior of each agent was generated through a Markov decision process.
State s_t of an agent at time t is determined according to state s_t-1, action a_t-1, and message m_t, which is the shared latent variable:
s_t ∼ p(s_t | m_t, s_t-1, a_t-1).
Because the agents can influence each other's states through the latent variable m_t, this latent variable can be considered a message. Furthermore, the process of inferring the optimal value of m_t from the states and actions of both agents can be considered communication, as described further on.
The optimality variable o^(m)_t ∈{0, 1} represents the state optimality of the two agents: 1 indicates optimal, whereas 0 indicates not optimal.
Therefore, probability p(o^(m)_t=1 | m_t ) of the optimality variable is the optimality of the states of both agents represented by message m_t.
Similarly, o_t, o'_t ∈{0, 1} are the optimality variables of each agent's state and action: 1 indicates that the state and action are optimal, whereas 0 indicates they are not.
The probability p(o_t=1 | s_t, a_t) of this optimality variable represents the optimality of the state and action and is assumed to be computed using reward function r(s_t, a_t) as follows:
p(o_t=1 | s_t, a_t ) ∝exp( r(s_t, a_t) ).
In other words, by inferring state s_t and message m_t under the condition that the value of the optimality variables is always 1, the optimal state sequence for both agents can be calculated as follows:
s_t, m_t ∼ p(s_t, m_t | s'_t, o_1:T =1, o^(m)_1:T=1 ).
However, this equation has two problems: it includes the internal state s'_t of the other agent, which cannot be observed in practice, and it is difficult to derive this probability distribution analytically.
We solved these problems by alternately inferring the following two variables:
s_t ∼ p(s_t | o_t:T =1, m_1:T) :planning,
m_t ∼ p(m_t | s_t, s'_t, o^m_1:T=1) :communication.
Equation (<ref>) describes state planning, which can be computed based on the CaI framework <cit.> proposed by Levine.
Equation (<ref>) describes the inference of the message and can be formulated using the MHNG proposed by Taniguchi et al. <cit.>, which allows both agents to infer messages through communication without observing the internal states of each other.
The optimal states and messages were inferred using the following procedure:
* Initialize the distribution p(s_t | m^*_t ) of states given message m^*_t to a uniform distribution.
* Iterate the following steps C times.
* Using p(s_t | m^*_t ), optimal states under message m^*_t are inferred using the CaI framework.
s^*_t ∼ p(s_t | o_1:T =1, m^*_1:T ) t=1, ⋯, T.
* Using the inferred states s^*_t, s'^*_t, the message m^*_t is updated through the MHNG.
m^*_t ∼ p(m_t | s^*_t, s'^*_t, o^m_1:T=1) t=1, ⋯, T.
where C denotes the number of times the two agents communicate. A schematic sketch of this procedure is given below.
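As a compact illustration, the alternating procedure can be written as a small driver function. The Python sketch below is our own schematic: plan_states and mh_naming_game are caller-supplied functions standing for the planning step and the MHNG update derived in the following subsections, and m_init is the initial (uninformative) message sequence.

def infer_states_and_messages(plan_states, mh_naming_game, m_init, C):
    # plan_states(m_star, agent)   -> optimal state sequence of one agent given the messages
    # mh_naming_game(s_a, s_b, m)  -> updated message for one time step
    # m_init                       -> initial message sequence (step 1)
    m_star = list(m_init)
    s_A = s_B = None
    for _ in range(C):                            # step 2: C rounds of communication
        s_A = plan_states(m_star, 'A')            # planning (CaI) for Agent A
        s_B = plan_states(m_star, 'B')            # planning (CaI) for Agent B
        for t in range(len(m_star)):              # step 2b: MHNG update of each m*_t
            m_star[t] = mh_naming_game(s_A[t], s_B[t], m_star[t])
    return s_A, s_B, m_star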
In subsequent sections, the equations required for each inference are derived.
§.§ Task Planning Using Messages
§.§.§ Computation of Backward Probability
The probability that the value of optimality variables o_t:T is 1 at all future times under message m_t:T can be expressed as
p(o_t:T=1 | s_t, a_t, m^*_t:T)
= p(o_t=1 | s_t, a_t) p( o_t+1:T=1| s_t, a_t, m^*_t:T)
= p( o_t=1| s_t, a_t ) ∫ p( o_t+1:T=1 | s_t+1, m^*_t+1:T )
p(s_t+1 | s_t, a_t, m^*_t+1 ) ds_t+1
≡ q(s_t, a_t),
where p(s_t+1| s_t, a_t, m^*_t+1 ) ∝ p(s_t+1|s_t, a_t) p(s_t+1|m^*_t+1) is obtained using the product-of-experts approximation, and the equation is transformed to
q(s_t, a_t) ≈ p( o_t=1| s_t, a_t ) ∫ p( o_t+1:T=1 | s_t+1, m^*_t+1:T )
p(s_t+1|s_t, a_t) p(s_t+1|m^*_t+1) ds_t+1.
Next, we define v(s_t) as
v(s_t) = p( o_t:T=1 | s_t, m^*_t:T )
= ∫ p(o_t:T=1 | s_t, a_t, m^*_t:T) p(a_t | s_t) da_t
= ∫ q(s_t, a_t) p(a_t | s_t) da_t.
By using v(s_t), Eq. (<ref>) becomes
q(s_t, a_t) = p( o_t=1| s_t, a_t ) ∫ v(s_t+1)
p(s_t+1|s_t, a_t) p(s_t+1|m^*_t+1)ds_t+1.
Using these results, v(s_T) can be computed using q(s_T, a_T), and q(s_T-1, a_T-1) can be computed using v(s_T).
Therefore, by starting the computation from q(a_T, s_T), we can compute q(a_t, s_t) and v(s_t) at all times as follows:
q(a_T, s_T) → v(s_T) → q(a_T-1, s_T-1)
→ v(s_T-1) → q(a_T-2, s_T-2) ⋯→ q(a_1, s_1)
§.§.§ Computation of Forward Probability
The probability of a state given that all past optimality variables are 1 can be calculated as follows:
α(s_t) = p(s_t | o_1:t-1=1, m^*_1:t-1)
≈ ∫∫ p(s_t | s_t-1, a_t-1 )
p(a_t-1|s_t-1, o_t-1=1) p(s_t-1 | m^*_t-1 )
p(s_t-1 |o_1:t-2=1, m^*_1:t-2 ) ds_t-1 da_t-1
= ∫∫ p(s_t | s_t-1, a_t-1 ) p(a_t-1|s_t-1, o_t-1=1)
p(s_t-1 | m^*_t-1 ) α(s_t-1) ds_t-1 da_t-1.
Assuming that p( a_t-1| s_t-1 ) is uniformly distributed,
p(a_t-1|s_t-1, o_t-1=1) ∝ p(o_t-1=1| a_t-1, s_t-1 ),
and Eq. (<ref>) becomes
α(s_t) ∝∫∫ p(s_t | s_t-1, a_t-1 ) p(o_t-1=1| a_t-1, s_t-1 )
p(s_t-1 | m^*_t-1 ) α(s_t-1) ds_t-1 da_t-1.
Therefore, by starting the computation from α(s_1) = p(s_1), we can compute α(s_t) at all times, from t=1 to t=T.
§.§.§ Optimal State Distribution
Using the backward and forward probabilities computed above, the probability of the state given that all optimality variables are 1 at all times is computed as follows:
p(s_t | o_1:T =1, m^*_1:T)
∝ p( o_t:T=1 | s_t, m^*_t:T )
p(s_t | o_1:t-1=1, m^*_1:t-1) p(o_1:t-1=1 )
∝ v(s_t) α(s_t).
This equation can be solved without using the other agent's state s'_t; therefore, each agent can compute the optimal state distribution using the received message and its own internal parameters.
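For a finite state and action space such as the grid world used later, the backward and forward recursions above can be written compactly with arrays. The Python sketch below is our own illustration, not the authors' implementation; it assumes a uniform action prior p(a_t | s_t), 0-indexed time, and tabular inputs.

import numpy as np

def plan_state_posterior(p_trans, p_opt, p_s_given_m, p_s1):
    # p_trans[s, a, s2]  : p(s_{t+1}=s2 | s_t=s, a_t=a)
    # p_opt[s, a]        : p(o_t=1 | s_t=s, a_t=a)
    # p_s_given_m[t, s]  : p(s_t=s | m*_t), one row per time step
    # p_s1[s]            : p(s_1=s)
    # returns post[t, s] proportional to v(s_t) * alpha(s_t)
    T, S = p_s_given_m.shape

    # backward pass: q(s_t, a_t) and v(s_t), with a uniform p(a_t | s_t)
    v = np.zeros((T, S))
    q = p_opt.copy()                        # q(s_T, a_T) = p(o_T=1 | s_T, a_T)
    v[T - 1] = q.mean(axis=1)
    for t in range(T - 2, -1, -1):
        future = p_trans @ (v[t + 1] * p_s_given_m[t + 1])   # sum over s_{t+1}
        q = p_opt * future
        v[t] = q.mean(axis=1)

    # forward pass: alpha(s_t)
    alpha = np.zeros((T, S))
    alpha[0] = p_s1
    for t in range(1, T):
        w = p_opt * (alpha[t - 1] * p_s_given_m[t - 1])[:, None]   # weight on (s_{t-1}, a_{t-1})
        alpha[t] = np.einsum('sa,sar->r', w, p_trans)
        alpha[t] /= alpha[t].sum()

    post = v * alpha
    return post / post.sum(axis=1, keepdims=True)

The optimal state sequence s^*_t can then be sampled from post[t], one time step at a time.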
§.§ Message Generation for Cooperation
The inferred states s^*_t, s'^*_t are then used to infer the message m^*_t whose optimality variable is o^(m)_t = 1:
m^*_t ∼ p(m_t | s^*_t, s'^*_t, o^(m)_t=1).
However, this equation includes the internal state s'_t of the other agent, which cannot be directly observed.
Therefore, we considered inference using the Metropolis-Hastings algorithm, similar to <cit.>.
First, to generate samples that follow Equation (<ref>), the target distribution is defined as
p(m̂|s^*_t, s'^*_t, o^(m)_t=1)
= p(s^*_t | m̂ ) p(s'^*_t, o^(m)_t=1 | m̂ ) p(m̂) / p(s^*_t) p(s'^*_t)
∝ p( m̂| s^*_t ) p( m̂ | s'^*_t, o^(m)_t=1),
where the prior distributions p(s^*_t), p(s'^*_t), and p(m̂) are set as uniform distributions.
When Agent A decides whether to accept or reject a message proposed by Agent B, the proposal distribution for Agent B is given as follows:
Q(m̂ | m ) = p( m̂ | s'^*_t, o^(m)_t=1)
Using the target distribution from Equation (<ref>) and the proposed distribution from Equation (<ref>), the acceptance probability r of message m̂ can be computed as follows:
r = [ p( m̂ |s^*_t, s'^*_t, o^(m)_t=1) Q(m|m̂) ] / [ p(m |s^*_t, s'^*_t, o^(m)_t=1) Q( m̂| m) ]
= [ p(m̂|s^*_t) p(m̂|s'^*_t, o^(m)_t=1) p( m | s'^*_t, o^(m)_t=1) ] / [ p(m|s^*_t) p(m|s'^*_t, o^(m)_t=1) p( m̂ | s'^*_t, o^(m)_t=1) ]
= p(m̂|s^*_t) / p(m|s^*_t) .
In other words, the probability r that Agent A accepts the proposed message can be calculated using only the parameters of Agent A.
By iterating the process of proposing and accepting/rejecting messages, both agents can infer message m^* that follows the target distribution (Eq. (<ref>)) according to the Metropolis-Hastings algorithm.
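A single exchange of the naming game, seen from the side of the accepting agent (here Agent A), can be sketched as follows. This is our own illustration: the two categorical distributions are assumed to be normalized and strictly positive, and the proposal is accepted with probability min(1, r) as in a standard Metropolis-Hastings step. In the full game, the two agents alternate the roles of speaker and listener at every iteration.

import numpy as np

def mh_message_update(m_current, p_m_given_sA, p_m_given_sB_o, rng):
    # p_m_given_sA[m]  : Agent A's p(m | s*_t)                (listener side)
    # p_m_given_sB_o[m]: Agent B's p(m | s'*_t, o^(m)_t=1)    (speaker side, proposal Q)
    m_hat = rng.choice(len(p_m_given_sB_o), p=p_m_given_sB_o)   # Agent B proposes a message
    r = p_m_given_sA[m_hat] / p_m_given_sA[m_current]           # acceptance ratio of the equation above
    return m_hat if rng.random() < min(1.0, r) else m_current

# usage with a hypothetical 32-symbol message space:
# rng = np.random.default_rng(0)
# m = mh_message_update(m, p_m_given_sA, p_m_given_sB_o, rng)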
§ EXPERIMENTS
To verify that the proposed method could generate cooperative behavior, a two-agent movement task in a 2 × 4 grid world was conducted.
§.§ Experimental Setup
Figure <ref> shows the 2 × 4 grid world used in the experiment.
The triangles and circles represent agents, whose aim is to achieve their respective goals without colliding with each other.
To encode each agent's goal, the probability distribution representing individual optimality was set as follows:
p(o_t=1 | s_t, a_t ) ∝
1 : if the agent reaches its goal by taking action a_t in state s_t,
10^-7 : otherwise.
The optimality variable o^(m)_t of cooperative behavior was set to zero if the agents collided, and one otherwise.
The agents could perform four actions, moving up, down, left, and right, but could not remain in the same grid cell.
In other words, to obtain larger accumulated optimality, the agents were required to keep moving while avoiding collisions and to repeatedly enter and leave their goal cells as many times as possible.
States s_t, s'_t denote the grid cell indices, m_t denotes a 32-dimensional categorical variable, and multinomial distributions are used for these variables.
The distribution of optimality o_t, o'_t, o^(m)_t ∈{0, 1} was binomial.
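As a concrete illustration, the two optimality terms of this setup can be written directly in code. The Python sketch below is ours: the goal coordinates are those used later in the action planning experiment, the transition itself (which cell an action leads to) is left outside the functions, and the proportionality constants are omitted.

GOALS = {'A': (0, 2), 'B': (0, 3)}      # goal cells of the two agents (see the planning experiment)

def p_o_given_next_cell(agent, next_cell):
    # individual optimality p(o_t=1 | s_t, a_t), written in terms of the cell
    # reached by taking a_t in s_t: ~1 at the agent's goal, 10^-7 otherwise
    return 1.0 if next_cell == GOALS[agent] else 1.0e-7

def p_om_given_cells(cell_A, cell_B):
    # cooperative optimality p(o^(m)_t=1): 0 when the agents collide, 1 otherwise
    return 0.0 if cell_A == cell_B else 1.0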
§.§ Message Learning
The probability distribution parameters in the message model were learned from the optimality variables of cooperative behavior, actions, and states of both agents moving randomly over 200 steps in the grid world.
Message m_t was inferred using the MHNG. A subset of the probability distributions (p(s|m), p(s'|m), p(o^(m)|m)) represented by the learned messages is shown in Fig. <ref>.
It was correctly learned that the optimality was low for m=1 and m=3, which indicated that both agents were in the same location, and high for m=2 and m=4, which indicated that the agents were in different locations.
These results show that messages expressing both states were learned through communication.
§.§ Action Planning
Thereafter, we tested whether the learned parameters could be used to achieve the goals without collisions.
The goal positions are denoted by x in Fig. <ref>, where the lower-left cell is the origin (0, 0), and (0, 2) and (0, 3) are the goals of Agents A and B, respectively.
Communication and planning were iterated C=10 times and a T=10 step path was planned.
The planned actions are shown in Figure <ref>(a); although two collisions remained, actions that avoid collisions and repeatedly enter and leave each goal were planned.
The number of times the agents collided and the number of times they reached their goals in the planned 10-step path, as a function of the number of communication repetitions, are shown in Figures <ref>(b)–(d), respectively.
In the case of C=0, wherein the agents did not communicate, the number of times each agent reached the goal was high, but collisions occurred at 5 out of 10 steps.
In the case of C=3, wherein the agents communicated three times, the number of times Agent B reached the goal decreased to four, but the number of collisions decreased to two.
This indicates that the agents could plan a path that allowed them to reach their goals while avoiding collisions.
§ CONCLUSIONS
We proposed a probabilistic generative model that can learn and generate cooperative behavior through symbol emergence by integrating CaI and MHNG.
Through a simple experiment in a grid world, we demonstrated that the proposed model can generate messages for cooperation and realize cooperative behavior.
This paper is preliminary, and we are planning the following future work:
* Use continuous variables as states and actions by combining the model with deep reinforcement learning.
* Use continuous variables as messages by using Gaussian process latent variable models or variational auto-encoders.
* Formulate the PGM for cooperative tasks with more than two agents.
* Extend the proposed method to POMDPs by introducing a state-space model.
§ ACKNOWLEDGMENT
This work was supported by the JST Moonshot R&D program, Grant Number JPMJMS2011.
|
http://arxiv.org/abs/2307.04148v1 | 20230709104530 | Towards a RISC-V Open Platform for Next-generation Automotive ECUs | [
"Luca Cuomo",
"Claudio Scordino",
"Alessandro Ottaviano",
"Nils Wistoff",
"Robert Balas",
"Luca Benini",
"Errico Guidieri",
"Ida Maria Savino"
] | cs.AR | [
"cs.AR"
] |
Towards a RISC-V Open Platform for Next-generation Automotive ECUs
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871669.
Luca Cuomo†,
Claudio Scordino†,
Alessandro Ottaviano⁎,
Nils Wistoff⁎,
Robert Balas⁎,
Luca Benini⁎,
Errico Guidieri†,
Ida Maria Savino†
⁎ Integrated Systems Laboratory, ETH Zurich, Switzerland
† Huawei Research Center, Pisa, Italy
{l.cuomo,c.scordino,e.guidieri,i.savino}@huawei.com
{aottaviano,nwistoff,balasr,lbenini}@ethz.ch
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================
The complexity of automotive systems is increasing quickly due
to the integration of novel functionalities such as assisted or autonomous driving.
However, increasing complexity poses considerable challenges to the automotive supply chain since the continuous addition of new hardware and network cabling is not considered tenable.
The availability of modern heterogeneous multi-processor chips represents a unique opportunity to reduce vehicle costs by integrating multiple functionalities into fewer Electronic Control Units (ECUs). In addition, the recent improvements in open-hardware technology allow to
further reduce costs by avoiding lock-in solutions.
This paper presents a mixed-criticality multi-OS architecture for
automotive ECUs based on open hardware and open-source technologies.
Safety-critical functionalities are executed by an AUTOSAR OS running on a RISC-V processor, while the Linux OS executes more advanced functionalities on a multi-core ARM CPU.
Besides presenting the implemented stack and the communication infrastructure, this paper provides a quantitative gap analysis between an HW/SW optimized version of the RISC-V processor and a COTS Arm Cortex-R in terms of real-time features, confirming that RISC-V is a valuable candidate for running AUTOSAR Classic stacks of next-generation automotive MCUs.
Automotive, AUTOSAR, mixed-criticality, open hardware, RISC-V, Multi-OS
§ INTRODUCTION
For decades, automotive has been a very conservative industry, with software
functionalities operated by simple Electronic Control Units (ECUs) communicating
through domain-specific networks (e.g. CAN, LIN, FlexRay).
However, the recent increase in the complexity of automotive systems due to the integration of novel functionalities, such as assisted or autonomous driving, poses considerable challenges to this industry.
Modern luxury cars already contain more than 100 different ECUs <cit.>,
and the addition of new hardware has become an untenable task due to the amount of cabling inside the vehicle <cit.> and increasing space, weight, power and cost (SWaP-C).
Thus, nowadays, a significant opportunity for this industry is represented by the availability of asymmetric multi-processor (AMP) chips, which integrate (i) high-performance, multi-core application-class CPUs running a general-purpose OS (GPOS) such as Linux, (ii) slow real-time microcontrollers (MCUs) running a real-time operating system (RTOS), and (iii) domain-specific accelerators.
The different processing units, in fact, could be used to integrate and consolidate multiple functionalities (even with different non-functional requirements) on the same ECU, reducing the amount of hardware and cabling inside the vehicle.
In parallel to this trend, another interesting opportunity for the automotive industry is represented by the open hardware initiatives, which aim at designing open instruction-set architectures (ISAs) to avoid vendor lock-in solutions and thus further reduce the recurrent costs faced by the OEMs.
In particular, the RISC-V ISA <cit.> is getting momentum across various industry domains as the future lingua franca for computing and is widely considered a promising technology with significant potential also for the transportation domain <cit.>.
While previous works combining multi-OS and open hardware architectures for the automotive and space domains have been proposed, they often rely on symmetric/heterogeneous multi-processor (SMP and HMP) chips demanding hypervisor support for multi-OS execution, closed-source hardware architectures, and bespoke software libraries for intra-OS communications (Sec. <ref>).
This paper proposes a hardware and software stack for the automotive domain that leverages both AMP and RISC-V-based hardware towards the design of an open platform for automotive that relies on typical middleware employed in automotive.
In particular, the paper provides the following contributions:
* We conceptualize a heterogeneous mixed-criticality system (MCS) with multi-OS architecture where a Linux-capable commercial multi-core system is paired with an open-source RISC-V MCU [<https://github.com/pulp-platform/cheshire>] designed around CVA6 <cit.> that runs an RTOS tailored for automotive, ERIKA Enterprise <cit.>.
* We demonstrate the MCS system on a heterogeneous FPGA, namely the Xilinx Zynq Ultrascale+, which combines a hard macro implementing an ARM-based multi-core system with programmable hardware implementing the RISC-V real-time MCU (Sec. <ref>).
To the best of the authors' knowledge, this is the first work that attempts to adopt a multi-OS open-source stack for automotive based on open hardware.
* We conduct a quantitative gap analysis of the CVA6 MCU against an Arm-based real-time MCU (Cortex-R series) available on the heterogeneous FPGA in terms of interrupt response time, showing a significant performance gap of the RISC-V interrupt support as intended in the Privileged specifications <cit.>.
* We extend the real-time capabilities of the RISC-V CVA6 MCU by coupling the core with a RISC-V fast interrupt controller (CLIC <cit.>), which allows achieving competitive real-time performance against the Arm competitor, paving the road for further development of the RISC-V ISA in the transportation domain (Sec. <ref>).
§ RELATED WORK
Automotive operating systems
In the '90s some German and French companies joined their efforts to create the OSEK/VDX consortium <cit.>, aiming at creating an open standard for the operating system and the communication stack of automotive embedded systems. Some parts of these specifications were then standardized in ISO 17356 <cit.>. Some open-source implementations have been
proposed over the years, with the most notable projects being Trampoline <cit.>
and ERIKA Enterprise <cit.>.
The AUTomotive Open System ARchitecture (AUTOSAR) consortium, started in 2004, has coordinated and driven a standardization effort in the last two decades to handle the growing complexity of the software inside vehicles.
The specification (namely, AUTOSAR Classic <cit.>) extended the original OSEK/VDX standard to design the stack for simple automotive ECUs executing tiny real-time operating systems (RTOSs) and communicating through domain-specific networks (e.g. CAN, LIN, FlexRay).
The advent of modern functionalities, like assisted or autonomous driving, has
then forced the consortium to release in 2017 an additional specification (namely, AUTOSAR Adaptive <cit.>) for a more dynamic platform based on the POSIX API <cit.>
and also capable of High-Performance Computing (HPC). The consortium has also provided an
exemplary implementation of part of the specification running on the Linux OS.
The idea of using a Multi-OS architecture for automotive is well understood in the literature.
Burgio et al. <cit.> proposed a Multi-OS architecture developed in the context of the HERCULES European project.
Despite employing the same RTOS presented in this paper (i.e., ERIKA Enterprise),
they relied on a closed-source Heterogeneous Multi-Processor (HMP) architecture based on the Arm big.LITTLE concept and ran multiple operating systems on top of an open-source hypervisor.
Moreover, the communication between the different operating systems was achieved through ad-hoc
libraries rather than using standard middleware employed in automotive.
On the industrial side, silicon vendors have started designing AMP system-on-chips (SoC) comprising
a high-end multi-core processor (possibly in a big.LITTLE configuration) tasked to run the GPOS, and a slower microcontroller tasked to run a safety-critical RTOS.
An example is the i.MX8 chip by NXP <cit.>, which includes Arm Cortex-A53 and A72 cores along with Arm Cortex-M4 cores. The inter-OS communication is delegated to bidirectional connection-less Remote Processor Messaging (RPMsg) interfaces.
More recently, Arm has proposed the first high-performance 64-bit real-time processor of the R-series, Cortex-R82. Despite the announced use case as a storage controller for the IoT domain, such a processor is a candidate for future dual GPOS/RTOS execution on a common hardware platform.
In fact, automotive OEMs are already transitioning from a domain architecture to a zonal architecture <cit.> similar to the one shown in Figure <ref>, where few Vehicle Computers run a multi-domain stack that includes both AUTOSAR Classic Platform (CP) and AUTOSAR Adaptive Platform (AP), along with 3rd party software (e.g., ROS2, plain Linux, other operating systems, etc.).
Albeit several architectural similarities exist between the available multi-OS platforms, this work distinguishes itself on multiple angles: (i) it relies on a fully open-source real-time MCU developed within the ever-growing RISC-V ecosystem to handle safety-critical tasks, (ii) the automotive software stack running on RISC-V is based on an open-source RTOS <cit.>, and (iii) it enables RISC-V as a leading actor in the transition towards zonal architectures, a transition that currently does not take open-hardware processors into account.
The following paragraph analyzes the state-of-the-art in bringing RISC-V architectures into the automotive domain in the last few years.
Automotive RISC-V architectures
RISC-V technology has received much attention during the last years after
some vendors released high-end multi-core CPUs at 1.5 GHz
capable of running Linux.
Some recent work has also been devoted to increasing the safety levels of RISC-V architectures.
De-RISC <cit.> is an H2020 project aiming at designing a RISC-V processor and
software stack for safety-critical systems. However, the project is mainly focused
on the space industry.
Abella et al. <cit.> identified some issues of the RISC-V
ecosystem related to security and reliability and provided four contributions
to implement lock-step and system-level testing. Their work is very relevant
for the automotive domain and could be implemented on top of our proposed
architecture.
Pietzsch <cit.> presented EMSA5-FS, a 32-bit, single-issue, in-order, 5-stage RISC-V processor specifically designed for functional safety.
Cosimi et al. <cit.> proposed to combine core-independent peripherals, including a Performance Monitoring Unit (PMU), an Error Management Unit, and an Execution Tracing Unit, to increase the safety integrity level of an application running on a RISC-V platform
up to the highest automotive level (i.e., ASIL-D).
Their implementation has been done on the same evaluation board used in our experiments
(i.e. Xilinx ZCU102 <cit.>) and can therefore be fully integrated with
the architecture presented in this paper.
Quite recently, Gruin et al. <cit.> presented MINOTAuR, a
timing predictable open source RISC-V core, based on the same
hardware architecture used in this work.
The experimental results have shown an overhead of 10% compared to
the unmodified core, obtained through partial speculative execution.
Although their work does not explicitly address automotive,
predictability is a non-functional requirement needed in every time- and safety-critical
domain (including automotive).
Very recently, SiFive and Renesas have announced a long-term
collaboration to design and produce RISC-V processors for the automotive
domain <cit.>.
These ISO26262-qualified processors will all have the same ISA to increase code
portability.
This work relies on the open-source 64-bit core CVA6 <cit.>. It extends its real-time capabilities to serve as a time- and safety-critical RISC-V system in a multi-OS platform, closing the gap with existing embedded COTS solutions (Sec. <ref>).
Automotive communication protocols
To ensure a reasonable and manageable complexity through composability,
the automotive industry is replacing the original
signal-oriented communication with modern service-oriented
architectures (SoA).
Using this paradigm, the various software components are decoupled from each other
and communicate by requesting and providing "services".
Each component can be designed in isolation, and the system is assembled
by composing and integrating the various functionalities.
Proposed initially by BMW, Scalable service-Oriented
MiddlewarE over IP (SOME/IP) <cit.> is a SoA protocol specifically
designed for Ethernet-based communications in automotive.
This standard specifies the serialization mechanism,
the service discovery and the integration with the AUTOSAR
stack.
More recently, Data Distribution Service (DDS) <cit.>
started attracting a growing interest from the automotive industry <cit.>.
Originally proposed in 2001, DDS became an Object Management Group (OMG)
standard in 2004, with several open-source implementations available
nowadays. The DDS specifications <cit.> describe a Data-Centric Publish-Subscribe
model for distributed application communication. This model builds on
the concept of a “global data space” contributed by publishers and
accessed by subscribers: each time a publisher posts new data into this
global data space, the DDS middleware propagates the information to all
interested subscribers.
The data-centric communication allows the decoupling of publishers from
subscribers, thus building a very scalable and flexible architecture.
The underlying data model specifies the set of data items,
identified by “topics“.
Nowadays, DDS is natively supported by most frameworks used in automotive
— namely, AUTOSAR Classic <cit.>, AUTOSAR Adaptive and ROS <cit.>.
Note that, according to some recent investigations <cit.>,
the ROS framework is already being used by about 80% of the automotive OEMs
and Tier-1s developing autonomous vehicles.
§ SYSTEM ARCHITECTURE
As shown in Figure <ref>, the proposed mixed-criticality architecture
consists of an AMP system-on-chip (SoC) comprising
a high-end multi-core processor tasked to run the GPOS, and a slower microcontroller tasked to run a safety-critical RTOS.
We design the RISC-V MCU around the 64-bit CVA6 core.
CVA6 is a 6-stage, single-issue, in-order core implementing the G and C extensions of the 64-bit RISC-V instruction set (RV64GC).
The core implements a Translation Lookaside Buffer (TLB) to accelerate address translations from the virtual to the physical domain and a classic branch predictor consisting of a branch target buffer (BTB), a branch history table (BHT), and a return address stack (RAS). The core employed in this work is configured with a 32-kiB write-through L1 data cache and a 16-kiB instruction cache.
Besides the core, the MCU hosts 128-kiB scratchpad memory (SPM), a direct memory access (DMA) engine, and low-latency peripherals (SPI, I2C, UART) for off-chip communication.
The MCU relies on an AXI4-compliant, on-chip, non-coherent interconnect system. AXI4 interfaces are exposed to the multi-core domain through a software-managed IOMMU (such as in <cit.>) consisting of an IO translation lookaside buffer (IOTLB) to efficiently translate virtual user-space application addresses from the multi-core domain to physical memory.
Fig. <ref> depicts the RISC-V MCU and its hardware interface towards the application-class host.
In the embedded domain, a general-purpose core's real-time capabilities strongly depend on its interrupt controller's design. This is a crucial and functional requirement in safety- and time-critical systems such as those operating in the automotive domain, aiming at minimizing interrupt latency and context switch time.
First, CVA6 lacks support for vectored interrupts, which store the interrupt service routine of each interrupt at a separate address.
Albeit increasing the code size as the vector table's size grows, this mechanism helps reduce the overall interrupt response time.
Furthermore, CVA6's native interrupt architecture consists of the classic RISC-V PLIC and CLINT controllers from the RISC-V Privileged Specifications <cit.>. The core hosts three level-sensitive interrupt signals: the machine-mode timer interrupt, the machine-mode software interrupt (inter-processor interrupt), and the machine-/supervisor-mode external interrupts.
The machine timer and machine software interrupt pending bits are provided by a Core Local Interruptor (CLINT) hardware Intellectual Property (IP) block, which generates one interrupt for each hardware thread (hart, a RISC-V execution context). While the former generates timer interrupts with a specific frequency, the latter handles communication among processors by interrupting harts on writes/reads of dedicated memory-mapped registers.
The first 12 interrupt identifiers are reserved for timer, software, and external interrupts in the machine (M), supervisor (S), and user (U) privilege modes. Other interrupt entries up to XLEN (for an RV64 processor such as CVA6, XLEN=64) are platform specific and referred to as local interrupts <cit.>.
Finally, the machine external and supervisor external interrupt pending registers bring the information from external devices to the hart.
The Platform-Level Interrupt Controller (PLIC) <cit.> provides centralized interrupt prioritization and routes shared platform-level interrupts among multiple harts via the external interrupt signals. The PLIC does not support interrupt preemption (nesting), nor runtime-configurable interrupt priorities and interrupt threshold control, which must be simulated in software.
As highlighted in Sec. <ref> and further detailed in Sec. <ref>, such native features are insufficient to fulfill functional real-time requirements.
An essential contribution of this work is the enhancement of CVA6's real-time capabilities in terms of interrupt response to achieve a competitive advantage against existing COTS real-time MCUs.
To ease the design and development of the AMP system, the RISC-V MCU hosting the RTOS has been implemented on a heterogeneous Xilinx Zynq Ultrascale+ FPGA <cit.> as part of the Programmable Logic (PL), thus taking advantage of the existing multi-core COTS SoC (Processing System, PS) to host the GPOS.
The PS integrates an industry-standard, quad-core, 64-bit Armv8 Cortex-A53 application-class processor clocked at 1.2 GHz, featuring 32 KiB L1 instruction and data caches per core and a 1 MiB L2 cache shared by all four cores, together with a dual-core Cortex-R5F real-time unit.
The Arm Cortex-R is employed to conduct the performance gap analysis with real-time enhanced CVA6 despite the difference in XLEN (32-bit and 64-bit, respectively).
The CVA6 MCU has been synthesized on the PL targeting a 50 MHz clock frequency. The following sections describe the software stacks running on the various processors.
§.§ Real-time OS
The OSEK/VDX and AUTOSAR Classic standards specify the design of a tiny RTOS for automotive.
The programming paradigm is “run-to-completion,” and the configuration (e.g. number of tasks) is statically defined at compile time.
In this type of operating system, the Interrupt Service Routines (ISR)
are divided into two categories:
* ISR1: High-priority low-overhead routines that cannot call syscalls;
* ISR2: Priority-based routines, which could imply a rescheduling once finished.
In the proposed architecture, we have used the ERIKA Enterprise RTOS <cit.>.
This open-source RTOS supports various microcontroller architectures and
is used in several European research projects and industrial automotive products.
A fork of ERIKA Enterprise recently received ISO26262 ASIL-D qualification
(the highest safety level for automotive).
Moreover, there is an ongoing discussion with the AUTOSAR consortium to release this RTOS
within the Classic demonstrator under the name of “Open-ERIKA” <cit.>.
In the context of the AMPERE H2020 project, the ERIKA RTOS has been ported
and executed on the RISC-V architecture.
§.§ General-purpose OS
Linux is a well-known operating system implementing the POSIX API.
Its performance, open-source license, and portability made it a perfect candidate for the general-purpose operating system running on the multi-core ARM PS.
During the last decades, several attempts have been made to improve the real-time performance
of Linux systems <cit.>. From time to time, some support (e.g. preemptible kernel, priority inheritance protocol, high-resolution timers, SCHED_DEADLINE real-time scheduler <cit.>) have been merged in the official kernel.
PREEMPT_RT <cit.> is a long-term project sponsored by the Linux Foundation to improve the real-time capabilities of the operating system. The primary outcome of this project is a kernel patch that reduces the maximum latency experienced by applications and is expected to be merged in the mainline “Vanilla” codebase.
To improve the overall responsiveness of the proposed platform, we have therefore
re-compiled the Linux kernel applying the PREEMPT_RT patch and enabling the maximum preemption level.
It is worth mentioning the existence of a joint initiative, ELISA <cit.>, aiming
at easing the certification of this operating system in safety-critical environments. In the automotive scenario, the ELISA project aims to reach the ASIL-B certification of the OS. However, the project has not yet provided a process to obtain such a qualification.
§.§ Intra-OS communication
According to the latest trends in automotive <cit.>, the inter- and intra-OS communications have been entirely based on the DDS standard.
The intra-OS communication between processes running on the Linux OS has been implemented through an open-source DDS middleware (i.e., Fast-DDS, formerly known as Fast-RTPS).
The inter-OS communication between Linux and ERIKA, instead, has been based on DDS-XRCE <cit.>, a DDS protocol specifically designed by OMG
for resource-constrained systems.
As shown in Figure <ref>, in this client-server protocol the devices (clients) communicate with an XRCE Agent (server), which provides an intermediate bridging service towards the DDS Global Data Space.
In particular, we have integrated eProsima's Micro XRCE-DDS stack <cit.>
(part of the Micro-ROS project <cit.>) on the ERIKA RTOS.
§ EVALUATION
In this section, we analyze and characterize the proposed automotive platform in terms of real-time capabilities, focusing on interrupt handling latency on both the multi-core system running the GPOS and the MCU running the RTOS, as well as the inter-domain communication time overhead:
* We analyze real-time extensions of the Linux kernel to suit the automotive domain better.
* We characterize the middleware layer for intra-OS communication.
* We optimize the real-time CVA6 MCU in hardware to boost its interrupt response capabilities with the integration of the RISC-V CLIC as the central interrupt controller for CVA6 and conduct a performance gap analysis with the COTS Arm Cortex-R5 already available in the PS of the FPGA.
§.§ Non-critical multi-core domain: Linux GPOS
For Linux, we have used an Ubuntu filesystem and the Foxy version of ROS2 on top of Fast-DDS.
The Linux kernel was version 4.19, patched with the PREEMPT_RT patch.
We have then run a set of tests
to measure the latency introduced by the OS.
The system has been stressed by creating interference through both the find command
(generating I/O traffic by scanning the filesystem on the SD memory and printing on the console) and
through the stress program generating CPU, memory and I/O interference: ./rt-test/stress -c 8 -i 8 -m 8 --vm-bytes
8000000.
The worst-case latency has been measured through the cyclictest tool provided
by the Linux kernel community developing the PREEMPT_RT patch. The tool has been run
with the following options:
./cyclictest --mlockall --smp --priority=80
--interval=200 --distance=0 --duration=5m.
The experimental results have shown a worst-case latency of 13.4 ms without PREEMPT_RT
and 159 μs with PREEMPT_RT. This means that the maximum latency experienced by user-level applications has been reduced by about 99% simply by applying the patch and recompiling the Linux kernel.
§.§ Inter-domain communication
The communication latency has been evaluated through a “ping-pong” application
that measured the round-trip time from Linux to ERIKA and back to Linux.
The involved processes on Linux (i.e., DDS Agent and ROS2 application) have been
scheduled using a real-time priority (i.e., SCHED_RR with priority 99).
Data has been exchanged through a non-cached shared memory area. We selected the UART as the interrupt source, the only one visible from both operating systems.
The experimental results showed a minimum, average and maximum communication time
of 2.0, 2.2, and 3.7 msec, respectively.
It is essential to highlight that the Micro-ROS framework has a periodic engine
which added some delay to the communication.
In particular, the rclc_executor_spin_some function had a period of 1 msec,
while all the other interactions were event-driven.
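The ping-pong application itself is not listed here; as an indication of what the Linux-side probe can look like, the following minimal rclpy sketch publishes a ping, waits for the pong relayed back through the XRCE Agent from the client running on ERIKA, and logs the round-trip time. Topic names, the Int32 payload, and the node structure are our assumptions, not the code used for the measurements.

import os
import time
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32

class PingNode(Node):
    def __init__(self):
        super().__init__('ping_node')
        self.pub = self.create_publisher(Int32, 'ping', 10)
        self.sub = self.create_subscription(Int32, 'pong', self.on_pong, 10)
        self.t_sent = None
        self.seq = 0

    def send_ping(self):
        self.t_sent = time.perf_counter()
        self.pub.publish(Int32(data=self.seq))
        self.seq += 1

    def on_pong(self, msg):
        rtt_ms = (time.perf_counter() - self.t_sent) * 1e3
        self.get_logger().info('round trip #%d: %.2f ms' % (msg.data, rtt_ms))
        self.send_ping()

def main():
    os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(99))  # SCHED_RR, priority 99 (needs root)
    rclpy.init()
    node = PingNode()
    node.send_ping()
    rclpy.spin(node)

if __name__ == '__main__':
    main()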
§.§ Safety critical RISC-V MCU domain: ERIKA RTOS
When porting the ERIKA Enterprise RTOS to RISC-V, we have taken
inspiration from a previous FreeRTOS optimization <cit.>.
We have initially optimized interrupt handling by emulating the local
interrupt levels through an array statically generated by the OS tools since the core does not natively support them.
The performance of the ERIKA RTOS when running on
RISC-V has been measured through an existing benchmark <cit.>
that measures the time
needed by the RTOS for performing a set of critical scheduling
activities (e.g., task activation time, task exit time, ISR call time,
etc.).
The test suite also allows benchmarking the latency of the two types of interrupt
service routines available in AUTOSAR Classic kernels (i.e. ISR1 and ISR2)
that have been previously illustrated.
The tested functions are namely:
* act: activates a higher priority task and measures how long
it takes to start its execution.
* actl: activates a low-priority task and measures how long it
takes to return to the caller.
* intdisable: measures the time needed for disabling all interrupts.
* intenable: measures the time needed for enabling all interrupts.
* isrentry: measures the time elapsed between the occurrence
of an interrupt and the execution of the related ISR1 handler.
* isr2entry: measures the time elapsed between the occurrence
of an interrupt and the execution of the related ISR2 handler.
* isrexit : measures the time elapsed between the end of an
interrupt handler and when the task previously running resumes execution.
* istentry: measures the time elapsed between the end of an
interrupt handler and the execution of the task activated by such
interrupt handler.
* istexit: measures the time elapsed between the end of a task
handling an interrupt and when the task previously running resumes
execution.
* terml: measures the time needed for terminating a task and
switching to a lower priority one.
Execution times have been measured in processor clock cycles through the RISC-V cycle counter CSR.
The same benchmark has been executed to evaluate the performance of the Cortex-R5 and the Cortex-A53 cores available on the ZCU102 board. For the Cortex-R5, the number of cycles has been measured through the PMCCNTR register.
For the Cortex-A53, instead, cycles have been measured through the cycle counter register PMCCNTR-EL0: __asm__ __volatile__
("MRC p15, 0, %0, c9, c13, 0" : "=r" (cycles));
It is essential to point out that, in the case of Cortex-A, the RTOS has been run on top of a hypervisor (namely, Jailhouse <cit.>) according to the typical configuration used when running RTOSs on Cortex-A processors.
The presence of the underlying hypervisor, however, implied some
non-negligible latency to trap and re-inject interrupts to the guest RTOS.
The possible interference from Linux on shared hardware resources has been removed by forcing the Linux kernel into panic mode through the following command: echo c > /proc/sysrq-trigger.
Since in safety-critical domains, such as automotive, we are most interested in bounding
the time needed for the various operations, we have restricted our analysis to the
worst-case number of cycles, measured over 100 consecutive runs.
The reported values in Fig. <ref> show that, when the CLINT/PLIC are used and no software optimization has been applied, the RISC-V soft-core can provide performance in the same order of magnitude as the
state of the art (i.e., the Cortex-R5). In particular, the worst-case cycles are higher only for handling ISR1 interrupts.
The reported values also confirm the non-negligible latency introduced by the interrupt injection mechanism on the hypervisor on Cortex-A. This strengthens the benefits of designing a mixed-criticality architecture through an AMP SoC rather than a hypervisor-based approach on an SMP SoC.
§.§ Software-driven RTOS optimization
The next step consisted of optimizing the code of the RTOS to obtain better performance on all the tested processors. The first optimization consisted of modifying the ISR2 handling by avoiding the activation of the ISR as a task and directly calling the handler (i.e. not calling osEE_activate_isr2()). Moreover, similarly to <cit.>, we have used the -O3 optimization level of the GCC compiler.
Fig. <ref> reports the worst-case
number of cycles, still measured over 100 consecutive runs.
The benefits of the optimizations can be appreciated across all the architectures.
However, the RTOS running on the Cortex-R benefited from the proposed SW optimization more than the implementation on the RISC-V MCU, in some cases reducing the worst-case cycles to less than 25% of the original value (e.g., in the actl, intdisable and intenable tests).
From the presented values, we can see that the selected RISC-V processor still shows lower performance in terms of interrupt latency than the competing ARM Cortex-R5 architecture.
As already discussed in Sec. <ref>, we identify the bottleneck of the design in CVA6's interrupt handling support, which is not tuned for targeting fast-interrupt management and low interrupt latency, typically enabled through the following HW/SW mechanisms:
* Hardware support for fine-grained and configurable interrupt priorities
* Late-arriving interrupt behavior (preemption and nesting) <cit.>
* Context save/restore optimization with back-to-back interrupts (tail chaining)
* Banked stack pointer <cit.> (i.e. different stack pointers for different privilege levels)
* Hardware support for automatic saving of registers during the context switch.
The following section addresses the first three of the above-mentioned design items. To this aim, we extend the current CLINT interrupt controller with a Core Local Interrupt Controller (CLIC) <cit.> and evaluate the ERIKA RTOS's performance.
The remaining two design items involve implementing more advanced hardware features in the processor and will be investigated in future work.
§.§ Hardware-driven real-time optimization: RISC-V CLIC fast interrupt controller
As mentioned in Sec. <ref>, both native CVA6 and its interrupt controller architecture need adaptations to fulfill real-time needs.
We first modify CVA6's interrupt interface by replacing level-sensitive interrupts with a handshake mechanism carrying the interrupt identifier and the request to the processor that acknowledges the handshake. We then add support for vectored interrupts by implementing an interrupt identifier decoding logic to compute the jumping address of the vector table.
In the second step, we extend the CLINT with the Core Local Interrupt Controller (CLIC).
We employ an open-source implementation of the CLIC[<https://github.com/pulp-platform/clic>] that reflects the latest status of the RISC-V CLIC draft specifications. The integration process includes the addition of specific CSR registers to the processor's micro-architecture, as per the specifications <cit.>.
The CLIC introduces several improvements to the standard CLINT to achieve faster interrupt handling. Among those are dedicated memory-mapped registers for software configurable interrupt priority and levels at the granularity of each interrupt line, runtime-configurable interrupt mode and trigger type, and support for interrupt preemption in the same privilege level.
Selective hardware vectoring enables the programmer to optimize each incoming interrupt for either faster response (vectored mode) or smaller code size (direct mode, when each interrupt traps to the same exception handler address).
Lastly, the CLIC introduces a novel CSR, namely <cit.>, to accelerate the handling of back-to-back interrupts (an optimization known as tail-chaining), which we have implemented in the CVA6 core.
CVA6 interrupt handling is modified as in Fig. <ref>.
In the improved design, the PLIC still arbitrates external system-level interrupts, and the legacy CLINT generates the timer interrupt. These interrupts are routed through the centralized CLIC interrupt source. Similarly, inter-processor interrupts are fired by writing to the corresponding CLIC memory-mapped registers.
Finally, the local interrupts can be extended to up to 4096 lines, instead of being limited to those natively supported by the processor. We implement 256 input interrupt lines arbitrated by the CLIC in this work.
Fig. <ref>
shows the experimental results on the improved hardware architecture with GCC -O3 optimization level at compile time on the selected benchmark. We notice that the worst-case overhead on RISC-V has become closer to the one measured on
the competitor MCU (i.e. ARM Cortex-R5).
The previous spikes (i.e. actl and isrentry) have been significantly reduced.
Moreover, for 4 metrics (i.e. act,
isrentry, istentry and istexit)
the number of cycles needed by the RTOS is equal to or even lower than on the Cortex-R5.
These experimental results confirm that RISC-V is a valuable technology for running AUTOSAR Classic stacks on next-generation automotive MCUs, and can be further improved to surpass closed-source commercial solutions.
§ CONCLUSIONS
In this paper, we have illustrated some trends and challenges occurring in the automotive
domain, as well as various technologies being taken into account by the companies
operating in this industry.
We have proposed a novel mixed-criticality multi-OS architecture based on open hardware and open-source software. We have then described the optimizations done both at the software and hardware levels
to move performance closer to the commercial competitors.
The experimental results have shown performance comparable to the state-of-the-art and have also allowed identifying further room for future hardware optimizations of
the CVA6 RISC-V processor.
In the future, we plan to further evolve the proposed architecture by (i) designing advanced hardware features such as banked stack pointers and an optimized context switch to improve the competitiveness of the CVA6 architecture, (ii) leveraging the recent standardization of DDS in Classic AUTOSAR <cit.> by using plain DDS instead of DDS-XRCE for inter-domain communications, and (iii) adopting the scheduling policy <cit.> to obtain a more predictable timing behavior of the communication on the Linux OS.
IEEEtran
|
http://arxiv.org/abs/2307.06225v1 | 20230712151726 | Practical quantum imaging with undetected photons | [
"Emma Pearce",
"Nathan R. Gemmell",
"Jefferson Flórez",
"Jiaye Ding",
"Rupert F. Oulton",
"Alex S. Clark",
"Chris C. Phillips"
] | quant-ph | [
"quant-ph",
"physics.optics"
] |
APS/123-QED
[email protected]
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Quantum Engineering Technology Labs, H. H. Wills Physics Laboratory and Department of Electrical and Electronic Engineering, University of Bristol, BS8 1FD, United Kingdom
Blackett Laboratory, Department of Physics, Imperial College London, SW7 2AZ, United Kingdom
Infrared (IR) imaging is invaluable across many scientific disciplines, from material analysis to diagnostic medicine. However, applications are often limited by detector cost, resolution and sensitivity, noise caused by the thermal IR background, and the cost, portability and tunability of infrared sources. Here, we describe a compact, portable, and low-cost system that is able to image objects at IR wavelengths without an IR source or IR detector. This imaging with undetected photons (IUP) approach uses quantum interference and correlations between entangled photon pairs to transfer image information from the IR to the visible, where it can be detected with a standard silicon camera. We also demonstrate a rapid analysis approach to acquire both phase and transmission image information. These developments provide an important step towards making IUP a commercially viable technique.
Practical quantum imaging with undetected photons
Chris C. Phillips
July 12, 2023
=================================================
§ INTRODUCTION
The infrared (IR) spectral region provides a wealth of information. In the near-IR and shortwave-IR (SWIR), higher harmonics of the vibrational modes of molecules and combinations of them can be probed. In the mid-IR, fundamental vibrational absorption bands occur, which provide both greater molecular specificity <cit.> and the ability to perform quantitative chemical analysis. Sensing applications include studies of molecular structure, agriculture and food quality control, pharmaceutical monitoring, and biological imaging <cit.>.
However, the IR is technologically poor in comparison with the visible, particularly at longer MIR wavelengths. IR cameras have much lower pixel counts than their visible silicon counterparts and, when operated at room temperature, are orders of magnitude noisier. Even expensive cryogenically-cooled IR detectors <cit.> are susceptible to noise arising from the ever-present 300 K black-body radiation background (the so-called BLIP limit).
One approach to avoiding IR cameras is to image the sample with IR photons which then undergo frequency up-conversion to visible wavelengths before reaching the camera. However, this requires an IR source of photons and relies on high-power laser sources and/or cavities to combat the low conversion efficiencies <cit.>. This also leads to the sample being exposed to far more photons than are detected, which can be detrimental to photosensitive samples <cit.>. So-called `ghost imaging' uses non-degenerate correlated photon pairs. The IR photon probes the sample before being detected with a single-channel IR detector whilst its visible partner is logged by a camera <cit.>. This circumvents some of the limitations of IR cameras, but it still suffers from the poor IR detector sensitivity, and from the influence of thermal IR background.
In contrast, the IUP approach circumvents both the requirement to have a direct source of IR photons and the ability to detect them <cit.>. This works by generating photon pairs via spontaneous parametric down-conversion (SPDC) in a nonlinear crystal, where each pair consists of one visible photon (signal) and one infrared photon (idler). By passing the pump through this crystal twice, it is possible to generate a pair in the first and/or second pass of the pump. If the signal photons from these two passes are precisely overlapped, and the idler photons are similarly overlapped, it is impossible to determine which pass generated the photon pair. This lack of `which-way' (or indeed, `which-pass') information means that optical interference is seen in the count rate of the signal photons. Blocking one of the idler beams with an object effectively restores this `which-way' information, destroying the interference. Note that coincident detection is not required, as merely the possibility to detect distinguishing information will impact the interference. The presence or absence of interference due to an object can therefore be readily recorded in the visible photon channel, i.e. using photons that have not themselves interacted with the object. Crucially, the image transfer process leaves the thermal background behind, allowing for detection sensitivities that are considerably improved over direct IR detection <cit.>. This principle has since been demonstrated for a variety of applications, including microscopy <cit.>, hyperspectral imaging<cit.>, spectroscopy <cit.>, optical coherence tomography <cit.>, and holography <cit.>.
IUP certainly offers a promising alternative to direct infrared imaging, but it is important to address the potential barriers to practical implementation, and a literature is emerging that tackles issues of size, stability, and speed <cit.>, with compact near- and mid-IR technologies already seeing use in environmental and agricultural studies <cit.>.
Here, we demonstrate two generations of compact, fully self-contained, wavelength-tunable, and low cost devices for IUP, both of which enable SWIR imaging using only a basic silicon CMOS camera. We also discuss a rapid analysis approach which uses a pixel-wise Fourier transform to extract both transmission and phase information from as few as three image frames. These developments have allowed us to make dramatic reductions to the size, weight, cost, and power (SWaP-C) of IUP.
§ METHODS
Figure <ref>(a) shows the experimental setup of the first generation of our compact device. A 532 nm diode-pumped solid-state continuous-wave laser (CrystaLaser CL532-050) pumps a periodically-poled lithium niobate (PPLN) crystal with 35 mW of input power to produce signal (visible) and idler (IR) photons by SPDC. The dichroic mirror DM2 splits the idler photons from the signal and pump photons. The IR idler photons are sent to the sample, mounted on the sample mirror, while visible and pump photons propagate towards the scanning mirror, which is scanned to generate the interference fringes. All three wavelengths are then reflected back through the crystal, and as the pump makes its second pass, there is again a probability of generating a signal-idler photon pair.
As previously discussed, if the optical modes from the second pass are perfectly overlapped with those from the first then a sinusoidal modulation appears in the signal photon flux detected at the camera as the scanning mirror is moved. At these powers, the actual probability of generating a photon pair at each pass is low, allowing us to neglect the possibility of any stimulated down-conversion in the second process by photons originating from the first.
An object placed on the sample mirror can introduce a loss and/or phase change to the idler from the first pair, introducing distinguishability and a proportional change in the amplitude and/or phase of the visible interference fringes. The amplitude variations can be imaged directly, but the phase changes are only detectable by moving the scanning mirror.
Both signal and idler photons are separated from the pump by dichroic mirror DM1 and sent to another dichroic mirror DM3, which sends only the visible photons to the camera. The idler photons go completely undetected. In fact, the silicon camera we use would not see them even if they were not removed by DM3.
To form the imaging system, each arm of the interferometer contains an f =50 mm focal length lens (L2 and L3) such that both sample and scanning mirrors are in the image plane of the PPLN crystal, and the interferometer output is subsequently imaged onto the camera with a f =75 mm focal length lens. A number of camera frames are recorded at different scanning mirror positions, and the pixel-wise intensity variations are Fourier transformed <cit.>, allowing the transmission and phase information of the object's IR response to be extracted.
The PPLN crystal can be translated perpendicular to the pump beam to access regions with different poling periods, in a way that tunes the signal and idler wavelengths. Also, it can be temperature tuned to further extend the overall wavelength coverage, allowing us to generate signal photons from 706-839 nm and idler photons from 1455-2159 nm.
The whole system sits within a 60 cm × 45 cm footprint, and is 40 cm high (including all the laser and the temperature controlling and scanning electronics, Figure <ref>(b)). An additional enclosure can be added to reduce background light and allow safe operation outside of a lab environment, as demonstrated in Fig. <ref>(c). In this case, the idler beam passes through an AR-coated silicon window (which is opaque to both pump and signal) to reach the sample which sits outside of the enclosure. Typically, no realignment was required after local transportation and, once aligned, the system can be operated by an untrained user assuming familiarity with the image acquisition software. The whole system is assembled from standard off the shelf components for ∼£7000 (excluding the laser).
§ RESULTS
Figure <ref> shows analysis of a thin-film gold interdigitated ring array microelectrode (Figure <ref>(a), Micrux IDRA1) using 3 (d), 4 (e), 8 (f), and 15 (g) acquired images. In each case, the images are taken with a 200 ms exposure time at equally spaced piezo voltages over one oscillation of the interference pattern. The period of this oscillation is determined solely by the idler wavelength, as both signal and pump fields are scanned together, as shown in Figure <ref>(b). The scanning mirror moves by close to half the idler wavelength, as the path is travelled twice. The detected wavelength is 808 nm, which is filtered using a 10 nm wide 810 nm bandpass filter, while the probe wavelength is 1559 nm. The images have a pixel size of 5.2 × 5.2 μm and are 1024 × 1280 pixels, i.e. somewhat greater than those available with current IR cameras.
From the analysed images in Fig. <ref>(d), it is clear that both the phase and amplitude features of the sample can be identified reliably, even when working right at the Nyquist limit when as few as 3 recorded images are used for the analysis. This approach drastically reduces both acquisition and processing times. We define visibility as
𝒱 = (N_max - N_min)/(N_max + N_min) = 2F_1/F_0
where N_max (N_min) are the maximum (minimum) pixel values recorded on the camera during a phase scan, F_1 is the amplitude of the Fourier component which corresponds to the frequency of the interference oscillation, and F_0 is the amplitude of the DC Fourier component. Contrast is defined as
𝒞 = N_max - N_min = F_1 .
Contrast is dependent on the overall brightness of a given system but can be a useful metric if there is a high detector noise floor as this will be subtracted, unlike the case with visibility.
Phase is defined as
ϕ = arctan[Im(F_1)/Re(F_1)] .
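As a concrete illustration of this pixel-wise analysis, a minimal sketch in Python/NumPy is given below (the function and array names are our illustrative choices and are not taken from the acquisition software); it assumes a stack of frames acquired at equally spaced mirror positions covering exactly one fringe period, so that the oscillation falls in the first Fourier bin:

    import numpy as np

    def analyse_stack(frames):
        """frames: array of shape (N, H, W), N camera frames spanning one fringe period."""
        F = np.fft.fft(frames, axis=0)      # pixel-wise DFT along the frame axis
        F0 = np.abs(F[0])                   # DC component
        F1 = F[1]                           # component at the fringe frequency
        visibility = 2.0 * np.abs(F1) / F0  # V = 2 F1 / F0
        contrast = np.abs(F1)               # C = F1 (up to the DFT normalisation)
        phase = np.angle(F1)                # phi = arctan(Im F1 / Re F1)
        return visibility, contrast, phase

    # e.g. three 1024 x 1280 frames, as in the text:
    # vis, con, phi = analyse_stack(np.stack([frame1, frame2, frame3]))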
Although as few as 3 images are sufficient for a qualitative analysis of the relative phase and transmission across a sample, unsurprisingly, more images improve accuracy if further parameter extraction is desired.
Features appear brighter in Figure <ref>(d) than in Figure <ref>(e) because of where the interference-oscillation frequency falls relative to the sampled Fourier frequencies. Leakage into more than one Fourier component appears as a loss of signal and thus reduces the visibility. This could be avoided by altering the Fourier transform length (via zero-padding) to better match the interference frequency to one of the sampled Fourier frequencies.
The time taken to perform the Fourier transform and calculate the above parameters is plotted against the number of input images in Figure <ref>(c). The Fourier transform is implemented using a Python wrapper of the Fastest Fourier Transform in the West (FFTW)<cit.> on a typical laboratory machine (Intel Xeon W-2102 processor, 4 cores). Processing times do not include saving or displaying the data. There is no delay between acquisitions required for image postprocessing <cit.> and we require both fewer images and shorter exposure times than seen in Ref. PaterovaSemiconductor.
Another example of SWIR imaging is shown in Figure <ref>, this time with an organic sample of a fly wing. This provides an object with continuously varying IR transmission across the image, rather than the example of the electrode which only has regions of either total transmission or no transmission. Both samples shown thus far have been measured in transmission. Regions of high visibility are seen where SWIR light passes through the sample to be reflected by the sample mirror. This requires that the path from the crystal to the sample mirror and the path from the crystal to the scanning mirror must be equal to within the SPDC coherence length (≈ 0.1 mm). Any optical path length introduced by transmission through the sample must also be considered.
Figure <ref> shows the wavelength tunability of the system while imaging the gold contacts of the electrode, shown in the bottom half of Figure <ref>(a). In Fig. <ref>(a), the crystal is kept at 125^∘C and the pump beam enters a region with a poling period of 7.40 μm. These conditions result in a probe wavelength of 1558 nm and a detected wavelength of 808 nm (filtered with a 10 nm wide 810 nm bandpass). The crystal is then translated perpendicular to the pump beam to access a poling period of 7.71 μm and heated to 200^∘C. This extends the probe wavelength to 1818 nm, beyond the sensitivity of a typical InGaAs camera, with the visible detection at 752 nm (filtered with a 10 nm wide 750 nm bandpass). The interferometer remains aligned throughout these types of wavelength sweep.
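As a quick consistency check of these operating points (assuming only energy conservation in SPDC, 1/λ_pump = 1/λ_signal + 1/λ_idler in vacuum wavelengths, with the 532 nm pump), the quoted signal/idler pairs can be reproduced with a short helper:

    def idler_wavelength(pump_nm, signal_nm):
        """Energy conservation in SPDC: 1/pump = 1/signal + 1/idler."""
        return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

    print(idler_wavelength(532.0, 808.0))  # ~1557 nm, close to the 1558 nm probe
    print(idler_wavelength(532.0, 752.0))  # ~1819 nm, close to the 1818 nm probe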
Both the spatial resolution (Δ x = f_uλ_u / (√(2)π w_p)) and the magnification (M = f_cλ_d / (f_uλ_u)) are reduced as the idler probe wavelength increases. Here, f_u is the focal length of the lens in the undetected path, f_c is the focal length of the lens in front of the camera, λ_d is the detected wavelength, λ_u is the undetected probe wavelength, and w_p is the beam waist of the pump <cit.>. The focal lengths of the lenses are also likely to vary from their nominal values as the wavelengths change. This change in magnification leads to all 4 contacts being visible in Figure <ref>(b), while only 3 can be seen in Figure <ref>(a), although chromatic aberrations in the lenses may also be limiting the resolution. Here the sample is being imaged in reflection, with high visibility in regions where the idler probe is reflected by the gold features, in which case it is the path from the crystal to the front face of the sample (rather than the sample mirror) that must be matched to the path from the crystal to the scanning mirror.
§ FURTHER DEVELOPMENT: `ENTANGLECAM'
Figure <ref> shows the next design iteration of the system, the so-called `EntangleCam', using smaller optomechanics and simplifying the layout by removing and combining some of the components. The PBS that was used in the previous system to filter the pump polarisation has been replaced by a laser line PBS with an AR coating that doubles as a dichroic mirror to separate the signal from pump. The lens in front of the camera has been removed so that the detector array samples the far-field of the crystal directly, and the original laser has been replaced by a simple 532 nm laser diode (OdicForce green laser module) providing less than 30 mW of light to the crystal. The pump shaping preparation is handled entirely by a single lens after the diode, with a bandpass filter (BP) to eliminate any unwanted IR light.
The breadboard footprint has shrunk to 30 × 20 cm^2 and is only 15 cm high. The control electronics run from a single mains socket and add only a 25 × 22.5 × 7.5 cm^3 volume, while the total component cost is reduced to ∼£6,000, in this case including the laser.
The system still retains the same wavelength tunability via crystal heating and translation with crystal translation stage (CTS), but even at a fixed room temperature, significant wavelength tunability is available. This is seen in Figure <ref>, imaging another gold electrode sample (identical to that seen in Figure <ref>), using only a room temperature crystal (24.5^∘C) without active temperature control. The bandpass filter on the camera was replaced with a long-pass filter to reject any residual pump light while also removing the need to change detection filters when moving between different poling periods. Images used in this analysis are taken with a 200 ms exposure time. It can be seen that despite the reduction in size, the system retains its imaging capabilities and broad tunability. Depending on the probe wavelength desired for a particular application, this shows that the device could be designed to operate without any need for temperature control, further reducing SWaP-C.
§ DISCUSSION
In conclusion, we have demonstrated compact IUP systems that can perform infrared imaging with visible detection and are robust and portable enough to be used outside of a laboratory environment. We have also demonstrated rapid analysis that allows a real-time quantitative measure of transmission and phase shift of a sample using as few as 3 recorded images. These systems represent a significant step forwards in the affordability and practicality of IUP as an alternative to direct IR imaging.
Future operation speeds could be enhanced by reducing the acquisition time required at each scanning mirror position, simply by using a higher power laser and/or a more sensitive camera. Both are readily available without major cost implications. The spatial resolution of the system can be further enhanced by increasing the momentum correlations between the signal and idler photons; in the current configuration this would be achieved by increasing the width of the pump beam <cit.>.
Furthermore, there are a number of ways in which the operating wavelengths can be extended towards the mid-IR, including different pump wavelengths <cit.>, different poling periods <cit.>, and different nonlinear materials <cit.>. By moving towards the mid-IR, we anticipate our system will be a valuable tool for chemically and medically relevant applications <cit.>.
We acknowledge funding from the UK National Quantum Hub for Imaging (QUANTIC, No. EP/T00097X/1), an EPSRC DTP, and the Royal Society (No. UF160475). The authors declare no conflicts of interest.
|
http://arxiv.org/abs/2307.04258v1 | 20230709200728 | Classicality from Quantum Stochastic Processes | [
"Esteban Martínez-Vargas"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Física Teòrica: Informació i Fenòmens Quàntics, Departament de Física, Universitat Autònoma de Barcelona, 08193 Bellatera (Barcelona) Spain
We develop a theory of classicality from quantum systems. This theory stems
from the study of classical and quantum stationary stochastic processes.
The stochastic processes are characterized by polyhedral (classical) and
semidefinite representative (quantum) cones.
Based on a previous result <cit.> we expand the study of
fixed points of quantum channels. We give a semidefinite program that
characterizes a quantum channel as separating into a core and a part that
decays under many iterations. In general, the solution is non-separable in the
space in which it is defined. We present a characterization of channels in terms of
their fixed points for the separable case. A quantum simulation of a polyhedral
cone can then be constructed.
Classicality from Quantum Stochastic Processes
Esteban Martínez Vargas
August 12, 2023
==============================================
When describing the cause of a phenomenon, a good practice is to consider the
simplest possible explanation; this philosophical principle is called
Ockham's razor <cit.>.
Stochastic processes are a very
general framework to describe a wide variety of systems in biology, economics,
chemistry, physics, etc. <cit.>.
Specifically, the modeling through Hidden Markov Models has been widely studied <cit.>.
These kinds of models arise when we have a stationary structure.
These objects have
been widely studied classically and in quantum systems <cit.>.
Quantum stochastic processes are needed because quantum systems are always open to complex environments that affect their evolution, and also for foundational aspects of quantum mechanics <cit.>.
Even though intuition indicates that the simplest and most practical description would correspond to classical information theory, several results show that there is an advantageous simplicity when using quantum systems to describe stochastic processes, even when the systems being described are classical <cit.>.
This means that quantum mechanics would be a natural language for stochastic processes.
Nevertheless, this perspective would imply a paradoxical worldview: if quantum mechanics is the most natural way to describe stochastic processes, a very general tool to describe both classical and quantum aspects of the world, then why does our world seem classical?
Quantum stochastic processes thus bring us back to a question raised since the inception of quantum mechanics a hundred years ago, namely the passage from quantum to classical dynamics, usually expressed as the correspondence principle <cit.>.
There is a large amount of work in this respect, specifically in the area of einselection
and quantum Darwinism
<cit.>.
Given that there is an advantage in terms of information, one may ask why classicality would even exist.
Here we aim to study this topic using the formalism of quantum stochastic processes.
The approach that the theory of einselection assumes is that there is a constant
feature of a system, which is its environment (or system plus environment). It thus aims to study the features of the
system which are resistant to decoherence. Classicality from a quantum perspective
is thus defined as those features which persist in time.
In this line of thought, Hidden Markov Models arise when the notion of stationarity is relevant; the stationarity of a stochastic process therefore points to a persistent structure that produces it.
However, there exist stationary processes produced from quantum sources, as
characterized by Monràs and Winter in their ('O Scarrafone) theorem <cit.>.
There, they characterize the most general stationary stochastic process produced
by a quantum source. Therefore, stationarity in itself is not sufficient for
the notion of classicality. In einselection a central aspect is also the objectivity
of a specific basis, there are einselected states. The fact that these states are
unchanged by the dynamics is also central to the objectivity of the system.
Therefore, we will understand classicality in a quantum stochastic process if it
fulfills two conditions:
* Persistence in time.
* Objectivity of its generating states.
In this paper, we explore a possible mechanism for the persistence in
time of an objective set of quantum states that gives place to a Markov process.
We consider discrete uses of a quantum channel and the objective set will be made
of the fixed points of the channel.
Observe that quantum channels have at
least one fixed point <cit.>.
We here consider channels with multiple fixed points. A finite collection of such points, which are vectors, forms what is known as a polyhedral cone. Following a theorem by Dharmadhikari <cit.>, this is a necessary and sufficient condition to have a stationary Markov process. We thus restrict the Monràs and Winter conditions to those required by Dharmadhikari.
Although the fixed points of quantum channels have been studied in the literature <cit.>, the topic remains cumbersome: apart from some theorems, given a channel there is no general way of finding its fixed points, and numerical simulations are almost always the norm <cit.>.
Here we characterize quantum channels in the inverse direction:
given a finite number of states, we explore all the quantum channels that have them as
fixed points. We extend the results of <cit.> to consider multiple fixed point
channels and give an example of the power of this approach.
First, we introduce Dharmadhikari's and Monràs and Winter's theorems. Then we explain the problem of classicality from the quantum perspective as a “cone reduction” problem.
Finally, we introduce our study of multiple fixed point quantum channels and apply it to
an example. We finish with a discussion.
§ QUASI-REALIZATIONS OF STOCHASTIC PROCESSES
To study the different dynamics, classical and quantum, a general framework of
stationary stochastic processes will be introduced: the theory of quasi-realizations.
From an abstract point of view, a stochastic process is given by an alphabet of symbols
ℳ with size |ℳ|=m and we denote ℳ^l the set of words
of length l. We define
ℳ^* = ∪_l≥0ℳ^l.
We can obtain the probability of a specific word 𝐮=(u_1,u_2,…,u_l)∈ℳ^l,
p(𝐮)=p(Y_1=u_1,Y_2=u_2,…,Y_n=u_n).
It will be relevant to study stationary distributions as they describe the stochastic process
asymptotically.
We would like to infer what is the inner mechanism that
gives rise to a stochastic process Y. A very widely known kind of matrices that produce a stochastic process are the stochastic matrices, which are non-negative matrices whose rows sum to 1. However, in general, the hidden mechanism of a
stochastic process need not be described by a stochastic matrix.
To see clearly this affirmation we make the following definition
A quasi-realization of a stochastic process is a quadruple (𝒱,π,D,τ),
where 𝒱 is a vector space, τ∈𝒱, π∈𝒱^*, the dual
space to 𝒱 and
D is a unital representation of ℳ^* over 𝒱,
D^(ε) = 1,
D^(u)D^(v) =D^(uv), ∀ u,v∈ℳ^*.
We will call cause matrices the matrices
D^c = ∑_u∈ℳD^(u),
where D^(u) was defined above.
Observe that several quasi-realizations can yield the same stochastic process; we will call equivalent two quasi-realizations that generate the same stochastic process.
For us, the relevant outcome of this definition will be the quantum version, that is, stochastic processes whose cause matrices are not necessarily stochastic matrices.
§.§ Classical cones: Dharmadhikari's theorem
Observe now that, since stochastic matrices are a subclass of cause matrices, we need to find the conditions under which the cause matrices of a quasi-realization
(R^d,π,M,τ) become nonnegative matrices and
M^s = ∑_u∈ℳM^(u),
is stochastic.
This is precisely given by Dharmadhikari's theorem: it gives the conditions for having a positive realization, meaning that Eq. (<ref>) is fulfilled, π∈(R^d)^* is a stationary distribution, and τ=(1,1,…,1).
Given a quasi-realization (𝒱,π,D,τ), an equivalent positive
realization exists if and only if there is a convex pointed polyhedral cone
𝒞⊂𝒱 such that
* τ∈𝒞.
* D^(v)(𝒞)⊆𝒞.
* π∈𝒞^*.
With 𝒞^* the dual cone of 𝒞.
We thus need all the dynamics to be restricted to a polyhedral cone.
§.§ SDR cones: Scarrafone
For quantum systems the type of cause matrices is different. The
characterization of the quasi-realizations, in this case, is related to the
theorem in ref <cit.>.
Given a quasi-realization (𝒱,π,D,τ) an
equivalent, finite-dimensional, unital, completely positive realization
(ℬ(ℋ)^sa,ρ,ε,ℐ) exists if and only
if there is a SDR cone 𝒫⊂𝒱⊗𝒱^*
such that
* τ∈𝒞.
* D^(u)∈𝒫 for all u∈ℳ.
* π∈𝒞^*.
With 𝒞 a SDR cone and 𝒞^* its dual defined in
<cit.>.
§ CLASSICAL DYNAMICS AS A FIXED POINT PROBLEM
Any quantum dynamics is described by an SDR cone. This cone includes an instrument that
maps states from the cone into itself. Theorem (<ref>) offers a description
for general instruments. Observe, however, that for instruments with a constant channel
in all iterations asymptotic behavior of channels becomes relevant.
This situation implies analyzing the channels from the perspective of their fixed points,
as all channels have at least one by theorem (4.24) of <cit.>.
The theory of quantum einselection by Zurek et al. implies a model of reduction from quantum dynamics
to classical dynamics. It states that the classical behavior of a system is given by certain states
that are selected over time by the interaction Hamiltonian, that is, by the interaction with
an environment. Observe that this approach implies a continuous evolution of the system. The einselected
states are the ones that remain over time. Another important aspect is that the environment, and thus
the Hamiltonian, remains constant over time. A Hamiltonian with a changing potential would itself change
over time, and therefore einselection would not be possible.
We here want to study an analogous point of view. The classical dynamics will be given by a
polyhedral cone. The reduction from quantum to classical will be given in terms of fixed points of
a quantum channel. The time evolution will thus be discrete, but always maintaining a channel constant.
We do not require that an environment imposes some evolution but we require constant dynamics.
The classical cone is a result of constant quantum dynamics.
This point of view allows us to extend the einselection process from only a set of pure states to
possibly mixed states. Such states would not necessarily be orthogonal. We will show a decomposition
theorem for channels, to decompose any channel into its fixed point plus a part that goes to zero.
Then we show a bound to obtain this decomposition given a channel. Finally, we explore the decomposition
of multiple fixed points, which would imply a polyhedral cone.
§ CONE REDUCTION PROBLEM
In this section, we develop a general theory to describe the transition between
quantum mechanics and classical mechanics.
Starting from the historical perspective, the theorem (<ref>) by Dharmadhikari finds the
conditions for the existence of equivalent positive realizations given a realization.
We therefore can translate this mathematical statement into a physically relevant
one with the following postulate
Any discrete classical transformation is described by a convex polyhedral cone.
In general, any quantum Markov process is inscribed in a SDR cone. Following the
principles of quantum mechanics and the theorem (<ref>) by Monràs and Winter
it is natural to state the following postulate
Any discrete quantum mechanical transformation is described within a
SDR cone.
To our knowledge quantum mechanics is a fundamental and universal theory <cit.>.
Therefore, any classical dynamics should arise from a SDR cone, therefore the following
postulate arises naturally
Any classical (convex polyhedral) cone is embedded in a larger quantum (SDR) cone.
This last postulate, however, demands a mathematical treatment that
makes it more concrete. In Fig. <ref> we present a diagram of a quantum
SDR cone containing a classical cone.
The main problem then is how to reduce a SDR cone to a classical polyhedral one.
As mentioned before, a simple reduction mechanism is to suppose an instrument with a constant
channel all the time. Then, because
of theorem (4.24) from <cit.> the channel has at least one
fixed point. The output of the channel is reduced to a single state.
However, a channel can have several fixed points, thus conforming a polyhedral
cone. This motivates the study of channels with multiple fixed points.
§ MULTIPLE FIXED POINT CHANNELS
We first cite a result from <cit.>: we have the following
characterization of a channel in terms of its fixed point.
Given a state σ we can describe a trace-preserving separable family of channels with fixed point σ
in terms of its Choi matrix 𝒞 as follows
𝒞=σ⊗(|V_max⟩⟨V_max|)^⊺/λ_max
+B⊗(I-(|V_max⟩⟨V_max|)^⊺/λ_max),
λ_max is the maximum eigenvalue of σ and |V_max⟩ its corresponding eigenvector.
B is a state. This description is valid for ⟨V_max|B|V_max⟩≤λ_max and
any input state ρ with ⟨V_max|ρ|V_max⟩≤λ_max.
Analogously to the SDP used in <cit.> to find the above characterization
we have a way to find the channels that have as fixed points some desired
states.
The SDP for finding the channel with minimum trace with two fixed points is the
following
maximize over X:  -Tr[X]
subject to  Tr_ℋ_2[X(1_ℋ_1⊗σ^⊺_0)]=σ_0
            Tr_ℋ_2[X(1_ℋ_1⊗σ^⊺_1)]=σ_1
            X≥ 0.
A trace-preserving channel is found as follows, in terms of a state B,
𝒞 = X + B⊗(1-Tr_H_1[X]).
We further require that
Tr[X(1_H_1⊗ B^⊺)]< 1,
so that iterations converge.
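A minimal numerical sketch of this program is given below in Python with CVXPY (assuming a CVXPY version that provides the partial_trace atom; the function names, solver defaults and the completion helper are our illustrative choices, not part of <cit.>):

    import numpy as np
    import cvxpy as cp

    def minimal_trace_core(sigma0, sigma1):
        """SDP sketch: X >= 0 of minimal trace with Tr_H2[X (1 (x) sigma_i^T)] = sigma_i."""
        d = sigma0.shape[0]
        I = np.eye(d)
        X = cp.Variable((d * d, d * d), hermitian=True)
        constraints = [X >> 0]
        for s in (sigma0, sigma1):
            constraints.append(
                cp.partial_trace(X @ np.kron(I, s.T), [d, d], axis=1) == s)
        cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()
        return X.value

    def complete_channel(X, B):
        """Choi matrix C = X + B (x) (1 - Tr_H1[X]) of the trace-preserving completion."""
        d = B.shape[0]
        tr1_X = np.trace(X.reshape(d, d, d, d), axis1=0, axis2=2)  # trace over H1
        return X + np.kron(B, np.eye(d) - tr1_X)

The convergence requirement Tr[X(1_H_1⊗ B^⊺)]<1 can then be checked numerically for the chosen B.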
We nevertheless lack a general characterization of the solution X in terms of the states σ_0 and σ_1, in contrast to the one-state case. Observe that if σ_0 and σ_1 are states on ℋ then X acts on ℋ⊗ℋ, and in general X is an entangled operator in that space.
As a special case, we have the following characterization for when X is separable.
A channel with Choi matrix 𝒞 that has two fixed points σ_0 and
σ_1 can be written as
𝒞=σ_0⊗Π_0^⊺/Tr[Π_0σ_0]
+σ_1⊗Π_1^⊺/Tr[Π_1σ_1]
+B⊗(I-Π_0^⊺/Tr[Π_0σ_0]-Π_1^⊺/Tr[Π_1σ_1]),
if and only if the states σ_0 and σ_1 can be
unambiguously discriminated. This means that there exist positive semidefinite operators Π_0 and Π_1
such that Tr[σ_0Π_1]=Tr[σ_1Π_0]=0
and Tr[Π_0σ_0]≠0 and Tr[Π_1σ_1]≠0.
We also require Tr[BΠ_0]/Tr[Π_0σ_0]+Tr[BΠ_1]/Tr[Π_1σ_1]<1.
Observe that the channel 𝒞
fulfills the constraints of the SDP (<ref>); specifically,
Tr_H_2[𝒞(1⊗σ_0^⊺)]= σ_0 + σ_1 Tr[σ_0Π_1]/Tr[Π_1σ_1]-B Tr[σ_0Π_1]/Tr[Π_1σ_1],
which is equal to σ_0 if and only if Tr[σ_0Π_1]=0.
Analogously Tr_H_2[𝒞(1⊗σ_1^⊺)]= σ_1
if and only if Tr[σ_1Π_0]=0.
The generalization to the case of more fixed points is straightforward: we add more constraints to the SDP (<ref>) and, analogously, more projectors to the decomposition (<ref>).
§ SIMULATION OF A POLYHEDRAL CONE
Observe that a classical cone can be easily simulated with an SDR cone.
We start with a state ρ_0, apply a multiple-fixed point channel, end
up in a state σ_i. Then, we can apply a random channel to get
out of the fixed point, and apply the multiple-fixed point channel again
to end up in another state σ_j etc, as depicted below.
ρ_0 →Φ^n[ρ_0]≈σ_i→φ_rand[σ_i]=ρ_0^'
ρ_0^' →Φ^n[ρ_0^']≈σ_j→φ_rand[σ_j]=ρ_0^''
…
Φ is the multiple fixed point channel and φ_rand is a random channel.
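A minimal sketch of this alternation is given below (Python/NumPy; it uses the Choi-matrix convention of the SDP above, Φ[ρ]=Tr_H_2[𝒞(1⊗ρ^⊺)], and a random unitary conjugation as one possible illustrative choice for φ_rand):

    import numpy as np

    def apply_channel(choi, rho):
        """rho -> Tr_H2[C (1 (x) rho^T)], with C on H1 (x) H2 and H1 the output space."""
        d = rho.shape[0]
        M = choi @ np.kron(np.eye(d), rho.T)
        return np.trace(M.reshape(d, d, d, d), axis1=1, axis2=3)  # trace over H2

    def relax(choi, rho, n=200):
        """Repeated application of the constant channel drives rho towards a fixed point."""
        for _ in range(n):
            rho = apply_channel(choi, rho)
        return rho

    def random_kick(rho, rng=np.random.default_rng()):
        """A simple stand-in for phi_rand: conjugation by a (roughly Haar) random unitary."""
        A = rng.normal(size=rho.shape) + 1j * rng.normal(size=rho.shape)
        Q, _ = np.linalg.qr(A)
        return Q @ rho @ Q.conj().T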
§ EXAMPLE
Consider the Hilbert space spanned by two qubits. We can take the Bell basis and label
it as follows
|V_0⟩ =|00⟩+|11⟩/√(2),
|V_1⟩ =|00⟩-|11⟩/√(2),
|V_2⟩ =|01⟩+|10⟩/√(2),
|V_3⟩ =|01⟩-|10⟩/√(2).
We also define the states
|V_1^0⟩ =α_0|V_1⟩+β_0|V_2⟩,
|V_2^0⟩ =δ_0|V_1⟩+ϵ_0|V_2⟩,
|V_1^1⟩ =α_1|V_1⟩+β_1|V_2⟩,
|V_2^1⟩ =δ_1|V_1⟩+ϵ_1|V_2⟩.
Now let us define the mixed states
σ_0 =s_0|V_0⟩⟨V_0|+s_1|V_1^0⟩⟨V_1^0|+s_2|V_2^0⟩⟨V_2^0|.
σ_1 =r_0|V_1⟩⟨V_1|+r_1|V_1^1⟩⟨V_1^1|+r_2|V_2^1⟩⟨V_2^1|.
We can easily verify that ⟨V^1|σ_0|V^1⟩=⟨V^0|σ_1|V^0⟩=0 and therefore
the characterization of fixed points from Eq. (<ref>) is valid.
§ DISCUSSION
We develop here a general model to simulate a polyhedral cone using quantum systems. This
model is very general as it allows mixed states to be the vectors that subtend the cone.
We observe here a passage from a quantum system that has non-classical correlations
to a system that behaves classically, in terms of the stochastic processes they
produce. This is an example of a change of behavior from quantum dynamics into
classical dynamics which is closely related to the theory of einselection and
quantum Darwinism. Here the mechanism is analogous to the case of einselection
because it involves the repetitive action of a quantum channel.
However, the process yields the fixed points of the channel. We describe here
a way to construct quantum channels with desired specific fixed points. It can
be viewed as an engineering of quantum channels with multiple fixed points, which
gives rise to the classical behavior in terms of a polyhedral cone.
Observe that the most general characterization of our method is given by the SDP (<ref>)
which yields an operator X that in general can be entangled in the space in which it is defined.
This means X∈ℋ⊗ℋ where ℋ is the Hilbert space of
σ_0 and σ_1. However, in general, X is not separable in those subspaces. In theorem
<ref> we explore the separable case. A full
characterization of the solution X in the entangled case is a
matter of future research.
The mechanism that we study here is a specific one, however, the formalism could
be extended to consider other possible mechanisms that reduce the quantum
dynamics of a system into a classical one. This would extend the study of
quantum-to-classical transitions.
|
http://arxiv.org/abs/2307.07402v1 | 20230714152744 | The torsion of stellar streams and the overall shape of galactic gravity's source | [
"Adriana Bariego-Quintana",
"Felipe J. Llanes-Estrada"
] | astro-ph.GA | [
"astro-ph.GA",
"gr-qc"
] |
and the overall shape of galactic gravity's source
IFIC-Univ. Valencia, c/ Catedrático José Beltrán, 2, E-46980 Paterna, Valencia, Spain
[email protected]
Universidad Complutense de Madrid, IPARCOS & Dept. Física Teórica, Plaza de las Ciencias 1,
28040 Madrid, Spain
[email protected]
Flat rotation curves v(r) are naturally explained by elongated (prolate) Dark Matter (DM) distributions, and we have provided competitive fits to the SPARC database. Stellar streams, poetically analogous to airplane contrails, but caused by tidal dispersion of massive substructures such as satellite dwarf galaxies, would lie on a plane (consistently with angular momentum conservation) should the DM-halo gravitational field be spherically symmetric. Entire orbits are seldom available because their periods are comparable to the Hubble time, with streams often presenting themselves as short segments.
Therefore, we aim at establishing stellar stream torsion, a local observable that measures the deviation from planarity in differential curve geometry, as a diagnostic providing sensitivity to aspherical DM distributions which ensures the use of even relatively short streams.
We perform small-scale simulations of tidally distorted star clusters to check that indeed a central
force center produces negligible torsion while distorted haloes can generate it.
Turning to observational data, we identify among the known streams those
that are at largest distance from the galactic center and likely not affected by the Magellanic clouds, as most
promising for the study, and by means of polynomial fits we extract their differential torsion.
We find that the torsion of the few known streams that should be sensitive to most of the Milky Way's DM
Halo is much larger than expected for a central spherical bulb alone. This is consistent with nonsphericity of the halo.
Future studies of stellar stream torsion with larger samples and further out of the galactic plane should
be able to extract the ellipticity of the halo to see whether it is just a slight distortion of a spherical shape
or rather ressembles a more elongated cigar.
The torsion of stellar streams
Adriana Bariego–Quintana
1
Felipe J. Llanes–Estrada2
July 14th 2023
========================================================================================
§ INTRODUCTION: SHAPE OF DARK MATTER HALOES
The problem of galactic rotation is the empirical statement that rotational velocity around the galactic center
seems to flatten out for a large fraction of the galaxy population where this has been measured at long enough distances <cit.>.
This is at odds with orbital equilibrium outside a spherical source (Kepler's third law written for the velocity),
v^2/r = GM/r^2 ⇒ v=√(GM/r)
that implies falling velocities for objects or clouds of gas further away.
Because typical velocities in spiral galaxies are of order 200-300 km/s, v/c∼ 10^-3,
relativity is a correction and Newtonian mechanics should get the bulk of the rotation right.
Therefore, either a modification of mechanics, such as MOND <cit.>, or a modification of the gravity
source, typically in the form of a spherical Dark Matter halo <cit.>, are invoked.
MOND however runs into problems at larger, cosmological scales <cit.>; and a spherical DM distribution
has to be fine-tuned to have very nearly an isothermal ρ(r)∝ 1/r^2 profile to explain
the flatness of the rotation curve.
If we inhabited a two-dimensional cosmos, however, the natural gravitational law would be |F|∝1/r
instead of ∝1/r^2 and the observed rotational law would be v_ 2D∝ constant which is the law that the experimental data demands.
We do not; but a cylindrical matter source achieves the same dimensional reduction by providing translational symmetry
along the OZ symmetry axis of the cylinder <cit.>. If the linear density of the cylindrical dark matter source is
λ, we can write
v^2/r = 2Gλ/r ⇒ v = √(2Gλ) .
That is, the constant velocity function v(r) is natural for a filamentary source. Moreover, if
the rotation curve is only measured to a finite r, obviously the case, the source does not need to be infinitely
cylindrical: it is sufficient that it be prolate (elongated) instead of spherical, as shown by detailed fits <cit.>
to the SPARC database <cit.>
and consistently with simulations of DM haloes <cit.>.
Observables in the galactic plane alone, such as detailed rotation curves, cannot distinguish between competing
models such as spherical haloes with nearly ρ_ DM(r)∝1/r^2 profiles or elongated haloes with arbitrary profile. To lift the degeneracy between shape and profile one needs to find adequate, simple observables from out-of-plane data.
For a while now, stellar streams <cit.> in the Milky Way galaxy have been a promising new source of information on the DM distribution <cit.>, as they will eventually be for other galaxies <cit.>. In the rest of this article we develop what we think is a key observable to be measured on those streams to bear on the question of the overall shape of the presumed halo.
Section <ref> is dedicated to reviewing the definition of torsion in differential curve geometry and showing
that, around a spherical halo, orbits as well as streams are torsionless. Section <ref> then shows how
we expect tidal streams around elongated gravitational sources to show torsion if there is a component of the velocity parallel to the axis of elongation of the source. Section <ref> makes a reasonable selection among the known stellar streams and we plot the torsion calculated along each of them, showing that there seems to be a signal here. Section <ref> then concludes how further studies can improve the conclusion.
§ ORBITS AND STREAMS AROUND CENTRAL POTENTIALS ARE TORSIONLESS
§.§ Torsion quantifies separation from orbital planarity
Before explaining why we wish to propose torsion as a useful observable to probe the DM halo, let us recall a few concepts of differential geometry to fix the notation.
In differential geometry <cit.> the torsion of a curve measures how sharply it is twisting out of the osculating plane,
instantaneously defined by the velocity and normal acceleration.
To a curve r(t) in three-dimensional space parametrized by an arbitrary variable t
we can associate an arclength s(t)=∫^t r(t') dt'
and the tangent vector T = d r/ds; if at a certain point P the curvatuve is non-zero,
then the normal vector at P is defined by N = d T/ds (its inverse modulus giving the radius of the circumference best approximating the curve at P); and the binormal vector (that completes the Frenet-Serret trihedron)
by the vector product of both,
B = T× N .
If the curve is perfectly planar the tangent and normal vectors will always lie in the same plane,
and in such case the binormal vector stays parallel to itself along the curve. Any natural definition of torsion
will then yield zero.
But if the curve twists out of the plane (like a uniformly advancing helix which corresponds to constant torsion),
the binormal vector will acquire a rotation.
Torsion will then measure the speed of that rotation of the binormal, and it is a locally defined scalar at
each point P along the curve r(t), as the scalar product of the intrinsic derivative of B
and the normal vector (this discounts the change of the modulus of B and rather measures its twisting),
τ = - d B/ds · N .
If the arc length is not at hand and the arbitrary parameter t needs to be used, then a convenient formula (with the prime denoting d/dt) is
τ = ( r'× r”)· r”'/| r'× r”|^2 .
Since up to three derivatives of the position along the curve need to be computed, several adjacent points of a discretized curve are needed to extract the torsion: but it is still quite a local observable that does not need long trajectory stretches.
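As a minimal sketch of such a computation (Python/NumPy; for observed streams the derivatives are better evaluated from smooth polynomial fits to the track, as described in the abstract, since raw finite differences amplify observational noise):

    import numpy as np

    def torsion(points, t=None):
        """Torsion along a discretised curve of shape (N, 3) via Eq. (<ref>);
        assumes non-vanishing curvature so that the denominator is finite."""
        if t is None:
            t = np.arange(len(points), dtype=float)  # arbitrary curve parameter
        d1 = np.gradient(points, t, axis=0)          # r'
        d2 = np.gradient(d1, t, axis=0)              # r''
        d3 = np.gradient(d2, t, axis=0)              # r'''
        c = np.cross(d1, d2)                         # r' x r''
        num = np.einsum('ij,ij->i', c, d3)           # (r' x r'') . r'''
        den = np.einsum('ij,ij->i', c, c)            # |r' x r''|^2
        return num / den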
We are going to demonstrate the use of this observable τ for stellar streams, particularly around the Milky Way,
to determine the shape of the gravitational potential of its DM Halo.
§.§ Movement around a Newtonian spherical source
Newtonian gravity predicts, for motion around a spherical body,
r” = F/m = -GM r/| r|^3
with M the mass inside the sphere of radius | r|.
The needed third derivative can be computed in a straight-forward manner, taking into account that
| r|' = r̂· r' is the projection of the velocity
along the radial direction from the origin,
r”' = -GM/| r|^2(
r̂'-2/| r|(r̂· r')r̂)
in terms of components along the velocity and along the position.
Because of Eq. (<ref>),
( r'× r”) ∝ ( r'× r)
and therefore, observing that both terms of Eq. (<ref>) lie in the plane spanned by
r' and r, we see that ( r'× r”)⊥ r”'.
Therefore, the scalar triple product in the numerator of Eq. (<ref>) vanishes,
and thus τ=0 for motion around a spherical body.
The planarity of the orbit around a central potential is, of course, a textbook consequence <cit.> of the
conservation of the direction of the angular momentum vector L̂ that in this language
is parallel to the binormal vector. And additionally, the Newtonian gravity law is not strictly necessary: any central potential will yield the same result. This observation is of particular interest for the MOND explanation of the galactic rotation curves in Eq. (<ref>) since, while the intensity of the acceleration induced by matter is different from Newtonian mechanics, the central direction of the force is respected: MOND likewise predicts no torsion.
§.§ Simulation of an N-point stellar stream around a spherical gravitational source
The discussion just presented in subsection <ref> refers to the torsion of one test body moving in a central field. So if the body lost dust grains forming a kind of contrail, its shape through space would be a planar curve.
But stellar streams are not quite of this nature; rather, they are the result of the tidal stretching of a globular cluster or dwarf galaxy <cit.>. Since each star or other object in the cluster starts off at a different height z with respect to the galactic plane, its orbit around the center of force lies on a slightly different plane, the effect being
that the cluster, additionally to stretching, contorts, with the upper particles passing under the center of mass and becoming the lower ones with each half orbit.
We here show that this effect is negligible and the torsion of a stream around a central potential can safely be neglected, as the center of mass of the stream follows one of the trajectories of subsection <ref>, with τ=0.
Rather than entangling the discussion in detailed theory, a couple of simple simulations will serve to illustrate the point.
We simulate a globular cluster of N (typically up to a few hundred) pointlike stars with a certain mass m_* and common initial velocity v_*∼ 220 km/s, randomly distributed at t=0 over a sphere of radius R_0 (of order one or a few kpc, typically; in the following simulation, 2 kpc) at a distance |r| from the galactic center (of order 10 kpc in the following example). An additional random velocity kick Δ v_0 = √(Gm_ cluster/(2R_0)) in a random direction is given.
We then let it evolve under the gravitational force of the central source with mass M∈(10^9, 10^11) M_⊙, standing
for a galactic bulb or a spherical DM halo, and we allow for a correction due to the inner binding forces of the cluster.
This is small because the random masses are taken in the interval m_*∈(0, 20) M_⊙ and thus their mutual interactions are orders of magnitude smaller than those with the galactic center.
The constant GM of the central source can conveniently be eliminated in terms of the typical velocity of circular orbits around the galactic center, from orbital equilibrium v_ rot^2/r = GM/r^2. For the Milky Way this is typically 220 km/s.
The positions of the stellar objects are updated in Cartesian coordinates.
The position is updated using Euler's Method with time step Δ t=t_f/N_t, with the velocity updated via a once-improved Euler step,
x^i_j+1 = x^i_j + Δ t v^i_j for i=1,2,3
v^i_j+1 = v^i_j + 1/2 Δ t f^i( x_j + 1/2 Δ t v_j) for i=1,2,3
where f^i is the function yielding each component's acceleration.
The acceleration is calculated at each step from standard formulae
a_i=1,2,3= - GMx^i/(x^2+y^2+z^2)^3/2
- ∑^N-1_j=1Gm_jx^i/((x-x_j)^2+(y-y_j)^2+(z-z_j)^2)^3/2 .
The first line of this expression is the acceleration caused by the central spherical source, and the second is the force that attempts to bind the stellar-stream stars together (and that is too weak to avoid the tidal stretching).
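A minimal sketch of this update is given below (Python/NumPy; the value of G in astrophysical units, the midpoint evaluation of the acceleration in the spirit of the once-improved Euler step above, and all variable names are our illustrative choices, not the original code):

    import numpy as np

    G = 4.30e-6  # kpc (km/s)^2 / Msun; dt below is then in units of kpc/(km/s) ~ 0.98 Gyr

    def acceleration(x, m, GM_central):
        """Central spherical pull plus the weak mutual attraction of the cluster stars.
        x: positions (N, 3) in kpc; m: masses (N,) in Msun; GM_central in kpc (km/s)^2."""
        r = np.linalg.norm(x, axis=1, keepdims=True)
        a = -GM_central * x / r**3                    # central source
        for j in range(len(x)):                       # internal binding of the cluster
            d = x - x[j]
            r3 = np.linalg.norm(d, axis=1, keepdims=True)**3
            r3[j] = np.inf                            # skip the self-interaction
            a -= G * m[j] * d / r3
        return a

    def evolve(x, v, m, GM_central, dt, n_steps):
        for _ in range(n_steps):
            a = acceleration(x + 0.5 * dt * v, m, GM_central)
            x = x + dt * v
            v = v + dt * a
        return x, v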
We show the simulation in Fig. <ref>. The concentrated green points mark the initial cluster at t=0 in all panels (three dimensional views as well as Cartesian projections as marked in the axes). The evolved cluster at later times, the cloud of red dots, is seen to stretch under tidal tensions.
In the panels of the two right columns we see that, due to the initial random velocity in the z direction, the cloud expands and compresses along the vertical OZ axis. But the left column shows that the stream remains near the (slightly tilted) plane that contains the initial velocity, without developing out-of-plane motion, and therefore no measurable torsion.
§ ORBITS AROUND ELONGATED POTENTIALS AND Τ≠ 0
§.§ Movement around a Newtonian cylindrical source
We now move on to quickly show how torsion is expected to look for an orbit around a perfectly cylindrical source of gravity,
in a discussion paralleling that of subsection <ref>. We naturally employ cylindrical coordinates
(ρ,φ,z), so that
r = ρρ̂ + z ẑ
r' = ρ' ρ̂ +ρφ' φ̂ + z' ẑ
r” = ( ρ” -ρφ^'2) ρ̂ +(2ρ'φ'+ρφ”)φ̂
+ z”ẑ
where in the acceleration we recognize, from left to right, the radial, centrifugal, Coriolis, azimuthal and vertical accelerations, respectively.
The force law is the same as that for a line of charge in electromagnetism, except of course with the constant replaced,
so that in terms of the linear mass density λ,
r” = F/m = -2Gλ/ρρ̂ +0·φ̂
+0·ẑ .
Comparing with the general form in Eq. (<ref>) we recover z”=0 ⇒ z'= constant (reflecting translational invariance along the OZ axis) and ρ^2φ'= constant, so that the third component l_z/m of angular momentum per unit mass is conserved just as in the central force problem.
r”' = -2Gλ/ρ( -ρ'/ρρ̂ +φ' φ̂)
(valid for the Newtonian force with cylindrical symmetry only).
Calculating the cross-product of Eq. (<ref>) and Eq. (<ref>), while using the righthandedness of the trihedron (ρ̂,φ̂,ẑ) to evaluate each basis vector product, yields
r'× r” = (-2Gλ)[ -φ' ẑ +z'/ρφ̂] .
Next we take the scalar product with r”' and evaluate Eq. (<ref>) to obtain the torsion, yielding
τ = z'φ'/((ρφ')^2+z^'2) = (1/ρ) v_zv_φ/(v_φ^2+v_z^2)
that we have cast in an easier to remember form in the second expression. Clearly, for there to be a torsion we need both azimuthal and vertical velocities (so stellar streams in the galactic plane are not sensitive, as expected). Additionally, because |v_zv_φ| ≤ v_z^2+v_φ^2, torsion belongs to the interval τ∈ [-ρ^-1,ρ^-1],
so its maximum magnitude is controlled by the distance from the stellar stream segment to the galactic axis.
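For a single stream segment this is a one-line evaluation; the numbers below are purely illustrative.

```python
def torsion_cylinder(rho, v_phi, v_z):
    """Torsion around a cylindrical Newtonian source: (1/rho) * v_z*v_phi / (v_phi^2 + v_z^2)."""
    return v_z * v_phi / (rho * (v_phi**2 + v_z**2))

# A segment at rho = 30 kpc with v_phi = 220 km/s and v_z = 30 km/s:
print(torsion_cylinder(30.0, 220.0, 30.0))   # ~4.5e-3 kpc^-1, well inside [-1/rho, 1/rho]
```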
§.§ Simulation of an N-point stellar stream around a cylindrical source
Next we proceed to repeat the exercise of subsection <ref> with the same starting data, but replacing the central spherical Newtonian source by a cylindrical source.
The force in Eq. (<ref>) needs to be replaced, so that
a_i=1,2= - 2Gλx^i/(x^2+y^2)
- ∑^N-1_j=1Gm_j(x^i-x^i_j)/((x-x_j)^2+(y-y_j)^2+(z-z_j)^2)^3/2
a_i=3 = - ∑^N-1_j=1Gm_j(x^i-x^i_j)/((x-x_j)^2+(y-y_j)^2+(z-z_j)^2)^3/2 .
Its first term is the acceleration caused by the cylindrical gravitational source
(that along the OZ axis being zero by translational symmetry).
Its linear mass-density λ = M/L is obtained from the typical rotation curve around a galaxy v_rot = √(2Gλ) <cit.>.
The second term of Eq. (<ref>) is, again, the correction due to the tiny binding of the stellar-stream stars among themselves, together with Eq. (<ref>). An example can be seen in Fig. <ref>, where all trajectories seem, overall, to fall in a plane.
The result of the analogous simulation is represented in Fig. <ref>. If the starting velocity profile was perfectly set in the XY plane perpendicular to the cylinder, the torsion would still be zero as per Eq. (<ref>). We give it a slight tilt and then the orbit starts behaving
as a helix (which can be appreciated in the bottom row, where the originally compact cluster of stars has, after 1 Gyr,
become a tidal stream that does not close on itself but ascends in a spiral, showing a small torsion). We detail in Fig <ref> how the effect becomes more noticeable upon rigging the initial star cluster with a larger speed along the OZ axis.
§.§ Sphere and cylinder with additional v_z,0
To close this section, we will combine both types of sources, a sphere (akin to a visible-matter galactic bulge)
and a cylinder (mimicking an elongated DM halo).
For the sphere we take the typical mass of a galaxy M_s∈ (10^9, 10^12)M_⊙ <cit.>, <cit.> and for the cylinder we use the expression for the linear mass density that we obtain from the asymptotic velocity at large r in the rotation curve v_rot of the Milky Way, as discussed in the previous section <ref>.
The updated expression for the acceleration of the stars in the stream is obtained by combining Eqs. (<ref>) and (<ref>), that is,
z” = -v_0^2/| r|^3 z
r_⊥” = -| v|^2/| r_⊥|^2 r_⊥ -v_0^2/| r|^3 r_⊥
where v is taken from the galactic rotation velocity when it has flatted out at large r, and v_0 estimated from the visible mass.
We have added a small but appreciable contribution to the initial velocity in the z direction,
v_* = (220+Δ v, Δ v, 5 + Δ v) km/s to induce sufficient vertical, out of plane motion that will
generate torsion as per Eq. (<ref>).
In Fig. <ref> we clearly observe traits of the motion around cylinder+sphere sources as described in <cit.>.
Along the symmetry OZ axis, a star will describe harmonic oscillations between the two hemispheres due to the Newtonian pull of the spherical part of the distribution acting towards the center (unless it is provided with escape velocity, in which case it will approach an asymptotic trajectory, a helix around the OZ axis).
The orbit on the XY plane is not closed due to the additional 1/r force. The net effect in three dimensions can be seen as a precession of the orbital plane around the OZ axis, with the trajectory creating complicated helicoidal patterns.
The simulation in Fig. <ref> reflects this and clearly shows the appearance of nonvanishing torsion in the stellar streams (see particularly the three-dimensional rendering in the left bottom plot).
§.§ Torsion in a galaxy with a spherical halo and a galactic plane
We wish to have a reference for a minimum torsion that we would consider “normal” in order of magnitude, so that if extensive studies of stellar streams show that their torsion exceeds that level, one could reject the hypothesis of a spherical halo.
For this purpose we propose here a toy model in which the halo is taken spherical, but we add a disk component. This adds a vertical (not radial) velocity outside of the galactic plane that points towards it.
The simplest (and coarsest) such model takes the galactic disk as being uniform and infinite.
This is a reasonable approximation only for streams that do not elevate too much along the OZ axis; otherwise it provides an upper bound to a more realistic torsion (since such additional vertical force will always be larger than that of a finite disk, whose effect will fall off with z).
In that case, observed torsions above this bound would still entail an incompatibility with a spherical halo to be studied further.
Therefore, in this minimum-torsion model we take the acceleration as
a_i=1,2= - GMx^i/(x^2+y^2+z^2)^3/2
a_3 = - GMz/(x^2+y^2+z^2)^3/2 - 2π G σ sign(z)
.
In this equation, σ is a surface mass density for the disk, in the range
50-100 M_⊙/parsec^2 which is a usual estimate <cit.>, <cit.> at 8 kpc from the galactic center.
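In the simulation this only amounts to adding the constant vertical pull of the infinite disk to the spherical term; a sketch in the same numpy conventions as the earlier one (the conversion of σ to M_⊙/kpc² and the argument names are ours):

```python
import numpy as np

def acceleration_sphere_disk(pos, GM, G_sigma):
    """Spherical source plus an infinite uniform disk in the z = 0 plane;
    G_sigma = 2*pi*G*sigma is the (constant) vertical pull of the disk."""
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    a = -GM * pos / r**3
    a[:, 2] -= G_sigma * np.sign(pos[:, 2])
    return a

# e.g. sigma = 75 Msun/pc^2 = 7.5e7 Msun/kpc^2, so G_sigma = 2*pi*G*sigma with G = 4.301e-6
```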
Fig. <ref> shows the characteristic wobbling of movement near the galactic plane caused by the planar disk,
which is qualitatively consistent with <cit.>.
We can provide an analytical estimate of the torsion following the now familiar reasoning.
Since an instantaneous velocity that is parallel to any of the three coordinate vectors of the cylindrical base
{ρ̂, φ̂, 𝐳̂} will display zero torsion, we take a trajectory combining two of them,
r' = v_φφ̂ + v_z 𝐳̂ .
Multiplying by the acceleration in Eq. (<ref>) we obtain
r' × r” = GMρ/r^3(v_φ𝐳̂ - v_z φ̂)
-Gv_φ( M/r^3z+(2π)σ sign(z) )ρ̂ .
To construct the determinant (( r' × r”)· r”') necessary for the torsion, we evaluate the third derivative outside the galactic plane (where it is undefined),
r”' = -GM/r^2( r̂' - 2/r(r̂·𝐫')𝐫̂)
that is in the plane given by position and velocity, employing
r̂' = 1/r( v - (r̂· v) 𝐫̂)
= 1/r( v_φφ̂ + v_z 𝐳̂
-zv_z/r^2(ρρ̂ + z 𝐳̂) ) .
A slightly tedious but straightforward calculation then yields
τ = -6π (Mσ) (|z|ρ) (v_zv_φ)/[v_φ^2(M^2 r +4πσ M|z|r^2+4π^2σ^2r^5)+v_z^2M^2ρ^2/r] .
The numerator has mechanical dimensions of a squared momentum, and the denominator of squared momentum times length,
yielding the correct 1/L dimensionality of the torsion. Moreover, the structure of the denominator shows that in the
presence of a spherical source (M) alone, or a plane (σ) alone, the torsion vanishes as it should. Likewise, both components of the velocity have to be nonvanishing as in Eq. (<ref>) for the cylindrical source; and the torsion is null both on the galactic plane (z=0) and on its perpendicular axis through the center of the sphere (ρ=0).
We can then numerically evaluate Eq. (<ref>) to obtain the floor value of the torsion that we should expect to be able to use in the galaxy. Taking into account that the galactic plane is not infinite so that the elevation z will yield a diminishing multipolar field, it may be that galactic torsions from a spherical halo plus disk are even smaller; what we mean by this estimate is that those streams that may be found with larger values need to be further investigated
as they may be teaching us something about the dark matter halo or about dark matter inhomogeneities.
Employing z∼ 1kpc, ρ∼ r∼ 10 kpc, v_z∼ v_φ∼ 220 km/s (to take the most conservative floor to the torsion), M∼ 10^12 M_⊙, and σ∼ 10^8 M_⊙/kpc^2 as already discussed,
the denominator of Eq. (<ref>) is dominated by the M^2 terms, with the σ r^2 correcting M only at the percent level. With these numbers we then find τ∼ -9· 10^-4kpc^-1.
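This estimate is easy to reproduce directly from the expression above with the fiducial numbers just quoted (units kpc, km/s and M_⊙; G cancels between numerator and denominator):

```python
import numpy as np

M, sigma = 1e12, 1e8          # Msun, Msun/kpc^2
z, rho, r = 1.0, 10.0, 10.0   # kpc (taking rho ~ r ~ 10 kpc as in the text)
v_z = v_phi = 220.0           # km/s

num = -6 * np.pi * (M * sigma) * (abs(z) * rho) * (v_z * v_phi)
den = (v_phi**2 * (M**2 * r + 4 * np.pi * sigma * M * abs(z) * r**2
                   + 4 * np.pi**2 * sigma**2 * r**5)
       + v_z**2 * M**2 * rho**2 / r)
print(num / den)              # ~ -9e-4 kpc^-1
```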
We then conclude that torsions of stellar streams below 10^-3 in our galaxy can be explained without resort to deformed dark matter haloes or exotic phenomena. Of the few streams presently known, most present torsions at this level or below and are thus
of no further interest for this application of the shape of the haloes. It is those that reach τ at the percent level that deserve further scrutiny to bear on the halo shape, among the ones known and in future searches for streams.
§ STELLAR STREAMS IN THE MILKY WAY AND THEIR TORSION
In this section we finally turn to some of the known stellar streams in the Milky Way.
We select as relevant those found at distances d>30 kpc from the galactic center,
so that the internal structure of the galaxy, such as the disk and spiral arms, produces the minimum possible alteration in the stream. These streams, see Figs. <ref> and <ref>, have been extracted from <cit.>. The intention in this section is to extract the value of the torsion of the parametrized stream curves with Eq. (<ref>) and check for their vanishing (or not).
We have taken two of the streams out of further consideration, namely those at Orphan-Chenab and Styx. The reason is that they may be influenced by gravity sources outside the MW.
Due to the proximity of the Large Magellanic Cloud (LMC) to our galaxy, the streams in its periphery in the angular direction of that cloud could suffer alterations due to this additional source of gravity <cit.>, <cit.>.
To obtain the torsion of the curves following the stream we have opted for employing a smooth (polynomial) parametrization. Therefore, we first fit each of the galactocentric Cartesian coordinates tracking the individual streams in the data compilation of <cit.> to order-four polynomial parametric curves. The parameter that describes each curve takes values in the interval k∈[0,5].
The generic parametrization and a plot of each of them for the various streams, projected over the Cartesian axes, are relegated to the appendix, see Eq. (<ref>) and Fig. <ref>.
This parameter k is an arbitrary coordinate that can be converted to arc length, that has clearer geometric significance,
by means of
s = ∫√(∑^3_i=1(dx^i/dk)^2) dk ;
the derivatives in Eq. (<ref>) are to be taken with respect to the parameter k in general, or with respect to s if a change of variables is effected (the outcome for the torsion is the same, of course).
To work with the streams in the database[https://github.com/cmateu/galstreamshttps://github.com/cmateu/galstreams] we use the galstream library and to perform the fits we use the polyfit command in the numpy module within a standard Python installation.
A word about the uncertainty in this extraction is warranted.
The data points for the extracted stream trajectories are quoted without errors in the original reference <cit.>, perhaps because they are rather small; thus, until uncertainties in the data are compiled, the uncertainty of our parametric reconstruction stems entirely from the interpolation of Eq. (<ref>).
After the reconstruction of the parametric curves in Fig. <ref>, we can obtain the expression for the torsion of each curve r = (x, y, z) using once more Eq. (<ref>), τ = Det(r', r”, r”')/||r' × r”||^2,
where the derivatives are taken with respect to the parameter k, '=d/dk. We analytically express the derivatives in terms of the polynomial parametrization and then evaluate them as a function of the parameter k used for the fit, taking values from 0 to 5. Because the torsion is parametrization independent, it can also be given as a function of the arc length
τ(s) by calculating the derivatives with respect to either k or s.
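A compact numpy implementation of this procedure is sketched below; the function and argument names are our own, and the stream tracks themselves come from the galstreams compilation described above.

```python
import numpy as np

def fit_and_torsion(xyz, k=None, deg=4):
    """Fit order-`deg` polynomials x^i(k) to a stream track and return tau(k).

    xyz : (n_points, 3) galactocentric Cartesian coordinates along the stream.
    k   : parameter values of the points (defaults to a uniform grid on [0, 5]).
    """
    n = len(xyz)
    k = np.linspace(0.0, 5.0, n) if k is None else np.asarray(k)
    coeffs = [np.polyfit(k, xyz[:, i], deg) for i in range(3)]        # x(k), y(k), z(k)
    d1 = np.array([np.polyval(np.polyder(c, 1), k) for c in coeffs])  # r'(k)
    d2 = np.array([np.polyval(np.polyder(c, 2), k) for c in coeffs])  # r''(k)
    d3 = np.array([np.polyval(np.polyder(c, 3), k) for c in coeffs])  # r'''(k)
    cross = np.cross(d1.T, d2.T)                                      # r' x r''
    det = np.einsum('ij,ij->i', cross, d3.T)                          # (r' x r'') . r'''
    return det / np.einsum('ij,ij->i', cross, cross)                  # tau(k), units 1/[xyz]
```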
The torsion along the curve shows significant variations in some of the streams in Fig. <ref>, such as Cetus-Palca, Cetus, Elqui, Jet and Pal15. In Table (<ref>) we quantify this variation between minimum and maximum values in the relevant streams that we look at. Reasons for this variation might be a non-spherical gravitational source and also the interaction with other gravitational sources different from the overall galactic field.
Because torsion (as curvature) has dimensions of inverse length, we would expect
stellar streams to perhaps show an inverse relation with respect to their distance from the galactic center,
as defined in Eq. (<ref>).
Irrespectively, in a galaxy such as the Milky Way the torsion of galactic streams should have a characteristic scale of
(10 kpc)^-1.
As per the discussion around Eq. (<ref>), where we established
τ∈[ -1/ρ,1/ρ], our selection of streams at 30 kpc or more means that we
would consider values of the torsion of order 0.03 in units of inverse kiloparsec to be sizeable and very different from zero. Also, as per the discussion below Eq. (<ref>), those above 0.001 could perhaps carry interesting information about the
DM distribution.
Turning to the data, the torsions that we seem to observe in the MW streams show orders of magnitude variation, with some factors of 20 or more larger than the expected scale and others totally negligible, perhaps due to their being close to lying on the galactic plane or moving in an OZ-r vertical plane, so that v_z or v_φ, respectively, are small.
The Jet and Cetus streams have sizeable OZ-axis displacements and consistently with Eq. (<ref>) they present sizeable torsion.
§ CONCLUSIONS AND OUTLOOK
The problem of galactic rotation curves suggests that galaxies are surrounded by significant amounts of dark matter, and the overall shape of these sources is yet to be ascertained. Whereas spherical DM distributions around galaxies have to be fine-tuned to explain the flatness of rotation curves, a cylindrical (or, generally, prolate) DM source can naturally explain the flattening of rotation curves. This avoids the fine-tuning of spherical DM haloes to precisely follow the 1/r^2 fall-off for a large swath of r values.
Observables inside the galactic plane cannot however distinguish between spherical (though fine tuned) and cylindrical/elongated gravitational sources, but out-of-galactic-plane information could provide new strong discriminants.
The stellar streams around the Milky Way have been extensively investigated for a while now, and are still nowadays a relevant subject of research. The trajectory followed by streams can be used as a tool to infer the geometry of these gravitational sources.
Orbits can be characterized by their torsion according to Eq. (<ref>); around a central potential orbits move in a plane and are expected to be torsionless (see Fig. <ref>). In addition, test masses around cylindric sources are expected to follow helical orbits in which the torsion is non-zero (see Fig. <ref>). Another approach is to consider an ellipsoid-shaped halo, which is not perfectly cylindrical but rather elongated. The expected orbit of the streams would arise from the combination of the orbits around central potentials and the helical orbits around cylinders, as is seen in Fig. <ref>.
The streams of the Milky Way have been a subject of research for a considerable time span, and many of the objects that constitute these streams have been catalogued. From a reconstruction of the orbits followed by these streams we infer the torsion caused by the gravitational source in Fig. <ref>. In this work we only consider those streams that seem to be far enough away from the galactic center to (1) avoid large effects from the baryonic component of the galaxy and (2) have a bird's eye view of the DM halo from outside a large fraction thereof. From the extraction of the torsion we see that it is non-negligible in some of the streams considered.
From our evaluation of the torsion we do not dare favor one or another interpretation of the DM halo shape in view of current data; this article should be seen as a proposal for a new observable, τ, and a first exploratory study.
We do find streams with significant torsion, but that have also with significant variability, which begs for further understanding.
In future observational work it might be interesting to actively seek streams that show both vertical motion (along the axis perpendicular to the MW plane) and also azimuthal motion around that axis, as those with large v_z and v_φ will
be most sensitive to the torsion. Should those streams show trajectories that are compatible with lying on a plane (zero torsion), a spherical halo will be preferred. Should they however appear helicoidal, with nonnegligible torsion, they would be pointing to an elongated DM halo.
Streams that may be detected in nearby galaxies carry the same information about their respective haloes.
Finally, other observables can bear on the overall shape of the halo, and we are considering investigating
the shape-sensitivity of the gravitational lensing of both electromagnetic and gravitational radiation.
Financially supported by spanish Ministerio de Ciencia e Innovación: Programa Estatal para Impulsar la Investigación Científico-Técnica y su Transferencia (ref. PID2021-124591NB-B-C41 and PID2019-108655GB-I00) as well as Univ. Complutense de Madrid under research group 910309 and the IPARCOS institute.
§ APPENDIX
The parametrization (x(k), y(k),z(k)) that we employ reads
x^i (k) = a_i k^4 + b_i k^3 + c_i k^2 + d_i k + e_i for i=1,2,3.
It is idle to try to relate k to a Newtonian time t since, not knowing a priori the dynamics of the system, it is unknown at which time a star was at what position along its trajectory. Only the instantaneous (present) geometry of the stream is known with certainty, and therefore an arbitrary parameter k (or the arc length after computing it) should suffice.
In Figs. <ref> and <ref> we then display the polynomial fit to each one of the streams considered in this work using Eq. (<ref>) by plotting the parametrization (x(k),y(k),z(k)).
In particular, note that the Jet and Cetus streams, which show the largest torsions in Fig. <ref>, are well described by the simple 4^ th order polynomial fit and have no structure worth mentioning.
|
http://arxiv.org/abs/2307.04466v1 | 20230710103140 | Decay of long-lived oscillations after quantum quenches in gapped interacting quantum systems | [
"Jacob H. Robertson",
"Riccardo Senese",
"Fabian H. L. Essler"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
apsrev4-2
|
http://arxiv.org/abs/2307.04608v1 | 20230710145214 | Learning Interpretable Heuristics for WalkSAT | [
"Yannet Interian",
"Sara Bernardini"
] | cs.AI | [
"cs.AI"
] |
Learning Interpretable Heuristics for WalkSAT
Yannet Interian, Sara Bernardini
==============================================
Local search algorithms are well-known methods for solving large, hard instances of the satisfiability problem (SAT). The performance of these algorithms crucially depends on heuristics for setting noise parameters and scoring variables. The optimal setting for these heuristics varies for different instance distributions. In this paper, we present an approach for learning effective variable scoring functions and noise parameters by using reinforcement learning. We consider satisfiability problems from different instance distributions and learn specialized heuristics for each of them. Our experimental results show improvements with respect to both a WalkSAT baseline and another local search learned heuristic.
§ INTRODUCTION
The satisfiability problem (SAT), one of the most studied NP-complete problems in computer science, consists in determining if there exists an assignment that satisfies a given Boolean formula. SAT algorithms typically assume that formulas are expressed in conjunctive normal form (CNF). A CNF formula is a conjunction of clauses; a clause is a disjunction of literals; and a literal is a variable or its negation. SAT has a wide range of practical applications, including electronic
design automation, planning, scheduling and hardware verification.
Stochastic local search (SLS) algorithms are well-known methods for solving hard, large SAT instances <cit.>.
They are incomplete solvers: they typically run with a pre-set number of iterations, after which they produce a valid assignment or return “unsolved." Algorithm <ref> shows the pseudo-code of a generic SLS algorithm. Like most SLS solvers, it starts by generating a random assignment. If the formula is satisfied by this assignment, a solution is found. Otherwise, a variable is chosen by a variable selection heuristic (pickVar in Algorithm <ref>) and that variable is flipped. The loop is repeated until a solution is found or the maximum number of iterations is reached.
WalkSAT <cit.> and other successful local search algorithms select the variable to flip from an unsatisfied clause (see Algorithm <ref>). After picking a random unsatisfied clause c, the choice of which variable in c to flip is made in two possible ways: either a random variable is chosen, or a scoring function is used to select the best variable to flip. The version of WalkSAT in Algorithm <ref> picks a variable with the smallest “break" value, where break(x) of a variable x given an assignment X is the number of clauses that would become false by flipping x.
Other algorithms and other versions of WalkSAT use different heuristics <cit.> for choosing x.
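For concreteness, a schematic (and deliberately unoptimised) Python rendering of this pickVar step is given below; representing clauses as lists of signed integers and the assignment as a dict from variable index to a Boolean is our own choice, not something prescribed by these papers.

```python
import random

def is_sat(clause, assignment):
    # literal v > 0 means x_v, v < 0 means NOT x_|v|
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def break_count(var, clauses, assignment):
    """Number of clauses that become false when `var` is flipped (computed naively here;
    efficient implementations keep per-clause counts of satisfying literals instead)."""
    assignment[var] = not assignment[var]
    broken = sum(1 for c in clauses
                 if any(abs(lit) == var for lit in c) and not is_sat(c, assignment))
    assignment[var] = not assignment[var]
    return broken

def pick_var(clauses, assignment, p=0.5):
    unsat = [c for c in clauses if not is_sat(c, assignment)]
    c = random.choice(unsat)                      # a random unsatisfied clause
    if random.random() < p:                       # noise step
        return abs(random.choice(c))
    breaks = {abs(lit): break_count(abs(lit), clauses, assignment) for lit in c}
    return min(breaks, key=breaks.get)            # greedy step: smallest break value
```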
WalkSAT-type algorithms also use a noise parameter p (see Algorithm <ref>) to control the degree of greediness in the variable selection process. This parameter has a crucial impact on the algorithms' performance <cit.>. Hoos et al. Hoos2002 propose a dynamic noise adaptation algorithm in which high noise values are only used when the algorithms appear to not be making progress.
Designing SLS algorithms requires substantial problem-specific research and a long trial-and-error process by the algorithm experts. Also, algorithms seldom exploit the fact that real-world problems of the same type are solved again and again on a regular basis, maintaining the same combinatorial structure, but differing in the data. Problems of this type include, for example, SAT encodings of AI Planning instances <cit.> and Bounded Model Checking instances <cit.>.
Recently, there has been increased interest in applying machine learning techniques to design algorithms to tackle combinatorial optimization problems <cit.>. In line with this work, our paper focuses on using machine learning to design algorithms for SAT. More specifically, we investigate the use of reinforcement learning to learn both adaptive noise strategies and variable scoring functions for WalkSAT-type algorithms. We call the resulting strategy LearnWSAT. The main contributions of this paper are as follows:
* Our technique automatically learns a scoring function and an adaptive noise strategy for WalkSAT-type algorithms.
* Our scoring functions are simple and interpretable. When coded efficiently, they would have a running time per iteration similar to WalkSAT.
* Our approach outperforms both a WalkSAT baseline algorithm and a previously published learned SLS-type algorithm <cit.>.
* Our technique uses a “warm-up" strategy designed to substantially decrease training time.
* Our algorithm, when trained on a specific distribution, generalizes well to both unseen instances and larger instances of the same distribution.
We remark that our goal in this paper is to show how reinforcement learning could be leveraged to make WalkSAT-type algorithms more efficient and their design more practical; we do not aim to offer the fastest WalkSAT implementation, which we leave as future work. [The implementation can be found here <https://github.com/yanneta/learning_heuristics_sat>]
§ RELATED WORK
The literature regarding SAT is vast. We focus here only on the following two topics, which are the most pertinent to our contribution.
§.§ Machine Learning for SAT
Guo et al. Guo2022MachineLM give an in-depth survey of machine learning for SAT. In their classification, our work falls into the category described as “modifying local search solvers with learning modules". There are two other works <cit.> that fall into the same category.
Yolcu and Poczos yolcu2019learning use reinforcement learning with graph neural networks to learn an SLS algorithm. The graph neural network takes a factor graph associated with the SAT formula and the current assignment to score each variable. Scoring each variable at every iteration incurs a large overhead, which leads the authors to run experiments only on small SAT instances. Our work is similar to Yolcu and Poczos's yolcu2019learning in that we also use a model to score variables. On the other hand, our approach differs from theirs in four ways. Our scoring model is a linear function of a small set of features, which is simple and interpretable. At every iteration, we only score variables from one unsatisfied clause, which makes our model much more scalable and practical. Our features are able to encode time dependencies (e.g. last time a variable was flipped). We learn a separate noise strategy.
Zhang et al. Zhang2020 propose a system (NLocalSAT) for guiding the assignment initialization of an SLS solver with a neural network. Their model feeds the CNF formula into a Gated Graph neural network for feature extraction. The neural network predicts an assignment for the SAT formula. The model is trained to predict a satisfying assignment. The output of the neural network is used to initialize SLS solvers. Whereas NLocalSAT modifies the initialization of the SLS algorithm, our algorithm modifies its internal loop. Those two improvements are potentially compatible.
Selsam et al. (2018) trained a message-passing neural network called NeuroSAT to predict the satisfiability (SAT) or unsatisfiability (UNSAT) of problem instances. The authors trained and evaluated NeuroSAT on random problem instances that are similar to the ones used in our paper. NeuroSAT achieved an accuracy of 85% and successfully solved 70% of the SAT problems. It is worth noting that our approach focuses on predicting satisfiability and does not directly address unsatisfiability. However, our approach demonstrates a significantly higher accuracy on SAT instances.
§.§ Stochastic Local Search for SAT
Various strategies have been proposed for picking the variables to flip within WalkSAT.
McAllester et al. McAllesterSK97 analyze six strategies. In all the strategies, a random unsatisfied clause c is selected, and the variable is chosen within c. With probability p, a random variable is selected from c; otherwise, one of the six following strategies is implemented. 1) Pick the variable that minimizes the number of unsatisfied clauses. 2) Pick the variable that minimizes the break value (Algorithm <ref>). 3) Same as the previous strategy, but never make a random move if one with break value 0 exists. 4) Pick the variable that minimizes the number of unsatisfied clauses, but refuse to flip any variable that has been flipped in the last t steps. 5) Sort the variables by the total number of unsatisfied clauses, then pick the one with the smallest value. Break ties in favor of the least recently flipped variable. 6) Pick a variable using a combination of least recently picked variable and number of unsatisfied clauses.
ProbSAT <cit.> uses a scoring function based on the values make(x) and break(x) and samples the variable to pick based on that scoring function. Given a variable x and an assignment X, make(x) is the number of clauses that would become true by flipping x. Note that make(x) - break(x) is the decrease in the number of unsatisfied clauses after flipping x. Balint and Schoning ProbSAT2012 experiment with various types of scoring functions based on make and break and find that make values can be ignored.
Hoos Hoos2002 proposes a dynamic noise strategy that uses higher values of noise only when the algorithm is in an “stagnation" stage, which is when there is no improvement in the objective function's value over the last m/6 search steps, where m is the number of clauses of the given problem instance. Every incremental increase in the noise value is realized as p ← 0.8p + 0.2; the decrements are defined as p ← 0.6 p where p is the noise level.
The work by McAllester et al. McAllesterSK97 inspired our selection of features for the variable ranking, and the paper by Balint and Schoning ProbSAT2012 led us to use features based on break(x) and ignore make(x). Finally, the work in Hoos Hoos2002 inspired us to learn an automated noise strategy.
§ METHODOLOGY
Algorithm <ref> shows the pseudo-code for our pickVar module. Our objective is to learn the functions p_w and f_θ in such a way that they minimize the number of flips needed to solve a SAT problem. We now describe these functions in detail.
§.§ Variable Representation
To score each variable, we first compute some features that represent the state of the variable at the current iteration t. From our discussion of previous work in Section <ref>, we know that break(x) is an important feature in deciding the score of a variable. We also know, from previous work, that we want to avoid flipping variables back and forth. We design features encoding that information.
Let age_1(x) be the last iteration in which x was flipped and age_2(x) the last iteration in which x was flipped and selected by the algorithm using f_θ(x). Let last_K(x)=1 if x was flipped in the last K iterations by f_θ(x). Let b̃(x) = min(break(x), 10) denote the break value capped at 10.
Based on this notation, we represent each variable via the following features:
* bk(x) = log(1+ b̃(x))
* Δ_1(x) = 1 - age_1(x)/t
* Δ_2(x) = 1 - age_2(x)/t
* last_5(x)
* last_10(x)
We cap the break value and take a logarithm in the feature bk(x) to make the feature independent of the size of the formulas; bk(x) is also normalized to be between 0 and 1.
We have selected these features based on an extensive preliminary evaluation performed on a variety of features and formulas. It would be easy to expand our technique to include additional features whenever relevant.
Let 𝐟(x) = (bk(x), Δ_1(x), Δ_2(x), last_5(x), last_10(x)) be the vector representing the variable x at iteration t, given a current assignment X for a formula F. Note that, to compute the vector, we keep updating variables age_1, age_2, last_10, which is very cheap. Similar to WalkSAT, break(x) is only computed for variables on one clause at each iteration.
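A direct transcription of these features is sketched below; the bookkeeping containers age1 and age2, the convention that they hold 0 for a never-flipped variable, and the normalisation of bk(x) by log 11 so that it lies in [0,1] are our own implementation assumptions.

```python
import math

def features(x, t, brk, age1, age2):
    """Feature vector f(x) for candidate variable x at iteration t.
    brk = break(x); age1[x] / age2[x] = last iteration x was flipped /
    flipped after being selected via f_theta (0 if never)."""
    bk = math.log1p(min(brk, 10)) / math.log(11)   # capped, logged, scaled to [0, 1]
    d1 = 1.0 - age1[x] / t                         # Delta_1(x)
    d2 = 1.0 - age2[x] / t                         # Delta_2(x)
    last5 = 1.0 if age2[x] > 0 and t - age2[x] <= 5 else 0.0    # last_5(x)
    last10 = 1.0 if age2[x] > 0 and t - age2[x] <= 10 else 0.0  # last_10(x)
    return [bk, d1, d2, last5, last10]
```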
§.§ Models for Scoring Variables and Controlling Noise
Our goal is to make our algorithm interpretable and fast, so we use a linear model for scoring variables. Given a feature vector 𝐟 = 𝐟(x) for a variable x, f_θ(x) is a linear model on 𝐟:
f_θ(x) =θ_0 + ∑_i θ_i ·𝐟_i
Inspired by the dynamic noise strategy discussed in Section <ref>, we define the stagnation parameter δ as the number of iterations since the last improvement in the number of satisfied clauses, divided by the number of clauses. Instead of increasing or decreasing it at discrete intervals as in Hoos Hoos2002, our noise is a continuous function of δ, defined as
p_w(δ) = 0.5 · Sigmoid(w_0 + w_1δ + w_2 δ^2)
We use the sigmoid function to ensure p_w being between 0 and 0.5. Those are commonly used values for noise. Parameters w_0, w_1, w_2 are learned together with parameters {θ_i}_i=0^5 by using reinforcement learning.
After running our initial experiments, we noticed that the effect of the stagnation parameter δ was almost negligible. Therefore, in most of our experiments, we use a noise parameter that is a constant learned for each instance distribution, that is, p_w = 0.5 · Sigmoid(w_0).
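In code the two learned components are just a dot product and a squashed quadratic; with w_1 = w_2 = 0 the second function reduces to the constant noise 0.5 · Sigmoid(w_0) used in most of our experiments.

```python
import numpy as np

def f_theta(theta, feat):          # linear score: theta_0 + sum_i theta_i * f_i
    return theta[0] + float(np.dot(theta[1:], feat))

def p_w(w, delta):                 # 0.5 * sigmoid(w_0 + w_1*delta + w_2*delta^2)
    return 0.5 / (1.0 + np.exp(-(w[0] + w[1] * delta + w[2] * delta**2)))
```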
§.§ Simplicity and Interpretability of Models
Domingos domingos1999role states that one interpretation of Occam’s razor in machine learning is the following: “Given two models with the same generalization error, the simpler one should be preferred because simplicity is desirable in itself.”
Following this basic principle, in our technique, we use simple functions (linear and sigmoid functions) involving a small set of input variables and show that we get better results than related algorithms that use much more complex models, e.g. Yolcu and Poczos's one yolcu2019learning. Simplicity is also valuable because simple linear models are very fast to evaluate, which is crucial to practical SAT solvers.
Interpretability refers to a model's capacity to be “explained or presented in understandable terms to a human” <cit.>. Linear models that use only a few simple variables are typically considered highly interpretable. Our variable-scoring model, which has just six coefficients, is therefore highly interpretable. The interpretability of a model is useful because it allows us to identify which features are significant and important and thus make decisions about adding or subtracting features. If a feature has a coefficient close to 0, we can infer that the feature lacks statistical significance and should be eliminated.
By providing insight into the impact of each model feature, interpretability can help algorithm designers simplify the process of adding, removing, and designing features.
Table <ref> provides an example of the scoring parameters associated with random 3-SAT formulas of various sizes. The absolute value of each coefficient in the table allows us to gauge the contribution of each variable to the model. As demonstrated by the coefficients in Table <ref>, the bk(x) feature has a notably negative impact on the variable score, indicating its strong influence compared to other features. Conversely, the coefficients associated with the noise function p_w(δ) showed that δ was not a crucial feature, allowing us to simplify our assumptions regarding the noise parameter. This kind of insight can be extremely valuable.
§.§ Training with Reinforcement Learning
To learn heuristics by using reinforcement learning <cit.>, we formalize local search for SAT as a Markov Decision Process (MDP). For clarity, we describe the MDP assuming that the noise parameter is 0, that is, the algorithm always picks a variable x from a random unsatisfied clause c using features 𝐟(x).
For each problem distribution D, we have an MDP represented as a tuple (𝒮, 𝒜, 𝒫, ℛ, γ) where:
* 𝒮 is the set of possible states. The state encodes the information needed at iteration t to pick a variable to flip. In our setting, a state is a tuple (X, c, {𝐟(x)}_x ∈ c, t), where X is our current assignment, c is a clause unsatisfied by X, {𝐟(x)}_x ∈ c is the set of features for all variables in c, and t is the current step. The formula F, uniformly sampled from D, is also part of the state, but it is fixed through the episode. There are also two end states: end_sat and end_unsolved.
* 𝒜 is the set of actions. Given a state s = (X, c, {𝐟(x)}_x ∈ c, t), the set of actions corresponds to picking a variable to flip from the state's clause c.
* 𝒫 is the transition probability function, defining the probability of going from a state-action pair (s,a) to the next state s'. Let s=(X, c, {𝐟(x)}_x ∈ c, t) be our current state; we pick a variable x in c with probability e^f_θ(x)/∑_y ∈ c e^f_θ(y), which gets us X', the assignment obtained from X by flipping variable x. If X' satisfies the formula F, we move to the end_sat state. If the max number of steps is reached and X' does not satisfy F, we move to end_unsolved. Otherwise, we move to (X', c', {𝐟(x)}_x ∈ c', t+1), where c' is a random clause unsatisfied by the new assignment X'.
* ℛ(s) is the immediate reward after transitioning to state s. ℛ(end_sat)=1 and 0 otherwise.
* γ∈ (0, 1) is the discount factor, which we set to less than 1 to encourage finding solutions in fewer steps.
We reformulate the problem of learning informative heuristics for SAT into the problem of finding an optimal policy π for the MDP described above. We use the well-known REINFORCE algorithm <cit.>. Our policy π(s) is determined by the function f_θ(x) that we use to sample the variable to flip based on the feature vector of each variable.
At each training iteration, we sample a batch of formulas from the distribution D and generate trajectories for each formula. We accumulate the policy gradient estimates from all trajectories and perform a single update of the parameters. Algorithm <ref> shows the pseudo-code of the REINFORCE algorithm for the case of constant noise and batch size of one.
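The PyTorch sketch below illustrates this estimator for a single formula with zero noise; random_assignment, unsatisfied_clauses, feature_vector (returning the five features above as a tensor) and flip are assumed helper routines, and the exact discounting offset is a convention choice, so this is an illustration of the gradient update rather than our exact training code.

```python
import random
import torch

def run_episode(clauses, n_vars, theta, gamma=0.5, max_flips=10_000):
    X = random_assignment(n_vars)
    log_probs = []
    for t in range(1, max_flips + 1):
        unsat = unsatisfied_clauses(clauses, X)
        if not unsat:                                   # reached end_sat: reward 1
            return [gamma ** (t - 1 - s) for s in range(1, t)], log_probs
        c = random.choice(unsat)
        feats = torch.stack([feature_vector(abs(lit), t, X, clauses) for lit in c])
        logits = feats @ theta[1:] + theta[0]           # f_theta for every variable in c
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        flip(X, abs(c[int(a)]))
    return [0.0] * len(log_probs), log_probs            # end_unsolved: reward 0

def reinforce_step(clauses, n_vars, theta, opt):
    returns, log_probs = run_episode(clauses, n_vars, theta)
    if log_probs:
        loss = -torch.stack([G_t * lp for G_t, lp in zip(returns, log_probs)]).sum()
        opt.zero_grad(); loss.backward(); opt.step()
```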
§.§ Training with a Warm-Up Strategy
By performing an extensive experimental evaluation, we found that the training of our algorithm takes too long for formulas with over 50 variables when using completely random heuristics and not initially finding a satisfying assignment. Trials without satisfying assignments are not useful for training since they have a reward of zero. To cope with this problem, we design a warm-up strategy to speed up the training process. For a few epochs, we train the function f_θ in such a way that the sampling mimics the pickVar strategy from WalkSAT with probability e^f_θ(z)/∑_y ∈ c e^f_θ(y). We cast this as a classification problem and use log-loss and gradient descent to train f_θ. Figure <ref> displays the training with and without warm-up for formulas in rand_3(75, 320), showing the benefit of our approach.
§ EXPERIMENTAL SETTING
§.§ Data
We perform experiments using random formulas generated from the following problems: random 3-SAT, random 4-SAT, clique detection, graph coloring and dominating set. These distributions, except for random 4-SAT, are used in the evaluation of GnnSLS by Yolcu and Poczos yolcu2019learning. To facilitate comparison, we use the same problem distributions. They also used a vertex covering problem that the CNFgen package <cit.> no longer supports, so we do not include this problem in our experiments.
It has been observed empirically that random K-SAT problems are hard when the problems are critically constrained, i.e. close to the SAT/UNSAT phase boundary <cit.>. These problems are used as common benchmarks for SAT.
The threshold for 3-SAT is when problems have roughly 4.26 times as many clauses as variables. To generate hard problems for random 4-SAT, we set the number of clauses to be 9.75 times the number of variables <cit.>. The other three problems are NP-complete graph problems. For each of these problems, a random Erdos–Rényi graph G(N, p) is sampled. To sample from G(N, p), a graph with N nodes is generated by sampling each edge with probability p.
For all these problem distributions, we generate random instances and keep those that are satisfiable. We use the CNFgen package <cit.> to generate all instances and Minisat <cit.> to filter out the unsatisfiable formulas.
§.§ Algorithms
For comparison, we use the SLS algorithm learned via reinforcement learning developed by Yolcu and Poczos yolcu2019learning, which we call GnnSLS, and follow the same experimental setup. We also consider one of the WalkSAT versions, as described in Selman et al. SelmanKC93. Again, we follow Yolcu and Poczos yolcu2019learning in using this particular WalkSAT version.
We wrote our algorithms in Python and PyTorch, which does not make them competitive with state-of-the-art SAT solvers with respect to running time. Indeed, our goal in this paper is to explore the power of reinforcement learning for formulating effective SAT heuristics. To this aim, we offer a prototype algorithm that proves the concept. Although we do not try here to beat highly-optimized current SAT solvers, our results suggest that our technique has the potential to compete with them if written efficiently.
For each problem distribution, we generate 2500 satisfiable formulas. From these, 500 are used for testing, 1900 for training and 100 for validation.
As metrics, we use the median of the median number of flips, the average number of flips and the percentage of instances solved.
§.§ Training with Reinforcement Learning
We train GnnSLS as described in Yolcu and Poczos yolcu2019learning's paper and use their code from the related GitHub repository. The paper uses curriculum learning, where training is performed on a sequence of problems of increasing difficulty. For example, to train problems for rand_3(50, 213), the authors start by first training on rand_3(5, 21), using the resulting model to subsequently train on rand_3(10,43), rand_3(25,106) and rand_3(50, 213).
As mentioned above, for experiments with random formulas, our models are trained using 1900 instances. The 100 validation instances are used to select the model with the best median number of steps. We train for 60 epochs using one cycle training <cit.> and AdamW <cit.> as the optimizer (a link to our GitHub repository will be provided in due course). Most of our experiments are run with a discount factor of 0.5.
§.§ Evaluation
For evaluation, we use max_tries=10 and max_flips=10000 unless otherwise specified. As said above, for randomly generated problems, we use 500 instances for testing. The noise probability for WalkSAT and GnnSLS is set to p=1/2 as in the experiments by Yolcu and Poczos yolcu2019learning.
§ EXPERIMENTAL RESULTS
Comparison to GnnSLS and WalkSAT. Table <ref> summarizes the performance of LearnWSAT compared to GnnSLS and WalkSAT. We present results for five classes of problems, rand_3(50, 213), rand_4(30, 292), color_5(20, 0.5), clique_3(20, 0.05) and domeset_4(12, 0.2) and three metrics, median number of flips (m-flips), average number of flips (a-flips), and percentage solved (solved). Table <ref> indicates the number of variables and clauses in the sampled formulas and gives a sense of the size of the SAT problems we tackle. Table <ref> shows that, after training, LearnWSAT requires substantially fewer steps than GnnSLS and WalkSAT to solve the respective problems.
Our algorithm performs better than WalkSAT because it optimizes the variable scoring and the noise parameter to the particular distribution of SAT problems. Our technique is also better than GnnSLS because of the following two reasons. First, we speculate that GnnSLS underfits the problem. The SAT encoding and the model used by GnnSLS are more sophisticated but also much more complex than our approach. It is not possible to directly train the GnnSLS algorithm with problems that have a few variables (e.g. 50 variables). To get the GnnSLS encoding to work well, smarter training and more data are needed. Second, our approach uses time-dependent variables (the last time a variable has been flipped), which GnnSLS is unable to encode.
Generalization to larger instances. In Table <ref>, we compare the performance of LearnWSAT trained on data sets of different sizes to assess how well the algorithm generalizes to larger instances after having been trained on smaller ones. We consider random 3-SAT instances of different sizes, rand_3(n, m). As in Table <ref>, we consider three metrics: median number of flips (m-flips), average number of flips (a-flips), and percentage solved (solved). The second column reports the performance of LearnWSAT (indicated LWSAT for brevity) on instances of different sizes when the algorithm is trained on rand_3(50, 213) only. In the third column, for comparison, we report the performance of LearnWSAT when it is trained and evaluated on instances of the same size. The fourth column reports the performance of GnnSLS when the algorithm is trained on rand_3(50, 213) only. Finally, the last column reports the WalkSAT (indicated WSAT) baseline.
The table shows that our model trained on rand_3(50, 213) performs similarly to or better than the models trained on the larger instances themselves. Training becomes much more expensive as a function of the size of the formula, but this result suggests that we can train on smaller formulas of the same distribution. GnnSLS trained on smaller instances can also be evaluated on larger problems of the same distribution, but the results seem to degrade as the formulas get larger.
Table <ref> shows results on instances that are harder than the ones shown before. In particular, Minsat is not able to solve some of the instances of rand_3(500, 2130) and rand_4(200, 1950) in less than ten hours. We generated 100 problems from rand_3(300, 1278), rand_3(500, 2130) and rand_4(200, 1950), respectively.
These instances are generated at the SAT/UNSAT threshold, therefore around 50% of them are supposed to be satisfiable. In the case of rand_4(200, 1950), it seems that a few more are satisfiable since LearnWSAT is able to solve 68% of them.
Noise parameter. In our initial experiments, we learned a noise function that depended on the stagnation parameter δ. After inspecting the function, we noticed that the effect of δ is negligible. In Figure <ref>, we show the learned noise function as used by the algorithm at evaluation time.
We plot the noise function against the iteration until the formula is solved. The stagnation parameter varies per iteration, but these curves show very little variation. We ran experiments in which we fixed p_w to be a constant dependent on the distribution and found that the results are similar to when the noise function depends on δ. In particular, we optimize p_w = 0.5 · Sigmoid(w) by finding a single parameter w per distribution. After these initial experiments, we ran all the others (as they are reported here) with fixed constants. Note that these constants are small compared to typical values used for WalkSAT (p=1/2). This is because our PickVar algorithm (shown in Algorithm <ref>) injects noise by sampling instead of deterministically picking variables as in the original PickVar algorithm of WalkSAT (Algorithm <ref>).
Impact of the discount factor. We ran experiments to understand the dependencies of our results on the value of the discount factor for reinforcement learning. Figure <ref> shows the median flips as a function of the discount factor. The gray area shows the confidence intervals for each curve. We find that various discount factors give similar results.
Impact of the size of training data. Figure <ref> shows the median flips as a function of the size of the training data. The experiment uses formulas from rand_3(50, 213). The plot shows that we need a training size of at least 40 to learn an algorithm that is better than WalkSAT. For optimal results, we need at least 160 formulas. To run the experiments with smaller datasets, we increased the number of warm-up steps from 5 to 50 and the number of epochs from 60 to 200.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we present LearnWSAT, a technique that discovers effective noise parameters and scoring variable functions for WalkSAT-type algorithms. Thanks to them, LearnWSAT uses substantially fewer flips than a WalkSAT baseline, as well as an existing learned SLS-type algorithm, to solve the satisfiability problem. Although we do not focus on optimizing the implementation of LearnWSAT in this paper, our experiments suggest that, when coded efficiently, our technique could compete with state-of-the-art solvers.
Despite improving over algorithms in the literature, we note that a limitation of LearnWSAT is the need to pre-define a set of features. In addition, training is slow for formulas with 150 variables or more. The last limitation is mitigated by the fact that, as we have shown in the experiments, models trained on smaller formulas generalize well to larger ones. Overcoming these limitations is part of our future work.
Finally, we remark that the ideas presented in this work are general and could be adapted to solve other hard combinatorial problems.
|
http://arxiv.org/abs/2307.04039v1 | 20230708195157 | A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers | [
"Guy Blanc",
"Caleb Koch",
"Carmen Strassle",
"Li-Yang Tan"
] | cs.CC | [
"cs.CC",
"cs.DS"
] |
A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers
Guy Blanc, Caleb Koch, Carmen Strassle, Li-Yang Tan
=======================================================================================
We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers.
The ε-approximate junta complexity of a function f is the smallest integer r such that f is ε-close to a function that depends only on r variables. A strong composition theorem states that if f has large ε-approximate junta complexity, then g ∘ f has even larger ε’-approximate junta complexity, even for ε’ ≫ε. We develop a fairly complete understanding of this behavior, proving that the junta complexity of g ∘ f is characterized by that of f along with the multivariate noise sensitivity of g. For the important case of symmetric functions g, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity.
We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics.
§ INTRODUCTION
The growth in the sizes of modern datasets is both a blessing and a curse. These datasets, many of which now come with billions of features, contain a wealth of information that machine learning algorithms seek to tap into. On the other hand, their size stands in the way of the opportunities they present, as many of the algorithms that we would like to run on them simply cannot handle their dimensionality.
Thankfully, for many tasks of interest the vast majority of features are irrelevant. This motivates the design of algorithms that are able to quickly home in on the small number of relevant features, and whose efficiency scales gracefully with the number of such features. Already in the early 1990s Blum <cit.> (see also <cit.>) proposed the clean theoretical challenge of learning an unknown r-junta, a function that depends on r≪ n many of its n variables. Quoting <cit.>, “It is my belief that some of the most central open problems in computational learning theory are, at their core, questions about finding relevant variables.” This is now known simply as the junta problem and is the subject of intensive study <cit.>, having distinguished itself as “the single most important open question in uniform distribution learning" <cit.>.
The premise of the junta problem suggests an even more basic algorithmic problem, that of determining if an unknown function is even an r-junta to begin with. This is the problem of testing juntas, introduced by Fischer, Kindler, Ron, Safra, and Samorodnitsky <cit.> and subsequently studied in numerous works <cit.>. Junta testers are also at the heart of the best known testers for numerous other classes of functions, the key insight being that many functions are well-approximated by small juntas (see <cit.> and Chapter 5 of <cit.> for more on this connection). The surveys by Blais <cit.> give broad overviews of various junta testers and their applications throughout theoretical computer science.
This work. These algorithmic applications motivate the study of approximability by small juntas as a complexity measure. For a function f : ^n → and a distribution 𝒟 over ^n, the ε-approximate junta complexity of f with respect to 𝒟, denoted J_𝒟(f,ε), is the smallest integer r such that f is ε-close to an r-junta. Among the most basic questions one can ask about any complexity measure of functions is how it behaves under composition. In the first part of this paper we develop, from the ground up, a fairly complete understanding of this question for junta complexity. We prove a near-optimal composition theorem (<Ref>) that is built on notions of noise stability, both classical and new. In the second part we draw a general connection (<Ref>) between the type of composition theorem that we prove—a strong composition theorem, which we will soon define—and property testing, showing how they can be used to design the first generic boosters for property testers. Combining our two main contributions yields new implications for junta testing.
§ OUR RESULTS AND TECHNIQUES
§.§ First main result: A strong composition theorem for junta complexity
Composition theorems are statements about hardness amplification: the goal is to understand the extent to which the disjoint composition (g ∘ f)(x) g(f(x^(1)),…,f(x^(k))) is more complex than f itself, and how this depends on intrinsic properties of the combining function g. For approximate measures such has junta complexity, we are furthermore interested in strong composition theorems, statements of the form:
J_𝒟^k(g∘ f, ε_large)≫ J_𝒟(f, ε_small) even for ε_large≫ε_small.
In words, the composed function requires much more resources—in our case, much larger junta approximators—even if one only seeks a much coarser approximation. Strong composition theorems stand in contrast to weak ones that only amplify hardness with respect to one of the two parameters, either resources or approximation quality only. The canonical example in this context is Yao’s XOR lemma <cit.>, which says that if f is mildly hard to approximate with size-s circuits, then XOR∘ f is extremely hard to approximate with size-s’ circuits. A long-recognized downside of this important result, inherent to all known proofs of it <cit.> and its generalizations to arbitrary combining functions <cit.>, is the fact that it is only known to hold for s’ ≪ s, whereas intuitively it should hold even for s’ ≫ s.
Composition theorems, both weak and strong, have been studied for a variety of complexity measures
but appear to have been underexplored for junta complexity. One reason may be that the question appears deceptively simple. Indeed, things are completely straightforward in the zero-error setting, where we have the intuitive identity J(g ∘ f, 0) = J(g,0)· J(f,0). However, we show that the question becomes surprisingly intricate once error is allowed.
§.§.§ Context and motivation: Counterexamples to natural composition theorems
The question proves to be tricky even in the special case where the combining function g is symmetric. We now state a sequence of three seemingly intuitive conjectures for this special case. While false, these conjectures and their counterexamples will motivate and lead us to the statement of our actual composition theorem. (Details and proofs of the counterexamples discussed in this section are given in <Ref>.)
The following notation will be useful for us throughout this paper:
Notation. For a function f : {-1,1}^n→{-1,1}, distribution 𝒟 over {-1,1}^n, and integer r, we write f̃_𝒟,r to denote the best r-junta approximator of f with respect to 𝒟. When 𝒟 is clear from context, we simply write f̃_r.
Conjecture 1. It will be convenient for us to consider composition theorems in their contrapositive form. Suppose we would like to approximate g ∘ f with an R-junta, say with respect to the uniform distribution. If g is a k-variable symmetric function, how would we go about constructing an approximator that achieves the highest accuracy possible? Since g is symmetric, one may be inclined to divide the “junta budget” of R evenly among the k inner functions and conjecture that
g ∘f̃_R/k = g(f̃_R/k,…,f̃_R/k)
achieves the best, or close to the best, accuracy among all R-junta approximators.
However, this is badly false. Let g be the k-variable Majority function and f the n-variable Parity function. For any choice of R satisfying R/k < n (i.e. each inner Parity receiving a budget that falls short of its arity), we have Pr[g∘f̃_R/k≠ g∘ f] = 1/2. This is because it is “all or nothing” when it comes to approximating Parity: no (n-1)-junta can achieve accuracy better than that of a constant approximator. The best strategy is therefore to allocate a full budget of n to as many of the inner Parities as possible (i.e. R/n many of them), and a budget of zero to the others. This shows a gap of 1/2 versus 1-o(1) in the accuracies of the “divide budget equally” strategy and the optimal one.
Conjecture 2. In light of this counterexample, one may then conjecture that the best strategy is to partition the junta budget optimally among the k inner functions and feed the respective approximators of f into g. That is, the conjecture is that the best approximator is of the form:
g(f̃_r_1,…,f̃_r_k) where ∑_i=1^k r_i = R.
While this is true for our example above, it is again badly false in general. In fact, the error of such an approximator can be close to 1, even worse than the trivial bound of ≤1/2 achievable with a constant approximator.
Our counterexample reveals another counterintuitive aspect of the overall problem. Consider an approximator for g∘ f of the form g(f̃_r_1,…,f̃_r_k). We show its approximation accuracy can increase if we replace one of the inner approximators for f with a worse one: e.g. if we replace f̃_r_1 with f̃_r_1’ where r_1’ < r_1. In more technical terms that we will soon define: while the noise stability of a function is, as one would expect, monotone in the noise rate, we show that the natural generalization of it where the corruption probabilities of 0’s and 1’s are decoupled (defined in <Ref>) is not monotone.
Conjecture 3. Finally, we consider a conjecture that is far laxer than either of the previous ones. It simply states that the optimal approximator for the composed function g∘ f is one of composed form:
h(q^(1),…,q^(k)) for some h : ^k → and q^(1),…,q^(k) : ^n →,
where the relevant variables of q^(i) fall within the ith block of variables.
We show (to our own surprise) that this conjecture is still false: there are composed functions for which the optimal approximator is not of composed form. However, unlike the first two conjectures, our work shows that this conjecture is morally true in a precise sense.
§.§.§ Our Strong Composition Theorem
Our strong composition theorem implies a close quantitative relationship between the error of the optimal approximator and that of the optimal composed form approximator, and indeed one with a specific structure that we call canonical:
We say that a composed form approximator for g∘ f is canonical if it is of the form:
h(f̃_r_1,…,f̃_r_k),
where h : ^k→ is the function:
h(y) = (_∼𝒟^k[ (g∘ f)()|y_i = f̃_r_i(^(i)) for all i∈ [k]]).
For intuition regarding the choice of h, we note that for the fixed k-tuple of functions f̃_r_1,…,f̃_r_k, it is the combining function that minimizes error with respect to g∘ f.
Canonical composed form approximators are therefore ones whose individual components are “locally" optimal: each f̃_r_i is the optimal r_i-junta approximator for f, and h the optimal way of combining the f_r_i's. Our strong composition theorem will say that we can get very close to the globally optimal approximator this way.
The notion of noise stability is central to our work:
For any μ∈ (-1,1) and vector ρ⃗∈ [0,1]^k, we define the multivariate noise stability of g as
_μ,ρ⃗(g) = E[g(𝐲)g(𝐲')]
where independently for each i ∈ [k], we draw the pair (𝐲_i, 𝐲'_i) as follows: using π_μ to denote the unique distribution supported on {-1,1} with mean μ, 𝐲_i ∼π_μ, and
𝐲'_i = 𝐲_i with probability ρ⃗_i, and is an independent draw from π_μ with probability 1 - ρ⃗_i.
When μ = 0 we simply write _ρ⃗(g).
This definition allows for a different noise rate for each coordinate, generalizing the more commonly studied definition where the noise rates are the same for every coordinate (see e.g. Chapter 2 of <cit.>). We use the terms multivariate noise stability and univariate noise stability to distinguish these definitions. Even in the case of symmetric combining functions g, our strong composition theorem will naturally involve its multivariate noise stability (necessarily so, as already suggested by the counterexample to Conjecture 1).
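For concreteness, the definition can be estimated directly by sampling. The sketch below is our own illustration (the choice of g, μ, and ρ⃗ is arbitrary): it draws 𝐲 ∼ (π_μ)^k and its ρ⃗-correlated copy exactly as in the definition and averages g(𝐲)g(𝐲').

import numpy as np

def multivariate_noise_stability(g, mu, rho, trials=200_000, seed=0):
    """Monte Carlo estimate of Stab_{mu,rho}(g).

    g is applied to a (trials, k) array of rows. For each coordinate i:
    y_i ~ pi_mu, and y'_i = y_i with prob rho[i], otherwise a fresh draw from pi_mu.
    """
    rng = np.random.default_rng(seed)
    k = len(rho)
    p_one = (1 + mu) / 2
    y = np.where(rng.random((trials, k)) < p_one, 1, -1)
    fresh = np.where(rng.random((trials, k)) < p_one, 1, -1)
    keep = rng.random((trials, k)) < np.asarray(rho)
    y_noisy = np.where(keep, y, fresh)
    return np.mean(g(y) * g(y_noisy))

# Example: Majority on 5 bits with a different correlation per coordinate.
maj = lambda y: np.sign(y.sum(axis=1))
print(multivariate_noise_stability(maj, mu=0.0, rho=[0.9, 0.9, 0.9, 0.2, 0.2]))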
We present our strong composition theorem as a sequence of two parts that each carries a standalone message, the first of which formalizes the fact that the optimal canonical composed form approximator is a good proxy for the actual optimal approximator. It will be more convenient for us to state our results in terms of advantage instead of error, the two quantities being related via the identity advantage = 1-2·error. Also, for notational clarity we only state here the special case where f is balanced (i.e. _𝒟[f] = 0).
[colback = white,arc=1mm, boxrule=0.25mm]
Let f : ^n→ and g:^k → be arbitrary functions and 𝒟 be any distribution over ^n. Assume that _𝒟[f]=0. For the task of approximating g ∘ f under 𝒟^k with an R-junta, there is a correlation vector ρ⃗∈ [0,1]^k such that
_ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator
≤Advantage of optimal approximator≤√(_ρ⃗(g)).
For most applications of composition theorems, including those in this paper, the parameters of interest are such that the quartic gap between the upper and lower bounds above are inconsequential. (In particular, if the advantage of the optimal canonical composed form approximator diminishes to 0 as k grows, our bounds imply that the same is true for the actual optimal approximator. Indeed, the two rates of convergence are the same up to a polynomial factor.)
Part II of <Ref> elaborates on the correlation vector ρ⃗, showing how it is determined by the junta complexity of f and the noise stability of g:
[colback = white,arc=1mm, boxrule=0.25mm]
Theorem 1 (Part II: Explicit description of ρ⃗). The correlation vector ρ⃗∈ [0,1]^k in Part I is the vector that maximizes _ρ⃗(g), subject to the constraint:
ρ⃗_i = _𝒟[f·f̃_r_i] for all i∈ [k] where ∑_i=1^k r_i = R.
Taken together, the two parts of <Ref> show that the junta complexity of g∘ f is tightly characterized by the junta complexity of f and the multivariate noise stability of g. It furthermore gives a simple and explicit strategy for constructing a near-optimal approximator: first partition the junta budget optimally among the k inner functions; next approximate each inner function optimally with its allocated budget; and finally combine these approximators in the optimal way.
Naturally, it would be preferable to understand the strategy for constructing the actual optimal approximator, but our counterexamples suggest that it defies a clean and interpretable description even for symmetric g (indeed, even for g being the And function).
Corollary: Highly noise sensitive functions strongly amplify junta complexity. <Ref> yields a hardness amplification statement of the form <ref> in the following way. Suppose f is mildly hard for r-juntas, i.e. Pr[f̃_r ≠ f] ≥ε_small. Our goal is to show that g ∘ f is extremely hard for R-juntas, Pr[(g∘ f)_R ≠ g∘ f] ≥ε_large ≫ε_small, even for R ≫ r. For any partition of R = ∑_i=1^k r_i, at most a 0.999-fraction of the r_i's exceed 1.01R/k. <Ref> therefore tells us that the advantage of the optimal R-junta is upper bounded by
√(_ρ⃗(g)) where at least a 0.001-fraction of ρ⃗'s coordinates are at most 1-2·ε_small.
(Equivalently, at least a 0.001-fraction of coordinates receive at least an ε_small amount of noise.)
This motivates the following definition:
The (δ,ε)-noise stability of a function g:^k→ is the quantity
max{_ρ⃗(g) : at least a δ-fraction of ρ⃗'s coordinates are at most 1-2ε}.
By the monotonicity of noise stability, this maximum is achieved by a ρ⃗ with exactly a δ-fraction of coordinates being exactly 1-2ε, and the remaining (1-δ)-fraction being 1.
We have sketched the following corollary of <Ref>:
Let g : ^k → be a function whose (1/2,ε_small)-noise stability is at most τ. Then for all functions f,
J_𝒟^k(g∘ f, 1/2(1-√(τ))) ≥ 0.99k·J_𝒟(f,ε_small).
In words, g ∘ f requires much larger junta approximators, an Ω(k) multiplicative factor more, even if we allow much larger error: ε_large = 1/2(1- √(τ)) instead of ε_small. As two extreme examples of combining functions g,
∘ The (0.001,ε_small)-noise stability of the k-variable Parity function is (1-2·ε_small)^Ω(k), making it an excellent amplifier of junta complexity.
∘ The (0.001,ε_small)-noise stability of a dictator function g(x) = x_i is 1, making it a terrible amplifier of junta complexity as one would expect: if g is a dictator function then g∘ f ≡ f is of course no more complex than f itself.
The partial-noise stability of these two specific examples is straightforward to compute, but the calculations quickly become unwieldy even for other basic functions. In addition to being a quantity of independent technical interest, the upcoming connections between strong composition theorems and the boosting of property testers will also motivate understanding the partial-noise stability of broad classes of functions beyond just parity and dictator. (Roughly speaking, to boost testers for a property 𝒫 we need to analyze a function g such that 𝒫 is closed under g.)
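The two extreme cases are easy to verify by brute force at small k. The sketch below is our own illustration (δ = 1/2 and ε = 0.1 are arbitrary choices): it computes Stab_ρ⃗(g) exactly by enumeration and checks that the placement of the noisy coordinates is irrelevant for Parity, giving (1-2ε)^{δk}, while a dictator whose coordinate avoids the noise keeps stability 1.

import numpy as np
from itertools import product

def stab(g, rho):
    """Exact Stab_{0,rho}(g) for small k, by enumerating all pairs (y, y')."""
    k = len(rho)
    total = 0.0
    for y in product([-1, 1], repeat=k):
        for y2 in product([-1, 1], repeat=k):
            p = 1.0
            for r, u, v in zip(rho, y, y2):
                # y'_i = y_i w.p. r + (1-r)/2, flipped w.p. (1-r)/2
                p *= r + (1 - r) * 0.5 if u == v else (1 - r) * 0.5
            total += p * g(y) * g(y2) / 2 ** k
    return total

k, delta, eps = 6, 0.5, 0.1
noisy = int(delta * k)
rho = [1 - 2 * eps] * noisy + [1.0] * (k - noisy)   # noise placed on the first coords
parity = lambda y: np.prod(y)
dictator = lambda y: y[-1]                          # dictator coordinate is noiseless
print(stab(parity, rho), (1 - 2 * eps) ** noisy)    # equal: placement is irrelevant
print(stab(dictator, rho))                          # = 1.0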
Our next result is a general technique that yields sharp bounds on the partial-noise stability, and more generally the multivariate noise stability, of all symmetric functions.
The multivariate noise stability of symmetric functions. For a symmetric function g : ^k → one intuits that its multivariate noise stability at a vector ρ⃗∈ [0,1]^k should be related to its univariate noise stability at a value ρ^⋆∈ [0,1] that is an "average" of the coordinates of ρ⃗. (This is certainly not true for general functions; consider for example the dictator function.) Using techniques from the study of negative association, we formalize this intuition and prove that indeed it is sandwiched by the arithmetic and geometric means of the coordinates of ρ⃗:
Let g : ^k→ be a symmetric function, μ∈ (-1,1), and ρ⃗∈ [0,1]^k. Define
ρ_GM ≜ (∏_i ∈ [k]ρ⃗_i)^1/k and ρ_AM ≜ (1/k)∑_i ∈ [k]ρ⃗_i.
Then
_μ,ρ_GM(g) ≤_μ,ρ⃗(g) ≤_μ,ρ_AM(g).
Furthermore, the lower bound holds under the weaker assumption that g is transitive.
The more “reasonable" ρ⃗ is, the closer the upper and lower bounds of <Ref> are. In particular, we get the following bound on the (δ,)-noise stability of symmetric functions:
For any symmetric function g:^k →, δ∈ (0,1), and ε∈ (0,1/2), the (δ, ε)-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆∈ [0,1] satisfying
1 - 2εδ - O(ε^2) ≤ρ^⋆≤ 1 - 2εδ.
Recall that ε corresponds to the initial inapproximability factor ε_small in <Ref>, and so the additive gap of O(ε^2) between the upper and lower bounds is indeed small for our intended application.
§.§ Second main result: Composition theorems and boosting of property testers
Composition theorems are most naturally thought of as statements about hardness amplification, and indeed that is how they are most commonly used. As our second main contribution, we show how they can be used fruitfully in their contrapositive form as meta-algorithms. In more detail, we show how they can be used to generically boost the performance guarantees of property testers. While boosting is a story of success in both the theory and practice of machine learning, to our knowledge the analogous concept in property testing has not yet been considered. The connection that we draw can be instantiated with either strong or weak composition theorems, but as we now see, the parameters are qualitatively better in case of strong composition theorems.
Within property testing, a major strand of research, initiated by Parnas, Ron, and Samorodnitsky <cit.>, concerns testing whether an unknown function has a concise representation. Consider any parameterized property 𝒫 = {𝒫_s}_s ∈ℕ of boolean functions: size-s parities, size-s juntas, size-s decision trees, s-sparse polynomials over various fields, and so on. The task is as follows:
Given queries to an unknown function f : ^n→, access to i.i.d. draws from a distribution 𝒟, and parameters s,s'∈ℕ and ε > 0, distinguish between:
∘ Yes: f ∈𝒫_s
∘ No: f is ε-far under 𝒟 from every function in 𝒫_s'.
Note that the task is more challenging as ε gets smaller, and as the gap between s and s' gets smaller. We show how a composition theorem for 𝒫 allows one to trade off these two parameters: a tester for large ε can be upgraded into one for small ε, at the price of larger gap between s and s'. The stronger the composition theorem, the more favorable this tradeoff is, and with an optimally strong composition theorem one is able to improve the ε-dependence without any associated price in the multiplicative gap between s and s':
[colback = white,arc=1mm, boxrule=0.25mm]
Let 𝒫 = {𝒫_s }_s∈ℕ be a property and g : ^k→ be such that 𝒫 behaves linearly w.r.t. g. Suppose that 𝒫 admits an (ε_small, ε_large,λ)-composition theorem w.r.t. g. Then any (ε_large,ks,λ ks')-tester for 𝒫 can be converted into an (ε_small, s,s')-tester for 𝒫.
We defer the precise definitions of the terms "(ε_small,ε_large,λ)-composition theorem" and "behaves linearly" to the body of the paper, mentioning for now that λ∈ [0,1] measures the strength of the composition theorem: such a theorem says that the composed function requires λ k times more resources to achieve ε_large error than the original function requires to achieve ε_small error. Therefore λ = 1/k can be viewed as the threshold separating weak and strong composition theorems, with λ = 1 corresponding to an optimally strong one. (<Ref>, for example, achieves λ = 0.99.) Note that if λ = 1 in <Ref>, then an (ε_large,s,s)-tester for all s yields an (ε_small,s,s)-tester for all s.
The formal version of <Ref> will also show that it upgrades uniform-distribution testers to strong uniform-distribution testers, and distribution-free testers to strong distribution-free testers. This stands in contrast to standard boosting in learning which can only upgrade distribution-free learners.
§.§.§ Example applications of <Ref>: New implications for junta testing
As mentioned in the introduction, juntas are among the most basic and intensively-studied function classes in property testing. Owing to two decades of research, the complexity of testing juntas in the non-tolerant setting is now fairly well-understood: we have highly-efficient adaptive <cit.>, non-adaptive <cit.>, and distribution-free testers <cit.>, all of them achieving query complexities that are essentially optimal <cit.>.
The picture is much less clear in the more challenging tolerant setting. For the uniform distribution, the best known testers require exponentially many queries <cit.>, and there are no known distribution-free testers. By generalizing <Ref> to the tolerant setting and instantiating it with our strong composition theorem for juntas, we obtain new implications, both positive and negative, that help clarify this picture.
Positive implication: boosting of tolerant junta testers. First, any tolerant junta tester for large distance parameter can now be converted into one for small distance parameters, at the price of a slight gap in the junta sizes of the Yes and No cases. For example, for both the uniform and distribution-free settings we get:
Suppose we have a (r)-query tester that distinguishes between
∘ Yes: f is 1/4-close to an r-junta
∘ No: f is 1/3-far from every r-junta.
Then for every ε > 0 we have a (r/ε)-query tester that distinguishes between
∘ Yes: f is ε-close to an r-junta
∘ No: f is Ω(ε)-far from every 1.001r-junta.
The resulting gap between the junta sizes of the Yes and No cases, while mild, is admittedly not ideal. As alluded to above, this stems from the fact that the “strength parameter" of <Ref> is λ = 0.99 and not λ = 1. Designing boosters that do not incur this gap, either via an optimally strong composition theorem or otherwise, is a natural avenue for future work.
On the other hand, we now show that even with this gap, <Ref> already carries with it an interesting consequence. This consequence crucially relies on our composition theorem for juntas being strong; the proof would not have gone through had the strength parameter of <Ref> only been λ = 1/k.
Negative implication: NP-hardness in the distribution-free setting. This implication concerns the time rather than query complexity of testers. The same proof of <Ref> also converts a poly(r,n)-time tester into a poly(r,1/ε,n)-time tester. Implicit in the work of Hancock, Jiang, Li, and Tromp <cit.> is an NP-hardness result for tolerantly testing juntas in the distribution-free setting. One downside of their result is that it only holds in the regime of ε = 1/poly(n). Applying the time-analogue of <Ref>, we lift this hardness up to the standard regime of constant ε:
The following task is NP-hard under randomized reductions. Given queries to a function f : ^n→, access to i.i.d. draws from a distribution 𝒟, and parameters r∈ℕ and ε > 0, distinguish between:
∘ Yes: f is 1/4-close under 𝒟 to an r-junta;
∘ No: f is 1/3-far under 𝒟 from every r-junta.
This implies a fairly dramatic separation between the non-tolerant versus tolerant versions of the problem. The recent (r)-query non-tolerant testers <cit.> are also time efficient, running in (r,n) time. <Ref> shows that any tolerant tester, regardless of query efficiency, must have time complexity that is as bad as that of SAT: e.g. if SAT requires randomized exponential time, then so does any tolerant tester.
In fact, our actual result is stronger than as stated in <Ref>: we prove that the task is NP-hard even if the Yes case states that f is 0-close under 𝒟 to an r-junta. We therefore show that the testers of <cit.> are quite fragile in the sense that they break if the Yes case in the definition of non-tolerant testing is changed from “f is an r-junta" to “f is 0-close under 𝒟 to an r-junta".
§ OTHER RELATED WORK
O'Donnell's generalization of Yao's XOR lemma.
Yao's XOR lemma states that if f is ε-hard against circuits of size s, meaning every size-s circuit differs from f on at least an ε-fraction of inputs, then no size-s' circuit agrees with XOR_k∘ f on more than a 1/2 + 1/2(1-2ε)^k + δ fraction of inputs, where
s'= Θ(δ^2/log(1/ε))· s.
The (1-2ε)^k term in the resulting inapproximability factor agrees precisely with the (univariate) noise stability of XOR_k at ρ = 1-2ε. In <cit.> O'Donnell showed that this is no coincidence. He proved a far-reaching generalization of Yao's XOR lemma that allows for an arbitrary combining function g : ^k → instead of XOR, and showed that the resulting inapproximability of g∘ f is given by the "expected bias" of g, a quantity that is closely related to the (univariate) noise stability of g.
Like Yao's XOR lemma, <cit.>'s composition theorem is weak in the sense that the hardness of g∘ f only holds against size s' circuits where s' ≪ s. (In fact, <cit.> incurs an additional multiplicative loss of k in the resulting circuit size.) Our composition theorem concerns a different resource, juntas instead of circuits, and as emphasized in the introduction, our main focus is on proving a composition theorem that is strong in the sense of amplifying both the amount of resource required and the inapproximability factor.
Both our work and <cit.> utilize Fourier analysis in our proofs, which is to be expected given the centrality of noise stability to both works. That aside, our overall approach and techniques are entirely different from <cit.>'s—necessarily so, as we elaborate next.
Hardness amplification via boosting.
In <cit.> Klivans and Servedio observed that most known hardness amplification results are proved via a boosting-type argument. For example, for Yao's XOR lemma and <cit.>'s generalization of it, one proceeds by contradiction: one assumes that XOR_k∘ f can be mildly approximated by a size-s' circuit C (in the language of boosting, C is a weak hypothesis for XOR_k ∘ f), and one constructs a larger circuit C^⋆ of size s that well-approximates f (i.e. C^⋆ is a strong hypothesis for f). In boosting, the strong hypothesis is built out of many weak hypotheses; likewise, in Yao's XOR lemma the size-s circuit C^⋆ is built out of many size-s' circuits that are like C. The work of <cit.> formalizes this connection.
From this perspective, it becomes clear why such approaches are fundamentally limited to weak composition theorems where s' ≪ s. Strong composition theorems therefore necessitate a different tack, and indeed our proof proceeds via the forward implication instead of the contrapositive: we reason directly about the inapproximability of g∘ f under the assumption about the inapproximability of f. Somewhat ironically, our second main contribution is then an application of strong composition theorems to the boosting of property testers, which goes in the opposite direction to <cit.>'s “Boosting ⇒ Hardness Amplification" observation above.
Independent work of Chen and Patel <cit.>. A recent work of Chen and Patel also gives new lower bounds for tolerant junta testing. For the problem of testing whether an unknown function is ε_1-close to or ε_2-far from a k-junta under the uniform distribution, they prove a query lower bound of k^Ω(log(1/(ε_2-ε_1))), which is superpolynomial when the gap ε_2-ε_1 is subconstant. This yields the first superpolynomial query complexity separation between tolerant and non-tolerant testing for a natural property of boolean functions.
Their result is incomparable to <Ref> in several respects. We give a time lower bound when the gap ε_2-ε_1 is a fixed constant in the distribution-free setting. Being an NP-hardness result, our lower bound is conditional whereas theirs is unconditional.
§ DISCUSSION AND FUTURE WORK
Complexity measures can behave in highly counterintuitive ways under composition, which makes composition theorems, and strong composition theorems in particular, tricky to prove.
A motivating goal of this work is to develop an understanding of strong composition theorems from first principles, and hence our focus on junta complexity, perhaps the most basic complexity measure of a function. We are optimistic that our techniques can apply to other measures, though we believe that as in this work, much of the challenge will lie in first figuring out the right statement to prove.
Consider for example decision tree complexity, a natural next step from junta complexity. There are existing strong XOR lemmas for decision tree complexity, but they come with limitations and do not appear to be the final word. (Briefly, the XOR lemma of <cit.> is only strong when the initial inapproximability factor _small is at least a constant, and the strong XOR lemma of <cit.> only holds for decision trees that are allowed to “abort".) Indeed, Shaltiel <cit.> has shown that certain hoped-for strong XOR lemmas for decision tree complexity are false, though as he remarked, his counterexample “seems to exploit defects in the formation of the problem rather than show that our general intuition for direct product assertions is false". We hope that our results, and specifically the new connections to various notions of noise stability, can serve as a guide to the right statement for decision tree complexity and other measures.
As for our second main result, the general connection between strong composition theorems and the boosting of property testers, we believe that it adds compelling algorithmic motivation to the study of composition theorems, a topic traditionally considered to be mostly of complexity-theoretic interest. Likewise, we hope that our work spurs future research on this new notion of boosting for property testers, a notion that we believe is of interest independent of the connections to composition theorems. For example, an ambitious goal for future work is to broadly understand when and how a tester for constant distance parameter can be automatically upgraded into one with the optimal -dependence, as well as the associated costs of such a transformation.
§ PRELIMINARIES
Distributions and random variables. We use bold font (e.g. 𝐱) to denote random variables.
For any set S, we use 𝐱∼ S as shorthand for 𝐱∼Unif(S) where Unif(·) denotes the uniform distribution. Of particular importance to this work will be μ-biased distributions over the Boolean hypercube.
For any μ∈ (-1,1), we use π_μ to denote the unique distribution over {-1,1} with mean μ. Formally, a draw 𝐛∼π_μ satisfies
𝐛 = 1 with probability (1 + μ)/2 and 𝐛 = -1 with probability (1 - μ)/2.
Similarly, for ν∈ [-1,1]^k, we use π_ν to denote the product distribution π_ν_1×⋯×π_ν_k.
Fix some bias μ∈ (-1,1). For any ρ⃗∈ [0,1]^k and y ∈^k, we write 𝐲∼_ρ⃗ y to denote that for each i ∈ [k], 𝐲_i is independently drawn as
𝐲_i = y_i with probability ρ⃗_i, and 𝐲_i is a fresh draw from π_μ with probability 1 - ρ⃗_i.
Whenever we use the above notation, the choice of μ will be clear from context. This gives the following more succinct way to express <Ref>, defining multivariate noise stability,
_μ,ρ⃗(g) ≜ E_𝐲∼ (π_μ)^k, 𝐲'∼_ρ⃗𝐲[g(𝐲)g(𝐲')].
Some useful sets. For any integers a ≤ b, we use [a,b] as shorthand for the set {a, a+1, …, b}. Similarly, for b ≥ 1, we use [b] as shorthand for the set [1,b]. For any set S and ℓ≤ |S|, we use Sℓ to denote all subsets of S with cardinality ℓ.
Junta complexity. For any function f: ^n →, and S ⊆ [n], we say that f is an S-junta if for all x,y ∈^n for which x_i = y_i whenever i ∈ S it holds that f(x) = f(y). With a slight abuse of notation, when r ∈ [n] is an integer, we say that f is an r-junta if there is a set S with |S| ≤ r for which f is an S-junta.
Advantage.
For any functions f, g:^n → and distribution over ^n, we define
_(f,g) _∼[f() g()].
With a slight abuse of notation, we define for f:^n → and S ⊆ [n],
_(f,S) max_S-junta g:^n →_(f,g).
Similarly, for r ∈ [n],
_(f,r) max_r-junta g:^n →_(f,g).
When the base distribution is clear, we will drop it from our notation. Furthermore, for any function f:^n → and S ⊆ [n] or r ∈ [n], we use f̃_S and f̃_r to denote the S-junta and r-junta respectively maximizing the above two advantages.
Function composition.
For a function f: ^n →, its direct product f^⊗ k:*^n^k→^k is defined as
f^⊗ k(x^(1), …, x^(k)) = (f(x^(1)), …, f(x^(k))).
For any g:^k →, we use g ∘ f:*^n^k→ as shorthand for g∘ f^⊗ k, meaning,
(g∘ f)(x^(1), …, x^(k)) = g(f(x^(1)), …, f(x^(k))).
Vector powers. For any vector v ∈^k and set S ⊆ [k], we'll use the notation v^S as shorthand for
v^S ∏_i ∈ S v_i.
§.§ Fourier Analysis
Our proof of <Ref> will make heavy use of Fourier analysis over the μ-biased hypercube, (π_μ)^k. In this section, we will review relevant definitions and facts. A more complete exposition is given in <cit.>.
For any μ∈ (-1,1), we define ϕ_μ(x) x-μ/σ where σ√(1 - μ^2). Every g: ^k → can be uniquely decomposed as
g(y) = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(y_i) where ĝ_μ(S) = _∼ (π_μ)^k*g() ∏_i ∈ Sϕ_μ(_i).
This decomposition has a number of useful properties stemming from the fact that transforming g from its representation as a truth table to its Fourier coefficients ĝ_μ(S) is an orthonormal transformation.
[Basic facts about the Fourier decomposition]
* Plancherel's theorem: For any g, h: ^k → and μ∈ (-1,1),
_∼ (π_μ)^k[g()h()] = ∑_S ⊆ [k]ĝ_μ(S)ĥ_μ(S).
* Parseval's theorem: For any g: ^k → and μ∈ (-1,1),
_∼ (π_μ)^k[g()^2] = ∑_S ⊆ [k]ĝ_μ(S)^2.
In particular, when g has a range of , Parseval's theorem guarantees that the sum of its squared Fourier coefficients is 1. As a result, the following distribution is well defined.
For any g: ^k → and bias μ∈ (-1,1), the spectral sample of g, denoted _μ(g), is the probability distribution over subsets of [k] in which the set S has probability ĝ_μ(S)^2.
The Fourier decomposition gives a concise way to represent important quantities, as in the following results.
For any μ∈ (-1,1) and ∈ [0,1]^k, _μ, can be related to g's μ-biased Fourier decomposition as,
_μ, (g) = ∑_S ⊆ [k]ĝ(S)^2 ^S = _∼_μ(g)[()^].
We define g^()(y) _ y[g()]. Then, by Plancherel's theorem,
_μ, (g) = _∼ (π_μ)^k[g() g^()()] = ∑_S ⊆ [k]g_μ(S) g^()_μ(S).
Next, we compute the Fourier decomposition of g^().
g^()_μ(S) = _∼ (π_μ)^k*g^()() ∏_i ∈ Sϕ_μ(_i)
= _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)
= _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)(,) distributed identically to (, )
= _∼ (π_μ)^k*g() ·_*∏_i ∈ Sϕ_μ(_i).
Applying the independence of _1, …, _k conditioned on and that [ϕ_μ(_i)] = _i ϕ_μ(_i),
g^()_μ(S) = _∼ (π_μ)^k*g() ·∏_i ∈ S_i ϕ_μ(_i)
= ()^S ·_∼ (π_μ)^k*g() ·∏_i ∈ Sϕ_μ(_i) = ()^S g_μ(S).
Putting the above together,
_μ, (g) = ∑_S ⊆ [k]ĝ_μ(S)^2 ()^S.
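The identity above is convenient computationally: for small k one can tabulate the μ-biased Fourier coefficients by enumeration and evaluate Stab_{μ,ρ⃗}(g) as a weighted sum. A minimal Python sketch of this (our own; the choice of g, μ, and ρ⃗ is arbitrary, and the result can be cross-checked against the Monte Carlo sampler given earlier):

import numpy as np
from itertools import product, combinations

def fourier_coeffs(g, k, mu):
    """mu-biased Fourier coefficients of g on {-1,1}^k, by enumeration."""
    sigma = np.sqrt(1 - mu ** 2)
    phi = lambda x: (x - mu) / sigma
    points = list(product([-1, 1], repeat=k))
    probs = [np.prod([(1 + mu) / 2 if b == 1 else (1 - mu) / 2 for b in y]) for y in points]
    coeffs = {}
    for r in range(k + 1):
        for S in combinations(range(k), r):
            coeffs[S] = sum(p * g(y) * np.prod([phi(y[i]) for i in S])
                            for y, p in zip(points, probs))
    return coeffs

def stab_from_fourier(g, k, mu, rho):
    # Stab_{mu,rho}(g) = sum_S ĝ_mu(S)^2 * prod_{i in S} rho_i
    return sum(c ** 2 * np.prod([rho[i] for i in S])
               for S, c in fourier_coeffs(g, k, mu).items())

maj = lambda y: 1 if sum(y) > 0 else -1
print(stab_from_fourier(maj, k=3, mu=0.2, rho=[0.9, 0.5, 0.1]))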
One immediate corollary of the above is that multivariate noise stability is monotone.
For any μ∈ (-1,1), g:^k →, and , ρ⃗'⃗∈ [0,1]^k satisfying _i ≤ρ⃗'⃗_i for all i ∈ [k],
_μ, (g) ≤_μ, ρ⃗'⃗(g).
Recall that for any ν∈ [-1,1]^k, the distribution π_ν is the unique product distribution supported on ^k with mean ν. The Fourier decomposition of g also gives a useful way to compute _∼π_ν[g()].
For any g: ^k →, μ∈ (-1,1), and ν∈ [-1,1]^k,
_∼π_ν[g()] = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(ν_i).
We expand g into its Fourier decomposition
[g()] = ∑_S ⊆ [k]ĝ_μ(S) *∏_i ∈ Sϕ_μ(_i)Linearity of expectation
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*ϕ_μ(_i)_1, …, _k are independent
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*_i - μ/σDefinition of ϕ_μ
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ Sϕ_μ(ν_i). Linearity of expectation
§ A STRONG COMPOSITION THEOREM FOR JUNTAS
In this section, we characterize the junta size required to approximate g ∘ f in terms of the multivariate noise stability of g, and the junta size required to approximate f.
For any g: ^k →, f: ^n → and base distribution over ^n, let μ = _∼[f()].
* Lower bound on advantage: For any approximators q^(1), …, q^(k): ^n →, define the lower normalized correlations, for each i ∈ [k] as
α_i max*0, _(f, q^(i))^2 - μ^2/1 - μ^2.
Then, there is an h:^k → for which
_^k(g∘ f, h (q^(1), …, q^(k))) ≥_μ, α(g).
* Upper bound on advantage: For any S_1,…, S_k, define the upper normalized correlation as
β_i max*0,_(f, S_i) - μ^2/1 - μ^2,
construct S ⊆ [n] × [k] by taking S_1 from the first block, S_2 from the second block, and so on (formally S ∪_i ∈ [k], j ∈ S_i{(j,i)}). Then,
_^k(g∘ f, S) ≤√(_μ, β(g)).
Our goal is to understand the error of the best R-junta approximating g ∘ f. <Ref> says that for any way to partition R = r_1 + ⋯ + r_k, the approximator h (f̃_r_1, …, f̃_r_k) achieves nearly optimal advantage across all R-juntas that partition their budget this way. Of course, by maximizing both sides across all partitions, we can conclude that there is some partitioning and function h for which h (f̃_r_1, …, f̃_r_k) has nearly optimal advantage among all R-juntas. Indeed, as a simple corollary of <Ref>, we can show that the error of the optimal canonical composed form approximator is within a factor of 4 of the optimal approximator. Recall that the error of q_1 relative to q_2 is Pr_𝐱∼𝒟[q_1(𝐱) ≠ q_2(𝐱)], and that it is related to advantage via the identity advantage = 1 - 2·error.
For any g: ^k →, f:^n →, junta budget R, and base distribution 𝒟, there is an h:^k → and a partition of the budget r_1 + ⋯ + r_k = R for which
_^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤ 4 ·_^k(g∘ f, R).
When μ = 0, the guarantee of <Ref> can further be given in the concise form of <Ref>: For an appropriately chosen ∈ [0,1]^k,
_ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator
≤Advantage of optimal approximator≤√(_ρ⃗(g)).
We include the proofs of <Ref> and <Ref> in <Ref>.
§.§ Proof of the lower bound on advantage
In this subsection, we show that (x_1, …, x_k) → h(f̃_r_1(x_1), …, f̃_r_k(x_k)) is close to the best R-junta approximator for g ∘ f. Here, the function h can be different than g, and this is necessary as shown in the counterexample to conjecture 2 in <Ref>.
For any g:^k →, f:^n →, and approximators q^(1), …, q^(k), there is some h:^k → for which
_^k(g∘ f, h ∘ (q^(1), …, q^(k))) ≥_μ, α(g),
where μ = _∼[f()] and for each i ∈ [k],
α_i max*0, (f, q^(i))^2 - μ^2/1 - μ^2.
Note α_i naturally interpolates between 0 and 1. Setting q^(i) to the better of the constant -1 or the constant +1 function will lead to α_i = 0, while setting q^(i) = f gives α_i = 1.
§.§.§ Characterizing the advantage of composed form approximators
To ease notation, we begin with a simpler setting. Suppose we use the same budget, r R/k, in each of the k pieces. Our goal is to understand
max_h:^k →(g∘ f, h∘f̃_r)
in terms of the noise sensitivity of g and (f, f̃_r). To do so, we will consider unbalanced noise stability.
For any x ∈^k, we use the notation x to denote that for each i ∈ [k], _i is independently drawn as
* If x_i = -1, with probability a, we set _i = x_i and otherwise set _i = -x_i
* If x_i = 1, with probability b, we set _i = x_i and otherwise set _i = -x_i.
For any g,h:^k →, μ∈ [-1,1] and a,b ∈ [0,1], we define the unbalanced noise stability as
_μ, (a,b)(g,h) = _∼ (π_μ)^k, [g()h()].
We refer to the above notion as unbalanced because when drawing x, the probability of the i^th coordinate flipping from -1 to 1 and from 1 to -1 may differ. Unbalanced noise stability is useful in our setting due to the following proposition.
For any f, f̃: ^n → and g,h:^k →,
_∼^k[(g ∘ f)() · (h ∘f̃)()] = _μ, (a,b)(g,h),
where
μ_∼[f()],
a _∼[f̃() = -1 | f() = -1],
b _∼[f̃() = 1 | f() = 1].
Draw ∼^k and then define f^⊗ k(), f̃^⊗ k(). Clearly,
_∼^k[(g ∘ f)() · (h ∘f̃)()] = [g() h()].
Furthermore, the distribution of , is equivalent to if we drew ∼ (π_μ)^k,. The above quantity therefore matches the definition of _μ, (a,b)(g,h).
§.§.§ Unbalanced noise stability behaves strangely
The most basic requirement of our approximation for g ∘ f is that it have advantage at least 0, as either the constant -1 or the constant +1 function is guaranteed to have such an advantage. Indeed, in the balanced case, it is well known that the approximation will satisfy this basic requirement even if we take h = g.
For any g:^k → and a ∈ [0,1/2],
_0, (a,a)(g,g) ≥ 0.
However, in the unbalanced case, this basic requirement no longer holds.
For any k ≥ 0, and a,b ∈ [0,1] for which |a-b| ≥ 0.01, there is a function g:^k → for which
_0, (a,b)(g,g) ≤ -(1-2^-Ω(k)).
Without loss of generality, we assume b ≥ a + 0.01. We define
g(x)
1 if ∑_i ∈ [k]x_i ≥ 0.005k,
-1 otherwise.
Draw 𝐲∼ (π_0)^k (i.e., uniformly) and let 𝐲' be its (a,b)-noisy copy as in <Ref>. Then,
E[∑_i ∈ [k]𝐲_i] = 0 and E[∑_i ∈ [k]𝐲'_i] = k(b-a).
Furthermore, a standard application of Hoeffding's inequality implies that
Pr[g(𝐲) = 1] ≤ 2^-Ω(k) and Pr[g(𝐲') = -1] ≤ 2^-Ω(k).
By a union bound, with probability at least 1 - 2^-Ω(k), we have that both g(𝐲) = -1 and g(𝐲') = 1. This implies the desired result.
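The anti-correlation is visible numerically. The sketch below is our own illustration of the same construction, with a larger gap b - a = 0.2 (and correspondingly larger threshold (b-a)k/2) so that the concentration already kicks in at k = 1000.

import numpy as np

rng = np.random.default_rng(0)
k, a, b, trials = 1000, 0.4, 0.6, 2000
threshold = k * (b - a) / 2                    # halfway between the two means
g = lambda y: np.where(y.sum(axis=1) >= threshold, 1, -1)

y = rng.choice([-1, 1], size=(trials, k))      # y ~ uniform, i.e. mu = 0
keep = np.where(y == -1,
                rng.random((trials, k)) < a,   # -1 coordinates kept w.p. a
                rng.random((trials, k)) < b)   # +1 coordinates kept w.p. b
y_noisy = np.where(keep, y, -y)
print(np.mean(g(y) * g(y_noisy)))              # close to -1 (about -0.997 here)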
§.§.§ Unbalanced noise stability behaves well if we use the best h
Surprisingly, we show that if we use the best h, our approximation does meet this most basic requirement. Furthermore, we can relate it to the classical notion of balanced noise stability. The below Lemma directly implies <Ref>.
For any g:^k → and distribution over , each in ^k satisfying,
* The pairs (_1, _1), …, (_k, _k) are independent of one another.
* The means satisfy [_1] = ⋯ = [_k] = μ.
Define the correlations α_1, …, α_k as
α_i max*0,[_i _i]^2 - μ^2/1 - μ^2.
Then, there is an h:^k → for which
[g()h()] ≥_μ, α(g).
Comparing to <Ref>, if μ = 0, then α_i = max(0,1-a-b) for all i ∈ [k]. Since _μ, α(g) ≥ 0 whenever α≥ 0, <Ref> shows that the phenomenon in <Ref> cannot occur if we use the best approximator h.
The following Lemma will be useful in the proof of <Ref>.
For any function g: ^k →, let _1, …, _k be independent random variables each with mean μ and supported on [-1,1]. Then,
_*_∼π_[g()]^2 = _μ, ([ϕ_μ(_1)^2],
…, [ϕ_μ(_k)^2])(g).
We'll use the μ-biased Fourier expansion of g. Applying <Ref>,
_*_∼π_[g()]^2 = _**∑_S ⊆ [k]ĝ(S) ∏_i ∈ Sϕ_μ(_i)^2
= ∑_S_1, S_2 ⊆ [k]ĝ(S_1)ĝ(S_2)*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i).
We claim that, in the above sum, any term in which S_1 ≠ S_2 is equal to 0. Let S_1 S_2 denote the symmetric difference of S_1 and S_2. Then, due to the independence of _1, …, _k,
*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i) = ∏_i ∈ S_1 ∩ S_2[ϕ_μ(_i)^2] ∏_i ∈ S_1 S_2[ϕ_μ(_i)].
Since the mean of _i is μ, [ϕ_μ(_i)] = ϕ_μ(μ) = 0. If S_1 ≠ S_2, there is at least one element in S_1 S_2, and so the term is 0. We are therefore left with,
_*_∼()[g()]^2 = ∑_S ⊆ [k]ĝ(S)^2∏_i ∈ S*ϕ_μ(_i)^2.
This is exactly the Fourier expansion for the claimed result.
We'll also use the following proposition.
For any random variable bounded on [-1,1] almost surely and with mean μ,
max*0,[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2 .
We expand, using linearity of expectation,
[ϕ_μ()^2] = *( - μ)^2/1 - μ^2 = [ρ^2] - 2μ[] + μ^2/1 - μ^2.
Since [] = μ, we have that [ϕ_μ()^2] = [^2] - μ^2/1 - μ^2. Therefore, by Jensen's inequality,
[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2].
Furthermore, since ^2 ≤,
[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2.
Lastly, [ϕ_μ()^2] ≥ 0 follows from non-negativity.
Finally, we are ready to prove <Ref>.
For any y ∈^n, we define
g_(y) = [g() | = y].
Then, setting h(y) (g_(y)),
[g()h()] = _**g_()≥_**g_()^2.
Note that, conditioning on = y, the distribution of is still product. Let ν(y) be the mean of this distribution, so that
g_(y) = _∼π_ν(y)*g().
By <Ref>,
_**_∼π_ν()*g()^2 = _μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g).
For each i ∈ [k],
[ϕ_μ(ν()_i)^2] ≥max*0,_[ν()_i]^2 - μ^2/1 - μ^2<Ref>
≥max*0,_[_iν()_i]^2 - μ^2/1 - μ^2x≥ cx when c ∈
= max*0,_,[_i_i]^2 - μ^2/1 - μ^2Definition of ν(y)
= α_i.
Putting all of the above together,
[g()h()] ≥_μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g)
≥_μ, ρ(g),
where the final inequality follows from the monotonicity of noise stability.
§.§ Proof of the upper bound on advantage
In this section, we prove the following.
For any g: ^k→, f:^n →, μ_∼[f()], and S_1,…, S_k, define the upper normalized correlation as
β_i _(f, S_i) - μ^2/1 - μ^2.
For S ⊆ [n] × [k] constructed by taking S_1 from the first block, S_2 from the second block, and so on (formally S ≜∪_i ∈ [k], j ∈ S_i{(j,i)}), it holds that
_^k(g∘ f, S) ≤√(_μ, β(g)).
To begin with, we rewrite advantage in the following form.
For any function q: ^m →, distribution over ^m, and S ⊆ [m], define
q_S, ^(x) _∼[q() |_S = x_S],
where y_S = x_S is shorthand for x_i = y_i for all i ∈ S. Then,
_(q, S) = _∼**q_S, ^().
Consider any S-junta h. Then,
_(q, h) = _∼[
q() h()] = _∼*_∼[q() h() |_S = _S].
Since h is an S-junta, it must classify x and y the same whenever x_S = y_S. Therefore,
(q, h) = _∼*h()_∼[q() |_S = _S]
= _∼*h()q^_S,().
To maximize the above advantage among all h, we set h(x) = (q^_S, (x)), in which case
(q, h) = _∼**q^_S, ().
Given <Ref>, to compute _^k(g∘ f, S), it suffices to understand the function (g ∘ f)^_S,. We proceed to transform that function into a form which is easier to understand.
In the setting of <Ref>, for any x ∈ (^n)^k, let ν(x) ∈ [-1,1]^k be the vector where
ν(x)_i _∼^k[f() | x^(i)_S_i = _S_i].
Then,
(g ∘ f)^_S,^k(x) = _∼π_ν(x)[g()].
Consider drawing ∼ (^n)^k conditioned on _S = x_S. Let = f^⊗ k(). By definition,
(g ∘ f)^_S, ^k(x) = [g()].
Therefore, we merely need to show that the distribution of is that of π_ν(x). For this it is sufficient that,
* Each _1, …, _k is independent. This follows from the fact _1, …, _k are independent, and that the restriction that _S = x_S is a disjoint restriction for each of the k components.
* For each i ∈ [k], that [_i] = ν(x)_i. This follows from the definition of ν(x)_i.
The desired result follows from the fact that π_ν(x) is the unique product distribution over ^k with mean ν(x).
We now prove the upper bound.
Let ν be as defined in <Ref>. Applying it and <Ref>,
_^k(g∘ f, S) = _∼^k**_∼π_ν()[g()]≤√(_∼^k**_∼π_ν()[g()]^2).
The inequality above is Jensen's. Consider the random variables ν()_1, …, ν()_k. They have the following two properties.
* They are independent. This is because the value of ν()_i depends on only the value of _i, which is independent of the other _j for j ≠ i.
* They each have mean μ. This is because,
[ν()_i] = *_∼[f() | (^(i))_S_i = y_S_i] = _∼[f()] = μ.
Therefore, we can use <Ref>:
_∼^k**_∼π_ν()[g()]^2 = _μ, ([ϕ_μ(ν()_1)^2],
…, [ϕ_μ(ν()_k)^2])(g).
We can further upper bound,
[ϕ_μ(ν()_i)^2] ≤[ν()_i] - μ^2/1 - μ^2<Ref>
= (f, S_i) - μ^2/1 - μ^2<Ref>
= β_i.
Putting the above together, we have that
_^k(g∘ f, S) ≤√(_μ, β(g)).
§.§ Proofs of the consequences of our strong composition theorem
In this section, we complete the proofs of <Ref> and <Ref>.
For any partition of the junta budget r_1 + ⋯ + r_k = R, let (r_1,…,r_k) be the vector,
(r_1,…,r_k)_i _D(f, r_i).
Then, applying the upper bound on advantage of <Ref> and maximizing over all possible partitions of the budget R, we have that
_^k(g∘ f, R) ≤max_r_1 + ⋯ + r_k = R√(_(r_1, …, r_k)(g)).
This completes the upper bound on the advantage of the optimal R-junta approximator of g ∘ f of <Ref>. For the lower bound on the advantage of the optimal composed form approximator, let r_1, …, r_k be the partition of budget maximizing _(r_1, …, r_k)(g). Using the lower bound of <Ref>, and using (·)^2 to refer to an elementwise squaring of a vector,
_^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≥_(r_1,…,r_k)^2(g).
Using the Fourier expression for stability <Ref>,
_(r_1,…,r_k)^2(g) = E_𝐒∼_μ(g)[((r_1,…,r_k)^2)^𝐒]
= E_𝐒∼_μ(g)[((r_1,…,r_k)^𝐒)^2]
≥ (E_𝐒∼_μ(g)[(r_1,…,r_k)^𝐒])^2    (Jensen's inequality)
= _(r_1,…,r_k)(g)^2.
Therefore, there is a composed form approximator with advantage at least _(r_1, …, r_k)(g)^2.
Our proof of <Ref> uses the following.
For any α_1,…, α_m ∈ [0,1] and β_1, …, β_m ∈ [0,1], satisfying (1-α_i) ≤ 2(1-β_i) for each i ∈ [m],
1 - ∏_i ∈ [m]α_i ≤ 2* 1 - ∏_i ∈ [m]β_i .
We consider the vector β' ∈ [0,1]^m satisfying
1 - α_i = 2 · (1 - β'_i).
Note that β'_i ≥β_i, which means that
1 - ∏_i ∈ [m]β'_i ≤ 1 - ∏_i ∈ [m]β_i.
Now, consider the function q:[0,1] → [0,1] defined as
q(x) 1 - ∏_i ∈ [m]1 - x(1- α_i).
A quick calculation confirms that the second derivative of q is nonpositive, so q is concave. Furthermore, it satisfies,
q(0) = 0,
q(1) = 1 - ∏_i ∈ [m]α_i,
q(1/2) = 1 - ∏_i ∈ [m]β'_i.
We conclude,
1 - ∏_i ∈ [m]α_i concavity of q≤ 2*1 - ∏_i ∈ [m]β'_i≤*1 - ∏_i ∈ [m]β_i.
Let r_1 + ⋯ + r_k = R be the partition of R used in the junta achieving minimum error relative to g ∘ f and define, for each i ∈ [k],
α_i max*0, _(f, r_i)^2 - μ^2/1 - μ^2,
β_i max*0, _(f, r_i) - μ^2/1 - μ^2,
which satisfy the relation
1-α_i ≤ 2(1 - β_i).
Applying <Ref> and the relation error = (1 - advantage)/2, we have that
_^k(g∘ f, R) ≥1 - √(_μ, β(g))/2, and _^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤1 - _μ, α(g)/2.
Our goal is to show the following series of inequalities, which would imply the desired result,
1 - _μ, α(g) ≤ 2(1 - _μ, β(g)) ≤ 4(1 - √(_μ, β(g))).
The second inequality follows from the fact that for any x ∈ [0,1], (1-x) ≤ 2(1-√(x)). For the first inequality, using <Ref>, we can express stability via the Fourier spectrum of g as
1 - _μ, α(g) = ∑_Sĝ(S)^2(1 - ∏_i ∈ Sα_i)
≤ 2∑_Sĝ(S)^2(1 - ∏_i ∈ Sβ_i) <Ref>, 1-α_i ≤ 2(1 - β_i)
= 2(1 - _μ, β(g)).
This proves inequality 1, giving the desired result.
§ MULTIVARIATE NOISE STABILITY OF SYMMETRIC FUNCTIONS
In this section, we prove <Ref> and <Ref>, connecting the multivariate noise stability of symmetric functions to their univariate noise stability.
For any function g:^k →, a permutation σ:[k]→ [k] is an automorphism of g if for all inputs x ∈^k,
g(x) = g(x_σ(1), …, x_σ(k)).
We say g is symmetric if every permutation of [k] is an automorphism of g. Similarly, g is transitive if for all i,j ∈ [k], there is an automorphism of g sending i to j.
§.§ The upper bound on the multivariate noise stability of symmetric functions
For any symmetric g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let 1/k ·∑_i ∈ [k]_i. Then,
_μ, (g)≤_μ, (g).
Our proof of <Ref> will make heavy use of the negative association of random variables.
A set of random variables _1, …, _m supported on are negatively associated if for all disjoint subsets S_1, S_2 ⊆ [m] and S_1-juntas f_1:^m →, S_2-juntas f_2:^m → both monotonically nondecreasing,
[f_1()f_2()] ≤[f_1()][f_2()].
For our purposes, we will only need a few useful facts about negatively associated random variables given in <cit.> (see also <cit.> for a useful overview).
[Permutation distributions are negatively associated, <cit.>]
For any z_1, …, z_m ∈, draw a uniformly random permutation :[m] → [m] and set _i = z_(i) for each i ∈ [k]. Then, _1, …, _m are negatively associated.
[Subsets of negatively associated random variables are negatively associated]
For any 2 ≤ m' ≤ m, if _1, …, _m are negatively associated, then _1, …, _m' are also negatively associated.
[Product consequence of negative association]
For any negatively associated _1, …, _m and nondecreasing f:→_≥ 0,
*∏_i ∈ [m]f(_i)≤∏_i ∈ [m]*f(_i).
Given the above facts about negatively associated random variables, we can now prove <Ref>.
We expand _μ, (g) using the Fourier spectrum of g (<Ref>),
_μ, (g) = _∼_μ(g)[()^].
Let ℓ be distributed the same as |S| for S ∼_μ(g). Then,
_μ, (g) = _*_∼_μ(g)[()^| || = ℓ].
Since g is symmetric, for any |S_1| = |S_2|, ĝ(S_1) = ĝ(S_2). As a result the distribution of ∼_μ(g) conditioned on || = ℓ is simply a uniformly random size-ℓ subset of [k]. Formally,
_μ, (g) = _*_∼[k][()^].
Let _1, …, _k be a uniform random permutation of _1, …, _k. Then, the distribution of ()^ for ∼[k]ℓ is identical to that of ∏_i ∈ [ℓ]_i. By <Ref>, _1, …, _ℓ are negatively associated, and so,
_∼[k]ℓ[()^] = *∏_i ∈ [ℓ]_i(<Ref>)≤∏_i ∈ [ℓ][_i] = *^ℓ.
Therefore,
_μ, (g) ≤_**^ = _μ, (g).
§.§ The lower bound on the multivariate noise stability of symmetric functions
For any transitive g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let *∏_i ∈ [k]ρ⃗_i^1/k. Then,
_μ, (g)≥_μ, (g).
Note that every symmetric g is also transitive, but the reverse does not hold.
Similarly to the proof of <Ref>, let ℓ be distributed as |S| when S ∼_μ(g). Then,
_μ, (g) = _*_∼_μ(g)[()^| || = ].
For each S ⊆ [k], we'll use χ(S) ∈^k to denote the characteristic vector of S, meaning χ(S)_i [i ∈ S]. Then,
_μ, (g) = _*_∼_μ(g)*∏_i ∈ [k] (_i)^χ()_i | || =
= _*_∼_μ(g)*exp*∑_i ∈ [k]χ()_i log(_i) | || =
≥_*exp*_∼_μ(g)*∑_i ∈ [k]χ()_i log(_i) | || = Jensen's inequality
= _*exp*∑_i ∈ [k]log(_i) _∼_μ(g)*i ∈| || = . Linearity of expectation
Fix any i_1, i_2 ∈ [k] and level ℓ∈ [0,k]. Since g is transitive, there is an automorphism, σ, of g sending i_1 to i_2. Since σ is an automorphism of g, for any S ⊆ [k], for ∼_μ(g), [ = S] = [ = σ(S)]. As a result
_∼_μ(g)*i_1 ∈| || = ℓ = _∼_μ(g)*i_2 ∈| || = ℓ,
and so _∼_μ(g)*i ∈| || = ℓ must be the same for all i ∈ [k]. The sum of these probabilities is ℓ, meaning each is ℓ/k. This allows us to bound,
_μ, (g) ≥_*exp*∑_i ∈ [k]log(_i) ·/k
=_*∏_i ∈ [k]*_i^/k
=_*()^ = _μ, (g).
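At small k the sandwich of <Ref> can be checked exactly. The sketch below is our own illustration (Majority on 5 coordinates and an arbitrary ρ⃗): it enumerates all pairs (y, y') and confirms Stab_{0,ρ_GM}(g) ≤ Stab_{0,ρ⃗}(g) ≤ Stab_{0,ρ_AM}(g).

import numpy as np
from itertools import product

def stab(g, rho):
    """Exact Stab_{0,rho}(g) by enumerating inputs and noise outcomes."""
    k = len(rho)
    total = 0.0
    for y in product([-1, 1], repeat=k):
        for y2 in product([-1, 1], repeat=k):
            p = 1.0
            for r, u, v in zip(rho, y, y2):
                p *= r + (1 - r) * 0.5 if u == v else (1 - r) * 0.5
            total += p * g(y) * g(y2) / 2 ** k
    return total

maj = lambda y: 1 if sum(y) > 0 else -1          # symmetric and transitive, k odd
rho = [0.95, 0.9, 0.6, 0.5, 0.3]
gm = float(np.prod(rho)) ** (1 / len(rho))       # geometric mean
am = float(np.mean(rho))                         # arithmetic mean
print(stab(maj, [gm] * 5), stab(maj, rho), stab(maj, [am] * 5))   # nondecreasing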
§.§ Bounding the (δ,)-noise stability of symmetric functions
Recall, from <Ref>, that the (δ,ε)-noise stability of a function g:^k→ is the quantity
max{_ρ⃗(g) : at least a δ-fraction of ρ⃗'s coordinates are at most 1-2ε}.
We prove <Ref>, restated below.
For any symmetric function g:^k →, δ∈ (0,1), and ε∈ (0,1/2), let δ' ≜⌈ kδ⌉/k be δ rounded up to the nearest integer multiple of 1/k. Then, the (δ, ε)-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆ satisfying
1 - 2εδ' - 4ε^2 ≤ρ^⋆≤ 1 - 2εδ'.
Since stability is monotone (<Ref>), the (δ, ε)-noise stability of g is its multivariate noise stability with a correlation vector in which a δ' fraction of the coordinates are 1 - 2ε and the remainder are 1. The arithmetic mean of this vector is exactly 1 - 2εδ', and its geometric mean is (1 - 2ε)^δ'. The desired result then follows from <Ref> and the inequality
(1 - x)^c ≥ 1-cx - (1-c)x^2 ≥ 1 - cx - x^2
which holds for all c,x ∈ [0,1]. To prove this inequality, it is sufficient that q_c(x) ≥ 0 for all x,c ∈ [0,1] where
q_c(x) (1-x)^c - 1 +cx + (1-c)x^2.
To see this, we note that for any c ∈ [0,1], the function q_c(x) has roots at x = 0 and x=1. It is furthermore increasing at x = 0, and decreasing at x = 1. If q_c(x) were to be negative for any x ∈ [0,1], then, it would need to have at least 3 local extrema. However, the derivative q_c'(x) is concave, so it can only be zero at a maximum of 2 points. This proves the desired inequality. (If the reader prefers, <Ref> gives a “proof by picture".)
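The "proof by picture" can also be accompanied by a crude numerical check of q_c(x) ≥ 0 on a grid (a sanity check only, not a proof; our own sketch):

import numpy as np

# Check (1-x)^c >= 1 - c*x - (1-c)*x^2 for c, x in [0,1] on a 201 x 201 grid.
c = np.linspace(0, 1, 201)[:, None]
x = np.linspace(0, 1, 201)[None, :]
q = (1 - x) ** c - 1 + c * x + (1 - c) * x ** 2
print(q.min() >= -1e-12)    # True: q_c(x) is nonnegative on the whole grid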
§ COMPOSITION THEOREMS YIELD BOOSTERS FOR PROPERTY TESTING
§.§ A general boosting framework
Let 𝒫={𝒫_s}_s∈ℕ be a parametrized property of Boolean functions. For a function f:^n→ and distribution 𝒟 over ^n, we write
_𝒟(f,𝒫_s) ≜ min_h∈𝒫_s_𝒟(f,h)
to denote f's distance to 𝒫_s over 𝒟. We are interested in the relaxed testing regime for size parameters s>s' where we want to decide whether an unknown target function f belongs to 𝒫_s or is ε-far from 𝒫_s' under 𝒟: _𝒟(f,𝒫_s')>ε (recall <Ref>). We say that 𝒫 is (ε,s,s')-testable if there exists an algorithm for (ε,s,s')-testing 𝒫 for every distribution 𝒟. As ε→ 0, the gap between the Yes and No cases becomes smaller and (ε,s,s')-testing becomes more difficult. The main result of this section is that if 𝒫 "behaves well" under function composition, then testers for large ε can be boosted to testers for the more challenging regime of small ε. We will specialize our attention to properties which behave linearly with respect to function composition.
A parametrized property 𝒫={𝒫_s}_s∈ behaves linearly (with respect to function composition) if
f∈𝒫_s ⇒ g∘ f∈𝒫_k· s
for all g:^k→, f:^n→, and s∈.
Examples.
Being an s-junta, depth-s decision tree, depth-s formula, or degree-s polynomial are all properties of Boolean functions which behave linearly with respect to composition. As is often the case, it is straightforward to show from their definitions that these properties behave linearly. Many properties which do not a priori behave linearly can be converted into ones that do by applying an appropriate transformation to their size. For example, the property 𝒫_s={size-exp(s) decision trees} behaves linearly.
Strong composition theorems for properties.
A property 𝒫 which behaves linearly with respect to function composition is said to admit a strong composition theorem if the upper bound from <Ref> can be shown to be nearly tight. This definition generalizes the relation <ref>.
A parametrized property 𝒫={𝒫_s}_s∈ℕ admits an (ε_small,ε_large,λ)-composition theorem with respect to g:^k→ for ε_small,ε_large∈ (0,1) and a constant λ>0 if
_𝒟(f,𝒫_s)>ε_small ⇒ _𝒟^k(g∘ f,𝒫_λ ks)>ε_large
for all f:^n→ and distributions 𝒟 over ^n.
Strong composition theorems depend on the combining function g. For example, if g is a constant function then one would not expect the upper bound from <Ref> to be tight. For this reason, the dependence on g is made explicit in the definition of strong composition theorem.
Roughly speaking, the definition says that if a property 𝒫 behaves linearly and admits a strong composition theorem with respect to g, then composing with g turns a function in 𝒫_s into one in 𝒫_s k and turns a function slightly far from 𝒫_s into one very far from 𝒫_Θ(s k). For a fixed ε_large, having an (ε_small,ε_large,λ)-composition theorem with respect to g becomes stronger as ε_small approaches 0. In general, we are interested in (ε_small,ε_large,λ)-composition theorems when ε_large ≫ε_small. The parameter λ is built into the definition to tolerate a small amount of slack between the upper and lower bounds on g∘ f. For many applications, this constant factor is necessary. We are now equipped to state our main boosting theorem.
Let 𝒫={𝒫_s}_s∈ℕ be a property which behaves linearly and admits an (ε_small,ε_large,λ)-composition theorem with respect to g:^k→. If 𝒫 is (ε_large,s,s')-testable in q(ε_large,s,s') queries, then it is (ε_small,s,λ^-1 s')-testable using k· q(ε_large,ks,ks') many queries.
Let 𝒯 be an algorithm for (ε_large,s,s')-testing 𝒫. Given queries to a function f:^n→ and random samples from a distribution 𝒟 over ^n, we (ε_small,s, λ^-1 s')-test 𝒫 using the procedure in <Ref> where 𝒯 is given an instance of (ε_large,ks,ks')-testing 𝒫.
Query complexity.
The target g∘ f:^nk→ is an (ε_large, ks,ks')-testing instance for 𝒯. Therefore, 𝒯 makes q(ε_large,ks,ks') queries to the target g∘ f:^nk→ before terminating. Our tester makes k queries to f for each query to g∘ f. So our tester for f makes k· q(ε_large,ks,ks') queries in total.
Correctness. In the Yes case, f∈𝒫_s. We then have g∘ f∈𝒫_sk since 𝒫 behaves linearly. This ensures that 𝒯 outputs Yes. In the No case, _𝒟(f,𝒫_s'/λ)>ε_small. We then have _𝒟^k(g∘ f,𝒫_ks')>ε_large since 𝒫 admits an (ε_small,ε_large,λ)-composition theorem. This ensures that 𝒯 outputs No.
§.§ Implications for current landscape of junta testing
Our results have new implications for tolerantly testing juntas. In this regime, the Yes case of <Ref> is relaxed to only require that f is close to an r-junta over 𝒟.
Given parameters r≤ r' and ε_1 ≤ε_2, queries to an unknown function f:^n→, and random samples from a distribution 𝒟 over ^n, distinguish between
* Yes: f is ε_1-close to being an r-junta under 𝒟, and
* No: f is ε_2-far from being an r'-junta under 𝒟.
In all of our applications, we will be using <Ref>, or a variant of it, with g set to _k. For this reason, we start with some useful properties about the noise stability of parity.
§.§.§ Noise stability of parity under general product distributions
For any f:^n →, distribution over ^n, junta budget R, and R-junta h,
_^k(_k ∘ f, h) ≥min_r_1+⋯+r_k=R1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2.
Our proof of <Ref> will use the multivariate noise stability of parity.
For any μ∈ (-1,1), ρ⃗∈ [0,1]^k,
_μ, (_k) = ∏_i ∈ [k]*_i + (1-_i)·μ^2=∏_i ∈ [k]*1 - (1-_i)(1-μ^2).
Note that _k(y_1, …, y_k) = ∏_i ∈ [k]y_i. Therefore,
_μ, (_k) = _∼ (π_μ)^k, *∏_i ∈ [k]_i _i.
Each pair (_i, _i) are independent of another, so
_μ, (_k) = ∏_i ∈ [k]*_i _i.
The distribution of (_i, _i) can be succinctly described: With probability _i, _i = _i. Otherwise, they are each independent draws from π_μ. Therefore,
*_i _i = _i + (1-_i)·μ^2.
The desired result follows from combining the above equations
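A quick Monte Carlo check of this closed form (our own sketch; the bias, correlation vector, and sample size are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
mu, rho, trials = 0.3, np.array([0.9, 0.7, 0.2, 0.5]), 400_000
p_one = (1 + mu) / 2
y = np.where(rng.random((trials, 4)) < p_one, 1, -1)       # y ~ (pi_mu)^4
fresh = np.where(rng.random((trials, 4)) < p_one, 1, -1)
y2 = np.where(rng.random((trials, 4)) < rho, y, fresh)      # rho-correlated copy
print(np.mean(y.prod(axis=1) * y2.prod(axis=1)))            # Monte Carlo estimate
print(np.prod(rho + (1 - rho) * mu ** 2))                   # closed form from the claim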
We apply our strong composition theorem, <Ref>. It is stated in terms of advantage and gives
max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(_μ, β(r_1, …, r_k)(_k)),
where we define μ = _∼[f()], and β(r_1, …, r_k) ∈ [0,1]^k is the vector
β(r_1, …, r_k)_i = _(f, f̃_r_i) - μ^2/1 - μ^2 = 1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2.
Applying <Ref>,
max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *1 -1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2(1 - μ^2))
= max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *2·_(f, f̃_r_i)/1 - μ^2(1 - μ^2))
= max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i)).
The desired result follows from = 1 - /2.
§.§.§ Warmup: weak testers suffice for (0,ε,r,r')-testing juntas
We first boost tolerant testers in the regime where ε_1 is fixed to 0 in <Ref>. This version is slightly easier to state and is also the version we will use later in proving <Ref>.
If juntas can be (0,ε̄,r,r')-tested using q(ε̄,r,r') queries, then for all k∈ℕ and λ∈ (0,1), they can be (0,ε,r,λ^-1 r')-tested in k· q(ε̄,kr,kr') queries where
ε̄ = (1-(1-2ε)^{(1-λ)k/2})/2.
We will need the following composition theorem for juntas. It is a more precise version of <Ref> stated in terms of <Ref>.
For any λ∈ (0,1), the property of being an r-junta admits an (ε, ε̄,λ)-composition theorem with respect to _k for any ε ≤ε̄ where
ε̄ = (1-(1-2ε)^{(1-λ)k/2})/2.
Assume that f:^n→ is ε-far from being an r-junta over 𝒟. We would like to show that _k∘ f is ε̄-far from being a λ r k-junta over 𝒟^k where ε̄ is defined as in the lemma statement. Let r_1+⋯+r_k=λ rk be the partition of the junta budgets which minimizes the expression
1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2
from <Ref>. Let A_≤ r ⊆ [k] denote the indices for which r_i≤ r and let A_>r=[k]∖ A_≤ r. By a counting argument, at least a (1-λ)-fraction of the r_i satisfy r_i≤ r and so |A_≤ r|≥ (1-λ)k. By our assumption that f is far from being an r-junta, for these r_i, we get _𝒟(f,f̃_r_i)>ε. Therefore, we can conclude that for any λ rk-junta h:^nk→:
_𝒟^k(_k∘ f,h) ≥1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2<Ref>
=1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i)·∏_i∈ A_>r*1 - 2·_(f, f̃_r_i))/2
≥ (1 - √(∏_i ∈ A_≤ r (1 - 2·_(f, f̃_r_i))))/2    (each dropped factor lies in [0,1] since _(f, f̃_r_i) ≤ 1/2)
> (1 - (1 - 2ε)^{(1-λ)k/2})/2    (_𝒟(f,f̃_r_i)>ε for i∈ A_≤ r and |A_≤ r| ≥ (1-λ)k)
Since h was arbitrary, this shows that _k∘ f is ε̄-far from being a λ rk-junta.
<Ref> is stated in the non-tolerant regime. However, we note that the same theorem holds in the (0,ε,r,r')-testing regime. That is, under the conditions of <Ref>, if 𝒫 is (0,ε_large,s,s')-testable, then it is also (0,ε_small,s,λ^-1s')-testable. This is because if f̃ is a 0-approximator of f over 𝒟, then g∘f̃ is a 0-approximator of g∘ f over 𝒟^k.
<Ref> shows that the property of being an r-junta admits an (ε, (1-(1-2ε)^{(1-λ)k/2})/2, λ)-composition theorem. Therefore, <Ref> shows that if juntas can be (0,ε̄,r,r')-tested in q(ε̄,r,r') queries then they can be (0,ε,r,λ^-1r')-tested in k· q(ε̄,kr,kr') queries where
ε̄ = (1-(1-2ε)^{(1-λ)k/2})/2.
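In applications one typically fixes the weak tester's distance parameter (say ε̄ ≥ 1/3) and asks how many blocks k are needed to reach a target ε. A back-of-the-envelope helper for this (our own sketch, not from the paper; it simply inverts the formula above and assumes the weak tester handles any far-case distance at least ε̄):

import math

def blocks_needed(eps_small, eps_large, lam):
    """Smallest k with (1 - (1 - 2*eps_small)**((1 - lam)*k/2)) / 2 >= eps_large."""
    exponent = math.log(1 - 2 * eps_large) / math.log(1 - 2 * eps_small)
    return math.ceil(2 * exponent / (1 - lam))

print(blocks_needed(eps_small=0.01, eps_large=1/3, lam=0.99))   # about 1.1e4 blocks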
§.§.§ Weak testers suffice for tolerant junta testing
If there is a q(r)-query tester that, given queries to f:^n→ and random samples from a distribution 𝒟, distinguishes between
* Yes: f is 1/4-close to an r-junta, and
* No: f is 1/3-far from every r-junta,
then for every ε>0 and λ∈ (0,1), there is a q(r/(4ε))/(4ε)-query algorithm that distinguishes between
* Yes: f is ε-close to an r-junta, and
* No: f is Ω(ε/(1-λ))-far from every λ^-1r-junta.
Let 𝒯 be a q(r)-query tester for juntas that satisfies the theorem statement. Given queries to a function f:^n→ and random samples from 𝒟, we design an algorithm for (ε,5ε/(1-λ), r,λ^-1r)-testing f over 𝒟. The algorithm is straightforward. We choose k=1/(4ε), and run the procedure in <Ref> with g=_k:^k→ and junta size kr.
Query complexity. 𝒯 makes q(kr)=q(r/(4ε)) queries to the target _k∘ f:^nk→ before it terminates. Our tester makes k queries to f for each query to _k∘ f. Therefore, our tester makes k· q(r/(4ε))=q(r/(4ε))/(4ε) queries in total.
Correctness.For correctness, we need to show:
Yes case: if f is ε-close to being an r-junta over 𝒟, then _k∘ f is 1/4-close to being a kr-junta over 𝒟^k, and
No case: if f is 5ε/(1-λ)-far from being a λ^-1r-junta over 𝒟, then _k∘ f is 1/3-far from being a kr-junta over 𝒟^k.
Yes case.
Let f̃ be an r-junta which ε-approximates f over 𝒟. By a union bound:
Pr_𝐱∼𝒟^k[XOR_k∘ f(𝐱)≠XOR_k∘f̃(𝐱)] ≤ Pr_𝐱∼𝒟^k[f(𝐱^(i))≠f̃(𝐱^(i)) for some i]
≤ k·_𝒟(f,f̃)≤ kε = 1/4.
Since _k∘f̃ is a kr-junta, this shows that _k∘ f is 1/4-close to a kr-junta.
No case.
If f is 5ε/(1-λ)-far from being a λ^-1r-junta, then <Ref> implies that _k∘ f is
(1-(1-2ε')^{(1-λ)k/2})/2
far from being a λ·λ^-1kr=kr-junta over 𝒟^k where ε' ≜ 5ε/(1-λ). Therefore, it is sufficient to show that (1-(1-2ε')^{(1-λ)k/2})/2 ≥ 1/3. We observe 2/((1-λ)k) ≤ 2log_3(e)·ε', which implies 3^{-2/((1-λ)k)}≥ e^{-2ε'}≥ 1-2ε'. It follows:
1/3 ≥ (1-2ε')^{(1-λ)k/2}
which provides the desired bound.
§.§.§ Hardness of distribution-free tolerant junta testing
We prove the following which implies <Ref>.
Given queries to a function f:^n→ and random samples from a distribution 𝒟, and r≤ n, it is NP-hard under randomized reductions to distinguish between
* Yes: f is 0-close an r-junta over 𝒟, and
* No: f is 1/3-far from every Ω(rlog n)-junta over 𝒟.
We reduce from the SetCover problem.
A SetCover instance over a universe [m] is a collection of subsets 𝒮 = { S_1,…,S_n} where S_i [m]. The SetCover problem is to compute a minimal size subcollection {S_i_1,…, S_i_r} which covers the universe: [m]=S_i_1∪⋯∪ S_i_r.
SetCover is known to be hard to approximate.
Given a SetCover instance 𝒮 and a parameter r, it is NP-hard to distinguish between
* Yes: 𝒮 has a size-r set cover, and
* No: 𝒮 requires set covers of size Ω(rlog n).
Suppose we have an algorithm 𝒯_weak for testing juntas that can distinguish between the Yes and No cases in the theorem statement. In particular, there is a (0,1/3,r,Ω(r log n))-tester for juntas. <Ref> implies that there is a (0,ε,r,Ω(r log n))-tester, 𝒯_strong, for juntas as long as ε satisfies
1/3 ≤ (1-(1-2ε)^((1-λ)k/2))/2.    (⊛)
In the reduction, we will choose appropriately and use this boosted tester to solve SetCover.
The reduction. The reduction from SetCover to junta testing is standard <cit.>. We will restate it here for convenience. Let 𝒮 = { S_1,…,S_n} be a SetCover instance over the universe [m] and define u^(1),…,u^(m) ∈ {±1}^n where
(u^(j))_i =
1 if j ∈ S_i
-1 otherwise.
Let 𝒟 be the uniform distribution over { u^(1),…,u^(m), (-1)^n} and let f : {±1}^n → {±1} be the function which is the disjunction of its inputs: f(x) := x_1 ∨ ⋯ ∨ x_n (where 1 is interpreted as true and -1 as false).
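For illustration, a short Python sketch of this construction is given below; the function names and the toy instance are ours, and the tester itself is not included.

import random

def junta_instance_from_set_cover(sets, m):
    """Build the hard instance: vectors u^(1..m) in {-1,+1}^n with
    (u^(j))_i = 1 iff element j is in S_i, plus the all-(-1)s point,
    the uniform distribution D over these m+1 points, and f = OR of the inputs."""
    points = [tuple(1 if j in S else -1 for S in sets) for j in range(1, m + 1)]
    points.append(tuple([-1] * len(sets)))           # the all (-1)s point
    def f(x):                                         # disjunction: +1 iff some coordinate is +1
        return 1 if any(b == 1 for b in x) else -1
    def sample_D():
        return random.choice(points)
    return f, sample_D

# Toy instance: universe [4], sets S_1={1,2}, S_2={2,3}, S_3={3,4}.
f, sample_D = junta_instance_from_set_cover([{1, 2}, {2, 3}, {3, 4}], m=4)
print(f(sample_D()))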
We choose k=Θ(m) so that <ref> holds with Ω(1/m) < ε < 1/(m+1). We then run the boosted tester 𝒯_strong on the function f and distribution 𝒟 to test whether f is 0-close to an r-junta or ε-far from being an Ω(r log n)-junta (where the parameters r and Ω(r log n) correspond to the SetCover parameters). Our algorithm for SetCover outputs Yes if and only if the tester accepts f as being 0-close to an r-junta.
Runtime.
If the tester 𝒯_weak runs in polynomial time, then since k=Θ(m) and ε=Θ(1/m), the tester 𝒯_strong runs in polynomial time. Queries to the target function f and random samples from 𝒟 can also be simulated in randomized polynomial time.
Correctness.
For correctness, we need to show:
Yes case: if 𝒮 has a size-r set cover, then f is 0-close to an r-junta over 𝒟, and
No case: if 𝒮 requires set covers of size Ω(r log n), then f is ε-far from being an Ω(r log n)-junta over 𝒟.
Yes case.
Let S_i_1,…, S_i_r be a size-r set cover. Consider the function f̂ = x_i_1 ∨ ⋯ ∨ x_i_r. Since these indices form a set cover of 𝒮, f̂(u^(i)) = 1 for all i∈ [m] and f̂((-1)^n) = -1. This shows dist_𝒟(f, f̂) = 0. It follows that f is 0-close to an r-junta over 𝒟 since f̂ is an r-junta.
No case.
Suppose f̂ is an r'-junta satisfying dist_𝒟(f, f̂) < 1/(m+1). The relevant variables of f̂ must correspond to a set cover of 𝒮: if some element i∈ [m] is not covered, then f̂(u^(i)) = f̂((-1)^n) and dist_𝒟(f, f̂) ≥ 1/(m+1). This shows that if 𝒮 requires set covers of size Ω(r log n), then f is 1/(m+1)-far from every Ω(r log n)-junta. In particular, since ε < 1/(m+1), every Ω(r log n)-junta is ε-far from f.
§ ACKNOWLEDGMENTS
We thank the FOCS reviewers for their helpful comments and feedback. The authors are supported by NSF awards 1942123, 2211237, 2224246 and a Google Research Scholar award. Caleb is also supported by an NDSEG fellowship, and Carmen by a Stanford Computer Science Distinguished Fellowship.
§ COUNTEREXAMPLES TO NATURAL COMPOSITION THEOREMS
§.§ Counterexample to Conjecture 1
For any odd k and n ≥ k, let R = (n-1)k and let 𝒟 be the uniform distribution over {±1}^n. There are symmetric functions g : {±1}^k → {±1} and f : {±1}^n → {±1} for which the following holds.
* There is an R-junta h achieving,
dist_𝒟^k(g∘ f, h) ≤ O(1/√(k)).
* The natural strategy of dividing the budget equally achieves,
dist_𝒟^k(g∘ f, g∘f̃_R/k) = 1/2.
We set g = MAJ_k to be the majority function on k bits,
g(y_1, …, y_k) =
1 if ∑_i ∈ [k] y_i ≥ 0
-1 otherwise.
and f = XOR_n to be the parity function,
f(x_1, …, x_n) = ∏_i ∈ [n] x_i.
The following fact will be useful in giving a strategy that achieves low error.
Let z_1, …, z_k-1 each be uniform and independent samples from {±1}. Then, for any choice of c,
Pr[∑_i∈[k-1] z_i = c] ≤ O(1/√(k)).
We now give the junta achieving low error.
Let h = MAJ_k-1 ∘ XOR_n. Then,
* h is an ((k-1)n ≤ R)-junta.
* h achieves,
dist_𝒟^k(g∘ f, h) ≤ O(1/√(k)).
Clearly h depends on only the first (k-1)n bits of its inputs, so it is an R-junta as long as (k-1)n ≤ (n-1)k, which is guaranteed by the assumption n≥ k in <Ref>. We compute h's error,
dist_𝒟^k(g∘ f, h) = Pr[MAJ_k(z_1,…,z_k) ≠ MAJ_k-1(z_1,…,z_k-1)], where z_i := XOR_n(x^(i)) are independent and uniform over {±1}.
In order for MAJ_k(z_1,…,z_k) ≠ MAJ_k-1(z_1,…,z_k-1), it must be the case that ∑_i∈[k-1] z_i is -1 or 0. The desired result follows from <Ref>.
We'll next show the natural strategy achieves advantage 0, equivalent to error 1/2.
Let f = XOR_n and let 𝒟 be the uniform distribution over {±1}^n. Then,
Adv_𝒟(f, f̃_n-1) = 0.
By <Ref>, it is sufficient to show that for any set |S| = n-1 and any x ∈ {±1}^n,
E_y∼𝒟[f(y) | y_S = x_S] = 0.
For any fixed x, there are two y ∈ {±1}^n satisfying y_S = x_S: the first choice is y = x, and the second choice is x with a single bit flipped (the one bit not in S). One of these two choices will have a parity of +1 and one will have a parity of -1, so the average parity is 0, as desired.
For any odd k, μ = 0, and ρ = [0,…, 0],
Adv_μ,ρ(MAJ_k) = 0.
For odd k, MAJ_k is an odd function, so E_z∼{±1}^k[MAJ_k(z)] = 0. Then,
Adv_μ,ρ(MAJ_k) = E_z_1∼{±1}^k, z_2∼{±1}^k[MAJ_k(z_1)·MAJ_k(z_2)]
= E_z_1∼{±1}^k[MAJ_k(z_1)]·E_z_2∼{±1}^k[MAJ_k(z_2)]    (z_1, z_2 independent)
= 0 · 0 = 0.    (MAJ_k is odd)
The following completes the proof of <Ref>.
In the setting of <Ref>,
Adv_𝒟^k(g∘ f, g∘f̃_R/k) = 0; equivalently, dist_𝒟^k(g∘ f, g∘f̃_R/k) = 1/2.
This follows from <Ref> and <Ref>.
§.§ Counterexample to Conjecture 2
For any n ≥ 10, k ∈ ℕ, and R ≤ n/2, let 𝒟 be uniform over {±1}^n. There are g : {±1}^k → {±1} and f : {±1}^n → {±1} for which, for all partitions r_1 + ⋯ + r_k = R,
dist_𝒟^k(g∘ f, g(f̃_r_1, …, f̃_r_k)) ≥ 1 - 2^-Ω(k).
<Ref> is particularly surprising in light of the fact that either the constant -1 or constant 1 functions, both of which are 0-juntas, will achieve error ≤ 1/2 with respect to g ∘ f. We begin with a probabilistic construction of f achieving the following.
For any n ≥ 10, there is an f : {±1}^n → {±1} for which E_x∼{±1}^n[f(x)] ≤ 0.5 but, for all |S| ≤ n/2 and x ∈ {±1}^n,
E_y∼{±1}^n[f(y) | y_S = x_S] > 0.
Consider a random function 𝒇 where, for each x ∈ {±1}^n, 𝒇(x) ∼ π_0.25. We'll show that 𝒇 meets the desired criteria with strictly positive probability, proving the existence of at least one such f.
Let μ(𝒇) := E_x∼{±1}^n[𝒇(x)]. Then μ(𝒇) is the average of 2^n independent samples of π_0.25. Applying Hoeffding's inequality,
Pr[μ(𝒇) > 0.5] ≤ exp(-2·(0.25)^2·2^n) = exp(-2^n/2).
Similarly, for any |S| ≤ n/2 and x ∈ {±1}^n, let μ(𝒇, S, x) := E_y∼{±1}^n[𝒇(y) | y_S = x_S]. Then μ(𝒇,S,x) is the average of at least 2^(n/2) independent samples of π_0.25. Once again, by Hoeffding's inequality,
Pr[μ(𝒇,S,x) ≤ 0] ≤ exp(-2·(0.25)^2·2^(n/2)) = exp(-2^(n/2)/2).
Union bounding over all 2^n choices of S and 2^n choices for x, we have that 𝒇 meets the desired criteria with probability at least
1 - exp(-2^n/2) - 2^(2n)·exp(-2^(n/2)/2).
When n ≥ 10, the above probability is strictly positive, so such an f must exist.
Let f be a function with the properties of <Ref>, and let g = AND_k, which returns +1 if and only if all k of its inputs are +1. By <Ref>, for any r ≤ n/2, f̃_r is the constant +1 function. Therefore, for any r_1 + ⋯ + r_k = R, g(f̃_r_1, …, f̃_r_k) is the constant +1 function. However,
Pr_x∼𝒟^k[(g∘ f)(x) = +1] = (3/4)^k.
§.§ Counterexample to Conjecture 3
There is a g : {±1}^k → {±1}, an f : {±1}^n → {±1}, a distribution 𝒟 over {±1}^n, and a budget R for which no R-junta of composed form achieves optimal error among all R-juntas for g∘ f with respect to 𝒟^k.
We'll set k = 2, g = AND_2. Let p : {±1}^2 → [0,1] be defined as
p(x) :=
1 if x_1 = x_2 = 1,
3/4 if x_1 ≠ x_2,
3/5 if x_1 = x_2 = -1.
We begin by describing a probabilistic construction: given the input x, the value of 𝒇(x) will still be a random variable. In particular, we set n = 2, and 𝒇(x) is set to +1 with probability p(x) and -1 otherwise. This probabilistic construction will later be derandomized. We allow a junta budget of R = 4.
Next, we construct an optimal approximator for g∘𝒇. Given an input x^(1), x^(2), let 𝒚_1 = 𝒇(x^(1)) and 𝒚_2 = 𝒇(x^(2)). For succinctness, we'll use p_i to refer to Pr[𝒚_i = 1]. Then, since g = AND_2, the optimal approximator will return 1 iff p_1·p_2 ≥ 1/2. For our particular 𝒇, the only choices for p_i are 3/5, 3/4, and 1. As a result,
h^(opt)(p_1,p_2) =
1 if p_1 = 1 or p_2 = 1,
1 if p_1 = p_2 = 3/4,
-1 otherwise.
However, no composed form can achieve the above optimal approximator. Recall that composed form approximators are of the form h(q_1, q_2), where each q_i has range {±1}. The fact that the size of this range is 2, but there are three possible choices (3/5, 3/4, 1) for p_i, is the crux of the issue.
In more detail, of the three choices (3/5,3/4,1) for p_i, q_1 must classify at least two of them the same way. This gives three cases.
* If q_1 classifies 3/4 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/5 and p_1 = 1, p_2 = 3/5, and so cannot be optimal.
* If q_1 classifies 3/5 and 3/4 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/4 and p_1 = 3/5, p_2 = 3/4, and so cannot be optimal.
* If q_1 classifies 3/5 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/5, p_2 = 3/4 and p_1 = 1, p_2 = 3/4, and so cannot be optimal.
In all three cases composed form cannot achieve optimal error. It will always be off by some constant.
To derandomize this construction, we set n ≫ 2 sufficiently large. For each x ∈ {±1}^n, we sample the value f(x) to be +1 with probability p(x_1,x_2) and -1 otherwise. Note that after randomly selecting the value of f on each input x ∈ {±1}^n, f is now a deterministic function. Following the same arguments as in <Ref>, with high probability over the random choices in defining f, the error of the optimal 4-junta and of the optimal composed form 4-junta for g∘ f are within ±ε(n) of what they are for g∘𝒇, where ε(n) goes to 0 as n →∞. Therefore, for sufficiently large n, there exists an f meeting the desired criteria.
|
http://arxiv.org/abs/2307.04137v1 | 20230709093305 | A Novel Explainable Artificial Intelligence Model in Image Classification problem | [
"Quoc Hung Cao",
"Truong Thanh Hung Nguyen",
"Vo Thanh Khang Nguyen",
"Xuan Phong Nguyen"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
A Novel Explainable Artificial Intelligence Model in Image Classification problem
Hung Quoc Cao
FSO.QNH.QAI.AIC
FPT Software
Binh Dinh, Vietnam
[email protected]
Hung Truong Thanh Nguyen1, 2
1FSO.QNH.QAI.AIC
FPT Software
2Department of Computer Science
Frankfurt University of Applied Sciences
Frankfurt am Main, Germany
[email protected]
Khang Vo Thanh Nguyen
FSO.QNH.QAI.AIC
FPT Software
Binh Dinh, Vietnam
[email protected]
Phong Xuan Nguyen
Graduate School of Engineering Advanced Interdisciplinary Studies
The University of Tokyo
[email protected]
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In recent years, artificial intelligence has increasingly been applied in many different fields and has a profound, direct impact on human life. Following from this is the need to understand the principles behind a model's predictions. Since most current high-precision models are black boxes, neither the AI scientist nor the end-user deeply understands what is going on inside them. Therefore, many algorithms have been studied to explain AI models, especially those for the image classification problem in the field of computer vision, such as LIME, CAM, and GradCAM. However, these algorithms still have limitations, such as LIME's long execution time and CAM's lack of concreteness and clarity in its interpretation. Therefore, in this paper, we propose a new method called Segmentation - Class Activation Mapping (SeCAM) that combines the advantages of the algorithms above while overcoming their disadvantages. We tested this algorithm with various models, including ResNet50, Inception-v3, and VGG16, on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset. The results are outstanding: the algorithm meets all the requirements for a specific explanation in a remarkably short time.
Explainable Artificial Intelligence (XAI), machine learning, explanation, transparency, interpretability
§ INTRODUCTION
In recent years, along with the rapid development of deep learning, more and more new models have been created with outstanding accuracy in the field of computer vision. However, those models have complex network structures, and users and scientists still cannot fully interpret and understand their black boxes <cit.>. Arising from the need to explain to users and experts the reasons behind a model's decisions or predictions, Explainable Artificial Intelligence (XAI) was born and is drawing more and more attention in AI. Various XAI methods have been introduced with different approaches; Adadi and Berrada <cit.> present several ways to classify XAI algorithms, based on scope, time of information extraction, or the AI model. For scope-based classification, global and local are the two variations according to the scope of interpretability. Global XAI methods try to understand the entire model behavior, while local XAI methods try to understand a single prediction. Global interpretability techniques lead users to trust a model; in reverse, local techniques lead users to believe in a prediction, and they try to identify each feature's contribution in the input towards a particular output <cit.>. Based on the time of information extraction, there are two classes, post-hoc and intrinsic. Post-hoc methods explain the model after it has been trained; these methods use the model's predictions and parameters to explain it. Post-hoc methods can be applied without changing the model architecture and can therefore be applied to pretrained models. Intrinsic methods modify the original model architecture: the model is adjusted to include new layers with interpretable constraints. Finally, another important way to classify XAI methods is whether they are model-agnostic or model-specific. If an XAI method can be used for any type of model, it is classified as agnostic. If an XAI method can be applied to only a single type or a limited class of models, it is classified as specific.
Although these methods can give satisfactory explanations, they still have many limitations and need to be improved. With the image classification problem, many XAI methods have been proposed, each method has a different approach and the output is therefore also different. For example: the output of LIME <cit.> are the superpixels that affect the model's prediction most. SHAP <cit.> will show the impact on the prediction, either positive or negative, of all superpixels. Those superpixels come from a previous perturbation step. With visualization algorithms like CAM <cit.>, SISE <cit.>, Saliency Map <cit.>... the output will show the user a heatmap on the original image.
From the knowledge we have gained through researching and comparing the three algorithms: LIME, SHAP, CAM. We recognize that LIME's explanation, the areas with the most influence, is the most intuitive and accurate for the image classification problem. These regions explained by LIME are approximately equivalent to how humans perceive the object. Nevertheless, the calculation time of LIME is too high. But, when we explain an image, the average time of LIME is 200 seconds greater than that of the CAM. However, the choice of the number of the most influenced regions is still dependent on people and specific image <cit.>. The recent works have proposed a method to improve the computation speed of LIME, namely Modified Perturbed Sampling for LIME (MPS-LIME) [6]. In their experimental results with Google’s pre-trained Inception neural network on Image-net database, the runtime of MPS-LIME is nearly half as the runtime of LIME; but the calculation time is still incredibly long. In contrast, CAM does not suffer from these limitations, but the high impact areas of CAM are far broader than the human-defined bounding box. Moreover, CAM must modify the original model’s layers to work. In this work, we propose a new local post-hoc method of XAI in the image classification problem, called Segmentation - Class Activation Mapping (SeCAM). That method selects the regions that affect the model’s prediction as LIME, but with much faster time (approximate to CAM) . We believe that this concept of segmentation is also applicable to the class of CAM-based XAI methods <cit.>
Our main contribution are:
* Propose SeCAM as a new local post-hoc agnostic XAI method for the image classification problem, which combines the advantages of the two algorithms above (LIME and CAM) while overcoming their inherent weaknesses. Specifically, it provides friendly explanations, close to human explanations like LIME, while ensuring computation as fast as CAM, averaging 2 seconds per explanation; moreover, it avoids the need to modify the original model that the CAM method has for some specific models.
* In addition to applying SeCAM to explain AI models in image classification problems, this approach’s main idea has much potential to improve other related XAI algorithms, especially in the computer vision field.
* We have experimented with datasets, models, ... and a number of qualitative as well as quantitative evaluation methods.
* We discuss, through a user study, what it means to a user to view an image as superpixels. We believe that, when an image is represented as superpixels, each superpixel carries some meaning, for example the head or body region of the object. Therefore, the user can learn how the parts of the object affect the model's prediction and what they mean.
* We also experiment with many segmentation algorithms to see how much impact it has on XAI algorithms such as LIME, SHAP, SeCAM - the algorithms use segmentation algorithms as part of the perturbation step.
* We had a survey with real users to see what the given explanations mean to them.
* Finally, we believe that applying segmentation to XAI methods can make the results to be consistent and easy to compare between XAI methods.
This paper's remainder is arranged to provide in the following order: related work, our proposed methods, experiments, and, ultimately, conclusions along with our future research directions.
§ RELATED WORK
XAI methods can be categorized based on two factors. Firstly, the method can be intrinsic or post-hoc based on when the data is extracted. Secondly, the method can be either global or local based on the explanation's scope. Global models explain the complete, general model behavior and attempt to explain the whole logic of a model by inspecting the model's structures. Local models give explanations for a specific choice. For example, "Why the model have this prediction?". Global interpretability techniques lead users to trust a model. In reverse, local techniques lead users to believe in a prediction. Also, they try to identify each feature's contributions in the input towards a particular output <cit.>. Post-hoc interpretation models can be applied to intrinsic models, but not fundamentally vice versa. LIME represents the Local Post-hoc approach <cit.>, which is model-agnostic. In contrast, CAM represents the Local Intrinsic approach, which belongs to model-specific <cit.>.
We also introduce the superpixel-based image segmentation method that we chose to use in this article
§.§ Segmentation Algorithms
In the problem of image classification, the input image is of course a very important part. However, not every pixel in an image is meaningful. It would seem more intuitive to evaluate not only the perceptual but also the semantic meanings of an image created by locally grouping pixels. We get superpixels when we do this kind of local grouping of pixels on our pixel grid. It at the same time brings about computational efficiency benefits. It allows us to reduce the complexity of the image itself from hundreds of thousands of pixels down to just a few hundred superpixel. Each of these superpixels would then contain some sort of perceptual value and, ideally, semantics.
So, superpixels are becoming increasingly popular in computer vision applications. Superpixels provide a convenient primitive for computing local image features<cit.>. The XAI algorithms in this field segment the image into superpixels and use the presence or absence of these superpixels as the interpretable representation <cit.>. With this representation, the image is divided into regions; each region consists of several superpixels and has a certain meaning <cit.>. For example, with the segmented image below, humans can easily see that the region consisting of superpixels 11, 12, and 20 represents the hummingbird, where superpixel 11 represents the bird's beak, superpixel 12 the bird's head, and so on.
The goal of the XAI methods in this case is to figure out which superpixels have the most influence on the model’s prediction.
Some of the most commonly used superpixels methods are ETPS (Extended Topology Preserving Superpixels)<cit.>, SEEDS (Superpixels Extracted via Energy-Driven Sampling)<cit.>, SLIC (Simple Linear Iterative Clustering)<cit.>, Quickshift <cit.>,...
For superpixels to be useful they must be fast, easy to use, and produce high-quality segmentations. It is difficult to determine if segmentation is good or not because the definition of “good” often depends on the application. In this work, we experiment with the SLIC algorithm first and then with other algorithms. We will discuss the influence of segmentation algorithms later.
§.§ Local Interpretable Model Agnostic Explanations
Local Interpretable Model Agnostic Explanations (LIME) is an XAI method that can explain any classifier or regressor's predictions assuredly by approximating it locally with an interpretable model <cit.>. LIME intends to provide an easy to interpret method with local fidelity. The local fidelity means that the explanation for individual predictions should at least be locally faithful. In other words, it must correspond to how the model performs in the vicinity of the individual observation being predicted. The local fidelity does not imply global fidelity where the local context may not require globally essential features and vice versa. Due to this, even if a model has hundreds of variables globally, it could be the case that only a handful of variables directly relate to a local or individual prediction. LIME performs the steps below:
* Generating new samples then gets their predictions using the original model.
* Weighing these new samples by the proximity to the instance being explained.
Using the output probabilities from a given collection of samples that cover part of the input desired to be clarified, it then builds a linear model. Then, the surrogate model weights are used to measure the value of input features. Moreover, LIME is model-agnostic, so that it can be applied to any model of machine learning <cit.>.
Figure <ref> shows an example of a LIME explanation for the input image in Figure <ref> with the ResNet50 model.
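As an illustration of how such explanations are typically produced, a minimal sketch using the publicly available lime package is given below; the parameter values (number of samples, number of superpixels) are illustrative choices, not the settings used in this paper, and classify_fn is assumed to map a batch of images to class probabilities.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_with_lime(image, classify_fn, label, num_regions=5, num_samples=1000):
    """image: HxWx3 float array; classify_fn: batch of images -> class probabilities.
    Returns the image with the num_regions most influential superpixels highlighted."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classify_fn, top_labels=5, hide_color=0, num_samples=num_samples)
    temp, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=num_regions, hide_rest=True)
    return mark_boundaries(temp, mask)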
§.§ Class Activation Mapping
Class Activation Mapping (CAM) is a weighted activation map created for each input image <cit.>. It utilizes a global average pooling (GAP) in CNNs. A class activation map for an appropriate category indicates the discriminative image regions used by CNN to identify that category. It is a locally intrinsic interpretable model that achieved by designing more justified model architectures <cit.>. It explicitly allows CNNs to have exceptional localization ability despite being trained on image-level labels, enabling classification-trained CNNs to learn to produce object localization without using any bounding box annotations. CAM permits us to visualize the predicted class scores on any given image, highlighting the CNN's discriminative object parts. The CAM result shows a heatmap on the input image. This heatmap presents the impactful area of a given prediction <cit.>.
§.§ Gradient-weighted Class Activation Mapping
Gradient-weighted Class Activation Mapping (Grad-CAM) <cit.> is a generalized version of CAM. Grad-CAM uses the gradient information flowing into the last convolutional layer of the CNN to understand each neuron's decision of interest. Note that, to use CAM, the model must use a GAP layer followed by a fully connected softmax layer. This model architecture modification forces us to retrain the model. With a gradient approach, Grad-CAM can get the visualizations without changing the base model or retraining.
The final feature convolutional map of the input image is activated for different class channels. In detail, every channel in the feature is weighed with the class gradient for that channel. The global average pooling over two dimensions (i,j) for the gradient of respective class output for feature map is the spatial score of a specific class. Then, the resulting value is multiplied with the feature map along with the channel axis k. While the resultant is pooled along its channel dimension. Hence, the spatial score map is of size i*j which is normalized to positive region predictions using the nonlinear ReLU transformation. The class k's score correlates directly with the class-specific saliency map's importance, impacting the final prediction output.
§ PROPOSED METHODS
§.§ Motivation
In this section, we present the reason why we come up with the idea of SeCAM. In our previous work, we have applied LIME and CAM to explain the ResNet50 model. With the LIME method, we divided the image into 49 regions using the K-Means algorithm and calculated with the number of examples of 1000, which is the most appropriate number in this case. The results are shown in Figure <ref>.
The computation time of LIME and CAM are presented in Table <ref>.
The results in Table <ref> reveal that both LIME and CAM can yield the regions of the original image that most affect the prediction. However, we find that LIME's explanation most resembles human explanations. The CAM heatmap area is too large and thus contains additional areas that do not have a decisive effect on the model's prediction, thereby reducing the explanation's reliability. As introduced in the Segmentation Algorithms section, with the LIME results we can see that the ResNet50 model rated the head and tail as having the most impact, more than the beak, whereas the InceptionV3 model shows that the head and the beak are the more important parts. Thus, by using superpixels, humans can see which regions contribute most to the model's prediction, instead of individual pixels as with CAM. Nevertheless, LIME's computation time is too large, while CAM's computation time is completely superior, with a nearly 20000 times faster speed. The calculation time here is solely the time spent generating the explanation. One of the prerequisites for using the CAM method is that at least one Global Average Pooling (GAP) layer exists in the model architecture <cit.>. If the GAP layer is not already available in the model, a GAP layer must be added and the model retrained with all the data.
To overcome the above problems, we propose a new method called Segmentation - Class Activation Mapping (SeCAM) to improve LIME and CAM's disadvantages while preserving their advantages. Our proposed method produces a precise explanation of a predicted object like LIME but has a quick computation time of CAM. Furthermore, this method can be directly applied to any model with only an individual layer, followed by the last fully-connected softmax output layer. Thus, we do not have to add the GAP layer to the model or retrain the model anymore in the mentioned situation.
§.§ Segmentation - Class Activation Mapping (SeCAM)
In this section, we describe our novel method - SeCAM in detail. As sketched in Figure 2, SeCAM consists of three blocks. In the first block, we initially apply the same procedure as the original CAM method, which identifies the image regions’ importance by projecting the output layer’s weights onto the convolutional feature maps [4]. For models without a GAP layer followed by a fully connected layer, we use the gradient information flowing into the last convolutional layer of the CNN (idea from Grad-CAM) and make some adjustments. So, our method SeCAM does not require models to have a GAP layer because it allows using any other layer such as a flatten layer to replace the GAP layer in the original CAM method, as shown in Figure 2. In block 2, we use a segmentation algorithm to segment the input image into superpixels. Results obtained from the previous two blocks are combined and compute the effect of each superpixel on the model’s prediction. In the following section, we will discuss more carefully about each block.
§.§.§ Block 1: Class Activation Mapping
For an input image, in the last convolutional layer, we get n feature maps. Let f_k(x, y) present the activation for unit k in feature maps at spatial location (x, y). After a flattening class, each point represents a value of f_k(x, y). In the case of a pooling layer such as max or average pooling that turn multiple values f_k(x, y) (where (x, y) belongs to a spatial location set A into a point, that point will represent multiple corresponding values for all spatial locations in the set A
Therefore, for a class c, the input to the softmax layer is S_c = ∑_k∑_x,y w_k^c(x,y) f_k(x,y), where w_k^c(x,y) is the weight corresponding to class c for unit k at location (x,y). In case the model already has a GAP layer followed by a fully connected softmax layer (ResNet50, InceptionV3, ...), w_k^c(x,y) is taken directly from the weight of (x,y) in unit k corresponding to class c. In the other cases, w_k^c(x,y) takes the value of the gradient obtained via backpropagation:
w_k^c(x,y) = ∂ y^c/∂ A^k (x, y)
where A^k is the k^th feature map.
In other words, w_k^c(x, y) represents the importance of spatial element (x, y) in unit k to class c. After the softmax layer, the output for class c, P_c, is determined by Equation <ref>:
P_c = exp(S_c) / ∑_c exp(S_c).
Let M_c as the class activation map for class c, where each spatial element is given by Equation <ref>.
M_c (x,y)=∑_k w_k^c(x,y) f_k (x,y)
Therefore,
S_c =∑_k ∑_x,y w_k^c (x,y) f_k (x,y)
= ∑_x,y∑_k w_k^c(x,y)f_k (x,y)
=∑_x,y M_c(x,y)
Thus, M_c (x, y) indicates the importance of the activation at the spatial grid (x, y), leading to the classification of input image to class c.
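A minimal NumPy sketch of this computation for the GAP case (per-channel weights w_k^c taken directly from the fully connected layer) is given below; the array shapes and toy inputs are our own assumptions. For the gradient-based case, w_k^c would instead come from the backpropagated gradients described above.

import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (K, H, W) activations f_k(x, y) from the last conv layer.
    fc_weights: (num_classes, K) weights of the fully connected layer that
    follows global average pooling. Returns M_c of shape (H, W)."""
    w_c = fc_weights[class_idx]                      # w_k^c, one weight per channel k
    return np.tensordot(w_c, feature_maps, axes=1)   # sum_k w_k^c * f_k(x, y)

# Toy usage with random activations: 512 channels on a 7x7 grid, 1000 classes.
fmap = np.random.rand(512, 7, 7).astype(np.float32)
W = np.random.rand(1000, 512).astype(np.float32)
M_c = class_activation_map(fmap, W, class_idx=282)
print(M_c.shape)   # (7, 7); upsample to the input size before overlaying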
§.§.§ Block 2: Image Segmentation
The Image Segmentation block runs parallel to the calculation of CAM values in Block 1. In this block, we split the input image into separate regions with similar coloring pixels. Hence, each region represents more meaningful and interpretable, and also carries more information than pixels.
Currently, there are many image segmentation algorithms. In the scope of this article, we use the K-Means family to perform this division of images. More specifically, we use the simple linear iterative clustering (SLIC) algorithm, which is a special case of K-means adapted to the task of generating superpixels. SLIC performs a local clustering of pixels in the 5-dimensional space defined by the L, a, b values of the CIELAB color space and the x, y pixel coordinates. We get high-quality segmentations with the SLIC algorithm at a low computational cost (SLIC achieves O(N) complexity) <cit.>. With the SLIC algorithm, we can adjust the number of regions to be produced; a brief usage sketch is given after the step list below.
The SLIC algorithm includes the following steps:
* Firstly, we initialize K cluster centers by sampling pixels at every grid interval S= √(N/K), where N is the number pixels of the input image.
* We move the centers to the new locations corresponding to the lowest gradient position in a 3 x 3 neighborhood. Image gradients are calculated as follows:
G(x,y) = ‖I(x+1,y) - I(x-1,y)‖^2 + ‖I(x,y+1) - I(x,y-1)‖^2
Where I(x,y) is the color vector in CIELAB color space corresponding to the pixel at position (x,y), and ‖·‖ is the L2 norm.
* Similar to the K-means clustering <cit.>, we have a loop to update cluster centers and labels for all pixels. We repeat the following loop until convergence:
* Update the label for each pixel based on the nearest cluster center according to the distance measure D_s between a cluster center C_k = [l_k, a_k, b_k, x_k, y_k]^T and a pixel P_i = [l_i, a_i, b_i, x_i, y_i]^T, which is the sum of the lab distance and the xy-plane distance normalized by the grid interval S:
D_s = d_lab + m/Sd_xy
where:
* m is a parameter allowing us to control the density of a superpixel. The value of m can be in range [1, 20]. We choose 10 as the default value.
* d_lab and d_xy are respectively the lab and the xy plane distances, defined as follow:
d_lab=√((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)
d_xy= √((x_k - x_i)^2 + (y_k - y_i)^2)
* Compute a new center as the average labxyvector of all the pixels belonging to the cluster.
* In the last step, a few stray labels may remain; SLIC enforces connectivity by relabeling disjoint segments with the labels of the largest neighboring cluster.
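In practice we do not need to re-implement these steps; a library implementation such as scikit-image's slic can be used, as in the following sketch (the image and the parameter values are placeholders):

from skimage.segmentation import slic
from skimage.data import astronaut

image = astronaut()                     # any RGB image as an (H, W, 3) array
# n_segments: target number of superpixels K; compactness: the density parameter m.
segments = slic(image, n_segments=49, compactness=10)
print(segments.shape, segments.min(), segments.max())   # (H, W) label map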
§.§.§ Block 3: CAM Averaging in Segmented Image
Deriving the CAM values from Block 1 and the segmented image from Block 2, we average the values from the heatmap obtained in Block 1 for each region, called Segmentation Class Activation Mapping (SeCAM) value corresponding to that region. The SeCAM value for class c of region s is denoted M_c^s and is calculated by Equation <ref>.
M_c^s = (1/|s|) ∑_(x,y) ∈ s M_c(x,y)
In which |s| is the number of pixels in region s. Thus, the M_c^s value represents each region's importance to the given prediction. The averaging ensures fairness between regions with different areas and bypasses the requirement of adding a GAP class to the original model's architecture. When we take the average, each point in the region influences the M_c^s value; if we took the maximum instead, SeCAM would ignore the effect of points with smaller CAM values within a region.
Finally, we select the regions that have the most significant impact on the prediction of the model, which are the areas with the highest SeCAM values. There are two approaches to extract SeCAM's explanation. The first is to choose the number of most influential regions: the greater the number of regions, the broader the scope of the explanation. The second is to select the regions whose value is above a given threshold of the maximum SeCAM value: the higher the threshold, the fewer regions are selected and the smaller the explanation's scope. We discuss strategies for selecting regions later.
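A compact NumPy sketch of this averaging and of the two selection strategies is given below; the function signature and default values are ours.

import numpy as np

def secam(cam, segments, top_k=None, threshold=0.5):
    """cam: (H, W) class activation map, already resized to the image size.
    segments: (H, W) integer superpixel labels from the segmentation step.
    Returns (scores, mask): scores[s] is the mean CAM value of region s (M_c^s)
    and mask marks the selected regions."""
    labels = np.unique(segments)
    scores = {s: cam[segments == s].mean() for s in labels}
    if top_k is not None:                                      # strategy 1: top-k regions
        chosen = sorted(scores, key=scores.get, reverse=True)[:top_k]
    else:                                                      # strategy 2: threshold on the max value
        cut = threshold * max(scores.values())
        chosen = [s for s in labels if scores[s] >= cut]
    mask = np.isin(segments, chosen)
    return scores, mask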
§ EXPERIMENTAL DETAILS
Experimental Setup
We conduct experiments for our SeCAM method, CAM, GradCAM, LIME on image from the dataset ILSVRC. We use models Resnet50, InceptionV3 and VGG16. In those models, Resnet50 and InceptionV3 have a GAP (Global Average Pooling) layer followed by a fully connected softmax layer, so we can use CAM with those two models easily. The VGG16 model is more complicated so CAM can’t be applied directly so we will use GradCAM instead.
Qualitative Results
We compare the explanation quality of LIME, CAM (GradCAM) and SeCAM on sample images from the ILSVRC dataset. The qualitative results for some images are shown in table II. The running time for each explanation is also included.
Quantitative Results
Since, to the best of our knowledge, there are currently no accurate and widely recognized methods for the comparative evaluation of XAI algorithms, we compare the precision of the SeCAM, LIME, and CAM (GradCAM) results on a human-grounded basis. Let G denote the human-annotated ground-truth bounding box and S the bounding box of the explanation. We use the following evaluation metrics:
* Intersection Over Union (IOU) is the ratio between the overlapped area of two bounding boxes and their union area. IOU compares each bounding box produced by XAI methods to the ground truth.
IOU = S_intersection/S_union = Area(S ∩ G)/Area(S ∪ G)
The IOU value varies from 0 to 1. The XAI method with the highest IOU value is the most accurate.
* Energy-Based Pointing Game (EBPG) evaluates the “accuracy” and variability of the XAI algorithms <cit.>. Extending the traditional pointing game, EBPG measures the fraction of its energy captured in the corresponding ground truth G.
EBPG_S = ∑1_(S∩ G){x}/∑1_S{x}
In which ∑1_(S∩ G){x} is the number of points in both S and G, and ∑1_S{x} is the number of points in S. So EBPG tells us what percentage of the explanation's box S lies in the human-grounded box G. A sketch of both metrics is given below.
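Both metrics reduce to simple mask operations; a small NumPy sketch, assuming boolean (H, W) masks for S and G, is given below.

import numpy as np

def iou(explanation_mask, ground_truth_mask):
    """Intersection over union of the two boolean masks."""
    inter = np.logical_and(explanation_mask, ground_truth_mask).sum()
    union = np.logical_or(explanation_mask, ground_truth_mask).sum()
    return inter / union if union else 0.0

def ebpg(explanation_mask, ground_truth_mask):
    """Fraction of the explanation's points (its 'energy') that fall inside the ground truth."""
    total = explanation_mask.sum()
    inter = np.logical_and(explanation_mask, ground_truth_mask).sum()
    return inter / total if total else 0.0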
§ DISCUSSION
§.§ Computational resources
We used 6 x RTX 2080 Ti GPUs with dual Xeon E5-2673 v3 CPUs and 128 GB of memory for the experiments. SeCAM is always the fastest algorithm, and the slowest is LIME.
§.§ SeCAM vs CAM
From the qualitative results in Table II, we find that the SeCAM explanations are not only closer to human intuition but also reveal insights about the models. For example, with the hummingbird image, the results of CAM (GradCAM) are heatmap regions related to the hummingbird, and it is difficult to tell which region has the most influence on the model's prediction when we look at the explanation for the ResNet50 model. Meanwhile, the results of SeCAM show the user the influence of each part of the hummingbird on the prediction. Specifically, the hummingbird is divided into three main parts: the beak, the head, and the body. The ResNet50 model rated the body and head as the most important, while InceptionV3 chose the beak and head. More specifically, the VGG16 model evaluates the beak as particularly important. However, with the GradCAM results, it is difficult to see that the head also has a great influence on the prediction. This is easier to see in the SeCAM results with 4 segments.
CAM obscure some important color parts
During the experiment, we also found that, because the result of CAM or GradCAM is a heatmap, when overlaying the heatmap on the original image, it sometimes obscures some important parts. For example in Figure 3: In this case, the prediction of model InceptionV3 is Indigo bunting instead of coucal. We use CAM and SeCAM to explain why the model’s prediction is Indigo bunting. We found that the heatmap results of CAM obscured the characteristic blue color of the Indigo bunting species, while the SeCAM explanation showed more clearly the characteristic blue color area. With SeCAM we can see that the reason InceptionV3 predicts Indigo bunting is because of the similarity in color.
CAM can not have a clear distinction between parts
Also during the experiments, we found cases where the heatmap of CAM does not show which region affects the prediction the most; instead, one can only see that the heatmap covers the object, and it is difficult to understand why the model makes that prediction. For example, in Figures 4 and 5, the InceptionV3 model predicts that input images 4.a and 5.a are Kite and Vulture instead of Coucal. Looking at the explanation of CAM (Figures 4.b and 5.b), it is hard to understand why the model predicted incorrectly. Meanwhile, based on SeCAM's explanation, we can see that the model assigns little importance to the head, so it is easier to understand why the model predicts that Figure 4.a is a kite. Especially for image 5.a, the background with the surrounding dry branches also greatly affects the prediction of Vulture.
§.§ SeCAM vs LIME
When we compare SeCAM and LIME, the most obvious observation is that their results take a similar form, indicating which superpixels have the most influence on the model's prediction. However, the computation of SeCAM is enormously faster, with a difference of more than 100 seconds per explanation. SeCAM is also more stable: it gives reasonable explanations for any image, while LIME is skewed in some instances, as shown in Table II and Figures 4 and 5. Overall, we find that the results of SeCAM equal or surpass those of LIME.
§.§ Effect of Segmentation Algorithms
We also tried other segmentation algorithms in addition to SLIC. Of course, different algorithms segment the image differently. However, the XAI algorithms can still identify the influential regions consistently with human judgment. For example, in Table III below, we used two algorithms, Quickshift and SLIC; they produce different segmentations even with the same number of regions, but SeCAM can still select the areas of the image containing the object in both cases.
Therefore, the choice of different segmentation algorithms has no effect on model interpretation. We chose the SLIC algorithm because it is the easiest to understand, uses k-means clustering, and is the fastest implemented. At the same time, showing the segmentation areas of this algorithm is also friendly to the end-user, especially users who do not know much about technology.
§.§ Comparition of XAI methods
As introduced, comparing and evaluating XAI methods against each other is a big challenge. One of the reasons is that their outputs are not synchronized: LIME and SHAP produce explanations over superpixels from a segmentation algorithm, while CAM and GradCAM produce heatmaps. We believe that the idea of integrating segmentation into heatmaps will not only improve the interpretation results but also make comparisons between XAI methods easier. Because we can consider the explanation based on the influence of each part of the image instead of each pixel, comparing XAI methods and understanding model behavior becomes easier and saves time.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we have introduced a novel method that explains a model's prediction in the image classification problem, called Segmentation - Class Activation Mapping (SeCAM). Our method is developed based on LIME, CAM, and GradCAM, the improved version of CAM. It combines the accuracy of LIME's explanations with the calculation speed of CAM. Besides, SeCAM can report the regions that most significantly influence the prediction together with their impact values, and these values can be used to evaluate SeCAM's accuracy against human-grounded annotations. We held a survey to see whether the explanations of the new approach really improved, and the results are very satisfactory. We experimented with many different image classification models on the ILSVRC dataset; in some cases, SeCAM gave the correct explanation while LIME gave rather absurd results. In explaining widely used image classification models, our SeCAM results outperform other XAI methods such as LIME and CAM. There are a number of pathways of future work for us to explore with SeCAM. We recognize that our proposed method has limitations, and future academics and researchers should be aware of these and interpret the material presented in this research within the context of those limitations. Firstly, there is a fairly obvious limitation that the accuracy of SeCAM's explanation depends heavily on the choice of the number of selected segments or the exact threshold level. Secondly, choosing the right variant of the algorithm for each model still has to be done manually. Currently, we distinguish two main categories: models with multiple fully connected layers, such as VGG16, and models with only one fully connected layer, such as ResNet50. The user has to specify the model's type in order to apply the algorithm correctly; we will try to identify the model type automatically in updated versions of the algorithm. Besides, we also find the lack of a standard evaluation method for existing XAI methods inconvenient; therefore, we will also study a general method for evaluating the accuracy of different XAI algorithms in parallel with the development and improvement of our XAI algorithm.
§ ACKNOWLEGMENT
We are grateful for the collaborative research environment provided by FPT Software Quy Nhon. We would like to express our special thanks of gratitude to Phong Nguyen for his sponsor and Prof. Takehisa Yairi from The University of Tokyo for his helpful support and discussions; Dr. Vinh Nguyen for his careful review. Finally, we would also like to acknowledge FSOFT AI Laboratory for providing us opportunities of incubating ideas in this project.
§ APPENDIX
§.§ Result images in experiment
In the experiment, we have applied SeCAM, CAM and LIME on various images from the ILSVRC dataset. In the Table <ref>, we present three examples of results with different XAI methods' parameters.
|
http://arxiv.org/abs/2307.04015v1 | 20230708164731 | Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder | [
"Qi Wang",
"Shubing Zhang",
"Li Zhou"
] | cs.SD | [
"cs.SD",
"cs.MM",
"eess.AS"
] |
Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder
Qi Wang, Shubing Zhang , Li Zhou 1
China University of Geosciences(Wuhan)
{wangqi233,zhouli}@cug.edu.cn
* Corresponding author
1 This research was funded by the Chinese Regular Projects of the Humanities and Social Sciences Fund of the Ministry of Education of Grant No.16YJAZH080.
August 12, 2023
===================================================================================================================================================================================================================================================================================================
Music accompaniment generation is a crucial aspect in the composition process. Deep neural networks have made significant strides in this field, but it remains a challenge for AI to effectively incorporate human emotions to create beautiful accompaniments. Existing models struggle to effectively characterize human emotions within neural network models while composing music. To address this issue, we propose the use of an easy-to-represent emotion flow model, the Valence/Arousal Curve, which allows for the compatibility of emotional information within the model through data transformation and enhances interpretability of emotional factors by utilizing a Variational Autoencoder as the model structure. Further, we used relative self-attention to maintain the structure of the music at music phrase level and to generate a richer accompaniment when combined with the rules of music theory.
Our experimental results indicate that the emotional flow of the music generated by our model has a strong correlation with the input emotion, demonstrating the model's strong interpretability and control of emotional flow. The generated music is also well-structured, diverse, and dynamic, outperforming the baseline models.
Music Accompaniment Generation, Emotional Flow, Variational Autoencoder, Rule constraints
§ INTRODUCTION
Music evokes emotions in listeners, making it a powerful and intuitive medium for understanding. It also serves as a driving force for musicians to create. One important aspect of composing is incorporating emotional expression into the music. Composers use their emotions along with their technical skills and knowledge to craft their compositions.
Current AI methods fall short of replicating a composer's approach. Neural networks primarily focus on combining and utilizing pre-existing knowledge of compositions, rather than incorporating emotions as high-level information. Our research aims to overcome this limitation by developing a model for generating accompaniment that takes emotions into account.
The way emotions are processed impacts every aspect of music composition and, as a result, every aspect of deep neural networks <cit.>. This puts a significant emphasis on the need for network control. While autoregressive models can effectively capture key elements of music, they lack transparency and do not guarantee internal control and interpretability of musical information. Adversarial networks <cit.> can separate elements like pitch, rhythm, and texture, but they struggle with capturing emotional information and prioritize interpretability over musicality and structure.
Additionally, many music generation models <cit.> primarily focus on identifying and evaluating the emotional aspects of music, rather than using them as a controllable variable. Therefore,
instead of using subjective and limited emotional labels<cit.>, such as "relaxed" or "nervous," we have adopted Thayer's continuous emotion model<cit.>. This model takes into account two quantitative and controllable factors: valence, which measures the level of positivity or negativity, and arousal, which measures the level of excitement or calmness. This approach provides a controlled understanding of human emotions.
Thus, we designed a system based on Variational Autoencoder, a controllable deep learning model, which incorporates emotional factors into the neural network's learning process. The user inputs valence and arousal trends, which are then encoded using our Valence Encoder and Arousal Encoder. The model then decodes and reconstructs this information to generate 2-bar piano accompaniments that match the emotional flow of the user's input.
To compose a dynamic piece of music, we take into account two key elements: tonality<cit.>, which enhances the beat and rhythm of the music by incorporating rule-based constraints in the model's decoder, and structural organization<cit.>, which improves the storytelling aspect of the music and preserves the internal structure of the piece through a self-attention mechanism.
Our data, code, and samples have been made publicly available [<https://github.com/Duoluoluos/Emotion-Guided-Music-Accompaniment-Generation>]online.
Our main contributions include:
* Emotion-Guided Composition, where the user inputs an Emotion-Flow Curve and the model generates music
that closely matches the input emotions.
* Enhanced accompaniment generation, incorporating global tonality, music phrases, and local texture for a more realistic and dynamic improvised accompaniment.
* Integration of rules and deep learning, combining the creative capabilities of deep networks with the constraints of music theory to improve the transparency of the music creation process.
§ RELATED WORKS
§.§ Accompaniment Generation
Generating musical accompaniment is essentially a specific type of music generation problem<cit.>, where the melody is used as a constraint, and the accompaniment is the generated music. In the past, accompaniment generation was approached in the same way as music generation, treating pitch and temporal values as simple data. Algorithms such as Hidden Markov Chain (HMC)<cit.>, Random Forest (RF), Support Vector Machine (SVM)<cit.> <cit.>, etc. were used to approach the problem from a regression perspective. However, with the advancement of deep learning, more accurate prediction models have been developed.
DeepBach<cit.>, a well-known music generation network based on RNN/LSTM<cit.> networks, represents Bach chorales as voice lists with metadata lists and feeds an embedding representation to the RNN for prediction. However, RNN/LSTM networks alone may not be sufficient for achieving the required level of long-range coherence in accompaniment. Hybrid models, such as the RNN-LSTM model in paper <cit.> and the RNN-RBM model in paper <cit.>, have been proposed to address this issue. The RNN-LSTM model learns different models in stages, while the RNN-RBM model uses several Restricted Boltzmann Machines (RBMs), samples the output of the RBMs as input for the RNN, trains on local information, and then performs autoregression over it.
In 2018, the Music Transformer <cit.> was introduced, which shifted the focus from regression problems and note prediction to natural language processing (NLP) techniques for recognizing relationships between different segments of music and evaluating the logicality of musical phrases, similar to how NLP tasks analyze relationships and coherence in language. The Transformer model uses attention mechanisms, positional coding, and other techniques to ensure long-range coherence, making it useful for various accompaniment generation tasks such as drum and piano accompaniment. The model is similar to text completion in NLP, using a priori melodic data and key information such as drum beats to "fill in" missing features. Papers <cit.> have expanded upon this data representation and the MuMidi proposed in paper <cit.> can solve harmonic problems in a long-term context by integrating pitch, time value, and tempo. However, the generation process is not always interpretable or controllable and the randomness of notes can increase over time, resulting in non-sequential music.
To improve control over the music generation process, various methods have been employed. MuseBert <cit.> uses data corruption and fine-tuning during the inference learning process, while Music VAE <cit.> <cit.> uses decoupled feature representations such as pitch, chord, and texture, and employs interpolation, back-and-forth sampling, and temperature factors to increase accompaniment diversity. MuseGAN <cit.> treats music data as images and can generate multi-track accompaniments, but the structure of each track is not well-constrained by composition rules and the resulting music may not be as listenable. It is worth noting that the "hidden space" of the Variational Autoencoder(VAE) is better suited to the music generation problem than the image representation method used in the generative adversarial network. Unlike pass-through data, notes are affected by pitch, time, and velocity and have a high dimensionality of information. The VAE <cit.> normalizes this information to the hidden space for posterior estimation and reconstruction using an Encoder-Decoder architecture, which can be combined with a "learning from scratch" strategy and improve the model's ability to migrate and transfer. Therefore, we chose to use VAE as a controllable accompaniment generation model. Our model can generate well-structured accompaniments that conform to certain composition rules and follow an Emotion Flow.
§.§ Emotional Flow Guided Composition
Valence and Arousal are commonly used as quantitative measures of musical emotion in research. Studies<cit.> have shown that the rhythmic density of music, determined by the duration of notes in each measure, can affect a person's arousal levels independently of note velocity. Additionally, the melodic and harmonic direction of a song can affect the overall emotional direction <cit.>, referred to as valence. These factors can have a significant impact on the emotional response to a piece of music.
The objective of our research is to extract features from Emotion Flow, specifically the Valence Curve and Arousal Curve <cit.>, and then systematically associate those features with the generated accompaniment. Previous research, as shown in the paper <cit.>, used dynamic programming and template-matching methods to complete the Emotion-Flow Guided Accompaniment Generation. However, these methods can ensure the audibility of the music but do not guarantee the diversity of the accompaniment. In contrast, deep neural networks can achieve accompaniment diversity through large-scale learning, but they struggle to maintain the structure of the music compared to methods such as template matching <cit.>. Although self-similarity <cit.> can maintain some of the structure, neural network methods have difficulty ensuring the structure of the music because the music structure is strongly regulated through music phrases. Therefore, decoding music segments into "phrase" units is the key to maintain music structure. In this paper, we propose using a VAE which makes full use of structured features of the music to improve the overall structure and diversity of the accompaniment.
§ METHODS
§.§ Data Preparation
The POP909 Dataset <cit.> comprises 909 popular music tracks, which are piano-based and have a total running time of 60 hours. Each track is stored in MIDI file format and includes three separate components - melody, bridge, and piano. The bridge and piano tracks serve as an accompaniment. Additionally, the dataset includes chord and bar annotations for each song.
The POP909 dataset includes melodies that are broken down into 2-bar, 4-bar, and 6-bar fragments. The bar annotations in the dataset provide information about the structure of these fragments. The chord annotations, on the other hand, provide information about the harmony of each bar in the melodies.
To address the issue of music structure in a consistent manner, we discovered that the majority of music is composed of 2-bar segments. As a result, we carried out data cleaning, filtering out 2/4-bar segments and 2/4-bar segments with 6-bar introductory fragments. The training and testing sets were then split in an 8:2 ratio.
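For reference, a minimal sketch of how such MIDI files can be converted to 16th-note piano rolls with the pretty_midi library is shown below; the track names and the file path are assumptions about the POP909 layout, and the tempo-based quantization is a simplification of our actual preprocessing.

import numpy as np
import pretty_midi

def to_piano_rolls(midi_path, steps_per_quarter=4):
    """Return one binary 128 x T piano roll per track, with roughly one column
    per 16th note (approximated via the estimated global tempo)."""
    midi = pretty_midi.PrettyMIDI(midi_path)
    fs = steps_per_quarter * midi.estimate_tempo() / 60.0       # columns per second
    return {inst.name: (inst.get_piano_roll(fs=fs) > 0).astype(np.int8)
            for inst in midi.instruments}

rolls = to_piano_rolls("POP909/001/001.mid")                    # hypothetical path
T = min(r.shape[1] for r in rolls.values())
p_acc = rolls["BRIDGE"][:, :T] + rolls["PIANO"][:, :T]          # merged accompaniment roll (assumed track names)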
As sample data, we selected a subset from the Nottingham Dataset <cit.>. This dataset comprises over 1000 European and American folk songs, all of which have chord annotations. For validation purposes, we chose 2-bar and 4-bar segments from the dataset. The collated data information is presented in Table <ref>. (It is worth noting that if the user-supplied music does not have chord annotations like the sample data, we used Bi-LSTM Harmonizer <cit.> to implement the chord annotations)
To showcase the capabilities of our model, we chose two representative songs from the 20 songs we used, one with high valence and the other with low valence. These songs were made available on a web page (<https://soundcloud.com/ko9isjyplxrb/sets/demos-of-emotion-guided-generated-accompaniment>) for users to evaluate and enjoy.
§.§ Models
§.§.§ The Conversion of Valence and Arousal
The overall architecture is illustrated in Figure <ref>.
The initial music data is represented by piano rolls. Each row of the piano-roll matrix corresponds to one of the 128 pitch values, and each column corresponds to a unit of time, with the duration of a 16th note as the time unit. The accompaniment tracks are merged and transformed to produce the accompaniment piano roll p_T^{ACC}, where T is the duration of the accompaniment fragment. Similarly, the rhythm piano roll is represented as p_T^{RHY}, and the labeled chord progression as c_T. Following twelve-tone equal temperament <cit.>, c_T is a 12 × T matrix, where 12 is the number of pitch classes in an octave.
Valence_T = V(\bar{c}_T)
Here V(·) is the valence mapping and \bar{c}_T is the chord data after normalizing the root note of c_T to C3. This ensures that valence is always computed in the same key; we set T = 8 here.
Similarly, denoting the arousal mapping as A(·), we have
Arousal_T= A(p_T^ACC+p_T^RHY)
The mapping A transforms the multi-track music data into a tree structure <cit.>, whose nodes characterize the density distribution of notes more clearly. Arousal is a four-dimensional matrix of size 128 × T × 16 × 8, whose axes encode the pitch-time-duration-density grouping.
Denoting the quantization operation on Arousal and Valence as |·|, we have
|Arousal|_T = \frac{1}{5T} \sum_T \sum_{pitch} A(p_T^{ACC} + p_T^{RHY})
|Valence|_T = \sum_T W \cdot V(\bar{c}_T)
The W value in this context refers to the chroma weights of each chord and serves as a measure of the valence, or emotional assessment, of each chord. By performing a quantization-transformation operation, the emotional content of the music can be translated into a format that the composition model can understand, allowing for the user's desired Emotion Flow to be incorporated into the final output.
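For illustration, a minimal sketch of these quantization steps could look as follows; it assumes binary piano rolls and a 12-dimensional chroma-weight vector W, and the variable names and the simplified note-density proxy are ours rather than the exact implementation:

```python
import numpy as np

def quantize_arousal(acc_roll: np.ndarray, rhy_roll: np.ndarray) -> float:
    """Quantized arousal |Arousal|_T: average note density over pitch and time.

    acc_roll, rhy_roll: binary piano rolls of shape (128, T), one column per
    16th note. The 1/(5T) normalization follows the equation above; the
    factor 5 is taken from the text as-is.
    """
    merged = np.clip(acc_roll + rhy_roll, 0, 1)      # p_T^ACC + p_T^RHY
    T = merged.shape[1]
    return float(merged.sum() / (5.0 * T))

def quantize_valence(chords: np.ndarray, chroma_weights: np.ndarray) -> float:
    """Quantized valence |Valence|_T: chroma-weighted chord content.

    chords: (12, T) chroma matrix with the root already normalized to C,
    chroma_weights: length-12 weight vector W scoring each chord tone.
    """
    return float(chroma_weights @ chords.sum(axis=1))
```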
§.§.§ Valence/Arousal Encoder
The Arousal and Valence Encoders both use an LSTM as the backbone network. The Arousal Encoder first extracts pitch-time-velocity features through a CNN with a (4, 12) kernel in the convolutional layer and a (1, 4) kernel in the max-pooling layer.
After feature extraction by the convolutional network, the arousal information is more concise and refined <cit.>, so the decoder can learn better emotional features.
Both LSTMs have a single layer and are bidirectional. The Arousal Encoder has an input dimension of 256 and an output dimension of 1024; the Valence Encoder has an input dimension of 32 and an output dimension of 1024. Each encoder computes the mean and variance of a probability distribution, which is sampled to obtain a 256-dimensional latent variable z_Arousal or z_Valence.
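A minimal PyTorch sketch of the Arousal Encoder is given below; the kernel sizes, LSTM width, and latent dimension follow the text, while the channel count, the reshaping between CNN and LSTM, and the lazy linear projection are our assumptions:

```python
import torch
import torch.nn as nn

class ArousalEncoder(nn.Module):
    """Sketch of the Arousal Encoder: CNN front-end plus a single-layer BiLSTM."""
    def __init__(self, lstm_in=256, lstm_hidden=1024, z_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(4, 12)),   # (4,12) convolution kernel
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),       # (1,4) max pooling
        )
        self.proj = nn.LazyLinear(lstm_in)          # flatten CNN features per time step
        self.lstm = nn.LSTM(lstm_in, lstm_hidden, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.mu = nn.Linear(2 * lstm_hidden, z_dim)
        self.logvar = nn.Linear(2 * lstm_hidden, z_dim)

    def forward(self, arousal):                     # arousal: (B, 1, T, 128)
        h = self.conv(arousal)                      # (B, C, T', F')
        h = h.permute(0, 2, 1, 3).flatten(2)        # (B, T', C*F')
        h = self.proj(h)                            # (B, T', 256)
        _, (h_n, _) = self.lstm(h)
        h_n = torch.cat([h_n[0], h_n[1]], dim=-1)   # concat both directions
        mu, logvar = self.mu(h_n), self.logvar(h_n)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        return z, mu, logvar
```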
§.§.§ Decoder
We introduce the Valence Decoder first. Its LSTM mirrors the encoder, except that the input is fused with z_Valence and the input dimension is changed to 292. The reconstructed valence is estimated by computing the mean and variance and is fed back into the LSTM as a token, completing the decoding part of the model. The probability distribution of valence is a 12-dimensional Bernoulli distribution.
The PianoTree Decoder follows the design of <cit.>, which we also use as a baseline. The original model has two main stages: time-domain decoding and per-pitch note decoding. Since notes may be concatenated into fragments with structural autocorrelation that forms music phrases, we add a note-summary operation after the time-domain decoding and introduce a self-attention mechanism, which we explain in detail in the next subsection.
The role of the first PianoTree LSTM in Figure <ref> is to decode the 512-dimensional latent-space vectors, which encode the latent changes of the notes. An LSTM (hidden size 1024) summarizes these changes along the temporal dimension, and we call the result the note summary, of size (1, 512). After relative self-attention is applied, the result is decoded along the pitch dimension by a second LSTM and mapped to 128 pitches through a fully connected layer. For each note (or note class), the temporal values are then decoded by an LSTM (hidden size 16) to obtain the reconstructed emotion flow / music sequence.
§.§.§ Relative Self-Attention
In order to maintain the structural organization of the music sequences, we introduce a self-attention mechanism. The inspiration comes from <cit.>, which compares a template music fragment with a training fragment and obtains the correlation of the relative positions in the two sequences via one-/two-dimensional convolution; the resulting correlation data is called self-similarity.
In this paper, self-similarity is not computed by convolution, because we have no template fragments, but via the note summary, a tensor of stacked pitch and emotion information in the time domain. Since self-attention obtains the autocorrelation of its input by soft addressing, it can capture the autocorrelation of the note summary in the time domain and thus maintain the structured organization of the music fragments as estimated "music phrases".
Since the relative positions of the sequences exhibit some time invariance <cit.>, we also introduce offsets. Because each fragment is not very informative, and to optimize efficiency, we use a single-head attention mechanism. The query, key, and value tensors of relative attention are written as Q, K, and V, respectively. S^{rel} is the offset matrix with elements r = NS_k - NS_q, where NS_k and NS_q are the position codes of the note-summary key and query. The formula for relative self-attention (abbreviated Att) is
Att = Softmax\left(\frac{QK^T + S^{rel}}{\sqrt{D}}\right) V.
As for the parameter settings, we set the weight dimension of Q to 1024 and the weight dimension of K, V to D=128.
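The following PyTorch sketch illustrates this single-head relative self-attention over the note summary; the learned offset-embedding table and the choice of a shared projection dimension for Q and K (so that QK^T is defined) are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSelfAttention(nn.Module):
    """Single-head relative self-attention: Att = softmax((QK^T + S_rel)/sqrt(D)) V."""
    def __init__(self, in_dim=512, q_dim=1024, d=128, max_offset=32):
        super().__init__()
        self.d = d
        self.W_q = nn.Linear(in_dim, q_dim)
        self.W_k = nn.Linear(in_dim, q_dim)          # K shares Q's dim so QK^T exists
        self.W_v = nn.Linear(in_dim, d)
        self.rel = nn.Embedding(2 * max_offset + 1, 1)   # scalar bias per offset r
        self.max_offset = max_offset

    def forward(self, ns):                           # ns: (B, T, in_dim) note summary
        B, T, _ = ns.shape
        q, k, v = self.W_q(ns), self.W_k(ns), self.W_v(ns)
        scores = q @ k.transpose(1, 2)               # (B, T, T)
        pos = torch.arange(T, device=ns.device)
        r = (pos[None, :] - pos[:, None]).clamp(-self.max_offset, self.max_offset)
        s_rel = self.rel(r + self.max_offset).squeeze(-1)   # (T, T) offset matrix S^rel
        att = F.softmax((scores + s_rel) / self.d ** 0.5, dim=-1)
        return att @ v                               # (B, T, d)
```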
§.§.§ Rules-based Constraint
Two rules are very common in improvised accompaniment; both enrich the performance by changing tonality. The first is to add variety by making small adjustments to the chords.
The second is to add a sense of layering between the different voices by shifting the tonality of the chords more substantially at the same time.
Either way, the chord arrangement is what matters most.
To use these rules in our accompaniment generator, we need to capture the key information and model it. Whether chord transposition or pitch shifting, both essentially shift pitch. So instead of inferring it from the model, we can use the chord arrangement and transposition information directly to shift the pitch and alter the generated accompaniment.
Obtaining the chord transposition information requires a simple calculation. We denote the originally labeled chords of the input melody as C^{pre} and the chords generated by PianoTree decoding as C^{gene}; each chord is represented in twelve-tone equal temperament, i.e., as a 12-dimensional vector. The two are compared, and the largest deviation is used as the criterion for transposition. Denoting the current bar number by i, the pitch shift ΔC is:
\Delta C = \arg\max\left(\frac{C^{pre}_i \cdot (C^{gene}_i)^T}{\| C^{pre}_i \| \cdot \| C^{gene}_i \|}\right)
Here T denotes matrix transposition.
Each bar thus has a best transposition choice, and a number of bars with large ΔC are selected for pitch shifting, so that tonality adjustment is achieved through rules and mathematical modeling.
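A minimal sketch of this selection step follows, reading the equation as a per-bar cosine similarity between labeled and generated chord vectors; the number of selected bars k is our placeholder, not a value from the text:

```python
import numpy as np

def transposition_candidates(c_pre: np.ndarray, c_gene: np.ndarray, k: int = 4) -> np.ndarray:
    """Pick the bars whose generated chords deviate most from the labeled chords.

    c_pre, c_gene: (num_bars, 12) chord vectors per bar. Each bar is scored by
    cosine similarity; the k least similar bars are returned as candidates for
    a rule-based pitch shift.
    """
    num = (c_pre * c_gene).sum(axis=1)
    den = np.linalg.norm(c_pre, axis=1) * np.linalg.norm(c_gene, axis=1) + 1e-9
    cos_sim = num / den
    return np.argsort(cos_sim)[:k]    # bars with the largest chord mismatch
```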
§.§ Training Objective
The training objective of a VAE <cit.> is largely standard: its loss function consists of a regularization loss and a reconstruction loss. To shorten the formulation, we abbreviate Valence and Arousal as V and A.
For the regularization loss, we set the Gaussian priors of Valence and Arousal to p(z_V) and p(z_A), and denote the posterior distributions after encoding as p(z_V|V) and p(z_A|A), respectively. The regularization loss between prior and posterior is measured by the KL divergence <cit.>, denoted KL(·).
For the reconstruction loss, we set the probability distribution of the Valence Decoder output to p(V|z_V) and that of the PianoTree Decoder output to p(A|z_V, z_A); the reconstruction loss is the expected log-probability under these distributions. In summary, the loss function Loss(V, A) of the model is
Loss(V, A) = E_p[log p(V|z_V) + log p(A|z_V,z_A)]
+ KL(p(z_V|V) || p(z_V)) + KL(p(z_A|A) || p(z_A))
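A minimal sketch of this objective could look as follows; we write the reconstruction terms as negative log-likelihoods so that the whole expression is minimized, model both of them with binary cross-entropy for simplicity, and add an optional beta weight that is our own addition rather than part of the text:

```python
import torch
import torch.nn.functional as F

def vae_loss(v_logits, v_target, a_logits, a_target,
             mu_v, logvar_v, mu_a, logvar_a, beta: float = 1.0):
    """Loss(V, A): reconstruction terms for both decoders plus the two KL
    terms KL(q(z_V|V) || p(z_V)) and KL(q(z_A|A) || p(z_A)) against
    standard-normal priors."""
    rec_v = F.binary_cross_entropy_with_logits(v_logits, v_target, reduction="sum")
    rec_a = F.binary_cross_entropy_with_logits(a_logits, a_target, reduction="sum")
    kl_v = -0.5 * torch.sum(1 + logvar_v - mu_v.pow(2) - logvar_v.exp())
    kl_a = -0.5 * torch.sum(1 + logvar_a - mu_a.pow(2) - logvar_a.exp())
    return rec_v + rec_a + beta * (kl_v + kl_a)
```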
§ EXPERIMENTS
§.§ Training Details of Our Proposed Model
The experiment was run on a host with a 12th Gen Intel(R) Core(TM) i7-12700H and a single NVIDIA GeForce RTX3060 6GB.
In Section <ref>, we described the dataset; we convert its MIDI files into a piano-roll representation and a 12-dimensional chord representation, respectively. We set the batch size to 128, and the model is trained with a time length of 32 for each arousal fragment and 8 for each valence fragment.
We train our VAE model for 6 epochs with a learning rate of 10^-3, exponentially decayed by a factor of 0.999 down to a minimum of 10^-5. To speed up training and reduce the risk of divergence, we use a teacher-forcing strategy: the teacher-forcing ratio is set to 0.6 for the Encoder-PianoTree Decoder and 0.5 for the Encoder-Valence Decoder.
§.§ Baseline Models
Our baseline models are Poly-Dis and M-GPT, taken from <cit.> <cit.>. Poly-Dis, a state-of-the-art disentanglement-learning model, decouples the representation of harmony and texture; unlike our rule-based constraint and modeling, it adjusts the generated accompaniment by learning prior and posterior sampling. M-GPT is a state-of-the-art piano music generation model that can harmonize the melody using auto-regressive principles.
§.§ Emotional Flow Comparison Test
The experiment aims to compare the correlation between the Emotional Flow entered by the user, used as a guide, and the Emotional Flow finally generated by the system. This is an important indicator of the effectiveness of the system's control over the input Emotional Factors.
We evaluate the correlation by comparing the Pearson coefficients between the two sequences, referring to the evaluation metrics in the paper <cit.>, so as to avoid misevaluation due to misalignment of the Emotional Flow.
We place two constraints on the user-supplied guiding Emotion Flow. First, each flow curve may contain at most five extreme points besides the start and end points: the melodic data of the samples does not exceed 90 s in length, and too many extreme points would mean too many melodic ups and downs, contrary to common composition practice. Second, each flow curve must have a certain amount of ebb and flow, because a nearly flat curve makes the correlation uninformative. Specifically, with \bar{V} and \bar{A} the mean values of the valence and arousal curves and T the duration of the melody, we require
\frac{1}{T}\int_0^T (V - \bar{V})^2 \, dt > 0.15
\frac{1}{T}\int_0^T (A - \bar{A})^2 \, dt > 0.15
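For illustration, both the constraint check and the Pearson-correlation evaluation can be sketched in a few lines; the discrete sampling of the curves and the extreme-point counting heuristic are our assumptions:

```python
import numpy as np

def flow_is_valid(curve: np.ndarray, min_var: float = 0.15, max_extrema: int = 5) -> bool:
    """Check the two constraints on a guiding emotion-flow curve:
    enough variance and at most five interior extreme points."""
    variance = float(np.mean((curve - curve.mean()) ** 2))
    d = np.diff(curve)
    extrema = int(np.sum(np.sign(d[1:]) * np.sign(d[:-1]) < 0))  # slope sign changes
    return variance > min_var and extrema <= max_extrema

def flow_correlation(guide: np.ndarray, generated: np.ndarray) -> float:
    """Pearson correlation between the guiding and the generated flow
    (both curves are assumed to be resampled to the same length)."""
    return float(np.corrcoef(guide, generated)[0, 1])
```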
The data for this experiment were the 20 "sample" pieces described in Section <ref>. Four typical cases were selected to visualize the results. Following the idea of controlled variables, we compare the correlation of the Arousal Flow in the low- and high-arousal cases and the correlation of the Valence Flow in the low- and high-valence cases. We also computed the average valence and arousal correlation over all 20 samples. For statistical convenience, high arousal/valence is denoted High Input Basis (HIB) and low arousal/valence is denoted Low Input Basis (LIB).
The visualization in Figure <ref>, a combination of a heat map and box plot, presents a comparison of the input and output Emotional Flow. The heat map illustrates the specifics of the Emotional Flow, while the box plot offers a broader statistical comparison. The results reveal that the mean values and quartiles of the Emotional Flow are similar for both the user input and the system output. This suggests that the system-generated Emotional Flow aligns with the user input statistically, regardless of the Emotional Flow's baseline.
We also compared the correlation values of the baseline models and our VAE model, as shown in Table <ref>, where the baseline is abbreviated Poly-Dis and our model is called VA-VAE.
The average correlation of our model outperforms the baseline models for both valence flow and arousal flow, and VA-VAE also outperforms the baselines under both HIB and LIB.
§.§ Subjective Musicality test
The subjective musicality assessment was a professional evaluation by music experts. A total of 44 junior and senior music majors and graduate students were invited. Each expert was randomly assigned two of the eight sample groups; each group contained two pieces of music, one with the accompaniment generated by the baseline Transformer model and the other with the accompaniment generated by the VA-VAE model. The two pieces were not distinguished by name; in other words, the evaluation was completely blind.
The experts evaluated the accompaniment from four angles: 1) whether the overall layout of the composition is appropriate; 2) whether the chords are chosen and connected harmoniously; 3) whether the rhythmic density (articulation points) fits the melody; and 4) whether a sub-melody or passing phrase accentuates the melody. Each angle is rated quantitatively on a scale of 1 to 5. We abbreviate the four perspectives as Q1, Q2, Q3, and Q4.
The experimental results are shown below, and the final score for each assessment perspective is based on the weighted average score.
From the experimental results shown in Fig. <ref>, we can see that the weighted average score of our VA-VAE model exceeds that of the baseline models for the overall layout of the texture (Q1), chord selection and connection (Q2), rhythmic counterpoint with the melody (Q3), and melodic underscoring (Q4). The overall arrangement of the accompaniment generated by our model is more reasonable, chord selection and connection are considered more fully, and the rhythm between accompaniment and melody is more organized and regular, which also supports the melody better. Overall, the accompaniment generated by our model has a more artistic character.
Refer to Figure <ref> for a visual representation of the music's attention structure.
The darker the color of a music phrase, the greater its attention weight. The different "music phrases" grouped by the attention mechanism are separated by dotted lines, showing that the music as a whole is well organized.
§.§ Ablation Study
For the ablation study, we abbreviate the control group without relative self-attention and the Rule Constraint (RC) as CG, the model after adding relative self-attention as CG+NS, and the model after additionally adding the Rule Constraint as CG+NSR. The quality of the accompaniment in the ablation experiment is assessed quantitatively. Quantitative metrics such as pass/fail ratios or null ratios are less applicable to our piano improvisation accompaniment task; the key criteria are the texture of the accompaniment, its harmony with the melody, and its contribution to the melody. This kind of evaluation closely resembles a translation task: the harmony of the accompaniment corresponds to the adequacy of a translated utterance, the texture arrangement to its wording, and the contribution to the melody to the synthesis and comparison of information. We therefore chose the MUTE evaluation metric from <cit.>, which is analogous to the F-score in translation, to quantitatively assess the quality of the accompaniment arrangement.
In MUTE, the F1 Score (FS) evaluates the "translation accuracy" of the accompaniment over all 128 pitches and is suitable for evaluating texture, while the F1 Score Pitch Class (FSPC) folds the pitches onto the 12 pitch classes and is therefore suitable for evaluating harmony.
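Our reading of these two scores can be sketched as follows (frame-wise F1 over binary piano rolls; the exact alignment and thresholding conventions of MUTE are not reproduced here):

```python
import numpy as np

def _f1(pred: np.ndarray, ref: np.ndarray) -> float:
    tp = np.logical_and(pred > 0, ref > 0).sum()
    prec = tp / max((pred > 0).sum(), 1)
    rec = tp / max((ref > 0).sum(), 1)
    return 2 * prec * rec / max(prec + rec, 1e-9)

def f1_score_fs(pred_roll: np.ndarray, ref_roll: np.ndarray) -> float:
    """FS: frame-wise F1 over binary (128, T) piano rolls; sensitive to texture."""
    return _f1(pred_roll, ref_roll)

def f1_score_fspc(pred_roll: np.ndarray, ref_roll: np.ndarray) -> float:
    """FSPC: fold the 128 pitches onto 12 pitch classes first; sensitive to harmony."""
    def fold(roll):
        pc = np.zeros((12, roll.shape[1]))
        for p in range(roll.shape[0]):
            pc[p % 12] = np.maximum(pc[p % 12], roll[p])
        return pc
    return _f1(fold(pred_roll), fold(ref_roll))
```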
As seen in Table <ref>, the model incorporating relative self-attention and RC outperforms the CG and CG+NS control groups on both the FS and FSPC metrics. For both harmony and texture, the added relative self-attention mechanism and rule constraint lead to better designed and orchestrated, higher-quality accompaniment. Further, we visualized the comparison test of the rule constraints, as shown in Figure <ref>, and found that the rule constraints indeed shift the range of the accompaniment to better harmonize with the melody.
§ CONCLUSION
In this study, we investigate the generation of musical accompaniment guided by an emotion flow. We focus on two key aspects of the problem. First, we establish a mechanism for converting emotion flows into musical information and a VAE architecture tailored to this quantized emotion data, allowing us to control the network with emotional factors. Second, we optimize the structural planning of accompaniment generation by introducing self-similarity and a relative self-attention mechanism, and we further improve the local and global tonality of the music with rule constraints. This layer-by-layer approach, progressing from the whole to the local, allows us to create an automatic accompaniment system with strong emotion-flow control and high-quality music generation.
In the future, we plan to improve our research further. Currently, the accompaniment is generated by a single instrument; we intend to extend it to multiple instruments to create an automated orchestra. Additionally, the representation of emotion flow is not yet intuitive, and we will research better visualization methods to make the system more user-friendly.
§ ACKNOWLEDGMENT
This research was funded by the Regular Projects of the Humanities and Social Sciences Fund of the Ministry of Education under Grant No. 16YJAZH080.
§ REFERENCES
b1 Wu, Yi-Chan, and Homer H. Chen. "Emotion-flow guided music accompaniment generation." 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016.
b2 Thayer, Robert E. The biopsychology of mood and arousal. Oxford University Press, Oxford, UK, 1990, ch. 2-5.
b3 Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription." arXiv preprint arXiv:1206.6392 (2012).
b4 Choi, Keunwoo, George Fazekas, and Mark Sandler. "Text-based LSTM networks for automatic music composition." arXiv preprint arXiv:1604.05358 (2016).
b5 Dua, Mohit, et al. "An improved RNN-LSTM based novel approach for sheet music generation." Procedia Computer Science 171 (2020): 465-474.
b6 Lyu, Qi, et al. "Modelling high-dimensional sequences with lstm-rtrbm: Application to polyphonic music generation." Twenty-Fourth International Joint Conference on Artificial Intelligence. 2015.
b7 Yang, Li-Chia, Szu-Yu Chou, and Yi-Hsuan Yang. "MidiNet: A convolutional generative adversarial network for symbolic-domain music generation." arXiv preprint arXiv:1703.10847 (2017).
b8 Luo, Jing, et al. "MG-VAE: deep Chinese folk songs generation with specific regional styles." Proceedings of the 7th Conference on Sound and Music Technology (CSMT). Springer, Singapore, 2020.
b9 Lattner, Stefan, Maarten Grachten, and Gerhard Widmer. "Imposing higher-level structure in polyphonic music generation using convolutional restricted boltzmann machines and constraints." Journal of Creative Music Systems 2 (2018): 1-31.
b10 Zhao, Jingwei, and Gus Xia. "AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer." arXiv preprint arXiv:2108.11213 (2021).
b11 Hadjeres, Gaëtan, François Pachet, and Frank Nielsen. "Deepbach: a steerable model for bach chorales generation." International Conference on Machine Learning. PMLR, 2017.
b12 Huang, Cheng-Zhi Anna, et al. "Music transformer." arXiv preprint arXiv:1809.04281 (2018).
b13 Huang, Yu-Siang, and Yi-Hsuan Yang. "Pop music transformer: Beat-based modeling and generation of expressive pop piano compositions." Proceedings of the 28th ACM International Conference on Multimedia. 2020.
b14 Wu, Shih-Lun, and Yi-Hsuan Yang. "The Jazz Transformer on the front line: Exploring the shortcomings of AI-composed music through quantitative measures." arXiv preprint arXiv:2008.01307 (2020).
b15 Jin, Cong, et al. "A transformer generative adversarial network for multi‐track music generation." CAAI Transactions on Intelligence Technology 7.3 (2022): 369-380.
b16 Wang, Ziyu, and Gus Xia. "MuseBERT: Pre-training Music Representation for Music Understanding and Controllable Generation." ISMIR. 2021.
b17 Jiang, Junyan, et al. "Transformer VAE: A hierarchical model for structure-aware and interpretable music representation learning." ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020.
b18 Tanaka, Keitaro, et al. "Pitch-Timbre Disentanglement Of Musical Instrument Sounds Based On Vae-Based Metric Learning." ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021.
b19 Yang, Ruihan, et al. "Deep music analogy via latent representation disentanglement." arXiv preprint arXiv:1906.03626 (2019).
b20 Song, Kai, Xia Liang, and Junmin Wu. "ViT-based VQ-VAE Generative Network for Accompaniment Generation." 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence. 2021.
b21 Liu, Weiming. "Literature survey of multi-track music generation model based on generative confrontation network in intelligent composition." The Journal of Supercomputing (2022): 1-23.
b22 Wu, Yi-Chan, and Homer H. Chen. "Emotion-flow guided music accompaniment generation." 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016.
b23 Wallis, Isaac, et al. "A rule-based generative music system controlled by desired valence and arousal." Proceedings of 8th international sound and music computing conference (SMC). 2011.
b24 Morreale, Fabio, and Antonella De Angeli. "Collaborating with an autonomous agent to generate affective music." Computers in Entertainment (CIE) 14.3 (2016): 1-21.
b25 Miyamoto, Kana, Hiroki Tanaka, and Satoshi Nakamura. "Online EEG-Based Emotion Prediction and Music Generation for Inducing Affective States." IEICE TRANSACTIONS on Information and Systems 105.5 (2022): 1050-1063.
b26 Kaliakatsos-Papakostas, Maximos, Andreas Floros, and Michael N. Vrahatis. "Artificial intelligence methods for music generation: a review and future perspectives." Nature-Inspired Computation and Swarm Intelligence (2020): 217-245.
b27 Boulesteix, Anne‐Laure, et al. "Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2.6 (2012): 493-507.
b28 Eddy, Sean R. "What is a hidden Markov model?." Nature biotechnology 22.10 (2004): 1315-1316.
b29 Hearst, Marti A., et al. "Support vector machines." IEEE Intelligent Systems and their applications 13.4 (1998): 18-28.
b30 Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription." arXiv preprint arXiv:1206.6392 (2012).
b31 Dahale, Rishabh, et al. "Generating Coherent Drum Accompaniment With Fills And Improvisations." arXiv preprint arXiv:2209.00291 (2022).
b32 Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems 30 (2017).
b33 Ren, Yi, et al. "Popmag: Pop music accompaniment generation." Proceedings of the 28th ACM International Conference on Multimedia. 2020.
b34 Temperley, David. The cognition of basic musical structures. MIT press, 2004: 10-20.
b35 Wang, Ziyu, et al. "Pop909: A pop-song dataset for music arrangement generation." arXiv preprint arXiv:2008.07142 (2020).
b36 Medeot, Gabriele, et al. "StructureNet: Inducing Structure in Generated Melodies." ISMIR. 2018.
b37 Wang, Ziyu, et al. "Pianotree vae: Structured representation learning for polyphonic music." arXiv preprint arXiv:2008.07118 (2020).
b38 Sharif Razavian, Ali, et al. "CNN features off-the-shelf: an astounding baseline for recognition." Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2014.
b39 Wang, Ziyu, et al. "Pianotree vae: Structured representation learning for polyphonic music." arXiv preprint arXiv:2008.07118 (2020).
b40 An, Jinwon, and Sungzoon Cho. "Variational autoencoder based anomaly detection using reconstruction probability." Special Lecture on IE 2.1 (2015): 1-18.
b41 Wang, Ziyu, et al. "Learning interpretable representation for controllable polyphonic music generation." arXiv preprint arXiv:2008.07122 (2020).
b42 Lim, Hyungui, Seungyeon Rhyu, and Kyogu Lee. "Chord generation from symbolic melody using BLSTM networks." arXiv preprint arXiv:1712.01011 (2017).
b43 Gover, Matan, and Oded Zewi. "Music Translation: Generating Piano Arrangements in Different Playing Levels." Ismir 2022 Hybrid Conference. 2022.
|
http://arxiv.org/abs/2307.03958v1 | 20230708114851 | Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact | [
"Markus Dahlmanns",
"Constantin Sander",
"Robin Decker",
"Klaus Wehrle"
] | cs.CR | [
"cs.CR",
"cs.NI"
] |
Secrets Revealed in Container Images: An Internet-wide Study on Occurrence and Impact
Markus Dahlmanns, Constantin Sander, Robin Decker, Klaus Wehrle
Communication and Distributed Systems, RWTH Aachen University Aachen Germany
{dahlmanns, sander, decker, wehrle}@comsys.rwth-aachen.de
Containerization allows bundling applications and their dependencies into a single image.
The containerization framework Docker eases the use of this concept and enables sharing images publicly, gaining high momentum.
However, it can lead to users creating and sharing images that include private keys or API secrets—either by mistake or out of negligence.
This leakage impairs the creator's security and that of everyone using the image.
Yet, the extent of this practice and how to counteract it remains unclear.
In this paper, we analyze numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax other private registries unveiling that pctaffectedimages of images indeed include secrets.
Specifically, we find validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches leaked API secrets, both opening a large attack surface, i.e., putting authentication and confidentiality of privacy-sensitive data at stake and even allow active attacks.
We further document that those leaked keys are used in the wild:
While we discovered casignedcerts certificates relying on compromised keys being issued by public certificate authorities, based on further active Internet measurements, we find 20220901numuniquehosts TLS and SSH hosts using leaked private keys for authentication.
To counteract this issue, we discuss how our methodology can be used to prevent secret leakage and reuse.
August 12, 2023
===================
§ INTRODUCTION
While originally developed to isolate applications <cit.>, containerization has become a new cornerstone of interconnected services as it significantly eases their deployment <cit.>.
To this end, Docker, the most prominent containerization framework <cit.>, uses prebuilt images that include all software dependencies necessary to deploy an application <cit.>.
Users only need to download an image from a registry or can derive their own image by adapting its configuration and included files.
These new images can then again be uploaded building a whole ecosystem of containerized applications.
For example, Docker Hub, the official Docker registry, comprises more than 9000000 images <cit.> anybody can use.
With this level of public exposure, any mistake during image creation can have drastic consequences.
Most notably, including confidential secrets such as cryptographic keys or API secrets, by mistake or out of negligence, can introduce two security issues:
[(i)]
* attackers can misuse compromised secrets leading to potential loss of data, money, privacy, or control, and
* administrators instantiating images can rely on broken security, e.g., paving the way for Man-in-the-Middle attacks.
Aggravatingly, there is no easy tooling to show which files have been added—accidentally adding a secret is thus much easier than identifying such an incident.
Indeed, related work traced three reused private keys authenticating 6000 (Industrial) Internet of Things services back to their occurrence in a Docker image <cit.>.
Additionally, blog entries produced anecdotal evidence that Docker images include further confidential security material <cit.>.
However, comprehensive analyses on revealed security secrets at scale do not exist in this realm.
Instead, such analyses focus on GitHub repositories <cit.>.
Hence, the extent for container images is unknown.
In this paper, we thus comprehensively study whether Docker images include confidential security material and whether administrators reuse these compromised secrets at large scale by
[(i)]
* scanning publicly available Docker images for confidential security material, and
* measure whether these secrets are used in practice on production deployments.
To this end, we analyze images available on the official and largest registry Docker Hub as well as examine the entire IPv4 address space for public registries and services relying their security on compromised secrets.
Contributions Our main contributions are as follows.
* We found privatemeasurementnumtotalmax Docker registries in the IPv4 address space that contain not only secrets but also potentially confidential software and likely allow attackers to replace images, e.g., with malware.
* After filtering test secrets, we identified totalvalidmatches leaked distinct secrets, i.e., validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets, in numaffectedimages images (pctaffectedimages of images we scanned are affected).
* We show that operators use 20220901corrFingerprint compromised private keys in practice affecting the authenticity of 20220901numuniquehosts Internet-reachable hosts providing, i.a., HTTP, AMQP, MQTT, and LDAP services.
* We discuss improvements of the Docker paradigm to prevent secret leakage and reuse in the future as well as provide our software used to find and verify secrets <cit.> to support mitigation.
§ A PRIMER ON THE DOCKER PARADIGM
In contrast to other containerization frameworks, Docker <cit.> does not only provide an isolated execution environment for applications.
Instead, Docker specifies an easy-to-use paradigm to create, share and deploy ready-to-run container images <cit.>.
These images constitute the filesystems of the containers and include all dependencies necessary for the actual applications, i.e., they can include all kinds of files added during creation.
The completeness of these images allows to share them via (publicly accessible) registries.
Figure <ref> shows the structure and lifecycle of Docker images in detail, from creating images to sharing and running them.
Image Creation
To create an image, Docker uses a user-defined Dockerfile <cit.> to specify the image ingredients.
First 1, the Dockerfile references another image, the base image, which is downloaded from a registry and comprises the initial file system of the new image.
Second 2, image layers consisting of differential snapshots of the file system after running commands from the Dockerfile are created and stacked on each other <cit.>.
These commands can include shell statements to, e.g., compile an application running in the container.
Furthermore, specific commands exist to embed environment variables or to add files from the host system into the image <cit.>.
While the files can be, e.g., source code or further dependencies, image creators can also easily and accidentally include (cryptographic) secrets into the image or its environment variables, putting the service's security at risk when leaked.
Once an image has been fully created, it exists as a self-containing unit, which is ready-to-run but also allows little insight on what has been added.
Image Push
After generating the image, creators can push it to a registry <cit.>, e.g., the official and largest registry Docker Hub <cit.>, allowing to deploy containers among an own fleet of servers easily, but also to share it with other users <cit.>.
To this end, the image layers are uploaded to the registry under a repository name and tag 3.
Thereby, the repository name typically represents the application in the image, and the tag describes a version.
Conventionally, creators tag the newest image in a repository with latest.
Container Deployment
To run a Docker container, users pull an image from a registry.
When pulling, users first request an image manifest <cit.> from the registry, including meta information about the image and its layers.
After downloading all layers 4, Docker merges the content composing the file system for the new container 5 <cit.>.
The application then finds an unchanged file system with all content provided by the image creator, i.e., all dependencies but also potentially added secrets, and can very likely provide services to the public Internet.
Since numerous containers of various users can base on a single image, included, and thus compromised, secrets could affect several deployments.
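For illustration, the following sketch (Python with the requests library; the function name and the assumption of an unauthenticated registry are ours, and Docker Hub additionally requires a bearer token) shows how an image manifest and its configuration can be fetched via the Registry HTTP API v2; the configuration alone already exposes the environment variables:

```python
import requests

def pull_image_metadata(registry: str, repo: str, tag: str = "latest"):
    """Fetch an image manifest and its configuration blob.  The configuration
    contains the environment variables, so no layer has to be downloaded to
    inspect them."""
    headers = {"Accept": "application/vnd.docker.distribution.manifest.v2+json"}
    manifest = requests.get(f"{registry}/v2/{repo}/manifests/{tag}",
                            headers=headers, timeout=10).json()
    config_digest = manifest["config"]["digest"]
    config = requests.get(f"{registry}/v2/{repo}/blobs/{config_digest}",
                          timeout=10).json()
    env = config.get("config", {}).get("Env", [])          # environment variables
    layers = [layer["digest"] for layer in manifest["layers"]]
    return env, layers
```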
The Docker paradigm eases distribution and deployment of applications.
However, insight into what is added in images and up- or downloaded from a registry can be lost.
Thus, secrets can be leaked and reused, impairing Internet-reachable services at scale.
§ RELATED WORK
Three streams of research motivate our analysis of confidential security material in Docker images: studies that detect leaked security material, research on publicly available Docker images, and Internet-wide scans disclosing security weaknesses at scale.
Actively Leaked Security Material
Currently, the search for leaked security material focuses on code repositories.
Several studies detected the leakage of passwords <cit.>, SSH private keys <cit.>, Amazon Cloud API keys <cit.>, and Slack API keys <cit.>, using the built-in search of GitHub.
To allow broader searches, researchers entailed regular expressions but focused on specific file types <cit.> or code snippets <cit.>, i.e., the scale of this research was limited.
In contrast, Meli et al. performed a large scale study without focusing on specific file types, showing that ∼3.5 of the 4 analyzed code repositories on GitHub included leaked secrets <cit.>.
Further approaches use machine learning to improve the detection by relying on code semantics <cit.>, false-positive detection <cit.>, or both requiring further user input <cit.>.
Away from GitHub, research proposed methods to investigate various platforms <cit.> and proved the presence of secrets in publicly available Android apps <cit.>.
A recent study underlines that most developers experienced secret leakage, and guidelines are insufficient for prevention <cit.>.
While retroactively deleting leaked secrets does not help <cit.>, (non)-commercial approaches, e.g., GitGuardian <cit.>, TruffleHog <cit.>, or Gitrob <cit.>, aim at preventing secret leakage for Git.
Docker Images
Besides Git, researchers and developers, early on without evidence, assumed leaked secrets in images for virtual machines or Docker and provided countermeasures <cit.>.
Nevertheless, non-academic Web-blog studies <cit.> still find leaked secrets in images on Docker Hub.
However, these studies either limit their scale <cit.> to a few thousand images/secrets or restrict their methodology <cit.> to process large amounts of available images.
The latter study <cit.> finds 46076 affected images among 6.3 images on Docker Hub, but only considers information available in Dockerfiles, e.g., specific file paths.
Meanwhile, SecretScanner <cit.>, a smaller secret search tool, implements a function allowing users to find secrets in Docker images.
Still, a comprehensible, large-scale, and methodology-driven analysis on introduced security weaknesses by leaked security material is missing.
Instead, large-scale studies on Docker images focused on data compression <cit.>, software vulnerabilities <cit.>, or typosquatting of image names <cit.>.
Hence, as of now, it is unclear how widespread secret leakage is in images on Docker Hub as well as private Internet-reachable registries.
Moreover, it is unknown to what extent these compromised images are then used on the Internet and whether they weaken security at scale.
Internet Measurements
For understanding deployment security at scale, Internet-wide measurements have been a valuable tool in the past.
Internet scan services, such as Shodan <cit.> or Censys <cit.>, fetch and publish meta-information, e.g., security configurations, on Internet-reachable services.
Although these services often helped researchers analyzing the security of connected devices, e.g., cars <cit.> or (insecure) Industrial IoT (IIoT) deployments <cit.>, they usually do not see all deployments <cit.>.
Hence, researchers frequently conduct own active Internet measurement, e.g., using ZMap <cit.>.
On the web, these measurements allowed to analyze the deployment of new TLS versions <cit.> and revealed wide security configuration mistakes <cit.> or implementation deficits <cit.>.
Aside the web, researchers assessed the security of SSH services <cit.> and key-value stores leaking confidential data <cit.>.
For the IoT and IIoT, research revealed many deployments relying on vulnerable software <cit.> and communicating without any security mechanism <cit.>, e.g., access control.
Even with built-in security features, operators often configure such services insecurely <cit.>.
For example, a massive reuse of certificates was traced back to a Docker image including certificates and corresponding private keys <cit.> jeopardizing the authenticity of numerous deployments.
Based on this, we claim that it is probable that there are further public Docker images that wrongly include confidential secrets and harm security on the Internet—especially when looking at the sheer size of Docker and Docker Hub.
Although the broad leakage of security secrets in code repositories is well understood, the spread of revealed secrets in Docker images and the introduced security risk for the Internet are unknown.
However, known secret leakage detection techniques and Internet measurements are predestined to shed light on these issues.
§ COMPOSING OUR DATASET
To answer whether Docker image creators actively compromise security secrets by publishing them in openly available Docker images, we set out and retrieve images from Docker Hub (Section <ref>) and publicly reachable private registries (Section <ref>).
§.§ Retrieving Images from Docker Hub
Table <ref> guides through our composition process on Docker Hub, which has three tasks:
[(i)]
* composing a list of repositories,
* selecting one image per repository to widely spread our analysis, and
* identifying layers the images consist of.
§.§.§ Repositories
While Docker Hub limits the number of image downloads <cit.> and we cannot download and analyze all 15 of images available on Docker Hub <cit.> due to runtime and bandwidth restrictions, our analysis requires a selection of repositories of interest.
Furthermore, Docker Hub does not support listing all available images to choose from.
Hence, we use specific search terms to get images users retrieve when searching via the Web interface.
Our search terms (which we elaborate in more detail in Appendix <ref>) build two query groups (Table <ref> (left));
Standard comprises mainstream communication protocol names <cit.> and frequently used technologies <cit.> for a wide analysis of images referencing current issues.
For comparison and more focusing on a specific area, we choose the Industrial Internet of Things (IIoT) as past studies showed a great susceptibility to security faults <cit.>, i.e., IIoT includes protocol names from this area.
We list the number of repositories covered by our analysis per query group, i.e., the sum of found repositories of all search terms of a group, in Table <ref> (column Repositories-#).
To further convey the prevalence of our search terms, we indicate the minimum, maximum, and 25-, 50-, and 75-percentiles of search results for included terms, i.e., higher values of lower percentiles would imply a higher prevalence.
While both query groups contain terms that lead to no results (min), i.e., the term is not mentioned in any repository name or description, terms in the standard group generate more results due to their closer correlation to frequently used technologies than IIoT protocols (p_25, p_50, p_75).
Docker Hub's API limits the number of results to 10000 (max).
As different search terms lead to overlapping repositories, we further report on the distinct number of repositories gradually, i.e., per query group, and overall.
In total, we gathered distinctnumrepooverall distinct repositories subject to our study of which standarddistinctpctrepopergrouponly are uniquely added by our standard search terms and iiotdistinctpctrepopergrouponly by IIoT related search queries.
§.§.§ Images
Table <ref> (column Images-#) indicates how many images were available in total over the distinct repositories of a search group.
While repositories mostly contain different images, including the same software in other versions and thereby comprising similar files, we choose to analyze one tag per repository to spread our analysis as widely as possible.
Here, we select images tagged with latest, which is Docker's default and typically refers to the newest version of an image.
However, not all repositories contain images tagged with latest (as shown in Table <ref>, column Images-).
Here, we select the image with the latest changes (as reported by Docker Hub's API).
Empty repositories (Table <ref> (column Images-none)), i.e., have no image layers available, cannot include any secrets.
Besides the number of images that are covered by our study (column Images-analyzed), we also report on the age of the images to analyze how long they are already available on Docker Hub.
The ages of images in both query groups roughly follow the same distribution, indicating that although the number of images found by our IIoT-related queries is lower, their creators update their images at the same frequency as the creators of images in our Standard group.
§.§.§ Layers
While we report on the number of layers included in all images (Table <ref> column Layers-#), different images often share the same layers, e.g., layers from frequently used base images.
Hence, to speed up our search for leaked secrets, we analyze each distinct layer only once.
We show the distinct number of layers gradually, i.e., per query group, and overall.
To cover all distinctnumrepooverall repositories, we analyze distinctnumlayersoverall layers.
(standarddistinctpctlayersgroup uniquely added by Standard-related, iiotdistinctpctlayersgroup by IIoT-related repositories).
§.§ Images from Private Docker Registries
Since image creators might upload sensitive images preferably to private registries, we want to include images from these registries in our analysis.
Table <ref> shows our steps taken to extend our dataset with images from private registries, i.e., we search private registries, and, subsequently, include a subset of available layers.
§.§.§ Find Private Registries and Repositories
To find publicly reachable Docker registries, we scan the complete IPv4 address space for services running on the standard port for Docker registries, i.e., TCP port 5000, under comprehensive ethical measures (cf. Appendix <ref>) twice to analyze short-term fluctuations (Table <ref> (left)).
Both times, we perform a TCP SYN scan using <cit.>, identifying hosts running a service behind this port and subsequently send an HTTP request as defined by Docker's Registry API <cit.> for verification.
Whenever we do not receive a valid HTTP response, we retry via HTTPS.
While we found up to privatemeasurementnumtotalmax private registries on privatemeasurementdatemax, the difference in found registries in comparison to our scan on privatemeasurementdatemin is due to registries in Amazon AWS-related ASes that do not reply after our first scan anymore.
Since these registries only contain the same and single image (uhttpd), they might relate to another research project, e.g., implementing a registry honeypot.
Contrarily to Docker Hub's API, the API of private registries allows listing available repositories without search terms.
However, we limit our requests to receive a maximum of 100 repositories per registry to prevent any overloads.
As such, the found private registries provide privatemeasurement220801repositorysum resp. privatemeasurement220806repositorysum repositories.
Since the registries do not implement access control for read access, clients are able to download all included images.
Notably, write access is also not restricted by default <cit.>, i.e., attackers might be able to inject malware.
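The following sketch outlines, under our assumptions about timeouts and TLS handling, how such a verification and catalog request could look; it mirrors the described HTTP-first/HTTPS-fallback behavior but is not the measurement tool used in this study:

```python
import requests

def probe_registry(host: str, port: int = 5000, max_repos: int = 100):
    """Check whether a host answers like a Docker registry and, if so, list
    up to max_repos repositories via the /v2/_catalog endpoint."""
    for scheme in ("http", "https"):
        base = f"{scheme}://{host}:{port}"
        try:
            r = requests.get(f"{base}/v2/", timeout=5, verify=False)
        except requests.RequestException:
            continue
        if r.status_code in (200, 401):          # registries answer on /v2/
            cat = requests.get(f"{base}/v2/_catalog", params={"n": max_repos},
                               timeout=5, verify=False)
            return cat.json().get("repositories", []) if cat.ok else []
    return None
```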
While being publicly available on private registries but not filtered by any search terms, the content of these images is of special interest.
Here, the repository name often indicates the image's content and thus allows conclusions on widely distributed applications, i.e., over both measurements, uhttpd is the most reoccurring repository name (reoccurring privatemeasurement0sum times, but only during our first scan).
Repository names on the second and third place, i.e., and , indicate proxy and cloud services where image creators might have included security secrets before uploading it to their registry.
Beyond the scope of security secrets, other repository names occurring less often, e.g., or , imply that image creators might include confidential software, source code, private data, or information on systems especially worthy of protection in openly available Docker images.
§.§.§ Image and Layer Selection
For all found repositories, we collect the lists of available images and their tags (Table <ref> (center)).
Although private registries typically do not implement any rate limiting like Docker Hub, we do not want to overload found registries or their Internet connections.
Hence, to spread our analysis as far as possible but limit the load on each registry, we choose one tag per image.
Similar to our selection process on Docker Hub, we typically select the image tagged as latest in each repository to download the corresponding manifest.
Whenever no latest image is available, we sort all available images naturally by their tag (to account for version numbers as tags) and select the maximum (i.e., the newest version), as the API does not provide any information on the latest changes.
Subsequently, we download the corresponding image manifests to retrieve accompanying layers.
To further limit load on Internet connections of found registries, we do not download all available layers for included secrets.
Instead, we randomly select layers of chosen images such that the sum of their sizes does not exceed 250 per registry and per measurement.
All in all, we added privatenumdistinctlayersselected layers from private registries to our dataset.
In parallel to Docker Hub numerous private registries exist providing images to the public.
Overall, we assemble a dataset of numconsideredlayersoverall layers from numnonemptyimages images subject to our future research.
Furthermore, private registries might allow attackers to, e.g., inject malware, potentially infecting container deployments at scale as well.
§ LEAKED SECRETS IN DOCKER IMAGES
Next, we search in considered images for included secrets (Section <ref>), discuss the origin of affected images to later evaluate remedies (Section <ref>), and analyze also found certificates compromised due to private key leakage to estimate arising risks (Section <ref>).
§.§ Searching for Secrets
To analyze available images for included secrets, we align our approach to established methods <cit.>, i.e., we choose and extend regular expressions identifying specific secrets and match these on files and environment variables.
Additionally, we extensively filter our matches to exclude false positives.
§.§.§ Regular Expression Selection
We base our selection of regular expressions on previous work to find secrets in code repositories <cit.> (we further elaborate on our election process and expressions in Appendix <ref>).
Table <ref> (left) names the domains of secrets that our selected expressions match and indicates how attackers could misuse these secrets.
We start with regular expressions composed by Meli et al. <cit.> due to their selection of unambiguous expressions (reducing false positives) matching secrets with a high threat when leaked.
We extend their expressions for private keys to match a larger variety, e.g., also OpenSSH private keys.
Moreover, we widen the set by expressions matching API secrets of trending technologies <cit.> based on match rules from TruffleHog <cit.>.
However, TruffleHog's rules are relatively ambiguous and incur many false positives, which TruffleHog filters by validating the API secrets against their respective endpoints.
As our ethical considerations do not allow for any further use of the secrets (cf. Appendix <ref>), we focus on rules which expect at least one fixed character and later add further filtering and verification steps.
§.§.§ Matching Potential Secrets
To analyze whether image layers include secrets, we match the selected regular expressions on the images as follows (we will open-source our tool on acceptance of this paper):
We download and decompress the image layers and then match our regular expressions on the included files.
Moreover, we recursively extract archive files up to a depth of 3 and match again.
As API documentations often suggest setting secrets in environment variables and not writing them into files, we analyze set variables.
Since Docker allows downloading the small image configuration containing set variables aside of the image, i.e., potential attackers do not have to download and search through all files to find included secrets, we analyze variables separately:
As such, we only download the image configuration file and iterate our regular expression over set environment variables.
Here, we adapt the API expressions, as some expect a specific term before the secret (cf. Table <ref> in Appendix <ref>), e.g., the service name as part of a variable name.
As the variable names and values are separated in the configuration file, we also split the according expressions and match them individually.
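To illustrate the matching step, a simplified sketch could iterate over the files of one layer and apply a small set of example patterns; the AWS key-ID and PEM-header expressions below are well-known example patterns, not our full rule set, and the recursive extraction of nested archives is omitted:

```python
import re
import tarfile

# Two comparatively unambiguous example patterns.
PATTERNS = {
    "private_key": re.compile(rb"-----BEGIN (?:RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----"),
    "aws_access_key_id": re.compile(rb"AKIA[0-9A-Z]{16}"),
}

def scan_layer(layer_tar: str):
    """Iterate over the files of one image layer (a tar archive) and report
    which patterns match in which file."""
    hits = []
    with tarfile.open(layer_tar) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            for name, pattern in PATTERNS.items():
                if pattern.search(data):
                    hits.append((member.name, name))
    return hits
```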
Table <ref> (center) lists for each secret domain how many matches and how many distinct matches we found in both, image content and environment variables.
Notably, while only covering two services, i.e., Facebook and Twitter, the expressions in the Social Media domain matched most often of all domains, which already indicates that API secrets in this domain are particularly prone to leakage.
The high redundancy of the matches, visible as the significant decrement between distinct and non-distinct matches, already hints at invalid matches, e.g., private keys or example API tokens prevalent in unit tests or documentation in several layers.
Indeed, the most reoccurring match (mostreoccurringnumocc times in mostreoccurringnumlayer different layers), is an example key for mostreoccurringrule from a library documentation which creators usually include in their images.
We thus validate our matches extensively.
§.§.§ Match Validation
To exclude test keys for cryptographic libraries, example API secrets, and completely invalid matches to get a near lower bound of harmful leaked secrets in Docker images, we use different filters depending on the secret type.
While we show the number of resulting valid secrets in Table <ref> (right), Figure <ref> details the filtering results separated by the match's origin, i.e., image content or environment variable and domain.
Private Keys
Our regular expressions for private keys match on PEM or XML formatted keys.
Thus, we can first exclude every match that is not parsable (filter Unparsable).
Figure <ref> shows that only a minority of all potential private keys in image layers are unparsable, underlining that image creators include and compromise private keys actually usable in final Docker containers for practical operations.
Contrarily, the single match within the environment variables is only a key fragment and thus not parsable.
Still, we expect a high number of software test keys in Docker images among found keys, as they are part of several libraries creators might include in their images, e.g., OpenSSL.
Since users will most likely not use such keys to secure their deployments, we filter out test keys that are included in kompromat <cit.>, a repository listing already compromised secrets (filter Kompromat).
More specifically, we filter keys occurring in RFCs (kompromatfoundrfcnumdistinct), libraries for software tests (kompromatfoundsoftwaretestsnumdistinct), or as special test vectors (kompromatfoundtestvectorsnumdistinct).
To also account for software test keys that are not available in kompromat, we analyze the file paths where respective keys were found (filter File).
While we do not generally exclude all paths containing signal words indicating test or example keys, as users might use such paths also for keys they generated and use in practice, we apply different measures.
For instance, based on locations of test keys identified using kompromat, we deliberately exclude matches in similar locations, i.e., keys within directories where we already detected test keys and all parent directories under which we find more than 2/3 test keys.
Last, we exclude file paths typically used by libraries (cf. Appendix <ref>), e.g., , as there is a lower chance that users adapt their keys here.
Figure <ref> shows that these filters process the largest share of excluded private key matches.
It further indicates that kompromat only includes a minority of software test keys, i.e., is not directly usable to exclude all false-positive matches.
Still, many of the found keys are not filtered and, thus, most likely, no software test keys.
In total, we found validprivatekeyvalidnumdistinctmatchestotal valid private keys potentially in use in practice (cf. Table <ref> (right)).
Since all of these keys are located in files, attackers would have to download the respective image layers to get access, and not only the meta information from which environment variables can be retrieved.
Still, since these keys are publicly available and thus compromised, usage in production puts authentication at stake, i.e., attackers can perform impersonation attacks.
API Secrets
Since our ethical considerations deter us from validating API secrets against their service endpoints (cf. Appendix <ref>) as applied by TruffleHog <cit.>, and related methods for false positive detection focus on matches in source code <cit.>, which is not prevalent in Docker images, we need alternative measures to filter invalid matches.
By manually supervising our filtering, we ensure that the final set only includes valid-looking API secrets.
Based on invalid matches in GitHub code repositories <cit.>, we expect human-created example keys that contain keywords, e.g., , or consecutive character sequences, e.g., , that we must exclude (filter Sequence).
To filter consecutive sequences, we search for segments consisting of ascending, descending (both with a length of four), and repeating characters (with a length of three).
Furthermore, we filter matches including sequences that occur unusually often, i.e., we create (frequencyngrammin, frequencyngrammax)-character-grams of all matches, exclude grams created over fixed parts of our regular expressions as well as grams only containing digits, and count the number of occurrences over all API matches.
To account for randomly reoccurring grams, we filter all matches that include grams occurring frequencyNgramsTimeFactor times more often than the average.
We manually ensured that our filter is neither too restrictive nor too loose, i.e., that it does not leave frequently reoccurring grams unfiltered.
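A rough sketch of both checks follows (the gram lengths and the frequency factor correspond to the parameters named above and carry only exemplary values here; the exclusion of grams stemming from fixed parts of the regular expressions is omitted):

from collections import Counter

def has_trivial_sequence(secret: str) -> bool:
    # Ascending or descending runs of length four ...
    for i in range(len(secret) - 3):
        codes = [ord(c) for c in secret[i:i + 4]]
        diffs = {b - a for a, b in zip(codes, codes[1:])}
        if diffs == {1} or diffs == {-1}:
            return True
    # ... and repeated characters of length three.
    return any(secret[i] == secret[i + 1] == secret[i + 2] for i in range(len(secret) - 2))

def ngrams(secret: str, n_min: int = 4, n_max: int = 6):
    # Character grams of a matched secret; grams consisting only of digits are ignored.
    for n in range(n_min, n_max + 1):
        for i in range(len(secret) - n + 1):
            gram = secret[i:i + n]
            if not gram.isdigit():
                yield gram

def drop_frequent_grams(matches, factor: float = 10.0):
    # Exclude matches containing grams that occur far more often than the average gram.
    counts = Counter(g for m in matches for g in ngrams(m))
    if not counts:
        return list(matches)
    avg = sum(counts.values()) / len(counts)
    frequent = {g for g, c in counts.items() if c > factor * avg}
    return [m for m in matches if not any(g in frequent for g in ngrams(m))]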
Figure <ref> shows that this filtering excludes a large share of matches.
Interestingly, the most reoccurring gram is [sic!], which we could trace back to DNA sequences in images related to bioinformatics, underpinning the large variety of different and unexpected file types occurring in Docker images.
Similar to filtering private key matches by their file paths, we also filter API matches occurring in manually selected paths (filter File, cf. Appendix <ref>).
Essentially, we revisited the location and file types of all matches and excluded paths that most likely do not include any valid secrets compromised by publishing these in Docker images.
Figure <ref> indicates that the filtered paths often also include matches filtered by our sequence filter and thus that libraries include strings similar to secrets, e.g., in their documentation.
Still, after manual revision of the remaining matches, we conclude that rules which match on a fixed term before the secret, e.g., the service name, and then allow a specific length of characters are too ambiguous for usage on files in Docker images as they match on arbitrary content, e.g., on hashes with the service name in front.
We thus decide to exclude matches of these rules from our further analysis (gray in Table <ref> (left)), i.e., consider these matches invalid, to ensure the integrity of our further results.
Still, a minority of these matches might be valid, potentially enabling attackers to compromise production services or access confidential data.
Comparing the filter results of API secret matches in files and environment variables, the share of valid matches in variables is significantly higher than in files, indicating that image creators are less likely to include secret placeholders in variables.
Still, as Table <ref> (right) shows, most secrets are located within the images.
Thus, attackers have a higher chance of finding valid secrets when downloading both environment variables and image content.
In total, we found apinumdistinctmatches distinct API secrets in Docker images, mostly related to services from the cloud domain (validapicloudvalidnumdistinctmatchestotal secrets).
Although we cannot prove the functionality of these secrets, the occurrence of apicloud1numdistinctmatches secrets for the apicloud1rule or apicloud2numdistinctmatches secrets for the apicloud2rule indicate that attackers might be able to reconfigure cloud services maliciously, e.g., by editing DNS or VM options.
Additionally, we found evidence for secrets allowing attackers to access private data from social media (validapisocialmediavalidnumdistinctmatchestotal secrets), or even access financial services (validapifinancialvalidnumdistinctmatchestotal secrets, most matches: apifinancial0rule).
Notably, although we focused our image search partly on IoT terms, we found no valid secrets from selected IoT services.
§.§.§ Secrets Owned by Single Users
Based on findings on leaked secrets on GitHub <cit.>, we expect most valid secrets to reside in images of single users (as users do not share their secrets intentionally).
Contrarily, invalid matches, e.g., library test keys, would mainly reside in images of multiple owners.
Thus, to check whether the matches we identified as valid secrets are located in images of single users, we analyze the number of different owners that include a specific secret in their images.
To this end, for images from Docker Hub, we consider the repository owner (embedded in the repository name) as the owner of a secret.
For private registries, we consider the registry's IP address as the owner (assuming that owners only run a single registry and neglecting that registries might use different (dynamic) IP addresses).
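Conceptually, this attribution reduces to a simple grouping, as in the following sketch (the data structures are illustrative only):

from collections import defaultdict

def count_owners(findings):
    """findings: iterable of (secret, source, identifier) with source in {"hub", "registry"}.
    For "hub" the identifier is a repository name such as "owner/image",
    for "registry" it is the registry's IP address."""
    owners = defaultdict(set)
    for secret, source, identifier in findings:
        owner = identifier.split("/", 1)[0] if source == "hub" else identifier
        owners[secret].add(owner)
    # Secrets with a count of 1 are attributed to a single owner.
    return {s: len(o) for s, o in owners.items()}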
Figure <ref> shows that the largest share of valid secrets indeed occurs in images of single owners.
validmatchmultiuserprivatekeyFalsepct of private keys (validmatchmultiuserprivatekeyFalsenum keys) and validmatchmultiuserapiFalsepct of API secrets (validmatchmultiuserapiFalsenum secrets) reside in images of single owners underpinning that these should be protected.
Moreover, we can trace validmatchmultiuserlayer0privatekeyTruenum private keys and validmatchmultiuserlayer0apiTruenum API secrets of multiple owners back to inheritance.
These secrets were already included in the base image, but w.r.t. the overall occurrence, we conclude that secret spread due to inheritance is not a major problem.
To responsibly inform image creators about leaked secrets in their images, we reach out to them whenever possible (numemaildisclosure extractable and valid e-mail addresses) and also contacted the operator of Docker Hub (cf. Appendix <ref>).
Early on, we received notifications of creators that removed found secrets from their images.
totalvalidmatches found secrets show that image creators publish confidential information in their publicly available Docker images.
As attackers have access to these secrets, authentication and other security mechanisms relying on them are futile, potentially leading to compromised servers or leaked privacy-sensitive data.
§.§ Origin of Leaked Secrets
Next, we analyze where the validated secrets stem from to see whether specific images are more affected and why.
To this end, we examine the distribution of affected images and compare between private registries and Docker Hub, as well as IIoT specific and Standard images.
Moreover, we evaluate which operation in the original Dockerfile led to the insertion of secrets and inspect the file paths where they reside to get an intuition for their usage.
§.§.§ Docker Hub Leads Before Private Registries
We already discovered that private registries include potentially sensitive images.
However, until now, it remains unclear whether images on these registries are more often subject to secret leakage than images from Docker Hub, e.g., due to creators believing that these are unavailable for the public.
Thus, we analyze whether leaked secrets occur more often in images from Docker Hub or from private registries.
While we found that numaffectedimages images (pctaffectedimages of images analyzed) contain valid secrets, pctaffectedimagesdockerhub of images from Docker Hub and pctaffectedimagesprivate of images from private registries are affected.
Thus, creators upload secrets to Docker Hub more often than to private registries, indicating that private registry users may have a better security understanding, maybe due to the deeper technical understanding required for hosting a registry.
Yet, both categories are far from being leak-free.
For Docker Hub, besides the increased fraction of leaked secrets, we see an issue for others, i.e., other users can easily deploy containers based on these images.
Thus, there is a higher chance that their containers base their security on included and thus compromised secrets.
For example, a shared certificate private key could lead to an impersonation attack.
In case of shared API secrets, all deployed containers might use the same API token leading to exhausted rate limits in the best case, but maybe also to overwritten or insufficiently secured private data.
As a single API token does not allow fine-granular exclusions, i.e., it is either valid or revoked for all users, a revocation would also interfere with benign users.
Independent of their origin, attackers could equally misuse the secrets we found to undermine authentication or access privacy- or security-sensitive data.
As such, both user groups of Docker Hub and private registries leak sensitive information, be it through unawareness or a deceptive feeling of security.
§.§.§ Domains are Similarly Affected
For our image selection on Docker Hub, we specifically included search terms relating to the IIoT, as past research has shown significant security shortcomings in this area.
However, until now it remained open whether images of a certain domain are subject to secret leakage more frequently than other images.
To answer this question, we trace images that include secrets back to the query group that led to their inclusion.
We discovered that affectedstandardrepositorypct of the images only found using queries from the Standard query group and affectediiotrepositorypct of images only from the IIoT group include valid secrets[Images found by both query groups are not included.].
Thus, in case of secret leakage via Docker images and based on our selected search terms, the IIoT domain does not perform worse than our Standard domain.
However, it underpins that the problem of secret leakage in Docker images is a prominent issue for all domains.
§.§.§ Fresh Private Keys and Copied API Secrets
To find countermeasures against secret leakage in Docker images, it is important to understand how these leaked secrets became part of Docker images.
More specifically, for private keys, it is unclear whether creators execute commands in the Dockerfile to create fresh keys, which are then published in images, or whether they manually add them, i.e., using or in a Dockerfile.
Additionally, both, private keys and API secrets, could be indirectly included through other means, e.g., by cloning Git repositories or downloading further data.
Figure <ref> shows that while most API secrets are typically inserted by file operations (File), e.g., copied from the image creator's host system, private keys are predominantly included by executing a command within the Dockerfile (Exec.)[Secrets can be associated with both, File and Exec. operations, e.g., when first ed to the image and then copied or moved internally using or .].
Thus, private keys might be either downloaded or generated during the creation process.
To further trace the insertion of secrets in Exec. layers back to the responsible executed commands, we analyze these commands.
Since image creators often concatenate several bash commands whose output is then included in a single layer without any opportunity to associate files (and thus secrets) to a specific command, we count each of the commands related to the leakage of a secret.
We show the most prominent of all validmatchnumdistinctcommands commands associated with secret leakage in Figure <ref>.
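A simplified sketch of this association, based on the history entries of an image's configuration, is shown below; the exact format of the recorded instructions differs between builders and is treated as an assumption here:

import re

def classify_layer(created_by: str) -> str:
    # Classic builders record non-RUN instructions with a "#(nop)" marker.
    if "#(nop)" in created_by:
        return "File" if re.search(r"\b(COPY|ADD)\b", created_by) else "Other"
    return "Exec."

def commands_in_layer(created_by: str):
    # RUN layers often chain several shell commands; each of them is counted.
    body = created_by.split("/bin/sh -c", 1)[-1]
    return [part.strip().split()[0] for part in re.split(r"&&|;", body) if part.strip()]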
In fact, privatekeyinstsshdpct of private keys were generated in layers where image creators installed the OpenSSH server.
Since the installation triggers the generation of a fresh host key pair, this key pair is automatically included in the image.
While the procedure of automatic key generation is beneficial on real hardware, i.e., users are not tempted to reuse keys on different hosts, in published Docker images it automatically leads to compromised keys and thus puts the authenticity of all containers relying on this image in danger.
Further privatekeysshkeygenpct of found private keys were generated by a direct call of , e.g., to generate fresh SSH client key material, implying the planned usage in production of generated but compromised key material.
Given the massive secret leakage on GitHub <cit.>, we also expect secrets to be included in images by cloning Git repositories.
However, only a minority of secrets can be associated with Git, suggesting that the sets of users leaking secrets via Docker and GitHub are distinct. Furthermore, only a minority of secrets were downloaded (using or ) both indicating that the secrets we found were most likely exclusively leaked in Docker images and underpinning that they are actually worth being protected.
§.§.§ File Paths Indicate Usage
To further reason about the usage of our found secrets, we analyze their file paths within the images assessing where secrets stem from and how services apply them.
Separated by private keys and API secrets, Figure <ref> shows the distribution of secrets throughout the directory structure of all images and focuses on the top seven paths.
We found the majority of private keys in underpinning a high prevalence of compromised SSH host keys.
Another large share occurs in suggesting compromised keys used for host authentication via TLS.
This path is also the location for TLS default (“snakeoil”) keys that are used if no other information is provided.
They are auto-generated when the package is installed such that every host possesses a unique default key-pair.
However, when installed during the creation of Docker images, the key is included in the image and, thus, compromised when shared.
Based on the key's filename, indeed, we found numsnakeoiletcssl of such keys which are potentially used to offer TLS services with broken authenticity to the public Internet.
Even more alarming, we found keys lying in , indicating that included keys are associated with a Public Key Infrastructure (PKI), and thus potentially destined to offer services to a higher number of users.
Furthermore, contains private keys used in relation to the IoT and, as per the repository names, for authentication using IoT protocols like CoAP and MQTT.
Thus, attackers possessing these private keys can undermine the authentication of all connections users establish to any container created based on these images.
In fact, attackers then can access or alter transmitted confidential information, e.g., privacy-sensitive user data or commands of IoT services potentially impacting cyber-physical systems.
In addition, we found keys in , i.e., a location where SSH client key pairs typically reside.
Hence, these keys might enable attackers to take over SSH servers, trusting these keys and having access to confidential data.
Contrarily, found API secrets are distributed more evenly through the directory structure.
We found the largest share in , which is the example folder for including own applications in Docker images <cit.>, underlining that image creators compromise their own application's API secrets.
While similar holds for , another large share of secrets resides in stemming from Firefox profiles containing Google Service API secrets in cached JavaScript files.
Although these secrets are most likely usable in combination with Google Maps or Google Analytics and thus meant to be shared with website visitors, this leakage implies privacy issues:
An attacker could retrace the creator's browsing history, which apparently exists as the cache is filled, and which could reveal potentially sensitive information.
In addition, we found a large share of Google API secrets (both Cloud and Services) in .
Since we do not use API tokens for further validation (cf. Appendix <ref>), we cannot be entirely sure whether these secrets are usable or only generated for testing purposes.
However, manual inspection of the matches and the files including them suggests that they could actually be in use.
pctaffectedimages of analyzed images contain and thus leak secrets.
While the majority stems from public Docker Hub images regardless of their domain, also private registries leak a significant number of secrets.
Notably, associated file paths and commands imply their production use and that various authentication mechanisms are futile.
§.§ Compromised Certificates
To further understand the severity of potentially compromised systems, we now focus on found certificates as they provide various information on their relations and use cases.
Thus, we research the trust chain, validity, and usage parameters of knowncompromizedcerts compromised certificates occurring in Docker images.
Trust Anchors
While self-signed certificates indicate the usage of certificates in controlled environments, i.e., clients need a safelist with all certificates they can trust, CA-signed certificates imply the usage at larger scale as these are trusted by all clients having a corresponding root certificate installed.
We consider certificates where the issuer and common name are similar as self-signed and CA-signed otherwise.
For CA-signed certificates, we consider those which we can validate against widespread root stores[Stores from Android, iOS/MacOS, Mozilla NSS, OpenJDK, Oracle JDK, and Windows.] as signed by a public CA, and otherwise signed by a private CA.
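A simplified version of this classification is sketched below (direct issuer lookup only, ignoring intermediate certificates; the bundle of public roots stands in for the six root stores):

from cryptography import x509

def classify_certificate(cert: x509.Certificate, public_roots) -> str:
    # Self-signed: issuer equals subject (the criterion used above).
    if cert.issuer == cert.subject:
        return "self-signed"
    # Otherwise look for a matching public root; verify_directly_issued_by requires a
    # recent release of the 'cryptography' library. Intermediates are ignored in this sketch.
    for root in public_roots:
        if cert.issuer == root.subject:
            try:
                cert.verify_directly_issued_by(root)
                return "public-CA-signed"
            except Exception:
                continue
    return "private-CA-signed"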
We discovered that the majority of found compromised certificates (selfsignedcertspct) are self-signed, but also privatecacerts private CA-signed and casignedcerts public CA-signed certificates.
While all systems relying on these certificates open the door for impersonation attacks, the occurrence of CA-signed certificates is especially alarming as such certificates are typically planned to provide authenticity to many clients/users and are universally accepted.
Thus, knowing these certificates' private keys not only allows attackers to perform Man-in-the-Middle attacks but also enables them to sign malicious software to compromise others' systems.
Validity
As a countermeasure against key leakage, the certificate's lifetime forces service operators to request new certificates from time to time, as clients should reject outdated certificates.
Notably, casignedvalidondownload public-CA, privatecavalidondownload private-CA, and selfsignedvalidondownload self-signed certificates were valid when we downloaded their containing image layer, showing that the authenticity of relying services is at stake, i.e., the lifetime does not help in these cases of key leakage.
Interestingly, casignedvalidonhistory public-CA, privatecavalidonhistory private-CA, and selfsignedvalidonhistory self-signed certificates were valid when added to their Docker image (as per the image's history timestamp).
While these larger numbers show that the limited lifetime of certificates helps to mitigate the impact of leaked private keys, they also indicate that key leakage in images is an ongoing process, i.e., more and more private keys are leaked.
Usages
The usage attributes of certificates can optionally indicate the practical use-case of CA-signed certificates and, thus, further help to understand the severity of the private key leakage.
While all public-CA-signed certificates allow for authentication (digital signatures), and casignedparsedFindingextensionsextendedkeyusageserverauth are explicitly declared for server authentication, casignedparsedFindingextensionsextendedkeyusagecodesigning (private-CA: privatecaparsedFindingextensionsextendedkeyusagecodesigning) allow for code-signing.
Thus, knowing the private key of these certificates not only allows attackers to perform Man-in-the-Middle attacks, but also enables them to sign malicious software to compromise others' systems.
knowncompromizedcerts found compromised certificates show that leaked private keys can have extensive influence on the authenticity of services and software.
Thus, attackers can impersonate services, decrypt past communications, or sign malware to infect production systems.
§ SECRET USAGE IN THE WILD
Until now, it is open whether the found compromised secrets are used in practice and, if so, to what extent, i.e., whether a single compromised secret is reused due to several Docker containers stemming from the same image.
While we cannot check the validity of API secrets by using them against their destined endpoint due to our ethical guidelines (cf. Appendix <ref>), we can investigate whether hosts on the Internet use found private keys for authentication.
To assess whether Internet-reachable hosts can be suspect to impersonation attacks due to secret leakage in Docker images, we check for TLS- and SSH-enabled hosts relying their authentication on compromised private keys by using the Censys database, i.e., 15 months of active Internet-wide measurement results <cit.>.
Here, we search for hosts presenting a public key, i.e., as SSH host key or within a TLS certificate, matching to one of the found compromised keys.
More specifically, we match the fingerprint of public keys in the Censys database on ones extracted from found private keys.
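In essence, the matching works as sketched below; the concrete encoding and hash must follow the fingerprint format exposed by the measurement database, which we assume here to be a SHA-256 over the DER-encoded SubjectPublicKeyInfo:

import hashlib
from cryptography.hazmat.primitives import serialization

def spki_sha256(private_key) -> str:
    # Derive the public key from a found private key and hash its canonical encoding.
    spki = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

def hosts_using_leaked_keys(private_keys, database_fingerprints: set):
    # Intersect the fingerprints of leaked keys with those observed on Internet hosts.
    leaked = {spki_sha256(k) for k in private_keys}
    return leaked & database_fingerprints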
In Figure <ref>, we detail how many hosts rely their authenticity on found compromised private keys and how often these keys are reused.
While the total number of hosts relying on compromised keys is worrying on its own (20220901numuniquehosts hosts in Oct. 2022), their protocols, even worse, imply sensitive services.
As such, in October 2022, we find MQTT20220901numuniquehosts MQTT and AMQP20220901numuniquehosts AMQP hosts, potentially transferring privacy-sensitive ((I)IoT) data.
Moreover, FTP20220901numuniquehosts FTP, PostgreSQL20220901numuniquehosts PostgreSQL, Elasticsearch20220901numuniquehosts Elasticsearch, and MySQL20220901numuniquehosts MySQL instances serve potentially confidential data.
Regarding Internet communications, we see SIP20220901numuniquehosts SIP hosts used for telephony as well as SMTP20220901numuniquehosts SMTP, POP320220901numuniquehosts POP3, and IMAP20220901numuniquehosts IMAP servers used for email.
Since these hosts are susceptible to impersonation attacks due to their leaked private keys, attackers can eavesdrop, relay, or alter the sensitive data transmitted here.
Aggravatingly, we also find services with administrative relevance:
SSH20220901numuniquehosts SSH servers rely on SSH20220901corrFingerprint compromised host keys and Kubernetes20220901numuniquehosts Kubernetes instances use leaked keys opening doors for attacks which can lead to remote-shell access, extension of botnets or further data access.
The comparably low number of compromised keys used (compared to knowncompromizedhostkeys found SSH host keys) is probably due to a missing need for SSH servers in Docker containers as other mechanisms, e.g., , already allow shell access.
Furthermore, we see LDAP20220901numuniquehosts LDAP instances relying on leaked secrets.
As LDAP is used as a base for user authentication on attached systems, the integrity of unknown many other clients is at stake.
For instance, attackers could grant themselves root access to a myriad of systems.
The number of actually used keys is low compared to the number of hosts which rely on them, indicating that a few Docker images lead to numerous compromised container deployments.
Thus, the simplicity of Docker to deploy services based on ready-to-use images puts the authenticity of several instances most likely operated by different users under threat.
In this regard, HTTPS hosts stand out in particular.
HTTP20220901numuniquehosts HTTPS hosts use HTTP20220901corrFingerprint different compromised private keys showing that the reuse of these keys is rampant for Web services.
Thus, attackers can perform Man-in-the-Middle attacks to alter webpages on their delivery or data sent to the server.
Figure <ref> also underpins that the key usage of compromised keys is long-lasting and rising, i.e., over the complete available period the number of compromised systems grew from 20210501numuniquehosts (relying on 20210501corrFingerprint compromised keys) to 20220901numuniquehosts hosts (20220901corrFingerprint keys) indicating that container images with compromised certificates or SSH host keys included are increasingly used.
Thus, the authenticity of more and more systems is futile, offering an ever-growing attack surface.
While our study is significantly driven by initially found compromised keys in Docker images in the area of the IIoT, Censys does not identify secured IIoT protocols other than AMQP and MQTT via TLS.
Thus, we perform own Internet-wide measurements for a deeper inspection of whether IIoT services also use compromised certificates, e.g., for authentic communication via OPC UA.
To this end, we select ten secure IIoT protocols from recent literature <cit.> and mimic its proposed measurement strategy.
Our results show that besides the already large number of compromised AMQP and MQTT hosts, only 2 CoAP hosts use 2 different leaked keys from Docker containers.
That we do not find substantially more compromised hosts using other IIoT protocols underlines that the issue of key leakage is not an IIoT-specific hotspot but a general problem.
20220901numuniquehosts hosts use 20220901corrFingerprint compromised private keys found in Docker images for authentication on the Internet and encompass deployments using, i.a., MQTT, SMTP, and PostgreSQL.
This widespread usage allows attackers to eavesdrop on confidential or alter sensitive information, e.g., from the IoT, webpages, or databases.
§ DISCUSSION, LIMITATIONS & MITIGATIONS
The outcome of our work has different aspects.
We have seen that numerous private keys are compromised by image creators publishing their images via Docker registries and shown that security relies on these secrets in practice.
Still, future work could investigate the limitations of our approach or implement the derived mitigation opportunities from our results.
View on Available Images
Due to rate and computation-time limits and comprehensive ethical considerations (cf. Appendix <ref>), we could not analyze all available images on Docker Hub and private registries.
Thus, we might have missed secrets included in single layers or complete images that were not subject to our study.
In this light, the absolute number of found secrets is already very alerting.
Also, in relative numbers, our results should be representative for the selected groups due to our sampling.
Yet, the selected groups, i.e., our Docker Hub search terms, might lead to skewed results overestimating the overall population.
For instance, images that are not targeted at protocols might have been created with fewer secrets.
Thus, we opted for a broad body of terms based on, i.a., public polls <cit.> to avoid any bias.
Moreover, our private registry analysis has not been targeted but included randomly sampled layers, and we still found a similar share of affected images as on Docker Hub.
As such, we believe that our relative results are—at least in their magnitude—representative for the overall population of Docker images publicly available.
Missing Methods to Check API Secrets
While relying on Internet-wide measurements was a suitable measure to assess the usage of compromised private keys for the authenticity of Internet-reachable services, we could not check whether found API secrets are functional.
The only option would be to contact the corresponding API's endpoint to check for the acceptance of found credentials.
However, due to our ethical considerations, we must not use found secrets as such usage might influence other systems or services.
Thus, we cannot validate them against their respective endpoint.
Still, the number of found secrets is worrying and looking at the usage of compromised private keys, we are convinced that many API secrets are also functional.
Causes & Mitigation Opportunities
We have seen both creators actively copying secrets from their local file system into the image, e.g., most of the API secrets but also private keys, incl. certificates, and passively generating key material during the image creation process, e.g., by installing an OpenSSH server.
Both behaviors lead to compromised secrets and affect the security of both image creators and users basing their containers on an image and already included secrets.
Most likely, creators and users are unaware of compromising or using compromised foreign secrets.
In fact, compared to GitHub, which provides a graphical interface to browse published files and potentially notice a mistakenly uploaded secret, files in Docker images and containers cannot be browsed easily, i.e., users barely get an overview of included files.
Furthermore, while Git repositories only include manually added files, images of Docker containers contain a complete system directory tree.
Thus, files with included secrets cannot easily be identified.
The mitigation of these problems must be two-fold.
On the one hand, image creators must be warned that they are uploading their secrets to (publicly reachable) Docker registries.
On the other hand, when deploying containers based on downloaded images, users should be informed that included secrets, especially private keys, might already be compromised, putting the authentication of deployed services at stake.
To this end, credential-finding tools such as TruffleHog <cit.> or SecretScanner <cit.> can be integrated on both sides of the Docker paradigm.
When uploading or downloading an image, these tools could then scan all layers of the image for included secrets.
To reduce the number of false positives, for potential API secrets, the tool can also check the secret's function against the respective endpoint (we think this is also ethically correct on the user's side who downloaded the image).
For private keys, the tools could maintain a list of test keys that are usually included in libraries.
Increasing the image creator's awareness regarding the leakage of such secrets should decrease their number in uploaded images.
Additionally, performing a second check at the user deploying a container based on a downloaded image should further decrease the number of services relying on already compromised secrets.
An additional help could be an API + graphical view for images on Docker Hub, which shows the included files.
This API could also enable third-party solutions similar to those for GitHub <cit.> to easily search for known secret file paths.
§ CONCLUSION
Containerization allows integrating applications and their dependencies in self-containing and shareable images making software deployment easy.
However, when focusing on security, sharing of secrets or using already compromised secrets breaks promises, e.g., authenticity or access control.
Thus, cryptographic secrets must not be included in publicly available container images.
Our analysis of numnonemptyimages images from Docker Hub and privatemeasurementnumtotalmax private registries revealed that, however, pctaffectedimages include secrets that should not be leaked to the public.
More specifically, we found a near-lower bound of validprivatekeyvalidnumdistinctmatchestotal private keys and apinumdistinctmatches API secrets.
validapicloudvalidnumdistinctmatchestotal API secrets belonging to cloud providers, e.g., apicloud1rule (apicloud1numdistinctmatches secrets), or validapifinancialvalidnumdistinctmatchestotal secrets to financial services, e.g., apifinancial0rule (apifinancial0numdistinctmatches secrets), show that attackers can cause immediate damage knowing these secrets.
Focusing on the leaked private keys, we find that these are also in use in practice: 20220901numuniquehosts TLS and SSH hosts on the Internet rely their authentication on found keys, thus being susceptible to impersonation attacks.
Notably, many private keys are automatically generated when packages are installed during image creation.
While beneficial when running on real hardware, where every computer generates its own key, in container images this process automatically leads to compromised secrets and potentially a vast number of containers with compromised authenticity.
We further discover that especially private registries serve images with potentially sensitive software, most likely not intended to be publicly shared.
Additionally, these registries might not prevent write access, enabling attackers to add malware to images.
Our work shows that secret leakage in container images is a real threat and not neglectable.
Especially the proven usage of leaked private keys in practice verifies numerous introduced attack vectors.
As a countermeasure, the awareness of image creators and users regarding secret compromise must be increased, e.g., by integrating credential search tools into the Docker paradigm.
Funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) — Research Project VeN2uS — 03EI6053K.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy — EXC-2023 Internet of Production — 390621612.
ACM-Reference-Format
§ ETHICAL CONSIDERATIONS
Our research curates a comprehensive archive of leaked security secrets in Docker images on Docker Hub and private registries whose leakage is again a threat to security.
Moreover, to find private registries and deployments relying their security on leaked secrets, we leverage Internet-wide measurements that can have unintended implications, e.g., high load on single network connections impacting stability or alerting sysadmins due to unknown traffic.
Thus, we base our research on several ethical considerations.
First, we take well-established guidelines <cit.> and best practices of our institution as base for our research.
We handle all collected data with care and inform image creators and Docker Inc., to responsibly disclose our findings (cf. Appendix <ref>).
Moreover, we comply with recognized measurement guidelines <cit.> for our Internet-wide measurements reducing their impact (cf. Appendix <ref>).
§.§ Handling of Data & Responsibilities
During our research, we always only collect and request publicly available data, i.e., our access is limited to publicly available image repositories.
At no time do we bypass access control, e.g., by guessing passwords.
We, thus, cannot download private images.
Still, we revealed that many of the public images contain sensitive security secrets (cf. Section <ref>) which we stored for further analysis.
All found secrets are stored on secured systems.
Furthermore, we refrain from releasing our dataset including these secrets or image names, to not provide an archive of leaked secrets or pinpoints for potential attackers.
While this restriction prevents others from independently reproducing our results, we consider this decision to constitute a reasonable trade-off to protect affected users.
Responsible Disclosure
To further support affected users in removing their secrets from publicly available Docker images, we target to responsibly disclose our findings.
To this end, we extract e-mail addresses from maintainer variables set in Dockerfiles and furthermore derive addresses from Gravatar accounts linked to affected Docker Hub accounts.
In this regard, we identified numemaildisclosure e-mail addresses we contacted to notify about our possible findings.
Already after a few hours, we received >30 answers from owners appreciating our efforts, fixing their images, or informing us that the image at hand is no longer used.
A handful informed us that no secrets were leaked, helping us to refine our filtering.
Moreover, we decided to reach out to the operator of Docker Hub, i.e., Docker Inc., to discuss potential further disclosure to unidentifiable creators.
§.§ Reducing Impact of Measurements
To reduce the impact of our active Internet scans, we follow widely accepted Internet measurement guidelines <cit.>.
Coordination
We coordinate our measurements with our Network Operation Center to reduce the impact on the Internet and to react correspondingly.
Abuse emails are handled by informing the senders about the intent of our measurements and how to opt out of them.
As part of this opt-out process, we maintain a blocklist to exclude IPs from our measurements.
External Information
For giving external operators information about our research intent, we provide rDNS records for all our scan IPs and transmit contact information in the HTTP header of each request to the registries.
Moreover, we host a webpage on our scan IPs, which gives further information on our project and how to opt-out.
Over time, also due to other measurements, we excluded 5.8 M IP addresses (0.14% of the IPv4 address space).
Limiting Load
To limit load and stress on all systems involved (along the path and the end-host), we deliberately reduce our scan-rate.
Our scans are stretched over the course of one day and use 's address randomization to spread load evenly.
We further limit the load on single private registries when downloading available images.
While we paid to increase the existing rate limiting for image downloads on Docker Hub (cf. Appendix <ref>), private registries typically do not implement any rate limiting.
Hence, to prevent our scanner from overloading registries running on resource-constrained hardware or connected via slow or volume-billed Internet connections, we decide to only download image layers randomly until their size sums up to at most 250.
Additionally, we shuffle the downloads of layers of different registries to further distribute the load.
§.§ Overall Considerations
Without taking our goals into account, summarizing the sensitive nature and the impact of our measurements can quickly lead to the conclusion that our measurements are not beneficial.
However, we consider it to be in the public interest and fundamental for improving security to know about potential security issues and how widespread they are.
The Docker paradigm does not include any mechanisms to prevent image creators from (accidentally) adding security secrets to their images and no mechanisms exist that warns users relying on already compromised security secrets.
Hence, we consider it essential to know whether secrets are widely included in publicly available Docker images and whether these are in use at scale to steer future decisions for counter-measures.
To answer this question, we carefully weighed the impact of our measurements against their benefit and have taken sensible measures to reduce the risks of building a large archive of leaked security secrets and risks introduced by active Internet measurements.
§ IMAGE DOWNLOAD FROM DOCKER HUB
The limit of image manifest downloads from Docker Hub depends on the booked plan, e.g., free users are allowed to pull only 800 images per day.
Hence, for a faster analysis of images on Docker Hub, we purchased two Pro accounts, that allow 5000 image downloads per day each.
Still, we are required to perform our analysis on a subset of the available images, as downloading one image from each of the 9321726 available repositories would require 933 days under best conditions.
Thus, we decided to limit our analysis on two categories:
[(i)]
* a context of standard protocol and frequently used technologies, and
* an (Industrial) IoT context for comparison.
Both categories have communication in common as here security can be affected on an Internet scale.
Standard Context
To generate a wide view on secret leakage in Docker images, we create a list of search queries comprising standard protocols <cit.>, and frequently used technologies <cit.>.
To find related images, we employ Docker Hub's API to perform searches over all available images and retrieve results users would retrieve when using the CLI command or Docker Hub's web interface.
To ensure that different handling of special characters in technology and protocol names does not exclude any images, we include different spelling variants in our query list, i.e., we include terms as they are, but also replace non-alpha-numeric characters by and space.
Table <ref> (top) shows our constructed search queries for the standard context.
(Industrial) IoT Context
We extend our analysis on images in the (Industrial) IoT context, as deployments in this area showed massive security deficits in past <cit.>, in single cases traced back to security secret leakage via GitHub and Docker images <cit.>.
As search terms, we take (Industrial) IoT protocol names that were subject to recent research <cit.>.
We proceed similar as in the standard context, i.e., include derived spellings of these terms, and show our constructed search query of this context in Table <ref> (bottom).
§ REGULAR EXPRESSIONS
Following already established procedures to find security secrets in code repositories <cit.>, we build our secret detection in Docker Images on regular expressions, i.e., we try to match regular expressions derived from secrets on the content of included files.
Table <ref> shows our composed list of regular expressions covering a variety of secrets, i.e., asymmetric private keys and API keys, as well as accompanying material we use for our analysis, i.e., public keys and certificates.
We orientate our expressions towards related work <cit.> and TruffleHog <cit.>, an established tool to find secrets in various sources, i.e., the local file system, Git repositories, S3 storages, and syslogs.
Specifically, we inherit Meli et al.'s <cit.> regular expressions to allow comparisons between the occurrence of leaked secrets in GitHub repositories at scale and our findings.
Furthermore, they composed their expressions comprehensibly, i.e., they included API keys for certain services by the occurrence of service domains in Alexa's Top 50 Global and United States lists in combination with a list of well-known APIs manually filtered for services with a high risk on key leakage and keys with a distinctive signature (to reduce the number of false-positives).
For private keys they focus on the most prevalent types and form to store, i.e., RSA, elliptic curve keys, PGP, and general keys in PEM format.
To spread our analysis and align our expressions to the scope of our search queries (cf. Appendix <ref>), we adapt our expression for private keys to match every type of private key in PEM format and, furthermore, extend the list of expressions to also match private key blocks, keys in PKCS7 format, and keys stored in XML format (due to their unambiguous signature).
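As an illustration of the spirit of these expressions (not our exact rules), a generic PEM private-key pattern and an XML key container can be matched as follows; the XML pattern also matches public RSAKeyValue elements and thus relies on the later filtering steps:

import re

PEM_PRIVATE_KEY = re.compile(
    rb"-----BEGIN [A-Z0-9 ]*PRIVATE KEY( BLOCK)?-----.+?-----END [A-Z0-9 ]*PRIVATE KEY( BLOCK)?-----",
    re.DOTALL,
)
XML_KEY = re.compile(rb"<RSAKeyValue>.+?</RSAKeyValue>", re.DOTALL)

def scan(blob: bytes):
    # Returns all raw matches in a file's content; validation happens in later stages.
    return [m.group(0) for rx in (PEM_PRIVATE_KEY, XML_KEY) for m in rx.finditer(blob)]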
Regarding API secrets to match, we extend our list with expressions from TruffleHog <cit.> on the basis of services currently trending among developers <cit.> or having a high risk of misuse, and whose regular expressions include a unique signature (also to reduce the number of false positives).
For some services we found more than one type of secret, i.e., secrets for different API versions (GitHub v1 and v2), or different types of keys (Stripe).
Our final list contains 48 expressions which we match on the content of every file in the images part of our study.
§ FILTERING BASED ON FILEPATHS
After matching our regular expressions on arbitrary file content available in Docker images, extensive filtering is required to exclude false positive matches, i.e., matches that do not contain any secret.
Our File filter is based on file paths derived from matches our Kompromat filter excluded, i.e., all parent directories under which we find more than 2/3 test keys known by kompromat <cit.> and all directories that directly include known test keys.
Additionally, it takes into account manually compiled file paths, e.g., where standard libraries reside () or package managers store their downloads (e.g., ), as well as extensions of database files (e.g., and ), which we selected after manually revisiting all matches as these produced a high number of false positives.
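The directory-based part of this filter can be sketched as follows (paths as plain strings; the set of known test-key paths stems from the Kompromat filter):

from collections import defaultdict
from pathlib import PurePosixPath

def excluded_directories(key_paths, test_key_paths):
    # Directories that directly contain known test keys are excluded ...
    test_dirs = {str(PurePosixPath(p).parent) for p in test_key_paths}
    # ... as well as all parent directories in which more than 2/3 of the keys are test keys.
    totals, tests = defaultdict(int), defaultdict(int)
    for p in key_paths:
        for parent in PurePosixPath(p).parents:
            totals[str(parent)] += 1
            if p in test_key_paths:
                tests[str(parent)] += 1
    ratio_dirs = {d for d, n in totals.items() if tests[d] / n > 2 / 3}
    return test_dirs | ratio_dirs

A key match is then dropped if any of its parent directories is contained in the returned set.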
Figure <ref> shows the seven most prevalent file paths that contain matches excluded by our File filter.
Indeed, most of the exclusions are matches included in folders belonging to package managers and thus most likely test secrets.
The massive filtering of API secret matches in is due to the high number of false positives of the Twitter regular expressions on database files.
|
http://arxiv.org/abs/2307.05351v1 | 20230711153927 | $B^+$ decay to $K^+ηη$ with ($ηη$) from the $D\bar{D}(3720)$ bound state | [
"Pedro C. S. Brandão",
"Jing Song",
"Luciano M. Abreu",
"E. Oset"
] | hep-ph | [
"hep-ph"
] |
We search for a B decay mode where one can find a peak for a D D bound state predicted in effective theories and in Lattice QCD calculations, which has also been claimed from some reactions that show an accumulated strength in D D production at threshold. We find a good candidate in the B^+→ K^+ ηη reaction, by looking at the ηη mass distribution. The reaction proceeds via a first step in which one has the B^+→ D_s^*+D^0 reaction followed by D_s^*+ decay to D^0 K^+ and a posterior fusion of D^0 D^0 to ηη, implemented through a triangle diagram that allows the D^0 D^0 to be virtual and produce the bound state. The choice of ηη to see the peak is based on results of calculations that find the ηη among the light pseudoscalar channels with the strongest coupling to the D D bound state. We find a neat peak around the predicted mass of that state in the ηη mass distribution, with an integrated branching ratio for B^+→ K^+ (DD, bound) ; (DD, bound) →ηη of the order of 1.5 × 10^-4, a large number for hadronic B decays, which should motivate its experimental search.
E-mail: [email protected]
Instituto de Física, Universidade Federal da Bahia, Campus Ondina, Salvador, Bahia 40170-115, Brazil
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, 46071 Valencia, Spain
E-mail: [email protected]
School of Physics, Beihang University, Beijing, 102206, China
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, 46071 Valencia, Spain
E-mail: [email protected]
Instituto de Física, Universidade Federal da Bahia, Campus Ondina, Salvador, Bahia 40170-115, Brazil
Instituto de Física, Universidade de São Paulo,
Rua do Matão, São Paulo SP, 05508-090, Brazil
E-mail: [email protected]
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, 46071 Valencia, Spain
B^+ decay to K^+ηη with (ηη) from the DD̅(3720) bound state
E.Oset
August 12, 2023
===========================================================
§ INTRODUCTION
The search for hadronic states in the charm sector and the description of their structure have been attracting much attention recently, as evidenced by the large number of review papers devoted to the subject <cit.>. We mention two examples: the state X(3872) couples strongly to D^*D and its nature as a D^*D molecule or a compact tetraquark state is a subject of debate <cit.>, and the T_cc(3875) <cit.>, coupling strongly to DD^*, is also thought to be a DD^* molecular state, although other opinions have also been given (see the list of references in <cit.>). Taking advantage of this wave of enthusiasm on the subject, we want to come back to a recurrent problem, the possible existence of a DD bound state, proposing a method to find it experimentally. The state was predicted in a study of the meson-meson interaction in the charm sector in <cit.>, where it was bound by about 20 MeV. The state was confirmed in posterior theoretical studies <cit.>. More recently it was also found in lattice calculations <cit.>.
Several works have tried to find experimental evidence for its existence. Since the DD bound state cannot decay into meson states containing cc, evidence for its existence has been searched for in the DD distribution close to threshold in several reactions. In <cit.> support for its existence was found in the e^+e^-→ DD reaction, looking at the DD spectrum close to the threshold. An updated experimental work for this reaction was done in <cit.> and, again, support for the DD state from this reaction and γγ→ DD was claimed in <cit.>. A more refined theoretical study of these two latter reactions was done in <cit.>, claiming evidence for this bound state. In <cit.> three reactions were proposed to observe this bound state, but none has been performed so far to find it. In <cit.> it was suggested to search for it in the ψ(3770) radiative decay, ψ(3770) →γ D^0D^0, and in <cit.> in the B^+→ D^0D^0K^+ and B^0→ D^0D^0K^0 decays, looking in both cases at the D^0D^0 mass distribution close to threshold.
In the present work we propose a different reaction, the B^+→ K^+ηη, looking at the ηη invariant mass distribution where a peak is expected. The reason to propose this reaction is twofold. Among the pairs of light pseudoscalar mesons into which this state could decay, the ηη channel stands as one of the most important. On the other hand, in the PDG <cit.> one finds that the reaction B^+→ D_s^*+D^0 has a very large branching fraction for a B^+ decay, of the order of 10^-2. It might seem that this decay has nothing to do with the K^+ηη decay, but we will show that a triangle diagram with B^+→ D_s^*+D^0 followed by D_s^*+→ D^0K^+, and fusion of D^0D^0 to produce the DD bound final state, with its posterior decay to ηη, has a reasonably large branching fraction which would make this decay easily accessible.
The DD bound state with isospin I=0 looks now more acceptable after the discovery of the T_cc(3875), but from the theoretical point of view, it resembles very much the f_0(980), which couples strongly to KK, that has been obtained within the chiral unitary approach <cit.>. Since a general rule is that the binding of states becomes larger when going from lighter to heavier quarks with the same configuration <cit.>, the existence of the DD bound state seems unavoidable, and with this conviction we propose the new reaction with the B^+→ K^+ηη decay which is accessible by the LHCb and Belle collaborations.
§ FORMALISM
The idea is to find an efficient mechanism to produce ηη at the end. For this purpose it is not necessary to produce ηη in a first step in a B decay. Instead, the idea is to produce DD since this is the main component of the DD bound state and ηη is only one decay channel. Yet, we wish to have three particles in the final state (including ηη) because then we can play with the ηη invariant mass and observe the peak of the DD bound state. The idea is then to produce one particle and DD. Then the DD can interact producing the DD bound state. One way to accomplish it is to produce D_s^*+D^0, let D_s^*+ decay to K^+D^0 and then we have the pair D^0D^0 to interact and proceed via DD→ηη.
The choice of the first step is most welcome since the process proceeds via the most Cabibbo favored mode for a B decay, with external emission, as shown in Fig. <ref> for the complex conjugate B^-→ D_s^*-D^0 reaction. This favors a large rate of this decay mode and one finds the branching fraction <cit.>,
Br[B^+→ D_s^*+D^0]=(7.6±1.6)× 10^-3.
This is a large rate for a B decay, which necessarily involves a suppressed Cabibbo transition b → c.
The next step after the D_s^*+D^0 production is to allow the D_s^*+ decay to D^0K^+(virtually) and then proceed with the D^0D^0 transition to ηη, where the peak of the bound state would show up. This process is depicted in Fig. <ref>, through a triangle diagram, which, however, does not develop a triangle singularity <cit.>, since D_s^*+→ D^0K^+ is kinematically forbidden and one cannot place the three intermediate particles on shell <cit.>.
We take the meson masses from the PDG <cit.>,
m_B^+ = 5279.34 MeV,
m_η = 547.862 MeV,
m_D^0 = 1864.84 MeV,
m_K^+ = 493.677 MeV, and M_ D_s^*+ = 2112.2 MeV.
§.§ B^+ decay to D_s^*+D^0
In the diagram of Fig. <ref> we have a vertex D_s^*+→ K^+D^0 which one can obtain from a standard Lagrangian, the D^0D^0→ηη scattering amplitude that one takes from <cit.> and the B^+ → D_s^*+D^0 transition, determined from the experiment as described below.
The B^+→ D_s^*+D^0 vertex has the typical structure of a vector coupling to two pseudoscalars as follows
t_1 = C ϵ^μ(P+q)_μ,
where ϵ^μ is the polarization vector of the D_s^*+, and C the coupling constant.
The B^+ → D_s^*+D^0 width is given by
Γ[B^+→ D_s^*+D^0] = (1/8π) (1/m^2_B) ∑_pol |t_1|^2 q,
with
q=λ^1/2(m^2_B,m^2_D^0,m_D^*_s^2)/2m_B.
After some algebra, we obtain
∑_pol |t_1|^2 = C^2 ∑_polϵ^μ(P+q)_μϵ^ν(P+q)_ν = 4 C^2 (m_B/m_D^*_s)^2 q ^2,
then the branching fraction can be written as
Br[B^+→ D_s^*+D^0] = Γ/Γ_B = (1/Γ_B) (1/2π) C^2 q^3/m_D^*_s^2,
and using Eq. (<ref>) we find
C^2/Γ_B = (2π m_D^*_s^2/q^3) (7.6±1.6)× 10^-3, with 2π m_D^*_s^2/q^3 = 0.00537 MeV^-1, i.e., C^2/Γ_B≃ 4.1× 10^-5 MeV^-1.
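For orientation, this constant can be cross-checked numerically with a few lines of code (a sketch using the PDG masses quoted above, in MeV; not part of the original analysis):

import math

# PDG masses (MeV) quoted above and the measured branching fraction
mB, mD0, mDs_star = 5279.34, 1864.84, 2112.2
Br = 7.6e-3                                   # Br[B+ -> Ds*+ D0]

def kallen(a, b, c):
    # Kallen (triangle) function lambda(a, b, c)
    return a**2 + b**2 + c**2 - 2*a*b - 2*a*c - 2*b*c

q = math.sqrt(kallen(mB**2, mD0**2, mDs_star**2)) / (2 * mB)   # ~ 1734 MeV
prefactor = 2 * math.pi * mDs_star**2 / q**3                   # ~ 5.4e-3 MeV^-1
print(q, prefactor, prefactor * Br)                            # C^2/Gamma_B ~ 4.1e-5 MeV^-1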
§.§ loop evaluation
We now evaluate the amplitude for the K^+ηη triangle diagram of Fig. <ref>. We construct the D_s^*+→D^0K^+ vertex by using the effective Lagrangian:
ℒ_VPP= -ig ⟨ [P,∂_μ P] V^μ⟩,
where g=m_v/2f_π, m_v=800 MeV, f_π=93 MeV, and P, V are the qq matrices with u, d, s quarks, written in terms of pseudoscalar (P) or vector mesons (V) as
P=
(
[ π^0/√(2)+η/√(3)+η'/√(6) π^+ K^+ D^0; π^- -π^0/√(2)+η/√(3)+η'/√(6) K^0 D^-; K^- K̅^0 -η/√(3)+√(2/3)η' D_s^-; D^0 D^+ D_s^+ η_c; ]),
V_μ=
(
[ ρ^0/√(2)+ω/√(2) ρ^+ K^*+ D^*0; ρ^- -ρ^0/√(2)+ω/√(2) K^*0 D^*-; K^*- K̅^*0 ϕ D_s^*-; D^*0 D^*+ D_s^*+ J/Ψ; ])_μ,
where we have taken the ordinary η-η' mixing of Ref. <cit.>. The symbol ⟨...⟩ in Eq. (<ref>) denotes the trace in SU(4). Note however, that with this algorithm one is only making use of the qq character of the meson <cit.>.
We find for this vertex
-it_2 = -ig ϵ^μ[ (2k-P+q)_μ]
Then, the loop amplitude is given by
-it_L = ∫d^4q/(2π)^4(-i) C ϵ^μ[ (P+q)_μ] g(-i)ϵ^ν[ (2k-P+q)_ν] × (-i)t_D^0D^0,ηη(M_inv(ηη))
×i/q^2-m_D^0^2+iϵ×i/(P-q)^2-m_D^*_s^2+iϵ×i/(P-q-k)^2-m_D^0^2+iϵ,
where M_inv(ηη) is the invariant mass of the ηη system.
By doing the sum over polarizations for the vector meson D_s^* we get
∑ϵ^μ[ (P+q)_μ] ϵ^ν[ (2k-P+q)_ν]
= [-g^μν+(P-q)^μ(P-q)^ν/m_D^*_s^2](P+q)_μ(2k-P+q)_ν
= m_B^2-q^2-2Pk-2kq+ 1/m_D^*_s^2[(m_B^2-q^2)(2Pk-2kq-m_B^2-q^2+2Pq)].
We perform the q^0 integration analytically using Cauchy's residues. For this purpose we use
i/(q^2-m_D^0^2+iϵ) = (i/2w_D(q)) [1/(q^0-w_D(q)+iϵ) - 1/(q^0+w_D(q)-iϵ)],
and keep only the positive energy part because we are dealing with heavy particles. The Cauchy integration picks up the pole q^0=w_D^0(q).
As a consequence, we obtain the loop amplitude
t_L = ∫d^3q/(2π)^3 C g [-m_D^0^2-m_K^2+M^2_inv(ηη)-2w_k(k)w_D(q)+2k·q
+ 1/m_D^*_s^2(m_B^2-m_D^0^2)(m_K^2-m_D^0^2-M^2_inv(ηη)-2w_k(k)w_D(q)+2k·q+2m_Bw_D(q))]
× t_D^0D^0,ηη(M_inv(ηη)) × 1/(2w_D(q)) × 1/(2w_D^*_s(q)) × 1/(2w_D(k+q))
× 1/(m_B-w_D(q)-w_D^*_s(q)+iϵ) × 1/(m_B-w_D(q)-w_k(k)-w_D(k+q)+iϵ) × F_HQS Θ(q_max-|q ^*|),
with
k=λ^1/2(m^2_B,m^2_k,M^2_inv(ηη))/2m_B,
where we have used
(P-k)^2= M^2_inv(ηη),
2Pk= m_B^2+m_k^2-M^2_inv(ηη),
2kq= 2w_k(k)w_D(q)-2k·q,
2Pq= 2m_Bw_D(q).
In Eq. (<ref>) we have the factor F_HQS given by
F_HQS=m_D^*_s/m_k^*,
which stems from considerations of heavy quark spin symmetry <cit.> to obtain the correct width of the D^*→ Dπ decay. On the other hand, the factor Θ(q_max-|q ^*|) comes from the way that we regularize the loops in our t amplitudes with the cut-off method, which implies that the t matrix has the structure <cit.>
t(q,q ')=tΘ(q_max-|q|)Θ(q_max-|q '|),
where q_max is the regulator in the loop functions G in the T=[1-VG]^-1V matrix. Since q_max regulates the DD loops in their rest frame, we must take the factor Θ(q_max-|q ^*|) where q ^* is the D^0 momentum in the rest frame of ηη given by <cit.> as
q ^*=[(E_R/M_inv(ηη)-1)q·k/k^2+w_D^0(q)/M_inv(ηη)]k+q,
with
E_R=√(M^2_inv(ηη)+k^2).
Now the integral only depends on |k|, and hence on M_inv(ηη).
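As an illustration of how the cut-off is implemented, the boost of q to the ηη rest frame given above can be coded as follows (a sketch with three-momenta as numpy arrays in MeV; the function names and the numerical value of q_max are placeholders of ours):

import numpy as np

mD0 = 1864.84   # MeV

def q_star(q_vec, k_vec, M_inv):
    # Boost of the D0 momentum q to the rest frame of the eta-eta system,
    # following the expression for q* given above.
    w_D = np.sqrt(mD0**2 + q_vec @ q_vec)
    E_R = np.sqrt(M_inv**2 + k_vec @ k_vec)
    coef = (E_R / M_inv - 1.0) * (q_vec @ k_vec) / (k_vec @ k_vec) + w_D / M_inv
    return coef * k_vec + q_vec

def inside_cutoff(q_vec, k_vec, M_inv, q_max=700.0):
    # Theta(q_max - |q*|); the value of q_max is only a placeholder here.
    return np.linalg.norm(q_star(q_vec, k_vec, M_inv)) < q_max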
The matrix elements for the DD→ j transition are obtained using the Bethe-Salpeter equation T=[1-VG]^-1V in coupled channels in Ref. <cit.>, but since the couplings of the DD bound state to the different channels are calculated there, we directly take the t_D^0D^0,ηη transition amplitude from this reference and write it in a Breit-Wigner form as
t_D^0D^0,ηη(M_inv(ηη))= g_D^0D^0g_ηη/M^2_inv(ηη)-m^2_D^0D^0+im_D^0D^0Γ_D^0D^0,
with the relevant quantities given by <cit.>
g_D^0D^0 = (5962 + i 1695) MeV,
g_ηη =(1023 +i 24) MeV,
m_DD|_b = 3722 MeV (for the bound state),
Γ_DD|_b =36 MeV.
Finally, the differential mass distribution for the ηη system is given by
dΓ/dM_inv(ηη) = (1/(2π)^3) (1/(4m_B^2)) k P_η |t_L|^2,
where P_η is the momentum of η in the ηη rest frame,
P_η=λ^1/2(M^2_inv(ηη),m^2_η,m^2_η)/2M_inv(ηη).
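As an illustration of how the ηη line shape arises from these ingredients, the following short Python sketch (ours, not the code used for the actual calculation) evaluates the Breit-Wigner amplitude above with the quoted couplings, together with the phase-space factors k and P_η; it reproduces the position and width of the peak, while the absolute normalization requires the full triangle-loop amplitude t_L and the ratio C^2/Γ_B. The meson masses are standard PDG values and are an input assumption of this sketch.

import numpy as np

# masses in MeV (PDG values, rounded; an input assumption of this sketch)
m_B, m_K, m_eta = 5279.0, 493.7, 547.9
# DD bound-state parameters and couplings quoted in the text
m_R, Gamma_R = 3722.0, 36.0
g_DD, g_eta = 5962.0 + 1695.0j, 1023.0 + 24.0j   # MeV

def kallen(x, y, z):
    # Kaellen triangle function lambda(x, y, z)
    return x**2 + y**2 + z**2 - 2.0*(x*y + y*z + z*x)

def t_bw(Minv):
    # Breit-Wigner form of t_{D0 D0, eta eta}
    return g_DD * g_eta / (Minv**2 - m_R**2 + 1j * m_R * Gamma_R)

Minv = np.linspace(2*m_eta + 1.0, 4200.0, 800)                       # eta-eta invariant mass
k = np.sqrt(kallen(m_B**2, m_K**2, Minv**2)) / (2.0 * m_B)           # kaon momentum in the B rest frame
P_eta = np.sqrt(kallen(Minv**2, m_eta**2, m_eta**2)) / (2.0 * Minv)  # eta momentum in the eta-eta frame

# shape of dGamma/dM_inv up to the slowly varying triangle-loop factor
shape = k * P_eta * np.abs(t_bw(Minv))**2
print("peak at M_inv = %.0f MeV" % Minv[np.argmax(shape)])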
§ RESULTS
In Fig. <ref> we show the results of R_T=1/Γ_BdΓ/dM_inv. We make use of the value of the ratio C^2/Γ_B from Eq. (<ref>), hence we can predict not only the shape of the mass distribution but also its strength. We indeed find in Fig. <ref> a neat peak around the mass of the DD bound state with the width predicted in Ref. <cit.>.
In order to assess the feasibility of this measurement we integrate R_T over M_inv(ηη) to get the strength of the peak as 1/Γ_B∫dΓ/dM_inv(ηη)dM_inv(ηη), and we obtain the following value for the branching ratio of the reaction B^+→ K^+DD|_b; DD|_b→ηη, where DD|_b means the DD bound state,
Br[B^+→ K^+DD|_b; DD|_b→ηη]=1.47× 10^-4
Taking into account that most of the hadronic branching fractions reported in the PDG are of the order of 10^-4 or smaller, with some branching ratios as small as 10^-7, this branching ratio is relatively large and could easily be observed in experiments. This result should encourage experimental teams to perform this measurement, which would show for the first time the peak associated with the DD bound state.
§ CONCLUSIONS
We have studied the reaction B^+→ K^+ηη with the aim of finding a peak in the ηη mass distribution corresponding to a D D bound state that has been predicted by several theoretical frameworks, in lattice QCD simulations, and has also been claimed to exist from the observation of a concentration of strength around the D D threshold in reactions producing D D in the final state.
In order to maximize the chances of observation we have selected a reaction that in a first step produces a D D which is allowed to interact and produce the ηη at the end. The reaction chosen is B^+→ D_s^*+D^0, which has a large branching fraction for a B hadronic decay, of the order of 10^-2. The D_s^*+ decays to D^0 K^+ and the D^0 D^0 interact and produce the ηη. Technically the combined process is evaluated by means of a triangle diagram where the D D are virtual, a necessary condition to produce the D D bound state. The choice of ηη being produced by the D^0 D^0 interaction is motivated because the D D bound state only decays in light meson pairs, where the c c quarks have been annihilated. From previous calculations one knows that the ηη channel is one of the light pseudoscalar channels that couples most strongly to the D D bound state.
With this promising scenario we have evaluated the ηη mass distribution for the B^+ → K^+ηη decays and we have found indeed a clear peak around the predicted mass of the D D bound state. Then we have integrated the mass distribution and found a branching fraction for B^+ → K^+ (DD, bound); (DD, bound) →ηη of the order of 1.5 × 10^-4. This is a relatively large branching fraction for a B decay, which should encourage its search to finally find a peak for this much searched for state.
Final note: While this paper was being submitted, a similar paper appeared on INSPIRE <cit.> dealing with a similar reaction, the B^-→ K^-ηη_c decay. While that reaction is also promising, the method and formalism used in <cit.> are different and no absolute rate is predicted.
§ ACKNOWLEDGEMENTS
The work of P.C.S.B and L.M.A. is partly supported by the Brazilian agencies CNPq (Grant Numbers 309950/2020-1, 400215/2022-5, 200567/2022-5), FAPESB (Grant Number INT0007/2016) and CNPq/FAPERJ under the Project INCT-Física Nuclear e Aplicações (Contract No. 464898/2014-5).
The work of J. S. is partly supported by the National Natural Science Foundation of China under Grant No. 12247108 and the China Postdoctoral Science Foundation under Grant No. 2022M720359.
This work is
also partly supported by the Spanish Ministerio de Economia y Competitividad (MINECO) and European FEDER
funds under Contracts No. FIS2017-84038-C2-1-P B, PID2020-112777GB-I00, and by Generalitat Valenciana under
contract PROMETEO/2020/023. This project has received funding from the European Union Horizon 2020 research
and innovation programme under the program H2020-INFRAIA-2018-1, grant agreement No. 824093 of the STRONG-2020 project. This research is also supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP)
which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s
Excellence Strategy-EXC-2094 -390783311.
|
http://arxiv.org/abs/2307.04737v1 | 20230710175053 | Constraining Electromagnetic Signals from Black Holes with Hair | [
"Nicole R. Crumpler"
] | astro-ph.HE | [
"astro-ph.HE",
"hep-ph"
] |
§ INTRODUCTION
There is an important distinction between astrophysical and mathematical black holes (BHs). Astrophysically, BHs are observed as compact regions of spacetime in which gravity is so strong that even light cannot escape. These objects have been detected merging with each other <cit.>, emitting electromagnetically-bright jets <cit.>, consuming stars <cit.>, and more. Mathematically, BHs are vacuum solutions of Einstein equations of general relativity describing spacetime external to a compact mass distribution within an event horizon. The theory underlying mathematical BHs has been used to characterize astrophysical BHs, although there remain clear disconnects between these theoretical models and physical reality.
One such disconnect stems from the so-called “no-hair" theorem. Canonically, the “no-hair" theorem proposes that mathematical BHs are completely characterized by the BH's mass, charge, and spin as seen by an external observer. This theorem has been tested astrophysically using a variety of probes such as radio observations of the shadow of a BH event horizon <cit.>, gravitational wave signals of binary BH (BBH) mergers <cit.>, and stellar orbits around the galactic center <cit.>. No evidence for its violation has yet been discovered in astrophysical BHs. However, the “no-hair" theorem leads directly to the BH information paradox <cit.>. In this paradox, different configurations of matter, radiation, etc. that have fallen into a BH can be described by the same mathematical BH solution, losing information about the initial quantum state of the system. This violates a core tenet of quantum mechanics in a regime in which both the theories of general relativity and quantum mechanics are valid. There have been many attempts to resolve this paradox; one can refer to Ref. <cit.> for a recent review. Despite these attempts, no consensus has yet been reached.
The BH information paradox supports a possibility of richer physics underlying BHs. “Hairy" BHs are novel solutions to the Einstein field equations which are characterized by more than the three parameters of a canonical BH. Some of these BH models have been proposed as explicit solutions to the BH information paradox. One interesting possibility, proposed by Ref. <cit.>, is the firewall BH model. This mathematical BH has a singular shell (otherwise known as a firewall) just outside the horizon, causing general relativity to break down outside the BH horizon. Since general relativity no longer holds in this regime, the BH information problem no longer applies. Such an exotic object appears as a Schwarzschild BH to a distant observer, raising the question of how a firewall BH (and, more generally, other “hairy" BHs) might be distinguished from a canonical BH in astrophysical observations.
Electromagnetic (EM) radiation from astrophysical BHs in baryon-poor environments would be a beacon of new fundamental physics and support the existence of non-canonical BHs. Canonical BHs do not directly source EM radiation, except the weak emission of thermal Hawking radiation <cit.>, which is not observable for BHs of astrophysically relevant masses. However, some “hairy" BH models could radiate an appreciable proportion of their mass as EM radiation. We do not propose a specific model for this mechanism, but as a motivating example consider the firewall BH discussed earlier. Ref. <cit.> suggests several ways in which this model could produce EM radiation including the explosion of an unstable firewall, a BH phase transition from a canonical to a firewall BH, and BBH mergers involving a firewall BH. Overall, our understanding of BHs is inconsistent, motivating searches for generic signals of deviations from canonical BH models.
Given that only non-canonical BHs can radiate appreciably, there are two important considerations in order to distinguish this radiation from typical astrophysical sources. Firstly, we must be confident that there is a BH in the region sourcing the radiation. Secondly, the BH must be in a sufficiently baryon-poor environment and the emitted radiation must be sufficiently energetic that we can be confident the radiation is not produced by standard processes such as relativistic jets. The best observable available satisfying these considerations is concurrent observations of BBH mergers with gravitational wave detectors and EM radiation instruments. Gravitational wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO, <cit.>) and Virgo <cit.> regularly observe the mergers of stellar mass BHs. These events have only been observed extragalactically, with local BBH merger rates measured to be ∼ 10 Gpc^-3 yr^-1 <cit.>. Thus, any observable EM radiation from such events must be extremely energetic, on the order of supernova energies.
There is typically insufficient baryonic matter surrounding BBHs to produce EM radiation observable at these extragalactic distances and energy scales. Some models have been proposed to produce a gamma-ray burst (GRB) during a BBH merger. Most of these models require rapid accretion during the merger <cit.>, necessitating a baryon-rich environment, and all of these models have been contested <cit.>. Another class of model involves charged BHs <cit.>, but the required charge is unreasonably large <cit.>. Given the shortcomings of these models, no extragalactically-observable EM signal is expected from a stellar-mass BBH merger. Thus, in this paper, we investigate the observational signatures of a stellar-mass BH directly releasing some of its mass as EM radiation during a BBH merger as a novel indicator of the existence of “hairy" BHs.
Since BBH mergers are catastrophic events characterized by short timescales, a large amount of energy could be emitted by the BH in a burst of EM radiation. We parameterize the amount of energy released by ϵ such that ϵ M is the energy emitted directly by the BH in EM radiation into the ambient environment. The characteristic frequency of the EM signal is independent of the “hairy" BH model sourcing the radiation because any such emission directly from the BH must tunnel out of the gravitational well in the same manner as Hawking radiation <cit.>. If the Schwarzschild radius of a BH is r_s, then the characteristic frequency of the radiation emitted by the BH is
f=1/(2 r_s)=3.3× 10^-17(M/M_⊙)^-1 MeV.
Throughout this paper, we work in natural units where ħ=c=k_B=ϵ_0=1 unless otherwise stated. The frequencies emitted by stellar mass BHs are very low-frequency radio waves. At these frequencies, all of the radiation is absorbed in the interstellar medium, predominantly via free-free absorption by the warm ionized medium <cit.>. Thus, this radiation is not directly observable, but is absorbed and re-emitted as a secondary signal. We calculate the range of M and ϵ for which this secondary signal could be detected. In future work, model-dependent effects will need to be included to augment this generic parameterization. Although there are “hairy" BH models capable of producing EM radiation, no complete model able to make quantitative predictions for this effect exists currently. We hope this paper will motivate others in the field to work through the details of such models.
In this paper, we constrain a broad class of “hairy" BH models using a generic and model-independent EM signal that is characterized by the BH mass (M) and the fraction of that mass that is lost to EM radiation (ϵ). In Section <ref>, we discuss the two phenomenologically distinct cases in which radiation is emitted and derive the critical value of ϵ that separates them. This division is set by the Schwinger limit, above which the BH radiation triggers pair production resulting in a GRB and below which the EM field accelerates ambient charged particles to create an overdensity of cosmic rays. In Section <ref> we characterize the extragalactic observability of a GRB created by the BH radiation to constrain ϵ given the non-detection of GRBs from BBH mergers. In Section <ref> we describe the electron and proton cosmic ray energy spectrum created below the Schwinger limit and discuss the difficulties of observationally constraining ϵ in this less-energetic regime. In Section <ref> we summarize our results.
§ THE SCHWINGER LIMIT
The Schwinger limit dictates the critical value of ϵ separating the two phenomenologically distinct cases in which radiation is emitted for a particular BH mass. This limit, derived from quantum electrodynamics, sets the field strength at which an electric field becomes nonlinear due to the spontaneous production of electron-positron pairs <cit.>. Quantitatively, the Schwinger limit occurs at an electric field strength of ℰ_C=m_e^2/e=0.86 MeV ^2. This corresponds to a field energy density of u_C=ℰ_C^2=0.74 MeV ^4.
This field energy density can be related to ϵ as follows. We assume the radiation from the BH is spread over a volume one wavelength (λ) in thickness outside of the BH Schwarzschild radius. Then, the energy density of the BH radiation is
u=ϵ M/(4/3π ((r_s + λ)^3 - r_s^3))
= 3.1×10^9ϵ(M/M_⊙)^-2 MeV^4.
Setting u=u_C and solving for the critical value of ϵ gives
ϵ_C=2.4×10^-10(M/M_⊙)^2.
Therefore, ϵ_C∼ 10^-10-10^-6 for BH masses ranging from 1 to 50 M_⊙. For ϵ>ϵ_C, pair-production dominates and results in a GRB as discussed in Section <ref>. For ϵ<ϵ_C, the EM field accelerates ambient charged particles, creating cosmic ray electrons and protons as discussed in Section <ref>.
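The numbers above are straightforward to reproduce. A minimal Python sketch (ours, not part of the original analysis; the constants are standard values and the shell geometry follows the assumption λ = 2 r_s stated above) is:

import numpy as np

hbar_c_km = 1.973e-16    # MeV km
Msun_MeV = 1.116e60      # solar mass in MeV
rs_1Msun_km = 2.95       # Schwarzschild radius of 1 Msun in km

def epsilon_critical(M):
    # eps at which the field energy density in a shell of thickness
    # lambda = 2 r_s reaches the Schwinger value u_C = (m_e^2/e)^2
    m_e, e = 0.511, np.sqrt(4.0 * np.pi / 137.036)
    u_C = (m_e**2 / e)**2                          # ~0.74 MeV^4
    r_s = rs_1Msun_km * M / hbar_c_km              # in MeV^-1
    lam = 2.0 * r_s
    V = 4.0*np.pi/3.0 * ((r_s + lam)**3 - r_s**3)  # shell volume in MeV^-3
    return u_C * V / (M * Msun_MeV)

for M in (1.0, 10.0, 50.0):
    print("M = %4.0f Msun : eps_C = %.1e" % (M, epsilon_critical(M)))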
§ GAMMA-RAY EMISSION ABOVE THE SCHWINGER LIMIT
Above the Schwinger limit, the energy density of the field is large enough to result in electron-positron pair production <cit.>. Qualitatively, the electron-positron-photon gas thermalizes due to Thomson scattering and expands relativistically as an ideal fluid, creating an object known as a fireball in GRB literature <cit.>. We denote the lab frame of an Earth observer as S and the comoving frame of the fluid as S'.
When the EM radiation is first emitted by the BH, the lab and fluid frames coincide. The initial temperature of the fireball in both S and S' is given by
T_0=(E/V_0 g_0 a)^1/4
where a=π^2/15, E is the energy dumped into a region of volume V_0, and g_0 = 2.75=11/4 is half of the effective degrees of freedom for a plasma consisting of photons, electrons, and positrons in thermal equilibrium <cit.>. Again assuming that the radiation from the BH is spread over a volume one wavelength in thickness outside of the BH Schwarzschild radius, the initial temperature is
T_0=(ϵ M/(4/3π((r_s+λ)^3-r_s^3) · (11/4) · a))^1/4
=200ϵ^1/4(M/M_⊙)^-1/2 MeV.
Assuming that any remnants of stellar ejecta and envelopes have long dispersed, we neglect any external baryon contributions to the dynamics of the fireball. Thus, the fireball is a relativistic radiation-dominated fluid, which rapidly accelerates to γ≫1 under its own super-Eddington radiation pressure. Because the fireball is created outside the Schwarzschild radius of the BH, the system originates in a region of small curvature. Consequently, general relativistic and gravitational redshift effects can be neglected. Employing the usual relativistic conservation equations of baryon number and energy-momentum from Ref. <cit.> in the limit where γ≫1, yields the following scaling relations for each fluid shell <cit.>
γ(r) ∼(r/R_0)
T'(r) ∼ T_0(r/R_0)^-1∼T_0/γ(r)
where r is the distance from the origin in the lab frame and R_0 is the initial width of the fireball. These relations apply so long as the fireball is ultra-relativistic, radiation-dominated, and opaque due to Thomson scattering. So, as the fireball expands from the origin, the bulk Lorentz factor continues to increase due to the acceleration from the radiation pressure of the fluid. To first order in γ, the width of the fireball in the lab frame is constant R(r)= R_0 <cit.>. This requires the width in the comoving frame to increase as R'(r)= γ(r)R_0, illustrating why the fireball cools in its co-moving frame. In the lab frame, the temperature is blue-shifted by
T(r)=γ(r)T'(r)= T_0
since the fluid is moving relativistically towards the observer. Thus, so long as the scaling relations apply, a lab observer sees each shell of the fireball at the same constant temperature, T_0.
Within each fluid shell, the number density of electron-positron pairs in the comoving frame decreases as the fireball cools. Eventually, the process of pair creation and annihilation freezes out when the time for a positron to annihilate with an electron is of the same order as the dynamical time. This occurs at a comoving temperature of T'∼ 20 keV <cit.>. At this temperature, the proportion of the initial energy from the BH contained in the remaining electron-positron pairs is negligible <cit.>. So, nearly all of the initial ϵ M is contained in photons that had been trapped in the fluid by Thomson scattering. When pair production freezes out, the Thomson opacity decreases dramatically and these photons escape. Since the comoving temperature depends only on r, each shell experiences this freeze out as it moves through the same radius in the lab frame. Thus, the characteristic time delay between when photons free-stream from the inner and outermost edges of the fireball is given by <cit.>
δ t∼R_0/c=2.0×10^-5(M/M_⊙) s
since the initial radius of the fireball is set by the wavelength of the BH radiation, λ. These timescales are short enough to be consistent with a short GRB, <O(1 second).
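For reference, these two scalings can be packaged into a short helper (a sketch of ours using the numerical prefactors quoted above, with M in solar masses):

def fireball_T0_MeV(eps, M):
    # initial fireball temperature, T0 ~ 200 eps^(1/4) (M/Msun)^(-1/2) MeV
    return 200.0 * eps**0.25 * M**-0.5

def burst_duration_s(M):
    # light-crossing time of the initial shell, dt ~ R0/c with R0 = lambda = 2 r_s
    r_s_km = 2.95 * M
    return 2.0 * r_s_km / 2.998e5

for M in (1.0, 10.0, 50.0):
    print(M, fireball_T0_MeV(1e-6, M), burst_duration_s(M))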
When the photons are able to free stream from the fireball, an observer on Earth sees a nearly thermal black body spectrum at temperature T_0 radiating from the BH <cit.>. The black body spectrum in photon number as a function of photon frequency and the fireball temperature is given by
B_f(T_0)=2f^2/(e^{2π f/T_0}-1) MeV^2 s^-1 Hz^-1 sr^-1.
The peak photon energy for this spectrum is
E_peak=1.6 T_0=320ϵ^1/4(M/M_⊙)^-1/2 MeV.
These peak energies are plotted in Figure <ref> as a function of BH mass and ϵ. These are the most likely photons to be emitted from the fireball, and all peak energies are gamma rays (energy >1 MeV).
Since nearly all the energy from the fireball is converted into photons at the peak energy, the number of photons emitted is
N_γ∼ϵ M/E_peak∼ 3.4×10^57ϵ^3/4(M/M_⊙)^3/2.
Over the full range of photon energies shown in Figure <ref>, the most sensitive telescope is the Fermi Gamma-ray Space Telescope. The two instruments onboard Fermi are the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). The LAT observes photon energies in the range 20 MeV-300 GeV with a sensitivity of 10^-4 erg cm^-2 <cit.>. The GBM observes photon energies in the range 8 keV-40 MeV with a sensitivity of 0.5 ph cm^-2 s^-1 <cit.>. For the gamma-ray energies emitted by 1-50 M_⊙ BHs over a timescale of ≲ 1 second, Fermi's sensitivity limit requires a flux of F≳ 1 ph cm^-2. This minimum flux can be used to calculate the maximum distance, d, to which EM emission by a BH above the Schwinger limit is observable.
d=√(N_γ/(4π F))= 5360ϵ^3/8(M/M_⊙)^3/4 Mpc
These distances are plotted in Figure <ref>. For a given distance, we can solve for the minimum ϵ for which Fermi could observe such an event extragalactically.
ϵ_min=2×10^-5(d/100 Mpc)^8/3(M/M_⊙)^-2
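These observability estimates can likewise be scripted; the following sketch (ours, using the prefactors of the equations above and the F ≳ 1 ph cm^-2 threshold adopted in the text; M is in solar masses) inverts the distance relation to give ϵ_min:

import numpy as np

Msun_MeV = 1.116e60   # solar mass in MeV
Mpc_cm = 3.086e24     # cm per Mpc

def E_peak_MeV(eps, M):
    # peak photon energy of the fireball black body, 1.6 T0
    return 320.0 * eps**0.25 * M**-0.5

def N_gamma(eps, M):
    # total number of photons, ~ eps M / E_peak
    return eps * M * Msun_MeV / E_peak_MeV(eps, M)

def d_max_Mpc(eps, M, F_min=1.0):
    # largest distance at which the fluence exceeds F_min photons/cm^2
    return np.sqrt(N_gamma(eps, M) / (4.0 * np.pi * F_min)) / Mpc_cm

def eps_min(d_Mpc, M, F_min=1.0):
    # smallest eps observable from distance d_Mpc (inverse of d_max_Mpc)
    d_cm = d_Mpc * Mpc_cm
    return (320.0 * 4.0 * np.pi * F_min * d_cm**2 / (Msun_MeV * M**1.5))**(4.0/3.0)

for M in (10.0, 30.0, 50.0):
    print("M = %2.0f Msun, d = 500 Mpc : eps_min = %.1e" % (M, eps_min(500.0, M)))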
The Gravitational-wave (GW) Transient Catalog lists events detected by LIGO <cit.>, Virgo <cit.>, and the Kamioka Gravitational Wave Detector (KAGRA, <cit.>). Most BBH merger candidates listed in the catalog are posted on NASA's General Coordinates Network, which contains both concurrent and follow-up EM observations of GW triggers by the Fermi GBM. The Fermi GBM has an 8 steradian field-of-view <cit.>, which is large enough to cover the full LIGO/Virgo 90% confidence GW localization region if the telescope is well-aligned at the time of the trigger. We cross-reference all BBH mergers in the GW Transient Catalog with Fermi observations from the General Coordinates Network. For BH masses ranging from 10 to 50 M_⊙ in intervals of 10 M_⊙, we identify the nearest BBH merger for which Fermi observed at least 90% of the localization region and recorded no GRB event. The GW events constraining each mass interval are listed in Table <ref>. These non-detections are used to constrain ϵ for each BH mass. To do this, we assume the furthest distance (luminosity distance + error bar) measured by LIGO/Virgo and calculate the minimum value of ϵ needed such that the event would be observable with the Fermi GBM for the observed BH mass. All constrained values of ϵ are above the Schwinger limit for the given BH mass and all result in ∼ MeV photons which are in the energy range observable by the GBM. These constraints are listed in Table <ref>. Assuming all BHs are “hairy" BHs capable of producing this signal, the current upper bounds on ϵ are ϵ<10^-5 for 10, 30, 40 M_⊙ BHs and ϵ<10^-4 for 20, 50 M_⊙ BHs since no high energy EM signal was observed from these BBH mergers. These constraints will improve as more GW events with concurrent Fermi observations are detected.
There has been one observation of a GRB and BBH merger occurring concurrently. An offline search of Fermi GBM data following the detection of GW150914 <cit.> revealed a 1 second short GRB occurring 0.4 seconds after the LIGO trigger with a localization consistent with that of the GW signal <cit.>. The GRB transient was near the detection threshold for the GBM (2.9σ detection), was not detected by any other instrument, and the localization was poorly constrained <cit.>. Thus, the GRB cannot be confidently associated with GW150914. Assuming that the GRB did indeed originate from the BBH merger, we can identify the range of ϵ that is consistent with the measured properties of the merger and GRB. From the GW Transient Catalog, the constituent BH masses range from ∼30-40 M_⊙ and the range of luminosity distances is 270-590 Mpc. For these masses and distances, the minimum value of ϵ such that the event is observable with the Fermi GBM ranges from ϵ_min∼10^-7-10^-6, resulting in peak photon energies of ∼ 1-3 MeV. This range of peak photon energies is consistent with the properties of the observed GRB, which peaked near an MeV <cit.>. Since this event was barely above the GBM's detection threshold, we anticipate ϵ∼ϵ_min. Therefore, the observed GRB is consistent with a GRB produced via rapid EM emission directly from a 30-40 M_⊙ “hairy" BH for ϵ∼10^-7-10^-6.
§ GALACTIC COSMIC RAY SIGNAL BELOW THE SCHWINGER LIMIT
Below the Schwinger limit the electric field emitted by the BH propagates outwards, accelerating ambient charged particles. Mutual attraction between the protons and electrons inhibits charge separation, requiring that the particles have the same Lorentz factor on average. Since the protons are a factor of ∼10^3 heavier than the electrons and the particle energy scales with mass, most of the BH field energy is absorbed by protons. Thus, the dynamics of the system are set by the protons.
A rapid burst of isotropic EM radiation can be generically described by a coherent single-wavelength pulse. The exact duration and coherence of emission is model-dependent, but so long as the emission occurs on the short timescales characteristic of BBH mergers, the signal is sufficiently similar to the single-wavelength pulse approximation. First, we consider the acceleration of a single charged particle due to this strong EM pulse. In the (+ - - -) metric, the relativistic Lorentz force law is
du^α/dτ=e/mF^αβu_β
where e is the elementary charge, m is the proton or electron mass, u^α is the particle's 4-velocity, and τ is the proper time in the instantaneous rest frame of the particle. For a sinusoidal EM field, the characteristic timescale of field variation is ℰ/dℰ/dt∼ 1/f where ℰ is the electric field magnitude and f is the frequency of the BH radiation. Since f∼10^-17 MeV is very small, this characteristic timescale is very long. So, the field magnitude is nearly constant in time and the system can be approximated by a constant crossed field. For the ranges of ϵ considered here, the absorption length for the EM field is long enough that the average charged particle experiences a sufficiently weak field to neglect any special relativistic effects that transform the field but also a sufficiently strong field that the protons are quickly accelerated to v∼ 1 and radiate negligibly. We choose a coordinate system such that the electric field is directed along the x-axis and the magnetic field along the y-axis, then the Faraday tensor is
F^αβ=[ 0 -ℰ 0 0; ℰ 0 0 ℰ; 0 0 0 0; 0 -ℰ 0 0 ]
where ℰ is the magnitude of the electric field in MeV^2. Since e/mF^αβ is constant, the Lorentz force equation has a matrix exponential solution
u^α(τ)=exp(e/mτ F^α_β)u^β(0).
The particles accelerate from the thermal speed of the plasma (v∼ 0), fixing u^β(0).
We compute u^α(τ) for the given Faraday tensor and use this to extract the Lorentz factor, γ, and the components of the 3-velocity in the lab frame, v.
γ(τ) =1+e^2ℰ^2τ^2/2m^2
v_x(τ) =2eℰmτ/2m^2+e^2ℰ^2τ^2
v_y(τ) =0
v_z(τ) =1-2m^2/2m^2+e^2ℰ^2τ^2
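As a consistency check (ours, not part of the paper), this solution can be recovered by integrating the Lorentz force directly for a crossed field with E along x and B along y acting on a charge initially at rest, scaling out a ≡ eℰ/m:

import numpy as np
from scipy.integrate import solve_ivp

a = 1.0   # a = e*E/m in scaled units

def rhs(tau, u):
    # du^alpha/dtau for E = E x_hat, B = E y_hat
    u0, ux, uy, uz = u
    return [a * ux,          # energy gain from E.v
            a * (u0 - uz),   # e(E_x - v_z B_y)
            0.0,
            a * ux]          # magnetic deflection e v_x B_y

tau = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(rhs, (0.0, tau[-1]), [1.0, 0.0, 0.0, 0.0],
                t_eval=tau, rtol=1e-10, atol=1e-12)
gamma_numeric = sol.y[0]
gamma_analytic = 1.0 + 0.5 * (a * tau)**2
print(np.max(np.abs(gamma_numeric - gamma_analytic)))   # ~0: the two agree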
The final kinetic energy of the particle depends on the time at which the particle exits the field in its instantaneous rest frame, τ_f. By definition, γ(τ)=dt/dτ. Thus,
∫_0^τ_fγ(τ)dτ=τ_f+e^2ℰ^2τ_f^3/6m^2=∫_0^t_fdt=1/f
since in the lab frame the particle is in the field for t_f=1/f. The magnitude of the EM field is sufficiently large, and we can neglect the term that is linear in τ_f to find
τ_f=(6m^2/e^2ℰ^2f)^1/3
= 1.2×10^81/ℰ^2/3(M/M_⊙)^1/3 MeV^-1,
where here, and in all following numerical expressions, we set m=m_p.
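The approximation of dropping the linear term can be checked numerically; a short sketch (ours) solving the full equation τ_f + e^2ℰ^2τ_f^3/(6m^2) = 1/f is:

import numpy as np
from scipy.optimize import brentq

m_p = 938.27                        # proton mass in MeV
e = np.sqrt(4.0 * np.pi / 137.036)  # elementary charge in natural units

def tau_f_approx(E_field, M):
    # cubic-dominated approximation quoted in the text (E_field in MeV^2, M in Msun)
    f = 3.3e-17 / M
    return (6.0 * m_p**2 / (e**2 * E_field**2 * f))**(1.0/3.0)

def tau_f_exact(E_field, M):
    # solve tau + e^2 E^2 tau^3 / (6 m^2) = 1/f for tau
    f = 3.3e-17 / M
    a3 = e**2 * E_field**2 / (6.0 * m_p**2)
    g = lambda tau: tau + a3 * tau**3 - 1.0 / f
    guess = tau_f_approx(E_field, M)
    return brentq(g, 0.0, 10.0 * guess)

E_field = 1e-10    # example field strength in MeV^2, well below the Schwinger limit
print(tau_f_exact(E_field, 10.0) / tau_f_approx(E_field, 10.0))   # ~1: linear term negligible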
Now we characterize the absorption of energy from the BH EM field by ambient protons. We partition the volume around the BH into spherical shells one wavelength in thickness such that the distance from the BH is parameterized by the dimensionless number j, the number of wavelengths from the Schwarzschild radius of BH. Let E be the total EM energy incident onto a shell j wavelengths from the BH. Assuming j is large, the initial energy density of the shell is
u = E/(4/3π[(r_s+jλ)^3-(r_s+(j-1)λ)^3])∼E/(4 πλ^3 j^2)
= 3.0×10^-51E/ j^2(M/M_⊙)^-3 MeV^4.
Then the magnitude of the electric field is
ℰ=√(u)
=5.4×10^-26(E/j^2)^1/2(M/M_⊙)^-3/2 MeV^2.
The kinetic energy of each proton after interacting with the EM wave is
K∼γ(τ_f)m
= 1.0× 10^-5(E/j^2)^1/3(M/M_⊙)^-1/3 MeV.
This gives the kinetic energy of one proton in the jth shell. To derive the total kinetic energy lost in the jth shell, we assume a homogeneously distributed number density of charged particles with n=n_p=n_e∼ 1 cm^-3∼ 10^-32 MeV^3, consistent with the Milky Way interstellar medium <cit.>. The total number of protons in the jth shell is
N∼ 4 π n λ^3 j^2=2.6×10^18j^2 (M/M_⊙)^3.
So the total kinetic energy absorbed in the jth shell is
K_tot=N K=2.6×10^13j^4/3E^1/3(M/M_⊙)^8/3 MeV.
This gives a differential equation for the energy remaining in the field
d E/d j=-K_tot
subject to the initial condition E(0)=ϵ M. This equation can be integrated to yield
E(j)= (1.1×10^40ϵ^2/3(M/M_⊙)^2/3-7.5×10^12(M/M_⊙)^8/3j^7/3)^3/2 MeV.
Solving for the value of j at which E(j)=0 gives the dimensionless absorption length, j_abs,
j_abs= 4.4×10^11ϵ^2/7(M/M_⊙)^-6/7.
The absorption length, j_absλ, gives the number of protons accelerated by the field and the average kinetic energy and Lorentz factor of each proton.
N_abs =4/3π n (j_absλ)^3=7.1×10^52ϵ^6/7(M/M_⊙)^3/7
K_avg =ϵ M/N_abs=1.6×10^7ϵ^1/7(M/M_⊙)^4/7 MeV
γ_avg ∼K_avg/m=1.7×10^4ϵ^1/7(M/M_⊙)^4/7
The number of electrons accelerated by the field is also N_abs and the average Lorentz factor of the electrons must be the same as that for the protons.
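The analytic absorption length can be cross-checked by marching the energy-loss equation outwards shell by shell; a minimal sketch (ours, using the rounded prefactors quoted above, with M in solar masses) is:

import numpy as np

Msun_MeV = 1.116e60   # solar mass in MeV

def absorption_length(eps, M, n_steps=200000):
    # integrate dE/dj = -2.6e13 j^(4/3) E^(1/3) (M/Msun)^(8/3) from E(0) = eps*M
    j_guess = 4.4e11 * eps**(2.0/7.0) * M**(-6.0/7.0)   # analytic estimate, used only to set the grid
    j_grid = np.linspace(1.0, 2.0 * j_guess, n_steps)
    dj = j_grid[1] - j_grid[0]
    E = eps * M * Msun_MeV
    for j in j_grid:
        E -= 2.6e13 * j**(4.0/3.0) * E**(1.0/3.0) * M**(8.0/3.0) * dj
        if E <= 0.0:
            return j
    return j_grid[-1]

eps, M = 1e-10, 10.0
j_num = absorption_length(eps, M)
j_ana = 4.4e11 * eps**(2.0/7.0) * M**(-6.0/7.0)
print(j_num / j_ana)   # ~1: the numerical integration reproduces the analytic j_abs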
We perform this calculation for values of ϵ ranging from ϵ=10^-20 to the Schwinger limit for 1-50 M_⊙ BHs. The resulting average proton kinetic energies as a function of ϵ are plotted in Figure <ref>. Overall, the average kinetic energy per proton ranges from 20 GeV for a 1 M_⊙ BH with ϵ=10^-20 to 20 TeV for a 50 M_⊙ BH at the Schwinger limit (ϵ∼10^-6). The average proton kinetic energies can be used to calculate the corresponding average electron kinetic energies. Overall, the average kinetic energy per electron ranges from 0.01 GeV for a 1 M_⊙ BH with ϵ=10^-20 to 10 GeV for a 50 M_⊙ BH at the Schwinger limit. These relativistic protons and electrons are cosmic rays.
Cosmic rays of these energies are difficult to identify with a point source on the sky. Ambient magnetic fields confine these cosmic rays to their host galaxies and cause them to quickly diffuse (on timescales ≲ 0.01 years) into the galactic background of cosmic rays. Thus, these cosmic rays are indistinguishable from cosmic rays produced by supernova remnants <cit.>, stellar winds or flares <cit.>, and other processes. Although these particles create secondary signals as they lose energy to bremsstrahlung, ionization, synchrotron radiation and inverse Compton scattering for electrons and to inelastic collisions for protons <cit.>, the timescales for these processes are much longer than the diffusion time (≳ 10^6 years). Therefore, these secondary signals are also indistinguishable from those produced by other cosmic ray processes.
Because these cosmic rays are mixed with and are indistinguishable from cosmic rays produced via other mechanisms, it is not possible to confidently identify cosmic rays due to EM radiation from BHs either in external galaxies or in the Milky Way. Therefore, it becomes difficult to place strong constraints on ϵ below the Schwinger limit. Although this avenue cannot constrain ϵ, the effect is still potentially observable. Should the BBH merger occur on a sufficiently strong magnetic field background (B>10 MeV^2=0.05 T), the ultrarelativistic electrons would produce synchrotron radiation in the X-ray band, motivating X-ray observations of BBH mergers.
§ DISCUSSION AND CONCLUSIONS
The theoretical understanding of BHs is inconsistent, motivating searches for generic signals of deviations from canonical BH models. In this paper, we have constrained a broad class of “hairy" BH models capable of emitting a fraction of their mass as EM radiation. Since this radiation is sourced directly from the BH, it must tunnel out of the BH's gravitational well in the same manner as Hawking radiation. Thus, the characteristic frequency of the radiation depends only on the mass of the BH, resulting in a signal that is generic and model-independent. We derive the critical value of ϵ, the fraction of the BH mass released as radiation, above which the field strength triggers a GRB and below which ambient particles are accelerated to cosmic ray energies. Because no extragalactically-observable EM signal is expected from a stellar-mass BBH merger, we find that concurrent observations of BBH mergers with GW detectors and EM radiation instruments offer the best data to detect such a signal.
In the GRB regime, the BH mass and ϵ fix the initial volume and temperature of the electron-positron fireball. The fireball expands relativistically, maintaining constant temperature in the frame of an Earth observer and cooling in its comoving frame. Once the fireball is sufficiently cool in its frame, pair-production freezes out and the photons free stream. In the frame of an Earth observer, these photons have energies described by a black body spectrum at the initial temperature of the fireball. Thus, the energy deposited by the BH is re-emitted as gamma-rays over a short timescale. By cross-referencing GW events with concurrent Fermi GBM observations of the localization region, we place upper bounds on ϵ. These bounds are ϵ<10^-5-10^-4 for 10-50 M_⊙ BHs depending on the BH mass since no high energy EM signal was observed from these BBH mergers. These constraints will improve as more GW events with concurrent Fermi observations are detected. We also discuss the weak detection of a GRB following GW150914, and find that this event is consistent with a GRB produced via rapid EM emission directly from a “hairy" BH for ϵ∼ 10^-7-10^-6.
Below the Schwinger limit, the EM radiation can be described by a constant-crossed field. The dynamics of the system are fixed by the ambient protons, which are rapidly accelerated to v∼ 1 by the field, absorbing energy as the radiation propagates away from the BH. We solve the differential equation describing the energy lost by the field to calculate the absorption length as a function of BH mass and ϵ. This absorption length fixes the average energy of the ambient protons and electrons that interacted with the BH radiation. For 1-50 M_⊙ BHs and ϵ ranging from 10^-20 to the Schwinger limit, the average kinetic energy per proton ranges from 20 GeV-20 TeV and the energy per electron ranges from 0.01-10 GeV. At these energies, cosmic rays have a short diffusion length due to the galactic magnetic field and are mixed in with other astrophysical cosmic rays. Additionally, the secondary signals from cosmic rays of these energies are produced on too long of a timescale to be attributed to EM radiation directly from a BH. Overall, constraining ϵ in this less energetic regime is difficult. Future work could investigate BBH mergers in strong background magnetic fields. In this case, the ultrarelativistic electrons emit X-rays via synchrotron radiation that may be observable.
Although this work benefits from employing a model-independent approach to generically characterize radiation emitted directly from “hairy" BHs, model-dependent effects will need to be included to augment this general parameterization. Some “hairy" BH models, such as the firewall BH <cit.>, are capable of producing EM radiation. But currently, no complete model able to quantitatively characterize this effect exists. We hope this paper will motivate others in the field to work through the details of such models. The firewall BH metric of Ref. <cit.> is particularly well-suited for this. Given this metric and a parameterized charge distribution adhered to the firewall, one could use numerical relativity to simulate the emission of gravitational and EM radiation during a BBH merger involving a firewall BH. This approach offers independent constraints on ϵ as a function of the charge distribution parameter and may be able to constrain ϵ in cases below the Schwinger limit.
Strengthening constraints on ϵ will be an important pursuit for future work given the relatively loose bounds found in this paper. In general, “hairy" BHs are well-motivated by the BH information paradox. However, such BHs must appear nearly canonical to a distant observer due to observational evidence in favor of the “no hair" theorem. Therefore, novel approaches to constraining “hairy" models will continue to be vital in the search for new fundamental physics.
We thank Surjeet Rajendran and David Kaplan for useful discussions. We also thank Nadia Zakamska and Erwin Tanin for their edits to this manuscript.
This work is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE2139757. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
10
Abbott_2019
B.P. Abbott, R. Abbott, T.D. Abbott, S. Abraham, F. Acernese,
K. Ackley et al., GWTC-1: A Gravitational-Wave Transient Catalog of
Compact Binary Mergers Observed by LIGO and Virgo during the First and Second
Observing Runs,
https://doi.org/10.1103/PhysRevX.9.031040Physical Review X
9 (2019) 031040.
Vayner_2021
A. Vayner, N. Zakamska, S.A. Wright, L. Armus, N. Murray and
G. Walth, Multiphase Outflows in High-redshift Quasar Host
Galaxies, https://doi.org/10.3847/1538-4357/ac2b9eThe Astrophysical Journal
923 (2021) 59.
Komossa_2015
S. Komossa, Tidal disruption of stars by supermassive black holes:
Status of observations,
https://doi.org/10.1016/j.jheap.2015.04.006Journal of High
Energy Astrophysics 7 (2015) 148.
Broderick_2014
A.E. Broderick, T. Johannsen, A. Loeb and D. Psaltis, Testing
the No-hair Theorem with Event Horizon Telescope Observations of Sagittarius
A*, https://doi.org/10.1088/0004-637X/784/1/7The Astrophysical Journal 784 (2014) 7 [https://arxiv.org/abs/1311.55641311.5564].
Psaltis_2016
D. Psaltis, N. Wex and M. Kramer, A Quantitative Test of the
No-hair Theorem with Sgr A* Using Stars, Pulsars, and the Event Horizon
Telescope, https://doi.org/10.3847/0004-637X/818/2/121The Astrophysical Journal
818 (2016) 121
[https://arxiv.org/abs/1510.003941510.00394].
Wang_2022
D. Wang, Shaving the Hair of Black Hole with Sagittarius A^* from
Event Horizon Telescope,
https://doi.org/10.48550/arXiv.2205.08026arXiv e-prints (2022)
arXiv:2205.08026 [https://arxiv.org/abs/2205.080262205.08026].
Isi_2019
M. Isi, M. Giesler, W.M. Farr, M.A. Scheel and S.A. Teukolsky,
Testing the No-Hair Theorem with GW150914,
https://doi.org/10.1103/PhysRevLett.123.111102Phys. Rev. Lett. 123 (2019) 111102 [https://arxiv.org/abs/1905.008691905.00869].
Wang_2022_gw
K. Wang, Retesting the no-hair theorem with GW150914,
https://doi.org/10.1140/epjc/s10052-022-10049-xEuropean
Physical Journal C 82 (2022) 125
[https://arxiv.org/abs/2111.009532111.00953].
Sadeghian_2011
L. Sadeghian and C.M. Will, Testing the black hole no-hair theorem
at the galactic center: perturbing effects of stars in the surrounding
cluster,
https://doi.org/10.1088/0264-9381/28/22/225029Classical and
Quantum Gravity 28 (2011) 225029
[https://arxiv.org/abs/1106.50561106.5056].
Qi_2021
H. Qi, R. O'Shaughnessy and P. Brady, Testing the black hole
no-hair theorem with Galactic Center stellar orbits,
https://doi.org/10.1103/PhysRevD.103.084006Phys. Rev. D 103 (2021) 084006 [https://arxiv.org/abs/2011.022672011.02267].
Hawking_1976
S.W. Hawking, Breakdown of predictability in gravitational collapse,
https://doi.org/10.1103/PhysRevD.14.2460Phys. Rev. D
14 (1976) 2460.
Raju_2022
S. Raju, Lessons from the information paradox,
https://doi.org/10.1016/j.physrep.2021.10.001Physics Reports
943 (2022) 1.
Kaplan_2019
D.E. Kaplan and S. Rajendran, Firewalls in general relativity,
https://doi.org/10.1103/PhysRevD.99.044033Phys. Rev. D 99
(2019) 044033 [https://arxiv.org/abs/1812.005361812.00536].
Hawking_1975
S.W. Hawking, Particle creation by black holes,
https://doi.org/10.1007/BF02345020Communications in
Mathematical Physics 43 (1975) 199.
LIGO_2015
J. Aasi, J. Abadie, B.P. Abbott, R. Abbott, T. Abbott,
M.R. Abernathy et al., Characterization of the LIGO detectors during
their sixth science run,
https://doi.org/10.1088/0264-9381/32/11/115012Classical and
Quantum Gravity 32 (2015) 115012
[https://arxiv.org/abs/1410.77641410.7764].
Acernese_2015
F. Acernese, M. Agathos, K. Agatsuma, D. Aisa, N. Allemandou,
A. Allocca et al., Advanced Virgo: a second-generation
interferometric gravitational wave detector,
https://doi.org/10.1088/0264-9381/32/2/024001Classical and
Quantum Gravity 32 (2015) 024001
[https://arxiv.org/abs/1408.39781408.3978].
Mandel_2016
I. Mandel and S. Mink, Merging binary black holes formed through
chemically homogeneous evolution in short-period stellar binaries,
https://doi.org/10.1093/mnras/stw379Monthly Notices of the
Royal Astronomical Society 458 (2015) .
Loeb_2016
A. Loeb, Electromagnetic Counterparts to Black Hole Mergers Detected
by LIGO, https://doi.org/10.3847/2041-8205/819/2/L21The Astrophysical Journal
819 (2016) L21
[https://arxiv.org/abs/1602.047351602.04735].
Perna_2016
R. Perna, D. Lazzati and B. Giacomazzo, Short Gamma-Ray Bursts
from the Merger of Two Black Holes,
https://doi.org/10.3847/2041-8205/821/1/L18The Astrophysical Journal 821 (2016) L18 [https://arxiv.org/abs/1602.051401602.05140].
Woosley_2016
S.E. Woosley, The Progenitor of GW150914,
https://doi.org/10.3847/2041-8205/824/1/L10The Astrophysical Journal 824 (2016) L10 [https://arxiv.org/abs/1603.005111603.00511].
Dai_2017
L. Dai, J.C. McKinney and M.C. Miller, Energetic constraints on
electromagnetic signals from double black hole mergers,
https://doi.org/10.1093/mnrasl/slx086Monthly Notices of the Royal Astronomical Society 470
(2017) L92 [https://arxiv.org/abs/1611.007641611.00764].
Fedrow_2017
J.M. Fedrow, C.D. Ott, U. Sperhake, J. Blackman, R. Haas,
C. Reisswig et al., Gravitational Waves from Binary Black Hole
Mergers inside Stars,
https://doi.org/10.1103/PhysRevLett.119.171103Phys. Rev. Lett. 119 (2017) 171103 [https://arxiv.org/abs/1704.073831704.07383].
Kimura_2017
S.S. Kimura, S.Z. Takahashi and K. Toma, Evolution of an accretion
disc in binary black hole systems,
https://doi.org/10.1093/mnras/stw3036Monthly Notices of the Royal Astronomical Society 465
(2017) 4406 [https://arxiv.org/abs/1607.019641607.01964].
Zhang_2016
B. Zhang, Mergers of Charged Black Holes: Gravitational-wave Events,
Short Gamma-Ray Bursts, and Fast Radio Bursts,
https://doi.org/10.3847/2041-8205/827/2/L31The Astrophysical Journal 827 (2016) L31 [https://arxiv.org/abs/1602.045421602.04542].
Lyutikov_2016
M. Lyutikov, Fermi GBM signal contemporaneous with GW150914 - an
unlikely association,
https://doi.org/10.48550/arXiv.1602.07352arXiv e-prints (2016)
arXiv:1602.07352 [https://arxiv.org/abs/1602.073521602.07352].
Parikh_2000
M.K. Parikh and F. Wilczek, Hawking Radiation As Tunneling,
https://doi.org/10.1103/PhysRevLett.85.5042Phys. Rev. Lett. 85
(2000) 5042 [https://arxiv.org/abs/hep-th/9907001hep-th/9907001].
Reynolds_1990
R.J. Reynolds, The Low Density Ionized Component of the Interstellar
Medium and Free-Free Absorption at High Galactic Latitudes, in Low
Frequency Astrophysics from Space, N.E. Kassim and K.W. Weiler, eds.,
vol. 362, p. 121 (1990), https://doi.org/10.1007/3-540-52891-1DOI.
Heisenberg_1936
W. Heisenberg and H. Euler, Consequences of Dirac's theory of
positrons, https://doi.org/10.1007/BF01343663Z. Phys.
98 (1936) 714
[https://arxiv.org/abs/physics/0605038physics/0605038].
Schwinger_1951
J. Schwinger, On gauge invariance and vacuum polarization,
https://doi.org/10.1103/PhysRev.82.664Phys. Rev. 82 (1951) 664.
Lieu_1998
R. Lieu, Y. Takahashi and T.W.B. Kibble, Gamma Ray Burst as Vacuum
Discharge of Super-Schwinger Electric Fields,
https://doi.org/10.48550/arXiv.astro-ph/9803072arXiv e-prints
(1998) astro [https://arxiv.org/abs/astro-ph/9803072astro-ph/9803072].
Goodman_1986
J. Goodman, Are gamma-ray bursts optically thick?,
https://doi.org/10.1086/184741The Astrophysical Journal 308 (1986)
L47.
Paczynski_1986
B. Paczynski, Gamma-ray bursters at cosmological distances,
https://doi.org/10.1086/184740The Astrophysical Journal 308 (1986)
L43.
Weinberg_1972
S. Weinberg, Gravitation and Cosmology: Principles and Applications of
the General Theory of Relativity (1972).
Piran_1993
T. Piran, A. Shemi and R. Narayan, Hydrodynamics of Relativistic
Fireballs, https://doi.org/10.1093/mnras/263.4.861Monthly Notices of the Royal Astronomical Society
263 (1993) 861.
Kumar_2015
P. Kumar and B. Zhang, The physics of gamma-ray bursts &
relativistic jets,
https://doi.org/10.1016/j.physrep.2014.09.008Physics Reports
561 (2015) 1 [https://arxiv.org/abs/1410.06791410.0679].
Meszaros_2006
P. Meszaros, Gamma-Ray Bursts,
https://doi.org/10.48550/arXiv.astro-ph/0605208arXiv e-prints
(2006) astro [https://arxiv.org/abs/astro-ph/0605208astro-ph/0605208].
Piran_1999
T. Piran, Gamma-ray bursts and the fireball model,
https://doi.org/10.1016/S0370-1573(98)00127-6Physics Reports
314 (1999) 575
[https://arxiv.org/abs/astro-ph/9810256astro-ph/9810256].
Dermer_2013
C.D. Dermer, Sources of GeV Photons and the Fermi Results,
https://doi.org/10.1007/978-3-642-36134-0_3Saas-Fee Advanced
Course 40 (2013) 225
[https://arxiv.org/abs/1202.28141202.2814].
FermiGBM
The Fermi GBM Collaboration, Overview of the Fermi GBM, https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Introduction/GBM_overview.html2020.
Aso_2013
Y. Aso, Y. Michimura, K. Somiya, M. Ando, O. Miyakawa, T. Sekiguchi
et al., Interferometer design of the KAGRA gravitational wave
detector, https://doi.org/10.1103/PhysRevD.88.043007Phys. Rev. D
88 (2013) 043007
[https://arxiv.org/abs/1306.67471306.6747].
GCN_26454
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S191216ap.gcn3GCN 26454, .
GCN_25752
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190915ak.gcn3GCN 25752, .
GCN_24098
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190412m.gcn3GCN 24098, .
GCN_24629
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190521r.gcn3GCN 24629, .
GCN_24948
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190701ah.gcn3GCN 24948, .
GWTC3
R. Abbott, The LIGO Scientific Collaboration, the Virgo Collaboration,
the KAGRA Collaboration, T.D. Abbott, F. Acernese et al.,
GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During
the Second Part of the Third Observing Run,
https://doi.org/10.48550/arXiv.2111.03606arXiv e-prints (2021)
arXiv:2111.03606 [https://arxiv.org/abs/2111.036062111.03606].
GWTC2
R. Abbott, Abbott, The LIGO Scientific Collaboration, T.D. the Virgo
Collaboration, F. Acernese, K. Ackley et al., GWTC-2.1: Deep
Extended Catalog of Compact Binary Coalescences Observed by LIGO and Virgo
During the First Half of the Third Observing Run,
https://doi.org/10.48550/arXiv.2108.01045arXiv e-prints (2021)
arXiv:2108.01045 [https://arxiv.org/abs/2108.010452108.01045].
Abbott_2016
B.P. Abbott, R. Abbott, T.D. Abbott, M.R. Abernathy, F. Acernese,
K. Ackley et al., Localization and Broadband Follow-up of the
Gravitational-wave Transient GW150914,
https://doi.org/10.3847/2041-8205/826/1/L13The Astrophysical Journal 826 (2016) L13
[https://arxiv.org/abs/1602.084921602.08492].
Connaughton_2016
V. Connaughton, E. Burns, A. Goldstein, L. Blackburn, M.S. Briggs,
B.B. Zhang et al., Fermi GBM Observations of LIGO Gravitational-wave
Event GW150914,
https://doi.org/10.3847/2041-8205/826/1/L6The Astrophysical Journal 826 (2016) L6 [https://arxiv.org/abs/1602.039201602.03920].
Canto_1977
J. Canto, On the density and energy of supernova remnants.,
https://ui.adsabs.harvard.edu/abs/1977A A....61..641C
Astronomy and Astrophysics 61 (1977) 641.
Reynolds_1992
R.J. Reynolds, The warm ionized medium,
https://doi.org/10.1063/1.44005AIP Conference Proceedings
278 (1992) 156
[https://arxiv.org/abs/https://aip.scitation.org/doi/pdf/10.1063/1.44005https://aip.scitation.org/doi/pdf/10.1063/1.44005].
Miroshnichenko_2001
L.I. Miroshnichenko, Solar Cosmic Rays, vol. 260 (2001),
https://doi.org/10.1007/978-94-015-9646-610.1007/978-94-015-9646-6.
Longair_1992
M.S. Longair, High energy astrophysics. Vol.1: Particles, photons and
their detection (1992).
Longair_1994
M.S. Longair, High energy astrophysics. Vol.2: Stars, the galaxy and
the interstellar medium, vol. 2 (1994).
|
http://arxiv.org/abs/2307.07555v1 | 20230714180019 | Neutrino mass constraint from an Implicit Likelihood Analysis of BOSS voids | [
"Leander Thiele",
"Elena Massara",
"Alice Pisani",
"ChangHoon Hahn",
"David N. Spergel",
"Shirley Ho",
"Benjamin Wandelt"
] | astro-ph.CO | [
"astro-ph.CO"
] |
[email protected]
Department of Physics, Princeton University, Princeton, NJ 08544, USA
Waterloo Centre for Astrophysics, University of Waterloo, 200 University Ave W, Waterloo, ON N2L 3G1, Canada
Department of Physics and Astronomy, University of Waterloo, 200 University Ave W, Waterloo, ON N2L 3G1, Canada
The Cooper Union for the Advancement of Science and Art, 41 Cooper Square, New York, NY 10003, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA
Institut d’Astrophysique de Paris (IAP), UMR 7095, CNRS, Sorbonne Université, Paris, France
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, NY 10010, USA
Cosmic voids identified in the spatial distribution of galaxies provide complementary information
to two-point statistics.
In particular, constraints on the neutrino mass sum, ∑ m_ν, promise to benefit from
the inclusion of void statistics.
We perform inference on the CMASS NGC sample of SDSS-III/BOSS with the aim of constraining ∑ m_ν.
We utilize the void size function, the void-galaxy cross power spectrum, and the galaxy auto power spectrum.
To extract constraints from these summary statistics we use a simulation-based approach,
specifically implicit likelihood inference.
We populate approximate gravity-only, particle neutrino cosmological simulations with an expressive
halo occupation distribution model.
With a conservative scale cut of k_max=0.15 and a Planck-inspired ΛCDM prior,
we find upper bounds on ∑ m_ν of 0.43 and 0.35 eV
from the galaxy auto power spectrum and the full data vector, respectively (95 % credible interval).
We observe hints that the void statistics may be most effective at constraining ∑ m_ν from below.
We also substantiate the usual assumption that the void size function is Poisson distributed.
Neutrino mass constraint from an Implicit Likelihood Analysis of BOSS voids
Benjamin Wandelt
received: ** 2023, accepted: * 2023
===========================================================================
§ INTRODUCTION
The Universe's ability to provide glimpses into experimentally inaccessible conditions has a long history,
including the deduction of the laws of Gravity and the discovery of helium.
In the present day, cosmology offers a unique view on the properties of neutrinos
which are amongst the last unknowns in the standard model of particle physics.
First evidence for a non-zero neutrino mass sum, ∑ m_ν, came from the solar neutrino problem <cit.>.
Subsequently, oscillation experiments provided proof that neutrinos must have mass <cit.>
and established the lower bounds of 0.06 and 0.1 eV in the normal and inverted hierarchy,
respectively.
The terrestrial experiment KATRIN currently sets an upper bound of ≥ 0.8 eV <cit.>.[
The KATRIN bound is on m_β^2 ≡∑_ν |U^PMNS_eν|^2 m_ν^2,
so it only equals a bound on for a special, experimentally excluded, choice of the PMNS matrix
and the mass hierarchy. In general, the bound is weaker.]
However, the strongest upper bounds are already provided by cosmological data,
the primary CMB alone giving 0.38 eV <cit.>, for example.
It will be one of cosmology's primary goals in the coming decade to tighten this bound and eventually
detect neutrino mass.
One of the natural regimes in which to constrain ∑ m_ν is that of extremely underdense
regions, cosmic voids <cit.>.
As the cold dark matter (CDM) flows out of the voids and into filaments and clusters,
neutrinos are more smoothly distributed.
Thus, the neutrino/CDM ratio is higher in the voids and lower in the clusters.
These qualitative considerations have spawned considerable theoretical interest in the use of void properties
to constrain ∑ m_ν.
This includes simulated data vector-level investigations <cit.>
as well as forecasts <cit.>.
The forecasts find promising error bars on ∑ m_ν, albeit under simplifying assumptions.
Voids may be the first regime in which non-linear signatures of massive neutrinos
will be observed <cit.>.
Being large objects, voids had to wait for the era of relatively deep, large-volume surveys with
approximately uniform selection function to be statistically usable.
While the original detections focused on individual objects <cit.>,
which already contain cosmological information <cit.>,
we are now able to utilize catalogs of hundreds and thousands of voids <cit.>
to perform precision cosmology with void shapes <cit.>
and sizes <cit.>.
In this work, we use voids identified in the CMASS sample of the Sloan Digital Sky Survey (SDSS)-III
Baryon Oscillation Spectroscopic Survey (BOSS) <cit.>
to place constraints on ∑ m_ν.
The void statistics we consider are the void size function (VSF) and the void-galaxy cross power spectrum.
We combine these with the usual galaxy auto power spectrum multipoles which by themselves already
place a tight upper bound on (through the suppression of matter power below
the neutrino free-streaming scale) <cit.>.
Since voids can be considered anti-halos <cit.>, a popular model for the VSF
descends from Press-Schechter theory and the excursion set formalism <cit.>,
with slight modifications <cit.>.
While the void-galaxy correlation function can be used for cosmological purposes without
explicit knowledge of the void profile through the Alcock-Paczynski test
and redshift space distortions <cit.>,
the modeling of the profile itself has also been considered <cit.>.
However, all analytic approaches to modeling void statistics are problematic for our purposes.
First, it is difficult to construct a consistent galaxy bias model across the different statistics
comprising the data vector.
Second, the calibration of analytic models typically did not utilize large simulations with varied neutrino mass.
Third, existing models typically apply only to an aggressively cleaned subset of the entire void catalog,
potentially leading to appreciable losses in constraining power.
Therefore, we choose to work in a simulation-based framework.
Our simulations are based on particle-neutrino, approximate gravity-only <cit.>
realizations, in which we place galaxies through an expressive
halo occupation distribution model (HOD) <cit.>.
We then post-process the galaxy catalogs to generate light cones incorporating survey realism.
The likelihood analysis with these simulations is a non-trivial problem.
A popular approach is to build emulators of the mean data vector and perform the analysis
under the assumption of an explicit (usually Gaussian) likelihood whose covariance matrix is estimated from simulations.
However, this approach turns out to be challenging for our problem.
First, constructing an emulator in a 17-dimensional space (6 cosmology, 11 HOD) is quite difficult,
especially given a feasible number of simulations.
Second, the assumption of a Gaussian likelihood is wrong.
We demonstrate in Sec. <ref> that the VSF is very close to Poisson distributed
(as long as bins are chosen wide enough, as would be naively expected),
but modeling its covariance with the void-galaxy cross power spectrum and the galaxy auto power spectrum
is difficult.
For these reasons, we opt for an implicit-likelihood[likelihood-free, simulation-based.]
approach <cit.>.
This formalism uses neural networks to approximate functions that can be converted into posteriors.
In general structure, this work is therefore similar to the papers <cit.>,
but it differs in almost all details (statistics, simulations, objective, HOD, code).
The resulting complementarity will therefore be useful to assess the state of implicit likelihood inference
in galaxy clustering cosmology.
The rest of this paper is structured as follows.
Sec. <ref> describes our simulation pipeline.
Sec. <ref> contains details on the data vector and the inference procedure.
Sec. <ref> collects our results and their interpretation.
We conclude in Sec. <ref>.
The appendices contain additional material as well as information about data and code availability.
§ SIMULATIONS
§.§ Cosmological prior
Since our objective is ∑ m_ν, we place a tight prior on ΛCDM.
For this, we use the posterior from the Planck <cit.>
primary CMB analysis.
Specifically, we use the chains run with fixed ∑ m_ν and measure the mean
and covariance matrix in the five “CMB parameters” ω_c, ω_b, log A_s, n_s,
and θ_MC.
In these parameters the posterior is close to Gaussian and we approximate it as such.
To ensure the robustness of our conclusions, we inflate the Planck error bars
on cosmological parameters by a factor of two.
For , we choose a flat prior between 0 and 0.6 eV,
the upper boundary being motivated by preliminary tests in which we established a sensitivity
of the order σ∼ 0.2 eV.
We assume three neutrino species with degenerate masses.
Of course, the primary CMB's information leads to some correlation between ∑ m_ν and
the CMB parameters. This correlation is not included in our prior.
However, given the sensitivity of the data used (compared to Planck), these residual correlations
have a relatively small effect.
For example, projecting the ∑m_ν–ω_c correlation in the Planck posterior
to our upper prior boundary of 0.6 eV, we obtain a shift Δω_c/σ_EFT∼ 0.25
where σ_EFT is the error bar obtained from the EFTofLSS analysis of BOSS <cit.>
(of which we only use a subset).
From these considerations, it also follows that our results do not depend strongly on the precise
choice of ΛCDM prior.
We draw from the cosmological prior using an open quasi-random sequence.
In contrast to popular sampling methods such as Latin hypercube or Sobol sequences, an open sequence
does not require initial knowledge of the total number of samples.
Our sequence is constructed by taking integer multiples of a vector whose elements are powers
of a generalized golden ratio.[
<http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences>]
In hindsight it turns out that our results are insensitive to taking out random subsets from the
sequence (c.f. appendix <ref>), indicating that a simple pseudo-random sampling would have been sufficient.
This is likely due to the aforementioned compactness of the prior compared to the data's sensitivity.
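To make this concrete, the following is a minimal Python sketch of the sampling scheme (the released pipeline implements it in C; the function names and the prior interface here are ours), drawing quasi-random points from the generalized-golden-ratio sequence and mapping them onto the Gaussian CMB prior and the flat ∑m_ν prior:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def rd_sequence(n_samples, dim, start=1):
    # generalized golden ratio: the unique root > 1 of x**(dim+1) = x + 1
    phi = brentq(lambda x: x ** (dim + 1) - x - 1.0, 1.0, 2.0)
    alpha = phi ** -np.arange(1, dim + 1)
    n = np.arange(start, start + n_samples)[:, None]
    return (0.5 + n * alpha) % 1.0          # open sequence in [0, 1)^dim

def draw_prior(n_samples, mu_cmb, cov_cmb, mnu_max=0.6):
    """Map uniform quasi-random points to the Gaussian CMB prior x flat Mnu prior."""
    u = rd_sequence(n_samples, dim=len(mu_cmb) + 1)
    L = np.linalg.cholesky(cov_cmb)           # inflated Planck covariance
    cmb = mu_cmb + norm.ppf(u[:, :-1]) @ L.T  # five "CMB parameters"
    mnu = mnu_max * u[:, -1]                  # flat prior on the neutrino mass sum [eV]
    return np.column_stack([cmb, mnu])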
§.§ Cosmological simulations
We run 127 simulations with varied cosmologies and 69 at a fiducial cosmology,
illustrated in Fig. <ref>.[
We attempted 130/71, but some runs failed. As discussed before, the fact that due to the failures
we do not sample the quasi-random sequence strictly sequentially does not affect our results.]
We choose the fiducial cosmology close to the mean of the ΛCDM prior, with ∑m_ν = 0.1 eV.
The cosmo-varied simulations share the random seed. This is because the decision to adopt an
implicit-likelihood approach was only made after we encountered severe challenges with the conventional
approach, as described in the introduction.
However, as we shall see below, the simulations are large enough that sufficient quasi-independent
data vectors can be generated.
After generating a cosmological parameter vector in the CMB parameters we replace θ_MC
by the Hubble constant using <cit.>.
We then produce power spectra at z=99 using <cit.>
and <cit.>.
We run gravity-only simulations with particle neutrinos using the approximate solver.
We choose a box size of 2.5 h^-1Gpc with 2800^3 CDM particles,
leading to a minimum resolved halo mass of ∼ 1.3×10^12 h^-1M_⊙
that is sufficient for the CMASS LRG sample.
With regard to the neutrino options in the solver, we follow Ref. <cit.>.
In particular, we follow their recommendation to increase the number of early time steps
for larger neutrino masses.
Specifically, at ∑m_ν = 0 we take seven logarithmic steps (in scale factor) between
z=99 and z=19, followed by twelve linear steps until z=0.68. Afterwards, we take twenty steps until
z=0.44 during each of which we write a snapshot of CDM particles to disk.
At higher neutrino masses we insert up to twelve additional logarithmic steps before z=79.
The seemingly large number of twenty snapshots was established in our preliminary tests
during which we saw slight
differences between ten and twenty snapshots (in the summary statistics considered and
for a single run) and thus decided to err on the side of caution.
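For illustration, a minimal sketch of the ∑m_ν = 0 step schedule just described (scale-factor values only; the additional early logarithmic steps used at larger neutrino masses are omitted):

import numpy as np

a_of = lambda z: 1.0 / (1.0 + z)

a_log = np.geomspace(a_of(99), a_of(19), 8)       # seven logarithmic steps in a
a_lin = np.linspace(a_of(19), a_of(0.68), 13)     # twelve linear steps to z=0.68
a_out = np.linspace(a_of(0.68), a_of(0.44), 21)   # twenty steps to z=0.44, one snapshot each

a_steps = np.unique(np.concatenate([a_log, a_lin, a_out]))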
Each simulation takes about 90 minutes on 70 40-core nodes of the Tiger machine at Princeton.
§.§ Galaxies
We fine-tuned the method to populate CDM snapshots with galaxies through preliminary tests.
Specifically, we performed global optimization over a large and partly discrete space
of halo occupation distributions considering two objectives:
(1) the power spectrum multipoles of galaxies placed in Quijote simulations <cit.>;
(2) the VSF of the CMASS data.
The first step was intended to identify degrees of freedom that are necessary to correct for potential
approximation errors in the approximate gravity solver,
while the second step was primarily meant to check our simulations' fidelity.
The optimization problems were solved with <cit.>.
In the following, we briefly describe the halo occupation distribution model (HOD)
resulting from these preliminary tests. More detail can be found in appendix <ref>.
We identify halos in the CDM snapshots using <cit.>,
which in our preliminary tests performed better than the friends-of-friends
finder shipped with the simulation code.
Then, galaxies are assigned stochastically to halos using an HOD.
Besides the usual five-dimensional model parameterized by
M_min, σ_log M, M_0, M_1, α <cit.>,
we introduce six additional degrees of freedom.
Although assembly bias <cit.>
has been argued to be not necessary to describe the clustering of CMASS galaxies <cit.>,
we decide to be conservative by adding assembly bias parameterized by P_1 and a_bias.
Furthermore, we add velocity bias <cit.>
parameterized by η_cen and η_sat.
Finally, we introduce redshift dependence to M_min and M_1,
parameterized by μ(M_min) and μ(M_1).
One advantage of having these slopes as free parameters is that we know them to be relatively close to
zero, enabling useful sanity checks on any posteriors.
The resulting 11-dimensional HOD parameterization is quite similar to, e.g.,
Refs. <cit.>.
We populate the cosmo-varied simulations with galaxies according to HOD parameters
drawn from the priors given in appendix <ref>.
For the simulations at the fiducial cosmology we only populate with a single HOD.
We choose the HOD parameters used for the fiducial mocks based on preliminary inference runs using the VSF only.
It turns out that these parameters are not very close to best-fit when considering the entire data vector;
generating new mocks closer to the best-fit point may increase the efficiency of the compression step
described below. However, the difference in HOD parameters cannot cause biases since the fiducial mocks
are not used in constructing the likelihood.
§.§ Light cones
We use the cuboid remapping code <cit.>
to deform our simulated cubes to the CMASS NGC geometry. It turns out that there are two possible choices
of remapping and we use both (as part of the augmentation scheme discussed below).
When projecting galaxies onto the light cone, we extrapolate their positions from the snapshots
using the host halo velocities (using the stochastic galaxy velocities would weaken the correlation between centrals
and satellites). The resulting corrections are small thanks to the large number of available snapshots.
After mapping galaxies to the light cone, we apply all angular masks and approximately mimic fiber
collisions using the procedure described in Ref. <cit.>.
In contrast to some other works, we downsample the galaxy field predicted by the HOD to the data's
density n(z) (only, of course, if the simulation contains more galaxies at the given redshift).
This downsampling is performed iteratively in conjunction with the implementation of fiber collisions
so that both are self-consistent.
Our implementation performs any necessary downsampling regardless of host halo properties;
future work could take a prediction for stellar mass into account.
§ INFERENCE
§.§ Data vectors
We use the northern galactic cap (NGC) part of the CMASS sample. The southern part (SGC) is smaller and
we do not expect dramatic improvements from its inclusion. Since our focus is on better understanding the
impact of void statistics, rather than the tightest possible bounds on neutrino mass, we ignore the SGC
for simplicity.
Similarly, we do not include the LOWZ sample; its lower volume makes it less suitable for void science.
We cut galaxies into the redshift interval 0.42<z<0.7 and map them to comoving space
using a fixed Ω_m=0.3439.
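A minimal sketch of this mapping, assuming a flat ΛCDM background with the quoted Ω_m (the use of astropy here is our choice; the helper function and argument names are ours):

import numpy as np
from astropy.cosmology import FlatLambdaCDM

fid = FlatLambdaCDM(H0=100.0, Om0=0.3439)   # H0 = 100 km/s/Mpc => distances in h^-1 Mpc

def sky_to_cartesian(ra_deg, dec_deg, z):
    keep = (z > 0.42) & (z < 0.7)
    ra, dec = np.radians(ra_deg[keep]), np.radians(dec_deg[keep])
    chi = fid.comoving_distance(z[keep]).value        # h^-1 Mpc
    return np.column_stack([chi * np.cos(dec) * np.cos(ra),
                            chi * np.cos(dec) * np.sin(ra),
                            chi * np.sin(dec)])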
Voids are identified using the code <cit.>
which is based on <cit.>
and works by Voronoi tessellating the galaxies and then applying a watershed algorithm to find
contiguous density minima.
We use the “untrimmed” catalog produced by the void finder, as it does not require arbitrary cleaning assumptions.
While many different void finders exist <cit.>,
prior work suggests that shape-agnostic, watershed-based void finders yield voids with better constraining
power on ∑m_ν than spherical finders <cit.>.
Future work could investigate the influence of void definition on signal-to-noise.
Galaxy auto power spectra P^gg_ℓ(k) and void-galaxy cross power spectra P^vg_ℓ(k)
are computed using <cit.>
and pypower,[<https://pypower.readthedocs.io>]
reducing variance with FKP weights <cit.>
and correcting for observational systematics using the provided weights
(except for fiber collisions, of course) <cit.>.
We only utilize the systematics weights when computing P^gg.
In the case of P^vg, we find no significant change in posteriors when using the galaxy weights,
consistent with Ref. <cit.>.
In the case of void identification, there is no guarantee that the obvious method to incorporate
the systematics weights would yield cleaner voids.
Given the relatively large void sizes considered in this work, we do not expect significant contamination
by unmodeled survey systematics, but suggest that this point may warrant future work.
Galaxy randoms are taken from the public catalogs.
Void randoms are constructed by taking a large catalog of voids from many different mocks
to choose angular positions and constructing a kernel density estimator in redshift matched to the specific
void catalog. This procedure ensures that the randoms are consistent with a given cut in void radius since
voids of different sizes have somewhat different angular distributions due to the survey mask.
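Schematically, this construction could look as follows (array and function names are ours and not those of the released code; the KDE bandwidth is left at its scipy default):

import numpy as np
from scipy.stats import gaussian_kde

def make_void_randoms(mock_ra, mock_dec, catalog_z, n_randoms, rng):
    # angular positions: resample from a large stack of mock voids (same radius cut)
    idx = rng.integers(0, len(mock_ra), n_randoms)
    ra, dec = mock_ra[idx], mock_dec[idx]
    # redshifts: Gaussian KDE matched to the specific void catalog
    kde = gaussian_kde(catalog_z)
    z = kde.resample(n_randoms)[0]
    return ra, dec, np.clip(z, catalog_z.min(), catalog_z.max())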
We consider the VSF in 32 linearly spaced effective radius bins between 30 and 80 h^-1Mpc.
The minimum radius cut is well above the mean tracer separation and thus we expect contamination by Poisson
voids to be small <cit.>.
We split voids for the VSF into two redshift bins, separated at z=0.53.
This splits the CMASS sample approximately equally.
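For concreteness, a small sketch of the VSF part of the data vector under these binning choices:

import numpy as np

R_EDGES = np.linspace(30.0, 80.0, 33)   # 32 linear bins in effective radius [h^-1 Mpc]
Z_SPLIT = 0.53

def void_size_function(void_radius, void_z):
    low = np.histogram(void_radius[void_z < Z_SPLIT], bins=R_EDGES)[0]
    high = np.histogram(void_radius[void_z >= Z_SPLIT], bins=R_EDGES)[0]
    return np.concatenate([low, high])   # 64 integer counts entering the data vector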
We perform analyses with k_max=0.15 and 0.2.[
For brevity, we implicitly take all wavenumbers in units of h Mpc^-1.]
We consider k_max=0.15 the conservative baseline choice, but k_max=0.2 is still expected to be
reliably modeled by the approximate simulations as well as by the halo model <cit.>.
We do not use power spectra on scales larger than k_min=0.01 <cit.>.
Since our theoretical model is simulation based, we do not deconvolve the survey window function.
This means that there is a small level of contamination by Fourier modes outside the k-range considered,
but we assess this effect to be negligible.
In our baseline analysis we only use the monopole P^gg_0 of the galaxy auto power spectrum multipoles.
This choice was made based on the limited information content of the quadrupole (from EFTofLSS posteriors)
and with the aim of simplicity. We discuss the effect of including the quadrupole below.
For P^vg we use both the monopole and the quadrupole.
It is worth noting that we opt to use reciprocal space void-galaxy cross power spectra P^vg
instead of the more popular configuration space correlation function.
The correlation function has the primary advantage that one can rescale its argument on a void-by-void
basis by the respective radius and thus sharpen the resulting void profile
(this is also possible in reciprocal space but computationally expensive,
future work could explore this point).
We believe, however, that the mixing of Fourier modes in the correlation function
could lead to problems with approximate solvers like whose domain of validity
is better localized in reciprocal space.
In order to optimize signal-to-noise, we consider P^vg computed with three different choices of minimum
void radius, 30, 40, and 50 h^-1Mpc.
An illustration of the data vector is given in Fig. <ref>.
§.§ Implicit likelihood inference
As discussed in the introduction, the standard emulator-based approach is difficult in 17 dimensions.
The main reason is that the training objective for an emulator does not directly map
to the ultimate goal of accurate posteriors, implying that the optimum needs to be very sharp (which requires many simulations).
Combined with the unknown likelihood function, we believe implicit likelihood inference (ILI) to be the appropriate tool for our task.
We opt for neural ratio estimation (NRE) <cit.>
which recasts inference as a classification problem.
The choice of an amortized instead of a sequential method was made based on the hierarchical structure
of our simulations; we then opt for NRE because of its simplicity.
In its original and simplest form, NRE works with pairs of parameter vectors θ, θ'
drawn from the prior p(θ). We then consider a data vector x drawn from the likelihood p(x|θ),
where the simulation process described above approximates this draw.
A neural network f maps the pairs (x,θ), (x,θ') to scalars y, y'.
If we now choose a classification loss function L(y,y'), e.g., binary cross entropy
L(y,y') = -log(y) - log(1-y') ,
it is easy to show that the functional optimization problem
f^* = argmin_f ∫ dθ dθ' dx  p(θ) p(θ') p(x|θ)  L(f(x,θ), f(x,θ'))
has the solution
p(x|θ)/p(x) = f^*(x,θ) / ( 1 - f^*(x,θ) ) .
In other words, a neural network trained to distinguish between samples from p(x,θ) and samples from p(x)p(θ)
approximates the likelihood-to-evidence ratio at optimum.
Posteriors can then be obtained through usual Monte Carlo Markov Chain sampling
which we perform with <cit.>.
In practice, this general idea of approximating p(x|θ)/p(x) through a classifier
works better in the multi-class version “NRE-B” <cit.>.
We use the implementation provided in the package <cit.>.
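For illustration, a bare-bones PyTorch sketch of the binary version of this objective (the production analysis uses the multi-class NRE-B implementation of the public package; the architecture below is a placeholder, not the residual network described later):

import torch
import torch.nn as nn

class RatioClassifier(nn.Module):
    def __init__(self, dim_x, dim_theta, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_theta, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1))

    def forward(self, x, theta):
        return self.net(torch.cat([x, theta], dim=-1)).squeeze(-1)  # a logit

def nre_loss(model, x, theta):
    """Binary cross entropy between joint pairs (x, theta) and marginal pairs (x, theta')."""
    theta_marg = theta[torch.randperm(theta.shape[0])]
    bce = nn.functional.binary_cross_entropy_with_logits
    logit_joint, logit_marg = model(x, theta), model(x, theta_marg)
    return (bce(logit_joint, torch.ones_like(logit_joint)) +
            bce(logit_marg, torch.zeros_like(logit_marg)))

# At the optimum, sigmoid(logit) approximates f*, so the logit itself is
# (up to approximation error) the log likelihood-to-evidence ratio used in the MCMC.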
In the above, it is actually not necessary for the parameter vectors θ to be drawn independently from
the prior p(θ). In fact, all that is required is that a sum over the simulated parameter vectors
approximates the integral in Eq. (<ref>). For this reason, it is correct for us to populate each
of the 127 cosmo-varied simulations with multiple draws from the HOD prior (∼ 230).
For each HOD draw we compute 8 augmentations as described below, yielding ∼ 1.7× 10^5 training samples.
The ILI framework allows implicit marginalization over nuisance parameters.
This is one of its primary benefits in high dimensional parameter spaces.[
Consider computation of a high-dimensional integral over f(𝐱) given samples f(𝐱_i).
Interpolating f(𝐱) using these samples and then performing quadrature is a more difficult
problem than using the Monte Carlo estimator.]
In principle, we could take θ = {∑m_ν} to be one dimensional.
In practice, it is likely better to include a subset of the nuisance parameters in θ.
This is because we have intuition for the posteriors expected for some nuisance parameters and thus
making them explicit allows useful checks.
We opt to include log M_min and μ(M_min) in the parameter vector.
For the former we know that the data should provide a constraint considerably tighter than the prior,
while for the latter we expect a result close to zero.
The posterior on ∑m_ν is unaffected by this choice of θ, and the extra computational cost
in training and sampling the neural network is marginal when making two nuisance parameters explicit
(however, making all nuisance parameters explicit would complicate the training unnecessarily).
We parameterize the classifier f as a residual neural network.
Hyperparameter optimization was performed considering the
loss on a validation set of 13 cosmologies (i.e., ∼ 2.4× 10^4 mocks).[
It is actually important to separate training and validation sets by cosmologies.
Initial trials that mixed simulations exhibited hidden overfitting.]
We converged at a relatively large network with 1.7× 10^7 trainable parameters but high dropout rates.
High-dimensional data vectors x are often problematic for ILI, our problem being no exception.
This necessitates a compression step before the data vector is passed to the neural network.
Since we expect our likelihood to be close to Gaussian/Poissonian, we use the linear score compression
<cit.>
to 17 compressed statistics.
Indeed, the linear score compression is locally optimal both for a pure Gaussian and a pure Poisson likelihood.
We also experimented with the nuisance-hardened generalization <cit.>
to five and ten dimensions, obtaining consistent but slightly wider posteriors.
In order to construct the compression matrix, estimates for the covariance and derivatives
of the data vector are required.
We construct the covariance matrix from our fiducial mocks using the usual estimator.
For the derivatives, we generate ∼ 10^5 additional mocks (10^3 parameter vectors, each with 96 augmentations)
in a small ball around the fiducial model. We then perform linear regression and read off the derivatives.
Simple tests indicate that the dependence on parameters is close to linear in the region considered.
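A compact sketch of this compression step, assuming the covariance estimated from the fiducial mocks and derivatives from the linear regression just described (the function names and array shapes are ours):

import numpy as np

def derivatives_from_mocks(thetas, data_vectors, theta_fid):
    """Linear regression in a small ball around the fiducial point:
    data ~ mu_fid + (theta - theta_fid) @ dmu_dtheta."""
    design = np.column_stack([np.ones(len(thetas)), thetas - theta_fid])
    coeff, *_ = np.linalg.lstsq(design, data_vectors, rcond=None)
    return coeff[0], coeff[1:]               # mu_fid estimate, dmu/dtheta (n_params, n_data)

def build_compressor(cov, dmu_dtheta):
    return dmu_dtheta @ np.linalg.inv(cov)   # (n_params, n_data) projection matrix

def compress(F, x, mu_fid):
    return F @ (x - mu_fid)                  # 17 compressed statistics per data vector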
§.§ Augmentations
As discussed before, our cosmo-varied simulations share the random seed. This fact ostensibly makes them
unsuitable for the ILI approach discussed before since the integral in Eq. (<ref>) requires
a sampling of initial conditions.
However, as we shall discuss in this section, it is possible to generate many quasi-independent realizations
from a single simulation.
As mentioned before, we do not require independent identically distributed realizations,
so this is in fact enough to approximate Eq. (<ref>) with sufficient accuracy.
For a single 2.5 h^-1Gpc simulation box populated with galaxies, we can take the product of the
following transformations: 2 cuboid remappings, 8 reflections, 6 axis transpositions.
This results in 96 augmentations.
In principle, many more augmentations can be generated through translations, but we expect these to be
more correlated.
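The reflection and transposition part of the augmentation group can be sketched as follows (the two cuboid remappings are applied by the external remapping code and are not reproduced here; velocity components along reflected axes would need their signs flipped as well):

import itertools
import numpy as np

def reflect_transpose(pos, box_size):
    """Yield the 48 transposed/reflected copies of an (N, 3) position array;
    together with the 2 cuboid remappings this gives the 96 augmentations."""
    for axes in itertools.permutations(range(3)):               # 6 axis transpositions
        p = pos[:, axes]
        for signs in itertools.product((1.0, -1.0), repeat=3):  # 8 reflections
            q = p.copy()
            for i, s in enumerate(signs):
                if s < 0:
                    q[:, i] = box_size - q[:, i]
            yield q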
The crucial question now is whether these 96 augmentations approximate the distribution over initial conditions.
We can answer this question by considering our fiducial simulations which have 69 different random seeds.
Given the fiducial parameter vector, we generate a matrix D_μ a whose elements are data vector-valued
and where μ=1…69, a=1…96 index the initial conditions and augmentations, respectively.
We can perform statistical tests by computing marginals over μ and a individually or jointly.
In order to simplify the statistical interpretation, we restrict a to 69 randomly chosen indices.
In the upper panel of Fig. <ref>, we compare the diagonals of covariance matrices
in the compressed space.
We see that the standard deviations are almost identical for marginalization over μ and a.
This test can also be performed for the uncompressed data vector, yielding consistent results
and no systematic differences between different summary statistics or scales.
In the lower panel, we perform a test considering the entire content of the covariance matrices.
We construct the Wishart distribution given the covariance matrix C_joint obtained by marginalizing over μ and a
jointly and then compute the log-likelihood of the individually marginalized covariance matrices.
If these covariance matrices were drawn from the Wishart distribution sourced by C_joint,
their log-likelihoods would be distributed as indicated by the green line.
We see that the distributions are somewhat different but still have large overlap.
In conclusion, the 96 augmentations reproduce the distribution over initial conditions reasonably
well.
Since 96 ≫ 1, we expect the augmentations to provide a good approximation to the integral in Eq. (<ref>).
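A sketch of the Wishart consistency test in the lower panel, assuming each compared covariance matrix is obtained from n_s realizations with the standard estimator:

import numpy as np
from scipy.stats import wishart

def wishart_logl(C_sample, C_joint, n_s):
    """log-likelihood of a sample covariance under the Wishart sourced by C_joint."""
    df = n_s - 1                              # scatter matrix S = df * C_sample
    return wishart.logpdf(C_sample * df, df=df, scale=C_joint)

def reference_logl_distribution(C_joint, n_s, n_draws=10000, seed=0):
    """Distribution of log-likelihoods for true Wishart draws (the green line)."""
    df = n_s - 1
    draws = wishart.rvs(df=df, scale=C_joint, size=n_draws, random_state=seed)
    return np.array([wishart.logpdf(W, df=df, scale=C_joint) for W in draws])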
Why does this augmentation procedure work?
First, our simulation boxes are about 5.7× larger than the survey volume.
Second, the augmentations alter the redshift direction.
Third, the survey mask intersects each augmented copy with a different subset of the box's galaxies.
Fourth, galaxies are captured at different times so their peculiar motions alter their real space positions.
All these points need to be seen relative to the specific survey and simulation configuration;
the described augmentation procedure is certainly not expected to work universally.
§ RESULTS
In this section, we first present our main posteriors on ∑m_ν
from the CMASS NGC data, taking various combinations of the summary statistics VSF N_v,
void-galaxy cross power spectrum P^vg, and galaxy auto power spectrum P^gg.
We present most of our posteriors in their cumulative form. This is because at the current level
of precision, no neutrino mass detection is expected and upper bounds are the main objective.
The cumulative posterior is the most direct visualization of upper bounds.
In all plots we include a diagonal dashed line indicating the prior.
In the following, we will occasionally compare with results obtained with the EFTofLSS <cit.>.
The EFTofLSS allows for the analysis of the full-shape galaxy auto power spectrum
(as well as other statistics we will not consider here).
We use the window-less full-shape likelihood <cit.>[<https://github.com/oliverphilcox/full_shape_likelihoods>]
and the code <cit.>.[<https://github.com/michalychforever/CLASS-PT>]
We restrict the data included in the likelihood to the NGC high-z sample,
approximately equal to the data we use for our analysis.
Furthermore, we impose the same ΛCDM prior while keeping the nuisance parameter priors equal to those
implemented in the public likelihood code.
In any comparison with our results we use identical k_max.
Likewise, we usually only use the monopole P^gg_0, consistent with our simulation-based analysis.
The likelihood part termed “Alcock-Paczynski” in the EFTofLSS likelihood is included, since our method
also effectively includes this term. On the other hand, we do not include the BAO reconstruction or real space
likelihoods.
We emphasize that a comparison between EFTofLSS and HOD methods is beyond the scope of this work.
Therefore, we will use the EFTofLSS posteriors to provide intuition, show that our posteriors are at
least qualitatively reasonable, and for an interesting observation about the quadrupole P^gg_2 later on.
§.§ Neutrino mass posterior
In Fig. <ref>, we show the baseline posterior on ∑m_ν, with k_max=0.15.
The galaxy auto power spectrum gives a 95 % credible interval constraint of ∑m_ν < 0.43 eV.
Upon inclusion of the VSF, the posterior broadens somewhat.
Including the void-galaxy cross power spectrum tightens the posterior to ∑m_ν < 0.35 eV,
a ∼ 20 % improvement.
Further adding the VSF does not lead to any appreciable change.
Posteriors are generally wider than the EFTofLSS result.
In Fig. <ref>, we show a similar set of posteriors obtained with k_max=0.2.
We believe that our simulated model should still have a high level of fidelity at these somewhat smaller scales.
We observe that including smaller scales tightens the posterior, as expected.
However, adding void statistics to P^gg now slightly broadens the posterior.
Most of the remainder of this section will be devoted to better understanding the observations
from Figs. <ref>, <ref>.
§.§ Validation
Any simulation-based, and especially implicit-likelihood, inference necessitates rigorous validation
of the simulated model, the likelihood approximation, and the resulting posteriors.
In this section, we present three tests verifying different aspects of our pipeline.
First, in Fig. <ref>, we present the usual coverage (or q-q) plot <cit.>.
For this diagnostic, we perform inference on mocks drawn from the prior;
in particular, we use the ∼ 2.4×10^4 validation mocks discussed before.
We use the N_v+P^vg+P^gg likelihood with k_max=0.15.
For each resulting chain, we compute the marginal distributions of the explicit parameters and
then the confidence level at which the true input parameter is located.
In Fig. <ref>, we show the cumulative histograms of these confidence levels.
If the posterior is well-calibrated, these CDFs should coincide with the diagonal.
As can be seen, for all parameters considered this is the case.
This diagnostic is a powerful internal consistency check and verifies that the neural network is well-trained.
Second, in Fig. <ref>, we show an interesting observation concerning the galaxy auto power spectrum
quadrupole P^gg_2. As discussed before, this summary statistic has limited constraining power and we do not use
it for our main posteriors.
As can be seen in Fig. <ref>, adding the quadrupole to the data vector slightly broadens the posterior.
This happens consistently in our analysis and in the EFTofLSS.
We believe that this observation increases confidence in the validity of our simulation model,
in particular the modeling of redshift space distortions.
Third, in Fig. <ref> we compare the data posteriors with posteriors obtained by running inference
on randomly chosen mocks generated at the fiducial point.
We remind the reader that the fiducial HOD is rather far from best-fit which somewhat complicates the interpretation.
We observe that at k_max=0.15 the data posterior is tighter than most of the mock ones.
If the cosmological simulations were to blame for this,
the naive expectation would be for the discrepancy to become more severe
as smaller scales are included in the analysis.
However, this appears not to be the case: at k_max=0.20 the data posterior becomes more typical.
We conclude that even though we observe hints of differences between data and simulations,
the evidence is not conclusive and the data posterior could well be consistent with the observed distribution.
It should also be noted that the real ∑m_ν may be less than the choice 0.1 eV with
which the fiducials were run, potentially leading to a tighter data posterior.
§.§ Broadening of posteriors
One peculiar observation is that the inclusion of void statistics can broaden the posterior on ∑m_ν.
We do not fully understand this phenomenon and can only provide some suggestive results.
These are more comprehensively described in appendix <ref>;
here we only provide a summary.
We observe similar broadening on fiducial mocks and thus propose that we are in fact observing a generic
phenomenon.
Therefore, we suggest that void statistics are most effective at constraining the neutrino mass sum
from below.
A further test using artificially enlarged volumes supports this theory.
For a potential physical explanation, we consider the free streaming length.
At z=0.5, λ_fs ≈ 90 h^-1Mpc (0.3 eV/∑m_ν) for degenerate masses.
This length scale is comparable to the diameter of the voids that seem to contribute most
(c.f. Sec. <ref>).
Thus, it may be that ∑m_ν at the upper end of the posterior is “invisible” to voids.
However, we identify voids using tracers of small-scale fluctuations,
so the full picture is much more complicated and could be a subject for further study.
§.§ Void radius
In Fig. <ref>, we show posteriors obtained with the N_v+P^vg+P^gg likelihood,
concentrating on void size.
Cuts on effective radius are performed both in the VSF and P^vg parts of the data vector.
We observe that the posteriors are almost identical regardless of whether we cut at 30 h^-1Mpc
(the baseline analysis) or 40 h^-1Mpc.
On the other hand, further increasing the minimum radius to 50 h^-1Mpc removes much of
the effect of voids on the posterior.
Fig. <ref> indicates that, at least for the present analysis, voids with effective radii
between 40 and 50 h^-1Mpc are the most constraining.
Smaller voids might be contaminated by spurious Poisson voids, and their shallower density
profiles may also make them less sensitive to neutrinos.
Larger voids presumably suffer from their low abundance.
In Fig. <ref> we show posteriors obtained from void statistics only.
We show them mostly for completeness; in the present analysis these are entirely prior dominated.
However, even in this plot we see the previously mentioned observation that larger voids appear to
carry more signal.
§.§ Poissonian void size function
As a final point of this section, we substantiate the previous claim that the VSF is very close to Poisson distributed.
While this seems to be a natural assumption, void exclusion makes it non-trivial.
Indeed, previous works have assumed Poisson likelihoods <cit.>;
our simulations enable us to check this assumption.
In Fig. <ref>, we show two checks performed with our fiducial mocks.
The left panel shows the covariance matrix divided by the outer product of the Poisson standard deviations;
the result is close to the identity.
The right panel shows a check using the variance-stabilizing Anscombe transform <cit.>.
For each mock data vector c^(α) and bin i, we compute the transformed VSF count
c̃^(α)_i = 2 ( √(c^(α)_i + 3/8) - √(⟨ c_i ⟩ + 3/8) ) + 1/(4 √(⟨ c_i ⟩)) .
In the limit of large counts the distribution of these transformed counts converges to the standard normal
if the counts themselves are Poissonian.
As can be seen, the agreement with the standard normal is quite good indeed.
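A minimal sketch of this check applied to a stack of mock VSF counts:

import numpy as np

def anscombe_standardized(counts):
    """counts: (n_mocks, n_bins) array of void counts; the output should be
    approximately standard normal if the counts are Poisson distributed."""
    mean = counts.mean(axis=0)
    c_tilde = (2.0 * (np.sqrt(counts + 3.0 / 8.0) - np.sqrt(mean + 3.0 / 8.0))
               + 1.0 / (4.0 * np.sqrt(mean)))
    return c_tilde.ravel()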
These tests demonstrate that deviations from Poissonian distribution are small for the VSF,
at least for the choice of binning considered here.
§ CONCLUSIONS
We have performed inference on galaxy clustering in the BOSS CMASS northern sample,
combining the void size function, the void-galaxy cross power spectrum, and the galaxy auto power spectrum.
Our primary target was the neutrino mass sum ∑m_ν; thus, we imposed a tight prior on ΛCDM informed
by primary CMB data.
We argued that analytic models for the considered void statistics are not mature enough
and unsuitable for our specific problem, necessitating a simulation-based approach.
To this end, we ran approximate gravity-only simulations and populated them with galaxies
using an expressive halo occupation distribution.
Several factors motivated the use of implicit likelihood inference.
In our baseline analysis, we find ∑m_ν < 0.43 eV from the galaxy auto power spectrum alone,
and ∑m_ν < 0.35 eV with the void statistics included (95 % credible interval).
We performed several tests to confirm statistical and systematic validity of our likelihood approximation.
We performed a short investigation of the impact of voids on the neutrino mass posterior.
It appears that the void statistics may be most effective in constraining ∑m_ν from below.
This result would imply that future analyses aiming at measuring ∑m_ν
may benefit from including void statistics.
Our results suggest that larger voids with effective radii >40 h^-1Mpc carry most of
the signal despite their lower abundance.
This has interesting implications for future analyses, since voids of this size should be detectable
in photometric catalogs with relatively low redshift error, such as the one expected for Rubin/LSST <cit.>.
Of course, spectroscopic surveys such as
DESI <cit.>, Euclid <cit.>, SPHEREx <cit.>, PFS <cit.>, and Roman <cit.>
will continue to be cornerstones of void science.
The trade-off between volume, galaxy number density, and redshift precision warrants further investigation.
We also demonstrate that the void size function is very close to Poisson distributed,
a feature that had been assumed in previous analyses but never explicitly confirmed.
Future work could improve upon our analysis in multiple ways.
First, the cosmo-varied simulations should be run with different random seeds
(we decided for a fixed seed in anticipation of an emulator-based analysis which ultimately
turned out to be very difficult).
Second, it may be beneficial to normalize the void-galaxy cross power spectrum by void number.
Although in principle this would contain the same information as our data vector once the VSF is included,
the necessary transformation is non-linear and thus potentially invisible to our data compression.
Third, the HOD modeling could be improved. Some of our priors may not be optimal, and our n(z) downsampling
is simplistic. The CMASS sample's completeness is quite well known and could be used to put a prior
on the downsampling.
Fourth, it turns out that the cosmological simulations did not dominate the compute cost.
It may therefore be economical to increase the accuracy of the approximate gravity solver or to switch to a different one.
Our results point toward a complicated picture with regard to the relationship between massive neutrinos
and voids.
Future data sets, both spectroscopic and photometric, promise to bring tight cosmological constraints
from void science, whose constraining power scales well with the number of observed voids.
We thank Sofia Contarini, Adrian Bayer, Jia Liu, Jo Dunkley, Masahiro Takada
for useful discussions.
We thank Oliver Philcox for explaining the EFTofLSS likelihood.
The work of LT is supported by the NSF grant AST 2108078.
The authors are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing.
§ HALO OCCUPATION DISTRIBUTION
In this appendix, we provide a more detailed description of the adopted HOD model.
First, for reference, the baseline five-parameter model only depends on halo mass M
and has mean occupations
N_cen = 1/2 [ 1 + erf( (log M - log M_min) / σ_log M ) ]
for the central galaxy and
N_sat = N_cen ( (M - M_0) / M_1 )^α
for the satellites.
A central is placed with probability N_cen and the number of
satellites is drawn from a Poisson distribution with mean N_sat.
The central is placed at the halo's center and assigned the halo velocity.
The satellites are distributed isotropically with an NFW profile <cit.>
and the concentration model of Ref. <cit.>,
using the analytic solution for the inverse NFW CDF from Ref. <cit.>.
Satellite velocities are drawn from a distribution assuming virialization.
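A compact sketch of this baseline occupation (halo masses in h^-1 M_⊙; the NFW satellite placement, virial velocities, and the decorations described next are omitted; the function name is ours):

import numpy as np
from scipy.special import erf

def occupy(halo_mass, log_Mmin, sigma_logM, M0, M1, alpha, rng):
    n_cen = 0.5 * (1.0 + erf((np.log10(halo_mass) - log_Mmin) / sigma_logM))
    n_sat = n_cen * np.clip((halo_mass - M0) / M1, 0.0, None) ** alpha
    has_central = rng.random(halo_mass.size) < n_cen   # Bernoulli central
    n_satellites = rng.poisson(n_sat)                  # Poisson satellites
    return has_central, n_satellites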
On top of this baseline model, we implement assembly bias using the
decorated HOD <cit.> with the ratio of kinetic to potential energy
r ≡ T/U as proxy for assembly history.
In our preliminary tests T/U outperformed halo concentration, possibly due to limited resolution
within the halos.
The decoration works by splitting halos into two groups according to r.
In order to reduce the effect of any potential evolution of r with halo mass,
we do this splitting separately within 64 groups containing equal numbers of halos.
The fraction P_1 of halos with lowest r is assigned type 1 (2) for positive (negative) a_bias,
while the rest is assigned type 2 (1).
Then, the mean occupations are modified as
ΔN_cen = |a_bias| min[ 1 - N_cen, ((1-P_1)/P_1) N_cen ]
ΔN_sat = |a_bias| ((1-P_1)/P_1) N_sat
for type 1 and
ΔN_cen = |a_bias| max[ -N_cen, ((1-P_1)/P_1) (N_cen - 1) ]
ΔN_sat = |a_bias| ( -N_sat )
for type 2.
Velocity bias for the centrals is implemented by adding η_cen V_vir n
where n ∼ N(0,1).
For the satellites, the velocity difference from the host halo is scaled by η_sat.
Redshift dependence for M_min, M_1 is approximated as linear in scale factor,
such that
Δlog M_i = μ(M_i) ( a - a_0 )
with a_0=1/(1+0.53).
We adopt flat priors 12.5<log M_min<13.2, 0.1<σ_log M<0.8, 12.5<log M_0,1<15.5,
0.2<α<1.5, -3<P_1'<3, -1<a_bias<1, 5<η_cen'<10, -1<η_sat'<1,
-20<μ(M_min)<20, -40<μ(M_1)<40.
Here, all masses are in h^-1M_⊙, and the primed parameters are defined as
2 P_1=(1+tanh P_1'), η_sat=exp(η_sat'), η_cen=exp(-10+η_cen').
The above intervals were found during preliminary inference runs.
Note in particular the small values of M_min compared to other analyses.
This is partly explained by systematically lower halo masses in the approximate simulations,
and partly by the n(z) downsampling described in the main text.
We picked the transformations given by the primed parameters based on the intuition
that strictly (mathematically) bounded intervals often indicate that a uniform prior
in a transformed quantity is a better choice.
§ BROADENING OF POSTERIORS
We have seen in our main posteriors that adding void statistics to the data vector sometimes broadens
the posterior on ∑m_ν.
In this appendix, we attempt to better understand this observation, focusing on k_max=0.20.
For this, we will rely on inference on our fiducial mocks.
The first possible explanation could be a statistical fluctuation,
and we cannot definitely exclude this hypothesis.
One way, however, to test it is to look at average posteriors on our fiducial mocks.
We perform inference on ∼ 20 randomly chosen mocks and plot the CDFs of concatenated chains
in Fig. <ref>.
There, we observe that the expected, average behavior is for the posterior to broaden once void statistics
are added to the data vector.
The second possible explanation could be that once void statistics are added our compression procedure
becomes less efficient. This could certainly be the case if, at linear order, the void statistics appear more
constraining than they are globally, in which case P^gg would be unnecessarily downweighted.
This hypothesis appears unlikely in light of the full posteriors presented
in Figs. <ref>, <ref>.
In these posteriors, we observe that for the parameters that are actually constrained (like M_min)
adding void statistics generically tightens the posteriors.
It appears unlikely that ∑m_ν should be an exception.
Having found these two hypotheses unsatisfactory, we arrive at the third one:
void statistics tend to constrain from below.
We illustrate this theory qualitatively in Fig. <ref>, which should not be interpreted as a literal depiction.
In fact, in Sec. <ref> we show that void statistics alone yield posteriors close to the prior.
Fig. <ref> provides merely an effective depiction.
We can investigate this hypothesis further by performing the following test.
In order to increase signal-to-noise, we perform inference on four fiducial mocks at the same time,
shown in Fig. <ref>.
For this, we use a different set of neural nets in which we leave five nuisance parameters explicit.
The reason is that all implicitly marginalized nuisance parameters are effectively assumed to be different
for each of the four mocks, an effect we would like to minimize.
Of course, increasing the number of explicit nuisance parameters complicates the training and we have less
confidence in the precise calibration of the posteriors.
For this reason, our baseline results were obtained with only two explicit nuisance parameters.
For reference, the real data posteriors obtained with these alternative neural nets are shown in
Fig. <ref>.
We perform this test with two different sets of nuisance parameters kept explicit in order to gauge robustness
(corresponding to the solid and dashed lines in Fig. <ref>).
Similar to Fig. <ref>, we average posteriors over ∼ 30 randomly chosen groups of four mocks in order to decrease
sample variance.
We observe that, consistent with our theory, the posteriors that include void statistics show a more pronounced
hint of a bump at the true ∑m_ν.
In principle, one could increase the simulated volume further by combining more mocks,
but our neural nets are not calibrated at the required level of precision and thus the resulting posteriors
would not be robust.
In summary, the more mundane ideas to explain the observed broadening of posteriors appear questionable
given the tests presented.
On the other hand, the idea that void statistics are most effective at constraining ∑m_ν from below
receives support from our experiments.
A more in-depth examination of this issue would constitute a great starting point for future work.
§ CORNER PLOTS
This appendix collects posteriors in the full parameter spaces considered.
Fig. <ref> shows the baseline parameter space with two explicit nuisance parameters.
Fig. <ref> shows larger sections of parameter space
(it should be mentioned, however, that the corresponding neural networks were trained without further
hyperparameter optimization, implying a somewhat lower level of confidence in the validity of these posteriors).
Fig. <ref> shows our EFTofLSS posteriors, demonstrating that the ΛCDM part of the parameter
space is prior-dominated.
§ SIMULATION BUDGET
One might worry that the 127 cosmo-varied simulations are not enough to properly sample the cosmological prior.
We test this by discarding a third of the simulations and training on the rest.
The resulting posterior, compared to our baseline result, is shown in Fig. <ref>.
Agreement between the two posteriors is almost perfect, demonstrating that our simulations cover the cosmological
prior sufficiently well.
§ SIMULATION DATA
About 50TB of halo catalogs, light cones, void catalogs, and summary statistics have been saved
(at 20 times between z=0.44 and z=0.68 in 127 different massive-neutrino cosmologies with various HODs
and 69 different initial conditions with a fiducial model).
We are currently finalizing how to make this data set publicly available.
§ CODE
In terms of new code, we have written C++ code to populate halo catalogs with galaxies
and to generate light cones including survey realism.
We have also written a C implementation of the quasi-random sampling scheme for uniform
and Gaussian priors.
This work necessitated several small modifications to public codes:
* : read files generated by the current version;
write output in a user-defined directory.
* : do not write neutrinos to disk.
* : support for half-precision floats.
* : native reading of the snapshots generated by
(using the file chunking to read in distributed fashion since
does not use ).
* : output to with lower priority fields in half precision.
* : support for velocities.
* : custom splitting into training and validation data.
Since all these items are relatively obscure, we do not provide documentation.
However, we are happy to share any of these with interested researchers.
A repository with most of the code is available at <https://github.com/leanderthiele/nuvoid_production>.
[Bahcall and Davis,
Jr.(1976)]Bahcall1976
author author J. N. Bahcall and author R. Davis, Jr., https://doi.org/10.1126/science.191.4224.264
journal journal Science volume 191, pages 264 (year
1976)NoStop
[Wolfenstein(1978)]Wolfenstein1978
author author L. Wolfenstein, https://doi.org/10.1103/PhysRevD.17.2369
journal journal volume 17, pages 2369 (year
1978)NoStop
[Mikheyev and Smirnov(1985)]Mikheyev1985
author author S. P. Mikheyev and author A. Y. Smirnov, @noop journal journal
Yadernaya Fizika volume 42, pages
1441 (year 1985)NoStop
[Super-Kamiokande Collaboration et al.(1998)Super-Kamiokande Collaboration, Fukuda et al.]Fukuda1998
author author Super-Kamiokande
Collaboration, author Y. Fukuda, et al., https://doi.org/10.1103/PhysRevLett.81.1562 journal journal volume 81, pages
1562 (year 1998), https://arxiv.org/abs/hep-ex/9807003 arXiv:hep-ex/9807003 [hep-ex]
NoStop
[SNO Collaboration et al.(2002)SNO Collaboration, Ahmad
et al.]Ahmad2002
author author SNO Collaboration,
author Q. R. Ahmad, et al., https://doi.org/10.1103/PhysRevLett.89.011301 journal journal volume 89, eid 011301 (year 2002), https://arxiv.org/abs/nucl-ex/0204008 arXiv:nucl-ex/0204008 [nucl-ex]
NoStop
[KamLAND Collaboration et al.(2005)KamLAND Collaboration, Araki et al.]Araki2005
author author KamLAND
Collaboration, author T. Araki, et al., https://doi.org/10.1103/PhysRevLett.94.081801 journal
journal volume 94, eid 081801 (year 2005), https://arxiv.org/abs/hep-ex/0406035 arXiv:hep-ex/0406035 [hep-ex]
NoStop
[K2K Collaboration et al.(2006)K2K Collaboration, Ahn
et al.]Ahn2006
author author K2K Collaboration,
author M. H. Ahn, et al., https://doi.org/10.1103/PhysRevD.74.072003 journal journal volume 74, eid 072003 (year 2006), https://arxiv.org/abs/hep-ex/0606032 arXiv:hep-ex/0606032 [hep-ex]
NoStop
[Daya Bay Collaboration et al.(2012)Daya Bay Collaboration, An
et al.]An2012
author author Daya Bay
Collaboration, author F. P. An, et al., https://doi.org/10.1103/PhysRevLett.108.171803 journal
journal volume 108, eid 171803 (year 2012), https://arxiv.org/abs/1203.1669 arXiv:1203.1669 [hep-ex] NoStop
[KATRIN Collaboration et al.(2022)KATRIN Collaboration, Aker
et al.]Aker2022
author author KATRIN
Collaboration, author M. Aker, et al., https://doi.org/10.1088/1361-6471/ac834e
journal journal Journal of Physics G Nuclear
Physics volume 49, eid 100501
(year 2022), https://arxiv.org/abs/2203.08059
arXiv:2203.08059 [nucl-ex] NoStop
[Planck Collaboration et al.(2020)Planck Collaboration, Aghanim et al.]PlanckCollaboration2020
author author Planck
Collaboration, author N. Aghanim, et al., https://doi.org/10.1051/0004-6361/201833910 journal journal volume 641, eid A6
(year 2020), https://arxiv.org/abs/1807.06209
arXiv:1807.06209 [astro-ph.CO] NoStop
[Icke(1984)]Icke1984
author author V. Icke, https://doi.org/10.1093/mnras/206.1.1P journal journal volume 206, pages 1P (year 1984)NoStop
[Pisani et al.(2019)Pisani, Massara, Spergel et al.]Pisani2019
author author A. Pisani, author E. Massara,
author D. N. Spergel, et al., https://doi.org/10.48550/arXiv.1903.05161 journal journal volume 51, eid 40 (year 2019), https://arxiv.org/abs/1903.05161 arXiv:1903.05161 [astro-ph.CO]
NoStop
[Moresco et al.(2022)Moresco et al.]Moresco2022
author author M. Moresco et al., https://doi.org/10.1007/s41114-022-00040-z journal journal Living Reviews in Relativity volume 25, eid 6 (year 2022), https://arxiv.org/abs/2201.07241 arXiv:2201.07241 [astro-ph.CO]
NoStop
[Schuster et al.(2023)Schuster, Hamaus, Dolag, and Weller]Schuster2023
author author N. Schuster, author N. Hamaus, author K. Dolag, and author J. Weller, https://doi.org/10.1088/1475-7516/2023/05/031 journal
journal volume 2023, eid 031 (year 2023), https://arxiv.org/abs/2210.02457 arXiv:2210.02457 [astro-ph.CO]
NoStop
[Massara et al.(2015)Massara, Villaescusa-Navarro, Viel, and Sutter]Massara2015
author author E. Massara, author F. Villaescusa-Navarro, author M. Viel, and author P. M. Sutter, https://doi.org/10.1088/1475-7516/2015/11/018
journal journal volume 2015, pages 018 (year 2015), https://arxiv.org/abs/1506.03088 arXiv:1506.03088 [astro-ph.CO]
NoStop
[Banerjee and Dalal(2016)]Banerjee2016
author author A. Banerjee and author N. Dalal, https://doi.org/10.1088/1475-7516/2016/11/015
journal journal volume 2016, eid 015 (year 2016), https://arxiv.org/abs/1606.06167 arXiv:1606.06167 [astro-ph.CO]
NoStop
[Kreisch et al.(2019)Kreisch, Pisani, Carbone,
Liu, Hawken, Massara,
Spergel, and Wandelt]Kreisch2019
author author C. D. Kreisch, author A. Pisani,
author C. Carbone, author J. Liu, author
A. J. Hawken, author
E. Massara, author
D. N. Spergel, and author
B. D. Wandelt, https://doi.org/10.1093/mnras/stz1944 journal journal volume 488, pages
4413 (year 2019), https://arxiv.org/abs/1808.07464
arXiv:1808.07464 [astro-ph.CO] NoStop
[Schuster et al.(2019)Schuster, Hamaus, Pisani,
Carbone, Kreisch, Pollina, and Weller]Schuster2019
author author N. Schuster, author N. Hamaus, author A. Pisani,
author C. Carbone, author C. D. Kreisch, author
G. Pollina, and author
J. Weller, https://doi.org/10.1088/1475-7516/2019/12/055 journal
journal volume 2019, eid 055 (year 2019), https://arxiv.org/abs/1905.00436 arXiv:1905.00436 [astro-ph.CO]
NoStop
[Contarini et al.(2021)Contarini, Marulli, Moscardini,
Veropalumbo, Giocoli, and Baldi]Contarini2021
author author S. Contarini, author F. Marulli, author L. Moscardini, author A. Veropalumbo, author C. Giocoli, and author M. Baldi, https://doi.org/10.1093/mnras/stab1112 journal journal volume 504, pages 5021 (year 2021), https://arxiv.org/abs/2009.03309 arXiv:2009.03309 [astro-ph.CO]
NoStop
[Verza et al.(2022)Verza, Carbone, Pisani, and Renzi]Verza2022
author author G. Verza, author C. Carbone,
author A. Pisani, and author A. Renzi, https://doi.org/10.48550/arXiv.2212.09740 journal journal arXiv e-prints , eid arXiv:2212.09740 (year 2022), https://arxiv.org/abs/2212.09740
arXiv:2212.09740 [astro-ph.CO] NoStop
[Sahlén(2019)]Sahlen2019
author author M. Sahlén, https://doi.org/10.1103/PhysRevD.99.063525
journal journal volume 99, eid 063525 (year 2019), https://arxiv.org/abs/1807.02470 arXiv:1807.02470 [astro-ph.CO]
NoStop
[Bayer et al.(2021a)Bayer, Villaescusa-Navarro, Massara, Liu,
Spergel, Verde, Wandelt, Viel, and Ho]Bayer2021a
author author A. E. Bayer, author F. Villaescusa-Navarro, author E. Massara, author J. Liu,
author D. N. Spergel, author L. Verde, author
B. D. Wandelt, author
M. Viel, and author
S. Ho, https://doi.org/10.3847/1538-4357/ac0e91 journal journal volume 919, eid 24
(year 2021a), https://arxiv.org/abs/2102.05049 arXiv:2102.05049 [astro-ph.CO]
NoStop
[Kreisch et al.(2022)Kreisch, Pisani, Villaescusa-Navarro,
Spergel, Wandelt, Hamaus, and Bayer]Kreisch2022
author author C. D. Kreisch, author A. Pisani,
author F. Villaescusa-Navarro,
author D. N. Spergel, author B. D. Wandelt, author
N. Hamaus, and author
A. E. Bayer, https://doi.org/10.3847/1538-4357/ac7d4b journal journal volume 935, eid 100
(year 2022), https://arxiv.org/abs/2107.02304
arXiv:2107.02304 [astro-ph.CO] NoStop
[Hotinli et al.(2023)Hotinli, Sabti, North, and Kamionkowski]Hotinli2023
author author S. C. Hotinli, author N. Sabti,
author J. North, and author M. Kamionkowski, https://doi.org/10.48550/arXiv.2306.15715 journal journal arXiv e-prints , eid arXiv:2306.15715 (year 2023), https://arxiv.org/abs/2306.15715
arXiv:2306.15715 [astro-ph.CO] NoStop
[Gregory and Thompson(1978)]Gregory1978
author author S. A. Gregory and author L. A. Thompson, https://doi.org/10.1086/156198 journal
journal volume 222, pages 784 (year 1978)NoStop
[Jõeveer et al.(1978)Jõeveer, Einasto, and Tago]Joeveer1978
author author M. Jõeveer, author J. Einasto, and author E. Tago, https://doi.org/10.1093/mnras/185.2.357 journal journal volume 185, pages 357 (year 1978)NoStop
[Tully and Fisher(1978)]Tully1978
author author R. B. Tully and author J. R. Fisher, in @noop booktitle Large Scale
Structures in the Universe, Vol. volume 79, editor edited by editor M. S. Longair and editor J. Einasto (year 1978) p. pages 31NoStop
[Kirshner et al.(1981)Kirshner, Oemler, Schechter, and Shectman]Kirshner1981
author author R. P. Kirshner, author J. Oemler,
A., author P. L. Schechter, and author S. A. Shectman, https://doi.org/10.1086/183623 journal
journal volume 248, pages L57 (year 1981)NoStop
[de Lapparent et al.(1986)de Lapparent, Geller, and Huchra]deLapparent1986
author author V. de
Lapparent, author M. J. Geller, and author J. P. Huchra, https://doi.org/10.1086/184625 journal
journal volume 302, pages L1 (year 1986)NoStop
[Sahlén et al.(2016)Sahlén, Zubeldía, and Silk]Sahlen2016
author author M. Sahlén, author Í. Zubeldía, and author J. Silk, https://doi.org/10.3847/2041-8205/820/1/L7 journal journal volume 820, eid L7 (year 2016), https://arxiv.org/abs/1511.04075 arXiv:1511.04075 [astro-ph.CO]
NoStop
[Hoyle and Vogeley(2004)]Hoyle2004
author author F. Hoyle and author M. S. Vogeley, https://doi.org/10.1086/386279 journal
journal volume 607, pages 751 (year 2004), https://arxiv.org/abs/astro-ph/0312533 arXiv:astro-ph/0312533 [astro-ph]
NoStop
[Pan et al.(2012)Pan,
Vogeley, Hoyle, Choi, and Park]Pan2012
author author D. C. Pan, author M. S. Vogeley, author F. Hoyle,
author Y.-Y. Choi, and author C. Park, https://doi.org/10.1111/j.1365-2966.2011.20197.x journal
journal volume 421, pages 926 (year 2012), https://arxiv.org/abs/1103.4156 arXiv:1103.4156 [astro-ph.CO]
NoStop
[Sutter et al.(2012a)Sutter, Lavaux, Wandelt, and Weinberg]Sutter2012a
author author P. M. Sutter, author G. Lavaux,
author B. D. Wandelt, and author D. H. Weinberg, https://doi.org/10.1088/0004-637X/761/1/44 journal journal volume 761, eid 44
(year 2012a), https://arxiv.org/abs/1207.2524 arXiv:1207.2524 [astro-ph.CO]
NoStop
[Sutter et al.(2014a)Sutter, Lavaux, Wandelt, Weinberg,
Warren, and Pisani]Sutter2014b
author author P. M. Sutter, author G. Lavaux,
author B. D. Wandelt, author D. H. Weinberg, author M. S. Warren, and author A. Pisani, https://doi.org/10.1093/mnras/stu1094 journal journal volume 442, pages
3127 (year 2014a), https://arxiv.org/abs/1310.7155 arXiv:1310.7155 [astro-ph.CO]
NoStop
[Nadathur(2016)]Nadathur2016
author author S. Nadathur, https://doi.org/10.1093/mnras/stw1340 journal journal volume 461, pages 358 (year 2016), https://arxiv.org/abs/1602.04752 arXiv:1602.04752 [astro-ph.CO]
NoStop
[Mao et al.(2017a)Mao, Berlind,
Scherrer, Neyrinck, Scoccimarro, Tinker, McBride,
Schneider, Pan, Bizyaev, Malanushenko, and Malanushenko]Mao2017a
author author Q. Mao, author A. A. Berlind, author R. J. Scherrer, author M. C. Neyrinck, author R. Scoccimarro, author J. L. Tinker, author C. K. McBride, author D. P. Schneider, author K. Pan,
author D. Bizyaev, author E. Malanushenko, and author V. Malanushenko, https://doi.org/10.3847/1538-4357/835/2/161 journal journal volume 835, eid 161
(year 2017a), https://arxiv.org/abs/1602.02771 arXiv:1602.02771 [astro-ph.CO]
NoStop
[Sutter et al.(2012b)Sutter, Lavaux, Wandelt, and Weinberg]Sutter2012b
author author P. M. Sutter, author G. Lavaux,
author B. D. Wandelt, and author D. H. Weinberg, https://doi.org/10.1088/0004-637X/761/2/187 journal journal volume 761, eid 187
(year 2012b), https://arxiv.org/abs/1208.1058 arXiv:1208.1058 [astro-ph.CO]
NoStop
[Sutter et al.(2014b)Sutter, Pisani, Wandelt, and Weinberg]Sutter2014a
author author P. M. Sutter, author A. Pisani,
author B. D. Wandelt, and author D. H. Weinberg, https://doi.org/10.1093/mnras/stu1392 journal journal volume 443, pages
2983 (year 2014b), https://arxiv.org/abs/1404.5618 arXiv:1404.5618 [astro-ph.CO]
NoStop
[Hamaus et al.(2016)Hamaus, Pisani, Sutter, Lavaux, Escoffier, Wandelt, and Weller]Hamaus2016
author author N. Hamaus, author A. Pisani,
author P. M. Sutter, author G. Lavaux, author
S. Escoffier, author
B. D. Wandelt, and author
J. Weller, https://doi.org/10.1103/PhysRevLett.117.091302 journal
journal volume 117, eid 091302 (year 2016), https://arxiv.org/abs/1602.01784 arXiv:1602.01784 [astro-ph.CO]
NoStop
[Hamaus et al.(2017)Hamaus, Cousinou, Pisani,
|
http://arxiv.org/abs/2307.04882v1 | 20230710200454 | Word length versus lower central series depth for surface groups and RAAGs | [
"Justin Malestein",
"Andrew Putman"
] | math.GR | [
"math.GR",
"math.GT"
] |
Word length versus lower central series depth for surface groups and RAAGs
Justin Malestein, Dept of Mathematics; University of Oklahoma; 601 Elm Ave; Norman, OK 73019
[email protected]
Andrew Putman, Dept of Mathematics; University of Notre Dame; 255 Hurley Hall; Notre Dame, IN 46556
[email protected]
JM was supported in part by a Simons Foundation Collaboration Grant 713006.
For surface groups and right-angled Artin groups, we prove lower bounds on the shortest word in the generators
representing a nontrivial element of the k^th term of the lower central series.
July 10, 2023
§ INTRODUCTION
Let G be a group and let γ_k(G) be its lower central series:
γ_1(G) = G and γ_k+1(G) = [γ_k(G),G] for k ≥ 1.
If γ_k+1(G) = 1, then G is at most k-step nilpotent. Let S be a finite generating set
for G.
What is the shortest word in S^± 1 representing
a nontrivial element in γ_k(G)? What are the asymptotics of the length of this word
as k →∞?
The asymptotic question is only interesting for non-nilpotent groups. It is also natural
to only consider groups that are residually nilpotent, i.e., such that
⋂_k=1^∞γ_k(G) = 1.
Let G be a non-nilpotent residually nilpotent group with a finite generating set S. Define for g ∈ G its associated word norm:
g_S = min{ℓ : g can be written as a word of length ℓ in S^± 1}.
The lower central series depth function
is the following function
d_G,S: ℕ → ℕ:
d_G,S(k) = min{g_S : g ∈ γ_k(G), g ≠ 1}.
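For instance, for the free group F_2 on S = {x_1,x_2}, a short check directly from this definition shows that d_F_2,S(2) = 4: the commutator [x_1,x_2] is a nontrivial element of γ_2(F_2) of word norm 4, while every nontrivial reduced word of length at most 3 has a nonzero exponent sum in some generator and therefore survives in the abelianization, so it does not lie in γ_2(F_2).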
Though d_G,S(k) depends on the generating set S, its asymptotic behavior as k →∞ is independent
of S. Our goal in this paper is to find bounds on d_G,S(k) for
several natural classes of groups G.
§.§ Free groups
For n ≥ 2, let F_n be the free group on
S = {x_1,…,x_n}. These are the most fundamental examples
of groups that are residually nilpotent but not nilpotent <cit.>, and both lower and upper bounds on d_F_n,S(k) have been studied:
* Using the free differential calculus, Fox <cit.> proved that
d_F_n,S(k) ≥1/2 k for k ≥ 1.
In <cit.>, the authors improved this to d_F_n,S(k) ≥ k.
* In <cit.>, the authors proved that
d_F_n,S(k) ≤1/4(k+1)^2. Elkasapy–Thom <cit.> later
improved this to a bound that grows like k^c with
c = (log_2(3+√(17))-1)/log_2(1+√(2)) ≈ 1.4411; a quick numerical check of this constant is given below.
§.§ Upper bounds
Now let G be a non-nilpotent residually nilpotent group with a finite generating set S. If
G contains a non-abelian free subgroup, then using the work of Elkasapy–Thom discussed above
we can find an upper bound on d_G,S(k) that grows[Precise upper bounds are
more complicated and depend on how the free subgroup is embedded in G.] like k^1.4411. However, lower bounds
on d_G,S(k) do not follow from the analogous results for free groups, so for
the rest of this paper we focus on lower bounds.
§.§ Surface groups
Let Σ_g be a closed oriented genus g ≥ 2 surface and let
π = π_1(Σ_g) = ⟨a_1,b_1,…,a_g,b_g | [a_1,b_1] ⋯ [a_g,b_g] = 1⟩.
Here our convention is that [x, y] = xyx^-1y^-1.
The surface group π is residually nilpotent but not nilpotent <cit.>, and shares many features
with free groups. Since g ≥ 2, the subgroup of π generated by a_1 and b_1 is a rank 2
free group. As in <ref> above, this implies a
k^1.4411 upper bound on the growth rate of d_π,S(k).
However, lower bounds are more problematic. The known lower bounds for free groups use
the free differential calculus, and there is no analogue of the free differential
calculus for surface groups.[The free derivatives are derivations
d: F_n → ℤ[F_n]. For a group G, if there exist nontrivial derivations
d: G → ℤ[G] then H^1(G;ℤ[G]) ≠ 0. If G has a compact K(G,1) this implies that G has more
than one end <cit.>, so G cannot be a one-ended group like a surface group.]
The lower bounds for free groups can also be derived using
the “Magnus representations” from free groups to units in rings of power series with
noncommuting variables,
but again it seems hard to construct suitable analogues for surface groups.
Nevertheless, we are able to prove the following:
Let π be a nonabelian surface group with standard generating set
S = {a_1,b_1,…,a_g,b_g}. Then for all k ≥ 1 we have
d_π,S(k) ≥1/4 k.
The 1/4 in this theorem is probably not optimal. We make the following conjecture:
Let π be a nonabelian surface group with standard generating set
S = {a_1,b_1,…,a_g,b_g}. Then d_π,S(k) ≥ k for all k ≥ 1.
See <ref> below for why our proof likely cannot be extended to prove
this conjecture.
§.§ Right-angled Artin groups
We will derive Theorem <ref> from an analogous result
for right-angled Artin groups, which are defined as follows.
Let X be a finite graph. The associated right-angled Artin group (RAAG) is the group A_X given
by the following presentation:
* The generators are the vertex set V(X).
* The relations are [x,y]=1 whenever x,y ∈ V(X) are joined by an edge.
The free abelian group ℤ^n is the RAAG with X the complete graph on n vertices, and the free group F_n is the RAAG
with X a graph with n vertices and no edges.
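For an intermediate example, if X is a path with three vertices a, b, c and edges {a,b} and {b,c}, then b is central in A_X and A_X ≅ F_2 × ℤ, with the F_2 factor generated by a and c and the ℤ factor generated by b.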
These groups play an important role in many areas of geometric group theory (see, e.g., <cit.>).
Just like free groups and surface groups, they are residually nilpotent <cit.>, and
they are only nilpotent if they are free abelian, i.e., if X is a complete graph.
The latter fact can be deduced from the basic observation that if Y is a vertex-induced
subgraph of X, then the natural map A_Y → A_X is split injective; indeed,
the map A_X → A_Y that kills the generators which are not vertices of Y is a right inverse
for it.
Right-angled Artin groups
often contain many surface subgroups <cit.>, and
we will prove Theorem <ref> by embedding surface groups into RAAGs
and studying the lower central series depth function there. The main result
we need along these lines is as follows.
Let X be a finite graph that is not a complete graph,
and let S = V(X) be the generating set of A_X. Then
for k ≥ 1 we have d_A_X,S(k) ≥ k.
Though Theorem <ref> does not seem to have appeared previously
in the literature, it is implicit in the work of Wade (see <cit.>), and
our proof follows his ideas.
The key tool is
a version of the “Magnus representation” for RAAGs
that was introduced by Droms in his thesis <cit.>, generalizing
work of Magnus on free groups. The classical Magnus representations
are maps from F_n to units in rings of power series with noncommuting variables (see <cit.>).
They contain much of the same information as the free derivatives.
§.§ From RAAGs to surface groups
Let G be a non-nilpotent residually nilpotent group with finite generating set T and let H be the subgroup of G generated
by a finite subset S ⊂ G. Each s ∈ S can be written as a word in T^± 1, so we can define
r = max{s_T : s ∈ S}.
For h ∈ H, we thus have
h_S ≥ 1/r h_T.
From this, we see that
d_H,S(k) ≥ 1/r d_G,T(k) for all k ≥ 1.
Since all nonabelian surface groups π are subgroups of RAAGs, Theorem <ref> therefore immediately
implies a linear lower bound on the lower central series depth function of π.
However, the precise constants depend on the embedding into a RAAG, and without further
work might depend on the genus g.
To get the genus-independent constant 1/4 from Theorem <ref>,
we will have to carefully control the geometry of our embeddings of surface groups into RAAGs
and ensure that we can take r=4 in the above.
Many other groups can also be embedded in right-angled Artin groups, and the argument
above shows that all of them have linear lower bounds on their lower central series
depth functions.
§.§ Optimal embeddings
It is natural to wonder if we can improve the 1/4 in Theorem <ref>
by using a more clever embedding into a RAAG. We conjecture that this is not possible:
Let π be a nonabelian surface group with standard generating set
S = {a_1,b_1,…,a_g,b_g}, let X be a finite graph, and let
ϕ: π ↪ A_X be an embedding. Then there exists
some s ∈ S such that ϕ(s)_V(X)≥ 4.
As we will discuss in <ref> below, Crisp–Wiest <cit.> gave
an explicit description of all homomorphisms from surface groups to RAAGs in terms of collections of
loops on the surface. To prove Conjecture <ref>, what one would have
to show is that if ϕπ→ A_X is a map from a surface group to a RAAG
arising from the Crisp–Wiest construction that does not satisfy the conclusion of
Conjecture <ref>, then ϕ is not injective.
§.§ Outline
We prove Theorem <ref> in <ref> and Theorem <ref>
in <ref>. This last
section depends on the preliminary
<ref>, which discusses work of Crisp–Wiest parameterizing maps from surface groups to RAAGs.
§ RIGHT-ANGLED ARTIN GROUPS
Let X be a finite graph with associated right-angled Artin group A_X. In this section,
we first discuss some structural results about A_X and then prove Theorem <ref>.
§.§ Monoid
In addition to the right-angled Artin group A_X, we will also need the right-angled Artin monoid M_X. This
is the associative monoid with the following presentation:
* The generators are the vertices V(X) of X. To distinguish these generators from the
corresponding generators of A_X, we will sometimes write them with bold-face letters. In other words, s denotes
an element of A_X and 𝐬 denotes an element of M_X.
* The relations are 𝐱𝐲 = 𝐲𝐱 whenever x,y ∈ V(X) are joined by an edge.
There is a monoid homomorphism M_X → A_X whose image is the set of all elements
of A_X that can be represented by “positive words”. As we will discuss below, this
monoid homomorphism is injective.
§.§ Normal form
Let S = V(X) be the generating set for A_X and M_X. Consider a word
w = s_1^e_1⋯ s_n^e_n with s_1,…,s_n ∈ S and e_1,…,e_n ∈ ℤ.
This word represents an element of A_X, and if e_i ≥ 0 for all 1 ≤ i ≤ n it represents
an element of M_X (here for conciseness we are not using our bold-face conventions). We say that w is fully reduced if it satisfies the following conditions:
* Each e_i is nonzero.
* For all 1 ≤ i < j ≤ n with s_i = s_j, there exists some k with i < k < j such that
s_k does not commute[As observed earlier, A_Y embeds in A_X for any
vertex-induced subgraph Y, so this is equivalent to s_k being distinct from
and not adjacent to s_i = s_j.] with s_i = s_j.
Note that this implies in particular that s_i ≠ s_i+1 for all 1 ≤ i < n, so w is reduced as
a word in the free group on S. It is clear that every element of A_X and M_X can be represented
by a fully reduced word.
This representation is unique in the following sense:
* Consider fully reduced words
w = s_1^e_1⋯ s_n^e_n and w' = t_1^f_1⋯ t_m^f_m
representing the same element of A_X or M_X. Then we can obtain w' from w by a sequence of swaps, i.e.,
flipping adjacent terms s_i^e_i and s_i+1^e_i+1 such that s_i commutes with s_i+1.
For A_X, this uniqueness was stated without proof by Servatius <cit.>. The earliest proof
we are aware of is in Green's thesis <cit.>. Alternate proofs can be found in
<cit.> and <cit.>.
Using the monoid homomorphism M_X → A_X, the uniqueness for M_X follows[Whether this is a
circular argument depends on the proof of uniqueness used for A_X. The geometric proof
from <cit.> works directly with groups, and does not even implicitly
prove anything about monoids.] from that of A_X. Note that this uniqueness also implies
that the monoid homomorphism M_X → A_X is injective.
The following lemma shows that fully reduced words realize the word norm in A_X:
Let X be a finite graph. Let S = V(X) be the generating set for A_X. Consider some w ∈ A_X, and represent
w by a fully reduced word
w = s_1^e_1⋯ s_n^e_n with s_1,…,s_n ∈ S and e_1,…,e_n ∈ ℤ.
Then w_S = |e_1|+⋯+|e_n|.
Immediate from the uniqueness up to swaps of fully reduced words as well as the fact that taking
an arbitrary word and putting it in fully reduced form does not lengthen the word.
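The reduction implicit in this lemma is easy to carry out by machine. The following minimal Python sketch is purely illustrative: the encoding of a word as a list of (generator, exponent) pairs and of X as a commutation test commutes(x,y) is a convention chosen here, not taken from any existing package. It fully reduces a word and then computes the word norm via the lemma just proved.

def fully_reduce(word, commutes):
    # word: a list of (generator, exponent) pairs, e.g. [("a", 1), ("b", -2)].
    # commutes(x, y): True exactly when the distinct generators x and y are
    # joined by an edge of X.
    word = [(s, e) for (s, e) in word if e != 0]
    changed = True
    while changed:
        changed = False
        for i in range(len(word)):
            s, e = word[i]
            for j in range(i + 1, len(word)):
                t, f = word[j]
                if t == s:
                    # every syllable strictly between positions i and j commutes
                    # with s, so s^f can be slid left and merged with s^e
                    del word[j]
                    word[i] = (s, e + f)
                    word = [(u, g) for (u, g) in word if g != 0]
                    changed = True
                    break
                if not commutes(s, t):
                    break  # s cannot be slid past t
            if changed:
                break
    return word

def word_norm(word, commutes):
    # By the lemma above, the word norm equals the total exponent length of
    # any fully reduced representative.
    return sum(abs(e) for (_, e) in fully_reduce(word, commutes))

# Example: X a path a -- b -- c, so b commutes with a and c but a and c do not.
commutes = lambda x, y: {x, y} in ({"a", "b"}, {"b", "c"})
print(fully_reduce([("a", 1), ("b", 1), ("a", -1)], commutes))  # [('b', 1)]
print(word_norm([("a", 1), ("c", 1), ("a", -1)], commutes))     # 3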
§.§ Monoid ring
Let ℤ[M_X] be the monoid ring whose elements are formal ℤ-linear combinations of elements of M_X.
Since the relations in M_X are all of the form 𝐱𝐲 = 𝐲𝐱 for generators 𝐱 and 𝐲,
all words representing an element 𝐰 ∈ M_X have the same length, which we will denote ℓ(𝐰). This length function
satisfies ℓ(𝐰_1 𝐰_2) = ℓ(𝐰_1)+ℓ(𝐰_2) for 𝐰_1,𝐰_2 ∈ M_X. For k ≥ 0, define
M_X^(k) = {𝐰 ∈ M_X : ℓ(𝐰) = k}.
The monoid ring ℤ[M_X] is a graded ring with ℤ[M_X]_(k) = ℤ[M_X^(k)].
§.§ Partially commuting power series
Let I ⊂ ℤ[M_X] be the ideal generated by the elements of the generating set V(X). For k ≥ 1,
the ideal I^k consists of ℤ-linear combinations of 𝐰 ∈ M_X with ℓ(𝐰) ≥ k. Define
𝒫_X = lim_⟵ ℤ[M_X]/I^k.
Elements of the inverse limit 𝒫_X can be regarded as power series
∑_k=0^∞ 𝐟_k with 𝐟_k ∈ ℤ[M_X]_(k) for all k ≥ 0.
Each 𝐟_k is a linear combination of products of k generators from V(X), some of which commute and
some of which do not. Multiplication works in the usual way:
(∑_k=0^∞ 𝐟_k) (∑_k'=0^∞ 𝐟'_k') = ∑_ℓ=0^∞ (∑_k+k' = ℓ 𝐟_k 𝐟'_k').
§.§ Magnus representation
We now discuss the Magnus representation of A_X, which was introduced by Droms in his thesis <cit.>, generalizing
classical work of Magnus for free groups (see <cit.>). See <cit.> for
a survey. The starting point is the observation that for s ∈ V(X), we have the following identity in
𝒫_X:
(1+𝐬)(1-𝐬+𝐬^2-𝐬^3+⋯) = 1.
In other words, 1+𝐬 is a unit in 𝒫_X. If generators s,s' ∈ V(X) commute, then 1+𝐬 and 1+𝐬' also commute. It follows that we can define
a homomorphism
μ: A_X ⟶ (𝒫_X)^×
via the formula
μ(s) = 1+𝐬 for s ∈ V(X).
§.§ Dimension subgroups and the lower central series
Recall that I ⊂ ℤ[M_X] is the ideal generated by elements of the generating set V(X). There
is a corresponding ideal ℐ ⊂ 𝒫_X consisting of
all elements with constant term 0.
For k ≥ 1, the k^th dimension subgroup of A_X,
denoted D_k(A_X), is the kernel of the composition
A_X ⟶ 𝒫_X ⟶ 𝒫_X/ℐ^k, where the first map is μ and the second is the projection.
In other words, D_k(A_X) consists of elements w ∈ A_X such that
μ(w) = 1 + (terms of degree at least k).
The most important theorem about D_k(A_X) identifies it with the k^th term
of the lower central series of A_X:
Let X be a finite graph. Then D_k(A_X) = γ_k(A_X) for all k ≥ 1.
In fact, for what follows all we need is the much easier fact that γ_k(A_X) ⊂ D_k(A_X),
which appears in Droms's thesis <cit.>. For this, since D_1(A_X) = A_X = γ_1(A_X)
it is enough to verify that
[D_k(A_X),D_ℓ(A_X)] ⊂ D_k+ℓ(A_X),
which is immediate from the definitions.
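As a simple illustration, suppose that s,s' ∈ V(X) are generators that are not joined by an edge, and set w = [s,s'] ∈ γ_2(A_X). A direct computation in 𝒫_X gives
μ(w) = (1+𝐬)(1+𝐬')(1+𝐬)^-1(1+𝐬')^-1 = 1 + 𝐬𝐬' - 𝐬'𝐬 + (terms of degree at least 3).
Since s and s' do not commute, 𝐬𝐬' ≠ 𝐬'𝐬 in M_X, so the degree 2 part is nonzero. Thus w lies in D_2(A_X) but not in D_3(A_X); by the theorem above, this says exactly that [s,s'] ∈ γ_2(A_X) but [s,s'] ∉ γ_3(A_X).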
§.§ Lower bounds for the lower central series of a RAAG
We close this section by proving Theorem <ref>. As we said in the introduction,
the proof closely follows ideas of Wade <cit.>.
We start by recalling the statement. Let X be a finite graph that is not a complete graph and
let S = V(X) be the generating set for A_X. Consider a nontrivial element w ∈ A_X, and
let k = w_S be its word norm in the generating set S. We must prove that
w ∉γ_k+1(A_X). By Theorem <ref>, it is enough to prove
that w ∉ D_k+1(A_X).
Represent w by a fully reduced word:
w = s_1^e_1⋯ s_n^e_n with s_1,…,s_n ∈ S and e_1,…,e_n ∈ ℤ.
By Lemma <ref>, we have
w_S = |e_1|+⋯+|e_n| ≥ n.
It is thus enough to prove that w ∉ D_n+1(A_X). To do this, it is enough to
prove that a term of degree n appears in μ(w) ∈ 𝒫_X.
An easy induction shows that for all 1 ≤ i ≤ n, we have
μ(s_i^e_i) = (1+𝐬_i)^e_i = 1 + e_i 𝐬_i + 𝐬_i^2 𝐮_i for some 𝐮_i ∈ 𝒫_X.
It follows that
μ(w) = (1+e_1 𝐬_1 + 𝐬_1^2 𝐮_1) (1+e_2 𝐬_2 + 𝐬_2^2 𝐮_2) ⋯ (1+e_n 𝐬_n + 𝐬_n^2 𝐮_n).
Say that some 𝐰 ∈ M_X is square-free if it cannot be expressed as a word
in the generators S = V(X) for the monoid M_X with two consecutive letters the same generator.[Be warned
that it is possible for an element to have one such expression while not being square-free.
For instance, if 𝐬,𝐬' ∈ S are distinct commuting generators then 𝐬𝐬'𝐬 is not square-free
since 𝐬𝐬'𝐬 = 𝐬^2 𝐬'.] It is immediate from the uniqueness up to swaps of fully reduced
words that the fully reduced word 𝐬_1 𝐬_2 ⋯ 𝐬_n represents a square-free element of
M_X. When we expand out (<ref>), the only square-free term of degree
n is
e_1 e_2 ⋯ e_n 𝐬_1 𝐬_2 ⋯ 𝐬_n.
It follows that this degree n term survives when we expand out μ(w), as desired.
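To see the key step in a small case, take non-adjacent generators s,t ∈ V(X) and consider w = s^2 t^-1, which is fully reduced with n = 2 and w_S = 3. Here
μ(w) = (1+𝐬)^2 (1+𝐭)^-1 = (1 + 2𝐬 + 𝐬^2)(1 - 𝐭 + 𝐭^2 - ⋯),
and the only square-free term of degree 2 in the expansion is -2𝐬𝐭. Since distinct elements of M_X are linearly independent in ℤ[M_X], this term cannot be cancelled by the non-square-free terms 𝐬^2 and 𝐭^2, so w ∉ D_3(A_X).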
§ MAPPING SURFACE GROUPS TO RAAGS
Before we can prove Theorem <ref>, we must
discuss some work of Crisp–Wiest <cit.> that parameterizes maps from surface groups to RAAGs.
We will not need the most general form of their construction (which they prove can give any homomorphism
from a surface group to a RAAG), so we will only describe a special case of it. Fix a closed oriented
surface Σ and a basepoint ∗∈Σ.
§.§ Crisp–Wiest construction
A simple dissection[Crisp and Wiest use the term dissection for a collection of curves which satisfy some conditions and have a certain decoration. We add “simple” to indicate that
we do not have any decoration.] 𝒟 on Σ is a finite collection of oriented simple closed curves
on Σ satisfying the following conditions:
* None of the curves contain the basepoint ∗.
* Any two curves in 𝒟 intersect transversely.
* There are no triple intersection points between three curves in 𝒟.
For a simple dissection 𝒟, let X(𝒟) be the graph whose vertices are the curves in 𝒟 and where
two vertices are joined by an edge if the corresponding curves intersect. Crisp–Wiest <cit.> proved that
the following gives a well-defined homomorphism ϕ: π_1(Σ,∗) → A_X(𝒟):
* Consider some x ∈ π_1(Σ,∗). Realize x by an immersed based loop η: [0,1] → Σ
that is transverse to all the curves in 𝒟 and avoids intersection points between curves of 𝒟. If η is disjoint from all the curves in 𝒟, then ϕ(x) = 1. Otherwise,
let
0 < t_1 < ⋯ < t_n < 1
be the collection of all values such that η(t_i) is contained in some γ_i ∈ 𝒟. For 1 ≤ i ≤ n, let
e_i = ± 1 be the sign of the intersection of η with the oriented loop γ_i at η(t_i). Then
ϕ(x) = γ_1^e_1⋯γ_n^e_n ∈ A_X(𝒟).
We will say that ϕ is the map obtained by applying the Crisp–Wiest construction to 𝒟.
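For instance, if η crosses some γ ∈ 𝒟 positively, then a different γ' ∈ 𝒟 negatively, and then γ positively again, and meets no other curve of 𝒟, then ϕ(x) = γ γ'^-1 γ ∈ A_X(𝒟).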
§.§ Injectivity criterion
Crisp–Wiest <cit.> describe an approach for proving that ϕ is injective in certain cases.
To describe it, we must introduce some more terminology. For a simple dissection
𝒟 on Σ, let
G(𝒟) = ⋃_γ ∈ 𝒟 γ,
which we view as a graph embedded in Σ with a vertex for each intersection point
between curves in 𝒟. We say that 𝒟 is a filling simple dissection if each
component of Σ ∖ G(𝒟) is a disk.
For a component U of Σ ∖ G(𝒟), the boundary of U can be identified
with a circuit in the graph G(𝒟). Say that U satisfies the injectivity criterion
if the following holds for any two distinct edges e and e'
in the boundary of U. Let γ and γ' be the oriented curves in 𝒟
that contain e and e', respectively. We then require that γ≠γ' and
that if γ intersects γ', then e and e'
are adjacent edges in the boundary of U.
We can now state our injectivity criterion:
Let Σ be a closed oriented surface equipped with a basepoint
∗ and let 𝒟 be a filling simple dissection on Σ.
For all components U of Σ ∖ G(𝒟), assume that U
satisfies the injectivity criterion.
Then the map ϕ: π_1(Σ,∗) → A_X(𝒟) obtained
by applying the Crisp–Wiest construction to 𝒟 is injective.
While Proposition <ref> is not explicitly stated or proved in <cit.>,
it is implicit in their work. We present a proof for the convenience of the reader. This requires some preliminary definitions.
§.§ Salvetti complex
Let X be a finite graph and let A_X be the corresponding right-angled Artin group. The
Salvetti complex of A_X, denoted Sal(X), is a certain non-positively curved cube complex[Here
a cube complex is non-positively curved if its universal cover is CAT(0).] with π_1(Sal(X)) = A_X.
It can be constructed as follows. Enumerate
the vertices of X as
V(X) = {v_1,…,v_n}.
Identify S^1 with the unit circle in ℂ, so 1 ∈ S^1 is a basepoint.
For a subset I ⊂ {v_1,…,v_n} of cardinality k, let S_I ≅ (S^1)^k be
S_I = {(z_1,…,z_n) ∈ (S^1)^n : z_i = 1 for all i with v_i ∉ I}.
A subset I ⊂{v_1,…,v_n} is a k-clique of X if the subgraph of X
induced by I is a complete subgraph on k vertices. A clique is a set of vertices
that forms a k-clique for some k. With these definitions,
Sal(X) is the union of the S_I as I ranges over cliques in X. The space Sal(X)
can be given a cube complex structure containing a k-cube for each k-clique
in X. In particular, it has a single vertex (i.e., 0-cube) corresponding to the (empty)
0-clique.
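For instance, if X is a single vertex then Sal(X) is a circle; if X is a single edge then Sal(X) is the torus S^1 × S^1; and if X has two vertices and no edges then Sal(X) is a wedge of two circles.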
§.§ Dual cubulation
Now let 𝒟 be a filling simple dissection on Σ_g.
We can form a dual cube complex structure on Σ_g as follows:
* Put a vertex in the interior of each component of Σ_g ∖ G(𝒟). For the component
containing the basepoint ∗, the vertex should be ∗.
* For each edge e of G(𝒟), connect the vertices in the components on either side of
e by an edge.
* For each vertex v of G(𝒟), put a 2-cube centered at v as follows:
[Figure Cube: a 2-cube of the dual cube complex structure centered at a vertex of G(𝒟)]
Here the graph G(𝒟) is blue, the cube centered at the vertex of G(𝒟) is green, and the surrounding
cubes are yellow, pink, and orange. The colors are just there to distinguish the different cubes visually, so
e.g., the different yellow cubes might or might not coincide (depending on the structure of G(𝒟) on the rest
of the surface).
We will call this the cube complex structure dual to 𝒟.
§.§ Proof of Proposition <ref>
We first recall what we must prove. Let Σ be a closed oriented surface equipped with a basepoint
∗ and let 𝒟 be a filling simple dissection on Σ.
For all components U of Σ ∖ G(𝒟), assume that U
satisfies the injectivity criterion.
We must prove that the map ϕ: π_1(Σ,∗) → A_X(𝒟) obtained
by applying the Crisp–Wiest construction to 𝒟 is injective.
Endow Σ with the cube complex structure dual to 𝒟, and let Sal(X(𝒟)) be the Salvetti complex
of A_X(𝒟). We start by constructing a map of cube complexes f: Σ → Sal(X(𝒟))
such that
f_∗: π_1(Σ,∗) → π_1(Sal(X(𝒟))) = A_X(𝒟)
equals ϕ. Define f as follows:
* The map f sends each vertex of Σ to the unique vertex of Sal(X(𝒟)).
* For an edge e of Σ that crosses an oriented loop γ of 𝒟, the map
f takes e isometrically to the loop of Sal(X(𝒟)) corresponding to the 1-clique {γ}
of X(𝒟). Orienting e such that the intersection of e with γ is positive, we do
this such that f(e) goes around the loop in the direction corresponding to the generator γ
of π_1(Sal(X(𝒟))) = A_X(𝒟).
* For a 2-cube c of Σ centered at an intersection of loops γ_1 and γ_2
of 𝒟, the map f sends c isometrically to the 2-cube corresponding to the 2-clique
{γ_1,γ_2} of X(𝒟).
With these definitions, it is clear that f_∗ = ϕ.
By <cit.>, the map f_∗ = ϕ will be an injection if for every vertex
v of Σ, the map f takes the link of v injectively into a full subcomplex
of the link of f(v) in Sal(X(𝒟)). These links have the following description:
* The vertex v lies in some component U of Σ ∖ G(𝒟). The link of v is
a cycle whose vertices are precisely the edges of G(𝒟) surrounding U.
* The vertex f(v) is the unique vertex of Sal(X(𝒟)). Its link is the following complex:
* There are two vertices for each generator γ of A_X(𝒟) (or alternatively, each
γ ∈ 𝒟), one corresponding to the positive direction and the other to the negative direction.
* A collection of vertices forms a simplex if they correspond to distinct generators of
A_X(𝒟) all of which commute.
From this description, we see that the fact that U satisfies the injectivity criterion
ensures that f takes the link of v injectively into a full
subcomplex of the link of f(v) in Sal(X(𝒟)), as desired.
§ BOUNDS ON SURFACE GROUPS
We now study the lower central series of surface groups and prove
Theorem <ref>.
We start by recalling the statement. For some g ≥ 2, let Σ_g be a closed oriented genus g surface
equipped with a basepoint ∗ and let S = {a_1,b_1,…,a_g,b_g} be the standard basis for π = π_1(Σ_g,∗).
Our goal is to prove that
d_π,S(k) ≥1/4 k for all k ≥ 1. Equivalently, consider
some nontrivial w ∈γ_k(π). We must prove that w_S ≥1/4 k.
What we will do is find a finite graph X and an injective homomorphism
ϕ: π→ A_X such that, letting T = V(X) be the generating set for A_X, we have
ϕ(s)_T≤ 4 for all s ∈ S.
We then have
ϕ(w) ∈γ_k(A_X), and since ϕ is injective we have ϕ(w) ≠ 1.
Since π is nonabelian the graph X is not a complete graph, so
we can apply Theorem <ref> to deduce that
ϕ(w)_T≥ k. Since ϕ(s)_T≤ 4 for all s ∈ S,
we conclude that
w_S ≥1/4ϕ(w)_T≥1/4 k,
as desired.
It remains to construct X and ϕ. We can draw the elements of S as follows, where a_k “encircles” the kth hole from the left:
[Figure: PiGenerators]
Let
= {x_0,…,x_g,y_1,…,y_g,z}
be the following filling simple dissection on Σ_g:
[Figure: ArtinLoops]
Let ϕ: π→ A_X() be the homomorphism obtained by applying the Crisp–Wiest
construction to and let T = V(X()) be the generating set for A_X().
There are four components of Σ_g ∖ G(), and by inspection each of them
satisfies the injectivity criterion from <ref>.
Proposition <ref> thus implies that ϕ is injective.
By construction, the following hold:
ϕ(a_k) = x_{k-1} x_k^{-1},
ϕ(b_k) = x_k z y_k x_k^{-1}.
These formulas imply that
ϕ(s)_T≤ 4 for all s ∈ S, as desired.
§ REFERENCES
Baumslag
G. Baumslag, On generalised free products, Math. Z. 78 (1962), 423–438.
CharneySurvey
R. M. Charney, An introduction to right-angled Artin groups, Geom. Dedicata 125 (2007), 141–158. math/0610668
CrispSageevSapir
J. S. Crisp, M. Sageev and M. V. Sapir, Surface subgroups of right-angled Artin groups, Internat. J. Algebra Comput. 18 (2008), no. 3, 443–491. 0707.1144
CrispWiest
J. S. Crisp and B. Wiest, Embeddings of graph braid and surface groups in right-angled Artin groups and braid groups, Algebr. Geom. Topol. 4 (2004), 439–472. math/0303217
DromsThesis
C. Droms, Graph Groups, PhD thesis, Syracuse University, 1983.
ElkasapyThom
A. I. Elkasapy and A. Thom, On the length of the shortest non-trivial element in the derived and the lower central series, J. Group Theory 18 (2015), no. 5, 793–804. 1311.0138
FoxFree1
R. H. Fox, Free differential calculus. I. Derivation in the free group ring, Ann. of Math. (2) 57 (1953), 547–560.
Frederick
K. N. Frederick, The Hopfian property for a class of fundamental groups, Comm. Pure Appl. Math. 16 (1963), 1–8.
GreenThesis
E. R. Green, Graph products of groups, PhD thesis, University of Leeds, 1990.
KimSurface
S. Kim, On right-angled Artin groups without surface subgroups, Groups Geom. Dyn. 4 (2010), no. 2, 275–307. 0811.1946
Magnus
W. Magnus, Beziehungen zwischen Gruppen und Idealen in einem speziellen Ring, Math. Ann. 111 (1935), no. 1, 259–280.
MagnusKarrassSolitar
W. Magnus, A. Karrass and D. M. Solitar, Combinatorial group theory, second revised edition, Dover Publications, Inc., New York, 1976.
MalesteinPutmanFree
J. Malestein and A. Putman, On the self-intersections of curves deep in the lower central series of a surface group, Geom. Dedicata 149 (2010), 73–84. 0901.2561
ScottWallTopological
G. P. Scott and C. T. C. Wall, Topological methods in group theory, in Homological group theory (Proc. Sympos., Durham, 1977), 137–203, London Math. Soc. Lecture Note Ser., 36, Cambridge Univ. Press, Cambridge.
ServatiusAutos
H. Servatius, Automorphisms of graph groups, J. Algebra 126 (1989), no. 1, 34–60.
ServatiusDromsServatius
H. Servatius, C. Droms and B. Servatius, Surface subgroups of graph groups, Proc. Amer. Math. Soc. 106 (1989), no. 3, 573–578.
WadeSurvey
R. D. Wade, The lower central series of a right-angled Artin group, Enseign. Math. 61 (2015), no. 3-4, 343–371. 1109.1722
WiseBook
D. T. Wise, From riches to raags: 3-manifolds, right-angled Artin groups, and cubical geometry, CBMS Regional Conference Series in Mathematics, 117, Published for the Conference Board of the Mathematical Sciences, Washington, DC, 2012.
|
http://arxiv.org/abs/2307.04403v1 | 20230710081123 | $φ(2170)$ decaying to $φππ$ and $φK\bar{K}$ | [
"Yun-Hua Chen"
] | hep-ph | [
"hep-ph",
"hep-ex",
"nucl-th"
] | |
http://arxiv.org/abs/2307.05473v1 | 20230711175831 | Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives | [
"Tom Monnier",
"Jake Austin",
"Angjoo Kanazawa",
"Alexei A. Efros",
"Mathieu Aubry"
] | cs.CV | [
"cs.CV"
] |
Differentiable Blocks World:
Qualitative 3D Decomposition by Rendering Primitives
Tom Monnier^1 Jake Austin^2 Angjoo Kanazawa^2 Alexei A. Efros^2
Mathieu Aubry^1
^1LIGM, Ecole des Ponts, Univ Gustave Eiffel ^2UC Berkeley
====================================================================================================================================================================
Given a set of calibrated images of a scene, we present an approach that produces a simple,
compact, and actionable 3D world representation by means of 3D primitives. While many
approaches focus on recovering high-fidelity 3D scenes, we focus on parsing a scene into
mid-level 3D representations made of a small set of textured primitives. Such
representations are interpretable, easy to manipulate and suited for physics-based
simulations. Moreover, unlike existing primitive decomposition methods that rely on
3D input data, our approach operates directly on images through differentiable rendering.
Specifically, we model primitives as textured superquadric meshes and optimize their
parameters from scratch with an image rendering loss. We highlight the importance of
modeling transparency for each primitive, which is critical for optimization and also
enables handling varying numbers of primitives. We show that the resulting textured
primitives faithfully reconstruct the input images and accurately model the visible 3D
points, while providing amodal shape completions of unseen object regions. We compare our
approach to the state of the art on diverse scenes from DTU, and demonstrate its robustness
on real-life captures from BlendedMVS and Nerfstudio. We also showcase how our results can
be used to effortlessly edit a scene or perform physical simulations. Code and video
results are available at
https://www.tmonnier.com/DBW.
§ INTRODUCTION
Recent multi-view modeling approaches, building on Neural Radiance
Fields <cit.>, capture scenes with astonishing accuracy by optimizing a
dense occupancy and color model. However, they do not incorporate any notion of objects,
they are not easily interpretable for a human user or a standard 3D modeling software, and
they are not useful for physical understanding of the scene. In fact, even though these
approaches can achieve a high-quality 3D reconstruction, the recovered content is nothing but
a soup of colorful particles! In contrast, we propose an approach that recovers textured
primitives, which are compact, actionable, and interpretable.
More concretely, our method takes as input a collection of calibrated images of a scene, and
optimizes a set of primitive meshes parametrized by
superquadrics <cit.> and their UV textures to minimize a rendering
loss. The approach we present is robust enough to work directly from a random
initialization. One of its key components is the optimization of a transparency parameter for
each primitive, which helps in dealing with occlusions as well as handling
varying number of primitives.
This notably requires adapting standard differentiable
renderers to deal with transparency. We also show the benefits of using a perceptual loss, a
total variation regularization on the textures and a parsimony loss favoring the use of a
minimal number of primitives.
Our scene representation harks back to the classical Blocks World
ideas <cit.>. An important difference is that the Blocks World-inspired
approaches are typically bottom-up, leveraging low-level image features, such as
edges <cit.>,
super-pixels <cit.>, or more recently learned
features <cit.>, to infer 3D blocks. In
contrast, we perform a direct top-down optimization of 3D primitives and texture using a
rendering loss, starting from a random initialization in the spirit of analysis-by-synthesis.
Unlike related works that fit
primitives to 3D point clouds <cit.> (<Ref>),
our approach, dubbed Differentiable Blocks World (or DBW), does not require any 3D
reconstruction a priori but instead operates directly on a set of calibrated input
images, leveraging photometric consistency across different views (<Ref>).
This makes our approach more robust since methods based on 3D are very sensitive to noise in
the reconstructions and have difficulties dealing with incomplete objects. Our setting is
similar to existing NeRF-like approaches, but our model is able to recover a significantly
more interpretable and parsimonious representation. In particular, such an interpretable
decomposition allows us to easily play with the discovered scene, e.g., by performing
physics-based simulations (<Ref>). Code and video results are available on
our project webpage: https://www.tmonnier.com/DBW.
§ RELATED WORK
Scene decomposition into 3D primitives.
The goal of understanding a scene by decomposing it into a set of geometric primitives can be
traced back to the very first computer vision thesis by Larry Roberts on Blocks
World <cit.> in
1963. In it, Roberts shows a complete scene understanding system for a simple closed world of
textureless polyhedral shapes by using a generic library of polyhedral block components. In
the 1970s, Binford proposes the use of Generalized Cylinders as general
primitives <cit.>, later refined by Biederman into the
recognition-by-components theory <cit.>. But applying these ideas
to real-world image data has proved
rather difficult.
A large family of methods does not consider images at all, instead focusing on finding
primitives
in 3D data. Building upon the classical idea of RANSAC <cit.>, works
like <cit.> accurately extract
various primitive shapes (e.g., planes, spheres and cylinders for <cit.>) from a point cloud.
In particular, MonteBoxFinder <cit.> is a recent RANSAC-based
system that robustly extracts cuboids from noisy point clouds by selecting the best
proposals through Monte Carlo Tree Search. To avoid the need for RANSAC hyperparameter tuning
while retaining robustness, Liu <cit.> introduce a probabilistic
framework dubbed EMS that recovers superquadrics <cit.>.
Other methods
instead leverage neural learning advances to robustly predict primitive decomposition from a
collection of shapes (e.g., ShapeNet <cit.>), in the form of
cuboids <cit.>, superquadrics <cit.>, shapes from a small
dictionary <cit.> or learnable prototypical
shapes <cit.>. However,
they are typically limited to shapes of known categories and require perfect 3D data. More
generally, the decomposition results of all 3D-based methods highly depend on the quality of
the 3D input, which is always noisy and incomplete for real scenes. For a complete survey of
3D decomposition methods, we refer the reader to <cit.>.
More recently, there has been a renewed effort to fit 3D primitives to various image
representations,
such as depth maps, segmentation predictions or low-level image features. Depth-based
approaches <cit.> naturally associate a 3D point cloud to each image which is then used for
primitive fitting. However, the resulting point cloud is highly incomplete,
ambiguous and sometimes inaccurately predicted, thus limiting the decomposition quality.
Building upon the single-image scene layout estimation <cit.>, works like <cit.> compute cuboids
that best match the predicted surface orientations.
Finally, Façade <cit.>, the classic image-based rendering work, leverages
user annotations across multiple images with known camera viewpoints to render a scene with
textured 3D primitives.
In this work, we do not rely on 3D, depth, segmentation, low-level features, or user
annotations to compute the 3D decomposition. Instead, taking inspiration from
Façade <cit.> and recent multi-view modeling
advances <cit.>, our approach only requires calibrated views of the scene
and directly optimizes textured primitives through photometric consistency in an
end-to-end fashion. That is, we solve the 3D decomposition and multi-view stereo problems
simultaneously.
Multi-view stereo. Our work can be seen as an end-to-end primitive-based approach
to multi-view stereo (MVS), whose goal is to output a 3D reconstruction from multiple images
taken from known camera viewpoints. We refer the reader to <cit.> for an exhaustive review of classical methods. Recent MVS works can be
broadly split into two groups.
Modular multi-step approaches typically rely on several processing steps to extract the
final geometry from the images. Most methods <cit.>, including the widely used
COLMAP <cit.>, first estimate depth maps for each image (through
keypoint matching <cit.> or neural network
predictions <cit.>), then apply a depth fusion step to generate a textured point cloud.
Finally, a mesh can be obtained with a meshing algorithm <cit.>. Other multi-step approaches directly rely on point
clouds <cit.> or voxel
grids <cit.>.
Note that, although works like <cit.> leverage end-to-end
trainable networks to regress the geometry, we consider them as multi-step methods as they
still rely on a training phase requiring 3D supervision before being applied to unknown sets
of multi-view images. Extracting geometry through multiple steps involves careful tuning of
each stage, thus increasing the pipeline complexity.
End-to-end approaches directly optimize a 3D scene representation using photometric
consistency across different views along with other constraints in an
optimization framework. Recent
methods use neural networks to implicitly represent the 3D scene, in the form of occupancy
fields <cit.>, signed distance functions <cit.>
or radiance fields, as introduced in NeRF <cit.>. Several works incorporate
surface constraints in neural volumetric rendering to further improve the scene
geometry <cit.>,
with a quality approaching that of traditional MVS methods. Other
methods <cit.>
instead propose to leverage recent advances in mesh-based differentiable
rendering <cit.> to explicitly
optimize textured meshes. Compared to implicit 3D representations, meshes are highly
interpretable and are straightforward to use in computer graphics pipelines, thus
enabling effortless scene editing and simulation <cit.>.
However, all the above approaches represent the scene as a single mesh, making it ill-suited
for manipulation and editing. We instead propose to discover the primitives that make up
the scene, resulting in an interpretable and actionable representation. A concurrent work
PartNeRF <cit.> introduces parts in NeRFs. However, only
synthetic scenes with a single object are studied and the discovered parts mostly correspond
to regions in the 3D space rather than interpretable geometric primitives.
§ DIFFERENTIABLE BLOCKS WORLD
Given a set of N views _1:N of a scene associated with camera poses _1:N,
our goal is to decompose the 3D scene into geometric primitives that best explain the images.
We explicitly model the scene as a set of transparent superquadric meshes, whose parameters,
texture and number are optimized to maximize photoconsistency through differentiable
rendering. Note that compared to recent advances in neural volumetric
representations <cit.>, we
do not use any neural network and directly optimize meshes, which are
straightforward to use in computer graphics pipelines.
Notations. We use bold lowercase for vectors (e.g., 𝐚), bold uppercase for
images (e.g., 𝐀), double-struck uppercase for meshes,
and write a_1:N for the ordered set {a_1,…,a_N}.
§.§ Parametrizing a World of Blocks
We propose to represent the world scene as an explicit set of textured meshes positioned in
the 3D space. <Ref> summarizes our modeling and the parameters updated (top)
during the optimization (bottom). Specifically, we model each scene as a union of primitive
meshes: (i) an icosphere modeling a background dome and centered on the scene, (ii)
a plane modeling the ground, and (iii) K primitive blocks _1:K in the
form of superquadric meshes, where K is fixed and refers to a maximum number of blocks.
Unless mentioned otherwise, we arbitrarily use K=10. We write the resulting scene mesh
∪∪_1 ∪…∪_K.
The goal of the background dome is to model things far from the cameras that can be well
approximated with a planar surface at infinity. In practice, we consider an icosphere with a
fixed location and a fixed scale that is much greater than the scene scale. On the contrary,
the goal of the planar ground and the blocks is to model the scene close to the cameras. We
thus introduce rigid transformations modeling locations that will be updated during the
optimization. Specifically, we use the 6D rotation parametrization
of <cit.> and associate to each block k a pose p_k = {r_k, t_k} ∈ ℝ^9 such that every point x ∈ ℝ^3 of the block is transformed into
world space by x_world = rot(r_k) x + t_k, where t_k ∈ ℝ^3, r_k ∈ ℝ^6 and rot maps a 6D vector to a rotation
matrix <cit.>. Similarly, we associate a rigid transformation {r_gr, t_gr} to the ground plane. We next describe how we model a variable
number of blocks via transparency values and the parametrization of the blocks' shape and texture.
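For concreteness, the rot mapping above is typically implemented as a Gram–Schmidt step on two 3-vectors; the following is a minimal NumPy sketch under that assumption, with function and variable names of our own choosing.

```python
import numpy as np

def rotation_from_6d(r6):
    """Map a 6D vector to a 3x3 rotation matrix via Gram-Schmidt."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)            # first column: normalized a1
    a2p = a2 - np.dot(b1, a2) * b1          # remove the component along b1
    b2 = a2p / np.linalg.norm(a2p)          # second column: orthonormal to b1
    b3 = np.cross(b1, b2)                   # third column: right-handed completion
    return np.stack([b1, b2, b3], axis=1)

def block_to_world(points, r6, t):
    """Apply x_world = rot(r) x + t to an (N, 3) array of block points."""
    return points @ rotation_from_6d(r6).T + t
```

The appeal of this parametrization is that it is continuous, so gradient descent over the raw 6D vector does not hit the representation singularities of Euler angles.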
Block existence through transparency. Modeling a variable number of
primitives is a difficult task as it involves optimizing over a discrete random variable.
Recent works tackle the problem using reinforcement learning <cit.>,
probabilistic approximations <cit.> or greedy
algorithms <cit.>, which often yield complex optimization
strategies. In this work, we instead propose to handle a variable number of primitive blocks by
modeling meshes that are transparent. Specifically, we associate to each block k a
learnable transparency value α_k, parametrized with a sigmoid, that can be pushed
towards zero to change the effective number of blocks. Such transparencies are not only
used in our rendering process to softly model the blocks existence and occlusions
(<Ref>), but also in regularization terms during our optimization, , to
encourage parsimony in the number of blocks used (<Ref>).
Superquadric block shape. We model blocks with
superquadric meshes. Introduced by
Barr in 1981 <cit.> and revived recently
by <cit.>,
superquadrics define a family of parametric surfaces that exhibits a strong expressiveness
with a small number of continuous parameters, thus making a good candidate for primitive
fitting by gradient descent.
More concretely, we derive a superquadric mesh from a unit icosphere. For each vertex of the
icosphere, its spherical coordinates η∈ [-π/2, π/2] and ω∈ [-π, π] are mapped to the superquadric surface through the
parametric equation <cit.>:
(η, ω) ↦ ( s_1 cos^ϵ_1η cos^ϵ_2ω,  s_2 sin^ϵ_1η,  s_3 cos^ϵ_1η sin^ϵ_2ω ),
where s = {s_1, s_2, s_3} ∈ ℝ^3 represents an anisotropic scaling
and ϵ = {ϵ_1, ϵ_2} ∈ ℝ^2 defines the shape of the superquadric.
Both s and ϵ are updated during the optimization process. Note that by design,
each vertex of the icosphere is mapped continuously to a vertex of the superquadric mesh, so
the icosphere connectivity - and thus the icosphere faces - is transferred to the
superquadric mesh.
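The vertex mapping is easy to implement; here is a small NumPy sketch, assuming the usual signed-power convention sgn(x)|x|^ϵ for the fractional exponents (all names are ours).

```python
import numpy as np

def signed_pow(x, eps):
    """Signed exponentiation sgn(x) * |x|^eps, the usual superquadric convention."""
    return np.sign(x) * np.abs(x) ** eps

def superquadric_vertices(sphere_vertices, s, eps):
    """Map unit-icosphere vertices (N, 3) to the superquadric surface.

    s:   (s1, s2, s3) anisotropic scale.
    eps: (eps1, eps2) shape parameters.
    """
    x, y, z = sphere_vertices.T
    eta = np.arcsin(np.clip(y, -1.0, 1.0))     # latitude in [-pi/2, pi/2]
    omega = np.arctan2(z, x)                   # longitude in [-pi, pi]
    c_eta = signed_pow(np.cos(eta), eps[0])
    s_eta = signed_pow(np.sin(eta), eps[0])
    c_om = signed_pow(np.cos(omega), eps[1])
    s_om = signed_pow(np.sin(omega), eps[1])
    return np.stack([s[0] * c_eta * c_om, s[1] * s_eta, s[2] * c_eta * s_om], axis=1)
```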
Texturing model. We use texture mapping to model scene appearance.
Concretely, we optimize K + 2 texture images {_, _, _1:K} which
are UV-mapped onto each mesh triangle using pre-defined UV mappings. Textures for the
background and the ground are trivially obtained using respectively spherical coordinates of
the icosphere and a simple plane projection. For a given block k, each vertex of the
superquadric mesh is associated to a vertex of the icosphere. Therefore, we can map the
texture image _k onto the superquadric by first mapping it to the icosphere using a
fixed UV map computed with spherical coordinates, then mapping the icosphere triangles to the
superquadric ones (see supplementary material for details).
§.§ Differentiable Rendering
In order to optimize our scene parameters to best explain the images, we propose to leverage
recent mesh-based differentiable renderers <cit.>. Similar to them, our differentiable
rendering corresponds to the soft rasterization of the mesh faces followed by a blending
function. In contrast to existing mesh-based differentiable renderers, we introduce the
ability to account for transparency. Intuitively, our differentiable rendering can be
interpreted as an alpha compositing of the transparent colored faces of the mesh. In the
following, we write pixel-wise multiplication with ⊙ and the division of image-sized
tensors corresponds to pixel-wise division.
Soft rasterization.
Given a 2D pixel location p, we model the influence of the face F_j projected onto the image plane with the 2D occupancy function of <cit.> that we modify to incorporate the transparency value α_k_j associated to this face. Specifically, we write the occupancy function as:
O_j(p) = α_k_j exp( min( d_j(p)/σ, 0 ) ) ,
where σ is a scalar hyperparameter modeling the extent of the soft mask of the face, and d_j(p) is the signed Euclidean distance between pixel p and the projected face F_j, such that d_j(p) < 0 if p lies outside F_j and d_j(p) ≥ 0 otherwise. We consider the faces belonging to the background and the ground to be opaque, i.e., we use a transparency of 1 for all their faces in the occupancy function.
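A minimal PyTorch sketch of this per-face occupancy is shown below; it assumes the signed pixel-to-face distances have already been produced by the rasterizer, and the tensor layout and names are illustrative rather than the authors' implementation.

```python
import torch

def face_occupancy(signed_dists, face_alphas, sigma=1e-4):
    """Soft 2D occupancy of each face at each pixel, scaled by block transparency.

    signed_dists: (H, W, L) signed distance of each pixel to the L nearest
                  projected faces (>= 0 inside a face, < 0 outside).
    face_alphas:  (H, W, L) transparency of the block each face belongs to
                  (set to 1 for background and ground faces).
    """
    return face_alphas * torch.exp(torch.clamp(signed_dists / sigma, max=0.0))
```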
Blending through alpha compositing. For each pixel, we find all projected
faces with an occupancy greater than a small threshold at this pixel location, and sort them
by increasing depth. Denoting by L the maximum number of faces per pixel, we build
image-sized tensors for occupancy O_ℓ and color C_ℓ by associating to each pixel the attributes of its ℓ-th intersecting face. The color is obtained through barycentric coordinates, using clipped barycentric coordinates for locations outside the face. Unlike most differentiable renderers, and as advocated by <cit.>, we directly interpret these tensors as an ordered set of RGBA image layers and blend them through traditional alpha compositing <cit.>:
(O_1:L, C_1:L) ↦ ∑_ℓ=1^L ( ∏_p<ℓ (1 - O_p) ) ⊙ O_ℓ ⊙ C_ℓ .
We found this simple alpha composition to behave better during optimization than the original
blending function used in <cit.>. This is notably in line with recent
advances in differentiable rendering like NeRF <cit.> which can be
interpreted as alpha compositing points along the rays.
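Concretely, the blending is plain front-to-back alpha compositing of the depth-sorted layers; a short PyTorch sketch (shapes and names ours):

```python
import torch

def alpha_composite(occupancies, colors):
    """Front-to-back alpha compositing of depth-sorted RGBA layers.

    occupancies: (L, H, W, 1) occupancy of the l-th closest face at each pixel.
    colors:      (L, H, W, 3) corresponding RGB colors.
    Returns the (H, W, 3) blended image.
    """
    one_minus = 1.0 - occupancies
    # Transmittance in front of layer l: product of (1 - O_p) over layers p < l.
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(one_minus[:1]), one_minus[:-1]], dim=0), dim=0)
    return (transmittance * occupancies * colors).sum(dim=0)
```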
§.§ Optimizing a Differentiable Blocks World
We optimize our scene parameters by minimizing a rendering loss across batches of images
using gradient descent. Specifically, for each image , we build the scene mesh as
described in <Ref> and use the associated camera pose to render an image
using the rendering process detailed in <Ref>. We optimize an objective
function defined as:
ℒ = ℒ_rend + λ_par ℒ_par + λ_TV ℒ_TV + λ_ov ℒ_ov ,
where ℒ_rend is a rendering loss between the rendered image and the input image, λ_par, λ_TV, λ_ov are
scalar hyperparameters, and ℒ_par, ℒ_TV, ℒ_ov are regularization terms respectively
encouraging parsimony in the use of primitives, favoring smoothness in the texture maps and
penalizing the overlap between primitives. Our rendering loss is composed of a pixel-wise MSE
loss and a perceptual LPIPS
loss <cit.>, such that ℒ_rend = ℒ_MSE + ℒ_perc. In all experiments, the loss weights are fixed to small constants (0.01, 0.1, 0.1 and 1). <Ref> (bottom) shows the evolution of our
renderings throughout the optimization.
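A sketch of how the terms could be assembled is given below; the individual regularizers are passed in precomputed, the LPIPS module comes from the publicly available lpips package, and the default weights are placeholders rather than the exact values used in the paper.

```python
import torch
import lpips  # pip install lpips (Zhang et al. perceptual metric)

perceptual = lpips.LPIPS(net='vgg')  # expects (B, 3, H, W) images scaled to [-1, 1]

def total_loss(rendered, target, parsimony_term, tv_term, overlap_term,
               w_par=0.01, w_tv=0.1, w_ov=0.1):
    """Rendering loss (pixel-wise MSE + perceptual LPIPS) plus weighted regularizers.

    The weight values here are placeholders; the exact value-to-term assignment
    is an assumption on our part.
    """
    mse = torch.mean((rendered - target) ** 2)
    perc = perceptual(rendered, target).mean()
    return mse + perc + w_par * parsimony_term + w_tv * tv_term + w_ov * overlap_term
```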
Encouraging parsimony and texture smoothness. We found that regularization
terms were critical to obtain meaningful results. In particular, the raw model typically
uses the maximum number of blocks available to reconstruct the scene, thus over-decomposing
the scene. To adapt the number of blocks per scene and encourage parsimony, we use the
transparency values as a proxy for the number of blocks used and penalize the loss by ℒ_par = ∑_k √(α_k) / K. We also use a total variation (TV)
penalization <cit.> on the texture maps to encourage uniform
textures. Given a texture map T of size U × V and denoting by T[u, v] ∈ ℝ^3 the RGB values of the pixel at location (u, v), we define:
TV(T) = 1/UV ∑_u, v ( ‖T[u+1, v] - T[u, v]‖_2^2 + ‖T[u, v+1] - T[u, v]‖_2^2 ) ,
and write ℒ_TV = TV(T_bg) + TV(T_gr) + ∑_k TV(T_k) as the final
penalization.
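Both regularizers are a few lines each; a minimal PyTorch sketch (names ours):

```python
import torch

def parsimony_loss(alphas):
    """Mean of sqrt(alpha_k) over the K blocks, encouraging unused blocks to vanish."""
    return torch.sqrt(alphas).mean()

def tv_loss(texture):
    """Total variation of a (3, U, V) texture map, averaged over pixels."""
    du = texture[:, 1:, :] - texture[:, :-1, :]   # differences along u
    dv = texture[:, :, 1:] - texture[:, :, :-1]   # differences along v
    return (du ** 2).sum(dim=0).mean() + (dv ** 2).sum(dim=0).mean()
```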
Penalizing overlapping blocks. We introduce a regularization term
encouraging primitives to not overlap. Because penalizing volumetric intersections of
superquadrics is difficult and computationally expensive, we instead propose to use a Monte
Carlo alternative, by sampling 3D points in the scene and penalizing points belonging to more
than λ blocks, in a fashion similar
to <cit.>. Following <cit.>, λ is set to
1.95 so that blocks could slightly overlap around their surface thus avoiding unrealistic
floating blocks. More specifically, considering a block k and a 3D point x, we
define a soft 3D occupancy function as:
o_k(x) = α_k sigmoid( (1 - Φ_k(x)) / τ ) ,
where τ is a temperature hyperparameter and Φ_k is the superquadric
inside-outside function <cit.> associated to the block k, such that
Φ_k(x) ≤ 1 if x lies inside the superquadric and Φ_k(x) > 1
otherwise. Given a set 𝒳 of M 3D points, our final regularization
term can be written as:
ℒ_ov = 1/M ∑_x ∈ 𝒳 max( ∑_k=1^K o_k(x), λ ) .
Note that in practice, for better efficiency and accuracy, we only sample points in the
region where blocks are located, which can be identified using the block poses _1:K.
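A sketch of this Monte Carlo penalty is shown below; it assumes the inside-outside values Φ_k have already been evaluated at the sampled points, follows the sigmoid-with-temperature form of the soft occupancy above, and uses names of our own choosing.

```python
import torch

def overlap_loss(inside_outside, alphas, lam=1.95, tau=0.005):
    """Monte Carlo overlap penalty over M sampled 3D points.

    inside_outside: (M, K) values of the superquadric inside-outside function
                    Phi_k at each sampled point (<= 1 inside, > 1 outside).
    alphas:         (K,) block transparencies.
    """
    occ = alphas * torch.sigmoid((1.0 - inside_outside) / tau)   # soft 3D occupancy o_k(x)
    return torch.clamp(occ.sum(dim=1), min=lam).mean()           # mean of max(sum_k o_k, lambda)
```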
Optimization details. We found that two elements were key to avoid bad
local minima during optimization. First, while transparent meshes make the number of primitives differentiable, we observed a failure mode where two semi-opaque meshes model the same 3D region. To prevent this behavior, we propose to inject
Gaussian noise before the sigmoid in the transparencies α_1:K to create
stochasticity when values are not close to the sigmoid saturation, and thus encourage values
that are close to binary. Second, another failure mode we observed is one where the planar
ground is modeling the entire scene. We avoid this by leveraging a two-stage curriculum
learning scheme, where texture maps are downscaled by 8 during the first stage. We
empirically validate these two contributions in <Ref>. Optimizing our model on
a scene takes 4 hours on a single NVIDIA RTX 2080 Ti GPU. We provide all other
implementation details in our supplementary material.
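The transparency trick itself is a handful of lines; a sketch is given below, where the noise scale is a placeholder since only its placement (before the sigmoid) is specified.

```python
import torch

class BlockTransparencies(torch.nn.Module):
    """Transparencies alpha_1:K parametrized by a sigmoid, with training-time noise."""

    def __init__(self, num_blocks, noise_std=0.1):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_blocks))  # sigmoid(0) = 0.5
        self.noise_std = noise_std

    def forward(self):
        logits = self.logits
        if self.training:
            # Gaussian noise before the sigmoid makes non-saturated values unstable,
            # pushing transparencies toward 0 or 1.
            logits = logits + self.noise_std * torch.randn_like(logits)
        return torch.sigmoid(logits)
```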
§ EXPERIMENTS
§.§ DTU Benchmark
Benchmark details. DTU <cit.> is an MVS dataset containing
80 forward-facing scenes captured in a controlled indoor setting, where the 3D ground-truth
points are obtained through a structured light scanner. We evaluate on 10 scenes
(, , , , , ,
, , , ) that have different geometries and a
3D decomposition that is relatively intuitive. We use standard processing
practices <cit.>, resize the images to 400 × 300 and run our
model with K = 10 on all available views for each scene (49 or 64 depending on the scenes).
We use the official evaluation presented in <cit.>, which computes the
Chamfer distance between the ground-truth points and points sampled from the 3D
reconstruction, filtered out if not in the neighborhood of the ground-truth points. We use
the official implementations to apply EMS <cit.> and MonteboxFinder
(MBF) <cit.> on the ground-truth point clouds and follow the
papers' recommendations (normalization, default hyperparameters).
Results. We compare our Chamfer distance performances to
state-of-the-art 3D decomposition methods in <Ref>. For each method, we report the
input used and highlight the average number of discovered primitives #P in green (smaller
than 10) or red (larger than 10). Intuitively, overly large numbers of primitives lead to
less intuitive scene representations that are harder to manipulate. Our performances correspond to a
single random run (random) and a run automatically selected among 5 runs using the minimal
rendering loss (auto). We augment concurrent methods with a preprocessing step (+ ps) using
RANSAC to remove the planar ground from the 3D input. Overall, we obtain results that are
much more satisfactory than prior works. On the one hand, EMS outputs a reasonable number of
primitives but has a high Chamfer distance reflecting bad 3D reconstructions. On the other
hand, MBF yields a lower Chamfer distance (even better than ours with the preprocessing step)
but it outputs a significantly higher number of primitives, thus reflecting
over-decompositions.
=-1 Our approach is qualitatively compared to EMS and MBF (augmented with the
preprocessing step) in <Ref>. Because the point clouds are noisy and
incomplete (see 360^∘ renderings in our supplementary material), EMS and MBF struggle
to find reasonable 3D decompositions: EMS misses some important parts, while MBF
over-decomposes the 3D into piecewise planar surfaces. On the contrary, our model is able to
output meaningful 3D decompositions with varying numbers of primitives and very different
shapes. Besides, ours is the only approach that recovers the scene appearance (last column).
Also note that it produces a complete 3D scene, despite being only optimized on
forward-facing views.
§.§ Real-Life Data and Applications
We present qualitative results on real-life captures in <Ref>. The first row
corresponds to the Campanile scene from Nerfstudio
repository <cit.> and the last four rows correspond to BlendedMVS
scenes <cit.> that were selected in <cit.>. We adapt their
camera conventions to ours and resize the images to roughly 400 × 300. From left to
right, we show a subset of the input views, a rendering overlaid with the primitive edges,
the primitives, as well as two novel view synthesis results. For each scene, we run our model
5 times and automatically select the results with the minimal rendering loss. We set the
maximum number of primitives to K = 10, except the last row where it is increased to K =
50 due to the scene complexity. These results show that despite its simplicity, our approach
is surprisingly robust. Our method is still able to compute 3D decompositions that capture
both appearances and meaningful geometry on a variety of scene types. In addition, increasing
the maximum number of primitives K allows us to easily adapt the decomposition granularity
(last row).
In <Ref>, we demonstrate other advantages of our approach.
First, compared to NeRF-based approaches like Nerfacto <cit.> which only
reconstruct visible regions, our method performs amodal scene completion (first row). Second,
such a textured decomposition allows to easily edit the 3D scene (second row). Finally, our
primitive meshes enable straightforward physics-based simulations (bottom).
§.§ Analysis
Ablation study. We report metrics averaged over five runs: number of primitives found (#P), Chamfer Distance (CD) and image rendering metrics (PSNR in dB, SSIM and LPIPS in %). Best and second best are highlighted; #P variability is emphasized in green (smaller than 5) and red (larger than 5).
Method | #P↓ | CD↓ | PSNR↑ | SSIM↑ | LPIPS↓
Complete model | 4.60 | 3.63 | 20.5 | 73.5 | 23.9
w/o parsimony term | 8.86 | 3.65 | 20.6 | 73.7 | 23.2
w/o | 4.38 | 3.80 | 20.4 | 73.2 | 24.1
w/o curriculum | 4.66 | 3.99 | 20.4 | 72.7 | 24.5
w/o α_1:K noise | 3.60 | 4.13 | 20.0 | 72.0 | 25.6
w/o | 4.04 | 4.58 | 19.7 | 70.8 | 26.5
w/o | 3.22 | 4.80 | 19.7 | 72.7 | 40.0
Ablation study on DTU <cit.>.
In <Ref>, we assess our model's key components by removing one component at a
time and computing the performance averaged over the 10 DTU scenes. We report the final
number of primitives, Chamfer distance and rendering metrics. We highlight the varying
number of primitives in green (smaller than 5) and red (larger than 5). Results are averaged
over five runs (with standard deviations in the supplementary material). Overall, each
component except the parsimony term consistently improves the quality of the 3D reconstruction and the
renderings. The parsimony term successfully limits the number of primitives (and thus, primitive
duplication and over-decomposition) at a very small quality cost.
Limitations. Despite its simplicity and robustness, our
approach
has some limitations. First, our optimization is still prone to bad local minima. Although
our automatic selection among several runs is effective, introducing data-driven priors to
overcome such local minima would be an interesting future direction.
Second, our texturing model does not adapt to the scene geometry thus yielding efficiency and
resolution issues. Indeed, large regions in the renderings do not necessarily correspond to
large regions in the texture images. Finally, our approach does not model lighting or dynamic
objects.
§ CONCLUSION
We present an end-to-end approach that successfully computes a primitive-based
3D reconstruction given a set of calibrated images. We show its applicability and robustness
through various benchmarks, where our approach obtains better performances than methods
leveraging 3D data. We believe our work could be an important step towards more
interpretable multi-view modeling.
We thank Cyrus Vachha for help on the physics-based simulations; Antoine Guédon, Romain
Loiseau for visualization insights; François Darmon, Romain Loiseau, Elliot Vincent for
manuscript feedback. This work was supported in part by the European Research Council (ERC
project DISCOVER, number 101076028), ANR project EnHerit ANR-17-CE23-0008, gifts from Adobe
and HPC resources from GENCI-IDRIS (2022-AD011011697R2, 2022-AD011013538).
Supplementary Material for
Differentiable Blocks World:
Qualitative 3D Decomposition by Rendering
Primitives
In this supplementary document, we provide additional results (<Ref>),
details on the DTU benchmark (<Ref>) as well as implementation
details (<Ref>), including design and optimization choices.
§ ADDITIONAL RESULTS
Videos for view synthesis, physical simulations and amodal completion.
We present additional results in the form of videos at our project webpage:
https://www.tmonnier.com/DBW. Videos are separated in
different sections depending on the experiment type. First, we provide view synthesis videos
(rendered using a circular camera path), further outlining the quality of both our renderings
and our primitive-based 3D reconstruction. Second, we include videos for physics-based
simulations. Such simulations were produced through Blender by simply uploading our output
primitive meshes. Note that for modeling primitive-specific motions in Blender (e.g., in our
teaser figure), primitives should not overlap at all, thus requiring a small preprocessing
step to slightly move the primitives for a clear separation. Because each primitive is its
own mesh, this operation is easily performed within Blender. Finally, we provide video
results where we perform scene editing and compare our amodal view synthesis results to the ones
of Nerfacto introduced in Nerfstudio <cit.>. Models for amodal synthesis
are optimized on a homemade indoor scene built from a forward-facing capture only. We use
Nerfstudio for data processing and data convention.
Detailed ablation results. In <Ref>, we provide our
ablation results averaged over 5 runs with standard deviations. In particular, this
emphasizes that the reconstruction and rendering performances between our complete model and
a model without the parsimony term are not significantly different, although the latter uses twice
as many primitives.
§ DTU BENCHMARK
In <Ref>, we show for each scene a subset of the input images as well as
360^∘ renderings of the GT point clouds obtained through a structured light scanner. To
compute performances, we use a Python version of the official evaluation:
https://github.com/jzhangbs/DTUeval-python.
§ IMPLEMENTATION DETAILS
Icosphere and superquadric UV mapping. We use spherical coordinates that we
correct to build our texture mapping for the unit icosphere. <Ref> shows our
process with an example. Specifically, we retrieve for each vertex its spherical coordinates
η∈ [-π/2, π/2] and ω∈ [-π, π] which are linearly
mapped to the UV space [0, 1]^2. Because such parametrization presents discontinuities and
strong triangle deformations at the poles, we perform two corrections. First, we fix
discontinuities by copying the border pixels involved (using a circular padding on the
texture image) and introducing new 2D vertices such that triangles do not overlap anymore.
Second, we avoid distorted triangles at the poles by creating for each triangle, a new 2D
vertex positioned in the middle of the other two vertices. As detailed in the main paper, we
derive a superquadric mesh from a unit icosphere in such a way that each vertex of the
icosphere is continuously mapped to the superquadric vertex. As a result, the texture mapping
defined for the icosphere is directly transferred to our superquadric meshes without any
modification.
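The basic UV assignment (before the seam and pole corrections described above) boils down to rescaled spherical coordinates; a minimal NumPy sketch with names of our own:

```python
import numpy as np

def icosphere_uv(vertices):
    """Map unit-icosphere vertices (N, 3) to UV coordinates in [0, 1]^2.

    Latitude eta in [-pi/2, pi/2] and longitude omega in [-pi, pi] are each
    linearly rescaled to [0, 1]; seam duplication and pole fixes come afterwards.
    """
    x, y, z = vertices.T
    eta = np.arcsin(np.clip(y, -1.0, 1.0))
    omega = np.arctan2(z, x)
    u = (omega + np.pi) / (2.0 * np.pi)
    v = (eta + 0.5 * np.pi) / np.pi
    return np.stack([u, v], axis=1)
```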
Design choices. Except constants related to the world scale, orientation and
position in the 3D space to the known cameras, all our experiments share the same design
choices. Specifically, all the following design choices are defined for a canonical 3D scene
assumed to be centered and mostly contained in the unit cube, with a y-axis orthogonal to the
ground and pointing towards the sky. We roughly estimate the scene-specific constants related
to the world scale and pose (through coarse visual comparisons or using the camera
locations), and apply them to our final scene model to account for the camera conventions.
The background corresponds to a level-2 icosphere (320 faces), the ground plane is subdivided
into 128 uniform faces (for visual purposes) and superquadric meshes are derived from level-1
icospheres (80 faces). The scale for the background and the ground is set to 10. The ground
is initialized perpendicular to the y-axis and positioned at [0, -0.9, 0]. The poses of our
primitive blocks are initialized using a Gaussian distribution for the 3D translation and a
random 6D vector for the rotation such that rotations are uniformly distributed on the unit
sphere. We parametrize their scale with an exponential added to a minimum scale value of 0.2
to prevent primitives from becoming too small. These scales are initialized with a uniform
distribution in [0.5, 1.5] and multiplied by a constant block scale ratio of 0.25 to yield
primitives smaller than the scene scale. The superquadric shape parameters are implemented
with a sigmoid linearly mapped in [0.1, 1.9] and are initialized at 1 (thus corresponding
to a raw icosphere). Transparency values are parametrized with a sigmoid and initialized at
0.5. All texture images have a size of 256 × 256, are parametrized using a sigmoid
and are initialized with small Gaussian noises added to gray images.
Optimization details. All our experiments share the same optimization details.
We use Pytorch3D framework <cit.> to build our custom
differentiable rendering process and use the default hyperparameter σ = 10^-4. Our
model is optimized using Adam <cit.> with a batch size of 4 for roughly a
total of 25k iterations. We use learning rates of 0.05 for the texture images and 0.005 for
all other parameters, and divide them by 10 for the last 2k iterations. Following our
curriculum learning process, we optimize the model for the first 10k iterations by
downsampling all texture images by 8. Then, we optimize using the full texture resolution
during the next 10k iterations. Finally, to further increase the rendering quality, we
threshold the transparency values at 0.5 to make them binary, remove regularization terms
related to transparencies (the parsimony and overlap terms), divide the weights of the remaining
regularization terms by 10, decrease the smoothness rendering parameter σ to
5×10^-6 and finetune our model for the final 5k iterations. In particular, this
allows the model to output textures that are not darkened by non-binary transparencies. During
the optimization, we systematically kill blocks reaching a transparency lower than 0.01 and
at inference, we only show blocks with a transparency greater than 0.5. Similar
to <cit.>, we use λ = 1.95 and τ = 0.005 in our overlap
penalization. |
http://arxiv.org/abs/2307.07583v1 | 20230714191314 | On Diameter Approximation in Directed Graphs | [
"Amir Abboud",
"Mina Dalirrooyfard",
"Ray Li",
"Virginia Vassilevska-Williams"
] | cs.DS | [
"cs.DS",
"cs.CC"
] |
On Diameter Approximation in Directed Graphs
Amir AbboudWeizmann Institute of Science, <[email protected]>. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (grant agreement No 101078482). Additionally, Amir Abboud is supported by an Alon scholarship and a research grant from the Center for New Scientists at the Weizmann Institute of Science. ,
Mina DalirrooyfardMassachusetts Institute of Technology, <[email protected]>. Partially supported by an Akamai Fellowship. ,
Ray LiUC Berkeley, <[email protected]>. Supported by the NSF Mathematical Sciences Postdoctoral Research Fellowships Program under Grant DMS-2203067, and a UC Berkeley Initiative for Computational Transformation award. ,
Virginia Vassilevska-WilliamsMassachusetts Institute of Technology, <[email protected]>. Partially supported by the National Science Foundation Grant CCF-2129139.
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Computing the diameter of a graph, i.e. the largest distance, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds.
In directed graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since d(u,v) may not be the same as d(v,u), there are multiple ways to define the problem, the two most natural being the (one-way) diameter (max_(u,v) d(u,v)) and the roundtrip diameter (max_u,v d(u,v)+d(v,u)). In this paper we make progress on the outstanding open question for each of them.
* We design the first algorithm for diameter in sparse directed graphs to achieve n^1.5-ε time with an approximation factor better than 2. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication.
* We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a 1.5-approximation in subquadratic time would refute the All-Nodes k-Cycle hypothesis, and any (2-ε)-approximation would imply a breakthrough algorithm for approximate ℓ_∞-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
§ INTRODUCTION
The diameter of the graph is the largest shortest paths distance. A very well-studied parameter with many practical applications (e.g. <cit.>), its computation and approximation are also among the most interesting problems in Fine-Grained Complexity (FGC).
Much effort has gone into understanding the approximation vs. running time tradeoff for this problem (see the survey <cit.> and the progress after it <cit.>).
Throughout this introduction we will consider n-vertex and m-edge graphs that, for simplicity, are unweighted and sparse with m=n^1+o(1) edges[Notably, however, our algorithmic results hold for general graphs, and our hardness results hold even for very sparse graphs.].
The diameter is easily computable in Õ(mn)=n^2+o(1) time[The notation Õ(f(n)) denotes O(f(n)log(f(n))).] by computing All-Pairs Shortest Paths (APSP). One of the first and simplest results in FGC <cit.> is that any O(n^2-ε) time algorithm for ε>0 for the exact computation of the diameter would refute the well-established Strong Exponential Time Hypothesis (SETH) <cit.>.
Substantial progress has been achieved in the last several years <cit.>, culminating in an approximation/running time lower bound tradeoff based on SETH, showing that even for undirected sparse graphs, for every k≥ 2, there is no (2-1/k-δ)-approximation algorithm running in Õ(n^1+1/(k-1)-ε) time for some δ,ε>0.
In terms of upper bounds, the following three algorithms work for both undirected and directed graphs:
* compute APSP and take the maximum distance, giving an exact answer in Õ(n^2) time,
* compute single-source shortest paths from/to an arbitrary node and return the largest distance found, giving a 2-approximation in Õ(n) time, and
* an algorithm by <cit.> giving a 3/2-approximation in Õ(n^1.5) time.
For undirected graphs, there are some additional algorithms, given by Cairo, Grossi and Rizzi <cit.> that qualitatively (but not quantitatively) match the tradeoff suggested by the lower bounds: for every k≥ 1 they obtain an Õ(n^1+1/(k+1)) time, almost-(2-1/2^k) approximation algorithm, meaning that there is also a small constant additive error.
The upper and lower bound tradeoffs for undirected graphs are depicted in Figure <ref> ; a gap remains (depicted as white space) because the two trade-offs have different rates.
In directed graphs, however, the gap is significantly larger because an upper bound trade-off is missing (the lower bound tradeoff follows immediately because it is a harder problem).
One could envision, for instance, that the conditional lower bounds for directed diameter could be strengthened to show that if one wants a (2-ε)-approximation algorithm, then it must take at least n^1.5-o(1) time.
Since the work of <cit.>, the main open question (also asked by <cit.>) for diameter algorithms in directed graphs has been:
Why are there only three approximation algorithms for directed diameter, but undirected diameter has an infinite approximation scheme? Is directed diameter truly harder, or can one devise further approximation algorithms for it?
*Directed is Closer to Undirected.
Our first result is that one can devise algorithms for directed diameter with truly faster running times than n^1.5, and approximation ratios between 3/2 and 2.
It turns out that the directed case has an upper bound tradeoff as well, albeit with a worse rate than in the undirected case.
Conceptually, this brings undirected and directed diameter closer together. See Figure <ref> for our new algorithms.
Let k=2^t+2 for a nonnegative integer t≥ 0.
For every ε>0 (possibly depending on m), there exists a randomized (2-1/k+ε)-approximation algorithm for the diameter of a directed weighted graph in time Õ(m^1+α/ε), for
α = ( 2(2/(ω-1))^t - (ω-1)^2/2 ) / ( (2/(ω-1))^t (7-ω) - (ω^2-1)/2 ).
The constant 2 ≤ω< 2.37286 in the theorem refers to the fast matrix multiplication exponent <cit.>. A surprising feature of our algorithms is that we utilize fast matrix multiplication techniques to obtain faster algorithms for a problem in sparse graphs. Prior work on shortest paths has often used fast matrix multiplication
to speed-up computations, but to our knowledge, all of this work is for dense graphs (e.g. <cit.>).
Breaking the n^1.5 bound with a combinatorial algorithm is left as an open problem.
*Roundtrip is Harder.
One unsatisfactory property of the shortest paths distance measure in directed graphs is that it is not symmetric (d(u,v)≠ d(v,u)) and is hence not a metric. Another popular distance measure used in directed graphs that is a metric is the roundtrip measure. Here the roundtrip distance d̃(u,v) between vertices u,v is d(u,v)+d(v,u).
Roundtrip distances were first studied in the distributed computing community in the 1990s <cit.>. In recent years, powerful techniques were developed to handle the fast computation of sparse roundtrip spanners, and approximations of the minimum roundtrip distance, i.e. the shortest cycle length, the girth, of a directed graph <cit.>. These techniques give hope for new algorithms for the maximum roundtrip distance, the roundtrip diameter of a directed graph.
Only the first two algorithms in the list in the beginning of the introduction work for roundtrip diameter: compute an exact answer by computing APSP, and a linear time 2-approximation that runs SSSP from/to an arbitrary node. These two algorithms work for any distance metric, and surprisingly there have been no other algorithms developed for roundtrip diameter. The only fine-grained lower bounds for the problem are the ones that follow from the known lower bounds for diameter in undirected graphs, and these cannot explain why there are no known subquadratic time algorithms that achieve a better than 2-approximation.
Are there O(n^2-ε) time algorithms for roundtrip diameter in sparse graphs that achieve a (2-δ)-approximation for constants ε,δ>0?
This question was considered e.g. by <cit.> who were able to obtain a hardness result for the related roundtrip radius problem, showing that under a popular hypothesis, such an algorithm for roundtrip radius does not exist.
One of the main questions studied at the “Fine-Grained Approximation Algorithms and Complexity Workshop" at Bertinoro in 2019 was to obtain new algorithms or hardness results for roundtrip diameter. Unfortunately, however, no significant progress was made, on either front.
The main approach to obtaining hardness for roundtrip diameter, was to start from the Orthogonal Vectors (OV) problem and reduce it to a gap version of roundtrip diameter, similar to all known reductions to (other kinds of) diameter approximation hardness.
Unfortunately, it has been difficult to obtain a reduction from OV to roundtrip diameter that has a larger gap than that for undirected diameter; in Section <ref> we give some intuition for why this is the case.
In this paper we circumvent the difficulty by giving stronger hardness results for roundtrip diameter starting from different problems and hardness hypotheses. We find this intriguing because all previous conditional lower bounds for (all variants of) the diameter problem were based on SETH. In particular, it gives a new approach for resolving the remaining gaps in the undirected case, where higher SETH-based lower bounds are provably impossible (under the so-called NSETH) <cit.>.
Our first negative result conditionally proves that any (5/3-ε)-approximation for roundtrip diameter requires n^2-o(1) time, separating it from the undirected and the directed one-way cases where a 1.5-approximation in Õ(n^1.5) time is possible.
This result is based on a reduction from the so-called All-Nodes k-Cycle problem.
Given a k-partite directed graph G=(V,E), V=V_1 ∪⋯∪ V_k, whose edges go only between “adjacent” parts, i.e., E ⊆⋃_i=1^k V_i × V_(i+1 mod k), decide if all nodes v ∈ V_1 are contained in a k-cycle in G.
This problem can be solved for all k in time O(nm), e.g. by running an APSP algorithm, and in subquadratic O(m^2-1/k) for any fixed k <cit.>. Breaking the quadratic barrier for super-constant k has been a longstanding open question; we hypothesize that it is impossible.
No algorithm can solve the All-Nodes k-Cycle problem in sparse directed graphs for all
k ≥ 3 in O(n^2-δ) time, with δ>0.
Similar hypotheses have been used in recent works <cit.>. The main difference is that we require all nodes in V_1 to be in cycles; such variants of hardness assumptions that are obtained by changing a quantifier in the definition of the problem are popular, see e.g. <cit.>.
Under Hypothesis <ref>, for all ε,δ>0, no algorithm can 5/3-ε approximate the roundtrip diameter of a sparse directed unweighted graph in O(n^2-δ) time.
We are thus left with a gap between the linear time factor-2 upper bound and the subquadratic factor-5/3 lower bound.
A related problem with a similar situation is the problem of computing the eccentricity of all nodes in an undirected graph <cit.>; there, 5/3 is the right number because one can indeed compute a 5/3-approximation in subquadratic time <cit.>.
Could it be the same here?
Alas, our final result is a reduction from the following classical problem in geometry to roundtrip diameter, establishing a barrier for any better-than-2 approximation in subquadratic time.
Let α>1. The α-approximate ℓ_∞ Closest-Pair (CP) problem is, given n vectors v_1,…,v_n ∈ ℝ^d of some dimension d, to determine if there exist v_i and v_j with ‖v_i-v_j‖_∞≤ 1, or if for all v_i and v_j, ‖v_i-v_j‖_∞≥α.
Closest-pair problems are well-studied in various metrics; the main question is whether the naive n^2 bound can be broken (when d is assumed to be n^o(1)). For ℓ_∞ specifically, a simple reduction from OV proves a quadratic lower bound for (2-ε)-approximations <cit.>; but going beyond this factor with current reduction techniques runs into a well-known “triangle-inequality” barrier (see <cit.>). This leaves a huge gap from the upper bounds that can only achieve O(log log n) approximations in subquadratic time <cit.>.
Cell-probe lower bounds for the related nearest-neighbors problem suggest that this log-log bound may be optimal <cit.>; if indeed constant approximations are impossible in subquadratic time then the following theorem implies a tight lower bound for roundtrip diameter.
If for some α≥ 2 and ε>0 there is a (2-1/α-ε)-approximation algorithm in time O(m^2-ε) for roundtrip diameter in unweighted graphs, then for some δ>0 there is an α-approximation for ℓ_∞-Closest-Pair with vectors of dimension d≤ n^1-δ in time Õ(n^2-δ).
In particular, a (2-ε)-approximation for roundtrip diameter in subquadratic time implies an α-approximation for the ℓ_∞-Closest-Pair problem in subquadratic time, for some α=O(1/ε).
Thus, any further progress on the roundtrip diameter problem requires a breakthrough on one of the most basic algorithmic questions regarding the ℓ_∞ metric (see Figure <ref>).
§.§ Related Work
Besides the diameter and the roundtrip diameter, there is another natural version of the diameter problem in directed graphs called Min-Diameter <cit.>.
The distance between u,v is defined as the min(d(u,v),d(v,u)).[Note that the Max-Diameter version where we take the max rather than the min is equal to the one-way version.]
This problem seems to be even harder than roundtrip because even a 2-approximation in subquadratic time is not known.
The fine-grained complexity results on diameter (in the sequential setting) have had interesting consequences for computing the diameter in distributed settings (specifically in the CONGEST model).
Techniques from both the approximation algorithms and from the hardness reductions have been utilized, see e.g. <cit.>.
It would be interesting to explore the consequences of our techniques on the intriguing gaps in that context <cit.>.
§.§ Organization
In the main body of the paper, we highlight the key ideas in our main results (Theorem <ref> and Theorem <ref>) by proving an “easy version” of each theorem, and in the appendices, we establish the full theorems.
First, we establish some preliminaries in Section <ref>.
In Section <ref>, we prove the special case of Theorem <ref> when t=0, giving a 7/4-approximation of the diameter in directed unweighted graphs in time O(m^1.458).
In Section <ref> of the appendix, we generalize this proof to all parameters t≥ 0 and to weighted graphs.
In Section <ref> we give an overview of the hardness reductions in this paper.
In Section <ref>, we prove a weakening of Theorem <ref> that only holds for weighted graphs.
Later, in Section <ref> of the appendix, we extend this lower bound to unweighted graphs.
In Section <ref>, we prove a weakening of Theorem <ref>: under Hypothesis <ref>, there is no 5/3-ε approximation of roundtrip diameter in weighted graphs.
Later, in Section <ref>, we extend this lower bound to unweighted graphs.
§ PRELIMINARIES
All logs are base e unless otherwise specified.
For reals a≥ 0, let [± a] denote the real interval [-a,a].
For a boolean statement φ, let [φ] be 1 if φ is true and 0 otherwise.
For a vertex v in a graph, let deg(v) denote its degree.
For r≥ 0, let B_r^in(v) = {u: d(u,v) ≤ r} be the in-ball of radius r around v, and let B_r^out(v) = {u: d(v,u) ≤ r} be the out-ball of radius r around v.
For r≥ 0, let B_r^in+(v) be B_r^in(v) and their in-neighbors, and let B_r^out+(v) be B_r^out(v) and their out-neighbors.
Throughout, let ω≤ 2.3728596 denote the matrix multiplication constant.
We use the following lemma which says that we can multiply sparse matrices quickly.
We can multiply an a× b and a b× a matrix, each with at most ac nonzero entries, in time O(ac· a^{(ω-1)/2}).[In <cit.>, this runtime of O(ac· a^{(ω-1)/2}) is stated only for the case ac>a^{(ω+1)/2}. However, the runtime bound for this case works for other cases as well, so the lemma is correct for all matrices.]
We repeatedly use the following standard fact.
Given two sets B⊂ V with B of size k and V of size 2m, a set of 4(m/k)log m uniformly random elements of V contains an element of B with probability at least 1-1/m^2.
The probability that B is not hit is (1-k/2m)^4m/klog m≤ e^-2log m = 1/m^2.
§ 7/4-APPROXIMATION OF DIRECTED (ONE-WAY) DIAMETER
In this section, we prove Theorem <ref> in the special case of t=0 and unweighted graphs.
That is, we give a 7/4-approximation of the (one-way) diameter of a directed unweighted graph in O(m^1.4575) time.
For the rest of this section, let α=(ω+1)/(ω+5)≤ 0.4575.
Before stating the algorithm and proof, we highlight how our algorithm differs from the undirected algorithm of <cit.>. At a very high level, all known diameter approximation algorithms compute some pairs of distances, and use the triangle inequality to infer other distances, saving runtime. Approximating diameter in directed graphs is harder than in undirected graphs because distances are not symmetric, so we can only use the triangle inequality “one way." For example, we always have d(x,y)+d(y,z) ≥ d(x,z), but not necessarily d(x,y)+d(z,y) ≥ d(x,z). The undirected algorithm <cit.> crucially uses the triangle inequality “both ways," so it was not clear whether their algorithm could be adapted to the directed case. We get around this barrier using matrix multiplication together with the triangle inequality to infer distances quickly. We consider the use of matrix multiplication particularly interesting because, previously, matrix multiplication had only been used for diameter in dense graphs, but we leverage it in sparse graphs.
Let α=(ω+1)/(ω+5). There exists a randomized 7/4-approximation algorithm for the diameter of an unweighted directed graph running in Õ(m^1+α) time.
It suffices to show that, for any positive integer D > 0, there exists an algorithm 𝒜_D running in time Õ(m^1+α) that takes as input any graph and accepts if the diameter is at least D, rejects if the diameter is less than 4D/7, and returns arbitrarily otherwise.
Then, we can find the diameter up to a factor of 7/4 by running binary search with 𝒜_D,[
We have to be careful not to lose a small additive factor. Here are the details: Let D^* be the true diameter. Initialize hi = n, lo = 0. Repeat until hi-lo=1: let mid=(hi+lo)/2, run 𝒜_mid, if accept, set lo=mid, else hi=mid. One can check that hi ≥ D^*+1 and lo ≤ 7D^*/4 always hold. If we return lo after the loop breaks, the output is always in [D^*, 7D^*/4].]
which at most adds a factor of O(log n).
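To make the wrapper concrete, here is a minimal Python sketch of the binary search described in the footnote; the predicate accepts_at_least is a stand-in for the gap algorithm 𝒜_D and is only assumed to accept when the diameter is at least D and to reject when it is below 4D/7.

def approximate_diameter(n, accepts_at_least):
    # Binary search around the gap algorithm A_D (here the placeholder
    # predicate accepts_at_least).  As in the footnote, the invariants
    # hi >= D* + 1 and lo <= 7*D*/4 are maintained, so the returned value
    # always lies in [D*, 7*D*/4] for the true diameter D*.
    hi, lo = n, 0
    while hi - lo > 1:
        mid = (hi + lo) // 2
        if accepts_at_least(mid):
            lo = mid
        else:
            hi = mid
    return lo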
We now describe the algorithm 𝒜_D.
The last two steps, illustrated in Figure <ref> contain the key new ideas.
* First, we apply a standard trick that replaces the input graph on n vertices and m edges with a 2m-vertex graph of max-degree-3 that preserves the diameter: replace each vertex v with a deg(v)-vertex cycle of weight-0 edges and where the edges to v now connect to distinct vertices of the cycle (a short code sketch of this gadget is given after the step list). From now on, we work with this max-degree-3 graph on 2m vertices.
* Sample 4m^αlog m uniformly random vertices and compute each vertex's in- and out-eccentricity. If any such vertex has (in- or out-) eccentricity at least 4D/7 Accept.
* For every vertex v, determine if |B_D/7^out(v)|≤ m^α. If such a vertex v exists, determine if any vertex in B_D/7^out+(v) has eccentricity at least 4D/7, and Accept if so.
* For every vertex v, determine if |B_D/7^in(v)|≤ m^α. If such a vertex v exists, determine if any vertex in B_D/7^in+(v) has eccentricity at least 4D/7, and Accept if so.
* Sample 4m^1-αlog m uniformly random vertices Ŝ. Let S^out={s∈Ŝ: |B^out_2D/7(s)|≤ m^1-α} and S^in={s∈Ŝ: |B^in_2D/7(s)|≤ m^1-α}.
Compute B^out_2D/7(s) and B^out+_2D/7(s) for s∈ S^out, and B^in_2D/7(s) and B^in+_2D/7(s) for s∈ S^in.
* Let A^out∈ℝ^S^out× V be the |S^out|× n matrix where A^out_s,v=[v∈ B^out_2D/7(s)]. Let A^in∈ℝ^V× S^in be the n× |S^in| matrix where A^in_v,s=[v∈ B^in_2D/7(s)] if ⌊4D/7⌋=2⌊2D/7⌋ and A^in_v,s=[v∈ B^in+_2D/7(s)] otherwise. Compute A^out· A^in∈ℝ^S^out× S^in using sparse matrix multiplication. If the product has any zero entries, Accept, otherwise Reject.
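The degree-reduction trick of the first step is purely syntactic; the following Python sketch (an illustration under the assumption that the graph is given as a directed edge list, not the paper's own code) shows one way to realize it.

from collections import defaultdict

def reduce_to_max_degree_3(n, edges):
    # Replace vertex v by a directed weight-0 cycle on deg(v) new vertices and
    # re-attach every original edge to a distinct cycle vertex, so each new
    # vertex touches at most 3 edges while all distances are preserved.
    deg = defaultdict(int)
    for u, v, _ in edges:
        deg[u] += 1
        deg[v] += 1
    slot, next_id = {}, 0
    for v in range(n):
        for i in range(max(deg[v], 1)):
            slot[(v, i)] = next_id
            next_id += 1
    new_edges = []
    for v in range(n):
        k = max(deg[v], 1)
        if k > 1:  # weight-0 cycle inside v's gadget
            for i in range(k):
                new_edges.append((slot[(v, i)], slot[(v, (i + 1) % k)], 0))
    used = defaultdict(int)
    for u, v, w in edges:  # re-attach each original edge to fresh cycle slots
        a = slot[(u, used[u])]; used[u] += 1
        b = slot[(v, used[v])]; used[v] += 1
        new_edges.append((a, b, w))
    return next_id, new_edges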
*Runtime.
Computing a single eccentricity takes time O(m), so Step <ref> takes time Õ(m^1+α).
For Step <ref> checking if |B^out_D/7(v)|≤ m^α takes O(m^α) time for each v via a partial Breadth-First-Search (BFS). Here we use that the max-degree is 3.
If |B^out_D/7(v)|≤ m^α, there are at most 3m^α eccentricity computations which takes time O(m^1+α).
Step <ref> takes time O(m^1+α) for the same reason.
Similarly, we can complete Step <ref> by running partial BFS for each s∈Ŝ until m^1-α vertices are visited.
This gives S^out and S^in and also gives B^out_2D/7(s) and B^out+_2D/7(s) for s∈ S^out and B^in_2D/7(s) and B^in+_2D/7(s) for s∈ S^in.
For Step <ref>, the runtime is the time to multiplying sparse matrices. Matrix A^out has at most |Ŝ|≤ 4m^1-αlog m rows each with at most max_s∈ S^out|B^out_2D/7(s)|≤ m^1-α entries, and similarly A^in has at most 4m^1-αlog m columns each with at most max_s∈ S^in|B^in+_2D/7(s)| ≤ 3m^1-α entries.
The sparse matrix multiplication takes time Õ(m^{2-2α}· m^{(1-α)(ω-1)/2})=Õ(m^{1+α}) by Lemma <ref> with a=m^{1-α}, b=n, c=m^{1-α}.
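The capped ball computations used in the three middle steps are equally simple; a minimal sketch (assuming a plain unweighted adjacency-list representation; the weight-0 gadget edges can be handled analogously, e.g. by a 0-1 BFS) is the following.

from collections import deque

def capped_out_ball(adj, source, radius, cap):
    # Partial BFS: explore B^out_radius(source) but give up as soon as more
    # than `cap` vertices have been discovered.  Returns {vertex: distance}
    # for the ball, or None if its size exceeds cap.
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if len(dist) > cap:
                    return None
                frontier.append(v)
    return dist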
*If the Diameter is less than 4D/7, we always reject.
Clearly every vertex has eccentricity less than 4D/7, so we indeed do not accept at Steps <ref>, <ref>, and <ref>.
In Step <ref>, we claim for every s∈ S^out, s'∈ S^in there exists v such that A_s,v^out=A_v,s'^in=1, so that (A^out· A^in)_s,s'≥ 1 for all s∈ S^out and s'∈ S^in and thus we reject.
Fix s∈ S^out and s'∈ S^in. By the diameter bound, d(s,s') ≤4D/7.
Let v be the last vertex on the s-to-s' shortest path such that d(s,v)≤2D/7, and, if it exists, let v' be the vertex after v.
Clearly A^out_s,v = 1. We show A^in_v,s'=1 as well.
If v=s', then clearly v∈ B^in_2D/7(s') so A^in_v,s'=1 as desired.
Otherwise d(s,v) = 2D/7.
If ⌊4D/7⌋=2⌊2D/7⌋, then d(v,s')≤ d(s,s')-d(s,v) ≤4D/7-2D/7 = 2D/7, so v∈ B^in_2D/7(s') and again A^in_v,s'=1.
If ⌊4D/7⌋=2⌊2D/7⌋+1, then d(v',s')≤ d(s,s')-d(s,v') ≤4D/7-(2D/7+1) = 2D/7, so v'∈ B^in_2D/7(s') and thus v∈ B^in+_2D/7(s') and A^in_v,s'=1, as desired.
This covers all cases, so we've shown we reject.
*If the Diameter is at least D, we accept with high probability.
Let a and b be vertices with d(a,b)≥ D.
If |B^out_3D/7(a)|> m^1-α, Step <ref> computes the eccentricity of some v∈ B^out_3D/7(a) with high probability (by Lemma <ref>), which is at least d(v,b)≥ d(a,b)-d(a,v)≥ 4D/7 by the triangle inequality, so we accept.
Similarly, we accept with high probability if |B^in_3D/7(b)|> m^1-α.
Thus we may assume that |B^out_3D/7(a)|, |B^in_3D/7(b)|≤ m^1-α for the rest of the proof.
If |B^out_D/7(v)|≤ m^α for any vertex v, then either (i) d(v,b)≥ 4D/7, in which case v has eccentricity at least 4D/7 and we accept at Step <ref>, or (ii) d(v,b)≤ 4D/7, in which case there is a vertex u∈ B^out+_D/7(v) on the v-to-b path with d(u,b)≤ 3D/7 (take the u∈ B^out+_D/7(v) closest to b on the path). Then d(a,u)≥ 4D/7 by the triangle inequality and we accept in Step <ref> as we perform a BFS from u.
Thus we may assume |B^out_D/7(v)|> m^α for all vertices v.
Similarly, because of Step <ref>, we may assume |B^in_D/7(v)|>m^α for all vertices v.
In particular, we may assume |B^out_D/7(a)|> m^α and |B^in_D/7(b)|>m^α.
Figure <ref> illustrates this last step.
Then Ŝ hits B^out_D/7(a) with high probability (by Lemma <ref>), so B^out_D/7(a) has some s∈Ŝ with high probability, and similarly B^in_D/7(b) has some s'∈Ŝ with high probability.
The triangle inequality implies that B^out_2D/7(s) ⊂ B^out_3D/7(a), so |B^out_2D/7(s)|≤ |B^out_3D/7(a)|≤ m^1-α and thus s∈ S^out. Similarly s'∈ S^in.
By the triangle inequality, we have d(s,s')≥ d(a,b) - d(a,s) - d(s',b) ≥ D-D/7-D/7 = 5D/7.
Then we must have (A^out· A^in)_s,s' = 0, as otherwise there is a v such that d(s,v)≤2D/7 and d(v,s')≤ 4D/7-2D/7, contradicting d(s,s')≥ 5D/7.
Hence, we accept at step 5, as desired.
§ HARDNESS REDUCTIONS FOR ROUNDTRIP
§.§ Overview
In this paper we prove hardness results for roundtrip diameter that go beyond the 2 vs. 3 barrier. Before presenting the proofs, let us begin with an abstract discussion on why this barrier arises and (at a high level) how we overcome it.
All previous hardness results for diameter are by reductions from OV (or its generalization to multiple sets).
In OV, one is given two sets of vectors of size n and dimension d=log n, A and B, and one needs to determine whether there are a∈ A, b∈ B that are orthogonal. SETH implies that OV requires n^2-o(1) time <cit.>. In a reduction from OV to a problem like diameter, one typically has nodes representing the vectors in A and B, as well as nodes C representing the coordinates, and if there is an orthogonal vector pair a,b, then the corresponding nodes in the diameter graph are far (distance ≥ 3), and otherwise all pairs of nodes are close (distance ≤ 2).
Going beyond the 2 vs. 3 gap is difficult because each node a ∈ A must have distance ≤ 2 to each coordinate node in C, regardless of the existence of an orthogonal pair, and then it is automatically at distance 2+1 from any node b ∈ B because each b has at least one neighbor in C. So even if a,b are orthogonal, the distance will not be more than 3.
The key trick for proving a higher lower bound (say 3 vs. 5) for roundtrip is to have two sets of coordinate nodes, a C^fwd set that can be used to go forward from A to B, and a C^bwd set that can be used to go back.
The default roundtrip paths from A/B to each of these two sets will have different forms, and this asymmetry will allow us to overcome the above issue.
This is inspired by the difficulty that one faces when trying to make the subquadratic 3/2-approximation algorithms for undirected and directed diameter work for roundtrip.
Unfortunately, there is another (related) issue when reducing from OV. First notice that all nodes within A and within B must always have small distance (or else the diameter would be large). This can be accomplished simply by adding direct edges of weight 1.5 between all pairs (within A and within B); but this creates a dense graph and makes the quadratic lower bound uninteresting. Instead, such reductions typically add auxiliary nodes to simulate the n^2 edges more cheaply, e.g. a star node o that is connected to all of A. But then the node o must have small distance to B, decreasing all distances between A and B.
Overcoming this issue by a similar trick seems impossible. Instead, our two hardness results bypass it in different ways.
The reduction from ℓ_∞-Closest-Pair starts from a problem that is defined over one set of vectors A (not two), which means that the coordinates are “in charge” of connecting all pairs within A. We remark that OV can also be defined over one set (monochromatic) instead of two (bichromatic) and remains SETH-hard in that form; however, that formulation would prevent us from applying the above trick of having forward and backward sets of coordinate nodes. Our reduction in Section <ref> is able to utilize the structure of the metric in order to make both ideas work simultaneously.
The reduction from All-Node k-Cycle relies on a different idea: it uses a construction where only a small set of n pairs a_i ∈ A, b_i ∈ B are “interesting” in the sense that we do not care about the distances for other pairs (in order to solve the starting problem).
Then the goal becomes to connect all pairs within A and within B by short paths, without decreasing the distance for the (a_i,b_i) pairs.
A trick similar to the bit-gadget <cit.> does the job, see Section <ref> of the appendix. For the complete reduction see Section <ref>.
§.§ Weighted Roundtrip 2-ε hardness from ℓ_∞-CP
In this section, we highlight the key ideas in Theorem <ref> by proving a weaker version, showing the lower bound for weighted graphs.
We extend the proof to unweighted graphs in Section <ref>.
The main technical lemma is showing that to α-approximate ℓ_∞-Closest-Pair, it suffices to do so on instances where all vector coordinates are in [±(0.5+ε)α].
Towards this goal, we make the following definition.
The α-approximate β-bounded ℓ_∞-Closest-Pair problem is, given n vectors v_1,…,v_n of dimension d in [-β,β]^d determine if there exists v_i and v_j with v_i-v_j_∞≤ 1, or if for all v_i and v_j, v_i-v_j_∞≥α.
We now prove the main technical lemma.
Let ε∈(0,1/2) and α>1.
If one can solve α-approximate (0.5+ε)α-bounded ℓ_∞-CP on dimension O(dε^-1log n) in time T, then one can solve α-approximate ℓ_∞-CP on dimension d in time T+O_ε(dnlog n), where in O_ε(·) we neglect dependencies on ε.
Start with an ℓ_∞ instance Φ=(v_1,…,v_n).
We show how to construct a bounded ℓ_∞ instance Φ' such that Φ has two vectors with ℓ_∞ distance ≤ 1 if and only if Φ' has two vectors with ℓ_∞ distance ≤ 1.
First we show we may assume that v_1,…,v_n are on domain [0,α n].
Fix a coordinate x∈[d].
Reindex v_1,…,v_n in increasing order of v_i[x] (by sorting).
Let v_1',…,v_n' be vectors identical to v_1,…,v_n except in coordinate x, where instead
v_i'[x] = ∑_{j=1}^{i-1}min(α, v_{j+1}[x]-v_j[x])
for i=1,…,n, where the empty sum is 0.
We have that v_i'[x]≤α n for all i, and furthermore |v_i'[x]-v_j'[x]|≥α if and only if |v_i[x]-v_j[x]|≥α and also |v_i'[x]-v_j'[x]|≤ 1 if and only if |v_i[x]-v_j[x]|≤ 1.
Hence, the instance given by v_1',…,v_n' is a YES instance if and only if the instance Φ is a YES instance, and is a NO instance if and only if the instance Φ is a NO instance.
Repeating this with all other coordinates x gives an instance Φ' such that Φ' is a YES instance if and only if Φ is a YES instance, Φ' is a NO instance if and only if Φ is a NO instance, and furthermore Φ' has vectors on [0,α n].
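A short Python sketch of this per-coordinate compression (one coordinate at a time, ties broken arbitrarily):

def compress_coordinate(values, alpha):
    # values = (v_1[x], ..., v_n[x]) for a fixed coordinate x.  Sort, cap the
    # consecutive gaps at alpha, and take prefix sums: differences that were
    # >= alpha stay >= alpha, differences that were <= 1 are preserved
    # exactly, and all new values lie in [0, alpha * n].
    order = sorted(range(len(values)), key=lambda i: values[i])
    new_vals = [0.0] * len(values)
    running = 0.0
    for rank, i in enumerate(order):
        if rank > 0:
            running += min(alpha, values[i] - values[order[rank - 1]])
        new_vals[i] = running
    return new_vals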
Now we show how to construct an ℓ_∞-CP instance in dimension O_ε(dlog n) vectors with coordinates in [±(0.5+ε)α].
Let ε∈(0,0.5) and α > 1.
For any real number M, there exists two maps g:[0,M] → [-(0.5+ε)α,(0.5+ε)α]^2ε^-1+1 and h:[0,M]→ [0, M/2] such that for all a,b∈[0,M], we have min(|a-b|,α) = min((g(a),h(a))-(g(b),h(b))_∞,α). (here, (g(·),h(·)) is a length 2ε^-1+2 vector.)
Furthermore, g and h can be computed in O_ε(1) time.
It suffices to consider when ε^-1 is an integer.
Let f_z:ℝ→ [-(0.5+ε)α,(0.5+ε)α] be the piecewise function defined by f_z(x) = -(0.5+ε)α if x≤ z-(0.5+ε)α, f_z(x) = (0.5+ε)α if x≥ z+(0.5+ε)α, and f_z(x) = x-z otherwise.
For a∈[0,M], define g(a)∈ℝ^{2ε^{-1}+1} and h(a)∈ℝ as follows, where we index the coordinates of g(a) by -ε^{-1},…,-1,0,1,…,ε^{-1} for convenience:
g(a)_i = f_{M/2 + 0.5iεα}(a) for -ε^{-1}≤ i≤ε^{-1}, and
h(a) = |a-M/2|.
Clearly g and h have the correct codomain, and they can be computed in O_ε(1) time.
Additionally, note that f_z(x) and |x-M/2| are 1-Lipschitz functions of x for all z, so g is a Lipschitz function and thus g(a)-g(b)_∞≤ |a-b|.
Now, it suffices to show that min((g(a),h(a))-(g(b),h(b))_∞,α)≥min(|a-b|,α).
If a and b are on the same side of M/2, then |h(a)-h(b)| = ||a-M/2| -|b-M/2|| = |a-b|, as desired.
Now suppose a and b are on opposite sides of M/2, and without loss of generality a < M/2 < b.
Let 0≤ i≤ε^-1 be the largest integer such that a ≤ M/2 - iεα (i=0 works so i always exists).
If i=ε^-1, then a ≤ M/2 - α and
g(a)-g(b)_∞≥ f_{M/2-0.5α}(b) - f_{M/2-0.5α}(a)
≥ 0.5α - (-0.5α)
= α≥min(|a-b|,α),
as desired.
Now assume i<ε^-1.
Let z = M/2 + (0.5 - iε) α.
By maximality of i, we have a-z∈[-(0.5+ε)α,-0.5α].
We have g(·)_ε^-1 - 2i = f_z(·) by definition of g.
By the definition of f_z(·), since a∈[z-(0.5+ε)α,z-0.5α] and b≥ a, we have min(f_z(b)-f_z(a),α) = min(b-a,α).
Thus,
min(g(a)-g(b)_∞,α)
≥min(g(b)_ε^-1-2i-g(a)_ε^-1-2i,α)
= min(f_z(b)-f_z(a),α)
= min(b-a,α),
as desired.
In either case, we have min(g(a)-g(b)_∞,α)≥min(|a-b|,α), so we conclude that min(g(a)-g(b)_∞,α) = min(|a-b|,α).
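A direct Python transcription of these maps (assuming, as in the proof, that 1/ε is an integer) may help fix the indexing; it is only a restatement of the definitions above.

def f(z, x, alpha, eps):
    # The clamped shift f_z: the value x - z truncated to the interval
    # [-(0.5+eps)*alpha, (0.5+eps)*alpha].
    bound = (0.5 + eps) * alpha
    return max(-bound, min(bound, x - z))

def g_and_h(a, M, alpha, eps):
    # One level of the domain-halving map: g(a) has 2/eps + 1 clamped
    # coordinates, indexed by i = -1/eps, ..., 1/eps, and h(a) = |a - M/2|.
    k = round(1 / eps)
    g = [f(M / 2 + 0.5 * i * eps * alpha, a, alpha, eps) for i in range(-k, k + 1)]
    h = abs(a - M / 2)
    return g, h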
Iterating Lemma <ref> gives the following.
Let ε∈(0,1/2).
There exists a map g:[0,α n] → [±(0.5+ε)α]^4ε^-1log n such that for all a,b∈[0,α n], we have min(|a-b|,α) = min(g(a)-g(b)_∞,α).
Furthermore, g can be computed in O_ε(log n) time.
For ℓ=1,2,…, let M_ℓ=α n/2^{ℓ-1}, and let g_ℓ^*:[0,M_ℓ]→[±(0.5+ε)α]^{2ε^{-1}+1} and h_ℓ^*:[0,M_ℓ]→ [0,M_{ℓ+1}] be the functions given by Lemma <ref>.
For ℓ=0,1,…, let g_ℓ:[0,α n]→ [-(0.5+ε)α,(0.5+ε)α]^ℓ(2ε^-1+1) and h_ℓ:[0,α n] → [0,α n/2^ℓ] be such that g_0(x)=() is an empty vector, h_0(x) = x is the identity, and for ℓ≥ 1, g_ℓ(x) = (g_ℓ-1(x), g_ℓ^*(h_ℓ-1(x))) and h_ℓ(x) = h_ℓ^*(h_ℓ-1(x)).
By Lemma <ref>, we have that
min((g_ℓ-1(a),h_ℓ-1(a))-(g_ℓ-1(b),h_ℓ-1(b))_∞,α)
= min((g_ℓ-1(a),g_ℓ^*(h_ℓ-1(a)), h_ℓ^*(h_ℓ-1(a)))-(g_ℓ-1(b), g_ℓ^*(h_ℓ-1(b)), h_ℓ^*(h_ℓ-1(b)))_∞,α)
= min((g_ℓ(a),h_ℓ(a))-(g_ℓ(b),h_ℓ(b))_∞,α)
for all ℓ.
For ℓ=⌈log_2 n⌉, the vector g(a) := (g_ℓ(a),h_ℓ(a)-0.5α) has every coordinate in [±(0.5+ε)α], and by applying (<ref>) repeatedly (note that subtracting 0.5α from the last coordinate does not change any differences), we have
min(|a-b|,α)
= min((g_0(a),h_0(a))-(g_0(b),h_0(b))_∞,α)
= min((g_ℓ(a),h_ℓ(a))-(g_ℓ(b),h_ℓ(b))_∞,α)
= min(g(a)-g(b)_∞,α),
as desired.
The length of this vector is at most log n(2ε^-1+1) + 1, which we bound by 4ε^-1log n for simplicity (and pad the corresponding vectors with zeros).
To finish, let g:[0,α n]→ [± (0.5+ε)α]^{4ε^{-1}log n} be given by Lemma <ref>, and let the original ℓ_∞ instance be v_1,…,v_n.
Let the new (0.5+ε)α-bounded ℓ_∞ instance be w_i = (g(v_i[x]))_x∈ [d] of length 4dε^-1log n.
We now prove our goal for this section, Theorem <ref> for weighted graphs.
If for some α≥ 2 and ε>0 there is a (2-1/α-ε)-approximation algorithm in time O(m^{2-ε}) for roundtrip diameter in weighted graphs, then for some δ>0 there is an α-approximation for ℓ_∞-Closest-Pair with vectors of dimension d≤ n^{1-δ} in time Õ(n^{2-δ}).
By Lemma <ref> it suffices to prove that there exists an O(n^2-δ) time algorithm for α-approximate (0.5+ε)α-bounded ℓ_∞-CP for ε=(4α)^-1.
Let Φ be the bounded-domain ℓ_∞-CP instance with vectors v_1,…,v_n∈[±(0.5+ε)α]^d.
Then construct a graph G (see Figure <ref>) with vertex set S∪ X_1∪ X_2 where X_1=X_2=[d] and S=[n].
We identify vertices with the notations i_S, x_X_1, and x_X_2, for i∈[n] and x∈[d].
Draw directed edges
* from i_S to x_X_1, of weight α+v_i[x],
* from x_X_1 to i_S, of weight α-v_i[x],
* from i_S to x_X_2, of weight α-v_i[x],
* from x_X_2 to i_S, of weight α+v_i[x], and
* between any two vertices in X_1∪ X_2, of weight α.
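For concreteness, a short Python sketch of this construction (vertices are labelled tuples and the output is a directed weighted edge list; this merely mirrors the bullets above):

def build_reduction_graph(vectors, alpha):
    # vectors: the bounded vectors v_1, ..., v_n, each a list of d reals.
    n, d = len(vectors), len(vectors[0])
    edges = []
    for i in range(n):
        for x in range(d):
            v = vectors[i][x]
            edges.append((('S', i), ('X1', x), alpha + v))
            edges.append((('X1', x), ('S', i), alpha - v))
            edges.append((('S', i), ('X2', x), alpha - v))
            edges.append((('X2', x), ('S', i), alpha + v))
    coords = [('X1', x) for x in range(d)] + [('X2', x) for x in range(d)]
    for u in coords:  # weight-alpha edges between all pairs in X1 and X2
        for w in coords:
            if u != w:
                edges.append((u, w, alpha))
    return edges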
Note that all edge weights are nonnegative, any two vertices in X_1∪ X_2 are at roundtrip distance at most 2α, and any s∈ S and x∈ X_1∪ X_2 are at roundtrip distance at most 2α.
Suppose Φ has no solution, so that every pair has ℓ_∞ distance α.
Then for vertices i_S,j_S, there exists a coordinate x such that v_i[x] - v_j[x] is either ≥α or ≤ -α.
Without loss of generality, we are in the case v_i[x] - v_j[x]≥α.
Then the path i_S → x_X_2→ j_S → x_X_1→ i_S is a roundtrip path of length
(α - v_i[x]) + (α+v_j[x]) + (α + v_j[x]) + (α-v_i[x])
= 4α - 2(v_i[x]-v_j[x]) ≤ 2α.
So when Φ has no solution, the roundtrip diameter is at most 2α.
On the other hand, suppose Φ has a solution i,j such that for all x, |v_i[x] - v_j[x]| ≤ 1.
Then, as every edge has weight at least (0.5-ε)α,
d(i_S,j_S)
≥min(min_x∈[d](d(i_S,x_X_1)+d(x_X_1,j_S), d(i_S,x_X_2)+d(x_X_2,j_S)), 4(0.5-ε)α)
≥min(min_x∈[d](α+v_i[x] +α-v_j[x], α+v_j[x] + α-v_i[x]), 2α-4εα)
≥min(2α-1,2α-4αε)
= 2α-1.
Similarly, we have d(j_S,i_S)≥ 2α-1, so d_RT(j_S,i_S)≥ 4α-2, and hence in this case the RT-diameter is at least 4α-2.
A 2-α^-1-ε approximation for RT diameter can distinguish between RT diameter 4α-2 and RT-diameter 2α.
Thus, a 2-α^-1-ε approximation for RT diameter solves α-approximate ℓ_∞-CP.
§ WEIGHTED ROUNDTRIP 5/3-Ε HARDNESS FROM ALL-NODES K-CYCLE
In this section, we highlight the key ideas in Theorem <ref> by proving a weaker version, showing the lower bound for weighted graphs.
We extend the proof to unweighted graphs in Section <ref>.
Under Hypothesis <ref>, for all ε,δ>0, no algorithm can 5/3-ε approximate the roundtrip diameter of a sparse directed weighted graph in O(n^2-δ) time.
Let G=(V,E),V=V_1 ∪⋯∪ V_k, E ⊆⋃_i=1^k V_i × V_i+1 k be the input graph to the All-Nodes k-Cycle problem.
The reduction constructs a new graph G' as follows. See Figure <ref>.
* Each set V_i for i ∈{ 2,…,k } has two copies in G' one set V_i^fwd will be used for interesting forward paths and one set V_i^bwd that will be used for interesting backward paths.
Naturally, the copy of a node x ∈ V_i in the copy V_i^fwd will be denoted x^fwd and its copy in V_i^bwd will be denoted x^bwd.
* The set V_1 has two copies that we will call S and T. The interesting pairs in our construction will be a subset of the pairs in S × T.
We will use the letters a,b,c,… to denote the nodes in V_1. The two copies of a node a ∈ V_1 that are in S and T will be denoted by a and a' such that a∈ S and a' ∈ T.
The interesting pairs will in fact be the n pairs (a,a') ∈ S × T.
* Let us assume that |V_1|=n and that each node a∈ V_1 is associated with a unique identifier a̅ on d=O(logn) bits such that for any pair a,b ∈ V_1, if a ≠ b then the two identifiers a̅,b̅ have at least two coordinates i,j ∈ [d] where a̅[i]=1 while b̅[i]=0, and a̅[j]=0 while b̅[j]=1. In words, we can always find a bit that is 1 in one but 0 in the other. In addition, we require that for all a,b there exist two coordinates i,j ∈ [d] where a̅[i]= b̅[i]=1 and a̅[j] = b̅[j]=0, meaning that there is a coordinate where both are 1 and a coordinate where both are 0. Such identifiers can be obtained, e.g., by taking the bit representation of the name of the node and concatenating it with its complement, then adding a 0 and a 1 to all identifiers (a short sketch of one such identifier scheme is given after this list).
* There are also some new auxiliary nodes. Most importantly there is a bit-gadget comprised of a set J={g_1,…,g_d} of d nodes. In addition, there are four special nodes that connect “everyone to everyone” in certain sets; thus, let us denote them o_1,o_2,o_3,o_4 where o is for omni.
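One concrete realization of the identifiers, sketched in Python (any encoding with the two stated properties works; this is just one convenient choice):

def identifiers(n):
    # Identifier of node a: the binary name of a, then its bitwise complement,
    # then a fixed 0 and a fixed 1.  Any two distinct identifiers then have a
    # coordinate that is 1 in the first and 0 in the second (and vice versa),
    # and every pair also agrees on some 1-coordinate and some 0-coordinate.
    width = max(1, (n - 1).bit_length())
    ids = []
    for a in range(n):
        bits = [(a >> j) & 1 for j in range(width)]
        ids.append(bits + [1 - b for b in bits] + [0, 1])
    return ids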
The edges of G' and their weights are as follows. Let t>2k be a large enough integer; the reduction will make it difficult to distinguish between diameter 6t ± O(k) and diameter 10t ± O(k).
* For each i ∈{2,…,k} and for each edge (x,y) ∈ V_i × V_i+1 in G, we add two edges to G': one forwards (x^fwd,y^fwd) ∈ V_i^fwd× V_i+1^fwd and one backwards (y^bwd,x^bwd) ∈ V_i+1^bwd× V_i^bwd. The weight on these edges is 1, which can be thought of as negligible because it is 0· t +1.
* Each edge leaving V_1 in G, i.e. an edge (a,x) ∈ V_1 × V_2, becomes two edges: a forwards (a,x^fwd) ∈ S × V_2^fwd of weight 3· t+1 and a backwards edge (x^bwd,a) ∈ V_2^bwd× S of weight 0 · t + 1.
* Each edge going into V_1 in G, i.e. an edge (x,a) ∈ V_k × V_1, becomes two edges: a forwards (x^fwd,a) ∈ V_k^fwd× T of weight 3 · t +1 and a backwards edge (a,x^bwd) ∈ T × V_k^bwd of weight 0 · t +1.
The edges defined so far are the main ones. A node a ∈ S can reach its copy a' ∈ T with a (forwards) path of weight 6· t +k if and only if a is in a k-cycle in G, and in which case there is also a backwards path of weight 0· t +k from a'∈ T to a ∈ S.
This will indeed be the difficult condition to check for an algorithm (under Hypothesis <ref> about the complexity of k-cycle) and the rest of the construction aims to make the diameter of G' depend solely on whether this condition is satisfied; and importantly, to make it vary by a large constant factor based on this condition.
This is accomplished with the edges that we define next.
* The first o-node o_1 serves to connect everyone in the set S to everyone in V^fwd_2 ∪⋯∪ V^fwd_k with weight 5 · t+O(1). This could have been achieved more simply by having direct edges of weight 5t from everyone in S to everyone in those sets. However, this would have incurred n^2 edges; the node o_1 simulates this with O(n) edges.
It is connected with edges (o_1,v) to all nodes v ∈ V^fwd_2 ∪⋯∪ V^fwd_k. The weight of these edges is 1. And every node a ∈ S is connected with an edge of weight 5· t +1 to o_1.
* At the same time, the node o_1 serves to connect everyone in T to everyone in V^fwd_2 ∪⋯∪ V^fwd_k with weight 1 · t+O(1). This is achieved by connecting every node a' ∈ T with an edge (a',o_1) of weight 1 · t +1 to o_1.
* The second o-node o_2 serves to connect everyone in V^fwd_2 ∪⋯∪ V^fwd_k to everyone in S with weight 1 · t +O(1).
Every node v ∈ V^fwd_2 ∪⋯∪ V^fwd_k has a direct edge (v,o_2) to o_2 with weight 1, and the node o_2 is connected to every node a ∈ S with an edge (o_2,a) of weight 1· t + 1.
* And o_2 also serves to connect everyone in V^fwd_2 ∪⋯∪ V^fwd_k to everyone in T with weight 5· t +O(1). Thus, we add an edge (a',o_2) of weight 5 · t+1 for all nodes a' ∈ T.
* The third o-node o_3 connects everyone in T to everyone in V^bwd_2 ∪⋯∪ V^bwd_k with weight 2t+O(1). There are edges of weight 1 from o_3 to all nodes in V^bwd_2 ∪⋯∪ V^bwd_k, and there are edges of weight 2 · t +1 from every node in T to o_3.
* Moreover, o_3 connects everyone in S to everyone in V^bwd_2 ∪⋯∪ V^bwd_k with weight 4t+O(1). There is an edge of weight 4· t+1 from every node in S to o_3.
* The fourth and last o-node o_4 connects everyone in V^bwd_2 ∪⋯∪ V^bwd_k to everyone in T with weight 4t+O(1). There are edges of weight 1 from every node in V^bwd_2 ∪⋯∪ V^bwd_k to o_4, and there are edges of weight 4 · t+1 from o_4 to every node in T.
* Similarly, o_4 connects everyone in V^bwd_2 ∪⋯∪ V^bwd_k to everyone in S with weight 2t +O(1). There are edges of weight 2 · t+1 from o_4 to every node in S.
* There are bi-directional edges of weight 3 · t between all pairs of nodes in {o_1,o_2,o_3,o_4}.
At this point, our construction is nearly complete. Almost all pairs of nodes have a roundtrip of cost 6t+O(1), and a node a ∈ V_1 that does not appear in a k-cycle in G causes the pair (a,a') ∈ S × T to have a roundtrip distance of at least 10· t.
However, we still have to worry about the pairs within S (and also within T); currently their roundtrip distance to each other is ≥ 8t even if we are in a YES instance of the k-cycle problem.
The next and final gadget J, the bit-gadget, will make all distances within S and within T at most 6t+O(1) without making the interesting pairs (a,a') ∈ S × T closer than 10t.
Unfortunately, we do not know how to achieve the latter guarantee when the set of interesting pairs is larger than O(n). If we could make the roundtrip distances within S smaller without decreasing the roundtrips to T for all pairs in S × T we could have a similar lower bound under SETH rather than Hypothesis <ref>.
The edges that make up the bit-gadget are as follows.
* Every node a ∈ S is connected to and from every node g_j in J, but the weights on the edges vary based on the identifier a̅.
For a coordinate j ∈ [d], let a̅[j] ∈{0,1} be the j^th bit in the identifier a̅.
* If a̅[j] = 1 we set the weight of the edge (a,g_j) to 3 · t+1, and if a̅[j] = 0 we set it to 5 · t+1.
* If a̅[j] = 1 we set the weight of the edge (g_j,a) to 2 · t+1, and if a̅[j] = 0 we set it to 0 · t+1.
* Similarly, every node a' ∈ T is connected to and from every node g_j in J and the weights depend on a̅.
* If a̅[j] = 1 we set the weight of the edge (g_j,a') to 5 · t+1, and if a̅'̅[j] = 0 we set it to 3 · t+1.
* If a̅[j] = 1 we set the weight of the edge (a',g_j) to 0 · t+1, and if a̅[j] = 0 we set it to 2 · t+1.
* Finally, every node in J is connected with bi-directional edges of weight 3 · t +1 to each of the o-nodes {o_1,o_2,o_3,o_4 }.
This completes the reduction. The new graph has O(n) nodes and O(n log n) edges.
*Correctness
The correctness of the reduction follows from the next two lemmas.
If node a ∈ V_1 is not in a k-cycle in G then d_RT(a,a')≥ 10· t in G', where a ∈ S, a' ∈ T are the two copies of a in G'.
Suppose that all nodes a ∈ V_1 are in a k-cycle in G; then d_RT(x,y) ≤ 6 · t + 2k for all pairs x,y ∈ V(G').
The two lemmas will become evident after we establish a series of claims about the distances in G'.
Let us begin with the interesting pairs (a,a') ∈ S × T where a,a' are the two copies in G' of a node a ∈ G.
The next claim shows that in the “good” case where a is in a k-cycle, the roundtrip distance is 6t+O(k).
If node a ∈ V_1 is in a k-cycle in G then d_RT(a,a') ≤ 6t +2k.
This holds because of the forwards and backwards edges defined in the beginning.
The edges of the k-cycle correspond to a forwards path from a to a' via the nodes in V_2^fwd,…,V_k^fwd and a backwards path from a' to a via the nodes in V_2^bwd,…,V_k^bwd.
The weight of the forwards path is 6 · t +k and the weight of the backwards path is 0 · t + k.
Note that if the node a is not in a k-cycle then neither the forwards nor backwards paths that were used in the previous proof exist in G'.
Next, we show that the distance between any pair a ∈ S, b' ∈ T for a distinct pair of nodes a,b∈ V_1 is ≤ 6t +O(1) due to the bit-gadget J.
For any pair of nodes a,b ∈ V_1 such that a ≠ b we have d_RT(a,b') ≤ 6t +4 where a∈ S and b' ∈ T.
Let j ∈ [d] be a coordinate such that a̅[j]=1 but b̅[j]=0. Such a coordinate is guaranteed to exist because a ≠ b.
The path a → g_j → b' has weight 3 · t +1 + 3 · t +1 = 6 · t +2.
The path b' → g_j → a has weight 0 · t +1 + 0 · t +1 = 0 · t +2.
Thus, the roundtrip distance is at most 6· t + 4.
Note that for the interesting pairs (a,a') the above argument breaks, and the gadget J does not provide a path of length <12t.
So far we have established that if all nodes a∈ V_1 are in a k-cycle then all pairs in S × T have roundtrip distance 6t+O(k).
Let us now bound the distances within S and within T, also using the bit-gadget J.
For any pair of nodes a,b ∈ V_1 such that a ≠ b we have d_RT(a,b) ≤ 6t +4 and d_RT(a',b') ≤ 6t +4 where a,b∈ S and a',b' ∈ T.
Let j ∈ [d] be the coordinate such that a̅[j] = b̅[j]=1.
The path a → g_j → b has weight 3 · t +1 + 0 · t +1 = 3 · t +2.
And for the same reason,
the path b → g_j → a also has weight 3 · t +1 + 0 · t +1 = 3 · t +2.
Thus, the roundtrip distance between a and b is at most 6· t + 4.
For the pair a',b' ∈ T we make a similar argument but consider the coordinate i ∈ [d] in which both identifiers are 0 rather than 1; i.e. a̅[j] = b̅[j]=0.
The path a' → g_i → b' has weight 0 · t +1 + 3 · t +1 = 3 · t +2.
And the path b' → g_i → a' has weight 0 · t +1 + 3 · t +1 = 3 · t +2.
Thus, the roundtrip distance is at most 6· t + 4.
After upper bounding all distances among pairs in S ∪ T, it remains to analyze the other nodes in the construction; fortunately the o-nodes make it easy to see that all such distances are upper bounded by 6t+O(1).
The roundtrip distance between any pair of nodes u,v ∈ V_2^fwd∪⋯∪ V_k^fwd∪ V_2^bwd∪⋯∪ V_k^bwd∪ J ∪{o_1,o_2,o_3,o_4} is at most 6t+6.
The upper bound holds trivially for all pairs in J ∪{o_1,o_2,o_3,o_4} because there are bidirectional edges of weight 3 · t+1 between any pair of them.
Let {u,v} be any pair that is not already covered by the previous argument. It must have an endpoint in V_2^fwd∪⋯∪ V_k^fwd∪ V_2^bwd∪⋯∪ V_k^bwd; let it be u.
Observe that u can reach any v with distance 3 · t+3 because u is at distance 1 to some node in {o_1,o_2,o_3,o_4} and v is at distance ≤ 3 · t + 2 from any node in {o_1,o_2,o_3,o_4}.
Moreover, v can reach any node in {o_1,o_2,o_3,o_4} with weight ≤ 3 · t + 2 and there is some node in {o_1,o_2,o_3,o_4} that can reach u with weight 1.
Thus, the roundtrip distance is at most 6 · t + 6.
Finally, it remains to bound the distances for pairs with one endpoint in S∪ T and one endpoint in the rest of G'.
This will be broken into two claims, each using a different simple argument.
For any nodes a ∈ S, a' ∈ T, g ∈ J we have d_RT(a,g), d_RT(a',g) ≤ 6· t+2.
The direct roundtrips a → g → a and a' → g → a' have the desired distance.
For any nodes a ∈ S, a' ∈ T, v ∈ V_2^fwd∪⋯∪ V_k^fwd∪ V_2^bwd∪⋯∪ V_k^bwd∪{ o_1,o_2,o_3,o_4 } we have d_RT(a,v), d_RT(a',v) ≤ 6· t+4.
* For a ∈ S and any node v ∈ V_2^fwd∪⋯∪ V_k^fwd the roundtrip a → o_1 → v → o_2 → a has weight 6· t+4. Thus, d_RT(a,v),d_RT(a,o_1),d_RT(a,o_2) ≤ 6· t+4.
* For a ∈ S and any node v ∈ V_2^bwd∪⋯∪ V_k^bwd the roundtrip a → o_3 → v → o_4 → a has weight 6· t+4. Thus, d_RT(a,v),d_RT(a,o_3),d_RT(a,o_4) ≤ 6· t+4.
* For a' ∈ T and any node v ∈ V_2^fwd∪⋯∪ V_k^fwd the roundtrip a' → o_1 → v → o_2 → a' has weight 6· t+4. Thus, d_RT(a',v),d_RT(a',o_1),d_RT(a',o_2) ≤ 6· t+4.
* For a' ∈ T and any node v ∈ V_2^bwd∪⋯∪ V_k^bwd the roundtrip a' → o_3 → v → o_4 → a' has weight 6· t+4. Thus, d_RT(a',v),d_RT(a',o_3),d_RT(a',o_4) ≤ 6· t+4.
The above claims suffice to establish Lemma <ref> because we have upper bounded the roundtrip-diameter by 6t+2k in the case that all nodes in G are in a k-cycle.
The next series of claims lower bound the roundtrip-distance between a pair {a,a'} in the case that a is not in a k-cycle in G.
In this case, there is simply no path from a to a' (or in the other direction) that avoids one of the o-nodes or the bit-gadget J.
Therefore, our proof strategy is to lower bound the weight of any path that uses these nodes.
In these arguments we will ignore the +1 in the weights of edges and treat them as zero.
Any path from a∈ S to a'∈ T that uses one of the nodes in {o_1,o_2,o_3,o_4} must have distance at least 8t.
To establish the claim we lower bound the distances between the nodes in S,T and the o-nodes.
* d(a,o_2) ≥ 3t because, in fact, there are no edges leaving S that are cheaper than 3t.
* d(o_1,a') ≥ 3t because there are no edges entering T that are cheaper than 3t.
* d(a,o_1) ≥ 5t because the direct edge has weight 5t+1, all other edges entering o_1 have weight ≥ 3t, and all edges leaving a have weight ≥ 3t, meaning that any path of length at least two will have weight ≥ 6t.
* d(o_2,a') ≥ 5t because the direct edge has weight 5t+1, all other edges leaving o_2 have weight ≥ 3t, and all edges entering a' have weight ≥ 3t.
By the above bounds on the distances we can see that any path from a to a' that goes through o_1 or o_2 must have distance ≥ 8t.
The following bounds address the paths that use o_3 or o_4.
* d(a,o_3),d(a,o_4) ≥ 4t because only nodes in V_2^fwd∪⋯∪ V_k^fwd∪{ o_2 } may be reachable from S with distance <4t.
* d(o_3,a'),d(o_4,a') ≥ 4t because only nodes in V_2^fwd∪⋯∪ V_k^fwd∪{ o_2 } may reach T with distance <4t.
If a ∈ V_1 is not in a k-cycle in G then any path from a∈ S to a'∈ T that does not use one of the nodes in {o_1,o_2,o_3,o_4} must have distance at least 8t.
If the node a is not in a k-cycle in G then there are only two ways that a path of distance <8t from a ∈ S to a' ∈ T could possibly go, without using any of the o-nodes: either by first going to another node b ∈ S and then going from b to a', or by first going to a node b' ∈ T and then going from b' to a'.
This is because any path via V_2^fwd∪⋯∪ V_k^fwd corresponds to a k-cycle in G, which is assumed not to exist, and any path of weight <8t via the bit-gadget J corresponds to a coordinate j ∈ [d] in which the two identifiers differ, which also does not exist (since both identifiers are a̅).
In either case, the path will have length ≥ 9t due to the following observations:
* For any pair a,b ∈ S we have d(a,b) ≥ 3t. This is because all edges leaving S have weight ≥ 3t.
* For any pair a',b' ∈ T we have d(a',b') ≥ 3t. This is because all edges entering T have weight ≥ 3t.
* For any pair a∈ S,b' ∈ T we have d(a,b') ≥ 6t. This is because all edges leaving S or entering T have weight ≥ 3t and moreover there are no direct edges from S to T.
Any path from a'∈ T to a∈ S that uses one of the nodes in {o_1,o_2,o_3,o_4} must have distance at least 2t.
The proof is analogous to that of Claim <ref>. Let us lower bound the distances between the nodes in S,T and the o-nodes.
* d(a',o_3) ≥ 2t because there are no edges entering o_3 with weight less than 2t.
* d(o_4,a) ≥ 2t because there are no edges leaving o_4 with weight less than 2t.
This implies that any path from a' to a that goes through o_3 or o_4 must have distance ≥ 2t.
The following bounds address the paths that use o_1 or o_2.
* d(a',o_1),d(a',o_2) ≥ t because only nodes in V_2^bwd∪⋯∪ V_k^bwd∪{ o_4 } may be reachable from T with distance <t.
* d(o_1,a),d(o_2,a) ≥ t because only nodes in V_2^bwd∪⋯∪ V_k^bwd∪{ o_3 } may reach S with distance <t.
If a ∈ V_1 is not in a k-cycle in G then any path from a'∈ T to a∈ S that does not use one of the nodes in {o_1,o_2,o_3,o_4} must have distance at least 2t.
The proof is analogous to that of Claim <ref>.
A direct path via V_2^bwd∪⋯∪ V_k^bwd does not exist, and a direct path via the J gadget has weight ≥ 2t.
Thus, a path of weight <2t from a' to a must either visit a node b∈ S or a node b' ∈ T. In either case the distance will be ≥ 3t by the following bounds:
* For any pair a,b ∈ S we have d(a,b) ≥ 3t because all edges leaving S have weight ≥ 3t.
* For any pair a',b' ∈ T we have d(a',b') ≥ 3t because all edges entering T have weight ≥ 3t.
As a result of the above four claims, we know that if a is not in a k-cycle in G then the roundtrip distance between a ∈ S and a' ∈ T is at least 8t + 2t = 10t, which establishes Lemma <ref>.
Together, Lemma <ref> and <ref> show the correctness of the reduction. An algorithm that can distinguish between roundtrip-diameter ≥ 10t from roundtrip-diameter ≤ 6t +2k can solve the All-Nodes k-Cycle problem.
By choosing t to be a large enough constant, this can be achieved by an algorithm for roundtrip-diameter with approximation factor 5/3-ε.
§ GENERAL APPROXIMATION OF DIRECTED (ONE-WAY) DIAMETER
We now give our general algorithm, generalizing the algorithm from Section <ref> and proving Theorem <ref>.
[Theorem <ref>, restated]
Let k=2^t+2 for nonnegative integer t≥ 0.
For every ε>0, there exists a (2-1/k+ε)-approximation of diameter in directed weighted graphs in time Õ(m^{1+α}/ε), for
α= (2(2/(ω-1))^t - (ω-1)^2/2) / ((2/(ω-1))^t(7-ω) - (ω^2-1)/2).
Note that for t=0, this recovers Theorem <ref>, with a lost ε factor in the approximation.
Similar to Theorem <ref>, it suffices to show that, for any positive integer D > 0, there exists an algorithm 𝒜_D running in time Õ(m^1+α) that takes as input any graph and accepts if the diameter is at least D, rejects if the diameter is less than (k/2k-1-ε)D, and returns arbitrarily otherwise.
Then with a binary search argument we can get a (2-1/k+4ε)-approximation for every small ε>0. Replacing ε with ε/4 gives the result.
One can check that our choice of α guarantees a unique sequence of numbers 1-α = α_0 > α_1 > ⋯ > α_t > α_{t+1} = α such that
2α = α_i + (1-α_{i+1})(ω-1)/2
for i=0,…,t. We can determine α by using α_0=1-α and α_{t+1}=α and iterating the recursion (<ref>) to obtain an equation for α, which we solve to get (<ref>).[
Here are some details. First rewrite (<ref>) as
α_{i+1} = (2/(ω-1))α_i + 1 - (4/(ω-1))α.
For i=0, since α_0=1-α we have α_1=2/(ω-1)+1-(6/(ω-1))α by (<ref>). If we define α_i=β_i-γ_iα, we have that β_1=(ω+1)/(ω-1) and γ_1=6/(ω-1). Using equation (<ref>) for i≤ t-1, we have β_{i+1}=(2/(ω-1))β_i+1 and γ_{i+1}=4/(ω-1)+(2/(ω-1))γ_i. So we have β_{i+1}=(2/(ω-1))^i(β_1-(ω-1)/(ω-3))+(ω-1)/(ω-3), and γ_{i+1}=(2/(ω-1))^i(γ_1-4/(ω-3))+4/(ω-3). So β_t=(2/(ω-1))^t·2/(3-ω)+(ω-1)/(ω-3) and γ_t=(2/(ω-1))^t·(7-ω)/(3-ω)+4/(ω-3).
Using equation (<ref>) for i=t we have α = (2/(ω-1))α_t+1-(4/(ω-1))α, so we have
α=(β_t+(ω-1)/2)/(γ_t+(ω+3)/2)=(2(2/(ω-1))^t-(ω-1)^2/2)/((2/(ω-1))^t(7-ω)-(ω^2-1)/2).
]
For such an α, we can check α_0 > α_1 > ⋯ > α_t > α_{t+1}.[
Here are some details: First check that α≥ 2/(7-ω) from (<ref>) and 2≤ω≤ 3.
Combining this with (<ref>) at i=0 gives α_0 > α_1.
Additionally, subtracting the i and i+1 versions of equation (<ref>), we see that α_i > α_i+1 implies α_i+1> α_i+2, so by induction we indeed have α_0 > α_1 > ⋯ > α_t > α_t+1.
]
*The algorithm. We now describe the algorithm 𝒜_D.
* First, we apply a standard trick that replaces the input graph on n vertices and m edges with a 2m-vertex graph of max-degree-3 that preserves the diameter: replace each vertex v with a cycle of deg(v) new vertices with weight-0 edges and where the edges to v now connect to distinct vertices of the cycle. From now on, we work with this max-degree-3 graph on 2m vertices.
Now running Dijkstra's algorithm until m^β vertices are visited takes Õ(m^β) time, since it costs Õ(1) time to visit each vertex and its at-most-3 edges in Dijkstra's algorithm.
* Sample 4m^αlog m uniformly random vertices and compute their eccentricities. If any such vertex has (in- or out-) eccentricity at least k/2k-1D Accept.
* For every vertex v, determine if |B_1/2k-1D^out(v)|≤ m^α. If such a vertex v exists, determine if any vertex of B_1/2k-1D^out(v) has eccentricity at least k/2k-1D, and Accept if so.
* For every vertex v, determine if |B_1/2k-1D^in(v)|≤ m^α. If such a vertex v exists, determine if any vertex of B_1/2k-1D^in(v) has eccentricity at least k/2k-1D, and Accept if so.
* For i=0,…,t:
* Sample 4m^1-α_i+1log m uniformly random vertices Ŝ.
For each vertex in Ŝ, run partial in- and out-Dijkstra each until m^α_i vertices have been visited.
Compute
S^out = {s∈Ŝ: B^out_2^i+1/2k-1D(s)≤ m^α_i}
S^in = {s∈Ŝ: B^in_2^i+1/2k-1D(s)≤ m^α_i}
and record the distances d̂(s,v) from the partial out-Dijkstra for s∈ S^out, for v∈ B^out+_2^i+1/2k-1D(s).
Note that d̂(s,v) ≥ d(s,v) for all such v with equality if v∈ B^out_2^i+1/2k-1D(s).
Similarly, record the distances d̂(v,s) from the partial in-Dijkstra for s∈ S^in, for v∈ B^in+_2^i+1/2k-1D(s).
* Sample 4m^1-αlog m uniformly random vertices T̂.
For each vertex in T̂, run partial in- and out- Dijkstra until m^α_i vertices have been visited.
Compute
T^out ={t∈T̂: B^out_k-2^i+1/2k-1D(t)≤ m^α_i}
T^in ={t∈T̂: B^in_k-2^i+1/2k-1D(t)≤ m^α_i}
and record the distances d̂(t,v) from the partial out-Dijkstra for t∈ T^out, for v∈ B^out+_k-2^i+1/2k-1D(s), and similarly, record the distances d̂(v,t) from the partial in-Dijkstra for t∈ T^in, for v∈ B^in+_k-2^i+1/2k-1D(s).[Note that if a'∈ S^out∪ T^out and b'∈ S^in∪ T^in, d̂(a',b') may be recorded multiple times, with different values. We take the smallest one, as this only helps us.]
* For integers 0≤ j≤1/ε·k/2k-1, construct the following matrices
* A^j,out∈ℝ^S^out× V where A_s,v^j,out = 1 if d̂(s,v)≤ jε D, and all other entries are zero.
* A^j,in∈ℝ^V× S^in where A_v,s^j,in=1 if d̂(v,s)≤ (k/2k-1-jε)D and all other entries are zero.
* B^j,out∈ℝ^T^out× V where B_t,v^j,out = 1 if d̂(t,v)≤ jε D, and all other entries are zero.
* B^j,in∈ℝ^V× T^in where B_v,t^j,in=1 if d̂(v,t)≤ (k/2k-1-jε)D and all other entries are zero.
For all j, compute A^j,out· B^j,in∈ℝ^S^out× T^in and B^j,out· A^j,in∈ℝ^T^out× S^in using sparse matrix multiplication.
If there exists s∈ S^out and t∈ T^in such that (A^j,out· B^j,in)_s,t = 0 for all j, Accept.
If there exists t∈ T^out and s∈ S^in such that (B^j,out· A^j,in)_t,s = 0 for all j, Accept.
Otherwise Reject.
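The zero-entry test of the last step is just a covering condition; the following brute-force Python sketch (quadratic, so only for intuition; the algorithm itself realizes this test via the sparse matrix products) spells it out for the S^out× T^in direction, with the T^out× S^in direction analogous. The estimates d̂ are assumed to be stored in a dictionary keyed by vertex pairs, with missing keys read as +∞.

def exists_uncovered_pair(S_out, T_in, V, d_hat, k, D, eps):
    # Accept (return True) iff some pair (s, t) is uncovered at every
    # threshold j, i.e. no vertex v has both d_hat[(s, v)] <= j*eps*D and
    # d_hat[(v, t)] <= (k/(2k-1) - j*eps)*D.
    INF = float('inf')
    max_j = int((1.0 / eps) * k / (2 * k - 1)) + 1
    for s in S_out:
        for t in T_in:
            covered = False
            for j in range(max_j + 1):
                out_lim = j * eps * D
                in_lim = (k / (2.0 * k - 1) - j * eps) * D
                if any(d_hat.get((s, v), INF) <= out_lim and
                       d_hat.get((v, t), INF) <= in_lim for v in V):
                    covered = True
                    break
            if not covered:
                return True
    return False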
*Runtime.
Similar to Theorem <ref>, Steps <ref>, <ref>, and <ref> take time Õ(m^1+α).
For Step <ref>, like in Theorem <ref>, we can compute S_i^out and S_i^in and determine the desired distances in time Õ(m^1-α_i+1+α_i) using partial Dijkstra.
Similarly in Step <ref>, we can compute T_i^out and T_i^in and the desired distances in time Õ(m^1-α+α_i).
In Step <ref>, the runtime is the time to multiply sparse matrices. Each matrix A^j has Õ(m^1-α_i+1) rows, m columns, and sparsity Õ(m^1-α_i+1+α_i), and matrix B^j has m rows, Õ(m^1-α) columns, and sparsity Õ(m^1-α+α_i).
We can compute the product A^j,out· B^j,in by breaking into m^α_i+1-α matrix multiplications of dimension (Õ(m^1-α_i+1), n, Õ(m^1-α_i+1)), where each matrix has sparsity Õ(m^1-α_i+1+α_i) (because each row of A^j,out and each column of B^j,in has sparsity O(m^α_i)).
Each submatrix multiplication runs in time Õ(m^1-α_i+1+α_im^(1-α_i+1)ω-1/2) by Lemma <ref>.
To apply Lemma <ref>, we need 1-α_i+1+α_i ≥ (1-α_i+1)ω+1/2, which holds by rearranging (<ref>) and using α_i > α.
Thus, one product A^j,out· B^j,in takes time Õ(m^1-α+α_im^(1-α_i+1)ω-1/2), and so, as there are O(1/ε) matrix multiplications, Step <ref> takes time Õ(m^1-α+α_im^(1-α_i+1)ω-1/2/ε).
Thus, the total runtime is
Õ(m^1+α) + ∑_i=0^tÕ(m^1-α+α_i + (1-α_i+1)ω-1/2)
≤Õ(m^1+α)
as desired, where the bound follows from (<ref>).
*If the diameter is less than (k/2k-1-ε)D, we always reject.
Clearly every vertex has eccentricity less than k/2k-1, so we indeed do not accept at Steps <ref>, <ref>, and <ref>.
At Step <ref>, consider any s∈ S^out and t∈ T^in.
By definition, we have d(s,t) < (k/2k-1-ε)D.
Let v be the latest vertex on the s-to-t shortest path such that d(s,v) ≤2^i+1/2k-1D and let v' be the following vertex, if it exists.
Then v is in B^out_2^i+1/2k-1D(s) and v', if it exists, is in B^in_k-2^i+1/2k-1D(t).
Thus, v is visited in the partial Dijkstra from s, so d̂(s,v)=d(s,v) is accurate.
Similarly, either v=t so that d̂(v,t) is accurately 0, or v' exists and is visited in the partial Dijkstra from t, so that d̂(v,t) is updated to be at most w_v,v' + d̂(v',t) = w_v,v'+d(v',t) = d(v,t), and thus is accurate. We used in the second equality that the v-to-t shortest path goes through v'.
We thus have d̂(s,v) + d̂(v,t) < (k/2k-1-ε)D.
Setting j = ⌈d̂(s,v)/(ε D)⌉, we have A^j,out_s,v = 1 and d̂(v,t) < (k/2k-1-ε)D - d̂(s,v) ≤ (k/2k-1-jε)D, so B^j,in_v,t = 1.
Hence, (A^j,out· B^j,in)_s,t≥ 1, and this holds for any s∈ S^out and t∈ T^in.
Similarly, (A^j,in· B^j,out)_t,s≥ 1 for any t∈ T^out and s∈ S^in.
Thus, we do not accept at Step <ref>, so we reject, as desired.
*If the diameter is at least D, we accept with high probability.
Let a and b be vertices at distance d(a,b)≥ D.
Similar to Theorem <ref>, we may assume that all of the following hold, or else we accept with high probability at one of Steps <ref>, <ref>, or <ref>.
|B_k-1/2k-1D^out(a)|≤ m^1-α,
|B_k-1/2k-1D^in(b)|≤ m^1-α,
|B_1/2k-1D^out(a)| > m^α,
|B_1/2k-1D^in(b)| > m^α
Let i∈{0,…,t+1} be the largest index such that |B_k-2^i+1+1/2k-1D^out(a)|≤ m^α_i, |B_k-2^i+1+1/2k-1D^in(b)|≤ m^α_i.
By the first half of (<ref>), i exists, and by the second half of (<ref>), i < t+1. Thus, we have
|B_k-2^i+1+1/2k-1D^out(a)|≤ m^α_i, and
|B_k-2^i+1+1/2k-1D^in(b)|≤ m^α_i, and
either |B_k-2^i+2+1/2k-1D^out(a)|> m^α_i+1 or
|B_k-2^i+2+1/2k-1D^in(b)|> m^α_i+1
We now prove that iteration i of Step <ref> accepts.
Suppose that |B_k-2^i+2+1/2k-1D^out(a)|> m^{α_{i+1}}. The case |B_k-2^i+2+1/2k-1D^in(b)|> m^{α_{i+1}} is similar.
With high probability Ŝ has a vertex s in B_k-2^i+2+1/2k-1D^out(a) by Lemma <ref>.
As |B_2^i+1/2k-1D^out(s)|≤ |B_k-2^i+1+1/2k-1D^out(a)|≤ m^α_i, we have s∈ S^out.
With high probability T̂ has a vertex t in B_1/2k-1D^in(b) by the bound in (<ref>) and Lemma <ref>.
As |B_k-2^i+1/2k-1D^in(t)|≤ |B_k-2^i+1+1/2k-1D^in(b)|≤ m^α_i, we have t∈ T^in.
By choice of s and t, the triangle inequality gives
d(s,t)≥ d(a,b) - d(a,s) - d(t,b) ≥ D - k-2^i+2+1/2k-1D - 1/2k-1D ≥ (k+1)/(2k-1)·D.
Note that if (A^j,out· B^j,in)_s,t > 0 for some j, then there exists a vertex v such that d(s,v) ≤ jε D and d(v,t)≤ (k/2k-1-jε)D, so d(s,t) ≤ d(s,v)+d(v,t) ≤k/2k-1D, contradicting (<ref>).
Thus, (A^j,out· B^j,in)_s,t = 0, so we accept, as desired.
If the edge weights are integers in {0,…, C}, we can remove the +ε and get a 2k-1/k-approximation in time Õ(m^1+αC).
We can set ε=1/D, and in Step <ref>, we only need matrix multiplications for j∈[2^i+1/2k-1D, 2^i+1/2k-1D+C], since crossing an edge changes distance by at most C, saving the 1/ε factor from the number of matrix multiplications.
Furthermore, because the diameter is an integer, we can stop the binary search after log D≤log(nC) steps.
§ UNWEIGHTED ROUNDTRIP 2-Ε HARDNESS FROM ℓ_∞-CP
We now prove Theorem <ref>, extending the proof from Section <ref> to unweighted graphs.
Let α≥ 2, γ∈(0,1), δ>0.
If there is a (2-1/α-γ)-approximation algorithm in time O(m^2-δ) for roundtrip diameter in unweighted graphs, then there is an α-approximation for ℓ_∞-Closest-Pair on vectors of dimension d≤ n^1-δ in time Õ(n^2-δ).
Let M ≥ 20/γ be a constant, β := ⌊Mα⌋ - 1, and ε = 1/(4(β+1.5)).
For convenience, we assume that M is such that the fractional part of Mα is less than 0.5, so that β> Mα-1.5.
By Lemma <ref>, it suffices to find an algorithm for a (0.5+ϵ)α-bounded instance I'={v_1',…,v_n'} of the ℓ_∞-Closest-Pair problem on vectors of dimension dε^-1log n in time Õ(n^2-δ).
This algorithm needs to distinguish between the “YES case,” where there exists i≠ j with v_i'-v_j'_∞≤ 1, and the “NO case” where v_i'-v_j'_∞≥α for all i≠ j.
First we construct a new set of vectors I={v_1,…,v_n}, where for each j∈[n] and x∈ [d], v_j[x]=⌊M· v_j'[x]⌋. This set of vectors has the following properties:
* In the NO case, for every i≠ j∈ [n] there is a coordinate x with |v_i'[x]-v_j'[x]|≥α, and thus |v_i[x]-v_j[x]|≥ Mα-1≥β.
* In the YES case, there exist i≠ j∈ [n] such that for all x∈[d] we have |v_i'[x]-v_j'[x]|≤ 1, and thus |v_i[x]-v_j[x]|≤ M.
* All entries of vectors in I have absolute value at most 0.5β+2: note that v_j[x]=⌊Mv_j'[x]⌋≤ Mv_j'[x]≤ (0.5+ε)Mα < 0.5β + 2 and v_j[x]=⌊Mv_j'[x]⌋ > Mv_j'[x]-1≥ -(0.5+ε)Mα-1 ≥ -0.5β-2.
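A small Python sketch of this scaling step, under the conventions assumed above (the choice of M with small fractional part, β = ⌊Mα⌋-1, and entrywise rounding down are exactly those assumptions):

import math

def round_instance(bounded_vectors, alpha, gamma):
    # Pick M >= 20/gamma with frac(M*alpha) < 0.5, set beta = floor(M*alpha)-1,
    # and round every entry of the bounded vectors down to an integer.
    M = math.ceil(20 / gamma)
    while (M * alpha) - math.floor(M * alpha) >= 0.5:
        M += 1  # terminates for any rational alpha
    beta = math.floor(M * alpha) - 1
    ints = [[math.floor(M * c) for c in v] for v in bounded_vectors]
    return ints, M, beta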
We now construct a graph G in O_γ,α(nd) time such that, in the YES case (some pair i≠ j has |v_i[x]-v_j[x]|≤ M for all x), the roundtrip diameter is at least 4β-2M, and in the NO case (every pair i≠ j has a coordinate x with |v_i[x]-v_j[x]|≥β), the roundtrip diameter is at most 2β+8.
Indeed, this implies that a better than (4β-2M)/(2β+8) = 2 - (M+8)/(β+4) ≥ 2-1/α-γ approximation of the roundtrip diameter can distinguish between the YES and NO case, solving the bounded ℓ_∞ instance, as desired. We now describe the graph.
*The graph.
The graph is illustrated in Figure <ref>.
For each i∈ [n] we first describe a subgraph called the v_i-subgraph, which consists of the following. Let x^*= argmax_{x∈[d]}|v_i[x]|.
* Vertices i^f_1,…,i^f_β+|v_i[x^*]|-1 and i^b_1,…,i^b_β+|v_i[x^*]|-1. Vertex i=i^f_0=i^b_0. (superscripts f and b are for “forward” and “backward”)
* For j=0,…,β+|v_i[x^*]|-2, edges (i_j^f,i_j+1^f) and (i_j+1^b,i_j^b), so that the i_j^f nodes construct a path of length β+|v_i[x^*]|-1 from i and the i_j^b nodes construct a path of length β+|v_i[x^*]|-1 to i.
* Vertices P^i={p^i_1,…,p^i_β+1} which form a directed path P^i of length β. Edges (i_j^b,p^i_1) and (p^i_β+1,i_j^f) for all j=1,…,β+|v_i[x^*]|-1.
The subgraph on the union of ∪_ji^f_j, ∪_j i^b_j and P^i is called the v_i-subgraph, as all the nodes are associated to v_i.
In addition to the v_i-subgraphs for all i, our graph has the following:
* As in the weighted case, we have two vertex sets X_1=[d] and X_2=[d] each identified by the coordinates.
* For each x∈ [d], connect i_j^f to x_X_1 and x_X_2 to i_j^b for all j≥β+v_i[x]-1. Connect x_X_1 to i_j^b and i_j^f to x_X_2 for all j≥β-v_i[x]-1. These simulate the weighted edges from S to X_1 and X_2 in the weighted construction.
This finishes the construction.
We now show that the roundtrip diameter is at most 2β+8 in the NO case and at least 4β-2M in the YES case.
*NO case.
We show the roundtrip distance between every pair of vertices a and c is at most 2β+8. We break into the following cases.
* Case 1: a,c are both in the v_i-subgraph for some i.
Let x^*= argmax_{x∈[d]}|v_i[x]|. Consider the two following cycles of length 2β:
i=i_0^f,…,i_β+v_i[x^*]-1^f,x_X_1^*, i_β-v_i[x^*]-1^b, …, i_0^b=i
i=i_0^f,…,i_β-v_i[x^*]-1^f,x_X_2^*, i_β+v_i[x^*]-1^b, …, i_0^b=i
These two cycles don't cover the following cases: (case 1) a=i_j^f and c=i_j'^b for j,j' > β-|v_i[x^*]|-1, (case 2) a is in P^i.
Without loss of generality suppose v_i[x^*]>0. For the case 1, consider the following cycle
a,P^i, c=i_j'^b,…, i_β+v_i[x^*]-1,x_X_1^*,a
This cycle has length at most β+2v_i[x^*]+2. Since v_i[x^*]≤ 0.5β+2, the cycle has length at most 2β+6.
Note that this also covers case 2 when c∈{i_j^f,i_j^b}∪ P^i for some j> β-|v_i[x^*]|-1.
For case 2, if c∈{i_j^f,i_j^b} for j≤β-|v_i[x^*]|-1, consider the following cycle:
i=i^f_0,…,i^f_β-v_i[x^*]-1, x_X_2^*, i^b_β+v_i[x^*]-1,P^i,i^f_β+v_i[x^*]-1,x_X_1^*,i^b_β-v_i[x^*]-1,…,i_0^b=i
This cycle is of length at most 3β+1-2v_i[x^*]. Note that v_i[x^*]≥ 0.5β-2. This is because if we consider some j∈ S, then there is y such that |v_i[y]-v_j[y]|≥β. Since |v_i[y]|,|v_j[y]|≤ 0.5β+2, we have that |v_i[y]|,|v_j[y]|≥ 0.5β-2. So v_i[x^*]≥ |v_i[y]|≥ 0.5β-2, and hence the length of the cycle is at most 2β+7.
* Case 2: a is in the v_i-subgraph and c is in the v_j-subgraph for i≠ j.
Let x∈ [d] be a coordinate where |v_i[x]-v_j[x]|≥β.
Without loss of generality suppose that v_i[x]>0>v_j[x]. Note that 0.5β-2≤ |v_i[x]|,|v_j[x]|≤ 0.5β+2.
We show that there is a path from x_X_1 to x_X_2 of length at most β+4 that contains a. Similarly, we show that there is a path from x_X_2 to x_X_1 of length at most β+4 that contains c. Then the union of these two paths constructs a cycle of length at most 2β+8 passing through a and c.
If a∈{i_k^b,i_k^f} for some k≤β-v_i[x]-1, consider this path of length 2β-2v_i[x]≤β+4.
x_X_1,i_β-v_i[x]-1^b,…,i_0^b=i_0^f,…, i_β-v_i[x]-1^f,x_X_2
If a∈{i_k^b,i_k^f} for k>β-v_i[x]-1, consider the following path of length β+4.
x_X_1,i_k^b,P^i,i_k^f, x_X_2
If a∈ P^i, we consider the above path of length β+4.
Now for c, we do a similar case analysis. If c∈{j_k^f,j_k^b}, for some k≤β+v_j[x]-1, consider the following path of length 2β+2v_j[x]≤β+4.
x_X_2,j_β+v_j[x]-1^b,…,j_0^b=j_0^f,…, j_β+v_j[x]-1^f,x_X_1
If c∈{j_k^b,j_k^f} for k>β+v_j[x]-1, consider the following path of length β+4.
x_X_2,j_k^b,P^j,j_k^f, x_X_1
If c∈ P^j, we consider the above path of length β+4.
So the cycle is of length at most 2β+8.
* Case 3: a is in the v_i-subgraph and c∈ X_1∪ X_2.
Suppose c=x_X_1. If a=i_k^b for k≤β-v_i[x]-1 or a=i_k'^f for k'≤β+v_i[x]-1, then consider the following cycle of length 2β.
i=i_0^f,…,i_β+v_i[x]-1^f,x_X_1,i_β-v_i[x]-1^b,…,i_0^b=i.
If a=i_k^b for k> β-v_i[x]-1, consider the following cycle of length β+4:
x_X_1, a,P^i,i_β+v_i[x]-1^f,x_X_1
If a=i_k^f for k> β+v_i[x]-1, consider the following cycle of length β+4:
x_X_1, i_β-v_i[x]-1^b,P^i,a,x_X_1
For c=x_X_2, everything is symmetric.
* Case 4: a,c∈ X_1∪ X_2. Any two vertices in X_1∪ X_2 are at distance at most β+4 and thus at roundtrip distance at most 2β+8: for any x,x'∈ X_1∪ X_2, pick any i. Then
x,i^b_β+v_i_∞-1,P^i,i^f_β+v_i_∞-1,x'
is a path of length β+4.
This covers all cases, so we have shown that the roundtrip diameter in the NO case is at most 2β+8.
*YES case.
Suppose that there exist i,j such that for all x∈ [d], |v_i[x]-v_j[x]|≤ M. We show that d(i,j)≥ 2β-M. By symmetry, it follows that d(j,i)≥ 2β-M, so the roundtrip distance is at least 4β-2M.
For every vertex k, we can check that d(k,X_1∪ X_2), d(X_1∪ X_2,k) ≥β-v_k_∞≥ 0.5β-2.
If a path from i to j passes through a path P^k, then it must hit X_1∪ X_2 before and after path P^k (even if k=i or k=j), creating a path of length at least β+4 between two vertices of X_1∪ X_2.
Then the i-to-j path has length at least
d(i,X_1∪ X_2)+(β+4)+d(X_1∪ X_2,j) ≥ (0.5β-2)+(β+4) + (0.5β-2) = 2β,
as desired.
If a path from i to j passes through a vertex k∈[n] with k≠ i,j, then the path must have length at least
d(i,X_1∪ X_2)+d(X_1∪ X_2,k)+d(k,X_1∪ X_2)+d(X_1∪ X_2,j)
≥ 4(0.5β-2)
> 2β-M.
Finally, if a path from i to j passes through no path P^k and no vertex k for all k≠ i,j, then the path cannot visit any v_k-subgraph for k≠ i,j.
Thus, the path must go from i through the v_i-subgraph to some x∈ X_1∪ X_2, then through the v_j-subgraph to j.
If x∈ X_1, the path has length
d(i,x_X_1) + d(x_X_1,j)≥ (β+v_i[x]) + (β-v_j[x]) ≥ 2β-M,
by assumption of i and j, and similarly if x∈ X_2 the path has length
d(i,x_X_2) + d(x_X_2,j)≥ (β-v_i[x]) + (β+v_j[x]) ≥ 2β-M,
as desired.
§ UNWEIGHTED ROUNDTRIP 5/3-Ε HARDNESS FROM ALL-NODES K-CYCLE
In this section, we extend the proof from Section <ref> to unweighted graphs.
[Theorem <ref>, restated]
Under Hypothesis <ref>, for all ε,δ>0, no algorithm can 5/3-ε approximate the roundtrip diameter of a sparse directed unweighted graph in O(n^2-δ) time.
We change the weighted construction as follows. We make 7t copies of S and call them S_i^fwd for i=1,…,5t (forward copies), and S_i^bwd for i=1,…,2t (backward copies). Similarly we make 7t copies of T and call them T_i^fwd for i=1,…,5t, and T_i^bwd for i=1,…,2t. These copies are the only new vertices added to the weighted construction. We call the subset of the graph containing S and all its copies the S-area. Similarly, we call the subset containing T and all its copies the T-area. We define the edges between these copies as follows. See Figure <ref>.
* We put perfect matchings between these copies. Formally, for a∈ V_1, we add the edges (a,a)∈ S× S^fwd_1 and (a,a)∈ S^fwd_i× S_i+1^fwd for i=1,… 5t-1. We add the edges (a,a)∈ S_1^bwd× S and (a,a)∈ S_i+1^bwd× S_i^bwd for i=1,…,2t-1.
* For a∈ V_1, we add the edges (a',a')∈ T^fwd_1× T and (a',a')∈ T^fwd_i+1× T_i^fwd for i=1,… 5t-1. We add the edges (a',a')∈ T× T_1^bwd and (a',a')∈ T_i^bwd× T_i+1^bwd for i=1,…,2t-1.
Note that so far we have a path of length 5t+1 out of each a∈ S and a path of length 2t+1 to each a∈ S. Similarly we have a path of length 5t+1 to each a'∈ T and a path of length 2t+1 from each a'∈ T. Now we can add edges that simulate the edges in the weighted construction. We start by defining the edges adjacent to V_i^fwd and V_i^bwd for i=2,…,k. Note that the edges in V_i^fwd× V_i+1^fwd and V_i+1^bwd× V_i^bwd for i=2,…,k-1, and the edges in (V_i^fwd∪ V_i^bwd)×{o_1,…,o_4} for i=2,…,k are the same as the weighted case and we include them here for completeness.
* For all a∈ V_1 and x∈ V_2, add the edge (a,x^fwd)∈ S_3t^fwd× V_2^fwd if (a,x)∈ E(G). Add the edge (x^bwd,a)∈ V_2^bwd× S if (x,a)∈ E(G).
* Similarly, for any a∈ V_1 and x∈ V_k, add the edge (x^fwd,a')∈ V_k^fwd× T_3t^fwd if (x,a)∈ E(G). Add the edge (a',x^bwd)∈ T× V_k^bwd if (a,x)∈ E(G).
* The following edges are the same as in the weighted case and we note them for completeness. For each i ∈{2,…,k} and for each edge (x,y) ∈ V_i × V_i+1 in G, we add two edges to G': one forwards (x^fwd,y^fwd) ∈ V_i^fwd× V_i+1^fwd and one backwards (y^bwd,x^bwd) ∈ V_i+1^bwd× V_i^bwd. The weight on these edges is 1, which can be thought of as negligible because it is 0· t +1.
We define the edges adjacent to o_i for i=1,…,4.
* For all a∈ V_1, we add (a,o_1)∈ S_5t^fwd× o_1, (a,o_3)∈ (S_4t^fwd∪ S_5t^fwd)× o_3 and (o_2,a)∈ o_2×(S_2t^bwd∪ S_t^bwd)[Note that if we want to copy the weighted case, intuitively we should add edges S_4t^fwd× o_3. Adding edges from S_5t^fwd to o_3 only makes longer paths from S so wouldn't hurt the yes case.].
* For all a∈ V_1, we add (o_2,a')∈ o_2× T_5t^fwd, (o_4,a')∈ o_4×( T_4t^fwd∪ T_5t^fwd) and (a',o_1)∈ (T_2t^bwd∪ T_t^bwd)× o_1.
The following edges exists in the weighted version as well, and we put them here for completeness. Note that there are no edges between any o_i and o_j in the unweighted case.
* Add edges from o_1 to all nodes v ∈ V^fwd_2 ∪⋯∪ V^fwd_k. Add an edge from all v ∈ V^fwd_2 ∪⋯∪ V^fwd_k to o_2
* Add edges from o_3 to all nodes v ∈ V^bwd_2 ∪⋯∪ V^bkw_k. Add an edge from all v ∈ V^bwd_2 ∪⋯∪ V^bwd_k to o_4
Now we add edges adjacent to J.
* For all a∈ V_1 and j∈ [d], we add the edge (a,g_j)∈ S_5t^fwd× J and we add (g_j,a)∈ J× S_2t^bwd[In the weighted case the S_5t^fwd× J edges have the constraint a[j]=0, but if we drop this condition it wouldn't hurt the yes case.]. If a̅[j]=1, we add the edge (a,g_j)∈ S_3t^fwd× J. If a̅[j]=0, we add the edge (g_j,a)∈ J× S.
* For any a∈ V_1 and j∈ [d], we add the edge (g_j,a')∈ J× T_5t^fwd and we add (a',g_j)∈ T_2t^bwd× J. If a̅[j]=0, we add the edge (g_j,a')∈ J× T_3t^fwd. If a̅[j]=1, we add the edge (a,g_j)∈ T× J.
Finally we add the following edges that don't simulate any edges in the weighted case, and their use is to make copies of S (and T) close to each other.
* For all a∈ V_1, add edges (a,a)∈ (S_1^bwd∪ S_t+1^bwd)× (S_2t+1^fwd∪ S_3t+1^fwd).
* For all a∈ V_1, add edges (a',a')∈ (T_2t+1^fwd∪ T_3t+1^fwd)× (T_1^bwd∪ T_t+1^bwd).
Note that we do not add any edges between o_is, or between o_i and J, as the existence of the above edges make it unnecessary.
*NO case
We compute distances with an O(1) additive error to make the proof simpler.
First we cover the roundtrip distance between nodes that are in the S-area and T-area. Let a,b∈ S-area. Let J_i(a)={g_j|a[j]=i} for i∈{0,1}, and let S_0^fwd=S and T_0^fwd=T.
Let a,b∈ S-area∪ T-area such that a̅≠b̅, where a̅ and b̅ are a and
b's identifiers. Suppose that d(a,J)+d(J,a)≤ 3t and d(b,J)+d(J,b)≤ 3t. Then d_rt(a,b)≤ 6t.
By the construction of the graph, we know that there is i_1∈{0,1} such that d(a,J)=d(a,g_j) for all g_j∈ J_i_1(a). Similarly there exist i_2,i_3,i_4∈{0,1} such that
* d(J,a)=d(g_j,a) for all g_j∈ J_i_2(a)
* d(b,J)=d(b,g_j) for all g_j∈ J_i_3(a)
* d(J,b)=d(g_j,b) for all g_j∈ J_i_4(a)
Now since a̅≠b̅, there is g_j∈ J_i_1(a)∩ J_i_4(b). Similarly, there is g_j'∈ J_i_2(a)∩ J_i_3(b). So d(a,b)≤ d(a,g_j)+d(g_j,b)=d(a,J)+d(J,b) and d(b,a)≤ d(b,g_j')+d(g_j',a)=d(b,J)+d(J,a). So d_rt(a,b)≤ 6t.
Now we show that for all a,b∈ S-area∪ T-area where a̅≠b̅, the conditions of Lemma <ref> hold, and hence d_rt(a,b)≤ 6t. Suppose a∈ S-area. We have
* If a∈ S_i^fwd for i=0,…,3t, then d(a,J)≤ 3t-i and d(J,a)≤ 1+i.
* If a∈ S_3t+i^fwd for i=1,…,2t, then d(a,J)≤ 2t-i and d(J,a)≤ t+i using the edges in J× S_2t^bwd and S_t+1^bwd× S_3t+1^fwd.
* If a∈ S_i^bwd for i=1,…, 2t, then d(a,J)≤ i+t (through S_1^bwd× S_2t+1^fwd edges) and d(J,a)≤ 2t-i.
Since T-area is symmetric, we have similar results if a∈ T-area. So we can apply Lemma <ref>.
Now suppose that a∈ S-area and a'∈ T-area are from the same node a∈ V_1. First suppose a∈ S_3t+i^fwd for some i∈{1,…,2t}. We know that there exist j,j' such that d(a',g_j)+d(g_j',a')≤ 3t. Then since all edges in S^fwd_5t× J and J× S_2t^bwd exist, we have d(g_j,a)≤ t+i (using S_t+1^bwd× S_3t+1^fwd edges) and d(a,g_j')≤ 2t-i. So d_rt(a,a')≤ 6t. If a'∈ T_3t+i^fwd for i∈{1,…,2t}, we have a symmetric argument.
So suppose a∉ S_3t+1^fwd∪…∪ S_5t^fwd. Let the cycle passing through a in G be ax_2… x_k where x_i∈ V_i for i=2,…,k.
* Let a∈ S_i^fwd and a'∈ T_j^fwd for some i,j∈{0,…,3t}. Then consider the cycle passing through copies of a in all S_ℓ^fwd for ℓ=0,…,3t, then going to x_i^fwd∈ V_i^fwd for i=2,…,k, then to all copies of a' in T_ℓ^fwd for ℓ=3t,…, 0, x_i^bwd∈ V_i^bwd for i=k,…,2 and finally back to the copy of a in S. This cycle passes through a and a' and is of length 6t.
* Let a∈ S_i^fwd and a'∈ T_j^bwd for i∈{0,…, 3t} and j∈{1,…, 2t}. Let z∈ [d] be a coordinate such that a[z]=0. The cycle passing through a and a' is the following: start from copies of a in S_ℓ^fwd for all ℓ=0,…,3t, then to x_i^fwd∈ V_i^fwd for i=2,…,k, to all copies of a' in T_ℓ^fwd for ℓ=3t,…,2t+1, all copies of a' in T_ℓ^bwd for ℓ=1,…,2t then to g_z and then back to the copy of a in S.
* Let a∈ S_i^bwd and a'∈ T_j^bwd for some i,j∈{1,…, 2t}. The cycle passing through a and a' is the following: start from copies of a in S_ℓ^fwd for all ℓ=2t+1,… 3t, then to x_i^fwd∈ V_i^fwd for i=2,…,k, to all copies of a' in T_ℓ^fwd for ℓ=3t,…,2t+1, all copies of a' in T_ℓ^bwd for ℓ=1,…,2t, then to g_z for some arbitrary z∈ [d], to all the copies of a in S_ℓ^bwd for ℓ=2t,…,1 and finally back to S_2t+1^fwd.
Now we show that S-area nodes are close to all nodes in J, V_i^fwd, V_i^bwd and o_j for i=2,…,k and j=1,…, 4.
Let a∈ S-area. We show that d(a,o_1)+d(o_2,a)≤ 6t and d(a,o_3)+d(o_4,a)≤ 6t. Then since for every x^fwd∈ V_i^fwd for any i∈{2,…,k} there is a 2-path o_1x^fwdo_2, and for every x^bwd∈ V_i^bwd there is a 2-path o_3x^bwdo_4, we have that a is close to all nodes in V_i^fwd∪ V_i^bwd∪ o_j. The proof for a∈ T-area is similar.
For a∈ S-area, we have that d(a,o_1)+d(o_2,a)≤ 6t and d(a,o_3)+d(o_4,a)≤ 6t.
We do case analysis.
* If a∈ S_i^fwd for i=0,…, 5t, then d(a,o_1)=5t-i, and d(o_2,a)=t+i.
* If a∈ S_i^bwd for i=1,…, 2t, then d(a,o_1)=i+3t+1 using S_1^bwd× S_3t+1^fwd edges, and d(o_2,a)=2t-i.
* If a∈ S_i^fwd for i=0,…,4t, then d(a,o_3)=4t-i using edges in S_4t^fwd× o_3 and d(o_4,a)=2t+i, using the edges in o_4× S_2t^bwd.
* If a∈ S_4t+i^fwd for i=1,…,t, then d(a,o_3)=t-i and d(o_4,a)=2t+i using the edges in S_t+1^bwd× S_3t+1^fwd.
* If a∈ S_i^bwd for i=1,…, 2t, then d(a,o_3)= i+2t using S_1^bwd× S_3t+1^fwd edges, and d(o_4,a)=2t-i using o_4× S_2t^bwd edges.
Note that we can use these 6t paths from o_2 to o_1 in the Lemma to bound the roundtrip distances between v^fwd,u^fwd∈∪_i=2^kV_i^fwd for any v,u∈∪_i=2^kV_i. Similarly, we can use 6t-paths from o_4 to o_3 to bound the roundtrip distances between v^bwd,u^bwd∈∪_i=2^kV_i^bwd, for any v,u∈∪_i=2^kV_i.
Furthermore, we have that d(o_2,o_3)=3t using o_2× T_5t^fwd, T_3t+1^fwd× T_t+1^bwd and T_2t^bwd× o_3 edges. Symmetrically, d(o_4,o_1)=3t using o_4× S_2t^bwd, S_t+1^bwd× S_3t+1^fwd and S_5t^fwd× o_1 edges. Now since for any u∈∪_i=2^kV_i, o_1u^fwdo_2 and o_3u^bwdo_4 are paths of length 2, using appropriate 2-paths from o_1 to o_2 and from o_3 to o_4 we can form a cycle containing x^fwd∈∪_i=2^kV_i^fwd and y^bwd∈∪_i=2^kV_i^bwd for any x,y∈∪_i=2^kV_i.
Also note that using the above cycles, any o_i for i=1,…,4 and x^fwd∈ V_i^fwd or x^bwd∈ V_i^bwd are close for any x∈ V_i for i=2,…, k.
Now it remains to prove that all the nodes in J are close to all the other nodes. Fix some g_j∈ J
* For a∈ S_3t+i^fwd or a∈ S_i^bwd for some i∈{1,…,2t}, there is a cycle passing through all copies of a in S_3t+ℓ and S_ℓ^bwd for all ℓ=1,…,2t and g_j, since there is an edge between all pairs in S_5t^fwd× J and J× S_2t^bwd.
* Let a∈ S_i^fwd for some i∈{0,…, 3t} and suppose a[j]=1. Then let j'∈ [d] be a coordinate where a[j']=0. Consider the following cycle: start from copies of a in S_ℓ^fwd for all ℓ=0,…,3t, then go to g_j, then to copies of a in S_ℓ^bwd for all ℓ=2t,…,t+1, then to copies of a in S_3t+ℓ^fwd for all ℓ=1,…,2t, then to g_j' and finally back to S.
* Let a∈ S_i^fwd for some i∈{0,…, 3t} and suppose a[j]=0. Then let j'∈ [d] be a coordinate where a[j']=1. We consider the same cycle above where we swap g_j and g_j' in the cycle.
* To show that J is close to o_1,o_2,V_i^fwd for i=2,…,k, we consider the following cycle. Let a∈ V_1 be an arbitrary node. Start from copies of a in S_3t+ℓ^fwd for all ℓ=1,…,2t, then go to o_1, then to a vertex in V_i^fwd for some i=2,…,k, then to o_2, to all copies of a' in T_3t+ℓ^fwd for all ℓ=2t…,1, then all copies of a' in T_ℓ^bwd for ℓ=t+1,…,2t, to g_j then to all copies of a in S_ℓ^bwd for ℓ=2t,…,t+1, and finally back to S_3t+1^fwd.
* To show that J is close to o_3,o_4,V_i^bwd for i=2,…,k, we change the previous cycle. To go from the copy a in S_5t^fwd to the copy of a in T_5t^bwd, we go through g_j. Then to go from the copy of a in T_2t^bwd to the copy of a in S_2t^bwd, we go to o_3, then to any node in V_i^bwd for some i=2,…,k, then to o_4 and finally to S_2t^bwd.
* Suppose that we want to show g_j is close to g_j'. Let a be any node in S. Note that d(g_j,g_j')=3t by going through all the copies of a in S_ℓ^bwd for ℓ=2t,…,t+1, and the copies of a in S_3t+ℓ^fwd for ℓ=1,…,2t. Similarly, d(g_j',g_j)=3t.
*YES case
In order to simplify the proof of the YES case, we note that the main difference between weighted and unweighted case that might cause short paths in the YES case are the edges added in (S_t+1^bwd∪ S_1^bwd)× (S_2t+1^fwd∪ S_3t+1^fwd) in the S-area (and the symmetric case in the T-area). We will show that if the roundtrip cycle uses any of these edges, the path is going to be long. If the path doesn't use any of these edges, then it is easy to see from the construction that there is an equivalent path in the weighted case.
We show that if a∈ S and a'∈ T, d(a,a')≥ 8t and d(a',a)≥ 2t.
First consider the aa' path. The first 3t nodes on the path must be copies of a in S_i^fwd for i=0,…,3t. Similarly, the last 3t nodes on this path must be copies of a' in T_i^fwd for i=3t,…,0.
Let a∈ S and a'∈ T be copies of the same node in V_1. If the aa' shortest path uses any of the edges in (S_t+1^bwd∪ S_1^bwd)× (S_2t+1^fwd∪ S_3t+1^fwd), then this path has length at least 8t.
First we show that d(S,S_t+1^bwd)≥ 4t: This is because any path from S to S_t+1^bwd ends with nodes in S_i^bwd for all i=2t,…,t+1. Since it must go through S_i^fwd for all i=0,…,3t, we have d(S,S_t+1^bwd)≥ 4t.
Similar as above, we have that d(S,S_1^bwd)≥ 4t. Now note that the distance from S_2t+1^fwd∪ S_3t+1^fwd to any edge going out of the S-area is at least t. So before entering the 3t-subpath in the T-area that ends in a', the path has length ≥ 4t+t=5t. So in total it has length at least 8t.
With a symmetric argument we can show that if the path aa' uses any edge in (T_3t+1^fwd∪ T_2t+1^fwd) × (T_1^bwd∪ T_t+1^bwd), it has length at least 8t.
For the a'a path, it is easier to see that if the path uses any of these edges, the length of it is at least 2t. This is because d(T,T_3t+1^fwd∪ T_2t+1^fwd)>2t and d(S_3t+1^fwd∪ S_2t+1^fwd, S)>2t.
|
http://arxiv.org/abs/2307.05121v1 | 20230711085653 | Transaction Fraud Detection via Spatial-Temporal-Aware Graph Transformer | [
"Yue Tian",
"Guanjun Liu"
] | cs.LG | [
"cs.LG",
"q-fin.GN"
] |
Journal of Class Files, Vol. 14, No. 8, August 2021
Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals
Transaction Fraud Detection via Spatial-Temporal-Aware Graph Transformer
Yue Tian,
Guanjun Liu, Senior Member, IEEE,
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Yue Tian and Guanjun Liu are with Department of Computer Science, Tongji University, Shanghai 201804, China (e-mail: [email protected]; [email protected]).
August 12, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================
How to obtain informative representations of transactions and then perform the identification of fraudulent transactions is a crucial part of ensuring financial security. Recent studies apply Graph Neural Networks (GNNs) to the transaction fraud detection problem. Nevertheless, they encounter challenges in effectively learning spatial-temporal information due to structural limitations. Moreover, few prior GNN-based detectors have recognized the significance of incorporating global information, which encompasses similar behavioral patterns and offers valuable insights for discriminative representation learning. Therefore, we propose a novel heterogeneous graph neural network called Spatial-Temporal-Aware Graph Transformer (STA-GT) for transaction fraud detection problems. Specifically, we design a temporal encoding strategy to capture temporal dependencies and incorporate it into the graph neural network framework, enhancing spatial-temporal information modeling and improving expressive ability. Furthermore, we introduce a transformer module to learn local and global information. Pairwise node-node interactions overcome the limitation of the GNN structure and build up the interactions with the target node and long-distance ones. Experimental results on two financial datasets compared to general GNN models and GNN-based fraud detectors demonstrate that our proposed method STA-GT is effective on the transaction fraud detection task.
Graph neural network, transaction fraud, spatial-temporal information, transformer.
§ INTRODUCTION
Transaction fraud incidents frequently occur in the rapidly evolving development of financial services, leading to substantial economic losses <cit.>. According to the Nielsen report, global credit card losses amounted to 25 billion dollars in 2018, and further increases are expected <cit.>.
Consequently, identifying fraudulent transactions is crucial to mitigate financial losses, enhance customer experience, and safeguard the reputation of financial institutions.
Numerous techniques have been proposed for detecting transaction fraud, and they can be classified into two categories: rule-based methods and machine learning-based methods.
1) The rule-based methods rely on human-designed rules with expert knowledge to assess the likelihood that fraud has occurred <cit.>. These methods heavily rely on experts' domain knowledge which cannot perform well in complex environments.
Moreover, the fixed rules limit the algorithm's ability to adapt to dynamic fraud patterns.
2) The machine learning-based methods can detect fraudulent transactions automatically by constructing supervised or unsupervised models leveraging vast historical transaction data <cit.>. Machine learning-based methods usually resort to feature extraction <cit.>. Deriving statistical features from transaction attributes such as time, location, and amount is feasible. However, extracting useful features from unstructured data such as device IDs and WiFi positions is challenging. Additionally, effectively capturing the interactions between transactions presents difficulties. Therefore, the ability of machine learning-based methods to identify fraud remains constrained.
Graph-based approaches have recently exhibited superior performance in fraud detection <cit.>.
GNN techniques acquire the representation of the central node through the selective aggregation of information from neighboring nodes <cit.>. In contrast to conventional fraud detection methods, they can facilitate automatic feature learning by capturing the interactive relationships between transactions. Additionally, graph-based approaches can efficiently identify fraudulent transactions through end-to-end learning <cit.>.
However, graph-based fraud detection methods encounter significant challenges when faced with the following problems.
First, applying the GNN method for our fraud detection task needs to pay attention to the learning of spatial-temporal information. We have the following observations for fraudulent transactions: 1) Spatial aggregation: Fraudsters often utilize a limited number of devices to execute fraudulent activities, as acquiring transaction equipment incurs costs. 2) Temporal aggregation: Fraudulent actions are frequently undertaken within a narrow time frame, as the detection of suspicious behavior by the cardholder or financial institution can prompt the termination of the transaction. Some recent works, including GEM <cit.> and STAGN <cit.>, have noted similar challenges. GEM establishes a connection with the account that occurred on the device within the same time period <cit.>. STAGN leverages temporal and spatial slices to consider both spatial and temporal aggregation <cit.>. However, they fail to distinguish the temporal differences of neighbor transactions in the same time slice, as shown in Fig. 1(a). Meanwhile, the representation of the target transaction may depend on the ones in other time slices, as shown in Figs. 1(b) and (c). In this way, due to structural limitations, informative transactions that satisfy the homogeneity assumption cannot be fully exploited. Therefore, it is unreasonable to construct a separate transaction graph for each time slice and then perform graph convolution operation to incorporate spatial-temporal information.
Secondly, graph-based detection methods rely on aggregating information from neighbor nodes to update the representation of target nodes. However, this approach only utilizes local information while ignoring global information. In fact, long-distance transactions may contain similar information, and GNN-based fraud detectors cannot capture this information to obtain discriminative representations. For example, the representation of a target transaction a may require information from a K-hop transaction b. Although we can obtain the information of transaction b by stacking K GNN layers, doing so may dilute the information from transaction b. Simply expanding the receptive field of a GNN is insufficient for learning discriminative representations. As the depth increases, GNNs may face the over-smoothing issue, where the learned representations of all nodes tend to become indistinguishable. This limits the expressive ability of GNN-based fraud detectors.
To address the aforementioned challenges, we propose a novel graph neural network model to detect fraudulent transactions, called Spatial-Temporal-Aware Graph Transformer (STA-GT). First, STA-GT is built on a heterogeneous graph neural network to model spatial-temporal information for learning discriminative representations. Specifically, a heterogeneous graph is constructed, which takes transactions as nodes and consists of various edge types (e.g., IP address and MAC address) based on the transaction location.
To capture temporal dependencies, we incorporate a designed temporal encoding strategy into the graph neural network architecture, which makes STA-GT gather spatial-temporal information effectively.
To further improve STA-GT's performance, we leverage a relation-level attention mechanism to specify the contributions of different relations dynamically and concatenate the intermediate embeddings from the corresponding GNN layers to deal with varying degrees of sharpness and smoothness.
Finally, a Transformer sub-network is added on top of the heterogeneous GNN layer stack. In this way, STA-GT incorporates global information into its learning process while preserving the GNN's ability to capture local structural information.
Extensive experiments are conducted on two financial datasets to evaluate the performance of STA-GT. Compared to other state-of-art methods, the experimental results demonstrate its superiority in fraud detection tasks.
The contributions of this paper are summarized as follows:
* We propose a heterogeneous graph neural network method to identify fraudulent transactions. It can learn spatial-temporal information while preserving structural information.
To the best of our knowledge, it is the first work that employs a graph neural network integrated with the temporal encoding strategy to model spatial-temporal dependencies on the transaction fraud problem.
* We overcome the limitation of the GNN structure to propose a local-global learning module, which can capture all pairwise node-node interactions and build up the connections between the target node and long-distance neighbors. By incorporating this module, STA-GT is able to effectively learn both local and global transaction information while alleviating the over-smoothing phenomenon.
* We construct experiments on two financial datasets, including performance comparison, ablation studies, and parameter sensitivity analysis. The results show that STA-GT outperforms other baselines on the transaction fraud detection task.
The rest of the paper is organized as follows. Section II presents the related work. Section III introduces the problem definition for the transaction fraud tasks. Section IV describes how the STA-GT identifies fraudulent transactions. Section V introduces the datasets and evaluates the performance of STA-GT compared with the other GNN-based baselines. Section VI concludes the paper.
§ RELATED WORK
§.§ Graph Neural Networks
The GNNs' excellent ability to process non-structured data has made them widely applied in electronic transactions, recommendation systems, and traffic forecasting <cit.>. Its basic idea is to obtain the representation of each node by leveraging the information from itself as well as its neighboring nodes. GNNs are divided into two categories. 1) Spectral neural networks propose graph convolution operations in the spectral domain.
ChebNet approximates graph convolution using polynomial expansion <cit.>.
GCN performs spectral convolutions on graphs to capture structure and feature information <cit.>. 2) Spatial Graph Neural Networks apply convolution operations on the graph structure through leveraging the information of neighborhood nodes. GraphSAGE proposes a general inductive framework, which can efficiently update the representation of the target node <cit.>. It utilizes the defined aggregators to sample and aggregate local neighborhood information of the target nodes <cit.>. GAT leverages a self-attention mechanism to enable distinct treatment of various neighbors during the embedding updating of the target node <cit.>.
To model heterogeneity and learn rich information, heterogeneous graph neural networks are proposed. RGCN is an extension method of GCN to model the relational data <cit.>. HAN utilizes a hierarchical attention strategy to evaluate the corresponding significance of neighbors and meta-paths. According to the learned importance, HAN can learn the complex structure and feature information to generate the representations of each node <cit.>.
However, these methods are not explicitly designed for our transaction fraud detection task. And they ignore the problem of temporal-spatial dependency and how to make full use of informative but long-distance transactions.
§.§ GNN-based Fraud Detection
Recently, some researchers have explored how to apply GNNs to fraud detection tasks, revealing the suspiciousness of fraudulent behaviors. Based on various scenarios, GNN-based fraud detection is divided into two categories: financial fraud detection <cit.> and opinion fraud detection <cit.>. GEM is the pioneering work to detect malicious accounts via a heterogeneous graph neural network <cit.>. CARE-GNN designs a label-aware similarity sampler with a reinforcement learning strategy to solve two camouflage issues, including the feature and relation camouflage <cit.>. To address the issue of imbalanced node classification, PC-GNN introduces a label-balanced sampler for reconstructing sub-graphs<cit.>. It employs an over-sampling technique for the neighbors belonging to the minority class and an down-sampling technique for the others <cit.>. To handle feature inconsistency and topology inconsistency, FRAUDRE integrates several key components, including the topology-agnostic embedding layer, the fraud-aware graph operation, and the inter-layer embedding fusion module <cit.>. Moreover, to mitigate the impact of class imbalance, the imbalance-oriented loss function is introduced <cit.>.
STAGN aims to learn spatial-temporal information via an attention-based 3D convolution neural network <cit.>. MAFI alleviates the camouflage issue via a trainable sampler and utilizes the relation-level and aggregator-level attention mechanisms to specify the corresponding contributions <cit.>.
xFraud adopts a self-attentive heterogeneous graph neural network to automatically aggregate information from different types of nodes without predefined meta-paths and designs a hybrid explainer which is a tradeoff between GNN-based explanations and traditional topological measures <cit.>.
Among these methods, only two works <cit.> noticed the temporal information. While <cit.> only captures the interaction between two nodes that occurred on the device within the same time period, and <cit.> utilizes the temporal slices. They ignore the spatial-temporal information in other time slices. STA-GT remedies the shortcoming via the temporal encoding strategy. Furthermore, the above methods fail to use global transaction information for great expressive ability.
§ PROBLEM DEFINITION
In this section, we present the conceptions of multi-relation graph. Then, we formulate fraud detection on the graph problem.
Definition 1. Multi-relation Graph.
A multi-relation graph is defined as 𝒢 = {𝒱,{ℰ}_1^R,𝒳,𝒴}, where 𝒱 and ℰ respectively are the sets of nodes and edges. e_i,j^r denotes an edge which connects nodes i and j under the relation r ∈{ 1,...,R }. Each node refers to a transaction record x, where x is a d-dimensional feature vector denoted as x_i ∈ℛ^d, and the set of node features is represented as 𝒳 = {x_1, ..., x_n}. 𝒴 represents the set of labels of all nodes 𝒱.
Definition 2. Fraud Detection on the Graph.
In the transaction graph, each node v denotes a transaction whose suspiciousness needs to be predicted. And its label is denoted as y_v ∈{0,1}, where 0 and 1 represent legitimate and fraudulent transactions, respectively. The identification of fraudulent transactions is a semi-supervised binary classification problem. A transaction fraud detection model can be trained using information from labeled nodes. Then, it is utilized to infer the possibility that the unlabeled node is predicted to be fraudulent.
§ PROPOSED MODEL
In this section, the pipeline of our method STA-GT is introduced, as shown in Fig. 1. STA-GT has five modules: 1) Attribute-driven Embedding. The topology-agnostic layer is utilized to obtain initial layer embeddings. 2) Temporal aware module. The temporal encoding strategy is used to capture the temporal dependency. 3) Aggregation process. Intra-relation and inter-aggregation are performed to learn the embeddings in each layer with the full utilization of the information from the target node and its neighbors while keeping the contributions of different relations.
And then, we concatenate these intermediate embeddings to fuse all information. Spatial-Temporal information can be learned by the above step. 4) Transformer layer. Global information can be captured by making use of pair-wise node-node interactions. 5) Predicted layer. It is used to classify whether the transaction is fraudulent or not.
§.§ Attribute-driven Embedding
Fraudsters often mimic cardholders' behavior to avoid their suspicions, leading to fraudulent nodes and normal ones often exhibit similarities. Consequently, utilizing the original node features to learn representations is imperative before the GNN training. In this study, we employ the attribute-driven embedding layer <cit.> to facilitate the learning of feature similarity without relying on graph topological information.
Given the node v_i, the initial layer embedding can be denoted as:
h_v_i^0=σ(x_iW_1),
where σ represents a non-linear activation function, x_i ∈ℝ^s denotes the original attributes of node v_i, and W_1 ∈ℝ^s× d denotes a learnable weight matrix.
§.§ Modeling the Temporal Dependency
In the domain of transaction fraud detection, the prevailing approach for incorporating temporal information involves generating separate graphs for each time slice. Nonetheless, this methodology fails to differentiate temporal dependencies from various neighboring nodes in the same time slice and cannot break structural limitations enabling interactions between the target node and nodes existing in different time slices. To capture temporal information and maximize the use of structural information, we allow the target node to interact with the nodes that occur at any time. And we define the temporal encoding strategy, allowing nodes to learn a hidden temporal representation. Given a target node v_i at time t(v_i), the temporal encoding can be expressed as follows:
Base(t(v_i), 2i) = sin( t_v_i/10000^(2i/d)),
Base(t(v_i), 2i+1) = cos( t_v_i/10000^((2i+1)/d)),
TE(t_v_i)=T-Linear(Base(t(v_i))),
where T-Linear is a tunable linear projection. Then, we can model the relative temporal dependency between nodes v_i and v_j. By adding the temporal encoding, the hidden representation of node v_i can be updated as follows:
h_v_i^0, t = h_v_i^0 + TE(t_v_i).
By adopting this approach, the enhanced temporal representation becomes capable of capturing the relative temporal relationships between the target node v_i and its neighbor nodes.
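For concreteness, the temporal encoding above can be sketched in PyTorch-style code as follows; this is an illustrative sketch rather than the authors' implementation, and the module name, the assumption of an even embedding dimension, and the projection shape are our own.

# Illustrative sketch (not the authors' code): sinusoidal temporal encoding
# followed by a tunable linear projection (T-Linear), added to the initial embedding.
import torch
import torch.nn as nn

class TemporalEncoding(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        assert d_model % 2 == 0  # assume an even embedding dimension
        self.d_model = d_model
        self.t_linear = nn.Linear(d_model, d_model)  # the tunable T-Linear projection

    def forward(self, timestamps: torch.Tensor) -> torch.Tensor:
        # timestamps: (N,) transaction times; returns (N, d_model) temporal encodings
        pos = timestamps.float().unsqueeze(1)                        # (N, 1)
        idx = torch.arange(0, self.d_model, 2, dtype=torch.float32)  # even indices 2i
        div = torch.pow(10000.0, idx / self.d_model)                 # 10000^(2i/d)
        base = torch.zeros(timestamps.size(0), self.d_model)
        base[:, 0::2] = torch.sin(pos / div)
        base[:, 1::2] = torch.cos(pos / div)
        return self.t_linear(base)

# Usage: h0_t = h0 + TemporalEncoding(d)(t) augments the initial embedding h^0 with temporal information.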
§.§ Modeling the Spatial Dependency
The transaction graph 𝒢 encodes the relationships among the transactions. The connected transactions in the 𝒢 tend to share similar features. Specific to our fraud detection problem, fraudsters connect with others since they always leverage shared devices to execute fraudulent activities. Hence, we utilize the GNN method to model the spatial dependency. Given a node v and its hidden embedding, which contains temporal information after the aforementioned step, we leverage the following intra-relation and inter-relation aggregation mechanisms to update its representation. Subsequently, by concatenating the intermediate layer embeddings, we obtain a comprehensive representation that incorporates both spatial and temporal patterns.
§.§.§ Intra-Relation and Inter-Relation Aggregation
Given a node v_i and its neighbor node v_j under r relation at ℓ-th layer, we learn the neighborhood information under the homophily assumption and the difference between them. The graph convolution operation is denoted as:
h_i,r^ℓ,t= COMBINE(AGGR{h_v_j^ℓ-1,t', v_j ∈𝒩_r(v_i)},
AGGR{h_v_i^ℓ-1,t-h_v_j^ℓ-1,t', v_j ∈𝒩_r(v_i)}),
where h_v_j^ℓ-1,t denotes the ℓ-th layer embedding with temporal information of node v_i under the r-th relation, t denotes the timestamp of node v_i, t' denotes the timestamp of node v_j, ℓ∈{1,2,...,L}, r ∈{1,2,...,R}, and 𝒩_r(v_i) is the set of neighbors of node v_i under relation r.
Considering that the different relations provide corresponding contributions, we employ the attention mechanism to specify the importance of each relation. The representation of node v_i under R relations is denoted as:
h_v_i^ℓ,t=∑_r=1^Rα_r^ℓ⊙ h_i,r^ℓ,t,
where α_r^ℓ denotes the normalized importance of relation r. α_r^ℓ can be formulated as follows:
w_r^ℓ=1/|V|∑_i ∈𝒱 q^T · tanh(W_2 · h_i,r^ℓ,t+b),
α_r^ℓ=exp(w_r^ℓ)/∑_r'=1^Rexp(w_r'^ℓ),
where q, W_2, b denote the relation level vector, the weighted matrix, and the bias vector, respectively.
§.§.§ Inter-layer Representation Fusion
The node representations output by different layers in the GNN architecture manifest distinct levels of sharpness and smoothness, as evidenced by prior studies <cit.>.
h_v_i^t=COMBINE(h_v_i^1,t,h_v_i^2,t,...,h_v_i^L,t).
Following the above step, graph convolution operations are performed to capture the local neighbor information and the relative temporal relationships between nodes, thereby facilitating the modeling of spatial-temporal information.
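As an illustration of the relation-level attention and the inter-layer fusion described above, a simplified PyTorch-style sketch is given below; it is not the authors' implementation, and the intra-relation aggregator itself is abstracted away.

# Illustrative sketch (not the authors' code): weight per-relation embeddings with
# relation-level attention, then concatenate the intermediate layer embeddings.
import torch
import torch.nn as nn

class RelationAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Parameter(torch.randn(d_model))   # relation-level attention vector q
        self.w = nn.Linear(d_model, d_model)          # corresponds to W_2 and bias b

    def forward(self, h_per_rel: torch.Tensor) -> torch.Tensor:
        # h_per_rel: (R, N, d) node embeddings under each of the R relations
        scores = torch.tanh(self.w(h_per_rel)) @ self.q       # (R, N)
        w_r = scores.mean(dim=1)                              # average over all nodes -> (R,)
        alpha = torch.softmax(w_r, dim=0)                     # normalized relation importance
        return (alpha.view(-1, 1, 1) * h_per_rel).sum(dim=0)  # (N, d) fused embedding

def fuse_layers(layer_embeddings):
    # layer_embeddings: list of (N, d) tensors, one per GNN layer; returns (N, L*d)
    return torch.cat(layer_embeddings, dim=1)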
§.§ Learning Global Information
Given the above spatial-temporal information learning, the next step is how to obtain global information for the target node which exhibits similar behavioral patterns. Inspired by <cit.>, we use a transformer layer for each node individually. Specifically, a multi-head attention mechanism is performed on the above obtained embedding matrix that denotes H^v_i∈ℝ^N × d, where v_i represents the node v_i. Initially, we present the single-head attention approach, which is subsequently expanded to a multi-head attention mechanism. Firstly, we perform a linearly project on H^v_i to obtain queries, keys, and values as follows:
Q^v_i=H^v_iW^Q, K^v_i=H^v_iW^K, V^v_i=H^v_iW^V,
where W^Q, W^K, W^V are the trained projection matrices, respectively. Hence, the single-attention function is defined as:
Attention(H^v_i) =softmax(Q^v_i(K^v_i)^T/√(d_k))V^v_i
= softmax((H^v_iW^Q)(H^v_iW^K)^T/√(d_k))H^v_iW^V,
The multi-head attention function can be expressed as the concatenation of the outputs from the individual attention heads:
Multihead(H^v_i) = Concat(head_1,...,head_s)W^O,
head_s = attention_s(H^v_i)
= softmax((H^v_iW_s^Q)(H^v_iW_s^K)^T/√(d_k))H^v_iW_s^V,
where W_s^Q, W_s^K, and W_s^V are the projection matrices of the s-th attention head, respectively. W^O denotes also a linear projection. Subsequently, the output of the multi-head attention layer is passed through a point-wise feed-forward neural network, a residual layer, and a normalization layer. This sequential process ultimately leads to the update of the representations for all nodes, which are denoted as H^v_i_out∈ℝ^N × d, such that local-global information can be captured.
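A minimal sketch of such a Transformer block over the stacked node embeddings is shown below; it is illustrative only, the head count and feed-forward width are assumptions (the embedding dimension is assumed divisible by the head count), and the full pairwise attention is quadratic in the number of nodes.

# Illustrative sketch (not the authors' code): one Transformer encoder block that lets
# every node attend to every other node, capturing global pairwise interactions.
import torch
import torch.nn as nn

class GlobalTransformerLayer(nn.Module):
    def __init__(self, d_model: int, num_heads: int = 4, d_ff: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, d) node embeddings; the whole node set is treated as one sequence
        x = h.unsqueeze(0)                      # (1, N, d)
        attn_out, _ = self.attn(x, x, x)        # multi-head pairwise node-node attention
        x = self.norm1(x + attn_out)            # residual connection + layer norm
        x = self.norm2(x + self.ff(x))          # point-wise feed-forward + residual + norm
        return x.squeeze(0)                     # (N, d) updated embeddings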
§.§ The Prediction Layer
For each node v, we generate the final representation h_out^v_i by integrating the above local spatial-temporal information and global information. Subsequently, the MLP classifier is employed to achieve the node classification task, that is, the identification of fraudulent transactions. The optimization of this process is carried out by the cross-entropy loss function <cit.>.
L = -∑_v∈ V[y_vlogP_v+(1-y_v)log(1-P_v)],
P_v= σ(MLP(z_v)),
where y_v denotes the real label of node v.
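A possible realization of this prediction layer and objective is sketched below; the hidden width is an assumption, and the code is illustrative rather than the authors' implementation.

# Illustrative sketch (not the authors' code): MLP classifier producing P_v and the
# cross-entropy objective over labeled nodes.
import torch
import torch.nn as nn

class FraudClassifier(nn.Module):
    def __init__(self, d_in: int, d_hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(z)).squeeze(-1)   # P_v: predicted fraud probability

def cross_entropy_loss(p: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # L = -sum_v [ y_v * log P_v + (1 - y_v) * log(1 - P_v) ]
    eps = 1e-7
    return -(y * torch.log(p + eps) + (1 - y) * torch.log(1 - p + eps)).sum()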
§ EXPERIMENTS
In this section, we perform the experiments to investigate the superiority of the proposed method STA-GT on our transaction fraud detection tasks.
§.§ Datasets and Graph Construction
We conduct experiments on one private dataset and one public dataset to indicate that STA-GT achieves significant improvements compared to both classic methods and state-of-the-art GNN-based fraud detectors.
The private dataset, PR01, consists of 5.2 million transactions that took place in 2016 and 2017. Transactions are labeled by professional investigators of a Chinese bank, with 1 representing fraudulent transactions and 0 representing legitimate ones. In data pre-processing, we first utilize the down-sampling of legitimate transactions to solve the imbalanced problem. Then, we apply one-hot coding and min-max normalization to handle the discrete and continuous values, respectively. For our experimental setup, the training set comprises transactions from the first month, while the remaining transactions are partitioned into five distinct groups (PR1 to PR5) to serve as the test set. Transactions are represented as nodes, and there exist two relations among these nodes. Specifically, the Trans-IP-Trans relation links transactions that occurred at the same IP address. The Trans-MAC-Trans relation is employed to establish links between transactions that have occurred on the same MAC address.
The TC dataset[https://challenge.datacastle.cn/v3/] contains 160,764 transaction records collected by Orange Finance Company, including 44,982 fraudulent transactions and 115,782 legitimate transactions. We perform the same data processing as for the private dataset. The training set utilized in this study comprises transaction records from a designated week, while the subsequent week's transaction records constitute the test set. In this way, the TC dataset is split into TC12, TC23, and TC34. Transactions are also represented as nodes, and there exist four relations among these nodes: Trans-IP-Trans, Trans-MAC-Trans, Trans-device1-Trans, and Trans-device2-Trans.
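The multi-relation graph construction described for both datasets can be illustrated with the following sketch, which links transactions that share an attribute value; the record fields and values are hypothetical, and only the linking logic reflects the construction described above.

# Illustrative sketch (not the authors' code): build one edge list per relation by
# connecting transactions that share an attribute value (same IP, MAC, device, ...).
from collections import defaultdict
from itertools import combinations

def build_relation_edges(records, key):
    # records: list of dicts with a 'txn_id' and the grouping attribute `key`
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["txn_id"])
    edges = []
    for txns in groups.values():
        edges.extend(combinations(txns, 2))  # all pairs of transactions sharing the value
    return edges

records = [  # hypothetical transactions
    {"txn_id": 0, "ip": "1.2.3.4", "mac": "aa:bb"},
    {"txn_id": 1, "ip": "1.2.3.4", "mac": "cc:dd"},
    {"txn_id": 2, "ip": "5.6.7.8", "mac": "aa:bb"},
]
edges_by_relation = {
    "Trans-IP-Trans": build_relation_edges(records, "ip"),    # [(0, 1)]
    "Trans-MAC-Trans": build_relation_edges(records, "mac"),  # [(0, 2)]
}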
§.§ Baselines
We compare the proposed method STA-GT with homogeneous GNNs (GCN, GraphSAGE, and GAT), heterogeneous GNNs (RGCN, HAN), and GNN-based fraud detectors (CARE-GNN, SSA, and FRAUDRE) to demonstrate its superiority. The baselines we choose are introduced as follows:
* GCN <cit.>: It is a traditional homogeneous GNN method that employs the efficient layer-wise propagation rule based on the first-order approximation of spectral convolutions.
* GraphSAGE <cit.>: It is an inductive framework for learning node embedding from selective local information on the homogeneous graph.
* GAT <cit.>: It is a homogeneous GNN method that leverages a self-attention strategy to specify the importance of neighbor nodes.
* CARE-GNN <cit.>: It is a heterogeneous GNN architecture that effectively addresses the challenge of camouflages in the aggregation process of GNNs.
* Similar-sample + attention SAGE (SSA) <cit.>: It is a GNN method performed on a multi-relation graph, which proposes a new sampling policy and a new attention mechanism to ensure the quality of neighborhood information.
* RGCN <cit.>: It is a GNN method designed to address the challenges posed by complex, multi-relational data in tasks including entity classification and link prediction.
* HAN <cit.>: It is a heterogeneous GNN method that employs hierarchical attention at both node and semantic levels, enabling the incorporation of the significance of both nodes and meta-paths.
* FRAUDRE <cit.>: It is a graph-based fraud detection framework to effectively tackle the challenges of imbalance and graph inconsistency. To handle feature inconsistency and topology inconsistency, the model integrates several key components, including the topology-agnostic embedding layer, the fraud-aware graph operation, and the inter-layer embedding fusion module. Moreover, to mitigate the impact of class imbalance, the imbalance-oriented loss function is introduced.
Note that the above methods, including GCN, GraphSAGE, GAT, and SSA, are applied to homogeneous graphs, treating each relation equally. CARE-GNN, RGCN, FRAUDRE, and HAN are used on multi-relation graphs to aggregate information under different relations.
§.§ Evaluation Metrics
To compare the performance of our approach STA-GT with the baseline models, Recall, F1, and AUC are adopted as evaluation metrics. The metrics are briefly calculated as follows:
Recall= T_P/(T_P+F_N),
where T_P, F_N, and F_P are the numbers of true positive, false negative, and false positive transaction records, respectively <cit.>.
Precision= T_P/(T_P+F_P),
F_1= 2 × Recall × Precision/(Recall+Precision),
AUC= (∑_r ∈ℛ^+rank_r- |ℛ^+| × (|ℛ^+|+1)/2)/(|ℛ^+| × |ℛ^-|),
where ℛ^+ and ℛ^- are the fraudulent and legitimate class sets and rank_r is the rank of r by the predicted score.
For the mentioned metrics, a higher value indicates better model performance.
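For reference, these metrics can be computed from predicted fraud probabilities as in the short sketch below (using scikit-learn; the example values are arbitrary).

# Illustrative sketch: computing Recall, F1, and AUC for the fraud class.
from sklearn.metrics import recall_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0]               # 1 = fraudulent, 0 = legitimate
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6]   # predicted fraud probabilities
y_pred = [int(p >= 0.5) for p in y_prob]  # threshold at 0.5

print("Recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_prob))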
§.§ Performance Comparison
We conduct a comparative analysis between the proposed method STA-GT and the baseline models on two financial datasets. The results are reported in Tables. I and II. We have the following observations.
Compared to GCN, GraphSAGE, and GAT modeled on the graph with a single relation, RGCN and HAN running on the multi-relation graph did not perform better on the two datasets. The reason is that directly employing GNN models for the identification of fraudulent transactions is unsuitable. While the utilization of multi-graphs offers a broader range of information and more complex relationships, it is crucial to handle node interactions with caution and avoid introducing dissimilarity information, ensuring the opportunity for enhanced performance. FRAUDRE has achieved promising performance by introducing the fraud-aware module and an imbalance-oriented loss function to tackle graph inconsistency and imbalance issues.
As shown in Tables. I and II, the proposed method STA-GT outperforms all baselines with at least 3.8%, 1.1%, 1.7%, 5.1%, 5.2%, 9.9%, 4.3%, and 15.0% Recall improvements on all datasets.
Meanwhile, STA-GT outperforms the other baselines with at least 2.7%, 9.9%, 1.3%, 3.4%, 11.6%, 3.3%, 0.6%, and 14.1% F_1 improvements. The AUC score of our method also improved on most datasets. These experimental results provide strong evidence of the superiority of STA-GT for the identification of fraudulent transactions.
§ CONCLUSION
In this paper, we propose a novel heterogeneous graph neural network framework called STA-GT to tackle the transaction fraud detection problem. To integrate spatial-temporal information and enlarge the receptive field, we design the temporal encoding strategy and combine it with heterogeneous graph convolution operation to learn node representations. Furthermore, a transformer module is built on the top of the above GNN layer stack to learn global and local information jointly. It utilizes informative but long-distance transaction records effectively, which can ensure both intraclass compactness and interclass separation.
Experimental results on two financial datasets show the superiority of STA-GT on the transaction fraud detection task. In subsequent work, we will explore how to improve the explainability of the GNN model.
|
http://arxiv.org/abs/2307.04036v1 | 20230708195101 | Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations | [
"Tong Steven Sun",
"Yuyang Gao",
"Shubham Khaladkar",
"Sijia Liu",
"Liang Zhao",
"Young-Ho Kim",
"Sungsoo Ray Hong"
] | cs.HC | [
"cs.HC",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
George Mason University
USA
[email protected]
Emory University
USA
[email protected]
George Mason University
USA
[email protected]
Michigan State University
USA
[email protected]
Emory University
USA
[email protected]
NAVER AI Lab
Republic of Korea
[email protected]
George Mason University
USA
[email protected]
The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs.
Through our formative study (S1), however, we captured ML engineers' ambivalent perspective on the local explanation: a valuable and indispensable aid in building CNNs versus a process that exhausts them due to the heuristic nature of detecting vulnerability. Moreover, steering CNNs based on the vulnerability learned from the diagnosis seemed highly challenging. To mitigate the gap, we designed our system, the first interactive design that realizes a direct feedback loop between a user and CNNs in diagnosing and revising a CNN's vulnerability using local explanations.
The system helps CNN engineers systematically search for “unreasonable” local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotation such that the model does not introduce similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using the system, participants made a more accurate and “reasonable” model than the current state-of-the-art. Also, participants found that the way the system guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.
[500]Computing methodologies Learning settings
[500]Human-centered computing Human computer interaction (HCI)
[500]Human-centered computing Interaction paradigms
[500]Computing methodologies Machine learning
Sungsoo Ray Hong
====================
§ INTRODUCTION
As the societal impact of Computer Vision (CV) models grows <cit.>, it has become crucial to find an effective way to steer Convolutional Neural Networks (CNNs) to align their behaviors with users' mental model <cit.>.
Using Explainable AI (XAI) techniques can be the first step to steering Machine Learning (ML) models, as spotting repeating cases that “surprise” ML engineers for a similar reason can help the engineers to generalize the cases to a bigger pattern that signals the vulnerability of their model <cit.>. While XAI techniques are increasingly becoming essential for revising ML models, there are relatively fewer options available for CNNs <cit.>.
Among few, local explanation–the technique that overlays a saliency map on a single image to visualize the attentive areas that the model referred to–has been widely used by tremendous ML engineers due to its visual straightforwardness <cit.>.
By seeing the attention of a model, a user can assess whether the rationale behind the prediction is reasonable <cit.>.
Checking the reasonableness of CNN's “attention” through local explanation can improve CNN's performance in two ways.
First, checking the attention can help ML engineers to identify the bias of a dataset used in training.
In diagnosing a gender classifier, for example, if a model is attentive to contextual objects, such as “snowboard” to predict a man <cit.> or “shopping cart” to infer a women <cit.>, it means that these contextual objects often appear with a specific gender class in the training dataset. As a result, such an imbalanced distribution of contextual objects causes the model attention to be biased towards contextual objects rather than focusing on the person in the image to classify the gender <cit.>.
Using a biased dataset can induce a model to reference contextual objects in prediction, which is defined to be unfair <cit.>.
Therefore, diagnosing CNNs using local explanation can reduce bias ingrained in a training set, leading the forthcoming model to be fairer <cit.>.
Second, detecting unfair predictions through local explanation can lead to a more robust and generalizable model with stable accuracy. The repeated occurrence of unfair predictions is related to the vulnerability of a CNN, which can be essential for defending against malicious attacks.
For example, imagine that an attacker found a gender classifier that tends to classify images with snowboards as men. In that case, the attacker can prepare counter-contextual examples that show women riding snowboards in a backdoor attack to drop the model accuracy.
Steering CNNs to fix the found vulnerable patterns can thus yield a model that provides stable accuracy performance regardless of object types appearing in future images.
In summary, if the dataset used in training is biased <cit.>, the model fails at demonstrating reasonable attention for specific predictions, which we call to be unfair predictions <cit.>.
Such unfair cases, in turn, make the CNN model vulnerable <cit.>.
Collectively, the phenomenon of a CNN shifting attention in an unreasonable way due to biased data refers to the problem of contextual bias <cit.>.
While contextual bias has become a highly crucial issue in ML and beyond <cit.>, spotting the vulnerability and steering the model is highly challenging or not even feasible <cit.> even for experienced ML engineers <cit.>.
Detecting unreasonable attention through local explanation can be “just noticeable” from human eyes, but the current solutions are predominantly a machine-centric approach with limited human involvement <cit.>.
In Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW), despite the rich body of research dedicated to better supporting ML engineers <cit.>, little effort has been made to design interfaces that can efficiently and effectively steer CNNs to mitigate contextual bias.
Further, while there exists a breadth of empirical studies focused on understanding the ML engineers' practice, challenges, and design opportunities (e.g., <cit.>), it is not well understood how ML engineers apply local explanation in steering CNNs to mitigate contextual bias or what the practical challenges are.
Through this work, we aim to bridge the technical and empirical gaps we identified in the problem of contextual bias.
Specifically, we aim to create a novel interactive system that can empower ML engineers to leverage local explanations in diagnosing the vulnerability of CNNs and steer them.
To inform our design based on real practice, we conducted a formative study (S1) with five industry CNN experts who have more than 5 years of model development.
We sought to understand how they use local explanations, what the limitations of existing tools are, and how the new design can practically help their practice.
As a result, we identified 3 challenges and 3 desires that we were able to use to streamline their process in our new design.
Based on the findings, we devised our system, the first interactive system that realizes a direct feedback loop that connects a user and a CNN using local explanations for model steering.
First, it enables a user to systematically categorize unreasonables—the images that have overlaps between the model attention and contextual objects—among images used in validation.
Next, for the categorized unreasonables, it suggests the “reasonable” attention boundary that excludes contextual objects to help a user effortlessly finish the annotation task required for steering.
Third, using the user-confirmed boundary input, it steers the target model by optimizing both the prediction loss and attention loss (minimizing prediction errors and shifting the model's attention towards confirmed “reasonable” areas).
Finally, it helps a user to see what has been changed before and after steering.
In particular, it provides the evaluation results regarding (1) how the attention quality has become reasonable and (2) how the improved model attention quality affected the model accuracy performance.
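The kind of joint objective described above (a prediction loss combined with an attention loss against a user-confirmed boundary) can be sketched as follows; this is a conceptual illustration rather than the system's actual implementation, and the mask format and weighting term are assumptions.

# Conceptual sketch (not the system's code): combine the usual prediction loss with an
# attention loss that pulls the model's saliency toward a user-confirmed region.
import torch.nn.functional as F

def joint_steering_loss(logits, labels, saliency, boundary_mask, lam=1.0):
    # logits: (B, C) class scores; labels: (B,) ground-truth classes
    # saliency: (B, H, W) differentiable attention maps (e.g., Grad-CAM), scaled to [0, 1]
    # boundary_mask: (B, H, W) user-confirmed "reasonable" region (1 inside, 0 outside)
    pred_loss = F.cross_entropy(logits, labels)
    attn_loss = F.l1_loss(saliency, boundary_mask)  # penalize attention outside the boundary
    return pred_loss + lam * attn_loss              # lam trades off the two terms (assumed)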
In the summative study (S2), we evaluated our system with 12 experienced CNN builders, asking them to revise a gender classifier across two days.
We found that using the system enabled every participant to achieve better model accuracy and model attention quality than applying state-of-the-art techniques.
Meanwhile, we also found that over 80% of the participants perceived that using the system would improve their capability regarding model vulnerability assessment and performance improvement.
Based on the two studies, we provide implications for design on Beyond XAI—how the future design can convert XAI-driven insights into actionable steering plans such that the AI's behavior can gradually be aligned to the human mental model.
This work offers the following contributions:
* S1: Understanding How Local Explanation Is Used in Improving CNNs: We extend our knowledge about how field practitioners apply local explanations when working on CNNs and what the challenges are. Based on the analysis, we suggest how new design can mitigate their difficulties in steering CNNs.
* Design Contribution:
We devise and instantiate our system, a novel, end-to-end, and interactive design that enables ML engineers to practice a systematic case-based vulnerability diagnosis and model steering.
* S2: Understanding the Effect of the System: Through the study with 12 experienced CNN developers, we understand how the new design can make a difference in building more accurate and robust CNNs.
* Implications for Design for Steerable AI: Based on the results of S1 and S2, we provide how the HCI and CSCW communities can contribute to converting XAI-driven insights more useful and actionable through steerable AI design.
§ RELATED WORK
In this review, we first dive deeper into understanding the problem of contextual bias and explain how unreasonable model attention can detrimentally affect CNN's model performance.
Second, we review landmark XAI-driven systems in HCI devised for diagnosing Deep Neural Networks (DNNs) and discuss how the findings can be applied to resolve the problem of contextual bias through an interactive system.
Next, we cover how the recent advance in explanation-guided steering techniques can be applied to implement an interactive and integrated model steering environment.
Then we highlight the remained technical and empirical challenges in HCI.
When CNNs are not trained properly with generalized and representative datasets, there can be various kinds of bias that can introduce several weaknesses in the model performance <cit.>.
Imagine that one engineer is preparing a set of images for training a dog detection model.
In preparation of data, 50% of the images would show a dog to balance positive and negative cases <cit.>.
The problem can start when some contextual objects, such as a ball, appear more frequently in positive cases than negative <cit.>.
Using such a biased dataset, a model would establish a “spurious” correlation between a dog and a ball <cit.>.
In such a case, the model's attention visualized through local explanation is on the ball rather than a dog <cit.>.
Consequently, when bringing an image that shows a ball, the model may likely say that it detected a dog by seeing a ball regardless of a dog appearing in the image <cit.>.
As such, this phenomenon of “contextual bias” refers to the case where a model's attention is shifting to contextual objects which are not directly relevant to the model's goal <cit.>.
Consequently, using this potential vulnerability, an attacker may be able to drastically decrease model accuracy by showing the ball images without dogs <cit.>.
Furthermore, a CNN shifting its focus to a contextual object raises fairness issues <cit.>.
While model accuracy is accepted as a “golden standard” in modern ML research for evaluation, there is growing concern that putting insufficient emphasis on the quality of model explanation can lead us to have a technical debt <cit.>.
This aspect of a CNN's blind decision made by referring to contextual objects has become crucial in the Fairness, Accountability, and Transparency (FAccT) community and beyond <cit.>.
In handling contextual bias, several studies outside of HCI commonly apply mathematical approaches rather than incorporating human input <cit.>.
For example, Singh et al. used Class Activation Maps as a “weak” automatic attention annotation <cit.>.
Feature augmentation <cit.> is another technique proposed for de-biasing using disentangled representation.
Hirota et al. provided a way to analyze skewed data distributions to attain unbiased human-like reasoning <cit.>.
While each method has its pros and cons, there has been no ideal breakthrough.
In recent years, ML communities' approaches are gradually shifting towards involving more human inputs <cit.>.
Aligning with this direction, local explanations, such as Grad-CAM <cit.>, started to catch attention as an XAI technique that can mitigate contextual bias. It enables a user to spot the unreasonable model attention at a glance, and perhaps this aspect makes the technique the most widely used XAI technique for investigating CNNs <cit.>.
Meanwhile, in HCI and CSCW, despite the wide range of novel systems proposed for helping ML engineers <cit.>, we did not identify a system that directly focuses on handling contextual bias.
When we scope the approaches related to Deep Neural Networks, we found the two perspectives useful in handling contextual bias through local explanation.
The first takeaway is that a bottom-up approach—the design that helps users understand the vulnerable patterns by exploring specific cases through local explanation <cit.>—can provide a more straightforward and intuitive flow than a top-down approach which aims at helping a user to understand global structure or rules to explain how DNNs make a prediction <cit.>.
Prospector <cit.> and What-if tool <cit.> belong to the bottom-up design that can help ML engineers to see the instance-level of prediction cases to gradually realize a set of patterns for making prediction <cit.>.
On the other hand, top-down approaches include XAI techniques and visual analytic components to help a user to understand the “landscape” of prediction rules, structure, and decision boundaries.
For instance, Squares <cit.> and Blocks <cit.> are some of the earliest designs that explain how DNNs predict the multi-class problem.
MLCube Explorer <cit.>, TwoRavens <cit.>, and Visus <cit.> present the model comparison feature, helping ML engineers more easily decide the model they would like to deploy.
ActiVis <cit.>, RuleMatrix <cit.>,
CNN Explainer <cit.>, ExplainExplorer <cit.>, DeepEyes <cit.>, RNNVis <cit.>, NeuroCartography <cit.>, and Dodrio <cit.> fall into the visual analytics category.
The second takeaway is that including every feature required for assessment and steering in a single, end-to-end system can reduce the cost of switching context between diagnosis and refinement <cit.>.
EnsembleMatrix <cit.>, ModelTracker <cit.>, TensorFlow Graph Visualizer <cit.>, and explAIner <cit.> present end-to-end environments that combine diagnosis and model refinement.
This review concludes that local explanations can help a user easily diagnose model vulnerabilities related to contextual bias in a bottom-up fashion. Meanwhile, including both diagnosis and steering in a single system can further help ML engineers. In realizing this design goal, the first technical challenge is understanding how to steer a CNN upon finding unreasonable model attention.
In recent years, new techniques have enabled steering the AI's behavior using human input through local explanation.
For example, Attention Branch Network <cit.> is a pioneering method that allows humans to directly adjust the boundary of model attention.
More advanced techniques, such as GRADIA <cit.>, RES <cit.>, and GNES <cit.> have been proposed.
While they can be potentially effective, they have not yet been surfaced to or used by ML engineers through interactive systems.
The second challenge is the lack of studies aimed at understanding how ML engineers practice and perceive local explanations in their CNN building workflow.
There has been a series of empirical studies aimed at learning the workflow of ML engineers and data scientists. The directions include understanding how they use XAI tools <cit.>, how ML beginners learn XAI tools to work on their model building <cit.>, how ML experts view the automated AI <cit.>, how ML experts collaborate in using XAI tools, and beyond <cit.>.
Despite the popularity of local explanations, we didn't identify the work specifically focusing on understanding ML engineers' current practices and challenges.
We therefore believe that an interactive system is essential to bridge the gap between computational techniques and human-centered design to diagnose and resolve contextual bias.
Since diagnosing and steering a CNN is a deep cognitive process that requires dense and repetitive interaction with a system, conducting a formative study in advance would increase the chance of yielding a practically useful design <cit.>.
§ STUDY 1: FORMATIVE STUDY
Through the reviews, we defined our specific goal of designing an interactive system that can mitigate contextual bias embedded in CNNs.
In doing so, we learned that local explanation provided in a bottom-up fashion could help a user efficiently and effectively examine a CNN's vulnerable patterns and steer it.
To ground our design considerations in real practice, we conducted a formative study with industry practitioners.
§.§ Method
We conducted open-ended, semi-structured interviews with professional CNN developers.
In recruiting them, we posted a flyer on a company bulletin board and communicated with industry acquaintances who use local explanations.
As a result, we recruited five experts with an average of over 5 years of experience building state-of-the-art CNN solutions in their field (see Table <ref>).
In shaping the detail of the interview, we strictly followed the interview methodology in HCI <cit.>.
First, in scoping our directions of inquiry, we motivated participants to focus on sharing their lived experiences, specifically about their practice and perception of local explanation but not discouraging them from connecting their story about local explanation with other experiences.
Consequently, in designing our questions (shown in Appendix A), we started with their general background and workflow in the early phase as follows.
In particular, we asked about their (1) roles and areas of expertise, the (2) CNNs they build, and (3) their development settings and tool belts.
Then we moved to their local-explanation-related questions aiming to learn their (4) workflows, (5) reasons-of-use, (6) challenges in using local explanation, and (7) their wish lists.
Second, to construct an appropriate dialogue with our participants, two authors participated in every interview; both completed HCI-centered training in their PhDs and currently work on specialized domains of Human-AI Interaction and Deep Learning in academia and industry, respectively.
One author proceeded with the interview with questions, while the second author asked follow-up questions to gain more specific insights.
In our interview, we collected 4 hours and 31 minutes of video. On average, each interview lasted 54 minutes, ranging from 37 minutes to 67 minutes in total.
In our analysis, we used a qualitative coding process <cit.> which entails two authors' coding, diagramming, and consensus-based theme generation.
First, the two authors each created, using the interview records, initial sets of codes, and memos <cit.>.
Second, they shared the codes and analyzed the emerging commonalities and discrepancies related to the perceived challenges and desires. For the discrepancies, the two authors discussed the reasons for the disagreement and decided whether each matter could be agreed upon or merged into existing commonalities.
Finally, after reflecting on each other's code choices, they reviewed all the coded text, quotes, and memos to refine and derive the final structure.
§.§ Results
From every participant, we heard strong reasons why they apply local explanations in their practice.
The overarching reason they apply explanation in their workflow is predominantly related to retaining the “generalizability” of their model.
Generalizability describes the degree to which the model would “shake” when it encounters unexpected cases it has not seen before.
P5 mentioned: “we strongly believe that that's the way to go, those sorts of visualizations are clearly the path towards understanding how to improve the model. I think it's a required envision. If the mistake is turned out to be unreasonable, I'm going to explore my data and see why it's not robust enough.”
P4 shared his interesting observation that accurate prediction and reasonable attention might be somewhat correlated.
He believed it was more crucial for a model to focus on the right gaze, making it robust to unexpected cases, than to optimize performance on the test set, as one cannot prepare a perfect dataset that represents every case equally.
All participants shared their experiences about the cases of spotting unreasonable attention in checking the vulnerability to remove the model's weakness.
P3 mentioned that he uses local explanation in the model comparison task mainly because it can be a good indicator of how robust the model can be:
“I see model behaves very differently task-by-task. ResNet works very well in one task, and VGG works well in a different task. I have no idea why. And the local explanation tells me why.”
While attaining a CNN's generalizability has been discussed in previous literature, our findings extend the existing work in two directions.
First, we identified the three practical challenges they are encountering when applying local explanation in their workflow every day.
Second, we also identified the three future desires that the current local explanation-driven techniques cannot realize but could be achieved with future solutions.
§.§.§ Challenges
C1. Iterative and Exhaustive Diagnosis:
In diagnosing their model through local explanation, participants expressed the process as “nothing is given”.
In detecting vulnerable patterns using local explanation, participants proactively and iteratively shaped their assumptions and collected cases.
Generally, participants went for several rounds of iterative target image selection and local explanation generation.
This generation was made based on their dense inductive and deductive reasoning.
The aspect of iterative case-based reasoning seemed to entail nontrivial labor, which exhausts ML engineers.
P1 mentioned: “I wish I could check the (saliency) maps for every case. But coding to layout multiple maps takes some effort and does not become feasible as the dataset gets bigger. In the end, I normally have to compromise, just checking instances in an inaccurate category if I'm lucky, or even fewer.”
P3 developed a multi-classifier that has 4,000 to 5,000 classes. He mentioned that the required mental effort for detecting vulnerable attention grows exponentially as the number of classes increases. In the end, he can only consider a few “major” classes.
Many of our participants remarked that their model vulnerability analysis using local explanation is mostly a group effort, and sharing insights with colleagues adds even more time.
For P2's case, his group made a web-based tool where the team member can upload image groups and show the local explanation results for discussion due to the complexity of coding and positioning on a screen.
C2. Ad-Hoc Diagnosis Leads to Uncertainty:
The next challenge that our participants mentioned was the uncertainty they had to cope with in determining the vulnerable patterns.
They seemed to suffer from two types of uncertainty.
Since finding the vulnerable patterns stems from their intuition, our participants mentioned that there is no guarantee that their selection covers every major and minor vulnerability type.
In addition, upon spotting local explanations that gaze at unreasonable objects, they had to decide whether the cases were merely noise or a signal that leads to a vulnerable pattern.
Often, our participants' vulnerability determination process was done on their “gut feeling”, which made them perceive the process as heuristic and ad-hoc.
P2 mentioned: “I feel like showing the pros and cons of model's attention using local explanation is cherry picking, in many cases. Even if someone says the quality of model attention is good or bad with some examples, there is no ground one can say the cases represent a real pattern or merely subtle noise that won't likely happen in the future.”
P3 shared a similar difficulty: more classes tend to produce more bad-attention cases, and even when these problematic cases are identified, they may well recur in the future.
P4 said that the hardship in verifying the severity of the vulnerability is closely related to the fact that there is no measure that we can rely on to see the “impact of the detected cases” from the perspective of the whole dataset.
There was a minor opinion that their feeling of uncertainty in the process was connected to the doubt about the diagnosis results.
For instance, P1 mentioned that he doesn't believe he can completely remove the bias no matter how much effort he may put in or what tools he may use.
C3. Hard to Steer as Intended: Every participant agreed that changing the model's future behavior from learned insights is challenging or often not feasible.
P5 mentioned that the insights were not actually insightful as they are often unactionable:
“Surprisingly, it wasn't really insightful when we looked at the mistakes our model made, and the saliency map was totally unreasonable. It was like it doesn't know what to do here, something is missing, architectural leap or something I don't know, we didn't quite solve a lot of the failure cases.”
He also shared his “dream tool” idea for instant attention adjustment: a drawing application with which he could manually guide CNNs to focus on previously missed features of images and retrain through backpropagation.
P1 mentioned his current struggle to fix a model by fortifying the training set, such as adding more data to counterbalance the failure class. He still looked for alternative methods as the performance was not promising.
§.§.§ Desires
D1. The Way to Interact: Beyond Command Line:
Some mentioned that local explanations cannot fully realize their potential through command-line interfaces, as creating them requires nontrivial work.
This aspect is connected to C1; participants feel making multiple queries for selecting images and examining model attention can become arduous.
From the interaction design's perspective, shifting the command line-based interface to a directly manipulatable GUI can streamline the process.
P1 remarked: “I feel like a complex task like this (vulnerability diagnosis), we would mostly benefit from GUI rather than a tool with a command line. It takes too long to create saliency maps. Showing the maps with different selection criteria and sorting can be super helpful.”
By lowering the cost of creating local explanations, participants could examine a larger volume of model attention results more effectively than with the current design.
Some also mentioned the necessity of reorganizing results after each search, which was not easy with current tools. P4 always looked for failure cases manually but struggled when there were too many cases. He suggested summarization or pre-filtering features that prioritize interesting cases.
This finding indicates it is worth considering designing an interactive analytic system that enables a user to easily formulate the query and see the results.
D2. Evaluating Model: Model Accuracy and Beyond:
We had multiple chances to hear participants' voices regarding what they care about when it comes to evaluating their models.
In particular, we found that our participants shared the consensus regarding the model accuracy as a gold standard metric that should not be sacrificed even though the purpose of revision is not for boosting model accuracy (e.g., mitigating contextual bias).
For instance,
P4 was very curious to see whether improving model attention could improve model accuracy, and if the model were not improved, he would care less about attention quality improvement.
P5 also mentioned the tension between fairness and accuracy in model development: “I had much of a concern for fairness in my practice, it was more the kind of thing where prioritizing fairness connects to increasing failure case. This would result in my client making less money. If it was a courtroom, there's a much stronger debate here. But it's very serious in industrial cases that fairness is important, but the accuracy is still the king.”
At the same time, they shared their concern that the way the current tools provide the model accuracy is not enough to understand how accurate and how reasonable their models are.
P2 found it very difficult to check the saliency maps for accurate cases, and he felt uncomfortable making decisions that overlook accurate cases since doing so could penalize model generalizability. In the long run, he focused less on test-set performance than on generalizability.
This internal tension helped us realize the delicate view of the way ML experts see model accuracy. It's still the “King” that should not be compromised, but they may still need more than that to make their model generalizable and trustworthy enough.
D3. A Balance between “Pain” and “Gain”:
One aspect we learned from our participants is that ML engineers are generally more conservative about testing a new feature using a human-in-the-loop-driven approach than we thought due to its high cost.
Regarding the idea of using human input for steering CNNs, some participants mentioned that the direction has potential but would only work if the workload is manageable.
For instance, P3 mentioned that he might not likely use the new tool if the expected effort is more than what they are currently investing in for the model diagnosis.
Not surprisingly, many participants mentioned the difficulties in eliciting data from in-house annotators or workers in crowdsourcing platforms.
P5 said: “The workflow of human-in-the-loop to adjust attention using human help, no one would say it's a bad idea that you could include humans and get more data and improve it. This is an obvious virtuous aspect, but it's not like you just sign up for data bricks, and you're done. Getting human labels would probably need a little bit of training. You don't want that to be an expense to ML engineers.”
This aspect helped us realize that for a practical tool to be readily adopted, it must automate the vast volume of work via intelligent automation and minimize the need for human outsourcing.
§.§ Design Considerations
While we found that the local explanation serves as an indispensable tool for diagnosing the vulnerability of participants' data and model, they suffered in each stage of C1: detecting cases that signal vulnerable patterns, C2: verifying them to be “real”, and C3: steering.
Meanwhile, we also found they desire to D1: have an interactive and directly manipulatable design that can cut down their effort for writing lots of queries
and parameters, D2: use the product that can improve the model accuracy while also improving the quality of model attention to be reasonable, and D3: enable users to achieve the new feature with a reasonable size of additional labor.
As D1 suggests, we were able to find the reason why the interactive interface can be well appreciated by ML engineers, especially when completing their task requires deep thinking and iterative interactions with their tool.
In designing the system, we further synthesize our findings and establish the design considerations as shown below. Table <ref> also shows how the participants (“PID”) support the identified challenges (“C”), desires (“D”), and design considerations (“DC”).
* DC1. Semantic local explanation browser:
Seeing the results of local explanations for finding the cases that signal vulnerable patterns is the first stage to mitigating contextual bias.
In this stage, providing a semantic browser, with which users can see, rank, and select the dominant semantic object types observed within the model's area of attention for every image, could reduce ML engineers' uncertain feelings and save them time.
In building a dog detector, this feature may enable a user query such as “find every image attentive on treat” or “rank every object type by its occurrence in a dataset.”
Descriptive statistics, such as how frequently the object types appear, can help users understand the degree to which the object grabs the model's attention.
DC1 addresses C1, C2, and D2 (based on all 5 participants).
* DC2. Labor-efficient selection of “unreasonables” and adjustment of their attention boundaries:
Using the browser, users can diagnose a CNN by finding the cases that show unreasonable attention (“unreasonables”, hereinafter).
Then the users would annotate the areas that can make the attention reasonable.
The system would need to provide this annotation with a lightweight interaction cost.
DC2 is related to D3 (based on 2 participants: P3 and P5).
* DC3. The fine-tuning mechanism that can boost both model accuracy and model attention quality:
One of the most evident consensuses among the participants was their difficulties in steering CNNs.
Therefore, the tool must help users clearly understand how the quality of the CNN's model attention, visualized through local explanation, has changed based on the input the users provided.
While doing so, the tool must not compromise the model's accuracy performance.
DC3 is derived from C3 (based on 2 participants: P1 and P5).
* DC4. Evaluation results that show what has been changed:
The last stage of the workflow would be to help users understand how their attempts made a difference.
In showing the differences, providing a set of views that show the changes in model prediction accuracy and in model attention quality, together with a combined view that explains how the change in attention relates to accuracy, would facilitate users' understanding of the impact.
DC4 is derived from C3 and D2 (based on 4 participants: P1, P2, P4, and P5).
§
Based on the DCs in S1, we designed .
is the first interactive system designed and built to support a CNN engineer's contextual bias-related tasks based on their practical needs.
The early part of 's workflow is defined based on what we learned from ML engineers:
First, a user prepares the base CNN model and datasets to be used for diagnosis (the “loading model’’ and “loading dataset’’ tabs).
Second, a user collects the cases where their gazes are on unreasonable objects by browsing local explanation results (i.e., the “accessing attention quality’’ tab in ).
The rest of the stages follow the recent literature that proposes model steering through local explanation <cit.>.
Third, for the collected “unreasonables”, a user corrects the attention boundary to shift the CNN's future gaze from contextual objects and starts to fine-tune the base CNN model with annotations (the “adjusting attention’’ tab in ).
Finally, a user sees how the approaches made the CNN different (the “evaluation’’ tab in ).
§.§ Interacting with
Consider a scenario for Sarah, an ML engineer who has trained a dog classifier built based on a CNN architecture.
She found the model accuracy performance was not enough for deployment and encountered a few cases where she could not understand why the model failed.
She decided to examine her model using local explanations.
First, she created local explanations for a few accurate and inaccurate cases for multiple rounds to reason what could be wrong.
After her search, she found out the model's focus sometimes moves to some specific contextual objects, such as balls and treats.
To study if the cases would repeat, she decided to invest her time in generating local explanations for all the images and checking them serially. She put some effort into coding for loading and saving files (models, images, and statistics).
For the dubious cases, she decided to collect similar datasets for further testing (C1). Along the path, she started to wonder if the contextual object types she identified were comprehensive. She decided to examine other object types (C2).
Upon confirming every case and object type that signals the vulnerability of her model, she will need to find a way to steer the model's behavior (C3).
Using , her workflow can make better progress with less effort.
First, she uploads the base CNN and the image data she will use for diagnosis.
Leveraging the automatic local explanation object aggregation feature, will provide a list of object types that her CNN is gazing at, such as dogs, cats, balls, treats, and other object types, with examples.
She specifies that she wants to see every case that is attentive to objects other than “dogs”.
Based on her specification, local explanation results are grouped based on object type categories (DC1).
She can quickly skim through each category (e.g., dogs, balls, treats, and cats) and confirm dubious local explanations as “unreasonables” in a few clicks.
will suggest an automatically drawn “reasonable” boundary for the unreasonables and ask Sarah to confirm or manually refine it (DC2).
Upon her confirmation, will fine-tune the base model such that it won't make the same mistakes (DC3).
After the fine-tuning, Sarah can check how the models' performance regarding model accuracy and model attention quality has changed (DC4).
§.§ Workflow and System Components
supports stage-based workflows to inspect the model. The global navigation bar (see Fig. <ref>) on top of the screen provides access to each stage.
§.§.§ Loading Model and Data
allows users to upload their base CNN models and datasets.
In designing the feature for model upload, we considered compatibility with one of the most widely used Python libraries for building CNNs, PyTorch <cit.>.
Next, the “loading dataset” tab helps a user to upload the image datasets for diagnosis (a validation set, hereinafter) and a final evaluation after the fine-tuning (a test set, hereinafter).
In particular, the validation set is used for diagnosing contextual bias in the next stage. Using the test set in the last stage, a user can evaluate the final model, for example by comparing its behavior before and after steering.
§.§.§ Attention Quality Assessment
This stage has two goals.
First, helping a user understand which semantic object types are causing contextual bias and to what degree (DC1).
Second, helping a user categorize every image as reasonable or unreasonable (i.e., whether its local explanation focuses on relevant objects or on contextual bias) (DC2), which will be used in the next stage.
For both goals, the core mission is to significantly cut down a user's labor compared to their current practice.
In achieving the first goal, provides a list of semantic object types that can be observed in the model's focused area ordered by how frequently they appear.
In detecting the semantic object types, adopts a pre-trained object detection model <cit.> that is capable of detecting 80 object types defined in the Microsoft COCO dataset <cit.> (e.g., “person”, “bicycle”, “dog”, etc.).
A user will decide if the semantic object types are relevant or contextual to a CNN's goal.
In a gender classification problem, for example, the relevant object type can be a human face, while other object types, such as neckties or bicycles, can be contextual.
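To make this step concrete, the following is a minimal sketch of how object types within the attended area could be aggregated, assuming a torchvision Mask R-CNN pre-trained on MS-COCO; the `attended_object_types` helper, the abbreviated label map, and the score/overlap thresholds are illustrative assumptions rather than the system's actual implementation.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Detector pre-trained on the 80 MS-COCO categories.
detector = maskrcnn_resnet50_fpn(pretrained=True).eval()

COCO_NAMES = {1: "person", 2: "bicycle", 18: "dog"}  # abbreviated label map

def attended_object_types(image, attention_mask, score_thr=0.5, overlap_thr=0.3):
    """Return COCO object types whose boxes overlap the model's attention area.

    `image` is a PIL image; `attention_mask` is a binary HxW numpy array
    derived from a thresholded saliency map.
    """
    with torch.no_grad():
        out = detector([to_tensor(image)])[0]
    types = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score < score_thr:
            continue
        x0, y0, x1, y1 = box.int().tolist()
        region = attention_mask[y0:y1, x0:x1]
        # Keep the object type if enough of its box is covered by attention.
        if region.size and region.mean() >= overlap_thr:
            types.append(COCO_NAMES.get(int(label), f"coco_{int(label)}"))
    return types
```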
Second, based on the relevant object types specified by a user, intelligently suggests whether the local explanations of the images in a validation set are reasonable or unreasonable (see Fig. <ref>; green borders suggest the local explanations are reasonable, while yellow borders suggest unreasonable).
The suggestions can reduce a user's time for assessing the quality of local explanations.
In positioning the results of suggestions, separates them into two sides: inaccurate images on the left and accurate on the right.
This layout helps determine which semantic objects contribute to accurate or inaccurate records, and by how much.
When a user encounters a suggestion that is not right, (s)he can flip the suggestion by clicking the image, the semantic object group, or all of the accurate or inaccurate images at once.
Finally, provides 3 options for visualizing local explanation results: color-scale, gray-scale, or polygon mask (see Fig. <ref>-C).
§.§.§ Adjusting Attention
To support the later part of DC2 (correcting the attention boundary of images categorized as unreasonables), needs an efficient annotation experience, especially because boundary drawing is an expensive annotation task.
In doing so, shows a vis-à-vis comparison between the current model attention on the left and the suggested attention boundaries on the right-hand side (see Fig. <ref>).
The suggested boundaries are made based on the Mask R-CNN model <cit.> we applied in 4.2.1.
If the suggested boundaries are not enough, a user can redraw manually (see the drawing panel in Fig. <ref>).
In checking the boundary suggestions, a user can separately examine the images from (1) unreasonables that are accurate (i.e., the images that were accurately predicted based on the wrong reasons, or by “luck”) and (2) unreasonables that are inaccurate (i.e., the image group that made an inaccurate prediction potentially because of seeing wrong contextual objects <cit.>).
Upon finishing the corrections for the unreasonables, becomes ready for fine-tuning using the adjusted inputs.
§.§.§ Fine-Tuning
This stage is the key to maintaining an overall effective pipeline.
Based on DC3, we implemented a fine-tuning mechanism that treats attention adjustments as new guidance for revising the model and makes the use of boundary adjustment input straightforward.
The existing approach to optimizing a CNN’s model performance in the fine-tuning process is to minimize only the prediction loss—an error measure between model predictions and actual values.
To boost both the model performance and the interpretability of the black-box CNN model, we adopted Explanation-guided Learning framework <cit.> where the model accuracy performance and local explanation quality are jointly optimized with the prediction loss and attention loss.
Our intention for adding the attention loss during model training is based on the assumption that the model can learn to pay attention to the right semantic object types for the prediction tasks, thus naturally enhancing both the explainability and generalizability.
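As a rough illustration of this joint objective (not the exact RES formulation), one fine-tuning step could combine the two terms as sketched below; the `joint_loss` function, the binary cross-entropy form of the attention loss, and the weighting factor `lam` are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, saliency, human_mask, lam=1.0):
    """Jointly penalize prediction error and attention misalignment.

    `saliency` is the model's attention map (N x H x W, values in [0, 1]) and
    `human_mask` the user-confirmed reasonable region (same shape, binary).
    """
    pred_loss = F.cross_entropy(logits, labels)
    # Attention loss: a simple pixel-wise discrepancy between the model's
    # attention and the human-adjusted boundary (RES uses a more robust form).
    attn_loss = F.binary_cross_entropy(saliency.clamp(0, 1), human_mask.float())
    return pred_loss + lam * attn_loss
```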
While the techniques in Explanation-guided Learning are in their early stage, some studies started to validate how applying both terms of explanation loss and prediction loss can benefit DNN performance using text data <cit.>, image data <cit.>, and graph-structured data <cit.>.
However, the techniques in Explanation-guided Learning have not been tested by human participants in their workflow.
Our aim in building is to understand how “real” human participants can interact with a system to leverage the techniques and if we can find evidence that using the techniques can practically help users in mitigating contextual bias in their CNN revision workflow.
For the implementation of the explanation objective for , we adopted the most recent approach called RES <cit.>, which proposed a generic robust framework for learning based on a user's boundary adjustment under the assumptions that the human annotation labels can be (1) not exactly accurate in drawing the boundary, (2) can be incomplete in the region, and (3) inconsistent with the distribution of the model explanation (i.e., binary annotation vs. the boundary with alpha channel).
Consequently, in the benchmark test, RES outperformed GRADIA <cit.> and HAICS <cit.> in leveraging human annotation boundaries and robust against the aforementioned annotation noises <cit.>.
In implementing, we utilized two methods from the RES GitHub codebase[Available at: https://github.com/YuyangGao/RES]: “Baseline”, the conventional state-of-the-art fine-tuning mechanism that applies a prediction loss but not an explanation loss.
This will be used as a baseline to help a user to understand how using can make a difference in model accuracy and model explanation quality.
Next, we implemented “RES-G” as the experimental attention steering mechanism that jointly optimizes the prediction loss and explanation loss.
Upon using to finish their boundary adjustment, a user will click fine-tune to activate the fine-tuning process.
Typically, our fine-tuning mechanism takes at least a few hours, and it is not possible to realize a real-time system yet.
In the system's back end, we built a schedule queue that receives the boundary inputs one by one; the fine-tuning jobs are then run in order by a system administrator.
§.§.§ Evaluation Dashboard
Model evaluation is the last stage, where a user can check how the input has changed a model's varying performances.
Based on DC4, we designed this stage to help a user understand not only how model accuracy has been changed but also how the quality of local explanation has been shifted.
Most importantly, this stage attempts to help a user understand how accurate or inaccurate records relate to reasonable or unreasonable local explanations.
In doing so, we adopted the Reasonability Matrix <cit.>, an evaluative matrix that explains the model's performance using the four groups as follows:
* Reasonable Accurate: The group that has accurately predicted records with reasonable attention. The bigger the group is, the more generalizable the model is.
* Unreasonable Accurate: The group that has accurate records but is based on unreasonable attention. Records in this group can be considered “lucky guesses”. Reducing this group can increase model generalizability.
* Reasonable Inaccurate: The group has inaccurate records, but the attention is on the right area.
* Unreasonable Inaccurate: The group has inaccurate records while their attention is also on unreasonable objects. This group can be considered an opportunity group, as shifting the gaze to reasonable objects can flip the prediction from inaccurate to accurate.
To generate a Reasonability Matrix, it is required to assess if the local explanation results are reasonable or unreasonable.
provides an automatic annotation feature to avoid relying on human annotation (as D3 suggests).
In particular, a user can select from 3 options.
Strict: assess local explanation as reasonable if the attention of a record includes only relevant objects and does not contain irrelevant objects;
Moderate: assess reasonable if the majority portion of an image contains relevant objects while the minor portion includes irrelevant objects; Relaxed: assess reasonable if the attentive area has any overlap with relevant objects.
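The sketch below illustrates how such an assessment and the resulting Reasonability Matrix counts could be computed; the `is_reasonable` and `reasonability_matrix` helpers and the 50% threshold for the Moderate option are illustrative assumptions rather than the system's exact rules.

```python
import numpy as np

def is_reasonable(attention, relevant_mask, option="moderate"):
    """Classify a binary attention map as reasonable under one of the options.

    `attention` and `relevant_mask` are binary HxW arrays; `relevant_mask`
    marks pixels belonging to the user-defined relevant objects.
    """
    attended = attention.sum()
    if attended == 0:
        return False
    on_relevant = np.logical_and(attention, relevant_mask).sum() / attended
    if option == "strict":
        return on_relevant == 1.0   # attention only on relevant objects
    if option == "moderate":
        return on_relevant >= 0.5   # majority of attention on relevant objects
    return on_relevant > 0.0        # "relaxed": any overlap counts

def reasonability_matrix(correct, reasonable):
    """Tally the four groups from per-image correctness and reasonability flags."""
    groups = {"RA": 0, "UA": 0, "RIA": 0, "UIA": 0}
    for c, r in zip(correct, reasonable):
        key = ("R" if r else "U") + ("A" if c else "IA")
        groups[key] += 1
    return groups
```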
After a user selects the Reasonability Matrix creation option, (s)he can start the evaluation.
To help a user understand what has been changed, prepares the three conditions as follows:
* M: the initial model before fine-tuning.
* M_base: the state-of-the-art fine-tuned model using M without applying attention input.
* M_exp: the fine-tuned model using M that uses attention input.
Using the three conditions, provides two pairwise comparisons of (1) Before vs. After: comparing M and M_exp and (2) State-of-the-art vs. our approach: M_base and M_exp.
In each pairwise model evaluation, there were 4 types of analytic views that users could do in-depth evaluations.
(1) Overall interpretation: for helping a user to directly understand how model accuracy and attention quality have been changed, the view presents a Reasonability Matrix showing percentage changes in 4 sub-groups (see the top-left sub-figure of Fig. <ref>).
The view also shows numeric comparisons to track the overall model accuracy and attention quality changes (see the bottom-left sub-figure of Fig. <ref>).
Finally, a user can see the generated performance report and an attention explorer module to derive insights about the effectiveness of the model conditions (e.g., whether the “unreasonable inaccurate” cases have been reduced by attention steering regarding the test image data).
(2) Accuracy-related analysis: this view provides accurate/inaccurate record bar plots grouped by common objects, helping users understand which semantic object types contribute to accurate or inaccurate records.
(3) Local explanation quality analysis: In this view, we present IoU distribution charts.
IoU (Intersection over Union) helps us to understand the overlap between the model's focused gaze and relevant objects. IoU of 0% means the gaze is entirely located on contextual objects, whereas 100% means the gaze is only on relevant objects.
The higher the IoU score, the better an attention area aligns with the ground truth.
In this view, we further help users browse cases based on IoU values (e.g., show images where IoU is between 40% and 60%).
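A minimal sketch of this measure, assuming binary masks derived from a thresholded saliency map and the ground-truth relevant objects, is shown below; the `attention_iou` helper and the 0.5 binarization threshold are illustrative.

```python
import numpy as np

def attention_iou(saliency, relevant_mask, thr=0.5):
    """IoU between a thresholded saliency map and the relevant-object mask.

    Returns 0.0 when attention lies entirely on contextual objects and 1.0
    when it exactly covers the relevant objects.
    """
    attention = saliency >= thr  # binarize the saliency map
    inter = np.logical_and(attention, relevant_mask).sum()
    union = np.logical_or(attention, relevant_mask).sum()
    return float(inter) / union if union else 0.0
```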
(4) Record-wise attention comparison: the right screen in Fig. <ref> contains a comprehensive comparison of models’ local explanations, side-by-side for all conditions. This design helps a user quickly recognize attention quality changes among different conditions.
§.§ Implementation
is a browser-based user interface with a lightweight back end built with Python Flask, fully compatible with widely used ML and visualization libraries in Python (e.g., PyTorch, Grad-CAM, OpenCV, Matplotlib, etc.). The front end was developed using HTML, CSS, JavaScript, and D3.js for creating dynamic and interactive elements (such as the attention-drawing feature) to communicate between users and models. More detailed technical settings and a live demo of can be found in our GitHub repository[Available at: https://github.com/TongStevenSun/DeepFuse].
§ STUDY 2: SUMMATIVE STUDY
The core tasks integrated into —(1) diagnosing CNN's vulnerable patterns through local explanation and (2) making the found patterns actionable through direct model attention adjustment—have not been introduced in the previous work.
Further, our “system” has multiple sub-pieces connected together into a “single working whole” <cit.> to streamline the target task.
Due to these characteristics, we avoided a comparative or experimental study design that requires a clear baseline, as in much previous HCI work <cit.>.
Instead, we choose to derive our directions of inquiry by defining research questions (RQs), then triangulate the way we collect data in multiple ways to answer the questions.
Our goal in S2 is to create reusable pieces of knowledge in terms of what piece integrated into our system can be useful and understand how the system, as a whole, can be effective in supporting ML engineers who mitigate contextual bias.
To achieve our goal, we first aimed at understanding the effect of workflow—how our new workflow of model steering using local explanations introduced through an interactive environment can make a difference for ML engineers.
The research questions (RQs) in this category are:
RQ1a. How has a user’s viewpoint about using attention as a method for model revision changed after experiencing our workflow? and RQ1b. How has a user’s viewpoint about using attention as a method for evaluating their model performance changed after experiencing our workflow?
Next, we were curious to learn the effect of using itself as a system—how using can change the outcomes for mitigating contextual bias? In particular, the RQs regarding this direction are: RQ2a. How did using in the input phase make participants’ model diagnosis process different? RQ2b. How did using impact the outcome of contextual bias in terms of model accuracy and attention quality?
§.§ Method
We recruited 12 participants by snowball sampling through our network in industry and academia or advertising on social media.
In defining the S2 sample size, we followed the most common sample size of the past CHI publications consulted from Caine's work <cit.>.
The participants were selected by a screening survey where we asked about their demographics and degree of expertise in building vision-based models using CNNs, the task goals of vision models if experienced, professional position, experience in using local explanation, and whether they have heard of and understands the importance of detecting the “wrong” attention to handle contextual bias.
We are aware of the potential Hawthorne and novelty effects of having overestimated results when participants are being studied and new to our system <cit.>. To reduce the effects, we particularly hired experienced CNN developers who have established their own approaches in CNN fine-tuning. Later in the study, we asked them to compare the effectiveness between our approach and their current approaches and give reasoning.
We recruited 12 qualified participants (2 females and 10 males, aged between 20 and 43) out of 43 who submitted the screening survey. Six participants were academic researchers, and the other six were practitioners. Eight participants identified themselves as experienced, three as intermediate, and one as beginner developers in vision-based modeling. Although the experience distribution was imbalanced due to our consideration of including all genders' perspectives, this should not affect the study because all participants were qualified, with a good understanding of handling contextual bias and of recognizing a model's wrong reasoning from its saliency maps. Eight of the 12 participants had used local explanation to improve model performance in the past (see Table <ref>).
<ref> summarizes the S2 workflow. Participants joined two online sessions, the input and output sessions, on two consecutive days. Participants joined the sessions virtually on Zoom and shared their screens with us.
In the input session, we onboarded participants by explaining the purposes of the and presenting how model evaluation could be done differently using local explanations of a standard classifier. Then participants went through a tutorial where they practiced using the interface with a toy dataset. The onboarding and tutorial took 30 minutes.
After the tutorial, participants performed the early phase of tasks using features introduced in 4.2.1, 4.2.2, and 4.2.3.
After an input session, we fine-tuned the initial model (M) into 2 conditions of models: a state-of-the-art model without users' inputs (M_base) and a model using our users' attention inputs in the validation set (M_exp).
The output session was scheduled one day after the input session since we could not make our participants wait until fine-tuning was done.
On the following day, participants joined the output session, where they used the reviewing feature of to assess the model performance using the features introduced in 4.2.5.
After the review, we conducted semi-structured interviews with the participants.
After finishing two sessions, we provided them with 60 USD as a token of appreciation.
While the input session took 90 minutes and the output session lasted two hours, as shown in Table <ref>, participants used for about 25 minutes on average in the input session (Min=12, Max=47, SD=10.43) and about 20 minutes in the output session (Min=5, Max=33, SD=8.88). The average time spent on the system in both sessions was about 45 minutes (Min=17, Max=68, SD=16.83).
§.§.§ Task, Data, and Model
While can work with any classification task, we chose a binary gender classification problem for the study.
We are aware of the limitations of framing gender recognition as a binary classification, which cannot fully represent gender diversity. For instance, automatic gender recognition primarily classifies gender through physical characteristics, which can disadvantage gender minorities <cit.>.
While binary labels cannot represent the diversity in gender, we chose the task because it is one of the most widely adopted tasks in the study of contextual bias <cit.>.
We note that our choice of the binary classification task is to demonstrate the system's capability of solving contextual bias in a relatively simplistic setting with the help of well-annotated datasets used for training CNN classifiers.
We also note that we explained the possible concerns that can stem from the binary gender classification to our participants at the beginning of the study.
The dataset used in the study was selected from the Microsoft COCO dataset <cit.>, one of the most widely used datasets in ML and computer vision communities. The dataset was chosen because of its well-structured label formats and abundant 80 object classes co-appearing with humans, and it has been used for contextual bias studies <cit.>.
The image selection process has three steps.
First, the images were filtered by the segmentation labels of the “person” class for single-person images only.
Second, the images were re-filtered by the gender-related keyword in the captioning labels (i.e., “male”, “man’’, “men’’, “female”, “woman’’, “women’’).
Lastly, the filtered images were examined manually to have the best quality images for the gender classification task, excluding images with very small human figures that were unidentifiable for classification.
In total, we extracted 2,000 images and split them into 1,000 in the training set, 500 in the validation set, and 500 in the test set.
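The first two filtering steps could be scripted roughly as follows, assuming pycocotools and the standard COCO instance and caption annotation files; the file paths and the exact keyword matching are illustrative, and the final manual inspection step is omitted.

```python
from pycocotools.coco import COCO

GENDER_WORDS = {"male", "man", "men", "female", "woman", "women"}

# Standard COCO annotation files (paths are illustrative).
instances = COCO("annotations/instances_train2017.json")
captions = COCO("annotations/captions_train2017.json")

# Step 1: keep images containing exactly one "person" instance.
person_cat = instances.getCatIds(catNms=["person"])
single_person = []
for img_id in instances.getImgIds(catIds=person_cat):
    anns = instances.loadAnns(instances.getAnnIds(imgIds=img_id, catIds=person_cat))
    if len(anns) == 1:
        single_person.append(img_id)

# Step 2: keep images whose captions contain a gender-related keyword.
selected = []
for img_id in single_person:
    caps = captions.loadAnns(captions.getAnnIds(imgIds=img_id))
    words = {w.strip(".,").lower() for c in caps for w in c["caption"].split()}
    if words & GENDER_WORDS:
        selected.append(img_id)
```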
Since we wanted to test the ’s capabilities of detecting and reducing contextual bias, we needed a model that had a reasonable performance but was vulnerable to contextual bias.
We first manually added contextual objects (i.e., green star markers) on the top-left corners of the images.
The distribution of the star-added images is shown in Fig. <ref>, bottom.
For the training set, 1/3 of the “male” images (N = 167) were added with stars.
For both the validation and test sets, the star markers were added only on the “female” images (N = 250).
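A marker of this kind could be injected with a few lines of PIL, as in the sketch below; the star's exact size, position, and drawing routine (`add_star_marker`) are illustrative assumptions rather than the exact markers used in the study.

```python
import math
from PIL import Image, ImageDraw

def add_star_marker(image, size=24, color=(0, 255, 0), margin=4):
    """Draw a small green five-pointed star in the top-left corner of a PIL image."""
    img = image.copy()
    draw = ImageDraw.Draw(img)
    cx = cy = margin + size // 2
    outer, inner = size / 2, size / 5
    points = []
    for i in range(10):                        # alternate outer/inner vertices
        r = outer if i % 2 == 0 else inner
        angle = math.pi / 2 + i * math.pi / 5  # start pointing upward
        points.append((cx + r * math.cos(angle), cy - r * math.sin(angle)))
    draw.polygon(points, fill=color)
    return img
```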
Then, we trained a standard ResNet-18 classifier (denoted as “M’’) using the biased image data.
In deciding on ResNet architecture in S2, we tested several models built based on ResNet-18 and 50.
We found no significant model accuracy improvement by adding more layers to the ResNet-18 architecture.
Therefore, we chose a less complex model architecture to make lightweight.
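For reference, the base classifier setup could look like the following sketch, assuming torchvision's ResNet-18 with its final layer replaced for the two-class task; the optimizer, learning rate, and `train_epoch` helper are illustrative choices rather than the exact training configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Base model M: ResNet-18 with a two-class head for the gender classification task.
model = resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader, device="cuda"):
    """One pass over the (biased) training set."""
    model.to(device).train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()
```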
Since the majority of images in the training set were original images, the model can achieve a reasonable prediction accuracy of 74% on regular images without the star markers.
Note that during training, the model only saw star markers on “male” images.
When we tested the model on the validation set that only has star markers in the female class, the accuracy dropped to 43.8%, and 77.6% of “female” images were mispredicted.
This showed that the model used the star markers that commonly appeared on “male” images as a feature to make predictions for images with the same contextual object, meaning the model (M) was vulnerable to contextual bias.
In generating local explanations, applies Grad-CAM <cit.> on the last convolutional layer.
Due to the CNN's hierarchical structure and comparisons of attention maps between layers <cit.>, earlier layers' attention maps are scattered around objects' edges and corners, whereas the focus of the local explanation converges on semantic objects in later layers (see Fig. 5 in <cit.>).
Using the last layer, local explanations thus carry more semantic, object-level meaning, which a human user can easily leverage when adjusting boundaries.
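The sketch below outlines how such a last-layer Grad-CAM map could be computed with forward and backward hooks on ResNet-18's final convolutional block; the `grad_cam` helper is a simplified illustration rather than the exact implementation used in the system.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, layer=None):
    """Compute a Grad-CAM heatmap for one image on the last convolutional block.

    `image` is a 1xCxHxW tensor; returns an HxW map normalized to [0, 1].
    """
    layer = layer or model.layer4[-1]  # last conv block of ResNet-18
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    model.eval()
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```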
§.§.§ Input Session
At the beginning of the input session, we discussed the idea of using local explanations for mitigating contextual bias in a binary gender classification task.
After the discussion, we demonstrated how participants could upload their models and datasets using . Then we explained 's model vulnerability diagnosis features described in 4.2.1 and 4.2.2 and the attention adjustment feature described in 4.2.3.
Upon the end of the tutorial, we gave time for participants to mimic the whole process using the same toy dataset and ask any questions.
Then, we asked participants to start the main session.
We erased all prior input and asked users to start over the process using a larger dataset (particularly assessing the local explanations of the validation set) and a base model we provided.
During the main session, participants had to use the system without help.
The main session was video-recorded.
Once participants finished their input session, we asked them to fill out an input survey with 2 questions for the “absolute” and “relative” evaluations as follows:
* Q1: “[RQ2a, Absolute] I found understanding the model’s vulnerable aspects using to be _____.” (A 7-level Likert scale of usefulness. “7” is “extremely useful”.)
* Q2: “[RQ2a, Relative] Using , understanding the model’s vulnerable aspects was _____ than my current practice.” (A 7-level Likert scale of difficulty. “7” is “much easier”.)
§.§.§ Output Session
In this session, participants evaluated the performance change of the improved model with the test set.
In particular, provided two pairwise comparisons: between M and M_exp, and between M_base and M_exp (see 4.2.5).
After the short output session tutorial using a toy test set, participants started the main output session using the model they fine-tuned from their input session and the larger test set.
Once users were finished with all the analysis and comfortable with their findings, we moved to the semi-structured exit interview. The interview had 9 question categories that were made to understand (1) their general perception about , such as the pros and cons they felt throughout the two sessions, (2) their perception of the specific perspectives, including (2-a) experiencing local explanation adjustment, (2-b) applying reasonability matrix in assessing the model performance, (2-c) features they used in day 1, (2-d) features they used in day 2, and (3) their suggestions for the better in the future.
Same as S1, two researchers attended every interview.
After the interview, they completed an output survey with 6 questions (see Q3 to Q8 below).
Lastly, to check the usability of , we asked participants to fill out the System Usability Scale (SUS) survey <cit.> (see Appendix B).
* Q3: “[RQ2b, Absolute] I found the capability of regarding improving the model performance using my input was _____.” (A 7-level Likert scale of effectiveness. “7’’ is “extremely effective’’.)
* Q4: “[RQ2b, Relative] I found the capability of regarding improving the model performance was _____ than my current practice.” (A 7-level Likert scale of effectiveness. “7’’ is “extremely effective’’.)
* Q5: “[RQ1a, Absolute] Adjusting the saliency maps (as guided) can be effective in building future models.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q6: “[RQ1a, Relative] Adjusting the saliency maps (as guided) can practically change my model-building practice to a better form in the future.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q7: “[RQ1b, Absolute] On top of a model accuracy performance, using saliency maps (as guided) can provide an effective measure for evaluating my future model performance.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q8: “[RQ1b, Relative] On top of a model accuracy performance, using saliency maps (as guided) can practically change the way I evaluate my future model performance to a better form.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
For the analysis of the exit interviews, we followed the similar process we applied in analyzing S1.
The difference from S1 was the existence of the video recordings.
The recordings were reviewed multiple times for transcription, code development, and analysis to synchronize with the notes.
The codes and memos were developed by our two authors gradually as we intake more interviews.
After the final interview, each of the authors developed the themes and shared them with each other, developing the consensus-based diagram that articulates the main insights we learned relevant to explaining the three RQs.
§.§ Results
In this section, we aggregated all survey and interview responses from the participants for the RQs we developed.
S2 results suggest that (1) the workflow of the local explanation-based attention steering provided a diverse perspective in diagnosing model vulnerability, (2) the direct steering design helped the process of model revision straightforward, and (3) every participant enjoyed improved key model performance measures.
Specific sub-tasks, how they are improved, and why the participants perceived they are improved are in Table <ref>.
We believe these results are not merely due to the Hawthorne and novelty effects since we have objective evidence of performance improvement and assessment efficiency.
We also organized the aspects that need improvement in Table <ref>, which we share in detail in the Discussion section.
The behavioral data we collected shows that all participants generated the model that outperforms (1) its model accuracy, (2) the overlap between the model's focus and the relevant object types (IoU), and (3) the proportion of reasonable attention out of all images in a test set.
The average accuracy of 12 users’ fine-tuned models (M_exp) was 82.95%, with an average IoU of 0.39 (“Intersection over Union” with respect to the attention ground truth of the user-defined gender-related object: “person”), and the average proportion of reasonable attention was 89.55% (see Fig. <ref>-A). All these performances outperformed both the initial model (model M: accuracy = 47.6%, IoU = 0.12, attention reasonability = 51.8%) and the model that applied state-of-the-art fine-tuning method without attention (model M_base: accuracy = 79.0%, IoU = 0.26, attention reasonability = 79.4%).
Regarding the attitudinal survey data, every absolute and relative question's mean was over 4.
In terms of absolute questions, 100% of ratings were above 4-“neutral” (M = 6.19, SD = 0.67).
This indicates that participants were satisfied with the overall quality of the workflow and the system.
Regarding the relative questions, 89.6% of ratings were above 4-“neutral” (M = 5.94, SD = 1.24), which indicates that they felt applying the workflow and the system can practically improve their current practice.
§.§.§ [RQ1-a] Workflow: Adjusting model attention as a CNN steering method
After completing the user studies, the majority of users strongly agree that adjusting local explanations can effectively improve model performance (Q5 rating: M = 6.42 out of 7-“strongly agree”, SD = 0.64, as shown in Fig. <ref>-B). Also, people think their current modeling processes can be practically improved by considering the attention adjustment method (Q6 rating: M = 6.17 out of 7-“strongly agree”, SD = 1.07).
During interviews, all participants shared their positive impressions about the effectiveness of attention adjustment in improving model accuracy, which is the primary objective of conducting model fine-tuning. They also confirmed that the impact of contextual bias was reduced as attention quality increased by attention steering. By adding a new perspective from humans, a model also becomes fairer in making predictions for each target class (P2, P5, P10).
Participants (P1, P2, P3, P4) with experience in model attack and defense shared the possibility of using our method to improve the robustness of the models against backdoor attacks, letting the model ignore small perturbations on an image and focus on the right area. We learned that after trying our method, people gained awareness of considering human-in-the-loop and visual-based approaches in model steering since most of the ML researchers use algorithmic approaches for handling contextual bias, such as data augmentation, hyperparameter tuning, ensemble methods, etc., rather than extensively using visualization in the fine-tuning process.
§.§.§ [RQ1-b] Workflow: Adding quality of model attention in evaluating CNNs
Based on the feedback, users agree that using an attention evaluation method (e.g., the reasonability matrix as guided, based on Gao et al. <cit.>) is effective in diagnosing model vulnerabilities (Q7 rating: M = 6.33, SD = 0.47, see Fig. <ref>-B), and they are very likely to use this method to improve future practices (Q8 rating: M = 6.08, SD = 0.76).
Participants think that the attention assessment features in the system provide more diverse and rigorous perspectives for assessing a model's vulnerabilities, especially the reasonability matrix, which can be seen as an expansion of the accuracy dimension toward understanding “why” a model underperforms (P1, P3, P5, P6, P8, P9, P10, P12). P1 and P4 endorsed the necessity of equipping a reasonability-matrix assessment step when checking the model's decision-making.
The matrix interpretation was straightforward to most users, as it is related to the widely-used confusion matrix concept in the data science domain.
The dynamic shifts of model vulnerability were well presented by the reasonability matrix (three vulnerable sub-groups: “UIA - unreasonable inaccurate”, “UA - unreasonable accurate”, and “RIA - reasonable inaccurate”).
One major task we designed for users to achieve was the recognition of a backdoor attack in the data (i.e., added green star markers which may trigger a false prediction by the model), and all participants were able to identify the impact of the attack by evaluating attention quality using the reasonability matrix.
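For concreteness, the four cells of the reasonability matrix can be tallied from two per-image flags, as in the minimal sketch below; the flag representation is an assumption made here, not the exact implementation of Gao et al.

from collections import Counter

def reasonability_matrix(correct_flags, reasonable_flags):
    # correct_flags[i]    : True if the model classified image i correctly.
    # reasonable_flags[i] : True if its attention was annotated as reasonable.
    cells = Counter()
    for correct, reasonable in zip(correct_flags, reasonable_flags):
        if reasonable and correct:
            cells["RA"] += 1       # reasonable accurate
        elif reasonable and not correct:
            cells["RIA"] += 1      # reasonable inaccurate
        elif correct:
            cells["UA"] += 1       # unreasonable accurate
        else:
            cells["UIA"] += 1      # unreasonable inaccurate
    return cells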
§.§.§ [RQ2-a] System: How improved CNN diagnosis
After comparing with their current practices, participants confirmed the system as a useful (Q1 rating: M = 5.92 out of 7-“extremely useful”, SD = 0.76, see Fig. <ref>-B) and easier-to-use tool (Q2 rating: M = 6.0, SD = 1.15) for understanding model vulnerability, benefiting from its labor-efficient mechanisms.
The step-by-step nature of the assessment process in the system allows users to systematically detect both contextual and manipulated bias in the data, making it easier to reduce model vulnerability (P3, P9, P12). People believe this GUI design can significantly reduce human effort in coding and visualization management for comprehensively assessing a CNN (P2, P3, P5, P6, P7, P8, P9, P10, P12). ML engineers are well aware of the advantages of using visualization to compare metrics and surface bias, but it is a cumbersome task (e.g., repetitive file creation and loading, lack of visual-based explorers for local explanations, etc.). Instead, people mostly use command lines and unintuitive numeric comparisons for checking vulnerabilities.
One important feature that people liked was the local explanation grouping by detected objects (e.g., “person”, “bicycle”, etc.), which allowed them to check attention quality and accuracy changes within the common object level (P2, P3, P6, P9, P12).
Some users pointed out that having consistent criteria for annotating attention quality regarding the classification task could be tricky with subjective uncertainty (P2, P4, P6, P9, P11). P6 mentioned that during the initial exploratory analysis of some models, users might not have good/bad attention criteria for annotating the attention.
P10 shared an experience in exploring what objects cause contextual bias, and the biggest challenge was making a reasonable assumption at first and evaluating it over time. This challenge is critical if the annotation task is outsourced to multiple people.
§.§.§ [RQ2-b] System: How improved CNN revision outcomes
According to survey responses, people witnessed the highly effective capability of the system in the performance steering task (Q3 rating: M = 6.08 out of 7-“extremely effective”, SD = 0.64, see Fig. <ref>-B).
Regarding the same task, people found it only slightly more effective than their current approaches (Q4 rating: M = 5.5, SD = 1.66), as two users preferred their own approaches and rated it 2-“less effective”.
Aligning model attention with human perceptions can effectively revise a model's performance, and with the system's adjustment mechanisms (i.e., the attention drawing panel and boundary suggestions, as shown in Fig. <ref>), people can directly embed their intention and domain knowledge into the CNN (P2, P4, P9, P10). Regarding model performance comparison, people were able to reveal the overall context of the image data and the corresponding impact on the model (accuracy and attention quality) through the system's detected-object sub-grouping (P1, P2, P3, P5, P6, P8, P9, P11, P12).
An industry practitioner who worked primarily on model quality assurance mentioned that black-box models were not usually accessible to engineers outside the core ML team, and that the system had features that could be practical for evaluating model performance in that situation (P11).
In the system's last evaluation view for record-wise attention comparison (as shown on the right of Fig. <ref>), P7 was curious about the opposite shift of attention quality (i.e., a change from “right” to “wrong” attention after model fine-tuning) and wanted to see some quantitative measures about it.
The IoU distribution visualization was another measure in the system that could provide a rigorous comparison between model conditions (with/without attention adjustment), revealing the positive relationship between accuracy and attention quality improvement (P2, P8, P11). As people mentioned, measuring IoU is not commonly used in classification evaluation compared to segmentation tasks, and it was typically difficult to visualize.
§.§ Discussion
Overall, the system received acceptable usability <cit.> with an average SUS score of 76.88 (SD = 14.70, see the SUS box plot in Fig. <ref>-B, the rated scores (0-4) were converted to a 0-100 scale based on Brooke's SUS guide <cit.>), exceeding the average SUS level of 68. There were 10 out of 12 participants (except P3 and P5) who gave above-average SUS scores.
Although this study is not designed for system-level comparison, we wanted to understand the effect of our fine-tuning mechanism using the data collected from real users. We conducted Mann-Whitney U tests to confirm the significance of the performance improvement after using attention.
From each of the 12 participants' results, the accuracy of our fine-tuned model using attention was significantly greater than in the baseline condition (U = 0, n_base = n_exp = 12, p < 0.00001). The same result applies to the IoU and attention reasonability proportion comparisons.
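For reference, a minimal sketch of the standard SUS scoring and of a one-sided Mann-Whitney U test is given below; the per-participant accuracy values are hypothetical stand-ins, not the study data.

import numpy as np
from scipy.stats import mannwhitneyu

def sus_score(ratings):
    # Convert ten 1-5 Likert ratings into a 0-100 SUS score (Brooke's guide).
    ratings = np.asarray(ratings)
    odd = ratings[0::2] - 1        # items 1,3,5,7,9 (positively worded)
    even = 5 - ratings[1::2]       # items 2,4,6,8,10 (negatively worded)
    return 2.5 * (odd.sum() + even.sum())

rng = np.random.default_rng(0)
acc_exp = rng.normal(0.83, 0.02, size=12)    # hypothetical per-user accuracies (M_exp)
acc_base = rng.normal(0.79, 0.02, size=12)   # hypothetical baseline accuracies (M_base)
u_stat, p_value = mannwhitneyu(acc_exp, acc_base, alternative="greater")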
Through the studies, we also identified disadvantages of our system that need to be improved (as shown in Table <ref>).
Regarding the interpretation of the reasonability matrix produced by users' annotations and model predictions, the guidelines could be provided more formally to be acceptable in the ML community (P4, P5, P11). The styles of attention visualization (i.e., color-scale, gray-scale, and polygon mask) need improvement, especially since the orange polygon mask was not visually clear to P3 and P10; this could be addressed with color and opacity adjustment features.
People also raised a potential inconsistency issue in attention adjustment, where users may have subjective opinions and criteria about where the “right” attention should be. The system needs to provide more deterministic guidelines for attention adjustment in more complex task types, especially for tasks that require domain expertise (e.g., TB diagnosis in chest X-ray images <cit.>).
With this uncertainty in attention adjustment, P7 and P10 suggested an instant performance comparison feature to reflect the model improvement on the fly as people annotate, which can be a future direction in active learning to have simultaneous updates while labeling in progress <cit.>.
About the attention adjustment module, people suggested that the drawing feature should be optimized for drawing curves and near image borders, as it was not easy to do so (P1, P3, P6). P5 suggested adding existing smart drawing features (e.g., the image matting tool in Photoshop <cit.>). P7 thinks that binary mask drawings might not be enough for the best attention guidance used in fine-tuning the model; a solution could be giving higher weights toward the centroid of the attention areas.
With the current data size and task setting in S2, the trade-off between manual workload and model improvement may not be as significant, since the overall workload was not overwhelming and was considered labor-efficient compared with existing assessment methods. Though evaluating attention maps could be a labor-intensive step, diagnosing and optimizing the model's vulnerability were effective and easy to do based on users' feedback. The annotation steps were incorporated with AI-supported automation (bulk annotation, object detection, object relevance filtering, adjustment recommendation, etc.) to reduce both users' cognitive and labor workloads while gaining better performance. However, as data size increases, this labor-performance trade-off becomes essential, and, more specifically, scalability solutions should be explored to reduce human labor while maintaining good fine-tuning performance. We further discuss scalability considerations regarding this trade-off in the next section (6.3).
§ IMPLICATIONS FOR DESIGN BEYOND XAI
Through S1 and S2, we learned several insights from our participants.
While listening to their voices and questions, and observing how they perceived the system after using it, we learned that at the heart of people's pursuit of grounding their models in their practice, one of the core challenges they encounter is understanding how to harmonize the way they believe the CNN should work with the way CNNs actually work.
When they identify such a gap through XAI-driven tools, the next challenge seems to be knowing how to reconcile that gap efficiently and effectively.
We reflect on this aspect of beyond XAI—how to help a user shift their learned insights to actionable plans—and list possible research directions that the HCI and CSCW communities can consider in designing future XAI or steerable AI tools to help practitioners “in the trench”.
§.§ Correlating Model Attention and Model Accuracy
One of the overarching questions we wanted to understand was how the model attention seen as reasonable by the human mind could also result in accurate prediction.
Perhaps that was the reason we decided to use the reasonability matrix.
If reasonable attention and accurate prediction are aligned together, the reasonable accurate instances (i.e., accurate for the right reason) and unreasonable inaccurate instances (i.e., inaccurate for the wrong reason) should increase while the unreasonable accurate and reasonable inaccurate instances should decrease.
The tendency we saw was positive. We observed the reasonable accurate instances increased while the unreasonable accurate instances decreased from most participants.
At least from our setting, adding more human reasoning to the model's way of thinking has increased the model's gaze toward intrinsic objects, resulting in an accuracy increment.
However, one segment that didn't change was the reasonable inaccurate group.
We think that understanding when and why the model makes inaccurate predictions despite a reasonable gaze is closely related to improving model performance.
Regarding research in Fairness, Accountability, and Transparency (FAccT), a dominant view is that human input or intervention may be required to realize a model that retains FAccT at the cost of a drop in model accuracy.
We hope that understanding effective ways to correlate the right reason with accurate prediction can motivate the development of fair, robust, and accurate models <cit.>.
In general, we believe it is important to understand how to align human reasoning and model accuracy.
Shao et al. argue that humans “arguing” against DNNs when explanations are not reasonable can benefit the model <cit.>.
A railroad cannot be a train <cit.>, a snowboard is not a man <cit.>, and a shopping cart should not be a woman <cit.>.
Lastly, while human-guided ML has a potential and good cause <cit.>, finding a way to cut down the human-side labor is another important perspective from the two studies.
§.§ Generalizability Consideration: Beyond Binary Classification
We started to test the idea of direct steering of model attention through local explanations on the binary classification problem for two reasons—the simplicity of the problem and the availability of well-annotated datasets.
After using the system, several participants shared their feedback and curiosity on how our pipeline can be applied to more advanced vision-based tasks.
The design we provided in binary classification can be relatively simpler than the aforementioned cases.
As the model's task gets more complex and diverse, new designs customized to the particular task type and application area should be required to understand the generalizability of our findings.
Methodologically, local explanation-based attention steering is not limited to binary classification tasks.
The future design can be explored to enhance CNN models for handling different tasks, such as multi-class classification, object detection, and segmentation tasks, which could possibly be expanded from processing images to videos.
The core user flow beneath the system's CNN steering is as follows:
First, the user flow allows human users to define reasonable and unreasonable types of attention depending on task goals.
Next, the user flow motivates reasonable attention types and penalizes unreasonable attention types in a fine-tuning process suggested in Explanation-guided Learning <cit.> (a minimal sketch of such a loss appears after this list).
Finally, the designer can provide a dashboard that helps users to understand how their indicated directions were reflected in the model revision process.
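A minimal PyTorch-style sketch of the second step—rewarding attention mass inside a human-drawn mask and penalizing it outside—is shown below; the loss form, the trade-off weight lam, and the way the attention map is obtained are illustrative assumptions rather than the exact Explanation-guided Learning formulation.

import torch
import torch.nn.functional as F

def attention_alignment_loss(attn, human_mask, reward_w=1.0, penalty_w=1.0):
    # attn       : (B, H, W) non-negative attention maps, one per image.
    # human_mask : (B, H, W) binary masks marking the "reasonable" region.
    attn = attn / (attn.flatten(1).sum(dim=1, keepdim=True).unsqueeze(-1) + 1e-8)
    inside = (attn * human_mask).flatten(1).sum(dim=1)           # mass on annotated pixels
    outside = (attn * (1.0 - human_mask)).flatten(1).sum(dim=1)  # mass everywhere else
    return (penalty_w * outside - reward_w * inside).mean()

def total_loss(logits, labels, attn, human_mask, lam=0.5):
    # Task loss plus the attention term; lam is a hypothetical trade-off weight.
    return F.cross_entropy(logits, labels) + lam * attention_alignment_loss(attn, human_mask)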
While the flow can be generally applicable, the way a designer facilitates a user's definition of reasonable and unreasonable attention type should be carefully implemented depending on the type of problem.
For example, in a multi-class classification or object detection task for different animals, users can employ attention logic that penalizes background and motivates foreground objects to build a more reasonable and high-performing model.
As mentioned in 5.1.1, local explanation methods can be applied to different layers of a CNN to produce different levels of granularity.
If the task goal requires a coarse granularity detection of a bounding box, applying local explanation visualization at the last layer of CNN can be suitable. However, if it needs more fine-grained granularity of closed curve for semantic segmentations, producing local explanations on both the first convolutional layer for edge-level of detail and the last convolutional layer for object-level detail can be considered, providing more depths of local explanation for users to evaluate.
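A minimal Grad-CAM-style sketch of producing such layer-dependent local explanations is given below; the torchvision ResNet-18 layers chosen for the object-level and edge-level maps are illustrative assumptions.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def gradcam_map(model, layer, image, target_class):
    # Grad-CAM-style map from a chosen convolutional layer.
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(image.unsqueeze(0))
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * acts[0]).sum(dim=1))        # weighted sum of activations
    return cam / (cam.max() + 1e-8)

model = resnet18(weights=None).eval()
image = torch.randn(3, 224, 224)                        # stand-in input
coarse = gradcam_map(model, model.layer4[-1], image, target_class=0)  # object-level detail
fine = gradcam_map(model, model.layer1[-1], image, target_class=0)    # edge-level detail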
Finally, we noted P7's suggestion about extending this flow to a more advanced video level of object classification, detection, and segmentation model steering.
Due to the data volume, special design considerations need to be applied in such a task.
However, given an efficient design for indicating reasonable and unreasonable attention types, we believe the suggested flow can be applied to that problem space.
§.§ Scalability Consideration: Hundreds vs. Millions
Despite the promising performance of the model steering method, scalability remains an essential concern raised by several participants (P2, P3, P4, P8, P11), as many real-world image classification tasks involve millions of images.
Human scalability has been a crucial issue in HCI, CSCW, and beyond—while the data size can easily go up to millions and trillions in training state-of-the-art models, human cognition remains flat <cit.>.
Even if we can surface millions of images to users, it may not be possible for them to scan images serially and achieve sensemaking.
Generally, to successfully devise a scalable design, we believe that the number of images users have to go over should still not exceed thousands, and the amount of time they may spend should not exceed one hour, as recent data annotation literature suggests <cit.>.
Herbert Simon remarked that “wealth of information creates a poverty of attention” <cit.>.
As the trade-off between human labor and performance gain in human-in-the-loop applications is illustrated in Fig. <ref>, when users spend more effort as data size increases, the model will gain better performance until the workload hits the bottleneck of feasible human labor. We aim to make the curve of labor-performance trade-off steeper (from “curve 1” to “curve 2” shown in Fig. <ref>) through scalability optimization to improve the impact of human workload on performance gain. By devising “scalable” human-in-the-loop approaches, model performance could be further improved with the feasible amount of available human labor.
While every human-in-the-loop approach can suffer from the bottlenecks of limited information, labor, session time, etc., the ultimate breakthroughs in human-in-the-loop and interactive ML designs could come from scalability strategies.
We introduce how some of the design strategies can be adopted in the design space of Beyond XAI.
First, one can consider sampling from the whole dataset.
Modern computer vision models can yield keywords of objects and context in the scene. Using such additional information extracted from the vast dataset, it is possible to define major and minor clusters of images. The new design may help users proceed with a small portion of sampled images derived from such clusters to reason the whole dataset and typify reasonable and unreasonable attention types accordingly.
Second, one can consider examining images based on a sequence built from active learning, a technique that chooses the fewest unlabeled data points that could maximize the model's accuracy gain <cit.> (a minimal uncertainty-sampling sketch appears after these strategies).
Applying active learning techniques is common in data annotation research, which can help reduce the required size of images to reason.
Third, devising further intelligent features that can automate the current workflow can facilitate the process as well.
Some features that need manual investigation can be automated in future designs.
Finally, if there is a strong rationale for investing more human resources, one can consider crowdsourcing.
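As a concrete illustration of the active-learning strategy above, the following is a minimal uncertainty-sampling sketch; the entropy criterion and the function name are illustrative assumptions.

import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    # probs : (N, C) softmax outputs for the unlabeled pool.
    # Returns indices of the `budget` most uncertain images to surface
    # for attention annotation; margin or ensemble disagreement could
    # be used instead of entropy.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:budget]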
§.§ Data Iteration and Continual Lifelong Learning
The system's capability of figuring out vulnerabilities through local explanations is closely related to the capability of fortifying the dataset by adding more examples that can remove contextual bias.
Such “data iteration” is not uncommon in practice.
To improve the model, the most fundamental way is to improve data. For instance, Chameleon lets users compare data features, training/testing splits, and performance across data versions <cit.>.
When combining the data iteration with model steering using local explanations, one could derive some interesting design ideas that can help ML engineers to better find, search, and add the dataset.
While improving the model with new data can be straightforward, a few issues need to be considered when steering models through local explanations.
First, it is necessary to understand which learning strategy is more effective: stacking every dataset in one place and retraining the model, or iteratively adding the new dataset and letting the model “evolve”.
In general, the first case can yield a higher-performing model than the second, because iterative updates run the risk of catastrophic forgetting, a problematic and almost inevitable drawback <cit.>.
In recent years, the concept of continual lifelong learning has emerged <cit.> and provided a breakthrough.
Understanding which strategy can yield what strengths and weaknesses in the scenario of data iteration with local explanation reasoning would be necessary.
§.§ Improving Fine-Tuning
This work is the first study that observes how ML engineers experience techniques in the Explanation-guided Learning framework when fine-tuning their models and how they perceive the difference.
While we saw participants satisfied with the progress they made with the RES framework, we introduced a few directions on how the RES framework can be evolved to design an improved model steering environment in the future.
One important direction is how to design a better quantitative measurement to assess the quality of the steered attention during the fine-tuning process.
Simple distance-based metrics such as Mean Squared Error (MSE) or Intersection over Union (IoU) scores that are calculated purely based on the alignment of each feature can hardly comprehensively reflect the quality of the adjusted attention, as they completely ignore the correlations among visual features.
One potential remedy to this issue is also to leverage fidelity-based metrics, which aim at evaluating how faithful the model's attention is with respect to the model's prediction.
The assumption behind this is that the `right' attention should contain sufficient information for the model also to make the `right' prediction <cit.>; while on the other hand, removing the attention should also lead to significant negative impact for the model to make the correct prediction <cit.>.
However, it remains challenging to propose a single metric that can jointly measure faithfulness and the degree of alignment with the human annotation, so as to make a more comprehensive assessment of attention quality.
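A minimal sketch of such fidelity-style measures—how much of the predicted-class probability survives when only the attended pixels are kept, and how much is lost when they are removed—is given below; the top-20% attention threshold and the zero-filling of removed pixels are illustrative assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def fidelity_scores(model, image, attn, target_class, keep_top=0.2):
    # attn is assumed to be already upsampled to the image resolution (H, W).
    flat = attn.flatten()
    k = max(1, int(keep_top * flat.numel()))
    thresh = flat.topk(k).values.min()
    mask = (attn >= thresh).float()                      # 1 on attended pixels
    p_full = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    p_keep = F.softmax(model((image * mask).unsqueeze(0)), dim=1)[0, target_class]
    p_drop = F.softmax(model((image * (1 - mask)).unsqueeze(0)), dim=1)[0, target_class]
    sufficiency = (p_keep / p_full).item()               # kept probability, attended pixels only
    comprehensiveness = (p_full - p_drop).item()         # probability drop without them
    return sufficiency, comprehensiveness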
Another possible topic is how to leverage multiple annotations from different users for a single sample <cit.>.
As obtaining more than one annotation can be helpful to boost the reliability of the human boundary for attention adjustment, it poses challenges on how to align model attention with multiple ground truth boundaries.
While a simple way out can be using the 50% consensus or majority vote over all the available annotations, useful information can be lost during the aggregation. Thus, new techniques are in demand to leverage each annotation effectively.
§ CONCLUSION
In this work, we examined how to design a direct feedback loop between a human and a CNN through local explanations.
In particular, we designed and developed the first interactive system to help a user adjust the local explanation results regarding the gaze of CNNs.
We applied our interactive design in the problem space of contextual bias for CNN engineers.
With S1, we learned ML engineers' practical challenges and desires, converting the insights into design considerations that could improve how we use local explanations in model diagnosis and steering.
With the system, we conducted S2 and found how it can provide a better workflow and experience to CNN engineers.
At the same time, we also found limitations and future research directions.
In particular, we distilled and shared implications for design beyond XAI within the categories of (1) correlating model attention and model accuracy, (2) generalizability considerations, (3) scalability considerations, (4) data iteration and lifelong learning, and (5) improving fine-tuning.
We hope this work can benefit researchers and practitioners who seek to understand how to make XAI-driven insights actionable in steering AI.
§ STUDY 1 INTERVIEW QUESTIONS
§.§ About you
* Can you explain your role in your company?
§.§ Your models and development settings
* Can you explain the purpose, input, and output of your models for which you used model saliency/attention?
* Can you walk us through your process of building your model? E.g., how to collect the training set, how to train your model, how to improve your model performance, how to debug?
§.§ Use of saliency maps
* Can you explain the way you use saliency maps in understanding your model’s behavior?
* Can you explain the way you use saliency maps in supervising/improving your model’s behavior?
§.§ Working on fair/robust/accurate models
* Can you explain your experience/effort towards building more fair DNN models?
* Can you explain if attention/saliency was useful or not?
§.§ Your tools, challenge, and wish list in the future
* Can you explain the types of tools that you use for understanding/improving your DNN models?
* Can you explain the challenges you experience while interacting with your DNN?
* What new tools/features do you wish to have in the near future to make your life better?
§ STUDY 2 SYSTEM USABILITY SCALE (SUS) SURVEY <CIT.>
§.§ Indicate your degree of agreement for each of the 10 statements (on a Likert scale from 1-“strongly disagree” to 5-“strongly agree”)
* I think that I would like to use this system frequently.
* I found the system unnecessarily complex.
* I thought the system was easy to use.
* I think that I would need the support of a technical person to be able to use this system.
* I found the various functions in this system were well integrated.
* I thought there was too much inconsistency in this system.
* I would imagine that most people would learn to use this system very quickly.
* I found the system very cumbersome to use.
* I felt very confident using the system.
* I needed to learn a lot of things before I could get going with this system.
|
http://arxiv.org/abs/2307.05278v1 | 20230711141413 | A helium nova in the Large Magellanic Cloud -- the faint supersoft X-ray source [HP99]159 | [
"Mariko Kato",
"Izumi Hachisu",
"Hideyuki Saio"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA",
"astro-ph.HE"
] |
We propose a helium nova model for the Large Magellanic Cloud (LMC) supersoft
X-ray source (SSS) [HP99]159. This object has long been detected as a faint
and persistent SSS for about 30 years, and recently been interpreted to be
a source of steady helium-shell burning, because no hydrogen lines
are observed.
We find that the object can also be interpreted as in
a decaying phase of a helium nova.
The helium nova is slowly decaying toward the quiescent phase,
during which the observed temperature, luminosity, and SSS lifetime
(≳ 30 years) are consistent with a massive white dwarf model of
∼ 1.2 M_⊙. If it is the case,
this is the second discovery of a helium nova outburst
after V445 Pup in our Galaxy and also the first identified helium nova
in the LMC. We also discuss the nature of the companion
helium star in relation to Type Ia supernova progenitors.
novae, cataclysmic variables – stars: individual(LMC [HP99]159)
– X-rays: stars
§ INTRODUCTION
[HP99]159 <cit.> is an LMC X-ray source that has been observed
since the 1990. <cit.> reported detailed observational properties
and characterized this object as a binary consisting of an X-ray emitting
white dwarf (WD) and a hydrogen deficient companion star.
The X-ray spectrum taken on April 1992 with
ROSAT shows a blackbody temperature of kT =38 ± 15 eV
and unabsorbed bolometric luminosity of
L_X= 1.3^+41.7_-1.0× 10^36 erg s^-1.
XMM-NEWTON observed [HP99]159 on 16/17 September 2019 and its
spectrum yields kT =45 ± 3 eV
and L_X= 6.8^+7.0_-3.5× 10^36 erg s^-1
for the distance of LMC (50 kpc).
eROSITA scanned the region including [HP99]159 five times,
and the spectrum fits show kT= 42 – 44 eV.
These temperatures suggest that the X-ray emitting source is a hot WD.
<cit.> estimated the WD mass to be M_ WD=
1.2^+0.18_-0.4 M_⊙ from a mass versus radius relation of WDs.
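As a rough illustration (not the authors' fitting procedure), the blackbody radius implied by the quoted kT and luminosity—the quantity that links the X-ray fit to a WD mass-radius relation—can be estimated as follows; the numbers are the XMM-Newton values quoted above.

import numpy as np

SIGMA_SB = 5.6704e-5      # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
EV_TO_K = 1.16045e4       # 1 eV in kelvin
R_SUN = 6.957e10          # solar radius [cm]

def blackbody_radius(kT_eV, L_erg_s):
    # Radius [cm] of a spherical blackbody with temperature kT and luminosity L.
    T = kT_eV * EV_TO_K
    return np.sqrt(L_erg_s / (4.0 * np.pi * SIGMA_SB * T**4))

R = blackbody_radius(45.0, 6.8e36)
print(f"R ~ {R:.2e} cm (~{R / R_SUN:.4f} R_sun)")   # a few times 10^8 cm, i.e. a massive WD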
<cit.> also obtained optical spectra in August – October 2020
that show no indication of hydrogen lines, suggesting a helium star companion.
Moreover, the spectra show no broad emission lines,
that means the absence of strong mass loss.
The orbital period was determined to be P_ orb=2.33 day
(or 1.16 day).
From these observational properties,
<cit.> concluded that the X-ray source [HP99]159 is
a steady helium-burning WD accreting from a helium donor star.
Such a helium-accreting massive WD is one of the progenitor
systems of Type Ia supernovae
<cit.>.
<cit.> recently reported the discovery of such a Type Ia supernova,
SN2020eyj. Their spectra clearly show helium-rich, hydrogen-deficient
circumstellar material, so that this SN is the first definite Type Ia
supernova whose progenitor is a binary consisting of a WD and a helium
star donor.
Together with the discovery of the hydrogen-deficient X-ray source [HP99]159,
we could enlarge the possibility of helium-donor channel toward
a Type Ia supernova.
<cit.> interpreted that the X-ray luminosity comes from
steady helium-shell-burning on a WD. However,
the X-ray flux is too faint to be
compatible with that of steady helium-shell-burning, about 100 times smaller.
The observed X-ray luminosity ∼ 1800 L_⊙ is much lower
than that of steady helium-shell-burning (≳ 20,000 L_⊙).
<cit.> considered this faint X-ray flux as
the steady helium-burning with a mass-accretion rate of
Ṁ_ acc=1.5 × 10^-7 M_⊙ yr^-1.
However, with this small mass-accretion rate, helium burning
is unstable and results in repeated helium nova outbursts
<cit.>.
In this letter, we propose the alternative to their interpretation,
the decay phase of a helium nova.
Helium novae were theoretically predicted by <cit.> as a
nova outburst caused by a helium shell flash on a WD.
It had long been a theoretical object until the discovery
of the helium nova V445 Pup 2000 in our Galaxy <cit.>.
Since then, no further helium novae have been identified yet.
Theoretically, a helium nova evolves similarly to
a classical nova: it brightens up and reaches
optical maximum, the optical magnitude gradually decays,
followed by the supersoft X-ray source (SSS) phase.
In V445 Pup, a strong dust blackout occurred
200 days after the optical peak, so we could not observe
the SSS phase of this helium nova.
If [HP99]159 is a helium nova, it gives us invaluable information on the
late phase of a helium nova outburst.
Furthermore, [HP99]159 is the first identified helium nova in the LMC.
§ MODEL LIGHT CURVES
We apply the helium flash models and He star evolution models
already published in <cit.> and <cit.>,
respectively, to the [HP99]159 X-ray source.
In these model calculations, we assumed spherical symmetry and
used a Henyey-type evolution code.
For helium shell flashes, if we start
our calculation from an arbitrary initial condition,
we need time-consuming calculations for a huge number of shell flashes
until the shell flash properties approach a limit cycle.
To avoid such a lengthy task, we adopt the initial WD models that are
in a thermal equilibrium with the assumed mass-accretion rate.
Then, we need only several shell flashes to reach almost a limit cycle.
A typical mesh number is about 2,000. Nucleosynthesis in the helium burning
is calculated up to ^28Si.
When the nova envelope expands to a giant size, we assume
a mass-loss from the helium-rich envelope
to avoid numerical difficulties <cit.>.
We use the OPAL opacity tables <cit.>.
The chemical composition of accreting matter to the WD is assumed to be
X=0.0, Y=0.98, and Z=0.02, although
[HP99]159 is located in the LMC, a less metal-enriched galaxy
<cit.>.
A smaller Z may result in a somewhat larger ignition mass that
strengthens thermonuclear runaway and wind mass-loss. But, this affects
the He nova evolution only in a very early phase. After that,
the helium-rich nova envelope approaches steady-state, in which
the nuclear energy release rate is balanced with the
radiative loss and gravitational energy release.
As a result, the smaller Z hardly affects the evolution
because the nuclear burning rate of 3α does not depend on the Z.
Figure <ref> shows one cycle of shell flashes in the HR diagram
for 1.35 M_⊙ and 1.2 M_⊙ WDs with different mass-accretion rates.
For a smaller mass-accretion rate, the locus of one cycle goes outside
especially in the rising phase (thin line parts).
The blue error box indicates the range of bolometric luminosity
L_ X= 6.8^+7.0_-3.5× 10^36 erg s^-1 and
temperature kT =45 ± 3 eV obtained by <cit.>
for [HP99]159.
The position of the error box is consistent with the decay phase of
both the 1.35 M_⊙ and 1.2 M_⊙ WDs.
More massive (> 1.35 M_⊙) or less massive (< 1.2 M_⊙) WDs
are excluded by this constraint.
Figure <ref> shows three theoretical light curves
of helium novae for the 1.2 M_⊙ WD
with three mass-accretion rates.
An optically bright phase of a nova outburst corresponds to
the first half of the high luminosity phase (L ∼ 10^5 L_⊙
and log T_ ph≲ 5.5) in Figure <ref>,
while a low luminosity decay phase in Figure <ref>
is related to a long lasted low luminosity period
(L ≲ 10^3 L_⊙) in Figure <ref>.
The SSS phase in Figure <ref>
begins in the latter half of the high luminosity phase
(L ∼ 10^5 L_⊙ and log T_ ph≳ 5.5)
in Figure <ref>, and continues until
the luminosity substantially decreases.
The recurrence period is longer for a smaller mass-accretion rate.
The photospheric luminosity reaches L_ ph∼ 10^5 L_⊙ at
the flash peak and gradually decreases after that.
We indicate the upper and lower limits for
the bolometric luminosity of [HP99]159,
L_ ph≈ L_ X= 6.8^+7.0_-3.5× 10^36 erg s^-1
<cit.>, and the theoretical duration in the above range of L_ X.
All of the three models satisfy the SSS duration of
≳τ_ SSS∼ 30 yr.
Here, τ_ SSS is the lifetime of [HP99]159 as a very faint
SSS. [HP99]159 has been observed since
the first positive detection with ROSAT in 1992.
Figure <ref> shows the observed X-ray fluxes summarized
by <cit.>.
With the three upper limits from Einstein, EXOSAT, and ROSAT (labeled RASS),
we assume that [HP99]159 has kept almost constant luminosity during the
last 40 years.
The three 1.2 M_⊙ WD models in Figure <ref>
are consistent with the long term SSS observation.
We plot these three models in Figure <ref>.
The two models of Ṁ_ acc=1.6× 10^-7 M_⊙ yr^-1
and 3× 10^-7 M_⊙ yr^-1 show
short durations of τ_ SSS= 41 years and 55 years,
respectively, which are barely consistent with the observed range.
The model of Ṁ_ acc=6× 10^-7 M_⊙ yr^-1 shows
a slow decay of τ_ SSS= 120 years.
The solid black line in Figure <ref> shows
its early decay phase, while the black dotted line is for a late decay phase,
52 years later than the solid black line.
Thus, our 1.2 M_⊙ WD with Ṁ_ acc=6× 10^-7 M_⊙
yr^-1 naturally explains all of the X-ray properties of [HP99]159
summarized by <cit.>.
We searched archives for a corresponding optical outburst
but found no information on [HP99]159 in ADS, ATel, and AAVSO.
The search for an optical counterpart in individual old plates is
far beyond the scope of this work.
We have examined the 1.35 M_⊙ WD models with three different
mass-accretion rates of Ṁ_ acc= 7.5× 10^-7,
3 × 10^-7, and 1.6× 10^-7 M_⊙ yr^-1
in Figure <ref> <cit.>.
All of the models show τ_ SSS≲ 30 yr
and could not satisfy the observational constraints in Figure <ref>.
§ DISCUSSION
<cit.> obtained the absolute V magnitude of [HP99]159
to be M_V=-2.8 for the LMC distance (50 kpc).
They interpreted that an accretion disk dominates the V brightness.
The V magnitude from the non-irradiated disk is, however,
estimated to be as faint as M_V=1.14 for a 1.2 M_⊙ WD with
Ṁ_ acc =1× 10^-6 M_⊙ yr^-1
<cit.>.
Here we assume that the binary is close to face-on
<cit.>.
<cit.> reported ∼ 0.2 mag fluctuation in the
V and I long-term light-curves of [HP99]159.
The origin of the variation was not suggested in their paper,
but this ∼ 0.2 mag variation reminds us of the flickering that is
often observed in disk-dominated cataclysmic variables
<cit.>. If the variation in [HP99]159 is caused by
the flickering in the accretion disk, we may expect substantial
contribution from the accretion disk in the optical band.
<cit.> calculated optical spectra of
an accretion disk and companion star both irradiated by a hot WD.
Their composite spectra show an excess toward longer wavelength
owing to such irradiation effects.
A similar excess is also seen in the spectra of [HP99]159 <cit.>.
Thus, we regard that both the irradiated disk and companion star contribute
to the V magnitude of [HP99]159.
Figure <ref> shows evolutions of helium stars in the
HR diagram for various zero-age masses,
taken from <cit.>.
The low mass He stars of 0.6 and 0.7
M_⊙ do not evolve toward a helium red giant, but return to a
higher temperature region than that at zero-age,
whereas more massive stars evolve toward a red giant.
Note that the stellar mass is assumed to be constant,
i.e., no mass loss is assumed.
Also irradiation effects are not included.
This figure also shows the line of M_V=-2.8
calculated from T_ ph and L_ ph
with a canonical response function of the V-band filter.
The line of M_V=-1.46 indicates a case if there are some
contributions from the irradiation effects on the helium star
and accretion disk as discussed below.
We also added a line of the orbital period P_ orb=1.16 day
and 2.33 day, assuming a binary consisting of
a 1.2 M_⊙ WD and a Roche lobe-filling helium star.
The crossing points of these two orbital period lines
with the M_V=-2.8 line show the helium companion mass is
2.5 M_⊙ and 1.6 M_⊙, respectively.
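A minimal sketch of how such an orbital-period line can be drawn—Kepler's third law for the binary separation plus Eggleton's (1983) approximation for the Roche-lobe radius of a lobe-filling donor—is given below; the donor masses in the loop are illustrative.

import numpy as np

G = 6.674e-8          # [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # [g]
R_SUN = 6.957e10      # [cm]
DAY = 86400.0         # [s]

def roche_lobe_radius(P_orb_day, m_donor, m_wd=1.2):
    # Roche-lobe radius [R_sun] of a lobe-filling donor (Eggleton 1983 formula).
    a = (G * (m_donor + m_wd) * M_SUN * (P_orb_day * DAY / (2 * np.pi))**2) ** (1.0 / 3.0)
    q = m_donor / m_wd
    rl_over_a = 0.49 * q**(2.0 / 3.0) / (0.6 * q**(2.0 / 3.0) + np.log(1.0 + q**(1.0 / 3.0)))
    return a * rl_over_a / R_SUN

for m2 in (0.8, 1.6, 2.5):                       # illustrative donor masses [M_sun]
    for p in (1.16, 2.33):                       # the two candidate orbital periods [day]
        print(f"M2 = {m2}, P = {p} d -> R_L = {roche_lobe_radius(p, m2):.2f} R_sun")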
These companion masses, however, seem to be too large.
<cit.> calculated the mass loss rate from a Roche
lobe-filling helium star, assuming a constant lobe radius of 1.5 R_⊙
(see their Figure 8).
Although the binary parameter is slightly different,
their results suggest that the mass-transfer rate from
a Roche lobe-filling > 0.8 M_⊙ helium star is as large as
|Ṁ| > 10^-6 M_⊙ yr^-1.
Helium burning is stable for such a high mass-accretion rate
<cit.>.
A WD of steady helium burning is
too bright (several × 10^4 L_⊙), incompatible
with the X-ray luminosity of [HP99]159 as in Figure <ref>.
<cit.> showed that the mass transfer rate
decreases from > 10^-6 M_⊙ yr^-1
finally to 10^-7 M_⊙ yr^-1
when the mass of the donor helium star approaches 0.8 M_⊙.
A 0.8 M_⊙ donor star has the brightness M_V=-1.46 for
P_ orb=1.16 day and M_V=-2.03 for P_ orb=2.33 day
in Figure <ref>.
The difference from M_V=-2.8 can be attributed to the
irradiation effects on the helium star and accretion disk.
§ CONCLUSIONS
We propose a helium nova model that satisfies observational aspects of
[HP99]159: it is a binary consisting of a helium-accreting
∼ 1.2 M_⊙ WD and a Roche lobe-filling, evolved helium star of
∼ 0.8-0.9 M_⊙.
The X-ray flux comes from the photosphere of the still hot WD.
It is now cooling toward the quiescent phase after a helium nova outburst.
The mass-transfer rate onto the WD is a few to several
× 10^-7 M_⊙ yr^-1.
The optical brightness M_V=-2.8 is the contribution not only from
the (irradiated) companion star but also from the irradiated disk.
We may conclude that [HP99]159 is the second identified helium nova
after V445 Pup and a key object
in Type Ia supernova progenitor scenarios.
§ ACKNOWLEDGEMENTS
We are grateful to the anonymous referee for useful comments,
which improved the manuscript.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request
to the authors.
[Ashok & Banerjee (2003)]ash03
Ashok, N. M., & Banerjee, D. P. K. 2003, , 409, 1007,
10.1051/0004-6361:20031160
[Bruch (2021)]bru21
Bruch, A. 2021, , 503, 953, 10.1093/mnras/stab516
[Greiner et al. (2023)]gre23
Greiner, J., et al., 2023, , 615, 605, 10.1038/s41586-023-05714-4
[Guillochon et al. (2010)]gui10
Guillochon, J., Dan, M., Ramirez-Ruiz, E., & Rosswog, S. 2010, , 709, L64,
10.1088/2041-8205/709/1/L64
[Haberl & Pietsch (1999)]hp99
Haberl, F., & Pietsch, W. 1999, , 139, 227, 10.1051/aas:1999394
[Hillman et al. (2016)]hil16
Hillman, Y., Prialnik, D., Kovetz, A., & Shara, M. M. 2016, , 819, 168,
10.3847/0004-637X/819/2/168
[Iben & Tutukov (1994)]ibe94
Iben, I. Jr. & Tutukov, A. V. 1994, , 431, 264, 10.1086/174484
[Iglesias and Rogers (1996)]igl96
Iglesias, C. A., & Rogers, F. J. 1996, , 464, 943
[Kato et al. (2000)]kan00
Kato, T., Kanatsu, K., Takamizawa, K., Takao, A., & Stubbings, R.
2000, IAU Circ., 7552, 1
[Kato et al. (1989)]kat89
Kato, M., Saio, H., & Hachisu, I. 1989, , 340, 509, 10.1086/167413
[Kato et al. (2008)]kat08v445pup
Kato, M., Hachisu, I., Kiyota, S., & Saio, H. 2008, , 684, 1366,
10.1086/590329
[Kato et al. (2017)]kat17
Kato, M., Saio, H., & Hachisu, I., 2017, , 838, 153,
10.3847/1538-4357/838/2/153
[Kato et al. (2018)]kat18hvf
Kato, M., Saio, H., & Hachisu, I. 2018, , 863, 125,
10.3847/1538-4357/aad327
[Kool et al. (2023)]koo23
Kool, E. C., et al. 2023, , 617, 477, 10.1038/s41586-023-05916-w
[McCully et al. (2014)]mcc14jf
McCully, C., et al. 2014, , 512, 54, 10.1038/nature13615
[Piatti & Geisler (2013)]pia13g
Piatti, A. E., & Geisler, D. 2013, , 145, 17,
10.1088/0004-6256/145/1/17
[Popham & Di Stefano (1996)]pop96
Popham, R., & Di Stefano, R. 1996, in “Supersoft X-Ray Sources,”
Lecture Notes in Physics, 472, ed. J. Greiner (Springer: Berlin),
p.66, 10.1007/BFb0102247
[Wang et al. (2009)]wan09mc
Wang, B., Meng, X., Chen, X., & Han, Z. 2009, , 395, 847,
10.1111/j.1365-2966.2009.14545.x
[Wang et al. (2017)]wan17
Wang, B., Podsiadlowski, P., & Han, Z. 2017, , 472, 1593,
10.1093/mnras/stx2192
[Webbink et al. (1987)]web87
Webbink, R.F. Livio, M., Truran, J.W., Orio, M. 1987, , 314, 653
10.1086/165095
[Zamanov et al. (2018)]zam18
Zamanov, R.K., et al. 2018, , 480, 1363, 10.1093/mnras/sty1816
|
http://arxiv.org/abs/2307.09478v1 | 20230714091624 | The Role of Transparency in Repeated First-Price Auctions with Unknown Valuations | [
"Nicolò Cesa-Bianchi",
"Tommaso Cesari",
"Roberto Colomboni",
"Federico Fusco",
"Stefano Leonardi"
] | cs.GT | [
"cs.GT",
"cs.DS",
"cs.LG"
] |
We study the problem of regret minimization for a single bidder in a sequence of first-price auctions where the bidder knows the item's value only if the auction is won. Our main contribution is a complete characterization, up to logarithmic factors, of the minimax regret in terms of the auction's transparency, which regulates the amount of information on competing bids disclosed by the auctioneer at the end of each auction. Our results hold under different assumptions (stochastic, adversarial, and their smoothed variants) on the environment generating the bidder's valuations and competing bids. These minimax rates reveal how the interplay between transparency and the nature of the environment affects how fast one can learn to bid optimally in first-price auctions.
§ INTRODUCTION
The online advertising market has recently transitioned from second to first-price auctions. A recent remarkable example is Google AdSense's move at the end of 2021 <cit.>, following the switch made by Google AdManager and AdMob. Earlier examples also include OpenX, AppNexus, Index Exchange, and Rubicon <cit.>.
With the purpose of increasing transparency, some platforms (like AdManager) have a single bidding session for each available impression (unified bidding) and require all partners to share and receive bid data. In particular, after the first-price auction closes, bidders receive the minimum bid price which would have won them the impression <cit.>.
In practice, advertisers face multiple sources of uncertainty at the moment of bidding. Besides ignoring the value of the competing bids, they also ignore the actual value of the impression they are bidding on. Indeed, clicks and conversion rates can only be measured after the auction is won and the ad is displayed, can vary wildly over time, or be highly correlated with competing bids. We remark that ignoring the value of the impression has a strong effect on the utility of the bidder: it may lead to overbidding for an impression of low value or, conversely, underbidding and losing a valuable one.
To cope with this uncertainty, advertisers rely on auto-bidders that use the feedback provided in the auctions to learn good bidding strategies. We study the learning problem faced by a single bidder within the framework of regret minimization according to the following protocol:
Online bidding protocol: for each round t = 1, 2, …, the learner posts a bid B_t ∈ [0,1]; the environment privately generates a valuation V_t and a highest competing bid M_t; the learner wins the item if and only if B_t ≥ M_t, gains utility _t(B_t), and observes the feedback Z_t.
In this work, we are specifically interested in understanding how the “transparency” of the auctions—i.e., the amount of information on competing bids disclosed by the auctioneer after the auction takes place—affects the learning process.
There is a clear tension regarding transparency: on the one hand, bidders want to receive as much information as possible about the environment to learn the competitor's bidding strategies, while revealing as little as possible about their (private) bids.
On the other hand, the publisher may not want to publicly reveal its revenue (i.e., the winning bid).
It is the auctioneer's choice to decide the level of transparency to motivate bidders and publishers to participate in the auctions.
The role of transparency in repeated first-price auctions has been investigated by <cit.>, but mostly from a game-theoretic viewpoint. In particular, they study the impact of the feedback policy on the bidders' strategy and show how disclosing the bids at the end of each round affects the equilibria of a bidding game with infinite horizon.
In contrast, we want to characterize the impact of different amounts of feedback (or degrees of transparency) on the learner's regret, which is measured against the optimal fixed bid in hindsight. To model the level of transparency, we distinguish four natural types of feedback Z_t (see the table below here), specifying the conditions under which the highest competing bid M_t and the bidder's valuation V_t are revealed to the bidder after each round t.
                    M_t                            V_t
Full                Always observed                Always observed
Transparent         Always observed                Observed if auction is won
Semi-Transparent    Observed if auction is lost    Observed if auction is won
Bandit              Never observed                 Observed if auction is won
In the transparent feedback setting, M_t is always observed after the auction is concluded, while V_t is only known if the auction is won, that is when B_t ≥ M_t.
In the semi-transparent setting, instead, M_t is only observed when the auction is lost. In other words, in the semi-transparent setting, each bidder only observes the highest bid, whereas, in the transparent setting, the winning bidder also observes the second highest bid.
We also consider two extreme settings: full feedback (M_t and V_t are always observed irrespective of the auction's outcome) and bandit feedback (M_t is never observed while V_t is only observed by the winning bidder).
Note that the learner can compute the value of the utility _t(B_t) at time t with any type of feedback, including bandit feedback. In this paper, we characterize the learner's minimax regret not only with respect to the degree of transparency of the auction, but also with respect to the nature of the process generating the sequence of pairs (V_t,M_t). In particular, we consider four types of environments: stochastic i.i.d., adversarial, and their smooth versions (see the end of <Ref> for a discussion about smoothness, and <Ref> for the formal definition). We refer to Table <ref> for a summary of our results.
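As a minimal illustration (not part of the paper's formal setup), one round of the protocol under each feedback model can be simulated as follows; using None for the hidden symbol is an encoding choice made here.

def round_outcome(bid, valuation, highest_competing_bid, feedback="transparent"):
    # Utility and feedback Z_t for one first-price auction round.
    won = bid >= highest_competing_bid
    utility = (valuation - bid) if won else 0.0
    if feedback == "full":
        z = (valuation, highest_competing_bid)
    elif feedback == "transparent":
        z = (valuation if won else None, highest_competing_bid)
    elif feedback == "semi-transparent":
        z = (valuation, None) if won else (None, highest_competing_bid)
    elif feedback == "bandit":
        z = valuation if won else None
    else:
        raise ValueError(feedback)
    return utility, z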
§.§ Overview of our results (ignoring logarithmic factors)
Stochastic i.i.d. settings
* In both the full and transparent feedback models, the minimax regret is of order √(T) (<Ref>), and adding the smoothness requirement leaves this rate unchanged.
* In the semi-transparent feedback model, the minimax regret is of order T^2/3 (<Ref>). Also in this case, adding the smoothness requirement leaves this rate unchanged.
* In the bandit feedback model, smoothness is crucial to achieve a sublinear regret (<Ref>). In particular, smoothness implies a minimax regret of T^2/3 (this is obtained by combining the upper bound in <Ref> and the lower bound in <Ref>).
Adversarial settings
* Without smoothness, sublinear regret cannot be achieved, even with full feedback (<Ref>).
* In both the full and transparent feedback model, the minimax regret in a smooth environment is of order √(T) (combining the lower bound in <Ref> and the upper bound in <Ref>).
* Both with semi-transparent and bandit feedback, the minimax regret in a smooth environment is of order T^2/3 (combining the lower bound in <Ref> and the upper bound in <Ref>).
The minimax regret rates for first-price auctions mirror the allowed regret regimes in finite partial monitoring games <cit.> and in online learning with feedback graphs <cit.>. However, as shown by <cit.>—see also <cit.>—games with continuous outcome/action spaces allow for a much larger set of regret rates.
Table <ref> reveals some interesting properties of the minimax regret for this problem: full feedback and transparent feedback are essentially equivalent while semi-transparent feedback and bandit feedback differ only in the stochastic i.i.d. setting. Moreover, while smoothness is key for learning in the adversarial setting, in the stochastic case smoothness is only relevant for bandit feedback.
§.§ Technical challenges
The utility function. The utility functions b ↦_t(b)=(V_t-b){ M_t ≤ b } are defined over a continuous decision space [0,1] and are not Lipschitz (even the weaker property that the expected cumulative reward b↦𝔼[ ∑_t∈[T]_t(b) ] is one-sided Lipschitz does not hold in general).
We address this problem by developing techniques designed to control the approximation error incurred when discretizing the bidding space.
In the stochastic i.i.d. setting, the approximation error is controlled by adaptively building a non-uniform grid.
This allows us to estimate the distribution of these competing bids, uniformly over the subintervals of [0,1]. In the adversarial setting, instead, we use the smoothness assumption to guarantee that the expected utility is Lipschitz. In this case, the approximation error is controlled using a uniform grid with an appropriate grid-size (<Ref>).
The feedback models. Our feedback models interpolate between bandit (only the bidder's utility is observed) and full feedback (V_t and M_t are always observed).
In the stochastic i.i.d. case, the different levels of transparency are crucial to the process of building the non-uniform grids used to control the discretization error.
In the adversarial case, when there are only K allowed bids, the optimal rates are √(T ln K) and √(KT) under full and bandit feedback, respectively.
While the semi-transparent feedback is not enough to improve the bandit rate, the transparent one can be exploited via a more sophisticated approach. To this end, we design an algorithm enjoying the full-feedback regret rate √(T ln K) while only relying on the weaker transparent feedback.
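For intuition, a minimal sketch of exponential weights (Hedge) run over a uniform K-point bid grid with full feedback—the textbook route to the √(T ln K) rate mentioned above—is given below; this is not the algorithm designed in the paper, and the learning rate and grid size are illustrative.

import numpy as np

def hedge_bidder(valuations, competing_bids, K=100, eta=None, seed=0):
    # With full feedback, the utility of every grid bid can be reconstructed
    # from (V_t, M_t) each round, which is what enables the sqrt(T log K) rate.
    T = len(valuations)
    grid = np.linspace(0.0, 1.0, K)
    eta = np.sqrt(np.log(K) / T) if eta is None else eta
    cum_utils = np.zeros(K)                       # cumulative utility of each grid bid
    rng = np.random.default_rng(seed)
    total_utility = 0.0
    for v, m in zip(valuations, competing_bids):
        scores = eta * cum_utils
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        bid = rng.choice(grid, p=probs)
        total_utility += (v - bid) * (bid >= m)
        cum_utils += (v - grid) * (grid >= m)     # full feedback: update every grid bid
    return total_utility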
Lower bounds. The linear lower bounds (<Ref>) exploit a “needle in a haystack” phenomenon, where there is a hidden optimal bid b^* (the needle) in the [0,1] interval (the haystack) and the learner has no way of finding b^* using the feedback it has access to. This is indeed the case in the non-smooth adversarial full-feedback setting and in the non-smooth i.i.d. bandit setting. To prove the remaining lower bounds, we design careful embeddings of known hard instances into our framework. In particular, in <Ref> we embed the hard instance for prediction with two experts, and in <Ref> the hard instance for K-armed bandits (with K = Θ(T^1/3)).
§.§ Related Work
The role of transparency in first-price auctions, where the winning bid is disclosed at the end of each auction, has been studied in <cit.> with a focus on how transparency affects the equilibria of the repeated bidding game.
Although the problem of regret minimization in first-price auctions has been studied before, only a few papers consider the setting of unknown valuations.
<cit.> introduce a general framework for the study of regret in auctions where a bidder's valuation is only observed when the auction is won. In the special case of first-price auctions, their setting is equivalent to our transparent feedback when the sequence of pairs (V_t,M_t) is adversarially generated. Following a parameterization introduced by <cit.>, <cit.> provide a O(√(Tlnmax{Δ_0^-1,T})) regret bound, where Δ_0 = min_t < t'|M_t-M_t'| is controlled by the environment. In the stochastic i.i.d. case, their results translate into distribution-dependent guarantees not providing any worst-case sublinear bound (we obtain a √(T) rate). In the adversarial case, their guarantees are still worst-case linear (we obtain √(T) bounds leveraging the smoothness assumption).
<cit.> consider a stochastic i.i.d. setting with the additional assumption that V_t and M_t are independent. Their main result is a bidding algorithm with distribution-dependent regret rates (of order T^1/3 + ε or √(T), depending on the assumptions on the underlying distribution) in the transparent setting. Again, this result is not comparable to ours because of the independence assumption and the distribution-dependent rates (which do not allow to recover our minimax rates).
Other works consider regret minimization in repeated second-price auctions with unknown valuations. For instance, <cit.> investigate a repeated bidding setting, but do not consider regret minimization. <cit.> derive regret bounds for the case when M_t are adversarially generated, while V_t are stochastically or adversarially generated and the feedback is transparent.
Considerably more work study first price auctions when the valuation V_t is known to the bidder at the beginning of each round t. Note that these results are not directly comparable to ours.
<cit.> look at the case when the V_t are adversarial and the M_t are either stochastic i.i.d. or adversarial. In the bandit feedback case (when M_t is never observed), they show that the minimax regret is O(T^2/3) in the stochastic case and O(T^3/4) in the adversarial case.
<cit.> prove a O(√(T)) regret bound in the semi-transparent setting (M_t observed only when the auction is lost) with adversarial valuations and stochastic bids.
<cit.> focus on the adversarial case, when V_t and M_t are both generated adversarially. They prove a O(√(T)) regret bound in the full feedback setting (M_t always observed) when the regret is defined with respect to all Lipschitz shading policies—a much larger class than the set all fixed bids which we consider here. This setup is extended in <cit.> where the authors consider the case in which the bidder is provided access to hints before each auction. <cit.> also studied the full information feedback setting and design a space-efficient variant of the algorithm proposed by <cit.>.
<cit.> introduce a contextual model in which V_t is adversarial and M_t = ⟨θ,x_t⟩ + ε_t where x_t∈^d is contextual information available at the beginning of each round t, θ∈^d is an unknown parameter, and ε_t is drawn from an unknown log-concave distribution. They study regret in bandit and full feedback settings.
A different thread of research is concerned with the convergence property of the regret minimization dynamics in first-price auctions (or, more specifically, with the learning dynamics of mean-based regret minimization algorithms).
<cit.> show that with continuous bid levels, coarse-correlated equilibria exist whose revenue is below the second price.
<cit.> prove that regret minimizing bidders converge to a Bayesian Nash equilibrium in a first-price auctions when bidder values are drawn i.i.d. from a uniform distribution on [0,1]. <cit.> show that if two bidders with finitely many bid values converge, then the equilibrium revenue of the bidder with the highest valuation is the second price.
<cit.> provide a characterization of the equilibria of the learning dynamics depending on the number of bidders with the highest valuation. Their characterization is for both time-average and last-iterate convergence.
Finally, smoothed analysis of algorithms, originally introduced by <cit.> and later formalized for online learning by <cit.> and <cit.>, is a known approach to the analysis of algorithms in which the instances at every round are generated from a distribution that is not too concentrated.
Recent works on the smoothed analysis of online learning algorithms include
<cit.>, <cit.>, <cit.>, <cit.>, <cit.>,
and
<cit.>.
§ THE LEARNING MODEL
We introduce formally the repeated bidding problem in first-price auctions.
At each time step t, a new item arrives for sale, for which the learner holds some unknown valuation V_t∈ [0,1].
The learner bids some B_t ∈ [0,1] and, at the same time, a set of competitors bid for the same object. We denote their highest competing bid by M_t∈ [0,1].
The learner gets the item at cost B_t if it wins the auction (i.e., if B_t ≥ M_t), and does not get it otherwise. Then, the learner observes some feedback Z_t and gains utility _t(B_t), where, for all b∈[0,1],
_t(b) = ( V_t - b ) { b ≥ M_t } (see Online bidding protocol in <Ref>).
Crucially, at time t the learner does not know its valuation V_t for the item before bidding, implying that its bid B_t only depends on its past observations Z_1,…,Z_t-1 (and, possibly, some internal randomization).
The goal of the learner is to design a learning algorithm that maximizes its utility. More precisely, we measure the performance of an algorithm by its regret R_T() against the worst environment in a certain class Ξ, where
R_T() = sup_∈Ξ R_T (,) ,
R_T (,) = sup_b ∈ [0,1]𝔼[ ∑_t=1^T _t(b) - ∑_t=1^T _t(B_t) ] ,
and the expectation is taken with respect to the randomness of the algorithm which selects B_t, and (possibly) the randomness of the environment generating the (V_t,M_t) pairs.
The environments. In this paper we consider both stochastic i.i.d. and adversarial environments.
* Stochastic i.i.d.: The pairs (V_1,M_1),(V_2,M_2),… are a stochastic i.i.d. process
* Adversarial: The sequence (V_1,M_1),(V_2,M_2),… is generated by an oblivious adversary.
Following previous works in online learning (see <Ref>), we also study versions of the above environments that are constrained to generate the sequence of (V_t,M_t) values using distributions that are “not too concentrated”. To this end, we introduce the notion of smooth distributions.
Let be a domain that supports a uniform distribution ν. A measure μ on is said to be σ-smooth if for all measurable subsets A ⊆, we have μ(A) ≤ν(A)/σ.
We thus also consider the following two types of environments.
* The σ-smooth stochastic i.i.d. environment, which is a stochastic i.i.d. environment where the distribution of each pair (V_1,M_1),(V_2,M_2),… is σ-smooth
* The σ-smooth adversarial setting, where the pairs (V_1,M_1),(V_2,M_2),… form a stochastic process such that, for each t, the distribution of the pair (V_t,M_t) is σ-smooth.
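As a toy numerical illustration of the definition of σ-smoothness (this example and the helper name are ours, not the paper's): for a density that is piecewise constant on n equal-size cells of its domain, the best smoothness parameter is simply one over the sup-norm of the density. A minimal sketch, assuming such a histogram representation:

# Illustrative sketch (ours, not from the paper): smoothness of a histogram density.
def smoothness_of_histogram(cell_masses):
    """cell_masses: probability mass of each of n equal-size cells (sums to 1)."""
    n = len(cell_masses)
    sup_density = max(cell_masses) * n      # mass divided by cell volume 1/n
    return 1.0 / sup_density                # mu(A) <= sup_density * nu(A) for every A

# Example: the uniform distribution [0.25, 0.25, 0.25, 0.25] is 1-smooth,
# while [0.5, 0.5, 0.0, 0.0] is only 1/2-smooth.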
The feedback. Having described the types of environments we study, we now specify the types of feedback the learner receives at the end of each round, from the richest to the least informative; a short illustrative sketch of these rules follows the list.
* Full Feedback. The learner observes its valuation and the highest competing bid: Z_t=(V_t,M_t).
* Transparent Feedback. The learner always observes M_t, but V_t is only revealed if it gets the item: Z_t is equal to (⋆,M_t) if B_t < M_t and (V_t,M_t) otherwise.
* Semi-Transparent Feedback[This feedback is similar to the winner-only feedback in <cit.>.].
The learner observes V_t if it gets the item and M_t otherwise: Z_t is equal to (⋆,M_t) if B_t < M_t and (V_t,⋆) otherwise.
* Bandit Feedback[We call this the bandit feedback because it is equivalent to receiving _t(B_t) (with the extra information ⋆ to distinguish between losing the item and winning it with V_t = B_t).].
The learner observes V_t if it gets the item and the symbol ⋆ otherwise: Z_t is ⋆ if B_t < M_t and V_t otherwise.
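To make the four feedback rules concrete, here is a minimal, hypothetical Python sketch of a single round of the protocol; the names Feedback and run_round are ours and not from the paper, and the code only restates the definitions above (None plays the role of ⋆).

# Illustrative sketch (assumption: utilities and bids are floats in [0,1]).
from enum import Enum

class Feedback(Enum):
    FULL = "full"
    TRANSPARENT = "transparent"
    SEMI_TRANSPARENT = "semi-transparent"
    BANDIT = "bandit"

def run_round(b, v, m, feedback):
    """Return (utility, observation) for bid b, valuation v, highest competing bid m."""
    win = b >= m
    utility = (v - b) if win else 0.0           # u_t(b) = (v - b) * 1{b >= m}
    if feedback is Feedback.FULL:
        obs = (v, m)                             # always observe both
    elif feedback is Feedback.TRANSPARENT:
        obs = (v, m) if win else (None, m)       # m always observed, v only if won
    elif feedback is Feedback.SEMI_TRANSPARENT:
        obs = (v, None) if win else (None, m)    # v if won, m if lost
    else:                                        # bandit feedback
        obs = (v, None) if win else None         # v if won, nothing otherwise
    return utility, obs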
§ THE STOCHASTIC I.I.D. SETTING
In this section, we investigate the problem of repeated bidding in first-price auctions with unknown valuations, when the pairs of valuations and highest competing bids are drawn i.i.d. from a fixed but unknown distribution. We study the different feedback models separately. We start by proving in <Ref> that it is not possible to achieve sublinear regret under the bandit feedback model without any assumption on the distribution of the environment.
Then, in <Ref> we give matching upper and lower bounds of order T^2/3 in the semi-transparent feedback model.
Notably, the latter lower bound holds for smooth distributions, while the upper bound works for any (possibly non-smooth) distributions.
Finally, in <Ref> we prove
that both the full and transparent feedback yield the same minimax regret regime of order √(T), regardless of the regularity of the distribution.
§.§ Stochastic i.i.d. environment with bandit feedback
In the bandit feedback model, at each time step, the learner observes the valuation V_t (and nothing else) when it wins the auction; it observes nothing at all when it loses.
The crucial difference with the other (richer) types of feedback is the amount of information received about M_t, which, in the bandit case, is just the relative position with respect to B_t (i.e., whether M_t ≤ B_t or B_t < M_t).
This makes it possible to hide in the interval [0,1] an optimal bid b^⋆ that cannot be uncovered by the learner over a finite time horizon. Following this idea, a difficult environment should be one which randomizes between two scenarios: a good scenario with large value V_t = 1 and M_t slightly smaller than b^⋆, and a bad one with poor value V_t = 0 and M_t slightly larger than b^⋆. This way, in order not to suffer linear regret, the learner has to find this tiny interval around b^⋆ (the “needle in a haystack”).
Consider the problem of repeated bidding in first-price auctions in a stochastic i.i.d. environment with bandit feedback. Then, any learning algorithm satisfies
R_T() ≥ T/12 .
For any deterministic algorithm , we construct a distribution such that an environment for which each element of the i.i.d. sequence (V_1,M_1), (V_2, M_2), … follows this distribution will induce a regret R_T(, ) ≥ T/12.
By Yao's Minimax principle, this will be sufficient to conclude.
Fix then any deterministic algorithm and consider its bids against an environment that selects the valuations V_t to be either 0 or 1. At each time step, the feedback that receives is then either 0, 1 or ⋆ (when the item is allocated to one of the competitors). This implies that the history of the bids posted by can be naturally described by a ternary decision tree of height T, where each level corresponds to a time step and any node to a bid. Crucially, the leaves of this tree are finite (at most 3^T), which means that the algorithm only posts bids in a finite subset N of [0,1]. Since the set (1/3, 1/2) ∖ N is open, there exist a bid b^⋆ and ε > 0 such that [b^⋆, b^⋆+ε] is contained in the interval (1/3, 1/2) and does not intersect N.
Consider now the i.i.d. environment , which draws the (V_t,M_t) as follows: with probability 1/2 it selects (1,b^⋆), otherwise (0,b^⋆+ε).
The bid b^⋆ is the best bid in hindsight, yielding an overall expected utility of (T/2)(1-b^⋆), which is at least T/4 because b^⋆ belongs to the interval (1/3, 1/2). Focus now on what happens to the learner: we know that it never bids in [b^⋆, b^⋆ + ε], which implies that we only need to consider the following two cases. Every time that posts bids smaller than b^⋆, it never wins the item (zero utility). Instead, if it posts bids larger than b^⋆+ε, then it always gets the item (whose average value is 1/2), paying at least b^⋆+ε ≥ 1/3. Putting these two cases together, we have proved that at each time step the expected utility earned by the learner is at most 1/6 = 1/2 - 1/3. Finally, by combining the lower bound on the performance of b^⋆ with the upper bound on the expected utility of the learner, we get
R_T(, ) ≥ T/4 - T/6 = T/12.
§.§ Stochastic i.i.d. environment with semi-transparent feedback
In this section, we prove two results settling the minimax regret for the semi-transparent feedback where the environment is i.i.d. (and, possibly, smooth). First, we construct a learning algorithm, , achieving T^2/3 regret against any i.i.d. environment. Then, we complement it with a lower bound of the same order (up to log terms) obtained even in a smooth i.i.d. environment.
§.§.§ A T^2/3 upper bound for the i.i.d. environment
Our learning algorithm is composed of two phases. First, for T_0=Θ(T^2/3) rounds, it collects samples from the highest competing bid random variables M_1,M_2,…, M_T_0 by posting dummy bids B_1=B_2=…=B_T_0=0. Among these values (plus the value X_0 = 0), the algorithm selects Θ(√(T_0)) bids according to their ordering, in such a way that the empirical frequency of the bids M_1,M_2,…,M_T_0 landing strictly in between any two consecutive selected values is at most Θ(1/√(T_0)). Second, for the remaining time steps, it runs any bandit algorithm, using as candidate bids the ones collected in the first phase (see the pseudo-code for details; an illustrative sketch follows below). Note that, in this second phase, the (less informative) bandit feedback would be enough to run the algorithm: we only use the additional information provided by the semi-transparent feedback in the initial “collecting bids” phase.
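The following hedged Python sketch illustrates the two-phase scheme just described. The helper names and the generic bandit interface (select/update) are our own illustrative assumptions, not the paper's pseudo-code.

# Illustrative sketch: phase 1 posts dummy bids 0 to observe M_1,...,M_{T0}
# under semi-transparent feedback and builds a quantile-based grid of about
# sqrt(T0) bids; phase 2 hands that grid to any off-the-shelf bandit routine.
import math

def quantile_grid(m_samples, step):
    """Keep X_0 = 0 plus every `step`-th order statistic of the observed bids,
    so that the empirical mass strictly between consecutive points is < step/T0."""
    ordered = sorted(m_samples)
    grid = [0.0] + ordered[step - 1 :: step]
    return sorted(set(grid))

def two_phase_bidder(T, get_auction, bandit_factory):
    """get_auction(t, bid) -> (utility, observed_m or None); bandit_factory(grid) -> bandit."""
    T0 = math.ceil(T ** (2 / 3))
    m_obs = []
    for t in range(T0):                          # phase 1: dummy bids
        _, m = get_auction(t, 0.0)               # bidding 0 usually loses, revealing m
        if m is not None:
            m_obs.append(m)
    grid = quantile_grid(m_obs, max(1, math.floor(math.sqrt(T0))))
    bandit = bandit_factory(grid)                # phase 2: bandit on the grid
    for t in range(T0, T):
        k = bandit.select()                      # index of a grid bid
        u, _ = get_auction(t, grid[k])
        bandit.update(k, (u + 1) / 2)            # rescale [-1,1] utilities to [0,1]
    return grid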
As a first step, we state a simple concentration result pertaining to the i.i.d. process M,M_1,M_2,…,M_T_0, for T_0 ∈ℕ. If is the family of all the subintervals of [0,1] and
δ∈(0,1), we define
_δ^T_0 = ⋂_I ∈{ | 1/T_0∑_t=1^T_0{ M_t ∈ I } - [ M ∈ I] | < 8 √(ln (1/δ) /T_0)} .
For every T_0 ∈ and δ∈ (0,1), we have
[_δ^T_0] ≥ 1 - δ.
The family of all the subintervals of [0,1] has VC dimension 2 (see, e.g., Chapter 14.2. of <cit.>). Therefore, we get the desired result by directly applying the standard sample complexity bound for ε-samples (see, e.g., Theorem 14.15 of <cit.>) with T_0 samples and ε = 8 √(ln (1/δ) /T_0).
To lighten future notation, we introduce the following
If K ∈ℕ, 0 = x_0 < x_1 < … < x_K ≤ 1 < x_K+1=2, and = { x_0, …, x_K },
we denote by k_: [0,1] →{0,1,…,K} the function that maps each b∈[0,1] to the unique k∈{0,1,…,K} such that b∈[x_k, x_k+1).
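As a small illustration of this definition, the index map can be computed with a binary search; the helper name below is ours.

# Minimal sketch of the index map: grid is sorted and starts with 0.0;
# returns k such that grid[k] <= b < grid[k+1] (with an implicit last point 2).
import bisect

def k_index(grid, b):
    return bisect.bisect_right(grid, b) - 1

# Example: with grid = [0.0, 0.3, 0.7], k_index(grid, 0.5) == 1 and k_index(grid, 0.9) == 2.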
We now prove another lemma that allows us to control the expected cumulative utility of any bid in [0,1] with that of the best bid in a discretization (without relying on any smoothness assumption).
Assume that the process M,M_1,M_2,… of the highest competing bids form an i.i.d. sequence.
Let also 0 = x_0 < x_1 < … < x_K ≤ 1 < x_K+1=2 and = { x_0, …, x_K }.
For all b∈[0,1] and T_0,T_1 ∈ with T_0 < T_1, we have:
∑_t=T_0+1^T_1_t(b) ≤∑_ t=T_0+1 ^T_1_t x_k_(b) + (T_1-T_0) x_k_(b) < M < x_k_(b)+1 .
Fix any b∈[0,1], T_0,T_1 ∈ with T_0 < T_1, and a time step t∈{T_0+1, …, T_1}.
Then
_t(b)
=
(V_t-b){b ≥ M_t }≤(V_t-x_k_(b)){x_k_(b)≥ M_t} + {b ≥ M_t > x_k_(b)}
≤_t(x_k_(b)) + [x_k_(b) < M_t ≤ b]
≤_t(x_k_(b)) + [x_k_(b) < M_t < x_k_(b)+1] .
Summing over t and recalling that M_t and M share the same distribution yields the conclusion.
As a corollary of <Ref> we obtain similar discretization error guarantees when the grid of points is random.
Fix any T_0∈ and δ∈ (0,1).
Let = { X_0, …, X_K } be a random set containing a random number K of points satisfying
0 = X_0 < X_1 < … < X_K ≤ 1 < X_K+1=2.
Assume that the random variables K, X_0,X_1, …, X_K+1 are _T_0-measurable, where _T_0 is the history up to and including time T_0.
Assume that the process (V_1,M_1), (V_2,M_2),… of the valuations/highest competing bids form an i.i.d. sequence.
Then, for all b∈[0,1] and T_1 ∈ with T_1 > T_0, we have:
∑_t=T_0+1^T_1_t(b) ≤∑_ t=T_0+1 ^T_1_t X_k_(b)
+ (T_1-T_0)1/T_0∑_t=1^T_0 X_k_(b) < M_t < X_k_(b)+1 + (T_1-T_0) 8√(ln(1/δ)/T_0) + δ .
We are now ready to present the main theorem of this section.
Consider the problem of repeated bidding in first-price auctions in a stochastic i.i.d. environment with semi-transparent feedback.
Then there exists a learning algorithm such that
R_T() ≤ 16 ( 13 + √(ln T) ) T^2/3 .
We prove that yields the desired bound when its learning routine is (a rescaled version of) MOSS <cit.>: since MOSS is designed to run with gains in [0,1] while the utilities we observe are in [-1,1], we first apply the reward transformation x ↦ (x+1)/2 to the observed utilities. This will cost a multiplicative factor of 2 on the regret guarantees of MOSS.
Leveraging the fact that the empirical frequency between two consecutive X_k and X_k+1 generated by is at most 2/√(T_0) by design and applying <Ref> with T_1 = T to the random variables X_0,X_1,…,X_K, we obtain, for all b∈[0,1],
∑_t=T_0+1^T_t(b) ≤∑_ t=T_0+1 ^T_t X_k_(b) + (T-T_0) ( 2/√(T_0) + 8√(ln(1/δ)/T_0) ) + δ = (⋆) .
Now, applying the tower rule to the expectation on the right-hand side conditioning to the history _T_0 up to time T_0, we can use the fact that the regret of the rescaled version of MOSS is upper bounded by 98√((K+1) (T-T_0)) and the number of points K+1 collected by is at most √(T_0) + 1 to obtain
(⋆) ≤∑_ t=T_0+1 ^T_t (B_t) + 98√(( √(T_0)+1)(T-T_0) ) + (T-T_0) ( 2/√(T_0) + 8√(ln(1/δ)/T_0) ) + δ .
Finally, tuning δ = 1/T_0, upper bounding the cumulative regret over the first T_0 rounds with T_0, and recalling that T_0 = ⌈ T^2/3⌉, yields the conclusion.
§.§.§ A T^2/3 lower bound for the smooth i.i.d. environment
We prove here that the Õ(T^2/3) bound achieved by is indeed optimal, up to logarithmic terms. Our lower bound consists in carefully embedding into our model a hard multiarmed bandit instance with K = Θ(T^1/3) arms, which entails a lower bound of order Ω(√(KT)) = Ω(T^2/3). Note that the proof agenda we have just presented is rich in challenges: we want to embed a discrete construction on K independent actions into our continuous framework, where the utilities of different bids are correlated, while enforcing smoothness. Furthermore, the feedback models are different.
We report here a proof sketch and refer the interested reader to <Ref> for the missing details.
Consider the problem of repeated bidding in first-price auctions in a stochastic i.i.d. σ-smooth environment with semi-transparent feedback, for σ∈ (0, 1/66]. Then, any learning algorithm satisfies, for T ≥ 8,
R_T() ≥ (3/10^4) T^2/3 .
Define, for all v,m∈[0,1], the density
f(v,m) = _[7/8, 1](v) ( 1/(v-m)^2 _[1/4, v-1/8](m) + 4/(v-1/4) _[0, 1/4)(m) ) .
Let ^0 be a probability measure such that (V,M),(V_1,M_1), (V_2,M_2), … is a -i.i.d. sequence where each pair (V,M) has common probability density function f.
Denoting by ^0 the expectation with respect to ^0, we have, for any bid b∈[0,1] and any t
^0_t(b)
= b ( 1/2 + (1-4b) ln(6/5) ) _[0,1/4)(b)
+ 1/8 _[1/4, 3/4) (b)
- ( 4b^2 - 6b + 17/8 ) _[3/4, 7/8) (b)
+ ( 15/16 - b ) _[7/8 , 1] (b) .
This function grows with b on [0,1/4), has a plateau of maximizers on [1/4, 3/4], and then decreases on (3/4,1] (see <Ref>, right).
Now, let Ξ = { (w,ε) ∈ [0,1]^2 : w-ε≥ 1/4 and w+ε≤ 3/4 } and define, for all (w,ε)∈Ξ, the four rectangles R^1_w,ε = [15/16, 1] × [w-ε, w), R^2_w,ε = [15/16, 1] × [w, w + ε), R^3_w,ε = [7/8, 15/16) × [w - ε, w), R^4_w,ε = [7/8, 15/16) × [w, w + ε), and, for all v,m ∈ [0,1], the perturbation
g_w,ε (v,m) = ε 16/9 ( _R^1_w,ε∪ R^4_w,ε(v,m) - _R^2_w,ε∪ R^3_w,ε(v,m) ) .
For all (w,ε)∈Ξ, define
f_w,ε = f + g_w,ε (see <Ref>, left/center)
and note that it is a valid probability density function, i.e., f_w,ε≥ 0 and ∫_[0,1]^2 f_w,ε(v,m) dv dm = 1.
For all (w,ε)∈Ξ, let ^w,ε be a probability measure such that (V,M),(V_1,M_1), (V_2,M_2), … is a ^w,ε-i.i.d. sequence where each pair (V,M) has common probability density function f_w,ε.
Denoting by ^w,ε the expectation with respect to ^w,ε, we have, for any bid b∈[0,1] and any t,
^w,ε_t(b) = ^0 _t(b) + (ε/144) Λ_w,ε(b) ,
where Λ_u,r is the tent map centered at u with radius r, defined as Λ_u,r(x) = max{ 1-|x-u|/r, 0 }.
In words, in a perturbed scenario ^w,ε the expected utility is maximized at the peak of a spike centered at w, with length and height Θ(ε), perturbing the plateau area [1/4, 3/4] of maximum height (see <Ref>, right).
Define, for all times t∈, the feedback function
ψ_t [0,1] → [0,1] ×{⋆}∪{⋆}× [0,1] ,
b ↦
(V_t,⋆) if b ≥ M_t
(⋆, M_t) if b < M_t
and note that, in our semi-transparent feedback model, the feedback Z_t received after bidding B_t at time t is ψ_t(B_t).
Then, for each (w,ε) ∈Ξ and each b∈[0,1]∖[w-ε,w+ε], note that the distribution of ψ_t(b) under ^w,ε coincides with the distribution of ψ_t(b) under ^0, i.e., in push-forward notation (for a refresher on push-forward measures, see <Ref>),
_ψ_t(b)^w,ε = ^0_ψ_t(b) .
Now, let K∈ℕ, ε = 1/(4K), w_k = 1/4 + (2k-1)ε, and ^k = ^w_k,ε (for each k∈[K]).
At a high level, we built a problem in which we know in advance the region where the optimal bid belongs to (i.e., the interval [1/4,3/4]), but, when the underlying scenario is determined by the probability measure ^k for some k∈[K], in order not to suffer regret Ω(ε T), the learner has to detect inside this potentially optimal region where a spike of height (and length) Θ(ε) in the reward occurs. This last task can be accomplished only by locating where the perturbation in the base probability measure occurs, which, given the feedback structure, can only be done by playing in the interval [w_k-ε,w_k+ε) if the underlying probability is ^k, suffering instantaneous regret of order ε whenever the underlying probability is ^j, with j ≠ k.
Given that we partitioned the potentially optimal region [1/4,3/4] into Θ(1/ε) disjoint intervals where these perturbations can occur, the feedback structure implies that each of these intervals deserves its own dedicated exploration.
To better highlight this underlying structure, we will show (see <Ref>) that our problem is no easier than a simplified K-armed stochastic bandit problem, where the instances we consider are determined by the probability measures ^1, …, ^K.
In this bandit problem, when the underlying probability measure is induced by some ^k, the corresponding arm k has an expected reward Θ(ε) larger than the others.
Then, via an information-theoretic argument, we can show that any learner would need to spend at least order of 1/ε^2 rounds to explore each of the K arms (paying Ω(ε) each time) or else it would pay a regret Ω(ε T).
Hence, the regret of any learner, in the worst case, is lower bounded by Ω( ε K/ε^2 + ε T ) = Ω( K^2 + T/K ) (recalling our choice of ε = 1/(4K)).
Picking K = Θ(T^1/3) yields a lower bound of order T^2/3.
For all missing technical details, see <Ref>.
§.§ Stochastic i.i.d. environment with transparent and full feedback
This section completes the study of the stochastic i.i.d. environment by determining the minimax regret when the learner has access to full or transparent feedback.
§.§.§ A √(T) upper bound for the i.i.d. environment
While with semi-transparent feedback we had to rely on dummy bids B_1=…=B_T_0 = 0 to gather information about the distribution of the highest competing bids, with transparent feedback this information is collected for free at every bidding round.
To use this extra information, we present a wrapper (for a sequence of base learning algorithms for the transparent feedback model) whose purpose is to restart the learning process at a geometric cadence so as to update the set of candidate bids.
We assume that each of the wrapped base algorithms _τ can take as input any finite subset ⊂ [0,1] and returns bids in .
Furthermore, for all T', we let _T'(_τ,) be an upper bound on the regret over T' rounds of _τ with input against the best fixed x∈.
Formally, we require that for any two times T_0 < T_1 such that T' = T_1 - T_0, the quantity _T'(_τ,) is an upper bound on
max_x∈∑_t=T_0+1^T_1_t(x) - ∑_t=T_0+1^T_1_t(B_t), where B_t ∈ is the sequence of prices played by _τ (with input ) when started at round t=T_0+1 and run up to time T_1.
Without loss of generality, we assume that T' ↦_T'(_τ,) is non-decreasing.
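A minimal sketch of such a restart wrapper follows, assuming a base learner exposing a select/update interface; that interface and the function names are our own assumptions for illustration, not a specification from the paper.

# Illustrative sketch: epoch tau lasts 2^(tau-1) rounds; at its start the set
# of candidate bids is rebuilt from all distinct highest competing bids seen so
# far (always observable under transparent feedback), and a fresh base learner
# is run on that set.
def restart_wrapper(T, get_auction, base_factory):
    """get_auction(t, bid) -> (utility, observed_m); base_factory(bids) -> base learner."""
    observed_ms = {0.0}
    t, tau = 0, 1
    while t < T:
        length = 2 ** (tau - 1)
        bids = sorted(observed_ms)              # candidate bids for this epoch
        base = base_factory(bids)
        for _ in range(min(length, T - t)):
            k = base.select()
            u, m = get_auction(t, bids[k])      # transparent feedback: m always observed
            observed_ms.add(m)
            base.update(k, u)
            t += 1
        tau += 1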
Consider the problem of repeated bidding in first-price auctions in a stochastic i.i.d. environment with transparent feedback.
Then the regret of run with base algorithms _1, _2,… satisfies
R_T() ≤∑_τ=2^⌈log_2(T+1) ⌉_2^τ - 1_τ,_τ + 3 + 16 ( √(2)+2 ) √( T ln T) .
Fix an arbitrary epoch τ∈{2,…, ⌈log_2 (T+1)⌉} (the first epoch will be upper bounded separately).
With respect to the notation in <Ref>,
let = _τ,
K + 1 =,
T_0 = ∑_τ'=1^τ-12^τ'-1=2^τ-1-1 (the time passed from the beginning of epoch 1 up to and including the end of epoch τ-1),
T_1 = min{T_0+2^τ-1,T} (the end of epoch τ),
and let X_0 < X_1 < … < X_K be the distinct elements of in increasing order, where we note that X_0 = 0, X_K≤ 1, and we set X_K+1=2.
Let also _T_0 be the history up to and including time T_0 and recall <Ref>.
Applying first <Ref> (together with the fact that the empirical frequency between any two consecutive values X_k and X_k+1 is 0 by design), then exploiting the monotonicity of T' ↦_T'(_τ,_τ) for the last epoch (if T_0+2^τ-1 > T), we obtain, for all b∈[0,1] and δ∈(0,1),
∑_t=T_0+1^min{T_0+2^τ-1,T}_t(b) ≤∑_ t=T_0+1 ^min{T_0+2^τ-1,T}_t X_k_(b)
+ 2^τ-1 8√(ln(1/δ)/T_0) + δ
≤∑_ t=T_0+1 ^min{T_0+2^τ-1,T}_t (B_t)
+ _2^τ - 1_τ,_τ
+ 2^τ-1· 8 √(ln(1/δ)/(2^τ-1-1)) + δ .
Summing over epochs τ∈{2,…, ⌈log_2 (T+1)⌉}, upper bounding by 1 the regret incurred in the first epoch, and tuning δ = 1/T, yields the conclusion.
Now we are only left to design appropriate base algorithms _1,_2,… for the transparent feedback to wrap around.
The algorithm.
To this end, we introduce the algorithm (designed to run with transparent feedback), which
borrows ideas from online learning with feedback graphs <cit.>.
Similar algorithms for related settings have been previously proposed by <cit.> and <cit.>.
For the familiar reader, note that our setting can be seen as an instance of online learning with strongly observable feedback graphs.
In contrast to a black-box application of feedback-graph results, we shave off a logarithmic term (in the time horizon) by using a dedicated analysis.
For any x ∈ [0,1], we denote by δ_x the Dirac distribution centered at x.
Note that the transparent feedback is sufficient to compute the reward estimates in <Ref>.
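For concreteness, the sketch below shows one plausible exponential-weights implementation using such importance-weighted estimates. The class name and the specific exploration rule (placing mass γ on the largest candidate bid) are our own assumptions and may differ from the paper's exact pseudo-code.

# Hedged sketch: exponential weights with transparent feedback. The estimate
# for a candidate bid x is (V_t - x) * 1{x >= M_t} * 1{B_t >= M_t} / P(B_t >= M_t),
# which is computable because M_t is always observed and V_t is observed on wins.
import math
import random

class ExpWeightsTransparent:
    def __init__(self, bids, T):
        self.bids = sorted(bids)
        self.w = [1.0] * len(self.bids)
        self.gamma = math.sqrt(math.log(len(self.bids)) / ((math.e - 1) * T))

    def _probs(self):
        tot = sum(self.w)
        p = [(1 - self.gamma) * wi / tot for wi in self.w]
        p[-1] += self.gamma                      # forced exploration on the highest bid
        return p

    def select(self):
        p = self._probs()
        self.last_p = p
        self.last_idx = random.choices(range(len(self.bids)), weights=p)[0]
        return self.bids[self.last_idx]

    def update(self, m, v_if_won):
        """m: observed highest competing bid; v_if_won: valuation if we won, else None."""
        p = self.last_p
        won = self.bids[self.last_idx] >= m
        mass_above = sum(pi for b, pi in zip(self.bids, p) if b >= m)
        for i, x in enumerate(self.bids):
            if won and x >= m:                   # utility of x is known only on wins
                est = (v_if_won - x) / max(mass_above, self.gamma)
                self.w[i] *= math.exp(self.gamma * est)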
We defer the proof of the following proposition to <Ref>.
Let ⊂ [0,1] be a finite set, T∈ℕ a time horizon, and tune the exploration rate as γ = √( ln(||) / ((e-1)T) ).
Then, the regret of against the best fixed bid in is
max_x∈∑_t=1^T _t(x) - ∑_t=1^T _t(B_t) ≤ 2 √((e-1) T ln|| ) .
Putting together <Ref> yields the desired optimal rate.
Consider the problem of repeated bidding in first-price auctions in a stochastic i.i.d. environment with transparent feedback.
Then the regret of run with the base algorithm of each epoch τ being tuned with γ = γ(τ) = √( ln ( |_τ| ) / ((e-1) 2^τ-1) ), satisfies
R_T() ≤ 3 + 2 ( √(2)+2 ) ( √(2(e-1)) + 8 ) √(T ln T) .
Plugging the guarantees of <Ref> into those of <Ref> and recalling that _τ≤ 2^τ-1 for each epoch τ = 2,3,…, gives the result (after straightforward computations).
§.§.§ A √(T) lower bound for the i.i.d. environment
We complement the positive result of <Ref> with a matching lower bound of order √(T). The idea underlying our hard instance is to embed the well-known lower bound for prediction with (two) experts into our framework: we construct two smooth distributions that are “similar” but have two different optimal bids whose performance is separated. We then formally prove that no learner can identify the correct distribution without suffering less than √(T) regret.
Consider the problem of repeated bidding in first-price auctions in a stochastic i.i.d. σ-smooth environment with full feedback, for σ∈ (0, 1/9]. Then, any learning algorithm satisfies
R_T() ≥ (1/2048) √(T) .
We prove the theorem by Yao's principle: we show that there exists a distribution over stochastic σ-smooth environments such that any deterministic learning algorithm suffers Ω(√(T)) regret against it, in expectation. We do that in two steps. First, for every ε∈ (0, 1/2) we construct a pair of 1/9-smooth distributions that are hard to discriminate for the learner. Then, we prove that, for the right choice of ε, any learner suffers the desired regret against at least one of them. For visualization, we refer to <Ref>.
As a tool for our construction, we introduce a baseline probability measure ^0, such that the sequence (V,M),(V_1,M_1),(V_2,M_2),… is ^0-i.i.d., and (V,M) has a distribution ^0_(V,M) (for a refresher on push-forward measures, see <Ref>) whose density function is as follows:
f^0(v,m) = 8·{(v,m) ∈ Q_+} + 8·{(v,m) ∈ Q_-},
where Q_+ = (0,1/4) × (0,1/4) and Q_- = (3/4,1) × (1/4,1/2) (see <Ref>).
A convenient way to visualize this distribution is to draw a uniform random variable U_t in the square Q_+ and then toss an unbiased coin. If the coin yields heads, then (V_t,M_t) is equal to U_t, otherwise (V_t,M_t) coincides with U_t translated by (3/4, 1/4). With some simple (but tedious) computation, it is possible to explicitly compute the expected utility of posting any bid b∈ [0,1], when (V_t,M_t) is drawn following the distribution ^0 (with expectation 𝔼^0):
𝔼^0[_t(b)] =
b/4 (1 - 8b) if b ∈ [0, 1/4)
-1/8 (16b^2 - 14b + 3) if b ∈ [1/4, 1/2)
1/2 (1 - 2b) if b ∈ [1/2, 1]
The function 𝔼^0[_t(b)] has two global maxima in [0,1], of value 1/128, attained at 1/16 and 7/16 (see the red line in <Ref>).
For any ε∈ (0, 1/2), we also define two additional (perturbed) probability measures ^±, such that the sequence (V,M),(V_1,M_1),(V_2,M_2),… is ^±-i.i.d. and the distribution ^±_(V,M) of (V,M) has density:
f^±(v,m) = 8(1 ±ε)·{(v,m) ∈ Q_+} + 8(1 ∓ε)·{(v,m) ∈ Q_-}.
Note that ||f^±||_∞ < 9 and ||f^0||_∞ = 8; therefore, all the distributions considered in this proof are 1/9-smooth.
To visualize these new perturbed distributions, recall the construction of ^0_(V,M) using the coin toss and the uniform random variable U: in this case the coin is biased and the probability of getting tails is (1∓ε)/2. It is still possible to compute explicitly the expected utility under these perturbed distributions for any bid b ∈ [0,1]:
𝔼^±[_t(b)] =
b/4 (1 - 8b) ±ε b/4 (1-8b) if b ∈ [0, 1/4)
-1/8 (16b^2 - 14b + 3) ±ε/4 (8b^2-11b+2) if b ∈ [1/4, 1/2)
1/2 (1 - 2b ∓ 3ε/4) if b ∈ [1/2, 1]
We refer to <Ref> for visualization. The crucial property of the distributions we constructed is that the instantaneous regret of not playing in the “correct” region is Ω(ε); formally, we have the following result. For the sake of readability, we postpone the proof of this claim to <Ref>.
There exist two disjoint intervals I_+ and I_- in [0,1] such that, for any ε∈ (0, 1/2) and any time t, the following inequalities hold:
max_x ∈ [0,1]𝔼^± [_t(x)] ≥𝔼^± [_t(b)] + ε/128, for all b∉ I_± .
Since the two distributions are “ε-close”[In <Ref> we formally prove that their total variation distance is at most Θ(ε).], any learner needs at least 1/ε^2 rounds to discriminate which one of the two distributions it is actually facing, paying each error with an instantaneous regret of Ω(ε) (<Ref>). All in all, any learner suffers a regret that is Ω(ε·1/ε^2 + εT), which is of the desired Ω(√(T)) order for the right choice of ε≈ T^-1/2.
As the last step of the proof, we formalize the above argument. Fix ε = 1/(4√(T)) and rename ^+=^1 and ^-=^2, given our choice of ε; similarly, denote with I_1 and I_2 the two intervals I_+ and I_- as in the statement of <Ref>.
For each j ∈{0,1,2}, consider the run of 𝒜 against the stochastic environment which draws (V_1,M_1),(V_2,M_2), … i.i.d. from ℙ^j. Let N_1 be the random variable that counts the number of times that the algorithm posts a bid in I_1. Similarly, N_2 counts the number of times that it posts a bid in I_2. For i=1,2, we have the following crucial bound on the expected value of N_i under ^i. Note that the result holds because the two distributions are so similar that the deterministic algorithm bids in the wrong region a constant fraction of the time steps. For the formal proof we refer the reader to <Ref>.
The following inequality holds:
1/2∑_i=1,2𝔼^i[ N_i ] ≤3/4 T.
We finally have all the ingredients to conclude the proof. Consider an environment that selects uniformly at random either ^1 or ^2 and then draws the (V_t,M_t) i.i.d. following it. We prove that the algorithm suffers linear regret against this randomized environment and, by a simple averaging argument, against at least one of them. Specifically, if b^⋆_i is the optimal bid in the scenario determined by ^i, for i ∈{1,2}, we have
R_T() ≥1/2∑_i=1,2𝔼^i[∑_t = 1^T_t(b^⋆_i) - ∑_t = 1^T_t(B_t)]
≥1/(1024√(T))∑_i=1,2𝔼^i[T - N_i] *(by <Ref> and the choice of ε)
≥1/(512√(T)) (T - 3/4 T) *(by <Ref>)
= √(T)/2048.
§ THE ADVERSARIAL SETTING
In this section we complete the picture of repeated bidding in first-price auctions by investigating the adversarial model. In particular, we consider two models: the standard one, where the sequence (V_1,M_1), (V_2,M_2), … is chosen up front in a deterministic oblivious way, and the smooth environment, where the sequence (V_1,M_1), (V_2,M_2), … is any σ-smooth stochastic process. In <Ref> we construct an algorithm achieving T^2/3 regret in the bandit feedback model under the smoothness assumption; this result, together with the lower bound of the same order for the semi-transparent feedback (<Ref>), settles the problem for these two feedback regimes. Then, in <Ref> we provide another upper bound, namely an algorithm achieving √(T) regret in the transparent feedback model under the smoothness assumption; this result, together with the lower bound of the same order for the full feedback (<Ref>), settles the problem for these two feedback regimes. Finally, in <Ref> we provide a lower bound proving that the non-smooth adversarial environment is too hard to learn, even when the learner has access to full feedback.
§.§ Bandit Feedback against the smooth environment
The smoothness assumption regularizes the objective function. In particular, if (V_t,M_t) is smooth, then the corresponding expected utility is Lipschitz.
Let (V_t,M_t) be a σ-smooth random variable in [0,1]. Then the induced expected utility function _t(·) is 2/σ-Lipschitz in [0,1]:
|_t(y) - _t(x)| ≤2/σ |y-x|, ∀ x,y ∈ [0,1].
Let x>y be any two bids in [0,1]. We have the following:
|_t(x) - _t(y)| = | 𝔼[ (V_t-x){M_t≤ x} - (V_t-y){M_t ≤ y} ] |
= | 𝔼[ (V_t-x){ y < M_t ≤ x} + (y-x){M_t ≤ y} ] |
≤ℙ[ M_t ∈ (y,x] ] + (x-y) ≤2/σ(x-y).
As an interesting fact, note that we only need the marginal distribution of M_t to be σ-smooth for the previous lemma to hold.
This Lipschitzness property has the immediate corollary that any fine enough discretization of [0,1] contains a bid whose utility is close to the optimal one.
Let be any finite grid of bids in [0,1], and let δ() be the largest distance of a point in [0,1] to (i.e., δ() = max_p ∈ [0,1]min_x ∈ |p-x|), then if each pair of random variables (V_1,M_1), …, (V_T,M_T) is σ-smooth, we have the following:
max_b ∈ [0,1]∑_t=1^T _t(b) - max_x ∈∑_t=1^T _t(x)≤ 2 δ()/σ T .
Fix any such sequence and let b^* be the corresponding best fixed bid in hindsight. If b^* is in there is nothing to prove; otherwise, there exists x^*∈ such that |b^* - x^*| ≤δ() (by definition of δ()). We have the following:
∑_t=1^T _t(b^*) - ∑_t=1^T _t(x^*) = ∑_t=1^T _t(b^*)- _t(x^*)
≤∑_t=1^T 2/σ|b^* - x^*| *(By Lipschitzness, <Ref>)
≤ 2 δ()/σ T
We can combine in a natural way the above discretization lemma with any (optimal) bandit algorithm to obtain the desired bound on the regret. For details we refer to the pseudocode; an illustrative sketch is given below.
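A hedged sketch of this combination follows; the grid size and the generic bandit interface are our illustrative choices, and any optimal bandit routine (such as Poly INF) can be plugged in through bandit_factory.

# Illustrative sketch: discretize [0,1] into ~T^(1/3) equally spaced bids and
# feed the (rescaled) observed utilities to a no-regret bandit routine.  Under
# sigma-smooth (V_t, M_t), the Lipschitz lemma bounds the discretization error
# by O(T^(2/3)/sigma).
import math

def grid_bandit_bidder(T, get_utility, bandit_factory):
    """get_utility(t, bid) -> realized utility in [-1, 1]; bandit feedback only."""
    K = math.ceil(T ** (1 / 3)) + 1
    grid = [k / (K - 1) for k in range(K)]      # equally spaced bids in [0, 1]
    bandit = bandit_factory(K)
    for t in range(T):
        k = bandit.select()
        u = get_utility(t, grid[k])
        bandit.update(k, (u + 1) / 2)           # rescale [-1,1] utilities to [0,1]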
Consider the problem of repeated bidding in first-price auctions in an adversarial σ-smooth environment with bandit feedback.
Then there exists a learning algorithm such that
R_T() ≤27/σ T^2/3 .
We prove that algorithm with the right choice of learning algorithm and grid of bids achieves the desired bound on the regret. As learning algorithm we use (a rescaled version of) the Poly INF algorithm <cit.>: since Poly INF is designed to run with gains in [0,1] while the utilities we observe are in [-1,1], we first apply the reward transformation x ↦ (x+1)/2 to the observed utilities. This transformation will cost a multiplicative factor of 2 in the regret guarantees of Poly INF.
The analysis builds on the discretization result in <Ref>, choosing as the uniform grid of ⌈ T^1/3⌉ + 1 equally spaced bids on [0,1] (note that δ() becomes at most T^-1/3). Fix any σ-smooth environment ; we have the following:
max_b ∈ [0,1]∑_t=1^T _t(b) ≤max_x ∈∑_t=1^T _t(x) + 4/σ T^2/3*(<Ref>)
≤∑_t=1^T _t(B_t) + 4/σ T^2/3 + 23 T^2/3≤27/σT^2/3,
where the second inequality follows from the guarantees of (the rescaled version of) Poly INF (Theorem 11 of <cit.>).
§.§ Transparent Feedback against the smooth environment
For transparent feedback we combine two tools we have already used: the discretization Lemma (<ref>) and the algorithm for learning with transparent feedback on a finite grid. Note: using any other √(KT) black box learning algorithm (like in the previous section for bandits) would yield a suboptimal regret bound of T^2/3.
Consider the problem of repeated bidding in first-price auctions in an adversarial σ-smooth environment with transparent feedback.
Then there exists a learning algorithm such that
R_T() ≤ 4(1/σ + √(ln T)) √(T) .
Consider algorithm on the uniform grid of ⌈√(T)⌉+1 bids, with δ() ≤ 1/√(T). Fix any σ-smooth environment ; we have the following:
max_b ∈ [0,1]∑_t=1^T _t(b) ≤max_x ∈∑_t=1^T _t(x) + 4/σ√(T)*(<Ref>)
≤∑_t=1^T _t(B_t) +2 √((e-1) T ln T )+ 4/σ√(T)
≤∑_t=1^T _t(B_t) + 4(1/σ + √(ln T))√(T),
where the second inequality follows from the guarantees of <Ref>.
§.§ The (non-smooth) Adversarial Model is Hopeless
In the previous sections, we have been able to provide positive results under one of two conditions: either the environment is stochastic and the learner has at least semi-transparent feedback (<Ref> says that bandit feedback is not enough), or the environment uses smooth distributions. Both these settings allow the learner to efficiently compute a discrete class of representative bids where the learning may happen. In this final section, we formally complete the proof of the fact that learning is impossible if both of these assumptions are dropped. Specifically, the standard adversarial environment, which generates the sequence arbitrarily without any smoothness constraint, is too strong. In particular, we construct a randomized sequence (V_1,M_1), (V_2,M_2), … that induces any learner to suffer at least linear regret. This construction shares some similarities with the lower bound construction in <Ref>, the main difference being that the best bid b^* is randomized and hidden in such a way that even a learner having access to full feedback cannot pin-point it.
Consider the problem of repeated bidding in first-price auctions in an adversarial environment with full feedback. Then, any learning algorithm satisfies
R_T()
≥T/24 .
We prove the result via Yao's principle, showing that there exists a randomized environment such that any deterministic learning algorithm suffers regret at least T/24 against it.
The random sequence posted by is based on two randomized auxiliary sequences L_1, L_2, … and U_1, U_2, … defined as follows. They are initiated to L_0 = 1/2, U_0 = 2/3. Then, they evolve recursively following the rule
L_t = L_t-1 + (2/3) Δ_t-1 and U_t = U_t-1, with probability 1/2,
U_t = U_t-1 - (2/3) Δ_t-1 and L_t = L_t-1, with probability 1/2,
where Δ_t-1 = U_t-1 - L_t-1.
For each realized sequence of the (L_t,U_t) pairs, the actual sequence of the (M_t,V_t) selected by is constructed as follows. At each time step t, the environment selects (M_t,V_t) = (L_t,1) or (U_t,0), uniformly at random. Note that the distribution is characterized by two levels of independent randomness: the auxiliary sequence of shrinking intervals and the choice between (L_t,1) and (U_t,0).
We move our attention to the expected performance of the best fixed price in hindsight. For each realization of the random auxiliary sequence, there exists a bid B^* such that (i) it wins all the auctions (M_t,V_t) of the form (L_t,1) (which we may call “good auctions” because they bring positive utility when won) and (ii) it loses all the auctions (M_t,V_t) of the form (U_t,0) (which we may call “bad auctions” because they bring negative utility). Thus its expected utility at each time step is at least 1/6: with probability 1/2 the environment selects a good auction, which induces a utility of (1-L_t) ≥ 1/3. All in all, the optimal bid achieves an expected utility of at least T/6.
Consider now the performance of any deterministic algorithm : for any fixed time t>1 and possible realization of the past observations, the learner posts some deterministic bid B_t. If B_t<L_t-1, then it gets 0 utility, so we only consider the following cases:
* If B_t ∈ [L_t-1, L_t-1+ (1/3) Δ_t-1), then the bidder gets the item with probability 1/4 (L_t = L_t-1, V_t is set to 1 and M_t = L_t) with an expected utility of (1/4) (1 - L_t) ≤ 1/8.
* If B_t ∈ [L_t-1+ (1/3) Δ_t-1, L_t-1+ (2/3) Δ_t-1), the bidder gets the item with probability 1/2 (when L_t = L_t-1 and U_t = U_t-1 - (2/3) Δ_t-1) for an expected utility of (1/4) (1 - L_t-1) - (1/4) (L_t-1 + (1/3) Δ_t-1) ≤ 0 ≤ 1/8.
* If B_t ∈ [L_t-1+ (2/3) Δ_t-1, U_t-1), the bidder gets the item with probability 3/4 (when L_t = L_t-1, and when U_t = U_t-1, V_t = 1 and M_t = L_t) for an expected utility of (1/4) (1 - L_t-1) - (1/4) (L_t-1 + (1/3) Δ_t-1) + (1/4) (1-L_t-1 - (2/3) Δ_t-1) ≤ 0 + 1/8 = 1/8.
* If B_t ≥ U_t-1 then the bidder always gets the item, with an expected utility smaller than 0 (which is in turn smaller than 1/8).
All in all, we have that the expected utility of any deterministic algorithm is at most T/8. If we compare this quantity with the lower bound on the expected utility of the best bid in hindsight we get the desired result:
R_T(,)≥T/6 - T/8 = T/24 .
A final observation: the crucial ingredient in the proof is the possibility of constructing this elaborate auxiliary sequence. To this end, we only needed the non-smoothness of M_t, while we may have chosen the valuations V_t to be smooth (and even i.i.d.), say uniformly in [0, 1/4] for the bad auctions and in [3/4, 1] for the good ones.
§ CONCLUSION
Motivated by the recent shift from second- to first-price auctions in the online advertising market, in this paper we offered a comprehensive analysis of the online learning problem of repeated bidding in first-price auctions under the realistic assumption that the bidder does not know its valuation before bidding. We have characterized the minimax regret achievable for different levels of transparency in the auction format and for different data generation models, considering both the stochastic i.i.d. and the standard adversarial model, with a focus also on smoothness. Although all our regret rates are tight in their dependence on the time horizon T, a natural open problem consists in studying their minimax dependence on the smoothness parameter σ.
This paper belongs to the long line of research that studies economic problems from the online learning perspective; an intriguing open problem there is to offer a unified framework that characterizes in a satisfying way all these games with partial feedback, similarly to what has been done for partial monitoring and feedback graphs.
§ MEASURE AND INFORMATION-THEORETIC NOTATION AND KNOWN FACTS
We recall that given two probability measures and on a measurable space (Ω, ), is said to be absolutely continuous with respect to (and we write ≪) if, for all E∈ such that [E]=0, it holds that [E]=0.
Whenever ≪, the Radon-Nikodym theorem states that there exists a density (called the Radon-Nikodym derivative of with respect to ) /Ω→ [0,∞) such that, for all E∈, it holds that
[E]
=
∫_E /(ω) (ω) .
See <cit.> for a reference.
If (Ω, , ) is a probability space, (, _) is a measurable space, and X is a random variable from (Ω, ) to (, _), the push-forward measure of by X is denoted by _X. In this case, we recall that the push-forward measure is defined as the unique probability measure on _ defined via _X[F] = [X ∈ F], for all F ∈_.
If (Ω, ) and (Ω', ') are two measurable spaces, their product σ-algebra is denoted by ⊗'. We recall that ⊗' is the σ-algebra of subsets of Ω×Ω' generated by the collection of subsets of the form F× F', where F ∈ and F' ∈'.
If (Ω, , ) and (Ω', ',') are two probability spaces, the product measure of and ' is denoted by ⊗'. We recall that ⊗' is the unique probability measure defined on ⊗' which satisfies (⊗')[F× F'] = [F]'[F'], for all E∈ and E'∈'.
If (Ω, , ) is a probability space, (, _) and (, _) are measurable spaces, X is a random variable from (Ω, ) to (, _), and Y is a random variable from (Ω, ) to (, _), the conditional probability of X given Y is denoted by _X | Y, where, for each E ∈_, we recall that _X | Y[E] = [X ∈ E | Y] and that _X | Y[E] is a σ(Y)-measurable random variable.
The following result has been proven in <cit.>.
Suppose that (,d) is a separable and complete metric space with _ as the Borel σ-algebra of (,d). Let (Ω, ) be a measurable space, X a random variable from (Ω, ) to {0,1}, 2^{0,1},
Y a random variable from (Ω,) to (, _),
and U random variable from (Ω, ) to [0,1],,
where is the Borel σ-algebra of [0,1].
Suppose that , are probability measures defined on , and p ∈ (0,1), q∈ [0,1] are such that:
* [X=1] = p and [X=1] = q.
* U is a uniform random variable on [0,1] both under and , i.e., we have that _U = = _U.
* U is independent of X both under and , i.e., _(X,U) = _X ⊗_U and _(X,U) = _X ⊗_U.
Then, the following are equivalent:
* There exists a measurable function from {0,1}×[0,1], 2^{0,1}⊗ to (, _) such that
_Y = _(X,U) and _Y = _(X,U) .
* _Y ≪_Y, and _Y-almost-surely it holds that
min_X/_X≤_Y/_Y≤max_X/_X .
§ MISSING DETAILS OF THE PROOF OF <REF>
In this section, we will complete the proof of <Ref>, showing that the repeated first-price auctions with semi-transparent feedback (in the following, referred to as “our problem”) are no easier than a K-armed bandit instance based on the probability measures ^1,…,^K introduced in <Ref>.
The structure of the proof is inspired by <cit.>.
The related bandit problem.
The action space is [K], where we recall that K was some arbitrarily fixed natural number.
Let Y,Y_1,Y_2,… be a sequence of {0,1}^K-valued random variables such that, for any k∈{0,1,…,K}, the sequence is ^k-i.i.d. and, for all j∈[K]
^kY(j)=1
=
1/2 if j ≠ k
1/2 + 1/(6K) if j = k
This sequence of latent random variables will determine the rewards of the actions.
The reward function is
ρ: [K] ×{0,1}^K → [0,1] , (i,y) ↦ (23 + 2 y(i))/192
and the feedback received after playing an action I_t at time t is Y_t (I_t) (which is equivalent to receiving the bandit feedback ρ(I_t, Y_t) gathered at time t).
For any k∈{0,…,K} and any i∈[K] the expected reward is
^kρ(i,Y)
=
1/8 if i ≠ k
1/8 + ε/144 if i = k
Mapping our problem into this bandit problem.
Assume that K≥ 3.
We partition the interval [0,1] in the following K disjoint regions: J_1 = [0, w_1 + ε), J_k = [w_k - ε, w_k + ε) (for all k ∈{2, …, K-1}), and J_K = [w_K - ε, 1].
We define a function ι [0,1] → [K] that maps each point in the interval [0,1] to one of the K arms by mapping each b ∈ [0,1] to the unique i ∈ [K] such that b ∈ J_i (for a pictorial representation of the map ι, see <Ref>).
Simulating the feedback.
To lighten the notation, besides the already defined random functions ψ_1,ψ_2,…, define also:
ψ [0,1] → [0,1] ×{⋆}∪{⋆}× [0,1] ,
b ↦
(V,⋆) if b ≥ M
(⋆, M) if b < M
The next lemma shows that we can use the feedback observed in the bandit problem together with some independent noise to simulate exactly the feedback of our problem.
For each b∈[0,1], there exists _b {0,1}× [0,1] → such that, if U' is a [0,1]-valued random variable such that, for each k∈{0,…,K}, the distribution U' with respect to ^k is a uniform on [0,1] and U' is ^k-independent of Y, then ^k__b(Y(ι(b)),U') = ^k_ψ(b).
A direct verification shows that, for all k∈[K] and all b∈[0,1],
^k_ψ(b)≪^0_ψ(b) (i.e., ^k_ψ(b) is absolutely continuous with respect to ^0_ψ(b))
and the Radon-Nikodym derivative of the push-forward measure ^k_ψ(b) with respect to ^0_ψ(b) satisfies,
for ^0_ψ(b)-a.e. (v,m) ∈ [0,1] ×{⋆}∪{⋆}× [0,1],
d^k_ψ(b)/d^0_ψ(b) (v,m)
=
1 + ·16/9 v-b sgn v - 15/16Λ_w_k, (b) v ∈7/8, 1
which implies, for ^0_ψ(b)-a.e. (v,m) ∈ [0,1] ×{⋆}∪{⋆}× [0,1], that
mind^k_Y(ι(b))/d^0_Y(ι(b))
=
1-4/3≤d^k_ψ(b)/d^0_ψ(b) (v,m)
≤
1+4/3
=
maxd^k_Y(ι(b))/d^0_Y(ι(b))
Thus, for each b ∈ [0,1], by <ref>, there exists (and we fix)
_b{0,1}× [0,1] →
such that
^ι(b)__b(Y(ι(b)),U')
=
^ι(b)_ψ(b) and ^0__b(Y(ι(b)),U')
=
^0_ψ(b) .
Since for all b∈ [0,1] and all k ∈ [K]ι(b), we have
^k_ψ(b)
=
^0_ψ(b) (by <Ref>) and
^k__b(Y(ι(b)),U')
=
^0__b(Y(ι(b)),U'), then, for all b ∈[0,1] and all k∈{0,…,K}, it holds that
^k__b(Y(ι(b)),U')
=
^k_ψ(b) .
We now show that any algorithm for our problem can be transformed into an algorithm to solve the bandit problem that suffers no-larger regret.
To do so, we begin by formally explaining how algorithms for our problem work.
Functioning of an algorithm for our problem
A randomized algorithm for our problem is a sequence of functions that take as input a sequence of random seeds U_1, U_2,… and some feedback Z_1, Z_2, … and generates bids B_t as described below.
At time t=1, selects a bid B_1 as a deterministic function of U_1 and observes feedback Z_1 = ψ_1(B_1).
Inductively, for any t≥ 2, selects a bid B_t as a deterministic function of U_1,…,U_t,Z_1,…,Z_t-1 (where Z_s = ψ_s(B_s), for all s∈[t-1]).
For all k∈{0,…,K}, the sequence of seeds is a ^k-i.i.d. sequence of uniform random variables on [0,1] that is ^k-independent of (V,M), (V_1,M_1),(V_2,M_2), ….
Building from
We show now how to map to an algorithm (that shares the same seeds for the randomization) for the bandit problem that suffers a worst-case regret that is no larger than that of .
To do so, consider a sequence U',U'_1,… of random variables that, for all k∈{0,…,K} is a ^k-i.i.d. sequence of uniforms on [0,1] that can access as a further source of randomness.
We will assume that, for all k∈{0,…,K}, the four sequences Y,Y_1,…, (V,M),(V_1,M_1),…, U,U_1,…,and U',U'_1,… are independent of each other.
The algorithm acts as follows.
At time 1, plays the arm _1 = ι(B'_t), where B'_1 = B_1 is the bid played by at round t=1 (chosen as a deterministic function of the random seed U_1).
Then observes the bandit feedback Y_1(_1) and feeds back to the surrogate feedback Z'_1 = _B'_1 Y_1(_1), U'_1.
Then, inductively, for any time t≥ 2, assuming that played arms _1,…,_t-1 and fed back to the surrogate feedback Z'_1, …, Z'_t-1, then
* plays the arm _t = ι(B'_t), where B'_t is the bid played by at round t (chosen as a deterministic function of the random seeds U_1,…,U_t and past surrogate feedback Z'_1,…,Z'_t-1).
* observes the bandit feedback Y_t(_t) and feeds back to the surrogate feedback Z'_t = _B'_t Y_t(_t), U'_t.
This way, we defined by induction the randomized algorithm .
By induction on t, one can show that, if B_1,B_2,… are the bids played by on the basis of the feedback Z_1 = ψ_1(B_1), Z_2 = ψ_2(B_2), …, then, for all k∈{0,…,K}, we have
^k_(B_t,Y_t)
=
^k_(B_t',Y_t)
which leads to
R_T^k()
=
T·^k(w_k) - ∑_t=1^T ^k_t(B_t) ≥
T·^kρ(k,Y) - ∑_t=1^T ^kρι(B_t),Y_t
R_T^k()
=
T·^kρ(k,Y) - ∑_t=1^T ^kρι(B'_t),Y_t
=
T·^kρ(k,Y) - ∑_t=1^T ^kρ_t, Y_t
=
R^k_T()
(the last equality is a definition).
Now we are left to show only that for any algorithm for the bandit problem which plays actions I_1,I_2,…, there exists k∈[K] such that
R_T^k( )
=
T·^kρ(k,Y) - ∑_t=1^T ^kρ I_t, Y_t
=
Ω( T^2/3 )
(the first equality is a definition).
By Yao's Minimax principle, it is sufficient to show this for deterministic algorithms for the bandit problem.
Fix any deterministic algorithm for the bandit problem on K actions, then there exists k∈[K] such that R_T^k () ≥3/10^4 T^2/3.
For any deterministic algorithm for the bandit problem on K actions, let I_1, I_2, … be the actions played by on the basis of the sequential feedback received Z_1, Z_2, … and define N_t(i) as the random variables counting the number of times the learning algorithm plays action i, up to time t, for any i ∈ [K] and any time t ∈ [T]:
N_t(i) = ∑_s=1^t {I_s = i} .
We relate the expected values of N_T(k) under ^0 and ^k as a function of the expected number of times the algorithm plays the corresponding actions k. This formalizes the intuition that to discriminate between the different ^k the learner needs to play exploring actions.
The following inequality holds true for any k∈[K]:
^k N_T(k) - ^0 N_T(k) ≤ (2/3) ·ε· T ·√( 2 ^0 [N_T(k)] ).
For any t∈ [T], the action I_t = I_t(Z_1, …, Z_t-1) selected by at round t is a deterministic function of Z_1, …, Z_t-1, for each k∈[K]. In formula, we then have the following
^k N_T(k) - ^0 N_T(k) =
∑_t = 2^T ^k I_t (Z_1, …, Z_t-1 ) = k - ^0 I_t (Z_1, …, Z_t-1 ) = k
≤∑_t = 2^T ^k_(Z_1, …, Z_t-1 ) - _(Z_1, …, Z_t-1 )^0 _TV,
where ·_TV denotes the total variation norm. We move now our attention towards bounding the total variation norm. To that end we use Pinsker's inequality and apply the chain rule for the KL divergence . For each k ∈ [K] and t ∈ [T] we have the following:
_(Z_1,…,Z_t)^0 - ^k_(Z_1,…,Z_t)_TV≤√(1/2_(Z_1,…,Z_t)^0, ^k_(Z_1,…,Z_t))
≤√(1/2_Z_1^0, ^k_Z_1 + ∑_s=2^t _Z_s | Z_1,…,Z_s-1^0 , ^k_Z_s | Z_1,…,Z_s-1)
We bound the two KL terms separately. is a deterministic algorithm, thus I_1 is a fixed element of [K], which implies that, for all k∈ [K],
_Z_1^0, ^k_Z_1
=
ln^0[Y_1(k) = 0]/^k[Y_1(k) = 0] ^0[Y_1(k) = 0]
+
ln^0[Y_1(k) = 1]/^k[Y_1(k) = 1] ^0[Y_1(k) = 1]
I_1 = k
=
1/2ln1/2/1/2 - ·
+
ln1/2/1/2 + ··{I_1 = k}
Similarly, since is a deterministic algorithm, for all s ≥ 2, the action I_s = I_s(Z_1,…,Z_s-1) selected by at time t is a function of Z_1,…, Z_s-1 only, which implies, for all k ∈ [K],
_Z_s | Z_1,…,Z_s-1^0 , ^k_Z_s | Z_1,…,Z_s-1
=
^0 [
ln^0[Z_s = 0 | Z_1, …, Z_s-1 ] /^k[Z_s = 0 | Z_1, …, Z_s-1 ] ^0[Z_s = 0 | Z_1, …, Z_s-1 ]
.
+
.
ln^0[Z_s = 1 | Z_1, …, Z_s-1 ] /^k[Z_s = 1 | Z_1, …, Z_s-1 ] ^0[Z_s = 1 | Z_1, …, Z_s-1 ]
]
=
^0 [ ln^0[Y_s(k) = 0]/^k[Y_s(k) = 0] ^0[Y_s(k) = 0]
+
ln^0[Y_s(k) = 1]/^k[Y_s(k) = 1] ^0[Y_s(k) = 1]
× I_s (Z_1, …, Z_s-1) = k ]
=
1/2ln1/2/1/2 - ·
+
ln1/2/1/2 + ·^0 I_s (Z_1, …, Z_s-1) = k
Now, since = 1/4K≤1/4≤2/3, the following useful inequality holds:
1/2ln1/2/1/2 - ·
+
ln1/2/1/2 + ·≤
4 ·^2 ·^2.
We can combine the inequalities in <Ref> and <Ref> into <Ref> and plug in the bound in to obtain:
_(Z_1,…,Z_t)^0 - ^k_(Z_1,…,Z_t)_TV≤ (2/3) ·ε·√( 2 ^0[N_t(k)] )
Once we have this upper bound on the total variations of the random variables (Z_1,…,Z_t) under ^0 and ^k we can get back to the initial <Ref> and obtain the desired bound via Jensen:
^k N_T(k) - ^0 N_T(k) ≤∑_t=2^T (2/3) ·ε·√( 2 ^0 [N_t-1(k)] ) ≤ (2/3) ·ε· T ·√( 2 ^0 [N_T(k)] ).
Averaging the quantitative bounds in <Ref> for all k in [K], and applying Jensen's inequality, we get the following:
1/K∑_k ∈ [K]^k[N_T(k)]
≤1/K∑_k ∈ [K]^0[N_T(k)] + ·· T ·√(2/K∑_k ∈ [K]^0 N_T(k) )
=
1/K + ··√(2T/K)· T .
Now, we have all the ingredients to lower bound the average regret suffered by . Note that every time a suboptimal arm is played the learner suffers (expected) instantaneous regret equal to ε/144.
Then, recalling that ε=1/(4K) and setting K = T^1/3, we have, for all T≥ 8,
1/K∑_k∈ [K]R̃_T^k ()
=
1/K∑_k∈ [K]··^k T - N_T(k)
=
· T - 1/K∑_k∈ [K]^k N_T(k)
≥·· 1 - 1/K - ··√(2T/K)· T
=
·1/4K· 1 - 1/K - 1/6K·√(2T/K)· T
≥1/8 · 1443-√(2)/6T^2/3≥3/10^4 T^2/3 .
Therefore, for all T≥ 8, there exists k∈[K] such that R̃_T^k () ≥ (310^4) · T^2/3, concluding the proof.
§ MISSING PROOF OF THEOREM <REF>
Let γ >0.
Notice that, for each t ∈, it holds that ∑_y ≥ M_t p_t(y) ≥γ. It follows, for each x ∈ and t ∈, that γ_t(x) ≤ 1, and hence
exp(γ_t(x)) ≤ 1 + γ_t(x) + (e-2) γ^2 _t(x)^2 .
Then, for each t ∈,
w_t+1_1/w_t_1
=
∑_x ∈w_t(x)/w_t_1expγ_t(x)≤
1+∑_x ∈w_t(x)/w_t_1γ_t(x) + (e-2) γ^2 _t(x)^2 ,
which implies
lnw_t+1_1/w_t_1≤∑_x ∈w_t(x)/w_t_1γ_t(x) + (e-2) γ^2 _t(x)^2≤γ/1-γ∑_x ∈ p_t(x) _t(x) + (e-2) γ_t(x)^2.
Now, for each t ∈, let _t be the σ-algebra generated by p_t, V_t and M_t and denote by _t := [·|_t]. First, notice that, for each t ∈ and each x ∈
_t[_t(x)] = _t(x) ,
_t ∑_x ∈ p_t(x)_t(x) = [_t(B_t) | V_t, M_t] ,
and that
_t∑_x ∈ p_t(x)_t(x)^2≤_t∑_x ∈ p_t(x) {x≥ M_t}{M_t ≤ B_t}/∑_y ≥ M_t p_t(y) ^2
=
_t∑_x ∈ p_t(x) {x≥ M_t}/∑_y ≥ M_t p_t(y) = 1 .
It follows that, for each x ∈,
∑_t=1^T _t(x) - ln =
∑_t=1^T _t(x) - ln
=
lnw_T+1(x) - ln
≤lnw_T+1_1/w_1_1
=
∑_t=1^T_tlnw_t+1_1/w_t_1
≤γ/1-γ∑_t=1^T _t(B_t) + (e-2)γ T ,
which, after rearranging and upper bounding, yields
∑_t=1^T _t(x) - ∑_t=1^T _t(B_t) ≤ln||/γ + (e-1)γ T .
Selecting γ as in the statement of the theorem leads to the conclusion.
§ MISSING DETAILS OF THE PROOF OF <REF>
*
For any ε∈ (0, 1/2), the distributions ^± are such that the set of all the bids that induce non-negative utility 𝔼^±[_t(b)] is contained into two disjoint intervals I_+ = [0, 1/8] and I_- = [1/4,1][The choice of I_+ and I_- is not tight.].
We consider separately the two cases ^+ and ^-. We start from the former. By simply looking at the definition (<ref>), it is clear that 𝔼^+ [_t(b)] is monotonically increasing in ε for any b ∈ I_+; on the contrary, it is monotonically decreasing for b ∈ I_-. We have the following:
max_b ∈ I_-𝔼^+ [_t( b)] ≤max_b ∈ I_-𝔼^0 [_t( b)] = 1/128.
On the other hand,
max_x ∈ [0,1]𝔼^+ [_t(x)] ≥𝔼^+ [_t( 1/16 )] = (1/128)(1+ε) > max_b ∈ I_-𝔼^+ [_t( b)] + ε/128.
We consider now the other case, corresponding to ^-. By the definition in <Ref>, 𝔼^- [_t(b)] is monotonically increasing in ε for any b ∈ I_-; on the contrary, it is monotonically decreasing for b ∈ I_+. Similarly to the other case we have two steps. On the one hand, it holds that
max_b ∈ I_+𝔼^- [_t( b)] ≤max_b ∈ I_+𝔼^0 [_t( b)] = 1/128,
while on the other hand it holds that
max_x ∈ [0,1]𝔼^- [_t(x)] ≥𝔼^- [_t( 7/16 )] = 1/128 + 41ε/128 > max_b ∈ I_+𝔼^- [_t( b)] + ε/4.
We need a preliminary result for the proof of <Ref>. Recall, we use the same random variable (V,M) to denote the highest competing bid/valuation pair drawn from the different probability distribution. When we change the underlying measure, we are changing its law. Consider now the push forward measures on [0,1]^2 (with the Borel σ-algebra) induced by these three measures: ℙ_(V,M)^0, ℙ_(V,M)^+ and ℙ_(V,M)^-. With some simple calculations (similarly to what is done in, e.g., Appendix B of <cit.>) it is possible to bound the KL divergence:
For any ε∈ (0, 1/2) the following inequality holds true:
(ℙ_(V,M)^+,ℙ_(V,M)^0) = (ℙ_(V,M)^-, ℙ_(V,M)^0) ≤ 2 ε^2
We simply apply the definition of divergence for continuous random variables. We only do the calculations for ℙ_(V,M)^+, the other term is analogous:
(ℙ_(V,M)^+,ℙ_(V,M)^0) =
∫_Q_+ ∪ Q_- f^+(v,m) lnf^+(v,m)/f^0(v,m) dm dv
= 1/2 (1+ε) ln (1+ε) + 1/2 (1-ε) ln (1-ε)
≤ 2ε^2,
where the last inequality holds for any ε∈ (0,1/2).
*
We have the following:
𝔼^i[N_i] - 𝔼^0[N_i] = ∑_t=2^T ( ℙ^i[B_t ∈ I_i] - ℙ^0[B_t ∈ I_i] )
≤∑_t=2^T ||ℙ^i_(V_1,M_1), …,(V_t-1,M_t-1) - ℙ^0_(V_1,M_1), …,(V_t-1,M_t-1) ||_Total variation
≤∑_t=2^T √(1/2(ℙ^i_(V_1,M_1), …,(V_t-1,M_t-1),ℙ^0_(V_1,M_1), …,(V_t-1,M_t-1)))Pinsker's inequality
≤∑_t=2^T √(t/2(ℙ^i_(V,M),ℙ^0_(V,M)))(V_1,M_1), …,(V_t-1,M_t-1), … are i.i.d.
≤1/(4√(T))∑_t=2^T √(t)≤ T/4,
where in the last inequality we applied <Ref> for our choice of = 1/(4√(T)). Note, ^j_(V_1,M_1), …,(M_t,V_t) is the push-forward measure on ([0,1]^2)^t induced by t i.i.d. draws of (V,M) from distribution ℙ^j, j ∈{0,1,2}.
Averaging the result in <Ref>, we get the desired inequality:
1/2∑_i=1,2𝔼^i[N_i] ≤1/2∑_i=1,2𝔼^0[N_i] + T/4 ≤3/4 T.
|
http://arxiv.org/abs/2307.04848v1 | 20230710183823 | Masses and densities of dwarf planet satellites measured with ALMA | [
"Michael E. Brown",
"Bryan J. Butler"
] | astro-ph.EP | [
"astro-ph.EP"
] |
0000-0002-8255-0545]Michael E. Brown
Division of Geological and Planetary Sciences
California Institute of Technology
Pasadena, CA 91125, USA
0000-0002-5344-820X]Bryan J. Butler
National Radio Astronomy Observatory, Socorro NM 87801 (U.S.A.)
We have used the Atacama Large Millimeter Array (ALMA) to measure precise
absolute astrometric
positions and detect the
astrometric wobble of dwarf planet Orcus and its satellite Vanth over a complete orbit. We also
place upper limits to the astrometric
wobble induced by Dysnomia on dwarf planet Eris
around its orbit.
From the Vanth-Orcus barycentric motion, we
find a Vanth-Orcus mass ratio of 0.16±0.02 –
the highest of any known planet or dwarf planet.
This large ratio is consistent with the hypothesis that Vanth is a largely-intact
impactor from a giant collision in the system,
and that the system has likely evolved to a double synchronous state.
We find only an upper limit of the barycenter motion of Eris,
which implies a one sigma upper limit to the Dysnomia-Eris mass ratio of 0.0085, close
to the modeled transition region between giant impact generated
satellites which are largely intact remnants of the original impactor and
those which form out of
reaccreted disk material left over post-impact.
The low albedo of Dysnomia leads us to marginally favor the intact impactor scenario. We find that Dysnomia has a density of <1.2 g cm^-3, significantly lower than the 2.4 g cm^-3 of
Eris.
§ INTRODUCTION
Satellites are ubiquitous around dwarf planets in the Kuiper belt, with satellites
known around at least 8 of the 10 largest dwarf planets. Smaller
Kuiper belt objects (KBOs) are often found as nearly equal-sized binaries of similar color on eccentric
orbits <cit.>, and their
orbital properties have led to the hypothesis that they are formed from direct collapse via gravitational
instabilities <cit.>. Dwarf
planet satellites, in contrast, are frequently found to be significantly smaller than their primaries and are often
found on circular or near-circular orbits, suggesting
a separate formation mechanism for these systems
<cit.>.
Giant impacts have long been discussed as a likely
formation mechanism for dwarf
planets satellites. A giant impact origin for
Pluto's satellite Charon was proposed by <cit.>,
and <cit.> showed that a relatively large and high
density satellite such as Charon could indeed
be formed through an oblique giant impact, where the
impactor is captured largely intact after the collision.
<cit.> modeled a wider range of dwarf planet
collisions and found that most of the known dwarf planet
satellite systems appear consistent with formation via a giant
impact that occurred at close to the escape velocity,
followed by the retention of the largest fragment from the
impactor. Depending on the impact angle, this largest
captured fragment can range from a nearly-intact
original impactor (as in the case of Pluto-Charon), to a
low density icy fragment containing only a small fraction of the
mass of the original impactor. The requirement that the
impact occur at close to the escape velocity is a strong
indicator that these collisions happened early in
solar system history before dynamical excitation
of the Kuiper belt.
Two critical parameters for understanding the formation of
dwarf planet satellites and for testing models such as
these giant impact scenarios are the satellite-primary
mass ratio and the density of the satellite compared to
the primary. Such parameters are known for only two
systems – Pluto-Charon and Haumea-Hi'iaka-Namaka –
which span a wide range of parameter space.
The Charon-Pluto mass ratio is 0.12 – the largest
measured for any planet or dwarf planet – and was determined
by detecting the motion of Pluto and Charon around their
common barycenter
<cit.>.
Densities of Pluto and Charon are 1.85 and 1.7 g cm^-3, respectively,
approximately equal and typical for objects of their sizes.
In contrast, the Haumea system has a (total) satellite-primary mass ratio
of 0.0045, where the mass of the satellites was determined by
detecting their mutual perturbations <cit.>. The sizes
of the satellites have not been measured directly, but with
their low masses and surface spectra that resemble
pure water ice <cit.>, it appears likely that
the objects are icy fragments with densities of ≲ 1
g cm^-3, in marked contrast to the 1.8-2.0 g cm^-3 density of Haumea
<cit.>.
For both of these systems, measurement of their mass ratios
relies on the presence of multiple satellites.
Measuring the mass of dwarf planet
satellites which have no additional satellite companions
is more difficult.
The only plausible method to
measure satellite mass is through the detection
of the barycenter
motion of the primary determined against an
absolute astrometric background.
Barycenter motion has not been possible
to measure with high resolution optical or infrared
imaging to date owing to insufficient astrometric references
in the same field of view as the target moves against
the background stars.
Unlike high resolution
optical or infrared imaging, radio interferometric
observations routinely measure positions in an absolute
astrometric reference frame during the process
of phase calibration, usually using
extragalactic radio sources which define the
standard celestial reference frame <cit.>.
There is a well-established history of obtaining
precise positions of sources over time from such observations
<cit.>.
We use the extraordinary ability of ALMA to provide absolute
astrometry to allow us to search for
the barycentric motion of Eris and Orcus caused by
their satellites.
Eris is the most massive dwarf planet known and has
a single known satellite in a circular orbit
at a distance of a/R_p≈ 32 from the primary, where a
is the radius of the orbit and R_p is the radius of the
primary. ALMA observations
give a diameter of Dysnomia of 700±115 km <cit.>, making
it the second largest known satellite of a dwarf planet, and
suggesting that the Dysnomia-Eris mass ratio could be
anywhere from below 0.01 to 0.03, depending on whether or
not Dysnomia has a density below 1.0 g cm^-3 – as
appears typical for KBOs of this size – or has a density
more similar to the 2.4 g cm^-3 value derived
for Eris <cit.>. The Eris-Dysnomia system appears
in a regime different from either that of Pluto or
that of Haumea.
The Orcus-Vanth system, in contrast, appears like
a scaled-down version of the Pluto-Charon system. Vanth orbits
on a circular orbit at a distance of a/R_p≈ 20
from the
primary (compared to ∼16.5 for Pluto-Charon),
and ALMA observations have measured diameters of
910^+50_-40 and 475±75 km for Orcus and Vanth (BB18), a size ratio of
1.6±0.3 (comparable to the value of 2.0 for Pluto-Charon). The ALMA-derived
effective diameter of Vanth is consistent with that measured from a stellar
occultation and an assumed spherical shape of 443±10 km <cit.>, though we will conservatively use the ALMA result for consistency between
Orcus and Vanth. Assuming identical
densities, the Vanth-Orcus mass ratio would be 0.142±0.02, a value
even higher than the 0.12 of Pluto-Charon.
These observations will expand the range of dwarf planet
and satellite sizes, ratios, and orbital distances with
fully characterized systems,
allowing us to continue to explore formation mechanisms
for these ubiquitous satellite systems.
§ OBSERVATIONS AND DATA REDUCTION
The Orcus-Vanth system was observed 4 times in Oct/Nov 2016. The Eris-Dysnomia
system was observed 3 times in Nov/Dec 2015.
A complete description of the observations, including the method for obtaining
final flux densities, is contained in BB18. We describe
here only the further steps in the data reduction required to obtain the
astrometric positions and errors.
To measure the positions of the detected objects, we perform direct
fits to the interferometric
visibilities. In the case of Orcus and Vanth, we use a model
of two point sources, with initial position estimates given by Gaussian
fits to the images. In the case of Eris, we use a model of a slightly
limb-darkened disk, with diameter 1163 km <cit.>, also
with initial position estimate given by Gaussian fits to the images.
We assume three sources of astrometric error: formal
fitting uncertainties (“ff error”), systematic fitting uncertainties (“sf error”), and overall
celestial frame uncertainties (“cf error”). Formal fitting
uncertainties are simply the errors returned in the visibility fits. We
estimate the systematic fitting uncertainties by differencing the
positions returned by the visibility fitting with those returned by the
Gaussian image fits. Our methodology for finding the celestial frame
error is described below. We then take the total error in either direction
(right ascension or declination) as the root sum squared (RSS) of
those three values.
The most difficult error term to determine is the overall celestial
frame uncertainty. Fortunately, astrometric observations with ALMA
always contain at least two “check sources,” which are sources near
the science target source which have well-determined positions.
Additionally, we used a primary phase calibrator which also has a
well-determined position for both sets of observations. The positions
of these sources are either taken from the International Celestial
Reference Frame (ICRF), or the Radio Fundamental Catalog (RFC)
<cit.>. For the Eris observations, the primary
phase calibrator was J0125-0005 (ICRF; 5.2 degrees distant) and the two check sources were
J0141-0928 (ICRF; 6.5^∘) and J0115-0127 (RFC; 7.0^∘). Unfortunately, J0141-0928 was
sufficiently far from Eris that phases did not
transfer from the primary phase calibrator well, so we did not use it.
For the Orcus and Vanth observations, the primary phase calibrator was
J1048-1909 (ICRF; 13.0^∘), and the two check sources were J1022-1037 (RFC; 2.2^∘) and
J0942-0759 (RFC; 6.0^∘). Both were sufficiently close to Orcus and Vanth that
they could be used. On each date, we made an image of the check
sources, and did a Gaussian fit to find the offsets of those sources
from their expected positions (which should be at the phase center, or
the image center, if everything worked perfectly). For Eris we took
the offset of J0115-0127 as the frame uncertainty; for Orcus and Vanth
we took the average of the offsets of J1022-1037 and J0942-0759 as the
frame uncertainty.
Table <ref> shows the final positions, all
contributions to the error, and the final error. For Eris, the errors
are of order a few mas; for Orcus and Vanth they are a factor of
roughly 2-3 higher on two of the days, but much larger for the two others. The
observation on October 13 was impacted by poor observing conditions;
that on November 7 was taken when antennas had been moved into a more
compact configuration. Normally, neither of these would happen for ALMA
observations, as they are scheduled purposefully when conditions and
resolution is appropriate to the proposed science. However, for these
observations, there were very tight time constraints - both because
observations must be taken at separated orbital phases (which means a
few days apart), and because the time spent in each configuration is
limited. If all observations had been taken under ideal circumstances,
the errors would almost certainly all be as they are for Eris, a few
mas. This uncertainty agrees with the expected astrometric accuracy of ALMA
<cit.>.
Figures 1 and 2 show the astrometric measurements of Orcus and Vanth,
and of Eris, respectively. Dysnomia is too faint to be detected in the individual
images.
The barycentric motion of the Orcus-Vanth system can clearly be seen in the data.
For Eris some barycenter deviation is apparent, but it is much smaller and less regular than that of Orcus-Vanth.
§ ANALYSIS
§.§ Orcus-Vanth
To determine the center of mass of the Orcus-Vanth system,
we fit the deviations of Orcus
and Vanth from the expected system ephemeris position to a 6-parameter likelihood model
that consists of the following (Fig. 3): (1) an orbital phase offset to the
position of Vanth from that determined in
BB18 to allow for the uncertainty
in the decade-old orbital phase measurement from the
Hubble Space Telescope (HST) (note that
uncertainties in the other orbital parameters are
significantly smaller and would not affect the results here); (2) a satellite-primary mass ratio;
(3-4) a constant ephemeris offset from
the expected position of the pair to the center of mass; and (5-6) a constant
offset between the center-of-light observed with ALMA and the center-of-light measured at optical wavelength. This last parameter
could be significant if, for example, enhanced thermal emission was coming from a
sunward-facing pole offset from the projected center of the body. At the
projected size of Orcus of nearly 28 mas, this parameter could be large.
We evaluate this 6-parameter model using the Markov Chain Monte Carlo method implemented
in emcee <cit.>.
After an initial burn-in, we collect 100000 samples of the Markov chain (MCMC)
for analysis. The distributions of the parameters are nearly Gaussian, so we report the median and the
15.9% and 84.1% intervals as the results and uncertainties.
Table 2 gives the retrieved values of all parameters.
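A minimal emcee sketch of such a barycenter-wobble fit, applied to synthetic astrometry, is given below; the toy Vanth orbit (circular, face-on, 260 mas radius, 9.54 d period), the epochs, and the 5 mas error model are illustrative placeholders only, and are not the BB18 ephemeris or the likelihood used for the results quoted here.

import numpy as np
import emcee

rng = np.random.default_rng(0)

# Toy stand-in for the Vanth orbit: circular, face-on, 260 mas radius, 9.54 d period
# (illustrative numbers only, not the fitted BB18 orbit).
def vanth_orbit(t, dphase):
    ang = 2.0 * np.pi * t / 9.54 + dphase
    return 260.0 * np.stack([np.cos(ang), np.sin(ang)], axis=-1)   # mas

def model_positions(theta, t):
    dphase, q, dx, dy, cx, cy = theta
    sep = vanth_orbit(t, dphase)              # Vanth-minus-Orcus separation vector
    frac = q / (1.0 + q)                      # barycenter fraction toward Vanth
    bary = np.array([dx, dy])                 # constant ephemeris offset of the barycenter
    orcus = bary - frac * sep + np.array([cx, cy])   # ALMA/optical photocenter shift on Orcus
    vanth = bary + (1.0 - frac) * sep
    return orcus, vanth

# Synthetic "observations" at four epochs with 5 mas errors.
t_obs = np.array([0.0, 2.0, 4.0, 27.0])
true = np.array([0.05, 0.16, 3.0, -2.0, 1.0, 0.5])
obs_orcus, obs_vanth = model_positions(true, t_obs)
err = 5.0
obs_orcus = obs_orcus + rng.normal(0, err, obs_orcus.shape)
obs_vanth = obs_vanth + rng.normal(0, err, obs_vanth.shape)

def log_prob(theta):
    if not 0.0 < theta[1] < 1.0:              # keep the mass ratio physical
        return -np.inf
    mod_o, mod_v = model_positions(theta, t_obs)
    return -0.5 * (np.sum((obs_orcus - mod_o) ** 2) + np.sum((obs_vanth - mod_v) ** 2)) / err**2

ndim, nwalkers = 6, 32
p0 = true + 1e-2 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 4000, progress=False)
flat = sampler.get_chain(discard=1000, flat=True)
q16, q50, q84 = np.percentile(flat[:, 1], [15.9, 50, 84.1])
print(f"recovered mass ratio: {q50:.3f} (+{q84-q50:.3f}/-{q50-q16:.3f})")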
Figure 4 shows the predicted positions of Orcus and Vanth using the median of all parameters. We find that the center-of-light measured with ALMA and
at optical wavelengths is the same (measured offsets of 2±3 mas both
east-west and north-south compared to the 14 mas projected radius of
Orcus). Such a measurement, along with the nearly
pole-on orbit of Vanth and the lack of a significant light curve for
Orcus <cit.>, suggests that we are viewing Orcus pole on.
The phase of Vanth is advanced by 4.3 degrees from the prediction using the earlier orbit. An updated mean anomaly and epoch for the satellite is given in Table 2. The center-of-mass of the Orcus-Vanth
system is situated 0.137±0.013 of the way towards Vanth, outside of the body of Orcus.
This 10σ detection of barycenter motion
directly shows that Vanth contains 13.7±1.3% of
the mass of the system, for a Vanth-Orcus mass ratio of 0.16±0.02.
For diameters of Orcus and Vanth of 910^+50_-40 and 475±75km and a system mass of 6.32×10^20 kg,
the densities of Orcus and Vanth are 1.4±0.2 and 1.5^+1.0_-0.5 g cm^-3, respectively.
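These densities follow directly from the quoted system mass, barycenter fraction, and ALMA diameters; a short consistency check (assuming spherical bodies, and ignoring the error propagation done in the full analysis) reproduces the central values:

import numpy as np

M_sys = 6.32e20                    # system mass (kg)
frac_vanth = 0.137                 # Vanth's fraction of the system mass (this work)
d_orcus, d_vanth = 910e3, 475e3    # ALMA diameters (m)

def density(mass, diameter):
    """Bulk density of a sphere, in g cm^-3."""
    volume = (np.pi / 6.0) * diameter**3      # (4/3) pi (d/2)^3
    return mass / volume / 1000.0             # kg m^-3 -> g cm^-3

m_vanth = frac_vanth * M_sys
m_orcus = (1.0 - frac_vanth) * M_sys
print(f"Vanth-Orcus mass ratio: {m_vanth / m_orcus:.2f}")       # ~0.16
print(f"rho_Orcus = {density(m_orcus, d_orcus):.1f} g cm^-3")   # ~1.4
print(f"rho_Vanth = {density(m_vanth, d_vanth):.1f} g cm^-3")   # ~1.5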
§.§ Eris-Dysnomia
Dysnomia is not detected in the individual images, so we must determine the center-of-mass using
only the ephemeris deviations of
the position of Eris itself. In this case, ephemeris offset and center-of-light offset are
degenerate, so we solve only for their combination, yielding no
useful information. We use the Dysnomia ephemeris from BB18, which has uncertainties
of only 2 mas at the epoch of observation. Our model thus only has 3 parameters, the barycenter offset
and the two-dimensional ephemeris offset. We again evaluate this model with an MCMC analysis.
Figure 2 shows the fitted model to the data. We find a 1.5σ detection of a barycenter
motion of Eris corresponding to a mass ratio of
0.0050±0.0035, or a barycenter motion of approximately
±2 mas. The predicted positions of Eris for
the maximum likelihood solution to this model can
be seen in Figure 2. The predictions are moderately
consistent with the data, with a reduced χ^2 value of 1.8.
While this barycentric motion may represent a true
detection, we choose instead to report the derived
mass ratio as a 1σ upper limit of 0.0084, corresponding also
to a 3σ upper limit of 0.015. Dysnomia is a small fraction of the mass of Eris;
for a
system mass of 1.65×10^22 kg <cit.>
the 1 σ upper limit to the mass of Dysnomia is
1.4× 10^20 kg.
§.§ The size and density of Dysnomia
In BB18 we reported a 3.5σ detection of a source when
all three observations were stacked at the predicted position of Dysnomia, corresponding to
a body with a diameter of 700±115 km and a low albedo. Because of the unexpected nature of this result, the
modest statistical significance, and the need to stack multiple datasets from different days to obtain a detection,
we re-observed the Eris-Dysnomia system with ALMA on 11 October 2018. We obtained a single epoch in Band 6 (center frequency ∼233 GHz), in configuration C43-6 (resulting resolution 0.15×0.12 arcsec), with an on-source integration time of 78 minutes. That observation should have sufficient sensitivity to detect a large Dysnomia, and sufficient resolution to separate it from Eris. We reduced the data in an identical
manner to that described in BB18, using the QSO J0006-0623 for pointing, atmosphere, bandpass, and flux density scale calibration, and the QSO J0141-0202 for complex gain as a function of time calibration. The image resulting from these data is shown in Figure <ref>, where Dysnomia is clearly visible to the North and East of Eris, in its expected position. We fit visibilities to a two-source model and find flux densities of 390±7 μJy for Eris and 45±7 μJy for Dysnomia. That flux density for Dysnomia (a 6.5σ detection) is close to that expected for a large Dysnomia, similar to what we found in BB18. We note that the errors in fitting the flux density for Eris are reduced significantly when adding the second model source for Dysnomia (rather than having a single source for Eris), and that the final fitted positions for Eris and Dysnomia are insensitive to their positions in the initial model provided to the fit. We add this Band 6 flux density of Dysnomia (and the simultaneously measured flux density of Eris) to our thermal model from BB18 and derive a new size for Dysnomia of 615^+60_-50km, with an albedo of 0.05±0.01. Figure <ref> shows an updated
version of Figure 7 of BB18 where we add the Band 6 flux densities for
Eris and Dysnomia at 1280 μm.
For a system mass of 1.65×10^22 kg, the density of Dysnomia is 0.7±0.5 g cm^-3
(1-σ
upper limit of 1.2 g cm^-3). The density of Eris is 2.4 g cm^-3 <cit.>.
To high confidence, the density of Dysnomia
is significantly smaller than that of Eris.
Recent observations have found that the spin period
of Eris is consistent with the orbital period of
Dysnomia, i.e. that Eris is phase locked to the orbit
of Dysnomia <cit.>. <cit.>
suggest that for tidal evolution to produce this
state within the age of the solar system,
Dysnomia must have a density >1.8 g cm^-3,
a uniquely high value for any known
object this size. They predict a Dysnomia-Eris mass
ratio of 0.01-0.03, which is between 1.4 and 7 σ above our measured value.
<cit.>, in contrast, considered tidal evolution of Dysnomia
with a mass more consistent with our new measurement and concluded that Dysnomia
could become phase-locked if Eris were unusually dissipative. Such high dissipation
could have important consequences for the internal structure of Eris.
§ DISCUSSION
The Vanth-Orcus mass ratio of 0.16±0.02 is larger
than that even of Charon-Pluto, which has a ratio of 0.12,
making the Vanth-Orcus mass ratio the highest of any known planet
or dwarf planet.
Like Pluto
and Charon, Orcus and Vanth have a similar density
(although the uncertainty on the Vanth density is
sufficiently large that future refinement
could change this conclusion) and lie on a circular
or nearly-circular orbit.
Such a system is an expected outcome in the simulations of <cit.> for either
differentiated or undifferentiated bodies impacting at near the escape velocity at
an impact angle greater than about 45^∘. In this scenario, Vanth would be a nearly-intact
body, having lost little mass in the collision. The separation of Orcus and Vanth is close to that
expected for creation through a giant impact and full tidal evolution to a double synchronous
state <cit.>. It appears likely that, like Pluto-Charon, Orcus-Vanth has achieved
this state.
The major difference between the two systems is
the apparent lack of a system of small satellites
outside the orbit of Vanth, though satellites with the same fractional brightness as
the small ones of Pluto are still beyond the observational limits of any search
<cit.>.
The Eris-Dysnomia system, with an upper limit to the mass ratio of 0.0085,
lies close to the transition region in <cit.> between low mass ratio
satellites formed out of reaccreted disk material and those that are small intact fragments of the impactor.
In both cases, the moon is predicted to have an ice fraction near 100%,
consistent with the very low density of Dysnomia. An important clue is
perhaps the low albedo of Dysnomia. If the very low mass ratio small satellites of Pluto
and of Haumea can be used as representatives of disk reaccretion – which is far from
certain – their high albedos are perhaps the signature of processing through a disk
and removal of whatever volatile materials lead to space weathered darkening.
A large intact fragment could retain its complement of darkening material and
lead to the typical low albedo that Dysnomia appears to have. Understanding whether or not
this process
occurs requires a significantly greater understanding of icy disk processing and the
causes of the low albedos of the Kuiper belt. The Eris-Dysnomia system is in a range
of parameter space poorly sampled in the <cit.> simulations, so more insight
could also be gained through attempting to simulate this system, specifically.
Giant impact appears the most likely formation for
these dwarf planet satellite systems, but
inconsistencies in the picture remain. No model
has successfully generated Pluto's small moon system
at its current distance from the primary <cit.>, Haumea and
the low mutual velocity of its
collisional family remain unexplained, and the near-100%
occurrence of satellites to these dwarf planets is
surprising. Dwarf planet satellite systems yield unique
insights into early solar system history and icy collisional
physics and continued study will provide an important window
into these processes.
This paper makes use of ALMA data: ADS/JAO.ALMA#2018.1.00929.S, ADS/JAO.ALMA#2015.1.00810.S, and ADS/JAO.ALMA#2016.1.00830.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc
aasjournal
Astrometric positions and errors (milliarcseconds).
Columns: Body, Date, Beam^a, then for Right Ascension the Offset, ff error^b, sf error^c, cf error^d, and total error, followed by the same five quantities for Declination.
Eris 2015-Nov-09 17 X 15 @ 78^∘ -82.5 1.5 1.8 0.1 2.4 -123.8 1.5 0.2 0.2 1.5
Eris 2015-Nov-13 22 X 14 @ 51^∘ -88.8 1.5 1.5 0.9 2.3 -121.6 1.5 0.7 0.3 1.6
Eris 2015-Dec-04 37 X 22 @ 63^∘ -90.9 2.3 1.1 0.1 2.6 -120.8 2.2 0.8 1.8 3.0
Orcus 2016-Oct-11 98 X 90 @ -56^∘ -43.5 1.7 0.4 2.8 3.3 -24.0 1.6 2.7 4.0 5.1
Vanth 2016-Oct-11 " +145.4 6.6 7.7 2.8 10.5 +137.9 6.2 0.3 4.0 7.4
Orcus 2016-Oct-13 107 X 93 @ -47^∘ -26.8 3.7 9.9 32.3 34.0 +25.3 3.5 2.1 36.0 36.2
Vanth 2016-Oct-13 " +178.2 14.4 2.2 32.3 35.4 -101.6 14.0 27.2 36.0 47.3
Orcus 2016-Oct-15 123 X 103 @ 69^∘ +16.3 1.8 0.4 7.8 8.0 +22.1 1.7 1.6 4.0 4.6
Vanth 2016-Oct-15 " -96.6 5.8 2.7 7.8 10.1 -205.1 5.5 5.8 4.0 8.9
Orcus 2016-Nov-07 197 X 163 @ 55^∘ -2.6 2.3 6.7 16.5 18.0 -24.6 2.3 7.7 20.0 21.6
Vanth 2016-Nov-07 " -60.1 6.8 2.1 16.5 18.0 +165.6 6.7 17.2 20.0 27.2
aSynthesized beam FWHM axes and position angle (North through East, or CCW) with a robust weighting parameter of 0.
bFormal error from fitting visibilities.
cSystematic fitting error.
dCelestial frame error.
|
http://arxiv.org/abs/2307.05319v1 | 20230711150327 | Jet substructure measurements in CMS | [
"C. Royon"
] | hep-ex | [
"hep-ex",
"hep-ph"
] |
Jet substructure measurements in CMS
Christophe Royon
Department of Physics and Astronomy, The University of Kansas, Lawrence, USA
Various recent measurements from the CMS collaboration related to the study of hadronic jets substructure in proton collisions at 13 TeV with the CMS experiment are presented, namely the generalized angular studies in dijet and Z+jet events and the measurement of the primary Lund jet plane density.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
Measuring jet substructure is fundamental for comparing jets with QCD predictions. The idea is to map the jet constituents onto physically meaningful observables.
We can distinguish between fragmentation functions (the leading hadrons are identified), the classic jet shapes (such as the thrust), and groomed variables (where we want to remove the effects of soft gluon emissions for instance during hadronization).
In this short report, we will mention two different measurements from the CMS collaboration, namely the generalized angular studies in dijet and Z+jet events and the measurement of the primary Lund jet plane density.
§ GENERALIZED ANGULAR PROPERTIES IN Z+JET AND DIJET EVENTS
In order to study jet substructures in Z+jet and dijet events, we define new observables <cit.>
λ_β^κ = Σ_i z_i^κ( Δ R_i/R)^β
and z_i = p_Ti/Σ_j p_Tj,
where the sum runs over all jet constituents, z_i is the fractional transverse momentum carried by the jet constituent i, and
Δ R_i=√((Δ y_i)^2+(Δϕ_i)^2) is the distance between the jet axis and the constituent in the rapidity-azimuth plane.
β and κ are parameters controlling the angular and momentum weighting: high β values enhance angular effects whereas high κ values enhance momentum effects. The CMS collaboration studied the Les Houches Angularity (λ_0.5^1), the width (λ_1^1), the thrust (λ_2^1), the multiplicity (λ_0^0) and (p_T^D)^2 (λ_0^2) <cit.>. Two different samples are used to distinguish between quark- and gluon-induced jets: Z+jet is a quark-enriched sample while dijets are gluon-enriched, especially for central dijets.
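For concreteness, the generalized angularities can be computed from a list of jet constituents in a few lines; the constituents below are toy values (not CMS data), the jet radius R = 0.4 matches one of the analysis choices, and charged-particle selections and grooming are ignored.

import numpy as np

def generalized_angularity(pt, dy, dphi, kappa, beta, R=0.4):
    """lambda_beta^kappa = sum_i z_i^kappa (DeltaR_i / R)^beta, with z_i = pT_i / sum_j pT_j."""
    pt = np.asarray(pt, dtype=float)
    z = pt / pt.sum()
    dR = np.sqrt(np.asarray(dy)**2 + np.asarray(dphi)**2)   # distance to the jet axis
    return np.sum(z**kappa * (dR / R)**beta)

# Toy jet constituents: (pT [GeV], Delta y, Delta phi) relative to the jet axis.
pt   = [60.0, 25.0, 10.0, 5.0]
dy   = [0.01, -0.05, 0.12, -0.20]
dphi = [0.02, 0.06, -0.10, 0.15]

for name, (kappa, beta) in {"LHA": (1, 0.5), "width": (1, 1), "thrust": (1, 2),
                            "multiplicity": (0, 0), "(pT^D)^2": (2, 0)}.items():
    print(f"{name:>13s}: {generalized_angularity(pt, dy, dphi, kappa, beta):.3f}")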
As an example, the Les Houches angularity observable (κ=1, β=0.5) is shown in Fig. <ref>.
Data are unfolded to particle level.
MG5+PYTHIA <cit.> and HERWIG++ <cit.> describe quark-enriched data well, and envelop the gluon-enriched data.
For Z+jet events, the resummation at NLL matched to fixed-order NLO matrix elements, with non-perturbative corrections from
SHERPA <cit.> is not in perfect agreement with data.
The groomed and ungroomed generalized angularities in dijet and Z+jet events were also measured by the CMS Collaboration <cit.>. The idea is to increase the β value for fixed κ to increase the weight of angular effects. The more weight is given to angular scales, the better agreement of theory with data is found for ungroomed measurements <cit.>. The groomed measurements are shown in Fig. <ref>. Soft-drop grooming is used to remove soft, wide angle radiation. Some
tension between the measurements and theory is found at small β=0.5, which might be related to the description of hard collinear splittings.
The CMS Collaboration also measured the dijet over Z+jet ratio for the different angular observables benefitting from the fact that many experimental uncertainties partially cancel in the ratio.
Leading order predictions and parton showers overestimate the
gluon to quark-enriched ratio. This ratio is better modelled with the “old”
PYTHIA8 and HERWIG7 CMS
tunes <cit.>.
Angular measurements are fundamental to tune further MC and to understand better gluon radiation from QCD.
§ MEASURING THE LUND JET PLANE USING CMS DATA
The idea of measuring the Lund jet plane in CMS is to visualize the phase-space of 1→2 QCD splittings.
We define the
relative transverse momentum of the emission k_T and the splitting angle Δ R= √((y_soft-y_hard)^2+(ϕ_soft-ϕ_hard)^2).
Theoretically, Lund jet planes are used for parton shower calculations and for the development of jet substructure
techniques; experimentally, it is possible to construct a
proxy for Lund diagrams using an iterative jet declustering procedure <cit.>.
In CMS, the constituents of the anti-k_T jets are reclustered using
the Cambridge/Aachen (CA) algorithm <cit.>.
The CA algorithm sequentially combines the pairs of protojets with
strict angular ordering, and the CA jet is then declustered iteratively (from large to
small angle emissions).
The transverse momentum k_T and splitting angle Δ R of
soft subjet emission relative to the hard subjet
(core) are measured at each step
k_T = p_T Δ R
where p_T is the transverse momentum of the softer (emitted) subjet. The procedure is iterated until the core is a single particle.
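The construction can be sketched end-to-end on toy inputs: the snippet below reclusters a handful of constituents with a simplified Cambridge/Aachen algorithm (pairwise merging by smallest ΔR, with a pT-weighted axis rather than full four-vector recombination) and then walks the resulting tree along its harder branch, recording the primary Lund plane coordinates (ln R/ΔR, ln k_T) at each declustering step. A real analysis would use FastJet, proper E-scheme recombination, and unfolding to particle level; this is only an illustration of the procedure.

import numpy as np

def deltaR(a, b):
    dphi = (a["phi"] - b["phi"] + np.pi) % (2 * np.pi) - np.pi
    return np.hypot(a["y"] - b["y"], dphi)

def merge(a, b):
    """Simplified recombination: summed pT, pT-weighted (y, phi) axis."""
    pt = a["pt"] + b["pt"]
    y = (a["pt"] * a["y"] + b["pt"] * b["y"]) / pt
    phi = (a["pt"] * a["phi"] + b["pt"] * b["phi"]) / pt
    return {"pt": pt, "y": y, "phi": phi, "children": (a, b)}

def cluster_CA(particles):
    """Cambridge/Aachen: repeatedly merge the pair with the smallest DeltaR."""
    objs = [dict(p, children=None) for p in particles]
    while len(objs) > 1:
        pairs = [(deltaR(objs[i], objs[j]), i, j)
                 for i in range(len(objs)) for j in range(i + 1, len(objs))]
        _, i, j = min(pairs)
        merged = merge(objs[i], objs[j])
        objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
    return objs[0]

def primary_lund_plane(jet, R=0.4):
    """Iteratively decluster along the harder branch; return (ln R/DeltaR, ln kT) points."""
    points, node = [], jet
    while node["children"] is not None:
        a, b = node["children"]
        hard, soft = (a, b) if a["pt"] >= b["pt"] else (b, a)
        dR = deltaR(hard, soft)
        kT = soft["pt"] * dR              # kT of the soft emission relative to the core
        points.append((np.log(R / dR), np.log(kT)))
        node = hard                       # keep following the core
    return points

# Toy constituents of a single jet (pT in GeV, rapidity, azimuth).
parts = [{"pt": 50.0, "y": 0.00, "phi": 0.00}, {"pt": 20.0, "y": 0.05, "phi": -0.03},
         {"pt": 8.0, "y": -0.15, "phi": 0.10}, {"pt": 3.0, "y": 0.20, "phi": 0.25}]
for lnRdR, lnkT in primary_lund_plane(cluster_CA(parts)):
    print(f"ln(R/DeltaR) = {lnRdR:5.2f}, ln(kT/GeV) = {lnkT:5.2f}")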
The measurements of the primary Lund jet plane were performed for jet radius R=0.4 and for the first time for R=0.8 and corrected to particle level <cit.>.
The primary Lund jet plane density projected onto the log k_T axis is shown in Fig. <ref> for 0.4 jets (the results for 0.8 jets can be found in Ref. <cit.>). The results for large (respectively small) splitting angles are shown on the left (respectively right) plot.
PYTHIA8 CP5 <cit.> overestimates the number of emissions by 15-20%; the data favor lower final-state radiation in the parton shower region. HERWIG7 CH3 <cit.> leads to better agreement with the data.
The primary Lund jet plane projected onto the log R/Δ R axis is shown in Fig. <ref> for R=0.4.
The region corresponding to soft (respectively hard) splittings is shown on the left (respectively right) plot.
Low-k_T splittings populate the whole range of emission angles, while large-k_T splittings populate only the wide-angle region.
PYTHIA8 CP5 overshoots data by 25-35% at low k_T and leads to a better description for hard emissions.
The comparison between the Lund jet plane measurement and model expectations is shown in Fig. <ref> for different region in ln (R/Δ R) and Δ R.
PYTHIA8 with VINCIA <cit.> or DIRE <cit.> models are in agreement with data within a few % except at high k_T.
SHERPA <cit.> and HERWIG7 <cit.> with dipole showers describe the data within 5-10% including at high k_T.
The comparison between data and HERWIG7 allows to
choose the best recoil scheme in angular ordered parton showers in a region where quark and gluon fragmentations play an important role.
The ultimate goal will be to achieve NLL accuracy in the next generation of parton showers.
The jet-averaged density of emissions also scales with α_S
in the soft and collinear limit of perturbative QCD, which shows that the measurement of jet substructure and
of the Lund jet plane is directly sensitive to the value of α_S <cit.>.
At leading logarithm, the density of emissions has the simple form
(2/π) C α_S(k_T), where C is the color factor.
One can use the Lund plane density to calibrate α_S from the parton shower (as an MC tuning parameter) since this is what drives part of the differences between the MC generators.
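As a numerical illustration of this sensitivity, taking the leading-log density (2/π) C α_S quoted above with α_S ≈ 0.118 (its value at the Z mass; at the few-GeV k_T scales typical of the Lund plane the running coupling is larger) and the QCD color factors C_F = 4/3 and C_A = 3 gives the expected average emission densities for quark- and gluon-initiated jets:

import math

alpha_s = 0.118                    # illustrative value; in practice alpha_s runs with kT
for name, C in [("quark (C_F = 4/3)", 4.0 / 3.0), ("gluon (C_A = 3)", 3.0)]:
    print(f"{name}: density ~ {2.0 / math.pi * C * alpha_s:.3f} emissions per unit Lund-plane area")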
To conclude, we described two measurements of jet substructure that are sensitive to basic building blocks of QCD.
The measurement of the angular distributions in Z+jet and dijet events are a valuable input for a better understanding of quark-jet and gluon-jet substructure. In addition, the measurement of the Lund jet plane is crucial to visualize the phase space of QCD splittings and it will improve our understanding of QCD and the description of data by MC, with the goal of achieving NLL accuracy in the next generation of parton showers.
99
observables
A.J. Larkoski, J. Thaler and W.J. Waalewijn, JHEP 11 (2014) 129.
cms1
CMS Collaboration, JHEP 01 (2022) 188.
pythia8
T. Sjöstrand et al., Comput. Phys. Commun. 191 (2015)
159; CMS Collaboration, Eur. Phys. J. C 80 (2020) 4.
herwig7
S. Catani and M. H. Seymour, Nucl. Phys. B 485 (1997) 291; CMS Collaboration, Eur. Phys. J. C 81 (2021) 312.
sherpa
Sherpa Collaboration, SciPost Phys. 7 (2019) 034.
gregory
A. Dreyer, G. Salam, G. Soyez, JHEP 12 (2018) 064.
ca
Y. L. Dokshitzer, G. D. Leder, S. Moretti, B. R. Webber, JHEP 08 (1997) 001.
cms2
CMS Collaboration, CMS-PAS-SMP-22-007.
antikt
M. Cacciari, G. P. Salam, and G. Soyez, JHEP 04
(2008) 063.
vincia
W. T. Giele, D. A. Kosower, and P. Z. Skands, Phys. Rev. D 78 (2008) 014026.
dire
S. Höche and S. Prestel, Eur. Phys.
J. C 75 (2015) 461.
|
http://arxiv.org/abs/2307.06080v1 | 20230712110247 | Conformal and Contact Kinetic Dynamics and Their Geometrization | [
"Oğul Esen",
"Ayten Gezici",
"Miroslav Grmela",
"Hasan Gümral",
"Michal Pavelka",
"Serkan Sütlü"
] | math-ph | [
"math-ph",
"math.MP",
"37K30, 70H05"
] |
Conformal and Contact Kinetic Dynamics and Their Geometrization
Received: 2 February 2023 / Accepted: 4 July 2023
===============================================================
We propose a conformal generalization of the reversible Vlasov equation of kinetic plasma dynamics, called conformal kinetic theory. To arrive at this formalism, we start with the conformal Hamiltonian dynamics of particles and lift this to the dynamical formulation of the associated kinetic theory. This theory represents a simple example of a geometric pathway from dissipative particle motion to dissipative kinetic motion. We also derive kinetic equations of a continuum of particles governed by the contact Hamiltonian dynamics, which can be interpreted in the context of relativistic mechanics. Once again we start with contact Hamiltonian dynamics and lift it to a kinetic theory, called contact kinetic dynamics. Finally, the contact kinetic theory is projected to the conformal kinetic theory, so that the two form a geometric hierarchy.
MSC2020 classification: 37K30; 70H05.
Key words: Vlasov Equation, Hamiltonian Dynamics, Contact Hamiltonian Dynamics, Conformal Hamiltonian Systems.
§ INTRODUCTION
The dynamics of a non-relativistic and collisionless plasma resting in M⊂ℝ^3 is determined by the plasma density function f=f(q^i,p_i), defined on the momentum phase space T^*M with Darboux' coordinates (q^i,p_i). Equation of motion then is a coupled integrodifferential system
∂ f/∂ t+1/m p_i∂ f/∂ q^i-e∂ϕ/∂ q^i∂ f/∂ p_i=0
∇ _q^2ϕ _f(q)=-e∫ f(q,p)d^3p
which are known as the Vlasov–Poisson equations, where e is the charge and ϕ is the potential. Hamiltonian analysis of this system may be recalled from <cit.>, wherein it is well established that the Vlasov–Poisson system (<ref>) admits Hamiltonian formulation. More precisely, the Vlasov equation fits in Lie-Poisson (a Poisson framework available on the dual of a Lie algebra <cit.>) picture.
In a series of papers <cit.>, while investigating Lie-Poisson formulation of the Vlasov equation, an intermediate level of description is obtained on the space of one-forms on T^*M. In this case, the dynamics is represented by the evolution of a dual element, more precisely a one-form Π, governed by Hamiltonian vector field X_H through
Π̇=-ℒ_X_HΠ
where ℒ_X_H is the Lie derivative, whereas H=p^2/2m+eϕ is assumed to be the total energy of a single particle. The link between the Vlasov equation (the first line in (<ref>)) and the momentum-Vlasov equation (<ref>) is determined through the dual mapping of Lie algebra homomorphism H↦ X_H computed to be
f=divΩ^♯_M(Π).
Here, Ω_M is the canonical symplectic two-form on T^*M, and Ω^♯_M is the induced musical isomorphism, while div denotes the symplectic divergence.
This present work consists of three main sections, in which we propose novel geometries and kinetic theories generalizing the ones in the literature, along with an appendix.
Section <ref>: Conformal Kinetic Dynamics. Classical Hamiltonian vector fields on symplectic manifolds are divergence-free. This is one of the manifestations of reversibility. In <cit.>, Hamiltonian vector fields are generalized to conformal vector fields with constant divergences.
In this work, our first goal is to present a kinetic equation of particles governed by conformal vector fields. This novel generalization will be done both for the dynamics of one-form sections and for the dynamics of the density function (the link between these two realizations, on the other hand, will be established by a Poisson map in Appendix <ref>). We shall also provide a geometrical pathway from the particle motion to the motion of the continuum by means of geometrical operations such as complete lifts and vertical representatives.
Section <ref>: Contact Kinetic Dynamics. Even though contact manifolds are known as the odd-dimensional counterparts of symplectic manifolds, there exist some characteristic differences. In the contact framework, the Hamiltonian flow preserves neither the Hamiltonian function nor the volume form. Our second goal is to provide the kinetic dynamics of a continuum of particles under contact Hamiltonian motion. So, dissipative motion on the particle level gives rise to dissipative motion on the level of density functions.
Section <ref>: From Contact Kinetic Theory to Conformal Kinetic Theory. To sum up the discussions, we shall present the hierarchy of the underlying Lie algebras of the previous sections. We shall later dualize the Lie algebra homomorphisms to arrive at momentum and Poisson mappings connecting kinetic dynamics on different levels of descriptions, namely, the reversible Hamiltonian dynamics, conformal Hamiltonian dynamics, and the contact Hamiltonian dynamics.
Notation. We shall follow the notation used in <cit.>. To be more precise, given a manifold P, we shall denote the space of smooth functions by ℱ(P), the space of one-form sections by Λ^1(P), and the space of vector fields by 𝔛(P).
On the other hand, given a two-form Ω we shall make use of the musical flat mapping
Ω^♭: 𝔛(P)⟶Λ^1(P), Ω^♭(X) (Y)=(ι_X Ω) (Y): = Ω(X,Y).
Moreover, in case Ω^♭ is invertible (occurs if Ω is non-degenerate), we shall denote its inverse by Ω^♯. Finally, given a one-form α and a vector field X, we shall represent by
α(X)=Ω(Ω^♯(α),X)
the relation between the symplectic two-form Ω, and the musical mapping Ω^♯.
§ CONFORMAL KINETIC THEORY
In this section, we shall consider a particle that is governed by a conformal vector field that dissipates the energy and has non-zero divergence. We then lift this particle motion to a kinetic theory underlying the dynamics of a number of such particles.
§.§ Conformal Hamiltonian Dynamics
Let M be a manifold, which is called a symplectic manifold if it admits a closed non-degenerate two form Ω, <cit.>. Accordingly, we shall denote a symplectic manifold by a pair (M,Ω). The non-degeneracy of Ω suffices to define a non-vanishing top form on the manifold, called the symplectic volume
d μ=(-1)^n(n-1)/2/n!Ω∧…∧Ω.
The generic example of a symplectic manifold is the cotangent bundle T^*M, along with the Liouville one form Θ_M and the canonical symplectic two-form Ω_M=-dΘ_M.
Classical Hamiltonian Dynamics. Given Hamiltonian function H, the Hamiltonian vector field X_H is defined through the Hamilton's equation
ι_X_HΩ =dH.
Taking the exterior derivative of (both sides of) the equality, we see that the Lie derivative of the symplectic two-form vanishes. As such, the divergence of a Hamiltonian vector field is zero concerning the symplectic volume (<ref>). We record these as
ℒ_X_HΩ =0, ℒ_X_H d μ =0 ,
respectively. Next, the integration of (<ref>) implies that the integral flows φ_t of the Hamiltonian vector field preserve both the symplectic two-form and the symplectic volume
φ_t^*(Ω)=Ω, φ_t^*( d μ)= d μ,
respectively. Furthermore, the calculations
ℒ_X_H(H)=X_H(H)=0, φ_t^*(H)=H
show us that the Hamiltonian function is constant along the motion. This corresponds to the conservation of energy and determines the reversible character of the symplectic Hamiltonian dynamics.
A symplectic two-form can be used to determine a Poisson bracket on the space of smooth functions given by
{F,H}^(S):= Ω(X_F,X_H),
which is skew-symmetric and satisfies both the Leibniz and the Jacobi identities. Since the characteristic distribution is integrable, the space of Hamiltonian vector fields is closed under the Jacobi-Lie bracket. More precisely we have
[X_F,X_H]=-X_{F,H}^(S).
In other words,
𝔛_ham(M):={X_H∈𝔛(M): ι_X_HΩ =dH }
is a Lie algebra.
Conformal Hamiltonian Dynamics.
A Hamiltonian vector field on M is defined through the covariance equation (<ref>). Recall from (<ref>) that the Lie derivative vanishes identically and the divergence is zero. Relaxing this condition, we define a conformal vector field X_H^c as <cit.>
ℒ_X_H^cΩ=c Ω
for a fixed real number c called conformal parameter. In the sequel, we shall work on several conformal vector fields making use of subindexes (such as c_H) to be more precise about the parameter. Let us note also that a Hamiltonian vector field is a conformal vector field with the conformal factor being zero, while the divergence of a conformal vector field is non-zero. More precisely (for a symplectic manifold of dimension 2n) it is computed to be
ℒ_X_H^c d μ= cn d μ, div(X_H^c)=nc,
where d μ is the symplectic volume. Integration of the defining identity (<ref>) yields that the flow of a conformal vector field preserves the symplectic two-form up to the conformal factor c and the volume up to the conformal factor cn, that are
φ_t^*(Ω)=exp(ct)Ω, φ_t^*( d μ)=exp(nct)d μ,
respectively, where exp stands for the exponential function.
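In Darboux coordinates the conformal condition can be verified symbolically. The sketch below treats the two-dimensional case (n = 1, Ω = dq∧dp), where ℒ_X Ω = (div X) Ω, so the defining identity ℒ_{X_H^c}Ω = cΩ amounts to div X_H^c = c; it is a coordinate check for an arbitrary Hamiltonian, not a proof of the coordinate-free statement.

import sympy as sp

q, p, c = sp.symbols("q p c")
H = sp.Function("H")(q, p)          # arbitrary Hamiltonian

# Conformal Hamiltonian vector field in Darboux coordinates (n = 1):
#   X_H^c = (dH/dp) d/dq - (dH/dq - c p) d/dp
Xq = sp.diff(H, p)
Xp = -(sp.diff(H, q) - c * p)

# For Omega = dq ^ dp one has  L_X Omega = (dXq/dq + dXp/dp) Omega,
# so the conformal condition L_X Omega = c Omega amounts to div X = c.
div = sp.simplify(sp.diff(Xq, q) + sp.diff(Xp, p))
print(div)          # -> c, i.e. L_{X_H^c} Omega = c * Omega, as claimed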
For an exact symplectic manifold where Ω=-dΘ, we define a conformal vector field as
ι_X_H^cΩ=dH-cΘ.
Following <cit.> we define the Liouville vector field Z as the image of the Liouville (canonical) one-form Θ under the musical isomorphism Ω^♯ (recall from (<ref>) and (<ref>)) induced from the symplectic form Ω, namely,
Z:=Ω^♯(Θ), Θ=Ω^♭(Z).
It is worth noting that the Liouville vector field is not a Hamiltonian vector field but a conformal vector field with the conformal factor -1, and with the divergence -n, as
ℒ_ZΩ=-Ω, ℒ_Zd μ=-nd μ, div(Z)=-n.
In terms of the Liouville vector field, defined in (<ref>), we can express a conformal vector field as the linear combination
X_H^c=X_H-cZ,
where X_H is the Hamiltonian vector field for the Hamiltonian function. Accordingly, we compute the change of the Hamiltonian function along the conformal vector field, and its flow ϕ_t, as
ℒ_X_H^c(H)=X_H(H)-cZ(H)=-cZ(H), ϕ_t^*(H)=H-cψ_t^*(H)
respectively, where ψ_t denotes the integral curve of the Liouville vector field Z.
We consider the Darboux' coordinates (q^i,p_i) on M assuming that it is locally isomorphic to the momentum phase space T^*M of a configuration manifold M. The symplectic Hamiltonian vector field X_H and the Liouville vector field Z are computed to be
X_H= ∂ H/∂ p_i∂/∂ q^i - ∂ H/∂ q^i∂/∂ p_i,
Z=-p_i∂/∂ p_i,
respectively. So the conformal vector field X_H^c becomes
X_H^c= ∂ H/∂ p_i∂/∂ q^i - ( ∂ H/∂ q^i-cp_i) ∂/∂ p_i.
Then, the dynamics governed by a conformal vector field X_H^c is given by
q̇^i=∂ H/∂ p_i, ṗ_i=-∂ H/∂ q^i+cp_i.
Let us note that in case the conformal factor c is trivial, then (<ref>) reduces to the classical Hamilton's equations as expected.
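As a concrete illustration, the following sketch integrates the conformal Hamilton's equations above for a one-degree-of-freedom oscillator H = p^2/2m + kq^2/2 with a negative conformal parameter; the trajectory then spirals toward the origin and the energy decays, in line with the dissipative character discussed above. The parameter values are arbitrary and the integrator is used purely for illustration.

import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 1.0, 1.0, -0.15        # mass, spring constant, conformal parameter (arbitrary)

def conformal_hamilton(t, state):
    q, p = state
    return [p / m, -k * q + c * p]          # dq/dt = dH/dp,  dp/dt = -dH/dq + c p

sol = solve_ivp(conformal_hamilton, (0.0, 40.0), [1.0, 0.0], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 40.0, 5)
q, p = sol.sol(t)
H = p**2 / (2 * m) + k * q**2 / 2
for ti, Hi in zip(t, H):
    print(f"t = {ti:5.1f}   H = {Hi:.4f}")   # the energy decays monotonically for c < 0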
§.§ Lie Algebra of Conformal Vector Fields
Consider a symplectic manifold (M,Ω).
Let us first note that the Jacobi-Lie bracket of two conformal vector fields is a local Hamiltonian vector field. Indeed,
ℒ_[X_H^c,X_K^c]Ω=ℒ_X_H^cℒ_X_K^cΩ - ℒ_X_K^cℒ_X_H^cΩ=(c_H c_K - c_K c_H) Ω =0.
As such, we can argue that the space of conformal vector fields is a Lie algebra
𝔛_ham^c(M):={X_H∈𝔛(M): ℒ_X_H^cΩ =c_H Ω}
and contains the space of Hamiltonian vector fields 𝔛_ham(M) as an ideal.
Let, now, 𝔷 denotes the space of vector fields spanned by Z. Evidently, this space is one-dimensional and may be identified with the space of real numbers as
ℝ⟷𝔷, c↔ cZ,
and thus acquires the structure of a trivial Lie algebra.
Moreover, this trivial Lie algebra 𝔷 acts on the space of Hamiltonian vector fields 𝔛_ham(M) from the left as
𝔷×𝔛_ham(M) ⟶𝔛_ham(M), (Z,X_H)↦ [Z,X_H]=X_{Z(H)+H}, that is, the Hamiltonian vector field of the function Z(H)+H.
The realization (<ref>) motivates us to have the space of conformal vector fields 𝔛_ham^c(M) as the Cartesian product of the space of Hamiltonian vector fields 𝔛_ham(M) and 𝔷≃ℝ.
Accordingly, we can recast the space 𝔛_ham^c(M) of conformal vector fields as a central extension of the space 𝔛_ham(M) of Hamiltonian vector fields as
𝔛_ham(M) ×𝔷⟷𝔛_ham^c(M)
, (X_H,c)↦ X_H^c=X_H-cZ .
Now we are ready to determine the Lie algebra structure on 𝔛^c_ham(M). To this end, given two conformal fields X_H^c and X_F^c, we compute their opposite Jacobi Lie bracket
[X_H^c,X_F^c]_𝔛=-
[X_H^c,X_F^c] =-[X_H-c_HZ,X_F-c_FZ]
=-[X_H,X_F]+c_H[Z,X_F]+c_F[X_H,Z]
=X_{{H,F}^(S)}+c_H X_{Z(F)+F}-c_F X_{Z(H)+H}
=X_{{H,F}^(S)+c_H(Z(F)+F)-c_F(Z(H)+H)},
where we have employed the action (<ref>) on the forth equality.
Let us conclude the present subsection with another characterization of conformal vector fields that will be useful in the sequel.
To this end, let us recall that we have identified the space 𝔛_ham(M) of Hamiltonian vector fields with the space ℱ(M) of smooth functions (modulo constants). We now employ this to the identification in (<ref>) to obtain
Φ ^c: ℱ(M) ×𝔷⟶𝔛_ham^c(M)
, (H, c)↦ X_H^c=X_H-cZ.
In view of (<ref>), it is thus possible to endow ℱ(M) ×𝔷 with a Lie algebra structure so that the mapping Φ ^c is a Lie algebra homomorphism. Accordingly, we define the bracket
[(H,c_H),(F,c_F)]:=( {H,F}^(S)+c_H(Z(F)+F)-c_F(Z(H)+H),0)
which happens to be a Lie algebra bracket, satisfying the Jacobi identity.
§.§ Conformal Kinetic Dynamics in Momentum Formulation
In order to characterize the dual space 𝔛_ham^c*(M) of the space 𝔛_ham^c(M) of conformal vector fields we shall now consider the Lie algebra ℱ(M) ×𝔷.
For the function space ℱ(M), the dual space is the space of densities Den(M). Fixing the symplectic volume d μ, the L_2 pairing allows us to identify the dual space ℱ^*(M) with ℱ(M) itself. Further, the identification 𝔷≃ℝ implies the isomorphism 𝔷^*≃ℝ on the level of dual spaces. Accordingly, we may consider the dual space ℱ^*(M) ×𝔷^* as ℱ(M) ×𝔷 itself. More precisely, given (H,c_H) in ℱ(M) ×𝔷, and a dual element (f,c^*) in ℱ^*(M) ×𝔷^*, we shall consider the pairing given by
⟨(f,c^*), (H,c_H)⟩ =c^*c_H+ ∫_M fH d μ .
Accordingly, the dual space 𝔛_ham^c*(M) is determined by the pairing
⟨Π⊗ d μ,X_H^c ⟩ _L_2 = ∫_M⟨Π, X_H - cZ
⟩ d μ =∫_M⟨Π, Ω ^♯(dH) - c Ω^♯(Θ)
⟩ d μ
=∫_M⟨Π, Ω ^♯(dH)⟩ d μ -∫_M⟨Π, c Ω^♯(Θ)
⟩ d μ
= -
∫_M⟨Ω ^♯(Π), dH
⟩ d μ -c∫_M⟨Π, Ω ^♯(Θ) ⟩ d μ,
=
-∫_Mι_Ω ^♯(Π) dH d μ +c∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ
=
-∫_Mι_Ω ^♯(Π) (dH∧ d μ) -∫_M dH∧ι_Ω ^♯(Π) ( d μ) +c∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ
=-∫_MdH ∧ι_Ω ^♯(Π)
(d μ) +c∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ
=
-∫_M d( H ι_Ω ^♯(Π)d μ) + ∫_M H d ι_Ω ^♯(Π)d μ +c∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ
=
∫_MH ℒ_Ω ^♯(Π) d μ +c∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ
= ∫_MH divΩ ^♯(Π) d μ + c∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ.
We thus arrive at the dual mapping of the Lie algebra homomorphism Φ^c in (<ref>), that is,
Φ^c*: 𝔛_ham^c*(T^*M) ⟶Den(M) ×ℝ, (Π⊗ d μ) ↦( divΩ ^♯(Π)d μ ,∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ).
In order to ensure the non-degeneracy of the pairing, therefore, we present the dual space as
𝔛_ham^c*(T^*M) ={Π⊗ d μ∈Λ^1(M)⊗Den(M): divΩ ^♯(Π) ≠ 0, ∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ≠ 0
}.
This determines the following identification for the two tuple (fd μ,c^*) with the one-form sections as
f(z)=divΩ ^♯(Π), c^*= ∫_M⟨Θ, Ω ^♯(Π) ⟩ d μ.
Hence, following (<ref>) we compute the coadjoint action of the Lie algebra 𝔛_ham^c(M) on its dual space 𝔛_ham^c* as
ad^*_X_H^c(Π⊗ d μ) = ( ℒ_X_H^cΠ
+div(X_H^c) Π) ⊗ d μ = (ℒ_X_H^cΠ+ncΠ) ⊗ d μ,
where we have substituted the divergence of X_H^c from (<ref>). Accordingly, in view of (<ref>) the Lie-Poisson equation, or equivalently the coadjoint flow, is computed to be
Π̇=-ℒ_X_H^cΠ - cn Π .
The equality (<ref>) is the conformal kinetic equation in terms of momenta.
Momentum-Vlasov Equation. Let us particularly consider the case where the conformal factor is zero. In this case, one arrives at the pure Hamiltonian flow, and the Lie algebra associated with this particular case is the space 𝔛_ham(M) of Hamiltonian vector fields. Moreover, the domain of the Lie algebra homomorphism Φ^c in (<ref>) turns out to be the function space ℱ(M), without the extension, that is
Φ: ℱ(M) ⟶𝔛_ham (M)
, H↦ X_H.
The dual of this linear mapping can be computed similarly to (<ref>). As a result, one can see that, only the first term in the dual operation (Φ^c)^* in (<ref>) remains. Namely,
Φ^*:𝔛_ham^*(T^*M)⟶Den(M) , Π↦ fd μ:=divΩ ^♯(Π)d μ .
This calculation allows us to define the dual space 𝔛_ham^*(M) of the space of Hamiltonian vector fields as
𝔛_ham^*(M): ={Π⊗ d μ∈Λ ^1(M)⊗Den(M)):divΩ ^♯(Π)≠ 0 }∪0.
Having determined the dual space properly, we can then determine the Lie-Poisson equations on 𝔛_ham^*(M). It follows from (<ref>) that a Hamiltonian vector field is divergence-free. As such, in view of the calculation (<ref>), the Lie-Poisson equation is given by
Π̇= - ad^*_X_HΠ = - ℒ_X_HΠ,
which is called the momentum-Vlasov equation in the literature <cit.>. We remark that this system is precisely equal to conformal kinetic dynamics (<ref>) with c being zero.
§.§ Conformal Kinetic Dynamics in Density Formulation
As was shown above, the dual space of the Lie algebra ℱ(M) ×𝔷 is the product Den(M) ×𝔷 of the space of densities with real numbers.
Accordingly, in view of the bracket formula (<ref>), the coadjoint action of ℱ(M) ×𝔷 on its dual Den(M) ×𝔷 is given by
⟨ ad^*_(H,c_H) (fd μ,c^*),(F,c_F) ⟩ = ⟨ (fd μ,c^*),ad_(H,c_H) (F,c_F) ⟩
= ⟨ (fd μ, c^*),({H,F}^(S)+c_H(Z(F)+F)-c_F(Z(H)+H),0)⟩
= ∫_M f {H,F}^(S) + f c_H(Z(F)+F)-fc_F(Z(H)+H) d μ.
Let us analyze the terms on the right-hand side of the last line one by one. The first term reads
∫_M f {H,F}^(S) d μ = ∫_M {f,H}^(S)F d μ,
while the third term can be written as a multiple of c_F. The second term, on the other hand, may be examined through
∫_M fc_H(Z(F)+F) d μ = ∫_M fc_H Z(F) d μ+∫_M fFc_H d μ
=
∫_M fc_H (ι_ZdF) d μ +∫_M fFc_H d μ
=
∫_M fc_H dF∧ι_Z d μ +∫_M fFc_H d μ
=
- ∫_M Fc_H df ∧ι_Z d μ -
∫_M fc_H F dι_Z d μ
+ ∫_M fF c_H d μ
=
- ∫_M F c_H (ι_Zdf) d μ -
∫_M fc_H F div(Z) d μ
+∫_M fFc_H d μ
=-∫_M (c_H Z(f) + div(Z)fc_H-fc_H)Fd μ.
Now, in case M is 2n dimensional, we recall from (<ref>) that the divergence of the Liouville vector field Z is div(Z)=-n. Then,
ad^*_(H,c_H) (f,c^*)=({f,H}^(S)-c_H Z(f) + c_H (n+1) f,-∫_M f (Z(H)+H)d μ).
This calculation gives us the Lie-Poisson dynamics generated by a conformal vector field X_H^c as a coupled PDE system
∂ f/∂ t = {H,f}^(S)+c_H Z(f) - c_H (n+1) f,
∂ c^*/∂ t =∫_M f (Z(H)+H)d μ,
which we call the conformal Kinetic equations.
Consider the Darboux coordinates (q^i,p_i), and let the Hamiltonian function be the total energy H=p^2/2m+eϕ of a single particle. Then the conformal Kinetic equations in density formulation (<ref>) take the particular form
∂ f/∂ t+1/mδ^ij p_i ∂ f/∂ q^j -e ∂ϕ/∂ q^i∂ f/∂ p_i+ c_H(n+1)f-c_Hp_i ∂ f/∂ p_i =0,
∂ c^*/∂ t -∫_M f (
H- p_i ∂ H/∂ p_i)d μ =0.
If the conformal factor is trivial, then the equation in (<ref>) becomes
∂ f/∂ t= {H,f}^(S),
and, in local coordinates, we are left with the first equation in (<ref>) which turns out to be the Vlasov equation
∂ f/∂ t+1/mδ^ij p_i ∂ f/∂ q^j -e ∂ϕ/∂ q^i∂ f/∂ p_i=0.
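To see how the abstract Lie-Poisson formulation translates into a computable model, the following crude sketch discretizes the local conformal kinetic equation above (with n = 1, a fixed external potential ϕ = q^2/2 instead of the self-consistent Poisson coupling, and a simple forward-Euler/centered-difference scheme that is adequate only for short illustrative runs):

import numpy as np

# Grid in one-dimensional phase space (q, p); n = 1 in the notation above.
m, e, c_H, n = 1.0, 1.0, -0.05, 1
q = np.linspace(-5, 5, 101)
p = np.linspace(-5, 5, 101)
Q, P = np.meshgrid(q, p, indexing="ij")
dq, dp = q[1] - q[0], p[1] - p[0]

phi_prime = Q                                  # d(phi)/dq for phi = q^2/2 (fixed, no Poisson coupling)
f = np.exp(-((Q - 1.0) ** 2 + P ** 2))         # initial density

def rhs(f):
    df_dq = np.gradient(f, dq, axis=0)
    df_dp = np.gradient(f, dp, axis=1)
    # local conformal kinetic equation, as written above, solved for df/dt
    return (-(P / m) * df_dq + e * phi_prime * df_dp
            - c_H * (n + 1) * f + c_H * P * df_dp)

dt, steps = 2e-3, 500
for _ in range(steps):
    f = f + dt * rhs(f)                        # crude forward-Euler step, for illustration only

print("total 'mass' after evolution:", f.sum() * dq * dp)   # not conserved for c_H != 0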
An Algebraic Route to Conformal Kinetic Equations. Let us now note that the algebraic structure of ℱ(M)×𝔷 formulated in (<ref>) fits the abstract formalism presented in Appendix <ref>, more precisely the bracket (<ref>). Indeed, given the left action
▹ : 𝔷×ℱ(M)⟶ℱ(M), (c,H)↦ c(Z(H)+H)
of 𝔷 on ℱ(M), and the adjoint action
ad_HF={H,F}^(S)
of ℱ(M) on itself, the bracket (<ref>) may be written as
[(H,c_H),(F,c_F)]=(ad_HF +c_H▹ F - c_F▹ H,0).
A direct computation, then, yields the coadjoint action as
ad^*_(H,c_H)(f,c^*)= (ad^*_Hf+f∗◃c_H, -𝔟^*_Hf),
where
ad^*_Hf= {f,H}^(S), f∗◃c_H=(n+1)c_Hf -c_HZ(f), 𝔟^*_Hf=∫_Mf(Z(H)+H)d μ.
Let us note also that the coadjoint action (<ref>) fits exactly the one in (<ref>).
We refer to Appendix <ref> for the pure geometric link between the conformal kinetic equations (<ref>) in momentum formulation and the conformal kinetic equation (<ref>) in terms of the density.
§.§ A Geometric Pathway to Kinetic Conformal Dynamics
In this section, we provide a geometric pathway from particle motion to irreversible kinetic dynamics. For reversible motion, this geometry has already been given in <cit.>. Let M be an m-dimensional manifold equipped with the local coordinates (x^a), and let ϕ_t:M→ M denote the flow of a vector field X=X^a ∂/∂ x^a on M.
Complete Cotangent Lift. Let, on the other hand, T^*M be the cotangent bundle equipped with the Darboux' coordinates (x^a,y_a). The complete cotangent lift of a flow φ_t on M is then given by the one-parameter group of diffeomorphisms φ̂_t on T^∗M satisfying
π _M∘φ̂_t=φ _t∘π _M,
where π _M is the natural projection from T^∗M to M. The vector field X̂ on T^*M whose flow is φ̂_t is called the
complete cotangent lift of X. We do note that
Tπ _M∘X̂=X∘π _M,
where, in coordinates, we have
X̂ = X^a∂/∂ x^a - ∂ X^b/∂ x^a y_b ∂/∂ y_a.
Let us note also that the Jacobi-Lie bracket of complete cotangent lifts is a complete cotangent lift, <cit.>. More precisely, we have that the mapping
𝔛(M) →𝔛( T^∗M), X↦X̂
is a one-to-one Lie algebra homomorphism.
Divergence Lift. Let M be a (volume) manifold, and let W:=Ω^♯(Θ_M) be the Liouville vector field on the cotangent bundle T^*M, where Θ_M is the canonical one-form on T^*M. Moreover, let also π^*_Mℱ(M) be the pullback of the space of functions ℱ(M), to the level of cotangent bundle, using the cotangent bundle projection π_M: T^*M↦ M. Now, since the canonical Poisson bracket on the cotangent bundle T^*M vanishes on π^*_Mℱ(M), it turns out to be a Lie subalgebra of the space ℱ(T^*M) of smooth functions on T^*M. We then define the space
𝔴={FW∈𝔛(T^*M): F∈π^*_Mℱ(M) },
which evidently is a Lie subalgebra of 𝔛(T^*M), as the Jacobi-Lie bracket of such vector fields vanishes. Let us remark that in Darboux' coordinates (x^a,y_a), the Liouville vector field W and an arbitrary element FW take the form of
W= - y_a∂/∂ y_a, FW=- F(x^a)y_a∂/∂ y_a.
Next, let X be a vector field on M with possibly a non-zero divergence. Since div(X) is a function on M, we can lift it to the cotangent bundle by means of the Liouville vector field W as
𝒟:𝔛(M)⟶𝔛(T^*M), X↦div(X)W.
The mapping 𝒟 is Lie algebra homomorphism if we restrict the domain to the space 𝔛_c(M) of vector fields with constant divergences. The kernel of 𝒟, on the other hand, happens to be the space of divergence-free vector fields.
Now we collect the complete cotangent lift (<ref>) and the divergence lift (<ref>) to define a mapping from the space 𝔛_c(M) of vector fields with constant divergence into the space 𝔛(T^*M) of vector fields as
κ:𝔛_c(M)⟶𝔛(T^*M), X↦X̂+div(X)W.
A direct calculation proves that the Jacobi-Lie bracket of a complete cotangent lift X̂ and a divergence lift 𝒟(X) is trivial. As such, the mapping κ is a Lie algebra homomorphism. Moreover, in Darboux coordinates, the image of a vector field under κ is computed to be
κ(X)= X^a∂/∂ x^a - ( ∂ X^b/∂ x^b y_a + ∂ X^b/∂ x^a y_b )∂/∂ y_a.
If the vector field X is divergence-free then we are left only with the complete cotangent vector field.
Holonomic Part.
Once again π_M: T^*M↦ M being the cotangent bundle, let J^1T^*M be the first jet bundle (which happens to be a 2n+n^2 dimensional manifold) with the induced coordinates
(x,y,y_x)=(x^a,y_a, ∂ y_b/∂ x^a).
Let X be a vector field on M, and σ a one-form. The Lie derivative (directional derivative) of a smooth function F, defined on the total space T^*M, with respect to the vector field X can be computed by means of σ as ℒ_X(F∘σ). Accordingly, the definition of the holonomic lift X^hol of the vector field X may be given by the identity
X^hol(F)∘σ:=ℒ_X(F∘σ),
see for instance <cit.>. In local coordinates, the holonomic lift of U=U^a∂/∂ x^a is computed to be
U^hol=U^a∂/∂ x^a+U^a ∂ y_b/∂ x^a ∂/∂ y_b.
We note that U^hol is not a classical vector field, since its coefficients depend on the first-order jet bundle. Such kinds of sections are called the generalized vector fields, <cit.>.
For a projectable vector field Y on the cotangent bundle T^*M, the holonomic part HY is defined to be the holonomic lift of its projection, that is, for a vector field Y=Y^a(x)∂ /∂ x^a+Y_b(x,y)∂ /∂
y_b,
HY:=(Tπ∘ Y)^hol=Y^a∂/∂ x^a+Y^a ∂ y_b/∂ x^a ∂/∂ y_b.
A generalized vector field on T^*M, then, is of the form
χ =χ ^a( x) ∂/∂ x^a
+χ_b( x,y,y_x)
∂/∂ y_b.
As such, the first order prolongation pr^1χ of χ may is given by
pr^1χ =χ +Δ _ba∂/∂ (∂ y_b/∂ x^a), Δ _ba=D_x^a( χ_b-χ^e (∂ y_b/∂ x^e))
+(∂^2 y_b/∂ x^a∂ x^e)χ^e,
where D_x^a is the total derivative operator with respect to x^a,
and ∂^2 y_b/∂ x^a∂ x^e is an element of the second order jet bundle. Furthermore, the Lie bracket of two first-order generalized vector fields χ and ψ is
the unique first-order generalized vector field is given by
[ χ ,ψ] _pro=( pr^1χ( ψ ^a)
-pr^1ψ( χ ^a) ) ∂/∂ x^a
+( pr^1χ( ψ_a) -pr^1ψ( χ
_a) ) ∂/∂ y_a.
Accordingly, the holonomic part operation H: Y→ HY defined in (<ref>) is a Lie algebra homomorphism from the space of projectable vector fields into the space of generalized vector fields of order one, equipped with the bracket (<ref>).
Finally, the holonomic part of the image space of the mapping κ given by (<ref>) is computed to be
Hκ(X)= X^a∂/∂ x^a +X^a ∂ y_b/∂ x^a ∂/∂ y_b.
We remark that the term div(X)W does not contribute to the holonomic part since it is a pure vertical vector and that the composition mapping X↦ Hκ(X) is a Lie algebra homomorphism since both (<ref>) and (<ref>) are Lie algebra homomorphisms.
Vertical Representative. The holonomic lift operation U^hol copies the dynamics on the base manifold to the cotangent bundle in terms of the action of the vector field U on the fiber coordinates. Hence, the vertical motion (that is, the dynamics governing the sections) is obtained by subtracting the holonomic part HY from a projectable vector field Y on T^*M, <cit.>. In short, we shall call
VY=Y-HY=( Y^λ-Y^a u_a^λ) ∂/∂ u^λ
the vertical representative of Y. We note at once that VY lies in the kernel of Tπ_M.
The vertical representative of κ(X) defined in (<ref>) is computed to be
Vκ(X)=-( ∂ X^b/∂ x^b y_a + ∂ X^b/∂ x^a y_b + X^b ∂ y_a/∂ x^b)∂/∂ y_a.
Given the one-form Π=y_adx^a, a quick computation yields
Π̇=-ℒ_X(Π)-div(X)Π,
that is, the local formulation of the vertical representative Vκ(X) is exactly the Lie-Poisson dynamics (<ref>). Furthermore, the vertical representative operation κ(X)↦ Vκ(X) is a Lie algebra homomorphism, endowing the image space with the prolonged bracket (<ref>).
To sum up, we have the following Lie algebra homomorphisms for the particle motion to the motion of the continuum:
Particle motion X  ⟶[κ in (<ref>)]  Lifted motion κ(X)  ⟶[V in (<ref>)]  Kinetic motion of fibers Vκ(X).
We do note that this geometric path provides the geometrization of the conformal kinetic dynamics in (<ref>). For the autonomous cases, one reduces the mapping κ to the cotangent bundle and arrives at the momentum Vlasov dynamics in (<ref>).
§ CONTACT KINETIC THEORY
In classical kinetic theory, the distribution function f(t,𝐱,𝐩) is a function of time, position, and momentum. In relativistic physics, however, the distribution function F(x,p) becomes a function of the four-position x and the four-momentum p, and the explicit dependence on a special parameter standing alongside the coordinates (global time) disappears. The distribution function can be constructed from the positions and momenta of concrete particles by the Klimontovich formula
F(x,p) = 1/mc∫ dτ⟨∑_i δ(x-x_i(τ))δ(p-p_i(τ))⟩_ensemble
where τ is the proper time <cit.>. Contact kinetic theory may be seen as a geometrization of this construction.
The usual Hamiltonian kinetic theory can be geometrically constructed by the following steps. First, the Lie group of canonical transformations on a cotangent bundle is considered. To that Lie group, the Lie algebra and Lie algebra dual are attached. On the Lie algebra dual, there is the Lie-Poisson bracket. Finally, once energy is provided, the Lie-Poisson bracket and energy yield the evolution equation for the distribution function.
This construction has the same drawback as the usual Hamiltonian mechanics, namely that the evolution parameter has to be interpreted as time, which leads to suspicious splitting of space-time. Is it possible to construct kinetic theory without that drawback? Let us follow the same strategy as in the preceding section while replacing symplectic geometry with contact geometry.
§.§ Contact Manifolds
Let M̅ be an odd, say (2n + 1), dimensional manifold. A contact structure on M̅ is a maximally non-integrable smooth distribution of codimension one, and it is locally given by the kernel of a one-form η such that
dη^n
∧η≠ 0.
Such a one-form η is called a (local) contact form <cit.>. In the present manuscript, we shall consider the existence of a global contact one-form. A contact one-form for a given contact structure is not unique. Indeed, if η is a contact one-form for a fixed contact structure, then λη also defines the same contact structure for any non-zero real-valued function λ defined on M̅. In short, we call a (2n+1)-dimensional manifold M̅ as a contact manifold if it is equipped with a contact one-form η satisfying dη^n
∧η≠ 0, and we shall denote a contact manifold by (M̅, η).
Given a contact one-form η, the vector field ℛ satisfying
ι_ℛη =1, ι_ℛdη =0
is unique, and it is called the Reeb vector field. There is, on the other hand, a musical isomorphism ♭ from the space of sections of the tangent bundle TM̅ to the sections of the cotangent bundle T^*M̅ defined by
♭:𝔛(M̅)⟶Λ^1(M̅), Y↦ι_Ydη+η(Y)η.
It is worth noting that the image of the Reeb vector field ℛ under the musical mapping is the contact one-form η. We shall denote the inverse of (<ref>) by ♯. Referring to this, we define a bivector field Λ on M̅ as
Λ(α,β)=dη(♯α, ♯β).
Referring to the bivector field Λ we introduce the following musical mapping
♯_Λ: Λ^1(M̅) ⟶𝔛(M̅), α↦Λ(∙,α)= ♯α - α(ℛ) ℛ.
The kernel of the mapping ♯_Λ is spanned by the contact one-form η so it fails to be an isomorphism. A contact manifold (M̅,η) admits a Jacobi manifold structure <cit.>, see <cit.> for a more recent exposition. This realization permits us to define a Jacobi (contact) bracket
{F̅,H̅}^(C) =Λ(dF̅,dH̅) +F̅ℛ(H̅) - H̅ℛ(F̅).
See that the bracket satisfies the Jacobi identity, but the Leibniz identity is violated due to the last two terms (the Reeb terms) on the right-hand side.
Contact Hamiltonian Motion.
For a Hamiltonian function H̅ on a contact manifold (M̅,η), there is a corresponding contact Hamiltonian vector field ξ_H̅ given by
ι_ξ_H̅η =-H̅, ι_ξ_H̅dη =dH̅-ℛ(H̅) η,
where ℛ is the Reeb vector field, and H̅ is called the contact Hamiltonian function <cit.>. We also have
♭(ξ_H̅)=dH̅-(ℛ(H̅)+H̅)η.
Dissipation.
Let us denote a contact Hamiltonian system as a three-tuple (M̅,η,H̅), where (M̅,η) is a contact manifold and H̅ is a smooth real function on M̅. A direct computation determines a conformal factor for a given contact vector field via
ℒ_ξ_H̅η =
dι_ξ_H̅η+ι_ξ_H̅dη= -ℛ(H̅)η.
According to (<ref>), the flow of a contact Hamiltonian system preserves the contact structure, but it preserves neither the contact one-form nor the Hamiltonian function. Instead, we obtain
ℒ_ξ_H̅ H̅ = - ℛ(H̅) H̅.
Being a non-vanishing top-form we may consider d μ∧η as a volume form on M̅, where d μ is the symplectic volume in (<ref>).
The Hamiltonian motion does not preserve the volume form since
ℒ_ξ_H̅ (dη^n
∧η) = - (n+1) ℛ(H̅) dη^n
∧η.
Assuming the dimension of M̅ to be 2n+1, we compute the divergence of a contact vector field as
div(ξ_H̅)= - (n+1) ℛ(H̅).
However, it is immediate to see that, for a nowhere vanishing Hamiltonian function H̅, the quantity H̅^-(n+1) (dη)^n ∧η
is preserved along the motion (see <cit.>).
A direct computation proves that the contact bracket and the contact vector field are related as
{F̅,H̅}^(C) =- ι_[ξ_H̅,ξ_F̅]η =-ℒ_ξ_H̅ι_ξ_F̅η+
ι_ξ_F̅ℒ_ξ_H̅η
=-ℒ_ξ_H̅(-F̅) + ι_ξ_F̅(-ℛ(H̅)η)
= ξ_H̅(F̅)+F̅ℛ(H̅).
This observation is important. We remark that the flow generated by the contact vector field and the flow generated by the contact bracket are not the same.
Darboux Coordinates and Contactization of a Symplectic Manifold. We start with an exact symplectic manifold (M,Ω=-dΘ) and consider the principal circle bundle
S^1⇝ (M̅,η )pr⟶
(M,Ω=-dΘ),
called the quantization bundle. For a local coordinate system z on the circle (which we shall consider to be ℝ), and the Darboux coordinates (q^i,p_i) on the symplectic manifold, the contact manifold admits the Darboux coordinates (q^i,p_i,z) on M̅. In this realization, the contact one-form and the associated Reeb vector field are
η=dz-Θ̅=dz-p_idq^i, ℛ=∂/∂ z,
where Θ̅ is the pullback of the potential one-form on M. This suggests the coordinates (q^i,p_i,z) on the contact manifold M̅. In this case, the volume form on the contact manifold is computed to be
dz∧ d μ,
where d μ is the symplectic volume in (<ref>); with a slight abuse of notation, we keep writing d μ for this contact volume form in what follows.
Accordingly, a generic example of a contact manifold is established by the so-called contactization of the canonical symplectic manifold. The bivector Λ in (<ref>) is computed to be
Λ= ∂/∂ q^i∧∂/∂ p_i + p_i ∂/∂ z∧∂/∂ p_i.
In terms of the Darboux coordinates, we compute the musical mapping ♯_Λ
in (<ref>) as
♯_Λ:α_i dq^i + α^i dp^i +
u dz↦α^i ∂/∂ q^i-(α_i + p_i u)∂/∂ p_i + α^ip_i ∂/∂ z.
Then, in this local picture, the contact bracket (<ref>) is
{F̅,H̅}^(C) = ∂F̅/∂ q^i∂H̅/∂ p_i -
∂F̅/∂ p_i∂H̅/∂ q^i + (F̅ - p_i∂F̅/∂ p_i)∂H̅/∂ z -
(H̅ - p_i∂H̅/∂ p_i)∂F̅/∂ z.
Since a symplectic manifold locally looks like a cotangent bundle, one may, without loss of generality, substitute the symplectic manifold M with the cotangent bundle T^*Q. In this case, the contact manifold M̅ locally turns out to be the extended cotangent bundle T^*Q×ℝ.
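As an illustration, the following symbolic sketch (Python with SymPy, one degree of freedom, n=1; F and H are generic placeholder functions of (q,p,z)) checks that the Darboux-coordinate formula for the contact bracket above agrees with the intrinsic definition Λ(dF̅,dH̅)+F̅ℛ(H̅)-H̅ℛ(F̅) built from the bivector Λ and the Reeb field ℛ=∂/∂ z given above.

import sympy as sp

q, p, z = sp.symbols('q p z')
F = sp.Function('F')(q, p, z)
H = sp.Function('H')(q, p, z)

# Darboux-coordinate formula for the contact bracket {F, H}^(C)
bracket = (sp.diff(F, q) * sp.diff(H, p) - sp.diff(F, p) * sp.diff(H, q)
           + (F - p * sp.diff(F, p)) * sp.diff(H, z)
           - (H - p * sp.diff(H, p)) * sp.diff(F, z))

# intrinsic definition: Lambda(dF, dH) + F R(H) - H R(F),
# with Lambda = d/dq ^ d/dp + p d/dz ^ d/dp and R = d/dz
lam = (sp.diff(F, q) * sp.diff(H, p) - sp.diff(F, p) * sp.diff(H, q)
       + p * (sp.diff(F, z) * sp.diff(H, p) - sp.diff(F, p) * sp.diff(H, z)))
intrinsic = lam + F * sp.diff(H, z) - H * sp.diff(F, z)

print(sp.simplify(bracket - intrinsic))  # expect 0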
§.§ Dynamics on Contact Manifolds
This subsection introduces two different dynamical vector fields that can be determined on a contact manifold (M̅,η). To have these realizations, for a given Hamiltonian function H̅, we first recall the
contact Hamiltonian vector field definition in (<ref>) and write it as
ξ_H̅=♯(dH̅)-ℛ(H̅)ℛ - H̅ℛ.
As we depict in the sequel, the space of such vector fields determines a Lie algebra as a manifestation of the Jacobi manifold structure of the contact manifold. By using only the first and third terms on the right-hand side we define a strict contact Hamiltonian vector field as
Y_H̅=♯(dH̅)- H̅ℛ.
Let us now depict all the algebraic properties of these dynamics in detail.
Contact Diffeomorphisms and Contact Hamiltonian Vector Fields.
For a contact manifold (M̅,η), a contact diffeomorphism (contactomorphism) is the one that preserves the contact structure. We denote the
group of contact diffeomorphisms by <cit.>
Diff_con ( M̅ ) ={φ∈ Diff ( M̅ ) :φ ^∗η =γη, γ∈ℱ
( M̅ ) } .
Here, Diff ( M̅ ) is standing for the group of all diffeomorphism on M̅. Notice that the existence of γ in the definition manifests the conformal definition of the contact structure.
A vector field on the contact
manifold ( M̅,η) is a contact vector field (called also infinitesimal conformal contact diffeomorphism) if
it generates a one-parameter group of contact diffeomorphisms. Accordingly, the space of contact vector fields is given by
𝔛_con ( M̅ ) ={ X∈𝔛 (
M̅ ) :ℒ_Xη =-λη , λ∈ℱ ( M̅ ) } .
Sometimes a contact vector field is denoted by a two-tuple (X,λ) to exhibit the conformal factor λ.
It follows directly from (<ref>) that a contact Hamiltonian vector field ξ_H̅ is a contact vector field with conformal parameter λ=ℛ( H̅), so it belongs to 𝔛_con ( M̅ ); for more details, see <cit.>. In this work, our interest is the space of contact Hamiltonian vector fields
𝔛_con-ham ( M̅ )={ξ_H̅∈𝔛(M̅ ): ι_ξ_H̅η =-H̅, ι_ξ_H̅dη =dH̅-ℛ(H̅) η}.
This space is a Lie subalgebra of space of all vector fields as a manifestation of the identity
[ξ_F̅,ξ_H̅]=-ξ_{F̅,H̅}^(C).
So, one may establish the following
isomorphism from the space of real smooth functions on M̅ to the space of contact Hamiltonian vector fields
Ψ: ( ℱ( M̅
) ,{∙,∙} ^(C))⟶( 𝔛_con-ham ( M̅ ) ,-[∙ ,∙] ) , H̅↦ξ_H̅.
Referring to the Darboux's coordinates (q^i,p_i,z), for a function H̅=H̅(q^i,p_i,z), the contact Hamiltonian vector field determined in (<ref>) becomes
ξ_H̅=∂H̅/∂ p_i∂/∂ q^i - (∂H̅/∂ q^i + ∂H̅/∂ z p_i)
∂/∂ p_i + (p_i∂H̅/∂ p_i - H̅)∂/∂ z.
Thus, we obtain the contact Hamilton's equations for H̅ as
dq^i/dt = ∂H̅/∂ p_i,
dp_i/dt = -∂H̅/∂ q^i-
p_i∂H̅/∂ z,
dz/dt = p_i∂H̅/∂ p_i - H̅.
In particular, the Reeb vector field becomes ℛ=∂/∂ z. The divergence of a contact Hamiltonian vector field (<ref>) is then
div(ξ_H̅)= - (n+1) ℛ(H̅) = - (n+1) ∂H̅/∂ z.
Contact dynamics finds many applications in various fields of physics especially in thermodynamics see, for example, <cit.>.
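To make the dissipative character concrete, the following numerical sketch (Python/NumPy, one degree of freedom) integrates the contact Hamilton's equations above for the illustrative choice H̅(q,p,z)=p^2/2+q^2/2-cz, of the same form H̅=H-cz used in the concluding section; the value of c and the initial condition are arbitrary. It checks that H̅ evolves as dH̅/dt=-ℛ(H̅)H̅=cH̅ along the flow, in line with (the identity for ℒ_ξ_H̅H̅ above).

import numpy as np

c = 0.3  # illustrative constant; here R(Hbar) = dHbar/dz = -c

def Hbar(q, p, z):
    return 0.5 * p**2 + 0.5 * q**2 - c * z

def rhs(s):
    q, p, z = s
    dq = p                       # dq/dt =  dHbar/dp
    dp = -q + c * p              # dp/dt = -dHbar/dq - p dHbar/dz
    dz = p * p - Hbar(q, p, z)   # dz/dt =  p dHbar/dp - Hbar
    return np.array([dq, dp, dz])

def rk4(s, dt):
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s, dt, steps = np.array([1.0, 0.0, 0.0]), 1e-3, 5000
H0 = Hbar(*s)
for _ in range(steps):
    s = rk4(s, dt)

# along the flow, d(Hbar)/dt = -R(Hbar) Hbar = c Hbar, so Hbar(t) = Hbar(0) exp(c t)
print(Hbar(*s), H0 * np.exp(c * steps * dt))  # the two values should agree closely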
Quantomorphisms and Strict Contact Hamiltonian Vector Fields. Let us consider a contact manifold (M̅,η). By asking the conformal factor γ to be unity for a contact diffeomorphism in (<ref>), one arrives at the conservation of the contact form φ^*η = η. We call such a mapping a strict contact diffeomorphism (or a quantomorphism). For a contact manifold (M̅,η),
we denote the group of all strict contact diffeomorphisms as
Diff_st-con ( M̅)
={φ∈ Diff ( M̅) :φ ^∗η = η}⊂ Diff_con ( M̅)
.
The Lie algebra of this group consists of so-called strict contact vector fields (or, infinitesimal quantomorphisms, or infinitesimal strict contact diffeomorphism)
𝔛_st-con ( M̅) ={ X ∈𝔛 ( M̅ ) :ℒ_Xη =0}⊂𝔛_con( M̅ ).
Notice that for a given Hamiltonian function H̅, the contact Hamiltonian vector field ξ_H̅ defined in (<ref>) is a strict contact vector field if and only if dH̅(ℛ)=0.
This leads to the following space of strict contact Hamiltonian vector fields
𝔛_st-con-ham ( M̅ )={Y_H̅∈𝔛(M̅ ): ι_Y_H̅η =-H̅, ι_Y_H̅dη =dH̅}⊂𝔛_con-ham ( M̅ ).
Referring to the local realization in (<ref>) given in terms of the Darboux coordinates (q^i,p_i,z), it is possible to see that to generate a strict contact Hamiltonian vector field, a function H̅ must be independent of the fiber variable z.
For two functions that do not depend on the fiber variable z, the contact bracket {∙,∙}^(C) in (<ref>) locally turns out to be equal to the canonical Poisson bracket on the symplectic manifold M. Accordingly, a direct calculation shows that
[ Y_H̅,Y_F̅]=-Y_{H̅,F̅} ^(C).
Note that, one has the following identities in terms of the musical mapping ♭ in (<ref>) and its inverse ♯ as
♭(Y_H̅)= dH̅ - H̅η, Y_H̅=♯(d H̅) -H̅ℛ.
Referring to the Darboux's coordinates (q^i,p_i,z), for a Hamiltonian function H̅=H̅(q^i,p_i) independent of the fiber variable z, the strict contact Hamiltonian vector field is
Y_H̅=∂H̅/∂ p_i∂/∂ q^i - ∂H̅/∂ q^i∂/∂ p_i + (p_i∂H̅/∂ p_i - H̅)∂/∂ z.
Thus, we obtain strict contact Hamilton's equations as
dq^i/dt = ∂H̅/∂ p_i,
dp_i/dt = -∂H̅/∂ q^i,
dz/dt = p_i∂H̅/∂ p_i - H̅.
See that this flow is divergence-free.
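The divergence-free claim, together with the conservation of H̅ itself, can be confirmed symbolically. The sketch below (Python/SymPy, one degree of freedom, with a generic z-independent placeholder Hamiltonian H) computes the coordinate divergence of the strict contact vector field above, taken with respect to dq∧dp∧dz, which for n=1 coincides with the contact volume, and its action on H̅.

import sympy as sp

q, p, z = sp.symbols('q p z')
H = sp.Function('H')(q, p)   # independent of the fibre variable z

# components of Y_H in Darboux coordinates
Yq = sp.diff(H, p)
Yp = -sp.diff(H, q)
Yz = p * sp.diff(H, p) - H

divergence = sp.diff(Yq, q) + sp.diff(Yp, p) + sp.diff(Yz, z)
dH_along_flow = Yq * sp.diff(H, q) + Yp * sp.diff(H, p)  # H has no z-dependence

print(sp.simplify(divergence), sp.simplify(dH_along_flow))  # expect 0 0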
§.§ Kinetic Dynamics in Terms of Momenta
Let M̅ be the extended cotangent bundle with the contact one-form η. We shall now determine the kinetic motion of contact particles. To this end, we shall lift the particle motion.
The Dual Space of Contact Hamiltonian Vector Fields.
Let us now determine the dual space 𝔛_con-ham^* (M̅) of the space of contact Hamiltonian vector fields 𝔛_con-ham(M̅) given in (<ref>). We first note that 𝔛_con-ham^*(M̅) is a subspace of the space Λ^1(M̅)⊗Den (M̅) of one-form densities. To be more precise, we compute the L_2-pairing (simply multiply-and-integrate) of an arbitrary contact vector field ξ_H̅ with a one-form density Π̅⊗d μ. Making use of the identities of the Cartan calculus, we obtain
⟨Π̅⊗d μ, ξ_H̅⟩ _L_2 = ∫⟨Π̅, ξ_H̅⟩d μ =∫⟨Π̅, ♯(dH̅) - (ℛ(H̅)+H̅ )ℛ⟩d μ
= - ∫⟨♯Π̅, dH̅⟩d μ + ∫ (ℛ(H̅)+H̅ ) ⟨♯Π̅,η⟩d μ
= - ∫(ι_♯Π̅ dH̅) d μ + ∫ℛ(H̅) ⟨♯Π̅,η⟩d μ + ∫H̅⟨♯Π̅,η⟩d μ
= -
∫ dH̅ι_♯Π̅d μ +
∫ι_ℛdH̅⟨♯Π̅,η⟩d μ + ∫H̅⟨♯Π̅,η⟩d μ
= ∫H̅ d ι_♯Π̅d μ
+ ∫⟨♯Π̅,η⟩ dH̅∧ι_ℛd μ + ∫H̅⟨♯Π̅,η⟩d μ
= ∫H̅ d ι_♯Π̅d μ
- ∫H̅ d⟨♯Π̅,η⟩∧ι_ℛd μ
-∫H̅⟨♯Π̅,η⟩ dι_ℛd μ
+ ∫H̅⟨♯Π̅,η⟩d μ
=
∫H̅( div(♯Π̅) d μ
- ι_ℛd⟨♯Π̅,η⟩ - ⟨♯Π̅,η⟩div(ℛ) + ⟨♯Π̅,η⟩) d μ
=
∫H̅( div(♯Π̅)
- ℒ_ℛ⟨♯Π̅,η⟩ + ⟨♯Π̅,η⟩) d μ,
where div stands for the divergence with respect to the contact volume d μ in (<ref>). Accordingly, once the volume form is fixed, the non-degeneracy of the pairing motivates us to define the dual space as
𝔛_con-ham^* (M̅) = {Π̅∈Λ^1 (M̅) : div(♯Π̅)
- ℒ_ℛ⟨♯Π̅,η⟩
+ ⟨♯Π̅,η⟩≠ 0 }∪{0}.
We next recall the Lie algebra isomorphism H̅↦ξ_H̅ of (<ref>). Identifying the dual space ℱ^*(M̅) with the space of densities on the contact manifold, and fixing the contact volume form, (<ref>) determines the dual of (<ref>) as
Ψ^*: 𝔛_con-ham^*(M̅) ⟶ℱ^*(M̅), Π̅↦f̅=div(♯Π̅)
- ℒ_ℛ⟨♯Π̅,η⟩
+ ⟨♯Π̅,η⟩.
In terms of the Darboux coordinates (q^i,p_i,z), we can compute the density function f̅ from (<ref>) as follows. Consider a one-form section
Π̅ =Π̅_i dq^i + Π̅^i dp_i + Π̅_z dz.
Let us present the three terms on the right-hand side of (<ref>) one by one.
A direct calculation reads the contact divergence of Π̅ as
div(♯Π̅) =∂Π̅^i/∂ q^i - ∂Π̅_i/∂ p_i -nΠ̅_z -p_i∂Π̅_z/∂ p_i + ∂Π̅_z/∂ z +p_i∂Π̅^i/∂ z.
Then the second Lie derivative term is computed to be
ℒ_ℛ⟨♯Π̅,η⟩ = ℒ_ℛΠ̅_z =ι_RdΠ̅_z=∂Π̅_z/∂ z.
The third term is simply ⟨♯Π̅,η⟩ = Π̅_z. Adding all of these terms we arrive at the definition of the density function
f̅= ∂Π̅^i/∂ q^i - ∂Π̅_i/∂ p_i -p_i(∂Π̅_z/∂ p_i-∂Π̅^i/∂ z) -(n-1)Π̅_z.
Coadjoint Flow on 𝔛_con-ham^*(M̅). Let, as above, 𝔛_con-ham (M̅) be the Lie algebra of contact vector fields with the opposite Jacobi-Lie bracket. That is,
ad_ξ_H̅ξ_F̅= - [ξ_H̅,ξ_F̅],
which we consider to be the left adjoint action of 𝔛_con-ham (M̅) on itself. Now dualizing the adjoint action, we arrive at the coadjoint action of 𝔛_con-ham(M̅) on its dual 𝔛_con-ham^*(M̅) as
ad^*:𝔛_con-ham(M̅)×𝔛_con-ham^* (M̅)↦𝔛_con-ham^*(M̅) , ⟨ ad^∗_ξ_H̅Π̅, ξ_F̅⟩ =
⟨Π̅ , ad_ξ_H̅ξ_F̅⟩.
More explicitly, given an arbitrary field ξ_F̅ we have
⟨ ad_ξ_H̅^∗Π̅, ξ_F̅⟩ = ⟨Π̅ , ad_ξ_H̅ξ_F̅⟩ =
-∫⟨Π̅,[ξ_H̅,ξ_F̅] ⟩ d μ
=- ∫⟨Π̅,ℒ_ξ_H̅ξ_F̅⟩ d μ
= ∫⟨ℒ_ξ_H̅Π̅ + div(ξ_H̅)Π̅, ξ_F̅⟩ d μ
= ∫⟨ℒ_ξ_H̅Π̅ - (n+1) ℛ(H̅)Π̅, ξ_F̅⟩ d μ,
where we used (<ref>) for the divergence of the contact vector field ξ_H̅. As a result, the coadjoint action may be presented as
ad_ξ_H̅^∗Π̅= ℒ_ξ_H̅Π̅ - (n+1) ℛ(H̅)Π̅.
Being the dual space of a Lie algebra, 𝔛_con-ham^*(M̅) admits a Poisson bracket called the Lie-Poisson bracket <cit.>. More precisely, given two functionals A and B on 𝔛_con-ham^*(M̅) the Lie-Poisson bracket on 𝔛_con-ham^*(M̅) is defined to be
{ A,B} ^𝔛_con-ham^* (Π̅
) = ∫⟨Π̅ , ad_δ A / δΠ̅δ B/δΠ̅⟩d μ = - ∫⟨Π̅ , [δ A/δΠ̅ ,
δ B/δΠ̅ ] ⟩d μ
where δ A/δΠ̅ stands for the Fréchet derivative of the functional A. Given a Hamiltonian functional ℋ, the Lie-Poisson dynamics is governed by the Lie-Poisson equations computed in terms of the coadjoint action, that is,
Π̇̅̇= {Π̅, ℋ}^𝔛_con-ham^*=- ad_δℋ / δΠ̅^∗Π̅.
In particular, for the Hamiltonian functional defined by means of the contact vector field ξ_H̅ as
ℋ(Π̅)=∫⟨Π̅,ξ_H̅⟩d μ,
the Fréchet derivative δℋ/ δΠ̅ of ℋ with respect to the momenta becomes the vector field ξ_H̅. In this case, the Lie-Poisson equation (<ref>) takes the form of
Π̇̅̇ = - ℒ_ξ_H̅Π̅ + (n+1) ℛ(H̅)Π̅.
The Dual Space of Strict Contact Hamiltonian Vector Fields. Now we consider the algebra (<ref>) of strict contact Hamiltonian vector fields
𝔛_st-con-ham ( M̅ ). Similar to the calculation (<ref>) done above, we compute the precise dual of this vector space by means of L_2-pairing. Accordingly, we have
⟨Σ̅⊗d μ, Y_H̅⟩ _L_2 = ∫⟨Σ̅, Y_H̅⟩d μ =
∫H̅( div(♯Σ̅) + ⟨♯Σ̅,η⟩) d μ.
Once again, we fix the volume form. Then the non-degeneracy of the pairing (<ref>) leads us to define the dual space as
𝔛_st-con-ham^* (M̅) = {Σ̅∈Λ^1 (M̅) : div(♯Σ̅)
+ ⟨♯Σ̅,η⟩≠ 0 }∪{0}.
For the contact manifold M̅, consider the contactization bundle τ:M̅→ M over the symplectic base manifold M. A real-valued function H on the base manifold can be pulled back to the contact manifold by means of the projection τ. This gives a real-valued function τ^*H which satisfies ℛ(τ^*H)=0,
so that τ^*H generates a strict contact Hamiltonian vector field Y_τ^*H. As a matter of fact, this picture is generic for all functions on the contact manifold that do not depend on the fiber variable. So we arrive at the following isomorphism
Γ: ( ℱ( M̅ ) ,{∙,∙} )⟶( 𝔛_st-con-ham ( M̅ ) ,-[∙ ,∙] ) , H ↦ Y_τ^*H.
The computation (<ref>) provides the dual of this as
Γ^*:𝔛^*_st-con-ham ( M̅ ) ⟶Den(M), Σ̅↦∫ _S^1 (div(♯Σ̅)
+ ⟨♯Σ̅,η⟩ ) dz ⊗ d μ,
where Den(M) is the space of densities on the symplectic manifold (M,Ω). Accordingly, the density function
f(q,p)= ∫ _S^1 (div(♯Σ̅)
+ ⟨♯Σ̅,η⟩ ) dz
is defined on the base manifold (that is, the symplectic manifold) M.
Referring to the Darboux coordinates (q^i,p_i) on M, and the induced Darboux coordinates (q^i,p_i,z) on M̅ we compute the density function as
f(q,p)= ∫ _S^1( ∂Σ̅^i/∂ q^i - ∂Σ̅_i/∂ p_i -p_i(∂Σ̅_z/∂ p_i-∂Σ̅^i/∂ z) + ∂Σ̅_z/∂ z-(n-1)Σ̅_z ) dz
Note that this distribution function is not in the form of a divergence of a vector field, and thus is not normalized to zero.
Coadjoint Flow on 𝔛_st-con-ham^*(M̅). Let, as above, 𝔛_st-con-ham (M̅) be the Lie algebra of contact vector fields with the opposite Jacobi-Lie bracket. That is,
ad_Y_H̅ Y_F̅= - [Y_H̅,Y_F̅],
which we consider to be the left adjoint action of 𝔛_st-con-ham (M̅) on itself. Now dualizing the adjoint action, we arrive at the coadjoint action of 𝔛_st-con-ham(M̅) on its dual 𝔛_st-con-ham^*(M̅) as
ad^*:𝔛_st-con-ham(M̅)×𝔛_st-con-ham^* (M̅)⟶𝔛_st-con-ham^*(M̅) , ⟨ ad^∗_Y_H̅Σ̅, Y_F̅⟩ =
⟨Σ̅ , ad_Y_H̅Y_F̅⟩.
More explicitly, given an arbitrary field Y_F̅ we have
⟨ ad_Y_H̅^∗Σ̅, Y_F̅⟩ = ⟨Σ̅ , ad_Y_H̅Y_F̅⟩ =
-∫⟨Σ̅,[Y_H̅,Y_F̅] ⟩d μ = ∫⟨ℒ_Y_H̅Σ̅, Y_F̅⟩d μ .
So we arrive at the coadjoint action as
ad_Y_H̅^∗Σ̅= ℒ_Y_H̅Σ̅.
The dual space 𝔛_st-con-ham^*(M̅) has the Lie-Poisson bracket
{ A,B} ^𝔛_st-con-ham^* (Σ̅
) = - ∫⟨Σ̅ , [δ A/δΣ̅ ,
δ B/δΣ̅ ] ⟩d μ.
and for a given Hamiltonian functional ℋ, the Lie-Poisson dynamics is
Σ̇̅̇= {Σ̅, ℋ}^𝔛_st-con-ham^*=- ad_δℋ / δΣ̅^∗Σ̅=-ℒ_Y_H̅Σ̅,
where we chose ℋ(Σ̅)=∫⟨Σ̅,Y_H̅⟩d μ.
§.§ Kinetic Dynamics in Terms of Density Function
Given a Hamiltonian function H̅ on the contact manifold M̅, one may define two particle motions on the manifold. One is due to the contact bracket given by ȧ={a,H̅}, and the other is due to the contact vector field ξ_H̅ given by ȧ=ξ_H̅(a). In the symplectic framework, these two definitions coincide but not for the contact geometry. So we treat these two situations one by one. Let us start with the kinetic lift of the contact bracket motion.
Kinetic Lift of Contact Bracket Dynamics. In view of the contact bracket (<ref>) of smooth functions, let now
ad_H̅K̅ ={H̅,K̅}^(C)
be the adjoint action of ℱ(M̅) on itself. As was noted above, we shall make use of the identification ℱ^*(M̅)≃ℱ(M̅) with the dual space. This way, the coadjoint action ℱ(M̅) on ℱ^*(M̅) is computed from
∫{F̅,H̅}^(C)K̅d μ = ∫( ξ_H̅(F̅)+F̅ℛ(H̅) ) K̅d μ
=∫K̅(ι_ξ_H̅ dF̅) d μ + ∫K̅F̅ℛ(H̅) d μ
=∫K̅ dF̅∧ι_ξ_H̅d μ + ∫K̅F̅ℛ(H̅) d μ
= - ∫F̅dK̅∧ι_ξ_H̅d μ
- ∫F̅K̅ dι_ξ_H̅d μ
+ ∫K̅F̅ℛ(H̅) d μ
=
- ∫F̅(ι_ξ_H̅ dK̅ ) d μ
- ∫F̅K̅div(ξ_H̅) d μ
+ ∫K̅F̅ℛ(H̅) d μ
=
- ∫F̅ξ_H̅(K̅) d μ
+ ∫F̅K̅ (n+1) ℛ(H̅) d μ
+ ∫K̅F̅ℛ(H̅) d μ
= - ∫F̅({K̅,H̅}^(C)-K̅ℛ(H̅)) d μ + (n+2) ∫F̅K̅ℛ(H̅) d μ
= ∫F̅{H̅,K̅}^(C)d μ + (n+3) ∫F̅K̅ℛ(H̅) d μ,
that is,
∫{F̅,H̅}^(C)K̅d μ = ∫F̅{H̅,K̅}^(C)d μ + (n+3) ∫F̅K̅ℛ(H̅) d μ
for all smooth functions F̅, H̅, and K̅ defined on the contact manifold M̅. Accordingly, the coadjoint action appears as
ad^*_H̅f̅= {H̅,f̅}^(C) - (n+3) f̅ℛ(H̅).
As discussed in the previous subsection, the dynamics on the density level is determined through the coadjoint action. In particular, for the Hamiltonian functional
ℋ(f̅)=∫H̅f̅ d μ
on ℱ^*≃ℱ, where H̅ is the Hamiltonian function defined on the extended cotangent bundle, the Fréchet derivative δℋ / δf̅ becomes H̅. In this case, the coadjoint flow may be computed to be
ḟ̅̇ = - ad^*_δℋ / δf̅f̅ =- ad^*_H̅ f.
Substituting the action in (<ref>) into the coadjoint dynamics, we compute the kinetic equation of contact particles as
ḟ̅̇+{H̅,f̅}^(C) =(n+3) f̅ℛ(H̅).
Keeping in mind that the Lie-Poisson dynamics (<ref>) in momentum variables and the Lie-Poisson dynamics (<ref>) are related with the Poisson mapping Π̅↦f̅ given in (<ref>), the kinetic equation is computed in Darboux coordinates as
∂f̅/∂ t = -∂H̅/∂ p_i∂f̅/∂ q^i + ∂H̅/∂ q^i∂f̅/∂ p_i
+p_i (∂f̅/∂ p_i∂H̅/∂ z- ∂H̅/∂ p_i∂f̅/∂ z)
+(n+2)f̅∂H̅/∂ z + ∂f̅/∂ zH̅.
Kinetic Lift of Contact V-Field Dynamics.
Given a contact vector field ξ_H̅, let us consider the linear mapping
Ψ_ξ_H̅:ℱ(M̅) ⟶ℱ(M̅), K̅↦ξ_H̅(K̅)
that takes a function to its directional derivative along ξ_H̅. A similar calculation to the one presented in (<ref>) hence yields
∫ξ_H̅(F̅)K̅d μ = ∫F̅{H̅,K̅}^(C)d μ + (n+2) ∫F̅K̅ℛ(H̅) d μ .
Accordingly, the dual of (<ref>) is given by
Ψ^*_ξ_H̅: ℱ^*(M̅) ⟶ℱ^*(M̅), f̅↦Ψ^*_ξ_H̅(f̅) = {H̅,f̅}^(C) - (n+2) f̅ℛ(H̅).
We then define the dynamics generated by the dual action as
ḟ̅̇ = - Ψ^*_ξ_H̅(f̅) = - {H̅,f̅}^(C) + (n+2) f̅ℛ(H̅).
In terms of the Darboux coordinates (q^i,p_i,z), the kinetic dynamics turns out to be
∂f̅/∂ t = -∂H̅/∂ p_i∂f̅/∂ q^i + ∂H̅/∂ q^i∂f̅/∂ p_i
+p_i (∂f̅/∂ p_i∂H̅/∂ z- ∂H̅/∂ p_i∂f̅/∂ z)
+(n+1)f̅∂H̅/∂ z + ∂f̅/∂ zH̅.
Note that the normalization of the distribution function is preserved by this dynamics.
A Direct Calculation to Kinetic Dynamics.
Instead of geometric constructions, we may use a simplified method of derivation. Evolution of an observable function a=a(q^i,p_i,z) along the vector field reads
da/dt = ξ_H̅(a)
=
∂H̅/∂ p_i∂ a/∂ q^i
-(∂H̅/∂ q^i + p_i ∂H̅/∂ z)∂ a/∂ p_i
+(-H̅ + p_i ∂H̅/∂ p_i)∂ a/∂ z.
In order to construct the kinetic theory, we need to introduce the distribution function f̅=f̅(q^i,p_i,z), which makes it possible to define the averaged functional
A̅(f̅) = ∫ a(q^i,p_i,z)f̅(q^i,p_i,z) d μ.
Evolution of this functional is on the one hand given by
dA̅/dt = ∫da/dtf̅(q^i,p_i,z) d μ,
while on the other hand, it can also be seen as an evolution of the distribution function itself,
dA̅/dt = ∫ a ∂f̅(q^i,p_i,z)/∂ td μ.
Rewriting the former expression in the form of the latter (integrating by parts while dropping the boundary terms), we obtain (<ref>).
Dynamics of Densities for Strict Contact Dynamics. Let us recall the Poisson mapping in (<ref>). This turns the contact kinetic dynamics in (<ref>) and in (<ref>) to the Vlasov equation
ḟ + {H,f}^(S) =0
where the bracket is the canonical bracket. The dynamics on the Lie algebra dual of quantomorphisms can be thus seen as the standard dynamics of the distribution function on the phase space of particles.
§ CONCLUSION: A HIERARCHY FROM CONTACT TO CONFORMAL DYNAMICS
We have so far provided the generalizations of the Vlasov dynamics for conformal
and contact settings in a purely geometric framework. To sum up and relate the dynamical equations we obtained, we present in this section the hierarchy of the relevant Lie algebras (by means of Lie algebra homomorphisms) of both the function spaces and the vector fields. We shall then dualize the Lie algebra homomorphisms to arrive at the momentum and Poisson mappings between different levels of descriptions, namely the reversible Hamiltonian dynamics, the conformal Hamiltonian dynamics, and the contact Hamiltonian dynamics. At the level of particle dynamics, the relationship between the conformal and the contact Hamiltonian dynamics is discussed in, for example, <cit.>.
Lie Algebra Hierarchy.
In order to begin with the contact geometry let us first consider the extended cotangent bundle T^*M×ℝ (as a contact manifold), along with a contact Hamiltonian function
H̅(q^i,p_i,z)=H(q^i,p_i)-cz
on it. Then, the contact Hamiltonian dynamics (<ref>) takes the particular form
dq^i/dt = ∂H/∂ p_i, dp_i/dt = -∂ H/∂ q^i+cp_i, dz/dt=p_i∂H/∂ p_i-H(q^i,p_i)+cz.
The first two equations of (<ref>) can be projected to the cotangent bundle T^*M, which gives a reduction to the conformal Hamiltonian dynamics (<ref>).
In order to conduct further analysis, we consider two functions
F̅(q^i,p_i,z)=F(q^i,p_i)-c_Fz, H̅(q^i,p_i,z)=H(q^i,p_i)-c_Hz,
and compute their contact bracket (<ref>) as
{F̅, H̅}^(C) = { F-c_Fz, H-c_Hz}^(C)
= { F, H}^C-c_H{ F, z}^(C)-c_F{ z, H}^(C)+c_Fc_H{ z, z}^(C)
={ F, H}^(S)-c_H{ F, z}^(C)-c_F{ z, H}^(C)
={ F, H}^(S)-c_H(F+Z(F))+c_F(H+Z(H)),
where the contact bracket reduces to (the pullback of) the canonical Poisson bracket in the second line. A direct comparison of (<ref>) with (<ref>) reveals that they are equal. Accordingly, the choice of the Hamiltonian function (<ref>) motivates us to determine the Lie algebra homomorphism (more precisely, an embedding)
Ξ: ℱ(T^*M) ×ℝ⟶ℱ(T^*M×ℝ), (H,c_H) ↦H̅(q^i,p_i,z)=H(q^i,p_i)-c_Hz,
endowing ℱ(T^*M) ×ℝ with the Lie algebra bracket in (<ref>), and ℱ(T^*M×ℝ) with the contact bracket in (<ref>).
It is possible to carry this Lie algebra homomorphism to the level of vector fields. To this end, we employ the isomorphisms (<ref>) and (<ref>) on the domain and the range of (<ref>) to arrive at the Lie algebra homomorphism
Υ: 𝔛_ham(T^*M)×ℝ⟶𝔛_con-ham(T^*M×ℝ), X_H^c↦ξ_H̅
where H̅ is the contact Hamiltonian function in (<ref>). Finally, in view of the canonical inclusions of the Lie algebras 𝔛_ham(T^*M) and ℱ(T^*M) into their extensions 𝔛_ham(T^*M)×ℝ and ℱ(T^*M)×ℝ, we present the following commutative diagram.
𝔛_ham(T^*M) [rr,hook] 𝔛_ham(T^*M)×ℝ[rr,hook,"Υ in (<ref>)"] 𝔛_con-ham(T^*M×ℝ)
ℱ(T^*M) [uu,"Φ in (<ref>)"][rr,hook] ℱ(T^*M) ×ℝ[rr,hook,swap,"Ξ in (<ref>)"] [uu,"Φ^c in (<ref>)"] ℱ(T^*M×ℝ) [uu,swap,"Ψ in (<ref>)"]
Poisson Maps Hierarchy.
Let us next dualize the Lie algebra homomorphisms that appear in the above diagram. To this end, we consider a density function f̅=f̅(q^i,p_i,z) in the dual space ℱ^*(T^*M×ℝ) and examine the mapping Ξ in (<ref>). We thus obtain the dual mapping
Ξ^* : ℱ^*(T^*M×ℝ) ⟶ℱ^*(T^*M) ×ℝ^* ,
f̅(q^i,p_i,z) ↦(∫_ℝf̅(q^i,p_i,z) dz, ∫_T^*M×ℝ z f̅(q^i,p_i,z) d μ) .
Let us remark that the first term on the range is indeed in ℱ^*(T^*M), while the second one is a real number in ℝ^*≃ℝ. More precisely,
f(q^i,p_i):=∫_ℝf̅(q^i,p_i,z) dz, c^*:= ∫_T^*M×ℝ z f̅(q^i,p_i,z) d μ.
Let us note also that (<ref>) being a dual of a Lie algebra homomorphism, the moments (<ref>) constitute a Poisson map. Therefore, we can argue that the moments in (<ref>) map the coadjoint flow (<ref>) on the contact level to the coadjoint flow (<ref>) on the conformal Hamiltonian geometry. In terms of one-forms, given Π̅=Π̅_i dq^i+Π̅^idp_i + Π̅_z dz we have the projection
Π _i ( q^i,p_i ) =∫_ℝΠ̅_i ( q^i,p_i,z )
dz, Π^i( q^i,p_i ) =∫ _ℝΠ̅^i( q^i,p_i,z ) dz, Π̅_z=0
These maps take the kinetic dynamics in (<ref>) to the kinetic dynamics in (<ref>). All these dynamics and projections may now be summarized through the following commutative diagram.
[ M-Vlasov; (<ref>) ][d]_Φ^* in (<ref>) [ Conformal; M-Vlasov; (<ref>) ][ll]^c=0[d]^(Φ^c)^* in (<ref>) [ Contact; M-Vlasov; (<ref>) ][d]^Ψ^* in (<ref>)[ll]
[ Vlasov; (<ref>) ] [ Conformal; Vlasov; (<ref>) ][ll]^c=0 [ Contact; Vlasov; (<ref>) ][ll]
In the future, we would like to apply conformal and kinetic theories in relativistic mechanics and to geometrize non-equilibrium statistical mechanics <cit.>.
§ ACKNOWLEDGMENTS
MP was supported by Czech Science Foundation, project 23-05736S.
§ APPENDIX
§.§ Lie-Poisson Dynamics and Coadjoint Flow
This Section contains the definition of Lie-Poisson dynamics both in the most abstract way and in the case of a diffeomorphism group.
Lie-Poisson Formulation.
Consider a Lie group G (a manifold admitting a group multiplication compatible with the manifold structure). We cite <cit.> for more on Lie group theory and its applications in physics. The tangent space at the identity element of a Lie group is called the Lie algebra 𝔤:=T_eG. Here, the algebra is determined by a skew-symmetric bilinear bracket [∙,∙]
satisfying the Jacobi identity. Referring to this Lie bracket, one can define the so-called adjoint action of the Lie algebra 𝔤 on itself
ad:𝔤×𝔤↦𝔤, ad_ξη:=[ξ,η].
See that ad is a left action as a manifestation of the Jacobi identity.
We denote the dual space by 𝔤^*. By dualizing the adjoint action, one arrives at the coadjoint action of 𝔤 on its dual 𝔤^* given as
ad^*:𝔤×𝔤^*⟶𝔤^*, ⟨ ad^∗_ξρ, η⟩ =
⟨ρ , ad_ξη⟩.
Notice that ad^* is a right action.
The dual space 𝔤^* admits a Poisson bracket, called the Lie-Poisson bracket, defined to be <cit.>
{ A,B} (
ρ ) = ⟨ρ ,[ δ A/
δρ,δ B/δρ] ⟩,
where ρ is an element in the dual space 𝔤^*, A and B are two functionals on 𝔤^*, and the pairing on the right-hand side is the natural pairing between 𝔤^* and 𝔤. Notice also that δ A/δρ stands for the Fréchet derivative of the functional A. Under the reflexivity condition, δ A/δρ belongs to 𝔤. This observation justifies the Lie bracket appearing on the right-hand side of (<ref>). See that one can multiply the right-hand side of (<ref>) by a minus sign and still have a Poisson algebra. We prefer the positive one and justify this choice in the following paragraph. For a Hamiltonian functional H, the dynamics is governed by the Lie-Poisson
equations computed in terms of the coadjoint action as
ρ̇= - ad_δ H / δρ^∗ρ.
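For a concrete finite-dimensional illustration of this equation, take 𝔤=ℝ^3 with the cross product as Lie bracket, so that ad_ξη=ξ×η and, with the dot-product pairing, ad^*_ξρ=ρ×ξ, together with a quadratic Hamiltonian; this is a rigid-body-type system. The sketch below (Python/NumPy; the inertia values and initial condition are arbitrary illustrative choices, and whether the resulting sign convention corresponds to the body or the spatial angular momentum is a matter of convention) integrates ρ̇=-ad^*_{δH/δρ}ρ and checks that both the Casimir |ρ|^2 and the energy are conserved.

import numpy as np

I = np.array([1.0, 2.0, 3.0])           # illustrative principal moments of inertia

def energy(rho):
    return 0.5 * np.dot(rho, rho / I)   # H(rho) = 1/2 rho . I^{-1} rho

def rhs(rho):
    omega = rho / I                     # delta H / delta rho
    return -np.cross(rho, omega)        # rho_dot = -ad*_{dH/drho} rho = -(rho x omega)

def rk4(rho, dt):
    k1 = rhs(rho); k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2); k4 = rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.array([1.0, 0.2, 0.5])
c0, h0 = np.dot(rho, rho), energy(rho)
for _ in range(20000):
    rho = rk4(rho, 1e-3)

print(np.dot(rho, rho) - c0, energy(rho) - h0)   # both drifts should be close to zero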
Let 𝔤 and 𝔥 be two Lie algebras and assume that there is a Lie algebra homomorphism ϕ:𝔤↦𝔥 that is
ϕ[ξ,η]=[ϕ(ξ),ϕ(η)],
where ξ and η are arbitrary elements in 𝔤.
The dual spaces 𝔤^* and 𝔥^* are Lie-Poisson spaces and the dual mapping ϕ^*: 𝔥^*↦𝔤^* is a momentum and Poisson mapping <cit.>. The relation between the coadjoint representations is computed to be
ϕ^*∘ ad^*_ϕ(ξ) =ad^*_ξ∘ϕ^*
for all ξ in 𝔤.
Lie-Poisson Dynamics for Diffeomorphism Group.
For many continua and kinetic theories including fluid flows and plasma theories, configuration spaces are diffeomorphism groups which are infinite-dimensional Lie groups <cit.>. To see this, we start with a bunch of particles resting in a (volume) manifold M. We denote the set of all diffeomorphisms on M by Diff(M) <cit.>. The motion of the particles is determined by the left action of Diff(M) on the particle space M. The right action commutes with the particle motion and constitutes
an infinite-dimensional symmetry group
called the particle relabelling symmetry. The Lie algebra of Diff(M) is the space of vector fields 𝔛(M). Here, the Lie algebra bracket is minus the Jacobi-Lie bracket of vector fields, that is
ad_X Y = [X,Y]_𝔛( M
)=-[X,Y]_JL=-ℒ_X Y,
where ℒ_X denotes the Lie derivative operator.
We define the dual space 𝔛^∗(M) of the Lie algebra as the space of one-form densities Λ ^1(M) ⊗Den(M) on M. Here, the pairing between a vector field X and a dual element Π⊗ d μ is defined to be the L_2 (multiply-and-integrate) pairing
⟨∙, ∙⟩_L_2: Λ^1(M)⊗Den(M)×𝔛(M)⟶ℝ, (Π⊗ d μ,X)↦∫_M⟨Π , X ⟩ d μ.
The pairing inside the integral is the one between the one-form Π and the vector field X, and d μ is a density (a volume form) on M.
To compute the coadjoint action of the Lie algebra onto the dual space, we perform the following calculation
⟨ ad_X^∗ (Π⊗ d μ), Y ⟩ = ⟨Π⊗ d μ, ad_X Y⟩ = - ∫_M ⟨Π,ℒ_XY ⟩ d μ
= ∫_M ⟨ℒ_X Π + div(X)Π, Y ⟩ d μ
where div(X) stands for the divergence of the vector field with respect to the volume form d μ. To write the second line of this calculation, we have integrated by parts. Hence,
ad_X^∗( Π⊗ d μ) =( ℒ_XΠ
+div(X) Π) ⊗ d μ,
where div(X)
denotes the divergence of the vector field X with respect to the volume
form d μ. At this point, without loss of generality, we fix the volume form d μ, so that we may consider a dual element simply as a one-form Π.
Now we consider that a particle moves according to dynamics generated by a vector field X defined on manifold M. This particle motion can be lifted to the evolution of distribution functions as follows. Consider a linear Hamiltonian functional on the space of one-form densities Λ^1(M)⊗Den(M) given by
H(Π⊗ d μ)=∫_M ⟨Π,X ⟩ d μ,
where d μ is a volume form.
The Fréchet derivative δ H/ δΠ of H with respect to the momenta is the vector field X. In this case, the Lie-Poisson equation turns out to be
Π̇⊗ d μ=-ad^*_δ H/δΠ (Π⊗ d μ) = -ad^*_X (Π⊗ d μ).
Now we fix the volume d μ and recall the coadjoint action given in (<ref>), which gives the Lie-Poisson equation
Π̇=-ℒ_XΠ -
div(X)
Π.
If the dynamics is generated by a divergence-free vector field (for example, the case of incompressible fluid flow, or Vlasov flow), then the second term on the right-hand side of
(<ref>) drops, and we obtain
Π̇=-ℒ_XΠ.
§.§ An Algebra Extension
This Section contains an extension of a Lie algebra 𝔤 obtained by merging it with another algebra 𝔥 (see also <cit.>). Algebraically, this gives the direct sum space 𝔤⊕𝔥, and the dual space is the product space 𝔤^*⊕𝔥^* as well. The next step is to merge the algebraic structures on the constitutive spaces to arrive at an algebra on 𝔤⊕𝔥. The naive way to do that is to add the brackets on 𝔤 and 𝔥. Instead, we let the Lie algebras act on one another, and the bracket then encodes these actions as well. Therefore, we assume the following actions
▹ : 𝔥⊗𝔤→𝔤, η⊗ξ↦η▹ξ, ◃ : 𝔥⊗𝔤→𝔥, η⊗ξ↦η◃ξ.
Then, we introduce the following Lie algebra bracket (called matched pair product <cit.>) on the direct sum
𝔤⊕𝔥 as
[ (ξ_1, η_1 ), (ξ_2,η_2) ] = ([ ξ_1,ξ_2]+η_1▹ξ_2-η_2▹ξ_1, [ η_1,η_2] + η_1◃ξ_2- η_2 ◃ξ_1 ).
To satisfy the Jacobi identity, one needs to employ compatibility conditions,
η▹[ξ _1,ξ _2]=[η▹ξ _1,ξ _2]+[ξ _1,η▹ξ _2]+(η◃ξ _1)▹ξ _2-(η◃ξ _2)▹ξ _1,
[η _1,η _2]◃ξ =[η _1,η _2◃ξ ]+[η _1◃ξ ,η _2]+η _1◃ (η _2▹ξ )-η _2◃ (η _1▹ξ ).
Physically this corresponds to the collective motion of two dynamical systems. The Lagrangian (Euler-Poincaré) dynamics on the matched-pair Lie algebra is examined in <cit.>, and the Hamiltonian dynamics on the dual space of matched Lie algebras is studied in <cit.>. For the discrete dynamics, we refer to <cit.>. In the present work, our focus is on a particular case of this algebraic structure and on the coadjoint flow on its dual space.
An Extension.
Let us choose the trivial right action in bracket (<ref>). That is, η◃ξ=0 for all ξ in 𝔤 and η in 𝔥. Further, let the algebra on 𝔥 be trivial, [ η_1,η_2]=0 for all η_1 and η_2 in 𝔥. This leads to the following bracket on the direct sum 𝔤⊕𝔥
[ (ξ_1, η_1 ), (ξ_2,η_2) ] = ([ ξ_1,ξ_2]+η_1▹ξ_2-η_2▹ξ_1, 0).
To arrive at the dual operation, we need to fix some notations first. We fix the algebra element ξ in the left action and then define the following linear operation
𝔟_ξ: 𝔥→𝔤, η↦η▹ξ.
Then, the dual of the mapping 𝔟_ξ is defined as
𝔟_ξ^*: 𝔤^* →𝔥^*, ⟨𝔟^*_ξd μ, η⟩ =⟨ d μ, 𝔟_ξη⟩.
Now, we can define the dual action of the left action ▹, which gives the right action of the Lie algebra 𝔥 on the dual space 𝔤^*,
∗◃ η: 𝔤^*→𝔤^*, d μ→ d μ∗◃η, ⟨ d μ∗◃η, ξ⟩=⟨ d μ, η▹ξ⟩
The linear algebraic dual of the adjoint action gives the coadjoint action. The dual of (<ref>) can be then obtained as
⟨ ad^*_(ξ_1,η_1)(d μ,ν), (ξ_2,η_2) ⟩ = ⟨ (d μ,ν), ad_(ξ_1,η_1)(ξ_2,η_2) ⟩
=⟨ (d μ,ν), ([ ξ_1,ξ_2]+η_1▹ξ_2-η_2▹ξ_1, 0) ⟩
=⟨ d μ, [ ξ_1,ξ_2]+η_1▹ξ_2-η_2▹ξ_1 ⟩
=⟨ ad_ξ_1^*d μ , ξ_2 ⟩ + ⟨ d μ∗◃η_1, ξ_2 ⟩
- ⟨𝔟^*_ξ_1d μ, η_2 ⟩ .
To sum up, for (ξ,η)∈𝔤⊕𝔥 and (d μ,ν)∈𝔤^*⊕𝔥^*, the coadjoint action is
ad^*_(ξ,η)(d μ,ν)= (ad^*_ξ d μ+d μ∗◃η, -𝔟^*_ξ d μ).
Finally, we consider a Hamiltonian functional H=H(d μ, ν ) on the direct sum 𝔤^*⊕𝔥^*. The Lie-Poisson equations (<ref>) then turn out to be
d/dt(d μ)=-ad^*_δ H/δ d μd μ-d μ∗◃δ H/δν,
ν̇=𝔟^*_δ H/δ d μd μ,
which is the abstract evolution equation for d μ and ν.
§.§ From Momentum to Density Formulations of Conformal Kinetic Theories
We link now the conformal kinetic equations (<ref>) in momentum formulation and the conformal kinetic equation (<ref>) in terms of the density. To this end, we start with the density function f given in (<ref>) and compute its time derivative in view of the conformal kinetic equations (<ref>) in momentum formulation. This reads
∂ f/∂ t =divΩ ^♯(Π̇)= - (divΩ ^♯(ℒ_X_H^cΠ + c_H n Π))
=- divΩ ^♯(ℒ_X_H^cΠ)-(c_Hn)divΩ ^♯
(Π),
where, keeping (<ref>) in mind, the second term is equal to -c_Hnf whereas the first one needs a more detailed observation. To write the first term as a function of f we pair it with an arbitrary function using the L_2-pairing and compute
∫_M divΩ ^♯(ℒ_X_H^cΠ) K d μ
=
∫_M ⟨ℒ_X_H^cΠ, X_K ⟩ d μ
=
∫_M ⟨ℒ_X_H-c_HZΠ, X_K ⟩ d μ
= ∫_M ⟨ℒ_X_HΠ, X_K ⟩ d μ - c_H ∫_M ⟨ℒ_ZΠ, X_K ⟩ d μ
= - ∫_M ⟨Π, ℒ_X_HX_K ⟩ d μ + c_H ∫_M ⟨Π, ℒ_ZX_K ⟩ d μ + c_H ∫_M ⟨Π, X_K ⟩div(Z)d μ
=
∫_M ⟨Π, X_{H,K }^(S)⟩ d μ + c_H ∫_M ⟨Π, X_Z(K)+K⟩ d μ + c_H ∫_M ⟨Π, X_K ⟩div(Z)d μ
=
∫_M divΩ ^♯(Π){H,K } ^(S)d μ + c_H ∫_M ⟨Π, X_Z(K)+K⟩ d μ - c_H n∫_M ⟨Π, X_K ⟩ d μ
= ∫_M {divΩ ^♯(Π),H } ^(S)K d μ + c_H ∫_M ⟨Π, X_Z(K)⟩ d μ - c_H (n-1) ∫_M ⟨Π, X_K ⟩ d μ
= ∫_M {divΩ ^♯(Π),H }^(S) K d μ + c_H ∫_M divΩ ^♯(Π) Z(K) d μ - c_H (n-1) ∫_M divΩ ^♯(Π) K d μ
=∫_M {divΩ ^♯(Π),H }^(S) K d μ - c_H ∫_M Z(divΩ ^♯(Π)) K d μ - c_H ∫_M (divΩ ^♯(Π)) Kdiv(Z) d μ
- c_H (n-1) ∫_M divΩ ^♯(Π) K d μ
=∫_M {f,H }^(S) K d μ - c_H ∫_M Z(f) K d μ + c_H ∫_M f K n d μ - c_H (n-1) ∫_M f K d μ
=∫_M ( {f,H }^(S) - c_H Z(f)+ c_H f ) K d μ.
As a result, we have
divΩ ^♯(ℒ_X_H^cΠ)={f,H }^(S) - c_H Z(f) + c_H f,
via which we obtain
∂ f/∂ t =divΩ ^♯(Π̇)= - divΩ ^♯(ℒ_X_H^cΠ)-(c_Hn)divΩ ^♯
(Π)
= {H,f }^(S)+c_H Z(f)-c_H (n+1) f.
This is exactly the same as the evolution of the density variable given in the first line of (<ref>). Let us perform a similar analysis for the real variable as well. We thus compute the time derivative of the scalar variable established in (<ref>) to arrive at
∂ c^*/∂ t = -∫_M⟨Π̇,Z ⟩ d μ = ∫_M⟨ℒ_X_H^cΠ,Z ⟩ d μ + ∫_M⟨ cn Π, Z ⟩ d μ
=∫_M⟨ℒ_X_H-cZΠ,Z ⟩ d μ + ∫_M⟨ cn Π, Z ⟩ d μ
= ∫_M⟨ℒ_X_HΠ,Z ⟩ d μ-c∫_M⟨ℒ_ZΠ,Z ⟩ d μ + ∫_M⟨ cn Π, Z ⟩ d μ
=-∫_M⟨Π,ℒ_X_HZ ⟩ d μ + c∫_M⟨Π,ℒ_ZZ ⟩ d μ
+c∫_M⟨Π,Z ⟩div(Z)d μ + ∫_M⟨ cn Π, Z ⟩ d μ
=∫_M⟨Π,X_Z(H)+H⟩ d μ=
∫_MdivΩ^♯(Π) (Z(H)+H)d μ = ∫_Mf (Z(H)+H)d μ,
which coincides with the evolution of the real variable given in the second line of (<ref>). So, the conformal Kinetic equation (<ref>) becomes a particular instance of the abstract Lie-Poisson equation (<ref>).
|
http://arxiv.org/abs/2307.04187v1 | 20230709144054 | Predictive Coding For Animation-Based Video Compression | [
"Goluck Konuko",
"Stéphane Lathuilière",
"Giuseppe Valenzise"
] | cs.CV | [
"cs.CV",
"cs.MM"
] |
Goluck Konuko^†, Stéphane Lathuilière^, Giuseppe Valenzise^†
^† Université Paris-Saclay, CentraleSupélec, Laboratoire des signaux et systèmes
^ LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Predictive Coding for Animation-Based Video Compression
=======================================================
We address the problem of efficiently compressing video for conferencing-type applications. We build on recent approaches based on image animation, which can achieve good reconstruction quality at very low bitrate by representing face motions with a compact set of sparse keypoints. However, these methods encode video in a frame-by-frame fashion, i.e., each frame is reconstructed from a reference frame, which limits the reconstruction quality when more bandwidth is available. Instead, we propose a predictive coding scheme which uses image animation as a predictor and codes the residual with respect to the actual target frame. The residuals can in turn be coded in a predictive manner, thus efficiently removing temporal dependencies. Our experiments indicate a significant bitrate gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC, on a dataset of talking-head videos.
Video compression, image animation, generative models, video conferencing, predictive coding
§ INTRODUCTION
Recent work on learning-based video coding for videoconferencing applications has shown that it is possible to compress videos of talking heads with extremely low bitrate, without significant losses in visual quality <cit.>. The basic tenet of these methods is that face motion can be represented through a compact set of sparse keypoints <cit.>, which can be transmitted and used at the decoder side to animate a reference video frame.
However, despite the impressive coding performance of these methods at very low bitrates, existing animation-based codecs for videoconferencing still have several bottlenecks. Firstly, when the available bitrate increases, the reconstruction quality quickly reaches saturation, and conventional coding tools such as HEVC or VVC perform better. Secondly, bitrate variability in current schemes is complex, unlike conventional coding methods where a simple quantization parameter can be used to regulate bitrate. Finally, animation-based codecs operate on a frame-by-frame basis, which is inefficient for eliminating temporal redundancy in the video.
This paper addresses these limitations by proposing a predictive coding scheme for videoconferencing applications. Specifically, we interpret the keypoint-based image animation used in previous codecs <cit.> as a spatial predictor of the current (target) frame, as depicted in Figure <ref>. The residual between the animated and the target frame is then coded and used at the decoder side to correct the animated target frame. Since animation residuals exhibit temporal correlation, we also encode them in a predictive manner, i.e., we predict the current animation residual based on the previously decoded residual and encode the prediction difference.
It is worth noting that this approach is similar in principle to the classic video coding prediction loop, with the important distinction that residual coding and animation are jointly learned in an end-to-end fashion.
We name our method RDAC, for Residual Deep Animation Codec.
Our results demonstrate significant rate-distortion improvements compared to standard codecs such as HEVC and VVC, as measured by several classical and learning-based perceptual quality metrics. Furthermore, the proposed technique has the additional advantage of reducing temporal drift compared to previous frame-by-frame approaches.
§ RELATED WORK
Image animation models have been applied to compress talking head videos at ultra-low bitrates in conferencing-type applications <cit.>. Different from other learning-based compression frameworks <cit.>, the animation-based codecs in <cit.> and <cit.> propose architectures that use a variable number of motion keypoints to change the reconstruction quality within a small range of low bitrates. The deep animation codec (DAC) in our previous work <cit.> offers the possibility to vary the bitrate by creating a list of reference frames from which the best reconstruction is computed. Specifically, a new reference frame is added to the decoder buffer if all the available frames give reconstruction below a predefined threshold. However, this approach may introduce temporal jittering when adjacent animated frames are predicted from different reference frames. Using second-order motion coherence <cit.> introduces spatio-temporal stability in the decoded video, hence reducing the jittering. However, this architecture is still limited in terms of quality variability since it relies only on face animation. In our recent work <cit.>, we proposed a hybrid coding architecture (HDAC) that uses a low-quality HEVC bitstream as side information to enhance the final result of the animation codec. While improving on previous methods, the use of this low-quality auxiliary stream limits in practice the possibility to reconstruct high-frequency details.
In this work, we propose a residual deep animation codec (RDAC) that learns a compact representation of the residual between a frame and its animation-based prediction, and encodes this residual using temporal prediction.
§ PROPOSED METHOD
A general scheme of the proposed residual deep animation codec is depicted in Fig. <ref>. The components of the proposed system are detailed as follows: Section <ref> introduces the frame prediction and residual coding and Section <ref> presents temporal learning in the residual space.
§.§ Deep Image Animation Prediction and Residual Coding
We leverage the principles developed in the First Order Model <cit.> for image animation and our prior works <cit.> for animation-based prediction. The image animation process works by estimating a sparse set of motion landmarks using a keypoint detector (KPD) which is a UNet-like architecture from <cit.>. The keypoints are used by a motion transfer network (MTN) that generates the optical flow between a decoded reference image 𝐗̃_0 and the desired target 𝐗_t. Subsequently, the optical-flow map is applied to the feature space representation of the reference frame derived by the encoder of an autoencoder network. The deformed source features are assumed to be a close approximation of the target frame's feature representation and are used by a decoder network to produce the final animation 𝐗̂_t.
We build on this animation framework by including an encoder network that learns a latent representation of 𝐑_t = 𝐗_t - 𝐗̂_t, i.e., the residual after animation, as illustrated in Fig. <ref>. We start with the architecture of the variational autoencoder network <cit.> used for learned image compression frameworks. However, since the residual images have very sparse features, we mitigate the potential encoding of a noisy latent representation by increasing the number of downsampling convolutional layers from 3 to 5 and symmetrically increasing the number of upsampling layers.
§.§ Using Temporal Correlation in the Residual Layer
For a sequence of target frames 𝐗_1→𝐗_T animated from a single reference frame, 𝐗_0, we observe that the residual differences 𝐑_1→𝐑_T have a high temporal correlation. In this paper, we use a simple differential coding scheme to exploit this temporal correlation. Specifically, we compute the temporal difference signal between consecutive frame residuals, 𝐃_t = 𝐑_t-𝐑̂_t-1, as shown in Fig. <ref>. Note that, in general, more sophisticated prediction schemes are possible that could bring additional temporal decorrelation, e.g., any dense or block-based motion compensated scheme. In this work, we demonstrate coding gains even with a suboptimal zero-motion temporal predictor, leaving the study of more advanced prediction schemes to future work.
The difference signal 𝐃_t is coded using an additional autoencoder network, which is trained together with the animation-based predictor and the reconstruction network. The decoding process consists in reconstructing the residual 𝐑̃_t=𝐃̃_t + 𝐑̃_t-1. The reconstructed residual is then concatenated to the animation-based predictor 𝐗̂_t and passed as input to a reconstruction network that produces the final decoded frame 𝐗̃_t. The reconstruction network consists of 2 convolution layers and 3 ResNet blocks.
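A schematic sketch of this closed prediction loop is given below (Python/NumPy). It is not the exact RDAC implementation: the functions animate and code_diff are placeholders standing in for the keypoint-based animation predictor and for the learned autoencoder plus entropy coding of 𝐃_t, and simple addition replaces the learned reconstruction network that actually fuses 𝐗̂_t with 𝐑̃_t.

import numpy as np

def rdac_like_loop(frames, animate, code_diff):
    # frames[0] is the reference frame (coded separately, e.g. as intra)
    x0 = frames[0]
    r_prev = np.zeros_like(x0)          # last reconstructed residual R~_{t-1} (decoder state)
    decoded = [x0]
    for t in range(1, len(frames)):
        x_hat = animate(x0, t)          # animation-based (spatial) prediction of frame t
        r_t = frames[t] - x_hat         # residual R_t
        d_t = r_t - r_prev              # temporal difference D_t = R_t - R~_{t-1}
        d_hat = code_diff(d_t)          # lossy coding/decoding of D_t
        r_prev = r_prev + d_hat         # reconstructed residual R~_t, identical at the decoder
        decoded.append(x_hat + r_prev)  # stand-in for the reconstruction network
    return decoded

# e.g. rdac_like_loop(frames, animate=lambda x0, t: x0, code_diff=lambda d: np.round(d))

Because the decoder updates r_prev from the same decoded differences, encoder and decoder stay synchronized, which is what prevents the temporal drift discussed in the experiments.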
§.§ Model Training
We initialize the animation module with pre-trained models from <cit.>. The loss terms for image animation are the same as in <cit.>, while the rate-distortion loss ℒ_RD is derived as described in <cit.>:
ℒ_RD = λ·MSE(𝐑_𝐭, 𝐑̂_𝐭) + Rate
where the bitrate cost in bits-per-pixel (bpp) is computed from the entropy estimate of the residual latent representation.
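A minimal sketch of how such a loss can be assembled is shown below (Python; schematic only, assuming the learned entropy model returns per-element likelihoods of the quantized residual latent. The function name and arguments are placeholders, not the actual RDAC code).

import numpy as np

def rd_loss(residual, residual_hat, likelihoods, lam, num_pixels):
    # distortion: MSE between the original and reconstructed residual
    mse = np.mean((residual - residual_hat) ** 2)
    # rate: entropy estimate of the latent, expressed in bits per pixel
    bpp = -np.sum(np.log2(likelihoods)) / num_pixels
    return lam * mse + bpp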
§ EXPERIMENTS AND RESULTS
§.§ Evaluation Protocol
We randomly select 30 video sequences from the VoxCeleb test set with minimum lengths of 128 frames. We note that changing the GOP size affects the average reconstruction quality of the video sequences. Therefore, we encode the sequences with GOP sizes 16, 32, 64, and 128 and select the best reconstruction point at each bitrate from a union of the computed metrics, i.e., the convex hull of all the GOP configurations. The reference frame is encoded with QP 30 using the BPG codec (HEVC intra) and the motion keypoints as well as the compressed residuals are entropy coded using a context-adaptive arithmetic coder with a Prediction by Partial Match (PPM) model <cit.>. HEVC and VVC (VTM-11) metrics are computed under low-delay configurations with high QP values to minimize bitrate. We also compare against the LPIPS-VGG metrics reported for BeyondKP <cit.> and FaceVid2Vid <cit.> since they use comparable test conditions. Notice that for these last two methods, we only have a single bitrate point, since they do not support bitrate variability beyond 10 kbps. MSE loss is used at training time for residual learning. However, the other loss terms used in training the network optimize for perceptual quality. Therefore, we restrict our evaluation to use only perceptual metrics and multi-scale pixel fidelity metrics.
§.§ RD Evaluation
In Tab. <ref>, we note over 70% bitrate savings for perceptual-based metrics i.e. LPIPS <cit.>, msVGG <cit.> and DISTS <cit.> as well as over 40% bitrate savings for pixel-based metrics over HEVC. In Fig. <ref> we make a visual comparison of our proposed framework with HEVC and VVC in the low bitrate range.
Fig. <ref> illustrates the rate-distortion performance using the LPIPS metric. RDAC significantly improves performance of conventional video codecs over a wide range of bitrates, and it outperforms previous animation-based codecs which do not employ predictive coding.
§.§ Ablation study and temporal drift
An advantage of using a closed-loop prediction scheme for temporal coding of residuals is that it avoids the temporal drifting affecting previous open-loop schemes such as DAC. This is supported by Fig. <ref>, where we show the temporal reconstruction quality (measured with MS-SSIM) of our framework and DAC.
We also investigate to which extent the temporal prediction contributes to the RD gains, over a frame-by-frame scheme to code the prediction residuals 𝐑_t. To this end, we remove the temporal feedback loop in Fig. <ref>, encoding the residuals as all Intra. Tab. <ref> reports the gains of our proposed RDAC (with temporal prediction) over this simpler solution, demonstrating that reducing temporal correlation has a significant impact on coding performance.
§.§ Computational complexity
In Tab. <ref>, we make a complexity evaluation by comparing the coding or decoding time for a single interframe. The animation-based models DAC, HDAC, and our framework are evaluated on a CPU and GPU while the HEVC and VVC codecs are only evaluated on a CPU since they do not have GPU acceleration capability. We note that our proposal adds only a moderate level of complexity relative to HEVC. However since we achieve bitrate savings greater than VVC, we consider this additional complexity as an acceptable tradeoff for the target application.
§ CONCLUSIONS
Animation-based compression offers the possibility to transmit videos with very low bitrate. However, it is often limited to reconstructing the outputs at a fixed quality level, cannot scale efficiently when higher bandwidth is available, and does not compress efficiently temporal redundancies in the signal. In this paper, we propose a coding scheme that integrates image animation (re-interpreted as a frame predictor) with classical predictive coding principles, where we exploit both spatial and temporal dependencies to achieve a coding gain. Our RDAC codec outperforms previous methods and standard codecs by a large margin on a dataset of talking head videos, despite the very simple temporal prediction approach employed.
Acknowledgement: This work was funded by Labex DigiCosme - Université Paris-Saclay. This work was performed using HPC resources from GENCI-IDRIS
|
http://arxiv.org/abs/2307.05933v1 | 20230712055859 | BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration | [
"Junjia Liu",
"Hengyi Sim",
"Chenzui Li",
"Fei Chen"
] | cs.RO | [
"cs.RO",
"cs.AI"
] |
Junjia Liu, Hengyi Sim, Chenzui Li, Fei Chen
BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration
============================================================================================================================================================================================================================================================================================
Human bimanual manipulation can perform more complex tasks than a simple combination of two single arms, which is credited to the spatio-temporal coordination between the arms. However, the description of bimanual coordination is still an open topic in robotics. This makes it difficult to give an explainable coordination paradigm, let alone apply it to robots. In this work, we divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination. Then we propose a relative parameterization method to learn these types of coordination from human demonstration. It represents coordination as Gaussian mixture models learned from bimanual demonstrations, describing probabilistically how the importance of coordination changes throughout the motion. The learned coordinated representation can be generalized to new task parameters while ensuring spatio-temporal coordination. We demonstrate the method using synthetic motions and human demonstration data and deploy it to a humanoid robot to perform a generalized bimanual coordination motion. We believe that this easy-to-use bimanual learning from demonstration (LfD) method has the potential to be used as a data augmentation plugin for training large robot manipulation models. The corresponding codes are open-sourced at <https://github.com/Skylark0924/Rofunc>.
§ INTRODUCTION
Humanoid robots with high redundancy are expected to perform complex manipulation tasks with human-like behavior. However, ensuring the coordination between multiple degrees of freedom is still an open problem in robotics. This is often the key to the success of most human daily activities, like stir-frying, pouring water, sweeping the floor, and putting away clothes. Thus, it is necessary to provide an explainable paradigm to describe and learn coordination. Learning the manipulation of a humanoid robot by observing human motion and behavior is a straightforward idea <cit.>, but the technology behind it is still challenging. It requires understanding human motion data and designing a bridge connecting humans and robots. In this work, we focus on the learning and generalization of bimanual coordination motions from human demonstration.
Learning from demonstration (LfD) is a type of machine-learning approach that allows robots to learn tasks or skills from human demonstrations. Instead of programming robot motions with explicit instructions that are defined manually for each task <cit.><cit.>, LfD enables robots to learn skills by observing human performance <cit.>. It is implemented by the following processes: recording human demonstration data, learning the representation of multiple demonstrations, transferring the data to the workspace of robots, and finally designing a controller for generating the smooth trajectory and its corresponding control commands. LfD has become an increasingly popular approach for training robots, as it can be faster and more efficient than traditional programming methods. It also allows robots to learn tasks that may be difficult to program explicitly, such as those that involve complex movements or interactions with a dynamic environment. Meanwhile, another important feature of LfD is that it enables robots to adapt to new or changing environments <cit.>, as they can learn from demonstrations in different settings and apply that knowledge to new situations.
Bimanual robots are much more complex to learn from demonstration than single-armed robots that can be taught by kinesthetic teaching <cit.>. Some previous works tried to combine the trajectories taught multiple times to realize the kinesthetic teaching of highly redundant robots <cit.>. However, this also makes the demonstration data less reliable. Recently, several works proposed feasible frameworks for learning directly from human demonstration. Krebs et al. provided a taxonomy of human bimanual manipulation in daily activities by focusing on different types of coordination <cit.>. Liu et al. regarded the leader-follower coordination as sequence transduction and designed a coordination mechanism based on the Transformer model to achieve a human-level stir-fry task <cit.>. Besides, offline reinforcement learning algorithms have been used to let robots learn bimanual coordination tasks from offline demonstration datasets <cit.>, allowing the robot to learn the most efficient and effective ways to coordinate its arms for a given task.
In this work, we aim to propose an explainable paradigm for learning generalized coordination from demonstration. The main contributions can be summarized as follows:
* Coordination parameterization: We propose a relative parameterization method (BiRP) for extracting the coordination relationship from human demonstration and embedding it into the motion generation of each arm.
* Leader-follower motion generation: We provide conditional coordinated motion generation for bimanual tasks with different roles in arms, allowing us to generate the follower's motion according to the leader.
* Synergistic motion generation: For tasks where there is no obvious role difference between arms, we also provide a motion generation method that enables both arms to adapt to new situations synergistically.
§ CONSTRUCT BIMANUAL COORDINATION BY RELATIVE PARAMETERIZATION
Relative parameterization is a way to parameterize the relative relationship between the two arms and embed this relationship into the representation of each arm. The relative relationship can have many forms, which depend on task-specific coordination characteristics. For example, if both arms are asked to grasp the same object simultaneously and keep holding it until it is placed, the relative relationship can be the relative displacement of the end-effectors. The definitions of symbols are listed in Table <ref>.
In this section, we first briefly introduce the fundamental learning from demonstration method used in uni-manual scenarios (Sec. <ref>), which consists of two parts: demonstration representation and motion reproduction or generation. We add the concept of relative parameterization to these two parts so that both the representation learning process (Sec. <ref>) and the control process (Sec. <ref>) take into account the bimanual coordination characteristics in the demonstration data. These two methods can be used independently or jointly. A feasible weighting approach is also proposed to adjust the importance of the coordination characteristics in the representation and control (Sec. <ref>). The whole framework, illustrated by a leader-follower example, is shown in Fig. <ref>.
§.§ Demonstration Representation and Motion Generation
The learning from demonstration method is a bridge between humans and robots, which is required to have the ability to extract the characteristics of human skills, plan the trajectories, and control the robot to perform similar skills. Thus, it is necessary to combine human skill learning and robot motion planning and control in the same encoding approach. A popular way is to link them in the form of probability, like the Hidden Markov Model (HMM) and the Gaussian Mixture Model (GMM). Besides, considering that the application scenarios of service robots are unstructured and need to adapt to changing situations, a class of task-parameterized models has been proposed to address this problem <cit.>. The task parameters are variables describing the task-specific situation, like the position of an object in a pick-and-place task. By contrast, some task-independent information can also be extracted from the demonstration data, which reflects the nature of the skill itself, namely the skill parameters. The concept of task-parameterized models is to observe the skill in multiple frames, e.g., from starting points and ending points, and to describe the impedance of the system by variations and correlations, which can then be used to control the robot with a linear quadratic regulator.
The Task-parameterized Gaussian Mixture Model (TP-GMM) is a typical method that probabilistically encodes datapoints and the relevance of the P candidate frames by mixture models, and it has good generalization capability <cit.>. Formally, if we define the task parameters as {b_j, A_j}_j=1^P, the demonstrations ξ can be observed as ζ_j=A_j^-1(ξ-b_j) in each frame j. These transformed demonstrations are then represented as a GMM {π^(k),{μ_j^(k), Σ_j^(k)}_j=1^P}_k=1^K by log-likelihood maximization, where π^(k) refers to the prior probability of the k-th Gaussian component, and μ_j^(k) and Σ_j^(k) refer to the mean and covariance matrix of the k-th Gaussian in frame j. We can regard these Gaussian components in multiple frames as skill parameters that can be transferred following the change of task parameters. For instance, if a new situation is given by task parameters {b̂_j, Â_j}_j=1^P, a new task-specific GMM can be generated by a Product of Experts (PoE):
𝒩(ν̂^(k), Γ̂^(k)) ∝∏_j=1^P 𝒩(ν_j^(k), Γ_j^(k))
where ν_j^(k)=A_jμ_j^(k)+b_j, Γ_j^(k)=A_jΣ_j^(k)A_j^⊤. The result of the Gaussian product is given analytically by
Γ̂^(k)=(∑_j=1^P Γ_j^(k)^-1)^-1, ν̂^(k)=Γ̂^(k)∑_j=1^P Γ_j^(k)^-1ν_j^(k)
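The PoE step above admits a direct closed-form implementation. The following Python/NumPy sketch is an illustration only; the helper names transform_component and product_of_gaussians, as well as the toy values, are assumptions made here and not part of the original work.

import numpy as np

def transform_component(mu, sigma, A, b):
    # map a frame-local Gaussian into the task frame: nu = A mu + b, Gamma = A Sigma A^T
    return A @ mu + b, A @ sigma @ A.T

def product_of_gaussians(nus, gammas):
    # fuse N(nu_j, Gamma_j) over all frames by precision-weighted combination
    precision = sum(np.linalg.inv(G) for G in gammas)
    gamma_hat = np.linalg.inv(precision)
    nu_hat = gamma_hat @ sum(np.linalg.inv(G) @ n for G, n in zip(gammas, nus))
    return nu_hat, gamma_hat

# toy usage: one 2-D Gaussian component observed from two frames
mus = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
sigmas = [0.1 * np.eye(2), 0.2 * np.eye(2)]
A = [np.eye(2), np.eye(2)]
b = [np.zeros(2), np.array([0.5, 0.5])]
nus, gammas = zip(*[transform_component(m, S, Ai, bi)
                    for m, S, Ai, bi in zip(mus, sigmas, A, b)])
nu_hat, gamma_hat = product_of_gaussians(nus, gammas)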
For generating robot motion from GMM, optimal control methods like Linear Quadratic Regulator (LQR) and Linear Quadratic Tracking (LQT) can be used as planning and control methods. Here we give the classical form of LQT as follows:
cost=(ν̂-x)^⊤Q(ν̂-x) + u^⊤Ru
where ν̂ is the mean matrices of the task-specific GMM obtained by the previous PoE process.
Assume that the system evolution is linear,
x_t+1=A_s x_t+B_s u_t
where A_s, B_s are coefficients for this system. Then, the relationship between the control command and the robot states can be described in the matrix as x=S_x x_1+S_uu, where S_x∈ℝ^DT× D and S_u∈ℝ^DT× D(T-1) are the matrix form combination of A_s, B_s. More details can be found in the appendix of <cit.>.
Here we only consider an open-loop controller, whose solution can be given analytically by
û=(S_u^⊤QS_u+R)^-1S_u^⊤Q(ν̂-S_x x_1)
with a residual as Σ̂_u=(S_u^⊤QS_u+R)^-1.
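As a rough illustration of this open-loop solution, the sketch below builds S_x and S_u for a given linear system and evaluates the analytic expression for û. It is written in Python/NumPy; the function name batch_lqt and the way the horizon T is handled are assumptions made for this sketch, not details from the original formulation. In this family of methods, Q is typically assembled block-diagonally from the fused precisions Γ̂^(k)^-1 along the reference trajectory, and R is a small control-effort regularizer.

import numpy as np

def batch_lqt(nu_hat, Q, R, A_s, B_s, x1, T):
    # open-loop LQT: u_hat = (Su^T Q Su + R)^-1 Su^T Q (nu_hat - Sx x1)
    D, Du = A_s.shape[0], B_s.shape[1]
    Sx = np.zeros((D * T, D))
    Su = np.zeros((D * T, Du * (T - 1)))
    P = np.eye(D)
    Sx[:D] = P
    for t in range(1, T):
        P = A_s @ P
        Sx[t * D:(t + 1) * D] = P                         # block A_s^t
        for k in range(t):
            # block (t, k) is A_s^{t-1-k} B_s, i.e. the effect of u_k on x_{t+1}
            Su[t * D:(t + 1) * D, k * Du:(k + 1) * Du] = \
                np.linalg.matrix_power(A_s, t - 1 - k) @ B_s
    u_hat = np.linalg.solve(Su.T @ Q @ Su + R, Su.T @ Q @ (nu_hat - Sx @ x1))
    return u_hat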
§.§ Representation with Relative Parameterization
In the bimanual setting, coordination is reflected at the data level as some characteristics of the relative motion of the arms. For instance, for a bimanual box-lifting task, this characteristic manifests itself as the arms move from free movement to a fixed relative relationship and maintain this relationship for a certain time frame. For a leader-follower task like stir-fry <cit.>, the characteristic refers to the following arm (holding the spoon) motion, and its periodicity is determined with reference to the leading arm (holding the pot). In this work, instead of pre-defining the roles between the arms (as leader or follower), we aim to describe the relative relationship between the arms in a more general way: let the arms parameterize each other.
Formally, we define another frame that takes the trajectory of the other arm as dynamic task parameters and represents the relative relationship as a GMM as well. Different from observation perspectives built with a fixed pose, the transformation matrices A_c, t, b_c, t are dynamic and change with the motion of the other arm. The relative motion is described as ζ_c=A_c, t^-1(ξ-b_c, t) and represented by {π^(k),μ_c^(k), Σ_c^(k)}_k=1^K. For each arm h, the task-specific GMM is obtained by the PoE
𝒩(ν̂^(k), Γ̂^(k)) ∝∏_j=1^P 𝒩(ν_j^(k), Γ_j^(k)) ·𝒩(ν_c^(k), Γ_c^(k))
where ν_c^(k)=A_c, tμ_c^(k)+b_c, t, Γ_c^(k)=A_c, tΣ_c^(k)A_c, t^⊤.
Such a relative parameterization entangles the representation of bimanual arms together, letting them consider each other by constructing time-varying mutual observing perspectives. This brings two useful functions:
* Generate the motion of one arm based on a given motion of the other one in a leader-follower manner.
* Generate bimanual motions to adapt to new situations simultaneously in a synergistic manner.
For instance, if the left arm motion ξ_l is pre-defined or adjusted to new situations by other methods, like the Dynamic Movement Primitive (DMP) in <cit.>, a corresponding right arm motion that considers the spatio-temporal coordination implicit in the demonstration can be generated by obtaining the dynamic relative parameters A_c, t, b_c, t from ξ_l. Then we can obtain a task-and-coordination-specific GMM of the right arm for further motion generation and control.
For generating bimanual motions synergistically, the relative parameterization cannot be established at the beginning because the bimanual motions are still unknown. Thus, we first use the product of GMMs in the other reference frames to generate independent motions of the arms and then use each arm's motion as the relative frame of the other arm to embed the learned coordination iteratively.
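The sketch below (Python/NumPy; the function names poe and follower_step, the translation-only relative frame, and the assumption that the active mixture component has already been resolved per time step are simplifications made here) illustrates the leader-follower use: at every time step, the leader pose supplies the dynamic task parameters A_c,t, b_c,t, and the follower's Gaussian is obtained by multiplying the static-frame experts with the relative-frame expert.

import numpy as np

def poe(nus, gammas):
    # product of Gaussian experts, same operation as for nu_hat, Gamma_hat above
    precision = sum(np.linalg.inv(G) for G in gammas)
    cov = np.linalg.inv(precision)
    mean = cov @ sum(np.linalg.inv(G) @ n for G, n in zip(gammas, nus))
    return mean, cov

def follower_step(p_leader, mu_c, sigma_c, frame_gaussians):
    # frame_gaussians: list of (nu_j, Gamma_j) already expressed in the world frame
    A_c, b_c = np.eye(p_leader.size), p_leader      # assumption: translation-only relative frame
    nus = [n for n, _ in frame_gaussians] + [A_c @ mu_c + b_c]
    gammas = [G for _, G in frame_gaussians] + [A_c @ sigma_c @ A_c.T]
    return poe(nus, gammas)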
§.§ Control with Relative Parameterization
Coordination relationships can also be embedded when generating trajectories and corresponding control commands from GMM. Let the cost function of the vanilla LQT controller in Equ. <ref> be 𝒞_vanilla. The composition cost function that takes coordination into account can then be written as
𝒞=∑_h^H𝒞_vanilla^h + (ν_c-x_c)^⊤Q_c(ν_c-x_c)
By assuming a linear system similar to Equ. <ref>, the composition cost function can be rewritten as
𝒞= ∑_h^H[(ν̂^h_u-u^h)^⊤Ω_u^h(ν̂^h_u-u^h) + u^h^⊤R^h u^h]
+(ν_u, c-u_c)^⊤Ω_u, c(ν_u, c-u_c)
where ν̂^h_u=S_u^-1(ν̂^h-S_x x_1) and Ω_u=S_u^⊤QS_u. ν_u, c and Ω_u, c are obtained by similar transformations.
Since multiple variables (u^h, u_c) are involved, we cannot directly convert this sum of quadratic error terms into a PoE. Thus, we define a unified vector U∈ℝ^D T× H representing the control command of the whole system, and a binary coordination matrix C∈ℝ^D T × D T H, C=[C^1, …, C^H]. For convenience, we set [C^h]=[0, …, C^h, …, 0]; then we can continue to rewrite the cost function as
𝒞 = ∑_h^H[(ν̂^h_u-[C^h] U)^⊤Ω_u^h (ν̂^h_u-[C^h] U)
+U^⊤[C^h]^⊤R^h[C^h] U]
+(ν_u, c-CU)^⊤Ω_u, c(ν_u, c-CU)
Setting Ω_U^h=[C^h]^⊤Ω_u^h[C^h], R_U^h=[C^h]^⊤R^h[C^h], ν̂^h_U=[C^h]^-1ν̂^h_u, ν_U, c=C^-1ν_u, c, the composition cost function is simplified as
𝒞 = ∑_h^H[(ν̂^h_U-U)^⊤Ω_U^h (ν̂^h_U-U) +U^⊤R^h_U U]
+(ν_U, c-U)^⊤Ω_U, c(ν_U, c-U)
Then we can finally change this sum of quadratic error terms into PoE
𝒩 (Û, Σ̂_U) ∝
∏_h^H[𝒩(0, R_U^h^-1) 𝒩(ν̂^h_U, Ω_U^h^-1)] 𝒩(ν_U, c,Ω_U, c^-1)
The result can be written as
Σ̂_U =(∑_h^H[Ω_U^h+R_U^h]+Ω_U, c)^-1
Û =Σ̂_U(∑_h^H Ω_U^h ν̂^h_U+ Ω_U, cν_U, c)
By using the binary coordination matrix C, we can extract the coordinated control commands and motions from Û.
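A compact sketch of this fusion in control space is given below (Python/NumPy; the name fuse_control_commands and the input layout are assumptions of this sketch). It evaluates the closed-form expressions for Σ̂_U and Û and then reads out each arm's command with the binary coordination matrices.

import numpy as np

def fuse_control_commands(omega_U, R_U, nu_U, omega_c, nu_c, C_blocks):
    # Sigma_U = (sum_h [Omega_U^h + R_U^h] + Omega_c)^-1
    precision = sum(O + R for O, R in zip(omega_U, R_U)) + omega_c
    sigma_U = np.linalg.inv(precision)
    # U_hat = Sigma_U (sum_h Omega_U^h nu_U^h + Omega_c nu_c)
    U_hat = sigma_U @ (sum(O @ n for O, n in zip(omega_U, nu_U)) + omega_c @ nu_c)
    # per-arm commands are extracted with the binary coordination matrices C^h
    u_per_arm = [C @ U_hat for C in C_blocks]
    return U_hat, sigma_U, u_per_arm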
§.§ Weighted Relative Parameterization
A feasible variant of the above methods is to introduce weight coefficients σ to adjust the influence of coordination relationship in representation and control.
For the GMM representation,
𝒩(ν̂^(k), Γ̂^(k)) ∝∏_j=1^P 𝒩(ν_j^(k), Γ_j^(k)) ·[𝒩(ν_c^(k), Γ_c^(k))]^σ
For the LQT controller,
Σ̂_U =(∑_h^H[Ω_U^h+R_U^h]+σ·Ω_U, c)^-1
Û =Σ̂_U(∑_h^H Ω_U^h ν̂^h_U+ σ·Ω_U, cν_U, c)
§ EXPERIMENTS
§.§ Setup
The effectiveness of the proposed method is illustrated by learning from both synthetic motions and real demonstration motions. The pre-designed synthetic motions exhibit the coordination explicitly, which helps to demonstrate the performance of the method.
Synthetic motions: The synthetic motions were created via Bézier curves, where bimanual arms depart from a distance and meet at the same point. This kind of motion often occurs in some daily activities that require both arms to grasp, carry or pick up something simultaneously. We provide both two-dimensional and three-dimensional data to show the dimension scalability, as shown in Fig. <ref>.
Real demonstration motions: We also provide demonstrations of two real tasks to show the effect in bimanual robot manipulation. The palletizing example shown in Fig. <ref> represents a class of synergistic coordinated motions and tasks, while the pouring example shown in Fig. <ref> is a typical bimanual coordination task in the leader-follower manner.
§.§ Demonstration collection
The human demonstration data were collected via Optitrack. The demonstrator attached two groups of markers to his hands for detection by Optitrack. Each group contains four individual markers, which are required to determine the pose of each arm. These four markers are detected by six Optitrack cameras to record the two end-effector trajectories with both position and orientation. We chose the poses of the centers of each marker group to reproduce the human bimanual demonstration motions. In addition, the box and the two cups each carry a set of four markers for recording object motions. The raw data were pre-processed by our open-source toolbox <cit.> to extract the valuable information and separate it into multiple demonstrations visually. Each demonstration contains seven pose values for each marker group.
§.§ Coordination learning performance analysis
The goal of the synthetic motions is that the two arms should meet at the same pose, whether in 2-dim or 3-dim. As shown in the left column of Fig. <ref>, we provide three bimanual motions as demonstrations for each synthetic example. These motions start and end at different positions but move in a similar style. The middle column, with multiple small figures, shows the process of the proposed relative parameterization method. We use three observation frames to parameterize the motion of each arm: from the start points, from the endpoints, and from a dynamic relative observation frame depending on the other arm. We can extract and construct coordination relationships from this parameterization of the demonstration data. The parameterized coordination is then used in motion generation and control in new situations with different task parameters. Keeping the same coordination relationship in these generalized motions is required to achieve some specific bimanual tasks. The generalized motion generation results are shown in the right column of Fig. <ref>. In the 2-dim example, the bimanual motions are required to meet at a new position, (5, 5). In the 3-dim example, this new meeting point is set to (5, 8, 5). The generated motions with learned coordination are shown in red and blue, while we also provide a comparison with generated motions without coordination (in light red and blue). By comparison, we find that regarding a bimanual robot as a simple combination of two single arms is insufficient for bimanual tasks. It is necessary to parameterize the coordination relationship, whether in a leader-follower or a synergistic manner; this is usually the key to achieving bimanual tasks.
§.§ Real robot experiment
We adopt the self-designed humanoid CURI robot for the real robot experiments to perform the bimanual motions. Since this work focuses on learning and generalizing coordinated motion, task parameters such as start and end points and object poses are obtained through the Optitrack system. As shown in Fig. <ref>, we attach four markers to the box to be transported and to the destination box to facilitate obtaining their poses in the world coordinate system. Meanwhile, four fixedly connected markers are also mounted on the back of the CURI robot. The coordinated human hand motions are learned by relative parameterization. Then we use this parameterized coordination model to generate motions that adapt to new object poses and destinations. It is worth mentioning that, unlike the observation frames used for the synthetic data, we set five observation frames for this palletizing task, namely from the start points, the end points, the center poses of the transported box, and the center pose of the destination box. This allows the robot to move from an initial pose with its arms outstretched to the sides of the box, carry the box and place it at the target position, and then release the box. Besides, the result of the pouring example can be found in Fig. <ref>. The execution on the CURI robot is supported by a self-designed impedance controller, and the trajectories are converted to joint-space commands via its inverse kinematics model.
§ DISCUSSION
This work still has some limitations. First, the proposed relative parameterization method is only applied to trajectories in Cartesian space without considering joint-space coordination. Learning joint-space bimanual coordination or even whole-body coordination from human demonstrations remains an open problem; some previous work can be found in <cit.>. Besides, the method based on the Gaussian mixture model takes a considerable amount of time when processing demonstration data sampled at high frequency, which might affect real-time usage. Some improvements using tensors instead of large sparse matrices can be found in <cit.>.
§ CONCLUSION
In this work, we propose a method for parameterizing coordination in bimanual tasks by probabilistically modeling the relative motion relationship of the two arms from human demonstration and using it to guide robot motion generation in new situations. By embedding the relative motion relationship, bimanual motions can be generated in both a leader-follower manner and a synergistic manner. We provide a detailed derivation of the formulation and demonstrate the effectiveness of the proposed method in coordination learning with synthetic data that have prominent coordination characteristics. We also deploy the method on a real humanoid robot to perform coordination motions to show its generalization in new situations. We believe that this easy-to-use bimanual LfD method can be used as a robust demonstration data augmentation method for training large robot manipulation models <cit.>, and we will investigate this potential in future work.
|
http://arxiv.org/abs/2307.04431v1 | 20230710091152 | PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection of 3C Components with a Robotic Line Scanner | [
"Hongpeng Chen",
"Shengzeng Huo",
"Muhammad Muddassir",
"Hoi-Yin Lee",
"Anqing Duan",
"Pai Zheng",
"Hongsheng Pan",
"David Navarro-Alarcon"
] | cs.RO | [
"cs.RO"
] |
PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection of 3C Components with a Robotic Line Scanner
1]Hongpeng [email protected]
1]Shengzeng [email protected]
2]Muhammad [email protected]
1]Hoi-Yin [email protected]
1]Anqing [email protected]
1]Pai [email protected]
3]Hongsheng [email protected]
[1]David [email protected]
*[1]Faculty of Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
[2]Faculty of Construction and Environment, The Hong Kong Polytechnic University, Kowloon, Hong Kong
[3]Shanghai Microintelligence Technology Co. Ltd, Shanghai, China
The automatic inspection of surface defects is an important task for quality control in the computers, communications, and consumer electronics (3C) industry.
Conventional devices for defect inspection (viz. line-scan sensors) have a limited field of view, thus, a robot-aided defect inspection system needs to scan the object from multiple viewpoints.
Optimally selecting the robot's viewpoints and planning a path is regarded as coverage path planning (CPP), a problem that enables inspecting the object's complete surface while reducing the scanning time and avoiding misdetection of defects.
However, CPP strategies for robotic line scanners have not been sufficiently studied.
To fill this gap in the literature, in this paper, we present a new approach for robotic line scanners to detect surface defects of 3C free-form objects automatically.
Our proposed solution consists of generating a local path by a new hybrid region segmentation method and an adaptive planning algorithm to ensure the coverage of the complete object surface.
An optimization method for the global path sequence is developed to maximize the scanning efficiency.
To verify our proposed methodology, we conduct detailed simulation-based and experimental studies on various free-form workpieces, and compare its performance with a state-of-the-art solution.
The reported results demonstrate the feasibility and effectiveness of our approach.
August 12, 2023
===================
§ INTRODUCTION
Defect inspection is essential to quality control, process monitoring, and non-destructive testing (NDT) in the manufacturing industry (Chen et al., chen2022novel; Chen & Yang, chen2020arrival; Luo & He, luo2016cost).
Specifically, manufacturing processes in the 3C industry are highly sophisticated and demand detailed and accurate defect inspection.
Traditional defect inspection approaches typically rely on visual inspection of an intermediate/finished product by a quality control or quality check inspector.
This sole dependence on human workers is a problem for regions and countries with a shortage of manpower (Liu et al., liu2021task; Ming et al., ming2020comprehensive). Furthermore, human-based inspection is inherently subjective, hence, prone to errors.
To address these problems, various researchers have reported automatic surface inspection systems for free-form components (Li et al., li2022five; Yang et al., yang2023template).
Recently, automatic detection systems equipped with an industrial-grade line scanner, a depth camera, and a robotic manipulator have been developed to offer effective and rapid non-contact measurement (Huo et al., huo2021sensor; Liu et al., liu2022coverage).
During the defect inspection task, the robotics inspection system scans the surface of the target workpiece exhaustively from different viewpoints. Planning an inspection path can be considered as the CPP problem (Molina et al., molina2017detection).
Estimating a CPP strategy for automatic inspection consists of three tasks: (1) determining the viewpoints to measure the workpiece’s surfaces, (2) generating a sequence to visit all viewpoints in a time and kinematically optimal way, and (3) planning a feasible path to travel to each viewpoint.
Additional criteria can be defined while planning the coverage path, including full coverage of the target surfaces and the resulting cycle-time for the inspection task (Glorieux et al., glorieux2020coverage).
The existing CPP methods can be divided into two coarse categories: two-dimensional and three-dimensional methods.
Various researchers reported two-dimensional (2D) CPP for mobile robots in floor cleaning, bridge crack monitoring, and weed mowing tasks (Almadhoun et al., almadhoun2016survey; Galceran & Carreras, galceran2013survey).
Veerajagadheswar et al. (veerajagadheswar2020motion) developed a motion planner for floor cleaning.
Polyomino tiling theory was adapted to define reference coordinates and generate a navigation path to maximize the area coverage; Real-time experiments in different scenarios tested the planner on a Tetris-inspired shape-shifting robot. Hung M. La et al. (la2013mechatronic) proposed an autonomous robotic system for precise and efficient bridge deck inspection and identification, where a boustrophedon decomposition was applied to solve the CPP problem.
Lim et al. (lim2014robotic) developed an automatic detection and mapping system for automatic bridge crack inspection and maintenance; They used an improved genetic algorithm to search for a CPP solution to minimize the number of turns and detection time while achieving an efficient bridge inspection.
Danial Pour Arab et al. (pour2022complete) presented a CPP algorithm providing the optimal movements over an agricultural field; First, tree exploration was applied to find all potential solutions meeting predefined requirements, and then, a similarity comparison was proposed to find the best solution for minimizing overlaps, path length, and overall travel time.
It must be remarked that 2D CPP methods cannot be adopted directly for a three-dimensional (3D) CPP problem, as the level of complexity in 3D space is much higher than in 2D space.
In most 2D applications, a complete planner map is available during planning.
Most 3D CPP methods have to plan the paths from partial or occluded 3D maps.
A CPP method for 3D reconstruction based on Building information modeling used a robot arm and a lifting mechanism for wall painting at construction sites (Zhou et al., zhou2022building).
It consists of a two-stage coverage planning framework: a global planner that optimally generates the waypoint sequence, and a local planner that provides the mobile base pose.
The authors reported that this method could ensure coverage of all waypoints and improve painting efficiency.
Hassan and Liu (hassan2019ppcpp) proposed an adaptive path planning approach capable of updating the paths when unexpected changes occur while still attaining the coverage goal.
Zbiss et al. (zbiss2022automatic) reported a path-planning method for collaborative robotic car painting.
This proposed algorithm depends on computational geometry and convex optimization, and Morse cellular decomposition and boustrophedon algorithms are applied for path planning to generate a feasible and collision-free trajectory.
A CPP method based on an Unmanned Aerial Vehicle (UAV) equipped with LiDAR was proposed for bridge inspection (Bolourian & Hammad, bolourian2020lidar).
This method combined a genetic algorithm and an A* algorithm to find a barrier-free shortest path and planned a near-optimal, feasible path.
Recent studies on 3D CPP for industrial product quality inspection, which focus on achieving full surface coverage of the workpiece with minimum inspection time, include the following:
Li et al. (li2018path) demonstrated a robust CPP method for aerospace structures based on their geometric features. Path planning relied on constructing a feature graph through a Voronoi diagram. Then, a search method was proposed to traverse this graph and decide the inspection sequence, and a convex hull-based approach was applied to avoid collisions.
Glorieux et al. (glorieux2020coverage) presented a targeted waypoint sampling strategy with the shortest inspection time for dimensional quality inspection of sheet metal parts.
Liu et al. (liu2022coverage) developed an enhanced rapidly exploring random tree (RRT*) method and integrated the inspection errors and the optimal number of viewpoints into measurement cost evaluation for higher precision in quality inspection.
Huo et al. (huo2021sensor) applied the nearest neighbor search algorithm to find a near-shortest scanning path aiming at convex free-form specular surface inspection.
Despite numerous recent developments, CPP for free-form surface inspection remains an open research problem.
There are very few CPP solutions for line scanning robotic systems (Kapetanovic et al., kapetanovic2018side).
Compared with area-scan sensors, a line-scanning sensor is more suitable for defect inspection in industrial/manufacturing applications due to higher spatial resolution and lower production costs (Steger & Ulrich, steger2021camera; Wang et al., wang2022new).
Unlike a common area camera or other optical sensors that only work at discrete positions, a line scanner utilizes only a single scanning line of light to capture images of 3D objects, and it needs to be moved continuously along the coverage path by a robotic manipulator. These features render many traditional CPP methods ineffective. Therefore, developing a novel CPP method for an automatic line scanning system is imperative and advantageous.
This paper aims to overcome the limitations of existing CPP methods for surface defect inspection. We focus on defect detection for free-form surfaces of 3C workpieces based on a robotic line scanning system.
This robotic system utilizes a 6-DOF robot manipulator with a line scanner to finish a full-coverage inspection path and a depth sensor to localize the workpiece.
The proposed CPP method for robotic line scanning inspection consists of two parts: local path definition for accurate defect inspection and global path optimization for a minimum-time scanning path.
It incorporates the detailed requirements of 3C components surface inspection and the specific characteristics of a robotic line scanning system.
The main contributions of this paper include:
* A new region segmentation method and an adaptive region-of-interest (ROI) algorithm to define the local scanning paths for free-form surfaces.
* A Particle Swarm Optimization (PSO)-based global inspection path generation method to minimize the inspection time.
* Detailed simulations, experiments, and comparisons to validate the proposed method.
The rest of this article is organized as follows.
Section “Coverage Path Planning for Inspection" describes the path planning problem for 3C component surface detection.
Section “Methodology" presents the proposed CPP approach in detail. Section “Case Study" shows the specific simulations, experiments, and comparisons on 3C components to validate the method's feasibility.
Finally, Section “Conclusion" concludes this article and discusses the limitations and future directions.
§ COVERAGE PATH PLANNING FOR INSPECTION
The CPP problem can be divided into two subproblems: 1) local path definition, which generates view regions and partial scanning paths to achieve precise scanning and full coverage of 3C free-form workpieces, and 2) global path planning, which aims to find an optimal or near-optimal sequence of all local paths (Gerbino et al., gerbino2016influence).
The key to the first sub-problem is to determine the position and orientation of the pair of viewpoints at both ends of each local path (the path between two consecutive viewpoints). The line-scan camera only captures one line of pixels at a time, so relative motion between the camera and the object, perpendicular to the line of pixels, is necessary for 2D image acquisition during the defect inspection task (see Fig. <ref>). In this automatic scanning system, the camera is moved by a robotic manipulator along the stationary object, and the direction of the depth of view (DOV) of the camera should be perpendicular to the scanned region to ensure image quality. Therefore, the scanned area needs to remain as flat as possible even though the workpiece models include many different geometric features (see Fig. <ref>). In addition, each local path consists of two viewpoints at its ends, and the camera at the robotic end-effector scans from one viewpoint to the other to inspect the surface defects of the region corresponding to this local path. The change in position between these two waypoints is required to be along one regular direction, and their orientations need to remain as unchanged as possible to ensure the quality of the acquired images. Besides, this sub-problem is also affected by some critical factors, such as the field of view (FOV) and the DOV (Liu et al., liu2022coverage).
The global path planning problem is concerned with finding the sequence and path connecting the selected viewpoints to minimize the total travel cost. The generated coverage path needs to reach all local paths with the shortest connection path. In other words, the objective is to find the minimum kinematically feasible path for the robot manipulator to target the scanning sensor at each viewpoint precisely through all local paths, without colliding with any obstacles in the workspace.
The proposed method should provide a feasible coverage path that traverses all the local paths with minimum inspection time, efficiently and automatically. Moreover, it needs to consider the diverse measurement directions of the local paths to ensure high detection precision. Generally, many local paths are needed to evaluate the surface quality of a 3C component. To obtain precise raw defect images, every scanning parameter is significant and should be set by a new automatic method rather than according to the workers' experience and opinions.
§ METHODOLOGY
A CPP generation and optimization approach is presented based on the robotic line scanning system (see Fig. <ref>). This includes i) a new hybrid region segmentation method based on the random sample consensus (RANSAC) and K-means clustering methods; ii) an adaptive ROI method to define the local measurement paths; and iii) a PSO-based global optimization approach for minimum inspection time. The optimal path is then implemented for offline programming and surface detection, thereby improving the efficiency of the inspection of 3C components.
To extract the workpiece's geometric features, the 3D model is converted to a point cloud. The sampling procedure is based on selecting a series of points randomly and uniformly from the model to form a point cloud that can be used to segment and process all surfaces of the workpiece. The acquired point cloud O consists of points p_i=[x_i,y_i,z_i], i=1,2,..., m (m is the total sampling number of O), which preserves the geometric information of all faces.
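A minimal sketch of such uniform surface sampling is given below (Python/NumPy; the function name and the (vertices, faces) mesh representation are assumptions of this sketch). Triangles are picked with probability proportional to their area and points are drawn with uniform barycentric coordinates, yielding the point cloud O used in the following steps.

import numpy as np

def sample_points_on_mesh(vertices, faces, m, seed=0):
    # vertices: (V,3) array, faces: (F,3) integer array of vertex indices
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri = rng.choice(len(faces), size=m, p=areas / areas.sum())   # area-weighted triangle pick
    u, v = rng.random(m), rng.random(m)
    flip = u + v > 1.0                                            # fold back into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return v0[tri] + u[:, None] * (v1[tri] - v0[tri]) + v[:, None] * (v2[tri] - v0[tri])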
§.§ Hybrid region segmentation based on RANSAC and K-means clustering
The image acquisition characteristics of line-scan cameras necessitate the preservation of flat scanning areas to ensure optimal image quality. Therefore, it becomes crucial to employ an effective segmentation method to divide the entire surface into flat regions. In this study, we propose a hybrid region segmentation method specifically designed for the surface features of 3C components. This method leverages the RANSAC method and enhanced K-means clustering to achieve accurate segmentation. The RANSAC method is used to detect a region with planar geometry. It can also remove some points with minimum curvature from the entire point cloud, enhancing the computation speed of the whole procedure (Su et al., su2022building). Furthermore, it can effectively remove outliers, thereby improving the accuracy of the subsequent K-means clustering process.
Here, we first use RANSAC to partition O. It includes two steps: generating a hypothesis from random samples and verifying this hypothesis with the remaining data. Given different hypothetical geometric models, RANSAC can identify planes, spheres, cylinders, and cones (Xu et al., xu2015investigation). Since flat regions are required for precise line scanning, RANSAC utilizes the equation of a plane as the feature model in the proposed system. It selects N sample points of O and estimates the plane model parameters from those sample points. A point is selected as an inlier if the distance between the point and the plane is less than a fixed threshold, and the shape that contains the greatest number of inlier points is split and extracted after multiple iterations. The plane model can be represented as
aX+bY+cZ+d=0
where [a,b,c,d]^T is the plane model parameter, and [X,Y,Z]^T denotes any point in the 3D coordinates.
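A simple RANSAC plane extraction consistent with this description is sketched below (Python/NumPy; the function name, the threshold value, and the iteration count are illustrative assumptions, with the distance threshold expressed in the units of the input cloud). It returns the near-planar region C_0 and the remaining points O^r.

import numpy as np

def ransac_plane(points, dist_thresh=0.5e-3, iters=500, seed=0):
    # fit aX + bY + cZ + d = 0 and keep the plane with the most inliers
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-12:
            continue                                  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers], points[~best_inliers]   # C_0 and the remainder O^r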
This method can extract a nearly planar point cloud region C_0 when the best plane model has been identified. RANSAC does not require complex optimization or high memory resource so that we can obtain C_0 rapidly. However, the remaining point cloud O^r with the size η^r cannot be segmented clearly by this approach since O^r consists of bevels, curved surfaces, and other complex geometrical information.
Traditional K-means clustering methods regard region segmentation as a clustering analysis problem over surface geometric features. They apply the positions and surface normals of the point cloud for segmentation, which is not appropriate for workpieces with large variations in curvature or with many bevels and corners (Li et al., li2018leaf; Liu et al., liu2020method). Therefore, additional factors should be considered to describe the features of the object. An enhanced K-means clustering method is proposed in this paper to process O^r. In the standard K-means method, the number of clusters N dramatically affects the performance, and many trials are required to find a near-optimal N in some classical methods (Juang & Wu, WOS:000290138700014). In the developed method, we apply not only the corresponding surface normals n_i^r=[n_ix^r,n_iy^r,n_iz^r] of the points in O^r but also the Gaussian curvature K_i^r and the mean curvature H_i^r of each point p_i^r in O^r as the inputs of the enhanced K-means clustering. Besides, a feasible weighting factor ω among n_i^r, K_i^r, and H_i^r is determined through many manual experiments. K_i^r is the product of the principal curvatures of p_i^r, combining the maximum and minimum normal curvatures. A positive Gaussian curvature value means the surface is locally either a summit or a valley, a negative value indicates that the surface locally consists of saddle points, and zero Gaussian curvature indicates that the surface is flat in at least one direction, like a plane or a cylinder (Li et al., li2019automated). In mathematics, the mean curvature of a surface describes the curvature of an embedded surface in Euclidean space or other ambient spaces. The curvature of a point can be represented by c_i^r=[K_i^r,H_i^r]. By adding these two parameters to the enhanced K-means method, the clustering quality can be improved, and the geometric feature of a point of O^r is represented as I_i^r = [n_i^r,c_i^r]. Besides, we present a method to automatically adjust N, since N affects the result of the classification and traditional techniques set one fixed N, whose drawback is poor flexibility. The algorithm relies on a two-looped 1D search, with the inner loop for similarity comparison and the outer loop for iterating N. The iteration ends when the largest intra-class difference is smaller than a threshold T. The entire procedure of this enhanced K-means method is illustrated in Algorithm <ref>.
For the outer loop, we represent the feature vectors of the N-cluster set as
Q_j=[q_n,q_c]
q_n=[q_1,q_2,q_3]
q_c=[q_4,q_5]
Q_j is a 5-dimensional vector (j=1,2,...,N). All of them can be initialized with random values. Afterward, the procedure enters the inner loop, which is composed of two steps: 1) similarity comparison and 2) updating. In the first step, cosine similarity, which is widely used in data analysis as a measure of similarity between two sequences of numbers (Kiricsci et al., kiricsci2022new), is used to assess the similarity between I_i^r and Q_j. The similarity α _ij is defined as follows:
α _ij =ω _1cos(n_i^r · q_n / | n_i^r | · | q_n | )+ω _2cos(c_i^r · q_c / | c_i^r | · | q_c | )
where ω _1 and ω _2 are the weighting factors for α _ij, and they are set as 0.6 and 0.4 respectively in this method according to many experiments.
Then, the method finds the cluster C_j with the smallest α _ij and assigns the corresponding p_i^r and I_i^r to it. The next step is to determine whether the classification has met the termination condition. For each cluster C_j, the termination parameter λ _j is calculated from the maximum intra-class difference D_j as:
λ _j = { 0, D_j>T;  1, otherwise },    D_j = max_i α _ij
β _t represents the sum of λ _j over every region C_j at iteration t. If β _t = N, the current segmentation is satisfactory and the algorithm can stop iterating. Otherwise, the procedure continues. At this stage, the search direction should be considered, since the method includes two loops: the inner one, which compares similarities and clusters for a given N, and the outer one, which increases the value of N gradually. The change relies on the behavior of β _t. If the performance deteriorates at iteration step t (i.e., β _t is smaller than β _t-1), the inner loop must stop immediately and a new outer loop starts with N←N+1, because the current N is not ideal. If the performance improves (i.e., β _t is larger than β _t-1), the search within the inner loop continues.
Before switching to the next inner iteration, all feature vector Q_j=[q_n,q_c] are updated to improve the representation level:
q_n = (1/η _j∑_i=1^η _j n_ij) / ‖1/η _j∑_i=1^η _j n_ij‖
q_c = (1/η _j∑_i=1^η _j c_ij) / ‖1/η _j∑_i=1^η _j c_ij‖
where n_ij and c_ij are the i-th normal and curvature feature vectors in C_j, respectively, and η_j is the size of C_j.
The proposed algorithm only takes the limited features of the region C_j into consideration, which can lead to high spatial sparsity of the clustered points within the same region. Therefore, Euclidean cluster extraction is implemented as a post-processing step to verify whether it is necessary to subdivide the region C_j into two new regions according to the locations of the points in it.
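The two-loop clustering described above can be sketched as follows (Python/NumPy). This sketch is a simplified illustration: the names are hypothetical, the β_t-based early switch between the loops is omitted, and the assignment follows the text in treating α_ij as an intra-class difference to be minimized.

import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def enhanced_kmeans(normals, curvatures, T=0.64, w=(0.6, 0.4), n_max=20, max_inner=50, seed=0):
    normals, curvatures = np.asarray(normals), np.asarray(curvatures)
    m = len(normals)
    rng = np.random.default_rng(seed)
    labels = np.zeros(m, dtype=int)
    for N in range(2, n_max + 1):                       # outer loop: grow the cluster number N
        q_n, q_c = rng.normal(size=(N, 3)), rng.normal(size=(N, 2))
        for _ in range(max_inner):                      # inner loop: assign and update
            alpha = np.array([[w[0] * cos_sim(normals[i], q_n[j]) +
                               w[1] * cos_sim(curvatures[i], q_c[j])
                               for j in range(N)] for i in range(m)])
            labels = alpha.argmin(axis=1)               # smallest alpha_ij, following the text
            D = np.array([alpha[labels == j, j].max() if np.any(labels == j) else 0.0
                          for j in range(N)])
            if np.all(D <= T):                          # beta_t == N: every lambda_j equals 1
                return labels, N
            for j in range(N):                          # update the feature vectors Q_j
                if np.any(labels == j):
                    q_n[j] = normals[labels == j].mean(axis=0)
                    q_n[j] /= np.linalg.norm(q_n[j]) + 1e-12
                    q_c[j] = curvatures[labels == j].mean(axis=0)
                    q_c[j] /= np.linalg.norm(q_c[j]) + 1e-12
    return labels, n_max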
§.§ Adaptive ROI Based Path Planning
The local paths are generated according to the proposed planning method, which takes the segmented regions C_j as input. Because the scanning inspection of the line camera is synchronized with the robot's motion, every viewpoint in these local paths should be produced through a feasible method for accurate detection, and all local paths together are required to cover every region C_j of the workpiece. Hence, this part presents an adaptive ROI method for generating local paths, which aims to adapt scan paths and viewpoints to the various shapes of objects.
Since the scanning sensor captures a horizontal line image, the scanning coverage can be thought of as a cuboid when the system is moving linearly, which contains the DOV V_D, the FOV V_F, and the moving direction V_L(see Fig. <ref>). Besides, the key of this approach is to determine the position μ =[x,y,z] and pose i=[d⃗,l⃗] of the viewpoints (v^p,v^p*) at both ends of a local path G_t,t=1,2,...,U. The pose i is described by the direction d⃗ of V_D and the direction l⃗ of V_L.
To make the geometric scanning model effective and keep the accuracy of this system, our algorithm further segments every C_j into 3 sub-regions W_jf, f=1,2,3. Due to the irregular shape of each C_j, we stipulate that C_j is divided evenly into the 3 sub-regions W_jf along the direction k⃗ of the longest extent of C_j, and the scanning motion is also along k⃗ for every area (l⃗=k⃗). In addition, we define d⃗ as the reverse direction of the surface normal w⃗_jf of W_jf (d⃗=-w⃗_jf).
Thus, the corresponding μ_1, μ_2 are located at:
μ = τ - w⃗_jf · |V_D|
The center of the sub-region W_jf is denoted as c_jf=[c_x,c_y,c_z], and the intersections τ_1, τ_2 of the edge of W_jf with the line through c_jf along k⃗ are taken as the inspection points of the viewpoints v^p, v^p* at both ends of the local path G_t on this sub-region surface. |V_D| is the magnitude of V_D.
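The viewpoint construction for one sub-region can be summarized by the short sketch below (Python/NumPy; the function name is hypothetical, and approximating the edge intersections τ_1, τ_2 by the extreme projections of the sub-region points onto k⃗ is an assumption of this sketch).

import numpy as np

def local_path(sub_points, sub_normals, k_dir, dov):
    # returns the two viewpoints (position, d, l) of one local path G_t
    sub_points = np.asarray(sub_points)
    k = np.asarray(k_dir) / np.linalg.norm(k_dir)
    w = np.asarray(sub_normals).mean(axis=0)
    w = w / np.linalg.norm(w)                       # mean surface normal of W_jf
    c = sub_points.mean(axis=0)                     # sub-region center c_jf
    s = (sub_points - c) @ k                        # signed position along the scan direction
    tau1, tau2 = c + s.min() * k, c + s.max() * k   # approximate edge intersections
    mu1, mu2 = tau1 - w * dov, tau2 - w * dov       # mu = tau - w * |V_D|
    return (mu1, -w, k), (mu2, -w, k)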
§.§ PSO-based global path optimization
Based on the local path definition in the previous step, we need to find an optimal sequence of all local paths to generate a complete scanning path for the whole free-form workpiece surface. We should consider how to minimize the robot's total motion time under a constant sensor velocity during the inspection task. According to the practical requirements, the robotic manipulator should complete the scanning inspection task by passing through all pre-defined viewpoints. This sequence optimization problem can be regarded as a Traveling Salesman Problem (TSP) whose goal is to obtain a path with the shortest time (Claro et al., claro2023energy). The TSP is a combinatorial optimization problem and is nondeterministic polynomial time (NP)-hard. The global path planning problem can be formulated as
min{∑_t=1^U∑_s=1^U-1 T_t^scanning+T_s^across}
where T_t^scanning is the time cost of traversing local path G_t, T_s^across is the time cost of moving from G_t to G_t+1, and U represents the total number of local paths. Since the robot manipulator's end-effector moves at a constant speed, the time cost is determined by the straight-line distance between two viewpoints. In contrast to the general TSP, our scenario requires the two adjacent viewpoints of the same local path to be traversed sequentially to ensure optimal inspection performance. This constraint is imposed by the region segmentation and the adaptive ROI local path definition, and it can be summarized as
T_t^scanning(G_t) ∈ { T(v_t^p→ v_t^p*), T(v_t^p*→ v_t^p) }
T_s^across(G_t,G_t+1) ∈ { T(v_t^p→ v_t+1^p), T(v_t^p→ v_t+1^p*), T(v_t^p*→ v_t+1^p*), T(v_t^p*→ v_t+1^p) }
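For a candidate visiting order, the objective above can be evaluated while choosing, for each local path, which endpoint to enter first. A small sketch is given below (Python/NumPy; the function name and the dynamic-programming treatment of the two traversal directions are assumptions of this illustration). Such an evaluation can serve as the fitness function inside the PSO described next.

import numpy as np

def tour_time(local_paths, order, speed):
    # local_paths[i] = (v_p, v_p*), each a 3-D point; order is a sequence of path indices
    d = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b)) / speed
    vp, vps = local_paths[order[0]]
    # cost[k]: best total time so far if the current path was exited at exit_pts[k]
    cost = [d(vp, vps), d(vps, vp)]
    exit_pts = [vps, vp]
    for idx in order[1:]:
        vp, vps = local_paths[idx]
        scan = d(vp, vps)
        new_cost, new_exit = [], [vps, vp]
        for entry in (vp, vps):                      # enter at one endpoint, exit at the other
            new_cost.append(min(cost[k] + d(exit_pts[k], entry) for k in range(2)) + scan)
        cost, exit_pts = new_cost, new_exit
    return min(cost)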
Prior studies on this problem include branch and bound, linear programming, and dynamic programming methods (Shang et al., shang2020co; Xu et al., xu2022path). However, with an increasing number of targets, the computation of a feasible path becomes exponentially more difficult, and obtaining the global optimal solution becomes more challenging. Different heuristic algorithms have been developed for the TSP, including Simulated Annealing, Genetic Algorithms, Ant Colony Optimization, the A* algorithm, etc. (Abualigah & Diabat, abualigah2022improved; Ghali et al., ghali2023genetic). In the proposed method, a PSO-based approach is used to solve the TSP owing to its general flexibility. After selecting the shortest path, the optimal global path sequence is acquired in this step.
In PSO (Karim et al., karim2021hovering), a swarm of particles is used to describe the possible solutions. Every particle ξ is associated with two vectors in D-dimensional space, i.e.,
the velocity vector V_ξ=[V_ξ^1,V_ξ^2,...,V_ξ^D] and the position vector X_ξ=[X_ξ^1,X_ξ^2,...,X_ξ^D]. Both of them are initialized with random vectors. During the PSO process, the velocity and position of particle ξ on dimension d are updated as (Zhan et al., zhan2009adaptive):
V_ξ^d= ω V_ξ^d+c_1rand_1^d(pBest_ξ-X_ξ^d)
+ c_2rand_2^d(gBest-X_ξ^d)
X_ξ^d= X_ξ^d+V_ξ^d
where ω represents the inertia weight, c_1 and c_2 are acceleration coefficients, and rand_1^d and rand_2^d are random numbers uniformly distributed within [0,1]. pBest_ξ is the position with the best fitness value found by the ξth particle, and gBest is the best position found globally.
The main steps of PSO are:
* Initialize all particles, including their velocity and position.
* Establish the fitness function and calculate the fitness value of each particle.
* Update the pBest_ξ and gBest.
* Update the velocity and position of each particle according to (10) and (11).
* Increase the number of iterations, go to step 3, and repeat until the termination condition is met. A sketch of these steps is given after this list.
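The sketch below follows the update rules and steps above (Python/NumPy; the parameter values, the function name, and the random-key decoding of a continuous particle into a viewpoint sequence are assumptions of this illustration rather than details taken from the paper).

import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # standard PSO minimizing a scalar fitness over continuous positions in [0,1]^dim
    rng = np.random.default_rng(seed)
    X = rng.random((n_particles, dim))
    V = np.zeros((n_particles, dim))
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)   # velocity update
        X = X + V                                               # position update
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# a possible fitness for the sequencing problem (random-key decoding, an assumption):
# order = np.argsort(particle); fitness = lambda x: tour_time(local_paths, np.argsort(x), speed)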
§ CASE STUDY
To illustrate the performance of the proposed method, we provide two case studies for simulation tests (Case 1: a camera lens; Case 2: a computer fan) and two case studies for experimental evaluation (Case 3: a tablet back cover; Case 4: the upper part of a computer mouse) on 3C component surface inspection. A state-of-the-art CPP method is also used for comparison with the developed method in the subsection “Comparative analysis and verification".
§.§ Case study setup
Fig. <ref> shows the experimental setup for evaluating the proposed methods.
A custom-made end-effector housed the defect inspection system consisting of a line scanning sensor (Hikvision MV-CL041-70GM camera) and a uniform line illumination source (TSD-LSH230200-B from TSD company).
The Intel RealSense L515 LiDAR camera was mounted on the top of the workspace to capture the real-time stream of point clouds.
The pose of the workpiece was estimated using the point clouds from LiDAR.
An analog control box with a high-power strobe ensures an adjustable and stable voltage for the light source.
The system consisted of a UR5 manipulator from Universal Robots to manipulate the end-effector in order to scan the workpiece automatically.
The entire automated line scanning framework is based on ROS on a Linux PC, which can simultaneously monitor the sensors (line scanner, depth sensor) and control the actuator (manipulator).
The line velocity and acceleration of the manipulator's end-effector were empirically set to 0.05 m/s and 0.5 m/s^2, respectively.
During trajectory execution, the robot manipulator followed a constant line speed to maintain consistency of image acquisition (the acquisition line rate of the scanner is 3000 line/s).
Table <ref> summarizes the other parameters for the line scanning system used for the experiment.
§.§ Path generation and defect inspection
Fig. <ref> presents four 3C component models.
Each 3D mesh model (or CAD model) was converted into a point cloud to identify the geometrical features through uniform and random sampling (Arias-Castro et al., WOS:000237574800012), as shown in Fig. <ref>.
Some geometrical features, such as surface normals, Gaussian curvature, and mean curvature, are computed by a point cloud processing software named CloudCompare (Tang et al., 10081460).
Then, the point cloud was inputted into the proposed method for estimating the scanning path.
The similarity threshold T should be selected before region segmentation.
If T is large, the segmentation process needs more computation time to cluster the point cloud, which could reduce the overall clustering efficiency.
On the contrary, a smaller value of T groups the different features into the same cluster C_j, which degrades the segmentation accuracy.
Consequently, the selection of T must balance segmentation accuracy and computational efficiency.
A value of T = 0.64 was found to be near-optimal by trial and error.
The results from the hybrid segmentation method are shown in Fig. <ref>, where the different colors indicate various segmented regions (or clusters).
Here, the methods used RANSAC to cluster the plane region.
In Case 3 and Case 4, a significant portion of the planar/near-planar region has been grouped in one cluster, as shown in Fig. <ref>(c).
Initial clustering using RANSAC significantly reduces the processing time.
After the hybrid unsupervised region segmentation, the surfaces with similar geometric features were clustered together.
Fig. <ref> shows the four geometrically diverse workpieces, and each is divided into different regions based on the features.
Some segmentation errors remain due to the uncertain nature of the computed features, but they do not affect the scanning path generation.
With adaptive ROI-based path planning and PSO-based global path generation, a complete and near-optimal inspection path can be produced, as visualized in Fig. <ref>. The number of viewpoints is 48, 48, 42, and 30 in Cases 1-4, respectively, and the viewpoints are displayed as frames showing the pose of the robot's end-effector during the inspection task. The global path is denoted by a black line, and every segmented region has a corresponding local path. The different viewpoints are connected by straight lines in the optimal sequence. The robot motion follows this detection path to achieve full coverage of the object.
We input the inspection paths to the automatic line scanning system to scan the tablet back cover and upper part of computer mouse in order to mimic the real defect inspection, as illustrated in Fig. <ref>.
Fig. <ref> illustrates the surface defects of these two objects.
Since each segmented region has similar geometric features and feasible viewpoints can be selected by the ROI-based method according to the parameters of the line-scan camera, surface defects can be acquired clearly, even where they are easy for a human eye to miss, such as at corners and on curved surfaces.
The proposed method can effectively conduct region segmentation, local path planning, and global path optimization, enabling precise surface defect inspection and further process optimization for the 3C industry.
§.§ Comparative analysis and verification
To further validate the proposed CPP method, a cutting-edge line scanning CPP method for convex specular surface inspection (Huo et al., huo2021sensor) is applied as a benchmark for comparative analysis. In that method, traditional K-means clustering is used for region segmentation, and the final path is produced through a local optimization method, the nearest neighbor search (Arya et al., arya1998optimal).
There are five comparison criteria: region segmentation time, total number of viewpoints, length of the global inspection path, total inspection time, and surface defect detection rate. Segmentation time was used as a measure of efficiency for region segmentation methods. The inspection path length and total detection time served as indicators of overall path efficiency in CPP methods. The surface defect detection rate provided insights into the actual effectiveness of defect acquisition, reflecting the accuracy of region segmentation and the quality of path planning. Additionally, when defect results or coverage rates were similar, preference was given to the CPP method that generated fewer viewpoints as it was considered a more viable path planning approach (Liu et al., liu2020optimal).
The comparison results are shown in Fig. <ref>. In terms of region segmentation time, the proposed method used less time to finish this procedure. Owing to the use of RANSAC and more geometric features, the proposed method can obtain the sub-regions with planar/near-planar geometry efficiently. As for the viewpoints, our approach produces fewer viewpoints because of more accurate region segmentation results and concise ROI generation. Conversely, the convex specular surface inspection method employed a more complex iteration process for viewpoint determination, as it struggled to precisely segment objects with intricate geometries. When comparing inspection path length and time, our method outperformed the benchmark approach. The benchmark utilized a local optimization solution, namely nearest neighbor search, which fell short of generating a feasible global inspection path for CPP. In contrast, our PSO-based method effectively addressed the TSP with reasonable optimization goals and feasible viewpoints. Although our approach's surface defect detection rate is only slightly better, the presented method can finish the inspection task in less time and with shorter paths. Based on this comprehensive comparison, our proposed CPP method stands as a superior choice over the state-of-the-art line scanning inspection method. Consequently, the proposed method presents a valuable and feasible solution for CPP in surface defect inspection.
§ CONCLUSION
This paper proposes a systematic framework for an inspection CPP method for 3C component surfaces. Within this framework, a high-resolution line scanning sensor, mounted on a multi-DOF robotic manipulator, can execute surface scanning and detection precisely and flexibly. The developed methodology includes (1) a new hybrid region segmentation method based on the RANSAC and K-means clustering methods; (2) an adaptive ROI method to define the local measurement paths; and (3) a PSO-based global optimization approach for minimum inspection time. Four case studies verify the effectiveness and efficiency of this method. The results show that it outperforms the state-of-the-art line scanning CPP method in the comparison. Overall, the proposed method can achieve precise and efficient surface inspection for 3C free-form components. It can be applied in the 3C industry and be extended to inspect other structures such as auto spare parts and industry-standard components.
However, it should be noted that the proposed method may encounter challenges when applied to workpieces with complex structures, making it less suitable for parts with intricate shapes. Future research should focus on optimizing the design of the system end-effector to enhance the flexibility of the inspection framework. Additionally, exploring mathematical methods for optimal path planning and investigating the potential of information theory and deep learning techniques, such as convolutional neural networks, could further improve the effectiveness of the segmentation method.
*Supplementary information
The following video demonstrates the performance of the proposed method with simulations and experiments: https://vimeo.com/842785212 https://vimeo.com/842785212.
*Funding This work was supported by the grant from Shanghai Microintelligence Technology Co. Ltd (No. P21-0078).
§ DECLARATIONS
*Competing interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
*Data availability statement
The data underlying this article will be shared on reasonable request to the corresponding author.
*Authors' contribution
Hongpeng Chen: Conceptualization, Methodology, Software, Validation, Writing – original draft. Shengzeng Huo: Software, Validation, Writing – review & editing. Muhammad Muddassir: Conceptualization, Validation, Writing – review & editing. Hoi-Yin Lee: Video Making, Validation, Writing – review & editing. Anqing Duan: Methodology, Data curation, Writing – review & editing. Pai zheng: Supervision, Resources, Conceptualization. Hongsheng Pan: Resources, Funding acquisition, Writing – review & editing. David Navarro-Alarcon: Supervision, Resources, Conceptualization, Methodology, Funding acquisition, Writing – review & editing.
|
http://arxiv.org/abs/2307.04372v1 | 20230710070518 | New results on the dynamics of critical collapse | [
"Jun-Qi Guo",
"Yu Hu",
"Pan-Pan Wang",
"Cheng-Gang Shao"
] | gr-qc | [
"gr-qc"
] |
[email protected] of Physics and Technology, University of Jinan, Jinan 250022, Shandong, China
[email protected] Key Laboratory of Fundamental Physical Quantities Measurement, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF, and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
[email protected]
[email protected]
We study the dynamics of critical collapse of a spherically symmetric scalar field. Approximate analytic expressions for the metric functions and matter field in the large-radius region are obtained. It is found that because of the boundary conditions at the center, in the central region, the equation of motion for the scalar field is reduced to the flat-spacetime form. On the other hand, due to the connection to its neighbouring region where gravity plays an important role, the scalar field in the central region feels the gravitational effects indirectly.
§ INTRODUCTION
The critical phenomena in gravitational collapse discovered by Choptuik demonstrate the rich dynamics of the Einstein equations <cit.>. Consider the gravitational collapse of generic families of massless scalar fields whose initial data are parameterized by p. The parameter p measures the strength of the gravitational interaction. Strong interactions (high p) result in black hole formation, while weak interactions (low p) let the matter field disperse to infinity, leaving flat spacetime behind. By fine-tuning p to the threshold of black hole formation, p=p_*, critical collapse occurs.
In supercritical collapse, a tiny black hole forms whose mass obeys a scaling relation, m_BH∝|p-p_*|^γ, where γ≃ 0.37. The critical collapse solution shows a universality feature. Namely, the spacetimes produced by different families of critical initial data approach the same solution after a finite time in a finite region. The solution also displays discrete self-similarity: it is invariant under rescaling the spacetime by a certain factor.
After the discovery, similar results have been obtained in many other models (see Ref. <cit.> for review). Recently, further results on simulations were reported in Refs. <cit.>.
Analytic interpretations are important for understanding the dynamics of gravitational collapse. In Refs. <cit.>, critical collapse was treated as an eigenvalue problem. By imposing discrete self-similarity, the global structure of the critical collapse spacetime was constructed with the pseudo-Fourier method. The rescaling factor Δ becomes an eigenvalue and was solved with high precision. The scaling law of the black hole mass in supercritical collapse was recovered analytically via perturbation approach in Ref. <cit.>. Critical collapse was analyzed with a renormalization group method in Refs. <cit.>. In Ref. <cit.>, with an explicit approximate solution, a true solution was shown to exist. In Ref. <cit.>, using one typical log-periodic formula in discrete scale invariance systems, the authors obtained one approximate analytic solution for the spacetime near the center. Approximate analytic expressions for the metric functions and matter field near the central singularity in black hole formation were obtained in Refs. <cit.>. In Ref. <cit.>, the equations for the matter field in critical collapse were analyzed with certain terms in the equations being dropped. Approximate expressions for certain combinations of the metric functions and derivatives of the scalar field were obtained.
In this paper, considering the significance of analytic results, with numerical data, we obtain approximate analytic expressions for the metric functions and matter field in the large-radius region. We also investigate the dynamics in the central region. We find that due to the boundary conditions at the center, the equation of motion for the scalar field in the central region is reduced to the flat-spacetime form.
This paper is organized as follows. In Sec. <ref>, we describe the methodology for simulating critical collapse. In Secs. <ref> and <ref>, we study the dynamics in the large-radius and central regions, respectively. The results are summarized in Sec. <ref>.
§ METHODOLOGY
We consider critical collapse of a spherically symmetric massless scalar field ϕ. Take the polar coordinates,
ds^2=-A(r,t)e^-2δ(r,t)dt^2+1/A(r,t)dr^2+r^2dΩ^2.
Then the equations can be written as
A_,r=1-A/r-4π rA(P^2+Q^2),
δ_,r=-4 π r(P^2+Q^2),
Q_,t=(A e^-δ P)_,r,
P_,t=1/r^2(r^2 A e^-δ Q)_,r,
A_,t=-8π rA^2e^-δPQ,
where Q(r,t)≡ϕ_,r, and P(r,t)≡ A^-1 e^δϕ_,t. The (_,r) and (_,t) denote partial derivatives with respect to the coordinates r and t, respectively. The Misner-Sharp mass is defined as <cit.>
m≡r/2(1-g^μνr_,μr_,ν)=r/2(1-A).
The initial conditions for ϕ are set up as ϕ|_t_i=aexp[-(r/σ)^2] and ϕ_,t|_t_i=0. The regularity of Eq. (<ref>) at the center requires that A(r=0,t)=1. We choose δ(r=0,t)=0, which implies that the coordinate time is equal to the proper time at the center. In the simulation, we integrate Eqs. (<ref>)-(<ref>) with the fourth-order Runge-Kutta method, and a mesh-refinement algorithm is implemented. For details on the numerics, see Ref. <cit.>.
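For concreteness, the following is a minimal sketch of such an evolution step in Python. It is not the code used for this work: it assumes a uniform radial grid that avoids r=0, uses simple centered differences (numpy.gradient) for radial derivatives, integrates the constraint equations with a first-order scheme, and omits mesh refinement; all function and variable names are illustrative.

```python
import numpy as np

def constraints(r, P, Q):
    """Integrate the radial ODEs for A and delta outward, with A=1, delta=0 near the center."""
    A, delta = np.ones_like(r), np.zeros_like(r)
    for i in range(1, len(r)):
        dr = r[i] - r[i - 1]
        src = 4.0 * np.pi * r[i - 1] * (P[i - 1] ** 2 + Q[i - 1] ** 2)
        # A_,r = (1 - A)/r - 4*pi*r*A*(P^2 + Q^2),  delta_,r = -4*pi*r*(P^2 + Q^2)
        A[i] = A[i - 1] + dr * ((1.0 - A[i - 1]) / r[i - 1] - src * A[i - 1])
        delta[i] = delta[i - 1] - dr * src
    return A, delta

def rhs(r, P, Q):
    """Q_,t = (A e^-delta P)_,r  and  P_,t = (r^2 A e^-delta Q)_,r / r^2."""
    A, delta = constraints(r, P, Q)
    f = A * np.exp(-delta)
    dQ = np.gradient(f * P, r)
    dP = np.gradient(r ** 2 * f * Q, r) / r ** 2
    return dQ, dP

def rk4_step(r, P, Q, dt):
    """One fourth-order Runge-Kutta step for the pair (P, Q)."""
    k1Q, k1P = rhs(r, P, Q)
    k2Q, k2P = rhs(r, P + 0.5 * dt * k1P, Q + 0.5 * dt * k1Q)
    k3Q, k3P = rhs(r, P + 0.5 * dt * k2P, Q + 0.5 * dt * k2Q)
    k4Q, k4P = rhs(r, P + dt * k3P, Q + dt * k3Q)
    P_new = P + dt / 6.0 * (k1P + 2 * k2P + 2 * k3P + k4P)
    Q_new = Q + dt / 6.0 * (k1Q + 2 * k2Q + 2 * k3Q + k4Q)
    return P_new, Q_new

# Gaussian initial data: phi = a exp[-(r/sigma)^2], phi_,t = 0, so Q = phi_,r and P = 0.
r = np.linspace(1e-3, 10.0, 2000)
a, sigma = 0.01, 1.0
phi = a * np.exp(-(r / sigma) ** 2)
P, Q = np.zeros_like(r), np.gradient(phi, r)
P, Q = rk4_step(r, P, Q, dt=1e-4)
```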
§ RESULT I: DYNAMICS IN THE LARGE-RADIUS REGION
Rewrite the metric (<ref>) as
ds^2=-α^2(r,t)dt^2+β^2(r,t)dr^2+r^2dΩ^2.
For convenience, we adjust the time coordinate, such that t=0 when the naked singularity forms.
Define the variables, X(r,t)≡√(2π)(r/β)ϕ_,r, Y(r,t)≡√(2π)(r/α)ϕ_,t,
ρ≡ln r, T≡ln(-t), and u≡ t/r. Then the equations for ϕ (<ref>) and (<ref>) can be respectively rewritten as
(β X)_,u=-α Y + (α Y)_,ρ -u(α Y)_,u,
(β Y)_,u=α X + (α X)_,ρ-u(α X)_,u.
In critical collapse, the period in terms of the coordinate time t is exponentially decreasing. Consequently, the metric functions and matter field in the late stage of collapse and large-radius region for which |t/r|≪ 1, appear to be frozen, rather than propagating <cit.>. In Ref. <cit.>, the authors made one ansatz that in this region the last terms in Eqs. (<ref>) and (<ref>) are negligible in comparison with the first ones. Moreover, treating α and β as constants, the authors obtained the following solutions:
X≈ Bsin[ω(ρ-α u)-γ], Y≈ Bsin[ω(ρ-α u)],
where 1+ω^-2=β^2, sinγ=(ωβ)^-1, and cosγ=-β^-1. The expressions (<ref>) match well with the numerical results. However, some treatments in the above have not been fully justified. In addition, although the approximate expressions for X and Y were obtained, the results for the metric functions and scalar field remain absent. We address such issues below.
In Ref. <cit.>, some terms in Eqs. (<ref>) and (<ref>), -u(α Y)_,u, -u(α X)_,u, β_,uX, α_,ρY, β_,uY and α_,ρX, were dropped. Actually, as shown in Figs. <ref> and <ref>, in the large-radius region (r>10^-3), the absolute values of the terms, -uα_,uY, -uα_,uX, α_,ρY and α_,ρX, can sometimes be greater than the absolute values of other terms. On the other hand, the terms dropped approximately cancel. Consequently, the equations constructed by the remaining terms roughly hold,
β X_,u≈-α Y + α Y_,ρ,
β Y_,u≈α X + α X_,ρ.
So from this point of view, the treatments in Ref. <cit.> are effectively valid.
Motivated by the expressions for X and Y (<ref>) and the numerical results for ϕ, we find that the field ϕ admits the following approximate expression:
ϕ(r,t)≈ C_1(1+C_2[H(r,t)])cos(ωln r + C_3[H(r,t)] + φ_0 ).
The quantity [H(r,t)] has the following features:
* For [H(r,t)], there is
[H(r,t)]=H(r,t)≡ωα t/r=ω A^1/2e^-δt/r.
* For ϕ_,t, there is
ϕ_,t≈ C_1√(C_2^2+C_3^2)[H]_,tcos(ωln r +C_3[H]+φ_0+φ_1),
where tanφ_1≡ C_3/C_2. Regarding the quantity H_,t(=ωα/r+ωα_,tt/r), the numerical results show that |ωα_,tt/r| is sometimes greater than ωα/r. However, comparing the expression (<ref>) with the numerical results for ϕ_,t, we always obtain
[H]_,t≈ωα/r=ω A^1/2e^-δ1/r.
This implies that in [H]_,t the contribution from ωα_,tt/r is negligible. This should be related to the fact that the respective reductions of Eqs. (<ref>) and (<ref>) to (<ref>) and (<ref>) are equivalent to treating α and β as constants.
* The numerical results in Fig. <ref>(a) show that in the large-radius region, the equation of motion for ϕ (<ref>) is reduced to
A^-1e^δϕ_,tt≈-(A^-1e^δ)_,tϕ_,t.
Using Eq. (<ref>) and the numerical results of |δ_,t|≫ |A_,t|, we have
ϕ_,tt≈-δ_,tϕ_,t.
Combination of Eqs. (<ref>), (<ref>), (<ref>) and the numerical results of |δ_,t|≫ H_,t generates
[H]_,tt≈ωα_,t/r≈ -δ_,t[H]_,t.
Namely, the dynamical feature of α only begins to take effect at the level of [H]_,tt.
* At the late stage of critical collapse, in the large-radius region for which |t/r|≪ 1, there are |H|≪|ωln r| and |H_,r|≪ 1/r. Therefore, with Eq. (<ref>), [H] mainly contributes to the temporal derivatives of ϕ, rather than to the field ϕ and its spatial derivatives.
The numerical results show that C_1≈0.058, C_2^2+C_3^2≈ 1, and φ_1≈ 1.08. As shown in Figs. <ref>(a) and <ref>(b), the expressions for ϕ (<ref>), ϕ_,t (<ref>) and ϕ_,tt (<ref>) agree well with the numerical results.
With Eqs. (<ref>) and (<ref>), one can rewrite Eq. (<ref>) as
1/A∂ A/∂ t =-8π rϕ_,tϕ_,r
≈ C_4[H]_,t[sin(2ωln r + 2C_3[H] + 2φ_0 + φ_1 ) - sinφ_1],
where C_4=4πωC_1^2√(C_2^2+C_3^2). Via integration, we have
ln A≈ -C_4/2C_3cos(2ωln r + 2C_3[H] + 2φ_0 + φ_1 )
- C_4sinφ_1[H] + C_5.
Then using Eq. (<ref>) and the fact that |H|≪ 1, we obtain
m/r≈ C_6cos(2ωln r+2C_3[H]+2φ_0+φ_1)+C_7,
where C_6≈ e^C_5C_4/(4C_3)≈ e^C_5πωC_1^2√(C_2^2+C_3^2)/C_3, and C_7=(1/2)(1-e^C_5). As shown in Fig. <ref>(c), the expression for m/r (<ref>) matches well with the numerical results. The fitting results are C_6=0.013360± 0.000009≈ 1/75, and C_7≈ 0.065480± 0.000007≈ 1/15.
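As an illustration of how constants such as C_6 and C_7 could be extracted, the following is a minimal least-squares sketch. It assumes arrays r and m_over_r taken from the numerical solution in the large-radius region and treats the slowly varying phase contribution 2C_3[H] as a known (nearly negligible) input; the function and variable names are illustrative, not the actual fitting procedure used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_m_over_r(r, m_over_r, omega, two_C3_H, phi0, phi1):
    """Fit m/r ~ C6*cos(2*omega*ln r + 2*C3*[H] + 2*phi0 + phi1) + C7 with C6, C7 free."""
    phase = 2.0 * omega * np.log(r) + two_C3_H + 2.0 * phi0 + phi1
    model = lambda x, C6, C7: C6 * np.cos(x) + C7
    (C6, C7), cov = curve_fit(model, phase, m_over_r, p0=[0.01, 0.07])
    return C6, C7, np.sqrt(np.diag(cov))
```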
With Eq. (<ref>), one can rewrite Eqs. (<ref>) and (<ref>) as
m_,r=2π r^2 A(P^2+Q^2),
rδ_,r=∂δ/∂ln r=-2/1-2m/rm_,r.
Then the solution for δ can be expressed as
δ≈ C_8ln r +ln(1-2m/r)
+C_9sin(2ωln r+2C_3[H]+2φ_0+φ_1)+δ_0(t),
where C_8≈-2C_7/(1-2C_7)-2C_6^2, and C_9≈-(C_6+8C_6C_7)/ω. As shown in Fig. <ref>(d), the expression for δ (<ref>) matches well with the numerical results.
In Ref. <cit.>, the quantities α and β were treated as constants. The approximate expressions for X and Y obtained in this way agree well with the numerical results. Then it was stated that in this circumstance the spacetime is effectively flat. Actually, X and Y are combinations of the metric functions and derivatives of the scalar field, rather than the scalar field. In order to check whether the spacetime is effectively flat, it may be more appropriate to investigate directly the behavior of the equation of motion for the scalar field (<ref>). As shown in Fig. <ref>(a), in the large-radius region, Eq. (<ref>) is reduced to Eq. (<ref>), which is clearly different from the flat-spacetime form, ϕ_,tt=r^-2(r^2ϕ_,r)_,r. So the spacetime in this region is not effectively flat.
§ RESULT II: DYNAMICS IN THE CENTRAL REGION
As shown in Fig. <ref>(a), in the central region, the absolute values of the terms of (A^-1e^δ)_,tϕ_,t and (A^-1e^δ)_,rϕ_,r in Eq. (<ref>) are much less than the absolute values of A^-1e^δϕ_,tt, Ae^-δϕ_,rr, and (2/r)Ae^-δϕ_,r. Moreover, in this region, A≈1, and δ≈ 0. Consequently, Eq. (<ref>) is reduced to the flat-spacetime form,
ϕ_,tt≈1/r^2(r^2ϕ_,r)_,r.
Regarding Eq. (<ref>), we make the following remarks:
* Equation (<ref>) implies that in the central region, the scalar field ϕ evolves almost as in flat spacetime, not directly feeling the gravitational effects.
On the other hand, as shown in Fig. <ref>, in the transition region located between the central and large-radius ones, the gravitational effects are important for the dynamics of the scalar field. Therefore, due to the connection between the central and transition regions, gravity affects the evolution of the scalar field in the central region indirectly.
* Besides critical collapse, we also check the evolution of the scalar field in two other types of collapse (dispersion and black hole formation), and obtain results similar to (<ref>).
* The result (<ref>) is closely related to the asymptotic behaviors of the metric functions and scalar field near the center. Under the smoothness requirement at the center, the metric functions and scalar field have the following power series expansions near the center <cit.>:
A≈ 1+A_2(t)r^2, δ≈δ_2(t)r^2, ϕ≈ϕ_0(t) + ϕ_2(t)r^2.
With Eqs. (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the following asymptotic expressions:
A_,t≈ -16πϕ_,tϕ_2 r^2, δ_,t≈ -4πϕ_,ttϕ_,t r^2,
ϕ_,t≈ϕ_0'(t)+ϕ_2'(t)r^2, A_,r≈ -(8π/3)(ϕ_,t)^2 r,
δ_,r≈ -4π(ϕ_,t)^2 r, ϕ_,r≈ 2ϕ_2(t)r,
which are also shown in Fig. <ref>. With Eqs. (<ref>) and (<ref>), one can straightforwardly simplify Eq. (<ref>) to (<ref>).
* It is known that in critical collapse, the Ricci curvature scalar R in the central region is very high and will diverge eventually. This fact is not in contradiction with the result (<ref>). For the metric (<ref>), the Ricci curvature scalar can be written as
R= 4Aδ_,r/r - 4A_,r/r + 2Aδ_,rr + 2(1-A)/r^2 - A_,rr - A_,tte^2δ/A^2
+ 3A_,rδ_,r - 2A(δ_,r)^2 + 2(A_,t)^2 e^2δ/A^3 - A_,tδ_,te^2δ/A^2.
With Eqs. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain the asymptotic expressions for all the terms on the right-hand side of Eq. (<ref>):
4Aδ_,r/r≈ -2D, -4A_,r/r≈ (4/3)D, 2Aδ_,rr≈ -D,
2(1-A)/r^2≈ -A_,rr≈ D/3, D=8π(ϕ_,t)^2.
-A_,tte^2δ/A^2≈ 16πϕ_,ttϕ_2r^2,
3A_,rδ_,r≈ 32π^2 (ϕ_,t)^4 r^2,
-2A(δ_,r)^2≈ -32π^2 (ϕ_,t)^4 r^2,
2(A_,t)^2 e^2δ/A^3≈ 512π^2 (ϕ_,t)^2 (ϕ_2)^2 r^4,
- A_,tδ_,te^2δ/A^2≈ -64π^2 ϕ_,tt(ϕ_,t)^2 ϕ_2 r^4.
The first five terms are dominant and have the same order of magnitude as 8π(ϕ_,t)^2, which eventually diverges; the remaining terms are proportional to r^2 or r^4 and are negligible.
As shown in Fig. <ref>(b), the transition region between the central and large-radius regions can be expressed as r∈ [r_1, r_2]. At r=r_1, there is
|C_3H|∼|ωln r|; and at r=r_2, there is |C_3H_,r|∼ω/r.
§ SUMMARY
Analytic solutions are important for understanding the dynamics of gravitational collapse. Due to the complexity of the Einstein equations, seeking the analytic solutions to the equations has been a very difficult task. In the successful circumstances, the equations are usually reduced to ODEs. In critical collapse, the equations remain PDEs, while in the large-radius region and late stage of the evolution, the spatial and temporal contributions are separate to some extent. This enables us to obtain approximate analytic expressions for the metric functions and matter field.
The boundary conditions at the center play a key role in the dynamics of the central region. In this region, due to the boundary conditions, the terms related to the gravitational effects in the equation of motion for the scalar field are negligible, such that the equation is reduced to the flat-spacetime form. On the other hand, in the transition region, gravity is important for the evolution of the scalar field. Consequently, due to the connection between the central and transition regions, the scalar field in the central region feels the gravitational effects indirectly.
§ ACKNOWLEDGMENTS
The authors are very thankful to Xiao-Kai He, Junbin Li and Cheng-Yong Zhang for the helpful discussions. JQG is supported by Shandong Province Natural Science Foundation under
grant No. ZR2019MA068. YH and CGS are supported by the National Natural Science Foundation of China (Grant No. 11925503).
Choptuik:1992jv
M. W. Choptuik,
Phys. Rev. Lett. 70, 9 (1993).
Gundlach:2007gc
C. Gundlach and J. M. Martin-Garcia,
Living Rev. Rel. 10, 5 (2007).
[[gr-qc]0711.4620]
Bizon:2011gg
P. Bizon and A. Rostworowski,
Phys. Rev. Lett. 107, 031102 (2011).
Deppe:2018uye
N. Deppe, L. E. Kidder, M. A. Scheel and S. A. Teukolsky,
Phys. Rev. D 99, 024018 (2019).
[[gr-qc]1802.08682]
Baumgarte:2019fai
T. W. Baumgarte, C. Gundlach and D. Hilditch,
Phys. Rev. Lett. 123, 171103 (2019).
[[gr-qc]1909.00850]
Kelson-Packer:2020hbb
C. Kelson-Packer and J. Belz,
Phys. Rev. D 102, 084050 (2020).
[[gr-qc]2008.06774]
Mendoza:2021nwq
M. F. P. Mendoza and T. W. Baumgarte,
Phys. Rev. D 103, 124048 (2021).
[[gr-qc]2104.03980]
Zhang:2021nnn
C.-Y. Zhang, Q. Chen, Y. Liu, W.-K. Luo, Y. Tian and B. Wang,
Phys. Rev. Lett. 128, 161105 (2022).
[[gr-qc]2112.07455]
Gundlach:1995kd
C. Gundlach,
Phys. Rev. Lett. 75, 3214 (1995).
[gr-qc/9507054].
Gundlach:1996eg
C. Gundlach,
Phys. Rev. D 55, 695 (1997).
[gr-qc/9604019]
Martin-Garcia:2003xgm
J. M. Martin-Garcia and C. Gundlach,
Phys. Rev. D 68, 024011 (2003).
[gr-qc/0304070]
Koike:1995jm
T. Koike, T. Hara and S. Adachi,
Phys. Rev. Lett. 74, 5170 (1995).
[gr-qc/9503007]
Hara:1996mc
T. Hara, T. Koike and S. Adachi,
[gr-qc/9607010]
Reiterer:2012hnr
M. Reiterer and E. Trubowitz,
Commun. Math. Phys. 368, 143 (2019).
[[gr-qc]1203.3766]
Guo:2018yyt
J.-Q. Guo and H. Zhang,
Eur. Phys. J. C 79, 625 (2019).
[[gr-qc]1808.09826]
Guo:2013dha
J.-Q. Guo, D. Wang and A. V. Frolov,
Phys. Rev. D 90, 024017 (2014).
[[gr-qc]1312.4625]
Guo:2020jfa
J.-Q. Guo,
J. Phys. Comm. 5, 075015 (2021).
[[gr-qc]2011.14853]
Price:1996sk
R. H. Price and J. Pullin,
Phys. Rev. D 54, 3792 (1996).
[gr-qc/9601009]
Misner_1964
C. W. Misner and D. H. Sharp,
Phys. Rev. 136, B571 (1964).
Zhang:2016kzg
C.-Y. Zhang, Z.-Y. Tang and B. Wang,
Phys. Rev. D 94, 104013 (2016).
[[gr-qc]1608.04836]
Choptuik_workshop_1993
M. W. Choptuik,
Critical Behaviour in Scalar Field Collapse,
in Proceedings of a NATO Advanced Research Workshop on Deterministic Chaos in General Relativity,
Springer Science+Business Media, LLC. Editors: D. Hobill and A. Burd and A. Coley, 155-175, 1993.
Choptuik:1997mq
M. W. Choptuik,
The (Unstable) threshold of black hole formation,
15th International Conference on General Relativity and Gravitation (GR15),
67–86, 1997.
|
http://arxiv.org/abs/2307.04778v2 | 20230710054331 | Formulating A Strategic Plan Based On Statistical Analyses And Applications For Financial Companies Through A Real-World Use Case | [
"Saman Sarraf"
] | cs.LG | [
"cs.LG",
"cs.CE"
] |
Business statistics play a crucial role in implementing a data-driven strategic plan at the enterprise level to employ various analytics where the outcomes of such a plan enable an enterprise to enhance the decision-making process or to mitigate risks to the organization. In this work, a strategic plan informed by the statistical analysis is introduced for a financial company called LendingClub, where the plan is comprised of exploring the possibility of onboarding a big data platform along with advanced feature selection capacities. The main objectives of such a plan are to increase the company’s revenue while reducing the risks of granting loans to borrowers who cannot return their loans. In this study, different hypotheses formulated to address the company’s concerns are studied, where the results reveal that the amount of loans profoundly impacts the number of borrowers charging off their loans. Also, the proposed strategic plan includes onboarding advanced analytics such as machine learning technologies that allow the company to build better generalized data-driven predictive models.
§ INTRODUCTION
Formulating a strategic plan aligned with a company's business scope allows the company to explore data-driven ways of business improvement and risk mitigation quantitatively while utilizing collected data to perform statistical applications. The company's business leadership generally organizes joint meetings with internal or external data analysis teams to design a plan for executing business-related statistical analysis. Such projects demonstrate in which areas the company should invest and where to adjust the budget for business verticals with low revenue. Furthermore, statistical applications can inform how to improve staff performance in the workplace.
LendingClub, as a peer-to-peer lending company, offers loans and investment products in different sectors, including personal and business loans, automobile loans, and health-related financing loans. LendingClub’s business model comprises three primary players: borrowers, investors, and portfolios for issued loans.
LendingClub aims to expand its statistical analytics, consisting of infrastructure and software algorithm applications, to ultimately develop two meaningful solutions: a) estimating the duration in which clients will pay off their loans; and b) 30-minute loan approval decision-making. To implement these two capabilities, the company has collected data on loans that were granted or rejected over 12 years, including 145 attributes and more than 2 million observations, where 32 features have no missing values across the dataset.
To achieve its ultimate targets, LendingClub performs a statistical analysis of numerous steps to determine whether to accept or reject hypotheses, which enables data scientists and statisticians to select attributes for predictive modeling. LendingClub seeks patterns in the loan data to discover relationships between the loan amount and borrowers who have charged off, as reported by LendingClub <cit.>. The company assumes a potential correlation between the two features, which would establish specific loan criteria for the group of loan applicants who might encounter such an issue. Discovering the correlation enables LendingClub to enhance its risk management portfolio and minimize the risk of losing financial resources, aiming to mitigate the negative impacts of issuing loans to borrowers of this category. Using business statistics, the company seeks proof of concept for the mentioned ideas before recruiting a third-party software developer to implement a standalone product; therefore, the internal data scientists explore various aspects of such data, not limited to the questions listed above <cit.>.
In the first phase, demographic information is extracted from the datasets, and data preprocessing steps, such as data cleaning, are performed to remove any broken data from the database. Next, further investigation of specific data (e.g., type of loans issued, loans issued by region, and a more in-depth analysis of bad loans) is performed <cit.>. In the second phase, which oversees the business perspective, the company’s experts explore the operative side of the business (operational business aspects) and analyze applicants’ income category. The third phase refers to the risk assessment of issuing loans, which consists of four steps: a) identifying existing risks in the business; b) the importance and role of credit scores in the loan approval or denial; c) defining bad loans and risky borrowers; d) loans by default (pre-approved); and e) exploring risks by targeted criteria <cit.>. The ultimate goals of such extensive analysis are to lead LendingClub’s data scientists to explore the feasibility of answering the two questions above based on current data, provide recommendations for data collection, or modify the business scope <cit.>.
§ PROBLEM STATEMENT AND HYPOTHESIS
The problem for this work points to statistical applications in LendingClub, which establishes three hypotheses regarding the relationship between the “Loan Amount” and “Charge OFF Flag” features, where various statistical analyses, including hypothesis testing <cit.> and correlation analysis <cit.>, are employed. The hypotheses are as follows:
* Accepting or rejecting the hypothesis that any relationship exists between the loan amounts and charge-offs
* Accepting or rejecting the hypothesis that any relationship exists between the higher loan amounts and charge-offs
* Accepting or rejecting the hypothesis that any relationship exists between the lower loan amounts and charge-offs
§ STATISTICAL ANALYSIS PIPELINE DESIGN
The problem statement consists of three main components: a) data exploration, b) descriptive analysis of loan duration, and c) real-time (fast) loan approval (or denial). Data exploration includes preprocessing, data cleaning, feature engineering, and feature selection, resulting in a meaningful descriptive analysis and an accurate loan-duration prediction. In the real-time step, various statistical techniques are explored, including hypothesis testing, the Student's T-Test, and ANOVA testing, as well as statistical models such as linear regression, logistic regression, cluster analysis, ANOVA tests, and correlation analysis <cit.>.
§.§ Data Exploration
Missing values are removed from the loan data, and two attributes are extracted from the preprocessed data shown in Figure <ref>: "loanAmnt", which refers to "the listed amount of the loan applied for by the borrower; if, at some point in time, the credit department reduces the loan amount, then it will be reflected in this value", and "debtsettlementflag", which flags whether or not the borrower, who has charged off, is working with a debt-settlement company. The "debtsettlementflag" – a binary feature – is considered a categorical attribute requiring conversion to numerical equivalents for statistical analysis <cit.>. Also, the histogram of loan amounts shows how borrowers are distributed regarding loan amounts.
§.§ Hypothesis Testing
In this experiment, the T-Test is the primary method for deciding whether to accept or reject the hypothesis. A T-Test is a hypothesis-testing method with broad applications in industry due to its simplicity and its convergence capability with a small sample of data <cit.>. Since a T-Test requires only a relatively small subset of data, the loan dataset is shuffled, and a subsample of 1000 observations is randomly selected from the charged-off samples along with 1000 observations randomly selected from the on-time borrowers for further analysis <cit.>. To explore the consistency of the T-Test results, analysis of variance (ANOVA) tests are applied to the same subsets as those used in the previous method. ANOVA tests demonstrate whether such groups exhibit statistically significant differences <cit.>.
§.§ Correlation Analysis
Correlation analysis is applied to the subsets to show the dependency between two features <cit.>. This analysis can indicate whether the loan amount impacts the number of borrowers charged off. Correlation analysis provides additional exposure to the data, which might strengthen the acceptance or rejection of the three hypotheses<cit.>.
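For illustration, the subsampling and the three analyses above can be reproduced with standard tools; the following is a minimal sketch assuming the cleaned loan table is a pandas DataFrame df with a numeric column "loanAmnt" and a 0/1 column "charged_off" (these column names, and the use of scipy rather than G*Power for the tests, are assumptions of the sketch).

```python
import pandas as pd
from scipy import stats

def analyze_subset(df, n_per_group=1000, seed=0):
    # Balanced subsample: 1000 charged-off and 1000 on-time observations.
    off = df[df["charged_off"] == 1].sample(n_per_group, random_state=seed)
    on = df[df["charged_off"] == 0].sample(n_per_group, random_state=seed)

    # Two-sample T-Test on the loan amounts of the two groups.
    t_stat, t_p = stats.ttest_ind(off["loanAmnt"], on["loanAmnt"], equal_var=False)

    # One-way ANOVA on the same two groups.
    f_stat, f_p = stats.f_oneway(off["loanAmnt"], on["loanAmnt"])

    # Point-biserial correlation between the binary flag and the loan amount.
    sub = pd.concat([off, on])
    r, r_p = stats.pointbiserialr(sub["charged_off"], sub["loanAmnt"])

    return {"t_test": (t_stat, t_p), "anova": (f_stat, f_p), "correlation": (r, r_p)}
```

Repeating the call with different seeds mimics the separate random subsets analyzed in the results below.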
§.§ Results Visualization and Interpretation
The results of statistical analysis methods are visualized and interpreted to verify whether the hypotheses are accepted. Also, the visualization of results allows the company’s data scientists to explore whether such outcomes from various techniques converge for decision-making and conclusion purposes.
§ SUMMARY OF RESULTS
To perform an accurate T-Test, several data requirements must be met: a) test variables are continuous; b) test variables (observations) are independent; c) subsets are randomly selected; c) data distribution is approximately normal; d) variance scores of subsets and population are approximately consistent; and e) no outliers <cit.>. In addition to these criteria, a balanced dataset design is required to conduct a meaningful ANOVA test, where the number of subjects in each group needs to be equal <cit.>. Also, an ideal correlation analysis requires data to be independently collected as paired samples, preferably continuous numeric values <cit.>.
§.§ Data Analysis
The first step of data analysis is exploring the distribution of observations regarding the number of on-time borrowers versus those who have charged off. The next step is to downsample the charged-off samples into subsets of 1000 observations. The same procedure was applied to on-time borrowers’ observations (non-charged-off), and 1000 samples were randomly selected; thus, each subset included 2000 samples of each class equally distributed <cit.>. The mean, standard deviation, and variance of each subset were calculated. The statistical measures of subsets are highly similar, which suggests the need for statistical testing to produce interpretable results. Figure <ref> shows a histogram of each subset where the number of bins is automatically calculated from the data (bin=10). The histogram results indicate that most of the issued loan amounts are in the range of [$5000,$20000].
§.§.§ Hypothesis 1
The G*Power statistical software application <cit.> was used to perform a T-Test on each subset, each including 2000 equally distributed samples of charged-off and on-time borrowers' observations. One-tailed T-Tests were conducted using an alpha error probability of 0.05 and a power of 0.95 (1 – beta error probability) to produce an actual power (the decision-making criterion) for each subset. The results demonstrated that the actual power values were greater than 0.95, suggesting that the null hypothesis can be rejected, meaning that the "Loan Amount" affects whether a borrower charges off.
An ANOVA test was conducted on each subset using G*Power. The outcomes demonstrate that the actual power values are higher than 0.95, suggesting that the null hypothesis can be rejected, which means the two groups exhibit variance differences, so the "Loan Amount" affects whether a borrower charges off.
The correlation analysis was performed against each subset and produced scores of -0.005255, 0.061228, and 0.007396 per subset, where the results indicate no strong correlation between the loan amount and the status of charged-off borrowers. The correlation results are not aligned with the T-Tests, suggesting that further analysis is needed.
§.§.§ Hypothesis 2
To explore the second hypothesis regarding a relationship between higher “Loan Amount” and “Charged-off,” each subset was sorted in descending order by loan amount, and the top 25% of observations were selected for analysis. The results revealed that all actual power values were higher than 0.95, suggesting that the null hypothesis should be rejected and indicating a strong relationship between the loan amount and charged-off borrowers.
§.§.§ Hypothesis 3
The third hypothesis is that the bottom 25% of loan amounts would also show a statistical relationship with the charged-off borrowers. Each subset was sorted in descending order regarding loan amount attributed, and the bottom 25% of observations were selected. The two-tailed T-Test (conducted by G*Power) revealed a strong relationship between the loan amount and charged-off accounts.
§ DISCUSSION
The company formulated a hypothesis to explore the impact of “Loan Amount” as a dependent variable on an independent attribute referring to “Charge OFF Flag,” showing whether a borrower has repaid the loan or charged it off. To do so, LendingClub decided to conduct T-Test and ANOVA hypothesis testing and correlation analysis. The hypothesis testing revealed a statistically significant difference at p-values less than .05, which is interpreted as an indication of the impact of the loan amount on loan repayment. However, the correlation analysis produced a low score, which disagreed with the results of hypothesis testing, and the company decided to perform a more in-depth analysis to locate the source of such divergence.
§.§ Steps in Statistical Analysis
Statistical analysis includes various steps, such as data exploration, hypothesis testing, and visualization, where the interpretation of results is the last step that aims to explain the results of each step (or most steps) of the analysis <cit.>. In general, an explanation of statistical results often covers four main areas: a) sample size, b) metrics of central tendency, c) distribution of data, and d) hypothesis testing <cit.>.
§.§.§ Dataset or Sample Size
The number of observations available for statistical analysis plays a crucial role in interpreting results. This number demonstrates whether the samples (observations) can be considered representative of analyzed data <cit.>. A significant difference between statistics and machine learning exists in terms of the number of samples required for experiments, where, for example, 50 observations can represent a population for statistical analysis. A significantly larger dataset is often required for developing a machine learning model.
§.§.§ Measures of Central Tendency
The mean, median, and mode of observations used for statistical analysis, along with the variance and standard deviation (i.e., measures of central tendency), reveal the central gravity of observations <cit.>. Interpreting those metrics enables practitioners to discover outliers in the observations and explore the possibility of removing them from the analysis. Unlike machine learning model development, where outliers might not impact results significantly, outliers here can affect statistical results by biasing the results towards that extreme.
§.§.§ Data Distribution
Spreading data by calculating the observation variance can show how samples are distributed among a population <cit.>. Also, exploring data distribution by calculating the histogram of data can reveal the type of data distribution (i.e., normal distribution). It also indicates whether the data are skewed towards the left or right of the histogram <cit.>. Interpreting the data distribution also reveals whether the data are multimodal, where observations come from two or more distributions. Moreover, such interoperation can be used for accurate data normalization, removing outliers, and properly formulating hypotheses for future analyses or reiterations of the current analysis <cit.>.
§.§.§ Hypothesis Testing
Interpretation of hypothesis testing comprises two steps: a) exploring the logic of formulating such a hypothesis and b) exploring the results of hypothesis testing <cit.>. In the first step, statisticians review the reasons for forming such a hypothesis by studying documents related to the business aspects of an organization. For example, statisticians can only formulate a hypothesis for analysis because they have considered the types/amounts of loans granted as dependent variables (inputs) when predicting whether borrowers could repay <cit.>. The logic behind such a hypothesis is explored and interpreted once the data are analyzed and the results produced. The second step is to interpret the hypothesis testing results, determine whether the hypothesis is accepted or rejected, and explore the confidence interval of such interpretations <cit.>. For example, the interpretation of hypothesis testing results for types of loans and successful repayment could potentially reveal a) whether types/amounts of loans are adequate metrics for predicting risks associated with a borrower; and b) how an organization can mitigate potential risks and update their criteria for granting loans <cit.>.
§.§ Limitations in Statistical Analysis
Statistical analysis encounters various limitations that make the interpretation of results challenging. As discussed earlier, the primary challenge of statistical analysis, relative to machine learning techniques, is the number of observations required to perform analysis <cit.>. A standard practice in statistical analysis is to sample a population randomly and test hypotheses against the subset of data that can raise concerns about whether the generated subset is a true representative of data <cit.>. By contrast, training machine learning algorithms require a significant amount of data, so practitioners assume that the number of samples or observations used to train the algorithms would represent the entire population <cit.>. Another limitation in interpreting the analysis results is how to relate findings to business problems and interpret the outcomes of hypothesis testing to address business problem statements <cit.>.
§.§.§ Small Dataset
The size of the dataset or sample used for statistical analysis plays a crucial role in determining the extent to which the results can be generalized <cit.>. A small sample size imposes significant limitations on statistical analysis, where a small dataset serves as a somewhat unrepresentative sample of the entire population, causing different types of bias in the analysis results <cit.>. Also, a small dataset increases the risk that outliers in each population will negatively impact measures of central tendency that have been calculated based on samples out of distribution. In addition to the problem of outliers discussed earlier, a small dataset makes splitting data into training and testing highly challenging. Although statistical analysis methods employ all samples provided to implement models based on hypothesis testing, practitioners in the field often use unseen data to validate hypothesis testing results <cit.>. Another issue caused by a small sample size is an unpredicted increase in measurement errors where the error metrics used to evaluate the models produce highly varying results. To overcome the limitations imposed by a small dataset, the primary practice is to randomly shuffle the dataset and generate several subsets of data, repeating statistical analysis to ensure the results converge <cit.>.
§.§.§ Cause and Effect
One of the challenges in interpreting statistical results relates to inconsistency between the hypotheses formulated and the outcomes of testing methods. Practitioners interpreting the statistical results might notice that the results are misaligned with the logic of hypothesis tests <cit.>. In such ambiguous circumstances, discovering the cause and effect in statistical analysis results conducted on specific business use cases is challenging since the interpretation disagrees with the predefined scenario <cit.>. This issue can arise when the hypothesis testing design does not cover the useful parameters in testing or when less powerful features and attributes in data are used for hypothesis testing <cit.>. It sometimes happens that practitioners or business teams helping design such statistical analysis misinterpret the results or overlook some findings and/or implications <cit.>. Another source of issues includes a low confidence interval level and results lacking statistical significance <cit.>.
§.§.§ Divergence of Results Obtained from Various Methods
A common challenge in interpreting statistical analysis results occurs when the results obtained from various techniques diverge <cit.>. It is a widespread practice that statisticians design a statistical analysis using multiple techniques, such as T-Test, ANOVA, or regression, to explore whether the results produced by these techniques align. An agreement between the results from different methods enables an organization to interpret analytical results clearly and make firm recommendations. However, the research shows that hypothesis testing and other methods, such as correlation analysis or machine learning, sometimes produce different results, contrasting with other methods<cit.>. Such an issue indicates that a systematic problem might exist in preparing samples or conducting hypothesis testing. The solution for this type of problem is offered case by case, where practitioners more familiar with the organization’s business scope can suggest methods that produce results closer to the problem statement.
§.§ Business Statistical Analysis and Interpretation
Business statistics, which include various types of analysis, focus on statistical methodologies aligned with an organization’s business scope to improve the decision-making process, mitigate risks to the organization, and increase revenue <cit.>. Interpretation of such analysis is crucial to the organization, and the process is expected to go beyond that of a simple report or presentation. The areas covered by business statistics include a) customer behavior prediction and trend extraction; b) data exploration, hypothesis testing, and interpretation, such as extensive visualization; c) enhancing business performance from various angles; and d) improving decision-making processes <cit.>. To achieve such targets, business data analysts understand their organization’s business objectives and explore data and results. Also, the root cause analysis is performed to extract in-depth technical insights regarding the organization’s vulnerabilities, enabling the organization to inform its decision-making process <cit.>.
§.§ Reflection on the Statistical Analysis Process
The findings from the initial statistical application enable the company to redesign the statistical analysis processes to concentrate on those attributes that more substantially impact their business. Feature engineering—a systematic methodology—is necessary to reveal the relationships between dependent attributes and target variables <cit.>. Also, the company aims to explore other features highly correlated with potential target variables from the business perspective but uncorrelated with other dependent attributes <cit.>.
§.§.§ Potential Improvement
The process of statistical analysis at LendingClub requires several changes to better serve the company’s business needs. The primary targets are to enhance the process of issuing loans, such as the duration of the loan approval process, and to mitigate financial risks to the company by offering borrowers a data-driven loan amount. LendingClub is to apply such changes to the statistical analysis and decision-making process by employing big data infrastructure for advanced multi-model data collection and analytics. In the first step, the company needs a plan demonstrating how to onboard new technology and its costs. The second step includes a broader statistical analysis, such as hypothesis testing, and uses the current data to assess whether specific statistical applications could broadly improve the company’s performance. In the third step, LendingClub conducts research and recruits a third party to develop the required infrastructure.
§.§.§ Required Infrastructure
Onboarding a large-scale system, such as a big-data-enabled analytics platform, is a significant change for LendingClub, requiring modifications to everything from databases to reporting systems. The first stage is to decide whether LendingClub should adapt a big data platform to the current system or migrate entirely to the new model. This decision allows the stakeholders to estimate the cost of a big data platform and start planning. Although estimating the cost of system adaptation or migration to the big data platform requires detailed information, migration to a cloud environment offering various big data services would be a potential expansion of LendingClub's analytics in the future. Figure <ref> illustrates the proposed steps for migrating the LendingClub data collection and analytics pipeline to a cloud-based environment that offers big data services, such as Amazon Web Services (AWS) <cit.>. These steps consist of a) cloud assessment, b) proof of concept, c) data migration, d) application migration, e) leverage of the cloud, and f) optimization.
§.§ Proposed Large-Scale Plan
The large-scale plan to enhance the current statistical analysis pipeline consists of two primary phases: a) designing and implementing an end-to-end data collection and processing pipeline that offers big data analytics, and b) increasing the number and quality of features <cit.>. The current data collection pipeline collects data from various sources, and no broadly systematic methodology is employed to acquire such data. Gathering data from different providers (in-house or third-party) involves an extensive preprocessing pipeline, which might remove many observations to prepare a consistent dataset.
The proposed pipeline illustrated in Figure <ref> offers various capabilities, including big data collection and data stream processing. The first component of the architecture is a user interface that enables it to receive data from external sources where the data could either be stored in a multi-model database or be in the form of real-time messaging input into an allocated database. The collected data can be transferred between data storage and real-time messaging place holders, which offers big data capabilities to host structured and unstructured data. The next architecture layer includes enabled big data processing components for batch processing, which oversees data preparation and preprocessing for further analysis <cit.>.
A similar component—the stream processing unit—prepares and preprocesses data streams for real-time analysis and applications. The preprocessed data are sent to the next component of the architecture, which encompasses the statistical analysis and machine learning methods, where such a block is considered the brain that orchestrates the data analytics. Statistical analysis or machine learning outcomes are stored in a “results database.” The last layer of this orchestration is the user interface block, which enables practitioners in the organization to generate reports with visualizations that can be provided to leadership for decision-making purposes. An extra capability in the new architecture is scheduling automatic training machine learning models or performing statistical analysis.
The second phase of the new data analytics platform aims to enhance the quality of feature selection, which concentrates on those attributes that contribute most to target variables. Quarter-based statistical analysis and feature engineering demonstrate what features should be collected with higher resolution. The advantage of using targeted data collection through particular data attributes is to reduce the cost of on-demand infrastructure by reducing the load on the architecture servers and analytical blocks. However, the main disadvantage of employing such a step is that it decreases the amount of data that can be collected, which might harm statistical analysis or predictive model development. Therefore, the organization must weigh the cost of massive data streaming and collection against the impact of selective data collection.
§ CONCLUSIONS
Statistical applications enable enterprises to establish a data-driven business plan that provides clear objectives to enhance the enterprise's performance, revenue, and risk management. This work summarized a strategic plan informed by an already performed analysis for LendingClub, a financial company that grants various forms of loans. The statistical results showed that different logic could be extracted from currently collected data. Such results enabled LendingClub to improve its business scope and encouraged the company to onboard a big data platform. The plan recommended exploring enhanced feature engineering capabilities to acquire large volumes of data per year and to develop predictive models that increase the company's revenue and lessen potential risks. LendingClub's plan also seeks to utilize artificial intelligence and machine learning technologies to implement robust models aligned with the company's business scope.
|
http://arxiv.org/abs/2307.04139v1 | 20230709094632 | A Randomized Algorithm for Single-Source Shortest Path on Undirected Real-Weighted Graphs | [
"Ran Duan",
"Jiayi Mao",
"Xinkai Shu",
"Longhui Yin"
] | cs.DS | [
"cs.DS",
"68W20",
"F.2.2"
] |
In undirected graphs with real non-negative weights, we give a new randomized algorithm for the single-source shortest path (SSSP) problem with running time O(m√(log n ·loglog n)) in the comparison-addition model.
This is the first algorithm to break the O(m+nlog n) time bound for real-weighted sparse graphs by Dijkstra's algorithm with Fibonacci heaps.
Previous undirected non-negative SSSP algorithms give time bound of O(mα(m,n)+min{nlog n, nloglog r}) in comparison-addition model, where α is the inverse-Ackermann function and r is the ratio of the maximum-to-minimum edge weight [Pettie & Ramachandran 2005], and linear time for integer edge weights in RAM model [Thorup 1999]. Note that there is a proposed complexity lower bound of Ω(m+min{nlog n, nloglog r}) for hierarchy-based algorithms for undirected real-weighted SSSP [Pettie & Ramachandran 2005], but our algorithm does not obey the properties required for that lower bound. As a non-hierarchy-based approach, our algorithm shows great advantage with much simpler structure, and is much easier to implement.
§ INTRODUCTION
Shortest path is one of the most fundamental problems in graph theory, and its algorithms lie at the core of graph algorithm research. In a graph G=(V,E,w) with m=|E|, n=|V| and non-negative edge weight w:E→ℝ_≥ 0, single-source shortest path (SSSP) problem asks for the distances from a given source s ∈ V to all other vertices.
Dijkstra's algorithm <cit.> computes the distances dist(s,u) by dynamic programming. For each vertex u, it maintains a tentative distance d(u), which represents the shortest path from s to u passing only through the vertices in the current S, where S is the set of vertices visited so far. In each round of iteration it selects the vertex u with the smallest d(u) from the unvisited vertices. Finally, when S=V, d(u)=dist(s,u) for every vertex u.
Advanced data structures with amortized O(1) time for insertion and decrease-key, and O(log n) for extract-min, called Fibonacci heap <cit.> and relaxed heap <cit.>, make the time bound for Dijkstra's algorithm to O(m+nlog n).
This time bound is in the comparison-addition model where only comparison and addition operations on edge weights are allowed and considered as unit-time operations, which is the most common model for real number inputs.
For undirected graphs, <cit.> proposed an SSSP algorithm with running time O(mα(m,n)+min{nlog n, nloglog r}) in the comparison-addition model, where α is the inverse-Ackermann function and r bounds the ratio of any two edge weights. However, no SSSP algorithm faster than O(m+nlog n) has been found for real-weighted graphs without ratio constraints, both for undirected and directed graphs.
A byproduct of Dijkstra's algorithm is the sorting of all vertices by their distances from s, but the lower bound of Ω(nlog n) lies for comparison-based sorting algorithms. Researchers used to believe that this sorting bottleneck existed for many graph problems,
and breaking this bottleneck is an important and interesting direction. Yao <cit.> gave a minimum spanning tree (MST) algorithm with running time O(mloglog n), citing an unpublished result of O(m√(log n)) by Tarjan. The current best results for MST are the randomized linear time algorithm <cit.>, the deterministic O(mα(m,n))-time algorithm <cit.>, and a deterministic algorithm with proven optimal (but unknown) complexity <cit.>. In the bottleneck path problem, we want to find the path maximizing the minimum edge weight on it between two vertices. <cit.> gave an O(mlog^* n)-time algorithm for the s-t bottleneck path problem in directed graphs, which was later improved to randomized O(mβ(m,n)) time <cit.>. For the single-source all-destination bottleneck path problem in directed graphs, there is a recent O(m√(log n))-time randomized algorithm by <cit.>. For the single-source nondecreasing path problem, Virginia V. Williams <cit.> proposed an algorithm with time bound O(mloglog n). Although all the results above are comparison-based, the techniques in these works, such as local construction or divide-and-conquer approaches, hardly work for the shortest path problem. Therefore it remains open how to break the sorting bottleneck for SSSP.
§.§ Our Results
In this paper we propose the first SSSP algorithm for undirected real-weighted graphs that breaks the sorting bottleneck.
In an undirected graph G=(V,E,w) with nonnegative edge weights w:E→ℝ_≥ 0, there is a comparison-addition based Las-Vegas randomized algorithm that solves the single-source shortest path problem in O(m√(log n·loglog n)) time, in which the results are always correct and it can achieve this time bound with high probability. The time complexity can be improved to O(√(mnlog n)+n√(log nloglog n)) when m=ω(n) and m=o(nlog n).
Note that there is a (worst-case) lower bound of Ω(m+min{nlog n, nloglog r}) in <cit.> for “hierarchy-based” algorithms for undirected real-weighted SSSP, but our algorithm is randomized and not hierarchy-based. See Remark <ref> for discussions.
Technical Overview.
The bottleneck of Dijkstra-based algorithms is the priority queue. For this reason, we only add a fraction of the vertices into the priority queue. As in many works on distance oracles or spanners, we sample a subset of vertices R, and the heap is only for vertices in R; then we "bundle" every other vertex v to its nearest vertex in R, which is called bundle(v). Then define ball(v) to be the set of vertices closer to v than bundle(v). Since the algorithm doesn't know the correct order of most vertices on a shortest path, relaxing only neighbours as in Dijkstra's algorithm doesn't work. So when popping a vertex u∈ R from the heap, we also deal with the vertices v which are bundled to u. In an undirected graph, this also implies that |dist(s,u) - dist(s,v)| is not large. Here we relax v from vertices in ball(v) and their neighbors, then from v we relax neighbors of v and vertices in their balls. (To make it easier to describe, we first change the graph to a constant-degree graph with O(m) vertices.) Details of the algorithm will be discussed in Section <ref>, as well as the analysis of correctness and running time. The detailed construction of bundles so that the algorithm can achieve the time bound w.h.p. will be introduced in Section <ref>. The improvement of the time complexity by relaxing the constant-degree constraint will be discussed in Section <ref>.
§.§ Other Related Works
Existence of algorithms better than O(m+nlog n) for real-weighted SSSP has been open for long. Pettie and Ramachandran's algorithm <cit.> works better than O(nlog n) if the ratio between maximum and minimum edge weights is not very large. For the integer-weighted case, random access machine (RAM) model is usually adopted, where multiplications, shifts and Boolean operations on edge weights are allowed. There are many works on improving heaps and SSSP algorithms on RAM model with integer weights <cit.>.
Finally Thorup gave a linear-time algorithm for undirected graphs <cit.> and O(m+nloglogmin{n,C}) for directed graphs <cit.> where C is the maximum edge weight.
Recently, almost linear time O(m^1+o(1)log C) algorithms for SSSP with negative weights are also discovered <cit.>.
All-pair shortest path (APSP) problem requires the shortest path between every pair of vertices u,v in the graph G. We can run Dijkstra's algorithm <cit.> from all vertices which will have running time O(mn+n^2log n), or use Floyd-Warshall algorithm <cit.> with running time O(n^3). Researchers have made many improvements since then <cit.>, but there is still no truly subcubic time (O(n^3-ϵ) for some constant ϵ>0) APSP algorithm for real-weighted graphs or even graphs with integer weights in [0,n]. Williams <cit.> gave an APSP algorithm with running time n^3/2^Θ(√(log n)) for real-weighted graphs. For undirected real-weighted graphs, Pettie and Ramachandran's APSP algorithm <cit.> runs in O(mnlogα(m,n)) time, and for directed real-weighted graphs, Pettie <cit.> gave an APSP algorithm in O(mn+n^2loglog n) time.
§ PRELIMINARIES
In this paper we work on an undirected graph G=(V, E, w) with vertex set V, edge set E⊆ V^2 and non-negative weight function w: E →ℝ_≥ 0, also denoted by w_uv. In an undirected graph w_uv = w_vu holds for all edges (u, v) ∈ E. We denote by n = |V| and m = |E| the number of vertices and edges in the graph, and by N(u) = {v: (u, v) ∈ E} the neighbors of u. For two vertices u, v∈ V, dist_G(u,v) is the length of the shortest path connecting u and v, namely the distance between u and v in graph G. The subscript G is omitted when the context is clear. Let s be the source vertex. The target of our algorithm is to find dist(s, v) for every v∈ V. Without loss of generality we assume that G is connected, so m≥ n-1.
Constant-Degree Graph. Throughout the paper we need a graph with constant degree. To accomplish this, given a graph G, we construct G' by a classical transformation (see <cit.>):
* Substitute each vertex v with a cycle of |N(v)| vertices x_vw (w∈ N(v)) connected by zero-weight edges, that is, for every neighbor w of v, there is a vertex x_vw on this cycle.
* For every edge (u,v) in G, add an undirected edge between corresponding vertices x_uv and x_vu with weight w_uv.
We can see that dist_G'(x_uu', x_vv') = dist_G(u, v) for arbitrary u'∈ N(u) and v'∈ N(v). Each vertex in G' has degree at most 3, and G' is a graph with O(m) vertices and O(m) edges.
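A minimal sketch of this transformation follows (the function name and the adjacency-list representation are illustrative choices, and a simple graph is assumed):

```python
from collections import defaultdict

def make_constant_degree(edges):
    """Expand each vertex v into a zero-weight cycle with one copy (v, w) per neighbor w,
    and connect (u, v) to (v, u) with weight w_uv. `edges` is a list of (u, v, w) triples."""
    neighbors = defaultdict(list)
    for u, v, w in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)

    adj = defaultdict(list)  # adjacency list of G': (v, w) -> [((v', w'), weight), ...]
    for v, nbrs in neighbors.items():
        d = len(nbrs)
        if d == 2:  # a "cycle" on two copies is a single zero-weight edge
            a, b = (v, nbrs[0]), (v, nbrs[1])
            adj[a].append((b, 0.0))
            adj[b].append((a, 0.0))
        elif d >= 3:
            for i in range(d):
                a, b = (v, nbrs[i]), (v, nbrs[(i + 1) % d])
                adj[a].append((b, 0.0))
                adj[b].append((a, 0.0))
    for u, v, w in edges:  # the original edges between corresponding copies
        adj[(u, v)].append(((v, u), w))
        adj[(v, u)].append(((u, v), w))
    return adj
```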
Comparison-Addition Model. In this paper our algorithm works under comparison-addition model,
in which real numbers are subject to only comparison and addition operations. In this model, each addition and comparison takes unit time, and no other computations on edge weights are allowed.
Fibonacci Heap. Under such a model, it is possible to construct a Fibonacci heap H that spends amortized O(1) time for initialization, insertion, decrease-key operations, and O(logH) time for each extract-min operation <cit.>. When we extract the minimum element from the heap, we also call that element “popped” from the heap.
§ MAIN ALGORITHM
In the following sections we assume that G is connected, and each vertex in G has degree no more than 3. (There are O(m) vertices and O(m) edges in G, but we still use O(log n) which is equivalent to O(log m) where n is the number of vertices in the original graph without degree constraints.)
Our algorithm is based on original Dijkstra's algorithm <cit.>. Since the main bottleneck of Dijkstra's algorithm is the O(log n) time per every vertex extracted from the Fibonacci heap <cit.>, we only insert a subset R ⊆ V of vertices into the heap. Each remaining vertex v∈ V ∖ R is bundled to its closest vertex in R. Throughout the algorithm, vertices are updated only when some vertex u∈ R is popped from the heap. Our algorithm consists of two stages: bundle construction and Bundle Dijkstra, whose details will be introduced in Section <ref> and <ref> respectively.
To demonstrate the main idea of our algorithm, in this section we first give an algorithm that runs in expected O(m√(log n·loglog n)) time but not “with high probability”. In Section <ref> we give an improved version of the bundle construction stage, leading to an algorithm that runs in O(m√(log n·loglog n)) time with high probability. Both algorithms always give correct answers.
§.§ Bundle Construction
A simple version of bundle construction works as follows[One may notice that sampled sets, closest sampled vertices and balls are common techniques in papers on shortest path algorithms, distance oracles and spanners, and there are deterministic construction algorithms for such “dominating sets” (e.g. <cit.>), but the extra O(log n) factor that deterministic approaches introduce on the size of the dominating set or on the construction time is not affordable here.] (k is a parameter to be determined later); an illustrative code sketch is given after the list:
* Independently sample each vertex v∈ V∖{s} with probability 1/k to form set R, then add s into R.
* For each vertex v ∉ R, run Dijkstra's algorithm started from v until first vertex of R is extracted from the heap, denoted by (v). Therefore (v) is one of the closest vertices in R to v, i.e., (v) ∈min_u∈ R(u, v). We say that v is bundled to (v).
* For each u∈ R, let (u) = u, and (u) = {v: u = (v)} be the set of vertices bundled to u. By definition, {(u)}_u∈ R forms a partition of the vertex set V.
* For each vertex v ∉ R, define (v) = {w∈ V: (v, w) < (v, (v))}, that is, the set of vertices closer to v than its bundled vertex (v). In the previous Dijkstra's algorithm we can get (v) and also values of (v, w) for all w∈(v)∪{(v)}.
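A compact sketch of this construction is given below (ours; Python's binary heap stands in for the Fibonacci heap, and ties and zero-weight edges are glossed over, so it is illustrative rather than exact). The names bundle_of, ball and dist_to mirror the objects defined above.

```python
# A rough sketch (ours) of the simple bundle construction, for a connected graph
# given as adjacency lists adj[v] = [(u, w), ...] on vertices 0..n-1 with source s.
import heapq, random

def bundle_construction(adj, s, k):
    n = len(adj)
    R = {s} | {v for v in range(n) if v != s and random.random() < 1.0 / k}
    bundle_of, ball, dist_to = {}, {}, {}
    for v in range(n):
        if v in R:
            bundle_of[v] = v
            continue
        seen, pq, closer = set(), [(0.0, v)], {}
        while pq:
            d, u = heapq.heappop(pq)
            if u in seen:
                continue
            seen.add(u)
            if u in R:                      # first sampled vertex popped: stop
                bundle_of[v] = u
                dist_to[(v, u)] = d
                break
            closer[u] = d                   # u is closer to v than its bundling vertex
            for x, w in adj[u]:
                if x not in seen:
                    heapq.heappush(pq, (d + w, x))
        ball[v] = closer                    # plays the role of the ball of v
        for w, d in closer.items():
            dist_to[(v, w)] = d
    return R, bundle_of, ball, dist_to
```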
Time Analysis of Bundle Construction. For each vertex v∉ R, without loss of generality we assume that its Dijkstra's algorithm breaks ties in a deterministic way. Therefore, the order of the vertices extracted from the heap is fixed.
We can see 𝔼[|R|]=O(m/k). For each vertex v∉ R, let S_v be the set of vertices extracted before its Dijkstra's algorithm stops, then (v) ⊊ S_v. By definition of R, S_v follows geometric distribution with success probability 1/k, thus 𝔼[S_v]=k and 𝔼[(v)] ≤ k.
By constant degree property, the number of vertices ever added into the heap is also O(S_v), so the total time of the bundle construction is O(∑_v∈ V∖ R𝔼[S_vlogS_v])=O(mklog k) in expectation.
One may notice that xlog x is a convex function so that 𝔼[S_vlogS_v] = O(klog k) does not trivially hold. We present a simple proof here: (by geometric distribution 𝔼[S_v^2] = 2k^2 - k)
𝔼[S_vlogS_v] = ∑_n = 1^∞1/k(1 - 1/k)^n-1· nlog n
≤∑_n≤ k^21/k(1 - 1/k)^n-1· nlog n + ∑_n > k^21/k(1 - 1/k)^n-1· n^2
≤ 2log k ∑_n≤ k^21/k(1-1/k)^n-1· n + ∑_n = 1^∞1/k(1 - 1/k)^k^2 + (n - 1) (n + k^2)^2
≤ 2log k ·𝔼[S_v] + (1 - 1/k)^k^2·𝔼[(S_v+k^2)^2]
≤ 2 k log k + e^-k· O(k^4) = O(klog k).
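The bound can also be checked empirically; the following quick Monte-Carlo estimate (ours, not part of the argument) compares E[S log S] against 2k log k for geometrically distributed S.

```python
# Monte-Carlo sanity check (ours) that E[S log S] stays below roughly 2*k*log(k)
# when S is geometric with mean k, as in the computation above.
import math, random

def mean_S_log_S(k, trials=100000):
    total = 0.0
    for _ in range(trials):
        s = 1
        while random.random() >= 1.0 / k:     # S ~ Geometric(1/k) on {1, 2, ...}
            s += 1
        total += s * math.log(s) if s > 1 else 0.0
    return total / trials

for k in (4, 16, 64):
    print(k, round(mean_S_log_S(k), 2), round(2 * k * math.log(k), 2))
```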
§.§ Bundle Dijkstra
Given the set R and the partition into bundles, the main algorithm works as follows, with pseudocode given in Algorithm <ref> and an illustrative code sketch after the list:
Initially we set d(s)=0 and d(v)=+∞ for all other vertex v, and insert all vertices of R into a Fibonacci heap <cit.>. Whenever we pop a vertex u∈ R from the heap, we update the distances by the following steps. (Here relaxing a vertex v by a value D means that we update d(v) by min{d(v), D}.)
* For every vertex v bundled to u, we need to find the exact value of (s,v). First relax v by d(u)+(u,v); then for every vertex y∈(v), relax v by d(y)+(y,v); and for every z_2 ∈(v)∪{v} and z_1∈ N(z_2), relax v by d(z_1)+w_z_1,z_2+(z_2,v). That is, we update d(v) by its bundled vertex u, vertices in (v), and vertices neighboring to v and (v).
* After updating d(x) for every x∈(u), we update the vertices y∈ N(x) and vertices z_1∈(y). That is, relaxing y by d(x)+w_x,y for all y∈ N(x) and then relaxing z_1 by d(x)+w_x,y+(y,z_1) for all z_1∈(y).
* Whenever we update a vertex v∉ R, we also relax its bundled vertex (v) by d(v)+(v,(v)). (But later we will see this is only needed in Step 2 but not Step 1, since in Step 1 v is bundled to u, but the distance (s,u) is already found when popping u from the heap.)
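The three steps can be sketched as follows (ours; it reuses the outputs of the bundle-construction sketch above, uses a binary heap with lazy decrease-key in place of the Fibonacci heap, and performs the Step 3 relaxation on every update of a vertex outside R, which is harmless).

```python
# A compact sketch (ours) of Bundle Dijkstra, reusing R, bundle_of, ball and
# dist_to from the construction sketch above.
import heapq

def bundle_dijkstra(adj, s, R, bundle_of, ball, dist_to):
    d = {v: float("inf") for v in range(len(adj))}
    d[s] = 0.0
    bundles = {u: [] for u in R}
    for v, u in bundle_of.items():
        if v not in R:
            bundles[u].append(v)
    pq = [(d[u], u) for u in R]
    heapq.heapify(pq)
    popped = set()

    def relax(v, val):
        if val < d[v]:
            d[v] = val
            if v in R:                              # lazy decrease-key
                heapq.heappush(pq, (val, v))
            else:                                   # Step 3: keep the bundled vertex updated
                bv = bundle_of[v]
                if val + dist_to[(v, bv)] < d[bv]:
                    d[bv] = val + dist_to[(v, bv)]
                    heapq.heappush(pq, (d[bv], bv))

    while pq:
        du, u = heapq.heappop(pq)
        if u in popped or du > d[u]:
            continue
        popped.add(u)
        for v in bundles[u]:                        # Step 1: settle vertices bundled to u
            relax(v, d[u] + dist_to[(v, u)])
            for y in ball[v]:
                relax(v, d[y] + dist_to[(v, y)])
            for z2 in list(ball[v]) + [v]:
                hop = 0.0 if z2 == v else dist_to[(v, z2)]
                for z1, w in adj[z2]:
                    relax(v, d[z1] + w + hop)
        for x in bundles[u] + [u]:                  # Step 2: push distances outwards
            for y, w in adj[x]:
                relax(y, d[x] + w)
                for z1 in (ball[y] if y not in R else ()):
                    relax(z1, d[x] + w + dist_to[(y, z1)])
    return d
```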
The following observation holds naturally from the algorithm.
d(v) ≥(s, v) always holds for all v ∈ V.
Time Analysis for Bundle Dijkstra. For the Bundle Dijkstra stage, only vertices in R are inserted into heap, thus the extract-min operation only takes O(Rlog n) time in total. Since every vertex in V∖ R only appears once as v and x in Step 1 and Step 2, respectively, and by constant degree property, every vertex appears constant times as the vertex y∈ N(x) in Step 2, so the number of vertices z_1,z_2 in Step 1 for every v is O(|(v)|), and the number of vertices z_1 in Step 2 for every y is O(|(y)|). Also note that the recursive call of Relax in Step 3 can only recurse once since (v)∈ R. So the total time for Step 1, 2 and 3 is O(∑_v∈ V∖ R|(v)|). Thus, the time of the bundle Dijkstra stage is 𝔼[O(|R|·log n+∑_v∈ V∖ R(v))] = O(m/klog n + mk) in expectation.
Now, we can see that the expected total time of the two stages is O(m/klog n + mklog k), which is minimized to O(m√(log n·loglog n)) if we choose k = √(log n/loglog n). We move to explain our main ideas of the correctness proof. A formal proof is given in Section <ref>.
Main ideas. The following propositions hold in the algorithm. (Here the iteration of u means the iteration performed when popping u∈ R; a real distance (s,v) is found means d(v)=(s,v) already holds.)
When popping u∈ R from the heap, its distance (s,u) is already found.
After Step 1 in the iteration of u, (s,v) for all v∈(u) are found.
The following lemmas contain the main ideas of the algorithm.
For any vertex u∈ R and any path P from s to u, if P goes through vertex y, (s,(y)) is at most the length of P.
(s, y) is at most the length of subpath of P from s to y. By definition of (y), (y, (y)) is at most the length of subpath of P from y to u. Concatenating two subpaths together, (s, (y)) ≤(s, y) + (y, (y)) is at most the length of P.
Lemma <ref> shows that for any vertex u∈ R, the shortest path from s to u only contains vertices y with (s, (y)) ≤(s, u). This is the intuition why vertices of R are popped in increasing order of (s,·). However, the shortest path from s to some vertex v∈(u) may go through some vertex y with (s, (y)) ≥(s, u), that is, (y) is still not popped from the heap. But surprisingly, with the ideas of Lemma <ref> we can deal with this case even before the iteration of (y).
For a vertex v∉ R, if the shortest path from s to v is shorter than (s,(v))+((v),v), and it goes through a vertex y (other than v) such that (s,(y))≥(s,(v)), then on the shortest path from y to v there are two adjacent vertices z_1,z_2 such that z_1∈(y)∪{y} and z_2∈(v)∪{v}.
We have (y,v)=(s,v)-(s,y) and (s,v)<(s,(v))+((v),v). By triangle inequality, (s,y)≥(s,(y))-(y,(y)), and by (s,(y))≥(s,(v)),
(y,v)<((v),v)+(y,(y))+(s,(v))-(s,(y))≤((v),v)+(y,(y))
Let z_1 be the last vertex on the shortest path from y to v satisfying (y,z_1)<(y,(y)), so z_1∈(y). Then z_2 will be the next vertex after z_1, so (y,z_2)≥(y,(y)), and (z_2,v)<(v,(v)), so z_2∈(v). (If (y,(y))=0 then z_1=y, and if (y,v)<(y,(y)) then z_2=v and z_1 is the vertex before v.)
Then we can see Proposition <ref> and <ref> hold throughout the algorithm iteratively: (A formal proof will be given in Section <ref>.)
* When we pop the source s from the heap, d(s)=0, and the distances (s,v) for all v∈(s) are found in the bundle construction step, and can be put in d(v) in Step 1.
* Assume Proposition <ref> holds for the first i vertices popped, so the real distances for all vertices bundled to popped vertices are found. By Step 2 and 3, we can see for all unpopped u∈ R, the distance (s,u) can be found if the shortest path from s to u does not go through vertices bundled to other unpopped vertices in the heap. If the next popped vertex u' does not satisfy this, let y be the first vertex on the shortest path from s to u' which is bundled to an unpopped vertex (y) other than u', so (s,(y)) can be found. By Lemma <ref>, (s,(y))≤(s,u'), so if d(u')>(s,u'), (y) will be the next popped vertex. Thus, (s,u') for the next popped vertex u' is found before it is popped.
* If an unpopped vertex u'∈ R is updated in the iteration of popped vertex u, the new path to u' must go through a vertex in (u). By Lemma <ref>, d(u') cannot be updated to be smaller than (s,u), so the unpopped vertices must have longer or equal distances than any popped vertex.
* Thus when popping a vertex u∈ R, its distance (s,u) is already found. For all vertex v∈(u), if (s,v) is not directly obtained by d(u)+(u,v), that is, (s,v)<(s,u)+(u,v), let x be the last vertex on the shortest path from s to v such that (x) is popped before u, and let y be the next vertex after x. We can see (s,(y))≥(s,u), so by Lemma <ref>, we get such z_1 and z_2. Then from Proposition <ref> (s,x) can be found in Step 1 in the iteration of (x), then (s,z_1) can be found in Step 2 of that iteration. In this iteration of u, (s,v) can be set to (s,z_1)+w_z_1,z_2+(z_2,v) in Step 1, so Proposition <ref> still holds after this iteration.
§.§ Proof of Correctness
We give a formal proof based on the pseudocode of Algorithm <ref>. Define u_i∈ R as the vertex extracted in the i-th iteration of while-loop in Algorithm <ref>. Our key lemma in the following shows the main properties of the algorithm, therefore Bundle Dijkstra is correct no matter how R is chosen.
The following properties hold for any i≥ 1 in Bundle Dijkstra (Algorithm <ref>):
* When u_i is extracted from the heap, d(u_i) = (s, u_i) holds.
* After i-th iteration of the while-loop, d(u) ≥ d(u_i) for all u ∈ R \u_j_j≤ i.
* After Step 1 of i-th iteration of the while-loop, d(v)=(s,v) for all v ∈(u_i).
We shall prove the lemma by induction on i.
The lemma holds for i = 1 since d(s) = 0 and d(v) = (s, v) for all v ∈(s) after Line <ref>.
Suppose the lemma holds for every i ≤ t-1, consider the case i=t.
* Consider a shortest path P from s to u_t. Let x be the last vertex on P such that x∈(u_j) for some j < t, and y be the next one after x, hence y ∈(u) for some u ∈ R \u_ℓ_ℓ < t. By Property <ref> of induction hypothesis d(x) = (s, x) after Step 1 of j-th iteration. After that the algorithm updates d(y), and further d(u) in line <ref> since y ∈ N(x).
Therefore after (t-1)-th iteration d(y) = (s, x) + (x, y) = (s, y) and d(u) ≤(s, y) + (y, u). Further:
d(u) ≤ (s, y) + (y, u)
≤ (s, y) + (y, u_t) (y) = u
= (s, u_t) y on shortest path
≤ d(u_t) Observation <ref>.
On the other hand, the algorithm extracts u_t from Fibonacci heap H immediately after (t-1)-th iteration, thus d(u_t) ≤ d(u). So all the inequalities above should be equations, thus d(u_t) = (s, u_t).
* When executing line <ref> of t-th iteration, d(u) ≥ d(u_t) holds for every u ∈ R \u_j_j≤ t since H is a Fibonacci heap.
Suppose d(u) < d(u_t) for some u ∈ R \u_j_j≤ t after t-th iteration. The further updates on d(u) must start from d(x) for some x ∈(u_t). For last such update, applying Lemma <ref> on this path from s to x then to u, we have:
d(u) ≥ (s, u_t) Lemma <ref>
= d(u_t), Property <ref>
leading to contradiction.
* We want to show that d(v) = (s, v) holds for all v∈(u_t).
Suppose there exists a vertex v∈(u_t) such that d(v)>(s,v) after Step 1. Denote P as the shortest path from s to v. Let x be the last vertex on P such that x∈(u_j) for some j < t, and y be the next one after x on P, hence y ∈(u) for some u ∈ R \u_ℓ_ℓ < t. By Property <ref> of induction hypothesis, d(x) = (s, x) after Step 1 of j-th iteration. Same as above we can show that d(y) = (s, x) + (x, y) = (s, y) and d(u) ≤(s, y) + (y, u) (where u=(y)) before t-th iteration.
We have:
(s, y) ≥ d(u) - (y, u).
(s, v) < d(v) Assumption
≤ d(u_t) + (u_t, v) d(v) updated in Line <ref>.
On the other hand, d(u) ≥ d(u_t) after t-th iteration by Property <ref>. Since d(u_t) doesn't change (Property <ref> and Observation <ref>), and d(u) can only decrease in t-th iteration, so d(u) ≥ d(u_t) holds throughout t-th iteration. Hence:
(y, v) = (s, v) - (s, y) < (u_t, v) + (y, u),
while the equation holds since y lies on the shortest path from s to v.
Therefore there are two possible cases:
* (y, v) < (u_t, v).
In this case y ∈(v), so we can update d(v) to (s,y)+(y,v) on line <ref>, contradicting to d(v)>(s,v).
* (y, v) ≥(u_t, v).
First, by Inequality (<ref>), (y, u) > (y, v) - (u_t, v) ≥ 0. Let z_1 be the last vertex on path P with (y, z_1) < (y, u), we have z_1 ∈(y).
Let z_2 be the next vertex on the path, then (y, z_2)≥(y, u), so (z_2,v) = (y, v) - (y, z_2) ≤(y, v) - (y, u) < (u_t,v), that is, z_2∈(v).
(If z_2 does not exist, then z_1=v.)
By Property <ref> of induction hypothesis, d(x) = (s, x) just after Step 1 of j-th iteration, so d(z_1) is updated to (s, z_1) in line <ref> of j-th iteration. Therefore d(v) is updated to (s, v) in line <ref> of t-th iteration (since j < t), contradicting the assumption.
Therefore d(v)=(s,v) for all v ∈(u_t) after Step 1 of t-th iteration.
Pettie and Ramachandran <cit.> proved that any hierarchy-based SSSP algorithm on undirected graphs in the comparison-addition model takes time at least Ω(m + min{nloglog r, nlog n}), where r is the ratio of the maximum to minimum edge weight. This bound becomes Ω(m+nlog n) when r is exponentially large. Here a hierarchy-based algorithm is defined as one that generates a permutation π_s satisfying the hierarchy property: (s, v)≥(s, u) + (u, v) ⇒π_s(u) < π_s(v), where (u, v) is the longest edge on the MST path between u and v. Although π_s is not defined this way, for the algorithms discussed in <cit.> it is typically the order in which the algorithm visits the vertices. However, that is a worst-case lower bound, and our algorithm is randomized. Moreover, the order in which our algorithm visits the vertices does not follow the hierarchy property: think of two vertices x and y that are both connected to u by edges (x,u) and (y,u) of weight 1, with x and y both bundled to u. It is possible that (x,y)=1 and (s, y)= (s, x) + 2, yet we impose no constraint on the order in which we visit x and y, so we may well visit y before x. This explains why our algorithm can break the Ω(m + nlog n) lower bound of <cit.>.
§ IMPROVED BUNDLE CONSTRUCTION
In this section we propose an improved bundle construction that runs in O(m√(log n·loglog n)) time with high probability.
In Section <ref> we showed that correctness of Bundle Dijkstra does not depend on the choice of R, as long as (·), (·) and (·) are correctly computed with respect to R. The running time for the bundle construction is O(∑_v∈ V∖ RS_vlogS_v), and Bundle Dijkstra is O(∑_v∈ V∖ R(v) + Rlog n).
Naturally, S_v is a random variable following a geometric distribution for each vertex v∈ V, and these variables are not independent, since a vertex x∈ V may appear in several of the sets. However, for a subset W ⊆ V, if every vertex appears at most once in {S_v}_v∈ W, the corresponding random variables {S_v}_v∈ W are independent. By Lemma <ref>, if each random variable depends on only a few other variables, their summation deviates from its expectation with exponentially small probability. So we manually include into R all those vertices with |S_v|≥ klog k. In this way, for each vertex in V∖ R, its random variable depends on only a limited number of other ones, and we can bound their summation with high probability.
We introduce below how to generate R and compute {(v)}_v∈ V∖ R, as well as {(v)}_v∈ V∖ R, {(u)}_u∈ R and (v, u) for u∈(v)∪{(v)}. The pseudocode is given in Algorithm <ref>, and a code sketch of the truncated search is given after the list. We still set the parameter k = √(log n/loglog n) as in Section <ref>.
Improved Bundle Construction.
* Sample each vertex v∈ V∖{s} with probability 1/k to form set R_1 and add s into R_1;
* For each v∈ V∖ R_1, run Dijkstra's algorithm from v until we have extracted a vertex in R_1, or until we have already popped klog k vertices.
* In the former case, denote the extracted vertices in the order they appeared as list V_extract^(v). Note that V_extract^(v) is similar to S_v of Section <ref>.
In the latter case, add v into R_2;
* Set R = R_1∪ R_2, and for v∈ V∖ R, let the first vertex in V_extract^(v) that lies in R be (v);
* With the results above, compute (u) for u∈ R, (v) for v∈ V∖ R, and record (v, u) for u∈(v)∪{(v)}. This step takes linear time.
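The truncated search that distinguishes the improved construction can be sketched as below (ours); it returns the extracted prefix together with a flag saying whether v must be promoted into R_2.

```python
# A sketch (ours) of the truncated search: stop either when the first vertex of
# R1 is popped, or after ceil(k*log k) pops, in which case v is promoted into R2.
# Same adjacency-list format as in the earlier sketches.
import heapq, math

def truncated_search(adj, v, R1, k):
    cap = max(1, math.ceil(k * math.log(max(k, 2))))
    seen, pq, extracted = set(), [(0.0, v)], []
    while pq and len(extracted) < cap:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        extracted.append((u, d))
        if u in R1:
            return extracted, False          # v is bundled to u and stays outside R
        for x, w in adj[u]:
            if x not in seen:
                heapq.heappush(pq, (d + w, x))
    return extracted, True                   # cap reached: v joins R2
```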
The correctness of this bundle construction follows from the Dijkstra's algorithm <cit.>. We only need to analyze R, ∑_v∈ V∖ R(v) and its running time. The performance of this improved bundle construction is characterized in Lemma <ref> below. By Lemma <ref>, the bundle construction takes O(mklog k) time, and bundle Dijkstra takes O(∑_v∈ V∖ R(v) + Rlog n) = O(mk+mlog n/k) with probability 1 - e^-Ω(n^1 - o(1)). Thus the total running time of our algorithm is O(mklog k + mlog n/k) = O(m√(log n·loglog n)) w.h.p. The proof of Lemma <ref> is based on Lemma <ref>.
By running Algorithm <ref>, with probability 1-e^-Ω(n^1 - o(1)), the following properties hold:
(a) R = O(m/k).
(b) ∑_v∈ V∖ R(v) = O(mk).
(c) The running time of Algorithm <ref> is O(mklog k).
First, each vertex of V∖{s} is inserted to R_1 independently with probability 1/k, so by Chernoff bound, with probability 1 - O(e^-m/k) = 1 - e^-Ω(n^1 - o(1)), R_1 = Θ(m/k), and meanwhile m' := V∖ R_1 = Θ(m).
For each vertex v∈ V∖ R_1, define X_v = 𝕀[v∈ R_2] and Y_v = V_extract^(v). Then, X_v is a Bernoulli random variable, X_v∈ [0, 1] with probability 1 and 𝔼[X_v] = (1 - 1/k)^klog k = Θ(1/k). And Y_v is a Geometric random variable except its value truncated at klog k, Y_v∈ [0, klog k] with probability 1 and 𝔼[Y_v] = k(1-(1-1/k)^klog k) = Θ(k).
For each vertex v∈ V∖ R_1, denote V_full^(v) as the first klog k vertices extracted in the Dijkstra algorithm if it did not truncate. They are determined by the structure of G, so there is no randomness in V_full^(v). The values of X_v and Y_v are determined by whether vertices in V_full^(v) were inserted into R_1. Therefore, if V_full^(v_1), V_full^(v_2), ⋯, V_full^(v_j) are disjoint, then X_v_1, X_v_2, ⋯, X_v_j are independent, and similarly, Y_v_1, Y_v_2, ⋯, Y_v_j are independent.
For each vertex w∈ V_full^(v), because w is found by v within klog k steps of Dijkstra's algorithm, there must exist a path from v to w of no more than klog k edges, so by constant degree property, there are at most 3·(1 + 2 + ⋯ + 2^klog k - 1)≤ 3· 2^klog k different u such that w∈ V_full^(u).
Hence, for each v, there are at most 3klog k · 2^klog k= O(n^o(1)) different u∈ V∖ R_1 such that V_full^(v)∩ V_full^(u)≠∅.
To apply Lemma <ref>, for each v∈ R_1, also define X_v, Y_v and V_full^(v) in the same way as if v is executed in the loop of Algorithm <ref>.
Now, we apply Lemma <ref> for {X_v}_v∈ V and {Y_v}_v∈ V. For {X_v}_v∈ V, S is V and V = m, μ = Θ(1/k), b = 1, T = O(n^o(1)), and {W_v}_v∈ V are { V_full^(v)}_v∈ V, and we can verify that 8Tbμ^-1 = O(n^o(1)) and 8b^3T/μ^3 = O(n^o(1)), so with probability at least 1 - e^-Ω(m/n^o(1)), it holds that ∑_v∈ SX_v = Θ(m/k). And for {Y_v}_v∈ V, S is V, μ = Θ(k), b = klog k, T = O(n^o(1)), and {W_v}_v∈ V are also { V_full^(v)}_v∈ V, and similarly we infer that with probability 1 - e^-Ω(m/n^o(1)), it holds that ∑_v∈ SY_v = Θ(mk). Thus, we conclude that with probability 1 - e^-Ω(n^1 - o(1)), ∑_v∈ V∖ R_1X_v = Θ(m/k) and ∑_v∈ V∖ R_1 Y_v = Θ(mk).
Then, we prove the three claims of this lemma.
For (a), by definition R = R_1 + R_2, so by union bound, with probability 1 - e^-Ω(n^1 - o(1)), R = R_1+ ∑_v∈ V∖ R_1X_v=O(m/k).
For (b), by definition (v)≤ Y_v, so ∑_v∈ V∖ R(v)≤∑_v∈ V∖ R_1Y_v. Thus, with probability 1 - e^-Ω(n^1 - o(1)), ∑_v∈ V∖ R(v) = O(mk).
For (c), we count the total time for the truncated Dijkstra algorithm in all iterations. For each vertex v∈ V∖ R_1, by constant degree property, the number of Insert operations is O(Y_v), so H^(v) = O(Y_v) = O(klog k). Therefore, each ExctractMin operation takes time O(log(Y_v)) = O(log k), and other operations takes constant time. Thus the truncated Dijkstra algorithm of v takes time O(Y_vlog k). Thus, with probability 1 - e^-Ω(n^1 - o(1)), the total time of Algorithm <ref> is O(∑_v∈ V∖ R_1Y_vlog k)=O(mklog k).
(Similar arguments as in <cit.>)
Suppose a set of random variables {Z_v}_v∈ S satisfy that for each v∈ S, 𝔼[Z_v] = μ, Z_v∈[0, b] with probability 1, and each Z_v is corresponded to a fixed deterministic set W_v such that, if W_v_1, W_v_2, ⋯, W_v_j are disjoint, then Z_v_1, Z_v_2, ⋯, Z_v_j are independent, and W_v intersects with at most T different W_u.
Then, with probability at least 1 - 8Tbμ^-1· e^-μ^3S/8b^3T, it holds that ∑_v∈ SZ_v = Θ(Sμ).
We try to partition {Z_v}_v∈ S into several subsets {𝒵_t} such that all Z_v's in each 𝒵_t are independent so that we can apply Hoeffding's inequality, or the size of 𝒵_t is small so that we can bound them by the upper bound b, and finally combine everything by the union bound. Also note that we do not need to actually compute {𝒵_t}, as they are merely introduced for this mathematical proof.
Fix parameter p = Sμ/4Tb. Since each W_v intersects with at most T different other W_u, whenever there are at least p(T+1) elements in {Z_v}_v∈ S, we can pick p different Z_v from them whose W_v are disjoint, so that they are independent: pick an arbitrary Z_v and discard those Z_u if W_u∩ W_v≠∅; since there are at most T such Z_u different from Z_v, every time we discard at most T+1 elements. So from p(T+1) elements we can pick p of them.
We let them form a 𝒵_t and remove them from {Z_v}_v∈ S. Repeating this process we will end up with a partition {𝒵_1, 𝒵_2, ⋯, 𝒵_q, 𝒵_q+1} of {Z_v}_v∈ S such that: 𝒵_t = p, and all Z_v∈𝒵_t are independent for 1≤ t≤ q; 𝒵_q+1≤ p(T+1)≤ 2pT = μ/2bS. By definition μ≤ b, so 𝒵_q+1≤1/2S.
Then by Hoeffding's inequality, for each 1≤ t≤ q,
[∑_v∈𝒵_tZ_v - 𝒵_tμ > 1/2𝒵_tμ] ≤ 2e^-2(1/2𝒵_tμ)^2/Z_tb^2 = 2e^-μ^2p/2b^2.
and 0≤∑_v∈𝒵_q+1Z_ v≤𝒵_q+1b with probability 1.
By union bound, with probability at least 1 - 2qe^-μ^2p/2b^2,
∑_v∈ SZ_v ≥1/2∑_t=1^q𝒵_tμ = 1/2(S - 𝒵_q+1)μ≥1/2(S-1/2S)μ = 1/4Sμ,
and meanwhile
∑_v∈ SZ_v≤3/2∑_t=1^q𝒵_tμ + 𝒵_q+1b≤3/2Sμ + μ/2bS· b = 2Sμ.
And from S≥∑_t=1^q𝒵_t≥ qp, we conclude that q ≤S/p = 4Tb/μ. Thus, with probability at least 1 - 8Tbμ^-1e^-μ^3S/8b^3T, it holds that ∑_v∈ SZ_v = Θ(Sμ).
§ DISCUSSION
We gratefully acknowledge an anonymous reviewer for suggesting that the constant-degree assumption is not necessary for this algorithm, so we can obtain improved time complexity when m=ω(n) and m=o(nlog n). Instead of reducing the graph to degree 3, we use a similar method to split the vertices of degree >m/n into vertices of degree ≤ m/n, so that the number of vertices is still O(n). Then in each step:
* In bundle construction, the time for Dijkstra search for every vertex v will become O(|S_v|·m/n+|S_v|log (|S_v|·m/n)), since the size of the heap is at most |S_v|·m/n, so in total O(mk+nklog(mk/n)).
* The time for Bundle Dijkstra will become O(n/klog n+ mk), since the number of vertices z_2 in Step 1 for every v is O(m/n|(v)|), and the number of vertices z_1 in Step 2 for every y is O(|(y)|) but each vertex appears O(m/n) times as y in Step 2.
* When m/n=o(log n), one can check that the analysis of independence in Section <ref> still works, since the number of different u∈ V∖ R_1 which have V_full^(v)∩ V_full^(u)≠∅ for each v is still O(n^o(1)).
Thus, the time complexity of this algorithm is O(n/klog n+ mk + nklog(mk/n)). When m<nloglog n, k is still √(log n/loglog n), and the time bound is O(n√(log nloglog n)). When nloglog n≤ m< nlog n, let k=√(nlog n/m), and the time bound becomes O(√(mnlog n)).
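For concreteness, the parameter choice in the two regimes can be written as the following small helper (ours); we read the k of the second case as √(n log n / m), which is the value that balances the two dominant terms.

```python
# Helper (ours) mirroring the case analysis above: it returns the parameter k and
# the resulting asymptotic bound (up to constants) for n - 1 <= m < n*log n.
import math

def choose_k(n, m):
    log_n = math.log(n)
    loglog_n = math.log(max(log_n, 2.0))
    if m < n * loglog_n:
        k = math.sqrt(log_n / loglog_n)
        bound = n * math.sqrt(log_n * loglog_n)
    else:                                   # n*loglog n <= m < n*log n
        k = math.sqrt(n * log_n / m)
        bound = math.sqrt(m * n * log_n)
    return k, bound
```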
plainnat
|
http://arxiv.org/abs/2307.04874v1 | 20230710195039 | Chern-Kuiper's inequalities | [
"Diego Guajardo"
] | math.DG | [
"math.DG"
] |
Given a Euclidean submanifold g: M^n →ℝ^n+p, Chern and Kuiper provided inequalities between μ and ν_g, the ranks of the nullity of M^n and of the relative nullity of g, respectively.
Namely, they prove that
ν_g≤μ≤ν_g+p.
In this work, we study the submanifolds with ν_g≠μ.
More precisely, we characterize locally the ones with 0≠(μ-ν_g)∈{p,p-1,p-2} under the hypothesis of ν_g≤ n-p-1.
§ INTRODUCTION
There are two distributions associated to a submanifold g: M^n →ℝ^n+p: the nullity Γ⊆ TM of the curvature tensor and the relative nullity Δ_g⊆ TM, i.e., the nullity of the second fundamental form α of g.
The relative nullity plays a fundamental role in many works of submanifold theory; for example <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
In many of them, this distribution coincides with the nullity, turning the problem into an intrinsic one; besides the ones already cited, see <cit.>, <cit.>, <cit.>.
We want to understand the submanifolds whose relative nullity does not coincide with the nullity.
There are two natural families of submanifolds with ν_g≠μ.
Firstly, if M^n is flat and g is not (an open subset of) an affine subspace then _g≠ TM=Γ.
Secondly, we have the compositions: if ĝ: M^n →ℝ^n+ℓ has nontrivial nullity and h: U⊆ℝ^n+ℓ→ℝ^n+p is a flat submanifold with ĝ(M^n)⊆ U, then generically g=h∘ĝ: M^n →ℝ^n+p has smaller relative nullity; in particular Δ_g≠Γ.
Theorem 1 of <cit.> is an example of this phenomenon.
As a starting point, Gauss equation shows that _g⊆Γ.
Furthermore, Chern and Kuiper provided a complementary relation in <cit.>.
Namely, they showed that the ranks μ:=(Γ) and ν_g:=(_g) are related by desigualdades de Chern-Kuiper's.
Straightforward computations show that if ν_g=μ-p then M^n is flat and ν_g=μ-p=n-p.
Proposition 7 of <cit.> analyzes the next case of the Chern-Kuiper inequalities in a restricted situation.
It shows that if g: M^n →ℝ^n+2 has ν_g=μ-1=n-3, then g is locally a composition.
However, the authors' approach seems difficult to generalize.
The first result of this work extends that proposition, and the generalization is in two directions.
We allow higher codimensions and do not impose a particular rank for the nullity.
Let g: M^n →ℝ^n+p be a submanifold with p≥ 2 and
ν_g=μ-p+1≤ n-p-1.
Then g=G∘ĝ is a composition, where G:N^n+1→ℝ^n+p is a flat submanifold and ĝ: M^n → N^n+1 is an isometric embedding.
Moreover, Δ_ĝ=Γ and ν_G=(n+1)-(p-1).
In particular, with the theorem above we characterize locally the submanifolds g: M^n →ℝ^n+2 with μ≠ν_g.
Observe that the inequality condition in the last result is equivalent to M^n being nowhere flat.
Using our technique, we analyze the next case of Chern-Kuiper's inequalities.
We show that if p≥ 3 and ν_g=μ-p+2≤ n-p-1 then, on connected components of a dense subset of M^n, g is also a composition.
Let g: M^n →ℝ^n+p be an isometric immersion with p≥ 3 and
ν_g=μ-p+2≤ n-p-1.
Let U be a connected component of an open dense subset of M^n where (p-ℓ):=dim 𝒮(α|_TM×Γ) is constant.
Then ℓ∈{1,2} and g|_U=G∘ĝ is a composition, where G: N^n+ℓ→ℝ^n+p is a flat submanifold and ĝ: U⊆ M^n→ N^n+ℓ is an isometric embedding.
Moreover, Δ_ĝ=Γ and ν_G∈{(n+1)-(p-j)}_j=ℓ^2.
The organization of this paper is as follows.
In section preliminaries, we recall flat bilinear forms, properties of the nullities, and ruled extensions, among others.
Ch-K section is devoted to analyzing the submanifolds with ν_g≠μ.
More precisely, is divided into subsections to analyze each possible value of μ-ν_g.
Lastly, in final section, we give some final comments about this work.
Acknowledgment. This work is a portion of the author's Ph.D. thesis at IMPA - Rio de Janeiro. The author would like to thank his adviser, Prof. Luis Florit, for his orientation.
§ PRELIMINARIES
In this section, we introduce the main techniques used in this article. Firstly, we discuss the basic properties of bilinear forms.
Then, we analyze the two principal distributions of this work, which are the nullity and the relative nullity.
The final subsection summarizes the properties of ruled extensions.
§.§ Flat bilinear forms
Given a bilinear map β:𝕍×𝕌→𝕎 between real vector spaces, set
𝒮(β)=span{β(X,Y):X∈𝕍, Y∈𝕌}⊆𝕎.
The (left) nullity of β is the vector subspace
Δ_β={X∈𝕍:β(X,Y)=0 , ∀ Y∈𝕌}⊆𝕍.
For each Y∈𝕌 we denote by β^Y:𝕍→𝕎 the linear map defined by β^Y(X)=β(X,Y).
Let
Re(β)={Y∈𝕌:(Im(β^Y)) is maximal}
be the set of (right) regular elements of β, which is open and dense in 𝕌.
There are similar definitions for left regular elements and right nullity.
Assume now that 𝕎 has a positive definite inner product ⟨·,·⟩:𝕎×𝕎→ℝ.
We say that β is 𝑓𝑙𝑎𝑡 if
⟨β(X,Y),β(Z,W)⟩=⟨β(X,W),β(Z,Y)⟩ ∀ X,Z∈𝕍 ∀ Y,W∈𝕌.
The next result is due to Moore in <cit.>.
It lets us determine the nullity of a flat bilinear form.
Let β:𝕍×𝕌→𝕎 be a flat bilinear form.
If Z_0∈𝕌 is a right regular element, then Δ_β=ker(β^Z_0).
In particular,
dim(Δ_β)=dim(𝕍)-dim(Im(β^Z_0))≥dim(𝕍)-dim(𝕎).
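As a concrete illustration (ours, not part of the paper), one can verify the lemma numerically for the flat "diagonal" forms β_i(X,Y)=⟨u_i,X⟩⟨v_i,Y⟩: the left nullity is the common kernel of the u_i, and it agrees with ker(β^Z_0) for a generic Z_0.

```python
# Numerical illustration (ours) of the lemma for the flat bilinear form
# beta_i(X, Y) = <u_i, X> <v_i, Y>, i = 1..r: the left nullity equals the common
# kernel of the u_i and coincides with ker(beta^{Z_0}) for a generic Z_0.
import numpy as np

rng = np.random.default_rng(0)
dimV, dimU, r = 7, 5, 3
U_vecs = rng.standard_normal((r, dimV))      # u_1, ..., u_r
V_vecs = rng.standard_normal((r, dimU))      # v_1, ..., v_r

def beta(X, Y):                              # flat: <beta(X,Y), beta(Z,W)> is symmetric in Y, W
    return (U_vecs @ X) * (V_vecs @ Y)

Z0 = rng.standard_normal(dimU)               # generic, hence a right regular element
M = np.stack([beta(e, Z0) for e in np.eye(dimV)], axis=1)    # matrix of beta^{Z_0}
ker_dim = dimV - np.linalg.matrix_rank(M)
print(ker_dim, dimV - np.linalg.matrix_rank(U_vecs))         # both equal dim of the nullity
```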
§.§ Intrinsic and relative nullities
We describe now the two main distributions of this work, the nullity of a Riemannian manifold and the relative nullity of a submanifold.
Given a Riemannian manifold M^n and x∈ M^n, the nullity of M^n at x is the nullity of its curvature tensor R at x, that is, the subspace of T_xM given by
Γ(x)={X∈ T_xM: R(X,Y)=0,∀ Y∈ T_xM}.
The rank of M^n at x is defined by n-μ, where μ=(Γ(x)).
As the results that we are looking for are of local nature and our subspaces are all either kernels or images of smooth tensor fields, without further notice we will always work on each connected component of an open dense subset of M^n where all these dimensions are constant and thus all the subbundles are smooth.
In particular, we assume that μ is constant and hence the second Bianchi identity implies that Γ is a totally geodesic distribution, namely, ∇_ΓΓ⊆Γ.
For an isometric immersion g:M^n→^n+p we denote by α^g:TM× TM→ T^⊥ _gM its second fundamental form.
We define the relative nullity of g at x as the nullity of α^g(x), that is, Δ_g(x):=_α^g.
The rank of g is the number n-ν_g, where ν_g=(Δ_g).
The Gauss equation implies that Δ_g⊆Γ, while the Codazzi equation implies that it is a totally geodesic distribution of M^n.
The isometric immersion g:M^n→ℝ^n+p is said to be R^d-ruled, if R^d⊆ TM is a d-dimensional totally geodesic distribution whose leaves are mapped by g onto (open subsets of) affine subspaces of ℝ^n+p.
§.§ Revisiting ruled extensions
Given a submanifold gMn^n+p with μ≠ν_g≤ n-p-1, we want to describe g as a composition G∘ĝ, where GNn+ℓ^n+p is flat as in Theorems <ref> and <ref>.
For this, we will use ruled extensions.
The present subsection describes the basic properties of these extensions, many of which are already present in the literature; see <cit.> and <cit.> for example.
In order to describe g as such a composition, the first step is to find a rank ℓ subbundle L=L^ℓ⊆ T^⊥_gM to be a candidate of normal bundle of ĝ.
Then, we consider the tensor ϕ:=ϕ_L:TM×(TM⊕ L)→ L^⊥ given by
ϕ(X,v)=(∇̃_Xv)_L^⊥,
where ∇̃ is the connection of ^n+p and the subindex denotes the orthogonal projection on the respective subspace, that is, L^⊥.
Proposition 17 of <cit.> shows the importance of this tensor for our work.
Namely, the flatness of ϕ is equivalent to the local existence of an isometric immersion ĝU⊆ Mn^n+ℓ whose normal bundle is L (up to a parallel identification), and its second fundamental form is the orthogonal projection of α^g onto L.
However, meaningful cases also occur when ϕ is non-necessarily flat, as shown by Thm L=1.
Consider the covariant derivative of ϕ as
(∇_Xϕ)(Y,v):=(∇̃_X(ϕ(Y,v)))_L^⊥-ϕ(∇_XY,v)-ϕ(Y,(∇̃_Xv)_TM⊕ L).
Notice that
(∇_Xϕ)(Y,v)-(∇_Yϕ)(X,v) =(∇̃_X(ϕ(Y,v))_L^⊥-(∇̃_Y(ϕ(X,v)))_L^⊥-ϕ([X,Y],v)
+ϕ(X,(∇̃_Yv)_TM⊕ L)-ϕ(Y,(∇̃_Xv)_TM⊕ L)
=(∇̃_X∇̃_Yv-∇̃_Y∇̃_Xv-∇̃_[X,Y]v)_L^⊥,
but the curvature of the ambient space is zero, so ϕ satisfies the following Codazzi equation
(∇_Xϕ)(Y,v)=(∇_Yϕ)(X,v), ∀ X,Y∈ TM, ∀ v∈ TM⊕ L.
We denote by _ϕ^l and _ϕ^r the left and right nullities of ϕ respectively.
Certainly, _ϕ^l⊆_ϕ^r∩ TM.
Moreover, Codazzi equation Codazzi phi implies that _ϕ^l⊆ TM is integrable and ∇̃__ϕ^l_ϕ^r⊆_ϕ^r.
In particular, if _ϕ^l=_ϕ^r then g is _ϕ^l-ruled.
Given such ϕ, we define the curvature of ϕ as the tensor R_ϕ given by
R_ϕ(X,Y,v,w)=ϕ(X,w)ϕ(Y,v)-ϕ(X,v)ϕ(Y,w), ∀ X,Y∈ TM, ∀ v,w∈ TM⊕ L.
In particular, ϕ is flat when its curvature is zero.
Intuitively, ϕ and R_ϕ are the second fundamental form and curvature of the extension respectively.
The curvature of ϕ satisfies the following Bianchi identities.
[Bianchi's identities]
The curvature of ϕ satisfies the following first and second Bianchi identities
∑ R_ϕ(S,T,U,v)=0, ∀ S,T,U∈ TM, ∀ v∈ TM⊕ L,
∑(∇_SR_ϕ)(T,U,v,w)=0, ∀ S,T,U∈ TM, ∀ v,w∈ TM⊕ L.
where the sum denotes the cyclic sum over S, T, and U.
The first Bianchi identity comes from opening the curvatures and simplifying terms.
To prove the second identity, we make the computations at a fixed point q∈ M^n, and we take smooth sections such that the derivatives between S, T, U, v, and w are zero at q, that is, (∇_ST)(q)=0, (∇̃_Sv)_TM⊕ L(q)=0, and so on.
Denote by B the left-hand side of Bianchi phi, and notice that
B =∑ S(R_ϕ(T,U,v,w))
=∑[(∇_Sϕ)(T,w)ϕ(U,v)+ϕ(T,w)(∇_Sϕ)(U,v)]-∑[(∇_Sϕ)(T,v)ϕ(U,w)+ϕ(T,v)(∇_Sϕ)(U,w)]
=∑[(∇_Sϕ)(T,w)ϕ(U,v)-ϕ(T,v)(∇_Sϕ)(U,w)]+∑[ϕ(T,w)(∇_Sϕ)(U,v)-(∇_Sϕ)(T,v)ϕ(U,w)],
rearranging the terms of both sums we get
B=∑[(∇_Sϕ)(T,w)-(∇_Tϕ)(S,w)ϕ(U,v)]+∑[ϕ(T,w)(∇_Sϕ)(U,v)-(∇_Uϕ)(S,v)],
which is zero since ϕ satisfies Codazzi equation Codazzi phi.
We denote by Γ_ϕ^l and Γ_ϕ^r the left and right nullities of R_ϕ, that is
Γ_ϕ^l:={X∈ TM| R_ϕ(X,TM,TM⊕ L,TM⊕ L)=0}⊆ TM,
and
Γ_ϕ^r:={v∈ TM⊕ L| R_ϕ(TM,TM,TM⊕ L,v)=0}⊆ TM⊕ L.
Certainly, _ϕ^l⊆Γ_ϕ^l and _ϕ^r⊆Γ_ϕ^r.
The first Bianchi identity 1 Bianchi identity phi shows that Γ_ϕ^l⊆Γ_ϕ^r∩ TM.
Moreover, the second one implies that Γ_ϕ^l⊆ TM is integrable and (∇̃_Γ_ϕ^lΓ_ϕ^r)_TM⊕ L⊆Γ_ϕ^r.
In particular, ∇̃__ϕ^lΓ_ϕ^r⊆Γ_ϕ^r.
Consider the vector bundle Λ:=_ϕ^r∩(_ϕ^l)^⊥⊆ TM⊕ L, and suppose that rank(Λ)=ℓ=rank(L).
The ruled extension G:Λ→^n+p of g is given by
G(ξ_q)=g(p)+ξ_q, ∀ q∈ M^n, ∀ξ_q∈Λ_q
We restrict G to a neighborhood N^n+ℓ of the zero section ĝMnN^n+ℓ⊆Λ in order to G being an immersion.
Moreover, we endow N^n+ℓ with the induced metric by G.
Assume that _ϕ^l=_ϕ^r∩ TM and (Λ)=ℓ=(L).
Then _ϕ^r is the nullity of G, that is, _G=_ϕ^r up to a parallel identification along _ϕ^l.
Similarly, the nullity of N^n+ℓ is given by Γ_ϕ^r.
First, G is _ϕ^r-ruled since ∇̃__ϕ^l_ϕ^r⊆_ϕ^r.
Take a section ξ of N^n+ℓ⊆Λ and Y∈ TM, then
G_*(ξ_*Y)=g_*Y+∇̃_Yξ∈ TM⊕ L=G_*(TN).
Notice that TM⊕ L is parallel along _ϕ^r since _ϕ^l=_ϕ^r∩ TM, so Λ⊆_G.
As TN≅ TM⊕Λ, to compute the second fundamental form of G is enough to understand α^G|_TM×(TM⊕ L).
If X∈ TM then
∇̃_X(G_*(ξ_*Y))=g_*∇_XY+α(X,Y)+∇̃_X∇̃_Yξ,
so
α^G(X,ξ_*Y)=(∇̃_X(G_*(ξ_*Y)))_L^⊥=(α(X,Y))_L^⊥+(∇̃_X∇̃_Yξ)_L^⊥=ϕ(X,Y)+ϕ(X,∇̃_Yξ)=ϕ(X,ξ_*Y),
up to parallel identifications.
This proves that _ϕ^r=_G.
Finally, Gauss equation shows that the curvature tensor R_N of N^n+ℓ is given by
R_N(X,Y,v,w)=α^G(X,w)α^G(Y,v)-α^G(X,v)α^G(Y,w)=R_ϕ(X,Y,v,w), ∀ X,Y∈ TM, ∀ v,w∈ TM⊕ L,
which shows that Γ_ϕ^r is the nullity of N^n+ℓ since the remaining values of R_N involve terms of relative nullity.
We can give a weaker version of the last proposition for 0≤(Λ)<rank(L).
In that case, there is an orthogonal decomposition T_G^⊥N=ℒ⊕ E such that rank(ℒ)=rank(L)-(Λ), G is _ϕ^r-ruled, and this distribution coincides with the nullity of the E-component of α^G.
§ CHERN-KUIPER'S INEQUALITIES
In this section, we describe the basic properties of the submanifolds gMn^n+p whose relative nullity _g does not coincide with the intrinsic nullity Γ.
In the following subsections, we analyze the cases ν_g=μ-p, ν_g=μ-p+1, and ν_g=μ-p+2 respectively.
Let gMn^n+p be a submanifold with non-trivial intrinsic nullity Γ≠0.
Call α its second fundamental form and _g its relative nullity.
Gauss equation implies that _g⊆Γ and the flatness of the bilinear tensor β:=α|_TM×Γ.
Let _β be the (left) nullity of β.
The flatness of β implies that
α(Y,X)∈𝒮(β)^⊥, ∀ Y∈_β, ∀ X∈ TM.
So in particular
α(Y,X)∈𝒮(β)∩𝒮(β)^⊥=0, ∀ Y∈_β∩Γ, ∀ X∈ TM,
which shows that _g=_β∩Γ.
Then, we have the following relation
ν_g+(_β+Γ)=(_β)+μ.
Notice that _β⊆ TM is an integrable distribution.
Indeed, Codazzi equation for T_1,T_2∈_β gives
α([T_1,T_2],Z)=α(T_1,∇_T_2Z)-α(T_2,∇_T_1Z), ∀ Z∈Γ,
but the left-hand side belongs to 𝒮(β) and the right-hand side to 𝒮(β)^⊥ by alpha(Y,X) in S(beta)perp, so [T_1,T_2]∈_β.
Let us recall the Chern-Kuiper inequalities and provide a quick proof.
Let g: M^n →ℝ^n+p be a submanifold; then ν_g≤μ≤ν_g+p holds.
Since Δ_g⊆Γ, we have ν_g≤μ.
Take a (right) regular element Z_0∈Re(β)⊆Γ of β; then, by the nullity lemma for flat bilinear forms and the identity ν_g+dim(Δ_β+Γ)=dim(Δ_β)+μ, we get that
ν_g+n≥ν_g+dim(Δ_β+Γ)=dim(Δ_β)+μ=n-dim(Im(β^Z_0))+μ≥ n-p+μ,
which proves the second inequality of the Chern-Kuiper inequalities.
Before analyzing the cases of the Chern-Kuiper inequalities, we present a result that bounds the rank of 𝒮(β) under the hypothesis ν_g≤ n-p-1.
Let g: M^n →ℝ^n+p be a submanifold with ν_g≤ n-p-1.
Then
μ-ν_g≤dim 𝒮(β)≤ p-1.
The first inequality comes from suma de dimensiones=suma de dimensiones and nulidad para no simetrica since
n+ν_g≥(_β+Γ)+ν_g=(_β)+μ≥ n-(𝒮(β))+μ.
On the other hand, suppose by contradiction that 𝒮(β)=T^⊥_gM. Then, by
alpha(Y,X) in S(beta)perp, we have that _β=_g.
However, in this case, nulidad para no simetrica implies that
ν_g=(_β)≥ n-(𝒮(β))=n-p,
which is absurd.
§.§ The case TEXT
In this subsection, we analyze the maximal case of Chern-Kuiper's inequalities.
We also describe the technique that will be used for the following cases.
The next result shows that only flat submanifolds attain the second inequality of desigualdades de Chern-Kuiper's.
Let g: M^n →ℝ^n+p be a submanifold with
μ=ν_g+p.
Then M^n is flat, in particular μ=ν_g+p=n.
In this case we must have equalities in hhhhh, hence _β+Γ=TM and Im(β^Z_0)=𝒮(β)=T^⊥_gM.
Then alpha(Y,X) in S(beta)perp implies that _β=_g⊆Γ, and thus Γ=_β+Γ=TM.
There are natural parametrizations for the flat submanifolds attaining the extremal equality μ=ν_g+p; see <cit.> for p=1 and <cit.> for p=2.
This is generalized in <cit.> for any p≤ n.
The Chern-Kuiper inequalities and the proposition above characterize the hypersurfaces with Δ_g≠Γ by means of the Gauss parametrization.
Hence, we assume from now on that p≥ 2.
There is a natural way to produce submanifolds gMn^n+p with _g≠Γ using compositions.
Consider a submanifold ĝMn^n+ℓ with Γ=_ĝ≠0, ℓ<p, and let GU⊆n+ℓ^n+p be an isometric immersion of an open subset U of ^n+ℓ with ĝ(M^n)⊆ U.
Then g:=G∘ĝ generically has less nullity than ĝ, so _g≠Γ.
Conversely, we will use the following strategy to prove that such a g must be a composition.
Naively, 𝒮(β) should be T^⊥_jU (or, at least, contained), and so L:=𝒮(β)^⊥⊆ T^⊥_gM is a candidate to be T^⊥_ĝM.
Hence, we can use the techniques of section revisiting ruled extensions.
Namely, we will study the properties of the tensor ϕ=ϕ_L given by phi en la seccion chern kuiper associated with L, then we will use prop ruled extensions to obtain the desired composition.
§.§ The case TEXT
This subsection is dedicated to analyzing the following case of Chern-Kuiper's inequalities.
We will prove a more general statement.
We characterize the submanifolds such that the first inequality of lemma bound of L is attained; they are all flat compositions.
Suppose that g: M^n →ℝ^n+p is a submanifold with μ=ν_g+p-1 and p≥2.
The lemma above implies that
dim 𝒮(β)=μ-ν_g=p-1.
In particular, we are in a situation where the lower bound on the rank of 𝒮(β) is attained.
The following result analyzes this equality in complete generality.
The composition theorem stated in the introduction for μ=ν_g+p-1 is a direct consequence of it.
Consider a submanifold g: M^n →ℝ^n+p with Δ_g≠Γ.
Let β=α|_TM×Γ and suppose that
p-ℓ:=dim 𝒮(β)=μ-ν_g<p.
Then g=G∘ĝ is a composition, where G: N^n+ℓ→ℝ^n+p is a flat submanifold and ĝ: M^n → N^n+ℓ is an isometric embedding.
Moreover, Δ_ĝ=Γ and ν_G=(n+1)-(p-ℓ).
Let Z_0∈Γ be a (right) regular value of β.
nulidad para no simetrica and suma de dimensiones=suma de dimensiones imply that
(_β+Γ)=(_β)+(Γ)-(_g)= n-(Im(β^Z_0))+μ-ν_g≥ n,
which shows that α(Z_0,TM)=L^⊥ and _β+Γ=TM.
In particular, 𝒮(β)=𝒮(α|_Γ×Γ).
Let L:=𝒮(β)^⊥⊆ T^⊥_gM and consider the tensor ϕ=ϕ_L given by phi en la seccion chern kuiper.
We will use prop ruled extensions to prove that g is such a composition.
Hence, we need to show that ϕ is flat,
(_ϕ^r)=(_ϕ^l)+ℓ=n+ℓ-(p-ℓ),
and
_β=_ϕ^l=_ϕ^r∩ TM.
Notice that ϕ(_β,TM)=0 by alpha(Y,X) in S(beta)perp, and so _β=_ϕ^r∩ TM.
Moreover, if Y∈_β then Codazzi equation for ξ∈ L and Z_1,Z_2∈Γ gives us that
ϕ(Y,ξ)α(Z_1,Z_2)=∇^⊥_Yξα(Z_1,Z_2)=-ξ(∇_Y^⊥α)(Z_1,Z_2)=ξα(Y,∇_Z_1Z_2)=0, ∀ Z_1,Z_2∈Γ,
since Γ⊆ TM is totally geodesic.
Hence, ϕ(Y,ξ)=0 since 𝒮(α|_Γ×Γ)=𝒮(β)=L^⊥, and so _β=_ϕ^l.
This proves hjhj.
As TM=_β+Γ and hjhj holds, the flatness of ϕ is equivalent to the flatness of ϕ|_Γ×(Γ⊕ L).
Notice that ϕ|_Γ×Γ=α|_Γ×Γ is flat by Gauss equation.
On the other hand, if Z_1,Z_2,Z_3∈Γ and ξ∈ L then
ϕ(Z_1,ξ)ϕ(Z_2,Z_3)=∇^⊥_Z_1ξα(Z_2,Z_3)=-ξ(∇^⊥_Z_1α)(Z_2,Z_3),
which is symmetric in Z_1 and Z_2 by Codazzi equation.
Hence, to prove the flatness is enough to show that
ϕ(T_1,ξ_1)ϕ(t T_2,ξ_2)=ϕ(T_1,ξ_2)ϕ(T_2,ξ_1), ∀ T_1,T_2∈Γ, ∀ξ_1,ξ_2∈ L.
Notice first that the nullity of α|_Γ×Γ is _β∩Γ=_g.
Thus, α|_Γ×Γ is completely described by Theorem 2 of <cit.>.
Namely, there are vectors Z_1,…,Z_p-ℓ∈Γ∩_g^⊥ such that α(Z_i,Z_j)=0 for i≠ j and the set
{ρ_i:=α(Z_i,Z_i)}_i=1^p-ℓ
is an orthonormal basis of L^⊥.
Given ξ∈ L, Codazzi equation implies that
ϕ(Z_i,ξ)ρ_j=-ξ(∇^⊥_Z_iα)(Z_j,Z_j)=ξ∇^⊥_Z_j(α(Z_i,Z_j))-α(∇_Z_jZ_i,Z_j)-α(Z_i,∇_Z_jZ_j)=0, ∀ i≠ j.
Then ϕ(Z_i,ξ)=λ_i(ξ)ρ_i for some 1-forms λ_i:L→.
Then jhjh holds since {Z_1,…,Z_p-ℓ} is a basis of Γ and
ϕ(Z_i,ξ_1)ϕ(Z_j,ξ_2)=δ_ijλ_i(ξ_1)λ_j(ξ_2), ∀ i,j, ∀ξ_1,ξ_2∈ L.
Finally, by nulidad para no simetrica, we have for Z_0∈Γ a regular element of β that
(_ϕ^r)=n+ℓ-Im(ϕ^Z_0)=(n+ℓ)-(p-ℓ)=n-Im(β^Z_0)+ℓ=(_β)+ℓ=(_ϕ^l)+ℓ.
The result now follows from prop ruled extensions.
Notice that the second fundamental form of ĝ is the orthogonal projection of α onto L, but as α(Γ,TM)∈ L^⊥ then Γ=_ĝ.
We can describe locally all the submanifolds gMn^n+2 with _g≠Γ.
Let gMn^n+2 be a submanifold with Γ≠_g.
Then, on each connected component U of an open dense subset of M^n, we have one of the following possibilities:
* μ=ν_g+1 and g|_U=j∘ĝ is a composition where ĝ:U→ V⊆^n+1 and j:V→^n+2 are isometric immersions with Γ=_ĝ;
* μ=ν_g+2 and U is flat.
By the proposition on the extremal flat case and the composition theorem for μ=ν_g+p-1, it only remains to analyze the case μ=n=ν_g+1.
However, this case is a direct consequence of the composition theorem of this subsection.
Each case of the corollary above is naturally parametrizable.
For (i) we use the Gauss parametrization described in <cit.>, and Corollary 18 of <cit.> describes the second case.
§.§ The case TEXT
In this final subsection, we discuss the next case of Chern-Kuiper's inequalities.
For this, we prove Thm L=1 which analyzes in generality the case ℓ=1.
This result and teo de composicion para betaD imply thm Ch-K nu+p-2=mu.
teo de composicion para betaD describes the submanifolds that attain the first inequality of eq bound of the rank.
We now analyze when the second one does.
Namely, let us consider gMn^n+p a submanifold with _g≠Γ and suppose that L:=𝒮(β)^⊥ has rank ℓ=1.
As before, consider ϕ=ϕ_L the tensor given by phi en la seccion chern kuiper.
We begin with the next result.
If L has rank 1 and
ν_g≤ n-p-1 then _β=_ϕ^l=_ϕ^r∩ TM.
Furthermore, if α(_β,_β)≠ 0 then Γ⊆Γ_ϕ^l.
On the second Bianchi identity Bianchi phi, take S=Z∈Γ, T=d_1,v=d_2∈_β, and w∈ TM to obtain
0=R_ϕ(Z,d_1,α(U,d_2),w)+R_ϕ(U,Z,α(d_1,d_2),w) ∀ Z∈Γ, ∀ d_1,d_2∈Γ, ∀ w∈ TM.
In the last equation, fix d_1 and choose 0≠ d_2∈_β∩_g^⊥ such that α̂(d_1,d_2)=0.
This is possible since ℓ=1 and
(_β∩_g^⊥)=(_β)-(_g)≥ n-(p-1)-(n-p-1)=2,
where the last inequality comes from nulidad para no simetrica.
Let ρ∈ L be a fixed unit generator of L and take U∈ TM such that ρ=α̂(U,d_2) to obtain
0=R_ϕ(Z,d_1,ρ,w)=ϕ(Z,w)ϕ(d_1,ρ), ∀ Z∈Γ, ∀ d_1∈_β, ∀ w∈ TM,
but ϕ(Γ,TM)=β(TM,Γ)=L^⊥, so ϕ(d_1,ρ)=0 for any d_1∈_β.
Thus
_β⊆_ϕ^l⊆_ϕ^r∩ TM⊆_β.
Finally, suppose that α(_β,_β)≠0.
Then α(_β,_β)=L by alpha(Y,X) in S(beta)perp.
Take d_1,d_2∈_β such that α(d_1,d_2)=ρ, and use them in triop to obtain
0=R_ϕ(Z,U,ρ,w), ∀ Z∈Γ, ∀ U,w∈ TM,
which proves that Γ⊆Γ_ϕ^l since R_ϕ(Γ,TM,TM,TM)=0 by Gauss equation.
lemma L=1, phi(delta-beta,TM+L)=0 holds under the weaker assumption of (_β∩_g^⊥)≥2 instead of ν_g≤ n-p-1.
Let g: M^n →ℝ^n+p be an isometric immersion with
μ≠ν_g≤ n-p-1.
Suppose that L:=𝒮(α|_TM×Γ)^⊥⊆ T^⊥_gM has rank 1.
Then, on each connected component U of an open dense subset of M^n where μ, ν_g, and k=dim α(Δ_β,Δ_β) are constant, we have the following possibilities:
* k=1 and g|_U is a composition of a ruled extension G: N^n+1→ℝ^n+p and an isometric embedding ĝ:U⊆ M^n→ N^n+1.
Moreover, Δ_ĝ=Γ,
(n+1)-(p-1)≤ν_G≤ (n+1)-(μ-ν_g),
and ĝ_*(Γ)⊆Γ̂, where Γ̂⊆ TN is the nullity of N^n+1 and satisfies dim(Γ̂)≥μ-ν_g+ν_G;
* k=0 and g is Δ_β-ruled.
Moreover, the rank of the ruling is at least (n-p+1).
We want to use prop ruled extensions to prove this result.
By lemma L=1, phi(delta-beta,TM+L)=0 we know that _β=_ϕ^l=_ϕ^r∩ TM.
Suppose first that k=1, and so Γ⊆Γ_ϕ^l by lemma L=1, phi(delta-beta,TM+L)=0.
This implies that the tensor β̂:(TM⊕ L)×Γ→ L^⊥ given by β̂(v,Z)=ϕ(Z,v) is a flat extension of β.
Notice that the left nullity of β̂ coincides with _ϕ^r.
Indeed, let us verify the non-trivial contention.
Take v_0∈ TM⊕ L such that β̂(Γ,v_0)=0.
Then, as Γ⊆Γ_ϕ^l, we have that
0=R_ϕ(Z,X,v_0,w)=ϕ(X,v_0)ϕ(Z,w), ∀ Z∈Γ, ∀ X,w∈ TM,
but ϕ(Γ,TM)=L^⊥, and so v_0∈_ϕ^r.
In particular, nulidad para no simetrica for β and β̂ shows that the dimensions of _ϕ^r and _ϕ^l=_β differ by at most 1.
However, if _β=_ϕ^l=_ϕ^r then g would be _β-ruled which is absurd since k=1.
prop ruled extensions shows that g has a ruled extension GNn+1^n+p.
Moreover, nulidad para no simetrica implies that the nullity of G satisfies that
ν_G=(_ϕ^r)=(_β)+1≥ n+1-(p-1).
On the other hand, using suma de dimensiones=suma de dimensiones we get that
ν_G=(_β)+1=(_β+Γ)+ν_g+1-μ≤ n+1-(μ-ν_g).
The bound on the nullity of N^n+1 follows from suma de dimensiones=suma de dimensiones and Γ⊆Γ_ϕ^l ⊆Γ_ϕ^r since
(Γ̂)=(Γ_ϕ^r)≥(_ϕ^r+Γ)=1+(_β+Γ)=1+(_β)+μ-ν_g=ν_G+μ-ν_g.
Finally, suppose that k=0, that is, α(_β,_β)=0.
Codazzi equation phi en la seccion chern kuiper for d_1,d_2∈_β give us that
0=(∇_Xϕ)(d_1,d_2)-(∇_d_1ϕ)(X,d_2)=ϕ(X,∇_d_1d_2), ∀ X∈ TM, ∀ d_1,d_2∈_β,
which proves that _β is totally geodesic since _ϕ^r∩ TM=_β.
Hence, g is _β-ruled, and
nulidad para no simetrica gives the desired bound on the rank of the rulings.
We now prove the theorem stated in the introduction for ν_g=μ-p+2.
By the bound on the rank of 𝒮(β) we know that ℓ∈{1,2}.
The case ℓ=2 follows from the composition theorem of the previous subsection.
Assume now that ℓ=1, so we can apply the theorem above.
Then, if k=1, N^n+1 must be flat since
(Γ̂)≥ p-2+ν_G≥ p-2+(n+1)-(p-1)=n,
but (Γ̂)=n is not possible by the symmetries of the curvature tensor, so Γ̂=TN.
It remains to exclude the second possibility of that result, that is, k=0.
Suppose, by contradiction, that ℓ=1 and g is _g-ruled on an open subset of M^n.
Notice that as ϕ|_TM×Γ=β is flat and _β=_ϕ^r∩ TM, so ϕ|_TM× (_β+Γ) is flat.
However, by suma de dimensiones=suma de dimensiones and nulidad para no simetrica we know that
(_β+Γ)=(_β)+μ-ν_g≥ n-(p-1)+p-2=n-1,
so ϕ|_TM× TM must be flat.
Then, fixing a unit generator ρ of L, the shape operator A=A_ρ satisfies Gauss equation.
However, as g is _β-ruled and AΓ=0, we have that
A(_β+Γ)_β+Γ=0,
which implies that μ=(A)≥ n-2.
This is a contradiction since μ=ν_g+p-2≤ (n-p-1)+(p-2)= n-3.
§ FINAL COMMENTS
In this final section, we give some observations of this work.
The results of this work suggest that there are at least two distinct families of submanifolds gMn^n+p with ν_g≠μ.
The submanifolds with rank greater than their codimension and those that do not.
Moreover, Theorems <ref> and <ref> suggest that, aside from the ruled cases, any submanifold of the first class is contained in a submanifold of the second one.
In the submanifold theory, there are many works in which it is necessary to exclude compositions of the form g=G∘ĝ where ĝ:N^n+ℓ→^n+p is a flat submanifold; see <cit.> and <cit.> for example.
Moreover, the notion of honest deformation is to exclude this type of behavior in the deformation theory; see <cit.>.
This concept seems to be related to our work, but we did not deal with deformations.
For this reason, it may be more appropriate a notion of honesty that depends only on the submanifold itself.
IMPA – Estrada Dona Castorina, 110
22460-320, Rio de Janeiro, Brazil
E-mail address: [email protected]
|
http://arxiv.org/abs/2307.05761v2 | 20230711193112 | Electromagnetic bremsstrahlung and energy loss in chiral medium | [
"Jeremy Hansen",
"Kirill Tuchin"
] | hep-ph | [
"hep-ph"
] | |
http://arxiv.org/abs/2307.05384v1 | 20230711155204 | Stochastic Nested Compositional Bi-level Optimization for Robust Feature Learning | [
"Xuxing Chen",
"Krishnakumar Balasubramanian",
"Saeed Ghadimi"
] | math.OC | [
"math.OC",
"cs.DS",
"cs.LG",
"stat.ML"
] |
We develop and analyze stochastic approximation algorithms for solving nested compositional bi-level optimization problems. These problems involve a nested composition of T potentially non-convex smooth functions in the upper-level, and a smooth and strongly convex function in the lower-level. Our proposed algorithm does not rely on matrix inversions or mini-batches and can achieve an ϵ-stationary solution with an oracle complexity of approximately Õ_T(1/ϵ^2), assuming the availability of stochastic first-order oracles for the individual functions in the composition and the lower-level, which are unbiased and have bounded moments. Here, Õ_T hides polylog factors and constants that depend on T. The key challenge we address in establishing this result relates to handling three distinct sources of bias in the stochastic gradients. The first source arises from the compositional nature of the upper-level, the second stems from the bi-level structure, and the third emerges due to the utilization of Neumann series approximations to avoid matrix inversion. To demonstrate the effectiveness of our approach, we apply it to the problem of robust feature learning for deep neural networks under covariate shift, showcasing the benefits and advantages of our methodology in that context.
§ INTRODUCTION
We study a new class of optimization problems, namely the nested compositional bi-level problems, that are given by
min_x∈ X Φ(x):= Ψ(x, y^*(x)), y^*(x)= argmin_y∈ℝ^q g(x,y),
where X is a closed and convex set in ℝ^p, and Ψ is a compositional function defined as Ψ(x,y) = f_1∘…∘ f_T(x,y). The functions f_i(z):= 𝔼[F_i(z;ξ_i)] :ℝ^d_i→ℝ^d_i-1, for i=1,2,...,T, in the upper-level of (<ref>) are assumed to be smooth. The function g(x,y):= 𝔼[G(x,y;ζ)] :ℝ^d_T→ℝ in the lower-level is also assumed to be smooth and strongly convex, with d_T = p + q.
Motivation. The motivation behind our algorithm design for solving (<ref>) lies in our primary objective of developing robust feature learning methods within the realm of predictive deep learning. It is widely acknowledged that in various applications, the distribution of the testing data significantly differs from that of the training data, which is commonly referred to as covariate shift <cit.>. Ensuring the robustness of training procedures against this shift is crucial for the effective implementation of predictive deep learning techniques in real-world scenarios.
As a concrete example, consider solving least-squares non-parametric regression with deep neural networks (e.g., <cit.>), with X∈ℝ^d being the input data and Y∈ℝ being the real-valued response. We denote by Φ: ℝ^d →ℝ^p the feature map learned by a depth-L neural network and by β∈ℝ^p the last layer of the neural network. With this notation, we can explicitly decouple the feature learning part (captured by Φ) and the regression part (captured by β). The problem of learning robust features in this context can be formulated as the following distributionally robust (DR) bi-level optimization problem.
min_Φ max_Q ∈ B(P) 𝔼_Q [(Y - ⟨β, Φ(X)⟩)^2] s.t. β = argmin_β̃∈ℝ^p 𝔼_P[ (Y - ⟨β̃, Φ(X)⟩)^2 ].
When a coherent risk measure is chosen, the distributionally robust optimization problem at the top-level could be reformulated as a risk minimization problem <cit.>. In particular, when a mean semi-deviation risk measure is used, the min-max problem above can be reformulated as the following compositional minimization problem
min_Φ{𝔼[(Y - ⟨β, Φ(X)⟩)^2 ] + λ (𝔼[max(0,(Y - ⟨β, Φ(X)⟩)^2 - 𝔼[(Y - ⟨β, Φ(X)⟩)^2] )^2])^1/2}
s.t. β = argmin_β̃∈ℝ^p 𝔼_P[ (Y - ⟨β̃, Φ(X)⟩)^2 ],
which fits the general setup that we consider in (<ref>). We refer, for example, to <cit.> for additional details on the reformulation. In Section <ref>, we provide simulation experiments illustrating the robustness of the above approach to certain classes of covariate shifts.
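To make the nested composition concrete, the following small numpy sketch (ours; the function name and inputs are illustrative, not from the paper) evaluates the mean semi-deviation objective above on residuals r_i = y_i - ⟨β, Φ(x_i)⟩: an inner expectation feeds into a max, a second expectation, and an outer square root, which is exactly the nesting handled by the algorithm.

```python
# Toy evaluation (ours) of the mean semi-deviation objective in the display above,
# given residuals r_i = y_i - <beta, Phi(x_i)> and a risk level lam >= 0.
import numpy as np

def mean_semideviation_loss(residuals, lam):
    sq = residuals ** 2                                  # inner losses
    mean_sq = sq.mean()                                  # inner expectation
    upper = np.maximum(0.0, sq - mean_sq)                # exceedances over the mean
    semidev = np.sqrt(np.mean(upper ** 2))               # outer expectation + square root
    return mean_sq + lam * semidev

print(mean_semideviation_loss(np.array([0.1, -0.4, 2.0, 0.3]), lam=0.5))
```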
Bias sources. The main challenge in designing and analyzing stochastic optimization algorithms for solving (<ref>) is dealing with the various sources of bias arising in estimating the gradient. To illustrate the point, first note that under standard smoothness assumptions, it is easy to see that the gradient of Φ in (<ref>) is given by
∇Φ(x) = ∇_xΨ(x, y^*(x)) - ∇_xy^2 g(x, y^*(x))·∇^2_yy g(x, y^*(x))^-1∇_yΨ(x, y^*(x)),
where the product ∇^2_yy g(x, y^*(x))^-1∇_yΨ(x, y^*(x)) is the term approximated below by a Neumann series.
In the above expression, we have highlighted the following three sources of bias. The first one is due to the presence of nested composition of T functions at the upper-level. The second one is due to the presence of the bi-level structure in which there is a lack of knowledge of the exact solution to the lower problem. Moreover, calculating the Hessian inverse in (<ref>) is computationally prohibitive even for moderate size problems. A common practice in bi-level optimization literature to avoid this matrix inversion is to use the Neumann series based methods to approximate the Hessian inverse or its product and the gradient directly (see e.g., <cit.>).
The third source of bias thus comes from r̅ - ∇^2_yy g(x, y^*(x))^-1∇_y Ψ(x, y^*(x)), where r̅ denotes the vector obtained by approximating the solution of the linear system ∇^2_yy g(x, y^*(x))· r = ∇_y Ψ(x, y^*(x)). Moreover, the aforementioned sources of bias are also nested, i.e., the bias arising due to Neumann series approximation is affected by the nested structure of Φ and lack of knowledge of y^*(x). A major contribution of our work lies in carefully dealing with the above three sources of bias in the design of our algorithm and its convergence analysis.
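For intuition, the deterministic version of this Neumann-series approximation can be sketched as below (ours, not the paper's estimator); the stochastic algorithm replaces H and v by sampled Hessians and gradients, which is precisely where the third source of bias enters.

```python
# Minimal numpy sketch (ours) of the truncated Neumann-series approximation of
# H^{-1} v: for a positive definite H and 0 < eta < 1/||H||, we have
# H^{-1} v = eta * sum_{k>=0} (I - eta*H)^k v, and we keep only K terms.
import numpy as np

def neumann_hvp_inverse(H, v, eta, K):
    r, term = np.zeros_like(v), v.copy()
    for _ in range(K):
        r += term
        term = term - eta * (H @ term)       # apply (I - eta*H) to the previous term
    return eta * r

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + np.eye(5)                      # strongly convex lower-level Hessian
v = rng.standard_normal(5)
approx = neumann_hvp_inverse(H, v, 0.5 / np.linalg.norm(H, 2), 500)
print(np.linalg.norm(approx - np.linalg.solve(H, v)))    # small truncation error
```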
Related Works. Studying bi-level problems dates back to <cit.>. Since then, there has been a body of literature working on different forms of bi-level problems. A series of works have focused on replacing the lower-level problem with its optimality conditions as constraints for the upper-level one and thus reducing the problem to a single-level constrained optimization problems (see e.g., <cit.>). On the other hand, a number of iterative algorithms have been also proposed to directly tackle the bi-level problems (see e.g., <cit.>).
More recently, due to the emerging applications of bi-level models, studying finite-time convergence analysis of iterative algorithms for solving bi-level optimization problems has gained renewed interest. Approximation algorithms with established such convergence analysis have been first proposed in <cit.> for convex/nonconvex and deterministic/stochastic settings and followed up by several works aimed to improve the complexity bounds under different structural assumptions or settings (see e.g., <cit.>).
On the other hand, composition problems were first studied in <cit.>, where the authors considered a penalized version of stochastic constraints in the objective function. Analyzing the finite-time convergence of SA-type algorithms for such problems has seen renewed interest over the past few years (see e.g., <cit.>). However, all of these works obtained complexity bounds that are worse than their counterparts for classical (single-level) stochastic optimization. The first optimal bound for two-level composition problems was established in <cit.> and generalized to the multi-level case in <cit.>. Other works have studied nested composition problems under different assumptions, such as non-smoothness (e.g., <cit.>), mean-square smoothness (e.g., <cit.>), and dependence (e.g., <cit.>).
Our main contributions in this paper consist of the following aspects.
* We generalize the bi-level optimization problems by considering the case when the upper-level problem is a constrained optimization over a nested composition of T functions, as defined in (<ref>). We propose , a fully-online and batch-free stochastic approximation algorithm for solving (<ref>), based on a novel Neumann series approximation procedure for avoiding matrix inversions. To our knowledge, there has been no prior work considering the above mentioned combination of bi-level and nested compositional problems.
* We undertake a careful analysis of the oracle complexity of the proposed algorithm and show in Theorem <ref> that requires Õ_T(1/ϵ^2) calls to the stochastic oracle to obtain an ϵ-approximate stationary solution. Our proofs are based on non-trivial adaptations of the existing analyses of bi-level and nested compositional problems; the main difficulty arises in handling the different sources of biases in the stochastic gradients.
* We model the problem of robust feature learning in predictive deep learning as a special case of the proposed bi-level framework. Through simulations we demonstrate that the learnt features using are robust to a wide class of covariate shifts.
The rest of this paper is organized as follows. In Section <ref> we present our main algorithms, and in Section <ref> we provide their convergence analyses. We then present simulation results in Section <ref>.
§ PRELIMINARIES AND METHODOLOGY
In this section, we provide the main assumptions and some technical results that will be used later in our analysis. The following assumptions are standard in both the bi-level and the compositional optimization literature.
All functions f_1,…,f_T and their derivatives are Lipschitz continuous with Lipschitz constants L_f_i and L_∇ f_i, respectively.
The lower-level function g is twice continuously differentiable and μ_g-strongly convex with respect to y for any x. Its gradient ∇ g and Hessian matrix ∇^2 g are Lipschitz continuous with Lipschitz constants L_∇ g and L_∇^2 g.
We adopt some useful properties from bi-level optimization and compositional optimization literature.
Suppose Assumption <ref> holds. Then Ψ(x,y) and ∇Ψ(x,y) are L_Ψ and L_∇Ψ-Lipschitz continuous respectively with the constants given by
L_Ψ = ∏_i=1^TL_f_i, L_∇Ψ = ∑_j=1^T[L_∇ f_j∏_l=1^j-1L_f_l∏_l=j+1^TL_f_l^2]
The expression of L_Ψ is a direct result of Assumption <ref>. The proof of the closed form expression of L_∇Ψ can be found in Lemma 2.1 of <cit.>.
Suppose Assumptions <ref> and <ref> hold. Then the hypergradient ∇Φ(x) takes the form
∇Φ(x) = ∇_x Ψ(x, y^*(x)) - ∇_xy^2 g(x, y^*(x))·∇^2_yy g(x, y^*(x))^-1∇_y Ψ(x, y^*(x)).
Moreover, y^*(x) and ∇Φ(x) are L_y^* and L_∇Φ-Lipschitz continuous respectively with the constants given by L_y^* = L_∇ g/μ_g and
L_∇Φ = L_∇_x Ψ + (L_∇_x Ψ + L_∇_y Ψ)L_∇ g + L_Ψ^2L_∇^2 g/μ_g + 2L_ΨL_∇ gL_∇^2 g+L_∇_yΨL_∇ g^2/μ_g^2 + L_ΨL_∇^2 gL_∇ g^2/μ_g^3,
where L_∇_x Ψ and L_∇_y Ψ represent the Lipschitz constants of ∇_x Ψ and ∇_y Ψ respectively.
The difference is that now Ψ is a compositional function and its gradient is:
[ ∇_x Ψ(x,y); ∇_y Ψ(x,y) ]=∇Ψ(x,y) = ∇ f_T(x, y)∇ f_T-1(x̃_T-1)…∇ f_1(x̃_1),
where ∇ f_i denotes the transpose of the Jacobian matrix of f_i, and x̃_i = f_i+1∘…∘ f_T(x, y) for 1≤ i<T.
Our methods generate different random sequences {d_k, x_k, u_k^(i), y_k^(i)}, for which we define the following filtrations
_k = σ({u_0^(1),...,u_k^(1), ...,u_0^(T+1),...,u_k^(T+1), d_0, ..., d_k}),
_j^(t) = σ(⋃_l=0^j-1{x_l, y_l^(0), y_l^(1), ..., y_l^(N)}⋃{x_j, y_j^(0),y_j^(1),...,y_j^(t)}).
Moreover, we make the following standard assumptions on the outputs of the stochastic oracle which are used at each iteration of the algorithm.
Denote u_k^(T+1)≡ (x_k, y_k^(N)). For each i, k and t, the stochastic oracle outputs F_k+1^(i)∈ℝ^d_i-1, J_k+1^(i)∈ℝ^d_i × d_i-1, v_k^(t)∈^q, J_g^(k+1)∈^p× q, and H_n^(k+1)∈^q× q such that for i ∈{1,…,T},
* The outputs are unbiased and have bounded variances:
[F_k+1^(i)|_k] = f_i(u_k^(i+1)), [F_k+1^(i)-f_i(u_k^(i+1))^2|_k]≤σ_F_i^2,
[J_k+1^(i)|_k] = ∇ f_i(u_k^(i+1)), [J_k+1^(i) - ∇ f_i(u_k^(i+1))^2]≤σ_J_i^2, [J_k+1^(i)^2|_k]≤σ̂_J_i^2,
[v_k^(t)|_k^(t)] = ∇_y g(x_k,y_k^(t)), [v_k^(t)-∇_y g(x_k,y_k^(t))^2|_k^(t)]≤σ_v^2,
[J_g^(k+1)|_k] = ∇_xy^2 g(x_k,y_k^(N)), [J_g^(k+1)^2|_k]≤σ_J_g^2,
[H_n^(k+1)|_k] = ∇_yy^2g(x_k,y_k^(N)), [H_n^(k+1) - ∇_yy^2g(x_k,y_k^(N))^2]≤σ_H_g^2.
* Given _k, the outputs of the stochastic oracle at each level i, F_k+1^(i), J_k+1^(i), J_g^(k+1) and H_n^(k+1) are independent.
* Given _k, the outputs of the stochastic oracle are independent between levels i.e., {F_k+1^(i)}_i=1,…,T are independent and so are {J_k+1^(i)}_i=1,…,T.
Recall from (<ref>) that the hypergradient of the bi-level problems involves a computationally demanding matrix inverse operation. Algorithms like <cit.>, <cit.>, and <cit.> overcome this challenge by using a Neumann series approximation for the matrix inverse. However, we point out that these works suffer from an error in the analysis, which is detailed in Appendix <ref>. In this work, we fix this issue and as a consequence we identify that the rates in the above works have an additional log factor, similar to our results.
In this paper, we take a different approach, in which we estimate the product of the Hessian inverse and the partial derivative in (<ref>) by taking a weighted average of stochastic Hessian-vector products, as shown in Algorithm <ref>.
In this method, named Nested Hypergradient Estimation, we first call the stochastic oracle to estimate the partial derivatives of the composition function Ψ at a given point (x,y), and then estimate (∇_yy^2 g)^-1∇_y Ψ in a loop. Finally, we estimate the hypergradient by incorporating the stochastic second-order partial derivatives of the inner function g, as in Step 6 of this method.
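A minimal NumPy sketch of this estimation loop is given below; the function and argument names are illustrative, and the stochastic oracles are passed in as callables.

```python
import numpy as np

def nested_hypergradient_estimate(r0_x, r0_y, sample_hess_yy, sample_jac_xy, alpha, M):
    """Sketch of the weighted Hessian-vector-product loop (names are illustrative).

    r0_x, r0_y      : stochastic estimates of the partial derivatives of the nested upper
                      objective Psi at the current point, obtained from the oracle.
    sample_hess_yy(): returns a stochastic estimate H_n of grad2_yy g  (q x q).
    sample_jac_xy() : returns a stochastic estimate J_g of grad2_xy g  (p x q).

    The recursion r <- r - alpha * H_n r + alpha * r0_y is an SGD-like update whose
    expectation approaches [grad2_yy g]^{-1} r0_y, so no explicit matrix inverse is formed.
    """
    r_bar = np.zeros_like(r0_y)
    for _ in range(M):
        H_n = sample_hess_yy()
        r_bar = r_bar - alpha * (H_n @ r_bar) + alpha * r0_y
    J_g = sample_jac_xy()
    return r0_x - J_g @ r_bar          # hypergradient estimate (Step 6 of the method)
```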
We are now ready to present our method as Algorithm <ref>.
We now make a few comments about the above algorithm. First, note that ignoring the inner loop and the NHE method, and assuming that w_k+1 is an unbiased gradient estimator for Φ, Algorithm <ref> reduces to the one in <cit.> proposed for solving nested compositional optimization problems. However, due to the bi-level structure, we need another sequence (y_k^(t)) to update the decision variable of the lower-level problem. Second, while Algorithm NHE provides a biased estimate of the gradient of the upper-level problem, taking a weighted average of such estimates in Step 10 of the algorithm helps reduce the associated bias. Finally, using the linear approximation of the inner functions in the upper objective function also reduces the noise associated with their function value estimates.
§ MAIN RESULT
In this section, we provide the convergence analysis of Algorithm <ref> under the standard assumptions presented above. We first define the termination criterion for which the convergence of Algorithm <ref> is established.
Following <cit.>, we define the measure of optimality V_k as
V_k = z_k - x_k^2 + d_k - ∇Φ(x_k)^2,
which is commonly used as a measure of optimality in compositional optimization literature <cit.>, since it provides an upper bound for gradient mapping <cit.>. We now state the main convergence result of Algorithm <ref>.
Suppose that Assumptions <ref>, <ref>, and <ref> hold. Moreover, assume that the stepsize sequences {τ_j}_j=0^∞ and {γ_j}_j=0^∞, for any k≥ 0, satisfy
τ_k+1≤τ_k, 0< γ_k+1≤γ_k≤ c_γτ_k≤τ_k≤τ_0<1 for some c_γ>0,
and α, N and M also satisfy
0<α < min{μ_g/μ_g^2 + σ_H_g^2, 1/L_∇ g}, N≥(1 + 1/1-τ_1)τ_k/2γ_kμ_g, δ_g := (1-αμ_g)^M≤1/2.
Letting R be chosen from {0,1,...,K} with the probability mass function P(R=k) = τ_k/∑_j=0^Kτ_j, we have
[V_R] = O_T(∑_k=0^Kτ_k^2 + 1/∑_k=0^Kτ_k + δ_g^2).
Sample complexity. If we pick M = Θ(log K), and
τ_k =Θ(1/√(K)), γ_k = Θ(1/√(K)), α = 1/2·min{μ_g/μ_g^2 + σ_H_g^2, 1/L_∇ g},
then the oracle complexity of stochastic gradients, Jacobian-vector products and Hessian-vector products will be O_T(KM) = O_T(ϵ^-2log1/ϵ) to guarantee [V_R] = O(ϵ), which matches the rate in <cit.>[The work of <cit.> considers [√(V_R)] = O(ϵ) and obtains O_T(ϵ^-4).] if we ignore the log factor.
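For illustration, the parameter choices above and the randomized output rule P(R=k) = τ_k/∑_j τ_j can be set up as follows; the constants below are placeholders, and in practice α would be set from μ_g, σ_H_g, and L_∇ g as in the theorem.

```python
import numpy as np

# Illustrative hyperparameter choices matching the discussion above.
K = 10_000
M = max(1, int(np.log(K)))                 # M = Theta(log K) Hessian-vector-product steps
tau = np.full(K + 1, 1.0 / np.sqrt(K))     # tau_k = Theta(1/sqrt(K))
gamma = np.full(K + 1, 1.0 / np.sqrt(K))   # gamma_k = Theta(1/sqrt(K))

# Randomized output rule: return iterate R with P(R = k) = tau_k / sum_j tau_j.
R = np.random.default_rng(0).choice(K + 1, p=tau / tau.sum())
```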
Difficulty in removing the log factor.
A recent work <cit.> introduces , which removes the log factor in the sample complexity of stochastic bi-level optimization (i.e., T=1 in (<ref>)) compared to previous results <cit.>. However, it is unclear whether a similar technique can be incorporated into our algorithm and its convergence analysis. This is because the upper-level function of our problem (<ref>) has a nested structure, which makes an unbiased estimate of the upper-level gradient unavailable. Note that this bias does not exist when T=1, and it causes additional major challenges in directly incorporating the framework of <cit.> to solve (<ref>).
§.§ Proof of Theorem <ref>
Before providing the proof of Theorem <ref>, we first need a few simple technical results.
For a sequence {τ_k}_k=0^∞ ⊂ (0,1), define Γ_0 = 1, Γ_j+1 = ∏_i=0^j(1-τ_i). Then, for any 0≤ k≤ K with K ≥ 0, we have
∑_j=0^kτ_jΓ_k+1/Γ_j+1< 1, ∑_j=k^Kτ_jΓ_j/Γ_k< 1.
Note that by definition of Γ_j, we have
∑_j=0^kτ_jΓ_k+1/Γ_j+1 = ∑_j=0^kτ_j∏_i=j+1^k(1-τ_i) = ∑_j=0^k(∏_i=j+1^k(1-τ_i) - ∏_i=j^k(1-τ_i)) = 1 - ∏_i=0^k(1-τ_i),
where we set ∏_i=k+1^k(1-τ_i) = 1 in the first equality. Similarly, we have
∑_j=k^Kτ_jΓ_j/Γ_k = ∑_j=k^Kτ_j ∏_i=k^j-1(1-τ_i) = ∑_j=k^K(∏_i=k^j-1(1-τ_i) - ∏_i=k^j(1-τ_i)) = 1 - ∏_i=k^K(1-τ_i).
Noting that ∏_i=0^k(1-τ_i) and ∏_i=k^K(1-τ_i) are both positive and less than 1, the proof is complete.
The next lemma provides a tight estimation of weighted sums under certain conditions.
Suppose we are given five sequences {a_n}_n=1^∞, {b_n}_n=1^∞, {c_n}_n=0^∞,
{τ_n}_n=0^∞, and {δ_n}_n=1^∞ satisfying
a_k+1≤δ_k a_k + b_k, Γ_0 = 1, Γ_k = ∏_l=1^kδ_l, ∑_k=i^Kτ_kΓ_k≤ c_iΓ_i,
a_k≥ 0, b_k≥ 0, c_i≥ 0, τ_i ≥ 0, 0≤δ_k < 1,
for all k=1,2,... and i=0,1,... Then, for any K > 0, we have
a_k+1≤ a_1Γ_k + ∑_i=1^kb_iΓ_k/Γ_i, ∑_k=0^Kτ_ka_k+1≤ c_0a_1 + ∑_i=1^Kc_i b_i.
First note that by the definition of Γ_k, for any k≥ 1, we have
a_k+1/Γ_k≤a_k/Γ_k-1 + b_k/Γ_k,
which implies that
a_k+1≤
a_1Γ_k + ∑_i=1^kb_iΓ_k/Γ_i.
By denoting ∑_i=1^0b_iΓ_k/Γ_i = 0, we obtain
∑_k=0^Kτ_ka_k+1≤∑_k=0^Ka_1τ_kΓ_k + ∑_k=0^K∑_i=1^kb_iτ_kΓ_k/Γ_i = ∑_k=0^Ka_1τ_kΓ_k + ∑_i=1^K∑_k=i^Kb_iτ_kΓ_k/Γ_i
≤ a_1∑_k=0^Kτ_kΓ_k + ∑_i=1^Kb_i/Γ_i∑_k=i^Kτ_kΓ_k ≤ c_0a_1 + ∑_i=1^Kc_i b_i,
where the last inequality holds due to the fact that ∑_k=i^Kτ_kΓ_k≤ c_iΓ_i.
Remark: By choosing δ_k ≡δ∈ (0, 1) and c_i = τ_i/1-δ with τ_i being a decreasing sequence,
we know
∑_k=i^K τ_kΓ_k = ∑_k=i^Kτ_kδ^k≤τ_iδ^i·(∑_k=i^Kδ^k-i)≤ c_iΓ_i.
Furthermore, if a_k, b_k and c_k are chosen such that other conditions are satisfied, we know by (<ref>) that
∑_k=0^Kτ_ka_k+1 ≤τ_0 a_1/1-δ + ∑_i=1^Kτ_i b_i/1-δ,
a_k+1^2 ≤(a_1δ^k + ∑_i=1^kb_iδ^k-i)^2
≤(δ^k + ∑_i=1^kδ^k-i)(a_1^2δ^k + ∑_i=1^kb_i^2δ^k-i)
<1/1-δ(a_1^2δ^k + ∑_i=1^kb_i^2δ^k-i),
which, also gives
∑_k=0^Kτ_ka_k+1^2≤τ_0a_1^2/(1-δ)^2 + ∑_i=1^Kτ_ib_i^2/(1-δ)^2.
We also need the following standard result in stochastic optimization.
Suppose f(x) is μ-strongly convex and L-smooth. For any x and γ < 2/(μ + L), define x^+ = x - γ∇ f(x) and x^* = argmin_x f(x). Then we have x^+ - x^*≤ (1-γμ)x-x^*.
The next result establishes the smoothness of the optimal value of the subproblem solved at Step 2 of Algorithm <ref>.
Define η(x, d, β) = min_y∈ X{⟨ d, y-x⟩ + β/2y-x^2}.
Then ∇η is L_∇η-Lipschitz continuous with the constant given by
L_∇η = 2 √((1+β)^2 + (1 + 1/(2β))^2).
To establish the convergence of Algorithm <ref>, we first analyze the convergence of its inner loop updates in the next lemma.
Let the sequence y_k^(t) be generated by Algorithm <ref> and denote y_k^* := y^*(x_k). Suppose that Assumption <ref> holds and the stepsizes satisfy (<ref>). If γ_k < 2/(μ + L), then for any k≥ 1 the inner loop updates, for any 0≤ t≤ N, satisfy
[y_k+1^(N) - y_k^(N)^2]
≤ N^2γ_k+1^2σ_v^2
+ min{Nγ_k+1, 1/μ_g}NL_∇ gγ_k+1[y_k+1^(0) - y_k+1^*^2] + N^3L_∇ gγ_k+1^4σ_v^2/2.
Moreover, if N≥ (1 + 1/1-τ_1)τ_k/2γ_kμ_g, we have
[y_k^(0) - y_k^*^2] ≤Γ_k/Γ_1[y_1^(0) - y_1^*^2] + 2σ_v^2c_γ/μ_g + 2L_y^*^2max_1≤ i≤ k[x_i-z_i^2]
∑_k=1^Kτ_k[y_k^(0) - y_k^*^2] ≤[y_1^(0) - y_1^*^2] + 2 ∑_k=1^Kτ_k (Nσ_v^2c_γ^2τ_k+ L_y^*^2[x_k-z_k^2] ).
By definition of the σ-algebra _k^(t), the update rule of y_k^(t) in Step 7 of Algorithm <ref>, and under Assumption <ref>, we have
[y_k^(t+1) - y_k^*^2|_k^(t)]
= [y_k^(t) - γ_k∇_yg(x_k, y_k^(t)) - y_k^*+ γ_k(∇_yg(x_k,y_k^(t)) - v_k^(t))^2|_k^(t)]
= y_k^(t) - γ_k∇_yg(x_k, y_k^(t)) - y_k^*^2 + γ_k^2[∇_yg(x_k,y_k^(t)) - v_k^(t)^2|_k^(t)]
≤ (1-γ_kμ_g)^2y_k^(t) - y_k^*^2 + γ_k^2σ_v^2,
where the inequality follows from Lemma <ref>. Taking expectation from both sides of the above inequality, we obtain
[y_k^(t+1) - y_k^*^2]≤ (1-γ_kμ_g)^2[y_k^(t) - y_k^*^2] + γ_k^2σ_v^2, implying that
[y_k^(t) - y_k^*^2] ≤ (1-γ_kμ_g)^2t[y_k^(0) - y_k^*^2] + γ_k^2σ_v^2∑_i=0^t-1(1-γ_kμ_g)^2i
≤ (1-γ_kμ_g)^2t[y_k^(0) - y_k^*^2] + min{tγ_k^2σ_v^2, γ_kσ_v^2/μ_g},
where the second inequality follows from the fact that
∑_i=0^t-1(1-γ_kμ_g)^2i≤∑_i=0^t-1 (1-γ_kμ_g)^i ≤1/γ_kμ_g.
Now, observe that
‖y_k+1^(N) - y_k^(N)‖^2 = ‖y_k+1^(N) - y_k+1^(0)‖^2 = ‖∑_t=0^N-1(y_k+1^(t+1) - y_k+1^(t))‖^2 = γ_k+1^2‖∑_t=0^N-1v_k+1^(t)‖^2
≤ Nγ_k+1^2∑_t=0^N-1‖v_k+1^(t)‖^2,
which together with the fact that ∇_yg(x_k+1,y_k+1^*) = 0 and under Assumption <ref>,
imply that
[y_k+1^(N) - y_k^(N)^2|_k+1^(N-1)]
≤ Nγ_k+1^2∑_t=0^N-1{[v_k+1^(t)-∇_y g(x_k+1,y_k+1^(t))^2|_k+1^(t)] + ∇_y g(x_k+1, y_k+1^(t))^2 }
≤ N^2γ_k+1^2σ_v^2 + Nγ_k+1^2∑_t=0^N-1∇_yg(x_k+1,y_k+1^(t)) - ∇_y g(x_k+1,y_k+1^*)^2
≤ N^2γ_k+1^2σ_v^2 + NL^2_∇ gγ_k+1^2∑_t=0^N-1y_k+1^(t) - y_k+1^*^2.
Taking expectation on both sides and noting (<ref>), we obtain
[y_k+1^(N) - y_k^(N)^2]≤ N^2γ_k+1^2σ_v^2 + NL^2_∇ gγ_k+1^2∑_t=0^N-1{(1-γ_k+1μ_g)^2t[y_k+1^(0) - y_k+1^*^2] + tγ_k+1^2σ_v^2}
which together with (<ref>), imply (<ref>).
To show (<ref>), we first consider the decrease of [y_k^(0) - y_k^*^2]. Noting (<ref>), the fact that y^*(x) is L_y^*-Lipschitz continuous due to Lemma <ref>, and Step 3 of Algorithm <ref>, we have
[y_k+1^(0) - y_k+1^*^2] = [y_k^(N) - y_k^* + y_k^* - y_k+1^*^2]
≤ [(1+τ_k)y_k^(N) - y_k^*^2 + (1+1/τ_k)y_k^* - y_k+1^*^2]
≤ (1+τ_k) [(1-γ_kμ_g)^2N[y_k^(0) - y_k^*^2] + Nγ_k^2σ_v^2 ]+ (τ_k+τ_k^2)L_y^*^2[x_k-z_k^2].
If we set N≥ (1 + 1/1-τ_1)τ_k/2γ_kμ_g, we have
(1+τ_k)(1-γ_kμ_g)^2N = e^log(1+τ_k) + 2Nlog(1-μ_gγ_k)≤ e^τ_k - 2Nμ_gγ_k≤ e^-τ_k/1-τ_k≤ 1-τ_k,
where the first and third inequality follow from the fact that x/1+x≤log(1+x)≤ x for any x>-1, and the second inequality follows from N ≥ (1 + 1/1-τ_1)τ_k/2γ_kμ_g≥ (1 + 1/1-τ_k)τ_k/2γ_kμ_g. The above observation together with (<ref>) imply that
[y_k+1^(0) - y_k+1^*^2] - [y_k^(0) - y_k^*^2]≤ -τ_k[y_k^(0) - y_k^*^2] + 2Nγ_k^2σ_v^2 + 2L_y^*^2τ_k[x_k-z_k^2].
Taking summation on both sides and using <ref>, we obtain (<ref>). To prove (<ref>), we
first notice that in (<ref>) we use another upper bound in (<ref>) and follow the same process of proving the above inequality we may get:
[y_k+1^(0) - y_k+1^*^2]
≤ (1+τ_k)δ_k^N[y_k^(0) - y_k^*^2] + (1+τ_k)γ_kσ_v^2/μ_g + (τ_k + τ_k^2)L_y^*^2[x_k-z_k^2]
≤ (1-τ_k)[y_k^(0) - y_k^*^2] + 2γ_kσ_v^2/μ_g + 2L_y^*^2τ_k[x_k-z_k^2].
In the view of Lemma <ref>, we have
[y_k+1^(0) - y_k+1^*^2]/Γ_k+1
≤ [y_1^(0) - y_1^*^2]/Γ_1 + 2σ_v^2/μ_g∑_i=1^kγ_i/Γ_i+1 + 2L_y^*^2∑_i=1^kτ_i/Γ_i+1[x_i-z_i^2]
≤ [y_1^(0) - y_1^*^2]/Γ_1 + 2σ_v^2c_γ/μ_gΓ_k+1 + 2L_y^*^2/Γ_k+1max_1≤ i≤ k[x_i-z_i^2].
The second inequality uses Assumption <ref> and Lemma <ref>. Multiplying Γ_k+1 on both sides completes the proof.
Remark: If we pick γ_k = Θ(τ_k) then it suffices to pick N=1, which is independent of the iteration number k. This suggests using the same timescale for both loops, which matches the result in <cit.>.
Now, note that (<ref>) can be written as
r̅_n,y^(k+1) = r̅_n-1,y^(k+1) - α H_n^(k+1)r̅_n-1,y^(k+1) + α r_0,y^(k+1), n=1,2,...,M,
which is essentially a SGD-like update. If we fix k and define
x_n := r̅_n,y^(k+1), A_n := H_n^(k+1), A := ∇_yy^2g(x_k,y_k^(N)), b_0 := r_0,y^(k+1),
the above equation becomes x_n = x_n-1 - α A_n x_n-1 + α b_0, with A_n being the unbiased estimator of A and b_0 being a biased estimator of ∇_yΨ(x_k,y_k^(N)), since under Assumption <ref>, we have
[A_n] = A, [A_n - A^2]≤σ_H_g^2,
[b_0] = [r_0,y^(k+1)] = ∇_y f_T(x_k, y_k^(N))∏_i=2^T∇ f_T+1-i(u_k^(T+2-i)).
Thus, (<ref>) can be viewed as a M-step SGD applied to the following quadratic optimization problem min_x {1/2x Ax - b_0 x}, where b_0 is obtained from the stochastic oracle ahead of the first update x_1 and is fixed during the M-step updates. Using standard analysis of SGD, we can bound the variance and the second moment of x_k via the following lemma:
Suppose that we are given a vector b_0∈^q and a symmetric positive definite matrix A∈^q× q satisfying μ I≼ A≼ L I for 0<μ≤ L. Moreover, a sequence {x_k}_k=0^∞ is defined as
x_k = (I - α A_k)x_k-1 + α b_0, x_0 = 0,
where A_k satisfies [A_k] = A, [A_k-A^2]≤σ^2, and A_1,..., A_k are independent, A_i and x_i-1 are also independent. If α satisfies
0<α <min{μ/μ^2 + σ^2, 1/L},
we have [x_k - [x_k]^2]< b_0^2/μ^2 and [x_k^2]< 2b_0^2/μ^2.
For each x_k, we have x_k = (I - α A_k)x_k-1 + α b_0, and hence [x_k] = (I - α A)[x_k-1] + α b_0, which give the closed form of x_k and [x_k] as
x_k = α∑_p=0^k-1∏_i=1^p(I - α A_k+1-i)b_0 + ∏_i=1^k(I - α A_k+1-i)x_0,
[x_k] =α[∑_p=0^k-1(I-α A)^p]b_0 + (I - α A)^k [x_0].
Together with x_0 = 0, the above implies that
[x_k] = (I - (I-α A)^k)A^-1b_0≤b_0/μ.
Hence we know
x_k - [x_k]^2 = (I - α A)(x_k-1 - [x_k-1]) + α (A - A_k)x_k-1^2
= (I - α A)(x_k-1 - [x_k-1])^2 + α^2 (A - A_k)x_k-1^2
+ 2α⟨(I - α A)(x_k-1 - [x_k-1]), (A - A_k)x_k-1⟩.
Taking expectation on both sides, we know:
[x_k - [x_k]^2]
= [(I - α A)(x_k-1 - [x_k-1])^2] + α^2[(A - A_k)x_k-1^2]
≤ (1-αμ)^2[x_k-1 - [x_k-1]^2] + α^2σ^2([x_k-1-[x_k-1]^2] + [x_k-1]^2)
≤ (1-αμ)[x_k-1 - [x_k-1]^2] + α^2σ^2b_0^2/μ^2
≤ (1-αμ)^k[x_0 - [x_0]^2] + α^2σ^2b_0^2/μ^2·(∑_i=0^k-1(1-αμ)^i) < ασ^2b_0^2/μ^3≤b_0^2/μ^2.
The second inequality is due to a direct result of α≤μ/μ^2+σ^2:
(1-αμ)^2 + α^2σ^2≤ 1 - αμ,
and the fifth inequality uses α≤μ/μ^2 + σ^2≤μ/σ^2. For the second moment we have
[x_k^2] = [x_k - [x_k]^2] + [x_k]^2 < 2b_0^2/μ^2.
(<ref>) and (<ref>) completes the proof.
A direct result of Lemma <ref> is the following lemma, which indicates r̅_M,y^(k+1) in Algorithm <ref> has bounded variance and bounded second moment, and so does r^(k+1).
Suppose that Assumptions <ref>, <ref>, and <ref> hold. Define positive constants σ̂_r, σ_r̅, σ_w as
σ̂_r^2 = ∏_l=1^Tσ̂_J_l^2, σ_r̅^2 =2σ̂_r^2/μ_g^2, σ_w^2 = (σ̂_r + σ_J_gσ_r̅)^2.
In Algorithm <ref>, if α satisfy
0<α< min{μ_g/μ_g^2 + σ_H_g^2, 1/L_∇ g},
the output r satisfies [r - [r]^2]≤[r^2]≤σ_w^2.
The variance and second moment of r_0 in Step 2 of Algorithm <ref> are bounded under Assumption <ref> since
[r_0 - [r_0]^2]≤[r_0^2]= [∏_l=1^TJ^(l)^2]≤∏_l=1^Tσ̂_J_l^2 =σ̂_r^2.
By line 5 of Algorithm <ref>, we know for n=1,2,..., M,
r̅_n,y = r̅_n-1,y - α H_n r̅_n-1,y + α r_0,y,
which satisfies
[H_n] = ∇_y^2g(x,y), [H_n - ∇_y^2g(x,y)^2]≤σ_H_g^2.
Setting δ_g = (1 -αμ_g)^M and applying Lemma <ref>, we can bound the second moment of r̅_M,y:
By (<ref>) and in the view of Lemma <ref>, we have [r̅_M,y^2] ≤2r_0,y^2/μ_g^2. Taking expectation on both sides of the above inequality, noting that r̅_0,y = 0 and (<ref>), we have
[r̅_M,y^2] < 2σ̂_r^2/μ_g^2 = σ_r̅^2.
Then for the second moment of r, we have:
[r^2] = [r_0,x - J_g·r̅_M,y^2]≤[(r_0,x+J_g·r̅_M,y)^2]≤ (σ̂_r + σ_J_gσ_r̅)^2,
implying [r - [r]^2]≤[r^2]≤σ_w^2.
We also need the following result about the output of Algorithm <ref>.
Suppose that Assumptions <ref>, <ref>, <ref> hold, and α satisfies (<ref>). Then we have [r] = [r_0,x] - ∇_xy^2 g(x, y)[∇_y^2g(x,y)]^-1[r_0,y] + ℰ, where r is the output of Algorithm <ref> and
ℰ = ∇_xy^2 g(x, y)[I - α∇_y^2 g(x, y)]^Mr̅_*,y, r̅_*,y = [∇_y^2g(x, y)]^-1[r_0,y],
and
ℰ≤ (1-αμ_g)^M·L_∇ gσ̂_r/μ_g.
Note that the output r̅^(k+1) of Algorithm <ref> takes the following form
r = r_0,x - J_g·r̅_M,y,
r̅_M,y = α·∑_i=0^M-1∏_n=1^i(I - α H_M+1-n)· r_0,y.
Noting (<ref>), definition of r̅_*,y in (<ref>), Neumann series,
and under Assumption <ref>, we have
r̅_*,y ≤[∇_yy^2g(x, y)]^-1·[r_0,y]≤σ̂_r/μ_g,
[r̅_M,y] =α[∑_n=0^M-1(I-α∇_yy^2 g(x, y))^n][r_0,y]
=[I - (I - α∇_y^2 g(x, y) )^M][∇_y^2g(x, y)]^-1[r_0,y]
= r̅_*,y - (I - α∇_y^2 g(x, y) )^Mr̅_*,y.
Moreover, by (<ref>) and (<ref>), we have
[r] = [r_0,x - J_g·r̅_M,y]= [r_0,x] - ∇_xy^2 g(x, y)[∇_y^2g(x, y)]^-1[r_0,y] + ℰ,
where ℰ defined in (<ref>).
Under Assumption <ref> and by (<ref>), we have
ℰ≤ L_∇ g(1-αμ_g)^M·σ̂_r/μ_g,
which together with (<ref>), complete the proof.
Next, we prove the boundedness of some error terms that will be later used in our convergence analysis.
Suppose Assumption <ref>, <ref>, and <ref> hold. Then in Algorithm <ref> we have
β^2[z_k - x_k^2] ≤[d_k^2]≤σ_w^2, [d_k+1-d_k^2] ≤ 4τ_k^2σ_w^2, for all k≥ 0
Note that z_k in Algorithm <ref> can be written as z_k = Π_X(x_k - 1/βd_k).
The optimality condition of the projection gives ⟨ d_k + β(z_k - x_k), z - z_k⟩≥ 0, ∀ z∈ X. Setting z = x_k and using the Cauchy-Schwarz inequality, we note that βz_k - x_k≤d_k,
which is due to the nonexpansiveness of projection operator. Then we know
β^2[z_k-x_k^2]≤[d_k^2]
≤max([d_k-1^2], [w_k^2])≤max_i≤ k[w_i^2]≤σ_w^2.
The second inequality uses the fact that d_k is a convex combination of d_k-1 and w_k, the third inequality applies the second inequality to each [d_i^2], and the fourth inequality uses Lemma <ref>. Hence the first conclusion is proved. For d_k+1 - d_k we have:
[d_k+1 - d_k^2] = τ_k^2[d_k - w_k+1^2]≤ 2τ_k^2([d_k^2] + [w_k+1^2])≤ 4τ_k^2σ_w^2,
which completes the proof.
For each u_k+1^(i) - f_i(u_k+1^(i+1)) we adopt Lemma 3.1 from <cit.>:
Suppose Assumption <ref> and <ref> hold. Define
θ_k+1^(i) :=2τ_k⟨η_k+1^(i), E_k,i + (1-τ_k)(f_i(u_k^(i+1)) - u_k^(i)) + (η̂_k+1^(i))(u_k+1^(i+1) - u_k^(i+1))⟩
+2⟨ (η̂_k+1^(i))(u_k+1^(i+1) - u_k^(i+1)), E_k,i+(1-τ_k)(f_i(u_k^(i+1))-u_k^(i))⟩,
θ̂_k+1^(i) := τ_k ⟨ -η_k+1^(i), τ_k(f_i(u_k^(i+1)) - u_k^(i)) + (J_k+1^(i))(u_k+1^(i+1)-u_k^(i+1))⟩,
η_k+1^(i) := f_i(u_k^(i+1)) - F_k+1^(i), η̂_k+1^(i): = ∇ f_i(u_k^(i+1)) - J_k+1^(i),
E_k,i := f_i(u_k+1^(i+1)) - f_i(u_k^(i+1)) - ∇ f_i(u_k^(i+1)) (u_k+1^(i+1)-u_k^(i+1)).
Then in Algorithm <ref> the following hold.
a) For 1≤ i ≤ T,
u_k+1^(i) - f_i(u_k+1^(i+1))^2 ≤ (1 - τ_k)u_k^(i) - f_i(u_k^(i+1))^2 + τ_k^2 η_k+1^(i)^2 + θ_k+1^(i)
+ [4L_∇ f_i^2 + f_i(u_k^(i+1)) - u_k^(i) + η̂_k+1^(i)^2]u_k+1^(i+1) - u_k^(i+1)^2,
b) For 1≤ i ≤ T,
u_k+1^(i) - u_k^(i)^2≤τ_k^2[2 f_i(u_k^(i+1)) - u_k^(i)^2 + η_k+1^(i)^2 + 2/τ_k^2J_k+1^(i)^2 u_k+1^(i+1) - u_k^(i+1)^2]+2 θ̂_k+1^(i),
For 1≤ i≤ T, by definition of E_k,i, η̂_k+1^(i),F_k+1^(i),u_k+1^(i), and θ_k+1^(i), we have
f_i(u_k+1^(i+1)) - u_k+1^(i)^2
= E_k,i + f_i(u_k^(i+1)) + ∇ f_i(u_k^(i+1)) (u_k+1^(i+1)-u_k^(i+1)) - (1-τ_k)u_k^(i) - τ_kF_k+1^(i) - (J_k+1^(i))(u_k+1^(i+1)-u_k^(i+1))^2
= E_k,i + (η̂_k+1^(i))(u_k+1^(i+1)-u_k^(i+1)) + (1-τ_k)(f_i(u_k^(i+1)) - u_k^(i)) + τ_kη_k+1^(i)^2
= (η̂_k+1^(i))(u_k+1^(i+1)-u_k^(i+1))^2 + E_k,i + (1-τ_k)(f_i(u_k^(i+1))-u_k^(i))^2 +τ_k^2η_k+1^(i)^2 + θ_k+1^(i)
≤ E_k,i + (1-τ_k)(f_i(u_k^(i+1)) - u_k^(i))^2 +τ_k^2η_k+1^(i)^2 + θ_k+1^(i) + η̂_k+1^(i)^2u_k+1^(i+1)-u_k^(i+1)^2
≤ (1-τ_k)f_i(u_k^(i+1)) - u_k^(i)^2+E_k,i^2 + 2(1-τ_k)⟨ E_k,i, f_i(u_k^(i+1)) - u_k^(i)⟩
+τ_k^2η_k+1^(i)^2 + θ_k+1^(i)+η̂_k+1^(i)^2u_k+1^(i+1)-u_k^(i+1)^2.
where the second inequality holds by convexity of ·^2. By Assumption <ref>, we have
E_k,i≤1/2min{4 L_f_iu_k+1^(i+1)-u_k^(i+1), L_∇ f_iu_k+1^(i+1)-u_k^(i+1)^2 },
and using Cauchy–Schwarz inequality in (<ref>), we obtain (<ref>).
To show part b), noting definition of η_k+1^(i), η̂_k+1^(i) and θ̂_k+1^(i), Cauchy-Schwartz and Young's inequality, for 1≤ i ≤ T,
u_k+1^(i) - u_k^(i)^2
= τ_k(F_k+1^(i) - f_i(u_k^(i+1))) + τ_k(f_i(u_k^(i+1)) - u_k^(i)) + (J_k+1^(i))(u_k+1^(i+1) - u_k^(i+1)))^2
= τ_k^2η_k+1^(i)^2 + τ_k(f_i(u_k^(i+1)) - u_k^(i)) + (J_k+1^(i))(u_k+1^(i+1) - u_k^(i+1)))^2
+ 2τ_k⟨ -η_k+1^(i), τ_k(f_i(u_k^(i+1)) - u_k^(i)) + (J_k+1^(i))(u_k+1^(i+1)-u_k^(i+1))⟩
≤ 2τ_k^2 f_i(u_k^(i+1)) - u_k^(i)^2 + τ_k^2 η_k+1^(i)^2 + 2J_k+1^(i)^2 u_k+1^(i+1) - u_k^(i+1)^2 + 2 θ̂_k+1^(i).
Hence we know the decrease of u_k^(i) - f_i(u_k^(i+1))^2 for 1≤ i≤ T:
u_k+1^(i) - f_i(u_k+1^(i+1))^2 - u_k^(i) - f_i(u_k^(i+1))^2 ≤ -τ_ku_k^(i) - f_i(u_k^(i+1))^2 + θ̃_k+1^(i),
θ̃_k+1^(i) = [4L_∇ f_i^2 + f_i(u_k^(i+1)) - u_k^(i) + η̂_k+1^(i)^2] u_k+1^(i+1) - u_k^(i+1)^2 +τ_k^2η_k+1^(i)^2 + θ_k+1^(i).
We adopt Lemma 3.2 in <cit.> to characterize u_k+1^(i) - f_i(u_k+1^(i+1))^2 and u_k+1^(i) - u_k^(i)^2.
Suppose Assumption <ref>, <ref>, <ref> and <ref> hold. In Algorithm <ref> we have
[u_k+1^(i) - u_k^(i)^2|_k]≤ a_iτ_k^2, [u_k+1^(T+1) - u_k^(T+1)^2|_k]≤ a_T+1τ_k^2,
[u_k^(i) - f_i(u_k^(i+1))^2]≤ b_i^2 := [u_0^(i) - f_i(u_0^(i+1))^2] + σ_F_i^2+ (4L_f_i^2 + σ̂_J_i^2)a_i+1,
for 1≤ i≤ T. The constants are defined as
a_i := 2b_i + σ_F_i^2 + 2σ̂_J_i^2a_i+1, b_i≥ 0,
a_T+1 := σ_w^2/β^2 + N^2c_γ^2σ_v^2 + N^2c_γ^2L_∇ g[y_1^(0) - y_1^*^2 + 2σ_v^2c_γ/μ_g + 2L_y^*^2σ_w^2/β^2] +N^3c_γ^4L_∇ gσ_v^2/2,
Recall definitions of E_k,i, η_k+1^(i), η̂_k+1^(i), and for 1≤ i≤ T, define
Λ_k,i = E_k,i + τ_kη_k+1^(i) + η̂_k+1^(i)(u_k+1^(i+1)-u_k^(i+1)).
Then we know for 1≤ i≤ T,
u_k+1^(i) - f_i(u_k+1^(i+1)) = (1-τ_k)(u_k^(i) - f_i(u_k^(i+1))) - Λ_k,i.
Hence by convexity of ·^2 we know
u_k+1^(i) - f_i(u_k+1^(i+1))^2≤ (1-τ_k)u_k^(i) - f_i(u_k^(i+1))^2 + 1/τ_kΛ_k,i^2.
For Λ_k,i we have
Λ_k,i^2 = E_k,i^2 + τ_k^2 η_k+1^(i)^2 + (η̂_k+1^(i))(u_k+1^(i+1)-u_k^(i+1))^2 +2 θ_k,i',
θ_k,i' = ⟨ E_k,i,τ_k η_k+1^(i)+(η̂_k+1^(i))(u_k+1^(i+1)-u_k^(i+1)) ⟩ + τ_k ⟨η_k+1^(i), (η̂_k+1^(i))(u_k+1^(i+1)-u_k^(i+1))⟩,
which together with [θ_k,i'|_k] = 0 imply
[Λ_k,i^2| _k]
= [E_k,i^2| _k] + τ_k^2 [η_k+1^(i)^2| _k] +
[η̂_k+1^(i)(u_k+1^(i+1)-u_k^(i+1))^2| _k]
≤ τ_k^2 [η_k+1^(i)^2| _k] +
(4L_f_i^2+[η̂_k+1^(i)^2|_k]) [u_k+1^(i+1)-u_k^(i+1)^2| _k],
≤ τ_k^2σ_F_i^2 + (4L_f_i^2 + σ̂_J_i^2)[u_k+1^(i+1)-u_k^(i+1)^2| _k],
where the inequality follows from (<ref>). Hence we know by τ_k+1≤τ_k, (<ref>) and (<ref>) that
1/Γ_k+1[u_k+1^(i) - f_i(u_k+1^(i+1))^2]
≤ 1/Γ_k[u_k^(i) - f_i(u_k^(i+1))^2] +σ_F_i^2τ_k/Γ_k + (4L_f_i^2 + σ̂_J_i^2)/τ_kΓ_k+1[u_k+1^(i+1)-u_k^(i+1)^2],
which gives
1/Γ_k+1[u_k+1^(i) - f_i(u_k+1^(i+1))^2]
≤ [u_0^(i) - f_i(u_0^(i+1))^2] + σ_F_i^2∑_j=0^kτ_j/Γ_j + (4L_f_i^2 + σ̂_J_i^2)∑_j=0^k[u_j+1^(i+1)-u_j^(i+1)^2]/τ_jΓ_j+1
≤ [u_0^(i) - f_i(u_0^(i+1))^2] + σ_F_i^2/Γ_k+1+ (4L_f_i^2 + σ̂_J_i^2)∑_j=0^k[u_j+1^(i+1)-u_j^(i+1)^2]/τ_jΓ_j+1
By Lemma <ref> we know
[u_k+1^(i) - u_k^(i)^2]
≤ τ_k^2[2u_k^(i) - f_i(u_k^(i+1))^2 + η_k+1^(i)^2 + 2/τ_k^2J_k+1^(i)^2 u_k+1^(i+1) - u_k^(i+1)^2],
≤ 2τ_k^2[u_k^(i) - f_i(u_k^(i+1))^2] + τ_k^2σ_F_i^2 + 2σ̂_J_i^2[ u_k+1^(i+1) - u_k^(i+1)^2].
Notice that by definition of u_k^(T+1), we have u_k+1^(T+1) - u_k^(T+1)^2 = x_k+1-x_k^2 + y_k+1^(N) - y_k^(N)^2. Hence by using Lemma <ref> and <ref>, we know
[u_k+1^(T+1) - u_k^(T+1)^2|_k]
= [x_k+1-x_k^2|_k] + [y_k+1^(N) - y_k^(N)^2|_k]
≤ τ_k^2σ_w^2/β^2 + N^2γ_k+1^2σ_v^2 + min{Nγ_k+1, 1/μ_g}NL_∇ gγ_k+1[y_k+1^(0) - y_k+1^*^2|_k] + N^3L_∇ gγ_k+1^4σ_v^2/2
≤ τ_k^2σ_w^2/β^2 + N^2c_γ^2σ_v^2τ_k^2 + N^2c_γ^2L_∇ gτ_k^2[[y_1^(0) - y_1^*^2] + 2σ_v^2c_γ/μ_g + 2L_y^*^2σ_w^2/β^2] + N^3c_γ^4L_∇ gσ_v^2τ_k^4/2
≤ a_T+1τ_k^2,
where we use (<ref>) in the first inequality and (<ref>) in the second inequality. Combining the above inequality, (<ref>), (<ref>) and Lemma <ref>, the proof is complete by using backward induction.
As a direct result of the above lemma, we can now characterize ∑_k=0^K[θ̃_k+1^(i)].
Suppose that Assumptions <ref>, <ref>, <ref> and <ref> hold. Then, we have
∑_k=0^K[θ̃_k+1^(i)]≤[(4L_∇ f_i^2 + σ_J_i^2 + b_i)a_i+1 + σ_F_i^2]∑_k=0^Kτ_k^2.
Recalling the definition of θ̃_k+1^(i) in (<ref>), we have
∑_k=0^K[θ̃_k+1^(i)]
= ∑_k=0^K[(4L_∇ f_i^2 + f_i(u_k^(i+1)) - u_k^(i) + η̂_k+1^(i)^2) u_k+1^(i+1) - u_k^(i+1)^2 +τ_k^2η_k+1^(i)^2]
≤ ∑_k=0^K[(4L_∇ f_i^2 + σ_J_i^2)a_i+1 + σ_F_i^2]τ_k^2 + ∑_k=0^K[[u_k+1^(i+1) - u_k^(i+1)^2|_k]f_i(u_k^(i+1)) - u_k^(i)],
which together with (<ref>), complete the proof.
Now, we define the following merit function
W_k = Φ(x_k) - Φ^* - η(x_k,d_k,β) + ∑_i=1^Tρ_iu_k^(i) - f_i(u_k^(i+1))^2 + νy_k^(0) - y_k^*^2,
which will be used to combine the previous results. We emphasize here that the above merit function is different from prior analyses of nested compositional problems, e.g., <cit.> in that it is also designed to handle the additional bi-level structure. Recall the measure of optimality,V_k, as defined in (<ref>).
The following result analyzes V_k in terms of the above merit function.
Suppose that Assumptions <ref>, <ref>, <ref> and <ref> hold. Then, we have
∑_i=0^Kτ_i[d_i - ∇Φ(x_i)^2]=O_T(∑_i=0^Kτ_i[z_i-x_i^2 + ∑_j=2^Tf_j(u_i^(j+1)) - u_i^(j)^2 + y_i^(0) - y_i^*^2+ δ_g^2] + 1).
We first analyze the decrease of d_k-∇Φ(x_k)^2. Noting Step 10 of Algorithm <ref> and convexity of ·^2, we have
d_k - ∇Φ(x_k)^2 = (1 - τ_k-1)(d_k-1 - ∇Φ(x_k-1)) + τ_k-1(e_k-1 + Δ_k^F)^2
≤ (1-τ_k-1)(d_k-1 - ∇Φ(x_k-1))^2 + τ_k-1e_k-1^2 + τ_k-1^2Δ_k^F^2
+2τ_k-1⟨(1 - τ_k-1)(d_k-1 - ∇Φ(x_k-1)) + τ_k-1e_k-1, Δ_k^F⟩,
where
e_k-1 = 1/τ_k-1(∇Φ(x_k-1) - ∇Φ(x_k)) + ([w_k|_k-1] - ∇Φ(x_k-1)),
Δ_k^F = w_k - [w_k|_k-1].
Taking expectation on both sides of (<ref>), we obtain
[d_k - ∇Φ(x_k)^2]≤ (1-τ_k-1)[(d_k-1 - ∇Φ(x_k-1))^2] + τ_k-1[e_k-1^2] + τ_k-1^2[Δ_k^F^2].
Noting Lemma <ref>, defining
a_k+1 = [d_k - ∇Φ(x_k)^2], b_k = τ_k-1[e_k-1^2] + τ_k-1^2[Δ_k^F^2], c_k = 1, δ_k = 1 - τ_k-1,
and in the view of Lemma <ref>, for k≥ 1, we have
∑_i=0^Kτ_i[d_i - ∇Φ(x_i)^2]
≤ [d_0 - ∇Φ(x_0)^2] + ∑_i=1^K(τ_i-1[e_i-1^2] + τ_i-1^2[Δ_i^F^2])
≤ [d_0 - ∇Φ(x_0)^2] + ∑_i=0^K-1τ_i[e_i^2] + σ_w^2∑_i=0^K-1τ_i^2,
where the second inequality follows from Lemma <ref> and the definition of Δ_i^F in (<ref>). Now, to bound e_i, we need to first analyze [r_0^(k+1)|_k] - ∇Ψ(x_k,y_k^*). We adopt Lemma 2.4 from <cit.> that
∇Ψ(x,y) - ∇ f_T(x,y)∏_i=2^T∇ f_T+1-i(u^(T+2-i))≤∑_j=2^T-1C_jf_j(u^(j+1)) - u^(j) + C_Tf_T(x,y) - u^(T),
where, according to <cit.>, the constants are defined as
R_1 = L_∇ f_1L_f_2⋯ L_f_T, R_j = L_f_1⋯ L_f_j-1L_∇ f_jL_f_j+1⋯ L_f_T/L_f_j 2≤ j≤ T-1,
C_2 = R_1, C_j = ∑_i=1^j-2R_i(∏_l=i+1^j-1L_f_l) 3≤ j≤ T.
By replacing r_0 and ℰ with r_0^(k+1) and ℰ_k in Algorithm <ref> and Lemma <ref>, to represent their corresponding vectors when the inputs are x_k, y_k^(N) and u_k^(i) (1≤ i≤ T), we have
[r_0^(k+1)|_k] = ∇ f_T(x_k,y_k^(N))∏_i=2^T∇ f_T+1-i(u_k^(T+2-i)),
which together with the Lipschitz smoothness assumption on ∇Ψ, imply that
[r_0^(k+1)|_k] - ∇Ψ(x_k,y_k^*)
≤ ∇ f_T(x_k,y_k^(N))∏_i=2^T∇ f_T+1-i(u_k^(T+2-i)) - ∇Ψ(x_k,y_k^(N)) + ∇Ψ(x_k,y_k^(N)) - ∇Ψ(x_k,y_k^*)
≤ ∑_j=2^TC_jf_j(u_k^(j+1)) - u_k^(j) + L_∇Ψy_k^(N) - y_k^*
Define the positive constant C̃ as
C̃^2 = max{L_Φ^2, max_2≤ j≤ T[(1 + L_∇ g^2/μ_g^2)C_j^2], (√(1 + L_∇ g^2/μ_g^2)L_∇Ψ + (L_∇ g+μ_g)L_∇^2 g/μ_g^2)^2}
From (<ref>) and Lemma <ref>, we obtain
[w_k+1|_k] - ∇Φ(x_k)- ℰ_k
= [r_0,x^(k+1)|_k] -∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^(N))]^-1[r_0,y^(k+1)|_k]
- ∇_x Ψ(x_k, y_k^*)+ ∇_xy^2 g(x_k, y_k^*)[∇_y^2g(x_k,y_k^*)]^-1∇_y Ψ(x_k, y_k^*)
≤ [ I, -∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^(N))]^-1 ]([r_0^(k+1)|_k] - ∇Ψ(x_k,y_k^*))
+ ∇_xy^2 g(x_k, y_k^*)[∇_y^2g(x_k,y_k^*)]^-1∇Ψ_y(x_k,y_k^*) -∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^(N))]^-1∇Ψ_y(x_k,y_k^*)
≤ √(1 + L_∇ g^2/μ_g^2)[∑_j=2^TC_jf_j(u_k^(j+1)) - u_k^(j) + L_∇Ψy_k^(N) - y_k^*] +(L_∇ g + μ_g)L_∇^2 g/μ_g^2y_k^(N) - y_k^*
≤ C̃(∑_j=2^Tf_j(u_k^(j+1)) - u_k^(j) + y_k^(N) - y_k^*),
where the third inequality follows from (<ref>) and the fact that
∇_xy^2 g(x_k, y_k^*)[∇_y^2g(x_k,y_k^*)]^-1-∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^(N))]^-1
≤ ∇_xy^2 g(x_k, y_k^*)[∇_y^2g(x_k,y_k^*)]^-1-∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^*)]^-1
+ ∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^*)]^-1-∇_xy^2 g(x_k, y_k^(N))[∇_y^2g(x_k,y_k^(N))]^-1
≤ L_∇^2g/μ_gy_k^(N) - y_k^*+L_∇_g[∇_y^2g(x_k,y_k^*)]^-1(∇_y^2g(x_k,y_k^(N)) - ∇_y^2g(x_k,y_k^*))[∇_y^2g(x_k,y_k^(N))]^-1
≤ (L_∇ g + μ_g)L_∇^2 g/μ_g^2y_k^(N) - y_k^*.
Inequality (<ref>) indicates that
[w_k+1|_k] - ∇Φ(x_k)≤C̃(∑_j=2^Tf_j(u_k^(j+1)) - u_k^(j) + y_k^(N) - y_k^*) + ℰ_k,
which, together with the definition of e_i in (<ref>), imply that e_i^2≤C̃^2(T+2)(z_i-x_i^2 + ∑_j=2^Tf_j(u_i^(j+1)) - u_i^(j)^2 + y_i^(N) - y_i^*^2) + (T+2)ℰ_i^2.
By the above inequality and (<ref>), we have
∑_i=0^Kτ_i[d_i - ∇Φ(x_i)^2]
≤ [d_0 - ∇Φ(x_0)^2] + ∑_i=0^K-1τ_i[e_i^2] + σ_w^2∑_i=0^K-1τ_i^2
≤ C̃^2(T+2)∑_i=0^Kτ_i[z_i-x_i^2 + ∑_j=2^Tf_j(u_i^(j+1)) - u_i^(j)^2 + y_i^(N) - y_i^*^2]
+ L_∇ g^2 σ̂_r^2/μ_g^2(T+2)δ_g^2∑_i=0^Kτ_i + [d_0 - ∇Φ(x_0)^2] + σ_w^2∑_i=0^K-1τ_i^2
= O_T(∑_i=0^Kτ_i[z_i-x_i^2 + ∑_j=2^Tf_j(u_i^(j+1)) - u_i^(j)^2 + y_i^(0) - y_i^*^2 + δ_g^2] + 1).
The equality holds due to Lemma <ref> and under Assumption <ref>.
We are now ready to provide the proof of Theorem <ref>.
Observe that
[⟨∇Φ(x_k) - w_k+1, z_k - x_k⟩] = [[⟨∇Φ(x_k) - w_k+1, z_k - x_k⟩|_k]]
= [⟨∇Φ(x_k)-[w_k+1|_k], z_k-x_k⟩]≤[∇Φ(x_k)-[w_k+1|_k]·z_k-x_k]
≤ [C̃∑_j=2^Tf_j(u_k^(j+1)) - u_k^(j)z_k-x_k + C̃y_k^(N)-y_k^*z_k-x_k + ℰ_kz_k-x_k]
≤ C̃[∑_j=2^Tf_j(u_k^(j+1)) - u_k^(j)z_k-x_k + (y_k^(0)-y_k^* + √(N)γ_kσ_v)z_k-x_k]
+ [1/4λℰ_k^2 + λz_k-x_k^2],
where λ is a positive constant to be determined, the second inequality follows from (<ref>), the third one follows from Lemma <ref>, and the fact that
[y_k^(N)-y_k^*z_k-x_k]
= [[y_k^(N)-y_k^*z_k-x_k|x_k,z_k,y_k^(0)]] = [[y_k^(N) - y_k^*|x_k,y_k^(0)]z_k-x_k]
≤ [√([y_k^(N)-y_k^*^2|x_k,y_k^(0)])z_k-x_k]≤[√(y_k^(0)-y_k^*^2 + Nγ_k^2σ_v^2)z_k-x_k]
≤ [(y_k^(0)-y_k^*+√(N)γ_kσ_v)z_k-x_k].
We also have
Φ(x_k+1) - Φ(x_k)≤L_Φτ_k^2/2z_k - x_k^2 + τ_k⟨∇Φ(x_k), z_k - x_k⟩,
η(x_k,d_k,β) - η(x_k+1,d_k+1,β)
≤ -βτ_kz_k - x_k^2 - τ_k⟨ w_k+1, z_k-x_k⟩ + L_∇η/2[x_k+1-x_k^2 + d_k+1 - d_k^2],
[y_k+1^(0) - y_k+1^*^2] - [y_k^(0) - y_k^*^2]≤-τ_k[y_k^(0) - y_k^*^2] + 2Nγ_k^2σ_v^2 + 2L_y^*^2τ_k[x_k-z_k^2],
where (<ref>) uses L_Φ-smoothness of Φ, (<ref>) uses L_∇η-smoothness of η and the definition of z_k (see (3.9) of <cit.>), and (<ref>) is from (<ref>). Now, we are ready to bound W_k+1 - W_k. By (<ref>), (<ref>), (<ref>), (<ref>), and Lemma <ref> we have
[W_k+1 - W_k]
≤ [L_Φτ_k^2/2z_k - x_k^2 + τ_k⟨∇Φ(x_k), z_k - x_k⟩]
- [βτ_kz_k - x_k^2 + τ_k⟨ w_k+1, z_k-x_k⟩] + L_∇η/2[x_k+1-x_k^2 + d_k+1 - d_k^2]
- ∑_i=1^Tρ_iτ_k[u_k^(i) - f_i(u_k^(i+1))^2] +∑_i=1^Tρ_i[θ̃_k+1^(i)] + ν([y_k+1^* - y_k+1^(0)^2] - [y_k^* - y_k^(0)^2])
= - βτ_k[z_k - x_k^2] + ν([y_k+1^* - y_k+1^(0)^2] - [y_k^* - y_k^(0)^2])
- ∑_i=1^Tρ_iτ_k[u_k^(i) - f_i(u_k^(i+1))^2] + τ_k[⟨∇Φ(x_k) - w_k+1, z_k - x_k⟩]
+ (L_∇η+L_Φ)τ_k^2/2[z_k-x_k^2] + L_∇η/2[d_k+1-d_k^2] + ∑_i=1^Tρ_i[θ̃_k+1^(i)]
≤ - βτ_k[z_k - x_k^2] -ντ_k[y_k^(0) - y_k^*^2] + 2ν Nγ_k^2σ_v^2 + 2ν L_y^*^2τ_k[x_k-z_k^2]
- ∑_i=1^Tρ_iτ_k[u_k^(i) - f_i(u_k^(i+1))^2] + τ_k[C̃∑_j=2^Tf_j(u_k^(j+1)) - u_k^(j)z_k-x_k]
+ τ_k[C̃(y_k^(0)-y_k^* + √(N)γ_kσ_v)z_k-x_k +1/4λℰ_k^2 + λz_k-x_k^2]
+ (L_∇η+L_Φ)τ_k^2/2[z_k-x_k^2] + L_∇η/2[d_k+1-d_k^2] + ∑_i=1^Tρ_i[θ̃_k+1^(i)].
Assume that we choose the constants β, ν, ρ_1,...,ρ_T, λ such that
- (β - λ - 2ν L_y^*^2)z_k-x_k^2 - νy_k^(0)-y_k^*^2 - ∑_i=1^Tρ_iu_k^(i) - f_i(u_k^(i+1))^2
+ C̃z_k-x_ky_k^(0) - y_k^* + C̃∑_j=2^Tz_k-x_kf_j(u_k^(j+1)) - u_k^(j)
≤ -c·(z_k-x_k^2 + y_k^(0)-y_k^*^2 + ∑_i=1^Tu_k^(i) - f_i(u_k^(i+1))^2)
for some constant c>0, define
R_k = 2ν Nγ_k^2σ_v^2 + C̃√(N)σ_vτ_kγ_k[z_k-x_k] + τ_k/4λ[ℰ_k^2]
+ (L_∇η+L_Φ)τ_k^2/2[z_k-x_k^2] + L_∇η/2[d_k+1-d_k^2] + ∑_i=1^Tρ_i[θ̃_k+1^(i)].
and the constant
C_R,1 = 2ν Nc_γ^2σ_v^2 + C̃√(N)c_γσ_vσ_w/β + (L_∇η+L_Φ)σ_w^2/2β^2 + 2L_∇ησ_w^2
+ ∑_i=1^Tρ_i[(4L_∇ f_i^2 + σ_J_i^2 + b_i)a_i+1 + σ_F_i^2].
Taking summation on both sides of (<ref>), we obtain
∑_k=0^KR_k≤ 2ν Nc_γ^2σ_v^2∑_k=0^Kτ_k^2 + C̃√(N)c_γσ_vσ_w/β∑_k=0^Kτ_k^2 + 1/4λ∑_k=0^Kτ_k[ℰ_k^2]
+ (L_∇η+L_Φ)σ_w^2/2β^2∑_k=0^Kτ_k^2 + 2L_∇ησ_w^2∑_k=0^Kτ_k^2
+ ∑_i=1^Tρ_i[(4L_∇ f_i^2 + σ_J_i^2 + b_i)a_i+1 + σ_F_i^2]∑_k=0^Kτ_k^2 ≤ C_R,1∑_k=0^Kτ_k^2 + δ_g^2/4λ∑_k=0^Kτ_k,
where the second inequality follows from Lemma <ref>. Taking summation on both sides of (<ref>) and using (<ref>), (<ref>) and (<ref>), we have
[W_K+1] - [W_0]
≤ -c·∑_k=0^Kτ_k[z_k-x_k^2 + y_k^(0)-y_k^*^2 + ∑_i=1^Tu_k^(i) - f_i(u_k^(i+1))^2]
+ C_R,1∑_k=0^Kτ_k^2 + δ_g^2/4λ∑_k=0^Kτ_k,
which implies that
∑_k=0^Kτ_k[z_k-x_k^2 + y_k^(0)-y_k^*^2 + ∑_i=1^Tu_k^(i) - f_i(u_k^(i+1))^2]
≤ 1/c[[W_0 - W_K+1] + C_R,1∑_k=0^Kτ_k^2 + δ_g^2/4λ∑_k=0^Kτ_k].
Now, if we choose R∈{0, 1,2,...,K} satisfying P(R=k) = τ_k/∑_j=0^Kτ_j, we have:
[V(x_R,d_R)]≤max(1,β)/∑_j=0^Kτ_j∑_k=0^Kτ_k[z_k-x_k^2+d_k-∇Φ(x_k)^2]= O_T(∑_k=0^Kτ_k^2 + 1 + δ_g^2∑_k=0^Kτ_k/∑_k=0^Kτ_k).
The equality holds because of (<ref>) and (<ref>).
To ensure condition (<ref>), we need the following technical result.
There exist positive constants β, ν, ρ_1,...,ρ_T, λ, c such that for any positive constants x, y, z_1,...,z_T, we have
(β - λ - 2ν L_y^*^2-c)x^2 + (ν - c)y^2 + ∑_i=1^T(ρ_i - c)z_i^2 - C̃xy - C̃∑_j=2^Txz_j ≥ 0
implying that (<ref>) holds.
Note that the above conclusion is equivalent to show that A^(T)≽ 0, where A^(T)=(a_ij)∈^(T+2)× (T+2) and its non-zero elements only include
a_11 = (β - λ - 2ν L_y^*^2-c), a_1i = a_i1 = -C̃/2, a_22 = ν - c, a_jj = ρ_j-2-c,
for all 2≤ i≤ T+2 and 3≤ j≤ T+2. In other words, we have
A^(T)=
[ a_11 -C̃/2 -C̃/2 ⋯ -C̃/2; -C̃/2 ν-c 0 ⋯ 0; -C̃/2 0 ρ_1-c ⋯ ⋮; ⋮ ⋮ ⋮ ⋱ 0; -C̃/2 0 ⋯ 0 ρ_T-c ]
Defining A_k = (A^(k)), for any 1≤ k≤ T, we know by induction that
A_k+1 = (ρ_k+1 - c)A_k - C̃^2/4(ν-c)(ρ_1-c)⋯(ρ_k-c),
A_1 = (ρ_1-c)(a_11(ν-c) - C̃^2/4) - C̃^2/4(ν-c).
Hence, for any 1≤ k≤ T, we have
A_k = [A_0 - C̃^2(ν - c)/4(∑_i=1^k1/ρ_i-c)]·∏_i=1^k(ρ_i-c), A_0 = a_11(ν - c) - C̃^2/4.
Hence, A^(T)≽ 0 if and only if for any 0≤ k≤ T, A_k≥ 0 and a_11≥ 0. One sufficient condition is to set
ν = ρ_1 = ⋯ = ρ_T = 2c, β - λ = 4cL_y^*^2 + c + C̃^2(T+1)/4c,
for any constant c>0. Then, it is trivial to verify that, for any 0≤ k≤ T,
a_11 = C̃^2(T+1)/4c≥ 0, A_k = C̃^2(T-k)c^k/4≥ 0.
§ SIMULATION RESULTS
We now present our simulation results on the robust feature learning problem introduced in Section <ref>. We consider solving the distributionally robust feature learning problem (<ref>), in which Y is generated using multi-index model Y = ∑_i=1^50(ω_i X + c·sin(ω_i X)) + ε, a popular model in statistics and economics <cit.>. Here, each ω_i is sampled from 𝒩(0, I_100) and then normalized to a unit vector in ^100, ε is the noise sampled from normal distribution 𝒩(0, 0.01^2) for training data and 𝒩(0, 0.1^2) for testing. We set c=1 and 1.5 in our experiments. Covariate X∈^100 and all the coordinates are independent and identically generated from the uniform distribution over [0, 1]. We consider a fully-connected neural network with two hidden layers, each of which has 50 nodes. The activation function is (smoothed) ReLU. The β in (<ref>) represents the weights in the last layer and the feature mapping Φ is the whole model without the last layer.
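A sketch of the data-generating process (with our own variable names) is given below; we read ω_i X as the inner product ⟨ω_i, X⟩, and NumPy's standard Beta(a, b) sampler is used as a stand-in for the test distribution, which differs slightly from the density defined later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, n_index, c = 100, 5000, 1000, 50, 1.0

# Unit-norm index vectors omega_i (drawn from N(0, I_d), then normalized).
W = rng.standard_normal((n_index, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def multi_index_response(X, noise_std):
    Z = X @ W.T                                   # inner products <omega_i, X>
    return (Z + c * np.sin(Z)).sum(axis=1) + rng.normal(0.0, noise_std, size=len(X))

# Training covariates: iid Uniform[0, 1]; test covariates: iid Beta(a, b) (covariate shift).
X_train = rng.uniform(0.0, 1.0, size=(n_train, d))
y_train = multi_index_response(X_train, noise_std=0.01)
a, b = 3.0, 6.0
X_test = rng.beta(a, b, size=(n_test, d))
y_test = multi_index_response(X_test, noise_std=0.1)
```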
§.§ Robust Meta Learning
Meta learning can be viewed as a compositional bi-level problem if we view the upper-level objective as a compositional function.
Let X ∈ℝ^d be a random variable and let Y^(ℓ), for ℓ∈{1,…, T}, be the responses of T tasks. Let Φ: ℝ^d →ℝ^p be a feature map, to be learned, that is common across tasks.
In order to incorporate robustness into training the shared feature map, we minimize the variance instead of the expectation at the top level. This leads to a composition of two functions at the upper level.
The upper-level problem is given by
min_Φ 1/T∑_ℓ=1^T { 𝔼[(Y^(ℓ) - ⟨β^(ℓ), Φ(X)⟩)^4] - (𝔼[(Y^(ℓ) - ⟨β^(ℓ), Φ(X)⟩)^2])^2 }
subject to
β^(ℓ) = argmin_β∈ℝ^p 𝔼[(Y^(ℓ) - ⟨β, Φ(X)⟩)^2], ∀ℓ∈{1,…, T}.
If we replace the variance with risk-averse loss, we have a composition of three functions at the upper-level.
The reason behind the formulation (<ref>) is to learn a robust feature mapping Φ that can generalize well on a test distribution Q. By reformulation (<ref>) we know that this problem can be solved via , i.e., Algorithm <ref>. To compare the generalization performance of , we also conduct experiments using <cit.>, <cit.>, and <cit.>.
* For , we choose the stepsize to be 10^-4 and solve the following problem:
min_Φ, β _P [(Y - ⟨β, Φ(X)⟩)^2]
directly.
* For , we set the stepsizes ρ^t = γ^t = 0.01/√(K) in Algorithm 1 of <cit.>, as suggested in Theorem 1 of <cit.>, to solve the following bi-level problem:
min_Φ _P [(Y - ⟨β, Φ(X)⟩)^2] s.t. β = β̃∈ℝ^p _P[ (Y - ⟨β̃, Φ(X)⟩)^2 ].
* For , we set τ_k = 0.1/√(K), β = 3 in Algorithm 2 of <cit.>, to solve the following problem:
min_Φ, β{_P[(Y - ⟨β, Φ(X)⟩)^2] + λ (_P[max(0,(Y - ⟨β, Φ(X)⟩)^2 - _P[(Y - ⟨β, Φ(X)⟩)^2])^2])^1/2}.
* For we choose α = 0.01, M=⌊log K⌋ in Algorithm <ref>, and N=5, β = 30, τ_k = γ_k = 0.03/√(K) in Algorithm <ref> to solve (<ref>):
min_Φ{_P[(Y - ⟨β, Φ(X)⟩)^2] + λ (_P[max(0,(Y - ⟨β, Φ(X)⟩)^2 - _P[(Y - ⟨β, Φ(X)⟩)^2] )^2])^1/2}
s.t. β = β̃∈ℝ^p _P[ (Y - ⟨β̃, Φ(X)⟩)^2 ].
For (<ref>) and (<ref>), we use a smoothed version of the max function for the experiments. In all algorithms the number of (outer-loop) iterations (i.e., K in Algorithm <ref>) is fixed to be 200. To evaluate the generalization performance of each algorithm, all entries of each test data X are independent and identically sampled from the test distribution Q which is set to be the Beta distribution over [0, 1]. Recall that the corresponding density function is given by
p(x;a,b) = x^a(1-x)^b/∫_0^1 u^a(1-u)^b du,
where we choose (a, b) = (3, 6) and (1.5, 4.5) in our experiments. According to <cit.>, λ is chosen to be 1.65× 10^-3 when (a, b) = (3, 6) and 1.89× 10^-3 when (a, b) = (1.5, 4.5) in (<ref>). We compare the algorithms by testing each of them on 100 trials, and the results are summarized in Figure <ref>. For all algorithms, we evaluate the mean-squared losses on the same test dataset of size 1000. We plot the average of these losses, and the shaded regions represent the standard deviation. From the results, we observe that the nested compositional bi-level formulation solved using the proposed method outperforms the other formulations and algorithms in terms of test loss, which indicates that solving problem (<ref>) via Algorithm <ref> indeed introduces robustness into the feature learning process. In particular, it is worth noting that the comparison between and indicates the superiority of reformulating the single-level optimization problem in (<ref>) (which learns the features and regression coefficients jointly) as a bi-level one in (<ref>) (which learns the features robustly and the regression coefficients only using least-squares).
§ CONCLUSION
In this paper, we study a class of problems at the intersection of bi-level and nested compositional optimization, arising in several application domains including robust feature learning. This class consists of bi-level problems in which the upper-level objective function has a nested composition structure, which imposes an additional challenge in controlling the error in estimating the hypergradient. We propose a novel stochastic approximation algorithm and establish its finite-time convergence analysis, showing that it achieves state-of-the-art complexity bounds (optimal up to an additional log factor) for bi-level nested compositional problems.
§ AN ISSUE IN NEUMANN SERIES ANALYSIS IN PRIOR WORKS
In this section, we point out the issue in <cit.> on estimating the Hessian inverse using Neumann series. Using our notation, the following inequality is used in the above works; see (3.76) in <cit.>, Lemma 11 in arXiv version 2 of <cit.>, and Lemma 5 in <cit.>:
[w_k - [w_k|_k]^2]≤σ_w^2, [w_k^2]≤σ̃_w^2,
where w_k is the hypergradient estimate, which is constructed by Neumann series based approach, and σ_w and σ̃_w^2 are constants that are independent of M.
It essentially means that the variance of the hypergradient estimate can be bounded by some given constants that are independent of M. Below, we provide a counter example refuting the claim that the constant σ_w^2 and σ̃_w^2 are independent of M.
Suppose we are going to use Neumann series based approach to estimate A^-1, where μ I≼ A ≼ L I for some constants 0<μ<L. The process is:
* Fix an integer M>0. Sample p from {0,1,...,M-1} uniformly at random.
* Set X = M/L∏_i=1^p(I - 1/LA_i) as the estimate for A^-1, where each A_i satisfies [A_i] = A, [A_i -A^2]≤σ^2, and A_i is independent of A_j for j≠ i. Here X = M/L I if p=0.
Note that if we set A_i = A = 1 (noiseless scalar), then we have:
X = M/L(1-1/L)^p with probability 1/M,
[X] = 1/M∑_p=0^M-1M/L(1-1/L)^p = 1 - (1-1/L)^M.
Thus, for M>L we have
[X- [X]^2] > 1/M(M/L - (1 - (1-1/L)^M))^2
= 1/M(M/L - 1 + (1-1/L)^M)^2 > 1/M(M/L - 1)^2,
where in the first inequality we just consider the case when p=0, and the last inequality uses M>L. Note that this means the upper bound of the variance of X must depend on M if we use this process.
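This can also be checked numerically with a few lines of NumPy; the constants below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the counterexample: A_i = A = 1 (noiseless scalar), L = 2.
L, M, n_samples = 2.0, 50, 200_000
rng = np.random.default_rng(0)
p = rng.integers(0, M, size=n_samples)          # p ~ Uniform{0, ..., M-1}
X = (M / L) * (1.0 - 1.0 / L) ** p              # Neumann-series estimate of A^{-1} = 1
print(X.mean())   # close to 1 - (1 - 1/L)^M, i.e. nearly unbiased for large M
print(X.var())    # grows with M; compare with the lower bound (1/M) * (M/L - 1)^2
```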
As a consequence, for the algorithms in the above works, an additional log factor is introduced in the overall sample complexity. Indeed, as we show in our Theorem <ref>, we need to pick M=Θ(log K), with K denoting the number of outer-loops.
|
http://arxiv.org/abs/2307.04514v1 | 20230710122050 | Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold | [
"Tuc Nguyen-Van",
"Dung D. Le",
"The-Anh Ta"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification
Ashish Singh
Antonio Bevilacqua
Timilehin B. Aderinola
Thach Le Nguyen
Darragh Whelan
Martin O'Reilly
Brian Caulfield
Georgiana Ifrim
August 12, 2023
============================================================================================================================================
In graph representation learning, it is important that the complex geometric structure of the input graph, e.g. hidden relations among nodes, is well captured in embedding space.
However, standard Euclidean embedding spaces have a limited capacity in representing graphs of varying structures.
A promising candidate for the faithful embedding of data with varying structure is product manifolds of component spaces of different geometries (spherical, hyperbolic, or Euclidean).
In this paper, we take a closer look at the structure of product manifold embedding spaces and argue that each component space in a product contributes differently to expressing structures in the input graph, hence should be weighted accordingly.
This is different from previous works which consider the roles of different components equally.
We then propose , a data-driven method for learning embedding of heterogeneous graphs in weighted product manifolds.
Our method utilizes the topological information of the input graph to automatically determine the weight of each component in product spaces. Extensive experiments on synthetic and real-world graph datasets demonstrate that is capable of learning better graph representations with lower geometric distortion from input data, and performs better on multiple downstream tasks, such as word similarity learning, top-k recommendation, and knowledge graph embedding.
We provide the source of our implementation at https://github.com/sharecodesubmission/weighted_product_manifold.
§ INTRODUCTION
Representation learning aims to acquire the ability to effectively embed meaningful data into feature spaces <cit.>. In traditional representation learning models, Euclidean embedded spaces have been predominantly utilized. However, the uniform geometric structure of Euclidean spaces has certain limitations when it comes to providing accurate representations for various types of structured data, particularly graphs such as tree structures <cit.> or circular graphs <cit.>. Consequently, there is a growing interest in developing methods that enable the embedding of graph features in non-Euclidean spaces <cit.>.
Real-world data frequently exhibit diverse patterns and complex geometries that cannot be adequately captured by the uniform structures of Euclidean embedding spaces. It has been observed that Euclidean spaces are often insufficient for embedding various types of real-world graph data, such as hierarchical structures that induce negative curvature geometry <cit.>, or circle structures <cit.> that require positive curvature geometry.
Previous research has demonstrated that using spherical embedding spaces instead of Euclidean ones can result in minimal distortion errors when embedding data with circle and ring structures <cit.>. Moreover, models that solely utilize embedding spaces of a single geometric type often struggle to capture mixed structures effectively. These models tend to produce embedding representations with significant geometric distortion compared to the underlying geometry of the input data <cit.>. In contrast, approaches employing product spaces composed of components with different geometries have shown promising results in graph representation learning.
Problem
Current geometric embedding models, as seen in <cit.>, typically employ product spaces with equally weighted components. In this setup, the learnable parameters are fitted to the training data samples across all component spaces in a uniform manner. However, we contend that this approach hinders the robustness of models when learning data with diverse geometric structures.
Specifically, when the input data predominantly exhibit a particular geometric type compared to others, updating all components equally may not be optimal. Instead, it would be advantageous to assign more emphasis to the dominant geometric type during the parameter update process. This would allow the model to better capture and represent the most prevalent geometric structure in the data.
Our approach
To address this issue, we introduce a novel data-driven approach that incorporates a scoring mechanism for each component in the product spaces. This scoring mechanism enables the automatic learning of weights for each component based on the geometric structures present in the input data.
By considering the specific geometric characteristics of the data, our method allows for the construction of flexible and adaptive product spaces. This includes not only updating the weights of the components but also adjusting the geometric curvatures of the spaces.
As a result, our models are capable of effectively capturing and representing the complex geometric structures inherent in the data, leading to improved embedding performance.
Contributions
We summarize our contribution as follows.
Firstly, to the best of our knowledge, this is the first work that considers the structure of each component of the product manifold and proposes that each component space contributes differently to expressing the various geometric structures in the input graph, hence should be weighted accordingly.
Secondly, we propose , a data-driven method for learning embeddings of heterogeneous graphs in weighted product manifolds.
Thirdly, we conduct extensive experiments on both synthetic and real-world datasets to validate our approach to the various downstream tasks.
§ RELATED WORKS & BACKGROUND
The field of machine learning has witnessed a proliferation of works focusing on learning data representations in non-Euclidean spaces, as evidenced by studies such as <cit.>. However, recent research by <cit.> has highlighted the computational challenges and numerical instability faced by hyperbolic graph convolution networks, particularly in high-dimensional settings. To address this issue, <cit.> proposed a random feature mapping technique that utilizes the eigenfunctions of the Laplace operator to approximate an isometry-invariant kernel on hyperbolic space.
Another notable approach in this area is CurvGAN <cit.>, which introduces a GAN-based graph representation method that preserves the topological properties of discrete structures by approximating them as continuous Riemannian geometric manifolds. However, these methods primarily focus on a single embedding space and may struggle to effectively capture the underlying structure of the input data.
In contrast, the product of spaces has been shown to possess the capability to achieve higher generalization and effectively capture the intrinsic structures of graphs with mixed geometries <cit.>. By combining multiple spaces with different geometric characteristics, the product of spaces approach offers improved representation learning and a more comprehensive understanding of complex data structures.
While several approaches have explored the use of product spaces, few have addressed the challenges associated with defining and updating the component spaces. One such work, Switch Spaces <cit.>, introduces a method that selects a combination of K components from a set of N spaces based on input specifications. It employs a gating mechanism to score and choose subspace components using pairwise relationships in the training data. However, since entities in a graph are not independent and identically distributed (iid), the component spaces selected based on individual input instances may not effectively capture the overall relationships between nodes in the graph. Consequently, Switch Spaces requires embedding spaces with high dimensions (e.g., 100, 500) to achieve competitive performance in various downstream tasks like knowledge graph embedding and recommendation.
Unfortunately, this approach unintentionally sacrifices the advantages offered by non-Euclidean models, which can achieve compactness by requiring smaller dimensions to achieve the same capacity as Euclidean space. In our study, we propose a novel approach that leverages a richer and more robust representation space to capture the diverse geometric structures present in graph data. By enhancing the quality of embeddings, our research complements existing graph-based learning methods and enables more effective representation learning.
Non-Euclidean embedding spaces
Non-Euclidean representation learning has emerged as a powerful approach, delivering state-of-the-art performance across diverse tasks. Specifically, hyperbolic space has proven effective in tasks such as network embedding <cit.>, recommendation systems <cit.>, and knowledge graphs <cit.>. On the other hand, spherical space excels in modeling directional similarity and data with cyclical structures <cit.>. Each of these spaces possesses unique geometric features, and the selection of an appropriate embedding space should be guided by the inherent structure of the data. By leveraging the most suitable embedding space, we can effectively capture the intrinsic properties and relationships within the data, leading to superior performance across a wide range of applications.
Product manifold
Product manifolds are constructed by combining embedding spaces with different geometric types, such as Euclidean, hyperbolic, and spherical spaces. In the context of representation learning, the concept of product spaces was introduced in <cit.>, where each component of the product space has a constant curvature. The curvature of the product space is determined by the sum of curvatures of its individual components <cit.>, resulting in a constant curvature overall. This property enables product spaces to capture a wide range of curvatures with lower distortion compared to a single space <cit.>. As a result, product spaces are particularly well-suited for real-world data that exhibit mixtures of geometric structures.
For example, <cit.> developed a Mixed-curvature Variational Autoencoder, which efficiently trains a VAE with a latent space consisting of a product of constant curvature Riemannian manifolds. Additionally, the heterogeneous structure present in user-item interaction graphs can be effectively learned by utilizing product spaces with different curvature components <cit.>.
Distortion error of embedding
Given metric spaces U and V equipped with distances d_U and d_V respectively, an embedding is a continuous and injective mapping f: U → V. To evaluate the quality of an embedding, we use the average distortion metric D_avg(f), which calculates the average distortion over all pairs of points. Distortion between a pair of points a and b is defined as |(d_V(f(a), f(b))/d_U(a, b))^2 - 1|.
§ PROPOSED METHOD
In this section, we present our approach to learning the weights between sub-geometries with different curvatures in the product of embedding spaces. Our objective is to ensure that the curvatures of the graph embedding spaces closely match the curvatures of the graph itself. To accomplish this, we introduce a novel gating mechanism that assigns a score to each component space.
Motivated from the coarsening approaches <cit.>, we designed gating mechanism to leverage the message-passing of information across various regions of the input graph, enabling the extraction of topology information. Our gating mechanism divides the graph into multiple parts, where each sub-graph is predominantly characterized by a specific type of geometry, such as a tree or cycle structure.
For example, in a graph consisting of a ring of trees where the tree structure dominates, we assign higher scores to hyperbolic components in the product space compared to spherical components. This choice is made to improve the quality of the embeddings produced.
By applying this gating mechanism and adjusting the weights between the different sub-geometries, we aim to achieve a more accurate representation of the graph's underlying structures, resulting in improved embedding results.
Problem formulation
Given three types of geometry: Euclidean (𝔼), Hyperbolic (ℍ), and Spherical (𝕊).
Let ℳ_1, ℳ_2, …, ℳ_N be N component spaces, where each ℳ_i is of one geometric type among {𝔼, ℍ, 𝕊} and has dimension b_i.
The goal of our approach is to learn the score 𝐰 = (w_1, …, w_N) ∈ℝ^N from the input graph data on each component of product manifold embedding space in such a way that the embedding of input graph into P = w_1 ℳ_1 × w_2 ℳ_2 ×…× w_N ℳ_N will have lowest possible geometric distortion.
§.§ Coarsening input graph data
Hierarchical pooling layers
Given input graph 𝒢, with n > 0 nodes, adjacency matrix 𝐀∈{ 0, 1}^n × n and node features 𝐗∈𝐑^n × d.
The matrix 𝐀 represents graph structure: 𝐀(i, j) = 1 if there is an edge connecting two nodes i, j, otherwise 𝐀(i, j) = 0.
𝐃 is the diagonal degree matrix of the graph 𝒢, where D_ii = ∑_j 𝐀_ij.
We use hierarchical pooling-based GCNs to learn cluster assignments.
There are two GCNs with two different sets of parameters in this module.
At each layer l, the soft cluster assignment matrix 𝐒^(l)∈𝐑^n_l-1× n_l is computed as follows:
𝐒^(l) = softmax(GNN_1^(l)(𝐀^(l-1), 𝐗^(l-1))), with (𝐀^(0), 𝐗^(0)) = (𝐀, 𝐗).
Then, we apply the second GNN on 𝐒^(l) to compute the graph representation at layer l:
𝐗^(l) = 𝐒^(l)^T GNN_2^(l)(𝐀^(l-1), 𝐗^(l-1)) and 𝐀^(l) = 𝐒^(l)^T 𝐀^(l-1)𝐒^(l).
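The following PyTorch sketch illustrates one such pooling step; it is only a simplified rendering of the equations above (dense adjacency, a basic GCN layer, a single graph), not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCN(nn.Module):
    """One dense GCN layer: H = ReLU(Â X W), with Â = D^{-1/2}(A + I)D^{-1/2}."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A, X):
        A_hat = A + torch.eye(A.size(0), device=A.device)
        deg_inv_sqrt = A_hat.sum(-1).clamp(min=1e-12).pow(-0.5)
        A_norm = deg_inv_sqrt[:, None] * A_hat * deg_inv_sqrt[None, :]
        return F.relu(A_norm @ self.lin(X))

class PoolingLayer(nn.Module):
    """One coarsening step: S = softmax(GNN_1(A, X)); X' = S^T GNN_2(A, X); A' = S^T A S."""
    def __init__(self, in_dim, hid_dim, n_clusters):
        super().__init__()
        self.gnn_assign = DenseGCN(in_dim, n_clusters)   # produces cluster scores
        self.gnn_embed = DenseGCN(in_dim, hid_dim)       # produces node features

    def forward(self, A, X):
        S = torch.softmax(self.gnn_assign(A, X), dim=-1)  # (n_{l-1}, n_l)
        X_next = S.t() @ self.gnn_embed(A, X)             # (n_l, hid_dim)
        A_next = S.t() @ A @ S                            # (n_l, n_l)
        return S, X_next, A_next
```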
Coarsening input graph
The hierarchical pooling layer produces a coarsened graph with m < n nodes, a weighted adjacency matrix A' ∈ℝ^m × m, and node embeddings Z' ∈ℝ^m × d.
This process is then repeated L times, resulting in a GNN model with L layers that operate on the input graph and a series of coarser versions of it.
The soft assignment matrix S^(l) assigns each node at layer l to a cluster at the next layer l+1.
In other words, each row of S^(l) corresponds to one of the n_l nodes or clusters at layer l, while each column of S^(l) corresponds to one of the n_l+1 clusters at layer l+1.
In our approach, we treat the number of clusters as a hyperparameter and set n_l+1 = N, where N is the number of components in the product space P.
Each row of S^(l) shows the degree of membership of a node to each component space in P.
Attention pooling
We use the attention mechanism with the input being the matrix 𝐒^(l) to take the influence vector for each subspace.
Consider the matrix 𝐒 written as 𝐒 = [𝐡_1, 𝐡_2, …, 𝐡_N], with 𝐡_t ∈ℝ^d, and a trainable vector 𝐔∈ℝ^d.
Self attention:
We define a relevant scalar weight for each element of the sequence through a softmax layer as follows w_t = softmax(𝐡_t^T 𝐔).
Given the set of weights over all the elements of the sequence, we can then obtain the pooled representation as the weighted average of the hidden states
s = ∑_t = 1^N w_t 𝐡_t.
Multi-head self attention:
Considering a number of k heads for the multi-head attention, 𝐡_t = [𝐡_t1, 𝐡_t2, …, 𝐡_tk] where 𝐡_tj∈ℝ^d/k and size of each head is d/k.
In the same sense, we have a trainable parameter 𝐔 =[𝐮_1 𝐮_2 …𝐮_k] where 𝐮_j ∈ℝ^d/k.
Different attention is then applied over each head of the encoded sequence softmax function following
w_tj = softmax(𝐡_tj^T 𝐮_j), where w_tj corresponds to the attention weight of head j on element t.
A soft weight representation for each subspace is computed as follows:
s_j = ∑_t=1^N w_tj 𝐡_tj.
This allows the multi-head self-attention network to extract different kinds of information over different parts of the encoded sequence.
In the end, 𝐬∈ℝ^N represents the average weight of N component spaces in the product manifold P over the n_l clusters.
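A minimal single-head version of this gating step could look as follows (the class and variable names are ours; the multi-head variant would additionally split the columns of 𝐒 into k chunks and score each chunk with its own query 𝐮_j):

```python
import torch
import torch.nn as nn

class SubspaceGate(nn.Module):
    """Self-attention pooling over the soft-assignment matrix S (n_l x N).

    Each column h_t of S describes how strongly nodes belong to component
    space t; a trainable query u scores the columns, and a softmax turns
    the scores into subspace weights w in R^N.
    """
    def __init__(self, n_rows: int):
        super().__init__()
        self.u = nn.Parameter(torch.randn(n_rows))   # trainable query vector

    def forward(self, S: torch.Tensor) -> torch.Tensor:
        # S: (n_rows, N); scores[t] = h_t^T u
        scores = S.t() @ self.u                      # (N,)
        return torch.softmax(scores, dim=0)          # subspace weights, sum to 1
```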
§.§ Objective function
Let 𝐬∈ℝ^N be the weight vector of N components based on the data's local geometry information.
The distance between x_i, x_j ∈ P is computed following d_P^2(x_i, x_j) = ∑_k = 1^N 𝐬_k dist^2 (x_i^k, x_j^k).
Then the base objective ℒ_base is defined as:
ℒ_base = ∑_1 ≤ i < j ≤ n|(d_P(x_i, x_j)/d_G(x_i, x_j))^2-1|
Finally, the total average distortion objective function is defined as ℒ = ℒ_base + ℒ_aux,
where ℒ_aux = ℒ_LP + ℒ_e is a combination of the link prediction loss (ℒ_LP) and the entropy regularization loss (ℒ_e).
More precisely, ℒ_LP = ‖𝐀^(l) - 𝐒^(l)𝐒^(l)^T‖_F at each layer l, where ‖·‖_F denotes the Frobenius norm; and
ℒ_e = 1/n∑_i=1^n H(𝐒_i), where H(𝐒_i) is the entropy of the i-th row of matrix 𝐒.
Minimizing ℒ_LP means enforcing close nodes to be pooled together, while
minimizing ℒ_e makes the output cluster assignment for each node close to a one-hot vector so that the membership for each cluster is clearly defined.
Our total average distortion ℒ is optimized with the Algorithm <ref>.
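For concreteness, a sketch of the total objective for a single pooling layer is shown below (names are illustrative, and the pairwise distances are assumed to be precomputed in the graph and in the weighted product space):

```python
import torch

def total_loss(d_graph, d_embed, A, S, eps: float = 1e-12):
    """L = L_base + L_LP + L_e (single pooling layer shown for brevity).

    d_graph, d_embed : (n, n) pairwise distances in the graph / product space
    A                : (n, n) adjacency matrix
    S                : (n, N) soft cluster-assignment matrix
    """
    n = d_graph.size(0)
    mask = ~torch.eye(n, dtype=torch.bool, device=d_graph.device)
    ratio = d_embed[mask] / d_graph[mask].clamp(min=eps)
    L_base = torch.mean(torch.abs(ratio ** 2 - 1.0))          # average distortion

    L_lp = torch.norm(A - S @ S.t(), p="fro")                 # link-prediction loss

    entropy = -(S * torch.log(S.clamp(min=eps))).sum(dim=1)   # per-node entropy H(S_i)
    L_e = entropy.mean()

    return L_base + L_lp + L_e
```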
§.§ Physical meaning of subspace weights
In manifold representation learning, the goal is to embed data into appropriate embedding spaces where the curvature of the embedding matches the curvature of the original data. In the case of a product manifold, each data point is partially embedded in different subspaces with varying curvatures.
Our work explores the relationship among the curvatures of all the subspaces and introduces a partial update mechanism for the embedding space based on their respective influence scores. In the importance score box of Model Architecture (Figure <ref>), if the input data is predominantly characterized by hierarchical structures, the importance score of the hyperbolic embedding component (s_2) will receive a larger value compared to the others (s_1 and s_3).
In Algorithm <ref>, we update the subspaces' curvatures and the embedding itself. The higher the curvature embedding scores, the more effort is required to minimize them. As a result, the negative curvature loss should contribute more to the overall loss, leading to more active updates of the embedding spaces associated with negative curvature compared to the other spaces. This ensures that the embedding adapts to the data's curvature characteristics and effectively captures the underlying structures.
§ EXPERIMENTS
This section presents our experimental evaluation of the proposed model's performance across various learning tasks. We begin by evaluating the model's effectiveness in improving graph reconstruction, as described in section <ref>.
Following this, we apply our framework to four downstream tasks: recommendation systems, knowledge graph embedding, node classification, as well as graph classification, and word similarity tasks.
§.§ Graph reconstruction
We perform experiments on both synthetic and real-world datasets to evaluate the performance of our proposed model.
More information on baselines and metrics is shown in Appendix <ref>.
Model performance on synthetic datasets
Table <ref> shows the average distortion (D_avg) of our model on the three synthetic graphs. When d = 3, our model achieves D_avg = 0.104 with the product manifold s_1 ℍ^2×s_2 𝕊^1.
Meanwhile, without any constraints in subspace curvatures (PM <cit.>), the distortion measure of ℍ^2×𝕊^1 on the Cycle graph is 0.11.
Overall, for all three synthetic graphs, our proposed model improves upon the main contender method PM from <cit.> by 5.4 %, 16.3 %, and 18.6 %, respectively (Table <ref>).
A similar trend continues at the higher dimension d = 5, where our proposed method improves upon the baseline by 17.3%, 3.3%, and 11.9%, respectively (Table <ref>).
Model performance on benchmark datasets
We first employ a single space to generate embedding representations for each dataset in order to explore its intrinsic geometry.
Based on these observations, we develop heuristics for the dataset characteristics and utilize them to select the component in the model space product.
Then, the learning process optimizes the curvature of each subspace according to the dominant graph structure.
Figure <ref> presents the average distortion D_avg of embeddings into single model spaces for three complex benchmark datasets, as the number of embedding dimensions increases within the range of [5, 100].
We can see that, for the Cs PhDs and Power datasets, D_avg is smaller in hyperbolic space than in spherical space when d < 50, indicating that a hyperbolic component should be included in the general product space.
Similarly, the Cities dataset exhibits a more spherical structure than other geometric properties, and thus components of positive curvature should be used.
Table <ref> reports the performance of our model on the benchmark datasets.
Unlike the results obtained from the synthetic dataset, the best results are predominantly obtained when learning with the product manifolds.
This phenomenon is attributed to the more complex structure of real-world data compared to synthetic ones.
Specifically, the Power graph dataset has a hierarchical and cyclical structure that can be embedded effectively into any space with a hyperbolic and spherical component.
Our proposed model outperforms the main baseline PM <cit.> in all cases.
With embedding dimension d = 10, our model achieves the best distortion on the three datasets.
Specifically, in the Cs PhDs dataset, the percentage of improvements in terms of D_avg is 15.6 %.
In the Power dataset, the soft gating mechanism improves the distortion over the product-of-spaces model by 28.4%.
For d = 50, the improvements in average distortion (D_avg) over the uniform product of spaces (PM) of <cit.> are 19.3% and 13.9%, respectively.
Furthermore, Table <ref> shows that for a distortion of 0.0231 in the product space ℍ^5 ×𝕊^5 with the Power dataset, our method determines that the optimally weighted product manifold for embedding the dataset is 0.83 ℍ^5 × 0.16 𝕊^5. The ratio between the hyperbolic and spherical components is approximately 5:1, indicating the greater importance of the hyperbolic component compared to the spherical one.
In contrast, the uniform product embedding space PM of <cit.> assumes that each component space contributes equally to learning representations in the product of spaces.
Our method, on the other hand, captures the relation among all sub-geometries of different curvatures in the product manifold, conditioned on the geometry of the input graph data, leading to better performance than the uniform product of spaces (PM) without a scoring mechanism. The proposed method therefore has an advantage in discovering general models with suitable geometry in the product manifold. Notably, we also observe that the mAP measure is not consistently better than that of the uniform product model spaces <cit.> when D_avg decreases.
§.§ Performance on Knowledge Graph Embedding
Knowledge graphs (KGs) are a fundamental tool for representing information and have a wide range of applications, including question answering and web search <cit.>.
However, KGs are often highly incomplete, which poses a significant challenge for downstream use.
The goal of our approach is to address this issue by inferring missing facts in the KGs using entity and relation embedding techniques to map them to appropriate spaces.
In this section, we propose using the product of manifolds with a gating mechanism to represent the relations between entities in the KGs.
Detailed experimental scenario is shown in Appendix <ref>.
Model performance
Table <ref> reports the performance of various methods on two knowledge graphs.
To enable a fair comparison, we set the total embedding dimension to 64, which is a common practice in non-Euclidean embedding due to its ability to provide more compact spaces than Euclidean embeddings.
Our proposed model achieves superior performance over the baselines on the knowledge embedding graph, highlighting its effectiveness in learning informative representations of the data.
§.§ Performance on node classification and link prediction
In this section, we evaluate the performance of our proposed model on node and graph classification tasks.
Hyperbolic GCN <cit.> uses message-passing on the hyperbolic tangent space for graph convolutional networks (GCNs).
However, our proposed model replaces the hyperbolic space with the weighted product manifold and applies message passing in the tangent space of the product space.
We further report the hyperbolicity δ <cit.>, which evaluates the degree of tree-likeness of a graph based on its graph distance metric.
The value of δ ranges from 0 to half of the graph diameter, with trees having δ = 0, while "circle graphs" and "grid graphs" have a larger δ, approximately half of their diameters.
Further details on the metrics, datasets, and baselines used in our experiments can be found in Appendix <ref>.
Model performance
Table <ref> presents the F1 and AUC scores for the link prediction and node classification tasks.
Notably, the DISEASE and AIRPORT datasets exhibit high hyperbolicity (δ = 0 and 1, respectively), where the performance of using the product of hyperbolic space surpasses that of using the product of mixture curvatures.
This is because the uniform product of curvatures fails to identify the primary intrinsic graph structure and instead adapts equally to spaces that do not align with the graph's topology.
Our proposed extension addresses this issue by incorporating a weighting mechanism that identifies the dominant embedding manifold most influenced by the underlying structure of the graph data, leading to improved results in both link prediction and node classification for these two datasets.
§.§ Performance on Recommendation Systems
In this section, we evaluate the performance of our proposed model on the recommendation task. Specifically, we replace the hyperbolic space in hyperbolic metric learning for recommendation (HyperML <cit.>) with the proposed weighted product manifold. Detailed information on baselines, datasets, and metrics can be found in Appendix <ref>.
Objective function
In HyperML <cit.>, the push-pull loss is proposed to learn the metric between the positive and negative items.
The overall objective is defined as ℒ = ℒ_P + γℒ_D,
where pull-push loss ℒ_P and distortion loss ℒ_D are defined as:
ℒ_P = ∑_(i, j) ∈𝕊∑_(i, k) ∉𝕊 [m + d^2_𝔻(i,j) - d^2_𝔻(i,k)]_+,
ℒ_D = ∑_(i, j) ∈𝕊[|d_𝔻(f(i), f(j)) - d_𝔼(i, j)|/d_𝔼(i, j)]_+ + ∑_(i, k) ∉𝕊[|d_𝔻(f(i), f(k)) - d_𝔼(i, k)|/d_𝔼(i, k)]_+,
where [z]_+ = max(0, z), m > 0 is the margin size (m = 0.4 in this paper),
and f(.) is a mapping function f: 𝔼→𝔻 (f is the identity in <cit.>), γ is the multi-task learning weight and 𝕊 is the set of positive user-item pairs.
We use the same loss function as in <cit.>, differing only in the distance on 𝔻; specifically, we compute the distance d between two embeddings in the product of model spaces.
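As a rough sketch, the pull-push term with distances already computed in the weighted product space could be implemented as follows (batched positive/negative pairs; the function name is ours):

```python
import torch

def pull_push_loss(dist_pos, dist_neg, margin: float = 0.4):
    """L_P = sum over pairs of [m + d^2(i, j) - d^2(i, k)]_+ .

    dist_pos : distances d(i, j) for positive user-item pairs,  shape (B,)
    dist_neg : distances d(i, k) for sampled negative items,    shape (B,)
    Distances are assumed to come from the weighted product space,
    d^2(x, y) = sum_k s_k * d_k^2(x_k, y_k).
    """
    return torch.clamp(margin + dist_pos ** 2 - dist_neg ** 2, min=0.0).sum()
```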
Model performance
Table <ref> reports the H@10 and N@10 scores for two different datasets, considering the number of factors d ∈{32, 64}.
Our experiments demonstrate that, overall, CML and HyperML achieve better results with the weighted product manifolds than in hyperbolic space alone, highlighting the advantage of scoring sub-manifolds when modeling the distance between users and items.
§.§ Performance on word similarity task
We evaluated our model's performance on applications that require an understanding of the underlying manifold structure. To conduct our experiment, we trained word embeddings on the Word Similarity (WS-353) benchmark dataset, following the methodology established in previous works such as <cit.>. Our implementation is based on hyperbolic skip-gram embeddings from <cit.>.
Setup
For our setup, we utilized the standard skip-gram model <cit.> and extended the loss function to a generic objective suitable for arbitrary manifolds, using a variant of the objective used in <cit.>.
Specifically, given a word u and a target w with label y=1 if w is a context word for u and y=0 if it is a negative sample, our model is represented by P(y | w, u)=σ((-1)^1-y(-cosh(d(α_u, γ_w))+θ)).
Word similarity
To measure the effectiveness of our model, we evaluated its performance on the WS-353 dataset using the Spearman rank correlation ρ between our scores and annotated ratings.
We obtained the dataset from <cit.>, and the results of our experiment are presented in Table <ref>.
Our model outperformed the hyperbolic word embeddings of <cit.> and the product space (PM) in all dimension settings.
§ CONCLUSIONS
Real-world data often possess intricate geometric structures that are challenging to capture by embedding into spaces with uniform curvature.
To address this issue, we propose a method that partially extracts the topology information from the input data to update the embedding vectors and curvature of each subspace.
Our motivation is that graphs are constructed by combining simple structure topologies, such as trees, cycles, and stars.
Our approach introduces a data-driven method of weighted product spaces for learning better representations.
Our empirical experiments on synthetic and real-world datasets demonstrate that our framework enhances the embedding quality of input graphs with varying structures and improves the performance of the downstream tasks.
§ ADDITIONAL BACKGROUND
Riemannian Geometry
Let ℳ^n be a smooth manifold in n-dimensional space, where ℳ^n is locally approximated by an n-dimensional Euclidean tangent space T_pℳ at p ∈ℳ.
The pair (ℳ, g) is called a Riemannian manifold if ℳ is equipped with a positive-definite metric tensor g that satisfies certain conditions.
Geodesics are the shortest-distance paths on manifolds, and the metric tensor g is integrated along the geodesic to compute distances on a Riemannian manifold.
The exponential map exp_p: T_p ℳ→ℳ and logarithmic maps log_p: ℳ→ T_p ℳ are two common bijections defined on the manifold ℳ.
A formal introduction to Riemannian manifolds can be found in <cit.>.
Product manifolds
Consider a sequence of smooth Riemannian manifolds ℳ_1, ℳ_2, …, ℳ_k.
Each ℳ_i can be a positive (spherical), zero (Euclidean), or negative (hyperbolic) curvature space.
The product manifold is defined as the Cartesian product ℳ = ℳ_1 ×ℳ_2 ×…×ℳ_k.
We write a point p ∈ℳ through their coordinates p=(p_1, …, p_k), p_i ∈ℳ_i. Similarly, a tangent vector v ∈ T_p ℳ can be written as (v_1, … , v_k) : v_i ∈ T_p_iℳ_i.
Gradient descent on manifolds requires the notion of taking steps.
This step can be performed in the tangent space and transferred to the manifold via the logarithmic map, and exponential map <cit.>.
The product space is also equipped with a distance function. The squared distance between points x, y ∈ℳ is defined as: d_P^2(x, y)=∑_i=1^k d_i^2(x_i, y_i).
§ CURVATURE ESTIMATION ON GRAPH DATA
Curvature estimation on simple graphs
There are three commonly used definitions for local graph curvature: Ollivier-Ricci <cit.>, Forman-Ricci <cit.>, and sectional curvature <cit.>.
In this paper, we use sectional curvature for estimating the geometric structures of graphs.
Sectional curvature is determined by geometric triangle properties as follows.
Theorem 1: Recall from <cit.> that in a space of constant curvature, if abc is a geodesic triangle and m is the midpoint of bc, then d(a,m)^2 + d(b,c)^2/4 - (d(a,b)^2 + d(a,c)^2)/2 is equal to zero when the underlying space is Euclidean, positive in spherical space, and negative in hyperbolic space.
Proof:
We provide a proof of Theorem 1. Let x = d(a,m), y = d(a,b), t = d(a,c), and z = d(b,c)/2 = d(b,m) = d(m,c), and let α_1 = ∠ amb and α_2 = ∠ amc.
A = d(a,m)^2 + d(b,c)^2/4 - (d(a,b)^2 + d(a,c)^2)/2
= x^2 + z^2 - y^2/2 - t^2/2
= 1/2 (2x^2 + 2z^2 - y^2 - t^2)
= 1/2 [(x^2 +z^2 - y^2) + (x^2 + z^2 - t^2)]
= 1/2 [2xz cosα_1 + 2 xz cosα_2]
= xz (cosα_1 + cosα_2)
In the step introducing cosα_1 and cosα_2, we apply the law of cosines [https://en.wikipedia.org/wiki/Law_of_cosines] in the triangles abm and acm.
We have three cases:
* cosα_1 + cosα_2 = 0: α_1 and α_2 are supplementary angles, i.e., α_1 + α_2 = 180°. Then the underlying space is Euclidean.
* Similarly, it will be negative in hyperbolic and positive in the spherical curvature space.
Curvature estimation on graph data
Given theorem (1), let v be a node in G; b, c neighbors of v and a any other node.
Then, the sectional curvature at a node v with neighbors b, c is defined as 1/(|V|-3) ∑_a ∈ G ∖{v, b, c}ξ_G(v ; b, c ; a), where
ξ_G(v ; b, c ; a) = 1/(2 d_G(a, v)) (d_G(a, v)^2 + d_G(b, c)^2/4 - (d_G(a, b)^2 + d_G(a, c)^2)/2)
and the factor 2 d_G(a, v) is included to yield the right scaling for trees and cycles.
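A small sketch of this estimator using NetworkX shortest-path distances is given below (assuming an unweighted, connected graph; the function names are ours):

```python
import networkx as nx

def xi(dist, v, b, c, a):
    """xi_G(v; b, c; a) computed from shortest-path distances."""
    return (dist[a][v] ** 2
            + dist[b][c] ** 2 / 4.0
            - (dist[a][b] ** 2 + dist[a][c] ** 2) / 2.0) / (2.0 * dist[a][v])

def node_curvature(G, v, b, c):
    """Average of xi over all reference nodes a, i.e. the 1/(|V|-3) sum above."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    others = [a for a in G.nodes if a not in (v, b, c)]
    return sum(xi(dist, v, b, c, a) for a in others) / len(others)

# e.g. node_curvature(nx.star_graph(4), 0, 1, 2) gives -1 at the star's center
```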
Next, we estimate the curvature of some typical topology graph structures.
Star 𝐒_n is created from one central node and n leaves. For n ≥ 3, the local curvature at the center node v with two neighbors b, c is -1.
Tree 𝐓_b is the finite-depth tree with branching factor b ≥ 2. The sectional curvature on the tree lies in the range ξ(T) ∈ [-1, 0].
Cycles graph 𝐂_n with n ≥ 4. If n is even, then ξ_C_n(v; b,c;a) = 0 for all points except the one diametrically opposite to v for which have ξ_C_n(v; b,c;a) = 1.
If n is odd, then for two points we have ξ_C_n(v; b,c;a) = n/2(n-1).
As a result, ξ(C_n) = 1/n-3 for even n and ξ(C_n) = n/(n-1)(n-3) for odd n.
Distortion error on simple graphs
We have demonstrated the limitations of using a single curvature approach to embed graphs with varying topologies.
To investigate the impact of curvature spaces on the quality of embedding spaces, we conducted experiments on three synthetic datasets with specific structures, including trees, circles, and rings of trees (Table <ref>).
Figure <ref> shows the distortion error results for Cycle and Tree graphs.
Our findings suggest that different graph structures require corresponding curvature spaces for optimal embedding quality.
For instance, spherical space (positive curvature) provides the least distortion error for cycle-like datasets (from 𝐒_3 to 𝐒_50), while hyperbolic spaces (negative curvature) give a minimal error for tree-like datasets (from 𝐇_3 to 𝐇_50).
All three models show some advancements compared to others in certain cases.
However, the overall distortions achieved are significantly higher than when using hyperbolic space with tree-like or spherical space with circle-like data.
For example, the distortion error on the Cycle tree is 0.09 compared to 0.02 on H_10 with Cycle data and 0.042 on S_5 with simple Tree data.
Therefore, using a product of individual spaces can improve the accuracy of embedding data with a mixture of structures.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ Graph reconstruction task
Datasets
The synthetic datasets we use are small graphs with 40 nodes that are designed to have specific geometric structures, including a circle, a tree, and a ring of trees.
To assess the effectiveness of our approach on larger and more complex graphs, we also use three benchmark datasets: CsPhD <cit.>, Power <cit.>, and Cities <cit.>.
The CsPhD dataset consists of 1025 nodes and 1043 edges, while the Power dataset contains 4941 nodes and 6594 edges. Additionally, the Cities dataset has 312 nodes and 48516 edges.
Baselines
We compare the distortion error of node embeddings on both synthetic and benchmark datasets between our proposed model and the product spaces (PM) <cit.> method.
Metrics
We use two standard metrics to measure the quality of embeddings: average distortion D_avg and mean average precision mAP.
D_avg is a global metric that considers all the exact distance values.
Let G = (V, E) be a graph and let node a ∈ V have a neighborhood 𝒩_a = {b_1, ⋯, b_deg(a)}, where deg(a) is the degree of a.
In the embedding f, define R_a, b_i to be the smallest ball around f(a) that contains b_i, which means R_a, b_i is the smallest set of nearest points required to retrieve the i-th neighbor of a in f.
Thus, mAP = 1/|V|∑_a ∈ V 1/deg(a)∑_i = 1^|𝒩_a| |𝒩_a ∩ R_a, b_i|/|R_a, b_i|. mAP is a ranking-based measure for local neighborhoods, and it does not track exact distances like D_avg.
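An illustrative NumPy implementation of this mAP measure (ignoring distance ties, which would need a convention in practice) is:

```python
import numpy as np

def mean_average_precision(d_embed, adjacency):
    """mAP of an embedding: ranking-based fidelity of local neighbourhoods.

    d_embed   : (n, n) pairwise distances in the embedding space
    adjacency : (n, n) boolean adjacency matrix of the graph
    """
    n = d_embed.shape[0]
    total = 0.0
    for a in range(n):
        neighbours = np.flatnonzero(adjacency[a])
        if neighbours.size == 0:
            continue
        order = np.argsort(d_embed[a])            # nodes sorted by distance to a
        order = order[order != a]                 # exclude a itself
        rank = {node: r + 1 for r, node in enumerate(order)}
        ap = 0.0
        for b in neighbours:
            R = order[:rank[b]]                   # smallest ball around f(a) containing b
            ap += np.isin(R, neighbours).sum() / rank[b]
        total += ap / neighbours.size
    return total / n
```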
§.§ Additional information for Recommendation task
Metrics
We use two measures Hit Ratio (H) <cit.> and Normalized Discounted Cumulative Gain (N) <cit.> to examine the predictive ability of these models.
The final H@k and N@k are averaged on all users' H@k and N@k scores.
We choose k = 10 to evaluate the model.
Datasets We perform experiments on two popular datasets, MovieLens-1M and LastFM-20K. The LastFm dataset <cit.> is obtained from a music website[http://millionsongdataset.com/lastfm/]. It is preprocessed to have 1892 users and 17632 music records. The MovieLens-1M is created from 6040 users and 3706 movies.
Baselines
We consider the works below as the baselines for our model: CML <cit.> and HyperML <cit.>.
For specific, CML <cit.> investigates the relationship between metric learning and collaborative filtering.
It proposes a method that learns a joint metric space capable of encoding not only users' preferences but also the similarity between users and items.
HyperML <cit.> presents the connection between metric learning in hyperbolic space and collaborative filtering by exploring hyperbolic geometry.
HyperML-PM is our extension of HyperML in the product of model space.
HyperML-WPM (Our) is our extension of HyperML in the product of model spaces with the gating mechanism.
§.§ Additional information for Knowledge graph embedding
Metrics
The performance of various models is evaluated using two standard metrics: mean reciprocal rank (MRR) and hit rate (HR@3).
Datasets
We used two standard datasets, WN18RR <cit.> and FB15K-237 <cit.>, for our analysis. WN18RR is derived from WordNet, a lexical database of semantic relations between words. FB15K-237 is a subset of the Freebase knowledge graph, which is a comprehensive resource containing general information.
Table <ref> shows the statistics of the two datasets.
Objective function
Given a knowledge graph 𝒢 with a set of entities ℰ and a set of relation ℛ. Each triplet (h,r,t) ∈𝒢 is included by head entity h, tail entity t, and the relation r ∈ℛ between them.
Prior works propose RotE <cit.> in Euclidean space and RotH <cit.> in hyperbolic space. In this work, we extend these models to the product of different curvature spaces. Formally, entities h, t are represented by vectors 𝐞_h, 𝐞_t ∈ℝ^b, and the relation r is represented by two translation vectors α_r, β_r ∈ℝ^b and a rotation vector γ_r ∈ℝ^b. The head entity is translated twice via the Möbius addition operation and rotated once.
Q(h,r)= Rot(exp_0^c(𝐞_h) ⊕_c exp_0^c(α_r), γ_r) ⊕_c exp_0^c (β_r)
with c > 0, where exp_0^c is the exponential map at the origin and Rot is a rotation function parameterized by the rotation vector γ_r.
According to the above definition, for each triple (h,r,t), we define the distance function as:
d_r(h, t) = √(d_ℳ_c^2 (Q(h,r), exp_0^c(e_t)))
where ℳ_c is the product of curvature manifolds. In <cit.>, the distance function of RotatE for the triple (h,r,t) is defined as: d_r(h, t) = ‖ h⊙r - t‖
The final negative sampling loss is defined by the cross-entropy loss:
ℒ =∑_(h,r,t) ∈Ωlog(1+ exp(-Y_(h,r,t) d_r(h,t)))
where Y_(h,r,t)∈{1, -1} is a binary label indicating whether a triplet is real or not.
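To make the scoring concrete, the sketch below instantiates Q(h,r) and d_r(h,t) for a single Poincaré-ball component of curvature -c, using block-diagonal 2-D rotations; the helper names are ours, and the full model would sum the squared distances over all weighted components:

```python
import torch

def mobius_add(x, y, c):
    """Möbius addition on the Poincaré ball of curvature -c (c > 0)."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den.clamp(min=1e-15)

def expmap0(v, c):
    """Exponential map at the origin: tangent vector -> point on the ball."""
    norm = v.norm(dim=-1, keepdim=True).clamp(min=1e-15)
    return torch.tanh(c ** 0.5 * norm) * v / (c ** 0.5 * norm)

def rotate(x, theta):
    """Rotate consecutive coordinate pairs of x by the angles in theta (b/2 angles)."""
    x = x.reshape(*x.shape[:-1], -1, 2)
    cos, sin = torch.cos(theta), torch.sin(theta)
    out = torch.stack([cos * x[..., 0] - sin * x[..., 1],
                       sin * x[..., 0] + cos * x[..., 1]], dim=-1)
    return out.reshape(*out.shape[:-2], -1)

def triple_distance(e_h, e_t, alpha_r, beta_r, theta_r, c):
    """d_r(h, t) for one component: distance between Q(h, r) and exp_0(e_t)."""
    q = mobius_add(rotate(mobius_add(expmap0(e_h, c), expmap0(alpha_r, c), c), theta_r),
                   expmap0(beta_r, c), c)
    t = expmap0(e_t, c)
    diff = mobius_add(-q, t, c).norm(dim=-1)
    return 2.0 / c ** 0.5 * torch.atanh((c ** 0.5 * diff).clamp(max=1 - 1e-7))
```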
Baselines
RotatE <cit.> is a knowledge graph embedding that is used to learn the representations of entities and relations in knowledge graphs.
RotatH is the extension of RotatE <cit.> in the hyperbolic space.
Product-RotatH is the extension of RotatE in the product of the hyperbolic spaces <cit.>.
SwisE <cit.> uses a learned gating mechanism to choose the component spaces for knowledge graph embedding.
-Rotat is our extension that uses the weighted product of manifolds to represent the relations among entities in the knowledge graph.
§.§ Additional information for Node Classification and Link Prediction
Metrics We utilize ROC AUC as a metric to evaluate the performance of Link Prediction (LP), whereas we rely on the F1 score to assess the Node Classification (NC) performance. In both cases, a higher score indicates better performance.
Datasets In this experiment, we evaluate model performance on the two different benchmark datasets.
DISEASE is the dataset of Infectious diseases from Oxford University <cit.>.
AIRPORT: is the dataset of airline routes from OpenFlight.org. Each node represents an airport, and the edge represents airline routes among these airports.
Detailed information regarding these datasets is provided in Table <ref>.
Baselines We evaluate the contributions of our proposed model by measuring the F1 and AUC scores on two datasets, compared with five different baseline models:
MLP and Hyperbolic-MLP are two variants of multilayer perceptron (MLP) classifiers operating on the Euclidean (𝐄) and hyperbolic space (𝐇), respectively.
HGCN <cit.> is an extension of graph convolutional networks (GCNs) to hyperbolic geometry.
Product-HGCN <cit.> extends GCNs in the product of hyperbolic geometries.
Mix-GCN <cit.> extends GCNs in the product of hyperbolic, spherical, and Euclidean spaces.
Our proposed model (-GCN) extends GCNs with a gating mechanism in the product of different curvature spaces (H, E, S).
|
http://arxiv.org/abs/2307.03963v1 | 20230708122517 | An observational signature for extremal black holes | [
"Stefanos Aretakis",
"Gaurav Khanna",
"Subir Sabharwal"
] | gr-qc | [
"gr-qc",
"hep-th",
"math-ph",
"math.MP"
] | |
http://arxiv.org/abs/2307.04460v1 | 20230710101312 | Exploiting an External Microphone for Binaural RTF-Vector-Based Direction of Arrival Estimation for Multiple Speakers | [
"Daniel Fejgin",
"Simon Doclo"
] | eess.AS | [
"eess.AS",
"cs.SD",
"eess.SP"
] |
Deformations at Earth's dayside magnetopause during quasi-radial IMF conditions: Global kinetic simulations and soft X-ray imaging
Chi Wang
August 12, 2023
==================================================================================================================================
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2177/1 - Project ID 390895286 and Project ID 352015383 - SFB 1330 B2.

In hearing aid applications, an important objective is to accurately estimate the direction of arrival (DOA) of multiple speakers in noisy and reverberant environments. Recently, we proposed a binaural DOA estimation method, where the DOAs of the speakers are estimated by selecting the directions for which the so-called Hermitian angle spectrum between the estimated relative transfer function (RTF) vector and a database of prototype anechoic RTF vectors is maximized. The RTF vector is estimated using the covariance whitening (CW) method, which requires a computationally complex generalized eigenvalue decomposition. The spatial spectrum is obtained by only considering frequencies where it is likely that one speaker dominates over the other speakers, noise and reverberation. In this contribution, we exploit the availability of an external microphone that is spatially separated from the hearing aid microphones and consider a low-complexity RTF vector estimation method that assumes a low spatial coherence between the undesired components in the external microphone and the hearing aid microphones. Using recordings of two speakers and diffuse-like babble noise in acoustic environments with mild reverberation and low signal-to-noise ratio, simulation results show that the proposed method yields a DOA estimation performance comparable to the CW method at a lower computational complexity.
§ INTRODUCTION
In speech communication applications such as hearing aids, methods for estimating the direction of arrival (DOA) of multiple speakers are often required. To solve this estimation task, (deep) learning-based and model-based methods are continuously developed and advanced <cit.>. However, only few methods exploit the availability of external mobile devices equipped with microphones <cit.>, although wirelessly linking hearing aids to these devices has become increasingly popular <cit.>.
Recently, we proposed relative-transfer-function (RTF) vector-based DOA estimation methods for a single speaker in <cit.>, without relying on the external microphone to be close to the target speaker and capturing only little noise or reverberation as in <cit.>. We estimated the DOA as the direction that maximized the similarity between the estimated RTF vector and a database of prototype anechoic RTF vectors for different directions in terms of a frequency-averaged distance function.
However, the methods in <cit.> considered only a single speaker. To address DOA estimation for multiple speakers, we introduced the so-called frequency-averaged Hermitian angle spectrum from which the DOAs were estimated as the directions corresponding to the peaks of this spatial spectrum (throughout the paper, we refer to a direction-dependent similarity score as a spatial spectrum) <cit.>. Opposed to <cit.>, the spatial spectrum was constructed from time-frequency (TF) bins where one speaker was assumed to be dominant over all other speakers, noise, and reverberation, solely.
Estimation of the RTF vector of a speaker from noisy microphone signals can be accomplished using, e.g., the state-of-the-art covariance whitening (CW) method <cit.> or the spatial coherence (SC) method <cit.>. Despite the effectiveness of the CW method and the possibility to apply the method using only the head-mounted microphone signals or all available signals, such a computationally expensive method (due to the inherent generalized eigenvalue decomposition) is less desirable than methods with a lower computation complexity for resource-constrained applications like hearing aids. Opposed to the CW method, the SC method requires an external microphone but does not perform expensive matrix decompositions. The SC method relies on the assumption of a low spatial coherence between the undesired component in one of the microphone signals and the undesired components in the remaining microphone signals. As shown in <cit.>, this assumption holds quite well, for example, when the distance between the external microphone and the head-mounted microphones is large enough and the undesired component is spatially diffuse-like.
In this paper, we propose to construct the frequency-averaged Hermitian angle spectrum for DOA estimation for multiple speakers using the computationally inexpensive SC method. We compare the DOA estimation accuracy when estimating the RTF vector using the SC method or the CW method in a reverberant acoustic scenario with diffuse-like babble noise. Experimental results show for multiple positions of the external microphone that estimating the RTF vector with the SC method yields a DOA estimation accuracy that is comparable to the CW method at a lower computational complexity.
§ SIGNAL MODEL AND NOTATION
We consider a binaural hearing aid setup with M microphones, i.e., M/2 microphones on each hearing aid, and one external microphone that is spatially separated from the head-mounted microphones and can be located at an arbitrary position, i.e., M +1 microphones in total. We consider an acoustic scenario with J simultaneously active speakers with DOAs θ_1:J (in the azimuthal plane) in a noisy and reverberant environment, where J is assumed to be known. In the short-time Fourier transform (STFT) domain, the m-th microphone signal can be written as
Y_m(k,l) = ∑_j=1^JX_m,j(k,l) + N_m(k,l) ,
where m ∈{1,…,M+1} denotes the microphone index, k∈{1,…,K} and l∈{1,…,L} denote the frequency bin index and the frame index, respectively, and X_m,j(k,l) and N_m(k,l) denote the j-th speech component and the noise component in the m-th microphone signal, respectively. For conciseness, we will omit the frequency bin index k and the frame index l in the remainder of this paper wherever possible. Assuming sparsity in the STFT domain and one dominant speaker (indexed by j=d) per TF bin <cit.>, and stacking all microphone signals in an (M+1)-dimensional vector 𝐲 = [Y_1,…, Y_M+1]^T, where (·)^T denotes transposition, the vector 𝐲 is given by
𝐲 = ∑_j=1^J 𝐱_j + 𝐧 ≈ 𝐱_d + 𝐧 ,
with 𝐱_j, 𝐱_d, and 𝐧 defined similarly as 𝐲.
Choosing the first microphone as the reference microphone (without loss of generality) and assuming that the speech component 𝐱_d of the dominant speaker can be decomposed into a direct-path component 𝐱_d^DP and a reverberant component 𝐱_d^rev, 𝐱_d can be written as
𝐱_d = 𝐱_d^DP + 𝐱_d^rev = 𝐠_d X_1,d^DP + 𝐱_d^rev ,
where
𝐠_d = [1, G_2,…, G_M+1]^T
denotes the extended (M+1)-dimensional direct-path RTF vector and X_1,d^DP denotes the direct-path speech component of the dominant speaker in the reference microphone. The M-dimensional head-mounted direct-path RTF vector 𝐠_H_d corresponding to the head-mounted microphone signals can be extracted from 𝐠_d as
𝐠_H_d = 𝐄_H 𝐠_d , 𝐄_H = [𝐈_M× M, 0_M] ,
where 𝐄_H denotes the (M× M+1)-dimensional selection matrix for the head-mounted microphone signals with 𝐈_M× M denoting an (M× M)-dimensional identity matrix and 0_M denoting an M-dimensional vector of zeros. Both RTF vectors 𝐠_d and 𝐠_H_d encode the DOA of the dominant speaker. However, the extended RTF vector 𝐠_d depends on the (unknown) position of the external microphone, whereas the head-mounted RTF vector 𝐠_H_d with fixed relative positions of the head-mounted microphones (ignoring small movements of the hearing aids due to head movements) does not depend on the position of the external microphone. Hence, for DOA estimation, we will only consider the head-mounted RTF vector 𝐠_H_d.
The noise and reverberation components are condensed into the undesired component 𝐮 = 𝐱_d^rev + 𝐧 such that 𝐲 ≈ 𝐠_d X_1,d^DP + 𝐮.
Assuming uncorrelated direct-path speech and undesired components, the covariance matrix of the noisy microphone signals can be written as
Φ_y = ℰ{𝐲𝐲^H} = Φ_x_d + Φ_u ,
with
Φ_x_d = φ_x_d 𝐠_d 𝐠_d^H , Φ_u = ℰ{𝐮𝐮^H} ,
where (·)^H and ℰ{·} denote the complex transposition and expectation operator, respectively. Φ_x_d and Φ_u denote the covariance matrices of the direct-path dominant speech component and undesired component, respectively, and φ_x_d = ℰ{| X_1,d^DP|^2} denotes the power spectral density of the direct-path dominant speech component in the reference microphone.
§ RTF-VECTOR-BASED DOA ESTIMATION
In this section, we review the RTF-vector-based DOA estimation method proposed in <cit.> that is based on finding the directions corresponding to the peaks of the spatial spectrum called frequency-averaged Hermitian angle spectrum.
To estimate the DOAs θ_1:J of the speakers from the estimated head-mounted[As previously stated, we only consider the estimated head-mounted RTF vector for DOA estimation and not the extended RTF vector that depends both on the speaker DOA and the (unknown) position of the external microphone.] RTF vector 𝐠̂_H_d(k,l), the estimated head-mounted RTF vector 𝐠̂_H_d(k,l) is compared to a database of prototype anechoic RTF vectors 𝐠̅_H(k,θ_i) for several directions θ_i, i=1,…, I, using the Hermitian angle <cit.> as a measure of dissimilarity, i.e.,
p(k,l,θ_i) = h(𝐠̂_H_d(k,l), 𝐠̅_H(k,θ_i)) ,
h(𝐠̂,𝐠̅) = arccos(|𝐠̅^H𝐠̂| / (‖𝐠̅‖_2 ‖𝐠̂‖_2)) .
These prototype anechoic head-mounted RTF vectors can be obtained, e.g., via measurements using the same microphone array configuration as used during the actual source localization or using spherical diffraction models <cit.>.
Accounting for the disjoint activity of the speakers in the STFT domain and aiming at including only TF bins where the estimated head-mounted RTF vector 𝐠̂_H_d(k,l) is a good estimate for the direct-path RTF vector 𝐠_H_d in (<ref>) (of one of the speakers), the narrowband spatial spectrum (<ref>) is integrated over a set 𝒦(l) of selected frequency bins, where it is likely that one speaker dominates over all other speakers, noise, and reverberation <cit.>, i.e.,
P(l,θ_i)=-∑_k∈𝒦(l)p(k,l,θ_i) .
Based on the usage of the Hermitian angle for the construction of (<ref>), the spatial spectrum in (<ref>) is called the frequency-averaged Hermitian angle spectrum. The DOAs θ_1:J(l) are estimated by selecting the directions corresponding to the J peaks of this spatial spectrum (assuming J to be known).
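An illustrative NumPy sketch of the spectrum construction and peak picking for one frame is shown below (not part of the original evaluation code; frequency-dependent prototype RTF vectors are assumed, and taking the J largest values stands in for proper peak picking):

```python
import numpy as np

def hermitian_angle(g_est, g_proto):
    """h(g_est, g_proto) = arccos(|g_proto^H g_est| / (||g_proto|| ||g_est||))."""
    num = np.abs(np.vdot(g_proto, g_est))
    den = np.linalg.norm(g_proto) * np.linalg.norm(g_est) + 1e-12
    return np.arccos(np.clip(num / den, 0.0, 1.0))

def estimate_doas(g_est, prototypes, selected_bins, n_speakers):
    """Frequency-averaged Hermitian angle spectrum P(l, theta_i) and its J peaks.

    g_est      : (K, M) estimated head-mounted RTF vectors for one frame
    prototypes : (I, K, M) prototype anechoic RTF vectors (direction x frequency)
    """
    n_dirs = prototypes.shape[0]
    P = np.zeros(n_dirs)
    for k in selected_bins:
        for i in range(n_dirs):
            P[i] -= hermitian_angle(g_est[k], prototypes[i, k])
    # simple peak picking: take the J directions with the largest spectrum values
    return np.argsort(P)[-n_speakers:]
```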
In the context of DOA estimation, coherence-based quantities such as the coherent-to-diffuse ratio (CDR) are a common criterion for frequency subset selection <cit.>. The usage of the CDR as a criterion for frequency subset selection can be motivated by the fact, that for higher values of the CDR at the respective TF bin it is more likely that a speaker dominates over all other speakers, noise, and reverberation at the respective TF bin. As in <cit.>, the subset 𝒦(l) is obtained using the coherent-to-diffuse ratio (CDR) criterion (<ref>), i.e.,
𝒦(l) = {k: CDR(k,l)≥CDR_thresh} ,
where the CDR is estimated as
CDR(k,l) = f(Γ_y,eff(k,l), Γ_u(k)) ,
with the CDR-functional f defined in (<ref>) for a single microphone pair comprising the microphones m=i and m=j <cit.>. The arguments of the function in (<ref>) are the estimated coherence Γ_y,i,j of the noisy signal
Γ_y_i,j(k,l)= Φ̂_y_i,j(k,l)/√(Φ̂_y_i,i(k,l) Φ̂_y_j,j(k,l))
with Φ̂_y_i,j denoting an estimate of the (i,j)-th element of the covariance matrix of the noisy microphone signals and a model Γ_u,i,j of the coherence of the undesired component. To consider more than just a single microphone pair for the estimation of the CDR, the coherence of the noisy signals between multiple microphone pairs (denoted as the microphone set ℳ) between the left and the right hearing aid is averaged prior to evaluating the CDR-functional in (<ref>), resulting in the binaural effective coherence <cit.>, i.e.,
Γ_y,eff(k,l) = 1/|ℳ|∑_i,j ∈ℳΓ_y_i,j(k,l) ,
Thus, the binaural effective coherence represents the average coherence between the head-mounted microphone signals. Due to the arbitrary position of the external microphone, we consider only the head-mounted microphones (with fixed relative positions) for the estimation of the binaural effective coherence Γ_y,eff(k,l).
To model the coherence of the undesired component for the estimation of the CDR in (<ref>) between the head-mounted microphone signals, head shadow effects need to be included. Assuming a diffuse sound field for both the noise and reverberation component, a modified sinc-model <cit.> is employed, i.e.,
Γ_u(k) = (αω_kr/c) 1/√(1 + (βω_kr/c)^4) ,
where ω_k denotes the discrete angular frequency, r denotes the distance between the microphones of left and right hearing aid which is approximated as the diameter of a head, c denotes the speed of sound, and α=0.5 and β=2.2 denote empirically determined parameters of the modified sinc-model.
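The bin selection could be sketched as follows; since the CDR functional f is only referenced above, it is passed in as a user-supplied function, and the remaining names are ours:

```python
import numpy as np

def diffuse_coherence(freqs, r, alpha=0.5, beta=2.2, c=343.0):
    """Modified sinc model for the binaural coherence of a diffuse sound field."""
    w = 2.0 * np.pi * np.asarray(freqs)          # freqs in Hz
    return np.sinc(alpha * w * r / (np.pi * c)) / np.sqrt(1.0 + (beta * w * r / c) ** 4)

def select_bins(Phi_y, pairs, Gamma_u, cdr_fn, threshold):
    """Return the set K(l) of frequency bins whose estimated CDR exceeds the threshold.

    Phi_y  : (K, M, M) estimated noisy covariance matrices for one frame
    pairs  : list of (i, j) microphone pairs between the left and right hearing aid
    cdr_fn : CDR functional f(noisy coherence, noise coherence model)
    """
    selected = []
    for k in range(Phi_y.shape[0]):
        coh = [Phi_y[k, i, j] / np.sqrt(Phi_y[k, i, i].real * Phi_y[k, j, j].real)
               for (i, j) in pairs]
        gamma_eff = np.mean(coh)                 # binaural effective coherence
        if cdr_fn(gamma_eff, Gamma_u[k]) >= threshold:
            selected.append(k)
    return selected
```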
In this paper we compare the influence of different RTF vector estimation methods on constructing the frequency-averaged Hermitian angle spectrum in (<ref>). In <cit.> no external microphone was used and therefore the DOAs were estimated from the spatial spectrum as in (<ref>) constructed from head-mounted RTF vectors that were estimated using the CW method as in (<ref>), i.e.,
P^(CW)(l,θ_i) = -∑_k∈𝒦(l) h(𝐠̂_H_d^(CW)(k,l), 𝐠̅_H(k,θ_i)) .
In this paper, we propose to exploit the availability of the external microphone and estimate the DOAs from the spatial spectrum as in (<ref>), constructed from head-mounted RTF vectors that are estimated using the SC method as in (<ref>), i.e.,
P^(SC)(l,θ_i) = -∑_k∈𝒦(l) h(𝐠̂_H_d^(SC)(k,l), 𝐠̅_H(k,θ_i))
A summary on the covariance whitening (CW) method <cit.> and the spatial coherence (SC) method <cit.> is provided in the next section.
§ RTF VECTOR ESTIMATION
In order to estimate the DOAs of multiple speakers, a frequency-averaged Hermitian angle spectrum is constructed, which assesses the similarity between the estimated M-dimensional head-mounted RTF vector and a database of prototype anechoic RTF vectors for different directions. In this section, we review two RTF vector estimation methods. The computationally expensive state-of-the-art covariance whitening (CW) method <cit.> is summarized in Section <ref>. The computationally inexpensive spatial coherence (SC) method <cit.> is discussed in Section <ref>.
§.§ Covariance whitening (CW)
To apply the CW method <cit.>, estimates Φ̂_y and Φ̂_u of the covariance matrices of the noisy signal and the undesired signal component are required. Based on these estimates, the head-mounted direct-path RTF vector can be estimated using only the head-mounted microphone signals as
𝐠̂_H_d^(CW) = f(𝐄_H Φ̂_y 𝐄_H^H, 𝐄_H Φ̂_u 𝐄_H^H) ,
f(Φ̌_y,Φ̌_u) = Φ̌_u^1/2 𝒫{Φ̌_u^-1/2 Φ̌_y Φ̌_u^-H/2} / (𝐞̌_1^T Φ̌_u^1/2 𝒫{Φ̌_u^-1/2 Φ̌_y Φ̌_u^-H/2}) ,
where 𝒫{·} denotes the principal eigenvector of a matrix, Φ̌_u^1/2 denotes a square-root decomposition (e.g., Cholesky decomposition) of the M̌-dimensional matrix Φ̌_u and 𝐞̌_1=[1,0,…,0]^T denotes an M̌-dimensional selection vector. Note that the head-mounted RTF vector can be estimated likewise from the head-mounted microphone signals and the external microphone signal together, via 𝐄_H f(Φ̂_y, Φ̂_u), differing in general from the estimate 𝐠̂_H_d^(CW) as in (<ref>). However, based on the results of <cit.> and <cit.>, we will consider only the estimate as in (<ref>) obtained from the head-mounted microphone signals only as no significant benefit in DOA estimation performance was reported when all microphone signals were used.
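A compact NumPy sketch of the CW estimator for a single frequency bin (using a Cholesky factor as the square-root decomposition; the function name is ours) is:

```python
import numpy as np

def rtf_cw(Phi_y, Phi_u):
    """Covariance-whitening RTF estimate for one frequency bin.

    Phi_y, Phi_u : (M, M) noisy and undesired covariance estimates
    (head-mounted microphones only); the result is relative to microphone 1.
    """
    L = np.linalg.cholesky(Phi_u)                 # Phi_u = L L^H
    L_inv = np.linalg.inv(L)
    whitened = L_inv @ Phi_y @ L_inv.conj().T     # whitened noisy covariance
    _, eigvecs = np.linalg.eigh(whitened)
    principal = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    g = L @ principal                             # de-whitening
    return g / g[0]                               # normalize to the reference microphone
```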
§.§ Spatial coherence (SC)
The SC method <cit.> requires an external microphone and relies on the assumption of a low spatial coherence between the undesired component U_M+1 in the external microphone signal and the undesired components U_m, m∈{1,…,M}, in the head-mounted microphone signals, i.e.
ℰ{U_m U_M+1^∗} ≈ 0 , m∈{1,…, M} .
As shown in <cit.>, this assumption holds quite well, for example, when the distance between the external microphone and the head-mounted microphones is large enough and the undesired component is spatially diffuse-like. Exploiting this assumption results in ℰ{Y_m Y_M+1^∗} = ℰ{X_m X_M+1^∗}, m∈{1,…, M}; thus the RTF vector can be efficiently estimated without expensive matrix decompositions as
𝐠̂_d^(SC) = Φ̂_y 𝐞_M+1 / (𝐞_1^T Φ̂_y 𝐞_M+1) ,
with 𝐞_m denoting an (M+1)-dimensional selection vector selecting the m-th element; the head-mounted estimate 𝐠̂_H_d^(SC) used in (<ref>) is obtained by selecting the first M elements, i.e., 𝐠̂_H_d^(SC) = 𝐄_H 𝐠̂_d^(SC).
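The corresponding SC estimator reduces to reading out one column of the noisy covariance matrix, as sketched below (the channel ordering — reference microphone first, external microphone last — and the function name are assumptions for illustration):

```python
import numpy as np

def rtf_sc(Phi_y):
    """Spatial-coherence RTF estimate for one frequency bin.

    Phi_y : (M+1, M+1) noisy covariance estimate; channel 0 is the reference
    microphone and the last channel is the external microphone. Under the
    low-coherence assumption, its last column contains only speech terms.
    """
    col = Phi_y[:, -1]          # Phi_y e_{M+1}
    g_ext = col / col[0]        # extended RTF vector estimate
    return g_ext[:-1]           # head-mounted part used for DOA estimation
```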
§ EXPERIMENTAL RESULTS
Applying the CW and SC method for RTF vector estimation, in this section we compare the DOA estimation performance when using the SC-based frequency-averaged Hermitian angle spectrum as in (<ref>) against the DOA estimation performance when using the CW-based frequency-averaged Hermitian angle spectrum as in (<ref>). We evaluate the methods with recorded signals for an acoustic scenario with two static speakers in a reverberant room with diffuse-like babble noise. The experimental setup and implementation details of the algorithms are described in Section <ref>. The results in terms of localization accuracy are presented and discussed in Section <ref>.
§.§ Experimental setup and implementation details
For the experiments we used signals that were recorded in a laboratory at the University of Oldenburg with dimensions of about 7 m × 6 m × 2.7 m, where the reverberation time can be adjusted by means of absorber panels, which are mounted to the walls and the ceiling. The reverberation time was set to approximately T_60 ≈ 250 ms. Fig. <ref> depicts the experimental setup. A dummy head with a binaural hearing aid setup (M = 4) was placed approximately in the center of the laboratory. For this hearing aid setup a database of prototype anechoic RTF vectors is obtained from measured anechoic binaural room impulse responses <cit.> with an angular resolution of 5° (I = 72). A single external microphone was placed at four different positions (denoted as E1 - E4), which was not restricted to be close to a speaker. Two speakers from the EBU SQAM CD corpus <cit.> (male and female, English language) were played back via loudspeakers that were located at approximately 2 m distance from the dummy head. For the evaluation, all 72 pairs of DOAs of non-collocated speakers (each of the 9 DOAs in the range [-160°, -120°, …, 160°]) were considered. The speech signals were constantly active and had a duration of approximately 5. Diffuse-like noise was generated with four loudspeakers facing the corners of the laboratory, playing back different multi-talker recordings. The speech and noise components were recorded separately and were mixed at {-5, 0, 5} dB broadband signal-to-noise ratio (SNR) averaged over all head-mounted microphones of the hearing aid setup. All microphone signals were recorded simultaneously, hence neglecting synchronization and latency aspects.
The microphone signals were processed in the STFT domain using a 32 ms square-root Hann window with 50% overlap at a sampling frequency of 16 kHz. The covariance matrices Φ̂_y and Φ̂_u were estimated recursively during detected speech-and-noise and noise-only TF bins, respectively, using smoothing factors corresponding to time constants of 250 ms for Φ̂_y and 500 ms for Φ̂_u, respectively. The speech-and-noise TF bins were discriminated from noise-only TF bins based on the speech presence probability <cit.>, averaged and thresholded over all head-mounted microphone signals.
We assess the DOA estimation performance by averaging the localization accuracy over the considered DOA pairs and SNRs. For the localization accuracy we average the per-frame-accuracies over all frames, where we define the per-frame accuracy as
ACC(l) = j_correct(l)/J ,
with j_correct(l) denoting the number of speakers that are correctly localized within a range of ± 5^∘ in the l-th frame and J=2.
§.§ Results
Fig. <ref> depicts the average localization accuracies that are obtained from the spatial spectrum as in (<ref>), denoted by CW, and the accuracies obtained from the spatial spectrum as in (<ref>), denoted by SC-EX, where X stands for one of the four positions of the external microphone. To show the effectiveness of the subset selection, we considered two threshold values, CDR_thresh = -∞ (corresponding to selecting all frequencies) and CDR_thresh = 0, shown as blue bars and orange bars, respectively.
First, for every condition a large improvement in the localization accuracy of up to 11 percentage points due to the frequency subset selection can be observed. This result is in line with the results reported in <cit.>. Second, considering the spatial spectrum obtained from (<ref>), it can be observed that the position of the external microphone has a minor effect on the estimated DOA, resulting in localization accuracies in the range of 62–66% using a threshold value of CDR_thresh = 0. For the external microphone placed at positions E3 or E4, i.e., close to the loudspeakers playing back the noise, a slightly lower DOA estimation accuracy can be observed when comparing to the external microphone placed at positions E1 or E2. Third, comparing the DOA estimation performance when using the CW method against the SC method for estimating the head-mounted RTF vector, a difference of up to around 5–7 percentage points can be observed. Thus, the low-complexity SC method yields a DOA estimation performance for multiple speakers comparable to the CW method, which is in line with the single-speaker DOA estimation results reported in <cit.>.
§ CONCLUSIONS
Based on two RTF vector estimation methods, in this paper we compared the DOA estimation performance for multiple speakers for a binaural hearing aid setup with and without exploiting an external microphone. We did not restrict the position of the external microphone to be close to the target speaker. Estimating the RTF vector using either the CW method without exploiting the external microphone or using the SC method exploiting the external microphone, we constructed a frequency-averaged Hermitian angle spectrum from which the DOAs of the speakers were estimated as the directions that maximized the spatial spectrum. We evaluated the approach using simulations with recorded two-speaker scenarios in acoustic environments with mild reverberation and diffuse-like babble noise scaled to low SNRs for different positions of the external microphone. The results show that using the SC method for the construction of the frequency-averaged Hermitian angle spectrum yields a DOA estimation accuracy (62–66%) that is comparable to the CW method (≈70%) at a lower computational complexity.
|
http://arxiv.org/abs/2307.05585v1 | 20230710150855 | Speed and Acceleration of CMEs Associated with Sustained Gamma-Ray Emission Events Observed by Fermi/LAT | [
"P. Mäkelä",
"N. Gopalswamy",
"S. Akiyama",
"H. Xie",
"S. Yashiro"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.SR"
] |
Pertti Mäkelä
[email protected], [email protected]
0000-0002-0786-7307]Pertti Mäkelä
The Catholic University of America
620 Michigan Ave., N.E.
Washington, DC 20064, USA
NASA Goddard Space Flight Center
8800 Greenbelt Road
Greenbelt, MD 20771, USA
NASA Goddard Space Flight Center
8800 Greenbelt Road
Greenbelt, MD 20771, USA
The Catholic University of America
620 Michigan Ave., N.E.
Washington, DC 20064, USA
NASA Goddard Space Flight Center
8800 Greenbelt Road
Greenbelt, MD 20771, USA
The Catholic University of America
620 Michigan Ave., N.E.
Washington, DC 20064, USA
NASA Goddard Space Flight Center
8800 Greenbelt Road
Greenbelt, MD 20771, USA
The Catholic University of America
620 Michigan Ave., N.E.
Washington, DC 20064, USA
NASA Goddard Space Flight Center
8800 Greenbelt Road
Greenbelt, MD 20771, USA
The sustained gamma-ray emission (SGRE) from the Sun is a prolonged enhancement of >100 MeV gamma-ray emission that extends beyond the flare impulsive phase. The origin of the >300 MeV protons resulting in SGRE is debated, both flares and shocks driven by coronal mass ejections (CMEs) being the suggested sites of proton acceleration. We compared the near-Sun acceleration and space speed of CMEs with 'Prompt' and 'Delayed' (SGRE) gamma-ray components <cit.>. We found that 'Delayed'-component-associated CMEs have higher initial acceleration and space speed than 'Prompt-only'-component-associated CMEs. We selected halo CMEs (HCMEs) associated with type II radio bursts (shock-driving HCMEs) and compared the average acceleration and space speed between HCME populations with or without SGRE events, major solar energetic particle (SEP) events, metric, or decameter-hectometric (DH) type II radio bursts. We found that the SGRE-producing HCMEs associated with a DH type II radio burst and/or a major SEP event have higher space speeds and especially initial accelerations than those without an SGRE event. We estimated the radial distance and speed of the CME-driven shocks at the end time of the 2012 January 23 and March 07 SGRE events using white-light images of STEREO Heliospheric Imagers and radio dynamic spectra of Wind WAVES. The shocks were at the radial distances of 0.6–0.8 au and their speeds were high enough (≈975 km s^-1 and ≈750 km s^-1, respectively) for high-energy particle acceleration. Therefore, we conclude that our findings support the CME-driven shock as the source of >300 MeV protons.
§ INTRODUCTION
The sustained gamma-ray emission (SGRE) from the Sun is a prolonged enhancement of >100 MeV gamma-ray emission that extends beyond the flare impulsive phase. SGRE typically lasts for several hours, extending well beyond the end of the associated soft X-ray flare emission. The first SGRE event at energies above 100 MeV was detected on 1991 June 15 by the Gamma-1 telescope on board the Gamma spacecraft and it lasted at least 2.16 hours <cit.>. Similar observation of a long-duration >50 MeV gamma-ray emission was reported by <cit.> during the 1991 June 11 flare. The >100 MeV SGRE is produced by >300 MeV protons precipitating from the solar corona into the solar chromosphere, where their interactions with the dense plasma layers create pions, which then decay into the observed >100 MeV gamma-rays <cit.>. The dominant source of >100 MeV gamma-rays is neutral pion decay <cit.>. <cit.> first reported a clear detection of >40 MeV gamma-rays that require pion production during the extended phase of the 1982 June 3 gamma-ray flare.
SGRE events were originally called long duration gamma-ray flares (LDGRFs) <cit.>. Nowadays they are also known as late-phase >100 MeV gamma-ray emission <cit.> events. A review of gamma-ray observations analogous to Gamma-1 measurements by <cit.> listed 13 LDGRFs between 1982–1991. A few more early events have been discovered from observations by non-dedicated gamma-ray telescopes <cit.>. Most recently, observations by the Large Area Telescope <cit.> on board the Fermi satellite have shown that SGRE events are relatively common <cit.>. The >100 MeV SGRE event on 2012 March 7 was observed to last over 20 hours <cit.>.
The origin of the >300 MeV protons producing SGRE is still debated. <cit.> studied the 1982 June 03 event and suggested a two-phase particle acceleration scenario, where a short-duration impulsive-phase acceleration is followed by a second acceleration phase, probably due to protons accelerated by coronal shocks and resulting in SGREs <cit.>. <cit.> investigated the same 1982 June 03 gamma-ray flare and found a good agreement between their model of turbulent solar flare loops and the observed gamma-ray light curves, including the extended emission phase, which their model explained to be due to delayed protons diffusing both in momentum space and spatially in the flare loops. The flare loop scenario requires that flare-accelerated protons remain trapped and/or be continuously re-accelerated in the coronal loops long after the X-ray flare itself has ended. However, trapping of high-energy protons in coronal loops for several hours requires force-free loops <cit.> with a sufficiently low density and turbulence level <cit.>. As an alternative to particle trapping in the coronal loops, <cit.> suggested a continuous stochastic acceleration due to additional pulses of energy that could explain the gamma-ray observations during the extended phase of the 1991 June 15 LDGRF <cit.>.
Gamma-ray-line observations of the behind-the-limb flare on 1989 September 29 were interpreted to require a spatially extended gamma-ray source and hence to suggest shocks driven by fast and wide coronal mass ejections (CMEs) as a likely source of the gamma-ray-emission-producing particles <cit.>. Recent LAT observations of SGRE events during eruptions occurring behind the solar limb have confirmed that an extended source of gamma-rays must exist at the Sun <cit.>. The CME-driven shock naturally extends over large regions of the solar surface, allowing the shock-accelerated protons to have access to areas far from the behind-the-limb eruption site. <cit.> forward modelled the CME flux rope and the surrounding shock in the 2014 September 1 behind-the-limb event and found that the Fermi/LAT SGRE source was located far from the flare site, in the space between the flux rope and the shock, confirming the extended nature of the emission. <cit.> suggested another scenario in which closed magnetic loops extending up to heights of several solar radii capture high-energy protons that might be accelerated by a CME shock; the loops subsequently retract and enable a sufficiently large number of >300 MeV protons to interact with the solar atmosphere.
Recently, <cit.> compared the estimated fluxes of gamma-ray-producing particles precipitating into the solar atmosphere with the fluxes of solar energetic particles (SEPs) escaping into interplanetary space and did not find a significant correlation. They suggested that the lack of correlation rules out the CME-driven shock as a common source of both fluxes. However, <cit.> pointed out that the correlation is high when the systematic effects are corrected differently. <cit.> compared the SGRE time profile observed during the ground level enhancement (GLE) on 2017 September 10 with the time profiles of simulated shock parameters and found a good match between them, supporting the CME shock as a common source of the SGRE-producing protons at the Sun and the GLE protons at 1 au. <cit.> studied the properties of flares and CMEs with and without SGREs. They found that SGRE events are associated with intense X-class flares, but only one-third of the X-class solar flares Fermi/LAT observed have an SGRE event. They also note that fast and wide CMEs are associated with SGRE events. Therefore, their results on the flare and CME associations favor the CME-driven shock as the source of >300 MeV protons.
Additional support for the CME-shock scenario is provided by the correlation of the SGRE durations with the durations and the end frequencies of type II radio bursts <cit.>. Figure <ref> shows two examples of concurrent SGRE events and type II radio bursts, observed during the SGRE events in January and March 2012. Although type II radio bursts are produced by CME-shock-accelerated electrons, they indicate the presence of a strong shock that could also accelerate protons to high energies. Therefore, the correlations suggest that CME-driven shocks could be the source of both the electrons resulting in the decameter-hectometric (DH) type II radio bursts and the >300 MeV protons generating the SGRE events. <cit.> investigated the EUV wave connection to the behind-the-limb (BTL) flare at S20E140 on 2021 July 17. They found that the time when the EUV wave crosses the limb onto the visible disk and the onset of the LAT >100 MeV flux enhancement are concurrent. They also found a coupling between the peak times of the time derivative of the EUV wave intensity profile observed at 193 Å and the >100 MeV gamma-ray flux, suggesting that the EUV wave and the acceleration of the SGRE-producing protons are connected. They found the correlation to be valid in three other Fermi/LAT BTL flares. <cit.> conclude that the correlation between the derivative of the EUV wave intensity and the gamma-ray flux and the near-simultaneous appearance of a complex type II radio burst indicate that the radio, EUV, and gamma-ray emissions share the same source (the CME shock), although the emissions originate at different heights in the corona.
Back-precipitation of shock-accelerated protons has been studied using numerical simulations, but the results so far have not been consistent with one another. <cit.> modelled particle precipitation including enhanced turbulence and found that scattering increases back-precipitation, but even so the fraction of protons able to precipitate down to the radial distance of 1 R_⊙ relative to the injected back-propagating protons is less than 1%. The precipitation fraction decreases as a function of the radial distance of the CME shock. Therefore, they conclude that the CME-driven shocks cannot provide a sufficient flux of >300 MeV protons to explain the SGRE events. Opposite conclusions in support of a CME shock as the source of the gamma-ray-producing protons have been obtained by <cit.>, who studied the Fermi behind-the-limb flare on 2014 September 1. Their simulations of the CME-driven shock indicated that the quasi-perpendicular part of the shock had a magnetic connection to the gamma-ray source at the front side of the Sun, and the increase in the shock compression ratio matched the increase in the observed gamma-ray emission. <cit.> simulated proton acceleration in the CME-driven shocks during the 2012 January 23 and May 17 SGRE events. The 2012 May 17 SGRE event was also observed as a GLE by neutron monitors. They concluded that proton acceleration by coronal shocks and diffusive downstream particle transport could explain the SGRE events. However, the authors of the above-mentioned studies suggest that more elaborate MHD models of the particle transport back to the Sun are required because of the complex structure of the magnetic fields near the Sun, which current simulations cannot fully replicate. The lack of direct observations of the precipitating protons close to the Sun leaves open the question of whether they can propagate deep enough back into the solar atmosphere.
The initial acceleration and speed of the CME in part control the formation height and strength of the shock, which in turn affect the particle acceleration efficiency of the shock. Therefore, the CME acceleration and speed provide a proxy for the effectiveness of high-energy particle acceleration in the CME-driven shocks. <cit.> studied the SGRE association with on-disk CMEs producing major SEP events and with halo CMEs (HCMEs) with sky speeds ≥1800 km s^-1 during cycle 24. They investigated the initial acceleration and space speed of the CMEs, which they defined to be the instantaneous peak space speed and acceleration obtained from forward fitting of the graduated cylindrical shell (GCS) flux rope model <cit.> to the EUV and coronagraph images of the CMEs. They found that the peak space speed and peak initial acceleration of the SGRE-producing CMEs are 2516 km s^-1 and 3.87 km s^-2, respectively. <cit.> suggest that the close connection they found between the CME kinematics and the SGRE events gives support to the CME-shock scenario.
In addition to SEP events, type II radio bursts are related to particle acceleration by CME-driven shocks. In this report we estimate the initial acceleration and space speed of the CMEs associated with the Fermi/LAT solar flares (FLSFs) of solar cycle 24 listed by <cit.>. In order to evaluate the feasibility of the CME-driven shocks in producing SGRE events, we compare the average initial acceleration and space speed of CME populations associated with SGRE and SEP events and type II radio bursts. We use space speeds obtained by applying a geometrical correction to close-to-the-limb CMEs or by applying the model by <cit.> to HCMEs. The initial acceleration is estimated by assuming that the CME obtains its estimated space speed during the interval extending from the onset time to the peak time of the associated soft X-ray flare <cit.>. In addition, we estimate the radial distance and the space speed of the shocks at the end times of the two longest-duration SGRE events, on 2012 January 23 and March 07.
§ DATA
In the analysis we use the catalog published by <cit.>, which contains 45 FLSFs with >30 MeV gamma-ray emission in the period 2010 January–2018 January. We do not repeat here all the details of the event data analysis, which are given in <cit.>; we briefly describe their method of categorizing the FLSFs. <cit.> characterized the light curves of the FLSFs based on the associated hard X-ray (HXR) observations made by the Fermi Gamma-ray Burst Monitor <cit.>. If the early evolution of the gamma-ray emission was synchronous with the Fermi/GBM HXR evolution, the flare was deemed to have an impulsive 'Prompt' component lasting ≲10 minutes. If the flare had a second phase of gamma-ray emission without a corresponding HXR evolution, the flare was deemed to have a gradual 'Delayed' component that could last up to ≈20 hours. <cit.> found that a total of 39 out of the 45 FLSFs had a detectable level of >100 MeV emission. One should note that Fermi/LAT does not observe the Sun continuously; the average LAT measurement interval lasts about 30 minutes <cit.>. Of those 45 FLSFs, they classified 6 flares as 'Prompt only' and 4 flares as 'Delayed only'. In 10 flares both the 'Prompt' and 'Delayed' emission were detected by LAT, and 6 flares were detected with the LAT Low Energy (LLE) analysis only.
The existence of the DH type II radio bursts is based on Wind spacecraft's radio and plasma wave instrument <cit.> observations (<https://cdaw.gsfc.nasa.gov/CME_list/radio/waves_type2.html>), STEREO/WAVES instrument <cit.> observations, and on the analysis by <cit.>. The metric type II radio burst and soft X-ray flare observations are obtained from the NOAA Solar and Geophysical Event Reports. We adjusted the NOAA-reported flare onset times in some events after inspecting concurrent EUV images and soft X-ray curves of the solar eruption. The CME data near the Sun is provided by the Large Angle and Spectrometric Coronagraph <cit.> on the Solar and Heliospheric Observatory <cit.> spacecraft. The CME data is collected from the SOHO/LASCO CME Catalogs (<https://cdaw.gsfc.nasa.gov/CME_list/index.html>, <https://cdaw.gsfc.nasa.gov/CME_list/halo/halo.html>). SEP event data are from the Major SEP Event list (<https://cdaw.gsfc.nasa.gov/CME_list/sepe/>) and from the GOES-equivalent >10 MeV intensities calculated using data provided by the High Energy Telescope <cit.> onboard STEREO. For the shock distance estimation at the end of the SGRE event, we used white-light images of the Sun Earth Connection Coronal and Heliospheric Investigation <cit.> Heliospheric Imagers <cit.> onboard the Solar Terrestrial Relations Observatory <cit.> spacecraft. The HI images were provided by the STEREO Archive maintained by the UK Solar System Data Centre (<https://www.ukssdc.ac.uk/solar/stereo/data.html>). To identify the associated CMEs, we inspected the CME catalogues provided by the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS, <https://www.helcats-fp7.eu/>).
§ ESTIMATION METHOD OF THE CME INITIAL ACCELERATION
The initial acceleration of the CME near the Sun is difficult to measure because the cadence of white-light coronagraphs is limited. In our study we follow the method previously used by <cit.> and <cit.>. We assume that the CME accelerates from rest to its final maximum speed, which it reaches at the peak time of the associated soft X-ray flare. <cit.> have shown that the main acceleration phase of the CME coincides with the impulsive phase of the associated X-ray flare. Therefore, we calculate the initial acceleration a of the CME with the formula a=V_Space/(t_FlarePeak-t_FlareOnset), where V_Space is the estimated space speed of the CME and t_FlarePeak and t_FlareOnset are the flare peak and onset times, respectively. The space speed of HCMEs has been estimated by using a cone model for HCMEs <cit.>, and the space speeds are listed in the SOHO/LASCO HALO CME catalog (<https://cdaw.gsfc.nasa.gov/CME_list/halo/halo.html>). For non-HCMEs, the space speed, V_Space, is calculated from the measured CME speed on the sky plane, V_Sky, by using a geometrical correction V_Sky/cosθ, where θ is the angle the CME propagation direction makes away from the sky plane. The angle θ depends on the longitude of the flare location. To avoid unrealistically large corrections, we have included in the analysis only non-HCMEs for which θ is ≤30^∘ as seen either from the SOHO or STEREO spacecraft. The method yields an average acceleration over the acceleration phase of the CME; the peak initial acceleration of the CME can be higher than the obtained average initial acceleration, as was shown by <cit.>.
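As a concrete illustration, the short Python sketch below (our own illustration, not code used in the analysis) applies the acceleration formula and the geometrical speed correction described above; the numerical example reuses the space speed and flare times of the 2011 June 7 event listed in the Appendix.

# Minimal sketch (not the code used in the analysis) of the initial-acceleration
# estimate a = V_Space / (t_FlarePeak - t_FlareOnset), with the geometrical
# correction V_Space = V_Sky / cos(theta) applied to non-halo CMEs seen within
# 30 deg of the sky plane.
from datetime import datetime
from math import cos, radians

def space_speed(v_sky_km_s, theta_deg):
    """Project the sky-plane speed of a non-halo CME to a space speed."""
    if theta_deg > 30.0:
        raise ValueError("correction applied only for theta <= 30 deg")
    return v_sky_km_s / cos(radians(theta_deg))

def initial_acceleration(v_space_km_s, t_flare_onset, t_flare_peak):
    """Average initial acceleration (km s^-2) over the flare impulsive phase."""
    return v_space_km_s / (t_flare_peak - t_flare_onset).total_seconds()

# Example: the 2011 June 7 event (space speed 1321 km/s, flare onset 06:16 UT,
# flare peak 06:41 UT) -> ~0.88 km s^-2
a = initial_acceleration(1321.0,
                         datetime(2011, 6, 7, 6, 16),
                         datetime(2011, 6, 7, 6, 41))
print(f"initial acceleration = {a:.2f} km s^-2")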
In general, we know that the CME speed profiles near the Sun vary from event to event and CME speed is an important parameter governing particle acceleration efficiency of the CME-driven shocks. <cit.> showed that CMEs associated with major SEP events have a hierarchical relationship between the initial acceleration and speed of the CME and the SEP fluence spectral indices <cit.>: CMEs associated with filament eruptions have low initial speeds and acceleration and produce the softest SEP spectra at 1 au, while the CMEs with highest initial speed and acceleration have the hardest SEP spectra. The CMEs with an intermediate speed and acceleration result in moderately hard SEP spectra at 1 au. Therefore, initial acceleration and speed provide a proxy for the effectiveness of high-energy particle acceleration in the CME-driven shocks.
§.§ Initial Acceleration and Space Speed of CMEs Associated with LAT Gamma-ray Flares
In our analysis we use the on-disk gamma-ray events listed in <cit.>. Their list contains 45 gamma-ray flares during cycle 24. <cit.> categorized the flares based on whether a 'Prompt' or a 'Delayed' component (SGRE event) of gamma-ray emission was detected. In 6 of the 45 events only 'Prompt' (impulsive) emission was detected, 4 events had no detected 'Prompt' emission at all, 10 events had both 'Prompt' and 'Delayed' emission, and the remaining 25 had 'Delayed' emission, but the presence of 'Prompt' emission could not be excluded because LAT was not pointing at the Sun at the appropriate time. 32 of the flares were associated with an HCME, 10 were associated with non-HCMEs, and 3 had no associated CME. Based on our own estimates, we changed the CME of the 2014 September 10 flare to the 08:00 UT HCME. We have excluded the 3 back-sided flares, the 3 flares without a CME, and the 2017 September 06 X2.2 flare, for which we could not estimate the space speed of the CME at 09:48 UT because there is no suitable side view either from SOHO or STEREO-A.
Table <ref> lists the total number of CMEs and the average values of the initial acceleration and space speed in the different categories. First, we divided the CMEs into two main categories: those associated with flares showing only a 'Prompt' component, labelled 'Prompt Only', and those with a 'Delayed' component, labelled 'All Delayed' in Table <ref>. The 'Prompt Only' flares are impulsive gamma-ray flares and the 'All Delayed' ones are SGRE events. Clearly, the impulsive gamma-ray flares are associated with significantly slower CMEs (775 km s^-1) than the flares with an SGRE event (1708 km s^-1). The difference in the initial acceleration is not as clear, but again the CMEs with SGRE events show a larger initial acceleration than those without an SGRE event. From SEP event comparisons (<cit.>; see also <cit.>) we know that a higher acceleration and speed indicate that the CME-driven shock produces harder energy spectra, i.e., is more likely to accelerate >300 MeV protons. Similar high initial acceleration and fast speed characteristics are shared by CMEs associated with GLEs, which are guaranteed to have >300 MeV protons.
Then we divided the 'All Delayed' CMEs into three subcategories: the 'Prompt Delayed' CMEs are associated with gamma-ray flares having both emission components, the 'No-Prompt Delayed' CMEs do not have a detectable 'Prompt' component, and the 'Delayed' CMEs have a 'Delayed' component but the existence of the 'Prompt' component is uncertain because of the lack of LAT observations during the impulsive phase of the flare. The differences are now less significant (the sample sizes also become small), but the 'Prompt Delayed' CMEs appear to have the highest average initial acceleration and space speed and the 'No-Prompt Delayed' CMEs the lowest among the three groups. Most likely the CMEs without an associated 'Prompt' gamma-ray component are more slowly accelerating CMEs that are still able to produce >300 MeV protons as their space speed becomes high enough in the later phase. Again, a similarly slower initial acceleration but high later-phase speed has been detected for CMEs producing major SEP events <cit.>. Table <ref> in the Appendix lists the data for the events included in the calculations of Table <ref>.
Table 1. Initial acceleration and space speed of CMEs associated with LAT gamma-ray flares

                               Main Types                  Subtypes of 'All Delayed'
Quantity                       Prompt Only   All Delayed   Delayed   Prompt Delayed   No-Prompt Delayed
Count                          6             32            18        8                4
Mean Acceleration (km s^-2)    1.37          1.75          1.73      1.87             1.62
Mean Space Speed (km s^-1)     775           1708          1745      1753             1663
§ COMPARISON WITH HCMES ASSOCIATED WITH TYPE II RADIO BURSTS AND MAJOR SEP EVENTS
Because the 'All Delayed' gamma-ray flares are mainly associated with HCMEs, we compare their initial acceleration and space speed with HCMEs associated with type II radio bursts and major SEP events. Major SEP events are defined as those with the peak proton flux in the GOES >10 MeV integral channel above 10 particles cm^-2 s^-1 sr^-1. Since SEPs are charged particles, they spiral along the interplanetary magnetic field lines as they propagate away from the acceleration source. Therefore, at Earth we can detect mostly SEP events originating from eruptions occurring in the western hemisphere of the Sun. Some very intense eruptions from the eastern limb can produce particle events at Earth but in that case only at the lower energies.
In general, DH type II radio bursts are well correlated with major SEP events <cit.>. Both radio and gamma-ray emissions can be detected from all on-disk eruptions because electromagnetic emission can propagate away from the Sun without being significantly affected by the coronal or interplanetary medium. Type II solar radio bursts occur at the fundamental and second harmonic of the local plasma frequency, which depends on the electron density upstream of the CME shock. Because the electron number density decreases as a function of radial distance, the plasma frequency decreases away from the Sun, and higher-frequency emission originating from lower heights can propagate freely outwards. Therefore, the type II burst can be identified in radio dynamic spectra as an intensity feature slowly drifting towards lower frequencies at a rate that depends on the shock speed and the density scale height of the ambient medium.
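As a quick numerical illustration of this frequency–density mapping (our own example, using the standard plasma-frequency relation f_pe [kHz] ≈ 9√(n_e [cm^-3]) with representative, assumed densities rather than measured values):

# Why type II bursts drift to lower frequencies: the emission tracks the local
# plasma frequency, f_pe [kHz] ~ 9 * sqrt(n_e [cm^-3]), and the electron
# density n_e decreases with distance from the Sun. The densities below are
# representative values chosen only for illustration.
from math import sqrt

def plasma_frequency_khz(n_e_cm3):
    return 9.0 * sqrt(n_e_cm3)

for n_e in (1.0e8, 1.0e6, 1.0e4):
    print(f"n_e = {n_e:.0e} cm^-3 -> f_pe ~ {plasma_frequency_khz(n_e):.0f} kHz")
# ~90 MHz (metric range) down to ~900 kHz (hectometric range)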
Table 2. Initial acceleration and space speed of cycle-24 HCMEs with metric type II radio bursts

                           'Delayed' Component (SGRE Event)             No 'Delayed' Component
HCME Category              Count   Mean Acc.    Mean Space Speed        Count   Mean Acc.    Mean Space Speed
                                   (km s^-2)    (km s^-1)                       (km s^-2)    (km s^-1)
DH Type II                 23      1.85         1869                    37      1.09         1211
No DH Type II              1       (too low statistics)                 26      1.11         959
SEP Event                  17      1.83         2004                    13      1.15         1499
SEP Event (w/STEREO)       21      1.70         1858                    18      1.07         1396
No SEP Event               7       1.81         1360                    50      1.09         1006
No SEP Event (w/STEREO)    3       2.68         1524                    45      1.11         992
In Table <ref> we have divided the 87 cycle-24 HCMEs with metric type II radio emission into CMEs with and without a 'Delayed' gamma-ray component. The existence of the metric type II radio burst indicates that a shock forms early, making these HCMEs good candidates for SGRE production. We investigated how many of the metric type II-associated HCMEs are with and without a DH type II radio burst or a major SEP event. We selected major SEP events as they are intense events and could have enhancements of >300 MeV protons, which are unlikely to be present in the inherently low-intensity SEP events. Because the observer's connection to the SEP source affects the possibility of detecting SEPs, the group without a major SEP event could still contain events that were able to accelerate particles; in particular, the poorly connected eastern-hemisphere events could have produced high-energy particles that were not detected. We account for this possibility by using GOES-equivalent >10 MeV STEREO intensities to identify major SEP events observed by STEREO. The STEREO >10 MeV flux is estimated using data from the STEREO/HET <cit.>, which covers the energy range of 13–100 MeV. The flux is estimated by fitting a power law to the HET data points and integrating the flux in the 10–150 MeV range <cit.>. In Tables <ref> and <ref>, we have separated the two SEP event sets and marked the one containing both GOES and STEREO events as "(w/STEREO)", although in Table <ref> the statistics for the SGRE events are mostly too low. One should note that the STEREO spacecraft drift around the Sun, so their magnetic connection to the Sun changes continuously. In addition, STEREO-A observations have significant data gaps during the solar conjunction period in 2014–2015, and contact with STEREO-B was lost on October 1, 2014. We also surveyed STEREO/WAVES data for additional DH type II radio bursts but found only one, on 03 August 2011. The STEREO-A data showed a short-duration, slanted feature in the 10–14 MHz frequency range starting at 13:38 UT, which we added to our DH type II burst list. All other STEREO/WAVES DH type II bursts were accompanied by a Wind/WAVES DH type II burst, so we study the STEREO and Wind DH type II bursts together. The westernmost of the 7 SGRE events without a major GOES SEP event occurred at the heliographic longitude W18, and 4 of the 7 SGRE events occurred less than 30^∘ from the eastern limb. The two bottom rows of Table <ref> are difficult to interpret, but we have added them mainly for completeness. The results show that all HCMEs associated with an SGRE event have similar average initial acceleration values (1.70–1.85 km s^-2), with the exception of the group without a GOES or STEREO SEP event, which contains only three events and has a very high average initial acceleration (2.68 km s^-2), possibly indicating missed major SEP event identifications. This range is considerably higher than that of the HCMEs without an SGRE event (1.07–1.15 km s^-2). The SGRE- and SEP-associated HCMEs have the highest average space speed, whereas two groups of HCMEs, the SGRE-associated HCMEs without an SEP event and the SEP-associated HCMEs without an SGRE event, seem to have similar speeds. However, the average initial accelerations of the SGRE-associated HCMEs without a major SEP event are higher (even when we ignore the group without a GOES or STEREO SEP event that has only 3 events in total) than those of the HCMEs without an SGRE event but with a major SEP event.
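For illustration, the GOES-equivalent >10 MeV flux estimate from the STEREO/HET channels described above can be sketched as follows (Python; the channel energies and fluxes are made-up values chosen for illustration, not actual HET measurements, and the real HET calibration is not reproduced):

# Simplified sketch (made-up channel values, not real HET data) of the
# GOES-equivalent >10 MeV flux: fit a power law J(E) = A * E**(-gamma) to the
# differential channel fluxes and integrate it over 10-150 MeV.
import numpy as np

energy_mev = np.array([15.0, 21.0, 30.0, 44.0, 68.0, 95.0])  # assumed channel mid-energies
flux_diff  = np.array([8.0, 3.5, 1.4, 0.5, 0.15, 0.06])      # assumed fluxes (cm^-2 s^-1 sr^-1 MeV^-1)

# Power-law fit in log-log space
slope, intercept = np.polyfit(np.log10(energy_mev), np.log10(flux_diff), 1)
gamma, A = -slope, 10.0**intercept

# Integral of A * E**(-gamma) from 10 to 150 MeV
integral_flux = A / (gamma - 1.0) * (10.0**(1.0 - gamma) - 150.0**(1.0 - gamma))
print(f"gamma ~ {gamma:.2f}, GOES-equivalent >10 MeV flux ~ {integral_flux:.0f} pfu")
print("major SEP event" if integral_flux > 10.0 else "below the 10 pfu threshold")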
The DH type II-associated HCMEs without an SGRE event have only a slightly lower average space speed (1211 km s^-1), but we know that DH type II bursts are associated with SEP events, and this mixed population includes 12 HCMEs with an SEP event, which have high space speeds. If we exclude these 12 SEP-associated events, the average space speed of the remaining 25 events decreases to 1064 km s^-1. Clearly, the existence of >300 MeV protons is connected to a high initial acceleration and speed of the associated HCME. The HCMEs without an SEP and SGRE event have the lowest average speed (992 km s^-1). Therefore, the SGRE-associated HCMEs conform to the hierarchy between the initial acceleration and speed of the CME and the fluence spectral index as described by <cit.>. The high initial acceleration in particular seems to be crucial for SGRE production.
The average accelerations and speeds of the 20 cycle-24 HCMEs associated with only a DH type II radio burst are shown in Table <ref>. These HCMEs are mostly without SGRE events (only 4 SGRE events) or major SEP events (only 6 SEP events if STEREO observations are included, two of which also have an SGRE event). Therefore, the statistics for SGRE events are low, but the average space speed of the SGRE events without an SEP event (≈1579 km s^-1) is below the space speed of the SGRE events with an SEP event (≈2004 km s^-1; ≈1858 km s^-1 if STEREO observations are included) in Table <ref>. None of the three eruptions without a major SEP event detected by GOES were magnetically well connected to Earth, so they probably accelerated high-energy particles efficiently but the particles did not reach Earth. One of them, the 10 June 2014 HCME with a solar source at S17E82, actually had a major SEP event observed by STEREO-B, which was located at the heliographic longitude E164. The 05 March 2012 HCME had a solar source at N17E52, but the GOES-equivalent >10 MeV intensities observed by STEREO-B at longitude E117 were already elevated above 100 pfu due to a preceding HCME on 04 March that was not associated with an SGRE event. At the onset of the 10 March 2012 HCME launched from N17W24, the >10 MeV intensities were elevated above 10 pfu at all three spacecraft. In fact, the March 5 and 10 events are the first and last events in a cluster of 4 SGRE events accompanied by a high level of SEP flux <cit.>. So, it is quite possible that the two March 2012 events also accelerated particles. The average initial acceleration value is lower than the respective value for SGRE events with SEP events in Table <ref>, but this is expected because CMEs associated with only a DH type II radio burst accelerate slowly and the shock forms later. This probably explains the lower average space speed near the Sun. The initial acceleration and average space speed of the HCMEs without an SGRE event and an SEP event are lower than or similar to, respectively, the corresponding values in Table <ref>. Table <ref> in the Appendix lists the data for the events included in the calculations of Tables <ref> and <ref>.
Table 3. Initial acceleration and space speed of cycle-24 HCMEs with DH type II radio bursts only

                           'Delayed' Component (SGRE Event)             No 'Delayed' Component
HCME Category              Count   Mean Acc.    Mean Space Speed        Count   Mean Acc.    Mean Space Speed
                                   (km s^-2)    (km s^-1)                       (km s^-2)    (km s^-1)
SEP Event                  1       (too low statistics)                 3       0.46         1263
SEP Event (w/STEREO)       2       (too low statistics)                 4       0.38         1265
No SEP Event               3       1.00         1579                    13      0.44         1164
No SEP Event (w/STEREO)    2       (too low statistics)                 12      0.47         1155
§ RADIAL DISTANCE OF THE SHOCK AT THE END OF THE SGRE EVENTS
We selected two SGRE events, the 2012 January 23 and March 07 events, which had the longest durations of the associated type II radio bursts (Gopalswamy et al. 2019) and for which STEREO observations provided side-view white-light images of the HCMEs.
The 2012 January 23 04:00 UT HCME produced a DH type II burst with a duration of about 25.0±9.6 hr, while the estimated duration of the SGRE event was 15.4±0.8 hr. The SGRE ended around 19:25 UT. The estimated space speed was 2511 km s^-1, and the interplanetary shock arrived at the SOHO spacecraft at 14:33 UT on January 24. The eruption was associated with an M8.7 X-ray flare starting at 03:38 UT at the heliographic location N28W21. The STEREO-A and STEREO-B longitudes were W108 and E114, respectively. The eruption produced a major SEP event at Earth with a GOES >10 MeV peak proton flux of 6310 cm^-2 s^-1 sr^-1. The 2012 January 23 eruption close to the Sun has been studied extensively because the eruption involved two flux ropes that merged below the radial distance of 15 R_⊙ <cit.>.
The second HCME, at 00:24 UT on 2012 March 07, was associated with an SGRE event of even longer duration, about 21.3±1.6 hr <cit.>, although the SGRE durations cannot be measured accurately because LAT does not observe the Sun continuously. The SGRE end time was 21:40 UT. The estimated duration of the DH type II burst was 27.9±6.8 hr. The LASCO space speed of the HCME was 3146 km s^-1 and it was associated with an X5.4 X-ray flare at 00:02 UT from N17E27. A second X1.3-class flare started about an hour later at 01:05 UT. The associated HCME at 01:30 UT had a slightly slower space speed of 2160 km s^-1. The STEREO-A and STEREO-B longitudes were W109 and E118, respectively. The SOHO shock arrival time was 10:53 UT on March 08. The GOES >10 MeV peak proton flux was 6530 cm^-2 s^-1 sr^-1. The onset of the HCME has been studied by <cit.> and the heliospheric propagation by <cit.> and <cit.>.
§.§ Distance Estimation
We estimated the radial distance and the space speed of the shock by forward fitting a spheroidal shock model to the white-light images of the STEREO/HIs <cit.> around the end times of the SGRE events. For the shock fitting, we used IDL programs from the Solar Corona Ray-Tracing Software package developed for forward modelling of structures of the solar corona <cit.>.
The fitting of the spheroidal shock model to the HI observations is shown in Figure <ref>. The propagation direction of the shock is difficult to estimate; our estimates were N25W05 for the 2012 January 23 CME and N34E27 for the 2012 March 07 CME. For the 2012 January 23 HCME we obtained the radial distance r=121 R_⊙ and the space speed 975 km s^-1. For the 2012 March 07 HCME the estimated radial distance was r=140 R_⊙ and the space speed 750 km s^-1. The obtained speeds are still high enough for a strong CME-driven shock to exist.
We compared these results with radial distances estimated using Wind/WAVES observations of the type II radio burst. First, we measured the mid-frequency of the type II emission lane at the time the CME leading edge was around 20 R_⊙, because type II emissions are often very complex and overlapped by more intense type III emission during the early phase of the eruption, which makes radio measurements at frequencies corresponding to shock distances close to the Sun difficult. From the frequency formula f_plasma=9.0 ×√(N × n(r)), where the radial distance r is in units of R_⊙ and the frequency f is in kHz, we calculated the multiplier N for the Leblanc density model n(r) <cit.>. The measurement time was obtained by extrapolating the CME height-time profiles, obtained by forward fitting a flux rope model to LASCO and SECCHI/COR images, to a radial distance of 20 R_⊙.
We then estimated the radial distance at the SGRE end time from the mid-frequency of the type II emission lane: for the 2012 January 23 HCME we obtained the multiplier N=4.51, which for the mid-frequency f=83 kHz gives a radial distance of r=132 R_⊙. For the 2012 March 07 HCME the respective values were N=9.07, f=90 kHz, and r=173 R_⊙. The distances estimated from the radio burst data are 9% and 24% larger than those estimated from the STEREO/HI images. The STEREO/HI height-time measurements are complicated because the actual shape, location, and propagation direction of the shock ahead of the CME body are difficult to discern from the white-light images. The CME structure in white light is also transparent, so we may confuse structures <cit.>, and the brightness depends on the local density and the Thomson-scattering geometry <cit.>. On the other hand, type II radio emissions are sporadic and depend on the local density at the radio source, which a general density model cannot capture. We also assume that the location of the radio source is at the shock nose <cit.> and that the type II emission in interplanetary space occurs at the fundamental of the plasma frequency <cit.>.
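This two-step procedure can be sketched numerically as follows (Python; the density-model coefficients are the commonly quoted Leblanc et al. (1998) values, and the N and mid-frequency values are those given above, so the snippet is an illustration of the method rather than a reproduction of the exact analysis):

# Sketch of the type II-based shock-distance estimate. Density model: the
# commonly quoted Leblanc, Dulk & Bougeret (1998) form,
#   n_e(r) = 3.3e5 r^-2 + 4.1e6 r^-4 + 8.0e7 r^-6  [cm^-3], r in solar radii,
# and emission frequency f [kHz] = 9 * sqrt(N * n_e(r)) with multiplier N.
import numpy as np
from scipy.optimize import brentq

R_SUN_KM, AU_KM = 6.957e5, 1.496e8

def n_e(r):
    return 3.3e5 / r**2 + 4.1e6 / r**4 + 8.0e7 / r**6

def freq_khz(r, N):
    return 9.0 * np.sqrt(N * n_e(r))

def calibrate_N(f_khz_at_20rsun):
    # Step 1: multiplier N from the mid-frequency measured when the CME is at 20 Rsun
    return (f_khz_at_20rsun / 9.0)**2 / n_e(20.0)

def shock_distance(f_khz_at_end, N):
    # Step 2: invert f(r) = f_end for the radial distance at the SGRE end time
    return brentq(lambda r: freq_khz(r, N) - f_khz_at_end, 20.0, 400.0)

# Using the multipliers and mid-frequencies quoted in the text:
for label, N, f_end in (("2012 Jan 23", 4.51, 83.0), ("2012 Mar 07", 9.07, 90.0)):
    r = shock_distance(f_end, N)
    print(f"{label}: r ~ {r:.0f} Rsun ~ {r * R_SUN_KM / AU_KM:.2f} au")
# -> ~132 and ~173 Rsun (about 0.6 and 0.8 au), matching the distances above.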
§ DISCUSSION
In the first part of our analysis, we showed that the near-Sun kinematics of the CMEs correlate with the properties of the gamma-ray emission observed by Fermi/LAT. The population of CMEs (8 in total) associated with a gamma-ray event whose light curve indicated both 'Prompt' and 'Delayed' emission components, as defined by <cit.>, had the highest average initial acceleration (1.87 km s^-2) and the fastest average space speed (1753 km s^-1). The mixed 'Delayed' category, where the existence of the 'Prompt' component is uncertain due to the lack of LAT measurements around the flare onset, has a similar average space speed (1745 km s^-1) but a somewhat lower initial acceleration (1.73 km s^-2). The population of CMEs (6 in total) associated with gamma-ray flares showing only a 'Prompt' emission component, i.e., with no SGRE emission detected by LAT, had the lowest average values (1.37 km s^-2 and 775 km s^-1, respectively). The speeds correspond well with those obtained by <cit.>, who studied CME properties for X-class flares with and without gamma-ray emission. They found a median CME linear speed of 768 km s^-1 for X-class flares without gamma-ray emission. If Fermi detected gamma-rays during the X-class flare, the median speed of the associated CMEs was 1828 km s^-1. CMEs associated with SGRE events had the highest median speed of 2125 km s^-1. The definition of SGRE in their study was that the >100 MeV gamma-ray duration is ≳2 hr. The <cit.> definition used here is based on details of the hard X-ray and gamma-ray light curves, which probably explains why the <cit.> SGRE events were associated with faster CMEs.
In addition, we divided the cycle-24 on-disk HCMEs associated with type II radio bursts into groups with and without (a) SGRE events, (b) DH type II bursts, and (c) major SEP events observed. For SEP events we analyzed both major events observed by GOES only and a second group of major SEP events observed by either GOES or the STEREO spacecraft. Our statistical analysis shows that all metric type II-associated HCMEs with an SGRE event have a considerably higher initial acceleration, and also a higher space speed if a major SEP event was detected, than the metric type II-associated HCMEs without an SGRE event. The average space speeds of the SGRE-associated HCMEs without an SEP event and the non-SGRE-associated HCMEs with a major SEP event were similar. The analysis of the HCMEs associated with only DH type II emission shows that the three SGRE-producing HCMEs without an SEP event observed by the GOES spacecraft have a higher space speed than any studied population of HCMEs not associated with an SGRE event. However, one of those three HCMEs had a major SEP event observed by STEREO-B and the other two had elevated backgrounds at least at the best-connected spacecraft, so all three events could have accelerated protons. Their average initial acceleration is slightly lower than that of the metric type II-associated HCMEs without an SGRE event, but clearly higher than that of the DH type II-associated HCMEs without an SGRE event. The lower value is expected because CMEs associated with only a DH type II radio burst accelerate slowly and the shock forms later. This result resembles the kinematic hierarchy of CMEs with major SEP events, where rare, slowly accelerating but eventually fast CMEs associated with filament eruptions outside active regions can produce large SEP events at 1 au. In the case of filament eruptions, we know that the resulting energy spectrum is soft, but the general idea is comparable: occasionally the initial acceleration of the CME is slower, but the acceleration continues long enough that a sufficiently strong shock forms later at higher altitudes. In general, our results are similar to those reported by <cit.>. The SGRE-associated HCMEs seem to conform to the hierarchy between the initial acceleration and speed of the CME and the fluence spectral index as described by <cit.>. Clearly, the existence of >300 MeV protons is connected to a high initial acceleration and speed of the associated HCME. Therefore, our results suggest that CME-driven shocks are the likely source of the >300 MeV protons required to produce SGREs at the Sun.
The mirror effect near the Sun limits the number of protons that can penetrate deep enough, i.e., a particle with pitch angle α in the sheath region can penetrate the near-Sun region only if μ=cosα satisfies the condition:
|μ| ≥μ_c ≡√(1-B_sheath/B_⊙).
Because the foot points of the field lines crossing the shock nose could be connected to areas outside the source active region, where the average magnetic field strength B_⊙ is considerably lower than in active regions, the ratio B_sheath/B_⊙ increases and the width of the loss cone α_c=cos^-1μ_c=sin^-1√(B_sheath/B_⊙) becomes larger. Because the CME flux ropes have a pile-up region in front of them, the magnetic field within the sheath could be significantly larger than the ambient field, which further increases B_sheath/B_⊙ and widens the loss cone. Therefore, more protons can precipitate deep into the solar atmosphere. As mentioned earlier, enhanced turbulence increases scattering into the loss cone, which in turn increases the number of precipitating particles at the foot points. The level of the turbulence and its time evolution along the flux-rope-wrapping field lines and in the atmospheric layers close to the Sun are difficult to estimate. For example, EUV waves associated with large solar eruptions and propagating long distances over the solar surface clearly indicate that coronal shocks and the CME lateral expansion affect the solar atmosphere far from the eruption site, most likely resulting in large volumes of enhanced turbulence around the source active region.
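To give a feel for the numbers, the loss-cone half-width α_c = sin^-1√(B_sheath/B_⊙) can be evaluated for a few assumed field-strength ratios (the ratios below are illustrative choices of ours, not values derived from the events studied here):

# Loss-cone half-width for assumed ratios of the sheath field to the field at
# the near-Sun mirror point (illustrative values only).
from math import asin, sqrt, degrees

for ratio in (0.001, 0.01, 0.1):
    alpha_c = degrees(asin(sqrt(ratio)))
    print(f"B_sheath/B_sun = {ratio:5.3f} -> loss-cone half-width ~ {alpha_c:4.1f} deg")
# ~1.8, ~5.7 and ~18.4 deg: a relatively stronger sheath field opens the loss
# cone and lets a larger fraction of the protons precipitate.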
It should be noted that fast CME shocks are the only sites for which we have clear corroborating observational evidence of acceleration of >300 MeV protons over extended times long after the end of the solar flare. <cit.> studied the properties of the soft X-ray flares, CMEs, and SEP events associated with SGRE events. They found that SGRE events are not produced by the brightest, most intense X-ray flares. In their reverse study, they found that during the period from 2011 March to 2015 June, 45 X-class soft X-ray flares were detected, but only 15 of those were associated with an SGRE event. Similarly, their study showed that SGRE events are associated with fast CMEs and that the SGRE duration increases as the CME speed increases. The reverse study of the fast HCMEs with speeds above 1500 km s^-1 found only four HCMEs without a reported gamma-ray event. Two of the four HCMEs, the 2011 September 22 10:48 UT and 2012 July 19 05:24 UT halos, had concurrent Fermi/LAT observations. In both events, the LAT spectra showed slight increases that were not significant enough to be characterized as detections. In the study of related SEP events observed by GOES, <cit.> list only the 2011 March 07 SGRE event as magnetically well connected to GOES and without a significant background increase due to a preceding event, but this event did not show any increase in the GOES >300 MeV flux. <cit.> suggest that the lack of high-energy protons is due to a poor latitudinal magnetic connection of the shock nose to Earth, because the flare occurred at the heliographic latitude N31 and the northern polar region of the Sun is tilted away from Earth in March. Similarly, <cit.> showed that the soft energy spectrum observed by GOES during the 2014 January 7 SGRE event was due to the poor magnetic connectivity of the shock nose to an Earth observer. The final conclusion of <cit.> is that their results favor the CME shock as the source of the SGRE-producing protons.
The >300 MeV protons are accelerated near the nose of the CME-driven shock <cit.>, with the possible exception of the earliest phase of the eruption, where the fast lateral expansion of the shock could result in efficient particle acceleration away from the nose region. Some fraction of the shock-accelerated protons escape into interplanetary space and are detected as an SEP event, while others propagate along the magnetic field lines deep down into the solar atmosphere and generate SGRE. In addition, the magnetic field lines that are pushed ahead of the CME body maintain a continuous connection between the shock and the solar atmosphere. Therefore, if the shock can accelerate protons to energies >300 MeV, the protons will have a propagation path back to the Sun and can generate SGREs. The key aspect of the CME-shock model is that the magnetic field lines the protons travel along sunwards cross the shock front into the sheath region behind the shock, wrap around the CME flux rope, and connect back to the Sun in areas outside the foot points of the CME flux rope and possibly also outside the source active region. Therefore, the locations of the foot points are widely separated, providing a natural explanation for the spatially extended source of gamma-ray emission.
We estimated the radial distance of the CME-driven shock at the end times of the 2012 January 23 and 2012 March 07 SGRE events. These events were associated with the two longest-duration type II radio bursts. We estimated the shock radial distance by forward fitting a spheroidal shock model to STEREO/HI white-light images of the CME and obtained for the 2012 January 23 SGRE event the shock radial distance r=121 R_⊙ and for the 2012 March 07 SGRE event r=140 R_⊙. In addition, we used the frequency of the type II radio burst obtained from the radio dynamic spectra of Wind/WAVES together with a radial density model to get another estimate of the shock radial distance. The distances obtained from the radio measurements were slightly larger: for the 2012 January 23 SGRE event r=132 R_⊙, and for the 2012 March 07 SGRE event r=173 R_⊙.
<cit.> estimated the speed of the 2012 March 07 CME using the standard aerodynamic drag-force model approach, where the CME travelled through quiet or perturbed solar wind (SW). Based on their results (their Figure 8), the estimated speed of the CME at 22 UT for the quiet SW model was ≈740 km s^-1 and for the perturbed SW model ≈820 km s^-1. The perturbed SW model better matches the CME arrival time and speed at the Wind spacecraft. Therefore, our estimated CME speed of 750 km s^-1 seems to be slightly below the one obtained from the perturbed SW model. Recently, <cit.> studied the arrival signatures of the 2012 March 07 CME at several heliospheric locations. They report that Venus Express detected the arrival of the CME ejecta at 13:28 UT, when Venus was at the radial distance of 154 R_⊙. Therefore, the radial distance estimated from the forward fitting of the spheroidal shock model, r=140 R_⊙, is clearly too low, and most likely the radial distance obtained from the radio observations, r=173 R_⊙, is closer to the actual distance. The model fitting to images of the 2012 March 07 CME was difficult because the CME structure was very faint in the STEREO-B images (see Figure <ref>c). Therefore, the sensitivity of the imaging system may not be high enough to detect the shock in front of the CME.
The estimation of the radial distance of the shock at the end time of the SGRE event is quite complicated. The location and the shape of the shock front are difficult to discern from white-light images of the CME <cit.>. CMEs are transparent structures, and the intensity of Thomson scattering depends on the viewing angle relative to the structure. The location of the type II radio source on the shock front is also difficult to measure. Imaging radio instruments operate at the higher frequencies, which correspond to heights of a couple of solar radii above the solar surface. Direction finding and triangulation can be used to locate the interplanetary type II radio sources at lower frequencies. However, the scattering of radio waves and the low intensity of the type II emission limit the accuracy of the direction-finding measurements.
The CME-driven shock in both events reached the SOHO spacecraft (2012 January 24 14:33 UT and 2012 March 08 10:53 UT, respectively), and when the shocks passed 1 au about 30 minutes later, the GOES spacecraft observed a clear particle flux increase around the shock time, visible in Figure <ref>. Both events had a particle flux increase in the GOES 350–429 MeV channels, indicating that the CME-driven shock did accelerate >300 MeV protons. In both cases the 1-au enhancement continued beyond the end times of the SGRE events, estimated to be at 2012 January 23 19:25 UT and 2012 March 07 21:40 UT, respectively. During the March 07 event, the particle increase was detectable at even higher energies, up to the 510–700 MeV energy range. The event-integrated fluence spectrum of the 2012 January 23 event provided by PAMELA indicates that the SEP flux at 1 au extended above 300 MeV <cit.>. Therefore, the CME-driven shock clearly must accelerate >300 MeV protons far away from the Sun, providing support for the CME shock as the source of the SGRE-producing protons.
The 2012 March 07 SGRE event was so bright that the location of the >100 MeV gamma-ray emission source could be estimated in several time intervals over a period of about 10 hours <cit.>. The emission centroid seemed to move away from the flare site across the solar disk towards the west. <cit.> studied two other bright SGRE flares, on 2014 February 25 and 2017 September 10. The gamma-ray intensity of the 2014 February flare was weaker, so they could determine the location of the emission centroid only during two intervals over three hours. The September 2017 flare was brighter, and the location was determined in three intervals over 7 hours, but the flare occurred over the western limb of the Sun, making the detection of possible source movement difficult. In both events the centroid remained consistent with the active region location. Therefore, the movement of the SGRE source during the 2012 March 07 event supports the CME-shock scenario, whereas the detection of a possible source movement in the 2014 February 25 and 2017 September 10 events is complicated because of the weaker gamma-ray intensity or the unfavorable location of the flare.
§ CONCLUSIONS
We compared the acceleration and speed of CMEs associated with gamma-ray flares with a 'Prompt' and/or a 'Delayed' (SGRE event) component, as defined by <cit.>. In addition, we divided the on-disk HCMEs associated with type II radio bursts into groups with or without SGRE events, SEP events, and metric or DH type II radio bursts, and compared the average acceleration and speed between the HCME groups. We showed that the CMEs associated with the 'Delayed' gamma-ray component and the metric type II-producing HCMEs associated with SGRE events together with a DH type II radio burst and/or a major SEP event have higher initial accelerations and space speeds than the CMEs associated with the 'Prompt-only' gamma-ray component or the SEP- or type II-associated HCMEs without an SGRE event. The only exception was the space speed of the metric type II-associated HCMEs with a major SEP event but without an SGRE event, which had an average space speed similar to that of the SGRE-associated HCMEs without a major SEP event. Similar high initial acceleration and fast speed characteristics are shared by CMEs associated with GLEs, which are guaranteed to have >300 MeV protons. The SGRE-associated CMEs also conform to the hierarchy between the initial acceleration and speed of the CME and the fluence spectral index as described by <cit.>. Therefore, our findings support the CME-driven shock as the source of the >300 MeV protons producing SGRE events.
We estimated the radial distance of the CME-driven shock at the end times of the SGRE events with the long-duration type II radio bursts on 2012 January 23 and 2012 March 07 using STEREO/HI white-light images of the CMEs and radio dynamic spectra of Wind/WAVES. The shock radial distances for the 2012 January 23 SGRE event were r=121 R_⊙ and r=132 R_⊙, and for the 2012 March 07 SGRE event r=140 R_⊙ and r=173 R_⊙, respectively. The distances derived from the white-light and radio observations are reasonably consistent, indicating that the radio source is near the shock nose as assumed. The distances are also consistently longer than the estimated shock height of ≈70 R_⊙ for the shorter-duration 2014 February 25 SGRE event <cit.> (Gopalswamy et al. 2019). Because the shock location is not visible in the white-light images, the radial distances estimated from the forward fitting of the spheroidal shock model are probably underestimates. At the end times of the SGRE events, the shock speeds were still high enough (975 km s^-1 and 750 km s^-1) for high-energy particle acceleration. Therefore, we conclude that strong CME-driven shocks accelerate >300 MeV protons up to radial distances of 0.6–0.8 au.
We thank the Fermi/LAT, GOES, SOHO/LASCO, STEREO/SECCHI, Wind/WAVES, and HELCATS teams for providing the data. PM and SA were partially supported by NSF grant AGS-2043131. NG was supported by NASA's STEREO project and the Living With a Star program. HX was partially supported by NSF grant AGS-2228967.
§ CMES AND X-RAY FLARES ASSOCIATED WITH ON-DISK FERMI/LAT SOLAR FLARES
Table <ref> contains the CME and X-ray flare data used in Table <ref>. The first column gives the first observation date and time of the CME, followed by the measured sky-plane speed and the projection-corrected space speed in the second and third columns. The fourth column lists the estimated initial acceleration of the CME. Columns 5–7 list the location in heliographic coordinates and the onset and peak times of the GOES soft X-ray flare. The last column lists the gamma-ray components detected by Fermi/LAT, taken from <cit.>.
Table A. CME and X-ray flare data for Fermi/LAT solar flares
Columns: CME first observation (UT); Sky Speed (km s^-1); Space Speed (km s^-1); Acceleration (km s^-2); Location; Flare Onset (UT); Flare Peak (UT); Gamma-ray Components
2010/06/12 01:31 620 674 0.42 N23W43 2010/06/12 00:30 2010/06/12 00:57 LLE-Prompt
2011/03/07 20:00 2125 2223 1.28 N30W48 2011/03/07 19:43 2011/03/07 20:12 Delayed
2011/06/07 06:49 1255 1321 0.88 S21W54 2011/06/07 06:16 2011/06/07 06:41 Delayed
2011/08/04 04:12 1315 1477 1.54 N19W36 2011/08/04 03:41 2011/08/04 03:57 Delayed
2011/08/09 08:12 1610 1640 1.61 N17W69 2011/08/09 07:48 2011/08/09 08:05 Prompt Short-Delayed
2011/09/06 23:05 575 830 1.73 N14W18 2011/09/06 22:12 2011/09/06 22:20 LLE-Prompt Short-Delayed
2011/09/07 23:05 710 735 2.04 N14W28 2011/09/07 22:32 2011/09/07 22:38 Delayed
2011/09/24 09:48 1936 2235 1.96 N12E60 2011/09/24 09:21 2011/09/24 09:40 LLE-Prompt Short-Delayed
2012/01/23 04:00 2175 2511 1.99 N28W21 2012/01/23 03:38 2012/01/23 03:59 Delayed
2012/01/27 18:27 2508 2541 0.71 N27W78 2012/01/27 17:37 2012/01/27 18:37 Delayed
2012/03/05 04:00 1531 1627 0.52 N17E52 2012/03/05 03:17 2012/03/05 04:09 Delayed
2012/03/07 00:24 2684 3146 2.38 N17E27 2012/03/07 00:02 2012/03/07 00:24 Delayed
2012/03/09 04:26 950 1229 0.66 N15W03 2012/03/09 03:22 2012/03/09 03:53 No-Prompt Delayed
2012/03/10 18:00 1296 1638 0.94 N17W24 2012/03/10 17:15 2012/03/10 17:44 Delayed
2012/05/17 01:48 1582 1596 1.21 N11W76 2012/05/17 01:25 2012/05/17 01:47 Delayed
2012/06/03 18:12 772 786 1.87 N16E38 2012/06/03 17:48 2012/06/03 17:55 LLE-Prompt Short-Delayed
2012/07/06 23:24 1828 1907 4.54 S13W59 2012/07/06 23:01 2012/07/06 23:08 Delayed
2012/08/06 05:12 198 199 0.66 S14E84 2012/08/06 04:33 2012/08/06 04:38 LLE-Prompt
2012/11/13 02:24 980 1002 2.78 S25E46 2012/11/13 01:58 2012/11/13 02:04 Prompt
2013/04/11 07:24 861 1369 1.09 N09E12 2013/04/11 06:55 2013/04/11 07:16 No-Prompt Short-Delayed
2013/05/13 02:00 1270 1270 0.88 N11E90 2013/05/13 01:53 2013/05/13 02:17 Delayed
2013/05/13 16:07 1850 1852 1.82 N11E85 2013/05/13 15:48 2013/05/13 16:05 Delayed
2013/05/14 01:25 2625 2645 4.01 N08E77 2013/05/14 01:00 2013/05/14 01:11 No-Prompt Delayed
2013/05/15 01:48 1366 1408 0.71 N12E64 2013/05/15 01:15 2013/05/15 01:48 No-Prompt Delayed
2013/10/25 08:12 587 599 1.25 S08E73 2013/10/25 07:53 2013/10/25 08:01 Delayed
2013/10/28 02:24 695 726 0.55 N04W66 2013/10/28 01:41 2013/10/28 02:03 LLE-Prompt
2013/10/28 04:48 1201 1270 2.35 N08W71 2013/10/28 04:32 2013/10/28 04:41 LLE-Prompt
2013/10/28 15:36 812 1098 2.29 S06E28 2013/10/28 15:07 2013/10/28 15:15 Delayed
2013/10/28 21:25 771 777 1.44 N07W83 2013/10/28 20:48 2013/10/28 20:57 LLE-Prompt
2014/01/07 18:24 1830 2246 1.34 S15W11 2014/01/07 18:04 2014/01/07 18:32 Delayed
2014/02/25 01:25 2147 2153 3.59 S12E82 2014/02/25 00:39 2014/02/25 00:49 LLE-Prompt Delayed
2014/06/10 13:30 1469 1473 1.53 S17E82 2014/06/10 12:36 2014/06/10 12:52 LLE-Prompt Delayed
2014/06/11 09:24 829 915 2.18 S18E65 2014/06/11 08:59 2014/06/11 09:06 Short-Delayed
2014/09/10 18:00 1267 1652 1.15 N14E02 2014/09/10 17:21 2014/09/10 17:45 Short-Delayed
2015/06/21 02:36 1366 1740 0.97 N12E16 2015/06/21 02:06 2015/06/21 02:36 Prompt Delayed
2015/06/25 08:36 1627 1805 2.15 N09W42 2015/06/25 08:02 2015/06/25 08:16 Delayed
2017/09/06 12:24 1571 1819 3.37 S08W33 2017/09/06 11:53 2017/09/06 12:02 Delayed
2017/09/10 16:00 3163 3163 1.70 S09W92 2017/09/10 15:35 2017/09/10 16:06 Prompt Delayed
Gamma-ray Components are taken from <cit.>.
§ CYCLE 24 HCMES WITH TYPE II RADIO BURSTS AND ON-DISK X-RAY FLARES
Table <ref> contains the data for the cycle-24 HCMEs and X-ray flares used in Tables <ref> and <ref>. Columns 1–7 are the same as in Table <ref>. Columns 8–9 list the onset times of the reported metric and DH type II radio bursts. The DH type II onset times are listed for Wind/WAVES, except on 03 August 2011, when only STEREO-A/WAVES detected a DH type II burst. Column 10 indicates on which spacecraft the WAVES instruments detected a DH type II radio burst (W=Wind, A=STEREO-A, B=STEREO-B, '-'=no report). Columns 11–12 mark whether the event had a major SEP event (G=GOES, A=STEREO-A, B=STEREO-B, '-'=data gap) and an SGRE event ('Delayed' component detected), respectively.
Table B. Cycle 24 HCMEs with type II radio bursts
Columns: HCME first observation (UT); Sky Speed (km s^-1); Space Speed (km s^-1); Acceleration (km s^-2); Location; Flare Onset (UT); Flare Peak (UT); m-Type II onset (UT); DH-Type II onset (UT); WAVES S/C (W/A/B); SEP (G/A/B); SGRE
2010/08/01 13:42 850 1030 0.34 N20E36 2010/08/01 07:36 2010/08/01 08:26 2010/08/01 09:20 W/A/B 0/-/0 0
2010/08/07 18:36 871 1102 0.63 N11E34 2010/08/07 17:55 2010/08/07 18:24 2010/08/07 18:08 2010/08/07 18:35 W/A/B 0/0/1 0
2010/08/14 10:12 1205 1280 0.51 N17W52 2010/08/14 09:23 2010/08/14 10:05 2010/08/14 09:52 -/-/- 1/0/0 0
2011/02/14 18:24 326 544 1.51 S20W04 2011/02/14 17:20 2011/02/14 17:26 2011/02/14 17:28 -/-/- 0/0/0 0
2011/02/15 02:24 669 960 1.33 S20W10 2011/02/15 01:44 2011/02/15 01:56 2011/02/15 01:52 2011/02/15 02:10 W/A/B 0/0/1 0
2011/03/07 20:00 2125 2223 1.28 N30W48 2011/03/07 19:43 2011/03/07 20:12 2011/03/07 19:54 2011/03/07 20:00 W/A/- 1/1/0 1
2011/06/02 08:12 976 1147 0.64 S19E25 2011/06/02 07:16 2011/06/02 07:46 2011/06/02 08:00 W/A/B 0/0/0 0
2011/06/07 06:49 1255 1321 0.88 S21W54 2011/06/07 06:16 2011/06/07 06:41 2011/06/07 06:25 2011/06/07 06:45 W/A/B 1/0/0 1
2011/06/21 03:16 719 882 0.12 N16W08 2011/06/21 01:22 2011/06/21 03:25 2011/06/21 03:07 W/-/- 0/0/0 0
2011/08/03 14:00 610 785 0.37 N16W30 2011/08/03 13:13 2011/08/03 13:48 2011/08/03 13:35 2011/08/03 13:38 -/A/- 0/0/0 0
2011/08/04 04:12 1315 1477 1.54 N19W36 2011/08/04 03:41 2011/08/04 03:57 2011/08/04 03:54 2011/08/04 04:15 W/A/B 1/0/0 1
2011/08/09 08:12 1610 1640 1.61 N17W69 2011/08/09 07:48 2011/08/09 08:05 2011/08/09 08:01 2011/08/09 08:20 W/-/- 1/0/0 1
2011/09/06 02:24 782 1232 1.37 N14W07 2011/09/06 01:35 2011/09/06 01:50 2011/09/06 01:46 2011/09/06 02:00 W/-/- 0/0/0 0
2011/09/06 23:05 575 830 1.73 N14W18 2011/09/06 22:12 2011/09/06 22:20 2011/09/06 22:19 2011/09/06 22:30 W/A/- 0/0/0 1
2011/09/22 10:48 1905 1905 0.99 N09E89 2011/09/22 10:29 2011/09/22 11:01 2011/09/22 10:39 2011/09/22 11:05 W/-/B 1/1/1 0
2011/09/24 12:48 1915 2018 0.58 N10E56 2011/09/24 12:22 2011/09/24 13:20 2011/09/24 12:50 W/-/B 0/0/0 0
2011/09/24 19:36 972 1076 1.49 N12E42 2011/09/24 19:09 2011/09/24 19:21 2011/09/24 19:14 -/-/- 0/0/1 0
2011/11/09 13:36 907 1012 0.54 N24E35 2011/11/09 13:04 2011/11/09 13:35 2011/11/09 13:11 2011/11/09 13:30 W/-/B 0/0/0 0
2011/11/26 07:12 933 1001 0.60 N17W49 2011/11/26 06:42 2011/11/26 07:10 2011/11/26 07:15 W/A/- 1/1/0 0
2012/01/19 14:36 1120 1269 0.15 N32E22 2012/01/19 13:44 2012/01/19 16:05 2012/01/19 15:00 W/A/B 0/0/1 0
2012/01/23 04:00 2175 2511 1.99 N28W21 2012/01/23 03:38 2012/01/23 03:59 2012/01/23 03:43 2012/01/23 04:00 W/A/- 1/1/1 1
2012/01/27 18:27 2508 2541 0.71 N27W78 2012/01/27 17:37 2012/01/27 18:37 2012/01/27 18:10 2012/01/27 18:30 W/A/B 1/1/0 1
2012/03/05 04:00 1531 1627 0.52 N17E52 2012/03/05 03:17 2012/03/05 04:09 2012/03/05 04:00 W/A/B 0/0/0 1
2012/03/07 00:24 2684 3146 2.38 N17E27 2012/03/07 00:02 2012/03/07 00:24 2012/03/07 00:17 2012/03/07 01:00 W/A/B 1/0/1 1
2012/03/07 01:30 1825 2160 4.00 N15E26 2012/03/07 01:05 2012/03/07 01:14 2012/03/07 01:09 -/-/- 0/0/0 0
2012/03/09 04:26 950 1229 0.66 N15W03 2012/03/09 03:22 2012/03/09 03:53 2012/03/09 03:43 2012/03/09 04:10 W/-/- 0/1/0 1
2012/03/10 18:00 1296 1638 0.94 N17W24 2012/03/10 17:15 2012/03/10 17:44 2012/03/10 17:55 W/A/- 0/0/0 1
2012/03/13 17:36 1884 1931 0.89 N17W66 2012/03/13 17:05 2012/03/13 17:41 2012/03/13 17:15 2012/03/13 17:35 W/A/- 1/0/0 0
2012/04/05 21:25 828 1065 0.66 N18W29 2012/04/05 20:43 2012/04/05 21:10 2012/04/05 21:08 -/-/- 0/0/0 0
2012/04/09 12:36 921 945 0.38 N20W65 2012/04/09 12:02 2012/04/09 12:44 2012/04/09 12:28 2012/04/09 12:20 W/A/- 0/0/0 0
2012/04/23 18:24 528 769 0.99 N14W17 2012/04/23 17:38 2012/04/23 17:51 2012/04/23 17:42 -/-/- 0/0/0 0
2012/05/17 01:48 1582 1596 1.21 N11W76 2012/05/17 01:25 2012/05/17 01:47 2012/05/17 01:31 2012/05/17 01:40 W/A/- 1/0/0 1
2012/07/04 17:24 662 830 2.31 N14W34 2012/07/04 16:33 2012/07/04 16:39 2012/07/04 16:42 2012/07/04 17:00 W/-/- 0/0/0 0
2012/07/06 23:24 1828 1907 4.54 S13W59 2012/07/06 23:01 2012/07/06 23:08 2012/07/06 23:09 2012/07/06 23:10 W/A/- 1/0/0 1
2012/07/12 16:48 885 1405 0.51 S15W01 2012/07/12 16:03 2012/07/12 16:49 2012/07/12 16:25 2012/07/12 16:45 W/-/- 1/0/1 0
2012/07/19 05:24 1631 1631 0.37 S13W88 2012/07/19 04:45 2012/07/19 05:58 2012/07/19 05:24 2012/07/19 05:30 W/-/- 1/0/0 0
2012/07/28 21:12 420 463 0.64 S25E54 2012/07/28 20:44 2012/07/28 20:56 2012/07/28 20:52 -/-/- 0/0/0 0
2012/07/31 11:24 567 605 0.23 N19E59 2012/07/31 10:46 2012/07/31 11:30 2012/07/31 11:04 -/-/- 0/0/0 0
2012/08/13 13:25 435 705 1.68 N22W03 2012/08/13 12:33 2012/08/13 12:40 2012/08/13 12:41 -/-/- 0/0/0 0
2012/08/31 20:00 1442 1495 0.35 S19E50 2012/08/31 19:32 2012/08/31 20:43 2012/08/31 19:42 2012/08/31 20:00 W/A/- 1/0/1 0
2012/09/28 00:12 947 1093 0.87 N09W31 2012/09/27 23:36 2012/09/27 23:57 2012/09/27 23:44 2012/09/27 23:55 W/A/- 1/0/1 0
2012/11/08 02:36 855 855 0.95 N13E89 2012/11/08 02:08 2012/11/08 02:23 2012/11/08 02:21 -/-/- 0/0/0 0
2012/11/21 16:00 529 942 0.79 N05E05 2012/11/21 15:10 2012/11/21 15:30 2012/11/21 15:33 -/-/- 0/0/0 0
2013/03/15 07:12 1063 1366 0.39 N11E12 2013/03/15 06:00 2013/03/15 06:58 2013/03/15 07:00 W/-/- 1/0/0 0
2013/04/11 07:24 861 1369 1.09 N09E12 2013/04/11 06:55 2013/04/11 07:16 2013/04/11 07:02 2013/04/11 07:10 W/-/B 1/0/1 1
2013/05/13 02:00 1270 1270 0.88 N11E90 2013/05/13 01:53 2013/05/13 02:17 2013/05/13 02:10 2013/05/13 02:20 W/-/B 0/0/1 1
2013/05/13 16:07 1850 1852 1.82 N11E85 2013/05/13 15:48 2013/05/13 16:05 2013/05/13 15:57 2013/05/13 16:15 W/A/B 0/0/1 1
2013/05/14 01:25 2625 2645 4.01 N08E77 2013/05/14 01:00 2013/05/14 01:11 2013/05/14 01:07 2013/05/14 01:16 W/A/B 0/0/0 1
2013/05/15 01:48 1366 1408 0.71 N12E64 2013/05/15 01:15 2013/05/15 01:48 2013/05/15 01:37 2013/05/15 01:49 W/-/- 1/0/0 1
2013/05/17 09:12 1345 1412 1.68 N12E57 2013/05/17 08:43 2013/05/17 08:57 2013/05/17 08:50 -/-/- 0/0/0 0
2013/05/22 13:25 1466 1491 0.71 N15W70 2013/05/22 12:57 2013/05/22 13:32 2013/05/22 12:59 2013/05/22 13:10 W/A/B 1/1/0 0
2013/06/28 02:00 1037 1254 0.91 S18W19 2013/06/28 01:36 2013/06/28 01:59 2013/06/28 01:53 W/-/- 0/0/0 0
2013/08/17 19:12 1202 1418 0.54 S05W30 2013/08/17 18:49 2013/08/17 19:33 2013/08/17 18:56 2013/08/17 20:25 W/-/- 0/0/0 0
2013/08/30 02:48 949 1031 0.31 N15E46 2013/08/30 01:51 2013/08/30 02:46 2013/08/30 02:12 2013/08/30 02:34 W/-/- 0/0/0 0
2013/09/29 22:12 1179 1370 0.21 N17W29 2013/09/29 21:43 2013/09/29 23:31 2013/09/29 21:53 2013/09/29 21:53 W/A/B 1/0/0 0
2013/10/22 21:48 459 1070 3.57 N04W01 2013/10/22 21:15 2013/10/22 21:20 2013/10/22 21:21 2013/10/22 21:33 W/-/- 0/0/0 0
2013/10/24 01:25 399 766 1.42 S10E08 2013/10/24 00:21 2013/10/24 00:30 2013/10/24 00:31 -/-/- 0/0/0 0
2013/10/25 08:12 587 599 1.25 S08E73 2013/10/25 07:53 2013/10/25 08:01 2013/10/25 07:59 -/-/- 0/0/1 1
2013/10/25 15:12 1081 1103 1.53 S06E69 2013/10/25 14:51 2013/10/25 15:03 2013/10/25 14:58 2013/10/25 15:08 W/-/B 0/0/0 0
2013/10/28 02:24 695 726 0.55 N04W66 2013/10/28 01:41 2013/10/28 02:03 2013/10/28 02:00 -/-/- 0/0/0 0
2013/10/28 15:36 812 1098 2.29 S06E28 2013/10/28 15:07 2013/10/28 15:15 2013/10/28 15:10 2013/10/28 15:24 W/-/- 0/0/0 1
2013/10/29 22:00 1001 1001 1.39 N05W89 2013/10/29 21:42 2013/10/29 21:54 2013/10/29 21:48 -/-/- 0/0/0 0
2013/11/19 10:36 740 761 1.06 S14W70 2013/11/19 10:14 2013/11/19 10:26 2013/11/19 10:24 2013/11/19 10:39 W/-/- 0/0/0 0
2013/12/07 07:36 1085 1165 1.62 S16W49 2013/12/07 07:17 2013/12/07 07:29 2013/12/07 07:27 2013/12/07 07:43 W/-/- 0/0/0 0
2014/01/07 18:24 1830 2246 1.34 S15W11 2014/01/07 18:04 2014/01/07 18:32 2014/01/07 18:17 2014/01/07 18:33 W/A/B 1/1/1 1
2014/01/20 22:00 721 750 0.18 S07E67 2014/01/20 21:39 2014/01/20 22:49 2014/01/20 22:24 W/-/- 0/0/0 0
2014/02/20 08:00 948 960 0.53 S15W73 2014/02/20 07:26 2014/02/20 07:56 2014/02/20 07:45 2014/02/20 08:05 W/-/- 1/0/0 0
2014/02/25 01:25 2147 2153 3.59 S12E82 2014/02/25 00:39 2014/02/25 00:49 2014/02/25 00:56 2014/02/25 00:56 W/A/B 1/1/1 1
2014/03/20 04:36 740 921 1.10 S14E35 2014/03/20 03:42 2014/03/20 03:56 2014/03/20 03:52 -/-/- 0/0/0 0
2014/03/29 18:12 528 679 0.87 N11W32 2014/03/29 17:35 2014/03/29 17:48 2014/03/29 17:53 2014/03/29 17:59 W/-/- 0/0/0 0
2014/04/02 13:36 1471 1564 0.55 N11E53 2014/04/02 13:18 2014/04/02 14:05 2014/04/02 13:23 2014/04/02 13:42 W/-/B 0/0/1 0
2014/04/18 13:25 1203 1359 0.71 S20W34 2014/04/18 12:31 2014/04/18 13:03 2014/04/18 12:55 2014/04/18 13:05 W/-/- 1/0/0 0
2014/06/10 13:30 1469 1473 1.53 S17E82 2014/06/10 12:36 2014/06/10 12:52 2014/06/10 12:58 W/-/B 0/0/1 1
2014/07/08 16:36 773 841 1.00 N12E56 2014/07/08 16:06 2014/07/08 16:20 2014/07/08 16:14 -/-/- 0/-/0 0
2014/08/01 18:36 789 1256 1.16 S10E11 2014/08/01 17:55 2014/08/01 18:13 2014/08/01 18:18 2014/08/01 18:58 W/-/- 0/0/0 0
2014/08/22 11:12 600 993 1.18 N12E01 2014/08/22 10:13 2014/08/22 10:27 2014/08/22 10:37 W/-/- 0/-/0 0
2014/08/24 12:36 551 569 0.56 S07E75 2014/08/24 12:00 2014/08/24 12:17 2014/08/24 12:14 -/-/- 0/-/0 0
2014/08/25 15:36 555 697 0.46 N05W36 2014/08/25 14:46 2014/08/25 15:11 2014/08/25 15:08 2014/08/25 15:20 W/-/- 0/-/0 0
2014/09/09 00:06 920 1080 0.33 N12E29 2014/09/08 23:34 2014/09/09 00:29 2014/09/09 00:05 W/-/- 0/0/0 0
2014/09/10 18:00 1267 1652 1.15 N14E02 2014/09/10 17:21 2014/09/10 17:45 2014/09/10 17:45 W/-/- 1/-/0 1
2014/12/17 05:00 587 855 0.46 S20E09 2014/12/17 04:20 2014/12/17 04:51 2014/12/17 04:44 2014/12/17 05:00 W/-/- 0/-/- 0
2014/12/19 01:04 1195 1513 1.48 S11E15 2014/12/18 21:41 2014/12/18 21:58 2014/12/18 22:22 2014/12/18 22:31 W/-/- 0/-/- 0
2014/12/21 12:12 669 906 0.28 S14W25 2014/12/21 11:24 2014/12/21 12:17 2014/12/21 12:05 W/-/- 0/-/- 0
2015/02/09 23:24 1106 1148 0.53 N12E61 2015/02/09 22:59 2015/02/09 23:35 2015/02/09 23:14 -/-/- 0/-/- 0
2015/03/07 22:12 1261 1304 0.59 S19E74 2015/03/07 21:45 2015/03/07 22:22 2015/03/07 21:57 -/-/- 0/-/- 0
2015/03/10 00:00 995 1081 0.75 S18E45 2015/03/09 23:29 2015/03/09 23:53 2015/03/10 00:05 2015/03/10 00:10 W/-/- 0/-/- 0
2015/03/10 03:36 1040 1156 3.85 S15E40 2015/03/10 03:19 2015/03/10 03:24 2015/03/10 03:28 -/-/- 0/-/- 0
2015/03/15 01:48 719 932 0.27 S22W25 2015/03/15 01:15 2015/03/15 02:13 2015/03/15 01:27 -/-/- 0/-/- 0
2015/04/23 09:36 857 864 0.29 N12W89 2015/04/23 09:18 2015/04/23 10:07 2015/04/23 09:22 -/-/- 0/-/- 0
2015/05/05 22:24 715 721 2.00 N15E79 2015/05/05 22:05 2015/05/05 22:11 2015/05/05 22:12 2015/05/05 22:24 W/-/- 0/-/- 0
2015/05/13 18:48 438 730 1.35 N13W16 2015/05/13 18:09 2015/05/13 18:18 2015/05/13 18:21 -/-/- 0/-/- 0
2015/06/18 17:24 1305 1398 0.35 N15E50 2015/06/18 16:30 2015/06/18 17:36 2015/06/18 17:42 W/-/- 0/-/- 0
2015/06/21 02:36 1366 1740 0.97 N12E16 2015/06/21 02:06 2015/06/21 02:36 2015/06/21 02:24 2015/06/21 02:33 W/-/- 1/-/- 1
2015/06/22 18:36 1209 1573 0.60 N12W08 2015/06/22 17:39 2015/06/22 18:23 2015/06/22 18:05 2015/06/22 18:20 W/-/- 0/-/- 0
2015/06/25 08:36 1627 1805 2.15 N09W42 2015/06/25 08:02 2015/06/25 08:16 2015/06/25 08:16 2015/06/25 08:35 W/-/- 1/-/- 1
2015/08/22 07:12 547 817 1.36 S15E20 2015/08/22 06:39 2015/08/22 06:49 2015/08/22 06:50 2015/08/22 07:07 W/-/- 0/-/- 0
2015/09/20 18:12 1239 1458 0.78 S22W50 2015/09/20 17:32 2015/09/20 18:03 2015/09/20 18:16 2015/09/20 18:23 W/-/- 0/-/- 0
2015/11/04 14:48 578 987 0.01 N09W04 2015/11/04 14:08 2015/11/05 13:31 2015/11/04 13:43 2015/11/04 14:07 W/-/- 0/-/- 0
2015/12/16 09:36 579 937 0.43 S13W04 2015/12/16 08:27 2015/12/16 09:03 2015/12/16 08:45 W/-/- 0/-/- 0
2015/12/28 12:12 1212 1471 0.29 S23W11 2015/12/28 11:20 2015/12/28 12:45 2015/12/28 11:50 W/-/- 0/-/- 0
2016/01/01 23:24 1730 1734 2.22 S25W82 2016/01/01 23:58 2016/01/02 00:11 2016/01/01 23:21 2016/01/02 00:55 W/A/- 1/0/- 0
2016/02/11 21:17 719 1174 0.43 N11W07 2016/02/11 20:18 2016/02/11 21:03 2016/02/11 20:35 -/-/- 0/0/- 0
2017/04/18 19:48 926 932 0.32 N14E77 2017/04/18 19:21 2017/04/18 20:10 2017/04/18 19:49 -/-/- 0/1/- 0
2017/07/14 01:25 1200 1422 0.38 S06W29 2017/07/14 01:07 2017/07/14 02:09 2017/07/14 01:18 W/-/- 1/0/- 0
2017/09/04 20:36 1418 1831 6.10 S10W12 2017/09/04 20:28 2017/09/04 20:33 2017/09/04 20:42 2017/09/04 20:27 W/-/- 1/0/- 0
2017/09/06 12:24 1571 1819 3.37 S08W33 2017/09/06 11:53 2017/09/06 12:02 2017/09/06 12:02 2017/09/06 12:05 W/A/- 1/0/- 1
2017/09/10 16:00 3163 3163 1.70 S09W92 2017/09/10 15:35 2017/09/10 16:06 2017/09/10 16:08 2017/09/10 16:02 W/A/- 1/1/- 1
The SEP column gives major SEP events observed by GOES, and the SGRE column marks gamma-ray flares with a 'Delayed' component observed by Fermi/LAT <cit.>.
|
http://arxiv.org/abs/2307.04300v1 | 20230710013458 | Blockwise Key Distillation in Satellite-based Quantum Key Distribution | [
"Minu J. Bae",
"Nitish K. Panigrahy",
"Prajit Dhara",
"Walter O. Krawec",
"Alexander Russell",
"Don Towsley",
"Bing Wang"
] | quant-ph | [
"quant-ph"
] |
Blockwise Key Distillation in Satellite-based Quantum Key Distribution
Minu J. Bae
University of Connecticut
Storrs CT, USA
[email protected]
Nitish K. Panigrahy
University of Massachusetts
Amherst MA, USA
[email protected]
Prajit Dhara
University of Arizona
Tucson AZ, USA
[email protected]
Walter O. Krawec
University of Connecticut
Storrs CT, USA
[email protected]
Alexander Russell
University of Connecticut
Storrs CT, USA
[email protected]
Don Towsley
University of Massachusetts
Amherst MA, USA
[email protected]
Bing Wang
University of Connecticut
Storrs CT, USA
[email protected]
August 12, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Free-space satellite communication has significantly lower photon loss than terrestrial communication via optical fibers. Satellite-based quantum key distribution (QKD) leverages this advantage and provides a promising direction in achieving long-distance inter-continental QKD. Satellite channels, however, can be highly dynamic, due to various environmental factors and time-of-the-day effects, leading to heterogeneous noise over time. In this paper, we compare two key distillation techniques for satellite-based QKD. One is the traditional non-blockwise strategy that treats all the signals as a whole; the other is a blockwise strategy that divides the signals into individual blocks that have similar noise characteristics and processes them independently. Through extensive simulation in a wide range of settings, we show trends in optimal parameter choices and when one strategy provides better key generation rates than the other. Our results show that the blockwise strategy can lead to up to 5% key rate improvement (leading to on average 1.9×10^7 more key bits per day) when considering two types of blocks, i.e., for nighttime and daytime, respectively. The blockwise strategy only requires changes in the classical post-processing stage of QKD and can be easily deployed in existing satellite systems.
§ INTRODUCTION
Quantum cryptography, and specifically Quantum Key Distribution (QKD), holds several promising benefits. In particular, it can achieve certain cryptographic tasks without relying on computational assumptions, unlike much of our current-day secure communication infrastructure based on public key systems <cit.>. However, several challenges remain that limit its effectiveness and the rate of adoption of this technology. One challenge is that long-distance quantum communication links are required, and greater key generation rates are necessary to allow either faster refreshing of AES keys or, ideally, the streaming of a true one-time pad at a rate fast enough to keep up with the communication stream. Due to the exponential loss in optical fibers <cit.>, many researchers are turning to satellite-based quantum communication to solve this long-distance problem, since free-space satellite links have much lower loss than fiber channels.
A satellite can help to distribute long-range entangled pairs to two ground stations, thus building a QKD network over much longer distances than a point-to-point ground fiber network (without repeaters) could achieve on its own <cit.>.
Several experimental demonstrations of quantum communication through satellites have been conducted recently <cit.>, showing their technological feasibility. Despite this interest, several questions remain, especially in terms of optimizing overall QKD system performance and speed. Since the hardware of these systems would be difficult to change after launch, it is important to investigate what can be done on the classical stage of the protocol, without forcing users to invest or install new quantum hardware. Every QKD protocol consists of two stages: a quantum communication stage and a classical post-processing stage. The first is the only one that requires quantum-capable hardware; the second involves only classical communication and can more easily be altered than the first.
In this work, we investigate the classical post-processing stage of a standard QKD protocol, specifically BB84 <cit.> (or, rather, its entanglement-based version E91 <cit.>), in an attempt to maximize the performance of a satellite system without altering the quantum layer of the network. Satellite channels are highly dynamic due to time-of-day effects and varying weather conditions <cit.>. For instance, nighttime and daytime have different levels of background/thermal photons that affect the fidelity and loss of the entangled photons, while weather conditions in the atmospheric layers add further noise. We must therefore carefully consider the impact of these factors on QKD. In addition, it is necessary to determine the optimal pump power for generating entangled photon pairs and the sampling rate used to estimate noise in the finite-key analysis.
In this work, we compare two classical post-processing strategies for satellite systems. In one instance, which we call “blockwise post-processing,” we divide an entire signal into several individual “blocks” and process them independently; these blocks should have similar noise/loss characteristics.
The other strategy is the more traditional method of treating the entire quantum signal as a single unit and processing accordingly (which we call “non-blockwise post-processing”).
For both post-processing methods, we investigate optimal parameter settings for various satellite configurations and operating conditions.
We make several contributions in this work. To our knowledge, we are the first to evaluate and compare
these two different post-processing methods in both the finite key and asymptotic scenario and under various satellite operating conditions.
We also conduct a rigorous evaluation of QKD satellite operation using extensive simulations with realistic noise and loss models,
showing trends in optimal parameter choices and when, exactly, the two different post-processing methods should be used to optimize overall key generation rates. For instance, we show that the blockwise scheme can lead to 5% higher key rate than the non-blockwise scheme when the satellite is at a high altitude, leading to on average 1.9×10^7 more key bits per day.
We note that all our investigations concern the classical stage of the QKD protocol; any alterations our work suggests may benefit a satellite QKD system can therefore be easily adopted by current systems, even after a satellite's launch. In addition, while focusing on satellite-based QKD, our findings also apply to terrestrial QKD network scenarios where the raw key bits have significant dynamics, e.g., because they are created over disparate network paths.
§ PRELIMINARIES
§.§ Notation and Definitions
In this section, we introduce the definitions and notation used throughout this paper. A density operator is a Hermitian positive semi-definite operator of unit trace acting on a Hilbert space ℋ. For a given pure quantum state |φ⟩, its density operator is |φ⟩⟨φ|, which we simply denote as [φ]. We define h(x) as an extended binary entropy function, namely, h(x)=0 if x < 0, h(x) = -x log_2(x) - (1-x) log_2(1-x) if 0 ≤ x ≤ 1/2, and h(x)=1 if x > 1/2.
§.§ Satellite-based QKD
We consider a satellite that orbits around the Earth at a certain altitude. The satellite has photon sources that generate entangled pairs, and sends them to a pair of ground stations. Specifically, for each entangled pair, the satellite transmits one photon in the pair to one ground station using a down-link optical channel, thus creating a dual downlink entanglement distribution as shown in Fig. <ref>. The two ground stations run an entanglement based protocol (e.g., E91 <cit.>) for QKD.
In the rest of this paper, we assume that the two ground stations are located on the equator. The satellite orbits the Earth in a west-to-east direction above the equator, in alignment with the Earth's rotation. We focus on low-Earth-orbit (LEO) satellites (i.e., altitudes between 250 and 2000 km) that benefit from proximity to the Earth's surface and have been demonstrated experimentally
<cit.>.
To transmit photons successfully to a ground station, the elevation angle, i.e., the angle between the satellite and the horizon at the ground station, needs to exceed a threshold, θ_e. For successful delivery of an entangled pair, the elevation angles between the satellite and the two ground stations must both exceed θ_e, as illustrated in Fig. <ref>. In this figure, the sector between G_iL and G_iR represents the region where the elevation angle between the satellite and ground station i exceeds θ_e, i=1,2. The intersection of these two sectors is the region where the satellite can transmit entanglement pairs successfully to both ground stations.
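For intuition, this visibility condition can be checked with simple spherical geometry. The sketch below is illustrative only (the function name and example numbers are ours, assuming a spherical Earth and a satellite lying in the plane containing both stations); it computes the elevation angle seen from a station whose ground-arc separation from the sub-satellite point is given.

```python
import numpy as np

R_E = 6371.0  # Earth radius [km]

def elevation_angle_deg(altitude_km, central_angle_rad):
    """Elevation of the satellite as seen from a ground station separated from
    the sub-satellite point by the given Earth-central angle (spherical Earth)."""
    r = R_E + altitude_km
    horizontal = r * np.sin(central_angle_rad)      # component along the local horizon
    vertical = r * np.cos(central_angle_rad) - R_E  # component along the local vertical
    return np.degrees(np.arctan2(vertical, horizontal))

# Example: a 500 km satellite midway between two stations 600 km apart,
# so each station is ~300 km of ground arc from the sub-satellite point.
gamma = 300.0 / R_E
print(elevation_angle_deg(500.0, gamma))  # ~57 deg, well above the threshold theta_e
```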
Our SatQKD setup employs the dual-downlink configuration <cit.>: the satellite acts as the transmitter, generating entangled photon pairs on board, and each ground station acts as a receiver (Fig. <ref>-(c)). Since the beam-wander effect induced by atmospheric turbulence (also known as the shower-curtain effect <cit.>) arises only near the end of the transmission route close to the surface of the Earth, a downlink exhibits a lower beam-wander effect and higher link efficiency than an uplink <cit.>.
§.§ Quantum Key Distribution
The goal of a key distribution protocol is to generate a secret key shared between two parties, Alice and Bob. Using classical communication alone, establishing an information-theoretically secure shared key between Alice and Bob is impossible; the situation changes, however, once quantum mechanics is applied to key distribution. Bennett and Brassard introduced the first quantum key distribution (QKD) scheme, which utilizes an insecure quantum channel and an authenticated classical channel and is known as the BB84 protocol <cit.>. Ekert introduced its entanglement-based equivalent <cit.>, called the E91 protocol. Since it is impossible to observe a quantum system without disturbing its state, QKD admits unconditional security proofs, and tampering by an adversary can be detected based on the laws of physics. Renner introduced a security proof of QKD using entropic uncertainty relations, error correction, and privacy amplification <cit.>. In brief: given a classical-quantum state ρ_SE obtained after the measurement of a quantum state, error correction and privacy amplification are applied to the S register, mapping the S record onto a register Y through a randomly chosen two-universal hash function and yielding a state σ_YE. If the output is ℓ bits long, the following relation was shown in <cit.>:
‖σ_YE - I_Y/2^ℓ⊗σ_E‖ ≤ 2^-1/2(H_∞^ϵ(S|E)_ρ - ℓ) + 2ϵ,
where H_∞^ϵ is the smooth conditional min-entropy and I_Y/2^ℓ is the maximally mixed state on the ℓ-bit register Y, i.e., a uniformly random key that is independent of the adversarial party E.
§.§ Entanglement Sources
We assume that the satellite utilizes spontaneous parametric down-conversion (SPDC) based dual-rail polarization entanglement sources that are well-studied and widely used <cit.>.
In such entanglement sources, a two-qubit entangled Bell state requires four orthogonal modes (i.e., two pairs of modes) to encode.
The output state can be written as follows <cit.>:
|φ^±⟩ = N_0[√(p(0))|0,0;0,0⟩+√(p(1)/2)(|1,0;0,1⟩±|0,1;1,0⟩)+√(p(2)/3)(|2,0;0,2⟩±|1,1;1,1⟩+|0,2;2,0⟩)],
whereN_0is a normalization factor, namely:
N_0 = 1/√(p(0)+p(1)+p(2)) = (N_s+1)^2/√(6N_s^2+4N_s+1)
andp(n)is the probability of generating an-photon term in each pair of mode, given by:
p(n) = (n+1)N_s^n/(N_s+1)^n+2,
whereN_sis pump power, i.e., the mean photon number per mode. The entangled pair from the SPDC dual-rail polarization source is as follows:
|Ψ^±⟩ = 1/√(2)(|1,0;0,1⟩±|0,1;1,0⟩),
the vacuum state is|0,0;0,0⟩, and all the other terms are spurious two-photon states. In Eq. (<ref>), we assume thatN_sis low (e.g., below0.2) and hencep(n)forn>2is negligible and is omitted in the quantum state.
The pump power is an important configurable parameter that can be tuned to maximize the entanglement rate, while adhering to a desired fidelity threshold. In Section <ref>, we show that the pump power impacts two important factors, success probability and fidelity, for QKD, and needs to be chosen carefully.
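As a small numerical illustration of the expressions above (the helper names are ours), the sketch below evaluates p(n) and the normalization N_0 as a function of the pump power N_s, showing how the spurious two-photon contribution p(2) grows relative to the desired single-pair term p(1):

```python
import numpy as np

def p_n(n, Ns):
    """Probability of an n-photon term per pair of modes of the SPDC source."""
    return (n + 1) * Ns**n / (Ns + 1)**(n + 2)

def normalization(Ns):
    """Normalization factor N_0 when the state is truncated at two photons."""
    return (Ns + 1)**2 / np.sqrt(6 * Ns**2 + 4 * Ns + 1)

for Ns in (0.01, 0.05, 0.1):
    p1, p2 = p_n(1, Ns), p_n(2, Ns)
    # p(2)/p(1) = 1.5*Ns/(Ns+1), i.e. roughly 1.5*Ns, which motivates keeping
    # the pump power low even though p(1) itself grows with Ns.
    print(f"Ns={Ns:.2f}  p(1)={p1:.4f}  p(2)={p2:.5f}  "
          f"p(2)/p(1)={p2 / p1:.3f}  N0={normalization(Ns):.4f}")
```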
§ MODELS
§.§ Orbit Model
In this paper, we consider two ground stations located on the equator and define the satellite orbit model as follows: the satellite orbits Earth in a west-to-east direction above the equator, in alignment with Earth's rotation (Fig. <ref>). We consider a low-orbit satellite at altitudes of 500, 800, and 1000 km, with baseline distances of 600, 1200, and 1800 km between the two ground stations on the equator.
§.§ Loss Model
The analysis of the satellite quantum communication channel, realized via free-space optical (FSO) transmission, must account for the characteristics of the optical channel. The transmission loss for each qubit (comprising a pair of modes) scales quadratically with the free-space propagation length and exponentially with the aerial propagation length <cit.>. We incorporate the effect of transmission loss by treating FSO transmission as a Bosonic pure-loss channel acting on each mode of the quantum state described in Eq. (<ref>).
Most generally, the Bosonic pure-loss channel reduces the mean photon number of the input state; additionally, an input pure quantum state becomes a mixed state for non-zero loss. In the present context, this reduces the probability of successfully delivering the entangled pairs to both ground stations and degrades the fidelity (to the ideal Bell state) of the delivered entangled photons <cit.>. In particular, the probability of receiving a perfect Bell pair becomes necessarily smaller than p(1) in Eq. (<ref>). More details of the loss model can be found in <cit.>.
§.§ Noise Model
Atmospheric FSO transmission channels have to contend with a variety of noise processes. In this manuscript, we limit our noise estimate to unfiltered background photons. Any excess photons in the channel will cause false events (i.e., where the qubits of the Bell pair were lost) to be treated as successes, thereby impacting the fidelity of the entangled pair that should have been delivered. The main contributor of the background photon flux (for example from imperfect filtering by the ground receiver) is commonly associated with the brightness of the sky and varies drastically depending on the time of the day. More specifically, the level of background photon flux is at its highest during clear daylight, and at its lowest during clear nighttime. In our work, we consider these two setups (i.e., daytime and nighttime) consistent with the state-of-the-art <cit.> and compute the fidelity of the generated entangled state between two ground stations by modeling the arrival of unfiltered background photons as detector dark click events.
We utilize the fidelity F obtained using the above model to estimate the noise Q for the QKD key rate analysis (see Section <ref>).
Specifically, we assume a depolarizing channel, and the state is modeled as
F|Ψ^+⟩⟨Ψ^+| + (1-F) I/4,
where|Ψ^+⟩is the desired Bell state. In that case,Q=(1-F)/2.
For each block t∈{night, day}, the block-level noise used in the secret key distillation is computed in the same way from the block's average fidelity, namely
Q_ave^(t) = (1-fid_ave^(t))/2,
where fid_ave^(t) is given in Eq. (<ref>).
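To see where this relation comes from, note that under the depolarizing model above the ideal Bell-state component never produces a bit error when the two stations measure in matched bases, while the maximally mixed component I/4 yields the expected correlation only half of the time. A one-line check gives
Q = F · 0 + (1-F) · 1/2 = (1-F)/2,
which is the expression used for both the overall noise Q and the per-block noise Q_ave^(t).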
§ BLOCKWISE KEY DISTILLATION
We assume the E91 protocol<cit.> is used for QKD between the pair of ground stations. This protocol, like most QKD protocols, consists of a quantum communication stage, followed by a classical post-processing stage. In the quantum communication stage,Mentangled pairs are sent to Alice and Bob (some of which may be lost in transmission due to channel loss). Alice and Bob then choose, independently at random, whether to measure their particles in theZorXbasis, recording their results. Later, Alice and Bob will disclose, over the authenticated classical channel, their basis choice, discarding all iterations that do not match. The resulting strings are called the users' raw key. This concludes the quantum communication stage—the output is a raw key of sizeN-bits (withN ≤M), which may be partially correlated (errors in the channel or adversarial noise may cause errors in Alice and Bob's raw key) and partially secret (Eve may have some non-negligible side information on the raw key based on her attack). Next, the classical post-processing stage will further process the raw key to produce a secret key. First the error rate in the raw key is determined. After this, error correction, and finally privacy amplification protocols are run. The output of this stage is the final secret key of sizeℓ≤Nbits. An important metric for the entire QKD protocol is its key rate, namely the ratio of the secret key size (ℓ) to the total number of signals sent (M).
§.§ Blockwise vs. Non-blockwise Schemes
In this work we analyze and compare two different classical post-processing strategies: blockwise and non-blockwise. The latter, non-blockwise, is the traditional QKD scenario whereby the raw key ofNbits is treated as a single system from which error correction and privacy amplification are run. The former, blockwise, divides the raw key up into smaller systems, or blocks. This division can be arbitrary, but to potentially provide a performance boost, each block should have homogenous channel statistics (especially in terms of noise—that is, while each block may have very different noise levels, the raw keys within a single block should be similar). In general, if there is a significant difference in the noise levels of the blocks, one can expect blockwise to produce a strictly higher key rate due to the concavity of entropy as discussed below.
First, consider the standard non-blockwise post-processing where all the raw key bits are considered together and is agnostic to the dynamics of the quantum channel. Specifically, in this strategy, a random subset ofm < N/2bits is randomly chosen from the set ofNbits in the raw key, and Alice and Bob's measurement results are used to estimate the noise of the entire block, denoted asQ. This is defined to be the relative number of bit-flips in Alice and Bob's raw key. In this work, we assume the noise is modeled by a depolarizing channel and thus (1) the noise is the same in bothZandXbases and (2) we haveQ = (1-F)/2, whereFis the fidelity obtained using the noise model in Section <ref>.
Under the blockwise post-processing strategy,
users break the raw key into blocks of signals based on operating conditions, where each block is expected to have similar noise characteristics.
In our case, for simplicity, we divide the total raw key into two classes of blocks: one from day and one from night operating conditions as it is expected that the noise in the daylight will be higher than at night. However, blockwise processing can be applied to any arbitrary number of blocks, so long as there are a sufficient number of raw key bits in each block.
More formally, letrk_A, rk_B∈{0,1}^Nbe the raw key for Alice and Bob, and letB^A_ibe a single block such thatrk_A = B^A_1B^A_2⋯B^A_k(i.e.,Bis the bitstring concatenation of each block). Similarly forrk_Bwhich will have the same decomposition. Then, a random subsett_iof sizem_iis chosen for each blockB_iand measurement results in that block, indexed by that subset, are disclosed over the authenticated channel to determine the noise present in each block, denotedQ_i. Finally, error correction and privacy amplification are run on each block separately, distillingksecret keyss_1throughs_kwhich are later concatenated into a single secret keys. The size of eachs_idepends onQ_i.
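To make the per-block sampling step concrete, the following sketch (the function and variable names are illustrative and not from the paper) estimates the error rate Q_i of one block from a disclosed random subset of size m_i and removes the disclosed positions from the raw key:

```python
import random

def estimate_block_qber(block_a, block_b, m_i, rng=random):
    """Estimate the QBER of one raw-key block by disclosing a random sample of
    m_i positions; returns the estimate and the undisclosed remainder of the block."""
    assert len(block_a) == len(block_b) and m_i < len(block_a)
    test = set(rng.sample(range(len(block_a)), m_i))
    q_i = sum(block_a[i] != block_b[i] for i in test) / m_i
    keep_a = [b for i, b in enumerate(block_a) if i not in test]
    keep_b = [b for i, b in enumerate(block_b) if i not in test]
    return q_i, keep_a, keep_b
```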
§.§ Key Rate Analysis
As the main goal of this paper is to evaluate and compare the effectiveness of both blockwise and non-blockwise key distillation strategies, we require a performance metric. For this, we will use the key rate of the protocol, defined to be the ratio of the number of final secret key bits to the total number of attempted entanglement pairs sent by the source. Finally, we will also consider both the asymptotic scenario, where the number of signals sent approaches infinity giving us upper-bounds on the key-rates, and the more realistic finite-key scenarios,
where we will also have to take into account imperfect sampling and other imprecisions. We note that, for this work, we consider idealized photon sources which may emit zero or one photon, but never two. That is, we setp(2) = 0in Eq. <ref>. In our evaluations,p(2)is generally small under the optimal pump power and cannot drastically decrease the key rate, as we shall show in Section <ref>. However, a rigorous blockwise and non-blockwise analysis for multiphoton sources remains an interesting future challenge.
To compute the key rate of the protocol, both in the blockwise and non-blockwise cases, we turn to analysis methods derived in <cit.> which utilize entropic uncertainty <cit.>.
Key rate analysis for non-blockwise scheme. First consider the non-blockwise case. Here, the entire raw-key is treated as a single system from which a random sample of sizemis chosen (leavingn=N-mbits for the raw key). This sample allows parties to estimate the error in the entire raw key denoted asQ. From this, the remaining signals are run through an error correction process (leaking an additionalλ_ECbits to the adversary). A test is then run by hashing the error corrected raw key and testing correctness between Alice and Bob (which leakslog1/ϵ_corbits to the adversary for user specifiedϵ_cor). Finally, privacy amplification is run, outputting a secret key of sizeℓ≤n. It is guaranteed that, conditioning on not aborting the protocol, the final secret key system and Eve's ancilla (denoted asρ_KE) will satisfy the following:
1/2 ‖ρ_KE - I/2^ℓ⊗ρ_E‖ ≤ ϵ_sec,
whereρ_Eis Eve's system.
That is, the final secret key system will beϵ_secclose (in trace distance) to a truly uniform random key,I/2^ℓ, which is also completely independent of Eve's systemρ_E.
Using results in <cit.>, the non-blockwise case can be shown to have an overall secret key length of:
ℓ_non-block = n(1-h(Q+μ)) - λ_EC - log(2/(ϵ_sec^2 ϵ_cor)).
where ϵ_cor is a security parameter determining the failure rate of the correctness portion of the protocol (i.e., Alice and Bob will have the same secret key, except with probability at most ϵ_cor), and ϵ_sec is a security parameter determining the distance of the final secret key from a truly uniform random key. Above, λ_EC represents the information leaked during error correction; in our later evaluations, we simply set λ_EC = nh(Q+μ). Other realistic settings, such as λ_EC = 1.2nh(Q), can also be used; however, such a setting will not significantly affect our results in later sections, as we are primarily interested in comparing blockwise to non-blockwise. Finally, μ is a result of finite sampling effects and is set to:
μ = √( (n+m)(m+1)/(n m^2) · ln(2/ϵ_sec) ).
The above may be derived from standard classical sampling arguments <cit.>.
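A minimal numerical sketch of the finite-key expressions above is shown below. The helper names and the example inputs are our own illustrative assumptions; we read the μ term as containing ln(2/ϵ_sec), set λ_EC = n·h(Q+μ) as in our evaluations, and use the extended binary entropy that equals 1 above 1/2.

```python
import math

def h(x):
    """Extended binary entropy used in the key-length bounds."""
    if x <= 0.0:
        return 0.0
    if x >= 0.5:
        return 1.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mu(n, m, eps_sec):
    """Finite-sampling correction for a sample of size m out of n + m signals."""
    return math.sqrt((n + m) * (m + 1) / (n * m * m) * math.log(2 / eps_sec))

def key_length_nonblock(n, m, Q, eps_sec=1e-8, eps_cor=1e-15):
    """Non-blockwise secret key length with lambda_EC = n * h(Q + mu)."""
    mu_ = mu(n, m, eps_sec)
    ell = n * (1 - 2 * h(Q + mu_)) - math.log2(2 / (eps_sec**2 * eps_cor))
    return max(0.0, ell)

# Assumed example: 10^7 sifted bits, 5% disclosed for sampling, 2% observed QBER.
N = 10_000_000
m = int(0.05 * N)
print(key_length_nonblock(N - m, m, 0.02))  # roughly 6e6 secret key bits
```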
Key rate analysis for blockwise scheme. In the blockwise case, the setting is similar and we may again use results from <cit.> to distill each sub-block into secret keys independently, and then concatenate the final blockwise secret keys into a single secret key. Here, letB_ibe the size of thei'th block (determined by the user). Now, a random subsett_iof sizem_ifor each blockB_iis chosen. As with the non-blockwise, this sampling subset is used to determine the error rate in the raw key, however, now, it is used only to estimate the error rate in thei'th block of the raw key, denotedQ_i. Error correction, a correctness test, and finally privacy amplification is then performed individually on each block. From this setup, we can compute the secret key size of blockito be:
ℓ_i = (B_i-m_i)(1-h(Q_i+μ_i)) - λ_EC^(i) - log(2/(ϵ_sec^2 ϵ_cor)),
from which we have the following total secret key size:
ℓ_block = ∑_i=1^k n_i(1-h(Q_i+μ_i)) - ∑_i λ_EC^(i) - k log(2/(ϵ_sec^2 ϵ_cor)),
wherekis the total number of blocks andn_i = B_i-m_i. The value ofμ_iis identical toμabove, except replacingmwithm_iandnwithB_i-m_i. Finally,λ_EC^(i)is the amount of information leaked during error correction of blocki. In our evaluations, we set this toλ_EC^(i) = (B_i-m_i)h(Q_i+μ_i).
The above values ofℓcan be used to immediately compute the key rate simply by dividing by the number of attempted entanglement pairs sent by the satellite (we say attempted as the satellite may send out vacuum states which count, detrimentally, to the overall key rate).
To determine theoretical upper-bounds, we also consider the asymptotic scenario, where the number of signals approaches infinity. In this instance, the key rate for the non-blockwise scheme is simply 1-2h(Q), while the key rate for the blockwise scheme converges to ∑_i p_i(1-2h(Q_i)), where p_i is the proportion of total raw key bits used in block i as the size of the raw key approaches infinity.
Note that the above equations give immediate intuition as to why blockwise processing can lead to higher key rates. For non-blockwise processing, the total error Q is the average error over all individual blocks. Due to the concavity of the Shannon entropy, the blockwise key rate can only be higher than, or equal to, that of non-blockwise processing, at least in the asymptotic scenario. In the finite-key scenario, sampling imprecisions lead to other problems and, as we show later, blockwise processing can actually lead to worse results in some settings. Knowing when to use blockwise processing and when to use non-blockwise is an important question to answer if these systems are to be practically deployed.
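The concavity argument can be illustrated numerically with the asymptotic rates above; the block fractions and QBER values below are assumed purely for illustration.

```python
import math

def h(x):
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_nonblock(blocks):
    """blocks: list of (fraction of raw key, QBER); non-blockwise uses the average QBER."""
    q_avg = sum(p * q for p, q in blocks)
    return 1 - 2 * h(q_avg)

def rate_block(blocks):
    """Blockwise distills each block at its own QBER and concatenates the keys."""
    return sum(p * (1 - 2 * h(q)) for p, q in blocks)

# Assumed example: 40% of the raw key from nighttime (1% QBER), 60% from daytime (6% QBER).
blocks = [(0.4, 0.01), (0.6, 0.06)]
print(rate_nonblock(blocks))  # ~0.52
print(rate_block(blocks))     # ~0.54; asymptotically, blockwise is never worse
```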
The blockwise, entanglement-based BB84 (E91) protocol runs as follows:
* A satellite, s, prepares a quantum state |ϕ_0⟩∈ℋ_g_1⊗ℋ_g_2⊗ℋ_E, where ℋ_g_1≅ℋ_g_2≅ℋ_d^⊗ B. Here B is the total number of quantum states generated by the satellite, with B=∑_i=1^k B_i and B_i = m_i+n_i. The g_1 portion is sent to ground station g_1 and the g_2 portion to ground station g_2 through free space and the atmosphere.
* For each block i, station g_1 chooses a random subset t_i of size m_i out of B_i and sends it to station g_2. Both ground stations measure their systems indexed by t_i in the X basis, producing outcomes q_1^(i), q_2^(i)∈𝒜_2^m_i, respectively. These values are revealed to each other via the authenticated channel.
* After the sampling, for each block i, the ground stations (g_1,g_2) measure the remaining n_i signals of their systems in the Z basis, which outputs raw keys r_1^(i) and r_2^(i) of at most n_i bits each. Observations of the vacuum state, |vac⟩, do not contribute to the raw key, so over a lossy channel the keys may be shorter than n_i. The stations then move to the next block and repeat steps 2 and 3 until all k blocks are exhausted.
* After all blocks are processed, g_1 and g_2 run an error correction protocol that corrects the raw key of each block up to its estimated error rate Q_i, which reveals leak_EC bits to Eve.
* Finally, privacy amplification operates on the error-corrected raw keys, resulting in the final secret key.
§.§ Computing the Block Fidelity and Entanglement Transmission Statistics
The average fidelity over each time block (nighttime or daytime) is computed as follows: for each t∈{night, day},
fid_ave^(t) = 1/k ∑_i=1^k fid_ave^(i),
where k is the number of contact rounds in the time block (e.g., k=16), and the average fidelity per contact round is
fid_ave^(i) = fid_tot^(i)/pass,
where fid_tot^(i) = ∑_j=1^pass fid^(i,j) and pass is the contact duration of round i in seconds (e.g., pass=84 sec). We pick different pump powers for nighttime and daytime so as to obtain an optimal fidelity for positive key generation, and use dark click probabilities of Pd = 3×10^-6 at night and 3×10^-3 during the day. The simulation results therefore split into a nighttime part and a daytime part, which generally contain different numbers of delivered entangled pairs and different fidelities (the daytime fidelity is lower due to the larger background noise). The number of entanglements delivered in each time block is, for each t∈{night, day},
B_t = ∑_i=1^k B_t^(i),
where B_t^(i) is the number of entanglements delivered in contact round i of time block t, namely
B_t^(i) = S_# × P_tot^(i),
where S_# = 10^9 is the source generation rate per second and P_tot^(i) = ∑_j=1^pass P_succ^(i,j) is the total success probability accumulated over the contact round. The resulting numbers of entanglements in the nighttime and daytime blocks are denoted N_n and N_d, respectively.
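A compact helper that mirrors this bookkeeping might look as follows (the per-second input format and the function name are our own assumptions):

```python
def block_statistics(per_second, source_rate=1e9):
    """per_second: list of (success_probability, fidelity) tuples, one per second of
    contact accumulated over all passes in a block (night or day).
    Returns B_t = source_rate * sum(P_succ) and the plain per-second average fidelity,
    mirroring the expressions above."""
    if not per_second:
        return 0.0, 0.0
    b_t = source_rate * sum(p for p, _ in per_second)
    fid_ave = sum(f for _, f in per_second) / len(per_second)
    return b_t, fid_ave

# Example with made-up numbers for a single 84-second pass:
pairs, fid = block_statistics([(2e-6, 0.95)] * 84)
print(pairs, fid)  # ~1.7e5 delivered pairs at fidelity 0.95
```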
§ PERFORMANCE EVALUATION
In this section, we evaluate the performance of blockwise and non-blockwise key distillation schemes.
As mentioned in Section <ref>, we consider a LEO satellite, with two ground stations on the equator. In the following, we first describe the evaluation setup and then the results.
§.§ Evaluation Setup
We consider three satellite altitudes,A=500km, 800 km, 1000 km. For each satellite altitude, we consider two ground stations along the equator of the Earth, with a distance ofD=600km, 1200 km, or 1800 km. The satellite is equipped with a SPDC entanglement source (see Section <ref>) that operates at a 1 GHz rate, i.e., generating10^9entangled photons per second. The elevation angle threshold (Section <ref>) is set to 20^∘. For simplicity, we assume 10 hours of nighttime (8pm-6am), and the remaining 14 hours as daytime each day. The dark click probabilityP_dis set to3 ×10^-6for nighttime and3 ×10^-3for daytime based on the study in <cit.>.
The blockwise scheme treats the raw key bits produced during daytime and nighttime separately, i.e., it considers two types of blocks, corresponding to the raw keys from daytime and nighttime respectively, while the non-blockwise scheme considers all of the raw key bits together.
When the satellite altitude is 500 km, the orbit time (the amount of time for the satellite to finish one orbit) is 5,647 seconds; for satellite altitudes of 800 and 1000 km, the orbit time is longer (6,022 and 6,276 seconds, respectively). The number of passes of the satellite over the two ground stations is 6 passes during nighttime and 9 passes during daytime for all the settings, except when the satellite altitude is 500 km, where the number of passes during nighttime is 7.
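These orbit times are consistent with the standard Keplerian period of a circular orbit. A quick sanity check is sketched below (assuming a spherical Earth and the standard gravitational parameter, which is why the values differ slightly from the quoted ones):

```python
import math

MU_EARTH = 3.986004418e14  # gravitational parameter [m^3 s^-2]
R_EARTH = 6.371e6          # mean Earth radius [m]

def orbit_period_s(altitude_km):
    """Keplerian period of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km * 1e3
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

for alt in (500, 800, 1000):
    print(alt, round(orbit_period_s(alt)))  # ~5668 s, ~6044 s, ~6298 s
```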
Table <ref> lists the contact length (i.e., pass duration, the duration that the satellite is in the contact of both ground stations) for the various settings.
The contact length varies from less than 1 minute to over 8 minutes. As expected, for a given satellite altitude, larger ground station distance leads to shorter contact length; while for the same ground station distance, higher satellite altitude leads to longer contact length. Using the loss and noise models in Section <ref>, we obtain the success probability and fidelity of the transmission from the satellite to each ground station in each second, and then obtain the average success probability and fidelity over the contact length in the following evaluation.
We consider the key rate for running the protocol over 1 to 80 days to show the performance of the two key distillation schemes over time as more raw key bits are accumulated at the ground stations, and the performance of these schemes relative to the asymptotic results.
For both blockwise and non-blockwise schemes, we vary the pump power
of the SPDC source (see Section <ref>) and the sampling rate for each setting so that the number of secret key bits is maximized. Specifically, the pump power is varied from 0 to 0.1. We limit the pump power up to 0.1 so that the approximation of the quantum states in <ref> is accurate and high-order-photon contributions are negligible <cit.>.
The sampling rate is varied from5×10^-4/kto3×10^-1/kfor the raw keys generated inkdays.
Unless otherwise stated, our results below assume thatp(2), i.e., the probability of generating a 2-photon term in each pair of mode of the SPDC source, is zero
(see Section <ref>). In Section <ref>, we show that this is a reasonable approximation.
Baseline distance between two ground stations: 600 km, 1200 km, and 1800 km above the equator.
Satellite altitude: 500 km, 800 km, 1000 km
nighttimeP_d:3 ×10^-6day timeP_d:3 ×10^-3the range of sample sizes:[5×10^-4/days×B_i,...3×10^-1/days×B_i ], whereB_iis the number of entanglements for each block.
Ignorep_2in simulation; not ignorep_2in simulation. We define the nighttime as 10 hours, and so the daytime as 14 hours.
Basic statistics:
For the 3×3 altitude and baseline combinations, list
* the orbit time (amount of time to finish one orbit; this only depends on the altitude, does not depend on baseline);
* the number of passes in night time and the number of passes in day time;
* for each pass at night, the duration of the pass time (i.e., the contact length, i.e., the duration that the satellite is in the contact of both ground stations). Similarly, do this for day time.
§.§ Impact of Pump Power
Figure: Blockwise scheme: number of secret key bits generated under the optimal pump power and sampling rate (1 day).
We first examine the impact of the pump power on the success probability and fidelity in the various settings.
Fig. <ref> plots the success probability and fidelity as a function of pump power when the satellite altitudeA=500km. Figures <ref>(a) and (b) show the results for nighttime, where the results for various ground station distances are shown in the figure. We see that, for all three ground station distances, success probability increases with pump power, while fidelity decreases with pump power. In addition, for the same pump power, a shorter ground station distance leads to a larger success probability, but lower fidelity.
Figures <ref>(c) and (d) show the results for daytime. In this case, while we see similar trend for success probability as that for nighttime, the relationship between fidelity and pump power is more complex: fidelity first increases and then decreases with the pump power. In addition, for the same pump power, while a shorter ground station distance again leads to higher success probability as that in nighttime, it leads to higher fidelity in daytime, opposite to the observation in nighttime.
Results for the other two satellite altitudes (800 and 1000 km) show similar trends, with variations in the relative relationship among the three ground station distances. For instance, whenA=1000km, for the same pump power, the fidelity for the three ground station distances is very close to each other during nighttime, while the fidelity forD=1200km is larger than that forD=600km, followed by that ofD=1800km.
Figure: Non-blockwise scheme: number of secret key bits generated under the optimal pump power and sampling rate (1 day).
§.§ Optimal Pump Power and Sampling Rate
Since the secret key rate is affected by both the success probability and the fidelity, and these two factors depend on the pump power in roughly opposite ways as shown above, we need to find the optimal pump power that maximizes the secret key generation rate for each setting. The optimal pump power may hence differ depending on the satellite altitude, ground station distance, nighttime versus daytime, and also the key distillation scheme. In addition, as shown in Eq. (<ref>) and Eq. (<ref>), the secret key rate is also affected by the sampling rate. In the following, we show the optimal pump power and sampling rate for the various settings for key generation in one day; the results for multiple days are deferred to Section <ref>.
§.§.§ Blockwise Scheme
Fig. <ref> plots results for the blockwise distillation scheme. Specifically, Fig. <ref>(a) shows the optimal pump power for various combinations of satellite altitude and ground station distance; results for both nighttime and daytime are plotted in the figure. For the same satellite altitude and ground station distance, we see that the optimal pump power for nighttime is larger than that for daytime. Specifically, for satellite altitudes of 800 and 1000 km, the optimal pump power for nighttime is 0.1, the maximum pump power that is allowed, and for a satellite altitude of 500 km, the optimal pump power is close to or equal to 0.1. For daytime, under the same ground station distance, the optimal pump power is lower for higher satellite altitude. As a special case, when the ground station distance is 1800 km, the optimal pump power for daytime is 0 when the satellite altitude is 1000 km, since no key can be generated for any pump power in the considered range.
Figures <ref>(b) and (c) plot the success probability and fidelity under the optimal pump power for the various settings. For the same satellite altitude and ground station distance, the success probability and fidelity for nighttime are both larger than their corresponding values for daytime. In addition, for the same ground station distance, the success probability under the optimal pump power for lower satellite altitude tends to be larger, for both nighttime and daytime. For fidelity, the optimal fidelity is similar for all nighttime settings, while for daytime, lower satellite altitude tends to have higher fidelity for the same ground station distance. When the ground station distance is 1800 km and the satellite altitude is 1000 km, both the success probability and fidelity are 0, since the optimal pump power for that case is 0.
Fig. <ref>(d) plots the optimal sampling rate for the various settings. The optimal sampling rate varies from 0.0075 to 0.1045, with higher optimal sampling rate for daytime than nighttime under the same satellite altitude and ground station distance. For a given satellite altitude, larger ground station distances tend to require higher optimal sampling rates.
Fig. <ref> plots the number of secret key bits generated over daytime and nighttime in a day for the various settings. For each satellite altitude, the number of secret keys generated decreases with ground station distance for both nighttime and daytime, except for a satellite altitude of 1000 km during daytime. For the same ground station distance, lower satellite altitude tends to lead to more secret keys, except for one case (satellite altitude of 500 km and ground station distance of 1800km), which leads to fewer key bits than the satellite altitude of 800 km due to the significantly shorter contact length in this scenario than others (see Table <ref>).
§.§.§ Non-blockwise Scheme
Fig. <ref> plots the results for the non-blockwise scheme. For various settings, the optimal pump power in Fig. <ref>(a) is similar to that under the blockwise scheme, except that when the satellite altitudeA=1000km, the optimal pump power for daytime is 0 for all ground station distances. This is because the fidelity in the daytime is low for all the pump power values, which leads to higher average error rate (across nighttime and daytime), and overall lower number of keys, compared to the case when only generating keys at nighttime. Fig. <ref>(b) and (c) plot the resultant success probability and fidelity for the various settings with the optimal pump power. They are similar to those for the blockwise scheme except when the satellite altitudeA=1000km and daytime. Fig. <ref>(d) plots the optimal sampling rate, which is in the range of0.0065and0.0245. Similar to that of the blockwise case, for a given satellite altitude, larger ground station distances have higher optimal sampling rates.
Fig. <ref> plots the number of secret key bits generated using the non-blockwise scheme in a day for the various settings. Since this scheme combines the raw keys generated during nighttime and daytime together, we simply plot the overall number of keys over a day. For the same ground station distance, more keys are generated at lower satellite altitude, except for one case,
where the satellite altitude is 500 km and the ground station distance is 1800 km, due to its significantly shorter contact length than other settings (see Table <ref>).
§.§ Comparing Blockwise and Non-blockwise Schemes
We now compare the key rate of the blockwise
and non-blockwise schemes. Specifically, we assume that the schemes run overkdays, and setkto 1, 20, 40, 60, and 80.
For eachk, we again select the pump power and the sampling rate to maximize the number of secret keys generated overkdays. We see that the optimal pump power forkdays is similar to that of one day for both the blockwise and non-blockwise schemes (figure omitted).
Figures <ref>(a)-(c) plot the key rate under the blockwise scheme
when the satellite altitude is 500, 800, and 1000 km, respectively. In each plot, we show results for both the finite and asymptotic scenarios. We see that the key rate for the finite scenario increases with the number of days, and approaches the asymptotic result when k≥20.
Figure <ref>(a)-(c) plot the key rate
when the satellite altitude is 500, 800, and 1000 km, respectively. In each plot, the results for the three ground station distances under the blockwise and
non-blockwise schemes are shown. We see that the effective key rate increases with the number of days. The effective key rate under the blockwise scheme is visibly higher than that of the non-blockwise scheme when the satellite altitude is 800 and 1000 km.
We define (r_b - r_nb)/r_nb as the relative key rate difference between the blockwise and non-blockwise schemes, where r_b and r_nb are the key rates of the blockwise and non-blockwise schemes for the finite scenario, respectively. Fig. <ref> plots the relative key rate difference for the various settings.
We see that the difference is larger than 0 for all the cases when the satellite altitude is 800 and 1000 km, i.e., the blockwise scheme outperforms the non-blockwise scheme. Specifically, the blockwise scheme leads to up to 4% and 5% improvements when the satellite altitude is 800 km and 1000 km, respectively. When the satellite altitude is 500 km, we see up to 1% difference between these two schemes, and the blockwise scheme leads to a slightly lower key rate in some settings (D = 600 and 1200 km when the number of days is small). We only see four cases where the blockwise strategy leads to fewer key bits than the non-blockwise strategy: A = 500 km and D = 600, 1200, or 1800 km with k = 1 day; and A = 500 km, D = 1200 km, and k = 20 days.
Let ℓ̅_block and ℓ̅_non-block represent the average number of key bits generated per day for the blockwise and non-blockwise strategies, respectively. Table <ref> shows ℓ̅_block - ℓ̅_non-block, where both quantities are obtained from the results of 80 days (i.e., the number of secret keys generated over 80 days divided by 80). We see that the blockwise strategy leads to 10^6 to 1.9×10^7 more keys per day in the various settings, except for one setting (A = 1000 km and D = 1800 km), since no key is generated during daytime for either strategy.
Summarizing the results in Fig. <ref> and Table <ref>, we see that the blockwise strategy in general leads to a higher key rate and more key bits, except for the scenarios with low satellite altitudes and a small number of days.
Therefore, it is in general more advantageous to use the blockwise strategy, which can be easily deployed since it is only used in the classical post-processing stage of QKD.
For comparison, we also show the asymptotic relative difference, (r̅_b - r̅_nb)/r̅_nb, where r̅_b and r̅_nb are the asymptotic effective key rates of the blockwise and non-blockwise schemes, respectively. We see that the relative key rate difference in the finite-key case approaches the asymptotic result in the various settings as the number of days, and hence the number of accumulated keys, increases.
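To build intuition for why the blockwise strategy helps, consider the following toy sketch in Python. It uses only the standard asymptotic entanglement-based key-rate expression r = 1 - 2h(e), not the finite-key analysis employed above, and the block sizes and error rates below are hypothetical. Since 1 - 2h(e) is convex in the error rate e, distilling the nighttime and daytime blocks separately never yields less key than pooling them at their average error rate.

import numpy as np

def h2(e):
    # binary entropy in bits, with h2(0) = h2(1) = 0
    e = np.clip(e, 1e-12, 1.0 - 1e-12)
    return -e * np.log2(e) - (1.0 - e) * np.log2(1.0 - e)

def asymptotic_key_bits(n, e):
    # asymptotic secret-key length of a block of n sifted bits at error rate e
    return n * max(0.0, 1.0 - 2.0 * h2(e))

# hypothetical block sizes and error rates (nighttime: low noise, daytime: high noise)
n_night, e_night = 1.0e7, 0.02
n_day, e_day = 2.0e6, 0.08

blockwise = asymptotic_key_bits(n_night, e_night) + asymptotic_key_bits(n_day, e_day)
e_pooled = (n_night * e_night + n_day * e_day) / (n_night + n_day)
non_blockwise = asymptotic_key_bits(n_night + n_day, e_pooled)

print(blockwise, non_blockwise)  # blockwise >= non-blockwise, since 1 - 2*h2(e) is convex in e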
§.§ Handling Spurious 2-photon Terms
Recall that p(2) in Eq. (<ref>) is the probability of generating a 2-photon term in
each pair of modes. Such 2-photon events are detrimental to QKD due to photon-number-splitting (PNS) attacks <cit.>. Specifically, when two photons (instead of one photon of an entangled pair) are sent from the satellite to a ground station, an adversary can keep one photon and send the other to the ground station, and hence knows the state at the ground station. So far,
we have ignored p(2) for ease of analysis. To investigate the impact of this approximation on our results, we
simulate a hypothetical idealistic entanglement source that generates either the vacuum state or entangled pairs, i.e., we normalize p(0) and p(1) as p(0)/(1-p(2)) and p(1)/(1-p(2)), respectively, and then set p(2)=0.
After that, the simulation of loss and noise on the entanglement pairs follows the models in Section <ref>.
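A minimal sketch of this renormalization step (the probabilities below are hypothetical placeholders for the SPDC source model; the subsequent loss and noise simulation is not reproduced here):

def idealize_source(p0, p1, p2):
    # drop the 2-photon component and renormalize the vacuum and 1-pair terms
    norm = 1.0 - p2
    return p0 / norm, p1 / norm, 0.0

# hypothetical photon-number probabilities of the SPDC source at one pump power
p0, p1, p2 = 0.90, 0.095, 0.005
p0_i, p1_i, p2_i = idealize_source(p0, p1, p2)
print(p0_i + p1_i + p2_i)  # 1.0 by construction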
Then for the optimal pump power chosen in Section <ref>, we compare the resultant success probability and fidelity of this idealistic source with those of the actual SPDC source we use. We observe that these two sources have similar success probabilities in all settings (the difference is within 0.001). For fidelity, although their differences are small (within 0.01) in most cases, the difference can be large (0.03) when the satellite altitude is 500 km and the ground station distance is 600 km. Further exploration on such cases is left as future work.
§.§ Optimal Sampling Rate
For each block (nighttime and daytime), we search over sample sizes m_i∈{10^3, 5×10^3, 10^4, 5×10^4, 10^5, 5×10^5, 10^6, 5×10^6, 10^7, 5×10^7} to find the optimal sampling rate for finite-key distillation. For all altitudes and distances, the optimal sample size is m_i=5×10^6, which produces the maximum number of secret keys for both nighttime and daytime.
§ RELATED WORK
Satellite-based quantum communication provides a promising direction for global-scale QKD <cit.>. A recent study <cit.> explores the finite-key effect in satellite-based QKD. It considers a satellite communicating with a single ground station, rather than entanglement-based QKD in which a satellite transmits entangled pairs to a pair of ground stations simultaneously, as in this study. In addition, it concatenates all the data together, i.e., it only considers the non-blockwise strategy, while our study compares the blockwise and non-blockwise strategies. The authors use the finite-key analysis techniques proposed in <cit.>. We derive our finite-key results based on <cit.>. In particular, that reference provides tight key-rate bounds, using entropic uncertainty, for processing a raw key into a secret key. Typically this method is applied directly to the non-blockwise scenario, which is the one usually considered in QKD research. We also use their methods to analyze the amount of secret key material in smaller blocks, running privacy amplification independently on each block and thus using the results in <cit.> to determine the size of the secret key derived from each (smaller) block. It would be interesting future work to see whether one could bound the quantum min-entropy of each sub-block directly and run a single privacy amplification process over the entire block, i.e., use a single invocation of privacy amplification, as in the non-blockwise strategy, yet still retain the benefit of increased key lengths as in blockwise post-processing.
The loss and noise models in this paper are based on those in <cit.>, and we extend its noise model by considering unfiltered background photons. The focus of <cit.> is on optimal scheduling of satellite to ground station transmissions
with a constellation of satellites. This work focuses on comparing blockwise and non-blockwise key distillation in satellite-based QKD.
A natural approach to building practical global-scale secure QKD is to send photons directly through optical fiber or terrestrial free space. In both cases, however, the number of transmitted photons decreases exponentially with distance due to channel loss. Moreover, unlike in classical communications, the quantum no-cloning theorem forbids noiseless amplification of the quantum signal in QKD <cit.>, which restricts the reach of secure QKD to a few hundred kilometers <cit.>; beyond this length scale, secure key distribution via quantum communication becomes significantly more difficult <cit.>. One solution to this distance limitation is to use quantum repeaters incorporating entanglement swapping, purification, and quantum memories <cit.>. However, despite remarkable progress <cit.>, quantum-repeater-based approaches are still far from enabling practical long-distance secure quantum communication. For global-scale QKD, a promising solution is free-space quantum communication via satellites, which can significantly reduce photon losses <cit.>, since only a thin atmospheric layer of about 10 km contributes appreciably to them; entanglement distribution via free space has successfully delivered entangled photon pairs to ground stations for quantum communication applications <cit.>. Further related works demonstrate the feasibility of satellite-based QKD over long distances and under various conditions <cit.>. In 2017, Liao et al. demonstrated satellite-to-ground decoy-state QKD with a low-Earth-orbit satellite at an altitude of about 500 km, achieving a kilohertz key rate from the satellite to the ground over distances of up to 1,200 kilometers <cit.>.
§ CONCLUSION AND FUTURE WORK
In this paper, we
compare blockwise and non-blockwise key distillation strategies for satellite-based QKD, where the satellite quantum channel is highly dynamic and hence can produce raw key blocks with significantly different characteristics. Using extensive simulations, we show that the blockwise strategy can lead to an up to 5% higher secret key rate
than the traditional non-blockwise strategy that is agnostic to the dynamics of the quantum channel.
As future work, we will consider scenarios with multiple satellites in a constellation and multiple ground station pairs. We will also consider more factors when modeling quantum satellite channels (e.g., weather conditions, cloud coverage). In addition, we will consider more blocks based on time of the day (e.g., sunset, night, sunrise, noon).
§ ACKNOWLEDGMENTS
This research was supported in part by the NSF grant CNS-1955744, NSF-ERC Center for Quantum Networks grant EEC-1941583, MURI ARO Grant
W911NF2110325, and NSF CCF-2143644.
|
http://arxiv.org/abs/2307.07291v1 | 20230714120514 | Sampling-Priors-Augmented Deep Unfolding Network for Robust Video Compressive Sensing | [
"Yuhao Huang",
"Gangrong Qu",
"Youran Ge"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Beijing Jiaotong University
Beijing Jiaotong University
Beijing Jiaotong University
Video Compressed Sensing (VCS) aims to reconstruct multiple frames from one single captured measurement, thus achieving high-speed scene recording with a low-frame-rate sensor. Although there have been impressive advances in VCS recently, those state-of-the-art (SOTA) methods also significantly increase model complexity and suffer from poor generality and robustness, which means that those networks need to be retrained to accommodate the new system. Such limitations hinder the real-time imaging and practical deployment of models. In this work, we propose a Sampling-Priors-Augmented Deep Unfolding Network (SPA-DUN) for efficient and robust VCS reconstruction. Under the optimization-inspired deep unfolding framework, a lightweight and efficient U-net is exploited to downsize the model while improving overall performance. Moreover, the prior knowledge from the sampling model is utilized to dynamically modulate the network features to enable single SPA-DUN to handle arbitrary sampling settings, augmenting interpretability and generality. Extensive experiments on both simulation and real datasets demonstrate that SPA-DUN is not only applicable for various sampling settings with one single model but also achieves SOTA performance with incredible efficiency.
[Figure: Overview of the VCS system. The camera sensor encodes multiple frames of the scene through a dynamic sampling mask. Our SPA-DUN realizes high-quality reconstruction for unseen sampling settings with one single trained model.]
Sampling-Priors-Augmented Deep Unfolding Network for Robust Video Compressive Sensing
Youran Ge
August 12, 2023
=====================================================================================
§ INTRODUCTION
As an important branch of computational imaging, inspired by compressive sensing (CS) theory, video compressive sensing (VCS) systems <cit.> compress multiple frames along the time dimension into one measurement within a single exposure, as shown in Fig. <ref>. The captured measurement and the given sampling mask are then fed into a reconstruction algorithm to restore multiple high-quality frames. In this way, a low-frame-rate sensor can achieve ultrafast photography, enjoying the advantages of low bandwidth, low power, and low cost.
Traditional model-based methods regard VCS reconstruction as an optimization problem with image or video prior knowledge as the regularized term. These methods focus on exploiting a structural prior with theoretical guarantees and generalizability, such as sparsity in some transformation domains <cit.>, low rank <cit.>, and so on <cit.>. Although these model-based methods can handle with different scale factors, CS ratios, and mask patterns, the main drawback is that they require manual parameter tuning, which leads to poor generality and slow reconstruction speed.
Over the past few years, deep network-based methods <cit.> have accelerated VCS reconstruction and significantly improved the imaging quality by directly learning a nonlinear mapping from the measurements to the original signals. However, most deep network-based methods neglect the VCS problem context. Many advanced but complex designs (e.g., 3D convolution <cit.>, Vision Transformer <cit.>) from general vision have been introduced to build video-to-video networks with stronger representation ability. While these advanced designs effectively improve reconstruction performance, they also entail higher training and inference costs. Moreover, these deep network-based methods suffer from poor generality and robustness: the networks are trained for a fixed sampling setting and fail to handle unseen situations. In real applications, not only is the recording target complex and variable, but the camera parameters are also frequently adjusted for various needs. Therefore, the setting of the sampling system varies in terms of imaging resolution, CS ratio, and sampling mask pattern. As shown in Fig. <ref>, most deep network-based methods need to be retrained to accommodate sampling settings that have not been seen during their training. Obviously, such practices result in large storage and expensive time costs. Although model-based methods do not require training, their iterative process is time-consuming; for example, the PnP algorithm <cit.> takes 604 s to reconstruct 30 frames, with poor results. Recently, ELP-Unfolding <cit.> proposed scalable learning to improve the generality of the model, but its fixed maximum frame number of 24 limits further extension.
To address the above issues, we propose a Sampling-Priors-Augmented Deep Unfolding Network (SPA-DUN) to realize efficient video compressive sensing for arbitrary sampling settings. To improve the efficiency of the reconstruction model, we extract key components from advanced image-to-image networks <cit.> to obtain a more concise and effective U-net. Based on this lightweight U-net, we unfold the alternating direction method of multipliers (ADMM) <cit.> to form an end-to-end deep unfolding network (DUN), which enjoys high interpretability and efficiency. To improve generality, we propose Sampling-Priors-Augmented Learning (SPA-Learning) strategies at both the training level and the architectural level. Without resorting to external datasets, we augment the common dataset by random sampling. Besides, our reflective padding enables the 2D CNN to handle videos of any length while mitigating the adverse impact on network fitting. Finally, the prior knowledge from the sampling model is fed into the DUN as explicit physical guidance. In this way, SPA-DUN is able to dynamically modulate the network features to adapt to different sampling settings. The major contributions are summarized as follows:
* We design a lightweight and efficient U-net as the backbone network, which significantly reduces the complexity and increases the capacity of the network.
* We propose sampling-priors-augmented learning, which is exploited to make the network robust to unseen sampling settings without retraining.
* Our SPA-DUN establishes a new SOTA in terms of reconstruction quality, model complexity, computation speed, and generality, promoting its application in real-world VCS systems.
§ RELATED WORK
§.§ Video Compressive Sensing
Video Compressive Sensing is also known as Video Snapshot Compressive Imaging <cit.>; mathematically, it can be defined as an ill-posed inverse problem for a large-scale linear sampling equation. Traditional model-based approaches treat this ill-posed problem as an optimization problem with a prior-regularized term, such as sparsity in some transformation domains <cit.>, low rank <cit.>, and so on <cit.>. However, these model-based methods not only require iteratively solving optimization problems, but also require manual tuning for different samples, and thus suffer from limited representation capacity, high latency, and poor generalization ability.
Recently, inspired by the great success of deep learning in image restoration <cit.>, many deep network-based methods have been introduced for accelerating VCS reconstruction. Deep network-based methods directly design E2E networks to learn a nonlinear mapping from the measurement domain to the original signal domain, and then provide instantaneous reconstruction. For example, BIRNAT <cit.> employs bidirectional recurrent neural networks to aggregate information from time series. RevSCI <cit.> adopts reversible 3D convolution to achieve better reconstruction with lower memory consumption. However, the performance of such E2E networks with black-box property is heavily dependent on well-designed architectures. This fact not only results in their tricky training schemes but also drags down their performance, due to the large difficulty of learning recovery mapping without explicit physical guidance.
For explicit physical guidance, Plug-and-Play algorithms <cit.> alternate between minimizing a data-fidelity term to promote data consistency and imposing a learned regularizer in the form of an image denoiser <cit.>. This paradigm combines deep networks and interpretable model-based methods to provide flexible and powerful algorithms, but still involve a time-consuming iterative solution process and depend on careful tuning of hyperparameters.
§.§ Deep Unfolding Network
As the main family of physics-inspired CS reconstruction approaches, Deep Unfolding Networks (DUNs) <cit.> have shown promising performance in many tasks <cit.> and usually serve as a key principle for structure design. In the last few years, various DUNs such as GAP-net <cit.>, Tensor-FISTA <cit.>, Tensor-ADMM <cit.>, and DUN-3D <cit.> have emerged for VCS reconstruction. The main idea of all of them is to unfold traditional model-based methods into fewer iterations and utilize neural networks to learn partial terms in an E2E manner. As the backbone network becomes more advanced, DUNs are able to reconstruct more and more details from the measurements. However, previous DUN-based methods have two potential drawbacks: 1) the increasing complexity of the network brings huge training costs and slows down inference; 2) most previous networks lack generality and robustness, and they often suffer a significant performance drop or even fail to function at all when the sampling settings are changed. Obviously, these two drawbacks hinder the actual deployment and operation of the models.
Recently, ELP-Unfolding <cit.> proposes scalable learning to handle different CS ratios, but is still limited by the fixed maximum frame number. The poor generality of DUNs is also reported in the field of CS research <cit.>. COAST <cit.> designs a controllable unit to modulate network features by the given hyperparameters, effectively improving the generality of the model. Inspired by this control idea, we extract priors from the sampling mask and use them to guide the network learning, where such sampling priors are more intuitive and informative for VCS reconstruction.
§ SPA-DUN
As shown in Fig. <ref>, the proposed SPA-DUN consists of a sampling model which simulates the capture of the measurements, a reconstruction model which alternates between data-fidelity modules 𝒟 and prior-regularized modules 𝒫, and several SPA-Learning strategies which enhance generality and robustness. Due to page limitations, we only discuss the grayscale imaging problem in the main text, while the color imaging problem is addressed in the supplementary material (SM).
§.§ Sampling Model
The VCS system consists of a sampling process on the hardware side and a reconstruction process on the algorithm side. During the sampling process, the optical encoder modulates the scene through a given sampling mask {𝐌_t}_t=1^c∈{0,1}^h× w within a single exposure, compressing the image sequence {𝐗_t}_t=1^c∈ℝ ^h× w into a 2D measurement 𝐘∈ℝ ^h× w along the temporal dimension:
𝐘=∑_t=1^c𝐌_t⊙𝐗_t +𝐙
where c denotes the CS ratio, ⊙ denotes the Hadamard (element-wise) product, and 𝐙∈ℝ ^h× w is the unknown measurement noise. For easy mathematical description, (<ref>) is equivalent to the following linear form:
y=Φx+z
where y=vec(𝐘)∈ℝ ^hw, x=[vec(𝐗_1),… ,vec(𝐗_c)]∈ℝ ^chw , and z=vec(𝐙)∈ℝ ^hw are the vectorized representation of tensors 𝐘, 𝐗 and 𝐙, respectively. Different from traditional CS problem, the mask Φ∈ℝ ^hw× chw in (<ref>) is a block diagonal matrix consisting of c diagonal matrices shaped as follows:
Φ = [𝐃_1, …, 𝐃_c ]
where 𝐃_t=diag(vec(𝐌_t))∈ℝ ^hw× hw for t = 1,…,c. The sampling mask is generated by the fully random pattern of the Digital Micromirror Device (DMD) <cit.> or the shifting pattern in the CACTI system <cit.>. We take only the former (DMD pattern) to build the sampling model in training.
According to this mathematical modeling of the sampling process, we can simulate the capture of measurements. In this way, we can quickly generate sufficient data pairs (𝐗,𝐘,𝐌) or (x,y,Φ) for training a reconstruction model.
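As an illustration, this sampling process can be simulated in a few lines of Python; the clip size, mask density, and noise level below are arbitrary choices for the sketch and not the settings used in our experiments:

import numpy as np

rng = np.random.default_rng(0)

def sample_measurement(x, mask, noise_std=0.0):
    # simulate Y = sum_t M_t * X_t + Z for one clip
    # x    : (c, h, w) video frames
    # mask : (c, h, w) binary sampling mask (DMD-style random pattern)
    y = (mask * x).sum(axis=0)
    if noise_std > 0.0:
        y = y + noise_std * rng.standard_normal(y.shape)
    return y

c, h, w = 8, 128, 128
x = rng.random((c, h, w))                                # surrogate frames in [0, 1)
mask = (rng.random((c, h, w)) < 0.5).astype(np.float32)
y = sample_measurement(x, mask)
print(y.shape)                                           # (128, 128): one 2D measurement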
§.§ Reconstruction Model
In the following, we will first briefly introduce the ADMM algorithm as preliminary to facilitate the discussion of DUN. Then we will elaborate the details of data-fidelity modules 𝒟 and prior-regularized modules 𝒫 in proposed SPA-DUN respectively.
§.§.§ DUN based on ADMM
From the optimization perspective, the ill-posed inverse problem of recovering the original x in (<ref>) can be considered as finding the (hopefully unique) x at the intersection of the affine subspace U={x∈ℝ ^chw:y=Φ x} and the natural video set O. It can be formulated as follows:
x̂=min_x1/2‖ y-Φ x‖_2^2+λΨ(x)
where the former data-fidelity term enables x to maintain the consistency of sampling equation, the latter prior-regularized term enables x to match the natural video features, and λ balances these two terms. Under the ADMM framework, by introducing an auxiliary vector v, the unconstrained optimization in (<ref>) can be converted into:
(v̂, x̂)=min _v, x‖ y-Φ v‖_2^2+λΨ(x), s.t. x=v
This minimization can be solved by the following sub-problems:
v^(k+1) =min_v1/2‖ y-Φ v‖_2^2+γ/2‖ v-x^(k)-b^(k)‖_2^2
x^(k+1) =min_xλΨ(x)+γ/2‖ (v^(k+1)-b^(k))-x‖_2^2
b^(k+1) =b^(k)-(v^(k+1)-x^(k+1))
where k is the number of iterations, and we initialize b^0=0, x^0=Φ^⊤y.
It can be observed that data-fidelity term and prior-regularized term in (<ref>) are decoupled to sub-problems (<ref>) and (<ref>). We unfold these alternating iterative processes into a neural network with N finite stages, where k-th iteration of ADMM is cast to k-th stage comprising data-fidelity module 𝒟 and prior-regularized module 𝒫 as shown in Fig. <ref>.
§.§.§ Data-fidelity Module 𝒟
Following the above analysis, given {x, v, Φ, y}, (<ref>) is a quadratic form and has a closed-form solution.
v=(Φ^⊤Φ+γ𝐈)^-1[Φ^⊤y+γ(x+b)]
Due to the special structure of Φ, ΦΦ^⊤ is a diagonal matrix and can be defined as:
ΦΦ^⊤:=diag{ψ_1, …, ψ_hw}
As proved in DeSCI <cit.>, (<ref>) can be solved in one shot:
v = (x+b)+
Φ^⊤[ (y_1-[Φ(x+b)]_1)/(γ+ψ_1), …, (y_hw-[Φ(x+b)]_hw)/(γ+ψ_hw) ]^⊤
After this projection, v (or tensor 𝐕) will be close to the fidelity domain, i.e., guaranteeing the consistency of the sampling equation in (<ref>). Moreover, we set the penalty coefficient γ as a learnable parameter to enhance the flexibility of the reconstruction process.
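In tensor form, this closed-form update reduces to element-wise operations because ΦΦ^⊤ is diagonal with entries ψ = ∑_t M_t^2. A minimal sketch (γ would be the learnable scalar of the corresponding stage):

import numpy as np

def data_fidelity_update(xb, mask, y, gamma):
    # closed-form v-update of module D in tensor form
    # xb   : (c, h, w) current estimate x + b
    # mask : (c, h, w) sampling mask M
    # y    : (h, w)    measurement Y
    psi = (mask ** 2).sum(axis=0)              # diagonal of Phi Phi^T
    residual = y - (mask * xb).sum(axis=0)     # y - Phi (x + b)
    correction = residual / (gamma + psi)      # element-wise division
    return xb + mask * correction[None]        # apply Phi^T and add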
§.§.§ Prior-regularized Module 𝒫
For prior-regularized term, it is difficult to define a mathematically feasible and practically effective constraint Ψ(·) with natural video features and derive a closed-form solution. Therefore, similar to previous DUN methods, we employ a deep network Net_θ(·) which maps from degraded video to high-quality video to replace Ψ(·). In other words, the network will learn prior knowledge from numerous training data, thus acting as a regularization of (<ref>) in ADMM.
𝒫: 𝐗=Net_θ(𝐕-𝐁)
Previous works usually employ a more advanced and complex video-to-video network to improve the representation ability. However, the paradigm of DUN, which sequentially stacks multiple networks, inevitably magnifies the overall complexity and drags down the inference speed. To realize the trade-off between the model's computational cost and quality, we design a lightweight U-net as the prior-regularized module 𝒫. This U-net contains MLPMixer-inspired convolution blocks as shown in the lower right of Fig. <ref>.
In detail, we utilize depthwise (DW) convolution <cit.> and 1×1 convolution as a combination. This popular combination not only drastically reduces the complexity compared to standard convolution, but also improves the performance of the network on many other vision tasks <cit.> by increasing the cardinality <cit.> of the features. Inspired by MLPMixer <cit.>, we add two residual connections with learnable scaling factors to form a spatial mixer and a channel mixer. Besides, we retain GELU <cit.> and LayerNorm <cit.>, which are common in Transformers and also work well in CNNs <cit.>. In section <ref>, we implement several U-nets with different types of blocks for comparison, which shows that our MLPMixer-inspired design is efficient for such low-semantic video-to-video mapping.
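A rough PyTorch sketch of such a block is given below. It reflects the description above (a depthwise-convolution spatial mixer and a 1×1-convolution channel mixer, each with a residual connection scaled by a learnable factor, plus LayerNorm and GELU); the kernel size, expansion ratio, and exact placement of the normalization layers are illustrative assumptions rather than the exact configuration.

import torch
import torch.nn as nn

class ChannelLayerNorm(nn.Module):
    # LayerNorm applied over the channel dimension of (B, C, H, W) tensors
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
    def forward(self, x):
        return self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

class MixerConvBlock(nn.Module):
    # MLPMixer-inspired convolution block: spatial mixer + channel mixer
    def __init__(self, dim, kernel_size=5, expansion=2):
        super().__init__()
        self.norm1 = ChannelLayerNorm(dim)
        self.spatial = nn.Conv2d(dim, dim, kernel_size,
                                 padding=kernel_size // 2, groups=dim)  # DW conv
        self.norm2 = ChannelLayerNorm(dim)
        self.channel = nn.Sequential(
            nn.Conv2d(dim, expansion * dim, 1), nn.GELU(),
            nn.Conv2d(expansion * dim, dim, 1))
        # learnable scaling factors on the two residual branches
        self.s1 = nn.Parameter(torch.ones(1))
        self.s2 = nn.Parameter(torch.ones(1))
    def forward(self, x):
        x = x + self.s1 * self.spatial(self.norm1(x))   # spatial mixer
        x = x + self.s2 * self.channel(self.norm2(x))   # channel mixer
        return x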
§.§ Sampling Priors Augmented Learning
To realize generality and robustness for unseen sampling settings, we propose novel Sampling Priors Augmented Learning strategies, both at the training level and the architectural level.
§.§.§ Sampling Augmentation (SA)
The proposed SA is only adopted at the training stage of sampling model as shown in the Fig. <ref>. Given a selection set of CS ratios S={c_i}_i=0^n and a sampling mask 𝐌^* ∈{0,1}^c^* × h^*× w^* with sufficient size, we randomly crop out a patch 𝐌∈{0,1}^c'× h'× w' where c'∈𝐒, and then generate the corresponding measurements in each small batch of training.
As a result, the SA strategy promotes training diversity by cropping out various sampling settings from one fixed mask. This low-cost strategy can alleviate the overfitting problem of the network, similar to regular data augmentation techniques. Meanwhile, learning from different sampling settings significantly improves the generalization capability. The effectiveness of SA is validated in section <ref>.
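A minimal sketch of the SA cropping step (the master-mask size is arbitrary here, and the CS-ratio set matches the one quoted later in the implementation details):

import numpy as np

rng = np.random.default_rng(0)

def sample_augment(master_mask, ratios=(8, 14, 18, 24), patch=(128, 128)):
    # randomly crop a (c', h', w') sub-mask from a larger master mask
    C, H, W = master_mask.shape
    c = int(rng.choice(ratios))
    h, w = patch
    t0 = int(rng.integers(0, C - c + 1))
    y0 = int(rng.integers(0, H - h + 1))
    x0 = int(rng.integers(0, W - w + 1))
    return master_mask[t0:t0 + c, y0:y0 + h, x0:x0 + w]

master = (rng.random((32, 256, 256)) < 0.5).astype(np.float32)
print(sample_augment(master).shape)   # e.g. (14, 128, 128); changes every call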
§.§.§ Reflective Padding (RP)
Although the module 𝒫 in our DUN is a fully convolutional network that can accept sequences of any spatial size, it cannot handle sequences with different CS ratios (i.e., temporal sizes) due to the inherent limitations of 2D convolution. ELP-Unfolding <cit.> fixed the temporal size of the input to a maximum value L, and padded the data with fewer than L frames by repetitive arrangement. In this work, we upgrade this simple padding to reflective padding (RP) as:
RP(A)=
cat[{𝐀_1…𝐀_c},{𝐀_c…𝐀_1},…]_1^[0:L],   if c < L,
cat[{𝐀_1…𝐀_L},{𝐀_L+1…𝐀_2L},…,{𝐀_c-L+1…𝐀_c}]_0,   if c ≥ L,
where cat[ ]_0 and cat[ ]_1 denote concatenation along the batch dimension and the temporal dimension, respectively. For an image sequence (video) A∈ R^b× c× h× w, where b is the batch size, if the temporal size c<L, we append its reverse sequence at the end and repeat T times until Tc ≥ L. If c≥ L, we input the subsequences of L frames into the network in batches, where the last subsequence, if shorter than L frames, is backfilled to L frames.
In this way, the output sequences (𝐕-𝐁) with various c from the preceding module 𝒟 are padded into RP(𝐕-𝐁) of shape [b',L,h,w], where b'=b×Roundup(c/L), and are fed into the 2D CNN in module 𝒫. Compared to the previous simple padding, this low-cost reflective padding not only makes the 2D CNN flexible for inputs of arbitrary length without an upper limit, but also produces smoother inter-frame transitions, which reduces the difficulty of network learning.
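A sketch of the RP operation for a single clip (batch handling is omitted): for c < L the clip and its reversal are tiled and truncated to L frames, while for c ≥ L the clip is split into length-L subsequences and the last one, if incomplete, is backfilled from the end of the clip.

import numpy as np

def reflective_pad(a, L=24):
    # reflective padding of a (c, h, w) clip into an (n, L, h, w) batch
    c = a.shape[0]
    if c < L:
        tiles, flip = [], False
        while sum(t.shape[0] for t in tiles) < L:
            tiles.append(a[::-1] if flip else a)
            flip = not flip
        return np.concatenate(tiles, axis=0)[:L][None]   # (1, L, h, w)
    chunks = [a[i:i + L] for i in range(0, c - L + 1, L)]
    if c % L:                                            # backfill the remainder
        chunks.append(a[-L:])
    return np.stack(chunks, axis=0)                      # (ceil(c/L), L, h, w)

print(reflective_pad(np.random.rand(10, 64, 64)).shape)  # (1, 24, 64, 64)
print(reflective_pad(np.random.rand(50, 64, 64)).shape)  # (3, 24, 64, 64)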
§.§.§ Sampling Priors (SP)
If the module 𝒫 takes only the fidelity output (𝐕-𝐁) as input, it may not be able to sense and adapt to changes in the sampling model. To compensate for the missing information, we extract the priors of the sampling model and feed them to the module 𝒫. Specifically, we first normalize the measurements by 𝐘 = 𝐘⊘∑_t=1^c𝐌_t.
𝐕 = cat[RP(𝐕-𝐁),𝐘]_1
And then, we use 𝐕 as the first layer input of the network in the module 𝒫. Moreover, a lightweight Mask Guided Module <cit.> is introduced to sense changes in the sampling mask and further modulate the network features as shown in Fig. <ref>. The input of this module consists of the following concatenation:
𝐌 = cat[RP(𝐌), 𝐂]_1 = cat[RP(𝐌),span(c'/L)]_1
where the operation span(·) duplicates the constant c'/L into a 2D matrix 𝐂, replenishing the missing length information. After passing through several 1×1 convolutions and 5×5 DW convolutions, we use the output attention maps to modulate the stem features in the convolution blocks.
In this way, the priors from the measurements and sampling masks are exploited to augment the network in a reasonable way. On the one hand, those extra priors can be regarded as physical guidance to reduce the difficulty of learning recovery mapping. On the other hand, when the sampling model changes, the network can directly sense these changes and dynamically modulate the features.
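Putting these pieces together, the augmented inputs of module 𝒫 can be assembled roughly as follows (a sketch only: the reflective padding of 𝐕-𝐁 and 𝐌 is assumed to have been applied already, batch handling and the 1×1 and 5×5 convolution layers of the mask guided module are omitted, and the exact channel arrangement is our assumption).

import numpy as np

def build_prior_inputs(v_pad, mask, mask_pad, y, L=24):
    # v_pad    : (L, h, w) reflective-padded fidelity output V - B
    # mask     : (c, h, w) original sampling mask (c = CS ratio)
    # mask_pad : (L, h, w) reflective-padded sampling mask
    # y        : (h, w)    raw measurement Y
    c = mask.shape[0]
    y_norm = y / np.clip(mask.sum(axis=0), 1e-6, None)        # Y ./ sum_t M_t
    v_tilde = np.concatenate([v_pad, y_norm[None]], axis=0)   # U-net input, (L + 1, h, w)
    ratio_map = np.full((1,) + y.shape, c / L)                # span(c'/L)
    m_tilde = np.concatenate([mask_pad, ratio_map], axis=0)   # mask-guided-module input
    return v_tilde, m_tilde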
§ EXPERIMENTS
§.§ Experimental Settings
§.§.§ Datasets
Following previous research <cit.>, we selected 150 scenes at 480p resolution from the DAVIS2017 dataset <cit.> as our training dataset. We cropped the original frames into 128× 128 patches to reduce the training burden. Using the sampling model and the SA strategy, we simulate the sampling process to generate measurements for training.
To evaluate the basic performance of the model, we utilized six grayscale benchmark datasets, including Aerial, Crash, Drop, Kobe, Runner, and Traffic, with a size of 256× 256, following the setup in <cit.>. To assess the generality of the model, we added four large-scale datasets <cit.>, including Beauty, Bosphorus, Jockey, and ShakeNDry, with a size of 1080× 1920.
§.§.§ Implementation Details
SPA-DUN uses the same U-net design for each module 𝒫. Specifically, each U-net has 4, 6, and 4 convolution blocks respectively at three scales. The channel width of the first scale is set to 48 and is doubled after every downsampling layer. To achieve a better trade-off, the default stage number N is set to 10. For SPA-Learning, we set L=24 and 𝐒={8,14,18,24}. Lastly, the loss function is designed as the weighted RMSE between the ground truth 𝐗^* and the reconstructed outputs 𝐗^N, 𝐗^N-1, 𝐗^N-2 from the last three stages as:
ℒ(θ) = √(||𝐗^*-𝐗^N||_2^2)+0.5√(||𝐗^*-𝐗^N-1||_2^2)
+0.5√(||𝐗^*-𝐗^N-2||_2^2)
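This loss can be transcribed directly as follows (a sketch; x_n, x_nm1, and x_nm2 denote the outputs of the last three stages):

import torch

def spa_dun_loss(x_true, x_n, x_nm1, x_nm2):
    # weighted sum of the Euclidean reconstruction errors of the last three stages,
    # as in the formula above (dividing by the pixel count would give an RMSE variant)
    def err(x):
        return torch.sqrt(torch.sum((x_true - x) ** 2))
    return err(x_n) + 0.5 * err(x_nm1) + 0.5 * err(x_nm2)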
We trained SPA-DUN using AdamW optimization <cit.> with a batch size of 6. During the first 1000 epochs, we set the learning rate to 1e-3 for faster convergence. In the next 5000 epochs, the learning rate was decayed by 90% every 300 epochs to reduce oscillation. The training of SPA-DUN lasted for roughly six A100 days.
§.§ Comparison with State-of-the-Art Methods
§.§.§ Benchmark Datasets
We compared our proposed SPA-DUN with recent representative methods, including PnP <cit.>, RevSCI <cit.>, DUN-3D <cit.>, and ELP-Unfolding <cit.>. The average PSNR/SSIM performance on six grayscale benchmark datasets with different sampling settings are summarized in Table <ref>. "Seen" means that the testing mask pattern is the same as the training mask pattern. "Unseen" means that if a method used DMD pattern during training, we changed it to CACTI pattern during testing and vice versa. It's worth noting that all deep network-based methods in this comparison are trained by the same training datasets and are validated by one single trained model without any extra fine-tuning or retraining.
Table <ref> shows that SPA-DUN significantly outperforms the other methods at all CS ratios, both for seen and unseen mask patterns, benefiting from the proposed SPA-Learning. We display some selected reconstruction results under the seen mask pattern in Fig. <ref>. SPA-DUN is able to recover more details of high-speed moving objects (branches and vehicles) under extreme conditions (c=24), while the reconstructions of the other methods are highly distorted.
We also plot intuitive performance curves in Fig. <ref> (a) and (b), where LPIPS <cit.> (a lower value indicates better performance) is closer to human perception and suitable for evaluating these highly distorted results. Compared to ELP-Unfolding with scalable learning, SPA-DUN is not limited by the maximum frame number and leads significantly at high CS ratios. In terms of performance degradation, the downtrend of SPA-DUN is even flatter than that of the PnP method, which iteratively solves for each sample, demonstrating excellent robustness.
§.§.§ Large-Scale Datasets
To verify the high-resolution imaging capability required for realistic applications, we introduce several large-scale datasets with a size of 1080 × 1920 and set the CS ratio to 24. The quantitative results are reported in Table <ref>. Note that RevSCI failed to produce the expected outputs and DUN-3D ran out of GPU memory. In contrast, our SPA-DUN maintains the same performance superiority as on the benchmark datasets. Meanwhile, SPA-DUN leads other approaches by a large margin in terms of model complexity, calculation speed, and GPU memory usage, owing to the efficient network structure. As shown in Fig. <ref>, SPA-DUN can recover more details (waves and human faces), making the VCS process nearly lossless. These advantages promote real-time imaging applications on mobile devices.
§.§ Ablation Study
§.§.§ Validating the Efficiency of our U-net
To verify the efficiency of the convolution block proposed in section <ref>, we used the classical ResNet block <cit.> and the ResNeXt block <cit.> as comparisons. We adopted a single U-net to learn the mapping from the measurement to the original signal without unfolding, which provides a more intuitive assessment of the fitting ability of the convolution blocks themselves. It is worth noting that our block contains two residual connections and more convolutions, so we halved the number of blocks for a fair comparison.
We used the benchmark datasets as the validation sets during training and recorded the results in Fig. <ref> and Table <ref>. Compared to the ResNet block, the ResNeXt block with DW convolution trains more stably and has lower model complexity, at the cost of some reconstruction accuracy. Benefiting from the MLPMixer-inspired layer ordering, our lightweight design effectively increases the capacity of the network and thus achieves a significant lead on the grayscale benchmark.
§.§.§ Validating the Scalability of SPA-DUN
This subsection will present the ablation study to investigate the contribution of each component in our proposed SPA-DUN. To save computing resources, we conducted ablation studies on a shallower SPA-DUN with N=5 and num_blocks=[2,3,2], and reported the average PSNR results on benchmark datasets in Table <ref>.
Effect of SA Scheme 1 is a baseline trained with a fixed mask at c=8. Scheme 2 is similar to the scalable learning used in ELP-Unfolding: it includes the SA strategy and simple repetitive padding to diversify the sampling settings. The comparison results show that SA enables one single model to be robust to unseen sampling settings, but sacrifices some performance in the specific setting (c=8).
Effect of RP Compared to scheme 2, scheme 3 adopts the reflective padding (RP). Such a nearly zero-cost modification improves the model by 0.32∼2.91 dB overall, especially for the unseen CS ratios. This indicates that the more natural inter-frame transition is beneficial for network learning.
Effect of SP Compared to scheme 3, scheme 4 additionally adopts the normalized measurement as an input of module 𝒫, which slightly improves the overall performance by 0.11∼0.33 dB. Scheme 5 further utilizes the sampling mask as physical guidance, which allows the network to dynamically adapt to changes in the sampling mask and results in significant boosts of about 0.78∼1.33 dB with the seen mask and 0.96∼1.25 dB with the unseen mask.
Furthermore, we visualize the attention in the mask guided module under different sampling settings. As illustrated in Fig. <ref>, the attention map for the CACTI pattern displays a clear horizontal stretching texture, which corresponds to the shifting nature of the CACTI pattern. As the CS ratio increases, the horizontal texture in the attention map is further stretched; at the same time, the average value becomes smaller to keep the final output energy stable. We conclude that this mask guided module is able to perceive changes explicitly and then impose the learned attention map on the network features, forming an adaptive paradigm.
§.§ Real Applications
We evaluate SPA-DUN on several real datasets captured by two VCS systems <cit.>. The Domino and Hand data were modulated by a DMD <cit.> with c=10 and c=20. The Wheel data was modulated by a lithography mask in the CACTI system <cit.> with c=14. Reconstructing these real captured measurements is very challenging due to noise effects. Besides, the masks used in these systems are not ideally binary due to nonuniform illumination. Despite this challenging setting, our method still provides decent reconstruction results with a single trained model. Fig. <ref> clearly demonstrates that SPA-DUN produces sharper edges in Domino, fewer artifacts in Hand, and more details without over-smoothing in Wheel. These observations show the feasibility and effectiveness of SPA-DUN in real applications.
§ CONCLUSION
In this paper, an efficient Sampling-Priors-Augmented Deep Unfolding Network (SPA-DUN) is proposed for video compressive sensing. This optimization-inspired deep unfolding network has good interpretability and reconstruction performance. Benefiting from the designed lightweight backbone network, SPA-DUN achieves state-of-the-art reconstruction accuracy with lower model complexity, faster computation, and lower memory consumption. Furthermore, SPA-DUN has excellent generality and robustness benefiting from the proposed SPA-Learning, which means that one single SPA-DUN can handle arbitrary sampling settings without retraining. This efficiency and generality promote the real-world application of VCS systems. In the future, we will further extend our SPA-DUN to other image inverse problems.
|
http://arxiv.org/abs/2307.03976v2 | 20230708133744 | Short-time large deviations of the spatially averaged height of a KPZ interface on a ring | [
"Timo Schorlepp",
"Pavel Sasorov",
"Baruch Meerson"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
[email protected]
Institute for Theoretical Physics I,
Ruhr University Bochum, 44801 Bochum, Germany
[email protected]
ELI Beamlines Facility,
ERIC, 25241 Dolní Br̆ežany, Czech Republic
[email protected]
Racah Institute of Physics, Hebrew
University of Jerusalem, Jerusalem 91904, Israel
Using the optimal fluctuation method, we evaluate the short-time probability
distribution P (H̅, L, t=T) of the spatially averaged height H̅ = (1/L) ∫_0^L h (x, t=T) dx
of a one-dimensional interface h (x, t) governed by the Kardar–Parisi–Zhang equation
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t)
on a ring of length L. The process starts from a flat interface, h(x,t=0)=0.
Both at λH̅<0, and at sufficiently small positive λH̅ the optimal
(that is, the least-action) path h(x,t) of the interface, conditioned on H̅, is uniform
in space, and the distribution P (H̅, L, T) is Gaussian. However, at sufficiently
large λH̅>0 the spatially uniform solution becomes sub-optimal and gives way
to non-uniform optimal paths. We study them, and the resulting non-Gaussian distribution P (H̅, L, T),
analytically and numerically. The loss of optimality of the uniform solution occurs via a dynamical
phase transition of either first, or second order, depending on the rescaled system size
ℓ = L/√(ν T), at a critical value H̅=H̅_c(ℓ). At large but
finite ℓ the transition is of first order. Remarkably, it becomes an “accidental" second-order
transition in the limit of ℓ→∞, where a large-deviation
behavior -ln P (H̅, L, T) ≃ (L/T) f(H̅)
(in the units λ=ν=D=1) is observed. At small ℓ the transition is of second order,
while at ℓ =O(1) transitions of both types occur.
Short-time large deviations of the spatially averaged height
of a KPZ interface on a ring
Baruch Meerson
August 12, 2023
=========================================================================================
§ INTRODUCTION
Atypically large fluctuations in macroscopic systems out of
equilibrium continue to attract great interest from statistical physicists.
Although a universal description of such fluctuations is unavailable, there has been
much progress in studies of particular systems. One of the main theoretical tools
in this area is known under different names in different areas of physics:
the optimal fluctuation method
(OFM), the instanton method, the weak-noise theory, the
macroscopic fluctuation theory, etc. This method relies
on a saddle-point evaluation of the pertinent path integral
of the stochastic process, conditioned on the
large deviation. The method is based on a model-specific
small parameter (often called “weak noise"), and it brings about a
conditional variational problem. The solution of this problem – a
deterministic, and in general time-dependent, field – describes the “optimal path" of the system:
the most probable system's history which dominates the contribution of different
paths to the statistics in question.
Among multiple applications of the OFM, we focus on one set of problems which has attracted attention in the last
two decades <cit.>: short-time
large deviations of a stochastically growing interface as described by the one-dimensional Kardar–Parisi–Zhang (KPZ) equation <cit.>
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t) ,
where ξ(x,t) is a white noise with
⟨ξ(x,t)⟩=0 , ⟨ξ(x,t)ξ(x^',
t^')⟩=δ(x-x^')δ(t-t^') .
Here we employ the OFM to study a KPZ interface on a ring of length L, i.e. with periodic boundary
conditions at x=0 and x=L. The interface is initially flat,
h(x,t=0)=0 ,
and we are interested in evaluating
the probability density function (PDF) P(H̅, L, T)
of the spatially averaged surface height
H̅ = 1/L∫_0^L h(x,T) dx
at a final time t=T >0, which is much shorter than the characteristic nonlinear
time of Eq. (<ref>), τ_NL= ν^5/D^2 λ^4.
The short-time limit allows one to employ the OFM in a controlled
manner <cit.>, as we will
reiterate shortly. The problem, defined by Eqs. (<ref>)-(<ref>), continues the
line of studies of Refs. <cit.> of finite system-size effects (which turn out to be quite dramatic)
in large deviations of height of the KPZ interface.
Upon rescaling t → tT,
x → (ν T)^1/2 x, h →ν h / λ and ξ→(ν T^3)^-1/4ξ, Eq. (<ref>) becomes
∂_th= ∂_x^2h+1/2(∂_xh)^2
+√(ε)ξ(x,t) ,
with rescaled noise strength ε = D λ^2 T^1/2
/ ν^5/2 on a ring of rescaled length ℓ = L / √(ν T).
The PDF of the rescaled average height H̅ at final time t = 1
can then be written as a path integral
P(H̅,ℓ,ε) = ∫_h(·, 0) = 0 Dh δ(
1/ℓ∫_0^ℓ h(x,1) dx - H̅)
J[h] exp{-1/ε S[h] }
with action functional
S[h] = ∫_0^1 dt ∫_0^ℓ dx L(h, ∂_t h ) = 1/2∫_0^1 dt
∫_0^ℓ dx [∂_th - ∂_x^2h-1/2(∂_xh)^2 ]^2 ,
where ℒ(h,∂_t h) is the Lagrangian.
The OFM assumes a weak-noise limit ε→ 0, when the path integral (<ref>) can be evaluated
by the saddle-point method, while the Jacobian J[h] does not contribute in the leading-order.
In this limit, the PDF P(H̅,ℓ,ε) is dominated by
the optimal path of the system, that is by the most likely history h(x,t) conditional on a given average height at t=1:
-ln P(H̅, ℓ, ε) ε→ 0≃ε^-1min_h(·, 0)= 0 ,
∫_0^ℓ
h(x,1)dx = ℓH̅ S[h] = ε^-1 S(H̅, ℓ) .
Hence, the PDF can be determined, up to pre-exponential factors, from the
solution of this constrained minimization problem. Here
we will solve this minimization problem numerically, for different H̅
and ℓ, and analytically in the asymptotic limits of large and small
ℓ[Note that whenever there exists a spatially
non-uniform optimal path, there are actually infinitely many possible
paths due to the translational symmetry of the problem with respect to x. Accounting for
this submanifold of degenerate solutions and for the associated zero
mode is, however, only relevant for pre-exponential factors <cit.> which
we do not address here.].
It will be convenient to present our results by setting
ν=λ=D=1[In most of the paper we assume, without
loss of generality, that λ>0. Indeed, changing λ to -λ is equivalent to changing h to -h.].
Then the weak-noise scaling (<ref>) reads
-ln P(H̅, ℓ, ε→ 0) ≃
T^-1/2 S(H̅, ℓ) .
Note that the limit ε→ 0 at fixed ℓ corresponds to
the short-time limit T → 0 and small-length limit L → 0
with L / √(T) = const.
When instead T goes to zero at L=const, one has
both ε→ 0 and ℓ→∞. The latter limit turns out to be most interesting, and it is analyzed here
in detail. It is natural to expect that for
any H̅, when ℓ→∞, the action S(H̅, ℓ) should exhibit
a large-deviation form
S(H̅,ℓ) ℓ→∞≃ℓ f(H̅) ,
leading to
-ln P(H̅, L, T→ 0) ≃
(L/T) f(H̅) ,
and this is what we indeed observe here. Less expectedly, we also find that the rate
function f(H̅) exhibits, at a critical value H̅=H̅_c(ℓ),
a dynamical phase transition (DPT) which is accidentally second-order.
By that we mean that
the rate function at the critical point becomes continuously differentiable
only in the limit of ℓ→∞. At arbitrary large but finite ℓ the
large-deviation form (<ref>) breaks down. We show, however, that the action S(H̅,ℓ) still exhibits
a DPT at a critical point H̅=H̅_c, but this DPT is of first order and the optimal
path at the critical point changes discontinuously via a subcritical bifurcation.
For small ℓ a truly second-order DPT is observed as predicted earlier <cit.>.
At intermediate values of ℓ = O(1) DPTs of both types occur. In the latter regime analytical
results are unavailable as of yet, and we present some numerical results. All the DPTs that we
found in this system occur because of a loss of optimality of a path that is uniform in space.
The loss of optimality takes the form either of a subcritical bifurcation (for the first-order DPTs),
or a supercritical bifurcation (for the true second-order DPTs).
The remainder of this paper is structured as follows. In Sec. <ref> we formulate
the OFM equations and boundary conditions, present a simple uniform solution of these equations,
previously studied in Refs. <cit.>, and
argue that it describes the optimal path of the system at all λ H<0. Supercritical
bifurcations of the uniform solution have been recently studied in Ref. <cit.>. Still,
for convenience of further discussion, we briefly rederive them in Sec. <ref>.
Section <ref> includes our results of numerical minimization of the action
functional (<ref>) in different regions of the (H̅,ℓ) phase diagram.
These numerical results provided valuable insights into the nature of optimal paths of the
interface which led us to develop asymptotic analytical solutions of the OFM problem for
large ℓ that we present in Sec. <ref>. The asymptotic solution for small ℓ
is briefly discussed in Sec. <ref>. We summarize and discuss our main results
in Sec. <ref>. A description of numerical algorithms that we use here is relegated to the Appendix.
§ OFM EQUATIONS AND UNIFORM SOLUTION
At a technical level, the main objective of this work is to determine the minimum action S(H̅, ℓ)
as a function of the rescaled average height H̅ and rescaled
system size ℓ. In this section, we present the necessary
conditions for minimizers of the action functional (<ref>) – the OFM equations and the boundary conditions.
We argue then that a simple spatially uniform solution
of the ensuing OFM problem is always optimal for H̅ < 0.
The first-order necessary conditions for a minimizer of the action
functional (<ref>) can be represented as a pair of Hamilton's equations
for the optimal history of the interface h(x,t) and the
conjugate momentum density p = ∂ L / ∂(∂_t h). These equations
were derived in many papers <cit.>, and they take the form
∂_th = ∂_x^2h+1/2(∂_xh)^2+p
,
∂_tp = -∂_x^2p+∂_x(p∂_xh)
.
The “momentum density" p(x,t) describes the (rescaled) optimal realization of
the external noise ξ(x,t) that drives the interface conditional on a specified H̅.
In the present case Eq. (<ref>) and (<ref>) should be complemented by the periodic boundary conditions
at x=0 and x = ℓ, by the initial condition
h(x,0)=0 ,
and by the final-time condition
p(x,1)=Λ= const ,
which follows from the demand that a boundary term at t=1, originating from an
integration by parts, should vanish for any h(x,1).
The parameter Λ is a Lagrange multiplier which needs
to be chosen so as to impose the rescaled final-time condition
1/ℓ∫_0^ℓ h(x,1) dx = H̅ .
Once the optimal path is determined, the action S(H̅,ℓ)
can be determined from the equation
S = 1/2∫_0^1 dt∫_0^ℓ dx p^2(x,t) ,
which follows from Eqs. (<ref>) and (<ref>).
By differentiating the action S(H̅, ℓ) = S[h(x,t;H̅,ℓ)] of
the optimal profile h = h(x,t;H̅,ℓ) with respect to H̅ using
the chain rule, one can show that Λ is related to the action via
Λ=1/ℓ ∂ S(H̅, ℓ)/∂H̅ (that is, dS=ℓΛ dH̅) .
If the action S(H̅, ℓ) is a strictly convex function of H̅,
there is a bijective relation between Λ and H̅, and it
suffices, for the purpose of calculating the action, to only
determine H̅(Λ) and use Eq. (<ref>). This shortcut is very convenient and
holds for many large-deviation calculations <cit.>.
There is an obvious exact solution of the OFM equations and the boundary conditions:
h(x,t)=H̅ t , p(x,t)=Λ , Λ = H̅ ,
S=ℓ/2H̅^2 ,
which describes a uniformly growing flat interface.
We will often call this branch of solutions branch 1. By virtue of Eq. (<ref>),
whenever the uniform solution (<ref>) is the optimal one, we have
a Gaussian PDF for H̅ up to pre-exponential factors. Of most interest, however,
are the regions of parameters H̅
and ℓ, for which the uniform solution is sub-optimal. As we will see,
the loss of optimality can occur via either a supercritical, or a subcritical bifurcation.
First of all, we can argue that, for negative H̅, the uniform
solution (<ref>) is always optimal. Using the evident conservation law
1/ℓ∫_0^ℓ p(x,t)
d x = Λ = const
of Eq. (<ref>), we can rewrite the action (<ref>) for any solution
of the OFM equations as
S = 1/2∫_0^1 dt∫_0^ℓ
dx p^2(x,t)=ℓΛ^2/2+1/2∫_0^1 dt∫_0^ℓ dx
[p(x,t)-Λ]^2 ,
Also, integrating both sides of Eq. (<ref>) with respect to t from 0 to 1 and
with respect to x over the ring, and using the periodic boundary conditions
and the conservation law (<ref>), we obtain
H̅=1/ℓ∫_0^ℓ h(x,1) dx
=Λ+1/2ℓ∫_0^1 dt∫_0^ℓ
dx [∂_xh(x,t)]^2 .
One can easily see from Eqs. (<ref>) and (<ref>) that, at negative Λ
(or H̅) any inhomogeneity in the
momentum density p both increases
the action S, and decreases the average height |H̅| in comparison to their
values for the uniform solution. Therefore, any nonuniform solution here is sub-optimal.
In contrast to this, for Λ >0 (or
H̅>0), an inhomogeneity increases both S,
and H̅ in comparison to the uniform solution. A competition
between these two opposite effects may give rise to non-uniform solutions with lesser action than
the uniform one, as we will indeed see in the following.
§ BIFURCATIONS OF THE UNIFORM SOLUTION
In this brief section we carry out a linear stability analysis of the
uniform solution (<ref>). We find that, for sufficiently
large positive H̅, the uniform solution can continuously
and supercritically bifurcate to a non-uniform solution. The first
spatial Fourier mode to become unstable as H̅ increases depends
on the rescaled system size ℓ in a nontrivial way and is determined
from Eq. (<ref>). This equation has also been obtained in Ref. <cit.>
by calculating the leading-order prefactor correction to the asymptotic
scaling in Eq. (<ref>) through Gaussian integration of
fluctuations around the uniform solution (<ref>).
At first order of a perturbation theory around the uniform
solution (<ref>) we have
p(x,t)=H̅+b(t)cos qx , h(x,t)=H̅ t + a(t)cos qx
, |a|, |b|≪ 1 .
Here the wave number q spans the set 2π m/ℓ for
m=1,2,…. Substituting the expressions (<ref>)
into Eqs. (<ref>) and (<ref>) and neglecting higher-order terms, we obtain
the following system
of linear ordinary differential equations:
ȧ=-q^2a+b , ḃ=q^2b-q^2H̅ a .
It has solutions proportional to e^iω t, where
ω=± q √(H̅-q^2) .
Using the boundary conditions (<ref>) and (<ref>), we obtain the
following relationship between q and H̅ = H̅_c(q)
at the bifurcation points:
tan(q√(H̅-q^2))=-√(H̅-q^2)/q .
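This relation follows from the linear system (<ref>) in a few lines: since ä = -q^2(H̅-q^2) a, the initial condition h(x,0)=0 selects a(t)∝sin(Ω t) with Ω = q√(H̅-q^2), while the final-time condition p(x,1)=Λ requires the perturbation of p to vanish at t=1, i.e. b(1)=ȧ(1)+q^2a(1)∝Ωcos Ω + q^2 sinΩ = 0, which is Eq. (<ref>).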
Note that the trivial solution H̅=q^2 of Eq. (<ref>) does
not correspond to a valid non-uniform solution due to the boundary conditions
at t=0 and 1. The resulting dependence H̅(q) can be expressed in a
parametric form
H̅ = -2 u/sin 2u , q=√(-u cot u) ,
(2n-1)π/2<u<nπ; n=1,2,3,… ,
where, for given ℓ, only values of q = 2 π m / ℓ
with m = 1, 2, 3, … are allowed.
The first three branches of Eq. (<ref>) are shown in
Fig. <ref>. As one can see, the first instability appears for n = 1,
and a necessary condition for the instability, for any ℓ, is H̅_c≥ 4.603.
When ℓ→∞, the first instability of the
uniform solution will occur, at H̅_c≃ 4.603, for a very high mode
m ≃ 1.343 ℓ/ 2 π.
For finite ℓ, one can find the bifurcation point on the n=1 branch of Eq. (<ref>)
numerically.
Finally, for ℓ→ 0, the first instability occurs for the m = 1 mode at
H̅≃ (2 π / ℓ)^2 in
agreement with Ref. <cit.>.
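The threshold values quoted above are easy to reproduce; the following minimal sketch (in Python with SciPy, not part of the numerics of this work) minimizes H̅(u)=-2u/sin 2u along the n=1 branch of Eq. (<ref>) and evaluates the corresponding wave number:

import numpy as np
from scipy.optimize import minimize_scalar

# minimize H_bar(u) = -2u/sin(2u) along the n = 1 branch, pi/2 < u < pi,
# and read off the corresponding wave number q = sqrt(-u*cot(u))
Hbar = lambda u: -2.0 * u / np.sin(2.0 * u)
res = minimize_scalar(Hbar, bounds=(np.pi / 2 + 1e-6, np.pi - 1e-6), method="bounded")
u_c = res.x
q_c = np.sqrt(-u_c / np.tan(u_c))
print(f"H_bar_c = {res.fun:.4f}, q_c = {q_c:.3f}")   # about 4.603 and 1.34, cf. the values quoted above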
§ NUMERICAL RESULTS
Now we proceed with a numerical solution of the
minimization problem in Eq. (<ref>) for different H̅ and ℓ. The numerical methods that
we used are described in the Appendix. In addition to confirming
the supercritical bifurcations of the uniform solution that we discussed in Sec. <ref>,
we will uncover important subcritical bifurcations
and get insight into non-perturbative optimal paths which
will be studied analytically in Secs. <ref> and <ref>.
We start with the simpler case of small ℓ.
Choosing a moderately small value ℓ = π / 8 and numerically
minimizing the action (<ref>) for different Λ, we
obtain
the rate function S(H̅, ℓ) and Lagrange
multiplier Λ(H̅) shown in Fig. <ref>.
The spatially uniform solution (<ref>), corresponding
to branch 1 of the action, is seen to become unstable
close to H̅≃ (2 π / ℓ)^2 as stated in Sec. <ref>,
and there is a
continuous (second-order) DPT to a spatially
nonuniform solution. Indeed, the (m = 1)-spatial Fourier mode of the
profile becomes unstable at this point. One such spatially nonuniform solution close to the transition point
is shown in Fig. <ref>. As H̅ increases, the optimal solution
turns, for most of the time 0<t<1, into a stationary “cnoidal" solution for p which
drives an h-profile which is non-uniform in x, but is uniformly translating in the vertical direction.
The same solution appears in the problem of the one-point height distribution for the KPZ
equation on a ring <cit.>, and we use it in
Sec. <ref> to calculate the theoretical curves in
Figs. <ref> and <ref>,
which match the numerical results quite well.
Next, we turn to the more complicated and interesting case of large
ℓ.
For ℓ = 16 π the minimization of the augmented action (<ref>)
leads to the results for the rate function S(H̅) and Lagrange
multiplier Λ(H̅) shown
in Fig. <ref>. In addition to branch 1 we observe two other branches of solutions.
Branch 2 is observed to the right of a narrow
transition region close to H̅≃ 4. On this branch the action S(H̅) is
approximately a linear function, while Λ is almost constant. Further, for much larger H̅,
there is a smoothed-out second-order transition from branch 2 to a
third branch 3 with a different scaling behavior.
The optimal paths for branches 2 and 3 are shown in
Fig. <ref>. They consist of strongly localized large-amplitude stationary
solitons of p that drive an outgoing almost triangular structure of h (or two antishocks
of V(x,t) = ∂_x h(x,t), see Sec. <ref>). The solution, corresponding to branch 2,
clearly emerges via a subcritical, rather than supercritical bifurcation. Strikingly, the soliton
has a well-defined life time which is very close to 1/2. The
difference between branches 2 and 3 is that, for branch 3, the two edges
of the triangular structure of h(x,t) collide before the final time t=1 is reached,
while for branch 2 they do not.
These crucial findings will guide our stationary-soliton-based asymptotic theory for large ℓ that we develop
in Sec. <ref>. There we give an analytical description of the optimal paths
for branches 2 and 3, which are the only relevant ones for large
ℓ. There we establish a first-order transition at H̅≃ 4 for large but finite ℓ
and show that it becomes “accidentally" second order in the limit of ℓ→∞.
We also find that the smoothed-out second-order
transition from branch 2 to branch 3 occurs at H̅ = ℓ^2 / 6. The resulting
analytical predictions, indicated by the lines in
Figs. <ref> and <ref>, are in good agreement with numerics
at large, but finite ℓ.
At moderate ℓ the transition region where the spatially uniform
solution (<ref>) of branch 1 becomes sub-optimal is quite
complex, as one can appreciate from
Fig. <ref>.
We see that, in general, there are both first and second order
transitions in this region: The uniform solution becomes
linearly unstable for some m > 1, leading to second-order
transitions, but there is also a competition with the (subcritical) one-soliton
solution. The subcritical scenario clearly wins for sufficiently large ℓ. Indeed, for ℓ = 32 π
we observe only a first-order
transition from the spatially uniform to the soliton solution,
while the linear instability becomes irrelevant.
Note that, for branch 2, in addition to stationary single-soliton
solutions of the OFM equation, discussed so far, there are also stationary multi-soliton solutions
consisting of two or more (almost) non-interacting strongly localized stationary solitons
of p and corresponding expanding triangles of h. One such solution, which we observed numerically, is
shown in the top row of
Fig. <ref>. We found, however,
that such solutions always have a larger action than
the one-soliton solution for the same ℓ
and H̅. Therefore, the one-soliton solution indeed seems to provide
the optimal solution. In the limit ℓ→∞,
these multi-soliton solutions – a soliton gas – would contribute to the
pre-exponential factor for 𝒫(H̅, ℓ), but
pre-exponential factors are beyond the scope of this paper. Additionally, in the
bottom row in Fig. <ref>,
we show an optimal path for ℓ = 16 π and close
to H̅ = 4, which emerges through linear instability of
the (m = 11)-mode. Later on, however, it is overtaken by the
one-soliton solution.
§ LARGE-ℓ ASYMPTOTICS: RISE AND FALL OF THE SOLITON
§.§ General description of the solution
Guided by our numerical solutions and by the previous works on the one-point KPZ height
statistics on the line <cit.> and on a ring <cit.>, here we find approximate
asymptotic solutions of Eqs. (<ref>)-(<ref>) which give rise to two nontrivial
branches (we call them branches 2 and 3) of the large-deviation function S(H̅) for large ℓ.
As we found, for both branches the maximum one-point height of the interface H=max h(x,t=1) turns
out to be very large: H≫ 1. Therefore, in addition to the strong inequality ℓ≫ 1,
we can also use the strong inequality H≫ 1. This allows us to construct “inviscid" asymptotic
solutions in different regions of space, separated by discontinuities of proper types. Like their
numerical counterparts, the analytical solutions exhibit two distinct stages in time, with an abrupt
transition between them at some branch-dependent intermediate time 0<t=τ<1 which we will determine.
For 0<t<τ the solution has the form of a strongly localized stationary soliton of p(x,t)
and “antishock" of V(x,t)= -∂_x h(x,t) which were previously identified in the problem
of one-point height statistics on the line <cit.> and on a ring <cit.>.
The characteristic width, O(1/√(H)), of the soliton-antishock structure is much less than
unity. Outside of the soliton-antishock one has p(x,t) ≃ 0. As a result, Eq. (<ref>)
is obeyed trivially and, at distances ≳ 1 from the soliton, h(x,t) follows the deterministic KPZ dynamics
∂_th=∂_x^2h+1/2(∂_xh)^2 ,
which is equivalent to the Burgers equation
∂_tV+ V ∂_x V =∂_x^2V
for the field V(x,t) =-∂_x h(x,t). In addition, the diffusion term in Eq. (<ref>)
can be also neglected at large distances <cit.>, and one arrives at the inviscid Hopf equation
∂_tV+V∂_x V=0 .
The stationary soliton-antishock structure drives an almost triangular configuration of h(x,t)
which is expanding outwards <cit.>. The height of the triangle grows linearly with time, while
its two edges propagate with a constant speed as “ordinary" shocks of V(x,t) obeying Eq. (<ref>)
or, when treated as discontinuities, obeying Eq. (<ref>) <cit.>. The positions of these shocks
at t=1 determine the boundaries of the “impact region" of the soliton-antishock structure. When the
size of the impact region, which scales as O(√(H)) <cit.>, is shorter than the rescaled system
size ℓ (this happens when H̅ is not too large, see below), there is also an external region
where the uniform solution p(x,t)=Λ =const and V(x,t)=0 holds, see Eq. (<ref>).
The external uniform solution holds for all times 0<t<1, and it contributes to the large-deviation
function of H̅. In the inviscid limit the regions of zero and nonzero p are divided by a
stationary discontinuity. This regime corresponds to branch 2.
Branch 3 appears when, due to the periodicity of the system, the ordinary shocks of V(x,t)
collide with each other before the final time t=1 is reached. In this case the impact region
of the soliton-antishock structure extends to the whole system, and a region of the uniform solution does not appear.
For the solution to obey the boundary condition (<ref>), the p-soliton must turn into a
constant p= Λ at t=1. Remarkably, as we have seen in our numerical results for large ℓ,
the soliton rapidly decays in the vicinity of a well-defined time t=τ<1. For both branches 2 and 3,
the subsequent dynamics, at τ<t<1,
gives only a subleading contribution (which we neglect, alongside with other subleading contributions)
to the maximum one-point height H and to the action. This stage is important, however, for determining H̅.
We can qualitatively understand this nontrivial temporal structure of the solutions from the viewpoint of action
minimization: First, for 0 ≤ t ≤τ, the interface is efficiently driven upward by a stationary
p-soliton, in the same manner as for the one-point height PDF of the KPZ equation on the line <cit.>
and on a ring <cit.>. Then, quickly suppressing the soliton at an intermediate time 0<τ < 1 and
evolving the interface according to the almost free KPZ dynamics for τ < t ≤ 1 increases considerably
the average height H̅ for a negligible additional cost in terms of action. The optimal value of τ
is the one that minimizes the action for a given H̅.
As an overview, we present here the action S(H̅, ℓ) at leading order for large ℓ,
as will be derived in subsections <ref> and <ref>:
S(H̅, ℓ) ≃{[ ℓH̅^2/2 , -∞ < H̅≤ 4 , (branch 1); (4 H̅ - 8) ℓ , 4 < H̅≤ℓ^2/6 , (branch 2); H̅^3/2Φ(H̅ / ℓ^2) , ℓ^2/6 < H̅ < ∞ , (branch 3) ].
where the function Φ(…) is defined in Eq. (<ref>) and
obeys Φ(z →∞) → 8 √(2) /3. The first line in Eq. (<ref>)
comes from the uniform solution (<ref>). The first two lines manifestly reveal the large-deviation
scaling (<ref>), while the third line does not.
Now we proceed to a more detailed description of the solutions, and we will start with branch 2.
§.§ Branch 2
Due to a translational symmetry of the problem (<ref>)-(<ref>), we can place the soliton-antishock
structure at x=0 (see Fig. <ref>) so that, to the leading order, H≃ h(0,τ).
As explained above, at H≫ 1, the p-soliton can be considered as a point-like object. We will only need
the value of its “mass", ∫ dx p(x,t) which, by virtue of Eq. (<ref>), is conserved. Using
the explicit expression for the soliton, p(x,t)=p_s(x) = 2 c cosh^-2 (√(c/2) x) <cit.>,
where c=H/τ, we obtain
∫_-∞^∞ dx p_s(x) = √(32 H/τ) .
The base of the triangular structure of the h-profile is equal to
2a(t)=√(2H/τ) t ,
while the triangle's height is
h(0,t)=Ht/τ , 0<t<τ .
Let us denote the total size of the impact region of the soliton-antishock structure
by 2a_1, where a_1 ≡ a(t=1). In the region a(t)<|x|<a_1 we have
p=h=0 .
The triangular profile of h on the interval 0<|x|<a(t) is described by the expressions <cit.>
p(x,t)=0 , h(x,t)=H(t/τ-√(2)|x|/√(Hτ)) , and
V(x,t)=-∂_xh(x,t) = Ṽ sgn(x) ,
where
Ṽ=√(2H/τ) .
As one can see from Eqs. (<ref>) and (<ref>), the ordinary shocks propagate
with the speed Ṽ/2, as to be expected from Eq. (<ref>) or (<ref>) <cit.>.
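Indeed, treating the edges as discontinuities of the inviscid equation (<ref>), the Rankine–Hugoniot condition gives the shock velocity as the arithmetic mean of the limiting values of V on the two sides, e.g. (Ṽ+0)/2=Ṽ/2 for the shock at x=a(t), and -Ṽ/2 at x=-a(t).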
After the rapid decay of the soliton at t=τ, the “post-soliton" solution (in the region to be determined)
can be described by the ideal hydrodynamic equations corresponding to the inviscid limit of Eqs. (<ref>)
and (<ref>):
∂_tV +V ∂_xV = -∂_x p ,
∂_tp+∂_x(pV) = 0 .
The V-antishock now plays the role of a discontinuity which undergoes a decay starting from t=τ.
In the leading order we can neglect the -∂_x p term, so that Eq. (<ref>) becomes the Hopf
equation (<ref>). Its solution is
V(x,t)=x/t-τ .
Plugging Eq. (<ref>) into Eq. (<ref>) and using the “final" condition (<ref>)
on p(x,t=1), we obtain
p(x,t) =Λ(1-τ)/(t-τ) .
The solution (<ref>) and (<ref>) holds at t>τ and |x|≤ a_d(t). The boundaries of this region,
x= ± a_d(t)≡Ṽ(t-τ) ,
represent weak discontinuities, moving with the speed Ṽ – that is twice as fast as
the ordinary shocks at x=± a(t), see Eq. (<ref>). Our simulations show
that the weak discontinuities catch up with the shocks at t=1. The corresponding condition can
be written as a_d(1) = a_1, and it yields τ=1/2.[We also obtained τ=1/2 analytically by solving the problem for a general τ and then minimizing the resulting action with respect to τ. These calculations are somewhat cumbersome, and we do not show them here.]
Therefore, during the second stage of the dynamics, 1/2<t<1, V(x,t) is described by the following expressions:
V(|x|≤ a_d(t),t)=x/t-1/2 , V(a_d(t)≤|x|≤ a(t),t)=±Ṽ , V(a(t)<|x|< a_1,t)=0 .
Using the relation V(x,t)=-∂_x h(x,t), we can obtain the h-profile at any time 1/2<t<1
by integrating Eq. (<ref>) over x. The result describes a parabolic profile of h at |x|<a_d(t),
flanked by the linear profiles at a_d(t)<|x|<a_1 corresponding to the triangular structure of h(x,t) of
the first stage the dynamics. At t=1 the parabolic profile takes over the whole interval |x|<a_1, and we obtain
h(x,t=1)=H-x^2 , |x|<a_1=√(H).
At |x|>a_1 the uniform solution holds:
h(|x|>a_1,t)=Λ t , p(|x|>a_1,t)=Λ .
Now we evaluate the contributions of the uniform solution to the action, Δ S_u, and to the average
height, ΔH̅_u, at t=1. As ℓ goes to infinity, we can neglect the difference between the
total system length ℓ and the length of the domain of uniform solution ℓ-2a_1, and obtain
Δ S_u=Λ^2ℓ/2 , ΔH̅_u=Λ .
The leading-order contribution of the soliton-antishock solution to the action is <cit.>
Δ S_s=8√(2)/3 H^3/2/√(τ)=16 H^3/2/3 .
This contribution comes from the first stage of the process, 0<t<1/2, while the second stage gives
only a subleading contribution which we neglect.
The second stage, 1/2<t<1 does contribute to H̅, however. Using Eq. (<ref>), we obtain
ΔH̅_s=4 H^3/2/3ℓ .
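Indeed, using Eq. (<ref>), ΔH̅_s = (1/ℓ)∫_-√(H)^√(H)(H-x^2) dx = (1/ℓ)(2H^3/2-2H^3/2/3)=4H^3/2/3ℓ.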
What remains to be done is to determine Λ, to collect the contributions to S and H̅,
and to eliminate H in favor of H̅ and ℓ.
In order to determine Λ, we use the local conservation of p(x,t) evident in Eq. (<ref>).
Because of this local conservation law,
the total soliton “mass", see Eq. (<ref>), must be equal to the integral of the solution (<ref>)
for p(x,t) over x from -a_1 to a_1. This condition yields a remarkably simple result: Λ=4,
a constant value (up to small subleading corrections).
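Explicitly, with τ=1/2 and a_1=√(H), the soliton mass √(32H/τ)=8√(H) must be equal to ∫_-a_1^a_1p(x,1) dx=2a_1Λ=2√(H) Λ, which immediately gives Λ=4.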
Combining Eqs. (<ref>)-(<ref>), we obtain
H̅=4+4 H^3/2/3ℓ ,
S=8ℓ+16 H^3/2/3 .
Eliminating H, we arrive at the leading-order result for the large-deviation function of H̅
for branch 2 in the limit of large ℓ, which was announced in the second line of Eq. (<ref>):
S=(4H̅ -8) ℓ .
This expression obeys the large-deviation scaling (<ref>). As was to be expected, the actions
of branch 1 and 2 coincide at
H̅=H̅_c=4. Noticeably, their first derivatives with respect to H̅
also coincide at this point.
In addition, using Eq. (<ref>), we see that Eq. (<ref>) is consistent with Λ=4,
independently of H̅, for branch 2.
We will look into these peculiarities more carefully in Sec. <ref>.
One applicability condition of Eq. (<ref>) is the strong inequality H≫ 1.
Using the first relation in Eq. (<ref>),
we can rewrite this strong inequality in terms of H̅ and ℓ≫ 1:
H̅-4 ≫ 1/ℓ .
This condition limits H̅ from below. A condition on H̅ from above distinguishes
branch 2 from branch 3. It demands that the ordinary shocks of V(x,t) do not collide with
each other until t=1[While deriving Eq. (<ref>) we
demanded a strong inequality 2√(H)≪ℓ. However, when H̅≫ 1, the main contribution
to S and H̅ comes from the soliton-antishock solution, rather than from the uniform one. As a
result, the strong inequality 2√(H)≪ℓ becomes unnecessary, and a simple inequality suffices.].
This condition can be written as 2√(H)<ℓ or, using Eq. (<ref>),
H̅-4<ℓ^2/6 , ℓ≫1 .
Now we proceed to a description of branch 3.
§.§ Branch 3
When the inequality (<ref>) is violated, the two outgoing ordinary shocks of V(x,t) collide
with each other and merge at x=±ℓ / 2 (which is the same point of the ring) at some t<1.
Upon the merger, a single stationary shock appears, see Fig. <ref>. Now the impact region of
the soliton-antishock is the whole system: 2a_1=ℓ, and the external region of the uniform solution,
characteristic of branch 2, does not appear here.
Most of the general formulas, derived in the context of branch 2, remain valid for branch 3.
In particular, here too τ is determined by the condition that the weak discontinuities catch
up with the ordinary shocks at t=1. The only difference is that a_1=ℓ/2 now. Solving the
equation a_d(1) = a_1, or
√(2H/τ)(1-τ) = ℓ/2 ,
we obtain
τ =1+ℓ^2/16 H-ℓ√(ℓ^2+32H)/16 H ,
so that τ depends on H and ℓ. Unsurprisingly, Eq. (<ref>) yields τ=1/2 in
the boundary case H=ℓ^2/4, when the size 2a_1 of the impact region of the soliton-antishock
in an infinite system is equal to the system size ℓ. When H goes to infinity, τ approaches 1.
We will not repeat here all expressions for h(x,t), V(x,t) and p(x,t) in different regions,
and present only the expression for h(x,1):
h(x,1)=H-x^2/[2(1-τ)] ,
with τ from Eq. (<ref>).
Using this expression, we can evaluate H̅. The action S remains the same as in the
first equality in Eq. (<ref>), and we obtain
H̅=H-1/24 ℓ^2/(1-τ) ,
S=8√(2)/3 H^3/2/√(τ) .
Eliminating H from these relations and using Eq. (<ref>), we arrive at a leading-order
result for the large-deviation function S(H̅,ℓ) in the limit of large ℓ and very
large H̅, which was announced in the third line of Eq. (<ref>):
S(H̅,ℓ) = H̅^3/2Φ(H̅/ℓ^2) , where Φ(z) =2 √(2) (9 z+1+√(18z+1))^1/2(36 z+1+√(18z+1))/81 z^3/2 .
In terms of H̅, the condition H>ℓ^2/4 becomes, in the leading order, H̅>ℓ^2/6.
As a result, the function Φ(z) is defined for z≥ 1/6, and Φ(1/6) = 4 √(6).
A graph of Φ(z) is depicted in Fig. <ref>.
In the limit of H̅≫ℓ^2≫ 1 Eq. (<ref>) yields
S=8√(2)/3H̅^3/2+4/3H̅ℓ+ … .
The leading-order term of this expression coincides with the action for a single-point height H <cit.>.
This is to be expected, because for very large H̅, τ approaches 1, and the difference
between H̅ and H becomes relatively small.
The expressions in Eqs. (<ref>) and (<ref>) match in the leading order in ℓ
at the boundary H̅≃ℓ^2/6 between the branches 2 and 3, both giving (2/3) ℓ^3+O(ℓ).
For completeness, we also present the optimal transition time τ in Eq. (<ref>) in terms of H̅ and ℓ:
τ(H̅,ℓ)=1+ℓ^2/12 H̅-ℓ√(ℓ^2+18H̅)/12 H̅ .
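As a quick illustrative check (not part of the original analysis; the value ℓ=200 is an arbitrary example), the limits of Φ(z) and the matching of branches 2 and 3 at H̅=ℓ^2/6 can be verified with a few lines of Python:

import numpy as np

def Phi(z):
    r = np.sqrt(18 * z + 1)
    return 2 * np.sqrt(2) * np.sqrt(9 * z + 1 + r) * (36 * z + 1 + r) / (81 * z ** 1.5)

print(Phi(1 / 6), 4 * np.sqrt(6))          # both about 9.798
print(Phi(1e8), 8 * np.sqrt(2) / 3)        # both about 3.771

ell = 200.0                                # an arbitrary large ring
Hb = ell ** 2 / 6                          # boundary between branches 2 and 3
print((4 * Hb - 8) * ell,                  # branch 2 at the boundary
      Hb ** 1.5 * Phi(Hb / ell ** 2),      # branch 3 at the boundary
      2 * ell ** 3 / 3)                    # both close to (2/3) ell^3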
§.§ Dynamical phase transition
In this subsection we resolve the nature of the DPT between
branches 1 and 2, which corresponds to the subcritical bifurcation from the uniform solution (<ref>)
to the leading-order soliton solution discussed in Sec. <ref>. To this end we will have to focus
on subleading corrections that we have previously ignored. We will also present the large-deviation
scaling of 𝒫(H̅,L,T) in the limit of T → 0 at fixed L, in the physical units.
As we have already noticed, the actions S_1(H̅, ℓ) and S_2(H̅, ℓ), described
by the first and second lines of Eq. (<ref>),
coincide at H̅=H̅_c=4 together with their first derivatives ∂ S_1(H̅, ℓ) /
∂H̅ and ∂ S_2(H̅, ℓ)/∂H̅
at H̅_c=4. It would be incorrect, however,
to conclude from here that the DPT between branches 1 and 2 at H̅=H̅_c
is of second order. Indeed, the supercritical first bifurcation of the uniform solution (<ref>)
to a solution with a single maximum of h(x,1) – the one with q = 2 π / ℓ
in Eq. (<ref>) – actually occurs, as ℓ→∞, at much
larger H̅≃ℓ^2 / 16 ≫ 4. Furthermore,
as follows from numerical minimization of Eq. (<ref>), instability
of any Fourier mode around the uniform solution can only occur
at H̅≃ 4.60334 (for q ≃ 1.34336). It
is not surprising, therefore, that
at large but finite ℓ, and at a slightly shifted transition
point H̅_c> 4 where the actions of branches 1 and 2
are equal, the optimal paths h(x,t) for branches 1 and 2, that we found numerically,
are dramatically different, and their respective Lagrange
multipliers Λ are not equal. The latter fact means, by
virtue of Eq. (<ref>), that at large ℓ we actually observe a first-order DPT, not a second-order one.
To make sense of these facts, we recall that Eq. (<ref>)
for the action of branch 2 is merely a leading order asymptotic
at ℓ→∞. Subleading terms, so far unaccounted for, should remove
the degeneracy of the leading-order results by breaking the accidental continuity
of the first derivative ∂ S(H̅, ℓ)/∂H̅
at H̅=H̅_c, and
rendering the corresponding bifurcation subcritical and the corresponding DPT
first-order. The subleading terms should also account for a slight shift of the critical
point H̅_c to the right from its leading-order
value H̅_c=4, as observed in our numerics.
Motivated by the large-H asymptotic of the upper tail of the exact
short-time probability distribution of the one-point height h(x = 0,t = 1)=H
on the line, determined in Ref. <cit.>, we can conjecture the following
subleading terms of S_2(H̅,ℓ) at large ℓ:
S_2(H̅,ℓ)=(4H̅ -8) ℓ+B H^1/2+C H^-1/2+… ,
where B>0 and C are numerical constants O(1), which are independent
of ℓ. The condition B>0 is necessary for the equation
S_1 ( H̅_c,ℓ) =
S_2 ( H̅_c,ℓ)
to have a solution for H̅_c close to
4 at large ℓ.
To verify Eq. (<ref>), we plotted in Fig. <ref> our large-ℓ numerical results for
[S_2(H̅,ℓ) - (4H̅ -8)
ℓ]/√(H) versus H. A fair plateau at large H is observed, with B ≃ 5.3 > 0 found by fitting.
Now, keeping the first subleading term in Eq. (<ref>)
and the leading-order dependence of H on H̅ in Eq. (<ref>),
we can rewrite Eq. (<ref>) in terms of H̅ and ℓ:
S_2(H̅,ℓ)=8ℓ+4(H̅ -4) ℓ + (3/4)^1/3 B [(H̅-4)ℓ]^1/3 + … , (H̅-4)ℓ≫ 1 .
Now Eq. (<ref>) for the critical point becomes
1/2(H̅_c-4)^2ℓ = (3/4)^1/3 B [(H̅_c-4)ℓ]^1/3+… .
Its approximate solution,
H̅_c = 4 + 6^1/5 B^3/5 ℓ^-2/5+… ,
describes a small ℓ-dependent positive shift of the critical point from the leading-order value 4.
This H̅_c corresponds to
H = (9/8)^2/5 B^2/5ℓ^2/5 +…
of the branch-2 solution at the critical point. We observe that, for this solution, H →∞
as ℓ→∞, guaranteeing applicability of our theory at large ℓ. Going back to the
large-deviation scaling (<ref>), we notice that there is now a small but finite jump ∼ℓ^-2/5
of the derivative ℓ^-1∂ S/∂H̅ of the effective rate function at the shifted critical
point. The transition between branches 1 and 2, therefore, is of first order.
By virtue of Eq. (<ref>), the subleading correction in Eq. (<ref>) also removes the degeneracy
of the leading-order result Λ=4 by adding to it a small ℓ-dependent correction that goes
to zero as ℓ→∞.
Using Eq. (<ref>), we plotted in Fig. <ref> the actions of branches 1 and 2, normalized
by ℓH̅^2, in the
vicinity of the H̅ = H̅_c. It is clearly seen that the subleading correction removes the degeneracy
and makes the DPT first-order. Furthermore,
the predicted H̅_c from Eq. (<ref>)
for ℓ = 32 π, which is H̅_c≃ 4.6, is close to our numerical result H̅_c≃ 4.57 for this ℓ, see
Fig. <ref>.
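For illustration, the predicted shift (<ref>) with the fitted B≃ 5.3 is evaluated below for the two system sizes used in the figures (a trivial Python sketch, included for convenience):

import numpy as np

B = 5.3                                    # fitted subleading coefficient
for ell in (16 * np.pi, 32 * np.pi):
    Hc = 4 + 6 ** 0.2 * B ** 0.6 * ell ** (-0.4)
    print(f"ell = {ell:.1f}: predicted H_bar_c = {Hc:.2f}")
# for ell = 32*pi this gives H_bar_c of about 4.6, to be compared with the
# numerically found value of about 4.57 quoted above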
Note that our arguments in favor of the expansion (<ref>) are far from rigorous.
In particular, we cannot exclude a very
slow (for example, logarithmic) dependence of the coefficient B on H in Eq. (<ref>)
based only on the numerical evidence. However,
our main conclusion about the first-order DPT between branches 1 and 2
seems robust.
To conclude this section, we present our large-deviation results, described by the first two lines
of Eq. (<ref>), in
the physical units. Recall that, by taking the
limit T → 0 at fixed L,
we have both ε∝ T^1/2→ 0 and ℓ→∞. In this limit only the first
two lines of Eq. (<ref>) are relevant, and we
obtain[Note the factor of T instead of the customary weak-noise
factor T^1/2 on the left-hand side
of Eq. (<ref>).]
-lim_T→ 0 T ln𝒫(H̅,L,T) =ν^2/Dλ^2 L f(λH̅/ν) ,
f(w)={[ w^2/2 , w<4 ,; 4w-8 , w>4 . ].
As we
elaborated in this subsection, the DPT
in Eq. (<ref>) at w = 4 can be called an “accidental”
second order DPT in the sense that the optimal paths, that are responsible for the two branches in Eq. (<ref>),
transition into each other discontinuously, and that the differentiability of the rate function
at the critical point emerges only in
the limit T → 0 at fixed L.
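Concretely, at w=4 both branches of f(w) give f(4)=8 and the same first derivative f'(4)=4, while f''(w) jumps from 1 to 0; the nonanalyticity of the T→ 0 rate function thus shows up only in the second derivative, even though the underlying optimal paths change discontinuously.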
§ SMALL-ℓ ASYMPTOTICS
We found that our numerical results on the second-order DPT at small ℓ, shown in Figs. <ref>
and <ref> and described in Sec. <ref>,
can be understood in terms of a small-ℓ asymptotic solution of the OFM equations (<ref>)
and (<ref>) which was previously found in the context of the one-point
height distribution on a ring <cit.>. In this solution
the interface is driven by a stationary dn^2 profile (see below) of p. The solution represents a finite-amplitude
generalization of a weak sinusoidal modulation with m = 1 which results from the second-order DPT from
the uniform solution. This solution is given by the following expressions[This
solution is invalid inside
narrow boundary layers in time at t=0 and t=1, but their contribution to the action is negligible.]
h(x,t) ≃ H t + 2 lndn[2 K(k) x/ℓ, k ] ,
p(x,t) ≃ p_0(x) = [4 K(k)/ℓ]^2 dn^2 [2 K(k) x/ℓ , k] ,
where K(k) is the complete elliptic integral of the first kind
and dn(…) is one of the Jacobi elliptic functions <cit.>.
The elliptic modulus k ∈ (0,1) is determined by H via the relation
8 (2 - k^2) K^2(k)/ℓ^2 = H .
The action of this solution as a function of k is <cit.>
S(k) = 128/3 ℓ^3 K^3(k) [2(2-k^2) E(k)
- (1-k^2) K(k) ] .
At given ℓ≪ 1, Eqs. (<ref>) and (<ref>) determine S as a
function of H in a parametric form. The critical point H̅ = (2 π / ℓ)^2 corresponds
to k=0, when Eqs. (<ref>) and (<ref>) reduce to the uniform solution. k>0
correspond to supercritical solutions.
In order to recast this dependence in terms of S(H̅,ℓ),
we need to express H through H̅ and ℓ. Although Eq. (<ref>) is formally inapplicable
at t=1, asymptotically as ℓ→ 0 we still have
H - H̅≃ -1/ℓ∫_-ℓ /2^ℓ / 2 2 lndn[2 K(k) x/ℓ, k ] dx = 1/2ln[1/(1 - k^2)] .
where we have used a product formula for dn <cit.>.
Using Eqs. (<ref>) and (<ref>), we obtain
H̅(k) = 8 (2 - k^2) K^2(k)/ℓ^2 - 1/2ln[1/(1-k^2)] .
Equations (<ref>) and (<ref>) determine S=S(H̅,ℓ) and were
used in Fig. <ref> to draw the theoretical curves for the action and
Lagrange multiplier (via Eq. (<ref>))
at ℓ = π / 8, which agree very well with the numerical action minimization results. Also shown is the
asymptotic action
S(H̅) ≃8 √(2)/3H̅^3/2
as H̅→∞, which agrees with Eq. (<ref>) and can be obtained from
Eqs. (<ref>) and (<ref>) by considering the limit k → 1
with E(k) → 1 and K(k) ≃ (1/2)ln[1/(1-k)]. As one can see from
Fig. <ref>, the asymptotic relation (<ref>)
is not yet satisfied for the moderately small ℓ = π / 8: noticeably, the solution h(x,1)
at the final time deviates from Eq. (<ref>). However, the numerically found action
is already accurately described by Eqs. (<ref>) and (<ref>), because
the difference between H and H̅ is always subleading – at most O(√(H)) – at small ℓ.
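For completeness, the parametric curve S(H̅) at small ℓ is straightforward to tabulate; the short Python sketch below (note that SciPy's elliptic integrals take the parameter m=k^2 rather than the modulus k) also checks that the k→ 0 end of the curve reproduces the critical point and the Gaussian action of the uniform solution:

import numpy as np
from scipy.special import ellipk, ellipe

ell = np.pi / 8
k = np.linspace(1e-3, 0.999, 400)          # elliptic modulus
m = k ** 2                                 # scipy uses the parameter m = k^2
K, E = ellipk(m), ellipe(m)

H = 8 * (2 - m) * K ** 2 / ell ** 2
S = 128 / (3 * ell ** 3) * K ** 3 * (2 * (2 - m) * E - (1 - m) * K)
Hbar = H - 0.5 * np.log(1 / (1 - m))

# the k -> 0 end reproduces the critical point H_bar = (2*pi/ell)^2
# and the Gaussian action S = ell*H_bar^2/2 of the uniform solution
print(Hbar[0], (2 * np.pi / ell) ** 2)
print(S[0], ell * Hbar[0] ** 2 / 2)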
§ SUMMARY AND DISCUSSION
We applied the OFM to evaluate analytically and numerically the short-time PDF P (H̅, L, t=T),
and the optimal paths which dominate this PDF, of the KPZ interface on a ring. The short-time PDF has
the scaling form (<ref>), where ε∼ T^1/2 plays the role of the weak-noise
parameter. The phase diagram of the system
represents the (H̅, ℓ=L/√(ν T)) plane. We were especially interested in the DPTs that occur
in this system at sufficiently large positive λH̅>0. We found that, depending on ℓ, these
DPTs occur via either a supercritical, or a subcritical bifurcation of the “trivial" (uniform in space)
optimal path of the KPZ interface. The supercritical bifurcations dominate at very small ℓ, the subcritical
bifurcations dominate at very large ℓ. In these two limits we obtained asymptotic analytical solutions
for the optimal paths of the system, evaluated the resulting action, and verified the analytical results
numerically. We also found that, as T goes to zero at constant L, the PDF acquire a simple large-deviation
form (<ref>). Interestingly, the rate function f(H̅) exhibits, at a critical value
of H̅=H̅_c(ℓ), a DPT which is accidentally second-order.
In the (much more complicated) region of intermediate ℓ=O(1) we observed numerically both supercritical,
and subcritical bifurcations of the uniform solution. This region of the phase diagram is presently out of
reach of analytical theory. It would be very interesting, but challenging, to determine the complete phase
diagram of the system in this region. In particular, it would be interesting to locate, somewhere
between ℓ=16 π and ℓ = 32π, at least one critical point (H̅_*, ℓ_*) where the
second order DPT curve H̅_c^(2)(ℓ) ends when it meets the first order DPT curve H̅_c^(1)(ℓ),
as well as other possible critical points.
These tasks will become more feasible if this problem, as described by Eqs. (<ref>)-(<ref>),
joins the list of similar
large-deviation OFM problems for the KPZ equation which have been solved exactly by the inverse scattering
method (ISM) <cit.>. Indeed, as was previously found in Ref. <cit.>,
a canonical Hopf–Cole transformation brings Eqs. (<ref>) and (<ref>) into the nonlinear
Schrödinger equation in imaginary space and time. Therefore, Eqs. (<ref>) and (<ref>)
belong to a family of completely integrable models. The only problem (but potentially a big one) is to
adapt the ISM to a finite system with periodic boundaries and to accommodate the problem-specific boundary
conditions (<ref>) and (<ref>). The exact solution would also provide
a full analytic control of the subleading corrections to the action of branch 2, which are presently half-empiric.
Finally, it would be very interesting to explore the possibility of extending to the spatially averaged KPZ
interface height some of the recent “stochastic integrability" approaches, which led, for selected initial
conditions, to exact representations for the complete statistics of the one-point interface
height <cit.>.
§ ACKNOWLEDGMENTS
The authors thank Eldad Bettelheim and Naftali R. Smith for useful discussions.
This research was supported by the program
“Advanced Research Using High Intensity Laser-Produced Photons and Particles"
(ADONIS) (CZ.02.1.01/0.0/0.0/16019/0000789) of the European Regional Development Fund (ERDF) (PS),
and by the Israel Science Foundation (Grant No. 1499/20) (BM).
§ NUMERICAL METHODS
Our numerical procedure of finding solutions h and p of the
OFM problem (<ref>)-(<ref>)
can be summarized as follows:
To compute numerical solutions to the boundary-value problem
for h and p for given ℓ and H̅, we use a
refined version of the popular Chernykh–Stepanov
back-and-forth iteration algorithm <cit.> as described in detail
in Ref. <cit.>, using the language of PDE-constrained optimization.
The idea is to interpret the back-and-forth
iterations – fixing Λ and solving Eq. (<ref>) forward in time
with fixed p, and Eq. (<ref>) backward in time with fixed h until
convergence – as adjoint <cit.> gradient evaluations δ S /
δ p of the action
functional with fixed Λ,
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ∫_0^ℓ h[p](x,1) dx ,
with the height profile h = h[p] determined for a
given p through Eq. (<ref>).
This interpretation allows us to use automatic update step-size
control (here: Armijo line search <cit.>) and
preconditioning for faster convergence (here: L-BFGS method <cit.>).
Conceptually, one fixes Λ in this formulation and obtains
the corresponding average height value H̅ a posteriori.
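To make the structure of the algorithm concrete, the following is a minimal, self-contained Python sketch of the fixed-Λ back-and-forth iteration (illustrative only: first-order explicit time stepping with an integrating factor for the diffusion terms, and plain under-relaxation in place of the line search and L-BFGS preconditioning described above; the parameter values are arbitrary choices in the small-ℓ, uniform regime):

import numpy as np

ell, Lam = np.pi / 8, 2.0                       # ring size and Lagrange multiplier (example values)
nx, nt, n_iter, mix = 64, 2000, 100, 0.1        # grid sizes, iterations, under-relaxation factor
dx, dt = ell / nx, 1.0 / nt
x = np.arange(nx) * dx
k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)        # angular wavenumbers on the ring
decay = np.exp(-k ** 2 * dt)                    # integrating factor for the diffusion terms

def ddx(f):                                     # spectral d/dx with periodic boundaries
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

h = np.zeros((nt + 1, nx))
p = Lam * np.ones((nt + 1, nx)) * (1 + 0.05 * np.cos(2 * np.pi * x / ell))  # perturbed initial guess

for _ in range(n_iter):
    # forward sweep: d_t h = h_xx + (h_x)^2/2 + p, with h(x,0) = 0 and p frozen
    for n in range(nt):
        nl = 0.5 * ddx(h[n]) ** 2 + p[n]
        h[n + 1] = np.real(np.fft.ifft(decay * np.fft.fft(h[n] + dt * nl)))
    # backward sweep: d_t p = -p_xx + (p h_x)_x, with p(x,1) = Lam and h frozen
    p_new = np.empty_like(p)
    p_new[nt] = Lam
    for n in range(nt, 0, -1):
        nl = -ddx(p_new[n] * ddx(h[n]))
        p_new[n - 1] = np.real(np.fft.ifft(decay * np.fft.fft(p_new[n] + dt * nl)))
    p = (1 - mix) * p + mix * p_new             # under-relaxation instead of a line search

S = 0.5 * dt * dx * np.sum(p[:nt] ** 2)         # action
H_bar = np.mean(h[nt])                          # average height at t = 1
print(H_bar, S, ell * Lam ** 2 / 2)             # here the uniform solution: H_bar ~ Lam, S ~ ell*Lam^2/2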
For large ℓ we find multiple solutions for the
same H̅, and the action S(H̅,ℓ) of the optimal solution as a
function of H̅
becomes nonconvex for some H̅. Nonconvexity of the rate
function S(H̅) is an issue because
minimizing the functional (<ref>) effectively computes the
Legendre–Fenchel transform of the rate function at Λ,
which may diverge in this case. Therefore, we add a
penalty term to the action, leading to the so-called
augmented Lagrangian formulation <cit.>
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ(
∫_0^ℓ h[p](x,1) dx - ℓH̅)
+ μ/2(∫_0^ℓ h[p](x,1)
dx - ℓH̅)^2 ,
and solve multiple minimization problems for increasing penalty
parameters μ.
In this formulation, one can directly prescribe H̅ at the
cost of solving multiple optimization problems, and it is usable
regardless of convexity of the rate function, or in other words regardless of
bijectivity of the map between H̅ and Λ.
The formulation (<ref>) is more convenient to
trace solution branches: one initializes the optimization on an
already found solution on a given branch and slightly changes
Λ. In order to trace branches close to the transition
region for large ℓ in
the nonconvex case, we temporarily reparameterize the observable
as described in Ref. <cit.> with reparameterizations
g(z) = lnln z or g(z) = 1 - exp{-(z - 3.5) }.
Within this general framework, we use a
pseudo-spectral code with spatial resolution n_x
to solve Eqs. (<ref>)
and (<ref>), with an exact integration of the diffusion
terms through an integrating factor in Fourier space. An explicit
second-order Runge–Kutta integrator with n_t equidistant steps
is used in time. The gradient of the action functional is
evaluated exactly on a discrete level (“discretize,
then optimize”). Python source code to illustrate the optimization
methods in a simple toy problem
can be found in Ref. <cit.>.
99
KK2007 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 75, 140201(R) (2007).
KK2008 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 78, 024206 (2008).
KK2009 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. E 80, 031107 (2009).
MKV B. Meerson, E. Katzav, and A. Vilenkin, Phys. Rev. Lett. 116, 070601 (2016).
KMSparabola A. Kamenev, B. Meerson, and P. V. Sasorov, Phys. Rev. E 94, 032108 (2016).
LDMRS P. Le Doussal, S. N. Majumdar, A. Rosso, and G. Schehr,
Phys. Rev. Lett. 117, 070403 (2016).
Janas2016 M. Janas, A. Kamenev, and B. Meerson, Phys. Rev. E 94, 032133 (2016).
KLD2017 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 96, 020102(R)
(2017).
MeersonSchmidt2017 B. Meerson and J. Schmidt, J. Stat. Mech. (2017) P103207.
SMS2018 N. R. Smith, B. Meerson, and P. V. Sasorov, J. Stat. Mech. (2018) 023202.
SKM2018 N. R. Smith, A. Kamenev, and B. Meerson, Phys. Rev. E 97, 042130 (2018).
SmithMeerson2018 N. R. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Hartmann2018 A. K. Hartmann, P. Le Doussal, S. N. Majumdar, A. Rosso,
and G. Schehr, Europhys. Lett. 121, 67004 (2018).
MV2018 B. Meerson and A. Vilenkin, Phys. Rev. E 98, 032145 (2018).
Asida2019 T. Asida, E. Livne, and B. Meerson, Phys. Rev. E 99, 042132 (2019).
SMV2019 N. R. Smith, B. Meerson, and A. Vilenkin, J. Stat. Mech. (2019)
053207.
HMS2019 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. Res. 1, 032043(R) (2019).
KLD2021 A. Krajenbrink and P. Le Doussal, Phys. Rev. Lett. 127, 064101 (2021).
HMS2021 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. E 104, 054125 (2021).
KLD2022 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 105, 054142 (2022).
Lamarre P. Y. G. Lamarre, Y. Lin, L.-C. Tsai,
Probab. Theor. Rel. Fields 185, 885 (2023).
SGG T. Schorlepp, T. Grafke, and R. Grauer, J. Stat. Phys. 190, 50 (2023).
KPZ M. Kardar, G. Parisi, and Y.-C. Zhang, Phys. Rev. Lett. 56, 889
(1986).
shortcut F. D. Cunden, P. Facchi, and P. Vivo, J. Phys. A: Math. Theor. 49, 135202 (2016).
Whithambook G. B. Whitham, Linear and Nonlinear Waves (Wiley, New York, 2011).
SM18 N. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Jacobi Wolfram MathWorld, https://mathworld.wolfram.com/JacobiEllipticFunctions.html
Wolf Wolfram Research, Inc., https://functions.wolfram.com/EllipticFunctions/JacobiDN/08/
SS T. Sasamoto and H. Spohn, Phys. Rev. Lett. 104, 230602 (2010).
CDR P. Calabrese, P. Le Doussal, A. Rosso, Europhys. Lett.
90, 20002 (2010).
Dotsenko V. Dotsenko, Europhys. Lett. 90, 20003 (2010).
ACQ G. Amir, I. Corwin, and J. Quastel, Comm. Pur. Appl. Math.
64, 466 (2011).
CLD11 P. Calabrese, and P. Le Doussal, Phys. Rev. Lett. 106, 250603 (2011).
CLD12 P. Le Doussal and P. Calabrese, J. Stat. Mech. (2012) P06001.
IS12 T. Imamura and T. Sasamoto, Phys. Rev. Lett. 108, 190603 (2012).
IS13 T. Imamura and T. Sasamoto, J. Stat. Phys. 150, 908 (2013).
Borodinetal A. Borodin, I. Corwin, P. L. Ferrari, and B. Vető, Math. Phys. Anal. Geom. 18, 20 (2015).
CS A. I. Chernykh and M. G. Stepanov, Phys. Rev. E 64,
026306 (2001).
SGMG T. Schorlepp, T. Grafke, S. May, and R. Grauer, Philos. Trans. Royal Soc. A 380, 20210051 (2022).
Plessix R.-E. Plessix, Geophys. J. Int. 167, 495 (2006).
Armijo L. Armijo, Pacific J. Math. 16, 1 (1966).
LN D. C. Liu and J. Nocedal, Math. Program. 45, 503 (1989).
Hestenes M. R. Hestenes, J. Optim. Theory. Appl. 4, 303 (1969).
AG M. Alqahtani and T. Grafke, J. Phys. A: Math. Theor. 54 175001 (2021).
STGS T. Schorlepp, S. Tong, T. Grafke, and G. Stadler, arXiv:2303.11919 (2023).
|
http://arxiv.org/abs/2307.03999v1 | 20230708154620 | Transport properties in gapped graphene through magnetic barrier in a laser field | [
"Rachid El Aitouni",
"Miloud Mekkaoui",
"Ahmed Jellal",
"Michael Schreiber"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
[email protected]
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Canadian Quantum Research Center,
204-3002 32 Ave Vernon, BC V1T 2L7, Canada
Institut für Physik, Technische Universität, D-09107 Chemnitz, Germany
We study the transport properties of Dirac fermions in gapped graphene through a magnetic barrier irradiated by a laser field oscillating in time. We use Floquet theory and the solution of Weber's differential equation to determine the energy spectrum corresponding to the three regions composing the system. The boundary conditions and the transfer matrix approach are employed to explicitly determine the transmission probabilities for multi-energy bands and the associated conductance. As an illustration, we focus only on the first three bands: the central band T_0 (zero photon exchange) and the first two side bands T_±1 (photon emission or absorption). It is found that the laser field activates the transmission process through photon exchange. Furthermore, we show that varying the incident angle and the energy gap strongly affects the transmission process. The conductance increases when the number of electrons that cross the barrier increases, namely when there is a significant transmission.
78.67.Wj, 05.40.-a, 05.60.-k, 72.80.Vp
Keywords: Graphene, laser field, magnetic field, energy gap, transmission, Klein effect, conductance.
Transport properties in gapped graphene through magnetic barrier in a laser field
Michael Schreiber
August 12, 2023
===================================================================================
§ INTRODUCTION
Graphene is a two-dimensional carbon-based material, one atom thick, with atoms arranged in a hexagonal honeycomb structure <cit.>. Graphene has remarkable properties such as a very high mobility <cit.>, electrons moving with a speed only 300 times lower than the speed of light, a good conductivity (minimal in the vicinity of the Dirac points, i.e., fermions always get through), flexibility <cit.>, and great hardness <cit.>.
Due to these properties, graphene is becoming the most used material in the technological industries <cit.>.
It is theoretically studied in the framework of the tight-binding model <cit.>, and as a result the energy spectrum shows a linear dispersion relation. In addition, the energy bands are in contact at six points <cit.>, called Dirac points K (K'), and form cones around them. Surprisingly, electrons can pass easily from the valence band to the conduction band without any excitation energy. This absence of an energy gap constitutes, in fact, an obstacle and a challenge for the fabrication of graphene-based devices. Consequently, to control the passage of electrons, an energy gap should be created between the two bands. Several studies have been reported on the subject to overcome such situations, for instance, either by deforming graphene to generate pseudo-magnetic fields that play the role of a real magnetic field <cit.> or by stacking one layer of graphene on another <cit.>.
On the other hand, fermions confined in graphene under barriers, at normal incidence, can cross them even if their energy is less than the barrier heights, an effect known as the Klein paradox <cit.>.
For an oscillating potential over time, the energy spectrum acquires sub-bands, generating several transmission modes, and each mode corresponds to an energy band <cit.>.
Furthermore, a magnetic field applied to graphene generates a quantized energy spectrum known as Landau levels <cit.>. Combining this with an oscillating potential gives rise to a current density in the x- and y-directions <cit.>. When graphene is irradiated by a time-varying laser field,
subbands emerge in the energy spectrum, and then the barrier exchanges photons with the fermions, generating infinite transmission modes <cit.>. As a consequence, the laser field suppresses the Klein effect, which makes it possible to control the passage of fermions.
We investigate how Dirac fermions can cross gapped graphene subjected to a magnetic barrier and irradiated by a laser field. Within the framework of Floquet theory <cit.> and by using the solution of Weber's differential equation <cit.>, we determine the eigenspinors corresponding to each region composing the system. These are matched at the boundaries and mapped into matrix form by applying the transfer matrix approach, which finally yields the transmission coefficients for all energy bands. Then, with the help of the current density, we derive the transmission probabilities for all modes.
The conductance is also calculated by integrating the total transmission over all incident angles.
Since it is not easy to treat all modes numerically, we limit our study to the first three bands, namely the central band (l=0) and the first two side bands (l=±1). We show that increasing the barrier width, or the incidence energy, decreases the transmissions, which implies that the number of electrons crossing the barrier decreases; consequently, the conductance decreases. On the other hand, when the intensity of the laser field increases, we observe that the transmissions decrease, but they increase as long as its frequency increases. When the barrier width increases, it is found that resonance peaks appear, and their number increases. Another set of results shows that the transmissions are almost zero when the incidence energy is less than the energy gap, and that the Klein paradox is still present.
This paper is organized as follows. In Sec. <ref>, we present the Hamiltonian describing our system and we will solve the eigenvalue equations to determine the wave functions in the three regions. We use the boundary conditions and the matrix formalism to express the transmission probabilities of each band, and we calculate the integral of this total transmission which makes it possible to determine the conductance at zero temperature in Sec. <ref>. We discuss our numerical results in Sec. <ref>. Finally, we conclude our work.
§ THEORETICAL MODEL
We study the behavior of Dirac fermions in a graphene sheet divided into three regions. Regions 1 and 3 contain only pristine graphene, whereas the gapped region 2 of width d is subjected to a perpendicular magnetic field and irradiated by a laser field, as shown in Fig. <ref>.
The present system can be described by the following Hamiltonian
H= v_F σ⃗·[p⃗-e/c(A⃗_L(t)+A⃗_B(x))]+Δσ_z
where σ_x,y,z are Pauli matrices, v_F≈ c/300 is the Fermi velocity , p⃗=-iħ(∂/∂ x,∂/∂ y) the momentum operator, e the electronic
charge. The vector potential
A⃗_L(t) of the laser field in the dipole approximation <cit.> is generated by an electric field of amplitude F and frequency ω defined as E(t)=Fsin(ω t), which is given by
A⃗_L(x,y,t)=(0,A_0cos(ω t),0)
with the laser field amplitude A_0=F/ω. For the magnetic field, the vector potential A⃗_B( x) is chosen in the Landau gauge B(0,x,0) and the continuity allows us to write
A⃗_B(x)= {[ 0, x<0; Bx, 0<x<d; Bd, x>d. ].
To determine the eigenspinors Ψ(x,y,t)=(Ψ_1, Ψ_2)^T in the three regions, we solve the eigenvalue equation, with T standing for transpose. In region 2 (0<x<d), we get
ΔΨ_1(x,y,t) + v_F[p_x-i(p_y-eF/ωcos(ω t)-eBx)]Ψ_2(x,y,t)=iħ∂/∂ tΨ_1(x,y,t)
v_F[p_x+i(p_y-eF/ωcos(ω t)-eBx)]Ψ_1(x,y,t)-ΔΨ_2(x,y,t)=iħ∂/∂ tΨ_2(x,y,t)
To proceed further, note that in the framework of the Floquet approximation <cit.>, the oscillation of the laser field over time produces several energy modes in the eigenspinors. As a result, we have
Ψ(x,y,t)=ψ(x,y,t)e^-iEt/ħ
where E is the Floquet quasi-energy, ψ(x,y,t) is a time-periodic function satisfying ψ(x,y,t+t_0)=ψ(x,y,t), and t_0 is the time period of the laser field. On the other hand, since the Hamiltonian is invariant along the y-direction, we write Ψ(x,y,t)=e^ik_yye^-iEt/ħφ(t)(ϕ_1(x),ϕ_2(x))^T, and therefore (<ref>,<ref>) become
v_F[-i∂/∂ x-i(k_y-F/ωcos(ω t)-Bx)]ϕ_2(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t-Δ)ϕ_1(x)φ(t)e^ik_yye^-iEt
v_F[-i∂/∂ x+i(k_y-F/ωcos(ω t)-Bx)]ϕ_1(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t+Δ)ϕ_2(x)φ(t)e^ik_yye^-iEt
in the system of units (ħ=e=c=1). It is straightforward to find
-iF/ωcos(ω t)φ(t)=∂/∂ tφ(t)
and therefore the temporal component is
φ(t)=e^-iαsin(ω t).
Now, we use the Jacobi–Anger identity e^-iαsin(ω t)=∑_-∞^+∞J_m(α)e^-imω t to write (<ref>,<ref>) as
∂ϕ_2(x)/∂ x-[x/ℓ_B^2-k_y+mϖ]ϕ_2(x)-i (ε+mϖ-δ)ϕ_1(x)=0
∂ϕ_1(x)/∂ x+[x/ℓ_B^2-k_y+mϖ]ϕ_1(x)-i (ε+mϖ+δ)ϕ_2(x)=0
where ℓ_B=1/√(B), ϖ=ω/v_F, F̃=F/v_F, ε=E/v_F and δ=Δ/v_F. From
(<ref>,<ref>), we obtain two new decoupled equations
∂^2ϕ_1(x)/∂ ^2 x+[1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_1(x) = 0
∂^2ϕ_2(x)/∂ ^2 x+[-1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2+δ^2]ϕ_1(x) = 0.
These can be expressed in terms of the Weber differential equations <cit.> by making the change of variable X_m=√(2)(x/ℓ_B-k_yℓ_B+mϖℓ_B) and setting v_m=[(εℓ_B+mϖℓ_B)^2-(δℓ_B)^2]/2, to get
d^2ϕ_1,2(X_m)/dX_m^2+[±1/2-X^2_m/4 +v_m]ϕ_1,2(X_m)=0
having the following solutions
ϕ_1(X_m) = A_mD_v_m(X_m)+B_mD_v_m(-X_m)
ϕ_2(X_m) = -i√(2 )/εℓ_B+mϖℓ_B+δℓ_B[ A_mD_v_m+1(X_m)-B_mD_v_m+1(-X_m)]
where A_m, B_m are constant coefficients corresponding to mth side-band, and D_v_m is the parabolic cylinder function. Consequently, the eigenspinors in region 2 take the form
Ψ_2(x,y,t)=e^ik_yy∑_l=-∞^+∞[A_l[ Ξ^+_l(x); η^+_l(x) ]
+B_l[ Ξ^-_l(x); η^-_l(x) ]]∑_m=-∞^+∞J_m(α)e^-i(ε+(l+m)ω)t
and we have defined
Ξ^±(x) = D_v_m(± X_m)
η^±(x) = ∓i√(2)/εℓ_B+m ϖℓ_B+δℓ_B D_v_m+1(± X_m).
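As an illustration (with arbitrary example parameters, not taken from the figures below), the components Ξ^±_l and η^±_l can be evaluated numerically with the parabolic cylinder function D_v available in SciPy:

import numpy as np
from scipy.special import pbdv

lB, eps, delta, varpi, ky = 1.0, 5.0, 1.0, 1.0, 0.5   # l_B, epsilon, delta, varpi, k_y (example values)
m, x = 0, 0.3                                         # side-band index and position inside the barrier

Xm = np.sqrt(2.0) * (x / lB - ky * lB + m * varpi * lB)
vm = ((eps * lB + m * varpi * lB) ** 2 - (delta * lB) ** 2) / 2.0

Xi_plus, _ = pbdv(vm, Xm)                             # D_v(X_m)
Xi_minus, _ = pbdv(vm, -Xm)                           # D_v(-X_m)
pref = -1j * np.sqrt(2.0) / (eps * lB + m * varpi * lB + delta * lB)
eta_plus = pref * pbdv(vm + 1, Xm)[0]
eta_minus = -pref * pbdv(vm + 1, -Xm)[0]
print(Xi_plus, Xi_minus, eta_plus, eta_minus)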
In the region 1 (x<0) we have only pristine graphene, and then we can easily obtain the associated eigenspinors and eigenvalues <cit.>
Ψ_1(x,y,t)=e^ik_yy∑_m=-∞^+∞[δ_l,0[ 1; Λ_l ]e^ik_lx+∑_m,l=-∞^+∞r_l[ 1; -Λ^*_l ]e^-ik_lx]δ_m,le^-iv_F( ε+mϖ)t
ε+lϖ=s_l√(k^2_l+k^2_y)
where r_l is the amplitude of the reflected wave corresponding to band l, δ_m,l=J_m-l(α=0), s_l=sgn(v_Fε+lv_Fϖ),
ϕ_l=tan^-1k_y/k_l,
k_l=(ε+lϖ)cosϕ_l,
k_y=(ε+lϖ)sinϕ_l and
Λ_l=s_l(k_l+ik_y)/√(k^2_l+k^2_y)=s_le^iϕ_l.
We can establish
the relation between the incident angles
ϕ_l=arcsin(ε/ε+lϖsin(ϕ_0)).
In region 3 (x>d), the emergent angle ϕ'_l is different than the incident one ϕ_0 because of the continuity of the vector potential. The solution is <cit.>
Ψ_3(x,y,t)=e^ik_yy∑_m,l=-∞^+∞[t_l[ 1; Λ'_l ]e^ik'_lx+b_l[ 1; -Λ'^*_l ]e^-ik'_lx]δ_m,le^-iv_F(ε+mϖ)t
ε+lϖ =s_l√(k_l^'2+(k_y- d/ℓ_B^2)^2)
where t_l is the amplitude of the transmitted wave corresponding to the band l, b_l is set to zero since there is no left-moving wave in region 3,
ϕ'_l=tan^-1ky- d/ℓ_B^2/k'_l,
k'_l=(ε+lϖ)cosϕ'_l,
k_y=(ε+lϖ)sinϕ'_l+d/ℓ_B^2
and
Λ'_l=s_l[k'_l+i(k_y-d/ℓ_B^2)]/√(k_l^'2+(k_y-d/ℓ_B^2)^2)=s_le^iϕ'_l.
From the conservation of the momentum k_y, we get the relation
ϕ'_l=arcsin(ε/ε+l ϖsinϕ_0- d/ℓ_B^2/ε+lϖ).
As we will see, the above results can be used to study the transport properties of gapped graphene scattered by a magnetic barrier and irradiated by a laser field. We obtain the transmissions associated with several energy bands and the corresponding conductance.
§ TRANSMISSION PROBABILITIES
We use the continuity of the eigenspinors at x=0 and x =d to
determine the transmission probabilities for the present system. This corresponds to the processes
Ψ_1(0,y,t)=Ψ_2(0,y,t) and Ψ_2(d,y,t)=Ψ_3(d,y,t),
which yields
δ_m,0+r_m=∑_l=-∞^+∞(A_lΞ^+_l(0)+B_lΞ^-_l(0))J_m-l(α)
δ_m,0Λ_m-r_mΛ_m^*=∑_l=-∞^+∞(A_lη^+_l(0)+B_lη^-_l(0))J_m-l(α)
t_me^ik'_md+b_me^-ik'_md=∑_l=-∞^+∞(A_lΞ^+_l(d)+B_lΞ^-_l(d))J_m-l(α)
t_mΛ^'_me^ik'_md-b_mΛ_m^'*e^-ik'_md=∑_l=-∞^+∞(A_lη^+_l(d)+B_lη^-_l(d))J_m-l(α).
We have four equations, but each one has an infinite number of modes, and to solve the problem, we use the transfer matrix approach. As a result, we get
[ Υ_1; Υ'_1 ]
=[ ℕ_1,1 ℕ_1,2; ℕ_2,1 ℕ_2,2 ][ Υ_2; Υ'_2 ]=ℕ[ Υ_2; Υ'_2 ]
with
ℕ=[ 𝕀 𝕀; Γ^+ Γ^-; ]^-1[ 𝕏^+_0 𝕏^-_0; ℝ^+_0 ℝ^-_0 ][ 𝕏^+_d 𝕏^-_d; ℝ^+_d ℝ^-_d ]^-1[ 𝕀 𝕀; Γ'^+ Γ'^-; ][ 𝕂^+ 𝕆; 𝕆 𝕂^-; ]
and
Γ^±=±δ_m,lΛ_l^±1, Γ'^±=±δ_m,lΛ_l^'±1, 𝕏^±_z=Ξ_l^±(z)J_m-l(α), ℝ^±_z=η_l^±(z)J_m-l(α), 𝕂^±=e^± ik'_lLδ_m,l
where 𝕆 is the zero matrix, 𝕀 is the unit matrix and z={0,d}.
In this case, we take into account Dirac fermions traveling from left to right with energy E, and from (<ref>), we obtain
Υ_2=ℕ^-1_1,1Υ_1
where Υ_1={δ_0,l} is the vector of Kronecker coefficients of the incident wave and Υ_2={t_l} is the vector of transmission amplitudes.
Because m and l range from -∞ to +∞, the above transfer matrix is of infinite order and challenging to solve. Due to this, we replace the infinite series by a finite set of terms ranging from -N to N, provided that N≥F/ω^2 <cit.>, resulting in
t_-N+k=ℕ'[k+1,N+1]
where ℕ'=ℕ^-1_11, k=0, 1, 2,⋯ N.
To simplify, we limit our studies only to the central band and the first two side bands l=0,± 1 of energy E± hω having the following transmission coefficients
t_-1=ℕ'[1,2],
t_0=ℕ'[2,2],
t_1=ℕ'[3,2].
On the other hand, the current density is determined from the continuity equation and is given by J=e v_F Ψ^*σ_xΨ. Therefore, the incident, reflected, and transmitted current densities are given by
J_inc,0=ev_F(Λ_0+Λ^*_0)
J_tra,l=ev_Ft^*_lt_l(Λ'_l+Λ'^*_l)
J_ref,l=ev_Fr^*_lr_l(Λ_l+Λ^*_l)
The relation between the current density and the transmission probability is expressed as T_l=J_tra,l/J_inc,0. Then, after some algebra, we get
T_l=cosϕ'_l/cosϕ_0|t_l|^2
and the total transmission probability is given by summing up over all modes
T=∑_lT_l.
By definition, the conductance at zero temperature is the average of the fermion flux over the half Fermi surface <cit.>; equivalently, it is the integral of the total transmission T over k_y <cit.>, given by
G=G_0/2π∫_-k_y^max^k_y^maxT dk_y
where G_0 is the conductance unit.
We use the relation between the transverse wave vector k_y and the incident angle ϕ_0 to express G as
G=G_0/2π∫_-π/2^π/2T cosϕ_0dϕ_0.
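As a simple illustration of this integral (a sketch with a placeholder transmission profile; in practice T comes from the transfer-matrix calculation above), the angular average can be evaluated by Gauss–Legendre quadrature:

import numpy as np

def T_total(phi0):
    # placeholder angular profile; in practice this is the total transmission
    # obtained from the transfer-matrix calculation of the previous section
    return np.cos(phi0) ** 2

nodes, weights = np.polynomial.legendre.leggauss(200)   # quadrature on [-1, 1]
phi = 0.5 * np.pi * nodes                                # map to [-pi/2, pi/2]
w = 0.5 * np.pi * weights
G_over_G0 = np.sum(w * T_total(phi) * np.cos(phi)) / (2 * np.pi)
print(G_over_G0)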
To investigate and underline the basic features of the present system, we numerically analyze the transport properties based on the transmission channels and the associated conductance in the following section.
§ RESULTS AND DISCUSSION
We numerically study the transmission probabilities of Dirac fermions in gapped graphene through a magnetic barrier in a laser field. Recall that the oscillation of the barrier over time generates several energy bands, which give rise to transmission channels. Due to the difficulty of analyzing all modes, we will limit ourselves to the first three bands, where the central band T_0 corresponds to zero photon exchange and the first two side bands T_±1 to absorption or emission of photons.
Fig. <ref> shows the transmission probability as a function of the energy εℓ_B for different incident angles. There is transmission if the condition ε > (d/ℓ_B^2-l ϖ)/(1+sinϕ_0)
is satisfied; in other words, this quantity plays the role of an effective mass <cit.>. For normal incidence, as depicted in Fig. <ref>, transmission is zero for ε<δ. Due to this condition, resonance peaks appear with decreasing amplitudes along the εℓ_B-axis, that is to say, the Fabry-Pérot resonances disappear, which is in agreement with previous results <cit.>. The transmission process with zero photon exchange, T_0, is dominating, and therefore the majority of the electrons cross the barrier without photon exchange.
Fig. <ref> shows the behavior of T_0 for different incident angles. As a result, in Fig. <ref> it increases
sharply away from normal incidence. On the other hand, for the transmissions with photon exchange, as shown in Figs. <ref> and <ref>, there is a decrease at large energy.
We can conclude that the behavior of T_0 changes if we move away from the normal incidence and that
the photon exchange process is suppressed.
Fig. <ref> displays the transmission probability as a function of εℓ_B under a suitable choice of physical parameters. Transmissions appear when condition ε >δ is satisfied. As clearly seen in Fig. <ref>, we observe the dominance of T_0 compared to those corresponding to the first two side bands, and it is almost equal to the total transmission as
found
in <cit.>. Now for different values of F̃ℓ_B^2, we plot T_0 in Fig. <ref>. We see that T_0 decreases with the increase of F̃ℓ_B^2, because the increase in laser field suppresses T_0 as we have already seen <cit.>.
Fig. <ref> displays the effect of field frequency on transmission: increasing the frequency increases T_0.
Fig. <ref> is drawn for different values of barrier width d/ℓ_B. If this increases, resonance peaks appear and their number increases, and the
oscillations get closer. A similar result is obtained in our previous work <cit.>.
Fig. <ref> presents the transmission probabilities as a function of the energy gap δℓ_B.
We show in Fig. <ref> the total transmission probability (magenta line) and those with or without photon exchange.
We distinguish two interesting cases: first, for δℓ_B<6, the Klein effect is very clear and transmission with photon exchange is almost zero, that means that the majority of electrons cross the barrier without photon exchange. Second, for δℓ_B > 6, the transmissions decrease in an oscillatory way until they become zero when δℓ_B is close to εℓ_B=15.
Fig. <ref> displays the total transmission for different values of F̃ℓ_B, and we see that the increase of F̃ℓ_B suppresses the transmission, as has been found in <cit.>. The Klein effect is clear for very small values of F̃ℓ_B and δℓ_B. For F̃ℓ_B=0.3, the Klein effect is observed only for δℓ_B<6, then the transmission decreases in an oscillatory way until the oscillations vanish. If we increase F̃ℓ_B the transmission keeps the same shape with decreasing amplitude, which is in agreement with the results of <cit.>.
Fig. <ref> is similar to the previous one, but here we vary ϖℓ_B. As a result, for ϖℓ_B=1 the Klein effect persists up to δℓ_B≈5, then the transmission decreases in an oscillatory way towards zero as δℓ_B approaches εℓ_B. On the other hand, there will be total reflection if the incident energy is lower than the energy gap.
If the frequency decreases, the transmission retains the same shape, but the amplitude decreases. Fig. <ref> shows the effect of the barrier width on the total transmission. We observe that resonance peaks appear when the width increases. For very small widths, the Klein effect is found up to δℓ_B ≈ 6, and then the transmission decreases towards zero. Increasing the width increases the number of oscillations and their amplitudes, as already seen in <cit.>. We summarize that increasing the amplitude of the field suppresses transmission inside the barrier. On the other hand, increasing the frequency increases the transmission, and increasing the width increases the number of oscillations and their amplitude.
Fig. <ref> shows the transmission probabilities as a function of the barrier width d/ℓ_B. In Fig. <ref> we observe that all the transmissions have sinusoidal behavior. The total transmission oscillates in the vicinity of one (Klein paradox). T_0 is predominant and its oscillation amplitude decreases when the width increases. The transmissions with photon exchange also oscillate, but with phase shift, which increases along the d/ℓ_B-axis. For certain values of d/ℓ_B, the transmissions with or without photon exchange are equal.
Fig. <ref> displays transmission with photon emission for different values of the transverse wave vector k_yℓ_B. There is always a sinusoidal behavior with increasing amplitude along the d/ℓ_B-axis. When k_yℓ_B increases, the width of the oscillations decreases.
In Fig. <ref>, we show the effect of the laser field frequency on the transmission. The amplitude and period of the oscillations decrease as the frequency increases; thus, increasing the frequency suppresses the transmissions with photon exchange.
We vary the intensity of the laser field F̃ℓ_B^2 in Fig. <ref> and observe that the transmission oscillates with the same period. Increasing F̃ℓ_B^2 enhances the transmission with photon exchange and reduces that of the central band.
In Fig. <ref>, we plot the conductance as a function of the energy εℓ_B. Choosing different values of width d/ℓ_B, Fig. <ref> reveals that the conductance varies almost exponentially for lower values of d/ℓ_B, and oscillates when d/ℓ_B increases.
Fig. <ref> shows the effect of intensity F̃ℓ_B^2 of the laser field on conductance. We observe that conductance increases as F̃ℓ_B^2 increases, but it vanishes when ε→δ.
Fig. <ref> is plotted for different values of frequency ϖℓ_B. We notice that the conductance tends to zero when εℓ_B is close to δℓ_B and the oscillations increase as ϖℓ_B increases.
In Fig. <ref>, we vary δℓ_B to observe that the conductance is always almost zero when ε tends towards δ.
Finally, to increase the conductance, it is necessary to increase the number of electrons crossing the barrier, thereby increasing the transmission. As we have seen, the transmission increases when the incident energy increases or the barrier width decreases, as well as when the intensity of the laser field decreases or its frequency increases.
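The zero-temperature conductance discussed above is obtained by integrating the total transmission over the angle of incidence. The sketch below shows one way such an angular integral can be carried out numerically; the Landauer-type weight cos ϕ and the toy transmission function are assumptions for illustration only, and the actual prefactor and T(ε, ϕ) come from the transfer-matrix calculation described in the text.

```python
import numpy as np

def conductance(total_T, energy, n_angles=2001):
    """Angular integral of the total transmission, G(E) ∝ ∫ T(E, phi) cos(phi) dphi,
    approximated by a simple Riemann sum over the incidence angle."""
    phi = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    return np.sum(total_T(energy, phi) * np.cos(phi)) * (phi[1] - phi[0])

# Toy stand-in for the total transmission; the real T comes from the transfer matrix.
def toy_T(energy, phi):
    return np.clip(1.0 - 0.3 / (energy * np.cos(phi) + 1e-9), 0.0, 1.0)

for E in (1.0, 3.0, 5.0):
    print(f"energy {E:.1f}: conductance (arbitrary units) = {conductance(toy_T, E):.3f}")
```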
In Figure <ref>, the conductance is represented as a function of the energy gap δℓ_B. By choosing three values of incident energy in Fig. <ref>, we show that the conductance is maximum at the beginning, then decreases in an oscillatory way towards zero near the value δ =ε. The amplitude increases when incident energy increases as well, exhibiting a behavior similar to transmission as we have seen before.
Fig. <ref> shows the effect of the width d/ℓ_B on the conductance. Resonance peaks always appear around δℓ_B=3, and the number of oscillations increases with d/ℓ_B. In Figs. <ref> and <ref>, we visualize the effect of the laser field parameters on the conductance: the conductance increases with increasing frequency and decreases with increasing field amplitude.
§ CONCLUSION
We studied the effect of a gapped magnetic barrier, irradiated by a laser field generated by an electric field of amplitude F and frequency ω, on Dirac fermions in graphene. We started by solving the eigenvalue equations to determine the spinors in the three regions of the gapped sheet. Using Floquet theory and the solution of Weber's differential equation, we obtained the eigenspinors of each region as combinations of parabolic cylinder functions. We then imposed the boundary conditions, which give four equations, each involving infinitely many modes. To solve them, we used the transfer matrix approach, which yields a matrix of infinite order that cannot be handled exactly. For simplicity, we focused only on the first three bands: the central band corresponds to l=0 and the first two side bands to l=±1. Lastly, we integrated the total transmission probability to obtain the conductance at zero temperature.
When a barrier oscillates in time, it generates several energy bands, corresponding to photon exchange between the barrier and the Dirac fermions. Here we found that the transmission process with zero photon exchange is much more important than the processes with photon exchange. The Klein paradox is still present, but it can be suppressed. While the original Klein effect is observed only at normal incidence (ϕ_0=0), in this work it is also observed at non-normal incidence. When the barrier width is increased, the transmission decreases until it disappears at a critical width, and the same happens for the conductance. On the other hand, the transmission increases when the incident energy increases. However, to have transmission, the condition relating the incident energy to the other barrier parameters, ε > (d/ℓ_B^2 - lϖ)/(1+sinϕ_0), must be satisfied. Since the conductance is nonzero only when the transmission is nonzero, this last condition must always hold.
9
Novoselov2004
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
Novoselov2005
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438, 197 (2005).
mobil2
S. Morozov, K. Novoselov, M. Katsnelson, F. Schedin, D. Elias, J. Jaszczak, and A. Geim, Phys. Rev. Lett. 100, 016602
(2008).
mobil
K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun.
146, 351 (2008).
flix
C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science 321, 385 (2008).
Beenakker2008
C. W. Beenakker, Rev. Mod. Phys. 80, 1337 (2008).
Bhattacharjee2006
S. Bhattacharjee and K. Sengupta, Phys. Rev. Lett. 97, 217001 (2006).
Bunch2005
J. S. Bunch, Y. Yaish, M. Brink, K. Bolotin, and P. L. McEuen,
Nano Lett. 5, 2887 (2005).
Berger2004
C. Berger, Z. M. Song, T. B. Li, X. B. Li, A. Y. Ogbazghi, R. Feng,
Z. T. Dai, A. N. Marchenkov, E. H. Conrad, P. N. First, and W. A. de Heer, J.
Phys. Chem. B 108, 19912 (2004).
Tight
S. Reich, J. Maultzsch, C. Thomsen, and P. Ordejon, Phys. Rev. B 66, 035412 (2002).
Castro2009
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
propr
N. M. R. Peres, J. Phys.: Condens. Matter 21, 323201 (2009).
def1
F. Guinea, M. I. Katsnelson, and A. K. Geim, Nat. Phys. 6, 30 (2010).
def4
G.-X. Ni, Y. Zheng, S. Bae, H. R. Kim, A. Pachoud, Y. S. Kim, C.-L. Tan, D. Im, J.-H. Ahn, B. H. Hong, and B. Ozyilmaz, ACS Nano 6, 1158 (2012).
scatring
S. Latil and L. Henrard, Phys. Rev. Lett. 97, 036803 (2006).
Morozov2005
S. V. Morozov, K. S. Novoselov, F. Schedin, D. Jiang, A. A. Firsov, and A. K. Geim, Phys. Rev. B 72, 201401 (2005).
klien2
M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nat. Phys. 2, 620 (2006).
jellal2014
A. Jellal, M. Mekkaoui, E. B. Choubabi, and H. Bahlouli, Eur. Phys. J. B 87, 123 (2014).
conmagnetic
A. De Martino, L. Dell’Anna, and R. Egger, Phys. Rev. Lett. 98, 066802 (2007).
Landau
F. Xu and L. Zhang, Chin. Phys. B 28, 117403 (2019).
Magnetic2011
M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
confinementmagnetic
N. Myoung and G. Ihm, Physica E 42, 70 (2009).
Elaitouni2022
R. El Aitouni and A. Jellal, Phys. Lett. A 447, 128288 (2022).
biswas2013
R. Biswas and C. Sinha, Appl. Phys. 114, 183706 (2013).
biswas2012
C. Sinha and R. Biswas, Appl. Phys. Lett. 100, 183107 (2012).
laser2
M. Ahsan Zeb, K. Sabeeh, and M. Tahir, Phys. Rev. B 78, 165420 (2008).
rachid2022
R. El Aitouni, M. Mekkaoui, A. Jellal, Ann. Phys. (Berlin) 535, 2200630 (2023).
floquetappr
Z. Gu, H. A. Fertig, D. P. Arovas, and A. Auerbach, Phys. Rev. Lett. 107, 216601 (2011).
grad
I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products (Academic Press, Inc. New York, 1980).
approx
R. Loudon, The Quantum Theory of Light (3rd ed, Oxford University Press, New York, 2000).
math
F. W. J. Olver, J. Res. Nat. Bur. Standards Sect. B 63, 131 (1959).
conduct1
X. Chen and J. W. Tao, Appl. Phys. Lett. 94, 262102 (2009).
conduct2
M. R. Masir, P. Vasilopoulos, and F. M. Peeters, Phys.
Rev. B 79, 035409 (2009).
Biswas2021
R. Biswas and C. Sinha, Sci. Rep. 11, 2881 (2021).
biswas2016
R. Biswas, S. Maitty, and C. Sinha, Physica E. 84, 235 (2016).
Mekkoui2021
M. Mekkaoui, A. Jellal, and H. Bahlouli, Solid State Commun. 358, 114981 (2022).
Sergy2011
S. E. Savel’ev and A. S. Alexandrov, Phys. Rev. B 84, 035428 (2011).
MEKKAOUI2018
M. Mekkaoui, R. El Kinani, and A. Jellal, Mater. Res. Expr. 6, 085013 (2019).
Makkoui2015
H. Chnafa, M. Mekkaoui, A. Jellal, and A. Bahaoui, Physica E 148, 115645 (2023).
|
http://arxiv.org/abs/2307.07634v1 | 20230714211133 | Features of a spin glass in the random field Ising model | [
"Sourav Chatterjee"
] | math.PR | [
"math.PR",
"cond-mat.dis-nn",
"math-ph",
"math.MP",
"82B44, 82D30"
] |
Stanford University
Features of a spin glass in the random field Ising model
Sourav ChatterjeeDepartment of Statistics, Stanford University, 390 Jane Stanford Way, Stanford, CA 94305, USA. Email: mailto:[email protected]@stanford.edu.
August 12, 2023
=============================================================================================================================================================================
A longstanding open question in the theory of disordered systems is whether short-range models, such as the random field Ising model or the Edwards–Anderson model, can indeed have the famous properties that characterize mean-field spin glasses at nonzero temperature. This article shows that this is at least partially possible in the case of the random field Ising model. Consider the Ising model on a discrete d-dimensional cube under free boundary condition, subjected to a very weak i.i.d. random external field, where the field strength is inversely proportional to the square-root of the number of sites. It turns out that in d≥ 2 and at sufficiently low temperatures, this model has some of the key features of a mean-field spin glass. Namely, (a) the site overlap exhibits one step of replica symmetry breaking, (b) the quenched distribution of the overlap is non-self-averaging, and (c) the overlap has the Parisi ultrametric property. Furthermore, it is shown that for Gaussian disorder, replica symmetry does not break if the field strength is taken to be stronger than the one prescribed above, and non-self-averaging fails if it is weaker, showing that the above order of field strength is the only one that allows all three properties to hold. However, the model does not have two other features of mean-field models. Namely, (a) it does not satisfy the Ghirlanda–Guerra identities, and (b) it has only two pure states instead of many.
Key words and phrases. Spin glass, random field Ising model, ultrametricity, replica symmetry breaking.
2020 Mathematics Subject Classification. 82B44, 82D30.
§ INTRODUCTION
The random field Ising model (RFIM) was introduced as a simple model of a disordered system by <cit.> in 1975. The model is defined as follows. Take any d≥ 1 and Λ⊆ℤ^d. Let E denote the set of edges connecting neighboring points in Λ. Given a field strength h∈ℝ, define the (random) Hamiltonian H:{-1,1}^Λ→ℝ as
H(σ) := -∑_{i,j}∈ Eσ_iσ_j - h ∑_i∈Λ J_iσ_i,
where J = (J_i)_i∈Λ is a fixed realization of i.i.d. random variables from some distribution. At inverse temperature β > 0, the RFIM prescribes a random Gibbs measure on {-1,1}^Λ with probability mass function proportional to e^-β H(σ).
A large body of deep mathematics has grown around this model, such as the early works of <cit.> on the multiplicity of ground states in the 3D RFIM, the proof of phase transition in d≥ 3 by <cit.>, the absence of phase transition in d≤ 2 proved by <cit.>, and the more recent works on quantifying the Aizenman–Wehr theorem <cit.>, culminating in the proof of exponential decay of correlations in the 2D RFIM by <cit.>. The recent developments have led to a resurgence of interest in this model in the mathematical community, yielding a number of new and important results <cit.>.
In spite of all this progress, one major question that has not yet been settled is whether the RFIM has a spin glass phase. A disordered system is said to exhibit spin glass behavior if it has the properties that characterize mean-field spin glasses. In the formulation laid out by Giorgio Parisi <cit.>, the main features of mean-field spin glasses are replica symmetry breaking (RSB), non-self-averaging (NSA), ultrametricity, and the presence of many pure states. These properties are defined as follows. Consider a system of N particles, with spins σ = (σ_1,…, σ_N)∈{-1,1}^N. In a disordered system, the probability law μ of σ is random. Let σ^1,σ^2,… be i.i.d. spin configurations drawn from a fixed realization of the random probability measure μ. The overlap between the configurations σ^i and σ^j is defined as
R_i,j := 1/N∑_k=1^N σ^i_kσ^j_k.
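As an aside, the overlap is straightforward to compute from sampled configurations. The following Python sketch forms the full overlap matrix for a handful of replicas; the configurations here are placeholder random signs, whereas in practice they would be drawn from the Gibbs measure of one fixed realization of the disorder.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_replicas = 500, 6

# Placeholder replicas: random signs. In an actual experiment each row would be
# a sample from the Gibbs measure of a single disorder realization.
sigma = rng.choice([-1, 1], size=(n_replicas, N))

# R[i, j] = (1/N) * sum_k sigma[i, k] * sigma[j, k]
R = sigma @ sigma.T / N
print(np.round(R, 3))
```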
Let ⟨R_1,2⟩ denote the expected value of R_1,2 with respect to μ. Roughly speaking, we say that the system exhibits replica symmetry if R_1,2≈⟨R_1,2⟩ with high probability (i.e., probability → 1 as the system size →∞). Otherwise, we say that replica symmetry breaks. The breaking of replica symmetry is usually quite difficult to prove rigorously. RSB has been established rigorously only in mean-field systems, where every particle interacts with every other particle. The primary example of this is the Sherrington–Kirkpatrick (SK) model <cit.>, where the discovery of RSB led to the development of Parisi's broken replica method <cit.>. Rigorous proofs of RSB in the SK and other mean-field models came much later (see <cit.> and references therein).
For short-range models such as the RFIM and the Edwards–Anderson (EA) model <cit.>, there is no proof of RSB as of now. Settling a longstanding debate <cit.>, it was shown in <cit.> that replica symmetry does not break in the RFIM at any fixed temperature and nonzero field strength. The question of RSB in the EA model is still open, although some aspects of spin glass behavior have been established at zero temperature in the recent preprint <cit.>, confirming some old conjectures from physics <cit.>.
The second basic property of spin glass models in Parisi's formulation is non-self-averaging (NSA). NSA is the property that the quenched law of R_1,2 (i.e., its law conditional on a realization of μ) does not converge to a deterministic limit in probability as the system size goes to infinity. Rigorous proofs of NSA are now known for mean-field systems <cit.>, but there is no short-range model that has been rigorously proved to have the NSA property. In fact, there are mathematical arguments based on ergodic theory that seem to rule out NSA in translation-invariant short-range models in infinite volume <cit.>, but there is a counter-argument that infinite volume systems do not truly represent finite volume behavior <cit.>. The main result of <cit.> implies that the RFIM does not have the NSA property at non-critical field strengths, but that leaves open the possibility that NSA may hold at critical temperatures in the RFIM. Nothing is known about NSA in the EA model.
The third basic property of spin glasses is ultrametricity. This means, roughly speaking, that for any given ε > 0, the probability of the event R_1,3≥min{R_1,2, R_2,3} - ε tends to 1 as the system size goes to infinity. Ultrametricity implies that the Gibbs measure “organizes the states like a tree”, a notion that has recently been made mathematically precise in <cit.>. Ultrametricity also has the important consequence that it allows one to write down the joint distribution of arbitrarily many overlaps from the distribution of a single overlap, and thereby understand almost everything about the system. Ultrametricity has been rigorously proved in mean-field systems, most notably by <cit.> for a variant of the SK model, followed by extensions to other mean-field systems <cit.>. As of today, there are no rigorous results about ultrametricity for systems with purely local interactions.
The fourth property — the existence of many pure states — means, very roughly, that the Gibbs measure behaves like a mixture of a large number of ergodic measures. Again, this is known rigorously only for mean-field models <cit.> and certain special models on lattices <cit.>. Incidentally, it is rather unclear how to define a pure state outside the setting of Markov random fields on finite-dimensional lattices <cit.>. For certain kinds of mean-field spin glasses, a rigorous definition was given by <cit.>. In the next section, we will give a general definition of the number of pure states that encompasses both mean-field and lattice models.
Proving that short-range models of disordered systems can have the above features of mean-field spin glasses has long been one of the main unsolved questions in this area, first posed in the seminal monograph of <cit.>. As mentioned above, there is a negative result from <cit.>, where it was established that the RFIM does not have a phase where replica symmetry breaks. In this article, we show that in spite of this negative result, the first three features of a spin glass listed above — RSB, NSA, and ultrametricity — can in fact arise in the RFIM, if instead of keeping the field strength h in (<ref>) fixed, we take it to zero like |Λ|^-1/2 as |Λ|→∞, and β is large enough. Moreover, if the J_i's are Gaussian, then we show that this is the only scaling of h where this happens. However, the fourth property does not hold, because the system appears to be a mixture of two pure states instead of many. Another common (but perhaps not essential) feature of mean-field spin glasses, called the Ghirlanda–Guerra identities, also does not hold for this system.
§ RESULTS
Take any d≥ 2. For each n, let B_n := {-n, …, n}^d, and let E_n be the set of undirected nearest neighbor edges of B_n. Let Σ_n := {-1,1}^B_n be the set of ±1-valued spin configurations on B_n. Let (J_i)_i∈ B_n be a collection of i.i.d. random variables with mean zero, variance one, and finite moment generating function in an open neighborhood of the origin. Let h∈ℝ be a parameter. Define the Hamiltonian H_n: Σ_n →ℝ as
H_n(σ) := -∑_{i,j}∈ E_nσ_i σ_j - h/√(|B_n|)∑_i∈ B_n J_i σ_i
This is the Hamiltonian for the Ising model on B_n subjected to a random external field of strength h J_i |B_n|^-1/2 at site i for each i∈ B_n. That is, we have replaced the parameter h in (<ref>) by h|B_n|^-1/2. The Gibbs measure for this model at inverse temperature β is the random probability measure on Σ_n with probability mass function proportional to e^-β H_n(σ) at each σ∈Σ_n. For a function f:Σ_n→ℝ, let ⟨f⟩ denote its expected value with respect to the Gibbs measure. The “quenched distribution” of f is the law of f(σ) conditional on (J_i)_i∈ B_n, where σ is drawn from the Gibbs measure.
§.§ Replica symmetry breaking and non-self-averaging
Let σ^1 and σ^2 be drawn independently from the Gibbs measure defined by a single realization of the disorder (J_i)_i∈ B_n. Recall from the previous section that the site overlap (or spin overlap) between σ^1 and σ^2 is defined as
R_1,2 := 1/|B_n|∑_i∈ B_nσ_i^1 σ_i^2.
If we have a sequence of configurations σ^1,σ^2,… drawn independently from the Gibbs measure, then R_i,j denotes the overlap between σ^i and σ^j. The following theorem is the first main result of this paper.
Take any d≥ 2 and n≥ 1 and consider the model defined above on B_n = {-n,…,n}^d at inverse temperature β>0. There exists β_0>0 depending only on d such that if β≥β_0, then there is a deterministic value q>0 depending only on β and d, such that (R_1,2^2 - q^2)^2→ 0 as n→∞. Moreover, if we define
X_n := √(q)β h/√(|B_n|)∑_i∈ B_n J_i,
then we have that
lim_n→∞[(R_1,2 - q tanh^2 X_n)^2] = 0.
Consequently, as n→∞, R_1,2 converges in law to qtanh^2(√(q)β h Z), where Z is a standard Gaussian random variable.
For the reader's convenience, let us briefly explain the significances of the two assertions of the above theorem. The first assertion, that (R_1,2^2 - q^2)^2→ 0 as n→∞, shows that when n is large, the overlap R_1,2 is close to either q or -q with high probability. The second assertion shows that the quenched expectation of R_1,2 is a random variable that converges to a non-degenerate limiting distribution as n→∞. Jointly, this proves two things. First, it shows that R_1,2 does indeed behave like a random variable that is close to one of two values, and not just one value (because otherwise, R_1,2 would be close to q or -q). This is known as one step of replica symmetry breaking (1RSB). Second, it shows that the quenched distribution of the overlap is not self-averaging — that is, it does not converge to a deterministic limiting distribution as n→∞. Equation (<ref>) shows that the mass near q is approximately
1/2(1+ tanh^2 X_n),
and the mass near -q is 1 minus the above. An important thing to note is that q depends only on β and d, and not on h. Thus, q is the limiting absolute value of the overlap in the ordinary Ising model — that is, the case h=0. In particular, Theorem <ref> implies that for the Ising model, the quenched law of R_1,2 converges in probability to the uniform distribution on {-q,q} as n→∞. The presence of h only changes the masses near q and -q.
Theorem <ref> shows that non-self-averaging can occur even in a system that only has local interactions. It is to be noted that the system under consideration here has no obvious representative in the infinite volume limit (because the field strength is tending to zero but with a non-trivial effect which cannot be captured by a model in infinite volume in any obvious way), thereby posing no contradiction to the results of <cit.> on the impossibility of NSA in translation-invariant infinite volume systems.
§.§ Ultrametricity
The next result says that the overlap satisfies the Parisi ultrametric property in the large n limit, meaning that R_1,3≥min{R_1,2,R_2,3} - o(1) with probability 1-o(1) as n→∞.
Let d, n, β_0, β and q be as in Theorem <ref>. Then, as n→∞, the quenched distribution of (R_1,2, R_1,3, R_2,3) converges in law to a random limiting distribution with support
{(q,q,q), (-q,-q,q), (-q,q,-q), (q,-q,-q)}.
Consequently, for any ε>0, the quenched probability of the event R_1,3≥min{R_1,2,R_2,3}-ε tends to 1 in probability as n→∞.
Combined with Theorem <ref>, it is easy to deduce the approximate masses assigned by the law of (R_1,2, R_1,3, R_2,3) near the four points displayed in (<ref>). Let a be the approximate mass near (q,q,q), and let b be the approximate mass near each of the other three points (which must be equal, by symmetry). Then a+3b ≈ 1, and a+b ≈ the probability of the event R_1,2≈ q, which is given by the formula (<ref>). Solving, we get
a ≈1/4(1+ 3tanh^2 X_n), b ≈1/4(1- tanh^2 X_n).
Just like Theorem <ref>, Theorem <ref> is valid even if h=0, that is, for the Ising model. It shows that at sufficiently low temperature, the overlap in the Ising model has the ultrametricity property.
§.§ Behavior of the magnetization
The magnetization of a configuration σ is defined as
m = m(σ) := 1/|B_n|∑_i∈ B_n σ_i.
The following theorem identifies the limiting behavior of the magnetization of a configuration drawn from the Gibbs measure when β is large enough. It also gives a relation between the magnetizations of two independently drawn configurations and their overlap.
Let d, n, β_0, β and q be as in Theorem <ref>. The magnetization m of a configuration σ drawn from the model satisfies (m^2 - q)^2→ 0 as n→∞, and with X_n defined as in (<ref>), we have
lim_n→∞[(m - √(q)tanh X_n )^2] = 0.
In particular, m converges in law to √(q)tanh(√(q)β h Z), where Z is a standard Gaussian random variable. Moreover, for most values of j∈ B_n, σ_j≈m with high probability, in the sense that
lim_n→∞1/|B_n|∑_j∈ B_n[(σ_j - m)^2] =0.
Lastly, if m(σ^1) and m(σ^2) are the magnetizations in two configurations σ^1 and σ^2 chosen independently from the same Gibbs measure, then (R_1,2 - m(σ^1)m(σ^2))^2→ 0 as n→∞.
This theorem is the basis for proving the previously stated results about the overlap, because it says that the overlap between two configurations is approximately equal to the product of their magnetizations with high probability, and gives the approximate distribution of the magnetization, which is concentrated near √(q) or -√(q) with high probability. In addition to the previously stated results, it also gives the asymptotic quenched distribution of any number of overlaps, because conditional on the disorder, m(σ^1), m(σ^2),… behave like i.i.d. random variables taking values in {-√(q), √(q)} with a certain distribution, and R_i,j≈ m(σ^i)m(σ^j) for each i≠ j.
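This limiting description is simple enough to simulate directly. In the sketch below, one disorder value X is drawn, the magnetizations of three replicas are taken to be i.i.d. signs times √q with success probability (1+tanh X)/2, and the overlaps are formed as products of magnetizations; the simulation then checks the ultrametric inequality and the mass a near (q,q,q) computed above. The numerical values of q, β, h are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
q, beta, h = 0.9, 1.0, 1.0                          # illustrative values
X = np.sqrt(q) * beta * h * rng.standard_normal()   # one realization of X_n in the limit
p_plus = 0.5 * (1 + np.tanh(X))                     # quenched prob. that m(sigma) is near +sqrt(q)

n_rep = 200_000
s = rng.choice([1, -1], size=(n_rep, 3), p=[p_plus, 1 - p_plus])  # signs of m(sigma^1..3)
m = np.sqrt(q) * s
R12, R13, R23 = m[:, 0] * m[:, 1], m[:, 0] * m[:, 2], m[:, 1] * m[:, 2]

print("ultrametric violations:", np.mean(R13 < np.minimum(R12, R23) - 1e-12))
print("empirical mass at (q,q,q):", np.mean((R12 > 0) & (R13 > 0) & (R23 > 0)),
      " vs (1 + 3 tanh^2 X)/4 =", 0.25 * (1 + 3 * np.tanh(X) ** 2))
```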
§.§ Two pure states
As mentioned in the introduction, it is unclear how to rigorously define pure states outside the setting of Markov random fields on a lattice, where it is well-understood <cit.>. We will now give a general definition of the number of pure states in a sequence of models, and show that according to this definition, our model has two pure states in the n→∞ limit.
Let {N_n}_n≥ 1 be a sequence of positive integers tending to infinity, and let (X_n,i)_n≥ 1, 1≤ i≤ N_n
be a triangular array of real-valued random variables. For each n, let π_n be a uniform random permutation of 1,…,N_n, independent of the X_n,i's. Let Y_n,i := X_n, π_n(i). Let Z = (Z_1,Z_2,…) be a sequence of random variables such that for each k, (Y_n,1,…, Y_n,k) converges to (Z_1,…,Z_k) in distribution as n→∞. Then note that Z is an infinite exchangeable sequence of random variables. By De Finetti's theorem <cit.>, the law of Z is a mixture of probability laws of i.i.d. sequences, with a unique mixing measure <cit.>.
In the above setting, let μ be the mixing measure of the law of Z. Let p be the size of the support of μ, which may be a positive integer or infinity. Then, we will say that the law of (X_n,i)_1≤ i≤ N_n has p pure states asymptotically as n→∞.
For example, if the X_n,i's are i.i.d., then so are the Z_i's, and therefore p=1. On the other hand, suppose that N_n = n and X_n,i = Y + W_i, i=1,…,n, where Y and W_1,W_2,… are i.i.d. standard Gaussian random variables. If π_n is a uniform random permutation of 1,…,n, then for any n and k, the law of (X_n,π_n(1),…, X_n,π_n(k)) is the same as the law of (Z_1,…,Z_k), where Z_i = Y+W_i. Now, Z_1,Z_2,… is an infinite exchangeable sequence, which is conditionally i.i.d. given Y. Since the support of Y contains infinitely many points, we deduce that the law of (X_n,i)_1≤ i≤ n has infinitely many pure states as n→∞.
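The second example is easy to visualize numerically. In the sketch below, several independent realizations of the array X_{n,i} = Y + W_i are generated; within each realization the randomly permuted coordinates look like an i.i.d. N(Y,1) sample, while the centre Y changes from realization to realization, which is exactly the content of the mixing measure having infinite support. All sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, realizations = 10_000, 5, 4

for rep in range(realizations):
    Y = rng.standard_normal()                 # the shared random centre
    X = Y + rng.standard_normal(n)            # X_{n,i} = Y + W_i
    first_k = X[rng.permutation(n)[:k]]       # first k coordinates after a uniform permutation
    print(f"realization {rep}: Y = {Y:+.2f}, sample mean = {X.mean():+.2f}, "
          f"first {k} permuted coords = {np.round(first_k, 2)}")
```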
In the setting of disordered systems, the law of (X_n,i)_1≤ i≤ N_n is itself random, and may not be converging to a deterministic limit in any reasonable sense as n→∞. Thus, we have to modify Definition <ref> to accommodate this scenario. Let Y_n,i = X_n,π_n(i) be defined as before. For each k, let ν_n,k be the law of (Y_n,1,…,Y_n,k), which is now a random probability measure. Let ν be a random probability measure taking value in the set of laws of infinite exchangeable sequences. Let ν_k be the (random) law of the first k coordinates of a sequence with law ν.
Let ν be as above, and let μ be the (random) mixing measure of a random probability measure with law ν. Suppose that there is a deterministic p∈{1,2,…}∪{∞} such that with probability one, the support of μ has p points. Also, suppose that for each k, the law of ν_n,k converges weakly to the law of ν_k. Then, we will say that the (random) law of (X_n,i)_1≤ i≤ N_n has p pure states asymptotically as n→∞.
The following result shows that under the above definition, our model has two pure states asymptotically as n→∞. This holds for any h, and in particular h=0, which is the case of the ordinary Ising model.
Let d, n, β_0 and β be as in Theorem <ref>. Then the random probability measure on Σ_n defined by the model from Theorem <ref> has two pure states asymptotically as n→∞, as defined in Definition <ref>.
§.§ Failure of the Ghirlanda–Guerra identities
The Ghirlanda–Guerra (GG) identities are a set of identities that are satisfied in the infinite volume limits of many mean-field spin glass models <cit.>. A symmetric array of random variables (S_i,j)_1≤ i,j<∞ is said to satisfy the GG identities if for any k, any bounded measurable function f of (S_i,j)_1≤ i,j≤ k, and any bounded measurable function ψ:ℝ→ℝ,
(f ψ(S_1,k+1)) = 1/k(f) (ψ(S_1,2)) + 1/k∑_i=2^k (f ψ(S_1,i)).
These identities have been proved for the limiting joint law of overlaps for a variety of mean-field models of spin glasses. (Here, the “joint law” refers to the unconditional distribution, averaged over the disorder.) They form the basis of Panchenko's proof of ultrametricity in <cit.>, following a line of prior work connecting the GG identities with ultrametricity <cit.>. The following theorem shows that the GG identities are not valid for our model. This shows that while the GG identities are sufficient for ultrametricity of the overlap (as shown by Panchenko <cit.>), they are not necessary.
Let d, n, β_0 and β be as in Theorem <ref>. Then the limiting joint distribution of the overlaps, as n→∞, does not satisfy the Ghirlanda–Guerra identities.
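One can see the failure concretely from the limiting overlap structure described above: conditionally on X, the overlaps are R_a,b = q s_a s_b with s_1, s_2, … i.i.d. signs of mean tanh X, so the conditional averages of R_1,2, R_1,2R_1,3 and R_1,2^2 are q tanh^2 X, q^2 tanh^2 X and q^2 respectively. The sketch below plugs these into both sides of the k = 2 Ghirlanda–Guerra identity with f = ψ = identity; the numerical values of q, β, h are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
q, beta, h = 0.9, 1.0, 1.0                     # illustrative values
X = np.sqrt(q) * beta * h * rng.standard_normal(500_000)
t2 = np.tanh(X) ** 2                           # tanh^2 of the limiting disorder variable

# Conditional (on X) averages in the limiting model:
# <R_12> = q t2, <R_12 R_13> = q^2 t2, <R_12^2> = q^2.
lhs = np.mean(q ** 2 * t2)                       # E<f(R_12) psi(R_13)> with f = psi = identity
rhs = 0.5 * np.mean(q * t2) ** 2 + 0.5 * q ** 2  # (1/2) E<f> E<psi(R_12)> + (1/2) E<f psi(R_12)>
print("LHS =", lhs, "  RHS =", rhs, "  (the two sides do not agree)")
```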
§.§ Failure of spin glass behavior at other field strengths
One may wonder if taking the field strength to be proportional to |B_n|^-1/2 is the only way to get replica symmetry breaking and non-self-averaging in the large n limit. Our next result shows that this is indeed the case for Gaussian disorder (and it is reasonable to conjecture that the same holds for any i.i.d. disorder). Replica symmetry does not break if the parameter h is allowed to go to ±∞ as n→∞, and the non-self-averaging of the quenched law of the overlap breaks down if h is allowed to go to zero as n→∞.
Suppose that the parameter h in the Hamiltonian H_n is allowed to vary with n. If h→0 as n→∞, then the distance between the quenched law of R_1,2 under our model and the law of R_1,2 under the Ising model on B_n at the same temperature and free boundary condition tends to zero in probability as n→∞, for any metric that metrizes weak convergence of probability measures. In particular, non-self-averaging fails. On the other hand, if |h|→∞ as n→∞, and if the J_i's are i.i.d. standard Gaussian random variables, then (R_1,2-R_1,2)^2→ 0, meaning that replica symmetry does not break. These conclusions hold at any temperature.
The second assertion of the above theorem extends <cit.> by showing that replica symmetry holds not only when the parameter h in the standard form (<ref>) of the RFIM Hamiltonian is fixed and nonzero, but is even allowed to go to zero slower than |Λ|^-1/2 (for Λ = B_n).
§.§ The antiferromagnetic RFIM
For the sake of completeness, let us also consider the random field antiferromagnetic Ising model on B_n under free boundary condition. This is the model where the minus in front of the first term on the right side in (<ref>) is replaced by a plus. That is, the Hamiltonian is
H_n(σ) := ∑_{i,j}∈ E_nσ_i σ_j - h/√(|B_n|)∑_i∈ B_n J_i σ_i.
All of the results for the ferromagnetic model continue to hold for the antiferromagnetic version, except one — the magnetization tends to zero instead of converging in law to a non-degenerate distribution.
Theorems <ref>, <ref> and <ref> remain valid for the antiferromagnetic model, with J_i replaced by (-1)^|i|_1J_i in the definition (<ref>), where |i|_1 is the ℓ^1 norm of i. The magnetization, however, satisfies m^2→ 0 as n→∞.
§.§ Uniformity of correlations in the ordinary Ising model
In addition to the above results, our analysis also reveals the following “uniformity of correlations” for the ordinary Ising model on B_n under free boundary condition and sufficiently low temperatures. Namely, σ_iσ_j≈ q for most i,j∈ B_n. More generally, for any even l and most i_1,…, i_l∈ B_n, σ_i_1⋯σ_i_l≈ q^l/2. This result is the foundation for most of the other results in this paper. Note that the correlation is zero if l is odd due to the invariance of the model under the transform σ→ -σ.
Let d, n, β_0, β and q be as in Theorem <ref>. Consider the ferromagnetic Ising model on B_n at inverse temperature β and free boundary condition (i.e., the model with Hamiltonian given in (<ref>) but with h=0). Then for any even positive integer l,
lim_n→∞1/|B_n|^l∑_i_1,…,i_l∈ B_n|σ_i_1⋯σ_i_l - q^l/2| = 0.
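The statement can be probed numerically for l = 2 on small boxes. The sketch below runs a Wolff cluster Monte Carlo for the 2D Ising model with free boundary at low temperature and compares the estimated two-point functions σ_iσ_j for a few well-separated bulk pairs; the lattice size, temperature and number of updates are arbitrary assumptions, and the output is only a rough illustration, not a verification of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

def wolff_update(spins, beta):
    """One Wolff cluster flip for the ferromagnetic Ising model, free boundary."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = (rng.integers(L), rng.integers(L))
    seed_spin = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L and (nx, ny) not in cluster:
                if spins[nx, ny] == seed_spin and rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
    for x, y in cluster:
        spins[x, y] *= -1

L, beta = 16, 0.7                         # 0.7 is above the 2D critical value ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(2000):                     # burn-in
    wolff_update(spins, beta)

pairs = [((4, 4), (11, 11)), ((4, 11), (11, 4)), ((4, 8), (11, 8))]
acc = np.zeros(len(pairs))
n_meas = 5000
for _ in range(n_meas):
    wolff_update(spins, beta)
    for k, (a, b) in enumerate(pairs):
        acc[k] += spins[a] * spins[b]
print("estimated two-point functions:", np.round(acc / n_meas, 3))
```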
Uniformity of correlations in infinite volume is a simple consequence of a result of <cit.> (see also <cit.>), which says that the infinite volume Gibbs measure for the Ising model under free boundary condition is the average of the infinite volume measures under plus and minus boundary conditions. We give this short proof in Subsection <ref>.
The finite volume result stated above does not follow easily from the infinite volume result, even though we know that correlations decay exponentially under plus and minus boundary conditions <cit.>. This is because Bodineau's theorem does not imply that the finite volume Gibbs measure under free boundary is approximately the average of the finite volume measures under plus and minus boundary conditions. We give two proofs of Theorem <ref>. The first proof is a bare-hands argument based on a generalization of the Kramers–Wannier duality for the Ising model to arbitrary dimensions and a multi-scale argument via this duality, starting from the infinite volume result as a launchpad. The second proof, due to Hugo Duminil-Copin (private communication), uses the random cluster representation of the Ising model and Pisztora's renormalization scheme <cit.>. While the second proof is shorter, I have decided to retain the original proof because of its self-contained nature and the new ideas (e.g., the duality relation) that may be useful for other purposes.
This completes the statements of the results. The rest of the paper is devoted to proofs.
§.§ Acknowledgements
I thank Louis-Pierre Arguin, Andrew Chen, Persi Diaconis, Hugo Duminil-Copin, Zhihan Li and Gourab Ray for many helpful comments and references. In particular, I thank Hugo for sketching the alternative proof of Theorem <ref> and Gourab for helping expand the sketch to a complete argument. This work was partially supported by NSF grants DMS-2113242 and DMS-2153654.
§ PROOF OF THEOREM <REF>
In this section we give the first proof of Theorem <ref>. We begin with some preliminaries.
§.§ Discrete exterior calculus
For a detailed exposition of discrete exterior calculus, see <cit.>. The basic facts that we will need in this paper are the following. The cell complex of ℤ^d is defined as follows. A d-cell is any cube like [x_1, x_1+1]×⋯× [x_d, x_d+1] for x_1,…, x_d∈ℤ. For 0≤ k < d, a k-cell is any k-dimensional face of a d-cell. So, for example, a 0-cell is a vertex and a 1-cell is an edge. A 2-cell is called a plaquette. Consider G:={-1,1} as a group under multiplication. A G-valued k-form on ℤ^d is any function from the set of k-cells into G.
If 1≤ k≤ d and f is a G-valued k-form, the exterior coderivative δ f of f is a (k-1)-form defined as follows. For each (k-1)-cell x, δ f(x) is the product of f(y) over all k-cells y that contain x as a face. Note that this definition is only for G={-1,1}. When G is a general Abelian group, the definition is more complicated; see <cit.> for details.
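To make the definition concrete, here is a small Python sketch that builds the edges and plaquettes of a tiny two-dimensional cube, computes the coderivative of a random {-1,1}-valued 2-form, and then checks that applying the coderivative twice gives the trivial 0-form, as asserted in the lemma below. The data structures are ad hoc choices for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 2                                        # vertex set {0,...,n}^2
verts = list(itertools.product(range(n + 1), repeat=2))
edges = []
for x, y in verts:
    if x + 1 <= n:
        edges.append(((x, y), (x + 1, y)))
    if y + 1 <= n:
        edges.append(((x, y), (x, y + 1)))
plaqs = [(x, y) for x in range(n) for y in range(n)]   # plaquettes by lower-left corner

def plaq_edges(p):
    x, y = p
    return [((x, y), (x + 1, y)), ((x, y + 1), (x + 1, y + 1)),
            ((x, y), (x, y + 1)), ((x + 1, y), (x + 1, y + 1))]

h = {p: rng.choice([-1, 1]) for p in plaqs}             # a random 2-form
# coderivative of the 2-form: (delta h)(e) = product of h over plaquettes containing e
dh = {e: np.prod([h[p] for p in plaqs if e in plaq_edges(p)]) for e in edges}
# coderivative of the resulting 1-form: (delta dh)(v) = product of dh over edges containing v
ddh = {v: np.prod([dh[e] for e in edges if v in e]) for v in verts}
print("delta(delta h) is identically 1:", all(val == 1 for val in ddh.values()))
```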
A cube in ℤ^d is a set of the form [a_1,b_1]×⋯×[a_d,b_d] for some a_1,…,a_d∈ℤ and b_1,…,b_d∈ℤ such that b_i> a_i for all i and b_i - a_i is the same for all i. We will say that a G-valued k-form f is supported on a cube B if f(x)=1 for any k-cell x that is not a part of B.
The main result from discrete exterior calculus that we will need is the following “discrete Poincaré lemma” for the coderivative.
Let B be any cube in ^d and take any 1≤ k< d. Let f be a G-valued k-form supported on B. Suppose that δ f ≡ 1. Then there are exactly r many (k+1)-forms h supported on B such that f = δ h, where r is the number of (k+1)-forms g supported on B such that δ g ≡ 1. Conversely, if f = δ h for some h supported on B, then δ f ≡ 1.
The existence of h follows from <cit.>. For the second assertion, take any h supported on B such that δ h = f. Let h' be any other (k+1)-form supported on B such that δ h' = f. Let g be the pointwise product of h and h'. Then g is supported on B, and for any k-cell x, δ g(x) = δ h(x)δ h'(x) = f(x)^2 =1. On the other hand, if g is a (k+1)-form supported on B such that δ g ≡ 1, and h' = gh, then δ h' = δ g δ h = f. Thus, δ h' = f if and only if h' = gh for some g supported on B, satisfying δ g ≡ 1. This shows that there are exactly r many solutions of δ h = f that are supported on B. Finally, the claim that δδ h ≡ 1 for any h follows from a combination of Lemmas 2.1 and 2.3 of <cit.>, and the discussion preceding <cit.>.
§.§ A generalization of the Kramers–Wannier duality
The Kramers–Wannier duality <cit.> relates the 2D ferromagnetic Ising model at a given temperature with the same model at another “dual” temperature. The Kramers–Wannier duality is centrally important in the study of the 2D Ising model, and has been generalized to numerous other contexts. In this section we present a generalization of this duality to the Ising model in any dimension d≥ 2 with general ferromagnetic couplings and free boundary conditions. In general dimension, the dual model is no longer an Ising model with spins on vertices; rather, it is an Ising model with spins attached to plaquettes. This includes the original duality in 2D because plaquettes in 2D are in one-to-one correspondence with vertices in the dual lattice. It also includes the duality between the 3D Ising model and the 3D Ising lattice gauge theory discovered by <cit.>, because plaquettes are dual to edges in 3D.
Fix n, d≥ 2. Let B_n := {-n,…,n}^d and let E_n be the set of nearest-neighbor edges of B_n, as before. Suppose that for each e = {i,j}∈ E_n, we have a deterministic coupling parameter K_e = K_ij> 0. The Hamiltonian of the ferromagnetic Ising model on Σ_n := {-1,1}^B_n with these couplings and free boundary condition is given by
H_K(σ) := -∑_{i,j}∈ E_n K_ijσ_i σ_j.
Since the couplings are arbitrary, we may absorb the inverse temperature into the coupling parameters, and define the Gibbs measure associated with this model as the probability measure on Σ_n with probability mass function proportional to e^- H_K(σ). In this subsection, we will denote averaging with respect to this Gibbs measure by ·.
The dual model is defined as follows. Let P_n denote the set of plaquettes of B_n. We will write e∈ p to mean that an edge e belongs to a plaquette p. For each e∈ E_n, define
L_e := -1/2logtanh K_e.
Note that L_e∈ (0,∞) since K_e∈ (0,∞). The configuration space for our dual model is Σ_n^* := {-1,1}^P_n. For σ^* ∈Σ_n^* and e∈ E_n, we will use the notation
σ_e^* := ∏_p∈ P_n, p∋ eσ_p^*.
Define the Hamiltonian H_L^*:Σ_n^* →ℝ as
H_L^*(σ^*) := - ∑_e∈ E_n L_eσ_e^*.
This is the Hamiltonian for the dual model. We will denote averaging with respect to the dual model by ·^*. The following lemma gives a duality relation between the partition functions (normalizing constants) of the primal and dual models.
Let r_n be the number of 2-forms g supported on B_n such that δ g ≡ 1. Let
α := ∏_e∈ E_n√(cosh K_esinh K_e).
Let Z and Z^* be the partition functions of the primal and dual models defined above. Then
Z = 2^|B_n| (α/r_n) Z^*.
For σ∈Σ_n and e = {i,j}∈ E_n, let σ_e := σ_iσ_j. Then
Z = ∑_σ∈Σ_n∏_e∈ E_n e^K_eσ_e.
Since σ_e is either 1 or -1,
e^K_e σ_e = cosh K_e + σ_e sinh K_e = √(cosh K_e sinh K_e)(e^L_e + σ_e e^-L_e).
Substituting this in (<ref>), we get
Z = α∑_σ∈Σ_n∏_e∈ E_n(e^L_e + σ_e e^-L_e).
Expanding the product on the right gives
∏_e∈ E_n(e^L_e + σ_e e^-L_e) = ∑_κ∈{0,1}^E_n∏_e∈ E_nσ_e^κ_e e^L_e(1-2κ_e).
Making the change of variable τ_e=1-2κ_e, this gives
∏_e∈ E_n(e^L_e + σ_e e^-L_e) = ∑_τ∈Γ_n∏_e∈ E_nσ_e^1/2(1-τ_e) e^L_e τ_e,
where Γ_n := {-1,1}^E_n. Substituting this back in (<ref>), we get
Z = α∑_σ∈Σ_n∑_τ∈Γ_n∏_e∈ E_nσ_e^1/2(1-τ_e)e^L_e τ_e
= α∑_τ∈Γ_n e^∑_e∈ E L_eτ_e(∑_σ∈Σ_n∏_i∈ B_nσ_i^f(τ,i)),
where
f(τ, i):= ∑_e∈ E_n, e∋ i1-τ_e/2.
In other words, f(τ,i) counts the number of edges e∈ E_n containing the vertex i for which τ_e=-1. For convenience, let us call this set of edges E(i). The number f(τ, i) is even if and only if
∏_e∈ E(i)τ_e = 1.
Consequently,
∑_σ∈Σ_n∏_i∈ B_nσ_i^f(τ,i) = ∏_i∈ B_n (1+ (-1)^f(τ,i)) = 2^|B_n| if ∏_e∈ E(i)τ_e = 1 for every i∈ B_n, and 0 otherwise.
Let Γ_n' be the set of all τ∈Γ_n for which (<ref>) holds for every i. By (<ref>) and (<ref>), we get
Z = 2^|B_n|α∑_τ∈Γ_n'exp(∑_e∈ E_n L_eτ_e).
Now note that any τ∈Γ_n naturally defines a G-valued 1-form f supported on B_n. In terms of f, the condition (<ref>) means that the 0-form δ f satisfies δ f(i)=1 for all i∈ B_n. Thus, τ∈Γ_n' if and only if the corresponding f is supported on B_n and satisfies δ f ≡ 1. By Lemma <ref>, this happens if and only if f = δ h for some 2-form h supported on B_n, and moreover, there are exactly r_n choices of h for any given f. But any such h corresponds to a spin configuration σ^* on the set of plaquettes P_n, and so,
∑_τ∈Γ_n'exp( ∑_e∈ E_n L_eτ_e) = 1/r_n∑_σ^*∈Σ_n^*exp(∑_e∈ E_n L_eσ^*_e).
To complete the proof, note that the sum on the right is Z^*.
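Since the identity of Lemma <ref> is completely explicit, it can be checked by brute force on the smallest two-dimensional cube B_1 (a 3×3 block of vertices with 12 edges and 4 plaquettes). The Python sketch below enumerates both partition functions and the count r_n directly from the definitions above, with a uniform coupling K chosen arbitrarily.

```python
import itertools
import numpy as np

K, n = 0.8, 1                                   # arbitrary uniform coupling; B_1 in d = 2
verts = list(itertools.product(range(-n, n + 1), repeat=2))
edges = []
for x, y in verts:
    if x + 1 <= n:
        edges.append(((x, y), (x + 1, y)))
    if y + 1 <= n:
        edges.append(((x, y), (x, y + 1)))
plaqs = list(itertools.product(range(-n, n), repeat=2))   # plaquettes by lower-left corner

def plaq_edges(p):
    x, y = p
    return [((x, y), (x + 1, y)), ((x, y + 1), (x + 1, y + 1)),
            ((x, y), (x, y + 1)), ((x + 1, y), (x + 1, y + 1))]

# primal partition function Z
Z = 0.0
for s in itertools.product([-1, 1], repeat=len(verts)):
    spin = dict(zip(verts, s))
    Z += np.exp(K * sum(spin[i] * spin[j] for i, j in edges))

# dual partition function Z*, and the count r_n of 2-forms with trivial coderivative
L = -0.5 * np.log(np.tanh(K))
Z_dual, r = 0.0, 0
for s in itertools.product([-1, 1], repeat=len(plaqs)):
    g = dict(zip(plaqs, s))
    sig = [np.prod([g[p] for p in plaqs if e in plaq_edges(p)]) for e in edges]
    Z_dual += np.exp(L * sum(sig))
    r += all(v == 1 for v in sig)

alpha = (np.cosh(K) * np.sinh(K)) ** (len(edges) / 2.0)
print("Z =", Z, "   2^|B_n| * (alpha / r_n) * Z* =", 2 ** len(verts) * alpha * Z_dual / r)
```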
The next lemma gives a duality relation between expectations of certain kinds of functions with respect to the primal and dual Gibbs measures.
For σ∈Σ_n and e = {i,j}∈ E_n, let σ_e := σ_iσ_j. For σ^*∈Σ_n^* and e∈ E_n, define σ_e^* as in equation (<ref>). Let F be any nonempty subset of E_n. Then
∏_e∈ Fσ_e = exp(-2∑_e∈ FL_e σ^*_e)^*.
Note that
∏_e∈F σ_e = 1/Z∑_σ∈Σ_n (∏_e∈F σ_e )(∏_e∈E_n e^K_eσ_e).
Let Γ_n be as in the proof of Lemma <ref>. For each τ∈Γ_n, define a vector τ'∈{0,1,2}^E_n as:
τ'_e = 1/2(1-τ_e) if e∉ F, and τ'_e = 1/2(1-τ_e)+ 1 if e∈ F.
Proceeding as in the proof of Lemma <ref>, we get
∑_σ∈Σ_n∏_e∈ Fσ_e ∏_e∈ E_n e^K_eσ_e = α∑_σ∈Σ_n(∏_e∈ Fσ_e)(∑_τ∈Γ_n∏_e∈ E_nσ_e^1/2(1-τ_e)e^L_e τ_e)
= α∑_σ∈Σ_n∑_τ∈Γ_n∏_e∈ E_nσ_e^τ_e'e^L_eτ_e
= α∑_τ∈Γ_n e^∑_e∈ E L_eτ_e(∑_σ∈Σ_n∏_i∈ B_nσ_i^h(τ,i)),
where
h(τ, i):= ∑_e∈ E(i)τ'_e.
Now, for each τ∈Γ_n, define another vector τ”∈Γ_n as
τ”_e = τ_e if e∉ F, and τ”_e = -τ_e if e∈ F.
Note that if e∉F, then τ_e' is even if and only if τ_e = 1. On the other hand, if e∈ F, then τ_e' is even if and only if τ_e = -1. Thus, for any e, τ'_e is even if and only if τ”_e=1. From this observation it follows easily that h(τ, i) is even if and only if
∏_e∈ E(i)τ_e” = 1.
Consequently,
∑_σ∈Σ_n∏_i∈ B_nσ_i^h(τ,i) = ∏_i (1+ (-1)^h(τ,i)) = 2^|B_n| if ∏_e∈ E(i)τ_e” = 1 for every i∈ B_n, and 0 otherwise.
Let Γ_n” be the set of all τ∈Γ_n that satisfy (<ref>) for all i. Then by (<ref>) and (<ref>),
∑_σ∈Σ_n∏_e∈ Fσ_e ∏_e∈ E_n e^K_eσ_e = 2^|B_n|α∑_τ∈Γ_n”exp( ∑_e∈ E_n L_eτ_e).
Now recall the set Γ_n' from the proof of Lemma <ref>. Note that τ∈Γ_n” if and only if τ”∈Γ'_n. Moreover, the map τ↦τ” is a bijection, which is its own inverse. Thus,
∑_τ∈Γ_n”exp(∑_e∈ E_n L_eτ_e) = ∑_τ∈Γ'_nexp(∑_e∈ E_n L_e τ”_e)
= ∑_τ∈Γ'_nexp( - 2 ∑_e∈ F L_eτ_e + ∑_e∈ E_n L_eτ_e).
Now, following the same steps as in the last part of the proof of Lemma <ref>, we get that
∑_τ∈Γ'_nexp( - 2 ∑_e∈ F L_eτ_e + ∑_e∈ E_n L_eτ_e) = 1/r_n∑_σ^*∈Σ_n^*exp(-2∑_e∈ F L_eσ^*_e - H_L^*(σ^*)).
Using the formula for Z from Lemma <ref>, this completes the proof.
§.§ A generalization of Dobrushin's condition
Let (Ω, ℱ) be a measurable space. Recall that the total variation distance between two probability measures ν and ν' on (Ω, ℱ) is defined as
(ν, ν') := 1/2sup|∫ fdν - ∫ fdν'|,
where the supremum is over all measurable functions f:Ω→ [-1,1].
Now suppose that Ω is a finite set. Take any n≥ 1, and let μ and μ' be two probability measures on Ω^n, with supports (μ) and (μ'). In the following, we will use the notation x̅^i to denote the element of Ω^n-1 obtained by dropping coordinate i of a vector x∈Ω^n. For x∈(μ), let μ_i(·|x̅^i) denote the conditional law (under μ) of coordinate i given that the vector of the remaining coordinates equals x̅^i. Define μ_i' similarly.
Suppose that for any i, any x∈(μ) and any y∈(μ'),
(μ_i(·|x̅^i), μ^'_i(·|y̅^i)) ≤∑_j=1^n α_ij 1_{x_j ≠ y_j} + h_i,
where α_ij's and h_i's are fixed nonnegative real numbers, α_ii=0, and 1_{x_j ≠ y_j} = 1 if x_j ≠ y_j and 0 otherwise.
Assume that
s := max_1≤ i≤ n∑_j=1^n α_ij < 1.
Let Q be the matrix (α_ij)_1≤ i,j≤ n and suppose that P is a Markov transition matrix on {1,…,n} such that Q ≤ sP elementwise. For each i and j, let τ_ij be the first hitting time of state j starting from state i of a Markov chain with transition kernel P. Under the above assumptions, the following result follows from <cit.>. This generalizes a classical condition for exponential decay of correlations due to Dobrushin <cit.>.
Let all notations be as above, and suppose that (<ref>) and (<ref>) hold. Let Z and Z' be two Ω^n-valued random vectors, with laws μ and μ' respectively. Take any B⊆{1,…,n}. Let ν be the law of (Z_i)_i∈ B and ν' be the law of (Z_i')_i∈ B. Then
(ν, ν') ≤1/(1-s)∑_i∈ B∑_j=1^n (s^τ_ij)h_j.
We will use a corollary of the above theorem, which compares a single probability measure under two different conditionings instead of comparing two probability measures as in Theorem <ref>. Let μ and μ_i be as above, and suppose that α_ij are nonnegative constants such that for any x,y∈Ω^n and any i, α_ii=0 and
(μ_i(·|x̅^i), μ_i(·|y̅^i)) ≤∑_j=1^n α_ij 1_{x_j ≠ y_j}.
Then we claim that the following is true.
Let μ be as above, satisfying (<ref>). Let X= (X_1,…,X_n) be a random vector with law μ. Take any two disjoint and nonempty sets A, B ⊆{1,…,n}. Let U be a function of (X_i)_i∈ A and V be a function of (X_i)_i∈ B.
Suppose that
s := max_i∉ A∑_j=1^n α_ij < 1.
Let l be the minimum possible length of a sequence i_0,i_1,…,i_l such that i_0∈ B, i_l∈ A, and α_i_ri_r+1 > 0 for each r. Then
|(UV) - (U)(V)| ≤2|A||B|s^l/(1-s)V_∞|U|,
where V_∞ denotes the maximum of |V(x)| over x∈Ω^B.
Take any (a_i)_i∈ A and (b_i)_i∈ A in the support of (X_i)_i∈ A. Let ν and ν' be the conditional distributions of X given (X_i)_i∈ A = (a_i)_i∈ A and (X_i)_i∈ A = (b_i)_i∈ A. Take any x,y∈(μ) such that for all i∈ A, x_i =a_i and y_i=b_i. Then x∈(ν), y∈(ν'), and the condition (<ref>) implies that
(ν_i(·|x̅^i), ν'_i(·|y̅^i)) ≤∑_j=1^n α_ij' 1_{x_j ≠ y_j} + h_i,
where h_i = 1 if i∈ A and 0 otherwise, and α_ij' = α_ij if i∉ A and 0 otherwise. Note that ∑_j=1^n α_ij' ≤ s for all i. Now define a Markov transition matrix P= (p_ij)_1≤ i,j≤ n as follows. Take any i and j. If ∑_k=1^n α_ik' = 0, then define p_ij = 0 if j≠ i and 1 if j=i. If ∑_k=1^n α_ik' >0, define
p_ij := α_ij'/∑_k=1^n α_ik'.
Then the entries of P are nonnegative and each row sum equals 1. That is, P is a Markov transition matrix. Take any 1≤ i≠ j≤ n. If ∑_k=1^n α_ik' = 0, then α_ij'=0≤ s p_ij. If ∑_k=1^n α_ik' > 0, then i∉ A, and therefore, by (<ref>) and the above definition of p_ij, α_ij' ≤ s p_ij. Also, for any i, α_ii' = 0≤ sp_ii. Thus, Q≤ sP elementwise, where Q = (α_ij')_1≤ i,j≤ n. So, we can apply Theorem <ref>. Note that the Markov chain with transition matrix P can jump from i to j only if i=j or α_ij>0. Thus, if i∈ B and j∈ A, then τ_ij≥ l (using the notation of Theorem <ref>). By Theorem <ref>, this implies that
(γ,γ') ≤|A||B|s^l/(1-s),
where γ and γ' are the laws of (X_i)_i∈ B conditional on (X_i)_i∈ A= (a_i)_i∈ A and (X_i)_i∈ A = (b_i)_i∈ A. This proves that
|(V|(X_i)_i∈ A= (a_i)_i∈ A) - (V|(X_i)_i∈ A= (b_i)_i∈ A)| ≤2|A||B|s^l/(1-s)V_∞.
Integrating over (b_i)_i∈ A with respect to the law of (X_i)_i∈ A, we get that for any (a_i)_i∈ A,
|(V|(X_i)_i∈ A= (a_i)_i∈ A) - (V)| ≤2|A||B|s^l/(1-s)V_∞.
Since U is a function of (X_i)_i∈ A, this shows that
|(UV) - (U)(V)| = |(((V|(X_i)_i∈ A) - (V))U)|
≤|((V|(X_i)_i∈ A) - (V))U|
≤2|A||B|s^l/(1-s)V_∞|U|,
completing the proof.
§.§ Decay of correlations in the dual model at high temperature
Consider the dual model from Subsection <ref> on B_n. The following result proves a particular kind of exponential decay of correlations in the model if λ is small enough. The proof uses Corollary <ref>. Recall that P_n denotes the set of plaquettes (i.e., 2-cells) of B_n.
Take any d, n≥ 2, and let σ^* be a random configuration drawn from the dual model of Subsection <ref> on the cube B_n. Take any two disjoint and nonempty sets A, B ⊆ P_n. Suppose that there is some λ∈ (0, 1/(8d - 12)) such that for any p∈ P_n∖ A and any edge e∈ p, L_e ≤λ. Let U be a function of (σ^*_p)_p∈ A and V be a function of (σ^*_p)_p∈ B. Let l be the minimum possible length of a sequence p_0,p_1,…, p_l such that p_0∈ B, p_l∈ A, and for each r, p_r and p_r+1 share an edge. Then
|UV^* - U^*V^*| ≤2|A||B|(λ(8d - 12))^l/(1-λ(8d - 12))V_∞|U|^*.
For e∈ E_n, let P_n(e) denote the set of all p∈ P_n such that e∈ p. A standard computation shows that if σ^* is drawn from the dual model, then for any p∈ P_n and any x∈Σ_n^*,
(σ_p^*|σ_q^* = x_q for all q∈ P_n ∖{p}) = tanh(∑_e∈ p L_e (∏_q∈ P_n(e)∖{p}x_q)),
where the empty product denotes 1, in case P_n(e) = {p} for some e∈ p.
Now, if X and Y are {-1,1}-valued random variables, then the total variation distance between the laws of X and Y is equal to 1/2|(X)-(Y)|. Since
| tanh(∑_e∈ p L_e (∏_q∈ P_n(e)∖{p}x_q)) - tanh(∑_e∈ p L_e (∏_q∈ P_n(e)∖{p}y_q)) |
≤∑_e∈ p L_e |∏_q∈ P_n(e)∖{p}x_q - ∏_q∈ P_n(e)∖{p}y_q|
≤∑_e∈ p L_e (∑_q∈ P_n(e)∖{p}|x_q-y_q|),
and two distinct plaquettes can share at most one edge, this shows that the dual model satisfies (<ref>) with
α_pq = L_e if p≠ q and p and q share an edge e, and α_pq = 0 otherwise.
Now, for p∈ P_n∖ A, there are at most 8d-12 plaquettes q that share an edge with p, and L_p∩ q≤λ for all such q (where p∩ q denotes the shared edge). Thus, the condition (<ref>) is satisfied if λ(8d - 12) < 1. The claim now follows from Corollary <ref>.
§.§ Decay of correlations in the Ising model at low temperature
We will now use Lemma <ref> to establish a particular kind of decay of correlations in the ferromagnetic Ising model at low temperature, under free boundary condition. Besides Lemma <ref>, the other main ingredient in the proof is the duality relation given by Lemma <ref>. In the following, a “lattice path” refers to a path in the lattice ℤ^d, hopping from one vertex to a nearest neighbor, such that no edge is traversed more than once. The length of a path P, denoted by |P|, is the number of edges in the path.
Take any d,n≥ 2. Let σ be a configuration drawn from the generalized ferromagnetic Ising model on B_n with free boundary condition, defined in Subsection <ref>. Take any distinct i,j,k,l∈ B_n. Suppose that there are three numbers a>0, b>0 and c≥ 2, such that there is a lattice path P_1 joining i to j and a lattice path P_2 joining k to l, both completely inside B_n, and a set of edges F⊇ P_1, such that |F|≤ a, |P_2|≤ b, and the ℓ^1 distance of any endpoint of any edge in F to any endpoint of any edge in P_2 is at least c. Finally, let β be a number such that for each edge e∉ F, K_e≥β. Let λ := -1/2logtanhβ. Suppose that β is so large that λ(8d - 12) < 1. Then
|σ_iσ_jσ_kσ_l - σ_iσ_jσ_kσ_l| ≤8d^2 ab e^2λ b (λ (8d-12))^c-1/(1-λ(8d - 12)).
By Lemma <ref> and the fact that P_1 and P_2 are inside B_n and are disjoint (since c> 0) and no edge is traversed more than once by either path, we have that
σ_i σ_jσ_k σ_l = ∏_e∈ P_1σ_e ∏_e∈ P_2σ_e
= exp(-2 ∑_e∈ P_1 L_e σ_e^*) exp(-2 ∑_e∈ P_2 L_eσ_e^*)^*,
where ·^* denotes expectation with respect to the dual model defined in Subsection <ref>.
Let U and V denote the two exponentials inside the angle brackets in the last line above. Then note that σ_i σ_j = U^* and σ_k σ_l = V^*.
Let A denote the set of plaquettes in B_n that contain edges from F, and B denote the set of plaquettes in B_n that contain edges from P_2. Since c≥ 2, A and B are disjoint. Since each edge is in at most 2d plaquettes in B_n, |A|≤ 2d a and |B|≤ 2db. Also, for any p∉ A, no edge of p is in F, and so, for any e∈ p, L_e ≤λ.
Now, suppose that p_0,…,p_w is a sequence of plaquettes in B_n such that p_0∈ A and p_w ∈ B, and for each r, p_r and p_r+1 share an edge. Let e_r denote the edge shared by p_r and p_r+1. Let e be an edge of p_0 that is in F and f be an edge of p_w that is in P_2. Then there is a path of length ≤ 1 from an endpoint of e to an endpoint of e_0. Also, for all 0≤ r< w-1, there is a path of length ≤ 1 from any given endpoint of e_r to some endpoint of e_r+1. Finally, there is a path of length ≤ 1 from any given endpoint of e_w-1 to some endpoint of f. This shows that there is a lattice path of length ≤ w+1 joining an endpoint of an edge in F to an endpoint of an edge in P_2. Thus, w ≥ c-1. Combining these observations, and the fact that V_∞≤ e^2λ b,
we get by Lemma <ref> that
|σ_iσ_jσ_kσ_l - σ_iσ_jσ_kσ_l|
= |UV^* - U^* V^*| ≤8d^2 ab e^2λ b (λ (8d-12))^c-1/(1-λ(8d - 12))|U|^*.
Since U is a nonnegative function, |U|^* = U^* = σ_kσ_l∈ [0,1]. This completes the proof of the lemma.
§.§ Uniformity of two-point correlations in infinite volume
Consider the ferromagnetic Ising model on a finite set Λ⊆ℤ^d under free boundary condition, at a given inverse temperature β >0. For a subset A⊆Λ and a configuration σ∈{-1,1}^Λ, let
σ_A := ∏_i∈ Aσ_i,
and let σ_A_Λ denote the expected value of σ_A under the Gibbs measure defined by this model. It is a well-known consequence of the Griffiths (or GKS) inequalities that for any A, Λ and β,
0≤σ_A_Λ≤ 1,
and for any A⊆Λ_1⊆Λ_2 and any β,
σ_A_Λ_1≤σ_A_Λ_2.
For proofs, see <cit.> or <cit.>. A consequence of these inequalities is that for any sequence Λ_n ↑ℤ^d and any finite set A⊆ℤ^d,
σ_A := lim_n→∞σ_A_Λ_n
exists and is independent of the choice of the sequence {Λ_n}_n≥ 1. These limits define a probability measure on {-1,1}^ℤ^d, which is called the infinite volume Gibbs measure for the Ising model under free boundary condition, at inverse temperature β.
Infinite volume Gibbs measures can be similarly defined for all plus and all minus boundary conditions. For the all plus boundary condition, the direction of the inequality in the monotonicity relation (<ref>) is reversed <cit.>. An important consequence of these monotonicity relations is that these three infinite volume Gibbs measures are translation invariant.
We now prove the following theorem, which is a key ingredient for proving Theorem <ref>.
Take any d≥ 2 and consider the infinite volume Gibbs measure for the ferromagnetic Ising model under free boundary condition at inverse temperature β. If β is large enough, then the limit
q = q(β,d) := lim_|i|→∞σ_0 σ_i
exists and is strictly positive, and is an increasing function of β. Here |i| denotes the Euclidean norm of i.
Let ·_+ and ·_- denote averaging with respect to the infinite volume Gibbs measures with all plus and all minus boundary conditions, respectively. By a result of <cit.>, the infinite volume Gibbs measure under free boundary is the average of the Gibbs measures under plus and minus boundary conditions. In particular,
σ_0σ_i = 1/2(σ_0σ_i_+ + σ_0σ_i_-).
It is a simple consequence of the FKG inequality <cit.> and the monotonicity relation (<ref>) that the Gibbs measures under plus and minus boundary conditions are pure states, implying correlation decay. In particular,
lim_|i|→∞ (σ_0σ_i_± - σ_0_±σ_i_±) = 0.
But by translation invariance, σ_i_± = σ_0_±. By the Peierls argument, r := σ_0_+ is strictly positive for large enough β, and by symmetry, σ_0_-= - r. Taking q := r^2, we see that σ_0σ_i→ q as |i|→∞. The monotonicity of q as a function of β is another easy consequence of the FKG inequality.
§.§ Uniformity of two-point correlations in finite volume
We will now show that for most pairs of vertices i,j∈ B_n, σ_iσ_j≈ q if n is large, where the expectation is with respect to the Ising model with free boundary condition on B_n at inverse temperature β, and q is as in Theorem <ref>.
Fix some d≥ 2. Take any β_0>0 and let q_0 := q(β_0,d), where q is as in Theorem <ref>. Choose β_0 so large that q_0>0. Then by the increasing nature of q as a function of β, q(β,d) ≥ q_0 for any β≥β_0. Our goal in this subsection is to prove the following theorem.
Let d and β_0 be as above. If β_0 is chosen large enough, the following holds. Take any β≥β_0. Let q := q(β,d), as defined in Theorem <ref>. Take any ε∈ (0,1). For each n, let
δ_n = δ_n(β,d,ε):= max{|σ_iσ_j - q|: i,j∈ B_⌊ (1-ε)n⌋, |i-j|_1≥ε n},
where · denotes averaging with respect to the Ising model on B_n at inverse temperature β and free boundary condition. Then δ_n → 0 as n→∞.
For the remainder of this subsection, we will fix ε∈ (0,1) and n≥1, eventually taking n→∞. Let m := ⌊ (1-ε)n⌋. Throughout, C_1,C_2,… will denote constants depending only on d, whose values may change from line to line, and ⟨·⟩ will denote averaging with respect to the Ising model on B_n at inverse temperature β and free boundary condition. The phrase “β large enough” will mean “β bigger than a constant depending only on d”, and the phrase “n large enough” will mean “n bigger than a constant depending only on β, d and ε”. The first step is the following lemma.
Take any ε'∈ (0,ε). Let
δ_n' = δ_n'(β,d,ε, ε'):= max{|⟨σ_iσ_j⟩ - q|: i,j∈ B_m, ε' n ≤ |i-j|_1≤ε n}.
Then δ_n' → 0 as n→∞.
Take some N> n, and i,j∈ B_m such that ε' n ≤ |i-j|_1≤ε n. Let λ := -1/2logtanhβ. Let ∂ B_n be the set of edges with one endpoint in B_n and the other outside B_n. Consider the generalized Ising model on B_N under free boundary condition (as defined in Subsection <ref>), with K_e =β for e∉∂ B_n and K_e = γ for e∈∂ B_n, where γ is a parameter that is allowed to vary. Let ⟨·⟩_γ denote averaging with respect to this model. Take any edge {k,l}∈∂ B_n. Let P_1 consist of the single edge {k,l}, and let P_2 be a shortest path in ^d that connects i and j. Then in Lemma <ref> (applied to the generalized model on B_N), we can take F = ∂ B_n, a = |∂ B_n|, b = n, and c = n, and get (assuming that β is large enough, and swapping the roles of (i,j) in (k,l) in the lemma) that
|σ_iσ_jσ_kσ_l_γ - σ_iσ_j_γσ_kσ_l_γ|
≤ C_1 n^d e^2λ n e^C_2 nlogλ + C_3.
Note that the above bound has no dependence on γ. A standard computation, done by explicitly writing σ_iσ_j as a function of γ and differentiating, shows that
d/dγσ_iσ_j_γ = ∑_{k,l}∈∂ B_n (σ_iσ_jσ_kσ_l_γ - σ_iσ_j_γσ_kσ_l_γ).
Combining this with the previous display and the fact that |∂ B_n|≤ Cn^d-1, we get
|σ_iσ_j_β - σ_iσ_j_0| ≤ C_1 β n^2d-1 e^2λ n e^C_2 nlogλ + C_3.
If β is so large that 2λ + C_2 logλ +C_3< -1, the above inequality implies that
|σ_iσ_j_β - σ_iσ_j_0| ≤ C_1 β n^2d-1 e^- n.
But note that σ_iσ_j_0 is the expected value of σ_i σ_j under the Ising model on B_n at inverse temperature β and free boundary condition, and σ_iσ_j_β is the expected value of σ_i σ_j under the Ising model on B_N at inverse temperature β and free boundary condition. Thus, fixing n and taking N→∞ in the above inequality, we get
|σ_iσ_j_∞ - σ_iσ_j| ≤ C_1 β n^2d-1 e^- n,
where ⟨·⟩_∞ denotes expectation in the infinite volume limit and ⟨·⟩ denotes expectation in B_n. But by Theorem <ref> and the facts that |i-j|_1≥ε' n and that the infinite volume Gibbs measure is translation invariant, we get
|⟨σ_iσ_j⟩_∞ - q| ≤κ_n,
where κ_n = κ_n (β,d,ε')→ 0 as n→∞. Combining the last two displays, we get the desired result.
Lemma <ref> gives uniformity of two-point correlations between points that are not too close and yet not too far from each other. To get rid of the latter restriction, we need the following lemma.
Take any i,j∈ B_m such that |i-j|_1> 10d^3. Then there exist distinct k,l∈ B_m∖{i,j} such that the following conditions are satisfied:
* The distances |i-k|_1, |j-l|_1, |k-l|_1 are all bounded above by 2d/2d+1|i-j|_1 and bounded below by 1/5d^2 |i-j|_1.
* Inside B_m, there is a path P_1 joining i and j, and a path P_2 joining k and l, such that distance between any vertex of P_1 and any vertex of P_2 is at least 1/5d^2|i-j|_1, and |P_1| and |P_2| are both bounded above by |i-j|_1.
* Inside B_m, there is a path Q_1 joining i and k, and a path Q_2 joining j and l, such that distance between any vertex of Q_1 and any vertex of Q_2 is at least 2d-1/2d^2|i-j|_1, and |Q_1| and |Q_2| are both bounded above by |i-j|_1.
Let i = (i_1,…,i_d) and j = (j_1,…,j_d). Using the symmetries of B_m, let us assume without loss of generality that i_r ≤ j_r for all r, and that j_1 - i_1≥ j_r - i_r for all r. This assumption ensures that
j_1 - i_1 ≥1/d∑_r=1^d (j_r-i_r) = |i-j|_1/d.
Let a∈ [-m,m] be an integer such that
i_1+(2d-1)j_1/2d≤ a ≤i_1+2d j_1/2d+1,
and let b∈ [-m,m] be an integer such that
j_1-i_1/5d≤ |b- i_2| ≤j_1-i_1/4d.
Note that an a as above exists because the upper and lower bounds for a are both in [-m,m], and their difference is
j_1-i_1/2d(2d+1)≥|i-j|_1/2d^2(2d+1) > 1
by (<ref>) and the assumption that |i-j|_1 > 10d^3. Similarly, a b as above exists because
j_1-i_1/4d≤2m/4d≤ m
and
j_1-i_1/4d - j_1-i_1/5d = j_1-i_1/20d≥|i-j|_1/20d^2 > 1,
again by (<ref>) and the assumption that |i-j|_1 > 10d^3.
Define
k := (i_1, b, i_3,i_4,…,i_d), l := (a,b,i_3,…, i_d).
First, note that k,l∈ B_m. Next, note that |i-k|_1 = |b-i_2|. By (<ref>) and (<ref>), this shows that
1/5d^2|i-j|_1 ≤ |i-k|_1 ≤1/4d |i-j|_1.
Similarly, by (<ref>),
(2d-1)(j_1-i_1)/2d≤ a-i_1 ≤2d(j_1-i_1)/2d+1,
and therefore, since |k-l|_1 = |a-i_1|, we get using (<ref>) that
2d-1/2d^2|i-j|_1 ≤ |k-l|_1 ≤2d/2d+1|i-j|_1.
Finally, note that
|j-l|_1 = |a-j_1|+ |b-j_2| + ∑_r=3^d (j_r - i_r).
By (<ref>),
- j_1-i_1/2d≤ a-j_1≤ -j_1-i_1/2d+1,
and by (<ref>),
|b-j_2| ≤ |b-i_2| + |i_2 - j_2|
≤j_1-i_1/4d + j_2 - i_2.
Combining the last three displays, we get
|j-l|_1 ≤j_1-i_1/2d + j_1-i_1/4d + ∑_r=2^d (j_r - i_r)
= |i-j|_1 - (1-3/4d) (j_1-i_1).
Combined with (<ref>), this yields
|j-l|_1 ≤ |i-j|_1 - (1-3/4d)1/d|i-j|_1 ≤(1-5/8d) |i-j|_1.
Also, by (<ref>) and (<ref>)
|j-l|_1 ≥ |a-j_1| ≥j_1-i_1/2d+1≥|i-j|_1/(2d+1)d.
The inequalities (<ref>), (<ref>), (<ref>) and (<ref>) complete the proof of the first assertion of the lemma, because 2d/2d+1 is the largest of the coefficients of |i-j|_1 in the upper bounds, and 1/5d^2 is the smallest of the coefficients of |i-j|_1 in the lower bounds.
For the second assertion, let P_1 be the path from i to j where the first coordinate increases from i_1 to j_1, keeping all other coordinates fixed, and then the second coordinate increases from i_2 to j_2 keeping all else fixed, and so on. Let P_2 be the path from k to l that simply changes the first coordinate from i_1 to a, keeping all else fixed. Take any point u = (x,b, i_3,…,i_d)∈ P_2, where x lies between i_1 and a. Consider a point v∈ P_1 within the first j_1-i_1 steps, where the first coordinate is changing from i_1 to j_1, keeping all else fixed. Then the second coordinate of v is i_2, and thus, by (<ref>),
|u-v|_1 ≥ |b-i_2| ≥j_1-i_1/5d.
Next, keeping u the same, let us take v∈ P_1 in the latter part of the path, where the first coordinate is already j_1. Then, since x is between i_1 and a, and a is between i_1 and j_1, (<ref>) implies that
|u-v|_1 ≥ |x - j_1| ≥ j_1 - a ≥j_1-i_1/2d+1.
Combined with (<ref>), this proves that the distance between any point in P_1 and any point in P_2 is at least 1/5d^2|i-j|_1. Moreover, it is clear that |P_1| and |P_2| are both bounded above by |i-j|_1, and both paths lie entirely in B_m. Thus, we have proved the second assertion of the lemma.
Finally, let Q_1 be the path from i to k that takes the second coordinate from i_2 to b, keeping all else fixed, and let Q_2 be a path from l to j that first increases the first coordinate from a to j_1, and then successively alters each coordinate from its starting value to its final value in j in the quickest possible manner. Take any point u = (i_1,x,i_3,…, i_d)∈ Q_1 and any point v∈ Q_2. Note that first coordinate of u is i_1, and the first coordinate of v is between a and j_1. By (<ref>) and (<ref>), this implies that
|u-v|_1 ≥ a - i_1 ≥(2d-1)(j_1-i_1)/2d≥(2d-1)|i-j|_1/2d^2.
It follows from (<ref>) that |Q_1|≤ |i-j|_1. Also, from the definition of Q_2, it is clear that |Q_2| = |j-l|_1. By (<ref>), this is bounded above by |i-j|_1. This completes the proof of the lemma.
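The paths appearing in this construction (and in several arguments below) are all of the same simple coordinate-wise type. For concreteness, here is a short Python sketch of such a path; it is our own illustration, only meant to make the bookkeeping explicit, and the function name is ours.

```python
# Illustration only: the coordinate-wise "staircase" path used repeatedly above,
# which moves coordinate 1 from i_1 to j_1, then coordinate 2, and so on.
# Its length equals |i - j|_1.
def staircase_path(i, j):
    path = [tuple(i)]
    cur = list(i)
    for r in range(len(i)):                  # coordinates are altered in order
        step = 1 if j[r] >= cur[r] else -1
        while cur[r] != j[r]:
            cur[r] += step
            path.append(tuple(cur))
    return path

p = staircase_path((0, 0, 0), (2, -1, 3))
assert len(p) - 1 == 2 + 1 + 3               # path length = |i - j|_1
```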
For each integer p≥ 1, let
S_p := {(i,j)∈ B_m× B_m: 4dn/(5d^2)^p≤ |i-j|_1 ≤(2d)^p 4dn/(2d+1)^p},
It is not hard to see that every pair (i,j)∈ B_m× B_m is in some S_p, since the maximum possible ℓ^1 distance between i and j is at most
2dn ≤2d/2d+1 4dn,
and for any p≥ 1,
1/(5d^2)^p≤(2d)^p+1/(2d+1)^p+1.
Define
θ_p := max{|⟨σ_iσ_j⟩ - q|: (i,j)∈ S_p},
where ⟨·⟩ denotes averaging with respect to the Ising model on B_n with free boundary condition, at inverse temperature β. We adopt the convention that θ_p=0 if the set on the right is empty. Given ε∈ (0,1), let p_0 be the smallest integer such that
(2d)^p_0 4d/(2d+1)^p_0≤ε.
Note that p_0 depends only on d and ε. Let
ε' := 4d/(5d^2)^p_0.
For any 1≤ p≤ p_0, θ_p→ 0 as n→∞.
We will prove the claim by backward induction on p. By Lemma <ref> (with the above choices of ε and ε'), the claim is true for p=p_0. Suppose that it holds for all p_0≥ p'>p. Given (i,j)∈ S_p, find k,l∈ B_m as in Lemma <ref>. By Lemma <ref>, the pairs (i,k), (k,l) and (j,l) are all in S_p+1, and thus,
max{|⟨σ_iσ_k⟩ - q|, |⟨σ_kσ_l⟩-q|, |⟨σ_jσ_l⟩ - q|}≤θ_p+1.
By Lemma <ref> and the bounds provided by Lemma <ref>,
|⟨σ_iσ_jσ_kσ_l⟩ - ⟨σ_iσ_j⟩⟨σ_kσ_l⟩| ≤ C_1|i-j|_1^2 e^-C_2|i-j|_1,
provided that β is large enough, and the same bound also holds for |⟨σ_iσ_jσ_kσ_l⟩ - ⟨σ_iσ_k⟩⟨σ_jσ_l⟩|, perhaps with different constants. Combining, we get
|⟨σ_iσ_j⟩⟨σ_kσ_l⟩ - ⟨σ_iσ_k⟩⟨σ_jσ_l⟩| ≤ C_1 |i-j|_1^2 e^-C_2|i-j|_1.
By (<ref>) and the induction hypothesis, ⟨σ_kσ_l⟩≥q/3 if n is large enough (depending on β, d and ε). Thus, by (<ref>) and (<ref>), we have that for large enough n,
|⟨σ_iσ_j⟩-q| ≤|⟨σ_iσ_j⟩ - ⟨σ_iσ_k⟩⟨σ_jσ_l⟩/⟨σ_kσ_l⟩| + |⟨σ_iσ_k⟩⟨σ_jσ_l⟩/⟨σ_kσ_l⟩ - q|
≤3/q(|⟨σ_iσ_j⟩⟨σ_kσ_l⟩ - ⟨σ_iσ_k⟩⟨σ_jσ_l⟩| + | ⟨σ_iσ_k⟩⟨σ_jσ_l⟩ - q ⟨σ_kσ_l⟩|)
≤C_1 /q(|i-j|_1^2e^-C_2|i-j|_1 + |⟨σ_iσ_k⟩-q|) + C_1 |⟨σ_jσ_l⟩ - ⟨σ_kσ_l⟩|
≤C_1/q |i-j|_1^2 e^-C_2|i-j|_1 + C_1 θ_p+1.
This proves that θ_p→ 0 as n→∞ and completes the induction step.
To complete the proof of Theorem <ref>, simply note that any (i,j)∈ B_m× B_m satisfying |i-j|_1≥ε n is in S_p for some p≤ p_0, and apply Lemma <ref>.
§.§ Uniformity of four-point correlations in finite volume
In this subsection, we show that for most quadruples of vertices i,j,k,l∈ B_n, σ_iσ_jσ_kσ_l≈ q^2 if n is large, where the expectation is with respect to the Ising model with free boundary condition on B_n at inverse temperature β, and q is as in Theorem <ref>. First, we need the following geometric lemma.
Take any d≥ 2. There is a positive constant C depending only on d such that the following holds. For any n, and any set A⊆ B_n of size four, there is a labeling i,j,k,l of the elements of A such that there is a path P_1 connecting i and j, and a path P_2 connecting k and l, both inside B_n, with the properties that |P_1|= |i-j|_1, |P_2|≤ (2d+4)n, and the ℓ^1 distance between any point in P_1 and any point in P_2 is at least C |i-j|_1.
Name the elements of A as i,j,k,l, such that |i-j|_1 is the smallest among all pairwise distances between distinct points in A. Let
δ := 1/4d|i-j|_1, δ' := 1/20d(d-1) |i-j|_1.
Using the symmetries of B_n, let us assume without loss of generality that i_r ≤ j_r for all r, where i = (i_1,…,i_d) and j = (j_1,…,j_d). First, we claim that there is at least one r such that k_r ∉ [i_r - δ, j_r + δ]. Suppose that this is not true. Then
|i-k|_1 + |j-k|_1 = ∑_r=1^d(|i_r-k_r|+|j_r-k_r|)
≤∑_r=1^d(|i_r - j_r|+ 2δ)
= 3/2|i-j|_1.
But |i-k|_1 and |j-k|_1 are both ≥ |i-j|_1 >0. So this is impossible, which proves the claim. By the same logic, there is at least one r such that l_r ∉ [i_r - δ, j_r + δ]. We now consider several possibilities.
Case 1. Suppose that k_r∉ [i_r -δ, j_r+δ] and l_s∉ [i_s -δ, j_s+δ] for some distinct r,s. In this case, let P_1 be the path from i to j where we first increase i_1 to j_1, then i_2 to j_2, and so on, and let P_2 be a path that similarly takes the coordinates of k to the coordinates of l one by one, but in an order such that coordinate r comes last. Then |P_1|=|i-j|_1 and |P_2|=|k-l|_1. Moreover, any point in x = (x_1,…,x_d)∈ P_1 satisfies i_p≤ x_p≤ j_p for all p, and any point y = (y_1,…, y_d)∈ P_2 satisfies y_r = k_r or y_s = l_s, because coordinate r is changed after coordinate s. If y_r = k_r, then |x-y|_1 ≥ |x_r - y_r|≥δ, and if y_s=l_s, then |x-y|_1 ≥ |x_s - y_s|≥δ. This proves that this construction works.
Case 2. Suppose that k_r,l_r∉ [i_r -δ, j_r+δ] for some r. Without loss of generality, r=1. We break this case into three sub-cases.
Case 2a. Suppose that k_1 and l_1 are both less than i_1 -δ, or both greater than j_1+ δ. In this case, it is easy to see that the paths P_1 and P_2 constructed in Case 1 satisfy the required criteria, since any point between k_1 and l_1 is at a distance at least δ from any point between i_1 and j_1.
Case 2b. Suppose that k_1 < i_1 - δ and l_1 > j_1 + δ, and for some r≥ 2, we have that both i_r and j_r are less than n - δ', or both are greater than -n+δ'. In this case, let P_1 be as in Case 1, and construct P_2 as follows. Starting from k, alter coordinate r by successively adding 1 (or -1) until that coordinate is at least δ' away from both i_r and j_r (which is possible because of our choice of r). Then fixing coordinate r at that value, change k_s to l_s for all other s, one by one. Then, finally, change coordinate r to l_r. Take any x∈ P_1 and y∈ P_2. If y is in the first part of P_2, then |x-y|_1 ≥ |x_1 - y_1| >δ, because y_1 = k_1 and x_1∈ [i_1,j_1]. If y is in the second part, then |x-y|_1≥ |x_r - y_r| ≥δ', by our choice of y_r in this part of the path. If y is in the third part, then |x-y|_1 ≥ |x_1-y_1|≥δ, because y_1 = l_1 and x_1∈ [i_1,j_1]. Thus, in all cases, |x-y|_1≥δ'. Lastly, note that |P_1| = |i-j|_1 and |P_2|≤ 4n + |k-l|_1≤ (2d+4)n.
Case 2c. Suppose that k_1 < i_1 - δ and l_1 > j_1 + δ, and for all r≥ 2, i_r ≤ -n + δ' and j_r ≥ n-δ'. Then
|i-j|_1 ≥∑_r=2^d (j_r - i_r) ≥∑_r=2^d (2n - 2δ')
= 2(d-1)n - 2(d-1)δ' = 2(d-1)n - 1/10d |i-j|_1.
Rearranging and using the fact that d≥ 2, we get
|i-j|_1 ≥2(d-1)n/1+1/10d≥2n/1+1/10d.
But |j-k|_1 ≥ |i-j|_1, and
|j_1 - k_1| = j_1 - k_1 < (l_1 -δ) - (-n) ≤ 2n - δ.
Thus, by (<ref>) and the fact that d≥ 2, we get
∑_r=2^d|j_r - k_r| ≥ |i-j|_1 - (2n-δ)
= (1+1/4d) |i-j|_1 - 2n
≥(1+1/4d/1+1/10d - 1)2n
= 3n/10d + 1.
In particular, there exists some r≥ 2 such that
|j_r - k_r|≥3n/(10d+1)(d-1).
Take any such r. Note that k_r cannot be bigger than j_r, because otherwise, the above inequality would imply that
k_r ≥ j_r + 3n/(10d+1)(d-1)≥ n - δ' + 3n/(10d+1)(d-1)
≥ n - n/10d(d-1) + 3n/(10d+1)(d-1) > n.
Thus,
k_r ≤ j_r - 3n/(10d+1)(d-1).
Also,
j_r - i_r ≥ (n-δ') - (-n+δ') = 2n - 2δ' ≥ 2n - n/10d(d-1).
The last two displays show that any point between i_r and k_r is at a distance at least Cn from j_r, where C depends only on d. Now, we swap the roles of j and k in the statement of the lemma, and construct a path P_1 from i to k, and a path P_2 from j to l. The path P_1 is constructed in the usual way, by successively changing each coordinate of i to the corresponding coordinate of k, starting with coordinate 1 and ending with coordinate d. Similarly, the path P_2 is constructed by changing each coordinate of j to the corresponding coordinate of l, again starting with coordinate 1 and ending with coordinate d. Then |P_1|=|i-k|_1 and |P_2|=|j-l|_1. Take any x∈ P_1 and y∈ P_2. Then x_s lies between i_s and k_s for each s, and y_s lies between j_s and l_s for each s. Suppose that y is in the part of P_2 where the first coordinate is changing from j_1 to l_1. Then |x-y|_1 ≥ |x_r - y_r|≥ Cn, because x_r lies between i_r and k_r, and, as proved above, any such point is at a distance at least Cn from j_r = y_r. Next, suppose that y is in the subsequent part of P_2, where the first coordinate has already changed to l_1. Then |x-y|_1 ≥ |x_1 - y_1| ≥δ, since k_1 ≤ x_1≤ i_1, and y_1 = l_1 ≥ j_1 +δ≥ i_1+δ.
This wraps up the proofs in all cases, and completes the proof of the lemma.
We are now ready to prove the main result of this subsection.
Let d, β_0, β and q be as in Theorem <ref>. Take any ε∈ (0,1). For each n, let
γ_n = γ_n(β,d,ε) := max{|⟨σ_iσ_jσ_kσ_l⟩ - q^2|: i,j,k,l∈ B_⌊ (1-ε)n⌋,
and all pairwise ℓ^1 distances between i,j,k,l are at least ε n},
where ⟨·⟩ denotes averaging with respect to the Ising model on B_n at inverse temperature β and free boundary condition. Then γ_n → 0 as n→∞.
Throughout this proof, C_1,C_2,… denote positive constants that depend only on d, whose values may change from line to line. Fix some i,j,k,l as in the statement of the lemma, and let P_1 and P_2 be as in Lemma <ref> (possibly after relabeling i,j,k,l). Taking this P_1 as the P_2 in Lemma <ref>, and this P_2 as the P_1 in Lemma <ref>, we have a = (2d+4)n, b = |i-j|_1, and c = C_1|i-j|_1. This gives us
|⟨σ_iσ_jσ_kσ_l⟩ - ⟨σ_iσ_j⟩⟨σ_kσ_l⟩| ≤C_2n^2 e^2λ |i-j|_1 (λ (8d-12))^C_1|i-j|_1-1/1-λ(8d - 12).
Recalling that |i-j|_1≥ε n, this shows that if β_0 is taken large enough, then
|⟨σ_iσ_jσ_kσ_l⟩ - ⟨σ_iσ_j⟩⟨σ_kσ_l⟩| ≤ C_1 n^2 e^-C_2ε n.
Let δ_n be as in Theorem <ref>. Then |⟨σ_iσ_j⟩ - q| and |⟨σ_kσ_l⟩ - q| are both bounded above by δ_n. Combining this with the above inequality completes the proof.
§.§ Concentration of the magnetization and the overlap in the Ising model
To generalize the results for two-point and four-point correlations to l-point correlations for all even l, as well as for other purposes, we need the following theorem.
Take any d≥ 2 and β >0. Let m be the magnetization and R_1,2 be the overlap between two independent replicas in the ferromagnetic Ising model on B_n under free boundary condition and inverse temperature β. Let q be as in Theorem <ref>. Then for sufficiently large β, ⟨(m^2-q)^2⟩→ 0 and ⟨(R_1,2^2 - q^2)^2⟩→ 0 as n→∞.
Throughout this proof, C, C_1, C_2,… will denote constants that depend only on d, whose values may change from line to line. Fix some n and some ε∈ (0,1). Let m := ⌊ (1-ε) n⌋. Let δ_n be as in Theorem <ref> and γ_n be as in Theorem <ref>. Let
S := {(i,j)∈ B_n × B_n: i,j∈ B_m, |i-j|_1≥ε n},
and let
T := {(i,j,k,l)∈ B_m^4: all pairwise ℓ^1 distances between i,j,k,l are at least ε n}.
Let S^c := B_n^2 ∖ S and T^c := B_n^4 ∖ T. Then
|⟨m^2⟩ - q| ≤1/|B_n|^2∑_i,j∈ B_n |⟨σ_iσ_j⟩ - q|
≤|S^c|/|B_n|^2 + |S|δ_n/|B_n|^2,
which shows that
lim sup_n→∞|⟨m^2⟩ - q| ≤ Cε.
On the other hand,
|⟨m^4⟩ - q^2| ≤1/|B_n|^4∑_i,j,k,l∈ B_n |⟨σ_iσ_jσ_kσ_l⟩ - q^2|
≤|T^c|/|B_n|^4 + |T|γ_n/|B_n|^4,
which shows that
lim sup_n→∞|⟨m^4⟩ - q^2| ≤ Cε.
Combining (<ref>) and (<ref>), we get
lim sup_n→∞⟨(m^2-q)^2⟩ = lim sup_n→∞( ⟨m^4⟩ - 2⟨m^2⟩q + q^2)
= lim sup_n→∞( ⟨m^4⟩ - q^2 - 2(⟨m^2⟩-q)q)
≤ Cε.
Since ε is arbitrary, this completes the proof of the first assertion of the theorem. Next, note that
⟨R_1,2^2⟩ = ⟨(1/|B_n|∑_i∈ B_nσ_i^1 σ_i^2)^2⟩
= 1/|B_n|^2∑_i,j∈ B_n⟨σ_i^1 σ_i^2 σ_j^1 σ_j^2⟩
= 1/|B_n|^2∑_i,j∈ B_n⟨σ_iσ_j⟩^2.
Proceeding as above, this shows that ⟨R_1,2^2⟩→ q^2 as n→∞. Similarly,
⟨R_1,2^4⟩ = 1/|B_n|^4∑_i,j,k,l⟨σ_iσ_jσ_kσ_l⟩^2,
which can be used as above to show that ⟨R_1,2^4⟩→ q^4 as n→∞. Combining, we get that ⟨(R_1,2^2-q^2)^2⟩→ 0.
In the setting of Theorem <ref>, as n→∞, the law of m tends to the probability measure that puts equal mass on ±√(q), and the law of R_1,2 tends to the probability measure that puts equal mass on ± q.
Simply combine Theorem <ref> with the observation that ⟨R_1,2⟩ = ⟨m⟩=0 by the invariance of the model under the transform σ→ -σ.
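The magnetization and the overlap appearing in the theorem and its corollary are easy to make concrete by simulation. The following rough Python sketch is our own illustration: the sampler, box size and temperature are our choices, single-site dynamics mixes slowly at large β, and it is only meant to show how m and R_{1,2} are computed from two independent replicas.

```python
# Rough Monte Carlo sketch (ours): estimate the magnetization and the replica
# overlap on a small 2D box with free boundary via single-site Metropolis moves.
import math
import random

def metropolis(n, beta, sweeps, rng):
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps * n * n):
        x, y = rng.randrange(n), rng.randrange(n)
        nb = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if 0 <= u < n and 0 <= v < n:          # free boundary condition
                nb += spins[u][v]
        dE = 2 * spins[x][y] * nb                  # energy change of a single flip
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[x][y] *= -1
    return spins

rng = random.Random(0)
n, beta = 8, 1.0
s1 = metropolis(n, beta, 500, rng)                 # two independent replicas
s2 = metropolis(n, beta, 500, rng)
m = sum(map(sum, s1)) / n**2
R12 = sum(s1[x][y] * s2[x][y] for x in range(n) for y in range(n)) / n**2
print(m**2, R12)       # m^2 should sit near q, and R_{1,2} near +q or -q
```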
§.§ Proof of Theorem <ref>
For l=2, the proof is contained in the proof of Theorem <ref>. Take any even l≥ 4. Note that
1/|B_n|^l∑_i_1,…,i_l∈ B_n⟨σ_i_1⋯σ_i_l⟩ = ⟨m^l⟩,
1/|B_n|^l∑_i_1,…,i_l∈ B_n⟨σ_i_1⋯σ_i_l⟩^2 = ⟨R_1,2^l⟩.
Thus, by the Cauchy–Schwarz inequality,
1/|B_n|^l∑_i_1,…,i_l∈ B_n |⟨σ_i_1⋯σ_i_l⟩ - q^l/2| ≤[1/|B_n|^l∑_i_1,…,i_l∈ B_n (⟨σ_i_1⋯σ_i_l⟩ - q^l/2)^2]^1/2
= [1/|B_n|^l∑_i_1,…,i_l∈ B_n (⟨σ_i_1⋯σ_i_l⟩^2 - 2q^l/2⟨σ_i_1⋯σ_i_l⟩ + q^l)]^1/2
= [⟨R_1,2^l⟩ - 2q^l/2⟨m^l⟩ + q^l]^1/2.
Now, by the fact that R_1,2 and q are both in [0,1], and the inequality
|x^l/2 - y^l/2|≤l/2|x-y|
that holds for all x,y∈ [-1,1], we have
|R_1,2^l - q^l| ≤|R_1,2^l - q^l|≤l/2(R_1,2^2 - q^2)^l/2≤l/2(R_1,2^2 - q^2)^2.
Thus, by Theorem <ref>, R_1,2^l→ q^l as n→∞. Similarly,
|m^l - q^l/2| ≤|m^l - q^l/2|≤l/2(m^2 - q)^l/2≤l/2(m^2 - q)^2,
and so, m^l→ q^l/2 as n→∞. Using these in (<ref>) completes the proof.
§ ALTERNATIVE PROOF OF THEOREM <REF>
We now present the alternative proof of Theorem <ref>, due to Hugo Duminil-Copin (private communication), which uses coupling with the FK-Ising (random cluster) model.
§.§ The FK-Ising model
Recall that the FK-Ising model on B_n is defined as follows <cit.>. Let E_n be the set of edges of B_n, as before, and let Ω_n := {0,1}^E_n. Each element ω∈Ω_n defines a graph on B_n, with (open) edges corresponding to those e∈ E_n for which ω_e = 1. Edges of B_n that are not in this graph are said to be “closed”. Let E(ω) denote the number of open edges and k(ω) denote the number of connected components of this graph. The FK-Ising model with parameter p, under free boundary condition, assigns a probability proportional to
p^E(ω) (1-p)^|E_n|-E(ω) 2^k(ω)
at each ω∈Ω_n. A different kind of boundary condition, called the “wired boundary condition”, has an identical form of the probability mass function but with a different definition of k(ω). Under the wired boundary condition, all the boundary vertices of B_n are assumed to be connected to each other, and so all connected components that touch the boundary are merged into a single component. Fixing p, we will denote probabilities computed under the free and wired boundary conditions by P_n^0 and P_n^1, respectively.
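The weight displayed above is straightforward to evaluate on a finite graph once the number of clusters k(ω) has been computed, for instance by union–find; the wired boundary condition only changes how k(ω) is counted. The following minimal Python sketch is our own illustration (all function and variable names are ours).

```python
# Sketch of the FK-Ising weight p^{E(w)} (1-p)^{|E_n|-E(w)} 2^{k(w)} of an edge
# configuration w, with k(w) counted under the free or the wired convention.
def fk_weight(vertices, edges, omega, p, boundary=None):
    parent = {v: v for v in vertices}
    def find(v):                                    # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    for e, bit in zip(edges, omega):                # open edges merge components
        if bit:
            union(*e)
    if boundary:                                    # wired: glue all boundary vertices
        for v in boundary[1:]:
            union(boundary[0], v)
    k = len({find(v) for v in vertices})            # number of connected components
    n_open = sum(omega)
    return p ** n_open * (1 - p) ** (len(edges) - n_open) * 2 ** k

# Example: one closed edge; free and wired differ only through k(w).
print(fk_weight(["a", "b"], [("a", "b")], [0], p=0.9))                       # 0.1 * 2^2
print(fk_weight(["a", "b"], [("a", "b")], [0], p=0.9, boundary=["a", "b"]))  # 0.1 * 2^1
```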
It is known that the infinite volume limits of these measures exist and are equal if p is close enough to 1 (depending on d) <cit.>; that is, for any event A determined by finitely many edges, the limits lim_n→∞ P_n^0(A) and lim_n→∞ P_n^1(A) exist and are equal. We will denote this limit by P(A).
Following standard convention, we will denote by x↔ y the event that two vertices x and y are connected by a path of open edges. Similarly, x↔∂ B_n will denote the event that x is connected by a path to the boundary of B_n, and x↔∞ will denote the event that x belongs to an infinite open cluster. It is known that when p is close enough to 1 (depending on d), the infinite volume FK-Ising model has a unique infinite open cluster with probability one <cit.>. In the following, we will assume throughout that p is so close to 1 that this holds.
Lastly, define
q := lim_n→∞ (P(0↔∂ B_n))^2,
where the existence of the limit follows from monotonicity of the probability as a function of n. We will hold this q fixed throughout the remaining discussion. Note that
P(0↔∞) = P(0 ↔∂ B_n for all n)
= lim_n→∞ P(0↔∂ B_n) = √(q).
The numbers p and q will remain fixed throughout the remainder of this section, unless otherwise mentioned.
§.§ Uniformity of connectivities in infinite volume
The identity (<ref>) leads to the following lemma, which shows that P(0↔ x) ≈ q whenever |x| is large.
For any x, P(0↔ x) ≥ q, and given any ε>0, there exists C depending on ε such that whenever |x|_∞>C (where |x|_∞ denotes the ℓ^∞ norm of x), we have P(0↔ x) ≤ q +ε.
Take any x. By the uniqueness of the infinite open cluster, the event 0↔ x is implied by the events that 0↔∞ and x↔∞ (with probability one). By the FKG property and the identity (<ref>), this implies that
P(0↔ x) ≥ P(0↔∞, x↔∞)
≥ P(0↔∞)P(x↔∞) = q.
This completes the proof of the lower bound. Next, for each n, let B_n(x) denote the cube B_n shifted by x, that is, the set x+B_n. Let ∂ B_n(x) denote the boundary of B_n(x). Take any x≠ 0 and k< 1/2|x|_∞-1. Then the cubes ∂ B_k and ∂ B_k(x) are disjoint. Moreover, there is a finite set S of edges in ^d that are not edges of B_k or B_k(x), such that
* if the edges in S are all open, then all vertices of ∂ B_k and ∂ B_k(x) are in the same connected component, and
* every edge that is incident to a vertex in ∂ B_k ∪∂ B_k(x) but is not an edge of B_k or B_k(x), is a member of S.
Let F denote the event that all edges in S are open. Conditional on F, the configurations of open edges in B_k and B_k(x) are independent, and follow the random cluster models on these cubes with wired boundary condition. Take any l<k, and let E be the event {0↔∂ B_l}∩{ x ↔∂ B_l(x)}. Since E and F are increasing events and P(F) >0, the FKG property implies that P(E|F) ≥ P(E). Consequently,
P(0↔ x) ≤ P(E) ≤ P(E|F)
= P(0↔∂ B_l, x ↔∂ B_l(x) | F)
= P_k^1(0↔∂ B_l) P_k^1(x↔∂ B_l(x))
= (P_k^1(0↔∂ B_l))^2.
Take any ε>0. For fixed l, if k is large enough, then
(P_k^1(0↔∂ B_l))^2 ≤( P(0↔∂ B_l))^2 +ε/2.
But if l is large enough, then (P(0↔∂ B_l))^2 ≤ q +ε/2. Thus, if |x| is large enough, then we can choose l and k so that both inequalities are satisfied. This proves the claimed upper bound.
§.§ Uniformity of two-point correlations in finite volume
Our next goal, roughly speaking, is to show that the conclusion of Lemma <ref> holds even if we consider the model restricted to a cube, as long as 0 and x are not too close to the boundary of the cube. The following lemma provides the upper bound.
Given any ε>0 and n, there is some k>0 depending only on d and ε (and not on n), such that whenever x,y∈ B_n and |x-y|_∞>k, we have P^0_n(x↔ y) ≤ q +ε.
It is a simple consequence of the FKG property that P_n^0(x↔ y) is an increasing function of n. As a result, we have
P_n^0(x↔ y) ≤ P(x↔ y)
for any x,y∈ B_n. But by Lemma <ref> and the translation-invariance of the infinite volume measure, there is some k depending only on d and ε such that P(x↔ y)≤ q+ε whenever |x-y|_∞>k. This completes the proof of the lemma.
The lower bound in a finite cube is more complicated. It requires the so-called “Pisztora renormalization argument” <cit.>. The fundamental result that we are going to use is the following. First, we recall the definition of the FK-Ising model on B_n under arbitrary boundary condition. A general boundary condition ξ refers to a partition of the set of boundary vertices of B_n, where we think of all vertices within the same member of the partition as being connected, when defining k(ω) in (<ref>). So, for example, ξ consists of only singletons for the free boundary condition, and ξ consists of only the full set ∂ B_n for the wired boundary condition. Let P_k^ξ denote the model on B_k under boundary condition ξ. For a given realization of the model on B_n, we say that a “block” B_k(x)⊆ B_n is “good” if x∈ k ^d, and the following hold:
* There is an open cluster in B_k(x) that touches all faces of B_k(x).
* Any open path in B_k(x) of length k is contained in this cluster.
We will frequently refer to the above open cluster as the “giant open cluster” of B_k(x). Two blocks B_k(x) and B_k(y) are said to be neighbors if x and y are neighbors in k^d. Note that two neighboring blocks have a substantial overlap. In particular, if two neighboring blocks are both good (in a realization of the model), then the conditions imply that the large clusters in the two blocks also intersect. In this situation, we say that the two blocks are “connected”.
A fundamental consequence of the Pisztora renormalization argument, as observed in <cit.>, is that for any k, any boundary condition ξ on B_2k and any p close enough to 1,
P_2k^ξ(B_k is good) ≥ 1- e^-ck,
where c depends only on p and d. This follows from results of <cit.> and <cit.>. A consequence of this is the following lemma.
Given any ε>0, the following is true for all large enough even k (with the threshold depending only on d and ε). Suppose that B_k(x) and B_k(y) are both contained in B_n, and are disjoint. Then, under P_n^0, the probability that any open path of size k/2 in B_k(x) is connected to any open path of size k/2 in B_k(y) is at least 1-ε.
Let P be an open path of length k/2 in B_k(x). Consider blocks of the form B_l(z), z∈ l^d, where l=k/2. Let a be the starting point of P. By the nature of the blocks, there is at least one block D such that a∈ D and the ℓ^1 distance of a from ∂ D is at least ld/2. (For example, if a= (a_1,…,a_d), then we can choose D= B_l(z) where z_i is the integer multiple of l that is closest to a_i, so that |(z_i ± l)-a_i|≥l/2.) Then the part of P starting from a and continuing until the first time P hits ∂ D, has length at least ld/2≥ l (because d≥ 2). Thus, if D is a good block, then this part of P lies within the giant open cluster of D as in the definition of good block above.
If k is chosen large enough (depending on d and ), then (<ref>) shows that with probability at least 1-/2, all blocks intersecting B_k(x) or B_k(y) are good. On the other hand, as argued in the proof of <cit.> with the help of the main result of <cit.>, the collection of good blocks forms a finitely dependent percolation process on l^d∩ B_n, which dominates an i.i.d. percolation process with parameter q, where q can be made as close to 1 as we want by choosing k large enough. Consequently, by choosing k large enough we can guarantee that with probability at least 1-/2, the giant open cluster of any good block intersecting B_k(x) is connected to the giant open cluster of any good block intersecting B_k(y). Combining this with our previous deductions, we get that with probability at least 1-, any open path of size k/2 in B_k(x) is connected to any open path of size k/2 in B_k(y).
We are now ready to prove the lower bound.
Given any n and ε>0, there exist positive integers k,l depending only on d and ε (and not on n) such that whenever x,y∈ B_n, |x-y|_∞≥ 2k, and x,y are at an ℓ^∞ distance at least l from ∂ B_n, we have P_n^0(x↔ y) ≥ q-ε.
Take any n, k, l, x and y as in the statement of the theorem, where k and l will be chosen later. We will choose k to be even and k<l<n. Let E be the event that there is an open cluster C_x in B_k(x)∖ B_k/2(x) connecting ∂ B_k(x) and ∂ B_k/2(x), and an open cluster C_y in B_k(y)∖ B_k/2(y) connecting ∂ B_k(y) and ∂ B_k/2(y), such that C_x↮C_y in B_n. Note that if x↔∂ B_k(x), y↔∂ B_k(y), and E fails to happen, then the open clusters connecting x to ∂ B_k(x) and y to ∂ B_k(y) must be connected in B_n, and therefore x↔ y. Thus, by the FKG inequality,
P_n^0(x↔ y) ≥ P_n^0(x↔∂ B_k(x), y↔∂ B_k(y)) - P_n^0(E)
≥ P_n^0(x↔∂ B_k(x))P_n^0( y↔∂ B_k(y)) - P_n^0(E).
Let Q_x denote the FK-Ising model on B_l(x) under free boundary condition. Since k<l<n and B_l(x) ⊆ B_n, the FKG property implies that
P_n^0(x↔∂ B_k(x)) ≥ Q_x(x↔∂ B_k(x)) = P_l^0(0↔∂ B_k).
Similarly,
P_n^0(y↔∂ B_k(y)) ≥ P_l^0(0↔∂ B_k).
By the definition of q, we can choose k large enough (depending on d and ) such that
P(0↔∂ B_k) ≥√(q)-/8.
Having chosen k like this, we can then use the definition of the infinite volume measure to find l large enough (depending on d, and k) such that
P_l^0(0↔∂ B_k) ≥ P(0↔∂ B_k) -/8.
Thus, with such choices of k and l, we get that P_n^0(x↔∂ B_k(x)) and P_n^0(y↔∂ B_k(y)) are both bounded below by √(q) - /4. Plugging this into (<ref>), we get
P_n^0(x↔ y) ≥ q - /2 - P_n^0(E).
Finally, with a large enough choice of k (depending on d and ), Lemma <ref> implies that P_n^0(E)</2, which completes the proof.
Using Lemmas <ref> and <ref>, and a standard coupling of the Ising and FK-Ising models, we can now give an alternative proof of Theorem <ref>.
Take any β >0 and let p:= 1-e^-2β. Take any n. Let · denote averaging with respect to the Ising model on B_n at inverse temperature β and free boundary condition, and P_n^0(·) denote probability computed under the FK-Ising model with parameter p on B_n under free boundary condition. It is a standard fact <cit.> that for any i,j∈ B_n,
σ_iσ_j = P_n^0(i↔ j).
Using this identity and Lemmas <ref> and <ref>, and noting that p→ 1 as β→∞, it is now straightforward to prove Theorem <ref>.
§.§ Uniformity of four-point correlations in finite volume
First, we need the following analogue of Lemma <ref> for four-point connectivities. Let p and q be as in the beginning of this section, with p close enough to 1.
Given any ε>0 and n, there is some k>0 depending only on d and ε (and not on n), such that if x,y,w,z∈ B_n are such that all interpoint ℓ^∞ distances are greater than 2k, and all four points are at least at an ℓ^∞ distance k from the boundary, then we have
P_n^0(x↔ y ↔ w ↔ z) ≤ q^2 +ε, P_n^0(x↔ y ↮w ↔ z)≤ε.
For the proof of the first inequality, we proceed as in the proof of Lemma <ref>. Since the interpoint distances are all greater than 2k, the cubes B_k(x), B_k(y), B_k(w) and B_k(z) are disjoint. Let F be the event that all edges of B_n that do not belong to these cubes are open. Then by the FKG property, we have that for any l<k,
P_n^0(x↔ y ↔ w ↔ z) ≤ P_n^0(x↔∂ B_l(x), y ↔∂ B_l(y), w ↔∂ B_l(w), z↔∂ B_l(z))
≤ P_n^0(x↔∂ B_l(x), y ↔∂ B_l(y), w ↔∂ B_l(w), z↔∂ B_l(z) | F).
But, given F, the configurations inside the cubes B_k(x), B_k(y), B_k(w) and B_k(z) are independent, and follow the FK-Ising models in these cubes with wired boundary condition. Thus, we get
P_n^0(x↔ y ↔ w ↔ z) ≤ (P_k^1(0↔∂ B_l))^4.
By the equality of the infinite volume measures under free and wired boundary conditions, we have that for l fixed and k sufficiently large (depending on l, d and ),
(P_k^1(0↔∂ B_l))^4 ≤ (P(0↔∂ B_l))^4 +/2.
By the definition of q, a large enough value of l ensures that
(P(0↔∂ B_l))^4 ≤ q^2 + /2.
Combining the last three displays proves the first claim of the lemma.
For the second claim, note that the event x↔ y ↮w ↔ z implies that x↔∂ B_k(x), y↔∂ B_k(y), w↔∂ B_k(w), and z↔∂ B_k(z), but the open cluster joining y to ∂ B_k(y) is not connected to the open cluster joining w to ∂ B_k (w). By Lemma <ref>, the probability of this event can be made as small as we like by choosing k large enough.
The next lemma is the analogue of Lemma <ref> for four-point connectivities.
Given any ε>0 and n, there exist k and l depending only on d and ε (and not on n), such that if x,y,w,z∈ B_n are such that all interpoint ℓ^∞ distances are greater than 2k, and all four points are at least at an ℓ^∞ distance l from the boundary, then we have
P_n^0(x↔ y ↔ w ↔ z) ≥ q^2 - ε.
Let C_x and C_y be as in the proof of Lemma <ref>, and define C_w and C_z analogously. Let E be the event that at least two of these clusters are not connected to each other. Then, as in the proof of Lemma <ref>, we can use Lemma <ref> to conclude that P_n^0(E)<ε/2 if k is chosen large enough. Also, as in the proof of Lemma <ref>, we deduce that
P_n^0(x↔ y ↔ w ↔ z)
≥ P_n^0(x↔∂ B_k(x), y ↔∂ B_k(y), w ↔∂ B_k(w), z↔∂ B_k(z)) - P_n^0(E).
The proof is now completed by applying the FKG inequality to replace the first probability on the right by the product of the probabilities of the four events, and then proceeding as in the proof of Lemma <ref> to show that these probabilities are all bounded below by √(q)-/100 if k and l are large enough.
Finally, the following lemma gives the analogue of equation (<ref>) for four-point correlatons.
Take any β >0 and let p:=1-e^-2β. Take any n. Let · denote averaging with respect to the Ising model on B_n at inverse temperature β and free boundary condition, and let P_n^0(·) denote probability computed under the FK-Ising model with parameter p on B_n under free boundary condition. Then for any distinct i,j,k,l∈ B_n,
⟨σ_iσ_jσ_kσ_l⟩ = P_n^0(i↔ j ↔ k ↔ l) + P_n^0( i↔ j ↮k ↔ l)
+ P_n^0(i↔ k↮j↔ l) + P_n^0(i↔ l ↮j ↔ k).
It is a standard fact that a configuration from the Ising model on B_n at inverse temperature β and free boundary condition may be obtained as follows. First, generate a configuration from the FK-Ising model on B_n with parameter p, under free boundary condition. Then, take the connected components of vertices in this configuration, and independently for each component, assign the same spin to all vertices, where the spin is chosen to be 1 or -1 with equal probability. (For a proof, see <cit.>.)
Now take any distinct i,j,k,l∈ B_n. To compute σ_iσ_jσ_kσ_l, we consider the above coupling and compute the conditional expectations given the FK-Ising configuration, which we denote by σ_iσ_jσ_kσ_l'. The following are easy to see:
* If i, j,k,l are all in the same cluster, them σ_iσ_jσ_kσ_l' = 1.
* If two of i,j,k,l are in one cluster and the other two are in a different cluster, then σ_iσ_jσ_kσ_l' = 1.
* In all other cases, σ_iσ_jσ_kσ_l' = 0.
Taking unconditional expectation gives the desired result.
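The coupling used in this proof is easy to implement once the open clusters of the FK configuration are known. Below is a minimal Python sketch of the cluster-colouring step; it is our own illustration, and it assumes that the clusters are supplied as vertex sets.

```python
# Sketch of the coupling step just described (our code): every FK cluster gets
# an independent uniform +/-1 sign, and all of its vertices inherit that sign.
import random

def color_clusters(clusters, rng):
    spins = {}
    for cluster in clusters:
        sign = rng.choice((-1, 1))       # one fair coin per connected component
        for v in cluster:
            spins[v] = sign
    return spins

# With clusters {i,j} and {k,l}, the product sigma_i sigma_j sigma_k sigma_l is
# always +1, matching the second case in the list above.
spins = color_clusters([{"i", "j"}, {"k", "l"}], random.Random(1))
print(spins["i"] * spins["j"] * spins["k"] * spins["l"])
```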
It is easy to see that Theorem <ref> follows from the representation of the four-point correlation given in Lemma <ref>, together with the upper and lower bounds given in Lemma <ref> and <ref>.
Having obtained Theorem <ref> and Theorem <ref> via this alternate route, the rest of the proof of Theorem <ref> can now be completed as before.
§ PROOFS OF THE MAIN RESULTS
In this section, we will complete the proofs of the results from Section <ref> (except Theorem <ref>, which has already been proved in the previous section). Throughout this section, we will let · denote averaging with respect to the model on B_n with Hamiltonian H_n defined in (<ref>), at inverse temperature β. Diverging from the notation used in the previous section, we will use ·_0 to denote averaging with respect to the Ising model on B_n at inverse temperature β and free boundary condition (because this model corresponds to the case h=0 of our model). The following lemma (which is just the central limit theorem for the moment generating function) will be used several times.
Take any a_i∈, i∈ B_n. Let M:= max_i∈ B_n|a_i|. Suppose that M ≤1/2θ |B_n|^1/2 for some θ∈ [0,1] such that (e^θ |J_0|)<∞. Then
|[exp(1/√(|B_n|)∑_i∈ B_n a_i J_i)] - exp(1/2|B_n|∑_i∈ B_n a_i^2 )| ≤C_1 e^C_2/|B_n|^3/2∑_i∈ B_n |a_i|^3,
where C_1 and C_2 are positive constants that depend only on the law of the J_i's and the choice of θ.
Take any θ as in the statement of the theorem. We will let C, C_1,C_2,… denote any positive constants whose values depend only on d, on the law of the J_i's and on the choice of θ, and whose values may change from line to line. First, note that for any k,
|J_0|^k ≤k!/θ^k(e^θ|J_0|) ≤Ck!/θ^k.
By the above inequality and the facts that (J_0)=0, (J_0^2)=1, and M≤1/2θ |B_n|^1/2, we get that for any i,
[exp(a_i J_i/√(|B_n|))] = 1 + a_i^2/2|B_n| + R_i,
where
R_i := ∑_k=3^∞(a_i^k J_i^k/k!|B_n|^k/2).
Note that by (<ref>) and the fact that M≤1/2θ |B_n|^1/2,
|R_i| ≤∑_k=3^∞|a_i|^k/k! |B_n|^k/2|J_i|^k ≤∑_k=3^∞C_1|a_i|^k/|B_n|^k/2θ^k≤C_2|a_i|^3 /|B_n|^3/2.
Similarly, since M|B_n|^-1/2≤θ/2≤1/2,
|exp(a_i^2/2|B_n|) - 1 - a_i^2/2|B_n|| = ∑_k=2^∞a_i^2k/k! 2^k |B_n|^k≤Ca_i^4/|B_n|^2.
Now, for any N, and any x_1,…, x_N, y_1,…, y_N ∈, if we let K be the maximum of |x_i| and |y_i| over all i, then
|∏_i=1^N x_i - ∏_i=1^N y_i| ≤∑_i=1^N |x_1⋯ x_i-1 y_i⋯ y_N - x_1⋯ x_i y_i+1⋯ y_N|
≤∑_i=1^N K^N-1|x_i - y_i|.
By (<ref>), (<ref>), and the inequalities 1+x≤ e^x and M ≤1/2θ |B_n|^-1/2,
0≤[exp(a_i J_i/√(|B_n|))] ≤ e^C.
Similarly, by (<ref>),
0≤exp(a_i^2/2|B_n|) ≤ e^C.
Thus, by (<ref>), (<ref>), (<ref>) and (<ref>),
|[exp(1/√(|B_n|)∑_i∈ B_n a_i J_i)] - exp(1/2|B_n|∑_i∈ B_n a_i^2 )|
= |∏_i∈ B_n[exp(a_i J_i/√(|B_n|))] - ∏_i∈ B_nexp(a_i^2/2|B_n|)|
≤ e^C∑_i∈ B_n|[exp(a_i J_i/√(|B_n|))] - exp(a_i^2/2|B_n|)|
≤C_1 e^C_2/|B_n|^3/2∑_i∈ B_n |a_i|^3.
This completes the proof of the lemma.
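As a sanity check, the approximation in the lemma is easy to observe numerically. The following Python sketch is our own illustration: the choice of Rademacher disorder and of the coefficients a_i is arbitrary (subject to the hypotheses of the lemma), and the sample sizes are only meant to give rough agreement.

```python
# Quick numerical illustration (ours): for i.i.d. mean-zero, variance-one J_i with
# exponential moments -- Rademacher signs below -- the quantity
# E exp(|B_n|^{-1/2} sum_i a_i J_i) is close to exp(sum_i a_i^2 / (2|B_n|))
# once max_i |a_i| is much smaller than |B_n|^{1/2}.
import math
import random

rng = random.Random(0)
N = 500
a = [math.sin(i) for i in range(N)]          # any bounded choice of coefficients
samples = 2000
acc = 0.0
for _ in range(samples):
    s = sum(ai * rng.choice((-1, 1)) for ai in a) / math.sqrt(N)
    acc += math.exp(s)
monte_carlo = acc / samples
gaussian_limit = math.exp(sum(ai * ai for ai in a) / (2 * N))
print(monte_carlo, gaussian_limit)           # should agree to within a few percent
```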
§.§ Proof of Theorem <ref>
In this proof, o(1) will denote any quantity, deterministic or random, whose absolute value can be bounded by a deterministic quantity depending only on n (and the law of the J_i's and our choices of β and d) that tends to zero as n→∞.
We begin with the derivation of the approximate formula for the quenched expectation of the magnetization. Let X_n be defined as in (<ref>),
and define the random variable
L = L(σ) := β h/√(|B_n|)∑_i∈ B_n J_i σ_i,
where σ is drawn from the Ising model on B_n at inverse temperature β and free boundary condition. Let m and R_1,2 be the magnetization of σ and the overlap between two configurations drawn independently from the Gibbs measure of the Ising model, respectively. Let β and q be as in Theorem <ref>. The first step in the proof of Theorem <ref> is the following lemma.
Let L be as above. Then
lim_n→∞[(e^L_0 - e^1/2β^2 h^2(1-q)cosh X_n )^2 ]= 0.
Note that by Lemma <ref>,
e^L_0^2 = e^L(σ^1) + L(σ^2)_0
= exp(β h/√(|B_n|)∑_i∈ B_n J_i(σ_i^1 + σ_i^2))_0
=exp(β^2h^2/2|B_n|∑_i∈ B_n (σ_i^1 + σ_i^2)^2) + o(1)_0
= e^β^2h^2e^β^2 h^2 R_1,2_0 + o(1).
By Corollary <ref>, this shows that
lim_n→∞e^L_0^2 = e^β^2 h^2cosh(β^2 h^2 q).
Next, again by Lemma <ref>,
e^β^2 h^2(1-q)cosh^2X_n = 1/4e^β^2 h^2(1-q)(e^2X_n + e^-2X_n + 2)
= 1/2e^β^2 h^2(1-q)(e^2β^2 h^2 q + 1) + o(1)
= e^β^2 h^2cosh(β^2 h^2 q) + o(1).
Finally, by another application of Lemma <ref>,
e^1/2β^2 h^2(1-q)[e^L_0 cosh X_n] = 1/2e^1/2β^2 h^2(1-q)[e^L + X_n_0 + e^L - X_n_0]
= 1/2e^1/2β^2 h^2(1-q)[exp(β h/√(|B_n|)∑_i∈ B_n J_i(σ_i + √(q)))_0
+ exp(β h/√(|B_n|)∑_i∈ B_n J_i(σ_i - √(q)))_0]
= 1/2e^1/2β^2 h^2(1-q)[exp(β^2 h^2/2|B_n|∑_i∈ B_n (σ_i + √(q))^2)_0
+ exp(β^2 h^2/2|B_n|∑_i∈ B_n (σ_i - √(q))^2)_0 + o(1)]
= 1/2 e^β^2 h^2 [e^β^2 h^2√(q) m_0 + e^-β^2 h^2√(q)m_0] + o(1).
But, by Corollary <ref>,
lim_n→∞e^β^2 h^2√(q) m_0 = lim_n→∞e^- β^2 h^2√(q) m_0 = cosh(β^2 h^2 q).
Thus,
lim_n→∞ e^1/2β^2 h^2(1-q)[e^L_0 cosh X_n] = e^β^2 h^2cosh(β^2 h^2 q).
Combining (<ref>), (<ref>) and (<ref>), we get
lim_n→∞[(e^L_0 - e^1/2β^2 h^2(1-q)cosh X_n)^2 ]
= lim_n→∞[e^L_0^2 - 2 e^1/2β^2 h^2(1-q)e^L_0cosh X_n + e^β^2 h^2(1-q)cosh^2 X_n] = 0.
This completes the proof of the lemma.
The next step in the proof of Theorem <ref> is the following lemma.
Let L be as above. Then
lim_n→∞[(me^L_0 - √(q)e^1/2β^2h^2(1-q)sinh X_n)^2] = 0.
Take any j∈ B_n. By a computation similar to the one that led to equation (<ref>), we get
σ_j e^L_0^2 = e^β^2 h^2σ_j^1 σ_j^2 e^β^2 h^2R_1,2_0 + o(1).
Averaging over j, we get
1/|B_n|∑_j∈ B_nσ_j e^L_0^2 = e^β^2 h^2R_1,2 e^β^2 h^2R_1,2_0 + o(1).
By Corollary <ref>, this shows that
lim_n→∞1/|B_n|∑_j∈ B_nσ_j e^L_0^2 = q e^β^2 h^2sinh(β^2 h^2q).
Next, by a computation similar to the one that led to equation (<ref>), we get
qe^β^2h^2(1-q)sinh^2 X_n = q e^β^2 h^2sinh(β^2h^2 q) + o(1).
Finally, by a computation similar to the one that led to equation (<ref>), we get
√(q)e^1/2β^2h^2(1-q)[σ_j e^L_0 sinh X_n]
= √(q)/2 e^β^2h^2[σ_je^β^2 h^2√(q) m_0 - σ_je^-β^2 h^2√(q)m_0] + o(1).
Averaging this over j, we get
1/|B_n|∑_j∈ B_n√(q) e^1/2β^2 h^2(1-q)[σ_j e^L_0 sinh X_n] = √(q) e^β^2 h^2msinh(β^2 h^2 √(q) m)_0 + o(1).
By Corollary <ref>, this shows that
lim_n→∞1/|B_n|∑_j∈ B_n√(q)e^1/2β^2 h^2(1-q)[σ_j e^L_0 sinh X_n] = qe^β^2 h^2sinh(β^2 h^2 q).
Combining (<ref>), (<ref>) and (<ref>), we get
lim_n→∞1/|B_n|∑_j∈ B_n[(σ_j e^L_0 - √(q)e^1/2β^2h^2(1-q)sinh X_n)^2] = 0.
An application of the Cauchy–Schwarz inequality shows that the above quantity is an upper bound for the quantity that we want to show is converging to zero, thereby completing the proof.
The final ingredient we need is the following.
Let β be as in Theorem <ref>. Then (R_1,2 - m(σ^1)m(σ^2))^2_0 → 0 as n→∞, where σ^1 and σ^2 are drawn independently from the Ising model on B_n at inverse temperature β and free boundary condition.
Note that
(R_1,2 - m(σ^1)m(σ^2))^2_0 = R_1,2^2_0 - 2R_1,2m(σ^1)m(σ^2)_0 + m^2_0^2.
By Corollary <ref>, R_1,2^2_0 → q^2 and m^2_0 → q as n→∞. Now, note that
R_1,2m(σ^1)m(σ^2)_0 = 1/|B_n|^3∑_i,j,k∈ B_nσ_i^1 σ_i^2 σ_j^1 σ_k^2_0
= 1/|B_n|^3∑_i,j,k∈ B_nσ_iσ_j_0 σ_iσ_k_0.
Using the same tactics as in the proof of Theorem <ref>, it is now easy to show that the above quantity tends to q^2 as n→∞. This completes the proof.
We are now ready to complete the proof of Theorem <ref>.
First, note that by Jensen's inequality,
⟨e^L⟩_0 ≥ e^⟨L⟩_0 = 1.
Thus, by (<ref>) and the Cauchy–Schwarz inequality,
|m^2 -q| = |m^2 -q| e^L_0/e^L_0
≤|m^2 -q| e^L_0 ≤√((m^2-q)^2_0e^2L_0).
By Theorem <ref>, the first term within the square-root tends to zero as n→∞. By Lemma <ref>, the second term is uniformly bounded in n. Thus, |m^2 -q|→ 0 as n→∞. Since
(m^2 -q)^2≤ 2|m^2 -q|,
this proves the first claim of the theorem. Next, note that
m = me^L_0/e^L_0.
Thus, if we let
a := √(q) e^1/2β^2h^2(1-q)sinh X_n, b := e^1/2β^2h^2(1-q)cosh X_n, c := a/b = √(q)tanh X_n,
then by (<ref>),
|m - c| = | me^L_0/e^L_0 - a/b|= |b me^L_0 - ae^L_0|/b e^L_0
≤ |b me^L_0 - ae^L_0|/b≤ |me^L_0 - a| + a/b |e^L_0-b|.
Since b≥ 1, this shows that
|m - c| ≤|me^L_0 - a| + (a |e^L_0-b|)
≤√([(me^L_0 - a)^2]) + √((a^2) [(e^L_0-b)^2]).
By Lemma <ref>, Lemma <ref>, and the fact that (a^2) is uniformly bounded in n (by Lemma <ref>), we get that the above quantity tends to zero as n→∞. But, since m and c are both in [-1,1],
[(m - c)^2] ≤ 2 |m - c|.
This proves (<ref>). To prove (<ref>), note that by Lemma <ref> and the inequality (<ref>), proceeding as in the derivation of (<ref>), we get
1/|B_n|∑_j∈ B_n|σ_j - √(q)tanh X_n| = 1/|B_n|∑_j∈ B_n|σ_je^L_0/e^L_0 - a/b|
= 1/|B_n|∑_j∈ B_n(|b σ_je^L_0 - a e^L_0|/be^L_0)
≤1/|B_n|∑_j∈ B_n(|b σ_je^L_0 - a e^L_0|/b)
≤√((a^2) [(e^L_0-b)^2]) + 1/|B_n|∑_j∈ B_n√([(σ_je^L_0 - a)^2]).
We have already seen that the first term tends to zero as n→∞. The second term is bounded above by
[1/|B_n|∑_j∈ B_n[(σ_je^L_0 - a)^2]]^1/2.
By (<ref>), this also tends to zero as n→∞. Thus,
lim_n→∞1/|B_n|∑_j∈ B_n|σ_j - √(q)tanh X_n| = 0.
Thus, by (<ref>) and the fact that
(σ_j - √(q)tanh X_n)^2≤ 2|σ_j - √(q)tanh X_n|,
we get (<ref>). Finally, note that by (<ref>) and the Cauchy–Schwarz inequality,
|R_1,2 - m(σ^1) m(σ^2)| = (|R_1,2 - m(σ^1) m(σ^2)|e^L(σ^1) + L(σ^2)_0/e^L_0^2)
≤|R_1,2 - m(σ^1) m(σ^2)|e^L(σ^1) + L(σ^2)_0
≤√((R_1,2 - m(σ^1) m(σ^2))^2_0e^2L(σ^1) + 2L(σ^2)_0)
By Lemma <ref>, the first term inside the square-root tends to zero as n→∞. The second term is uniformly bounded in n, by Lemma <ref>. This shows that the expression on the left tends to zero. But note that
(R_1,2 - m(σ^1) m(σ^2))^2≤ 2|R_1,2 - m(σ^1) m(σ^2)|.
This completes the proof of Theorem <ref>.
§.§ Proof of Theorem <ref>
All assertions of Theorem <ref> are direct consequences of the properties of m from Theorem <ref> and the result that (R_1,2 - m(σ^1) m(σ^2))^2→ 0 as n→∞.
§.§ Proof of Theorem <ref>
Let A := {-√(q),√(q)}^3 ⊆^3, and let B denote the set displayed in (<ref>). Consider the map f: ^3 →^3 defined as f(x,y,z) := (xy, yz, zx). Then f is a continuous map, and an easy verification shows that f(A) = B. (For example, f(q,q,q) = (q,q,q), f(q,q,-q) = (q, -q, -q), f(q,-q,-q) = (-q,q,-q), etc.) Take any open set V⊇ B, and let U=f^-1(V). Then U is also open, and U⊇ A. Let σ^1, σ^2, σ^3 be three configurations drawn independently from the Gibbs measure of our model, and define the overlaps as usual. By Theorem <ref>, the difference between the random vectors (R_1,2, R_2,3, R_3,1) and f(m(σ^1), m(σ^2), m(σ^3)) converges to the zero vector in L^2 (unconditionally, after integrating out the disorder). This shows, first of all, that the quenched law of (R_1,2, R_2,3, R_3,1) converges in distribution, because so does the quenched law of (m(σ^1), m(σ^2), m(σ^3)). Next, note that by Theorem <ref>,
lim_n→∞((m(σ^1), m(σ^2), m(σ^3)) ∈ U) =1,
where denotes the unconditional probability, after integrating out the disorder. Thus,
lim_n→∞(f(m(σ^1), m(σ^2), m(σ^3)) ∈ V) =1.
Combining this with the previous observation, we see that for any open set V⊇ B,
lim_n→∞((R_1,2, R_2,3, R_3,1) ∈ V) =1.
This shows that the quenched probability of the event (R_1,2, R_2,3, R_3,1) ∈ V converges to 1 in probability. From this, it is easy to complete the proof of the theorem.
§.§ Proof of Theorem <ref>
Taking k=2, f= S_1,2 and ψ(x) = x in (<ref>) gives the equation
(S_1,2S_1,3) = 1/2((S_1,2))^2 + 1/2(S_1,2^2).
We will show that this equation fails for the overlap in the infinite volume limit of our model. Indeed, by Theorem <ref>,
lim_n→∞ (2R_1,2R_1,3 - (R_1,2)^2 - R_1,2^2)
= lim_n→∞ (2m(σ^1)^2m(σ^2)m(σ^3) - (m(σ^1) m(σ^2))^2 - m(σ^1)^2m(σ^2)^2)
=lim_n→∞ (2(m^2m^2) - ((m^2))^2 - (m^2^2)),
provided that the limits exist (which we will prove shortly). Let X_n be defined as in (<ref>), and let Y_n := tanh X_n.
Then by Theorem <ref>, the right side of (<ref>) equals
lim_n→∞ (2q^2(Y_n^2) - q^2((Y_n^2))^2 - q^2) = - q^2 lim_n→∞ (1-(Y_n^2))^2.
But Y_n is a bounded random variable which converges in distribution to tanh(√(q)β h Z), where Z is a standard Gaussian random variable. Thus, for any finite h, the above limit is nonzero. This completes the proof.
§.§ Proof of Theorem <ref>
In this subsection, we will denote averaging with respect to the antiferromagnetic Ising model on B_n at inverse temperature β and free boundary condition by ·_a,0, and averaging with respect to the model on B_n with Hamiltonian (<ref>) at inverse temperature β by ·_a.
Let σ be a configuration drawn from the ferromagnetic Ising model on B_n at inverse temperature β and free boundary condition. Define η∈Σ_n as
η_i := (-1)^|i|_1σ_i for all i∈ B_n.
Then, it is easy to see that η is drawn from antiferromagnetic Ising model on B_n at inverse temperature β and free boundary condition. Thus, we have
m^2_a,0 = 1/|B_n|^2∑_i,j∈ B_n σ_i σ_j_a,0 = 1/|B_n|^2∑_i,j∈ B_n (-1)^|i|_1+|j|_1σ_i σ_j_0.
Take any ε∈ (0,1). Let δ_n be as in Theorem <ref> and let m := ⌊ (1-ε)n ⌋. Let S be defined as in equation (<ref>),
and let S^c := (B_n × B_n)∖ S.
Then
1/|B_n|^2∑_i,j∈ B_n (-1)^|i|_1+|j|_1(σ_i σ_j_0-q) ≤|S^c|/|B_n|^2 + |S|δ_n/|B_n|^2≤|S^c|/|B_n|^2 + δ_n.
Now, if (i,j)∈ S^c, then either at least one of i and j is in B_n ∖ B_m, or |i-j|_1<ε n. From this observation, it follows that
|S^c| ≤ Cε n^2d + Cε^d n^2d,
where C depends only on d. Also, by Theorem <ref>, δ_n → 0 as n→∞. Combining, we get
lim sup_n→∞1/|B_n|^2∑_i,j∈ B_n (-1)^|i|_1+|j|_1(σ_i σ_j_0-q) ≤ Cε.
It is easy to see that
lim_n→∞1/|B_n|^2∑_i,j∈ B_n (-1)^|i|_1+|j|_1 = 0.
Combining all of the above, we get that lim sup_n→∞m^2_a,0≤ Cε. Since this holds for every ε∈ (0,1), and m^2_a,0≥0, we conclude that m^2_a,0→ 0 as n→∞.
Now let L be defined as in (<ref>). Then, as in (<ref>), we have e^L_a,0≥ e^L_a,0 = 1. Thus,
|m|_a = (|m| e^L_a,0/e^L_a,0) ≤|m| e^L_a,0≤√(m^2_a,0e^L_a,0).
We have shown above that the first term inside the square-root tends to zero as n→∞. By Lemma <ref> and the above relationship between η and σ, the second term is uniformly bounded in n. Thus, |m|_a → 0 as n→∞. Since |m|≤ 1, this implies that m^2_a → 0.
Lastly, let σ^1 and σ^2 be configurations drawn independently from the model on B_n with Hamiltonian (<ref>) at inverse temperature β, but with J_i replaced by (-1)^|i|_1 J_i. Define η^1 and η^2 via the relationship (<ref>).
Then, it is easy to see that η^1 and η^2 are drawn independently from the model on B_n with Hamiltonian (<ref>) at inverse temperature β. Moreover, the overlap between η^1 and η^2 is exactly the same as the overlap between σ^1 and σ^2. Thus, all of the claims about the overlap that we have proved for the ferromagnetic model continue to hold for the antiferromagnetic model, after replacing J_i by (-1)^|i|_1J_i in the theorem statements.
§.§ Proof of Theorem <ref>
Let F denote the free energy of our model. That is,
F = log∑_σ∈Σ_n e^-β H_n(σ),
with H_n defined as in (<ref>). Then note that
∂ F/∂ J_i = β h⟨σ_i⟩/√(|B_n|).
This implies, by the Gaussian Poincaré inequality <cit.>, that
Var(F) ≤∑_i∈ B_n𝔼[(∂ F/∂ J_i)^2] ≤β^2 h^2.
On the other hand,
∂^2 F/∂ J_i∂ J_j = β^2h^2/|B_n| (⟨σ_iσ_j⟩ - ⟨σ_i⟩⟨σ_j⟩),
and therefore, by <cit.>,
Var(F) ≥1/2∑_i,j∈ B_n[𝔼(∂^2 F/∂ J_i∂ J_j)]^2
= β^4h^4/2|B_n|^2∑_i,j∈ B_n [𝔼(⟨σ_iσ_j⟩-⟨σ_i⟩⟨σ_j⟩)]^2.
By the FKG inequality for the RFIM <cit.>, ⟨σ_iσ_j⟩-⟨σ_i⟩⟨σ_j⟩≥ 0 for all i, j. Thus,
⟨R_1,2^2⟩ - ⟨R_1,2⟩^2 = 1/|B_n|^2∑_i,j∈ B_n (⟨σ_iσ_j⟩^2 - ⟨σ_i⟩^2⟨σ_j⟩^2)
= 1/|B_n|^2∑_i,j∈ B_n (⟨σ_iσ_j⟩ - ⟨σ_i⟩⟨σ_j⟩)(⟨σ_iσ_j⟩+⟨σ_i⟩⟨σ_j⟩)
≤2/|B_n|^2∑_i,j∈ B_n |⟨σ_iσ_j⟩ - ⟨σ_i⟩⟨σ_j⟩|
= 2/|B_n|^2∑_i,j∈ B_n (⟨σ_iσ_j⟩ - ⟨σ_i⟩⟨σ_j⟩).
Combining this with (<ref>), we get
𝔼⟨(R_1,2 - ⟨R_1,2⟩)^2⟩ = 𝔼(⟨R_1,2^2⟩ - ⟨R_1,2⟩^2)
≤2/|B_n|^2∑_i,j∈ B_n𝔼(⟨σ_iσ_j⟩ - ⟨σ_i⟩⟨σ_j⟩)
≤ 2[1/|B_n|^2∑_i,j∈ B_n (𝔼(⟨σ_iσ_j⟩ - ⟨σ_i⟩⟨σ_j⟩))^2]^1/2
≤ 2[2 Var(F)/β^4h^4]^1/2.
Plugging in the upper bound on Var(F) from (<ref>), we get
𝔼⟨(R_1,2 - ⟨R_1,2⟩)^2⟩ ≤2^3/2/β |h|.
The upper bound tends to zero if |h|→∞ as n→∞. This proves the second claim of Theorem <ref>. For the first claim, note that for any k,
⟨R_1,2^k⟩ = ⟨R_1,2^k e^L⟩_0/⟨e^L⟩_0,
where L is the function defined in (<ref>).
By (<ref>), this shows that
|⟨R_1,2^k⟩ - ⟨R_1,2^k⟩_0| = |⟨R_1,2^k e^L⟩_0/⟨e^L⟩_0 - ⟨R_1,2^k⟩_0|
= |⟨R_1,2^k e^L⟩_0 - ⟨R_1,2^k⟩_0⟨e^L⟩_0|/⟨e^L⟩_0
≤ |⟨R_1,2^k e^L⟩_0 - ⟨R_1,2^k⟩_0⟨e^L⟩_0|
= |⟨R_1,2^k (e^L-⟨e^L⟩_0)⟩_0|
≤⟨|e^L - ⟨e^L⟩_0|⟩_0
= ⟨|(e^L-1) - ⟨e^L-1⟩_0|⟩_0 ≤ 2⟨|e^L-1|⟩_0.
Now, note that for any given σ∈Σ_n, by Lemma <ref>,
|e^L(σ) - 1| ≤√([(e^L(σ) - 1)^2])
= √([e^2L(σ) - 2e^L(σ) + 1])
= √(e^2β^2 h^2 - 2e^1/2β^2h^2 + 1 + o(1))
= √((e^2β^2 h^2 - 1) - 2(e^1/2β^2h^2 - 1) + o(1)).
By the inequality e^x - 1≤ ex that holds for 0≤ x≤ 1, we get that the above quantity is bounded above by Cβ|h| + o(1) when |h|≤1/β, where C is a universal constant. In particular, it tends to zero as h→ 0. Thus, if h→ 0 as n→∞, then for every k,
lim_n→∞|R_1,2^k- R_1,2^k_0| = 0.
This shows that if n is large, then all quenched moments of R_1,2 under our model are, with high probability, close to the corresponding moments of R_1,2 under the Ising model. From this, it is not hard to prove the claim stated in the theorem (e.g., using Bernstein approximation).
§.§ Proof of Theorem <ref>
By Theorem <ref> and Theorem <ref>, (R_1,2^2 - q^2)^2→ 0 and (m^2 - q)^2→ 0 for our model. This allows us to repeat the proof of Theorem <ref> from Subsection <ref> verbatim to deduce that the conclusion of Theorem <ref> holds even if h 0, with the same q. This shows, in particular, that if π_n is a uniform random permutation of the elements of B_n, then for any even l,
σ_π_n(1)σ_π_n(2)⋯σ_π_n(l)→ q^l/2
in probability as n→∞. Next, let us consider the case of odd l. For the Ising model, the above expectation is zero if l is odd. This is no longer true if h 0. Recall the random variable X_n defined in equation (<ref>). Take any odd positive integer l. We claim that
σ_π_n(1)σ_π_n(2)⋯σ_π_n(l) - q^l/2tanh X_n → 0
in probability as n→∞. To prove this, let m be the magnetization. We claim that
m^l - q^l/2tanh X_n → 0
in probability as n→∞. To see this, note that by Theorem <ref>, (m^2 - q)^2→ 0. From this, it is easy to deduce that
m^l - q^1/2(l-1)m→ 0
in probability, since m^l-1 can be replaced by q^1/2(l-1) asymptotically. But again by Theorem <ref>, m - √(q)tanh X_n → 0 in probability. Combining these two observations yields (<ref>). Next, we claim that
R_1,2^l - q^l tanh^2 X_n → 0
in probability as n→∞. To see this, note that by Theorem <ref>, (R_1,2-m(σ^1)m(σ^2))^2→ 0 in probability. This implies that
R_1,2^l - m(σ^1)^l m(σ^2)^l→ 0
in probability. But m(σ^1)^l m(σ^2)^l = m^l^2. Thus, (<ref>) follows from (<ref>). Now, proceeding just as in the derivation of (<ref>), we get
1/|B_n|^l∑_i_1,…,i_l∈ B_n |σ_i_1⋯σ_i_l - q^l/2tanh X_n|
≤[1/|B_n|^l∑_i_1,…,i_l∈ B_n (σ_i_1⋯σ_i_l - q^l/2tanh X_n)^2]^1/2
= [1/|B_n|^l∑_i_1,…,i_l∈ B_n (σ_i_1⋯σ_i_l^2 - 2q^l/2σ_i_1⋯σ_i_ltanh X_n + q^ltanh^2 X_n)]^1/2
= [R_1,2^l - 2q^l/2m^ltanh X_n + q^ltanh^2 X_n]^1/2.
By (<ref>) and (<ref>), the last expression tends to zero in probability as n→∞. This proves the claim (<ref>).
Now take any n, and let τ_n,1,τ_n,2,… be an infinite exchangeable sequence of random variables with the following random law. Given X_n, let Z_n be a random variable that takes value √(q) with probability 1/2(1+tanh X_n) and -√(q) with probability 1/2(1-tanh X_n). Having generated Z_n, let τ_n,1,τ_n,2,… be i.i.d. random variables taking value 1 with probability 1/2(1+Z_n) and -1 with probability 1/2(1-Z_n). Then note that (τ_n,i|Z_n, X_n) = Z_n, and therefore, for any positive integer l,
(τ_n,1⋯τ_n,l|Z_n, X_n) = Z_n^l.
This gives us
(τ_n,1⋯τ_n,l|X_n) = (Z_n^l|X_n) =
q^l/2tanh X_n if l is odd,
q^l/2 if l is even.
Comparing this with (<ref>) and (<ref>), it is now easy to show that for any l, the Lévy–Prokhorov distance between the (random) laws of (σ_π_n(1),…, σ_π_n(l)) and (τ_n,1,…, τ_n,l) converges to zero in probability as n→∞. But, the random law of (τ_n,1,…, τ_n,l) converges in distribution to the random law of (τ_1,…,τ_l), where τ_1,τ_2,… are defined just like the τ_n,i's, but with X_n replaced by X = √(q)β h W, where W is a standard Gaussian random variable. This suffices to complete the proof.
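The two-stage sampling scheme used in this argument can be written out explicitly. The short Python sketch below is our own illustration; q and X_n are taken as given inputs, and the function name is ours.

```python
# Sketch of the construction above (our code): draw Z_n from {+sqrt(q), -sqrt(q)}
# with P(Z_n = +sqrt(q)) = (1 + tanh X_n)/2, then draw the tau_{n,i} i.i.d. with
# P(tau = +1) = (1 + Z_n)/2.  Conditionally on (Z_n, X_n), the product
# tau_1 ... tau_l has expectation Z_n^l, as used in the proof.
import math
import random

def sample_tau(q, x_n, length, rng):
    z = math.sqrt(q) if rng.random() < 0.5 * (1.0 + math.tanh(x_n)) else -math.sqrt(q)
    p_plus = 0.5 * (1.0 + z)
    return [1 if rng.random() < p_plus else -1 for _ in range(length)]

print(sample_tau(q=0.8, x_n=0.3, length=5, rng=random.Random(2)))
```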
|
http://arxiv.org/abs/2307.04921v1 | 20230710220542 | Brown dwarf companions in binaries detected from the 2021 season high-cadence microlensing surveys | [
"Cheongho Han",
"Youn Kil Jung",
"Ian A. Bond",
"Sun-Ju Chung",
"Michael D. Albrow",
"Andrew Gould",
"Kyu-Ha Hwang",
"Chung-Uk Lee",
"Yoon-Hyun Ryu",
"In-Gu Shin",
"Yossi Shvartzvald",
"Hongjing Yang",
"Jennifer C. Yee",
"Weicheng Zang",
"Sang-Mok Cha",
"Doeon Kim",
"Dong-Jin Kim",
"Seung-Lee Kim",
"Dong-Joo Lee",
"Yongseok Lee",
"Byeong-Gon Park",
"Richard W. Pogge",
"Fumio Abe",
"Richard Barry",
"David P. Bennett",
"Aparna Bhattacharya",
"Hirosame Fujii",
"Akihiko Fukui",
"Ryusei Hamada",
"Yuki Hirao",
"Stela Ishitani Silva",
"Yoshitaka Itow",
"Rintaro Kirikawa",
"Naoki Koshimoto",
"Yutaka Matsubara",
"Shota Miyazaki",
"Yasushi Muraki",
"Greg Olmschenk",
"Clément Ranc",
"Nicholas J. Rattenbury",
"Yuki Satoh",
"Takahiro Sumi",
"Daisuke Suzuki",
"Mio Tomoyoshi",
"Paul J. Tristram",
"Aikaterini Vandorou",
"Hibiki Yama",
"Kansuke Yamashita"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.EP"
] |
Microlensing brown-dwarf companions in binaries
Department of Physics, Chungbuk National University, Cheongju 28644, Republic of Korea,
Korea Astronomy and Space Science Institute, Daejon 34055, Republic of Korea
Korea University of Science and Technology (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
Institute of Natural and Mathematical Science, Massey University, Auckland 0745, New Zealand
Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
University of Canterbury, Department of Physics and Astronomy, Private Bag 4800, Christchurch 8020, New Zealand
Max-Planck-Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany
Department of Astronomy, Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA
Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 76100, Israel
Department of Astronomy, Tsinghua University, Beijing 100084, China
School of Space Research, Kyung Hee University, Yongin, Kyeonggi 17104, Republic of Korea
Institute for Space-Earth Environmental Research, Nagoya University, Nagoya 464-8601, Japan
Code 667, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Department of Astronomy, University of Maryland, College Park, MD 20742, USA
Komaba Institute for Science, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan
Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain
Department of Earth and Space Science, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan
Department of Physics, The Catholic University of America, Washington, DC 20064, USA
Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo, Sagamihara, Kanagawa 252-5210, Japan
Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France
Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand
University of Canterbury Mt. John Observatory, P.O. Box 56, Lake Tekapo 8770, New Zealand
As a part of the project aiming to build a homogeneous sample of binary-lens (2L1S) events
containing brown-dwarf (BD) companions, we investigate the 2021 season microlensing data
collected by the Korea Microlensing Telescope Network (KMTNet) survey.
For this purpose, we first identify 2L1S events by conducting systematic analyses of
anomalous lensing events. We then select candidate BD-companion events by applying the
criterion that the mass ratio between the lens components is less than q_ th∼ 0.1.
From this procedure, we find four binary-lens events including KMT-2021-BLG-0588,
KMT-2021-BLG-1110, KMT-2021-BLG-1643, and KMT-2021-BLG-1770, for which the estimated mass
ratios are q∼ 0.10, 0.07, 0.08, and 0.15, respectively. The event KMT-2021-BLG-1770
is selected as a candidate despite the fact that the mass ratio is slightly greater than
q_ th because the lens mass expected from the measured short time scale of the event,
t_E ∼ 7.6 days, is small. From the Bayesian analyses, we estimate that the primary and
companion masses are
(M_1/M_⊙, M_2/M_⊙)=
(0.54^+0.31_-0.24, 0.053^+0.031_-0.023) for KMT-2021-BLG-0588L,
(0.74^+0.27_-0.35, 0.055^+0.020_-0.026) for KMT-2021-BLG-1110L,
(0.73^+0.24_-0.17, 0.061^+0.020_-0.014) for KMT-2021-BLG-1643L, and
(0.13^+0.18_-0.07, 0.020^+0.028_-0.011) for KMT-2021-BLG-1770L.
It is estimated that the probabilities of the lens companions being in the BD mass range are
82%, 85%, 91%, and 59% for the individual events. For confirming the BD nature of the
lens companions found in this and previous works by directly imaging the lenses from future
high-resolution adaptive-optics (AO) followup observations, we provide the lens-source
separations expected in 2030, which is an approximate year of the first AO light on 30 m class
telescopes.
Brown dwarf companions in binaries detected from the 2021 season high-cadence microlensing surveys
Cheongho Han01
Youn Kil Jung02,03
Ian A. Bond04
(Leading authors)
Sun-Ju Chung02, 05
Michael D. Albrow06
Andrew Gould07,08
Kyu-Ha Hwang02
Chung-Uk Lee02
Yoon-Hyun Ryu02
In-Gu Shin05
Yossi Shvartzvald09
Hongjing Yang10
Jennifer C. Yee05
Weicheng Zang05,10
Sang-Mok Cha02,11
Doeon Kim01
Dong-Jin Kim02
Seung-Lee Kim02
Dong-Joo Lee02
Yongseok Lee02,11
Byeong-Gon Park02
Richard W. Pogge08
(The KMTNet collaboration)
Fumio Abe12
Richard Barry13
David P. Bennett13,14
Aparna Bhattacharya13,14
Hirosame Fujii12
Akihiko Fukui15,16
Ryusei Hamada17
Yuki Hirao18
Stela Ishitani Silva13,19
Yoshitaka Itow12
Rintaro Kirikawa17
Naoki Koshimoto17
Yutaka Matsubara12
Shota Miyazaki20
Yasushi Muraki12
Greg Olmschenk13
Clément Ranc21
Nicholas J. Rattenbury22
Yuki Satoh17
Takahiro Sumi17
Daisuke Suzuki17
Mio Tomoyoshi17
Paul J. Tristram23
Aikaterini Vandorou13,14
Hibiki Yama17
Kansuke Yamashita17
(The MOA Collaboration)
August 12, 2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
With the trait that does not depend on the light of a lens, microlensing is suited for finding
and studying faint and dark astronomical objects. One scientifically important object to which
this microlensing trait is successfully applied is an extrasolar planet. With the proposals of
<cit.> and <cit.>, extensive searches for extrasolar planets using the
microlensing method have been carried out since the 1990s. Being started with the first
discovery of a giant planet in 2003 by <cit.>, 200 microlensing planets have been
reported according to the NASA Exoplanet Archive[https://exoplanetarchive.ipac.caltech.edu],
making microlensing the third most productive planet-detection method, after the transit and
radial-velocity methods.
Brown dwarfs (BDs) are another population of astronomical objects for which microlensing is
well suited for detections.
Microlensing BDs can be detected through two channels. The first channel is
via a single-lens single-source (1L1S) event with a short time scale t_E. The event time
scale is related to the lens mass M as
t_E = θ_E/μ; θ_E = (κ M π_rel)^1/2,
and thus short time-scale events may be produced by BDs with masses lower than those of stars.
Here θ_E represents the angular Einstein radius, μ is the relative lens-source proper
motion, κ=4G/(c^2 AU), π_rel = AU(D_L^-1 - D_S^-1)
is the relative lens-source parallax, and D_L and D_S denote the distances to the lens and
source, respectively. However, it is difficult to confirm the BD nature of a lens based on the
event time scale alone, because the time scale depends additionally on μ and π_ rel.
The mass and distance to the lens can be unambiguously determined by measuring the extra observables
of the angular Einstein radius θ_E and the microlens parallax π_E from the relations
M = θ_E/(κπ_E); D_L = AU/(π_Eθ_E + π_S).
The microlens parallax is related to the relative lens-source parallax and Einstein radius
by π_E = π_rel/θ_E <cit.>. For a 1L1S event, the probability
of measuring the angular Einstein radius is very low because θ_E can be measured for only a very
minor fraction of events in which the lens passes over the surface of the source, for example, the 1L1S
events presented in <cit.>, <cit.>, and <cit.>. The probability
of measuring the microlens parallax, which is generally measured from the deviation of the lensing
light curve caused by the departure of the relative lens-source motion from rectilinear induced by
the orbital motion of Earth, is even lower because the parallax-induced deviation in the lensing
light curve is generally too small to be measured for a short time-scale BD event. The microlens
parallax for a short time-scale event can be measured under special observational environments,
and there exist only three cases for which the nature of the single BD lens was confirmed from the
mass determination by measuring the microlens parallax. The first case is OGLE-2007-BLG-224, for
which π_E was measured from the subtle differences among the light curves constructed from
observations using telescopes lying at multiple sites on Earth when the magnifications of the event
were extremely high <cit.>. For the other two cases of OGLE-2015-BLG-1268 <cit.>
and OGLE-2017-BLG-0896 <cit.>, the π_E values were measured from simultaneous
observations of the events using ground-based telescopes and the space-based Spitzer satellite.
In the case of OGLE-2015-BLG-1482 <cit.>, which was also simultaneously observed using
the Spitzer and ground-based telescopes, the light curve was almost equally well explained by two
solutions, in which the lens was a very low-mass star with a mass 0.10 ± 0.02 M_⊙ according
to one solution, and the lens was a BD with a mass 0.052 ± 0.008 M_⊙ according to the other
solution, and thus the BD nature of the lens could not be confirmed.
Another channel of detecting microlensing BDs is via a binary-lens single-source (2L1S) event.
Compared to a 1L1S event, analysis of a 2L1S event yields an additional constraint of the
companion-to-primary mass ratio q. This constraint can be used to select candidate BD
companions of binary lenses based on the fact that typical Galactic lensing events are produced
by low-mass stars <cit.>, and thus companions with mass ratios q ≲ 0.1 are very
likely to be BDs. Furthermore, the probability of measuring the Einstein radii for these events is
high because the light curves of these events usually exhibit anomaly features resulting from source
crossings over or approaches very close to caustics. In these cases, the light curves are likely to
be affected by finite-source effects, from which can be measured and the lens mass can be
further constrained.
In order to find BDs through the second channel, <cit.>, hereafter paper I, investigated
the microlensing data collected during the 2016–2018 period by the high-cadence surveys and
reported 6 binaries with candidate BD companions, including OGLE-2016-BLG-0890LB, MOA-2017-BLG-477LB,
OGLE-2017-BLG-0614LB, KMT-2018-BLG-0357LB, OGLE-2018-BLG-1489LB, and OGLE-2018-BLG-0360LB. From
continued analyses of the lensing events found during the 2018–2020 period, <cit.>,
hereafter paper II, reported another 4 binaries with candidate BD companions, including
KMT-2018-BLG-0321LB, KMT-2018-BLG-0885LB, KMT-2019-BLG-0297LB, and KMT-2019-BLG-0335LB.
In this work, we report four additional candidate BD companions to binary lenses found from the
inspection of the 2021 season microlensing data, including KMT-2021-BLG-0588LB, KMT-2021-BLG-1110LB,
KMT-2021-BLG-1643LB, and KMT-2021-BLG-1770LB. The main scientific purpose of this and previous
works is building a homogeneous sample of binary-lens events containing BD companions found from
the KMTNet survey by applying a consistent criterion. The sample will be useful for future statistical
analyses on BDs such as the distribution of mass ratios and separations and the occurrence rate of
star-BD binary pairs.
For the presentation of the findings and analyses of the BD events, we organize the paper as
follows. In Sect. <ref>, we describe the procedure of selecting candidate events produced
by binary lenses possessing BD companions. In Sect. <ref>, we depict the data used in
the analyses and the observations carried out to obtain the data. In Sect. <ref>, we start
by explaining the common procedure applied to analyze the events and detail the analyses of the
individual events in the following subsections: KMT-2021-BLG-0588L in Sect. <ref>,
KMT-2021-BLG-1110L in Sect. <ref>, KMT-2021-BLG-1643L in Sect. <ref>,
and KMT-2021-BLG-1770L in Sect. <ref>. In Sect. <ref>, we mention the
procedure of specifying the source stars and estimate the Einstein radii of the individual events.
In Sect. <ref>, we explain the Bayesian analyses conducted to estimate the physical lens
parameters of the events and present the obtained parameters. In Sect. <ref>, we summarize
the results from the analyses and discuss future followup observations that can confirm the BD
natures of the lens companions reported in this work and those found from previous analyses in
papers I and II.
§ SELECTIONS OF BD CANDIDATES
The binary-lens events with BD companions were found from the inspection of the microlensing
events that were found in the 2021 season by the Korea Microlensing Telescope Network
<cit.> survey. For a 2L1S event possessing a planetary lens companion,
with a companion-to-primary mass ratio of order 10^-3 or less, the signal of the companion,
in general, can be readily identified from its characteristic short-term anomaly feature in the
lensing light curve <cit.>. For a 2L1S event with a BD companion, which has a mass
ratio of order 10^-2, however, it is difficult to promptly identify the BD nature of the
companion, because the lensing light curves are, in many cases, similar to those produced by
binary lenses with approximately equal-mass components. In the searches for BD companions in
binary lenses, therefore, we conducted systematic analyses of all anomalous lensing events
detected by the KMTNet survey.
We selected events with BD companions by imposing the criterion of q ≲ 0.1 among the
2L1S events identified from the first-round analyses. We note that the criterion is the same as
the criterion that was adopted in papers I and II, and thus the BD events presented in this and
previous works constitute a uniform sample. From this procedure, we identified four candidate
BD-companion events including KMT-2021-BLG-0588, KMT-2021-BLG-1110, KMT-2021-BLG-1643, and
KMT-2021-BLG-1770. In Table <ref>, we list the equatorial coordinates,
(RA, DEC)_ J2000, of the individual events together with the corresponding Galactic
coordinates, (l, b), and I-band extinction, A_I, toward the field. Here the extinction
values were adopted from the OGLE Internet archive <cit.>.[ http://ftp.astrouw.edu.pl/ogle/ogle3/ext/blg/] The event KMT-2021-BLG-1770 was picked out
despite the fact that the estimated mass ratio between the lens components, q∼ 0.15, was
slightly greater than the adopted threshold mass ratio q_ th∼ 0.1, because the mass
of the lens expected from the short time scale of the event, t_E ∼ 7.6 days, was low, and
thus the probability for the mass of the companion to be in the BD mass regime was high.
For this reason, this event is not a part of the uniformly selected sample for future statistical
studies, although its analysis is presented in this work. For
the identified candidate events, we then checked whether the events were additionally observed
by other lensing surveys to include the data in the analyses if they exist. We found that
KMT-2021-BLG-0588 was additionally observed by the Microlensing Observations in Astrophysics
<cit.> group, who referred to the event as MOA-2021-BLG-139, and the other
events were observed solely by the KMTNet group. For KMT-2021-BLG-0588, we use the KMTNet ID
reference because the KMTNet group first found the event.
§ OBSERVATIONS AND DATA
The KMTNet group has carried out a high-cadence survey since 2016 by monitoring stars lying
toward the Galactic bulge field in search of light variation of stars caused by microlensing.
The survey group utilizes three wide-field telescopes, which are distributed in three sites of
the Southern Hemisphere for continuous and dense coverage of lensing events. The sites of the
individual telescopes are the Siding Spring Observatory in Australia (KMTA), the Cerro Tololo
Inter-American Observatory in Chile (KMTC), and the South African Astronomical Observatory in
South Africa (KMTS). The telescopes are identical and each telescope with a 1.6 m aperture is
equipped with a camera that yields 4 deg^2 field of view. KMTNet observations were mainly
conducted in the I band, which is relatively less affected by extinction, and about one tenth
of images were acquired in the V-band for the source color measurements of lensing events.
Photometry of the events was conducted using the automatized pySIS pipeline <cit.>,
which is based on the difference image method <cit.>. For the color
measurements of the source stars, we additionally used the pyDIA code <cit.> to
construct a set of the I and V-band light curves and color-magnitude diagrams (CMDs) of
stars that lie in the neighborhoods of the source stars. For the events analyzed in this work,
we conducted rereduction of the data to obtain optimized photometry data after the events were
selected as BD candidates. We normalized the error bars of the data to make them consistent
with the scatter of the data and to make χ^2 per degree of freedom (dof) equal to unity for each data set.
In the error-bar normalization process, we used the routine described in <cit.>.
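A schematic rendering of this normalization step is sketched below (a simple one-parameter rescaling; the cited routine may additionally add an error floor in quadrature, which is omitted here).

```python
import numpy as np

def normalize_error_bars(residuals, sigma, n_fit_params=0):
    """Rescale photometric error bars by a factor k so that chi^2/dof -> 1.

    A schematic one-parameter version; published prescriptions often also add
    a small error floor in quadrature before rescaling.
    """
    dof = len(residuals) - n_fit_params
    k = np.sqrt(np.sum((residuals / sigma) ** 2) / dof)
    return k * sigma

# toy demonstration with synthetic residuals whose scatter exceeds the nominal errors
rng = np.random.default_rng(0)
sigma = np.full(200, 0.01)
residuals = rng.normal(0.0, 0.02, size=200)
print(normalize_error_bars(residuals, sigma)[:3])
```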
Among the four analyzed events, the lensing event KMT-2021-BLG-0588 was additionally observed
by the MOA survey. The observations of the event by the MOA survey were done with the use of
the 1.8 m telescope of the Mt. John Observatory in New Zealand. The camera mounted on the
telescope yields 2.2 deg^2 field of view. The MOA observations were mostly conducted in
the customized MOA-R band, and the photometry was done using the MOA pipeline. Normalization
of the MOA data set was done using the same routine that was applied to the KMTNet data sets.
[The photometry data are available at the following site:
http://astroph.chungbuk.ac.kr/∼cheongho/download.html.]
§ ANALYSES
The events were analyzed under the common interpretation of the lens-system configuration that
the lenses are binaries because the light curves of all events exhibit caustic features that
arise due to the multiplicity of the lens masses. Under the assumption of a rectilinear relative
lens-source motion, the lensing light curve of a 2L1S event is described by 7 basic lensing
parameters. Among these parameters, the first three parameters (t_0, u_0, t_E) describe the
lens-source approach, and the individual parameters represent the time of the closest lens-source
approach, the lens-source separation at t_0, and the event time scale, respectively. Another
three parameters (s, q, α) describe the binarity of the lens, and the individual parameters
describe the projected separation (scaled to θ_E) and mass ratio between the lens components,
and the angle between the source trajectory and the axis connecting the binary lens components.
The last parameter ρ represents the ratio of the angular source radius θ_* to the
Einstein radius, ρ = θ_*/θ_E (normalized source radius), and it describes the
deformation of the light curve during the caustic crossings of a source caused by finite-source
effects.
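For bookkeeping, the seven basic 2L1S parameters listed above can be grouped as in the following sketch; the grouping and the example numbers are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Basic2L1SParameters:
    t0: float     # time of the closest lens-source approach (HJD')
    u0: float     # lens-source separation at t0, in units of theta_E
    tE: float     # event time scale [days]
    s: float      # projected binary separation, scaled to theta_E
    q: float      # companion-to-primary mass ratio
    alpha: float  # angle between source trajectory and binary axis [rad]
    rho: float    # normalized source radius theta_*/theta_E

# purely illustrative values; only s, q, tE and rho are loosely inspired by the
# wide solution of KMT-2021-BLG-0588 quoted later in the text
example = Basic2L1SParameters(t0=9354.0, u0=0.1, tE=39.0, s=1.17, q=0.10,
                              alpha=1.0, rho=7e-4)
print(example)
```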
A 2L1S lensing light curve can deviate from a standard form due to the departure of the relative
lens-source motion from rectilinear. The first cause of such a deviation is the microlens-parallax
effects, which is caused by the positional change of the observer by the orbital motion of Earth
around the sun <cit.>. The second cause is the lens-orbital effects, which is caused
by the change of the lens position by the orbital motion of the binary lens <cit.>.
These higher-order effects induce subtle deviations in the lensing light curve from the standard
form, and description of these deviations requires additional lensing parameters in modeling. We
checked these higher-order effects by conducting additional modeling, in which additional parameters
were added in the modeling. The two parameters describing the parallax effect are (, ),
which represent the north and east components of the microlens-lens parallax vector _ E
= (π_ rel/ )(/μ), respectively. Under the assumption that the positional
change of the lens by the orbital motion is minor, the lens-orbital effect is described by two
parameters (ds/dt, dα/dt), which denote the annual change rate of the binary separation
and source trajectory angle, respectively. It was found that secure detections of the higher-order
effects were difficult for KMT-2021-BLG-0588, KMT-2021-BLG-1110, and KMT-2021-BLG-1770, for which
the event time scales are less than 40 days. For KMT-2021-BLG-1643 with t_E ∼ 105 days, the
higher-order effects are minor, but the amplitude of the parallax parameters yielded a useful
constraint on the physical lens parameters. See Sect. <ref> for the detailed discussion
on the parallax constraint.
In the 2L1S modeling, we searched for a lensing solution, which refers to a set of the lensing
parameters that best depict the observed lensing light curve. In the first round of modeling,
we divided the lensing parameters into two groups, and found the binary parameters (s, q) of
the first group via a grid approach with multiple initial values of α, and the other lensing
parameters of the second group were searched for by minimizing χ^2 using the Markov Chain Monte
Carlo (MCMC) method with an adaptive step size Gaussian sampler <cit.>. In the second
round, we refined the local solutions identified from the first-round modeling by further reducing
χ^2 value using the MCMC method. We adopt this two-step approach because the change of the
lensing magnification with the variation of the grid parameters is discontinuous, while the
magnification changes smoothly with the variation of the downhill parameters. Furthermore, the
Δχ^2 map obtained from the first-round grid search enables us to identify local solutions
that are caused by various types of degeneracy. We consider the limb-darkening variation of the
source surface brightness in the computation of finite magnifications by adopting the linear
limb-darkening coefficients of <cit.> corresponding to the stellar type of the source
stars. In the following subsections, we present the detailed analyses conducted for the individual
events.
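The two-step strategy (a grid over (s, q) with several seed values of α, followed by an MCMC-style refinement) can be sketched as below. The χ² function is a smooth stand-in, since the actual 2L1S magnification computation is not reproduced here.

```python
import numpy as np

def toy_chi2(p):
    """Stand-in for the 2L1S light-curve chi^2 (the magnification computation is
    not reproduced here); a smooth function with a minimum near s=1.2, q=0.1."""
    return ((p["s"] - 1.2) ** 2 / 1e-2 + (p["q"] - 0.1) ** 2 / 1e-4
            + (p["alpha"] - 1.0) ** 2 / 4e-2)

def grid_then_mcmc(chi2, s_grid, q_grid, alpha_seeds, n_steps=2000, seed=1):
    # Step 1: coarse grid over (s, q) with several seed values of alpha.
    best = min((dict(s=s, q=q, alpha=a) for s in s_grid for q in q_grid
                for a in alpha_seeds), key=chi2)
    # Step 2: Metropolis-style refinement of (s, q, alpha) around the best node.
    rng = np.random.default_rng(seed)
    cur, chi2_cur = best, chi2(best)
    for _ in range(n_steps):
        trial = {k: v + rng.normal(0.0, 0.01 * abs(v) + 1e-4) for k, v in cur.items()}
        chi2_try = chi2(trial)
        if chi2_try < chi2_cur or rng.random() < np.exp(-(chi2_try - chi2_cur) / 2):
            cur, chi2_cur = trial, chi2_try
    return cur, chi2_cur

s_grid = np.geomspace(0.3, 3.0, 15)
q_grid = np.geomspace(1e-3, 1.0, 15)
print(grid_then_mcmc(toy_chi2, s_grid, q_grid, alpha_seeds=np.linspace(0.0, 3.1, 8)))
```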
§.§ KMT-2021-BLG-0588
Figure <ref> shows the lensing light curve of the event KMT-2021-BLG-0588. The source
with an I-band baseline magnitude I_ base∼ 19.11 was in the KMT32 field, toward
which observations were conducted with a 2.5 hr cadence. The source flux magnification induced
by lensing was first found by the KMTNet group on 2021 April 26, which corresponds to the abridged
heliocentric Julian date HJD' ≡ HJD - 2450000 = 9331, when the source was brighter than the
baseline by Δ I ∼ 0.46 mag. The light curve exhibited a strong anomaly, which peaked at
HJD' ∼ 9354.25 with a strong deviation of Δ I ∼ 3 mag from the baseline 1L1S model.
The MOA group independently found the event on 2021 May 22 (HJD' = 9357), which was about 3 days
after the strong peak. The zoom-in view of the strong peak, which was covered by the combination
of the MOA and KMTA data sets, is shown in the top panel of Figure <ref>. From the sharp
rise and fall, the strong peak is likely to be produced by the source star's crossing over the tip
of a caustic formed by a binary lens.
In Table <ref>, we list the lensing parameters of the solutions found from the 2L1S
modeling of the light curve together with the χ^2 values of the fits and degrees of freedom
(dof). We identified a pair of local solutions, in which one solution has a binary separation
s < 1 (close solution) and the other solution has a separation s > 1 (wide solution). Although
the solutions are designated as the "close" and "wide" solutions, we note that the similarity
between the model curves of the two solutions is caused by an accidental degeneracy rather than
the well-known close–wide degeneracy, which arises due to the similarity between the central
caustics induced by a pair of solutions with separations s and 1/s <cit.>. We further discuss the cause of the degeneracy in the following paragraph.
It is found that the wide solution with s∼ 1.17 yields a better fit than the close solution
with s∼ 0.77 by Δχ^2=71.8, and thus the degeneracy is resolved with strong
statistical confidence.
In Figure <ref>, we draw the model curve of the wide solution in the bottom panel, which
shows the whole view of the light curve, and plot the models curves and residuals of both the
close and wide solutions in the upper panels, which show the zoom-in view of the region around
the strong peak. According to the wide solution, the estimated event time scale and the mass
ratio between the lens components are ∼ 39 days and q∼ 0.10, respectively. From
the fact that the time scale is in the range of events produced by stellar lenses together with
the fact that the mass ratio is low, the probability of the binary lens companion being a BD is
high. The normalized source radius, ρ∼ 0.7× 10^-3, was securely measured from the
analysis of the strong peak, which was affected by finite-source effects
The lens-system configurations of the close and wide solutions are presented in the two insets
of the bottom panel of Figure <ref>. According to the wide solution, the binary lens
forms a single six-sided resonant caustic, and the strong peak was produced by the source
passage through the tip of the lower left cusp of the caustic. According to the close solution,
on the other hand, the lens induces 3 sets of caustics, in which a single central caustic around
the primary lens is detached from the two peripheral caustics, and the strong peak was generated
by the source crossing over the slim cusp extending from the lower left cusp of the central
caustic. The two sets of caustics of the close and wide solutions do not appear to be similar
to each other, and this suggests that the degeneracy between the two solutions is accidental.
§.§ KMT-2021-BLG-1110
We present the light curve of the lensing event KMT-2021-BLG-1110 in Figure <ref>. The
lensing magnification of the source, which had a baseline magnitude I_ base∼ 19.52
before lensing, was found by the KMTNet group on 2021 June 2 (HJD' = 9367), when the source was
brighter than the baseline by Δ I∼ 0.5 mag. The source lies in the overlapping region
of the KMTNet prime fields BLG01 and BLG41, toward which observations were done with a 0.5 hr
cadence for each field, and a 0.25 hr cadence in combination. The light curve is characterized
by the double spikes appearing at t_1∼ 9370.85 and t_2∼ 9371.56. The rising and falling
sides of both spikes were densely and continuously resolved from the high-cadence observations
conducted with the use of the three KMTNet telescopes. The first spike was resolved by the KMTC
data, and the second one was covered by the combined data from KMTS and KMTC.
The spike features are very likely to be produced by the caustic crossings of the source,
and thus we conducted modeling the light curve under the 2L1S interpretation. The modeling
yielded two local solutions: one with s<1 (close solution) and the other with s>1 (wide
solution). It is found that the wide solution is preferred over the close solution by
Δχ^2 =33.8, which is large enough to resolve the degeneracy between the solutions.
The model curve of the wide solution is drawn in the bottom panel of Figure <ref>, and
the model curves and residuals of both the close and wide solutions in the region around the
two peaks are presented in the upper panels. The similarity between the models of the two
solutions is caused by the classic close–wide degeneracy. The lensing parameters of the
solutions are listed in Table <ref> together with the values of χ^2/dof. The
binary lensing parameters are (s, q)_ close∼ (0.44, 0.07) for the close solution,
and (s, q)_ wide∼ (2.43, 0.07) for the wide solution. From the fact that the estimated
mass ratio q∼ 0.07 between the lens components is low together with the fact that the event
time scale ∼ 27–29 days is a typical value of a stellar lens event, the companion of
the lens is a strong BD candidate. The normalized source radius, ρ∼ 0.79× 10^-3
for the wide solution, is precisely measured from the well-resolved spike features.
In the two insets of the bottom panels of Figure <ref>, we present the lens-system
configurations of the close and wide solutions. Both solutions result in central caustics of
similar shape, in which the caustic is elongated along the binary-lens axis. The source passed
through the back-end side of the caustic at an acute source trajectory angle of ∼ 69^∘
with respect to the binary axis. According to the model, the two spikes were produced by the
successive passages of the source through the on-axis cusp and upper off-axis cusp of the caustic.
§.§ KMT-2021-BLG-1643
The lensing light curve of KMT-2021-BLG-1643 is presented in Figure <ref>. The event
was found in its early stage by the KMTNet survey on 2021 June 8 (HJD' = 9374), at which the
source was brighter than the baseline magnitude I_ base=18.91 by Δ I∼ 1.2 mag.
The source lies in the KMTNet BLG04 field, toward which the event was monitored with a 1 hr
cadence. The event exhibited a pair of caustic spikes, which occurred at HJD' ∼ 9401.1 and
9403.4, and a weak bump, which was centered at HJD' ∼ 9409. The region between the two
caustic spikes exhibited a characteristic U-shape pattern, indicating that the spikes occurred
when the source entered and exited a caustic. The first caustic spike was not resolved because
the sky at the KMTA site was clouded out, but the second caustic was partially covered by the two
KMTS and one KMTC data points.
From the 2L1S modeling of the light curve, we found a pair of solutions resulting from the
close–wide degeneracy. The binary lensing parameters are (s, q)_ close∼ (0.69, 0.08)
and (s, q)_ wide∼ (1.52, 0.08) for the close and wide solutions, respectively. We
list the full lensing parameters of the two solutions in Table <ref>, and the model
curves and residuals are presented in Figure <ref>. From the comparison of the fits,
it is found that the wide solution is preferred over the close solution by Δχ^2=38.3,
indicating that the degeneracy is lifted with a fairly strong confidence level. Despite the fact
that the caustic exit was partially covered by only a small number data points, the normalized
source radius, ρ∼ 0.3× 10^-3, could be constrained.
The measured event time scale, t_E ∼ 105 days, comprises a significant fraction
of a year, and thus it may be possible to constrain microlens-parallax parameters. We conducted
an additional modeling considering the higher-order effects. Figure <ref> shows the
scatter plot of points in the MCMC chain on the – parameter plane. It was found
that the improvement of model fit with the inclusion of the higher-order effects is very minor,
but the amplitude of the scatter plot provided a constraint on the physical lens parameters.
We present the configurations of the close and wide lens systems in the two insets of the
bottom panel of Figure <ref>. Similar to the case of KMT-2021-BLG-1110, the source
passed the back-end side of the caustic. The spike features were produced by the source passage
through the lower left cusp of the caustic, and the weak bump was generated by the source approach
close to the left-side on-axis cusp of the caustic.
§.§ KMT-2021-BLG-1770
Figure <ref> shows the light curve of the lensing event KMT-2021-BLG-1770. The event
was found by the KMTNet group on 2021 July 16 (HJD' ∼ 9406). The source, which had a
baseline magnitude I_ base=19.06, was in the KMTNet prime field BLG03, for which images
were taken with a 0.5 hr cadence. Most of this field overlaps with the region covered by
the BLG43 field, but the event lies in the offset region that was not covered by the BLG43 field.
In our analysis, we do not use the KMTA data set due to its low photometric quality. Similar to
the event KMT-2021-BLG-1643, the light curve of KMT-2021-BLG-1770 is characterized by a pair of
caustic spikes and a following weak bump. The first caustic spike, which occurred at
HJD' = 9412.2, was not covered, but the second spike, which occurred at HJD' = 9412.4, and
the U-shape region between the two spikes were resolved by the combination of the KMTS and KMTC
data sets. The weak bump is centered at HJD' ∼ 9414, which was about 2 days after the caustic
spikes.
From the analyses of the light curve, we identified two local solutions, in which one solution
has a binary separation s<1 (close solution) and the other has a separation s>1 (wide solution).
The model curves of the solutions are drawn over the data points and residuals from the models are
shown in Figure <ref>. The binary lensing parameters of the individual solutions are
(s, q)_ close∼ (0.81, 0.15) and (s, q)_ wide∼ (1.14, 0.19). As stated, the
event was chosen as a BD candidate despite the fact that the mass ratio between the lens components
is slightly greater than the threshold mass ratio q_ th=0.1, because the event time scale,
t_E ∼ 7 days, is substantially shorter than the several-week time scales of typical lensing events.
The normalized source radius, ρ∼ (6-7)× 10^-3, was measured from analyzing the
caustic-exit part of the light curve.
The lens-system configurations of the close and wide solutions are presented in the two insets
of the bottom panel of Figure <ref>. It is found that the configurations of the close
and wide solutions are very similar to those of the corresponding solutions of KMT-2021-BLG-0588.
That is, the caustic spikes were generated by the passage of the source through the slim bridge
part connecting the central and peripheral caustics according to the close solution, and by the
source pass through the tip of the lower left cusp of the six-sided resonant caustic according
to the wide solution. The difference between the solutions of the two events is that the close
solution is preferred over the wide solution by Δχ^2=8.9 in the case of KMT-2021-BLG-1770,
while the wide solution yields a better fit than the close solution in the case of KMT-2021-BLG-0588.
For the same reason mentioned in Sect. <ref>, the similarity between the model curves
of the close and wide solutions is caused by an accidental degeneracy rather than a close–wide
degeneracy.
§ SOURCE STARS AND EINSTEIN RADII
In this section, we specify the source stars of the events. Specifying the source star of a
caustic-crossing 2L1S event is important to estimate the angular Einstein radius from the relation
θ_E = θ_*/ρ,
where the normalized source radius ρ is measured by analyzing the caustic-crossing parts of
the light curve, and the angular source radius θ_* can be deduced from the source type.
We specified the source stars of the individual events by measuring their de-reddened colors and
magnitudes. To estimate the de-reddended color and magnitude, (V-I, I)_0, from the instrumental
values, (V-I, I)_s, we applied the <cit.> method, in which the centroid of red giant
clump (RGC) is used as a reference for the calibration. Following the routine procedure of the
method, we first estimated instrumental I and V-band magnitudes of the source by regressing
the photometry data of the individual passbands processed using the pyDIA code, and placed the
source in the instrumental CMD of stars around the source constructed using the same pyDIA code.
We then measured the offsets in color and magnitude, Δ (V-I, I), of the source from the
RGC centroid, and estimated de-reddened color and magnitude as
(V - I, I)_s,0 = (V - I, I)_ RGC,0 + Δ (V - I, I),
where (V - I, I)_ RGC,0 are the de-reddened color and magnitude of the RGC centroid
known from <cit.> and <cit.>, respectively.
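A minimal sketch of this offset-based calibration is given below; all numbers are placeholders, not the values listed in the paper's tables.

```python
def dereddened_source_color_mag(vi_s, i_s, vi_rgc, i_rgc, vi_rgc0, i_rgc0):
    """(V-I, I)_{s,0} = (V-I, I)_{RGC,0} + Delta(V-I, I), with the offsets taken
    from the instrumental CMD."""
    return vi_rgc0 + (vi_s - vi_rgc), i_rgc0 + (i_s - i_rgc)

# all numbers below are placeholders, not values from the paper's tables
print(dereddened_source_color_mag(vi_s=2.30, i_s=19.10,
                                  vi_rgc=2.45, i_rgc=16.00,
                                  vi_rgc0=1.06, i_rgc0=14.40))
```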
Figure <ref> shows the positions of the source (blue dot) and RGC centroid (red dot)
in the instrumental CMDs of the individual events. In Table <ref>, we list the
values of (V-I, I)_s, (V-I, I)_ RGC, (V-I, I)_ RGC,0, and (V-I, I)_s,0
estimated from the procedure described in the previous paragraph. According to the estimated
colors and magnitudes, the spectral types of the source stars are G0V, G9V, K3V, and G9V for
KMT-2021-BLG-0588, KMT-2021-BLG-1110, KMT-2021-BLG-1643, and KMT-2021-BLG-1770, respectively.
With the measured source color and magnitude, we estimated the angular radius of source star
by first converting V-I color into V-K color using the <cit.> relation, and
then by deducing θ_* from the <cit.> relation between (V-K, V) and
θ_*. With the measured source radii, the angular Einstein radii were estimated using
the relation in Equation (<ref>). We list the estimated values of θ_* and θ_E
for the individual events in the bottom two lines of Table <ref>.
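Combining the measured ρ and t_E with θ_* then gives θ_E and the relative proper motion μ = θ_E/t_E, as in the sketch below; the helper returning θ_* is a hypothetical stand-in, since the surface-brightness calibration coefficients are not reproduced here.

```python
def theta_star_from_color(v_mag, v_minus_k):
    """Hypothetical stand-in for the (V, V-K) surface-brightness relation; the
    actual calibration coefficients are not reproduced here."""
    return 0.6  # assumed angular source radius in micro-arcsec, for illustration

def einstein_radius_and_proper_motion(theta_star_uas, rho, t_E_days):
    theta_E_mas = (theta_star_uas / 1e3) / rho         # theta_E = theta_* / rho
    mu_mas_per_yr = theta_E_mas / (t_E_days / 365.25)  # mu = theta_E / t_E
    return theta_E_mas, mu_mas_per_yr

# with rho ~ 7e-4 and t_E ~ 39 d (values quoted for KMT-2021-BLG-0588) and an
# assumed theta_* of 0.6 micro-arcsec, this gives theta_E ~ 0.9 mas, mu ~ 8 mas/yr
print(einstein_radius_and_proper_motion(theta_star_uas=0.6, rho=7e-4, t_E_days=39.0))
```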
Also marked in Figure <ref> are the positions of the blend (green dots) in the CMDs
of the individual events. We list the measured values of the color and magnitude of the blend,
(V-I, I)_b, in Table <ref>. Besides KMT-2021-BLG-0588, for which the blended light
is similar to the flux of the source, it is found that the blended fluxes are substantially
greater than the source fluxes. In order to check the possibility that the lens is the main
origin of the blended flux, we measured the astrometric offset δθ between the
centroid of the source measured at the peak time of the lensing magnification and that measured
at the baseline. If the lens were the main origin of the blended flux, the offset would be very
small because the relative lens-source proper motions are < 10 mas/yr for all events. In the
case that the origin of the blended flux is a nearby star, which is typically separated from
the source by an order of 100 mas, the resulting astrometric offset would be substantially
greater than the typical astrometric precision of order 10 mas. In Table <ref>, we
list the measured centroid offsets of the individual events. For all events, it is found that
the astrometric offsets are much greater than the measurement precision, and this indicates
that the origins of the blended light are nearby stars rather than the lenses.
§ PHYSICAL LENS PARAMETERS
The mass M and distance D_L to the lens can be constrained by measuring the lensing observables
t_E, θ_E, and π_E. The event time scale t_E is the basic observable that is measurable
for general lensing events, and the angular Einstein radius θ_E is another observable that is measurable
for events with light curves affected by finite-source effects. These two observables are related
to the physical lens parameters by the relations in Equation (<ref>). With the measurement of
the extra observable π_E, the physical lens parameters would be uniquely determined from the
relations in Equation (<ref>). For the analyzed events, the observables t_E and θ_E were
measured, but π_E was not securely measured for any of the events. Without the constraint of π_E,
we estimated the physical lens parameters by conducting Bayesian analyses of the events using models
of physical and dynamical distributions and mass function of objects in our Galaxy together with the
constraints provided by the measured blended flux.
In the first step of the Bayesian analysis, we conducted a Monte Carlo simulation to generate
a large number of artificial lensing events. For each artificial event, the distances of the
lens and source and their relative proper motion were assigned using a Galactic model, and the
mass of the lens was assigned using a model mass function. In the simulation, we adopted the
Galactic model of <cit.> and the mass function model of <cit.>. In the mass
function, we included white-dwarf remnants but excluded black holes and neutron stars. In the
second step, we computed the lensing observables (t_E,i, θ_E,i) corresponding
to the assigned values (M, D_L, D_S, μ) of each artificial event using the relations in
Equation (<ref>). In the final step, we constructed Bayesian posteriors of the lens mass and
distance by imposing a weight w_i=exp(-χ^2/2) on each event. Here the χ^2 value was
calculated as
χ_i^2 = [(t_E,i - t_E)/σ(t_E)]^2 + [(θ_E,i - θ_E)/σ(θ_E)]^2,
where [t_E, σ(t_E)] and [θ_E, σ(θ_E)] represent the measured values and
uncertainties of the observables t_E and θ_E, respectively. For the event KMT-2021-BLG-1643
with a long event time scale, we imposed the π_E constraint by including an additional term
∑_j=1^2 ∑_k=1^2 b_j,k (π_E,j,i - π_E,j)(π_E,k,i - π_E,k) to the right side of Eq. (<ref>).
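A compressed sketch of this weighting scheme is given below; the mass-function and Galactic-model draws are crude stand-ins used only to illustrate how w_i = exp(-χ_i²/2) is applied.

```python
import numpy as np

KAPPA = 8.144  # mas per solar mass

def bayesian_lens_mass(tE_obs, sig_tE, thetaE_obs, sig_thetaE, n=200_000, seed=2):
    """Weight mock events by w_i = exp(-chi_i^2/2). The draws below are crude
    stand-ins for the adopted Galactic model and mass function, for illustration."""
    rng = np.random.default_rng(seed)
    M = 10 ** rng.uniform(-2.0, 0.2, n)                      # lens mass [M_sun]
    D_L = rng.uniform(0.5, 8.0, n)                           # lens distance [kpc]
    D_S = 8.5                                                # source distance [kpc], fixed here
    mu = np.clip(np.abs(rng.normal(6.0, 3.0, n)), 0.1, None)  # proper motion [mas/yr]
    pi_rel = 1.0 / D_L - 1.0 / D_S                           # [mas]
    theta_E = np.sqrt(KAPPA * M * pi_rel)                    # [mas]
    t_E = theta_E / mu * 365.25                              # [days]
    chi2 = ((t_E - tE_obs) / sig_tE) ** 2 + ((theta_E - thetaE_obs) / sig_thetaE) ** 2
    w = np.exp(-0.5 * chi2)
    order = np.argsort(M)
    cdf = np.cumsum(w[order]) / np.sum(w)
    return M[order][np.searchsorted(cdf, 0.5)]  # weighted median of the mass posterior

print(bayesian_lens_mass(tE_obs=39.0, sig_tE=2.0, thetaE_obs=0.90, sig_thetaE=0.08))
```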
Besides the constraints from the lensing observables, we additionally imposed the blending
constraint in the Bayesian analyses. This constraint is provided by the fact that the flux
from the lens comprises a portion of the total blending flux, and thus the lens flux should
be less than the total blending flux. For the imposition of this constraint, we calculated
the lens brightness as
I_L = M_I,L + 5 log(D_L/ pc) - 5 + A_I,L,
where M_I,L denotes the absolute I-band magnitude corresponding to the lens mass, and
A_I,L is the extinction to the lens lying at a distance D_L. The extinction was modeled as
A_I,L = A_I, tot[ 1-exp( -|z|/h_z, dust)],
where A_I, tot denotes the total extinction toward the field, h_z, dust =
100 pc is the adopted vertical scale height of dust, and z = D_L sin b + z_0 and z_0 = 15 pc
represent the vertical positions of the lens and the Sun above the Galactic plane, respectively.
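The lens-brightness calculation entering the blending constraint can be sketched as follows (placeholder inputs; the constraint simply requires the lens not to outshine the measured blend).

```python
import numpy as np

def lens_apparent_I_mag(M_I_lens, D_L_pc, b_deg, A_I_tot,
                        h_z_dust_pc=100.0, z0_pc=15.0):
    """I_L = M_I,L + 5 log10(D_L/pc) - 5 + A_I,L, with the extinction to the lens
    modelled by the exponential dust layer written above."""
    z = D_L_pc * np.sin(np.radians(b_deg)) + z0_pc
    A_I_lens = A_I_tot * (1.0 - np.exp(-abs(z) / h_z_dust_pc))
    return M_I_lens + 5.0 * np.log10(D_L_pc) - 5.0 + A_I_lens

# the blending constraint then requires I_L >= I_blend (all numbers are placeholders)
I_blend = 18.0
I_L = lens_apparent_I_mag(M_I_lens=5.5, D_L_pc=4000.0, b_deg=-2.5, A_I_tot=1.5)
print(I_L, I_L >= I_blend)
```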
The values A_I, tot for the individual events are listed in Table <ref>. It
was found that the blending constraint had important effects on the determined physical parameters
of the events KMT-2021-BLG-0588 and KMT-2021-BLG-1643, for which the lenses are expected to be
located relatively close to the Sun based on their large Einstein radii. Below we discuss
this issue in more detail.
In Figures <ref> and <ref>, we present the Bayesian posteriors of the mass of
the binary lens companion and distance to the lens system, respectively. The estimated values of
the primary (M_1) and companion (M_2) masses, distance, and projected separation between the
lens components (a_⊥ = s D_Lθ_E) are listed in Table <ref>. For each parameter,
the median value was adopted as a representative value and the upper and lower ranges of the
uncertainty were chosen as the 16% and 84% of the posterior distribution, respectively.
According to the estimated masses, it is found that the masses of the lens companions are well
within the BD mass range 0.012<M_2/M_⊙≤ 0.076 (or 13<M_2/M_ J≤ 80), although
there is some variation of the primary masses, which lie in the mass range of main-sequence stars
with spectral types from K to M. In Table <ref>, we list the probabilities for the
companions of the individual lenses being in the BD mass range, P_ BD. It is found that
the probabilities are greater than 59% in all cases of the events.
For KMT-2021-BLG-1770L, the mass of the primary is so small that it can be a BD as
well with a probability of P_ BD∼ 35%.
In this case, the lens is a BD binary like
OGLE-2009-BLG-151L, OGLE-2011-BLG-0420L <cit.>,
OGLE-2016-BLG-1266L <cit.>,
OGLE-2016-BLG-1469L <cit.>,
MOA-2016-BLG-231L <cit.>, and
OGLE-2017-BLG-1038L <cit.>.
In Table <ref>, we list the probabilities of the lenses being in the disk, P_ disk,
and bulge, P_ bulge. For the events KMT-2021-BLG-0588 and KMT-2021-BLG-1643, it is very likely
that the lenses lie in the disk, while the lens of KMT-2021-BLG-1770 is likely to lie in the bulge.
For KMT-2021-BLG-1110, on the other hand, the disk and bulge probabilities are approximately the
same. It is found that the constraint on the lens location comes mainly from the estimated radius
of the Einstein ring. For the events KMT-2021-BLG-0588 and KMT-2021-BLG-1643, the respective
Einstein radii are θ_E ∼ 0.90 mas and ∼ 1.08 mas, which are approximately two times
larger than the typical Einstein radius of θ_E ∼ 0.5 mas for an event produced by a low-mass
stellar lens with a mass M ∼ 0.3 M_⊙ lying about halfway between the Sun and a bulge
source. By contrast, the Einstein radius θ_E ∼ 0.16 mas of KMT-2021-BLG-1770 is
substantially smaller than the typical value, and thus P_ bulge is substantially higher
than P_ disk. The Einstein radius θ_E ∼ 0.58 mas of KMT-2021-BLG-1110 is close
to the typical value, and thus P_ disk and P_ bulge are approximately the same.
In the posterior distributions presented in Figures <ref> and <ref>, we mark
the contributions of the disk and bulge lens populations by blue and red curves, respectively.
§ SUMMARY AND DISCUSSION
Following the works in papers I and II, we reported the BD companions in binary lenses found
from the inspection of the microlensing data collected in the 2021 season by the high-cadence
surveys, including KMT-2021-BLG-0588LB, KMT-2021-BLG-1110LB, KMT-2021-BLG-1643LB, and
KMT-2021-BLG-1770LB. Modeling the light curve of each event yielded a pair of solutions with
projected separations smaller and greater than the Einstein radius, but the degeneracy between
the solutions was resolved with a strong confidence level except for KMT-2021-BLG-1770, for which
the resolution of the degeneracy was less clear than the others. From the Bayesian analyses
conducted with the constraints provided by the observables of the event time scale and Einstein
radius together with the constraint from the blended light, it was estimated that the masses of
the primary and companion of the individual events are
(M_1/M_⊙, M_2/M_⊙)=
(0.54^+0.31_-0.24, 0.053^+0.031_-0.023) for KMT-2021-BLG-0588L,
(0.74^+0.27_-0.35, 0.055^+0.020_-0.026) for KMT-2021-BLG-1110L,
(0.73^+0.24_-0.17, 0.061^+0.020_-0.014) for KMT-2021-BLG-1643L, and
(0.13^+0.18_-0.07, 0.020^+0.028_-0.011) for KMT-2021-BLG-1770L.
The estimated masses of the binary companions were well within the BD mass range, although there
was some variation of the primary masses, which were in the mass range of main-sequence stars
with spectral types from K to M. The probabilities of the lens companions being in the BD mass
range were estimated as 82%, 85%, 91%, and 59% for the individual events.
The BD nature of the lens companions presented in this work and papers I and II can be
confirmed by directly imaging the lenses from future high-resolution adaptive-optics (AO)
followup observations when the lenses are separated from the source stars <cit.>.
For these followup observations, we compute the lens-source separations Δθ_2030
expected in 2030, which is an approximate year of the first AO light on 30 m class telescopes.
In Table <ref>, we list the relative lens-source proper motions, expected lens-source
separations, and K-band source magnitudes of the BD events reported in this work and papers I
and II. The K-band source magnitude was estimated as K = I_s,0 + (V-I)_0 - (V-K)_0 + A_I/7,
and the separation is estimated as Δθ_2030 =μΔ t, where the relative
lens-source proper motion is computed by μ = θ_E/t_E and Δ t indicates the time
gap between the peak of the event and the year 2030. We note that Δθ_2030 of the
event OGLE-2017-BLG-0614 is not listed because the Einstein radius and the resulting proper
motion could not be measured, and only the lower limits are listed for KMT-2018-BLG-0321 and
KMT-2018-BLG-0885 because only the lower limits of θ_E were constrained for these events.
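The quantities tabulated for the AO follow-up can be reproduced schematically as below; all input numbers are placeholders.

```python
def ao_followup_estimates(theta_E_mas, t_E_days, t_peak_year,
                          I_s0, VI_0, VK_0, A_I):
    """Expected lens-source separation in 2030 and approximate K-band source
    magnitude, following the relations quoted above (placeholder inputs)."""
    mu_mas_per_yr = theta_E_mas / (t_E_days / 365.25)  # mu = theta_E / t_E
    delta_theta_2030_mas = mu_mas_per_yr * (2030.0 - t_peak_year)
    K = I_s0 + VI_0 - VK_0 + A_I / 7.0
    return delta_theta_2030_mas, K

print(ao_followup_estimates(theta_E_mas=0.90, t_E_days=39.0, t_peak_year=2021.4,
                            I_s0=17.5, VI_0=0.80, VK_0=1.90, A_I=1.5))
```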
From the table, one finds that the separations are greater than 30 mas for all events with
measured proper motions, and except for the two events KMT-2019-BLG-0335 and KMT-2021-BLG-1643,
the separations are greater than ∼ 50 mas, which will be adequate for the clear resolution
of the lens from the source. By comparing the relative lens-source proper motion estimated from
the model with the value measured from followup AO observations, one can confirm the solution.
Furthermore, from the stellar type of the primary lens, which comprises most of the flux from the
lens, the approximate mass of the lens can be estimated. This together with the estimated mass
ratio enables one to confirm the BD nature of the lens companion. We note that this test of
the presented solutions will be most useful for events for which the relative proper motion is
measured with a relative accuracy better than 10%.
Work by C.H. was supported by the grants of National Research Foundation of Korea
(2019R1A2C2085965).
This research has made use of the KMTNet system operated by the Korea Astronomy and Space
Science Institute (KASI) at three host sites of CTIO in Chile, SAAO in South Africa, and
SSO in Australia. Data transfer from the host site to KASI was supported by the Korea Research
Environment Open NETwork (KREONET).
This research was supported by the Korea Astronomy and Space Science Institute under the R&D
program (Project No. 2023-1-832-03) supervised by the Ministry of Science and ICT.
The MOA project is supported by JSPS KAKENHI Grant Number JP24253004, JP26247023, JP23340064,
JP15H00781, JP16H06287, JP17H02871 and JP22H00153.
J.C.Y., I.G.S., and S.J.C. acknowledge support from NSF Grant No. AST-2108414.
Y.S. acknowledges support from BSF Grant No 2020740.
[Alard & Lupton(1998)]Alard1998 Alard, C., & Lupton, R. H. 1998, , 503, 325
[Albrow et al.(2009)]Albrow2009 Albrow, M., Horne, K., Bramich, D. M., et al. 2009, , 397, 2099
[Albrow(2017)]Albrow2017 Albrow, M. 2017, MichaelDAlbrow/pyDIA: Initial Release on Github,Versionv1.0.0, Zenodo, doi:10.5281/zenodo.268049
[Albrow et al.(2018)]Albrow2018 Albrow, M. D., Yee, J. C., Udalski, A., et al. 2018, , 858, 107
[An(2005)]An2005 An, J. H. 2005, , 356, 1409
[Bensby et al.(2013)]Bensby2013 Bensby, T. Yee, J.C., Feltzing, S. et al. 2013, , 549, A147
[Bessell & Brett(1988)]Bessell1988 Bessell, M. S., & Brett, J. M. 1988, , 100, 1134
[Bond et al.(2001)]Bond2001 Bond, I. A., Abe, F., Dodd, R. J., et al. 2001, , 327, 868
[Bond et al.(2004)]Bond2004 Bond, I. A., Udalski, A., Jaroszyński, M., et al. 2004, , 606, L155
[Choi et al.(2013)]Choi2013 Choi, J. -Y., Han, C., Udalski, A., et al. 2013, , 768, 129
[Chung et al.(2017)]Chung2017 Chung, S. -J., Zhu, W., Udalski, A., et al. 2017, , 838, 154
[Chung et al.(2019)]Chung2019 Chung, S.-J., Gould, A., Skowron, J., et al. 2019, , 871, 179
[Claret(2000)]Claret2000 Claret, A. 2000, , 363, 1081
[Dominik(1998)]Dominik1998 Dominik, M. 1998, , 329, 361
[Dominik(1999)]Dominik1999 Dominik, M. 1999, , 349, 108
[Doran & Mueller(2004)]Doran2004 Doran, M., & Mueller, C. M. 2004, JCAP, 09, 003
[Gould(1992)]Gould1992a Gould, A. 1992, , 392, 442
[Gould & Loeb(1992)]Gould1992b Gould, A., & Loeb, A, 1992, , 396, 104
[Gould(2000)]Gould2000 Gould, A. 2000, , 542, 785
[Gould et al.(2009)]Gould2009 Gould, A., Udalski, A., Monard, B., et al. 2009, , 698, L147
[Gould et al.(2022a)]Gould2022a Gould, A., Han, C., Zang, W., et al. 2022a, , 664, A13
[Gould et al.(2022b)]Gould2022b Gould, A., Jung, Y. K., Hwang, K.-H., et al. 2022b, JKAS, 55, 173
[Griest & Safizadeh(1998)]Griest1998 Griest, K., & Safizadeh, N. 1998, , 500, 37
[Han & Gould(2003)]Han2003 Han, C., & Gould, A. 2003, , 592, 172
[Han et al.(2017)]Han2017 Han, C., Udalski, A., Sumi, T., et al. 2017, , 843, 59
[Han et al.(2020)]Han2020 Han, C., Lee, C.-U., Udalski, A., et al. 2020, , 159, 134
[Han et al.(2022)]Han2022 Han, C., Ryu, Y.-H., Shin, I.-G., et al. 2022, , 667, A64
[Han et al.(2023)]Han2023 Han, C., Jung Y. K., Kim, D., et al. 2023, , 675, A71
[Jung et al.(2018)]Jung2018 Jung, Y. K., Udalski, A., Gould, A., et al. 2018, , 155, 219
[Jung et al.(2021)]Jung2021 Jung, Y. K., Han, C., Udalski, A., et al. 2021, , 161, 293
[Kervella et al.(2004)]Kervella2004 Kervella, P., Thévenin, F., Di Folco, E., & Ségransan, D. 2004, , 426, 29
[Kim et al.(2016)]Kim2016 Kim, S.-L., Lee, C.-U., Park, B.-G., et al. 2016, JKAS, 49, 37
[Koshimoto et al.(2023)]Koshimoto2023 Koshimoto, N., Sumi, T., Bennett, D. P., et al. 2023, arXiv:2303.08279
[Malpas et al.(2022)]Malpas2022 Malpas, A., Albrow, M. D., Yee, J. C., et al. 2022, , 164, 102
[Mao & Paczyński(1991)]Mao1991 Mao, S., & Paczyński, B. 1991, , 374, L37
[Nataf et al.(2013)]Nataf2013 Nataf, D. M., Gould, A., Fouqué, P. et al. 2013, , 769, 88
[Shvartzvald et al.(2019)]Shvartzvald2019 Shvartzvald, Y., Yee, J. C., Skowron, J., et al. 2019, , 157, 106
[Tomaney & Crotts(1996)]Tomaney1996 Tomaney, A. B., & Crotts, A. P. S. 1996, , 112, 2872
[Yee et al.(2012)]Yee2012 Yee, J. C., Shvartzvald, Y., Gal-Yam, A., et al. 2012, , 755, 102
[Yoo et al.(2004)]Yoo2004 Yoo, J., DePoy, D.L., Gal-Yam, A. et al. 2004, , 603, 139
[Zhu et al.(2016)]Zhu2016 Zhu, W., Calchi Novati, S., Gould, A., et al. 2016, , 825, 60
|
http://arxiv.org/abs/2307.05230v1 | 20230711125444 | Accreting luminous low-mass planets escape from migration traps at pressure bumps | [
"O. Chrenko",
"R. O. Chametla"
] | astro-ph.EP | [
"astro-ph.EP"
] |
We investigate the migration of Mars- to super-Earth-sized planets in the vicinity of a pressure bump in a 3D radiative protoplanetary disc while accounting for the effect of accretion heat release.
Pressure bumps have often been assumed to act as efficient migration traps, but we show that the situation changes when the thermal forces are taken into account.
Our simulations reveal that for planetary masses ≲2 M_⊕, once their luminosity exceeds the critical value predicted by linear theory, thermal driving causes their orbits to become eccentric, quenching the positive corotation torque responsible for the migration trap. As a result, planets continue migrating inwards past the pressure bump. Additionally,
we find that planets that remain circular and evolve in the super-Keplerian region of the bump exhibit a reversed asymmetry of their thermal lobes, with the heating torque having an opposite (negative) sign compared to the standard circular case, thus leading to inward migration as well.
We also demonstrate that the super-critical luminosities of planets in question can be reached through the accretion of pebbles accumulating in the bump. Our findings have implications for planet formation scenarios that rely on the existence of migration traps at pressure bumps, as the bumps may repeatedly spawn inward-migrating low-mass embryos rather than harbouring newborn planets until they become massive.
planet and satellites: formation – planet-disc interactions – protoplanetary discs – hydrodynamics
§ INTRODUCTION
The recent advancements of interferometric observations
with high angular resolution have enabled to detect
ring-like concentrations of small solid particles
(dust or pebbles)
in numerous protoplanetary discs
<cit.>.
Such ring-shaped accumulations could have formed at the locations of pressure bumps
where the disc rotation becomes locally super-Keplerian and the inward drift
of small solids due to the aerodynamic drag is blocked <cit.>.
Since pressure bumps overlap with transitions in the gas density,
they can act as barriers for planetary migration due to a local boost
of the positive corotation torque, which balances the negative Lindblad torque <cit.>.
Therefore, a pressure bump could hypothetically represent a sweet spot for planet formation:
Any planet growing at the bump would be protected from planetary migration and
it would remain submerged in a relatively dense and self-replenishing
reservoir of solids from which it could continue accreting.
Such an interplay has become a key component of many novel planet
formation scenarios <cit.>.
However, the existence of the migration trap at the pressure bump
has so far been justified by taking only the Lindblad and corotation
disc-driven torques <cit.> into account
and thus, from the viewpoint of planet-disc interactions,
the picture is not entirely complete.
In non-isothermal discs with any form of thermal diffusion,
planets are subject to additional thermal torques <cit.>.
When the planet is non-luminous, the gas traveling past the planet gains energy
by compressional heating but this energy is spread by thermal diffusion
<cit.> and a perturbation arises which
is cooler and denser compared to a fully adiabatic case.
When the planet is accreting and luminous, a threshold luminosity exists
for which the disc perturbation is similar to the fully adiabatic case <cit.>.
Planets with super-critical luminosities
switch to the regime of the heating torque <cit.>.
The heating torque arises because the accretion heat renders the gas flowing
past the planet underdense. The underdense gas is redistributed by
the thermal (or radiative) diffusion and disc shear
and two lobes are formed,
the inner lobe leading and the outer lobe trailing the orbital motion of the planet <cit.>.
For a typical sub-Keplerian disc and a circular planetary orbit, the outer
trailing lobe is more pronounced because the corotation radius between the planet
and the disc material is shifted inwards.
We recall, however, that
<cit.> showed that the advection of hot gas near
super-Earth-sized planets
is not governed purely by the shear motion but rather by a complex 3D circumplanetary flow
interacting with the horseshoe region of the planet <cit.>.
Additionally, the perturbing force acting upon the luminous planet can also excite
its orbital eccentricity <cit.>.
When that happens, the two thermal lobes are replaced with a single lobe
whose trajectory can be well approximated with an epicycle
and the planet is said to enter the headwind-dominated regime of thermal torques <cit.>.
While the heating torque is typically positive and supports outward migration
in the circular shear-dominated
regime <cit.>,
its contribution to the overall torque balance in the eccentric headwind-dominated regime
is less clear.
In our study, we investigate the migration of pebble-accreting planets
in the vicinity of the pressure bump while taking the thermal torques into account,
with the aim to answer the following questions.
Will the thermal torques assist or counteract the migration trap? Will the orbital
eccentricity grow and if so, how will the torque balance change?
By carrying out an extensive set of numerical simulations, we show
that planets with super-critical luminosities are likely to experience the
eccentricity excitation and enter the headwind-dominated regime.
The subsequent orbital migration of such planets is
dominated by the Lindblad torque because the corotation torque is quenched
for eccentric orbits <cit.>
and we demonstrate that the influence of the thermal torque
on the evolution of the semi-major axis weakens as well <cit.>.
Finally, for parameters that allow the planet to remain
in the circular shear-dominated regime, we show that the thermal torques
in the super-Keplerian region of the pressure bump have an opposite effect
compared to <cit.>.
§ 3D RADIATIVE HYDRODYNAMIC MODEL
We model a patch of a 3D gas disc on an
Eulerian grid with uniform
spacing and N_r× N_θ× N_ϕ
cells, where the subscripts stand for the radial, azimuthal,
and colatitudinal spherical coordinates, respectively.
The azimuthal extent of the grid
for our simulations with planets covers a quadrant,
not the full azimuth (see Section <ref>).
We use the Fargo3D code
<cit.>
and treat the gas disc as a viscous fluid evolving in
a non-inertial frame centered on the star M_⋆ and co-rotating with an embedded planet M_p,
whose orbital evolution we study. The gravitational
potential of the planet is smoothed with the cubic spline
function of <cit.> at cell-planet distances d<r_sm, where r_sm
is the smoothing length (Table <ref>), as
Φ_p = - GM_p/d[(d/r_sm)^4-2(d/r_sm)^3+2d/r_sm] ,
where G is the gravitational constant.
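As an aside, the smoothed potential above is straightforward to evaluate; the following minimal Python sketch (not the actual Fargo3D implementation, which is written in C/CUDA) uses placeholder values for the planet mass and smoothing length.

```python
import numpy as np

G = 6.674e-8  # gravitational constant in cgs units

def planet_potential(d, M_p, r_sm):
    """Cubic-spline-smoothed planetary potential: the smoothed branch is
    used at cell-planet distances d < r_sm, the point-mass potential
    -G*M_p/d elsewhere; the two branches match at d = r_sm."""
    d = np.asarray(d, dtype=float)
    x = d / r_sm
    smoothed = -G * M_p / d * (x**4 - 2.0 * x**3 + 2.0 * x)
    return np.where(d < r_sm, smoothed, -G * M_p / d)

# Placeholder example: an Earth-mass planet, hypothetical smoothing length
M_earth = 5.972e27                      # g
r_sm = 1.0e11                           # cm (illustrative value only)
d = np.linspace(2.0e10, 5.0e11, 6)      # cell-planet distances in cm
print(planet_potential(d, M_earth, r_sm))
```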
Aside from the continuity and momentum equations that
are included in the public version of Fargo3D <cit.>,
we consider the evolution of
the internal energy density of the gas ϵ
and the energy density of diffuse thermal radiation E_R following the two-temperature approximation
<cit.>:
∂ E_R/∂ t + ∇·F⃗ = ρκ_P[4σ T^4 - cE_R] ,
∂ϵ/∂ t + ∇·(ϵv⃗) =
- ρκ_P[4σ T^4 - cE_R]
-P∇·v⃗
+ Q_visc + Q_art + Q_acc ,
where t is the time, F⃗ is the radiation flux vector, ρ is the gas density, κ_P is the Planck opacity, σ is the Stefan-Boltzmann constant, T is the gas temperature, c is the speed
of light, v⃗=(v_r,v_θ,v_ϕ) is the velocity vector of the gas flow,
P is the gas pressure (P=(γ-1)ϵ for the ideal gas with the
adiabatic index γ),
Q_visc is the viscous heating term <cit.>,
Q_art is the heating due to the shock-spreading
viscosity of finite-difference codes <cit.>,
and Q_acc is the heat source related to the luminosity of the embedded planet. For the implementation of the energy equations,
the flux-limited approximation, and the remaining closure relations, the reader is referred to <cit.> and references therein.
Let us only point out that
the Planck and Rosseland opacities (the latter of which governs the radiation diffusion) assumed here
are uniform and equal, κ_P=κ_R=κ.
Equations (<ref>) and (<ref>) are solved implicitly using the successive over-relaxation method with the relative precision of 10^-8.
To model the heat release due to planetary accretion,
we consider a simple luminosity relation <cit.>
L = GM_pṀ_p/R_p = GM_p^2/R_pτ ,
where Ṁ_p is the mass accretion rate of the planet, R_p is its physical radius, and
τ=M_p/Ṁ_p is its mass doubling time.
Let us point out that τ is used
only to regulate L but M_p
itself is kept fixed throughout our simulations.
The heat source is non-zero in eight cells surrounding
the planet <cit.>
and zero elsewhere. To account for the shift of the planet
with respect to these cells, we define the grid
coordinates of the planet
(r_p,θ_p,ϕ_p),
coordinates of an n-th
cell centre (r_n,θ_n,ϕ_n),
dimensions of the respective cell
(Δ r,Δθ,Δϕ), its volume V_n, and apply
<cit.>
Q_acc,n = L/V_n(1-|r_p-r_n|/Δ r)(1-|θ_p-θ_n|/Δθ)(1-|ϕ_p-ϕ_n|/Δϕ) ,
inside cells that satisfy |r_p-r_n|<Δ r,
|θ_p-θ_n| < Δθ,
and |ϕ_p-ϕ_n|<Δϕ.
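The weighting above amounts to a cloud-in-cell deposition of the luminosity; a schematic Python rendition might read as follows (the actual implementation lives in the Fargo3D source, and the flat array layout used here is a simplifying assumption).

```python
import numpy as np

def accretion_heating(L, planet_pos, cell_centres, cell_sizes, cell_volumes):
    """Distribute the luminosity L over the cells surrounding the planet:
    each neighbouring cell receives L times the product of linear weights
    (1 - |x_p - x_n|/dx) in the three coordinates, divided by its volume;
    cells farther than one cell size in any direction receive nothing."""
    offsets = np.abs(np.asarray(cell_centres) - np.asarray(planet_pos))  # (N, 3)
    inside = np.all(offsets < cell_sizes, axis=1)
    weights = np.prod(1.0 - offsets / cell_sizes, axis=1)
    q_acc = np.zeros(len(cell_centres))
    q_acc[inside] = L * weights[inside] / np.asarray(cell_volumes)[inside]
    return q_acc
```

By construction, the linear weights of the cells bracketing the planet sum to unity, so the total injected power equals L regardless of where the planet sits within its host cell.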
Most of our simulations are performed on a (i) disc quadrant
and with (ii) migrating planets. Due to (i),
it is not possible to consider the indirect potential
term related to the acceleration of the star generated
by the disc. Point (ii) motivates us to subtract
the azimuthally-averaged gas density from ρ before
the disc-planet interaction is evaluated <cit.>. Such a
procedure corrects for the spurious displacement of the Lindblad resonances in a non-self-gravitating disc.
The planetary orbit is propagated using the standard fourth-order
Runge-Kutta integrator of Fargo3D.
§.§ Initial conditions with a pressure bump
Before conducting simulations with embedded planets,
it is imperative to find an equilibrium state of the
unperturbed disc determined by the balance between
viscous heating and radiative cooling[Realistic
discs are also heated by stellar irradiation
<cit.>
but we neglect it here because the vertical opening angle of the domain is small (to reach high resolution), which prevents us from resolving the absorption of impinging stellar photons. Therefore, our model is applicable
to optically thick disc regions within several au rather
than to outer disc regions where stellar irradiation
dominates the energy budget.].
We start with a disc that has a radial extension of
(r_min,r_max)=(2.6,7.8) au
and a grid resolution of N_r× N_θ× N_ϕ = 768 × 1 × 64. The disc is azimuthally symmetric (as represented by N_θ=1) as well as
vertically symmetric around the midplane (only one hemisphere of the disc is modelled). Remaining parameters
are the same as in Table <ref>.
The initial temperature
profile is that of an optically thin disc <cit.> and
the initial surface
density profile follows
Σ(r) = Σ_0(r/1 au)^-0.5 .
We introduce the pressure bump as a Gaussian perturbation of
the surface density <cit.>
Σ'(r) = Σ(r)[1 + (A_b-1)exp(-(r-r_b)^2/2w_b^2)] ,
where A_b is the bump amplitude, r_b is the radial centre of the perturbation,
and w_b parametrizes the width of the Gaussian.
To minimize the viscous spreading over the course of our simulations, we `mirror' the Gaussian perturbation
as a minimum in the viscosity profile <cit.>
ν'(r) = ν[1 + (A_b-1)exp(-(r-r_b)^2/2w_b^2)]^-1 .
Let us point out that in the absence of the pressure bump,
the disc would be similar to that of <cit.>.
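A brief sketch of these initial profiles, using the bump parameters adopted later in the paper (A_b = 1.45, r_b = 5.2 au, w_b = 0.32 au) and the viscosity ν = 10^15 cm^2 s^-1; the surface-density normalisation Σ_0 is left as a placeholder.

```python
import numpy as np

def bump_profiles(r_au, sigma0, nu0, A_b=1.45, r_b=5.2, w_b=0.32):
    """Surface density with a Gaussian bump and the 'mirrored' viscosity
    profile with a matching Gaussian dip, as functions of radius in au."""
    sigma_bg = sigma0 * r_au ** (-0.5)
    gauss = 1.0 + (A_b - 1.0) * np.exp(-(r_au - r_b) ** 2 / (2.0 * w_b ** 2))
    return sigma_bg * gauss, nu0 / gauss

r = np.linspace(2.6, 7.8, 769)                        # radial grid in au
sigma, nu = bump_profiles(r, sigma0=1.0, nu0=1.0e15)  # sigma0 is a placeholder
i_b = np.argmin(np.abs(r - 5.2))
print(sigma[i_b] / (1.0 * 5.2 ** (-0.5)))             # ~1.45, the bump amplitude
```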
The first part of our relaxation procedure is hydrostatic
and iterative. For a given temperature field, we solve
the equations of hydrostatic equilibrium
<cit.>
to find a new density distribution ρ(r,ϕ).
Then we fix the new density distribution and perform one
time step equal to the characteristic radiation diffusion time-scale in the energy equations. With the updated temperature field, the iteration process is repeated until
the relative change in T and ρ per iteration becomes as small as 10^-5.
In the second step of our relaxation procedure, we continue
to evolve the disc using the full set of time-dependent
fluid equations over 2,000 orbital time-scales at r_b. To insert a planet at an arbitrary position
r_p in the disc, we remap the relaxed disc to a numerical grid with the radial extension of
(r_min,r_max)=(0.7,1.3) r_p and an azimuthal extension of a quadrant.
The resolution of the remapped grid is given in Table <ref>. We also transform the azimuthal
gas velocity v_θ to a frame corotating with the planet. If the planet starts in an eccentric orbit,
the frame velocity is corrected for the eccentricity
(planets start in their apocentre).
§.§ Boundary conditions
The boundaries are periodic in the azimuthal direction
and the disc is mirrored at the midplane.
Scalar quantities (ρ,ϵ,E_R) have symmetric boundary conditions,
with the exception of E_R at the
lower boundary in the colatitude, where
we allow for the escape of photons through the disc surface by setting
E_R=a_RT_bc^4, a_R being
the radiation constant and T_bc=5 K.
Azimuthal velocities are symmetric at the vertical boundaries
and a Keplerian extrapolation is used at the radial boundaries.
Radial/vertical velocities are anti-symmetric at radial/vertical boundaries, respectively, and symmetric elsewhere.
Boundary conditions are supplemented with the wave-damping
zones <cit.> in the
radial intervals of (1,1.2^2/3)r_min
and (1.2^-2/3,1)r_max. We damp ρ, v_r and v_ϕ towards their values
corresponding to the end of the disc relaxation.
Azimuthal velocities and energy densities are not damped
and there is no
damping zone at the disc surface, nor in the midplane.
§ RESULTS
§.§ Relaxed disc
Fig. <ref> (top panel) shows the radial profiles
of the gas surface density Σ obtained during the relaxation
towards the thermodynamic equilibrium. Starting from our hydrostatic
estimate, the bump amplitude slightly decreases and the bump edges
undergo minor spreading over the first 200 dynamical time-scales
of the hydrodynamic relaxation. Afterwards,
the red and black curves are difficult to distinguish
from one another and the profile remains
almost unchanged.
Pressure bumps are typically characterized by the variation
of the pressure support that a gas parcel feels while orbiting
in the disc. We computed the pressure support parameter in the midplane <cit.>
η = - 1/2(H/r)^2∂log P/∂log r ,
where H is the non-isothermal pressure scale height that
relates to the sound speed c_s=√(γ P/ρ)
and the local Keplerian frequency Ω_K=√(GM_⋆/r^3) as H=c_s/(√(γ)Ω_K). If η>0, the gas parcel orbits
at a sub-Keplerian velocity. If, on the other hand, η<0,
the gas parcel becomes super-Keplerian. Fig. <ref>
(bottom panel)
indicates that η≃0.002 away from the pressure bump,
the maximum value of η≃0.005 is reached roughly
in the outer half of the surface density peak,
and the minimum value
of η≃-4×10^-4 is reached in the inner half
of the Σ peak.
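To make the definition concrete, the following sketch evaluates η for a toy pressure profile with a Gaussian bump of the amplitude and width used in this paper; the background pressure slope and the aspect ratio H/r = 0.05 are illustrative assumptions, not the relaxed disc itself.

```python
import numpy as np

def pressure_support(r, P, H):
    """eta = -(1/2) (H/r)^2 dln(P)/dln(r): positive eta means sub-Keplerian
    gas rotation, negative eta means super-Keplerian rotation."""
    dlnP_dlnr = np.gradient(np.log(P), np.log(r))
    return -0.5 * (H / r) ** 2 * dlnP_dlnr

# Toy pressure profile: power-law background plus a Gaussian bump
r = np.linspace(4.0, 6.5, 500)                                   # au
P = r ** (-2.25) * (1.0 + 0.45 * np.exp(-(r - 5.2) ** 2 / (2.0 * 0.32 ** 2)))
eta = pressure_support(r, P, H=0.05 * r)                         # assumed H/r = 0.05
print(eta.min() < 0.0, eta.max() > 0.0)   # both True: super- and sub-Keplerian zones
```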
In Fig. <ref>, we show the gas temperature T
in the vertical plane of the disc.
The vertical temperature stratification is rather steep,
as expected for a disc dominated by the viscous heating,
and it is apparent that the bump does not have a strong influence on T.
The bump only slightly flattens the radial temperature gradient,
which results in the leveling of isocontours with the horizontal
axis near 5 au in Fig. <ref>.
§.§ Bump parameters and the migration trap
The span of possible parameters characterizing pressure bumps
in protoplanetary discs is largely unconstrained because
the physics of these bumps is still a subject of intensive research.
We choose the bump position r_b=5.2 au
to tie our study of planet migration to previous works <cit.>.
The Gaussian width w_b=0.32 au≃1.5H is chosen to ensure that the bump remains Rossby-stable <cit.>.
Regarding the bump amplitude A_b=1.45, it is kept rather
small but large enough to facilitate the existence of (i) a radial
interval of super-Keplerian gas rotation and (ii) a migration trap in
the absence of thermal torques, even for planets with very low masses.
Point (i) is fulfilled based on Fig. <ref> and point
(ii) is proven in the following paragraph. As such, our bump can be interpreted
to be close to the lower limit of A_b and
w_b (while satisfying all of the above-mentioned
requirements). Larger values of A_b and
w_b are not ruled out by our study.
To verify that the pressure bump would act as a migration trap
in the absence of thermal torques, we applied the torque formulae
of <cit.> to our disc model
and constructed the migration map shown in Fig. <ref>.
According to the obtained result, low-mass planets would migrate
inwards due to the dominance of the Lindblad torque
in the majority of the disc <cit.>.
However, there is indeed
a narrow interval of radii for which the disc torque would become
positive due to a boost of the corotation torque (red colour in Fig. <ref>) and the planets would experience outward migration <cit.>.
At the outer edge of the red-coloured
region, there is the zero-torque radius that would act as a migration trap in the pressure bump.
Having analyzed the disc properties and their influence on
the Lindblad and corotation torques, we specify three values of
interest for the planetary semi-major axes a.
Our fiducial value is a_1=4.84 au and it is
marked in Fig. <ref> with the innermost vertical
dashed line. At this disc location, the planet starts within
the minimum of η and at the same time, it finds itself
within the V-shaped region of outward migration in Fig. <ref>.
The second semi-major axis of interest is a_2=5.2 au (middle dashed line in Fig. <ref>)
for which the planet is close to the maximum
of the surface density peak while η is rather similar to the
background unperturbed value.
Finally, we choose a_3=5.56 au (outermost dashed line in Fig. <ref>) as a counterpart to a_1=4.84 au because it overlaps
with the maximum of η.
§.§ Characteristic scales
The linear perturbation theory of thermal torques
<cit.> argues that the thermal
disturbance near a luminous planet has a characteristic length-scale
λ_c=√(χ/(3/2)Ω_Kγ) ,
where χ is the thermal diffusivity. Recent high-resolution simulations of
inviscid discs with thermal diffusivity <cit.>
have demonstrated that the numerical convergence of thermal forces depends
on the resolution and requires at least 10 cells per λ_c, or
l/λ_c=0.1 where l is the cell size.
In order to verify that our resolution is sufficient, we calculate the thermal diffusivity
due to radiation diffusion in an optically thick medium as <cit.>
χ = 16γ(γ-1)σ T^4/3κ(ρ H Ω_K)^2 ,
and, at a_1=4.84 au, we obtain χ=3.82×10^15 cm^2 s^-1 and λ_c=0.02 au=0.1H.
Using the quadrant grid in the azimuth and the resolution specified in Table <ref>, we reach l_r/λ_c=0.09, l_θ/λ_c=0.09, and l_ϕ/λ_c=0.2, where l_r,θ,ϕ are the lengths of cell interfaces along the respective spherical coordinates. Therefore,
the thermal disturbance is well resolved in the radial and azimuthal directions
and only slightly under-resolved in the vertical direction, which we consider
a reasonable compromise.
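The quoted value of λ_c can be verified with a few lines; the stellar mass (solar) and the adiabatic index γ = 1.4 are assumptions made here for the check, since the actual values are set in the parameter table rather than restated in this section.

```python
import numpy as np

G = 6.674e-8                  # cgs
M_star = 1.989e33             # g, assumed solar-mass star
au = 1.496e13                 # cm
gamma = 1.4                   # assumed adiabatic index

a1 = 4.84 * au
chi = 3.82e15                 # cm^2 s^-1, thermal diffusivity quoted in the text
Omega_K = np.sqrt(G * M_star / a1**3)
lambda_c = np.sqrt(chi / (1.5 * Omega_K * gamma))
print(lambda_c / au)          # ~0.02 au, consistent with lambda_c = 0.1 H
```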
Additionally, Type I migration critically depends on the resolution of the half-width of the horseshoe
region x_s <cit.>,
which influences the accuracy of the horseshoe drag <cit.>. The minimal requirement for radiative discs is to resolve x_s by 4 cells <cit.>.
For the lowest planetary mass that we shall consider in the following (see Table <ref> for the range of parameters varied in our simulations), that is M_p=0.1 M_, we have (7,7,3) cells per x_s
in the (r,θ,ϕ) directions. For M_p=1 M_,
we have (21,21,10) cells per x_s, respectively.
For completeness, let us also note the resolution of the Hill radius R_H; we have (12,12,6) cells per R_H
for the Mars-mass planet and (26,26,12) cells per R_H for
the Earth-mass planet.
§.§ Eccentricity excitation of luminous planets in the super-Keplerian bump region
Thermal forces can in principle change both the semi-major axis a
and eccentricity e of migrating planets[Thermal forces can
also pump orbital inclinations <cit.>, although
this effect is often quenched by the eccentricity growth and thus
not considered in our study.]. Therefore, to evaluate the migration of planets near
pressure bumps, it is necessary to establish whether e can become non-zero and if so,
what is the equilibrium value e→ e_eq for which the
eccentricity driving and damping are balanced.
We start our investigation at a_1=4.84 au where the
region of outward (or stalled) migration is expected to exist and planets should become trapped
close to it (Fig. <ref>). We explore the parameter space of planetary masses M_p/M_=(0.1,0.5,1,2,4),
initial eccentricities e(t=0)=(0.1,1,2,3,4,5,6)×10^-2, and luminosities L/L_c=(0.75, 1.5, 3, 6) that are scaled using the
critical luminosity <cit.>
L_c = 4π GM_pχρ/γ .
The critical luminosity is expected to separate the regimes of eccentricity damping and driving
for L<L_c
and L>L_c, respectively
<cit.>.
By plugging equation (<ref>) into (<ref>),
one finds that the mass doubling time necessary to reach L_c
scales as τ∝ M_p^2/3.
For our span of planetary masses M_p/M_=(0.1,0.5,1,2,4),
the luminosity becomes equal to L_c for τ≃(36,105,166,264,419) kyr,
respectively.
These values of τ
are substantially larger with respect to our typical integration times, which cover several planetary orbits. It is therefore appropriate to keep the planetary masses fixed in our simulations and consider the luminosity as a free parameter.
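For completeness, the scaling quoted above can be recovered in one line by equating the accretion luminosity with L_c; the only extra assumption in this sketch is a fixed bulk density of the planet, so that R_p ∝ M_p^1/3:

```latex
\frac{G M_p^2}{R_p \tau} = L_c = \frac{4\pi G M_p \chi \rho}{\gamma}
\quad\Longrightarrow\quad
\tau = \frac{\gamma M_p}{4\pi \chi \rho R_p} \propto \frac{M_p}{R_p} \propto M_p^{2/3} .
```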
Since our high-resolution radiative simulations are numerically demanding and the number
of parameters is relatively large, we start by performing simulations over 5 orbital
periods of the planet (with a few exceptions specified below). Planets are smoothly introduced in the simulations by ramping M_p
from zero to its parametric value over the first orbital time-scale.
Fig. <ref> shows the eccentricity evolution rate ė/e as a function
of the initial orbital eccentricity for all our parameters (see Fig. <ref>
for M_p=0.5 M_). The time derivative of the eccentricity ė
was obtained by fitting a linear function to the time series of e(t) that were
recorded during our simulations. The fit was performed
over the last three orbits and ė was determined from its slope.
For several cases with the lowest eccentricity e=10^-3, the behaviour
of e(t) was deviating from a linear trend on the time-scale of three orbits.
We thus extended these specific cases to twenty orbital periods
and we performed the linear fit over the last eighteen orbits[The linear fitting
technique adequately describes ė/e
as long as the planet mass
and luminosity remain constant.
Nevertheless, when fluctuations
in the shape of thermal lobes are present (see Section <ref>),
the time interval of the fit has to be larger
than the time-scale of the fluctuations.
The presence of such fluctuations
is the main reason why some of our
simulations at e=10^-3 had to be
prolonged.].
Similarly to <cit.> or <cit.>,
Fig. <ref> allows us to find the equilibrium asymptotic eccentricity e_eq
that the planets would reach. If ė<0 for e=10^-3, we take e_eq=0.
Such cases represent planets that, when starting on circular orbits, feel the eccentricity
damping straightaway, which prevents them from any eccentricity growth.
If, on the other hand, ė>0 for e=10^-3, we find e_eq by applying
linear interpolation to our measurements and identifying the point where ė=0.
In these cases, planets starting at e<e_eq experience eccentricity driving
while planets starting at e>e_eq experience eccentricity damping.
Equilibrium eccentricities e_eq are marked by arrows in Fig. <ref>
and summarized in Table <ref>.
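The procedure for extracting e_eq from the measured growth rates can be summarised by the following sketch; the sample measurements at the end are invented for illustration and do not reproduce the actual simulation data.

```python
import numpy as np

def equilibrium_eccentricity(e_grid, edot_over_e):
    """Return e_eq from (e, edot/e) measurements: 0 if the rate is already
    negative at the smallest sampled e, otherwise the linearly interpolated
    zero crossing (driving below e_eq, damping above it)."""
    e_grid = np.asarray(e_grid, dtype=float)
    rate = np.asarray(edot_over_e, dtype=float)
    if rate[0] < 0.0:
        return 0.0
    crossings = np.where(np.diff(np.sign(rate)) < 0)[0]
    if len(crossings) == 0:
        return e_grid[-1]                     # no damping within the sampled range
    i = crossings[0]
    return e_grid[i] - rate[i] * (e_grid[i + 1] - e_grid[i]) / (rate[i + 1] - rate[i])

# Invented example measurements (per-orbit rates), for illustration only
e_grid = [0.001, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06]
rate = [0.8, 0.6, 0.3, 0.05, -0.2, -0.5, -0.8]
print(equilibrium_eccentricity(e_grid, rate))  # ~0.032
```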
We see that the planets remain circular as long as their luminosity is sub-critical.
Once the luminosity becomes super-critical, e_eq reaches values
of the order of h=H/r <cit.> in most cases.
As for other trends, larger L implies larger e_eq for fixed M_p,
while smaller M_p implies larger e_eq for fixed L,
in good agreement with <cit.>.
In the case with M_p=2 M_ and L=3 L_c, as well
as for all cases with M_p=4 M_, we find e_eq=0
even for the super-critical L.
We attribute this behaviour to the non-linearities that arise for increasing
planetary masses for which the advective redistribution
of hot gas close to the planet does not reach a steady state
<cit.>.
These cases are covered in greater detail in Section <ref>.
For M_p≤1 M_, however, we conclude that L_c
<cit.>
is a robust separation for the eccentricity damping and excitation
even in 3D radiative discs.
§.§ Headwind-dominated regime
Section <ref> reveals that
e_eq∼ h for the majority of our parameter space.
These relatively large eccentricities place the planets firmly to the headwind-dominated
regime of thermal torques <cit.>
for which the two-lobed density perturbation <cit.>
becomes replaced by a single hot trail <cit.>.
Here, we focus on the headwind-dominated regime in greater detail.
As a first step, we need to establish whether the magnitude of eccentricity excitation
from Fig. <ref> is universal or rather dependent on the local conditions
within the pressure bump. To this point, we repeated the measurement from Fig. <ref>
for M_p=1 M_ but this time we placed the planet at
initial semi-major axes a_2=5.2 and a_3=5.56 au to
investigate ė at the bump centre and at the maximum of η, respectively.
Fig. <ref> compares the eccentricity evolution rate at different
locations in the pressure bump. Although we detect marginal differences at
very low e, the values obtained at fixed L are rather similar, which makes them independent
of a. Therefore, the ability of a planet to reach the headwind-dominated
regime with non-zero e_eq does not depend on its position with respect to
the pressure bump. Once the planet develops non-zero e_eq anywhere, it will
tend to maintain it despite its radial migration.
Next, we explore the migration in the headwind-dominated regime. Focusing solely
on a_1 again, we let the planets migrate using e_eq from Table <ref>
as their initial eccentricities.
For the cases with L≥1.5 L_c and M_p≥ 2 M_ that exhibit e_eq=0 in Table <ref>, we start from their largest e that
would lead to ė=0 in Fig. <ref>. The simulation time-scales for migrating
planets are equal to 20 orbital periods.
Fig. <ref> (solid curves) shows the resulting temporal evolution of semi-major axes. Clearly,
planets with super-critical luminosities often abandon the outward migration predicted
by Fig. <ref> and they predominantly switch to inward migration. We only detect stalled
migration for M_p=2 M_ and L=1.5 L_c, outward
migration for M_p=4 M_ and L=1.5 L_c, and we point
out that the migration of M_p=0.1 M_ is generally slow because of its very
low mass (see the extent of the vertical axis in Fig. <ref>).
Since the inward migration becomes faster with increasing L, it is natural to ask whether
this effect is somehow regulated by the thermal torques themselves or whether it is
driven by the Lindblad and corotation torques operating at increased eccentricities.
To find the answer, we performed simulations with non-luminous planets that are subject
to cold thermal torques in radiative discs
<cit.>. We placed
these planets on fixed eccentric orbits with the same span of e as used for their luminous
counterparts. The purpose of fixing the orbits is to prevent the eccentricity damping.
To predict the migration outcome, we measured the torque Γ and power P exerted by
the gravitational disc forces, then we averaged these values over the last ten orbital periods,
and we estimated <cit.>
ȧ/a = 2a/GM_⋆M_pP ,
which expresses the evolution rate of the semi-major axis from the change in the orbital energy.
By integrating Equation (<ref>), we obtained the dashed curves shown in Fig. <ref>.
Focusing on the change in the slope of a(t) curves,
it becomes clear that the cases with L=0 exhibit the same change of slope
with the increase of e as the cases with L≠ 0. Although there is a small systematic offset between the L=0 and L≠ 0 curves, it does not seem to strongly depend on the
actual value of L (in contrast to Fig. <ref> discussed later).
Therefore, we obtain an important result: The actual migration rate for e_eq
corresponding to the headwind-dominated regime is not controlled by thermal torques
themselves; these are only responsible for keeping e pumped up.
Instead, the migration rate is controlled by the standard Lindblad and corotation torques.
Since the latter experiences exponential quenching with increasing e <cit.>, the inward
migration in Fig. <ref> becomes faster for larger e as the Lindblad torque
becomes more and more dominant.
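A minimal sketch of how such migration tracks follow from the measured power is given below; the forward-Euler update and the stellar and planetary masses in the example (solar and Earth masses) are simplifying placeholders rather than the actual analysis pipeline.

```python
import numpy as np

def integrate_semimajor_axis(a0, P, t, G=6.674e-8, M_star=1.989e33, M_p=5.972e27):
    """Integrate da/dt = 2 a^2 P / (G M_star M_p) for a time series of the
    orbit-averaged disc power P(t), using a simple forward-Euler step."""
    a = np.empty_like(t, dtype=float)
    a[0] = a0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        a[i] = a[i - 1] + 2.0 * a[i - 1] ** 2 * P[i - 1] / (G * M_star * M_p) * dt
    return a

# Toy example: a constant negative power drives slow inward migration
au = 1.496e13
t = np.linspace(0.0, 3.15e9, 1000)                   # ~100 yr in seconds
a = integrate_semimajor_axis(4.84 * au, np.full_like(t, -1.0e28), t)
print((a[-1] - a[0]) / au)                           # change in au (toy numbers)
```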
The implications of this section are the following. The excitation of e_eq∼ h
is a global effect independent of planet position in the disc and at this level
of eccentricity, the migration is governed mostly by the Lindblad and corotation torques.
Then, if a luminous planet is found to exhibit inward migration at a_1 where Fig. <ref>
predicts outward (or stalled) migration and where the corotation torque is the strongest,
it will be subject to inward migration in our entire simulated disc because the corotation
torque can only become weaker away from a_1 and the Lindblad torque is always negative.
Hence, for the inward-migrating planets from Fig. <ref>, the migration trap
at the pressure bump does not exist.
§.§ Shear-dominated regime
Let us now turn our attention to the migration at e=0 for which the
thermal torques operate in the shear-dominated regime.
This regime becomes important when L<L_c (Table <ref>),
or when the behaviour of thermal torques becomes highly non-linear
(e.g. for our case M_p=4 M_), and we also expect that
planets can temporarily evolve in this regime if they are born in circular orbit
before having their e excited.
We performed simulations of both cold and luminous planets (L/L_c=(0,0.75,1.5,3,6)) evolving from e=0 and starting at three distinct positions a_1,
a_2, and a_3 spread across the pressure bump. The simulations covered 20 orbital
periods again. Fig. <ref> depicts the migration tracks at a_1. The migration is outward,
and thus in accordance with the migration map in Fig. <ref>, as long as L<L_c and we also find one case of stalled migration for M_p=4 M_, L=1.5 L_c.
For the remaining cases, however, the migration becomes directed inwards and surprisingly,
its rate becomes faster with increasing L.
This is in striking contrast to the usual behaviour of thermal torques in
simple power-law discs <cit.>
where increasing L typically makes the migration tracks more and more outward.
To further highlight this finding, Fig. <ref> shows the evolution
rate of the semi-major axis ȧ/a for M_p=1 M_
at all three initial semi-major axes a_1, a_2, and a_3.
The behaviour at a_1 is as described in the previous paragraph.
The behaviour at a_2 and a_3 is antisymmetric, in a sense that
the fastest inward migration (i.e. the most negative total torque) is found for L=0
and then it slows down and reverses as L increases (i.e. the torque receives gradual
positive boosts each time L grows).
We infer that these trends are related to the local disc rotation and pressure support.
The locations a_2 and a_3 have η>0 (Fig. <ref>)
and the local rotation is sub-Keplerian.
Therefore, the exact corotation between the disc material and the planet is offset inwards
with respect to the planet and the resulting two-lobed underdensity is asymmetric because
the disc shear is more efficient in advecting the hot gas into the outer lobe, trailing
the orbital motion of the planet.
Since η is larger at a_3, so is the offset of the planet from the corotation
and the thermal torques are more prominent compared to a_2. This is in accordance with
<cit.>.
However, η<0 at a_1 (super-Keplerian rotation)
and the same reasoning implies that the exact corotation is
located outwards from the planet and the inner underdense lobe that leads the orbital motion of the planet will dominate over the outer one. In other words, the lack of gas ahead of the planet means that the planet will be robbed of angular momentum by the gas trailing it. The total torque will therefore become more negative, which facilitates the shift towards inward migration found in Figs. <ref> and <ref> at the minimum of η.
As a verification of our claims, Fig. <ref> shows the density perturbation
in the disc midplane near an Earth-mass planet. The density perturbation (ρ-⟨ρ⟩)/⟨ρ⟩ is computed with respect to the azimuthal average ⟨ρ⟩.
By tracing the isocontours, one can see that the excavation of the inner lobe
is larger at a_1 (left panel in Fig. <ref>), while the outer lobe dominates
at a_3 (right panel in Fig. <ref>). This asymmetry depends on where
the disc material corotates with the planet, as indicated by the light green curve in Fig. <ref>.
We point out that the dependence of thermal torques on the corotation offset has already been described in previous works <cit.>, however,
the case when the corotation is located farther out with respect to the planet
was not considered in detail or it was explicitly regarded as unrealistic <cit.>.
Here, instead, we demonstrate that the impact of such an outward corotation offset, which
inherently occurs near pressure bumps (left panel in Fig. <ref>), can be substantial.
Before summarizing this section, let us also point out that while the evolution of the semi-major axes can be reversed depending on η,
the eccentricity evolution rate is affected only weakly as we saw in Fig. <ref>.
An extended analysis of this fact for the shear-dominated regime is provided in
Appendix <ref>.
The implications of this section are best inferred from Fig. <ref>,
which reveals a radius of convergent migration (roughly at 4.95 au)
for L<L_c but
leads to divergent migration for L>L_c.
Therefore, if planets with super-critical luminosities remain circular (which happens
only for a limited subset of our parameter space), they are again expected
to migrate away from the pressure bump. If these planets were to start at a_1,
they could possibly get trapped near a≃4.67 au where η becomes positive
again and thus we can expect another change in the sign of ȧ (not explored in our Fig. <ref>).
If these planets were to start at a_2 or a_3, they would migrate outwards.
§.§ Fluctuating thermal disturbance
When assessing the equilibrium eccentricities, we saw in Fig. <ref> that
while the planets with M_p≤ 1 M_ exhibit an orderly and mildly decreasing dependence
of ė on e, our most massive planets exhibit a break near the smallest eccentricities.
Consequently, these planets are capable of remaining circular even at super-critical luminosities (Table <ref>). Similarly, when focusing on the migration of these planets
in the circular case, we saw that the semi-major axis evolution is accompanied by short-term oscillations
that are best apparent for M_p=4 M_ and L≳1.5 L_c (Fig. <ref>, bottom right).
By investigating the gas evolution for these peculiar cases, we found that they exhibit fluctuations
of the thermal disturbance. These fluctuations were first reported in <cit.>
and they are driven by a complex reconfiguration of the 3D circumplanetary flow.
An interesting fact is that while <cit.> found the presence of these fluctuations in a setup with temperature-dependent disc opacities, here we obtain the same behaviour
in a constant-opacity disc.
Fig. <ref> shows the temporal evolution of the gas density perturbation
for M_p=4 M_ and L=6 L_c during two orbits of the planet.
At t=20 orbits (top left panel), the outer rear lobe dominates.
After one half of the orbit (top right panel), the outer rear lobe shrinks while the inner front lobe becomes more pronounced.
At t=21 orbits (bottom left panel), the inner front lobe dominates. The cycle returns to the beginning
shortly after t=21.5 orbits (bottom right panel).
The realism of the fluctuating thermal disturbance is still somewhat unclear because <cit.> showed
that its occurrence is favoured in discs with vertically steep temperature gradients.
Realistic protoplanetary discs that are stellar-irradiated have shallower vertical temperature
gradients than discs heated purely by viscous friction (considered here); future work should
therefore examine the occurrence of fluctuating thermal disturbances in stellar-irradiated discs as well.
However, the fact that the fluctuations
naturally appear in our constant-opacity setup with much better resolution than <cit.>
provides additional independent indication of their robustness.
§.§ Luminosity reached by pebble accretion
In previous sections, the luminosity L of accreting planets
was a free parameter unrelated to any physical process.
The aim of this section is to determine the values of L
that can be reached by pebble accretion <cit.>,
which is thought to be a major
accretion channel in many planet formation scenarios <cit.>.
We constructed a simple model for a coupled evolution of gas and pebbles. We solved
2D vertically-integrated continuity and Navier-Stokes equations for a bi-fluid mixture of gas and
pebbles using Fargo3D again
and we additionally assumed axial symmetry, thus making the model effectively 1D.
Gas and pebbles were aerodynamically coupled <cit.> and the turbulent pebble diffusion was taken into account
as well <cit.>.
To maintain the disc non-isothermal and thus comparable to our 3D model,
we included a one-temperature gas energy equation in the form of <cit.>
while neglecting the stellar irradiation term. Unlike in <cit.>
(see equation 11 of that paper), we did not include any opacity correction in the
vertical optical depth <cit.>. The disc parameters
were the same as in the case of our 3D disc.
The 1D simulation was started by an iterative hydrostatic relaxation similar to Section <ref>.
Once the gas temperature equilibrated, we added the pebble component by assuming a constant
pebble flux Ṁ_F=10^-4 M_ yr^-1 through the disc:
Σ_p = - Ṁ_F/2π r v_r,p≃Ṁ_F/4π r^2StηΩ_K ,
where Σ_p is the initial pebble surface density, v_r,p is the radial drift velocity of pebbles and St=0.07 is the Stokes number (dimensionless stopping time) of pebbles <cit.>. For the sake of pebble initialization, we used
η corresponding to a bump-free disc (because Equation <ref> would lead to Σ_p<0 for η<0 inside the bump, which would be unphysical). The value of St was derived in a bump-free disc again,
assuming that the dominant physical radius of pebbles is drift-limited <cit.>.
We evolved the disc for 8,500 orbital time-scales, which evaluates roughly to t≃100 kyr.
To maintain the pebble flux entering the disc uniform, we damped Σ_p to its initial
value close to the outer boundary.
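The initialisation of the pebble component can be sketched as follows; the stellar mass (solar), the unit of the quoted pebble flux (taken here to be Earth masses per year), and the bump-free background value η = 0.002 read off the relaxed profile are assumptions made for this illustration.

```python
import numpy as np

G, au, yr = 6.674e-8, 1.496e13, 3.156e7
M_star = 1.989e33                        # g, assumed solar mass
M_earth = 5.972e27                       # g

Mdot_F = 1.0e-4 * M_earth / yr           # pebble flux; Earth-mass units assumed
St, eta = 0.07, 0.002                    # Stokes number and bump-free eta

r = np.linspace(2.6, 7.8, 200) * au
Omega_K = np.sqrt(G * M_star / r**3)
Sigma_p = Mdot_F / (4.0 * np.pi * r**2 * St * eta * Omega_K)
print(Sigma_p.max(), Sigma_p.min())      # initial pebble surface density in g cm^-2
```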
The result of our simulation is given in Fig. <ref>. First, the top panel demonstrates
that the temperature profile of the 1D disc is indeed
comparable to our more advanced 3D model.
Second, the bottom panel shows the radial profile of Σ_p
and reveals that our parametrization of the pressure bump does not create a perfect barrier to the pebble flux.
This is because the region of η<0 is relatively narrow (see Fig. <ref>)
and thus, once the mass loading of the bump by pebbles becomes sufficient, the pebbles start
to penetrate the bump by means of turbulent diffusion. Due to the latter, the radial flux
of pebbles inwards from the bump remains non-zero.
Since Σ_p is not evolving
substantially between t=50 and 100 kyr, it is clear that an equilibrium state was reached.
Having the knowledge of Σ_p, we calculated the pebble accretion rate for a range
of planetary masses across the disc. The calculation followed the recipe of <cit.> that was reformulated by <cit.>
for the environment of pressure bumps.
We neglected any vertical stirring of pebbles (and thus also the 3D regime of pebble accretion)
and we assumed circular orbits of planets. The results are shown in Fig. <ref>
in terms of the mass doubling time τ. Clearly, the accumulation of pebbles at the bump
boosts the accretion efficiency (shortens τ) and the growth of planets can become rapid.
In the light of our study, however, we can see that growing planets easily exceed the critical
luminosity L_c. At a_1, this happens for M_p≃ 0.2 M_.
Consequently, planets with M_p≳ 0.2 M_ would undergo eccentricity excitation,
they would enter the headwind-dominated regime, and they would start migrating inwards.
§ DISCUSSION
§.§ On the viability of model parameters
The full parameter space of our model is extensive and we thus only focused on exploring several specific dependencies. We have already explained our rationale behind selecting parameters
for the pressure bump and planetary luminosity in Sections <ref> and <ref>, respectively. However, it is important to discuss the remaining model parameters, namely,
the opacity κ=1 cm^2 g^-1 and the kinematic viscosity ν=10^15 cm^2 s^-1. These values were adopted from <cit.> in order to establish a connection between our study
and prior research. Nevertheless, one should bear in mind that κ and ν can vary over orders of magnitude in realistic protoplanetary discs.
For less opaque protoplanetary discs, the efficiency of thermal torques decreases, as numerically demonstrated by <cit.> and analytically shown
by <cit.>.
This torque reduction occurs
because a lower opacity leads to a larger thermal diffusivity (Equation <ref>)
while the hot thermal torque component scales as ∝χ^-3/2 in the linear regime.
In the light of our study, however, we must also ask how the eccentricity excitation is affected.
By utilizing the results from linear theory,
we can roughly estimate that the eccentricity will grow as
long as (i) the response time of the thermal driving t_th is shorter than the time-scale of wave-induced damping
t_wave <cit.>
and (ii) the luminosity remains super-critical. Both of these conditions
are dependent on χ and thus influenced by κ. Regarding (i), <cit.> found
t_th/t_wave = √(π/2)1/γ(γ-1)λ_c/H ,
and we recall that λ_c=0.1H in our model. To prevent the eccentricity growth of luminous planets born
in circular orbits, κ would need to decrease sufficiently to expand λ_c∝√(χ) by an
order of magnitude. Nevertheless, determining the critical value of κ at which the wave-induced damping dominates is not straightforward
because not only χ∝κ^-1, but also χ∝ T^4 and we expect that the equilibrium temperature profile of a disc with lower opacity would be cooler. Regarding (ii), lower κ (and consequently larger χ) would elevate the critical luminosity threshold L_c.
In other words, larger accretion rates (lower mass doubling times τ) would be required to initiate the eccentricity growth.
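For the present model, condition (i) can be checked at once; γ = 1.4 is an assumed value here (the actual adiabatic index is set in the parameter table).

```python
import numpy as np

gamma = 1.4                  # assumed adiabatic index
lambda_c_over_H = 0.1        # value of our disc model at a_1
ratio = np.sqrt(np.pi / 2.0) / (gamma * (gamma - 1.0)) * lambda_c_over_H
print(ratio)                 # ~0.22 < 1: thermal driving outpaces wave-induced damping
```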
To assess whether the value of κ=1 cm^2 g^-1 is appropriate
for the studied disc region, we compared it with the Rosseland opacities based on the DIANA standard for the dust composition <cit.> within the temperature range of our disc. Appendix <ref> and Fig. <ref> show that these more detailed opacities are similar to κ=1 cm^2 g^-1 at the radial positions
a_1–a_3, which we have considered as the initial locations for the planets.
We admit, however, that the planets themselves are surrounded by temperature peaks when they
release the accretion heat (for instance, M_p=1 M_ at L=3 L_c exhibits a temperature peak exceeding 200 K)
and our constant-opacity model is certainly a simplification within these local temperature maxima.
Concerning the viscosity,
the value of ν=10^15 cm^2 s^-1 translates to the Shakura-Sunyaev viscosity <cit.> of α≃3.5–4×10^-3 across the pressure bump, which is relatively large and requires a substantial level of turbulent
stress to operate. Such a stress might be difficult to achieve because a hydrodynamic turbulence
is typically weaker
<cit.>
and a magneto-hydrodynamic turbulence in the midplane at several au is likely to be suppressed <cit.>.
At lower and vanishing viscosities, we think that the response of planets to the thermal
perturbation shown in our study remains valid
because <cit.> reported similar
eccentricity evolution in models of inviscid power-law discs with thermal diffusion.
Additional effects, however, can be expected in discs with pressure bumps
because in the low-viscosity limit, embedded planets might perturb the bumps
more prominently and they might even destabilize them. This problem is left for future work.
§.§ Implications for planet formation at pressure bumps
Pressure bumps have been proposed as efficient sites of planet accretion
in several recent works (Section <ref>). However, if the accretion is indeed efficient,
it is likely that the accreting planets will exceed the critical luminosity
and begin experiencing eccentricity excitation due to thermal driving.
Our simulations robustly demonstrate that the eccentricities become non-zero for the majority of considered planetary masses, beginning with those as small as Mars-sized embryos.
Such eccentric planets experience inward migration because they lose the support of the corotation torque.
Planets with M_p≃4 M_ undergo more complex behaviour
and they can actually remain on circular orbits. Nonetheless,
we argue that before reaching this mass,
the planet would have grown through a sequence of lower masses,
during which its eccentricity would become
excited and then maintained even upon reaching M_p≃4 M_ (the planet would
start in the headwind-dominated regime of the bottom-right panel of Fig. <ref> and it would remain there).
In Section <ref>, we developed
a 1D model to estimate the pebble accretion rates, which revealed that for planets with masses M_p≲2 M_, the accretion rate slows
down near the inner edge of the bump, resulting in a sub-critical luminosity (see Fig. <ref>).
However, when that occurs, the planet is already located inward of the island of outward migration (Fig. <ref>), which is radially narrower than the radial range of super-critical luminosities.
Therefore, the planet is not saved from inward migration.
In other words, the main implication of our model is that the pressure bump is not likely
to harbour a growing embryo until it becomes a full-grown planet. Instead,
the bump is likely to lose the embryo to inward migration. This process could be repeated
and thus the bump could spawn sub-Earth-mass embryos and populate the disc region interior
to the bump with them.
Future work is necessary to explore more parameters of the pressure bump itself, as only one pressure bump was studied here.
For instance, one can imagine more pronounced
pressure bumps that would act as perfect barriers for drifting pebbles. In such cases, the pebble
concentration would be confined to a narrow ring, as studied by <cit.>, who
showed that the range of equilibrium planet positions (without thermal torques)
would be wider than the pebble ring itself.
By incorporating thermal torques, it might be feasible
for a planet starting within the pebble ring to migrate outside
it, switch to L<L_c, and still find itself
in the range of the Type-I migration trap.
However, we emphasize that any model considering pressure bumps as perfect pebble traps
should be able to account for streaming instabilities and planetesimal formation
because large solid-to-gas ratios are expected to be reached <cit.>.
How planetesimal accretion near pressure bumps affects our study is unclear and requires further investigation, but
it can in principle only increase the energy output
of the planets.
§.§ General implications for the assembly of planetary systems
We would like to emphasize that even though <cit.>
argued that the heating torque adds a positive contribution to the total torque
and even though the linear heating torque of <cit.> is positive for circular orbits
in classical power-law discs,
the heating torque does not necessarily support outward migration once the eccentricity is excited
(see Fig. <ref>).
And since the heating torque operates at L>L_c, the eccentricity excitation will inevitably
accompany it.
Therefore, current predictions for planet evolution with the addition of the heating torque, such as
those from <cit.>, might strongly overestimate the importance
of outward migration because they do not take the eccentricity driving into account.
The inclusion of the eccentricity excitation <cit.>
in N-body codes seems to be the most important missing piece at the moment because non-zero e
affects all components of the disc-driven torques.
If planets accrete efficiently and their eccentricities are pumped,
we think that they might ignore any migration
traps driven by positive corotation torques, even those related to the entropy-driven corotation torque <cit.>.
To explain the clustering and trapping of low-mass planets at the inner disc edge <cit.>,
it might be necessary for planets to switch back to sub-critical luminosities and circularize
via the usual eccentricity damping. One way to achieve this in the pebble accretion
paradigm is for the planet to reach the terminal pebble isolation mass M_iso <cit.>,
which decreases in the inner disc <cit.>
as well as when the orbits are eccentric
<cit.>.
Another possibility is to rely on a decrease of the
pebble flux as the pebble disc becomes depleted <cit.>
or as multiple planets contribute to the filtering of pebbles
<cit.>.
§ CONCLUSIONS
Previous studies have suggested that pressure bumps in protoplanetary
discs can facilitate rapid and efficient planet accretion <cit.>
because the Lindblad and corotation disc-driven torques
cancel out near the bump, allowing the growing planets to remain close to a reservoir of accumulating dust and pebbles.
In this study, we explored the robustness of
the migration trap when the thermal torques <cit.>
are taken into account.
To this end, we conducted high-resolution 3D radiative hydrodynamic simulations, modelling the pressure bump as a Gaussian perturbation of the density and viscosity.
We focused on planets in the mass range of M_p=0.1–4 M_ and
we considered that they release the accretion heat, parametrized with respect
to the critical luminosity L_c derived from the linear theory of thermal torques <cit.>. For instance,
Mars- and Earth-sized planets in our disc model require
the mass doubling times τ≃35 and 170 kyr,
respectively, to achieve L=L_c.
Our study yields several key findings:
* The migration trap is robust when
the planet's luminosity is sub-critical (L<L_c).
* For super-critical luminosities (L>L_c), planets with M_p≲2 M_
experience eccentricity excitation by thermal driving and enter the headwind-dominated regime of thermal torques <cit.>.
This excitation causes e to become a sizeable fraction of the disc aspect ratio h <cit.>, which quenches the positive corotation torque and allows the negative Lindblad torque to prevail. As a result, the planet undergoes
orbital decay and migrates past the bump.
Our findings suggest that although the thermal forces
maintain e excited
(which modifies the Lindblad and corotation torques),
they have a negligible contribution to the migration rate in the headwind-dominated regime <cit.>.
* For a handful of cases with super-critical luminosities (M_p=2 M_
with L=3 L_c, and all cases with M_p=4 M_),
the thermal disturbance near the planet fluctuates as in <cit.>
and the eccentricity excitation can be prevented.
* If the planet remains circular and evolves in the region of the bump where the gas rotation
is super-Keplerian (η<0), the asymmetry of thermal lobes becomes reversed compared
to the standard circular case of <cit.>. The reversal
is driven by an outward shift of the corotation between the planet and the gas.
The inner thermal lobe leading the orbital motion deepens and the thermal
torque becomes negative, contributing again to inward migration.
We also simulated a simplified 1D gas-pebble disc to estimate the mass loading of
the pressure bump by pebbles and to calculate expected pebble accretion rates.
Our results indicate that most of the planets considered in our study reach super-critical luminosities
in the vicinity of the bump. Therefore, the prevalent outcome of our model is that
growing low-mass embryos leave the bump and populate the disc region interior to the bump.
Planet formation scenarios with pressure bumps should be refined by considering that
the migration trap due to the positive corotation torque is not robust in the presence
of vigorous accretion heating and eccentricity driving.
This fact can be generalized to any Type-I migration trap operating in protoplanetary discs.
Accumulation of low-mass planets at migration traps might delicately depend on the processes
regulating their accretion efficiency, such as the pebble isolation.
§ ACKNOWLEDGEMENTS
This work was supported by the Czech Science Foundation (grant 21-23067M)
and the Ministry of Education, Youth and Sports of
the Czech Republic through the e-INFRA CZ (ID:90254). The
work of O.C. was supported by the Charles University Research program (No.
UNCE/SCI/023). We wish to thank the referee Elena Lega
whose valuable and constructive comments allowed us to improve this paper.
§ DATA AVAILABILITY
The public version of the Fargo3D code is available at <https://bitbucket.org/fargo3d/public/>.
The simulation data underlying this article will be shared on reasonable request
to the corresponding author. The Optool code is available at
<https://github.com/cdominik/optool>.
mnras
§ RESULTS FOR THE HALF-EARTH-MASS PLANET
To keep the main text concise,
this appendix summarizes simulation results
for the planetary mass M_p=0.5 M_.
Individual panels of Fig. <ref>
are complementary to Figs. <ref>, <ref>,
and <ref> as well as to their discussion in the main text
of the article.
§ ECCENTRICITY GROWTH IN THE SHEAR-DOMINATED REGIME
We demonstrated in Section <ref>, for the case of circular orbits,
that the influence of thermal torques on the semi-major axis evolution at a_2
and a_3 is in agreement with previous
studies <cit.>,
while it can facilitate inward migration in the super-Keplerian region
near a_1. However, the eccentricity evolution rate is quite similar
at a_1, a_2, and a_3 (Fig. <ref>).
To provide more insight into the eccentricity evolution in the shear-dominated
regime, we performed two simulations at a_1 and a_3 for
the parameters M_p=1 M_, e=0.001, and L=3 L_c.
These simulations were performed over 20 orbits and the planets were free to migrate.
We measured the disc torque Γ and power P with the aim to relate
them to the eccentricity evolution rate. To do so, one can utilize <cit.>
ė/e = (1-e^2)/e^2(1/2ȧ/a-Γ/L) ≡δ e_P + δ e_Γ ,
where ȧ/a comes from Equation (<ref>) and the orbital angular momentum is
L = μ√(GMa(1-e^2)) ,
where M=M_⋆+M_p and μ=M_⋆M_p/M. In writing Equation (<ref>),
we split the expression into a power-driven term δ e_P and a torque-driven term δ e_Γ.
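A short sketch of how the measured torque and power translate into the two terms of this decomposition; the function below simply evaluates the split defined above, and its inputs would come from the simulation output.

```python
import numpy as np

def eccentricity_rate_terms(a, e, adot_over_a, Gamma, M_star, M_p, G=6.674e-8):
    """Return (delta_e_P, delta_e_Gamma) such that edot/e is their sum:
    delta_e_P is driven by the power (through adot/a) and delta_e_Gamma by
    the disc torque Gamma acting on the orbital angular momentum L."""
    M = M_star + M_p
    mu = M_star * M_p / M
    L_orb = mu * np.sqrt(G * M * a * (1.0 - e**2))
    prefac = (1.0 - e**2) / e**2
    return prefac * 0.5 * adot_over_a, -prefac * Gamma / L_orb
```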
Fig. <ref> shows Γ, P, δ e_Γ, δ e_P, and ė/e
as functions of the true anomaly f. We can therefore infer how the instantaneous change of the orbital
angular momentum and energy propagates into the change of e during orbital cycles of the planet.
First, we notice that P is negative at a_1 while it is positive at a_3. This corresponds
to the opposite ȧ at these two locations (see Equation <ref>).
Second, ė oscillates during each orbit but the positive values clearly
dominate on average and as a result, e grows both at a_1 and a_3.
However, there is a systematic difference between the profiles of ė(f)/e. For a_1,
ė is zero when the planet is at the pericentre, then it follows a broad positive peak
as the planet moves towards the apocentre, and a small negative peak occurs between f=-90^∘ and 0^∘.
For a_3, the maximum (minimum) of ė occurs roughly at the pericentre (apocentre).
§ CONNECTION TO DETAILED OPACITY MODELS
In our disc model, we assumed
a constant uniform opacity κ=1 cm^2 g^-1.
Whether this value is realistic or not depends mostly on the dust composition
and grain sizes in the given disc region. To provide at least one quantitative comparison
of our κ with a more detailed opacity model, we calculated frequency-dependent
dust opacities κ_ν,d using the Optool code <cit.>,
while assuming the DIANA standard for the dust grain composition and size-frequency distribution <cit.>.
Subsequently, we converted κ_ν,d into the Rosseland mean opacities κ_R,
which are commonly used to describe the transfer of thermal radiation in the gray diffusion approximation.
Finally, since κ is used in our model to calculate optical depths in the gas,
we rescaled κ_R by multiplying it with the canonical dust-to-gas ratio
of the interstellar medium f_d2g=0.01.
Fig. <ref> shows the scaled Rosseland opacities as a function of the local
disc temperature T. At radial positions a_1, a_2, and a_3, which are highlighted
with vertical dashed lines, we can see that κ=1 cm^2 g^-1
differs from the Rosseland opacity curve by 6, 9, and 15 per cent, respectively.
|
http://arxiv.org/abs/2307.06139v2 | 20230709045849 | Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates: I. Integration of the Field Equations | ["Sheref Nasereldin", "Kayll Lake"] | gr-qc | ["gr-qc"] |
APS/123-QED
[email protected]
[email protected]
Department of Physics, Queen's University, Kingston, Ontario, Canada, K7L3N6
This paper explores a complete representation of the Vaidya model, a radial flux of radiation in the eikonal approximation, used for modeling various phenomena in both classical and semi-classical General Relativity and Astrophysics. The majority of the applications of the Vaidya model have been formulated in an incomplete representation. A complete representation is obtained here by direct integration of the Einstein field equations. We present the methodology to obtain this complete representation, and its utility in the modeling of general relativistic phenomena.
Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates:
I. Integration of the Field Equations
Kayll Lake
August 12, 2023
===================================================================================================================
§ INTRODUCTION
The Schwarzschild metric <cit.> has been used to study the exterior geometry of spherical stellar objects undergoing gravitational collapse <cit.>, where it is assumed that the radiation emitted by the object is insignificant. However, during the advanced stages of stellar collapse, these objects are expected to emit a considerable amount of mass in the form of radiation, see for example <cit.>. Therefore, the exterior of a collapsing stellar object is no longer empty, and the Schwarzschild vacuum metric is no longer suitable for its description. The Vaidya metric <cit.> is more suitable for this situation and has been widely used to classically study the geometry outside [With suitable boundary conditions, such as Israel's conditions, see <cit.>, on the spherical surface, this exterior solution can be matched to some proper interior solution, see for example <cit.> and <cit.>.] radiating spherical stellar objects, see for example <cit.>.
Thus, one can treat this dynamical mass distribution with its envelope of radiation as an isolated system existing in an otherwise vacuum, asymptotically flat spacetime that is described by the Schwarzschild vacuum metric.
The “self-similar" Vaidya metric has been used to construct spacetimes that exhibit a visible strong singularity, demonstrating the potential for the failure of the Penrose “Cosmic censorship hypothesis" <cit.>. This conjecture states that singularities arising from regular initial conditions do not have any causal influence on spacetime. If the hypothesis were to fail, it would be a major flaw in the theory of general relativity and would make it impossible to predict the events in any region of spacetime containing a singularity, as new information could emerge in an unpredictable manner. The growth of curvature along non-spacelike geodesics has been examined (see for example, <cit.>), and the visible singularity in self-similar spacetimes has been classified as strong. Furthermore, Lake and Zannias <cit.> showed that the emergence of naked singularities in these spacetimes is due to the self-similarity assumption, rather than spherical symmetry.
On the semi-classical level, the Vaidya metric has been utilized to explore black hole evaporation, possibly due to Hawking's radiation <cit.>, (see for example <cit.>). Furthermore, the Vaidya metric in the double-null coordinates (the mass function must be linear) <cit.> has been used to study the quasi-normal modes (QNM) as a model that supposedly will give deeper insights on the gravitational excitations of black holes (see for example <cit.>).
Despite the fact that the majority of applications were structured with the Vaidya metric written in the Eddington-Finkelstein-Like (EFL) coordinates, these coordinates have been known for some time to be incomplete (see for example <cit.>), leaving the Vaidya manifold not maximally covered. Thus, to ensure the accuracy of all applications, it is required to construct a complete set of coordinates and thoroughly assess the impact of this set of coordinates. This is the primary objective of this paper.
We organize this paper as follows. In the next section, we review the EFL coordinates and provide a proof of incompleteness of this set of coordinates, which is the main motivation for any subsequent coordinate representation. In Section <ref>, we review the use of Israel coordinates <cit.> to write the Vaidya metric <cit.>, and discuss why the derivation of these coordinates resulted in unsatisfactory results when attempting to obtain maximal coverings of the Vaidya manifold. The main results of this paper are outlined in Section <ref>, in which we introduce an algorithmic method to obtain Israel coordinates by direct integration of the field equations, without relying on any coordinate transformation. In Section <ref>, we present necessary physical restrictions that must be imposed on the flux of radiation. In Section <ref>, we provide a general derivation regarding the location of the apparent horizon in the Vaidya manifold. It is emphasized that the location of the apparent horizon is established before introducing any expressions to the characterizing functions. In Section <ref>, we demonstrate that our construction can be used to obtain both EFL and Israel coordinates by choosing different expressions for the functions that arise from integrating the field equations; such functions, as well as the coefficient of the cross term in the general metric that is presented, shall be referred to as the “characterizing functions". In Section <ref>, we briefly calculate some of the invariants of the Vaidya metric in Israel coordinates. The last section highlights the main results of the paper and discusses the possible extensions of the current work.
§ THE EFL COORDINATES
The Vaidya metric, in the EFL coordinates, is a spherically symmetric solution to the Einstein field equations with the energy momentum tensor approximated in “the eikonal form" <cit.>, which expresses a unidirectional radial flow of unpolarized radiation,
T_αβ = Φ k_αk_β= ϵ/4π r^2dm(u)/duk_αk_β,
where ϵ = ± 1 and k_α = δ^u_α is tangent to radial inward or outward-going null geodesics. The spacetime line element in the EFL coordinates takes the form
ds^2 = -(1-2m(u)/r)du^2+2ϵ dudr+r^2dΩ^2_2,
where dΩ^2_2 = dθ^2+sin^2θ dϕ^2 is the metric of a unit 2-sphere. For ϵ = +1, the metric expresses inward-directed radiation (towards smaller values of the radius r) with a monotonically increasing m as a function of the “advanced time" coordinate u. If ϵ = -1, the metric is that of outgoing radiation (towards larger values of the radius r) with m being monotonically decreasing as a function of the “retarded time" coordinate u. However, it is conventional, as stated in <cit.>, to assign u as the retarded time and v as the advanced time. Furthermore, it is worthwhile to note that the quantity Φ, usually called the energy density of the radiation flux, does not have a direct operational meaning because the tangent null vector k_α does not have a natural normalization. Thus, it is preferable, see also <cit.>, to consider the following quantity:
ρ = Φ (k_αu^α)^2,
which defines the energy density as measured locally by an observer with a timelike 4-velocity u^α.
§.§ Incompleteness of the EFL Coordinates
In this subsection, we demonstrate why the EFL coordinates (u,r,θ,ϕ) do not provide a complete description of the Vaidya manifold. The incompleteness of these coordinates is the primary motivation for the search for new coordinates in which the manifold is complete, allowing radial null geodesics to continue moving to infinite values of their affine parameter or be terminated upon encountering a gravitational singularity. The incompleteness of the coordinates (u,r,θ,ϕ) becomes evident when studying the behavior of the ingoing radial null geodesics, emanating from the past null infinity ℐ^- or from the past singularity surface r=0, for the case (0<m(∞)<∞). It was suggested, but not proven in <cit.>, that the geodesics appear to approach the future event horizon (FEH) surface, r=2m(∞), as u →∞, though they actually reach it for finite values of their affine parameter, see Fig. <ref>.
To support these insightful claims, we present a more articulated proof. We draw attention to the fact that, whereas Fig. <ref> is only valid for outgoing radiation, the forthcoming proof is valid for both ingoing and outgoing radiation. Let us consider the two branches of radial null curves, for which ds^2=0 and θ = ϕ = const. The first branch is given by u=const (red), and the second branch (blue) is given by the solution of the following ordinary differential equation [This differential equation is a special case of Chini's equation <cit.>, which does not have a general solution.],
du/dr =2 ϵ r/r-2m(u).
We assume the following to hold
0 < m(±∞)< ∞.
The question now arises as to whether the affine parameter λ remains finite as r → 2m(±∞) along the second branch. In order to answer this question, we write the second branch (<ref>) as a system of 1^st order ODEs
ṙ = r-2m(u)/λ,
u̇ = 2ϵ r/λ,
where an overdot indicates d/dλ, so that differentiation of the previous system with respect to λ produces the geodesic equations of (<ref>)
r̈ = - 4 ϵ m^'(u)r/λ^2,
ü = - 4ϵ m(u)/λ^2,
where use has been made of both (<ref>) and (<ref>). Now let us assume that λ→±∞ as r → 2m(±∞) then by virtue of (<ref>) and (<ref>) we obtain
lim_λ→±∞u̇= lim_λ→±∞ü = 0,
which is not possible as this changes the second geodesic branch into the first [Note that the first branch is characterized by u=const, which entails u̇ = ü = 0.]. Therefore, our assumption is wrong, and we conclude that λ along the second branch remains finite as r → 2m(±∞). If we write this value of λ as λ_0, we obtain
lim_λ→λ_0ṙ = 0,
and
lim_λ→λ_0u̇ = 4ϵ m(±∞)/λ_0.
Evidently, the last equation remains finite because the mass function m(±∞) is assumed finite from the beginning. By virtue of (<ref>), we conclude that the region (r<2m(±∞)) is inaccessible in the EFL coordinates. Therefore, an extension is necessary.
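As a concrete numerical illustration of this incompleteness (a sketch only: the exponential mass profile below is an assumed toy function satisfying 0 < m(±∞) < ∞, not one taken from the text), one can integrate the second-branch equation for outgoing radiation, ϵ = -1, and watch r(u) approach 2m(∞) without ever reaching it at any finite u:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass function with a finite limit, monotonically decreasing,
# as appropriate for outgoing radiation (assumed profile, illustration only)
m0, m_inf, tau = 1.2, 1.0, 5.0
m = lambda u: m_inf + (m0 - m_inf) * np.exp(-u / tau)

eps = -1.0                                              # outgoing radiation, u = retarded time
drdu = lambda u, r: (r - 2.0 * m(u)) / (2.0 * eps * r)  # second branch, du/dr = 2*eps*r/(r - 2m)

sol = solve_ivp(drdu, [0.0, 60.0], [3.0], rtol=1e-10, atol=1e-12, dense_output=True)
for u in (0.0, 10.0, 30.0, 60.0):
    print(f"u = {u:5.1f}   r(u) = {sol.sol(u)[0]:.8f}   2 m(inf) = {2.0 * m_inf}")
# r(u) creeps towards 2*m(inf) = 2 but never crosses it at any finite u, even though,
# as argued above, the affine parameter along such a geodesic stays finite.
```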
§ ISRAEL COORDINATES
In order to overcome the “incompleteness problem" of the EFL coordinates, Israel <cit.> introduced what he described as the analytic completion of the Vaidya manifold (<ref>). In Israel coordinates (u,w,θ,ϕ), the Vaidya line element reads
ds^2 = (w^2/2m(u)r(u,w)+4m^'(u)/U(u)) du^2+2dudw+r(u,w)^2dΩ^2_2,
where U(u) = ∫_0^udu/4m(u), r(u,w) = U(u)w+2m(u), and the function m(u) is always positive. Notice that (<ref>) suffers a true singularity at r(u,w) = 0, see (<ref>), and at u=0, if m'(u) does not vanish there, as explained below. To avoid any possible confusion about what is to be said, let us label the EFL retarded coordinate, u, as t. This then shows that (<ref>) is reduced to the outgoing Vaidya metric, (<ref>) with u=t and ϵ=-1, by the transformation
t(u) = -∫_0^udu/U(u),
regular for (u>0, t<∞). Apart from the cumbersome nature of Israel coordinates, the Vaidya metric in Israel coordinates (<ref>) does not adequately represent both the internal and external fields as long as the mass function m(u) is only defined for u ≥ 0. Since u=0 corresponds to t=+∞ (t(u)∝ -log U(u)), it is impossible to extend the line element to the range (u<0) via a coordinate transformation, as it would require knowledge of the mass function m(t>∞), i.e., beyond FEH. Hence, we believe that the “maximal" extension of the Vaidya manifold, as given by the line element (<ref>), is imprecise. It is worth noting that there was an attempt <cit.> to extend the Vaidya metric in terms of Israel coordinates. However, this approach faced the same problem as the original Israel extension of relying on coordinate transformations and the necessity of knowing the mass function m(u) beyond the FEH in advance. It is also worthy of notice that although Israel coordinates have obvious advantages over the EFL coordinates, the Vaidya metric in Israel coordinates has not gained enough attention. To our knowledge, the metric has only been used once (see <cit.>) to study the complete gravitational collapse of a radiating shell of matter. Prior to the attempt given in <cit.>, all the work done to investigate the gravitational collapse in the presence of radiation was not complete. That is, the gravitational collapse was not followed beyond the event horizon because the Vaidya manifold in the EFL coordinates only describes the external field around a collapsing radiating object.
§ GENERAL COORDINATE CONSTRUCTION
Consider the following general spherically symmetric metric expressed in the coordinates (u,w,θ,ϕ) <cit.>
ds^2 = f(u,w) du^2+2h(u,w) du dw + r(u,w)^2dΩ^2_2,
where r(u,w) measures the area of the 2-sphere u=w=const. The energy momentum tensor is once more taken to be of the eikonal form,
T^αβ = Φ k^αk^β,
where k^α = δ^α_w is a radial null vector and the quantity Φ(k^αu_α)^2 is the energy flux, measured by an observer with tangent u_α. Straightforward calculations <cit.> show that the only non-zero component of the Einstein tensor is G^w w from which Φ can be directly obtained. If we take radial null trajectories with four-tangent k^α to be radial null geodesics affinely parametrized by w, i.e.,
k^β∇_βk^α = 0,
this yields
∂ h(u,w)/∂ w = 0.
Thus, the function h(u,w) reduces to a function of only u, h(u,w)≡ h(u). While we will limit ourselves to the choice h(u) = ±1, we will keep the function as is for potential future use.
§.§ Solving the Einstein Field Equations
First [This approach of solving the field equations was first introduced in <cit.> to express the Schwarzschild-de Sitter vacuum metric in Israel coordinates, and was later utilized in <cit.> to obtain the Vaidya metric in the same set of coordinates.], we benefit from the vanishing of the G^uu component to obtain
∂ ^2/∂ w^2 r(u,w)= 0.
This leads, by integration, for a general expression [We also note that this expression can be deduced by assuming that (<ref>) has a vanishing second Ricci invariant <cit.>. This result is particularly important because it is directly obtained from the geometry of the spacetime before considering the matter content.], to r(u,w)
r(u,w) = f_1(u)w+f_2(u).
In the sequel all the functions f_n (u) are assumed suitably smooth [ All the functions are assumed to be at least C^2.]. Second, by solving G^θθ = 0, with the aid of (<ref>), we obtain
r(u,w)∂ ^2/∂ w^2 f(u,w) + 2f_1(u)∂/∂ wf(u,w) - 4h(u)d /duf_1(u) =0.
Integrating (<ref>) yields
f(u,w)= 2 f_1^'(u) h(u) f_2(u)^2-f_1(u)f_3(u)/f_1(u)^2r(u,w)
+2 f_1^'(u) h(u)w/f_1(u)+f_4(u),
where (') denotes ordinary differentiation with respect to the coordinate u. By solving G^uw = 0, we find that f_4(u) is given by
f_4(u) = h(u)(2f_1(u)f_2^'(u)-h(u))/f_1(u)^2,
where use has been made of (<ref>) and (<ref>). By virtue of (<ref>), (<ref>), and (<ref>) the only non-zero component of the Einstein tensor can be given as
G^ww = 1/χ(u)(2h(u)^2f_2(u)^2f_1^”(u)+4h(u)^2f_2(u)f_1^'(u)f_2^'(u)
-h(u)f_3(u)f_1^'(u)-2h(u)f_2(u)^2 h^'(u)f_1^'(u)
-h(u) f_1(u)f_3^'(u)+2f_1(u)f_3(u)h^'(u) ),
where χ(u,w)=h(u)^4f_1(u)r(u,w)^2. The G^ww is conveniently expressed in the following way. First define the Hernandez-Misner mass <cit.>
m ≡r(u,w)^3/2 R_θϕ^ θϕ,
where R is the Riemann tensor. By calculating R_θϕ^ θϕ for (<ref>) and making the necessary simplifications, (<ref>) can be given in terms of the characterizing functions f_n(u) as
m = m(u) = 2h(u)f_2(u)^2f_1^'(u)-f_1(u)f_3(u)/2h(u)^2,
where the mass function must always remain positive-valued over its domain. As a result, G^ww can be expressed in a more succinct form,
G^ww = 2 m^'(u)/h(u)f_1(u)r(u,w)^2 = 8 πΦ.
Similarly, a more convenient expression of the function f(u,w) can be obtained with the aid of (<ref>), (<ref>), (<ref>), and (<ref>)
f(u,w) = 𝒜(u) r(u,w)^2 +ℬ(u) r(u,w)+𝒞(u)/f_1(u)^2r(u,w),
where
𝒜(u) = 2h(u)f_1^'(u),
ℬ(u) = 2h(u)f_1(u)f_2^'(u)-2h(u)f_2(u)f_1^'(u)-h(u)^2,
𝒞(u) = 2h(u)^2m(u).
§ PHYSICAL RESTRICTIONS ON THE CHOICE OF THE CHARACTERIZING FUNCTIONS
The first restriction that we impose, using (<ref>), is given by the following inequality
2h(u)f_2(u)^2f_1^'(u)>f_1(u)f_3(u).
This is necessary to ensure that the mass function, m(u), is always positive.
The second restriction is that the measured radiation flux is a positive quantity,
Φ (k^αu_α)^2> 0.
Substituting (<ref>) in (<ref>) and simplifying, we obtain
m^'(u)/h(u)f_1(u)>0,
which dictates that the signs of m^'(u) and h(u)f_1(u) have to be identical. As our attention is confined to classical matter fields (radiation), a minimum requirement is that this matter distribution must satisfy the Weak Energy Condition (WEC). This requirement implies, with the aid of (<ref>), the following stipulations on the different forms of radiation, summarized in Table <ref>.
Table <ref> clearly illustrates that both ingoing and outgoing radiation can be obtained without changing the sign of the function h(u). However, as will be seen shortly, the direction of radiation in the EFL coordinates is dictated by the sign of the function h(u).
§ THE APPARENT HORIZON AND THE EVENT HORIZON
We begin this section by providing a general derivation to the location of the apparent horizon of (<ref>). To this end, let us examine the congruence of radial null trajectories
characterized by the four-tangent ℓ^α,
ℓ^α = δ^α_u-f(u,w)/2h(u)δ^α_w,
This congruence, however, does not satisfy the geodesic equation in affinely parametrized form. This is evident from the equations ℓ^α∇_αℓ^u = κℓ^u and ℓ^α∇_αℓ^w = κℓ^w, where κ = κ (u,w) is called the inaffinity. The geodesic equations are:
ℓ^α∇_αℓ^u = (2d/d uh(u)-∂/∂ wf(u,w)/2h(u))(1) = κℓ^u,
and
ℓ^α∇_αℓ^w = (2d/d uh(u)-∂/∂ wf(u,w)/2h(u))(-f(u,w)/2h(u)) = κℓ^w,
with the inaffinity κ given by
κ = 2d/duh(u)-∂/∂ wf(u,w)/2h(u).
The associated expansion scalar Θ^(ℓ) of this non-affinely parametrized congruence of radial null geodesics (see <cit.> for the definition of the expansion in this case) is given by
Θ^(ℓ) = ∇_αℓ^α-κ,
= -r(u,w) ∂/∂ wf (u,w)-2 r(u,w) d/d uh (u)/2 h (u) r(u,w)
- 2 f (u,w) ∂/∂ wr (u,w)-4 h (u) ∂/∂ ur (u,w)/2 h (u) r(u,w)-κ,
= - 1/h(u)r(u,w)( f(u,w) ∂/∂ wr(u,w)-2h(u)∂/∂ ur(u,w)).
The apparent horizon is characterized by Θ^(ℓ) = 0, and thus by virtue of (<ref>) we obtain the following condition
2h(u)∂ r(u,w)/∂ u = f(u,w) ∂ r(u,w)/∂ w.
We substitute (<ref>) in (<ref>), which yields
2h(u) ( f_1^'(u)w+f_2^'(u)) = f(u,w)f_1(u).
With the aid of (<ref>) the previous equation takes the form
0 = 2f_1^'(u)r(u,w)^2+2h(u)m(u)
-( 2w f_1(u)f_1^'(u)+2f_2(u)f_1^'(u)+h(u))r(u,w).
We can use (<ref>) once more to reduce the last equation to
-h(u)( r(u,w)-2m(u) ) = 0,
which immediately gives the sought-after result:
r(u,w) = 2m(u).
It is thus established that the apparent horizon is located at r=2m(u).
We also note that the previous result is established before making any choices for the characterizing functions, f_n(u). Determining the location of the event horizon in the Vaidya metric is not as straightforward as locating the apparent horizon. In fact, the entire future history of the metric, as specified by the functions f(u,w) and h(u), must be predetermined in order to identify the null generators of the event horizon <cit.>.
However, we may generically define the future (past) event horizon as a causal boundary for the timelike geodesics terminating at future (past) timelike infinity, i^+(i^-) [For the definitions of these infinities we refer to <cit.>.].
§ SPECIFIC COORDINATE REPRESENTATIONS OF THE VAIDYA METRIC
In this section, we demonstrate that we can obtain various coordinate representations of the Vaidya metric by selecting different expressions for the characterizing functions, h(u) and f_n(u). Additionally, we emphasize that the meaning of the coordinate u is dependent on the choice of the characterizing functions, and thus the coordinate u in the EFL coordinates has a different interpretation to that in Israel coordinates.
§.§ The Vaidya Metric in the EFL Coordinates
Let us choose the characterizing functions such that h(u) = ± 1, f_1(u) = 1, and f_2(u) = 0, then we obtain w = r with the help of (<ref>). Furthermore, we get f_3(u) = -2m(u) from (<ref>). Substituting these values in (<ref>) yields
f(u,r) = -r+2m(u)/r,
and thus the metric (<ref>) becomes
ds^2 = -(1-2m(u)/r)du^2± 2dudr+r^2dΩ_2^2,
with G^ww = ± 2m^'(u)/r^2. It is clear that, with the help of Table <ref>, we can obtain h(u) = -1 for the outgoing radiation version of the Vaidya metric, where the coordinate u is a retarded time. Similarly, selecting h(u) = +1 yields the ingoing radiation version of the Vaidya metric, with u as an advanced time.
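As a quick consistency check on this reduction (simply filling in the intermediate algebra implied above, with no new input), substituting f_1 = 1, f_2 = 0 and h^2 = 1 into the general expressions gives

```latex
\mathcal{A}(u) = 2h f_1' = 0, \qquad
\mathcal{B}(u) = 2h f_1 f_2' - 2h f_2 f_1' - h^2 = -1, \qquad
\mathcal{C}(u) = 2h^2 m(u) = 2m(u),
```

while the mass formula yields m(u) = -f_3(u)/2, i.e. f_3(u) = -2m(u), so that f(u,r) = (-r + 2m(u))/r = -(1 - 2m(u)/r), in agreement with the EFL form quoted above.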
§.§ The Vaidya Metric in Israel Coordinates
In this subsection, we explore how, by introducing different choices for the functions f_n(u), we obtain Israel coordinates. Let us consider the following choices: f_1(u) = U(u), f_2(u) = 2 M(u), and f_3(u) = 0. It follows from (<ref>) that for M(u)=m(u) (which is a choice),
U^'(u) = h(u)/4m(u).
Thus, with the aid of the first fundamental theorem of calculus we write
U(u) = ∫_0^uh(x)/4m(x) dx.
However, since our choices for the function h(u) will be confined to either +1 or -1, we set h(u)=h=±1. Consequently, the expression (<ref>) takes the form
U(u) = h∫_0^u1/4m(x) dx.
It follows that the spacetime line element (<ref>) can be written as
ds^2 = (w^2/2m(u)r+4hm^'(u)/U(u)) du^2+2hdudw+r^2dΩ^2_2,
where r is no longer a coordinate; it is now a function r=r(u,w) = U(u)w+2m(u) and G^ww = 2hm^'(u)/U(u)r(u,w)^2. Here, u is a null coordinate and (<ref>) describes both outgoing and ingoing radiation. It is interesting to note that the presence of h is not necessary for (<ref>), as demonstrated in <cit.>, particularly when m^'(u)=0. It is noteworthy that, in accordance with (<ref>), the apparent horizon is now located at w=0.
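This identification can also be verified mechanically. The short sympy sketch below (our own checking aid, not part of the original derivation) confirms that the general coefficient f(u,w) with the choices f_1 = U, f_2 = 2m, f_3 = 0 and U'(u) = h/(4m(u)) reproduces the Israel form w^2/(2m r) + 4h m'(u)/U for h = ±1:

```python
import sympy as sp

u, w = sp.symbols('u w')
m = sp.Function('m')(u)
U = sp.Function('U')(u)

for h in (1, -1):
    f1, f2 = U, 2*m                                  # Israel choices; f3 = 0
    r = f1*w + f2                                    # r(u,w) = U(u) w + 2 m(u)
    A = 2*h*sp.diff(f1, u)
    B = 2*h*f1*sp.diff(f2, u) - 2*h*f2*sp.diff(f1, u) - h**2
    C = 2*h**2*m
    f_general = (A*r**2 + B*r + C)/(f1**2*r)
    # impose the Israel choice U'(u) = h/(4 m(u))
    f_general = f_general.subs(sp.Derivative(U, u), h/(4*m))
    f_israel = w**2/(2*m*r) + 4*h*sp.diff(m, u)/U
    print(h, sp.simplify(f_general - f_israel))      # -> 0 in both cases
```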
There is some ambiguity regarding the sign of u which appears in the definition of the function U(u) (<ref>); for example, in <cit.>, u is always positive, whereas in <cit.> u can be either positive or negative. We shall resolve this ambiguity and demonstrate when u can be negative or positive. To this end, recall that
U^'(u) = h/4m(u),
which means that the sign of U^'(u) is solely determined by the sign of h. Also, with the aid of the WEC, (<ref>), and (<ref>), we have
m^'(u)/hU(u) = m^'(u)/∫_0^udx/4m(x) > 0,
where in the last equation we have taken h^2 = 1. Hence, for m^'(u)>0 the integral must be positive (u in the integral must be positive) and for m^'(u)<0 the integral has to be negative (u in the integral must be negative). Consequently, we have seen that the sign of u in the integral is not always positive like in <cit.>, and the dichotomy in the function U(u) based on the sign of u is explained in a more articulated way. We have summarized all the choices we have considered thus far in Table <ref>.
Finally, we introduce a restriction on the w coordinate corresponding to the surface r(u,w) = 0, the physical singularity, see below. Since r(u,w) = U(u)w+2m(u), for r(u,w) = 0 we obtain
w = -2m(u)/U(u)≡ w_0(u),
and so w_0 > 0 for U(u)<0 and w_0 < 0 for U(u)>0. It turns out that this is exactly the case when we study the radial null geodesics in the proposed maximal extensions of the Vaidya metric <cit.>.
§ INVARIANTS
Up to syzygies <cit.>, we find that the only non-differential non-vanishing invariant of (<ref>) is the first Weyl invariant,
w1R ≡1/8C_αβγδC^αβγδ
= 3/2h(u)^4r(u,w)^6(f_1(u)f_3(u)-2h(u)f_1(u)'f_2(u)^2),
which reduces to the following expression in Israel coordinates,
w1R ≡1/8C_αβγδC^αβγδ = 6m(u)^2/r(u,w)^6,
where C_αβγδ is the Weyl tensor. However, as (<ref>) makes clear, it would be informative to have invariant information for m^'(u). This is obtained by way of the Bach tensor <cit.>, see also <cit.>. First define
A_αβδ = ∇^γC_αγβδ,
where ∇^γ denotes contravariant derivative. The Bach tensor is given by
B_αβ = ∇^δ A_αβδ+R^γδC_αγβδ/2.
Since the Bach tensor is trace-free, the first Bach invariant is
B≡ B_αβB^αβ.
In the present case we find, with the aid of (<ref>), that
B = (4U(u)m^'(u)/r(u,w)^4)^2.
Nevertheless, the preceding result does not provide the desired invariant definition of m'(u) due to its dependence on the functions r(u,w) and U(u).
§ SUMMARY AND DISCUSSION
We have examined the construction of Israel coordinates for the Vaidya metric and have simplified the problem to finding appropriate expressions for the characterizing functions that arise from integrating the field equations. This construction is systematic and does not necessitate any coordinate transformation, which provides us with the chance to spot potential extensions of the Vaidya manifold by introducing distinct expressions for the characterizing functions, f_n(u). Nonetheless, the main focus of this paper is to reconstruct Israel coordinates for the Vaidya metric. By utilizing the WEC, we have understood the role of the function h(u) in the Vaidya metric. Although the sign of h(u) is paramount in determining the direction of radiation in the EFL coordinates, we have demonstrated that this is not the case for Israel coordinates. That is, both ingoing and outgoing radiation can be achieved with h=+1 or h=-1. However, the impact of changing the sign of the function h(u) will be further investigated when we discuss the completeness of Israel coordinates in <cit.>. The next step, see <cit.>, is to introduce explicit mass functions as candidates for the three possible Vaidya models and assess the completeness of Israel coordinates in relation to these mass functions.
§ ACKNOWLEDGEMENT
This work was supported (in part) by a grant from the Natural Sciences and Engineering Research Council of Canada (to KL).
|
http://arxiv.org/abs/2307.04388v1 | 20230710075157 | Core localized alpha-channeling via low frequency Alfven mode generation in reversed shear scenarios | [
"Zhiyong Qiu",
"Shizhao Wei",
"Tao Wang",
"Liu Chen",
"Fulvio Zonca"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
A novel channel for fuel ion heating in tokamak core plasma is proposed and analyzed using nonlinear gyrokinetic theory. The channel is achieved via spontaneous decay of reversed shear Alfvén eigenmode (RSAE) into low frequency Alfvén modes (LFAM), which then heat fuel ions via collisionless ion Landau damping. The conditions for RSAE spontaneous decay are investigated, and the saturation level and the consequent fuel ion heating rate are also derived. The channel is expected to be crucial for future reactors operating under reversed shear configurations, where fusion alpha particles are generated in the tokamak core where the magnetic shear is typically reversed, and there is a dense RSAE spectrum due to the small characteristic dimensionless orbits of alpha particles.
Energetic particle (EP) physics, including that of fusion alpha particles <cit.>, is a key element towards understanding the performance of future fusion reactors; two crucial topics are EP transport loss induced by self-generated collective oscillations such as shear Alfvén wave (SAW) eigenmodes <cit.>, and the search for alternative/complementary routes to transfer EP power to fuel ions, i.e., alpha-channeling <cit.>. Both processes are influenced by the saturation level and spectrum of SAWs. In this contribution, a channel for reversed shear Alfvén eigenmode (RSAE) <cit.> nonlinear saturation is proposed and analysed, which is expected to play a significant role in future reactor-scale tokamaks with a rich spectrum of core-localized RSAEs <cit.> due to the reversed shear magnetic configuration and small dimensionless EP orbit size.
In this proposed process, a RSAE spontaneously decays into another RSAE and a low frequency Alfvén mode (LFAM), which can be ion Landau damped, leading to effective heating of thermal ions in the reversed shear region, and consequently, enhanced fusion performance.
We consider for simplicity low-β_i plasma such that the frequency separation between RSAE and LFAM required for resonant mode coupling can be well satisfied. The nonlinear coupling is dominated by thermal plasma contribution, while the RSAEs are excited by EPs, so the thermal plasma nonuniformity can be neglected, which is also consistent with the advanced scenario of reversed shear configuration.
The governing equations describing nonlinear interactions among RSAEs and LFAM with all predominantly SAW polarization can be derived from nonlinear gyrokinetic vorticity equation <cit.>
and quasi-neutrality condition,
with the particle response derived from nonlinear gyrokinetic equation <cit.>.
The general equation for the nonlinear interaction of three SAWs, with the matching condition Ω_3(ω_3,𝐤_3)=Ω_1(ω_1,𝐤_1)+Ω_2(ω_2,𝐤_2), can be derived as
b_k_3ℰ_k_3δϕ_k_3 = - i/ω_3Λ^k_3_k_2,k_1[ (b_k_2-b_k_1)(1-k_∥1k_∥2V^2_A/ω_1ω_2) +b_k_3V^2_Ak_∥ 3/ω_3(k_∥ 1/ω_1 - k_∥ 2/ω_2) ]δϕ_k_1δϕ_k_2,
with ℰ_k≡ -k^2_∥ V^2_A/ω^2_k + 1 - ω^2_G/ω^2_k being the SAW dielectric function in the WKB limit, ω_G≡√(7/4+T_e/T_i) v_i/R_0 being the leading order geodesic acoustic mode frequency <cit.>, accounting for SAW continuum upshift and creation of beta-induced continuum gap, and Λ^k_k”,k'≡ (c/B_0)𝐛̂·𝐤”×𝐤' with 𝐛̂ being the unit vector along the equilibrium magnetic field 𝐁_0.
Equation (<ref>) describes the nonlinear evolution of SAWs, with Ω_3 modified by the beating of Ω_1 and Ω_2, the first term on the right hand side due to the competition of Reynolds and Maxwell stresses, and the second term from finite parallel electric field contribution to field line bending. Note that, since (ω_1+ω_2)≃ (k_∥ 1+k_∥ 2)V_A, Ω_3 naturally satisfies the SAW D.R. and can be strongly excited if it is a normal mode of the system, leading to significant spectral transfer of SAW turbulence.
We note that, in the expression of ℰ_k, effects of wave-particle interactions are not included, consistent with the k_∥v_i≪ω_k ordering for bulk non-resonant ions. However, finite Landau damping due to resonance with ions is crucial for alpha-channeling, and will be recovered formally in the later analysis by inclusion of the anti-Hermitian part of ℰ_k <cit.>.
§ PARAMETRIC DECAY OF RSAE
Equation (<ref>) will be applied to the nonlinear decay of a pump RSAE Ω_0(ω_0, 𝐤_0) into a RSAE sideband Ω_1(ω_1, 𝐤_1) and a LFAM Ω_B(ω_B, 𝐤_B), with the frequency/wavenumber matching condition Ω_0=Ω_1+Ω_B assumed without loss of generality.
For RSAE and LFAM being dominated by single-n and single-m mode structures, we take
δϕ_k=A_k(t)Φ_k(x) exp(-iω_k t+inξ-imθ), with A_k(t) being the slowly varying mode amplitude, Φ_k(x) the parallel mode structure localized about q_min with x≡ nq-m, and the normalization condition ∫ |Φ_k|^2 dx=1 is satisfied.
For the effective transfer of alpha particle energy to core ions, ω_B≤ O(v_i/(qR_0)), and thus |ω_B|≪ |ω_0|, |ω_1| and k_∥ B≃ 0. The q_min surface therefore also corresponds to the rational surface of Ω_B, i.e., Ω_B is the LFAM in the reversed shear configuration, as investigated theoretically <cit.>. We then have ω_0≃ω_1 and k_∥0≃ k_∥ 1. Effects of the small frequency mismatch on the decay process will be discussed later.
The nonlinear RSAE sideband and LFAM equations can be derived from equation (<ref>) as
b̂_1ℰ̂_1 A_1 = -i/ω_1⟨Λ^k_1_k_0,k_B^*α_1 Φ_1Φ_0Φ_B⟩_x A_0 A_B^*,
b̂_Bℰ̂_B A_B = -i/ω_B⟨Λ^k_B_k_0,k_1^*α_B Φ_BΦ_0Φ_1⟩_x A_0 A_1^*,
with α_1≡ (b_0-b_B)(1- k_∥ Bk_∥0V^2_A/(ω_0ω_B)) + b_1 V^2_A (k_∥ 1/ω_1 ) (k_∥ B/ω_B - k_∥0/ω_0), α_B ≡ (b_0-b_1)(1- k_∥ 1k_∥0V^2_A/(ω_0ω_1)) + b_B V^2_A (k_∥ B/ω_B ) (k_∥ 1/ω_1 - k_∥0/ω_0), ⟨⋯⟩_x≡∫⋯ dx denoting averaging over the fast radial scale, b̂_1ℰ̂_1≡∫Φ_1 b_1 ℰ_1Φ_1 dx being the Ω_1 eigenmode local dispersion function, and b̂_Bℰ̂_B being the local dispersion function for the LFAM eigenmode.
The parametric decay dispersion relation for RSAE decaying into another RSAE and LFAM can then be derived by combining equations (<ref>) and (<ref>)
ℰ̂_1ℰ̂_B^*≃(Λ̂^k_1_k_0,k_B^*)^2α̂_N/b̂_Bb̂_1 ω_Bω_1Ĉ^2 |A_0|^2,
with Ĉ≡⟨Φ_0Φ_BΦ_1⟩_x, Λ̂^k_1_k_0,k_B^*= ⟨Λ^k_1_k_0,k_B^*⟩_x, α̂_N≡α̂_1α̂_B, and Ĉ≃√(2 Δ_B/(√(π)Δ_0Δ_1)), with Δ_0∼Δ_1∼ O(1) and Δ_B∼ O(β^1/2) being the characteristic radial widths of the respective linear parallel mode structures.
Expanding ℰ̂_1≃ i ∂_ω_1ℰ̂_1(∂_t+γ_1)≃ (2 i/ω_1) (γ+γ_1) and ℰ̂_B^*≃ (-2i/ω_B) (γ+γ_B) in the local limit, with γ denoting the slow temporal variation of Ω_1 and Ω_B due to the parametric instability, and γ_1/γ_B being the linear damping rates of RSAE/LFAM accounted for by the anti-Hermitian part of ℰ_1/ℰ_B, one obtains
(γ+γ_1)(γ+γ_B)=(Λ̂^k_1_k_0,k_B^*)^2 α̂_N/4 b̂_B b̂_1Ĉ^2|A_0|^2.
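For reference, the explicit solution of this quadratic relation (a purely algebraic restatement, not an additional result of the analysis) is

```latex
\gamma = -\frac{\gamma_1+\gamma_B}{2}
 + \sqrt{\;\frac{(\gamma_1-\gamma_B)^2}{4}
 + \frac{\big(\hat{\Lambda}^{k_1}_{k_0,k_B^*}\big)^{2}\,\hat{\alpha}_N\,\hat{C}^{2}\,|A_0|^{2}}
        {4\,\hat{b}_B\,\hat{b}_1}\;}\,,
```

which makes manifest that γ > 0, i.e. spontaneous decay, requires the nonlinear drive term to exceed γ_1γ_B.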
The condition for the pump RSAE spontaneous decay can thus be obtained from equation (<ref>) as
α̂_N>0
and
(Λ̂^k_1_k_0,k_B^*)^2 α̂_N Ĉ^2|A_0|^2/(4 b̂_B b̂_1) >γ_Bγ_1 for the nonlinear drive overcoming the threshold due to Ω_1 and Ω_B Landau damping.
The nonlinear dispersion relation is very complex, and depends on various conditions including the polarization and mode structure of the three modes involved. For further analytical progress, the WKB limit and the strong assumption of k_∥ B→ 0 is adopted, and a parameter regime can be identified for the spontaneous decay process to strongly occur, which corresponds to k_⊥1≫ k_⊥0, such that (b_0-b_1)(b_0-b_B-b_1)>0; and α̂_N>0 can be satisfied with 1-k_∥0k_∥ 1V^2_A/(ω_0ω_1)>0, which generally requires Ω_1 being excited above the local SAW continuum accumulation point with n_1q_min< m_1.
The threshold condition for the RSAE spontaneous decay, for the proposed parameter region of RSAE “normal cascading" to |k_⊥1|≫ |k_⊥0|, can be estimated as
|δ B_⊥0/B_0|^2 > 4γ_1γ_B/ω_0ω_1k^2_∥0/k^2_⊥11/Ĉ^21/1-k_∥0k_∥ 1V^2_A/(ω_0ω_1)∼𝒪(10^-7),
and is comparable with or slightly higher than the typical threshold conditions for other dominant nonlinear mode coupling processes, e.g., ZS generation. This threshold amplitude is also consistent with typical SAW instability intensities observed in experiments. Thus, this channel could be an important process in determining the nonlinear dynamics of RSAEs.
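To see how this order of magnitude can arise, the sketch below evaluates the right-hand side of the threshold estimate for a set of assumed, purely illustrative parameters (the normalized damping rates, the ratio k_∥0/k_⊥1, Ĉ^2 and the continuum-detuning factor are our own representative choices, not values derived in the text):

```python
# Illustrative inputs (assumed, not taken from a specific equilibrium)
gamma1_norm = 1e-2        # gamma_1 normalized to the RSAE frequency (sideband damping)
gammaB_norm = 1e-2        # gamma_B normalized to the RSAE frequency (LFAM ion Landau damping)
kpar0_over_kperp1 = 6e-3  # k_par,0 / k_perp,1
C_hat_sq = 0.1            # Chat^2 ~ 2*Delta_B/(sqrt(pi)*Delta_0*Delta_1), with Delta_B ~ beta^(1/2)
detune = 0.2              # 1 - k_par,0*k_par,1*V_A^2/(omega_0*omega_1)

threshold = 4*gamma1_norm*gammaB_norm*kpar0_over_kperp1**2/(C_hat_sq*detune)
print(f"|dB_perp0/B_0|^2 threshold ~ {threshold:.1e}")   # ~ O(1e-7) for these choices
```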
§ NONLINEAR SATURATION AND CORE-LOCALIZED ION HEATING
The RSAE saturation level can be estimated by considering the feedback of the two sidebands to the pump RSAE, which can be derived from equation (<ref>) as
b̂_0ℰ̂_0 A_0≃ -i/ω_0Λ̂^k_0_k_1,k_Bα̂_0 Ĉ A_1 A_B,
with α_0= (b_1-b_B) (1- k_∥ Bk_∥ 1V^2_A/(ω_1ω_B)) + b_0 V^2_A(k_∥0/ω_0) (k_∥ B/ω_B- k_∥ 1/ω_1). The saturation level of LFAM, can be estimated from the fixed point solution of equations (<ref>), (<ref>) and (<ref>), and one obtains,
|A_B|^2= γ_0γ_1 b̂_0b̂_1ω_0ω_1∂_ω_1ℰ_1,ℛ∂_ω_0ℰ_0,ℛ/(α̂_0 α̂_1 |Ĉ|^2 (Λ̂^k_0_k_1,k_B)^2), and the ion heating rate due to LFAM Landau damping, can be estimated as
P_i=2γ_B ω_B∂ℰ_B,ℛ/∂ω_Bn_0e^2/T_ib̂_B |A_B|^2 ∼ 10^-3γ_0 n T.
The obtained core ion heating rate due to LFAM collisionless damping can be comparable to the Coulomb collisional heating rate, estimated by n T/τ_E, with τ_E being the energy confinement time.
This channel, achieved via the Landau damping of the secondary LFAM (noting that k_∥ B≪1), is highly localized around the q_min surface; this conclusion can also be obtained by noting that the “secondary" LFAM structure is determined by the primary RSAE, with a narrower radial extent than the primary RSAEs. It therefore deposits fusion alpha particle power locally and heats core ions, leading to a direct improvement of fusion performance in the tokamak center. The nonlinear dynamics of RSAEs with multiple channels accounted for simultaneously <cit.> is crucial for understanding the core plasma behaviour and fusion performance of future reactors.
10
AFasoliNF2007
Fasoli A, Gormenzano C, et al, 2007 Nuclear
Fusion 47 S264
LChenRMP2016
Chen L and Zonca F 2016 Review of Modern Physics 88 015008
NFischPRL1992
Fisch N J and Rax J M 1992 Phys. Rev. Lett. 69(4) 612–615
HBerkPRL2001
Berk H L, Borba D N, Breizman B N, Pinches S D and Sharapov S E 2001 Phys.
Rev. Lett. 87(18) 185002
TWangPoP2018
Wang T, Qiu Z, Zonca F, Briguglio S, Fogaccia G, Vlad G and Wang X 2018 Physics of Plasmas 25 062509
LChenJGR1991
Chen L and Hasegawa A 1991 Journal of Geophysical Research: Space
Physics 96 1503 ISSN 2156-2202
EFriemanPoF1982
Frieman E A and Chen L 1982 Physics of Fluids 25 502–508
NWinsorPoF1968
Winsor N, Johnson J L and Dawson J M 1968 Physics of Fluids 11
2448–2450
FZoncaPPCF1996
Zonca F, Chen L and Santoro R A 1996 Plasma Physics and Controlled
Fusion 38 2011
RMaPPCF2022
Ma R, Chen L, Zonca F, Li Y and Qiu Z 2022 Plasma Physics and Controlled
Fusion 64 035019
SWeiJPP2021
Wei S, Wang T, Chen N and Qiu Z 2021 Journal of Plasma Physics 87
905870505
SWeiNF2022 Wei S, Wang T, Chen L, Zonca F and Qiu Z, 2022 Nuclear Fusion 62 126038
|
http://arxiv.org/abs/2307.06295v1 | 20230712164852 | Singular products and universality in higher-derivative conformal theory | [
"Yuri Makeenko"
] | hep-th | [
"hep-th",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.04595v2 | 20230710143631 | Singling out SO(10) GUT models using recent PTA results | [
"Stefan Antusch",
"Kevin Hinze",
"Shaikh Saad",
"Jonathan Steiner"
] | hep-ph | [
"hep-ph"
] |
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland
In this work, we construct promising model building routes towards SO(10) GUT inflation and examine their ability to explain the recent PTA results hinting at a stochastic gravitational wave (GW) background at nanohertz frequencies. We consider a supersymmetric framework within which the so-called doublet-triplet splitting problem is solved without introducing fine-tuning. Additionally, realistic fermion masses and mixings, gauge coupling unification, and cosmic inflation are incorporated by utilizing superfields with representations no higher than the adjoint representation. Among the three possible scenarios, two of these cases require a single adjoint Higgs field, and do not lead to cosmic strings. In contrast, the third scenario featuring two adjoints, can lead to a network of metastable cosmic strings that generates a GW background contribution compatible with the recent PTA findings and testable by various ongoing and upcoming GW observatories.
Singling out SO(10) GUT models using recent PTA results
Jonathan Steiner
February 2023
=========================================================
Introduction:–
Global collaboration among pulsar timing arrays (PTAs) (NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and IPTA <cit.>) previously revealed evidence of common-spectrum noise at nanohertz frequencies. Recent analysis, including CPTA <cit.>, EPTA <cit.>, NANOGrav <cit.>, and PPTA <cit.>, identified spatial correlations (Hellings-Downs effect <cit.>), providing strong support for a stochastic gravitational-wave background (SGWB). Although the mergers of supermassive black hole binaries (SMBHBs) are natural astrophysical sources of the SGWB at nanohertz frequencies, the new data somewhat disfavors SMBHBs in explaining the observed PTA SGWB signal <cit.>. Therefore, the SGWB likely points toward new physics beyond the Standard Model (SM). One of the explanations that fits well with the data is a metastable cosmic string network (CSN) <cit.>. Since such cosmic strings (CSs) can arise from the multi-step spontaneous breaking of the symmetry group of a Grand Unified Theory (GUT) after cosmic inflation, this raises the question of what can be learned about GUTs from this finding.
GUTs <cit.>, combined with SUSY, offer an appealing framework for a more fundamental theory beyond the SM of elementary particles. GUTs unify the three fundamental forces of the SM, while SUSY provides a natural solution to the gauge hierarchy problem and a potential weakly interacting dark matter candidate when R-parity or matter-parity ensures its stability. SO(10)-based GUTs are particularly interesting as they unify all SM fermions of each family into a single irreducible 16-dimensional representation. This 16-dimensional representation also includes a SM singlet right-handed neutrino, which, through the type-I seesaw mechanism <cit.>, generates tiny masses for the SM neutrinos.
Promising GUT models must satisfy proton decay bounds and achieve successful gauge coupling unification. In SUSY GUT models, the d=5 proton decay operators are induced by color-triplet exchange, necessitating the superheavy nature of color-triplet states compared to their doublet partners, known as the doublet-triplet splitting (DTS) problem <cit.>. A desirable GUT model should solve the DTS problem without fine-tuning parameters. Since GUTs generate the Yukawa matrices out of joint GUT operators, leading to
constraints on the flavor structure, a further challenge
consists in realizing viable fermion masses and mixings.
Cosmic inflation <cit.> that solves the horizon and flatness problems of the standard Big Bang cosmology, and explains the origin of structure formation of the observable Universe, could have a deep connection to SUSY GUT models. In addition to the similarity of the scales of inflation and gauge coupling unification, inflation is also crucial to dilute away unwanted topological defects <cit.> like monopoles which generically form at some stage of GUT symmetry breaking. Furthermore, supersymmetric theories typically possess many flat directions, providing an attractive framework for realizing inflation. While monopoles have to be diluted by inflation, other topological defects, like (metastable) CSs <cit.> that form after inflation can leave an observable signature in the SGWB.
In this work, we explore supersymmetric SO(10) GUTs that naturally solve the DTS problem, generate realistic fermion masses, and achieve successful gauge coupling unification and inflation. We focus on lower-dimensional field representations and investigate scenarios with Higgs fields no higher than the adjoint representation. Three promising routes for SO(10) GUT model building are identified: two cases use a single adjoint Higgs field, while the third scenario requires two copies. In the latter case, the intermediate symmetry contains two Abelian factors crucial for CSN formation. For the first time, we construct a realistic SUSY SO(10) GUT scenario (particularly the third scenario), satisfying the mentioned criteria and leads to metastable CSs capable of explaining the recent PTA results for a stochastic GW background at nanohertz frequencies.
SO(10) model building:–
Two major guiding principles in building realistic models in our framework are the natural DTS <cit.> (see also <cit.>) and employing smaller dimensional representations. In achieving this, we utilize 45_H and 16_H+16_H Higgs representations to break the GUT symmetry down to the SM, which is subsequently broken by 10_H (and possibly by 16_H+16_H). The fundamental representation contains weak-doublet and color-triplet states,
10_H =(2_H+3_H)+(2_H+3_H)
=(1,2,1/2)+(3,1,-1/3)+c.c..
The VEV of the adjoint, ⟨ 45_H⟩∝ iτ_2⊗diag(a_1,a_2,a_3,a_4,a_5) that breaks the GUT symmetry is expected to provide superheavy masses to both these components. With this setup, one can construct three classes of models:
* a single adjoint Higgs with ⟨ 45_H⟩∝ B-L generator,
* a single adjoint Higgs with ⟨ 45_H⟩∝ I_3R generator,
* two adjoint Higgses, one with ⟨ 45_H⟩∝ B-L generator and another with ⟨ 45_H^'⟩∝ I_3R generator.
For each model, the superpotential takes the form,
W= W_GUT-breaking+ W_Inflation+W_Mixed+W_DTS+W_Yukawa,
where W_Inflation and W_Mixed together constitute W_Intermediate-breaking. Terms in W_GUT-breaking and W_Intermediate-breaking lead to a consistent symmetry breaking of the GUT symmetry down to the SM gauge group, terms in W_DTS realize DTS without fine-tuning, and the W_Inflation part of the superpotential leads to an inflationary period.
∙ B-L-case: The symmetry breaking chain in this scenario is given by
SO(10)
SU(3)_C× SU(2)_L× SU(2)_R× U(1)_B-L
SU(3)_C× SU(2)_L× U(1)_Y .
The GUT scale symmetry breaking is achieved via
W_GUT-breaking ⊃m_45/2Tr[45_H^2]+λ/4ΛTr[45_H^4],
with the VEV ⟨ 45_H⟩∝ iτ_2⊗diag(a,a,a,0,0).
Note that breaking the GUT symmetry gives rise to superheavy monopoles that must be inflated away. Therefore inflation must take place after the formation of the monopoles. A straightforward option is to utilize hybrid <cit.> inflation (an alternative option is tribrid inflation <cit.>) at the last intermediate symmetry breaking stage, which we achieve via employing 16_H+16_H that acquire VEVs [As a result, the appearance of automatic R-parity from within the SO(10) group is no longer possible. However a discrete symmetry, such as a Z_2 symmetry (matter parity), can readily be imposed.] in the right-handed neutrino direction. Then the relevant superpotential term contributing to inflation takes the following form,
W_Inflation⊃κ S(16_H16_H-m_16^2),
which fixes the magnitude of the VEVs ⟨ 16_H16_H⟩ = m^2_16.
Here, S is a GUT singlet superfield, the scalar component of which plays the role of the inflaton.
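For orientation, and relying only on the standard global-SUSY treatment of this superpotential (supergravity and soft-breaking corrections are ignored here and are not specified by the text above), the tree-level potential along the inflationary valley |S| > m_16 with vanishing waterfall fields is

```latex
V_{\rm tree}\big|_{16_H=\overline{16}_H=0} \simeq \kappa^{2} m_{16}^{4}\,,
\qquad |S_c| = m_{16}\,,
```

the flat direction being lifted by radiative corrections that slowly drive S towards the critical value |S_c|, below which the waterfall directions in 16_H + 16_H become tachyonic and the fields settle at ⟨16_H 16_H⟩ = m_16^2, as required above.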
Since 45_H and 16_H+16_H have component fields that share the same quantum numbers,
45_H, 16_H, 16_H⊃ (1,1,1)+ (3,2,1/6)+ (3,1,-2/3) +c.c.,
to avoid additional would-be Goldstone bosons, which would ruin gauge coupling unification, these fields must have non-trivial mixing terms. The simplest possible interaction term, 16_H 45_H 16_H, is not welcome since it would destabilize the VEV of 45_H from the desired “Dimopoulos-Wilczek form”.
To circumvent this issue, we introduce a second copy of spinorial representations, 16_H^'+16_H^', which do not acquire a VEV in the right-handed neutrino direction. Then a consistent symmetry breaking without additional would-be Goldstone bosons can be achieved via the addition of the following terms in the superpotential:
W_Mixed⊃
16_H(λ_1 45_H+λ^'_1 1_H)16_H^' +16_H^' (λ_2 45_H+λ^'_2 1^'_H)16_H.
Here, we introduced the “sliding singlets” 1^(')_H, which are assumed to have no other terms in the superpotential that could fix their VEVs. They are needed to allow for vanishing F-terms corresponding to 16_H^', 16_H^'.
Concerning DTS, remarkably, the specific VEV structure of the 45_H provides masses to only the color-triplets, while the weak-doublets remain massless, schematically
10_1H⟨ 45_H⟩ 10_2H=
0·(2_1H 2_2H)
+0·(2_2H 2_1H)
+3_1H 3_2H
+3_2H 3_1H.
However, if only the above term is added to the superpotential, then the low energy spectrum would contain four light doublets instead of the usual two doublets of the MSSM. This would spoil the successful gauge coupling unification of the MSSM. To avoid extra light states, we allow a direct mass term for 10_2H, i.e.,
10_2H10_2H= 2_2H 2_2H
+3_2H 3_2H.
Then, the terms in the superpotential relevant for providing the masses of the doublets and triplets and naturally realizing their splittings are
W_DTS⊃γ 10_1H 45_H 10_2H +m_10 10_2H 10_2H.
A crucial remark is in order. Assuming that only 10_1H couples to the fermions, the term in Eq. (<ref>) by itself does not induce proton decay. Once the term in Eq. (<ref>) is also introduced, together they allow the proton to decay via color-triplet Higgses, since now an effective mass term linking 3_1H and 3_1H can be written down after integrating out 3_2H and 3_2H. This can be understood schematically as follows:
3_1H⟨ 45_H⟩ 3_2H 3_2H m_10 3_2H 3_2H⟨ 45_H⟩ 3_1H ,
where the intermediate heavy 3_2H, 3_2H states are contracted pairwise and integrated out, leaving the effective mass term that links 3_1H and 3_1H.
With a sufficiently large effective triplet mass ∼ M^2_GUT/m_10, the d=5 proton decay is suppressed.
∙ I_3R-case: The symmetry breaking chain in this scenario is given by
SO(10)
SU(4)_C× SU(2)_L× U(1)_R
SU(3)_C× SU(2)_L× U(1)_Y ,
which is obtained by ⟨ 45_H⟩∝ iτ_2⊗diag(0,0,0,b,b). Although the W_GUT-breaking and W_Intermediate-breaking parts of the superpotential are identical to the B-L case, W_DTS takes a different form, which we discuss in the following.
Due to ⟨ 45_H⟩∝ I_3R, we now have the opposite situation compared to the previous case, namely
10_1H⟨ 45_H⟩ 10_2H=
2_1H 2_2H
+2_2H 2_1H
+0·(3_1H 3_2H)
+0·(3_2H 3_1H).
Therefore, a different strategy must be implemented to obtain light doublets and superheavy color-triplets. By noting that
16_H^'⊃2_H^' is a SU(2)_R singlet, and, on the contrary, 16_H^'⊃3_H^' resides in a SU(2)_R doublet, one obtains a mass only for the color-triplet, and not for the weak doublet, i.e.,
16_H^'⟨ 45_H⟩ 16_H^'=
0·(2_H^' 2_H^')
+3_H^' 3_H^' .
If only the above term is included in the superpotential, then a pair of triplets will remain massless in addition to one pair of doublets. To provide large masses to all the color-triplets, we add two more terms
W_DTS⊃
λ_316_H^' 45_H 16^'_H
+λ_4 10_H 16_H 16_H+λ_5 10_H 16_H 16_H .
As for the d=5 proton decay,
assuming the SM fermion masses are coming from their coupling to the 10_H (i.e. neglecting all contributions from the 16_H), the effective triplet mass m_T is approximately given by
m_T=-λ_3λ_4λ_5⟨ 16_H⟩⟨16_H⟩/2λ_1λ_2 ⟨ 45_H⟩.
Choosing somewhat small λ_1,λ_2 allows having m_T≳ 10^19 GeV, which is required by proton decay constraints.
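As a rough numerical illustration of this statement (all VEVs and couplings below are assumed values chosen for illustration, not numbers fixed by the analysis above), the magnitude of the effective triplet mass can be evaluated as:

```python
# |m_T| = lam3*lam4*lam5*<16_H><16bar_H> / (2*lam1*lam2*<45_H>), up to the overall sign
v16 = v16bar = 1e16          # GeV, assumed spinor VEVs
v45 = 1e16                   # GeV, assumed adjoint VEV
lam3 = lam4 = lam5 = 1.0     # order-one couplings (assumed)
lam1 = lam2 = 0.02           # "somewhat small" couplings, as suggested in the text

m_T = lam3*lam4*lam5*v16*v16bar/(2*lam1*lam2*v45)
print(f"|m_T| ~ {m_T:.1e} GeV")   # ~ 1e19 GeV for these choices
```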
∙ B-L & I_3R-case: Depending on the values of the VEVs of the two adjoints, various symmetry breaking chains may arise in this scenario, examples of which are (a) ⟨ 45_H⟩ > ⟨ 45_H^'⟩ > ⟨ 16_H⟩, ⟨16_H⟩:
SO(10)
SU(3)_C× SU(2)_L× SU(2)_R× U(1)_B-L
SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L
SU(3)_C× SU(2)_L× U(1)_Y ,
(b) ⟨ 45_H^'⟩ > ⟨ 45_H⟩ > ⟨ 16_H⟩, ⟨16_H⟩:
SO(10)
SU(4)_C× SU(2)_L× U(1)_R
SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L
SU(3)_C× SU(2)_L× U(1)_Y ,
(c) ⟨ 45_H⟩ = ⟨ 45_H^'⟩ > ⟨ 16_H⟩, ⟨16_H⟩:
SO(10)
SU(3)_C× SU(2)_L× U(1)_R× U(1)_B-L
SU(3)_C× SU(2)_L× U(1)_Y .
In this scenario, for each of the adjoints, the GUT symmetry breaking superpotential consists of the terms given in Eq. (<ref>). Since ⟨ 45_H⟩ and ⟨ 45_H^'⟩ break SO(10) to the left-right symmetry and quark-lepton symmetry, respectively, the first and the second break the generators in (3,2,+1/6)+(3,2,-5/6)+(3,1,2/3)+c.c and (3,2,+1/6)+(3,2,-5/6)+(1,1,+1)+c.c, respectively. Consequently, there would be additional massless states. To avoid such massless states, we add the following mixing term in the superpotential,
W_GUT-breaking ⊃η/Λ Tr[45_H.45_H.45_H^' .45_H^'].
As before, one requires non-trivial interactions between the spinorial representations and the adjoints to give masses to the would-be Goldstones. For the two adjoints, we now introduce two sets of additional spinorial representations, 16_H^'+16_H^' and 16_H^''+16_H^'', and add the following terms, such that the VEVs of the adjoints are not destabilized:
W_Mixed⊃
16_H(λ_1 45_H+λ_1^' 1_H)16_H^' +16_H^' (λ_2 45_H+λ_2^' 1_H^')16_H
+ 16_H(λ_3 45_H^'+λ_3^' 1_H^'')16_H^'' +16_H^'' (λ_4 45_H^'+λ_4^' 1_H^''')16_H .
For the DTS, we include the term 10_1H45_H10_2H. However, here we can construct an example model which does not lead to proton decay at leading order via d=5 operators. To this end, we forbid the direct mass term 10_2H10_2H. Instead, we include a higher dimensional operator, 10_2H 45_H^' 2 10_2H, such that an effective triplet mass for 3_1H and 3_1H cannot be written down, since
10_2H 45_H^' 2 10_2H=
2_2H 2_2H
+2_2H 2_2H
+0·(3_2H 3_2H)
+0·(3_2H 3_2H).
With the inclusion of the above two terms, still one pair of color-triplets and an additional pair of weak doublets remain massless. We cure this by adding a term of the form 16^''_H 16^'_H to the superpotential,
W_DTS⊃
γ_1 10_1H 45_H 10_2H +γ_2/Λ 10_2H45_H^' 210_2H
+ω_1616^''_H 16^'_H ,
that leads to a single pair of light doublets, as desired.
It is important to note that all the scenarios discussed above can successfully reproduce correct charged fermion masses and mixings by incorporating suitable higher-dimensional operators. The light neutrinos acquire masses through the standard type-I seesaw mechanism. The Majorana masses for the right-handed neutrinos are generated by the following higher-dimensional operator:
W_Yukawa⊃ Y_R 16_i 16_j 16_H16_H/Λ∼ Y_R v^2_R/Λν^cν^c .
Gravitational wave signals:–
In some of the models we consider, breaking e.g. a simple group into a subgroup that contains a U(1) factor leads to monopole creation. To prevent overclosing the universe, inflation must get rid of the monopoles. At some later stage, once the left-over Abelian symmetry is broken, strings appear (we assume the ideal Nambu-Goto string approximation, where the dominant radiation emission of CSs is into GWs <cit.>). If these two scales are very close, Schwinger nucleation of monopole-antimonopole pairs <cit.> on the string cuts it into pieces and makes it decay. How quickly these metastable strings decay depends on a parameter κ_m <cit.>,
κ_m= m^2/μ∼8π/g^2( v_m/v_R)^2,
where m is the mass of the monopole and v_m (v_R) is the monopole (string) creation scale. The network behaves like a stable-string network for κ_m^1/2≫ 10.
Metastable CSNs provide an intriguing explanation for the newly released PTA data <cit.>. The data indicates string tension (μ) values in the range Gμ∼ 10^-8-10^-5 for κ_m^1/2∼ 7.7-8.3 (with a strong correlation, cf. Fig. 10 of <cit.>), consistent with CMB bounds. Notably, the 68% credible region in the Gμ-κ_m^1/2 parameter plane overlaps with the third advanced LIGO–Virgo–KAGRA (LVK) bound, while major parts of the 95% credible region are compatible, preferring Gμ≲ 10^-7 and κ_m^1/2∼ 8 <cit.>, as shown in Fig.<ref>. However, it should be remarked that the computation of the GW spectrum from metastable CSs carries significant uncertainty <cit.>. Furthermore, various possible effects are not included in the above shown GW spectrum, for instance, an extended matter domination phase after inflation <cit.> or the change of degrees of freedom below the SUSY breaking scale <cit.>. Nevertheless, observing a higher frequency SGWB signal in the next LIGO–Virgo–KAGRA rounds would be a fascinating confirmation of the scenario.
Interestingly, Gμ∼ 10^-7 corresponds roughly to v_R∼ 10^15 GeV, which is fully consistent with the type-I seesaw contribution to neutrino masses and corresponds to the right scale for inflation. On the other hand, stable CSs are disfavored by the recent PTA data[Stable cosmic strings, however, were consistent with the previous PTA data. For works on GWs, in light of NANOGrav12.5 data, arising from cosmic strings within GUTs, c.f., <cit.>.].
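The following sketch illustrates the interplay of scales, combining the κ_m estimate quoted above with the standard Abelian-Higgs estimate μ ≈ 2π v_R^2 for the string tension and G = 1/M_Pl^2 (these two relations and all numerical inputs are our simplifying assumptions, not results of the analysis above):

```python
import numpy as np

M_Pl = 1.22e19              # GeV, (non-reduced) Planck mass
g2 = 0.5                    # assumed GUT-scale gauge coupling squared

def G_mu(v_R):
    """Abelian-Higgs estimate mu ~ 2*pi*v_R^2 with G = 1/M_Pl^2 (illustrative)."""
    return 2*np.pi*v_R**2/M_Pl**2

def sqrt_kappa_m(v_m, v_R):
    """kappa_m^(1/2) ~ sqrt(8*pi/g^2)*(v_m/v_R), as quoted in the text."""
    return np.sqrt(8*np.pi/g2)*v_m/v_R

v_R = 1.5e15                # GeV, string (U(1) breaking) scale
v_m = 1.1*v_R               # GeV, nearly degenerate monopole scale
print(f"G*mu ~ {G_mu(v_R):.1e},  kappa_m^(1/2) ~ {sqrt_kappa_m(v_m, v_R):.1f}")
# -> G*mu ~ 1e-7 and kappa_m^(1/2) ~ 8, the ballpark preferred by the PTA fits discussed above
```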
The first (and second) model studied, the B-L- (and I_3R-) case, leads to embedded strings, which are generally unstable <cit.>. Interestingly, all three models in the B-L & I_3R-case have the potential to produce metastable strings for nearly degenerate monopole and string formation scales: M_I∼ M_II for cases (a) and (b), and M_GUT∼ M_I for case (c). However, in case (c), a lower GUT scale ∼ 10^15 GeV would have to be arranged that requires suppression of d=6 proton decay utilizing the freedom in the Yukawa sector, which makes this case somewhat less appealing.
We like to point out that the class of promising SO(10) models we considered in this work may or may not lead to the formation of CSs, contrary to the class of models considered in <cit.>, where the appearance of CSs is unavoidable.
Before concluding, we discuss the gauge coupling unification for an example scenario that leads to metastable CSs (specifically, we choose case (a) within B-L & I_3R). To achieve metastable strings, the monopole and string formation scales must nearly coincide. Therefore, we effectively have three scales: the GUT scale, the monopole/string formation scale, and the SUSY breaking scale (fixed at 3 TeV). To simplify the analysis, we assume that the fields breaking a symmetry are degenerate with the corresponding scale, while the remaining states have GUT scale masses. This minimal number of free parameters allows us to find a wide range for the monopole/string formation scale, approximately M_I∼ M_II∼ [10^9-10^17] GeV (with 10^16 GeV ≤ M_GUT≤ 10^18 GeV and M_GUT>M_I), while still being consistent with gauge coupling unification. Our analysis considers a 1% uncertainty on the measured values of the gauge couplings to account for GUT threshold uncertainties.
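A minimal one-loop cross-check of this statement (neglecting the intermediate-scale states and GUT-scale thresholds that the full analysis keeps, and using standard approximate values for the couplings at M_Z) can be sketched as follows:

```python
import numpy as np

# One-loop running: alpha_i^-1(mu) = alpha_i^-1(mu0) - b_i/(2*pi) * ln(mu/mu0)
MZ, MSUSY = 91.19, 3000.0                   # GeV; SUSY scale fixed at 3 TeV as in the text
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])  # GUT-normalized alpha_1,2,3 at M_Z (approximate)
b_SM = np.array([41/10, -19/6, -7])
b_MSSM = np.array([33/5, 1, -3])

def run(alpha_inv, b, mu_from, mu_to):
    return alpha_inv - b/(2*np.pi)*np.log(mu_to/mu_from)

a_susy = run(alpha_inv_MZ, b_SM, MZ, MSUSY)
for mu in np.logspace(15, 17, 5):
    print(f"mu = {mu:.1e} GeV : alpha^-1 = {run(a_susy, b_MSSM, MSUSY, mu).round(1)}")
# At this crude one-loop level the three inverse couplings come within about one unit of each
# other around 10^16 GeV; the percent-level threshold uncertainties mentioned above absorb the rest.
```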
A comprehensive analysis encompassing gauge coupling unification, fermion masses and mixings, proton decay, GW signal, and the mass spectrum of the component fields from the superpotential terms will be presented in a forthcoming publication.
Conclusions:–
We explored promising model-building routes for SO(10) GUT inflation in light of the recent PTA results suggesting the presence of a SGWB at nanohertz frequencies. Our investigation focused on a supersymmetric SO(10) framework with small dimensional representations, effectively solving the doublet-triplet splitting problem without fine-tuning. This approach enables realistic fermion masses, gauge coupling unification, and simple options for embedding cosmic inflation. Among the three model classes studied, one involves two adjoint fields capable of generating a network of metastable cosmic strings. This network generates a SGWB background contribution that can explain the recent PTA data, and will be tested by various upcoming GW observatories.
Note added: As we were completing this work, several papers appeared that also discussed the
impact of the new PTA results on new physics scenarios <cit.>.
|
http://arxiv.org/abs/2307.06035v1 | 20230712093224 | Systole functions and Weil-Petersson geometry | [
"Yunhui Wu"
] | math.DG | [
"math.DG",
"math.CV",
"math.GT"
] |
|
http://arxiv.org/abs/2307.04189v1 | 20230709144340 | Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning | [
"Tsai Hor Chan",
"Fernando Julio Cendra",
"Lan Ma",
"Guosheng Yin",
"Lequan Yu"
] | cs.CV | [
"cs.CV"
] |
Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning
Tsai Hor Chan^1,*, Fernando Julio Cendra^1,2*, Lan Ma^2, Guosheng Yin^1,3, Lequan Yu^1
^1Department of Statistics and Actuarial Science,
The University of Hong Kong
^2TCL Corporate Research Hong Kong
^3Department of Mathematics, Imperial College London
{hchanth, fcendra}@connect.hku.hk, [email protected], [email protected], [email protected]
=======================================================================================================================================================================================================================================================================================================================================================================================
*The first two authors contributed equally to this work.
Graph-based methods have been extensively applied to whole slide histopathology image (WSI) analysis due to the advantage of modeling the spatial relationships among different entities.
However, most of the existing methods focus on modeling WSIs with homogeneous graphs (i.e., with a homogeneous node type).
Despite their successes, these works are incapable of mining the complex structural relations between biological entities (e.g., the diverse interactions among different cell types) in the WSI.
We propose a novel heterogeneous graph-based framework to leverage the inter-relationships among different types of nuclei for WSI analysis.
Specifically, we formulate the WSI as a heterogeneous graph with “nucleus-type” attribute to each node and a semantic similarity attribute to each edge.
We then present a new heterogeneous-graph edge attribute transformer (HEAT) to take advantage of the edge and node heterogeneity during message aggregation.
Further, we design a new pseudo-label-based semantic-consistent pooling mechanism to obtain graph-level features, which can mitigate the over-parameterization issue of conventional cluster-based pooling.
Additionally, observing the limitations of existing association-based localization methods, we propose a causal-driven approach attributing the contribution of each node to improve the interpretability of our framework.
Extensive experiments on three public TCGA benchmark datasets demonstrate that our framework outperforms the state-of-the-art methods with considerable margins on various tasks.
Our codes are available at https://github.com/HKU-MedAI/WSI-HGNNhttps://github.com/HKU-MedAI/WSI-HGNN.
§ INTRODUCTION
Histopathology slides provide rich information on
diagnosis and treatment planning for many cancer diseases.
The recent technological advancements in tissue digital scanners facilitate the development of whole slide histopathology image (WSI) analysis.
However, traversing through the WSI with diverse magnifications is time-consuming and tedious for pathologists due to the large-scale nature of the WSI (e.g., its typical size is 60,000 × 60,000 pixels).
Hence deep learning techniques play an important role as they introduce accurate and automated analysis of WSIs, which can significantly relieve the workload of pathologists.
Since it is difficult to fit the complete WSI into the memory, most of the works adopt multiple instance learning (MIL) to divide the WSI into instances and then aggregate them for WSI analysis.
However, these methods operate on bags of instances that do not emphasize the inter-relationships between these instances.
Recently, the emergence of graph neural networks (GNNs) has made large progress in representing the spatial relationships between instances.
As a result, there are many attempts to represent the WSIs as graphs of instances.
Figure <ref> presents an example of a graph constructed from WSI.
Unlike convolutional neural networks (CNNs) that aggregate features based on locality in the Euclidean space, GNNs focus on locality on graph topology, which offers more flexibility in analyzing the deep connections between features in the image data beyond the spatial locality <cit.>.
For example, GNNs are able to learn relational information and distinguish cells based on their apposition to tumor cells, or normal stroma (i.e., cells which are tumor-infiltrating lymphocytes or from an adjacent inflammatory response), which are important for prognosis <cit.>.
However, existing paradigms on graph-based WSI analysis focus on representing the WSI with a homogeneous graph structure and then predicting the response via vanilla GNNs with cluster-based pooling (i.e., based on similarities of node embeddings).
Despite their successes, these methods suffer from several drawbacks:
(i) GNNs on homogeneous graphs focus on aggregating direct relational information from neighboring nodes, where the complex relational information of the graphs is often neglected.
(ii) For different graphs, the clusters defined by similarities between node embeddings have inconsistent meanings. This introduces a large degree of freedom in parameters and leads to an over-parameterization issue <cit.>.
Therefore, GNNs tend to easily overfit due to a lack of identifiability <cit.>.
In view of these limitations, we propose a novel framework for WSI analysis, which leverages a heterogeneous graph to learn the inter-relationships among different types of nodes and edges.
The heterogeneous graph introduces a “nucleus-type" attribute to each node, which can serve as an effective data structure for modeling the structural interactions among the nuclei in the WSI.
To tackle the aggregation process in the heterogeneous graph, we propose a novel heterogeneous-graph edge attribute transformer (HEAT) architecture which can take advantage of the edge and node heterogeneity.
Thus, the diverse structural relations among different biological entities in the WSI can be incorporated to guide the GNN for more accurate prediction.
Further, to obtain the graph-level representations for slide-level prediction, we propose a semantic-consistent pooling mechanism — pseudo-label (PL) pooling, which pools node features to graph level based on clusters with a fixed definition (i.e., nucleus type).
The proposed PL pooling can regularize the graph pooling process by distilling the context knowledge (i.e., pathological knowledge) from a pretrained model to alleviate the over-parameterization issue <cit.>.
Additionally, we propose a Granger causality <cit.> based localization method to identify the potential regions of interest with clinical relevance to provide more insights to pathologists and promote the clinical usability of our approach.
We extensively evaluate our method on three TCGA public benchmark datasets, including colon adenocarcinoma cancer (COAD) and breast invasive carcinoma (BRCA) datasets from the TCGA project <cit.> and the Camelyon 16 dataset <cit.>, and compare to various latest state-of-the-art (SOTA) methods.
Our method outperforms the competitors on cancer staging, cancer classification, cancer typing, and localization tasks.
§ RELATED WORKS
Multiple Instance Learning on WSIs.
Existing WSI analysis approaches generally adopt MIL
<cit.>, which first divide the WSI into fixed-size patches and then compress the information of these patches into low-dimensional vectors.
Conventional methods aggregate bags of instances to learn WSI-level features for final predictions.
Tellez <cit.> compress the WSI-level image into embedding vectors and use a standard CNN to perform patch-level and WSI-level cancer classification.
These CNN-based methods analyze local areas in Euclidean space with fixed connectivity (i.e., fixed-size kernels), limiting their ability to capture relations beyond spatial locality.
Graph-based methods <cit.> have recently been proposed, which model the interactions between instances via graphs.
Their capability of modeling instances based on graph topology provides more flexibility to analyze complex structures of WSIs.
Chen <cit.> propose patch-GCN, a method of modeling WSI with homogeneous graphs, and regress survival data with a graph convolutional neural network (GCN) <cit.>.
Zheng <cit.> propose a graph-based MIL method using graph transformer networks <cit.>.
In spite of their power, most of these WSI methods use homogeneous graphs, which limits the information mined from WSIs.
A recent method <cit.> is proposed to model WSIs with heterogeneous graphs, where the heterogeneity in each patch is introduced by different resolution levels.
However, it only considers the resolution level heterogeneity of patches, with insufficient ability to model the complex contextual interaction between patches in the same resolution level.
Graph Neural Networks.
Although the SOTA GNNs have shown great successes in many problem domains <cit.>, they are mostly focused on homogeneous graphs <cit.>.
These architectures extract the locality information on the graph topology and learn the graph representations by performing aggregation on neighboring nodes.
However, the potential heterogeneity in nodes and edges is not incorporated by these homogeneous GNN algorithms, and therefore their capability in mining the structural information is limited.
Several works attempt to address the heterogeneity in their architectural designs <cit.> and assume that the relation type is finite and discrete.
However, when modeling images with graphs, the heterogeneity in relations is typically continuous (e.g., the similarity between nodes) or high-dimensional. Although there are several attempts <cit.> to extend SOTA GNNs <cit.> to incorporate edge attributes, their works are limited to homogeneous graphs.
Graph Pooling.
Graph pooling aims to aggregate node-level features to obtain graph-level features. Conventional methods <cit.> directly take the average of node-level features to extract graph-level features, which tends to over-smooth the signals of the nodes and cannot generate representative graph-level features.
Recently, there is extensive development of graph pooling algorithms based on the clusters of the embeddings <cit.>.
However, the clusters constructed based on similarity are inconsistent across graphs.
This leads to a large degree of freedom in parameters which easily causes overfitting.
A semantic-consistent pooling method is therefore needed.
Explaining GNNs.
Despite the success of graph neural networks, the poor interpretability of their parameters means they are notoriously regarded as “blackboxes".
With the advances in network attribution methods <cit.>, extensive attempts have been made to open such “blackboxes" <cit.>. Generating network explanations is an important qualitative step in WSI analysis since it can highlight abnormal regions for further investigation.
Conventional explainers try to find the associations between the parameters in deep neural networks (or the nodes in GNNs) and the predictions.
GNNExplainer <cit.> is the SOTA method explaining the contributions of node features to the GNN predictions.
It trains feature masks on each node and edge feature to minimize the prediction loss of a trained GNN.
PGExplainer <cit.> shares the same objective as GNNExplainer and trains a generative model to generate explanations.
Recently, there has been emerging attention in generating causal explanations for GNNs <cit.>, and most of the methods focus on the Granger causality as the explanation objective.
Gem <cit.> trains explanation generators from the causal perspective. Causal explainers attempt to provide explanations of features that are causal to, rather than merely associated with, the neural network prediction.
§ PRELIMINARIES
Heterogeneous Graph: A heterogeneous graph is defined by a graph 𝒢 = (𝒱, ℰ, 𝒜, ℛ), where 𝒱, ℰ, 𝒜 represent the set of entities (vertices or nodes), relations (edges), and entity types, respectively.
And ℛ represents the space of edge attributes.
For v ∈𝒱, v is mapped to an entity type by a function τ(v) ∈𝒜. An edge e = (s, r, t) ∈ℰ links the source node s and the target node t, and r is mapped to an edge attribute by a function ϕ(e) = r ∈ℛ.
Every node v has a d-dimensional node feature x ∈𝒳, where 𝒳 is the embedding space of node features.
Granger Causality <cit.>: Let ℐ be all the available information and ℐ_-X be the information excluding variable X. If we can make a better prediction of Y using ℐ than using ℐ_-X, we conclude that X Granger-causes Y.
WSI Classification:
Given a WSI X and a heterogeneous graph 𝒢 constructed from X, we wish to predict the label y with a GNN model ℳ. We also aim to assign an importance score f(v) to each node v ∈𝒱 in 𝒢 as the causal contribution of each patch to the prediction for localization.
§ METHODOLOGY
§.§ Heterogeneous Graph Construction
We introduce our methodology of modeling the WSI with a heterogeneous graph.
Figure <ref> presents the overall workflow of our proposed framework.
We adopt the commonly used OTSU thresholding algorithm <cit.> and sliding window strategy to crop each WSI into non-overlapping patches.
Uninformative patches with backgrounds are removed.
These patches define the nodes of the graph constructed.
To define the corresponding node type, we use HoverNet <cit.> pretrained on the PanNuke dataset <cit.> to classify the patches into predefined types.
HoverNet detects nuclei in each patch and assigns types to these nuclei.
By majority votes, we take the most frequently predicted nucleus type to be the type of the patch.
Figure <ref> presents an example of a WSI with patches selected from the OTSU and node types generated by HoverNet <cit.>.
We use a pretrained feature encoder (i.e., KimiaNet <cit.>) to obtain the embeddings of each patch, which serves as the features of each node in the heterogeneous graph.
Based on the nodes and node features, we define the edges and edge attributes between the patches. For each node v ∈𝒱, we use the k-nearest neighbor algorithm to find k nodes that have the most similar features to that node, and connect edges between node v and these neighboring nodes.
For each edge, we compute the Pearson R correlation between the head and tail node features as the edge attributes. The edge attributes introduce heterogeneity in edges and highlight meta-relations in the WSI.
We adopt data augmentations (e.g., randomly removing some edges) during training to alleviate the potential noises introduced by the edge attributes.
As a result, we obtain a heterogeneous graph 𝒢 with heterogeneity introduced by different node types and edge attributes.
As shown in Figure <ref>, a heterogeneous graph outlines the meta-relations between the nuclei in a WSI.
Mining these meta-relations can reveal the structural interactions between the cells, leading to improved performances on different tasks.
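As a rough illustration of this construction step, the following Python sketch builds the node set, node types, k-nearest-neighbour edges, and Pearson-R edge attributes. The arrays `patch_features` and `patch_types` stand in for the outputs of the pretrained KimiaNet encoder and the HoverNet majority vote; the released implementation may differ in its details.

```python
# Sketch of the heterogeneous graph construction (illustrative only).
# `patch_features` (N x d array) is assumed to come from a pretrained encoder
# (e.g., KimiaNet) and `patch_types` (length-N array of ints) from the
# HoverNet majority vote over detected nuclei.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_heterogeneous_graph(patch_features, patch_types, k=5):
    n = patch_features.shape[0]
    # k-nearest neighbours in feature space (k+1 since each node is its own neighbour)
    nn = NearestNeighbors(n_neighbors=min(k + 1, n)).fit(patch_features)
    _, idx = nn.kneighbors(patch_features)

    edges, edge_attrs = [], []
    for s in range(n):
        for t in idx[s][1:]:                  # skip the node itself
            # Pearson R between head and tail node features as the edge attribute
            r = np.corrcoef(patch_features[s], patch_features[t])[0, 1]
            edges.append((s, t))
            edge_attrs.append(r)
    return {
        "node_feat": patch_features,          # node features (from the encoder)
        "node_type": np.asarray(patch_types), # "nucleus-type" attribute per node
        "edge_index": np.array(edges).T,      # shape (2, num_edges)
        "edge_attr": np.array(edge_attrs),    # continuous edge heterogeneity
    }
```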
§.§ Heterogeneous Edge Attribute Transformer
The conventional graph attention mechanism is incapable of tackling the heterogeneity of the graph.
Inspired by the transformer architecture <cit.> and its extension on graphs <cit.>, we propose a new graph aggregation layer, named the Heterogeneous Edge Attribute Transformer (HEAT) layer, to aggregate the structural relations between biological entities in the built heterogeneous graph.
We explicitly incorporate the node types and continuous edge features into the aggregation process, which guides the learning of edge similarities.
Our proposed architecture also generalizes existing architectures to incorporate continuous or high-dimensional edge attributes, and keeps the linear layers simple to avoid overfitting caused by model over-parameterization.
For each edge e = (s, r, t) and each attention head i, we project the target node t into a query vector with a linear projection layer W^i_τ(t), and the source node s into a key vector with W^i_τ(s). We also compute the value vector h_value^i of each source node with the same projection layer W^i_τ(s)
h_key^i = W^i_τ(s) H^(l-1)_s, h_query^i = W^i_τ(t) H^(l-1)_t,
h_value^i = W^i_τ(s) H^(l-1)_s,
where H^(l-1)_v is the input node feature for node v ∈𝒱 from the (l-1)-th layer.
These projection layers can project node features of various node types into a node-type-invariant embedding space.
The edge attributes from the (l-1)-th layer h_e^(l-1) are also projected to h'_e = W_edge h_e^(l-1) by a linear projection layer W_edge.
After projecting the node embeddings, we compute the dot-product similarity between the query and key vectors and further multiply the linear transformed edge attribute to the similarity score to incorporate the edge attributes in 𝒢.
We then concatenate the scores from each head and take the softmax of the score (i.e., overweights of incoming edges for all neighboring nodes) to obtain the final attention scores to the value vector h_value^i,
Attention(e) = softmax_{∀ s ∈ N(t)} ( ‖_{i ∈ [1,h]} ATT(e, i) ),
ATT(e, i) = ( h_key^i · h'_e · h_query^i ) / √(d),
where N(t) is the set of all the source nodes pointing to target node t, d is the dimension of node embeddings, ATT(e, i) represents the i-th head attention score of edge e, ‖_{i ∈ [1,h]} is the concatenation operator concatenating the attention scores from all heads, and Attention(e) represents the final attention score of the edge aggregating all the heads.
We multiply the attention score obtained by the value vector to obtain the output features.
By doing so, the output features contain both the node-type and edge-attribute-specific information. Hence the HEAT layer can capture the structural information in 𝒢 by transforming the node features from different node types. It can also model different semantic relations since edge attributes are included in the aggregation.
Finally, we perform target-specific aggregation to update the feature of each target node by averaging its neighboring node features. We concatenate all h attention heads to obtain the attention vector for each pair of source and target nodes. For each target node t, we conduct a softmax operation on all the attention vectors from its neighboring nodes and then aggregate the information of all neighboring source nodes of
t together.
The updated node features H^(l)_t for 𝒢_l can be represented as
H^(l)_t = ⊕_{∀ s ∈ N(t)} ( ‖_{i ∈ [1,h]} h^i_value · Attention(e) ),
where ⊕ is an aggregation operator (e.g., mean aggregation).
The updated graph 𝒢_l is returned as the output of the l-th HEAT layer. Algorithm <ref> demonstrates the overall process of our proposed HEAT layer.
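To make the aggregation concrete, the sketch below implements a simplified, single-head variant of this computation in PyTorch. It is only illustrative of the equations above (type-specific projections, edge-modulated dot-product attention, softmax over incoming edges, weighted aggregation) and is not the authors' released multi-head implementation.

```python
# Simplified, single-head sketch of the HEAT aggregation (illustrative only).
import torch
import torch.nn as nn

class HEATLayerSketch(nn.Module):
    def __init__(self, dim, num_node_types):
        super().__init__()
        # one type-specific projection per node type (shared for key and value here)
        self.W_type = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_node_types)])
        self.W_edge = nn.Linear(1, 1)        # projects the scalar edge attribute
        self.dim = dim

    def forward(self, H, node_type, edge_index, edge_attr):
        src, tgt = edge_index                # each edge points source -> target
        # project node features into a node-type-invariant embedding space
        proj = torch.stack([self.W_type[int(t)](H[i]) for i, t in enumerate(node_type)])
        h_key, h_value, h_query = proj[src], proj[src], proj[tgt]
        h_edge = self.W_edge(edge_attr.unsqueeze(-1)).squeeze(-1)
        # edge-modulated dot-product attention score, scaled by sqrt(d)
        score = (h_key * h_query).sum(-1) * h_edge / self.dim ** 0.5
        # softmax over the incoming edges of each target node
        alpha = torch.exp(score - score.max())
        denom = torch.zeros(H.size(0)).index_add_(0, tgt, alpha) + 1e-9
        alpha = alpha / denom[tgt]
        # weighted aggregation of source values into their target nodes
        return torch.zeros_like(H).index_add_(0, tgt, alpha.unsqueeze(-1) * h_value)
```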
§.§ Pseudo-label Graph Pooling
We introduce a novel pooling method — pseudo-label (PL) pooling, to aggregate information with respect to the pseudo-labels (i.e., node types) predicted from a pretrained teacher network (e.g., HoverNet <cit.>).
Unlike conventional methods of pooling features based on clusters, we define clusters using a pretrained node classifier.
Pooling from pseudo-labels ensures the semantic consistency in cluster definitions and distills the context knowledge (e.g., nuclei features) from the teacher network.
Specifically, for each node type a, we pool all node features belonging to type a into a single vector h_a with a readout layer.
The pooled features from each node type are then aggregated into a feature matrix S ∈ℝ^|𝒜| × d.
The graph level feature is then determined by another readout layer (e.g., mean readout).
Algorithm <ref> presents the workflow of the proposed PL Pooling.
By pooling with the pseudo-labels, we are able to cluster patch representation according to nuclei types, such that the graph-level features are enhanced with the prior knowledge on nuclei type distributions.
The detailed mechanism of the PL Pool is presented in the supplementary materials.
We also perform an ablation study in Table <ref> and show that PL Pooling outperforms existing pooling methods in cancer classification tasks.
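A minimal sketch of PL Pooling with mean readouts at both levels (one possible instantiation of the readout layers) is given below.

```python
# Sketch of pseudo-label (PL) pooling with mean readouts (one possible choice).
import torch

def pl_pool(node_feat, node_type, num_types):
    d = node_feat.size(1)
    S = torch.zeros(num_types, d)                # per-type pooled features (|A| x d)
    for a in range(num_types):
        mask = node_type == a
        if mask.any():
            S[a] = node_feat[mask].mean(dim=0)   # readout within each pseudo-label
    return S.mean(dim=0)                         # graph-level readout over node types
```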
§.§ Prior Knowledge Regularization
Here we discuss the motivation for introducing prior knowledge in our proposed HEAT and PL pooling algorithms.
In the context of WSI analysis, data are scarce, while the data distributions are sparse and high-dimensional. The curse of dimensionality makes it difficult for the sampling distributions to approximate the properties of the true distributions of the WSIs. This leads to a significant gap between training and testing distributions.
Hence regularization techniques are needed to reduce the model variance and mitigate performance deterioration when transferring the model from training to testing environments.
Since WSI data contain enriched prior knowledge (e.g., the interaction among different cell types), integrating such knowledge into the framework regularizes the model, such that the testing performance improves.
Therefore, we design the above two designs by integrating prior knowledge into the feature aggregation procedure.
Specifically, for the HEAT layer, we integrate the prior knowledge of node type and node attributes when extracting node-level features.
For PL Pooling, we pool node-level features using prior definitions on node clusters.
Moreover, we perform data augmentations (e.g., random pruning on edges and nodes) to regularize the learning from training distributions.
Besides that, other regularization techniques, such as imposing a Gaussian prior on the model weights (i.e., using a Bayesian neural network), would also achieve this goal.
§.§ Causal-driven Localization
We make use of the Granger causality to outline causal regions in the WSI with the causal graph explainer <cit.>.
Given a trained GNN model ℳ, the causal contribution of each node v is given by
Δ_δ, v = ℒ(y, y_𝒢) - ℒ(y, y_𝒢\{ v}),
where y is the true label and y_𝒢 = ℳ(𝒢) and y_𝒢\{ v} = ℳ(𝒢\{ v}) are the predicted labels from ℳ with input graphs 𝒢 and 𝒢\{ v}, respectively.
The causality heatmap of the patches can then be visualized with the causal contribution computed for each patch (i.e., node). Addressing causality in instance interpretation can adjust for observational and selection biases, which would improve the explanation accuracy.
Moreover, the causal property of the explainer could facilitate pathologists to find out potential biomarkers for diagnosis and prognosis by highlighting the patches with clinical relevance in the WSI.
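The causal contribution above can be computed by simple node deletion, as in the following sketch; `remove_node` is a hypothetical helper that removes a node and its incident edges, and the loss is assumed to be cross-entropy.

```python
# Sketch of the Granger-causality-based node attribution.
# `remove_node(graph, v)` is a hypothetical helper dropping node v and its edges.
import torch
import torch.nn.functional as F

def causal_contributions(model, graph, y, remove_node):
    model.eval()
    scores = []
    with torch.no_grad():
        loss_full = F.cross_entropy(model(graph), y)                      # L(y, y_G)
        for v in range(graph.num_nodes):
            loss_wo_v = F.cross_entropy(model(remove_node(graph, v)), y)  # L(y, y_{G\{v}})
            scores.append((loss_full - loss_wo_v).item())                 # Delta for node v
    return scores
```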
§ EXPERIMENTS
§.§ Datasets
We use WSIs from the public TCGA–COAD (cancer staging task: 1304 cases, classification task: 1434 cases), TCGA–BRCA (cancer staging task: 1328 cases, classification task: 1712 cases), and TCGA–ESCA (typing task: 213 cases) from the TCGA project <cit.> and Camelyon 16 <cit.> as the benchmark datasets.
On average, around 300 patches are sampled from each WSI in the TCGA datasets (around 5,000 for Camelyon 16), where each patch corresponds to a node in the final heterogeneous graph.
For the TCGA–COAD and the TCGA–BRCA datasets, we conduct two tasks for the benchmark methods — cancer staging and cancer classification.
For the cancer staging task, all the cases are divided into the “Stage I", “Stage II", “Stage III", and “Stage IV" classes.
For the cancer classification task, all the cases are divided into the “Normal" and “Tumor" classes.
For the cancer typing task, we use TCGA–ESCA dataset where all the cases are divided into two classes i.e., “Type I: adenocarcinoma" and “Type II: squamous cell carcinoma".
We also evaluate the localization ability of our framework on the Camelyon 16 dataset, as this dataset provides the tumor mask annotations.
A detailed summary of datasets is provided in supplementary materials.
§.§ Implementation Details
The proposed framework is implemented in Python with the Pytorch library on a server equipped with four NVIDIA TESLA V100 GPUs.
We use openslide <cit.> as the tool to process the WSIs.
The dropout ratio of each dropout layer is selected as 0.2.
All models are trained with 150 epochs with early stopping.
The batch size is selected as 2.
We adopt the cross-entropy loss to train the network for classification tasks.
We use the Adam optimizer to optimize the model with a learning rate of 5 × 10^-5 and a weight decay of 1 × 10^-5.
We perform data augmentations on the training graphs by randomly dropping the edges and nodes, and adding Gaussian noises to the node and edge features.
§.§ Experiment Settings and Evaluation Metrics
We compare our method with an array of SOTA methods, including MIL or graph-based methods. We use five-fold cross-validation to evaluate the overall performance of our framework and other methods. We used the pretrained KimiaNet as the feature extraction for all methods for a fair comparison.
The details of compared methods are listed below.
* ABMIL <cit.>: a MIL framework aggregating bag-level instance information by the attention mechanism.
* DSMIL <cit.>: a dual-stream multiple instance learning method using max pooling and attention to aggregate the signals from the individual patches.
* ReMix <cit.>: a general and efficient MIL-based framework for WSI analysis that takes advantage of data augmentation and a reduce step to produce rich features.
* PatchGCN <cit.>: a hierarchical graph-based model on survival data with patient-level and WSI-level aggregations. We adapt this method as a GCN model with global attention pooling <cit.>.
* GTNMIL <cit.>: a graph-based MIL method based on the graph transformer network <cit.>.
* H^2-MIL <cit.>: a tree-graph-based multiple instance learning method that utilizes different magnification levels to represent hierarchical features.
For the cancer staging, classification and typing tasks, we use AUC, classification accuracy, and macro F-1 score as the evaluation metrics.
Percentage [%] values are reported for each of the metrics.
Standard errors are reported in brackets.
For all metrics, a higher value indicates a better performance.
Detailed definitions of the evaluation metrics can be found in the supplementary materials.
§.§ Comparison with Other Methods
Quantitative Results.
Table <ref> shows the cancer staging and classification results on the TCGA–COAD and the TCGA–BRCA datasets, and Table <ref> presents cancer typing results on the TCGA–ESCA dataset.
Compared to graph-based WSI analysis methods <cit.>, our method demonstrates improved performance, which indicates our graph modeling method potentially better represents the interaction of patches in a WSI than existing graph-based methods.
We also observe that aggregation on a graph of instances is more effective than aggregation on bags of instances in the staging tasks, which implies graph-based methods are more capable of capturing the global information of WSI for staging tasks than conventional MIL methods <cit.>.
We further compare HEAT on the BRCA subtyping task with a recent SOTA method on WSI — hierarchical image pyramid transformer (HIPT) <cit.>.
Our method achieves an AUC of 89.69 (SD: 3.63), which outperforms the AUC of 87.4 (SD: 6.0) by HIPT.
Additionally, we perform a t-test on the AUCs to demonstrate the statistical significance of our improvements over the SOTA methods, for which the results are presented in Table <ref>.
We observe that the improvements are statistically significant over most of the baseline methods under the 0.05 significance level.
Qualitative Results.
We compute the causal contribution of each patch using Equ. (<ref>).
We visualize the patch image associated with that node to outline the causal regions related to the predictions.
We also compare our causal explanation method to numerous baseline graph interpretation methods based on associations <cit.>.
Figures in the supplementary materials present the explanation results with different graph explainers on the Camelyon 16 dataset.
It is observed that using an association-based explainer provides a smooth heatmap where many regions are highlighted as important.
Such a heatmap is less accurate in localizing the tumor regions, and pathologists still need to traverse a large number of abnormal regions suggested by the explainer to identify tumor regions.
On the contrary, we observe that using a causal explainer can outline the tumor regions in the WSIs more accurately, with the heatmap more concentrated on the ground-truth tumor regions compared to association-based explainers (e.g., GNNExplainer <cit.>).
§.§ Analysis of Our Framework
Effectiveness of Heterogeneous Graph Construction.
We compare our method with other SOTA GNNs <cit.> to evaluate the effectiveness of our heterogeneous graph construction.
For heterogeneous graph transformer (HGT) <cit.> and HetRGCN <cit.>, we define the discrete edge types — each relation either has the “positive" type representing positive correlations between the nodes of the edge, or the “negative" type representing negative correlations.
Table <ref> presents cancer typing results of our method compared to various SOTA GNN aggregation methods on the TCGA–ESCA dataset.
Not only does our method outperform SOTA homogeneous GNN architectures <cit.>, but it is also superior to some recent heterogeneous GNN architectures <cit.>.
This implies the advantage of our proposed architecture for graph-based WSI analysis.
Analysis of Different Pooling Strategies.
We compare our proposed pooling strategy to a variety of comparable pooling methods, including basic pooling methods, such as sum/max/mean poolings and advanced pooling strategies <cit.>.
Table <ref> presents the comparison results of cancer classification on TCGA–COAD dataset.
We fix the model architecture to be GCN <cit.> and the feature encoder as KimiaNet <cit.>.
It is observed that our pooling strategy outperforms the competitors, which validates the advantage of using semantic-consistently defined clusters in pooling.
Performance on Different Class Distributions.
We observe that the WSI datasets for cancer classification are imbalanced (i.e., approximately ten cancer WSIs to one normal WSI). We thus compose a balanced dataset (i.e., normal:cancer = 1:1) with an undersampling strategy to study how the difference in class distributions affects the performance of our model.
Table <ref> presents the comparison. It is observed that our method achieves performance similar to the unbalanced setting (see Table <ref>).
Generalizability.
The pretrained features are a key component of our proposed framework.
As the pretrained embedding models are from a diverse WSI context, they can extract good features from most of the WSI datasets.
Because the PanNuke dataset <cit.> (used to pretrain the HoverNet node type classifier) contains WSIs of most of the common cancer types, this leads to a broad generalization of HoverNet.
Furthermore, one may adopt contrastive learning to fine-tune the pretrained models to improve their generalizability to new datasets in potential deployment scenarios.
Accuracy of HoverNet.
The performance of the HoverNet classifier would influence the sensitivity of our framework.
Since the PanNuke dataset contains WSIs of most of the common cancer types and cohorts of the TCGA dataset (e.g., COAD), there are domain overlaps between them.
Hence the HoverNet trained on the PanNuke dataset can be transferred to the TCGA dataset for patch types classification with good performance.
Furthermore, we perform cancer classification on COAD using node types generated by unsupervised K-means clustering.
The performance (AUC: 98.5) is lower than that using HoverNet predicted node types (AUC: 99.9).
This demonstrates that incorporating the pretrained HoverNet outperforms unsupervised methods and improves WSI analysis.
§ CONCLUSION
We present a novel heterogeneous graph-based framework for WSI analysis.
By modeling WSI as a heterogeneous graph with various node types and edge attributes, our method not only leverages the locality information, but also mines the complex relational information of WSI.
We further design a novel heterogeneous edge attribute transformer architecture to aggregate the structural information in the graph and a semantic consistent pooling method to address the potential over-parameterization problems in conventional pooling.
We provide a causal explanation mechanism to highlight the causal contributions of the instances to improve the clinical usability of our work.
Extensive experiments on public datasets validate the effectiveness of our proposed framework and our framework could be adapted to other graph-based computer vision tasks, such as 3D point cloud analysis and anomaly detection.
Acknowledgement. We thank the anonymous reviewers and the area chair for their insightful comments on our manuscript.
This work was partially supported by the Research Grants Council of Hong Kong (17308321), the Theme-based Research Scheme (T45-401/22-N), the National Natural Science Fund (62201483), and the HKU-TCL Joint Research Center for Artificial Intelligence sponsored by TCL Corporate Research (Hong Kong).
http://arxiv.org/abs/2307.06126v1 | 20230712122537 | Guided Bottom-Up Interactive Constraint Acquisition | ["Dimos Tsouros", "Senne Berden", "Tias Guns"] | cs.AI | ["cs.AI"]
Constraint Acquisition (CA) systems can be used to assist in the modeling of constraint satisfaction problems.
In (inter)active CA, the system is given a set of candidate constraints and posts queries to the user with the goal of finding the right constraints among the candidates.
Current interactive CA algorithms suffer from at least two major bottlenecks. First, in order to converge, they require a large number of queries to be asked to the user.
Second, they cannot handle large sets of candidate constraints, since these lead to large waiting times for the user. For this reason, the user must have fairly precise knowledge about what constraints the system should consider.
In this paper, we alleviate these bottlenecks by presenting two novel methods that improve the efficiency of CA.
First, we introduce a bottom-up approach named GrowAcq that reduces the maximum waiting time for the user and allows the system to handle much larger sets of candidate constraints. It also reduces the total number of queries for problems in which the target constraint network is not sparse.
Second, we propose a probability-based method to guide query generation
and show that it can significantly reduce the number of queries required to converge.
We also propose a new technique that allows the use of openly accessible CP solvers in query generation, removing the dependency of existing methods on less well-maintained custom solvers that are not publicly available.
Experimental results show that our proposed methods outperform state-of-the-art CA methods, reducing the number of queries by up to 60%. Our methods work well even in cases where the set of candidate constraints is 50 times larger than the ones commonly used in the literature.
§ INTRODUCTION AND RELATED WORK
Constraint programming (CP) is considered one of the main paradigms for solving combinatorial problems, with many successful applications in a variety of domains. However, there are still challenges to be faced in order for CP technology to become even more widely used. One of the most important challenges is to ease the modeling process. The current assumption in CP is that the user first models the problem and that a solver is then used to solve it. However, modeling is a non-trivial task. Expressing a combinatorial problem as a set of constraints over decision variables is not straightforward and requires substantial expertise <cit.>. As a result, modeling is considered a major bottleneck for the widespread adoption of CP <cit.>.
This obstacle has led to research into a very different approach to modeling: that of learning the constraint problem from data, as opposed to manually constructing it. This is the focus of the research area of constraint acquisition (CA), in which CP meets machine learning. In CA, the model of a constraint problem is acquired (i.e., learned) (semi-)automatically from a set of examples of solutions, and possibly non-solutions. CA methods can be categorized as active or passive on the basis of whether a user provides feedback during learning or not.
In passive acquisition, a dataset of examples of solutions and non-solutions is provided by the user upfront. Based on these examples, the system learns a set of constraints modeling the problem <cit.>.
Approaches vary in the types of constraints they are able to learn and the methodologies they employ: Conacq.1 is a version space algorithm for learning fixed-arity constraints <cit.>, ModelSeeker learns global constraints that are taken from a predefined constraint catalog <cit.>, and COUNT-CP is a generate-and-aggregate approach that can learn expressive first-order constraints <cit.>.
None of these approaches are robust to errors in the labeled data. To this end, SeqAcq and BayesAcq were introduced, being robust to noise in the training set. In SeqAcq, a statistical approach based on sequential analysis is used <cit.>, while in BayesAcq, a naive Bayes classifier is trained, from which a constraint network is then derived <cit.>.
In contrast to passive learning, active or interactive acquisition systems learn the constraints through interaction with the user, by asking queries. The main type of query used is the membership query, which asks the user to classify a given example (i.e., an assignment to the variables of the problem) as a solution or a non-solution. An early work in active CA is the Matchmaker agent <cit.>, where users, when they answer a membership query negatively, also have to provide a violated constraint. In order to lower the expertise level required from the user, Bessiere et al. later proposed Conacq.2 <cit.> – an active version of Conacq.1 that uses membership queries and does not require the user to provide any violated constraints. In <cit.>, Conacq.2 was in turn extended to also accept arguments regarding why examples should be rejected or accepted.
As the number of membership queries needed can be exponentially large for these methods <cit.>, a new family of interactive algorithms was proposed that use partial queries instead <cit.>. A partial query asks the user to classify a partial assignment to the variables. Using partial queries, CA systems are able to converge faster. QuAcq was the first system to use partial queries <cit.>, and was later extended into MultiAcq <cit.>. MQuAcq was later introduced to reduce the number of queries needed per learned constraint <cit.>, and further improved the performance by exploiting the structure of the constraints already learned <cit.>.
Despite these advancements in active CA, there are still significant obstacles for the technology to become usable in practice.
One of the main limitations is that it typically still requires asking a large number of queries to the user in order to find all constraints.
In addition, existing systems cannot handle large sets of candidate constraints in reasonable run times, and thus require significant expertise from the user in limiting the constraints the system should consider (and thus the size of the candidate set) upfront.
Finally, query generation – a highly important part of the CA process – currently requires the use of customized solvers that are not publicly available and are not as well-maintained as conventional solvers. Without the use of such customized solvers, current active CA algorithms can lead to very high query generation times or are sometimes unable to converge to the correct set of constraints when time limits are imposed <cit.>.
We focus on the above limitations, and contribute the following improvements:
* We present a novel query generation method named PQ-Gen that allows conventional constraint solvers to be used by CA algorithms while also ensuring convergence, removing the dependency on customized solvers.
* We propose a bottom-up learning approach named GrowAcq that uses any other CA algorithm to learn the constraints of an increasingly large problem. It starts learning with only a subset of variables and an associated subset of candidate constraints, and incrementally grows this set of variables and constraints. This allows it to handle significantly larger sets of candidate constraints and reduces the maximum waiting time for the user.
* Finally, we introduce a better way to guide the query generation process, with the goal of generating queries that learn the set of constraints faster.
We propose an objective function for query generation that uses probabilistic estimates of whether constraints are likely to hold or not.
We demonstrate the potential of this method by using a simple counting-based approach as probabilistic estimator.
The rest of the paper is structured as follows. Some background on CA is given in Section <ref>. <Ref> present our proposed methods. An experimental evaluation is given in Section <ref>. Finally, Section <ref> concludes the paper.
§ BACKGROUND
We now introduce some basic notions regarding constraint satisfaction problems and interactive constraint acquisition.
§.§ Constraint satisfaction problems
A constraint satisfaction problem (CSP) is a triple P = (X, D, C), consisting of:
* a set of n variables X = {x_1, x_2, ..., x_n}, representing the entities of the problem,
* a set of n domains D = {D_1, D_2, ..., D_n}, where D_i ⊂ℤ is the finite set of values for x_i,
* a constraint set (also called constraint network) C = {c_1, c_2, ..., c_t}.
A constraint c is a pair (rel(c), var(c)), where var(c) ⊆ X is the scope of the constraint and rel(c) is a relation over the domains of the variables in var(c) that specifies (implicitly or explicitly) what assignments are allowed. |var(c)| is called the arity of the constraint.
The constraint set C[Y], where Y ⊆ X, denotes the set of constraints from C whose scope is a subset of Y. The set of solutions of a constraint set C is denoted by sol(C).
A redundant or implied constraint c ∈ C is a
constraint in C such that sol(C) = sol(C∖{c}).
A (partial) assignment e_Y is an assignment over a set of variables Y ⊆ X. e_Y is rejected by a constraint c iff var(c) ⊆ Y and the projection e_var(c) of e_Y on the variables in the scope var(c), is not in rel(c), that is, is not allowed by the constraint.
κ_C(e_Y) represents the subset of constraints from C[Y] that reject e_Y, i.e., κ_C(e_Y) = {c | c ∈ C[Y] ∧ e_var(c) ∉ rel(c)}.
A complete assignment e that is accepted by all the constraints in C is a solution to C, i.e., e ∈ sol(C).
A partial assignment
e_Y is called a partial solution to C iff it is accepted by all the constraints in C[Y]. Note that a partial solution to C may not be extendable to a complete one, due to constraints not in C[Y].
§.§ Active constraint acquisition with partial membership queries
In CA, the pair (X, D) is called the vocabulary of the problem at hand and is common knowledge shared by the user and the system. Besides the vocabulary, the learner is also given a language Γ consisting of fixed-arity constraint relations. Using the vocabulary (X, D) and the constraint language Γ, the system generates the constraint bias B, which is the set of all possible candidate constraints for the problem.
Let C_T, the target constraint network, be an unknown set of constraints such that for every assignment e over X it holds that e ∈ sol(C_T) iff e is a solution to the problem the user has in mind.
The goal of CA is to learn a constraint set C_L ⊆ B that is equivalent to C_T. Like other works, we assume that the bias B can represent C_T, i.e., there exists a C ⊆ B s.t. sol(C) = sol(C_T).
In active CA, the system interacts with the user while learning the constraints.
A membership query <cit.> in this setting is a question ASK(e_X), asking the user whether a complete assignment e_X is a solution to the problem that the user has in mind.
A partial query ASK(e_Y), with Y ⊂ X,
asks the user to determine if e_Y, which is an assignment in D^Y,
is a partial solution with respect to C_T[Y].
We use the notation c ∈ C_T iff ∀ e_Y ∈ D^Y with var(c) ⊆ Y ⊆ X, ASK(e_Y) = True ⟹ e_var(c) ∈ sol(c).
While in passive acquisition there are methods that can handle noisy answers <cit.>, this is not the case for active acquisition. For this reason, in this work, we follow the assumption that the user answers all queries correctly.
A query ASK(e_Y) is called irredundant iff the answer is not implied by any information already available to the system. That is, the query is irredundant iff e_Y is rejected by at least one constraint from the bias B and is not rejected by the network C_L learned thus far.
The first condition captures that κ_B(e_Y) cannot be empty, since if κ_B(e_Y) were empty, the answer to the query ASK(e_Y) would have to be `yes', based on the assumption that C_T is representable by the constraints in B.
since otherwise the user would certainly answer `no' to the query.
<Ref> presents the generic process followed by active CA methods with partial queries. The learned set C_L is first initialized either to the empty set or to a set of constraints given by the user that is known to be part of C_T (i.e., C_in⊂ C_T) (line 1). Then the main loop of the acquisition process begins, where, in every iteration, the system first generates an irredundant query (line 3) and posts it to the user (line 5). If the query is answered positively, then the candidate constraints from B that violate it are removed (line 6). Otherwise, the system has to find one or more constraints from C_T that violate the query. This is done in two steps. First, queries are asked to find the scope of a constraint in κ_C_T(e) (line 8). Then, queries are asked to find all constraints c ∈ C_T with that scope (line 9).
The acquisition process has converged on the learned network C_L ⊆ B iff C_L agrees with the set of all labeled examples E, and for every other network C ⊆ B that agrees with E, it holds that sol(C) = sol(C_L). This is proved if no query could be generated at line 3, as in this case, all remaining constraints in B (if any) are redundant.
If the first condition
is true but the second condition
has not been proved when the acquisition process finishes,
premature convergence has occurred. This can happen when the query generation at line 3 returns e=nil, but without having proved that an irredundant query does not exist (e.g., because of a time limit).
Existing algorithms like QuAcq <cit.>, MQuAcq <cit.> and MQuAcq-2 <cit.> follow this template, but differ mainly in how they implement lines 3, 8 and 9, and hence how many constraints they are able to learn in each iteration. Examples of functions used to locate the scope of a constraint (line 8) are FindScope <cit.> or the more efficient FindScope-2 <cit.>. To learn the constraints in the scope found (line 9), the FindC function is typically used <cit.>.
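For concreteness, the template of Algorithm <ref> can be sketched in Python as follows, where `generate_query`, `find_scope`, `find_c` and `violates` are placeholders for the query generator, the FindScope(-2) and FindC subroutines, and a constraint-violation check, and `ask` is the user's membership oracle.

```python
# Sketch of the generic active CA loop (placeholder subroutines, illustrative only).
def active_ca(bias, ask, generate_query, find_scope, find_c, violates, C_in=None):
    C_L = set(C_in) if C_in else set()
    while True:
        e = generate_query(C_L, bias)        # irredundant (partial) query, or None
        if e is None:
            return C_L                       # convergence (possibly premature)
        if ask(e):                           # user accepts the example
            bias -= {c for c in bias if violates(e, c)}   # shrink the bias
        else:                                # user rejects: locate a culprit constraint
            scope = find_scope(e, C_L, bias, ask)
            C_L |= find_c(e, scope, bias, ask)
```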
§ USING CONVENTIONAL SOLVERS FOR QUERY GENERATION
Query generation (line 3 of <Ref>)
is one of the most important parts of the CA process.
It aims to find an irredudant membership query (i.e., a (partial) assignment that does not violate C_L but violates at least one c ∈ B) that will be asked to the user. Thus, it can be formalized as follows:
find e_Y s.t. e_Y ∈ sol( C_L[Y] ∧ ⋁_{c_i ∈ B[Y]} c_i ),
which can be formulated as a CSP with variables Y and constraints C_L[Y] ∧ ⋁_{c_i ∈ B[Y]} c_i.
§.§ Problems when using conventional solvers
In principle, this CSP could be solved using any conventional CP solver.
However, this can lead to issues for the following two reasons.
A large bias
At the start of the acquisition process, the set of candidate constraints B can be very large. This makes the propagation of the constraint ⋁_c_i ∈ B[Y] c_i time-consuming, and severely slows down the query-generation process.
Indirectly implied constraints
At the end of the acquisition process, only constraints that are implied by C_L remain in B, if any. In this case, it will be impossible to generate a query that does not violate C_L and violates at least one constraint from B. However, propagation is often unable to prove such implications when they are indirect and involve multiple variables and constraints. For this reason, solvers internally end up enumerating all possible variable assignments satisfying C_L and checking if the constraint ⋁_c_i ∈ B[Y] c_i can be satisfied. This can be very time-consuming, and a time limit is usually imposed on query generation, leading to premature convergence.
In order to limit the large runtimes in a more advanced way than by simply imposing a time bound t, Addi et al. proposed a method using conventional solvers named TQ-Gen <cit.>. It iteratively tries to solve the query generation problem, by gradually reducing the number of variables taken into account by a proportion α∈]0, 1[, until a query can be generated within a small time limit τ.
This is repeated until either an irredundant query is generated, or a global time bound t is reached, leading to premature convergence.
However, choosing the right hyperparameters for t and τ
is problem-specific <cit.> and requires tuning, and thus more interaction with the user.
§.§ Customized solvers
To avoid premature convergence, a CP solver can be customized to store partial assignments that satisfy every c ∈ C_L[Y] and violate at least one c ∈ B[Y] during the search.
Given an objective function, such as maximizing the number of assigned variables, in every non-failing node of the search tree it will check the above property and, if fulfilled, store the best-scoring partial assignment.
As these customized solvers are guaranteed to find valid partial solutions, their use will never lead to premature convergence.
In addition, finding a partial query to return is not time-consuming (especially when combined with specialized search heuristics <cit.>), even when the bias is large.
However, such custom solvers are not publicly available and are typically not based on the latest version of state-of-the-art solvers. This also means that the corresponding active CA methods are heavily tied to those particular customized solvers.
§.§ Projection-based Query Generation
We now introduce a method named Projection-based Query Generation (PQ-Gen) that makes it possible to use state-of-the-art conventional solvers for query generation, without premature convergence.
Our proposed method is shown in <Ref>.
Avoiding indirectly implied constraints
A key observation we make is that when generating a query on line 3, it might be that ⋃_c ∈ B var(c) ⊂ X, that is, some variables have no more candidate constraints in B. These have become irrelevant, as both lines 6 and 8 are only concerned with κ_B(e), which will not include these variables. So, to generate an irredundant query, it is sufficient to consider only the variables in B.
This is not only faster, but also avoids indirectly implied constraints, as these are indirect through variables not used in B.
Thus,
our proposed query generator projects the variables down to Y ⊆ X, with
Y = ⋃_c ∈ B var(c), thereby simplifying the problem to finding an assignment over Y ⊆ X. This will inherently result in a partial assignment when Y is a strict subset of X, without requiring a custom solver.
Thus, we first compute the set of variables Y relevant to the query (line 2), and project C_L down to those variables (on lines 4 and 7). The solver then has to prove that there exists a query that satisfies C_L[Y] and violates at least one constraint from B.
Dealing with large biases As mentioned above, having a large bias B can severely slow down the solver during query generation because propagating the ⋁_c_i ∈ B[Y] c_i constraint takes a long time. However, we observe that when B contains many constraints, the property that a query e violates at least one of these is usually satisfied without needing to enforce this. Hence, we propose not using this constraint when the bias is larger than some threshold (lines 3 to 6 in <Ref>). If in a post-hoc check, it turns out that the generated query violates at least one c ∈ B, it is directly returned (line 6). Otherwise, we again generated a query, this time with the constraint enforcing that there must exist a constraint in B that is violated (line 7).
Optimizing the query
The above ensures that we will always find a valid query. However, much better queries – according to some objective function – can often be found. This would take additional time, but is safe because, since a valid query has already been found, the optimization can always safely be interrupted.
Given a time limit, we can hence call an optimization solver for the remaining time after a first valid query has been found (lines 8-11).
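The following sketch illustrates how PQ-Gen can be implemented on top of the open-source CPMpy modelling library; `scope_vars` is a hypothetical helper returning the variables in a constraint's scope, the threshold and time limit are placeholder values, and the handling of time-outs and of the final optimisation pass is simplified compared to the full algorithm.

```python
# Sketch of PQ-Gen with CPMpy (illustrative only).
import cpmpy as cp

def pq_gen(C_L, B, scope_vars, blimit=5000, time_limit=2):
    Y = {v for c in B for v in scope_vars(c)}          # only variables still in the bias
    if not Y:
        return None                                    # nothing left to learn
    m = cp.Model([c for c in C_L if set(scope_vars(c)) <= Y])   # C_L projected onto Y
    if len(B) <= blimit:                               # for a small bias, enforce violation
        m += cp.any([~c for c in B])
    if not m.solve(time_limit=time_limit):
        return None
    if all(c.value() for c in B):                      # post-hoc check: nothing violated?
        m += cp.any([~c for c in B])                   # enforce the disjunction and retry
        if not m.solve(time_limit=time_limit):
            return None                                # remaining bias is implied: converged
    # (a second, interruptible optimisation solve can now improve the query)
    return {v: v.value() for v in Y}
```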
As expressed in Proposition <ref>, Algorithm <ref> is correct.
Given a bias B, with an unknown target network C_T being representable by B, and a learned constraint set C_L, if nil is returned by <Ref>, then the system has converged on C_T[X].
When nil is returned by <Ref>, it means that ∄ e ∈ sol(C_L[Y] ∧ ⋁_{c_i ∈ B[Y]} c_i), with Y = ⋃_{c ∈ B} var(c). In order to prove convergence over all of X, we must have ∄ e ∈ sol(C_L[X] ∧ ⋁_{c_i ∈ B[X]} c_i). We will now show that when Y = ⋃_{c ∈ B} var(c), it holds that
∄ e ∈ sol(C_L[Y] ∧ ⋁_{c_i ∈ B[Y]} c_i) ⟹ ∄ e ∈ sol(C_L[X] ∧ ⋁_{c_i ∈ B[X]} c_i)
Assume that <Ref> returns nil, i.e., that no assignment exists in a Y ⊂ X that is accepted by C_L[Y] and rejected by B[Y]. This means that all the constraints in B[Y] are proved to be implied by the constraints in C_L[Y]. Thus, the remaining constraints in B, that are not proved to be redundant, are the constraints c ∈ B ∖ B[Y]. When we know that Y = ⋃_c ∈ B var(c) it means that B[Y] = B, so B ∖ B[Y] = ∅. As a result, in this case, all the constraints in B are proved to be implied. Hence, no assignment that is accepted by C_L and rejected by B exists in X.
§ BOTTOM-UP CONSTRAINT ACQUISITION
We start by observing that all current active CA algorithms always consider either the full set of variables X, or a large subset Y ⊆ X, in their top-level loop (lines 2-9 in Algorithm <ref>).
This generally leads to complete or almost-complete queries getting generated (line 3 of Algorithm <ref>).
However, larger queries are generally harder to answer than smaller queries <cit.>. Also, a large initial query leads to many additional queries getting posed in the scope-finding method on line 8. That is because the worst-case complexity of the best scope-finding methods, in terms of the number of queries required, is Θ(log(|Y|)), where Y ⊆ X is the set of variables considered <cit.>.
Additionally, by directly considering the whole set of variables, the CA algorithm has to represent and operate on the entire set of candidate constraints (i.e., the bias B) at once. The bias is used in many parts of the acquisition process. Hence, the memory requirements and the run time of the acquisition process increase significantly as the bias grows, either because the problems contain more variables or because the language Γ given to the system includes a larger number of relations. This means that, in practice, state-of-the-art active CA methods are only applicable to problems with not too many variables or problems for which the user already has relatively precise knowledge about what constraints the system should consider (which corresponds to the bias being small).
To improve on this, we propose a novel meta-algorithm named GrowAcq (Algorithm <ref>).
The key idea is to call a CA algorithm on an increasingly large subset of the variables Y ⊆ X,
each time using only a relevant unexplored subset of the bias.
GrowAcq begins with Y = ∅ (line 2) and gradually incorporates more variables (lines 3-5). Once a new variable x_i ∈ X has been added to Y, the new problem becomes to find the new C_T[Y]. However, as C_T[Y ∖{x_i}] was already found in the previous iterations, the set of constraints to seek is actually C_T[Y] ∖ C_T[Y ∖{x_i}]. To find C_T[Y] ∖ C_T[Y ∖{x_i}], any existing active CA algorithm can be used. We represent this with the function Acq (line 7).
In every iteration, only a part of the bias B is needed, namely B[Y] ∖ B[Y ∖{x_i}], and as shown in Lemma <ref>, the bias constructed at line 6 is equivalent to B[Y] ∖ B[Y ∖{x_i}].
Let Y_i be the set of variables Y in iteration i after line 5 of <Ref> and B_i = { c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i ∧ x_i ∈ var(c)} be the bias B constructed at line 6 in iteration i. It holds that B_i = B[Y_i] ∖ B[Y_{i-1}].
At line 6 of <Ref>, the bias B is constructed. For each iteration i, it is constructed as B_i = { c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i ∧ x_i ∈ var(c)}. For a set of variables Y_i, the full bias, which includes all candidate constraints, is B[Y_i] = {c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i}. For the previous iteration, as Y_{i-1} = Y_i ∖ {x_i}, we know that B[Y_{i-1}] = {c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i ∖ {x_i}}. Thus, the additional constraints that are in B[Y_i] and not in B[Y_{i-1}] are the ones with a scope var(c) ⊆ Y_i for which x_i ∈ var(c):
B[Y_i] ∖ B[Y_{i-1}] = {c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i} ∖ {c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i ∖ {x_i}}
= { c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i ∧ x_i ∈ var(c)} = B_i
Hence, it holds that B_i = B[Y_i] ∖ B[Y_{i-1}] for B_i = { c | rel(c) ∈ Γ ∧ var(c) ⊆ Y_i ∧ x_i ∈ var(c)}.
This bottom-up approach alleviates the problems described above, i.e., starting from large initial queries and having to represent the whole bias from the beginning, in two ways. First, it naturally leads to partial queries of increasing size in the first step of the `inner' CA system (<Ref> line 5). This is valuable since smaller queries are generally easier for the user to answer <cit.>, and also a smaller initial query leads to a lower worst-case number of additional queries to locate scopes.
Second, since the algorithm only stores and uses a small part of the bias at a time (line 6 of <Ref>), it is able to handle significantly larger biases than the state-of-the-art. Not representing the whole bias in every iteration does not affect the algorithm's correctness, as we state in <Ref>.
Given a bias B built from a language Γ, with bounded arity constraints, and a target network C_T representable by B, GrowAcq is correct (i.e., will learn a constraint set C_L that is equivalent to C_T), as long as a correct (i.e., sound and complete) CA algorithm is used in line 7.
(Sketch)
Let us now prove that if any correct algorithm is used in line 7 of Algorithm <ref> – like QuAcq, MQuAcq or MQuAcq-2 – GrowAcq remains correct. We will subscript sets with the number of the iteration that they occur in to distinguish between the iterations. Even though the full bias B is never constructed and never kept in memory all at once in GrowAcq, we will still refer to it in this proof and denote it with B, i.e., B = {c | rel(c) ∈ Γ ∧ var(c) ⊆ X}. When we instead write B_i, we refer to the part of the bias that is constructed and used in iteration i (line 6 of Algorithm <ref>), which is B[Y_i] ∖ B[Y_{i-1}] (Lemma <ref>).
Soundness.
GrowAcq adds constraints to C_L only at line 7 of <Ref>. At that line, only constraints returned from the inner interactive CA algorithm are added to C_L. Since the assumption is that a sound algorithm is used in the Acq function, GrowAcq is sound.
Completeness.
We prove that GrowAcq is complete by proving by induction that, after each iteration i, C_L is equivalent to C_T[Y_i], meaning that after the last iteration, C_L is equivalent to C_T[X].
GrowAcq starts with Y_1 = ∅, so both C_T[Y_1] and B_1 are empty. The first iteration where the algorithm has to actually learn any constraints will be the one where Y grows large enough so that C_T[Y] ≠∅. Assume that this happens at iteration k. In this case, C_T[Y_k] will be representable by B_k, because B_k = B[Y_k] ∖ B[Y_k-1] and we know that C_T[Y_k-1] = ∅. Since C_T[Y_k] is representable by B_k, it will be successfully learned in line 7, as long as a complete interactive CA algorithm is used.
Assuming now that C_L = C_T[Y_n] holds at the end of the n-th iteration, let us now prove that C_L = C_T[Y_n+1] will hold at the end of the n+1-th iteration. From the assumption that C_L = C_T[Y_n], it follows that (B[Y_n] ∖ C_L) ∩ C_T = ∅. As a result, B_n+1, being equal to B[Y_n+1] ∖ B[Y_n] does not exclude any constraint from C_T[Y_n+1] that has not already been learned. From this, it follows that (C_T[Y_n+1] ∖ C_L) ⊆ B_n+1, and thus this set of constraints will be learned in line 7 as long as a complete interactive CA algorithm is used.
Hence, GrowAcq is complete.
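For illustration, the GrowAcq loop itself can be sketched as follows, with `make_bias` constructing candidate constraints from the language Γ over a variable set, `scope` returning a constraint's scope, and `acq` any inner interactive CA algorithm (e.g., QuAcq or MQuAcq-2); these names are placeholders and not part of an actual API.

```python
# Sketch of the GrowAcq meta-algorithm (placeholder helpers, illustrative only).
def grow_acq(X, gamma, acq, make_bias, scope, ask):
    C_L, Y = set(), []
    for x in X:
        Y.append(x)
        # only candidates whose scope lies within Y and contains the new variable x
        B = [c for c in make_bias(gamma, Y) if x in scope(c)]
        C_L |= acq(Y, B, C_L, ask)           # learn C_T[Y] \ C_T[Y \ {x}]
    return C_L
```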
§ GUIDED QUERY GENERATION
We now turn our attention to the objective function used
at line 9 of <Ref>. Since when GrowAcq is used, the size of B used in every iteration is reduced, query generation is now often fast, leaving sufficient room for using optimization to find a good query.
The objective function used in existing query generation systems <cit.> tries to maximize the number of constraints from B that are violated by the generated query e. The motivation is that this can potentially help shrink the bias faster. The objective function is
e = argmax_e ∑_{c ∈ B} [e ∉ sol({c})]
where [·] is the Iverson bracket which converts True/False into 1/0.
However, looking only at the number of violated constraints in B does not fully capture what a good query is:
* We want queries that lead to a positive answer to violate many constraints from the bias B, as these can then all be removed from B, shrinking it faster.
* On the other hand, we want queries that lead to a negative answer to violate a small number of constraints from B, as it allows the CA system to find the conflicting constraint faster.
Based on this, in order to generate good queries regardless of the user's answer, we want query generation to minimize the violation of constraints that are in the unknown target set C_T, seeking a query to which the user's answer will be “yes”. At the same time, we want to maximize the violation of constraints in B that are not in C_T, so that positive answers can shrink the bias faster (the first bullet point above).
Note that we also have the constraint ensuring that at least one constraint from B has to be violated. This means that when B∖ C_T = ∅, we want a minimum number of constraints in C_T that we have not already learned to be violated. This leads to negative queries that violate a small number of constraints in B (the second bullet point above).
Assume we have access to an oracle O that tells us whether a constraint c belongs to the unknown target set or not: O(c) = (c ∈ C_T). Using this oracle we can formulate an objective function for query generation, using the reasoning above, as follows:
∑_c ∈ B [e ∉ sol({c})] · (1 - |Γ| · O(c)),
On the one hand, every time that the oracle returns False for a constraint from the bias that is violated by e, the objective function is increased by 1, thereby maximizing the violation of these constraints.
Conversely, for constraints where O returns True, we aim to minimize the violations, which requires a reduction in the objective value for each such violated constraint.
However, it is possible that violating a set of constraints C (where ∀ c_i ∈ C | O(c_i) = False) may imply the violation of a constraint c_j with O(c_j) = True.
In such cases, if the reduction in the objective value for violating c_j is not large enough, the system will violate both C and c_j, maximizing the objective.
To address this issue, we introduce a “penalty” of |Γ|, which is equal to the upper bound of the number of constraints in each scope.
This ensures that the system prioritizes satisfying a constraint with O(c_j) = True, over violating other constraints from B.
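Continuing the sketch above, the penalized objective could be written as follows; the oracle is replaced by a placeholder predicate (the model M described in the next paragraphs would take its place), and the weighting relies on CPMpy's usual 0/1 treatment of Boolean expressions in arithmetic.

gamma_size = 6                              # |Γ|, e.g. {>=, <=, <, >, !=, ==}

def estimate_in_target(c):
    return False                            # placeholder for the oracle O(c); see M(c) below

weights = [1 - gamma_size * int(estimate_in_target(c)) for c in bias]
guided = cp.Model()
for c in learned:
    guided += c
guided += cp.sum([~c for c in bias]) >= 1   # still require at least one violated candidate
guided.maximize(cp.sum([w * (~c) for w, c in zip(weights, bias)]))
guided.solve()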
Modeling the oracle
Observe how the current objective of maximizing violations corresponds to using a model of the oracle M that always answers False, i.e., that assumes that none of the candidate constraints belong to C_T.
On the other hand, if we used an oracle M that always answers True, then the query generation would try to violate as few constraints as possible. However, the ⋁_c_i ∈ B[Y] c_i constraint would still need to be satisfied, in the extreme case leading every query to violate exactly one constraint from B.
Based on this observation, we propose to model the oracle using the following model M, which tries to determine for every constraint c whether violating or satisfying c would lead to the least amount of queries later on in the algorithm.
M(c) = ( 1/P[c ∈ C_T]≤ log(|Y|) )
On the one hand, in the extreme case, the constraints for which M(c) answers True will be violated one by one in the later queries (once most of the constraints for which M(c) answers False have been dealt with). Let P[c ∈ C_T] be a probabilistic estimate of whether c is part of C_T. Then, if the generated queries would violate the constraints with that probability one by one, we would in expectation need 1/P[c ∈ C_T] queries to find a constraint from C_T. For example, for a set of constraints that each has a probability of 25%, 1 in every 4 queries is expected to lead to a c ∈ C_T being learned.
On the other hand, for each constraint c ∈ C_T for which M(c) answers False, a scope-finding procedure is needed to locate the violated constraint. The most efficient functions commonly used to do it (i.e., FindScope <cit.> or FindScope-2 <cit.>) have been shown to require Θ(log(|Y|)) queries to find a violated constraint c ∈ C_T in the worst case, where Y is the number of variables considered in query generation. As a result, we estimate the number of queries needed in this case as k · log(|Y|), with k a constant. We found k = 1 to work well in practice.
Probability estimation To compute the probability P(c ∈ C_T) of a constraint c ∈ B, we use a simple approach, considering only information from the relations rel(c) of the constraints. More specifically, to compute P(c ∈ C_T), we count the number of times a constraint with relation rel(c) has been added to C_L, and divide it by the total number of times that such a constraint has been removed from B.
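A sketch of this counting-based estimate, together with the decision rule M(c) described above, might look as follows; the counter updates are assumed to be performed by the surrounding acquisition loop, and all names are illustrative rather than taken from our implementation.

import math
from collections import defaultdict

added_to_CL = defaultdict(int)      # per relation: times such a constraint entered C_L
removed_from_B = defaultdict(int)   # per relation: times such a constraint left B

def prob_in_target(rel):
    # Counting-based estimate of P[c in C_T] for a constraint with relation rel.
    if removed_from_B[rel] == 0:
        return 0.0                  # no evidence yet
    return added_to_CL[rel] / removed_from_B[rel]

def M(rel, n_query_vars):
    # True if violating such constraints one by one is expected to be cheaper
    # than locating them via FindScope (k = 1, as found to work well in practice).
    p = prob_in_target(rel)
    if p == 0.0:
        return False                # expected 1/p queries would be unbounded
    return 1.0 / p <= math.log(n_query_vars)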
Much more advanced estimation techniques, including machine learning methods, can be used for more accurate estimation. We leave this for future work.
§ EXPERIMENTAL EVALUATION
In this section, we empirically answer the following research questions:
(Q1) Does using PQ-Gen with conventional solvers avoid premature convergence, and how do CA systems perform when they use it?
(Q2) Does GrowAcq (using MQuAcq-2) perform better than using MQuAcq-2 directly?
(Q3) How does our probability-guided query generation objective function perform compared to the one used in current CA systems?
(Q4) How does the combination of our methods perform?
(Q5) How do our methods perform on problems with a huge bias B?
§.§ Benchmarks
We used the following benchmarks:
Jigsaw Sudoku.
The Jigsaw Sudoku is a variant of Sudoku in which the 3 × 3 boxes are replaced by irregular shapes.
It consists of 81 variables with domains of size 9. The target network consists of 811 binary ≠ constraints, on rows, columns, and shapes.
The bias B was constructed using the language Γ = {≥, ≤, <,>,≠, = } and contains 19 440 binary constraints.
Murder.
The Murder puzzle problem consists of 20 variables with domains of size 5. The target network contains 4 cliques of 10 ≠ constraints and 12 additional binary constraints. The bias was initialized with 760 constraints based on the language Γ = {≥, ≤, <,>,≠, = }.
Random.
We used a problem with 100 variables and domains of size 5. We generated a random target network with 495 ≠ constraints. The bias was initialized with 19 800 constraints, using the language Γ = {≥, ≤, <,>,≠, = }.
Golomb rulers.
The problem is to find a ruler where the distance between any two marks is different from that between any other two marks. We built a simplified version of a Golomb ruler with 8 marks, with the target network consisting only of quaternary constraints.[The ternary constraints derived when i = k or j = l in |x_i - x_j| ≠ |x_k - x_l| were excluded, as also done in the literature <cit.>]
The bias, consisting of 238 binary and quaternary constraints, was created with the language Γ = {≥, ≤, <,>, ≠, =, |x_i - x_j| ≠ |x_k - x_l| }.
Job-shop scheduling.
The job-shop scheduling problem involves scheduling a number of jobs, consisting of several tasks, across a number of machines, over a certain time horizon.
The decision variables are the start and end times of each task.
There is a total order over each job's tasks, expressed by binary precedence constraints.
There are also constraints capturing the duration of the tasks and that tasks should not overlap on the same machine.
The language Γ = {≥, ≤, >,≠, =, x_i + c = x_k } was used, with c being a constant from 0 up to the maximal duration of the jobs. We used a problem instance containing 10 jobs, 3 machines (i.e., |X| = 60) and a time horizon of 15 steps, leading to a bias containing 14 160 constraints.
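In all of the above, the bias is generated by applying the relations of Γ to (tuples of) variables. A minimal sketch of such a construction for the purely binary languages, assuming CPMpy variables, is given below; the actual benchmark generators may differ (e.g., for the quaternary Golomb relation or the Job-shop language).

from itertools import combinations
import cpmpy as cp

def binary_bias(variables):
    # All candidate constraints from Γ = {>=, <=, <, >, !=, ==} over variable pairs.
    bias = []
    for x, y in combinations(variables, 2):
        bias += [x >= y, x <= y, x < y, x > y, x != y, x == y]
    return bias

X = cp.intvar(1, 9, shape=81, name="x")     # e.g., the 81 Jigsaw Sudoku cells
B = binary_bias(list(X))
print(len(B))                               # 81*80/2 * 6 = 19 440 candidate constraints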
§.§ Experimental setup
Let us now give some details about the experimental settings:
* All the experiments were conducted on a system carrying an Intel(R) Core(TM) i9-11900H, 2.50 GHz clock speed, with 16 GB of RAM.
* We measure the total number of queries #q, the average time of the query generation process T̅_gen
(line 3 of <Ref>), the average waiting time T̅ per query for the user, and the total time needed (to converge) T_total. All times are presented in seconds.
The difference between T̅_gen and T̅ is that the latter takes into account also the queries posed on lines 8-9 of <Ref>, which are very fast to compute.
* We evaluate our methods in comparison with the state-of-the-art method MQuAcq-2 <cit.>.
* All methods and benchmarks were implemented in Python [Our code is available online in: https://github.com/Dimosts/ActiveConLearn] using the CPMpy constraint programming and modeling library <cit.>, except for the experiments using custom solvers.[For the custom solver based query generators from <cit.>, we obtained the implementations (in C++) through personal communication with the authors.]
* The results presented in each benchmark, for each algorithm, are the means of 10 runs.
We now discuss the results of our experimental evaluation, based on the questions we posed at the beginning of the section.
§.§ [Q1] Performance of PQ-Gen
Both PQ-Gen, our projection-based query generation approach, and TQ-Gen <cit.> (discussed in Section <ref>) involve hyperparameters that affect their performance. Thus, we first performed a hyperparameter sensitivity analysis to assess their performance under different configurations. In tandem with TQ-Gen, we also used the adjust function described in <cit.>.
We used the JSudoku benchmark for this comparison. For TQ-Gen, we fixed the hyperparameter α to 0.8 as recommended in <cit.>, and used τ = {0.05, 0.1, 0.2, 0.3} and t = {0.5, 1, 1.5, 2}. For PQ-Gen hyperparameters, we used l = {3000, 5000, 7500, 10000} and t = {0.5, 1, 1.5, 2}. Thus, we examined 16 different configurations for each. A summary of the results is shown in <Ref>.[More details regarding this experiment can be found in <Ref>.]
Confirming our analysis, with our PQ-Gen there is never a case of premature convergence, no matter what hyperparameters are used. On the other hand, when TQ-Gen is used, the system fails to converge in the majority of cases, and specific hyperparameter values have to be chosen to ensure convergence.
In addition, our PQ-Gen shows much better performance both in terms of the number of queries needed and, especially, in terms of runtime.
In more detail, we compared our projection-based query generation (PQ-Gen) with a baseline where we run a conventional CP solver to directly solve the query generation problem, using a one-hour time limit, as well as with query generation methods from the literature, i.e., TQ-Gen and the custom solver based query generators from <cit.>. For PQ-Gen and TQ-Gen we used the best configuration found in the previous experiment. That is, we run PQ-Gen with l = 5000 and t = 1 and TQ-Gen with τ = 0.2 and t = 2.
We used benchmarks that are similar to the ones used in <cit.>. For consistency, we used the same state-of-the-art query generation objective function across all methods that accept one, i.e., our PQ-Gen and the custom solvers, which tries to maximize the number of violated constraints from B. The results are shown in <Ref>.
We can observe that convergence was reached in all cases, except for the baseline in which a conventional solver was used directly with a time limit. Our method PQ-Gen and the baseline show similar performance in terms of the number of queries needed, while being much better than TQ-Gen in JSudoku and Random, where B is larger, especially when considering time performance.
On the other hand, when custom solvers are used, we can see that the time performance has improved and the number of queries has decreased. This happens because the custom solver can return a partial assignment of any size, trying only to maximize the value of the objective function used and utilizing heuristics from the literature. In contrast, when our PQ-Gen is used, the generated query has to be a complete solution over a specific (sub)set of variables, which takes more time to compute.
As a result, we observed that custom solvers often return queries that violate more constraints from B, which helps MQuAcq-2 shrink the bias faster in terms of the number of queries needed.
§.§ [Q2 - Q4] Evaluating GrowAcq and guided query generation
Hereafter, we continue our experiments using PQ-Gen, as the motivation was to investigate techniques that work with any solver. Since PQ-Gen allows us to use conventional solvers, which can run on any given benchmark (in contrast to the custom solvers from <cit.>, where only specific constraint relations are implemented), from now on we will use all of the benchmarks mentioned in <Ref>.
[Q2] Using GrowAcq within MQuAcq-2
We now evaluate the performance of GrowAcq, our proposed bottom-up CA approach. To evaluate it, we used MQuAcq-2 as the inner CA algorithm within GrowAcq (line 7 of <Ref>) and compared this to using MQuAcq-2 directly on the full-sized problem. <Ref>, top two blocks, presents the results.
We can observe that using GrowAcq reduces the number of queries in JSudoku, Murder, and Random, while a slight increase can be seen in Golomb. In Job-shop, the increase in the number of queries is somewhat larger (25%). This is because the target constraint network in this benchmark is sparse: most iterations of GrowAcq on a Y ⊂ X learn no constraint from C_T and only shrink the bias. So, when MQuAcq-2 is used directly on the full-sized problem, the bias B can be shrunk with fewer queries. On the other hand, when the target network is not sparse, there is a decrease in the number of queries of up to 19%, because the system can locate the scopes of the constraints faster, starting from a Y ⊂ X every time. Based on the above observations, we can see that GrowAcq learns constraints with fewer queries, but needs more queries to shrink the bias.
Finally, although the total time is almost the same in most problems, and slightly increased in JSudoku and Golomb, the average time per query has not noticeably increased, while the maximum time the user has to wait between two queries has decreased significantly (up to 88% in the Job-shop benchmark), due to the overall reduction in the time needed in query generation in almost all problems (as indicated by the T̅_qgen column). As the (maximum) waiting time for the user is of paramount importance for interactive settings, we can see that GrowAcq improves this aspect of time performance of interactive CA systems.
[Q3] Guided query generation In order to evaluate the performance of our proposed objective function for guiding query generation, we compare it with the use of the most popular objective function used in state-of-the-art CA systems, i.e., maximizing violations of constraints from B. The objective functions are utilized in line 9 of <Ref>. For this comparison, GrowAcq is used, again with MQuAcq-2 as the inner acquisition algorithm at line 7 of <Ref>.
The results using the guided query generation can be seen in <Ref>, bottom-two blocks, comparing GrowAcq +
MQuAcq-2 against GrowAcq + MQuAcq-2 _guided.
We can see that, when using our probability-based guidance for query generation, the number of queries has significantly decreased in JSudoku, Murder, and Golomb, while it has remained nearly the same in Random and Job-Shop. In the latter cases, the number of queries has not decreased because these are under-constrained problems, and thus the probability derived from the constraints' relations was small. This led to maximizing the violations of all constraints in B (i.e., the same behavior as with the existing objective).
On the other hand, in the problems that do not have a sparse constraint network,
where using the simple counting method to compute the probabilities of the constraints could effectively guide the acquisition system, the decrease observed in the number of queries is substantial (32% in JSudoku, 30% in Murder, and 64% in Golomb). However, as violating constraints one-by-one leads to more queries generated at line 3 of <Ref>, yet fewer queries at lines 8-9, which are very fast to compute, there is a small increase in the total time on JSudoku.
[Q4] Combination of our methods Comparing the combination of our methods (i.e., GrowAcq + MQuAcq-2 _guided) with MQuAcq-2 (<Ref>), we can see that combining our bottom-up approach with guiding the query generation greatly outperforms MQuAcq-2 in terms of the number of queries needed to achieve convergence on most of the benchmarks.
The number of queries decreased on all benchmarks except Job-shop, where we need 23% more queries because of its sparse target network: GrowAcq increases the number of queries needed to converge in underconstrained problems, for the reasons described in section <ref>, while guiding the query generation does not improve it, as the estimated probabilities are always low. In the rest of the problems, we observe a total decrease of 16% in Random, 39% in JSudoku, 32% in Murder, and up to 60% in Golomb.
These results demonstrate the effectiveness of the proposed methods in reducing the number of queries needed for CA algorithms, which is crucial in interactive scenarios.
§.§ [Q5] Dealing with larger biases
To answer this question, we evaluated GrowAcq and the combination of our methods on larger instances of the Job-shop benchmark, using the same language as before. We used two instances: one with 15 jobs, 11 machines, and 40 steps (denoted as JS-15-11), which resulted in a bias consisting of 542 850 constraints, and one with 19 jobs, 12 machines, and again 40 steps (denoted as JS-19-12), resulting in a bias of 1 037 400 constraints. The results are presented in <Ref>.
On the one hand, GrowAcq needs more queries to converge (like on the smaller Job-Shop instance) because the constraint network of this problem is sparse.
Yet the total time needed to converge is one order of magnitude lower than in MQuAcq-2, being 24.4 times faster in the instance with a bias size of 0.5 million constraints and 25.6 times faster in the instance with |B| > 1M.
In addition, the maximum waiting time has drastically decreased by using GrowAcq (and the combination GrowAcq and guiding query generation), from 5 499 seconds to only 3 (resp. 8) seconds in JS-15-11 and from more than 20 371 seconds to only 7 (resp. 6) seconds in JS-19-12. Importantly, the average waiting time is more than 30 times lower when using GrowAcq. Note that, as in the smaller job-shop instance, guiding does not lead to improvement in terms of the number of queries.
However, it does not noticeably worsen the time performance of the system.
Hence, the experiments confirm that the proposed methodology can efficiently handle significantly larger sets of candidate constraints than the state of the art, up to 50 times larger than the ones commonly used in the literature <cit.>.
§ CONCLUSIONS
Some of the most important limitations of interactive CA methods are the large number of queries needed to converge, as well as the size of the candidate constraint set that they can handle efficiently. In this work, we presented novel methods to alleviate these
issues, improving the efficiency of CA systems. We proposed a bottom-up approach, which allows the system to handle significantly larger biases, reducing the maximum waiting time for the user, and also reducing the total number of queries needed when the target constraint network is not sparse. We also introduced a probabilistic method to guide query generation, further reducing the number of posted queries when our simple counting method could guide the acquisition system to learn constraints more efficiently. In addition, we presented a new query generation technique, named PQ-Gen, that allows the use of conventional CP solvers, removing the dependency of existing methods on customized solvers to converge. Our experimental evaluation showed that our proposed methods outperform state-of-the-art systems in terms of the number of queries in problems with non-sparse constraint networks, reducing this number up to 60%. In addition, the experiments show that GrowAcq can handle up to 50 times larger biases than the ones commonly used in the literature, allowing CA to tackle increasingly large and complex problems.
The biggest avenue for future work is to further investigate additional ways to reduce the number of queries needed, e.g., by using guidance in all parts of the acquisition process (not just the query generation), and with more advanced probabilistic models. Another important avenue is to consider the setting in which user answers can be noisy
as has been investigated for passive systems.
§ HYPERPARAMETER EVALUATION FOR PQ-GEN AND TQ-GEN
Both PQ-Gen, our projection-based query generation approach, and TQ-Gen <cit.> (discussed in Section <ref>) involve hyperparameters that affect their performance. As mentioned in <Ref>, we performed a sensitivity analysis of the performance with respect to the hyperparameter configuration used of PQ-Gen and TQ-Gen <cit.>. In this comparison, both query generation methods were used within the state-of-the-art active CA method MQuAcq-2. We used the JSudoku benchmark for this comparison, as from the benchmarks considered in this paper, this is shown to be the hardest one to reach convergence on (see Table <ref>).
In more detail, we varied the hyperparameters of both PQ-Gen and TQ-Gen to assess their performance under different configurations.
While we fixed the hyperparameter α of <cit.> to 0.8 as recommended in <cit.>, we had to use different values for the time-related hyperparameters, τ and t, as the previous study used a different CP solver. To be more specific, while we used the CPMpy modeling language, compiling to the OR-Tools CP-SAT solver, the authors of <cit.> use Choco Solver. Although the Choco solver is generally inferior to OR-Tools on harder-to-solve problems, OR-Tools also runs a presolve phase at the beginning, which consumes most of the (limited) time TQ-Gen allots to generating a query, and thus requires larger time limits than when Choco is used as the solver.
In our evaluation we used τ = [ 0.05s, 0.1s, 0.2s, 0.3s ] and t = [0.5s, 1s, 1.5s, 2s] for TQ-Gen. We also used the adjust function described in <cit.>, as it has been shown to improve its performance.
For PQ-Gen hyperparameters, we used l = {3000, 5000, 7500, 10000} and t = {0.5, 1, 1.5, 2}. Thus, we examined 16 different configurations for each. The results of our experiments are presented in Figures <ref> and <ref>, respectively, for PQ-Gen and TQ-Gen.
Focusing on <Ref>,
we can see that the performance of PQ-Gen is stable across all configurations, both in terms of the number of queries and time performance, having also converged in all cases.
Let us now shift our focus to <Ref> and the performance of TQ-Gen.
The first observation is that in the majority of the cases, MQuAcq-2 failed to converge when using TQ-Gen as the query generator.
Only when the time limit was set to 2s did we see at least one run achieve convergence for all values of τ.
In addition, the performance of MQuAcq-2 using TQ-Gen is highly sensitive to changes in hyperparameter values, particularly with respect to time.
Overall, comparing the results of PQ-Gen and TQ-Gen, we observe that PQ-Gen exhibits superior performance in terms of convergence rate, fully overcoming the issue of premature convergence. PQ-Gen also requires a lower number of queries to reach convergence and offers improved time performance, resulting in reduced waiting times for the user.
|
http://arxiv.org/abs/2307.05026v1 | 20230711060022 | Optimization of Adams-type difference formulas in Hilbert space $W_2^{(2,1)}(0,1)$ | [
"Kh. M. Shadimetov",
"R. S. Karimov"
] | math.NA | [
"math.NA",
"cs.NA"
] |
Optimization of Adams-type difference formulas in Hilbert space W_2^(2,1)(0,1)
Kh.M. Shadimetov^1,2, R.S. Karimov^1,3,*
August 12, 2023
===================
^1V.I. Romanovskiy Institute of Mathematics, Uzbekistan Academy of Sciences, Tashkent, Uzbekistan
^2Department of Informatics and computer graphics, Tashkent State Transport University, Tashkent, Uzbekistan
^3Department of Mathematics and natural sciences, Bukhara Institute of Natural Resources Management, Bukhara, Uzbekistan
Abstract.
In this paper, we consider the problem of constructing new optimal explicit and implicit Adams-type difference formulas for finding an approximate solution to the Cauchy problem for an ordinary differential equation in a Hilbert space. By minimizing the norm of the error functional of the difference formula with respect to the coefficients, we obtain a system of linear algebraic equations for the coefficients of the difference formulas. This system of equations is reduced to a system of equations in convolution form and is solved completely using a discrete analog of the differential operator d^2/dx^2-1. Here we present an algorithm for constructing optimal explicit and implicit difference formulas in a specific Hilbert space. In addition, numerical experiments comparing the Euler method with the optimal explicit and implicit difference formulas are given. The experiments show that the optimal formulas give a good approximation compared to the Euler method.
Keywords.
Hilbert space; initial-value problem; multistep method; the error functional; optimal explicit difference formula; optimal implicit difference formula.
^*Corresponding author.
E-mail addresses: [email protected] (Kh.M. Shadimetov), [email protected] (R.S. Karimov).
Received July 11, 2023; Accepted July 11, 2023.
§ INTRODUCTION
It is known that the solutions of many practical problems lead to differential equations or their systems. Although differential equations have many applications, only a small number of them can be solved exactly using elementary functions and their combinations. Even when an analytical solution can be obtained, its use may be inconvenient due to the complexity of the resulting expression. If it is very difficult or impossible to find an analytic solution to a differential equation, one can find an approximate solution.
In the present paper we consider the problem of approximate solution to the first order linear ordinary differential equation
y'=f(x,y), x∈[0,1]
with the initial condition
y(0)=y_0.
We assume that f(x,y) is a suitable function and the differential equation (<ref>) with the initial condition (<ref>) has a unique solution on the interval [0,1].
For approximate solution of problem (<ref>)-(<ref>) we divide the interval [0,1] into N pieces of the length h=1/N and find approximate values y_n of the function y(x) for n=0,1,...,N at nodes x_n=nh.
A classic method of approximate solution of the initial-value problem (<ref>)-(<ref>) is the Euler method. Using this method, the approximate solution of the differential equation is calculated as follows: to find an approximate value y_n+1 of the function at the node x_n+1, it is used the approximate value y_n at the node x_n:
y_n+1=y_n+hy_n',
where y_n'=f(x_n,y_n), so that y_n+1 is a linear combination of the values of the unknown function y(x) and its first-order derivative at the node x_n.
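For concreteness, a minimal Python sketch of this scheme (illustrative only, not taken from the cited literature) is:

def euler(f, y0, N):
    # Euler's method for y' = f(x, y), y(0) = y0 on [0, 1] with step h = 1/N.
    h = 1.0 / N
    ys = [y0]
    for n in range(N):
        ys.append(ys[-1] + h * f(n * h, ys[-1]))
    return ys

# Example: y' = -y, y(0) = 1, whose exact solution is e^{-x}.
approx = euler(lambda x, y: -y, 1.0, 10)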
It is well known that there are many methods for solving the initial-value problem for the ordinary differential equation (<ref>). For example, the initial-value problem can be solved using the Euler, Runge-Kutta, Adams-Bashforth, and Adams-Moulton formulas of varying degrees <cit.>. In <cit.>, Ahmad Fadly Nurullah Rasedee et al. discussed the order and stepsize strategies of the variable order stepsize algorithm; the stability and convergence estimates of the method are also established. In the work <cit.> by Adekoya Odunayo M. and Z.O. Ogunwobi, it was shown that the Adams-Bashforth-Moulton method is better than the Milne-Simpson method for solving a second-order differential equation. Some studies have raised the question of whether Nordsieck's technique for changing the step size in the Adams-Bashforth method is equivalent to the explicit continuous Adams-Bashforth method; N.S. Hoang and R.B. Sidje <cit.> provided a complete proof that the two approaches are indeed equivalent. The works <cit.> and <cit.> showed the potential superiority of semi-explicit and semi-implicit methods over conventional linear multi-step algorithms.
However, it is very important to choose the right formula for a given initial-value problem, and this is not always possible. Moreover, in contrast to the above-mentioned works, in this work exact estimates of the error of the formula are obtained.
Our aim, in this paper, is to construct new difference formulas that are exact for e^-x and optimal in the Hilbert space W_2^(2,1)(0,1). Also these formulas can be used to solve certain classes of problems with great accuracy.
The rest of the work is organized as follows. In the first section, an algorithm for constructing an explicit difference formula in the space W_2^(2,1)(0,1) is given, and this algorithm is used to obtain an analytical formula for the optimal coefficients of the explicit difference formula. In the second section, the same algorithm is used to obtain an analytical formula for the optimal coefficients of the implicit difference formula. In the third and fourth sections, respectively, exact formulas are given for the square of the norm of the error functionals of the explicit and implicit difference formulas. Numerical experiments are presented at the end of the work.
§ OPTIMAL EXPLICIT DIFFERENCE FORMULAS OF ADAMS-BASHFORTH TYPE IN THE HILBERT SPACE W_2^(2,1)(0,1)
We consider a difference formula of the following form for the approximate solution of the problem (<ref>)-(<ref>) <cit.>
∑ _β =0^kC[β]φ[β] -h∑ _β =0^k-1C_1[β]φ '[β] ≅ 0,
where h=1/N, N is a natural number, C[β] and C_1[β] are the coefficients,
functions φ belong to the Hilbert space W_2^(2,1)(0,1). The space W_2^(2,1)(0,1) is defined as follows
W_2^(2,1)(0,1)={φ:[0,1]→ℝ | φ', φ”∈ L_2(0,1)}
equipped with the norm <cit.>
φ|W_2^(2,1)={∫_0^1(φ”(x)+φ'(x))^2dx}^1/2.
The following difference between the sums given in the formula (<ref>) is called the error of the formula (<ref>) <cit.>
(ℓ,φ)=∑_β =0^kC[β] φ(hβ)-h∑_β =0^k-1C_1[β]φ'(h β).
To this error corresponds the error functional <cit.>
ℓ(x)=∑_β =0^kC[β] δ(x-hβ)+ h∑_β =0^k-1C_1[β]δ'(x-hβ),
where δ(x) is Dirac's delta-function. We note that (ℓ,φ) is the value of the error functional ℓ at a function φ and it is defined as <cit.>
(ℓ,φ)=∫_-∞^∞ℓ(x)φ(x)dx.
It should be also noted that since the error functional ℓ is defined on the space W_2^(2,1)(0,1) it satisfies the following conditions
(ℓ,1)=0,
(ℓ,e^-x)=0.
These give us the following equations with respect to coefficients C[β] and C_1[β]:
∑_β =0^kC[β]=0,
∑_β =0^kC[β] e^-hβ + h∑_β =0^k-1C_1[β] e^-hβ=0.
Based on the Cauchy-Schwartz inequality for the absolute value of the error of the formula (<ref>) we have the estimation
|(ℓ,φ)|≤φ|W_2^(2,1)·ℓ|W_2^(2,1)*.
Hence, the absolute error of the difference formula (<ref>) in the space W_2^(2,1) is estimated by the norm of the error functional ℓ on the conjugate space W_2^(2,1)*. From this we get the following<cit.>.
Problem 1. Calculate the norm ℓ|W_2^(2,1)* of the error functional ℓ.
From the formula (<ref>) one can see that the norm ℓ|W_2^(2,1)* depends on the coefficients C[β] and C_1[β].
Problem 2. Find such coefficients C_1[β]=C_1[β] that satisfy the equality
ℓ|W_2^(2,1)*=inf_C_1[β]sup_φ|W_2^(2,1)≠ 0|(ℓ,φ)|/φ|W_2^(2,1).
In this case C_1[β] are called the optimal coefficients and the corresponding difference formula (<ref>) is called the optimal difference formula.
A function ψ_ℓ satisfying the following equation is called the extremal function of the difference formula (<ref>) <cit.>
(ℓ,ψ_ℓ)=ℓ|W_2^(2,1)*·ψ_ℓ|W_2^(2,1).
Since the space W_2^(2,1)(0,1) is a Hilbert space, then from the Riesz theorem on the general form of a linear continuous functional on a Hilbert space there is a function ψ_ℓ (which is the extremal function) that satisfies the following equation <cit.>
(ℓ,φ)=⟨φ,ψ_ℓ⟩_W_2^(2,1)
and the equality ℓ|W_2^(2,1)*=ψ_ℓ|W_2^(2,1) holds, where ⟨φ,ψ_ℓ⟩_W_2^(2,1) is the inner product in the space W_2^(2,1)(0,1), defined as ⟨φ,ψ⟩_W_2^(2,1) = ∫_0^1 (φ”(x)+φ'(x))(ψ”(x)+ψ'(x)) dx <cit.>.
The solution of equation (<ref>) has the form
ψ_ℓ(x)=ℓ(x)*G_2(x)+de^-x+p_0
and it is an extremal function for the difference formula (<ref>),
where G_2(x) = (sgn(x)/2)((e^x - e^-x)/2 - x), and d and p_0 are real numbers.
According to the above mentioned Riesz's theorem, the following equalities is fulfilled
ℓ|W_2^(2,1)*^2=(ℓ,ψ_ℓ)=ℓ|W_2^(2,1)*·ψ_ℓ|W_2^(2,1).
By direct calculation from the last equality for the norm of the error functional for the difference formula (<ref>) we have the following result <cit.>.
For the norm of the error functional of the difference formula (<ref>) we have the following expression
ℓ|W_2^(2,1)*^2=∑_γ=0^k∑_β=0^kC[γ] C[β] G_2(hγ-hβ)-2h∑_γ=0^k-1C_1[γ]∑_β=0^kC[β] G_2'(hγ-hβ)-
-h^2∑_γ=0^k-1∑_β=0^k-1C_1[γ] C_1[β] G_2”(hγ-hβ),
where G_2'(x) = (sgn(x)/2)((e^x + e^-x)/2 - 1) and G_2”(x) = (sgn(x)/2)((e^x - e^-x)/2).
It is known that stability in the Dahlquist sense, just like strong stability, is determined only by the coefficients C[β], β =0,k. For this reason, our search for the optimal formula is only related to finding C_1[β]. Therefore, in this subsection we consider difference formulas of the Adams-Bashforth type, i.e. C[k]=-C[k-1]=1 and C[k-i]=0, i=2,k, <cit.>. Then it is easy to check that the coefficients satisfy the condition (<ref>).
In this work, we find the minimum of the norm (<ref>) by the coefficients C_1[β] under the condition (<ref>) in the space W_ 2^(2,1) (0,1) <cit.>. Then using Lagrange method of undetermined multipliers we get the following system of linear equations with respect to the coefficients C_1[β]:
h∑ _γ =0^k-1C_1[γ]G”_2 (hβ -hγ )+de^-hβ =-∑ _γ =0^kC[γ] G'_2 (hβ -hγ ),
β =0,k-1,
h∑ _γ =0^k-1C _1[γ] e^-hγ =-∑ _γ =0^kC[γ] e^-hγ .
It is easy to prove that the solution of this system gives the minimum value to the expression (<ref>) under the condition (<ref>). Here d is an unknown constant and C^∘_1[β] are the optimal coefficients. Given that C[k]=1, C[k-1]=-1, C[k-i]=0, i=2,k, the system (<ref>),(<ref>) is reduced to the form
h∑ _γ =0^k-1C^∘ _1 [γ]G”_2 (hβ -hγ )+de^-hβ =f[β], β =0,k-1
h∑ _γ =0^k-1C^∘ _1 [γ] e^-hγ =g ,
where
f[β] = ((1 - e^h)/4)(e^hβ-hk - e^-hβ+hk-h),
g=e^-hk+h -e^-hk .
Assuming that C_1 [β]=0, for β <0 and β >k-1, we rewrite the system (<ref>), (<ref>) in the convolution form
{[ hC^∘ _1 [β]*G_2”(hβ) +de^-hβ =f[β] for β =0,k-1,; h∑ _γ =0^k-1C^∘ _1 [γ] e^-hγ =g. ].
We denote first equation of the system (<ref>) by U_exp
U_exp[β]=hC^∘ _1 [β]*G”_2 (hβ)+de^-hβ.
(<ref>) implies that
U_exp[β]=f[β] for β =0,k-1.
Now calculating the convolution we have
U_exp[β]=C^∘ _1 [β]*G”_ 2(hβ)+de^-hβ =h∑ _γ =0^k-1C^∘ _1 [γ] G”_2(hβ -hγ)+de^-hβ .
For β <0 we get
U_exp[β]=h∑ _γ =0^k-1C^∘ _1 sgn(hβ -hγ)/2(e^hβ -hγ -e^-hβ +h γ/2)+de^-hβ
=-e^hβ/4 h∑ _γ =0^k-1C^∘ _1 [γ]e^-hγ +e^-hβ/4 h∑ _γ =0^k-1 C^∘ _1 [γ]e^hγ +de^-hβ =-e^hβ/4 g+e^-hβ(d+b).
For β >k-1
U_exp[β]=e^hβ/4 g+e^-hβ(d-b).
Then d^+ =d+b and d^- =d-b the function U_exp[β] becomes
U_exp[β]={[ -e^hβ/4 g+e^-hβ d^+ for β >k-1,; f[β] for β =0,k-1,; e^hβ/4 g+e^-hβ d^- for β <0. ].
To find the unknowns d^+ and d^-, we use the discrete analogue of the differential operator d^2/dx^2 - d/dx, which is given below <cit.>
D_1[β] = 1/(1 - e^2h) · { -2e^h for |β| = 1;  2(1 + e^2h) for β = 0;  0 for |β| ≥ 2 }.
The unknowns d^+ and d^- are determined from the conditions
C^∘ _1 [β]=h^-1 D_1[β]*U_exp[β]=0 for β <0 and β >k-1.
Calculate the convolution
h^-1 D_1[β]*U_exp[β]
=h^-1∑ _γ =1^∞D_1[β +γ]U_exp[-γ] +h^-1∑ _γ =0^k-1D_1[β -γ]U_exp[γ]
+h^-1∑ _γ =1^∞D_1[β -k-γ +1]U_exp[k+γ -1] .
From (<ref>) with β =k and β =-1, we have
{[ h^-1 D_1[0]U_exp[-1]+h^-1 D_1[1]U_exp[-2]+h^-1 D_1[-1]U_exp[0]=0,; h^-1 D_1[0]U_exp[k]+h^-1 D_1[1]U_exp[k-1]+h^-1 D_1[-1]U_exp[k+1]=0. ].
Hence, due to (<ref>), we get
{[ 2(1+e^2h)[-1/4 e^-h g+e^h d^+]-2e^h[-1/4 e^-2h g+e^2h d^+]-2e^h f[0]=0; 2(1+e^2h)[1/4 e^hk g+e^-hk d^-]-2e^h[1/4 e^hk+h g+e^-hk-h d^-]-2e^h f[hk-h]=0. ].
From the first equation d^+ is equal to the following
d^+ =e^hk -e^hk-h/4 .
From the second equation d^- is equal to the following
d^- =e^hk -3e^hk-h +2e^hk-2h/4 .
so
d = (d^+ + d^-)/2 = (e^hk - 2e^hk-h + e^hk-2h)/4 and b = (d^+ - d^-)/2 = (e^hk-h - e^hk-2h)/4.
Now we calculate the optimal coefficients C^∘ _1 [β]
C^∘ _1 [β]=h^-1 D_1[β]*U_exp[β]=h^-1∑ _γ =-∞^∞D_1[β -γ]U_exp[γ], β =0,k-1.
Let β =k-1, then
C^∘ _1 [k-1]=h^-1∑ _γ =-∞^∞D_1[k-1-γ]U_exp[γ]
=h^-1{D_1[0]U_exp[k-1]+D_1[1]U_exp[k-2]+D_1[-1]U_exp[k]}
=h^-1/(1-e^2h )·{1-e^-h +e^h -e^2h}=e^h -1/he^h,
thus, C^∘ _1 [k-1]=e^h -1/he^h for β =k-1.
Compute C^∘ _1 [0]
C^∘ _1 [0]=h^-1∑ _γ =-∞^∞D_1[-γ]U_exp[γ]
=h^-1{D_1[0]U_exp[0]+D_1[1]U_exp[-1]+D_1[-1]U_exp[1]}
=h^-1/2(1-e^2h )·{(1+e^2h )(e^-hk -e^hk-h -e^-hk+h +e^hk )
-e^-h (-e^-hk +e^-hk-h -e^hk+h -e^hk ) }
-h^-1/2(1-e^2h )·{e^-h (e^-hk+h -e^hk-2h -e^-hk+2h +e^hk-h )}=h^-1/2(1-e^2h )· 0=0,
hence, C^∘ _1 [0]=0 for β =0.
Now calculate C^∘ _1 [β] for β =1,k-2
C^∘ _1 [β]=h^-1∑ _γ =-∞^∞D_1[-γ]U_exp[γ]
=h^-1{D_1[0]U_exp[β]+D_1[1]U_exp[β -1]+D_1[-1]U_exp[β +1]}
=h^-1/2(1-e^2h )·{(1+e^2h )(1-e^h )(e^-hk+hβ -e^hk-hβ -h )}
-h^-1/2(1-e^2h )·{e^-h (1-e^h )(e^-hk+hβ -h -e^hk-hβ )}
-h^-1/2(1-e^2h )·{e^-h (1-e^h )(e^-hk+hβ +h -e^hk-hβ -2h )}=h^-1/2(1-e^2h )· 0=0,
thereby, C^∘ _1 [β]=0 for β =1,k-2.
Finally, we have proved the following theorem.
In the Hilbert space W_2^(2,1)(0,1) there is a unique optimal explicit difference formula of the Adams-Bashforth type whose coefficients are determined by the following expressions:
C[β] = { 1 for β = k;  -1 for β = k-1;  0 for β = 0,k-2 },
C^∘_1[β] = { (e^h - 1)/(he^h) for β = k-1;  0 for β = 0,k-2 }.
Thus, the optimal explicit difference formula in W_2^(2,1)(0,1) has the form
φ_n+k = φ_n+k-1 + ((e^h - 1)/e^h) φ'_n+k-1,
where n=0,1,...,N-k, k≥ 1.
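A minimal Python sketch of this scheme is given below (written for the one-step case k = 1; the weight (e^h - 1)/e^h replaces Euler's step h, and one can check that the recursion reproduces e^{-x} exactly when f(x, y) = -y, in line with the exactness of the formula for e^{-x}).

import math

def optimal_explicit(f, y0, N):
    # Optimal explicit scheme: phi_{n+1} = phi_n + ((e^h - 1)/e^h) * f(x_n, phi_n).
    h = 1.0 / N
    w = (math.exp(h) - 1.0) / math.exp(h)
    ys = [y0]
    for n in range(N):
        ys.append(ys[-1] + w * f(n * h, ys[-1]))
    return ys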
§ OPTIMAL IMPLICIT DIFFERENCE FORMULAS OF ADAMS-MOULTON TYPE IN THE HILBERT SPACE W_2^(2,1) (0,1)
Consider an implicit difference formula of the form
∑ _β =0^kC[β ]φ [β ] -h∑ _β =0^kC_1 [β ]φ '[β ] ≅ 0
with the error function
ℓ (x)=∑ _β =0^kC[β ]δ (x-hβ ) +h∑ _β =0^kC_1 [β ]δ '(x-hβ )
in the space W_2^(2,1) (0,1).
In this section, we also consider the case C[k]=-C[k-1]=1, and C[k-i]=0,
i=2,k , i.e. Adams-Moulton type formula.
Minimizing the norm of the error functional (<ref>) of an implicit difference formula of the form (<ref>) with respect to the coefficients C_1 [β ] , β =0,k in the space W_2^(2,1) (0,1)
we obtain a system of linear algebraic equations
{[ h∑ _γ =0^kC^∘ _1 [γ]G”_2 (hβ -hγ )+de^-hβ =f[β], β =0,k; h∑ _γ =0^kC^∘ _1 [γ] e^-hγ =g. ].
Here C^∘ _1 [β ] are unknowns coefficients of the implicit difference formulas (<ref>), β =0,k and d is an unknown constant,
f[β]=G'_2 (hβ -hk+h)-G'_2 (hβ -hk)
={[ 1/4(1-e^h)(e^-hk+hβ -e^hk-hβ -h), β =0,k-1,; 1/4(e^h +e^-h -2), β =k, ].
g=e^-hk+h -e^-hk.
Assuming, in general, that
C^∘ _1 [β ]=0 , for β <0 and β >k,
rewrite the system in the convolution form
{[ hC^∘ _1 [β]*G”_2(hβ) +de^-hβ =f[β], β =0,k,; h∑ _γ =0^kC^∘ _1 [γ] e^-hγ =g. ].
Denote by
U_imp[β ]=hC^∘ _1 [β]*G”_2(h β) +de^-hβ .
This shows that
U_imp[β ]=f[β ] for β =0,k
Now we find U_imp[β ] for β <0 and β >k.
Let β <0, then
U_imp[β]=h∑ _γ =0^kC^∘ _1 [γ]sgn(hβ -hγ)/2(e^hβ -hγ -e^-hβ +hγ/2)+de^-hβ
=-e^hβ/4 h∑ _γ =0^kC^∘ _1[γ]e^-hγ +e^-hβ/4 h∑ _γ =0^kC^∘ _1 [γ]e^hγ +de^-hβ .
Here d^+ is defined by the equality
d^+ =e^-hβ/4 h∑ _γ =0^kC^∘ _1 [γ]e^hγ +de^-hβ .
Similarly, for β >k we have
U_imp[β]=h∑ _γ =0^kC^∘ _1 [γ] sgn(hβ -hγ)/2(e^hβ -hγ -e^-hβ +hγ/2)+de^-hβ
=e^hβ/4 h∑ _γ =0^kC^∘ _1 [γ]e^-hγ -e^-hβ/4 h∑ _γ =0^kC^∘ _1 [γ]e^hγ +de^-hβ .
Here d^- is defined by the equality
d^- =-e^-hβ/4 h∑ _γ =0^kC^∘ _1 [γ]e^hγ +de^-hβ .
(<ref>) and (<ref>) immediately imply that
d=d^+ +d^-/2 .
So U_imp[β ] for any β∈ Z is defined by the formula
U_imp[β]={[ -e^hβ/4 g+e^-hβ d^+ for β >k,; f[β] for β =0,k,; e^hβ/4 g+e^-hβ d^- for β <0. ].
If we operate operator (<ref>) on expression U_imp[β ], we get
C^∘ _1 [β ] =h^-1 D_1 [β ]*U_imp[β ] , β∈ Z.
Assuming that C^∘ _1 [β ] =0 for β <0 and β >k, we get a system of linear equations for finding the unknowns d^+ and d^- in the formula (<ref>).
Indeed, calculating the convolution, we have
h^-1 D_1[β]*U_imp[β]=h^-1∑ _γ =-∞^∞D_1[β -γ]U_imp[γ]
=h^-1∑ _γ =-∞^-1D_1[β -γ]U_imp[γ] +h^-1∑ _γ =0^kD_1[β -γ]U_imp[γ] +h^-1∑ _γ =k+1^∞D_1[β -γ]U_imp[γ]
=h^-1∑ _γ =1^∞D_1[β +γ]U_imp[-γ] +h^-1∑ _γ =0^kD_1[β -γ]U_imp[γ]
+h^-1∑ _γ =1^∞D_1[β -k-γ]U_imp[k+γ] .
Equating the expression (<ref>) to zero with β =-1,β =k+1and using the formulas (<ref>), (<ref>) we get
{[ h^-1 D_1[0]U_imp[-1]+h^-1 D_1[1]U_imp[-2]+h^-1 D_1[-1]U_imp[0]=0,; h^-1 D_1[0]U_imp[k+1]+h^-1 D_1[1]U_imp[k]+h^-1 D_1[-1]U_imp[k+2]=0 ].
or
{[ 2(1+e^2h)[-1/4 e^-h g+e^h d^+]-2e^h[-1/4 e^-2h g+e^2h d^+]-2e^h f[0]=0,; 2(1+e^2h)[1/4 e^hk+h g+e^-hk-h d^-]-2e^h[1/4 e^hk+2h g+e^-hk-2h d^-]-2e^h f[hk]=0. ].
By virtue of the formulas (<ref>) and (<ref>), finally, we find
d^+ =1/4 (e^hk -e^hk-h),
d^- =1/4 (e^hk-h -e^hk).
Then from (<ref>) we find that d=0.
As a result, we rewrite U_imp[β ] through the (<ref>) and (<ref>) as follows
U_imp[β]={[ -e^hβ/4 g+e^-hβ/4(e^hk -e^hk-h) for β >k,; f[β] for β =0,k,; e^hβ/4 g+e^-hβ/4(e^hk-h -e^hk) for β <0. ].
Now we turn to calculating the optimal coefficients of implicit difference formulas C^∘ _1 [β ] , β =0,k according to the formula (<ref>)
C^∘ _1 [k]=h^-1∑ _γ =-∞^∞D_ 1 [k-γ ]U_imp[γ ] =
=h^-1{D_1 [0]U_imp[k]+D_1 [1]U_imp[k-1]+ D_1 [-1]U_imp[k+1]}=
=h^-1/2(1-e^2h)(-2e^2h +4e^h -2)= e^h -1/h(e^h +1) .
So C^∘ _1 [k]=e^h -1/h(e^h +1).
Calculate the next optimal coefficient
C^∘ _1 [k-1]=h^-1∑ _γ =-∞^∞ D_1 [k-γ -1]U_imp[γ ] =
=h^-1{D_1 [0]U_imp[k-1]+D_1 [1]U_imp [k-2]+D_1 [-1]U_imp[k]}=
=h^-1/2(1-e^2h)(-2e^2h +4e^h -2)= e^h -1/h(e^h +1) .
Thus C^∘ _1 [k-1]=e^h -1/h(e^h +1).
Now we compute C^∘ _1 [β ] for β =1,k-2
C^∘ _1 [β]=h^-1∑ _γ =-∞^∞ D_1 [β -γ ]U_imp[γ ]
=h^-1{D_1 [0]U_imp[β ]+D_1 [1]U_imp[β -1]+D_1 [-1]U_imp[β +1]}
=h^-1/2(1-e^2h){(1+e^2h)(e^-hk+hβ -e^hk-hβ -h -e^-hk+hβ +h +e^hk-hβ)}
-h^-1/2(1-e^2h){e^h(e^-hk+hβ -h -e^hk-hβ -e^-hk+hβ +e^hk-hβ +h)}
-h^-1/2(1-e^2h){e^h(e^-hk+hβ +h -e^hk-hβ -2h -e^-hk+hβ +2h +e^hk-hβ -h)}
=h^-1/2(1-e^2h)· 0=0 .
Thereby, C^∘ _1 [β ]=0 , for β =1,k-2.
Then calculate C^∘ _1 [0]
C^∘ _1 [0]=h^-1∑ _γ =-∞^∞D_1 [-γ ]U_imp[γ ]
=h^-1{D_1 [0]U_imp[0]+D_1 [1]U_imp[-1]+D_1 [-1]U_imp[1]}
=h^-1/2(1-e^2h){(1+e^2h)(e^-hk -e^hk-h -e^-hk+h +e^hk)}
-h^-1/2(1-e^2h){e^h(-e^-hk +e^-hk-h +e^hk+h -e^hk)}
-h^-1/2(1-e^2h){e^h(e^-hk+h -e^hk-2h -e^-hk+2h +e^hk-h)}=h^-1/2(1-e^2h)· 0=0.
hence C^∘ _1 [0]=0.
Finally, we have proved the following.
In the Hilbert space W_2^(2,1) (0,1), there exists a unique optimal implicit difference formula, of Adams-Moulton type, whose coefficients are determined by formulas
C[β] = { 1 for β = k;  -1 for β = k-1;  0 for β = 0,k-2 },
C^∘_1[β] = { (e^h - 1)/(h(e^h + 1)) for β = k;  (e^h - 1)/(h(e^h + 1)) for β = k-1;  0 for β = 0,k-2 }.
Consequently, the optimal implicit difference formula in W_2^(2,1)(0,1) has the form
φ_n+k = φ_n+k-1 + ((e^h - 1)/(e^h + 1))(φ'_n+k + φ'_n+k-1),
where n=0,1,...,N-k, k≥ 1.
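Since the formula is implicit in φ_n+k, each step requires solving an equation in the unknown value; one possible way to use it for a general f (a predictor-corrector iteration, not prescribed in this paper) is sketched below for k = 1.

import math

def optimal_implicit(f, y0, N, corrector_iters=3):
    # Optimal implicit scheme, solved at each step by fixed-point (corrector) iterations.
    h = 1.0 / N
    w = (math.exp(h) - 1.0) / (math.exp(h) + 1.0)
    ys = [y0]
    for n in range(N):
        x_n, x_next = n * h, (n + 1) * h
        y_n = ys[-1]
        y_next = y_n + (math.exp(h) - 1.0) / math.exp(h) * f(x_n, y_n)   # predictor: optimal explicit step
        for _ in range(corrector_iters):
            y_next = y_n + w * (f(x_next, y_next) + f(x_n, y_n))         # corrector: the implicit formula
        ys.append(y_next)
    return ys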
§ NORM OF THE ERROR FUNCTIONAL OF THE OPTIMAL EXPLICIT DIFFERENCE FORMULA
The square of the norm of an explicit Adams-Bashforth type difference formula is expressed by the equality
ℓ|W_2^(2,1)* (0,1). ^2 =∑ _γ =0^k∑ _β =0^kC[γ ]C[β ]G_2[γ -β]-
-2h∑ _γ =0^k-1C_1 [γ ]∑ _β =0^kC[β ]G'_2[γ -β] -h^2∑ _γ =0^k-1∑ _β =0^k-1C_1 [γ ]C_1 [β ]G”_2[γ -β].
In this section, we deal with the calculation of the squared norm (<ref>) in the space W_2^(2,1)(0,1). For this we use the coefficients C[β] and
C^∘_1[β], which were determined in the formulas (<ref>) and (<ref>).
Then we calculate (<ref>) in sequence as follows.
ℓ^∘|W_2^(2,1)* (0,1). ^2 =∑ _γ =0^kC[γ ]{G_2[γ -k]-G_2[γ -k+1]}
-2h∑ _γ =0^k-1C^∘ _1 [γ ] {G'_2[γ -k]-G'_2[γ -k+1]}
-h^2e^h -1/he^h∑ _γ =0^k-1C^∘ _1 [γ ]{G”_2[γ -k+1]}
=G_2[0]-G_2[1]-G_2[-1]+G_2[0]-2(e^h -1)/e^h{G'_2[-1]-G'_2[0]}-(e^h -1)^2/e^2h G”_2[0]
=-2G_2[1]+2(e^h -1)/e^h G'_2[1]=2(e^h -1)/e^h·sgn(h)/2(e^h +e^h/2 -1)
-2·sgn(h)/2(e^h -e^h/2 -h)=e^h -1/e^h·e^2h -2e^h +1/2e^h -e^2h -1/2e^h +h
=h-(e^h -1)(3e^h -1)/2e^2h.
As a result, we get the following outcome.
The square of the norm of the error functional of the optimal explicit difference formula of the form (<ref>) in the quotient space W_2^(2,1)(0,1) is expressed by the formula
ℓ^∘|W_2^(2,1)*(0,1)^2 = h - (e^h - 1)(3e^h - 1)/(2e^2h).
§ NORM OF THE ERROR FUNCTIONAL OF THE IMPLICIT OPTIMAL DIFFERENCE FORMULA
In this case, the square of the norm of the error functional of an implicit Adams-Moulton type difference formula of the form (<ref>) is expressed by the equality
ℓ|W_2^(2,1)* (0,1). ^2 =∑ _γ =0^k∑ _β =0^kC[γ ]C[β ]G_2[γ -β]-
-2h∑ _γ =0^kC_1 [γ ]∑ _β =0^kC[β ]G'_2[γ -β] -h^2∑ _γ =0^k∑ _β =0^kC_1 [γ ]C_1 [β ]G”_2[γ -β].
Here we use the optimal coefficients of the implicit difference formula of the form (<ref>), which were determined in the formulas (<ref>) and (<ref>).
Then, we calculate (<ref>) as follows
ℓ^∘|W_2^(2,1)* (0,1). ^2 =∑ _γ =0^kC[γ ]{G_2[γ -k]-G_2[γ -k+1]}
-2h∑ _γ =0^kC^∘ _1 [γ ] {G'_2[γ -k]-G'_2[γ -k+1]}
-h^2e^h -1/h(e^h +1)∑ _γ =0^kC^∘ _1 [γ ]{G”_2[γ -k]+G”_2[γ -k+1]}
=G_2[0]-G_2[1]-G_2[-1]+G_2[0]-2h(e^h -1)/h(e^h +1){G'_2[0]-G'_2[1]+G'_2[-1]-G'_2[0]}
-h^2(e^h -1)^2/h^2(e^h +1){G”_2[0]+G”_2[1]+G”_2[-1]+G”_2[0]}
=-2G_2[1]+4(e^h -1)/e^h +1 G'_2[1]-2(e^h -1)^2/(e^h +1) G”_2[1]
=4(e^h -1)/e^h +1·sgn(h)/2(e^h +e^h/2 -1)-2·sgn(h)/2(e^h -e^h/2 -h)
-2(e^h -1)^2/(e^h +1)·sgn(h)/2(e^h -e^h/2)
=h-e^2h -1/2e^h +2(e^h -1)^2(e^h -1)/2e^h(e^h +1) -(e^h -1)^2(e^h -1)/2e^h(e^h +1)
=h+(e^h -1)/2e^h((e^h -1)^2/e^h +1 -e^h -1)=h-2(e^h -1)/e^h +1 .
Consequently, we get the following result.
Among all implicit difference formulas of the form (<ref>) in the Hilbert space W_2^(2,1)(0,1), there is a unique optimal implicit difference formula, the square of the norm of whose error functional is determined by the equality
ℓ^∘|W_2^(2,1)*(0,1)^2 = h - 2(e^h - 1)/(e^h + 1).
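As a quick numerical illustration (a simple check, not part of the derivation above), both squared norms can be evaluated for a few step sizes; a Taylor expansion suggests they behave roughly like h^3/3 and h^3/12, respectively, so the implicit formula has the smaller error bound.

import math

def norm_sq_explicit(h):
    return h - (math.exp(h) - 1.0) * (3.0 * math.exp(h) - 1.0) / (2.0 * math.exp(2.0 * h))

def norm_sq_implicit(h):
    return h - 2.0 * (math.exp(h) - 1.0) / (math.exp(h) + 1.0)

for N in (10, 100, 1000):
    h = 1.0 / N
    print(f"h = {h:g}: explicit {norm_sq_explicit(h):.3e}, implicit {norm_sq_implicit(h):.3e}")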
§ NUMERICAL RESULTS
In this section, we give some numerical results in order to show tables and graphs of solutions and errors of our optimal explicit difference formulas (<ref>) and optimal implicit difference formulas (<ref>), with coefficients given correspondingly in Theorem <ref> and Theorem <ref>.
We show the results of the constructed formulas on several examples in the form of tables and graphs; the results presented in each table are then shown in the corresponding graph. We have taken the examples from the book by Burden R.L. et al. <cit.> to illustrate the numerical results.
In accordance with the tables above, shown in Figures 1, 3, and 5, the left sides of Figures 2, 4, and 6 show graphs of the approximate and exact solutions, and the right sides show graphs of the difference between the exact and approximate solutions.
As can be seen from the results presented above, in a certain sense, optimal explicit formula give better results than the classical Euler formula.
In accordance with the table above, shown in Figure 7, the left side of Figure 8 shows graphs of the approximate and exact solutions, and the right side shows graphs of the difference between the exact and approximate solutions.
As can be seen from the result presented above, in a certain sense, optimal implicit difference formulas give better results than the classical Euler formula.
§ CONCLUSION
In this paper, new optimal Adams-type difference formulas are constructed and exact expressions for the estimation of their error are obtained. Moreover, we have shown that the results obtained by the optimal explicit difference formula constructed in the Hilbert space W_2^(2,1)(0,1) are better than the results obtained by the Euler formula. In addition, the optimal implicit formula is more accurate than the optimal explicit formula, and the effectiveness of the new optimal difference formulas was demonstrated in the numerical results.
99
Burden16
Burden R.L., Faires D.J., Burden A.M.
Numerical Analysis.
- Boston, MA : Cengage Learning, 2016, 896 p.
Fadly2021
Ahmad Fadly Nurullah Rasedee, Mohammad Hasan Abdul Sathar, Siti Raihana Hamzah, Norizarina Ishak, Tze Jin Wong, Lee Feng Koo and Siti Nur Iqmal Ibrahim.
Two-point block variable order step size multistep method for solving higher order ordinary differential equations directly.
Journal of King Saud University - Science, vol.33, 2021, 101376, https://doi.org/10.1016/j.jksus.2021.101376
Adekoya2021
Adekoya Odunayo M. and Z.O. Ogunwobi.
Comparison of Adams-Bashforth-Moulton Method and Milne-Simpson Method on Second Order Ordinary Differential Equation.
Turkish Journal of Analysis and Number Theory, vol.9, no.1, 2021: 1-8., https://doi:10.12691/tjant-9-1-1.
Hoang2013
N.S. Hoang, R.B. Sidje.
On the equivalence of the continuous Adams-Bashforth method and Nordsiecks technique for changing the step size.
Applied Mathematics Letters, 2013, 26, pp. 725-728.
Beuken2022
Loïc Beuken, Olivier Cheffert, Aleksandra Tutueva, Denis Butusov and Vincent Legat.
Numerical Stability and Performance of Semi-Explicit and Semi-Implicit Predictor-Corrector Methods.
Mathematics, 2022, 10(12), https://doi.org/10.3390/math10122015
Tutueva2021
Aleksandra Tutueva and Denis Butusov.
Stability Analysis and Optimization of Semi-Explicit Predictor-Corrector Methods.
Mathematics, 2021, 9, 2463. https://doi.org/10.3390/math9192463
BaSob65
Babuška I., Sobolev S.L.
Optimization of numerical methods.
- Apl. Mat., 1965, 10, 9-170.
BaViPr65
Babuška I., Vitasek E., Prager M.
Numerical processes for solution of differential equations.
- Mir, Moscow, 1969, 369 p.
SHadHay14
Shadimetov Kh.M., Hayotov A.R.
Optimal quadrature formulas in the sense of Sard in W_2^( m,m-1 ) space.
Calcolo, Springer, 2014, V.51, pp. 211-243.
SHadHay15
Shadimetov Kh.M., Hayotov A.R.
Construction of interpolation splines minimizing semi-norm in W_2^( m,m-1 ) space.
BIT Numer Math, Springer, 2013, V.53, pp. 545-563.
Shad15
Shadimetov Kh.M.
Functional statement of the problem of optimal difference formulas.
Uzbek mathematical Journal,
Tashkent, 2015, no.4, pp.179-183.
ShadMir18
Shadimetov Kh.M., Mirzakabilov R.N.
The problem on construction of difference formulas.
Problems of Computational and Applied Mathematics,
- Tashkent, 2018, no.5(17). pp. 95-101.
Sob74
Sobolev S.L.
Introduction to the theory of cubature formulas.
- Nauka, Moscow, 1974, 808 p.
SobVas96
Sobolev S.L., Vaskevich V.L.
Cubature fromulas.
- Novosibirsk, 1996, 484 p.
ShadMir19
Shadimetov Kh.M., Mirzakabilov R.N.
On a construction method of optimal difference formulas.
AIP Conference Proceedings, 2365, 020032, 2021.
AkhHayShad18
Akhmedov D.M., Hayotov A.R., Shadimetov Kh.M.
Optimal quadrature formulas with derivatives for Cauchy type singular integrals.
Applied Mathematics and Computation,
Elsevier, 2018, V.317, pp. 150-159.
BolHayShad16
Boltaev N.D., Hayotov A.R., Shadimetov Kh.M.
Construction of Optimal Quadrature Formula for Numerical Calculation of Fourier Coefficients in Sobolev space L_2^(1 ).
American Journal of Numerical Analysis, 2016, v.4, no.1, pp. 1-7.
HayKar21
Hayotov A.R., Karimov R.S.
Optimal difference formula in the Hilbert space W_2^(2,1)(0,1).
Problems of Computational and Applied Mathematics,
- Tashkent, 5(35), 129-136, (2021).
Dahlq56
Dahlquist G.
Convergence and stability in the numerical integration of ordinary differential
equations.
-Math. Scand., 1956, v.4, pp. 33-52.
Dahlq59
Dahlquist G.
Stability and error bounds in the numerical integration of ordinary differential
equations.
- Trans. Roy. Inst. Technol. Stockholm, 1959.
ShadMir22
Shadimetov Kh. M., Mirzakabilov R. N.
Optimal Difference Formulas in the Sobolev Space.
- Contemporary Mathematics. Fundamental Directions, 2022, Vol.68, No.1, 167-177.
ShadHay04
Shadimetov Kh.M., Hayotov A.R.
Construction of a discrete analogue of a differential operator
Uzbek Matematical Journal,
- Tashkent, 2004, no. 2, pp. 85-95.
|
http://arxiv.org/abs/2307.03891v3 | 20230708035823 | MARBLER: An Open Platform for Standarized Evaluation of Multi-Robot Reinforcement Learning Algorithms | [
"Reza Torbati",
"Shubham Lohiya",
"Shivika Singh",
"Meher Shashwat Nigam",
"Harish Ravichandar"
] | cs.RO | [
"cs.RO",
"cs.MA"
] |
MARBLER: An Open Platform for Standarized Evaluation of Multi-Robot Reinforcement Learning Algorithms
Reza Torbati, Shubham Lohiya, Shivika Singh, Meher Shashwat Nigam, Harish Ravichandar
August 12, 2023
=============================================================================
Multi-agent reinforcement learning (MARL) has enjoyed significant recent progress, thanks to deep learning. This is naturally starting to benefit multi-robot systems (MRS) in the form of multi-robot RL (MRRL).
However, existing infrastructure to train and evaluate policies predominantly focuses on challenges in coordinating virtual agents and ignores characteristics important to robotic systems. Few platforms support realistic robot dynamics, and fewer still can evaluate Sim2Real performance of learned behavior.
To address these issues, we contribute MARBLER: Multi-Agent RL Benchmark and Learning Environment for the Robotarium. MARBLER offers a robust and comprehensive evaluation platform for MRRL by marrying Georgia Tech's Robotarium (which enables rapid prototyping on physical MRS) and OpenAI's Gym framework (which facilitates standardized use of modern learning algorithms).
MARBLER offers a highly controllable environment with realistic dynamics, including barrier certificate-based obstacle avoidance. It allows anyone across the world to train and deploy MRRL algorithms on a physical testbed with reproducibility.
Further, we introduce five novel scenarios inspired by common challenges in MRS and provide support for new custom scenarios.
Finally, we use MARBLER to evaluate popular MARL algorithms and provide insights into their suitability for MRRL.
In summary, MARBLER can be a valuable tool to the MRS research community by facilitating comprehensive and standardized evaluation of learning algorithms on realistic simulations and physical hardware.
Links to our open-source framework and the videos of real-world experiments can be found at <https://shubhlohiya.github.io/MARBLER/>.
§ INTRODUCTION
With increasing demand for robotics to operate in complex real-world environments, coordination of multiple robots is becoming paramount. However, the complexity of exact solutions to important problems (e.g., coverage control <cit.>, path-planning <cit.>, and task allocation <cit.>) grows exponentially as the number of robots increase <cit.>. Consequently, Multi-Robot Reinforcement Learning (MRRL) <cit.> is emerging as a promising alternative paradigm to address this challenge.
MRRL has proven useful for delivery robots <cit.>, coordinated robotic exploration <cit.>, multi-robot communication <cit.>, multi-robot path planning <cit.>, multi-robot target localization <cit.>, and more <cit.>. However, despite being developed for robotics, learning algorithms are rarely evaluated in the real world, with a few notable exceptions <cit.>. Even these exceptions were tested on small teams (2, 2, 3, and 4 robots, respectively) and on ad-hoc platforms, rendering reproducibility time-consuming and difficult.
In contrast, Multi-Agent Reinforcement Learning (MARL) algorithms can be evaluated in a systematic way in many standardized simulated environments, such as the Multi-Agent Particle Environment (MPE) <cit.> and the StarCraft Multi-Agent Challenge (SMAC) <cit.>. While it might be possible to use existing MARL environments to evaluate algorithms developed for MRS, they lack realistic robot dynamics and likely have a large sim2real gap. Further, they do not directly allow for evaluation and benchmarking on physical robots.
In this work, we develop an integrated and holistic platform that can enable seamless training of MRRL policies and their evaluation on physical robots. Specifically, we contribute Multi-Agent RL Benchmark and Learning Environment for the Robotarium (MARBLER). MARBLER is a bridge between the MARL community and the physical robots in the Robotarium <cit.> that makes it easy to evaluate MRRL algorithms and design novel scenarios. The Robotarium is a remotely-accessible, publicly-available, and free-to-use testbed for MRS that allows for up to 20 robots at once in a highly-customizable environment.
As such, MARBLER enables machine learning researchers to develop and test algorithms for physical robots, and control theorists to experiment with state-of-the-art (SOTA) learning algorithms.
Our MARBLER platform has the following key benefits:
* The simulated robots in MARBLER exhibit dynamics similar to that of physical robots as it is built on top of the Robotarium's simulator. Further, MARBLER includes support for barrier certificates to prevent collisions, forcing algorithms to learn in realistic settings.
* MARBLER inherits the open-access benefits of the Robotarium, enabling anyone across the world to train coordination algorithms and systematically deploy on a physical multi-robot testbed with reproducibility.
* MARBLER is compatible with any learning algorithm that can be used with the OpenAI Gym interface (a generic interaction sketch is given after this list).
* MARBLER currently has 5 novel scenarios inspired by common and challenging problems in MRS.
* MARBLER is open-source and allows users to easily add new scenarios or modify existing ones.
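As an illustration of the Gym compatibility mentioned above, a generic Gym-style interaction loop is sketched below; the scenario id, the per-robot action sampling, and the n_agents attribute are placeholders and do not necessarily match MARBLER's actual API.

import gym

env = gym.make("MultiRobotScenario-v0")     # hypothetical scenario id
obs = env.reset()
done = False
while not done:
    # one action per robot; n_agents is an assumed attribute of the environment
    actions = [env.action_space.sample() for _ in range(env.n_agents)]
    obs, rewards, done, info = env.step(actions)
env.close()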
By creating an interface between MARL algorithms and the Robotarium, MARBLER is the first publicly-available environment that can evaluate Sim2Real capability in MRRL. Further, MARBLER can serve as a benchmark to evaluate learning algorithms in simulation with real-world constraints and readily deploy them on physical robots.
In addition, we conducted detailed evaluations of existing MARL algorithms by leveraging Extended PyMARL (EPyMARL) <cit.> within MARBLER. Our experiments reveal insights into how different characteristics of existing algorithms (e.g., policy gradient vs. value-based, parameter sharing, etc.) impact performance in both simulated and physical multi-robot systems.
§ RELATED WORK
§.§ MARL and MRRL Platforms
The Multi-Agent Particle Environment (MPE) <cit.> is a popular framework for evaluating MARL algorithms, consisting of cooperative and adversarial 2D tasks.
In MPE, agents apply forces to particles which can interact with landmarks and other agents. This is a popular setup in MARL environments and has been extended by platforms such as VMAS <cit.>: a vectorized version of MPE that is supported by GPUs to allow for more complex scenarios and faster training. However, particle simulators have very different dynamics than real robots making them poor choices for MRRL benchmarking.
Another popular MARL environment is StarCraft Multi-Agent Challenge (SMAC) <cit.> which is considerably more complex, requiring agents to handle partial observability over long horizons.
However, the agent dynamics in SMAC are still considerably different from those of real-world robots, again making it a poor choice for evaluating MRRL algorithms.
There are few frameworks designed to benchmark MRRL algorithms and fewer that can evaluate the Sim2Real performance of algorithms. SMART <cit.> is one such environment. However, SMART is limited to autonomous-driving scenarios, supports only up to four robots, and neither its evaluation testbed nor its source code is publicly available.
The other MRRL environment that allows for Sim2Real testing is MultiRoboLearn <cit.>: an open-source framework that provides an OpenAI Gym interface for easier integration. However, it also supports a maximum of only 4 robots and, like SMART, does not have a publicly available testbed. Additionally, creating new scenarios in MultiRoboLearn requires building custom environments in Gazebo <cit.>, introducing significant overhead.
In contrast to existing environments, MARBLER's simulator closely mimics the constraints of physical robots and allows researchers to evaluate Sim2Real capabilities in a standardized and reproducible way. Therefore, MARBLER is the first MRRL benchmark that has both a realistic simulator and a physical testbed that anyone can use.
§.§ MARL Algorithms
A variety of MARL algorithms have been proposed that perform very well in simulated environments. PPO <cit.> is an effective actor-critic policy gradient method for single agent RL. MAPPO <cit.> is the multi-agent extension of PPO where a single centralized critic is conditioned on all agent's observations to learn a joint state value function and a separate actor for each agent tries to learn the best action to take conditioned only on the agent's individual observations.
In contrast to MAPPO, QMIX <cit.> and VDN <cit.> are value-based methods that decompose the joint state-action value function into individual state-action value functions. VDN learns to decompose the team value function agent-wise while QMIX learns agent-specific Q networks and combines them monotonically via hypernetworks.
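As a schematic illustration of the two mixing strategies (our sketch, not code from the cited implementations), the joint value in VDN is a plain sum of per-agent values, while a QMIX-style mixer combines them with non-negative weights so that the joint value is monotone in each agent's value; in QMIX itself these weights come from hypernetworks conditioned on the global state:

import numpy as np

def vdn_mix(agent_qs):
    # VDN: Q_tot is the unweighted sum of the per-agent Q values.
    return float(np.sum(agent_qs))

def qmix_style_mix(agent_qs, w, b):
    # Simplified one-layer QMIX-style mixer: taking |w| enforces
    # non-negative mixing weights, so dQ_tot/dQ_i >= 0 for every agent.
    # (The real QMIX mixer has two layers with state-conditioned hypernetworks.)
    return float(np.dot(np.abs(w), agent_qs) + b)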
In SMAC and MPE, MAPPO, QMIX, and VDN have been shown to be three of the best performing MARL algorithms <cit.>.
However, while these algorithms have performed very well in simulation, there is limited testing of their real-world performance. <cit.> evaluated VDN's and QMIX's performance on robots, and <cit.> and <cit.> evaluated different versions of multi-agent PPO-based algorithms on real robots. These are some of the only works with real-world evaluations, and their experiments used at most four robots and are not easily reproducible.
Another important design problem in MRRL is whether robots should share parameters. When robots share parameters, their networks all learn together, which greatly reduces the number of parameters to be trained. However, this leads to all robots learning the same behavior. To combat this, robots have unique IDs appended to their observations, but this approach still only allows robots to learn policies with limited heterogeneity <cit.>. Alternatively, each robot can learn its own set of network parameters, which allows robots to learn truly heterogeneous behavior but greatly increases the number of environment interactions needed for learning, which can be expensive in realistic settings.
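To make the parameter-sharing distinction concrete, the following sketch (ours; the variable names are illustrative, not MARBLER or EPyMARL code) shows how a single shared policy can be conditioned on per-robot identity by appending a one-hot ID to each observation:

import numpy as np

def augment_with_id(obs, robot_idx, n_robots):
    # Append a one-hot robot ID so a shared (parameter-sharing) policy
    # can still behave somewhat differently for each robot.
    one_hot = np.zeros(n_robots)
    one_hot[robot_idx] = 1.0
    return np.concatenate([obs, one_hot])

# Parameter sharing: one network evaluates every robot,
#   action_i = shared_policy(augment_with_id(obs_i, i, n_robots))
# No parameter sharing: a separate network per robot,
#   action_i = policies[i](obs_i), at the cost of many more trainable parameters.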
§.§ The Robotarium
The Robotarium <cit.> is a remotely accessible multi-robot laboratory developed by Georgia Tech. It features a 12 ft x 14 ft testbed with 8 Vicon motion-capture cameras and allows up to 20 GRITSBots <cit.> to operate at once.
The Robotarium has inbuilt control barrier certificates (CBF) <cit.> which provide a provable guarantee of online collision avoidance for the robots, by ensuring a minimum inter-robot distance.
Control commands that do not satisfy these constraints are updated by a quadratic-program-based controller before execution, with the minimum possible deviation.
Hence, the policies learned in environments utilizing CBFs will have to adapt to these actuator constraints which makes the platform more realistic and allows policies to be run on real robots.
The Robotarium also provides a Python simulator that closely resembles how the robots will act in the real Robotarium. Once programs are working in simulation, the Robotarium has a publicly accessible website where anyone in the world can upload their programs for them to then be run in the real Robotarium on real robots.
§ THE MARBLER PLATFORM
Historically, evaluating MRRL algorithms using the Robotarium's simulator has been a challenging task. The lack of a standardized framework for MRRL in the Robotarium means that researchers have to create scenarios from scratch, design the low level control algorithms to control the robots after they select an action, control how the graphics are displayed, and more. As a result, to the best of our knowledge, only <cit.> has evaluated deep reinforcement learning algorithms with the Robotarium, despite its open accessibility to researchers. Addressing this limitation, MARBLER establishes a cohesive and user-friendly API tailored specifically for MRRL experiments. Researchers can design novel environments or employ the pre-existing default environments to execute their algorithms, thereby allowing reproducibility across studies.
Moreover, owing to its integration with the Robotarium's simulator, MARBLER streamlines the process of transitioning trained robots from simulation to real-world deployment. Through the execution of a single script, users can generate the files necessary for submitting their policies to the physical Robotarium. Because the Robotarium is accessible to all users free of charge, MARBLER is the first platform that allows for the deployment of MRRL algorithms on real robots in a highly reproducible manner.
§.§ Core Components
MARBLER is comprised of four core components that form the foundation of the platform:
Core: The Core component serves as the fundamental building block of MARBLER, leveraging the Robotarium's python simulator. It encompasses critical functionalities necessary for the environment, such as environment resetting and discrete time step advancement. By utilizing the capabilities of the Robotarium's simulator and CBFs, MARBLER incorporates realistic dynamics that emulate the constraints encountered by real robots.
Scenarios: The scenarios module defines the environments the robots interact in and the specific tasks they must accomplish.
Gym Interface: Each scenario within MARBLER is registered as a Gym environment, which allows for direct compatibility with the algorithms and tools that support the Gym interface.
Test Pipeline: The Test Pipeline
provides a streamlined process for importing trained robots into the simulation environment, giving researchers a way to visualize robots' performance and collect test data. Subsequently, researchers can execute a script to prepare their files for submission to the Robotarium, which can then be uploaded to the real Robotarium, enabling evaluation in a real-world setting.
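As an illustration of the Gym-style workflow described above, the sketch below creates a registered scenario and steps it with random actions. The environment ID and the exact observation/action structure are hypothetical placeholders used for illustration; the actual names are documented in the MARBLER repository.

import gym

# Hypothetical scenario ID used for illustration only.
env = gym.make("PredatorCapturePrey-v0")

obs = env.reset()          # per-robot observations (structure is scenario-specific)
done = False
while not done:
    actions = env.action_space.sample()   # a trained MARL policy would act here
    obs, reward, done, info = env.step(actions)
env.close()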
§.§ Scenarios
§.§.§ Existing Scenarios
To facilitate immediate testing and evaluation using MARBLER, we introduce five scenarios inspired by diverse MRRL problems. These scenarios are designed to offer researchers a starting point for experimentation and can be easily customized by modifying the scenario's associated configuration file. Parameters such as the number of robots, communication methods, scenario difficulty, and more, can be adjusted as needed.
A complete overview of these scenarios is available in the supplementary material[Supplementary material can be found at https://shubhlohiya.github.io/MARBLER/assets/supplementary.pdf], but we include brief descriptions here:
Simple Navigation (Fig. <ref>):
Robots navigate towards a known destination point. This scenario is an easy starting point for algorithms to learn in.
Predator Capture Prey (PCP) (Fig. <ref>):
Sensing robots and capture robots must work together to capture the prey. Sensing robots know the location of prey within their sensing radius and must communicate this to the blind capture robots. Inspired by the Predator Capture Prey scenario in <cit.>.
Warehouse (Fig. <ref>):
Robots must navigate to their color zone on the right to receive a load and then unload in their color zone on the left while avoiding collisions; a Multi-Robot Path Finding environment <cit.>.
Material Transport (MT) (Fig. <ref>):
Robots with varying speeds and capacities must collaborate to efficiently unload two zones: one nearby with a large amount of material and one further away with a small amount of material. This is a task allocation problem <cit.> where the robots must collaborate to unload the zones within a time limit.
Arctic Transport (AT) (Fig. <ref>):
Drones can move fast over any tile and have a large sensing radius. Ice and water robots have a limited sensing radius and move fast over some tiles but slow over other tiles. Robots are rewarded based on how far the ice/water robots are from the goal zone so the drones must guide the ice/water robots. This is a Multi-Robot Path Planning scenario <cit.> where the drones must find a path to the goal zone and communicate it to the ice/water robots.
§.§.§ Creating New Scenarios
MARBLER provides a user-friendly approach to create new scenarios, similar to MPE and VMAS. Researchers can customize the action space, observation space, visualizations, and other relevant parameters without needing to interact with the underlying Robotarium code, allowing researchers to develop tailored scenarios that align with their specific use cases. Our GitHub includes comprehensive documentation to create new scenarios.
§ EXPERIMENTS
§.§ Experiment Setup
For all our experiments, we used the EPyMARL framework to train our robots. Because the scenarios in MARBLER have been registered as Gym environments, they are directly compatible with EPyMARL. This allowed us to train policies using the various learning algorithms available in EPyMARL with no modifications.
Baselines: We compared MAPPO <cit.>, QMIX <cit.>, and VDN <cit.> with parameter sharing. To investigate the effects of parameter sharing, we also evaluated QMIX without parameter sharing (QMIX_NS).
§.§ Evaluation Protocol
We evaluated all algorithms in the PCP, Warehouse, MT, and AT scenarios with 4, 6, 4, and 4 robots respectively. Before training each algorithm, we ran a hyperparameter search in the Simple Navigation environment in a manner similar to <cit.>. Exact details on the hyperparameter search along with the hyperparameters we used for each algorithm can be found in the supplementary material[Supplementary material can be found at https://shubhlohiya.github.io/MARBLER/assets/supplementary.pdf].
We trained VDN and QMIX for a total of 5 million time steps in each scenario. Given the conflicting evidence about off-policy algorithms being more sample efficient than on-policy algorithms due to their use of a replay buffer <cit.>, we trained MAPPO for a total of 25 million time steps. We trained five seeds for each algorithm.
Because the Robotarium immediately stops a run when robots collide or go outside the acceptable boundaries, we used strict CBFs so that, if the robots attempt to get within 20cm of each other, their movement slows to the point where they almost stop. We also penalize the robots and end the episode if robots collide or drive outside the boundaries of the environment. By doing this, the robots are able to successfully run in the Robotarium after training.
In all scenarios, robots had full communication and in all scenarios except MT, robots had unlimited bandwidth in their communications. Exact details about how the environments were configured for these evaluations are included in the supplementary material.
§.§ Computational Requirements
We trained all models using CPUs; primarily with a Dual Intel(R) Xeon(R) Gold 6226 <cit.> and an Intel(R) Core(TM) i7-12700KF. It took 16084 CPU hours to train all models (excluding hyperparameter searches).
§ RESULTS
To compare baselines, we first look at training evaluation returns to evaluate sample efficiency and how much of an impact different seeds make, which can be seen in Fig. <ref>. Then, we compared the best-performing models for each algorithm in each scenario. To do this, we took the model that achieved the highest reward for each algorithm and evaluated it in simulation and on real robots to compare performance. In simulation, we ran each model for 100 episodes, and on the real robots, we ran each model for 10 episodes. The results can be seen in table <ref>.
§.§ Value Based vs. Policy Gradient
For the first 5 million timesteps, VDN is the best-performing algorithm in every scenario. After 25 million steps, the performance of MAPPO's best seeds approaches that of VDN in MT and AT and surpasses it in Warehouse. However, all MAPPO seeds converge to lower performance in PCP than any of the value-based methods.
Additionally, MAPPO's performance is much more influenced by its seed than that of any value-based method. This contradicts the findings in <cit.>, but VDN generally outperforms MAPPO in MARBLER, suggesting that value-based methods, particularly VDN, may be more applicable to physical robots than policy-gradient methods.
§.§ Effects of Parameter Sharing
The performance of models trained with parameter sharing vs. without parameter sharing depends on the heterogeneity of the environment. In the Warehouse scenario, where robots are homogeneous except for their loading zone locations, QMIX outperformed QMIX_NS significantly. In MT, the robots need to learn slightly different policies to ensure that all zones are unloaded within the time limit, but the optimal policies are similar. In AT, drones and ice/water robots had fundamentally different optimal policies, yet neither QMIX nor QMIX_NS utilized the drones' enhanced sensing radius, resulting in similar policies for all robots. In AT and MT, with limited heterogeneity, QMIX showed a significant performance advantage over QMIX_NS but much less significant than in Warehouse. However, in the PCP scenario, where very different policies were learned for the Predator and the Capture robots, QMIX and QMIX_NS performed similarly. Thus, as heterogeneity increases, the gap between policies trained with and without parameter sharing shrinks, consistent with the findings from <cit.>. This suggests that in scenarios with more diverse heterogeneity, models trained without parameter sharing may outperform those trained with it.
Additionally, robots trained with QMIX_NS went out of bounds a total of 10 times in simulation and 6 times on real robots. In contrast, robots trained with all parameter sharing methods only went out of bounds once in simulation and once on real robots. When a single robot goes out of bounds, all robots are given a large negative penalty and the episode ends.
This suggests it is much more difficult for robots to learn how to handle events where a single robot can cause all other robots to suffer a penalty without parameter sharing.
§.§ Sim2Real Gap
As shown in table <ref>, there are few significant differences between the algorithms' performance in simulation and in the real Robotarium. This gives strong evidence that the simulator closely matches the real robots. However, there is one key difference between the real and simulated experiments: the robots never collide in simulation, and robots go out of bounds more than 6x more often on average on real robots. The only time an algorithm's metrics were significantly worse on real robots than in simulation was when the real robots collided or went out of bounds.
To further evaluate this, we retrained VDN in PCP using less safe CBFs that are only effective at 17cm and do not slow the robots as much when they are within the safety radii. In addition, we did not stop the episode or penalize the robots for driving out of bounds or colliding. This is how the Robotarium's safety mechanisms are set up by default. Other than these two modifications, we trained these models the same way as the original VDN models.
As seen in table <ref>, the difference in simulation between the test performance of the robots with the default CBFs and with the safe CBFs is not significant. However, when we ran these robots in the Robotarium, they collided in 3 of 10 episodes, despite using the recommended method of preventing collisions, never colliding in the 100 simulated episodes, and the robots with the safe CBFs never colliding. This gives more evidence that, when it comes to safety, there is a significant Sim2Real gap, which highlights the second major benefit of MARBLER: even if robots seem to learn safe policies in simulation, those policies may not run safely in the real world. This makes MARBLER the first open platform that can be used to evaluate how safe learned MRRL policies are.
§ CONCLUSION
We introduce MARBLER, the first open platform with Sim2Real capabilities, realistic robot dynamics, and the ability to evaluate how safe MRRL algorithms are. MARBLER environments are fully compatible with OpenAI Gym, providing an easy interface with modern learning algorithms.
To demonstrate the utility of MARBLER, we developed five MRRL scenarios and utilized the EPyMARL framework to benchmark popular MARL algorithms, both in simulation and in the real-world. We believe MARBLER will help researchers benchmark Sim2Real transfer capabilities of MRRL algorithms in a systematic and reproducible way, making it an invaluable tool for the research community.
IEEEtran
|
http://arxiv.org/abs/2307.04904v1 | 20230710210827 | Fast dynamic time warping and clustering in C++ | [
"Volkan Kumtepeli",
"Rebecca Perriment",
"David A. Howey"
] | eess.SP | [
"eess.SP",
"cs.LG",
"cs.SY",
"eess.SY"
] |
Fast dynamic time warping and clustering in C++
[
==================================================
§ ABSTRACT
We present an approach for computationally efficient dynamic time warping (DTW) and clustering of time-series data. The method frames the dynamic warping of time series datasets as an optimisation problem solved using dynamic programming, and then clusters time series data by solving a second optimisation problem using mixed-integer programming (MIP). There is also an option to use k-medoids clustering for increased speed, when a certificate for global optimality is not essential. The improved efficiency of our approach is due to task-level parallelisation of the clustering alongside DTW. Our approach was tested using the UCR Time Series Archive, and was found to be, on average, 33% faster than the next fastest option when using the same clustering method. This increases to 64% faster when considering only larger datasets (with more than 1000 time series). The MIP clustering is most effective on small numbers of longer time series, because the DTW computation is faster than other approaches, but the clustering problem becomes increasingly computationally expensive as the number of time series to be clustered increases.
§ INTRODUCTION
Time series datasets are ubiquitous in science, engineering and many other fields such as economics. Applications range from finding patterns in energy consumption to detecting brain activity in medical applications and discovering patterns in stock prices in the financial industry. Tools for analysing time-series data are widely available, and one such tool is clustering—a form of unsupervised learning that groups datasets into 'similar' subsets, providing useful insights.
Most time series clustering algorithms depend on dimension reduction or feature extraction techniques to achieve computational efficiency at scale <cit.> but these can introduce bias into the clustering. Distance-based approaches have the significant advantage of directly using the raw data, thus the results are not biased by the feature selection process. However, choosing which distance metric to use is not obvious, and an incorrect choice can lead to illogical clusters. Dynamic time warping <cit.> is a well-known technique for manipulating time series to enable comparisons between datasets, using local warping (stretching or compressing along the time axis) of the elements within each time series to find an optimal alignment between series. This emphasises the similarity of the shapes of the respective time series, rather than the exact alignment of specific features. Finding similarities in shape is often preferable to finding similarities in time whenever time of occurrence is not relevant to the clustering problem <cit.>. The approach can distinguish similarity in time series when lags or shifts in time occur; these are undetectable if using Euclidean distances <cit.>. This is beneficial even when using time series of the same length and time-frame, such as power load demand time series <cit.>. Finally, a user-defined warping constraint allows flexibility on which time shifts or lags can be defined as `similar' for each clustering problem <cit.>. The warping constraint uses a `window' to limit which points in one data set can be mapped to another <cit.>. For example, a warping window of 99 means the first data point in one time series can be mapped only up to the hundredth data point in the time series it is being compared to.
Unfortunately, DTW does not scale well in computational speed as the length and number of time series to be compared increases; the computational complexity grows quadratically with the total number of data points. This complexity is a barrier to DTW being widely implemented in time series clustering <cit.>. In this paper, we present a novel approach to speed up the computation of DTW distances and the subsequent clustering problem, allowing longer time series and larger datasets to be analysed. We use dynamic programming to solve the DTW problem and then perform clustering of the warped time series, using the pairwise DTW distances, by formulating the clustering problem as a mixed-integer program (MIP). The user must specify the number of clusters required, and the algorithm then finds the optimal clusters, including a centroid for each cluster, where the centroid is the time series within each cluster that minimises the intra-cluster distance, i.e., the sum of the distances between each time series within the cluster and the respective centroid. The software associated with this paper, DTW-C++, is freely available from https://github.com/Battery-Intelligence-Lab/dtw-cpp.
While there are other packages available for time series clustering using DTW <cit.>, DTW-C++ offers significant improvements in speed and memory use, especially for larger datasets. As an aside, there are also innovative methods for speeding up DTW by solving approximate versions of the problem. For example, Deriso and Boyd <cit.> considered DTW as a continuous-time optimal control problem and solved this by discretisation with iterative refinement using regularisation instead of hard band constraints.
In our approach, speed-up is achieved by task-level parallelisation, allowing multiple pairwise comparisons between time series to be evaluated simultaneously. Additionally, DTW-C++ implements more efficient memory management by solving the DTW problem using only the preceding vector rather than storing the entire warping matrix (see the mathematical background section for details). This means that the complete warping path between each time series is not stored, but this is not required for the clustering process since only the final cost is needed. Reduction in memory use also paves the way for a future GPU implementation of the algorithm <cit.>. Our approach uses MIP for clustering; this is preferable to other DTW clustering packages that use k-based methods since the iterative nature of the latter means they are susceptible to getting stuck in local optima, whereas MIP provides a certificate for global optimality. However, where a global optimality certificate is not required, DTW-C++ also provides the necessary functions to solve the clustering problem iteratively.
§ OVERVIEW OF METHOD
The current functionality of the software is as follows:
* Load time series data from CSV file(s).
* Calculate DTW pairwise distances between time series, using a vector based approach to reduce memory use. There is also the option to use a Sakoe-Chiba band to restrict warping in the DTW distance calculation <cit.>. This speeds up the computation time as well as being a useful constraint for some time series clustering scenarios (e.g., if an event must occur within a certain time window to be considered similar).
* Produce a distance matrix containing all pairwise comparisons between each time series in the dataset.
* Split all time series into a predefined number of clusters, with a representative centroid time series for each cluster. This can be done using MIP or k-medoids clustering, depending on user choice.
* Output the clustering cost, which is the sum of distances between every time series within each cluster and its cluster centroid.
* Find the silhouette score and elbow score for the clusters in order to aid the user decision on how many clusters, k, to include.
§ MATHEMATICAL BACKGROUND
Consider a time series to be a vector of some arbitrary length. Consider that we have p such vectors in total, each possibly differing in length. To find a subset of k clusters within the set of p vectors using the MIP formulation, we must first make p(p-1)/2 pairwise comparisons between all vectors within the total set and find the `similarity' between each pair. In this case, the similarity is defined as the DTW distance. Consider two time series x and y of differing lengths n and m respectively,
x=(x_1, x_2, ..., x_n)
y=(y_1, y_2, ..., y_m).
The DTW distance is the sum of the Euclidean distance between each point and its matched point(s) in the other vector, as shown in Fig. <ref>. The following constraints must be met:
* The first and last elements of each series must be matched.
* Only unidirectional forward movement through relative time is allowed, i.e., if x_1 is mapped to y_2 then x_2 may not be mapped to
y_1 (monotonicity).
* Each point is mapped to at least one other point, i.e., there are no jumps in time (continuity).
Finding the optimal warping arrangement is an optimisation problem that can be solved using dynamic programming, which splits the problem into easier sub-problems and solves them recursively, storing intermediate solutions until the final solution is reached. To understand the memory-efficient method used in , it is useful to first examine the full-cost matrix solution, as follows. For each pairwise comparison, an n by m matrix C^n× m is calculated, where each element represents the cumulative cost between series up to the points x_i and y_j:
c_i,j = (x_i-y_j)^2 + min( c_i-1,j-1, c_i-1,j, c_i,j-1 )
The final element c_n,m is then the total cost, C_x,y, which provides the comparison metric between the two series x and y. Fig. <ref> shows an example of this cost matrix C and the warping path through it.
For the clustering problem, only this final cost for each pairwise comparison is required; the actual warping path (or mapping of each point in one time series to the other) is superfluous for k-medoids clustering. The memory complexity of the cost matrix C is 𝒪(nm), so as the length of the time series increases, the memory required increases greatly. Therefore, significant reductions in memory can be made by not storing the entire C matrix. When the warping path is not required, only a vector containing the previous row for the current step of the dynamic programming sub-problem is required (i.e., the previous three values c_i-1,j-1, c_i-1,j, c_i,j-1), as indicated in Eq. <ref>.
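The following Python sketch (our illustration of the idea, not the C++ implementation) computes the DTW cost with only two rows of C in memory and an optional Sakoe-Chiba band; only the final cost c_n,m is returned, so memory use is O(m) rather than O(nm).

import numpy as np

def dtw_cost(x, y, band=None):
    # DTW distance using two rolling rows of the cost matrix.
    # band: optional Sakoe-Chiba window (maximum |i - j|); None means unconstrained.
    n, m = len(x), len(y)
    prev = np.full(m + 1, np.inf)
    curr = np.full(m + 1, np.inf)
    prev[0] = 0.0                       # boundary condition c_{0,0} = 0
    for i in range(1, n + 1):
        curr[:] = np.inf
        lo = 1 if band is None else max(1, i - band)
        hi = m if band is None else min(m, i + band)
        for j in range(lo, hi + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            curr[j] = d + min(prev[j - 1], prev[j], curr[j - 1])
        prev, curr = curr, prev         # row i becomes the "previous" row
    return prev[m]                      # final cumulative cost C_{x,y}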
The DTW distance C_x,y is found for each pairwise comparison. As shown in Fig. <ref>, pairwise distances are then stored in a separate symmetric matrix, D^p× p, where p is the total number of time series in the clustering exercise. In other words, the element d_i,j gives the distance between time series i and j.
Using this matrix, D, the time series can be split into k separate clusters with integer programming. The problem formulation begins with a binary square matrix A^p× p, where A_ij=1 if time series j is a member of the ith cluster centroid, and 0 otherwise, as shown in Fig. <ref>.
As each centroid has to be in its own cluster, non-zero diagonal entries in A represent centroids. In summary, the following constraints apply:
* Only k series can be centroids,
∑_i=1^p A_ii=k.
* Each time series must be in one and only one cluster,
∑_i=1^pA_ij=1 ∀ j ∈ [1,p].
* In any row, there can only be non-zero entries if the corresponding diagonal entry is non-zero, so a time series can only be in a cluster where the row corresponds to a centroid time series,
A_ij≤ A_ii ∀ i,j ∈ [1,p].
The optimisation problem to solve, subject to the above constraints, is
A^⋆ = min_A∑_i ∑_j D_ij× A_ij.
After solving this integer program, the non-zero diagonal entries of A represent the centroids, and the non-zero elements in the corresponding columns in A represent the members of that cluster. In the example in Fig. <ref>, the clusters are time series 1, 2, 5 and 3, 4 with the bold time series being the centroids.
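A minimal sketch of this integer program, written here with the open-source PuLP modeller purely for illustration (it does not reflect the package's actual interface), given a precomputed p x p distance matrix D and the desired number of clusters k:

import pulp

def mip_cluster(D, k):
    # Exact clustering of a p x p DTW distance matrix D via the binary
    # assignment matrix A described above (A[i][i] = 1 marks a centroid).
    p = len(D)
    prob = pulp.LpProblem("dtw_clustering", pulp.LpMinimize)
    A = [[pulp.LpVariable(f"A_{i}_{j}", cat="Binary") for j in range(p)]
         for i in range(p)]
    # Objective: total distance from every series to its cluster centroid.
    prob += pulp.lpSum(D[i][j] * A[i][j] for i in range(p) for j in range(p))
    # Exactly k centroids (non-zero diagonal entries).
    prob += pulp.lpSum(A[i][i] for i in range(p)) == k
    # Each time series belongs to exactly one cluster.
    for j in range(p):
        prob += pulp.lpSum(A[i][j] for i in range(p)) == 1
    # A series may only join a cluster whose row corresponds to a centroid.
    for i in range(p):
        for j in range(p):
            prob += A[i][j] <= A[i][i]
    prob.solve()
    centroids = [i for i in range(p) if A[i][i].value() > 0.5]
    members = {c: [j for j in range(p) if A[c][j].value() > 0.5] for c in centroids}
    return centroids, members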
Finding global optimality can increase the computation time, depending on the number of time series within the dataset and the DTW distances. Therefore, there is also a built-in option to cluster using k-medoids, as used in other packages such as <cit.>. The k-medoids method is often quicker as it is an iterative approach, however it is subject to getting stuck in local optima. The results in the next section show the timing and memory performance of both MIP clustering and k-medoids clustering using compared to other packages.
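For completeness, a simple alternating k-medoids iteration on the same distance matrix might look like the sketch below (ours, for illustration, with D given as a NumPy array); it is typically much faster than the MIP but can converge to a local optimum, which is exactly the trade-off described above.

import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    # Voronoi-style k-medoids on a precomputed distance matrix D (p x p).
    rng = np.random.default_rng(seed)
    p = D.shape[0]
    medoids = rng.choice(p, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[medoids], axis=0)          # nearest medoid per series
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            within = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(within)] # minimise intra-cluster cost
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[medoids], axis=0)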
§ PERFORMANCE COMPARISON AND DISCUSSION
We compared our approach with two other DTW clustering packages, <cit.> and <cit.>. The datasets used for the comparison are from the UCR Time Series Classification Archive <cit.>, and consist of 128 time series datasets with up to 16,800 data series of lengths up to 2,844. The full results can be found in Table <ref> in the Appendix. Benchmarking against one of the comparison packages was stopped after the first 22 datasets because its results were consistently over 20 times slower than those of DTW-C++. Table <ref> shows the results for datasets downselected to have a number of time series (N) greater than 100 and a length of each time series greater than 500 points. This is because DTW-C++ is aimed at larger datasets where the speed improvements are more relevant.
As can be seen in these results, DTW-C++ is the fastest package for 90% of the datasets, and all 13 datasets where the comparison package was faster were cases where the entire clustering process was completed in 1.06 seconds or less. Across the whole collection of datasets, DTW-C++ was on average 32% faster. When looking at larger datasets with N > 1000, DTW-C++ is on average 65% faster. In all apart from 2 of the 115 cases where DTW-C++ is the fastest, it uses the k-medoids algorithm. This is, however, to be expected as the latter is an iterative clustering method and therefore does not compute all DTW distances. Fig. <ref> clearly shows the increasing superiority of DTW-C++ as the number of time series increases. In this comparison, both algorithms use k-medoids, so the speed improvement is due to faster dynamic time warping.
MIP was on average 16 times slower than over all samples. Fig. <ref> shows that as the number of time series increases, MIP clustering becomes increasingly slower. This is to be expected because the computational complexity of the MIP clustering optimisation increases significantly. However, as the length of the time series increases, the performance of MIP converges to the speed of , while finding global optimality. This confirms the improved performance of DTW in . Therefore, the MIP approach is recommended for occasions when the time series to be clustered are very long, but the number of time series is smaller. It is also worth noting the length of time series in the UCR Time Series Classification Archive are relatively small compared to many time series datasets, and therefore the performance and relevance of the MIP clustering approach in is understated by these results.
§ ACKNOWLEDGEMENTS
We gratefully acknowledge contributions by
https://howey.eng.ox.ac.ukBattery Intelligence Lab members, and thank BBOXX for project funding and access to data. This work was also funded by the UKRI PFER Energy Superhub Oxford demonstrator and the “Data-driven exploration of the carbon emissions impact of grid energy storage deployment and dispatch” project (EP/W027321/1).
IEEEtran
§ APPENDIX A
We include here the full benchmarking comparison between DTW-C++ (using k-Medoids and MIP) and the two comparison packages. As stated in the main text, benchmarking of the latter was discontinued once it was apparent it was significantly slower on all datasets. Additionally, any datasets with a number of time series greater than 4000 were not included for the MIP clustering as the computation time is significantly longer and MIP is not suitable for solving these clustering problems.
|
http://arxiv.org/abs/2307.05450v1 | 20230711172608 | Bulk-to-boundary propagators with arbitrary spin J in soft-wall AdS/QCD | [
"Valery E. Lyubovitskij",
"Ivan Schmidt"
] | hep-ph | [
"hep-ph"
] | |
http://arxiv.org/abs/2307.04634v1 | 20230710152157 | Toward optimal placement of spatial sensors | [
"Mingyu Kim",
"Harun Yetkin",
"Daniel J. Stilwell",
"Jorge Jimenez",
"Saurav Shrestha",
"Nina Stark"
] | cs.RO | [
"cs.RO",
"stat.OT"
] |
Toward optimal placement of spatial sensors
[
August 12, 2023
===================================================================================================
This paper addresses the challenges of optimally placing a finite number of sensors to detect Poisson-distributed targets in a bounded domain. We seek to rigorously account for uncertainty in the target arrival model throughout the problem. Sensor locations are selected to maximize the probability that no targets are missed. While this objective function is well-suited to applications where failure to detect targets is highly undesirable, it does not lead to a computationally efficient optimization problem. We propose an approximation of the objective function that is non-negative, submodular, and monotone and for which greedy selection of sensor locations works well. We also characterize the gap between the desired objective function and our approximation. For numerical illustrations, we consider the case of the detection of ship traffic using sensors mounted on the seafloor.
Log-Gaussian Cox process, Void probability, Optimal sensor placement, Jensen gap
§ INTRODUCTION
This paper addresses the challenging task of optimally placing a finite number of sensors to detect Poisson-distributed targets within a bounded domain. The primary objective is to develop an optimal sensor placement algorithm that enables the deployment of sensors based on acquired environmental and target data, possibly allowing for adjustments to sensor locations as new target data becomes available.
We model target arrivals using a Poisson distribution, and we consider that the target arrival rate, which is represented by the intensity function of the Poisson distribution, is uncertain. To model the uncertainty in the target arrival rate, we employ a log-Gaussian Cox process, which is a Poisson point process where the logarithm of the intensity function is a Gaussian process. We then estimate the underlying intensity function based on prior target arrival data. Based on the estimated intensity function, the selection of sensor locations is determined with the objective of minimizing the probability of failing to detect a target. We show that this objective is equivalent to maximizing the void probability of the Poisson process, which refers to the probability that no targets are undetected. We propose an approximation of the void probability as the objective function for the sensor placement problem. We show that our approximation of the void probability is submodular and monotonic increasing (monotone). Thus, greedy selection of sensor locations works well. For the numerical illustrations, we consider the case of subsea sensors that detect ship traffic. Example ship traffic data is obtained from historical records of the Automated Identification System (AIS) near Hampton Roads Channel, Virginia, USA.
Poisson point processes have been used to model target arrivals in various applications, such as conducting marine mammals surveys <cit.>, disease mapping <cit.>, crime rate modeling <cit.>, and border surveillance <cit.>. The authors in <cit.> consider a Poisson spatial point process with known intensity function as target arrival model. The authors in <cit.> assume that target arrivals follow a homogeneous Poisson point process with a known intensity value. In contrast, our approach uses an uncertain intensity function that can be estimated from historical data or in real-time. In <cit.>, the authors address greedy selection of sensor locations to detect Poisson-distributed target arrivals. However, in these studies, stochasticity in the intensity function is not accounted for. In <cit.>, the authors seek to adaptively identify a stochastic intensity function while choosing a sequence of single observation locations that minimize a reward function related to the number of missed targets. In contrast, we assume that a stochastic intensity function has been identified from historical data, and we seek a set of sensor locations that minimize the probability of that no targets are missed through the entire domain. The existing studies in this field do not analyze the proximity of their solutions to the optimal solution. In our paper, we bridge this gap by conducting an analysis of the deviation between our proposed approximate solution and the optimal solution.
We model target arrivals as a log-Gaussian Cox process (LGCP) <cit.>. A Cox process is a Poisson process with a stochastic intensity function. For our applications, we model the intensity function as the log of a Gaussian process. To estimate the intensity function based on prior data, we use Integrated Nested Laplace Approximation (INLA) method, which is a deterministic approximation. INLA approximates the posterior distribution of latent Gaussian models using nested Laplace approximations <cit.>. We use void probability as our objective function and select sensor locations where the void probability is maximum. We show that in our formulation of the sensor placement problem, maximizing void probability is the same as minimizing the number of undetected targets.
§.§ Contributions
We address sensor placement using an LGPC target model. Because the optimization problem is numerically challenging, we propose a lower-bound of the objective function that is submodular and monotone, and for which greedy sensor location selection works well. We further characterize the gap between the desired objective function, which is the probability that no targets are missed, and our lower bound, and we show via numerical examples that the gap appears to be small for representative problems that motivate our analysis.
The organization of the paper is as follows. In Section <ref>, we present a detection model with multiple sensors and target arrivals modeled as a log-Gaussian Cox process. In Section <ref>, we derive a lower bound for the void probability that is submodular and monotone, and facilitates computationally tractable selection of sensors. In Section <ref>, we analyze the gap between void probability and its lower bound from Section <ref>. In Section <ref>, we provide numerical results that show the efficacy of our proposed approach. The appendix shows the proofs of submodularity, monotonicity of the proposed objective function from Section <ref>, and of monotonic-decrease of the upper bound of Jensen gap from Section <ref>.
§ PROBLEM FORMULATION
This paper focuses on the sensor placement problem, specifically addressing scenarios where a set of sensors is used to detect stochastic target arrivals.
§.§ Sensor model
We define γ(s, a_i):S × S → [0,1] to be the probability of sensor i detecting a target at location s in a bounded domain S where a_i represents the location of sensor i.
The probability of failing to detect a target at location s with sensor i is expressed 1 - γ(s, a_i). Let 𝐚 = {a_1, a_2, …, a_M } denote the locations of a set of M sensors. Then, when all M sensors are placed at 𝐚, the probability of failing to detect a target at location s is
π(s, 𝐚) := ∏_i=1^M ( 1 - γ(s, a_i) )
§.§ Target arrival model: Log-Gaussian Cox Process
Target arrivals in a bounded region S over a time interval T_c are modeled by an inhomogeneous Poisson point process with a random intensity Λ(S,T_c), where T_c is the time interval over which historical target arrival data are collected to estimate the target arrival rate per unit time within the domain. The intensity Λ(S,T_c) can be thought of as the expected number of target arrivals in area S over a time interval of length T_c and can be computed as
Λ(S,T_c) = 1/T_c∫_Sλ(s) ds
where λ(s):S → [0, ∞) is the intensity function at location s ∈ S. The intensity function is derived to represent the expected number of targets per unit area in a time-interval T_c. We assume that λ(s) is stochastic and the logarithm of the spatial variation in the intensity function is a Gaussian process.
log(λ(s)) ∼GP(μ(s), k(s,s'))
where μ(s), k(s,s') are mean and covariance functions respectively and s', s ∈ S. This model is called the log-Gaussian Cox process (LGCP). We refer the reader to <cit.> for more details on LGCP.
Given Λ(S,T_c), the probability of observing n number targets within S for a time-interval T using Poisson distribution is
P(N(S,T) = n) = (Λ(S,T_c)T)^n/n!e^-Λ(S,T_c)T
where N(S,T) denotes the number of target arrivals.
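As a rough illustration of this model (a sketch with arbitrary grid, kernel, and parameter choices, not the estimation procedure used later), one realisation of N(S,T) can be drawn by sampling the log-intensity from a Gaussian process on a discretised domain, integrating it, and drawing a Poisson count:

import numpy as np

rng = np.random.default_rng(0)

# Discretise a 1-D domain S into cells of width ds.
s = np.linspace(0.0, 10.0, 200)
ds = s[1] - s[0]

# Gaussian process for log(lambda(s)): zero mean, squared-exponential kernel.
ell, sigma = 1.0, 0.5
K = sigma**2 * np.exp(-0.5 * (s[:, None] - s[None, :])**2 / ell**2)
log_lam = rng.multivariate_normal(np.zeros_like(s), K + 1e-8 * np.eye(s.size))
lam = np.exp(log_lam)                      # intensity function lambda(s)

T, T_c = 1.0, 1.0                          # observation / data-collection intervals
Lambda = (T / T_c) * np.sum(lam) * ds      # expected number of arrivals in S over T
n_targets = rng.poisson(Lambda)            # one realisation of N(S, T)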
§ SUBOPTIMAL SENSOR PLACEMENT
Our goal is to find optimal sensor locations that minimize the number of undetected targets in S and for a time period T.
§.§ Void probability
We let N̅(S,T) represent the number of undetected targets in S over time-interval T. The probability that N̅(S,T) is zero is computed from the Poisson process
P(N̅(S,T) = 0 |λ(s) )
= exp( - ∫_S T/T_cλ(s) π(s, 𝐚) ds )
where we say that the intensity function λ(s) has been thinned by the probability of failing to detect a target π(s,𝐚). The probability of that N̅(S,T) is zero is known as the void probability of the log-Gaussian Cox process. Since we assume the target arrival intensity function λ(s) in (<ref>) is stochastic, the void probability is
P( N̅( S,T ) = 0 )
=𝔼_λ[exp(-∫_ST/T_cλ(s) π(s, 𝐚) ds)]
where (<ref>) represents (<ref>) after marginalizing out λ(s).
§.§ Void probability approximation
Let 𝐀 be the set of all possible sensor locations within S such that the location of a finite number of sensors is 𝐚⊂𝐀. We compute a set of optimal sensor locations such that the void probability of the thinned Cox process is maximized
𝐚^⋆ = argmax_𝐚⊂𝐀 𝔼_λ[exp(-∫_S T/T_c λ(s) π(s, 𝐚) ds)]
The objective function in (<ref>) is computationally challenging due to a stochastic variable λ(s) in the integrand. Therefore, we consider a lower bound for the objective function (<ref>) that can potentially be maximized with less computational effort than directly computing the void probability.
We use Jensen's inequality to obtain a computationally tractable lower bound to (<ref>). Furthermore, we show that over any discretized set of possible sensor locations, this lower bound is submodular and monotone. Thus, greedy selection of sensor locations is guaranteed to generate sensor locations at which the lower bound is within at least a factor (1-1/e) of the optimal sensor location <cit.>.
Jensen's inequality applied to (<ref>) yields
𝔼_λ[ e^- Λ̃(𝐚)]
≥ e^-𝔼_λ [ Λ̃(𝐚)]
where
Λ(𝐚)=∫_ST/T_cλ(s) π(s, 𝐚) ds
The inequality in (<ref>) provides a lower bound to the void probability. Since the lower bound e^-𝔼_λ [ Λ̃(𝐚)] is computationally tractable, we seek a set of sensor location 𝐚^⋆ that maximizes the lower bound in (<ref>). That is
𝐚^⋆ = argmax_𝐚⊂𝐀 exp( - ∫_S λ(s) π(s, 𝐚) ds )
where we denote 𝔼_λ[ T/T_cλ(s) ] by λ(s), which is the mean of the intensity function for the Cox process.
We may apply the logarithm without changing the extremum due to the monotonic nature of the logarithm function. Thus, we may apply the logarithm to the objective function in (<ref>), yielding
𝐚^⋆ = argmax_𝐚⊂𝐀 - ∫_S λ(s) π(s, 𝐚) ds.
The objective function in (<ref>) is submodular and monotonically increasing, but not non-negative. Thus, in order to apply the greedy algorithm to compute a finite number of sensor locations in (<ref>), the objective function can be modified by adding a constant term
𝐚^⋆ = argmax_𝐚⊂𝐀 ∫_S λ(s) ds - ∫_S λ(s) π(s, 𝐚) ds
that yields a non-negative function.
We compute a set of sensor locations with respect to the objective function in (<ref>). Below, we formally state our main result.
The non-negative objective function
F(𝐚) = ∫_S λ(s) ds - ∫_S λ(s) π(s, 𝐚) ds
is submodular and monotone.
Greedy selection of sensor locations with respect to the objective function in (<ref>) yields at least 1-1/e of the optimal results.
Proof for Theorem <ref> is in Appendix A. Proof for Corollary <ref> follows from Theorem <ref> and is the well-known result in <cit.>.
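To make the greedy selection concrete, the sketch below (our illustration) works on a discretised one-dimensional domain with the posterior mean intensity evaluated on a grid and a Gaussian detection profile of the kind used later in the numerical illustrations; the marginal gain of a candidate location follows directly from the form of F(a).

import numpy as np

def greedy_placement(s, lam_bar, candidates, M, rho=0.95, sigma_l=0.9):
    # Greedily pick M sensor locations maximising
    #   F(a) = int lam_bar ds - int lam_bar * prod_i (1 - gamma(s, a_i)) ds.
    # s: grid over S, lam_bar: mean of (T/T_c) * lambda(s) on the grid,
    # candidates: allowed sensor locations, gamma: assumed detection profile.
    ds = s[1] - s[0]
    miss = np.ones_like(s)                  # running pi(s, a) for chosen sensors
    chosen = []
    for _ in range(M):
        best_gain, best_a = -np.inf, None
        for a in candidates:
            if a in chosen:
                continue
            gamma = rho * np.exp(-((a - s) ** 2) / sigma_l)
            # Marginal gain: reduction in the expected number of undetected targets.
            gain = np.sum(lam_bar * miss * gamma) * ds
            if gain > best_gain:
                best_gain, best_a = gain, a
        chosen.append(best_a)
        miss *= 1.0 - rho * np.exp(-((best_a - s) ** 2) / sigma_l)
    return chosen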
§ JENSEN GAP ANALYSIS
Jensen's inequality (<ref>) simplifies the computation of the void probability approximation. However, while this inequality yields a lower bound for the void probability, the lower bound is not necessarily tight. The accuracy of the sensor network we obtain using this approximation depends on the size of this gap, which measures how closely our objective function approximates the void probability. A smaller gap indicates a closer approximation to the void probability. Therefore, the size of the gap is a crucial factor in determining the performance of the sensor network.
In this section, building on the results in <cit.>, we first present an upper bound on the Jensen gap given a sensor placement (Theorem <ref>). Then, by proving that this upper bound is monotonically decreasing in the expected number of undetected targets, we show how to compute the bound (Theorem <ref>).
Let X be an one-dimensional random variable with mean μ_X, variance σ_X^2 and P(X∈ (d_1, d_2))=1, where -∞≤ d_1 ≤ d_2 ≤∞. Let ϕ(X) be a twice-differentiable function on (d_1,d_2). Then, the upper bound of the Jensen gap J is
J ≤sup_X∈(d_1,d_2)( ϕ(X)-ϕ(μ)/(X-μ)^2-ϕ'(μ)/X-μ)σ^2
In our problem, the random variable X is the expected number of undetected ships Λ(𝐚) ∈ [0,∞) when sensor locations 𝐚 are known from (<ref>). The twice differential function ϕ(·) is e^-(·) from the definition of void probability. For simplicity, let the mean and variance of Λ(𝐚) be μ_u and σ^2_u, respectively. Then, Theorem <ref> yields
J≤ J_up= sup_Λ(𝐚)∈[0,∞)( e^-Λ(𝐚)-e^-μ_u/(Λ(𝐚)-μ_u)^2+e^-μ_u/Λ(𝐚)-μ_u)σ_u^2
where J_up is the upper bound of the Jensen gap.
When μ_u and σ_u are given from the sensor locations 𝐚, the upper bound J_up in (<ref>) is monotonic-decreasing with respect to Λ(𝐚). Therefore, the upper bound of Jensen gap is maximized when Λ(𝐚) is zero, which yields
J_up(σ_u,μ_u)=σ_u^2(1-e^-μ_u-μ_ue^-μ_u)/μ_u^2
Theorem <ref> is based on the fact that the expression inside the supremum in (<ref>) is monotonically decreasing with respect to increasing Λ̃(𝐚), which is proved in Appendix B.
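Evaluating this bound for a candidate placement only requires the two moments μ_u and σ_u^2 of the expected number of undetected targets; a small helper (ours, for illustration) could be:

import numpy as np

def jensen_gap_upper_bound(mu_u, sigma_u2):
    # J_up = sigma_u^2 * (1 - exp(-mu_u) - mu_u * exp(-mu_u)) / mu_u^2
    if mu_u == 0.0:
        return 0.5 * sigma_u2   # limit of the expression as mu_u -> 0
    return sigma_u2 * (1.0 - np.exp(-mu_u) - mu_u * np.exp(-mu_u)) / mu_u**2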
§ NUMERICAL RESULTS
In this section, we illustrate our results with numerical examples in which we seek to detect ships using sensors located on the seafloor. We apply INLA to estimate the intensity of ship traffic near Hampton Roads, Virginia. Then we greedily select sensor locations using the objective function in (<ref>). Through numerical illustration, we also show that Jensen's gap is small in this example. That is, the difference between the void probability (𝔼_λ[e^-Λ(𝐚)]) and its approximation (e^-𝔼_λ[Λ(𝐚)]) in (<ref>) is small. We also directly evaluate Jensen's gap for a specific numerical illustration and compare it to the upper bound for Jensen's gap in Section <ref>. Our numerical example also shows that the greedy algorithm produces sensor locations that achieve almost the same performance as the optimal sensor locations for the small number of sensors where we can compute optimal locations with respect to void probability via brute force.
We use the ship traffic data near the Hampton Roads Channel, Virginia, USA, provided by the Office for Coastal Management and the Bureau of Ocean Energy Management <cit.>. The data comprises the location (latitude and longitude) of a ship, ship type, and ship detection time. We use the ship traffic data corresponding to the entire month of March 2020 (=T_c), where the domain S in (<ref>) is labeled A in Fig. <ref> (top). Region A is treated as a one-dimensional line for sensor placement: latitude 36.91676 to 37.08721 at longitude -76.08209. Fig. <ref> (top) shows the heat map of ship traffic in the selected area. The red color indicates greater ship traffic has been observed in the area, the yellow color indicates less ship traffic has been observed, and blue means no ships have been observed. Within this bounded domain, the possible locations where sensors can be placed are discretized with an interval of 50m.
§.§ Estimation of intensity of ship arrival model
In order to estimate the intensity function, we use the inlabru package in R <cit.>, which builds on the R-INLA package <cit.>. We consider a zero mean Gaussian process with a Matern covariance function
k(s,s^')= σ_u^2 2^1-ζ/Γ(ζ) (κ ||s - s^' || )^ζ K_ζ (κ ||s - s^' ||)
where s and s^' are two locations within the domain, σ_u^2 is the variance, ζ > 0 is the smoothness parameter, κ=√(8ζ)/β >0 is the scale parameter, || · || denotes the Euclidean distance, K_ζ is the modified Bessel function of the second kind, and β is a spatial range parameter (see <cit.> for more details).
We use the following parameter values for the numerical illustrations: ζ = 1.5, with β and σ_u set through the priors P(β<β_0 = 150)= 0.75 and P(σ_u > σ_u0 = 0.1) = 0.75, respectively. As shown in Fig. <ref> (middle, bottom), with these parameters, the covariance function above, and the historical ship traffic data from March 2020 (histogram), we estimate the mean (black line) and the 95% confidence interval (blue lines) using INLA.
§.§ Sensor model
For the probability of sensor i detecting a ship, we use the sensor model
γ(s, a_i) = ρ e^-(a_i-s)^2/σ_l
where 0 ≤ρ≤ 1 is the maximum probability of detection and σ_l is the length scale parameter. For the numerical illustrations, we consider that ρ = 0.95 and σ_l = 0.9.
§.§ Sensor placement for maximizing void probability approximation
We would like to maximize the void probability directly (the probability that the number of undetected ships is zero), but instead we select sensor locations using the lower bound for the void probability in Section <ref>. Furthermore, we evaluate the difference between the void probability and its lower bound. We do not directly consider optimal sensor location selection. Rather, we evaluate the utility, in this numerical illustration, of greedily selecting sensor locations to maximize the lower bound. We also evaluate the difference between greedy and optimal selection of sensor locations when maximizing the lower bound for small numbers of sensors, for which we can compute optimal sensor locations by brute force.
Given the estimated intensity function and the probability of detection from (<ref>), we compute the suboptimal sensor locations. Fig. <ref> (middle) shows Jensen's gap, which is the difference between void probability (𝔼_λ[e^-Λ(𝐚)]) and void probability approximation (e^-𝔼_λ[Λ(𝐚)]). For the results in Fig. <ref>, using the objective function in (<ref>), we first greedily compute the suboptimal sensor locations that maximize the void probability approximation. We evaluate the void probability for the suboptimal sensor locations using Monte Carlo method. We sample a large number ( ≥ 10,000) of the ship arrival intensity functions λ̂_j from (<ref>), which has been estimated by using INLA. The average void probability for the greedily selected sensor locations is
∑_j^W_λexp(-∫_Sλ̂_j(s) ∏_i=1^M ( 1 - γ(s, a_i^⋆) ) ds)/W_λ
where W_λ is the number of Monte Carlo sampled functions of the stochastic estimated intensity function of ship arrival λ(s). Correspondingly, the j^th sampled function is λ̂_j(s), and a_i^⋆∈𝐚^⋆(={a_1^⋆,...,a_M^⋆}) is i^th greedily selected sensor location. For simplicity, we denote λ̂_j(s)=T/T_cλ_j(s). Fig. <ref> (top) shows the void probability approximation (dashed blue line) with the void probability (red line) for the same set of sensor locations. This process is repeated for the number of sensors M varying from 0 to 100.
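A minimal sketch of this Monte Carlo check (ours, for illustration), assuming an array of posterior intensity samples λ̂_j(s) on the same grid used for placement and the Gaussian detection profile above:

import numpy as np

def mc_void_probability(s, lam_samples, sensors, rho=0.95, sigma_l=0.9):
    # Average void probability over W_lambda posterior intensity samples
    # (lam_samples has shape W_lambda x len(s)) for fixed sensor locations.
    ds = s[1] - s[0]
    miss = np.ones_like(s)
    for a in sensors:
        miss *= 1.0 - rho * np.exp(-((a - s) ** 2) / sigma_l)
    thinned = lam_samples * miss                       # lambda_j(s) * pi(s, a*)
    return float(np.mean(np.exp(-np.sum(thinned, axis=1) * ds)))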
As shown in Fig. <ref> (middle), the maximum percent difference between the void probability and its approximation is less than 0.0125, and as we place more sensors, the gap tends to be smaller. As discussed in Sec. <ref>, using (<ref>), the upper bound of Jensen gap is computed with the expected value of undetected number of ships μ_u and its variance σ_u^2 shown in Fig. <ref> (bottom). As shown as a blue dotted line in Fig. <ref> (middle), the maximum upper bound of Jensen gap is approximately 0.15 and the Jensen gap (black line) is less than or equal to the upper bound. Table <ref> shows that while the computation time for greedily placing 100 sensors is less than 0.1 seconds, evaluating the void probability at the same locations takes 150715.67 seconds.
§.§ Small number of sensor placement for void probability
Fig. <ref> shows the sensor locations for both greedy and optimal sensor placement for the number of sensors varying from 2 to 5. Correspondingly, Table <ref> shows a comparison of the performance of the greedy selection and the optimal sensor placement. It demonstrates that the greedy selection performs well compared to the optimal. In our numerical experiment, the algorithms are implemented in MATLAB on a Windows computer with a 1.3 GHz Core i7 CPU and 16.0 GB of RAM.
§ CONCLUSION
We propose a computationally tractable suboptimal sensor placement method using a void probability approximation as the objective function. This objective function takes into account a stochastic target arrival intensity function. We show that the modified void probability approximation is non-negative, submodular, and monotone, which allows us to use the greedy selection method. Furthermore, we analyze the Jensen gap and provide an upper bound for it. In numerical illustrations with historical ship traffic data, we demonstrate that the greedy algorithm for choosing sensor locations performs well compared to the optimal placement.
§ APPENDIX A: PROOF OF SUBMODULARITY AND MONOTONICITY
Proof:
Let F(𝐚) be defined as
F(𝐚) = ∫_S λ(s) ds -∫_S λ(s)π(s, 𝐚) ds
where λ(s) is the non-negative expectation of intensity function and π(s, 𝐚) is defined in (<ref>). For the location of the set of sensors A, B, C such that A ⊆ B ⊂ C and for a new common sensor location (of A,B) â∈ C \ B, F(𝐚) is submodular if the following inequality holds
F(A∪{â}) - F(A) ≥ F(B∪{â}) - F(B)
For π(s,A) and π(s,B), the sensor networks A and B are composed of the locations of M_1 and M_2 sensors (M_1 ≤ M_2), respectively. Then, π(s,A) is
π(s,A) = ∏_i=1^M_1(1-γ(s,a_i))
Then, similarly with the set of B, for π(s,B)
π(s,B) = ∏_i=1^M_2(1-γ(s,a_i))
With the common sensor location â,
π(s,A∪{â}) = π(s,A)(1-γ(s,â))
π(s,B∪{â}) = π(s,B)(1-γ(s,â))
With the modified objective function in (<ref>)
F(𝐚) =∫_S λ(s) ds-∫_S λ(s) π(s,𝐚) ds
such that
F(A) =∫_S λ(s) ds-∫_S λ(s) π(s,A) ds
F(B) =∫_S λ(s) ds-∫_S λ(s) π(s,B) ds
F(A∪{â}) =∫_S λ(s) ds-∫_S λ(s) π(s,A∪{â})ds
F(B∪{â}) =∫_S λ(s) ds-∫_S λ(s) π(s,B∪{â}) ds
Then, as long as (<ref>) holds, F(𝐚) is submodular. The left (LHS) and right-hand side (RHS) of the inequality (<ref>) are
F(A∪{â})-F(A) = ∫_S λ(s) (π(s,A)-π(s,A∪{â})) ds
F(B∪{â})-F(B) = ∫_S λ(s) (π(s,B)-π(s,B∪{â})) ds
By subtracting RHS from LHS
(F(A∪{â}) -F(A))-(F(B∪{â})-F(B))
=∫_S λ(s) [(π(s,A)-π(s,A∪{â}))
-(π(s,B)-π(s,B∪{â}))] ds
=∫_S λ(s) γ(s,â) π(s,A) ×
(1-∏_j=M_1+1^M_2(1-γ(s,a_j))) ds
where we used π(s,A∪{â}) = π(s,A)(1-γ(s,â)), π(s,B∪{â}) = π(s,B)(1-γ(s,â)), and π(s,B) = π(s,A)∏_j=M_1+1^M_2(1-γ(s,a_j)).
In (<ref>), the integrand is a product of four non-negative factors: λ(s) is non-negative, while γ(s,â), π(s,A), and the remaining factor all lie between zero and one. Therefore, (π(s,A)-π(s,A∪{â}))-(π(s,B)-π(s,B∪{â})) ≥ 0.
That is, F(A∪{â})-F(A) ≥ F(B∪{â})-F(B). This proves that F(𝐚), where 𝐚={a_1,...,a_M} and a_i ∈ S, is non-negative and submodular.
To prove that F(𝐚) is monotonically increasing, we show that F(A) ≤ F(B) holds. Subtracting F(A) from F(B) gives
F(B) - F(A) = ∫_S λ(s) (π(s,A)-π(s,B)) ds
Since π(s,A) ≥ π(s,B) and λ(s) is non-negative, we have 0 ≤ F(B)-F(A), which is equivalent to F(A) ≤ F(B). Thus, F(𝐚) is monotonically increasing.
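The two properties can also be sanity-checked numerically. The short Python script below (with an arbitrary non-negative intensity and the same illustrative γ as before, neither of which comes from the paper's data) evaluates F on randomly sampled nested sensor sets and verifies the submodularity and monotonicity inequalities.

```python
import numpy as np

S = np.linspace(0.0, 10.0, 400)
ds = S[1] - S[0]
lam = np.clip(3.0 + 2.0 * np.sin(S), 0.0, None)     # any non-negative intensity

def gamma(s, a, r=1.0):
    return np.exp(-((s - a) ** 2) / (2.0 * r ** 2))

def F(sensors):
    """F(a) = int lambda ds - int lambda * prod_i (1 - gamma(s, a_i)) ds, as in the proof."""
    miss = np.ones_like(S)
    for a in sensors:
        miss *= 1.0 - gamma(S, a)
    return np.sum(lam) * ds - np.sum(lam * miss) * ds

rng = np.random.default_rng(1)
for _ in range(1000):
    A = list(rng.uniform(0, 10, size=2))
    B = A + list(rng.uniform(0, 10, size=3))         # A is a subset of B
    a_hat = float(rng.uniform(0, 10))                # common new location
    assert F(A + [a_hat]) - F(A) >= F(B + [a_hat]) - F(B) - 1e-9   # submodularity
    assert F(B) >= F(A) - 1e-9                                     # monotonicity
print("submodularity and monotonicity hold on all sampled instances")
```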
§ APPENDIX B: PROOF OF MONOTONIC-DECREASE OF J_UP
Proof: We can rewrite (<ref>) as
J_up = sup_Λ(𝐚)∈[0,∞)σ_u^2(e^-Λ(𝐚)-e^-μ_u+Λ(𝐚)e^-μ_u-μ_ue^-μ_u)/(Λ(𝐚)-μ_u)^2
=sup_Λ(𝐚)∈[0,∞)σ_u^2 e^-μ_u(e^-(Λ(𝐚)-μ_u)-1+Λ(𝐚)-μ_u)/(Λ(𝐚)-μ_u)^2
Let y=Λ(𝐚)-μ_u ∈ [-μ_u,∞). Then, the upper bound is
=sup_y∈[-μ_u,∞)σ_u^2 e^-μ_u(e^-y-1+y)/y^2
Given μ_u and σ_u^2, if h(y)=(e^-y-1+y)/y^2 is monotonically decreasing, then J_up is monotonically decreasing. The function h(y) is monotonically decreasing if
∂ h(y)/∂ y=((2-y)-e^-y(y+2))/y^3≤ 0
There are a number of ways to show that (<ref>) is satisfied. One approach is to analyze the numerator g(y)=(2-y)-e^-y(y+2): we have g(0)=0, g'(y)=-1+e^-y(y+1) with g'(0)=0, and g''(y)=-ye^-y, so g' ≤ 0 everywhere and g decreases through g(0)=0; hence g(y) ≥ 0 for y ≤ 0 and g(y) ≤ 0 for y ≥ 0. Since the denominator y^3 is negative for y<0 and positive for y>0, the ratio in (<ref>) is non-positive in both cases. For y = 0, applying L'Hôpital's rule twice shows that h remains well defined at y=0, with h(0)=1/2.
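A quick numerical confirmation of the monotonicity of h is sketched below (Python for illustration; the range [-5, 20] is an arbitrary window covering [-μ_u, ∞) for any μ_u ≤ 5).

```python
import numpy as np

def h(y):
    """h(y) = (e^{-y} - 1 + y) / y^2, extended by continuity to h(0) = 1/2."""
    y = np.asarray(y, dtype=float)
    out = np.full_like(y, 0.5)
    nz = np.abs(y) > 1e-8
    out[nz] = (np.exp(-y[nz]) - 1.0 + y[nz]) / y[nz] ** 2
    return out

y = np.linspace(-5.0, 20.0, 100_000)      # covers [-mu_u, inf) for any mu_u <= 5
vals = h(y)
assert np.all(np.diff(vals) <= 1e-12), "h should be non-increasing"
print("h(-5) =", vals[0], " h(0) ~", h([0.0])[0], " h(20) =", vals[-1])
```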
|
http://arxiv.org/abs/2308.01916v1 | 20230709040958 | Semi Supervised Meta Learning for Spatiotemporal Learning | [
"Faraz Waseem",
"Pratyush Muthukumar"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation
^1Authors are with the Cheng Kar-Shun Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong SAR, China. {jcenaa}@connect.ust.hk, {cqf}@ust.hk.
^2Authors are with Alibaba Group, China. {zhangjin.zsw, zh334251, luomaochun.lmc, yingya.zyy}@alibaba-inc.com, {lk158400}@cainiao.com.
^3Authors are with the SMILES LAB at the School of Information and Communication Engineering, Xi'an Jiaotong University, Xi'an, China. {peiyixuan}@stu.xjtu.edu.
^*Work done as an intern at Alibaba DAMO Academy.
Jun Cen^1,2*, Shiwei Zhang^2, Yixuan Pei^3, Kun Li^2, Hang Zheng^2, Maochun Luo^2, Yingya Zhang^2, Qifeng Chen^1
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Labeled data is hard to come by in the real world. Moreover, a majority of available data comes in the form of video and visual media.
Recent advancements in representation learning have shown great successes in learning rich representations from a variety of inputs including text, images, and videos.
However, these state-of-the-art architectures are data-intensive, whereas meta learning architectures possess unique capabilities of learning new tasks from diverse training tasks and corresponding labels in the few-shot regime.
We apply semi-supervised meta learning to video data for learning spatiotemporal patterns.
We extend work on Masked Autoencoders (MAEs) utilizing the Vision Transformer (ViT) architecture for scalable self-supervised learning in the spatiotemporal domain.
We approached the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning in three steps.
Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures.
Thus, we test spatiotemporal learning through: a meta-learning architecture only, a representation learning architecture only, and an architecture applying representation learning alongside a meta learning architecture. We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework.
Specifically, we first experiment with applying a pre-trained MAE and fine-tuning on our small-scale spatiotemporal dataset for video reconstruction tasks.
Next, we experiment with training an MAE encoder and applying a classification head for action classification tasks.
Finally, we experiment with applying a pre-trained MAE and fine-tune with MANN backbone for action classification tasks.
To execute our experiments, we generate a custom small-scale video dataset of 518 human-action classes consisting of 24,927 video clips and human-generated annotations sourced from the MiniKinetics-200 and TinyVIRAT datasets. We also modify the ViT backbone used in existing MAE architectures for small-scale datasets by applying Shifted Patch Tokenization (SPT) to combat the lack of locality inductive bias in small-scale datasets.
Our experimental results show that fine-tuning on our custom small-scale video dataset outperforms existing pre-trained MAE architectures on video reconstruction tasks. Further, we find that training an MAE encoder with a small-scale ViT backbone on our small-scale video dataset for action classification tasks converges steadily. Finally, we find that applying a pre-trained MAE and fine-tuning with an MANN backbone for action classification tasks is effective on our small-scale video dataset test tasks.
§ INTRODUCTION
Recent advancements in deep learning including the Transformer architecture have shown great success in both vision and language domains learning rich representations from a variety of inputs including text, images, and videos (https://arxiv.org/abs/1706.03762ref: attention is all you need). Models such as BERT have shown success in the semi-supervised regime in denoising messy data and extracting high level embeddings from partially labeled datasets (https://arxiv.org/abs/1810.04805ref: bert). However, real-world labeled data in the format of videos is scarce and unstructured. State-of-the-art representation learning architectures have shown great success in the vision domain in extracting high-level features from images for reconstruction or classification tasks, however, these models require massive amounts of annotated vision data.
The field of meta learning has shown promise in learning high-level features from data in the few shot regime. Moreover, applying meta-learning to existing supervised learning architectures has been shown to allow for more data-efficient models while preserving generalizability to unseen tasks and datasets (https://arxiv.org/abs/1703.03400ref: model-agnostic meta-learning for fast adaptation of deep networks
). We propose applying semi-supervised meta-learning to video data for learning spatiotemporal patterns. We believe that wrapping existing state-of-the-art self-supervised representation learning architectures within a meta-learning framework will allow our architecture to both improve sample efficiency and generalize well to unseen data, particularly in the application of spatiotemporal learning on video datasets. Specifically, we perform experiments in the style of an ablation study to compare the performances of existing representation learning architectures for video data alone, existing self-supervised meta learning frameworks for video data alone, and our formulation of applying meta learning to representation learning architectures for video data classification tasks.
In addition to considering the effectiveness of applying meta learning towards existing representation learning architectures, we perform modifications to perform experiments with the scope of this project. That is, we scale down the vision transformer (ViT) backbone within the existing representation learning architecture for training on our custom small-scale video dataset. We generate this dataset consisting of video clips describing human-object interactions as well as corresponding human-generated annotations.
In this project, we make the following contributions:
* We collect a custom small-scale human-object video dataset built as a composite dataset from existing human-object video sources upon which we preprocess.
* We apply the meta-learning framework to existing self-supervised representation learning architectures and apply our model to downstream tasks including video reconstruction and action classification
* We perform an ablation study to understand the impact of applying meta-learning to existing self-supervised representation learning architectures on action classification accuracy and video reconstruction loss
§ RELATED WORKS
Prior work in the field of representation learning has shown successes in learning rich representations from vision and language domains. Particularly, autoencoder architectures have been proven to be effective in extracting representations from text and images. (https://arxiv.org/abs/2111.06377ref: masked autoencoders are scalable vision learners) proposed applying masked autoencoders (MAEs) for self-supervised learning for vision. By masking random patches of the input images and pre-training an autoencoder to reconstruct the missing pixels, they found that the architecture was able to perform well on the ImageNet dataset compared to similar self-supervised models. Moreover, their architecture was more efficient and scalable for larger models such that transfer performance in downstream tasks outperformed supervised pre-training models. They noted that a masking ratio larger than 75% masked pixels in an image poses as a non-trivial task to current state-of-the-art vision models.
(https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners) builds off of this work by applying masked autoencoders for video data to learn spatiotemporal patterns. The masking process follows similarly from above, however random spacetime patches of videos are masked out rather than pixels during the pre-training step. Their results showed that a masked autoencoder with a masked ratio of 90% outperforms supervised pre-training approaches by a wide margin on both benchmark datasets and real-world video data.
Meta-learning has shown effectiveness in generalizing well to unseen data with sample-efficient architectures in the few-shot regime. One such implementation of meta-learning is the Memory Augmented Neural Network (MANN) architecture proposed by (https://dl.acm.org/doi/10.5555/3045390.3045585href: meta-learning with memory-augmented neural networks). The authors propose a black-box meta-learning framework with a two-part architecture: a controller implemented as a sequence model – they utilize an LSTM architecture in their implementation – and an external memory module with reading and writing heads implemented with a Neural Turing Machine (NTM) (https://arxiv.org/abs/1410.5401neural turing machines). The LSTM sequence model helps the model learn quickly from a small number of examples.
In our review of this space, we have not found existing work applying meta-learning alone towards self-supervised spatiotemporal learning. However, prior research has been done on applying self-supervised meta learning for natural language classification tasks (https://aclanthology.org/2020.emnlp-main.38/href: self-supervised meta-learning for few-shot natural language classification tasks).
Current vision models have become increasingly powerful since the widespread application of the Transformer architecture. The Vision Transformer (ViT) architecture, proposed by (https://arxiv.org/abs/2103.15691ref: ViViT: a video vision transformer), builds upon the self-attention mechanism proposed by (https://arxiv.org/abs/1706.03762ref: attention is all you need) for learning complex high dimensional representations from image datasets. This family of architectures relies on large amounts of image data, typically in the scale of hundreds of gigabytes worth of labelled images to train large architectures with hundreds of millions of parameters.
Some work has been done on scaling down these large-scale ViT architectures while preserving the learned high-level representations.(https://arxiv.org/abs/2112.13492ref: vision transformer for small-size datasets) proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) as methods to combat the lack of locality inductive bias available in small-scale datasets.
Existing work on applying representation learning architectures such as MAEs with ViT backbones show incredible performance in video classification and video reconstruction tasks, but are limited in real-world applications due to the data requirements of these sample inefficient architectures. Current research on small-scale ViT architectures perform well on image classification tasks, but have yet to be extended towards video data or applied in the regime of self-supervised learning.
§ METHODS
We approach the goal of applying meta-learning to self-supervised masked autoencoders for spatio-temporal learning using MANNs (memory augmented neural networks), in a similar fashion proposed by (https://dl.acm.org/doi/10.5555/3045390.3045585href: meta-learning with memory-augmented neural networks). In our case, we utilize the masked autoencoder (MAE) approach for initial pre-training, and then fine-tune using the MANN approach, using the MAE encoder as a backbone to the sequence model. In our implementation, we utilize the ViT sequence model scaled down and trained on our small-scale video dataset. We scale down the ViT backbone within the MAE encoder and decoder in a method proposed by (https://arxiv.org/abs/2112.13492ref: vision transformer for small-size datasets), however in their implementation, they focus on image data.
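A schematic of how the pre-trained MAE encoder can be plugged into a MANN-style black-box learner is sketched below. This is a simplified PyTorch illustration, not our exact implementation: the toy encoder, the embedding dimension of 192, the 5-way episode format, and the LSTM controller size are all placeholder assumptions, and the external NTM memory of the full MANN is reduced here to the LSTM controller.

```python
import torch
import torch.nn as nn

class MAEMANNClassifier(nn.Module):
    """Schematic of the third approach: a (pre-trained) MAE encoder embeds each clip,
    and an LSTM controller consumes (embedding, previous label) pairs in the
    black-box meta-learning style of MANN."""

    def __init__(self, encoder, embed_dim=192, n_way=5, hidden=256):
        super().__init__()
        self.encoder = encoder                     # e.g. the small-scale ViT MAE encoder
        self.n_way = n_way
        self.controller = nn.LSTM(embed_dim + n_way, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_way)

    def forward(self, clips, prev_labels):
        # clips: (B, T, C, frames, H, W) episode of support/query clips
        B, T = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).reshape(B, T, -1)
        onehot = nn.functional.one_hot(prev_labels, self.n_way).float()
        out, _ = self.controller(torch.cat([feats, onehot], dim=-1))
        return self.head(out)                      # per-step logits over the N classes

# Toy usage with a stand-in encoder; the real one is the MAE's ViT encoder.
toy_encoder = nn.Sequential(nn.Flatten(1), nn.LazyLinear(192))
model = MAEMANNClassifier(toy_encoder, embed_dim=192, n_way=5)
clips = torch.randn(2, 10, 3, 8, 64, 64)           # (batch, episode length, C, T, H, W)
prev = torch.randint(0, 5, (2, 10))                # labels shifted by one step, MANN-style
print(model(clips, prev).shape)                    # torch.Size([2, 10, 5])
```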
We consider the MAE method proposed by (https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners
) as a baseline for testing the performance of a state-of-the-art classification algorithm that does not use meta learning. We then train the MANN architecture with the ViT backbone end-to-end to evaluate the performance of a solely meta-learning based approach. Finally, we test our proposed combination of MAE with MANN fine-tuning to test if the MAE architecture in combination with meta-learning approaches is more effective in learning spatiotemporal patterns.
One benefit of applying meta-learning in this domain is that if we assume videos of humans interaction with objects share some high level structure, we can combine video clips from various human-object interaction datasets, allowing us to pre-train on more data. These combinations of benchmarks will allow us to pinpoint whether applying meta-learning with MAE is effective for spatiotemporal learning as well as the individual contributions of each.
To summarize, we devised a three-stage approach to reaching our proposed goals:
* Apply pre-trained MAE and fine-tune for video reconstruction downstream task
* Train MANN with MAE encoder on small-scale dataset and apply classification head for action classification downstream task
* Apply pre-trained MAE and fine-tune with MANN backbone for action classification downstream task
Fig. <ref> describes a visualization of the model architectures for each of the three approaches we implement.
§ EXPERIMENTS
For the first approach in our technical method, we fine-tune the pre-trained MAE on our small-scale dataset and evaluate against the baseline video MAE model pre-trained on Kinetics-400. We utilize a pre-trained MAE architecture sourced from the authors of the video MAE architecture trained with the ViT-Large backbone on Kinetics-400 with a masking ratio of 90% and 1600 effective epochs (https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners
).
For the second approach in our technical method, we train the MAE autoencoder with our small-scale ViT and fine-tune with a classification head on our small-scale composite dataset. We ran experiments training the full video MAE as well as training the video MAE outfitted with a classification head. Additionally, we evaluate training the video autoencoder with and without masking to analyze the difference in training loss and classification accuracy. Note that the autoencoders used for training in this set of experiments utilize our small-scale ViT backbone which implements Shifted Patch Tokenization (SPT) to preserve locality-specific representations typically lost with small-scale datasets. Further, since the original work proposing small-scale ViT architectures implemented a small-scale ViT for image classification rather than video classification, we extend upon their work by including spacetime attention to their small-scale ViT architecture in order to support 3D video data in the format of time-indexed series of 2D images.
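A sketch of the Shifted Patch Tokenization step is shown below, applied frame-by-frame for simplicity. This is an illustrative PyTorch implementation under our own simplifications (zero-padded diagonal shifts of half a patch, patch size 8, embedding dimension 192); it is not the reference code of Lee et al., and the spacetime-attention part of our small-scale ViT is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftedPatchTokenizer(nn.Module):
    """Sketch of SPT: each frame is concatenated with four copies shifted by half a
    patch toward the four diagonals (zero-padded), then cut into patches and
    linearly projected."""

    def __init__(self, in_ch=3, patch=8, dim=192):
        super().__init__()
        self.patch = patch
        self.proj = nn.Conv2d(in_ch * 5, dim, kernel_size=patch, stride=patch)

    def _shift(self, x, dy, dx):
        # zero-pad, then crop back to the original size -> diagonal shift
        x = F.pad(x, (max(dx, 0), max(-dx, 0), max(dy, 0), max(-dy, 0)))
        H, W = x.shape[-2:]
        return x[..., max(-dy, 0):H - max(dy, 0), max(-dx, 0):W - max(dx, 0)]

    def forward(self, frame):                      # frame: (B, C, H, W)
        s = self.patch // 2
        shifted = [self._shift(frame, dy, dx)
                   for dy, dx in [(-s, -s), (-s, s), (s, -s), (s, s)]]
        tokens = self.proj(torch.cat([frame] + shifted, dim=1))
        return tokens.flatten(2).transpose(1, 2)   # (B, num_patches, dim)

tok = ShiftedPatchTokenizer(patch=8, dim=192)
print(tok(torch.randn(2, 3, 64, 64)).shape)        # torch.Size([2, 64, 192])
```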
§.§ Datasets
For our experiments, we seek to perform spatiotemporal learning on video datasets. Initially, we started by utilizing the Kinetics-400 video dataset consisting of 400 human-action classes each with at least 400 video clips (https://arxiv.org/abs/2007.07355TinyVIRAT: low-resolution video action recognition). In total, the dataset consisted of 306,245 video clips each around 10 seconds in length with a resolution of 224 x 224 pixels. However, the size of this dataset is over 300 GB, and while it can be effectively used for the ViT-base backbone with 84,943,656 parameters within the MAE encoder of the existing state-of-the-art representation learning architecture for video learning, it was not a feasible dataset within the scope of our project. Instead, we developed a small-scale ViT backbone within the MAE encoder architecture which instead has 3,109,008 parameters. Correspondingly, we sought to scale down our video dataset used for training our small-scale ViT backbone.
One aspect we considered while building our dataset was since we apply the MANN meta-learning framework for self-supervised spatiotemporal learning, we can combine multiple datasets of varying action class distributions together into a composite dataset where each unique action class could be considered a new task during black-box adaptation with the MANN architecture. As a result, we were not limited to a single data source when constructing our small-scale dataset and instead, we utilized human-action video clips and annotations from a variety of input sources to generate our small-scale video dataset. In a semi-supervised dataset, labels are sparse, hence we hypothesize that a meta-learning based approach that learns quickly from a small number of examples can excel where standard fine-tuning may not be sufficient.
Our composite small-scale video dataset was sourced from the Kinetics-400, MiniKinetics-200, and TinyVIRAT datasets. MiniKinetics-200 is a subset of the Kinetics dataset consisting of the 200 human-action classes with the most training examples and TinyVIRAT is a video dataset containing real-life tiny actions in videos collected from low resolution video cameras consisting of 12829 video clips. Our small-scale video dataset contains 24,927 video clips amongst 518 human-action classes. Each video clip in our dataset consists of 100 frames at a temporal resolution of 10 FPS, meaning that each clip is around 10 seconds in length. We scale all clips in our dataset to a resolution of 64x64 pixels to perform efficient training and achieve our project goals with the computational resources available to us. All spatial and temporal resolution downscaling was performed using the OpenCV Python package.
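The downscaling step can be sketched as follows (Python/OpenCV). The frame-selection rule below — striding through the source by the ratio of frame rates and stopping at 100 frames — is a simplification of our preprocessing, and the file paths are placeholders.

```python
import cv2

def preprocess_clip(src_path, out_path, n_frames=100, fps=10, size=(64, 64)):
    """Resample a source clip to roughly 10 FPS, 64x64 pixels, 100 frames."""
    cap = cv2.VideoCapture(src_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or fps
    stride = max(int(round(src_fps / fps)), 1)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    kept = idx = 0
    while kept < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            writer.write(cv2.resize(frame, size, interpolation=cv2.INTER_AREA))
            kept += 1
        idx += 1
    cap.release()
    writer.release()
    return kept

# e.g. preprocess_clip("raw/clip.mp4", "processed/clip.mp4")
```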
We split our dataset into training and testing splits such that we reserve 18406 videos over 414 action classes for training and 6521 videos over 104 action classes for testing. For our implementation utilizing meta-learning for self-supervised spatiotemporal learning, each human-action class can be formulated as a distinct task, where our task training-testing split is roughly an 80-20 split. Kinetics-400, Mini-Kinetics200, and TinyVIRAT all include human-generated annotations of the video clips, which define the locations of individual video clips within the action classes.
§ RESULTS
For the first approach in our technical outline, we provide cross entropy loss results of a pre-trained MAE fine-tuned on our small-scale video dataset against a pre-trained baseline MAE architecture trained on Kinetics-400. For the sake of brevity, we provide experimental results for every 20th frame in the 100 frame video samples of our small-scale video dataset. We evaluate the pre-trained MAE baseline against our fine-tuned MAE model on the testing set of our small-scale video dataset consisting of 6521 100-frame video clips of 64x64 pixel resolution over 104 human-action classes. Table <ref> describes the averaged cross entropy loss for every 20th frame in the 100-frame video clips across the test set for our fine-tuned model compared against the pre-trained MAE baseline. The overall averaged cross entropy loss for all 100-frames across the test set in our pre-trained model was 0.1776, whereas the pre-trained MAE baseline was 0.1781.
We also provide a video reconstruction visualization for a single video in the testing split of our small-scale video dataset. Since we cannot show all 100 frames of this video reconstruction, we show a visualization of every 20th video frame reconstructed by our fine-tuned model in Figure <ref>.
For the second approach in our technical outline, we evaluate training our modified video MAE architecture with a small-scale ViT backbone end-to-end as well as training with a classification head attached for action classification tasks. These experiments were conducted on the TinyVIRAT dataset with 26 action classes, so we can formulate the experimental setting as a 26-way multi-class classification task. The end-to-end video MAE architecture with a small-scale ViT backbone contains 3.1 million parameters, while the video MAE architecture with the classification head contains 2.7 million parameters.
The top-1 accuracy for the end-to-end video MAE architecture with a small-scale ViT backbone was 37% and the top-5 accuracy was 75%. Figures <ref> and <ref> describe the training and validation curves of this end-to-end model. Note that since we do not normalize the loss value with the number of examples in the batch, the magnitude of the loss is not necessarily indicative of the model performance.
Additionally, we evaluate training the video autoencoder outfitted with a classification head with and without masking for our 26-way multi-class classification task. We consider a masking ratio of 80% when implementing masking. We find that the top-5 performance on the TinyVIRAT dataset is 76% with masking and 74.5% without masking. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head trained with masking. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head trained without masking. Figures <ref> and <ref> describe the validation split accuracy curve over training for the masked autoencoder and the autoencoder without masking, respectively.
When using a video autoencoder with shifted patch tokenization and a reduced number of parameters, only 10 epochs of pretraining and 10 epochs of fine-tuning yield 46.8% top-1 accuracy, which is significantly higher than the previous methods we tested, indicating the importance of using shifted patch tokenization and of not masking during the fine-tuning phase.
§ CONCLUSION
To summarize, we apply self-supervised meta-learning for spatiotemporal learning on video data. We extend existing representation learning architectures for vision and video data and apply meta-learning through the black-box Memory Augmented Neural Network (MANN) architecture. We evaluate the effectiveness of applying MANN alongside Masked Auto Encoders (MAE) by tackling our goals for this project in a three stage approach.
Firstly, we experiment with fine-tuning a pre-trained MAE architecture on our custom small-scale video dataset. This small-scale video dataset is built and collected by combining multiple human-action video datasets such as the TinyVIRAT, Kinetics-400, and MiniKinetics-200 datasets. Our experimental results of our fine-tuned model against a pre-trained MAE baseline shows that our model outperforms the pre-trained MAE architecture in terms of averaged cross entropy loss across all frames of the testing split videos in our small-scale dataset with a value of 0.1776 compared to the baseline's averaged cross entropy loss of 0.1781. However, since the difference between these two values are negligible – our fine-tuned model outperforms the baseline by 0.3% – we note that there is not a significant enough improvement from fine-tuning a pre-trained MAE architecture on our small-scale video dataset alone. We anticipated these results and hypothesize that because the pre-trained model is very large and trained on hundreds of gigabytes worth of Kinetics-400 data, whereas we fine-tune on our small-scale dataset consisting of less than 25,000 video clips, fine-tuning this architecture directly will not have a noticeable impact on predictive power. Nevertheless, our fine-tuned model slightly outperforms the baseline pre-trained MAE architecture, however there are not enough results or significant enough a difference to suggest a trend.
Next, we experiment with training an end-to-end video MAE architecture with a modified small-scale ViT backbone. We evaluated this architecture on the TinyVIRAT dataset and formulated the problem as a 26-way multi-class video classification problem. The top-1 accuracy score was 37% and the top-5 accuracy score was 75%. We believe this is a significant accomplishment because the majority of existing benchmarks for the TinyVIRAT challenge utilize very large encoder architectures with hundreds of millions of parameters. However, we are able to achieve competent results on the TinyVIRAT dataset with a small-scale ViT backbone with just 3 million parameters.
Finally, we experiment with training a video auto encoder architecture with a classification head and evaluating the effect of masking. We similarly evaluated both the masked and non-masked architectures on the TinyVIRAT 26-way multi-class video classification task and find that the top-5 performance for the masked auto encoder architecture with an 80% masking ratio was 76% and for the auto encoder without masking was 74.5%. Comparatively, this shows that applying masking to the architecture improves action-class classification task performance. However, with just 50 epochs used for training, we would need to continue running experiments and fine-tune the masking ratio hyperparameter to confirm this trend.
§ FUTURE WORK
In the future, we want to experiment with fine-tuning the MANN architecture with and without a pre-trained video MAE. Another test we want to try is to replace MANN with other meta-learning implementations such as Model Agnostic Meta-Learning (MAML) proposed by (https://arxiv.org/abs/1703.03400ref: model-agnostic meta-learning for fast adaptation of deep networks
). We can also experiment with integrating text signals such as utilizing BERT pretrained embeddings generated on descriptions of videos in the action-class classification task setting. We have performed significant contributions to the TinyVIRAT codebase and could consider contributing to open-source implementations by providing our codebase for small-scale video MAE and meta-learning capabilities. Additionally, we have introduced a hook to export latent video frame representations, which can be used for future work by us and others. We believe we have created very useful building blocks for building more advanced vision transformers for the spatiotemporal learning domain.
[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[2] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[3] C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in International Conference on Machine Learning (ICML), pp. 1126–1135, PMLR, 2017.
[4] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al., "The Kinetics human action video dataset," arXiv preprint arXiv:1705.06950, 2017.
[5] S. H. Lee, S. Lee, and B. C. Song, "Vision transformer for small-size datasets," arXiv preprint arXiv:2112.13492, 2021.
[6] S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy, "Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 305–321, 2018.
[7] U. Demir, Y. S. Rawat, and M. Shah, "TinyVIRAT: Low-resolution video action recognition," in 2020 25th International Conference on Pattern Recognition (ICPR), pp. 7387–7394, IEEE, 2021.
[8] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, "Masked autoencoders are scalable vision learners," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16000–16009, 2022.
[9] A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lučić, and C. Schmid, "ViViT: A video vision transformer," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6836–6846, 2021.
[10] T. Bansal, R. Jha, T. Munkhdalai, and A. McCallum, "Self-supervised meta-learning for few-shot natural language classification tasks," arXiv preprint arXiv:2009.08445, 2020.
[11] C. Feichtenhofer, H. Fan, Y. Li, and K. He, "Masked autoencoders as spatiotemporal learners," arXiv preprint arXiv:2205.09113, 2022.
[12] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, "Meta-learning with memory-augmented neural networks," in International Conference on Machine Learning (ICML), pp. 1842–1850, PMLR, 2016.
[13] A. Graves, G. Wayne, and I. Danihelka, "Neural Turing machines," arXiv preprint arXiv:1410.5401, 2014.
|
http://arxiv.org/abs/2307.03882v1 | 20230708024835 | The Busboy Problem: Efficient Tableware Decluttering Using Consolidation and Multi-Object Grasps | [
"Kishore Srinivas",
"Shreya Ganti",
"Rishi Parikh",
"Ayah Ahmad",
"Wisdom Agboh",
"Mehmet Dogar",
"Ken Goldberg"
] | cs.RO | [
"cs.RO"
] |
The Busboy Problem: Efficient Tableware Decluttering
Using Consolidation and Multi-Object Grasps
Kishore Srinivas^1, Shreya Ganti^1, Rishi Parikh^1, Ayah Ahmad^1,
Wisdom Agboh^1,2,
Mehmet Dogar^2, Ken Goldberg^1
^1The AUTOLab at UC Berkeley (automation.berkeley.edu).
^2University of Leeds, UK.
=============================================================================================================================================================================================================
We present the “Busboy Problem": automating an efficient decluttering of cups, bowls, and silverware from a planar surface. As grasping and transporting individual items is highly inefficient, we propose policies to generate grasps for multiple items. We introduce the metric of Objects per Trip (OpT), the number of objects carried by the robot to the collection bin per trip, to analyze the improvement achieved by our policies. In physical experiments with singulated items, we find that consolidation and multi-object grasps resulted in a 1.8x improvement in OpT, compared to methods without multi-object grasps. See https://sites.google.com/berkeley.edu/busboyproblem for code and supplemental materials.
§ INTRODUCTION
The post-meal task of clearing a dining table, commonly referred to as “bussing,” requires moving cups, bowls, and utensils that are dispersed across the surface into a bin or tray to be cleaned in the kitchen. This is a common task that occurs after any event involving food service and dish collection, from daily household meals to casual picnics to formal cocktail parties and dinners. Automating this tedious and repetitive task could reduce fatigue and busy work for the skilled waiters who typically perform it.
We define the “Busboy Problem" as the efficient transfer of cups, bowls, and utensils (collectively called tableware) from the table into a designated collection bin while minimizing the time required for completion. This is an interesting problem for automation because the tableware are of varying shape, requiring low-level planning to execute grasps and high-level planning to consolidate tableware for efficient transport. Even small inaccuracies can lead to toppling or dropping delicate and expensive tableware, so the system must be extremely reliable.
Previous work in multi-object grasping, object manipulation, and grasp candidate generation highlights the efficiency of grasping pre-stacked objects as well as objects manually oriented for multi-object grasps <cit.>. Whereas these works explore situations in which objects are already positioned for such grasps, our work investigates methods of stacking and clustering objects into these favorable positions for multi-object grasps.
In this paper, we present a framework and algorithms for the Busboy Problem. We consider a scenario where multiple items are placed on a work surface (see Fig. <ref>), under an RGBD camera. We use the concept of multi-object grasping, which enables the robot to move multiple items simultaneously, thus reducing the number of pick-and-place actions needed.
This paper makes the following contributions:
* Formulation of the Busboy Problem.
* Action primitives for rearranging and grasping cups, bowls, and utensils.
* Two algorithms that leverage consolidation and multi-object grasps.
* Experimental results indicating a 1.8x improvement in OpT.
§ RELATED WORK
§.§ Multi Object Grasping
Prior work on multi-object grasping includes different grasping techniques to facilitate multi-object grasps <cit.>, detecting the number of objects in a grasp <cit.>, decluttering surfaces <cit.>, and multi-object grasping to place objects in virtual reality <cit.>. Yamada et al. considered the simplified multi-object grasping problem, where the objects are already in a configuration where they can be grasped at once <cit.>. Agboh et. al. <cit.>
showed that friction can increase picks per hour for convex polygonal objects.
Some prior work has focused on the design of grippers for multi-object grasping. Jiang et al. <cit.> proposed a vacuum gripper with multiple suction cups, while Nguyen et al. <cit.> proposed a soft gripper based on elastic wires for multi-object grasping.
Object stacking <cit.> has the potential to improve the number of objects per trip. We take inspiration from these works to include a stacking primitive.
§.§ Pulling
Prior work by Berretty et al. has examined the use of inside-out pulling to orient convex polygonal parts <cit.>. We utilize a similar technique for circular cups and bowls. Furthermore, a planner for ensuring convergence to the final pose of pulling trajectories is proposed by Huang et al. <cit.>, where they examine the motion of planar objects undergoing quasi-static movement.
§.§ Grasp Candidates
Satish et al. discuss using a synthetic data sampling distribution that combines grasps sampled from the policy action set with guiding samples from a robust grasping supervisor to construct grasp candidates <cit.>.
Additionally, Mahler et al. <cit.> discuss the use of energy-bounded caging to evaluate grasp candidates. They efficiently compute candidate rigid configurations of obstacles that form energy-bounded cages of an object, where the generated push-grasps are robust to perturbations.
Mousavian et al. describe the process of using a variational autoencoder to generate grasps by mapping the partial point cloud of an observed object to a diverse set of grasps for the object <cit.>.
Because of the relative simplicity of our setup, we found that an analytical approach to constructing grasp candidates is sufficient. In the case of bowls and cups, we sample a random point uniformly on the rim and then orient the gripper perpendicular to the tangent of the circle at that point. In the case of utensils, we identify the axis of the utensil, and pick the highest depth point along that line, with the gripper perpendicular to the axis.
§.§ Object Manipulation in Cluttered Environments
Efficiently finding object manipulation plans in high-dimensional environments with a large number of objects is a challenging problem. Hasan et al. <cit.> addressed this problem by identifying high-level manipulation plans in humans, and transferring these skills to robot planners. Other work by Tirumala et al. <cit.> used tactile sensing to singulate layers of cloth from a stack.
Different from these works, our goal in the cluttered environment is to bring objects together, or stack them, to enable multi-object grasps.
§ THE BUSBOY PROBLEM
The Busboy Problem involves the task of decluttering a workspace containing cups, bowls, and utensils, with the objective of minimizing both the time and number of trips required for completion.
§.§ Assumptions
In the initial configuration, a planar workspace is defined in a cartesian grid (x, y) and has n_c cups, n_b bowls, and n_u utensils scattered across its surface.
All items are assumed to be face up, visible by camera, and within a workspace defined by the constraints of the robot arm. These items may be initially stacked on top of one another or resting individually on the surface, and we assume that the initial state meets the following criteria:
* All items are of known dimensions, and cups and bowls are circular when viewed from top-down. Cups have radius 4.5cm, bowls have radius 8.5cm, and utensils are at most 17cm × 1.8cm.
* Cups and bowls are upright, and utensils are laid flat on the surface.
* Any stacks that exist are stable, such that r_0 ≥ r_1 ≥ ... ≥ r_s, where r_0 represents the radius of the vertically lowest item, and r_s the highest one.
* Initially, no two items are touching (items are singulated).
§.§ State
We use cups, bowls, and utensils (forks and spoons) as the tableware set - collectively called “tableware" - in this work.
Each cup and bowl has a position [x, y], and each utensil has a position [x, y] and orientation θ.
§ DECLUTTERING TABLEWARE
§.§ Action primitives
We propose to use a combination of manipulation primitives to solve the Busboy Problem. We specifically propose to use single object grasps, multi-object grasps, pull-grasps, and stack-grasps to efficiently clear a work surface of items (Figure <ref>).
§.§.§ Grasp
We use both single and multi-object grasps in this work. Let u⃗_G be the grasp to pickup objects — single or multiple. We represent this action as:
u⃗_G = [p⃗_G, θ_G]
where p⃗_G = [x_G, y_G, z_G] is the center point of the grasp, and θ_G is the grasp orientation.
§.§.§ Pull-Grasp
A pull-grasp action involves two steps: a pull of one object to another, then a multi-object grasp of both objects. We represent a pull action as:
u⃗_P = [p⃗_S, θ_S, p⃗_E, θ_E]
where p⃗_S = [x_S, y_S, z_S] is the pull start point, θ_S is the gripper orientation at the start point, p⃗_E = [x_E, y_E, z_E] is the pull end point, and θ_E is the gripper orientation at p⃗_E. For circular objects such as bowls and cups, the gripper pulls outwards from the center of the dish using an internal pull, and for utensils, the gripper cages the utensil around its center point while moving it (Figure <ref>). Then, we denote a pull-grasp action as:
u⃗_⃗P⃗G⃗ = [u⃗_P, u⃗_G]
§.§.§ Stack-Grasp
A stack-grasp action involves two steps: a stack of one object onto another, then a multi-object grasp of both objects. We represent a stack action as:
u⃗_⃗S⃗ = [u⃗_G_i, p⃗_L, θ_L]
where u⃗_G_i is a grasp on the lifted object, and p⃗_L = [x_L, y_L, z_L] is the placement point on the stationary object, and θ_L is the gripper orientation at p⃗_L. Then, we denote a stack-grasp action as:
u⃗_⃗S⃗G⃗ = [u⃗_S, u⃗_G]
§.§ Determining allowable actions
§.§.§ Grasp
A single-object grasp is always allowable. We can safely assume this since any dish or stack of items is already top-down graspable. When no other actions are allowed, the single-object grasp action is used as a default to clear the workspace.
A multi-object grasp is allowable when the grasp heights of both items are similar (within an adjustable threshold value) and if the lateral distance between the grasp points of both items is less than the width of the gripper. If the grasp heights of the items are significantly different, the gripper will have to either collide with the taller dish while attempting to grasp the shorter dish or grasp only the taller dish to avoid the collision, and either case results in a failure of grasping multiple items at once. Similarly, if the items are separated by more than the maximum inside width of the grippers, an attempt to grasp both at the same time will fail.
§.§.§ Pull
A pull of two items is allowable if a multi-object grasp can be executed on those items and if no other objects lie between the two items on the workspace. We disallow pull actions of items for which a multi-object grasp cannot be executed, since the pull becomes a wasted action. We also disallow pull actions of items with other objects between them to ensure that the intermediate objects are not displaced in a non-deterministic manner.
§.§.§ Stack
A stack of dish d_a with radius r_a onto dish d_b with radius r_b is allowable if r_a ≤ r_b. This means that a cup can be stacked onto a bowl, but not vice versa, and that a utensil can be stacked onto any other dish, including another utensil. This is to ensure that the stack stability assumption present at the initial state remains valid after each action.
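The allowability checks above can be summarized in a few predicates. The Python sketch below is illustrative rather than our exact implementation: the 2 cm height tolerance is an assumed value for the adjustable threshold, the lateral grasp distance is approximated by the closest gap between the two rims, and the "nothing in between" test is a simple point-to-segment distance check.

```python
import math
from dataclasses import dataclass

GRIPPER_WIDTH = 0.085       # max gripper opening w (8.5 cm)
HEIGHT_TOL = 0.02           # "similar grasp height" threshold -- an assumed value

@dataclass
class Dish:
    x: float
    y: float
    radius: float           # rim radius; ~0 for utensils in this simplification
    grasp_height: float

def rim_gap(a, b):
    """Closest gap between the two rims, used as the lateral grasp distance."""
    return math.hypot(a.x - b.x, a.y - b.y) - a.radius - b.radius

def blocks_path(o, a, b):
    """True if dish o lies near the straight segment between a and b."""
    ax, ay, bx, by = a.x, a.y, b.x, b.y
    t = (o.x - ax) * (bx - ax) + (o.y - ay) * (by - ay)
    t = max(0.0, min(1.0, t / max((bx - ax) ** 2 + (by - ay) ** 2, 1e-9)))
    px, py = ax + t * (bx - ax), ay + t * (by - ay)
    return math.hypot(o.x - px, o.y - py) <= o.radius + max(a.radius, b.radius)

def multi_grasp_allowed(a, b):
    return (abs(a.grasp_height - b.grasp_height) <= HEIGHT_TOL
            and rim_gap(a, b) <= GRIPPER_WIDTH)

def pull_allowed(a, b, others):
    heights_ok = abs(a.grasp_height - b.grasp_height) <= HEIGHT_TOL
    return heights_ok and not any(blocks_path(o, a, b) for o in others)

def stack_allowed(top, bottom):
    return top.radius <= bottom.radius

cup, bowl = Dish(0.10, 0.20, 0.045, 0.09), Dish(0.40, 0.20, 0.085, 0.11)
print(multi_grasp_allowed(cup, bowl), pull_allowed(cup, bowl, []), stack_allowed(cup, bowl))
```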
§.§ Robustness of action primitives
We present three primitives to robustly execute the above actions. This design makes the primitives more robust.
§.§.§ Grasp
When executing a grasp at location x, y, z, the robot will open its grippers centered around x, y, and then move down to the appropriate height, as measured by the depth sensor, before closing the gripper to grasp the object. The affordances granted by max gripper opening, gripper height, and gripper width mean that an off-center grasp point x, y, z will still successfully complete the single-object or multi-object grasp of the object (Figure <ref>).
§.§.§ Pull
For cups and bowls, the gripper pulls outwards from the center of the dish, contacting the inner surface of the dish (Figure <ref>). This action is successful as both r_b and r_c are larger than the width of the gripper when closed. If the gripper is anywhere within the opening of the object, it will be able to move the target object to a specified location. For utensils, the gripper cages the utensil around its center point while moving it, preventing unwanted rotation and moving the utensil to its specified location.
§.§.§ Stack
For bowls and cups, the top lip radius is larger than the radius of the base, giving the sides a taper. Because a dish d_a is only stacked onto another dish d_b of equal or larger size, the base radius of d_a is guaranteed to be smaller than the top radius of d_b, allowing the tapered sides of the items to funnel d_a into place even if there is slight error in the placement of the dish. Placing a utensil onto a bowl is extremely robust to error because of the relative radii of the items, and placing a utensil onto another utensil is robust due to the curvature of the utensils themselves which slide a misplaced utensil into place, making them naturally conducive to stacking.
§.§ Policies
§.§.§ Pull Policy
The pull policy combines Pull-Grasp and Grasp actions. From the initial scene, it checks if any multi-object grasps can be executed right away, and executes those first. Then, it runs the Pull-Grasp action for all remaining items, pulling together items that don't cause collisions and executing multi-object grasps to clear them from the workspace. If any items remain after all possible multi-object grasps are executed, those items are cleared with single-object Grasp actions. After each action, a new image of the workspace is taken and the state representation is updated to reflect the new state of the workspace, including any tableware that has been moved or left behind by the previous action. This policy is formalized in Algorithm <ref>.
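The high-level loop of the pull policy is sketched below on a simplified state (dictionaries with position, radius, and grasp height). It is a planning sketch only — collision checks along the pull path, re-imaging after each action, and the actual robot execution are omitted — and the numeric constants reuse the assumed values from the previous sketch.

```python
import math

W, H_TOL = 0.085, 0.02      # gripper opening and height tolerance (assumed values)

def heights_ok(a, b):
    return abs(a["h"] - b["h"]) <= H_TOL

def gap(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"]) - a["r"] - b["r"]

def pull_policy_plan(items):
    """Plan actions in the order of the pull policy: immediate multi-grasps first,
    then pull-then-grasp for compatible pairs, single grasps for the rest."""
    items, plan = list(items), []
    while items:
        pair = next(((a, b) for i, a in enumerate(items) for b in items[i + 1:]
                     if heights_ok(a, b) and gap(a, b) <= W), None)
        if pair is None:
            pair = next(((a, b) for i, a in enumerate(items) for b in items[i + 1:]
                         if heights_ok(a, b)), None)
            if pair is not None:
                plan.append(("pull", pair[0]["id"], pair[1]["id"]))
        if pair is None:
            pair = (items[0],)
            plan.append(("grasp", items[0]["id"]))
        else:
            plan.append(("multi_grasp", pair[0]["id"], pair[1]["id"]))
        for it in pair:
            items.remove(it)
    return plan

scene = [{"id": i, "x": 0.1 * i, "y": 0.2, "r": 0.045, "h": 0.10} for i in range(4)]
print(pull_policy_plan(scene))
```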
§.§.§ Stack Policy
The stack policy combines Stack-Grasp and Grasp actions. It repeatedly executes the Stack-Grasp action to clear the workspace, and if there are any remaining items they are cleared with single-object Grasp actions. It prioritizes stacking utensils onto bowls and transporting them to the bin, and then tries to stack the remaining dishes. Stacking utensils first is an efficient way to improve the number of OpT for this policy. The policy is formalized in Algorithm <ref>.
After utensils are cleared, the stacks created by this policy are limited to be a combination of at most 2 existing stacks (i.e. once a Stack action is executed, the next action is necessarily a Grasp on the resulting stack, not another Stack action onto that stack). This is because when 4 or more bowls or cups are stacked, the height difference between the lip of the top dish and the lip of the bottom dish exceeds the height of the gripper jaws, causing many attempted grasps to fail. By limiting stacks to at most 2 existing stacks, we significantly reduce the chances of creating a stack with more than 3 dishes.
§ EXPERIMENTS AND RESULTS
We evaluate through physical experiments the robustness of the pulling action primitive and then evaluate the pull and stack policies on a real-world table clearing task.
§.§ Experimental Setup
We use a UR5 robot arm with a Robotiq 2F-85 gripper and Intel RealSense 455D RGBD camera mounted 83cm above the workspace. The workspace is a flat 78cm x 61cm surface with 4 cups, 4 bowls, and 4 utensils, n_b = n_c = n_u = 4. In our experimental setup, we calculated a max gripper opening of w = 8.5cm, gripper height of h = 4.5cm, bowl radius r_b = 8.5cm, cup radius r_c = 4.5cm and utensil width r_u = 1.8cm.
We identify and locate tableware on the workspace with a vision pipeline. Since the surface of the workspace is white, we use darker colored tableware to be easily visible. To locate cups and bowls, we first use edge detection, contour forming, and HoughCircles to identify circular shapes on the workspace, then filter these circles based on the known image radius of cups and bowls. We cluster these circles by their centers and remove circles that overlap beyond a specified threshold, allowing an unambiguous detection of cups and bowls. To locate utensils, we use edge detection and contour forming, and then filter out the contours that are too “square", as determined by the aspect ratio of the identified contour. We draw an imaginary line through the lengthwise center of bounding rectangle of the contour, and sample depth values along that line; we use the highest depth point as the grasp point of the utensil to allow the gripper maximum clearance with the surface.
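The two detection steps can be sketched with standard OpenCV calls as below. The Hough and Canny parameter values, the aspect-ratio cutoff, and the assumption that the depth image is pixel-aligned with the color image are illustrative choices, not the tuned values of our pipeline; the utensil grasp point mirrors the "highest depth point along the centre line" rule described above.

```python
import cv2

def detect_circles(bgr, r_px, tol=5):
    """Detect cups or bowls of (known) image radius r_px pixels on the workspace."""
    gray = cv2.medianBlur(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=max(2 * r_px - tol, 1),
                               param1=100, param2=30,
                               minRadius=max(r_px - tol, 1), maxRadius=r_px + tol)
    return [] if circles is None else [(int(x), int(y)) for x, y, _ in circles[0]]

def utensil_grasp_points(bgr, depth, max_aspect=0.45):
    """Return, per elongated contour, the highest-depth point on its centre line."""
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if min(w, h) / max(w, h) > max_aspect:       # too "square" -> not a utensil
            continue
        if w >= h:                                    # centre line along the long axis
            line = [(x + i, y + h // 2) for i in range(w)]
        else:
            line = [(x + w // 2, y + i) for i in range(h)]
        points.append(max(line, key=lambda p: depth[p[1], p[0]]))
    return points
```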
We define three tiers to evaluate the performance of our algorithm on scenes of increasing complexity.
* Tier 0: scenes contain 6 items, either all cups, all bowls, or all utensils, with no stacks in the initial state.
* Tier 1: scenes contain 4 items each of cups, bowls, and utensils, and have no stacks in the initial state.
* Tier 2: scenes contain 4 items each of cups, bowls, and utensils, but we allow stacks of at most 3 objects in the initial state.
For Tier 2, we limit initial stacks to at most 3 objects because of the dimensions of the gripper, as mentioned in Section <ref>. The number of objects in a stack, and not the actual dimensions of individual dishes, is the main limiting factor for the grasp, because we grasp dishes from the rim. The dishes could actually be much larger and still be graspable as long as the walls are thin enough to allow the gripper to slide over them, and the weight of the dish does not exceed the payload limitations of the gripper itself. We limit ourselves to a small set of known kitchenware objects for consistency in our experiments.
We evaluate the performance of the pull and stack policies against a baseline single-item policy, referred to as “Random" in Table <ref>. This policy picks a dish at random, and if the dish is a cup or bowl, it uniformly samples a point on the rim and grasps the dish at that point. If the dish is a utensil, it identifies the grasp point of the utensil as described above and grasps the utensil at that point. This policy is stack-agnostic, so even in Tier 2 when there are stacks present in the initial state, it treats each item in the stack as its own object, and clears the stack by transporting one item at a time.
§.§ Scene Generation
In order to evaluate our policies, we generate multiple scenes at each tier, and every policy is run once on each scene. To generate each scene, we use the dimensions of the workspace (78cm × 61cm), and r_b, r_c, r_u for the dimensions of the objects. We randomly sample x, y locations within the scene for each object. If an object intersects with another object, we create a stack of the two objects if the maximum number of intersections has not been exceeded, and resample a position for the object if it has. Tiers 0 and 1 allow no such intersections, whereas Tier 2 allows 4 intersections. For each trial we manually reset the scene to maintain consistency.
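A sketch of the scene generator is given below. It follows the description above (uniform position sampling, an overlap either consumed by the intersection budget as a stack or resampled), but simplifies stacking: stack stability (radius ordering) and the per-stack size limit are not enforced in this snippet.

```python
import random

WORKSPACE = (0.78, 0.61)                     # workspace size in metres
RADII = {"bowl": 0.085, "cup": 0.045, "utensil": 0.018}

def generate_scene(counts, max_intersections=0, seed=None):
    """Sample a scene: uniform positions; an overlap either becomes a stack
    (while the intersection budget lasts) or the position is resampled."""
    rng = random.Random(seed)
    placed, used = [], 0
    for kind, n in counts.items():
        for _ in range(n):
            while True:
                r = RADII[kind]
                x = rng.uniform(r, WORKSPACE[0] - r)
                y = rng.uniform(r, WORKSPACE[1] - r)
                hits = [p for p in placed
                        if (x - p["x"]) ** 2 + (y - p["y"]) ** 2 < (r + p["r"]) ** 2]
                if not hits:
                    placed.append({"kind": kind, "x": x, "y": y, "r": r})
                    break
                if used < max_intersections:
                    used += 1
                    base = hits[0]
                    placed.append({"kind": kind, "x": base["x"], "y": base["y"],
                                   "r": r, "stacked_on": base["kind"]})
                    break
    return placed

tier2 = generate_scene({"bowl": 4, "cup": 4, "utensil": 4}, max_intersections=4, seed=0)
print(len(tier2), "items placed")
```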
§.§ Evaluation
We evaluated on 9 scenes at Tier 0 (3 scenes per type of dish), 3 scenes at Tier 1, and 3 scenes at Tier 2. A trial is one execution of one policy on one scene, so we have a total of (9+3+3)*3 = 45 trials. For each trial, we record the time in seconds to clear the table, the OpT, and the number of failures. A failure occurs when the robot is unable to move all items to the collection bin, either because of a perception failure that leaves items behind on the workspace or a policy failure that drops a dish off the workspace. We report our results in Table <ref>.
To evaluate the performance of our policies in a more realistic scenario, we present the theoretical improvement in execution time when the bin is placed further away from the workspace, as might be seen in a home or professional kitchen. Given the physical limitation of the UR5 arm length, we simulated the longer distance by adding time delays of 3 and 5 seconds in both directions of motion (to and from the collection bin). We find that moving the bin further away causes the stack and pull policies to perform significantly better than the baseline policy, because motions to and from the bin are penalized, making policies with fewer total actions perform better. We report these results in Table III in the appendix of the project website.
§ DISCUSSION
Results show that using consolidation and multi-object grasps allows clearing the workspace efficiently, with the pull policy transporting at least 1.6x as many objects per trip, and the stack policy at least 1.8x. A discussion of resulting execution time improvement is in the appendix of the project website.
§ LIMITATIONS AND FUTURE WORK
An overhead RGBD camera gives only a clear top view. This affects state estimation and can lead to failures. We assume circular cups and bowls. This makes it easy to compute grasps. For more general dishes, advanced grasp generation methods will be needed. In future work, we will loosen the assumption of starting with singulated objects. We also hope to combine the pull and stack policies into a higher-level policy that can efficiently clear the workspace.
§ ACKNOWLEDGMENTS
This research was performed at the AUTOLAB at UC Berkeley in
affiliation with the Berkeley AI Research (BAIR) Lab,
and the CITRIS “People and Robots" (CPAR) Initiative. The authors were supported in part by donations from Toyota Research
Institute, Bosch, Google, Siemens, and Autodesk and by equipment
grants from PhotoNeo, NVidia, and Intuitive Surgical. Mehmet Dogar was partially supported by an EPSRC Fellowship (EP/V052659).
|
http://arxiv.org/abs/2307.05547v1 | 20230709055046 | Robust Routing Made Easy: Reinforcing Networks Against Non-Benign Faults | [
"Christoph Lenzen",
"Moti Medina",
"Mehrdad Saberi",
"Stefan Schmid"
] | cs.DC | [
"cs.DC"
] |
Robust Routing Made Easy:
Reinforcing Networks Against Non-Benign Faults
Research supported by the Federal Ministry of Education and Research (BMBF), grant 16KISK020K, 2021-2025.
This article extends work presented at SSS
2017 <cit.>.
Christoph Lenzen^1 Moti Medina^2 Mehrdad Saberi^3 Stefan Schmid^4
^1CISPA Helmholtz Center for Information Security, Germany ^2Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
^3University of Maryland, College Park, USA ^4TU Berlin, Germany
August 12, 2023
===============================================================================================================================================================================================================================================================================
With the increasing scale of communication networks,
the likelihood of failures grows as well.
Since these networks form a critical backbone
of our digital society, it is important that they rely on
robust routing algorithms which ensure connectivity
despite such failures. While most modern communication
networks feature robust routing mechanisms, these mechanisms
are often fairly complex to design and verify, as they
need to account for the effects of failures and rerouting
on communication.
This paper conceptualizes the design of robust routing mechanisms,
with the aim to avoid such complexity. In particular,
we showcase simple and generic blackbox transformations that increase the resilience of routing against independently distributed failures, allowing the routing scheme to be simulated on the original network, even in the presence
of non-benign node failures (henceforth called faults). This is attractive
as the system specification and routing policy can simply be preserved.
We present a scheme for constructing such a reinforced network, given
an existing (synchronous) network and a routing scheme. We prove that
this algorithm comes with small constant overheads, and only requires a minimal
amount of additional node and edge resources;
in fact, if the failure probability is smaller than 1/n,
the algorithm can come without any overhead at all.
At the same time,
it can tolerate
independent random (node) faults,
asymptotically almost surely.
We complement our analytical results with simulations on different real-world topologies.
§ INTRODUCTION
Communication networks have become a critical backbone
of our digital society. For example, many datacentric applications
related to entertainment, social networking, or health, among others,
are distributed and rely on the high availability and
dependability of the interconnecting network (e.g., a
datacenter network or a wide-area network).
At the same time, with the increasing scale of
today's distributed and networked systems (often relying
on commodity hardware as a design choice
<cit.>), the number of
failures is likely to increase as well
<cit.>.
It is hence important that communication networks can tolerate
such failures and
remain operational despite the failure of some of their
components.
Robust routing mechanisms aim to provide such guarantees:
by rerouting traffic quickly upon failures,
reachability is preserved. Most communication
networks readily feature robust routing mechanisms,
in the control plane (e.g.
<cit.>), in
the data plane (e.g. <cit.>), as well as on higher
layers (e.g. <cit.>).
However, the design of such robust routing mechanisms is
still challenging and comes with tradeoffs, especially if
resilience should extend to multiple failures <cit.>.
Besides a fast reaction time and re-establishing connectivity, the
resulting routes typically need to fulfill certain additional properties,
related to the network specification and policy.
Ensuring such properties however can be fairly complex,
as packets inevitably follow different paths after failures.
Interestingly, while the problem of how to re-establish reachability
after failures is well explored,
the problem of providing specific properties on the failover
paths is much less understood.
This paper conceptualizes the design of robust routing, presenting a new approach to robust routing which conceptually differs
significantly from existing literature by relying on proactive reinforcement (rather than reaction to failures).
In particular, our approach aims to overcome the complexities involved in designing
robust routing algorithms, by simply sticking to the original
network and routing specification.
To achieve this, our approach is to mask the effects of failures
using redundancy: in the spirit of error correction,
we proactively reinforce networks by adding a minimal number of
additional nodes and links, rather than
coping with failed components when they occur.
The latter is crucial
for practicability: significant refactoring of existing systems
and/or accommodating substantial design constraints is rarely
affordable.
In this paper, to ensure robustness while maintaining
the network and routing specification, we aim to
provide a high degree of fault-tolerance,
which goes beyond simple equipment and failstop failures,
but accounts for more general faults which include non-benign
failures of entire nodes.
While our approach presented in this paper will be general
and applies to any network topology, we are particularly
interested in datacenter networks (e.g., based on low-dimensional
hypercubes or d-dimensional tori <cit.>)
as well as in wide-area
networks (which are typically sparse <cit.>).
We will show that our approach works especially well for these networks.
§.§ The Challenge
More specifically,
we are given a network G=(V,E) and a routing scheme, i.e.,
a set of routes in G.
We seek to reinforce the network G by
allocating additional resources, in terms of nodes and edges,
and to provide a corresponding routing strategy to simulate the routing scheme
on the original network despite non-benign node failures.
The main goal is to maximize the probability that the network withstands
failures (in particular, random failures of entire nodes),
while minimizing the resource overhead.
Furthermore, we want to ensure that the network transformation is simple
to implement, and that it interferes as little as possible with the existing system design and operation, e.g., it
does not change the reinforced system's specification.
Toward this goal, in this paper, we make a number of simplifying assumptions.
First and most notably, we assume independent failures,
that is, we aim at masking faults with little or no correlation among each other.
Theoretically, this is motivated by the fact that
guaranteeing full functionality despite having f adversarially placed faults trivially requires redundancy (e.g., node degrees) larger than f.
There is also practical motivation to consider independent faults:
many distributed systems proactively avoid fault clusters
<cit.> and there is also empirical
evidence that in certain scenarios, failures are only weakly correlated <cit.>.
Second, we treat nodes and their outgoing links as fault-containment regions (according to <cit.>), i.e., they are the basic components our systems are comprised of.
This choice is made for the sake of concreteness;
similar results could be obtained when considering, e.g., edge failures, without changing the gist of results or techniques.
With these considerations in mind, the probability of uniformly random
node failures that the reinforced system can tolerate is a canonical choice for measuring resilience.
Third, we focus on synchronous networks, for
several reasons:
synchrony not only helps in handling faults, both on the theoretical level (as illustrated by the famous FLP theorem <cit.>) and for ensuring correct implementation, but it also
simplifies presentation, making it easier to focus on the proposed concepts.
In this sense, we believe
that our approach is of particular interest in the context of real-time systems,
where the requirement of meeting hard deadlines makes synchrony an especially attractive choice.
§.§ Contributions and Techniques
This paper proposes a novel and simple approach to robust routing,
which decouples the task of designing a reinforced network from the task of
designing a routing scheme over the input network. By virtue of this decoupling,
our approach supports arbitrary routing schemes and objectives,
from load minimization to throughput maximization and beyond,
in various models of computation, e.g., centralized or distributed, randomized
or deterministic, online or offline, or oblivious.
We first consider a trivial approach:
we simply replace each node by ℓ∈ℕ copies
and for each edge we connect each pair of copies of its endpoints,
where ℓ is a constant.[Choosing concreteness over generality,
we focus on the, in our view, most interesting case of constant ℓ. It is straightforward to generalize the analysis.]
Whenever a message would be sent over an edge in the original graph,
it should be sent over each copy of the edge in the reinforced graph.
If not too many copies of a given node fail, this enables each receiving copy to recover the correct message.
Thus, each non-faulty copy of a node can run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph.
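For concreteness, the following Python sketch (our own illustration; all identifiers are ours and not part of the original specification) builds the reinforced graph G'=(V×[ℓ],E') for the Byzantine setting with ℓ=2f+1 copies per node.

from itertools import product

def reinforce_full(V, E, f):
    # Naive reinforcement: ell = 2f+1 copies per node, and every original
    # edge (v, w) is replaced by all ell^2 pairs of copies.
    ell = 2 * f + 1
    V_prime = list(product(V, range(ell)))                       # copies v_i = (v, i)
    E_prime = [((v, i), (w, j))
               for (v, w) in E for i in range(ell) for j in range(ell)]
    P = {vp: vp[0] for vp in V_prime}                            # projection P(v_i) = v
    return V_prime, E_prime, P

# Example: a directed 3-cycle with f = 1, i.e., ell = 3 copies per node.
V = ['a', 'b', 'c']
E = [('a', 'b'), ('b', 'c'), ('c', 'a')]
Vp, Ep, P = reinforce_full(V, E, f=1)
assert len(Vp) == 3 * len(V) and len(Ep) == 9 * len(E)           # nu = 3, eta = 9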
When analyzing this approach,
we observe that asymptotically almost surely (a.a.s., with probability 1-o(1)) and with ℓ=2f+1, this reinforcement can sustain an independent probability p of Byzantine node failures <cit.>, for any p∈ o(n^-1/(f+1)), i.e., faulty nodes may violate the protocol in any arbitrary way (and may hence also collude).
This threshold is sharp up to (small) constant factors: for p∈ω(n^-1/(f+1)), a.a.s. there is some node for which all of its copies fail.
If we restrict the fault model to omission faults
(faulty nodes may skip sending some messages but otherwise act according to the protocol), ℓ=f+1 suffices.
The cost of this reinforcement is that the number of nodes and edges increase by factors of ℓ and ℓ^2, respectively.
Therefore, already this simplistic solution can support non-crash faults of probability p∈ o(1/√(n)) at a factor-4 overhead.
We note that the simulation introduces no large computational overhead and
does not change the way the system works, enabling to use it as a blackbox.
Also randomized algorithms can be simulated in a similar fashion,
provided that all copies of a node have access to a shared source of randomness.
Note that this requirement is much weaker than globally shared randomness:
it makes sense to place the copies of a node in physical proximity to approximately preserve the geometrical layout of the physical realization of the network topology.
Our approach above raises the question whether
we can reduce the involved overhead further.
In this paper, we will answer this question positively:
We propose to apply the above strategy only to a small
subset E' of the edge set.
Denoting by v_1,…,v_ℓ the copies of node v∈ V, for
any remaining edge {v,w}∈ E∖ E' we add only edges
{v_i,w_i}, i∈ [ℓ], to the reinforced graph.
The idea is to choose E' in a way such that the connected components
induced by E∖ E' are of constant size, yet |E'|=ε |E|.
This results in the same asymptotic threshold for p, while the number of edges of the reinforced graph drops to ((1-ε)ℓ+εℓ^2)|E|.
For any constant choice of ε, we give constructions with this property for grids or tori of constant dimension and minor-free graphs of bounded degree.
Again, we consider the case of f=1 of particular interest:
in many typical network topologies, we can reinforce the network to boost the failure probability that can be tolerated from Θ(1/n) to Ω(1/√(n)) by roughly doubling (omission faults) or tripling (Byzantine faults) the number of nodes and edges.
The redundancy in this second construction is near-optimal under the constraint that we want to simulate an arbitrary routing scheme in a blackbox fashion,
as it entails that we need a surviving copy of each edge, and thus in particular each node.
In many cases, the paid price will be smaller than the price for making each individual component sufficiently reliable to avoid this overhead.
Furthermore, we will argue that the simplicity of our constructions enables us to re-purpose the redundant resources in applications with less strict reliability requirements.
Our results show that while the approach is general and can be applied to any
existing network topology (we will describe and analyze valid reinforcements for
our fault models on general graphs), it can be refined and is particularly
interesting in the context of networks that
admit suitable partitionings. Such networks include
sparse, minor-free graphs, which are practically relevant topologies in
wide-area networks, as well as torus graphs and low-dimensional
hypercubes, which arise in datacenters and parallel architectures.
To complement our theoretical findings and investigate the reinforcement
cost in real networks, we conducted experiments on the Internet Topology Zoo <cit.>.
We find that our approach achieves robustness at significantly lower cost compared to
the naive replication strategy often employed in dependable networks.
§.§ Putting Things Into Perspective
In contrast to much existing robust routing literature on reactive
approaches to link failures <cit.> (which come with a delay),
we consider a proactive approach by enhancing the network with redundancy.
Our proactive approach also allows us to replicate the routing scheme (and hence the network policy) on the new network.
In particular, we show that if the failure probability is smaller than 1/n, there is a good probability that our approach works even without any overhead at all.
Furthermore, there are two ways in which our system can be used. One approach is to replicate the entire node (including the compute part), and then forward the traffic to its two associated peers. Alternatively, traffic can also simply be replicated to multiple NICs, without additional compute requirements, depending on the failure model. More generally, our contribution can also be viewed more abstractly, with the robust routing happening on a logical level, depending on the failure scenario.
Also, we show that a message that is not valid can simply be ignored, as the rest of the system continues to operate correctly.
The most closely related work to ours is NetCo <cit.>,
which also relies on network reinforcement and can handle malicious behavior.
NetCo is based on a robust
combiner concept known from cryptography, and complements each router with two additional routers.
Using software-defined networking, traffic is replicated across the three (untrusted) devices and then merged again, using a consensus algorithm. While a high degree of robustness is achieved, the three-fold overhead is significant. More importantly, however, in contrast to our approach, NetCo requires special hardware for splitting and merging the traffic; while the functionality of this hardware can be simple, it still needs to be trusted. The consensus requirement dramatically reduces the throughput, as shown in the empirical evaluation of NetCo in <cit.>.
Our solution does not require such components and is hence not only more practical but also significantly more performant.
§.§ Organization
In <ref>, we sketch the properties of our approach and state a number of potential applications. In <ref>, we formalize the fault models that we tackle in this article alongside the notion of a valid reinforcement and its complexity measures. In <ref> and <ref>, we study valid reinforcements on general graphs, and in <ref>, we study more efficient reinforcements for specific graphs.
We complement our analytical results with an empirical simulation study in
<ref>.
In <ref> we raise a number of points in favor of the reinforcement approach. We review related work in
<ref>, and we conclude and present a number of interesting
follow-up questions in <ref>.
§ HIGH-LEVEL OVERVIEW: REINFORCING NETWORKS
Let us first give an informal overview of our blackbox transformation
for reinforcing networks (for formal specification see <ref>), as well as its guarantees and preconditions.
Assumptions on the Input Network
We have two main assumptions on the network at hand: (1) We consider synchronous routing networks, and (2) each node in the network (alongside its outgoing links) is a fault-containment region, i.e., it fails independently from other nodes.
We do not make any assumptions on the network topology, but will provide specific
optimizations for practically relevant topologies (such as sparse, minor-free networks
or hypercubes) in <ref>.
Valid Reinforcement Simulation Guarantees
Our reinforcements create a number of copies of each node. We have each non-faulty copy of a node run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph. Moreover, the simulation fully preserves all guarantees of the schedule, including its timing, and introduces no big computational overhead.
This assumption is simple to meet in stateless networks, while it requires synchronization primitives in case of stateful network functions.
Unaffected Complexity and Cost Measures
Routing schemes usually revolve around objective functions such as load minimization, maximizing the throughput, minimizing the latency, etc., while aiming to minimize complexity related to, e.g., the running time for centralized algorithms, the number of rounds for distributed algorithms, the message size, etc. Moreover, there is the degree of uncertainty that can be sustained, e.g., whether the input to the algorithm is fully available at the beginning of the computation (offline computation) or revealed over time (online computation). Our reinforcements preserve all of these properties, as they operate in a blackbox fashion. For example, our machinery readily yields various fault-tolerant packet routing algorithms in the Synchronous Store-and-Forward model by Aiello et. al <cit.>. More specifically, from <cit.> we obtain a centralized deterministic online algorithm on unidirectional grids of constant dimension that achieves a competitive ratio which is polylogarithmic in the number of nodes of the input network w.r.t. throughput maximization. Using <cit.> instead, we get a centralized randomized offline algorithm on the unidirectional line with constant approximation ratio w.r.t. throughput maximization. In the case that deadlines need to be met the approximation ratio is, roughly, O(log^* n) <cit.>. As a final example, one can obtain from <cit.> various online distributed algorithms with sublinear competitive ratios w.r.t. throughput maximization.
Cost and Gains of the Reinforcement
The price of adding fault-tolerance is given by the increase in the network size, i.e., the number of nodes and edges of the reinforced network in comparison to the original one. Due to the assumed independence of node failures, it is straightforward to see that the (uniform) probability of sustainable node faults increases roughly like n^-1/(f+1) in return for (i) a linear-in-f increase in the number of nodes and (ii) an increase in the number of edges that is quadratic in f. We then proceed to improve the construction for grids and minor-free constant-degree graphs to reduce the increase in the number of edges to being roughly linear in f. Based on this information, one can then assess the effort in terms of these additional resources that is beneficial, as less reliable nodes in turn are cheaper to build, maintain, and operate. We also note that, due to the ability of the reinforced network to ensure ongoing unrestricted operability in the presence of some faulty nodes, faulty nodes can be replaced or repaired before communication is impaired or breaks down.
Preprocessing
Preprocessing is used, e.g., in computing routing tables in Oblivious Routing <cit.>.
The reinforcement simply uses the output of such a preprocessing stage in the same manner as the original algorithm. In other words, the preprocessing is done on the input network and its output determines the input routing scheme. In particular, the preprocessing may be randomized and does not need to be modified in any way.
Randomization
Randomized routing algorithms can be simulated as well, provided that all copies of a node have access to a shared source of randomness. We remark that, as our scheme locally duplicates the network topology, it is natural to preserve the physical realization of the network topology in the sense that all (non-faulty) copies of a node are placed in physical proximity. This implies that this constraint is much easier to satisfy than globally shared randomness.
§ PRELIMINARIES
We consider synchronous routing networks.
Formally, the network is modeled as a directed graph G=(V,E), where V is the set of n≜ |V| vertices, and E is the set of m≜ |E| edges (or links).
Each node maintains a state, based on which it decides in each round for each of its outgoing links which message to transmit.
We are not concerned with the inner workings of the node, i.e., how the state is updated;
rather, we assume that we are given a scheduling algorithm performing the task of updating this state and use it in our blackbox transformations.
In particular, we allow for online, distributed, and randomized algorithms.
Probability-p Byzantine Faults (Byz(p))
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol in arbitrary ways, including delaying, dropping, or forging messages, etc.
Probability-p Omission Faults (Om(p))
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol by not sending a message over an outgoing link when they should. We note that it is sufficient for this fault model to be satisfied logically. That is, as long as a correct node can identify incorrect messages, it may simply drop them, resulting in the same behavior of the system at all correct nodes as if the message was never sent.
Simulations and Reinforcement
For a given network G=(V,E) and a scheduling algorithm A, we will seek to reinforce (G,A) by constructing G'=(V',E') and scheduling algorithm A' such that the original algorithm A is simulated by A' on G', where G' is subject to random node failures. We now formalize these notions. First, we require that there is a surjective mapping P:V'→ V; fix G' and P, and choose F'⊆ V' randomly as specified above.
Assume that in each round r∈ℕ, each v'∈ V'∖ F' is given the same input by the environment as P(v'). A' is a simulation of A under Byz(p), if for each v∈ V, a strict majority of the nodes v'∈ V' with P(v')=v computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if not only for each v∈ V there is a strict majority doing so, but all v'∈ V'∖ F' compute the state of P(v') in each round.
Assume that in each round r∈ℕ, each v'∈ V' is given the same input by the environment as P(v'). A' is a simulation of A under Om(p), if for each v∈ V, there is v'∈ V' with P(v')=v that computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if each v'∈ V' computes the state of P(v') in each round.
A (strong) reinforcement of a graph G=(V,E) is a graph G'=(V',E'), a surjective mapping P: V'→ V, and a way of determining a scheduling algorithm A' for G' out of scheduling algorithm A for G. The reinforcement is valid under the given fault model (Byz(p) or Om(p)) if A' is a (strong) simulation of A a.a.s.
*Resources and Performance Measures.
We use the following performance measures.
* The probability p of independent node failures that can be sustained a.a.s.
* The ratio ν≜ |V'|/|V|, i.e., the relative increase in the number of nodes.
* The ratio η≜|E'|/|E|, i.e., the relative increase in the number of edges.
We now briefly discuss, from a practical point of view, why we do not explicitly consider further metrics that are of interest.
§.§ Other Performance Measures
* Latency:
As our reinforcements require (time-preserving) simulation relations, in terms of rounds, there is no increase in latency whatsoever.
However, we note that (i) we require all copies of a node to have access to the input (i.e., routing requests) of the simulated node and (ii) our simulations require to map received messages in G' to received messages of the simulated node in G.
Regarding (i), recall that it is beneficial to place all copies of a node in physical vicinity, implying that the induced additional latency is small.
Moreover, our constructions naturally lend themselves to support redundancy in computations as well, by having each copy of a node perform the tasks of its original;
in this case, (i) comes for free.
Concerning (ii), we remark that the respective operations are extremely simple;
implementing them directly in hardware is straightforward and will have limited impact on latency in most systems.
* Bandwidth/link capacities.
We consider the uniform setting in this work.
Taking into account how our simulations operate, one may use the ratio η as a proxy for this value.
* Energy consumption.
Regarding the energy consumption of links, the same applies as for bandwidth.
The energy nodes use for routing computations is the same as in the original system, except for the overhead induced by Point (ii) we discussed for latency.
Neglecting the latter, the energy overhead is in the range [min{ν,η},max{ν,η}].
* Hardware cost.
Again, neglecting the computational overhead of the simulation, the relative overhead lies in the range [min{ν,η},max{ν,η}]
In light of these considerations, we focus on p, ν, and η as key metrics for evaluating the performance of our reinforcement strategies.
§ STRONG REINFORCEMENT UNDER BYZ(P)
We now present and analyze valid reinforcements
under Byz(p)
for our fault model
on general graphs.
Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and set ℓ = 2f+1.
Reinforced Network G'
We set V'≜ V× [ℓ], where [ℓ]≜{1,…,ℓ}, and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
Strong Simulation A' of A
Consider node v'∈ V'∖ F'. We want to maintain the invariant that in each round, each such node has a copy of the state of v=P(v') in A. To this end, v'
[(1)]
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the message that has been sent to v' by at least f+1 of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step requires such a majority to exist; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
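To illustrate the majority rule in the last step, a minimal sketch of the per-round decoding a non-faulty copy performs is given below (illustrative Python; the data structures and names are our own assumptions, e.g. that the received messages are available as a mapping from the sending copy to the delivered payload).

from collections import Counter

def decode_round(received, copies_of, in_neighbors, f):
    # received[w_j] = payload delivered by copy w_j this round (possibly forged or missing).
    # For every in-neighbor w of the simulated node, accept the message sent by at
    # least f+1 of the 2f+1 copies of w; with at most f faulty copies per node this
    # is exactly the message A would deliver (None stands for "no message").
    accepted = {}
    for w in in_neighbors:
        votes = Counter(received.get(w_j) for w_j in copies_of[w])
        msg, count = votes.most_common(1)[0]
        if count < f + 1:
            raise RuntimeError(f"no majority for neighbor {w}: simulation fails")
        accepted[w] = msg
    return accepted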
If for each v∈ V, |{v_i∈ F'}|≤ f, then A' strongly simulates A.
We show the claim by induction on the round number r∈ℕ, where we consider the initialization to anchor the induction at r=0. For the step from r to r+1, observe that because all v'∈ V'∖ F' have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Accordingly, each v'∈ V'∖ F' receives
the message A would send over (w,v) ∈ E
from each w'∈ V'∖ F' with P(w')=w (via the link (w',v')). By the assumption of the lemma, we have at least ℓ-f=f+1 such nodes, implying that v' updates the local copy of the state of A as if it received the same messages as when executing A in round r+1. Thus, the induction step succeeds and the proof is complete.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
If p ∈ o(n^-1/(f+1)), the above construction is a valid strong reinforcement for the fault model Byz(p). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f. If p ∈ o(n^-1/(f+1)), using ℓ=2f+1 and a union bound we see that the probability of this event is at least
1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j} p^j(1-p)^{2f+1-j}
≥ 1-n ∑_{j=f+1}^{2f+1}\binom{2f+1}{j} p^j
≥ 1-n \binom{2f+1}{f+1} p^{f+1}∑_{j=0}^{f} p^j
≥ 1-n (2e)^f·p^{f+1}/(1-p) = 1-o(1).
Here, the second to last step uses that \binom{a}{b}≤ (ae/b)^b and the final step exploits that p∈ o(n^-1/(f+1)).
For the second claim, assume w.l.o.g. p≤ 1/3, as increasing p further certainly increases the probability of the system to fail. For any v∈ V, the probability that |{v_i∈ F'}|> f is independent of the same event for other nodes and larger than
\binom{2f+1}{f+1} p^{f+1}(1-p)^f ≥ (3/2)^f p^{f+1}(1-p)^f ≥ p^{f+1},
since \binom{a}{b}≥ (a/b)^b and 1-p≥ 2/3. Hence, if G contains Ω(n) nodes v with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the probability that there is such a node v for which |{v_i∈ F'}|> f is at least
1-(1-p^{f+1})^{Ω(n)}⊆ 1-(1-ω(1/n))^{Ω(n)}= 1-o(1).
If there is such a node v, there are algorithms A and inputs so that A sends a message across some edge (v,w) in some round. If faulty nodes do not send messages in this round, the nodes w_i∈ V'∖ F' do not receive the correct message from more than f nodes v_i and the simulation fails. Hence, the reinforcement cannot be valid.
For constant p, one can determine suitable values of f∈Θ(log n) using Chernoff's bound. However, as our focus is on small (constant) overhead factors, we refrain from presenting the calculation here.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = ℓ^2 = 4f^2 + 4f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes.
However, η = 9, i.e., while the number of edges also increases only by a constant, it seems too large in systems where the limiting factor is the amount of links that can be afforded.
§ STRONG REINFORCEMENT UNDER OM(P)
The strong reinforcement from the previous section is, trivially, also a strong reinforcement under Om(p). However, we can reduce the number of copies per node for the weaker fault model. Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and, this time, set ℓ = f+1.
Reinforced Network G'
We set V'≜ V× [ℓ] and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
Strong Simulation A' of A
Each node[Nodes suffering omission failures still can simulate A correctly.] v'∈ V'
[(1)]
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the (unique) message that has been sent to v' by some of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step assumes that some such neighbor sends a message and all w' with P(w')=w send the same such message; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
If for each v∈ V, |{v_i∈ F'}|≤ f, A' strongly simulates A.
Analogous to the one of Lemma <ref>, with the difference that faulty nodes may only omit sending messages and thus a single correct copy per node is sufficient.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
The above construction is a valid strong reinforcement for the fault model Om(p) if p ∈ o(n^-1/(f+1)). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f = ℓ -1. For v∈ V,
Pr[{v_i | i∈ [ℓ]}⊆ F'] = p^{f+1}.
By a union bound, A' thus simulates A with probability 1-o(1) if p∈ o(n^-1/(f+1)).
Conversely, if there are Ω(n) nodes with non-zero outdegree and p∈ω(n^-1/(f+1)), with probability 1-o(1) all copies of at least one such node v are faulty. If v sends a message under A, but all corresponding messages of copies of v are not sent, the simulation fails. This shows that in this case the reinforcement is not valid.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = ℓ^2 = f^2 + 2f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and quadrupling the number of edges.
§ MORE EFFICIENT REINFORCEMENT
In this section, we reduce the overhead in terms of edges at the expense of obtaining reinforcements that are not strong. We stress that the obtained trade-off between redundancy (ν and η) and the sustainable probability of faults p is asymptotically optimal: as we require to preserve arbitrary routing schemes in a blackbox fashion, we need sufficient redundancy on the link level to directly simulate communication. From this observation, both for p and p we can readily derive trivial lower bounds on redundancy that match the constructions below up to lower-order terms.
§.§ A Toy Example
Before we give the construction, we give some intuition on how we can reduce the number of required edges. Consider the following simple case. G is a single path of n vertices (v_1,…, v_n), and the schedule requires that in round i, a message is sent from v_i to v_{i+1}. We would like to use a “budget” of only n additional vertices and an additional (1+ε) m=(1+ε) (n-1) links, assuming the fault model Om(p). One approach is to duplicate the path and extend the routing scheme accordingly. We already used our entire budget apart from ε m links! This reinforcement is valid as long as one of the paths succeeds in delivering the message all the way.
The probability that one of the paths “survives” is
1-(1-(1-p)^n)^2 ≤ 1-(1-e^{-pn})^2 ≤ 2e^{-pn},
where we used that 1-x≤ e^-x for any x∈ℝ.
Hence, for any p = ω(1/n), the survival probability is o(1). In contrast, the strong reinforcement with ℓ=2 (i.e., f=1) given in <ref> sustains any p∈ o(1/√(n)) with probability 1-o(1); however, while it adds n nodes only, it requires 3m additional edges.
We need to add some additional edges to avoid that the likelihood of the message reaching its destination drops too quickly. To this end, we use the remaining ε m edges to “cross” between the two paths every h≜ 2/ε hops (assume h is an integer), cf. Figure <ref>.
This splits the path into segments of h nodes each. As long as, for each such segment, in one of its copies all nodes survive, the message is delivered. For a given segment, this occurs with probability 1-(1-(1-p)^h)^2≥ 1-(ph)^2. Overall, the message is thus delivered with probability at least (1-(ph)^2)^n/h≥ 1-nhp^2.
As for any constant ε, h is a constant, this means that the message is delivered a.a.s. granted that p∈ o(1/√(n))!
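The effect of the crossings can also be checked numerically; the following small calculation (our own illustration, with made-up parameter values) compares the plain two-path duplication with the segmented variant.

def p_deliver_duplicated(n, p):
    # two disjoint path copies: success iff at least one copy is entirely fault-free
    return 1 - (1 - (1 - p) ** n) ** 2

def p_deliver_segmented(n, p, h):
    # crossings every h hops: every segment needs at least one fault-free copy
    per_segment = 1 - (1 - (1 - p) ** h) ** 2
    return per_segment ** (n // h)

n, p, h = 10_000, 0.001, 10          # p is well above 1/n but below 1/sqrt(n)
print(p_deliver_duplicated(n, p))    # ~9e-5: naive duplication almost surely fails
print(p_deliver_segmented(n, p, h))  # ~0.91, above the 1 - n*h*p^2 bound
print(1 - n * h * p ** 2)            # 0.9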
The reader is cautioned to not conclude from this example that random sampling of edges will be sufficient for our purposes in more involved graphs. Since we want to handle arbitrary routing schemes, we have no control over the number of utilized routing paths. As the latter is exponential in n, the probability that a fixed path is not “broken” by F would have to be exponentially small in n. Moreover, trying to leverage Lovász Local Lemma for a deterministic result runs into the problem that there is no (reasonable) bound on the number of routing paths that pass through a single node, i.e., the relevant random variables (i.e., whether a path “survives”) exhibit lots of dependencies.
§.§ Partitioning the Graph
To apply the above strategy to other graphs, we must take into account that there can be multiple intertwined routing paths. However, the key point in the above example was not that we had path segments, but rather that we partitioned the nodes into constant-size regions and added few edges inside these regions, while fully connecting the copies of nodes at the boundary of the regions.
In general, it is not possible to partition the nodes into constant-sized subsets such that only a very small fraction of the edges connects different subsets; any graph with good expansion is a counter-example. Fortunately, many network topologies used in practice are good candidates for our approach. In the following, we will discuss grid networks and minor free graphs, and show how to apply the above strategy in each of these families of graphs.
Grid Networks
We can generalize the above strategy to hypercubes of dimension d>1.
A q-ary d-dimensional hypercube has node set [q]^d and two nodes are adjacent if they agree on all but one index i∈ [d], for which |v_i-w_i|=1.
For any h,d∈ℕ, assume that h divides q∈ℕ and set ε=1/h. Then the q-ary d-dimensional hypercube can be partitioned into (q/h)^d regions of h^d nodes such that at most an ε-fraction of the edges connects nodes from different regions.
We subdivide the node set into h-ary d-dimensional subcubes; for an example of the subdivision of the node set of a 6-ary 2-dimensional hypercube into 2-ary 2-dimensional subcubes see Figure <ref>. There are (q/h)^d such subcubes. The edges crossing the regions are those connecting the faces of adjacent subcubes. For each subcube, we attribute for each dimension one face to each subcube (the opposite face being accounted for by the adjacent subcube in that direction). Thus, we have at most dh^{d-1} crossing edges per subcube. The total number of edges per subcube is these crossing edges plus the d(h-1)h^{d-1} edges within the subcube. Overall, the fraction of crossing edges is thus at most 1/(1+(h-1))=1/h, as claimed.
Note that the above result and proof extend to tori, which also include the “wrap-around” edges connecting the first and last nodes in any given dimension.
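As a sanity check of the counting argument, the short sketch below (our own code; q, d, h as in the lemma) assigns each node of a q-ary d-dimensional grid to an h-ary subcube and measures the fraction of crossing edges.

from itertools import product

def crossing_fraction(q, d, h):
    # region of a node = coordinates of its h-ary subcube
    assert q % h == 0
    region = {v: tuple(c // h for c in v) for v in product(range(q), repeat=d)}
    edges = crossing = 0
    for v in region:
        for i in range(d):
            if v[i] + 1 < q:                       # grid neighbour in dimension i
                w = v[:i] + (v[i] + 1,) + v[i + 1:]
                edges += 1
                crossing += region[v] != region[w]
    return crossing / edges

print(crossing_fraction(q=6, d=2, h=2))            # 0.4 <= 1/h = 0.5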
Minor free Graphs
Another general class of graphs that can be partitioned in a similar fashion are minor free bounded-degree graphs.
For a fixed graph H, H is a minor of G if H is isomorphic to a graph that can be obtained by zero or more
edge contractions on a subgraph of G. We say that a graph G is H-minor free if H is not a minor of G.
For any such graph, we can apply a corollary from <cit.>, which is based on <cit.>, to construct a suitable partition.
Let H be a fixed graph. There is a constant c(H) > 1 such that for every ε∈ (0, 1] and
every H-minor free graph G = (V, E) with degree bounded by Δ, a partition R_1,…,R_k⊆ V with the following properties can be found in time O(|V|^3/2):
* ∀ i : |R_i|≤ c(H)Δ^2/ε^2,
* ∀ i : the subgraph induced by R_i in G is connected.
* |{(u,v) | u ∈ R_i, v ∈ R_j, i≠ j}|≤ε· |V|.
Grids and tori of dimension d>2 are not minor-free.
We note that this construction is not satisfactory, as it involves large constants. It demonstrates that a large class of graphs is amenable to the suggested approach, but it is advisable to search for optimized constructions for more specialized graph families before applying the scheme.
§.§ Reinforcement
Equipped with a suitable partition of the original graph G=(V,E) into disjoint regions R_1,…,R_k⊆ V, we reinforce as follows.
As before, we set V'≜ V× [ℓ], denote v_i≜ (v,i), define P(v_i)≜ v, and set ℓ≜ f+1. However, the edge set of G' differs. For e=(v,w)∈ E,
E_e'≜{(v_i,w_i) | i∈ [ℓ]} if v,w∈ R_k' for some k'∈ [k],
E_e'≜{(v_i,w_j) | i,j∈ [ℓ]} if v and w lie in different regions,
and we set E'≜⋃_e∈ E E_e'.
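A sketch of this sparser construction in code (again our own illustration, with invented identifiers): given the partition as a map from nodes to region indices, only edges crossing regions receive all ℓ² copy pairs.

def reinforce_partitioned(V, E, region, f, byzantine=False):
    # ell copies per node; crossing edges get ell^2 copies, internal edges only ell
    ell = 2 * f + 1 if byzantine else f + 1
    V_prime = [(v, i) for v in V for i in range(ell)]
    E_prime = []
    for (v, w) in E:
        if region[v] == region[w]:
            E_prime += [((v, i), (w, i)) for i in range(ell)]
        else:
            E_prime += [((v, i), (w, j)) for i in range(ell) for j in range(ell)]
    return V_prime, E_prime

# Path a-b-c-d split into regions {a,b} and {c,d}, omission faults with f = 1:
V, E = ['a', 'b', 'c', 'd'], [('a', 'b'), ('b', 'c'), ('c', 'd')]
region = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
Vp, Ep = reinforce_partitioned(V, E, region, f=1)
assert len(Ep) == 2 + 4 + 2        # eta = 8/3, versus 4 for the strong reinforcement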
Simulation under Om(p)
Consider v∈ V. We want to maintain the invariant that in each round, some v_i has a copy of the state of v in A. To this end, v'∈ V'
[(1)]
* initializes local copies of all state variables of v as in A and sets ok_v'= true;
* sends on each link (v',w')∈ E' in each round
* message M, if P(v') would send M via (P(v'),P(w')) when executing A and ok_v'= true,
* a special symbol ⊥ if ok_v'= true, but v would not send a message via (P(v'),P(w')) according to A, or
* no message if ok_v'= false;
* if, in a given round, ok_v'= true and v' receives for each neighbor w of P(v') a message from some w_j∈ V', it updates the local copy of the state of v in A as if P(v') received this message (interpreting ⊥ as no message); and
* if this is not the case, v' sets ok_v'= false.
We claim that as long as ok_v'= true at v', v' has indeed a copy of the state of P(v') in the corresponding execution of A; therefore, it can send the right messages and update its state variables correctly.
Suppose that for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], some i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. As P(C)=V, it suffices to show that each v'∈ C successfully maintains a copy of the state of P(v') under A. However, we also need to make sure that all messages, not only the ones sent by nodes in C, are “correct,” in the sense that a message sent over edge (v',w')∈ E' in round r would be sent by A over (P(v'),P(w')) (where ⊥ means no message is sent). Therefore, we will argue that the set of nodes T_r≜{v'∈ V' | ok_v'= true in round r} knows the state of their counterpart P(v') under A up to and including round r∈ℕ. As nodes v' with ok_v'= false do not send any messages, this invariant guarantees that all sent messages are correct in the above sense.
We now show by induction on the round number r∈ℕ that (i) each v'∈ T_r knows the state of P(v') under A and (ii) C⊆ T_r. Due to initialization, this is correct initially, i.e., in “round 0;” we use this to anchor the induction at r=0, setting T_0≜ V'.
For the step from r to r+1, note that because all v'∈ T_r have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Recall that v'∈ T_r+1 if and only if v'∈ T_r and for each (w,P(v'))∈ E there is at least one w'∈ V' with P(w')=w from which v' receives a message. Since under p nodes in F' may only omit sending messages, it follows that v'∈ T_r+1 correctly updates the state variables of P(v'), just as P(v') would in round r+1 of A.
It remains to show that C⊆ T_r+1. Consider v_i∈ C and (w,v)∈ E. If v,w∈ R_k' for some k'∈ [k], then w_i∈ C by definition of C. Hence, by the induction hypothesis, w_i∈ T_r, and w_i will send the message w would send in round r+1 of A over (w,v)∈ E to v_i, using the edge (w_i,v_i)∈ E'. If this is not the case, then there is some j∈ [ℓ] such that w_j∈ C and we have that (w_j,v_i)∈ E'. Again, v_i will receive the message w would send in round r+1 of A from w_j. We conclude that v_i receives at least one copy of the message from w for each (w,v)∈ E, implying that v∈ T_r+1 as claimed. Thus, the induction step succeeds and the proof is complete.
Figure <ref> provides an example of a comparison between a network, a naive duplication of that network, and its reinforcement. The simulation process of sending a message in the same sample network is shown in Figure <ref>.
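The per-round behaviour of a single copy under Om(p) can be sketched as follows (illustrative Python; the concrete data structures, the update_state hook, and the BOTTOM placeholder for the symbol ⊥ are our own assumptions, with node.ok mirroring the flag ok_v' used above).

BOTTOM = object()   # stands for the special symbol ⊥ ("the original would send nothing")

def om_round(node, received, copies_of, in_neighbors):
    # received[w_j] = payload delivered by copy w_j this round (absent if none arrived);
    # node.ok is the flag from the text, node.state the simulated state of P(v').
    if not node.ok:
        return
    accepted = {}
    for w in in_neighbors:
        delivered = [received[wj] for wj in copies_of[w] if wj in received]
        if not delivered:            # no copy of w delivered anything: give up
            node.ok = False
            return
        # under omission faults all delivered copies agree, so take the first
        accepted[w] = None if delivered[0] is BOTTOM else delivered[0]
    node.state = node.update_state(accepted)   # apply A's transition for P(v')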
Resilience of the Reinforcement
We denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for Om(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree and R∈ O(1), p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[{v_i | v∈ R_k'}∩ F'=∅]=(1-p)^{|R_k'|}≥ 1-Rp.
Accordingly, the probability that for a given k' the precondition of the lemma is violated is at most (Rp)^f+1. As k≤ n/r, taking a union bound over all k' yields that with probability at least 1-n/r· (Rp)^f+1, A' simulates A. Therefore, the reinforcement is valid if p ∈ o((n/r)^-1/(f+1)/R).
Now assume that r≤ R∈ O(1) and also that p∈ω(n^-1/(f+1))⊆ω((n/r)^-1/(f+1)/R). Thus, for each v∈ V, all v'∈ V' with P(v')=v simultaneously end up in F' with probability ω(1/n). Therefore, if Ω(n) nodes have non-zero outdegree, with a probability in 1-(1-ω(1/n))^Ω(n)=1-o(1) for at least one such node v all its copies end up in F'. In this case, the simulation fails if v sends a message under A, but all copies of v' suffer omission failures in the respective round.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(1+ε)f+ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and multiplying the number of edges by 2.4.
For hypercubes and tori, the asymptotic notation for p does not hide huge constants.
Lemma <ref> shows that h enters the threshold in Theorem <ref> as h^{-d+1/2}.
For the cases of d=2 and d=3, which are the most typical (for d>3 grids and tori suffer from large distortion when embedding them into 3-dimensional space), the threshold on p degrades by factors of 11.2 and 55.9, respectively.
§.§ Simulation under Byz(p)
The same strategy can be applied for the stronger fault model Byz(p), if we switch back to having ℓ=2f+1 copies and nodes accepting the majority message among all messages from copies of a neighbor in the original graph.
Consider node v∈ V. We want to maintain the invariant that in each round, a majority among the nodes v_i, i∈ [ℓ], has a copy of the state of v in A. For v'∈ V' and (w,P(v'))∈ E, set N_v'(w)≜{w'∈ V' | (w',v')∈ E'}. With this notation, v' behaves as follows.
[(1)]
* It initializes local copies of all state variables of v as in A.
* It sends in each round on each link (v',w')∈ E' the message v would send on (P(v'),P(w')) when executing A (if v' cannot compute this correctly, it may send an arbitrary message).
* It updates its state in round r as if it received, for each (w,P(v'))∈ E, the message the majority of nodes in N_v'(w) sent.
Suppose for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], f+1 indices i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. We claim that each v'∈ C successfully maintains a copy of the state of P(v') under A. We show this by induction on the round number r∈ℕ, anchored at r=0 due to initialization.
For the step from r to r+1, observe that because all v'∈ C have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E with P(w')=w. For each v'∈ C and each (w,P(v')), we distinguish two cases. If P(v') and w are in the same region, let i be such that v'=v_i. In this case, N_v'(w)={w_i} and, by definition of C, w_i∈ C. Thus, by the induction hypothesis, w_i sends the correct message in round r+1 over the link (w',v'). On the other hand, if P(v') and w are in different regions, N_v'(w)={w_i | i∈ [ℓ]}. By the definition of C and the induction hypothesis, the majority of these nodes (i.e., at least f+1 of them) sends the correct message w would send over (w,P(v')) in round r+1 when executing A. We conclude that v' correctly updates its state, completing the proof.
Resilience of the Reinforcement
As before, denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for the fault model Byz(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[{v_i | v∈ R_k'}∩ F'=∅]=(1-p)^{|R_k'|}≥ 1-Rp.
Thus, analogous to the proof of Theorem <ref>, the probability that for a given k' the condition is violated is at most
∑_{j=f+1}^{2f+1}\binom{2f+1}{j}(Rp)^j(1-Rp)^{2f+1-j}
≤ (2e)^f(Rp)^{f+1}(1+o(1)).
By a union bound over the at most n/r regions, we conclude that the precondition p ∈ o((n/r)^-1/(f+1)/R) guarantees that the simulation succeeds a.a.s.
For the second statement, observe that for each node v∈ V of non-zero outdegree,
Pr[|{v_i | i∈ [ℓ]}∩ F'|≥ f+1] ≥ p^{f+1} = ω(1/n).
Thus, a.a.s. there is such a node v. Let (v,w)∈ E and assume that A sends a message over (v,w) in some round. If v and w are in the same region, the faulty nodes sending an incorrect message will result in a majority of the 2f+1=|{w'∈ V' | P(w')=w}| copies of w attaining an incorrect state (of the simulation), i.e., the simulation fails. Similarly, if w is in a different region than v, for each copy of w the majority message received from N_w'(v) will be incorrect, resulting in an incorrect state.
Note that the probability bounds in Theorem <ref> are essentially tight in case R∈ O(1). A more careful analysis establishes similar results for r∈Θ(R)∩ω(1), by considering w.l.o.g. the case that all regions are connected and analyzing the probability that within a region, there is some path so that for at least f+1 copies of the path in G', some node on the path is faulty. However, as again we consider the case R∈ O(1) to be the most interesting one, we refrain from generalizing the analysis.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(2+2ε)f+4ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes and multiplying the number of edges by 4.2.
§ EMPIRICAL EVALUATION
We have shown that our approach from <ref> works particularly well
for graphs that admit a certain partitioning, such as
sparse graphs (e.g., minor-free graphs) or low-dimensional
hypercubes. To provide some empirical motivation for the relevance
of these examples, we note that the topologies collected
in the Rocketfuel <cit.> and Internet Topology Zoo <cit.> projects
are all sparse: almost a third (namely 32%) of the topologies even belong to the family of
cactus graphs, and roughly half of the graphs (49%) are outerplanar <cit.>.
To complement our analytical results and study the reinforcement cost
of our approach in realistic networks, we conducted simulations on
the around 250 networks from the Internet Topology Zoo.
While we have a fairly good understanding of the different network topologies
deployed in practice, unfortunately, little is known about the state-of-the-art protection mechanisms used by network operators today. Network operators are typically reluctant to share details about their infrastructure for security reasons, rendering a comparative evaluation difficult. That said, it seems relatively safe to assume that the most robust solutions rely on a one-by-one (“A/B”) replication strategy that allows traffic to be completely rerouted to a backup network; this baseline requires doubling resources and can hence be fairly costly.
In the following, we will report on our main insights.
Due to space constraints, we focus on the case of omission faults;
the results for Byzantine faults follow the same general trends.
Recall that we replace each node by f+1 of its copies, and each edge with endpoints in
different regions of the partition with (f+1)^2 copies; every other edge is replaced by f+1 copies.
Our goal is to choose this partitioning such that it minimizes the edge overhead of the reinforced network and
maximizes the resilience of the network.
The failure probability of the reinforced network for given p, f, and a partition into regions of l_1, l_2, …, l_k nodes is calculated as
1 - ∏_{i=1}^{k} [1-(1-(1-p)^{l_i})^{f+1}].
In the following, as a case study, we fix a target network failure probability of at most 0.01.
That is, the reinforced network is guaranteed to operate correctly with a probability of 99%, and we aim to maximize the probability p with which nodes independently fail subject to this constraint.
For this fixed target resilience of the network, we determine the value of p matching it using the above formula.
We remark that the qualitative behavior for smaller probabilities of network failure is the same, where the more stringent requirement means that our scheme outperforms naive approaches for even smaller network sizes.
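Since the failure probability above is monotone in p, the largest sustainable p for the 99% target can be found by a simple binary search; below is a sketch with our own naming and an invented example partition, not the partitions actually used in the evaluation.

def network_failure_prob(p, f, region_sizes):
    ok = 1.0
    for l in region_sizes:
        ok *= 1 - (1 - (1 - p) ** l) ** (f + 1)
    return 1 - ok

def max_sustainable_p(f, region_sizes, target=0.01, iters=60):
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if network_failure_prob(mid, f, region_sizes) <= target:
            lo = mid
        else:
            hi = mid
    return lo

# hypothetical example: 33 nodes split into 11 regions of 3 nodes each, f = 1
print(max_sustainable_p(f=1, region_sizes=[3] * 11))   # roughly 0.01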
For the examined topologies, it turned out that no specialized tools were needed to find good partitionings.
We considered a Spectral Graph Partitioning tool <cit.> and Metis <cit.>,
a partitioning algorithm from a python library.
For small networks (less than 14 nodes), we further implemented a brute-force algorithm,
which provides an optimal baseline.
Figure <ref> shows the resulting edge overheads for the different partitioning algorithms
as a function of p and for f=3, at hand of a specific example.
For reference, we added the value of p for the original graph (f=0) to the plot, which has an overhead factor of 1 (no redundancy).
As to be expected, for each algorithm and the fixed value of f=3, as the number of components in partitionings increases, the edge overhead and p
increase as well.
The “Singleton partition” point for f=3 indicates the extreme case where the size of the components is equal to 1 and the approach becomes identical to strong reinforcement (see <ref>);
hence, it has an edge overhead of (f+1)^2=16.
The leftmost points of the f=3 curves correspond to the other extreme of “partitioning” the nodes into a single set, resulting in naive replication of the original graph, at an edge overhead of f+1=4.
We observed this general behavior for networks of all sizes under varying f, where the spectral partitioning consistently outperformed Metis, and both performed very close to the brute force algorithm on networks to which it was applicable.
We concluded that the spectral partitioning algorithm is sufficient to obtain results that are close to optimal for the considered graphs, most of which have fewer than 100 nodes, with only a handful of examples with size between 100 and 200.
Accordingly, in the following we confine the presentation to the results obtained using the spectral partitioning algorithm.
In Figure <ref>, we take a closer look on how the edge overhead
depends on f, at hand of a network of 33 nodes. Note that the partitionings do not depend on f, causing the 10 curves to have similar shape.
As f increases, the node overhead, edge overhead, and p for the reinforced networks increase.
We can see that it is advisable to use larger values of f only if the strong reinforcement approach for smaller f cannot push p to the desired value.
We also see that f=1 is sufficient to drive p up to more than 6%, improving by almost two orders of magnitude over the roughly 0.01/33≈ 0.03% the unmodified network can tolerate with probability 99%.
While increasing f further does increase resilience, the relative gains are much smaller, suggesting that f=1 is the most interesting case.
Following up on this, in Figure <ref> we plot p for all existing networks in the Topology Zoo using the spectral graph partitioning algorithm and f=1.
Specifically, for each network, we calculated the value of p on a set of reinforced networks with different node and edge overheads. Naturally, with increasing network size, the value of p that can be sustained at a given overhead becomes smaller. Note, however, that naive replication quickly loses ground as n becomes larger. In particular, already for about 20 nodes, an edge overhead of 3 with our approach is better than adding two redundant copies of the original network, resulting in more nodes, but the same number of edges. Beyond roughly 50 nodes, our approach outperforms two independent copies of the network using fewer edges, i.e., an edge overhead of 2.5.
To show more clearly when our approach outperforms naive network replication, Figure <ref> plots the relative gain in the probability p of node failure that can be sustained compared to the original network.
This plot is similar to the previous one. The y-axis now represents p divided by the value of p for the original graph. We now see that naive replication provides an almost constant improvement across the board. This is due to the fact that under this simple scheme, the reinforcement fails as soon as in each copy of the graph at least one node fails, as it is possible that a routing path in the original graph involves all nodes corresponding to failed copies.
Denote by p_k the probability of node failure that can be sustained with 99% reliability when simply using k copies of the original graph (in particular p_1≈ 0.01/n). For small k, the probability (1-p_k)^n that a single copy of the original graph is fault-free needs to be close to 1. Hence, we can approximate (1-p_k)^n≈ 1-p_k n. The probability that all copies contain a failing node is hence approximately (p_kn)^k. Thus, p_1 n ≈ 0.01≈ (p_k n)^k, yielding that
p_k/p_1=(p_k n)/(p_1 n)≈ 0.01^{1/k}/0.01=100^{1-1/k}.
In particular, we can expect ratios of roughly 10 for k=2 and 21.5 for k=3, respectively. The small discrepancy to the actual numbers is due to the approximation error, which would be smaller for higher target resilience.
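A quick numeric check of this approximation (illustrative only):

for k in (2, 3):
    print(k, round(100 ** (1 - 1 / k), 1))   # 10.0 and 21.5, the ratios quoted above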
As the plot clearly shows, our method achieves a relative improvement that increases with n, as predicted by Theorem <ref>.
In conclusion, we see that our approach promises substantial improvements over the naive replication strategy,
which is commonly employed in mission-critical networks
(e.g., using dual planes as in RFC 7855 <cit.>).
§ DISCUSSION
In the previous sections, we have established that constant-factor redundancy can significantly increase reliability of the communication network in a blackbox fashion. Our constructions in <ref> are close to optimal. Naturally, one might argue that the costs are still too high. However, apart from pointing out that the costs of using sufficiently reliable components may be even higher, we would like to raise a number of additional points in favor of the approach.
Node Redundancy
When building reliable large-scale systems, fault-tolerance needs to be considered on all system levels. Unless nodes are sufficiently reliable, node replication is mandatory, regardless of the communication network. In other words, the node redundancy required by our construction may not be an actual overhead to begin with. When taking this point of view, the salient question becomes whether the increase in links is acceptable. Here, the first observation is that any system employing node redundancy will need to handle the arising additional communication, incurring the respective burden on the communication network. Apart from still having to handle the additional traffic, however, the system designer now needs to make sure that the network is sufficiently reliable for the node redundancy to matter. Our simple schemes then provide a means to provide the necessary communication infrastructure without risking to introduce, e.g., a single point of failure during the design of the communication network; at the same time, the design process is simplified and modularized.
Dynamic Faults
Because of the introduced fault-tolerance, faulty components do not impede the system as a whole, so long as the simulation of the routing scheme can still be carried out. Hence, one may repair faulty nodes at runtime. If T is the time for detecting and fixing a fault, we can discretize time in units of T and denote by p_T the (assumed to be independent) probability that a node is faulty in a given time slot, which can be bounded by twice the probability of failing within time T. Then the failure probabilities we computed in our analysis directly translate to an upper bound on the expected fraction of time during which the system is not (fully) operational.
Adaptivity
The employed node- and link-level redundancy may be required for mission-critical applications only, or the system may run into capacity issues. In this case, we can exploit that the reinforced network has a very simple structure, making various adaptive strategies straightforward to implement.
* One might use a subnetwork only, deactivating the remaining nodes and links, such that a reinforced network for smaller f (or a copy of the original network, if f=0) remains. This saves energy.
* One might subdivide the network into several smaller reinforced networks, each of which can perform different tasks.
* One might leverage the redundant links to increase the overall bandwidth between (copies of) nodes, at the expense of reliability.
* The above operations can be applied locally; e.g., in a congested region of the network, the link redundancy could be used for additional bandwidth. Note that if only a small part of the network is congested, the overall system reliability will not deteriorate significantly.
Note that the above strategies can be refined and combined according to the profile of requirements of the system.
§ RELATED WORK
Robust routing is an essential feature of dependable
communication networks, and has been explored
intensively in the literature already.
*Resilient Routing on the Network Layer
In contrast to our approach,
existing resilient routing mechanisms on the network layer
are typically reactive.
They
can be categorized
according to whether they are supported in the
control plane, e.g.,
<cit.>,
or in the data plane, e.g., <cit.>,
see also the recent survey <cit.>.
These mechanisms are usually designed to cope with link failures.
Resilient routing algorithms in the control plane
typically rely on a global recomputation of paths
(either
centralized <cit.>,
distributed <cit.>
or both <cit.>),
or on techniques based on link reversal <cit.>, and can
hence re-establish policies relatively easily;
however, they come at the price of a relatively high restoration time
<cit.>.
Resilient routing algorithms in the dataplane can react to failures
significantly faster <cit.>; however,
due to the local nature of the failover, it is challenging to
maintain network policies or even a high degree of resilience <cit.>.
In this line of literature,
the network is usually given and the goal is to re-establish
routing paths quickly, ideally as long as the underlying physical
network is connected (known as perfect resilience <cit.>).
In contrast, in this paper we ask the question of how to proactively enhance the
network in order to tolerate failures, rather than reacting to them. In particular, we consider more general failures,
beyond link failures and benign faults.
We argue that such a re-enforced
network simplifies routing as it is not necessary to compute new paths.
The resulting problems are very different in nature, also in terms
of the required algorithmic techniques.
*Local Faults
In this paper, we consider more general failure models
than typically studied in the resilient routing literature above,
as our model is essentially a local fault model.
Byzantine faults were studied in <cit.> in the context of broadcast and consensus problems. Unlike its global classical counterpart, the f-local Byzantine adversary can control at most f neighbors of each vertex. This more restricted adversary gives rise to more scalable solutions, as the problems can be solved in networks of degree O(f); without this restriction, degrees need to be proportional to the total number of faults in the network.
We also limit our adversary in its selection of Byzantine nodes, by requiring that the faulty nodes are chosen independently at random. As illustrated, e.g., by Lemma <ref> and Theorem <ref>, there is a close connection between the two settings. Informally, we show that certain values of p correspond, asymptotically almost surely (a.a.s), to an f-local Byzantine adversary. However, we diverge from the approach in <cit.> in that we require a fully time-preserving simulation of a fault-free routing schedule, as opposed to solving the routing task in the reinforced network from scratch.
*Fault-Tolerant Logical Network Structures
Our work is reminiscent of literature on
the design of fault-tolerant network structures.
In this area (see <cit.> for a survey), the goal is to compute a sub-network that has a predefined property, e.g., containing minimum spanning tree. More specifically, the sub-network should sustain adversarial omission faults without losing the property. Hence, the sub-network is usually augmented (with edges) from the input network in comparison to its corresponding non-fault-tolerant counterpart. Naturally, an additional goal is to compute a small such sub-network. In contrast, we design a network that is reinforced (or augmented) by additional edges and nodes so that a given routing scheme can be simulated while facing randomized Byzantine faults. As we ask for being able to “reproduce” an arbitrary routing scheme (in the sense of a simulation relation), we cannot rely on a sub-network.
The literature also considered random fault models.
In the network reliability problem, the goal is to compute the probability that the (connected) input network becomes disconnected under random independent edge failures. The reliability of a network is the probability that the network remains connected after this random process.
Karger <cit.> gave a fully polynomial randomized approximation scheme for the network reliability problem.
Chechik et al. <cit.> studied a variant of the task, in which the goal is to compute a sparse sub-network that approximates the reliability of the input network.
We, on the other hand, construct a reinforced network that increases the reliability of the input network;
note also that our requirements are much stricter than merely preserving connectivity.
*Self-healing systems
In the context of self-healing routing (e.g., Castañeda et al. <cit.>), researchers have studied a model where an adversary removes nodes in an online fashion, one node in each time step (at most n such steps). In turn, the distributed algorithm adds links and sends at most O(Δ) additional messages to overcome the inflicted omission fault.
Ideally, the algorithm is “compact”: each node's storage is limited to o(n) bits.
A nice property of the algorithm in <cit.> is that the degrees are increased by at most 3. For our purposes, an issue is that the diameter is increased by a logarithmic factor of the maximum initial degree, and hence the same holds for the latency of the routing scheme. Instead, we design a network that is “oblivious” to faults in the sense that the network is “ready” for independent random faults up to a certain probability, without the need to reroute messages or any other reconfiguration. Moreover, our reinforcements tolerate Byzantine faults and work for arbitrary routing schemes. We remark that compact self-healing routing schemes also deal with the update time of the local data structures following the deletion of a node; no such update is required in our approach.
*Robust Peer-to-Peer Systems
Peer-to-peer systems are often particularly dynamic and the development
of robust algorithms hence crucial.
Kuhn et. al <cit.> study faults in peer-to-peer systems in which an adversary adds and removes nodes from the network within a short period of time (this process is also called churn). In this setting, the goal is to maintain functionality of the network in spite of this adversarial process. Kuhn et al. <cit.> considered hypercube and pancake topologies, with a powerful adversary that cannot be “fooled” by randomness. However, it is limited to at most O(Δ) nodes, where Δ is the (maximum) node degree, which it can add or remove within any constant amount of time. The main idea in <cit.> is to maintain a balanced partition of the nodes, where each part plays the role of a supernode in the network topology. This is done by rebalancing the nodes after several adversarial acts, and increasing the dimensionality of the hypercube in case the parts become too big.
Hypercubes were also of particular interest in this paper. We employ two partitioning techniques to make sure that: (1) the size of each part is constant and (2) the number of links in the cut between the parts is at most · n, where n is the number of nodes. These partitioning techniques help us dial down the overheads within each part, and avoid a failure of each part due to its small size. However, we note that our motivation for considering these topologies is that they are used as communication topologies, for which we can provide good reinforcements, rather than choosing them to exploit their structure for constructing efficient and/or reliable routing schemes (which is of course one, but not the only reason for them being used in practice).
§ CONCLUSION
In this paper, we proposed simple replication strategies for improving network reliability. Despite being simple and general, both in terms of their application and analysis, our strategies can substantially reduce the required reliability on the component level to maintain network functionality compared to the baseline, without losing messages or increasing latencies.
The presented transformations allow us to directly reuse non-fault-tolerant routing schemes as a blackbox,
and hence avoid the need to refactor working solutions.
We consider this property highly useful in general and essential in real-time systems.
Hence, being prepared for non-benign faults can be simple, affordable, and practical, and therefore enables building larger reliable networks. Interestingly, while our basic schemes may hardly surprise, we are not aware of any work systematically exploring and analyzing this perspective.
We understand our work as a first step and believe that it opens
several interesting avenues for future research.
For example:
* Which network topologies allow for good partitions as utilized in <ref>? Small constants here result in highly efficient reinforcement schemes, which are key to practical solutions.
* Is it possible to guarantee strong simulations at smaller overheads?
* Can constructions akin to the one given in <ref> be applied to a larger class of graphs?
On the practical side, while
our simulations indicate that our approach
can be significantly more efficient than a naive one-by-one replication strategy
to provision
dependable ISP networks,
it will be interesting to extend these empirical studies and also consider
practical aspects such as the incremental deployment
in specific networks.
Acknowledgments.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 716562) and from the Vienna Science and Technology Fund (WWTF), under grant number ICT19-045 (project WHATIF).
This research was supported by the Israel Science Foundation under Grant 867/19.
Christoph Lenzen
received a diploma degree in mathematics from the University of Bonn in 2007 and a
Ph. D. degree from ETH Zurich in 2011. After postdoc positions at the Hebrew University of Jerusalem,
the Weizmann Institute of Science, and MIT, he became group leader at MPI for Informatics in 2014.
In 2021 he became faculty member at CISPA.
He received the best paper award at PODC 2009, the ETH medal for his dissertation, and in 2017 an ERC starting grant.
Moti Medina
is a faculty member at the Engineering Faculty at Bar-Ilan University since 2021. Previously, he was a faculty member at the Ben-Gurion University of the Negev and a post-doc
researcher in MPI for Informatics and in the Algorithms and Complexity group at
LIAFA (Paris 7). He graduated his Ph. D., M. Sc., and B. Sc. studies at the
School of Electrical Engineering at Tel-Aviv University, in 2014, 2009, and 2007
respectively. Moti is also a co-author of a text-book on logic design
“Digital Logic Design: A Rigorous Approach”, Cambridge Univ. Press, Oct.
2012.
Mehrdad Saberi
is an undergraduate student in Computer Engineering at Sharif University of Technology, Tehran, Iran. He achieved a silver medal in International Olympiad in Informatics (2018, Japan) during high school and is currently interested in studying and doing research in Theoretical Computer Science.
Stefan Schmid
is a Professor at TU Berlin, Germany.
He received his MSc (2004) and PhD
(2008) from ETH Zurich, Switzerland. Subsequently, Stefan Schmid
worked as postdoc at TU Munich and the University of Paderborn (2009).
From 2009 to 2015, he was a senior research scientist at the Telekom Innovations Laboratories (T-Labs) in Berlin, Germany, from 2015 to 2018 an Associate
Professor at Aalborg University, Denmark, and from 2018 to 2021 a Professor
at the University of Vienna, Austria.
His research interests revolve around algorithmic problems of networked and distributed systems,
currently with a focus on self-adjusting networks
(related to his ERC project AdjustNet) and resilient networks (related to his WWTF project
WhatIf).
|
http://arxiv.org/abs/2307.06202v1 | 20230712144827 | Autonomous Ratcheting by Stochastic Resetting | [
"Pulak K. Ghosh",
"Shubhadip Nayak",
"Jianli Liu",
"Yunyun Li",
"Fabio Marchesoni"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
^1 Department of Chemistry, Presidency University, Kolkata
700073, India
^2 Center for Phononics and Thermal Energy Science, Shanghai
Key Laboratory of Special Artificial Microstructure Materials and Technology,
School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
^3 Dipartimento di Fisica, Università di Camerino, I-62032 Camerino, Italy
We propose a generalization of the stochastic resetting mechanism for a
Brownian particle diffusing in a one-dimensional periodic potential: randomly
in time, the particle gets reset at the bottom of the potential well it was in.
Numerical simulations show that in mirror asymmetric potentials, stochastic
resetting rectifies the particle's dynamics, with
maximum drift speed for an optimal average resetting time. Accordingly, an unbiased
Brownian tracer diffusing on an asymmetric substrate can rectify its motion by
adopting an adaptive stop-and-go strategy. Our proposed ratchet mechanism can model
directed autonomous motion of molecular motors and micro-organisms.
Autonomous Ratcheting by Stochastic Resetting
Pulak K. Ghosh^1[[email protected] (corresponding author)], Shubhadip Nayak^1, Jianli Liu^2, Yunyun Li^2, and Fabio Marchesoni^2,3[[email protected](corresponding author)]
August 12, 2023
==================================================================================================================================================================================================
The notion of stochastic resetting (SR) is attracting growing attention (see
Ref. <cit.> for a recent review). This term refers to the sudden
interruption of a stochastic process after random time intervals, followed by its
starting anew, possibly after a further latency time, with same initial conditions.
Diffusion under SR is a non-equilibrium stationary process, which found
applications in search contexts <cit.>, optimization of randomized
computer algorithms <cit.>, and in many biophysical problems
<cit.>. Surprisingly, under SR the otherwise infinite mean
first passage time of a freely diffusing Brownian
particle <cit.> from an injection point to an assigned target point becomes finite,
and, most notably, can be minimized for an optimal choice of the resetting
time, τ <cit.>. Many analytical methods earlier
developed in the theory of homogeneous stochastic processes <cit.> can
be generalized to study diffusion under SR, for instance, to calculate the
mean-first-exit time (MFET) of a reset particle out of a one-dimensional (1D)
domain <cit.> or potential well <cit.>. In general, SR speeds
up (slows down) diffusive processes characterized by random escape times
with standard deviation larger (smaller) than the respective averages <cit.>.
In this Letter we propose an SR mechanism with degenerate resetting point.
Let us consider an overdamped Brownian particle of coordinate x, diffusing
in a 1D periodic potential, V(x), of period L. We assume for simplicity
that the potential unit cells have one minimum each at x_n=x_0 +nL, with
n=0, ± 1, …. Upon resetting, the particle stops diffusing and falls
instantaneously at the bottom of the potential well it was in; it will resume
diffusing after a latency time τ_0 ≥ 0, see Fig. <ref>(a). By this
mechanism of autonomous SR, we intend to model the dynamics of small
motile tracers (like bacteria or micro-robots <cit.>) capable of
switching their internal engine on and off. In the case of undirected
motility, the tracer would perform an unbiased Brownian motion. Let us further assume that the
barriers separating two adjacent potential minima are asymmetric
under mirror reflection, i.e., V(x-x_0) ≠ V(-x+x_0) (ratchet potential
<cit.>). Extensive numerical simulations show that (i) SR rectifies
diffusion in a ratchet potential. The particle's net drift speed, ⟨ v
⟩, reaches a maximum for an optimal value of the resetting time, τ,
which strongly depends on the potential profile, see Fig. <ref>(b);
(ii) SR suppresses spatial diffusion. For large observation times, the particle's
mean-square displacement (MSD) turns proportional to time (normal diffusion);
the relevant diffusion constant increases sharply with the resetting time
in correspondence with the maximum of the drift speed, see Fig. <ref>(c).
While this variant of the SR mechanism may be reminiscent of a flashing
ratchet with pulsated temperature <cit.>, here the diffusing tracer
exploits the substrate spatial asymmetry to autonomously rectify its
random motion in the absence of external time-dependent fields of force or
gradients <cit.>, simply by time-operating its internal engine to adjust
to the substrate itself.
Model. The simulated particle dynamics was formulated in terms of the Langevin equation (LE),
ẋ=-V'(x) +ξ(t),
where ξ(t) denotes a stationary zero-mean valued Gaussian noise with
autocorrelation ⟨ξ(t) ξ(0) ⟩ =2D_0δ(t) (white noise)
and V(x) is the standard ratchet potential <cit.>,
V(x)= sin(2π x/L) +(1/4)sin(4π x/L),
with asymmetric barriers of height Δ V=
(3/2)(1+2/√(3))^1/2≃ 2.20. The potential unit cell [0,L] has a
maximum (barrier) at x_b=(L/2π)arccos[(√(3)-1)/2]≃ 0.19L and a
minimum (well bottom) at x_0=L-x_b≃ 0.81L, with curvatures
ω_0^2=V”(x_0)=-V”(x_b)= (2π/L)^2(3√(3)/2)^1/2≃
63.6/L^2, see Fig. <ref>(a). The asymmetric potential wells have
right/left slopes of different lengths,
L_R,L, with L_L=x_0-x_b=L-L_R≃ 0.62L. In addition to the thermal fluctuations and the ratchet potential, the particle is subjected to resetting to the attracting local substrate minimum after a random time drawn from an exponential distribution with mean τ = 1/r, where r is the resetting rate. Partly motivated by technical issues in experiments, earlier works <cit.> considered the case in which the particle is reset to a fully randomly chosen position.
Along with the restart protocol, Eq. (<ref>) was numerically integrated by means of a standard Milstein scheme <cit.> to compute
the drift speed ⟨ v⟩ =
lim_t→∞[⟨ x(t) ⟩ -x(0)]/t, and the asymptotic MSD,
⟨Δ x^2(t)⟩=⟨ x^2(t)⟩ -⟨ v⟩^2 t^2 ≡ 2Dt,
of a particle under stationary conditions (with or without SR), (Figs. <ref> and
<ref>), and the MFET's, ⟨ T_R,L(τ)⟩, for a reset particle injected
at the bottom of the well, x_0, to first exit it through the left (right)
barrier, x_b (x_b+L) (Fig. <ref>).
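The following Python sketch illustrates the kind of simulation just described. It is not the code used to produce the figures; the function and variable names are ours, and the parameter values (D_0, τ, time step, run length) are merely illustrative assumptions. The latency time is set to τ_0 = 0 and, the noise being additive, the Milstein scheme reduces to plain Euler–Maruyama.

    import numpy as np

    L = 1.0
    x_b = (L / (2 * np.pi)) * np.arccos((np.sqrt(3) - 1) / 2)   # barrier position, ~0.19 L
    x_0 = L - x_b                                               # well bottom, ~0.81 L

    def V_prime(x):
        # derivative of the ratchet potential V(x) = sin(2*pi*x/L) + (1/4) sin(4*pi*x/L)
        return (2 * np.pi / L) * np.cos(2 * np.pi * x / L) + (np.pi / L) * np.cos(4 * np.pi * x / L)

    def local_minimum(x):
        # bottom of the well containing x (minima at x_0 + n*L, barriers at x_b + n*L)
        n = np.floor((x - x_b) / L)
        return x_0 + n * L

    def drift_speed(D0=1.0, tau=1.0, dt=1e-3, t_max=2e3, seed=1):
        # net drift speed <v> of the reset particle, with zero latency time
        rng = np.random.default_rng(seed)
        x, t = x_0, 0.0
        t_reset = rng.exponential(tau)                # first resetting epoch
        amp = np.sqrt(2 * D0 * dt)
        while t < t_max:
            x += -V_prime(x) * dt + amp * rng.standard_normal()   # Euler-Maruyama step
            t += dt
            if t >= t_reset:                          # resetting: fall to the local minimum
                x = local_minimum(x)
                t_reset = t + rng.exponential(tau)
        return (x - x_0) / t_max

    for tau in (0.1, 1.0, 10.0):
        print(f"tau = {tau:5.1f}  ->  <v> ~ {drift_speed(tau=tau):.3f}")

Averaging over several seeds, or extending t_max, reduces the statistical error on the estimated drift speed.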
Rectification under SR. The key features of the resulting SR ratchet are illustrated in the bottom panels of
Fig. <ref>: the particle motion gets rectified with net speed ⟨
v(τ)⟩ [Fig. <ref>(b)] and asymptotic diffusion constant, D(τ), a
monotonically increasing function of the SR time [Fig. <ref>(c)]. Rectification is maximum in
an optimal τ range, as D approaches a stationary value (the same as in the absence of SR).
To explain ratcheting under SR we anticipate two properties of the
statistics of particle escape out of a potential well, summarized in Fig. <ref>. In
the absence of resetting, i.e., for asymptotically large τ, the
probability current density of the process is zero, which rules out rectification
<cit.>, ⟨ v(∞)⟩=0. Things change upon decreasing the
SR time, as proven by the τ-dependence of the splitting probabilities,
π_R,L(τ), for the particle to exit a potential well through the right/left
barrier. Upon lowering τ, the asymmetry ratio π_R/π_L in Fig. <ref>(b)
grows monotonically, the effect being more apparent at low noise,
D_0≪Δ V, so that
⟨ v(τ)⟩>0. We qualitatively explain this property with the
increased asymmetry of the probability density <cit.> of the reset particle around the
potential minima [Fig. <ref>(a)]. On the other hand, the data of Fig. <ref>(b) clearly show that in
the limit τ→ 0, ⟨ T_R(τ)⟩ diverges exponentially,
so that we anticipate ⟨ v(τ→ 0)⟩=0+. The combination of these two opposite effects
determines the typical resonant profile of the
⟨ v(τ)⟩ curves.
Slow SR. More in detail, the data of Fig. <ref>(b) suggest that ⟨ v(τ)⟩
decays asymptotically like τ^-1. This behavior can be easily explained in the strong
noise regime with D_0 ≫Δ V and ⟨ T_R,L(τ)⟩≪τ. Under this condition, the particle executes many barrier crossings before being reset at the
bottom of a V(x) well. At resetting, it is caught in average to the left of the
well bottom; hence, at each resetting the particle jumps to the right an average distance,
δ x=x̅ -x_0 >0, x̅ being the center of mass of the
(periodic) particle's stationary probability density function, p(x; τ, D_0), in the
potential well with bottom at x=x_0. Accordingly, the particle gets rectified with positive net drift speed
⟨ v(τ) ⟩=δ x/τ. In the strong noise regime, p(x; τ, D_0),
approaches a uniform distribution; hence, δ x=(L_L-L_R)/2,
in good agreement with the numerical data of Fig. <ref>(a).
Upon decreasing the noise strength, δ x diminishes for two reasons,
as illustrated in Fig. <ref>(a). Firstly, in the absence of SR, i.e., for τ→∞, the probability
density, p(x; τ, D_0), approaches its thermal equilibrium form, p(x;
∞, D_0)= Nexp(-V(x)/D_0), with N an appropriate
normalization constant. For D_0 ≪Δ V, p(x; ∞, D_0) shrinks
around x_0, that is, δ x diminishes. Secondly, by lowering D_0 in the
presence of SR, i.e., for finite τ, ⟨ T(τ)⟩ grows
comparable with τ. Accordingly, barrier escape and resetting events grow
correlated, which invalidates the above estimate of the particle's drift speed.
However, numerical data confirm that ⟨ v(τ)⟩, though strongly suppressed,
keeps decaying asymptotically like 1/τ, even at low noise.
Fast SR. The plots of p(x;τ,D_0) for the lowest τ values in Fig. <ref>(a)
consist of a central peak tapering off with asymmetric slow-decaying tails on both sides. In the limit
τ→ 0, (i) the peak gets sharper and more symmetric, while remaining
centered at the resetting point, x_0. Its square half-width can be easily
calculated for D_0 ≪Δ V, by approximating V(x)≃ω_0^2(x-x_0)^2/2 and
averaging over the SR time, that is, ⟨ (x-x_0)^2⟩≃
2D_0τ/(1+2ω_0^2τ); (ii) the tails get thinner
but more asymmetric. This behavior is consistent with the τ-dependence
of the escape asymmetry ratio, π_R/π_L, displayed in Fig. <ref>(b) <cit.>.
The sharp decay of ⟨ v (τ) ⟩ for τ→ 0
proves that fast SR eventually suppresses the
interwell particle diffusion. In such limit, as shown in Fig.
<ref>, the particle tends to jump to the right, with
π_R(τ) ≫π_L(τ) and, therefore, ⟨ T(τ)⟩≃⟨ T_R(τ)⟩ with ⟨ T(τ)⟩≫τ.
Under these conditions, the
resulting drift speed can be easily estimated under renewal theory
approximation <cit.>, that is ⟨ v(τ)⟩=L/⟨ T_R (τ)⟩.
To calculate ⟨ T_R(τ)⟩ we had recourse to the analytical
results of Ref. <cit.> for Brownian diffusion under SR in the
presence of a constant bias. We made contact with Eq. (6) there, by
replacing the constant bias with the effective (right-to-left)
restoring force of our ratchet potential, Δ V/L_R. In the limit τ→ 0, the MFET for the transition to the adjacent well on the right,
x_0→ x_0+L, is twice the MFET for the transition x_0→ x_b+L, that is
⟨ T(τ)⟩≃⟨ T_R(τ)⟩≃ 2τ exp(Δ V/(2D_0) + L_R/√(D_0τ)).
Of course, this approximation holds good only for π_R(τ) ≃ 1
(π_L(τ)≃ 0), and its agreement with the numerical data improves
upon decreasing the noise strength, i.e., for D_0 ≲Δ V, as
shown in Fig. <ref>(a). On making use of this estimate for ⟨
T_R(τ)⟩, we also closely reproduced the rising branches of the ⟨ v(τ)⟩ curves in Fig. <ref>(b).
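A minimal numerical illustration of this estimate is given below; it is not the authors' code, and D_0 and the τ values are arbitrary choices. The snippet evaluates the approximation for ⟨ T_R(τ)⟩ quoted above together with the renewal-theory speed ⟨ v(τ)⟩ = L/⟨ T_R(τ)⟩, which is only meaningful on the small-τ, rising branch of the curves.

    import numpy as np

    L = 1.0
    DV = 1.5 * np.sqrt(1 + 2 / np.sqrt(3))                      # barrier height, ~2.20
    x_b = (L / (2 * np.pi)) * np.arccos((np.sqrt(3) - 1) / 2)
    L_R = 2 * x_b                                               # right slope length, ~0.38 L

    def T_R(tau, D0):
        # fast-resetting estimate of the mean first-exit time through the right barrier
        return 2 * tau * np.exp(DV / (2 * D0) + L_R / np.sqrt(D0 * tau))

    D0 = 0.5
    for tau in (1e-3, 1e-2, 1e-1):
        print(f"tau = {tau:.0e}:  <T_R> ~ {T_R(tau, D0):.3g},  <v> ~ {L / T_R(tau, D0):.3g}")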
Diffusion under SR. Regarding the intrawell diffusion,
we remind that in the absence of SR, the MFET from
x_0 to x_0 ± L amounts to the standard Kramers' time <cit.>
T_K=(2 π/ω_0^2)exp(-Δ V/D_0). By the same token, one concludes
that for τ→∞, ⟨ T_R ⟩≃⟨ T_L ⟩, with both MFET's
tending to T_K for D_0/Δ V → 0, and their
ratio, ⟨ T_R⟩ /⟨ T_L ⟩
approaching (1+L/L_L)/(1+L/L_R)≃ 0.72 in the opposite limit,
D_0/Δ V→∞. On the other hand, for large τ the splitting
probabilities can be easily computed assuming no SR
(see Sec. 5.2.7 of Ref. <cit.>); their limits for D_0/Δ V→ 0 (and →∞)
are respectively π_R,L(∞) =1/2 (and L_L,R/L), as shown in Fig. <ref>(b).
These remarks are useful to interpret the MSD data sets of Fig. <ref>(c).
Numerical simulation indicates that diffusion
at large times, t≫⟨ T(τ) ⟩, is normal, as anticipated by the fitting law
of Eq. (<ref>). At small τ, a transient plateau for
t ≲⟨ T(τ)⟩, ⟨Δ x^2 ⟩≃ 2D_0τ,
marks the particle relaxation inside a single potential well [with ⟨Δ x^2 ⟩
of the order of the square half-width of the p(x;τ, D_0) peak estimated above]. The τ-dependence of the
asymptotic diffusion constants, D, is reported in Fig. <ref>(c). For large τ, the
D(τ) curves approach the horizontal asymptotes <cit.>, D=L^2/2T_K,
as to be expected in the absence of SR. Vice versa for very short SR times,
the diffusion constant is well approximated by D(τ)=L^2/2⟨ T_R(τ)⟩,
as predicted by the renewal theory for a process with average escape time constant ⟨ T_R(τ)⟩
<cit.>. In both τ limits, our phenomenological arguments are supported by
numerical simulation.
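The two asymptotic regimes just described can be made concrete with the small sketch below (illustrative parameters only; the names are ours; the Kramers escape time is written with a positive exponent, so that it grows with the barrier height, and the fast-SR limit reuses the estimate for ⟨ T_R(τ)⟩ given earlier).

    import numpy as np

    L = 1.0
    DV = 1.5 * np.sqrt(1 + 2 / np.sqrt(3))                          # barrier height, ~2.20
    omega0_sq = (2 * np.pi / L) ** 2 * np.sqrt(1.5 * np.sqrt(3))    # curvature, ~63.6 / L^2
    x_b = (L / (2 * np.pi)) * np.arccos((np.sqrt(3) - 1) / 2)
    L_R = 2 * x_b

    D0 = 0.5
    T_K = (2 * np.pi / omega0_sq) * np.exp(DV / D0)                 # Kramers escape time
    print(f"no-SR asymptote:  D = L^2/(2 T_K) ~ {L**2 / (2 * T_K):.3g}")

    for tau in (1e-3, 1e-2):                                        # fast-SR limit
        T_R = 2 * tau * np.exp(DV / (2 * D0) + L_R / np.sqrt(D0 * tau))
        print(f"tau = {tau:.0e}:  D(tau) ~ L^2/(2 <T_R>) ~ {L**2 / (2 * T_R):.3g}")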
Comparison with standard diffusion under SR.
Numerical data in Fig. <ref>(a) show that, upon decreasing the SR time, ⟨ T_R(τ)⟩ remains larger than ⟨ T_L(τ)⟩. Moreover, ⟨ T(τ)⟩ grows
monotonically with τ, i.e., the MFET out of the potential well is not optimized by
resetting. Of course, the predicted SR optimization of the average passage times
<cit.> is still detectable, but only for the unconstrained transitions x_0→
x_b with x≥ x_b, and x_0→ x_b+L with x≤ x_b+L. In panel (c) of
Fig. <ref> we investigated the same transitions as in panel (a), except
for the reflecting barriers, which were shifted at ∓∞. The
corresponding right/left unconstrained MFPT curves, ⟨ T^
(u)_R,L(τ)⟩, overlap throughout the entire τ range. Furthermore,
all MFPT curves diverge for τ→∞, as to be expected due to the lack
of a reflecting barrier <cit.>. In the absence of SR (i.e.,
for τ→∞), the particle still diffuses over the substrate like a
free particle, but with the reduced effective diffusion constant, D=L^2/2T_K,
of Eq. (<ref>) [see fits in Fig. <ref>(c)]. This suggests rewriting
Eq. (7) of Ref. <cit.> as
⟨ T^ (u)_R,L(τ)⟩/τ=exp(L/√(Dτ))-1,
a formula that well reproduces the large-τ branches of the curves in Fig.
<ref>(c) with no additional fitting parameters. In the inset of the same figure, we
analyze the small-τ dependence of the MFPT's for the right transitions x_0→
x_0 + nL with n=1,2, … and reflecting barriers at -∞.
By applying the heuristic argument invoked to derive Eq. (<ref>), we
obtain the working approximate estimate,
⟨ T^ (u)_R,L(τ)⟩/τ≃ exp(Δ V/(2D_0) + nL/√(D_0τ)),
which holds for n-cell transitions to the right/left at vanishingly small τ. Note that here,
contrary to Eq. (<ref>), we make use of the free diffusion constant, D_0.
Concluding remarks. The SR ratcheting mechanism introduced above can be readily generalized to more realistic
cases when resetting takes a finite time <cit.>, τ_0, called
here latency time. The relevant net ratchet speed turns out to be a function of both τ and
τ_0, ⟨ v(τ, τ_0)⟩, which can be related
with the zero-latency speed, ⟨ v(τ, 0)⟩, through a simple time
rescaling, namely
⟨ v(τ, τ_0)⟩=⟨ v(τ,0)⟩/(1+τ_0/τ),
as illustrated in Fig. <ref>(a).
This instance of SR ratchet lends itself to a simple laboratory demonstration.
We start again from the LE (<ref>) with the potential of Eq. (<ref>) but,
instead of implementing the SR protocol with latency
time τ_0, we now assume a dichotomic noise strength, D_0(t), with
D_0=0 for fixed time intervals, τ_0, and D_0(t)=D_0 for random time
intervals exponentially distributed with average τ. The resulting LE
describes a rectifier, which could be classified as a special case of flashing
ratchet <cit.>. In one regard the two rectification mechanisms are
apparently similar: in both cases the particle rests at the bottom of a potential well
for the time interval, τ_0, before resuming Brownian diffusion,
because either reset that way (SR ratchet) or given enough time to relax
there (flashing ratchet with ω_0^2 τ_0 ≫ 1). As shown in the
inset of Fig. <ref>, for the same choice of the tunable parameters, D_0,
τ and τ_0, the rectification power of the two ratchets is almost
identical. Therefore, one can utilize a ratchet with dichotomic noise strength
to experimentally demonstrate the rectification properties of the proposed SR ratchet.
However, an important difference between these two ratchets is also noteworthy.
The flashing ratchet is fueled by an external source capable of “heating and
cooling” the particle or its substrate <cit.>. SR ratcheting
with finite latency time,
instead, can be controlled by the particle itself, by autonomously regulating
its own internal motility mechanism for maximum efficiency.
In summary, we have proposed a new protocol of stochastic resetting,
whereby a particle diffusing on a one-dimensional substrate gets reset not
at a fixed point, but rather at one of the degenerate minima of the
substrate. We investigated, both numerically and analytically, the diffusion
properties of the reset particle and showed that for spatially asymmetric
substrates the particle gets rectified with direction determined by the
substrate profile, and optimal speed depending on the resetting time. We
argue that, thanks to such a mechanism, a motile system (biological and synthetic, alike)
can exploit the substrate asymmetry to autonomously direct its motion,
for instance, by
randomly switching on and off its propulsion engine at an appropriate rate.
§ ACKNOWLEDGEMENTS
Y.L. is supported by the NSF China under grants No. 11875201 and No.
11935010. P.K.G. is supported by SERB Core Research Grant No. CRG/2021/007394.
§ DATA AVAILABILITY
The data that support the findings of this study are available within the article.
§ CONFLICT OF INTEREST
The authors have no conflicts to disclose.
SR_rev M. R. Evans, S. N. Majumdar, and G. Schehr, Stochastic resetting and applications,
J. Phys. A: Math. Theor. 53, 193001 (2020).
Maj_PRL1 L. Kusmierz, S. N. Majumdar, S. Sabhapandit, and G. Schehr,
First order transition for the optimal search time of Lévy flights with resetting,
Phys. Rev. Lett. 113, 220602 (2014).
Zecchina A. Montanari and R. Zecchina, Optimizing searches via
rare events, Phys. Rev. Lett. 88, 178701 (2002).
Reuveni1 S. Reuveni, M. Urbakh, and J. Klafter,
Role of substrate unbinding in Michaelis–Menten enzymatic reactions
Proc. Natl. Acad. Sci. USA 111, 4391 (2014).
Reuveni2 S. Reuveni, Optimal stochastic restart renders fluctuations
in first passage times universal, Phys. Rev. Lett. 116, 170601 (2016).
Redner S. Redner, A Guide to First-Passage Processes (Cambridge
University Press, UK, 2001).
Gardiner C. W. Gardiner, Handbook of Stochastic Methods (Springer, Berlin, 1985).
Maj_PRL2 M. R. Evans and S. N. Majumdar, Diffusion with stochastic resetting,
Phys. Rev. Lett. 106, 160601 (2011).
Reuveni3 A. Pal and S. Reuveni, First passage under restart,
Phys. Rev. Lett. 118, 030603 (2017).
Pal A. Pal and V. V. Prasad, First passage under stochastic resetting in an interval,
Phys. Rev. E 99, 032123 (2019).
Reuveni4 S. Ray, D. Mondal, and S. Reuveni, Péclet number governs
transition to acceleratory restart in drift-diffusion, J. Phys A:
Math. and Theor. 52 255002 (2019).
Wang J. Wang, Nanomachines: Fundamentals and Applications (Wiley-VCH, Weinheim, 2013).
RMP P. Hänggi and F. Marchesoni, Rev. Mod. Phys. 81, 387 (2009).
pla P. Reimann,
Brownian motors: Noisy transport far from equilibrium, Phys. Rep. 361, 57 (2002).
Kloeden P. E. Kloeden and E. Platen, Numerical Solution of
Stochastic Differential Equations (Springer, Berlin, 1992).
Cox D. R. Cox, Renewal Theory (Methuen, London, 1970).
Libchaber L. P. Faucheux, L. S. Bourdieu, P. D. Kaplan, and A. J. Libchaber, Optical thermal ratchets,
Phys. Rev. Lett. 74, 1504 (1995).
JCP1 F. D. Ribetto, S. E. Deghi, H. L. Calvo, and R. A. Bustos-Marún, A dynamical model for Brownian molecular motors driven by inelastic electron tunneling, J. Chem. Phys. 157, 164102 (2022).
JCP2 J. Valdiviezo, P. Zhang, D. N. Beratan, Electron ratcheting in self-assembled soft matter,
J. Chem. Phys. 155, 055102 (2021).
Majumdar-A B. Besga, A. Bovon, A. Petrosyan,S. N. Majumdar, and S. Ciliberto, Optimal mean first-passage time for a Brownian searcher subjected to resetting: experimental and theoretical results. Phys. Rev. Res., 2, 032029 (2020).
Majumdar-B B. Besga, F. Faisant, A. Petrosyan, S. Ciliberto,and S. N. Majumdar, Dynamical phase transition in the first-passage probability of a Brownian motion Phys. Rev. E, 104, L012102 (2021).
Majumdar-C G. Tucci, A. Gambassi, S. N. Majumdar, and G. Schehr, First-passage time of run-and-tumble particles with noninstantaneous resetting, Phys. Rev. E, 106, 044127 (2022).
Sano H. R. Jiang, N. Yoshinaga, and M. Sano, Active motion of a Janus particle by
self-thermophoresis in a defocused laser beam, Phys. Rev. Lett.
105, 268302 (2010).
|
http://arxiv.org/abs/2307.07455v1 | 20230714163058 | Real Equation Systems with Alternating Fixed-points (full version with proofs) | [
"Jan Friso Groote",
"Tim A. C. Willemse"
] | cs.LO | [
"cs.LO",
"68Q60",
"F.3.1; D.2.4"
] |
Real Equation Systems with Alternating Fixed-points
(full version with proofs)
Department of Mathematics and Computer Science
Eindhoven University of Technology, The Netherlands
=============================================================================================================
We introduce the notion of a Real Equation System (RES), which lifts Boolean Equation Systems (BESs)
to the domain of extended real numbers. Our RESs allow arbitrary nesting of least and greatest fixed-point operators.
We show that each RES can be rewritten into an equivalent RES in normal form. These normal forms provide the basis
for a complete procedure to solve RESs. This employs the elimination of the fixed-point variable at the left side
of an equation from its
right-hand side, combined with a technique often referred to as Gauß-elimination. We illustrate how this
framework can be used to verify quantitative modal formulas with alternating fixed-point operators
interpreted over probabilistic labelled transition systems.
§ INTRODUCTION
The modal mu-calculus is a logic that allows to formulate and verify a very wide range
of properties on behaviour, far more expressive than virtually any other behavioural logic around <cit.>.
For instance,
CTL and LTL can be mapped to it, but the reverse is not possible.
By allowing data parameters in the fixed point variables in modal formulas,
this can even be done linearly, without loss of computational effectiveness <cit.>.
Using alternating fixed-points, the modal mu-calculus can intrinsically express various forms of fairness, which in other
logics can often only be achieved by adding special fairness operators.
An effective way to evaluate a modal property on a labelled transition system is by translating both to a single
Boolean Equation System (BES) with alternating fixed-points <cit.>.
Exactly if the initial boolean variable of the obtained BES has the solution true, the property is valid for the labelled transition system.
A BES with alternating fixed-points is equivalent to a parity game <cit.>. There are many algorithms to solve
BESs and parity games <cit.>. Although, it is a long standing open problem whether a polynomial algorithm exists to solve
BESs <cit.>, the existing algorithms work remarkably well in practical contexts.
For a while now, it has been argued that modal logics can become even more effective if they provide quantitative answers
<cit.>, such as durations, probabilities and expected values.
In this paper we lift boolean equation systems to real numbers to form a framework for the evaluation
of quantitative modal formulas, and call the result Real Equation Systems (RESs), i.e.,
fixed-point equation systems over the domain of the extended reals, ∪{-∞, ∞}.
Conjunction and disjunction are interpreted as minimum and maximum, and new operators such as addition and multiplication
with positive constants are added. A typical example of a real equation system is the following
μ X = (1/2 X+1)∨(1/5 Y+3),
ν Y = ((1/10 Y-10)∨(2 X+5))∧ 17.
Based on Tarski's fixed-point theorem, this real equation system has a unique solution. Using the method provided in this
paper we can determine this solution using algebraic manipulation.
In the case above, see Section <ref>, the second fixed-point equation can be simplified to
ν Y=-100/9∨ ((2 X+5)∧ 17). It is sound to substitute this in the first equation, which becomes
μ X = (1/2 X+1)∨7/9∨((2/5X+4)∧32/5).
This equation can be solved for X yielding X=32/5, from which it directly follows that Y=17.
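Before moving on, the claimed solution can be cross-checked numerically. The small Python snippet below (not part of the paper) verifies that X = 32/5 and Y = 17 is indeed a fixed point of both equations, reading ∨ as maximum and ∧ as minimum; that it is also the intended least/greatest solution follows from the derivation sketched above.

    from fractions import Fraction as F

    # mu X = (1/2 X + 1) v (1/5 Y + 3)
    # nu Y = ((1/10 Y - 10) v (2 X + 5)) ^ 17
    X, Y = F(32, 5), F(17)

    rhs_X = max(F(1, 2) * X + 1, F(1, 5) * Y + 3)
    rhs_Y = min(max(F(1, 10) * Y - 10, 2 * X + 5), F(17))

    print(rhs_X == X, rhs_Y == Y)   # True True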
Concretely, this paper has the following results. We define real equation systems with alternating fixed-points.
The base syntax for expressions is equal to that of <cit.> with constants,
minimum, maximum, addition and multiplication with positive real constants. We add four additional operators,
namely two conditional operators, and two tests for infinity, which turn out to be required to algebraically solve arbitrary real
equation systems.
We provide algebraic laws that allow to transform any expression to conjunctive/disjunctive normal form.
Based on this normal form we provide rules that allow to eliminate each variable bound in the left-hand side of an equation from the right-hand side of that equation. This enables `Gauß-elimination', developed for BESs,
using which any real equation system can be solved.
We provide a quantitative modal logic, and define how a quantitative formula and a (probabilistic) labelled transition system ((p)LTS)
can be transformed into a RES. The solution of the initial variable of this equation system is
equal to the evaluation of the quantitative formula on the labelled transition system. We also briefly touch upon the embedding
of BESs into RESs.
The approach in this paper follows the tradition of boolean equation systems <cit.>.
By allowing data parameters in the fixed-point variables we obtain Parameterised Boolean Equation Systems (PBESs)
which is
a very expressive framework that forms the workhorse for model checking <cit.>. In this paper we do not address such parametric extensions, as they are pretty straightforward,
but in combination with parameterised quantitative modal logic, it
will certainly provide a very versatile framework for quantitative model checking.
There are a number of extensions of the boolean equation framework to the setting of reals but these
typically limit themselves to only single fixed-points.
In <cit.> the minimal integer solutions for a set of equations with only
minimal fixed-points is determined. In <cit.> a polynomial algorithm is provided
to find the minimal solution for a set of real equation systems.
In <cit.> convex lattice equation systems are introduced, also restricted to
a single fixed-point. In that paper a proof system is given
to show that all models of the equations are consistent, meaning that the evaluation of a quantitative modal formula
is limited by some upper-bound.
In <cit.>, the Łukasiewicz μ-calculus is studied, which resembles RESs
restricted to the interval [0,1].
This logic does allow minimal and maximal fixed-points.
They provide two algorithmic ways of computing the solutions for formulas in their logic, viz. an indirect
method that builds formulas in the first-order theory of linear arithmetic and exploits quantifier elimination,
and a method that uses iteration to refine successive approximations of conditioned linear expressions. Embedding
our logic in the Łukasiewicz μ-calculus can be done by mapping the extended reals onto the interval
[0,1] using an appropriate sigmoid function. But such a mapping does not map our addition and constant multiplication
to available counterparts in the Łukasiewicz μ-calculus, which prevents using algorithms for
Łukasiewicz μ-terms <cit.> to our setting.
However, as the Łukasiewicz μ-calculus is directly encodable into the RES framework,
all our results are directly
applicable to the Łukasiewicz μ-calculus.
The proofs of all lemmas and theorems are given in Appendix <ref>.
§ EXPRESSIONS AND NORMAL FORMS
We work in the setting of extended real numbers, i.e., ∪{∞,-∞}, denoted by .
We assume the normal total
ordering ≤ on where -∞≤ x and x≤∞ for
all x∈.
Throughout this text we employ a set of variables and
valuations η:→ that map variables to extended reals.
We write η(X) to apply η to X, and η[X:=r] to adapt by:
η[X:=r](Y) =
  r      if X=Y,
  η(Y)   otherwise.
We consider expressions over the set of variables with the following syntax.
e ::= X| d| c·e | e+e| e∧ e | e∨ e |eee|eee|(e)|(e)
where X∈, d∈
is a constant, c∈_>0 a positive constant,
+ represents addition, ∧ stands for minimum, ∨ for maximum,
___ and ___ are
conditional operators, and and are auxiliary functions to check for ±∞.
The conditional operators and the checks for infinity occur naturally while solving fixed-point equations
and therefore, we made them part of the syntax.
We apply valuations to expressions, as in
η(e), where η distributes over all operators in the expression.
The interpretation of these operators on the domain is
largely obvious. A variable X gets a value by a valuation.
Multiplying expressions with a constant c is standard, and yields
±∞ if applied on ±∞. The conditional operators, addition
and infinity operators are defined below where e,e_1,e_2,e_3∈.
e_1 + e_2 =
  e_1+e_2   if e_1,e_2 ∈ ℝ, i.e., apply normal addition,
  ∞         if e_1=∞ or e_2=∞,
  -∞        if e_i=-∞ and e_3-i≠∞ for i=1,2.

e_1e_2e_3 (first conditional operator) =
  e_2∧ e_3   if e_1≤ 0,
  e_3        if e_1>0.

e_1e_2e_3 (second conditional operator) =
  e_2        if e_1< 0,
  e_2∨ e_3   if e_1≥ 0.

(e) (test for ∞) =
  ∞    if e=∞,
  -∞   if e≠∞.

(e) (test for -∞) =
  ∞    if e≠-∞,
  -∞   if e=-∞.
Note that all defined operators are monotonic on . We have the identity
(e)=e+-∞, and so, we do not treat as a primary operator. We write e[X:=e'] for
the expression representing the syntactic substitution of e' for X in e.
We write (e) for the set of variables from occurring in e.
Table <ref> contains many useful algebraic laws for our operators.
The addition operator + has as property that -∞+∞=∞+-∞=∞.
One may require the other natural addition operator +̂, as used in <cit.>,
satisfying that -∞+̂∞=∞+̂-∞=-∞.
It can be defined as follows:
e_1+̂e_2 = (e_1)-∞((e_2)-∞(e_1+e_2)).
We can extend
the syntax with unary negation -e with its standard meaning,
and, provided no variable occurs in the scope of its definition within an odd number of
negations, negation can be eliminated using standard simplification rules. Therefore, we
do not consider it as a primary part of our syntax. At the end of Table <ref> we list several identities
involving negation.
Note that operators + and +̂ are each other's dual with regard to negation.
We introduce normal forms, crucial to solve real equation systems, where the sum, conjunction and disjunction over empty domains of variables equal 0, ∞ and -∞, respectively.
Let be a set of variables.
An expression e is in simple conjunctive normal form iff it has the shape
⋀_i∈ I⋁_j∈ J_i ((∑_X∈_ijc^X_ij· X) +
(∑_X∈'_ij(X)) + d_ij)
and
it is in simple disjunctive normal form iff it has the shape
⋁_i∈ I⋀_j∈ J_i ((∑_X∈_ijc^X_ij· X) +
(∑_X∈'_ij(X)) +d_ij)
where _ij⊆ and '_ij⊆ are finite sets of variables,
c_ij^X∈_>0, and d_ij∈.
An expression e is in conjunctive, resp. disjunctive normal form iff
* e is in simple conjunctive, resp. disjunctive normal form, or
* e has the shape e_1e_2e_3 or e_1e_2e_3
where e_1 is in simple conjunctive, resp. disjunctive normal form and e_2 and e_3 are
conjunctive resp. disjunctive normal forms.
Each expression e not containing the conditional operators e_1e_2e_3 or e_1e_2e_3 can be
rewritten to a simple conjunctive or disjunctive normal form using the equations in Table <ref>.
Expression of the forms e_1e_2e_3 and e_1e_2e_3 can be rewritten to
equivalent expressions where the first argument of such a conditional operator
is a simple conjunctive or disjunctive normal form using the equations in Table <ref>.
Each expression e can be
rewritten to both a conjunctive and a disjunctive normal form using the equations in
Table <ref>.
§ REAL EQUATION SYSTEMS AND GAUSS-ELIMINATION
In this section we introduce Real Equation Systems (RESs) as sequences of fixed-point equations, introduce a natural equivalence
between RESs, and provide
a generic solution method, known as Gauß-elimination <cit.>.
Let be a set of variables. A Real Equation System (RES) E is a finite sequence of
(fixed-point) equations
σ_1 X_1 = e_1,…, σ_n X_n = e_n
where σ_i is either the minimal fixed-point operator μ or the maximal fixed-point operator ν,
X_i ∈ are variables and e_i are expressions.
We write ( E) for the set of variables occurring in the left-hand side, i.e.,
( E)={X_1,…,X_n}.
The empty sequence of equations is denoted by ε.
The semantics of a real equation system is a giving
the solutions of all variables, based on an initial η giving the solution for all variables
not bound in E.
Let be a set of variables and E be a real equation system over .
The solution Eη:→ yields an extended real number for
all X ∈, given a η:→ of E. It is
inductively defined as follows:
εη = η,
σ X=e, Eη = E (η[X:=σ(X, E,η,e)])
where σ (X, E,η,e) is defined as
μ(X, E,η,e) = ⋀{r∈ | r ≥ E (η[X:=r])(e)}   and
ν(X, E,η,e) = ⋁{r∈ | E (η[X:=r])(e) ≥ r}.
It is equivalent to write = instead of ≥ in the above sets.
This makes the fixed-points easier to understand.
Note that if the real equation system is closed, i.e., all variables in the
right-hand sides occur in ( E), the value Eη(X) is independent of η for all X ∈( E).
Following <cit.>, we introduce the notion of equivalency between equation systems.
We use the symbol ≡ to distinguish this equivalence from `=' used in equation systems.
Let E, E' be real equation systems. We say that E≡ E' iff
E, Fη= E', Fη for
all η and real equation systems
F with ( F)∩ (( E)∪( E'))=∅.
In <cit.> it was observed that defining E≡ E'
as Eη= E'η for all η is not desirable, as the
resulting equivalence is not a congruence. With this alternative notion, we find that
μ X=Y and ν X=Y are equivalent. But μ X=Y, ν Y=X and ν X=Y, ν Y=X are not
as the first one has solution X=Y=-∞ and the second one has X=Y=∞.
However, if the fixed-point symbol is the same, it is not necessary to take surrounding equations into
account. This is a pretty useful lemma which makes the proofs in this paper much easier, and of which
we are not aware that it occurs elsewhere in the literature.
Let X be a variable, e and f be expressions and σ either the minimal or the maximal fixed-point symbol.
If for any η it holds that σ X=eη=σ X=fη
then σ X=e≡σ X=f.
The proof of the main Theorem <ref> is quite involved and heavily uses the following two lemmas,
which we only give for the minimal fixed-point. The formulations for the maximal fixed-point are dual.
Let X∈ be a variable and e,f be expressions. It holds that μ X=e ≡ μ X=f
if for every η:
* for the smallest r∈ such that r=η[X:=r](e) it holds that
there is an r'∈ satisfying that r'≤ r and r'≥η[X:=r'](f), and, vice versa,
* for the smallest r∈ such that r=η[X:=r](f) it holds that
there is an r'∈ satisfying that r'≤ r and r'≥η[X:=r'](e).
If μ X=e ≡ μ X=f, then for any η it holds that
* for any r∈ such that r≥η[X:=r](e), there is an r'∈ such
that r'≤ r and r'= η[X:=r'](f), and, vice versa,
* for any r∈ such that r≥η[X:=r](f), there is an r'∈ such
that r'≤ r and r'= η[X:=r'](e).
The notion of equivalence of Definition <ref> is an equivalence relation on RESs and it satisfies the
properties E1-E7 in Table <ref>. E1-E5 are proven for boolean equation systems in <cit.> and the proofs carry over to our setting.
The proofs of E6 and E7 are given in Appendix <ref>.
In the table, σ and σ' stand for the fixed-point symbols μ and ν.
The equivalences E3 and E4 above give a method to solve arbitrary equation systems, provided a
single equation can be solved. Here, solving a single equation σ X=e means replacing it by an
equivalent equation σ X=e' where X does not occur in e', which is the topic of the
next section. This method is known as Gauß-elimination as it resembles the well-known Gauß-elimination
procedure for sets of linear equations <cit.>.
The idea behind Gauß-elimination for a real equation system E is as follows. First, the last
equation σ_n X_n=e_n of E is solved for X_n.
Assume the solution is σ_n X_n=e_n', where X_n does not occur in e_n'. Using E3 the expression e_n' is
substituted for all occurrences X_n in right-hand sides of E removing all occurrences of X_n except
in the left hand side of the last equation. Subsequently, this process is repeated for the one but
last equation of E up to the first equation. Now the first equation has the shape X_1=e_1 where
no variable X_1 up till X_n occurs in e_1. Using E4 this equation can be moved to the end of E, and
by applying E3 all occurrences of X_1 are removed from the right-hand sides of E. This is then repeated
for X_2, which now also does not contain X_1,…,X_n, until all variables X_1,…,X_n have been removed
from all right-hand sides of E.
A concrete, but simple example is the following. Consider the real equation system
μ X=Y, ν Y=(X+1)∧ Y.
We can derive:
μ X=Y, ν Y=(X+1)∧ Y  (†)≡  μ X=Y, ν Y=X+1  E3≡  μ X=X+1, ν Y=X+1  ()≡
μ X=-∞, ν Y=X+1  E4≡  ν Y=X+1, μ X=-∞  E3≡  ν Y=-∞, μ X=-∞.
Solving the equation ν Y=(X+1)∧ Y at (†) above, and μ X=X+1 at () can be done
with simple fixed-point iteration. In ν Y=(X+1)∧ Y, fixed-point iteration starts with Y=∞. The first iteration yields Y=X+1, which is stable and hence the maximal fixed-point solution.
For μ X=X+1, the initial approximation X=-∞ is also a solution, and hence the minimal solution.
Unfortunately, fixed-point iteration does not always terminate. For instance, μ X=(X+1)∨ 0 has minimal
solution X=∞, which can only be obtained via an infinite number of iteration steps.
§ SOLVING SINGLE EQUATIONS
In this section we show that it is possible to solve each fixed-point equation σ X=e in a finite number of steps.
First assume that e does not contain conditional operators. If we have a minimal fixed-point equation μ X=e,
we know via Theorem <ref> that we can rewrite e to simple conjunctive normal form.
We want to explicitly expose occurrences of the variable X in the normal form of e and do this by denoting
the normal form of e as shown in (<ref>).
Here, all expressions containing variables different from X are moved to f_ij or m_i.
⋀_i∈ I(⋁_j∈ J_i(c_ij· X + c'_ij·(X) + f_ij)∨ m_i).
The expressions f_ij and m_i do not contain X.
Subexpressions c_ij· X are optional,
i.e., abusing notation, we allow c_ij to be 0 if this sub-term
is not present. Likewise, (X) is optional and therefore, c'_ij is either 0 or 1, where
0 means that the expression is not present.
Constants c_ij and c'_ij cannot both be 0, as in that case the
conjunct does not contain X and is hence part of m_i.
We define the solution of μ X=e, in which e is assumed to be of shape (<ref>), as μ X=^μ_X=e where:
[ _X=e^μ = ⋀_i∈ I (((⋁_j∈ J_if_ij))
((m_i)-∞
((
⋁_j∈ J_i| c_ij≥ 1f_ij+(c_ij-1)· U_i)∨⋁_j∈ J_i| c'_ij=1∞U_i∞))
∞) ]
where U_i=m_i∨⋁_j∈ J_i| c_ij<11/1-c_ij· f_ij.
Note that we use the notation ⋁_j∈ J_i|𝑐𝑜𝑛𝑑 where 𝑐𝑜𝑛𝑑 is a condition.
This means that the disjunction is only taken over elements j that satisfy the condition. Also observe
that we use expressions such as 1/1-c_ij·f_ij. This is an ordinary multiplication
with 1/1-c_ij as positive constant. It is worth noting that if only rational numbers are used
in the equations, the solutions to the variables are restricted to -∞, ∞ and rationals.
It can be understood that (<ref>) is a solution of (<ref>) as follows.
First observe that due to property E6 the solution of a minimal fixed-point
distributes over the initial conjunction ⋀_i∈ I of clauses. This
means that we can fix some i∈ I and only concentrate on understanding how one single clause
⋁_j∈ J_i(c_ij· X + c'_ij·(X) + f_ij)∨ m_i must be solved.
If f_ij is equal to ∞ for some j∈ J_i,
the solution must be infinite. This is ensured by the outermost conditional
operator in (<ref>). Now, assuming that no f_ij is equal to ∞, we inspect m_i. If m_i equals -∞,
then the minimal solution for the given i∈ I is also -∞. This explains the nested conditional operator in
(<ref>).
Next consider the innermost conditional operator of (<ref>) and additionally assume m_i>-∞. If there is some
c'_ij that is equal to 1, then the minimal solution is at least m_i due to the disjunct m_i that appears in the clause. But then it must also be at least 1·(m_i)=∞. Hence, in this case the solution is ∞,
which is ensured by the expression in the condition of the innermost conditional ⋁_j∈ J_i| c'_ij=1∞.
Otherwise, all c'_ij equal 0, and both the right-hand side of (<ref>) and the solution
(<ref>) can be simplified to
⋁_j∈ J_i(c_ij· X + f_ij)∨ m_i and (
⋁_j∈ J_i| c_ij≥ 1f_ij+(c_ij-1)· U_i)U_i∞.
This resulting situation is best explained using Figure <ref> (left). The simple conjunctive normal form consists
of a number of disjunctions of the shape c_ij· X+f_ij. These characterise lines of which we are interested in their
intersection with the line x=y. In Figure <ref> such lines are drawn as l_1,…,l_4, and h_1 and h_2.
Due to the disjunction, we are interested in the maximal intersection point. If we first concentrate on those lines with c_ij<1,
then we see that (U_i,U_i) is the maximal intersection point of these lines above m_i. This intersection point is the solution for the equation
unless there is a steep line, with c_ij≥ 1 which at x=U_i lies above (U_i,U_i). In the figure there is such a line, viz. h_2.
In such a case the fixed-point lies at the intersection of h_2 with the line x=y for x>U_i.
As this point does not exist in ,
the solution is ∞. The expression ⋁_j∈ J_i| c_ij≥ 1f_ij+(c_ij-1)· U_i
in (<ref>) takes care of this situation. Steep lines, like h_1 which lie below (U_i,U_i) at x=U_i can be ignored,
as they do not force the minimal fixed-point U_i to become larger.
In case of a maximal fixed-point equation, ν X=e where
e is a simple disjunctive normal form, it is useful to again expose the occurrences of X.
We can denote the normal form of e in the following way:
⋁_i∈ I(⋀_j∈ J_i(c_ij· X + c'_ij·(X) + f_ij)∧ m_i)
where c_ij· X and (X) are optional, i.e., c_ij can be 0, and c'_ij is either 0 or 1,
where 0 means that the expression is not present. One of c_ij and c'_ij is not equal to 0.
Again, the expressions f_ij and m_i do not contain X.
The solution of ν X=e, where e is of the shape (<ref>), is ν X=^ν_X=e with
[ _X=e^ν = ⋁_i∈ I ((m_i)
(⋀_j∈ J_i| c_ij≥ 1∧ c'_ij=0(f_ij+(c_ij-1))· U_i)-∞U_i
∞) ]
where U_i=m_i∧⋀_j∈ J_i| c_ij<1∧ c'_ij=0 1/(1-c_ij)· f_ij.
The two fixed-point solutions are not syntactically dual which is due to the fact that simple
conjunctive and disjunctive
normal forms are not each other's dual, because of the presence of + and . We refrain from sketching the intuition underlying the solution to the maximal fixed-point as it is similar to that of the minimal fixed-point.
A full normal form can contain the conditional operators e_1e_2e_3 and e_1e_2e_3.
Suppose we have an equation
σ X = e_1e_2e_3
with σ either μ or ν.
For the minimal fixed-point
the right-hand side of the solution is
^μ_X=e_1e_2e_3=
(e_1[X:=^μ_X=e_2∧^μ_X=e_3])^μ_X=e_2^μ_X=e_3.
For the maximal fixed-point we find the right-hand side
^ν_X=e_1e_2e_3=
(e_1[X:=^ν_X=e_3])^ν_X=e_2∧ e_3^ν_X=e_3.
In case of the other conditional operator
σ X = e_1e_2e_3 we obtain
for the right side of the minimal fixed-point
^μ_X=e_1e_2e_3=
(e_1[X:=^μ_X=e_2])^μ_X=e_2^μ_X=e_2∨ e_3, and
for the right side of the maximal fixed-point
^ν_X=e_1e_2e_3=
(e_1[X:=^ν_X=e_2∨^ν_X=e_3])^ν_X=e_2^ν_X=e_3.
The following theorem summarises that these solutions solve fixed-point equations.
For any fixed-point symbol σ, variable X∈ and expression e, it holds that
σ X=e ≡ σ X=_X=e^σ
and X ∉(_X=e^σ), where _X=e^σ is defined above.
§ RELATION TO BOOLEAN EQUATION SYSTEMS
A boolean equation system (BES) is a restricted form of a real equation system where solutions can only be true
or false <cit.>. Concretely, the syntax for expressions is
e ::= X | true | false | e∨ e | e∧ e
where X is taken from some set of variables <cit.>. A boolean equation system
is a sequence of fixed-point equations σ_1 X_1=e_1,…,σ_n X_n=e_n where σ_i are fixed-point operators, X_i are variables ranging over true and false,
and e_i are boolean expressions.
We do not spell out
the semantics of boolean equation systems, as it is similar to that of RESs. However, we believe that it is useful to
indicate the relation with real equation systems.
The simplest embedding is where a given BES is literally transformed to a RES and true and false
are interpreted as ∞ and -∞. We consider a minimal fixed-point equation. The right-hand side can be rewritten
to a simple conjunctive normal form.
We write this in the shape of equation (<ref>). So, c_ij=1, c'_ij=0,
f_ij is absent and m_i does not contain X and can only be interpreted as ±∞.
Exactly if J_i is not empty, X is present in conjunct i.
μ X=⋀_i∈ I((⋁_j∈ J_i X) ∨ m_i).
The solution is given by
equation (<ref>), which can be simplified to:
⋀_i∈ I ((m_i)-∞((⋁_j∈ J_i 0)m_i∞))=
⋀_i∈ I m_i=
⋀_i∈ I ((⋁_j∈ J_i-∞)∨ m_i).
The latter exactly coincides with the Gauß-elimination rule for BESs that says that in an equation μ X=e, any
occurrence of X in e can safely be replaced by false. For the maximal fixed-point operator, dual reasoning
applies. As Gauß-elimination is a complete way to solve a BES with true and false, and exactly the
same reduction works with the corresponding RES with ∞ and -∞, this confirms that this interpretation works.
An alternative interpretation is given by taking two arbitrary constants c_true and c_false with
the only constraint that c_true>c_false.
A boolean equation system σ_1 X_1=e_1,…,σ_n X_n=e_n is translated into
σ_1 X_1=c_false∨(c_true∧ e_1),…,σ_n X_n=c_false∨ (c_true∧ e_n)
of which the validity can be established in the same way as above.
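As an illustration of this second interpretation, here is a minimal sketch (ours), with the two constants chosen as C_TRUE=1 and C_FALSE=0 and boolean expressions encoded as nested tuples; under the embedding, ∨ and ∧ are evaluated as max and min.

# Boolean expressions as nested tuples: ('var', X), ('true',), ('false',),
# ('and', e1, e2), ('or', e1, e2).
C_TRUE, C_FALSE = 1.0, 0.0          # any two reals with C_TRUE > C_FALSE

def embed(e):
    # Map a boolean expression to a real expression over the same variables.
    tag = e[0]
    if tag == 'true':  return ('const', C_TRUE)
    if tag == 'false': return ('const', C_FALSE)
    if tag == 'var':   return e
    return (tag, embed(e[1]), embed(e[2]))          # 'and' / 'or'

def embed_rhs(e):
    # Right-hand side of the translated equation: c_false ∨ (c_true ∧ embed(e)).
    return ('or', ('const', C_FALSE), ('and', ('const', C_TRUE), embed(e)))

def evaluate(e, env):
    # Evaluate a real expression; ∨ is max and ∧ is min.
    tag = e[0]
    if tag == 'const': return e[1]
    if tag == 'var':   return env[e[1]]
    l, r = evaluate(e[1], env), evaluate(e[2], env)
    return max(l, r) if tag == 'or' else min(l, r)

rhs = embed_rhs(('or', ('var', 'Y'), ('and', ('var', 'X'), ('true',))))
print(evaluate(rhs, {'X': C_TRUE, 'Y': C_FALSE}))   # -> 1.0, i.e. true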
§ QUANTITATIVE MODAL FORMULAS AND THEIR TRANSLATION TO RESS
We can write quantitative modal formulas that
yield a value instead of true and false. In the next section we
provide examples of what can be expressed.
Our formulas have the syntax
ϕ ::= X|
d| c·ϕ|ϕ+ϕ|ϕ∨ϕ|ϕ∧ϕ|⟨ a⟩ϕ| [a]ϕ|μ X.ϕ|ν X.ϕ.
Here, d and c with c>0 are constants, X
is a variable, and a is an action from some set of actions.
Although there are many similar logics around, we have not encountered this exact form before.
We evaluate these modal formulas on probabilistic LTSs.
For a finite set of states S, we use distributions d:S→ [0,1] where d(s)
is the probability to end up in state s. Distributions satisfy that ∑_s∈ Sd(s)=1.
The set of all distributions over S is denoted by D(S).
A probabilistic labelled transition system (pLTS) is a four-tuple M=(S, Act, ⟶, d_0) where
S is a finite set of states,
Act is a finite set of actions,
the relation ⟶ ⊆ S× Act× D(S) represents the transition relation, and
d_0∈ D(S) is
the initial distribution.
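For later reference, a pLTS can be represented concretely as follows; this is a minimal sketch with a toy example of our own (it is not the LTS of Figure <ref>), where each distribution is a dictionary from states to probabilities.

# A minimal concrete representation of a pLTS (names are ours): transitions
# maps a state to a list of (action, distribution) pairs.
plts = {
    'states': {'s1', 's2', 's3'},
    'transitions': {
        's1': [('a', {'s2': 1/3, 's3': 2/3})],
        's2': [('b', {'s2': 1.0})],
        's3': [],
    },
    'initial': {'s1': 1.0},
}

def is_valid(m):
    # Check that every distribution (including the initial one) sums to 1
    # and only mentions known states.
    dists = [d for ts in m['transitions'].values() for (_, d) in ts]
    dists.append(m['initial'])
    return all(abs(sum(d.values()) - 1.0) < 1e-12 and
               set(d) <= m['states'] for d in dists)

print(is_valid(plts))   # -> True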
We leave out the definition of the interpretation of quantitative modal formulas on probabilistic LTSs,
as it is standard. Instead, we define the real equation system that is generated given a
modal formula ϕ and a probabilistic labelled transition system M=(S, Act, ⟶, d_0), following the translations in
<cit.>.
The function (ϕ) generates the required sequence of RES equations for ϕ and
(s,ϕ) yields the expression for the right-hand side of such an equation representing the
value of ϕ in state s.
Generation of the equations:
(X)=ϵ,
(d)=ϵ,
(c·ϕ)=(ϕ),
(ϕ_1+ϕ_2)=(ϕ_1),(ϕ_2),
(ϕ_1∨ϕ_2)=(ϕ_1),(ϕ_2),
(ϕ_1∧ϕ_2)=(ϕ_1),(ϕ_2),
(⟨ a⟩ϕ)=(ϕ),
([ a]ϕ)=(ϕ),
(μ X.ϕ)=⟨μ X_s=(s,ϕ)| s∈ S⟩,(ϕ),
(ν X.ϕ)=⟨ν X_s=(s,ϕ)| s∈ S⟩,(ϕ).

Generation of the right-hand sides:
(s,X)=X_s,
(s,d)=d,
(s,c·ϕ)=c·(s,ϕ),
(s,ϕ_1+ϕ_2)=(s,ϕ_1)+(s,ϕ_2),
(s,ϕ_1∨ϕ_2)=(s,ϕ_1)∨(s,ϕ_2),
(s,ϕ_1∧ϕ_2)=(s,ϕ_1)∧(s,ϕ_2),
(s,⟨ a⟩ϕ)=⋁_{d∈ D(S)| s ⟶^a d}∑_s'∈ S d(s')·(s',ϕ),
(s,[ a]ϕ)=⋀_{d∈ D(S)| s ⟶^a d}∑_s'∈ S d(s')·(s',ϕ),
(s,μ X.ϕ)=X_s,
(s,ν X.ϕ)=X_s.
We use the notation ⟨σ X_s =e_s| s∈ S⟩ for the sequence of all equations σ X_s=e_s for
all states s ∈ S.
The evaluation of a modal formula ϕ in M with initial distribution d_0
is the solution in of variable X_init in the RES
μ X_init=(∑_s∈ Sd_0(s)·(s,ϕ)), (ϕ).
The use of the minimal fixed-point for the initial variable is of no consequence as
X_init does not occur elsewhere in the equation system.
A maximal fixed-point could also be used.
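The only non-trivial right-hand sides are those of the modalities. The sketch below (ours) does not build symbolic expressions but evaluates them directly for given candidate values of ϕ, which is what is needed when the resulting RES is solved by iteration: ⟨a⟩ takes the maximum of the expected values over the outgoing a-transitions, [a] the minimum, with -∞ and ∞ for an empty set of transitions.

import math

transitions = {                      # state -> list of (action, distribution); toy data of our own
    's1': [('a', {'s2': 0.5, 's3': 0.5}), ('a', {'s2': 1.0})],
    's2': [], 's3': [],
}

def rhs_diamond(s, a, phi_value):
    # phi_value: dict giving the (already computed) value of phi in each state.
    options = [sum(p * phi_value[t] for t, p in d.items())
               for act, d in transitions[s] if act == a]
    return max(options, default=-math.inf)

def rhs_box(s, a, phi_value):
    options = [sum(p * phi_value[t] for t, p in d.items())
               for act, d in transitions[s] if act == a]
    return min(options, default=math.inf)

phi = {'s1': 0.0, 's2': 4.0, 's3': 2.0}
print(rhs_diamond('s1', 'a', phi))   # max(0.5*4 + 0.5*2, 1.0*4) = 4.0
print(rhs_box('s1', 'a', phi))       # min(3.0, 4.0) = 3.0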
§ APPLICATIONS
§.§ The longest a-sequence to a b-loop
We are interested in the longest sequence of actions a to reach a state where an infinite sequence
of actions b can be done. The modal formula that expresses this is the following:
μ X.(1+⟨ a⟩ X)∨(0∧ν Y.⟨ b⟩ Y).
The last part with the maximal fixed-point 0∧ν Y.⟨ b⟩ Y when evaluated in a state equals
-∞ if no infinite sequence of b's is possible. Otherwise, it evaluates to 0.
The first part 1+⟨ a⟩ X yields 1 plus the maximum values of the evaluation
of X in all states reachable by an action a. If no infinite b-sequence can be reached from such a state, this
value is -∞, and otherwise it represents the maximal number of steps to reach such an infinite b-sequence.
We evaluate this formula in the labelled transition system given at the right in Figure <ref>.
This leads to the following real equation system where X_i and Y_i correspond to the value of X, resp. Y in state s_i. The solution of the equation system is written behind
each equation.
μ X_1=(1+(X_2∨ X_3∨ X_4∨ X_6))∨(0∧ Y_1)   (solution 2),      ν Y_1=-∞   (solution -∞);
μ X_2=(1+X_3)∨(0∧ Y_2)   (solution 1),      ν Y_2=-∞   (solution -∞);
μ X_3=(1+(-∞))∨(0∧ Y_3)   (solution 0),      ν Y_3=Y_3   (solution ∞);
μ X_4=(1+X_5)∨(0∧ Y_4)   (solution -∞),      ν Y_4=-∞   (solution -∞);
μ X_5=(1+X_6)∨(0∧ Y_5)   (solution -∞),      ν Y_5=-∞   (solution -∞);
μ X_6=(1+(-∞))∨(0∧ Y_6)   (solution -∞),      ν Y_6=-∞   (solution -∞).
We find that the longest sequence of actions a is 2, which matches
our expectation.
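The tabulated solution can be reproduced with a naive iteration of the right-hand sides (a sketch of ours; the Y-values are the maximal fixed points listed above, and the X-values are obtained by iterating upward from -∞):

import math
INF = math.inf

# Maximal fixed points for Y, taken from the table above.
Y = {1: -INF, 2: -INF, 3: INF, 4: -INF, 5: -INF, 6: -INF}

def rhs(X):
    # Right-hand sides of the X-equations, with ∨ as max and ∧ as min.
    return {
        1: max(1 + max(X[2], X[3], X[4], X[6]), min(0, Y[1])),
        2: max(1 + X[3],                        min(0, Y[2])),
        3: max(1 + (-INF),                      min(0, Y[3])),
        4: max(1 + X[5],                        min(0, Y[4])),
        5: max(1 + X[6],                        min(0, Y[5])),
        6: max(1 + (-INF),                      min(0, Y[6])),
    }

X = {i: -INF for i in range(1, 7)}
for _ in range(10):          # the system stabilises after a few rounds
    X = rhs(X)
print(X[1])                  # -> 2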
§.§ The probability to reach a loop
We are interested in the probability to reach a b-loop. We apply it to the LTS at the left in Figure <ref>.
Due to the non-determinism there are more paths to such loops, and we are interested in the path
with the highest probability. This is expressed by the modal formula
μ X.⟨ a ⟩ X∨⟨ b ⟩ X ∨ ((ν Y.⟨ b⟩ Y ∨ 0)∧ 1).
The formula ν Y.⟨ b⟩ Y∨0 yields ∞ if an infinite sequence of actions b is possible
and 0 otherwise.
As we want a probability, we add the bounding operations ∧ 1 and ∨ 0 to enforce that the solution is in [0,1].
The translation of this formula on the labelled transition system in Figure <ref> yields the following
real equation system.
μ X_1=(1/3·X_2+2/3·X_3)∨(1/2·X_4+1/2·X_5)∨ (Y_1∧ 1) = 1/3∨ 1/2∨ 0 = 1/2,      ν Y_1=-∞∨ 0 = 0;
μ X_2= X_2∨ (Y_2∧ 1) = X_2∨ 1 = 1,      ν Y_2=Y_2 = ∞;
μ X_3= -∞∨ (Y_3∧ 1) = -∞∨ 0 = 0,      ν Y_3=-∞∨ 0 = 0;
μ X_4= X_4∨ (Y_4∧ 1) = X_4∨ 1 = 1,      ν Y_4=Y_4 = ∞;
μ X_5= -∞∨ (Y_5∧ 1) = -∞∨ 0 = 0,      ν Y_5=-∞∨ 0 = 0.
This shows that the maximal probability to reach a b-loop is 1/2.
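The same naive check works here (again a sketch of ours, with the maximal fixed points for Y filled in from the equations above):

INF = float('inf')
Y = {1: 0.0, 2: INF, 3: 0.0, 4: INF, 5: 0.0}   # maximal fixed points, from above

def rhs(X):
    # Right-hand sides of the X-equations, with ∨ as max and ∧ as min.
    return {
        1: max(X[2]/3 + 2*X[3]/3, X[4]/2 + X[5]/2, min(Y[1], 1)),
        2: max(X[2], min(Y[2], 1)),
        3: max(-INF, min(Y[3], 1)),
        4: max(X[4], min(Y[4], 1)),
        5: max(-INF, min(Y[5], 1)),
    }

X = {i: -INF for i in range(1, 6)}
for _ in range(10):
    X = rhs(X)
print(X[1])   # -> 0.5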
§.§ Determining the reward of process behaviour
In Figure <ref> at the right a labelled transition system is drawn, where a reward R is
changed when a transition
takes place. The transition labelled with action a costs one unit,
b yields 1/2R+5 units, and the transition c
adapts the reward by 9/10R+2. We want to know what the maximal stable reward is. This is expressed
by the following formula:
μ R.⟨ a⟩(R-1)∨⟨ b⟩(1/2·R+5)∨⟨ c⟩(9/10·R+2)∨ 0.
Note that we express this as the minimal reward larger than 0, which is the maximum of all individual rewards.
Translating this to a real equation system yields
μ R_1=(R_2-1)∨ -∞∨ -∞∨ 0,      μ R_2=-∞∨ (1/2·R_1+5)∨ (9/10·R_1+2)∨ 0.
We solve this using Gauß-elimination. This means that the second equation is
substituted in the first, which, after some straightforward simplifications, gives us
μ R_1= (1/2·R_1+4)∨(9/10·R_1+1)∨ 0.
We solve this equation using the technique of Section <ref>, leading to:
R_1=4/(1-1/2)∨ 1/(1-9/10)∨ 0=10.
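The value 10 can also be cross-checked numerically by a Kleene-style iteration of the two equations from below; this only converges in the limit, whereas Gauß-elimination gives the exact answer, but it is a useful sanity check (the code is ours):

INF = float('inf')

def rhs(R1, R2):
    # Right-hand sides of the two reward equations, with ∨ as max.
    return (max(R2 - 1, 0.0), max(0.5*R1 + 5, 0.9*R1 + 2, 0.0))

R1, R2 = -INF, -INF
for _ in range(2000):
    R1, R2 = rhs(R1, R2)
print(R1, R2)   # -> 10.0 11.0, i.e. R_1 = 10 as computed above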
§ CONCLUSIONS AND OUTLOOK
We introduce real equation systems (RESs) as the pendant of Boolean Equation Systems with solutions in the
domain of the reals extended with ±∞. By a number of examples we show how this can be used to
evaluate a wide range of quantitative properties of process behaviour.
We provide a complete method to solve RESs using an extension of what is called `Gauß-elimination' <cit.>
to solve boolean equation systems. It shows that any RES can be solved by carrying out a finite number of
substitutions. As solving RESs generalises solving BESs, and Gauß-elimination on BESs is exponential,
our Gauß-elimination technique can also lead to exponential
growth of intermediate terms. A prototype implementation shows that depending on the nature of the system being analysed,
this may or may not be an issue. For instance, analysing the Game of the Goose <cit.>
or The Ant on a Grid <cit.> is practically infeasible with the method proposed here,
while the Lost Boarding Pass Problem
<cit.> is easily solved, even for planes
with 100,000 passengers.
We believe that the next step is to come up with algorithms that are more efficient in practice than
Gauß-elimination. This is motivated by the situation with BESs where for instance the
recursive algorithm <cit.> turns out to be practically far more efficient than Gauß-elimination <cit.>.
§ FULL PROOFS OF THE LEMMAS AND THEOREMS IN THIS PAPER
This appendix repeats
all lemmas and theorems in this paper and adds proofs.
Lemma <ref>
Each expression e not containing the conditional operators e_1e_2e_3 or e_1e_2e_3 can be
rewritten to a simple conjunctive or disjunctive normal form using the equations in Table <ref>.
The proof uses induction on the structure of terms. The only case that is more involved is if e has the shape
(e'). For this we use
that (∑_i∈ I e_i)=(⋀_i∈ I(e_i))∨⋁_i∈ I(e_i) which is provable
with induction on the finite index set I.
Lemma <ref>
Expression of the forms e_1e_2e_3 and e_1e_2e_3 can be rewritten to
equivalent expressions where the first argument of such a conditional operator
is a simple conjunctive or disjunctive normal form using the equations in Table <ref>.
The proof uses induction on the number of operators ___/___
in e_1.
The case where e_1 is X or d is trivial. If e_1 does not contain a
conditional operator, we are ready using Lemma <ref>.
Otherwise, if any of the operators
c·, +, ∨, ∧, or occur as outermost symbols
of e_1,
they can be pushed inside the conditional operator, transforming e_1 to an
expression of the shape f_1f_2f_3 or f_1f_2f_3. The four cases that ensue are all similar.
We only show one case and derive it using equation D_⇒^⇒:
[ e_1e_2e_3=
(f_1f_2f_3)e_2e_3=
((f_1∨ f_2)∧ f_3)e_2e_3. ]
Now (f_1∨ f_2)∧ f_3 is an expression containing one less
conditional operator, and hence, using the induction hypothesis, we can
transform it to a simple conjunctive/disjunctive normal form.
This finishes the proof.
Theorem <ref>
Each expression e can be
rewritten to both a conjunctive and a disjunctive normal form using the equations in
Table <ref>.
The proof uses induction on the structure of expressions.
* The expressions d and X are by themselves conjunctive and disjunctive normal forms.
* Consider the expressions c·e. Using the induction hypothesis, there is a conjunctive/disjunctive normal form equal
to e. The normal form c·e is obtained by pushing c inside the normal form.
* Consider the expressions e_1+e_2, e_1∨ e_2 and e_1∧ e_2.
If e_1 and e_2 are simple normal forms, the result follows by Lemma <ref>.
If one or both of e_1 and e_2 has the shape f_1f_2f_3, then the other term can be pushed inside the second and third
argument, and using the induction hypothesis, these terms can be transformed to the required normal forms also.
* For an expression of the form (e), we find with induction a conjunctive/disjunctive
normal form for e. Using the equations in Table <ref>, and using the identity from the proof of Lemma <ref>,
the operator can be pushed inside, leading to the required normal form.
* The last cases are e_1e_2e_3/e_1e_2e_3.
Using the induction hypothesis there are conjunctive/disjunctive
normal forms f_1, f_2 and f_3 equal
to e_1, e_2 and e_3, respectively.
If e_1 has a simple conjunctive/disjunctive normal form, we are ready, as in that case f_1f_2f_3,
respectively, f_1f_2f_3 is the
required normal form.
The only non-trivial case is if f_1 has the shape f_11f_12f_13 or f_11f_12f_13.
But in this case Lemma <ref> applies,
also leading to the required normal form.
The following lemma provides a monotonicity property that we require and that does not occur
in the main text. We write η≥η' for η and η' iff η(X)≥η'(X) for
all X∈.
Let E be real equation system, e an expression,
and let η and η' be such that η≥η'.
Then ( Eη)(e)≥ ( Eη')(e).
We prove this lemma with induction on the size of E. If E is empty, then the lemma
reduces to η(e)≥η'(e) which follows by monotonicity of e.
If E equals σ X=f, F then, by definition, we must show that
( F(η[X:=σ(X, F,η,f)]))(e)≥ ( F(η'[X:=σ(X, F,η',f)]))(e).
We prove this for σ=μ. The proof for σ=ν is completely similar.
[ ( F(η[X:=σ(X, F,η,f)]))(e)=; ( F(η[X:=⋀{r∈| r≥ F(η[X:=r])(f)}]))(e)≥; ( F(η[X:=⋀{r∈| r≥ F(η'[X:=r])(f)}]))(e)≥; ( F(η'[X:=⋀{r∈| r≥ F(η'[X:=r])(f)}]))(e)=; ( F(η'[X:=σ(X, F,η',f)]))(e). ]
In the first ≥ above, we use the induction hypothesis saying that
F(η[X:=r])≥ F(η'[X:=r]) and therefore,
the minimal fixed-point can only decrease, and hence Fη[X:=⋀…] decreases also using the induction hypothesis.
In the second ≥ we again use the induction hypothesis.
Lemma <ref>
Let X be a variable, e and f be expressions and σ either the minimal or the maximal fixed-point symbol.
If for any η it holds that σ X=eη=σ X=fη
then σ X=e≡σ X=f.
We prove this lemma for σ=μ. The case where σ=ν is completely
dual.
First we elaborate a little on the condition of this lemma. It can be rewritten to
η[X:=⋀{r∈| r≥η[X:=r](e)}]=η[X:=⋀{r∈| r≥η[X:=r](f)}].
Applying both sides to X reduces this further to
⋀{r∈| r≥η[X:=r](e)}=⋀{r∈| r≥η[X:=r](f)}.
This means that the smallest r satisfying r≥η[X:=r](e) is equal to the smallest r' satisfying
r'≥η[X:=r'](f).
We use this property below.
We must prove that for all η and real equation systems F with X∉( F)
that
μ X=e, Fη = μ X=f, Fη.
Expanding this definition gives us an equivalent statement.
[ F( η[X:=⋀{r∈| r≥ F(η[X:=r])(e)}])=; F( η[X:=⋀{r∈| r≥ F(η[X:=r])(f)}]). ]
Define
[ m_e=⋀{r∈| r≥ F(η[X:=r])(e)} and; m_f=⋀{r∈| r≥ F(η[X:=r])(f)}. ]
Note that the lemma follows if we have shown m_e=m_f, which we do below.
Due to symmetry we assume that m_f≤ m_e without loss of generality.
Consider the following expression.
m=⋀{r∈| r≥ζ[X:=r](f)}
where ζ= F(η[X:=m_f]). Clearly, m_f satisfies
m_f≥ζ[X:=m_f](f)
as this is equivalent to
m_f≥ ( F(η[X:=m_f]))[X:=m_f](f).
So, m≤ m_f. Vice versa, m satisfies
m≥ζ[X:=m](f).
This implies
m≥ ( F(η[X:=m_f]))[X:=m](f)≥ ( F(η[X:=m]))[X:=m](f)=( Fη[X:=m])(f)
using Lemma <ref> and the fact that m_f≥ m.
From this we derive that m_f≤ m,
and combined with the already derived m≤ m_f, that m=m_f.
We now turn our attention to m_e and show that m is a solution for r in
r≥ F(η[X:=r])(e).
So, we must show
m≥ F(η[X:=m])(e).
We know that m is the smallest value that satisfies
m≥ζ[X:=m](f).
By (<ref>) we also have that
m≥ζ[X:=m](e).
Combining these results leads to
m≥ζ[X:=m](e)=( F(η[X:=m_f]))[X:=m](e)= F(η[X:=m])(e)
where m=m_f is used in the last equality.
Hence, we know that m_e≤ m, and since m=m_f, also
m_e≤ m_f. We conclude m_e=m_f, which means we have proven this lemma.
Lemma <ref>
Consider some variable X. We find that μ X=e ≡ μ X=f
if for every η:
* for the smallest r∈ such that r=η[X:=r](e) it holds that
there is an r'∈ satisfying that r'≤ r and r'≥η[X:=r'](f), and vice versa,
* for the smallest r∈ such that r=η[X:=r](f) it holds that
there is an r'∈ satisfying that r'≤ r and r'≥η[X:=r'](e).
Dually, it is the case that
ν X=e ≡ ν X=f
if for every η:
* for the largest r∈ such that r=η[X:=r](e) it holds that
there is an r'∈ satisfying that r'≥ r and r'≤η[X:=r'](f), and vice versa,
* for the largest r∈ such that r=η[X:=r](f) it holds that
there is an r'∈ satisfying that r'≥ r and r'≤η[X:=r'](e).
Due to duality, we only provide the proof for the minimal fixed-point.
Define the following two sets:
[ S_e={r∈| r≥η[X:=r](e)}; S_f={r∈| r≥η[X:=r](f)}. ]
We first prove that ⋀ S_e=⋀ S_f. Consider r=⋀ S_e. By the first condition,
there is an r'≤ r such that r'≥η[X:=r'](f). Hence, ⋀ S_f≤ r'≤ r=⋀ S_e.
Using the second condition we prove similarly that ⋀ S_e≤⋀ S_f. So, we conclude
⋀ S_e= ⋀ S_f.
The remainder of the proof consists of a straightforward expansion of the definition. In order to prove that
μ X=e ≡ μ X=f,
it suffices to prove that
μ X=eη=μ X=f
using Lemma <ref>.
Expanding this further, yields the equivalent equality
η[X:=⋀ S_e]=η[X:=⋀ S_f]
with S_e and S_f as defined above. As we have already derived that ⋀ S_e=⋀ S_f,
we can conclude that this last equation is derivable, and hence the lemma follows.
Lemma <ref>
If μ X=e ≡ μ X=f, then for any η it holds that
* for any r∈ such that r≥η[X:=r](e), there is an r'∈ such
that r'≤ r and r'= η[X:=r'](f), and vice versa,
* for any r∈ such that r≥η[X:=r](f), there is an r'∈ such
that r'≤ r and r'= η[X:=r'](e).
If ν X=e ≡ ν X=f, then for any η it holds that
* for any r∈ such that η[X:=r](e)≥ r, there is an r'∈ such
that r'≥ r and r'= η[X:=r'](f), and vice versa,
* for any r∈ such that η[X:=r](f)≥ r, there is an r'∈ such
that r'≥ r and r'= η[X:=r'](e).
Both statements in this lemma are dual to each other, so, we only provide the proof for the minimal fixed-point.
The statement μ X=e ≡ μ X=f implies to the following equation by expanding the definitions with
an empty real equation system.
For any η
η[X:=⋀ S_e]=η[X:=⋀ S_f]
where S_e={r∈| r≥η[X:=r](e)} and
S_f={r∈| r≥η[X:=r](f)}. From this we can conclude ⋀ S_e =⋀ S_f.
According to the condition of case 1 of this lemma there
is an r∈ such that r≤η[X:=r](e). Clearly, ⋀ S_e≤ r. Now take r'=⋀ S_f.
Clearly, r'=⋀ S_f=⋀ S_e≤ r and r' satisfies the equation r'=η[X:=r'](f) because it
is a fixed-point.
The second part is completely symmetric to the first and has the same proof.
Theorem <ref>
For any fixed-point symbol σ, variable X∈ and expression e, it holds that
σ X=e ≡ σ X=_X=e^σ,
where _X=e^σ is defined in the main text of this paper.
Furthermore, the variable X does not occur in _X=e^σ.
By construction it is straightforward to see that X does not occur in _X=e^σ.
We concentrate on
the first part of this theorem.
Consider an equation of the shape σ X=e. If σ=μ we can assume that e is a conjunctive normal form, and
if σ=ν we can assume e is a disjunctive normal form, by Theorem <ref>.
We show by induction on the number of conditional operators in e that the first part of the theorem holds. By the normal form theorem,
e either consists of an application of a conditional operator or it is a simple conjunctive/disjunctive normal form.
* Assume e has the shape e_1e_2e_3. We know using the induction hypothesis that the equations
σ X=e_2, σ X=e_2∧ e_3 and σ X=e_3 have equivalent equations σ X=^σ_X=e_2,
σ X=^σ_X=e_2∧ e_3 and σ X=^σ_X=e_3.
For these equivalences we know the properties as listed in
Lemma <ref>.
First we consider the case where σ=μ.
We use Lemma <ref>. So, we fix some η and we show that
both cases 1 and 2 of Lemma <ref> hold. For case 1 we can assume that there is an r∈ such
that r=η[X:=r](e_1e_2e_3). It suffices to show that there is an r'≤ r such that
r'≥η[X:=r']((e_1[X:=^μ_X=e_2∧^μ_X=e_3])^μ_X=e_2^μ_X=e_3). We distinguish two cases.
* First the situation where η[X:=r](e_1)≤ 0 is considered. In this case r=η[X:=r](e_2∧ e_3),
and hence r= η[X:=r](e_2) or r= η[X:=r](e_3). So, using the induction hypothesis and
Lemma <ref> there is an r'≤ r such that either r'=η[X:=r'](^μ_X=e_2) or
r'=η[X:=r'](^μ_X=e_3). In either case, r'≥η[X:=r'](^μ_X=e_2∧^μ_X=e_3). We find that
η(e_1[X:=^μ_X=e_2∧^μ_X=e_3])≤η[X:=r'](e_1)≤η[X:=r](e_1)≤ 0.
So, we can derive that
[ η[X:=r']((e_1[X:=^μ_X=e_2∧^μ_X=e_3])^μ_X=e_2^μ_X=e_3)=; η[X:=r'](^μ_X=e_2∧^μ_X=e_3)≤ r' ]
as was to be shown.
* Now we investigate the situation where η[X:=r](e_1)>0. It follows that r=η[X:=r](e_3).
Using the induction hypothesis and Lemma <ref> we know that there is some r'≤ r such
that r'=η[X:=r'](^μ_X=e_3). Hence, r' also satisfies r'≥η[X:=r'](^μ_X=e_2∧^μ_X=e_3). So, we can
conclude that r'≥η[X:=r']((e_1[X:=^μ_X=e_2∧^μ_X=e_3])^μ_X=e_2^μ_X=e_3) as we had to show.
For case 2 of Lemma <ref> and the minimal fixed-point, we consider some η and we assume there is an r∈ such that
r=η[X:=r]((e_1[X:=^μ_X=e_2∧^μ_X=e_3])^μ_X=e_2^μ_X=e_3). We must show that there is an r'≤ r
such that r'≥η[X:=r'](e_1e_2e_3). We distinguish two cases.
* First assume η[X:=r](e_1[X:=^μ_X=e_2∧^μ_X=e_3])≤ 0. In that case r=η(^μ_X=e_2)∧η(^μ_X=e_3).
By the induction hypothesis and Lemma <ref> it follows that there is an r_1≤ r such that
r_1=η[X:=r_1](e_2) and there is an r_2≤ r such that r_2=η[X:=r_2](e_3).
Define r'=r_1∧ r_2. Clearly, r'≤ r.
Observe that η[X:=r'](e_1)=η[X:=r_1∧ r_2](e_1)≤η[X:=r](e_1)=η(e_1[X:=^μ_X=e_2∧^μ_X=e_3])≤ 0.
Hence,
η[X:=r'](e_1e_2e_3) is equal to η[X:=r'](e_2∧ e_3).
We find r'=r_1∧ r_2=η[X:=r_1](e_2)∧η[X:=r_2](e_3)≥η[X:=r_1∧ r_2](e_2)∧η[X:=r_1∧ r_2](e_3)=η[X:=r'](e_2∧ e_3).
Hence, r'≥η[X:=r'](e_1e_2e_3)
as had to be shown.
* Now assume η[X:=r](e_1[X:=^μ_X=e_2∧^μ_X=e_3])>0. Hence, r=η(^μ_X=e_3).
So, using the induction hypothesis
and Lemma <ref> there is an r'≤ r such that r'=η[X:=r'](e_3). So, it also follows
that r'≥η[X:=r'](e_2∧ e_3). Hence, r'≥η[X:=r'](e_1e_2e_3) as it is larger
than both possible outcomes of the conditional expression, which finishes this case.
This means the proof for the minimal fixed-point is finished.
Now we consider the case where σ=ν. The proof is very similar to that of the minimal fixed-point,
but as reasoning with fixed-points is tedious we give it in full.
We again apply Lemma <ref>. So, fix some η. For case 1 of Lemma <ref>
consider an r such that r=η[X:=r](e_1e_2e_3). We are ready with this case if we have shown
that there is an r'≥ r and r'≤η[X:=r']((e_1[X:=^ν_X=e_3])^ν_X=e_2∧ e_3^ν_X=e_3).
We distinguish two cases.
* First we consider the case where η[X:=r](e_1)≤ 0. Then r=η[X:=r](e_2∧ e_3). So, r≤η[X:=r](e_3).
Using the induction hypothesis and by applying Lemma <ref>, we know that there
are r_1≥ r such that r_1=η[X:=r_1](^ν_X=e_2∧ e_3), and r_2≥ r such that r_2=η[X:=r_2](^ν_X=e_3).
Choose r'=r_1∧ r_2, i.e., the minimum of the two. Clearly, r'≥ r. Furthermore,
η[X:=r']((e_1[X:=^ν_X=e_3])^ν_X=e_2∧ e_3^ν_X=e_3) is either equal to
η(^ν_X=e_2∧ e_3∧^ν_X=e_3) or to η(^ν_X=e_3).
In the first case we find that r'=r_1∧ r_2=η(^ν_X=e_2∧ e_3)∧η(^ν_X=e_3)=
η[X:=r'](^ν_X=e_2∧ e_3∧^ν_X=e_3), and
in the second case
r'=r_1∧ r_2≤ r_2=η(^ν_X=e_3)=η[X:=r'](^ν_X=e_3). From these two cases it follows that
r'≤η[X:=r']((e_1[X:=^ν_X=e_3])^ν_X=e_2∧ e_3^ν_X=e_3) as had to be shown.
* Second, we consider the case η[X:=r](e_1)>0. Then r=η[X:=r](e_3). Using the induction
hypothesis and Lemma <ref> there is an r'≥ r such that r'=η[X:=r'](^ν_X=e_3).
So, we find that η(e_1[X:=^ν_X=e_3])=η[X:=r'](e_1)≥η[X:=r](e_1)>0. Hence,
η[X:=r']((e_1[X:=^ν_X=e_3])^ν_X=e_2∧ e_3^ν_X=e_3)=η[X:=r'](^ν_X=e_3)=r'
which implies our proof obligation.
Now we concentrate on case 2 of Lemma <ref> for the maximal fixed-point. So, we consider
an r∈ such that r=η[X:=r]((e_1[X:=^ν_X=e_3])^ν_X=e_2∧ e_3^ν_X=e_3),
and we must show that
an r'≥ r exists such that r'≤η[X:=r'](e_1e_2e_3). Again, we distinguish
two cases.
* Assume η(e_1[X:=^ν_X=e_3])≤ 0. It follows that r=η(^ν_X=e_2∧ e_3). Using the induction hypothesis and
Lemma <ref> it follows that there is an r'≥ r such that r'=η[X:=r'](e_2∧ e_3).
So, it follows that r'≤η[X:=r'](e_3). As η[X:=r'](e_1e_2e_3) must be equal
to one of these, we find that r'≤η[X:=r'](e_1e_2e_3) as we had to show.
* Assume η(e_1[X:=^ν_X=e_3])> 0. It follows that r=η(^ν_X=e_3). Using the induction hypothesis and
Lemma <ref> there is an r'≥ r such that r'=η[X:=r'](e_3).
So, η[X:=r'](e_1)≥η[X:=r](e_1)=η(e_1[X:=^ν_X=e_3])>0.
Hence, η[X:=r'](e_1e_2e_3)=η[X:=r'](e_3)=r' and this is sufficient to finish the proof for
the lemma for this case.
*
The proof where e has the shape of the conditional operator e_1e_2e_3 is quite similar, but due to the intricate nature
of fixed-point proofs, we provide it explicitly.
First we consider the case where σ=μ.
We use Lemma <ref>. So, for some η we show that
both cases 1 and 2 of Lemma <ref> hold. For case 1 we can assume that there is an r∈ such
that r=η[X:=r](e_1e_2e_3). It suffices to show that there is an r'≤ r such that
r'≥η[X:=r']((e_1[X:=^μ_X=e_2])^μ_X=e_2^μ_X=e_2∨ e_3). We distinguish two cases.
* First the situation where η[X:=r](e_1)< 0 is considered. In this case r=η[X:=r](e_2).
Using the induction hypothesis and
Lemma <ref> there is an r'≤ r such that r'=η[X:=r'](^μ_X=e_2).
From this it follows that η[X:=r'](e_1[X:=^μ_X=e_2])=η[X:=r'](e_1)≤η[X:=r](e_1)<0.
This allows us to derive
[ η[X:=r']((e_1[X:=^μ_X=e_2])^μ_X=e_2^μ_X=e_2∨ e_3)=
η[X:=r'](^μ_X=e_2)= r', ]
which implies our proof obligation.
* Now we investigate the situation where η[X:=r](e_1)≥ 0. It follows that r=η[X:=r](e_2∨ e_3).
From this it follows that r≥η[X:=r](e_2).
Using the induction hypothesis and Lemma <ref> we know that there are r_1≤ r such that
r_1=η[X:=r_1](^μ_X=e_2), and r_2≤ r such that
r_2=η[X:=r_2](^μ_X=e_2∨ e_3). Define r'=r_1∨ r_2. Clearly, r'≤ r.
We find that r'=r_1∨ r_2≥ r_1=η[X:=r_1](^μ_X=e_2)=η[X:=r'](^μ_X=e_2), using that X does
not occur in ^μ_X=e_2.
Moreover, we find that r'=r_1∨ r_2≥η[X:=r_1](^μ_X=e_2)∨η[X:=r_2](^μ_X=e_2∨ e_3)=
η[X:=r'](^μ_X=e_2)∨η[X:=r'](^μ_X=2_2∨ e_3).
So, it follows that both sides of the conditional satisfy the required proof obligation and therefore, we are ready with this case.
For case 2 of Lemma <ref> and the minimal fixed-point, we consider some η and we assume there is an r∈ such that
r=η[X:=r]((e_1[X:=^μ_X=e_2])^μ_X=e_2^μ_X=e_2∨ e_3).
We must show that there is an r'≤ r
such that r'≥η[X:=r'](e_1e_2e_3). We distinguish two cases.
* First assume η[X:=r](e_1[X:=^μ_X=e_2])< 0. In that case r=η(^μ_X=e_2).
By the induction hypothesis and Lemma <ref> it follows that there is an r'≤ r such that
r'=η[X:=r'](e_2). So, we derive
[ η[X:=r'](e_1)≤η[X:=r](e_1) =
η[X:=η(^μ_X=e_2)](e_1)=; η(e_1[X:=^μ_X=e_2])=
η[X:=r](e_1[X:=^μ_X=e_2])<0. ]
Hence, η[X:=r'](e_1e_2e_3) is equal to η[X:=r'](e_2).
Hence, r'= η[X:=r'](e_1e_2e_3),
which implies what had to be shown.
* Now assume η[X:=r](e_1[X:=^μ_X=e_2])≥ 0. Hence, r=η(^μ_X=e_2∨^μ_X=e_2∨ e_3).
From this, it follows that r≥η(^μ_X=e_2∨ e_3).
So, using the induction hypothesis
and Lemma <ref> there is an r'≤ r such that r'=η[X:=r'](e_2∨ e_3).
So, we can also derive that r'=η[X:=r'](e_2)∨η[X:=r'](e_3)≥η[X:=r'](e_2).
Hence, r' is larger than both sides of the conditional operator, and we can conclude r'≥η[X:=r'](e_1e_2e_3),
finalising the proof in this case.
This finishes the proof for the minimal fixed-point, and we continue with the maximal fixed-point σ=ν.
We again apply Lemma <ref>. So, fix some η. For case 1 of Lemma <ref>
consider an r such that r=η[X:=r](e_1e_2e_3). We are ready with this case if we have shown
that there is an r'≥ r and
r'≤η[X:=r']((e_1[X:=^ν_X=e_2∨^ν_X=e_3])^ν_X=e_2^ν_X=e_3).
We distinguish two cases.
* First we consider the case where η[X:=r](e_1)< 0. Then r=η[X:=r](e_2).
Using the induction hypothesis and by applying Lemma <ref>, we know that there
is an r'≥ r such that r'=η[X:=r'](^ν_X=e_2).
We also see that r' satisfies r'=η[X:=r'](^ν_X=e_2)≤η[X:=r'](^ν_X=e_2∨^ν_X=e_3).
So, we can conclude that r'≤η[X:=r']((e_1[X:=^ν_X=e_2∨^ν_X=e_3])^ν_X=e_2^ν_X=e_3)
as we had to show.
* Second we consider the case η[X:=r](e_1)≥ 0. Then r=η[X:=r](e_2∨ e_3). So, r=η[X:=r](e_2) or
r=η[X:=r](e_3). We assume that the first case holds, as the proof for the second case is perfectly symmetric.
Hence, using the induction
hypothesis and Lemma <ref> there is an r'≥ r such that r'=η[X:=r'](^ν_X=e_2).
We derive
[ η[X:=r'](e_1[X:=^ν_X=e_2∨^ν_X=e_3])≥η[X:=r'](e_1[X:=^ν_X=e_2]) =; η[X:=r'](e_1)≥η[X:=r'](e_1) ≥ 0. ]
So,
[ η[X:=r']((e_1[X:=^ν_X=e_2∨^ν_X=e_3])^ν_X=e_2^ν_X=e_3)=; η[X:=r'](^ν_X=e_2∨^ν_X=e_3) ≥; η[X:=r'](^ν_X=e_2)=r'. ]
as we had to prove.
Now we concentrate on case 2 of Lemma <ref> for the maximal fixed-point. So, we consider
an r∈ such that r=η[X:=r]((e_1[X:=^ν_X=e_2∨^ν_X=e_3])^ν_X=e_2^ν_X=e_3),
and we must show that
an r'≥ r exists such that r'≤η[X:=r'](e_1e_2e_3). Again, we distinguish
two cases.
* Assume η(e_1[X:=^ν_X=e_2∨^ν_X=e_3])< 0.
It follows that r=η(^ν_X=e_2). Using the induction hypothesis and
Lemma <ref> it follows that there is an r'≥ r such that r'=η[X:=r'](e_2).
So, it follows that r'≤η[X:=r'](e_2∨ e_3). As η[X:=r'](e_1e_2e_3) must be equal
to one of these, we find that r'≤η[X:=r'](e_1e_2e_3) as we had to show.
* Assume η(e_1[X:=^ν_X=e_2∨^ν_X=e_3])≥ 0.
It follows that r=η(^ν_X=e_2∨^ν_X=e_3).
Assume η(^ν_X=e_2)≥η(^ν_X=e_3). The reverse assumption follows the same reasoning steps.
Hence, r=η(^ν_X=e_2).
Using the induction hypothesis and
Lemma <ref> there is an r'≥ r such that r'=η[X:=r'](e_2).
So, η[X:=r'](e_1)≥η[X:=r](e_1)=η(e_1[X:=^ν_X=e_2])=η(e_1[X:=^ν_X=e_2∨^ν_X=e_3])≥ 0.
Hence, η[X:=r'](e_1e_2e_3)=η[X:=r'](e_2∨ e_3)≥η[X:=r'](e_2)=r'
and this is sufficient to finish the proof for the lemma for this case.
With this we have proven that this theorem holds for conditional expressions.
*
We now consider the case with a minimal fixed-point where e is a conjunctive normal form.
Using property E6 it is possible to solve all conjuncts separately.
So, without loss of generality, we assume that e has the shape
e=⋁_j∈ J(c_j· X + c'_j·(X) + f_j)∨ m
where c_j≥ 0 and c'_j∈{0,1} are constants such that c_j and c_j' are not both 0,
and f_j and m are expressions in which X does not occur.
We show that the right-hand side of equation (<ref>) without the initial conjunction
provides the required term ^μ_X=e in this theorem. Concretely,
[ ^μ_X=e = ((⋁_j∈ Jf_j))
((m)-∞
(((
⋁_j∈ J| c_j≥ 1f_j+(c_j-1)· U)∨⋁_j∈ J| c'_j=1∞)U∞))
∞ ]
where U=m∨⋁_j∈ J| c_j<11/1-c_j· f_j.
Using Lemma <ref> we must prove case 1 and 2 for a η.
We start with case 1. So, consider the smallest r=η[X:=r](e). We define r'=η(^μ_X=e)
automatically satisfying the first proof obligation of Lemma <ref>, where it should be noted
that X does not occur in ^μ_X=e.
Hence, we only need to show that r'≤ r. We distinguish a number of cases.
* Suppose there is some f_j such that η[X:=r](f_j)=∞. In that case both r=∞ and r'=∞.
So, clearly, r'≤ r. Below we can now assume that there is no j∈ J such that η[X:=r](f_j)=∞.
* Now assume η(m)=-∞.
By the previous case we know that f_j≠∞.
In that case r'=η(^μ_X=e)=-∞, as η((m))=-∞≤ 0, and hence, r'≤ r.
Below we assume that η(m)≠-∞.
* If there is at least one j∈ J such that c_j'=1, then r=η[X:=r](e)=∞. The reason for this
is that r>-∞, as r at least has the value η(m). But then r=∞ as η[X:=r](c'_j·(X))=∞. Clearly, r'≤ r. So, below we can assume that c'_j=0 for all j∈ J.
* With the assumptions above, we can write e more compactly.
e=⋁_j∈ J(c_j· X + f_j)∨ m.
We know that r is the smallest value satisfying
r=η[X:=r](e)=η[X:=r](⋁_j∈ J(c_j· X + f_j)∨ m).
Consider r_1=η(m∨⋁_j∈ J| c_j<1(f_j/1-c_j)).
* First assume that there is no
j∈ J with c_j≥ 1 such that r_1< η[X:=r_1](c_j· X+f_j). We show that r_1 is the solution, i.e.,
r_1=r.
Consider the case where that η(m)≥η(f_j)/1-c_j for all j∈ J such that c_j<1. So, r_1=η(m).
In this case η(m) is a solution as (i) for those j∈ J such that c_j<1 it holds that η(m)≥ c_j·η(m)+η(f_j).
Moreover, by the assumption of this item for those j∈ J such that c_j≥ 1, η(m)<c_j·η(m)+η(f_j) (ii).
It is obvious that η(m) must be the smallest solution.
Now consider the case where
η(m)<η(f_j)/1-c_j for some j∈ J. In this case it holds that r_1=⋁_j∈ J| c_j<1
(η(f_j)/1-c_j)=η(f_j')/1-c_j' for some j'∈ J, where j' is the index of the largest
solution. It is straightforward to check that η(f_j')/1-c_j' is a solution. It is also the smallest
solution, which can be seen as follows. Suppose there were a smaller solution r_2<η(f_j')/1-c_j'.
Hence, r_2=η(m)∧⋀_j∈ J(c_j·r_2+η(f_j))≥ c_j'·r_2+η(f_j').
From this it follows that
r_2≥η(f_j')/1-c_j' contradicting that it is a smaller solution.
It follows that r_1=r is the smallest solution. Furthermore,
r'=η(^μ_X=e)=η(U)=η(m∨⋁_j∈ J| c_j<1f_j/1-c_j)=r_1=r. Obviously, r'≤ r.
* Now assume that there is a j∈ J with c_j≥ 1 such that r_1< η[X:=r_1](c_j· X+f_j).
We show that r=∞. Using the argumentation of the previous item, the smallest solution r is at least
r_1. But clearly, r_1 is larger than the non infinite solution of X= η[X:=r_1](c_j· X+f_j)
as by the assumption r_1>η(f_j)/1-c_j. Note that if c_j> 1 this solution exists, and if c_j=1 there
is only a finite solution if η(f_j)=0, but in this latter case the assumption of this item is invalid.
Hence, the only remaining minimal solution is r=∞. Clearly,
for any choice of r' it holds that r'<r.
Now we concentrate on case 2 for the minimal fixed-point of Lemma <ref>.
We know that r=η(^μ_X=e) is the minimal solution for η(^μ_X=e) and
we must show that there is an r'≤ r such that r'≥η[X:=r'](e). We take r'=r
leaving us with the obligation to show that r≥η[X:=r](e).
We distinguish the following cases.
* Assume that there is some f_j such that η(f_j)=∞. In that case r=∞, which
satisfies ∞≥η[X:=∞](e). From here we assume that η(f_j)<∞ for all j∈ J.
* Now assume that η(m)=-∞. Note that for any j∈ J it is the case that c_j≠0 or c_j'≠0.
In this case, r=-∞ is the solution as
η[X:=-∞](e)=-∞ and this implies our proof obligation. So, in the steps below we assume
that η(m)>-∞.
* With the conditions above, if there is at least one j∈ J such that c'_j=1, then r=∞ is the fixed-point
satisfying our proof obligation. Below we assume that for all j∈ J it holds that c'_j=0.
* As all c'_j can be assumed to be 0, we can simplify the equation for X to:
μ X=⋀_j∈ J(c_j·X+f_j)∨ m.
We find η(U)=η(m∨⋁_j∈ J| c_j<1f_j/1-c_j).
If there is no j∈ J with c_j≥ 1 such that η(f_j)-η((1-c_j)·U)>0 we find that r=η(^μ_X=e)=η(U).
We show that r≥η[X:=r](e). If η(m)≥⋁_j∈ J| c_j<1η(f_j)/1-c_j then r=η(m).
For a j∈ J with c_j<1 we find that c_j·η(m)+η(f_j)≤η(m) as η(m)≥f_j/1-c_j.
For a j∈ J with c_j≥ 1, we find by the condition above that η(f_j+c_j·U)≤η(U),
or in other words η(f_j+c_j·m)≤η(m). So, r=η(m)=η[X:=r](e) as we had to show.
Otherwise, there is some j'∈ J with c_j'<1
such that η(f_j')/1-c_j'=⋁_j∈ J| c_j<1η(f_j)/1-c_j.
In this case r=η(f_j')/1-c_j'. From the conditions, we can see that r=η[X:=r](e) as
we had to show.
* Now assume that there is a j∈ J with c_j≥ 1 such that η(f_j)-η((1-c_j)·U)>0. In this
case r=η(^μ_X=e)=∞, clearly satisfying our proof obligation.
This finishes our proof for a minimal fixed-point equation.
* The last case of this proof regards a maximal fixed-point equation.
The proof is similar to that of the minimal fixed-point
equation.
The maximal fixed-point equation that we consider has the shape
ν X= ⋁_i∈ I(⋀_j∈ J_i(c_ij· X + c'_ij·(X) + f_ij)∧ m_i).
Due to property E7 we can solve the disjuncts separately, and take the disjunction of these solutions as
the solution of this equation. So, we concentrate on a maximal fixed-point equation of the shape
ν X= ⋀_j∈ J(c_j· X + c'_j·(X) + f_j)∧ m
and we show that the solution is
[ ^ν_X=e =(m)
(⋀_j∈ J| c_j≥ 1∧ c'_j=0f_j+(c_j-1)· U)-∞U
∞ ]
where
U=m∧⋀_j∈ J| c_j<1∧ c'_j=01/1-c_j· f_j.
We write e for the right-hand side of Equation (<ref>).
We use Lemma <ref> for the largest fixed-point.
First, we concentrate on case 1. So, for the largest r satisfying
r=η[X:=r](e) we must show that there is an r'≥ r such that r'≤η[X:=r'](^ν_X=e). We take r'=η(^ν_X=e). As X does not occur in ^ν_X=e
we have that η(^ν_X=e)≤η[X:=η(^ν_X=e)](^ν_X=e).
So, our only proof obligation is η(^ν_X=e)≥ r. We split the proof in a number of cases.
* In case η(m)=∞, we find r'=η(^ν_X=e)=∞
and our proof obligation is met.
Below, we assume that η(m)<∞.
* Now assume that for all j∈ J with c_j≥ 1 and c_j'=0, it holds that η(f_j+(c_j-1)·U)≥ 0.
We show below that no r”>η(U) can be a solution for
(<ref>). In this case η(^ν_X=e)=η(U) from which it follows that η(^ν_X=e)≥ r.
Assume η(m)≤η(f_j)/1-c_j for all j∈ J such that c_j<1 and c_j'=0.
If r”>η(m), then clearly r” is not a solution, as the solution is at most η(m).
If the above does not hold, there is at least one j∈ J with c_j<1 and c_j'=0
such that η(m)>η(f_j)/1-c_j.
Assume r” is larger than the smallest conjunct η(f_j)/1-c_j for any j∈ J with
c_j<1 and c'_j=0. If r” were a solution of (<ref>), then it satisfies
r”≤ c_j· r”+η(f_j). This is equivalent to r”≤η(f_j)/1-c_j contradicting
the assumption.
* Now assume that there is some j∈ J with c_j≥ 1 and c_j'=0 for which
it holds that η(f_j+(c_j-1)·U)< 0.
In this case η(^ν_X=e)=-∞. In the previous item it was shown that any solution r” for
(<ref>) it holds that r”≤η(U). Moreover, r” has to satisfy that
r”≤ c_j· r”+η(f_j). If r”∈ and c_j≠1, this is the same as saying that
r”≥η(f_j)/1-c_j, and combining it with the assumption of this item, it follows
that r”>η(U), leading to a contradiction. If r”∈ and c_j=1 we derive that both η(f_j)≥ 0
and η(f_j)<0, also leading to a contradiction.
Hence, in both cases r”∉, meaning that r”=-∞.
In this case this is exactly the value of η(^ν_X=e), finishing this item of the proof.
In the last part of the proof we focus our attention on case 2 of Lemma <ref> for maximal
fixed-points.
So, we consider r=η(^ν_X=e) and we have to find an r'∈ that satisfies
r'≥ r and r'≤η[X:=r'](e). We take r'=r, and this means that we only have to show that η(^ν_X=e)
satisfies η(^ν_X=e)≤η[X:=η(^ν_X=e)](e). We again walk through a number of cases.
* First assume that η(m)=∞. Then η(^ν_X=e)=∞ and it clearly satisfies (<ref>).
If η(m)=-∞, then the right-hand side of (<ref>) equals -∞. In this case
η(U)=-∞, and therefore, η(^ν_X=e)=-∞, which satisfies our proof obligation.
So, below we can safely assume that η(m)≠±∞.
* Now assume there is a j∈ J with c_j'=1. As η(m)>-∞, clearly, (η(m))=∞, and
this disjunct equals ∞, being larger than η(^ν_X=e), satisfying our proof obligation.
So, we can safely assume that c_j'=0 for all j∈ J.
* Assume that for all j∈ J such that c_j≥ 1 and c'_j=0, it holds that η(f_j+(c_j-1)·U)≥ 0.
We find that η(^ν_X=e)=η(U). Assume that η(U)=η(m), which means that η(m)≤η(f_j)/1-c_j for
all j∈ J with c_j<1 and c'_j=0. We see that η(m) is a solution for (<ref>)
by showing that c_j·η(m)+η(f_j)≥η(m) for all j∈ J.
First consider such a j∈ J such that c_j<1. The identity above follows directly from η(m)≤η(f_j)/1-c_j. Second consider such a j∈ J such that c_j≥ 1. The required identity follows from the assumption
that η(f_j+(c_j-1)·U)≥ 0.
Now assume that η(U)=η(f_j)/1-c_j for some j∈ J with c_j<1 as this is the smallest conjunct
of η(U). We see that η(U) satisfies (<ref>). For those j'∈ J with c_j'<1 we
find that
η(c_j'·U+f_j')≥η(U) as it is equivalent to stating that η(U)≤η(f_j')/1-c_j'.
For the same reason, we see that η(c_j·U+f_j)= η(U)<η(m).
Now consider those j'∈ J with c_j'>1. By the condition at the beginning of this item it follows that
η(f_j+(c_j)·U)≥η(U). Hence, the right-hand side of (<ref>) reduces to
η(U) as we had to show.
* Assume that for some j∈ J such that c_j≥ 1 and c'_j=0, it holds that η(f_j+(c_j-1)·U)< 0.
Hence, η(^ν_X=e)=-∞, rendering our proof obligation trivial.
This finishes all cases we had to go through in the proof, proving the theorem.
§ VALIDITY OF E6 AND E7
We prove that the implication E6 is valid. The validity of E7 follows by duality.
We show, given μ X=e_1 ≡ μ X=f_1 and μ X=e_2 ≡ μ X=f_2, that
μ X=e_1∧ e_2 ≡ μ X=f_1∧ f_2
holds using Lemma <ref>. As cases 1. and 2. are symmetric, we only prove case 1.
So, we must show that
for the smallest r ∈ such that r=η[X:=r](e_1∧ e_2), it holds that there in an r'
satisfying that r'≤ r and r'≥η[X:=r'](f_1∧ f_2).
We know that r=η[X:=r](e_i) for i=1 or i=2 as is totally ordered.
As μ X=e_i ≡ μ X=f_i, we know by Lemma <ref> that there is an r' such that r'≤ r and
r'=η[X:=r'](f_i). Clearly, r' also satisfies that r'=η[X:=r'](f_i)≥η[X:=r'](f_1∧ f_2).
This finishes the proof.
|
http://arxiv.org/abs/2307.04565v1 | 20230710135830 | A Simple, Exact Formulation of Number Counts in the Geodesic-Light-Cone Gauge | [
"G. Fanizza",
"M. Gasperini",
"G. Marozzi"
] | hep-th | [
"hep-th",
"astro-ph.CO",
"gr-qc"
] |
A Simple, Exact Formulation of Number Counts in the Geodesic-Light-Cone Gauge

Giuseppe Fanizza ^1,†, Maurizio Gasperini ^2,3,*,† and Giovanni Marozzi ^4,5,†
^1 Instituto de Astrofisíca e Ciências do Espaço,
Faculdade de Ciências da Universidade de Lisboa,
Edificio C8, Campo Grande, P-1740-016 Lisbon, Portugal; [email protected]
^2 Dipartimento di Fisica, Università di Bari,
Via G. Amendola 173, 70126 Bari, Italy
^3 Istituto Nazionale di Fisica Nucleare, Sezione di Bari, 70126 Bari, Italy
^4 Dipartimento di Fisica, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy; [email protected]
^5 Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, 56127 Pisa, Italy
Correspondence: [email protected]
These authors contributed equally to this work.
In this article, we compare different formulations of the number count prescription using the convenient formalism of the Geodesic-Light-Cone gauge. We then find a simple, exact, and very general expression of such a prescription which is suitable for generalised applications.
Keywords: observational cosmology; covariant averages of astrophysical variables; light cone gauge
Published in the Special Issue on “Universe: Feature Papers 2023 – Cosmology", edited by K. Bamba ( Universe 9, 327 (2023)).
Preprint: BA-TH/808-23
Availabe at: https://www.mdpi.com/2218-1997/9/7/327
Galaxy number counts represent a very useful tool for understanding the Large-Scale Structure (LSS) of our universe and testing the underlying cosmological models. Indeed, the information provided by such a tool on the distribution and overall properties of galaxies can be interpreted as a probe for the associated distribution of dark matter, and thereby as a test of the early cosmological dynamics based on late time observations (see <cit.> and references therein).
One of the first studies involving galaxy number count was presented in a paper discussing the angular auto-correlation of the LSS <cit.> and its cross-correlation with the anisotropies of the Cosmic Microwave Background (CMB) temperature. Later, this was generalized to include linear relativistic corrections <cit.>.
More recently, further developments have been achieved. Among the most relevant, we should mention the computation of the theoretically expected galaxy power spectrum <cit.> based on the expression of galaxy number counts containing all relativistic corrections (i.e., at the source and observer positions and along the light cone). This advance complements the evaluation of the galaxy two-point correlation function presented in <cit.>. In addition, <cit.> computed the “observationally expected” galaxy power spectrum by taking into account effects such as lensing magnification which are important in principle for the evaluation of the power spectrum multipoles, with the results showing that this approach can correct the leading distortion terms due to the redshift at most at the ten percent level.
On the other hand, in a previous paper <cit.> we presented a new class of covariant prescriptions for averaging astrophysical variables on spatial regions typical of the chosen sources and intersecting the past light-cone of a given observer. In this short communication, we show that with the appropriate choice of ingredients, our integral prescription can exactly reproduce the number count integral introduced long ago as a useful tool in the general context of observational cosmology (see, e.g., <cit.>).
In addition, it can reproduce the average integrals recently used in <cit.> as essential ingredients to obtain a reliable (i.e., observationally compatible) prescription for galaxy number counts and for their above-mentioned applications.
Let us start by considering the (possibly realistic) experimental situation in which the sources are not exactly confined on a given space such as a hypersurface (defined for instance by a scalar field B(x) such that B(x)=B_s=const), and are instead localized inside an (arbitrarily) extended space-time layer corresponding to the interval B_s≤ B≤ B_s+Δ B_s. In this case, the average is characterized by the following covariant integral prescription <cit.>:
I(Δ B_s)= ∫_ℳ_4 d^4x √(-g) ρ δ(V_o-V) Θ(B_s+Δ B_s-B) Θ(B-B_s) (∂^μ A ∂_μ V)/|∂_α A ∂^α A |^1/2 .
Here,
ρ(x) is a scalar field that specifies an appropriate physical weight factor associated with the averaged sources, V(x) is a scalar field (with light-like gradient) which identifies the past light-cone centered on the observer and spanned by the null momentum k_μ=∂_μ V of the light signals emitted by the sources, and A(x) is a scalar field (with a time-like gradient) associated with the following unit vector:
n_μ=∂_μ A/|∂_α A ∂^α A|^1/2,
which possibly represents a convenient four-velocity reference, but in general depends on the particular observations we are interested in.
The particular choice of ρ, A, and B obviously depends on the physical situation and on the type of observation being performed.
Suppose now that we are interested in sources localized between the constant redshift surfaces z=z_s and z=z_s+Δ z_s. In this case, we can choose B=1+z, where z is the standard redshift parameter defined in general by
1+z=( u_μ k^μ)_s/( v_μ k^μ)_o ,
where u_μ and v_μ are the velocities of the source and the observer, respectively, not necessarily co-moving in the given geometry, and the subscripts “s” and “o” respectively denote the source and observer positions (see, e.g., <cit.>). In this case, as we show below, we find that our integral (<ref>) can exactly reproduce the standard number-count prescription of <cit.>, provided the fields A and ρ are appropriately chosen, as is shown explicitly just after Equation (<ref>).
It is convenient for this purpose to work in the so-called Geodesic Light-Cone (GLC) gauge based on the coordinates x^μ=( τ,w,θ^a ) and a=1,2, where the most general cosmological metric is parameterized by six arbitrary functions Υ, U_a, γ_ab=γ_ba, and the line-element takes the following form <cit.>:
ds^2_GLC=-2Υ dwdτ+Υ^2dw^2+γ_ab(dθ^a-U^a dw)(dθ^b-U^b dw).
Let us recall here for the reader's convenience that w is a null coordinate (∂_μ w ∂^μ w=0), that in this gauge the light signals travel along geodesics with constant w and θ^a, and that the time coordinate τ coincides with the time of the synchronous gauge <cit.>. In fact, we can easily check that ∂_μτ defines a geodesic flow, i.e., that (∂^ντ) ∇_ν (∂_μτ)=0, which is in agreement with the condition g^ττ=-1 following from the metric (<ref>).
Working in the GLC gauge, we can identify V=w; thus, k_μ=∂_μ w and k^μ=-Υ^-1δ^μ_τ. Moreover, we can conveniently impose the general
temporal gauge defined by the condition (this choice generalises the definition of the temporal gauge, already introduced in <cit.>, such that it can be applied to the case of an arbitrary observer velocity v_μ)
( v_μ k^μ)_o=-1 ,
where the past light cones w=w_o=const are simply labeled by the reception time τ_o of the corresponding light signals <cit.>, i.e., w_o=τ_o. Hence, in this gauge we have
1+z=( u_τ Υ^-1)_s
and we can replace the τ integration of Equation (<ref>) with the z integration defined by
dτ=dz( dτ/dz)=dz/∂_τ( u_τ Υ^-1) .
Note that Equation (<ref>) has to be further integrated, as prescribed by Equation (<ref>), on the spatial hypersurface containing the averaged source.
Thus, in order to avoid any confusion between the integrated quantities and the boundaries of the integrals, we omit the explicit subscript “s” in the differential integration measure. Finally, for the metric (<ref>) we have √(-g)=√(γ), where γ=γ_ab. Thus, our integral prescription (<ref>) takes the form
I(Δ z_s) = -∫ dτ dw d^2θ √(γ) ρ δ( w_o-w ) Θ(z_s+Δ z_s-z) Θ(z- z_s) n_τ
= -∫ dz d^2θ √(γ) ρ Θ(z_s+Δ z_s-z) Θ(z- z_s) n_τ/∂_τ( u_τ Υ^-1)
= -∫_z_s^z_s+Δ z_s dz d^2θ √(γ) ρ n_τ/∂_τ( u_τ Υ^-1) ,
where we have used the explicit form of n_μ of Equation (<ref>). It should be stressed here that u_τ is the time component of the source velocity, while for the moment n_τ and ρ are both arbitrary variables to be adapted to the physical situation under consideration. (In our previous paper <cit.>, we applied the above integral in the limit of a small redshift bin Δ z_s → 0.) Finally, all the integrated functions have to be evaluated on the past light cone w=w_o.
We now consider the number count integral, which evaluates the number of sources dN located inside an infinitesimal layer of thickness dλ at a distance d_A(z) and seen by a given observer within a bundle of null geodesics subtending the solid angle dΩ <cit.>:
dN=n dV≡ n dΩ dλ d^2_A ( -u_μ k^μ) .
Here, n is the number density of the sources per unit proper volume, d_A is the angular distance, and u_μ (as before) is the velocity field of the given sources. Finally, λ is a scalar affine parameter along the path x^μ( λ) of the light signals such that k^μ=d x^μ/dλ, and is normalized along the observer world-line by the condition
( v_μ k^μ)_o=-1 ,
where v_μ is the observer velocity and the scalar product is evaluated at the observer position <cit.>.
We now move to the GLC coordinates, where k^μ=g^μν∂_ν w=-Υ^-1δ^μ_τ. The condition k^μ=dx^μ/dλ then provides the following:
dτ/dλ=-Υ^-1, dw/dλ=0,
dθ^a/dλ=0 ,
and the normalization condition (<ref>) exactly coincides with the general temporal gauge (<ref>). In these coordinates, as anticipated in <cit.>, the angular distance d_A satisfies
d^2_A dΩ=√(γ̃) d^2θ̃ ;
see Appendix <ref> for an explicit derivation of the above equation.
Here, θ̃^a represents the angular coordinates of the so-called “observational gauge”<cit.>, defined by exploiting the residual gauge freedom of the GLC gauge in such a way that the angular directions exactly coincide with those expressed in a system of Fermi Normal Coordinates (FNC). (The angular directions related to local observations, also used in <cit.>, are indeed those measured by a free-falling observer, and can be identified with the angles of the FNC system <cit.>, where the metric is locally flat around all points of a given world line, with leading curvature corrections, which are quadratic, in the distance.) Using
dλ=dz( dz/dτ)^-1( dτ/dλ)^-1
and applying Equations (<ref>) and (<ref>), we find that the number of sources of Equation (<ref>) integrated between z_s and z_s+Δ z_s (as was the case before; see Equation (<ref>)) finally reduces to
N=-∫_z_s^z_s+Δ z_s dz d^2θ̃ √(γ̃) n u_τ/∂_τ( u_τ Υ^-1) .
We are now in the position of comparing this result with our previous average integral (<ref>). It is clear that the two are exactly the same, provided the following three conditions are satisfied: (i) our scalar density ρ coincides with the number density n; (ii) the scalar parameter A is chosen in such a way that the unit vector n_μ coincides with the source velocity u_μ (and, obviously, n_τ= u_τ); and (iii) the angular directions are expressed in terms of the angles fixed by the observational gauge, i.e., θ^a → θ̃^a.
It may be appropriate to recall at this point that the number count prescription has been recently used in (apparently) different forms by other authors. For instance, in the context of defining physically appropriate averaging prescriptions the number count has been presented in the following form <cit.>:
dN=n dV=n d^2_A [(1+z) H_||]^-1 dz dΩ ,
where H_|| is a local longitudinal expansion parameter defined by
H_||≡( 1+z )^-2 k^μ k^ν∇_μ u_ν .
Moving to the GLC coordinates and using the temporal gauge (<ref>), we can easily obtain
(1+z) H_||=-u^-1_τ ∂_τ( u_τ Υ^-1) .
By inserting this result in the definition in (<ref>) and using Equation (<ref>) in Equation (<ref>), we can immediately determine that the
number count expressions in (<ref>) and (<ref>) are exactly the same.
As a second example, recall the form of the number-count integral presented in <cit.> based on the following volume element:
dV=√(-g) ϵ_μναβ u^μ dx^ν dx^α dx^β ,
where, as before, u^μ is the source velocity. Moving to the GLC gauge and projecting this volume element on the light cone w=const, dw=0, we obtain
dV= Υ√(γ) u^w dτ d^2θ .
Recalling that u^w=g^wνu_ν=-u_τ Υ^-1 and again using the expression dτ/dz provided in Equation (<ref>), it is immediately clear that Equation (<ref>) reduces to the same expression of the number count integrand in (<ref>), provided we add the source density n and, as before, the GLC angles are identified with those of the observational gauge <cit.>, d^2θ → d^2θ̃.
The discussion presented in this paper provides a further example of the crucial role played by the GLC coordinates in the simplification and comparison of formal non-perturbative expressions of physical observables. In addition, the simple expression obtained here for the volume element dV in terms of observable variables such as the redshift and observation angles is promising for a number of different physical applications, which will be discussed in forthcoming papers.
We would finally remark that in this brief article we have expressed the galaxy number counts in terms of the redshift. A recent interesting paper <cit.> has proposed studying galaxy number counts as a function of the luminosity distance of the given sources, rather than their redshift, and showed that there are already differences between the two computational methods at the first perturbative order. Thus, in the near future we plan to evaluate the exact expression of the galaxy number counts in terms of the luminosity distance by applying the covariant averaging formalism and using the GLC coordinate approach presented in this paper.
Conceptualization, G. Fanizza, M. Gasperini and G. Marozzi; methodology, G. Fanizza, M. Gasperini and G. Marozzi; formal analysis, G. Fanizza, M. Gasperini and G. Marozzi; original draft preparation, G. Fanizza, M. Gasperini and G. Marozzi; review and editing, G. Fanizza, M. Gasperini and G. Marozzi.
All authors have read and agreed to the published version of the manuscript. This research received no external funding. G. Fanizza acknowledges support by Fundação para a Ciência e a Tecnologia (FCT) under the program “Stimulus"
with the grant no. CEECIND/04399/2017/CP1387/CT0026, and through the research project with ref. number PTDC/FIS-AST/0054/2021.
M. Gasperini and G. Marozzi are supported in part by INFN under the program TAsP (“Theoretical Astroparticle Physics"). M. Gasperini is also supported by the research grant number 2017W4HA7S “NAT-NET: Neutrino and Astroparticle Theory Network", under the program PRIN 2017 funded by the Italian Ministero dell'Università e della Ricerca (MUR). G. Fanizza and M. Gasperini wish to thank the kind hospitality and support of the TH Department of CERN, where part of this work has been carried out. Finally, we are very grateful to Gabriele Veneziano for his fundamental contribution and collaboration during the early stages of this work.The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
GLC: Geodesic Light-Cone
FNC: Fermi Normal Coordinates
§ APPENDIX
In order to prove Equation (<ref>), we can combine Equations (3.15) and (3.17) from <cit.> to express the angular distance in generic GLC coordinates as follows:
d_A^2 = 4 v_τ^2 √(γ_s)[ det (∂_τγ_ab)/√(γ)]_o^-1
where, as before, the subscripts “s” and “o” respectively denote the source and observer positions.
We can now move to the observational gauge <cit.>,
where we choose a system at rest with the local observer (v_τ^2=1) and use Equation (3.16) from <cit.> to obtain, after a simple calculation of γ_ab,
d_A^2 = √(γ_s)/sinθ ,
from which we have
d_A^2 d Ω = √(γ_s) d^2 θ,
where the tilde denotes the variables of the observational gauges.
Finally, note that by using Equations (3.13)–(3.15) from
<cit.> it can easily be shown that the left-hand side of the above equation does not depend on the particular choice of v_τ.
References
[Jeong:2011as] Jeong, D.; Schmidt, F.; Hirata, C.M. Large-scale clustering of galaxies in general relativity. Phys. Rev. D 2012, 85, 023504.
[Schmidt:2012ne] Schmidt, F.; Jeong, D. Cosmic Rulers. Phys. Rev. D 2012, 86, 083527.
[Kehagias:2013yd] Kehagias, A.; Riotto, A. Symmetries and Consistency Relations in the Large Scale Structure of the Universe. Nucl. Phys. B 2013, 873, 514–529.
[Bertacca:2014dra] Bertacca, D.; Maartens, R.; Clarkson, C. Observed galaxy number counts on the lightcone up to second order: I. Main result. J. Cosmol. Astropart. Phys. 2014, 9, 037.
[Kehagias:2015tda] Kehagias, A.; Dizgah, A.M.; Na, J.N.; Perrier, H.; Riotto, A. A Consistency Relation for the Observed Galaxy Bispectrum and the Local non-Gaussianity from Relativistic Corrections. J. Cosmol. Astropart. Phys. 2015, 8, 18.
[Ginat:2021nww] Ginat, Y.B.; Desjacques, V.; Jeong, D.; Schmidt, F. Covariant decomposition of the non-linear galaxy number counts and their monopole. J. Cosmol. Astropart. Phys. 2021, 12, 31.
[Yoo:2009au] Yoo, J.; Fitzpatrick, A.L.; Zaldarriaga, M. Three-point correlation of the Lyman-alpha forest: An optimal redshift space distortion estimator. Phys. Rev. D 2009, 80, 083514.
[Yoo:2010ni] Yoo, J. A new relativistic N-body code for the clustering of cosmic neutrinos. Phys. Rev. D 2010, 82, 083508.
[Challinor:2011bk] Challinor, A.; Lewis, A. The linear power spectrum of observed source number counts. Phys. Rev. D 2011, 84, 043516.
[Bonvin:2011bg] Bonvin, C.; Durrer, R. What galaxy surveys really measure. Phys. Rev. D 2011, 84, 063505.
[Grimm:2020bmv] Grimm, N.; Scaccabarozzi, F.; Yoo, J.; Biern, S.G.; Gong, J.-O. Precision cosmology with overlapping surveys: The importance of volume and cross-correlations. J. Cosmol. Astropart. Phys. 2020, 11, 64.
[Scaccabarozzi:2018iyz] Scaccabarozzi, F.; Yoo, J.; Biern, S.G. Cross-correlation of future weak lensing surveys and Planck lensing data. J. Cosmol. Astropart. Phys. 2018, 10, 24.
[Castorina:2021ihl] Castorina, E.; Dio, E.D. Observing the cosmic acceleration with the Kilo-Degree Survey. J. Cosmol. Astropart. Phys. 2022, 1, 61.
[1] Fanizza, G.; Gasperini, M.; Marozzi, G.; Veneziano, G. Generalized covariant prescriptions for averaging cosmological observables. J. Cosmol. Astropart. Phys. 2020, 2, 017.
[2] Ellis, G. Relativistic cosmology. Gen. Rel. Grav. 2009, 41, 581–660.
[3] Ellis, G.; Nell, S.D.; Maartens, R.; Stoeger, W.R.; Whitman, A.P. Ideal observational cosmology. Phys. Rep. 1985, 124, 315–417.
[4] Fleury, P.; Clarkson, C.; Maartens, R. How does the cosmic large-scale structure bias the Hubble diagram? J. Cosmol. Astropart. Phys. 2017, 1703, 062.
[5] Dio, E.D.; Durrer, R.; Marozzi, G.; Montanari, F. Galaxy number counts to second order and their bispectrum. J. Cosmol. Astropart. Phys. 2014, 1412, 17; Erratum in J. Cosmol. Astropart. Phys. 2015, 1506, E01.
[6] Gasperini, M.; Marozzi, G.; Nugier, F.; Veneziano, G. Light-cone averaging in cosmology: Formalism and applications. J. Cosmol. Astropart. Phys. 2011, 1107, 008.
[7] Ben-Dayan, I.; Gasperini, M.; Marozzi, G.; Nugier, F.; Veneziano, G. Backreaction on the luminosity-redshift relation from gauge invariant light-cone averaging. J. Cosmol. Astropart. Phys. 2012, 1204, 36.
[8] Fleury, P.; Nugier, F.; Fanizza, G. Geodesic-light-cone coordinates and the Bianchi I spacetime. J. Cosmol. Astropart. Phys. 2016, 6, 008.
[9] Ben-Dayan, I.; Gasperini, M.; Marozzi, G.; Nugier, F.; Veneziano, G. Average and dispersion of the luminosity-redshift relation in the concordance model. J. Cosmol. Astropart. Phys. 2013, 1306, 2.
[10] Fanizza, G.; Gasperini, M.; Marozzi, G.; Veneziano, G. An exact Jacobi map in the geodesic light-cone gauge. J. Cosmol. Astropart. Phys. 2013, 11, 019.
[11] Fanizza, G.; Gasperini, M.; Marozzi, G.; Veneziano, G. Observation angles, Fermi coordinates, and the Geodesic-Light-Cone gauge. J. Cosmol. Astropart. Phys. 2019, 1, 4.
[12] Mitsou, E.; Scaccabarozzi, F.; Fanizza, G. Observed Angles and Geodesic Light-Cone Coordinates. Class. Quantum Grav. 2018, 35, 107002.
[Fonseca:2023uay] Fonseca, J.; Zazzera, S.; Baker, T.; Clarkson, C. The observed number counts in luminosity distance space. arXiv 2023, arXiv:2304.14253.
|
http://arxiv.org/abs/2307.04274v1 | 20230709223246 | Assessing the efficacy of large language models in generating accurate teacher responses | [
"Yann Hicke",
"Abhishek Masand",
"Wentao Guo",
"Tushaar Gangavarapu"
] | cs.CL | [
"cs.CL",
"cs.LG"
] |
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant
[
August 12, 2023
========================================================================
<cit.> organized the shared task hosted by the 18th Workshop on Innovative Use of NLP for Building Educational Applications on generation of teacher language in educational dialogues. Following the structure of the shared task, in this study, we attempt to assess the generative abilities of large language models in providing informative and helpful insights to students, thereby simulating the role of a knowledgeable teacher. To this end, we present an extensive evaluation of several benchmarking generative models, including GPT-4 (few-shot, in-context learning), fine-tuned GPT-2, and fine-tuned DialoGPT. Additionally, to optimize for pedagogical quality, we fine-tuned the Flan-T5 model using reinforcement learning. Our experimental findings on the Teacher-Student Chatroom Corpus subset indicate the efficacy of GPT-4 over other fine-tuned models, measured using BERTScore and DialogRPT.
We hypothesize that several dataset characteristics, including sampling, representativeness, and dialog completeness, pose significant challenges to fine-tuning, thus contributing to the poor generalizability of the fine-tuned models. Finally, we note the need for these generative models to be evaluated with a metric that relies not only on dialog coherence and matched language modeling distribution but also on the model's ability to showcase pedagogical skills.
§ INTRODUCTION
The advent of powerful open-source generative language models such as GPT-2 <cit.>, T5 <cit.>, OPT <cit.>, BLOOM <cit.>, Flan-T5 <cit.> or LLAMA <cit.> has led to significant developments in conversational agents, opening avenues for various applications in education <cit.>. Such AI-driven educational dialogues offer the potential for skill improvement and personalized learning experiences, with intelligent tutoring systems increasingly gaining traction <cit.>. However, deploying AI-based teachers in the educational ecosystem demands the careful modeling and evaluation of these agents to ensure their capability to address critical pedagogical concerns.
<cit.> created the AI teacher test challenge which follows the recommendations from <cit.> (pp. 67-72) stating that, if we want to put generative models into practice as AI teachers, it is imperative to determine whether they can (a) speak to students like a teacher, (b) understand students, and (c) help students improve their understanding.
Taking inspiration from the AI teacher test challenge, which asks whether state-of-the-art generative models are good AI teachers capable of replying to a student in an educational dialogue, this paper seeks to investigate the applicability of reinforcement learning (RL) techniques in the generation of AI teacher responses within educational dialogues. The AI teacher test challenge emphasizes the need for a systematic evaluation of generative models to ensure that they can effectively communicate with students, comprehend their needs, and facilitate their academic improvement. Can we guide the language generator with RL to help it focus on these pedagogical requirements?
<cit.> organized the shared task hosted by the 18th Workshop on Innovative Use of NLP for Building Educational Applications on generation of teacher language in educational dialogues. Following the structure of the shared task, in this study, we aim to evaluate the potential of combining state-of-the-art generative language models with reinforcement learning algorithms to generate AI teacher responses in the context of real-world educational dialogues sourced from the Teacher Student Chatroom Corpus <cit.>. The natural baselines for the task at hand are SOTA closed-source models such as GPT-4 and fine-tuned open-source pre-trained models such as GPT-2 <cit.>. We evaluate these natural baselines before evaluating pre-trained models fine-tuned with RL techniques that optimize for pedagogical quality.
By exploring the role of reinforcement learning in guiding the generation of AI teacher responses, we aim to advance the discourse on the utilization of conversational agents in educational settings and contribute innovative ideas to the ongoing shared task on the generation of teacher language in educational dialogues at the 18th Workshop on Innovative Use of NLP for Building Educational Applications.
The rest of this paper is structured as follows. Section 2 offers a comprehensive review of relevant literature in the areas of AI-driven educational dialogues and reinforcement learning-based language generation. Section 3 discusses the analysis and processing of the dataset prior to conducting any language modeling tasks. In Section 4, the proposed model and its methodology for generating AI teacher responses in educational interactions are introduced. Section 5 evaluates the effects of our approach on the quality and relevance of the generated AI teacher responses and highlights key observations. Finally, Section 6 concludes the paper and explores potential directions for future research.
§ RELATED WORK
A variety of related literature exists in the realm of conversational teaching between a student and a teacher. In this section, we review several notable works addressing aspects of teacher-student dialogues, foundation models, and conversational datasets, which have contributed to the progress and understanding of generative models in educational contexts.
Teacher-Student Dialogues
One prominent resource in educational dialogues is the National Council of Teachers of English (NCTE) dataset <cit.>. It includes numerous examples of teacher-student interactions, which can serve as a valuable resource for the training and evaluation of generative models in an educational context.
The SimTeacher dataset <cit.> is an assemblage of information obtained through a "mixed-reality" simulation platform. This unique environment aids beginner educators in honing essential skills for classroom settings by employing student avatars managed by human actors. All aspiring teachers from a prominent public university participate in several brief simulation sessions throughout their educational preparation program, focusing on improving their ability to encourage more profound textual understanding among students. The original researchers annotated a variable called "quality of feedback" within the transcript, determining how effectively teachers proactively assist students.
In <cit.>, we can find a dataset collected from an education technology company that provides on-demand text-based tutoring for math and science. With a mobile application, a student can take a picture of a problem or write it down and is then connected to a professional tutor who guides the student to solve the problem. The dataset represents, after some selection, 108 tutors and 1821 students. Each session is associated with two outcome measures: (1) student satisfaction scores (1-5 scale) and (2) a rating by the tutor manager based on an evaluation rubric (0-1 scale).
Foundation Models
<cit.> provided a comprehensive analysis of the opportunities and risks of foundation models, including insights into their use in educational applications. They identified potential benefits, such as personalized learning and accessibility, while also highlighting the major risks, such as unfair biases and the generation of harmful content. This work establishes the need for carefully crafted benchmarks and evaluations to assess the potential of generative models in education.
The AI Teacher Test <cit.> builds on this idea by examining the performance of generative models such as GPT3 <cit.> and Blender <cit.> in generating appropriate and informative responses in a teacher-student dialogue.
Kasneci et al. <cit.> conducted an investigation to understand the effectiveness of ChatGPT <cit.> as a tool for educational support. They analyzed the model's performance in a student-tutoring context, examining its ability to provide accurate, relevant, and engaging responses for learners. By identifying the strengths and weaknesses of ChatGPT in this specific setting, they contributed to a better understanding of how generative models can be successfully deployed in educational applications.
Our work builds on these foundations by evaluating the potential of combining reinforcement learning with generative models to enhance the performance of AI teacher agents in educational dialogues.
Conversational Uptake
<cit.> introduced the concept of uptake as a way to comprehend the effectiveness of conversational responses in a teacher-student dialogue. It laid the groundwork for the evaluation of generative models in dialogues by taking into account the relevance and appropriateness of model-generated responses.
Demszky et al. <cit.> further explored the concept of Conversational Uptake by proposing metrics to assess the success of responses in maintaining and advancing a conversation. By applying these metrics to AI-generated responses, their work contributes to the evaluation of models in realistic conversation settings, including teacher-student dialogues. Our work attempts to guide the language generation process with similar goals in mind. We hope to find proxies of pedagogical quality through NLP metrics such as BERTScore combined with DialogRPT.
We continue by reviewing the literature utilizing reinforcement learning as a guide for language generation.
Reinforcement Learning for language generation
Policy gradient-based algorithms and their variants have been widely used in text generation to optimize sequence-level metrics <cit.>. Off-policy Reinforcement Learning (RL) is also commonly used in dialogue applications where online interaction with users is expensive <cit.>. The main difference in our work is that we take advantage of demonstrations and design generic reward functions for generation tasks.
We extend this concept to educational contexts by employing reinforcement learning to guide the generation of AI teacher responses in educational dialogues. We focus on optimizing the responses of fine-tuned generative models based on a reward system designed to enhance the pedagogical quality of the generated responses. Recently, Ramamurthy et al. <cit.> explored the efficacy of using RL to optimize language models in several natural language processing tasks, including text classification, sentiment analysis, and language generation. They developed a library, RL4LMs, which provides a generic framework for deploying RL-based language models for various tasks. We build on top of the RL4LMs framework by adding a new task to its existing array of tasks which we hope can be added as a standard for any future RLHF benchmark.
§ DATA
The shared task for BEA 2023 is based on the Teacher-Student Chatroom Corpus (TSCC) <cit.>. This corpus comprises data collected from 102 chatrooms where English as a Second Language (ESL) teachers interact with students to work on language exercises and assess the students' English language proficiency.
§.§ Data Extraction and Format
From each dialogue in the TSCC, several shorter passages were extracted. Each passage is at most 100 tokens long, consisting of several sequential teacher-student turns (i.e., the preceding dialogue context) and ending with a teacher utterance (i.e., the reference response). These short passages are the data samples used in this shared task.
The data samples are formatted using a JSON structure inspired by the ConvoKit <cit.>. Each training sample is represented as a JSON object with three fields:
* id: a unique identifier for the sample.
* utterances: a list of utterances corresponding to the preceding dialogue context. Each utterance is a JSON object with a "text" field containing the utterance and a "speaker" field containing a unique label for the speaker.
* response: a reference response, which corresponds to the final teacher's utterance. This utterance is a JSON object with a "text" field containing the utterance and a "speaker" field containing a unique label for the speaker.
Each test sample is represented as a JSON object that uses the same format as the training sample but excludes the reference response. As a result, each test sample has two fields:
* id: a unique identifier for the sample.
* utterances: a list of utterances, which corresponds to the preceding dialogue context. Each utterance is a JSON object with a "text" field containing the utterance and a "speaker" field containing a unique label for the speaker.
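For concreteness, the following minimal sketch shows how one such sample might be read and its dialogue context flattened into a single prompt string. The dialogue content, the speaker labels, and the "speaker: text" template are illustrative assumptions; only the field names follow the format described above.

import json

# Hypothetical sample following the shared-task JSON structure (content is made up).
sample_json = '''
{
  "id": "sample-0001",
  "utterances": [
    {"text": "Hi, can we practise the past tense today?", "speaker": "student"},
    {"text": "Of course! Tell me what you did yesterday.", "speaker": "teacher"},
    {"text": "I go to the cinema with my friends.", "speaker": "student"}
  ],
  "response": {"text": "Almost! Remember: 'I went to the cinema'.", "speaker": "teacher"}
}
'''

def flatten_context(sample: dict) -> str:
    # Concatenate the preceding turns as "speaker: text" lines.
    return "\n".join(f"{u['speaker']}: {u['text']}" for u in sample["utterances"])

sample = json.loads(sample_json)
prompt = flatten_context(sample)      # model input (dialogue context)
target = sample["response"]["text"]   # reference teacher response (absent in test samples)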
§.§ Data Distribution and Characteristics
The TSCC corpus is divided into three sets: train, dev, and test, comprising 2747, 305, and 273 samples, respectively.
The corpus has 3325 samples, and each sample has an average length of 7.52 turns, with about 7.33 tokens per turn on average.
Table <ref> presents a summary of the statistics of the TSCC corpus across the training, development, and testing sets.
The TSCC corpus exhibits several characteristics that are specific to educational dialogues and pose challenges to natural language generation models. For instance, the dialogues often include technical vocabulary and idiomatic expressions related to English language learning. Additionally, the dialogues can be highly varied in terms of topic, complexity, and participant proficiency. Finally, the dialogues can include challenging responses which are based on out-of-context information, posing challenges for conversational agents. These characteristics must be taken into consideration when selecting and evaluating generative models for the TSCC corpus.
§.§ Data Overlap and Challenges
It is worth noting that the released development and training sets in the TSCC dataset have some overlaps, as individual conversation samples within these sets have been generated by creating chunks from larger conversations. This overlap may lead to potential biases and overfitting when training and evaluating models on this dataset. However, the test set for the BEA 2023 shared task is free of overlaps, allowing for a more accurate assessment of the model's performance in generating AI teacher responses.
The presence of overlaps in the development and training sets posed a challenge, as models inadvertently learned to predict teacher responses based on the similarities between the samples rather than genuinely understanding the context and dynamics of the teacher-student interaction. It is essential to be aware of this issue and devise strategies to mitigate the risks associated with such overlaps and ensure that the models are robust and capable of handling diverse and unseen scenarios.
To ensure the validity of our model on the validation set, we employed an iterative inclusion process to create a train-val split without any overlap between them. This process involved carefully selecting and excluding samples from the training set that had any similarity or overlap with the samples in the development set. This approach aimed to minimize the risk of data leakage and ensure that our model was evaluated on a truly unseen set of dialogues.
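A minimal sketch of this filtering step is given below. The overlap criterion shown (two samples sharing any verbatim utterance text) is an illustrative assumption; the actual procedure may rely on a stricter similarity test.

def utterance_set(sample):
    # All turn texts of a sample, used as a fingerprint of the underlying conversation.
    return {u["text"].strip() for u in sample["utterances"]}

def build_overlap_free_split(train_samples, dev_samples):
    dev_texts = set()
    for s in dev_samples:
        dev_texts |= utterance_set(s)
    # Iteratively include a training sample only if none of its turns
    # also appears in a development sample.
    clean_train = [s for s in train_samples if utterance_set(s).isdisjoint(dev_texts)]
    return clean_train, dev_samples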
§ METHODS
The primary objective of our study is to investigate the potential of using in-context learning, supervised fine-tuning, and reinforcement learning to generate AI teacher responses in educational dialogues. Our proposed methods will be evaluated using the Teacher Student Chatroom Corpus (TSCC) dataset. In this section, we provide an overview of the three main parts of our methodology: in-context learning using GPT-4, supervised fine-tuning with existing models such as GPT-2 and DialoGPT, and supervised fine-tuning with Reinforcement Learning using the RL4LMs library <cit.>.
§.§ In-context Learning
§.§.§ GPT-4
As a preliminary step, we investigate the potential of in-context learning using GPT-4, a state-of-the-art language model. It generates educational dialogues based on its pre-trained knowledge, which has been acquired from a vast corpus of text data during its training process (the pre-training data might have included the test set; we will address this issue in the discussion section).
To evaluate the performance of GPT-4, we prompted GPT-4 in a few-shot fashion. We retrieved the 5 most similar teacher-student conversations from the TSCC dataset and provided them to the model, in addition to the current conversation and instructions about the model's role as a teacher. Details about the prompt construction that helps guide the model toward generating suitable responses as a teacher can be found in Appendix <ref>.
§.§ Supervised Fine-tuning
To further adapt pre-trained language models to the specific educational context and generate more accurate and context-aware teacher responses, we explore supervised fine-tuning using GPT-2 and DialoGPT models.
§.§.§ GPT-2
GPT-2 <cit.> is a decoder-only large language model pre-trained on WebText, and we used GPT-2 Large, which has 36 transformer decoder blocks and 774 million parameters.
We fine-tune the GPT-2 model <cit.> using the Huggingface Library on the Teacher Student Chatroom Corpus (TSCC) dataset. For each educational dialogue, we concatenated all dialogue turns into a single string with additional information of speaker roles i.e. students or teachers. As a result, the input to the GPT-2 model consists of a sequence of text representing the conversation history, culminating in the teacher's response. We then finetuned GPT-2 Large <cit.> with a causal language modeling task. Details of the exact hyperparameters used during the fine-tuning process can be found in the Appendix.
After the fine-tuning process, we evaluated the fine-tuned GPT-2 model's performance on the test set by comparing its generated teacher responses to reference responses, assessing the model's ability to generate context-aware and educationally relevant responses in line with the teacher's role in the TSCC dataset.
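A minimal sketch of this setup with the Hugging Face Trainer is shown below. The speaker-tag template and the variable train_samples (a list of TSCC-style sample dictionaries) are assumptions, while the hyperparameters follow the Appendix.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

def to_text(sample):
    # Concatenate all turns, ending with the reference teacher response.
    turns = [f"{u['speaker']}: {u['text']}" for u in sample["utterances"]]
    turns.append(f"{sample['response']['speaker']}: {sample['response']['text']}")
    return {"text": "\n".join(turns) + tokenizer.eos_token}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_ds = (Dataset.from_list(train_samples)  # train_samples: list of TSCC-style dicts
            .map(to_text)
            .map(tokenize, batched=True,
                 remove_columns=["id", "utterances", "response", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-tscc", learning_rate=1e-5,
                           num_train_epochs=10, per_device_train_batch_size=4,
                           gradient_accumulation_steps=8),  # effective batch size of 32
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()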
§.§.§ DialoGPT
DialoGPT <cit.> is a dialogue model based on the GPT-2 architecture, specifically designed for generating conversational responses. DialoGPT is trained with 147M conversation pieces extracted from Reddit <cit.>, and it is trained with causal language modeling objectives with multi-turn dialogue. We adapt our training dataset with the same format as that of DialoGPT during pretraining and then prompt the DialoGPT to generate an educational dialogue of teachers in the validation set. After training, we follow the same methodology for evaluation as GPT-2 which we discussed in the earlier section.
§.§ Supervised Fine-tuning with Reinforcement Learning
§.§.§ Flan-T5 Fine-tuned with RL4LMs
To optimize the generative models for pedagogical quality, we explore the use of reinforcement learning techniques in the fine-tuning process. We employ the RL4LMs library <cit.>, which provides an efficient and scalable framework for reinforcement learning-based language model fine-tuning.
The RL4LMs library incorporates Proximal Policy Optimization (PPO) <cit.> as the reinforcement learning algorithm, which is known for its stability and sample efficiency. The library also supports the integration of custom reward functions, allowing us to design rewards that encourage the generation of pedagogically sound teacher responses.
To implement the reinforcement learning-based fine-tuning, we first fine-tune the Flan-T5 <cit.> model on the TSCC dataset using supervised learning, as described in the previous section. Next, we utilize the RL4LMs library to fine-tune the model further using the PPO algorithm. As the reward function, we use an equally weighted combination of the F1 computed by the roberta-large version of BERTScore and the DialogRPT-updown score. More details about the reinforcement learning fine-tuning process can be found in the Appendix.
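A minimal sketch of such a combined reward is given below, assuming the model-card style usage of DialogRPT-updown and the bert_score package; the exact normalization and batching performed inside RL4LMs are not reproduced here.

import torch
from bert_score import score as bert_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

rpt_tok = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
rpt_model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

def dialog_rpt_updown(context: str, response: str) -> float:
    # DialogRPT scores a (context, response) pair joined by the EOS token.
    ids = rpt_tok.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = rpt_model(ids).logits
    return torch.sigmoid(logits[0, 0]).item()

def reward(context: str, generated: str, reference: str) -> float:
    # Equally weighted combination of BERTScore F1 (roberta-large) and DialogRPT-updown.
    _, _, f1 = bert_score([generated], [reference], model_type="roberta-large")
    return 0.5 * f1.item() + 0.5 * dialog_rpt_updown(context, generated)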
The subsequent evaluation of the fine-tuned Flan-T5 model reveals the benefits of incorporating reinforcement learning into the fine-tuning process, contributing to more context-aware, relevant, and pedagogically effective AI teacher responses.
§ RESULTS
In this section, we present the results and discuss the performance of GPT-4, fine-tuned GPT-2, and fine-tuned DialoGPT models on the TSCC dataset. We analyze the strengths and weaknesses of each approach and provide insights into their potential applications and limitations in an educational context.
§.§ GPT-4
The GPT-4 model, without fine-tuning on the TSCC dataset, demonstrates a relatively strong performance in generating educational dialogues. The generated teacher responses are generally fluent and contextually relevant, indicating that GPT-4 has a good understanding of the educational context based on its pre-trained knowledge. However, some limitations are observed in the model's ability to generate accurate and pedagogically sound responses consistently.
The carefully crafted prompt provided to the model plays a crucial role in guiding GPT-4 toward generating suitable responses as a teacher. Although the model is capable of generating contextually relevant and linguistically correct responses, it may not always produce the most pedagogically sound or helpful responses for the students. This limitation highlights the importance of fine-tuning the model on a specific educational dataset, such as TSCC, to further enhance its performance in generating AI teacher responses.
Additionally, due to the nature of the dataset, where conversations were often cut off, the model sometimes lacked the full context needed to generate meaningful responses that accurately represented the ground truth. Despite this limitation, GPT-4's responses were generally sensible and appropriate given the available context.
§.§ Finetuned GPT-2
We observe that compared with DialoGPT, GPT-2 usually generates longer and more formal responses, even with the same generation hyperparameters.
§.§ Finetuned DialoGPT
We observe that DialoGPT usually generates shorter and more vernacular responses. It fits better in a conversational setting, but sometimes the educational uptakes are not satisfactory since the responses are not guiding students to learn the language.
§.§ Finetuned Flan-T5 w/ RL
We observe that the results of Flan-T5 w/ RL on the validation set are very good, suggesting that the model was able to exploit the metrics used as the reward. In contrast, it performs poorly on the test set, suggesting that it overfit the validation set. We hypothesize two reasons for this:
the way conversations are split into chunks in the training dataset, or the difference in distribution between the training set and the test set.
§ DISCUSSION
Conversational agents have the potential to revolutionize the teaching landscape by addressing several challenges and enhancing the overall learning experience for both students and educators <cit.>. However, developing conversational agents that can behave like human teachers requires addressing several challenges <cit.>.
Data challenges. As noted in the subsections above, the generations from the GPT-4 model outperformed all the fine-tuned models, with and without reinforcement learning. In light of this,
we put forward the proposition that an array of dataset features plays a crucial role in posing significant challenges to the fine-tuning process of generative models. These features include sampling, representativeness, prompt and response lengths, and dialogue completeness (upon manual inspection, we identified several dialogues that were cut off), all of which hinder the achievement of superior performance with fine-tuning. Furthermore, upon random inspection of the generations from the fine-tuned models, we found that these models seem to have learned simple, generic responses that are often pedagogically inappropriate yet linguistically correct, such as “thank you” and “okay.” While more recent language models have been shown to have high few-shot performance, we believe that fine-tuned models could, in comparison, adapt better to provide domain-specific responses. To achieve this, we emphasize the need for extending the current dataset to include longer prompts with more context.
It is important to acknowledge that these models might not be as effective as desired in their response generation due to these intricacies. The current efforts made by the research community to collect and build quality datasets encompassing enough information about the educational task to enable AI teacher generative models to fully generalize in any context is what we assess to be the main focus that the community should adopt <cit.>.
Evaluation metrics.
In addition, we emphasize that to truly gauge the efficiency of these AI-powered teaching models, it is vital to go a step further and examine their ability to comprehend the unique nuances in the students' queries and cater to their particular educational requirements. This implies the need for a pedagogically meaningful evaluation metric. We believe that it is crucial for the research community to embrace this as the second primary focus.
While evaluation metrics such as BERTScore and DialogRPT are commonly used in several language and dialog modeling tasks, it is important to note that these metrics were not fundamentally designed to capture the level of pedagogical meaningfulness in the generated responses. As an example, consider the dialog shown in Figure <ref>: depending on the given context, only one of the responses (option (b): disconnected) is correct, yet both responses are ranked as equally correct by the BERTScore metric. Commonly used domain-agnostic metrics often serve as a proxy for how coherent and human-like the generated responses are. However, for more goal-oriented tasks such as modeling teacher-student conversational dialogues, these metrics seem to fall short. This generalization gap becomes more apparent upon analyzing the results from the fine-tuned Flan-T5 model with a feedback loop based on BERTScore and DialogRPT scores: despite performing well on the training and validation sets, the model failed to generalize to unseen test data.
In an effort to advance research on this front, we note the need for auxiliary training-level metrics, including the faithfulness of the generation to the true response, to ensure that the generations are context-aware and factually accurate (e.g., correct option (b) vs. incorrect option (a) in Figure <ref>).
GPT-4 unknown pre-training data. We understand that the use of GPT-4 as a baseline in our study presents challenges due to its unknown training data. Yet, whether or not GPT-4 has seen parts of the TSCC dataset during its pre-training, its improvement over the reference responses in terms of the DialogRPT scores and the human evaluation scores reported on the shared-task leaderboard suggests that the potential of using such high-performing models in this domain warrants further exploration.
§ CONCLUSION
In this paper, we explored the potential of using large pre-trained language models and reinforcement learning for generating AI teacher responses in an educational context. We first presented a few-shot approach using the GPT-4 model, which demonstrated promising results in generating contextually relevant and fluent responses, but with limitations in generating pedagogically sound responses consistently. We then fine-tuned GPT-2 and DialoGPT on the TSCC dataset and evaluated their performance using BERTScore and DialogRPT metrics. We also proposed an approach using RL to optimize directly for pedagogical values. We hypothesized that several dataset characteristics (e.g., dialog completeness, sampling) pose challenges to achieving superior performance with fine-tuning. To this end, we recommend the extension of the dataset to include longer prompts with extended context. Finally, we also draw attention to the need for more domain-specific metrics (in both evaluation and reward-based training) in enabling the generation of accurate, context-aware, and factually correct teacher responses.
acl_natbib
§ APPENDIX
§.§ GPT-4 Prompt Construction
To evaluate the performance of GPT-4, we provided it with a few-shot prompt that includes a selection of similar teacher-student conversations from the TSCC dataset. This approach helps guide the model toward generating suitable responses as a teacher. The prompt is constructed as follows:
* We direct the system role to act as a teacher and encourage learning by using the prompt as given below.
* Retrieve the 5 most similar teacher-student conversations from the TSCC dataset. This is done by computing the cosine similarity between the input conversation context and each candidate conversation context in the dataset, using embeddings generated by the text-embedding-ada-002 model (see the sketch after this list).
* Concatenate the selected conversations with the input conversation, separated by special tokens to indicate the beginning and end of a new sample conversation.
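The following sketch illustrates the retrieval and prompt-assembly steps, assuming that the ada-002 embeddings of all stored dialogue contexts have already been computed into a NumPy array; the delimiter tokens shown are illustrative rather than the exact ones used in our experiments.

import numpy as np

def top_k_similar(query_emb: np.ndarray, corpus_emb: np.ndarray, k: int = 5):
    # Cosine similarity between the input context and every stored context.
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)[:k]  # indices of the k most similar conversations

def build_messages(system_prompt: str, exemplar_texts, current_context: str):
    # Wrap each retrieved conversation in simple delimiters and append the current one.
    shots = "\n".join(f"<sample>\n{t}\n</sample>" for t in exemplar_texts)
    return [{"role": "system", "content": system_prompt},
            {"role": "user",
             "content": shots + "\n\nNow, join the following conversation:\n" + current_context}]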
This prompt construction aims to provide GPT-4 with the necessary context and guidance to generate accurate and pedagogically relevant responses in the context of teacher-student dialogues. The prompt is designed as follows:
You are acting as a teacher, and you are helping a student learn. Be patient, helpful, and kind. Don't be superimposing; give short responses to encourage learning. Make the student feel comfortable and confident, and help them learn. Now, join the following conversation: <conversation context>
The prompt is designed using the following directives in mind:-
* We instruct the system with several indicators to act as a teacher and provide helpful advice to the student.
* To mitigate the challenge of generating teacher-like responses, we advise the model to be patient, kind, and helpful to the student.
* Through the directive to keep responses short and encouraging, we guide the model toward generating suitable responses that might help the student learn effectively.
* The model is also instructed to make the student feel comfortable and confident in their learning process, providing an overall supportive environment for the student.
* Finally, the conversation context is provided to the model to set the context for the given student query, allowing the model to generate appropriate responses given the conversation context.
Combining all these aspects together, we aim to guide the model toward generating contextually relevant and pedagogically meaningful responses in the given teacher-student dialogue.
We use the following hyperparameters for querying the GPT-4 model:
* Model: gpt-4-0314
* Temperature: 1
* Max Tokens: 100
* Top p: 1
§.§ Fine-tuning Exact Parameters
For our supervised fine-tuning experiments, we used the following hyperparameters:
§.§.§ GPT-2
* Learning rate: 1e-5
* Batch size: 32
* Epochs: 10
* Max sequence length: 1024
* Optimizer: AdamW
* Scheduler: linear learning rate scheduler
§.§.§ DialoGPT
* Learning rate: 1e-5
* Batch size: 32
* Epochs: 10
* Max sequence length: 1024
* Optimizer: AdamW
* Scheduler: linear learning rate scheduler
§.§ Supervised Fine-tuning with Reinforcement Learning Details
To implement the reinforcement learning-based fine-tuning using the RL4LMs library, we first fine-tuned the Flan-T5 model on the TSCC dataset using supervised learning. After this initial fine-tuning step, we utilized the RL4LMs library to fine-tune the model further using reinforcement learning. We used an equally weighted combination of BERTScore and DialogRPT as the reward function to optimize the model for pedagogical quality. The following hyperparameters were used for the reinforcement learning fine-tuning process:
* Learning rate: 1e-6
* Batch size: 64
* Epochs: 5
* Max prompt length: 512
* Max episode length: 100
* Optimizer: AdamW
* Scheduler: linear learning rate scheduler
The YAML file for the RL4LMs script is as follows:
tokenizer:
  model_name: google/flan-t5-small
  padding_side: left
  truncation_side: left
  pad_token_as_eos_token: False

reward_fn:
  id: dialog_rpt_bert
  args:
    BERTScore_coeff: 0.5
    DialogRPT_coeff: 0.5

datapool:
  id: bea
  truncate: False
  args:

env:
  n_envs: 1
  args:
    max_prompt_length: 100
    max_episode_length: 20
    terminate_on_eos: True
    context_start_token: 0
    prompt_truncation_side: "right"

alg:
  id: ppo_separate
  args:
    n_steps: 20
    batch_size: 64
    verbose: 1
    learning_rate: 0.000001
    clip_range: 0.2
    n_epochs: 1
    value_update_epochs: 3
    # batchify: False
    gae_lambda: 0.95
    gamma: 0.99
    ent_coef: 0.01
  kl_div:
    coeff: 0.001
    target_kl: 2.0
  policy:
    id: seq2seq_lm_actor_critic_policy
    args:
      model_name: google/flan-t5-small
      apply_model_parallel: True
      prompt_truncation_side: "right"
      generation_kwargs:
        do_sample: True
        top_k: 0
        min_length: 9
        max_new_tokens: 20

train_evaluation:
  eval_batch_size: 64
  n_iters: 200
  eval_every: 20
  save_every: 10
  metrics:
    - id: bert_score
      args:
        language: en
    - id: dialog_rpt
      args:
        model_name: "microsoft/DialogRPT-updown"
        label_ix: 0
        batch_size: 1
    # - id: uptake
    #   args:
    #     model_name: None
    #     label_ix: 0
    #     batch_size: 1
  generation_kwargs:
    num_beams: 5
    min_length: 9
    max_new_tokens: 20
|
http://arxiv.org/abs/2307.05925v1 | 20230712054527 | A Tractable Statistical Representation of IFTR Fading with Applications | [
"Maryam Olyaee",
"Hadi Hashemi",
"Juan M. Romero-Jerez"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
IEEEexample:BSTcontrol
A Tractable Statistical Representation of IFTR Fading with Applications
Maryam Olyaee, Hadi Hashemi and Juan M. Romero-Jerez, Senior Member, IEEE
This work was submitted to the IEEE for publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
This work has been
funded in part by Junta de
Andalucía through project P21-00420 and also grant EMERGIA20-00297, and in part by
MCIN/AEI/10.13039/501100011033 through grant PID2020-118139RB-I00.
M. Olyaee and J. M. Romero-Jerez are with Communications and Signal Processing Lab, Telecommunication Research Institute (TELMA), Universidad de Málaga,
ETSI Telecomunicación, Bulevar Louis Pasteur 35, 29010 Málaga, Spain.
Hadi Hashemi is with the Department of Signal Theory, Networking and Communications, Universidad de Granada, 18071, Granada, Spain.
(e-mails: [email protected], [email protected], [email protected],).
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The recently introduced IFTR fading model, consisting of two specular components fluctuating independently plus a diffuse component, has proven to provide an excellent fit to different wireless environments, including the millimeter-wave band. However, the original formulations of the probability density function (PDF) and cumulative distribution function (CDF) of this model are not applicable to all possible values of its defining parameters, and are given in terms of multifold generalized hypergeometric functions, which prevents their widespread use for the derivation of performance metric expressions. In this paper we present a new formulation of the IFTR model as a countable mixture of Gamma distributions which greatly facilitates the performance evaluation for this model in terms of the metrics already known for the much simpler and widely used Nakagami-m fading. Additionally, a closed-form expression is presented for the generalized moment generating function (GMGF), which permits to readily obtain all the moments of the distribution of the model, as well as several relevant performance metrics. Based on these new derivations, the IFTR model is evaluated for the average channel capacity, the outage probability with and without co-channel interference, and the bit error rate (BER), which are verified by Monte Carlo simulations.
Multipath fading, Independent Fluctuating Two-Ray (IFTR), Gamma Distribution, Generalized Moment Generating Function, Outage Probability, Co-channel Interference
§ INTRODUCTION
Due to the spectrum shortage in future-generation wireless networks, higher frequency bands are being considered in the standards in order to cover users' demands. Thus, the use of millimeter-wave and terahertz bands has emerged in the context of 5G/6G cellular networks <cit.>. In many wireless scenarios, channel multipath fading is an essential propagation effect to be considered due to its potential detrimental impact on performance. Therefore, accurate characterization of wireless channel fading at those higher frequencies has become a relevant research topic, and much effort is being made in this area <cit.>.
Recently, the IFTR <cit.> channel model has been presented to characterize multipath propagation, which includes several well-known distributions, namely Rayleigh, Rician, Hoyt (Nakagami-q), Rician Shadowed, and Nakagami-m, as special or limiting cases. The IFTR model consists of two dominant (specular) waves plus a diffuse component, due to the aggregation of multiple low-power scattered waves, modeled as a complex Gaussian random variable (RV), where the specular components are assumed to fluctuate independently following Nakagami-m fading. This model is related to the FTR fading model except that in the latter the two specular components are assumed to be fully correlated and fluctuate simultaneously. The FTR model was introduced in <cit.> and was later reformulated in <cit.> and, more recently, in <cit.>, and has been studied abundantly for different wireless environments, mostly in the context of millimeter-wave communications, and considering many different performance metrics (see for example <cit.> and the references therein). In spite of the apparent similitude in the formal definition of the FTR and IFTR fading models, there are major differences between them, both in terms of the fitting results to experimental measurements and in the involved mathematical derivations. On the one hand, the IFTR fading model has been shown to provide a (sometimes remarkable) better fit than FTR fading (as well as other generalized fading models such as κ-μ shadowed <cit.> and two-wave with diffuse power –TWDP– <cit.>) to experimental data in very different environments, including line-of-sight (LOS) millimeter-wave, land-mobile satellites (LMS), and underwater acoustic communications (UAC) <cit.>. On the other hand, the independence of the two specular components in the IFTR model imposes new mathematical challenges, as now a two-fold nested integration always appear in its statistical characterization.
Although both the PDF and CDF of the IFTR model were presented in <cit.>, their use is rather limited for two reasons: on the one hand, they are not completely general, as they require assuming one of the model parameters m_1 or m_2 to be an integer, while these parameters can take any arbitrary positive real value in realistic propagation scenarios; on the other hand, the known PDF and CDF are given in terms of a generalized hypergeometric function, which is actually a multifold infinite summation that is very difficult to manipulate to obtain analytical expressions for most performance metrics in wireless communication systems.
In this paper, we solve the aforementioned issues by deriving a new statistical characterization of the IFTR fading model assuming arbitrary positive values of m_1,m_2 and easy to manipulate. Additionally, we expand the known results for the precise characterization of the model and apply them for the performance analysis of wireless systems.
Specifically, the key contributions of this paper are:
* A new formulation is presented for the PDF and CDF of the instantaneous SNR of IFTR fading in terms of an infinite countable mixture of Gamma distributions for arbitrary values of the channel parameters m_1 and m_2, where the weights of the elements of the mixture are given in closed-form. The resulting infinite series are demonstrated to be convergent and are precisely truncated and evaluated using the Kolmogorov-Smirnov goodness-of-fit test.
* The GMGF of the IFTR fading model is obtained for the first time, which for many relevant cases can be written in closed-form, allowing to obtain all the moments of the distribution. In spite of the model generality and statistical complexity, this function permits to obtain closed-form expressions for different relevant performance metrics including, for example, secrecy capacity outage, outage probability under interference and energy detection probability.
* The new and expanded statistical characterization of IFTR fading is used for its performance analysis evaluation in terms of the average capacity, outage probability with and without interference and average bit error rate (BER) for different modulations. The effect of the parameters values of the model are evaluated numerically and verified by simulation.
The rest of this paper is organized as follows:
The channel model is presented in Section II.
Then, in Section III, the new representation of the IFTR fading is presented, as well as, for the first time, to the authors' knowledge, an expression of the GMGF, which for many relevant cases can be written in closed-form.
Several performance metrics, including the average capacity, the outage probability, and the BER in IFTR fading are analyzed in Section IV. Simulation and numerical results are given in Section V. Finally, the paper is concluded in Section VI.
§ PRELIMINARY DEFINITIONS AND CHANNEL MODEL
A RV X following a Gamma distribution with shape parameter λ and scale parameter ν will be denoted as X ∼𝒢(λ,ν), and its PDF and CDF will be given, respectively, by
f^𝒢(x;λ,ν)=x^λ-1/Γ(λ)ν^λe^-x/ν,
F^𝒢(x;λ,ν)=1/Γ(λ)γ(λ,x/ν),
where γ(·,·) is the incomplete Gamma function <cit.>.
The SNR γ_𝒦 (or, equivalently, the received power) in a Nakagami-m fading with mean γ̅_𝒦 and fading severity parameter m follows a Gamma distribution with shape parameter m and scale parameter γ̅_𝒦/m, i.e.,
γ_𝒦∼𝒢(m,γ̅_𝒦/m).
The IFTR fading model is composed of two specular waves, whose amplitude fluctuate according to independent Nakagami-m fading, plus an undetermined number of scattered low-amplitude waves (the diffuse component) which, by virtue of the central limit theorem, are jointly represented by a complex Gaussian RV. Let ζ_i ∼𝒢(m_i,1 /m_i), with i ∈{1,2}, then the complex base-band representation of the IFTR fading model can be expressed as
V_r = √(ζ_1) V_1 e^j ϕ_1 + √(ζ_2) V_2 e^j ϕ_2 + X + j Y,
where V_i is the average amplitude of the i-th specular component, ϕ_i is a uniformly distributed RV in [0,2π) representing its phase,
and X + j Y models the diffuse component with X,Y ∼𝒩(0,σ^2).
In addition to the fading severity parameters of the specular components, m_1 and m_2, the IFTR model will be determined by the following physically-motivated parameters:
K = V_1^2+V_2^2/2σ^2,
Δ = 2V_1V_2/V_1^2+V_2^2,
where K represents the ratio of the average power of the dominant components to the power of the diffuse component and Δ∈ [0,1] provides a measure of the specular components similarity, so that Δ=0 implies V_1=V_2. Without loss of generality we will assume V_1 ≥ V_2, and therefore Δ=1 implies V_2=0, i.e., only the first specular component, if any, is received. For the sake of compactness in subsequent expressions, we will also define the following ancillary parameters, given in terms of K and Δ:
K_1≜V_1^2/2σ ^2 = K1 + √(1 - Δ ^2)/2,
K_2≜V_2^2/2σ ^2 = K1 - √(1 - Δ ^2)/2.
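As a simple numerical companion to the model definition in (<ref>) and the parameters above, the following sketch draws IFTR SNR samples by Monte Carlo; such a generator can be used to verify the analytical results by simulation. Here E_s/N_0 = 1 is assumed and the diffuse power is normalized so that the mean SNR equals γ̅.

import numpy as np

def iftr_snr_samples(gamma_bar, m1, m2, K, Delta, n=100_000, rng=None):
    # Monte Carlo draws of the squared IFTR envelope, scaled so that E[SNR] = gamma_bar.
    rng = np.random.default_rng(rng)
    sigma2 = gamma_bar / (2.0 * (1.0 + K))            # per-dimension diffuse power (E_s/N_0 = 1)
    K1 = K * (1.0 + np.sqrt(1.0 - Delta**2)) / 2.0    # ancillary parameters defined above
    K2 = K * (1.0 - np.sqrt(1.0 - Delta**2)) / 2.0
    V1, V2 = np.sqrt(2.0 * sigma2 * K1), np.sqrt(2.0 * sigma2 * K2)

    zeta1 = rng.gamma(shape=m1, scale=1.0 / m1, size=n)   # unit-mean Gamma fluctuations
    zeta2 = rng.gamma(shape=m2, scale=1.0 / m2, size=n)
    phi1, phi2 = rng.uniform(0.0, 2.0 * np.pi, (2, n))    # uniform phases
    diffuse = rng.normal(0.0, np.sqrt(sigma2), n) + 1j * rng.normal(0.0, np.sqrt(sigma2), n)

    Vr = (np.sqrt(zeta1) * V1 * np.exp(1j * phi1)
          + np.sqrt(zeta2) * V2 * np.exp(1j * phi2) + diffuse)
    return np.abs(Vr) ** 2

# Example: np.mean(iftr_snr_samples(10.0, 5.1, 2.3, 15.0, 0.9)) is close to 10.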
The IFTR model is very versatile and includes different classical and generalized fading models as particular cases by an appropriate selection of the parameters. Thus, for m_1,m_2 →∞ the fluctuations of the specular components tend to disappear and the IFTR model collapses to the TWDP one <cit.>. If, in addition, we let Δ=0, the Rice model is obtained. For finite values of m_1, Δ=0 yields the Rician Shadowed model <cit.>, which was shown in <cit.> that it contains the Hoyt (Nakagami-q) model for m_1=0.5, with q=(√(1+2K))^-1. The Rayleigh fading model can be obtained as a particularization of either the aforementioned Rice or Hoyt models for K=0, and also for m_1=1 and Δ=0. If there is only one specular component and the diffuse component is absent (Δ=0, K→∞), the IFTR model collapses to the Nakagami-m model.
§ NEW REPRESENTATION OF THE IFTR FADING MODEL
In this paper, we present a new statistical characterization of the SNR of a signal undergoing IFTR fading which, denoting by E_s the symbol energy density and N_0 the power spectral density, is defined as γ≜(E_s/N_0)|V_r|^2.
A RV γ following an IFTR distribution with parameters m_1, m_2, K, Δ and mean γ will be denoted by
γ∼ℐℱ𝒯ℛ(γ,m_1, m_2,K,Δ), and its PDF and CDF will be denoted, respectively, by f_γ^ IFTR(·) and F_γ^ IFTR(·).
Following the same spirit as in <cit.> for TWDP and in <cit.> for FTR fading, we now show that the PDF and CDF of the SNR of a RV following an IFTR distribution can be expressed as infinite countable mixtures of the corresponding functions for the Gamma distribution. Additionally, we show how this result can be applied to readily obtain any metric, defined by averaging over the channel realizations, for the IFTR model, from such metric for the much simpler and widely used Nakagami-m fading.
§.§ PDF and CDF of IFTR fading
Let γ∼ℐℱ𝒯ℛ(γ,m_1, m_2,K,Δ), then, its PDF and CDF can be expressed, respectively, as
f_γ^ IFTR( x ) = ∑_j = 0^∞ A_j f^𝒢( x;j + 1,γ̅/1+K),
F_γ^ IFTR( x ) = ∑_j = 0^∞ A_j F^𝒢( x;j + 1,γ̅/1+K),
where f^𝒢 and F^𝒢 are, respectively, the PDF and CDF of the Gamma distribution given in (<ref>) and (<ref>), and coefficients A_j are given in (<ref>) in terms of the channel parameters and the regularized Gauss hypergeometric function[The regularized Gauss hypergeometric function can be calculated in terms of the standard Gauss hypergeometric function as
_2 F̃_1 ( a,b;c;z) = _2 F_1 ( a,b;c;z)/Γ (c) when c ∉{0,-1,-2,…}, however, the corresponding parameter c in the coefficients A_j in (<ref>) can indeed be a non-positive integer for some values of index j, therefore, _2 F̃_1 has to be calculated using (<ref>). Nevertheless, the regularized Gauss hypergeometric function is in-built in the Mathematica software.], which is defined as
_2 F̃_1 ( a,b;c;z) = ∑_k = 0^∞( a )_k ( b )_k /Γ( c + k)z^k /k!,
where (a)_k≜Γ (a+k) / Γ (a) is the Pochhammer symbol.
See Appendix A.
Note that, in contrast to the PDF and CDF expressions given in <cit.>, (<ref>) and (<ref>) are valid for arbitrary values of m_1 and m_2, and therefore this is also true for all the performance metrics derived from them.
By noting that the j-th term in (<ref>) is proportional to (x/γ̅)^j, the PDF and CDF in IFTR fading in the high SNR regime (i.e., as γ̅→∞) can be approximated by only maintaining the first term in the infinite summations, yielding
f_γ^ IFTR( x ) ≈ A_0 [(1 + K)/γ̅] e^ - x(1 + K)/γ̅ , γ̅≫ x ,
F_γ^ IFTR( x ) ≈ A_0 ( 1 - e^ - x(1 + K)/γ̅), γ̅≫ x
with
A_0 = m_1^m_1 m_2^m_2 /( K_1 + m_1 )^m_1 ( K_2 + m_2 )^m_2
× _2 F_1 ( m_1 ,m_2 ;1;K^2 Δ ^2 /4( K_1 + m_1 )( K_2 + m_2 )).
Let h(γ) be a performance metric (or statistical function) depending on the instantaneous SNR, and let X^𝒦(γ̅_𝒦,m) be the metric (or function) obtained by averaging over an interval of the PDF of the SNR for Nakagami-m fading with mean γ̅_𝒦 and fading severity m, i.e.,
X^𝒦(γ̅_𝒦,m) = ∫_a^b h(x)f^𝒢(x;m,γ̅_𝒦/m)dx,
where 0 ≤ a ≤ b < ∞. Then, the average performance metric for IFTR fading can be calculated as
X^ IFTR (γ̅, m_1,m_2,K,Δ)
= ∑_j=0^∞ A_j X^𝒦(γ̅/1+K(j+1),j+1),
where A_j are the IFTR coefficients defined in (<ref>).
The average metric in IFTR fading channel is calculated as
X^ IFTR(γ̅,m_1,m_2,K,Δ) = ∫_a^b h(x)f_γ^ IFTR( x ) dx .
By plugging (<ref>) into (<ref>) we can write
X^ IFTR( γ̅,m_1 ,m_2 ,K,Δ)
= ∫_a^b h( x )[ ∑_j = 0^∞A_j f^𝒢( x;j + 1,γ̅/1 + K)]dx
= ∑_j = 0^∞A_j ∫_a^b h( x ) f^𝒢( x;j + 1,γ̅/1 + K)dx .
Comparing the integral of the resulting expression with (<ref>) and identifying j + 1=m and γ̅/1 + K = γ̅_𝒦/m, (<ref>) is obtained.
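As an illustration of how Corollary 1 is used in practice, the sketch below obtains an IFTR-averaged metric from a routine for the corresponding Nakagami-m metric. The helper iftr_coefficients, returning the truncated sequence A_0,…,A_J-1, is a hypothetical placeholder for an implementation of the coefficients A_j defined above.

import numpy as np

def iftr_average(metric_nakagami, gamma_bar, m1, m2, K, Delta, J=40):
    # metric_nakagami(gamma_bar_K, m): average metric in Nakagami-m fading.
    A = iftr_coefficients(m1, m2, K, Delta, J)  # hypothetical helper computing A_0..A_{J-1}
    return float(np.sum([A[j] * metric_nakagami(gamma_bar * (j + 1) / (1.0 + K), j + 1)
                         for j in range(J)]))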
§.§ Series convergence and Kolmogorov-Smirnov goodness-of-fit statistical test
The series expressions of the PDF given in (<ref>) is calculated by averaging the convergent series expression for TWDP fading, given in (<ref>), over the fluctuations of the specular components, as explained in Appendix A. The weights of the Gamma PDF's in the TWDP series are positive <cit.> and therefore the interchange of integration and infinite summation in (<ref>) can be carried out by virtue of Tonelli’s theorem <cit.>, which has the following consequences:
(i) The series in the right hand side of (<ref>) converges to the PDF of the IFTR fading model f_γ^ IFTR(x) ∀ x ∈ [0,∞).
(ii) The calculated coefficients A_j are positive for all j.
Moreover, the performance metrics in communication systems (e.g., BER, channel capacity, outage probability, etc.) are typically non-negative functions which, together with (ii), permits to invoke again Tonelli’s theorem, thus allowing the interchange of integration and infinite summation in (<ref>), yielding two additional consequences:
(iii) The series in the right hand side of (<ref>) converges to the average metric in IFTR fading X^ IFTR(γ̅,m_1,m_2,K,Δ).
(iv) Considering h(γ)=1 in [0,∞) in Corollary 1 yields ∑_j=0^∞A_j = 1. Additionally, considering h(γ)=1 in [0,x) in Corollary 1 provides a formal justification for obtaining (<ref>) by integrating (<ref>) term by term.
The infinite series used in the statistical characterization of IFTR fading must be truncated for numerical computation.
We now provide the Kolmogorov-Smirnov (KS) goodness-of-fit statistical test, which permits checking how close a truncated series is to the exact value. The KS test statistic is given by <cit.>
T_KS = max |F̂_γ^ IFTR(x)-F_γ^ IFTR(x)|,
where F_γ^ IFTR(x) is the exact value of the CDF and F̂_γ^ IFTR(x) is the approximation of the CDF when the series is truncated to J terms.
Table <ref> reports the KS test for different channel parameters when the truncated series have 20, 30, or 40 terms.
It can be seen that the accuracy reaches an acceptable level when the first 40 terms of the series are computed, so the numerical calculations of all the series in this work will consider 40 terms.
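The truncation check is straightforward to reproduce, as sketched below; here the exact CDF is approximated by the empirical CDF of Monte Carlo draws (e.g., obtained with a generator such as the one sketched in Section II), and the KS statistic is the largest absolute deviation over the sample points. The coefficients A_j are again assumed to be available.

import numpy as np
from scipy.stats import gamma as gamma_dist

def truncated_iftr_cdf(x, A, gamma_bar, K):
    # J-term Gamma-mixture CDF, with J = len(A).
    scale = gamma_bar / (1.0 + K)
    return sum(A[j] * gamma_dist.cdf(x, a=j + 1, scale=scale) for j in range(len(A)))

def ks_statistic(A, gamma_bar, K, snr_samples):
    xs = np.sort(snr_samples)
    empirical = np.arange(1, len(xs) + 1) / len(xs)
    return float(np.max(np.abs(truncated_iftr_cdf(xs, A, gamma_bar, K) - empirical)))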
Figs. <ref> and <ref> show the PDF of the SNR for different IFTR channel parameters obtained from (<ref>) assuming 40 terms in the truncated series computation. Fig. <ref> is plotted for Δ=0.1,0.9 and for both integer and non-integer values of m_1 and m_2, while Fig. <ref> shows the PDF for K=5,15. The numerical results are verified by Monte-Carlo simulation, showing an excellent agreement in all cases.
Fig. <ref> illustrates the CDF of the SNR in IFTR fading computed from (<ref>) for different values of K, Δ and m_1,m_2.
§.§ GMGF and moments of the IFTR model
Let n>0, and let X be a continuous non-negative RV with PDF f_X(·). The GMGF of X is defined as
ϕ _X ^(n)( s ) ≜ E{X ^n e^X s} = ∫_0^∞x^n e^xs f_X ( x )dx,
where E{·} denotes the expectation operator.
The moment generating function (MGF) is defined as ϕ _X ( s ) ≜ E{e^X s} = ϕ _X^(0)( s ), and it is therefore a particular case of the GMGF. Note that for n∈ℕ, the GMGF coincides with the n-th order derivative of the MGF. Also, the n-th order moment of X is obtained as μ _X^n ≜ E{X^n } = ϕ _X ^(n)( 0 ).
The GMGF finds application in different communication theory areas, including energy detection, outage probability
under co-channel interference, physical layer security, or BER analysis. In most cases it suffices to consider n∈ℕ, which usually results in closed-form expressions for the GMGF, as is the case for IFTR fading, as we show below. However, there are situations, such as composite Inverse Gamma (IG) shadowing/fading modeling <cit.>, where the more general case of arbitrary n>0 needs to be considered.
In the following Lemma we derive expressions for the GMGF of the IFTR fading model for both cases.
Let γ∼ℐℱ𝒯ℛ(γ,m_1, m_2,K,Δ), then, its GMGF can be expressed as follows:
(i) General case (n∈ℝ^+):
ϕ_γ^(n) (s)
= ∑_j=0^∞ A_j ϕ_G^(n)(s,j+1,γ̅/1+K),
where A_j is defined in (<ref>) and ϕ_G^(n) is the GMGF of a RV G ∼𝒢(λ,ν), which is given by
ϕ_G^(n) (s,λ,ν) = Γ(n+λ) ( 1 /ν-s)^-(n+λ)/Γ(λ) ν^λ .
(ii) Case n∈ℕ: A closed-form expression is given in (<ref>).
Case (i): This result is obtained by applying Corollary <ref> to the GMGF of the SNR in Nakagami-m fading given in <cit.>.
Case (ii): See Appendix B.
Let γ∼ℐℱ𝒯ℛ(γ,m_1, m_2,K,Δ), then its n-th order moment can be expressed as follows:
(i) General case (n∈ℝ^+):
μ_γ^n = ∑_j=0^∞ A_j Γ(n+j+1) γ̅^n/Γ(j+1) (1+K)^n.
(ii) Case n∈ℕ: A closed-form expression is given now by
μ_γ^n = ( γ/1 + K)^n ∑_q = 0^n nqn!/q!∑_r = 0^q qr
×∑_p = 0^q - rq-rp K_1^p K_2^q - r - p∑_l = 0^r rl( KΔ/2)^2l
×Γ( m_1 + l + p)/Γ( m_1 )m_1^l + pΓ( m_2 + q - l - p)/Γ( m_2 )m_2^q - l - pδ _2l,r.
where δ_2l,r is the Kronecker delta function.
These results follow by setting s=0 in the GMGF expressions. In case (ii), the following equality has been taken into account to obtain (<ref>):
lim_s→ 0s^n - m· _2 F̃_1 ( a,b;n - m + 1;A · s^2 ) = δ _n,m,
which holds for any n,m ∈ℕ, where the cases n>m and n=m are trivial, and the case n<m results from the fact that the Gamma function has simple poles at the non-positive integers, and therefore from (<ref>) and given p ∈ℕ∪{0} we can write
_2 F̃ _1 ( a,b; - p;z) = ∑_k = p + 1^∞( a )_k ( b )_k /Γ( - p + k)z^k /k!.
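As a quick numerical sanity check of the moment expressions, the general-case series in (<ref>) can be evaluated directly and compared against sample moments; the coefficients A and the sampler are assumed available from the earlier sketches, and the function name is ours.

import numpy as np
from math import gamma as gamma_fn

def iftr_moment_series(n, A, gamma_bar, K):
    # n-th order moment from case (i): sum_j A_j * Gamma(n+j+1)/Gamma(j+1) * (gamma_bar/(1+K))^n.
    scale = gamma_bar / (1 + K)
    return sum(A[j] * gamma_fn(n + j + 1) / gamma_fn(j + 1) * scale**n for j in range(len(A)))

# Example check (helper names from the earlier sketches):
# A = iftr_weights(40, K=10, Delta=0.5, m1=5.1, m2=2.3)
# snr = sample_iftr_snr(10**6, 1.0, 10, 0.5, 5.1, 2.3)
# print(iftr_moment_series(2, A, 1.0, 10), np.mean(snr**2))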
From the expression of the moments for n∈ℕ given in (<ref>), a closed-form expression for the amount of fading (AoF) under IFTR fading can be obtained. The AoF captures the severity, in terms of the variability, of the fading channel as a function of the parameters of the model and is defined as the variance of the SNR normalized by its squared mean, so that AoF≜ E{(γ -γ̅)^2}/γ̅^2=E{γ^2}/γ̅^2 - 1.
Let γ∼ℐℱ𝒯ℛ(γ,m_1, m_2,K,Δ), then, its AoF can be written as
AoF = 1/( 1 + K)^2 [ 1 + 2K + ( KΔ)^2 /2 + K_1^2 /m_1 + K_2^2 /m_2 ].
This result is obtained by particularizing the moments in (<ref>) to the definition of the AoF.
The IFTR fading model tends to the TWDP one for m_1,m_2 →∞. As a check, it must be noted that for such condition the expression given in (<ref>) tends to the AoF given in <cit.> for TWDP fading.
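The AoF expression is straightforward to evaluate; the short sketch below (ours) also indicates how it can be cross-checked against samples drawn from the model, with K_1 and K_2 recovered from (K, Δ) as in the earlier sketches.

import numpy as np

def iftr_aof(K, Delta, m1, m2):
    # Closed-form amount of fading for IFTR fading.
    K1 = 0.5 * K * (1 + np.sqrt(1 - Delta**2))
    K2 = 0.5 * K * (1 - np.sqrt(1 - Delta**2))
    return (1 + 2 * K + (K * Delta)**2 / 2 + K1**2 / m1 + K2**2 / m2) / (1 + K)**2

# Cross-check against simulation (sampler from the earlier sketch):
# snr = sample_iftr_snr(10**6, 1.0, K=10, Delta=0.5, m1=5.1, m2=2.3)
# print(iftr_aof(10, 0.5, 5.1, 2.3), np.var(snr) / np.mean(snr)**2)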
§ PERFORMANCE ANALYSIS
By using the derived statistical characterization of the IFTR fading model, the performance of different wireless communication systems undergoing this fading distribution can be calculated. In the following, the channel capacity, the outage probability in an interference-limited multi-antenna receiver and the symbol error rate have been obtained for IFTR fading.
§.§ Average channel capacity
The average capacity per unit bandwidth for IFTR fading is given by
C = ∫_0^∞log_2(1+x) f_γ^ IFTR(x) dx.
A direct application of Corollary 1, using the average channel capacity expression for Nakagami-m fading channels <cit.>, provides the following closed-form expression:
C = ∑_j=0^∞A_j e^K+1/γ̅/ln(2)∑_k=0^j( 1+K/γ̅)^k Γ( -k,1+K/γ̅),
where A_j is given in eq. (<ref>) and Γ(.,.) is the upper incomplete gamma function, which can
be computed, when the first parameter is a negative integer, as <cit.>
Γ(-n,x)=(-1)^n/Γ(n+1)[ ∑_r=0^n-1Γ(n-r)/(-x)^n-r e^{-x} - Ei(-x) ],
where Ei(·) is the exponential integral function <cit.>.
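A possible numerical implementation of the capacity series is sketched below (ours). To sidestep the large dynamic range of Γ(-k, ·) for high truncation orders, each summand c^k Γ(-k, c) is evaluated as a single integral; the coefficient vector A is again assumed precomputed, and the result can be verified by integrating log_2(1+x) against the mixture PDF or by Monte Carlo.

import numpy as np
from scipy.integrate import quad

def capacity_term(k, c):
    # c^k * Gamma(-k, c) written as one integral to avoid huge/small intermediate factors.
    val, _ = quad(lambda t: (c / t)**k * np.exp(-t) / t, c, np.inf)
    return val

def iftr_capacity(A, gamma_bar, K):
    # Average capacity per unit bandwidth (bit/s/Hz) from the series expression.
    c = (1 + K) / gamma_bar
    return sum(A[j] * np.exp(c) / np.log(2) * sum(capacity_term(k, c) for k in range(j + 1))
               for j in range(len(A)))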
§.§ Outage probability in interference-limited multi-antenna receiver
The outage probability, i.e., the probability that the received SNR is below a threshold γ_th, under IFTR fading is given by
P_out = Pr(γ<γ_th) = F_γ^ IFTR(γ_th).
On the other hand, in the presence of co-channel interference (CCI) of total received power I, considering negligible background noise and denoting as W the received power from the desired user, which is assumed to experience IFTR fading, the outage probability is defined as
P̂_ out= P(W/I<R_th),
where R_th denotes the signal-to-interference (SIR) threshold.
We further assume N receive antennas performing maximal ratio combining (MRC) and L independent and identically distributed (i.i.d.) Rayleigh interferers with average power P_I. In this scenario, the outage probability is given by <cit.>
P̂_ out =
∑_k=0^L-1(1/ R_thP_I)^k
∑_𝒰∏_i=1^N 1/u_i !ϕ^(u_i)_W_i(-1/ R_thP_I),
where 𝒰 is a set of N-tuples such that 𝒰={(u_1 ... u_N), u_i∈ℕ, ∑_i=1^N u_i = k},
and ϕ^(u_i)_W_i(s) is computed using (<ref>), as u_i ∈ℕ, by simply considering the relation W_i = γ_i/E_s/N_0, thereby providing a closed-form expression for the outage probability.
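For completeness, a direct transcription of this outage expression is sketched below (ours), using the series-form GMGF of the Lemma, case (i), for the required derivatives. The sketch works in normalized units, i.e., it takes W_i equal to the per-branch SNR (E_s/N_0 = 1), assumes i.i.d. branches, and enumerates the N-tuples by brute force, which is adequate for small N and L.

import numpy as np
from itertools import product
from math import factorial, gamma as gamma_fn

def gmgf_gamma(n, s, lam, nu):
    # E[X^n e^{sX}] for X ~ Gamma(shape lam, scale nu), valid for s < 1/nu.
    return gamma_fn(n + lam) / (gamma_fn(lam) * nu**lam) * (1.0 / nu - s)**(-(n + lam))

def gmgf_iftr(n, s, A, gamma_bar, K):
    # Series-form GMGF of the IFTR SNR (Lemma, case (i)).
    nu = gamma_bar / (1 + K)
    return sum(A[j] * gmgf_gamma(n, s, j + 1, nu) for j in range(len(A)))

def outage_cci_mrc(A, gamma_bar, K, N, L, P_I, R_th):
    # Outage probability with N-branch MRC and L i.i.d. Rayleigh interferers of power P_I.
    s = -1.0 / (R_th * P_I)
    p_out = 0.0
    for k in range(L):
        tuples = [u for u in product(range(k + 1), repeat=N) if sum(u) == k]
        inner = sum(np.prod([gmgf_iftr(ui, s, A, gamma_bar, K) / factorial(ui) for ui in u])
                    for u in tuples)
        p_out += (1.0 / (R_th * P_I))**k * inner
    return p_out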
§.§ Exact and approximated average BER
The average symbol error rate is one of the main parameters for measuring the quality of a communication system. In this section, we calculate this metric for the IFTR fading channel. The conditional BER in an AWGN channel for several relevant modulations with coherent detection can be written as <cit.>
P_e(x) = ∑_r=1^Rα_r Q(√(β_r x)).
The average BER is calculated by averaging over all possible channel realizations. From the result in <cit.> for Nakagami-m fading, by virtue of Corollary 1, the average BER in IFTR fading can be written, after some manipulation, as
P̅_e = ∑_r = 1^R α _r /2∑_j = 0^∞A_j [ 1 - √(β _r γ/2( 1 + K) + β _r γ)∑_k = 0^j 2kk.
. ×( 1 - β _r γ/2( 1 + K) + β _r γ/4)^k ].
In the high SNR regime (γ̅→∞), the average BER can be simplified by simply maintaining the first term in the infinite summation, as stated in Remark 2, yielding
P̅_e ≈∑_r = 1^Rα _r /2 A_0 [ 1 - √(β _r γ/2( 1 + K) + β _r γ)], γ̅→∞.
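Both the exact series and its high-SNR approximation are easy to evaluate. In the sketch below (ours), the collapsed coefficient in the expression above is read as the central binomial coefficient \binom{2k}{k}, the modulation is described by the (α_r, β_r) pairs (BPSK corresponds to α_1 = 1, β_1 = 2), and the coefficient vector A is assumed precomputed.

import numpy as np
from scipy.special import comb

def iftr_ber(A, gamma_bar, K, alphas=(1.0,), betas=(2.0,)):
    # Exact average BER from the series form; defaults give coherent BPSK.
    ber = 0.0
    for a_r, b_r in zip(alphas, betas):
        mu2 = b_r * gamma_bar / (2 * (1 + K) + b_r * gamma_bar)
        mu = np.sqrt(mu2)
        for j in range(len(A)):
            inner = sum(comb(2 * k, k) * ((1 - mu2) / 4)**k for k in range(j + 1))
            ber += a_r / 2 * A[j] * (1 - mu * inner)
    return ber

def iftr_ber_asymptotic(A0, gamma_bar, K, alphas=(1.0,), betas=(2.0,)):
    # High-SNR approximation keeping only the j = 0 term (A0 is the first coefficient).
    return sum(a / 2 * A0 * (1 - np.sqrt(b * gamma_bar / (2 * (1 + K) + b * gamma_bar)))
               for a, b in zip(alphas, betas))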
§ NUMERICAL AND SIMULATION RESULTS
This section presents figures illustrating the performance of IFTR fading channels. The obtained numerical results have been validated by Monte Carlo simulations where 10^7 random realizations of the IFTR distribution have been computed.
Based on Table I, numerical results involving infinite series have been calculated truncating to 40 terms, as it provides a satisfactory accuracy for all the considered cases. In all the presented figures we have assumed γ̅= 1.
The average capacity for IFTR fading is presented in Fig. <ref> for different values of the channel parameters {m_1, m_2,K,Δ}. The presented numerical results have been obtained from (<ref>). It can be seen that a higher capacity is obtained for high K. On the other hand, a high value of Δ (close to 1) yields lower capacity due to the increased probability that the specular components cancel each other, which increases the channel variability.
Fig. <ref> shows the outage probability (P_out) computed from (<ref>) versus the average SNR (γ̅) for different channel model parameters. It can be observed that decreasing Δ from 0.9 to 0.5, increasing K from 5 to 15, and decreasing m_1 and m_2 yields a better performance (lower outage probability), as these changes give rise to a reduced fading severity.
In Fig. <ref> the same values of the IFTR model parameters as in Fig. <ref> are considered, although now a multiantenna receiver is assumed under CCI for the outage probability. The same effect as in Fig. <ref> is observed when the channel parameters are modified, but the amount of variation in the outage probability is affected by the presence of CCI and the use of MRC reception.
For example, for γ̅=SIR=10 dB, the outage probability under CCI, P̂_out=2× 10^-4, is lower than P_out=5 × 10^-3 due to the MRC diversity gain when the values of the parameters are K=15, Δ=0.5, m_1=15 and m_2=7.5.
Fig. <ref> shows the outage probability with CCI for different system parameters vs. the SIR threshold. The numerical results of the outage probability from (<ref>) are plotted for different number of antennas N=1,2,3 and average interference power P_I=1,2 when L=1. It can be seen that as the number of received antennas increases, the outage probability decreases, and the diversity gain increases. Also, for a given SIR threshold, the outage probability is higher for larger average interference power, as expected. Monte-Carlo simulations show an excellent match to the numerical results.
Finally, Fig. <ref> shows the exact and asymptotic BER vs. the average SNR in IFTR fading for BPSK modulation (R=1, α_1=1, β_1=2). The figure shows this performance metric for different channel parameters Δ=0.1,0.5,0.9 when the fluctuating parameters m_1(=15.7) and m_2(=5.1) are non-integers. Again, increasing Δ results in higher channel variability, causing a detrimental impact on performance, i.e., a higher average BER. It is worth mentioning that when the average SNR is above 20 dB, the asymptotic curves, which are much simpler to compute, yield very good approximated results, and above 30 dB the exact and asymptotic results are indistinguishable in all the presented cases.
§ CONCLUSION
In this paper, a new formulation in series form has been derived for the PDF and CDF of the IFTR fading model. The convergence of the obtained series has been demonstrated, and the truncation error incurred in numerical computation has been assessed using the Kolmogorov-Smirnov goodness-of-fit test.
We have shown that, leveraging any average performance metric already known for the much simpler Nakagami-m fading model, the corresponding metric can be readily obtained for IFTR fading.
Additionally, the GMGF of IFTR fading has been obtained which, for most cases of interest, can be expressed in closed form, thus circumventing the model's mathematical complexity and yielding several relevant performance metrics in closed form, as well as the moments of the distribution and the amount of fading.
Finally, the new and expanded statistical characterization of the IFTR fading model has been exemplified, showing and discussing numerical results for the average capacity, the outage probability with and without interference and the BER for BPSK modulation, which have been verified by Monte Carlo simulations.
§ ACKNOWLEDGMENT
The authors would like to express their gratitude to F. Javier López-Martínez for his insightful comments during the development of this work.
§ PROOF OF LEMMA I
Let us consider the fading model defined in (<ref>) conditioned on the particular realizations of the RVs ζ_1=u_1, ζ_2=u_2. Thus, we can write
V_r |_ u_1, u_2 = √(u_1) V_1 e^ jϕ _1 + √(u_2) V_2 e^ jϕ _2 + X + jY,
which corresponds to the TWDP fading model with specular component amplitudes √(u_1) V_1 and √(u_2) V_2 and parameters
K_u_1 ,u_2 = u_1 V_1^2 + u_2 V_2^2 /2σ ^2=u_1K_1+u_2K_2,
Δ _u_1 ,u_2 = 2√(u_1 u_2 ) V_1 V_2 /u_1 V_1^2 + u_2 V_2^2 ,
which satisfy
K_u_1 ,u_2 Δ _u_1 ,u_2 = √(u_1 u_2 ) V_1 V_2 /σ ^2 = √(u_1 u_2 )KΔ.
The conditional average SNR for the model definition given in (<ref>) will be
γ̅_u_1 ,u_2 = E_s/N_0( u_1 V_1^2 + u_2 V_2^2 + 2σ ^2 ) = E_s/N_0 2σ ^2 ( 1 + K_u_1 ,u_2 ).
On the other hand, by averaging over all possible realizations of the unit-mean RVs ζ_1, ζ_2, the unconditional average SNR will be
γ̅= E{γ̅_u_1 ,u_2 }=E_s/N_0(V_1^2+V_2^2+2σ^2)= E_s/N_02σ^2(1+K),
and therefore, equating (<ref>) and (<ref>), we can write
1 + K_u_1 ,u_2 /γ̅_u_1 ,u_2 = 1/( E_s /N_0 )2σ ^2 = 1 + K/γ̅,
From the PDF of the received power of the TWDP fading model given in <cit.> as a mixture of Gamma distributions, the PDF of the conditional SNR of the model defined in (<ref>) can be written as
f_ γ_u_1 ,u_2^TWDP (x)= e^ - K_u_1 ,u_2∑_j = 0^∞K_u_1 ,u_2^j/j!f^𝒢( x;j + 1,γ̅_u_1 ,u_2 /1 + K_u_1 ,u_2 )
×∑_k = 0^j jk( Δ _u_1 ,u_2/2)^k∑_l = 0^k klI_2l - k( - K_u_1 ,u_2Δ _u_1 ,u_2),
which, from (<ref>)-(<ref>), can be rewritten as
f_ γ_u_1 ,u_2^TWDP (x) = e^ - u_1K_1 - u_1K_2∑_j = 0^∞1/j!f^𝒢( x;j + 1,γ̅/1 + K)
×∑_k = 0^j jk∑_q = 0^j - kj-kq( u_1K_1)^q( u_2K_2)^j - k - q
×( √(u_1 u_2) KΔ/2)^k∑_l = 0^k klI_2l - k( -√(u_1 u_2)KΔ),
The PDF of the SNR of the IFTR model can be obtained by averaging (<ref>) over all possible realizations of the RVs ζ_1 and ζ_2, i.e.
f_ γ^IFTR (x) = ∫_0^∞∫_0^∞ f_ γ_u_1 ,u_2^TWDP (x) f_ζ _1 ( u_1 )f_ζ _2 ( u_2 )du_1 du_2 ,
where
f_ζ _i( u_i ) = m_i ^m_i u_i^m_i - 1/Γ( m_i ) e^ - m_i u_i , i = 1,2.
The double integral in (<ref>) can be solved in closed-form by iteratively integrating with respect to variables u_1 and u_2. Thus, after changing the order of integration and summation, we can write
f_γ ^IFTR (x) = ∑_j = 0^∞f^𝒢( x;j + 1,γ/1 + K)
×∑_k = 0^j jk∑_q = 0^j - kj-kqK_1^q K_2^j - k - q/j!
×( KΔ/2)^k ∑_l = 0^k klm_1^m_1 /Γ (m_1 )m_2^m_2 /Γ (m_2 )ℋ_1,
where we have defined
ℋ_1 ≜∫_0^∞u_2^m_2 + j - k/2 - q - 1 e^ - ( m_2 + K_2 )u_2 ℐ_1(u_2 )du_2,
ℐ_1(u_2 ) ≜∫_0^∞u_1^m_1 + q + k/2 - 1 e^ - ( m_1 + K_1 )u_1
× I_2l - k( - √(u_1 u_2 ) KΔ)du_1.
We now consider the following equality from <cit.> and <cit.>:
𝒥 = ∫_0^∞t^μ - 1/2e^ - ptI_2ν( 2β√(t)) dt
=Γ(μ+ν+1/2)β^2ν/p^ν+μ+1/2
_1 F̃_1 (μ+ν+1/2,2ν+1,β^2/p),
where _1 F̃_1 is the regularized Kummer hypergeometric function, and from which (<ref>) can be written in closed-form as
ℐ_1(u_2 ) = ( - 1)^kΓ( m_1 + q + l)/ ( m_1 + K_1 )^m_1 + q + l( KΔ/2)^2l - k u_2^l - k/2
× _1F̃ _1 ( m_1 + q + l;2l - k + 1;u_2 K^2 Δ ^2 /4( m_1 + K_1 )).
Introducing (<ref>) into (<ref>) and solving the integral with the help <cit.> we can write
ℋ_1
= ( - 1)^k ( KΔ/2)^2l - kΓ( m_1 + q + l)/ ( m_1 + K_1 )^m_1 + q + l
×Γ (m_2 + j - k - q + l)/( m_2 + K_2 )^m_2 + j - k - q + l _2 F̃_1 ( m_1 + q + l,.
. m_2 + j - k - q + l;2l - k + 1;K^2 Δ ^2 /4( m_1 + K_1 )( m_2 + K_2 )),
which, together with (<ref>), yields the desired result in (<ref>) for the PDF of the SNR of the IFTR fading model. On the other hand, the CDF in (<ref>) is obtained by a simple integration of (<ref>) (see additional comments on this in Section <ref>).
§ PROOF OF LEMMA 2: CASE (II)
As in Appendix A, we consider an IFTR model conditioned on the particular realizations of the RVs ζ_1=u_1, ζ_2=u_2, which yields a TWDP model with specular components amplitudes √(u)_1 V_1 and √(u)_2 V_2, parameters K_u_1 ,u_2 and Δ _u_1 ,u_2 given, respectively, by (<ref>) and (<ref>), and conditional mean γ̅_u_1 ,u_2, given in (<ref>).
The GMGF for the TWDP model for n ∈ℕ can be obtained from <cit.> as
ϕ _γ̅_u_1 ,u_2 ^(n)( s ) = γ̅_u_1 ,u_2 ^n n! e^ K_u_1 ,u_2γ̅_u_1 ,u_2 s/1 + K_u_1 ,u_2 - γ̅_u_1 ,u_2 s∑_q = 0^n nqK_u_1 ,u_2^q/q!
×( 1 + K_u_1 ,u_2)^q+1/( 1 + K_u_1 ,u_2 - γ̅_u_1 ,u_2 s)^q+n+1∑_r = 0^q qr( Δ_u_1 ,u_2/2)^r
×∑_l = 0^r rl I_2l - r( K_u_1 ,u_2Δ_u_1 ,u_2γ̅_u_1 ,u_2 s/1 + K_u_1 ,u_2 - γ̅_u_1 ,u_2 s),
which can be written, by using the relations (<ref>)-(<ref>), as
ϕ _γ _u_1 .u_2 ^(n) (s) = γ̅^n n!e^γs/1 + K - γ̅s( u_1 K_1 + u_2 K_2 )∑_q = 0^n nq∑_p = 0^q - rq-rp
×( u_1 K_1 )^p ( u_2 K_2 )^q - r - p/q!( 1 + K)^q + 1/( 1 + K - γ̅s)^q + n + 1∑_r = 0^q qr
×( √(u_1 u_2 ) KΔ/2)^r ∑_l = 0^l rl I_2l - r( √(u_1 u_2 )KΔγ̅s/1 + K - γ̅s).
The GMGF of IFTR fading is obtained by averaging (<ref>) over all possible realizations of ζ_1, ζ_2 as
ϕ _γ ^(n)( s ) =∫_0^∞ ∫_0^∞ϕ _γ _u_1 .u_2 ^(n)(s) f_ζ_1(u_1) f_ζ_2(u_2)du_1 du_2.
Introducing (<ref>) into (<ref>) we can write
ϕ _γ _u_1 .u_2 ^(n) (s) = γ̅^n n!m_1^m_1 /Γ (m_1 )m_2^m_2 /Γ (m_2 )∑_q = 0^n nq∑_p = 0^q - rq-rp
×( K_1 )^p ( K_2 )^q - r - p/q!( 1 + K)^q + 1/( 1 + K - γ̅s)^q + n + 1∑_r = 0^q qr
×( KΔ/2)^r ∑_l = 0^l rlℋ_2,
where we have defined
ℋ_2 ≜∫_0^∞u_2^m_2 + q - r/2 - p - 1 e^ - ( m_2 - K_2 γ̅s/1 + K - γ̅s)u_2 ℐ_2 (u_2 )du_2,
ℐ_2 (u_2 ) ≜∫_0^∞e^ - ( m_1 - K_1 γ̅s/1 + K - γ̅s)u_1 u_1^m_1 + p + r/2 - 1
I_2l - r( √(u_1 u_2 )KΔγ̅s/1 + K - γ̅s)du_1.
Note that ℋ_2 and ℐ_2 are actually the same integrals as ℋ_1 and ℐ_1 defined, respectively, in (<ref>) and (<ref>), although with different coefficients, which are now in some cases rational functions of s. Therefore, following the same procedure as in (<ref>)-(<ref>), a closed-form expression can be found for ℋ_2 as given in (<ref>), which together with (<ref>) yields (<ref>).
|
http://arxiv.org/abs/2307.04421v2 | 20230710085412 | Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference | [
"Lei Li",
"Julia Camps",
"Zhinuo",
"Wang",
"Abhirup Banerjee",
"Marcel Beetz",
"Blanca Rodriguez",
"Vicente Grau"
] | eess.SP | [
"eess.SP",
"cs.CV",
"eess.IV"
] |
Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference
Lei Li, Julia Camps, Zhinuo (Jenny) Wang, Abhirup Banerjee, Marcel Beetz, Blanca Rodriguez, and Vicente Grau
Corresponding author: Lei Li (e-mail: [email protected]).
This work was supported by the CompBioMed 2 Centre of Excellence in Computational Biomedicine (European Commission Horizon 2020 research and innovation programme, grant agreement No. 823712).
L. Li was partially supported by the SJTU 2021 Outstanding Doctoral Graduate Development Scholarship.
A. Banerjee is a Royal Society University Research Fellow and is supported by the Royal Society Grant No. URF\R1\221314.
The work of A. Banerjee and V. Grau was partially supported by the British Heart Foundation Project under Grant PG/20/21/35082.
Lei Li, Abhirup Banerjee, Marcel Beetz, and Vicente Grau are with the Department of Engineering Science, University of Oxford, Oxford, UK.
Julia Camps, Zhinuo (Jenny) Wang, and Blanca Rodriguez are with the Department of Computer Science, University of Oxford, Oxford, UK.
Received / Accepted
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Myocardial infarction (MI) demands precise and swift diagnosis.
Cardiac digital twins (CDTs) have the potential to offer individualized evaluation of cardiac function in a non-invasive manner, making them a promising approach for personalized diagnosis and treatment planning of MI.
The inference of accurate myocardial tissue properties is crucial in creating a reliable CDT platform, and particularly in the context of studying MI.
In this work, we investigate the feasibility of inferring myocardial tissue properties from the electrocardiogram (ECG), focusing on the development of a comprehensive CDT platform specifically designed for MI.
The platform integrates multi-modal data, such as cardiac MRI and ECG, to enhance the accuracy and reliability of the inferred tissue properties.
We perform a sensitivity analysis based on computer simulations, systematically exploring the effects of infarct location, size, degree of transmurality, and electrical activity alteration on the simulated QRS complex of ECG, to establish the limits of the approach.
We subsequently propose a deep computational model to infer infarct location and distribution from the simulated QRS.
The in silico experimental results show that our model can effectively capture the complex relationships between the QRS signals and the corresponding infarct regions, with promising potential for clinical application in the future.
The code will be released publicly once the manuscript is accepted for publication.
Cardiac digital twins, myocardial infarction, inverse problem, cardiac MRI, QRS, multi-modal integration.
§ INTRODUCTION
Myocardial infarction (MI) is a major cause of mortality and disability worldwide <cit.>.
Assessment of myocardial viability is essential in the diagnosis and treatment management for patients suffering from MI.
In particular, the location and distribution of myocardial scars provide important information for patient selection and treatment planning.
Late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) has been widely used to characterize myocardial scars <cit.>.
However, the incorporation of LGE into MRI examination prolongs scan times, and has potential side effects <cit.>.
Recent studies have tried to delineate scars using non-enhanced cine MRI, with promising preliminary results <cit.>.
Alternatively, the electrocardiogram (ECG) can be used to reveal abnormalities related in electrophysiology post-MI <cit.>.
For example, ST-segment elevation and T-wave inversion are commonly used indicators of cardiac remodeling associated with different stages of MI <cit.>.
In contrast, QRS patterns have received less attention in the literature, though they also provide valuable information about the extent and location of myocardial damage following an MI <cit.>.
It is still partly unclear how QRS abnormalities reflect MI characteristics, such as location, size, transmural extent, and cardiac electrical activity alterations.
Therefore, a reliable technique to detect and delineate infarct regions combining non-enhanced imaging and QRS data is highly desirable.
Cardiac “digital twin" (CDT) technology can create virtual models of the heart combining cardiac images, ECG, and other subject-specific information <cit.>.
It allows clinicians to visualize and analyze the structure, function, and electrical activity of the heart in real-time, providing valuable insights into the underlying mechanisms of MI <cit.>.
As fig:intro:CDT shows, CDT workflows usually involve two stages, namely anatomical and functional twinnings, which present various challenges to overcome <cit.>.
The anatomical twinning stage involves the segmentation of cardiac images, reconstruction of the 3D geometry of the heart, and the identification and extraction of relevant anatomical structures.
It is complicated by the variability in the heart's anatomy across individuals, as well as by imaging artifacts and noise.
At the functional twinning stage, the main challenge is to solve the inverse problem of electrocardiography, i.e. inferring electrophysiological properties in the myocardium from the ECG.
This is complicated by the limitations of ECG recordings, which are sparse, noisy, and subject to substantial uncertainties.
To solve the inverse problem, state-of-the-art approaches can be coarsely separated into two kinds: deterministic and probabilistic methods <cit.>.
Deterministic approaches in cardiac electrophysiology involve minimizing a cost function that quantifies the discrepancy between the observed data and the model predictions.
For a robust inverse solution, spatial and/or temporal regularization <cit.> and physics-informed regularization <cit.> have been widely used.
Probabilistic methods rely on Bayesian inference theory and numerical techniques to generate posterior distributions for the model parameters <cit.>.
They can incorporate prior knowledge into the parameter estimation with an uncertainty, which can be used to guide decision-making and assess the robustness of the results <cit.>.
Nevertheless, conventional probabilistic methods are usually computationally expensive, as repeated numerical simulations are required to generate samples for the posterior distribution.
Recently, deep learning based probabilistic methods have emerged as an alternative to conventional methods for modeling complex dynamics of cardiac electrical activity.
They can leverage deep neural networks to approximate the posterior distribution of the model parameters or latent variables, providing faster and more accurate approximations.
For example, Ghimire et al. <cit.> proposed a deep generative model to reconstruct cardiac transmembrane potential from ECG data.
Li et al. <cit.> designed a deep computational model for the inverse inference of ventricular activation properties in a non-invasive and efficient manner.
Xie et al. <cit.> employed a physics-constrained deep learning framework to inversely predict the heart-surface electrical signals from body surface potential maps.
Sahli et al. <cit.> developed physics-information neural networks for the reconstruction of activation maps in cardiac electrophysiology.
Dhamala et al. <cit.> proposed a generative variational autoencoder for parameter estimation of a personalized cardiac model.
In addition to inferring the electrophysiological properties under sinus rhythm, several studies tried to investigate the propagation of cardiac electrical signals under arrhythmias based on deep neural networks.
For example, Meister et al. <cit.> employed graph convolutional neural networks to estimate the depolarization patterns in the myocardium with scars.
Bacoyannis et al. <cit.> reconstructed activation patterns of the myocardium with various local wall thicknesses, as thin walls indicate infarct regions.
However, with regards to different post-MI scenarios, the inverse inference of electrophysiological heterogeneity in the infarct regions has not been fully investigated.
In this work, we develop a deep computational model for the inverse inference of post-MI with different properties, varying the infarct location, size, and transmural extent.
We first conduct a sensitivity analysis to investigate the relationship between QRS abnormalities and infarct characteristics in post-MI.
This analysis provides insights into how variations in QRS signals are associated with specific infarct properties, informing the subsequent inverse inference process.
The framework can efficiently combine the anatomical properties from cine MRI and electrophysiological information from QRS simulated via a biventricular electromechanical model of post-MI.
This study provides an integrated and personalised perspective that incorporates the features from multi-modal data to predict tissue properties of post-MI, enabling the construction of a CDT platform.
To the best of our knowledge, this is the first deep learning based computational model that addresses the inverse inference of MI with different characteristics.
§ METHODOLOGY
§.§ Anatomical Twinning: Mesh Reconstruction
At the anatomical twinning stage, we reconstruct a subject-specific 3D torso-biventricular tetrahedral mesh from multi-view cardiac MRIs <cit.>.
Specifically, for the biventricular reconstruction, we first use a deep learning based ventricle segmentation from long- and short-axis cardiac MRIs and thus obtain sparse 3D contours.
We then perform a misalignment correction based on the intensity and contour information coupled with a statistical shape model, followed by a surface mesh reconstruction and volumetric tetrahedral mesh generation.
We utilize a two-step automated framework for the torso reconstruction, and the locations of the ECG electrodes (I, II, V1-V6, LA, RA, LL, RL) are measured from the personalized 3D torso mesh.
To ensure a symmetric, consistent, and intuitive biventricular representation across various geometries, we project the biventricular mesh into the consistent biventricular coordinate (Cobiveco) system <cit.>.
The Cobiveco system is defined by (tm, ab, rt, tv), which correspond to transmural, apicobasal, rotational, and transventricular coordinates, respectively.
The reader is referred to the anatomical twinning stage of fig:intro:CDT for the illustration of Cobiveco (tv is excluded there).
We represent infarct areas in the myocardium as an ellipse with radii r_tm, r_ab, and r_rt as follows,
(tm_i - tm_0)^2/r_tm^2 + (ab_i - ab_0)^2/r_ab^2 + (rt_i - rt_0)^2/r_rt^2≤ 1,
where (tm_0, ab_0, rt_0) is the center coordinate of the scar region.
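As a concrete illustration, labeling the infarct nodes of a Cobiveco-mapped mesh amounts to evaluating this inequality per node. The sketch below is ours: it assumes the per-node coordinates are available as arrays, and it wraps the periodic rotational coordinate when measuring rt - rt_0, a detail not spelled out in the text; a border zone can be obtained by calling the same function with enlarged radii and taking the set difference.

import numpy as np

def label_infarct(tm, ab, rt, center, radii):
    # Boolean mask of nodes inside the ellipsoidal infarct region in Cobiveco coordinates.
    tm0, ab0, rt0 = center
    r_tm, r_ab, r_rt = radii
    d_rt = np.abs(rt - rt0)
    d_rt = np.minimum(d_rt, 1.0 - d_rt)   # assumed wrap-around of the periodic rotational coordinate
    return (((tm - tm0) / r_tm)**2 + ((ab - ab0) / r_ab)**2 + (d_rt / r_rt)**2) <= 1.0

# e.g. scar = label_infarct(tm, ab, rt, (0.5, 0.4, 0.15), (3.0, 0.2, 0.1))
#      bz   = label_infarct(tm, ab, rt, (0.5, 0.4, 0.15), (3.0, 0.25, 0.13)) & ~scar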
We consider different post-MI scenarios, including seven locations, two transmural extents, two different sizes, and two different cardiac electrical activity alterations.
As fig:method:17AHA_MI_location shows, one can define the infarct areas consistently in the 17-segment American Heart Association (AHA) map <cit.>, enabling the study of the effects of MI properties at a population level.
Note that in this study, we only consider the scars in the left ventricle (LV), as the majority of clinically significant myocardial scars occur there <cit.>.
The LV region is defined in Cobiveco as tv = 0 ∨ (tv = 1 ∧ rt > 2/3) to include the whole septum.
For the comparison of different infarct sizes and cardiac electrical activity alterations, we only report on lateral MI as an illustrative case.
As tb:method:MI scenario shows, we simulate infarct at seven different locations and one smaller size on lateral MI, each with two levels of transmural extent, and one scenario with a slower CV on transmural large lateral MI, resulting in a total of 17 post-MI scenarios for each subject.
fig:method:MI_examples provides several examples of our experimental scenarios.
§.§ Functional Twinning: Forward Electrophysiological Simulation
At the functional twinning stage, we simulate cardiac electrophysiology via an efficient orthotropic Eikonal model <cit.>, which incorporates a human-based Purkinje system into the formulation of the activation times of root nodes (RN).
The simulation is performed on the Cobiveco mesh, solving:
√(∇^T t𝒱^2 ∇ t) = 1,
t(Γ_0) = pk(Γ_0)-min(pk(Γ_0)),
where 𝒱 are the orthogonal conduction velocities (CVs) of fibre, sheet (transmural), and sheet-normal directions, t is the time at which the activation wavefront reaches each point in the mesh, Γ_0 is the set of RN locations, and pk is a Purkinje-tree delay function from the His-bundle to every point.
Therefore, the earliest activation time at the RNs is defined as their delay from the His-bundle through the Purkinje tree normalized by the earliest activation, such that the wavefront originates at t = 0 in one of the endocardial RNs.
The QRS can be calculated from the activation time map (ATM) via a pseudo-ECG equation <cit.> for a 1D cable source with constant conductivity at a given electrode location (x',y',z'), as
ϕ_e (x',y',z' ) = a^2 σ_i/4 σ_e∫ - ∇ V_m ·[ ∇1/r] dx dy dz ,
where V_m is the transmembrane potential, ∇ V_m is its spatial gradient, r is the Euclidean distance from a given point (x,y,z) to the electrode location, a is a constant that depends on the fiber radius, and σ_i and σ_e are the intracellular and extracellular conductivities, respectively.
The pseudo-ECG method can efficiently generate normalized ECG signals without significant loss of morphological information compared to the bidomain simulation <cit.>.
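A minimal discrete version of the pseudo-ECG integral is sketched below (ours). It assumes the spatial gradient of V_m and a nodal volume weight have already been computed on the tetrahedral mesh (e.g., from the activation time map and a template upstroke), and simply accumulates -∇V_m · ∇(1/r) over nodes for one electrode; the constants follow the equation above.

import numpy as np

def pseudo_ecg(grad_vm, nodes, node_vol, electrode, a=1.0, sigma_i=1.0, sigma_e=1.0):
    # grad_vm: (N, 3) gradient of Vm per node; nodes: (N, 3); node_vol: (N,) volume weights.
    d = nodes - electrode                      # x - x_e
    r = np.linalg.norm(d, axis=1, keepdims=True)
    grad_inv_r = -d / r**3                     # gradient of 1/r with respect to the node position
    integrand = -np.einsum('ij,ij->i', grad_vm, grad_inv_r)
    return a**2 * sigma_i / (4 * sigma_e) * np.sum(integrand * node_vol)

# A full QRS trace follows by repeating this at each time step and electrode, and combining
# the electrode potentials into the standard 12-lead signals.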
In modeling the effects of scars on the QRS, it is essential to consider the electrophysiological properties of the infarct regions, such as the slower CVs <cit.>, which can lead to changes in the timing and amplitude of the ECG waveform and thus manifest as changes in QRS.
Therefore, we vary the CVs of infarct and healthy myocardial areas during QRS simulation (see Sec. <ref>).
As Fig. <ref> shows, the ATM of MI patients presents slower electrical signal propagation compared to that of healthy ones, resulting in corresponding alteration in the simulated QRS morphology.
§.§ Functional Twinning: Inverse Inference of Post-MI Properties
fig:method:computation model provides an overview of the proposed deep computation model, consisting of a dual-branch variational autoencoder (VAE) and an inference model.
The VAE captures both anatomical and electrophysiological features, while the inference model uses the latent space representation to predict scar and border zone location.
fig:method:network depicts the network architecture.
For the geometry reconstruction, we reconstruct coarse and dense point clouds (PCs) to simultaneously learn global shape and local anatomy of the ventricles.
Therefore, the PC reconstruction loss function is defined as follows,
ℒ^rec_PC = ∑_i=1^K(ℒ_i,coarse^chamfer + αℒ_i,dense^chamfer),
where K is the number of classes, α is the weight term between the two PCs, and ℒ^chamfer is the chamfer distance between the input and reconstructed PCs.
To improve the fidelity and resemblance of the reconstructed QR̂S to the original QRS, we minimize their mean-squared error (MSE) and dynamic time warping (DTW) distance <cit.>,
ℒ^rec_QRS = ℒ_MSE(QRS, QR̂S) + ℒ_DTW(QRS, QR̂S).
Finally, the loss function for training the VAE is calculated as,
ℒ_VAE = λ_PCℒ^rec_PC + λ_QRSℒ^rec_QRS + λ_KLℒ^KL,
where λ_PC, λ_QRS, and λ_KL are balancing parameters, and ℒ^KL is the Kullback-Leibler (KL) divergence loss to mitigate the distance between the prior and posterior distributions of the latent space.
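For reference, the DTW term of ℒ^rec_QRS can be illustrated with the classic dynamic-programming recursion below (plain NumPy, ours). The recursion itself is not differentiable, so in a training loop one would typically substitute a smooth variant such as Soft-DTW; the sketch only makes the distance being penalized explicit.

import numpy as np

def dtw_distance(x, y):
    # Classic DTW between two 1-D sequences with squared-difference local cost.
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1])**2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def qrs_recon_loss(qrs, qrs_hat):
    # Per-lead MSE + DTW reconstruction loss, as in the expression for L^rec_QRS.
    return np.mean((qrs - qrs_hat)**2) + dtw_distance(qrs, qrs_hat)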
For the inference, we predict the infarct location based on the low-dimensional features learned from the VAE.
To alleviate the class-imbalance issue existed in the MI segmentation, we combine the cross-entropy (CE) loss and Dice score loss,
ℒ_seg = ℒ_CE + λ_Diceℒ_Dice,
where λ_Dice is a balancing parameter.
For realistic infarct shape, we further introduce a compactness loss,
ℒ_compact = 1/N^pre∑_i=1^N^pred_i^pre + d_i^gd/d_max^gd,
where N^pre is the total number of predicted MI points, d_i^pre and d_i^gd are the Euclidean distances from each predicted MI point i to the center of predicted and ground truth MI, respectively,
and d_max^gd is the maximum Euclidean distance from ground truth MI points to their center.
We introduce two further constraints, to control infarct size and prevent scar from appearing in the right ventricle (RV), through two additional loss functions:
ℒ_size = N^pre-N^gd/N^gd,
ℒ_spa = N^pre_RV/N^pre,
where N^gd is the total number of ground truth infarct points, while N^pre_RV is the number of predicted infarct points located in the RV, excluding the septum boundary.
Hence, the final inference loss is defined as,
ℒ_inf = ℒ_seg + λ_compactℒ_compact + λ_sizeℒ_size + λ_spaℒ_spa + λ_VAEℒ_VAE,
where λ_compact, λ_size and λ_spa are balancing parameters.
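To make the three constraint terms concrete, the sketch below evaluates them on hard point-cloud masks (NumPy, ours). In the actual network they would act on soft predicted probabilities so that gradients can flow, and the RV mask is assumed to already exclude the septal boundary, as described above; note that ℒ_size as written is signed.

import numpy as np

def mi_constraint_losses(points, pred_mask, gt_mask, rv_mask):
    # points: (N, 3) node coordinates; masks: boolean per-node labels (non-empty prediction assumed).
    p_pred = points[pred_mask]
    c_pred = p_pred.mean(axis=0)
    c_gt = points[gt_mask].mean(axis=0)
    d_pred = np.linalg.norm(p_pred - c_pred, axis=1)          # distance to predicted-MI centre
    d_gt = np.linalg.norm(p_pred - c_gt, axis=1)              # distance to ground-truth MI centre
    d_max_gt = np.linalg.norm(points[gt_mask] - c_gt, axis=1).max()
    l_compact = np.mean((d_pred + d_gt) / d_max_gt)
    l_size = (pred_mask.sum() - gt_mask.sum()) / gt_mask.sum()
    l_spa = (pred_mask & rv_mask).sum() / pred_mask.sum()
    return l_compact, l_size, l_spa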
§ EXPERIMENTS AND RESULTS
§.§ Materials
§.§.§ Dataset and Simulation Setup
We collected 49 subjects with paired 12-lead ECGs and multi-view cardiac MRIs from the UK Biobank study <cit.>.
The dataset was randomly divided into 34 training subjects, 5 validation subjects, and 10 test subjects, and each subject has 17 post-MI scenarios.
The biventricular tetrahedral mesh for each subject was converted into PCs and then resampled into coarse and dense versions with 1,024 and 4,096 nodes, respectively.
On these meshes, we imposed simulated infarcts with different locations, sizes, transmural extents, and CV alterations.
During the electrophysiology simulations, a fixed set of RN locations and CV values were utilized.
Specifically, the RNs were placed at seven specific homologous locations based on Cobiveco – four in the LV and three in the RV.
In the LV, they were situated in the mid-septum, basal-anterior paraseptal, and two mid-posterior locations, while in the RV, they were located in the mid-septum and two free wall regions <cit.>.
Two sizes of lateral MI were achieved by halving r_ab and r_rt values for the small lateral MI compared to the large one.
Two transmural extents were set by varying r_tm, which was set as 3 and 0.5 for transmural and subendocardial scars, respectively.
For baseline QRS simulation, the CV values for different directions were set as follows: 65 cm/s along the fiber direction, 48 cm/s along the sheet direction, 51 cm/s along the sheet-normal direction, and 100 cm/s and 150 cm/s for the sparse and dense endocardial directions, respectively <cit.>.
These values were consistent with reported velocities for healthy human myocardium in previous studies <cit.>.
In the simulation of QRS for MI, the CVs in the areas of myocardial scarring and BZ were set to 10% and 50% (another slower CV configuration: 5% and 25%) of the respective values observed in healthy myocardium.
§.§.§ Evaluation
For evaluation, we compared the predicted MI distribution of our proposed automatic method with the gold standard set in the simulation phase.
To evaluate the segmentation accuracy, we calculated the Dice score, precision, and recall of the MI prediction, calculated on the PCs.
Furthermore, we propose a novel evaluation metric called the AHA-loc-score, to assess the accuracy of MI localization using the 17-segment AHA map,
AHA-loc-score = β_c-idδ_c-pre, c-gd + β_idIoU_id + β_c-d(1-d_c),
where δ_c-pre, c-gd indicates whether the AHA index of the predicted infarct center matches that of the ground truth,
IoU_id calculates the intersection over union (IoU) score of the AHA indices that appear in the predicted and ground truth MI regions,
and d_c refers to the normalized distance between the predicted and ground truth infarct centers.
The weights β_c-id, β_id, and β_c-d have values of 0.5, 0.2, and 0.3, respectively.
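The metric can be computed from a handful of summary quantities, as in the sketch below (ours). It assumes the predicted and ground-truth infarct centres have already been assigned to AHA segments and that d_c has been normalised to [0, 1], e.g., by a maximal intra-ventricular distance.

def aha_loc_score(pred_center_seg, gt_center_seg, pred_segs, gt_segs, d_center_norm,
                  w_cid=0.5, w_iou=0.2, w_cd=0.3):
    # Weighted sum of centre-segment match, IoU of covered AHA segments, and centre proximity.
    match = 1.0 if pred_center_seg == gt_center_seg else 0.0
    pred_segs, gt_segs = set(pred_segs), set(gt_segs)
    iou = len(pred_segs & gt_segs) / max(len(pred_segs | gt_segs), 1)
    return w_cid * match + w_iou * iou + w_cd * (1.0 - d_center_norm)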
§.§.§ Implementation
The framework was implemented in PyTorch, running on a computer with 3.50 GHz Intel(R) Xeon(R) E-2146G CPU and an NVIDIA GeForce RTX 3060.
We use the Adam optimizer to update the network parameters (weight decay = 1e-3).
The batch size is 4, and the initial learning rate is set to 1e-4 with a stepped decay rate of 0.5 every 6800 iterations.
The balancing parameters in Sec. <ref> are set as follows: α=5, λ_KL=0.01, λ_compact=1, λ_size=1, λ_spa=1, and λ_VAE=1.
The simulation of one QRS for an MI scenario took about 5 min.
The training of the model took about 10 hours (300 epochs in total), while the inference of the networks required about 9 s to process one test case.
§.§ Sensitivity Analysis of QRS for Different Post-MI Characteristics
We performed a sensitivity analysis in which we studied the effects of different infarct configurations in the QRS complex.
The aim was to find out which locations and sizes had a significant effect on QRS, and thus to establish the feasibility of the inverse inference task.
To quantify the discrepancy between QRS shapes, we employed a global measure, DTW, which compares signals of different lengths with an additional penalty for the difference in QRS duration between the two signals <cit.>.
Furthermore, we introduced four QRS abnormalities reported in literature, i.e., QRS duration prolongation <cit.>, pathological Q-waves <cit.>, poor R wave progression (PRWP) <cit.>, and fragmented QRS (fQRS) <cit.>.
The reader is referred to fig:exp:abnormalQRS_MI_example for illustration of each local QRS criteria of post-MI.
QRS duration prolongation can occur due to the damage to the heart muscle and subsequent changes in electrical conduction of MI.
Pathological Q waves are typically deeper, wider, and longer than normal Q waves, and are usually associated with the loss of electrical activity in the area of the heart affected by the MI.
Specifically, it can be defined as the presence of Q wave with duration ≥ 0.03 s and/ or amplitude ≥ 25% of R-wave amplitude <cit.>.
PRWP refers to the absence of the normal increase in amplitude of the R wave in the precordial leads when advancing from lead V1 to V6 <cit.>.
In the literature, different definitions of PRWP exist <cit.>.
Here, we utilize specific criteria, such as the R wave amplitude of 2 mm or less in the lead V3/V4 and the presence of reversed R-wave progression.
This is determined when the R wave amplitude of V5 is less than that of V6 or the R wave amplitude of V2 is less than that of V1 or any combination of these.
fQRS refers to the presence of multiple small spikes or notches within the QRS complex <cit.>.
It is typically present in the lead corresponding to the location of the infarct zone.
Note that although these QRS abnormalities have been shown to be useful in the diagnosis and prognosis of MI in some studies, there is also conflicting evidence and debate among researchers regarding their clinical significance and usefulness <cit.>.
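In code, the amplitude- and duration-based criteria reduce to simple rules once the Q and R waves have been delineated per lead; the sketch below (ours) assumes such delineation is available, with amplitudes in millimetres at standard gain and durations in seconds, and follows the thresholds adopted above. The two PRWP conditions are combined with a logical OR here, which is an assumption, since the text does not state how they are combined.

def pathological_q(q_dur_s, q_amp, r_amp):
    # Pathological Q wave: duration >= 0.03 s and/or |Q| >= 25% of |R| in the same lead.
    return q_dur_s >= 0.03 or abs(q_amp) >= 0.25 * abs(r_amp)

def prwp(r_amp_mm):
    # Poor R-wave progression from per-lead R amplitudes keyed 'V1'..'V6'.
    low_mid = r_amp_mm['V3'] <= 2.0 or r_amp_mm['V4'] <= 2.0
    reversed_prog = r_amp_mm['V5'] < r_amp_mm['V6'] or r_amp_mm['V2'] < r_amp_mm['V1']
    return low_mid or reversed_prog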
§.§.§ Sensitivity Analysis: Global QRS Measure
To assess the impact of the 17 different MI scenarios on the QRS, we measured the dissimilarity between each of them and the baseline, as well as the dissimilarities among them.
As fig:exp:QRS_dissimilarity shows, the QRS complex showed morphological alterations in most post-MI scenarios when compared to the normal QRS complex.
Particularly, inferolateral, extensive anterior, and apical transmural MI presented more evident alterations compared to others.
One can see a significant decrease in QRS morphology alteration in small lateral MI when compared to that of large lateral MI, especially for subendocardial one.
The orientation and location of the heart within the torso can affect the direction and amplitude of the electrical signals detected on the body surface, which can lead to variation in the QRS complex morphology among different individuals.
Moreover, differences in the anatomy and physiology of the heart itself can also contribute to the variation in QRS morphology.
In the case of lateral MI, the variation in the QRS complex may be more pronounced.
This is because the electrical activity associated with ventricular depolarization needs to traverse a larger distance through the LV myocardium to reach the lateral wall, which can result in changes to the amplitude, duration, and morphology of the QRS complex.
The degree of transmurality presented a noticeable impact on the QRS morphology at all infarct locations, namely transmural scars generally caused more prominent changes in QRS morphology compared to subendocardial scars.
Although the QRS dissimilarities between transmural and subendocardial septal scars were relatively small (DTW^max=0.2 and DTW^avg=0.3), differences in QRS morphology can still be observed, as shown in fig:exp:simulated_QRS_examples.
Despite the influence of transmurality on QRS morphology, the differences in QRS between various infarct locations seemed to be more pronounced than those caused by the extent of transmurality.
This implies that the QRS has greater sensitivity in localizing MI rather than predicting its transmural extent.
The primary QRS morphological difference observed with varying degrees of CV reduction was the QRS duration: 99.5 ms vs. 113.8 ms on transmural large lateral MI.
However, our initial tests presented unexpected QRS simulation results when we significantly reduced the CVs in the MI regions.
This suggests that the personalized CV configuration of infarct areas during simulation requires further investigation in the future.
Most infarct locations were represented on the QRS by leads I, V5, and V6, whereas septal MI was represented by leads V1-V4 and V3-V4 for subendocardial and transmural ones, respectively.
This result is in agreement with those reported in clinical practice <cit.>.
Generally, larger scars tend to result in QRS changes appearing in more leads.
The ability of various QRS leads to accurately detect the location of infarction varied.
This is because the electrical activity of the heart is not uniform, and different leads may have a better view of certain regions of the heart.
Additionally, the location of the infarct and its extent can influence the morphology of the QRS complex in different leads, which can affect their ability to detect the infarct location.
§.§.§ Sensitivity Analysis: Local QRS Measure
The changes in QRS morphology for the 17 MI scenarios were reflected in multiple ways.
Here, we introduced several QRS criteria and compared the contribution of each of these for infarct detection.
We found that apical and inferolateral MI tended to present prolongation of the QRS duration: 124.1 ms and 107.7 ms (apical and inferolateral MI) vs. 90.4 ms (normal).
PRWP mainly occurred in extensive anterior, septal, and apical MI, similar as reported in the literature <cit.>.
Specifically, the R wave amplitude in the septal MI was sometimes flattened, while the R wave of V6 tended to be larger than that of V5 in the apical MI, as fig:exp:simulated_QRS_examples shows.
The prevalence of fQRS was more common in the inferior lead (lead II) compared with the anterior leads (leads V3 and V4) and the lateral leads (leads V5 and V6), similar to the results reported in Liu et al. <cit.>.
The presence of fQRS in lead II and leads V3-V4 indicated inferolateral and extensive anterior MI, respectively.
In contrast, pathological Q wave failed to classify MI from healthy subjects in our simulation system.
§.§ Inference Accuracy of Post-MI Properties
tb:results:MIinference presents the quantitative results of the proposed method, and fig:result:boxplot provides the boxplots of Dice score.
The proposed method obtained the best segmentation and localization performance on the transmural extensive anterior MI (Dice= 0.934 ± 0.028, AHA-loc-score = 0.987 ± 0.007).
Even for the scenarios with no notable QRS morphology changes, such as MI in the septum and limited anterior areas, the model can still localize the corresponding infarct (DTW^max=0.4, AHA-loc-score ≈ 0.7).
Nevertheless, the model showed difficulties in detecting lateral (especially for the subendocardial and small size ones, with Dice score of 0.097 ± 0.112) and inferior MI with Dice scores of 0.228 ± 0.252 and 0.173 ± 0.288 for subendocardial and transmural one, respectively.
In general, the segmentation of the transmural MI tended to be more accurate than that of the subendocardial MI (Dice: 0.518 ± 0.347 vs. 0.396 ± 0.271).
This observation aligned with expectations, since transmural MI often exhibit more pronounced and distinct QRS abnormalities compared to subendocardial MI, as proved in previous sensitivity analysis.
As a result, our model can leverage these noticeable differences to identify and segment the affected region accurately.
Nevertheless, their ability to precisely determine the location of the infarction within the myocardium did not vary significantly (AHA-loc score: 0.610 ± 0.343 vs. 0.659 ± 0.339).
This can be attributed to the fact that the localization of MI is not solely dependent on the depth or extent of the infarct.
Furthermore, the accuracy of predicting scars was generally higher than that of predicting border zones (BZs).
This could be because the complex nature of BZs, where the myocardial tissue undergoes a transition from healthy to scarred, introduces additional variability and ambiguity in the QRS signals, leading to a lower prediction accuracy for BZs.
The performance in terms of Dice coefficient, precision, recall and AHA-loc-score was generally consistent.
However, in specific cases like apical, limited anterior, and inferolateral transmural MI, precision may exhibit a slight superiority over the Dice.
Apical MI obtained the highest AHA-loc-score, indicating its accurate and reliable localization.
This could be attributed to the uniqueness of the apical location, allowing for a more precise and unambiguous localization of MI due to the absence of significant interference from neighboring structures.
Figure <ref> provides 3D results of a representative test subject on different scenarios.
One can observe that the 3D visualization agrees well with the quantitative analysis result.
There were outliers appearing in the inferior area for lateral MI detection and vice versa, which suggests that the model had difficulty distinguishing between the lateral and inferior MI areas based on their QRS.
Furthermore, even though extensive anterior and inferolateral MI both covered large areas, the detection of inferolateral MI tended to be more difficult compared to that of extensive anterior MI, which can be further proved in the correlation study of MI volume presented in fig:result:volume_regression.
§.§ Ablation Study
Accurate MI inference goes beyond merely identifying the location of the infarction, but also requires a comprehensive assessment of the extent of infarct tissue.
Therefore, we introduced additional constrains, namely localization constrains (ℒ_spa and ℒ_comp) and an extent constrain (ℒ_size).
To evaluate their effectiveness, we conducted an ablation study by selectively removing them from the proposed framework, as presented in tb:result:ablation_study.
One can see that in most scenarios the proposed method obtained the best performance compared to others.
For example, without the localization constraints, the model presented worse performance in identifying septal MI.
Note that septal MI normally presents complexity for detection, due to its unique position and overlapping ECG effects from neighboring regions, such as the anterior and inferior walls.
We observed that the absence of ℒ_compact led to improved Dice in cases of inferolateral and subendocardial limited anterior MI and decreased Dice in cases of extensive anterior MI.
Nevertheless, the reduction in outliers observed in the visualization results suggests that ℒ_compact effectively minimizes their occurrence, leading to more reliable and accurate predictions.
The extent constraint was also crucial, particularly in distinguishing between subendocardial and transmural MI that present different sizes in the same anatomical position.
§.§ Extended Evaluation
§.§.§ Exploring the Detection Limit of QRS for Small Infarct Areas
To investigate the smallest infarct area that can be detected from QRS complexes, we used apical MI as an example, varied the infarct size, and retrained the model starting from the pre-trained weights.
The idea behind this approach is to determine the sensitivity of QRS-based detection methods for small infarct areas, which may have important clinical implications for risk stratification and management of post-MI patients.
Figures <ref> (a) and (c) demonstrate that as the infarct size decreased, the QRS morphological changes also diminished.
This is because a smaller infarct would have a lesser impact on the overall electrical conduction and activation patterns of the heart.
Consequently, the deviations in the QRS, which represent the depolarization of the ventricles, would be less pronounced.
Nevertheless, our method can still extract subtle features from the QRS complex that may be indicative of small infarct areas, as fig:result:QRS_MIsize (b) shows.
This ability persisted until the Cobiveco apicobasal radius r_ab of the scar was reduced to 0.1 for apical MI.
§.§.§ Correlation Analysis: Relationship between ECG/ PC Reconstruction and MI Inference Accuracy
To evaluate the robustness of the proposed inference scheme to the reconstruction error, we analyzed the relationship between the reconstruction and inference errors by the proposed method.
The accuracy of PC and ECG reconstruction was calculated as 0.5*ℒ^rec_PC with α=1 and ℒ^rec_QRS, respectively.
The r^2 values of scar/ BZ for PC and ECG-MI inference correlations were 0.002/ 0.006 and 0.008/ 0.009, respectively, indicating no relationship between inference and reconstruction accuracy.
This implies that the accuracy of MI inference using the proposed method was not significantly influenced by the quality of the reconstruction.
This is reasonable, as the proposed method focuses on extracting relevant features from the input data rather than relying solely on accurate reconstruction for MI inference.
Nevertheless, the reconstructions are still necessary as they provide valuable information for the inference.
To demonstrate this, we conducted a comparison by removing the reconstruction steps, and the results noticeably decreased (AHA-loc scores: 0.610 ± 0.343 vs. 0.561 ± 0.338 for subendocardial MI, and 0.659 ± 0.339 vs. 0.585 ± 0.367 for transmural MI), highlighting the significance of incorporating reconstruction in the inverse inference.
§.§.§ Comparison with Conventional MI Inference Method
To demonstrate the efficacy of our approach, we conducted a comparative analysis with the Selvester QRS scoring system <cit.>.
The score criteria have been employed to identify scar location based on QRS phenotypes, such as wave duration (Q or R), wave amplitude (R or S), amplitude ratio (R/Q, R/S, R/R^', or S/S^'), and QRS slurs or notches <cit.>.
...
§ DISCUSSION AND CONCLUSION
In this paper, we have developed a deep computational model to tackle the inverse problem in cardiac electrophysiology, i.e., inferring MI distribution from QRS signals.
Through the integration of anatomical and electrophysiological data, we achieve a comprehensive analysis that incorporates different infarct locations, sizes, transmural extents, and cardiac electrical activity alterations.
By consistently representing the ventricular anatomy in a coordinate reference system, we establish a robust sensitivity analysis framework for studying the association between infarct characteristics and QRS abnormalities.
The sensitivity analysis results have demonstrated significant morphological alterations in the QRS complex for various post-MI scenarios, particularly inferolateral, extensive anterior, and apical MI.
These findings suggest that the involvement of large areas of damaged heart muscle leads to pronounced changes in QRS morphology.
Furthermore, the analysis emphasizes the impact of transmurality on QRS morphology, namely transmural MI presents more prominent changes compared to subendocardial MI.
However, the differences in QRS between various infarct locations can be more pronounced than those caused by the extent of transmurality, indicating the greater sensitivity of QRS in localizing MI rather than predicting its transmural extent.
The analysis further highlights the importance of lead selection in accurately detecting the location of infarction.
Overall, the sensitivity analysis provides valuable insights into the relationship between infarct characteristics and QRS abnormalities, enhancing our understanding of the complex interplay between infarct characteristics and electrophysiological features.
The proposed method can effectively segment and localize MI, even in scenarios with limited QRS morphology changes, demonstrating its feasibility of developing CDTs for MI patients.
The results of the ablation study emphasize the importance of the localization and extent constraints in accurate MI inference.
The proposed method exhibits the ability to detect small infarct areas, although its sensitivity is limited, as proved in our extended study.
The correlation analysis demonstrates that while incorporating reconstruction in the inference process is important, the accuracy of MI inference is not significantly dependent on the quality of reconstruction.
To conduct a sensitivity analysis of MI properties, we intentionally select consistent infarct location, size and transmural extent for each subject.
While it ensures a controlled comparison, it may have led to a limited evaluation of MI inference.
We conduct a small test by randomly selecting an infarct for each subject and obtain reasonably good results in only a few cases.
This outcome is expected because randomly simulating a single scenario for each subject limits the ability of the proposed model to learn and generalize across different infarct characteristics.
In order to improve performance, in the future a more diverse and comprehensive dataset with a wider range of infarct scenarios should be used to train the model.
Note that this work is an initial study, and there are several limitations that need to be acknowledged.
Firstly, this study assumes a known set of RNs and fixed CVs for all subjects, which may not fully capture the complexity and heterogeneity present in real-world healthcare data.
Therefore, further research is needed to personalize these activation properties based on individual patient characteristics and specific healthcare settings.
Secondly, we only consider cardiac anatomical information and electrode nodes while disregarding the full torso geometry.
The inclusion of torso geometry could provide valuable insights into its influence on QRS patterns.
By incorporating full torso geometry in our future work, we can gain a more comprehensive understanding of the factors influencing QRS patterns and improve the accuracy of our predictions and interpretations.
Thirdly, this study focuses solely on the QRS complex, rather than considering the entire ECG signal.
Applying the analysis to the whole ECG signal would provide a more comprehensive assessment but may require significant computational resources.
To address this limitation, future research could explore computationally efficient surrogates to replace the expensive simulation model.
Finally, while the developed CDTs can provide valuable insights into the mechanisms of MI, they are based on simplified assumptions about the heart and may not capture all aspects of the complex interactions between cardiac structures and functions.
Given the limitations, particularly in the simulated dataset used, this can only serve as a proof of concept until validation on the clinical data can be performed.
|