entry_id (stringlengths 33-33) | published (stringlengths 14-14) | title (stringlengths 17-188) | authors (sequence) | primary_category (stringlengths 5-18) | categories (sequence) | text (stringlengths 2-629k) |
---|---|---|---|---|---|---|
http://arxiv.org/abs/2307.04469v1 | 20230710103740 | Beyond spectroscopy. II. Stellar parameters for over twenty million stars in the northern sky from SAGES DR1 and Gaia DR3 | [
"Yang Huang",
"Timothy C. Beers",
"Hai-Bo Yuan",
"Ke-Feng Tan",
"Wei Wang",
"Jie Zheng",
"Chun Li",
"Young Sun Lee",
"Hai-Ning Li",
"Jing-Kun Zhao",
"Xiang-Xiang Xue",
"Yu-Juan Liu",
"Hua-Wei Zhang",
"Xue-Ang Sun",
"Ji Li",
"Hong-Rui Gu",
"Christian Wolf",
"Christopher A. Onken",
"Ji-Feng Liu",
"Zhou Fan",
"Gang Zhao"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
1School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; [email protected]
2Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, P. R. China; [email protected]; [email protected]
3Department of Physics and Astronomy and JINA Center for the Evolution of the Elements (JINA-CEE), University of Notre Dame, Notre Dame, IN 46556, USA
4Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China
5Department of Astronomy and Space Science, Chungnam National University, Daejeon 34134, Republic of Korea
6Department of Astronomy, School of Physics, Peking University, Beijing 100871, People's Republic of China
7Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of China
8Department of Space Science and Astronomy, Hebei Normal University, Shijiazhuang 050024, People's Republic of China
9Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia
10Centre for Gravitational Astrophysics, Research Schools of Physics, and Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia
We present precise photometric estimates of stellar parameters, including effective temperature, metallicity, luminosity classification, distance, and stellar age, for nearly 26 million stars using the methodology developed in the first paper of this series, based on the stellar colors from the Stellar Abundances and Galactic Evolution Survey (SAGES) DR1 and Gaia EDR3.
The optimal design of stellar-parameter sensitive uv filters by SAGES has enabled us to determine photometric-metallicity estimates down to -3.5, similar to our previous results with the SkyMapper Southern Survey (SMSS), yielding a large sample of over five million metal-poor (MP; [Fe/H] ≤ -1.0) stars and nearly one million very metal-poor (VMP; [Fe/H] ≤ -2.0) stars.
The typical precision is around 0.1 dex for both dwarf and giant stars with [Fe/H] >-1.0, and 0.15-0.25/0.3-0.4 dex for dwarf/giant stars with [Fe/H] <-1.0.
Using the precise parallax measurements and stellar colors from Gaia, effective temperature, luminosity classification, distance and stellar age are further derived for our sample stars.
This huge data set in the Northern sky from SAGES, together with similar data in the Southern sky from SMSS, will greatly advance our understanding of the Milky Way, in particular its formation and evolution.
§ INTRODUCTION
Estimates of stellar parameters, in particular the metallicity, of a large, complete sample of stars is of vital importance to understand the formation and evolution of the Milky Way.
In the past decades, massive progress has been achieved by large-scale spectroscopic surveys, such as the HK Survey <cit.>, the Hamburg/ESO Survey (HES; ), the Sloan Digital Sky Survey (SDSS; ), the Radial Velocity Experiment (RAVE; ), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; ), the Galactic Archaeology with HERMES project (GALAH; ), and the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ).
However, the total number of observed targets collected from all those surveys is no greater than about ten million, less than one ten-thousandth of the estimated total number of Milky Way stars.
This under-sampling, together with the complex target-selection strategies, makes it extremely difficult to understand the full assembly history of our Galaxy.
In the first paper of this series <cit.>, we proposed to alleviate this issue of current spectroscopic surveys by deriving stellar parameters for a huge number of stars using narrow/medium-bandwidth photometric surveys (see Table 1 of H22 for a summary).
As a pioneering experiment, H22 present measurements of stellar parameters, including metallicity, luminosity classification, effective temperature,
distance, and stellar age, for over 24 million stars, based on the stellar colors from the SkyMapper Southern Survey (SMSS; ) and Gaia <cit.>, as well as the parallax measurements from Gaia.
This huge data set has already been applied to a number of Galactic studies, including searching for metal-poor stars <cit.>, discovery of ancient halo substructures <cit.>, and understanding the disk/halo formation history (Hong et al. 2023). Its contribution to this field is just beginning to be explored.
In this paper, we present a second pioneering experiment in the Northern sky, using the data from the first data release of the Stellar Abundance and Galactic Evolution Survey <cit.> and Gaia EDR3 <cit.>.
SAGES is an optical multi-band (u, v, g, r, i, DDO-51, Hα_ wide, Hα_ narrow) large-scale photometric survey, aiming to cover 12,000 square degrees of the Northern sky with δ > -5^∘ down to a 5σ depth of 21.5 in the u-band <cit.>.
The u-band filter is the same as in the Strömgren system <cit.>, and the v-band is optimized to provide reliable metallicity measurements by shifting the central wavelength of the SkyMapper v <cit.> to longer wavelengths, by about 100 Å, to reduce the effect of molecular bands of carbon and nitrogen on the metallicity estimates.
The special design of the uv filters (especially the v-band) provides photometric sensitivity to stellar surface gravity and metallicity that are well-demonstrated by numerous previous efforts with similar filter systems (e.g., ; H22).
The gri filters are SDSS-like, which can be used to estimate the stellar effective temperature.
The combination of Hα and other filters can be used to estimate the values of reddening.
Similar to our effort with SMSS (H22), here we present stellar parameter estimates for over 26 million stars using the uv-band data released in SAGES DR1, along with the photometric and parallax information provided by Gaia EDR3 <cit.>.
This paper is structured as follows.
In Section 2, we introduce the data adopted in the current work.
In Section 3, photometric-metallicity estimates from the stellar colors of SAGES DR1 and Gaia EDR3 are described, along with various checks on the photometric measurements.
The determinations of effective temperature, T_ eff, distance, and age are presented in Section 4.
Radial velocity measurements collected from previous spectroscopic surveys and the final sample are described in Section 5.
We present a summary in Section 6.
§ DATA
In the present work, the SAGES DR1 <cit.> dataset is adopted.
SAGES DR1 has released a total of about 100 million sources extracted from 36,092 accepted frames in the uv-bands collected by the 90-inch (2.3m) Bok Telescope at Kitt Peak National Observatory in Arizona.
DR1 covers about half of the Northern Hemisphere (9960 square degrees), about 90 per cent of the planned area.
The median completeness is about 20.4 and 20.3 for the u- and v-band, respectively.
This is one of the deepest near-ultraviolet large-scale photometric surveys, with a 5σ depth close to 21.5 in the u-band.
Compared to other near-ultraviolet deep photometric surveys, e.g., the SDSS <cit.> and the South Galactic Cap u-band Sky Survey <cit.>, SAGES has the advantage of using the two medium-bandwidth filters uv, which are optimized for estimates of stellar parameters.
In addition to the uv-band data provided by SAGES DR1, the optical G, G_ BP, and G_ RP bands, as well as astrometric information, are adopted from Gaia EDR3 <cit.>.
The Gaia EDR3 broadband photometry is essentially complete between G = 12 and G = 17.
The completeness is quite complicated for sources fainter than G = 17, which is strongly dependent on celestial position <cit.>.
In total, nearly 33 million stars are selected by the following cuts:
* flag_u/v = 0 in SAGES DR1
* Uncertainties of G, G_ BP, and G_ RP smaller than 0.05 mag
* Galactic latitude |b| ≥ 10^∘
SAGES was initially designed to avoid the high-reddening regions with |b| ≤ 10^∘, although a few disk areas are observed for specific reasons.
The former two cuts are required for precise metallicity estimates, but they do affect the completeness in the faint range (G > 18.5).
The last cut is to exclude those disk regions in our analysis, given their high values of extinction.
This sample is referred to as the main sample for our following analysis.
In this study, the colors u-G_ BP, v-G_ RP, and G_ BP - G_ RP are used.
We note that the mean G_ BP flux in Gaia EDR3 is
over-estimated for faint red sources with G ≥ 20.0 <cit.>. However, only 650 thousand stars (no more than 3 per cent of the full sample) in our final catalog are fainter than 20th magnitude in the G-band.
Therefore, the systematic issue for G_ BP is minor for the current study.
Unless indicated otherwise, these colors are corrected for reddening using the extinction map of <cit.>
[Here the SFD98 E(B-V) is corrected for a 14% systematic
over-estimate <cit.>].
The reddening coefficients for those colors, as well as for the G-band, are calculated in the same way as in H22.
§ METALLICITY DETERMINATION
§.§ Training Set
The key to determinations of metallicity using stellar colors is the training set.
The training set adopted here is similar to that used in H22, and consists of 1) LAMOST DR9[<http://www.lamost.org/dr9/v1.0/>], 2) the revised parameters of metal-poor ([Fe/H] ≤ -1.8) stars of SEGUE <cit.>, along with other datasets from SDSS (we refer to the total dataset below as SEGUE) and LAMOST <cit.>, processed by a custom version of the SSPP (LSSPP; Lee et al. 2015) together with careful visual inspection (by Beers), and 3) the bibliographical compilation of measurements
of stellar atmospheric parameters from high-resolution spectroscopy (HRS) by PASTEL <cit.> and SAGA <cit.>.
The metallicity scale of the former two sets is calibrated to the one obtained from the HRS dataset.
More details of our efforts to construct a training set with a homogeneous scale of metallicity, as well as other elemental-abundance ratios, will be described in Huang et al. (2023).
We then cross-match the above training set to the main sample, together with the following cuts:
* The stars must have small values of extinction (to minimize uncertainties due to reddening corrections): Galactic latitude |b| ≥ 20^∘ and E (B - V) ≤ 0.08
* The stars must have reliable metallicity estimates: LAMOST/SEGUE spectral signal-to-noise ratio (SNR) greater than 20, effective temperatures in the range 3800 ≤ T_ eff (K)≤ 7500 (i.e., typical FGK-type stars)
* The photometric uncertainties in the SAGES uv and Gaia G_ BPG_ RPG bands must be smaller than 0.035 mag
* The stars must have Gaia relative parallax measurement uncertainties smaller than 50%
In addition to the above cuts, only about half of the metal-rich ([Fe/H] >-1.0) stars are selected to avoid large differences in the number of metal-rich ([Fe/H] >-1.0) and metal-poor ([Fe/H] <-1.0) stars (see the right panel of Fig. 1).
Given the number of stars in common between SAGES and those with spectroscopy, the cut on Galactic latitude would not introduce bias in the training sets, e.g., a lack of metal-rich disk populations (see the right panel of Fig. 1).
A total of 223,537 stars (182,035 dwarfs and 41,502 giants) are selected to construct the final training set.
The absolute G-band magnitudes of these stars are derived by adopting the distances from <cit.>, based on the parallax measurements from Gaia EDR3.
The Hertzsprung–Russell (H-R) diagram of the training set is then shown in the left panel of Fig. 1.
By using empirical cuts defined in H22, the training stars are further divided into dwarf and giant stars.
The right panel of Fig. 1 shows the metallicity distributions of the dwarf and giant stars in the training set.
§.§ Metallicity Estimation
To estimate photometric metallicity, we first define the metallicity-dependent stellar loci of (u/v-G_ BP)_0 versus (G_ BP - G_ RP)_0 in Fig. 2 for both dwarf stars (top panel) and giant stars (bottom panel).
Similar to our results with SMSS DR2 in H22, both (u-G_ BP)_0 and (v-G_ BP)_0 colors exhibit significant sensitivities to stellar metallicity for different types of stars characterized by (G_ BP - G_ RP)_0.
Third-order 2D polynomials with 10 free parameters are then applied to describe the stellar loci of dwarf and giant stars:
(u/v - G_ BP)_0 = a_0,0 + a_0,1y + a_0,2y^2 + a_0,3y^3 + a_1,0x +
a_1,1xy + a_1,2xy^2 + a_2,0x^2 + a_2,1x^2y + a_3,0x^3,
where x and y represent (G_ BP - G_ RP)_0 and [Fe/H], respectively.
Two to three sigma-clipping is applied in the fitting process.
The resultant fit coefficients are listed in Table 1.
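To make the fitting procedure concrete, the sketch below shows one possible Python/NumPy implementation of the third-order 2D polynomial locus fit with iterative sigma clipping. It is not the authors' code; the array names (colors c, de-reddened (G_ BP - G_ RP)_0 values x, spectroscopic [Fe/H] values y) and the fixed 3σ clip are assumptions for illustration.

```python
import numpy as np

def design_matrix(x, y):
    # The 10 terms of Eq. (1): third-order 2D polynomial in x and y
    return np.column_stack([np.ones_like(x), y, y**2, y**3,
                            x, x*y, x*y**2, x**2, x**2*y, x**3])

def fit_locus(x, y, c, n_iter=5, clip=3.0):
    """Fit (u/v - G_BP)_0 as a function of x=(G_BP-G_RP)_0 and y=[Fe/H],
    iteratively rejecting outliers beyond `clip` sigma."""
    keep = np.ones_like(c, dtype=bool)
    for _ in range(n_iter):
        A = design_matrix(x[keep], y[keep])
        coeff, *_ = np.linalg.lstsq(A, c[keep], rcond=None)
        resid = c - design_matrix(x, y) @ coeff
        keep = np.abs(resid - np.median(resid[keep])) < clip * np.std(resid[keep])
    return coeff
```

A separate fit of this kind would be performed for each color (u - G_ BP and v - G_ BP) and for dwarfs and giants independently, as described in the text.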
Using the stellar loci, one can determine the photometric metallicity using the maximum-likelihood approach developed in H22.
For a given star, the metallicity is obtained from the probability distribution function (PDF) of [Fe/H] estimated from the likelihood function:
L_c = 1/(√(2π) σ_c_ obs) exp[ -(c_ obs - c_ pred)^2 / (2σ_c_ obs^2) ],
where c_ obs are the observed colors, i.e., (u/v - G_ BP)_0, with assumed Gaussian errors σ_c_ obs.
The c_ pred represents the same colors predicted by the metallicity-dependent stellar loci (defined by Equation 1) with (G_ BP - G_ RP)_0 from observations and [Fe/H] ranging from -3.5 to +0.8 in steps of 0.01 dex.
The uncertainty in the photometric-metallicity estimate is taken to be half of the 68% interval of the resultant PDF.
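The likelihood evaluation over the [Fe/H] grid can be sketched as follows (an illustrative Python snippet, not the authors' pipeline; the locus coefficients `coeff` are assumed to come from a fit of Eq. (1)):

```python
import numpy as np

FEH_GRID = np.arange(-3.5, 0.81, 0.01)   # [Fe/H] grid used in the text

def locus_color(x, y, a):
    # Eq. (1) evaluated at x = (G_BP - G_RP)_0, y = [Fe/H]
    return (a[0] + a[1]*y + a[2]*y**2 + a[3]*y**3 + a[4]*x
            + a[5]*x*y + a[6]*x*y**2 + a[7]*x**2 + a[8]*x**2*y + a[9]*x**3)

def photometric_feh(c_obs, sigma_obs, bp_rp0, coeff):
    """[Fe/H] estimate and its uncertainty for one star from one observed color."""
    c_pred = locus_color(bp_rp0, FEH_GRID, coeff)
    like = np.exp(-0.5 * ((c_obs - c_pred) / sigma_obs)**2) / (np.sqrt(2*np.pi) * sigma_obs)
    pdf = like / np.trapz(like, FEH_GRID)          # normalized PDF of [Fe/H]
    feh_best = FEH_GRID[np.argmax(pdf)]
    cdf = np.cumsum(pdf); cdf /= cdf[-1]
    lo, hi = np.interp([0.16, 0.84], cdf, FEH_GRID)
    return feh_best, 0.5 * (hi - lo)               # half of the 68% interval
```

Estimates from the two colors would then be combined via an error-weighted mean, as described below.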
From the above approach, we estimate the photometric metallicities of training-set stars to be compared to the spectroscopic measurements as an internal test.
These comparisons are shown in Fig. 3 for both dwarf stars (top panel) and giant stars (bottom panel).
Generally, the estimated photometric metallicities agree with the spectroscopic metallicities very well for both dwarf and giant stars, either from (u - G_ BP)_0 or (v - G_ BP)_0; the overall scatter is only 0.09 dex and 0.13 dex for dwarf stars achieved by (u - G_ BP)_0 and (v - G_ BP)_0, respectively.
The scatter of the combined estimates using an error-weighted mean is further reduced to 0.08 dex, even better than the precision of low/medium-resolution spectroscopy.
As shown in the top-right panel of Fig. 4, no significant systematic offset is found for dwarf stars with photometric [Fe/H] >-1.0, and a mild offset of -0.20 to -0.4 dex (photometric minus spectroscopic) is found for metal-poor dwarf stars with photometric [Fe/H] ≤-1.0.
The metallicity precision for dwarf stars as revealed by the internal comparisons is a function of [Fe/H], with scatter smaller than 0.1 dex for [Fe/H] >-0.5, increasing to 0.3-0.4 dex at the extremely metal-poor end ([Fe/H] ∼ -3.0).
For giant stars, the overall scatter is around 0.11 dex.
The comparisons show that photometric metallicity derived from (v - G_ BP)_0 is in excellent agreement with that of spectroscopy, with negligible offsets for [Fe/H] >-2.0 and a small offset of -0.2 dex (photometric minus spectroscopic) at the extremely metal-poor end ([Fe/H] ∼ -3.0).
The metallicity precision from (v - G_ BP)_0 is around 0.1 dex for [Fe/H] >-1.0, and 0.2-0.3 dex for [Fe/H] ≤ -1.0.
The performance of photometric metallicity derived from (u - G_ BP)_0 is moderately worse, especially for warmer giant stars, which are mostly BHB stars (see the blue box in the bottom left panel of Fig. 3).
Finally, the internal checks indicate that there are no systematic trends with effective temperature for the photometric-metallicity estimates of both dwarf and giant stars (see the top-left panel of Fig. 4).
In addition to the internal test, we derive photometric metallicities for LAMOST targets with larger values of E (B-V) that are not included in the training set.
Using the LAMOST targets (including those stars with low values of extinction in the training set), we show the metallicity differences between the photometric and spectroscopic values as a function of E (B-V) in Fig. 5.
The metallicity differences (photometric minus spectroscopic) steadily decrease with E (B-V), and reach ∼ +0.2 dex at E (B-V) ∼ 0.5 for both dwarf and giant stars.
This trend is possibly due to the spatial systematic uncertainties of the SFD98 extinction map, as found most recently by <cit.>.
Moreover, <cit.> have shown that the reddening coefficients depend not only on effective temperature/intrinsic colors, but also on the extinction itself (ignored in this work).
The neglect of the extinction term may also partly contribute to this E (B-V) dependent trend.
To correct for this systematic trend, a fifth-order polynomial is applied to describe the differences as a function of E (B-V) for dwarf and giant stars, respectively.
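A fit of this kind is straightforward; the short sketch below (illustrative Python, with hypothetical variable names for the training offsets) shows one way to build and apply such a correction, with separate fits for dwarfs and giants as stated above.

```python
import numpy as np

def ebv_correction(ebv_train, dfeh_train, order=5):
    """Fit the photometric-minus-spectroscopic [Fe/H] difference as a
    fifth-order polynomial in E(B-V) and return a correction function."""
    coeff = np.polyfit(ebv_train, dfeh_train, order)
    return lambda ebv: np.polyval(coeff, ebv)

# e.g., for a dwarf star: feh_final = feh_phot - correct_dwarf(ebv_star)
```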
According to the above tests, the final metallicity of a dwarf star is given by the combined estimate if both (u - G_ BP)_0 and (v - G_ BP)_0 colors are
available, or given by the single measurement from either (u - G_ BP)_0 or (v - G_ BP)_0, depending on which color is available.
The final metallicity of a giant star is given by the measurement of color (v - G_ BP)_0, or the color (u - G_ BP)_0 if the former is not available.
In this manner, photometric-metallicity estimates are derived for over 26 million stars (23 million dwarf stars and 3 million giant stars) in SAGES.
Note that the extinction-dependent zero-point offsets are corrected using the fifth-order polynomial constructed above.
The G-band magnitude distributions of stars with metallicity estimates are shown in the left panel of Fig. 6.
The overall completeness limit is around magnitudes G = 17.5 and 18.5, for dwarf and giant stars, respectively.
As mentioned earlier, we caution that the completeness of Gaia broadband photometry is quite complicated, especially in crowded regions, for stars with G > 17 <cit.>.
The photometric-metallicity distributions of dwarf and giant stars are shown in the right panel of Fig. 6.
The total number of very metal-poor (VMP; [Fe/H] < -2.0) stars is about one million, which is the largest database of VMP candidates yet assembled from photometric techniques.
The metallicity uncertainty of a star is contributed by two sources: the method error deduced from the internal checks and the random errors derived from the likelihood function.
The metallicity uncertainty as a function of G-band magnitude is shown in Fig. 7; it is dominated by the method error at the bright end and by the random errors at the faint end.
§.§ Comparison with APOGEE DR17 and GALAH DR3+
The accuracy of our photometric estimates of metallicity is examined by comparisons with the independent spectroscopic measurements from the APOGEE DR17 <cit.> and GALAH DR3+ <cit.>.
The comparisons are shown in Fig. 8 for 72,995 high-quality (SNR ≥ 30) stars in common with APOGEE and 13,038 high-quality (SNR ≥ 30) stars in common with GALAH DR3+.
Generally, the photometric-metallicity estimates agree very well with the spectroscopic values, without significant offsets.
The overall scatter is only 0.09 dex for dwarf stars and 0.10-0.15 dex for giant stars.
The zero-point and precision of individual metallicity bins are also examined in the lower panels of Fig. 8; the results are consistent with our internal tests (see Fig. 4).
We also present the metallicity differences between the photometric estimates and spectroscopic values from APOGEE DR17 as a function of E (B-V) in Fig. 9.
The plot clearly shows that the offsets are all around zero for different bins of E (B-V), a validation of our polynomial corrections described in Section 3.2 (see Fig. 5).
§.§ Comparison with Metal-poor Samples from High-resolution Spectroscopy
To explore the capabilities of the SAGES filters for determinations of metallicity for metal-poor stars, we collect samples of independent metallicity estimates from HRS, especially for metal-poor stars.
The HRS samples we compare with include a sample of the most metal-poor stars <cit.>, the R-Process Alliance sample <cit.> for over 600 VMP stars, the CFHT ESPaDOnS follow-up observations of 132 metal-poor candidates selected from the Pristine survey <cit.>, the Subaru follow-up observations of 400 VMP candidates selected from the LAMOST <cit.>, and the GTC follow-up observations of extremely metal-poor (EMP) candidates identified from the Pristine and LAMOST surveys <cit.>.
We cross-match the SAGES sample to the collected HRS samples and find 112 stars in common (54 dwarfs and 58 giant stars).
The comparison result is shown in Fig. 10.
Generally, our photometric-metallicity estimates are consistent with the HRS values for metal-poor stars without significant carbon enhancements ([C/Fe] < +0.6).
The overall scatter of the differences (photometric minus spectroscopic) is 0.57 dex and 0.30 dex, respectively, for dwarf and giant stars, with mild offsets of +0.38 dex and +0.18 dex, respectively.
The result is in line with our internal checks (see Fig. 4).
We note the photometric-metallicity estimates of ultra metal-poor (UMP; [Fe/H] < -4.0) stars can be over-estimated by up to 2 dex for stars with very high carbon enhancements ([C/Fe] ≥ +2.0).
§.§ Comparison with SMSS and Gaia XP Spectra
We compare our results to those of H22 from SMSS and those of <cit.>, who recently delivered metallicity estimates for over 120 million stars from Gaia XP low-resolution spectra using a data-driven technique.
As shown in Fig. 11, our estimates are consistent with those of <cit.> and H22, with tiny offsets and a scatter smaller than 0.20 dex.
Finally, although the total number of our metallicity estimates (SAGES + SMSS) does not exceed 50 million stars,
we emphasize that the volume probed by our sample is much larger than that of the sample constructed from Gaia XP spectra, given that the limiting magnitude of SAGES and SMSS is nearly 3 mag deeper than that of the Gaia XP spectra.
This larger volume will enable numerous interesting studies of the Milky Way, e.g., searching for substructures in the stellar halo.
§ EFFECTIVE TEMPERATURE, DISTANCE, AND AGE ESTIMATES
The effective temperatures of dwarf and giant stars are derived from the metallicity-dependent T_ eff–color relations constructed in H22.
Here the color is the de-reddened (G_ BP - G_ RP)_0, and metallicity is given by photometric [Fe/H].
In this way, effective temperatures are obtained for all of our program stars.
As examined with over 159,000 stars in common, the effective temperature estimated in this work is quite consistent with that from LAMOST, with a small offset around -24 K (this work minus LAMOST) and a scatter of only 84 K (see Fig. 13).
Distances estimated by <cit.> are adopted for stars with reliable parallax measurements with precision better than 30%, parallax greater than 0.15 mas, and renormalized unit weight error (RUWE) smaller than 1.4.
A total of 15,974,812 stars have distances estimated in this way.
Using the apparent G-band magnitudes and SFD E (B-V), the G-band absolute magnitudes have been derived for the nearly 16 million stars with reliable geometric distances.
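The conversion from apparent to absolute magnitude is the standard one; a minimal sketch is given below (illustrative Python; the G-band reddening coefficient R_G shown here is an assumed placeholder value, not the coefficient actually derived in this work).

```python
import numpy as np

R_G = 2.5   # assumed G-band reddening coefficient; the paper computes its own value

def absolute_G(G_app, dist_pc, ebv):
    """G-band absolute magnitude from apparent G, geometric distance (pc),
    and the (corrected) SFD E(B-V)."""
    return G_app - 5.0 * np.log10(dist_pc) + 5.0 - R_G * ebv
```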
Fig. 12 is the Hertzsprung-Russell (H-R) diagram for about 8 million stars with relative parallax error better than 10%, parallax greater than 0.4 mas, and RUWE≤ 1.4.
Guided by the isochrones of PARSEC <cit.>, empirical cuts are defined to further classify dwarf stars into main-sequence turn-off, main-sequence,
and binary stars.
For the stars without geometric distance estimates, the distances are obtained by inferring their absolute magnitudes from the constraints of stellar colors and photometric metallicity.
For main-sequence dwarf stars, the G-band absolute magnitudes are derived from the third-order 2D polynomial relation constructed in H22.
Combining with the G-band magnitude and the SFD E (B -V), the distances are found for over one million main-sequence dwarf stars with (G_ BP - G_ RP)_0 ≥ 1.0.
For giant stars, a likelihood method developed in <cit.> and <cit.> is adopted to infer the i-band absolute magnitude using the (g - i)_0 color, photometric [Fe/H], and empirical color–magnitude fiducials interpolated from six globular clusters.
Here, the g- and i-band magnitudes are from the Pan-STARRS1 surveys <cit.>; the reddening-correction coefficients are from <cit.>.
The interested reader is referred to X14 or <cit.> for more details.
In the above manner, a total of over 1.6 million giant stars have their distances estimated. To test the accuracies of our distance estimates for giant stars, Fig. 14 compares these with those of X14 for over 1600 stars in common.
The results are consistent with each other, with a tiny relative offset of -3.7% (this work minus X14) and a scatter of 21.7%.
This scatter implies that both estimates have a typical precision of about 16%, which is expected by X14.
Finally, we derive stellar ages for stars with good parallax measurements, i.e., parallax measurements with precision better than 30%, parallax greater than 0.15 mas, and RUWE≤ 1.4, using the technique developed in H22.
Nearly 15 million stars have their ages estimated in this way.
We note that the RUWE cut cannot exclude all of the binary stars, whose ages may be over-estimated.
As noted by H22, this technique is mostly valid for main-sequence turn-off and sub-giant stars; uncertainties are larger for other types of stars in the H-R diagram.
We perform a similar check as done in H22 with over 160,000 stars in common between this work and <cit.>, who derived isochrone ages for over 3 million stars with both spectroscopic and astrometric information.
The check shows that the age estimates in this work agree with those from SD18, with an offset of 5% in relative age difference (age_ TW -age_ SD18)/age_ SD18 and a scatter in the relative age difference of around 20%.
§ RADIAL VELOCITIES AND THE FINAL SAMPLE
We collect measurements of radial velocities for our sample stars available from completed and ongoing spectroscopic surveys, including
GALAH DR3+ <cit.>, SDSS/APOGEE DR17 <cit.>, Gaia DR3 <cit.>, RAVE DR5 <cit.>, LAMOST DR9[<http://www.lamost.org/dr9/v1.0/>] and SDSS/SEGUE DR16 <cit.>, with typical measurement errors of 1.1, 0.5, 1.0-6.0, 2.0, 5.0 and 5.0 km s^-1, respectively.
In total, over 4.2 million stars in our final sample have radial velocity measurements.
The detailed contributions of radial velocities from each survey are given in Table 2.
If a star has radial velocity measurements from two or more surveys, the result from the survey with the highest spectral resolution is adopted.
We note that all of the radial velocity zero-points are calibrated to the updated APOGEE radial-velocity standard stars, which are constructed from SDSS/APOGEE DR17 using the same technique proposed in <cit.>.
In the final sample, over 22 million dwarf and 3 million giant stars have photometric-metallicity estimates (see Section 3) from the stellar colors provided by SAGES DR1 <cit.> and Gaia EDR3 <cit.>, and effective temperature estimates from the intrinsic (G_ BP - G_ RP)_0 colors and photometric [Fe/H] (see Section 4).
From the well-developed techniques described in H22, distances and ages are further derived for 18 and 15 million stars in the final sample, respectively (see Section 4).
The radial velocity measurements, if available from the spectroscopic surveys, and the astrometric parameters in Gaia EDR3 <cit.> are also included.
A description of the information for stars in the final sample catalog is presented in Table 3.
The final stellar-parameter sample catalog will be released by the SAGES project as a value added catalog.
This sample already represents significant progress in the development of stellar samples in the Northern sky for use in Galactic studies.
Together with our former effort based on SMSS DR2, described in the first paper in this series, these results provide photometric metallicities for on the order of 50 million stars, and will shed light on the formation and evolutionary history of our Galaxy.
The next step of this project is to extend this technique to derive photometric metallicities with improved precision, especially at the metal-poor end, and other
elemental-abundance ratios (e.g., [α/Fe] and [C/Fe]) from the narrow/medium-band photometric surveys <cit.>, or from Gaia XP low-resolution spectra, although only for stars with a relatively bright limiting magnitude around G ∼ 17.5 mag <cit.>.
§ SUMMARY
In this, the second paper of this series, we present stellar parameters for over 20 million stars in the Northern sky, using SAGES DR1 and Gaia EDR3.
With a careful and comprehensive selection of a training set from spectroscopic measurements, we present photometric-metallicity estimates for nearly 26 million stars (23 million dwarf and 3 million giant stars), with useful metallicity determinations down to [Fe/H] = -3.5.
Both internal and external checks show that the precisions of our photometric measurements are about 0.1 dex in the metal-rich range ([Fe/H] > -1.0) and 0.15-0.25/0.3-0.4 dex for dwarf/giant stars with [Fe/H]≤ -1.0.
This result is comparable to, or even better than, that obtained from low/medium-resolution spectroscopy.
In addition to metallicity, the final sample also includes measurements of effective temperature from metallicity-dependent T_ eff–color relations, distances either from Gaia parallax measurements or from the metallicity-dependent color-absolute magnitude fiducials, and ages from comparisons between observations and stellar isochrones.
Radial velocities from spectroscopic surveys and astrometric parameters from Gaia EDR3 are also included.
To date, we have delivered stellar parameters for over 50 million stars covering almost 3π steradians of sky, which will be useful to a variety of studies of the Milky Way.
§ ACKNOWLEDGEMENTS
This work is supported by National Key R&D Program of China No. 2019YFA0405500 and National Natural Science Foundation of China grants 11903027, 11833006, 11973001, 11603002, 11811530289 and U1731108.
We used data from the European Space Agency mission Gaia (<http://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC; see <http://www.cosmos.esa.int/web/gaia/dpac/consortium>).
T.C.B. acknowledges partial support from grant PHY 14-30152, Physics
Frontier Center/JINA Center for the Evolution of the
Elements (JINA-CEE), awarded by the US National Science
Foundation. His participation in this work was initiated by conversations that took place during a visit to China in 2019, supported by a PIFI Distinguished Scientist award from the Chinese Academy of Sciences. Y.S.L. acknowledges support from the National Research Foundation (NRF) of
Korea grant funded by the Ministry of Science and ICT (NRF-2021R1A2C1008679).
Y.S.L. also gratefully acknowledges partial support for his visit to the University
of Notre Dame from OISE-1927130: The International Research Network for Nuclear Astrophysics (IReNA),
awarded by the US National Science Foundation.
CAO acknowledges support from the Australian Research Council through Discovery Project DP190100252.
The Stellar Abundance and Galactic Evolution Survey (SAGES) is a multi-band photometric project built and managed by the Research Group of the Stellar Abundance and Galactic Evolution of the National Astronomical Observatories, Chinese Academy of Sciences (NAOC).
The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support the SkyMapper node of the ASVO has been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National
Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the
National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
|
http://arxiv.org/abs/2307.04634v1 | 20230710152157 | Toward optimal placement of spatial sensors | [
"Mingyu Kim",
"Harun Yetkin",
"Daniel J. Stilwell",
"Jorge Jimenez",
"Saurav Shrestha",
"Nina Stark"
] | cs.RO | [
"cs.RO",
"stat.OT"
] |
This paper addresses the challenges of optimally placing a finite number of sensors to detect Poisson-distributed targets in a bounded domain. We seek to rigorously account for uncertainty in the target arrival model throughout the problem. Sensor locations are selected to maximize the probability that no targets are missed. While this objective function is well-suited to applications where failure to detect targets is highly undesirable, it does not lead to a computationally efficient optimization problem. We propose an approximation of the objective function that is non-negative, submodular, and monotone and for which greedy selection of sensor locations works well. We also characterize the gap between the desired objective function and our approximation. For numerical illustrations, we consider the case of the detection of ship traffic using sensors mounted on the seafloor.
Log-Gaussian Cox process, Void probability, Optimal sensor placement, Jensen gap
§ INTRODUCTION
This paper addresses the challenging task of optimally placing a finite number of sensors to detect Poisson-distributed targets within a bounded domain. The primary objective is to develop an optimal sensor placement algorithm that enables the deployment of sensors based on acquired environmental and target data, possibly allowing for adjustments to sensor locations as new target data becomes available.
We model target arrivals using a Poisson distribution, and we consider that the target arrival rate, which is represented by the intensity function of the Poisson distribution, is uncertain. To model the uncertainty in the target arrival rate, we employ a log-Gaussian Cox process, which is a Poisson point process where the logarithm of the intensity function is a Gaussian process. We then estimate the underlying intensity function based on prior target arrival data. Based on the estimated intensity function, the selection of sensor locations is determined with the objective of minimizing the probability of failing to detect a target. We show that this objective is equivalent to maximizing the void probability of the Poisson process, which refers to the probability that no targets are undetected. We propose an approximation of the void probability as the objective function for the sensor placement problem. We show that our approximation of the void probability is submodular and monotonic increasing (monotone). Thus, greedy selection of sensor locations works well. For the numerical illustrations, we consider the case of subsea sensors that detect ship traffic. Example ship traffic data is obtained from historical records of the Automated Identification System (AIS) near Hampton Roads Channel, Virginia, USA.
Poisson point processes have been used to model target arrivals in various applications, such as conducting marine mammal surveys <cit.>, disease mapping <cit.>, crime rate modeling <cit.>, and border surveillance <cit.>. The authors in <cit.> consider a Poisson spatial point process with a known intensity function as the target arrival model. The authors in <cit.> assume that target arrivals follow a homogeneous Poisson point process with a known intensity value. In contrast, our approach uses an uncertain intensity function that can be estimated from historical data or in real-time. In <cit.>, the authors address greedy selection of sensor locations to detect Poisson-distributed target arrivals. However, in these studies, stochasticity in the intensity function is not accounted for. In <cit.>, the authors seek to adaptively identify a stochastic intensity function while choosing a sequence of single observation locations that minimize a reward function related to the number of missed targets. In contrast, we assume that a stochastic intensity function has been identified from historical data, and we seek a set of sensor locations that maximize the probability that no targets are missed throughout the entire domain. The existing studies in this field do not analyze the proximity of their solutions to the optimal solution. In our paper, we bridge this gap by conducting an analysis of the deviation between our proposed approximate solution and the optimal solution.
We model target arrivals as a log-Gaussian Cox process (LGCP) <cit.>. A Cox process is a Poisson process with a stochastic intensity function. For our applications, we model the logarithm of the intensity function as a Gaussian process. To estimate the intensity function based on prior data, we use the Integrated Nested Laplace Approximation (INLA) method, which is a deterministic approximation. INLA approximates the posterior distribution of latent Gaussian models using nested Laplace approximations <cit.>. We use the void probability as our objective function and select sensor locations where the void probability is maximum. We show that in our formulation of the sensor placement problem, maximizing the void probability is the same as minimizing the number of undetected targets.
§.§ Contributions
We address sensor placement using an LGCP target model. Because the optimization problem is numerically challenging, we propose a lower bound of the objective function that is submodular and monotone, and for which greedy sensor location selection works well. We further characterize the gap between the desired objective function, which is the probability that no targets are missed, and our lower bound, and we show via numerical examples that the gap appears to be small for representative problems that motivate our analysis.
The organization of the paper is as follows. In Section <ref>, we present a detection model with multiple sensors and target arrivals modeled as a log-Gaussian Cox process. In Section <ref>, we derive a lower bound for the void probability that is submodular and monotone, and facilitates computationally tractable selection of sensors. In Section <ref>, we analyze the gap between void probability and its lower bound from Section <ref>. In Section <ref>, we provide numerical results that show the efficacy of our proposed approach. The appendix shows the proofs of submodularity, monotonicity of the proposed objective function from Section <ref>, and of monotonic-decrease of the upper bound of Jensen gap from Section <ref>.
§ PROBLEM FORMULATION
This paper focuses on the sensor placement problem, specifically addressing scenarios where a set of sensors is used to detect stochastic target arrivals.
§.§ Sensor model
We define γ(s, a_i):S × S → [0,1] to be the probability of sensor i detecting a target at location s in a bounded domain S where a_i represents the location of sensor i.
The probability of failing to detect a target at location s with sensor i is expressed as 1 - γ(s, a_i). Let 𝐚 = {a_1, a_2, …, a_M } denote the locations of a set of M sensors. Then, when all M sensors are placed at 𝐚, the probability of failing to detect a target at location s is
π(s, 𝐚) := ∏_i=1^M ( 1 - γ(s, a_i) )
§.§ Target arrival model: Log-Gaussian Cox Process
Target arrivals in a bounded region S over a time interval T_c are modeled by an inhomogeneous Poisson point process with a random intensity Λ(S,T_c), where T_c is the time interval over which historical target arrival data are collected to compute an estimated target arrival rate per unit time within the domain. The intensity Λ(S,T_c) can be thought of as the expected number of target arrivals in area S over a time interval of length T_c and can be computed as
Λ(S,T_c) = 1/T_c∫_Sλ(s) ds
where λ(s):S → [0, ∞) is the intensity function at location s ∈ S. The intensity function is derived to represent the expected number of targets per unit area in a time-interval T_c. We assume that λ(s) is stochastic and the logarithm of the spatial variation in the intensity function is a Gaussian process.
log(λ(s)) ∼GP(μ(s), k(s,s'))
where μ(s), k(s,s') are mean and covariance functions respectively and s', s ∈ S. This model is called the log-Gaussian Cox process (LGCP). We refer the reader to <cit.> for more details on LGCP.
Given Λ(S,T_c), the probability of observing n targets within S over a time interval T, using the Poisson distribution, is
P(N(S,T) = n) = (Λ(S,T_c)T)^n/n!e^-Λ(S,T_c)T
where N(S,T) denotes the number of target arrivals.
§ SUBOPTIMAL SENSOR PLACEMENT
Our goal is to find optimal sensor locations that minimize the number of undetected targets in S and for a time period T.
§.§ Void probability
We let N̅(S,T) represent the number of undetected targets in S over time-interval T. The probability that N̅(S,T) is zero is computed from the Poisson process
P(N̅(S,T) = 0 |λ(s) )
= exp( - ∫_S T/T_cλ(s) π(s, 𝐚) ds )
where we say that the intensity function λ(s) has been thinned by the probability of failing to detect a target π(s,𝐚). The probability that N̅(S,T) is zero is known as the void probability of the log-Gaussian Cox process. Since we assume the target arrival intensity function λ(s) in (<ref>) is stochastic, the void probability is
P( N̅( S,T ) = 0 )
=𝔼_λ[exp(-∫_ST/T_cλ(s) π(s, 𝐚) ds)]
where (<ref>) represents (<ref>) after marginalizing out λ(s).
§.§ Void probability approximation
Let 𝐀 be the set of all possible sensor locations within S such that the location of a finite number of sensors is 𝐚⊂𝐀. We compute a set of optimal sensor locations such that the void probability of the thinned Cox process is maximized
𝐚^⋆ = _𝐚⊂𝐀𝔼_λ[exp(-∫_ST/T_cλ(s) π(s, 𝐚) ds)]
The objective function in (<ref>) is computationally challenging due to a stochastic variable λ(s) in the integrand. Therefore, we consider a lower bound for the objective function (<ref>) that can potentially be maximized with less computational effort than directly computing the void probability.
We use Jensen's inequality to obtain a computationally tractable lower bound to (<ref>). Furthermore, we show that over any discretized set of possible sensor locations, this lower bound is submodular and monotone. Thus, greedy selection of sensor locations is guaranteed to generate sensor locations at which the lower bound is within at least a factor (1-1/e) of the optimal sensor location <cit.>.
Jensen's inequality applied to (<ref>) yields
𝔼_λ[ e^- Λ̃(𝐚)]
≥ e^-𝔼_λ [ Λ̃(𝐚)]
where
Λ̃(𝐚)=∫_S T/T_cλ(s) π(s, 𝐚) ds
The inequality in (<ref>) provides a lower bound to the void probability. Since the lower bound e^-𝔼_λ [ Λ̃(𝐚)] is computationally tractable, we seek a set of sensor location 𝐚^⋆ that maximizes the lower bound in (<ref>). That is
𝐚^⋆ = _𝐚⊂𝐀 exp( - ∫_S λ(s) π(s, 𝐚) ds )
where we denote 𝔼_λ[ T/T_cλ(s) ] by λ(s), which is the mean of the intensity function for the Cox process.
We may apply the logarithm without changing the extremum due to the monotonic nature of the logarithm function. Thus, we may apply the logarithm to the objective function in (<ref>), yielding
𝐚^⋆ = _𝐚⊂𝐀 - ∫_S λ(s) π(s, 𝐚) ds.
The objective function in (<ref>) is submodular and monotonically increasing, but not non-negative. Thus, in order to apply the greedy algorithm to compute a finite number of sensor locations in (<ref>), the objective function can be modified by adding a constant term
𝐚^⋆ = _𝐚⊂𝐀 ∫_S λ(s) ds - ∫_S λ(s) π(s, 𝐚) ds
that yields a non-negative function.
We compute a set of sensor locations with respect to the objective function in (<ref>). Below, we formally address our main result
The non-negative objective function
F(𝐚) = ∫_S λ(s) ds - ∫_S λ(s) π(s, 𝐚) ds
is submodular and monotone.
Greedy selection of sensor locations with respect to the objective function in (<ref>) yields at least 1-1/e of the optimal results.
Proof for Theorem <ref> is in Appendix A. Proof for Corollary <ref> follows from Theorem <ref> and is the well-known result in <cit.>.
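The greedy procedure licensed by Corollary 1 is simple to implement. The sketch below is an illustrative Python version, not the authors' code: it assumes a discretized one-dimensional domain s_grid, a mean intensity array lam_bar representing 𝔼_λ[T/T_c λ(s)], and a hypothetical Gaussian detection kernel of the form used later in the numerical section. At each step it adds the candidate location with the largest marginal gain of F.

```python
import numpy as np

def gamma(s, a, rho=0.95, sigma_l=0.9):
    # assumed Gaussian detection kernel; any gamma(s, a) in [0, 1] would do
    return rho * np.exp(-(a - s)**2 / sigma_l)

def greedy_sensors(s_grid, lam_bar, candidates, M):
    """Greedily choose M sensor locations maximizing
    F(a) = int lam_bar ds - int lam_bar * prod_i (1 - gamma(s, a_i)) ds
    on a discretized 1-D domain; lam_bar is the mean intensity E[T/T_c * lambda(s)]."""
    miss = np.ones_like(s_grid)            # running prod_i (1 - gamma(s, a_i))
    chosen = []
    for _ in range(M):
        free = [a for a in candidates if a not in chosen]
        # marginal gain of adding location a: int lam_bar * miss * gamma(s, a) ds
        gains = [np.trapz(lam_bar * miss * gamma(s_grid, a), s_grid) for a in free]
        best = free[int(np.argmax(gains))]
        chosen.append(best)
        miss *= 1.0 - gamma(s_grid, best)
    return chosen
```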
§ JENSEN GAP ANALYSIS
Jensen's inequality (<ref>) simplifies the computation of the void probability approximation. However, while this inequality yields a lower bound for the void probability, the lower bound is not necessarily tight. The accuracy of the sensor network we obtain using this approximation depends on the size of this gap, which measures how closely our objective function approximates the void probability. A smaller gap indicates a closer approximation to the void probability. Therefore, the size of the gap is a crucial factor in determining the performance of the sensor network.
In this section, building on the results in <cit.>, we first present an upper bound on the Jensen gap given a sensor placement (Theorem <ref>). Then, by proving that this upper bound is monotonically decreasing, we show how to compute it (Theorem <ref>).
Let X be a one-dimensional random variable with mean μ_X, variance σ_X^2 and P(X∈ (d_1, d_2))=1, where -∞≤ d_1 ≤ d_2 ≤∞. Let ϕ(X) be a twice-differentiable function on (d_1,d_2). Then, the upper bound of the Jensen gap J is
J ≤sup_X∈(d_1,d_2)( ϕ(X)-ϕ(μ)/(X-μ)^2-ϕ'(μ)/X-μ)σ^2
In our problem, the random variable X is the expected number of undetected ships Λ(𝐚) ∈ [0,∞) when sensor locations 𝐚 are known from (<ref>). The twice-differentiable function ϕ(·) is e^-(·) from the definition of the void probability. For simplicity, let the mean and variance of Λ(𝐚) be μ_u and σ^2_u, respectively. Then, Theorem <ref> yields
J≤ J_up= sup_Λ(𝐚)∈[0,∞)( e^-Λ(𝐚)-e^-μ_u/(Λ(𝐚)-μ_u)^2+e^-μ_u/Λ(𝐚)-μ_u)σ_u^2
where J_up is the upper bound of the Jensen gap.
When μ_u and σ_u are given from the sensor locations 𝐚, the upper bound J_up in (<ref>) is monotonically decreasing with respect to Λ(𝐚). Therefore, the upper bound of the Jensen gap is maximized when Λ(𝐚) is zero, which yields
J_up(σ_u,μ_u)=σ_u^2(1-e^-μ_u-μ_ue^-μ_u)/μ_u^2
Theorem <ref> is based on the fact that the expression inside the supremum in (<ref>) is monotonically decreasing with respect to increasing Λ̃(𝐚), which is proved in Appendix B.
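Evaluating this bound for a given placement only requires the posterior mean and variance of the expected number of undetected targets; a minimal sketch (illustrative Python, with hypothetical argument names) is:

```python
import numpy as np

def jensen_gap_upper(mu_u, var_u):
    """Upper bound J_up from the expression above: mu_u and var_u are the mean and
    variance of the expected number of undetected targets for a given placement."""
    return var_u * (1.0 - np.exp(-mu_u) - mu_u * np.exp(-mu_u)) / mu_u**2
```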
§ NUMERICAL RESULTS
In this section, we illustrate our results with numerical examples in which we seek to detect ships using sensors located on the seafloor. We apply INLA to estimate the intensity of ship traffic near Hampton Roads, Virginia. Then we greedily select sensor locations using the objective function in (<ref>). Through numerical illustration, we also show that Jensen's gap is small in this example. That is, the difference between the void probability (𝔼_λ[e^-Λ(𝐚)]) and its approximation (e^-𝔼_λ[Λ(𝐚)]) in (<ref>) is small. We also directly evaluate Jensen's gap for a specific numerical illustration and compare it to the upper bound for Jensen's gap in Section <ref>. Our numerical example also shows that the greedy algorithm produces sensor locations that achieve almost the same performance as the optimal sensor locations for the small number of sensors where we can compute optimal locations with respect to void probability via brute force.
We use the ship traffic data near the Hampton Roads Channel, Virginia, USA, provided by the Office for Coastal Management and the Bureau of Ocean Energy Management<cit.>. The data comprises the location (latitude and longitude) of a ship, ship type, and ship detection time. We use the ship traffic data corresponding to the entire month of March 2020 (=T_c), where the domain S in (<ref>) is labeled A in Fig. <ref> (top). Region A is treated as one-dimensional line for sensor placement: latitude 36.91676 to 37.08721, longitude -76.08209. Fig. <ref> (top) shows the heat map of ship traffic in the selected area. The red color indicates greater ship traffic has been observed in the area, while the yellow color indicates less ship traffic has been observed, and the blue means no ship has been observed. Within this bounded domain, the possible location where sensors can be placed is discretized with an interval of 50m.
§.§ Estimation of intensity of ship arrival model
In order to estimate the intensity function, we use the inlabru package in R <cit.>, which builds on the R-INLA package <cit.>. We consider a zero mean Gaussian process with a Matern covariance function
k(s,s^')= σ_u^2 2^1-ζ/Γ(ζ) (κ ||s - s^' || )^ζ K_ζ (κ ||s - s^' ||)
where s and s^' are two locations within the domain, σ_u is the variance, ζ > 0 is the smoothness parameter, κ=√(8ζ)/β >0 is the scale parameter, || · || denotes the Euclidean distance, K_ζ is the modified Bessel function of the second kind, and β is a spatial range parameter (see <cit.> for more details).
We use the following parameter values for the numerical illustrations: ζ = 1.5, and β, σ_u from P(β<β_0 = 150)= 0.75 and P(σ_u > σ_u0 = 0.1) = 0.75, respectively. As shown in Fig. <ref> (middle, bottom), with these parameters, the covariance function above, and the historical ship traffic data from March 2020 (histogram), we estimate the mean (black line) and the 95% confidence interval (blue lines) using INLA.
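For reference, the Matern covariance above can be evaluated directly with NumPy/SciPy; the sketch below is illustrative only (it mirrors the quoted parameter values but is not the inlabru/R-INLA internals).

```python
import numpy as np
from scipy.special import kv, gamma as gamma_fn

def matern_cov(d, sigma_u=0.1, zeta=1.5, beta=150.0):
    """Matern covariance k(s, s') as a function of the distance d = ||s - s'||."""
    kappa = np.sqrt(8.0 * zeta) / beta
    x = np.maximum(kappa * np.asarray(d, dtype=float), 1e-12)   # keep kv finite
    c = sigma_u**2 * 2**(1.0 - zeta) / gamma_fn(zeta) * x**zeta * kv(zeta, x)
    return np.where(np.asarray(d) == 0.0, sigma_u**2, c)        # limit as d -> 0
```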
§.§ Sensor model
For the probability of sensor i detecting a ship, we use the sensor model
γ(s, a_i) = ρ e^-(a_i-s)^2/σ_l
where 0 ≤ρ≤ 1 is the maximum probability of detection and σ_l is the length scale parameter. For the numerical illustrations, we consider that ρ = 0.95 and σ_l = 0.9.
§.§ Sensor placement for maximizing void probability approximation
Ideally, we would maximize the void probability (the probability that the number of undetected ships is zero) directly; instead, we select sensor locations using the lower bound for the void probability from Section <ref>. Furthermore, we evaluate the difference between the void probability and its lower bound. We do not directly consider optimal sensor location selection. Rather, we evaluate the utility in this numerical illustration of greedily selecting sensor locations to maximize the lower bound. Furthermore, we evaluate the difference between greedy and optimal selection of sensor locations when maximizing the lower bound for small numbers of sensors for which we can compute optimal sensor locations using brute force.
Given the estimated intensity function and the probability of detection from (<ref>), we compute the suboptimal sensor locations. Fig. <ref> (middle) shows Jensen's gap, which is the difference between void probability (𝔼_λ[e^-Λ(𝐚)]) and void probability approximation (e^-𝔼_λ[Λ(𝐚)]). For the results in Fig. <ref>, using the objective function in (<ref>), we first greedily compute the suboptimal sensor locations that maximize the void probability approximation. We evaluate the void probability for the suboptimal sensor locations using Monte Carlo method. We sample a large number ( ≥ 10,000) of the ship arrival intensity functions λ̂_j from (<ref>), which has been estimated by using INLA. The average void probability for the greedily selected sensor locations is
∑_j^W_λexp(-∫_Sλ̂_j(s) ∏_i=1^M ( 1 - γ(s, a_i^⋆) ) ds)/W_λ
where W_λ is the number of Monte Carlo sampled functions of the stochastic estimated intensity function of ship arrival λ(s). Correspondingly, the j^th sampled function is λ̂_j(s), and a_i^⋆∈𝐚^⋆(={a_1^⋆,...,a_M^⋆}) is i^th greedily selected sensor location. For simplicity, we denote λ̂_j(s)=T/T_cλ_j(s). Fig. <ref> (top) shows the void probability approximation (dashed blue line) with the void probability (red line) for the same set of sensor locations. This process is repeated for the number of sensors M varying from 0 to 100.
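The Monte Carlo average above can be coded compactly; the following is an illustrative Python sketch (the array and function names, such as lam_samples for the posterior draws of T/T_c λ(s), are assumptions for this example).

```python
import numpy as np

def mc_void_probability(s_grid, lam_samples, sensor_locs, gamma):
    """Monte Carlo estimate of the void probability for fixed sensor locations.
    lam_samples : (W, len(s_grid)) array of posterior draws of T/T_c * lambda(s)
    gamma       : detection function gamma(s, a) returning values in [0, 1]"""
    miss = np.ones_like(s_grid)
    for a in sensor_locs:
        miss *= 1.0 - gamma(s_grid, a)
    integrals = np.trapz(lam_samples * miss, s_grid, axis=1)   # thinned Lambda per draw
    return float(np.mean(np.exp(-integrals)))
```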
As shown in Fig. <ref> (middle), the maximum percent difference between the void probability and its approximation is less than 0.0125, and as we place more sensors, the gap tends to be smaller. As discussed in Sec. <ref>, using (<ref>), the upper bound of Jensen gap is computed with the expected value of undetected number of ships μ_u and its variance σ_u^2 shown in Fig. <ref> (bottom). As shown as a blue dotted line in Fig. <ref> (middle), the maximum upper bound of Jensen gap is approximately 0.15 and the Jensen gap (black line) is less than or equal to the upper bound. Table <ref> shows that while the computation time for greedily placing 100 sensors is less than 0.1 seconds, evaluating the void probability at the same locations takes 150715.67 seconds.
§.§ Small number of sensor placement for void probability
Fig. <ref> shows sensor location for both greedy and optimal sensor placement for the number of sensors varying from 2 to 5. Correspondingly, Table <ref> shows a comparison of the performance of the greedy selection and the optimal sensor placement. It demonstrates that the greedy selection performs well compared to the optimal. In our numerical experiment, the algorithms are implemented in MATLAB on a Windows computer that has a processor of Core i7 CPU with 1.3 GHz and a RAM of 16.0 GB.
§ CONCLUSION
We propose a computationally tractable suboptimal sensor placement method using a void probability approximation as the objective function. This objective function takes into account a stochastic target arrival intensity function. We show that the modified void probability approximation is non-negative, submodular, and monotone, which allows us to use the greedy selection method. Furthermore, we analyze the Jensen gap and provide an upper bound for it. In numerical illustrations with historical ship traffic data, we demonstrate that greedily chosen sensor locations perform close to the optimal placement.
§ APPENDIX A: PROOF OF SUBMODULARITY AND MONOTONICITY
Proof:
Let F(𝐚) be defined as
F(𝐚) = ∫_S λ(s) ds -∫_S λ(s)π(s, 𝐚) ds
where λ(s) is the non-negative expectation of intensity function and π(s, 𝐚) is defined in (<ref>). For the location of the set of sensors A, B, C such that A ⊆ B ⊂ C and for a new common sensor location (of A,B) â∈ C \ B, F(𝐚) is submodular if the following inequality holds
F(A∪{â}) - F(A) ≥ F(B∪{â}) - F(B)
For π(s,A) and π(s,B), the sensor networks A and B are composed of the locations of M_1 and M_2 sensors (M_1 ≤ M_2), respectively. Then, π(s,A) is
π(s,A) = ∏_i=1^M_1(1-γ(s,a_i))
Then, similarly with the set of B, for π(s,B)
π(s,B) = ∏_i=1^M_2(1-γ(s,a_i))
With the common sensor location â,
π(s,A∪{â}) = π(s,A)(1-γ(s,â))
π(s,B∪{â}) = π(s,B)(1-γ(s,â))
With the modified objective function in (<ref>)
F(𝐚) =∫_S λ(s) ds-∫_S λ(s) π(s,𝐚) ds
such that
F(A) =∫_S λ(s) ds-∫_S λ(s) π(s,A) ds
F(B) =∫_S λ(s) ds-∫_S λ(s) π(s,B) ds
F(A∪{â}) =∫_S λ(s) ds-∫_S λ(s) π(s,A∪{â})ds
F(B∪{â}) =∫_S λ(s) ds-∫_S λ(s) π(s,B∪{â}) ds
Then, as long as (<ref>) holds, F(𝐚) is submodular. The left (LHS) and right-hand side (RHS) of the inequality (<ref>) are
F(A∪{â})-F(A) = ∫_S λ(s) (π(s,A)-π(s,A∪{â})) ds
F(B∪{â})-F(B) = ∫_S λ(s) (π(s,B)-π(s,B∪{â})) ds
By subtracting RHS from LHS
(F(A∪{â}) -F(A))-(F(B∪{â})-F(B))
=∫_S λ(s) [(π(s,A)-π(s,A∪{â}))
-(π(s,B)-π(s,B∪{â}))] ds
=∫_S λ(s) π(s,A)π(s,â) ×
(1-∏_j=M_1+1^M_2(1-γ(s,a_j))) ds
where π(s,â) = 1-γ(s,â).
In (<ref>), the result consists of four non-negative components: λ(s) is non-negative, and π(s,A), π(s,â), and the remaining term are all between zero and one. Therefore, (π(s,A)-π(s,A∪{â}))-(π(s,B)-π(s,B∪{â})) ≥ 0.
That is, F(A∪{â})-F(A) ≥ F(B∪{â})-F(B). Therefore, it proves that F(𝐚) where 𝐚={ a_1,...,a_M}, a_i ∈ S is non-negative submodular.
To prove that F(𝐚) is monotonic increasing, we show that the F(A) ≤ F(B) holds. By subtracting F(A) from F(B)
F(B) - F(A) = ∫_S λ(s) (π(s,A)-π(s,B)) ds
Since π(s,A) is greater than or equal to π(s,B) and λ(s) is non-negative, 0 ≤ F(B)-F(A), which is equivalent to F(A) ≤ F(B). Thus, F(𝐚) is monotonically increasing.
§ APPENDIX B: PROOF OF MONOTONIC-DECREASE OF J_UP
Proof: We can rewrite (<ref>) as
J_up = sup_Λ(𝐚)∈[0,∞)σ_u^2(e^-Λ(𝐚)-e^-μ_u+Λ(𝐚)e^-μ_u-μ_ue^-μ_u)/(Λ(𝐚)-μ_u)^2
=sup_Λ(𝐚)∈[0,∞)σ_u^2 e^-μ_u(e^-(Λ(𝐚)-μ_u)-1+Λ(𝐚)-μ_u)/(Λ(𝐚)-μ_u)^2
Let y=Λ(𝐚)-μ_u ∈ [-μ_u,∞). Then, the upper bound is
=sup_y∈[-μ_u,∞)σ_u^2 e^-μ_u(e^-y-1+y)/y^2
Given μ_u and σ_u^2, if h(y)=(e^-y-1+y)/y^2 is monotonically decreasing, then J_up is monotonically decreasing. The function h(y) is monotonically decreasing if
∂ h(y)/∂ y=((2-y)-e^-y(y+2))/y^3≤ 0
There are a number of ways to show that (<ref>) is satisfied. One approach is to show that the numerator is non-negative for y<0 and non-positive for y>0 (it vanishes only at y=0); since the denominator y^3 changes sign at zero, the ratio in (<ref>) is non-positive for both y>0 and y<0. For the case y = 0, applying L'Hopital's rule twice shows that the ratio remains well defined.
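As a quick numerical sanity check of this argument (not part of the original proof), one can evaluate h(y) and its derivative on a grid:

```python
import numpy as np

y = np.linspace(-5.0, 20.0, 2001)
y = y[np.abs(y) > 1e-6]                       # avoid the removable singularity at y = 0
h = (np.exp(-y) - 1.0 + y) / y**2
dh = ((2.0 - y) - np.exp(-y) * (y + 2.0)) / y**3
print(np.all(np.diff(h) <= 1e-12),            # h is non-increasing on the grid
      np.all(dh <= 1e-12))                    # derivative is non-positive everywhere
```

Both checks print True, consistent with h(y), and hence J_up, being monotonically decreasing.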
|
http://arxiv.org/abs/2307.03966v1 | 20230708123510 | Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems | [
"Nischal Ashok Kumar",
"Nitin Gupta",
"Shanmukha Guttula",
"Hima Patel"
] | cs.AI | [
"cs.AI",
"cs.SE"
] |
Both authors contributed equally to the paper
Work done during internship at IBM Research
UMass Amherst
United States
[email protected]
[1]
IBM Research
India
[email protected]
IBM Research
India
[email protected]
IBM Research
India
[email protected]
In mapping enterprise applications, data mapping remains a fundamental part of integration development, but it is time-consuming. An increasing number of applications lack naming standards, and nested field structures further add complexity for integration developers. Once the mapping is done, data transformation is the next challenge for users, since each application expects data to be in a certain format. Moreover, while building an integration flow, developers need to understand the formats of the source and target data fields and come up with a transformation program that can change data from the source to the target format. The problem of automatically generating a transformation program from a specification, through the program-synthesis paradigm, has been studied since the early days of Artificial Intelligence (AI). Programming by Example (PBE) is one such technique that targets automatic inference of a computer program to accomplish a format or string conversion task from user-provided input and output samples. To learn the correct intent, a diverse set of samples from the user is required. However, the user may fail to provide a diverse set of samples, which can lead to multiple intents or ambiguity in the input and output samples. As a result, PBE systems may be confused and generate a program for the wrong intent. In this paper, we propose a deep neural network based ambiguity prediction model, which analyzes the input-output strings and maps them to different properties responsible for multiple intents. Users can analyze these properties and accordingly provide new samples or modify existing samples, which can help in building a better PBE system for mapping enterprise applications.
Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems
Hima Patel
August 12, 2023
=======================================================================================
§ INTRODUCTION
String transformation in mapping enterprise applications refers to the specific paradigm within Programming by Example (PBE) approaches in which a computer program learns to capture user intent, expressed through a set of input-output pairs, from a pre-defined set of specifications and constraints <cit.>. The set of specifications and constraints is expressed through a Domain-Specific Language (DSL), which consists of a finite number of atomic functions or string expressions that can be used to formally represent a program for the user to interpret. Most PBE systems <cit.> for string transformation use ranking mechanisms that are either built using heuristics or learned from historical data. These ranking systems are designed to favor two important characteristics: short and simple programs. Such ranking systems depend heavily on the quality and number of input and output (I/O) annotation samples to learn a good program. The quality of I/O samples denotes how suitable the samples are for generating a single-intent output.
The number of given I/O annotation samples can vary depending on the user intent, but fewer is better for the user (as the user has to provide fewer annotations). Therein lies the challenge of learning the correct intent: if examples are too few, many possible DSL functions can satisfy them, and picking one intent (or program) arbitrarily, or based on a ranking mechanism that favors simplicity and shorter length, might lead to a program with an undesired intent. This might yield a solution that works well only on the given I/O samples but not on unseen samples. Similarly, the quality of I/O samples (irrespective of their count) plays an important role in generating the correct-intent program. These two challenges are critical for PBE-style systems that try to understand the user's intent by analyzing the given I/O samples; otherwise, the result can be a sub-optimal program that works on seen data but does not give the desired outputs for unseen data. Hence, it is important to understand whether the given I/O samples capture the user's desired intent correctly or not.
For illustration, consider the example shown in Table <ref>. The "Train" columns represent the I/O samples used to generate a transformation program, and the "Test" column contains the input sample that is passed to the transformation program to generate an output. The GT output column denotes the actual desired output. For each example, the user provides 3 I/O samples to generate a transformation program using <cit.>. In the first example, the user intent is to extract the substring after the “_" character, but the PROSE system learns a program which transforms test input “B_DS2345" into test output “2345" (see the generated output column), implying that the system learns to extract the last numeric substring, which is different from the user-desired intent. This happens because there can be many possible programs that transform one set of inputs into the outputs. Sometimes those programs converge to the same intent; other times they lead to multiple intents. For example, in Table <ref>, multiple programs are possible for the first set of I/O samples. Consider test input sample “B_D2S345", for which the desired output value is “D2345". The programs in the Program(s) column generate different values for this input: the first program generates “345", the second “D2S345", the third “D2S345", the fourth “345", and so on. This shows that programs consistent with the I/O samples can still lead to multiple intents (or outputs) on unseen data. For the above use case, two clear intents are: (a) extract the numeric substring after “_", and (b) extract the substring after “_". If we look at the second row in Table <ref>, where the third sample is replaced with a better sample “GE_D443 - D443", the first-intent program is automatically eliminated from the program list. Hence, assessing the quality of annotations with respect to single or multiple intents is required for better PBE systems. If the user provides sufficient samples specific to a single intent, the system can easily generalize to the rest of the samples. Hence, there is a need for a system that analyzes the I/O samples and helps in finding multiple-intent issues in annotations.
This would help in informing the user about multi-intent issues before generating a transformation program.
Therefore, we propose a framework to assess the quality of I/O samples so that a single, confident program can be predicted accurately. To achieve this goal, we introduce a set of generic properties which help to find ambiguity or multiple intents in a given set of I/O annotation samples. These properties are generic enough for most PBE systems because they are designed by analyzing the DSLs of several PBE systems. We propose a deep learning-based framework to automatically identify the presence of these properties in the annotations. The proposed framework takes a set of I/O annotation pairs as input and analyzes those samples together to classify the annotations against these properties. The user can utilize this information to enhance the I/O samples, hence generating a more accurate, single-intent, simpler, and shorter program. In summary, the core contributions of our work are as follows:
* A multi-tasking attention-based deep neural network that addresses issues of input and output annotation quality so that a program with the correct intent can be generated.
* A set of generic properties, defined after analyzing the DSLs of several PBE systems, that help determine whether a given set of I/O samples can lead to multiple intents.
* An extensive quantitative analysis on a synthetically generated dataset, along with an ablation study showing the motivation for each module of our proposed framework.
* A demonstration of the impact of detecting multiple intents and correcting them before building any PBE system.
§ OVERVIEW OF PROPOSED METHODOLOGY
In this section, we discuss the overview of the proposed methodology, define the set of properties to detect multiple intents, and formally define the problem setting. For any PBE system, I/O samples play an important role in determining the correct-intent program. Examples are an ambiguous form of specification: there can be different programs that are consistent with the provided examples, but these programs differ in their behavior on unseen inputs. If the user provides neither a large set of examples nor a few high-quality samples, the PBE system may synthesize unintended programs, which can lead to undesired outputs. Hence, there is a need for a framework that can assess the quality of I/O samples with respect to multiple intents before generating the program. To assess the quality of I/O samples, the most important aspect is to understand how well the I/O patterns fit the PBE system's DSL.
The proposed framework (Figure <ref>) consists of two major modules: (a) Defining, for I/O annotations, a set of properties which can cause ambiguity or multiple intents - we analyzed several string-transformation-specific DSLs and came up with a generic set of properties that help identify whether given I/O samples can lead to multiple intents. The proposed system is generic enough that users can always add new properties based on new DSL functions which can cause multiple intents. (b) Multiple-intent analyzer - we designed a multi-tasking attention-based deep neural network to detect ambiguity in the given I/O samples based on the identified set of properties. The system first analyzes the user's I/O annotation samples using the proposed deep learning framework to detect properties that cause multiple intents or ambiguities. In the next step, the user analyzes those detected properties and, based on that, adds or modifies samples in the I/O annotations to improve the overall annotation quality and learn the correct-intent program.
In the next section, we will first discuss the properties that will be helpful to decide whether given I/O samples can cause multiple intents or not. In Section 2.2, we will describe the proposed deep learning-based framework which utilizes these properties to find the presence of multiple intents in given annotations.
§.§ Properties to Detect Multiple Intent
The most important part of finding ambiguity, or the possibility of multiple intents, in a given annotation is to analyze the I/O samples with respect to the generic characteristics of the operators present in the DSL. Most DSLs in the literature for string-transformation-based PBE systems use similar kinds of operators, such as split, substring with a regex or constant value as an argument, concat, replace, extract first substring, etc. We analyzed several string-manipulation-specific DSLs and came up with five generic properties that can help in detecting multiple intents in the I/O samples. Figure <ref> shows one such DSL created by combining commonly used operators from several other DSLs. There can be other string-manipulation operators, such as trim, but these are high-level operators and generally do not contribute to the multi-intent scenario. In this paper, we use the DSL shown in Figure <ref> to illustrate the importance of the defined properties.
Properties of I/O to detect the presence of multiple intents should be tightly bound to the DSL used for the PBE system. At the same time, those properties should also be (1) concise enough to capture the implicit or explicit multiple intent and (2) expressive enough to allow transformations to be achieved without any confusion in ranking between the programs. Below, we describe the set of 5 properties and the motivation behind their design.
§.§.§ Similar Length Ambiguity
- This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and all have the same length. For example, in Table <ref>, example 1, the substrings (continuous sequences) “123" and “535" in the outputs are extracted from similar continuous sequences in the inputs and have the same length; hence it is not clear whether the user wants to extract everything after the second “_" or just three characters. In terms of the DSL, this kind of ambiguity mostly arises between outputs generated by constant-length-based operators, like substring with constant positions, and pattern-based operators, like split or substring with a pattern. Formally, an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when a continuous sequence of the output matches a continuous sequence of the input and has the same number of characters across all the output samples. Here, I_l denotes the l^th input sample, O_l denotes the corresponding output sample, and l denotes the total number of I/O samples in one example.
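A minimal sketch of how this property can be checked for a single output substring is given below (Python; the function name and the simplifying assumption that each output is a single substring of its input are ours, not part of the proposed system):

```python
def similar_length_ambiguity(examples):
    """examples: list of (input_str, output_str) pairs forming one example.
    Returns True if every output occurs as a substring of its input and
    all outputs have the same length, so a constant-length program cannot
    be distinguished from a pattern-based one."""
    lengths = set()
    for inp, out in examples:
        if out not in inp:          # output must be extractable from the input
            return False
        lengths.add(len(out))
    return len(lengths) == 1

# Illustrative example: equal-length numeric outputs extracted after "_"
print(similar_length_ambiguity([("ab_123", "123"), ("cd_535", "535")]))   # True
```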
§.§.§ Exact Position Placement Ambiguity
- This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and the extracted output string always starts or ends at the same position in the input string. For example, in Table <ref>, example 2, the substrings “Kumar" and “Williams" in the outputs are extracted from similar continuous sequences in the inputs and start at the same position in the input, i.e., 5; hence it is not clear whether the user always wants to extract a substring starting from position 5, or has some other desired intent (e.g., extract everything after the space character). In terms of the DSL, this kind of ambiguity mostly arises between operators which use constant positions to locate a substring and operators which use regex- or split-based operations to extract a substring. Formally, an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when a continuous sequence of the output matches a continuous sequence of the corresponding input and always starts or ends at the same position.
§.§.§ Exact Match Ambiguity
This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and the extracted substrings have the same string value across annotations. For example, in Table <ref>, example 3, the substrings “11" and “11" in the outputs are extracted from similar continuous sequences in the inputs and have the same value, i.e., 11; hence it is not clear whether the user always wants the constant value 11 in the output or wants to extract this value from the input string. In terms of the DSL, this kind of ambiguity mostly arises between operators which allow constant values or positions and operators like split/substring which extract the value from the string itself. Formally, an example ((I_1, O_1),(I_2, O_2), ...., (I_l, O_l)) satisfies this property when a continuous sequence of the output matches a continuous sequence of the input and has the same value across all samples.
§.§.§ Similar in Token Type Ambiguity
This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and the extracted substrings are of the same token type across I/O pairs. For example, in Table <ref>, example 4, the substrings “123" and “53" in the outputs are extracted from similar continuous sequences in the inputs and have the same value type; hence it is not clear whether the user always wants to extract values of the same data type or something else. Three token types are possible: Alphabet Tokens, which consist of all uppercase and lowercase English alphabets; Numeric Tokens, which consist of digits from 0 to 9; and Special-Character Tokens, which consist of all printable special characters on the keyboard. Hence, we say that an example satisfies the similar token type property if all its continuous output substrings are either all Alphabet Tokens, all Numeric Tokens, or all Special-Character Tokens. In terms of the DSL, this kind of ambiguity mostly arises between operators which use a specific set of regex positions to locate a substring and operators like split/substring which extract values from the string itself. Formally, an example satisfies this property when a continuous sequence of the output matches a continuous sequence of the input and has the same value type across all samples.
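A corresponding check could look as follows (a sketch under the same single-substring assumption; the three token classes follow the definition above):

```python
def token_type(s):
    """Classify a string as 'alpha', 'numeric', 'special', or 'mixed'."""
    if s.isalpha():
        return "alpha"
    if s.isdigit():
        return "numeric"
    if all(not c.isalnum() for c in s):
        return "special"
    return "mixed"

def similar_token_type_ambiguity(examples):
    """True if every output is a substring of its input and all outputs
    share a single pure token type (all-alphabet, all-digit, or all-special)."""
    if any(out not in inp for inp, out in examples):
        return False
    types = {token_type(out) for _, out in examples}
    return len(types) == 1 and "mixed" not in types

print(similar_token_type_ambiguity([("ab_123", "123"), ("cd_53", "53")]))  # True (all numeric)
```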
§.§.§ Repeating Characters Ambiguity
This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and multiple instances of that output substring are present in the input. For example, in Table <ref>, example 5, the substrings “1" and “2" in the outputs can be extracted from two similar positions in the input. Those positions can be defined by low-level operators, like constant positions or regex, or by high-level operators, like split. In this case, the common substring occurs at two constant positions in the input, i.e., positions 3 and 9; hence it is not clear whether the user wants to extract the substring from position 3 or 9. This ambiguity is DSL-independent; it arises because the user provided samples that internally generate such ambiguity. Formally, an example satisfies this property when a continuous sequence of the output matches multiple instances of a continuous sequence in the input.
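In the single-substring case, this property amounts to checking whether each output occurs more than once in its input (a sketch; counting overlapping occurrences is a choice we make here, and the sample strings are taken from the saliency-map example discussed later):

```python
def repeating_characters_ambiguity(examples):
    """True if, in every I/O pair, the output substring occurs at more than
    one position in the input, so its extraction position is ambiguous."""
    def occurrences(inp, out):
        return sum(1 for i in range(len(inp) - len(out) + 1) if inp[i:i + len(out)] == out)
    return all(occurrences(inp, out) >= 2 for inp, out in examples)

print(repeating_characters_ambiguity([("M%qSFA8qb%We %qSFA8qb%", "qSFA8qb"),
                                      ("1bN%i6Op4%YK%i6Op4%", "i6Op4")]))   # True
```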
§.§ Problem Formulation
Given a set of l input-output annotations ((I_1, O_1), ..., (I_l, O_l)) and a set of p properties (P_1, P_2, ..., P_p) which can help to detect multiple intents in I/O annotations, the goal of this task is to answer the question “Is there any multi-intent or ambiguity present in the I/O samples?" and, if yes, what kinds of ambiguities exist. In this paper, p is set to 5, as we designed and discussed five properties in the last section that can hinder the generalization of PBE systems. To learn to detect this set of ambiguities, we design a multi-tasking attention-based deep neural network model.
We first generate a set of I/O annotation examples corresponding to each of the five ambiguous properties. We refer to a single I/O pair (I_1, O_1) as a sample and to a group of I/O pairs used to learn a program with any PBE system as an example. Here, l denotes the number of samples (I/O pairs) in one example; in this work we use l=3, meaning each example contains three I/O samples. One example can also exhibit multiple property issues. Intuitively, the goal of our proposed task is to detect ambiguities in the user-provided I/O annotations so that the user resolves them by adding new samples or modifying existing ones. This will enable PBE systems to generate a single-intent program that performs as desired on unseen samples.
In the proposed framework, we train a multi-tasking attention-based deep neural network model as shown in Figure <ref> to learn the ambiguities as expressed in the I/O examples. We define each task as a formulation to learn one type of ambiguity. Consequently, the proposed framework solves the five tasks at a time corresponding to ambiguity detection for five different properties. Our model follows an encoder-decoder architecture where the encoder is shared among all the tasks and the decoder is independent for each task. We pose this problem as a multi-class classification problem. Each example is classified against five ambiguous properties as positive or negative, where positive means that the example is ambiguous for that property and negative means that it is not ambiguous.
Model Architecture - We model the proposed framework shown in Figure <ref> for detecting ambiguities through a hard-parameter sharing paradigm for multi-task learning. As shown in Figure <ref>, the proposed framework consists of three modules, Common Encoder, Task-Specific Modules, and the Loss module. We discuss each of these modules in subsequent subsections.
§.§.§ Common Encoder
This module is used for encoding the raw I/O strings (see Figure <ref>) and consists of two sub-modules:
* Character Level Embedding Layer - This layer maps each character of the I/O pairs in each example to a 128-dimensional learning space. Given an input (i_1,.., i_n) and an output string (o_1,.., o_m) consisting of a sequence of characters of length n and m respectively, this layer outputs a list of character embedding. Here, n refers to the maximum length of input among all the examples in the dataset. The input strings which are smaller than the maximum length are appended with <pad> tokens to make their length equal to n. The <pad> tokens specify that the current character does not signify the original string but marks the end of it or is used to make all sequences of the same length so that the deep learning tensor computations are easier.
A similar procedure is followed with the output strings, where the maximum length of output among all the examples in the dataset is m.
Each character i_t and o_s in the input and output sequence is mapped to the 128-dimensional raw embedding e_i_t and e_o_s respectively via a randomly initialized and trainable embedding matrix, where t ∈{1,..n} and s ∈{1,..m}.
* Input Encoder - This layer uses LSTM representations <cit.> applied on the embedding e_i_t of the inputs of each example as shown in Equation <ref>. This layer helps to learn the sequential dependencies of the characters of the inputs. It takes the input embedding of each character e_i_t and passes them through a LSTM layer consisting of n separate LSTM cells with a hidden vector size of 512 as shown in Equation <ref>.
h_i_t = LSTM(e_i_t,h_i_t-1), t ∈ (1...n)
Hence, the Common Encoder takes an I/O pair as input and produces two output representations: the raw 128-dimensional embedding of each character in the output sample and the LSTM-encoded embedding of the input sample. These embeddings are generated for each I/O pair in an example. The outputs of the Common Encoder are then utilized by the subsequent modules; a minimal sketch of this encoder is given below.
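The following PyTorch sketch is our illustration of this module, not the original implementation; the class and variable names are ours, while the hyperparameters follow the text (128-dimensional character embeddings, hidden size 512):

```python
import torch
import torch.nn as nn

class CommonEncoder(nn.Module):
    """Shared encoder: character embeddings for inputs/outputs and an LSTM over the input."""
    def __init__(self, vocab_size, emb_dim=128, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # trainable character embedding
        self.input_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, input_ids, output_ids):
        # input_ids: (batch, n) padded character indices of the input string
        # output_ids: (batch, m) padded character indices of the output string
        e_in = self.embed(input_ids)                              # (batch, n, 128)
        e_out = self.embed(output_ids)                            # (batch, m, 128) raw output embeddings
        h_in, _ = self.input_lstm(e_in)                           # (batch, n, 512) input encodings
        return h_in, e_out
```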
§.§.§ Task Specific Modules
These modules are designed for the detection of each ambiguity property. We have 5 such modules (one per ambiguity) with a similar structure, which process the inputs obtained from the Common Encoder. Each task-specific module contains an Additive Attention Output Encoder, a Concatenation Layer, a Convolutional Neural Network & Pooling Layer, a Classification Layer, and a Softmax Layer; a combined sketch of these layers is given after this list. The weights of the 5 task-specific modules are not shared with each other.
* Attention Output Encoder -
In our architecture, we use additive attention mechanism <cit.> to selectively impart more importance to the part of the input which has more influence on the output characters and hence obtain better output sample encoding. Specifically, this layer computes the additive attention a_e_o_s of a single embedded output character e_o_s with respect to the encoding of all the input characters h_i_1..n as shown in the equations <ref> and <ref>. For this, we pass the output from the Input Encoder to the Attention Output Encoder which first computes the attention weights α_s_1..n as shown in eq. <ref> and the corresponding attention vector a_e_o_s as shown in eq. <ref> for each output character O_s with respect to all input characters in the I/O pair. Here, W_a and U_a are the learnable weight matrices. W_a corresponds to the output embeddings vector e_o_s and U_a corresponds to the input encodings matrix h_i_1..n. V_a is the learnable vector.
The attention output a_e_o_s is concatenated with the output embedding e_o_s to give c_o_s as shown in equation <ref> and is passed through an LSTM layer with hidden vector size 512 as shown in eq. <ref>.
α_s_1..n = V_atanh(W_ae_o_s + U_ah_i_1..n), s ∈ (1..m)
a_e_o_s = Σ_tα_s_t h_i_t, t ∈ (1..n), s ∈ (1..m)
c_o_s= [ a_e_o_s,e_o_s], s ∈ (1..m)
h_o_s = LSTM(c_o_s,h_o_s-1), s ∈ (1..m)
The Attention Output Encoder outputs m LSTM encodings h_O_1..m for each output string of length m in the l I/O pairs, which are then passed to the next layer.
* Concatenation Layer -
Detecting ambiguity is possible only by analyzing all the I/O pairs in a given example, not just one I/O pair. We therefore concatenate the l encodings corresponding to the l I/O pairs of each example. These encodings are obtained from the Attention Output Encoder in a row-wise manner as shown in equation <ref>. Here, h_1_o_s refers to the attention-encoded output of the s^th character of the Output O_1 from the first I/O pair. Similarly, h_l_o_s refers to the attention-encoded output of the s^th character of the Output O_l from the l^th I/O pair.
q_s = concat(h_1_o_s, h_2_o_s, ..., h_l_o_s), s ∈ (1..m)
Q = [q_1, q_2, ...., q_m-1, q_m]
The output of the Concatenation Layer is a matrix Q as shown in eq. <ref>. There are a total of m different rows in the matrix corresponding to the m characters of the Outputs in an I/O pair. More specifically, each row of the matrix represents the character-level concatenation of the output encodings from l different examples. This matrix is then passed into the next layer.
* Convolution Neural Network and Pooling Layers -
Convolutional Neural Networks (CNNs) are used for finding local dependencies in features. In our architecture, CNNs help us capture the dependencies between adjacent characters and the subsequent encoded outputs of the I/O pairs. The input to the CNN layer is the matrix Q for each example, obtained from the Concatenation Layer. In this layer, we apply 2-dimensional convolution operations with 512 output channels, where each channel contains a kernel of dimension (2, l*512). We then apply MaxPooling on the outputs of the CNNs across each channel to obtain a single 512-dimensional vector r for the I/O pairs in an example, as seen in eq. <ref>. This vector is then passed to the next layer.
r = MaxPool2D(Conv2D(Q))
* Classification Layer - Classification Layer is a fully-connected dense layer with 2 neurons corresponding to either the positive or negative class for each ambiguous property classification to give the classification logits u. This is shown in equation <ref> where W_f and b_f are the weight matrix and the bias vector respectively.
u = W_f r + b_f
Classification logits from the Classification Layer are then passed through the Softmax Layer.
* Softmax Layer - This layer applies the softmax activation function on the classification logits to obtain a probability distribution p over the prediction classes (ambiguous properties) as shown in equation <ref>. Here, z is used for indexing a single class among the positive and the negative classes.
p = exp(u_z)/Σ_z exp(u_z), z ∈ (0, 1)
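The task-specific layers described above can be sketched as follows. This is a simplified PyTorch rendering for one ambiguity head with l=3 I/O pairs; the exact tensor bookkeeping of the original implementation may differ, and all names are ours:

```python
import torch
import torch.nn as nn

class TaskSpecificModule(nn.Module):
    """One ambiguity head: additive attention over input encodings, output LSTM,
    concatenation across the l I/O pairs, 2-D convolution + max-pooling, classifier."""
    def __init__(self, emb_dim=128, hidden=512, num_pairs=3):
        super().__init__()
        self.W_a = nn.Linear(emb_dim, hidden, bias=False)      # acts on output embeddings
        self.U_a = nn.Linear(hidden, hidden, bias=False)       # acts on input encodings
        self.v_a = nn.Linear(hidden, 1, bias=False)
        self.out_lstm = nn.LSTM(hidden + emb_dim, hidden, batch_first=True)
        self.conv = nn.Conv2d(1, hidden, kernel_size=(2, num_pairs * hidden))
        self.cls = nn.Linear(hidden, 2)

    def attend(self, h_in, e_out):
        # additive attention of each output character over all input characters
        scores = self.v_a(torch.tanh(self.W_a(e_out).unsqueeze(2) + self.U_a(h_in).unsqueeze(1)))
        alpha = torch.softmax(scores.squeeze(-1), dim=-1)       # (batch, m, n) attention weights
        a = torch.bmm(alpha, h_in)                              # (batch, m, hidden) attention vectors
        h_out, _ = self.out_lstm(torch.cat([a, e_out], dim=-1))
        return h_out                                            # (batch, m, hidden)

    def forward(self, pair_encodings):
        # pair_encodings: list of l tensors (batch, m, hidden), one per I/O pair
        q = torch.cat(pair_encodings, dim=-1).unsqueeze(1)      # (batch, 1, m, l*hidden)
        r = torch.relu(self.conv(q))                            # (batch, hidden, m-1, 1)
        r = torch.amax(r, dim=(2, 3))                           # global max-pooling -> (batch, hidden)
        return self.cls(r)                                      # class logits; softmax applied afterwards
```

In practice, `attend` would be applied to each of the l I/O pairs using the Common Encoder outputs, and the resulting l encodings passed to `forward`; the softmax described above is then applied to the returned logits.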
§.§.§ Loss Calculation
The proposed multitask learning framework uses Cross-Entropy loss between the original and predicted labels as the objective function for all the five task-specific modules. Equation <ref> denotes the loss from the k^th task-specific module. We use k to index the task-specific modules. p_k is the predicted probability distribution for the k^th task-specific module. y_k is the original probability distribution for the k^th task-specific module.
We obtain the final loss L by taking a weighted sum of the individual losses L_k of each of the task-specific modules as shown in equation <ref>. Here w_k is the weight corresponding to the kth Loss L_k.
L_k = -Σ[y_klogp_k + (1-y_k)log(1-p_k)]
L = w_1*L_1 + w_2*L_2 + w_3*L_3 + w_4*L_4 + w_5*L_5
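In code, this weighted objective could be assembled as follows (a sketch; the five weights default to 1, as in our experiments, and the logits are the pre-softmax outputs of the five task-specific modules):

```python
import torch.nn.functional as F

def multitask_loss(logits_per_task, labels_per_task, weights=(1.0,) * 5):
    """Weighted sum of per-task cross-entropy losses.
    logits_per_task: list of 5 tensors (batch, 2); labels_per_task: list of 5 tensors (batch,)."""
    total = 0.0
    for w, logits, labels in zip(weights, logits_per_task, labels_per_task):
        total = total + w * F.cross_entropy(logits, labels)
    return total
```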
§ RESULTS AND DISCUSSIONS
§.§ Dataset Creation
We created a dataset corresponding to the five different ambiguous properties discussed in Section <ref>. We wrote different regexes satisfying each ambiguous property based on a fixed Domain-Specific Language (DSL). For each ambiguity property, the regexes generate several examples, and each example consists of 3 I/O pairs. We consider uppercase English characters, lowercase English characters, digits from 0 to 9, and all printable special characters. We generate a total of 100,002 individual samples, grouped into examples of 3 samples each, to finally produce 33,334 examples per ambiguous property. In the next few subsections, we describe the procedure for generating the dataset for each ambiguous property. Table <ref> shows examples corresponding to each property.
§.§.§ Similar Length Ambiguity
For each output substring in an example, we choose a length from a range of 2-9 characters. We limit the output substrings to a maximum of 4 per sample. Each output substring contains a mixture of lowercase and uppercase English alphabets and digits from 0-9. We add random strings before and after each output substring to construct the input string. Similarly, we do this for the other output substrings and finally combine the I/O substrings to make a single I/O pair. We repeat the above process, keeping the output substring size fixed across the samples in a single example, and combine those I/O pairs to make a single example. In our case, we use a set of three I/O pairs in a single example.
We illustrate the process of creating I/O pairs through the following example. In the first step, we assume an output substring of length three: for sample-1 it is “abc", for sample-2 “klp", and for sample-3 “12j". In the second step, we add random input strings before and after the first output substring, giving sample-1 “dfg1#abc#2311", sample-2 “era#klp#hj1", and sample-3 “h2ral#12j#klj23jk". In the third step, we create a new output substring that may or may not follow the similar-length property, and repeat the second step. For example, let us assume that the second output substring is of varied length, say “hjuk", “puefhkj", and “jf16hsk". Now, we either append this directly to the input with some delimiter or first add some other random string before or after it. In this case, we append it directly using the delimiter “@", so the final input strings become “dfg1#abc#2311@hjuk", “era#klp#hj1@puefhkj" and “h2ral#12j#klj23jk@jf16hsk". We can combine the output substrings using any character or directly. In this example, we combine them directly, which leads to the following output samples corresponding to the input samples: “abchjuk", “klppuefhkj" and “12jjf16hsk". We can repeat the same process by generating more output substrings for an example.
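A sketch of this generation procedure for the single-substring case is shown below (Python; the alphabet, length ranges, and delimiter pool follow the description above, but the helper names and exact random choices are ours):

```python
import random
import string

ALPHANUM = string.ascii_letters + string.digits
DELIMITERS = "#@%_-"

def random_str(pool, lo, hi):
    return "".join(random.choices(pool, k=random.randint(lo, hi)))

def similar_length_example(num_pairs=3):
    """Generate one example (list of I/O pairs) whose outputs all have the same length."""
    out_len = random.randint(2, 9)                 # fixed output length across the example
    pairs = []
    for _ in range(num_pairs):
        out = random_str(ALPHANUM, out_len, out_len)
        delim = random.choice(DELIMITERS)
        inp = random_str(ALPHANUM, 2, 6) + delim + out + delim + random_str(ALPHANUM, 2, 6)
        pairs.append((inp, out))
    return pairs

random.seed(0)
print(similar_length_example())
```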
§.§.§ Exact Position Placement Ambiguity
The process of example generation for this ambiguity will remain almost the same as the “Similar Length Ambiguity" property. The only change is that instead of fixing output substring length across samples, we will fix the output substring's position in the input string.
§.§.§ Exact Match Ambiguity
- In this case, the process differs with respect to output substring value. The output substring value across the I/O pairs within the same example will remain the same. This property inherently also satisfies the Similar Length Ambiguity.
§.§.§ Similar in Token Type Ambiguity
In this case, the process differs with respect to output substring type. That is, the output substring's token-type across the I/O pairs within same example will remain the same. In our work, we define two types of token-types viz. alphabets and numerals. More specifically, the two categories of similar token types are when, either the output strings contain only the uppercase and lowercase alphabets or only digits from 0-9.
§.§.§ Repeating Characters Ambiguity
In this case, the output substring exists (or repeats itself) at multiple positions in the input.
§.§ Ablation Studies
We compare the results of two major variations of the proposed framework: (a) two different loss functions - Cross-Entropy and Focal Loss, and (b) the importance of each layer, assessed by removing it from the framework. We consider the model in Figure <ref> as the main model, referred to as Our in the results tables. We carry out various ablation studies of the proposed model by removing individual components to ascertain the role played by each component. These models are discussed below.
§.§.§ Our_No_CNN:
In this setup, we remove the CNN and the MaxPool layers from the proposed model architecture and only pass the concatenated output encodings to the classification layer.
§.§.§ Our_No_AM
In this setup, we remove the Attention Mechanism from the proposed model. We retain the same output encoder but set the attention weight of each output character over all input characters equal to 1 while calculating the attention vector.
§.§.§ Our_GRU:
In this, we replace all LSTM layers and cells with GRU <cit.> cells in the proposed architecture. We retain the same overall architecture and keep the GRU hidden size equal to 512.
§.§ Discussions
§.§.§ Quantitative Results
In Table <ref>, we compare the results of the proposed framework with two different loss functions, Cross-Entropy and Focal Loss. We also provide a quantitative analysis highlighting the importance of each layer: we remove the layer from the task-specific modules and then report the resulting performance. We show the property-wise performance in Table <ref>. From the results, we can see that overall Cross-Entropy performs better than Focal Loss. The model was trained with 26,667 examples per ambiguous property for 100 epochs with a batch size of 5. We set the weights of the losses corresponding to the five ambiguity tasks all equal to 1. We report results on a test set of 6,667 examples.
The main model, denoted by Our, performs better than the other variations of the proposed framework when using the same loss metric. We can also observe that removing the attention layer from the main model decreases performance by 10-20% in most cases, which highlights the need for the attention layer. A similar pattern can be observed when we remove the CNN layer from the main model; in some cases, performance drops to around 50%. Removing the CNN layer degrades the model more than removing the attention layer, which shows that the CNN part of the architecture plays an important role in ambiguity detection. We also see a significant drop in performance in most cases if we replace the LSTM units with GRU units. The reason is that LSTM units are able to capture context better than GRU when a sufficient number of training samples is provided <cit.>. This analysis shows the importance of the different layers in our proposed framework.
Combining all these layers makes the system perform almost 100 percent accurately on the test set, which shows that these ambiguities can be learned easily if the architecture captures context, interrelationships, and the attention of the output on the input. In some cases, we observe that other variations also give perfect results, which indicates that for those properties a simpler network can also generalize to unseen test data.
§.§.§ Saliency Maps
To better understand the predictions of the proposed model, we use integrated-gradients-based saliency <cit.> on the inputs of the examples for visualization. We use three properties (similar length, exact match, and repeating characters) to illustrate the predictions of the learned model, as shown in Figure <ref>. For each of these properties, we use one example (three I/O samples) to visualize the saliency maps. We also use a single substring in the output, just for ease of visualization, as the maps become harder to interpret when the output contains multiple substrings.
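For reference, attributions of this kind can be approximated with a few lines of PyTorch (a sketch, not our analysis code; the zero-embedding baseline and the assumed `forward_from_embeddings` hook into the model are our simplifications):

```python
import torch

def integrated_gradients(model, embed, input_ids, output_ids, target=1, steps=50):
    """Approximate integrated gradients on the output-character embeddings
    (baseline: all-zero embeddings), following the integrated-gradients method."""
    e_out = embed(output_ids).detach()
    total_grad = torch.zeros_like(e_out)
    for k in range(1, steps + 1):
        scaled = (k / steps) * e_out          # point on the straight path from baseline to input
        scaled.requires_grad_(True)
        model.zero_grad()
        logits = model.forward_from_embeddings(input_ids, scaled)   # assumed hook into the model
        logits[:, target].sum().backward()
        total_grad += scaled.grad
    return (e_out * total_grad / steps).sum(dim=-1)                 # per-character attribution scores
```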
The first row in the Figure <ref> denotes the saliency maps corresponding to the Similar Length Ambiguity property for the I/O pair - {"input": ["niti gup", "klop kio", "xyz abc"], "output": ["gup", "kio", "abc"]}. From Figure <ref> (a), we can see that in all the inputs, more importance (shown by lighter colors with high values) is given to the characters which mark the beginning and the end of the part of the string (“gup", “kio", and “abc") which belongs to the output. That is, we can see that a higher saliency score is associated with the hyphen and the @end symbol which mark the beginning and the ending of the output string. Hence, we can conclude that the model is able to learn the Similar Length Ambiguity property.
The second row illustrates the saliency maps for the Exact Match Ambiguity property for the I/O pair - {"input": ["niti abc123", "klop abc123", "xyz abc123"], "output": ["abc123", "abc123", "abc123"]}. Here, it can be seen that, on average, more importance is given to the part of the input which contains the output as compared to the one which does not. That is, the characters corresponding to abc123 have higher saliency values as compared to the other parts like niti, klop, and xyz in the three inputs respectively. Hence, we can conclude that the model is able to recognize the output strings clearly and hence correctly classifying them.
The third row shows the saliency maps for the Repeating Characters Ambiguity for the I/O pair - {"input": ["M%qSFA8qb%We %qSFA8qb%", "1bN%i6Op4%YK%i6Op4%", "Yp%83cGK3%yRv%83cGK3%"], "output": ["qSFA8qb", "i6Op4", "83cGK3"]}. It can be noticed that the characters in the output string have higher saliency values on an average in the input in their second repetition as compared to their first occurrence. This shows that the model is able to well recognize the repeated characters and hence correctly classify them. We have observed similar kinds of patterns for the other ambiguities.
§.§.§ Case Study: Impact of detecting multiple intents and correcting them before building PBE systems
In this section, we discuss how the presence of ambiguity in input and output annotations can affect the output of widely used tools like PROSE <cit.> and Microsoft Excel. Table <ref> shows the different ambiguities detected by the proposed system on 6 examples, and also shows whether existing PBE systems are able to learn the correct intent from those sets of I/O pairs. For each example, the user provides three I/O samples to convey the desired intent. However, as the ambiguities-detected column shows, each of these examples has some kind of ambiguity or multi-intent issue. The effect of this is reflected in the mismatch between the PROSE/Excel output columns and the GT output column. This shows the need for a framework that helps to uncover multi-intent quality issues in annotations before generating a program through any PBE system.
In the first example in Table <ref>, the system detects “Similar in Token Type Ambiguity", because the substrings across the outputs (only one substring exists in this case) have the same token type. This can lead to a multiple-intent issue: it is unclear whether the user wants to extract everything after “_" irrespective of the token/data type, or is only interested in numeric content. The same multi-intent confusion is reflected in the outputs of two different PBE systems on the input “B_DS2345": (a) the PROSE output is “2345", meaning the PROSE framework learned to extract numeric content after “_", and (b) the Excel output is “DS2345", meaning Excel learned to extract all content after “_". It is therefore useful for the user to first analyze the detected ambiguity and, if that ambiguity conflicts with the actual intent, provide new samples or change existing ones accordingly. For the first example, the user's intent is to extract everything after “_" and the detected ambiguity is of similar token type, so the user can modify or add one new sample in which the extracted output string also contains non-numerical characters. With this additional I/O sample (highlighted in bold), provided after analyzing the detected ambiguity, both PROSE and Excel are able to learn the correct intent. This is reflected in the output columns, whose values now match the GT column (see Table <ref>).
Similarly, if we analyze the fifth example in Table <ref>, the system detects multiple ambiguities. Exact Position, Similar Length, and Similar in Token Type ambiguities exist for both output substrings (Mohan/Abhil/Johny and Mr.), while Exact Match Ambiguity exists only for the “MR" substring in the output. For the first output substring (Mohan/Abhil/Johny), the user is fine with the Exact Position and Similar in Token Type ambiguities but wants to add a new example to remove the Similar Length Ambiguity. For the second output substring, the user is fine with all the detected ambiguities except the Exact Position Placement Ambiguity, because the goal is not to extract this information from the input string but to add it as a constant string in the output. After analyzing these properties, the user can provide new samples that remove these ambiguities so that the correct intent can be learned. As the table shows, due to these ambiguities both PROSE and Excel initially learn the intent wrongly; however, after analyzing the ambiguities, the user provides the new sample shown in Table <ref>, which helps the systems learn the correct intent, as seen from the correct output on the test data.
Similarly, by providing new samples, as shown in Table <ref>, for the other examples, the user is able to resolve the multi-intent quality issues and learn the correct intent through existing PBE frameworks. This shows the effectiveness of our proposed framework for detecting ambiguity in PBE systems, specifically in the string-transformation domain.
§ RELATED WORK
Task-specific string transformation can be achieved via both program synthesis and induction models. Induction-based approaches obviate the need for a DSL since they are trained to generate the required output directly from the input string; they have been used in tasks like array sorting <cit.> and long binary multiplication <cit.>. However, induction models are not feasible for the string-transformation domain, as they must be re-trained for each task and have lower generalization accuracy on unseen samples than synthesis models <cit.>. In the literature, both neural-guided and symbolic approaches have been widely used for program synthesis.
Several neural-guided approaches have been proposed in the last few years for program synthesis <cit.>. A sequential encoder-decoder network to infer transformation programs that are robust to noise present in input-output strings, where the hand-engineered symbolic systems fail terribly is proposed in <cit.>. A different variant of an encoder-decoder network where input-output string encoders are not cascaded but work in parallel to infer program sequences is proposed in <cit.>. In <cit.>, a novel neural architecture consisting of a R3NN module that synthesizes a program by incrementally expanding partial programs is used. These networks can be trained end-to-end and do not require any deductive algorithm for searching the hypotheses space. However, they do not guarantee that inferred programs are consistent with the observed set of input-output pairs and also, training on synthetically generated datasets results in poor generalizability on real-world tasks.
Symbolic Program Synthesis approaches operate by dividing required transformation tasks into sub-tasks and searching the hypothesis space for regex-based string expressions to solve each of them. However, smart search and ranking
strategies to efficiently navigate the huge hypothesis search space require significant engineering effort and domain knowledge. One of the earliest attempts to solve the problem
of program synthesis pioneered the Flash-Fill algorithm designed to infer specification satisfying string transformation program in the
form of Abstract Syntax Trees (AST) <cit.>. The
PROSE system from <cit.> employs several hand-crafted heuristics to design ranking functions for deductive search. Systems like PROSE perform well on tasks similar to previously encountered ones but face a generalizability issue when exposed to new, unseen tasks. This is also demonstrated in Table <ref>, where the system infers one intent which is satisfied by the seen examples but fails on new unseen test data. Since PBE systems for string transformations rely on input and output annotations, it is necessary to provide them with non-ambiguous input and output samples. No existing work in the literature addresses finding ambiguity or multiple-intent quality issues in input and output annotations and surfacing that information to the user, so that the user can examine the detected ambiguities and accordingly modify existing samples or provide new ones. Such a system helps capture the user's intent more clearly and makes the synthesized program generalize to unseen data. Hence, in this paper we focus on finding multi-intent quality issues in input-output annotations in order to learn the correct intent.
§ CONCLUSION
This paper aims to solve the problem of detecting ambiguity in user-provided I/O annotations for PBE systems, which otherwise leads to the generation of wrong-intent programs. To the best of our knowledge, our proposed framework is the first to address this issue at the input and output annotation level. To solve it, we propose an extensible multi-tasking attention-based DNN that finds multiple intents in the I/O samples. We also define a set of generic properties that help in detecting multiple intents in the annotations. We have carried out a quantitative analysis of different variations of the proposed model architecture to show the impact of the proposed system's modules, and we have illustrated the effectiveness of the proposed model through saliency maps and through the outputs of an existing PBE system. A natural extension of our work is to use the detected ambiguity properties to automatically generate new input and output samples and to improve the program search space.
|
http://arxiv.org/abs/2307.04855v1 | 20230710185620 | Time-resolved purification of photon pairs from ultrasmall sources | [
"Vitaliy Sultanov",
"Maria Chekhova"
] | quant-ph | [
"quant-ph",
"physics.optics"
] |
Time-resolved purification of photon pairs from ultrasmall sources
Vitaliy Sultanov^1, 2* and Maria V. Chekhova^1, 2
^1 Friedrich-Alexander Universität Erlangen-Nürnberg, Staudstrasse 7 B2, 91058, Erlangen, Germany
^2 Max-Planck Institute for the Science of Light, Staudtstrasse 2, 91058, Erlangen, Germany
^* Corresponding author: [email protected]
August 12, 2023
Generation of entangled photons through spontaneous parametric down-conversion (SPDC) from ultrasmall sources like thin films, metasurfaces, or nanoantennas, offers unprecedented freedom in quantum state engineering. However, as the source of SPDC gets smaller, the role of photoluminescence increases, which leads to the contamination of two-photon states with thermal background. Here we propose and implement a solution to this problem: by using pulsed SPDC and time distillation, we increase the purity and the heralding efficiency of the photon pairs. In the experiment, we increase the purity of two-photon states generated in a 7 μm film of lithium niobate from 0.002 to 0.99. With the higher purity, we were able to observe and characterize different polarization states of photon pairs generated simultaneously due to relaxed phase matching. In particular, we showed the presence of orthogonally polarized photons, potentially usable for the generation of polarization entanglement.
Keywords: photon pairs, purity, nanoscale
Miniaturized sources of quantum photonic states are in the spotlight of quantum research as they are vital for the investigation of light-matter interaction at the nanoscale and the realization of quantum technologies with integrated photonic circuits. One of the leading trends is “flat optics", involving ultrathin layers, down to a thickness of several atomic layers, and metasurfaces <cit.>. In linear and nonlinear optics, flat optical devices already outperform their bulk counterparts <cit.>, especially in terms of tunability and multifunctionality <cit.>. `Flat' platforms are also promising sources of quantum light, including single-photon and two-photon states <cit.>. Nanoscale sources of photon pairs mainly use spontaneous parametric down-conversion (SPDC) without momentum conservation <cit.>, which gives unprecedented flexibility for the engineering of quantum entanglement in position-momentum <cit.>, time-frequency <cit.>, and polarization <cit.>, although at the cost of low generation efficiency. Researchers try out different materials and designs for nanoscale sources of quantum light <cit.>, investigating new approaches for generation rate enhancement, quantum state engineering, and adding multi-functionality <cit.>.
A huge advantage of nanoscale sources for producing high-dimensional entangled photons, apart from the freedom in the state engineering, is that such sources are free from most of the entanglement degradation mechanisms. For instance, due to the confined volume of nonlinear interaction, the dispersion effects are negligible. However, the signal-to-noise ratio is significantly reduced by the presence of background photoluminescence <cit.>. Although highly-dimensional entangled photonic states are robust to noise to some extent <cit.>, at the nanoscale the noise level is so high that it significantly lowers the purity of the generated two-photon state and makes it impossible to certify a high degree of entanglement.
Photoluminescence is an incoherent process, therefore its rate scales linearly with the thickness of the source and at nanoscale it is much brighter than SPDC, whose rate scales quadratically with the thickness. Typically, background thermal noise surpasses photon pair generation by several orders of magnitude. Because photoluminescence is isotropic and spectrally broadband, photon pairs can be filtered from it neither in space nor in polarization nor in frequency. Although photon pairs can still be observed via correlation measurements, such "noisy" sources of two-photon light are barely feasible for quantum applications requiring a high purity of the generated quantum light.
The two-photon state generated via nanoscale SPDC is a mixture of the pure highly entangled (multimode) two-photon state |Ψ⟩ and a maximally mixed state of the photoluminescent background noise,
ρ̂ = p|Ψ⟩⟨Ψ| + (1-p)/d^2 𝕀_d^2,
where p is the probability of the pure state, d the dimensionality, or the number of modes, and 𝕀_d^2 the d^2-dimensional identity operator <cit.>. The number of spectral modes is very large as photons occupy a broad spectral range. Under this condition, the purity of the mixed part is negligibly small <cit.>, and p fully determines the purity of the generated state.
For low-flux strongly multimode light, the probabilities to have a pair from SPDC and photoluminescence scale, respectively, linearly and quadratically with the corresponding mean photon numbers N_SPDC, N_PL:
p =C N_SPDC, 1-p =C N_PL^2,
where C is the proportionality coefficient. Therefore, p depends on the total mean number of photons N_0 and the fraction of photons produced by SPDC, α=N_SPDC/N_0, as
p(α, N_0) = α/(α+(1-α)^2 N_0).
A rigorous calculation (Supplementary Information, section 3) yields a very similar result. The purity of state (<ref>),
Tr(ρ^2)=p^2(1-1/d^2)+1/d^2,
becomes p^2 for a highly dimensional state, d≫ 1. Figure <ref> shows the purity of the state as a function of α for different values of the total photon number N_0. In nanoscale SPDC experiments, typically α< 10^-2 and the purity of photon pairs is very low.
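For illustration, this dependence can be reproduced in a few lines (Python; the values of α, N_0, and d below are arbitrary placeholders, not measured quantities):

```python
import numpy as np

def purity(alpha, N0, d=1000):
    """Purity Tr(rho^2) of the mixed state for a fraction alpha of SPDC photons,
    total mean photon number N0, and d spectral modes (d >> 1)."""
    p = alpha / (alpha + (1 - alpha) ** 2 * N0)   # probability of the pure two-photon state
    return p ** 2 * (1 - 1 / d ** 2) + 1 / d ** 2

for alpha in (0.01, 0.1, 0.5, 1.0):
    print(alpha, purity(alpha, N0=0.1))
```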
The solution we propose relies on the fundamental difference between the two processes. While SPDC is a parametric process and occurs almost instantaneously, photoluminescence is a non-parametric process with the time dynamic defined by the matter relaxation. Here we show that the photon pairs can be distilled from the photoluminescent background by time-resolved detection under pulsed SPDC. Resolving the time dynamics of emission is not possible under continuous-wave (CW) pump excitation <cit.>, which is typically used for nano-SPDC. To the best of our knowledge, this is the first work dedicated to pulsed SPDC at the nanoscale.
As a source of photon pairs, we use a 7 μ m thick wafer of x-cut lithium niobate (LiNbO_3) illuminated by laser radiation with a wavelength of 532 nm, either CW or pulsed (Fig. <ref>). For the experiments with the pulsed pump, we use a laser with 25 ps pulse duration and 1 kHz repetition rate. A set of two half-wave plates (HWP) and a Glan prism (GP) control the power and polarization of the pump. After focusing the pump onto the wafer, we collect the emitted photons, filter out the pump with a set of long-pass filters with a maximum cut-on wavelength of 950 nm and send photons to a Hanbury Brown - Twiss setup. The latter consists of a fiber beam splitter connected to two superconducting nanowire single-photon detectors (SNSPDs) and a time tagger, which registers the detectors' `clicks' and builds the distribution of the arrival time difference between the photon detections (`the coincidence histogram'). An additional set of a HWP and a GP filter an arbitrarily chosen linear polarization state of registered photons.
A typical coincidence histogram for the case of the CW pump is shown in Fig. <ref>. Although the pronounced narrow peak clearly indicates photon pair detection, there is a strong background of accidental coincidences, caused by the high rates of photons registered by both detectors. These rates, amounting to 1.2· 10^5 and 1.5· 10^5 s^-1, originate from photoluminescence and exceed the coincidence rate by several orders of magnitude. The ratio of the coincidence (after subtracting the accidentals) and singles rates is known as the heralding efficiency,
and it is crucial for using SPDC as a source of single photons. In bulk SPDC sources, the heralding efficiency coincides with the detection efficiency. However, in the case of a noisy source, it also reflects the purity of the photon pairs. The heralding efficiency of each channel is related to α as η_1,2=α η_1, 2^det, with η_1, 2^det being the detection efficiency (see Supplementary Information, Section 3). Photoluminescence significantly lowers α and, as a result, the heralding efficiency.
To distill the photons emitted via SPDC from the thermal radiation caused by photoluminescence, we use a pulsed laser as a pump. Its electronic trigger synchronizes the detection of the emitted light, similar to the time-domain fluorescence lifetime imaging (FLIM) with single-photon counting <cit.>. Fig. <ref> shows the example of synchronous photon detection revealing the time dynamics of emission. We attribute high and sharp equidistant peaks to the emission of SPDC photons, whereas long subsequent “tails" correspond to photoluminescence photons. By cutting the tails, we remove the contribution of photoluminescence to the single counts of both detectors and strongly suppress the rate of accidental coincidences. The coincidence histogram is obtained by acquiring the three-fold coincidences between the detectors' `clicks' and the electronic trigger of the laser (see Supplementary Information, section 2).
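The gating step itself can be sketched as follows; the arrival times, gate width, and emission time constants below are toy values of our own choosing, only meant to illustrate how prompt SPDC clicks are separated from the delayed photoluminescence tail.

```python
# Toy sketch of time-domain distillation: keep only clicks that arrive within a
# short gate after the laser trigger (all numbers here are illustrative).
import numpy as np

rng = np.random.default_rng(0)
gate = 1.0  # ns, illustrative gate width

spdc = rng.normal(loc=0.2, scale=0.05, size=200)    # prompt SPDC photons (ns)
pl = rng.exponential(scale=100.0, size=2000)        # delayed photoluminescence (ns)
arrival = np.concatenate([spdc, pl])                # times relative to trigger

kept = arrival[(arrival >= 0.0) & (arrival <= gate)]
print(f"kept {kept.size} of {arrival.size} clicks inside the {gate} ns gate")
```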
To fairly compare photon pair generation in the CW and the pulsed regime, we acquire the statistics of single-photon and coincidence events for a set of input pump powers and calculate, for both cases, the second-order correlation function and the heralding efficiency (Fig. <ref>). In both cases, we fit the experimental data with the inverse proportionality to the photon rate, which scales linearly with the pump power. Such a dependence, as well as a high value of the second-order correlation function, clearly points towards photon pair detection. However, the high number of photons detected from the photoluminescent background in the CW case results in an extremely low heralding efficiency of 0.085 ± 0.002%. In contrast, in the pulsed regime, time-domain distillation increases the heralding efficiency to 9.6±0.1%, two orders of magnitude higher. We attribute this two orders of magnitude improvement in the heralding efficiency to the two orders of magnitude higher value of α when time-resolved distillation is applied. However, the absolute value of α is not yet known. We determine it below from correlation measurements with different polarization configurations of SPDC.
Due to the relaxed phase matching condition in SPDC from ultrathin sources, pairs are generated both from ordinary (o-) and extraordinary (e-) polarized pump. The versatility of their polarization properties is only restricted by the efficiency of different types of SPDC, which can be adjusted by varying the source thickness, and the nonlinear tensor of LN, which has several nonzero elements (see Supplementary Information, section 5). We measure the rates of detected single photons and pairs (coincidences) for four different polarization configurations, involving e- and o-polarization of both the pump and detected photons. The results are shown in Fig. <ref> for both CW and pulsed SPDC. In the first case, there is no time distillation; therefore, the detected photons mainly come from photoluminescence and their rate does not depend on polarization (Fig. <ref>). In the second case, due to the time distillation, the rates of detected photons (Fig. <ref>) and coincidences (Fig. <ref>) are strongly polarization-dependent. The corresponding values of g^(2)(0,0) are shown in Fig. <ref>.
Because the coincidence rates contain almost no contribution from photoluminescence, we use them to analyze the polarization state of the pairs. The strongest coincidence count rate is from the e-polarized pump. Although the largest nonlinear tensor component d_33 supports the generation of e-polarized pairs (`e-ee' process), the rate of o-polarized pairs (`e-oo' process) is higher because of the larger coherence length for this case (see Supplementary Information, section 5). Accordingly (see the inverse dependence of Fig. <ref>), the second-order normalized correlation function g^(2) is lower for o-polarized pairs than for e-polarized ones.
We notice an interesting feature: the co-existence of e-oo and e-ee processes leads to the coherent generation of |oo⟩ and |ee⟩ photon pairs, which, through two-photon interference, convert into pairs of orthogonally polarized photons <cit.>. This makes an ultrathin LN layer a promising source of polarization-entangled photons. Given its ultrabroad SPDC spectrum (Supplementary Information, section 6), allowed by relaxed phase matching and implying a high degree of time/frequency entanglement, such a source will provide high-dimensional hyper-entangled two-photon states.
Although o-polarized pump also generates pairs, the rates are 2 orders of magnitude lower (Fig. <ref>). For o-ee, no SPDC pairs are expected since the effective value of χ^(2) is zero. The value of g^(2)≈ 1 for this configuration means that nearly all photons are produced by photoluminescence. For o-oo, g^(2) is somewhat higher, indicating the presence of some SPDC photons.
By assuming the photon pair generation rate from the o-polarized pump to be negligible (see Supplementary Information, section 5), we estimate the residual photoluminescent background as the rate of single counts measured from this pump polarization. From Fig. <ref>, it is clear that the residual level of photoluminescent photons is only about 10% of the overall single-count rate obtained after time distillation for the e-polarized pump. Therefore, the lower bound for α after time-domain distillation is 90%, and a relatively low heralding efficiency of 10% is entirely attributed to the detection efficiency. Then, based on the increase of the heralding efficiency after time-resolved distillation, we conclude that the fraction α of SPDC photons in the emitted light increases from 1% to 90%. This improvement leads to a significant increase of the purity of the generated two-photon state. Taking into account the detection efficiency of 10%, we conclude that the number of generated photons N_0 for the e-oo configuration is about 0.2 photons per pulse, and the number of modes d can be estimated as 1130 (Supplementary Information, section 6). For this particular set of parameters, the purity increases from 0.002 to 0.99 (Fig. <ref>). Therefore, time-resolved distillation allows us to increase the purity of the two-photon state to almost unity, making the emitted light feasible for quantum technology applications.
In conclusion, we have implemented, for the first time, pulsed SPDC in an ultrathin source. We have shown that, unlike the CW regime, the pulsed regime enables time distillation of the detected photons and the achievement of high purity. While the heralding efficiency for the CW case was only 0.085± 0.002%, in the pulsed regime, for the same value of the second-order correlation function, the measured heralding efficiency was two orders of magnitude higher, 9.6± 0.1%. This value is mainly limited by the detection inefficiency and optical losses (see Section 3 of the Supplementary Information). Through the time distillation, we increase the purity of the photon pairs from 0.01 to 0.99.
The estimated heralding efficiency is enough to use flat sources for real quantum technology applications such as quantum key distribution <cit.> or boson sampling <cit.>. The small size of flat sources and the relative freedom in the material choice make non-phase-matched SPDC sources a convenient tool for the generation of quantum light in a `flat' geometry. Due to the loose phase matching condition for nanoscale SPDC, any nonlinear material can be used for photon pair generation. It is of particular interest to test monolayers <cit.> and few-layer crystals <cit.>. Such materials possess an extremely high value of the second-order nonlinearity and maintain all the unique features of nanoscale SPDC in the extreme case of vanishing crystal length. Further, one can create composite materials and combine ultrathin sources of photon pairs with, e.g., quantum dots, to perform various quantum operations. This requires a high heralding efficiency of the two-photon source, which is available from flat sources in the pulsed regime.
1
Yu:2014 N. Yu and F. Capasso, "Flat optics with designer metasurfaces," Nat. Mater. 13, 139-150 (2014). DOI: https://doi.org/10.1038/nmat383910.1038/nmat3839
Krasnok2018 A. Krasnok, M. Tymchenko, A. Alù, "Nonlinear metasurfaces: a paradigm shift in nonlinear optics," Mater. Today 21, 8-21 (2018). DOI: https://doi.org/10.1016/j.mattod.2017.06.00710.1016/j.mattod.2017.06.007
Ko2022 J. H. Ko, Y. J. Yoo, Y. Lee, H.-H. Jeong, Y. M. Song, "A review of tunable photonics: Optically active materials and applications from visible to terahertz," iScience 25 (8), 104727 (2022). DOI: https://doi.org/10.1016/j.isci.2022.10472710.1016/j.isci.2022.104727
Chen2021 W. T. Chen and F. Capasso, "Will flat optics appear in everyday life anytime soon?", Appl. Phys. Lett. 118, 100503 (2021). DOI: https://doi.org/10.1063/5.003988510.1063/5.0039885
Toth2019 M. Toth and I. Aharonovich, "Single Photon Sources in Atomically Thin Materials," Annu. Rev. Phys. Chem. 70, 123-142 (2019). DOI: https://doi.org/10.1146/annurev-physchem-042018-05262810.1146/annurev-physchem-042018-052628
Soln2021 A. S. Solntsev, G. S. Agarwal, and Y. Kivshar, "Metasurfaces for quantum photonics," Nature Photonics 15, 327–336 (2021). DOI: https://doi.org/10.1038/s41566-021-00793-z10.1038/s41566-021-00793-z
Sharap2023 P. R. Sharapova, S. S. Kruk, and A. S. Solntsev, "Nonlinear Dielectric Nanoresonators and Metasurfaces: Toward Efficient Generation of Entangled Photons," Laser Photonics Rev. 2200408 (2023). DOI: https://doi.org/10.1002/lpor.20220040810.1002/lpor.202200408
Okoth2019 C. Okoth, A. Cavanna, T. Santiago-Cruz, and M. V. Chekhova, "Microscale generation of entangled photons without momentum conservation," Phys. Rev. Lett. 123, 263602 (2019). DOI: https://doi.org/10.1103/PhysRevLett.123.26360210.1103/PhysRevLett.123.263602
Okoth2020 C. Okoth, E. Kovlakov, F. Bönsel, A. Cavanna, S. Straupe, S. P. Kulik, and M. V. Chekhova, "Idealized Einstein-Podolsky-Rosen states from non-phase-matched parametric down-conversion," Phys. Rev. A 101, 011801-011806 (2020). DOI: https://link.aps.org/doi/10.1103/PhysRevA.101.01180110.1103/PhysRevA.101.011801
Zhang2022 J. Zhang, J. Ma, M. Parry, M. Cai, R. Camacho-Morales, L. Xu, D. N. Neshev and A. A. Sukhorukov, "Spatially entangled photon pairs from lithium niobate nonlocal metasurfaces," Sci. Adv. 8, eabq4240 (2022). DOI: https://www.science.org/doi/10.1126/sciadv.abq424010.1126/sciadv.abq4240
Santiago-Cruz2021 T. Santiago-Cruz, V. Sultanov, H. Zhang, L. A. Krivitsky, and M. V. Chekhova, "Entangled photons from subwavelength nonlinear films," Opt. Lett. 46(3), 653-656 (2021). DOI: https://doi.org/10.1364/OL.41117610.1364/OL.411176
Sultanov2022 V. Sultanov, T. Santiago-Cruz, and M. V. Chekhova, "Flat-optics generation of broadband photon pairs with tunable polarization entanglement," Opt. Lett. 47, 3872-3875 (2022). DOI: https://doi.org/10.1364/OL.45813310.1364/OL.458133
Guo2022 Q. Guo, X.-Z. Qi, L. Zhang, M. Gao, S. Hu, W. Zhou, W. Zang, X. Zhao, J. Wang, B.n Yan, M. Xu, Y.-K. Wu, G. Eda, Z. Xiao, S. A. Yang, H. Gou, Y. P. Feng, G.-C. Guo, W. Zhou, X.-F. Ren, C.-W. Qiu, S. J. Pennycook, and A. T. S. Wee, "Ultrathin quantum light source enabled by a nonlinear van der Waals crystal with vanishing interlayer-electronic-coupling", Nature 613, 53-59 (2023). DOI: https://doi.org/10.1038/s41586-022-05393-710.1038/s41586-022-05393-7
Marino2019 G. Marino, A. S. Solntsev, L. Xu, V. F. Gili, L. Carletti, A. N. Poddubny, M. Rahmani, D. A. Smirnova, H. Chen, A. Lemaître, G. Zhang, A. V. Zayats, C. De Angelis, G. Leo, A. A. Sukhorukov, and D. N. Neshev, "Spontaneous photon-pair generation from a dielectric nanoantenna," Optica 6, 1416-1422 (2019). DOI: https://doi.org/10.1364/OPTICA.6.00141610.1364/OPTICA.6.001416
Santiago-Cruz2021_Nano T. Santiago-Cruz, A. Fedotova, V. Sultanov, M. A. Weissflog, D. Arslan, M. Younesi, T. Pertsch, I. Staude, F. Setzpfandt, and M. V. Chekhova, "Photon Pairs from Resonant Metasurfaces," Nano Lett. 21, 4423-4429 (2021). DOI: https://doi.org/10.1021/acs.nanolett.1c0112510.1021/acs.nanolett.1c01125
Jin2021 B. Jin, D. Mishra, and Ch. Argyropoulos, "Efficient single-photon pair generation by spontaneous parametric down-conversion in nonlinear plasmonic metasurfaces", Nanoscale 13, 19903-19914 (2021). DOI: https://doi.org/10.1039/D1NR05379E10.1039/D1NR05379E
Duong2022 N. Hanh Duong, G. Saerens, F. Timpu, M. Buscaglia, V. Buscaglia, A. Morandi, J. Müller, A. Maeder, F. Kaufmann, A. Solntsev, and R. Grange, "Spontaneous parametric down-conversion in bottom-up grown lithium niobate microcubes," Opt. Mater. Express 12, 3696-3704 (2022). DOI: https://doi.org/10.1364/OME.46298110.1364/OME.462981
Santiago-Cruz2022 T. Santiago-Cruz, S. D. Gennaro, O. Mitrofanov, S. Addamane, J. Reno, I. Brener, and M. V. Chekhova, "Resonant metasurfaces for generating complex quantum states," Science 377, 991-995 (2022). DOI: https://doi.org/10.1126/science.abq868410.1126/science.abq8684
Saerens2023 G. Saerens, T. Dursap, I. Hesner, N. M. H. Duong, A. S. Solntsev, A. Morandi, A. Maeder, A. Karvounis, P. Regreny, R. J. Chapman, A. Danescu, N. Chauvin, J. Penuelas, and R. Grange, "Background-Free Near-Infrared Biphoton Emission from Single GaAs Nanowires," Nano Lett. 23, 3245-3250 (2023). DOI: https://doi.org/10.1021/acs.nanolett.3c0002610.1021/acs.nanolett.3c00026
Son2023 C. Son, V. Sultanov,T. Santiago-Cruz, A. P. Anthur, H. Zhang, R. Paniagua-Dominguez, L. Krivitsky, A. I. Kuznetsov, and M. Chekhova, "Photon pairs bi-directionally emitted from a resonant metasurface," Nanoscale 15, 2567 (2023). DOI: https://pubs.rsc.org/en/content/articlelanding/2023/nr/d2nr05499j10.1039/d2nr05499j
Zhu2021 F. Zhu, M. Tyler, N. H. Valencia, M. Malik, and J. Leach, "Is high-dimensional photonic entanglement robust to noise?" AVS Quantum Sci. 3, 011401 (2021). DOI: https://doi.org/10.1116/5.003388910.1116/5.0033889
Flagg2012 E. B. Flagg, S. V. Polyakov, T. Thomay, and G. S. Solomon, "Dynamics of Nonclassical Light from a Single Solid-State Quantum Emitter," Phys. Rev. Lett. 109, 163601 (2012). DOI: https://doi.org/10.1103/PhysRevLett.109.16360110.1103/PhysRevLett.109.163601
Becker2012 W.Becker, "Fluorescence lifetime imaging – techniques and applications," Journal of Microscopy 247, 119-136 (2012). DOI: https://doi.org/10.1111/j.1365-2818.2012.03618.x10.1111/j.1365-2818.2012.03618.x
Ecker2019 S. Ecker, F. Bouchard, L. Bulla, F. Brandt, O. Kohout, F. Steinlechner, R. Fickler, M. Malik, Y. Guryanova, R. Ursin, and M. Huber, "Overcoming Noise in Entanglement Distribution," Phys. Rev. X 9, 041042 (2019). DOI: https://doi.org/10.1103/PhysRevX.9.04104210.1103/PhysRevX.9.041042
Nape2021 I. Nape, V. Rodríguez-Fajardo, F. Zhu, H.-Ch. Huang, J. Leach, and A. Forbes, "Measuring dimensionality and purity of high-dimensional entangled states," Nature Communication 12, 5159 (2021). DOI: https://doi.org/10.1038/s41467-021-25447-010.1038/s41467-021-25447-0
Ivanova2006 O.A. Ivanova, T.Sh. Iskhakov, A.N. Penin, M.V. Chekhova, "Multiphoton correlations in parametric down-conversion and their measurement in the pulsed regime," Quantum Electron. 36 951 (2006). DOI: http://dx.doi.org/10.1070/QE2006v036n10ABEH01330010.1070/QE2006v036n10ABEH013300
Mandel_Wolf L. Mandel and E. Wolf, Optical Coherence and Quantum Optics. Cambridge University Press (1995)
Kwiat1999 P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, "Ultrabright source of polarization-entangled photonsn," Phys. Rev. A 60, R773 (1999). DOI: https://doi.org/10.1103/PhysRevA.60.R77310.1103/PhysRevA.60.R773
Adachi2007 Y. Adachi, T. Yamamoto, M. Koashi, and N. Imoto, "Simple and effcient quantum key distribution with parametric down-conversion," Phys. Rev. Lett. 99, 180503 (2007). DOI: https://doi.org/10.1103/PhysRevLett.99.18050310.1103/PhysRevLett.99.180503
Tillmann2013 M. Tillmann, B. Dakić, R. Heilmann, S. Nolte, A. Szameit, and P.Walther, "Experimental Boson sampling," Nat. Photonics 7, 540-544 (2013). DOI: https://doi.org/10.1038/nphoton.2013.10210.1038/nphoton.2013.102
Kalashnikov2016 D. A. Kalashnikov, A. V. Paterova, S. P. Kulik, and L. A. Krivitskiy, "Infrared spectroscopy with visible light," Nature Photonics 10, 98-101 (2016). DOI: https://doi.org/10.1038/nphoton.2015.25210.1038/nphoton.2015.252
MoS_monolayer H. Dinparasti Saleh, S. Vezzoli, L. Caspani, A. Branny, S. Kumar, B. D. Geradot, and Danielle Faccio, "Towards spontaneous parametric down conversion from monolayer MoS_2," Scientific Reports 8, 3862 (2018). DOI: https://doi.org/10.1038/s41598-018-22270-410.1038/s41598-018-22270-4
|
http://arxiv.org/abs/2307.05317v1 | 20230711150142 | Automatic Generation of Semantic Parts for Face Image Synthesis | [
"Tomaso Fontanini",
"Claudio Ferrari",
"Massimo Bertozzi",
"Andrea Prati"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
T. Fontanini, et al.
IMP Lab, Department of Engineering and Architecture, University of Parma
{tomaso.fontanini, claudio.ferrari2, massimo.bertozzi, andrea.prati}@unipr.it
Automatic Generation of Semantic Parts for Face Image Synthesis
Tomaso Fontanini0000-0001-6595-4874 Claudio Ferrari0000-0001-9465-6753 Massimo Bertozzi0000-0003-1463-5384 Andrea Prati0000-0002-1211-529X
August 12, 2023
==============================================================================================================================================
Semantic image synthesis (SIS) refers to the problem of generating realistic imagery given a semantic segmentation mask that defines the spatial layout of object classes. Most of the approaches in the literature, beyond the quality of the generated images, put effort into finding solutions to increase the generation diversity in terms of style, i.e. texture. However, they all neglect a different feature, which is the possibility of manipulating the layout provided by the mask.
Currently, the only way to do so is manually by means of graphical users interfaces.
In this paper, we describe a network architecture to address the problem of automatically manipulating or generating the shape of object classes in semantic segmentation masks, with specific focus on human faces. Our proposed model allows embedding the mask class-wise into a latent space where each class embedding can be independently edited. Then, a bi-directional LSTM block and a convolutional decoder output a new, locally manipulated mask. We report quantitative and qualitative results on the CelebMask-HQ dataset, which show our model can both faithfully reconstruct and modify a segmentation mask at the class level. Also, we show our model can be put before a SIS generator, opening the way to a fully automatic generation control of both shape and texture. Code available at <https://github.com/TFonta/Semantic-VAE>.
§ INTRODUCTION
The task of Semantic Image Synthesis (SIS) consists in generating a photo-realistic image given a semantic segmentation mask that defines the shape of objects. The mask is usually an image in which the pixel values define a specific semantic class (like eyes, skin, hair, etc. in the case of human face).
This allows for accurately defining the spatial layout and shape of the generated images, while maintaining a high degree of freedom in terms of textures and colors.
Indeed, those can be randomly generated <cit.> or by extracting a specific style from a reference image <cit.>.
A nice feature of SIS methods is that the semantic mask can be manipulated to alter the shape of objects in the generated samples.
However, currently this is done manually by using custom painting software that allows the user to modify the shape of one or more mask parts. Attempts at automatic manipulation of the shape of face parts have been made, yet with different techniques, such as by using a 3D deformable model of the face <cit.>.
Whereas manual alteration of the semantic masks is fun, it turns out to be impractical when the objective is to modify the shape of a large number of images.
In an attempt to overcome this limitation, in this paper we explore the problem of the automatic generation and manipulation of classes in segmentation masks, and propose a method that allows the shape of any number of parts to be generated and edited. The proposed model can be used to produce a large variety of novel semantic masks that can then be used in conjunction with any SIS model to generate previously unseen photo-realistic RGB images. This is achieved by designing an architecture composed of an encoder that embeds each of the semantic mask parts separately, a recurrent module composed of a series of bi-directional LSTMs <cit.> that learns the relationships between the shapes of different mask parts and, finally, a decoder that maps the latent representation back into a realistic semantic mask. The model is trained as a Variational Autoencoder (VAE), combining a reconstruction loss with a KL divergence in order to induce a specific distribution in the latent space. This enables the generation, interpolation or perturbation of semantic classes; these specific features, to the best of our knowledge, are still unexplored in the literature. Overall, the main contributions of this paper are the following:
* we explore the novel problem of automatic generation and editing of local semantic classes in segmentation masks, independently from the others;
* we propose a novel architecture combining a VAE and a recurrent module that learns spatial relationships among semantic classes by treating them as elements of a sequence, under the observation that the shape of each part has an influence on the surrounding ones. More in detail, each part embedding is subsequently fed into the LSTM block so to account for shape dependencies, and then employed by the decoder to generate the final mask. The proposed architecture can finally be used in combination with any SIS architecture to boost the shape diversity of the generated samples;
* we quantitatively and qualitatively validate our proposal in the task of face parts editing, and report and extensive analysis of the advantages, limitations and challenges.
§ RELATED WORKS
Given that no prior works addressed the problem presented in this paper, in the following we summarize some recent literature works on semantic image synthesis and variational autoencoders.
Semantic Image Synthesis. Semantic Image Synthesis approaches can be divided into two main categories: diversity-driven and quality-driven. Both of them take inspiration from and improve upon the seminal work of Park et al., named SPADE <cit.>, where semantic image synthesis is achieved by means of custom, spatially-adaptive normalization layers. Methods in the former category focus on the task of generating samples whose shape is conditioned on semantic masks, while the style is generated randomly in order to achieve a high degree of multi-modality. Some examples of these approaches are <cit.>. The trend here points towards increasing the granularity of the generated texture; for example, in CLADE <cit.> styles are generated at the class level, while INADE <cit.> is able to generate instance-specific styles by sampling from a class-wise estimated distribution. On the other hand, quality-driven methods try to extract a specific style from a target image and apply it to the generated results, attempting to maintain both the shape defined by the mask and the texture defined by a reference image. An example of a paper falling in this category is MaskGAN <cit.>, in which a style mapping between an input mask and a target image is achieved using instance normalization. Also in this case, efforts are put into finding solutions to increase the precision and granularity of the style control. To this aim, Zhu et al. developed SEAN <cit.>, a method that is able to extract the style class-wise from each of the different semantic parts of an image and map it locally onto the corresponding area of the input mask. Another work following the same trend is SC-GAN <cit.>. Overall, it is clear that none of the recent literature works deals with the problem of locally manipulating the face shape by acting on segmentation masks.
Variational Autoencoders. Autoencoders introduced in <cit.> were proposed as a way to achieve a compressed latent representation of a set of data, but they lack generation capabilities. On the contrary, Variational Autoencoders (VAE) <cit.> described data generation through a probabilistic distribution. Indeed, once trained using a combination of reconstruction loss and Kullback-Leibler divergence, they can generate new data by simply sampling a random latent distribution and feeding it to the decoder. There exist several variations of VAE such as Info-VAE <cit.>, β-VAE <cit.> and many more <cit.>.
§ NETWORK ARCHITECTURE
The main objective that guided the design of the model architecture is that of performing automatic manipulation and generation of semantic masks, independently for each class. A semantic segmentation mask can be represented as a C-channel image, where each channel is a binary image containing the shape of a specific object class, i.e. M ∈ [0,1]^C × H × W. So, each pixel belongs to a unique class, i.e. has value 1 only in a single channel, and each class shape is complementary to all the others, i.e. there is no intersection between the semantic classes.
The challenge behind manipulating or generating a specific semantic class in a segmentation mask is that its shape, and the shape of all its surrounding classes, need to be adapted so that the above properties are maintained. At the same time, the spatial arrangement of each class also has to be realistic, since it is a scenario-dependent property. In the case of facial features, the spatial relations of the different face parts need to be preserved; for example, the nose should be roughly centered between the eyes.
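The two constraints above (binary channels and mutually exclusive classes) can be checked programmatically; the helper below is our own sketch, assuming a PyTorch tensor of shape (C, H, W).

```python
# Sanity check of the mask representation described above (our sketch):
# every pixel is binary and belongs to exactly one of the C classes.
import torch

def is_valid_semantic_mask(mask: torch.Tensor) -> bool:
    """mask: (C, H, W) tensor expected to contain only 0/1 values."""
    binary = bool(((mask == 0) | (mask == 1)).all())
    one_hot = bool((mask.sum(dim=0) == 1).all())
    return binary and one_hot
```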
§.§ Architecture
To account for the above challenges, we designed our proposed architecture (Fig. <ref>) to have three main components: (1) an MLP ℳ to independently encode the mask channels into a latent representation m_e, which allows us to operate on the mask channels directly in the compressed space; (2) an LSTM-Feed Forward block ℒ composed of three bi-directional LSTM layers to process the encoded mask channels m_e^j and account for possible misalignments resulting from manipulating a semantic class, and a feed-forward block ℱ to further re-arrange the processed mask encodings; (3) finally, a convolutional decoder 𝒟 to reconstruct the complete semantic mask M_o.
MLP Encoder.
The encoder ℳ is a simple MLP made up of three linear layers, each followed by a ReLU activation function. Each mask channel is first flattened so that the input mask has size M ∈ℝ^C × H^2 where H = W = 256 is the spatial size of the mask; each linear layer of the encoder has a hidden size of 256, so that m_e = [m_e^1, ⋯, m_e^C] = ℳ(M) ∈ℝ^C × 256.
Bi-directional LSTM Block.
The bi-directional LSTM <cit.> block was designed to process the encoded mask channels m_e one after another, as if they were frames of a temporal sequence. The goal is that of correcting possible inconsistencies resulting from manipulating or generating a class embedding m_e^c, based on the information of the other classes. Intuitively, if we change the shape of a facial part in the mask e.g. nose, the surrounding parts need to be adjusted so that the combined result looks realistic and artifact-free. One problem arising from using a recurrent module is that of choosing the order in which the channels are processed. Temporal sequences have a unique ordering implicitly defined by the time flow, whereas in our scenario there is no clear nor unique way of choosing the order by which processing the face parts, being them simply parts of a spatial layout.
This motivated us to opt for the bi-directional variant of the LSTM; indeed, the latter processes the sequence in both directions (first to last, and last to first), so that each class embedding is influenced by all other classes, not only by the previously processed ones.
Each class embedding m_e^c is thus processed, and provided as hidden state both for the subsequent m_e^c+1 and previous m_e^c-1 classes. In addition, differently from the standard use of LSTMs where only the last processed embedding keeps flowing through the network, we also store the embeddings at intermediate steps m_e^c. In doing so, once all the C embeddings have been processed, we end up with the same number of C embeddings, one for each class.
Finally, following the same principle of <cit.>, a feed-forward block composed of two linear layers equipped with a GELU <cit.> activation function is stacked after the LSTMs so as to make the embeddings better fit the input to the decoder.
Decoder.
Finally, the convolutional decoder 𝒟 is responsible for learning to reconstruct the segmentation mask from the C embeddings resulting from the previous steps. In particular, the C embeddings are reshaped into a set of C feature maps m_d ∈ℝ^C × 16 × 16. These are processed by 4 residual blocks, equipped with SiLU <cit.> activation function and group normalization. The decoder outputs the reconstructed segmentation mask M_o ∈ℝ^C × 256 × 256.
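A condensed PyTorch sketch of the whole pipeline is given below. It is our own simplification rather than the released implementation: the variational reparameterisation, the positional encoding of the LSTM inputs, the group-normalised residual blocks of the decoder, and the exact number of classes (here set to 18 as a placeholder) are omitted or replaced by simpler stand-ins, while the 256-dimensional embeddings, the three bi-directional LSTM layers, the GELU feed-forward block, and the 16×16 reshape follow the description above.

```python
# Condensed sketch of the mask autoencoder described above (our simplification,
# not the authors' released code).
import torch
import torch.nn as nn

class MaskPartAutoencoder(nn.Module):
    def __init__(self, num_classes=18, img_size=256, emb_dim=256):
        super().__init__()
        # (1) MLP encoder applied to each flattened mask channel independently
        self.encoder = nn.Sequential(
            nn.Linear(img_size * img_size, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim), nn.ReLU())
        # (2) bi-directional LSTM block: the C class embeddings form a "sequence"
        self.lstm = nn.LSTM(emb_dim, emb_dim // 2, num_layers=3,
                            bidirectional=True, batch_first=True)
        # (3) feed-forward block with GELU activations
        self.ffn = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.GELU(),
                                 nn.Linear(emb_dim, emb_dim), nn.GELU())
        # (4) convolutional decoder from C x 16 x 16 feature maps to mask logits
        self.decoder = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.SiLU(),
            nn.Upsample(scale_factor=4), nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Upsample(scale_factor=4), nn.Conv2d(64, num_classes, 3, padding=1))

    def forward(self, mask):                       # mask: (B, C, 256, 256)
        b, c = mask.shape[:2]
        z = self.encoder(mask.flatten(2))          # (B, C, 256) per-class codes
        z, _ = self.lstm(z)                        # each class sees all others
        z = self.ffn(z)
        feat = z.view(b, c, 16, 16)                # 256-dim code -> 16 x 16 map
        return self.decoder(feat)                  # (B, C, 256, 256) logits

out = MaskPartAutoencoder()(torch.zeros(2, 18, 256, 256))
print(out.shape)                                   # torch.Size([2, 18, 256, 256])
```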
§.§ Loss Functions
The model is trained to self-reconstruct the input segmentation mask, without any other specific strategy to guide the manipulation process. The output mask is generated by minimizing a pixel-wise class prediction, using a cross-entropy loss. In particular, we used a weighted variant of the standard cross entropy ℒ_CE.
More in detail, we observed that the problem resembles a highly imbalanced classification problem; indeed, smaller parts such as, for face masks, the eyes or the nose, are significantly under-represented in the data i.e. occupy a smaller number of pixels, with respect to larger parts such as skin or hair, ultimately weighing less in the overall loss computation. So, the weights are set considering this imbalance; smaller weights will be assigned to bigger parts, and bigger weights will be assigned to smaller parts. We calculate the weights w = [w_0,⋯,w_C] based on the overall training set statistics, in the following way:
w_c = 1 - 1/NHW∑_i^N∑_j^H∑_k^W x_c,i,j,k, ∀ c ∈ C
where N is the number of samples in the training set, and H and W are the height and width of the semantic mask, respectively. Given that each of the mask channels can contain only one or zero values, this equation provides a series of C weights that rank each of the semantic parts by their average size. The equation of the final weighted cross entropy ℒ_wCE therefore becomes:
ℒ_wCE = -∑_x w(y(x)) y(x)log(ŷ(x))
where y(x), ŷ(x), and w(y(x)) are the ground-truth class labels, the predicted labels, and the weight for the ground-truth class at pixel x, respectively.
In addition to the weighted cross entropy, a KL-Loss ℒ_KL is used to push the latent codes of each of the parts to have zero mean and unit variance and allow the generation process where a random latent code is sampled from 𝒩(0,1). Ultimately, the full loss utilized to train the model is:
ℒ = ℒ_wCE + λℒ_KL
where λ is the KL weight and is set to 0.0005 in all the experiments.
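A sketch of how the loss terms above could be assembled is given below; it assumes one-hot masks of shape (N, C, H, W) and Gaussian posteriors parameterised by a mean and log-variance per part, and it reflects our reading of Eqs. (1)-(3) rather than the released training code.

```python
# Sketch of the training loss described above (our reading of Eqs. 1-3).
import torch
import torch.nn.functional as F

def class_weights(masks):
    """Eq. (1): w_c = 1 - average occupancy of class c over the training set."""
    return 1.0 - masks.float().mean(dim=(0, 2, 3))              # shape (C,)

def weighted_cross_entropy(logits, target_onehot, weights):
    """Eq. (2): pixel-wise cross entropy weighted by the ground-truth class."""
    target = target_onehot.argmax(dim=1)                          # (N, H, W) labels
    return F.cross_entropy(logits, target, weight=weights)

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch."""
    return -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())

def total_loss(logits, target_onehot, weights, mu, logvar, lam=5e-4):
    """Eq. (3): weighted cross entropy plus lambda times the KL term."""
    return (weighted_cross_entropy(logits, target_onehot, weights)
            + lam * kl_to_standard_normal(mu, logvar))
```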
§ EXPERIMENTAL RESULTS
In this section, we report the results of an experimental validation.
We show both quantitative and qualitative results, in terms of reconstruction accuracy and different generation or manipulation tasks. In fact, despite our goal being that of performing editing of semantic masks at the class level, we also need to make sure the reconstruction process does not degrade the segmentation accuracy of the input masks and in turn compromise the subsequent image synthesis.
As dataset to train and test our model, we used the CelebAMask-HQ <cit.>, which is composed by 30K high resolution face images (1024×1024) along with the corresponding segmentation masks. Out of the 30K samples, 28K were used for training and 2K for testing.
§.§ Reconstruction, Generation and Perturbation
Given that no prior works addressed this particular problem, before analyzing the ability of the model to manipulate the mask parts, we compare our solution with some baseline architectural designs in terms of reconstruction accuracy, in a sort of extended ablation study.
Reconstruction results are reported in Table <ref> in terms of pixel-wise classification accuracy (Acc) and Mean Intersection over Union (mIoU). In particular, the following configurations were explored: a simple encoder-decoder trained with the standard cross-entropy (row 1), the model with 1 or 3 standard LSTMs trained with standard cross entropy (rows 2 and 3), our final model with 3 standard LSTMs trained with the weighted cross entropy (row 4), our final model with 3 bidirectional LSTMs trained with regular cross entropy (row 5), and the final architecture (bottom row).
Quantitatively, we observe a generally high reconstruction accuracy in all the cases. The simplest architecture (w/o LSTM and weighted CE) achieves the highest accuracy but lower mIoU. A visual inspection of the results suggests that the additional processing due to the LSTM block induces a slight smoothing of high-frequency details such as the hair contour. This is caused by the compression of each semantic part in the encoding phase, and also by the bi-directional LSTM block pass, which makes it more difficult for the decoder to exactly reproduce the corresponding input. This hypothesis is supported when looking at the results obtained with either 1 or 3 LSTM layers; indeed, the two measures decrease when stacking more LSTM layers. On the other hand though, we will show (Fig. <ref>) that removing such layers severely compromises the manipulation ability. Nevertheless, when comparing configurations including 3 LSTM layers, our final architecture scores the highest accuracy. In particular, it obtains the highest mIoU, which indicates that the overall shape and spatial arrangement of parts are best preserved. This is also supported by the results in Fig. <ref>, which shows per-class mIoU results for different configurations. Indeed, even though the configuration w/o LSTM tends to perform better with bigger parts like skin or hair, our final architecture manages to push the quality of the smaller parts up thanks to the combination of bidirectional LSTMs and the weighted cross entropy loss, resulting in an overall better mIoU.
In Fig. <ref> some results for reconstruction, generation and perturbation of different parts in the semantic masks are presented. More in detail, we refer to generation when a novel latent code drawn from the normal distribution m̂_e^j ∼𝒩(0,1) is substituted for its encoded counterpart m_e^j and passed to the bi-directional LSTM block in order to generate a particular part c. On the other hand, we refer to perturbation when a random noise vector drawn from the normal distribution z ∼𝒩(0,1) is added to an existing latent code, i.e. m̂_e^j = m_e^j + z. Indeed, in the latter case the shape of the generated parts is usually more similar to the original input, while in the former case the generated shape can be (and usually is) completely different.
Regarding reconstruction, we can see how the proposed method manages to maintain the overall shape of the semantic mask parts, supporting the results in Table <ref>. Nevertheless, as discussed above, a certain degree of smoothing in the results can be noted. This represents a minor limitation of the current proposal.
On the other hand, results when generating parts from scratch, or when perturbing an existing latent code, are impressive. Our method is not only able to generate realistic parts independently from one another, but also, thanks to the recurrent part of the model, is able to adapt the shape of the parts surrounding the one that is being generated in order to produce a realistic final result. This can be particularly appreciated, for example, when perturbing the nose latent code in Fig. <ref> in the third row: indeed, the nose is made longer by the perturbation and, as a consequence, the mouth is deformed accordingly.
Finally, in Fig. <ref> we show some qualitative results to prove that the final architecture is indeed better in the generative task, which is the main purpose of this paper. Starting from the top, it is clear how, when generating hair, the proposed model is much more capable of producing a realistic result without generating undesired classes (like the pink part in the model without LSTM). Then, the second row shows that our model is much better at rearranging all the semantic parts in order to create a realistic mask with a newly generated part. Finally, in the last row, we can see that the mouth part is generated correctly by almost every configuration, but, at the same time, our model is able to generate much more varied and diverse results.
§.§ Interpolation
In Fig. <ref> interpolation results are presented. Interpolation is done by choosing a part c from a source and a target mask and merging together the corresponding latent vectors using an interpolation factor α. More in detail, the interpolation equation is the following:
m^int_c = α· m^t_c + (1-α)· m^s_c
where m^t_c and m^s_c are the latent codes of the part c of the target and source images, respectively. In addition, α = 0 is equal to reconstructing the source image, while α = 1 represents a sort of “face part swapping”, that is a specific face part is swapped from a target face to a source one.
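In code, the part-wise interpolation of Eq. (6) amounts to a single convex combination in the latent space; the helper below is our own sketch, with `source_z` and `target_z` denoting the (C, 256) per-class embeddings produced by the encoder.

```python
# Part-wise latent interpolation of Eq. (6) (our sketch; tensor names are ours).
def interpolate_part(source_z, target_z, part_index, alpha):
    """Blend the embedding of a single class; alpha=0 keeps the source part,
    alpha=1 swaps in the target part."""
    mixed = source_z.clone()
    mixed[part_index] = (alpha * target_z[part_index]
                         + (1.0 - alpha) * source_z[part_index])
    return mixed
```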
Indeed, it is evident that the KL loss, which pushes the latent codes to have almost zero mean and unit variance, allows every mask part to be easily interpolated. In particular, as the interpolation factor α increases, the shape changes continuously. The only previous method we are aware of that is capable of performing a similar task is MaskGAN <cit.>; however, MaskGAN can only perform global mask interpolations, and cannot independently manipulate individual parts.
§.§ Semantic Image Synthesis with Shape Control
In this section we qualitatively show results for the main purpose of our model, that is, equipping SIS generators with a module to enable automatic shape control. In Fig. <ref>, several masks with automatically generated parts are fed to a state-of-the-art SIS model in order to produce new and diverse face images.
We chose to use the SEAN <cit.> generator to this aim because SEAN can very precisely control the image generation thanks to its semantic region-adaptive normalization layers.
Prior to our proposal, the editing of masks could only be done manually. Results in Fig. <ref> clearly show that, provided a generator that is accurate enough to handle local shape changes, the shape of the generated faces can be automatically edited by means of our solution. This opens up a very efficient way of employing SIS models, for example, for data augmentation, which can be very helpful for tasks like re-identification, classification or detection.
§ CONCLUSION
In this paper, we introduced the problem of automatic manipulation of semantic segmentation masks, and presented a preliminary novel architecture to achieve this goal, with a specific application to face part editing.
The proposed system is able to generate or manipulate any semantic part by simply feeding random noise to the LSTM block in place of the latent representation of the corresponding part. We show the efficacy of our architecture through a series of quantitative and qualitative evaluations. Even though we observed a tendency to smooth the shapes of the generated results, our method is still able to generate realistic semantic parts, and can be readily used in combination with potentially any SIS model so as to generate a virtually infinite number of RGB results.
Finally, we believe there is still large room for improvement. For example, extending the proposal to different scenarios with a less constrained object layout or more classes would represent a valuable feature for a SIS model. Also, currently, the shape manipulation is not controlled, meaning that it is not yet possible to generate parts with a specific shape or attributes, e.g. a long nose or curly hair. All the above are features that we plan to investigate in future works.
§ ACKNOWLEDGMENTS
This work was supported by PRIN 2020 “LEGO.AI: LEarning the Geometry of knOwledge in AI systems”, grant no. 2020TA3K9N funded by the Italian MIUR.
splncs04
|
http://arxiv.org/abs/2307.03942v1 | 20230708093617 | Ariadne's Thread:Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images | [
"Yi Zhong",
"Mengqiu Xu",
"Kongming Liang",
"Kaixin Chen",
"Ming Wu"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
Using Text Prompts to Improve Segmentation
Y. Zhong et al.
Beijing University of Posts and Telecommunications, China
{xiliang2017, xumengqiu, liangkongming, chenkaixin, wuming}@bupt.edu.cn
Ariadne's Thread[Ariadne's thread, the name comes from ancient Greek myth, tells of Theseus walking out of the labyrinth with the help of Ariadne's golden thread.]
: Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images
Yi ZhongMengqiu Xu Kongming Liang Kaixin Chen Ming Wu
August 12, 2023
===========================================================================================================================================================================================================================================================
Segmentation of the infected areas of the lung is essential for quantifying the severity of lung diseases such as pulmonary infections. Existing medical image segmentation methods are almost exclusively uni-modal methods based on images alone.
However, these image-only methods tend to produce inaccurate results unless trained with large amounts of annotated data. To overcome this challenge, we propose a language-driven segmentation method that uses text prompts to improve the segmentation result. Experiments on the QaTa-COV19 dataset indicate that our method improves the Dice score by at least 6.09% compared to the uni-modal methods. Besides, our extended study reveals the flexibility of multi-modal methods in terms of the information granularity of text and demonstrates that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required.
§ INTRODUCTION
Radiology plays an important role in the diagnosis of some pulmonary infectious diseases, such as the COVID-19 pneumonia outbreak in late 2019<cit.>. With the development of deep learning, deep neural networks are more and more used to process radiological images for assisted diagnosis, such as disease classification, lesion detection and segmentation, etc.
With the fast processing of radiological images by deep neural networks, some diagnoses can be obtained immediately, such as the classification of bacterial or viral pneumonia and the segmentation mask for pulmonary infections, which is important for quantifying the severity of the disease as well as its progression<cit.>. Besides, these diagnoses given by the AI allow doctors to predict risks and prognostics in a "patient-specific" way<cit.>. Radiologists usually take more time to complete lesion annotation than AI, and annotation results can be influenced by individual bias and clinical experience<cit.>. Therefore, it is of importance to design automatic medical image segmentation algorithms to assist clinicians in developing accurate and fast treatment plans.
Most of the biomedical segmentation methods<cit.> are improved based on U-Net<cit.>. However, the performance of these image-only methods is constrained by the training data, which is also a dilemma in the medical image field. Radford et al. proposed CLIP<cit.> in 2021, where they used 4M image-text pairs for contrastive learning. With the rise of multi-modal learning in recent years, there are also methods<cit.> that focus on vision-language pretraining/processing and apply them to local tasks. Li et al. proposed a language-driven medical image segmentation method, LViT<cit.>, using a hybrid CNN-Transformer structure to fuse text and image features. However, LViT uses an early fusion approach and the information contained in the text is not well represented. In this paper, we propose a multi-modal segmentation method that uses independent text and image encoders, and design a GuideDecoder to fuse the features of both modalities at the decoding stage. Our main contributions are summarized as follows:
* We propose a language-driven segmentation method for segmenting infected areas from lung x-ray images. Source code of our method see: https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023
* The designed GuideDecoder in our method can adaptively propagate sufficient semantic information of the text prompts into pixel-level visual features, promoting consistency between two modalities.
* We have cleaned the errors contained in the text annotations of QaTa-COV19<cit.> and contacted the authors of LViT to release a new version.
* Our extended study reveals the impact of information granularity in text prompts on the segmentation performance of our method, and demonstrates the significant advantage of multi-modal method over image-only methods in terms of the size of training data required.
§ METHOD
An overview of our proposed method is shown in Fig. <ref>(a). The model consists of three main components: an Image Encoder, a Text Encoder and the GuideDecoder that enables multi-modal information fusion. As can be seen, our proposed method uses a modular design. Compared to the early-stage fusion in LViT, our modular design is more flexible. For example, when our method is applied to brain MRI images, thanks to the modular design, we could first load pre-trained weights trained on the corresponding data into the separate visual and text encoders, and then only need to train the GuideDecoders.
§.§.§ Visual Encoder & Text Encoder
The Visual Encoder used in the model is ConvNeXt-Tiny<cit.>. For an input image I∈ℝ^H× W×1, we extract multiple visual features from the four stages of ConvNeXt-Tiny, which are defined as f_4∈ℝ^H/4×W/4× C_1, f_8∈ℝ^H/8×W/8× C_2,
f_16∈ℝ^H/16×W/16× C_3 and
f_32∈ℝ^H/32×W/32× C_4.
Note that C is the feature dimension, H and W are the height and width of the original image.
For an input text prompt T ∈ℝ^L, we adopt CXR-BERT<cit.> to extract text features g_t ∈ℝ^L× C. Note that C is the feature dimension and L is the length of the text prompt.
§.§.§ GuideDecoder
Due to our modular design, visual features and textual features are encoded independently by different encoders. Therefore, the design of the decoder is particularly important, as we can only fuse the multi-modal features from the different encoders at a later stage. The structure of the GuideDecoder is shown in Fig. <ref>(b). The GuideDecoder first processes the input textual features and visual features before performing multi-modal interaction.
The input textual features first go through a projection module (i.e. Project in the figure) that aligns the dimensionality of the text token with that of the image token and reduces the number of text tokens. The projection process is shown in Equation 1.
f_t = σ(Conv(T W_T))
where W_T is a learnable matrix, Conv(·) denotes a 1×1 convolution layer, and σ(·) denotes the ReLU activation function. Given an input feature T ∈ℝ^L× D, the output projected features is f_t ∈ℝ^M × C_1, where M is the number of tokens after projection and C_1 is the dimension of the projected features, consistent with the dimension of the image token. For the input visual features I∈ℝ^H× W× C_1, after adding the position encoding we use self-attention to enhance the visual information in them to obtain the evolved visual features. The process is shown in Equation 2.
f_i = I + LN(MHSA(I))
where MHSA(·) denotes Multi-Head Self-Attention layer, LN(·) denotes Layer Normalization, and finally the evolved visual features f_i ∈ℝ^H× W× C_1 with residuals could be obtained.
After those, the multi-head cross-attention layer is adopted to propagate fine-grained semantic information into the evolved image features. To obtain the multi-modal feature f_c ∈ℝ^H× W× C_1, the output further computed by layer normalization and residual connection:
f_c = f_i + α (LN(MHCA(f_i,f_t)))
where MHCA(·) denotes multi-head cross-attention and α is a learnable parameter to control the weight of the residual connection.
Then, the multi-modal feature f_c ∈ℝ^(H× W)× C_1 is reshaped and upsampled to obtain f'_c ∈ℝ^H'× W'× C_1. Finally, f'_c is concatenated with f_s∈ℝ^H'× W'× C_2 along the channel dimension, where f_s is the low-level visual feature obtained from the visual encoder via a skip connection. The concatenated features are processed through a convolution layer and a ReLU activation function to obtain the final decoded output f_o ∈ℝ^H'× W'× C_2.
f'_c = Upsample(Reshape(f_c))
f_o = σ(Conv([f'_c, f_s]))
where [·,·] represents the concatenation operation along the channel dimension.
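A PyTorch sketch of one GuideDecoder stage, following Eqs. (1)-(5), is shown below. It is our own reading of the description rather than the released code: the number of attention heads, the projected token count M, the text length L, and the omission of the positional encoding are simplifications of ours.

```python
# Sketch of a single GuideDecoder stage following Eqs. (1)-(5) (our reading of
# the text; hyper-parameters below are illustrative, not the released values).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuideDecoder(nn.Module):
    def __init__(self, vis_dim, txt_dim, skip_dim, txt_len=24, num_tokens=16, heads=4):
        super().__init__()
        self.project = nn.Sequential(nn.Linear(txt_dim, vis_dim), nn.ReLU())
        self.reduce = nn.Conv1d(txt_len, num_tokens, kernel_size=1)   # L -> M tokens
        self.self_attn = nn.MultiheadAttention(vis_dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(vis_dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(vis_dim), nn.LayerNorm(vis_dim)
        self.alpha = nn.Parameter(torch.zeros(1))                     # residual weight
        self.fuse = nn.Sequential(
            nn.Conv2d(vis_dim + skip_dim, skip_dim, 3, padding=1), nn.ReLU())

    def forward(self, vis, txt, skip):
        # vis: (B, H*W, vis_dim); txt: (B, L, txt_dim); skip: (B, skip_dim, H', W')
        f_t = self.reduce(self.project(txt))                          # Eq. (1)
        f_i = vis + self.norm1(self.self_attn(vis, vis, vis)[0])      # Eq. (2)
        f_c = f_i + self.alpha * self.norm2(
            self.cross_attn(f_i, f_t, f_t)[0])                        # Eq. (3)
        b, n, c = f_c.shape
        h = w = int(n ** 0.5)
        f_c = f_c.transpose(1, 2).reshape(b, c, h, w)
        f_c = F.interpolate(f_c, size=skip.shape[2:], mode="nearest") # Eq. (4)
        return self.fuse(torch.cat([f_c, skip], dim=1))               # Eq. (5)
```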
§ EXPERIMENTS
§.§ Dataset
The dataset used to evaluate our method performance is the QaTa-COV19 dataset<cit.>, which is compiled by researchers from Qatar University and Tampere University. It consists of 9258 COVID-19 chest radiographs with pixel-level manual annotations of infected lung areas, of which 7145 are in the training set and 2113 in the test set. However, the original QaTa-COV19 dataset does not contain any matched text annotations.
Li et al. <cit.> have made significant contributions by extending the text annotations of the dataset, and their endeavors are worthy of commendation. We revisited the text annotations and found several notable features. Each sentence consists of three parts, containing position information at different granularity. However, these sentences cannot be considered medical reports since they lack descriptions of the disease; we consider them a kind of "text prompt", just as the title of the paper states.
Besides, we found some obvious errors (e.g. misspelled words, grammatical errors and unclear referents) in the extended text annotations. We have fixed these identified errors and contacted the authors of LViT to release a new version of the dataset. Dataset see Github link: https://github.com/HUANGLIZI/LViThttps://github.com/HUANGLIZI/LViT
§.§ Experiment Settings
Following the file name of the subjects in the original train set, we split the training set and the validation set uniformly in the ratio of 80% and 20%. Therefore, the training set has a total of 5716 samples, the validation set has 1429 samples and the test set has 2113 samples.
All images are cropped to 224×224 and the data is augmented using a random zoom with 10% probability.
We used a number of open-source libraries, including but not limited to PyTorch, MONAI<cit.> and Transformers<cit.>, to implement our method and the baseline approaches. We use PyTorch Lightning as the final training and inference wrapper. All the methods are trained on one NVIDIA Tesla V100 SXM3 32GB VRAM GPU. We use the Dice loss plus cross-entropy loss as the loss function, and train the network using AdamW optimization with a batch size of 32. We utilize the cosine annealing learning rate policy; the initial learning rate is set to 3e-4 and the minimal learning rate is set to 1e-6.
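The optimisation setup described above can be reproduced roughly as follows; the model and the epoch budget are placeholders of ours, and MONAI's DiceCELoss is used as one possible way to combine the Dice and cross-entropy losses.

```python
# Rough sketch of the training configuration described above (the model and
# epoch budget below are placeholders, not the actual network).
import torch
from monai.losses import DiceCELoss

model = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)   # stand-in for the real network
max_epochs = 100                                            # placeholder epoch budget

criterion = DiceCELoss(softmax=True)                        # Dice + cross-entropy
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=max_epochs, eta_min=1e-6)
```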
We used three metrics to evaluate the segmentation results objectively: Accuracy, Dice coefficient and Jaccard coefficient.
Both the Dice and Jaccard coefficients measure the overlap between the predicted mask and the ground truth: the Jaccard coefficient divides the intersection by the union of the two regions, while the Dice coefficient divides twice the intersection by the sum of the two areas and is more indicative of the segmentation performance on small targets.
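The two overlap metrics can be computed with a few lines of NumPy; the helper below is our own and assumes binary masks for the infected region.

```python
# Dice and Jaccard coefficients for binary masks (our helper).
import numpy as np

def dice_jaccard(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    jaccard = inter / (union + eps)
    return dice, jaccard
```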
§.§ Comparison Experiments
We compared our method with common mono-modal medical image segmentation methods and with the LViT previously proposed by Li et al. The quantitative results of the experiment are shown in Table <ref>. UNet++ achieves the best performance among the mono-modal approaches. Compared to UNet++, our method improves accuracy by 1.44%, the Dice score by 6.09% and the Jaccard score by 9.49%. Our method improves accuracy by 1.28%, the Dice score by 4.86% and the Jaccard coefficient by 7.66% compared to the previous multi-modal method LViT. In general, using text prompts can significantly improve segmentation performance.
The results of the qualitative experiment are shown in Fig. <ref>. The image-only mono-modal methods tend to generate some over-segmentation, while the multi-modal approach refers to the specific location of the infected region through text prompts to make the segmentation results more accurate.
§.§ Ablation Study
Our proposed method introduces the semantic information of text into the decoding process of image features and designs the GuideDecoder to let the semantic information in the text guide the generation of the final segmentation mask. We performed an ablation study on the number of GuideDecoders used in the model, and the results are shown in Table <ref>.
As can be seen from Table <ref>, the segmentation performance of the model improves as the number of GuideDecoders used in the model increases. These results demonstrate the effectiveness of the GuideDecoder.
§.§ Extended Study
Considering the application of the algorithm in clinical scenarios, we conducted several interesting extension studies based on the QaTa-COV19 dataset with the text annotations. It is worth mentioning that the following extended studies were carried out on our proposed method.
§.§.§ Impact of text prompts at different granularity on segmentation performance.
In section 3.1 we mention that each sample is extended to a text annotation with three parts containing positional information at different granularity, as shown in the Fig. <ref>. Therefore we further explored the impact of text prompts at different granularity on segmentation performance of our method and the results are shown in Table <ref>.
The results in the table show that the segmentation performance of our proposed method is driven by the granularity of the position information contained in the text prompt. Our proposed method achieved better segmentation performance when given a text prompt with more detailed position information.
Meanwhile, we observed that the performance of our method is almost identical when using two types of text prompts, i.e. Stage3 alone and Stage1 + Stage2 + Stage3. This means that the most detailed position information in the text prompt plays the most significant role in improving segmentation performance. However, this does not mean that position information at other granularities does not contribute to the improvement in segmentation performance. Even when the input text prompts contain only the coarsest location information (Stage1 + Stage2 items in Table <ref>), our proposed method yielded a 1.43% higher Dice score than the method without a text prompt.
§.§.§ Impact of the size of training data on segmentation performance.
As shown in Table <ref>, our proposed method demonstrates highly competitive performance even with a reduced amount of training data. With only a quarter of the training data, our proposed method achieves a 2.69% higher Dice score than UNet++, which is the best performing mono-modal model trained on the full dataset. This provides sufficient evidence for the superiority of multi-modal approaches and the the fact that suitable text prompts could significantly help improve the segmentation performance.
We observed that when the training data was reduced to 10%, our method only began to exhibit inferior performance compared to UNet++, which was trained with all available data. Similar experiments could be found in the LViT paper. Therefore, it can be argued that multi-modal approaches require only a small amount of data (less than 15% in the case of our method) to achieve performance equivalent to that of mono-modal methods.
§ CONCLUSION
In this paper, we propose a language-driven method for segmenting infected areas from lung x-ray images. The designed GuideDecoder in our method can adaptively propagate sufficient semantic information of the text prompts into pixel-level visual features, promoting consistency between two modalities. The experimental results on the QaTa-COV19 dataset indicate that the multi-modal segmentation method based on text-image could achieve better performance compared to the image-only segmentation methods. Besides, we have conducted several extended studies on the information granularity of the text prompts and the size of the training data, which reveals the flexibility of multi-modal methods in terms of the information granularity of text and demonstrates that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required.
§.§.§ Acknowledgements
This work was supported by NSFC under Grant 62076093 and MoE-CMCC "Artifical Intelligence" Project No. MCM20190701.
splncs04
|
http://arxiv.org/abs/2307.04038v1 | 20230708195154 | Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods | [
"A. k. Althukair",
"D. Tsiklauri"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Vol.0 (20xx) No.0, 000–000
Department of Physics and Astronomy, School of Physical and Chemical Sciences, Queen Mary University of London,
Mile End Road, London, E1 4NS,
UK; [email protected], [email protected]
Physics Department, College of Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, PO Box 84428, Saudi Arabia
Received 20xx month day; accepted 20xx month day
In our previous work, we searched for super-flares on different types of stars, focusing on G-type dwarfs using the entire Kepler data set to study the statistical properties of the occurrence rate of super-flares. That study also considered how the statistics change with the stellar rotation period, which, in turn, had to be determined. Using such new data, as a by-product, we found 138 Kepler IDs of F- and G-type main sequence stars with rotation periods less than a day (P_ rot<1 d). On the one hand, previous studies have revealed short activity cycles in F-type and G-type stars, and the question investigated was whether or not short-term activity cycles are a common phenomenon in these stars. On the other hand, extensive studies exist which establish an empirical connection between a star's activity cycle and rotation period. In this study, we compile all available Kepler data with P_ rot<1 d and derive, as well as use, plausible established empirical relations between P_ cyc and P_ rot, with the aim of providing predictions for very short 5.13≤ P_ cyc≤ 38.14 d cases in tabular form. As a result, we invite others to measure P_ cyc using a monitoring program of stellar activity (e.g. the activity-related chromospheric emission S-index) or similar means for the Kepler IDs found in this study, in order to put to the test the derived and/or established empirical relations between P_ cyc and P_ rot. We also propose an alternative method for measuring very short P_ cyc, using flare-detection algorithms applied to future space mission data.
Althukair & Tsiklauri
Prediction of short stellar activity cycles
Prediction of short stellar activity cycles using derived and established empirical relations between activity and rotation periods.
A. k. Althukair
1,2
D. Tsiklauri
1
=====================================================================================================================================
§ INTRODUCTION
The 11-year cycle of solar activity, discovered by Schwabe in 1844 <cit.>, is a significant phenomenon in solar and stellar physics. The cycle is manifested by a periodic change in solar activity, including the appearance of sunspots and changes in the Sun's magnetic field
on this time-scale. Smoothed sunspot numbers have been widely used as a proxy for solar activity over the past four centuries <cit.>.
The idea of the sunspot number was first introduced by <cit.> in the mid-19th century, and it has since become a standard measure for quantifying solar activity. These numbers reveal that there are almost regular cycles of about 11 years, reflecting the Sun's magnetic activity.
During the course of a solar cycle, the Sun experiences alternating periods of strong and weak activity known as solar maximum and minimum <cit.>. As the solar cycle progresses, the magnetic field becomes more complex and twisted. This results in the emergence of sunspots, which are dark areas on the surface of the Sun with intense magnetic fields; they vary in size, can last from days to several months <cit.>, and decay into bright areas called faculae, formed by smaller magnetic concentrations <cit.>. During the active phase of the solar cycle (solar maximum), sunspots appear at the solar surface and increase in number and size. At the same time, bright faculae also become more prominent. As the cycle progresses, the number of sunspots decreases, the overall brightness of the Sun remains relatively constant, and the Sun enters its least active phase of the solar cycle (solar minimum). These dark and bright features on the Sun's surface contribute to the variability in the total solar irradiance (TSI) <cit.>. Therefore, the TSI data can capture the combined effects of the evolving dark and bright features during the solar cycle <cit.>.
Cyclic activity has been observed in stars other than the Sun through long-term brightness changes associated with increased occurrence of active regions on their surfaces or in their lower stellar atmospheres <cit.>. The Mount Wilson HK program, which started in 1966 and lasted until the end of the 20th century, was the first to conduct a systematic search for activity cycles in main sequence stars <cit.>, by analysing
chromospheric emission in the spectral lines of Ca II H&K, since the magnetic field connected to active regions on the surfaces of stars plays an important role in transporting energy into the chromosphere. This increased energy input into the chromosphere leads to enhanced chromospheric emission, which can be observed prominently in the cores of the Ca II H&K spectral lines <cit.>. The measure of the chromospheric emission strength is described by the Mount Wilson S-index <cit.> or by the quantity R^'_ HK <cit.>. <cit.> investigated the chromospheric activity levels in main-sequence F-G-K-M stars by measuring the chromospheric CaII H&K emission fluxes. They noted that these stars display varying degrees of chromospheric activity and observed a noticeable deficit in the number of F-G stars displaying intermediate activity compared to both highly active and less active stars. They suggested that the absence of such stars could be attributed to a decline in chromospheric activity as the stars age. <cit.> examined the relationship between chromospheric activity, specifically the R^'_ HK activity index, and the Rossby number Ro = P_ rot/τ_ c for a sample of main-sequence stars of spectral type F or later, where P_ rot is the rotational period of the star and τ_ c is a theoretically derived convective turnover time. They found a strong correlation between the R^'_ HK activity index and the Rossby number. However, in contrast to the findings of <cit.>, <cit.> did not find any signs of the "Vaughan-Preston gap". <cit.> investigated the empirical relation between rotation period P_ rot, spectral type, and activity cycle period P_ cyc for 13 slowly rotating main-sequence stars. They found that the cycle period is related to the rotation period by a power law: P_ cyc∝ P_ rot^ 1.25. This relationship can alternatively be expressed as
P_ cyc≈ Ro^1.25≈ (P_ rot/τ_ c)^1.25 <cit.>. For stars of spectral type G0-K5, <cit.> observed a pattern of variation in the rotation period and the measure of chromospheric activity (S-index). Their research revealed that the chromospheric activity levels were high in young stars with fast rotation periods. Chromospheric activity and rotation rates of stars in the intermediate age range were average. Alternatively, the chromospheric activity levels were low in old stars with slow rotation periods. This observation supports the existence of the Vaughan-Preston gap <cit.>, indicating that chromospheric activity and rotation change over time as the stars age. The relation between rotation periods and activity cycles of a sample of stars was investigated by <cit.>, who discovered a correlation between the two variables. In particular, they observed that stars with slower rotation periods exhibit longer activity cycles, while stars with faster rotation periods tend to have shorter activity cycles. According to <cit.>, the relation between rotation periods and cycle lengths is more evident for stars with shorter activity cycles. However, the association becomes less clear for longer cycle lengths when considering more recent findings on the time variability of solar cycles.
<cit.> investigated the behaviour and activity cycles of four fast-rotating late-type stars with P_ rot≤ 0.5 days, highlighting the presence of 1-year cycles and the correlation between rotation rate and cycle length. <cit.> used the short-term Fourier transform, a time-frequency analysis method, to examine the light curves of 39 fast-rotating late-type active stars with rotation periods of less than one day. Nine of the selected stars showed indications of activity cycles with periods between 300 and 900 days. These cycles were inferred from the changing typical latitude of the starspots on the stellar surface: due to the differential rotation of the stellar surface, the observed rotation period of the stars varied over the activity cycle. This variation in the rotation period was attributed to the movement and evolution of starspots at different latitudes of the star. <cit.> used four years of Kepler data to determine the cyclic variations in the amplitude of the light curve and the rotation period of stars by analysing a sample of active stars and calculating the rotation period and variability amplitude for each star in each Kepler quarter. Then they searched for periodic variations in these time series using Lomb-Scargle periodograms and employed a false alarm probability (FAP) criterion for selection. The study's findings indicate that amplitude periodicities, associated with underlying activity cycles, are detected in 3203 stars with cycle periods ranging from 0.5 to 6 years and rotation periods ranging from 1 to 40 days. According to the analysis of new observations and previous data by <cit.>, the longer and shorter cycle periods closely match expectations based on the average activity levels and rotation periods, which indicates a connection between stellar activity and stellar rotation. <cit.> reported an activity cycle of 11.6 years in the F-type star τ Boo (HD 120136). However, the authors assigned a FAP "poor" grade to this finding. <cit.> detected an activity cycle with a duration of 122 days in their analysis of the S-index data of τ Boo. This short activity cycle period suggests that τ Boo may exhibit variations on a relatively short timescale. <cit.> focused on exploring the presence of short-term activity cycles in F-type stars, specifically using S-index time series data obtained with the TIGRE telescope. They utilized the generalized Lomb-Scargle periodogram method to analyze the data and search for periodic variations with a maximum length of 2 years. In their sample of F-type stars, they identified four stars that exhibited cyclic variations with periods of less than a year. However, compared to solar-type stars with well-developed cyclic activity, the amplitude of these short-term cyclic variations in F-type stars was smaller. Based on their findings, <cit.> concluded that the activity behaviour among F-type stars differs from that of the Sun and cooler main sequence stars. By studying 44 main-sequence stars with confirmed activity cycles and rotation periods, <cit.> examined the relation between the length of the activity cycle and the Rossby number (Ro). They used empirical turnover periods based on the B-V colour index to calculate Rossby numbers, from which they deduced an empirical relationship between the Rossby number and the cycle duration. The study showed linear behaviour in the double-logarithmic relationship between the Rossby number and cycle period.
In addition, the relative convection zone depth was found to be correlated with cycle length and convective turnover time.
In paper I <cit.>, we looked for super-flares on different types of stars and focused on G-type dwarfs using the entire Kepler data set to study
various aspects of the statistical properties of the occurrence rate of super-flares.
In paper II <cit.>, as a by-product, we found thirteen peculiar Kepler IDs that are Sun-like, slowly rotating with rotation periods of 24.5 to 44
days, and yet can produce a super-flare, as well as six G-type and four M-type Kepler IDs with exceptionally large-amplitude super-flares. As noted previously,
these detections defy our current understanding of stars and hence deserve further investigation.
In this paper III, the last in this series, we use an empirical connection between a star's activity cycle and rotation period for a sample of F and G main sequence stars with rotation periods of less than one day.
Here our aim is to provide predictions for very short activity cycle cases in tabular form and to investigate in the future whether or not these short activity cycles are a common phenomenon in these stars. Section <ref> describes the target selection method. Section <ref> presents the method used in this work, which includes the empirical relation between P_ cyc and P_ rot. The main findings of the study are presented in Section <ref>, and Section <ref> concludes this work with our main conclusions.
§ RELATION BETWEEN ACTIVITY CYCLE AND ROTATION PERIOD
<cit.> model of the α–Ω dynamo introduced the concept of migratory dynamo waves, which play a crucial role in generating the observed solar cycle <cit.>. The α–effect, arising from the twisting of rising magnetic field tubes due to Coriolis forces, creates the poloidal magnetic field required for the next sunspot cycle. This effect is responsible for the reversal of magnetic polarities between successive cycles <cit.>. On the other hand, the Ω–effect, resulting from the differential rotation of the star, generates a toroidal magnetic field by stretching the magnetic field lines in a longitudinal direction. The combination of the α–effect and the Ω–effect leads to the formation of migratory dynamo waves, where the toroidal field is periodically regenerated and transformed into the poloidal field through the action of the α–effect. These migratory dynamo waves propagate and interact within the star's convective zone, causing the cyclic variations in the magnetic field <cit.>.
According to <cit.>,
the magnetic cycle period for G and K dwarfs with convective turnover times (τ_ c) between 11 and 26 days is found to be related to the rotation period as follows:
1/P_ cyc∝(τ_ c / P_ rot)^n,
where n is 1.25.
We quote the theoretical prediction of the relation between
a star's activity cycle and its rotation period, which is
equation (6) in <cit.>:
P_ mag_cyc=2 P_ cyc≈√(R_⋆/l) P_ rot.
According to the simple theoretical arguments quoted by <cit.>,
the magnetic cycle period P_ mag_cyc is proportional to the rotation period P_ rot. However, there is a modifying factor involving l/R_⋆, the relative depth of turbulence, which depends on the stellar structure, which itself may depend on the effective temperature or B-V colour index of the star. Here, l is the length scale of turbulence and R_⋆ is the stellar radius.
§ METHODS
In our study, we adopt the terminology used by <cit.> to categorize branches into two types: the "inactive" branch, referred to as the short-cycle branch P_ cyc^S, and the "active" branch, referred to as the long-cycle branch P_ cyc^L. These terms were first introduced in <cit.>. According to <cit.>, this notation is more accurate and aligned with the actual characteristics of the branches. Therefore, they suggested that these terms should be used in future studies to refer to the two branches.
§.§ Reproduction of <cit.> P_ cyc^S vs. P_ rot Fit
In this subsection, we reproduced the fit between the P_ cyc^S and P_ rot data from <cit.> to derive the fit parameters. First, we collected the data in Table <ref>, the first 32 rows, from <cit.>, where we obtained the 32 activity cycles on the short-cycle branch P_ cyc^S calculated by <cit.> along with the 32 corresponding rotation periods P_ rot. These cycle lengths and rotation periods can be found in Table 1. Then we plotted, on a logarithmic scale, the rotation periods on the x-axis versus the calculated cycle periods on the y-axis, as shown in Figure <ref>, using the empirical relation in <cit.> between the cycle periods and rotation periods in logarithmic terms, which is given by:
log P_ cyc≈ a+n log P_ rot.
Since the theoretical relation, equation <ref>,
implies a linear connection between P_ cyc and P_ rot, we fitted the data using a Python least-squares fit, a common technique for determining the best-fitting parameters for a given model, for two different slope treatments, as in <cit.>. Also, we computed the R^2 coefficient of determination to measure how well the model fits the data. An R^2 value of 1 means that the predictions from the regression fit the data perfectly. First, we set the slope n to be 1 and deduced the value of the parameter a as a = 1.923 ± 0.025, with R^2= 0.89. The red line in Figure <ref> illustrates this trend. Then we repeated the fit by treating the slope n as an independent variable to derive the a and n values, and equation <ref> now becomes:
log P_ cyc≈ (1.458 ± 0.074)+(1.348 ± 0.054) log P_ rot.
with R^2= 0.95. The blue line in Figure <ref> represents this fit. It is obvious that the n = 1 relation does not fit the short-period data, as <cit.> pointed out.
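For reproducibility, a minimal sketch of this fitting procedure in Python is given below. Only a few illustrative rows of Table 1 are shown as input, the cycle periods are converted from years to days so that the fitted relation predicts P_ cyc in days (consistent with the predicted values quoted later), and the use of scipy.optimize.curve_fit together with all variable names are our own choices rather than a description of the original implementation; the same procedure is reused for the extended sample in the next subsection.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative subset of Table 1: rotation periods [d] and cycle periods [yr]
P_rot = np.array([25.4, 44.0, 38.5, 35.2, 5.7, 8.5, 11.1])
P_cyc_yr = np.array([10.3, 11.7, 9.9, 9.2, 0.9, 1.4, 2.6])

x = np.log10(P_rot)
y = np.log10(P_cyc_yr * 365.25)   # cycle periods converted to days

# Fit 1: slope fixed to n = 1
(a_fixed,), _ = curve_fit(lambda x, a: a + x, x, y)

# Fit 2: slope n treated as a free parameter
def model(x, a, n):
    return a + n * x
(a_free, n_free), _ = curve_fit(model, x, y)

# Coefficient of determination R^2 for the free-slope fit
residuals = y - model(x, a_free, n_free)
r2 = 1.0 - np.sum(residuals**2) / np.sum((y - np.mean(y))**2)
print(a_fixed, a_free, n_free, r2)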
By comparing the values of the a and n parameters here with <cit.>, we find slight differences between these values. In <cit.>, a = 1.918 ± 0.027 for the fit with n=1, while for the fit where n is treated as a free parameter, a= 1.488 ± 0.092 and n= 1.324 ± 0.067. We noticed two additional points in Figure 1 of <cit.>, which belong to the stars HD 100563 and HD 201092. These stars have rotation periods of 7.73 ± 0.04 and 37.8 ± 7.4 days, respectively, corresponding to cycle lengths of 0.609 ± 0.009 and 11.7 ± 0.4 years, respectively. Their P_ cyc were taken from <cit.> and <cit.>, respectively, and were not calculated by <cit.>. We do not have these two points because our plot includes only data computed by <cit.>. We also noticed that the locations of some points in our plot differ from those in the <cit.> plot, despite using the same data set. We believe these reasons led to the slight difference in the fit parameters between this work and <cit.>.
§.§ Data representation and fit
In this subsection, we repeat the fit between P_ rot and P_ cyc^S using a larger data sample taken from other previous studies. This sample, shown in Table <ref>, contains 94 P_ rot and their 94 corresponding P_ cyc^S. The star ID, spectral type (Sp), colour index (B-V), effective temperature (T_ eff), P_ rot and P_ cyc are shown in Table <ref>. Unavailable data are left blank in the table. 32 P_ cyc^S were calculated by <cit.>, corresponding to the first 32 lines in Table <ref>. The other P_ cyc^S were taken from <cit.>. It should be noted that the 32 star IDs whose P_ cyc^S were calculated by <cit.> were used again in the fit, but with the P_ cyc^S calculated by others. In other words, we used two P_ cyc^S values for these 32 star IDs, one calculated by <cit.> and the other calculated by another work, except for KIC 10644253, for which we collected three P_ cyc^S calculated by <cit.>. Also, HD 16673 has multiple entries due to the multiple sources, as shown in Table <ref>. References for each P_ rot and P_ cyc^S are shown in Table <ref>.
In the same way as in subsection <ref>, we used the empirical relation between P_ rot and P_ cyc in logarithmic scale given by equation <ref>, applied to the new data set in Table <ref>, to produce the fit parameters a and n. We performed a least-squares fit in Python using two different slope treatments again, one with a fixed slope n of 1 and another with n treated as a free variable. This fit is shown in Figure <ref>. For the fit with a fixed slope of 1, we determined the value of the parameter a= 1.889 ± 0.023 with R^2= 0.83. This trend is shown by the red line in Figure <ref>. For the fit with the slope n treated as a free variable, we deduced values for the parameters a and n of a=1.583 ± 0.064 and n=1.257 ± 0.051, with R^2= 0.87. This fit is represented by the blue line in Figure <ref>. Equation <ref> now becomes
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
We note that our value of n=1.257 ± 0.051 with the extended dataset is
closer to <cit.>'s n=1.25 than <cit.>'s n= 1.324 ± 0.067.
List of star IDs with their parameters, used in previous studies.
HD/KIC   T_ eff   B-V   τ_ c   P_ rot[d]   Ref   P_ cyc^S[yr]   Ref
Sun 5777 0.642 33.94 25.4±1 1 10.3 15
HD 3651 5211 0.850 61.18 44 1 11.7 15
HD 4628 5120 0.890 65.19 38.5±2.1 1 9.9 15
HD 10476 5244 0.836 59.83 35.2±1.6 1 9.2 15
HD 10780 5321 0.804 56.87 22.14±0.55 2 5.6 15
HD 16160 5060 0.918 68.16 48±4.7 1 12.4 15
HD 16673 6183 0.524 18.02 5.7 3 0.9 15
HD 17051 6045 0.561 21.98 8.5±0.1 1 1.4 15
HD 22049 5140 0.881 64.27 11.1±0.1 1 2.6 15
HD 26965 5282 0.820 58.33 43 1 11.5 15
HD 30495 5804 0.632 32.16 11.4±0.2 1 1.6 15
HD 32147 4801 1.049 83.93 48 1 11.7 15
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.4 15
HD 75332 6089 0.549 20.60 4.8 5 0.5 15
HD 75732 5167 0.869 63.05 37.4±0.5 6 9.7 15
HD 76151 5714 0.661 37.58 15 1 2.4 15
HD 100180 6013 0.570 23.06 14 1 3.4 15
HD 103095 5449 0.754 52.52 31 1 9.6 15
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.3 15
HD 128621 5098 0.900 66.24 36.2±1.4 1 9.2 15
HD 140538 5645 0.684 42.51 20.71±0.32 8 4.5 15
HD 146233 5741 0.652 35.81 22.7±0.5 1 7.2 15
HD 149661 5265 0.827 58.98 21.1±1.4 1 5.3 15
HD 160346 4975 0.959 72.75 36.4±1.2 1 9 15
HD 165341 A 5188 0.860 62.16 19.9 1 4.9 15
HD 166620 5151 0.876 63.76 42.4±3.7 1 11.1 15
HD 185144 5366 0.786 55.26 27.7±0.77 2 7.3 15
HD 190406 5910 0.600 27.09 13.9±1.5 1 2.6 15
HD 201091 4764 1.069 86.64 35.4±9.2 1 8.3 15
HD 219834 B 5055 0.920 68.38 43 1 11 15
KIC 8006161 5234 0.840 60.21 29.8±3.1 1 7.7 15
KIC 10644253 5943 0.590 25.67 10.9±0.9 1 1.8 15
HD 16673 6183 0.524 18.02 7.4±0.07 5 0.85 5
HD 49933 3.45 5 0.58 5
HD 75332 6089 0.549 20.60 4.8 5 0.49 5
HD 100563 7.73 5 0.61 5
τ Boo 0.480 14.23 3.5 5 0.33 5
Kepler 87 12.59±0.03 9 3.5 16
KIC 10644253 6030 0.590 25.67 10.91±0.87 10 1.5 17
solar analog HD 30495 5826 0.632 32.16 11.36±0.17 11 1.67±0.35 11
solar analog HD 45184 5871 0.620 30.16 19.98±0.02 12 5.14 12
61 Cyg A HD 201091 4545 1.069 86.64 35.7±1.9 13 7.2±1.3 13
102712791 0.277 4.79 0.96±0.03 14 0.09±0.008 14
102720703 0.514 17.08 10.2±0.6 14 0.512±0.055 14
102721955 0.431 10.94 2.17±0.06 14 1.118±0.071 14
102723038 1.404 147.52 8.6±0.5 14 1.682±0.151 14
102726103 0.767 53.62 3.7±0.1 14 0.321±0.022 14
102738457 0.592 25.95 12.9±0.6 14 1.781±0.356 14
102749950 0.657 36.78 5.4±0.2 14 0.655±0.06 14
102750723 1.143 97.45 1.44±0.02 14 0.277±0.022 14
102754736 0.480 14.23 6.9±0.3 14 0.29±0.019 14
102758108 0.641 33.75 6.1±0.2 14 0.301±0.022 14
102770332 2.055 415.00 4.2±0.1 14 1.162±0.112 14
102770893 0.874 63.56 4.3±0.2 14 0.759±0.058 14
102777006 1.177 102.86 1.33±0.02 14 1.17±0.123 14
102778595 1.157 99.64 11.8±0.7 14 0.575±0.019 14
102780281 1.304 125.85 3±0.1 14 0.551±0.041 14
Sun 5778 0.660 37.38 25.4±1 1 11±2 1
HD 3651 5128 0.840 60.21 44 1 13.8±0.4 1
HD 4628 5035 0.890 65.19 38.5±2.1 1 8.6±0.1 1
HD 10476 5188 0.840 60.21 35.2±1.6 1 9.6±0.1 1
HD 16160 4819 0.980 75.21 48±4.7 1 13.2±0.2 1
HD 17051 6053 0.570 23.06 8.5±0.1 1 1.6 1
HD 22049 5152 0.880 64.17 11.1±0.1 1 2.9±0.1 1
HD 26965 5284 0.820 58.33 43 1 10.1±0.1 1
HD 30495 5780 0.630 31.82 11.4±0.2 1 1.7±0.3 1
HD 32147 4745 1.060 85.41 48 1 11.1±0.2 1
HD 76151 5675 0.670 39.44 15 1 2.5±0.1 1
HD 78366 5915 0.630 31.82 9.7±0.6 1 5.9±0.1 1
HD 81809 5623 0.800 56.51 40.2±3 1 8.2±0.1 1
HD 100180 5942 0.570 23.06 14 1 3.6±0.1 1
HD 103095 5035 0.750 52.19 31 1 7.3±0.1 1
HD 114710 5970 0.580 24.33 12.3±1.1 1 9.6±0.3 1
HD 128620 5809 0.710 48.98 22.5±5.9 1 19.2±0.7 1
HD 128621 5230 0.880 64.17 36.2±1.4 1 8.1±0.2 1
HD 146233 5767 0.650 35.42 22.7±0.5 1 7.1 1
HD 149661 5199 0.800 56.51 21.1±1.4 1 4±0.1 1
HD 160346 4797 0.960 72.86 36.4±1.2 1 7±0.1 1
HD 166620 5000 0.900 66.24 42.4±3.7 1 15.8±0.3 1
HD 190406 5847 0.610 28.58 13.9±1.5 1 2.6±0.1 1
HD 201091 4400 1.180 103.35 35.4±9.2 1 7.3±0.1 1
HD 201092 4040 1.370 139.77 37.8±7.4 1 11.7±0.4 1
KIC 8006161 5488 0.840 60.21 29.8±3.1 1 7.4±1.2 1
KIC 10644253 6045 0.590 25.67 10.9±0.9 1 1.5±0.1 1
HD 165341 A 5023 0.780 54.74 19.9 1 5.1±0.1 1
HD 219834 A 5461 0.800 56.51 42 1 21±1 1
HD 219834 B 5136 0.910 67.30 43 1 10±0.2 1
HD 10780 5321 0.804 56.87 22.14±0.55 2 7.53±0.16 2
HD 16673 6183 0.524 18.02 5.7 3 0.847±0.006 5
HD 43587 5876 0.610 28.58 22.6±1.9 4 10.44±3.03 4
HD 75732 5167 0.869 63.05 37.4±0.5 6 10.9 18
HD 185144 5366 0.786 55.26 27.7±0.77 2 6.66±0.05 2
HD 120136 6245 0.508 16.54 3.05±0.01 7 0.333±0.002 7
HD 140538 5645 0.684 42.51 20.71±0.32 8 3.88±0.02 8
Notes: The table lists star IDs with their corresponding B-V values, effective temperature T_ eff, the convective turnover time τ_ c, which was calculated using the relation in <cit.>, the rotation period P_ rot with its reference number, and the short-branch cycle period P_ cyc^S with its reference number.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>, (6) <cit.>, (7) <cit.>, (8) <cit.>, (9) <cit.>,
(10) <cit.>, (11) <cit.>, (12) <cit.>, (13) <cit.>, (14) <cit.>, (15) <cit.>, (16) <cit.>, (17) <cit.>, (18) <cit.>.
§.§ Data Samples
One of the main challenges in studying the relation between cycle length and rotation period is the small number of well-known and accurately measured activity cycles. This limitation introduces uncertainties in the derived empirical relations <cit.>. To overcome these challenges, it is crucial to obtain more reliable cycle periods, particularly for long-period cycles. Achieving this requires long-term time series observations of stars to gather comprehensive and accurate data on their activity cycles <cit.>. Therefore, when looking for activity cycles, it is more efficient to monitor fast-rotating objects, as cycles can be discovered within a few years of observation, as opposed to stars with longer rotation periods <cit.>. For this reason, we chose our sample for this study to include fast-rotating main-sequence stars of type F and G from Kepler data with well-known rotation periods of less than one day. First, we collected all Kepler IDs which have well-known rotation periods. We then selected targets with rotation periods of less than a day. Using Gaia Data Release 2 (Gaia-DR2), we identified F- and G-type main sequence stars by their effective temperatures and radii, based on the Harvard spectral classification. The ranges of the effective temperature are 6000-7500 K and 5200-6000 K for F and G types, respectively. We thus obtained a total of 811 Kepler IDs of F- and G-type stars with rotation periods of less than one day. By restricting the radii of the main-sequence stars to 1.15-1.4 R_⊙ and 0.96-1.15 R_⊙ for F and G types, respectively, the final data sample was reduced to 138 Kepler targets, comprising 83 F-type and 55 G-type main-sequence stars. 71.74% of the rotation periods for these stars were taken from <cit.>, 15.94% from <cit.>, 5.07% from <cit.>, 4.35% from <cit.> and 2.90% from <cit.>. These 138 Kepler targets are listed in Table <ref> with their effective temperature, radius, rotation period and the references for these rotation periods.
§ RESULTS
Using a data set of 138 Kepler IDs with P_ rot ranging from 0.202 d to 0.997 d, we provide a
prediction for the corresponding value of their P_ cyc^S, by applying the empirical relation between P_ cyc and P_ rot with the derived parameters in Equation <ref>. Hence we
obtained the predicted values of P_ cyc from
P_ cyc≈ 10^[(1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot].
From equation <ref>, we calculated 138 P_ cyc for 83 F-type and 55 G-type main-sequence stars whose rotation periods are less than a day. The shortest P_ cyc is equal to 5.13 d, while the longest P_ cyc is equal to 38.14 d. All the 138 predicted P_ cyc are listed in Table <ref>.
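A short sketch of how these predictions follow from the fitted relation is given below; the helper function is our own illustration, it uses only the central values of the fitted parameters, and the two example rotation periods are the extremes of our sample. The full table is obtained by applying the same function to all 138 rotation periods.

import numpy as np

a, n = 1.583, 1.257   # central values of the fitted parameters

def predicted_p_cyc(p_rot_days):
    """Predicted cycle period [d] from log P_cyc = a + n log P_rot."""
    return 10.0 ** (a + n * np.log10(p_rot_days))

print(predicted_p_cyc(0.202))   # ~5.13 d, the shortest predicted cycle
print(predicted_p_cyc(0.997))   # ~38.14 d, the longest predicted cycle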
List of the 138 Kepler IDs with their parameters and predicted P_ cyc (each row lists two stars).
KIC   T_ eff   R_⊙   P_ rot[d]   Ref   P_ cyc[d]   KIC   T_ eff   R_⊙   P_ rot[d]   Ref   P_ cyc[d]
757099 5521 1.05 0.36 1 10.60 6877871 6508 1.40 0.54 2 17.73
1028018 5544 1.14 0.62 2 21.03 6948098 6095 1.29 0.57 3 18.76
1721795 6534 1.31 0.89 2 32.93 6961285 5802 0.98 0.45 2 13.99
1872192 5316 0.98 0.67 2 23.31 6962901 5601 0.97 0.98 2 37.37
2557335 5568 1.01 0.24 2 6.20 7199002 6381 1.24 0.57 2 18.89
2558273 6673 1.35 0.99 2 37.85 7199013 5286 0.96 0.57 2 18.89
2715228 6374 1.30 0.99 1 37.80 7199037 6024 1.36 0.57 2 18.89
2715410 5997 1.11 0.90 1 33.53 7354297 5481 1.05 0.95 2 35.99
2849645 5424 1.06 1.00 2 38.14 7461022 6168 1.28 0.59 2 19.76
2985825 6783 1.23 0.94 3 35.18 7678509 6644 1.22 0.96 2 36.51
3124412 6302 1.21 0.93 1 34.94 7707736 5644 1.09 0.76 2 27.11
3241517 6283 1.34 0.78 3 28.19 7816211 6050 1.32 0.29 2 8.08
3352959 6476 1.37 0.76 2 27.07 7909399 6574 1.40 0.82 2 30.01
3356577 6746 1.39 0.63 4 21.58 7915824 6231 1.39 0.74 2 26.22
3448722 5872 1.13 0.41 2 12.60 7973882 5512 1.06 0.35 2 10.27
3448817 6792 1.33 0.95 4 35.78 8016369 6734 1.34 0.77 1 27.56
3459311 5789 1.05 0.98 2 37.37 8043256 6680 1.27 0.93 2 34.71
3550386 6006 1.30 0.32 2 9.10 8144578 6639 1.32 0.59 2 19.85
3836772 6210 1.32 0.69 2 23.88 8197275 5604 1.14 0.44 2 13.52
3869099 5607 1.01 0.29 2 7.94 8264155 6738 1.33 0.91 4 34.08
4175618 5369 1.05 0.41 2 12.60 8264659 5417 1.12 0.97 1 36.84
4283120 6202 1.25 0.52 2 16.71 8285970 5639 1.14 0.57 2 18.72
4374659 5824 1.03 0.23 2 5.87 8313378 6624 1.31 0.54 2 17.73
4386947 5681 1.14 0.65 2 22.10 8382253 5695 1.01 0.63 3 21.37
4464528 6392 1.38 0.22 2 5.81 8393626 5893 1.15 0.43 2 13.06
4464530 6545 1.30 0.22 2 5.77 8420730 5770 1.08 0.25 2 6.53
4570231 5661 0.99 0.54 1 17.64 8651921 6473 1.29 0.95 2 35.65
4660562 5677 0.96 0.77 1 27.56 8687209 5650 1.00 0.77 1 27.56
4762130 6202 1.35 0.80 2 28.78 8804962 6586 1.23 0.90 2 33.53
4774370 6546 1.36 0.93 2 34.85 8892124 5263 1.01 0.72 2 25.38
4816098 6239 1.29 0.95 1 35.89 8916436 6566 1.35 0.87 1 32.13
4850965 5503 1.04 0.61 2 20.40 9146690 5387 1.11 0.72 2 25.20
4949214 6511 1.36 0.92 2 34.52 9206726 6876 1.31 0.46 4 14.61
4949350 6587 1.40 0.88 2 32.37 9306290 5571 1.04 0.82 2 29.97
4949766 6587 1.39 0.81 2 29.19 9393015 5877 1.01 0.24 2 6.40
5038288 5785 0.99 0.88 2 32.51 9456932 5875 0.97 0.53 2 17.24
5107198 6077 1.36 0.36 2 10.67 9474101 5945 1.10 0.21 2 5.32
5273178 6774 1.32 0.88 2 32.65 9594038 6694 1.31 0.94 4 35.56
5397765 6251 1.34 0.94 2 35.47 9640204 6620 1.33 0.53 2 17.32
5426665 6323 1.38 0.39 2 11.80 9640472 6076 1.34 0.34 2 9.68
5444276 6475 1.31 0.71 2 24.71 9710612 5867 1.08 0.39 2 11.80
5450307 6398 1.24 0.99 3 37.85 9730249 6479 1.34 0.91 2 33.77
5480545 6535 1.31 0.93 2 35.09 9896552 6279 1.26 0.87 1 32.13
5514866 5487 0.97 0.28 2 7.66 9897710 5840 1.08 0.43 2 13.21
5514871 5220 1.06 0.28 2 7.66 9965888 5589 1.13 0.31 2 8.82
5543840 6518 1.20 0.82 2 29.69 9970838 6429 1.25 0.96 2 36.42
5623538 6729 1.32 0.99 1 37.80 10023062 6469 1.38 0.89 2 33.11
5623852 5886 1.10 0.57 2 18.89 10134084 5926 1.00 0.55 5 18.06
5629449 6897 1.31 0.71 1 24.89 10490282 5504 1.05 0.79 2 28.42
5646176 6302 1.20 0.99 1 37.80 10614890 5283 1.06 1.00 2 38.14
5795235 6517 1.36 0.91 2 34.00 10809099 6051 1.31 0.91 2 33.91
5898014 6697 1.35 0.83 2 30.20 11017401 5648 1.09 0.80 2 28.96
5988566 6299 1.20 0.44 2 13.52 11018874 6454 1.30 0.99 2 37.99
6114118 6234 1.24 0.94 2 35.32 11247377 6184 1.38 0.40 2 12.02
6114140 6384 1.16 0.93 3 35.13 11349677 6076 1.23 0.84 1 30.75
6145032 6315 1.28 0.81 1 29.37 11400413 6781 1.34 0.76 4 27.27
6149358 6660 1.28 0.89 2 32.93 11498689 5464 1.10 0.31 2 8.78
6219870 5663 1.05 0.81 1 29.37 11653059 6160 1.26 0.29 2 8.08
6224148 6230 1.18 0.20 2 5.13 11924842 5494 1.13 0.84 5 30.75
6385867 5306 1.06 0.58 1 19.30 11969131 6444 1.23 0.63 1 21.42
6386598 6658 1.37 0.76 2 27.20 12067121 6211 1.33 0.43 5 13.25
6391602 5782 0.99 0.42 2 12.83 12108612 5695 1.09 0.71 2 24.76
6421219 6191 1.36 0.79 2 28.51 12119534 5296 0.98 0.64 2 21.97
6449077 6366 1.31 0.94 2 35.51 12121738 6134 1.31 0.73 2 25.73
6529902 6604 1.38 0.29 2 8.08 12157161 6513 1.26 0.78 2 27.79
6693864 6846 1.35 0.86 1 31.67 12157799 6117 1.17 0.89 5 33.07
6836589 5628 1.15 0.73 2 25.91 12354328 5251 0.97 0.81 2 29.33
6846595 6718 1.26 0.99 1 37.80 12356839 5605 1.14 0.35 2 10.05
6854461 6547 1.39 0.95 3 36.03 12418959 6427 1.36 0.78 2 28.10
Notes: Effective temperature T_ eff and radius R_⊙ were taken from Gaia-DR2.
References: (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>.
After predicting the values of the activity cycles for our data sample, which is extended compared to <cit.>, we wish to examine the theoretical prediction given by Equation 2 for short P_ cyc < 1 yr.
This is because the latter equation is a theoretical prediction, based on first physical principles,
as opposed to an empirical fit, which lacks any theoretical or conceptual justification.
Therefore, we focused on the activity cycles derived from previous studies, as presented in Table 1. We chose 20 stars whose P_ cyc is less than a year and plotted the fit between P_ rot and P_ cyc, as shown in Figure <ref>, using a simple linear regression without an intercept given by
P_ cyc [ yr]= n P_ rot [ d].
We obtained the slope n= 0.081 ± 0.009 and an R^2 value of 0.997, which is an indication of a good fit, despite a large scatter.
Note that P_ cyc here is in years, as in Figure 14 from <cit.>.
Therefore, for the lower and upper bounds of our
138 Kepler IDs with P_ rot ranging from 0.202 d to 0.997 d,
this simple, theoretically justified equation predicts
P_ cyc=0.081×0.202×365.25=5.98 d and 0.081×0.997×365.25=29.50 d,
which are not very different from the values of 5.13 d and 38.14 d, respectively,
obtained by applying the more accurate power-law fit of equation <ref>.
Finally, we examine the convective turnover time, τ_c, vs.
B-V colour index relation, as in Figure 3 from <cit.>.
In general, direct measurements of convective turnover time are not possible. However, its estimation is possible by analysing stars' rotation and activity data.
As pointed out by <cit.>, scaling the rotation periods with a colour- or mass-dependent τ_c can reduce scatter in the relation between rotation and activity, leading to a broken power-law fit between activity and the Rossby number, as e.g. in <cit.>.
<cit.> present a comprehensive study of the convective turnover time, τ_c, and its dependence on the stellar metallicity and age of main-sequence stars with masses between 0.6-1.6 M_⊙. They also
remark that there is substantial variation between the different models:
e.g. <cit.>, using chromospheric and coronal data, obtained a significantly flatter curve for B-V > 0.8 than the widely-used <cit.> (see figure 4 from <cit.>).
We plot the convective turnover time, τ_c, vs.
B-V colour index in Figure <ref>.
Figure <ref> uses the following expressions
for the dependence of the convective turnover time τ_c on the B-V colour index, as derived from <cit.>:
logτ_c = (1.06±0.07) + (2.33±0.37) ((B-V) - 0.44)
for 0.44 ≤ B - V ≤ 0.71. In the case when B - V > 0.71 then
logτ_c = (1.69±0.12) + (0.69±0.13) ((B-V) - 0.71).
As can be seen in Figure <ref>, our range of B-V colour is larger compared to the data from <cit.>.
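For reference, the two-branch expression above can be evaluated with a short helper function. The sketch below is ours, uses only the central values of the coefficients, ignores their quoted uncertainties, and is not meant to be applied far outside the stated B-V range.

import numpy as np

def log_tau_c(b_minus_v):
    """log10 of the convective turnover time tau_c [d] as a function of B-V,
    using the central values of the two-branch fit quoted above."""
    b_minus_v = np.asarray(b_minus_v, dtype=float)
    low = 1.06 + 2.33 * (b_minus_v - 0.44)    # branch for 0.44 <= B-V <= 0.71
    high = 1.69 + 0.69 * (b_minus_v - 0.71)   # branch for B-V > 0.71
    return np.where(b_minus_v > 0.71, high, low)

# Example: a solar-like colour index B-V ~ 0.64 gives tau_c of roughly 34 d,
# close to the value listed for the Sun in Table 1.
print(10 ** log_tau_c(0.64))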
§ CONCLUSIONS
In this work, we studied the empirical relation between
stellar activity cycle and rotation period.
First, we reproduced the fit between P_ rot and P_ cyc using <cit.> data
and obtained the following fit parameters
log P_ cyc≈ (1.458 ± 0.074) + (1.348 ± 0.054) log P_ rot,
which are slightly different from <cit.>'s
a= 1.488 ± 0.092 and n= 1.324 ± 0.067, for
reasons that are not fully clear to us.
Then, using a larger data set made up of 94 P_ rot and their 94 associated P_ cyc taken from prior studies, we re-examined the fit between P_ rot and P_ cyc and obtained the following fit parameters
log P_ cyc≈ (1.583 ± 0.064)+(1.257 ± 0.051) log P_ rot.
Using these new parameters, we applied this relation to a sample of 83 F-type and 55 G-type main sequence stars whose rotation periods are less than one day, to provide tabular predictions for cases with very short activity cycles, in order
to determine in the future whether or not these short activity cycles are a common occurrence in these stars.
As a result, we derived 138 predicted P_ cyc ranging from 5.13 d to 38.14 d, which are listed in Table <ref>.
The usefulness of measuring short stellar activity cycles
is subject to two main general difficulties:
(i) If a monitoring program of stellar activity (e.g. the activity-related chromospheric emission S-index or similar) is used,
as in references such as <cit.> or <cit.>, then the cadence of the observations is too long:
e.g. according to table 2 from the latter reference, the cadence could be 87 observations per year, i.e. 365/87 ≈ 4 days. Resolving activity cycles with 5.13≤ P_ cyc≤ 38.14 d at such a cadence would be nearly impossible.
(ii) If Kepler light curves are used for, e.g., plotting the number of flares per day vs. time, then a large number of flare detections would be necessary to have reliable statistics. However, the problem is the long cadence, 30 minutes, of the mainstream Kepler data. The photometer used by Kepler is sensitive to wavelengths ranging from 400 to 865 nm, covering the entire visible spectrum and a fraction of the infrared. The accuracy of the Kepler photometer is approximately 0.01%, or 0.1 mmag, when 30-minute integration times are used for stars with a magnitude of 12. Kepler's 30-minute integrations detected flare amplitudes less than 0.1% of the stellar value and energies of 2×10^33 ergs. The duration of the flares ranged from one to three hours, with a rapid increase followed by a slow, exponential decline <cit.>. When Kepler data are taken at a higher cadence or sampling rate of one minute, the accuracy of the measurements decreases. However, this higher cadence enables Kepler to detect flares that are too brief to be detected reliably using the main 30-minute integrations. With the one-minute cadence, Kepler can detect flares with energies as low as 10^32 ergs <cit.>.
It is worth noting that earlier studies exist using different observations where the energy involved in the observed transient brightening is estimated to range from 10^25 to 10^29 erg <cit.>. Also, as far as the Sun is concerned, studies exist <cit.> which consider flare frequency as a function of flare energy in the range 10^27 to 10^31 erg, but this is applicable to the Sun only.
In order to have good statistics for the Kepler IDs considered, we need to detect flares with energies of 10^27-10^32 ergs in order to see the variation of the number of flares per day on a time scale of 5.13≤ P_ cyc≤ 38.14 d.
To achieve this goal, a new space mission is necessary, with a short time cadence (< 1 minute) and a photometric accuracy < 0.01%.
A typical example of such proposed sample data from such a space mission is shown in Figure <ref>.
An alternative option could be a ground-based S-index monitoring program of stellar activity with a shorter cadence of ≈ 1 d or less. However, it is unclear
whether this is technically feasible.
In any case, the present study provides predictions for 5.13≤ P_ cyc≤ 38.14 d, and
we hope that future space- or ground-based observational missions will put our predictions to the test.
Until such time, the jury is still out.
§ ACKNOWLEDGEMENTS
Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts.
The authors would like to thank Deborah Kenny of STScI for kind assistance in obtaining the data, and Cozmin Timis and Alex Owen of Queen Mary University of London for the assistance in data handling at the Astronomy Unit.
A. K. Althukair wishes to thank Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia and
Royal Embassy of Saudi Arabia Cultural Bureau in London, UK for the financial support of her PhD scholarship, held at Queen Mary University of London.
§ DATA AVAILABILITY
Some of the data underlying this article were accessed from Mikulski Archive for Space Telescopes (MAST) <https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>. This paper also has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The derived data generated in this research will be shared on reasonable request to the corresponding author.
|
http://arxiv.org/abs/2307.07231v1 | 20230714085103 | Long Short-term Memory with Two-Compartment Spiking Neuron | [
"Shimin Zhang",
"Qu Yang",
"Chenxiang Ma",
"Jibin Wu",
"Haizhou Li",
"Kay Chen Tan"
] | cs.NE | [
"cs.NE"
] |
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays. As a result, it remains a challenging task for state-of-the-art spiking neural networks (SNNs) to identify long-term temporal dependencies since bridging the temporal gap necessitates an extended memory capacity. To address this challenge, we propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF. Our model incorporates carefully designed somatic and dendritic compartments that are tailored to retain short- and long-term memories. The theoretical analysis further confirms its effectiveness in addressing the notorious vanishing gradient problem. Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model. This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
§ INTRODUCTION
Deep learning has revolutionized many research fields by empowering machines to learn complex patterns from vast amounts of data; instances include computer vision <cit.>, natural language processing <cit.>, and speech recognition <cit.>. A key component of deep learning is the artificial neural network, which is inspired by the structure and function of biological neural networks <cit.>. Among the different types of artificial neural networks, spiking neural networks (SNNs) have attracted significant attention recently owing to their biological plausibility and potential for facilitating energy-efficient computation <cit.>.
Spiking neurons emulate the rich neuronal dynamics of biological neurons, which facilitate the encoding and memorizing of spatio-temporal sensory cues. Furthermore, spiking neurons communicate with each other via discrete spikes, such event-driven operation leads to ultra-low-power neural computation <cit.>.
In practice, single-compartment spiking neuron models are widely adopted due to their mathematical tractability and computational efficiency; instances include the Leaky Integrate-and-Fire (LIF) model <cit.>, the Izhikevich model <cit.>, and the Adaptive Exponential Integrate-and-Fire (AdEx) model <cit.>. These single-compartment models abstract the biological neuron as a single electrical circuit, preserving the essential neuronal dynamics of biological neurons while ignoring the complex geometrical structure of dendrites and somas. This degree of abstraction significantly reduces the modeling effort, making it more feasible to study the behavior of large-scale biological neural networks and to perform complex pattern recognition tasks on neuromorphic machines.
While single-compartment spiking neuron models have demonstrated promising results in various pattern recognition tasks <cit.>, their ability to solve tasks that require long-term temporal dependencies remains constrained. This is primarily attributed to their limited memory capacity. Specifically, the intrinsic leakage of neuronal states (e.g., membrane potential) coupled with the reset mechanism leads to rapid forgetting or loss of past inputs <cit.>. The loss of information about past inputs makes it challenging for neurons to learn long-term dependencies, especially when the temporal gap between sensory cues is significantly larger than the decaying time constant of the neuronal state variables <cit.>. This problem has motivated recent proposals adopting a dynamic firing threshold <cit.> and an adaptive time constant <cit.> to improve the memory capacity of single-compartment spiking neurons.
For biological neurons, the separation of dendrites and soma as well as
the complex geometrical structure of dendrites facilitates interactions between different neuronal compartments, resulting in memory traces of input signals at different timescales <cit.>. This multi-compartment structure enhances neuronal dynamics and allows historical or contextual information to be maintained over an extended time period. It, therefore, lays the foundation for learning long-term dependencies between sensory cues <cit.>. While incorporating more compartments offers additional benefits of expanded memory capacity, the increased model complexity and computational cost may hinder their practical use, especially for complex pattern recognition tasks with large-scale SNNs.
In this paper, we derive a generalized two-compartment neuron model as depicted in Figure <ref>(a). This neuron model provides an ideal reflection of the minimal geometry of the well-known Pinsky-Rinzel (P-R) pyramidal neuron while preserving the essential features of more complicated multi-compartment models <cit.>. Furthermore, we propose a memory-augmented variant, which we refer to as the Long Short-Term Memory Leaky Integrate-and-Fire (LSTM-LIF) model. LSTM-LIF separates soma and dendrites into two compartments, which are tailored to store short-term and long-term memories, respectively. This unique design significantly boosts the memory capacity of traditional single-compartment neuron models, thereby enabling the effective processing of multi-scale temporal information.
The main contributions of our work are summarized as follows:
* We propose a biologically inspired two-compartment spiking neuron model, dubbed LSTM-LIF, which tailors its somatic and dendritic compartments to store short- and long-term memories, respectively.
* We conduct a theoretical analysis to shed light on the effectiveness of our proposed LSTM-LIF model in resolving the vanishing gradient problem during BPTT training.
* Our experimental results, on a broad range of temporal classification tasks, demonstrate the superior performance of the proposed model, including exceptional classification capability, rapid training convergence, greater network generalizability, and high energy efficiency.
§ RELATED WORKS
§.§ Memory-Enhanced Single-Compartment Spiking Neuron Models
Due to the inherent limitations of LIF neurons in performing long-term temporal credit assignments, the development of memory-enhanced single-compartment spiking neuron models has become a research focus in recent years. Notable efforts include the Long Short-Term Memory Spiking Neural Network (LSNN) proposed by Bellec et al. <cit.>. LSNN introduces an adaptive firing threshold mechanism to LIF neurons, which serves as a long-term memory of past inputs. Yin et al. <cit.> further propose to apply learnable time constants for the adaptive firing threshold, such that multi-scale temporal information can be retained <cit.>.
Along the same direction, the Parametric LIF (PLIF) model <cit.> introduces learnable membrane time constants that allow LIF neurons to retain multi-scale temporal information in their membrane variables. More recently, the gated LIF (GLIF) <cit.> model incorporates learnable gates to selectively integrate the essential neuronal dynamics, including synaptic integration, membrane leakage, and reset. These enhancements enrich the neuronal dynamics and, therefore, improve the representation power and adaptivity of LIF neurons.
Nevertheless, these single-compartment spiking neuron models still
face problems with limited memory capacity and struggle to perform long-term temporal credit assignment. This motivates us to explore more complex multi-compartment models in this work to further enhance the memory capacity of spiking neurons.
§.§ Multi-Compartment Spiking Neuron Model
Multi-compartment spiking neuron models have been extensively studied in the literature. By faithfully modeling the geometrical structure of biological neurons as well as the interactions among their different compartments, multi-compartment models can represent the rich neuronal dynamics of biological neurons. One of the earliest multi-compartment models is the Rall model <cit.>, which is designed based on the cable theory of passive dendrites.
While the Rall model is primarily focused on passive dendritic properties, other models have been extended to incorporate active conductances, such as voltage-gated ion channels, that play a crucial role in shaping the dendritic voltage responses. For instance, the Pinsky-Rinzel model <cit.> is a two-compartment model that simulates the interaction between somatic and dendritic compartments, capturing the essential properties of CA3 pyramidal neurons in the hippocampus.
More recently, researchers have proposed multi-compartment models with varying levels of complexity to better understand the role of dendrites in neural computation.
Multi-compartment spiking neuron models have proven valuable in understanding the complex dynamics of biological neurons, as well as in enabling more accurate brain simulations. However, the trade-off between model complexity and computational efficiency remains a key challenge for practical applications. In this work, we aim to design a biologically plausible two-compartment neuron model that can achieve a good balance between these two aspects, while maintaining the superior temporal processing capability of biological neurons.
§ METHODOLOGY
In this section, we first introduce the dynamics of a typical single-compartment neuron model (i.e., LIF) and elaborate on its inherent deficiencies in retaining long-term memories as well as learning long-term dependencies. Then, we will present a generalized two-compartment spiking neuron model inspired by the well-known Pinsky-Rinzel pyramidal neurons <cit.>. Based on this, we further develop a memory-augmented two-compartment spiking neuron model, namely LSTM-LIF.
Our proposed LSTM-LIF model is theoretically well-grounded and can facilitate learning long-term dependencies.
§.§ Inherent Limitations of LIF Neuron for Long-term Temporal Credit Assignment
In general, spiking neurons integrate synaptic inputs triggered by incoming spikes. Once the accumulated membrane potential surpasses the firing threshold, an output spike will be generated and transmitted to subsequent neurons. The LIF neuron is the most ubiquitous and effective single-compartment spiking neuron model, which has been widely used for large-scale brain simulation and neuromorphic computing. The neuronal dynamics of a LIF neuron can be described by the following discrete-time formulations:
𝒰[t]=β𝒰[t-1] - 𝒱_th𝒮[t-1] + ℐ[t]
ℐ[t]=∑_iω_i𝒮_i[t-1]+b
𝒮[t]=Θ(𝒰[t]-𝒱_th)
where 𝒰[t] and ℐ[t] represent the membrane potential and the input current of a neuron at time t, respectively. The term β≡ exp(-dt/τ_m) is the membrane decaying coefficient, which lies in the range (0, 1), where τ_m is the membrane time constant and dt is the simulation time step. ω_i denotes the synaptic weight that connects input neuron i, and b represents the bias term. An output spike will be generated once the membrane potential 𝒰[t] crosses the neuronal firing threshold 𝒱_th as per Eq. (<ref>).
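To make the above update rule concrete, a minimal single-neuron simulation is sketched below in plain Python; the parameter values and the constant input current are illustrative placeholders only and do not correspond to any experiment reported in this paper.

def lif_step(u_prev, s_prev, i_t, beta=0.9, v_th=1.0):
    """One discrete-time LIF update: leak, soft reset, input integration, spiking."""
    u = beta * u_prev - v_th * s_prev + i_t
    s = 1.0 if u >= v_th else 0.0   # Heaviside step on (u - v_th)
    return u, s

# Example: a constant input current drives the neuron to fire periodically
u, s = 0.0, 0.0
for t in range(6):
    u, s = lif_step(u, s, i_t=0.4)
    print(t, round(u, 3), int(s))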
The vanishing gradient problem remains a critical obstacle that hampers the learning of long-term dependencies by stateful neural networks, such as vanilla RNNs and SNNs.
To further elaborate on this problem in SNNs, we consider network training with the following objective function:
ℒ(𝒮̂, 𝒮)=1/N∑_n=1^Nℒ(𝒮̂_n, 𝒮_n)
where N is the number of training samples, ℒ is the loss function, 𝒮_n is the network output, and 𝒮̂_n is the training target.
This objective function can be optimized with the canonical backpropagation through time (BPTT) algorithm. In particular, the gradient of the synaptic weight ω can be calculated as follows:
∂ℒ/∂ω=∑_t=1^T∂ℒ/∂𝒮[T]∂𝒮[T]/∂𝒰[T]∂𝒰[T]/∂𝒰[t]∂𝒰[t]/∂ω
By substituting Eq. (<ref>) into the above equation to compute ∂𝒰[T]/∂𝒰[t], it is obvious that the influence of time step t on a subsequent time step T diminishes as T increases. This is because the membrane potential decay causes an exponential decay of early information. This problem is exacerbated when t is considerably smaller than T, leading to the vanishing gradient problem.
Consequently, single-compartment neuron models, epitomized by the LIF model, struggle to retain long-term memory. Therefore, their ability to learn long-term dependencies is limited. This motivates us to develop two-compartment neuron models that can expand memory capacity and facilitate learning long-term dependencies.
§.§ Generalized Two-Compartment Spiking Neuron Model
The Pinsky-Rinzel (P-R) pyramidal neurons are located in the CA3 region of the hippocampus, which plays an important role in memory storage and retrieval in animals. Researchers have simplified this neuron as a two-compartment model that can simulate the interaction between somatic and dendritic compartments, as depicted in Figure <ref>(a). Drawing upon the structure of the P-R model, we develop a generalized two-compartment spiking neuron model defined as follows. The detailed derivations of this general formulation are provided in Supplementary Material Section <ref>.
𝒰^D[t]=α_1𝒰^D[t-1] + β_1𝒰^S[t-1] + ℐ[t]
𝒰^S[t]=α_2𝒰^S[t-1] + β_2𝒰^D[t] - 𝒱_th𝒮[t-1]
𝒮[t]=Θ(𝒰^S[t]-𝒱_th)
where 𝒰^D and 𝒰^S represent the membrane potentials of the dendritic and the somatic compartments, respectively.
α_1 and α_2 are the respective membrane potential decaying coefficients of these two compartments.
Notably, the membrane potentials of these two compartments are not updated independently. Rather, they are coupled with each other through the second terms in Eqs. (<ref>) and (<ref>), in which the coupling effects are controlled by the coefficients β_1 and β_2.
The interplay between these two compartments enhances the neuronal dynamics and, if properly designed, can resolve the vanishing gradient problem.
§.§ Long Short-Term Memory Leaky Integrate-and-Fire (LSTM-LIF) Model
Based on the generalized two-compartment spiking neuron model introduced earlier, we propose an LSTM-LIF model that is equipped with enhanced memory capacity as well as the ability to learn long-term dependencies. In comparison to the generalized two-compartment neuron model, we drop the membrane decaying factors α_1 and α_2 from both compartments. This modification aims to circumvent the rapid decay of memory that could cause unintended information loss. Moreover, to circumvent excess firing caused by persistent input accumulation, we design β_1 and β_2 to take opposite signs. The dynamics of the proposed LSTM-LIF model are defined by the following equations:
𝒰^D[t]=𝒰^D[t-1] + β_1𝒰^S[t-1] + ℐ[t] - γ𝒮[t-1]
𝒰^S[t]=𝒰^S[t-1] + β_2𝒰^D[t] - 𝒱_th𝒮[t-1]
𝒮[t]=Θ(𝒰^S[t]-𝒱_th)
According to the above formulations, 𝒰^S is responsible for retaining short-term memory about decaying dendritic inputs, and it will be reset after neuron firing. Notably, the output spikes are generated from the somatic compartment in a context-aware manner, to which 𝒰^D contributes. In contrast, 𝒰^D serves as a long-term memory that retains the past inputs. It is worth noting that, despite negative feedback from the somatic compartment, the memory traces of ℐ[t] will not be corrupted.
The coefficients β_1 ≡ -σ(c_1) and β_2 ≡σ(c_2) determine the efficacy of information communication between the two compartments. Here, the sigmoid function σ(·) is utilized to ensure the two coefficients are within the ranges (-1, 0) and (0, 1), respectively, and the parameters c_1 and c_2 can be automatically adjusted during the training process. The effect of this design choice will be analyzed in detail in Section <ref>. The membrane potentials of both compartments are reset after the firing of the soma. The reset of the dendritic compartment is triggered by the backpropagating spike and governed by a scaling factor γ. The internal operations of the LSTM-LIF model are depicted in Figure <ref>(c), which exhibits richer internal dynamics in comparison to the LIF model that is shown in Figure <ref>(b).
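A minimal single-neuron sketch of the LSTM-LIF update defined above is given below. The values chosen for c_1, c_2, γ, 𝒱_th and the input are illustrative placeholders rather than trained parameters; note that the dendritic state is updated first, so that the somatic update can use the new 𝒰^D, as in the equations.

import math

def lstm_lif_step(u_d, u_s, s_prev, i_t, c1=0.0, c2=0.0, gamma=0.5, v_th=1.0):
    """One discrete-time update of the two-compartment LSTM-LIF neuron."""
    beta1 = -1.0 / (1.0 + math.exp(-c1))   # beta_1 = -sigmoid(c1), in (-1, 0)
    beta2 = 1.0 / (1.0 + math.exp(-c2))    # beta_2 = sigmoid(c2), in (0, 1)
    u_d = u_d + beta1 * u_s + i_t - gamma * s_prev   # dendritic (long-term) state
    u_s = u_s + beta2 * u_d - v_th * s_prev          # somatic (short-term) state
    s = 1.0 if u_s >= v_th else 0.0                  # spike generated at the soma
    return u_d, u_s, s

# Example: a brief strong input makes the neuron fire once,
# after which both compartments receive their reset terms.
u_d, u_s, s = 0.0, 0.0, 0.0
for t in range(6):
    i_t = 1.0 if t < 2 else 0.0
    u_d, u_s, s = lstm_lif_step(u_d, u_s, s, i_t)
    print(t, round(u_d, 3), round(u_s, 3), int(s))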
To further demonstrate the superiority of the proposed model in learning long-term dependencies, we provide a mathematical proof to show why the LSTM-LIF model can greatly alleviate the vanishing gradient problem.
As discussed in Section <ref>, the primary cause of the gradient vanishing problem is attributed to the recursive computation of ∂𝒰_T/∂𝒰_t.
This problem can, however, be effectively alleviated in the proposed LSTM-LIF model, wherein the partial derivative ∂𝒰_T/∂𝒰_t can be calculated as follows:
∂𝒰[T]/∂𝒰[t]=∏_j=t+1^T∂𝒰[j]/∂𝒰[j-1], 𝒰[j]=[𝒰^D[j], 𝒰^S[j]]^T
where
∂𝒰[j]/∂𝒰[j-1] = [ ∂𝒰^D[j]/∂𝒰^D[j-1] ∂𝒰^D[j]/∂𝒰^S[j-1]; ∂𝒰^S[j]/∂𝒰^D[j-1] ∂𝒰^S[j]/∂𝒰^S[j-1] ] = [ β_1β_2+1 β_1; β_1β_2^2+2β_2 β_1β_2+1 ]
To quantify the severity of the vanishing gradient problem in LSTM-LIF, we further calculate the column infinity norm as provided in Eq. (<ref>). This norm signifies the maximum rate of change of the membrane potentials over a prolonged time period.
∂𝒰[j]/∂𝒰[j-1]_∞=max(β_1β_2^2+β_1β_2+2β_2+1, β_1β_2+β_1+1)
By employing a constrained optimization method to bound Eq. (<ref>) from below, it can be shown that ∂𝒰[j]/∂𝒰[j-1]_∞>1. This suggests that the LSTM-LIF model can effectively prevent exponentially decaying gradients.
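As a quick sanity check of this bound, the short sketch below evaluates the one-step Jacobian above and its column-sum norm over a grid of admissible β values (β_1 ∈ (-1,0), β_2 ∈ (0,1)); the grid resolution is arbitrary.

```python
import numpy as np

def step_jacobian(beta1, beta2):
    # Partial derivative of (U^D, U^S)[j] w.r.t. (U^D, U^S)[j-1], as given above
    return np.array([[beta1 * beta2 + 1.0, beta1],
                     [beta1 * beta2 ** 2 + 2.0 * beta2, beta1 * beta2 + 1.0]])

norms = []
for b1 in np.linspace(-0.99, -0.01, 50):
    for b2 in np.linspace(0.01, 0.99, 50):
        J = step_jacobian(b1, b2)
        norms.append(max(J[:, 0].sum(), J[:, 1].sum()))  # column-sum norm
print(min(norms) > 1.0)  # True: the per-step gradient norm stays above one
```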
It is worth noting that the LSTM-LIF model can be reformulated into a single-compartment form:
𝒰^S[t]=(1+β_1β_2)𝒰^S[t-1] + β_2𝒰^D[t-1] + β_2ℐ[t] - (β_2γ+𝒱_th)𝒮[t-1]
In essence, the above formulation mirrors a LIF neuron characterized by a decaying input.
Although memory decay is not entirely eliminated in the LSTM-LIF model, the presence of 𝒰^D effectively compensates for the memory loss and addresses the vanishing gradient problem.
§ EXPERIMENTS
In this section, we first validate the effectiveness of our proposed LSTM-LIF model in learning long-term dependencies. Then, we evaluate the LSTM-LIF model on various temporal classification benchmarks, including sequential MNIST (S-MNIST) <cit.>, permuted sequential MNIST (PS-MNIST) <cit.>, Google Speech Commands (GSC) <cit.>, Spiking Heidelberg Digits (SHD) <cit.>, and Spiking Google Speech Commands (SSC) <cit.>.
Finally, we conduct a comprehensive study to demonstrate the advantages of the LSTM-LIF model in terms of rapid training convergence, strong network generalization, and high energy efficiency. To facilitate comparison with state-of-the-art (SOTA) single-compartment neuron models, we construct our network architectures with a comparable amount of parameters. More details about our experimental setups are provided in Supplementary Materials Section <ref>.
§.§ Exploring Parameter Space for Generalized Two-Compartment Neurons
Figure: Model accuracies with respect to different initial β values.
In Section <ref>, we put forward a generalized two-compartment model whose neuronal dynamics are determined by [α_1, α_2, β_1, β_2]. To circumvent the rapid decay of memory, we set both α_1 and α_2 to one. It is worth noting that the selection of β_1 and β_2 will, however, significantly affect the training convergence of a two-compartment neuron model. To shed light on the effectiveness of the proposed parameter setting for the LSTM-LIF model, we initialize β_1 and β_2 across four different quadrants and evaluate their performance on the S-MNIST dataset.
In Figure <ref>, the contour map illustrates the partial derivative of the membrane potential at adjacent time steps for a generalized two-compartment neuron model.
Different color shades on this map indicate different values of the partial derivative.
The yellow line demarcates the region where the partial derivative equals one. The scattered points on this plot represent different neuron models, each initialized with a different set of values for β_1 and β_2.
The results reveal that initializing β in the first and third quadrants leads to apparent exploding and vanishing gradient problems, respectively, causing the models to diverge. While initializing β within the upper-right corner of the second quadrant also leads to exploding gradients, a wide range of β values within the lower-left corner of the second quadrant supports effective training. Therefore, we select initialization values from this region for our LSTM-LIF model and use them consistently for the rest of our experiments. Although initializing β in the fourth quadrant avoids the issues of vanishing and exploding gradients, it results in negative inputs (see Eq. (<ref>)) to the somatic compartment, which leads to poor temporal classification results.
§.§ Superior Performance for Temporal Classification Tasks
Table <ref> presents the results of the proposed LSTM-LIF model on five selected datasets, along with other existing works. Overall, given the same amount of parameters, the LSTM-LIF model consistently outperforms SOTA single-compartment neurons across all datasets.
For the S-MNIST dataset, each data sample has a sequence length of 784, which requires the model to learn long-range dependencies. The LIF model performs worst on this dataset, which can be explained by the vanishing gradient problem discussed in Section <ref>. As expected, the memory-augmented LSNN <cit.> and adaptive LIF (ALIF) <cit.> models achieve accuracies comparable to or even better than those of non-spiking models, such as LSTM <cit.>. Our proposed LSTM-LIF model consistently outperforms these memory-augmented single-compartment neuron models, suggesting its high efficacy in retaining long-term memory and handling long-term dependencies. Notably, we achieve
99.01% accuracy with a recurrent architecture, which is the best-reported SNN model for this dataset. The same conclusions can be drawn for the more challenging PS-MNIST dataset.
In addition to image datasets, we further conduct experiments on speech datasets that exhibit rich temporal dynamics. For the non-spiking GSC dataset, our LSTM-LIF model achieves 90.60% and 94.30% accuracy for feedforward and recurrent networks, respectively, surpassing SOTA models by a large margin. The SHD and SSC datasets are neuromorphic datasets that are specifically designed for benchmarking SNNs. On these datasets, our proposed LSTM-LIF model exhibits a significant improvement over all other reported works.
§.§ Rapid Learning Convergence
The gradient vanishing problem, as described in Section <ref>, is notorious for BPTT training. It can result in slow convergence and unstable learning. By effectively addressing this issue, the proposed LSTM-LIF model ensures a more stable flow of gradients during the backpropagation process, leading to faster and more stable learning.
To shed light on this, we compare the learning curve of LSTM-LIF with the LIF, GLIF, and PLIF models under the same training settings. As illustrated in Figure <ref>, the LSTM-LIF model converges rapidly within about 25 epochs for both feedforward and recurrent networks, while the LIF model takes around 100 and 75 epochs to converge for feedforward and recurrent networks, respectively.
Moreover, for recurrent networks, the LSTM-LIF model exhibits greater stability than the LIF and PLIF models, especially during the early training stage. Although the GLIF model exhibits a convergence speed similar to that of the LSTM-LIF model, we notice that the LSTM-LIF model achieves higher accuracy, which we attribute to its smoother loss landscape, as explained below.
§.§ Stronger Network Generalization with Smooth Loss Landscape
To investigate the reason why the LSTM-LIF model can achieve more stable learning and faster convergence than the LIF model, we further compare their loss landscapes near the local minima found by training.
As shown in Figure <ref>, it is obvious that the LSTM-LIF model exhibits a notably smoother loss landscape near the local minima compared to the LIF model. This suggests the LSTM-LIF model offers improved learning dynamics and convergence properties.
In particular, the smoother loss landscape reduces the likelihood of being trapped in local minima, which leads to more stable optimization and faster convergence. Furthermore, it suggests stronger network generalization, as the model is less prone to overfitting and underfitting.
Overall, the observed smooth loss landscape highlights the potential of the LSTM-LIF model for more accurate and efficient learning, particularly for long temporal sequences.
§.§ High Energy Efficiency
So far, it remains unclear whether the proposed LSTM-LIF model makes a good trade-off between model complexity and computational efficacy. To answer this question, we conduct a theoretical and empirical analysis of the energy efficiency of the LIF, LSTM-LIF, and non-spiking LSTM <cit.> models. In particular, we count the accumulate (AC) and multiply-and-accumulate (MAC) operations consumed during input data processing and the network update. In ANNs, the computations are all performed with MAC operations, whereas AC operations are used predominantly in SNNs for synaptic updates. It is worth noting that the membrane potential update of spiking neurons requires several MAC operations. More detailed calculations can be found in Supplementary Materials Section <ref>.
As the theoretical results presented in Table <ref> show, the energy costs of both spiking neuron models (i.e., LIF and LSTM-LIF) are significantly lower than that of the LSTM model, owing to their lower computational complexity. Compared to the LIF model, the proposed LSTM-LIF model incurs an additional energy cost of nFr_outE_AC+nE_MAC due to the extra computation at the dendritic compartment.
To calculate the empirical energy cost, we perform inference on one randomly selected batch of test samples and compute the average layer-wise firing rates of these SNNs on the S-MNIST dataset. The layer-wise firing rates of the LIF and LSTM-LIF models are comparable, taking values of [0.219, 0.145, 0.004] and [0.294, 0.146, 0.030], respectively. To obtain the total energy cost, we base our calculation on the 45nm CMOS process, which has an estimated cost of E_AC=0.9 pJ and E_MAC=4.6 pJ for AC and MAC operations, respectively <cit.>. Despite the more complex internal structure of the proposed LSTM-LIF model, it has a comparable energy cost to the LIF model. Remarkably, our LSTM-LIF model achieves more than 100 times energy savings compared with the LSTM model, while demonstrating better temporal classification performance.
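As a rough illustration, the snippet below evaluates the extra energy the LSTM-LIF model incurs over LIF using the expression nFr_outE_AC + nE_MAC from above; the layer widths are hypothetical placeholders, while the firing rates and per-operation energies are the measured values quoted in the text.

```python
E_AC, E_MAC = 0.9e-12, 4.6e-12          # Joules per AC / MAC operation (45nm CMOS)

def extra_energy_lstm_lif(n_neurons, fr_out):
    # Additional dendritic-compartment cost per layer: n*Fr_out*E_AC + n*E_MAC
    return n_neurons * fr_out * E_AC + n_neurons * E_MAC

layer_sizes = [256, 256, 10]            # hypothetical hidden/output layer widths
firing_rates = [0.294, 0.146, 0.030]    # measured LSTM-LIF firing rates on S-MNIST
extra = sum(extra_energy_lstm_lif(n, fr) for n, fr in zip(layer_sizes, firing_rates))
print(f"extra energy per inference step: {extra:.3e} J")
```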
§ CONCLUSION
In this paper, drawing inspiration from the multi-compartment structure of biological neurons, we proposed a novel two-compartment spiking neuron model to enhance the memory capacity of single-compartment neurons. The dendritic and somatic compartments of the proposed LSTM-LIF model are tailored to retain long-term and short-term memories, respectively. This leads to an improved ability in learning long-term dependencies. Theoretical analysis and experimental results on various temporal classification tasks demonstrate the superiority of the proposed LSTM-LIF model, including exceptional classification capability, rapid training convergence, greater network generalizability, and high energy efficiency. This work, therefore, contributes to the development of more effective and efficient spiking neurons for emerging neuromorphic computing machines. In this work, we focus our study on two-compartment neuron models, while how to generalize the design to multi-compartment neurons, with an even larger number of compartments, remains an interesting question that we will explore in future works.
Supplementary Materials
§ TWO-COMPARTMENT BIOLOGICAL PINSKY-RINZEL NEURON MODEL
In this paper, we utilize the simplified Pinsky-Rinzel (P-R) neuron model proposed by Kepecs and Wang. This model represents a pyramidal cell in the CA3 region with two compartments: the somatic and dendritic compartments. The dendritic compartment is responsible for producing bursting responses, while the soma generates spikes. The somatic compartment is governed by the I_Na and I_K currents, whereas the dendritic compartment is characterized by the slow potassium current I_KS and the persistent sodium current I_NaP. The P-R neuron model consists of several parameters and two coupled compartment equations, mathematically described by:
C_mdV_s/dt=-I_Na-I_K-I_Leak+I_link/P+I_s
C_mdV_d/dt=-I_NaP-I_KS-I_Leak-I_link/1-P+I_d
where V_s and V_d are the somatic and dendritic membrane potentials, and I_s and I_d denote the currents applied to the soma and dendrite, respectively. In this paper, I_s is assumed to be 0, so the dendrite is the only compartment that receives external currents. The membrane capacitance and the proportion of the cell area taken up by the soma are denoted by C_m and P, respectively.
Table <ref> presents the ionic currents appearing in Equations <ref> and <ref>, together with their corresponding expressions.
In these expressions, E_Na, E_K and E_L represent equilibrium potentials, and g_Na, g_K, g_L, g_c, g_NaP and g_KS are conductances.
Starting from Equations <ref> and <ref> in continuous time, the iterative, discrete-time forms are obtained through the Euler method:
V_s[t+1]=V_s[t]+dt/C_m(-I_Na[t]-I_K[t]-I_Leak[t]+I_link[t]/P)
V_d[t+1]=V_d[t]+dt/C_m(-I_NaP[t]-I_KS[t]-I_Leak[t]-I_link[t]/1-P+I_d[t])
The term I_link captures the interaction between the somatic and dendritic membrane potentials. Furthermore, by incorporating a reset operation into the somatic output to turn the neuron into a spiking form, we obtain the overall dynamics of the two-compartment P-R spiking neuron model, as discussed in Section <ref>.
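For reference, a minimal sketch of this Euler update is given below. The gated ionic currents from the table are replaced by trivial leak-like placeholders, and all parameter values (g_c, g_L, E_L, C_m, P, dt) are illustrative rather than the values used in the table.

```python
g_c, g_l, e_l = 2.1, 0.1, -60.0        # placeholder conductances / reversal potential

def i_leak(v):
    return g_l * (v - e_l)

def i_na(v):   return 0.0              # placeholder for the gated somatic currents
def i_k(v):    return 0.0
def i_nap(v):  return 0.0              # placeholder for the gated dendritic currents
def i_ks(v):   return 0.0

def euler_pr_step(v_s, v_d, i_d, dt=0.05, c_m=3.0, p=0.5):
    """One Euler step of the two-compartment equations above (with I_s = 0)."""
    i_link = g_c * (v_d - v_s)
    dv_s = (-i_na(v_s) - i_k(v_s) - i_leak(v_s) + i_link / p) / c_m
    dv_d = (-i_nap(v_d) - i_ks(v_d) - i_leak(v_d) - i_link / (1.0 - p) + i_d) / c_m
    return v_s + dt * dv_s, v_d + dt * dv_d

v_s = v_d = -60.0
for _ in range(200):
    v_s, v_d = euler_pr_step(v_s, v_d, i_d=0.5)
```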
§ EXPERIMENTAL DETAILS
§.§ Datasets
In this subsection, we introduce the dataset used for this work. These datasets cover a wide range of tasks, allowing us to assess the model's capabilities in handling different types of input data.
S-MNIST: The Sequential-MNIST (S-MNIST) dataset is derived from the original MNIST dataset, which consists of 60,000 and 10,000 grayscale images of handwritten digits for training and testing sets with a resolution of 28 × 28 pixels. In the S-MNIST dataset, each image is converted into a vector of 784 time steps, with each pixel representing one input value at a certain time step. This dataset enables us to evaluate the performance of our model in solving sequential image classification tasks.
PS-MNIST: The Permuted Sequential MNIST dataset (PS-MNIST) is a variation of the Sequential MNIST dataset, in which the pixels in each image are shuffled according to a fixed random permutation. This dataset provides a more challenging task than S-MNIST, as the input sequences no longer follow the original spatial order of the images. Therefore, when learning this dataset, the model needs to capture complex, non-local, and long-term dependencies between pixels.
GSC: The Google Speech Commands (GSC) dataset has two versions, and we employ the second version in this work. GSC version 2 is a collection of 105,829 one-second-long audio clips of 35 different spoken commands, such as “yes”, “no”, “up”, “down”, “left”, “right”, etc. These audio clips are recorded by different speakers in various environments, offering a diverse dataset for evaluating the performance of our model.
SHD: The Spiking Heidelberg Digits dataset is a spike-based sequence classification benchmark, consisting of spoken digits from 0 to 9 in both English and German (20 classes). The dataset contains recordings from twelve different speakers, with two of them only appearing in the test set. Each original waveform has been converted into spike trains over 700 input channels. The train set contains 8,332 examples, and the test set consists of 2,088 examples (no validation set). The SHD dataset enables us to evaluate the performance of our proposed model in processing and classifying speech data represented in spiking format.
SSC: The Spiking Speech Command dataset, another spike-based sequence classification benchmark, is derived from the Google Speech Commands version 2 dataset and contains 35 classes from a large number of speakers. The original waveforms have been converted to spike trains over 700 input channels. The dataset is divided into train, validation, and test splits, with 75,466, 9,981, and 20,382 examples, respectively. The SSC dataset allows us to assess the performance of our proposed spiking neuron model in processing and recognizing speech commands represented in spiking data.
§.§ Network architecture
We perform experiments employing both feedforward and recurrent connection configurations. To maintain a fair comparison with existing works, we utilize network architectures exhibiting comparable parameters. These architectures and their corresponding parameters are summarized in Table <ref>.
§.§ LSTM-LIF model hyper-parameters
In this section, we provide the detailed hyper-parameter settings of the LSTM-LIF neuron model in Table <ref>, including γ, the initial values of β, and the neuronal threshold 𝒱_th.
§.§ Training configuration
We train the S-MNIST and PS-MNIST datasets for 200 epochs utilizing the Adam optimizer. Their initial learning rates are set to 0.0005 for both feedforward and recurrent networks with the learning rates decaying by a factor of 10 at epochs 60 and 80. For the GSC, SHD, and SSC datasets, we train the models for 100 epochs using the Adam optimizer.
The initial learning rate for the GSC dataset is 0.001 for both feedforward and recurrent networks, with the learning rate decaying by a factor of 10 at epochs 60, 90, and 120.
The initial learning rate is set to 0.0005, and 0.005 for feedforward and recurrent networks on the SHD dataset, with the learning rate decaying to 0.8 times its previous value every 10 epochs. For the SSC dataset, the initial learning rates are 0.0001 for both feedforward and recurrent networks, and decay to 0.8 times their previous values every 10 epochs. We train S-MNIST, PS-MNIST, and GSC tasks on Nvidia Geforce GTX 3090Ti GPUs with 24GB memory, and train SHD and SSC tasks on Nvidia Geforce GTX 1080Ti GPUs with 12GB memory.
§.§ Source Code
All code to reproduce our results will be released after the review process.
§ STUDY ON ENERGY EFFICIENCY
We formulate the theoretical energy cost for LSTM, LIF, and LSTM-LIF recurrent networks based on their computational dynamics calculations.
Table <ref> presents the detailed calculation of theoretical energy cost for each model.
http://arxiv.org/abs/2307.05117v1 | 20230711085153 | $\ell_p$-Regression in the Arbitrary Partition Model of Communication | ["Yi Li", "Honghao Lin", "David P. Woodruff"] | cs.DS | ["cs.DS", "cs.DC", "cs.LG"] |
ℓ_p-Regression in the Arbitrary Partition Model of Communication
Yi Li
Division of Mathematical Sciences
Nanyang Technological University
Honghao Lin David P. Woodruff
Computer Science Department
Carnegie Mellon University
=====================================================================================================================================================================================================================
We consider the randomized communication complexity of the distributed ℓ_p-regression problem in the coordinator model, for p∈ (0,2]. In this problem, there is a coordinator and s servers. The i-th server receives A^i∈{-M, -M+1, …, M}^n× d and b^i∈{-M, -M+1, …, M}^n and the coordinator would like to find a (1+ε)-approximate solution to min_x∈^d(∑_i A^i)x - (∑_i b^i)_p. Here M ≤poly(nd) for convenience. This model, where the data is additively shared across servers, is commonly referred to as the arbitrary partition model.
We obtain significantly improved bounds for this problem. For p = 2, i.e., least squares regression, we give the first optimal bound of Θ̃(sd^2 + sd/ϵ) bits.
For p ∈ (1,2), we obtain an Õ(sd^2/ε + sd/poly(ε)) upper bound. Notably, for d sufficiently large, our leading-order term depends only linearly on 1/ε rather than quadratically.
We also show communication lower bounds of Ω(sd^2 + sd/ε^2) for p∈ (0,1] and Ω(sd^2 + sd/ε) for p∈ (1,2]. Our bounds considerably improve previous bounds due to (Woodruff et al. COLT, 2013) and (Vempala et al., SODA, 2020).
§ INTRODUCTION
Regression is a lightweight machine learning model used to capture linear dependencies between variables in the presence of noise. In this problem there is a (sometimes implicit) matrix A ∈ℝ^n × d and a vector b ∈ℝ^n and the goal is to find a hyperplane x ∈ℝ^d for which Ax-b is small for some loss function ·, which throughout this paper will be a norm. Here A is known as the design matrix, b the response vector, and x the model parameters. We focus on the over-constrained case, when n ≫ d, which corresponds to having many more examples than features. Although more sophisticated models can often achieve lower error, regression is often the most computationally efficient and the first model of choice.
One of the most popular loss functions is the ℓ_p-norm, or equivalently its p-th power y_p^p = ∑_i=1^n |y_i|^p. When p = 2 this is least squares regression, which corresponds to the maximum likelihood estimator (MLE) in the presence of Gaussian noise. When the noise is more heavy-tailed, often p < 2 is chosen as the loss function since it is more robust to outliers. Indeed, since one is not squaring the differences, the optimal solution pays less attention to large errors. For example, p = 1 gives the MLE for Laplacian noise. While p < 1 results in non-convex loss functions, heuristics are still used given its robustness properties. When p > 2, the loss function is even more sensitive to outliers; it turns out that such p cannot be solved without incurring a polynomial dependence on n in the communication model we study, see below, and so our focus will be on p ≤ 2.
It is often the case that data is either collected or distributed across multiple servers and then a key bottleneck is the communication complexity, i.e., the number of bits transmitted between the servers for solving a problem. We consider the standard coordinator model of communication, also known as the message-passing model, in which there is a site designated as the coordinator who has no input, together with s additional sites, each receiving an input. There is a communication channel between the coordinator and each other server, and all communication goes through the coordinator. This model is convenient since it captures arbitrary point-to-point communication up to small factors, i.e., if server i wants to send a message to server j, server i can first send the message to the coordinator and then have it forwarded to server j. We note that in addition to the total communication, it is often desirable to minimize the time complexity on each server, and the protocols in this paper will all be time-efficient.
A natural question in any communication model is how the input is distributed. We study the arbitrary partition model of <cit.>, which was studied for the related task of low rank approximation. In this model, the i-th server receives A^i∈{-M, -M+1, …, M}^n× d and b^i∈{-M, -M+1, …, M}^n and the coordinator would like to find a (1+ε)-approximate solution to min_x∈^d(∑_i A^i)x - (∑_i b^i)_p. Here M ≤poly(nd) for convenience. Note that this model gives more flexibility than the so-called row partition model in which each example and corresponding response variable is held on exactly one server, and which is a special case of the arbitrary partition model. For example, if each row i of A corresponds to an item and each column j to a user and an entry A_i,j corresponds to the number of times user j purchased item i, then it might be that each server t is a different shop where the user could purchase the item, giving a value A^t_i,j, and we are interested in ∑_t= 1^s A^t_i,j, i.e., the matrix which aggregates the purchases across the shops. This communication model is also important for turnstile streaming where arbitrary additive updates are allowed to an underlying vector <cit.>, as low-communication protocols often translate to low memory streaming algorithms, while communication lower bounds often give memory lower bounds in the streaming model. The number of communication rounds often translates to the number of passes in a streaming algorithm. See, e.g., <cit.>, as an example of this connection for low rank approximation. We note that for p > 2, there is an Ω(n^1-2/p) lower bound in the arbitrary partition model even for just estimating the norm of a vector <cit.>, and so we focus on the p < 2 setting.
The communication complexity of approximate regression was first studied in the coordinator model in the row partition model in <cit.>, though their protocols for 1 ≤ p < 2 use Õ(sd^2+γ + d^5 + d^3+p/^2) communication, where Õ(f) suppresses a (log(sdn/)) factor. These bounds were later improved in the coordinator model and in the row partition model in <cit.>, though the bounds are still not optimal, i.e., their lower bounds do not depend on , are suboptimal in terms of s, or hold only for deterministic algorithms. Their upper bounds also crucially exploit the row partition model, and it is unclear how to extend them to the arbitrary partition model. We will substantially improve upon these bounds.
Despite the previous work on understanding the communication complexity of a number of machine learning models (see, e.g., <cit.> and the references therein), perhaps surprisingly for arguably the most basic task of regression, the optimal amount of communication required was previously unknown.
Our Results We obtain a lower bound of Ω(sd^2+sd/ε^2) for p∈(0,1] and a lower bound of Ω(sd^2 + sd/ε) for p∈ (1,2], both of which improve the only known lower bound of Ω̃(d^2 + sd) by <cit.>. We strengthen their d^2 lower bound by a multiplicative factor of s and incorporate the dependence on ε into their sd lower bound.
When p = 2, we obtain an upper bound of Õ(sd^2 + sd/ε) bits, which matches our lower bound up to logarithmic factors. The total runtime of the protocol is O(∑_i nnz(A^i) + s·poly(d/ε)), which is optimal in terms of nnz(A^i). Here for a matrix A, nnz(A) denotes the number of non-zero entries of A. Our results thus largely settle the problem in the case of p = 2.
When p∈ (1,2), we obtain an upper bound of Õ(sd^2/ε + sd/poly(ε)) bits with a runtime of O(∑_i nnz(A^i) ·poly(d/ε) + s ·poly(d/ε)). Note that if the Õ(sd^2/ε) term dominates, then our upper bound is optimal up to a 1/ε factor due to our lower bound. Interestingly, this beats a folklore sketching algorithm for which each server sketches their input using a shared matrix of p-stable random variables with Õ(d/ε^2) rows, sends their sketch to the coordinator with Õ(sd^2/ε^2) total communication, and has the coordinator add up the sketches and enumerate over all x to find the best solution (see, e.g., Appendix F.1 of <cit.> for a proof of this for p = 1). Moreover, our algorithm is time-efficient, while the sketching algorithm is not. In fact, any sketch that solves the harder problem of computing an ℓ_p-subspace embedding requires poly(d) distortion <cit.> or has an exponential dependence on 1/ε <cit.>. We further show that if the leverage scores of [A b] are uniformly small, namely, at most poly(ε)/d^4/p, then our runtime can be improved to O(∑_i nnz(A^i) + s ·poly(d/ε)), which is now optimal in terms of nnz(A), with the same amount of communication. Along the way we prove a result on embedding d-dimensional subspaces of ℓ_p^n into ℓ_r for 1 < r < p, which may be of independent interest.
Open Problems
We leave several intriguing questions for future work.
First, it would be good to close the gap in our upper and lower bounds as a function of for p < 2. For 1 < p < 2, if (1/) < d then our bounds are off by a 1/ factor, namely, our upper bound is Õ(sd^2/), but our lower bound is Ω(sd^2).
Second, the term in our runtime in general has a multiplicative factor of d/(). This is mainly due to the use of a dense matrix for the lopsided subspace embedding of ℓ_p^n into ℓ_r, and it is interesting to see whether there are sparse lopsided subspace embeddings of ℓ_p^n into ℓ_r.
§.§ Our Techniques
Lower Bounds
We first demonstrate how to show an Ω(sd/^2) lower bound for p∈ (0,1] and an Ω(sd/) lower bound for p∈ (1,2].
Let us first consider the special case of d = 1.
Consider the ℓ_p regression problem min_x ∈a · x - b_p, where a and b are uniformly drawn from {-1, 1}^n. The crucial observation is that the solution x reveals the Hamming distance Δ(a,b). Specifically, when n = Θ(1/^2), a (1 ±)-solution when 0 < p ≤ 1 and (1 ±^2)-solution when 1 < p ≤ 2 suffice for us to solve the Gap-Hamming communication problem () of a and b (determining Δ(a, b) ≥ c √(n) or Δ(a, b) ≤ -c √(n)). The problem has an Ω(n) information cost lower bound <cit.>, which implies, by our choice of n, an Ω(1/^2) lower bound for p∈ (0,1] and an Ω(1/) lower bound for p∈ (1,2].
To gain the factor of s, we design a distributed version of , the s-GAP problem, as follows. There are 2s players. Each of the first s players holds a vector a^i ∈{-1, 1}^n and each of the remaining players holds a b^i ∈{-1, 1}^n, with the guarantee that ∑_i a^i = a and ∑_i b^i = b. The 2s players and the coordinator will collectively determine the two cases of Δ(a, b). Our goal is to show an Ω(sn) lower bound for this communication problem. To this end, we employ the symmetrization technique that was used in <cit.>. Specifically, Alice simulates a random player and Bob the remaining s - 1 players. As such, Bob will immediately know the whole vector b and part of the vector a (denote the set of these indices by I). As we will show in the proof, to determine the distance Δ(a, b), Alice and Bob still need to approximately determine Δ(a_I^c, b_I^c), which requires Ω(|I^c|) = Ω(n) communication. Note that the input distribution of each player is the same and Alice is choosing a random player. Hence, Alice's expected communication to Bob is at most O(χ/s) bits if s-GAP can be solved using χ bits of communication, which yields a lower bound of Ω(sn) bits for the s-GAP problem.
So far we have finished the proof for d=1. To obtain a lower bound for general d, we use a padding trick. Consider A = (a_1,…,a_d) and let b be the vertical concatenation of b_1,…,b_d, where each pair (a_i,b_i) is drawn independently from the hard distribution for d=1. One can immediately observe that min_x Ax-b_p^p = ∑_i min_x_ia_ix_i-b_p^p and show that approximately solving min_x Ax-b_p^p can approximately solve a constant fraction of the d subproblems min_x_ia_ix_i-b_p^p. This further adds an O(d) factor to the lower bound.
Next we discuss the Ω(sd^2) lower bound. We shall follow the idea of <cit.> and construct a set of matrices ℋ⊆{-1, 1}^d × d with a vector b ∈ℝ^d such that (i) A is non-singular for all A ∈ℋ, (ii) A^-1 b B^-1b for all A, B ∈ℋ and A≠ B and (iii) ℋ = 2^Ω(d^2). The conditions (i) and (ii) mean that a constant-factor approximation to min_x Ax - b_p^p is exact, from which the index of A in the set ℋ can be inferred. Condition (iii) then implies an Ω(d^2) lower bound for solving the regression problem up to a constant factor. To gain a factor of s, we consider the communication game where the i-th player receives a matrix A^i⊆{-1,1}^d× d with the guarantee that A = ∑_i A^i is distributed in ℋ uniformly. Then the s players with the coordinator want to recover the index of A in ℋ. We consider a similar symmetrization technique. However, the issue here is if Bob simulates s - 1 players, he will immediately know roughly a 1/2 fraction of coordinates of A, which can help him to get the index of A in ℋ. To overcome this, we choose a different strategy where Alice simulates two (randomly chosen) players and Bob simulates the remaining s - 2 players. In this case Bob can only know a 1/4-fraction of the coordinates without communication. However, one new issue here is Bob will know partial information about the remaining coordinates. But, as we shall show in the proof, even when conditioned on Bob's input on s - 2 players, with high probability the entropy of the remaining coordinates is still Ω(d^2). This implies that Alice still needs to send Ω(d^2) bits to Bob, which yields an Ω(sd^2) lower bound for the original problem.
Upper Bounds
For the ℓ_p-regression min_xAx-b_p, a classical “sketch-and-solve” approach is to use a (1+)-subspace embedding S for B = [A b]∈^n×(d+1) and reduce the problem to solving min_x SAx-Sb_p, which is of much smaller size. The subspace embedding is non-oblivious and obtained by subsampling Õ(d/^2) rows of B with respect to the Lewis weights of B <cit.>. More recently, it was shown that sampling Õ(d/) rows according to the Lewis weights is sufficient for solving ℓ_p-regression <cit.>, instead of Õ(d/^2) rows needed for an ℓ_p-subspace embedding. However, computing the Lewis weights is expensive and would incur a communication cost as well as a runtime at least linear in n, which is prohibitive in our setting.
Instead of embedding an ℓ_p-subspace into ℓ_p, we (1+)-embed an ℓ_p-subspace into ℓ_r for some 1 < r < p. Furthermore, since we are solving a regression problem, we do not need a conventional subspace embedding but only a lopsided one; that is, the map S must not contract Ax-b_p for all x simultaneously but it is required not to dilate Ax^∗-b_p for only the optimal solution x^∗. We show that an S of i.i.d. p-stable variables and O(dlog d/()) rows suffices (see Lemma <ref> for the formal statement). Such a lopsided subspace embedding for embedding a subspace of ℓ_p^n into ℓ_r, to the best of our knowledge, has not appeared in the literature[We note that the works of <cit.> consider embedding the entire space ℓ_p^n into ℓ_r instead of embedding a low-dimensional subspace of ℓ_p^n into ℓ_r.] and may be of independent interest. This lopsided subspace embedding reduces the ℓ_p regression problem to an ℓ_r-regression problem of Õ(d/()) rows. Importantly though, we do not need to ever explicitly communicate these rows in their entirety. Namely, we can leave the regression problem in an implicit form and now run a Lewis weight approximation algorithm, and since our effective n has been replaced with d/(), we just need d/() communication to iteratively update each of the weights in the Lewis weight algorithm, rather than n communication.
For the ℓ_2-regression problem, it is known that a (1+√())-subspace embedding can yield a (1+)-approximate solution (see, <cit.>, also the [Woo14] reference therein) and so the subspace embedding S needs only to have O(d (log d) /) rows. The servers then run gradient descent on the sketched version min_xSAx-Sb_2. To ensure fast convergence in O(log(1/)) iterations, the servers will instead solve min_xSARx-Sb_2, where R is a pre-conditioner to make SAR have a constant condition number. Putting these pieces together leads to our near-optimal communication and runtime.
§ PRELIMINARIES
ℓ_2 Subspace Embeddings.
For a matrix A∈^n× d, we say a matrix S∈^m× n is a (1±ε)-ℓ_2 subspace embedding for the column span of A if (1-ε)Ax_2≤SAx_2 ≤ (1+ε)Ax_2 for all x∈^d with probability at least 1 - δ.
We summarize the subspace embeddings we use in this paper below:
* CountSketch: m = O(d^2/(δε^2)) with s = 1 non-zero entry per column, with each non-zero entry in {-1, 1} <cit.>. Computing SA takes only O(nnz(A)) time.
* OSNAP: m=O((d log (d / δ)) /ε^2) and has s=O((log (d / δ))/ε) non-zeros per column, with each non-zero entry in {-1, 1} <cit.>. Computing SA takes O(s ·nnz(A)) = O(nnz(A)(log (d/δ)/ε)) time.
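As an illustration of the first construction, the sketch below applies a CountSketch-style matrix to A in a single pass over its rows; for a sparse A one would iterate only over the non-zero entries, and the target dimension m is chosen by the caller.

```python
import numpy as np

def countsketch_apply(A, m, seed=0):
    """Compute S A for a CountSketch S with one random +-1 entry per column."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    target_rows = rng.integers(0, m, size=n)         # hash each input row to a sketch row
    signs = rng.choice([-1.0, 1.0], size=n)          # random sign per input row
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, target_rows, signs[:, None] * A)   # scatter-add signed rows
    return SA

A = np.random.randn(100000, 20)
SA = countsketch_apply(A, m=4000)
print(SA.shape)
```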
p-stable Distributions.
Our protocol for distributed ℓ_p regression will use p-stable distributions, which are defined below.
For 0 < p < 2, there exists a probability distribution 𝒟_p called the p-stable distribution, which satisfies the following property. For any positive integer n and vector x ∈^n, if Z_1, …, Z_n ∼𝒟_p are independent, then ∑_j=1^n Z_jx_j ∼x_pZ for Z ∼𝒟_p.
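A standard way to generate such variables is the Chambers–Mallows–Stuck method; the sketch below samples standard symmetric p-stable variables and empirically checks the stability property above (the vector x and the value of p are arbitrary).

```python
import numpy as np

def sample_p_stable(p, size, rng):
    """Standard symmetric p-stable samples via Chambers-Mallows-Stuck (0 < p <= 2)."""
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(p * theta) / np.cos(theta) ** (1.0 / p)
            * (np.cos((1.0 - p) * theta) / w) ** ((1.0 - p) / p))

rng = np.random.default_rng(0)
p, x = 1.5, np.array([3.0, -1.0, 2.0])
Z = sample_p_stable(p, (200000, x.size), rng)
lhs = Z @ x                                             # sum_j Z_j x_j
rhs = np.linalg.norm(x, p) * sample_p_stable(p, 200000, rng)
print(np.median(np.abs(lhs)), np.median(np.abs(rhs)))   # medians should roughly agree
```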
Lewis Weights. Below we recall some facts about Lewis weights. For more details, we refer the readers to, e.g., <cit.>.
Given a matrix A ∈ℝ^n × d. The leverage score of a row A_i, * is defined to be
τ_i(A) = A_i, * (A^TA)^† (A_i, *)^T.
For a matrix A ∈ℝ^n × d, its ℓ_p-Lewis weights
{w_i}_i=1^n are the unique weights such that
w_i = τ_i(W^1/2 - 1/p A) for each i ∈ [n].
Here τ_i is the leverage score of the i-th row of a matrix
and
W is the diagonal matrix whose diagonal entries are w_1,…,w_n.
The Lewis weights are used in the construction of randomized ℓ_p-subspace embeddings. In particular, the rescaled sampling matrix w.r.t. Lewis weights gives an ℓ_p-subspace embedding.
Given p_1,…,p_n∈ [0,1] and p≥ 1, the rescaled sampling matrix S with respect to p_1,…,p_n is a random matrix formed by deleting all zero rows from a random n× n diagonal matrix D in which D_i,i = p_i^-1/p with probability p_i and D_i,i = 0 with probability 1-p_i.
Let A∈^n× d and p≥ 1. Choose an oversampling parameter β = Θ(log(d/δ)/ε^2) and sampling probabilities p_1,…, p_n such that min{β w_i(A),1}≤ p_i≤ 1 and let S be the rescaled sampling matrix with respect to p_1,…,p_n. Then it holds with probability at least 1-δ that (1-ε)Ax_p≤SAx_p ≤ (1+ε)Ax_p for all x∈^d (i.e., S is an ε-subspace embedding for A in the ℓ_p-norm) and S has O(β∑_i w_i(A)) = O(β d) rows.
<cit.> give an iterative algorithm (Algorithm <ref>) which computes the Lewis weights time-efficiently for p < 4.
Suppose that p<4 and β = Θ(1). After T = loglog (n) iterations in Algorithm <ref>, w is a constant approximation to the ℓ_p Lewis weights.
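A minimal dense implementation of this style of iteration is sketched below, assuming the standard fixed-point update w_i ← (a_i^T (A^T W^{1-2/p} A)^{-1} a_i)^{p/2}; Algorithm <ref> may differ in its initialization, stopping rule, and use of fast leverage-score approximation.

```python
import numpy as np

def approx_lewis_weights(A, p, n_iter=30):
    n, d = A.shape
    w = np.ones(n)
    for _ in range(n_iter):
        M = A.T @ (w[:, None] ** (1.0 - 2.0 / p) * A)      # A^T W^{1-2/p} A
        Minv = np.linalg.pinv(M)
        quad = np.einsum('ij,jk,ik->i', A, Minv, A)        # a_i^T M^{-1} a_i
        w = np.clip(quad, 0.0, None) ** (p / 2.0)
    return w

A = np.random.randn(5000, 10)
w = approx_lewis_weights(A, p=1.5)
print(w.sum())   # Lewis weights sum to (approximately) d = 10
```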
§ DISTRIBUTED ℓ_P-REGRESSION LOWER BOUND
We consider the following variant of the Gap-Hamming problem ().
Gap-Hamming Problem. In the Gap-Hamming problem (_n, c), Alice and Bob receive binary strings x and y, respectively, which are uniformly sampled from {-1,1}^n.
They wish to decide which of the following two cases Δ(x, y) = ∑_i=1^n x_i y_i falls in: Δ(x, y) ≥ c√(n) or Δ(x, y) ≤ -c√(n), where c is a constant. (If Δ(x, y) is between - c√(n) and c√(n), an arbitrary output is allowed.)
If there is a protocol Π which solves _n, c with large constant probability, then we have I(x, y ; Π) = Ω(n), where I denotes mutual information and the constant hidden in the Ω-notation depends on c.
§.§ s-GAP problem
In this section, we will define the s-GAP problem and then prove an Ω(sn) lower bound.
In the s-GAP problem, there are 2s players, where for the first s players, the i-th player receives an n-bit string a^i ∈{-1, 1}^n, and for the remaining s players, the i-th player receives an n-bits string b^i ∈{-1, 1}^n, with the guarantee that a = ∑_i a^i ∈{-1,1}^n, b = ∑_i b^i ∈{-1, 1}^n and Δ(a, b) ∈ [-c_2√(n), c_2√(n)]. The 2s players want to determine if Δ(a, b) ≥ c_1√(n) or Δ(a, b) ≤ -c_1√(n). Here c_1<c_2 are both constants. (Similarly, if Δ(a, b) is between - c_1√(n) and c_1√(n), an arbitrary output is allowed).
To prove the Ω(sn) lower bound, we use a similar symmetrization argument as in <cit.> and reduce to the Gap-Hamming problem. For the reduction, we consider s = 4t + 2 for simplicity, and without loss of generality by padding, and consider the following distribution μ for the inputs a_i^j for players j = 1, 2, …, 2t + 1.
Choose a uniformly random vector a∈{-1,1}^n. For each i, if a_i = 1, we place (t + 1) bits of 1 and t bits of -1 randomly among the 2t + 1 players in this coordinate; if a_i = -1, we place t bits of 1 and (t +1) bits of -1 randomly among the 2t + 1 players. We remark that under this distribution, each player’s inputs are drawn from the same distribution, and each coordinate of each player is 1 with probability 1/2 and -1 with probability 1/2. The distribution of b_i^j is the same as that of a_i^j for players j = 2t + 2, …, 4t + 2.
Any protocol that solves the s-GAP problem with large constant probability requires Ω(sn) bits of communication.
We reduce the s-GAP problem to the Gap-Hamming problem using a similar symmetrization argument to that in <cit.>. Alice picks a random number i ∈ [2t + 1] uniformly and simulates the i-th player. Bob simulates the remaining s - 1 players. We shall show that if there is an s-player protocol solving the s-GAP problem, then the coordinator will be able to solve the Gap-Hamming problem on a constant fraction of the coordinates of the input vectors a and b, which requires Ω(n) bits of communication. Note that the input distribution of each player is the same and Alice is choosing a random player. Hence, Alice's expected communication to Bob is at most O(χ/s) bits if the s-GAP problem can be solved using χ bits of communication, which yields a lower bound of Ω(sn) bits for the s-GAP problem.
We first consider Bob's information when he simulates s - 1 players. He knows each coordinate of b directly. Consider a coordinate of a. If the sum of Bob's s - 1 bits on this coordinate is 2 or -2, then he knows Alice's bit on this coordinate immediately, as their sum should be 1 or -1; while if Bob's sum is 0, he has zero information about Alice's bit on this coordinate. By a simple computation, we obtain that Bob's sum is 2 or -2 with probability t/2t + 1 and is 0 with probability t +1/2t + 1.
From a Chernoff bound, we see that with probability at least 1 - e^-Ω(n), Bob learns at most 3/5n coordinates of a. Let I denote the set of remaining indices. Then |I| ≥2n/5. We will show that Alice and Bob can solve the Gap-Hamming problem on a_I and b_I by simulating the protocol for the s-GAP problem.
Consider Δ(a_J, b_J) for J = [n] ∖ I. With probability at least 99/100, it will be contained in [-c_1√(|J|), c_1 √(|J|)], where c_1 is a sufficiently large absolute constant. Conditioned on this event, we have that whether the distance Δ(a_I, b_I) ≥ c_2 √(|I|) or Δ(a_I, b_I) ≤ -c_2 √(|I|) will decide whether Δ(a, b) ≥ c_3 √(n) or Δ(a, b) ≤ -c_3 √(n), where c_2,c_3>0 are appropriate constants (recall that we have |I| ≥2/5n and |J| ≤3/5n). This means that, by simulating a 2s-player protocol for the s-GAP problem, Alice and Bob can solve the _|I|,c_2 problem on a_I and b_I, which requires Ω(|I|) = Ω(n) bits of communication.
Any protocol that solves m independent copies of the s-GAP problem with high constant probability requires Ω(snm) bits of communication.
Similar to the proof of Theorem <ref>, Alice and Bob in this case need to solve m independent copies of the Gap-Hamming problem. The direct sum theorem <cit.> states that if the information cost of solving a communication problem with probability 2/3 is f, then the information cost of solving m independent copies of the same communication problem simultaneously with probability at least 2/3 is Ω(mf). Since the information cost implies a communication lower bound, it follows from Lemma <ref> and the direct sum theorem that Ω(snm) bits of communication are required.
§.§ Ω(sd/^2) and Ω(sd/) Lower Bounds
In this section, we will show an Ω(sd/^2) lower bound for the ℓ_p-regression problem when 0 < p ≤ 1 and an Ω(sd/) lower bound when 1 < p≤ 2.
For simplicity, we first consider the case of d = 1 and will later extend the result to general d. Consider the same input distribution as in Definition <ref> with n = 1/^2, and for which the 2s players want to compute a (1 + )-approximate solution to the ℓ_p regression problem
_x ∈ax - b_p^p .
In the lemma below, we shall show that using a (1 + )-approximate solution for the ℓ_p-regression problem (<ref>), the players can distinguish the two cases to the s-GAP problem for the vectors a and b, which implies an Ω(s/^2) lower bound. The proof, analogous to that of <cit.>, analyzes an objective of the form r|1 - x|^p + (n - r)|1 + x|^p for r=(n+Δ(a,b))/2.
Suppose that p∈ (0,2], n = Θ(1/^2), and a and b are the vectors drawn from the distribution in Definition <ref>. Let η = when p∈ (0,1] and η = ^2 when p∈ (1,2]. Then, any x̃ such that
ax̃ - b_p^p ≤ (1 + η)min_x ∈ax - b_p^p
can be used to distinguish whether Δ(a, b) ≥ c√(n) or Δ(a, b) ≤ -c√(n), where c is an absolute constant.
Suppose that a_i=b_i for r coordinates i and a_i≠ b_i for n - r coordinates i. The objective function ax-b_p^p can be rewritten as
r·|1 - x|^p + (n - r) · |1 + x|^p .
Case p∈ (0,1). The first observation is that the optimal solution x^* should lie in [-1, 1], otherwise x = 1 or x = -1 will give a lower cost. Next, without loss of generality, we can assume that Δ(a, b) ≥ c√(n), which means that r ≥n/2 + c/2√(n). Following a similar analysis to that in <cit.>, we can now obtain that the optimal solution x^* satisfies x^* > 0 and every x < 0 will lead to ax-b_p^p ≥ (1 + )ax^* - b_p^p. The case where Δ(a, b) ≤ -c√(n) is similar, where the optimal solution x^* satisfies x^* < 0 and every x > 0 will lead to ax-b_p^p ≥ (1 + )ax^* - b_p^p. Hence, using the sign of x and the fact that x is a (1+)-approximate solution, we can distinguish the two cases of Δ(a,b).
Case p = 1. The objective can now be rewritten as
r·|1 - x| + (n - r) · |1 + x| .
Without loss of generality, we assume that Δ(a, b) ≥ c√(n) which means that r ≥n/2 + c/2√(n). The only thing we have to show is that ax-b_p^p ≥ (1 + )ax^* - b_p^p for all x < 0. On the one hand, we have that ax^* - b_p^p ≤a· 1 - b_p^p ≤ n - c√(n). On the other hand, when x < 0, noting that r > n - r, we have that ax - b_p^p ≥a· 0 - b_p^p = n≥ (1+)(n-c√(n)). The last inequality follows from our choice of n = Θ(1/^2). To conclude, when p = 1, we can also distinguish the two cases from the sign of x.
Case p ∈ (1,2). The case of 1 < p < 2 was shown in <cit.>. Similar to their analysis, we can get that (i) when Δ(a, b) ≥ c√(n), the optimal solution x^* satisfies x^* > 0 and any x < 0 will yield ax-b_p^p ≥ (1+2^2)ax^* - b_p^p;
(ii) when Δ(a, b) ≤ -c√(n), the optimal solution x^* satisfies x^* < 0 and
any x > 0 will yield ax-b_p^p ≥ (1+2^2)ax^* - b_p^p.
Hence, we can deduce the sign of x in the two cases, and can distinguish the two cases
when x is a (1+^2)-approximate solution.
Case p = 2.
The optimal solution is x^* = ∑_i a_i b_i/∑_i a_i^2 = ∑_i a_i b_i/n and the corresponding objective value is n - (∑_i a_ib_i)^2/n. When Δ(a, b) ≥ c√(n), the optimal solution x^* > 0 and ax^* - b_2^2 ≤ n - c^2, while for all x < 0, from the property of the quadratic function, we get that ax^* - b_2^2 ≥a·(0) - b_2^2 = n ≥ (1+2^2)(n-c^2) (recall that n ≤ c/(2^2)).
A similar analysis works when Δ(a, b) ≤ -c√(n) and the proof is complete.
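The case analysis above can be checked numerically: the sketch below minimizes r|1-x|^p + (n-r)|1+x|^p on a grid and verifies that the sign of the (near-)optimal x matches the sign of Δ(a,b); the parameter values are arbitrary and this is only an illustration, not part of the proof.

```python
import numpy as np

eps = 0.05
n = int(1 / eps ** 2)                 # n = Theta(1/eps^2)
c = 4
xs = np.linspace(-1.0, 1.0, 4001)
for p in (0.5, 1.0, 1.5, 2.0):
    for delta in (c * int(np.sqrt(n)), -c * int(np.sqrt(n))):
        r = (n + delta) // 2          # number of coordinates with a_i = b_i
        f = r * np.abs(1 - xs) ** p + (n - r) * np.abs(1 + xs) ** p
        x_star = xs[np.argmin(f)]
        assert (x_star > 0) == (delta > 0)   # sign of the minimizer reveals the gap
print("sign of the minimizer matches the sign of Delta(a, b) in all cases")
```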
Combining this lemma with Theorem <ref> yields the desired lower bound for the distributional regression problem with d=1.
Suppose that d = 1 and ε > 0. Then any protocol that computes a (1 + ε)-approximate solution to the s-server distributional ℓ_p-regression problem in the message passing model with high constant probability requires Ω(s/ε^2) bits of communication for p∈ (0,1] and Ω(s/ε) bits of communication for p∈ (1,2].
We now extend the lower bound to general d via a padding argument. Suppose that a_1, a_2, …, a_d and b_1, b_2, …, b_d are d independent samples drawn from the same distribution as defined in Definition <ref> with n = Θ(1/ε^2). We form a matrix A ∈^O(d/ε^2) × d and a vector b ∈^O(d/ε^2) as
A = [ a_1 ; a_2 ; ⋱ ; a_d; ], b = [ b_1; b_2; ⋮; b_d; ] .
It then follows that
min_x ∈^dAx - b_p^p = ∑_i = 1^dmin_x_i ∈a_i x_i - b_i_p^p.
We then make the following observation. If x ∈^d is a (1 + )-approximate solution of min_x Ax - b_p^p, then there must exist
a constant fraction of the indices i ∈ [d] such that x_i is a (1 + O())-approximate solution to the regression problem min_x_i ∈a_i x_i - b_i_p^p (recall that we have the guarantee that Δ(a_i, b_i) ∈ [-c_2√(n), c_2√(n)] for all i, and hence the objective values for each regression problem are within a constant factor). This means that from the signs of these x_i, we can solve a constant fraction of the d independent copies of the s-GAP problem, which implies the following theorem immediately.
Suppose that ε > 1/√(n) for p ∈ (0,1] and ε > 1/n for p ∈ (1,2]. Then any protocol that computes a (1 + ε)-approximate solution to the s-server distributional ℓ_p-regression problem with d columns in the message passing model with large constant probability requires Ω(sd/ε^2) bits of communication for p∈ (0,1] and Ω(sd/ε) bits of communication for p∈ (1,2].
§.§ Ω(sd^2) Lower Bound for p ∈ (0, 2]
In this section, we present an Ω(sd^2) lower bound for 0 < p ≤ 2. We first describe the intuition behind our lower bound. Following <cit.>, we construct a set of matrices ℋ⊆ℝ^d × d with a vector b ∈ℝ^d such that (i) T is non-singular for all T ∈ℋ, and (ii) S^-1 b ≠ T^-1b for all S, T ∈ℋ and S≠ T. Then we uniformly sample a matrix A ∈ℋ and show that we can obtain the index of A in the set ℋ from a constant-factor approximate solution to the regression problem min_x Ax - b_p^p. This will imply an Ω(d^2) lower bound even for s = 2. The construction of ℋ is given in the following lemma.
For every sufficiently large d, there exists a set of matrices ℋ⊆{-1,1}^d × d with |ℋ| = Ω(2^0.49d^2) such that (i)
T is non-singular for all T ∈ℋ, and (ii) for all distinct S, T ∈ℋ, S^-1 e_d ≠ T^-1 e_d, where e_d is the d-th standard basis vector.
We remark that in <cit.>, Lemma <ref> was only shown for the case where t > 1, |ℋ| = Ω(t^1/6d^2) and the matrix entries are integers in [-t, t]. However, using the singularity probability of random matrices in {-1, +1}^d × d and following a similar argument to <cit.>, we can obtain the desired bounds in Lemma <ref>. The detailed proof can be found in Appendix <ref>. Note that the construction procedure of the set is close to random sampling – uniformly sample Ω(2^0.49d^2) matrices and remove a small fraction. This property will be crucial to our proof.
To achieve an Ω(sd^2) lower bound for s players, we consider the same input distribution for the s players in Lemma <ref> and employ a similar symmetrization technique. After sampling a matrix A from ℋ, we construct the inputs of the s players to be matrices in {-1, +1}^d × d whose sum is A. However, if we follow the same argument and let Bob simulate s - 1 = 2t players, in expectation he will know a t/(2t + 1)≈1/2 fraction of the entries of A, and from the construction of the set ℋ we know that there will be only O(1) matrices in ℋ satisfying the conditions on these entries. Hence, Alice only needs to send O(1) bits of information to Bob. To solve this issue, we make the following modification. Instead, we let Alice simulate 2 players, and Bob simulates the remaining s - 2 = 2t - 1 players. In this case, Bob will know roughly a 1/4-fraction of the entries directly; however, for the remaining entries, he will only have partial side information. Roughly speaking, for A_ij, if Bob's sum over the s - 2 players is 1, then with probability roughly 2/3, A_ij is 1; if his sum over the s - 2 players is -1, then with probability roughly 2/3, A_ij is -1. We shall show that even with such side information, with high probability the conditional entropy of the remaining entries of A is still Ω(d^2), which implies that Alice still needs to send Ω(d^2) bits.
Consider the following game of s = 2t + 1 players, where the i-th player receives a d × d matrix A^i ∈{-1,1}^d× d with the guarantee that A = ∑_i A^i is distributed uniformly in ℋ. The s players want to collectively determine the index of the matrix A in ℋ.
Any protocol which solves this problem with large constant probability requires Ω(sd^2) bits of communication.
We first describe the input distribution of each player. Suppose that matrix A has been sampled from ℋ. For each coordinate (i,j), if A_ij = 1, we place (t + 1) bits of 1 and t bits of -1 randomly among the 2t + 1 players' inputs for coordinate j; if A_ij = -1, we place t bits of 1 and t +1 bits of -1. Similarly, under this distribution, each player’s inputs are drawn from the same distribution.
We then use symmetry and let Alice simulate two random players, and Bob simulates the remaining s - 2 = 2t - 1 players. Consider first Bob's information when he simulates 2t - 1 players. Via a simple computation we can get that for each coordinate, with probability t - 1/4t + 2 Bob's sum will be 3 or -3, in which case he will know A_ij immediately. If Bob's sum is 1, he will get that A_ij=1 with probability 2/3 and A_ij=-1 with probability 1/3; if Bob's sum is -1, he will get that A_ij=-1 with probability 2/3 and A_ij=1 with probability 1/3. It follows from a Chernoff bound that with probability 1 - exp(-d^2), Bob obtains the exact information of at most 0.26d^2 coordinates and has partial information about the remaining coordinates. For the remainder of the proof we assume this event happens.
Let 𝒮 denote the subset of ℋ which agrees on the above 0.26d^2 coordinates. From the construction of ℋ we get that with at least constant probability |𝒮| = Ω(2^0.2 d^2). Condition on this event. For simplicity, next we only consider the matrix in 𝒮 and treat it as an ℓ-dimensional vector after removing the known 0.26d^2 coordinates, where ℓ = 0.74d^2. Let Y denote Bob's sum vector. We shall show that the conditional entropy H(A | Y) remains Ω(d^2), and hence by a standard information-theoretic argument, Alice must still send Ω(d^2) bits to Bob to identify the index of the matrix in 𝒮. From this, we get an Ω(sd^2) lower bound on the protocol for the original problem.
By a Chernoff bound, with probability 1 - exp(-d^2), the Hamming distance between A and Y is within 1/3ℓ± 0.01d^2. We condition on this in the remainder of the proof. We now turn to bound the number of matrices in S which have a Hamming distance of 1/3ℓ from Y. For each matrix B, from the construction of ℋ we know that each coordinate of B is the same as the corresponding coordinate of A with probability 1/2. Hence, the probability that B has Hamming distance 2/3ℓ from A is (using Stirling's formula)
ℓ2/3ℓ· 2^-ℓ≃1/ℓ·3^ℓ/2^2/3ℓ· 2^-ℓ = 3^ℓ/ℓ 2^5/3ℓ.
Hence, the expected number of such B is
|𝒮| ·3^ℓ/ℓ 2^5/3ℓ > 2^0.2 d^2·3^ℓ/ℓ 2^5/3ℓ≥ (1.101)^d^2 .
From a Chernoff bound we know that with probability at least 1 - exp(-d^2), the number of B ∈𝒮 for which B has a Hamming distance 1/3ℓ from Y is at least (1.10)^d^2.
We next turn to show that when conditioned on the event above, it is enough to show that the conditional entropy H(A| Y) satisfies H(A | Y) = Ω(d^2) given Bob's vector Y. Let 𝒯 be the subset of ℋ which agrees on the above 0.26d^2 coordinates and having Hamming distance within 1/3ℓ± 0.01d^2. For each matrix T ∈𝒯, define a weight of the matrix T to be w_T = (2/3)^ℓ - u(1/3)^u = (1/3)^ℓ 2^l- u, where u is the Hamming distance between T and Y. It follows from Bayes' Theorem that T is the correct matrix with probability
p_T = w_T/∑_i ∈𝒯 w_i .
For the denominator, we have from the conditioned events that
S = ∑_i ∈𝒯 w_i ≥ (1.10)^d^2·(1/3)^ℓ 2^2/3ℓ - 0.01d^2≥ (0.682)^d^2 .
For the numerator, note that it holds for every i ∈𝒯 that
w_i ≤(1/3)^ℓ 2^2/3ℓ + 0.01d^2≤ (0.629)^d^2.
It follows from the definition of the entropy that
H(A | Y) = ∑_i ∈𝒯 p_i log1/p_i = ∑_i ∈𝒯w_i/SlogS/w_i≥∑_i ∈𝒯w_i/SlogS/(0.629)^d^2
= logS/(0.629)^d^2 = Ω(d^2) ,
which is exactly what we need. The proof is complete.
The following theorem follows immediately from the preceding lemma.
Suppose that 0 < p ≤ 2. Any protocol that computes a constant-factor approximate solution to the s-server distributional ℓ_p-regression problem with d columns in the message passing model with large constant probability requires Ω(sd^2) bits of communication.
§ ℓ_2-REGRESSION UPPER BOUND
In this section, we give an Õ(sd^2 + sd/ε) communication protocol for the distributed ℓ_2-regression problem. We first describe the high-level intuition of our protocol, which is based on the sketching algorithm in <cit.> and the sketching-based pre-conditioning algorithm in <cit.>.
* Let S_1 ∈^O(dlog(d)/ε) × n be a (1 ±√(ε))-subspace embedding. We compute  = S_1A and b̂ = S_1b and then the problem is reduced to solving min_x ∈^dÂx - b̂_2^2.
* Let S_2 ∈^O(d log d) × O(dlog(d)/ε) be a (1 ± 1/2) subspace embedding of S_1A. We compute a QR-decomposition of S_2Â = QR^-1. Then the regression problem is equivalent to solving min_x ∈^dÂRx - b̂_2^2.
* Run a gradient descent algorithm for T = O(log(1/)) iterations. In the t-th iteration, compute the gradient of the objective function at the current solution x_t and perform the update x_t + 1 = x_t - (ÂR)^T(ÂRx_t - b̂).
* Output Rx_T as the solution.
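Before turning to the distributed implementation, the following is a single-machine sketch of these four steps; the sketch sizes, the Gaussian choice of S_2, and the explicitly computed step size are illustrative simplifications (with a sufficiently accurate S_2, the unit step of the update above applies directly).

```python
import numpy as np

def sketched_preconditioned_lsq(A, b, eps=0.1, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape

    # Step 1: CountSketch-style S_1, reducing [A b] to m1 rows in one pass
    m1 = min(n, int(4 * d * d / eps))
    rows, signs = rng.integers(0, m1, n), rng.choice([-1.0, 1.0], n)
    S1Ab = np.zeros((m1, d + 1))
    np.add.at(S1Ab, rows, signs[:, None] * np.hstack([A, b[:, None]]))
    A_hat, b_hat = S1Ab[:, :d], S1Ab[:, d]

    # Step 2: small sketch S_2 of A_hat, used only to form the preconditioner R
    S2 = rng.standard_normal((20 * d, m1)) / np.sqrt(20 * d)
    _, Rfac = np.linalg.qr(S2 @ A_hat)
    R = np.linalg.inv(Rfac)
    AR = A_hat @ R

    # Step 3: gradient descent on min_y ||A_hat R y - b_hat||_2 (well-conditioned)
    eta = 1.0 / np.linalg.norm(AR, 2) ** 2   # conservative step size
    y = np.zeros(d)
    for _ in range(n_iter):
        y = y - eta * (AR.T @ (AR @ y - b_hat))

    # Step 4: undo the change of variables
    return R @ y

A = np.random.randn(20000, 15)
b = A @ np.ones(15) + 0.01 * np.random.randn(20000)
x = sketched_preconditioned_lsq(A, b)
print(np.linalg.norm(A @ x - b))
```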
The protocol is presented in Algorithm <ref>. Initially, each server computes Â^i = Π_2 Π_1 A^i, then computes Π_3 Â^i and sends it to the coordinator. Note that Π_1 is a CountSketch matrix and hence we can compute Π_1 A^i in O(nnz(A^i)) time and then compute Π_2 Π_1 A^i in O(nnz(A^i) + poly(d/ε)) time. The coordinator then computes a QR-decomposition of Π_3 Â = ∑_i Π_3 Â^i. The point is that ÂR will be well-conditioned, which will greatly improve the convergence rate of gradient descent. Then each server will help compute the gradient at the current solution x_t and the coordinator will perform the corresponding update. The following is our theorem.
The protocol in Algorithm <ref> returns a (1 ± ε)-approximate solution to the ℓ_2-regression problem with large constant probability, and the communication complexity is Õ(sd^2 + sd/ε). Moreover, the total runtime of all servers of the protocol is O(∑_i nnz(A^i) + s · poly(d/ε)).
To prove the correctness of Algorithm <ref>, we need the following lemmas. The reader can find more detail in <cit.>.
Suppose that S is a (1 ± √ε)-subspace embedding and x' = argmin_{x ∈ ℝ^d} ‖S(Ax - b)‖_2. Then it holds with large constant probability that
‖Ax' - b‖_2 ≤ (1 + ε) min_{x ∈ ℝ^d} ‖Ax - b‖_2.
Further suppose that x_c is a (1 + ε)-approximate solution to min_{x ∈ ℝ^d} ‖S(Ax - b)‖_2; it then holds that
‖Ax_c - b‖_2 ≤ (1 + ε) min_{x ∈ ℝ^d} ‖Ax - b‖_2.
We remark that the case where x_c is the minimizer was shown by <cit.>, and the case where x_c is a (1 + ε)-approximate solution was recently shown by <cit.>.
Suppose that S is a (1 ± ε_0)-subspace embedding and consider the iterative algorithm above; then
‖ÂRx_{t+1} - x^*‖_2 ≤ ε_0 · ‖ÂRx_t - x^*‖_2.
As a corollary, when t = Ω(log(1/ε)), it holds that ‖ÂRx_t - b̂‖_2^2 ≤ (1 + ε) ‖ÂRx^* - b̂‖_2^2.
Now we are ready to prove Theorem <ref>.
Since Π_1 has O(d^2/ε) rows and Π_2 has O(d log(d)/ε) columns, from Section <ref> we get that with probability at least 99/100, both Π_1 and Π_2 are (1 ± O(√ε))-subspace embeddings, which means Π_2Π_1 is a (1 ± √ε)-subspace embedding.
Let Â = Π_2 Π_1 A and b̂ = Π_2 Π_1 b. From Lemma <ref>, we see that it suffices to solve min_{x ∈ ℝ^d} ‖Âx - b̂‖_2.
Conditioned on these events, it follows immediately from Lemma <ref> that x_T is a (1 ± ε)-approximate solution to min_{x ∈ ℝ^d} ‖Âx - b̂‖_2, provided that each server uses R instead of R̃. To show that R̃ works here, note that an initial step in the proof of Lemma <ref> is that ‖SÂRx‖_2 = 1 for all unit vectors x, which implies that ‖ÂRx‖_2 ∈ [1 - ε_0, 1 + ε_0]. For R̃, we have that
| ‖SÂRx‖_2 - ‖SÂR̃x‖_2 | ≤ ‖SÂ(R - R̃)x‖_2
≤ 2 ‖Â‖_2 ‖(R - R̃)x‖_2
≤ 1/poly(nd).
The last inequality is due to the fact that each entry of R - R̃ is O(1/poly(nd)) and each entry of Â is O(poly(nd)). Hence, ‖ÂRx‖_2 ∈ [1 - 1.1ε_0, 1 + 1.1ε_0] will still hold and a similar argument will go through, yielding that x_T is a (1 ± ε)-approximate solution.
We next analyze the communication complexity of the protocol. For Step 3, since Π_3 Â^i is an O(d log d) × d matrix, each server P_i sends Õ(d^2) entries. Each entry of A^i has magnitude in [1/n^c, n^c], and thus each entry of Π_1 A^i is contained in [1/n^c, n^{c+1}], each entry of Â^i = Π_2 Π_1 A^i is contained in [ε/n^{c+2}, n^{c+3}/ε], and each entry of Π_3Â^i is contained in [ε^2/n^{c+4}, n^{c+5}/ε^2], which implies that each entry of Π_3 Â^i can be described using O(log(n/ε)) bits and thus a total communication of O(sd^2) bits for Step 3. In Step 4, since R̃ is a d × d matrix and each entry is an integer multiple of 1/poly(nd), the coordinator sends R̃ to each server using Õ(sd^2) bits in total. In each iteration of Step 5, we note that y_t is an O(d/ε)-dimensional vector and g_t is a d-dimensional vector, and each of their entries has O(log(nd)) bits of precision. Hence, the total communication of each iteration is Õ(sd/ε). Putting everything together, we conclude that the total amount of communication is Õ(sd^2 + log(1/ε) · (sd/ε)) = Õ(sd^2 + sd/ε) bits.
We now consider the runtime of the protocol. To compute Π_2 Π_1 A^i, notice that Π_1 is a sparse sketching matrix, and hence each server takes nnz(A^i) time to compute Π_1 A^i and then poly(d/ε) time to compute Π_2(Π_1 A^i). Hence, Step 2 takes O(∑_i nnz(A^i)) time. For the remaining steps, one can verify that each step takes poly(d/ε) time on a single server or on the coordinator. The total runtime is therefore O(∑_i nnz(A^i) + s · poly(d/ε)).
§ ℓ_P-REGRESSION UPPER BOUND
In this section, we give an Õ(sd^2/ε + sd/ε^{O(1)}) communication protocol for the distributed ℓ_p-regression problem when 1 < p < 2. We first describe the high-level intuition of our protocol.
* Let T ∈ ℝ^{O(d(log d)/ε^{O(1)}) × n} be a sketch matrix whose entries are scaled i.i.d. p-stable random variables. We compute Â = TA and b̂ = Tb, and then the problem is reduced to solving min_{x ∈ ℝ^d} ‖Âx - b̂‖_r.
* Run Algorithm <ref> to obtain a constant approximation of the ℓ_r Lewis weights w of [Â b̂].
* Sample O(d/ε) rows of Â and b̂ proportionally to w, and form the new matrix A' and vector b'.
* Solve x = argmin_{x ∈ ℝ^d} ‖A'x - b'‖_r and output x.
The protocol is shown in Algorithm <ref>. To show its correctness, we first analyze ℓ_p-to-ℓ_r embeddings and the algorithm for solving the ℓ_p-regression problem using Lewis weight sampling.
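As a concrete illustration of Step 1, the snippet below draws a p-stable sketch with SciPy and reduces the ℓ_p problem to a smaller ℓ_r problem, which is then handed to a generic optimizer. The row count, the scaling, and the black-box solver are placeholders for exposition only; the lemma below prescribes the actual choice of m and α_{p,r}, and the real protocol continues with Lewis-weight sampling rather than solving the sketched problem directly.

import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import minimize

def q_norm_residual(x, A, b, q):
    return np.sum(np.abs(A @ x - b) ** q)

def sketched_lp_regression(A, b, p=1.5, r=1.2, m=None, seed=0):
    # Reduce min_x ||Ax - b||_p to a smaller ell_r problem via a p-stable sketch.
    n, d = A.shape
    m = m or 20 * d                                    # illustrative row count
    # Symmetric p-stable entries (alpha = p, beta = 0), scaled by 1/m^{1/r};
    # the constant alpha_{p,r} from the lemma is dropped here for simplicity.
    T = levy_stable.rvs(alpha=p, beta=0, size=(m, n), random_state=seed) / m ** (1.0 / r)
    A_hat, b_hat = T @ A, T @ b
    x0 = np.linalg.lstsq(A_hat, b_hat, rcond=None)[0]  # warm start
    res = minimize(q_norm_residual, x0, args=(A_hat, b_hat, r), method="Powell")
    return res.x

rng = np.random.default_rng(2)
A = rng.standard_normal((5000, 6))
b = A @ rng.standard_normal(6) + 0.1 * rng.standard_normal(5000)
x_hat = sketched_lp_regression(A, b, p=1.5, r=1.2)
print(q_norm_residual(x_hat, A, b, 1.5) ** (1 / 1.5))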
p-stable distribution. The best known (1 ± ε) ℓ_p subspace embeddings require an exponential number of rows for a p-stable sketch. However, as we will show in the following lemma, for 1 < r < p, Õ(d/ε^{O(1)}) rows are enough to give a (1 ± ε) (lopsided) embedding from ℓ_p to ℓ_r, which is sufficient for the regression problem.
Suppose that p > r > 1 are constants, and T ∈ ℝ^{m × n} is a matrix whose entries are i.i.d. p-stable random variables scaled by 1/(m^{1/r} · α_{p,r}), where α_{p,r} is a constant depending on p and r only. For m = d log d/ε^{C(p,r)}, where C(p,r) is a constant depending on p and r only, it holds for any given matrix A ∈ ℝ^{n × d} that
* (dilation) for each x ∈ ℝ^d, ‖TAx‖_r ≤ (1 + ε) ‖Ax‖_p with large constant probability;
* (contraction) ‖TAx‖_r ≥ (1 - ε) ‖Ax‖_p for all x ∈ ℝ^d simultaneously with high probability.
Furthermore, the entries of T can be rounded to the nearest integer multiples of 1/poly(nd) and the same guarantees still hold.
To prove the lemma, we need the following results.
Suppose that α ∈ ℝ^d and θ ∈ ℝ^d is a vector whose entries are i.i.d. p-stable variables. Then it holds that
(𝔼[ |∑_i α_i θ_i|^r ])^{1/r} = α_{p,r} (∑_i |α_i|^p)^{1/p},
where α_{p,r} is a constant that only depends on p and r.
Suppose that r, s ≥ 1 and X is a random variable with 𝔼|X|^{rs} < ∞. It holds that
𝔼[ | |X|^r - 𝔼|X|^r |^s ] ≤ 2^s 𝔼|X|^{rs}.
We have that
𝔼[ | |X|^r - 𝔼|X|^r |^s ] ≤ 2^{s-1}( 𝔼[(|X|^r)^s] + (𝔼|X|^r)^s )
≤ 2^{s-1} (𝔼|X|^{rs} + (𝔼|X|^r)^s)
≤ 2^{s-1}(𝔼|X|^{rs} + 𝔼|X|^{rs})
= 2^s 𝔼|X|^{rs}.
Suppose that 1 ≤ r ≤ 2.
Let X_1, …, X_n be independent zero-mean random variables with 𝔼[|X_i|^r] < ∞. Then we have that
𝔼[ |∑_{i=1}^n X_i|^r ] ≤ 2 ∑_{i=1}^n 𝔼[|X_i|^r].
Suppose that p ∈ (1,2) is a constant and T ∈ ℝ^{m × n} is a matrix whose entries are i.i.d. p-stable entries scaled by 1/(α_p · m^{1/p}). For m = d log d/ε^{O(1)}, given any A ∈ ℝ^{n × d}, it holds with large constant probability that for all x ∈ ℝ^d,
‖TAx‖_p ≤ poly(d) ‖Ax‖_p.
We note that Lemma <ref> was shown in <cit.> for p = 1. For 1 < p < 2, a similar argument still goes through after replacing the ℓ_1 well-conditioned basis with an ℓ_p well-conditioned basis.
First we consider the original T without rounding the entries.
Now we show (1). Let y = Ax.
From properties of p-stable random variables, we get that each (Ty)_i follows the same distribution. From Lemma <ref> we have that for every i, 𝔼|(Ty)_i|^r = (α_{p,r}^r/(α_{p,r}^r · m)) ‖y‖_p^r = (1/m) ‖y‖_p^r. To get concentration, we pick an r' ∈ (r, p) and consider the (r'/r)-th moment of |(Ty)_i|^r.
Similar to Lemma <ref>, we have that 𝔼[|(Ty)_i|^{r'}] = (β_{p,r,r'}/m^{r'/r}) ‖y‖_p^{r'} is bounded, where β_{p,r,r'} is a constant depending on p, r, r' only. Let S = ∑_i |(Ty)_i|^r, so that 𝔼[S] = ‖y‖_p^r. Consider the (r'/r)-th moment of S. We then have
𝔼[ |S - 𝔼[S]|^{r'/r} ] = 𝔼[ |∑_i (|(Ty)_i|^r - (1/m)‖y‖_p^r)|^{r'/r} ]
≤ 2 ∑_i 𝔼[ | |(Ty)_i|^r - (1/m)‖y‖_p^r |^{r'/r} ]    (Lemma <ref>)
≤ 2^{r'/r + 1} ∑_i 𝔼[ |(Ty)_i|^{r'} ]    (Proposition <ref>)
≤ C ∑_i (1/m^{r'/r}) ‖y‖_p^{r'}
= C ‖y‖_p^{r'} / m^{r'/r - 1},
where C is a constant that depends only on r, r', and p. By Markov's inequality, we have that
Pr[ |S - 𝔼[S]| ≥ ε 𝔼[S] ] ≤ Pr[ |S - 𝔼[S]|^{r'/r} ≥ (ε 𝔼[S])^{r'/r} ]
≤ 𝔼[ |S - 𝔼[S]|^{r'/r} ] / (ε^{r'/r} ‖y‖_p^{r'})
≤ C / (ε^{r'/r} m^{r'/r - 1}).
Hence, we can see that when m = Ω(1/ε^{r'/(r'-r)}) = 1/ε^{Ω(1)}, | ‖Ty‖_r - ‖y‖_p | ≤ ε ‖y‖_p holds with large constant probability.
We next prove (2). We first show that for every x ∈ ℝ^d, ‖Ty‖_r^r ≥ (1 - ε) ‖y‖_p^r holds with probability at least 1 - exp(-d log(d)/ε^{O(1)}). Recall that 𝔼|(Ty)_i|^r = (1/m)‖y‖_p^r for every i. Fix k = 1/ε^{O(1)}. Let
s_i = |(Ty)_{(i-1)k+1}|^r + |(Ty)_{(i-1)k+2}|^r + ⋯ + |(Ty)_{ik}|^r    (1 ≤ i ≤ m/k).
We then have ‖Ty‖_r^r = ∑_i s_i. Similar to (1), one can show that for each i, with large constant probability,
| s_i - (k/m)‖y‖_p^r | ≤ ε (k/m)‖y‖_p^r.
By a Chernoff bound, with probability at least 1 - exp(-d/ε^{Ω(1)}), at least a (1 - ε)-fraction of the s_i satisfy (<ref>). Conditioned on this event, it holds that
‖Ty‖_r^r = ∑_i s_i ≥ (m/k)(1 - ε) · (k/m)‖y‖_p^r = (1 - ε) ‖y‖_p^r,
which is what we need.
The next step is a standard net argument. Let 𝒮 = {Ax : x ∈ ℝ^d, ‖Ax‖_p = 1} be the unit ℓ_p-ball and 𝒩 be a γ-net with γ = poly(ε/d) under the ℓ_p distance. It is a standard fact that the size of 𝒩 can be (poly(d/ε))^d. By a union bound, we have that ‖TAx‖_r ≥ (1 - ε) ‖Ax‖_p = (1 - ε) for all Ax ∈ 𝒩 simultaneously with probability at least 9/10. From Lemma <ref>, we have that with probability at least 9/10, ‖TAx‖_p ≤ poly(d) ‖Ax‖_p for all x ∈ ℝ^d. Conditioned on these events, we then have for all x ∈ ℝ^d,
‖TAx‖_r ≤ m^{1/r - 1/p} ‖TAx‖_p ≤ poly(d/ε) ‖Ax‖_p.
Then, for each y = Ax ∈𝒮, we choose a sequence of points y_0,y_1,…∈𝒮 as follows.
* Choose y_0 ∈ 𝒮 such that ‖y - y_0‖_p ≤ γ and let α_0 = 1;
* After choosing y_0, y_1, …, y_i, we choose y_{i+1} such that
‖ (y - α_0 y_0 - α_1 y_1 - ⋯ - α_i y_i)/α_{i+1} - y_{i+1} ‖_p ≤ γ,
where α_{i+1} = ‖y - α_0 y_0 - α_1 y_1 - ⋯ - α_i y_i‖_p.
The choice of y_{i+1} means that
α_{i+2} = ‖y - α_0 y_0 - α_1 y_1 - ⋯ - α_i y_i - α_{i+1} y_{i+1}‖_p ≤ α_{i+1} γ.
A simple induction yields that α_i ≤ γ^i. Hence
y = y_0 + ∑_{i ≥ 1} α_i y_i,    |α_i| ≤ γ^i.
Suppose that y_i = Ax_i. We have
‖TAx‖_r ≥ ‖TAx_0‖_r - ∑_{i ≥ 1} γ^i ‖TAx_i‖_r ≥ (1 - ε) - ∑_{i ≥ 1} γ^i · poly(d/ε) = 1 - O(ε).
Rescaling ε, we obtain that ‖TAx‖_r^r ≥ (1 - ε) ‖Ax‖_p^r for all x ∈ ℝ^d simultaneously.
This completes the proof of the two guarantees for the original T, without rounding the entries. To show that the guarantees continue to hold after rounding the entries, we only need to notice that
| ‖T̃Ax‖_r - ‖TAx‖_r | ≤ ‖(T̃ - T)Ax‖_r
≤ m^{1/r - 1/2} ‖(T̃ - T)Ax‖_2
≤ m^{1/r - 1/2} ‖T̃ - T‖_2 ‖Ax‖_2
≤ (1/poly(nd)) ‖Ax‖_p.
Lewis Weight Sampling. It is known that sampling Õ(d/ε^2) rows with respect to the ℓ_p Lewis weights gives an ℓ_p subspace embedding with large constant probability when p ∈ [1,2] <cit.>.
In the following lemma, we shall show that for ℓ_p-regression, sampling Õ(d/ε) rows is enough.
Let A ∈ ℝ^{n × d}, b ∈ ℝ^n and p ∈ (1,2). Suppose that S is a rescaled sampling matrix according to w_i([A b]) with oversampling factor β = Θ(ε^{-1} log^2 d log n log(1/δ)) and x̃ = argmin_{x ∈ ℝ^d} ‖SAx - Sb‖_p. With probability at least 1 - δ, it holds that
‖Ax̃ - b‖_p ≤ (1 + ε) min_{x ∈ ℝ^d} ‖Ax - b‖_p,
and the number of rows that S samples is
O(ε^{-1} d log^2 d log n log(1/δ)).
The proof of the lemma closely follows the proof in <cit.> and is postponed to Appendix <ref>.
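For intuition, here is a small NumPy sketch of the standard fixed-point iteration for approximating ℓ_r Lewis weights and of sampling rows of [A b] proportionally to them. The iteration count and the oversampling factor are illustrative; the lemma above gives the factor the analysis actually requires, and the distributed protocol estimates the leverage scores with sketches instead of forming A^⊤WA exactly.

import numpy as np

def lewis_weights(A, r, iters=20):
    # Fixed-point iteration w_i <- (a_i^T (A^T W^{1-2/r} A)^{-1} a_i)^{r/2}.
    n, d = A.shape
    w = np.ones(n)
    for _ in range(iters):
        scaled = A * (w ** (1.0 - 2.0 / r))[:, None]
        M = A.T @ scaled                               # A^T W^{1-2/r} A
        G = np.linalg.solve(M, A.T)                    # M^{-1} A^T
        quad = np.einsum("ij,ji->i", A, G)             # a_i^T M^{-1} a_i
        w = np.clip(quad, 1e-12, None) ** (r / 2.0)
    return w

def lewis_sample(A, b, r, oversample=10.0, seed=0):
    # Keep row i with probability ~ oversample * w_i and rescale for the ell_r norm.
    rng = np.random.default_rng(seed)
    w = lewis_weights(np.hstack([A, b[:, None]]), r)
    probs = np.minimum(oversample * w, 1.0)            # the Lewis weights sum to about d + 1
    keep = rng.random(A.shape[0]) < probs
    scale = (1.0 / probs[keep]) ** (1.0 / r)
    return A[keep] * scale[:, None], b[keep] * scale

rng = np.random.default_rng(3)
A = rng.standard_normal((4000, 8))
b = rng.standard_normal(4000)
A_s, b_s = lewis_sample(A, b, r=1.2)
print(A_s.shape)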
We are now ready to prove our theorem for distributed ℓ_p-regression.
The protocol described in Figure <ref> returns a (1 ± ε)-approximate solution to the ℓ_p-regression problem with large constant probability. The communication complexity is Õ(sd^2/ε + sd/ε^{O(1)}) and the total runtime of all servers is O((∑_i nnz(A^i)) · (d/ε^{O(1)}) + s · poly(d/ε)).
By Lemma <ref>(1), it holds with high constant probability that
min_{x ∈ ℝ^d} ‖T(Ax - b)‖_r ≤ (1 + ε) min_{x ∈ ℝ^d} ‖Ax - b‖_p.
Suppose that x' ∈ ℝ^d is a (1 + ε)-approximate solution to min_{x ∈ ℝ^d} ‖T(Ax - b)‖_r, i.e.,
‖T(Ax' - b)‖_r ≤ (1 + ε) min_{x ∈ ℝ^d} ‖T(Ax - b)‖_r.
It follows from Lemma <ref>(2) that
‖Ax' - b‖_p ≤ (1/(1 - ε)) ‖T(Ax' - b)‖_r ≤ (1 + O(ε)) min_{x ∈ ℝ^d} ‖Ax - b‖_p.
Hence, the problem is reduced to obtaining a (1 + ε)-approximate solution to min_{x ∈ ℝ^d} ‖T(Ax - b)‖_r = min_{x ∈ ℝ^d} ‖Âx - b̂‖_r.
Consider the iteration in Step 3. A standard analysis (see, e.g., Section 2.4 of <cit.>) yields that in each iteration, with probability at least 1 - 1/poly(d), τ is a constant approximation to the leverage scores of W^{1/p - 1/2} B. Taking a union bound, we get that with high constant probability, this holds for all iterations. Conditioned on this event happening, from Lemma <ref> we get that after t iterations, w is a constant approximation to the ℓ_r Lewis weights of B (in each iteration we round w; however, notice that if the Lewis weight w_i is not 0, it should be larger than 1/poly(nd), as the non-zero entries of the matrix B are at least 1/poly(nd)[It is easy to see that the ℓ_r sensitivities, defined in Proposition <ref> in Section <ref>, are at least Ω(1/poly(nd)) in our setting if the corresponding rows are nonzero: if we take x = a_i, we get that ℓ_i^{(r)}(A) ≥ ‖a_i‖_2^{2p}/‖Aa_i‖_p^p, where the denominator is at most poly(nd) since each entry of A is in poly(nd). From Lemma 2.5 in <cit.>, we know that the ℓ_r Lewis weights are larger than the ℓ_r sensitivities when r < 2.], and hence the rounding will not affect the approximation ratio guarantee in each iteration). From Lemma <ref>, the solution to min_{x ∈ ℝ^d} ‖A'x - b'‖_r is a (1 + ε)-approximate solution to min_{x ∈ ℝ^d} ‖T(Ax - b)‖_r, and is thus a (1 ± O(ε))-approximate solution to the original problem min_{x ∈ ℝ^d} ‖Ax - b‖_p.
We next analyze the communication complexity of the protocol. For Step 3(a), S_t W̃^{1/2 - 1/p} B_i is a d log(d) × (d+1) matrix and its entries are in poly(nd)-precision, as the entries of S_t, W̃^{1/2 - 1/p}, and B_i are all in poly(nd)-precision. Hence, the total communication of all servers is Õ(sd^2). For Step 3(b), R̃ is a (d+1) × (d+1) matrix and hence the total communication cost is Õ(sd^2). For Step 3(c), B^i R̃ G is a (d/ε^{O(1)}) × O(log d) matrix, and hence similarly we get that the total communication cost is O(sd/ε^{O(1)}). For Step 3(e), since w is a (d/ε^{O(1)})-dimensional vector, the total communication cost of this step is O(sd/ε^{O(1)}). In Step 5, since the sum of the Lewis weights is O(d), with high constant probability the server samples at most Õ(d/ε) rows, and hence the communication cost of this step is O(sd^2/ε). Putting everything together, we get that the total communication cost is
Õ(loglog(d/ε) · (sd^2 + sd/ε^{O(1)}) + sd^2/ε) = Õ(sd^2/ε + sd/ε^{O(1)}).
We now consider the runtime of the protocol. To compute TA^i, notice that T has d/ε^{O(1)} rows, which means it takes O(nnz(A^i) · (d/ε^{O(1)})) time to compute TA^i. Hence Step 2 takes O((∑_i nnz(A^i)) · (d/ε^{O(1)})) time. For the remaining steps, one can verify that each step takes poly(d/ε) time on a single server or on the coordinator. The total runtime is therefore O((∑_i nnz(A^i)) · (d/ε^{O(1)}) + s · poly(d/ε)).
We remark that when all leverage scores of [A b] are at most poly(ε)/d^{4/p}, the servers can first uniformly sample O((poly(ε)/d) · n) rows of A using the public random bits, rescale the sampled rows, and obtain a matrix A'. The servers can then run the protocol on A'. This modified protocol still produces a (1 + ε)-approximate solution to the ℓ_p-regression problem and has the same communication complexity, because uniform sampling does not require communication. The runtime is now reduced to O(∑_i nnz(A^i) + s · poly(d/ε)), which is optimal in terms of nnz(A^i). The details, including the formal statement, can be found in Appendix <ref>.
§ ACKNOWLEDGEMENTS
Y. Li is supported in part by Singapore Ministry of Education (AcRF) Tier 1 grant RG75/21 and Tier 2 grant MOE-T2EP20122-0001. H. Lin and D. Woodruff would like to acknowledge support from the National Institute of Health (NIH) grant 5R01 HG 10798-2 and the Office of Naval Research (ONR) grant N00014-18-1-2562.
§ PROOF OF LEMMA <REF>
We need the following theorem on the singularity probability of random sign matrices.
Let M_n ∈ ℝ^{n × n} be a random matrix whose entries are i.i.d. Rademacher random variables. It holds that
Pr[M_n is singular] ≤ (1/2 + o_n(1))^n.
The proof of the following lemma follows directly from the proof in <cit.> with only minor modifications.
For sufficiently large d, there exists a set of matrices 𝒯 ⊆ {-1,1}^{d × d} with |𝒯| = Ω(2^{0.49d^2}) such that
* For any T ∈ 𝒯, rank(T) = d;
* For any S, T ∈ 𝒯 such that S ≠ T, span([S_{d-1} T_{d-1}]) = ℝ^d, where S_{d-1} denotes the first d - 1 columns of S.
We use the probabilistic method to prove the existence. Let t = 2 - ε, where ε is a sufficiently small constant.
We use ℬ ⊂ ℝ^{d × (d-1)} to denote the set
ℬ = {B ∈ ℝ^{d × (d-1)} : Pr[X ∈ span(B)] ≥ c · t^{-d} or rank(B) < d - 1},
where X ∈ℝ^d is a vector whose entries are i.i.d. Rademacher variables and c is an absolute constant.
Consider a random matrix A ∈ ℝ^{d × (d-1)} with i.i.d. Rademacher entries. Then
Pr[A ∈ ℬ] ≤ 1/c,
since otherwise, if we use X ∈ ℝ^d to denote a vector with i.i.d. Rademacher coordinates, we would have
Pr[rank([A X]) < d] ≥ Pr[rank([A X]) < d | A ∈ ℬ] · Pr[A ∈ ℬ] > t^{-d},
which violates Theorem <ref>.
For any fixed A ∈ ℝ^{d × (d-1)} ∖ ℬ,
consider a random matrix B ∈ ℝ^{d × (d-1)} whose entries are i.i.d. Rademacher variables. Then
Pr[span([A B]) = ℝ^d] ≥ 1 - Pr[⋂_{i=1}^{d-1} B_i ∈ span(A)] ≥ 1 - c^d t^{-d(d-1)},
which follows from the definition of ℬ and the independence of the columns of B.
Now we construct a multiset 𝒮 of c^{-d} t^{d(d-1)/2} matrices, chosen uniformly with replacement from {-1, 1}^{d × d}.
By (<ref>) and linearity of expectation, we have
𝔼[|𝒮 ∩ S_ℬ|] ≤ c^{-d} t^{d(d-1)/2} · (1/c),
where S_ℬ denotes the set of matrices M such that the first d - 1 columns of M are in ℬ. Let ℰ_1 denote the event that
|𝒮 ∩ S_ℬ| ≤ 4 𝔼[|𝒮 ∩ S_ℬ|] ≤ 4 c^{-(d+1)} t^{d(d-1)/2},
which holds with probability at least 3/4 by Markov's inequality.
Let S_𝗋𝖺𝗇𝗄 denote the set of d × d matrices that are not of full rank. By (<ref>) and linearity of expectation, we have
𝔼[|𝒮 ∩ S_𝗋𝖺𝗇𝗄|] ≤ c^{-d} t^{d(d-1)/2} · t^{-d}.
Let ℰ_2 denote the event that
|𝒮 ∩ S_𝗋𝖺𝗇𝗄| ≤ 4 𝔼[|𝒮 ∩ S_𝗋𝖺𝗇𝗄|] ≤ 4 c^{-d} t^{d(d-1)/2} · t^{-d},
which holds with probability at least 3/4 by Markov's inequality.
Let ℰ_3 denote the event that
∀ S ∈ 𝒮 ∖ S_ℬ, ∀ T ∈ 𝒮 ∖ {S}, span([S_{d-1} T_{d-1}]) = ℝ^d.
Using a union bound and (<ref>),
Pr(ℰ_3) ≥ 1 - |𝒮|^2 c^d t^{-d(d-1)} = 1 - o_d(1).
Thus by a union bound, the probability that ℰ_1, ℰ_2 and ℰ_3 all hold is strictly larger than zero, which implies that there exists a set 𝒮 such that ℰ_1, ℰ_2, and ℰ_3 hold simultaneously.
Now we consider 𝒯 = 𝒮 ∖ (S_ℬ ∪ S_𝗋𝖺𝗇𝗄).
Since ℰ_1 and ℰ_2 hold, we have |𝒯| ≥ Ω(c^{-2d} t^{d(d-1)/2}) = Ω(2^{0.49d^2}), provided that d is sufficiently large and ε is sufficiently small.
The event ℰ_3 implies that all elements in 𝒯 are distinct, and furthermore, it holds for any S, T ∈ 𝒯 with S ≠ T that
span([S_{d-1} T_{d-1}]) = ℝ^d.
Suppose that 𝒯 satisfies the conditions in Lemma <ref>.
For each T ∈𝒯, we add T^T into ℋ.
Now suppose there exist S, T ∈ ℋ such that S ≠ T and S^{-1} e_d = T^{-1} e_d, which means there exists x ∈ ℝ^d such that Sx = e_d and Tx = e_d.
This implies that x^⊤ (S^⊤)_{d-1} = x^⊤ (T^⊤)_{d-1} = 0. The construction of 𝒯 guarantees that span([(S^⊤)_{d-1} (T^⊤)_{d-1}]) = ℝ^d, and it must thus hold that x = 0, which would result in Sx = Tx = 0 ≠ e_d, a contradiction.
Therefore, for any S, T ∈ ℋ with S ≠ T, we have S^{-1} e_d ≠ T^{-1} e_d.
§ PROOF OF LEMMA <REF>
The proof of the lemma closely follows that in <cit.>. The proof is a bootstrapping argument based on the following two lemmas. For simplicity of notation, we define R(A,b) = min_x ‖Ax - b‖_p.
There exists an absolute constant c ∈ (0,1] such that the following holds for all A ∈ ℝ^{n × d}, b ∈ ℝ^n and γ ∈ (0,1). Let x^* = argmin_{x ∈ ℝ^d} ‖Ax - b‖_p. Whenever x ∈ ℝ^d satisfies ‖Ax - b‖_p ≤ (1 + cγ)R(A,b), we have that ‖Ax^* - Ax‖_p ≤ √γ R(A,b).
Let A ∈ ℝ^{n × d}, b ∈ ℝ^n and 0 < γ < 1. Let S be the rescaled sampling matrix with respect to {p_i}_i such that p_i = min{β w_i([A b]), 1} and β = Θ((γ/ε^2) log^2 d log n log(1/δ)). Suppose that x̃ = argmin_{x ∈ ℝ^d} ‖SAx - Sb‖_p and ‖Ax̃ - Ax^*‖_p ≤ √γ R(A,b). It holds that
‖Ax̃ - b‖_p ≤ (1 + Cε) R(A,b)
with probability at least 0.99 - δ, where C is an absolute constant.
Assuming these two lemmas, the proof of Lemma <ref> is nearly identical to that in the proof of <cit.> and is thus omitted. The proof is simpler because we do not need an argument to first show that sketching by S gives a (1 + O(√ε))-approximate solution, which follows immediately from the fact that S is a (1 ± √ε)-subspace embedding with large constant probability.
In the remainder of this section, we discuss the proof of Lemma <ref>. The proof is similar to that of <cit.>, which converts the bound on the target dimension obtained from an iterative argument in <cit.> to a moment bound using the framework in <cit.>.
The difference is that here we can choose the weights to be the Lewis weights of [A b], while <cit.> considers min_x ‖Ax - z‖_p with ‖z‖_p ≤ R(A,b) and samples the rows according to the Lewis weights of A. Specifically, let R = R(A,b), x' = x̃ - x^* and b' = b - Ax^*. We have, as in the proof of <cit.>, that
‖Ax̃ - b‖_p^p - ‖Ax^* - b‖_p^p = ‖Ax̃ - b‖_p^p - ‖SAx̃ - Sb‖_p^p + ‖SAx̃ - Sb‖_p^p - ‖SAx^* - Sb‖_p^p
+ ‖SAx^* - Sb‖_p^p - ‖Ax^* - b‖_p^p
≤ ‖Ax̃ - b‖_p^p - ‖SAx̃ - Sb‖_p^p + ‖SAx^* - Sb‖_p^p - ‖Ax^* - b‖_p^p
≤ ‖Ax' - b'‖_p^p - ‖SAx' - Sb'‖_p^p + ‖Sb'‖_p^p - ‖b'‖_p^p
= ‖Ax' - b'‖_p^p - ‖Ax' - b̅'‖_p^p - ‖b' - b̅'‖_p^p
- ( ‖SAx' - Sb'‖_p^p - ‖SAx' - Sb̅'‖_p^p - ‖Sb' - Sb̅'‖_p^p )
- ( ‖SAx' - Sb̅'‖_p^p - ‖Ax' - b̅'‖_p^p + ‖b̅'‖_p^p - ‖Sb̅'‖_p^p )
=: E_1 - E_2 - E_3,
where b̅ is the vector obtained from b by removing all coordinates b_i such that |b_i| ≥ w_i/R. Note that ‖b̅'‖_p ≤ ‖b'‖_p = R and ‖Ax'‖_p ≤ √γ R. The first term can be controlled using <cit.>, except that the sampling probabilities are the Lewis weights of [A b] instead of A, but the proof still goes through because it also holds that |(Ax)_i|^p ≤ ‖Ax‖_p^p s_i([A b]), where s_i([A b]) is the ℓ_p-sensitivity of [A b]. The second term can be controlled by <cit.>, yielding that |E_2| ≤ ε R^p with probability at least 0.99. The last term can be controlled as in <cit.>, where the Lewis weights of [A b] do not affect the proof.
§ FASTER RUNTIME FOR DISTRIBUTED ℓ_P-REGRESSION
We need the following auxiliary results.
Suppose that 1 ≤ p < 2 and A ∈ ℝ^{n × d}. The ℓ_p-sensitivity scores of A are defined as
ℓ_i^{(p)}(A) = sup_{x : Ax ≠ 0} |⟨a_i, x⟩|^p / ‖Ax‖_p^p,
where a_i is the i-th row of A. It holds that ℓ_i^{(p)}(A) ≤ (τ_i(A))^{p/2} for all i.
Suppose that A has full column rank, otherwise we can find an invertible matrix T such that AT = [A' 0], where A' has full column rank, and consider ℓ_i^(p)(A') and τ_i(A') instead. It is not difficult to verify that ℓ_i^(p)(A') = ℓ_i^(p)(A) and τ_i(A') = τ_i(A).
Write A = UR, where U ∈ ℝ^{n × d} has orthonormal columns and R ∈ ℝ^{d × d} is invertible. Then
ℓ_i^{(p)}(A) = sup_{y ≠ 0} |⟨U_i, y⟩|^p / ‖Uy‖_p^p ≤ sup_{y ≠ 0} ‖U_i‖_2^p ‖y‖_2^p / ‖Uy‖_2^p
= ‖U_i‖_2^p
= (τ_i(A))^{p/2},
as advertised.
Let A ∈ ℝ^{n × d} and 1 ≤ p < ∞. The matrix A' is a submatrix of A such that the rescaled i-th row p_i^{-1/p} a_i is included in A' with probability p_i ≥ min(β s_i(A), 1). Then there is a constant c such that when β ≥ c ε^{-2} d log(1/ε), the matrix A' is a (1 ± ε)-subspace embedding of A with probability at least 9/10.
As an immediate corollary of the auxiliary results above, we have that when A ∈ ℝ^{n × d} has uniformly small leverage scores, uniformly sampling its rows can give an ℓ_p-subspace embedding (after rescaling).
Suppose that 1 ≤ p < 2 and the matrix A ∈ ℝ^{n × d} satisfies τ_i(A) ≤ (cε^2 γ/(d log(1/ε)))^{2/p} for all i, where γ ≤ ε^2/(Cd log(1/ε)). Let A' be a matrix formed from A by retaining each row with probability γ independently and then rescaling it by 1/γ^{1/p}. It holds with large constant probability that
(1 - ε)‖Ax‖_p^p ≤ ‖A'x‖_p^p ≤ (1 + ε)‖Ax‖_p^p
for all x ∈ ℝ^d simultaneously, and that A' has O(γ n) rows.
By Proposition <ref>, ℓ_i^{(p)}(A) ≤ (τ_i(A))^{p/2} = cε^2 γ/(d log(1/ε)), so the sampling probability
γ ≥ (Cd log(1/ε)/ε^2) · ℓ_i^{(p)}(A)
satisfies the condition in Lemma <ref>. The conclusion follows immediately.
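Concretely, the uniform sampling step this corollary licenses is a one-liner per server; the snippet below shows the sampling-and-rescaling operation (γ and p as in the statement). It is only a sketch of the preprocessing step, not of the full protocol.

import numpy as np

def uniform_subsample(A, b, gamma, p, seed=0):
    # Keep each row independently with probability gamma and rescale by gamma^{-1/p};
    # valid as an ell_p subspace embedding only when the leverage scores of [A b]
    # are uniformly small, as required by the corollary above.
    rng = np.random.default_rng(seed)
    keep = rng.random(A.shape[0]) < gamma
    scale = gamma ** (-1.0 / p)
    return A[keep] * scale, b[keep] * scale

Because every server can draw the same Bernoulli coin flips from the public random bits, the kept row indices agree across servers without any communication.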
Hence, if A has uniformly small leverage scores, all sites can agree on the O(γ n) uniformly sampled rows using the public random bits and run the protocol in Algorithm <ref> on the induced A'. By Markov's inequality, nnz(A') = O(γ · nnz(A)) with large constant probability, and we finally conclude with the following theorem.
Suppose that A ∈ ℝ^{n × d} and b ∈ ℝ^n satisfy that the leverage scores of [A b] are all bounded by poly(ε)/d^{4/p}.
There is a protocol which outputs a (1 ± ε)-approximate solution to the ℓ_p-regression problem with large constant probability, using Õ(sd^2/ε + sd/ε^{O(1)}) bits of communication and running in total time (over all servers) O(∑_i nnz(A^i) + s · poly(d/ε)).
|
http://arxiv.org/abs/2307.03891v3 | 20230708035823 | MARBLER: An Open Platform for Standarized Evaluation of Multi-Robot Reinforcement Learning Algorithms | [
"Reza Torbati",
"Shubham Lohiya",
"Shivika Singh",
"Meher Shashwat Nigam",
"Harish Ravichandar"
] | cs.RO | [
"cs.RO",
"cs.MA"
] |
Multi-agent reinforcement learning (MARL) has enjoyed significant recent progress, thanks to deep learning. This is naturally starting to benefit multi-robot systems (MRS) in the form of multi-robot RL (MRRL).
However, existing infrastructure to train and evaluate policies predominantly focus on challenges in coordinating virtual agents, and ignore characteristics important to robotic systems. Few platforms support realistic robot dynamics, and fewer still can evaluate Sim2Real performance of learned behavior.
To address these issues, we contribute MARBLER: Multi-Agent RL Benchmark and Learning Environment for the Robotarium. MARBLER offers a robust and comprehensive evaluation platform for MRRL by marrying Georgia Tech's Robotarium (which enables rapid prototyping on physical MRS) and OpenAI's Gym framework (which facilitates standardized use of modern learning algorithms).
MARBLER offers a highly controllable environment with realistic dynamics, including barrier certificate-based obstacle avoidance. It allows anyone across the world to train and deploy MRRL algorithms on a physical testbed with reproducibility.
Further, we introduce five novel scenarios inspired by common challenges in MRS and provide support for new custom scenarios.
Finally, we use MARBLER to evaluate popular MARL algorithms and provide insights into their suitability for MRRL.
In summary, MARBLER can be a valuable tool to the MRS research community by facilitating comprehensive and standardized evaluation of learning algorithms on realistic simulations and physical hardware.
Links to our open-source framework and the videos of real-world experiments can be found at <https://shubhlohiya.github.io/MARBLER/>.
§ INTRODUCTION
With increasing demand for robotics to operate in complex real-world environments, coordination of multiple robots is becoming paramount. However, the complexity of exact solutions to important problems (e.g., coverage control <cit.>, path-planning <cit.>, and task allocation <cit.>) grows exponentially as the number of robots increase <cit.>. Consequently, Multi-Robot Reinforcement Learning (MRRL) <cit.> is emerging as a promising alternative paradigm to address this challenge.
MRRL has proven useful for delivery robots <cit.>, coordinated robotic exploration <cit.>, multi-robot communication <cit.>, multi-robot path planning <cit.>, multi-robot target localization <cit.>, and more <cit.>. However, despite being developed for robotics, learning algorithms are rarely evaluated in the real world, with a few notable exceptions <cit.>. Even these exceptions were tested on small teams (2, 2, 3, and 4 robots, respectively) and on ad-hoc platforms, rendering reproducibility time-consuming and difficult.
In contrast, Multi-Agent Reinforcement Learning (MARL) algorithms can be evaluated in a systematic way in many standardized simulated environments, such as the Multi-Agent Particle Environment (MPE) <cit.> and the StarCraft Multi-Agent Challenge (SMAC) <cit.>. While it might be possible to use existing MARL environments to evaluate algorithms developed for MRS, they lack realistic robot dynamics and likely have a large sim2real gap. Further, they do not directly allow for evaluation and benchmarking on physical robots.
In this work, we develop an integrated and holistic platform that can enable seamless training of MRRL policies and their evaluation on physical robots. Specifically, we contribute Multi-Agent RL Benchmark and Learning Environment for the Robotarium (MARBLER). MARBLER is a bridge between the MARL community and the physical robots in the Robotarium <cit.> that makes it easy to evaluate MRRL algorithms and design novel scenarios. The Robotarium is a remotely-accessible, publicly-available, and free-to-use testbed for MRS that allows for up to 20 robots at once in a highly-customizable environment.
As such, MARBLER enables machine learning researchers to develop and test algorithms for physical robots, and control theorists to experiment with state-of-the-art (SOTA) learning algorithms.
Our MARBLER platform has the following key benefits:
* The simulated robots in MARBLER exhibit dynamics similar to that of physical robots as it is built on top of the Robotarium's simulator. Further, MARBLER includes support for barrier certificates to prevent collisions, forcing algorithms to learn in realistic settings.
* MARBLER inherits the open-access benefits of the Robotarium, enabling anyone across the world to train coordination algorithms and systematically deploy on a physical multi-robot testbed with reproducibility.
* MARBLER is compatible with any learning algorithm that can be used with the OpenAI Gym interface.
* MARBLER currently has 5 novel scenarios inspired by common and challenging problems in MRS.
* MARBLER is open-source and allows users to easily add new scenarios or modify existing ones.
By creating an interface between MARL algorithms and the Robotarium, MARBLER is the first publicly-available environment that can evaluate Sim2Real capability in MRRL. Further, MARBLER can serve as a benchmark to evaluate learning algorithms in simulation with real-world constraints and readily deploy them on physical robots.
In addition, we conducted detailed evaluations of existing MARL algorithms by leveraging Extended PyMARL (EPyMARL) <cit.> within MARBLER. Our experiments reveal insights into how different characteristics of existing algorithms (e.g., policy gradient vs. valued-based, parameter sharing, etc.) impact performance in both simulated and physical multi-robot systems.
§ RELATED WORK
§.§ MARL and MRRL Platforms
The Multi-Agent Particle Environment (MPE) <cit.> is a popular framework for evaluating MARL algorithms, consisting of cooperative and adversarial 2D tasks.
In MPE, agents apply forces to particles which can interact with landmarks and other agents. This is a popular setup in MARL environments and has been extended by platforms such as VMAS <cit.>: a vectorized version of MPE that is supported by GPUs to allow for more complex scenarios and faster training. However, particle simulators have very different dynamics than real robots making them poor choices for MRRL benchmarking.
Another popular MARL environment is StarCraft Multi-Agent Challenge (SMAC) <cit.> which is considerably more complex, requiring agents to handle partial observability over long horizons.
However, the agent dynamics in SMAC is still considerably different from real world robots, again making it a poor choice to evaluate MRRL algorithms.
There are few frameworks designed to benchmark MRRL algorithms, and fewer still that are able to evaluate the Sim2Real performance of algorithms. SMART <cit.> is one such environment. However, SMART is limited to scenarios involving autonomous driving, it only supports up to four robots, and neither its evaluation testbed nor its source code is publicly available.
The other MRRL environment that allows for Sim2Real testing is MultiRoboLearn <cit.>: an open-source framework that provides an OpenAI Gym interface for easier integration. However it also only supports a maximum of 4 robots, and, like SMART, it does not have a publicly available testbed. Additionally, creating new scenarios in MultiRoboLearn requires creating custom environments in Gazebo <cit.>, introducing significant overhead.
In contrast to existing environments, MARBLER's simulator closely mimics the constraints of physical robots and allows researchers to evaluate Sim2Real capabilities in a standardized and reproducible way. Therefore, MARBLER is the first MRRL benchmark that has both a realistic simulator and a physical testbed that anyone can use.
§.§ MARL Algorithms
A variety of MARL algorithms have been proposed that perform very well in simulated environments. PPO <cit.> is an effective actor-critic policy gradient method for single agent RL. MAPPO <cit.> is the multi-agent extension of PPO where a single centralized critic is conditioned on all agent's observations to learn a joint state value function and a separate actor for each agent tries to learn the best action to take conditioned only on the agent's individual observations.
In contrast to MAPPO, QMIX <cit.> and VDN <cit.> are value-based methods that decompose the joint state-action value function into individual state-action value functions. VDN learns to decompose the team value function agent-wise while QMIX learns agent-specific Q networks and combines them monotonically via hypernetworks.
In SMAC and MPE, MAPPO, QMIX, and VDN have been shown to be three of the best performing MARL algorithms <cit.>.
However, while these algorithms have performed very well in simulation, there is limited testing of their real world performance. <cit.> evaluated VDN's and QMIX's performance on robots and <cit.> and <cit.> evaluate different versions of multi-agent PPO based algorithms on real robots. However, these are some of the only works to do real-world evaluations and the experiments only used at most four robots and are not easily reproducible.
Another important design problem in MRRL is whether robots should share parameters. When robots share parameters, their networks all learn together, which greatly reduces the number of parameters to be trained. However, this leads to robots all learning the same behavior. To combat this, robots have unique IDs appended to their observations, but this approach still only allows robots to learn policies with limited heterogeneity <cit.>. Alternatively, each robot can learn its own set of network parameters, which allows robots to learn truly heterogeneous behavior but greatly increases the number of environment interactions needed for robots to learn, which can be expensive in realistic settings.
§.§ The Robotarium
The Robotarium<cit.> is a remotely accessible multi-robot laboratory developed by Georgia Tech. It features a 12ft x 14ft testbed, 8 Vicon motion-capture cameras and allows up to 20 GRITSBots <cit.> to operate at once.
The Robotarium has inbuilt control barrier certificates (CBF) <cit.> which provide a provable guarantee of online collision avoidance for the robots, by ensuring a minimum inter-robot distance.
Control commands that don't satisfy constraints are updated with minimum possible deviation before execution, by a quadratic-program based controller.
Hence, the policies learned in environments utilizing CBFs will have to adapt to these actuator constraints which makes the platform more realistic and allows policies to be run on real robots.
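The following sketch illustrates the barrier-certificate idea for single-integrator robots as a small quadratic program in cvxpy: nominal velocities are minimally modified so that every pairwise-distance barrier remains nonnegative. It is not the Robotarium's own implementation (the Robotarium ships its own barrier-certificate utilities and uses unicycle dynamics on hardware); the safety radius and gain below are illustrative values only.

import numpy as np
import cvxpy as cp

def barrier_filter(positions, u_nominal, d_min=0.17, gamma=100.0):
    # Minimally deviate from u_nominal while keeping h_ij = ||x_i - x_j||^2 - d_min^2 >= 0.
    n = positions.shape[0]
    u = cp.Variable((n, 2))
    constraints = []
    for i in range(n):
        for j in range(i + 1, n):
            diff = positions[i] - positions[j]
            h = float(diff @ diff - d_min ** 2)
            # Enforce dh/dt = 2 diff^T (u_i - u_j) >= -gamma * h^3 (a class-K condition).
            constraints.append(2 * cp.sum(cp.multiply(diff, u[i] - u[j])) >= -gamma * h ** 3)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)), constraints)
    problem.solve()
    return u.value

positions = np.array([[0.0, 0.0], [0.30, 0.0]])
u_nominal = np.array([[0.2, 0.0], [-0.2, 0.0]])   # two robots on a collision course
print(barrier_filter(positions, u_nominal))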
The Robotarium also provides a Python simulator that closely resembles how the robots will act in the real Robotarium. Once programs are working in simulation, the Robotarium has a publicly accessible website where anyone in the world can upload their programs for them to then be run in the real Robotarium on real robots.
§ THE MARBLER PLATFORM
Historically, evaluating MRRL algorithms using the Robotarium's simulator has been a challenging task. The lack of a standardized framework for MRRL in the Robotarium means that researchers have to create scenarios from scratch, design the low level control algorithms to control the robots after they select an action, control how the graphics are displayed, and more. As a result, to the best of our knowledge, only <cit.> has evaluated deep reinforcement learning algorithms with the Robotarium, despite its open accessibility to researchers. Addressing this limitation, MARBLER establishes a cohesive and user-friendly API tailored specifically for MRRL experiments. Researchers can design novel environments or employ the pre-existing default environments to execute their algorithms, thereby allowing reproducibility across studies.
Moreover, owing to its integration with the Robotarium's simulator, MARBLER streamlines the process of transitioning trained robots from simulation to real-world deployment. Through the execution of a single script, users can generate the files necessary for submitting their policies to the physical Robotarium. Because the Robotarium is accessible to all users free of charge, MARBLER is the first platform that allows for the deployment of MRRL algorithms on real robots in a highly reproducible manner.
§.§ Core Components
MARBLER is comprised of four core components that form the foundation of the platform:
Core: The Core component serves as the fundamental building block of MARBLER, leveraging the Robotarium's python simulator. It encompasses critical functionalities necessary for the environment, such as environment resetting and discrete time step advancement. By utilizing the capabilities of the Robotarium's simulator and CBFs, MARBLER incorporates realistic dynamics that emulate the constraints encountered by real robots.
Scenarios: The scenarios module defines the environments the robots interact in and the specific tasks they must accomplish.
Gym Interface: Each scenario within MARBLER is registered as a Gym environment, which allows for direct compatibility with the algorithms and tools that support the Gym interface.
Test Pipeline: The Test Pipeline
provides a streamlined process for importing trained robots into the simulation environment, giving researchers a way to visualize robots' performance and collect test data. Subsequently, researchers can execute a script to prepare their files for submission to the Robotarium, which can then be uploaded to the real Robotarium, enabling evaluation in a real-world setting.
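Because every scenario registers as a Gym environment, interacting with MARBLER follows the standard Gym loop. In the snippet below the package import and the environment id are placeholders (the repository documents the actual registered names and the exact shapes of observations, rewards, and actions); it is meant only to show that any Gym-compatible training or evaluation code can drive the scenarios unchanged.

import gym
import numpy as np
import robotarium_gym  # assumed import that registers the MARBLER scenarios

env = gym.make("PredatorCapturePrey-v0")    # placeholder environment id
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()      # random joint action standing in for a trained policy
    obs, reward, done, info = env.step(action)
    episode_return += float(np.sum(reward))
print("episode return:", episode_return)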
§.§ Scenarios
§.§.§ Existing Scenarios
To facilitate immediate testing and evaluation using MARBLER, we introduce five scenarios inspired by diverse MRRL problems. These scenarios are designed to offer researchers a starting point for experimentation and can be easily customized by modifying the scenario's associated configuration file. Parameters such as the number of robots, communication methods, scenario difficulty, and more, can be adjusted as needed.
A complete overview of these scenarios is available in the supplementary material[Supplementary material can be found at <https://shubhlohiya.github.io/MARBLER/assets/supplementary.pdf>], but we include brief descriptions here:
Simple Navigation (Fig. <ref>):
Robots navigate towards a known destination point. This scenario is an easy starting point for algorithms to learn in.
Predator Capture Prey (PCP) (Fig. <ref>):
Sensing robots and capture robots must work together to capture the prey. Sensing robots know the location of prey within their sensing radius and must communicate this to the blind capture robots. Inspired by the Predator Capture Prey scenario in <cit.>.
Warehouse (Fig. <ref>):
Robots must navigate to their color zone on the right to receive a load and then unload in their color zone on the left while avoiding collisions; a Multi-Robot Path Finding environment <cit.>.
Material Transport (MT) (Fig. <ref>):
Robots with varying speeds and capacities must collaborate to efficiently unload two zones: one nearby with a large amount of material and one further away with a small amount of material. This is a task allocation problem <cit.> where the robots must collaborate to unload the zones within a time limit.
Arctic Transport (AT) (Fig. <ref>):
Drones can move fast over any tile and have a large sensing radius. Ice and water robots have a limited sensing radius and move fast over some tiles but slow over other tiles. Robots are rewarded based on how far the ice/water robots are from the goal zone so the drones must guide the ice/water robots. This is a Multi-Robot Path Planning scenario <cit.> where the drones must find a path to the goal zone and communicate it to the ice/water robots.
§.§.§ Creating New Scenarios
MARBLER provides a user-friendly approach to create new scenarios, similar to MPE and VMAS. Researchers can customize the action space, observation space, visualizations, and other relevant parameters without needing to interact with the underlying Robotarium code, allowing researchers to develop tailored scenarios that align with their specific use cases. Our GitHub includes comprehensive documentation to create new scenarios.
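Structurally, a new scenario amounts to a Gym-style environment class wrapping the Robotarium simulator: it declares observation and action spaces, resets robot poses, advances the simulation one step, and computes rewards. The skeleton below illustrates that shape with a toy rendezvous task; the class name, constructor arguments, and the way poses would be exchanged with the underlying simulator are illustrative and do not correspond to MARBLER's actual base classes or configuration files.

import gym
import numpy as np
from gym import spaces

class ToyRendezvous(gym.Env):
    # Illustrative scenario skeleton: robots are rewarded for gathering at the origin.
    def __init__(self, num_robots=4, episode_len=200):
        self.num_robots, self.episode_len = num_robots, episode_len
        self.action_space = spaces.MultiDiscrete([5] * num_robots)   # stop or move in 4 directions
        self.observation_space = spaces.Box(-2.0, 2.0, shape=(num_robots, 2), dtype=np.float32)

    def reset(self):
        self._t = 0
        self.poses = np.random.uniform(-1.0, 1.0, size=(self.num_robots, 2)).astype(np.float32)
        return self.poses.copy()

    def step(self, actions):
        moves = 0.05 * np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], dtype=np.float32)
        self.poses = np.clip(self.poses + moves[np.asarray(actions)], -1.5, 1.5)
        self._t += 1
        reward = -float(np.linalg.norm(self.poses, axis=1).mean())   # shared team reward
        done = self._t >= self.episode_len
        return self.poses.copy(), reward, done, {}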
§ EXPERIMENTS
§.§ Experiment Setup
For all our experiments, we used the EPyMARL framework to train our robots. Because the scenarios in MARBLER have been registered as Gym environments, they are directly compatible with EPyMARL. This allowed us to train policies using the various learning algorithms available in EPyMARL with no modifications.
Baselines: We compared MAPPO <cit.>, QMIX <cit.>, and VDN <cit.> with parameter sharing. To investigate the effects of parameter sharing, we also evaluated QMIX without parameter sharing (QMIX_NS).
§.§ Evaluation Protocol
We evaluated all algorithms in the PCP, Warehouse, MT, and AT scenarios with 4, 6, 4, and 4 robots, respectively. Before training each algorithm, we ran a hyperparameter search in the Simple Navigation environment in a manner similar to <cit.>. Exact details on the hyperparameter search, along with the hyperparameters we used for each algorithm, can be found in the supplementary material[Supplementary material can be found at <https://shubhlohiya.github.io/MARBLER/assets/supplementary.pdf>].
We trained VDN and QMIX for a total of 5 million time steps in each scenario. Given the conflicting evidence about off-policy algorithms being more sample efficient than on-policy algorithms due to their use of a replay buffer <cit.>, we trained MAPPO for a total of 25 million time steps. We trained five seeds for each algorithm.
Because the Robotarium immediately stops a run when robots collide or go outside the acceptable boundaries, we used strict CBFs so that, if the robots attempt to get within 20cm of each other, their movement slows to the point where they almost stop. We also penalize the robots and end the episode if robots collide or drive outside the boundaries of the environment. By doing this, the robots are able to run successfully in the Robotarium after training.
In all scenarios, robots had full communication and in all scenarios except MT, robots had unlimited bandwidth in their communications. Exact details about how the environments were configured for these evaluations are included in the supplementary material.
§.§ Computational Requirements
We trained all models using CPUs; primarily with a Dual Intel(R) Xeon(R) Gold 6226 <cit.> and an Intel(R) Core(TM) i7-12700KF. It took 16084 CPU hours to train all models (excluding hyperparameter searches).
§ RESULTS
To compare baselines, first we look at training evaluation returns to evaluate sample efficiency and how much of an impact different seeds make which can be seen in Fig. <ref>. Then, we compared the best performing models for each algorithm in each scenario. To do this, we took the model that achieved the highest reward for each algorithm and evaluated the model in simulation and on real robots to compare performances. In simulation, we ran each model for 100 episodes and on the real robots, we ran each model for 10 episodes. The results can be seen in table <ref>.
§.§ Value Based vs. Policy Gradient
For the first 5 million timesteps, VDN is the best-performing algorithm in every scenario. After 25 million steps, MAPPO's best-performing seeds approach VDN's performance in MT and AT and surpass it in Warehouse. However, all seeds of MAPPO converge to lower performance in PCP than any of the value-based methods.
Additionally, MAPPO's performance is much more influenced by its seed than that of any value-based method. This is contradictory to the findings in <cit.>, but it seems that VDN generally outperforms MAPPO in MARBLER, suggesting that value-based methods, particularly VDN, may be more applicable to physical robots than policy-gradient methods.
§.§ Effects of Parameter Sharing
The performance of models trained with parameter sharing vs. without parameter sharing depends on the heterogeneity of the environment. In the Warehouse scenario, where robots are homogeneous except for their loading zone locations, QMIX outperformed QMIX_NS significantly. In MT, the robots need to learn slightly different policies to ensure that all zones are unloaded within the time limit, but the optimal policies are similar. In AT, drones and ice/water robots had fundamentally different optimal policies, yet neither QMIX nor QMIX_NS utilized the drones' enhanced sensing radius, resulting in similar policies for all robots. In AT and MT, with limited heterogeneity, QMIX showed a significant performance advantage over QMIX_NS but much less significant than in Warehouse. However, in the PCP scenario, where very different policies were learned for the Predator and the Capture robots, QMIX and QMIX_NS performed similarly. Thus, as heterogeneity increases, the gap between policies trained with and without parameter sharing shrinks, consistent with the findings from <cit.>. This suggests that in scenarios with more diverse heterogeneity, models trained without parameter sharing may outperform those trained with it.
Additionally, robots trained with QMIX_NS went out of bounds a total of 10 times in simulation and 6 times on real robots. In contrast, robots trained with all parameter sharing methods only went out of bounds once in simulation and once on real robots. When a single robot goes out of bounds, all robots are given a large negative penalty and the episode ends.
This suggests that, without parameter sharing, it is much more difficult for robots to learn how to handle events where a single robot can cause all other robots to suffer a penalty.
§.§ Sim2Real Gap
As shown in table <ref>, there are few significant differences between the algorithms' performance in simulation and in the real Robotarium. This gives strong evidence that the simulator is very similar to real robots. However, there is one key difference between the real experiments and the simulated experiments: the robots never collide in simulation and robots go out of bounds more than 6x more often on average on real robots. The only time an algorithms' metrics were significantly worse on real robots vs. in simulation was when the real robots collided or went out of bounds.
To further evaluate this, we retrained VDN in PCP using less safe CBFs that are only effective at 17cm and do not slow the robots as much when they are within the safety radii. In addition, we did not stop the episode or penalize the robots for driving out of bounds or colliding. This is how the Robotarium's safety mechanisms are set up by default. Other than these two modifications, we trained these models the same way as the original VDN models.
As seen in table <ref>, the differences between the test performance of the robots with the default CBFs and the robots with the safe CBFs are not significant in simulation. However, when we ran the robots trained with the default CBFs in the Robotarium, they collided in 3 out of 10 episodes, despite using the recommended method of preventing collisions, never colliding in the 100 simulated episodes, and the robots trained with the safe CBFs never colliding. This gives more evidence that, when it comes to safety, there is a significant Sim2Real gap, which highlights the second major benefit of using MARBLER: even if robots seem to learn safe policies in simulation, those policies may not run safely in the real world. This makes MARBLER the first open platform created that can be used to evaluate how safe learned MRRL policies are.
§ CONCLUSION
We introduce MARBLER, the first open platform with Sim2Real capabilities, realistic robot dynamics, and the ability to evaluate how safe MRRL algorithms are. MARBLER environments are fully compatible with OpenAI Gym, providing an easy interface with modern learning algorithms.
To demonstrate the utility of MARBLER, we developed five MRRL scenarios and utilized the EPyMARL framework to benchmark popular MARL algorithms, both in simulation and in the real-world. We believe MARBLER will help researchers benchmark Sim2Real transfer capabilities of MRRL algorithms in a systematic and reproducible way, making it an invaluable tool for the research community.
|
http://arxiv.org/abs/2307.07586v1 | 20230714192535 | QontSum: On Contrasting Salient Content for Query-focused Summarization | [
"Sajad Sotudeh",
"Nazli Goharian"
] | cs.CL | [
"cs.CL"
] |
[email protected]
IR Lab, Georgetown University
Washington D.C., USA
[email protected]
IR Lab, Georgetown University
Washington D.C., USA
Query-focused summarization (QFS) is a challenging task in natural language processing that generates summaries to address specific queries. The broader field of Generative Information Retrieval (Gen-IR) aims to revolutionize information extraction from vast document corpora through generative approaches, encompassing Generative Document Retrieval (GDR) and Grounded Answer Generation (GAR). This paper highlights the role of QFS in Grounded Answer Generation (GAR), a key subdomain of Gen-IR that produces human-readable answers in direct correspondence with queries, grounded in relevant documents. In this study, we propose QontSum, a novel approach for QFS that leverages contrastive learning to help the model attend to the most relevant regions of the input document. We evaluate our approach on a couple of benchmark datasets for QFS and demonstrate that it either outperforms existing state-of-the-art or exhibits a comparable performance with considerably reduced computational cost through enhancements in the fine-tuning stage, rather than relying on large-scale pre-training experiments, which is the focus of current SOTA. Moreover, we conducted a human study and identified improvements in the relevance of generated summaries to the posed queries without compromising fluency. We further conduct an error analysis study to understand our model's limitations and propose avenues for future research.
QontSum: On Contrasting Salient Content for Query-focused Summarization
Sajad Sotudeh and Nazli Goharian
§ INTRODUCTION
In recent years, the proliferation of digital content has led to an explosion of data, making it increasingly challenging to extract relevant information and insights from large amounts of text. Summarization, the process of condensing large volumes of text into shorter, more manageable summaries, has emerged as a promising solution to this problem. Among different types of summarization, query-focused summarization (QFS) <cit.> has gained significant attention due to its ability to generate summaries tailored to specific user queries or information needs.
QFS is situated within the larger domain of Generative Information Retrieval (Gen-IR), an area that seeks to revolutionize information extraction from vast document corpora by employing generative techniques. In order to understand the relationship between QFS and Gen-IR, it is essential to first explore the divisions within Gen-IR. Generative Information Retrieval can be divided into two main subfields: Generative Document Retrieval (GDR) and Grounded Answer Generation (GAR). GDR is concerned with retrieving a ranked list of documents w.r.t a given query, while GAR focuses on generating specific answers, grounded on relevant documents [Definitions of Gen-IR, GDR, and GAR are taken from the Gen-IR workshop at <https://coda.io/@sigir/gen-ir>]. In this context, QFS can be viewed as a method associated with GAR. QFS is essential in a wide range of natural language processing applications, such as information retrieval <cit.>, and document analysis <cit.>. In information retrieval, for example, QFS can help users quickly and effectively identify relevant information from large volumes of search results. Similarly, in document analysis, query-focused summarization can provide decision-makers with key insights and information necessary for making informed decisions <cit.>.
By condensing large volumes of information into a focused summary that directly addresses the user's information need, QFS enhances the informativeness of generated responses. This ability is particularly beneficial for tasks such as question answering, chatbots, and personal assistants, where the quality and relevance of generated responses are critical for user satisfaction and engagement. Incorporating QFS into Gen-IR systems can help address the issue of generating excessively long or irrelevant responses, a common challenge in natural language response generation. By providing users with more concise and targeted information, QFS can improve the effectiveness of Gen-IR systems, ultimately enhancing the user experience.
Existing methods for query-focused summarization can be broadly categorized into extractive, abstractive, and hybrid approaches. Extractive approaches involve selecting and aggregating the most important information from the input document based on various heuristics or statistical models. Abstractive approaches, on the other hand, aim to generate a summary that captures the essence of the input document in a more human-like manner, often by generating novel phrases or sentences. Hybrid approaches aim to combine the strengths of both extractive and abstractive methods. Despite significant progress in recent years, these approaches still face several challenges, such as the difficulty of capturing the nuances of human language and understanding the contextual information in the input document. Moreover, ensuring the faithfulness and relevance of generated summaries remains a critical challenge, particularly in scenarios where the input document contains complex information or where the query is granular. In this paper, we focus on enhancing the relevance of the generated summary to the query, a challenge that the state-of-the-art systems often encounter, as shown in Figure <ref>.
In the context of an ever-growing volume of information, long-input summarization has become increasingly important, as it can facilitate the efficient and effective extraction of key insights from extensive documents. Thus, our study opts to focus on long-input QFS, rather than short-input QFS. To address the challenges and limitations posed by current techniques in the domain of query-focused summarization, this study introduces an innovative method that employs contrastive learning <cit.> to distinguish between pertinent and non-pertinent content within the input document. The proposed methodology seeks to incorporate the most salient spans of the source document into the summarization process by juxtaposing them against negative regions that are typically the focus of attention for the summarization system. We posit that the incorporation of contrastive learning into query-focused summarization can bolster the model's capacity to discern relevant information from input documents, thereby yielding more relevant and pertinent summaries. Our empirical findings, based on two long-input datasets, demonstrate either enhanced or comparable performance in relation to the previous state-of-the-art, while simultaneously reducing computational cost.
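One generic way to instantiate such an objective is an InfoNCE-style loss over pooled span representations, added to the usual generation loss: the representation of a query-relevant source span is pulled toward the query while distractor spans are pushed away. The PyTorch snippet below is a sketch of this general idea with illustrative tensor shapes; it is not necessarily QontSum's exact formulation.

import torch
import torch.nn.functional as F

def span_contrastive_loss(query_repr, pos_span, neg_spans, temperature=0.1):
    # query_repr: (B, H) pooled query states; pos_span: (B, H) one relevant span;
    # neg_spans: (B, K, H) K distractor spans per example.
    q = F.normalize(query_repr, dim=-1)
    pos = F.normalize(pos_span, dim=-1)
    neg = F.normalize(neg_spans, dim=-1)
    pos_sim = (q * pos).sum(dim=-1, keepdim=True)      # (B, 1)
    neg_sim = torch.einsum("bh,bkh->bk", q, neg)       # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)             # the positive span sits at index 0

# In training, this term would be weighted and added to the generation loss, e.g.:
# loss = generation_nll + lambda_contrastive * span_contrastive_loss(q, pos, negs)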
In summary, this investigation presents a promising new avenue for augmenting the relevance of query-focused summarization and enriching the quality of generated responses in Generative Information Retrieval and Query-focused Summarization. Our contributions are twofold:
* We present a novel QFS system that combines contrastive learning with state-of-the-art techniques, surpassing or achieving the performance of existing approaches on relevant benchmark datasets.
* We conduct an evaluation of our model, including automatic performance comparisons, human assessment, and error analyses, to gain insights into the model's strengths, limitations, and potential future research directions.
§ RELATED WORK
Query-focused summarization (QFS) is a specialized area within automatic text summarization, where the objective is to generate summaries tailored to address a specific query by selecting and condensing relevant information from a given document collection. In the early stages of QFS research, extractive methods were the predominant techniques used to generate summaries. These methods identify salient sentences or passages from the input documents and combine them to form a summary without altering the original text. One popular approach is query expansion, which enriches the user query with additional terms to better capture the user's intent and improve the relevance of the extracted summaries <cit.>. Another method is query-biased summarization in information retrieval, which ranks sentences based on their similarity to the user query and their importance within the document <cit.>. For instance, Graph-based methods, such as LexRank <cit.> and TextRank <cit.>, represent documents as graphs, where nodes correspond to sentences and edges indicate the similarity between them. By analyzing the graph structure, these methods identify and extract key sentences based on their centrality and relevance to the user query.
The field of query-focused summarization (QFS) has experienced significant advancements due to the incorporation of deep learning and neural networks. Abstractive methods have become increasingly popular in QFS research, as they generate flexible and coherent summaries by paraphrasing and rephrasing the input text. Various techniques, such as sequence-to-sequence models <cit.>, attention mechanisms <cit.>, and transformer-based architectures <cit.>, have been employed to improve QFS model performance by assigning weights to input tokens. Pre-trained language models, like Bart <cit.>, and transformer-based architectures form the foundation of state-of-the-art (SOTA) models for QFS.
The availability of high-quality query-focused summarization datasets, such as QMSum <cit.> and SQuALITY <cit.>, has fueled increased interest in QFS. Recent research has explored extract-then-generate methods, including passage/answer retrieval <cit.>. Other methods involve adapting attention mechanisms to query-focused summarization through query-utterance interactions <cit.>, adapting SOTA summarizers to long-input QFS through overlapping segment-based summarization <cit.>, and replacing full attention with block-wise attention <cit.>. Additional approaches include generating pseudo queries for ranking evidence sentences <cit.>, employing data augmentation techniques <cit.>, and pre-training language models with dialog-specific <cit.> and question-driven <cit.> objectives. These advancements in QFS research have contributed to the development of more effective and coherent summarization models, paving the way for further innovations in natural language understanding and generation.
Different than prior work, our approach incorporates contrastive learning to help the model perform generation from relevant regions of the input document, with the goal of increasing the summary's relevance to the query. To the best of our knowledge, this is the first attempt to utilize contrastive learning in QFS. By leveraging this technique, we aim to guide the model to focus on the most relevant parts of the input document, thereby producing summaries that are better aligned with the query.
§ BACKGROUND: SEGMENT-BASED LONG SUMMARIZATION
Prior to presenting our contrastive learning approach, we present an overview of the SOTA Segment Encoder (SegEnc) summarization model, as proposed by <cit.>, which serves as the backbone summarization component of our framework. In the model, the source document is initially divided into overlapping segments of fixed length [We use 512 tokens, with each segment exhibiting a 50% overlap with its adjacent segment (empirically determined).], each of which is appended to the query and subsequently encoded with a conventional Transformer encoder. In this context, the encoder focuses on both query and segment tokens through its self-attention mechanism <cit.>, and generates token representations for each segment that are attuned to the query. After processing all segments with the shared encoder, their representations are merged into a single continuous embedding sequence, which is then fed into the decoder to create a summary tailored to the query (i.e., the response). Owing to the absence of cross-attention between encoded segments, the attention mechanism's scalability is linearly proportional to the number of segments and, consequently, the length of the input document. Nevertheless, the decoder is capable of attending to all encoded segments jointly, allowing the summarizer to operate in an end-to-end manner.
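To make the segmentation step concrete, the following is a minimal sketch of query-prepended, 50%-overlapping segmentation; the tokenizer checkpoint and the helper name are illustrative assumptions rather than the exact configuration used by SegEnc.

```python
# A minimal sketch (not the exact implementation) of SegEnc-style input
# preparation: the source is split into fixed-length, 50%-overlapping token
# segments, and the query is prepended to every segment before encoding.
from transformers import AutoTokenizer

def make_query_segments(query: str, document: str,
                        seg_len: int = 512, stride: int = 256,
                        checkpoint: str = "facebook/bart-large"):
    tok = AutoTokenizer.from_pretrained(checkpoint)
    q_ids = tok(query, add_special_tokens=False)["input_ids"]
    d_ids = tok(document, add_special_tokens=False)["input_ids"]
    segments = []
    start = 0
    while True:
        chunk = d_ids[start:start + seg_len]
        # Query tokens are placed in front of every segment so that the
        # shared encoder attends jointly to query and segment tokens.
        segments.append([tok.bos_token_id] + q_ids +
                        [tok.eos_token_id] + chunk + [tok.eos_token_id])
        if start + seg_len >= len(d_ids):
            break
        start += stride  # 256-token stride -> 50% overlap between segments
    return segments
```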
The generation loss for this network is calculated using cross-entropy, which measures the difference between the predicted summary output and gold summary as follows:
ℒ_gen = -∑_t=1^T log p(y_t|y_<t,x)
where T is the length of the generated summary, y_t is the t-th token in the generated summary, y_<t represents the previously generated tokens, and x is the input text. The generation loss measures the negative log-likelihood of generating the ground truth summary given the input text, and the goal during training is to minimize this loss.
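A minimal sketch of how this token-level negative log-likelihood can be computed from decoder logits (tensor shapes and names are illustrative):

```python
# Sketch of the generation (cross-entropy) loss above: negative log-likelihood
# of the gold summary tokens given the input, ignoring padding positions.
import torch
import torch.nn.functional as F

def generation_loss(logits: torch.Tensor, gold_ids: torch.Tensor,
                    pad_id: int) -> torch.Tensor:
    # logits: (batch, T, vocab) decoder outputs; gold_ids: (batch, T)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch*T, vocab)
        gold_ids.reshape(-1),                 # (batch*T,)
        ignore_index=pad_id,                  # do not penalise padding tokens
    )
```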
§ METHODOLOGY
This section introduces our proposed approach for query-focused summarization using contrastive learning, which involves applying Generative Information Retrieval techniques to the task of text summarization. The key aspect of our approach is based on the concept of contrastive learning, which has been successfully applied in various other domains <cit.>. Our approach, named Query-focused Contrastive Summarization, leverages contrastive learning to distinguish between positive and negative instances during training, enabling discrimination between important (positive) and unimportant (negative) content in the summarization process. Figure <ref> provides a detailed illustration of our model. In the following sections, we provide a systematic breakdown of our proposed approach.
§.§ Segment Scoring
Our architecture incorporates a feed-forward neural network as a critical component for scoring encoded segments in an extractive framework. This network plays a fundamental role in dynamically selecting negative samples during the training process. Specifically, the encoded segment representations h_i, taken from the <s> token of each segment and serving as the classification head, are fed into a feed-forward neural network with a sigmoid classifier.
p_i = σ (h_i W_i + b_i)
in which p_i represents the extraction probability of the i-th segment, σ is the Sigmoid activation function, and W_i and b_i are the learnable parameters. After obtaining the segment probabilities, we minimize the cross-entropy function as the classification loss, given the ground-truth labels for each segment:
ℒ_cls = - ∑_i=1^S [ y_i log(p_i) + (1 - y_i) log(1 - p_i) ]
where S is the number of segments, y_i is a binary label indicating whether the i-th segment contributes to the summary or not, and ℒ_cls is the classification loss.
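A minimal sketch of this scoring head and its binary cross-entropy objective (layer sizes and names are illustrative assumptions):

```python
# Sketch of the segment-scoring head: a linear layer with a sigmoid over each
# segment's <s> representation, trained with binary cross-entropy against the
# gold segment labels.
import torch
import torch.nn as nn

class SegmentScorer(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)   # W_i, b_i in the text

    def forward(self, seg_repr: torch.Tensor) -> torch.Tensor:
        # seg_repr: (num_segments, hidden) <s> representations of the segments
        return torch.sigmoid(self.proj(seg_repr)).squeeze(-1)  # p_i per segment

def classification_loss(p: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # p, y: (num_segments,) extraction probabilities and binary gold labels
    return nn.functional.binary_cross_entropy(p, y.float())
```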
§.§ Contrastive Learning Framework
In this subsection, we describe the contrastive learning framework that forms the basis of our proposed approach to query-focused summarization. The main objective of contrastive learning is to distinguish between similar (positive) and dissimilar (negative) instances by learning to map them to different regions in the embedding space. Our framework selects positive (fixed) and negative (dynamically) instances during the training process, which allows the model to learn effective representations that can discriminate between important and unimportant content for summarization.
§.§.§ Positive and Negative instance selection
To generate positive instances, we first check if ground-truth labels are available in the dataset. If not, we generate supervised labels, which will be further explained in Section <ref>. We examine each segment S_i to determine if it contains any gold spans or sentences. If it does, we add the entire segment to our positive contrastive set. For negative instances, we rely on the segment-scoring mechanism introduced in the previous subsection. Specifically, we treat non-gold segments with high extraction probabilities (p_i) as negative instances. We continue selecting negative instances until the number of selected negative segments matches the number of positive segments already selected. This selection process is dynamic and happens during training, meaning that the model is continually being challenged to distinguish between important and unimportant content. By incorporating both positive and negative instances, we ensure that our model is trained to focus on meaningful information, leading to better performance in downstream tasks. After identifying the positive and negative instances (i.e., segments), we feed them through a shared-weight decoder to generate a transformed representation for loss computations.
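The selection logic can be sketched as follows; the function and variable names are illustrative, not our exact implementation.

```python
# Sketch of the dynamic contrastive-instance selection described above:
# positives are the segments containing gold spans; negatives are the
# highest-scoring non-gold segments, matched in number to the positives
# (capped by the number of available non-gold segments).
import torch

def select_contrastive_segments(p: torch.Tensor, is_gold: torch.Tensor):
    # p: (num_segments,) extraction probabilities from the scoring head
    # is_gold: (num_segments,) bool mask of segments overlapping gold spans
    positive_idx = torch.nonzero(is_gold, as_tuple=True)[0]
    masked_scores = p.masked_fill(is_gold, float("-inf"))  # exclude gold segments
    k = min(len(positive_idx), int((~is_gold).sum()))
    negative_idx = torch.topk(masked_scores, k=k).indices
    return positive_idx, negative_idx
```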
§.§.§ Computing contrastive loss
To achieve our objective of maximizing the similarity between positive instances and their corresponding query embeddings while minimizing the similarity between negative instances and query embeddings, we opt to use the InfoNCE loss for contrastive learning, which is a popular contrastive learning technique introduced by <cit.> and has been proven to exhibit robust performance on QA tasks in recent studies <cit.>.
A few studies have acknowledged the significance of feature transformation employing a small neural network for mapping representations to a space where contrastive loss is computed <cit.>. Drawing inspiration from these works, and to address the challenge of decoder outputs being optimized for the token prediction task rather than semantic similarity within our framework, we utilize a multi-layer perceptron (MLP) coupled with Batch Normalization (BN)<cit.> and Rectified Linear Unit (ReLU) activation function<cit.> to transform the decoder outputs (i.e., logits) into a new embedding space, as described below:
h_i = ReLU(BN(W_i d_i + b_i))
s_i = W_j h_i + b_j
in which d_i is the decoder output (i.e., the logits), and W_i, W_j, b_i, and b_j are trainable parameters.
We continue to use cosine similarity as the pairwise similarity function to measure the similarity between contrastive and query embeddings as follows:
sim(s_i, q) = q^⊤ s_i / (|q| |s_i|)
where q is the query-side embedding (i.e., of the summary generated from the entire set of input segments in our framework), s_i is the embedding of a contrastive (positive/negative) instance, and sim is the cosine similarity between the query and the contrastive instances. It is worth noting that since the decoder outputs for q and s_i are token sequences, we compute the token-wise cosine similarity. We then define the InfoNCE contrastive loss as follows:
ℒ_cont = -log( e^{sim(s^+,q)/τ} / ∑_s ∈ S e^{sim(s,q)/τ} )
where sim(·) denotes the similarity score between the query and the positive (s^+) and a contrastive (s) instances, S is the set of contrastive instances, and τ is a temperature hyperparameter that controls the concentration of the probability distribution over the instances.
The InfoNCE loss encourages the model to learn embeddings that are more similar for positive instances and less similar for negative instances, effectively improving the model's ability to discriminate between important and unimportant content. The temperature parameter τ controls the sharpness of the distribution, with lower values of τ leading to a more focused distribution around the highest-scoring positive instance and higher values resulting in a smoother distribution across all instances. This parameter can be fine-tuned to strike a balance between focusing on the most relevant content and being robust to a diverse set of instances.
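The contrastive branch can be sketched as below. For brevity, the sketch uses pooled instance embeddings, whereas the model described above computes token-wise cosine similarity; the projection sizes and names are illustrative assumptions.

```python
# Sketch of the contrastive branch: an MLP with BatchNorm/ReLU maps decoder
# outputs into a contrastive space (h_i, s_i above), instances are compared to
# the query-side embedding with cosine similarity, and the InfoNCE loss is
# computed with the positive instance at index 0.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveProjector(nn.Module):
    def __init__(self, dim: int, proj_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Linear(dim, proj_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

def info_nce(query_emb, positive_emb, negative_embs, tau: float = 0.6):
    # query_emb: (dim,), positive_emb: (dim,), negative_embs: (n_neg, dim)
    candidates = torch.cat([positive_emb.unsqueeze(0), negative_embs], dim=0)
    sims = F.cosine_similarity(query_emb.unsqueeze(0), candidates, dim=-1) / tau
    # cross-entropy with target index 0 reproduces -log(e^{s+}/sum e^{s})
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```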
§.§ Joint Training Objective
We train our model by optimizing a joint objective that combines the generation loss, classification loss, and contrastive loss, introduced earlier as follows:
ℒ = λ_0 ℒ_gen + λ_1 ℒ_cls + λ_2 ℒ_cont
where λ parameters balance the learning between the three tasks, and their sum is equal to 1.
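As a small illustration, the joint objective is simply a convex combination of the three losses; the default weights below are those reported in our experimental setup.

```python
# Sketch of the joint training objective: a convex combination of the
# generation, classification, and contrastive losses.
def joint_loss(l_gen, l_cls, l_cont, lambdas=(0.6, 0.2, 0.2)):
    l0, l1, l2 = lambdas                 # weights sum to 1
    assert abs(l0 + l1 + l2 - 1.0) < 1e-6
    return l0 * l_gen + l1 * l_cls + l2 * l_cont
```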
§ EXPERIMENTAL SETUP
In this section, we elaborate on the datasets, and baseline models that we have utilized in our experiments. We also provide training and implementation details.
§.§ Datasets
QMSum <cit.> is a query-focused meeting summarization dataset consisting of 1,808 query-focused summaries extracted from 232 multi-turn meetings [A “multi-turn dataset” is a collection of conversational data that involves multiple rounds of dialogue between two or more participants, such as those in meetings.] across various domains, including product design, academia, and political committees. Additionally, the dataset includes annotations such as topic segmentations and highlighted text spans associated with reference summaries. The dataset is split into 1,257, 272, and 279 instances for training, validation, and testing, respectively. On average, the source documents contain 9K tokens and the summaries 70 tokens. Given the widespread use of QMSum in prior research, we believe it serves as a valuable benchmark for comparison in our work.
SQuALITY <cit.> is a collection of question-focused abstractive summarization data comprising 100 stories, 500 questions, and 2000 summaries. Each question in the dataset is accompanied by 4 reference summaries, written by trained writers who reviewed each other's work to ensure the data is of high quality. The dataset provides 39/25/36 (train/validation/test) splits, which is equivalent to 195/125/180 document-question pairs. The documents and summaries have a length of 5.2K and 237 tokens on average, respectively.
§.§ Baselines
We experiment with different strong and state-of-the-art summarization models, which are outlined as follows.
* Bart is a Transformer-based encoder-decoder <cit.> model pretrained with token-infilling and sentence-permutation objectives <cit.>. The network has a maximum input length of 1024 tokens; hence, the documents are truncated to fit this baseline.
* Pegasus is an abstractive summarizer that is pretrained using a task-specific objective for summarization <cit.>. Specifically, it is pretrained to predict masked-out sentences with the Gap Sentence Prediction (GSP) objective. The inputs are truncated to 2048 tokens to fit this baseline <cit.>.
* Bart+DPR is an extract-then-summarize baseline on the SQuALITY dataset, as proposed by <cit.>. Unlike Bart, it first retrieves the sentences most relevant to the question and concatenates them to form the input to the abstractive summarizer.
* LED is the Longformer Encoder-Decoder model, an abstractive summarization system suited for long-document summarization tasks <cit.>. This model modifies the conventional self-attention mechanism of the Transformer architecture for a more efficient and robust scale-up to long documents.
* Bart-LS is an extension of the Bart model adapted for long-sequence inputs. In particular, it replaces the full attention mechanism in Transformers with pooling-augmented block-wise attention, and pretrains the model with a masked-span prediction task with spans of varying lengths.
* DialogLM is a pre-trained neural model for understanding and summarizing long dialogues. It uses an encoder-decoder architecture and a window-based denoising pre-training task to equip the model with the ability to reconstruct noisy dialogue windows.
* SegEnc is a state-of-the-art abstractive summarizer, tailored to address query-focused long-input documents. It divides the input into fixed-length segments, encodes them separately, and then enables the decoder to jointly attend to these segments. Two different configurations of the model are employed: (1) the default configuration, fine-tuned from the standard pre-trained checkpoint; and (2) the Wikisum Pre-Finetuned model (SegEnc-W), which pre-finetunes the model on the Wikisum dataset.
* Socratic is a pre-training framework that uses a question-driven objective specifically designed for controllability in summarization tasks <cit.>. This framework trains a model to generate and respond to relevant questions (i.e., ask & answer) in a given document and achieves the SOTA on the QMSum and SQuALITY datasets; the pre-trained model is then fine-tuned for the downstream summarization task. We also include a version that is pretrained on the Book3 pre-training dataset and fine-tuned in the same manner.
* TUQFS is a query-aware approach that employs joint modeling of tokens and utterances through Token-Utterance Attention <cit.>. This technique integrates both token-level and utterance-level query relevance into the generation process via an attention mechanism. In our experiments, the TUQFS model is considered one of the state-of-the-art methods for the QMSum dataset.
§.§ Training and implementation details
Our method is implemented using Huggingface Transformers. For the model's hyperparameters, we set a learning rate of 5e-5 with a weight decay of 0.01 and train the model for 10 epochs, with validation conducted at the end of each epoch. We select the checkpoint that achieves the highest mean Rouge score for inference, a strategy similar to <cit.>. To compute the contrastive loss, we tune the τ parameter from the set {0.2, 0.4, 0.6, 0.8} and fix it at 0.6 and 0.8 for the QMSum and SQuALITY datasets, respectively. We tried different values of λ in joint learning and fixed them at (λ_0=0.6, λ_1=0.2, λ_2=0.2) for both datasets.
In the SQuALITY dataset, unlike the QMSum dataset, there are no human-annotated ground-truth spans for each query-summary pair. To provide supervised span labels, we adopt a method similar to that used in <cit.> and label the input segments based on the word bigram overlap between the segment and summary. If a segment has six or more common bigrams with the summary, it is labeled as positive; otherwise, it is labeled as negative.
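A minimal sketch of this bigram-overlap labelling (tokenization is simplified to whitespace splitting for illustration):

```python
# Sketch of the bigram-overlap labelling used for SQuALITY: a segment is
# labelled positive if it shares six or more word bigrams with the summary.
def word_bigrams(text: str):
    tokens = text.lower().split()
    return {(a, b) for a, b in zip(tokens, tokens[1:])}

def label_segments(segments, summary, threshold: int = 6):
    summary_bigrams = word_bigrams(summary)
    return [int(len(word_bigrams(seg) & summary_bigrams) >= threshold)
            for seg in segments]
```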
§ EXPERIMENTS
In this section, we present the automatic evaluation results, as well as the findings from a human study conducted to compare the system-generated outputs against each other.
§.§ Automatic results
The performance of various summarization models on the QMSum and SQuALITY benchmarks was evaluated using the Rouge and BertScore metrics, as displayed in Tables <ref> and <ref>, respectively. Our proposed method outperforms the majority of the baselines across all metrics on the QMSum benchmark. When compared to Socratic Pret. <cit.>, the strongest baseline, our system achieves improvements of +0.36 (1%) and +0.52 (1.5%) Rouge points and +0.09 (0.1%) BertScore points (relative improvements in parentheses). Although our method falls short by 0.24 points on the remaining Rouge metric compared to Socratic Pret., it is crucial to emphasize that our approach outperforms this state-of-the-art model.
In terms of computational cost, our proposed model offers a distinct advantage over the Socratic models. Specifically, our model leverages a contrastive learning strategy that obviates the need for extensive pre-training, which typically requires a significantly larger dataset and consequently leads to increased computational overhead. The Socratic Pret. 1M model and the full Socratic Pret. model, for instance, underwent an extensive pre-training phase involving 1 million and 30 million instances, respectively, from the Book3 collection. In contrast, our model only employs contrastive learning during the fine-tuning stage, which uses a significantly smaller dataset. This difference in approach has a direct effect on computational efficiency: the drastically reduced dataset size in our model's training phase lowers the computational cost significantly. Despite this reduction in training complexity, our model achieves improved or at least comparable results across all metrics when compared to these pre-trained baselines. These results not only illustrate the efficiency of our model but also open a new perspective on how to construct more computationally efficient models for query-focused summarization tasks.
Similarly, our proposed model demonstrates a significant improvement over the SOTA baselines on the SQuALITY dataset (Table <ref>), with an increase of 1.24 points (a 5% relative improvement) on one metric. However, there are relatively slight decreases of 0.55 and 0.53 points on the other two metrics when compared to the best baseline results. We hypothesize that these relatively lower scores can be attributed to the absence of gold spans, which were provided by humans in the QMSum experiments, in the training of our model. This finding suggests that there is room for improvement by employing more sophisticated techniques to enhance positive segment labeling, potentially leading to better performance on these metrics.
The impact of the temperature parameter of the InfoNCE loss on system performance is demonstrated in Figure <ref>. As depicted, the mean Rouge scores for both the QMSum and SQuALITY datasets rise as the temperature parameter increases. In the case of the QMSum dataset, the score noticeably ascends from 0.4 to 0.6 and then experiences a minor decline at 0.8, suggesting that the most effective parameter value is approximately 0.6. In contrast, the SQuALITY dataset displays a steady, though marginal, increase in the score with increasing parameter values, hinting at a more linear correlation. Generally, across all temperature values, the QMSum dataset exhibits a slightly higher mean Rouge score than the SQuALITY dataset.
§.§ Human study
Many previous studies have acknowledged the limitations of automatic evaluation metrics and their correlation with human judgments. To gain a better understanding of system-generated summary qualities and provide a basis for error analysis, we aim to conduct a human study on randomly sampled summaries. This approach will allow us to obtain insights into the strengths and weaknesses of our proposed model and identify areas where it can be improved. By comparing the results of the human study with the automatic evaluation metrics, we can gain a deeper understanding of the performance of our model and make informed decisions on how to optimize it further. The results of this human study will complement the experimental findings presented in this paper and contribute to a more comprehensive evaluation of our proposed summarization model.
In order to conduct the human study, we randomly selected 50 meeting-query instances from the QMSum dataset and provided the human annotator with summaries from three different sources: human-written, generated by SegEnc-W, and generated by our model. Due to the typically lengthy nature of meeting transcripts, which often exceeded 9K tokens in our study, we only provided the gold spans relevant to the given query during the evaluation process. This allowed the annotator to focus specifically on the information that was most important and directly related to the query, without being overwhelmed by extraneous information. To avoid any bias, we shuffled the order in which the summaries were presented for evaluation. We then defined three qualitative metrics for the evaluation process, which are listed below, and each case was scored on a Likert scale ranging from 1 (worst) to 5 (best).
* Fluency: This metric measures the extent to which the summary is fully understandable (score of 5) or completely gibberish (score of 1);
* Relevance: This metric quantifies how well the summary (even short responses within it) aligns with the given query, with a score of 5 for high relevance and 1 for low [Note that for the relevance metric, we do not consider the information within the ground-truth summary, but rather focusing solely on assessing the quality of the summary based on its relevance to the query at hand.];
* Faithfulness: This metric measures the extent to which the materials produced in the summary are supported (score of 5) or not supported (score of 1) within the meeting.
The results of the human study presented in Table <ref> show that our proposed method achieves a comparable level of fluency to human-written summaries, which is justifiable considering the strong pre-training ability of current state-of-the-art abstractive summarization systems. However, there is a clear gap between the performance of the systems and that of human-written summaries on the qualitative metrics. Despite this, our method outperforms the SegEnc-W baseline, particularly on relevance, where it achieves an average score of 3.96 compared to 3.84 for SegEnc-W. This demonstrates that our model is better at capturing the most important and relevant information in the meeting transcripts and summarizing it effectively, without sacrificing fluency over the baseline system. We also see a gap between the relevance of human-written summaries and system-generated summaries, which might be due to the inclusion of irrelevant spans of information in the system-generated summaries compared to the human-written summaries that have been written from the gold spans. Despite this, our method shows promise in improving the relevance metric. However, there is still a need for further research to improve the relevance and faithfulness metrics, especially in domain-specific contexts like meeting transcripts. The observed gap in faithfulness between human performance and summarization systems, as also pointed out in previous studies <cit.>, might be attributed to the challenges of understanding and representing natural language nuances in domain-specific contexts like meeting transcripts.
§.§ Error analysis
In the following, we present our findings on the types of errors observed affecting each qualitative metric, their causes, and potential solutions to address the detected issues.
§.§.§ Fluency.
When evaluating our system on the fluency metric, we found that its performance was generally comparable to that of the human-written summaries and the baseline systems. However, in cases where our system underperformed the other systems, we observed several common errors, in particular information repetition (Figure <ref>), incoherence, transcript copying (Figure <ref>), and speaker mix-up, especially on the QMSum benchmark.
We note that segmenting the input documents could have a negative impact on fluency in summarization. Specifically, since the segmentation is performed on fixed-length chunks, it may cause boundary sentences to be split across segments, potentially affecting the coherence of the system summary. We further observed that the current state-of-the-art abstractive summarization models, including our proposed system, face significant challenges in summarizing multi-turn meeting transcripts.
These challenges stem from the complexity of the task, which involves capturing the nuances of human conversation, understanding the speaker's intent and context, and producing a coherent summary that reflects the key takeaways from the discussion. Addressing these challenges will require further research and the development of novel techniques that can effectively handle the complexity of multi-turn summarization tasks such as the works done by <cit.>.
§.§.§ Relevance
When evaluating the system's performance in terms of relevance, we found that our proposed system outperformed the baseline but still fell short of human parity. In investigating the common reasons contributing to the errors observed in the underperformed cases, we identified several factors.
First, we found that the system sometimes struggled to accurately identify the most important information relevant to the query, leading to summaries that were less informative or missed critical details. This issue was more prevalent on the SQuALITY dataset, where no span annotations were provided, making it more challenging for the system to identify the relevant information. However, on the QMSum benchmark, where high-quality human-annotated text spans were provided, we observed fewer instances of this problem. This suggests that the integration of enhanced explicit supervision or utilization of more advanced labeling techniques may potentially contribute to the system's performance for selecting pertinent information, thereby generating more relevant summaries.
Furthermore, we also observed that the system's ability to capture the nuances of the query was sometimes limited, especially in cases where the query was too broad or too specific. For instance, open-ended queries such as “Summarize the whole meeting” lack specificity and can be interpreted in multiple ways, making it challenging for the system to determine which information is most relevant to include in the summary, particularly in longer documents. On the other hand, extremely specific queries, such as “What did Marketing think of the incorporation of current fashion trends in the prototype when making a simulation market evaluation of the new remote control?” can also pose challenges for the system. In such cases, the system may struggle to accurately interpret the underlying meaning of the query, leading to summaries that may not fully address the information needs.
Finally, we also noted that the choice of input document could significantly impact the system's performance, with some documents being more challenging for the system to summarize effectively. For instance, documents that contain multi-turn dialogues or include idiomatic expressions that are often made during meetings can pose significant challenges for the system. Similarly, the use of figurative language or domain-specific terminology can also impact the system's ability to accurately interpret the underlying meaning of the input document. To address these issues, we suggest further exploration of techniques for improving the system's ability to identify and select relevant information, such as using more advanced supervision signals (e.g., improving the semi-supervised labeling mechanism) or incorporating external knowledge sources to learn in-domain terminologies more effectively.
§.§.§ Faithfulness.
Faithfulness is a crucial aspect of summarization, as it ensures that the summary accurately reflects the key information from the input document. However, it has been identified as a significant challenge for long-input summarization tasks, including Query-Focused summarization. Prior works have also reported similar challenges in achieving faithful summaries <cit.> that accurately capture the relevant information from the input document and address the information needs.
In our error analysis, we observed several instances where the proposed model produced summaries that were not faithful to the input document. Specifically, we observed a common error where the generated summary was written from multiple segments that all did not align with the entire query. This could be attributed to the model's inability to effectively filter out irrelevant information or to correctly interpret the underlying meaning of the query. For example, the model may have included information that was only tangentially related to the query, but not directly relevant to the user's information needs. For instance, a query such as “Did the group think the remote control was easy to use when discussing evaluation criteria of the remote control?” might be asked. However, when it comes to interpreting the query meaning, the model might put more focus on only “being easy to use” and include segments that discuss all items that should be easy to use without considering the specific context and information within the query, such as the group's discussion on the remote control's evaluation criteria. In such cases, the model mixes up different items from the attended segments and generates a summary that is not faithful to the input.
Another error we observed was the omission of critical details or information that was relevant to the query, resulting in incomplete or inaccurate summaries, as shown in Figure. <ref>. This could be due to the model's limitations in accurately interpreting and analyzing the contextual information in the input document.
Another error was that the model exhibited a deficiency in tracking specific items that were frequently referenced during the meeting. This resulted in a failure to accurately capture the final decision or outcome that was made regarding a specific discussion. In other words, the model demonstrated an inability to consistently comprehend and retain crucial information regarding the frequently mentioned discussions on specific items. This type of error can have significant consequences, especially in contexts where the mentioned items play a critical role in decision-making.
To address these errors and improve the faithfulness of the system, we suggest exploring techniques such as integrating fact-checking mechanisms. Additionally, further research could be conducted to develop more advanced natural language understanding techniques such as reinforcement learning with faithfulness-focused rewards, which encourage the model to increase the faithfulness of the generated summaries.
§ CONCLUSION
This study proposes a novel approach for query-focused summarization (QFS) that employs contrastive learning to enhance the relevance of the summary to the given query. The proposed method utilizes the relevant segments of the document as positive instances and high-scored non-gold segments during the training process as negative instances for generating contrastive samples. Specifically, after identifying contrastive segments, they are fed into the abstractive summarizer for generating summaries for each contrastive instance, which then contribute to the computation of the contrastive loss (i.e., the InfoNCE loss function). The entire network is optimized using a joint loss that combines generation, classification, and contrastive losses with balancing hyperparameters. Experimental results indicate that the proposed method outperforms existing state-of-the-art techniques and achieves a new SOTA (with 1% and 1.5% relative improvements on two metrics for QMSum, and a 1.24-point (5%) gain on SQuALITY, compared to the previous SOTA) or comparable performance with reduced computational overhead and without further large-scale pretraining. Furthermore, a human study analysis demonstrates the effectiveness of the approach in terms of relevance, without sacrificing fluency. The conducted error analysis further provides insights into the current limitations and future research directions for QFS.
The study's contribution adds to the growing body of research on natural language generation and has the potential to advance the state-of-the-art in QFS. Overall, the proposed method represents a step forward in addressing the challenge of QFS in Gen-IR, and it is hoped that it will inspire further research in this area.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.05932v1 | 20230712055309 | Lorentz-covariant spinor wave packet | [
"Kin-ya Oda",
"Juntaro Wada"
] | hep-th | [
"hep-th",
"hep-ph"
] |
Lorentz-covariant spinor wave packet
Kin-ya OdaE-mail: [email protected]
and Juntaro WadaE-mail: [email protected]
August 12, 2023
=====================================================================================================
^* Department of Mathematics, Tokyo Woman's Christian University, Tokyo 167-8585, Japan
^†Department of Physics, University of Tokyo, Tokyo 113-0033, Japan
We propose a new formulation of manifestly Lorentz-covariant spinor wave-packet basis. The conventional definition of the spinor wave packet is problematic in the sense that it suffers from mixing with other wave packets under Lorentz transformations. Our formulation evades this difficulty of mixing. This wave packet forms a complete set that can expand a free spinor field in a Lorentz covariant manner. In addition, we present a Lorentz-invariant expression of zero-point energy.
§ INTRODUCTION
Neutrino oscillation requires wave-packet formulation in its very foundation <cit.>: Its production and detection regions are localized and far from each other, hence we need to use wave packets, representing the localization of neutrino source, detector, and propagation.
There have been extensive studies on the wave-packet treatment on neutrino oscillation; see e.g. Refs. <cit.>.
To discuss neutrino oscillation with wave packet, Lorentz covariance is important because typically the neutrino is a relativistic particle. So far, a Lorentz invariant wave packet <cit.> has been used to deal with neutrino oscillation <cit.>.
Here, we point out that this use of the conventional Lorentz-invariant wave packet for spinors leads to complicated Lorentz transformation law that mixes it with other wave-packet states having different centers of momentum and position.
In this paper, we propose a new formulation of manifestly Lorentz-covariant spinor wave packet, with which we can avoid this difficulty. Our definition is more natural in the sense that wave packets do not mix with each other under Lorentz transformations. Then we prove the completeness of this spinor wave packet in the one-particle subspace. Finally, generalizing this completeness relation, we will show that the spinor field can be expanded by our spinor wave packet, and discuss several well-known operators in the wave packet basis.
The organization of this paper is as follows:
In Section <ref>, we discuss the wave packet with spin in one-particle subspace and propose a new definition of Lorentz-covariant spinor wave-packet basis. In Section <ref>, generalizing the discussion in the previous section, we introduce the creation and annihilation operator of the spinor wave packet. Then we show that the free fermion field can be expanded by this spinor wave packet. Finally, in Section <ref>, we will give the expression of several operators in QFT, i.e. the total Hamiltonian, momentum, and charge operators, in terms of wave packets.
§ LORENTZ-COVARIANT SPINOR WAVE PACKET
In this section, we point out that the known representation of a spinor wave packet suffers from mixing with other wave packets under Lorentz transformations, and propose a complete set of Lorentz-covariant spinor wave-packet basis without the difficulty of mixing.
We work in the (d+1)-dimensional Minkowski space M^{d+1} spanned by the coordinate system x = (x^0, 𝐱) = (x^0, x^1, …, x^d) ∈ ℝ^{1,d}, with d=3 spatial dimensions.
We take the almost-plus metric signature -,+,…,+.
We only consider a massive field, m>0, and always take d+1-momenta on-shell,
p^0=√(m^2+ p^2),
throughout this paper unless otherwise stated.
When an on-shell momentum appears in the argument of a function such as f(p), we use the d- and (d+1)-dimensional notations interchangeably: f(𝐩) = f(p).
§.§ Spinor plane waves, revisited
To spell out our notation, we summarize basic known facts on the spinor plane waves.
A free Dirac field ψx can be expanded by plane wave as follows,
ψ(x) = ∑_s ∫ d^d p/(2p^0) [ u(p,s) e^{ip·x}/(2π)^{d/2} α(p,s) + v(p,s) e^{-ip·x}/(2π)^{d/2} β^†(p,s) ],
where up,s and vp,s are plane-wave solutions of the Dirac equation
(ip+m)u(p,s)=0, (ip-m)v(p,s)=0,
with s=±1/2 being the spin in the rest frame of each solution.
Throughout this paper, we suppress the spinor indices a=1,…,2^⌊d+1/2⌋ for ψ, u, v, etc. when unnecessary.
These solutions satisfy the following completeness relations,
∑_s u(p,s)u(p,s)
= -ip+m,∑_s v(p,s)v(p,s)
= -ip-m,
and their normalization is
u(p,s)up,s'=2mδ_ss',
v(p,s)vp,s'=-2mδ_ss',
where ψ:=ψ^†β is the Dirac adjoint.[
We adopt the spinor notation in Ref. <cit.>: {γ^μ, γ^ν} = 2η^{μν} I, where η := diag(-1,1,…,1) and I is the unit matrix in the spinor space. Here, β := iγ^0 is distinguished from the operator β.
]
The coefficients αp,s and β^†p,s in Eq. (<ref>) are the annihilation and creation operators
for particle and anti-particle, respectively, that satisfy the following anticommutation relations:
{α(p,s), α^†(p',s')} = δ_{ss'} 2p^0 δ^d(p-p') 1,
{β(p,s), β^†(p',s')} = δ_{ss'} 2p^0 δ^d(p-p') 1,
(all other anticommutators) = 0.
Free one-particle subspaces of particle and antiparticle are spanned by the following plane-wave bases:
α^†p,s|0⟩
=: p,s,n,
β^†p,s|0⟩
=: p,s,n^c,
where n and n^c denote the particle and antiparticle of nth species, respectively. The anticommutator (<ref>) leads to the inner product:
p,s,η|p',s',η'
:= 2p^0δ^dp-p'δ_ss'δ_ηη',
where η=n,n^c labels the particle and anti-particle.
The normalization (<ref>) leads to the completeness relation (resolution of identity) in the free one-particle subspace of each η=n,n^c:
∑_s∫^dp2p^0p,s,ηp,s,η
= 1.
The Lorentz transformation law of the plane wave reads
UΛp,s
= ∑_s'Λp,s'D_s'sWΛ,p,
where D is the spin-s representation of the Winger rotation SO(d); see Appendix <ref> for details.
§.§ Lorentz-covariant spinor wave packet
In this subsection, we first briefly review basic facts on Lorentz-invariant scalar wave packets <cit.>, which is discussed in our previous work <cit.>. Next, we point out the difficulty in the conventional treatment of the spinor wave packet <cit.>. Then, we propose a new definition of the spinor wave packet and show that we can avoid this difficulty in our expression.
§.§.§ Brief review of Lorentz-invariant scalar wave packet
For central position X and momentum P in d+1-dimensions,
a Lorentz-invariant scalar wave packet Π is defined by <cit.>:[See e.g. Refs. <cit.> for reviews.]
⟨p|Π⟩ := N_ϕ e^{-ip·(X+iσP)},
where Π denotes the phase space[
Here, Π includes the wave-packet central time X^0.
Though P^0=√(m^2+ P^2) is not an independent variable, we also include it for the convenience of writing its Lorentz transformation below.
]
Π := X,P,
and the normalization factor
N_ϕ := (σ/π)^{(d-1)/4} / √(K_{(d-1)/2}(2σm^2))
provides ⟨Π|Π⟩ = 1, in which K_n(z) is the modified Bessel function of the second kind. Here and hereafter, we fix σ unless otherwise stated.
The wave function and the inner product are obtained as <cit.>
⟨x|Π⟩ = N_ϕ (m^{d-1}/√(2π)) K_{(d-1)/2}(ξ)/ξ^{(d-1)/2},
⟨Π|Π'⟩ = N_ϕ^2 (2πm^2)^{(d-1)/2} K_{(d-1)/2}(Ξ)/Ξ^{(d-1)/2},
where for any complex vector V^μ, we write V := √(-V^2), namely,
ξ = m√(σ^2 m^2 + (x-X)^2 - 2iσ P·(x-X)),
Ξ = m√((X-X'-iσ(P+P'))^2),
with ξ^μ := m(σP^μ + i(x-X)^μ)
and Ξ^μ := m(σ(P^μ+P'^μ) + i(X-X')^μ).[
The abuse of notation is understood such that a vector-squared V^2:=-V^0^2+ V^2 is distinguished from the second component of V by the context.
]
We note that there is no branch-cut ambiguity for the square root as long as m>0
<cit.>.
With this state, the momentum expectation value and its (co)variance become <cit.>
⟨p^μ⟩_ϕ := ∫ d^d p/(2p^0) ⟨Π|p⟩ p^μ ⟨p|Π⟩ = M_ϕ P^μ,
⟨p^μ p^ν⟩_ϕ = [K_{(d+3)/2}(2σm^2)/K_{(d-1)/2}(2σm^2)] P^μ P^ν + (M_ϕ/2σ) η^{μν},
where
M_ϕ := K_{(d+1)/2}(2σm^2)/K_{(d-1)/2}(2σm^2).
In general, a matrix element of p becomes
⟨p^μ⟩_{Π,Π'} := ∫ d^d p/(2p^0) ⟨Π|p⟩ p^μ ⟨p|Π'⟩ = (2πm^2)^{(d-1)/2} N_ϕ^2 m Ξ^μ K_{(d+1)/2}(Ξ)/Ξ^{(d+1)/2}.
Let us consider a spacelike hyperplane Σ_N,T=X|N· X+T=0 in the space of central position X; see Appendix <ref>.
One can write the completeness relation in the position-momentum phase space in a manifestly Lorentz-invariant fashion <cit.> (see also Ref. <cit.>):
∫ d^{2d}Π_ϕ |Π⟩⟨Π| = 1,
where 1 denotes the identity operator in the one-particle subspace and
the Lorentz-invariant phase-space volume element is given by
∫ d^{2d}Π_ϕ := (1/M_ϕ) ∫ (d^dΣ^μ_X/(2π)^d) (-2P_μ) (d^d P/2P^0),
in which
d^dΣ^μ_X := d^{d+1}X δ(N·X+T) N^μ
is the Lorentz-covariant volume element.
We stress that σ is not summed nor integrated in the identity (<ref>) and that the identity holds for any fixed σ.
Let us consider a “time-slice frame” X of the central-position space in which Σ_ N,T becomes an equal-time hyperplane X^0=T,
X
:= L^-1NX,
where the “standard” Lorentz transformation LN is defined by N=: LNℓ, with ℓ denoting ℓ:=1, 0 in any frame; note that N= L^-1NN=ℓ by definition; see Appendix <ref> for details.
On the constant-X^0 hyperplane Σ_,T= X| X^0=T, the Lorentz-invariant phase-space volume element reduces to the familiar form:
∫^2dΠ_ϕ
= 1M_ϕ
∫_X^0=T^d X ^dP2π^d.
Note that M_ϕ→1 in the non-relativistic limit σ m^2≫1.
§.§.§ Difficulty in spin-diagonal representation
In the literature <cit.> a so to say spin-diagonal one-particle wave-packet state Π,S_D with a spin S has been defined as
p,s|Π,S_D
:= p|Πδ_sS
where p|Π is nothing but the scalar Lorentz-invariant wave packet (<ref>).[
In the literature, the normalization and X-dependence <cit.> have been omitted.
]
Its normalization becomes
Π,S|Π',S'_D
= ∑_s∫^dp2p^0Π,S|p,s_Dp,s|Π',S'_D = ∑_s∫^dp2p^0Π|pp|Π' = Π|Π'δ_SS',
where Π|Π' is given in Eq. (<ref>).[
An inner product of the spin-diagonal wave-packet state and another state ψ is understood as
Π,S|ψ_D:=Π,S_D^†ψ=:Π,S_Dψ.
We will never consider an inner product of the spin-diagonal wave-packet state and a phase-space-diagonal wave-packet state that appears below so that this notation will not cause confusion.
]
This leads to the following completeness relation in the one-particle subspace
∑_S∫^2dΠ_ϕΠ,S_DΠ,S_D
= 1,
generalizing the completeness relation of the scalar wave packet (<ref>).
Once the wave-packet state is defined, its Lorentz transformation law is obtained as
UΛΠ,S_D
= ∑_s∫^dp2p^0UΛp,sp,s|Π,S_D = ∑_s,s'∫^dp2p^0
∑_S'∫^2dΠ'_ϕΠ',S'_DΠ',S'|Λp,s'_D
D_s'sWΛ,p
p,s|Π,S_D = ∑_S'∫^2dΠ'_ϕΛΠ',S'_D
D_S'SWΛ,p_Π',Π,
where
ΛΠ:=ΛX,ΛP.
and
D_S'SWΛ,p_Π',Π
:= Π'(∫^dp2p^0D_S'SWΛ,ppp )Π.
We see that the spin-diagonal choice (<ref>) leads to the complicated transformation law (<ref>) mixing the wave-packet state with the others having various centers of momentum and position.
Below, we will show that we can indeed realize a physically reasonable transformation law, so to say the phase-space-diagonal representation, which evades the mixing with other states (<ref>):
UΛΠ,S
=∑_S'ΛΠ,S'C_S'SΛ, Π,
where C_S'SΛ, Π is a yet unspecified representation function.
§.§.§ Phase-space-diagonal representation
Instead of the conventional choice (<ref>), we propose to define
⟨p,s,η|Π,S,η'⟩ := N_ψ e^{-ip·(X+iσP)} M_{sS}(p,P,η) δ_{ηη'},
where the key element is
M_{sS}(p,P,η) :=
ū(p,s) u(P,S)/(2m)   for η = n,
-v̄(P,S) v(p,s)/(2m)   for η = n^c,
and N_ψ is a normalization factor to be fixed below.
Note that M_sSp,p,η=δ_sS and that M_sSp,P,n=M_sSp,P,n^c from u(p,s)=C v^*(p,s) (with the charge conjugation matrix C=-γ^2 in our notation).
The definition (<ref>) leads to[
One can show it as
M_sSp,P,n
= 12mup,sS^-1ΛSΛuP,S
= 12m∑_s',S'D^*_s'sWΛ,puΛp,s'uΛP,S'D_S'SWΛ,P = ∑_s',S'D^*_s'sWΛ,p
M_s'S'Λp,ΛP,nD_S'SWΛ,P,
and similarly for M_sSp,P,n^c.
]
M_sSp,P,η
= ∑_s',S'D^*_s'sWΛ,p
M_s'S'Λp,ΛP,ηD_S'SWΛ,P.
Then it follows that
p,s,η|Π,S,η
= ∑_s',S'D^*_s'sWΛ,p
Λp,s,η|ΛΠ,S,ηD_S'SWΛ,P.
Here and hereafter, for notational simplicity, we omit the label η and concentrate on the case of the particle when the distinction is irrelevant.
The identity (<ref>) results in[
This can be shown as
UΛΠ,S
= ∑_s∫^dp2p^0UΛp,sp,s|Π,S
= ∑_s',S'∫^dp2p^0
Λp,s'Λp,s'|ΛΠ,S'D_S'SWΛ,P = ∑_S'ΛΠ,S'D_S'SWΛ,P,
where we have used Eqs. (<ref>) and (<ref>) and then the unitarity (<ref>) in the second equality.
]
UΛΠ,S
= ∑_S'ΛΠ,S'D_S'SWΛ,P.
As promised, we have realized the phase-space-diagonal representation (<ref>).
Now we show that the normalization Π,S|Π,S=1 is realized by the choice
N_ψ = √(2/(1+M_ϕ)) N_ϕ,
where N_ϕ is given in Eq. (<ref>).
Let us first compute
Π,S|Π,S
= ∑_s∫^dp2p^0Π,S|p,sp,s|Π,S = N_ψ^22m^2uP,S∫^dp2p^0Π|pp|ΠN_ϕ^2
-ip+m
uP,S = N_ψ^22m^2uP,S-ip+m_ϕN_ϕ^2
uP,S,
where we used Eq. (<ref>) in the second line.
The expectation value p^μ_ϕ is presented in Eq (<ref>), from which we get
-ip+m_ϕ
= -iM_ϕP
+m.
Therefore, using the Dirac equation (<ref>) and then the normalization (<ref>), we see that the choice (<ref>) provides the normalized state.
Finally, the inner product is given by
Π,S,η|Π',S',η'
= ∑_s,η”∫^dp2p^0Π,S,η|p,s,η”p,s,η”|Π',S',η' = N_ψ^22m^2uP,S-ip+m_Π,Π'N_ϕ^2
uP',S'δ_ηη',
where, -i p+m_Π,Π'=-i p_Π,Π'+m with p^μ_Π,Π' being given in Eq. (<ref>).
Hereafter, we adopt this representation for the Lorentz-covariant spinor wave packet.
§.§ Momentum expectation value
In this subsection, we compute the momentum expectation value of the Lorentz covariant spinor wave packet:
p^μ_ψ
:= ∑_s ∫^dp2p^0Π,S|p,sp^μp,s|Π,S.
This will be an important parameter in the following.
Putting Eq. (<ref>), we obtain
p^μ_ψ
=N_ψ^24m^2∑_s ∫^dp2p^0Π|pp^μp|ΠN_ϕ^2u(P,S)u(p,s)u(p,s)u(P,S) = N_ψ4m^2u(P,S)p^μ(-ip+m)_ϕN_ϕ^2u(P,S),
where we used Eq. (<ref>). The expectation value and its covariance p^μ_ϕ, p^μ p^ν_ϕ are shown in Eqs (<ref>) and (<ref>). Thus,
p^μ(-ip+m)_ϕ
= -i
K_d+322σm^2K_d-122σm^2
P^μ P
+M_ϕ2σγ^μ
+M_ϕm P^μ.
Hence, using the Dirac equation (<ref>) and then the normalization (<ref>), we get
14m^2u(P,S)p^μ(-ip+m)_ϕu(P,S)
= 12
M_ϕ+
M_ϕ2σm^2
+
K_d+322σm^2K_d-122σm^2P^μ.
Therefore, the momentum expectation value is given by
⟨p^μ⟩_ψ = M_ψ P^μ,
where
M_ψ := (1/(1+M_ϕ)) [ K_{(d+3)/2}(2σm^2)/K_{(d-1)/2}(2σm^2) + M_ϕ (1 + 1/(2σm^2)) ].
Note that M_ψ→1 in the non-relativistic limit σ m^2≫1.
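As a quick numerical cross-check of this limit (a sketch, not part of the derivation), one can evaluate the Bessel-function ratios directly; here we take d = 3 and the Bessel indices (d±1)/2 and (d+3)/2 as quoted above, using exponentially scaled Bessel functions to avoid underflow at large arguments.

```python
# Numerical check (for d = 3) that M_phi and M_psi tend to 1 in the
# non-relativistic limit sigma*m^2 >> 1, using the Bessel-ratio expressions
# quoted above.  kve (exponentially scaled K) avoids underflow at large z.
from scipy.special import kve

d = 3

def M_phi(sigma_m2: float) -> float:
    z = 2.0 * sigma_m2
    return kve((d + 1) / 2, z) / kve((d - 1) / 2, z)

def M_psi(sigma_m2: float) -> float:
    z = 2.0 * sigma_m2
    mphi = M_phi(sigma_m2)
    return (kve((d + 3) / 2, z) / kve((d - 1) / 2, z)
            + mphi * (1.0 + 1.0 / (2.0 * sigma_m2))) / (1.0 + mphi)

for sigma_m2 in (1.0, 10.0, 100.0, 1000.0):
    print(f"sigma m^2 = {sigma_m2:7.1f}:  "
          f"M_phi = {M_phi(sigma_m2):.6f},  M_psi = {M_psi(sigma_m2):.6f}")
```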
§.§ Completeness
In this subsection, we will prove the following completeness relation for Lorentz-covariant spinor wave packet,
∑_S ∫ d^{2d}Π_ψ |Π,S⟩⟨Π,S| = 1,
where
∫ d^{2d}Π_ψ := (1/M_ψ) ∫_{Σ_{N,T}} (d^dΣ^μ_X/(2π)^d) (-2P_μ) (d^d P/2P^0) = (M_ϕ/M_ψ) ∫ d^{2d}Π_ϕ,
in which d^{2d}Π_ϕ, M_ϕ, and M_ψ are given in Eqs. (<ref>), (<ref>), and (<ref>), respectively.
To prove Eq. (<ref>), we rewrite it as a matrix element for both-hand sides, sandwiched by the plane-wave bases (<ref>):
N_ψ^2M_ψ∫_Σ_N,T^d+1X2π^dδN·X+T(-2P·N)^dP2P^0p|ΠΠ|qN_ϕ^2 ×14m^2∑_S up,suP,S uP,Suq,s' = 2 p^0 δ^d (p-q)δ_ss',
where we used Eq. (<ref>) on the right-hand side.
On the left-hand side, we integrate X over Σ_N,T by exploiting its Lorentz invariance, choosing a coordinate system where it becomes a constant-X^0 hyperplane Σ_,T with X^0=T. Then left-hand side in Eq. (<ref>) becomes
(l.h.s.)
= N_ψ^2M_ψ^2δ^d (p-q)∫^dP2P^02P^0e^2σP ·pN_ϕ^2
up,s -iP+m up,s'4m^2 = N_ψ^2M_ψ^2 δ^d (p-q) up,s2p^0(-ip+m)_ϕN_ϕ^2
uq,s' =2 p^0 δ^d (p-q),
where we used Eq. (<ref>) in the first line, and Eqs. (<ref>) and (<ref>) in the last line. Thus, Eq. (<ref>), and hence the completeness (<ref>), is proven.
§ SPINOR FIELD EXPANDED BY WAVE PACKETS
Now we define the creation and annihilation operators of the Lorentz-covariant wave packet.
We write a free spin-1/2 one-particle state of nth spinor particle Π,S;n and of its anti-particle Π,S;n^ c.
Similarly to the plane wave case, we define wave-packet creation operators by
A^†Π,S|0⟩
:= Π,S,n,
B^†Π,S|0⟩
:= Π,S,n^c,
and annihilation operators AΠ,S, BΠ,S by their Hermitian conjugate, with mass dimensions A^†Π,S=Π,S;n=0, etc.
Then, the completeness relation (<ref>) on the one-particle subspace reads
0αp,s;n
= ∑_S∫^2dΠ_ψp,s;n|Π,S;n0AΠ,S,
and similarly for the anti-particles.
Then, we can naturally generalize it to an operator relation that is valid on the whole Fock space:
αp,s
= ∑_S ∫^2dΠ_ψp,s;n|Π,S;nAΠ,S,
βp,s
= ∑_S ∫^2dΠ_ψp,s;n^c|Π,S;n^cBΠ,S.
Similarly, the completeness of the plane wave (<ref>) leads to the expansion of these creation and annihilation operators:
AΠ,S
= ∑_s∫^dp2p^0Π,S;n|p,s;nαp,s,
BΠ,S
= ∑_s∫^dp2p^0Π,S;n^c|p,s;n^cβp,s.
From the above equations, we can derive the anti-commutation relation of the creation and annihilation operators:
AΠ,SA^†Π',S'
= Π,S,n|Π',S',n1,
BΠ,SB^†Π',S'
= Π,S,n^c|Π',S',n^c1,
others
= 0,
where 1 denotes the identity operator in the whole Fock space, and Π,S|Π',S' is the inner product of the Lorentz covariant wave packets, given in Eq. (<ref>).
Finally, the free spinor field can be expanded as
ψx
= ∑_S∫^2dΠ_ψ
U(x,Π,S)AΠ,S
+V(x,Π,S)B^†Π,S
,
where the Dirac spinor wave functions are given by
U(x,Π,S) = ∑_s∫^dp2p^0 up,se^ipx p,s;n|Π,S;n = 12mN_ψx(-ip+m)Π uP,S = 12N_ψm^d-1√(2π)
-iξK_d+12ξξ^d+12
+K_d-12ξξ^d-12u(P,S),
V(x,Π,S) = ∑_s∫^dp2p^0 vp,se^-ipx Π,S;n^c|p,s;n^c =C U^*(x,Π,S),
where we have used the scalar wave function (<ref>). Here, ξ and ξ^μ are given in Eq. (<ref>) and below it, respectively.
The normalization conditions of these Dirac spinors are
∫d^d+1 X/(2π)^d δ(N·X+T) U(x,Π,S) U(x,Π,S')
= 2m δ_SS', ∫d^d+1 X/(2π)^d δ(N·X+T) V(x,Π,S) V(x,Π,S') = -2m δ_SS'
where we used Eqs. (<ref>) and (<ref>). The normalization is as same as the case of plane waves (<ref>), except for the integration of X.
Next, the completeness relations can be computed by
∑_S ∫d^d+1 X/(2π)^d δ(N·X+T) U(x,Π,S) U(x,Π,S)
= -i P M_ψ+m, ∑_S ∫d^d+1 X/(2π)^d δ(N·X+T) V(x,Π,S) V(x,Π,S)
= -i P M_ψ-m,
where we have used Eqs. (<ref>), (<ref>) and (<ref>). These relations are similar to that of plane waves (<ref>), except for the integration of X and factor M_ψ on the right-hand side.
§ ENERGY, MOMENTUM, AND CHARGE
In this section, we rewrite well-known operators in QFT, i.e. the total Hamiltonian, momentum, and charge operators, in the language of the spinor wave packet. Since the wave packet is not the momentum eigenstate, the total Hamiltonian and momentum operators cannot be diagonalized in the wave packet basis. However, the zero-point energy can be described in a fully Lorentz invariant manner using this basis. In Appendix <ref>, we also show the corresponding expressions for the scalar wave packet.
First, let us consider the convergent part of the total Hamiltonian and momentum operators. In the momentum space, these operators are given by
P^μ
:= ∫^dp2p^0∑_s p^μα^†p,sαp,s+β^†p,sβp,s.
Putting Eqs. (<ref>) and (<ref>) into the above expression, we get
P^μ = ∑_S,S' ∫^2dΠ_ψ∫^2dΠ'_ψ A^†ΠAΠ'+B^†ΠBΠ' p^μ_(Π,S),(Π',S'),
where
p^μ_(Π,S),(Π',S')
:= ∑_s ∫^dp2p^0Π,S|p,sp^μp,s|Π',S'.
We see that the total Hamiltonian and momentum operators are not diagonal on the wave packet basis, unlike the plane-wave eigenbasis.
Let us discuss the divergent part of this operator, coming from the zero-point energy:
P^μ_zero
:= ∑_s ∫^dp2p^0(-p^μ)β(p,s)β^†(p,s).
Similarly as above, putting Eq. (<ref>) into this commutator, we obtain
P^μ_zero
=∑_S,S' ∫^2dΠ_ψ∫^2dΠ'_ψ -2p^μ_(Π,S),(Π',S') Π',S'|Π,S1 =∑_S ∫^2dΠ_ψ -2p^μ_ψ1 = ∑_S ∫^d+1X/(2 π)^d^dP2P^02P ·N δN·X+T P^μ 1.
where we have used the completeness relation (<ref>) in the second line and, in the last line, the expectation value (<ref>) and the Lorentz-invariant phase-space volume element (<ref>).
Let the time-like normal vector N^μ and d space-like vectors N_⊥ i (i=1,…,d) compose an orthonormal basis: N· N_⊥ i=0 and N_⊥ i· N_⊥ j=δ_ij such that we can decompose P^μ into the components parallel and perpendicular to N^μ,
P^μ=- P ·NN^μ+∑_i P ·N_⊥i N_⊥i^μ.
When we put this into Eq. (<ref>), the perpendicular components vanish in a regularization scheme that makes Lorentz covariance manifest, namely in the dimensional regularization:
∫^dP2P^0 P ·N_⊥i P ·N
= ∫^dP2P^0 P^μP^ν N_μ N_⊥i ν = ∫^dP2P^0 1/d+1P·P N ·N_⊥i =0.
Therefore, the divergent part P^μ_zero has only one independent component, which can be interpreted as zero-point energy, defined in a manifestly Lorentz-invariant fashion:
E_zero
:= -N_μP^μ_zero = ∑_S∫^d+1X/(2 π)^d^dP2P^0(-2)P ·N^2 δN·X+T,
where P^μ_zero is the coefficient in front of 1 in the right-hand side of Eq. (<ref>).
We note that the zero-point energy should be a scalar as we have shown, otherwise, an infinite momentum would appear from a Lorentz transformation.
Physically, we would expect that the zero-point energy is independent of the choice of space-like hyperplane Σ_N,T.
We can show it by exploiting the Lorentz invariance of the expression (<ref>) by choosing N^μ=ℓ^μ (=1, 0), without loss of generality. Then, the zero-point energy reduces to the well-known form:
E_zero =∑_S∫_X^0=T^d X ^dP2π^d (-P^0).
It is remarkable that this zero point energy of the Dirac spinor is exactly -4 times that of a real scalar, shown in Eq. (<ref>) in Appendix, although the expression of momentum expectation values in Eqs. (<ref>) and (<ref>) are completely different between the spinor and scalar.
The factor 4 is the number of degrees of freedom, and the negative sign cancels the bosonic contribution in a supersymmetric theory.
Next, we consider the following charge operator
Q
:= ∫^dp2p^0∑_s[-α^†p,sαp,s+β^†p,sβp,s]
Substituting Eq. (<ref>) into the above expression, we obtain
Q
= ∑_S∫^2dΠ_ψ [-A^†Π,SAΠ,S+B^†Π,SBΠ,S],
where we have used
AΠ,S
= ∑_S∫^2dΠ'_ψΠ,S|Π',S'AΠ',S, BΠ,S
= ∑_S∫^2dΠ'_ψΠ',S'|Π,S BΠ',S,
which follows from Eq. (<ref>). This expression Eq. (<ref>) means that the creation operators A^†Π,S and B^†Π,S create the wave packet with charge -1, and +1 respectively. In fact,
Q A^†Π,S|0⟩ =QA^†Π,S|0⟩ =-∑_S'∫^2dΠ'_ψ A^†Π',S'Π',S'|Π,S|0⟩ = -A^†Π,S|0⟩,
is valid. Here we have used Eq. (<ref>) in the last line.
§ SUMMARY AND DISCUSSION
In this paper, we have proposed fully Lorentz-covariant wave packets with spin. In the conventional definition of the wave packet, spin dependence of the wave function in the momentum space is just given by Kronecker delta, δ_sS, and such a wave packet with spin transforms under Lorentz transformation mixing wave-packet states that have different centers of momentum and position. Our proposal overcomes this difficulty.
We have also proven that these wave packets form a complete basis that spans the spinor one-particle subspace in the manifestly Lorentz-invariant fashion.
Generalizing this completeness relation to the whole Fock space, we have shown that the creation and annihilation operators of plane waves can be expanded by that of these wave packets. This relation leads to the expansion of the spinor field in a Lorentz covariant manner. In addition to this, we have expressed the well-known operators in a wave packet basis: the total Hamiltonian, momentum, and charge operators. In particular, we have given the Lorentz covariant expression of zero point energy, in terms of centers of momentum and position of this wave packet.
Since neutrino oscillation requires wave-packet formulation, our new definition of wave packet may have an impact on this context. The novel Lorentz covariant basis that we propose will be useful in handling the wave packet quantum field theory <cit.> more transparently.
It may also be interesting that consider Bell's inequality of this wave packet state.
§.§ Acknowledgement
We thank Ryusuke Jinno for a useful comment.
This work is supported in part by the JSPS KAKENHI Grant Nos. 19H01899, 21H01107 (K.O.), and 22J21260 (J.W.).
§ APPENDIX
§ “SLANTED” FOLIATION
In this appendix, we briefly introduce “slanted” foliation which is necessary to write down the completeness relation of Lorentz-invariant wave packets in the fully Lorentz-invariant manner.
Let us consider the following spacelike hyperplane:
Σ_{n,τ} := { x ∈ ℝ^{1,d} | n·x + τ = 0 },
where n is an arbitrary fixed vector that is timelike-normal n^2=-1 and is future-oriented n^0>0, namely n^0=√(1+ n^2),
and τ∈ℝ parametrizes the foliation.
Physically, n is the normal vector to the hyperplanes and τ is the proper time for this foliation.
A schematic figure is given in the left panel in Fig. <ref>. We can generalize the equal-time foliation of the whole Minkowski space M^{d+1} to a general foliation by the set F_n = {Σ_n,τ}_{τ∈ℝ} of these spacelike hyperplanes.
In general, we may parametrize a component of n in the reference frame as the following linear combination:
n^μ = L^μ_ν(n) ℓ^ν,
where the “standard vector” is defined to be
(ℓ^μ)_{μ=0,…,d} = (1, 0⃗)
in any frame[
In the language of differential geometry, the basis-independent vector is written as n:= n^μ_μ with _μ= x^μ being the basis vectors in the reference coordinates.
Under the change of basis _μ→_μ'=Λ_μ^ν_ν, where '_μ:= x^μ, n should remain the same n→ n^μ_μ'= n^μΛ_μ^ν_ν!= n^ν_ν, that is, n^μ=Λ^μ_ν n^ν.
]
and L n is the “standard boost to the foliation.”
Concretely, for the vector with n^0=√(1+ n^2),
L(n)
= [ n^0        n⃗^T                           ]
  [ n⃗     1 + (n^0-1) n⃗ n⃗^T / n⃗^2         ] ,
where ^T denotes the transpose, 1 is the identity matrix in d dimensions,
n⃗ is given in the d×1 (column) matrix representation, and
L(n)
= (L^μ_ν(n))_{μ,ν=0,…,d}.
Note that L^{-1}(n) = L(-n⃗).
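As a quick cross-check of the explicit form of L(n) above, the following small Python sketch (our own illustration; the helper name standard_boost and the choice d = 3 are not from the paper) builds the matrix for a sample n⃗ and verifies that it maps the standard vector ℓ = (1, 0⃗) to n, satisfies the Lorentz condition L^T η L = η for the mostly-plus metric η = diag(-1, +1, …, +1), and obeys L^{-1}(n) = L(-n⃗).

import numpy as np

d = 3
eta = np.diag([-1.0] + [1.0] * d)            # mostly-plus Minkowski metric (n^2 = -1 for timelike n)

def standard_boost(nvec):
    # L(n) of this appendix: maps the standard vector ell = (1, 0) to n = (n^0, nvec).
    nvec = np.asarray(nvec, dtype=float)
    n0 = np.sqrt(1.0 + nvec @ nvec)
    L = np.zeros((d + 1, d + 1))
    L[0, 0] = n0
    L[0, 1:] = nvec
    L[1:, 0] = nvec
    L[1:, 1:] = np.eye(d) + (n0 - 1.0) * np.outer(nvec, nvec) / (nvec @ nvec)
    return L

nvec = np.array([0.3, -0.7, 0.2])
L = standard_boost(nvec)
ell = np.array([1.0, 0.0, 0.0, 0.0])
print(L @ ell)                                               # reproduces n^mu = (n^0, nvec)
print(np.allclose(L.T @ eta @ L, eta))                       # Lorentz condition L^T eta L = eta
print(np.allclose(np.linalg.inv(L), standard_boost(-nvec)))  # L^{-1}(n) = L(-n)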
Now an equal-time hyperplane x^0 = τ in the arbitrary reference frame is written as Σ_{ℓ,τ} because ℓ·x + τ = -x^0 + τ = 0 on it.
For any given foliation F_n, we may Lorentz-transform from the reference frame x to the “time-slice” frame x̂ that gives n̂^μ = ℓ^μ:
x^μ → x̂^μ = L_ν^μ(n) x^ν,
n^μ → n̂^μ = L_ν^μ(n) n^ν (= ℓ^μ),
where we used (L^{-1})^μ_ν = L_ν^μ as usual.
In the time-slice coordinate system x̂, the same plane is written as
Σ̂_{n̂,τ} := {x̂ = L^{-1}(n) x, x ∈ ℝ^{1,d} | n̂·x̂ + τ = 0} (= Σ_{n,τ}).
As said above, since n̂·x̂ + τ = -x̂^0 + τ = 0 on Σ̂_{n̂,τ}, they are equal-time hyperplanes parametrized by τ ∈ ℝ in the x̂ coordinate system.
A schematic figure is given in the right panel in Fig. <ref>.
§ WIGNER REPRESENTATION
In this appendix, we briefly review the Wigner representation in the case of massive one-particle state p,s to spell out our notation; see e.g. Ref. <cit.> for more details.
Here and hereafter, we neglect the label for the particle and antiparticle since it is irrelevant for the current discussion.
The Poincaré transformation on a plane-wave state can be written as
|p,s⟩
→ U(Λ,a)|p,s⟩,
where
U(Λ,a)
= e^{-ia·P} U(Λ)
= e^{ia^0 H - i a⃗·P⃗} U(Λ),
in which P = (P^0, P⃗) is the generator of spacetime translations.
Since the translational part is the same as the scalar case, we concentrate on the Lorentz transformation.
Without loss of generality, we can choose s to be the spin of the particle in its rest frame:
|p,s⟩
= U(L(p)) |0,s⟩,
consistently with the definition (<ref>) as we will see below.
Here, s is the spin eigenvalue for the rotation in, say, x^1-x^2 plane in the rest frame and the standard boost Lp is defined by
p^μ =: L^μ_ν(p) m ℓ^ν,
in which ℓ^μ is given in Eq. (<ref>).
Concretely, the standard boost to p can be written in terms of the “standard boost to a foliation” (<ref>) as[
In general, these two are different concepts, L(p) ≠ L(n), since p and n are different.
]
L(p)
= L(p⃗/m).
Since |p,s⟩ has an internal degree of freedom s, the Lorentz group representation for this state could be nontrivial. To deal with this, we introduce the well-known procedure of the Wigner representation.
First, under the Lorentz transformation, the plane-wave basis transforms as
U(Λ)|p,s⟩
= U(Λ) U(L(p)) |0,s⟩
= U(L(Λp)) U(L^{-1}(Λp)) U(Λ) U(L(p)) |0,s⟩
= U(L(Λp)) U(W(Λ,p)) |0,s⟩,
where
W(Λ,p)
:= L^{-1}(Λp) Λ L(p).
Here, W(Λ,p) corresponds to a rotation because this transformation leaves the rest-frame momentum mℓ invariant. We call this the Wigner rotation in SO(d).[
We sloppily write SO(d) when it is to be understood as Spin(d).
]
Next, we may always write
U(W(Λ,p)) |0,s⟩
= ∑_{s'} |0,s'⟩ D_{s's}(W(Λ,p)),
where D is a finite-dimensional unitary representation of SO(d):
∑_s D_{s''s}(W(Λ,p)) D^*_{s's}(W(Λ,p)) = δ_{s's''}.
Putting this into Eq. (<ref>), we obtain Eq. (<ref>).
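The little-group property of W(Λ,p) is easy to verify numerically. The sketch below (our own illustration, for d = 3 and m = 1; the helper boost_to is ours, not taken from the paper) composes L^{-1}(Λp) Λ L(p) for a sample boost Λ and checks that the result leaves ℓ invariant and acts as a pure spatial rotation.

import numpy as np

def boost_to(nvec):
    # Standard boost L(n) sending ell = (1, 0) to (sqrt(1 + n.n), n).
    nvec = np.asarray(nvec, dtype=float)
    n0 = np.sqrt(1.0 + nvec @ nvec)
    L = np.eye(4)
    L[0, 0] = n0
    L[0, 1:] = nvec
    L[1:, 0] = nvec
    L[1:, 1:] += (n0 - 1.0) * np.outer(nvec, nvec) / (nvec @ nvec)
    return L

m = 1.0
ell = np.array([1.0, 0.0, 0.0, 0.0])
p_vec = np.array([0.4, 0.1, -0.3])
L_p = boost_to(p_vec / m)                     # standard boost L(p) = L(p/m)
p = m * (L_p @ ell)                           # p^mu = L(p) m ell

Lam = boost_to(np.array([0.0, 0.9, 0.0]))     # a sample Lorentz transformation (a boost)
W = np.linalg.inv(boost_to((Lam @ p)[1:] / m)) @ Lam @ L_p   # W = L^{-1}(Lambda p) Lambda L(p)

print(np.allclose(W @ ell, ell))              # W leaves ell, i.e. the rest-frame momentum, invariant
R = W[1:, 1:]
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))   # spatial block is a rotation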
§ ENERGY, MOMENTUM, AND NUMBER OPERATOR IN SCALAR CASE
We give the energy, momentum, and number operators for a real scalar field in terms of the Lorentz-invariant wave-packet basis. First, we briefly review the scalar wave packet in QFT, and then we will show newly-found expressions of these operators in terms of the Lorentz-invariant scalar wave packets.
The free field is usually expressed in the plane wave basis:
ϕ(x)
= ∫ d^d p/(2p^0) [α(p) e^{ip·x}/(2π)^{d/2} + α^†(p) e^{-ip·x}/(2π)^{d/2}],
where α(p) and α^†(p) are the annihilation and creation operators of the plane waves, which satisfy [α(p), α^†(p')] = 2p^0 δ^d(p⃗ - p⃗'), etc.
Now, we define a wave-packet creation operator by <cit.>
A^†(Π)|0⟩ := |Π⟩,
and an annihilation operator AΠ by its Hermitian conjugate.
The completeness of the scalar wave packet (<ref>) leads to the following expansion of the creation and annihilation operators of the plane waves:
α(p)
= ∫ d^{2d}Π ⟨p|Π⟩ A(Π),
α^†(p)
= ∫ d^{2d}Π A^†(Π) ⟨Π|p⟩.
Thus, the free scalar field can be expanded as <cit.>
ϕ(x)
= ∫ d^{2d}Π [⟨x|Π⟩ A(Π) + A^†(Π) ⟨Π|x⟩],
where the wave function is given in Eq. (<ref>).
Now let us rewrite well-known operators in QFT, i.e. the total Hamiltonian, momentum, and number operators, into the language of the scalar wave packet.
First, in momentum space, the convergent part of the number operator is described by
N
:= ∫ d^d p/(2p^0) α^†(p) α(p).
Substituting Eq. (<ref>) into the above expression, we obtain
N
= ∫ d^{2d}Π ∫ d^{2d}Π' A^†(Π) ⟨Π|Π'⟩ A(Π')
= ∫ d^{2d}Π A^†(Π) A(Π).
On the second line, we have used
A(Π)
= ∫ d^{2d}Π' ⟨Π|Π'⟩ A(Π'),
A^†(Π)
= ∫ d^{2d}Π' A^†(Π') ⟨Π'|Π⟩,
which follows from Eq. (<ref>). From Eq. (<ref>), we can read off a Lorentz-covariant number-density operator in the 2d-dimensional phase space:
N(Π) = A^†(Π) A(Π).
We now consider the divergent part of the plane-wave number operator, coming from the zero-point oscillation:
N_zero
:= ∫ d^d p/(2p^0) (1/2) [α(p), α^†(p)].
Putting Eq. (<ref>) into the above expression, we obtain
N_zero
= ∫ d^{2d}Π ∫ d^{2d}Π' (1/2) ⟨Π|Π'⟩ ⟨Π'|Π⟩ 1 = (1/2) ∫ d^{2d}Π 1.
Therefore, including the divergent part, the number-density operator can be described by
N(Π) + N_zero(Π) := A^†(Π) A(Π) + 1/2.
From this expression, it can be interpreted that there is one zero-point oscillation per 2d-dimensional phase space volume.
Next, we consider the convergent part of the total Hamiltonian and momentum operators. In the momentum space, these operators are given by
P^μ
:= ∫ d^d p/(2p^0) p^μ α^†(p) α(p).
Putting Eq. (<ref>) into the above expression, we get
P^μ = ∫ d^{2d}Π ∫ d^{2d}Π' A^†(Π) A(Π') p^μ_{Π,Π'},
where p^μ_Π,Π' is given in Eq.(<ref>).
We see that the total Hamiltonian and momentum operators are not diagonal on the wave packet basis, unlike the plane-wave eigenbasis.
Now, let us discuss the divergent part of this operator, coming from the zero-point energy:
P^μ_zero
:= ∫ d^d p/(2p^0) p^μ (1/2) [α(p), α^†(p)].
Putting Eq. (<ref>) into the above commutator, we obtain
P^μ_zero
= (1/2) ∫ d^{2d}Π ∫ d^{2d}Π' p^μ_{Π,Π'} ⟨Π'|Π⟩ 1
= (1/2) ∫ d^{2d}Π p^μ_ϕ 1
= ∫ d^{d+1}X/(2π)^d ∫ d^d P/(2P^0) (-P·N) δ(N·X + T) P^μ 1
= ∫ d^{d+1}X/(2π)^d ∫ d^d P/(2P^0) (P·N)^2 δ(N·X + T) N^μ 1.
where we have used the completeness relation (<ref>) in the second line,
the formula (<ref>) in the third line,
and
the same argument as in Eq. (<ref>) in the last line.
It is noteworthy that the result becomes the same as in the spinor case (<ref>) up to the factor -4.
We may define the zero-point energy in a manifestly Lorentz-invariant fashion:
E_zero
:= -N_μ P_zero^μ = ∫ d^{d+1}X/(2π)^d ∫ d^d P/(2P^0) (P·N)^2 δ(N·X + T),
where P_zero^μ is the coefficient of 1 in the right-hand side of Eq. (<ref>).
Physically, we expect that the zero-point energy should be independent of the choice of the space-like hyperplane Σ_N,T.
We can show it by exploiting the Lorentz invariance of the expression (<ref>) to choose N^μ=ℓ^μ (=1, 0), without loss of generality. Then, the zero-point energy reduces to the well-known form:
E_zero = ∫_{X^0=T} d^d X ∫ d^d P/(2π)^d (1/2) P^0.
|
http://arxiv.org/abs/2307.04438v1 | 20230710092926 | Reconfigurable Intelligent Surface Assisted Railway Communications: A survey | [
"Aline Habib",
"Ammar El Falou",
"Charlotte Langlais",
"Marion Berbineau"
] | eess.SP | [
"eess.SP"
] |
Reconfigurable Intelligent Surface Assisted Railway Communications: A survey
Aline Habib1, Ammar El Falou2, Charlotte Langlais1, Marion Berbineau4
1 Mathematical and electrical engineering department, CNRS UMR 6285 Lab-STICC, IMT Atlantique, Brest, France
2 CEMSE Division, King Abdullah University of Science and Technology (KAUST), Saudi Arabia
4 COSYS-LEOST, Université Gustave Eiffel, Villeneuve d'Ascq, France
Email: {aline.habib, charlotte.langlais}@imt-atlantique.fr, [email protected], [email protected]
The number of train passengers and the demand for high data rates to handle new technologies such as video streaming and IoT are continuously increasing. Therefore, the exploration of the millimeter-wave (mmWave) band is a key technology to meet this demand. However, the high penetration loss makes mmWave very sensitive to blocking, limiting its coverage area. One promising, efficient, and low-cost solution is the reconfigurable intelligent surface (RIS). This paper reviews the state of the art of RIS for railway communications in the mmWave context. First, we present the different types of RIS and review some optimization algorithms used in the literature to find the RIS phase shifts. Then, we review recent works on RIS in the railway domain and provide future directions.
RIS, Railway communications, mmWave.
§ INTRODUCTION
The need to double the capacity of the existing rail networks and, at the same time, to increase the overall quality of service is leading to a drastic increase in the need for high-data-rate, robust, and low-latency data exchange between the different actors of the rail system. This multiplication of transmission needs ultimately leads to problems of spectrum scarcity. In this context, using mmWave bands opens up new opportunities. However, mmWaves suffer from very high attenuation and high sensitivity to various masking effects. Here, Reconfigurable Intelligent Surfaces (RIS) offer promising application use cases.
The Reconfigurable Intelligent Surface, known in the literature under several other names such as Software-Controlled Metasurface <cit.>, Intelligent Reflecting Surface (IRS) <cit.>, Large Intelligent Surface (LIS) <cit.>, and Reconfigurable Smart Surface (RSS) <cit.>, is an electromagnetic-based reconfigurable structure
that turns the random nature of the propagation channel into a controllable and programmable radio environment. RIS is a thin planar meta-surface made of several low-cost reflective elements <cit.>. Each RIS element adjusts the phase and amplitude of the incident wave to reflect it into a beam toward the target direction. This improves the signal quality and extends the coverage area especially when the direct link is blocked. The paper's main objective is to provide the reader with the basic elements to understand RIS and its interest in a railway communication environment. To do so, we review the literature in the domain and propose some future research directions.
The rest of the paper is organized as follows. Section <ref> provides a literature overview related to RIS, such as the different RIS structures and types, and their opportunity in the context of mmWave communications. We stress the need for realistic channel models in order to properly evaluate the performance of RIS-assisted systems. Section <ref> focuses on very recent works investigating RIS-assisted systems for railway communications. Finally, in Section <ref>, some future directions are drawn, and Section <ref> concludes the paper.
§ RECONFIGURABLE INTELLIGENT SURFACE
§.§ RIS General Overview
The main objective of a RIS is to provide a programmable radio environment between a transmitter (Tx), typically a base station (BS) in the downlink case, and a receiver (Rx), typically a remote user equipment (UE), by changing the phase shifts and amplitude of the RIS incident wave as follows <cit.>
z_n=β_ne^jθ_n,
where z_n is the reflection coefficient of the n^th element, β_n and θ_n are the adjustments in amplitude and phase due to the n^th element.
As the RIS should not encompass too many RF and signal processing resources to maintain a low level of energy consumption and complexity, the BS computes the needed tunable parameters and transfers commands to each RIS element thanks to a smart controller <cit.> as seen in Fig.<ref>.
To adjust phase shifts and amplitude of the incident wave, RIS consists of adjustable components, such as diodes and liquid crystals. The diodes adjust the signal by changing the bias voltage, while the liquid crystals adjust the electromagnetic signal by changing material parameters such as conductivity and permeability <cit.>. Indeed, the PIN diode-based RIS consists of three layers: 1) The outer layer with printed metal patches on a dielectric substrate. This layer directly processes the incident signals. 2) The intermediate layer
composed of a copper panel to avoid signal energy loss.
3) The inner layer is a control board activated by a programmable digital electronic circuit (FPGA), allowing the real-time adjustment of the RIS elements' reflection coefficients <cit.>.
Two reflexion paradigms govern propagation in the context of RIS-assisted communication systems, namely, the specular reflection paradigm and the scattering reflection paradigm, <cit.>. The differences are mainly related to the relation between the size of RIS A_t and the distance D between BS-RIS or RIS-UE, as follows:
* The specular reflection paradigm: the transmission occurs in the near-field, i.e., D<d_lim = 2A_t/λ[d_lim denotes the Rayleigh distance and is defined by d_lim=2A_t/λ with A_t the RIS area and λ the wavelength <cit.>.]. The path loss, in this case, depends on the summation of the distances between BS-RIS and RIS-UE.
* The scattering reflection paradigm: the transmission occurs in the far-field, i.e., D>d_lim. In this case, the path loss depends on the product of the BS-RIS and RIS-UE separation distances.
In the case of a passive RIS (β_n ≤ 1), the RIS elements reflect the signal without amplification. Thus, in the context of scattering reflection communications (far-field), and by assuming the optimal phase shifts, the received power at the UE for the indirect link via passive RIS is expressed as <cit.>
P_r^UE = P_t G_t G_r (λ/4π)^4 (d_0)^{2μ-4} N^2 / (d_1 d_2)^μ,
where P_t is the transmitted power at the BS, G_t and G_r are the transmit and receive antenna gains at the BS and the UE, respectively, d_0 the reference distance in the free space, d_1 and d_2 are BS-RIS distance and RIS-UE distance, μ is the path loss exponent depending on the environment type (e.g., μ≥ 3 for urban environments), and N is the number of RIS elements. Thus, the passive RIS gives a gain proportional to N^2. However, the passive RIS has limitations due to the double-path loss effect. Indeed, the signal traverses two cascaded channels, the Tx-RIS link and the RIS-Rx link <cit.>. Thus, the received power via the indirect link could be greater
than the power of the direct link, if N is large and/or if
the direct link is weak or blocked. To illustrate this concept, we plot in Fig. <ref> the power received at the UE via an
attenuated direct link, a direct link without attenuation, and the indirect
link via the RIS, versus the number of RIS elements. The mmWave channel links are generated using an extended version of the New York University simulator NYUSIM <cit.>. Note that to verify the RIS scattering reflection paradigm, the distances d_1, and d_2 must be in the far-field region. As RIS size increases, the distance where the RIS is in the near-field also increases. Thus, for the distances d_1, and d_2 to be in the far-field and equation (<ref>) to be valid, N must not exceed a certain N_max, computed from d_lim, the Rayleigh distance, and represented by a square in Fig. <ref> <cit.>. The behavior of the RIS in the near-field is an interesting research topic.
§.§ RIS types
To overcome this limitation and obtain an efficient RIS when the direct link exists or the number of RIS elements is low, the authors of <cit.> propose an active RIS that can amplify the reflected signals through amplifiers embedded in the RIS elements. The simulation results in a direct link scenario without attenuation for 256 RIS elements reveal a negligible sum-rate gain
of 3 % using the passive RIS, while their proposed active RIS offers a significant sum-rate gain of 67 % compared to the case without RIS.
Nevertheless, a RIS with a large number of active elements consumes more energy. Thus, the authors in <cit.> propose a novel type of RIS composed of active and passive reflective elements, called hybrid RIS, to deal with the limited power budget of the RIS.
RIS based on continuous phase shifts is considered an ideal system that is difficult to implement in practice. Therefore, RIS based on finite discrete phase shifts is the alternative solution to cope with this hardware constraint. To this end, the authors in <cit.> compare the performance of RIS systems with continuous and discrete phase shifts and they find that 3 levels of quantization are sufficient to obtain full diversity.
§.§ RIS optimization
The efficient functioning of the RIS is strongly affected by the adapted phase shifts θ_n. For instance, in Single Input Single Output (SISO) systems, the optimal phase shift of a RIS is easily determined analytically as follows <cit.>
θ_n=θ_tn+θ_nr.
where θ_tn and
θ_nr are the phases of the LoS paths in the BS-RIS
and RIS-UE channels, respectively.
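A toy numerical check of this SISO rule (entirely our own illustration, with synthetic Rayleigh-fading coefficients and our own sign convention for the compensating phases) shows the coherent combining gain obtained when each element's phase is set as above, and how much of that gain survives the 3-level phase quantization discussed earlier.

import numpy as np

rng = np.random.default_rng(0)
N = 256
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # BS -> RIS coefficients
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # RIS -> UE coefficients

theta_opt = -(np.angle(h) + np.angle(g))       # compensate both path phases (sign convention is ours)

def power(theta):
    return np.abs(np.sum(h * g * np.exp(1j * theta)))**2

p_opt = power(theta_opt)
p_rand = power(rng.uniform(0, 2 * np.pi, N))   # unconfigured RIS with random phases
theta_q = np.round(theta_opt / (2 * np.pi / 3)) * (2 * np.pi / 3)   # 3 quantization levels
print("optimal / random power ratio :", p_opt / p_rand)            # roughly of order N
print("3-level / optimal power ratio:", power(theta_q) / p_opt)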
However, it is hard to find the optimal phase shifts analytically in the case of Multiple Input Multiple Output (MIMO) systems. To this end, an optimization algorithm is needed. <cit.> studied multi-user Multi Input Single Output (MISO) downlink communications assisted by RIS, where the objective is to maximize the weighted sum rate to find the optimized passive beamforming θ_n and the optimized precoding at the BS.
To solve this non-convex problem, they used the Lagrangian Dual Transform which transforms the sum-of-logarithms-of-ratio to an alternative form.
The authors in <cit.> discussed an indoor MISO multi-user system with a channel model based on the Rician K-factor. The RIS phase shifts were configured as follows:
θ_n^* = ∠(H_d^H) - ∠(H_l^H) - ∠(H),
where H_d is the direct channel between the Tx and the Rx, H is the channel between Tx and RIS, and H_l is the channel between the RIS and the lth user.
In <cit.>, the authors adopted a low-complex algorithm called the cosine similarity algorithm. The latter aims to find the sub-optimal phase shifts of the RIS that maximize the channel gain. Moreover, to minimize the transmitted power given the bit error rate for a RIS-assisted single-user multipath uplink system,
the authors of <cit.> propose an iterative algorithm to jointly optimize precoding and passive beamforming. In addition, a deep learning algorithm is applied in <cit.> to maximize the received signal-to-noise ratio and find the optimal phase shifts of RIS.
§.§ RIS versus Relay
Both RIS and relay aim to improve signal quality and coverage. However, there are two main differences.
* In the case of RIS, a power supply is only needed to configure the RIS components based on low-cost materials (diodes, switches...). Once the configuration is done, the RIS becomes passive, and no power supply is needed <cit.>. However, relays are generally considered active devices connected to active electronics such as analog-to-digital converters, digital-to-analog converters, amplifiers, etc., which require a power supply for operation. As a result, relays are more complex to implement and consume more energy than RIS <cit.>.
* A RIS operates in a full duplex mode while relays generally work in a half-duplex mode. Relays can still operate in full duplex mode, but this increases their cost, since appropriate antennas and analog and/or digital signal processing, to eliminate loop-back self-interference, are required <cit.>.
§.§ RIS is an opportunity for mmWave communications
The mmWave band, ranging from 30 to 300 GHz, offers enormous free bandwidth and high data rate possibilities <cit.>, unlike the overloaded low-frequency spectrum. However, it is very vulnerable to oxygen absorption and rain attenuation, and also suffers from penetration loss that makes mmWave signals easily blocked. Therefore, the coverage of mmWave communications is limited<cit.>. On the other hand, when the direct link is blocked or largely attenuated, a RIS is a competitive solution to extend coverage area and connectivity <cit.>. The location of the RIS should be optimized to obtain two efficient connections: the BS-RIS link and the RIS-UE link.
The authors in <cit.> discuss the size limitation of the RIS in low frequencies below 6 GHz, which makes their deployment in this band inefficient. A study of the specific propagation characteristics of the terahertz band is needed to use RIS in these frequencies, and the most important implementation of RIS today is in the mmWave band.
In the literature, the most used channels in RIS-assisted systems are the theoretical channels such as Rice for Line-of-Sight (LOS) environments, and Rayleigh for non-LOS (NLOS) <cit.>, <cit.>. To fill the gap towards realistic channel modeling and simulator, the authors in <cit.> propose a novel geometrical channel simulator, called SimRIS. This simulator is based on statistical modeling and can be used in indoor and outdoor environments at 28 and 73 GHz frequencies. Moreover, in <cit.>
the authors extend QuaDRiGa, a simulator used to model MIMO radio channels at sub-6GHz and mmWave frequencies, to handle RIS. This simulator is convenient for RIS-assisted MIMO systems with a mobile Rx or mobile RIS. In addition, <cit.> discusses the extension of NYUSIM, a mmWave channel simulator based on extensive measurements and well-used to assess MIMO systems <cit.>, to generate realistic channels for RIS-assisted systems.
§ RIS-ASSISTED RAILWAY COMMUNICATIONS
§.§ Railway environments characteristics
Railway environments are known to be very complex and harsh from a radio point of view. Various obstacles such as pylons supporting the catenary and rapid transitions between different scenarios (cutting/tunnel, cutting/viaduct) can create severe radio impairments. Railway tunnel size and shape are very specific, depending on the category of the train. Radio propagation inside tunnels is often modeled using ray-tracing tools <cit.>, <cit.>. It is also important to mention that MIMO system performance in tunnels is subject to possible impairments depending on spatial correlation in the tunnel and also the keyhole phenomenon <cit.>. Due to high speed, the train can rapidly go through diverse scenarios. In addition, Doppler effects and possible interference due to the proximity of the high voltage (catenary) in the vicinity of the antennas render the railway environments very specific compared to the indoor, urban, or suburban environments generally considered today for the use of mmWave communication systems. A detailed description of railway-specific environments can be found in <cit.>.
Considering the capability of RIS to solve the blockage problems in mmWave wireless communications, the use of RIS for railway communications has recently been considered as a promising candidate.
§.§ RIS-assisted railway communications
§.§.§ RIS for high-speed trains
<cit.> discusses the need for RIS in the High-Speed Railway (HSR) environment for mmWave communications to improve the signal quality, which suffers from frequent blockages due to high-speed trains. The authors apply a Deep Reinforcement Learning (DRL)-based approach to jointly optimize the RIS phase shifts and the BS beamforming for spectral efficiency maximization. The results show a significant improvement in spectral efficiency performance using DRL compared to the traditional approach.
In <cit.>, the authors describe how to use RIS on high-speed trains to improve communication performance by providing beamforming, interference mitigation, and reducing signal attenuation. They present a detailed discussion of the challenges associated with the RIS deployment on these trains, such as the need for tracking of the train, low latency, and high-speed RIS control, and the impact of train vibration on the RIS performance. They also propose the DRL approach to solve the sum rate maximization problem.
<cit.> deals with interference suppression in an HSR network, composed of a BS, a mobile relay (MR) located on the train, a RIS located near the MR, and an interference
source. The authors maximize the channel capacity
using a DRL solution and they consider outdated channel state information (CSI) to take into account the motion of the train. The authors found that deploying a RIS in close proximity to the embedded MR improves interference suppression and that their algorithm is more effective in suppressing interference than other optimization algorithms
based on mathematical formulations.
<cit.> proposes a new interrupt flow scheduling approach for RIS-assisted downlink mmWave HSR communications where multiple mobile relays exist. Given the existence of eavesdroppers, the BS schedules a number of flows for each MR when the MR flow quality of service (QoS) exceeds the QoS requirement. The authors seek to maximize the scheduled flow number, find the optimal beamforming, the optimal RIS phase shifts, and whether or not to schedule the RIS discrete phase shifts, and they find that RIS can enhance communication security by reducing the eavesdropping capacity and extending the coverage area in HSR environments.
§.§.§ RIS in railway tunnels
In <cit.>, the authors have considered a simple two-dimensional empty tunnel. Using the image theory approach and a vertical blocking element between a Tx and an Rx inside the tunnel, they have shown that the use of RIS located on the ceiling of the tunnel can reduce the Blocking Probability (BP) of the signal between Tx
and Rx.
An increase in the number of RIS
and optimization of the Tx position lead to an additional decrease in BP. The increase in
distance between RIS and Tx can extend the effective range of RIS for a given BP. This study could be extended by considering a train inside a 3D tunnel.
§.§.§ RIS for passengers inside trains
Recently RIS technology has been studied to extend the coverage area in the mmWave band inside an airplane cabin <cit.>. The authors aim to minimize the number of RIS deployed in this system while ensuring the user data rate remains above a threshold. Besides, they compare the performance of this system for two RIS positions in the cabin corridor near the seat and above the center seat. This study could be easily transposed to the case of the inside of a high-speed train or inside a metro to guarantee a given throughput for the passengers.
§ FUTURE DIRECTIONS
As discussed in the previous sections, RIS offers a promising low-cost solution to solve the blocking problems in railway networks since it improves the efficiency and reliability of high-speed trains, solves the interference problem, and extends the coverage area through controlled signal reflection. In the case of high-speed trains, channel estimation for RIS-assisted communications is a crucial challenge due to the unexpected rapid change of environments. Future research directions could explore the case of RIS-assisted wireless communications in tunnels, especially when the vertical cross-section of the train is large compared to the tunnel cross-section, which increases the probability of signal blockage. In addition, the case where the train moves from the inside of a tunnel to the outside is particularly challenging; this scenario is increasingly relevant with the development of urban transport, and in particular driverless metro systems, which require high-data-rate transmissions. The optimization of RIS-assisted communications in this case will require the development of realistic channel models. It would also be interesting to study the optimal location of the RIS, the number of RIS elements, or the number of RISs themselves, needed in these systems to maximize the coverage inside the tunnel and also maximize the ever-increasing passenger throughput demand onboard the trains.
§ CONCLUSION
This paper presents a survey on RIS-assisted communications for railway applications, particularly in the mmWave band. First, we have defined the RIS concept, explaining its structure and the different types of RIS. A review of the various optimization algorithms used in the literature for RIS-assisted systems is proposed, and we highlight the ability of RIS to solve the blocking problem of mmWave. In the last section, the paper outlines
the characteristics of the railway environments and details some recent works concerning the use of RIS in high-speed trains. This topic is a very active field of research and we have proposed some future directions for RIS-assisted railway communications.
§ ACKNOWLEDGMENT
This work was funded by the council of the Region Bretagne, under the grant MILLIRIS.
|
http://arxiv.org/abs/2307.04385v1 | 20230710074314 | Growing Fast without Colliding: Polylogarithmic Time Step Construction of Geometric Shapes | [
"Nada Almalki",
"Siddharth Gupta",
"Othon Michail"
] | cs.DS | [
"cs.DS",
"cs.CG",
"cs.RO"
] |
Growing Fast without Colliding
Department of Computer Science, University of Liverpool, [email protected]
Department of Computer Science, University of Warwick, [email protected]
Department of Computer Science, University of Liverpool, [email protected]://orcid.org/0000-0002-6234-3960
Theory of Computation → Computational Geometry; Theory of Computation → Design and analysis of algorithms
Growing Fast without Colliding: Polylogarithmic Time Step Construction of Geometric Shapes
Othon Michail
August 12, 2023
Building on two recent models of Almalki and Michail <cit.> and Gupta et al. <cit.>, we explore the constructive power of a set of geometric growth processes. The studied processes, by applying a sequence of centralized, parallel, and linear-strength growth operations, can construct shapes from smaller shapes or from a singleton exponentially fast. A technical challenge in growing shapes that fast is the need to avoid collisions caused, for example, when the shape breaks, stretches, or self-intersects. We distinguish two types of growth operations —one that avoids collisions by preserving cycles and one that achieves the same by breaking them— and two types of graph models. We study the following types of shape reachability questions in these models. Given a class of initial shapes ℐ and a class of final shapes ℱ, our objective is to determine whether any (some) shape S ∈ℱ can be reached from any shape S_0 ∈ℐ in a number of time steps which is (poly)logarithmic in the size of S. For the reachable classes, we additionally present the respective growth processes. In cycle-preserving growth, we study these problems in basic classes of shapes such as paths, spirals, and trees and reveal the importance of the number of turning points as a parameter. We give both positive and negative results. For cycle-breaking growth, we obtain a strong positive result —a general growth process that can grow any connected shape from a singleton fast.
§ INTRODUCTION
In recent years, the connection between algorithmic frameworks and the natural world has become increasingly evident and is opening up new research avenues. The principles and mechanisms underlying biological systems can be often modeled using computational approaches. This has led to the development of new computational frameworks and models inspired by biological systems. Examples are brain computation <cit.>, passively-dynamic systems <cit.>, and mobile robotics <cit.>. Recent research on programmable matter <cit.> is concerned with the algorithmic control of physical properties of programmable materials, such as their shape.
A set of recent models in the theory of DNA self-assembly and reconfigurable robotics have attempted to incorporate the concept of growth, which is a fundamental process in organisms. The processes that can be described in those models mimic the process of growth and development in biology. This, on one hand, enables the efficient algorithmic construction of complex shapes and structures and on the other might give insight into some of the algorithmic properties underlying biological systems.
Advances in geometric algorithms have led to significant progress in the theory of modular robotics and self-reconfigurable systems. The underlying systems consist of small, simple, and interchangeable components that can reconfigure themselves into various shapes and structures <cit.>. The efficient construction of geometric shapes is an important algorithmic objective in this context. This work, building on the models of Almalki and Michail <cit.> and Gupta et al. <cit.> further explores the algorithmic and structural properties of geometric growth processes.
§.§ Our Approach and Contribution
We explore the properties of a growth process that was proposed and largely left open in <cit.>.
It is the most general of the growth processes studied in <cit.> and the one in which there is no a priori restriction on the set of nodes that can grow in a given time step. Two different types of this process and its underlying growth operations can be identified: cycle-preserving growth and cycle-breaking growth. Intuitively, the former avoids collisions by preserving cycles, and the latter achieves the same by breaking them. For these two types of growth processes, the present study revolves around the following types of shape-reachability problems:
Given a class of initial shapes ℐ and a class of final shapes ℱ, determine whether any (some) shape S ∈ℱ can be reached from any shape S_0 ∈ℐ in a number of time steps which is (poly)logarithmic in the size of S. In case of a positive answer, we additionally want to provide the respective growth process.
All studied processes and constructions in this paper are centralized. We typically solve a given instance of the problem by designing a parameterized growth process —i.e., a centralized schedule of parallel growth operations— that works for all pairs of input-output shapes in the respective classes. Lower bounds for specific classes of shapes are established by proving that any growth process would fail to be efficient for all pairs of input-output shapes drawn from these classes. Distributed solutions fall beyond the scope of the present paper and form an interesting direction for future research. The main reason for adopting a centralized perspective is that both the centralized and distributed properties of such processes remain largely unexplored and the centralized is a more natural starting point. Centralized lower bounds immediately hold in the distributed case and centralized upper bounds can hint first —possibly inefficient— distributed solutions.
Collision avoidance is a core technical challenge in coming up with exponentially fast growth schedules. Note that if the requirement to avoid collisions —and a few other modeling assumptions related to collisions— was dropped, it would become straightforward to grow some classes of shapes that are otherwise hard to grow fast. For example, any spanning tree —and consequently any connected shape with such a spanning tree— having a bounded number of turning points on every root-to-leaf path could be grown as follows. We would first grow the tree of turning points by a parallel BFS, each time step t generating the turning points at turning-point-distance t from the root. This is linear in the maximum number of turning points on a path and possibly violates the requirement of nodes being collocated. We would then grow in parallel all segments between consecutive turning points to grow the tree to its final size. The latter can be done in time logarithmic in the length of the longest segment. Again, parallel growth could cause intersections between branches of the tree that we have now ignored. Overall, we would pay a logarithmic number of time steps. It will become evident that in the presence of collisions —and it is necessary to take collisions into account for practical implementations— more elaborate approaches are needed to get fast growth of shape classes as basic as paths and trees.
For cycle-preserving growth, in both the adjacency and connectivity graph models, we show that different graph classes can be constructed within (poly)log n time steps, n being the size of the final shape throughout.[It is important to note that we employ two distinct notions of time. The first refers to the time steps involved in the growth process, while the second refers to the running time of a centralized algorithm responsible for determining reachability between shapes and providing corresponding schedules. To maintain clarity, we will consistently differentiate between these two concepts, referring to the former as time steps and the latter as time.]
For path shapes characterized by a parameter k, which represents the number of turning points on the path, we prove that Ω (k log k) time steps are required to grow them from a singleton.
For cycle-breaking growth, our main contribution is a general algorithm that gives a growth schedule for any connected shape from a singleton. All schedules generated by the algorithm reach their final shape exponentially fast. We also study the weaker version of the shape-reachability problem and prove that any connected shape can be transformed into a tree within two time steps only.
In Section <ref>, we formally define the considered growth models and problems. In Section <ref>, we present our results for cycle-preserving growth in the adjacency graph model (Section <ref>) and the connectivity graph model (Section <ref>). In Section <ref>, we study the cycle-breaking type of growth processes. A weaker type of reachability is discussed in Section <ref>. In Section <ref>, we conclude and give further research directions opened by our work.
§ MODELS AND PRELIMINARIES
§.§ The Growth Models
The models studied in this paper build on the models of <cit.> and <cit.>. We consider a 2-dimensional square grid. Each grid point
is identified by its x and y coordinates, where x ≥ 0 indicates the column and y ≥ 0 indicates the row. A shape S is defined by a set of nodes and a set of connections between the nodes. Each node u occupies
a grid point (u_x,u_y) and is represented by a circle drawn on that point.
For a set of nodes V, two nodes u=(u_x, u_y) and v=(v_x, v_y) in the set are adjacent if u_x∈{v_x-1,v_x+1} and u_y=v_y or u_y∈{v_y-1,v_y+1} and u_x=v_x, that is if they are one orthogonal distance apart.
Nodes can only be connected —in which case we also call them neighbors— if they are adjacent. We consider two models of shape connectivity. One is based on the adjacency graph and the other on the connectivity graph, which can be any subgraph of the adjacency graph. For a shape S defined by the adjacency graph on a set of nodes V, we have S=(V,A) where A={uv | u,v∈ V and u,v are adjacent}.
For a shape S defined by a connectivity graph on a set of nodes V, we have S=(V, E) where E⊆ A. A shape S is connected if its graph is a connected graph. We restrict attention to connected shapes. We use n or |S| to denote |V|, i.e., the total number of nodes in a given shape S=(V,E).
Any connected shape S on the grid defines an (orthogonal) polygon that forms the external boundary of S. By the Jordan curve theorem <cit.>, the external boundary of S partitions the grid into an interior and an exterior of S. If a set of points H is a subset of the interior of S and shares no point with the external boundary of S, then we call H fully/strictly enclosed in the external boundary of S. Given a connected shape S, a hole of S is a maximal connected shape of unoccupied points H, strictly enclosed in the external boundary of S. A connected shape S with no holes is called compact. A row (column) of a shape S is the set of all nodes of S with the same y coordinate (x coordinate, resp.).
A growth operation (also called doubling in <cit.> and expansion in <cit.>) applied on a node u of a shape S generates a new node in one of the points adjacent to u and possibly translates some part of the shape.
In general, applying one or more growth operations to a shape S either causes a collision or yields a new shape S'. There are two types of collisions: node collisions and cycle collisions.
Unless otherwise stated, we shall assume without loss of generality (abbreviated “w.l.o.g.” throughout) that there is an anchor node u_0∈ V that is stationary and other nodes move relative to it. This is sufficient because the constructed shapes are considered to be equivalent up to translations and their final absolute coordinates are not important for our purposes. To simplify the exposition, we first define growth operations for tree shapes and then generalize to any connected shape.
Let the shape be a tree T=(V,E). A single growth operation is applied on a node u∈ V toward a point (x,y) adjacent to u. If point (x,y) is occupied by a node v and uv∉ E then a collision occurs. The remaining cases are (i) (x,y) is empty, (ii) (x,y) is occupied by a node v and uv∈ E. We first define the effect in each of these cases when neighbor handover is not allowed. In case (i), the growth operation generates a node u' at the empty point (x,y) and connects it to u. In case (ii), assume w.l.o.g. that u is closer to u_0 in T than v.
Let T(v) denote the subtree of T rooted at node v. Then, the operation generates a node u' between u and v, connected to both, which translates T(v) by one unit away from u along the axis parallel to uv. After this, u' occupies (x,y) and uv has been replaced by {uu',u'v}. If neighbor handover is allowed, then any neighbor w of u perpendicular to uu' can be handed over to u'. This happens by a unit translation of T(w) or T(u) along the axis parallel to uu', depending on which of u,w, respectively, is closer to u_0 in T.
Let R be a set of operations to be applied in parallel to a connected shape S, each operation on a distinct pair of nodes or a node and an unoccupied point.
We assume that all operations in such a set of parallel operations R are applied concurrently, have the same constant execution speed, and their duration is equal to one time step.
Let T=(V,E) be a tree and u_0∈ V its anchor. We set u_0 to be the root of T. We want to determine the displacement of every v∈ V∖{u_0} due to the parallel application of the operations in R. As u_0 is stationary and each operation translates a subtree, only the operations on the unique u_0v path contribute to v's displacement.
In particular, any such operation contributes one of the unit vectors ⟨ -1,0⟩, ⟨ 0,-1⟩, ⟨ +1,0⟩, ⟨ 0,+1⟩ to the motion vector v⃗ of v.
Moreover, for any node v∈ V that doubles toward an empty point, we add a new node v' with a corresponding unit motion vector v⃗.
We can use the set of motion vectors to determine whether the trajectories of any two nodes will collide at any point. This type of collision is called a node collision (see Figure <ref>).
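The following simplified sketch (our own data layout, not from the paper) mirrors this bookkeeping for a tree shape: an operation on an edge (u, v) of the tree, with u growing toward its neighbor v, contributes the unit vector from u to v to every node whose root path uses that edge. For brevity it only compares final positions, i.e., it reports a node collision when two nodes end the time step on the same point, and it omits the newly generated nodes.

def root_path_edges(parent, v):
    edges = set()
    while parent[v] is not None:
        edges.add((parent[v], v))
        v = parent[v]
    return edges

def apply_parallel_ops(pos, parent, ops):
    # pos: node -> (x, y); parent: node -> parent node or None (the anchor u_0);
    # ops: set of tree edges (u, v) meaning "u grows toward its neighbor v".
    unit = {(u, v): (pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]) for (u, v) in ops}
    new_pos = {}
    for w in pos:
        dx = dy = 0
        for e in root_path_edges(parent, w) & ops:
            dx += unit[e][0]
            dy += unit[e][1]
        new_pos[w] = (pos[w][0] + dx, pos[w][1] + dy)
    if len(set(new_pos.values())) < len(new_pos):
        raise ValueError("node collision")
    return new_pos

# A path 0-1-2 on one row; node 0 is the anchor and grows toward node 1:
pos = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
parent = {0: None, 1: 0, 2: 1}
print(apply_parallel_ops(pos, parent, {(0, 1)}))   # nodes 1 and 2 shift one unit to the right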
Let now S be any connected shape with at least one cycle and any node u_0 be its anchor. Then,
a set of parallel operations R on S either causes a cycle collision or its effect is essentially equivalent to the application of R on any spanning tree of S rooted at u_0.
Let u, v be any two nodes on a cycle. If p_1 and p_2 are the two paths between u and v of the cycle, then v⃗_p_1=v⃗_p_2 must hold:
the displacement vectors along the paths p_1 and p_2 are equal.
Otherwise, we cannot maintain all nodes or edges of the cycle. Such a violation is called a cycle collision as shown in Figure <ref>. We call a set of operations that does not cause any node or cycle collisions collision free.
A growth process starts from an initial shape S_0 —often a singleton— and by applying a sequence of parallel growth operations of a given type, goes through a sequence of shapes until it reaches a target shape. The considered growth processes operate in discrete time steps. In each time step t≥ 1, a set of parallel growth operations —possibly a single operation— are applied on the current shape S_t-1 to give the next shape S_t. To simplify our algorithms and w.l.o.g. we require parallel operations to have the same cardinal direction.
This divides time steps into those with horizontal only and those with vertical only motion and implies that a node gets at most one growth operation per time step.
We consider two general types of growth processes, cycle-preserving growth and cycle-breaking growth. Intuitively, the former type avoids cycle collisions by maintaining all cycles affected by growth operations and the latter by breaking them.
A cycle-preserving growth process applies a collision free set of parallel growth operations R_t to shape-instance S_t-1, for all time steps t≥ 1.
A cycle-breaking growth process additionally removes a —possibly empty— subset of the edges of S_t-1 that does not disconnect the shape, before applying R_t to it. If neighbor handover is allowed, growth of a node u generating a new node u' in direction d can hand any neighbor w of u perpendicular to d over to u'. In the adjacency graph model, at the end of each time step t, edge uv is added for all adjacent nodes u,v that are not connected. In the connectivity graph model, no such edges are added.
For the models of Definition <ref>, the following properties hold:
* Under the connectivity graph model, the growth processes never increase the number of cycles.
* Under the connectivity graph model, if S_0 is a singleton, the processes can only construct tree shapes.
* Under both graph models, the cycle-preserving process never decreases the number of cycles.
* Under the connectivity graph model, the cycle-preserving process preserves the number of cycles.
Property (2) is a special case of (1). Property (4) follows by taking (1) and (3) together. So, it is sufficient to prove properties (1) and (3). We first prove these without neighbor handover. In that case, the cycle-preserving process cannot remove any edges and neither do the graph models, thus property (3) holds. Property (1) follows by observing that, without neighbor handover, the growth processes can only add leaves or increase the length of existing line segments and that the connectivity graph model does not modify any edges. We now show that these remain true when neighbor handover is allowed. Let u be a node on which a growth operation is applied, and u_N,u_E,u_S,u_W its up to 4 neighbors in the respective cardinal directions. Let w.l.o.g. u^'_E be the node generated by the operation in the east direction. The nodes that can be handed over from u to u^'_E are u_N and u_S. If we show that the number of cycles is invariant under handover for both types of processes, then properties (1) and (3) will follow. It is sufficient to consider those cycles that before applying the operation were using edge u_Nu, uu_S or both. If only u_N is handed over to u^'_E then any cycle using u_Nuu_W is replaced by a cycle using u_Nu^'_Euu_W, any using u_Nuu_E by one using u_Nu^'_Eu_E, and any using u_Nuu_S by one using u_Nu^'_Euu_S. The case in which only u_S is handed over is symmetric. If both u_N and u_S are handed over to u^'_E then the only difference is that any cycle using u_Nuu_S is now replaced by one using u_Nu^'_Eu_S. It follows that there is a one-to-one correspondence between previous and new cycles due to neighbor handover, which gives the required invariant.
It is worth noting that the cycle-breaking growth process is independent of whether the shape is represented using the adjacency or connectivity graph model. In both models, cycle-breaking growth follows the same principles and achieves the same results. However, this is not the case for cycle-preserving growth, as it behaves differently depending on the chosen graph model.
The property of neighbor handover is specific to cycle-breaking growth process, where neighboring nodes are transferred during the growth process.
Furthermore, it is important to highlight that any positive results obtained for the cycle-preserving growth process also apply to the cycle-breaking growth process. However, the reverse is not necessarily true, as the behavior and characteristics of the two operations differ.
§.§ Problem Definitions
The following two reachability problems between classes of shapes are defined for all types of growth processes described in Definition <ref>.
Given a growth model, a class of initial shapes ℐ (possibly consisting only of a singleton), and a class of final shapes ℱ we want to determine if there exists a time bound t=O(log n) or t=(poly)log n for which the following holds.
* Strong Reachability: Any shape in ℱ can be grown in the given model within t time steps from any shape in ℐ.
* Reachability: Starting from any shape in ℐ some shape in ℱ can be grown in the given model within t time steps.
For the reachable or strongly reachable classes, we additionally want to give the respective growth processes.
Some of our results concern shapes drawn from special graph classes, such as paths, spirals, and staircases, which we now define.
A node u_i of a path P=⟨ u_1, u_2, …, u_n ⟩ is called a turning point or turn if either i ∈{1,n} or u_i-1u_i is perpendicular to u_iu_i+1. For uniformity of our arguments, we add the endpoints of P to the set of turning points.
The direction of an internal turning point d(u_i), where 1≤ i < n, is left if the orientation changes from d(u_i) to d(u_i+1) in a counterclockwise manner, or right if the orientation changes from d(u_i) to d(u_i+1) in a clockwise manner.
A staircase is a path whose tuning points, when ordered from one endpoint to the other, alternate between two clockwise- or counterclockwise- consecutive cardinal directions.
A spiral S is a path whose tuning points, when ordered from one endpoint to the other, follow a continuous and unidirectional sequence of consecutive cardinal directions in either a clockwise or counterclockwise manner.
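The sketch below (our own encoding, not part of the paper) makes these definitions concrete: it extracts the turning points of a grid path given as a list of node coordinates, keeping the endpoints by convention, and tests the staircase property by checking that the path uses only two perpendicular cardinal directions.

def directions(path):
    return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(path, path[1:])]

def turning_points(path):
    d = directions(path)
    internal = [path[i] for i in range(1, len(path) - 1) if d[i - 1] != d[i]]
    return [path[0]] + internal + [path[-1]]        # endpoints are turning points by convention

def is_staircase(path):
    d = directions(path)
    perpendicular_or_equal = all(a == b or (a[0] * b[0] == 0 and a[1] * b[1] == 0)
                                 for a, b in zip(d, d[1:]))
    return len(set(d)) <= 2 and perpendicular_or_equal

path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]     # an L-shaped path with one internal turn
print(turning_points(path))                         # [(0, 0), (2, 0), (2, 2)]
print(is_staircase(path))                           # True: only the E and N directions are used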
A fast line growth process begins with a singleton initial shape and by successively doubling all nodes grows a straight path of length n in O(log n) time steps. Fast line growth is used as a sub-process in most of our constructions in order to efficiently grow line segments of a shape. We use the term segment to refer to a line segment of a shape. A fast rectangle growth process —defined similarly— grows any compact rectangular shape of n nodes in O(log n) time steps (see <cit.> for these basic processes).
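For intuition, a minimal sketch of the doubling behind fast line growth is given below (our own encoding): in each time step as many nodes double as needed, so the schedule reaches a path of n nodes in ⌈log_2 n⌉ time steps.

import math

def fast_line_growth(n):
    length, schedule = 1, []
    while length < n:
        grow = min(length, n - length)      # number of nodes that double in this time step
        schedule.append(grow)
        length += grow
    return schedule

for n in (7, 16, 1000):
    schedule = fast_line_growth(n)
    print(n, len(schedule), math.ceil(math.log2(n)), schedule)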
§ CYCLE-PRESERVING GROWTH PROCESSES
This section presents our results for cycle-preserving growth in the adjacency graph model (Section <ref>) and the connectivity graph model (Section <ref>).
§.§ Cycle-Preserving in the Adjacency Graph Model
We begin with positive results for cycle-preserving growth in the adjacency graph model. Due to cycle-preserving growth being a special case of cycle-breaking growth, positive results for the former immediately hold for the latter.
Assume that a spanning tree T of a shape S has at most k turning points in every root-to-leaf path. Then, if we had access to cycle-breaking growth instead, we could use breadth-first search to grow S in O(klog n) time steps. Starting from the root, all root-to-leaf paths can be grown in parallel. Every such path consists of at most k-1 line segments, each of length at most n, which —by using fast line growth— can be sequentially grown within O(klog n) time steps.
BFS cannot be directly applied by cycle-preserving growth in the adjacency graph model. This is due to the additional cycles that the graph model creates between adjacent segments, making the growth of a segment depend on the growth of segments adjacent to it.
We now describe a variant of BFS that avoids this by treating adjacent segments differently.
* Consider any tree shape T rooted at u_0.
* The process proceeds in phases. In each phase i≥ 1, we will grow all segments at segment-distance i from the root.
We do this by first growing in parallel the horizontal subset of those segments in a horizontal sub-phase i_h, followed by the vertical ones in a vertical sub-phase i_v.
* Each segment L —either horizontal or vertical— of phase i is grown as follows:
* For any sub-segment s of L which is adjacent to a segment s_past grown in a previous phase, grow s by duplicating s_past. Do this in parallel for all these sub-segments. The remaining sub-segments are then grown in parallel using fast line growth.
* For any sub-segment s of L which is adjacent to a segment s_present that will be grown in the same phase i in parallel to s, we use two stages i_h_even and i_h_odd (i_v_even and i_v_odd for the vertical sub-phase). In i_h_even, we grow the even-row segments followed by the odd-row segments in i_h_odd. We then repeat for the vertical sub-phase.
See Figure <ref> for an illustration of this process.
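A compact sketch of the resulting schedule is given below (our own data layout): each segment is assigned to the phase equal to its segment-distance from the root, and every phase is split into a horizontal and a vertical sub-phase, each further split into even- and odd-indexed rows (respectively columns). Applying the even/odd split to all segments is a conservative simplification, since the process above only needs it for segments adjacent to same-phase segments.

from collections import defaultdict

def bfs_variant_schedule(segments, parent):
    # segments: id -> (orientation 'h' or 'v', row or column index)
    # parent:   id -> parent segment id, or None for the root segment
    def phase(s):
        return 0 if parent[s] is None else 1 + phase(parent[s])
    groups = defaultdict(list)
    for s, (orient, idx) in segments.items():
        groups[(phase(s), orient, idx % 2)].append(s)
    # Within a phase: horizontal sub-phase before vertical, even indices before odd ones.
    order = sorted(groups, key=lambda k: (k[0], k[1] == 'v', k[2]))
    return [(k, groups[k]) for k in order]

# Example: a root horizontal segment with two vertical child segments in columns 4 and 7.
segments = {0: ('h', 0), 1: ('v', 4), 2: ('v', 7)}
parent = {0: None, 1: 0, 2: 0}
print(bfs_variant_schedule(segments, parent))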
If every root-to-leaf path of a tree T has at most k turns, then the BFS variant grows T in O(k log n) time steps.
To prove the statement we use induction on the number of phases.
For the base case, since L_1 is the only segment at this point, it covers all the paths within distance i=1 in T, so the statement holds.
For the inductive step, let us assume that after i phases, the BFS variant has grown up to the i-th segment of every path within distance i in T, which corresponds to the number of turns k.
Then, in i+1 phase, we grow the line segment L_i+1 of each path within distance i+1. For each sub-segment s of L_i+1 that is adjacent to a sub-segment s_past grown in a previous phase, we can directly grow s by duplicating s_past in one time step.
For any sub-segment s of L_i+1 that is adjacent to a sub-segment s_present that will be grown in the same phase, we use two sub-phases, let us assume w.l.o.g. that it is a horizontal line segment, then we have i_h_even and i_h_odd. In i_h_even, we grow the even-row sub-segments, and in i_h_odd, we grow the odd-row sub-segments. This ensures that adjacent sub-segments in the same phase are grown sequentially without collisions.
By the induction hypothesis, after i phases, we have grown up to the i-th segment of every path within distance i, which corresponds to the number of turns k. Therefore, if T has a bounded number of turns, it can be grown in O(k log n) time steps.
Since all line segments L_1, L_2, L_k,…, L_k+1 are constructed using the BFS variant, the structure of the tree is maintained, and no line segments collide with other line segments during the parallel growth.
It follows that:
If a shape S has a spanning tree with at most k turns in every root-to-leaf path, then S can be grown in O(klog n) time steps.
Let S be any shape with a computed spanning tree T(S). According to Lemma <ref>, T(S) has at most k turns in every root-to-leaf path, implying that it can be grown in O(k log |T(S)|) time steps. By growing T(S), we obtain all the vertices of S, and to create the final shape S, we can add the edges between neighboring vertices in the constructed T(S) in a single time step.
The overall time complexity of growing the shape S can be expressed as O(k log |T(S)| + 1). Since |T(S)| is at most the size of S (|T(S)| ≤ |S|), the time complexity can be simplified to O(k log |S| + 1). Hence, the time complexity of growing S is O(k log |S|) = O(k log n).
The next proposition is making use of a fast procedure that fills all holes of a given shape S in order to obtain a compact extension of S.
Given any shape S with at least one hole, there is a sequence of growth operations
of length O(log n) that yields a compact shape S'.
Assume that S has d holes H_1, H_2, …, H_d. We show how a single hole H∈{H_1, H_2, …, H_d} can be filled up in logarithmic time. The statement will then follow by applying this in parallel to all holes.
Hole H is defined by its boundary B(H), which is a closed polygon of nodes and forms an internal boundary of S. The interior H of B(H) is by definition, empty. We show how the nodes of B(H) can be used to efficiently fill H up with nodes.
W.l.o.g., we show how to do this vertically. Let C_1,…, C_k be the consecutive columns containing the empty points of H, say from left to right. Every C∈{C_1,…,C_k} consists of one or more empty vertical segments s_1,…,s_l. Note that all segments defined by the holes of S are pairwise disjoint. Each of the two endpoints of s∈{s_1,…,s_l} is adjacent to a node of B(H). Let (x,y), (x,y+|s|-1) be the bottom-most and uppermost endpoints of s, and let u, v be their adjacent nodes from B(H) lying below and above, respectively. Starting from u, we apply the BFS variant introduced at the beginning of this section.
We perform a fast line growth process along [(x,y),(x,y+|s|-1)], which generates a path of length |s| connecting u to v within O(log |s|) time steps. By doing this in parallel for all segments of all holes of S, we can make S compact within O(log (max_s{|s|}))=O(log n) time steps.
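To make the hole-filling procedure concrete, here is a small Python sketch (function and variable names are ours) that, given a shape as a set of occupied grid cells, finds the empty vertical segments of its holes; the parallel fill time is then logarithmic in the longest such segment. It is a plain flood-fill illustration, not the growth process itself.

```python
import math
from collections import deque

def hole_segments(cells):
    """Return the maximal empty vertical runs inside the holes of a shape.

    `cells` is a set of (x, y) grid nodes of a connected shape S.  Empty
    points that cannot reach the outside of the bounding box are hole
    points; each column of hole points splits into vertical segments.
    """
    xs = [x for x, _ in cells]
    ys = [y for _, y in cells]
    x0, x1 = min(xs) - 1, max(xs) + 1
    y0, y1 = min(ys) - 1, max(ys) + 1
    # Flood fill the exterior over the padded bounding box.
    outside, queue = {(x0, y0)}, deque([(x0, y0)])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (x + dx, y + dy)
            if (x0 <= p[0] <= x1 and y0 <= p[1] <= y1
                    and p not in cells and p not in outside):
                outside.add(p)
                queue.append(p)
    holes = {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)
             if (x, y) not in cells and (x, y) not in outside}
    # Split each column of hole points into maximal vertical segments.
    segments = []
    for x in sorted({x for x, _ in holes}):
        col = sorted(y for hx, y in holes if hx == x)
        run = [col[0]]
        for y in col[1:]:
            if y == run[-1] + 1:
                run.append(y)
            else:
                segments.append((x, run[0], run[-1]))
                run = [y]
        segments.append((x, run[0], run[-1]))
    return segments

# 5x5 square ring with a 3x3 hole: every run has length 3, so the hole
# is filled in ceil(log2(3)) = 2 parallel time steps.
ring = {(x, y) for x in range(5) for y in range(5)} - \
       {(x, y) for x in range(1, 4) for y in range(1, 4)}
segs = hole_segments(ring)
t = max(math.ceil(math.log2(yb - ya + 1)) for _, ya, yb in segs)
print(segs, t)
```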
By combining Corollary <ref> with Proposition <ref> we get:
Any compact shape S whose perimeter has a bounded number of turns can be constructed in O(log n) time steps.
Consider any shape S with a constant number c of turns in its perimeter. Let T(S) be a computed spanning tree of S's perimeter. Note that T(S) consists of a single root-to-leaf path with at most c turns, by the properties of a spanning tree of the perimeter. Following Corollary <ref>, we can construct S's perimeter within a logarithmic number of time steps using the computed spanning tree T(S). This step constructs the entire perimeter of S accurately and without collisions.
Since we consider the adjacency model of S, the missing edge connecting the start and the end of S's perimeter is added as well, which ensures that the shape is fully connected and maintains its original adjacency connections.
After constructing the perimeter of S and ensuring its full connectivity, we are left with a shape that has one hole. To fill this hole and achieve a compact shape, we apply Proposition <ref>, according to which the hole-filling process completes within at most log n time steps. We conclude that the total time complexity of constructing such a shape S with a constant number of turns in its perimeter is O(log |S|) + O(log n), which equals O(log n) time steps.
A family of shapes denoted by NICE was introduced by Almethen et al. <cit.>. A NICE shape consists of a horizontal line and various vertical lines that are perpendicular to the original horizontal line. This family of shapes can be constructed in logarithmic time steps using a growth operation from a single node.
All NICE shapes can be constructed in O(log n) time steps.
Assume a shape S_NICE∈ NICE of size n that contains, w.l.o.g., a central horizontal line L_h of length 1≤ |L_h| ≤ n and a number of vertical lines L_1, L_2, …, L_v, of total length less than n, that are orthogonal to L_h.
To construct S_NICE, we begin by growing the horizontal line L_h using fast line growth, which starts from a single node and doubles the line until it reaches its final length in log|L_h| time steps. Next, we grow all the vertical lines L_1, L_2, …, L_v simultaneously in parallel. To achieve simultaneous growth of multiple vertical lines without collision, we can employ the BFS variant as described earlier in this section. This ensures that the vertical lines do not collide with each other during the parallel growth.
Alternatively, if S_NICE has a central vertical line of length 1≤ |L_v| ≤ n, we can construct the vertical line first and then grow all the horizontal lines in parallel. It is important to note that, since we perform the cycle-preserving growth under the adjacency model, all the edges between neighboring vertical segments are added automatically during the growth operation without any form of collision.
The overall time complexity of constructing S_NICE using this method is determined by the time required to grow the longest line segment, which is at most log n time steps.
Any staircase shape S with a bounded number of steps can be constructed in O(log n) time steps.
A staircase is an alternating sequence of turning points and line segments connecting consecutive turning points. It can be uniquely defined by the coordinates of its turning points u_1, u_2, …, u_k. A bounded number of steps implies a bounded number of turning points. Thus, k is a constant. To construct such a staircase fast, we shall first construct the turning points sequentially and then grow in parallel the segments between them.
In the first phase, the k turning points are generated by a sequential process that takes a linear number of time steps, as follows. Starting from node u_1, which is the original singleton, u_i generates u_i+1 in time step i in the direction that respects their relative positions in S. This takes k-1 time steps to generate all turning points. The resulting staircase of turning points is equivalent to the one obtained by compressing all segments of S to unit length, which shows that this phase is collision free.
In the second phase, we grow —through a fast path growth process— all unit segments in parallel to their final length. Due to the geometry of the staircase, this phase is also collision free. It grows all segments within O(log (max_s{|s|})) time steps, where the maximum is over all segments s of the original staircase S. Thus, the whole process runs for (k-1)+O(log (max_s{|s|}))=O(log n) time steps.
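The two-phase schedule can be summarised numerically. The following sketch (names ours) takes the turning-point coordinates of a staircase and returns the number of time steps of the sequential phase plus the parallel doubling phase.

```python
import math

def staircase_steps(turning_points):
    """Two-phase schedule length for a staircase given its turning points.

    Phase 1 generates the k turning points sequentially (k - 1 steps);
    phase 2 doubles all unit segments in parallel to their final length,
    which costs ceil(log2(max segment length)) steps.
    """
    k = len(turning_points)
    seg_lengths = []
    for (x1, y1), (x2, y2) in zip(turning_points, turning_points[1:]):
        assert x1 == x2 or y1 == y2, "consecutive turning points must be axis-aligned"
        seg_lengths.append(abs(x1 - x2) + abs(y1 - y2))
    return (k - 1) + math.ceil(math.log2(max(seg_lengths)))

# A staircase with segment lengths 4, 2, 8, 3.
pts = [(0, 0), (4, 0), (4, 2), (12, 2), (12, 5)]
print(staircase_steps(pts))  # (5 - 1) + ceil(log2(8)) = 7
```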
§.§ Cycle-Preserving in the Connectivity Graph Model
This section defines the class of shapes that can be constructed by cycle-preserving growth in the connectivity graph model.
If a shape S can be grown in k time steps from a singleton u_0 in cycle-preserving growth, then S has a spanning tree T rooted at u_0, such that any root-to-leaf path of T has at most k turns.
Consider any shape S that can be grown in k time steps from a single node u_0, and assume inductively that after time step i the grown shape has a spanning tree T_i in which every root-to-leaf path has at most i turns.
Assume w.l.o.g. that at the next time step i+1 a horizontal cycle-preserving growth operation o_i is applied. The number of turns on a root-to-leaf path of T_i can increase after operation o_i by at most one, and only if one of the following cases occurs:
* If a line segment L_i in T_i is split into two line segments.
* If an additional turning point k+1 appears by extending the leaf of the line segment L_i.
For the first case, since o_i is a cycle-preserving growth operation, it cannot split any horizontal or vertical segment L_i, by the definition of the operation. Instead, the whole line segment L_i keeps growing or is translated as one piece; that is, cycle-preserving growth never increases the number of turns of T_i by splitting a segment, because it preserves all edges when growing a line segment, as shown in Figure <ref>.
For the second case, since o_i is a horizontal cycle-preserving growth operation, it can add one new turning point only
if the segment L_i leading to the leaf is vertical. If that segment is horizontal, applying o_i only extends its length and does not increase the number of turning points.
A new turning point is also produced if a new horizontal root-to-leaf path is created by generating nodes from a node of a vertical segment. Such new leaves can increase the maximum number of turns in T_i by at most 1. Therefore, T_i+1 is an extension of T_i and has at most i+1 turns on every root-to-leaf path, so after k time steps the resulting shape has a spanning tree T with at most k turns on every root-to-leaf path. Since cycle-preserving growth cannot increase the number of turns beyond this, the statement holds.
If a shape S has a spanning tree T with O(log n) turns on every root-to-leaf path, then S can be constructed within O( log^2 n) time steps.
Consider a connected shape S.
By Proposition <ref> there is a spanning tree T of S with at most k turns on every root-to-leaf path. To construct S, we can use the BFS variant on every line segment in parallel.
We start from the root u_0 of the spanning tree T of S and construct the line segments of T in parallel, phase by phase. Since each root-to-leaf path of T has at most k turns, it consists of at most k+1 line segments, and we can build all these segments within at most k+1 phases. In the worst case each phase costs O(log n) time steps, which gives O(k log n) in total. Hence, if T has at most log n turns on every root-to-leaf path, we consume at most O(log^2 n) time steps to build the shape S.
Any shape S with at most k turns can be compressed into a new shape S' with at most k turns and O(k^2) nodes on an O(k× k) grid.
Let S=(V, E) be a shape with O(k) turns, where V is the set of nodes and E is the set of edges. We will construct a compressed shape S'=(V', E') with at most k turns and k^2 nodes on an O(k × k) grid.
Since S has at most k turning points, these turning points lie in at most k distinct rows and at most k distinct columns of S; we divide S accordingly into k horizontal rows and k vertical columns, forming a grid of size O(k × k).
Then, we identify the turning points in S and mark them as special nodes. Let V_k be the set of special nodes representing the turning points k in S.
For each row of the grid, we keep the nodes of V_k together with the nodes needed to connect consecutive nodes of V_k within that row, and remove the remaining duplicated nodes. The kept nodes of each row are stored in the set V_r. Similarly, we do the same for each column and store the kept nodes in the set V_c.
Let V' be the union of V_k, V_r, and V_c. The set of nodes in the compressed shape S' is V'. Then, we construct the set of edges E' in S' by including all edges in E that connect nodes in V'.
By construction, the compressed shape S' has at most k turns, because only the special nodes representing the turning points of S, and the nodes connecting them, are kept. The size of S' is O(k^2), since the grid has size O(k × k). Therefore, any shape S with at most k turns can be compressed into a new shape S' with at most k turns and O(k^2) nodes.
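The core of the compression is coordinate compression of the rows and columns containing turning points. A minimal sketch, restricted to a rectilinear path for brevity (function name ours):

```python
def compress_path(turning_points):
    """Compress a rectilinear path onto a small grid.

    Keeps only the distinct x- and y-coordinates of the turning points
    (coordinate compression), so the compressed path lives on an
    O(k x k) grid and has the same sequence of turns.
    """
    xs = sorted({x for x, _ in turning_points})
    ys = sorted({y for _, y in turning_points})
    xmap = {x: i for i, x in enumerate(xs)}
    ymap = {y: i for i, y in enumerate(ys)}
    return [(xmap[x], ymap[y]) for x, y in turning_points]

# A spiral-like path with long segments compresses onto a 3 x 3 grid.
path = [(0, 0), (100, 0), (100, 60), (10, 60), (10, 20)]
print(compress_path(path))  # [(0, 0), (2, 0), (2, 2), (1, 2), (1, 1)]
```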
To build a spiral shape S with a total number of k = log n turns by using BFS, we need O(log n loglog n) time steps.
By Lemma <ref>, any spiral with k turns can be compressed into a sequence of k = O(log n) segments, each of length m at most log n. Using breadth-first search to construct the compressed spiral, we first build the k turning points sequentially, point by point, and then expand the segments to their final length in parallel. Therefore, O(k log m) time steps are needed, giving a total of O(log n loglog n) time steps.
A pipelined breadth-first search is a modified version of BFS that can be used to construct spiral shapes in the cycle-preserving growth process efficiently (i.e., within logarithmic time steps). It consists of two main phases:
* Constructing and waiting phase: during this phase, we build at most four turning points, in an order that follows the geometry of the shape S.
* Growing phase: in this phase, the partially constructed structure from the constructing and waiting phase grows in parallel, except for the segments that have already reached their final length.
If a spiral shape S has a total number of k = log n turns, then S can be constructed within a logarithmic number of time steps by using pipelined BFS.
Consider a spiral shape S=(V, E) consisting of a set of layers, where each layer occupies two rows r∈ R and two columns c∈ C of the grid. Each layer S_i contains at most four turning points, one at each point where a row of the layer turns into a column. We use the pipelined BFS approach to construct such a shape within a logarithmic number of time steps.
After compressing S by using Lemma <ref>, we obtain S', which contains all k turning points and possibly some incompressible nodes that connect these turning points.
We then construct S' as follows:
In phase i=1, we start from a root u_0 and generate the compressed version of the first (external) layer (i.e., its turning points) S_1={v_1,v_2,v_3, v_4} of S node by node.
Then, in phase i=2, we grow every node in parallel in its position and expand every segment of this layer using the cycle-preserving growth.
Following that, we start generating the next (inner) layer S_i+1 (i.e., the compressed version of the next spiral layer). We continue growing the layers in parallel in this pipelined fashion until the final layer S_log n fits inside them.
Finally, because S has a total of log n layers, we spend log n time steps of waiting, one for the construction of each layer, plus log n time steps for growing all segments in parallel until they reach their final length. Therefore, the total number of time steps to construct the whole shape is log n + log n, i.e., O(log n).
Let P be a path with k turning points. Let A be an algorithm that generates P from a singleton. Without loss of generality, we can assume that A starts from a turning point of the path P. We now give a few observations and lemmas concerning some properties of A. Recall that an edge, once generated, cannot be deleted in the cycle-preserving model. This immediately implies the following observation.
A node can grow in at most its degree many different directions. Moreover, once a node has degree many neighbors in the path constructed by A, it can only grow along one of its incident edges in the path.
As there exists a unique subpath between any two vertices in a path, this fact, together with the above observation gives the following observation.
Let x and z be any two vertices of P such that there exists a straight subpath between them in the path constructed so far by A. Then, all the vertices on the subpath between x and z in P will lie on a straight subpath in the final path constructed by A.
We now give the following lemma concerning the order in which the turning points of P are generated by A.
Let P be a path between u and v with k turning points. Let ⟨ tp_1, tp_2, …, tp_k ⟩ be the order of turning points of P from u to v. Let A be any algorithm that generates P from a singleton starting from the turning point tp_i. Then, the sets {tp_i+1, tp_i+2, …, tp_k} and {tp_1, tp_2, …, tp_i-1} of turning points are generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩ and ⟨ tp_i-1, tp_i-2, …, tp_1 ⟩, respectively by A. Moreover, A respects the direction of P at every node while generating the next node from it.
Recall that, an edge, once generated, can not be deleted in the cycle-preserving model. This in turn means that a node can grow in at most its degree many different directions. Moreover, once a node has degree many neighbors, it can only grow along an incident edge.
We first prove that A respects the direction of P at every node while generating the next node from it. As the direction makes sense only when the node already has a neighbor, we prove the statement for the nodes that grow at time step 2 or later. Let A grow the node v at time step t ≥ 2. Assume for contradiction that A does not respect the direction of P at v while generating the next node u. Then, once u is generated, the degree of v is 2 in the path constructed by A so far. By Observation <ref>, we get that A can never create a neighbor of v in the desired direction, a contradiction. Thus, A always respects the direction of P at every node while generating the next node from it.
We now prove the property regarding the order of generation of turning points. We prove that the set {tp_i+1, tp_i+2, …, tp_k} is generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩; the proof for the set {tp_1, tp_2, …, tp_i-1} is similar. Let tp_j+1 be the first turning point that was not generated in the desired order, for j ≥ i, and let tp_m, with m > j+1, be the turning point that was generated right after tp_j. This implies that there exists a subpath P' from tp_j to tp_m of the path constructed so far by A which does not contain any other turning points, i.e., P' is drawn as a straight line. As tp_j+1 lies between tp_j and tp_m in P, by Observation <ref> we get that A can never create the two neighbors of tp_j+1 in different directions. This contradicts the fact that tp_j+1 is a turning point of P. Thus, we conclude that the set {tp_i+1, tp_i+2, …, tp_k} is generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩.
Let P be an incompressible spiral path between u and v with k turning points (see Figure <ref>). Moreover, let u be the internal endpoint of P. We now give the following lemma about the lower bound on the number of steps taken by any algorithm that generates P from a single node starting from u.
Let P be an incompressible spiral path between u and v with k turning points. Moreover, let u be the internal endpoint of P. Let A be any algorithm that generates P from a singleton starting from u. Then, A requires Ω(klog k) time steps.
Let ⟨ tp_1 = u, tp_2, …, tp_k = v ⟩ be the order of turning points of P from u to v. By Lemma <ref>, we know that A generates the turning points in the order ⟨ tp_1 = u, tp_2, …, tp_k = v ⟩. Let GT_j be the time step when the turning point tp_j was generated by A, for any j ≥ 2. Let P(t) be the path constructed by A after time step t. Further, let a and b be two vertices of P. We denote by P[a,b] the path between a and b (including both a and b) of P. Moreover, we denote by |a-b|_P the number of edges in P[a,b]. Also, we denote by X(a,P) the x-coordinate of the vertex a in P. To prove the above lemma, we first prove the following lemma about the path constructed by A.
For any j ≥ 5, the path P(GT_j-1) generated by A till time step GT_j - 1 should be the same as the subpath P[tp_1, tp_j-1] of P between tp_1=u and tp_j-1.
We prove the statement by induction on j.
Base case (j=5). Recall that, as P is incompressible, |tp_2 - tp_1|_P = |tp_3 - tp_2|_P = 1. Thus GT_3 = 2 and P(GT_3) = P[tp_1, tp_3]. Assume for contradiction that the lemma is not true for j=5. This means that P(GT_5 - 1) is a subpath of P[tp_1,tp_4]. By Lemma <ref>, we know that GT_5 > GT_4 > GT_3. As P(GT_3) = P[tp_1, tp_3], we get that P[tp_1, tp_3] is a subpath of P(GT_5 - 1). Combining this fact with the fact that P(GT_5 - 1) is a subpath of P[tp_1,tp_4], we get that 1 ≤ |tp_4 - tp_3|_P(GT_5 - 1) < |tp_4 - tp_3|_P = 2. This implies that |tp_4 - tp_3|_P(GT_5-1) = 1. This further means that X(tp_4, P(GT_5 - 1)) = X(tp_1, P(GT_5 - 1)). By Lemma <ref>, we get that A respects the direction of P at every node. Therefore, when tp_5 is generated it will collide with tp_1, a contradiction (e.g., see Figure <ref>). So, the lemma is true for j = 5.
Inductive hypothesis. Suppose that the lemma is true for j = t - 1 ≥ 5.
Inductive step. We need to prove that the lemma is true for j = t ≥ 6. Assume for contradiction that the lemma is not true for j. This means that P(GT_t - 1) is a subpath of P[tp_1,tp_t-1]. By Lemma <ref>, we know that GT_t > GT_t-1 > GT_t-2. By the inductive hypothesis, we know that P(GT_t-1 - 1) = P[tp_1, tp_t-2]. This implies that P[tp_1, tp_t-2] is a subpath of P(GT_t - 1). Combining this fact and the fact that P(GT_t - 1) is a subpath of P[tp_1,tp_t-1], we get that 1 ≤ |tp_t-1 - tp_t-2|_P(GT_t - 1) < |tp_t-1 - tp_t-2|_P = ⌊t-1/2⌋. This further implies that either t=6 and X(tp_5, P(GT_6 - 1)) = X(tp_1, P(GT_6 - 1)), or X(tp_t-5, P(GT_t - 1)) ≤ X(tp_t-1, P(GT_t - 1)) < X(tp_t-6, P(GT_t - 1)). By Lemma <ref>, we get that A respects the direction of P at every node. Therefore, when tp_t is generated, it will collide with a node on the subpath of P(GT_t-1 - 1) between tp_t-5 and tp_t-6, a contradiction (e.g., see Figure <ref>). So, the lemma is true for j=t.
We now give the proof of Lemma <ref> using Lemma <ref>.
Let ST_j be the time taken by A to create the path P[tp_1, tp_j] starting from tp_1, for any j ≥ 2. Then, by Lemma <ref>, we get that GT_j ≥ ST_j-1 + 1, for any j ≥ 5. Moreover, by Lemma <ref>, we know that when tp_j is generated, the subpath from tp_1 to tp_j-1 has already been generated by A. So, the difference between P(GT_j) and P[tp_1, tp_j] is the subpath between tp_j-1 and tp_j, in both paths. As the subpath between tp_j-1 and tp_j is a straight line path in P, we can generate it in log(|tp_j - tp_j-1|_P) time steps. This implies that ST_j = GT_j + log(|tp_j - tp_j-1|_P). Combining the two equations, we get that ST_j ≥ ST_j-1 + 1 + log(|tp_j - tp_j-1|_P). It is easy to observe that ST_4 = 4. Thus, by solving the recursive relation up to j = k, we get that ST_k = Ω(k log k). This proves the lemma.
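As a sanity check of the recurrence, the following snippet evaluates ST_j ≥ ST_{j-1} + 1 + log2(|tp_j - tp_{j-1}|_P) with incompressible-spiral segment lengths of roughly ⌊ j/2 ⌋, and compares the result against 0.5·k·log2 k; the constants are ours and only illustrate the Ω(k log k) growth.

```python
import math

def lower_bound_steps(k):
    """Evaluate ST_j >= ST_{j-1} + 1 + log2(segment length) for an
    incompressible spiral whose j-th segment has length about floor(j/2)."""
    st = 4.0  # ST_4 = 4
    for j in range(5, k + 1):
        st += 1 + math.log2(max(1, j // 2))
    return st

for k in (16, 64, 256, 1024):
    print(k, round(lower_bound_steps(k), 1), round(0.5 * k * math.log2(k), 1))
```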
We now give the main theorem of this section.
Let A be an algorithm that generates a path from a singleton. Then, there exists a path with k turning points for which A takes Ω(k log k) time steps.
We prove the theorem by giving a path on which any algorithm that generates a path from a singleton takes Ω(klog k) time steps. We construct an incompressible path P consisting of two spirals as shown in Figure <ref>. It is easy to observe that, due to Lemma <ref>, irrespective of the starting node A will generate one of the red or blue spirals from its internal endpoint u. Then, by a similar proof to that of Lemma <ref>, we can prove that A takes Ω(klog k) time steps.
§ CYCLE-BREAKING GROWTH PROCESSES
This growth process is characterized by its ability to break any edges within a shape while maintaining its global connectivity. It enriches the class of shapes that can be constructed in this growth process by breaking connections and transforming neighboring nodes using neighbor handover. The following proposition demonstrates growing any spanning tree of a rectangular shape in logarithmic time steps, where |S_I|=1.
For any rectangular shape S with all adjacencies, we can construct any spanning tree of S within O(log n) time steps.
In the first phase, we use the fast rectangle growth process, defined in Section <ref> to construct a rectangle shape S of size n. This operation starts from a singleton and doubles the shape until it reaches the desired size n. This consumes at most log n time steps.
In the second phase, once the rectangular shape S is constructed, we break the appropriate edges in parallel to form the final spanning tree within a constant number c of time steps.
Therefore, the total time to construct such a shape is at most log n + c, thus, O(log n) time steps.
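A back-of-the-envelope sketch of this two-phase process (names ours): it returns the number of time steps and the number of edges that the constant-time breaking phase removes, assuming the full rectangle with all adjacencies is grown first.

```python
import math

def rectangle_spanning_tree(rows, cols):
    """Grow an r x c rectangle by doubling, then carve out a spanning tree.

    Returns (time steps, number of edges broken in parallel).  Doubling
    along each dimension needs ceil(log2 r) + ceil(log2 c) steps; all
    non-tree edges are then broken in a constant number of steps (here: 1).
    """
    grow = math.ceil(math.log2(rows)) + math.ceil(math.log2(cols))
    n_nodes = rows * cols
    n_edges = rows * (cols - 1) + cols * (rows - 1)   # grid adjacencies
    broken = n_edges - (n_nodes - 1)                  # edges outside any spanning tree
    return grow + 1, broken

print(rectangle_spanning_tree(8, 13))  # (8, 84)
```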
Any staircase shape S can be grown within logarithmic time.
Consider a staircase shape S with dimensions l× k.
First, we choose one dimension of the staircase (either length l or height k). For simplicity, let us assume we start from a single node u and grow the length of the shape S (i.e., dimension l) until it reaches the desired length of S. This can be done using the full doubling operation as described in <cit.>.
After that, we identify the starting node of each step of S and perform the breaking operation (see Definition <ref>). Then, we grow all of these nodes vertically in parallel until each step of the staircase S reaches its actual height k. This involves splitting the segment grown in the first phase into multiple smaller segments, one for each step of the staircase S; the specific splitting pattern is determined by the desired configuration of the staircase S.
By following this approach, we first construct one dimension of the shape S, namely a line segment of length l, in logarithmic time, and then efficiently add the remaining steps and grow them vertically to their heights in parallel. As a result, the overall time complexity is logarithmic in the dimensions of the staircase.
A family of shapes known as orthogonally convex shapes, as defined in Proposition 1 by Connor and Michail <cit.>, is a set of shapes where the perimeter consists of four staircases, and the interior is completely filled with nodes. It is possible to generate any shape in this family in logarithmic time steps by following these steps:
* Consider any orthogonally convex shape S, where the perimeter of S consists of four staircases WN, NE, ES and SW, and the interior of S is fully filled with nodes.
* Start from any two consecutive quadrants of the shape's perimeter, such as (WN, NE), (NE, ES), (ES, SW) or (SW, WN).
* Using Lemma <ref>, grow the two consecutive quadrants (i.e., two consecutive staircases) of S in their final geometry. It is important to note that each quadrant of S's perimeter is constructed in accordance with its final position, ensuring that there will be no collisions between adjacent quadrants.
* Since the orthogonally convex shape S is fully filled with nodes, we can proceed to double the nodes in the subpart generated in the previous step. This doubling process is performed in lines until the entire shape S is obtained.
Given any orthogonally convex shape S, the above algorithm can grow S from a singleton within O(log n) time steps.
In order to construct any shape S that belongs to the orthogonally convex family, we perform the proposed procedure above.
Starting from a single node u, we can use Lemma <ref> to grow any two consecutive quadrants of S's perimeter in their final positions in S. Assume w.l.o.g. that the two consecutive staircases are WN and NE; this consumes a number of time steps logarithmic in the length of WN plus a number of time steps logarithmic in the length of NE.
Since the orthogonally convex shape S is fully filled with nodes, we perform the final step, and every node of the constructed WN and NE parts of S doubles in lines to form the final shape S. This step takes a number of time steps logarithmic in the longest line of the remaining part, ES and SW. Therefore, the construction of any shape S in the orthogonally convex family completes within O(log n) time steps, where n is the total number of nodes in S.
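The final filling step can be quantified per column. The sketch below (names ours) assumes the shape is described by the filled interval of every column and reports the parallel fill time of the doubling phase.

```python
import math

def fill_time(column_extents):
    """Parallel fill time for an orthogonally convex shape.

    `column_extents` maps each column x to its filled interval
    (y_min, y_max).  After the two boundary staircases are in place,
    every column is completed by doubling in lines, so the fill phase
    costs ceil(log2(height)) steps for the tallest column.
    """
    return max(math.ceil(math.log2(y1 - y0 + 1))
               for y0, y1 in column_extents.values())

# A diamond-like orthogonally convex shape, columns x = 0..4.
extents = {0: (2, 2), 1: (1, 3), 2: (0, 4), 3: (1, 3), 4: (2, 2)}
print(fill_time(extents))  # ceil(log2(5)) = 3
```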
Below is an informal description of an algorithm that provides an O(log n) time steps growth schedule for any connected shape S. The algorithm determines an elimination order of the nodes and generates a growth schedule by reversing this order; a simplified sketch of the row-pairing bookkeeping is given after the steps below.
The algorithm consists of two sets of phases, vertical phases followed by horizontal phases. Given a shape S with r rows and c columns, do the following:
* Let L=l_1,l_2,…,l_r-1 be the set of vertical phases.
* For each phase l_i ∈ L, where i ranges from 1 to r-1, do the following:
* Count rows from the bottom-most row, starting with i=1, and denote the odd row by 2i-1 and the even row by 2i.
* For every node u in an odd row 2i-1 that has a neighbor v in an even row 2i, eliminate v by contracting the edge uv towards u; a node of an even row with no such neighbor is translated one row down. Register the eliminated or translated nodes in a sequence σ to maintain their order.
* At the end of phase l_i, add all edges between neighboring nodes and move on to the next vertical phase l_i+1, counting rows from the bottom-most row again and repeating the same process.
* After completing the set of vertical phases, a horizontal line is obtained with a length equal to the horizontal dimension of the original shape (i.e., the number of columns in S).
* Apply the horizontal set of phases and repeat steps (1-3), which results in eliminating the horizontal line by successive halving.
* After completing both the vertical and horizontal sets of phases, reverse the constructed schedule σ into σ^', and return the growth schedule σ^'.
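The following simplified Python sketch records only the row-pairing bookkeeping of the vertical phases (merging each upper row of a pair onto the row below it, or translating it down); it ignores the neighbor-handover mechanics, and all names are ours. Reversing the returned schedule gives the vertical part of the growth schedule σ'.

```python
def vertical_halving_schedule(cells):
    """Row-pairing bookkeeping of the vertical phases.

    `cells` is a set of (x, y) nodes with y >= 0.  In each phase, rows are
    paired from the bottom; every node in the upper row of a pair is merged
    onto the lower row (contraction towards a neighbour, or a translation
    one row down).  The moves are recorded so that the growth schedule is
    obtained by reversing them.
    """
    schedule = []
    while len({y for _, y in cells}) > 1:
        moves, new_cells = [], set()
        for (x, y) in cells:
            target = (x, y // 2)                # the pair of rows collapses to one row
            if y % 2 == 1:                      # upper row of the pair
                moves.append(((x, y), target))  # eliminated or translated node
            new_cells.add(target)
        schedule.append(moves)
        cells = new_cells
    return schedule

# An L-shaped pattern with 4 rows collapses to one row in 2 vertical phases.
shape = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (2, 0)}
for i, phase in enumerate(vertical_halving_schedule(shape), 1):
    print("phase", i, sorted(phase))
```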
Given any connected shape S that fits in an r × c bounding box, the above algorithm can construct S from a singleton within O(log r + log c) time steps.
After executing the algorithm on the connected shape S and obtaining the growth schedule σ', the growth process adds nodes and edges based on the reversed order of elimination and translation represented by σ'.
Applying the schedule σ' from a single node, we first expand the shape horizontally into its c columns using the doubling operation, and after completing the horizontal growth we proceed to the vertical growth, doubling the constructed row vertically into the r rows of S.
To analyze the time complexity, note that the horizontal growth is bounded by the number of columns, which takes O(log c) time steps, and the vertical growth is bounded by the number of rows, which takes O(log r) time steps. Therefore, the overall time complexity of this process is O(log r + log c).
§.§ Growth-Distance to Trees
The primary feature of the cycle-breaking growth is that it increases the distance in S by introducing new nodes and breaking certain edges. As a result, any connected shape S can be stretched and converted into a spanning tree T. Converting a shape S into a spanning tree T consists of the following steps:
* Consider a given spanning tree T of the shape S=(V, E).
* At the first time step t_1, apply a cycle-breaking growth operation on every horizontal edge e∈ E of S that is parallel to a non-tree edge of T (the decision of which edges to break depends on the spanning tree T computed in the previous step).
* At the second time step t_2, apply a cycle-breaking growth operation on every vertical edge e∈ E of S that is parallel to a non-tree edge of T; in other words, repeat the previous step vertically.
Algorithm <ref> transforms any shape S into a tree T within two time steps.
To formally prove that we can convert any shape S into a tree T within two time steps, we need to demonstrate two main properties of the output tree T: connectivity and acyclicity.
For connectivity, the given shape S is initially assumed to be connected. The computation of the spanning tree T ensures that T spans all the nodes in S, meaning there is a single path between any pair of nodes in T.
Without loss of generality, assume that the horizontal edge uv is not part of the spanning tree T; we break it by introducing a new node x on the edge u'v' that is parallel to uv. This ensures that any path between u and v in S is now routed through node x and the newly introduced edges; that is, the new path between u and v is uu', u'x, xv', v'v. Hence, after applying the cycle-breaking growth in parallel, the resulting tree T remains connected, fulfilling the connectivity property.
To prove the acyclicity of T, we consider that the computation of the spanning tree T ensures that T is a tree structure, which by definition, does not contain cycles. After that, during the cycle-breaking growth, no new cycles are introduced. Breaking an edge and growing a parallel edge does not create a cycle, as the newly introduced edges only connect existing nodes in T.
The computational complexity of Algorithm <ref> can be analyzed as follows. In the first time step, a cycle-breaking growth operation is concurrently applied to every horizontal edge of S that is parallel to a non-tree edge of T. In the second time step, a cycle-breaking growth operation is applied to every such vertical edge of S. Thus, Algorithm <ref> transforms any given shape S into a tree T within two time steps.
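The sketch below (names ours) only identifies which edges the two breaking steps act on: it builds a BFS spanning tree of a shape given as a set of cells and classifies the non-tree edges by orientation; the node-insertion mechanics of the cycle-breaking operation are not modelled.

```python
from collections import deque

def nontree_edges_by_orientation(cells):
    """Classify the non-tree edges of a shape w.r.t. a BFS spanning tree.

    Returns (horizontal, vertical) lists of edges: horizontal ones are
    handled in the first time step, vertical ones in the second.
    """
    cells = set(cells)
    edges = set()
    for (x, y) in cells:
        for dx, dy in ((1, 0), (0, 1)):
            if (x + dx, y + dy) in cells:
                edges.add(((x, y), (x + dx, y + dy)))
    root = next(iter(cells))
    seen, tree, queue = {root}, set(), deque([root])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (x + dx, y + dy)
            if v in cells and v not in seen:
                seen.add(v)
                tree.add(tuple(sorted(((x, y), v))))
                queue.append(v)
    nontree = [e for e in edges if tuple(sorted(e)) not in tree]
    horiz = [e for e in nontree if e[0][1] == e[1][1]]
    vert = [e for e in nontree if e[0][0] == e[1][0]]
    return horiz, vert

square = {(x, y) for x in range(3) for y in range(3)}
h, v = nontree_edges_by_orientation(square)
print(len(h), len(v))  # 9 nodes, 12 edges, 8 tree edges -> 4 broken in total
```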
§ CONCLUSION
In conclusion, this paper has investigated the geometric properties of cycle-preserving and cycle-breaking growth processes within a centralized geometric framework. We have explored several key questions, including the class of shapes that can be constructed through these growth operations, their differences, and the possibility of transforming shapes from one family to another. As a result, we characterized some classes of shapes that can be constructed within logarithmic time steps using these growth operations. Also, we presented efficient algorithms and approaches for achieving the desired shape construction or transformation.
The results of this study open up new avenues for research and applications in the field of shape manipulation and provide valuable insights into the possibilities and limitations of growth operations. Despite the significant progress made, several open problems are worth further investigation. One open problem is the decision problem of determining whether a growth process exists that can transform an initial shape S_I to a final shape S_F within a given time-bound t. This problem has implications for reachability and can be further studied in special cases such as single-step reachability and the singleton special case of S_I. Additionally, the decision problem can be extended to the function problem, where the objective is to return a growth schedule that transforms S_I into S_F within t time steps. Furthermore, an optimization problem arises in the context of shape growth. In this problem, given an initial shape S_I, a target shape S_F, and a time-bound t, the goal is to find the fastest growth process that transforms S_I into S_F within t time steps. The objective is to minimize the time steps required for the transformation, providing an optimal solution that achieves the desired shape in the shortest time possible.
Addressing these open problems will contribute to the development of efficient algorithms and techniques for geometric shape growth.
|
http://arxiv.org/abs/2307.04179v1 | 20230709140458 | IANS: Intelligibility-aware Null-steering Beamforming for Dual-Microphone Arrays | ["Wen-Yuan Ting", "Syu-Siang Wang", "Yu Tsao", "Borching Su"] | eess.AS | ["eess.AS", "eess.SP"] |
Beamforming techniques are popular in speech-related applications due to their effective spatial filtering capabilities. Nonetheless, conventional beamforming techniques generally depend heavily on either the target's direction-of-arrival (DOA), relative transfer function (RTF) or covariance matrix. This paper presents a new approach, the intelligibility-aware null-steering (IANS) beamforming framework, which uses the STOI-Net intelligibility prediction model to improve speech intelligibility without prior knowledge of the speech signal parameters mentioned earlier. The IANS framework combines a null-steering beamformer (NSBF) to generate a set of beamformed outputs, and STOI-Net, to determine the optimal result. Experimental results indicate that IANS can produce intelligibility-enhanced signals using a small dual-microphone array. The results are comparable to those obtained by null-steering beamformers with given knowledge of DOAs.
STOI, STOI-Net, null-steering, beamforming, microphone arrays
§ INTRODUCTION
Microphone arrays are commonly used in numerous speech-related applications including hearing aids and teleconferencing to isolate the desired signals that are often degraded
by ambient noise and other types of interference <cit.>.
Multi-channel speech enhancement (MCSE) techniques have been extensively studied to extract the desired signals
<cit.>.
Beamforming algorithms are usually a crucial component of these methods, as they utilize spatial diversity from multiple recordings to perform spectral and spatial filtering on multiple channel inputs, generating a speech-enhanced output
<cit.>.
For example, the delay-and-sum beamformer
<cit.> uses the geometry of the array and direction-of-arrival (DOA) information to parameterize the spatial-spectral filter.
The minimum variance distortionless response (MVDR) method <cit.>
minimizes the power of the noise signal while maintaining a distortionless response for the target signal, utilizing the knowledge of the covariance matrices and DOA or relative transfer function (RTF).
Additionally, null-steering beamformers (NSBF) have been proposed to
filter out signals from specific directions
<cit.>.
Conventional beamforming algorithms typically depend highly on an accurate DOA or RTF estimate to obtain the spatial information of the target signals.
Over the past few decades, multiple DOA estimation algorithms have been proposed in
<cit.>.
For DOA estimation algorithms specialized for multiple speech signals, the work in <cit.> used the coherence test and sparsity property of speech
to estimate accurate DOAs using clustering-based methods.
In addition to a direct DOA estimation approach
, time difference of arrival (TDOA) estimation methods <cit.> are also commonly used to localize the target signal.
One popular category is the application of the steered response power phase transform <cit.>, which scans over a predefined spatial region to parameterize the cross-correlation functions using each candidate location of the source, and then adopts a maximum likelihood estimator to estimate the TDOA.
In addition to these methods, the work in <cit.> discussed covariance subtraction and covariance whitening methods to obtain RTF estimates of the speech signal using well-estimated covariance matrices from noise-only and speech-noise frames.
Although these approaches have great potential to provide accurate spatial information, they typically rely heavily on multiple assumptions. In the case of <cit.>, the authors assumed accurate estimates of the noise covariance matrices for each time-frequency index.
If the noise covariance matrices
contain spatial statistics of the speech signal, the beamformers might not be aware of such errors and attenuate the corresponding signals without regard to how this might impact the intelligibility of speech signals.
Meanwhile, it is also worth noting that neural beamformers, such as <cit.>, have been proposed to perform state-of-the-art MCSE.
For these NN-based approaches, it is usually necessary to construct a dataset containing diverse utterances received by a microphone array in multiple scenarios.
In addition,
these neural beamformers are usually optimized over a large number of parameters, which makes each parameter hard to interpret.
In the field of speech processing,
a well-known metric for intelligibility is the short-time objective intelligibility (STOI) <cit.>.
The STOI function estimates the intelligibility of signals through
a series of signal-processing stages, including silence-segment elimination, feature extraction in the time-frequency (TF) domain, one-third octave band processing, feature normalization, and intelligibility mapping. In this process, the deteriorated sound signal and the corresponding clean reference signal are used simultaneously to compute the final score.
In this paper, the STOI function will be denoted as ℎ_STOI: ℝ^N × K×ℝ^N × K→ [0, 1] which is defined as the mapping from the magnitude of a pair of N × K short-time Fourier transform (STFT) matrices to the interval [0, 1], where N is the number of time frames and K is the number of frequency bins per frame. For simplicity, we will omit the steps such as silence-segment elimination for our description of the STOI function.
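For reference, the intrusive STOI computation that h_STOI stands for can be reproduced with the open-source pystoi package, assuming that package is acceptable as an implementation; the sine-plus-noise signals below are only stand-ins for real speech.

```python
import numpy as np
from pystoi import stoi

fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 220 * t) * np.hanning(t.size)   # stand-in for a clean utterance
noisy = clean + 0.3 * np.random.randn(t.size)               # degraded version

# Intrusive intelligibility estimate: both the degraded signal and the
# clean reference are required, which is exactly what STOI-Net avoids.
score = stoi(clean, noisy, fs, extended=False)
print(score)
```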
In addition, unlike metrics such as the speech intelligibility index in <cit.>, STOI is known for its reliable intelligibility evaluation of signals processed in the TF domain, where most acoustic beamforming systems perform MCSE. However, a clean reference is typically inaccessible. Therefore,
the authors in <cit.> proposed STOI-Net, a non-intrusive intelligibility assessment model that predicts STOI scores based only on the noise-corrupted waveforms.
In light of the heavy dependence of beamformers on the estimation of the DOAs or RTFs of the speech signals,
we propose a new optimization framework for an intelligibility-enhancing beamformer
without relying on the previously mentioned speech parameters.
Instead, we explicitly consider intelligibility as an optimization objective.
Works such as <cit.> have also incorporated the notion of intelligibility into the design of beamformers. We will perform intelligibility-based optimization within a set of null-steering beamformers. Hence, we call this intelligibility-aware null-steering (IANS) beamforming.
For the IANS beamforming process, an NSBF algorithm is first applied to generate a set of candidate signals via null-steering. The generated signals are then passed through a pre-trained STOI-Net to predict the associated STOI scores. IANS then outputs the utterance corresponding to the highest intelligibility score. Contrary to the previously mentioned neural beamformers, the proposed IANS algorithm doesn't require additional multi-channel training data.
Moreover, the IANS optimization problem only optimizes one parameter whose optimal value is interpretable. Furthermore, advanced single-channel SE methods, such as <cit.>, can be incorporated with IANS for downstream applications.
The remainder of this paper is organized as follows. Section <ref> discusses the signal model and related works including filter-and-sum beamformers, null-steering beamformers and STOI-Net.
Next, we will present our IANS optimization problem in Section <ref>.
In Section <ref>, the IANS algorithm will be discussed in detail.
Section <ref> presents the experimental setup and results. Finally, Section <ref> concludes the paper and discusses future works.
§ BACKGROUND AND RELATED WORKS
§.§ Signal model
In this study, the considered signal model comprises a
speech signal, s(t), and an interference signal, i(t), propagating in a room with a sound speed of c, received by a dual-microphone array at angles of θ_s and θ_i, respectively. The angles are
measured with respect to the first (reference) microphone, with 0^∘ being the endfire direction. The microphone array has a small spacing of ℓ, and we assume that the sound sources are stationary in space. We denote the room impulse responses (RIRs) for s(t) and i(t) with respect to the m^th microphone as
g^(m)_s(t) and g^(m)_i(t), respectively. The received signal at the m^th microphone can be expressed as the following:
x^(m)(t) = g^(m)_s(t) ∗ s(t) + g^(m)_i(t) ∗ i(t).
After obtaining the received signals x^(1)(t) and x^(2)(t), we can apply the STFT to derive their corresponding N × K STFT matrices 𝐗^(1) and 𝐗^(2). Subsequently, we can define the received signal vector 𝐱[n, k] as follows:
𝐱[n, k] = [𝐗^(1)_n, k, 𝐗^(2)_n, k]^T.
Here, 𝐗^(m)_n, k represents the (n, k)^th element of 𝐗^(m), where
n=1, 2, ⋯, N and k=1, 2, ⋯, K.
§.§ Filter-and-sum beamformers
Filter-and-sum beamformers <cit.>
are a set of beamformers that perform the filter-and-sum operation to enhance the signal of interest. This process can be represented in the TF-domain as
Y[n, k] = 𝐰^H[n, k] 𝐱[n, k],
where 𝐰[n, k] is the weight vector for 𝐱[n, k], and Y[n, k] is the resulting TF component of the beamformed signal. We will denote this set of beamformers as ℱ_FSBF.
§.§ Null-steering beamformers
Within ℱ_FSBF, there is a subset of beamformers capable of nulling out signals coming from a particular direction ϕ while maintaining a (nearly) distortionless response at θ_d. We call this set the null-steering beamformer set, ℱ_NSBF.
We first define two vectors, the distortionless response steering vector 𝐚^(θ_d)[k] = [1, e^-j ω_k ℓ/ccosθ_d]^T and the null-response steering vector
𝐚^(ϕ)[k] = [1, e^-j ω_k ℓ/ccosϕ]^T, where ω_k is the frequency value at the k^th frequency bin. Each 𝐚^(ϕ)[k] is associated with a projection matrix Φ[k] defined in the following,
Φ[k] =𝐈 - 𝐚^(ϕ)[k](𝐚^(ϕ)[k])^H/||𝐚^(ϕ)[k]||^2
=𝐈 - 𝐚^(ϕ)[k](𝐚^(ϕ)[k])^H/2,
where (·)^H denotes the Hermitian transpose operator. Here, Φ[k] projects vectors into the subspace orthogonal to the span of 𝐚^(ϕ)[k].
In this paper, ℱ_NSBF is defined as a beamformer set
with time-invariant weight vectors defined as
𝐰[k] = Φ[k]𝐚^(θ_d)[k]/max((𝐚^(θ_d)[k])^HΦ[k]𝐚^(θ_d)[k], ϵ),
where ϵ is a small number to avoid 0 division. We note that without the max(· , ϵ), Eq. (<ref>) has been studied in <cit.> in the context of the MVDR beamformer.
It is also worth pointing out that in the context of beamformers such as the linearly-constrained minimum variance beamformer <cit.>, null responses are usually set as explicit constraints to a noise power minimization problem.
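A minimal NumPy sketch of the time-invariant weight vectors defined above is given below for the dual-microphone case; the function name and the one-sided FFT layout are our choices, and the parameter defaults follow the setup used later in the experiments. The beamformed output is then obtained as Y[n,k] = w^H[k] x[n,k].

```python
import numpy as np

def nsbf_weights(theta_d, phi, n_fft=512, fs=16000, ell=0.008, c=343.0, eps=1.11e-16):
    """Null-steering weights for a 2-mic array: distortionless towards
    theta_d (up to the projection) and a spatial null towards phi.

    Returns a (K, 2) complex array, one weight vector per frequency bin.
    """
    k = np.arange(n_fft // 2 + 1)                 # one-sided frequency bins
    omega = 2 * np.pi * fs * k / n_fft            # angular frequencies
    tau_d = ell / c * np.cos(np.deg2rad(theta_d))
    tau_n = ell / c * np.cos(np.deg2rad(phi))
    a_d = np.stack([np.ones_like(omega), np.exp(-1j * omega * tau_d)], axis=1)
    a_n = np.stack([np.ones_like(omega), np.exp(-1j * omega * tau_n)], axis=1)
    w = np.empty_like(a_d)
    for i in range(len(k)):
        # Projection onto the subspace orthogonal to the null steering vector.
        P = np.eye(2) - np.outer(a_n[i], a_n[i].conj()) / 2.0
        num = P @ a_d[i]
        den = max((a_d[i].conj() @ P @ a_d[i]).real, eps)
        w[i] = num / den
    return w

W = nsbf_weights(theta_d=0.0, phi=90.0)
print(W.shape)   # (257, 2)
```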
§.§ STOI-Net
In <cit.>, the authors proposed STOI-Net, a non-intrusive speech intelligibility assessment model that predicts the STOI scores of speech signals both frame- and utterance-wise using feature extraction and score calculation functions.
For the feature extraction, the STFT is applied to convert the peak-normalized time-domain waveform of interest into a sequence of frame-wise magnitude spectra in the frequency domain. These frames are then passed through 12 fully convolutional neural network layers to extract the acoustic representations. Next, the score-calculation function maps the extracted features to an intelligibility score. Specifically, frame-level scores are generated after applying 1) bidirectional long short-term memory, 2) an attention layer, and 3) fully connected nonlinear mapping functions to the extracted features. The final intelligibility score of the entire utterance is then obtained by applying a global averaging algorithm to all frame-level scores. It is worth noting that STOI-Net is not limited to a single neural network architecture. In <cit.>, two model architectures were used: one with an attention layer and the other without it. In the remainder of this paper, we will denote the STOI-Net model as a function ℎ_STOI-Net: ℝ^N × K→ℝ.
§ PROBLEM FORMULATION
Contrary to most optimal beamformers for speech enhancement (e.g., MVDR) where optimal weights are derived for each TF bin based on well-estimated DOAs, RTFs and covariance matrices, we propose an optimization problem based on the intelligibility of the entire utterance of the received speech signals.
Our primary goal is to identify a function 𝑓: ℂ^N × K×ℂ^N × K→ℂ^N × K within a function set ℱ that takes the received signals 𝐗^(1) and 𝐗^(2) as input
and maps them to an STFT matrix
with the maximum STOI score.
As we aim to perform optimization without having to train a new NN that learns from a dataset containing microphone array recordings from various scenarios, we will use a simpler function set ℱ=ℱ_NSBF
where we can limit all potential values of θ_d and ϕ on a discrete grid.
However, the number of feasible solutions grows quadratically with the resolution for the θ_d and ϕ axes on this grid. Hence, we only perform grid search for ϕ on a grid 𝒢 while fixing θ_d to an arbitrary angle ψ∈ [0^∘, 180^∘]. The grid 𝒢 is an ordered set containing P angles ranging from 0^∘ to 180^∘.
Thus, the number of possible beamformers we are considering here is P.
Our optimization problem can now be described as the following:
maximize_ϕ∈𝒢 ℎ_STOI(|𝑓_θ_d=ψ, ϕ(𝐗^(1), 𝐗^(2))|, |𝐒|)
subject to 𝑓_θ_d=ψ, ϕ∈ℱ_NSBF,
where 𝐒 represents the STFT matrix of the clean speech signal and |·| denotes the element-wise magnitude extraction for a matrix.
We refer to this problem as the STOI Null-steering (STOI-NS) problem, as it employs null-steering to optimize the true STOI function. We will denote the optimal beamformer and null angle for this problem as 𝑓_STOI-NS^⋆ and ϕ_STOI-NS^⋆, respectively.
However, 𝐒 is never accessible in practical scenarios. Therefore, using the pre-trained STOI-Net model ℎ_STOI-Net, we modify the optimization problem in (<ref>) as follows:
maximize_ϕ∈𝒢 ℎ_STOI-Net(|𝑓_θ_d=ψ, ϕ(𝐗^(1), 𝐗^(2))|)
subject to 𝑓_θ_d=ψ, ϕ∈ℱ_NSBF
Since STOI-Net was trained to estimate the STOI score of a signal, we consider this the Intelligibility-aware Null-steering (IANS) problem.
The IANS problem is now feasible without the clean reference 𝐒. The optimal beamformer and null angle for this problem are denoted as
𝑓_IANS^⋆ and ϕ_IANS^⋆, respectively.
It is clear that the STOI score of the output obtained from using the beamformer 𝑓_STOI-NS^⋆ is a natural upper bound of that using 𝑓_IANS^⋆ as we will show in
Section <ref>.
This optimization framework was inspired by works such as <cit.>,
where the authors trained speech enhancement systems by incorporating speech quality prediction neural networks <cit.> into the loss function.
Notably, contrary to the conventional belief that ψ must be close to θ_s in order to perform speech enhancement, our method, as we will show in Section <ref>, is not constrained by this requirement. Therefore, we do not regard ψ as an estimate of θ_s.
This also implies that, in the context of the STOI-NS and IANS optimization problem, we never guarantee the distortionless property of speech as in the MVDR beamformer. However, as we will show later in Section <ref>, intelligibility enhancement is still possible using the optimal null angles
ϕ^⋆_STOI-NS and ϕ^⋆_IANS.
These two angles
can be interpreted as optimal null angles chosen to minimize the
impact of
the interference signal, speech distortion and RIRs on intelligibility, while maintaining a nearly distortionless response at ψ.
Since there is a chance that ψ=θ_i,
it is advisable to perform the IANS algorithm twice using two different θ_d values (e.g., ψ and ψ + 80^∘ in this study).
It is worth noting that dual-microphone array beamformers usually correspond to beampatterns exhibiting a large main lobe and side lobe owing to the limited degrees of freedom. In other words, we can use this property to construct a directive null, as in Eq. (<ref>), while preserving a certain amount of gain for signals coming from all angles, except those within the vicinity of the null.
We also note that small microphone arrays tend to have frequency-invariant beampatterns as explained in <cit.>, which can also be an advantage since beamformers that are sensitive to frequency variations tend to produce more unpredictable results.
§ THE IANS ALGORITHM
This section describes the IANS algorithm which solves the IANS optimization problem in (<ref>). The algorithm consists of two stages: the NSBF stage and the STOI-Net stage as shown in Fig. <ref>,
where results from the first stage will be passed on to the second stage. The following subsections will provide more detailed explanations about the IANS algorithm.
§.§ Stage 1: NSBF
The initial step of the IANS algorithm involves applying the STFT on the two signals x^(1)(t) and x^(2)(t) to obtain 𝐗^(1) and 𝐗^(2).
We then generate a set 𝒴_(STFT) containing P STFT matrices {𝐘^(1), ⋯𝐘^(P)} by sending the pair (𝐗^(1), 𝐗^(2)) into P NSBF beamformers {𝑓_θ_d =ψ, ϕ=𝒢_1, ⋯ ,𝑓_θ_d =ψ, ϕ=𝒢_P}⊂ℱ_NSBF. If ψ = 𝒢_p, where p∈{1, 2, ⋯, P}, we let 𝐘^(p)=𝐗^(1) instead of using
𝑓_θ_d =ψ, ϕ=ψ.
Note that parallel computing can be used since each computation of the elements of 𝒴_(STFT) is independent of each other. It is also worth pointing out that the time-invariant weight vectors in Eq. (<ref>) can be computed and stored beforehand to save time.
Since we will later send these into STOI-Net, we apply the inverse-STFT operation (iSTFT) on each element in 𝒴_(STFT) to perform peak normalization in the time domain. We denote this set as 𝒴_(time)'. We do this to match the training conditions of STOI-Net as we mentioned in Subsection <ref>.
§.§ Stage 2: STOI-Net
Following the peak normalization, we perform STFT on each element in 𝒴_(time)' to convert them back to the TF domain and extract their corresponding magnitude components. We denote the resulting set as 𝒴_(STFT)”.
We then pass each element in 𝒴_(STFT)” into STOI-Net to predict their utterance-based STOI score. These scores are then stored in a STOI-Net score vector α. The optimal null angle ϕ_IANS^⋆ for 𝑓_IANS^⋆ can be obtained as
ϕ_IANS^⋆ = 𝒢_argmax (α).
Moreover, in the case where we have access to the clean reference signal 𝐒, we can replace STOI-Net with the real STOI function in this stage and obtain a STOI score vector β. Therefore, the value of
ϕ_STOI-NS^⋆ for 𝑓_STOI-NS^⋆ can be expressed as
ϕ_STOI-NS^⋆ = 𝒢_argmax (β).
In this study, the pre-trained STOI-Net model without the attention layer was directly obtained from the previous research[https://github.com/dhimasryan/STOI-Net] without any modifications, such as adaptation, retraining, or fine-tuning, for the MCSE task.
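Putting the two stages together, the following sketch performs the grid search of the IANS problem; it reuses nsbf_weights from the NSBF sketch above, uses SciPy's STFT/iSTFT, and replaces the pre-trained STOI-Net with a placeholder callable predict_stoi, so it only illustrates the control flow, not the actual model.

```python
import numpy as np
from scipy.signal import stft, istft

def ians_search(x1, x2, psi, grid, predict_stoi, fs=16000):
    """Grid search of the IANS problem over candidate null angles.

    `predict_stoi` stands in for the pre-trained STOI-Net: it maps a
    magnitude spectrogram to a scalar intelligibility estimate.
    Assumes nsbf_weights(...) from the earlier NSBF sketch is in scope.
    """
    _, _, X1 = stft(x1, fs=fs, window='hamming', nperseg=512, noverlap=256)
    _, _, X2 = stft(x2, fs=fs, window='hamming', nperseg=512, noverlap=256)
    best_phi, best_score, best_y = None, -np.inf, None
    for phi in grid:
        if np.isclose(phi, psi):
            Y = X1                                   # keep the reference channel unchanged
        else:
            W = nsbf_weights(theta_d=psi, phi=phi)   # (K, 2) weights
            Y = W[:, 0].conj()[:, None] * X1 + W[:, 1].conj()[:, None] * X2
        _, y = istft(Y, fs=fs, window='hamming', nperseg=512, noverlap=256)
        y = y / (np.max(np.abs(y)) + 1e-12)          # peak normalisation as in training
        _, _, Yn = stft(y, fs=fs, window='hamming', nperseg=512, noverlap=256)
        score = predict_stoi(np.abs(Yn))
        if score > best_score:
            best_phi, best_score, best_y = phi, score, y
    return best_phi, best_score, best_y

# Example with a dummy scorer (replace with the pre-trained STOI-Net):
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(32000), rng.standard_normal(32000)
phi_star, s, y = ians_search(x1, x2, psi=0.0, grid=np.arange(0, 181, 2),
                             predict_stoi=lambda M: float(-M.mean()))
```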
§ EXPERIMENTAL RESULTS AND ANALYSIS
§.§ Experimental setup
In this study, the Pyroomacoustics package <cit.> was utilized to simulate the signal model in Eq. (<ref>) using the image source method <cit.> with the following parameters. The simulated room has dimensions of [5m, 6m, 4m] with the RT60 parameter set to 150ms and the sound speed c set to 343m/s. The center of the microphone array is located at [2.5m, 3m, 1m]. The distance between the two microphones is set to ℓ=8mm with the array being parallel to the x-axis and the reference microphone being the microphone on the right.
The speech DOA θ_s is set to 90^∘, while the interference DOA θ_i can be one of four predefined directions: 22.5^∘, 67.5^∘, 112.5^∘, or 157.5^∘.
IANS then uses 512-point Hamming windows with
50% overlap to process the incoming signals.
The set 𝒢 is a uniform
grid over the interval [0^∘, 180^∘] with an angular resolution of 2^∘ (i.e., 91 angular values).
Additionally, IANS was evaluated using two values for ψ: 0^∘, representing the largest angle difference from θ_s, and 80^∘, which is relatively closer to θ_s.
Note that the values of ℓ and c are assumed known to the IANS algorithm. Additionally, the value of ϵ=1.11 × 10^-16.
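A sketch of the room simulation with Pyroomacoustics is shown below. The 1.5 m source distance, the white-noise placeholder signals, and the exact absorption settings are our assumptions; SIR scaling of the interference is omitted for brevity.

```python
import numpy as np
import pyroomacoustics as pra

fs, rt60, room_dim = 16000, 0.15, [5.0, 6.0, 4.0]
e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)
room = pra.ShoeBox(room_dim, fs=fs, materials=pra.Material(e_absorption),
                   max_order=max_order)

# Dual-microphone array centred at [2.5, 3, 1], 8 mm spacing along the x-axis;
# the first (reference) microphone is the one on the right.
centre = np.array([2.5, 3.0, 1.0])
mics = np.c_[centre + [0.004, 0, 0], centre - [0.004, 0, 0]]
room.add_microphone_array(pra.MicrophoneArray(mics, fs))

# Speech from 90 deg and interference from 22.5 deg, both placed 1.5 m away;
# `speech` and `noise` are placeholders for the corpus signals.
speech = np.random.randn(2 * fs)
noise = np.random.randn(2 * fs)
for sig, theta in ((speech, 90.0), (noise, 22.5)):
    pos = centre + 1.5 * np.array([np.cos(np.deg2rad(theta)),
                                   np.sin(np.deg2rad(theta)), 0.0])
    room.add_source(pos.tolist(), signal=sig)

room.simulate()
x1, x2 = room.mic_array.signals        # received signals x^(1)(t), x^(2)(t)
```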
Our experiments can be classified into two parts.
The first part uses an English dataset, namely, the Wall Street Journal <cit.> eval92 evaluation set.
From eval92, we first selected two male and two female speakers, and chose one utterance from each speaker to form the source signal. Babble and car noises in the NOISEUS corpus <cit.> and pink noise in NOISEX-92 <cit.> were applied as the interference. Five signal-to-interference ratios (SIRs), namely, -10, -5, 0, 5, and 10 dB, were used to create noisy utterances. The SIRs were mixed with respect to the first microphone as suggested by the Pyroomacoustics documentation.
Therefore, 240 source interference pairs (four clean utterances, three noises, four interference angles, and five SIRs) were used to form the English testing set (denoted as “WSJ”).
For the second part of experiments, we used a Mandarin dataset, namely, the Taiwan Mandarin Hearing in Noise Test <cit.> corpus, comprising 320 sentences. Two male speakers and two female speakers were selected from the dataset. One utterance, recorded in a noise-free environment, was selected from each speaker as the speech source.
For the interference signal, we chose three noise signals from the DEMAND <cit.> dataset: “tmetro", “pstation", and “npark."
Like in WSJ,
the aforementioned SIRs were used to create 240 source-interference pairs (four clean utterances, three noises, four interference angles, and five SIRs) for the Mandarin testing set (denoted as “TMHINT”). It is worth noting that STOI-Net was previously trained on the training set of the original Wall Street Journal dataset. The eval92 set was used to evaluate the generalization performance of STOI-Net.
Therefore, the English and Mandarin datasets in this study correspond to the matched and mismatched languages for STOI-Net, respectively. All single-channel recordings were sampled at 16kHz.
The experimental performance was evaluated in terms of STOI
and the wideband extension of the perceptual evaluation of speech quality (PESQ) <cit.> metric.
§.§ Evaluation results
For both testing sets, we labeled the enhanced results using the IANS algorithm with
θ_d=ψ as “IANS_ψ.”
Noisy utterances (labeled as “Noisy") received by the first microphone were used as the baseline.
Moreover, we also compared our IANS result with two additional systems. The first system
is the STOI-NS system which optimizes the STOI-NS problem given the clean reference 𝐒 for all utterances. The optimization procedure was detailed in Section <ref>.
Like in “IANS_ψ", we represent the results from STOI-NS with
θ_d=ψ as “STOI-NS_ψ.” For the second system, NSBF was performed by setting θ_d=θ_s and ϕ=θ_i. This system has the advantage of knowing the true DOAs of the speech and interference signals. Therefore, we label the corresponding results as “T-NSBF."
For the WSJ evaluation set, we list the average STOI and PESQ scores for all 240 utterances of “Noisy”, “IANS_0^∘”, “STOI-NS_0^∘”, and “T-NSBF” in Table <ref>. From the table, we can see that the STOI and PESQ scores for “IANS_0^∘" are higher than those for “Noisy", indicating an improvement in the intelligibility and quality of noisy speech signals from the English dataset using the proposed approach.
Table <ref> lists the STOI and PESQ scores of “Noisy”, “IANS_0^∘”, “STOI-NS_0^∘”, and “T-NSBF” associated with the TMHINT database. From the table, the improved metric performances from “Noisy" to “IANS_0^∘" confirm that the proposed IANS method can effectively enhance the intelligibility and sound quality of
noise-corrupted utterances. Notably, in both Tables <ref> and <ref>, STOI-NS_0^∘ has the highest STOI and PESQ scores on average, indicating that in these two experiments, if we properly choose the null angle to be ϕ_STOI-NS^⋆, we generate results with even higher intelligibility and quality than the results from null-steering beamforming where we had the prior knowledge of the DOAs of the speech and interference signals.
One potential factor that may have influenced this outcome is the non-anechoic nature of the room, resulting in signals propagating through multiple pathways. Hence, nulling the angle θ_i may not be the optimal choice for STOI.
Next, we will further investigate how different values of ψ affects the performance of “IANS_ψ” and “STOI-NS_ψ.” Specifically, we compared the results obtained using ψ = 0^∘ and ψ = 80^∘.
These results are presented in Tables <ref> and <ref>, which correspond to the WSJ and TMHINT databases, respectively.
From both tables, when comparing “IANS_0^∘" with “IANS_80^∘" and “STOI-NS_0^∘" with “STOI-NS_80^∘", we can see that, even though the results corresponding to ψ = 80^∘ yield higher PESQ scores, there is essentially no difference in STOI.
This implies that the large 90^∘ difference between ψ and θ_s has an insignificant effect on the ability of the IANS algorithm to generate intelligibility-enhanced results in our experiments.
Finally, we present an additional analysis of α and β in a particular scenario (Scenario A) to gain further insight into the similarities between the IANS and STOI-NS problems. The scenario consists of a female speaker from the WSJ dataset being interfered by the babble noise coming from a 22.5^∘ angle (i.e., θ_i=22.5^∘) with the SIR set to 0 dB. We let ψ=0^∘ for both IANS and STOI-NS, which means that the STOI value in β corresponding to ϕ=0^∘ is the STOI score of the unprocessed signal at the reference microphone as we explained in Subsection <ref>. The score values in both α and β are represented by the two curves depicted in Fig. <ref>.
The x-axis represents the values of ϕ in degrees, whereas the y-axis represents the values of α and β.
From the figure, we can see that the two curves have similar characteristics. Specifically, the lowest values for α and β both correspond to ϕ=θ_s=90^∘. Since both 𝑓_IANS^⋆ and 𝑓_STOI-NS^⋆ output results corresponding to the largest value in their respective score vectors, this suggests that they are both effective in preventing the speech signal from being severely attenuated in Scenario A.
Moreover, maximum values of α and β occur at ϕ_IANS^⋆=12^∘ and ϕ_STOI-NS^⋆=16^∘, respectively.
The corresponding STOI scores for
𝑓_IANS^⋆ and 𝑓_STOI-NS^⋆ are 0.902 (i.e., β_argmax(α)) and 0.903 (i.e., max(β)), respectively, which are at least 0.219 points higher than the STOI score of “Noisy" at 0.683, indicating the effectiveness in STOI enhancement of our IANS algorithm.
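As a schematic illustration of the selection rule just described (the score values below are made up for illustration and are not the experimental curves), the two selectors differ only in which score vector they maximize; a minimal Python sketch:

import numpy as np

alpha = np.array([0.70, 0.86, 0.84, 0.62, 0.55])   # STOI-Net predictions over the phi grid
beta  = np.array([0.68, 0.88, 0.90, 0.60, 0.52])   # true STOI over the same phi grid

idx_ians   = np.argmax(alpha)        # IANS: pick the angle with the highest predicted score
idx_stoins = np.argmax(beta)         # STOI-NS: oracle pick using the clean reference

stoi_ians   = beta[idx_ians]         # beta_argmax(alpha): true STOI of the IANS choice
stoi_stoins = beta[idx_stoins]       # max(beta): best achievable STOI
print(stoi_ians, stoi_stoins)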
§ CONCLUSION AND FUTURE WORK
In this paper, we proposed a novel intelligibility-based optimization problem (i.e., the IANS problem) along with its corresponding enhancement system, the IANS beamformer.
The system determines the optimal output speech with the highest intelligibility scores by combining the NSBF and STOI-Net modules, where NSBF processes the input recordings and STOI-Net provides STOI predictions.
We conducted experiments using cross-lingual datasets (Mandarin and English). The experimental results show that the proposed IANS system can effectively map the input signals to intelligibility and quality enhanced speech. It was also demonstrated that IANS produces robust performance regardless of whether the distortionless response is steered near the direction of the speech source.
In the future, we will evaluate the combination of beamforming systems with different evaluation modules, such as quality and mean opinion score assessment models <cit.>, and test our system in more complex noisy environments.
In addition, we will integrate single-channel speech enhancement methods with IANS to further enhance speech signals.
IEEEbib
|
http://arxiv.org/abs/2307.04519v1 | 20230710123906 | Energy-based model order reduction for linear stochastic Galerkin systems of second order | [
"Roland Pulch"
] | math.NA | [
"math.NA",
"cs.NA",
"65L05, 37H05, 93D30"
] |
Energy-based model order reduction
for linear stochastic Galerkin systems
of second order
Roland Pulch
Institute of Mathematics and Computer Science,
Universität Greifswald,
Walther-Rathenau-Straße 47, 17489 Greifswald, Germany.
Email: [email protected]
Abstract
We consider a second-order linear system of ordinary differential
equations (ODEs) including random variables.
A stochastic Galerkin method yields a larger deterministic linear
system of ODEs.
We apply a model order reduction (MOR) of this high-dimensional
linear dynamical system, where its internal energy represents a
quadratic quantity of interest.
We investigate the properties of this MOR with respect to
stability, passivity, and energy dissipation.
Numerical results are shown for a system modelling a
mass-spring-damper configuration.
§ INTRODUCTION
Mathematical models typically include physical parameters or other
parameters, which are often affected by uncertainties.
A well-known approach is to change the parameters into random variables
to address their variability, see <cit.>.
Consequently, an uncertainty quantification (UQ) can be performed.
We study second-order linear systems of ordinary differential equations
(ODEs), which contain independent random variables.
Each second-order linear system of ODEs together with its
internal energy is equivalent to a first-order port-Hamiltonian (pH)
system, where the Hamiltonian function represents the internal energy,
see <cit.>.
We use a stochastic Galerkin technique, see <cit.>,
which produces a larger deterministic system of second-order linear ODEs.
The stochastic Galerkin projection is structure-preserving.
Hence the stochastic Galerkin system also features an internal energy,
which represents a quadratic output of the linear dynamical system.
Since the stochastic Galerkin system is high-dimensional,
we employ a model order reduction (MOR), see <cit.>,
to diminish the dimensionality.
MOR of linear stochastic Galerkin systems with linear outputs was applied
in <cit.>, for example.
Now we investigate an MOR, where the internal energy is defined as
the quantity of interest (QoI).
In <cit.>, a balanced truncation technique was derived
to reduce a first-order linear system of ODEs with quadratic output.
We apply the balanced truncation to the canonical first-order system,
which is equivalent to the second-order stochastic Galerkin system.
A reduced system of ODEs exhibits a quadratic output,
which approximates the underlying internal energy.
An a posteriori error bound is computable for the quadratic output
in any MOR method, provided that the systems are asymptotically stable.
Moreover, we study the properties of the reduced systems
with respect to dissipation inequalities and passivity.
A concept to measure a loss of passivity is introduced.
Finally, we present results of numerical experiments
using a model of a mass-spring-damper system.
§ PROBLEM DEFINITION
A stochastic modelling is applied to second-order linear dynamical systems,
which include uncertain parameters.
§.§ Second-order linear ODEs including Parameters
We consider second-order linear systems of ODEs in the form
M(μ) p̈ + D(μ) ṗ + K(μ) p = B(μ) u ,
where the symmetric matrices M,D,K ∈ℝ^n × n and
the matrix B ∈ℝ^n × n_ in depend on parameters
μ∈ℳ⊆ℝ^q.
Input signals u : [0,∞) →ℝ^n_ in
are supplied to the system.
The state variables p : [0,∞) ×ℳ→ℝ^n
depend on time as well as the parameters.
We assume that the matrices M and K are positive definite and
the matrix D is positive definite or semi-definite
for all μ∈ℳ.
It follows that each linear dynamical system (<ref>) is
Lyapunov stable.
A positive definite matrix D is sufficient for the asymptotic stability
of a system (<ref>), see <cit.>.
The linear dynamical system (<ref>) features an internal energy
V(p,ṗ,μ) = 12( ṗ^⊤ M(μ) ṗ + p^⊤ K(μ) p ) ,
which represents the sum of kinetic energy and potential energy.
In <cit.>, it is shown that a second-order linear system
of ODEs, which satisfies the above assumptions on the definiteness of the
matrices, is equivalent to a first-order pH system.
Consequently, the internal energy (<ref>) is identical to
the Hamiltonian function of the pH system.
§.§ Stochastic Modelling and Polynomial Chaos
Expansions
Often the parameters are affected by uncertainties.
In UQ, a typical approach consists in replacing the parameters
by random variables, see <cit.>.
Thus we substitute the parameters in the system (<ref>) by
independent random variables μ : Ω→ℳ,
ω↦ (μ_1(ω),…,μ_q(ω))
on a probability space (Ω,𝒜,𝒫).
We use traditional probability distributions for each parameter
like uniform distribution, beta distribution, Gaussian distribution, etc.
Let a joint probability density function ρ : ℳ→ℝ be given.
A measurable function f : ℳ→ℝ exhibits
the expected value
𝔼 [f] = ∫_Ω f(μ(ω)) d𝒫(ω)
= ∫_ℳ f(μ) ρ(μ) dμ .
The expected value (<ref>) implies an inner product
⟨ f,g ⟩ = 𝔼[fg] for two square-integrable
functions f,g.
We denote the associated Hilbert space by ℋ.
Let an orthonormal basis (Φ_i)_i ∈ℕ be given,
which consists of polynomials Φ_i : ℳ→ℝ.
It holds that ⟨Φ_i , Φ_j ⟩ = δ_ij
with the Kronecker-delta.
The number of basis polynomials up to a total degree d is
s = (d+q)!/(d! q!).
This number becomes high for larger q even if d is moderate,
say d ≤ 5.
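As a quick numerical illustration (a minimal Python sketch; the function name is ours), the basis size is the binomial coefficient C(d+q, q); for the q = 14 parameters of the mass-spring-damper example considered later, total degrees two and three give 120 and 680 basis polynomials, respectively:

from math import comb

# Number of multivariate basis polynomials up to total degree d in q parameters:
# s = (d+q)!/(d! q!) = C(d+q, q).
def pc_basis_count(d, q):
    return comb(d + q, q)

print(pc_basis_count(2, 14))   # 120
print(pc_basis_count(3, 14))   # 680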
This orthonormal basis allows for expansions in the so-called
polynomial chaos (PC), see <cit.>.
A function f ∈ℋ can be represented as a PC expansion
f(μ) = ∑_i=1^∞ f_i Φ_i(μ)
with coefficients f_i = ⟨ f, Φ_i ⟩ .
We apply this expansion to the state variables in (<ref>)
separately for each component p_1,…,p_n and each
time point t ≥ 0.
§.§ Stochastic Galerkin System
Using the expansion (<ref>) for the state variables,
we arrange a finite sum with s terms including a priori unknown
approximations of the coefficients.
Inserting the finite sum into (<ref>) generates a residual.
The Galerkin approach requires that this residual be orthogonal to
the subspace spanned by the basis polynomials Φ_1,…,Φ_s.
The orthogonality is defined using the inner product of the
Hilbert space ℋ.
The stochastic Galerkin projection yields a deterministic
second-order linear system of ODEs
M̂p̈̂̈ + D̂ṗ̂̇ + K̂p̂ =
B̂ u
with larger matrices M̂,D̂,K̂∈ℝ^ns × ns,
and B̂∈ℝ^ns × n_ in.
The solution of the system is p̂ : [0,∞) →ℝ^ns
with p̂ = (p̂_1^⊤,…,p̂_s^⊤)^⊤,
where p̂_i represents an approximation of the exact
PC coefficients with respect to the ith basis polynomial.
More details on the stochastic Galerkin projection for linear ODEs
can be found in <cit.>, for example.
The stochastic Galerkin projection is structure-preserving.
Thus the matrices M̂,D̂,K̂ are symmetric again and also
inherit the definiteness of the original matrices M,D,K.
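To make the structure of the projected matrices concrete, a minimal quadrature-based assembly sketch is given below (Python; the function and argument names are ours). It assumes that the (i,j) block of the Galerkin matrix is the projection E[Φ_i(μ) Φ_j(μ) M(μ)], consistent with the Galerkin approach described above, and it does not exploit any additional structure that a practical implementation would use.

import numpy as np

def galerkin_matrix(M_of_mu, basis, nodes, weights):
    # Blocks: M_hat_{ij} = E[ Phi_i(mu) Phi_j(mu) M(mu) ], with the expectation
    # replaced by a quadrature rule with nodes mu_k and weights w_k.
    s = len(basis)
    n = M_of_mu(nodes[0]).shape[0]
    M_hat = np.zeros((n * s, n * s))
    for mu_k, w_k in zip(nodes, weights):
        Phi = np.array([phi(mu_k) for phi in basis])        # basis values at mu_k
        M_hat += np.kron(w_k * np.outer(Phi, Phi), M_of_mu(mu_k))
    return M_hat

The same pattern yields D̂, K̂ and, with a single basis index, the projected input matrix B̂.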
The stochastic Galerkin system (<ref>) exhibits the
internal energy
V̂(p̂,ṗ̂̇) = 12( ṗ̂̇^⊤M̂ṗ̂̇ +
p̂^⊤K̂p̂) .
The linear dynamical system (<ref>) without input
(u ≡ 0) satisfies the dissipation property
d/dt V̂(p̂,ṗ̂̇)
= - ṗ̂̇^⊤D̂ṗ̂̇≤ 0 ,
since we assume that the matrix D̂ is positive (semi-)definite.
Furthermore, the second-order linear system (<ref>)
has an equivalent linear explicit first-order system
[ v̇̂̇_1; v̇̂̇_2; ] =
[ 0 I_ns; -M̂^-1K̂ -M̂^-1D̂; ][ v̂_1; v̂_2; ] +
[ 0; M̂^-1B̂; ]
u
with v̂_1 = p̂, v̂_2 = ṗ̂̇,
and identity matrix I_ns∈ℝ^ns × ns.
The internal energy (<ref>) represents a
quadratic output of (<ref>) due to
V̂ ( v̂_1 , v̂_2 )
= 12[ v̂_1; v̂_2; ]^⊤[ K̂ 0; 0 M̂; ][ v̂_1; v̂_2; ] .
This relation is shortly written as
V̂(v̂) = 1/2v̂^⊤N̂v̂.
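For reference, the passage from the second-order Galerkin matrices to this first-order form and to the matrix of the quadratic output can be written compactly as follows (a Python sketch with our own function name; for large dimensions one would avoid forming the inverse of M̂ explicitly):

import numpy as np

def first_order_form(M_hat, D_hat, K_hat, B_hat):
    ns = M_hat.shape[0]
    Minv = np.linalg.inv(M_hat)                      # prefer factorizations for large ns
    A = np.block([[np.zeros((ns, ns)), np.eye(ns)],
                  [-Minv @ K_hat,      -Minv @ D_hat]])
    B = np.vstack([np.zeros_like(B_hat), Minv @ B_hat])
    N = np.block([[K_hat, np.zeros((ns, ns))],
                  [np.zeros((ns, ns)), M_hat]])      # internal energy: 0.5 * v^T N v
    return A, B, N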
§ DISSIPATION INEQUALITY AND PASSIVITY
Let a linear dynamical system be given in the form ẋ = A x + B u
with A ∈ℝ^n × n and B ∈ℝ^n × n_ in.
The quadratic output V = 1/2 x^⊤ N x with N ∈ℝ^n × n
satisfies the dissipation inequality
d/dt ( x^⊤ N x ) ≤ u^⊤ R u + 2 u^⊤ S x + x^⊤ L x
with two symmetric matrices L ∈ℝ^n × n,
R ∈ℝ^n_ in× n_ in,
and matrix S ∈ℝ^n_ in× n,
if and only if the symmetric matrix
[ A^⊤ N + N A - L , N B - S^⊤ ; B^⊤ N - S , -R ]
is negative definite or semi-definite, see <cit.>.
We select R=0 and S=B^⊤ N.
A bound (<ref>) with L=0 is advantageous,
because this case implies a dissipation inequality
d/dt ( 1/2 x^⊤ N x ) ≤ u^⊤ y
including the linear output y = B^⊤ N x,
as in pH systems.
Consequently, the linear dynamical system is passive,
see <cit.>.
Usually, the term u^⊤ y is interpreted as supplied power
and the term 1/2 x^⊤ N x as internal energy or stored energy.
Thus we insert R=0, L=0, S = B^⊤ N in (<ref>).
It follows that the passivity condition (<ref>)
is satisfied, if and only if the matrix A^⊤ N + N A
is negative definite or semi-definite.
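In practice this condition can be verified numerically; the sketch below (Python, with an illustrative tolerance) tests whether A^⊤ N + N A is negative (semi-)definite by inspecting its largest eigenvalue:

import numpy as np

def is_passive(A, N, tol=1e-10):
    # With R = 0, L = 0, S = B^T N, the storage 0.5 x^T N x satisfies
    # d/dt(0.5 x^T N x) <= u^T y  iff  A^T N + N A is negative (semi-)definite.
    T = A.T @ N + N @ A
    lam_max = np.max(np.linalg.eigvalsh(0.5 * (T + T.T)))   # symmetrize against round-off
    return lam_max <= tol, lam_max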
§ MODEL ORDER REDUCTION
We perform an MOR of the stochastic Galerkin system,
where the internal energy represents the QoI.
§.§ Model Order Reduction for Linear Systems
with Quadratic Output
The full-order model (FOM) is a general first-order linear system of ODEs
with quadratic output
ẋ = A x + B u
y = x^⊤ N x
including a symmetric matrix N.
Let n be the dimension of this system again.
In <cit.>, a balanced truncation method was introduced
for systems of the form (<ref>).
This technique requires that the system is asymptotically stable.
We outline this method.
The two Lyapunov equations
A P + P A^⊤ + B B^⊤ = 0
A^⊤ Q + Q A + N P N = 0
are solved successively, which yields the controllability Gramian P and
the observability Gramian Q.
Now symmetric decompositions P = Z_P Z_P^⊤ and Q = Z_Q Z_Q^⊤
are applied.
The singular value decomposition (SVD)
Z_P^⊤ Z_Q = U Σ V^⊤
yields orthogonal matrices U,V and a diagonal matrix Σ,
which includes the singular values in descending order.
We choose a reduced dimension r.
Let U=(U_1,U_2), V=(V_1,V_2),
and Σ = diag(Σ_1,Σ_2)
with U_1,V_1 ∈ℝ^n × r, and Σ_1 ∈ℝ^r × r.
We obtain projection matrices
V = Z_P U_1 Σ_1^-1/2
W = Z_Q V_1 Σ_1^-1/2 .
The reduced-order model (ROM) of dimension r becomes
ẋ̅̇ = A̅x̅ + B̅ u
y̅ = x̅^⊤N̅x̅
with the smaller matrices
A̅ = W^⊤ A V, B̅ = W^⊤ B, N̅ = V^⊤ N V.
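A compact numerical sketch of this procedure (Python, using dense solvers from SciPy; an actual implementation for the high-dimensional Galerkin systems would rely on structure-exploiting or low-rank solvers, and the function names are ours) reads:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def symmetric_factor(P):
    # Symmetric decomposition P = Z Z^T; tiny negative eigenvalues from round-off are clipped.
    w, V = np.linalg.eigh(0.5 * (P + P.T))
    return V * np.sqrt(np.clip(w, 0.0, None))

def bt_quadratic_output(A, B, N, r):
    # Balanced truncation for a stable linear system with quadratic output y = x^T N x.
    P = solve_continuous_lyapunov(A, -B @ B.T)        # A P + P A^T + B B^T = 0
    Q = solve_continuous_lyapunov(A.T, -N @ P @ N)    # A^T Q + Q A + N P N = 0
    Z_P, Z_Q = symmetric_factor(P), symmetric_factor(Q)
    U, sig, Vh = svd(Z_P.T @ Z_Q)                     # Hankel-type singular values
    S1 = np.diag(sig[:r] ** -0.5)
    V_r = Z_P @ U[:, :r] @ S1                         # projection matrices
    W_r = Z_Q @ Vh.T[:, :r] @ S1
    return W_r.T @ A @ V_r, W_r.T @ B, V_r.T @ N @ V_r, sig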
The linear dynamical system (<ref>) inherits the asymptotic
stability of the linear dynamical system (<ref>)
in the balanced truncation technique.
Furthermore, an a posteriori error bound can be computed for
the quadratic output in any MOR method, see <cit.>.
We denote the linear dynamical systems (<ref>) and (<ref>)
by H and H̅, respectively.
The error of the MOR for the quadratic output is measured in
the ℋ_2-norm.
The ℋ_2-norm of the system (<ref>) reads as
‖ H ‖_ℋ_2 = √( trace(B^⊤ Q B))
with the observability Gramian Q satisfying (<ref>).
Likewise, we obtain the ℋ_2-norm of the system (<ref>).
It holds that
‖ y - y̅‖_ℒ^∞ ≤ ‖ H - H̅‖_ℋ_2 ‖ u ⊗ u ‖_ℒ^2
using the norms of Lebesgue spaces in time domain.
The error bound can be computed directly by
‖ H - H̅‖_ℋ_2 =
√( trace( B^⊤ Q B + B̅^⊤Q̅B̅
- 2 B^⊤ Z B̅) ) .
Therein, the matrix Q̅∈ℝ^r × r satisfies the Lyapunov
equation (<ref>) associated to the ROM (<ref>).
The matrix Z ∈ℝ^n × r solves the Sylvester equation
A^⊤ Z + Z A̅ + N X N̅ = 0 ,
while X ∈ℝ^n × r represents the solution of
the Sylvester equation
A X + X A̅^⊤ + B B̅^⊤ = 0 .
Lyapunov equations and Sylvester equations can be solved numerically
either by direct methods or iterative methods.
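A direct (dense) evaluation of this bound can be sketched as follows in Python, assuming both the full and the reduced system are asymptotically stable (function name is ours):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

def h2_error_bound(A, B, N, Ar, Br, Nr):
    # Observability-type Gramians of the full and the reduced system.
    P  = solve_continuous_lyapunov(A, -B @ B.T)
    Q  = solve_continuous_lyapunov(A.T, -N @ P @ N)
    Pr = solve_continuous_lyapunov(Ar, -Br @ Br.T)
    Qr = solve_continuous_lyapunov(Ar.T, -Nr @ Pr @ Nr)
    # Coupling terms:  A X + X Ar^T + B Br^T = 0  and  A^T Z + Z Ar + N X Nr = 0.
    X = solve_sylvester(A, Ar.T, -B @ Br.T)
    Z = solve_sylvester(A.T, Ar, -N @ X @ Nr)
    val = (np.trace(B.T @ Q @ B) + np.trace(Br.T @ Qr @ Br)
           - 2.0 * np.trace(B.T @ Z @ Br))
    return np.sqrt(max(val, 0.0))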
§.§ Application to Stochastic Galerkin System
The second-order stochastic Galerkin system (<ref>)
and its internal energy (<ref>) is equivalent to
the first-order system (<ref>)
with quadratic output (<ref>).
The dissipation analysis of Section <ref> can be
applied to
(<ref>), (<ref>).
We obtain
Â^⊤N̂ + N̂Â =
[ 0 0; 0 - 2 D̂; ] .
The positive (semi-)definiteness of the matrix D̂ is equivalent
to the negative (semi-)definiteness of the
matrix (<ref>).
Thus the stochastic Galerkin system features the desired
dissipation inequality (<ref>)
and thus it is passive.
This property of the matrix (<ref>)
is related to the counterpart (<ref>).
We employ the MOR method from Section <ref> to the
high-dimensional system (<ref>)
with quadratic output (<ref>).
The balanced truncation technique preserves the asymptotic stability
of the FOM, i.e.,
each ROM is asymptotically stable again.
However, the balanced truncation technique does not preserve the
passivity with respect to the internal energy,
as demonstrated by a test example in Section <ref>.
Hence the matrix
T̅ := A̅^⊤N̅ + N̅A̅
is not negative (semi-)definite in general.
Let λ_max > 0 be the largest eigenvalue of T̅.
A shift of the spectrum via T̅ - λ_max I_r with
identity matrix I_r ∈ℝ^r × r yields a
negative semi-definite matrix.
Choosing R̅=0, S̅ = B̅^⊤N̅,
L̅ = λ_max I_r implies the dissipation inequality,
cf. (<ref>),
d/dt ( x̅^⊤N̅x̅ ) ≤ 2 u^⊤B̅^⊤N̅x̅ + λ_max x̅^⊤x̅
= 2 u^⊤B̅^⊤N̅x̅ + λ_max ‖x̅‖_2^2 .
The desired property would be the case of λ_max≤ 0.
Hence the magnitude of λ_max > 0 measures the loss of
passivity.
§ NUMERICAL RESULTS
As test example, we employ a mass-spring-damper system
from <cit.>.
Figure <ref> shows the configuration.
The system contains 4 masses, 6 springs, and 4 dampers, in total
q=14 physical parameters.
A single input u is supplied by an excitation at the lowest spring.
This test example was also used in <cit.>.
The mathematical model consists of n=4 second-order ODEs (<ref>).
The matrices M,K,D are symmetric as well as positive definite for
all positive parameters.
In the stochastic modelling, we replace the parameters by random variables
with independent uniform probability distributions, which vary
10% around their mean values.
Consequently, the PC expansions (<ref>) include the
(multivariate) Legendre polynomials.
We study two cases of total degree: two and three.
Table <ref> demonstrates the properties of the
resulting second-order stochastic Galerkin systems (<ref>).
In particular, the sparsity of the system matrices is specified
by the percentage of non-zero entries.
The stochastic Galerkin systems are asymptotically stable,
since the Galerkin projection preserves the definiteness of matrices.
Now we perform an MOR of the equivalent first-order
system (<ref>)
with quadratic output (<ref>)
using the balanced truncation technique from Section <ref>.
We solve the Lyapunov equations
(<ref>), (<ref>)
and the Sylvester equations (<ref>), (<ref>)
by direct methods of numerical linear algebra.
Figure <ref> (a) depicts the Hankel-type
singular values of the SVD (<ref>),
which rapidly decay to zero.
We compute the ROMs (<ref>) of dimension r=1,…,100.
The error of the MOR is measured in the relative ℋ_2-norm,
i.e., ‖ H - H̅‖_ℋ_2 / ‖ H ‖_ℋ_2,
see (<ref>).
The relative errors are shown for r ≤ 50
in Figure <ref> (b).
We observe that a high accuracy is achieved already for relatively
small reduced dimensions.
Furthermore, we examine the dissipation properties of the
ROMs (<ref>), as described in Section <ref>.
All reduced systems lose passivity,
because their matrices (<ref>)
are not negative (semi-)definite.
The maximum eigenvalues of the matrices are illustrated by
Figure <ref>.
The maxima tend to zero for increasing reduced dimension.
Yet the decay becomes slower for larger total polynomial degree.
It follows that the dissipation inequality (<ref>)
is valid for a small eigenvalue λ_max.
Finally, we present a comparison.
We reduce the stochastic Galerkin system (<ref>)
for polynomial degree two by the Arnoldi method,
which is a specific Krylov subspace technique, see <cit.>.
This scheme is a Galerkin-type MOR method, i.e., the
projection matrices satisfy V=W.
However, the asymptotic stability may be lost in this technique.
The Arnoldi method does not take any information about
the definition of the QoI into account.
We use a single (real) expansion point ω=1 in the complex
frequency domain,
because other real-valued choices ω = 10^k with
k ∈{-2,-1,1,2} cause worse approximations.
Figure <ref> (a) depicts the
relative ℋ_2-error of the internal energy for the ROMs
of dimension r ≤ 60.
Higher reduced dimensions produce larger errors due to
an accumulation of round-off errors in the orthogonalisation,
which is a well-known effect in the Arnoldi algorithm.
If an ROM (<ref>) is unstable, then the error is not computable
and thus omitted.
As expected, the accuracy of the Arnoldi method is not as good
as the accuracy of the balanced truncation.
Again the passivity is lost in all ROMs.
Figure <ref> (a) shows
the maximum eigenvalue of the matrices (<ref>).
We observe that these positive maxima do not decay,
even though small errors are achieved for reduced dimensions
55 ≤ r ≤ 60.
§ CONCLUSIONS
We applied a stochastic Galerkin projection to a second-order linear
system of ODEs including random variables.
The high-dimensional stochastic Galerkin system owns an internal
energy as quadratic output.
We performed an MOR of an equivalent first-order system of ODEs,
where the used balanced truncation method is specialised to approximate
a quadratic output.
However, the passivity of the dynamical systems with respect to the
internal energy may be lost in this reduction.
We proposed a concept to quantify the discrepancy of a non-passive
dynamical system to the passive case.
Numerical results of a test example demonstrated that this
discrepancy measure tends to zero for increasing reduced dimensions
in the balanced truncation method.
00
antoulas
A. C. Antoulas,
Approximation of Large-Scale Dynamical Systems (SIAM, Philadelphia, 2005).
beattie-etal
C. Beattie, V. Mehrmann, H. Xu, and H. Zwart,
Linear port-Hamiltonian descriptor systems,
Math. Control Signals Syst. 30, no. 4 (2018).
benner-goyal-duff
P. Benner, P. K. Goyal, and I. Pontes Duff,
Gramians, energy functionals, and balanced truncation for
linear dynamical systems with quadratic outputs,
IEEE Trans. Autom. Control 67, no. 2, 886-893 (2022).
freitas-etal
F. D. Freitas, R. Pulch, and J. Rommes,
Fast and accurate model reduction for spectral methods
in uncertainty quantification,
Int. J. Uncertain. Quantificat. 6, no. 3, 271-286 (2016).
lohmann-eid
B. Lohmann and R. Eid,
Efficient order reduction of parametric and nonlinear models
by superposition of locally reduced models,
in: Methoden und Anwendungen der Regelungstechnik,
edited by G. Roppenecker and B. Lohmann
(Shaker, Aachen, 2009).
inman
D. J. Inman,
Vibration and Control
(John Wiley & Sons Ltd, Chichester, 2006).
pulch-matcom
R. Pulch,
Model order reduction and low-dimensional representations for
random linear dynamical systems,
Math. Comput. Simulat. 144, 1-20 (2018).
pulch-jmi
R. Pulch,
Stability-preserving model order reduction for linear
stochastic Galerkin systems,
J. Math. Ind. 9, no. 10 (2019).
pulch2023
R. Pulch,
Stochastic Galerkin method and port-Hamiltonian form for
linear dynamical systems of second order,
arXiv:2306.11424v1 (2023).
schaft-jeltsema
A. van der Schaft and D. Jeltsema,
Port-Hamiltonian Systems Theory: An Introductory Overview
(Now Publishers, 2014).
sullivan:book
T. J. Sullivan,
Introduction to Uncertainty Quantification
(Springer, Cham, 2015).
willems
J. C. Willems,
Dissipative dynamical systems,
Eur. J. Control 13, 134-151 (2007).
|
http://arxiv.org/abs/2307.04582v1 | 20230710141955 | NANOGrav spectral index $γ=3$ from melting domain walls | [
"E. Babichev",
"D. Gorbunov",
"S. Ramazanov",
"R. Samanta",
"A. Vikman"
] | hep-ph | [
"hep-ph",
"astro-ph.CO",
"hep-th"
] |
NANOGrav spectral index γ=3 from
melting domain walls
E. Babichev^a, D. Gorbunov^b,c, S. Ramazanov^d, R. Samanta^d, A. Vikman^d
^aUniversité Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
^bInstitute for Nuclear Research of the Russian Academy of Sciences, 117312 Moscow, Russia
^cMoscow Institute of Physics and Technology, 141700 Dolgoprudny, Russia
^dCEICO, FZU-Institute of Physics of the Czech Academy of Sciences,
Na Slovance 1999/2, 182 00 Prague 8, Czech Republic
We discuss cosmic domain walls described by a tension red-shifting with the expansion of the Universe. These melting domain walls emit gravitational waves (GW) with the low-frequency spectral shape Ω_gw∝ f^2 corresponding to the spectral index γ=3 favoured by the recent NANOGrav 15 yrs data. We discuss a concrete high-energy physics scenario proposed in Refs. <cit.> which leads to such a melting domain wall network in the early Universe. This scenario involves a feebly coupled scalar field χ, which can serve as a promising dark matter candidate. We identify parameters of the model matching the GW characteristics observed in the NANOGrav data. The dark matter mass is pushed to the ultra-light range below 10^-11-10^-12 eV which is accessible through planned observations thanks to the effects of superradiance of rotating black holes.
§ INTRODUCTION AND SUMMARY
Recently several pulsar timing arrays (PTAs) such as NANOGrav <cit.>, EPTA (including InPTA) <cit.>, PPTA <cit.>, and CPTA <cit.> reported evidence for a common-spectrum signal in each dataset, with inter-pulsar angular correlations described by the Hellings-Downs (HD) curve <cit.>, pointing to a breakthrough discovery of a nHz stochastic gravitational-wave (GW) background. Although the signals from all the PTAs are in good agreement, in this article we shall focus on the NANOGrav 15 yrs data, because they are the most stringent and have the largest statistical significance. Though no clear hints on the origin of the observed signal have been presented, the NANOGrav 15 yrs data disfavor simple GW-driven models of supermassive black hole binaries (SMBHBs) that predict Ω_gw∝ f^2/3 at 2σ CL <cit.>. Nonetheless, statistical and environmental effects may lead to different predictions, consistent with the data <cit.>. On the contrary, investigating various GW sources of cosmological origin, NANOGrav reports that a power-law signal Ω_gw∝ f^1.2-2.4 is preferred at 1σ as a better fit to the data <cit.> [It is important to note that in 2020, NANOGrav reported a similar common-spectrum process in their 12.5 yrs dataset but without any evidence of HD correlation. Compared to the old data, which are better fitted with a nearly scale-invariant spectrum: Ω_gw∝ f^-1.5-0.5 at 1σ, the 15 yrs data predict a much steeper spectrum, ruling out stable cosmic strings, one of the most anticipated primordial GW sources for PTAs <cit.>.]. Motivated by this, we explore the second possibility in this article and focus on GWs from cosmic domain walls <cit.>.
Compared to previous works, which fit constant tension domain walls to the NANOGrav 15 yrs signal <cit.>, we consider so-called melting domain walls characterized by a time-dependent tension, which drops as a cube of the Universe temperature <cit.>. Such domain walls are cosmology friendly, as their energy density redshifts fast enough not to overclose the Universe. They naturally arise in a well-motivated renormalizable particle physics scenario involving feebly coupled scalar field (Section 2) <cit.>.
These melting domain walls
serve as a source of GWs, whose spectrum differs from the spectrum produced by constant tension domain walls (and other known sources).
The larger signal at small frequencies stems from the fact that the network of melting domain walls efficiently emits GWs over an extended period of time: while the most energetic GWs are produced at the network formation, later emission from somewhat melted domain walls feeds into the low energy tail of the spectrum. This contrasts sharply with the constant tension case, where GWs are mainly emitted at the end of wall evolution right before dissolving, e.g., due to slight breaking of Z_2-symmetry. Note that, there is no contradiction with causality considerations <cit.>, which typically lead to Ω_gw∝ f^3. Indeed, the standard steeper shape assumes a finite operation of the GW source, typically shorter than the Hubble time. In contrast, in the scenario <cit.> we discuss here, GWs are efficiently produced by the time-extended source over many Hubble time intervals.
Remarkably, the behaviour Ω_gw∝ f^2 better fits NANOGrav 15 yrs data compared to[At the same time, low-frequency GW emission from melting cosmic strings has a shape Ω_gw∝ f^4 <cit.>, which conflicts with NANOGrav data.] Ω_gw∝ f^3. It is conventional to parameterise the PTA GW signal as
Ω_ GW(f)=Ω_yr(f/f_ yr)^(5-γ),
with γ being the spectral index and f_yr= 1 yr^-1≃ 32 nHz. The NANOGrav best-fit value of the spectral index reads γ =3.2± 0.6. In Section 3, we also identify the values of the model parameters yielding the best-fit value Ω_ yr=5.8× 10^-8. To accomplish this, one should assume
f_peak≃ f_yr, where f_peak is the predicted peak frequency of GWs, so that Ω_yr≃Ω_gw, peak. Using the relations
between f_peak and Ω_gw, peak and the model coupling constants,
we can pinpoint the particle physics scenario underlying melting domain walls.
In particular, the scalar field constituting domain walls should be extremely
weakly coupled. Given that the model constants are confined to a rather narrow range already with the current NANOGrav sensitivity, a future increase of PTA sensitivity to GWs will allow one to make decisive conclusions regarding the melting domain wall interpretation of the signal. Note also that feeble couplings involved in the interpretation make the scalar field comprising domain walls a suitable dark matter candidate, provided that its mass is confined to the ultra-light range. For such low masses superradiance <cit.> plays an important role by triggering instability of rotating black holes with astrophysical masses <cit.>.
This leads to potentially observable spin-down of rotating black holes and to stochastic GW background due to gravitational radiation of the bosonic condensate forming around black holes, see e.g. Ref. <cit.>.
§ BRIEF OVERVIEW OF MELTING DOMAIN WALLS
We start with the Z_2-symmetric model of real scalar field χ,
which interacts through the portal coupling with a scalar multiplet ϕ from the primordial thermal bath:
S=∫ d^4 x √(-g)[(∂_μχ)^2/2 -M^2_χχ^2/2-λ_χχ^4/4 +g^2 χ^2 |ϕ|^2/2] ,
where M_χ, λ_χ, and g^2 are the bare mass, quartic self-interaction constant of the field χ, and portal coupling constant, respectively <cit.>. We assume that particles ϕ are relativistic at the times of interest, which fixes the variance of the field ϕ to be
⟨ |ϕ|^2 ⟩ =N T^2/12 ,
where N counts the number of degrees of freedom associated with ϕ.
Let us fix the sign of the portal coupling constant g^2 as
g^2>0 .
It induces instability in the two-field system, which is tamed, provided that the following condition is obeyed:
β≡λ_χ/g^4≥1/λ_ϕ ,
where λ_ϕ is the quartic self-interaction constant of the multiplet ϕ. We will often use the constant β instead of self-interaction constant λ_χ in what follows. Consequently, the effective potential characterizing the field χ exhibits spontaneous symmetry breaking leading to the non-zero temperature-dependent expectation value:
⟨χ⟩ =±√(Ng^2 T^2/12λ_χ -M^2_χ/λ_χ) .
In the expanding Universe, this temperature-dependence induces time-dependence, which is crucial for our further discussions. At some (lower) temperature the bare mass term becomes relevant, and the symmetry restores with ⟨χ⟩=0, i.e., the inverse phase transition happens. However, at most times of interest we assume the bare mass M_χ negligible; it will be included only when considering dark matter implications of the model.
Spontaneous breaking of Z_2-symmetry leads to the formation of domain walls, provided that the background field χ is set to zero, i.e., χ=0, prior to falling into the minima of symmetry breaking potential.
This condition can be achieved e.g. by the non-minimal coupling to gravity ∼ξχ^2 R leading to the super-Hubble mass during inflation for ξ≳ 1; at the same time R ≈ 0 during the radiation-dominated stage, and does not affect dynamics of the system, see Ref. <cit.> for details. Domain walls are often unwelcome in cosmology because they quickly begin to dominate the evolution of the Universe, in contradiction with observational data. This problem is absent in our case, exactly due to the time dependence of the expectation value ⟨χ⟩, as it will become clear shortly.
The Universe temperature at the time of domain wall formation is defined by the balance of the Hubble friction and
the tachyonic thermal mass; it is estimated as
T_i ≃√(N) gM_Pl/√(B g_* (T_i)) ,
where g_* (T) counts the number of relativistic degrees of freedom at the temperature T, and M_Pl≈ 2.44 · 10^18 GeV is the reduced Planck mass. The constant B here takes into account the finite duration of the roll of the field χ to the minimum of its potential; B ≃ 1 for an infinitely fast roll,
but generically it takes values in the range 1 ≲ B ≲ 10^3, see Ref. <cit.>.
The domain wall tension (mass per unit area) is given by
σ =2√(2 λ_χ)·⟨χ⟩^3 /3 .
The energy density of domain walls in the scaling regime with one (a few) domain wall(s) per horizon volume characterized by the size H^-1,
where H is the Hubble rate, is estimated as
ρ_wall∼σ H .
Using Eq. (<ref>), where we neglect the bare mass, from Eqs. (<ref>) and (<ref>) one finds that the energy density of melting domain walls redshifts as ρ_walls∝ 1/a^5 at the radiation-dominated stage, which is in contrast to the scenario with constant tension domain walls yielding ρ∝ 1/a^2. Hence, the energy density of melting walls drops faster than the energy density of radiation, and there is no domain wall problem in the Universe.
§ GWS FROM MELTING DOMAIN WALLS
Numerical estimation of GW emission by a network of domain walls
has been performed in Ref. <cit.>. This has been done in the case of constant
tension domain walls, but we can readily use some of the results
obtained there to the case of melting domain walls. Despite
strong differences, in both cases, most energetic GWs
are emitted within a short time interval: close to the moment of the domain
wall formation in the case of melting domain walls and near the time of
the network dissolution for constant tension domain walls. The properties
of GWs at the spectrum peak are defined by the wall tension and Hubble rate
in this short time interval. In particular, the peak frequency of emission
is estimated by F ≃ H <cit.>. Consequently, the present-day peak frequency is estimated as <cit.>
f_peak≡ f_peak (t_0) ≃ H_i ·a_i/a_0∝ T_i ,
which gives upon substituting in Eq. (<ref>):
f_peak≃ 6 nHz ·√(N/B)·g/10^-18·(100/g_* (T_i))^1/3 .
Interestingly, the numerical simulations of Ref. <cit.> have shown that the Einstein quadrupole formula well captures the peak energy of GWs.
Including numerical corrections, one can write then for the fractional energy density
at the emission time
Ω_gw, peak (t_i) ≈λ_χϵ_gw A^2 ⟨χ⟩^6_i/27π H^2_i M^4_Pl∝ T^2_i ,
where the coefficients ϵ_gw and A account for the efficiency
of GW emission and scaling, correspondingly; one has ϵ_gw A^2 ≈ 0.5 <cit.>. Hence, at present, the energy density of GWs is given by
Ω_gw, peak h^2_0 ≈ 1.34 · 10^-5×(100/g_* (T_i))^1/3Ω_gw, peak (t_i) ,
where h_0=0.67 is the reduced Hubble constant <cit.>. Combining Eqs. (<ref>), (<ref>), (<ref>), and Eq. (<ref>), and using definition (<ref>), we obtain <cit.>
Ω_gw, peak h^2_0 ≃ 4 · 10^-14· N^4/(B β^2) ·(100/g_* (T_i))^7/3 .
To discriminate between GWs emitted by melting domain walls and other sources, one should consider the GW spectrum. While this requires numerical simulations, we can estimate the low-frequency part of the spectrum and show that it is distinct from the spectrum of constant tension walls. For this purpose, we observe that the low-frequency part of the spectrum is sourced by GW emission at the late times t>t_i. This is evident from Eq. (<ref>),
where one should replace T_i by T(t)<T_i.
The peak energy density can be estimated from Eq. (<ref>), where one again replaces T_i with T(t). We conclude that <cit.>
Ω_gw h^2_0 (f<f_peak) =Ω_gw, peak h^2_0 ·( T(t)/T_i)^2= Ω_gw, peak h^2_0 ·( f/f_peak)^2 .
This is in contrast to the result obtained in the case of
constant tension domain walls and many other sources,
e.g., first-order phase transitions and cosmic strings, giving Ω_gwh^2_0∝ f^3. Note that causality is not violated in our case: indeed, according
to the discussion above, low-frequency GW emission still fulfills F ∼ H(t) and hence follows from on-horizon
dynamics of melting domain walls. Note also that the causality argument suggests the low-frequency tail of GW emission produced around the time of domain wall formation t ≃ t_i is steep enough and does not affect our estimate (<ref>). In this work, we are not much concerned about the high-frequency part of the spectrum, assuming that it is outside of the domain probed by NANOGrav (see below). We assume that it is not different from the case of constant tension walls, i.e., there is a power law decrease Ω_gw∝ 1/f at f>f_peak, which should be followed by the exponential suppression at frequencies corresponding to the inverse width of domain walls <cit.>.
Figure <ref> demonstrates that the predicted GW signal is compatible with the NANOGrav signal for the set of theoretically acceptable values of model parameters.
Below we explain the notations used and the assumed choice of model constants. GW spectral energy density
associated with the NANOGrav, or more generally, with the PTA signal is conventionally expressed as
Ω_gw(f)=2π^2 f^2h^2_c(f)/(3 H_0^2) ,
where h_c(f) is the characteristic strain parameterised as
h_c(f)=A(f/f_yr)^(3-γ)/2,
with A and γ being the amplitude and the spectral index, respectively; recall that f_ yr= 1 yr^-1≃ 32 nHz. Note that combining Eqs. (<ref>) and (<ref>), one gets Eq. (<ref>), where Ω_yr=2π^2A^2f_yr^2/(3 H_0^2). The best fit to the NANOGrav signal is provided by the values A≃ 6.4^+4.2_-2.7× 10^-15 and γ=3.2 ± 0.6. The latter agrees well with the model prediction (<ref>).
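As a quick numerical cross-check (a Python sketch; the Hubble constant corresponds to the value h_0 = 0.67 quoted above, and constants are rounded), the best-fit amplitude indeed translates into Ω_yr close to 5.8 × 10^-8:

import numpy as np

A_fit = 6.4e-15                          # best-fit characteristic-strain amplitude
f_yr  = 1.0 / (365.25 * 24 * 3600.0)     # 1 yr^-1 in Hz, about 32 nHz
H0    = 0.67 * 100.0e3 / 3.0857e22       # H_0 = 67 km/s/Mpc in 1/s
Omega_yr = 2.0 * np.pi**2 * A_fit**2 * f_yr**2 / (3.0 * H0**2)
print(Omega_yr)                          # ~5.7e-8, consistent with the quoted 5.8e-8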
To achieve the best fit value of A, which corresponds to rather large GW energy density Ω_yr≃ 5.8 × 10^-8, we first set f_peak≃ f_yr, so that Ω_gw, peak≃Ω_yr. The reason for this choice will become clear a posteriori. Using Eqs. (<ref>) and (<ref>), one can relate the temperature at domain wall formation to the peak frequency:
T_i ≃ 1.2 GeV ·(f_peak/f_yr) ·(100/g_* (T_i))^1/6 .
Hence, for f_peak≃ f_yr, one has T_i ≃ 1.2 GeV and g_* (T_i) ≃ 75. Now we fix the constants entering the GW energy density (<ref>):
β = 1 , B = 1 , N=24 .
Finally, using this and Eq. (<ref>), we can fix the constant g, i.e.,
g = 10^-18 .
This implies a tiny portal coupling g^2=10^-36,
while β=1 translates into the self-interaction constant λ_χ=10^-72. Such tiny coupling constants are not unfamiliar in physics, and they are characteristic for axion-like particles. Note also
that the constants β and B are chosen to be close to minimally allowed ones, see Eq. (<ref>), to achieve the observed value Ω_yr. This also explains the choice f_peak≃ f_yr, because for f_peak≫ f_yr, one would need to assume too large Ω_gw, peak≫Ω_yr. It is important to stress that one can accommodate larger values of parameters β and B by a moderate increase of the number of degrees of freedom N. Indeed, increasing β by factor ξ requires only an increase of N by smaller factor ξ^1/2. On the other hand, a change of parameter B by factor ζ requires a corresponding increase of N by a much smaller factor ζ^1/4.
We have assumed that the field ϕ is relativistic at the times when relevant GWs are emitted, which are sufficiently close to the BBN epoch. Thus, if the field ϕ is still relativistic at the temperatures T ≲ 1 MeV, one runs the risk of spoiling a well-established picture of light element primordial abundance. There are two ways of avoiding this. One is to assume that the
particles ϕ decoupled from primordial plasma at very early times,
and thus contribute insignificantly to the effective number of neutrino species N_eff. In that case, however, the effective temperature T_ϕ describing the system of particles ϕ is lower than the Universe temperature. This tends to decrease GW energy density according to Eq. (<ref>), but the decrease can be (partially) compensated by the sharp change of degrees of freedom number g_* (T) around QCD phase transition. Another way to handle the problem is to assume that the particles ϕ have mass m_ϕ in the MeV range, i.e.,
1 MeV ≪ m_ϕ≪ 1 GeV .
That is, the particles ϕ become non-relativistic sometime before BBN and then decay into SM species in one or another way. In that case, one can also consider the scenario with the effective temperature T_ϕ higher than the Universe temperature T.
§ IMPLICATIONS FOR DARK MATTER
The field χ being very feebly coupled to the primordial thermal bath is a proper dark matter candidate. This is despite the fact that for the
portal constant g^2 ≃ 10^-36, neither freeze-out nor freeze-out production mechanisms are operating. Yet it is possible to generate the right dark matter abundance even with this tiny coupling constant. We briefly comment on two production mechanisms below and identify the mass M_χ as a function of GW parameters assuming that the field χ constitutes all dark matter.
* Dark matter production via the direct phase transition. Oscillations of the field χ around the minima of its potential naturally feed into dark matter. These oscillations start at the times t ≃ t_i, when the domain wall network is created, and continue till present unless the particles χ are unstable. In that case, the observed dark matter abundance is achieved for extremely small M_χ:
M_χ≃ 6.5 · 10^-17 eV ·(f_peak/30 nHz) ·(g_* (T_i)/100)^1/6·√(10^-8/Ω_gw, peak· h^2_0) .
* Dark matter via inverse phase transition <cit.>, cf. Refs. <cit.>. Dark matter production
also occurs when there is an efficient decay channel for the aforementioned oscillations and the field χ settles to the minimum of its potential. Coherent oscillations are nevertheless produced at the inverse phase transition, because symmetry restoration is a non-adiabatic process.
In that case, one has
M_χ ≃ 10^-12 eV · B^9/20·(g_* (T_sym)/100)^1/5·(g_* (T_i)/100)^1/20·(m_ϕ/10 MeV)^1/2×
(f_peak/30 nHz)^6/5·(10^-8/Ω_gw, peak h^2_0)^3/20 ,
where T_sym is the Universe temperature at the inverse phase transition.
We observe that in both cases GW parameters favoured by NANOGrav data imply ultra-light dark matter masses M_χ. Notably, with these values of M_χ, our scenario predicts superradiance instability of rotating black holes with astrophysical masses <cit.>. This suggests a complementary way of testing the model, in particular, the future LISA observations will probe the masses of dark matter particles corresponding to the direct phase transition, while the LIGO data may be used to test the masses involved in the inverse phase transition <cit.>.
§ DISCUSSIONS
We have shown that the properties of GWs emitted by the network of
melting domain walls are consistent with the signal detected at PTAs.
Since melting domain walls do not overclose the Universe and
the constituent field χ serves as a suitable dark matter candidate,
they are interesting objects deserving further investigation. Perhaps the most important prospect for future studies is the numerical study of melting domain wall evolution and eventually a more precise determination
of GW parameters, i.e., peak frequency, energy density,
and the spectral shape including the high-frequency range. In particular, the formation of melting walls and their settling into the scaling regime are yet to be better understood. This is important given that the most energetic GW signals come from the earliest stages of the wall network evolution. With the current estimates of GW parameters, the NANOGrav signal
is fitted in a very narrow range of model constants. Therefore, with more detailed information on the signal/improved predictions of GW properties, one will have a chance to rule out the proposed interpretation of the GW signal or establish it on firmer grounds.
On a more theoretical side, it is interesting to embed the field ϕ, with masses in a phenomenologically interesting range (<ref>), into a realistic particle physics scenario. While in the present work, we assumed that ϕ is in equilibrium with primordial plasma, it is worth investigating situations, where ϕ decouples from plasma prior to domain wall formation or has never reached thermal equilibrium.
§ ACKNOWLEDGMENTS
EB acknowledges support of ANR grant StronG (ANR-22-CE31-0015-01).
DG acknowledges support of the scientific program of the National Center for Physics and Mathematics, section 5 "Particle Physics and Cosmology", stage 2023-2025. SR acknowledges the European Structural and Investment Funds and the Czech Ministry of Education, Youth and Sports (Project CoGraDS -CZ.02.1.01/0.0/0.0/15003/0000437). RS acknowledges the project MSCA-IF IV FZU - CZ.02.2.69/0.0/0.0/20 079/0017754, European Structural and Investment Fund, and the Czech Ministry of Education, Youth and Sports. AV was supported by the Czech Science Foundation (GAČR), project 20-28525S and is thankful to Enrico Barausse for discussions.
99
Babichev:2021uvl
E. Babichev, D. Gorbunov, S. Ramazanov and A. Vikman,
JCAP 04 (2022) no.04, 028
[arXiv:2112.12608].
Ramazanov:2021eya
S. Ramazanov, E. Babichev, D. Gorbunov and A. Vikman,
Phys. Rev. D 105 (2022) no.6, 063530;
[arXiv:2104.13722].
ngr1
G. Agazie et al. [NANOGrav],
Astrophys. J. Lett. 951, no.1, L8 (2023);
[arXiv:2306.16213].
ngr2
G. Agazie et al. [NANOGrav],
Astrophys. J. Lett. 951, no.1, L9 (2023);
[arXiv:2306.16217].
epta1
J. Antoniadis, P. Arumugam, S. Arumugam, S. Babak, M. Bagchi, A. S. B. Nielsen, C. G. Bassa, A. Bathula, A. Berthereau and M. Bonetti, et al.
[arXiv:2306.16214].
epta2
J. Antoniadis, S. Babak, A. S. Bak Nielsen, C. G. Bassa, A. Berthereau, M. Bonetti, E. Bortolas, P. R. Brook, M. Burgay and R. N. Caballero, et al.;
[arXiv:2306.16224].
epta3
J. Antoniadis, P. Arumugam, S. Arumugam, P. Auclair, S. Babak, M. Bagchi, A. S. B. Nielsen, E. Barausse, C. G. Bassa and A. Bathula, et al.;
[arXiv:2306.16227].
ppta1
D. J. Reardon, A. Zic, R. M. Shannon, G. B. Hobbs, M. Bailes, V. Di Marco, A. Kapur, A. F. Rogers, E. Thrane and J. Askew, et al.
Astrophys. J. Lett. 951, no.1, L6 (2023);
[arXiv:2306.16215].
ppta2
A. Zic, D. J. Reardon, A. Kapur, G. Hobbs, R. Mandow, M. Curyło, R. M. Shannon, J. Askew, M. Bailes and N. D. R. Bhat, et al.;
[arXiv:2306.16230].
cpta
H. Xu, S. Chen, Y. Guo, J. Jiang, B. Wang, J. Xu, Z. Xue, R. N. Caballero, J. Yuan and Y. Xu, et al.
Res. Astron. Astrophys. 23, no.7, 075024 (2023);
[arXiv:2306.16216].
hd
R. w. Hellings and G. s. Downs,
Astrophys. J. Lett. 265, L39-L42 (1983).
ngr3
G. Agazie et al. [NANOGrav],
[arXiv:2306.16220].
ngr4
A. Afzal et al. [NANOGrav],
Astrophys. J. Lett. 951, no.1, L11 (2023);
[arXiv:2306.16219].
bhn1
A. Sesana, A. Vecchio, and C. N. Colacino, Mon. Not.
Roy. Astron. Soc. 390, 192 (2008), [arXiv:0804.4476].
bhn2
B. Kocsis and A. Sesana, [arXiv:1002.0584].
cs1
S. Blasi, V. Brdar and K. Schmitz,
Phys. Rev. Lett. 126, no.4, 041305 (2021);
[arXiv:2009.06607].
cs2
J. Ellis and M. Lewicki,
Phys. Rev. Lett. 126, no.4, 041304 (2021);
[arXiv:2009.06555].
cs3
R. Samanta and S. Datta,
JHEP 05, 211 (2021);
[arXiv:2009.13452].
Zeldovich:1974uw
Y. B. Zeldovich, I. Y. Kobzarev and L. B. Okun,
Zh. Eksp. Teor. Fiz. 67 (1974), 3-11
SLAC-TRANS-0165.
dom1
Y. Gouttenoire and E. Vitagliano,
[arXiv:2306.17841].
dom2
S. Blasi, A. Mariotti, A. Rase and A. Sevrin,
[arXiv:2306.17830].
dom3
X. F. Li,
[arXiv:2307.03163].
dom4
Y. M. Wu, Z. C. Chen and Q. G. Huang,
[arXiv:2307.03141].
dom5
X. K. Du, M. X. Huang, F. Wang and Y. K. Zhang,
[arXiv:2307.02938].
dom6
B. Q. Lu and C. W. Chiang,
[arXiv:2307.00746].
dom7
B. Barman, D. Borah, S. Jyoti Das and I. Saha,
[arXiv:2307.00656].
dom8
Y. Bai, T. K. Chen and M. Korwar,
[arXiv:2306.17160].
dom9
N. Kitajima, J. Lee, K. Murai, F. Takahashi and W. Yin,
[arXiv:2306.17146].
dom10
L. Bian, S. Ge, J. Shu, B. Wang, X. Y. Yang and J. Zong,
[arXiv:2307.02376].
Vilenkin:1981zs
A. Vilenkin,
Phys. Rev. D 23 (1981), 852-857.
Hiramatsu:2013qaa
T. Hiramatsu, M. Kawasaki and K. Saikawa,
JCAP 02 (2014), 031;
[arXiv:1309.5001].
Cai:2019cdl
R. G. Cai, S. Pi and M. Sasaki,
Phys. Rev. D 102 (2020) no.8, 083528;
[arXiv:1909.13728].
Hook:2020phx
A. Hook, G. Marques-Tavares and D. Racco,
JHEP 02 (2021), 117;
[arXiv:2010.03568].
Durrer:2003ja
R. Durrer and C. Caprini,
JCAP 11 (2003), 010;
[arXiv:astro-ph/0305059].
Franciolini:2023wjm
G. Franciolini, D. Racco and F. Rompineve,
[arXiv:2306.17136].
Planck:2018vyg
N. Aghanim et al. [Planck],
Astron. Astrophys. 641 (2020), A6
[arXiv:1807.06209].
Emond:2021vts
W. T. Emond, S. Ramazanov and R. Samanta,
JCAP 01 (2022) no.01, 057;
[arXiv:2108.05377].
Phinney:2001di
E. S. Phinney,
[arXiv:astro-ph/0108028].
Babichev:2020xeg
E. Babichev, D. Gorbunov and S. Ramazanov,
JCAP 08 (2020), 047;
[arXiv:2004.03410].
Ramazanov:2020ajq
S. Ramazanov, F. R. Urban and A. Vikman,
JCAP 02 (2021), 011;
[arXiv:2010.03383].
zeldovich1
Y. B. Zel'dovich Pis'ma Zh. Eksp. Teor. Fiz. 14 (1971) 270 [JETP
Lett. 14, 180 (1971)].
zeldovich2
Y. B. Zel'dovich Zh. Eksp. Teor. Fiz 62 (1972) 2076 [Sov.Phys.
JETP 35, 1085 (1972)].
Starobinsky:1973aij
A. A. Starobinsky,
Sov. Phys. JETP 37 (1973) no.1, 28-32.
Arvanitaki:2009fg
A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper and J. March-Russell,
Phys. Rev. D 81 (2010), 123530;
[arXiv:0905.4720].
Brito:2015oca
R. Brito, V. Cardoso and P. Pani,
Lect. Notes Phys. 906 (2015), pp.1-237
2020,
[arXiv:1501.06570].
ska
A. Weltman, P. Bull, S. Camera, K. Kelley, H. Padmanabhan, J. Pritchard, A. Raccanelli, S. Riemer-Sørensen, L. Shao and S. Andrianomena, et al.
Publ. Astron. Soc. Austral. 37, e002 (2020);
[arXiv:1810.02680].
gaia
J. Garcia-Bellido, H. Murayama and G. White,
JCAP 12, no.12, 023 (2021);
[arXiv:2104.04778].
mras
A. Sesana, N. Korsakova, M. A. Sedda, V. Baibhav, E. Barausse, S. Barke, E. Berti, M. Bonetti, P. R. Capelo and C. Caprini, et al.
Exper. Astron. 51, no.3, 1333-1383 (2021);
[arXiv:1908.11391].
lisa
P. Amaro-Seoane et al. [LISA],
[arXiv:1702.00786].
decigo
S. Kawamura, T. Nakamura, M. Ando, N. Seto, K. Tsubono, K. Numata, R. Takahashi, S. Nagano, T. Ishikawa and M. Musha, et al.
Class. Quant. Grav. 23, S125-S132 (2006).
bbo
K. Yagi and N. Seto,
Phys. Rev. D 83, 044011 (2011)
[erratum: Phys. Rev. D 95, no.10, 109901 (2017)];
[arXiv:1101.3940].
Brito:2017zvb
R. Brito, S. Ghosh, E. Barausse, E. Berti, V. Cardoso, I. Dvorkin, A. Klein and P. Pani,
Phys. Rev. D 96 (2017) no.6, 064050;
[arXiv:1706.06311].
|
http://arxiv.org/abs/2307.06076v1 | 20230712105429 | Model-Free Control Design for Feedback-Linearizable SISO Systems | [
"Karthik Shenoy",
"Akshit Saradagi",
"Ramkrishna Pasumarthy",
"Vijaysekhar Chellaboina"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Data-driven control has gained significant attention in recent years, particularly regarding feedback linearization of nonlinear systems. However, existing approaches face limitations when it comes to implementing them on hardware. The main challenges include the need for very small sampling times, which strain hardware capabilities, and the requirement of an initial open-loop data set, which can be impractical for stabilizing unstable equilibrium points. To address these issues, we propose a two-stage model-free approach that combines a high-gain observer and a dynamic controller. This eliminates the hardware implementation difficulties mentioned earlier. The high-gain observer acts as a robust state estimator, offering superior noise attenuation and lower computational costs, crucial factors for digital hardware implementation. Unlike data-driven methods, our design's stability and performance depend on a tunable software parameter, simplifying digital implementation without overburdening hardware resources. Experimental results on a Twin Rotor system demonstrate the effectiveness of our approach compared to the state-of-the-art data-driven method.
§ INTRODUCTION
Traditional control techniques often rely on accurate system models
which are either based on first principles or system identification methods, where models are derived from input-output data. With systems becoming increasingly large-scale and processes getting more complex, designing controllers using physical models (even when available) may become intractable. When rich sensor data from such systems is available, the control laws can be designed directly from the data, in the process obviating the need for physical models and explicit model identification. Model-free methods have previously been employed in control, in the form of adaptive control <cit.>, iterative learning control <cit.>, PID tuning <cit.>, <cit.> to name a few.
Data-driven control is yet another technique that belongs to the spectrum of model-free approaches and in the recent years, efforts have been made towards deriving traditional control laws such as state feedback <cit.>, MPC <cit.>, LQR <cit.>, minimum-energy control <cit.>, event-triggered control <cit.>, and even verifying certain dissipativity properties <cit.> directly from the measured data, without the need of explicit model identification. The aforementioned approaches require persistently exciting data in order to uniquely identify the system and perform control design. The authors in <cit.> provide a framework to deal with data that need not be persistently exciting.
While much of this literature has been devoted to data-driven control of linear systems, very few results have been reported for the case of nonlinear systems <cit.>, <cit.>. An interesting result is presented in <cit.> for the class of feedback linearizable systems, where the authors use a data-driven estimator together with a dynamic control law.
The dynamic controller is the key element in the design that makes the controller model-free. A limitation of the method proposed in <cit.> is the dependency of the boundedness and convergence properties of the states on the sampling time, which could place stringent requirements on the hardware used in the digital implementation of the controller.
In this article, we propose a model-free approach that employs a high-gain observer as the state estimator, which has no such critical dependence on a hardware parameter and offers ease of practical implementation. The main contributions of this article are summarized as follows:
* We present a model-free approach for designing stabilizing controllers for (partially-) feedback linearizable systems, using high-gain observers. We show that the proposed method is more suitable for digital implementation and with respect to noise attenuation, compared to the recent data-driven methods <cit.>. The proposed model-free approach does not require any prior open-loop data collection, which provides it with a distinct advantage over data-driven approaches, when the hardware plants are unstable.
* Our design approach has two key stages. The first stage is the estimation stage, which comprises of a high-gain observer, which is a robust non-linear state estimator (see <cit.>,<cit.>,<cit.>) capable of providing state estimates without taxing the hardware resources (sampling rate and computational cost). The second stage uses the dynamic controller presented in <cit.>, which is simple and elegant, as the required feedback linearizing input can be computed dynamically by adjusting just one tunable parameter (the controller gain).
* We present experimental validation of the proposed method using a twin-rotor system, without any prior knowledge of its dynamics. The high-gain observer combined with the dynamic controller stabilizes the yaw-angle of a twin rotor, and tracks the step input robustly, in the presence of large sensor noise.
* We compare the method proposed in the article with the data-driven technique in <cit.> experimentally, in terms of the hardware resources utilized (sampling rate and computational costs) and robustness in the presence of sensor noise, which are crucial in hardware digital implementation. The computational cost for the high-gain observer stage is shown to be far lower compared to the data-driven estimator in <cit.>, while achieving the same performance and noise attenuation.
The rest of the article is organized as follows: In Section <ref> we briefly present the concepts of feedback linearization and the high-gain observer. In Section <ref>, the approximate discrete-time plant models are derived and a stabilizing control law is designed. We combine the dynamic controller and the high-gain observer and present the main results of the article in Section <ref>. In Section <ref> we critically review our results, comparing it with the results proposed in <cit.>.
Experimental validation and comparison are provided in Section <ref> using a twin-rotor MIMO system.
§ PRELIMINARIES AND BACKGROUND
§.§ Notation
ℝ represents the set of real numbers and ℝ^+,ℝ^++ represent the sets of non-negative and positive real numbers, respectively. ℝ^n is the space of all real n-dimensional vectors. All the estimated states are represented with a hat on top, for example, the estimate of state x as x̂. col{x_1,…, x_n} would represent the n×1 column vector with entries x_1… x_n and diag(x_1,…,x_n) represents a diagonal matrix with the entries x_1,… x_n. The 2-norm of a vector will be represented by ‖x‖. The Lie-derivative of a function h(x): ℝ^n→ℝ along a vector field f(x):ℝ^n→ℝ^n is given by L_fh(x)=∂ h(x)/∂ x· f(x). The big-O notation given by f(x)=O_x(K) implies there exists some H,K∈ℝ^+ such that for all k∈[0,T], we have ‖f(x,k)‖≤ HK‖x‖. λ_min(A) represents the smallest eigenvalue of A.
§.§ Feedback Linearization
Feedback linearization is a technique in which a nonlinear system can be transformed into a linear system via a proper choice of a nonlinear transformation and a state feedback control law.
Feedback linearization techniques have found wide variety of applications, especially in the control of aerospace and robotic systems. More literature on feedback linearization can be found in <cit.> and <cit.>. Consider the single-input-single-output nonlinear system in its normal form:
ẇ =f_0(w,x)
ẋ =A_cx+B_c[a(w,x)+b(w,x)u]
y =C_cx
where x∈ℝ^ρ, ρ is the relative degree of the system for a given output. w are the internal states, which are not observable from the given output. a(x,w) and b(x,w) are nonlinear functions from ℝ^n→ℝ (which later on are assumed to be unknown). A_c, B_c, C_c are in the Brunovsky canonical form representation of a chain of ρ integrators. Now by choosing a control law of the form u=b(w,x)^-1(v(x)-a(w,x)), where v(x) is any linear control law, we can linearize the input-output dynamics to obtain:
ẇ =f_0(w,x)
ẋ =A_cx+B_cv(x)
y =C_cx.
The zero dynamics of (<ref>) is given by ẇ=f_0(w,0).
We assume that the zero dynamics of the system is asymptotically stable. This is to ensure that the internal dynamics, which is uncontrollable, is asymptotically stable.
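For illustration, the feedback-linearizing input for a relative-degree-two system can be sketched as follows (Python); the particular a(w,x) and b(w,x) below are hypothetical placeholders, since in the remainder of the article these functions are treated as unknown.

```python
import numpy as np

def feedback_linearizing_input(x, k=(2.0, 3.0)):
    """u = b(x)^{-1} (v(x) - a(x)) with the linear outer law v(x) = -k1*x1 - k2*x2."""
    a = -np.sin(x[0]) - 0.5 * x[1]   # illustrative drift term (not from this article)
    b = 1.0 + 0.5 * np.cos(x[0])     # illustrative input gain, bounded away from zero
    v = -k[0] * x[0] - k[1] * x[1]   # stabilizing law for the chain of integrators
    return (v - a) / b
```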
§.§ The High-Gain Observer
The high-gain observer, presented in <cit.>, <cit.>, <cit.>, is a non-linear robust state estimator. The main features of the high-gain observer are that the estimation errors decay rapidly towards small values. Additionally, it is quite robust with respect to model uncertainties. The dynamics of the high-gain observer, for system (<ref>), is defined as follows:
ẋ̂̇=Ax̂+Bϕ_0(x̂,u)+H(y-Cx̂)
where x̂∈ℝ^ρ and H=col{α_1/ϵ, α_2/ϵ^2, …,
α_ρ/ϵ^ρ}, ϕ_0(x̂,u) is a nominal model for a(w,x)+b(w,x)u and is locally Lipschitz in (x,u) over the domain of interest and globally bounded in x. ϵ∈ℝ^++ and the positive constants α_i are chosen such that the polynomial:
s^ρ+α_1s^ρ-1+…+α_ρ-1s+α_ρ
is Hurwitz. Note that ϵ is the observer time constant.
The high-gain observer can also be used as a disturbance estimator, by treating the disturbance as an additional state. The dynamics of the extended high-gain observer is given by:
[ ẋ̂̇; ẋ̂̇_ρ+1 ] =A̅[ x̂; x̂_ρ+1 ]+B̅ϕ_0(x̂,x̂_ρ+1,u)
+H̅(y-C̅[ x̂; x̂_ρ+1 ])
where x̂_ρ+1 is the additional state representing the uncertain disturbance input and A̅,B̅,C̅ the canonical representation of a chain of ρ+1 integrators. Here, the gain matrix H̅ is :
H̅=[ H; α_ρ+1/ϵ^ρ+1; ].
In order to discretize the high-gain observer, we first perform a change of coordinates:
q=D[ x̂; x̂_ρ+1 ]=Dx̂̅̂
where the matrix D is a transformation matrix given by D=diag(1,ϵ,ϵ^2,…,ϵ^ρ). This transformation is made so as to remove the negative powers of ϵ from the gain matrix H̅ before discretizing the observer.
Upon discretizing the observer dynamics using the bi-linear transformation methods with sampling time T and choosing ϕ_0=0 we obtain:
ξ(k+1) =A_dξ(k)+B_dy(k)
x̂̅̂(k) =D^-1[C_dξ(k)+D_dy(k)]
where the observer state ξ∈ℝ^ρ+1 and the matrices A_d,B_d,C_d, and D_d are the coefficients of the discrete-time implementation of the high-gain observer, given in Table 9.1, <cit.>. The relation between the sampling time T and the observer time constant ϵ is:
T=αϵ, α∈ℝ_>0.
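The matrices A_d, B_d, C_d, and D_d of the bilinear-transform implementation are tabulated in <cit.> and are not reproduced here. As a lighter-weight stand-in, the sketch below implements the extended high-gain observer above (with ϕ_0=0) using a simple forward-Euler step of length T=αϵ; the Euler discretization is our simplification for illustration, not the discretization referenced in the cited table.

```python
import numpy as np

def make_ehgo(eps, alphas=(6.0, 11.0, 6.0)):
    """Extended high-gain observer for rho = 2 with phi_0 = 0.

    alphas are coefficients of a Hurwitz polynomial s^3 + a1 s^2 + a2 s + a3;
    eps is the observer time constant.
    """
    a1, a2, a3 = alphas
    gain = np.array([a1 / eps, a2 / eps**2, a3 / eps**3])

    def step(xhat, y, T):
        # xhat = [x1_hat, x2_hat, x3_hat]; x3_hat estimates a(w, x) + b(w, x) u
        innovation = y - xhat[0]
        dxhat = np.array([xhat[1], xhat[2], 0.0]) + gain * innovation
        return xhat + T * dxhat   # forward-Euler update with step T = alpha * eps

    return step
```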
Any value of α>0 could be chosen in the bi-linear transformation case, as the matrix A_d is Hurwitz for all positive values of α. Theorem 9.1 in <cit.> guarantees the existence of an ϵ such that the output-feedback control law, with the state estimates obtained using the high-gain observer, stabilizes the system. The theorem is as follows:
(Theorem 9.1, <cit.>)
Consider the closed-loop system with the plant (<ref>) and the output
feedback discrete controller u(x̂̅̂(k)) with the observer (<ref>). Let ℛ be the region of attraction of the system (<ref>) with the controller u(x̅(k)), and 𝒮 be any compact set in the interior of ℛ, and let 𝒟 be any compact subset of ℝ^ρ+1. Suppose ((w(0),x̅(0)), x̂̅̂(0))∈𝒮×𝒟. Then
* there exists ϵ_1^*>0 such that for every ϵ∈(0,ϵ_1^*], (w(t),x(t)) is bounded for all t≥0 and the estimation error e_x̅(k) is bounded for all k≥0
* given any μ>0, there exists an ϵ_2^*>0, T_1>0 and k^*>0 all dependent on μ such that for every ϵ∈(0,ϵ_2^*],
‖(w(t),x(t))‖≤μ ∀ t≥ T_1
‖e_x̅(k)‖≤μ ∀ k≥ k^*
* If the origin of the system (<ref>) with the controller u(x̅(k)) is exponentially stable and f_0(w,x), Ax+Bϕ(w,x,u)
are twice continuously differentiable in the neighborhood of the origin, then there exists an ϵ_4^*>0 such that for every ϵ∈(0,ϵ_4^*], the origin of (<ref>) and the discretized plant (<ref>) is exponentially stable and 𝒮×𝒟 is a subset of its region of attraction. Moreover, the continuous-time trajectory (w(t),x(t)) decays to zero exponentially fast.
§ SYSTEM MODEL AND CONTROLLER DESIGN
In this section, we derive approximate discrete-time models of the plant and discuss the controller design, which incorporates the observer time constant ϵ.
§.§ System Model
Before getting into the discretized models, we show that O_x(T)=O_x(ϵ) when α≤1 in (<ref>):
‖f(x,t)‖ ≤ HT‖x‖=Hαϵ‖x‖≤ Hϵ‖x‖
where the equality is obtained using (<ref>) and the last inequality by setting α≤1.
For the ease of deriving the control law, we set the relative degree ρ=2. The same procedure would follow for systems with a higher relative degree. We also assume that the measurement is free of noise. Hence we obtain the simplified model in the below normal form:
ẇ=f_0(w,x)
ẋ_1=x_2
ẋ_2=a(w,x)+b(w,x)u
y=x_1
where w∈ℝ^l and x∈ℝ^2 constitute the state vector. We make the following assumptions:
u(kT+λ)=u(kT), ∀λ∈[0,T) where T is the sampling time.
For a small enough sampling time T, we assume a(w,x), b(w,x) to be constants between two consecutive sampling instants.
The origin w=0 of the zero dynamics ẇ=f_0(w,0) is asymptotically stable.
Consider the subsystem (<ref>). On discretizing this system using Taylor-series expansion, and using (<ref>), we obtain the exact discretization F^e(x,u):
x_1(k+1) =x_1(k)+ϵα x_2(k)+(ϵα)^2/2(a(k)+b(k)u(k))
+O_(x,u)(ϵ^3)
x_2(k+1) =x_2(k)+ϵα(a(k)+b(k)u(k))+O_(x,u)(ϵ^2)
y(k) =x_1(k).
Neglecting the O(ϵ^2) terms, we obtain the approximate dynamics F^a(x,u). Since a(w,x) and b(w,x) are unknown, we take a(w,x)+b(w,x)u(k) as an extended state x_3(k). Hence the approximate discretized extended system dynamics are:
x̅(k+1) =[ 1 ϵα (ϵα)^2/2; 0 1 ϵα; 0 0 1 ]x̅(k), y(k)=Cx̅(k)
where C=[ 1 0 0 ].
§.§ Controller Design
§.§.§ Linear-Part of the Controller
In this section we present the design of the stabilizing controller for the feedback-linearized system, assuming knowledge of a(k) and b(k). Consider a feedback-linearizing control law of the form:
u(k) =b^-1(k)(-a(k)+v(x(k))).
Substituting (<ref>) in the approximate model F^a(x,u), we obtain x(k+1)=Ax(k)+Bv(k), where
A =[ 1 ϵα; 0 1 ] =I+A_iϵα
B =[ (ϵα)^2/2; ϵα ]=B_iϵα+B_j(ϵα)^2.
Since (A_i,B_i) is a controllable pair, we can find a control law v(x)=Kx and symmetric matrices P_x≽0 and Q≻0 such that:
(A_i+B_iK)^TP_x+P_x(A_i+B_iK)=-Q
and V_x(x)=x^TP_xx. For the approximate dynamics, we can show that:
V_x(F^a(x,u))-V_x(x)≤-λ_min(Q)ϵ‖x‖^2+O_(x,u)^2(ϵ^2).
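As a sketch of this step, the gain K and the matrix P_x can be obtained with standard SciPy routines; the double-integrator pair (A_i, B_i) follows from the matrices above, while the pole locations and Q are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.signal import place_poles

A_i = np.array([[0.0, 1.0],
                [0.0, 0.0]])
B_i = np.array([[0.0],
                [1.0]])

# v(x) = K x; place_poles returns F such that A_i - B_i F has the desired poles, so K = -F.
K = -place_poles(A_i, B_i, [-1.0, -2.0]).gain_matrix

# Solve (A_i + B_i K)^T P_x + P_x (A_i + B_i K) = -Q for P_x.
Q = np.eye(2)
A_cl = A_i + B_i @ K
P_x = solve_continuous_lyapunov(A_cl.T, -Q)
```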
§.§.§ Dynamic Controller
The stabilizing controller is designed assuming we have the knowledge of a(k) and b(k). Since we don't have that knowledge, we need to design a dynamic controller such that the error, e_u defined by:
e_u=v(x(k))-x_3(k)
asymptotically converges to the origin. We use the dynamic control law proposed in <cit.> for this purpose. The dynamic control with state feedback is:
u(k+1)=u(k)+γ(v(x(k))-x_3(k))
where the value of γ must be chosen such that γ<b̅^-1 and b̅ is the upper bound of b(w(k),x(k)).
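A minimal closed-loop sketch that wires the Euler-discretized extended high-gain observer together with this dynamic control law is given below. The plant, the gains, and the saturation levels are illustrative assumptions rather than values from this article; the intent is only to show how the estimation and control stages interact.

```python
import numpy as np

def plant_rhs(x, u):
    # Illustrative relative-degree-two plant; a and b are unknown to the controller.
    a = -np.sin(x[0]) - 0.5 * x[1]
    b = 1.0 + 0.5 * np.cos(x[0])            # bounded, so b_bar = 1.5
    return np.array([x[1], a + b * u])

T, eps = 1e-3, 0.05                          # sampling time and observer time constant
a1, a2, a3 = 6.0, 11.0, 6.0                  # s^3 + 6 s^2 + 11 s + 6 is Hurwitz
H = np.array([a1 / eps, a2 / eps**2, a3 / eps**3])
k1, k2 = 2.0, 3.0                            # outer linear law v(x) = -k1*x1 - k2*x2
gamma = 0.1                                  # dynamic-controller gain, gamma < 1 / b_bar

x = np.array([1.0, 0.0])                     # true plant state
xhat = np.zeros(3)                           # observer state [x1_hat, x2_hat, (a + b*u)_hat]
u = 0.0

for k in range(20000):                       # 20 s of simulated time
    y = x[0]                                 # only the output is measured
    innovation = y - xhat[0]
    xhat = xhat + T * (np.array([xhat[1], xhat[2], 0.0]) + H * innovation)
    # Saturate the estimates used by the controller to blunt observer peaking.
    x2h = np.clip(xhat[1], -5.0, 5.0)
    x3h = np.clip(xhat[2], -20.0, 20.0)
    v = -k1 * xhat[0] - k2 * x2h
    u = float(np.clip(u + gamma * (v - x3h), -10.0, 10.0))   # u(k+1) = u(k) + gamma*(v - x3_hat)
    x = x + T * plant_rhs(x, u)              # Euler integration of the plant
```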
§ MAIN RESULTS
We are now ready to state the main result of this article. Given a system of the form (<ref>), which can be transformed into a (partially-) feedback-linearizable normal form with asymptotically stable zero dynamics, the objective is to design a feedback control law, which is easy to implement, that can asymptotically stabilize the system. We also aim to eliminate the need for a finite set of measurements for state estimation. To meet this objective we make use of high-gain observers, which have been extensively used to design output feedback controllers for nonlinear systems. We will show that our results guarantee the existence of a high-gain observer time constant ϵ such that the combination of the high-gain observer with the dynamic controller, along with feedback linearization, drives the system states to the origin asymptotically and keeps all the signals bounded under zero measurement noise conditions.
Consider the nonlinear system of the form (<ref>), together with Assumptions A1-A3, where the functions f_0(w,x), a(w,x) and b(w,x) are unknown. The initial conditions satisfy ((w(0),x(0)),x̂(0))∈𝒮×𝒟, where the sets 𝒮⊂ℝ^l+ρ and 𝒟⊂ℝ^ρ are compact. Then, in the absence of measurement noise, the extended high-gain observer (<ref>) and the dynamic controller (<ref>) guarantee the existence of an ϵ^*∈ℝ^+ and constants b_1>0, T_1>0, k^*>0 such that for any ϵ∈(0,ϵ^*], the closed-loop trajectories are bounded, i.e.
‖x̂(k)‖≤ b_1, ∀ k≥ k^*
‖x(t)‖≤ b_1, ∀ t≥ T_1
‖w(t)‖≤ b_1, ∀ t≥ T_1
|e_u|≤ b_1.
Moreover, lim_t →∞x(t)=0 and lim_t →∞w(t)=0.
Proof
The proof consists of two parts. In the first part, we show that a state-feedback control law guarantees the stability of the closed-loop system. This is shown using techniques similar to those in the proof of Theorem 8.1 in <cit.>, in addition to the relation derived in (<ref>). Next, we prove that the output-feedback controller, with estimates obtained using the high-gain observer, stabilizes the system.
Consider the Lyapunov candidate function V̅=V_x(x(k))+V_e, where V_x=x^TP_xx as given in section <ref>, and V_e=12e_u^2 for the error dynamics. Let ℛ be the smallest sub-level set of V̅. We show that ℛ is an invariant set using the steps (1)-(4) below.
* First we relate the evolution of the function V_x along the trajectories of the exact discretized system F^e(x,u) to its evolution along the approximate dynamics F^a(x,u), up to O(ϵ^2) terms. Since u=b^-1(k)(-a(k)+v(x))-b^-1(k)e_u(k), we have ‖(x,u)‖≤ ‖x‖+ ‖b^-1(k)(-a(k)+v(x))‖+ ‖b^-1(k)e_u(k)‖,
and
b^-1(k)(-a(k)+v(x)) is Lipschitz continuous in x, so we arrive at the following inequality:
V_x(F^e(x,u))-V_x(x) ≤ V_x(F^a(x,u))-V_x(x)
+O_x^2(ϵ^2)+O_e_u^2(ϵ^2).
* Next we show that V_x(F^a(x,u))-V_x(x) is a negative definite quantity up to O(ϵ^2) terms, using (<ref>), choosing λ_x≤λ_min(Q)/2, and completing the squares:
V_x(F^a(x,u))-V_x(x) ≤-λ_xϵ ‖x‖^2+O_x^2(ϵ^2)
+O_e_u^2(ϵ^2).
* The controller error dynamics, approximated up to O_(x,u)(ϵ) terms, is given by:
e_u(k+1)=(1-b(k)γ)e_u(k)+O_(x,u)(ϵ).
* Now we show V_e(k+1)-V_e(k) is negative definite up to O(ϵ^2) terms, by completing the squares, and by choosing a small enough γ,λ_u∈ℝ^+:
V_e(k+1)-V_e(k) ≤-λ_ue_u^2+O_x^2(ϵ^2)
+O_e_u^2(ϵ^2).
Combining (<ref>)-(<ref>), we have the result:
V̅(F^e)-V̅(k) ≤-λ_xϵ ‖x‖^2-λ_ue_u^2+O_x^2(ϵ^2)
+O_e_u^2(ϵ^2)
=-λ_xϵ ‖x‖^2-λ_u e_u^2
+Hϵ^2 ‖x‖^2+Hϵ^2 e_u^2.
Choosing
λ <min{λ_x,λ_u}, ϵ^*<min{(λ_x-λ)/H,(λ_u-λ)/H}
we have:
V̅(F^e)-V̅(k) ≤-λϵ^* ‖x‖^2-λ e_u^2.
* Thus ℛ remains invariant, and the compactness of ℛ implies that the trajectories are bounded, i.e. ∃ b_2>0 such that ‖x(k)‖≤ b_2 and |e_u(k)|≤ b_2 ∀ k∈ℕ. Moreover, they converge to the origin asymptotically. Now, since ℛ is invariant and the dynamics are smooth, using Theorem 1 in <cit.>, we can conclude that ∃ b_3>0 such that ‖x(t)‖≤ b_3. Furthermore, lim_t→∞x(t)=0. So we choose b_1=max{b_2,b_3}. This proves that the result holds for the state-feedback dynamic control law (<ref>).
* From the assumption that the zero-dynamics of the system is asymptotically stable, the system (<ref>)-(<ref>) is asymptotically stable with state-feedback dynamic control law (<ref>).
* Now, for the second part of the proof, we invoke Theorem <ref>, which guarantees that the states, as well as their estimates, are bounded, and that the states x(t), w(t) converge to the origin asymptotically.
§ DISCUSSION
The reader might be tempted to draw parallels between Theorem <ref> of this article, Theorem 8.1 in <cit.> and Theorem 9.1 in <cit.>. It is remarked in <cit.>, that the least-squares-based state estimator proposed in <cit.> conceptually resembles a linear high-gain observer converging in finite-time, where the sampling parameter T appearing in the design of the state estimator plays the role of ϵ in the high-gain observer, thus rendering both the methods sensitive to measurement noise. We now discuss in detail the similarities and differences in these approaches and in the process note the advantages and drawbacks of each of the methods, which opens up avenues for further research in this direction.
§.§ Ease of implementation
From the implementation point of view, it is important to note that the sampling time T is a hardware parameter and might pose challenges in the physical realization of methods proposed in <cit.>, even though an appropriate choice of sampling time T ∈ [0,T^*] guarantees that the system trajectories asymptotically converge to the origin. This comes across as a gap between theory and practice. We highlight this using experiments on a twin-rotor system later in the article.
The ϵ in the high-gain observer presents itself as a tunable software parameter, thus offering more flexibility in implementation by allowing the designer to choose the observer time-constant ϵ, and then scaling it using α, to match the sampling time T. Hence the results presented in Theorem <ref> can be considered as an attempt in shortening the gap between theory and practical implementation.
§.§ The peaking phenomenon of the high-gain observer
A characteristic of the high-gain observer is that the state estimates may peak to values of the order of a negative power of ϵ before quickly decaying to O(ϵ) values. This is referred to as the peaking phenomenon of the high-gain observer. The combination of the peaking phenomenon and certain nonlinearities in the system can lead to a finite escape time. There are ways to tackle this issue; one of them is to saturate the state estimates or the control input u(t) outside the compact set under consideration (see <cit.>). This can also be dealt with using the low-power version of the high-gain observer (see <cit.>).
§.§ The noisy case
The extension of Theorem <ref> to the case where measurement noise is present is not discussed in this article, since both the high-gain observer and the data-driven estimator give the same performance in the presence of bounded measurement noise. The states can be shown to converge to a bounded set, whose bounds depend on negative powers of T in the data-driven state estimation case and on negative powers of ϵ in the high-gain observer case. This again means that the designer has some control over the bounds in the case of a high-gain observer as compared to using a data-driven estimator. These similarities as well as differences are demonstrated in the next section using experiments on a twin-rotor MIMO system.
§ EXPERIMENTAL WORK
§.§ The Setup
To analyze and compare the proposed method with the data-driven technique, we implement both methods on a twin-rotor MIMO system (TRMS). The objective is to control the yaw angle using feedback linearization by actuating the tail rotor. The pitch is left unactuated. The twin-rotor setup used is shown in Fig. <ref>. The setup is run using Feedback Instruments' Simulink package. The experiments are conducted in a model-free approach, i.e., we do not have access to the mathematical model of the system.
The system has a relative degree ρ=2. We start by designing a high-gain observer and a dynamic controller discussed in Sections <ref> and <ref> respectively for the twin-rotor system.
§.§ High-Gain Observer and Dynamic Controller Design
* Sampling Time (T): All the experiments are run at a sampling time of T=0.001 s.
* Observer Time Constant (ϵ): As we cannot find the value of ϵ^*, we choose an arbitrary initial value for ϵ and reduce it by an order of 10 until it drops below the unknown ϵ^* that is guaranteed by Theorem <ref>. During this time, the changes in the yaw angle are made physically by giving small disturbances to the tail rotor. No control input can be given to the motor during this period, since the closed-loop system may turn unstable when the value of ϵ is above the threshold value ϵ^* (see Theorem <ref>). A value of ϵ=0.05 was found to be good enough to estimate the states and the extended state. Thus, from (<ref>), the value of α=T/ϵ=0.02.
* Polynomial Coefficients: As the relative degree is ρ=2, the high-gain observer polynomial corresponding to (<ref>) will be of degree 3 (including the extended state). Thus, in order to make the polynomial s^3+α_1s^2+α_2s+α_3 Hurwitz, the values of α_1, α_2, α_3 were chosen as α_1=6, α_2=11, α_3=6.
* Controller Parameter γ: Since the model parameters are assumed to be completely unknown, the bound on the diffusion term b(w,x) is not available. In order to ensure the stability of the dynamic controller error dynamics, we need γ<b̅^-1, where b̅ is an upper bound of b(k). Hence, to find a stabilizing γ, it was set to an initial value and reduced by orders of 10 until the error dynamics asymptotically approached zero. A value of γ=0.0003 was found to be appropriate for the system.
* Pole placement controller: For the pole placement design, the gains were chosen as k_1=2, k_2=4. Since the system had a steady-state error, an integrator was incorporated to get the required steady-state performance. The integrator gain was set to K_i=0.001. To reduce the effects of peaking, the angular velocity output of the observer was saturated to a value of ±0.05. Since the control input gets saturated by the inbuilt TRMS control output block, no additional saturation blocks were used to saturate the peaking in the control signal. These design values are collected in the configuration sketch below.
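For reference, the design values reported in the items above can be gathered into a single configuration structure (the field names are ours; the numbers are those listed above):

```python
trms_design = {
    "sampling_time_T": 0.001,        # s
    "observer_eps": 0.05,            # observer time constant
    "alpha": 0.02,                   # alpha = T / eps
    "hgo_alphas": (6.0, 11.0, 6.0),  # s^3 + 6 s^2 + 11 s + 6 is Hurwitz
    "gamma": 0.0003,                 # dynamic-controller gain
    "k1": 2.0,
    "k2": 4.0,                       # pole-placement gains
    "K_i": 0.001,                    # integrator gain for steady-state performance
    "angular_velocity_sat": 0.05,    # saturation on the estimated angular velocity
}
```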
§.§ Experimental Results
The system was made to track a step input of amplitude 0.3 rad. The selected values of γ, k_1, k_2, and K_i resulted in an overshoot of 36%. The peaking phenomenon can be seen in the angular velocity as well as in the estimate of the extended state. Observer peaking does not affect the system states, as the TRMS bandwidth is much lower than the peaking decay rate. Fig. <ref> shows the state estimate trajectories as well as the control input.
§.§ Comparison: Data-Driven vs High-Gain Observer
We compare the method proposed with the data-driven technique in terms of ease of implementation and tuning, and robustness in the presence of noise.
Ease of Implementation and Tuning: Both methods require a considerable amount of tuning since the model is assumed to be completely unknown. In the case of the high-gain observer, the parameter ϵ is chosen such that the upper bound on ϵ is dictated by the stability of the system and the lower bound by the amount of noise the observer rejects. In the data-driven estimator, the number of samples required for estimation is dictated by the number of states and the amount of noise the estimator rejects. The peaking in the states is similar to the peaking phenomena observed in the high-gain observer.
In order to compare the two methods with respect to ease of implementation and performance, we replaced the high-gain observer with a data-driven estimator. No other parameters were changed and the number of samples used for estimation was set to 9. The data-driven estimator gave an overshoot of over 100% and large peaking. Fig <ref> shows the state estimates and the control input in the case of a data-driven estimator. The overshoot and the peaking are high as only 9 samples were being used for state estimation. Increasing the number of samples will increase the numerical computations required in the estimation. This is shown in Fig <ref>, where the overshoot has reduced to 70% when we use 26 samples for estimation.
Robustness in the presence of noise: To test the robustness of both the methods in the presence of noise, an additional noise of 1kHz (since there are built-in filters in the TRMS which suppress noise) with 0 mean and a variance of 0.001 is added to the yaw angle sensor output using the random number generator block. All other parameters in both the methods remain unchanged. The high-gain observer with an observer time constant ϵ=0.05 is seen to have better noise attenuation than the data-driven estimator. Moreover, the noise in the output of the high-gain observer reduces significantly when the observer time constant ϵ is raised to 0.5. This is because, in the case of the high-gain observer, the states can be shown to converge to a ball of radius proportional to ϵ^-ρ. Whereas in the case of a data-driven estimator the radius is proportional to T^-ρ. Hence, one has to change the sampling time, which is a hardware parameter, or the number of samples used for estimation, which will require a larger computational power, to reduce the bound on the states in the presence of noise when using a data-driven estimator. Thus the advantage of using a high-gain observer, where the performance of the system depends on a software parameter(ϵ), is highlighted here. The state trajectories and the control input are given in Fig <ref>.
Note that the model-free control design required neither the use of the system model nor a priori collection of open-loop data.
§.§ Inferences
The following inferences are derived from the experimental results.
* The high-gain observer with the dynamic controller performed better in terms of system transients as compared to the data-driven method. Using the same controller parameters (γ, k_1, k_2), the proposed method resulted in a smaller overshoot in the step response of the yaw angle compared to the method in <cit.>, which uses a data-driven estimator.
* Computationally, the high-gain observer can be seen to be much cheaper than the data-driven estimator while yielding the same performance. This is because the performance of the data-driven estimator depends on the number of samples used for estimation, and this directly translates to the number of computations needed for state estimation (see Equation 6.5 in <cit.>).
* In the presence of sensor noise, both the data-driven estimator and the high-gain observer have low-pass filtering characteristics (see Fig. <ref>). The caveat here is that, in the case of the high-gain observer, the noise can be suppressed by increasing the observer time constant ϵ (whose upper bound is decided through Theorem <ref> to ensure the stability of the system). Whereas, for the data-driven estimator, the same can be achieved only by increasing the sampling time (T) (whose upper bound is decided through Theorem 8.1 and 8.2 of <cit.> to ensure the stability of the system), or by increasing the number of samples used for estimation. This again increases the computational cost in the latter solution. Note that it is undesirable to increase the number of open-loop data samples used for estimation, especially when stabilizing unstable equilibrium points for hardware plants.
§ CONCLUSIONS
This article proposes a model-free controller for the stabilization of minimum-phase feedback-linearizable nonlinear systems. The method proposed does not require any a priori open-loop data collection for the estimation, unlike the recent data-driven techniques, which gives our method an advantage when the hardware plants are open-loop unstable. In our method, we select the high-gain observer for the estimator stage because of its ease of implementation, lower computational cost, and superior noise attenuation compared to the data-driven estimator. This fact was validated using experiments on a twin-rotor system. The proposed method, as compared to the recent data-driven methods, exhibits better performance in terms of overshoot, settling time, and robustness to sensor noise. The role of the sampling time, the number of samples used in data-driven estimation, and the observer time constant in a practical scenario were highlighted. We demonstrated using experiments that increasing the number of samples used in data-driven estimation reduces the overshoot and increases the noise rejection. Lastly, we believe the discussions presented in this article open up further directions for research in this area. In particular, it is worthwhile investigating cases where the zero dynamics exhibit a stable limit cycle, and also cases where the input appears in the state equations governing the internal dynamics of the system (meaning the first expression in (<ref>) would appear as ẇ = f_0(w,x,u)), for example in the pendulum-on-a-cart system. Addressing these issues is a part of our ongoing and future research.
|
http://arxiv.org/abs/2307.06816v1 | 20230710024453 | Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder | [
"SiHun Lee",
"Sangmin Lee",
"Kijoo Jang",
"Haeseong Cho",
"SangJoon Shin"
] | cs.LG | [
"cs.LG",
"physics.data-an",
"physics.flu-dyn"
] |
Data-driven Nonlinear pROM using Deep Hierarchical VAE …]Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder
1]SiHun [email protected]
1]Sangmin [email protected]
1]Kijoo [email protected]
2]Haeseong [email protected]
[1,3]SangJoon [email protected]
[1]Department of Aerospace Engineering, Seoul National University, Seoul, 08226, Republic of Korea
[2]Department of Aerospace Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea
*[3]Institute of Advanced Aerospace Technology, Seoul National University, Seoul, 08226, Republic of Korea
A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed. The present network, which is the least-squares hierarchical variational autoencoder (LSH-VAE), is capable of performing nonlinear MOR for the parametric interpolation of a nonlinear dynamic system with a significant number of degrees of freedom. LSH-VAE exploits two major changes to the existing networks: a hierarchical deep structure and a hybrid weighted, probabilistic loss function. The enhancements result in a significantly improved accuracy and stability compared against the conventional nonlinear MOR methods, autoencoder, and variational autoencoder. Upon LSH-VAE, a parametric MOR framework is presented based on the spherically linear interpolation of the latent manifold. The present framework is validated and evaluated on three nonlinear and multiphysics dynamic systems. First, the present framework is evaluated on the fluid-structure interaction benchmark problem to assess its efficiency and accuracy. Then, a highly nonlinear aeroelastic phenomenon, limit cycle oscillation, is analyzed. Finally, the present framework is applied to a three-dimensional fluid flow to demonstrate its capability of efficiently analyzing a significantly large number of degrees of freedom. The performance of LSH-VAE is emphasized by comparing its results against that of the widely used nonlinear MOR methods, convolutional autoencoder, and β-VAE. The present framework exhibits a significantly enhanced accuracy to the conventional methods while still exhibiting a large speed-up factor.
§ INTRODUCTION
Modern high-fidelity, nonlinear computational analyses are typically computationally intensive in terms of time and memory. In particular, many multiphysics analyses adopt a partitioned method in which the solvers regarding each type of physics are executed separately. Such an approach also requires computation for the data interpolation among different types of discretization and executes iterative computation within a single time step, demanding even more intensive computation. Consequently, model order reduction (MOR) has been suggested to alleviate the computational time and memory consumption. Two types of MOR frameworks exist: intrusive and non-intrusive. Intrusive MOR depends on the governing equation to construct the reduced bases. Galerkin projection is one of the most widely used approaches, which projects an ensemble of the full-order model (FOM) results into the governing equation <cit.>. However, a parametric analysis may become extremely challenging when the algorithm is not explicitly established, as it manipulates the governing equation directly <cit.>. Instead, a completely data-driven approach, non-intrusive MOR (NIMOR), may be considered. NIMOR aims to discover the embedded pattern in the FOM dataset and rescale it to a much smaller dimensionality. Unlike intrusive MOR, NIMOR is independent of the governing equation, making it extremely versatile.
Among MOR methods, linear subspace MOR (LS-MOR) has been widely considered as it is mathematically rigorous and efficient. LS-MOR has been successfully employed in fluid dynamics, flow control, structural dynamics, aeroelasticity, and fluid-structure interaction (FSI) <cit.>. However, LS-MOR may require an excessive number of subspaces to accurately represent a nonlinear, complex FOM. For example, in complex turbulent fluid flows, proper orthogonal decomposition (POD) extracts its modes with respect to the energy ratio and details are filtered out <cit.>. Those details are usually excluded because they contain very little energy and the corresponding coefficients are quite random. LS-MOR methods are generally known to be less effective on advection-dominated, sharp-gradient, multiphysics systems, and especially systems with slowly decaying Kolmogorov n-width <cit.>.
Recent exponential development in the field of machine learning has enabled neural networks to be used for MOR. Specifically, autoencoder has become a viable nonlinear MOR method where a shallow, well-trained autoencoder with a linear activation function is known to behave similarly to POD <cit.>. Instead of the linear activation functions, many autoencoders adopt nonlinear activation functions, using them to generate nonlinear subspace <cit.>. Such an autoencoder-based method has been implemented widely to reduce the dimensionality of various engineering problems including fluid dynamics, convection problems, and structural dynamics <cit.>. However, the performance of an autoencoder as a generative ANN is known to be quite limited <cit.>. The deterministic aspect of its loss function, which was designed to only reconstruct the input, limits autoencoders to generate diverse outputs. Attempts to enhance the generative capability have led to the development of the variational autoencoder (VAE) and generative adversarial network (GAN) <cit.>. These methods implement probabilistic loss functions that construct a dense and smooth latent space. Between the two alternatives, VAE is selected for use in this study owing to its stable training property <cit.>. VAE has been widely studied for use in the field of computer vision but it has also been used to interpolate dynamic systems <cit.>.
VAE in its simplest form, vanilla VAE, is capable of generating data of significantly superior quality compared with the autoencoder. However, VAE commonly suffers from a phenomenon known as posterior collapse, where the generative model learns to ignore a subset of the latent variables <cit.>. The posterior collapse was easily alleviated by applying a technique known as Kullback-Leibler divergence (KL divergence) annealing, or β-VAE <cit.>. Another problem with vanilla VAE is that it is restricted to a shallow network, limiting its expressiveness. Vanilla VAE tends to perform worse as the network becomes deeper due to the loss of long-range correlation and its performance was found to be insufficient when complex data were processed <cit.>. Deep hierarchical VAEs, such as the LVAE, IAF-VAE, and NVAE, have been developed to enhance the performance of vanilla VAE <cit.>. These VAEs mainly adopt a type of residual cells that connect the encoder and decoder directly without passing through the latent space. Similar to U-nets, the skip connections allow bidirectional information sharing between the encoder and decoder, thereby preventing the loss of long-range correlation.
Recently, various types of VAEs have been adopted as nonlinear MOR methods owing to their superior generative capability compared to conventional autoencoders. VAEs have been applied to flow problems <cit.>, transonic flow <cit.>, numerics <cit.>, biology <cit.>, brain MRI images <cit.>, and anomaly detection <cit.>. While earlier studies adopt the simplest convolutional VAE, many recent studies consider β-VAE due to its near-orthogonal latent space <cit.>. Previous studies show that β-VAE may successfully construct nonlinear subspaces, but the majority of the networks used in those studies were quite shallow. The use of shallow networks may result in insufficient expressiveness if the input data consist of a large number of DOFs and exhibit a complex response.
Instead, a deep hierarchical VAE, the least-squares hierarchical VAE (LSH-VAE), is proposed for nonlinear MOR of a dynamic system. LSH-VAE is a very deep hierarchical network that incorporates a modified loss function similar to that of β-VAE. The deep hierarchical structure enables a very deep, stable network (>100 layers) with highly expressive and accurate interpolation results. The modified loss function consists of a hybrid weighted least-squares and Kullback-Leibler divergence function that alleviates posterior collapse and enhances the orthogonality of the latent space <cit.>. The least-squares error in the loss function is also known to enhance the accuracy when used on a continuous dataset <cit.>.
There has been no report on a very deep VAE (>100 layers) implemented for nonlinear MOR. The present framework is validated by solving the following three problems. First, a standard two-dimensional FSI benchmark problem developed by Turek and Hron will be exemplified <cit.>. Then, the highly nonlinear aeroelastic phenomenon of limit cycle oscillation (LCO) will be considered to examine the accuracy of the proposed framework under nonlinearity. Finally, the flow surrounding a three-dimensional cylinder is to be analyzed to establish the capability of the current framework to accommodate a system with a significantly large number of degrees of freedom. The computational efficiency and accuracy will be assessed as well as comparison to the existing nonlinear MOR methods will be presented.
§ MACHINE-LEARNING METHODS
This section provides the theoretical background of the machine learning methods. Based on the existing convolutional autoencoder and β-VAE, the formulation of the proposed network, LSH-VAE is presented.
§.§ Convolutional autoencoder (CAE)
A convolutional autoencoder (CAE) is an ANN that is trained to output data that are similar to its input. The typical architecture of the CAE, shown in Fig. <ref>, enables the encoder to compress the input data into a smaller latent dimensionality. The decoder then expands the latent code back to its original dimensionality. By training both the encoder and decoder, CAE learns to extract important features of the input dataset. The latent codes contain the embedded features recognized by the CAE that can be used as the reduced bases in the ROM.
The interpolation of data using CAE is conducted by interpolating the latent codes. The interpolated latent code contains the interpolated features, which leads to the interpolation of the input data.
The loss function of CAE is quite intuitive. CAE takes the input, x, and passes it through the encoder, Φ, to obtain the latent vector, z. Then, the decoder, Ψ, receives the latent vector and generates the output, y. The output, y, is compared against the input, x, using the mean squared error (MSE) loss function. In this way, the CAE is trained such that the difference between y and x is reduced, aiming for a more accurate reconstruction of the input. The equations for the encoder and decoder network are presented in Eq. (<ref>), where the loss function is shown in Eq. (<ref>).
z=Φ(x), y = Ψ(z)
L = MSE(Ψ(Φ(x))-x)
The simplest form of CAE, known as the vanilla CAE, has been shown to produce unsatisfactory interpolation outcomes <cit.>. Hence, derivatives thereof such as VAE, and GAN may be utilized to enhance the performance.
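For reference, the encoder-decoder structure and MSE loss described above can be sketched as follows (a PyTorch sketch; dense layers and the layer widths are our simplification for brevity, whereas the networks used in this study employ 1D convolutional layers).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    """z = Phi(x), y = Psi(z); trained with the MSE reconstruction loss."""
    def __init__(self, n_features, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ELU(),
            nn.Linear(256, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# One training step: reconstruct the input and penalize the squared error.
# y, z = model(x); loss = F.mse_loss(y, x)
```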
§.§ Variational autoencoder (VAE)
VAE and autoencoder share a similar architecture. The main difference is that the encoder of VAE utilizes probabilistic latent values instead of deterministic latent codes. The probabilistic encoder models the latent feature probability distribution. The resultant latent space is continuous and smooth, enabling higher-quality generated outcomes. The encoder of VAE extracts the mean, μ, and the standard deviation, σ, which are used to generate the latent code, z. A typical VAE structure can be observed in Figure <ref>.
VAE aims to efficiently infer the intractable posterior distribution, p(z | x). It is performed by adopting an approximate posterior, q(z | x), because determining the true posterior is quite challenging. Here, the encoder or inference network is represented by q(z | x), whereas the decoder network is denoted as p(x | z).
Kullback-Leibler (KL) divergence is the expectation of the difference between two distributions, which is always a positive value. KL divergence between the approximate and the real posterior is written as Eq. (<ref>).
D_KL(q(z | x) || p(z | x))=-∫ q(z | x)log(p(z | x)/q(z | x))dz≥ 0
Applying Bayes' theorem to Eq. (<ref>) yields Eq. (<ref>).
D_KL(q(z | x) || p(z | x)) = -∫ q(z | x) log(p(x | z)p(z)/q(z | x)p(x)) dz
= -∫ q(z | x) log(p(x | z)p(z)/q(z | x)) dz + log p(x)≥ 0
Equation (<ref>) can be rewritten as Eq. (<ref>). Applying the rules of logarithm to Eq. (<ref>) will yield Eq. (<ref>).
log p(x) ≥∫ q(z | x)logp(x | z)p(z)/q(z | x)dz
log p(x)
≥∫ q(z | x) log(p(z)/q(z | x))dz + ∫ q(z | x)log p(x | z) dz
≥𝔼_q(z | x)[log p(x | z)]-D_KL(q(z | x) || p(z))
The right hand side of Eq. (<ref>) is the evidence lower bound (ELBO). VAE aims to maximize ELBO which maximizes the logarithmic probability of the data by proxy. Following the convention of minimizing the loss function, the right hand side of Eq. (<ref>) is converted as Eq. (<ref>), which is the goal of VAE.
min[ -𝔼_q(z | x)[log p(x | z)]+ D_KL(q(z | x) || p(z)) ]
The goal of VAE is to minimize both the reconstruction and KL divergence loss. In Eq. (<ref>), the first term corresponds to the reconstruction loss and the second term corresponds to KL divergence loss. KL divergence loss enforces the decoder (approximate posterior) to become similar to the inverse of the encoder.
The loss function in Eq. (<ref>) has to be differentiable to minimize it during the training. Usually, the KL divergence term can be integrated analytically <cit.>; however, the reconstruction loss is not directly differentiable. To make the reconstruction loss differentiable, the reparameterization technique is adopted <cit.>.
First, Gaussian sampled random noise, ε will be introduced. The latent code z, is formulated as shown in Eq. (<ref>), introducing the mean and standard deviation to the equation.
z=μ+(σ×ε), ε∼ N(0,1)
Since the latent code is formulated as Eq. (<ref>), KL divergence in Eq. (<ref>) is rewritten as Eq. (<ref>), assuming the posterior and prior follow the Gaussian distribution.
D_KL(q(z| x)|| p(z)) = 1/2∑(σ^2+μ^2-(log(σ^2)+1))
The latent code with the reparameterization technique enforces the latent space to be stochastically determined. The reparameterization enables the reconstruction loss to be differentiable by Monte Carlo method. For further details and step-by-step derivation of the VAE loss function, reference can be found in works by Kingma and Odaibo <cit.>.
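The reparameterization and the resulting loss can be written compactly as follows (a PyTorch sketch; the mean-squared reconstruction term stands in for the log-likelihood up to constants, anticipating the hybrid loss used later, and the β weight anticipates the KL annealing of the next subsection).

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I)."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Reconstruction term plus the analytic Gaussian KL divergence."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kld = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0)
    return recon + beta * kld
```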
§.§ Least-squares hierarchical variational autoencoder (LSH-VAE)
Conventional vanilla VAE is limited to shallow networks due to vanishing gradients and the loss of long-range correlation. However, shallow networks may lack expressiveness on complex systems with a significant number of DOFs. In this study, a deep VAE with a hierarchical structure is proposed to enhance the performance, specifically to alleviate the loss of long-range correlation and to stabilize the training of a very deep network. The hierarchical structure creates direct passages between the earlier layers of the encoder and the latter layers of the decoder, circumventing the middle layers. These direct passages enable bidirectional information sharing between the encoder and decoder networks. The bidirectional information enables the earlier layers of the VAE to greatly affect the outcome, thus alleviating the loss of long-range correlation. The diagram in Fig. <ref> shows the hierarchical structure of LSH-VAE.
In the hierarchical VAE, the latent variables are divided into L groups. By the divided latent dimension, the prior and posterior distributions are rewritten as in Eq. (<ref>) and Eq. (<ref>).
p(z)=p(z_L) ∏_i=1^L-1 p(z_i| z_i+1)
q(z | x)=q(z_1| x) ∏_i=2^L q(z_i| z_i-1)
p(z_i| z_i+1)=𝒩(z_i|μ(z_i+1), σ^2(z_i+1))
p(z_L)=𝒩(z_L| 0, I)
q(z_i| z_i-1)=𝒩(z_i|μ(z_i-1), σ^2(z_i-1))
q(z_1| x)=𝒩(z_1|μ(x), σ^2(x))
The loss function for hierarchical VAE is shown in Eq. (<ref>), which is obtained by computing the KL divergence separately for each group. By breaking down the KL divergence into groups, bidirectional information flows are created between the inference and generative network. Detailed descriptions about the deep hierarchical structure of VAE can be found in <cit.>.
min [ -𝔼_q(z | x)[log p(x | z)]+ D_KL(q(z | x) || p(z))
+∑_i=1^L-1𝔼_q(z_<i| x)[D_KL(q(z_i| z_<i, x) || p(z_i| z_>i))]]
The present LSH-VAE adopts hierarchical structures motivated by LVAE, IAF-VAE, and NVAE <cit.>. The latent codes in the hierarchical VAE are formed by both bottom-up and top-down information. The latent codes of each of the groups output shared information (from the encoder and decoder) to the next decoder block. Because the information of the encoder and decoder network is shared via latent code, the network delivers higher performance.
On top of the hierarchical structure, LSH-VAE implements a hybrid weighted loss function. The loss function consists of the mean squared error (MSE) and KL divergence instead of the conventional binary cross-entropy. The use of MSE as a reconstruction error is known to be successful for continuous datasets <cit.>. The loss function of LSH-VAE is shown in Eq. (<ref>), where the coefficients α and β denote the weights of the MSE and KL divergence, respectively.
min _ϕ, θ [α MSE(x, x̃)+ β D_KL(q(z | x) || p(z))
+∑_i=1^L-1𝔼_q(z_<i| x)[β D_KL(q(z_i| z_<i, x) || p(z_i| z_>i))]]
Usually, the weights α and β are set to be α / β_target≈ 10^6. During the training, α is a fixed value whereas β is a variable that varies with respect to the epochs. The variable β is implemented to prevent posterior collapse in which some latent variables become inactive. This method is known as KL-annealing or β-VAE, where β is formulated as Eq. (<ref>) <cit.>.
β =
1× 10^-4 β_target if epoch <0.3 n_epochs
β_target · (epoch/n_epochs) if epoch >0.3 n_epochs
During the training, β is assigned a low value at the start such that LSH-VAE behaves as an autoencoder. During the first few epochs, input data will be mapped on the latent space. Beyond a few prescribed epochs, β will be gradually ramped up such that LSH-VAE may behave as a VAE, generating smooth latent space.
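The schedule above translates directly into a small helper (a sketch; the function and argument names are ours):

```python
def beta_schedule(epoch, n_epochs, beta_target):
    """KL-annealing weight: near-zero warm-up, then a linear ramp toward beta_target."""
    if epoch < 0.3 * n_epochs:
        return 1.0e-4 * beta_target
    return beta_target * epoch / n_epochs
```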
§ PRESENT FRAMEWORK
§.§ Architecture of the least-squares hierarchical VAE (LSH-VAE)
LSH-VAE adopts a one-dimensional (1D) convolutional layer to accommodate the transient response of the unstructured grids. The use of a 1D convolutional layer enables the temporal continuity of the physical variables to be considered.
The encoder and decoder of the LSH-VAE consist of the blocks discussed in the previous section, where a detailed schematic of these blocks is shown in Fig. <ref>.
Being deep neural networks (DNNs), the LSH-VAE encoder and decoder blocks are composed of stacks of multiple layers. These layers consist of the following: spectral normalization (SN), 1D convolution, dense, exponential linear unit (ELU), Swish, and batch normalization (BN) layers. The Swish and ELU nonlinear activation functions are chosen as their continuous derivatives enhance the stability of a DNN <cit.>. LSH-VAE implements a normalization-activation sequence instead of the conventional activation-normalization sequence. Such a sequence is empirically known to deliver good performance when used before the convolutional computation <cit.>. The output of the encoder block is branched in three ways. The first branch connects to the input of the next block and the remaining two branches form μ and σ. The encoder latent code is formulated by reparameterizing μ and σ.
The reparameterized latent code and ELU layer infer bottom-up information transfer, shown in green in Fig. <ref>.
In the current configuration, the decoder network is significantly deeper and more complex than the encoder network. The deep decoder network enables an expressive output when accompanied by a system with many DOFs. The decoder network receives two inputs: top-down information from the predecessor decoder block and encoder-decoder shared information from the latent code. Through a series of layers, the decoder outputs top-down information, shown in blue. The decoder block generates the decoder latent code and input for the next block. The encoder latent code and the decoder latent code are added to generate shared latent code, z^i. The shared latent code contains both top-down and bottom-up information, enabling bidirectional information sharing.
§.§ Preprocessing dataset
Acquiring many FOM samples may be quite cumbersome. In particular, many-queried FOM computations are extremely time-consuming if the FOM is highly nonlinear, includes multiphysics, and involves a significant number of DOFs. Acquiring those FOM data through experiments and simulations is considered prohibitive for computational and financial reasons. Instead, data augmentation is considered to sample sparsely and expand the amount of training data. A larger amount of training data improves the generalization of the ANN and thus enhances the accuracy. Similar to the data augmentation typically performed on images, the pre-acquired FOM results are processed using the following three methods. First, temporal data are resampled by shortening the time step, i.e., frequency elongation. Then, the training data are augmented by changing the amplitude and adding a random number within the bound of ±30% for every epoch. Training the ANN using the augmented data ensures that the ANN is effectively trained against a very large dataset, resulting in a high-performance network.
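A sketch of these augmentations on a single (N_t, N_DOF) snapshot array is given below; the resampling factor and the interpretation of the ±30% bound as an amplitude scaling plus an additive offset are our assumptions about the procedure.

```python
import numpy as np

def resample_time(x, factor):
    """Frequency elongation: re-interpolate an (N_t, N_DOF) series onto a shortened time step."""
    n_t = x.shape[0]
    new_t = np.linspace(0.0, n_t - 1.0, int(n_t * factor))
    old_t = np.arange(n_t)
    return np.stack([np.interp(new_t, old_t, x[:, j]) for j in range(x.shape[1])], axis=1)

def random_amplitude_offset(x, rng, bound=0.3):
    """Per-epoch augmentation: random amplitude scaling and offset within +/-30%."""
    scale = 1.0 + rng.uniform(-bound, bound)
    offset = rng.uniform(-bound, bound) * np.max(np.abs(x))
    return scale * x + offset
```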
§.§ LSH-VAE training and interpolation
The current framework performs MOR directly on the FOM results. LSH-VAE employs 1D convolutional layers, which require a three-dimensional input of the format (batch, sequence, channel).
In the current configuration, the temporal continuity of the FOM results is considered in the convolutional dimension. The resultant input composition of LSH-VAE becomes (batch, N_t, N_DOF), where N_t denotes the number of time steps and N_DOF denotes the number of DOFs in the dynamic system. LSH-VAE receives such input and compresses it into latent vectors via the encoder. The dimensionality change throughout LSH-VAE is expressed in Eq. <ref>, where N_i represents the latent dimension in the i-th latent group. The total latent dimension, ∑ N_i is much smaller than the FOM dimension, achieving MOR.
(batch, N_t, N_DOF) → (batch, ∑ N_i)
→ (batch, N_t, N_DOF)
The training algorithm for LSH-VAE is shown in Algorithm <ref>. The algorithm starts by normalizing the physical variables of interest, v. v is normalized to the range of [-0.7, 0.7] for each DOF by the normalizing function, N(). The normalized variable is then augmented by resampling for N_A instances. Then, the training dataset, x_train is constructed by concatenating the original normalized variable with the augmented ones. The training dataset of the network becomes, x_train = [x,R(x)_1,R(x)_2, ... ,R(x)_N_A], where R(x)_n denotes the resampled normalized variable of interest.
The training dataset is further augmented for amplitude and offset. The amplitude and offset augmentation is performed by using random values for every epoch. The network receives a different input in every epoch, enabling the network to be trained against a very large dataset. After the data augmentation is completed, the encoder and the decoder networks are trained. After the decoder is trained, the loss function can be obtained by Eq. <ref>. The training of LSH-VAE is optimized by the Adamax optimizer, which has shown good performance compared with the conventional Adam and SGD optimizers.
Generative ANNs usually require latent vectors to be sought. This is required owing to the probabilistic formulation that is used to parameterize the latent vector. However, we empirically found that sufficient epochs and a small number of parameters obviate the need for latent searching. In this study, rather than attempting latent searching, the latent vectors are calculated by the mean value from the encoder network directly.
Upon acquiring the latent vectors, slerp interpolation is performed to obtain the targeted latent vector. The latent space created by VAEs is in the form of a well-structured, multi-dimensional hypersphere, which enables complex operations by vector arithmetic <cit.>. This is possible because the reparameterization trick introduces a Gaussian random number, which contributes to the vector length and angle in the latent hypersphere. The slerp interpolation shown in Algorithm <ref> not only interpolates the rotation angle of the vectors, but also interpolates the arc length. Such slerp interpolation enables the latent vectors to be interpolated following the path of the complex latent manifold. The use of slerp interpolation has been widely accepted for performing latent interpolation <cit.>.
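One common form of slerp between two latent vectors, which also interpolates their magnitudes so that both the angle and the arc length vary, is sketched below; the details of the algorithm used in this study may differ.

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation of latent vectors for t in [0, 1]."""
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    u0, u1 = v0 / n0, v1 / n1
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):                       # nearly parallel: fall back to lerp
        return (1.0 - t) * v0 + t * v1
    direction = (np.sin((1.0 - t) * omega) * u0 + np.sin(t * omega) * u1) / np.sin(omega)
    return ((1.0 - t) * n0 + t * n1) * direction     # interpolate the magnitude as well
```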
§ NUMERICAL RESULTS
This section presents the numerical results obtained by the proposed framework. First, the framework is applied to solve a FSI benchmark problem previously developed by Turek and Hron <cit.>. The accuracy of the current method is evaluated and compared against that obtained by the conventional nonlinear MOR, CAE. Then, the proposed framework is examined on a wing section that undergoes limit cycle oscillation (LCO). LCO analysis is performed to evaluate the accuracy of the proposed framework on the nonlinear multiphysics phenomenon. Last, the applicability of LSH-VAE to a system with many DOFs is demonstrated by analyzing a three-dimensional fluid flow.
The numerical results presented in this paper are obtained by intentionally sampling a small number of initial FOM results. Sparse sampling is performed because ANN replicating its training data often leads to enough accuracy when the sampling is performed densely. In addition, sparse sampling is attempted as dense and iterative computations on a nonlinear system with many DOFs are rather unrealistic.
For all of the results, the same LSH-VAE network is used for each variable of interest. The hyperparameters used for the training are shown in Table <ref>. In Table <ref>, the first value for the latent dimension criterion denotes the latent dimension in which the interpolation is performed. The latter value denotes the latent dimension used for information sharing between the encoder and decoder networks. The LSH-VAE used for the following numerical results consists of 7 encoder and decoder blocks, with a total of 107 layers. While detailed optimization of the hyperparameters would yield better accuracy, such a procedure is not performed, to emphasize the generality of the framework. However, different batch sizes are used considering the number of DOFs, limited by the VRAM of the GPU.
For all of the results presented in this paper, computations are carried out on AMD 3950X CPU to obtain the FOM results. ANN are trained using NVIDIA GeForce GTX 3090 GPU.
§.§ Turek-Hron FSI benchmark
§.§.§ Description of the analysis
The widely accepted FSI benchmark developed by Turek and Hron is described in this section <cit.>. The benchmark problem consists of a rigid cylinder with a diameter of 0.1 m and a highly flexible tail. The fluid flows from the inlet to the outlet, with laminar separation occurring behind the cylinder. The von Kármán vortex street created by the flow separation excites the tail, which exhibits a large deflection. A hyperbolic inlet profile is used to consider the no-slip wall boundary condition at the upper and lower boundaries of the computational domain. A detailed schematic regarding the analysis is shown in Fig. <ref>.
The current framework requires a few parametric initial FOM samples to extract the embedded patterns. For Turek-Hron FSI benchmark problem, seven initial FOM results are collected. The inflow speed was selected as a parameter and speeds ranging from 0.7 m/s to 1.3 m/s, in 0.1 m/s intervals were sampled. The FOM samples are analyzed using Navier-Stokes computational fluid dynamics (CFD) and finite element method (FEM) two-way FSI analysis provided in the commercial software, ANSYS. The flow field is discretized by 29,788 CFD nodes and the flexible body is discretized by 954 FEM nodes.
The ensemble of FOM results is constructed by collecting 2 s of the fully converged response in intervals of 0.01 s. The pre-acquired FOM ensemble is then subjected to interpolation by LSH-VAE shown in Table <ref>. After the training of LSH-VAE is completed, the latent code is interpolated. In the present case, the target parameter is selected as the unseen inflow speed of 0.95m/s. The latent code corresponding to 0.95m/s is acquired by the slerp interpolation shown in Algorithm <ref>. The interpolated latent code is then decoded by the decoder network where the resultant interpolated variables are generated.
§.§.§ Accuracy and efficiency
The accuracy of the current framework is assessed by comparing the results of the ROM against those obtained with the FOM. Five physical variables, dX, dY, u, v, and p are considered for interpolation in this case. Among them, the first two variables denote the grid deformation in x- and y-direction. Using the interpolated variables, the interpolated FSI field will be constructed. The interpolated FSI field and FOM are shown in Fig. <ref>.
Evaluation of the results shown in Fig. <ref> verifies that the proposed framework is reasonably accurate. Subsequently, the accuracy of LSH-VAE is compared against that of CAE and β-VAE. For comparison, the CAE and β-VAE networks are constructed using the same hyperparameters that were used for LSH-VAE. The comparison between CAE, β-VAE, and LSH-VAE is performed by comparing the extent to which their results differed from those of FOM. The discrepancy contours of various networks are shown in Fig. <ref>. The minimum and maximum of each variable are matched for the respective variable.
Overall, LSH-VAE exhibits the smallest discrepancy while β-VAE performs the worst. Interestingly, the regions that exhibit a relatively larger discrepancy are found to be quite similar for all of the networks. This is caused by the finite number of latent dimensions considered in the generative networks. Small details of the FOM would have been neglected in the finite latent representation, which leads to discrepancies in similar areas. Another point to note is that the pressure contours of CAE and β-VAE show a considerably larger discrepancy compared against that of LSH-VAE. This is caused by the large variation between the maximum and minimum values of the pressure. The inability of CAE and β-VAE to generate an expressive output when the variation is large is considered to be the reason that small details are neglected.
Then, the efficiency of the proposed framework is assessed. The computational procedures for the proposed framework comprise four stages and the computational time required for each stage is listed in Table <ref>. For Turek-Hron FSI problem, each FOM query requires 109.0 h whereas the online stage consumes 0.11 h. The proposed framework therefore exhibits a speed-up factor of 990 for each unseen parametric estimation. The expected computational time in terms of the number of computations is shown in Fig. <ref>.
§.§ Limit cycle oscillations
§.§.§ Description of the analysis
Limit cycle oscillation (LCO) is a nonlinear periodic oscillation with limited amplitude on an aerodynamic surface. LCO of an aircraft is a highly nonlinear FSI phenomenon that is caused by nonlinearities in both the fluid and structure. Typical causes of LCO include flow separation, transonic shock, geometric nonlinearity, and nonlinear stiffness of the control surface. For an aircraft, LCO may result in structural fatigue in the wings, thus requiring high-fidelity analysis for safety.
During the design stage of an aircraft, iterative LCO analysis is performed to satisfy the vibration criterion. Such parametric LCO analysis is considered to be quite cumbersome and tedious as it is highly nonlinear and involves many DOFs. In this section, the proposed framework is used to conduct a simplified nonlinear parametric LCO analysis of a wing section.
The wing section considered in this analysis is derived from that reported by O'Neil et al. <cit.>, in which a two-dimensional wing section is constrained by pitch and heave springs as shown in Fig. <ref>. The pitch and heave stiffnesses are nonlinear in their cubic terms, as expressed in Eq. <ref>. LCO is caused by the cubic stiffness of the structure and is observed at inflow speeds of 15.5 m/s to 50 m/s.
K_α = 2.57(α+500α^3)
K_h = 0.09(h+2860h^3)
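A small sketch of these restoring terms is given below; it simply evaluates the cubic pitch and heave stiffness expressions above, with function names chosen here for illustration (units follow the original reference).

```python
def pitch_restoring(alpha):
    """Nonlinear pitch restoring term, K_alpha = 2.57*(alpha + 500*alpha**3)."""
    return 2.57 * (alpha + 500.0 * alpha**3)

def heave_restoring(h):
    """Nonlinear heave restoring term, K_h = 0.09*(h + 2860*h**3)."""
    return 0.09 * (h + 2860.0 * h**3)
```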
The inflow speed is chosen as the parameter in this analysis. The initial FOM samples are collected by adjusting the inflow speed from 20 m/s to 45 m/s in increments of 5 m/s. The relevant flow field is discretized by 19,381 nodes and solved using the commercial Navier-Stokes solver, ANSYS. The initial FOM samples are obtained by collecting 2 s of the fully converged response in intervals of 0.01 s. The FOM ensemble is subjected to MOR and interpolation by LSH-VAE.
After LSH-VAE is trained, the latent code for the desired parameter is acquired via slerp interpolation. The target parameter is an unseen inflow speed of 32.5 m/s, and the corresponding latent code is interpolated using Algorithm <ref>. The interpolated latent code is then decoded by the decoder and the interpolated FSI field is generated.
§.§.§ Evaluation of accuracy and efficiency
The accuracy of LSH-VAE is assessed by comparing the ROM results against those produced by the FOM. In this case, the five physical variables discussed in the previous section are considered. The interpolated variables are used to generate the FSI field; the interpolated FSI field and the FOM are shown in Fig. <ref>.
In Fig. <ref>, the interpolated FSI field constructed by LSH-VAE is found to be accurate. The accuracy of LSH-VAE is then compared against that of CAE and β-VAE. The discrepancy contours for LSH-VAE, CAE, and β-VAE are shown in Fig. <ref>. As before, the contours of each variable share the same minimum and maximum color-scale limits.
Similar to the Turek-Hron problem, LSH-VAE exhibits the smallest discrepancy; however, in this case, β-VAE performs better than CAE. For dX, all networks exhibit a similar discrepancy, as the wing section is constrained in the x-direction. Only the pitching motion affects the deformation of the surrounding grid in the x-direction, resulting in a small variation. dY, however, shows different behavior. The discrepancy is spread evenly as the wing heaves, and LSH-VAE shows a significantly reduced discrepancy. Another important point to note is that the discrepancy in the pressure is quite small. This is due to the stagnation point, which creates a concentrated high-pressure region.
The efficiency of the proposed framework is also assessed. The computational time required for each stage is summarized in Table <ref>. The offline FOM computation requires 280.1 h, including the six initial FOM sample computations. LSH-VAE training requires 3.52 h for the five variables of interest, resulting in a total offline stage of 283.6 h. For the online stage, FSI field reconstruction and saving to disk requires the most time, at 0.06 h. The present framework exhibits a speed-up factor of 660 for each unseen parametric estimation.
The expected computational time in terms of the unseen parametric queries is shown in Fig. <ref>.
§.§ Three-dimensional fluid flow
§.§.§ Description of the analysis
Finally, the fluid flow surrounding a simple stationary three-dimensional (3D) cylinder is analyzed. The analysis of the 3D fluid serves to demonstrate the use of the proposed framework on a system with a significant number of DOFs. A 3D cylinder with a diameter of 1 m is subjected to a uniform inflow, as shown in Fig. <ref>. Similar to the Turek-Hron FSI benchmark, a von Kármán vortex street is formed behind the cylinder. For the CFD analysis, a cuboid computational domain of 20 m × 10 m × 10 m is discretized into 1,121,000 tetrahedral elements. The Reynolds number of the inflow is varied from 100 to 160 in intervals of 10.
The initial FOM samples are obtained by using the ANSYS Navier-Stokes solver, and 2 s of FOM data are collected in intervals of 0.01 s. The LSH-VAE is then trained against the FOM ensemble, and interpolation is performed with respect to the parameter.
After LSH-VAE is trained, the latent code representing the targeted parameter is acquired. The target parameter is selected as an unseen inflow Reynolds number of Re = 125. The latent code corresponding to Re = 125 is acquired by the interpolation shown in Algorithm <ref>. The interpolated latent code is then decoded and the resultant interpolated flow field is generated.
§.§.§ Evaluation of the accuracy and efficiency
The accuracy of LSH-VAE is assessed by comparing the results of the ROM with those obtained using the FOM. In this case, four physical variables, u, v, w, and p, are considered for the interpolation. Using the interpolated variables, the interpolated flow field is generated. The interpolated and original flow fields are displayed in Fig. <ref>.
The interpolated flow field constructed by LSH-VAE is found to be quite accurate. In particular, the velocity in the z-direction, w, is accurately interpolated even though w exhibits quite a complex response. As the primary physical variables are interpolated well, the relationship between the variables is inspected next. A comparison against CAE and β-VAE is not conducted in this case, as the large number of DOFs caused instability in those networks. Instead, the normalized Q-criterion is considered to assess whether the interpolated flow field preserves its vorticity. In Fig. <ref>, the normalized Q-criterion is obtained using the interpolated variables shown in Fig. <ref>. Figure <ref> shows the iso-surface generated based on the normalized Q-criterion; the iso-surface is colored by u-velocity and pressure for visualization.
The good agreement in terms of the Q-criterion indicates that LSH-VAE interpolates the direct variables sufficiently well such that the relationship between variables may be well preserved.
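The sketch below shows one way to evaluate the Q-criterion from the interpolated velocity components. It assumes the fields have been resampled onto a uniform structured grid (the tetrahedral CFD mesh used here would require resampling or an unstructured gradient operator first), so it is an illustrative post-processing step rather than the exact pipeline of this work.

```python
import numpy as np

def q_criterion(u, v, w, dx, dy, dz):
    """Q = 0.5*(||Omega||^2 - ||S||^2) from velocity components sampled on a
    uniform structured grid with spacings dx, dy, dz."""
    grads = [np.gradient(f, dx, dy, dz) for f in (u, v, w)]  # grads[i][j] = du_i/dx_j
    q = np.zeros(np.shape(u))
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grads[i][j] + grads[j][i])    # strain-rate tensor S_ij
            om = 0.5 * (grads[i][j] - grads[j][i])   # rotation-rate tensor Omega_ij
            q += 0.5 * (om ** 2 - s ** 2)
    return q

# Dividing q by, e.g., its maximum absolute value gives a normalized
# Q-criterion suitable for iso-surface extraction.
```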
Lastly, the efficiency of the present framework is assessed. The computational time required for each stage is listed in Table <ref>. The offline FOM computation requires 193.7 h, including the seven initial FOM samples. LSH-VAE training requires 11.3 h, resulting in a total offline stage of 205.0 h. For the online stage, variable reconstruction and writing to disk requires the most time, at 2.02 h. The proposed framework exhibits a speed-up factor of 14 for each unseen parametric estimation.
The expected computational time in terms of queries is as shown in Fig. <ref>.
§ CONCLUSIONS
This paper proposes a nonlinear data-driven parametric MOR framework based on a neural network. The present framework adopts a novel neural network, LSH-VAE, to perform parametric MOR and interpolation. The present validations demonstrate that LSH-VAE is capable of parametric interpolation of dynamic systems while significantly reducing the computational time. The following results are obtained in this study.
* A novel machine-learning method, LSH-VAE, is developed for nonlinear MOR and the parametric interpolation of nonlinear, dynamic systems.
* LSH-VAE is assessed on three nonlinear and multiphysics dynamic systems with many DOFs. The proposed framework is proven to be accurate and to significantly reduce the computational time.
* Compared against the existing nonlinear MOR methods, convolutional autoencoder and β-VAE, LSH-VAE demonstrates significantly higher accuracy.
The performance of LSH-VAE is assessed on three nonlinear dynamic systems: the FSI benchmark, LCO, and three-dimensional flow. For all of the systems, LSH-VAE is capable of constructing an accurate parametric ROM. In particular, LSH-VAE exhibits significantly enhanced accuracy compared to CAE and β-VAE. LSH-VAE is also found to be effective in that it not only interpolates the variables well, but it also interpolates the vorticity, which is embedded in the patterns of the variables, with high accuracy. On top of the accurate parametric MOR, LSH-VAE exhibits speed-up factors of 990, 660, and 14 for the three systems, respectively.
Such results are possible owing to the improvements in LSH-VAE. First, it adopts a hierarchical structure that enables a much deeper and more stable network. Second, it adopts a hybrid weighted loss function consisting of the mean-squared error and the KL divergence. The use of the mean-squared error improves the performance on continuous datasets, while the hybrid weights reduce posterior collapse. Lastly, the use of slerp interpolation instead of linear interpolation in the latent space significantly enhances the interpolation quality by following the complex latent manifolds.
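A minimal form of such a hybrid weighted loss is sketched below in PyTorch for a single latent level. The weight value, the averaging convention, and the reduction to one KL term are assumptions for illustration; the actual LSH-VAE uses a hierarchy of latent levels and its own weighting scheme.

```python
import torch

def hybrid_weighted_loss(x_hat, x, mu, logvar, kl_weight=1e-3):
    """Mean-squared reconstruction error plus a weighted KL divergence term."""
    mse = torch.mean((x_hat - x) ** 2)
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return mse + kl_weight * kl
```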
However, there still exist a few challenges to be dealt with. First, LSH-VAE may require a significant amount of video random access memory (VRAM) when it is applied to systems with an extensive number of DOFs. The excessive VRAM requirement stems from its deep structure. By adopting a deep structure, LSH-VAE is capable of generating an expressive result at the cost of training an extensive number of learnable nodes. The excessive VRAM requirement necessitated limiting the batch size for the 3D fluid flow example. Yet, VRAM limitations may be alleviated by adopting parallel computing and utilizing many GPUs. Splitting the DOFs into several groups and merging them after interpolation may also be considered as a solution. Second, extrapolation is limited in the proposed framework. Accurate extrapolation would require dense sampling in the parametric space. However, the construction of a ROM with sufficiently dense sampling, accompanied by an effective latent manifold tracking method, would make reasonable extrapolation viable. Finally, the effectiveness of the proposed framework decreases as the FOM becomes simpler and more DOFs are involved. An example of this tendency is the 3D fluid flow case, where the speed-up factor diminishes to 14, compared to 990 and 660 in the previous cases.
In the future, the plan is to extend the evaluation of the proposed framework to various multiphysics problems, such as the analysis of heat-structure systems. Considering that the present framework is purely data-driven, LSH-VAE is expected to be used in its current form. In addition, multi-parametric analysis coupled with sampling algorithms such as Latin hypercube sampling will be attempted by adopting conditional tokens in the latent space.
Acknowledgments
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (2023R1A2C1007352).
§ DECLARATIONS
The authors declare that they have no conflict of interest.
|
http://arxiv.org/abs/2307.05652v1 | 20230711140430 | Simultaneous study of scattering and fusion hindrance near Coulomb barrier in $F+Pb$ systems | [
"Kamala Kanta Jena",
"Bidhubhusan Sahu",
"Jajati K. Nayak",
"Raj Preethi P",
"B. K. Sharma",
"Santosh Kumar Agarwalla"
] | nucl-th | [
"nucl-th",
"nucl-ex"
] |
Simultaneous study of scattering and fusion hindrance near Coulomb barrier in F+Pb systems
Kamala Kanta Jena, Bidhubhusan Sahu, Jajati K. Nayak, Raj Preethi P, B. K. Sharma, Santosh Kumar Agarwalla
August 12, 2023
=================================
A phenomenological optical potential is used to study the elastic angular distributions for the system ^19F+^208Pb close to the Coulomb barrier. This potential is constructed by taking into account the flexible potential developed by Ginocchio. The fluctuations in the real and imaginary parts of the optical model potential follow the trends of the threshold anomaly. The set of optical potential parameters needed to analyze the fusion cross sections of the same system is obtained through analysis of the scattering cross sections. The theoretical fusion cross-sections agree well with the results from four different experimental groups over a range of energies. Several fluorine (F) isotopes are used as projectiles in this study of fusion cross-sections by slightly altering the radial parameter. It is found that the fusion process occurs unfettered in the ^19F+^208Pb system below the Coulomb barrier but is seriously hindered in the case of its other isotopic projectiles.
Keywords: Optical potential, elastic scattering, threshold anomaly,
fusion cross-section
PACS: 25.70.– z,25.70.Jj, 25.70.Bc
§ INTRODUCTION
To examine various nuclear characteristics, the analysis of experimental data from nucleus-nucleus scattering using an optical model has proven successful. In optical model analysis, phenomenological nuclear potentials like Woods-Saxon (WS), Gaussian, modified WS, and many more are employed. In heavy-ion elastic scattering, the variation in the real and imaginary components of an optical potential near the Coulomb barrier is seen as a crucial characteristic. Threshold anomaly (TA) is an interesting phenomenon, observed in systems with heavy projectiles, in which the real component of the potential is practically constant at higher energies but rapidly increases as the incident energy gets closer to the Coulomb barrier. When the incident energy is below the barrier, it slowly starts to decline after reaching its maximum at the barrier. Thus, around the barrier, the variation takes a bell shape. The imaginary part, on the other hand, shows a nearly constant magnitude at higher energies but decreases to a low value <cit.> in the same vicinity of the barrier. In other words, when the collision energy rises above the top of the Coulomb barrier, the strength of the imaginary potential rises rapidly and then its value becomes nearly constant. The maximum value of the real part can be twofold the constant value it assumes at higher energies <cit.>. This anomalous variation is caused by the coupling of several elastic and quasi-elastic reaction channels. It is explicable by the dispersion relation developed by Byron and Fuller <cit.> using the causality principle. This study demonstrates how an optical potential with a noticeably small imaginary component is consistent with the occurrence of TA and explains the fusion cross-section. In our discussions, we consider a semi-classical heavy-ion elastic collision system, ^19F + ^208Pb, whose experimental results may be interpreted in terms of an optical model by employing a complex potential with the appropriate parameterization. Lin et al. <cit.> carried out experimental measurements and theoretical analyses of the system. The angular distributions were observed with a fluorine beam (^19F) at six energies ranging from 80.6 MeV to 93.5 MeV in the center-of-mass frame. We analyze the outcomes for the same energy range to broaden our investigation of elastic scattering with a focus on TA. For TA analysis, mostly spherical nuclei have been investigated <cit.>; we work on one that is deformed. A ^19F nucleus with sizable static deformation is present in the system ^19F + ^208Pb <cit.>. To explore nuclear characteristics, ^19F has also been utilized as a projectile
<cit.> for decades in several elastic scattering experiments with various targets and incident energies. All these facts motivate us to choose the projectile ^19F and demonstrate the versatility of our potential.
We use a phenomenological optical potential <cit.> based on a short-ranged, smooth, and analytically solvable asymmetric potential developed by Ginocchio <cit.>, which possesses the versatility to control the volume and surface regions of the potential. The optical potential involves significantly fewer parameters. The experimental results of the ^16O+^28Si and ^12C+^24Mg systems were fairly explained by G. S. Mallick et al. <cit.> over a wide range of energy by using this potential. The interesting feature of our potential is the neck structure near the Coulomb barrier. This non-trivial feature helps us match theoretical results with experimental data. The potential is consistent with the presence of the 'threshold anomaly' because of the fast rise of the imaginary part and the rapid fall of the real part of the potential as the incident energy rises above the Coulomb barrier. The optical potentials used by most researchers involve large imaginary parts, and the absorption of a major share of partial waves cannot be avoided in such cases. The imaginary parts remain below 12% of their corresponding real parts in Ref. <cit.> for incident energies of 80.6-85.2 MeV but exceed 29% for 87.9-93.5 MeV. The fact that large imaginary parts substantially destroy the resonance states generated by the volume part of the effective potential cannot be ruled out. Hence, a small imaginary part may be more convincing. In this work, we use a potential whose imaginary part is very small compared to its real part.
We extend the applicability of our optical potential to fusion as well. We use the potential to analyze fusion cross-section data obtained from four different experiments performed by D. J. Hinde et al. <cit.>, B. B. Back et al. <cit.>, K. E. Rehm et al. <cit.>, and Zhang Huanqiao et al. <cit.> for the same collision system ^19F + ^208Pb but over different energy ranges. The analysis of fusion cross-sections involves mostly the same set of parameters used for the analysis of elastic scattering cross-sections.
Regarding fusion hindrance at sub-barrier energies for drip-line nuclei, conceptual and experimental developments in physics have expanded the periodic table (which now includes 118 elements) and made superheavy elements (SHEs) available to humankind. According to conventional wisdom, elements with more than 104 protons should not exist, since an element with a vanishing fission barrier would undergo spontaneous fission. Yet quantum shell effects stabilize these elements and lead to the formation of SHEs with distinct features. Even though the fusion process between massive nuclei has been well studied thus far, the fusion probability depends on the charge product Z_PZ_T of the projectile and the target: as the charge product grows, the Coulomb repulsion between them grows, decreasing the likelihood of fusion. Nevertheless, nuclear reactions involving heavy ions are utilized to produce SHEs, and the formation of SHEs involves both cold fusion and hot fusion processes.
The doubly magic nucleus ^208Pb is employed as a target in cold fusion reactions together with the suitable projectile. The one-dimensional barrier penetration model, which takes into consideration the coupling of inelastic excitations, has been noted to accurately represent the fusion cross section for charge products smaller than 1800. On the other hand, in contrast to the model's calculated results, the fusion cross section is hampered when the charge product is greater than 1800 <cit.>. But in addition to the charge product, other factors that affect nuclear fusion between heavy nuclei include the nuclear structures of the projectile and target. According to reports, the fusion probability is significantly influenced by the number of valence nucleons outside of a major shell closure <cit.>. In the fusion processes, ^130Xe+^86Kr and ^136Xe+^86Kr, where the nucleus ^136Xe has a closed neutron shell N=82 and the neutron number of the nucleus ^130Xe is 76, six neutrons fewer than the closed shell, the evaporation residue cross sections were determined by Oganessian et al. <cit.> in 1987. They discovered that, in the vicinity of the Coulomb barrier, the measured evaporation residue cross sections for the fusion process ^136Xe+^86Kr are about two to three orders of magnitude greater than those for the fusion reaction ^130Xe+^86Kr. The enhancement of the evaporation residue cross sections near the Coulomb barrier region between the double closed shell nuclei ^208Pb and ^48Ca is also pointed out by Oganessian et al. <cit.> in 2001. The dependence of fusion on the nuclear shell structure was investigated by K. Satou et al. <cit.> in 2002 for the two reaction systems ^82Se+^138Ba and ^82Se +^134Ba, where the nucleus ^138Ba has a closed neutron shell N=82 while the nucleus ^134Ba has a neutron number N=78; four neutrons less than the closed shell. The fusion reaction ^82Se+^138Ba takes place without hindrance, but ^82Se+^134Ba fusion is significantly hindered, as is typically observed in major reaction systems with the charge product Z_PZ_T ≥ 1800 of the projectile and target. These results suggest that a crucial part of the low-energy fusion process involves the shell structure. We analyze the isotopic dependence in the ^19-23F+^208Pb systems to realize how the nuclear shell structure affects the fusion process.
The paper is organized as follows. Section 2 discusses the formulation of the optical model based on the Ginocchio potential. Section 3 explains the application of our optical potential to the elastic scattering of the tightly bound projectile ^19F by the ^208Pb target at energies near the Coulomb barrier and the TA phenomenon thereof; the fusion cross-sections for this system are also presented and compared with the data from the various experiments. Finally, the summary and conclusions are presented in Section 4.
§ FORMULATION OF THEORY
The phenomenological optical potential used here is based on a short-ranged, smooth, and analytically solvable asymmetric potential developed by Ginocchio <cit.> and used by others <cit.>, which possesses the versatility to control the volume and surface regions. A potential describing the nucleus-nucleus interaction usually consists of the Coulomb potential V_C(r), due to the electric charges of the two nuclei, and the nuclear potential V_N(r). Taking the centrifugal force into account, the effective potential V_eff(r) for a nucleus-nucleus collision with reduced mass μ and orbital quantum number l can be described by Eq. 1, in which the last term represents the centrifugal potential.
V_eff(r) = V_N (r) + V_C(r) + l(l+1)ħ^2/2μ r^2
The nuclear part V_N(r) is an optical potential. It is taken from Ref. <cit.> with the parameter value λ=1, where the λ parameter is responsible for the flatness of the potential. The nuclear potential V_N(r) is the most important part of the effective potential and is not uniquely determined to date. However, V_N(r), as has been argued in many articles, takes a complex form to describe the experimental observations. We also consider V_N(r) to be complex and represent it as V_N(r) = V_n(r) + iW_n(r). The variable μ represents the reduced mass of the projectile-target system and is defined as μ = m_P m_T/(m_P + m_T), where m_P and m_T are the masses of the projectile and target, respectively.
Following the potential developed in Ref.<cit.>, which is further simplified in Ref.<cit.>, we consider the real part V_n(r) of the potential as given in Eq.2 by putting λ=1.
V_n(r)= {[ -V_B/B_1[ B_0+(B_1-B_0)(1-y_1^2) ] if 0<r<R_0; ; -V_B/B_2[ B_2(1-y_2^2)] if r≥ R_0; ].
On substituting y_n=tanhρ_n, ρ_n=(r-R_0)b_n, and V_B = V_01 B_1 = V_02 B_2, we find
V_n(r)= {[ -V_01[B_0+(B_1-B_0)/cosh^2ρ_1] if 0<r<R_0; ; -V_02[B_2/cosh^2ρ_2] if r≥ R_0; ].
Here, V_B is the height of the barrier. The slope parameter b_n is given by b_n=√(2μ V_B/ħ^2) B_n, in which n = 1 or 2. The radial distance R_0 in the surface region is close to the radial position of the effective S-wave barrier potential. The depths of the potential at the origin and at R_0 are controlled by the parameters B_0 and V_B, respectively. The slope parameter b_n on either side of R_0 depends on B_n and V_B. The parameters B_0 and B_1 specify the potential for r≤ R_0. V_01 specifies the strength in that region and is given by
V_01 = V_B/[λ_1^2 B_1 + (1-λ_1^2)/2] = V_B/B_1 for λ_1 =1.
The parameter λ_1 controls the flatness of the potential in the region r≤ R_0. Similarly, the parameters B_2 and V_B specify the potential for the region r > R_0, where R_0 is the sum of the radii of the two interacting nuclei, i.e., R_0= r_0(A_P^1/3 + A_T^1/3)= R_1 + R_2. The quantities ρ_1 and ρ_2 are the transformed distance variables, given by ρ_n = (r-R_0)√(2μ V_B/ħ^2) B_n= (r-R_0) b_n.
With the above considerations, the real part V_n(r) of the optical potential for the collision system ^19F + ^208Pb is depicted in Fig.1. The potential has two regions: volume and surface. The two parts of the potential corresponding to the volume and surface regions are connected at r = R_0, satisfying analytic continuity.
Unlike the monotonic fall with r of a nuclear potential of standard Woods-Saxon form, our optical potential shows a neck formation near r = R_0. The optical potential consists of two analytically solvable regions, namely, the volume region and the surface region. The regions are smoothly joined near r = R_0, forming a neck-like structure. We refer to this location as an analytic junction <cit.>, where the two regions of the potential meet each other. As the name suggests, the junction is analytically tractable and the Schrödinger equation can be solved there. The structure appears unusual, but our construction ensures that the two parts of the potential, and their derivatives with respect to r, are equal at the meeting point, satisfying analytic continuity. This new feature helps us suitably explain the differential scattering cross-sections and fusion cross-sections over a wide range of energies. The feature also enables us to capture the effects of frictional forces, resonance in the formation of a composite binuclear system, and the transfer of one nucleon or a cluster of nucleons from the target to the projectile and/or vice versa in this configuration, when the bombarding nuclei touch each other in the surface region around r = R_0.
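For concreteness, a minimal numerical sketch of Eq. 3 is given below. The slope parameters b_1 and b_2 are passed in directly, and all numbers in the example call are illustrative placeholders rather than the fitted values of Table I.

```python
import numpy as np

def real_potential(r, V_B, B0, B1, B2, b1, b2, R0):
    """Real part V_n(r) of the optical potential, Eq. 3 with lambda_1 = 1."""
    r = np.asarray(r, dtype=float)
    V01, V02 = V_B / B1, V_B / B2
    rho1, rho2 = (r - R0) * b1, (r - R0) * b2
    inner = -V01 * (B0 + (B1 - B0) / np.cosh(rho1) ** 2)   # volume region, r < R0
    outer = -V02 * (B2 / np.cosh(rho2) ** 2)               # surface region, r >= R0
    return np.where(r < R0, inner, outer)

# Illustrative evaluation on a radial grid (slope parameters are placeholders).
r = np.linspace(0.1, 20.0, 400)
V = real_potential(r, V_B=2.2, B0=118.0, B1=1.0, B2=0.6, b1=0.5, b2=0.8, R0=12.5)
```

At r = R_0 both branches reduce to -V_B, so the two regions join continuously; the imaginary part W_n(r) of Eq. 4 has the same functional form with its own parameter set.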
The form of the imaginary part W_n(r) is similar to that of the real part, but its strength differs. The imaginary part is weaker in strength than the real part; i.e., the real part is very deep, whereas the imaginary part is comparatively shallow. With the substitution V_0nW=V_BW/W_n, the imaginary part is given by Eq. 4, and its behaviour is plotted in Fig.2 for a suitable set of parameters.
W_n(r)= {[ -V_01W[W_0+(W_1-W_0)/cosh^2ρ_1] if 0<r<R_0W; ; -V_02W[W_2/cosh^2ρ_2] if r≥ R_0W; ].
The parameter W_0 represents the depth of the imaginary potential at the origin, and V_BW controls its depth at R_0W. The other two parameters, W_1 and W_2, are slope parameters: W_1 specifies the potential for r ≤ R_0W, whereas W_2 specifies the potential for r > R_0W. We use a set of these parameters to represent the imaginary part in Fig.2. The Coulomb potential for the interacting projectile and target nuclei is given by Eq. 5 as follows.
V_C(r)= {[ Z_P Z_T e^2(3R_C^2-r^2)/(2R_C^3) if r<R_C; ; Z_P Z_T e^2/r if r > R_C; ].
Here, R_C = r_C (A_P^1/3 + A_T^1/3), where A_P and A_T are the mass numbers of the projectile and target nuclei, respectively, and Z_P and Z_T are their atomic numbers. The value of the Coulomb radius parameter r_C is taken to be 1.33 fm. For orbital quantum number l = 0 the centrifugal term vanishes, and Eq. 1 gives the effective potential as:
V_eff(r)=V_n(r)+iW_n(r) +V_C(r)
The real part of the effective potential V_eff(r) is depicted in Fig.3 with the set of parameters considered earlier for the real part in Fig.1. With the above effective potential for the various partial waves (l), we solve the following Schrödinger equation to obtain the total scattering amplitude f(θ).
[-ħ^2/2μ∇^2 + V_eff(r) ]ψ( r⃗)=E ψ(r⃗)
The total scattering amplitude f(θ) is expressed as the sum of the Coulomb scattering amplitude f_C(θ) and the nuclear scattering amplitude f_N(θ):
f(θ)=f_C(θ)+f_N(θ)
The amplitudes f_N(θ) and f_C(θ) have expansions as follows.
f_N(θ)=1/2ik∑_l(2l+1)e^2iσ_l(e^2iδ̅_̅l̅-1)P_l(cosθ)
f_C(θ)=1/2ik∑_l(2l+1)(e^2iσ_l-1) P_l(cosθ)
Here k is the magnitude of the wave vector, σ_l is the Coulomb phase shift, and δ̅_l is the nuclear phase shift. The ratio of the measured elastic scattering cross-section to the Rutherford scattering cross-section is given by
dσ_el/dσ_Ruth=|f(θ)/f_C(θ)|^2
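The sketch below evaluates Eq. 11 from a set of nuclear phase shifts. It is illustrative only: the phase shifts δ̅_l are placeholder inputs (in the paper they follow from solving Eq. 7), and the point-Coulomb amplitude is taken in its standard closed form rather than from the slowly converging partial-wave sum of Eq. 10.

```python
import numpy as np
from scipy.special import loggamma, eval_legendre

def rutherford_ratio(theta, k, eta, delta_l):
    """|f(theta)/f_C(theta)|^2 for angles theta (rad, array), wave number k (fm^-1),
    Sommerfeld parameter eta, and complex nuclear phase shifts delta_l."""
    l = np.arange(len(delta_l))
    sigma_l = np.imag(loggamma(l + 1.0 + 1j * eta))          # Coulomb phase shifts
    P_l = eval_legendre(l[None, :], np.cos(theta)[:, None])  # Legendre polynomials

    # Point-Coulomb amplitude in closed form.
    s2 = np.sin(theta / 2.0) ** 2
    f_C = -(eta / (2.0 * k * s2)) * np.exp(-1j * eta * np.log(s2) + 2j * sigma_l[0])

    # Coulomb-modified nuclear amplitude, Eq. 9.
    f_N = (1.0 / (2j * k)) * np.sum(
        (2 * l + 1) * np.exp(2j * sigma_l)
        * (np.exp(2j * np.asarray(delta_l)) - 1.0) * P_l, axis=1)

    return np.abs((f_C + f_N) / f_C) ** 2
```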
Summing over the partial waves l, with S_l the S-matrix element of the l-th partial wave, the elastic scattering cross-section σ_el and the reaction cross-section σ_r are given by Eq. 12 and Eq. 13, respectively, as follows.
σ_el=π/k^2∑_l(2l+1)|1-S_l|^2
σ_r=π/k^2∑_l(2l+1)T_l (E)=π/k^2∑_l(2l+1) (1-|S_l|^2)
Here, T_l (E) = 1-|S_l|^2 is known as the transmission coefficient for orbital angular momentum l. The fusion cross section is given by σ_fus=π/k^2∑_l(2l+1)P_l ^F, where P_l ^F is the fusion (absorption) probability. The wave function, in the case of fusion, is expected to be absorbed completely inside the barrier; hence the fusion probability can be assumed to be close to the probability that the incident current reaches the point of total absorption.
Therefore, the fusion probability is P_l ^F ∼ T_l (E) = 1-|S_l|^2, and thus we have σ_r = σ_fus. Based on the above theory and potential, the results of elastic scattering and fusion are discussed.
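A direct numerical transcription of the fusion cross-section expression above is sketched below; the S-matrix elements are assumed to be available from the optical-potential calculation, and the unit conversion assumes k is given in fm^-1.

```python
import numpy as np

def fusion_cross_section(k, S_l):
    """sigma_fus = (pi/k^2) * sum_l (2l+1) * (1 - |S_l|^2), returned in mb."""
    S_l = np.asarray(S_l)
    l = np.arange(len(S_l))
    T_l = 1.0 - np.abs(S_l) ** 2          # transmission coefficients
    sigma_fm2 = (np.pi / k**2) * np.sum((2 * l + 1) * T_l)
    return sigma_fm2 * 10.0               # 1 fm^2 = 10 mb
```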
§ RESULTS
While analyzing the angular distribution cross-sections of the elastic scattering ^19F+^208Pb, C. J. Lin et al. <cit.> used a Woods-Saxon-based optical potential in which the imaginary potential depths are approximately 29-60% or more of the real part to obtain the best fit of the data for the energy span 87.9-94.0 MeV. The minimum value of the real part differs by 21% from the maximum value, and the difference is more than 96% in the case of the imaginary parts over the same range of incident energy. Such scattered values raise uncertainty in concluding a threshold anomaly, and imaginary values that are large relative to the real parts may suppress resonance states of the system generated by the effective potential. We use a substantially smaller imaginary-to-real ratio to reproduce the angular distribution cross-sections at different incident energies. The success of the potential constructed from the Ginocchio potential <cit.> motivates us to explain the experimental data in this optical model analysis. The variations in the real and imaginary parts near the Coulomb barrier are studied to realize the threshold anomaly. The results are presented in the following sub-sections.
§.§ Analysis of scattering cross-sections
The laboratory energies for elastic scattering of the collimated ^19F beam by the ^208Pb target are taken at 88, 91, 93, 96, 98, and 102 MeV, which are equivalent to 80.6, 83.4, 85.2, 87.9, 89.8, and 94.0 MeV, respectively, in the center-of-mass frame. The system ^19F + ^208Pb has the Coulomb barrier at about E_c.m.= 84 MeV. The energy-dependent parameters (both real and imaginary) of the optical potential are presented in Table-I for the best fit. The calculated angular distribution of elastic cross-sections is compared with the experimental values in Fig.4 for the given range of incident energies. The experimental values digitized with GSYS-2.4 are obtained from the source http://nrv.jinr.ru.
Six of the parameters are energy-independent. The radial distance R_0 in the surface region is kept constant with energy. The values of the energy-independent parameters are R_0 = 12.5 fm, R_0W = 12.8 fm, B_2 = 0.6, B_0= 118 MeV, W_0=1.5 MeV, and W_1=1.4 while matching theoretical results with experimental outcomes for the entire range of energies. Four parameters, namely B_1, V_B, W_2, and V_BW, mentioned in Table-I, vary with collision energy. We keep the Coulomb radius parameter r_C at 1.33 fm following the cited literature <cit.>. The theoretical results (solid red curves) are compared with the experimental data (green filled circles) obtained from Ref. <cit.>. The theoretical calculations fairly agree with the experimental values.
It is worth mentioning that the best fit takes a small imaginary part in comparison to the corresponding real part of the potential, which ensures less suppression of the resonance states generated by the effective potential. The real parts of the potential are taken to be the same, i.e., 118.0 MeV, for all six incident energies. The imaginary parts are also kept small and the same, i.e., 1.5 MeV, in comparison to their real counterparts. Thus, the ratio of the imaginary part to the real part remains the same, i.e., 0.0127, for all colliding energies. Such a small imaginary-to-real ratio helps not to substantially destroy the resonance states generated by the volume part of the effective potential. The need for a small imaginary part in an optical potential is explained with supporting graphs in sub-section 3.4.
§.§ Phenomenon of threshold anomaly
While reproducing the experimental results using theoretical calculations with the potential, we find variations in the real and imaginary parts of the potential near the Coulomb barrier. The variations are described in Fig.5. Proceeding from the lower collision energies, the real part first increases and then decreases in the vicinity of the Coulomb barrier, and ultimately saturates around 2.2 MeV at higher energies away from the barrier. On the other hand, the imaginary part remains almost constant at 1.2 MeV at higher energies but decreases in the vicinity of the Coulomb barrier as we move from higher to lower incident energies. The variation in the real part follows a bell-shaped dashed line (upper plot), whereas the variation in the corresponding imaginary part follows an L-shaped dashed line (lower plot). The bell-shaped and L-shaped curves are obtained from the dispersion relations in Ref. <cit.>, which describe the threshold anomaly. Thus, the theoretical calculations fairly agree with the TA phenomenon described in the reference.
§.§ Analysis of fusion cross-sections
Along with the angular distributions, the present analysis is extended with the potential to check the fusion cross-sections. Experimental data on the fusion cross-section (σ_fus) for the ^19F+^208Pb system are available from different groups through independent experiments. B. B. Back et al. <cit.> measured the fission fragment angular distributions in 1985 using the Argonne Superconducting LINAC in the energy range 100.8 to 174.0 MeV in the center-of-mass frame. The fission cross sections and angular distributions were measured in 1990 by Zhang Huanqiao et al. <cit.> with the HI-13 tandem Van de Graaff accelerator at bombarding energies from 75.51 to 137.5 MeV. The fusion-fission cross-sections were also measured in 1998 by K. E. Rehm et al. <cit.> using the ATLAS superconducting linear accelerator for the reaction at energies from 77.68 to 99.6 MeV. In 1999, D. J. Hinde et al. <cit.> were able to measure fission fragment cross sections and angular anisotropies to high accuracy for the reaction using the 14UD tandem electrostatic accelerator and LINAC in the energy range 76.0 to 144.7 MeV. The experimentally measured fusion cross-section values digitized with GSYS-2.4 are downloaded from the website http://nrv.jinr.ru.
To explain the fusion cross-section data, we use the same optical potential parameters obtained while matching the theoretical calculations with the corresponding angular distributions of the system at the incident energy of 94 MeV in the center-of-mass frame, except for R_0W and W_2. The value of R_0W has been altered from 12.8 fm to 8.2 fm and that of W_2 from 3.0 to 0.8 to explain the fusion data. Thus, the set of parameters to reproduce the fusion cross-sections is R_0=12.5 fm, R_0W =8.2 fm, B_1=1.0, B_2=0.6, V_B=2.2 MeV, V_BW=1.2 MeV, B_0= 118 MeV, W_0= 1.5 MeV, W_1=1.4, and W_2= 0.8. This set of parameters fairly explains the experimental values of the four independent experiments performed over different energy ranges. The results are shown in Fig.6, with the fusion cross-sections (expressed in mb) as a function of the center-of-mass energy (taken in MeV). It is challenging to find a unique potential that can address both of these phenomena simultaneously. R_0W in the case of fusion is taken to be 8.2 fm, which is less than the Coulomb barrier position (R_B = 12.8 fm), keeping the imaginary W_0 unchanged at 1.5 MeV to observe the structure phenomena. The value of R_0W (< R_B) confirms that fusion takes place only after the barrier has been fully penetrated. The other parameter, W_2, was reduced from 3.0 to 0.8 to observe the fusion phenomena. Finally, we obtain the required fusion cross-section, which is plotted in Fig.6. The structure effect is also visible here because ^208Pb is a doubly magic, shell-closure nucleus and ^19F is a relatively stable projectile. The binding energy per nucleon of ^19F is higher compared to other isotopes such as ^21F and ^23F. We do not find hindrance below the Coulomb barrier because of the shell closure of both the projectile and the target. As we move toward the neutron drip line, we see the hindrance phenomenon, as described in Fig.7.
It is commonly known that a projectile's binding energy has a significant impact on how easily it fragments into smaller parts, which has an impact on the fusion of weakly bound projectiles. It is difficult to calculate the dynamic polarization potential theoretically, thus up to now, TA has only been studied experimentally. Most of the examined systems are spherical or very close to spherical shape. The implications of nuclear structure on well-deformed systems have received very little attention in research. C. J. Lin et al. <cit.> used the system ^19F+^208Pb to investigate the role of deformed nuclei in fusion and TA responses. The projectile nucleus ^19F possesses <cit.> quite large static deformations (β_2 = 0.44 and β_4 = 0.14). The results of the fusion reaction <cit.> for the system have been carefully examined. We analyze the cross-sections of fusion reactions for the fluorine isotopes with the double shell closure of ^208Pb. The nucleus of a nearby projectile with double shell closure is ^16O. In the case of ^19F, one half-filled valence proton and two valence neutrons are present, and the same is evident for ^21F and ^23F nuclei. As we approach the drip-line nuclei, the two-neutron separation energy (S_2n) of fluorine continues to decrease. The magnitudes of S_2n for the isotopes ^19F, ^21F and ^23F are 19.582 MeV, 14.742 MeV, and 12.81 MeV respectively <cit.>. Moreover, the corresponding binding energies per particle (B.E./A) are 7.779 MeV, 7.738 MeV, and 7.622 MeV respectively. Again, Z=82 and N=126 in the target nucleus are double magic numbers. The isotopes of Fluorine (F) could split into ^16O and neutron clusters due to the valence neutrons' low binding energy.
Again, it is observed that the presence of magic shells in the entrance channel increases the probability of fusion <cit.>, because magic nuclei are difficult to excite, which lowers energy dissipation and facilitates the creation of a more compact di-nuclear system. So, when we move away from the shell closure, the 2n, 4n, and 6n evaporation residue cross-sections may indeed be enhanced close to the Coulomb barrier, which might account for the hindrance of the fusion cross-section shown in Fig.7. Nevertheless, it is beyond the scope of our approach to demonstrate the entire reaction cross-section, including quasi-fission and evaporation residues; we have only displayed the fusion cross-section here. Because of quasi-fission, the fusion probability could well be significantly suppressed below the Coulomb barrier, which might be the cause of the fusion hindrance <cit.>. Hence, the fusion of weakly bound projectiles is affected by the breakup channel coupling. Fig.7 illustrates how the breakup channel coupling affects the ^19-23F+^208Pb systems depending on the valence nucleon separation energy. As expected, the breakup cross-section grows as the binding and separation energies fall, making the breakup of a weakly bound nucleus more feasible. Since the structure of the interacting nuclei influences the mechanism of fusion and of other processes leading to the absorption of particles from the elastic channel, it is natural to expect that the optical model parameters should vary from one system of colliding nuclei to another. The parameter R_0 is altered, so the potential barrier and hence the fusion cross section increase. We demonstrate the effects of only charge and mass on the fusion phenomenon. Nuclear shell structures, deformation, and orientation are additional factors that influence the fusion processes in addition to charge and mass <cit.>.
§.§ Need for a small imaginary part in the optical potential
The scattering process in a nuclear collision is sensitive to the nature of the potential in the surface region. In contrast, the fusion process is an interior activity. It is quite difficult to find a unique nuclear potential that can take care of both phenomena.
It is a common assumption that fusion takes place only after the barrier has been fully penetrated <cit.>. Based on the concept that the fusion of two nuclei occurs in the region interior to the radial position (R_B) of the Coulomb barrier, the region 0 < r < R_B is expected to account for the experimental fusion cross-section data, since the total reaction cross section includes the cross sections of the different reaction channels, of which the fusion channel is predominant in low-energy collisions. The values of the fusion radius and the Coulomb radius used in the heavy-ion collision system ^19F+^208Pb agree with the fact that fusion is an interior phenomenon, whereas the surface region is attributed to scattering and other peripheral, less absorptive direct reaction processes.
In this study, we identify two crucial aspects of our potential: (i) the real component with a larger magnitude, and (ii) the imaginary part with a smaller value. Thus, in contrast to light ion systems, this potential has a less absorptive character. Because of its less absorptive nature, standing waves are formed in the nuclear well, which allows shape resonances to survive in the collision process. As a result, these resonances produce the oscillatory structures in the fusion (total reaction) cross-section, σ_fus (σ_r) as a function of colliding energy E_c.m. Although the resonances exist, it is very difficult to detect the resonances experimentally through direct observations <cit.>.
In potential scattering theory, these resonances are manifested clearly as maxima in the reaction cross-section (σ_r) at the respective resonance energies <cit.>. The small value of the imaginary part further indicates that fusion only occurs when the barrier has been completely penetrated <cit.>. Due to the potential's smaller absorption capacity, standing waves in the nuclear well may occur, which would allow shape resonance states (which have not been experimentally detected) <cit.> to survive the collision process. As a result, these resonances become the cause of the oscillatory structure in the barrier distribution D(E_c.m.) as a function of E_c.m. <cit.>. When the potential is made more absorptive by considering a bigger imaginary part W_0, the width of the resonance caused by the real part of the potential widens. Consequently, the larger width leads to the extinction of the corresponding resonance in the collision process. In this study, we have considered a deep real potential associated with a relatively weak imaginary strength W_0. As explained above, the resonances are visible in the form of peaks in the partial wave trajectories for a smaller value of W_0, but the oscillation in σ_r vanishes for a larger value of W_0, which is shown explicitly by taking W_0=1.5 MeV and 15 MeV in Fig.8. The cumulative effect of all these resonances is primarily responsible for the oscillation in the reaction cross-section (σ_r). The fusion radius is found to be larger than the Coulomb radius when a larger value of W_0 is considered in the case of heavy-ion collisions.
The amplitudes of oscillation increase with increasing l-values for a particular low imaginary potential. This is verified by examining the variation of the resonance structures for different l's at a particularly low value of W_0, as shown in Fig.9. Here the imaginary depth is kept low at W_0=1.5 MeV, and the oscillations with increasing amplitudes are shown for different values of l, i.e., l=10, 20, 30, 50.
§ CONCLUSIONS
We use the optical potential, constructed in the framework of the Ginocchio potential, to explain the angular distributions of elastic scattering for the system ^19F+^208Pb at the center-of-mass energies E_c.m.= 80.6, 83.4, 85.2, 87.9, 89.8, and 94.0 MeV. At the nuclear surface, the potential has a particular deformation effect. We calculate the fusion cross-sections for the same system and compare these values with the independent findings of four distinct experiments carried out by the researchers in Ref. <cit.>. The theoretical calculation provides a good explanation for the data showing the threshold anomaly close to the system's Coulomb barrier. To ensure that resonance states are not overly suppressed, the imaginary components of the potential employed in the current study are kept relatively modest in comparison to the real parts. Due to the shell closure of both interacting nuclei, no hindrance phenomenon is seen in the system ^19F+^208Pb.
As shown in Fig.7, a weakly bound nucleus significantly impacts the fusion due to the increased possibility of dissociation. So, it may be argued that, although the Coulomb repulsion is stronger, the existence of the neutron shell closure and of the breakup probability in the entrance channel favours fusion hindrance just below the Coulomb barrier. More studies are needed to understand the dynamics in the sub-barrier area and to identify additional influencing elements that may further favour or hinder the likelihood of fusion, such as the deformation of the colliding partners, the projectile orientation upon striking the target, the isospin asymmetry of the colliding partners, and the shell energy <cit.>. Thus, the hindrance has not only a kinematical origin governed by the atomic mass and charge number but could also be due to shell structures, deformation, and shell energy <cit.>. The adaptability of the potential, with fewer energy-dependent parameters, encourages further analysis of more pertinent systems.
99
ref.1 M.A. Nagarajan, C. C. Mahaux, and G. R. Satchler, Phys. Rev. Lett. 54, 1136 (1985).
ref.2 J. Diaz, J.L. Ferrero, J.A. Ruiz et al., Nucl. Phys. A 494, 311 (1989).
ref.3 B. R. Fulton, D.W. Banes 1, J.S. Lilley et al., Phys. Lett. B 162, 55 (1985).
ref.4 M. E. Brandan, J. R. Alfaro, A. Menchaca-Rocha et al., Phys. Rev. C 48 (1993) 1147.
ref.5 M. J. Smithson, J.S. Lilley, M.A. Nagarajan et al., Nucl. Phys. A 517 (1990) 193-204.
ref.6 A. Baeza, B. Bilwes, R. Bilwes et al., Nucl. Phys. A 419 (1984) 412.
ref.7 I. J. Thompson, M.A. Nagarajan, J.S. Lilley et al., Nucl. Phys. A 505 (1989) 84-102.
ref.8 J.S. Lilley, B.R. Fulton, M.A. Nagarajan et al., Phys. Lett. B 151 (1985) 181-184.
ref.9 A.M. Stefanini, D. Bonamini, A. Tivelli et al., Phys. Rev. Lett. 59 (1987) 2852.
ref.10 D. Abriola, D. DiGregorio, J. E. Testoni et al., Phys. Rev. C 39 (1989) 546.
ref.11 G. R. Satchler, Physics Reports, North-Holland, 199, 3 (1991) 147-190.
ref.12 F. W. Byron and R. W. Fuller, Math. of Class. and Quant. Physics, (1992) 340.
ref.13 C. J. Lin, J. C. Xu, H. Q. Zhang et al., Phys. Rev. C 63, 064606 (2001).
ref.14 H. Leucker, K. Becker, K. Blatt et al., Phys. Lett. B 233, 277 (1989).
ref.15 D. R. Tilley, H.R. Weller, C.M. Cheves et al., Nucl. Phys. A 595, 1-170 (1995).
ref.16 Amit Kumar, R. Tripathi, S. Sodaye et al., Euro. Phys. Journal, A 49, 3 (2013).
ref.17 U. C. Voos, W. Von Oertzen, R. Bock et al., Nucl. Phys. A 135, 207-224 (1969).
ref.18 U. C. Schlotthauer-Voos, H.G. Bohlen, W. Von Oertzen et al., Nucl. Phys. A 180, 385-401 (1972).
ref.19 R. Tripathi, R. Tripathi, K. Sudarshan et al., Phys. Rev. C 79, 064604 (2009).
ref.20 A. Gamp, W. Von Oertzen, H. G Bohlen et al., Zeitschrift fur Physik, 261, 283-304 (1973).
ref.21 G. S. Mallick, S. K. Agarwalla, B. Sahu, C. S. Shastry, Phys. Rev. C 73, 054606 (2006).
ref.22 B. Sahu, G. S. Mallick and S. K. Agarwalla, Nucl. Phys. A 727, 299 (2003).
ref.23 Joseph N. Ginocchio, Ann. Phys. (N.Y.) 152, issue 1: 203-219 (1984).
ref.24 D.J. Hinde, A.C. Berriman, M. Dasgupta et al., Phys. Rev. C 60, 054602 (1999).
ref.25 B. B. Back, R. R. Betts, J. E. Gindler et al., Physical Review, C 32, (1985) 195.
ref.26 K. E. Rehm, H. Esbensen, C. L. Jiang et al., Physical Review Letters, 81, (1998) 3341.
ref.27 Zhang Huanqiao, Liu Zuhua, Xu Jincheng et al., Nuclear Physics, A 512, (1990) 531.
ref.28 A. B. Quint, W. Reisdorf, K.-H. Schmidt, et al., Zeitschrift für Physik A 346, (1993) 119.
ref.29 K. -H. Schmidt and W. Morawek, Rep. Prog. Phys.54, 949 (1991).
ref.30 Yu. Ts. Oganessian, A. Yu. Lavrentev, A. G. Popeko, et al., JINR FLNR Scientific Report 1995-1996. Heavy Ion Physics, B. I. Pustylnik (ed.), p. 62 (JINR, E7-97-206, Dubna (Russia), 1997).
ref.31 Yu.Ts.Oganessian, V.K.Utyonkov, Yu.V.Lobanov et al., Phys. Rev. C 64, 054606 (2001).
ref.32 K. Satou, H. Ikezoe, S. Mitsuoka, et al., Phys. Rev. C 65, 054602 (2002).
ref.33 K. K. Jena, S. Senapati, B. B. Sahu, J. K. Nayak and S. K. Agarwalla, arXiv:2201.03805.
ref.34 K. K. Jena, S. K. Agarwalla, B. B. Sahu, Acta Phys. Pol.B, 53, 10-A1 (2022).
ref.35 Kamala Kanta Jena, Santosh Kumar Agarwalla, Bidhubhusan Sahu, New J. Phys. 25,
033012 (2023).
ref.36 D. R. Tilley, H. R. Weller, C. M. Cheves, and R. M. Chasteler, Nucl. Phys. A 595, 1 (1995).
ref.37 National Nuclear Data Center, BNL, Upton, NY 11973-5000, https://www.nndc.bnl.gov
ref.38 C. Simenel, D.J. Hinde, R. du Rietz et al., Phys. Lett. B 7101, 607 (2012).
ref.39 D..J. Hinde, M. Dasgupta, and A. Mukherjee, Phy. Rev. Lett. 89, 282701 (2002).
ref.40 J. R. Birkelund and J. R. Huizenga, Annu. Rev. Nucl. Part. Sci. 33, 265 (1983).
ref.41 S. G. Steadman and M. J. Rhoades-Brown, Annu. Rev. Nucl. Part. Sci. 36, 649 (1986).
ref.42 Y. Eisen and Z. Vager, Nucl. Phys. A 187, 219 (1972).
ref.43 B. Sahu, L. Satpathy, and C. S. Shastry, Phys. Lett. A 303, 105 (2002)
ref.44 B. Sahu, S. K. Agarwalla, and C. S. Shastry, Nucl. Phys. A 713, 45 (2003).
ref.45 B. Sahu et. al., Phys. Rev. C 77, 024604 (2008).
ref.46 R. R. Swain et. al., Int. J. Mod. Phys. E 29, 2050016 (2020).
ref.47 Hiroshi Ikezoe, Kenichirou Satou et al., Progress of Theoretical Physics
Supplement, No. 154, 45 (2004).
|
http://arxiv.org/abs/2307.04016v1 | 20230708171122 | Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities | [
"Alex Cabral",
"Vaishnavi Ranganathan",
"Jim Waldo"
] | cs.NI | [
"cs.NI"
] |
Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities
Alex Cabral, Vaishnavi Ranganathan, Jim Waldo
August 12, 2023
========================================================================================================
§ INTRODUCTION
As the global urban population continues to grow, cities are increasingly interested in monitoring urban processes such as vehicular traffic, and public health and environmental harms including air pollution and noise, to help cities grow in a healthy and sustainable fashion <cit.>. The lowering cost of sensing infrastructure and recent digital twin capabilities have encouraged city officials, researchers, and urban residents to use large-scale, low-cost sensor networks to monitor hyperlocal phenomena, inform policy and planning decisions, and collect data to help cities transition toward becoming smart cities <cit.>.
We identify that, to be successful, a smart city network must be:
* reliable: the network should continue to operate and transmit data over long periods of time and across the city to ensure equitable node distribution <cit.>
* scalable: it should be easy to add/replace nodes within the network at any new location in the city <cit.>
* easy to maintain: nodes should be outfitted with hardware and firmware that minimize the need for in-person maintenance <cit.>
* real-time: data must be transmitted as quickly as possible, particularly for applications such as emergency services <cit.>, and the network must be monitored in real-time for maintenance <cit.>
* low-cost: by using existing infrastructure and services, the network can avoid added costs in installation and maintenance <cit.>
We determine that two key features of an urban sensor network's design can help to make the network fit within the aforementioned criteria. The first is connectivity, which is essential for data transmission, real-time node monitoring, and software updates. The second is power, which provides for reliable operation and data collection. The decisions that cities and network designers make in these two areas have a direct and significant impact on the criteria for a successful smart city network. For example, an urban sensor network that uses a low-power wide-area network (LPWAN) for connectivity may not satisfy the criterion of low cost because the backhaul infrastructure required, although low in per-unit cost, quickly becomes expensive when considering the number of cells required for a large, dense sensor network <cit.>. Similarly, a smart city network that relies on wired power may not be scalable, as nodes will be limited to locations that already have wired mains <cit.> and will involve additional installation and maintenance cost.
Based on a review of prior urban sensor network deployments and our experience working on a large-scale sensor network, we establish that LTE networks and solar panels are the appropriate connectivity and power choice for most urban sensor networks given the available options and necessary criteria. Although LTE performance for mobile communication in urban areas is well-researched <cit.>, the performance of IoT-specific networks when implemented in a city-scale long-term sensor network deployment is yet to be characterized. Solar power in urban sensor networks has also been evaluated on a small scale <cit.>, but not in a large-scale long-term deployment. Moreover, there are no established guidelines that can ensure reliable performance for future deployments of such large-scale LTE-connected, solar-powered sensor networks. Finally, researchers have not looked into the overlap between technical issues that arise in LTE connectivity or solar power and the socioeconomic factors that make up many “sensor deserts" <cit.>, or areas that lack nodes in cities with sensor networks.
In this work we describe the design and analyze the connectivity and power performance of a stationary 118-node LTE-M connected, solar-powered sensor network deployed for one year in Chicago, Illinois. We find that 11 of the 118 original node locations could not support LTE connectivity, despite all FCC and network provider connectivity maps indicating otherwise. A small number of cell towers and node locations are disproportionately affected by significantly delayed readings, and 44 of the 118 nodes experienced issues charging in the winter months. Furthermore, we discover that connectivity and power related issues are not equitably spread around the city, but rather are more prominent in areas that are classified as socioeconomically disadvantaged and have a larger racial minority population.
Our primary contribution is an in-depth analysis of a long-term real-world deployment assessing the feasibility and reliability of a large-scale LTE-connected and solar-powered urban sensor network. Additional contributions include: 1) highlighting the overlap between technical challenges in urban sensor networks and socioeconomic inequality, 2) revealing the inherent challenges in relying upon open data sources that are commonly used to predict connectivity and power availability for urban sensor network deployments, and 3) identifying strengths and weaknesses to define future research directions in energy harvesting systems and equitable network infrastructure deployments to ensure the just future of smart city networks.
This paper is structured as follows: Section 2 offers an overview of Related Works; Section 3 highlights why the city of Chicago is a useful case study for urban sensor networks; Section 4 highlights the design of the sensor network and datasets used; Section 5 discusses the connectivity of the sensor network, including the hardware, network carrier information, and insights from the year-long deployment; Section 6 details the powering of the sensor network, including the hardware, energy management techniques, and insights from the deployment; Section 7 provides a discussion, focusing on the implications of the challenges we discovered and the limitations of our study.
§ RELATED WORKS
In this section, we first review former and existing sensor network deployments to identify necessary criteria, prior evaluations, and known issues around inequality. We then examine LTE connectivity and solar power in urban areas, as these are the technologies we use for our sensor network.
§.§ Criteria for Urban Sensor Networks
By examining prior urban sensor network deployments, we have identified five criteria necessary for success—reliability, scalability, ease of maintenance, real-time communication, and low cost. The shortcomings of prior sensor networks has often been caused by a lack of reliability, either in terms of not functioning over time, as with malfunctioning hardware <cit.>, or not communicating data reliably over space and time <cit.>. Many prior networks have also raised the issue of scalability, which is especially prevalent when relying on electrical cables and wired power, which may be available at street lamps or traffic signals, but ultimately limits the node placement locations <cit.>. Similar initiatives have shown that reliance on these specific locations can additionally make installation and maintenance more difficult, which then increases the cost of operation <cit.>. The issue of maintenance is particularly important in urban settings, where the cost of accessing a node can be very high <cit.>.
Conversely, we find that some deployments are more successful because they achieve low-cost via the use of existing infrastructure. For example, officials in New York City chose to use an existing public safety wireless network for a new traffic control system <cit.> and Chicago's Array of Things relied on cellular networks <cit.>, decisions that helped ease installation and thus save costs.
§.§ Evaluations of Urban Sensor Network Deployments
The evaluations of real-world sensor network deployments in urban settings have often been small-scale and short-term. A small number of researchers have shared the lessons and challenges learned from urban sensor network deployments, but many of these are focused on specific data such as noise <cit.> and water quality <cit.>. Furthermore, many of these studies rely on the power grid for high computation tasks <cit.>, or use technologies such as Wi-Fi or Zigbee for data transfer <cit.>. The works that evaluate LTE-connected or solar-powered urban sensor networks are small scale and short duration studies that do not offer extended insights on reliability <cit.>.
§.§ Inequality of Sensor Networks
As smart city networks are increasingly explored and deployed, sociology and urban planning researchers have begun to evaluate the potential social implications of urban sensor networks. For example, one group of researchers evaluated prior urban sensor network deployments and identified areas deemed “sensor deserts", which are those that lack nearby sensors based on a straight line distance <cit.>. As the researchers state, sensor deserts not only add to existing forms of inequality, but the placement of sensor nodes can also affect resident perception of the distribution of resources and harms throughout a city <cit.>, creating potential political or social strife if nodes are not visible in certain areas. Similarly, others have noted the potential for smart city technologies to “further deepen the splintering of urban networks, creating deep divides between those with access to 'smart' and those without" and raising questions about the “politics of urban exclusion" <cit.>. Thus, there is an increasing push for equity as a consideration in practical sensor network deployment <cit.>.
§.§ LTE Connectivity in Urban Areas
Extensive research around mobile connectivity has revealed a variety of factors known to affect RSS and limit propagation distance for LTE signals. These include physical features such as high-rise buildings <cit.>, the distance between the cell tower and receiver <cit.>; meteorological conditions such as precipitation <cit.>, humidity <cit.>, strong winds <cit.>, temperature <cit.> and sudden weather changes <cit.>; and environmental measures such as high particulate matter concentrations <cit.>. Another major factor that affects signal strength is inter-cell interference (ICI) <cit.>, which occurs when a node moves to the edge of one cell tower's range while moving closer to another cell tower. We include all these factors in our analysis of connectivity issues in section 5.
§.§ Solar Charging in Urban Areas
Given the vast number of previously deployed solar-powered sensor networks and the many papers published about them, one might assume that solar power is reliable for most sensor network deployments. However, very few studies have examined the long-term reliability of solar power in urban settings. Dehwah et al. <cit.> evaluate the performance of a traffic monitoring sensor network in a desert city and describe the effect of dust storms and building shadows on solar charging, but they do not analyze in depth which locations were most affected by shadows, how the issue might be prevented in future deployments, or the potential social implications.
To our knowledge, this work presents the first in-depth analysis of a large-scale, long-term cellular, solar-powered urban sensor network towards understanding the broader impact of the technical challenges for urban communities.
§ CHICAGO AS A CASE STUDY
§.§ Building Height
According to the Council on Tall Buildings and Urban Habitat <cit.>, amongst cities around the world, Chicago has the 10th most buildings 150 meters and higher, 11th most buildings 200 meters and higher, and 5th most buildings 300 meters and higher. However, its place on those lists is expected to fall within the coming years—Chicago has only three buildings 150 meters and higher under construction and twelve proposed for construction. By comparison, Wuhan, Shenyang, and Bangkok—cities just below Chicago on the list of most 150+ meter buildings—have 49, 14, and 17, buildings under construction respectively, and dozens more proposed in both Wuhan and Shenyang. In addition, development in cities such as Mumbai, Nanning, and Nanjing, which all have several 150+ meter buildings under and proposed for construction will propel them past Chicago in the list in the coming decades. This puts Chicago currently in a unique position for evaluating the impact of built environment towards planning global urban sensor networks.
§.§ Latitude and Sunlight Hours
Chicago has a latitude of 41.88 degrees, where the sun is visible for 15 hours, 15 minutes during the summer solstice and 9 hours, 6 minutes during the winter solstice. According to data from the World Economic Forum <cit.>, the top five most populous latitudes are between the 22nd and 27th parallel north, which are all much closer to the equator and thus have more sunlight on the winter solstice, with an average of 10 hours 35 minutes.
Nevertheless, a number of highly populated cities reside at or above the 42nd parallel north, including London, Moscow, Harbin, and Toronto, as well as much of Western Europe. Cities such as New York and Beijing are also located at nearly the same latitude, receiving 9 hours 13 minutes sunlight on the winter solstice. Furthermore, as the effects of climate change disproportionately affect populations who live closer to the equator, mass migration away from the equator is expected <cit.>. Thus, understanding the performance of solar-powered sensor networks at northern latitudes is essential for future urban environmental sensing.
§.§ Segregation and Inequality
Based on 2020 United States Census Data, Chicago is the fourth most racially segregated large city (population at least 200,000) in the United States <cit.>. Fig. <ref>a highlights Chicago's racial segregation, showing where the white and non-white—primarily Black and Latine—populations live relative to each other. There is limited data comparing racial segregation in global cities, likely because many countries are more racially homogeneous than the United States.
However, segregation based on income or social status exists in many global cities, with the highest levels of inequality and segregation often found in cities of lower income countries <cit.>. According to Gini Index data from the 2019 American Community Survey <cit.>, Chicago has the 10th greatest income inequality amongst US cities, with a Gini index of 0.53 (where a 0 indicates perfect equality and 1 indicates perfect inequality). Compared to cities such as London and Johannesburg, which have the highest global Gini index values—both over 0.7—Chicago has a relatively medium-high level of income inequality <cit.>. As seen in Fig. <ref>b, the areas of Chicago that are considered most socioeconomically disadvantaged based on factors such as unemployment and poverty level also overlap with many of the areas that have a majority Black or Latine population. Thus, we believe that Chicago provides a useful case study by which to examine the potential social and equity implications that sensing technologies can introduce in cities around the globe.
§ SENSOR NETWORK AND DATA
§.§ Sensor Network Design
The sensor network, described in further detail in [blinded]
and shown in Fig. <ref>, was designed and deployed to collect air pollution data across Chicago. The network comprised 118 unique sensor node locations: 20 nodes were allocated to local environmental justice groups for placement according to their priorities, 12 nodes were collocated at four EPA stations (3 nodes at each station) for calibration, and the rest were placed at locations chosen through stratified random sampling, as described in NYCCAS <cit.>, with a small subset chosen by partner organizations.
All devices that were not at EPA stations were installed at bus shelters throughout the city, as shown in Fig <ref>. These nodes were placed at the same height, about 2.5 meters above ground. Nodes at EPA stations were located on the rooftops near the EPA monitors, several meters above ground and at different heights based on the height of the building or structure housing the EPA monitor. Most of the devices were installed at their respective locations in July and August 2021, with 98 nodes (over 83%) placed by July 3rd, 2021.
§.§ Datasets
The node-related data for each reading, including the time, received signal strength (RSS), battery level, internal node temperature, and air pollutant measurements, were all logged and stored in a cloud server. We calculated the latency by comparing the time of the sensor reading to the time of the data's insertion into the server. Cell tower information, such as the cell tower ID, was collected when making a connection with the tower. We used OpenCellID <cit.> to link the cell tower information with locations, OSM (Open Street Maps) Buildings <cit.> to gather data about buildings surrounding the nodes, FCC Broadband <cit.> and nPerf <cit.> data to examine AT&T connectivity, Meteostat <cit.> to collect external weather data, and the Shadow Accrual Maps tool <cit.> to calculate the number of shadow hours at each node location. Socioeconomic data were pulled from the City of Chicago Open Data Portal <cit.>.
§.§ Data Cleaning
We removed readings that had no connectivity data (N = 9,393, 0.2% of readings), readings where the signal was equal to zero (N = 11,626, 0.12%), readings where the tower location was clearly outside of Chicago, possibly due to sensors being shipped back and forth when there were issues (N = 11,778, 0.12%), and readings with a delay of more than 24 hours (N = 54,900, 0.63%), as this was likely indicative of a device issue, rather than connectivity or charging issue. We also identified 565,371 readings (12.7%) where the cell tower could not be located in the OpenCellID database; we kept these readings in for all analyses except ones involving distance and general direction of the cell tower.
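To make these steps concrete, the sketch below shows one way the same filters could be applied with pandas. It is an illustrative reconstruction rather than our production pipeline; the file and column names (readings.csv, signal, tower_lat, delay_s, and so on) are hypothetical, and the Chicago bounding box is approximate.

import pandas as pd

df = pd.read_csv("readings.csv")   # one row per sensor reading (hypothetical schema)

CHI_LAT = (41.6, 42.1)             # rough latitude bounds for Chicago (illustrative)
CHI_LON = (-87.95, -87.5)          # rough longitude bounds for Chicago (illustrative)

has_tower_location = df["tower_lat"].notna() & df["tower_lon"].notna()
tower_in_chicago = df["tower_lat"].between(*CHI_LAT) & df["tower_lon"].between(*CHI_LON)

mask = (
    df["signal"].notna()                         # drop readings with no connectivity data
    & (df["signal"] != 0)                        # drop readings where the signal equals zero
    & (tower_in_chicago | ~has_tower_location)   # drop only towers clearly outside Chicago
    & (df["delay_s"] <= 24 * 3600)               # drop readings delayed by more than 24 hours
)
clean = df[mask].copy()

# Readings whose tower could not be matched in OpenCellID are kept, but flagged so
# they can be excluded from analyses involving tower distance and direction.
clean["tower_located"] = has_tower_location[mask]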
§ CONNECTIVITY
§.§ Motivation for an LTE-Connected Urban Sensor Network
Despite recent advances in WiFi and low-power wide-area networks (LPWAN), such as LoRaWAN <cit.>, most urban sensor networks will rely on cellular networks in the coming years
for the following reasons: 1) Dependence on existing urban cellular networks ensures city-wide coverage without additional infrastructure. 2) Widespread global availability and flexible data plans with each generation. 3) Lower cost and ease of setup and scaling—for technologies such as LoRaWAN, scalability is a particularly pressing issue due to the cross-technology interferences that will arise from other technologies <cit.> and potential packet collisions with large sensor networks <cit.>. In addition, LPWAN require dedicated infrastructure that have a low per-unit cost, but quickly add up in costs based on the cells required to support high node density <cit.>.
Thus, to support the necessary criteria of reliability, real-time communication, and low cost, we use an LTE network for communication. LTE networks provide broad coverage in most cities around the globe <cit.>, providing a means for scaling reliably. Because the cellular infrastructure is already built and evolving, networks are easy to set up and remain low-cost, especially with the variety of LTE plans available. Finally, with the fast-evolving generations of cellular communication, such networks are increasingly seen as dedicated low-latency connectivity for massive IoT deployments in growing cities <cit.>.
§.§ Materials: Antenna and LTE Carrier
The sensing nodes connected via AT&T's 4G IoT LTE-M One network, which uses LTE Bands 2, 4, and 12, and operates at frequencies of 700, 1700, and 1900 MHz. Each node used a SIM card and Ignion NN03-310 antenna <cit.>, which transmits data over 3G and 4G, is tuned for channels 2, 3, 4, 5, 9, 12, 20, and 28, and operates on frequencies from 698-960 MHz and 1710-2690 MHz. The antenna was placed at the top right of the printed circuit board (PCB) [After conversations with the antenna manufacturer and a small series of tests, it was determined that antenna placement on a PCB can have a significant effect on the RSS values. It is imperative for sensing node designers to consult with antenna manufacturers to ensure correct antenna placement on custom PCB for the best connectivity.], as shown in Fig <ref>.
§.§ Methods: Node Connectivity and Data Transmission
The sensing node preserved battery life by periodically waking up to record a sample and transmit data to the cloud, as further described in Section <ref>. For this deployment, the nodes were set to transmit data every five minutes from the last recorded sample time. The data transmission process included the following series of steps: 1) The microprocessor woke up and kicked off two processes on separate threads, 2a) One thread sampled the sensor with the longest latency, typically about 8 seconds, 2b) A separate thread simultaneously initiated connection to the cloud, 3) Another array of low latency sensors were sampled, 4) The data were then packaged and transmitted to the IoT endpoint going through the cell tower, AT&T network routers etc.
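The control flow of one wake-up cycle is sketched below. Python threads are used only to illustrate the concurrency; the real firmware runs on the microcontroller, and the sensor and cloud helper functions shown here are stand-in stubs rather than actual driver calls.

import threading

def read_pm_sensor():        # stub for the slow PM sensor (about 8 s latency)
    return 0.0

def read_gas_sensors():      # stub for the four low-latency electrochemical sensors
    return [0.0, 0.0, 0.0, 0.0]

def open_cloud_connection(): # stub for LTE-M connection setup
    return True

def transmit(payload):       # stub for packaging and sending to the IoT endpoint
    pass

def wake_and_transmit():
    payload = {}
    sampler = threading.Thread(target=lambda: payload.update(pm=read_pm_sensor()))
    connector = threading.Thread(target=lambda: payload.update(conn=open_cloud_connection()))
    sampler.start(); connector.start()       # steps 2a and 2b run in parallel
    payload["gases"] = read_gas_sensors()    # step 3: sample the fast sensors
    sampler.join(); connector.join()
    transmit(payload)                        # step 4: send through the cell tower

wake_and_transmit()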
§.§ Methods: Retry Logic
If a node could not connect to the cloud, it stored the reading locally, went back to sleep for five minutes, and tried to connect again. After 10 retries, if the node still could not connect, then the node was set to reboot itself. After a reboot, the node would immediately try to make a connection to the cloud and would not record local readings until it did because the node lacked a real time clock. Once the node could connect again, it transmitted all locally stored data and errors that were logged in the absence of connectivity.
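The retry behaviour amounts to the simple loop sketched below. This is a simplified illustration of the logic described above (for instance, it does not model the loss of the real-time clock after a reboot), and sample_sensors, try_send, and reboot are placeholder functions.

import time

MAX_RETRIES = 10
INTERVAL_S = 5 * 60   # five minutes between samples

def sample_sensors(): return {}          # stub
def try_send(readings): return True      # stub: returns False when the cloud is unreachable
def reboot(): pass                       # stub

def node_loop():
    buffered = []      # readings stored locally while the node is offline
    failures = 0
    while True:
        buffered.append(sample_sensors())
        if try_send(buffered):
            buffered.clear()             # transmit all locally stored data and logged errors
            failures = 0
        else:
            failures += 1
            if failures >= MAX_RETRIES:  # after 10 failed attempts, the node reboots itself
                reboot()
                failures = 0
        time.sleep(INTERVAL_S)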
§.§ Results: Readings and Cell Towers
For the one-year period and 118 nodes in our network, our dataset included 8,684,756 readings. We linked the readings to 417 unique cell tower locations, 65 with only 1 associated reading, 179 with 500 (0.0057%) or more readings, and 165 with 1000 (0.011%) or more readings.
§.§ Results: “Dead Zones"
Over the course of our deployment, we identified 11 locations (9.32%) at which the sensor nodes reported consistently low RSS values and ultimately failed to connect, generally within a few days of installation. These 11 locations include 10 from the main deployment beginning in July 2021 and one node location from an earlier pilot program in April 2021. Three of the 11 locations were selected for deployment by local community groups, a noticeably higher fraction than in the overall deployment. Initial mitigation strategies involved moving the nodes to the closest bus shelter, which was often directly across the street. However, we discovered that the nodes had to be moved even further—sometimes multiple blocks away—to establish a connection.
We examined a number of factors to determine the potential cause of these “dead zones", including the distance between the node and cellular tower, the number of towers close to a node, evidence of inter-cell interference (ICI) <cit.>, and nearby physical urban structures, including the distance and height of the closest building to the node, and the number, tallest height, mean and median building height within 100, 250, and 500 meters of each node. We found no evidence to suggest that any of these features had an effect on a node's ability to connect, when comparing all “dead zones" to all other node locations. When comparing “dead zone" locations to the new locations each of those nodes was moved to, we found a statistically significant difference in the height of the tallest building within 100 meters of the node after relocation versus before, as shown in Fig. <ref>. This indicates that land use and urban form close to the location of stationary sensors are likely factors impacting connectivity, fitting in line with observation from prior work <cit.>.
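For the before/after comparison above, a paired non-parametric test is one natural choice. The sketch below uses a Wilcoxon signed-rank test; the paper text does not name the exact test applied, and the height values are invented purely to show the call pattern.

from scipy.stats import wilcoxon

# Hypothetical tallest-building heights (m) within 100 m of each relocated node,
# paired as (original "dead zone" location, new working location).
tallest_before = [25.0, 11.9, 30.2, 18.5, 22.1, 15.0, 27.3, 9.8, 33.0, 20.4]
tallest_after = [10.2, 6.5, 12.0, 8.8, 9.1, 7.4, 11.5, 5.0, 14.2, 9.9]

stat, p_value = wilcoxon(tallest_before, tallest_after)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")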
In addition, we investigated the role of line-of-sight as a primary factor contributing to “dead zones". We examined the relation between the sensor node, cellular tower, and tallest nearby building for the two nodes found to connect to the same primary cellular tower at their original (“dead zone") and new location. We found that one of these node configurations exhibited line-of-sight interference, as shown in Fig. <ref>, as the tallest building (11.9 meters) was clearly in the path between the cellular tower and sensing node.
Due to the limited number of examples available, further investigation with larger datasets is needed; nevertheless, this evidence supports the key role of line-of-sight impediments in contributing to “dead zones".
Finally, we examine the socioeconomic factors around the node locations without connectivity. We do not find a significant difference in the socioeconomic factors when comparing node locations that can and cannot connect, likely because there are a large number of nodes around the city. However, we do note that many of the dead zone locations are in socioeconomically disadvantaged and majority Black and Latine neighborhoods, as shown in Fig. <ref>a.
§.§ Results: Signal Strength
As shown in Fig. <ref>, the yearly median signal strength for each node ranged from -61 dBm to -113 dBm, with a network-wide median of -87 dBm. There was no significant difference in the median signal strength for community-selected versus randomly-selected nodes and we did not identify a statistical relationship between surrounding physical features, such as building height or distance to buildings, and the median signal strength for the sensor node or corresponding cell tower location.
As with “dead zones", we found that the node locations with the lowest median signal strength—those below -100 dBm—were nearly all sited in neighborhoods that are socioeconomically disadvantaged and have a higher percentage of racial minority residents. In fact, only one of the eight locations with a low median signal strength was sited in a majority white neighborhood, as shown in Fig. <ref>b.
§.§ Results: Latency
We found that over the entire year's worth of data, the minimum latency was 2 seconds, the median latency was 5 seconds, and the interquartile range fell between 4 and 6 seconds (our data allowed only for estimating seconds, and not milliseconds for latency).
When examining the median latency for each sensor node over the course of the study, we found a much tighter distribution than we saw for median signal strength. In fact, the entire interquartile range sits at a single value of 5 seconds. There are only three sensor locations with a median latency greater than that value, shown in Fig. <ref>c, and two of those locations overlap with those that have poor median signal strength, suggesting a correlation between signal strength and latency.
We find that only 7.24% of readings have a latency of 10 or more seconds, 1.18% have a latency of 30 or more seconds, and less than 1% (0.88%) have a latency of one minute or longer. Although these are low percentages, we examined the significantly delayed readings to determine whether they occur randomly or follow a pattern. We found that the delayed readings do not occur randomly, but rather appear disproportionately on certain dates, at certain sensor locations, and with certain cellular towers, as seen in Fig. <ref>. Interestingly, the sensor locations with the most delayed readings have no overlap with the locations that have either the lowest median signal strength or the highest median latency. However, when looking at the map of the sensor locations in Fig. <ref>d, we see again that most of these locations are in neighborhoods with a majority Black or Latine population. We could not identify any temporal or location-based events, such as sporting events, that have previously been associated with cellular network delays and may have caused these significant delays. Coupled with the lack of empirical evidence from the cellular service providers, we are led to conclude that the delays are likely due to carrier-specific issues such as cell tower maintenance.
§ POWER
§.§ Motivation for a Solar-Powered Urban Sensor Network
Nodes must be continuously running to collect data over time, yet many outdoor urban spaces are not equipped with accessible wired mains <cit.>. Solar power is the most ubiquitous form of renewable energy for sensor networks, and will remain prevalent in the coming years for the following reasons: 1) Solar panels are relatively inexpensive and easy to install. 2) Solar panels can power sensors that need to operate continuously in remote or hard-to-reach locations where it may be difficult or expensive to run electrical cables or replace batteries. 3) Using solar power eliminates the need for frequent battery replacements, which creates an added burden for cities looking to deploy sensor networks.
Thus we use solar energy to power our sensor network to
achieve reliability through continuous power, scalability in allowing for power in locations that do not have outlets, ease of maintenance by limiting battery replacements, and low-cost by requiring no new infrastructure.
§.§ Materials: Battery, Solar Panel, and Power Usage
Each sensing node was outfitted with a rechargeable 2000 mAh lithium polymer battery
and a 10×13 cm Voltaic Systems P126 6W solar panel. The solar panel was attached horizontally, in a flat position, to the top of the node's respective bus shelter to maximize solar absorption, maintain security of the panel, and provide ease of installation.
To optimize for low power consumption, the microcontroller operated in a duty cycled mode, consuming as little as 40 µA between measurements. The device's four electrochemical gas sensors consume microwatts of power, while the particulate matter (PM) sensor consumes up to 80 mA power as it relies on an internal fan to circulate air. Thus to optimize the overall power usage, we sampled the gases every 60 seconds and sampled the PM and transmitted data every 5 minutes. On average, the device drew 4mA current over a 24 hour period, allowing the battery to power the sensing node, including communications, for approximately 15 days at the aforementioned sampling rate.
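As a rough consistency check on the quoted battery life, the calculation below divides the usable battery capacity by the average current draw. The 72% usable-capacity factor is our own illustrative assumption (covering conversion losses and depth-of-discharge limits), not a measured value.

NOMINAL_CAPACITY_MAH = 2000.0   # battery capacity from the hardware description
AVG_CURRENT_MA = 4.0            # average draw over 24 hours, from above
USABLE_FRACTION = 0.72          # assumed usable fraction of the nominal capacity

runtime_h = NOMINAL_CAPACITY_MAH * USABLE_FRACTION / AVG_CURRENT_MA
print(f"Runtime without charging: {runtime_h:.0f} h = {runtime_h / 24:.1f} days")
# 2000 mAh * 0.72 / 4 mA = 360 h, i.e. about 15 days, consistent with the estimate above.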
§.§ Methods: Power Saving Strategies
In October 2021, we noticed that one of the devices was no longer charging. After sending the local maintenance team to investigate, we discovered that the sun was no longer reaching the solar panel due to the change in the sun's position and the node's location among skyscrapers. We anticipated that this issue would begin to show up in other nodes as well, so we determined three potential solutions to ensure the network still collected useful data throughout the winter months:
* Set the sampling interval to be more than every five minutes, which would deplete the battery less quickly by running the PM sensor and data transmission less often.
* Implement a power-saving mode to ensure devices only run when they have a certain amount of battery and sleep when they are below that value.
* Schedule devices to only run at certain times of the day, i.e. for a few hours in the middle of the day when there is sunlight.
Naturally, each option comes with its own trade-offs that had to be considered. Sampling less often would provide less temporal coverage which could cause cities to potentially miss timely notifications from sensors, make it more difficult to identify noisy or anomalous readings through techniques such as moving averages, and introduce calibration errors from datasets with different resolutions. A power-saving mode could result in large time spans with no data, creating difficulty in comparing data from different seasons and potentially resulting in a lack of data needed for calibration. Scheduling devices to only run at certain times would limit data collection to only specific hours of the day, and may not solve the issue if the number of hours is not chosen correctly.
Based on the tradeoffs and our need for data for sensor calibration, we implemented a power-saving mode that puts devices into a deep sleep to avoid depleting the batteries in low- or no-light conditions. Power-saving mode was initiated when a battery's power level fell to 15% or less of its total capacity, and was turned off once the battery had recharged to at least 40%.
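The resulting behaviour is a hysteresis rule between two thresholds, sketched below; the thresholds are taken from the description above and the function is only an illustration of the state logic, not the firmware implementation.

ENTER_PSM_AT = 0.15   # enter power-saving mode at or below 15% charge
EXIT_PSM_AT = 0.40    # resume normal operation once recharged to at least 40%

def update_power_state(battery_fraction, in_psm):
    """Return True if the node should (remain) in power-saving mode."""
    if not in_psm and battery_fraction <= ENTER_PSM_AT:
        return True      # deep sleep: stop sampling and transmitting
    if in_psm and battery_fraction >= EXIT_PSM_AT:
        return False     # enough charge recovered: resume duty-cycled operation
    return in_psm        # otherwise keep the current state (hysteresis)

assert update_power_state(0.12, in_psm=False) is True   # a node at 12% enters PSM
assert update_power_state(0.30, in_psm=True) is True    # 30% is not yet enough to exit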
§.§ Results: Data Loss due to Power Saving Mode
Between the autumn and spring equinox of the year-long study period, 44 devices (37.29%) went into power saving mode (PSM), with most devices entering PSM between January and March. Seven of these devices were at community-selected sites, representing about 16% of the devices in PSM, indicating the community-selected sites were not disproportionately affected. In total, devices in the network spent 19,450,915 seconds—over 33,180 hours or 1382.5 days—in PSM, resulting in about 398,000 potential sensor readings that were not captured. Most devices entered PSM numerous times, with several entering more than five times during the study period; because a device can only re-enter PSM after recharging to at least 40%, these repeated entries imply the panels were still receiving some charge. Thus, in many locations there was adequate sunlight to keep the devices charged throughout the winter months if a larger solar panel had been used or the devices had better energy harvesting to extend the battery life with the limited charge they received.
§.§ Results: Location of Solar Charging Issues
As expected, the node locations in downtown Chicago entered PSM for a long duration of the winter due to the high number of very tall buildings in the neighborhood. However, several node locations in neighborhoods outside of downtown Chicago, that lack a high density of tall buildings, also experienced solar charging issues. In fact, the node location with the second highest amount of time spent in PSM was not in a location near tall buildings, and 8 of the 12 node locations that had the most power saving hours were outside of the downtown area, as shown in Fig. <ref>f. The figure also shows that they mostly fall in neighborhoods with a majority Black or Latine population. As seen in Fig. <ref>, shadows from trees for large portions of the day could be a potential cause for charging issues in some areas. In addition, ice build up on solar panels may cause charging issues, but this is difficult to diagnose without visiting every node location while it is in PSM. Thus, further analysis is required to determine the exact cause of charging issues in these locations that obviously lack tall buildings in the vicinity. The important takeaway is that the dynamic physical environment of solar IoT deployments need to be considered by tools that are currently being developed to estimate solar energy availability using historic data or satellite/map images <cit.>.
§.§ Results: Predicting Solar Charging Issues
We used the OSM Buildings data <cit.> and Shadow Accrual Maps tool <cit.> to determine how well we would be able to predict a sensor location having power saving issues. With the OSM Buildings data, we examined the distance to the closest building, height of the closest building, and mean and median height of buildings within 100, 250, and 500 meters of each node location. For shadows, we used the tool to calculate the amount of time each node location was in shadow on the winter equinox. Using both a logistic regression model for the binary case of power saving or not, and a linear regression model for the amount of time spent in PSM, we found no statistical significance for either the amount of time spent in shadow, or any data related to buildings around the node locations, as highlighted for one data point in Fig. <ref>.
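In outline, the two models take the following form. The per-node feature table and its column names are hypothetical, and this sketch is meant only to show the model structure (a logistic regression for whether a node entered PSM, and an ordinary least-squares regression for hours spent in PSM), not to reproduce our exact analysis.

import pandas as pd
import statsmodels.api as sm

nodes = pd.read_csv("node_features.csv")   # hypothetical per-node feature table
features = ["shadow_hours_winter_solstice", "dist_to_closest_building_m",
            "closest_building_height_m", "mean_building_height_100m"]
X = sm.add_constant(nodes[features])

# Binary outcome: did the node ever enter power-saving mode?
logit_result = sm.Logit(nodes["entered_psm"], X).fit()
print(logit_result.summary())              # coefficient p-values indicate (lack of) significance

# Continuous outcome: total hours the node spent in power-saving mode.
ols_result = sm.OLS(nodes["psm_hours"], X).fit()
print(ols_result.summary())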
Upon further examination, we discovered that one of the issues around using crowdsourced and open source resources is that they are not consistently updated. For example, one sensor node that was indicated to have shadow issues but did not enter PSM likely had a building present when the data were uploaded, but no longer has a building there as discovered on Google Maps. Likewise, as seen in Fig. <ref>, a node location with no building nearby that entered PSM was likely affected by the presence of a tree near the bus shelter, which was not captured in the tools we used, which are focused on buildings. This points to an additional shortcoming of the data available, which focus on buildings and do not account for foliage, hyperlocal snowfall, and other physical phenomena that may impede solar charging.
§ DISCUSSION
§.§ The Potential of LTE-Connected, Solar-Powered Urban Sensor Networks
The results show immense promise for LTE-connected urban sensor networks. Most node locations had adequate signal strength to achieve connectivity, and the vast majority of sensor readings were transmitted to the cloud server within five seconds. Furthermore, there were no noticeable issues around connectivity due to temporal features such as weather or traffic patterns. We also had success using LTE to detect errors and perform software updates, including a firmware patch to add the power saving mode. These findings all point to the potential of LTE in creating reliable, scalable, easily maintainable, and real-time sensing in cities.
Solar panels proved to be a reliable energy source for over half of the year-long study, and most devices that experienced charging issues only did so between January and March. Chicago is at a more northern latitude than most of the global population, so we expect that many cities, and especially those in the Global South, would experience fewer solar charging issues. Additional improvements with solar panel efficiency <cit.> and research on smart power management strategies for renewable energy in IoT establish solar charging as a viable powering option.
The nodes that were collocated at EPA stations all experienced no charging or connectivity issues, suggesting that placing nodes on rooftops could be a viable solution to improve reliability. However, node placement is highly dependent on the application, and many cities may choose or need to place nodes closer to street level. Future research could include interpolation and machine learning techniques to correlate data from street level to rooftop nodes to address the technical issues and still collect useful data. Additionally, passive wireless reflector and relay research can find application in routing network availability from cell towers and around built infrastructure to end devices.
§.§ Implications of Connectivity and Charging Issues
Despite the success we had in using 4G LTE-M to transmit data, we discovered issues around “dead zones", delayed readings, and unequal signal strength. The cause of these issues could not often be easily identified and data sources from AT&T and the FCC indicate widespread support of the LTE network across Chicago, as seen in Fig. <ref>. Thus, the discovery of these issues raises questions on the reliability of LTE networks, especially in cities that do not have as much cellular infrastructure as Chicago.
However, we did not identify significant data loss from the connection-related issues, suggesting that LTE-connected sensor networks are likely appropriate for applications that do not rely on instant or near instant data.
For applications that cannot afford to have any delayed data, such as emergency support services, network designers will want to think about building robustness into the system to ensure real-time communication for all readings.
Despite the ubiquity of solar panels as the power source for wireless sensor networks, we found that they are not a reliable power source for urban sensor networks for cities
that have limited sunlight
in winter months. In addition, urban areas at latitudes closer to the equator will also experience solar charging issues if they have numerous tall buildings blocking the path of the sun. Thus, we need to continue research in alternative charging options, energy harvesting techniques, and battery-less sensors to ensure reliability and scalability in powering urban sensor networks.
In our study, we found that cellular connection and solar charging issues are not all localized to areas with tall buildings and may be spread inequitably around a city. Thus, urban sensor network deployments have the potential to exacerbate existing societal inequalities by allowing for networks to be scaled more easily in some neighborhoods than others. In turn, this can increase mistrust between residents and governments <cit.> and drive residents to make assumptions about the distribution of resources and harms based on the physical presence of sensors <cit.>. Thus, to serve people in all communities, sensor network designers should consider working with local service providers, using repeaters, multiple sensors, and other technologies to improve reliability in underserved areas. Furthermore, networking researchers and designers need to focus on equality, and not just quality or area coverage when building and deploying infrastructure.
§.§ Challenges around Data Access
Due to the lack of official up-to-date building information, we relied on open crowdsourced data to determine the location and height of buildings in the city. Similarly, because the location of cellular towers is not publicly available, we relied on data from OpenCellID. As with many open crowdsourced datasets, these data were not completely accurate or up-to-date <cit.>. This was especially clear when examining FCC carrier connectivity information, as the entire city of Chicago seemingly has coverage (Fig. <ref>), yet we found that was not the case, likely because the data are reported by carriers <cit.>. We also discovered data accuracy issues in shadow prediction using the Shadow Accrual Maps <cit.>. Other crowdsourced data, such as nPerf, presented a different problem: incompleteness, as seen in Fig. <ref>. Particularly in Chicago, there is significantly more data available in the northern part of the city and along highways, likely owing to the greater usage of crowdsourced platforms by white people and high-income earners <cit.>. Thus, relying on crowdsourced data makes it difficult to predict locations with solar charging or connectivity issues that may arise due to building height and other urban interferences, and this difficulty is compounded by the social inequities that exist in many cities and are exacerbated in crowdsourced technologies.
The difficulty in working with open crowdsourced data points to a need for new methods to obtain up-to-date
urban data. For example, researchers can help develop ways to obtain building height or cell tower location from satellite imagery or Google Maps. We may also look to develop easier ways for cities to create their own databases that are kept up-to-date or develop better community science incentives to keep crowdsourced data sources such as OSM Buildings, OpenCellID, and nPerf up-to-date and to reach new users who do not currently contribute to these datasets.
§.§ Limitations of this Study
We acknowledge that this work is limited, as it focuses on a single-city case study. Although we believe that Chicago is representative of many other large cities,
we lack the empirical evidence needed to “assess the implications and potentially transformative consequences" of how similar smart city networks would emerge in different urban contexts <cit.>. An additional limitation is that we use weather data from US government agencies and there are only three weather stations in the Chicago area. Although we also had temperature and humidity readings at each node, these sensors were located inside the node enclosures, and thus did not always provide accurate external measurements. Thus, our weather-related analyses are not hyperlocalized to most of the sensors, and it is possible that there are hyperlocal weather correlations, such as urban heat islands, that affected sensor connectivity.
§ CONCLUSION
In this work, we present the challenges and opportunities from a year-long, city-wide urban sensor network deployment. The network was created based on five specific criteria of success that we identified from past work. We provide an in-depth analysis of deployment data from the perspectives of cellular connectivity and solar energy harvesting, the two key features that help meet the success criteria. In addition, we highlight inherent challenges with the open data sources available for root-cause analysis of failing nodes, and identify strengths and weaknesses to define future research directions that will support large-scale, real-time energy harvesting deployments in achieving reliable, equitable smart city networks.
|
http://arxiv.org/abs/2307.03963v1 | 20230708122517 | An observational signature for extremal black holes | [
"Stefanos Aretakis",
"Gaurav Khanna",
"Subir Sabharwal"
] | gr-qc | [
"gr-qc",
"hep-th",
"math-ph",
"math.MP"
] | |
http://arxiv.org/abs/2307.07560v1 | 20230714180510 | Entanglement in an expanding toroidal Bose-Einstein condensate | [
"Anshuman Bhardwaj",
"Ivan Agullo",
"Dimitrios Kranas",
"Justin H. Wilson",
"Daniel E. Sheehy"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.05672v1 | 20230711180002 | Scientific Objectives of the Hot Universe Baryon Surveyor (HUBS) Mission | [
"Joel Bregman",
"Renyue Cen",
"Yang Chen",
"Wei Cui",
"Taotao Fang",
"Fulai Guo",
"Edmund Hodges-Kluck",
"Rui Huang",
"Luis C. Ho",
"Li Ji",
"Suoqing Ji",
"Xi Kang",
"Xiaoyu Lai",
"Hui Li",
"Jiangtao Li",
"Miao Li",
"Xiangdong Li",
"Yuan Li",
"Zhaosheng Li",
"Guiyun Liang",
"Helei Liu",
"Wenhao Liu",
"Fangjun Lu",
"Junjie Mao",
"Gabriele Ponti",
"Zhijie Qu",
"Chenxi Shan",
"Lijing Shao",
"Fangzheng Shi",
"Xinwen Shu",
"Lei Sun",
"Mouyuan Sun",
"Hao Tong",
"Junfeng Wang",
"Junxian Wang",
"Q. Daniel Wang",
"Song Wang",
"Tinggui Wang",
"Weiyang Wang",
"Zhongxiang Wang",
"Dandan Xu",
"Haiguang Xu",
"Heng Xu",
"Renxin Xu",
"Xiaojie Xu",
"Yongquan Xue",
"Hang Yang",
"Feng Yuan",
"Shuinai Zhang",
"Yuning Zhang",
"Zhongli Zhang",
"Yuanyuan Zhao",
"Enping Zhou",
"Ping Zhou"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO",
"astro-ph.HE",
"astro-ph.IM",
"astro-ph.SR"
] |
Scientific Objectives of the HUBS Mission
1]Joel Bregman
2]Renyue Cen
3,4]Yang Chen
5]Wei [email protected]
6]Taotao Fang
7,8]Fulai Guo
9]
Edmund Hodges-Kluck
5]Rui Huang
10,11]Luis C. Ho
12]Li Ji
7,8]Suoqing [email protected]
2,12]Xi Kang
13]Xiaoyu Lai
5]
Hui Li
12,1]Jiangtao Li
2]Miao Li
3,4]Xiangdong Li
14]Yuan Li
15]Zhaosheng Li
16]Guiyun Liang
17]
Helei Liu
12]Wenhao Liu
18]Fangjun Lu
5]Junjie Mao
19]Gabriele Ponti
20]Zhijie Qu
21]Chenxi Shan
10]
Lijing Shao
7]Fangzheng Shi
22]Xinwen Shu
3,4]Lei Sun
6]Mouyuan Sun
23]Hao Tong
6]Junfeng Wang
24]
Junxian Wang
25]Q. Daniel Wang
26]Song Wang
24]Tinggui Wang
27,10]Weiyang Wang
28]
Zhongxiang Wang
5]Dandan [email protected]
21]Haiguang [email protected]
29]Heng Xu
27,10]Renxin Xu
3,4]Xiaojie Xu
24]
Yongquan Xue
12]Hang Yang
7,8]Feng [email protected]
12]Shuinai Zhang
5]Yuning Zhang
30]
Zhongli Zhang
21]Yuanyuan Zhao
31]Enping Zhou
3,4]Ping Zhou
J Bregman et al.
[1]Department of Astronomy, University of Michigan, Ann Arbor, MI, 48109-1107, USA
[2]Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China
[3]School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China
[4]Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210023, China
[5]Department of Astronomy, Tsinghua University, Beijing 100084, China
[6]Department of Astronomy, Xiamen University, Xiamen, Fujian 361005, China
[7]Astrophysics Division, Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
[8]Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
[9]NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
[10]Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China
[11]Department of Astronomy, School of Physics, Peking University, Beijing 100871, China
[12]Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China
[13]Department of Physics and Astronomy, Hubei University of Education, Wuhan 430205, China
[14]Department of Physics, University of North Texas, Denton, TX 76203, USA
[15]Key Laboratory of Stars and Interstellar Medium, Xiangtan University, Xiangtan 411105, China
[16]CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
[17]School of Physical Science and Technology, Xinjiang University, Urumuqi 830046, China
[18]Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
[19]INAF-Osservatorio Astronomico di Brera, I-23807 Merate (LC), Italy
[20]Department of Astronomy & Astrophysics, the University of Chicago, Chicago, IL 60637, USA
[21]School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
[22]Department of Physics, Anhui Normal University, Wuhu 241002, China
[23]School of Physics and Materials Science, Guangzhou University, Guangzhou 510006, China
[24]Department of Astronomy, University of Science and Technology of China, Hefei 230026, China
[25]Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA
[26]Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
[27]School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China
[28]Department of Astronomy, School of Physics and Astronomy, Yunnan University, Kunming 650091, China
[29]National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
[30]Shanghai Astronomical Observatory, Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Shanghai 200030, China
[31]Huazhong University of Science and Technology, Wuhan 430074, China
The Hot Universe Baryon Surveyor (HUBS) is a
proposed space-based X-ray telescope for detecting X-ray emissions from the hot
gas content in our universe. With its unprecedented spatially-resolved
high-resolution spectroscopy and large field of view, the mission will be
uniquely qualified to measure the physical and chemical properties of the hot
gas in the interstellar medium, the circumgalactic medium, the intergalactic
medium, and the intracluster medium. These measurements will be valuable for
two key scientific goals of HUBS, namely to unravel the AGN and stellar
feedback physics that governs the formation and evolution of galaxies, and to
probe the baryon budget and multi-phase states from galactic to cosmological
scales. In addition to these two goals, the mission will also help us
solve some problems in the fields of galaxy clusters, AGNs, diffuse X-ray
backgrounds, supernova remnants, and compact objects. This paper discusses the
perspective of advancing these fields using the telescope.
95.55.Ka, 98.35.Gi, 98.70.Qy
Scientific Objectives of the Hot Universe Baryon Surveyor (HUBS) Mission
§ INTRODUCTION
Over 99% of the baryonic matter in the Universe is in the form of ionized
plasma, among which diffuse gas is the most prevalent form that spans
over a wide range of scales from the interstellar medium (ISM) to the
circumgalactic medium (CGM), the intracluster medium (ICM), and the
intergalactic medium (IGM). The diffuse gas is a key component in the cosmic
baryon budget and cycle, as it is the main reservoir of mass, metals and energy,
and thus regulates the formation of stars and galaxies (e.g.,
<cit.>). In particular, galaxies may embrace
pristine cosmological inflows as fuel for star formation, and, in the meanwhile,
eject a significant fraction of mass and metals back to the surrounding diffuse gas
via so-called feedback processes, e.g., supernova feedback and active galactic
nucleus (AGN) feedback. The interplay between the inflow and outflow of baryons
in galaxies is a key process in the formation and evolution of galaxies.
Compared with dark matter which only interacts gravitationally, baryonic
matter is more richly and sensitively imprinted with the history of galactic
feedback and galaxy evolution, due to the more complicated and less understood
baryonic physics, including gas (magneto) hydrodynamics, ionization, cooling and
heating, and chemical enrichment. Therefore, understanding the physical and
chemical properties of diffuse gas residing in “galactic ecosystems” is a
fundamental problem in modern astrophysics <cit.>.
Unlike the ionized plasma in the form of stars which emit
photons powered by nuclear reactions and thus are
easily observable, the ionized diffuse gas within and between galaxies is more
difficult to probe. Fortunately, atomic physics enables the possible detection of diffuse gas via emission and absorption across a wide range of the
electromagnetic spectrum, among which the X-ray band is highly prominent. X-rays
are ubiquitous in astrophysical environments: at large (galaxy scale and above)
scales, gas is usually virialized at the virial temperature that increases with
the greater enclosed mass of the system, and thus hot diffuse gas is commonly
expected especially in massive astrophysical systems. For instance, galaxy
clusters are filled with hot ICM with temperatures up to T∼10^8 K. At
smaller scales, the gas can be efficiently heated by the feedback processes from
stars and AGNs. Due to the nature of the emission mechanism, the gas thermal
properties, i.e., temperatures and densities of the X-ray-emitting gas, can be
directly obtained from the X-ray spectra. In addition, through the detection of
emission and absorption lines, the X-rays can also be used to determine the
kinematic properties and chemical compositions of the hot gas, which contain a
tremendous amount of important information for the physical processes such as
AGN and stellar feedback and interaction between galaxies. The X-ray observation
is thus a powerful tool to probe the thermal states and kinematics of the
diffuse gas, and thus to understand the baryon budget and cycle in the Universe.
The Hot Universe Baryon Surveyor (HUBS) mission <cit.>
is a timely effort to unravel the mystery of the baryon budget and cycle. The
design of HUBS is highly optimized for detecting extended X-ray emission from
the diffuse hot gas in and around galaxies <cit.>. Compared
with other X-ray missions such as and , two features of HUBS stand out: (1) large field-of-view (of 1^∘ half-power diameter), and (2)
high spectral resolution (of <1 eV for the central sub-array and 2 eV for the
main array at 1 keV). The hybrid design of the detector is adopted to enhance
absorption-line studies through observations of point-like background sources
like active galactic nuclei (AGN) or gamma-ray bursts (GRBs), while staying
within the capability of current readout technologies, which limit the number of
pixels in the detector array. Other technical trade-offs have also been made.
For instance, the spectral range is capped at 2 keV, to make it easier to
realize high spectral resolution while maintaining good quantum efficiency of
the detector. Because CGM and IGM are of very low density, the X-ray emission
from them is expected to be extremely weak but, fortunately, be dominated by
spectral lines. HUBS is designed to make full use of the unprecedented spectral
capabilities of the transition edge sensor-based microcalorimeters in detecting
hot CGM and IGM, through narrow-band imaging around strong emission lines, and
in deriving their physical and chemical properties, through high-resolution
X-ray spectroscopy. Moreover, for detecting very extended emissions, the larger
the field of view, the more photons are let in, and the higher the signal-to-noise
ratio. Furthermore, a large field of view provides high efficiency in covering
nearby galaxies and galaxy groups or clusters.
We refer the readers to Table 1 in <cit.> and Table 1 in
<cit.> for detailed specifications of HUBS.
The high-resolution X-ray spectroscopic observations with HUBS are also
expected to enable advancement in many other areas, including inflows and
outflows in active galactic nuclei (AGN), elemental abundances and distribution
in supernova remnants (SNRs), the origin of diffuse X-ray background, flaring
activities, relativistic effects in compact objects, as well as lunar or
planetary X-ray emission associated with solar wind charge exchange, which
produces foreground X-ray emissions and bears high relevance to active research
in the fields of laboratory astrophysics and atomic physics.
This paper discusses the scientific objectives of HUBS. As already mentioned, the design and optimization of the payload are driven by a set of core scientific objectives. Here, we present the core science at two levels, which cover two marginally overlapping spatial scales, with one focusing on feedback processes and baryon/metal cycling in the galactic ecosystem, and the other on the hot baryon budget and multi-phase status at larger scales. The former is discussed in <ref> and the latter in <ref>, respectively. In <ref>, Galactic sciences of HUBS are discussed. In <ref>, we explore the capability of HUBS by reviewing adopted atomic models and analysis techniques. We finally report the current status of HUBS in <ref> and conclude in <ref>.
§ GALACTIC ECOSYSTEM: FEEDBACK AND BARYON CYCLES
AGN and stellar feedback is a bottleneck in the study of galaxy formation and
evolution, and it is also a cutting-edge topic in astrophysics in recent years. AGN feedback is believed to be dominant in relatively massive galaxies while stellar feedback is believed to be more important in less massive ones <cit.>. Both of them are closely
related to star formation activity in galaxies and are responsible for the production of galaxy wind, generating an X-ray halo around galaxies, called circumgalactic medium <cit.>.
With the help of HUBS, we will be able to answer a few key questions driving our understanding of the feedback physics and its relationship to galaxy evolution, including:
∙ How does AGN feedback suppress star formation and even quench the galaxy, and what are the respective roles of different feedback modes and different AGN outputs (radiation, wind, and jet) in these processes?
∙ How are different phases of the ISM/CGM regulated by the AGN and stellar
feedback?
∙ How much energy is released, and how much matter is heated/accelerated during the interaction of feedback with the ISM/CGM?
∙ How can HUBS help to constrain important non-thermal physics in
the CGM?
The hot circumgalactic medium (CGM) may contain
considerable mass and metals, and is crucial for understanding the mechanism of feedback. High energy resolution imaging spectroscopy X-ray observations of galaxies and their surrounding CGM/IGM play an important role in our understanding of the co-evolution and interplay between galaxies and their environments. As highlighted by the Decadal Survey on Astronomy and Astrophysics 2020 (Astro2020) <cit.>, studying various forms of stellar and AGN feedback on the galaxy ecosystem over a large physical scale is critical in unveiling the hidden drivers of galaxy growth, including the connection between star formation and the ISM, the cycling of gas and metals in and out of the galactic disk and halo, as well as the ionizing sources of the Universe.
With HUBS, the CGM and the associated physical processes can be probed through direct detection of emission lines, or absorption line studies with the observations of bright background sources, taking advantage of the superior spectral resolution of the central sub-array. In addition, observations of the CGM are also crucial to understand the baryon/metal cycling and budget. With its breakthrough technology of combining the high energy resolution micro-calorimeter detector and a large-FOV X-ray telescope <cit.>, HUBS has an unprecedented capability in resolving and detecting individual X-ray emission lines from the hot plasma either in thermal equilibrium or not (Fig. <ref>a). It is also outstanding in absorption line studies of X-ray bright background sources (Fig. <ref>b). The former is extremely important in the study of how the stellar and AGN feedbacks affect the galactic ecosystem. Within ∼5 years of science operation, HUBS will conduct milestone studies of stellar and AGN feedback mainly in two ways: either by moderately deep surveys of nearby objects with large angular sizes, resolving hot gas features with a physical size from star clusters to massive galaxy clusters (∼10–10^6 pc); or by deep enough observations of individual galaxy halos at moderate distances to measure the physical and chemical properties of the hot CGM.
§.§ AGN feedback
Active galactic nuclei (AGN) are the observational manifestation of matter inflow toward supermassive black holes, which can be found at the centers of almost all massive galaxies <cit.>. Compared to their host galaxies, AGNs are miniature in both size and mass. However, they are expected to provide substantial feedback (in the form of outflows) to their host galaxies and beyond <cit.>.
Generally speaking, AGN-driven outflows have two forms: radio jets and ionized winds. Highly collimated relativistic jets, mainly observed in the radio band, can be found in some AGN accreting at relatively low efficiencies <cit.>. Radio jets are also known as the kinetic or maintenance mode of AGN outflow. Ionized winds with large solid angles are prevalent in AGN accreting at high efficiencies <cit.>. Ionized winds are often referred to as the radiative mode of AGN outflow. Presently, grating spectrometers in the X-ray and UV bands are the main working horses to probe these ionized winds via absorption spectroscopy since these winds are too small to be resolved via direct imaging.
In the X-ray band, there are mainly three types of ionized winds: warm absorbers, ultrafast outflows, and obscuring winds. The classical warm absorbers are identified with multiple narrow absorption lines with a typical outflow velocity of ≲ 10^3 km s^-1 <cit.>. Ultrafast outflows are mainly inferred from the absorption features of highly ionized Fe xxvi and/or Fe xxv in the hard X-ray band <cit.>. The outflow velocity of ultrafast outflows can reach up to about a third of the speed of light (∼10^4-5 km s^-1). Ultrafast outflows occupy the high column density (N_ H), ionization parameter (ξ), and outflow velocity (v_ out) part of the parameter space, while warm absorbers occupy the other side of the parameter space. Transient obscuring winds are currently identified mainly in coordinated multi-wavelength observations <cit.>.
In the N_ H-ξ-v_ out parameter space, obscuring winds are in between warm absorbers and ultrafast outflows, overlapping more with the former.
To study weak absorption lines of AGN winds, two key instrument parameters are
critical: energy resolution (R) and effective area (A_ eff). The larger
the product R× A_ eff (as the figure of merit), the better we can
constrain weak absorption lines. As discussed at the beginning of
<ref>, compared to existing grating spectrometers aboard
and , will greatly advance our knowledge of AGN winds.
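A rough scaling makes this figure of merit explicit. For an unresolved absorption line of equivalent width EW superposed on a continuum of photon flux F, the line signal collected in an exposure t is ∝ EW · F · A_ eff · t, while the Poisson noise from the continuum within one resolution element Δ E = E/R is ∝ √(F · A_ eff · t · E/R). The signal-to-noise ratio of the line detection therefore scales as EW √(F A_ eff t R/E), so that the minimum detectable equivalent width decreases as 1/√(R × A_ eff) at fixed source flux and exposure.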
Among the three types of AGN winds in the X-ray band, the classical warm absorbers are the most frequently detected <cit.>. Nonetheless, we still have gaps in our understanding of the warm absorber; e.g., its number density and distance to the black hole remain largely uncertain. These two parameters, linked to each other via the measurable ionization parameter, are essential for inferring the origin of the warm absorber as well as its impact on the circumnuclear medium and beyond.
The number density of ionized winds can be constrained with density-sensitive metastable absorption lines. In theory, such diagnostics can cover more than ten orders of magnitude in number density <cit.>. In practice, successful applications are rather scarce: NGC 4151 <cit.> and GRO J1655-40 <cit.>. The latter is an X-ray binary. In addition, upper or lower limits were obtained for Mrk 279 <cit.> and NGC 5548 <cit.>. This is partly due to the insufficient figure of merit (R× A_ eff) of current instruments. The situation can be significantly improved with future missions like .
Figure <ref> illustrates such diagnostics with . In this simulation, the 0.2-2 keV observed continuum flux is ∼7×10^-12 erg s^-1 cm^-2. This flux level is low when compared to those of the well-studied targets like NGC 5548 <cit.> and NGC 3783 <cit.>. Even for this low continuum flux, with 400 ks exposure, the central grid of (with the <1 eV energy resolution) can well constrain the density of the wind with multiple key diagnostic lines. For targets with higher continuum flux, the required exposure time might be further reduced. In addition, the long exposure observations of can help us to better understand the wind density and in turn location via spectral timing analyses <cit.>. Moreover, potential eclipse events of the clumpy absorber could also provide useful constraints on its location and thus density <cit.>.
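As a back-of-the-envelope check of why such an exposure is sufficient, the short sketch below estimates the source photon budget; the effective area and mean photon energy used here are assumed, illustrative values rather than actual instrument numbers.

# Rough photon budget for a 400 ks exposure of a source with an observed
# 0.2-2 keV flux of ~7e-12 erg s^-1 cm^-2 (illustrative numbers only; the
# effective area and mean photon energy below are assumptions, not instrument values).
flux = 7e-12            # erg s^-1 cm^-2, 0.2-2 keV
mean_energy_keV = 0.8   # assumed flux-weighted mean photon energy
erg_per_keV = 1.602e-9
a_eff_cm2 = 500.0       # assumed soft-band effective area
t_exp_s = 400e3

photon_flux = flux / (mean_energy_keV * erg_per_keV)   # photons s^-1 cm^-2
total_counts = photon_flux * a_eff_cm2 * t_exp_s
print(f"~{total_counts:.1e} source counts collected")  # of order 1e6

With of order a million source counts spread over the soft band, even weak metastable absorption lines contain enough photons for meaningful density diagnostics.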
Another fundamental parameter of the warm absorber that is not well constrained is its opening angle. This can be indirectly inferred from the warm absorber occurrence fraction in a sample of type 1 AGN. In the era of ASCA, the occurrence fraction of the warm absorber was ∼50% <cit.>. The occurrence fraction was revised to ∼65% in the era of and <cit.>. However, limited by the capability of current instruments, these sample studies are restricted to bright targets. The largest sample size among the three studies is merely 26.
Such sample studies can be improved with . On one hand, its energy resolution and effective area are suitable for weak absorption line studies. On the other hand, its large field of view enables us to accumulate a large number of high-quality spectra in an efficient way. Figure <ref> shows the simulation of warm absorber features of a serendipitous AGN when observing other core science or observatory science targets. If not filled with extended sources, the 1-square-degree field of view (with 3600 integral field units) might have a few serendipitous AGN with detectable warm absorber features <cit.>. Even for a rather low continuum 0.2-2 keV flux of 1.0×10^-13 erg s^-1 cm^-2 (nearly two orders of magnitude lower than any of the sample targets in <cit.>), warm absorber features such as the Fe M-shell unresolved transition array (UTA) feature can be well detected with the 2 eV energy resolution of the normal grid. That is to say, without requesting dedicated observations, we can still accumulate a good sample of AGN warm absorbers.
By the same token, we can search for ultrafast outflows in the soft X-ray band in these serendipitous AGNs. The occurrence fraction of ultrafast outflows is at least 30% in the sample studies by <cit.>. However, these fast winds are rarely detected in the soft X-ray band <cit.>. Moreover, ultrafast outflows can be quite variable <cit.>. The long-exposure observations of can also help us to better understand the evolution of these powerful winds.
Some AGN winds are probably powered by accretion disks. As a result, the disk winds may take a significant fraction of the accretion power away from the accretion disk, which significantly alters the disk temperature profiles <cit.>, energy density distributions <cit.> and disk sizes <cit.>.
Transient accretion onto supermassive black holes (SMBH) in galactic centers can also lead to winds and outflows. The most interesting class is the stellar tidal disruption event (TDE). A TDE occurs when a star passes too close to a SMBH and gets tidally
disrupted and accreted, producing a flare of radiation peaking in the
UV and soft X-rays <cit.>. TDE provides a unique probe of quiescent SMBHs in normal galaxies, especially the physical processes associated with various accretion states on a practically observable timescale of several years. The super-Eddington fall-back rates and potentially super-Eddington accretion rates in TDEs have been predicted in several theoretical works <cit.>, which can lead to fast, radiation-driven outflows, as seen in MHD simulations of such systems <cit.>.
X-ray outflows in TDEs are crucial for constraining super-Eddington accretion flows, yet observations are still sparse <cit.>. This is possibly due to the X-ray faintness of most TDEs, particularly those discovered in the optical bands. An ionized X-ray outflow (from the blueshifted O viii absorption trough) with a velocity of 0.2 c has been reported in the nearby TDE ASASSN-14li, but it disappeared in the late-time observations <cit.>. If interpreted as a super-Eddington, radiatively driven outflow, it must have “turned off” only about one year after the TDE luminosity peak. Interestingly, with dedicated grating spectroscopy observations, X-ray outflows with a much lower velocity (∼300 km s^-1) have been detected in ASASSN-14li, which are also variable in physical conditions <cit.>. Such a low-velocity outflow component may be common among TDEs and could be interpreted as absorption through a super-Eddington wind or through a filament of stellar debris. Since TDE emission mainly peaks in the soft X-ray band <cit.>, with its superior sensitivity and spectral resolution, will allow the soft X-ray outflow properties to be measured precisely at different accretion states.
For instance, is able to measure the evolution of ionized absorption features as a function of luminosity and/or accretion rate, especially in the rising and decaying phase of the TDE flares, establishing the connection between the outflows and the super-Eddington accretion process, which is still poorly constrained.
In the current model of galaxy formation and evolution, feedback is the most
uncertain physical process. Almost all cosmological simulations adopt
subgrid models of AGN feedback that are highly uncertain and differ markedly from one another. In this sense, simulations of the evolution of a single galaxy have a significant advantage because a much higher resolution can be achieved and the inner boundary of the AGN accretion flow, i.e., the Bondi radius of black hole accretion, can even be resolved numerically. This ensures that, by self-consistently evolving the gas flow from scales of hundreds of kpc down to the inner Bondi radius, we can reliably trace the mass flux crossing the Bondi radius, i.e., the mass accretion rate of the AGN, which is crucial for determining the magnitude of AGN activity. Moreover, in this case, state-of-the-art AGN physics <cit.> can be incorporated into the simulations (e.g., <cit.>). Fig. <ref> shows such an example of simulation results, which describes the evolution of a single elliptical galaxy when the effects of AGN feedback are taken into account.
Fig. <ref> shows the mock spectra of an elliptical galaxy produced by the gas within 0.1-10 kpc, generated using the response of the telescope for four different models of AGN feedback, together with the fitting results of a one-temperature model. This result demonstrates the ability of the high-resolution spectra obtained by to discriminate between different feedback models.
As a next step, it is crucial to develop the model further by incorporating more physics, and to study in detail the roles in AGN feedback of the different AGN outputs, such as radiation, winds, and jets in the cold and hot feedback modes. Moreover, we also need to incorporate these simulation results into cosmological simulations.
§.§ Stellar feedback
Though there is no doubt that supernova feedback is crucial to the suppression of star formation and the launching of galactic winds, it is now realized that early feedback from massive stars (e.g. fast OB winds and ionizing radiation) is equally important, since the total energy output from these processes is comparable to or even larger than that of the supernovae (SNe; e.g. <cit.>). More importantly, early feedback operates as soon as massive stars are formed, much earlier than the SN explosions at the deaths of the stars, which take at least ten Myr. Given that most star-forming regions are short-lived, early feedback processes determine the evolution and fate of individual star-forming regions. Modern galaxy formation simulations have already shown that early feedback disrupts local density concentrations, shuts off star formation (e.g. <cit.>), clears out material and enhances the effects of subsequent SN feedback, reduces the clustering of stars (e.g. <cit.>), enhances chemical enrichment in galactic environments <cit.>, and changes the mass and spatial distribution of GMCs (e.g. <cit.>).
However, large uncertainties remain regarding the intensity of these processes and their coupling to the ambient multi-phase, turbulence-dominated medium. Diffuse X-ray observations of nearby star-forming regions therefore provide a direct laboratory for confronting theoretical expectations with observations of the effects of feedback from the smallest scales. Previous observations have already demonstrated that diffuse X-ray emission is ubiquitous in star-forming regions (e.g. <cit.>). One famous example is 30 Doradus, the best-studied massive star-forming region in the LMC.
Fig. <ref> shows a composite image in the X-ray, Hα, and UV bands. The complex exhibits many blisters and bubbles filled with hot plasma traced by soft X-rays. These bubbles are surrounded by warm gas traced by UV and Hα emission, demonstrating the importance of the interfaces between gas of different phases. The 30 Doradus region has been investigated extensively in X-rays. Back in the 1990s, Wang & Helfand 1991 (<cit.>) first revealed the diffuse X-ray emission using the Einstein Observatory and fitted the spectra with an isothermal plasma model with a temperature of 5 × 10^6 K. Most recently, using Suzaku observations of 30 Doradus, Cheng et al. 2021 (<cit.>) showed that the spectra are better modeled with a log-normal temperature distribution, which leads to a higher estimate of the total thermal energy and gas pressure. However, due to the limited spectral resolution of existing X-ray telescopes, the detailed temperature distribution and total thermal energy of the hot gas are not well determined, thereby limiting our understanding of how feedback modulates the hot gas around young star clusters.
From a theoretical perspective, numerical simulations have recently begun to model the various key physical processes in massive star-forming regions. In Fig. <ref>, we show the results from a radiation-hydrodynamic simulation that takes into account various stellar feedback processes such as fast stellar winds and ionizing radiation from massive stars (Li et al. in prep.). The combined effects of feedback disrupt the cloud very quickly, in only a few Myr, and generate a huge amount of hot gas with a complicated temperature distribution, which is a sensitive probe of the detailed feedback implementation.
will provide a major improvement on this matter, thanks to its large field of view and excellent spectral resolution. The 2 eV spectral resolution will reveal individual metal lines with different ionization states and yield accurate temperatures and electron number densities at each pixel of the observation. The high spectral resolution will also allow us to determine the shift of the line center and, in turn, the line-of-sight velocity of the hot gas. Thanks to 's huge field of view, only a single pointing is needed to obtain all the above key physical quantities in a spatially resolved fashion across the whole star-forming region. These physical quantities, together with the derived total energy budgets, are powerful measures with which to directly constrain the dynamics of the regions driven by early stellar feedback processes from a single stellar population.
Star-forming regions are highly turbulent, multi-phase media. It has been
demonstrated that wind feedback naturally creates fractal hot bubble surfaces,
where cooling becomes extremely efficient in the interface between different
phases of the gas (<cit.>). The interface is an ideal site to
trigger charge exchange between ionized and neutral gas. The charge exchange
(CX) X-ray emission is therefore a powerful tool to quantify the physical
conditions between hot plasmas and cold neutral gas within the star-forming
regions. Significantly different from collisional thermal emission, the X-ray
spectra from CX present enhanced forbidden and intercombination lines relative
to the resonance lines, such as the Heα triplet of O vii, N vi, and Ne ix (<cit.>). However, clear signatures of CX in
X-rays are extremely difficult to detect (<cit.>) because
it requires high spectral resolutions and S/N ratios to resolve the triplets.
will be a game changer for the detection of CX spectral signatures due to its large effective area, which enhances the S/N ratios, and will provide solid evidence for, and quantification of, the fractal nature of the turbulent medium in star-forming regions.
Another advantage of the high spectral resolution is that the spectra can
be used to determine accurate abundances of many elements, such as O, Mg, Ne, and Fe. These measurements will provide stringent constraints on the metal yields of massive stars from stellar population synthesis, which are still very
uncertain due to the different models of binary evolution. Besides the total
metal yields, the spatially-resolved metallicity distribution will also be used
to study the time-dependent metal loading and the process of metal diffusion,
both of which are key physical ingredients of the large-scale galaxy formation
models (e.g. <cit.>). Besides 30 Doradus, other interesting sources include the Carina Nebula, NGC 3603, and M17. A compilation of star-forming
regions of various evolutionary stages will provide us with a great opportunity
to confront the theoretical understanding of the stellar feedback in different
stellar populations.
§.§ CGM: constraining feedback physics and baryon cycle
At scales of a few hundred kpc, the fraction of baryons in the universe contributing to the circumgalactic medium of galaxies is crucial for answering the “missing baryons” question. Part of the missing baryons is likely to
exist in the hot medium outside the galaxy <cit.>, but the exact
location is unclear: It may be mainly in the dark matter halo near the galaxy,
or around a large-scale structure other than the dark matter halo
<cit.>. The amount of baryons contained in the thermal
medium is determined by the accretion driven by dark matter halos and the feedback of
galaxies. It is now widely accepted that feedback in less massive and massive
galaxies are dominated by supernovae and AGNs respectively
<cit.>, while how they affect the baryon
content of the circumgalactic medium remains under active investigation.
Thankfully, since the density of the CGM is not forbiddingly low, the emitted
X-rays of the circumgalactic medium are still above the detection limit.
X-ray observations of the hot thermal medium around galaxies can constrain its mass, which will help us understand the distribution of baryons and the feedback physics.
Because of its low density, the CGM is very sensitive to feedback, and the properties of the CGM generated by different simulations
are very different (e.g., <cit.>). Therefore, comparison with
the observed CGM can be used to assess the credibility of feedback models.
Fig. <ref> compares the distribution maps of the CGM O VII and O VIII column densities (Li et al. in prep),
including the mainstream cosmological simulation IllustrisTNG and zoom-in
simulation <cit.>, and the simulation of a single galaxy
<cit.>; each simulation adopts a different feedback model. The predicted O VII and O VIII column densities differ substantially among the simulations. 's observations of the CGM in nearby galaxies can be compared with the results of these numerical simulations to evaluate how well different simulations reproduce the CGM and to constrain galaxy feedback models.
To better constrain the galaxy formation models, a large sample of the hot CGM
is needed and systematically compared with the numerical simulations. At
present, only dozens of disk galaxies have detected hot halos, essentially all within 30 Mpc. Generally speaking, in less massive galaxies, the
thermal CGM is fainter and less extended. can use its large field of view
to significantly improve observational efficiency and detect fainter, more distant, and less massive galaxies, increasing the sample size by at least an order of magnitude. In this way, there will be a statistically complete sample in
various physical parameter intervals of galaxies, such as galaxy mass, stellar
activity in the galaxy, and the large-scale environment in which the galaxy is
located. This provides a comprehensive picture of how the properties of the hot
CGM change with the physical parameters of the galaxy. Numerical simulations
that have developed rapidly in recent years will also cover these physical
parameters and systematically predict the outflow of these galaxies and the
thermal medium around galaxies. Comparing the total luminosity, metal abundance,
spectrum and other information of the observed X-rays with the results of
numerical simulation, we can (1) constrain the feedback processes in galaxies under
different conditions and give quantitative limits on the mass cycle of galaxies;
(2) constrain the mass of hot baryons contained in the CGM from a statistical point of view, and address the “missing baryons” problem.
On galactic scales, how the galactic winds from SN and AGN feedback including
the AGN jets <cit.> interact with
cosmological inflows is the basis for the formation of galaxies. A key question
in galaxy formation is how galaxies obtain mass from their CGM and when they stop growing <cit.>. Competition between inflows and
feedback directly determines the growth rate of galaxies. Inflows increase the mass of galaxies, while galactic winds not only take away mass and metals from galaxies, but also reduce the accretion of new gas from the surroundings. If
this effect is significant, the inflow will be blocked and the star formation
will be suppressed <cit.>. The CGM is where inflows and outflows converge. Therefore, observations and studies of the circumgalactic medium will provide important clues to the formation and evolution of galaxies.
To quantitatively constrain the strength of feedback and inflow, a number of
attempts have been made. The Numerical Investigation of a Hundred Astrophysical
Objects (NIHAO) project <cit.> simulated a total of 100 galaxies
with masses ranging from dwarfs to the Milky Way (MW)-like galaxies. Compared
with other simulations (e.g., Eagle, Illustris), the NIHAO simulation has some
advantages in sample size and resolution. The NIHAO simulation includes early (pre-supernova) stellar feedback, so it can reproduce the stellar-to-halo mass relation and is suitable for studying the baryon cycle. The impact of SN
feedback on the baryon cycle is further studied <cit.>, where
the percentages of baryons ejected out of the dark matter halo and returned to the galaxy are predicted. Because the brightness of the thermal halo decreases with increasing radius, existing observations generally only detect emission within a few kpc of the galaxy.
It is expected that can increase the detectable physical scale by more
than one order of magnitude, so that the current research on the inner region of
the thermal halo can be extended to the entire CGM (even the IGM), thereby
constraining radial profiles of various properties (e.g., density and
temperature). Theoretically, inflow and outflow have very different effects on
the properties of the thermal CGM (metal abundance, temperature, etc.). A
remarkable feature is that the metal abundance in the outflow-dominated area
will be much higher than that in the inflow. Outflows of different intensities will also produce very different effects. More energetic outflows launched by intense star formation can enrich the gaseous halo with metals and heat it at larger radii, which leads to a higher X-ray luminosity (see
Fig. <ref>, numerical simulation results <cit.>).
's good spectral resolution can provide important clues to the properties of
the thermal gas around the galaxy, such as temperature and metal abundance.
Combined with its good spatial resolution, it can constrain the inflow, the outflow, and the region where they interact, from which the feedback strength and the intensity of the inflow can be inferred. In addition, because the large-scale environment has so far been essentially invisible in X-rays, the amount of thermal gas in the medium around galaxies remains unconstrained; 's observations of this large-scale thermal gas will provide an important constraint.
It has long been a mystery that the expected amount of baryons around galaxies is not detected in existing multi-wavelength observations (e.g., <cit.>). For example, with the NIHAO galaxy formation simulations, <cit.> made predictions for the baryonic budget in present-day Milky Way-type (M_200∼ 10^12 M_⊙) galaxies. They found that, compared to a universal cosmic baryon fraction of f_b = Ω_b / Ω_m = 0.15, haloes of this mass scale are typically “missing” 30% of the expected baryons, which are relocated to beyond twice the virial radius and are dominated by a diffuse warm-hot gas.
The challenge is to find this gas and map its distribution. As can be seen in Fig. <ref>, the temperature at the peak of the radiative cooling curve (T∼10^5.5 K) is close to the virial temperature of L^⋆ galaxies. For L^⋆ or super-L^⋆ galaxies with more massive halos (M_ halo≳10^12-13 M_⊙), the virial temperature could fall in the X-ray emitting range, where the radiative cooling efficiency is relatively low compared to lower-mass halos, as the latter have higher metal cooling efficiency. In this case, there could exist an extended and stable X-ray-emitting hot gaseous halo that potentially contains a significant fraction of the “missing baryons” (e.g., <cit.>).
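As a rough illustration of this mass threshold, the sketch below evaluates the virial temperature T_ vir ≈ μ m_p G M_200/(2 k_B R_200) for a few halo masses; the overdensity, mean molecular weight, and critical density adopted are assumed fiducial values.

import numpy as np
from astropy import units as u, constants as const

def virial_temperature(m_halo, overdensity=200.0, mu=0.59,
                       rho_crit=9.47e-30 * u.g / u.cm**3):
    """Rough virial temperature and radius of a halo; fiducial values assumed."""
    m = m_halo.to(u.g)
    r_vir = (3.0 * m / (4.0 * np.pi * overdensity * rho_crit))**(1.0 / 3.0)
    t_vir = (mu * const.m_p * const.G * m / (2.0 * const.k_B * r_vir)).to(u.K)
    return t_vir, r_vir.to(u.kpc)

for log_m in (12.0, 12.5, 13.0):
    t, r = virial_temperature(10**log_m * u.M_sun)
    print(f"M_200 = 1e{log_m:.1f} Msun: R_200 ~ {r:.0f}, T_vir ~ {t:.1e}")

For a 10^12 M_⊙ halo this gives T_ vir of a few times 10^5 K, near the cooling-curve peak, while 10^13 M_⊙ halos reach a few times 10^6 K, i.e., well into the soft X-ray emitting regime.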
The first science case in this regard is to observe nearby objects. In this case, the hot CGM could be studied in unprecedented detail even with moderate exposures (e.g., <cit.>). The Andromeda galaxy (M31), with a stellar mass of M_*=(1-1.5)×10^11 M_⊙, a dark matter halo mass of M_ 200=(8-11)×10^11 M_⊙ <cit.>, SFR≈0.4 M_⊙ yr^-1 <cit.>, and d≈0.78 Mpc (1^'≈230 pc), is thus the best case to search for the large-scale accreted hot CGM. It is the external galaxy with the largest angular size of the virial radius (r_ vir≈ 300 kpc≈ 23^∘), while the companion galaxy M33 is located at a projected distance of ∼200 kpc from M31, within its dark matter halo (see Fig. <ref> for the configuration). Such a large angular size of the dark matter halo makes M31 unique for the most detailed study of the multi-phase CGM. In particular, there are many UV-bright background AGNs projected within r_ vir of M31 (Fig. <ref>), allowing for UV absorption line studies of the cool and warm gases from the CGM <cit.>. Furthermore, as our closest massive neighbor in the Local Group, M31 has also received many observations in other bands, which help us to study both its dark matter halo and the multi-phase CGM (e.g., <cit.>). To complement these multi-wavelength observations, the proposed large-sky-area survey with will need ∼(200-300) × 15 ks = (3.0-4.5) Ms of observations to cover the entire area of interest, such as the M31-M33 stellar and gas stream (<cit.>). This will provide us with a unique panchromatic view of the baryon budget among the stars and the multi-phase CGM.
Fig. <ref> shows the simulated spectrum extracted from the entire FOV toward the direction of M31, with an exposure of only ∼15 ks.
The real selection of the spectral extraction aperture depends on both the brightness of the feature of interest and the scientific goal. Here the 1^∘× 1^∘ aperture is still enough to separate the M31-M33 stream from the surrounding medium <cit.>, which is a large-scale structure with gaseous counterparts <cit.>. In many cases when we do not need such a high signal-to-noise ratio, a higher angular resolution down to the instrument limit (1^'≈230 pc) could be adopted, which is impossible for more distant galaxies. Due to the large angular size of the object, such observations of local galaxies are still very time-consuming and typically require a few mega-seconds.
We also expect to collect a sample of ≲10 massive galaxies at moderate distances for . The X-ray emission of the CGM could arise not only from the feedback of AGN and stellar sources (e.g., <cit.>), but also from the accretion shock heating and gravitational compression of the IGM (e.g., <cit.>). The relative importance of these two potentially interrelated mechanisms likely depends on a galaxy's mass, as well as other properties such as the SFR and the environment. The extended hot CGM could potentially contain a large fraction of a galaxy's “missing baryons”. However, due to its low density and metallicity, the X-ray emissivity of this extended hot CGM is extremely low <cit.>. In order to detect it and characterize its spatial distribution, we need a galaxy sample that is massive enough so the virialized gas has a temperature falling in the X-ray emitting band. These galaxies also need to be quiescent in star formation to avoid disproportionately strong X-ray emission from metal-enriched feedback material, as well as in a non-cluster environment such that the ICM would not contaminate the measurement of the CGM in the galaxy vicinity <cit.>. Furthermore, it will also be better if the galaxies are located at a moderate distance of d∼(50-100) Mpc, so the FOV will cover at least a significant fraction of the virial radius, and the redshifted soft X-ray emission lines could be separated from the MW foreground emission (<cit.>). The best cases are thus super-L^⋆ quiescent galaxies (e.g., <cit.>). With the high energy resolution and low background of , we can extract narrow-band images covering individual emission lines with significantly suppressed MW foreground emission, and probe its radial distribution out to almost the virial radius.
When probing X-ray emission from low surface brightness features such as the extended CGM, what matters is not only the photon statistics, but also the level and fluctuation of the sky background. With broadband X-ray imaging observations, we can typically detect the hot CGM only within r≲(20-30) kpc or r≲ 0.1r_ 200 (e.g., <cit.>).
We present a simulated ∼1 Ms spectrum of a z=0.01 (d≈50 Mpc) massive quiescent galaxy in Fig. <ref>, using the spectral model from <cit.>. It is clear that some key diagnostic emission lines of the hot gas, such as the redshifted Oviii line at the rest-frame energy of 0.654 keV, could be separated from the same emission line arising from the MW halo. This will significantly increase the signal-to-noise ratio of the redshifted hot gas emission lines in narrow-band imaging observations.
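A quick estimate illustrates this separation: at z=0.01 the O viii line with a rest-frame energy of 0.654 keV is observed at 0.654/(1+0.01)≈0.648 keV, i.e., shifted by ≈6.5 eV from the corresponding MW foreground line, several times larger than the 2 eV resolution element (and far larger than the <1 eV resolution of the central sub-array).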
We must note that the objects to be included in such observations need to be carefully selected according to their redshifts. A shorter distance is helpful for collecting more photons to study the physical and chemical properties of the brightest part of the hot CGM, but the contamination from the MW foreground makes it difficult to detect the faint extended hot CGM, which potentially contains a larger fraction of the baryons <cit.>. On the other hand, too large a distance will significantly reduce the flux of the object and make the project infeasible. The best choice will be objects at d∼(50-100) Mpc, such as the CGM-MASS sample studied in <cit.>. We would also like to emphasize that galaxies in the mass range of the CGM-MASS galaxies often show a large discrepancy between the hot CGM masses measured from X-ray and Sunyaev-Zel'dovich (SZ) observations <cit.>,
which could be partially caused by the poorly constrained hot gas density profile <cit.>. This is another reason to have deep X-ray observations probing the hot CGM from a large fraction of the dark matter halo. The total observation time needed to complete such a survey will be a few mega-seconds, depending on the real sample size and the adjustment of the exposure time for individual galaxies based on existing and observations <cit.>.
§.§ Additional feedback physics
Due to the high level of complexity of galactic environments, the feedback
processes might involve more physics than hydrodynamics and gravity. The impact
of non-thermal physics on galaxy formation, such as magnetic fields and cosmic
rays, was long overlooked; only recently has the importance of this additional feedback physics begun to be investigated and recognized <cit.>. A
number of theoretical models have been developed, predicting distinctively
different CGM properties over a wide range of physical parameter space poorly
constrained by existing observations. will be uniquely qualified to test
these models and constrain the feedback physics in the CGM, which is discussed
in detail as follows.
Magnetic fields: The magnetic field strength in the CGM is expected to be much weaker than that in the ISM, as the CGM is expected to be
more diffuse and less dense. In the MW halo, the best-fitting B-field values
are ∼ 1–10 μ G <cit.>.
In recent years, the strength and topology of galactic scale magnetic field in the CGM started to be well constrained in radio observations, via either polarization or Faraday rotation measure (RM) synthesis (<cit.>).
The observed magnetic energy density in the CGM could be either higher or lower than the hot gas pressure <cit.>, indicating a variety of roles the magnetic field plays in the global gas flows.
On the simulation side, a variety of magnetic field
strengths and topologies in the CGM are predicted by different sets of
simulations, such as SURGE <cit.> and FIRE
<cit.>, which is still under active investigation. For
instance, <cit.> found that the magnetic fields in the
simulations even become dominant in the bi-conical regions. Therefore, the
impact of magnetic fields in the CGM might not be negligible.
Magnetic fields affect the CGM in a few ways. First, magnetic fields provide non-thermal magnetic pressure to the CGM, which can even be comparable to the local thermal pressure if the field strength reaches a few μ G. In this case, the halo gas is partially supported by the magnetic pressure P_mag = |B|^2 / 8π, and can stay at a lower thermal pressure/temperature <cit.>. Second, magnetic fields can facilitate the production of cool gas in the CGM by enhancing thermal instability <cit.>. As shown in Fig. <ref>, in the presence of
magnetic fields (where the magnetic energy and gas thermal energy are
comparable), a significant amount of cool filaments arise in the CGM via
enhanced thermal instabilities (bottom), in contrast to the case without
magnetic fields where the CGM remains a single phase (top). Finally, magnetic
fields help the survival of the cool gas by suppressing turbulent mixing with the
hot phase via magnetic tension
<cit.>, or reduce thermal
conduction between the cool and hot phases via anisotropic conduction
<cit.>. Therefore, the overall impact of magnetic fields
is to increase the fraction of the cool gas in the CGM, and thus potentially
alter the CGM thermal status which can be tested by .
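A simple numerical comparison, assuming illustrative CGM values (total particle density n∼10^-4 cm^-3 and T∼10^6 K), shows why μ G-level fields are already dynamically relevant; the numbers below are assumptions chosen only for illustration.

import numpy as np

K_B = 1.380649e-16  # erg K^-1

def p_mag_over_k(b_micro_gauss):
    """Magnetic pressure B^2/(8*pi), expressed as P/k_B in K cm^-3."""
    b = b_micro_gauss * 1e-6  # Gauss
    return b**2 / (8.0 * np.pi) / K_B

def p_thermal_over_k(n_cm3, temperature_k):
    """Thermal pressure n*k_B*T, expressed as P/k_B (n = total particle density)."""
    return n_cm3 * temperature_k

for b in (0.5, 1.0, 3.0):
    print(f"B = {b} uG: P_mag/k_B ~ {p_mag_over_k(b):.0f} K cm^-3")
print(f"Hot CGM with n ~ 1e-4 cm^-3, T ~ 1e6 K: P_th/k_B ~ {p_thermal_over_k(1e-4, 1e6):.0f} K cm^-3")

Under these assumed values, a ∼1 μ G field already provides a pressure comparable to the thermal pressure of the hot phase, and a few-μ G field exceeds it.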
Cosmic rays: Cosmic rays (CRs) are ultra-relativistic protons/electrons
coupled with local plasma magnetic fields via Lorentz forces
<cit.>. At the galactic/ISM scale, CRs at GeV energies (which
dominate the CR energy spectrum) are produced by supernovae and AGN shock
acceleration, and are transported by turbulence and magnetic fields in ISM and
CGM. CRs are believed to be in roughly energy equipartition with the magnetic
fields and thermal pressure in the ISM <cit.>.
In recent years, the CR energy density and transport mechanisms have been better constrained via spatial analysis of the synchrotron radio continuum emissions detected above the galactic disks (e.g., <cit.>).
Recent theoretical studies suggest that with reasonable CR injection rates and
transport coefficients, the CR energy density in the CGM can be comparable, or
even significantly exceed, the thermal pressure in the CGM (e.g.,
<cit.>).
Ji et al. 2020 (<cit.>) found that the CR pressure in the CGM can be
one order of magnitude larger than the thermal pressure in the CGM, leading to a
CR pressure-dominated galaxy halo where the halo gas is primarily supported by
CR pressure rather than gas thermal pressure, and the temperature of the halo
gas is much lower than the virial temperature of ∼10^5-10^6 K, as shown in Fig. <ref> from the
FIRE-2 simulations[The Feedback in Realistic Environments (FIRE)
Collaboration:
http://fire.northwestern.edu<http://fire.northwestern.edu>]. Meanwhile, virial shocks expected in massive (M_halo≳
10^11.5M_⊙) galaxy halos are also absent from the CR pressure-dominated
CGM. Although this scenario is roughly consistent with the observed CGM
properties via quasar absorption lines such as H I and
O VI column densities <cit.>, two-dimensional
CGM emission maps which are expected from future observations can provide a
more direct test of the CR pressure-dominated CGM. In particular, the
morphologies and intensities of the soft X-ray emission from the CGM can be used
to distinguish between the CR pressure-dominated and the thermal
pressure-dominated CGM. In addition, the kinematic resolution can reach ∼1000 km/s in absorption and ∼300 km/s in emission, both of which are sufficient to probe
the structures of virial shocks in the CGM.
§.§ Potential case studies on feedback physics tailored for
In order to probe the feedback physics mentioned above, we herein propose a few well-studied objects for some possible follow-up observations, which may lead to breakthrough scientific output in our understanding of stellar and AGN feedback.
Sagittarius A^⋆ (Sgr A^⋆), located at the center of the Milky Way (MW), is the nearest supermassive black hole (SMBH), and thus provides us with a unique opportunity to witness the details of AGN or stellar (if star formation was more active in the past) feedback close to its launching site. There are increasing lines of evidence that outflows of energy and metal-enriched material from the central tens of parsecs of galaxies have shaped the observed structures on a variety of larger scales <cit.>. Fig. <ref> shows the observations of the Galactic center area <cit.>. The two “chimneys” suggest collimated bi-conical outflows, which further connect to larger-scale coherent structures such as the “Fermi bubbles” in γ-rays <cit.>, the “bubbles” in X-rays <cit.>, or the “WMAP Haze” in the microwave <cit.>, with typical sizes roughly on the order of the galaxy itself (more than an order of magnitude larger than the “chimneys”). Existing X-ray observations of the Galactic center area already show interesting fine structures highlighted in the emission from particular ions (e.g., the bipolar “chimneys” revealed in the S XV emission in Fig. <ref>a), many of which also have coherent multi-wavelength structures or counterparts (also see <cit.>). However, the energy resolution of the X-ray CCD spectrum is insufficient to separate individual emission lines (Fig. <ref>b,c), which limits the constraints on the physical and chemical properties of the outflows. will for the first time resolve fine spectral structures of the hot gas in a large area above the Galactic plane close to Sgr A^⋆. Since the foreground extinction is very strong toward the Galactic center direction and is only sensitive at ≲2 keV (Fig. <ref>b,c), the future survey will most likely focus on the area with Galactic latitude |b|≳1^∘ (e.g., the cyan box shown in Fig. <ref>a). We can either map the “chimneys” area shown in Fig. <ref>a or a larger sky area covering a significant fraction of the “bubbles”, depending on the desired depth and the available observing time, or the required “effective angular resolution”.
Centaurus A (Cen A; NGC 5128) is the nearest FR-I radio galaxy, located at a distance of d≈3.8 Mpc (1^'∼1.1 kpc; <cit.>). It is the central galaxy of one of the two subgroups comprising the Cen A/M83 group, and the 4th nearest galaxy group after only the Local Group, IC342, and the M81 group. Cen A is the 5th brightest external galaxy in the optical, after the LMC, SMC, M31, and M33, and the 2nd brightest extragalactic radio source, after only Cygnus A. Fig. <ref> shows the multi-scale radio and X-ray structures of Cen A, indicating complex interactions between the AGN jet and the galaxy environment. The small distance and the wealth of multi-scale structures related to the AGN-ISM/CGM/IGM interaction make Cen A unique for detailed analysis of AGN feedback. Combined with the study of the Galactic center region as described above, the proposed observations can be used to study the feedback processes over more than five orders of magnitude in physical scale (from ≲10 pc to ∼1 Mpc). The large FOV of makes it ideal for observing a large object such as Cen A. We will need ∼30 observations to cover the entire area of interest surrounding Cen A, which is much more efficient than the or (Fig. <ref>b). The angular resolution of is still sufficient to resolve some fine structures such as the chain of knots in the northern jet (Fig. <ref>c). The energy resolution (E/Δ E≈500 @1 keV for the 60×60 normal array; E/Δ E≈1000 @0.6 keV for the 12×12 central sub-array) is sufficient to measure the physical and chemical properties of the hot gas, and is also typically marginally sufficient to measure the shift or broadening of the soft X-ray emission lines from a normal galactic outflow, especially for those from the gaseous medium strongly stirred by the AGN (e.g., the Perseus cluster as observed by Hitomi <cit.>; the central sub-array has an energy resolution comparable to Hitomi at the Fe K lines).
As one of the nearest nuclear starburst galaxies with an edge-on orientation, M82 provides us with the best view of the multi-phase galactic superwind driven by starburst feedback <cit.>. Existing high-resolution X-ray grating spectroscopy observations of the halo of M82 indicate complicated emission line spectra, with contributions from both the thermal plasma and some non-thermal components such as charge exchange (CX; see the /RGS spectra from <cit.>). However, such grating observations are limited to relatively compact objects. In most of the nearby galaxies like M82, we still rely on X-ray CCD imaging spectroscopy observations, with which the decomposition of the different emission components, and hence the measurement of the physical and chemical properties of the hot gas, can be very uncertain (e.g., <cit.>). Although M82 is located in the M81 group, which has giant tidal tails detected in colder gas <cit.>, most of the interesting features related to the galactic superwind (such as the northern “cap” ∼11 kpc above the galactic plane; e.g., <cit.>) could be covered with a single observation. At a distance of d≈3.53 Mpc, the 1^' normal pixel corresponds to ∼1 kpc, which is still helpful for performing some spatially resolved analysis (e.g., <cit.>), although the fine structures of the superwind cannot be resolved. The 15^'' pixel size of the central sub-array could better sample the PSF, but cannot significantly increase the angular resolution. The velocity of the hot wind constrained in different ways should be >10^3 km s^-1 (e.g., <cit.>), significantly exceeding the outflow velocity of the cold gas (typically ∼500 km s^-1; e.g., <cit.>). Measuring the velocity difference between the different gas phases is important not only for measuring the energy content of the outflow, but also for quantitatively estimating the CX contribution. Furthermore, we can also use the high-resolution spectra taken with the micro-calorimeter on board to better constrain the gradients of the intrinsic absorption column density, temperature, and metallicity, as well as some other derived parameters (electron number density, thermal pressure, cooling timescale, etc.), of the hot gas outflow <cit.>. These measurements can be compared to numerical simulations to determine the thermalization efficiency of supernovae (SNe) energy and the mass loading factor of the cool gas, which are key parameters of stellar feedback models (e.g., <cit.>).
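A short estimate shows why such velocity measurements are within reach but demanding: a line-of-sight velocity of 1000 km s^-1 shifts the O viii line at 0.654 keV by Δ E = E · v/c ≈ 0.654 keV × (1000/3×10^5) ≈ 2.2 eV, comparable to the 2 eV resolution of the normal array, while the ∼500 km s^-1 velocities typical of the cold gas correspond to only ∼1 eV, closer to the capability of the higher-resolution central sub-array.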
§ GALAXY CLUSTER AND LARGE-SCALE STRUCTURE
Quantifying the cosmic baryon budget, its multi-phase status, and its cooling and accretion activities, over a variety of physical scales from galaxies to groups/clusters or even the cosmic web (e.g., <cit.>), can help us to understand various hidden drivers of galaxy growth in its larger-scale environment <cit.>. The baryonic matter abundance and distribution inside cluster and group halos (e.g., <cit.>), as well as in the CGM of galaxies (e.g., <cit.>), have been extensively studied through multi-wavelength observations. On larger scales, a good fraction of the cosmological baryons at z>1 have been detected inside the cosmic web, with their abundance measured mainly through Lyman-α observations (e.g., <cit.>).
At lower redshifts, however, baryons inside the cosmic web are much more difficult to probe due to the very low gas column density.
Many efforts have been made to search for cosmic baryons at such redshifts, and it has been shown that approximately half of the total baryon budget at these epochs is locked up in the CGM of galaxies, the intragroup medium (IGrM), the ICM of clusters, and the IGM in neutral and diffuse phases (<cit.>). The other half remains “missing” observationally, and cosmological simulations have shown that it is locked up in a warm-hot (10^5 K < T < 10^7 K) phase in the IGM – referred to as the WHIM – as a result of significant heating and removal by star formation and feedback processes (e.g., <cit.>). Determining the “missing” baryon budget is expected to be most promising through next-generation X-ray spectroscopy (e.g., <cit.>). In this regard, will play a breakthrough role in probing the hot baryons in their multi-phase states on a variety of physical scales in the Universe. We herein present a few core science projects which could potentially greatly advance our understanding of the hot baryon budget of the local Universe.
§.§ Multi-phase hot gas in galaxy groups and clusters
The hot gas inside the dark matter halos of galaxies, groups, and clusters, referred to as the CGM, IGrM, and ICM, respectively, is an optically thin, collisionally ionized, multi-phase medium. Its physical states and their spatial distribution are modulated by both external and internal factors (e.g., <cit.>). On larger scales, pristine gas is accreted from the connected cosmic web onto the halo, together with relatively low-metallicity gas stripped from infalling satellite galaxy halos. On smaller scales, AGN and stellar feedback eject material, metals, energy/heat, and momentum back into the halo environment; some of this ejecta can reach several tens or even hundreds of kiloparsecs, where the hot polluted ejection meets the cold accretion from outside and falls back once it has sufficiently cooled. Overall, the halo gas experiences gravitational accretion and heating, impact compression, collisional ionization and excitation, radiative cooling, etc. Internally driven processes generally cause gas to move outward, although the angular distribution of inflows and outflows may be different and complex. The different spatial and time scales of the various processes involved therefore naturally lead to the multi-phase nature of this hot halo plasma. Observationally resolving the spatial structures in temperature, density, metallicity, and ionization state will be crucial to probe the contributions and strengths of the individual processes that together modulate the hot halo gas across hundreds of kiloparsecs.
Regarding such a multi-phase IGrM or ICM gas, a cooler component is often detected in the central few tens of kiloparsecs of relaxed (or nearly relaxed) groups and clusters, which accounts for up to several tens of percent of the total X-ray luminosity in 0.5-2 keV, after the projection effect is corrected (see <cit.> for a review). In the X-ray imaging spectroscopic analysis of the (<cit.>), (<cit.>) and Suzaku (<cit.>) data, this cooler component is routinely modeled either as a single phase, with a temperature decreasing inward monotonically, or as a cool spectral component co-existing with a hot component (i.e., the ICM defined in the ordinary sense). In the latter case, corresponding to a two-phase scenario, a relative “volume filling factor” that varies with radius is introduced to characterize the spatial distribution of the cooler gas. In these two scenarios, the formation and evolution of the cold gas are believed to be intrinsically different, which unfortunately cannot be distinguished by current data. In some cases (e.g., Abell 1795, <cit.>) a third weak gas component with an even lower temperature is necessary to improve the spectral fitting. The cooler components in the Virgo cluster and Abell 1795 are found to be more metal-enriched than their hotter counterparts. However, in many other cases, the temperature range and metal abundances of the cooler component are poorly constrained, and are often fixed to certain values in the multi-component spectral fitting. Since the amount of gas cannot be well determined, a certain form of emission measure distribution as a function of temperature (e.g., a power-law form) has to be imposed. These uncertainties may have a considerable impact on the accuracy of measurements of metallicity (e.g., the so-called “Fe-bias”, see <cit.>) and other gas properties. For example, <cit.> reported that a systematic bias of up to ∼10% can arise for the dynamical mass of the central region, which approximately equals the typical deviation between masses measured with the X-ray and the gravitational lensing techniques. Mounting evidence shows that the coexistence of cold and hot phases cannot be interpreted simply in terms of the hot bubble(s) inflated by the central AGN within a cooler environment. Although it has been proposed that the cold-phase gas may be the cD coronal gas confined by magnetic loops surrounded by the intruding hot ICM, or simply a consequence of radiative energy loss in part of the ICM (<cit.>), direct observational evidence is still absent.
The large FOV and low instrumental background of make it well suited to detecting large-scale, low surface brightness features, such as the extended multi-phase medium in cluster halos or even the cosmic web <cit.>. To detect this low surface brightness extragalactic hot gas emission, the foreground emission due to the MW hot CGM must be securely removed.
With the high energy resolution of HUBS, a lower limit on the redshift ensures that the emission lines of the targeted features are separated from those of the MW (Fig. <ref>; also see <cit.>). This will greatly help to remove the sky background, enabling the detection of extremely low surface brightness features. As demonstrated in the recent work of <cit.>, which is based on the IllustrisTNG simulation (<cit.>), is capable of detecting the soft X-ray emission of the IGrM in group halos out to z=0.3, or that of the ICM in cluster halos located at slightly higher redshifts, when operating in either imaging or spectral mode for 1 Ms. Fig. <ref> presents the X-ray emissivity maps (top) and the -observed O vii intensity maps (middle) of the hot gas in a cluster-sized halo (left column) and a group-sized halo (right column) simulated at z = 0.11. The metallicity-temperature (Z-kT) distributions of the gas particles in the two gas halos are plotted in the bottom panel, where two sets of black points mark the best fits of the adopted spectral model consisting of three APEC components. In particular, the mock images are made with the field of view and spatial resolution, i.e., 60 × 60 pixels in 1 square degree. The results also show that, although it is possible to pick out the primary emission components by applying a simple spectral model (the three-APEC model in this case), more advanced tools designed for the analysis of high-resolution spectral data are needed to describe the gas properties more accurately.
§.§ Searching for hot baryons in the cosmic web
Recent studies have shown that it is possible to identify the baryonic filamentary structures of the cosmic web through stacking Lyman-α emissions (<cit.>), or the thermal Sunyaev-Zel'dovich signals (<cit.>). An ongoing effort is, with the aid of cosmological hydrodynamic simulations, to assess the feasibility of detecting the X-ray emission of the hot baryons inside the filaments through the stacking technique <cit.>. Due to the extremely low gas column density, direct observations are nearly impossible. However, this may be achieved with the help of optical tracers, because galaxies that live inside filaments can be employed as a natural indicator of the cosmic web location. Many galaxy surveys, such as SDSS (<cit.>), GAMA (<cit.>), 2dFGRS (<cit.>), WiggleZ (<cit.>) as well as the Millennium Galaxy Catalogue Survey (<cit.>), have provided a variety of galaxy catalogs that together cover a good fraction of the full sky. Using the sky position and redshift information of these galaxies as inputs, edge-extracting software can readily reveal the large-scale structure of the cosmic web. With the optical tracers, we now have a solid basis for detecting the hot baryons hidden inside the filaments. However, the situation is still tricky, because a large spatial coverage and sufficient spatial resolutions in both transverse and sightline directions are necessary. is exactly suited for this purpose. Filaments typically have widths in the range of several hundred kiloparsecs to a few megaparsecs, and our spectral resolution will be able to resolve them at distances of up to a few hundred megaparsecs. This capability may be important to enhance the signal strength with or without stacking. The large field of view of can effectively cover a significant patch of the sky. With several tens of pointings,
can collect X-ray emissions within a
sufficiently large sky patch to allow mapping of the cosmic web structure (see Figure 3 of <cit.> for a demonstration of the cosmic web structure within a slice of a 10-degree cone out to z∼ 0.1). As shown in <cit.>, out to redshift z=0.1 will be able to detect hot gas in galaxies, groups, and clusters in both imaging and spectral modes given suitable exposure times. With an energy resolution of 2 eV, corresponding to a redshift resolution of δ_z ∼ 0.003 at such distances, the strong O vii and O viii emission lines will essentially act as tomography tracers, indicating hot gas distributions at different redshifts. Selected patches in these line intensity maps in both spatial and redshift dimensions shall then be stacked according to the pre-identified cosmic web features as probed by optical tracers. The stacked signal will then be compared with signals derived from random stackings. Through such comparisons, not only will we perceive the existence of the hidden baryons inside the filaments, but we may also learn about the temperature and metallicity distributions of the hot ionized plasma inside the cosmic web.
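The quoted redshift resolution follows directly from the line energies: for the O viii line (rest-frame 0.654 keV), a 2 eV resolution element corresponds to δ_z ≈ Δ E/E ≈ 2/654 ≈ 0.003, which at z∼0.1 translates into a line-of-sight distance slice of roughly cδ_z/H_0 ≈ 13 Mpc (adopting an assumed fiducial H_0 ≈ 70 km s^-1 Mpc^-1).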
§.§ Cluster observations for cosmology
will provide unprecedented opportunities for cosmological studies with galaxy clusters, based on its large FoV, low instrumental background, and superior spectral resolution. Cosmological constraints will mainly come from the following two aspects in the context of observations.
A. Constraints from cluster mass function
The galaxy cluster and group (“cluster” hereafter) population can be used as an important probe to constrain cosmological models and to investigate the properties of dark matter and dark energy <cit.>, the latter of which dominates within z ≲ 0.5 <cit.>. In order to achieve cluster samples as complete as possible, wide-field surveys have been performed through observations of the Sunyaev-Zel'dovich (SZ) effect, weak gravitational lensing, and the X-ray emission of the ICM (e.g., <cit.>). By obtaining the cluster mass function, we can infer important cosmological parameters, e.g., the matter density Ω_ M and the amplitude of linear matter density fluctuations σ_8, when a flat universe is assumed. Furthermore, by combining X-ray and Sunyaev-Zel'dovich effect observations, the absolute distances of galaxy clusters can be calculated, which allows the measurement of the Hubble constant <cit.>.
The completeness of the detected cluster population, which is crucial to the constraints on cosmological parameters, is directly determined by the survey area and depth. Deep, large surveys are time-consuming and expensive; thus the cluster population has been comprehensively investigated only out to z∼0.1-0.2 to date. Meanwhile, the samples remain deficient at the faint end, below a mass of about 10^14 M_⊙, making it very difficult to obtain a complete cluster mass function. Moreover, limited by the current FoV and instrumental background of detectors, very few clusters have been observed out to their virial radii, especially the low-z ones, which are crucial for constraining dark energy models <cit.>.
Several leading X-ray surveys were conducted in the last decades, among which the All-Sky Survey (RASS) <cit.> was the first full-sky survey in soft X-rays. The studies of the few thousand RASS clusters, which are detected above a flux limit of ∼10^-12 erg s^-1 cm^-2 and are mostly high-mass systems located at low redshift, have offered a fundamental basis for cluster cosmology (e.g., <cit.>). The All-Sky Survey (eRASS), the successor of RASS, is currently the most promising project for cosmological constraints in X-rays <cit.>. Upon completion[However, the eRASS completion is currently uncertain, as on board the German-Russian Spectrum-Roentgen-Gamma mission has been switched off since 26 February 2022, with only four of the eight planned all-sky survey passes finished <cit.>. The resumption of the telescope's operation has not yet been determined.], eRASS is expected to detect ∼10^5 clusters, most of which are bright sources, in eight all-sky survey scans <cit.>. The depth of eRASS (an average exposure time of 2.5 ks per field), however, is relatively shallow and will limit its application in cluster cosmology (e.g., measurement of the gas fraction within the virial radius). In fact, although it is estimated that the eRASS survey will provide constraints of ΔΩ_ M=0.012 and Δσ_8=0.036 with combined probes from cluster number counts and angular clustering <cit.>, only ΔΩ_ M∼0.05 and Δσ_8∼0.07 have so far been achieved based on the first results from the proof-of-concept mini-survey with cluster number counts only, i.e., the Final Equatorial Depth Survey <cit.>; these are similar to the constraints provided by the and cluster archives <cit.> or by the XXL survey of <cit.>. Deeper investigations over a considerably wide field are desired in order to achieve a complete cluster sample out to z ≲ 0.5, including low-mass systems.
, due to its large FoV and low instrumental background, has great potential to collect a complete sample of clusters extending to larger redshifts, by carrying out a deep survey covering an area of ∼15 deg^2 with an average exposure of 300 ks. This survey field, hereafter named -DF (deep field), will be located in the Galaxy And Mass Assembly (GAMA) survey footprint, particularly the GAMA02 field <cit.>, where abundant multi-band survey data have been archived, which is very important for assisting cluster identification and study.
With a low instrumental background and superior 2 eV energy resolution <cit.>, HUBS has the capability to resolve galaxy groups (or the faintest clusters) among crowded foreground/background AGNs and normal galaxies in narrow-band (vicinity of the O vii and O viii lines) images, as demonstrated in a recently published work of the HUBS team <cit.>. By extrapolating this result, it can be easily seen that a cluster with M_500=5×10^13 M_⊙ at z∼0.5 can be resolved in the O viii line with an exposure of 300 ks (Figure <ref>). We consider this quasi-monochromatic imaging a novel method to identify faint and/or distant clusters, and believe that HUBS is well suited to measuring the ICM over an extended redshift range (see also <cit.>). However, to achieve a complete cluster sample within z ∼ 0.5, the strong background source confusion of HUBS, with a flux limit of ∼5×10^-15 erg s^-1 cm^-2, must be removed. A drift scan or an assembly of 4500 stacked shallow observations (1 ks each), with each pointing shifted by 3^', is proposed to improve the angular resolution of the central 11.3 deg^2 of HUBS-DF to 15^'' with the core 12×12 detector array <cit.>, so as to lower the source confusion limit by one order of magnitude in this area.
Since one of the most important issues is the synergy between HUBS and multi-wavelength facilities for source identification, the survey will be performed within the 25 deg^2 northern portion of the XMM-XXL survey in GAMA02, which also overlaps with the VIPERS redshift survey <cit.>. In the HUBS narrow-band image, background AGNs above 10^-15 erg s^-1 cm^-2 (in 0.5-2.0 keV) can be efficiently removed according to the existing XMM-XXL source catalog; meanwhile, the candidate clusters can be confirmed by utilizing existing cluster catalogs (especially the XXL cluster catalog <cit.>, the Atacama Cosmology Telescope (ACT) SZ cluster catalog <cit.>, and the Wen-Han (WH2022) cluster catalog <cit.>) and ancillary multi-frequency observations, such as the Sloan Digital Sky Survey (SDSS) <cit.>, the Wide-field Infrared Survey Explorer (WISE) <cit.>, the Galaxy Evolution Explorer (GALEX) <cit.>, and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) <cit.>.
We have quantitatively estimated the completeness of the expected cluster sample from the proposed HUBS-DF survey, and its constraints on the cosmological parameters. Assuming an average CXB background derived from existing deep observations, the 300 ks exposure will allow us to achieve the most complete X-ray sample of clusters within z ≲ 0.5 with M_500 > 5×10^13 M_⊙. Under such a mass limit, the HUBS detection will be 100% complete within z ≲ 0.48, 93.2% complete at z=0.5, 30.5% complete at z=0.75, and 10.6% complete at z=1. Although HUBS will detect a much smaller total number of clusters than all-sky surveys, the expected number density of detections will be one order of magnitude higher than that of the eROSITA shallow survey. The depth of our survey guarantees that there will be >2600 photon counts for each target, which is crucial for measuring gas properties. Compared to the extremely low counts of clusters detected in such shallow surveys (50 counts per target is used as the detection limit in the forecast of <cit.>), the remarkably improved spectral quality will allow us to directly constrain the gas temperature in the spectral fittings, and to estimate the cluster mass more accurately via the mass-temperature (M-T) scaling relation. By applying weak-lensing measurements and the excellent spectroscopic redshifts of the GAMA survey to, e.g., the mass calibration, we expect that our error budget can be notably reduced compared with that of the wide but shallow eRASS. Finally, an early prediction of the cosmological constraints for ΛCDM models has been estimated using the cluster detection limit mentioned above and a Markov chain Monte Carlo (MCMC) exploration of the parameter space (<ref>), which shows that HUBS can greatly improve the constraints on cosmological model parameters compared to existing results.
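As an illustration of how such a completeness forecast can be set up, the short sketch below converts an assumed flux limit into a limiting cluster mass as a function of redshift through a simple power-law L_X–M_500 scaling relation. The flux limit, scaling-relation normalization, pivot mass, and slope used here are placeholder round numbers, not the values adopted in the actual forecast.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)  # assumed fiducial cosmology

def limiting_mass(z, flux_limit=5e-15, L0=1e44, M0=3e14, alpha=1.6):
    """Limiting M_500 (in M_sun) at redshift z for a flux-limited survey,
    assuming a power-law scaling L_X = L0 * (M_500 / M0)**alpha.
    All scaling-relation parameters are illustrative placeholders."""
    d_L = cosmo.luminosity_distance(z).to("cm").value
    L_lim = 4.0 * np.pi * d_L**2 * flux_limit       # limiting luminosity [erg/s]
    return M0 * (L_lim / L0) ** (1.0 / alpha)

for z in (0.1, 0.3, 0.5, 0.75, 1.0):
    print(f"z = {z:.2f}  ->  M_500,lim ~ {limiting_mass(z):.2e} M_sun")
```

A full forecast would fold such a mass limit into a halo mass function and selection model; the sketch only shows the geometric part of the calculation.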
B. Constraints from cluster gas fraction
In contrast to methods that use the abundance of galaxy clusters as a function of mass and redshift to constrain cosmological models, the method based on the gas mass fraction (f_ gas) does not rely on the completeness of the cluster sample. This method focuses on the study of gas ratios and their dispersion, and the dependence of these quantities on cluster mass, aperture, and redshift. With the deduced gas mass fraction we can constrain Ω_ m through the combination of the Hubble parameter and the cosmic baryon fraction, h^3/2 (Ω_b/ Ω_m). The results obtained with this method, although still to be improved, are competitive with, and consistent with, those from recent CMB, Type Ia supernova, and baryon acoustic oscillation data; the method also helps explain why f_ gas is lower than expected in some low-temperature (kT_ 2500< 5 keV) systems.
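The core of this argument can be written in a single line: if clusters retain a nearly universal, mildly depleted share of the cosmic baryons, the measured gas fraction translates directly into Ω_m once the baryon density and the stellar contribution are specified. A minimal sketch follows; the baryon density, depletion factor, and stellar fraction are assumed round values, and the h^3/2 distance dependence of X-ray-derived f_gas is ignored here for simplicity.

```python
def omega_m_from_fgas(f_gas, omega_b=0.048, depletion=0.85, f_star=0.015):
    """Omega_m implied by a cluster gas mass fraction, assuming the cluster
    baryon fraction is a depleted share of the cosmic value:
        f_gas + f_star = depletion * (omega_b / omega_m).
    All default parameter values are illustrative placeholders."""
    return depletion * omega_b / (f_gas + f_star)

print(omega_m_from_fgas(0.12))   # ~0.3 for these assumed inputs
```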
The current relevant work of <cit.> is based on a morphological selection of relaxed clusters above 5 keV, with the study confined within r_2500. However, at r_2500 the influence of various astrophysical processes, e.g., stellar winds from massive stars, AGN jets, and supernova explosions, cannot be neglected. The study of <cit.> indicated that even at ∼ r_500 the hot gas is still significantly affected by various astrophysical processes, given the dependence of the baryon fraction on radiative cooling, star formation, feedback through galactic winds, conduction, and redshift. As a result, extending the investigation of f_ gas to the virial radius (roughly r_200, where gravity dominates) is more cosmologically valuable.
The large FoV of HUBS brings a huge advantage in addressing this issue by allowing us to cover the entire virial region of a cluster with a single pointing for systems as nearby as z∼0.02. Moreover, its superior spectral resolution provides a unique means to identify ICM bulk motions with radial velocities possibly from 90 to 6000 km s^-1, to cross-check whether the cluster is truly relaxed. We expect to observe ∼30-50 relaxed, brightest galaxy clusters out to r_200 with a ∼ 50 ks single pointing for each. The selection should satisfy two criteria. First, high-quality multi-band data should be available for the target, enabling synergistic investigations to remove possible background contamination. Second, where weak-lensing data are available, the total gravitating mass obtained from X-rays will be calibrated, in order to guarantee high accuracy in the gas fraction calculation.
When the gas mass fractions of low-z clusters are combined with baryon fraction measurements from the CMB, or with priors on the cosmic baryon density and the Hubble constant, the Hubble constant or the dark energy density as well as its equation of state can be deduced. Thus galaxy clusters in the low-z region not only help provide a more accurate sample for our measurements, but also carry greater cosmological significance, because they lie in the dark-energy-dominated window of cosmic history. Finally, if the gas mass fraction of clusters is indeed constant, it can be used as a `standard ruler' to measure the space-time geometry of the Universe, with the Hubble parameters determined by other measurements.
§ GALACTIC SCIENCE
Close to home, the Milky Way provides the nearest targets for detailed study of the ISM, energetic explosions, stars, compact objects, and so on. Hot gas is thought to permeate the ISM. It is most certainly related to supernovae, stellar winds, and the central supermassive black hole, and thus offers an excellent laboratory for studying the physics of feedback processes. There are a number of unresolved issues, including the origin of the soft X-ray background radiation, the origin of the eROSITA bubbles (which might be related to the Fermi bubbles), and the properties of the hot halo, which likely bear relevance to the physics of the CGM and feedback processes.
§.§ The Cosmic X-ray background
The cosmic X-ray background (CXB) is one of the first discoveries of X-ray astronomy, along with the first extrasolar X-ray source Scorpius X-1 <cit.>.
Later the flux level of the soft X-ray band (44-70Å) was successfully measured by <cit.> and <cit.>, and interpreted as truly diffuse emission of hot plasma <cit.>.
Our understanding of the soft X-ray background has progressed considerably in the ensuing more than 50 years, with generations of X-ray instruments.
Aside from in-service all-purpose telescopes such as the Chandra X-ray Observatory and XMM-Newton, as well as retired ones such as ROSAT and Suzaku, many space missions have been dedicated to probing the nature of the soft X-ray background.
Recent missions include the space shuttle payload “Diffuse X-ray Spectrometer” (DXS, <cit.>), dedicated explorer “Cosmic Hot Interstellar Plasma Spectrometer” (CHIPS, <cit.>), sounding rocket mission “Diffuse X-rays from the Local galaxy” (DXL, <cit.>), and the recent soft X-ray surveyor HaloSat <cit.>.
In general, the CXB can be decomposed into two kinds of origins, galactic and extragalactic.
The galactic soft X-ray emission comes from three distinct components, the solar wind charge exchange (SWCX), the Local Hot Bubble (LHB), and the Galactic halo, while the extragalactic origin is dominated by AGNs.
Distinguishing these components from each other and quantifying their contributions to the soft CXB is limited by the spectral resolution of current space missions and remains model dependent.
For example, it is still an open question whether the observed soft X-ray emission at 1/4 keV is due to LHB or purely from SWCX (e.g., <cit.>).
HUBS can provide unprecedented line diagnostics to help understand the origin of the CXB.
On the other hand, both in-service and past missions barely cover the 0.1-0.5 keV band, which seems to be a turnover in the spectral energy distribution of cosmic UV/X-ray background (<cit.>).
With HUBS, we can obtain finer constraints on the modeling of the UV/X-ray background.
§.§.§ Local hot bubble
It has been identified that the solar system resides in a cavity of low-density and ionized gas, surrounded by a shell of cold neutral gas and dust.
The existence of such a cavity was implied by the soft X-ray emission seen in all-sky maps at 1/4 keV <cit.>, and the cavity was dubbed the “local hot bubble” <cit.>.
Though debates on the model exist, the LHB interpretation remains popular and has been strongly supported by recent studies <cit.>.
Followup studies revealed its irregular shape and extent, suggesting a pathlength of order 100 pc <cit.>, possibly created and maintained by stellar winds or supernova explosions due to nearby star formation activities.
Despite the change in intensities, soft X-ray emission from the LHB can be characterized by a hot phase plasma with k_ B T ∼ 0.1 keV <cit.>.
However, possible contamination arises from the foreground SWCX, and from the Galactic halo at intermediate and high latitudes.
The X-ray shadowing method is invoked to divide the observed emission into foreground and background components <cit.>.
Based on the DXL data, Liu et al. <cit.> removed the contribution by SWCX and reported a uniform temperature k_ B T = 0.097 ± 0.013 keV, consistent with previous results. In addition, combining the
DXL result and other measurements, Snowden et al. <cit.> showed the total pressure in the LHB
is in pressure equilibrium with the local interstellar clouds, eliminating the long-standing pressure problem
of the LHB <cit.>.
§.§.§ Galactic halo
The hot gaseous halo was first predicted by <cit.>, while the first hint of the hot gaseous halo was observed by RASS <cit.>.
It was implied by the anti-correlation between the soft X-ray emission at 1/4 keV and the column densities of the neutral hydrogen (e.g. <cit.>).
Subsequently, high-spectral-resolution spectra obtained by DXS revealed that the soft X-ray emission is dominated by thermal emission of hot gas (k_ B T ≈ 0.1-0.2 keV), which favored a Galactic halo origin <cit.>.
Furthermore, the launch of the flagship telescopes Chandra and XMM-Newton enabled high-spatial-resolution observations to decompose the diffuse contribution seen by ROSAT into extragalactic point sources (e.g., AGN; <cit.>) and truly diffuse emission (e.g., <cit.>).
In the past two decades, our understanding of the Galactic hot halo has been greatly improved by deep Chandra and XMM-Newton observations in both emission and absorption.
Particularly, the spatial distribution of the Milky Way hot halo has been established as the first-order approximation assuming the spherical symmetry (e.g. <cit.>).
Furthermore, the temperature distribution of the hot halo has been investigated, showing another extremely hot phase at k_ B T ≈ 0.7 keV (e.g., <cit.>).
In the current decade, the newly launched surveyors eROSITA and HaloSat have also continuously provided new insights, such as the discovery of the soft X-ray bubbles on both sides of the Milky Way (i.e., the eROSITA bubbles; <cit.>).
Although our understanding of the Galactic hot halo has progressed remarkably over the past decades, there are still fundamental open questions in the field.
For instance, the metallicity of the hot halo is still controversial.
On one hand, the continuum of the thermal emission from the Galactic hot halo is hard to decompose from other contributors to the continuum (e.g., the CXB or the soft-proton background), limited by the relatively poor spectral resolution of existing instruments.
On the other hand, the SWCX contributes to the soft X-ray line emission, which is also blended with the emission of the Galactic hot halo.
These difficulties make it a hard problem to determine the hot halo metallicity.
The high resolution and spectral coverage down to 0.1 keV of HUBS could bring new possibilities to determine the metallicity, by cleanly decomposing line emission and the continuum or by determining the line ratios between forbidden and resonance lines.
Another intriguing question is about the potential extremely hot phase in the Galactic halo.
Currently, detection of an extremely hot phase at k_ B T ≈ 0.7 keV has been claimed in both emission and absorption.
However, these pieces of evidence have limitations in different ways.
The absorption line analyses rely on weak detection (≈ 2-3σ) of Ne ix and Ne x in two sight lines <cit.>.
The modeling of this absorption system requires both super high temperature and super solar neon abundance of [Ne/O] ≈ 0.7, which raises questions about its origin (e.g., in the Galactic disk or the Galactic halo).
The emission evidence for the extremely hot component is mainly an unexpected enhanced feature at 0.8-0.9 keV relative to the single-temperature hot-halo model.
However, as suggested in <cit.>, the similar feature observed at low Galactic latitudes can be explained by the hot corona of M dwarf stars in the disk.
Adopting the model in <cit.>, M dwarf stars can contribute ≈ 2 - 4 × 10^-7 kpc cm^-3 at high latitudes, which can be 50 - 100% of the claimed detection of extremely hot phase (e.g., <cit.>).
The high spectral resolution and relatively high spatial resolution of HUBS could provide unique insights into the extremely hot phase by constraining line ratios (determining radiation mechanisms) and resolving possible M dwarfs in the field.
Therefore, although HUBS is not a dedicated surveyor focusing on the diffuse emission of the Galactic hot halo, its unique combination of large FoV and high spectral resolution opens a special window to study the Galactic hot halo.
§.§.§ Extragalactic sources
While the diffuse Galactic and the local emission dominate the CXB in the 0.5-1 keV band, the majority of the X-ray background has been recognized as discrete extragalactic sources, mostly AGN and star-forming galaxies.
According to the deepest observations by Chandra and XMM-Newton, the resolved fraction of the extragalactic background in the 0.5-2 keV band reaches about 80-90% (e.g., <cit.>).
As a consequence, insights into the extragalactic background may serve as a constraint on the integrated SMBH growth and the accretion physics of galaxies.
However, there still remains unresolved CXB of unknown origin, for instance, about 10% diffuse emission in the 1-2 keV band <cit.>.
It may come from the CGM of galaxies within their virial radii <cit.> or from the “warm-hot” intergalactic medium with temperatures of 10^5-7 K (WHIM; <cit.>).
On the other hand, with the current CCD energy resolution, some models for the components of the CXB are oversimplified; e.g., a single-temperature APEC model for describing the local thermal-like emission cannot interpret the emission excess of the CXB below 0.5 keV <cit.>.
Consequently, the uncertainties of the AGN contribution are larger in the 1-2 keV band, which is essential for disentangling the obscured and the unobscured AGNs (e.g., <cit.>).
HUBS has a large FoV (∼ 1 deg^2) and a remarkable effective area (∼500 cm^2); therefore, most observations will partly cover the CXB.
The X-ray integral field units in HUBS cover the 0.1-2 keV band, which is complementary to the bands of other missions such as the future Athena, and together they can give better constraints on the composition of the CXB.
With the 2 eV energy resolution, the local diffuse emission can be determined exclusively, and the obscured fraction of AGNs can be measured more precisely.
Given that HUBS is designed for observing the CGM and WHIM (e.g., <cit.>), it will quantitatively constrain their contributions to the CXB and fill the final gap of the unresolved CXB.
§.§.§ Solar wind charge exchange
SWCX is generated when the highly ionized solar wind ions interact with the neutral materials within the solar system, gaining an electron in a highly excited state which then decays emitting an X-ray or UV photon with the characteristic energy of the ion.
It was first proposed to explain the cometary soft X-ray emission <cit.>, and then identified as the source of the long-term enhancements observed in the ROSAT All-Sky Survey (RASS; <cit.>).
Based on the target neutrals, there are in general two kinds of SWCX, i.e., the geocoronal SWCX and the heliospheric SWCX.
The former is due to the interaction between the compressed solar wind ions in the magnetosheath and the neutrals (mostly hydrogen) in the exosphere of the Earth.
Its strength and location depend strongly on the strength of the solar wind.
The latter, on the other hand, is due to the interaction between the free-flowing solar wind and the neutral ISM within the entire heliosphere (up to ∼100 AU).
Heliospheric SWCX shows direction dependence as a consequence of the structured solar wind and the neutral distribution in the heliosphere.
Due to its ubiquity, SWCX emission contaminates every X-ray observation of astrophysical objects.
In particular, the spectrum of SWCX contains rich lines, some of which are the same lines used for the diagnostics of astrophysical plasma.
The inclusion of SWCX emission could significantly change the derived plasma temperature of the astrophysical object, and/or mimic a separate diffuse soft X-ray component.
Despite the difficulties in separating SWCX emission from that of astrophysical plasma, different groups have developed models to calculate SWCX emission based on solar wind conditions, neutral distributions, and theoretical interaction cross-sections (e.g., <cit.>). However, there are, sometimes, large discrepancies between the model predictions and the observational results (e.g., <cit.>). The largest uncertainties of these models are mainly due to the lack of detailed information about the solar wind abundance and ionization state, and the theoretical and experimental interaction cross-section.
Owing to its high spectral resolution, HUBS will allow us to resolve most of the fine-structure lines, which is well suited to the study of SWCX.
In principle, line intensity ratios in triplets of the He-line ions (e.g., O vii) from SWCX emission are different from those in astrophysical thermal emission (e.g., <cit.>), and
HUBS spectroscopy will help to distinguish and separate SWCX emission from the thermal components, e.g., from the LHB, the Galactic halo, and other distant components.
Due to its low-Earth orbit, HUBS observations will inevitably be affected by the geocoronal SWCX. One strategy to study the SWCX in the near-Earth environment is through observations of the Moon. A HUBS observation will cover the full Moon within its field of view and clearly resolve the SWCX lines with its superior energy resolution (see Fig. <ref>). On the bright side of the Moon, strong fluorescence lines from O, Mg, Al, and Si can serve as a remote sensor of the elemental composition of the lunar surface, while observations of the dark side of the Moon will maximize the SWCX signal by blocking the thermal emission from our Galaxy and distant objects.
The X-ray emission from the dark Moon mainly consists of two parts: the emission from the magnetosheath (the near-Earth environment <10 R_E) and from the region between the bow-shock (∼10 R_E) and the Moon (60 R_E). For the near-Earth environment, the high variability of SWCX is complicated by the solar wind temporal variation. Real-time monitoring of the solar wind is necessary for accurate data analysis. In-situ measurements from ACE [https://solarsystem.nasa.gov/missions/ace/in-depth/] and/or the future Chinese space mission SMILE [http://english.cssar.cas.cn/smile/] will provide valuable data. Another important factor in the SWCX luminosity is the neutral distribution in the magnetosheath, which requires sophisticated magneto-hydrodynamic modeling for the solar wind interaction with Earth's atmosphere <cit.>.
The high-resolution spectra obtained by HUBS will precisely measure the SWCX contribution in the near-Earth environment and help to test the results of MHD models.
In addition, HUBS data can be used to constrain the charge exchange cross-sections measured in the laboratory.
§.§ Supernova remnants
Supernovae (SNe) are among the most violent explosions in the universe, which release a typical energy of ∼ 10^51 erg in a rather short timescale. As an essential part of the galactic ecosystem, SNe play an important role in the baryon cycle and the energy feedback.
Supernova remnants (SNRs) are SNe interacting with the surrounding circumstellar material (CSM) and interstellar medium (ISM), which provide an important means to study the physics of both sides of the interaction.
SNRs are bright sources in the X-ray sky and the nearest targets to observationally constrain how SNe influence galactic ecosystems. Over a hundred X-ray-bright SNRs have been found thus far in our Galaxy, the LMC, and the SMC. These extended sources are actively heating the interstellar medium with fast shocks and enriching it with heavy elements. The X-ray observations of the past decades have greatly
advanced our knowledge of SNRs <cit.>, but also pose
some challenges that require X-ray observations with high spectral resolution.
Some crucial questions in SNRs are yet to be answered with future X-ray instruments with high spectroscopic capabilities: 1) What are the metal compositions in diverse SNRs and how do different supernovae contribute to producing heavy metals in our Universe? 2) How are the hot plasmas in non-equilibrium ionization produced? 3) How to constrain charge exchange and resonant scattering processes using emission lines?
Below we summarize how HUBS will help us address these key questions.
§.§.§ Associate SNRs with their progenitors
One of the major challenges in the SNR study concerns the identification of the progenitor type. The two major types of SNe — the core-collapse SNe and the Type Ia (thermonuclear) SNe — can be well-defined and easily distinguished based on their optical spectrum around maximum light. However, it is not that straightforward to associate an evolved SNR with its original progenitor system, which needs a detailed investigation into the properties of the SN ejecta and the CSM.
Type Ia SNe represents the thermonuclear explosions of C/O white dwarfs. The nuclear burning in Type Ia SNe typically results in a large amount of iron-group elements (IGEs) such as Fe and Ni as well as intermediate-mass elements (IMEs) such as Si, S, Ar, and Ca <cit.>. However, in the case of core-collapse SNe, one may expect oxygen as the major product of the nucleosynthesis <cit.>. Therefore, the SN ejecta metal abundances (or abundance ratios) can be used as diagnostics for typing their remnants <cit.>. SNRs showing evidence of enhanced oxygen abundances (so-called oxygen-rich SNRs) are commonly considered from the core-collapse explosions of the most massive stars, while SNRs dominated by IGEs and IMEs are more likely from Type Ia events. The X-ray spectra of SNRs contain most of the prominent emission lines from these metal species, which are essential for constraining the ejecta properties. However, the X-ray spectra of SNRs can always be a combination of the non-thermal emission from the accelerated particles and the thermal emission from both the shocked ejecta and CSM/ISM. Therefore, a precise measurement of the metal abundances relies on high-resolution X-ray spectroscopy that allows us to separate, identify, and measure the individual emission lines, and to distinguish the ejecta from other components. This can be challenging for the CCD instruments. For example, with a typical energy resolution Δ E∼100 eV, CCD instruments can hardly resolve the Fe-L complex and the Ne Heα lines around ∼0.7–1.0 keV, which will be seen as a bump-like structure or a pseudo-continuum and lead to large uncertainties in the measured abundances. The current grating instruments such as RGS and LETG/HETG may partially solve this problem, but they are limited to those bright remnants with small angular sizes and the remnants with bright knot/filament structures.
The constraints on the X-ray properties of the shocked plasmas in SNRs, and our understanding of the SN-SNR connection, will be greatly improved with the help of HUBS. Its energy band (0.1–2 keV) covers most of the He-like and H-like emission lines from C, N, O, Ne, Mg, and Si, and the L-shell emission from Fe and Ni. With an ultra-high energy resolution of ∼2 eV for the main array and ≲1 eV for the central sub-array, HUBS is capable of resolving individual emission lines, especially the He-like triplets (i.e., the resonance, forbidden, and intercombination lines) and the Fe-L complex.
Figure <ref> shows the simulated 100 ks spectrum of the mixed-morphology SNR 3C 400.2, which illustrates the extraordinary capability of HUBS for detecting and resolving diverse metal species in different ionization states in SNRs.
On the other hand, the spatial resolving ability and the large field of view may help to map the spatial distributions of the plasma parameters over the whole remnant.
§.§.§ Constrain the origins of non-equilibrium ionization plasmas
At the early phase of the SNR evolution, due to the low density of the shocked plasma, the ionization process may take a rather long timescale before reaching equilibrium (n_ et∼10^12 cm^-3 s). Therefore, the shocked plasma in young SNRs is expected to be in the non-equilibrium ionization (NEI) state, where the plasma is still under-ionized (ionizing plasma, IP), characterized by an ionization temperature kT_ i which is lower than the electron temperature kT_ e. The observational evidence for this under-ionized NEI plasmas has been extensively established for a number of young SNRs such as Cas A, Kepler's SNR, SN 1006, SN 1987A, etc (e.g., <cit.>). However, recent X-ray spectroscopic studies have revealed the existence of over-ionized plasma (recombining plasma, RP) in several SNRs, where kT_ i goes even higher than kT_ e (e.g., IC 443, G359.1-0.5, W28, W44, etc., <cit.>). So far, RPs have been found in over a dozen of SNRs, which may represent a new subclass of SNRs <cit.>.
The physical origin of the RPs in SNRs has not yet been fully understood. Theoretically, there are two approaches to an over-ionization state of the plasma: increase of kT_ i (extra ionization) or decrease of kT_ e (electron cooling). The extra ionization can be caused by suprathermal electrons <cit.>, high-energy photons <cit.>, and low-energy cosmic ray protons <cit.>. On the other hand, the electron cooling scenario, which is considered to be better applied to the SNR evolution, may arise from adiabatic expansion <cit.> and thermal conduction <cit.>. In addition, simulations indicate that various scenarios, such as the adiabatic expansion and the thermal conduction, may simultaneously contribute to the formation of RP <cit.>.
The X-ray emission of RPs is characterized by several distinct spectral features, including the radiative recombination continua (RRCs), enhanced Lyα to Heα line ratios, and enhanced He-like ion G ratios (defined as G=(f+i)/r, where r, f, and i stand for the resonance, forbidden, and intercombination line fluxes, respectively). Limited by the energy resolution of current CCD instruments, studies of RPs have so far been mostly based on the RRCs and Lyα lines lying in the ≳2 keV band (covering mainly the heavier elements such as Si, S, and Fe), and thus may be biased. HUBS will extend such studies into the lower-energy band (0.1–2 keV).
SNR 3C 400.2 is one of the few remnants in which recombining features have been detected in the <2 keV band so far <cit.> (another possible example could be SNR CTB 1 <cit.>). In Figure <ref>, we present a simulation of the 100 ks spectrum of 3C 400.2. A collisional ionization equilibrium (CIE) plasma model leaves significant residuals at the RRCs of O viii, Ne ix, Ne x, and Mg xi, as well as the Heα and Lyα lines, which can be clearly identified with the help of HUBS.
In addition, the spatial resolving ability of HUBS can help to map the distribution of RPs in SNRs, which is crucial in determining their physical origins.
§.§.§ Diagnose charge exchange and resonant scattering processes
The excellent energy resolution of HUBS is especially suitable for line-oriented studies. Here, we bring up two examples in SNR physics, concerning the charge exchange process and the resonant scattering effect.
Charge exchange (CX) takes place in various astrophysical environments where hot ionized plasma interacts with neutral gas, such as the solar wind interacting with planetary atmospheres, comets, and the heliosphere. The collisionless shocks in SNRs provide a promising site for CX studies. Right behind the SNR shock front, unshocked cold neutrals may collide with the shocked hot ions and go through CX processes, resulting in a population of highly excited recombined ions (or neutrals) which then produce cascades of emission lines. Observational evidence of CX emission has been obtained from the optical band in many SNRs for over 30 years <cit.>. However, the study of CX-induced X-ray emission in SNRs is still limited. Possible evidence has been found for a number of SNRs, including the Galactic remnants Cygnus Loop <cit.>, Puppis A <cit.>, and G296.1-0.5 <cit.>, the SMC remnant 1E0102.2-7219 <cit.>, as well as the LMC remnants N132D <cit.> and J0453.6-6829 (SNR B0453-68.5) <cit.>. These studies are mostly based on investigations of the O VII triplets: CX emission could be indicated by an unusually high G ratio. However, the precise measurement of G ratios can still be challenging with current X-ray instruments. CCD cameras are not able to resolve the He-like triplets, which appear as one single line in the spectrum. Thereby one can only roughly estimate the G ratio based on the line centroid energy, which may lead to large uncertainties. In addition, CCD observations may be contaminated by the emission from solar wind charge exchange (SWCX). Grating instruments can help to resolve the triplets and to improve the constraint on G ratios, but the energy resolution may still be affected by the angular size of the source (morphological broadening). On the other hand, an enhanced G ratio does not necessarily originate from CX; it can also be induced by other mechanisms such as resonant scattering and inner-shell ionization. One possible way to distinguish CX from other mechanisms is to look for enhanced high-level excitation lines (e.g., enhanced Lyγ/Lyβ line ratios), which are another prominent and unique feature of CX emission. However, these lines are usually too weak to be detected or are blended with other emission lines. Taking together the spatial resolving ability, the high energy resolution, and the large effective area, HUBS provides us with an unprecedented opportunity to study the CX phenomenon in SNRs.
Due to the rather low density, the hot X-ray-emitting plasma in SNRs can be safely assumed to be optically thin in most cases. However, for some emission lines with large transition oscillator strengths, the resonant scattering (RS) effect cannot be ignored when the remnant contains a large column density. The optical depth at the line centroid can be estimated following <cit.>:
τ=4.24×10^26fN_ H(n_ i/n_ z)(n_ z/n_ H)(M/T_ keV)^1/2/E_ eV(1+0.0522Mv_100^2/T_ keV)^1/2
where f is the oscillator strength of the line, E_ eV the line centroid energy in eV, N_ H the hydrogen column density in cm^-2, n_ i the ion density, n_ z the element density, M the atomic weight, T_ keV the plasma temperature in keV, and v_100 the turbulence velocity in 100 km s^-1. Taking the O vii resonant line (f∼0.72) as an example, in a dense remnant like SN 1987A (n_ e∼2400 cm^-3 <cit.>) or a large remnant like Cygnus Loop (diameter of ∼2.8^∘ at a distance of ∼540 pc <cit.>), the column density may go to N_ H≳10^20 cm^-2, resulting in an optical depth τ∼1. The RS process will scatter the incident photon into another random direction. For a non-uniform distribution of the plasma or an asymmetric remnant, it will then change the line flux and modify the surface brightness distribution. Therefore, similar to CX, RS may also be indicated by an enhanced G ratio — in this case, it is due to the reduced resonant line flux rather than the enhanced forbidden line flux. The RS effect in X-rays has been extensively studied in the diffuse hot plasma of massive elliptical galaxies, galactic bulges, and clusters of galaxies <cit.>. The current study of the X-ray RS effect in SNRs is still quite limited. One possible piece of observational evidence comes from the LMC remnant N49, for which enhanced O vii G ratios as well as O viii Lyβ/α and Fe xvii (3s–2p)/(3d–2p) ratios have been found, indicating an RS effect on several resonance lines <cit.>. HUBS will be capable of resolving all of the bright resonance emission lines lying in the 0.1–2 keV band. Taking advantage of its spatially resolved high energy resolution and large field of view, we will be able to map out the surface brightness distributions of individual emission lines for the whole remnant, which has never been done before and will certainly improve our insight into the RS effect in SNRs.
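As a rough cross-check of the numbers quoted above, the line-center optical depth can also be evaluated directly from the classical oscillator cross-section and the Doppler width, which is equivalent in form to the expression above. In the sketch below the prefactor is re-derived in cgs units (column densities in cm^-2, line energy in eV), and the O vii ion fraction and oxygen abundance are assumed round values rather than quantities taken from the text.

```python
import numpy as np

SIGMA_CLASSICAL = 0.02654    # pi e^2 / (m_e c)  [cm^2 Hz]
KEV_TO_ERG = 1.602e-9        # erg per keV
AMU = 1.6605e-24             # g
HC_EV_CM = 1.23984e-4        # h * c  [eV cm]

def line_center_tau(f_osc, E_eV, N_ion, A_ion, T_keV, v_turb_kms=0.0):
    """Line-center optical depth of a Doppler-broadened resonance line.
    N_ion is the column density of the absorbing ion [cm^-2]."""
    lam_cm = HC_EV_CM / E_eV                                  # wavelength
    v_th = np.sqrt(2.0 * T_keV * KEV_TO_ERG / (A_ion * AMU))  # ion thermal speed
    b = np.sqrt(v_th**2 + (v_turb_kms * 1e5) ** 2)            # Doppler parameter
    return SIGMA_CLASSICAL * f_osc * N_ion * lam_cm / (np.sqrt(np.pi) * b)

# O VII resonance line (f ~ 0.72, E ~ 574 eV); the ion fraction and oxygen
# abundance below are assumed round numbers for illustration only.
N_ion = 1e20 * 0.5 * 5e-4
print(line_center_tau(0.72, 574.0, N_ion, 16.0, 0.2, v_turb_kms=100.0))
```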
§.§ Stars and compact objects
As the fundamental units of galaxies, stars play a key role in the recycling of matter. X-ray observations associated with stars not only deepen understanding of a wealth of astronomical phenomena, but also contribute to the understanding of extreme physical processes.
Several crucial questions have to be answered with : 1) Can we detect spectral features on the neutron star surface or the surrounding accretion disk to constrain the equation of the state of compact objects? 2) How does the hot plasma near the WD surface in the accretion column/boundary layer cool and how are the emitted X-ray photons absorbed by the accreted matter? 3) How to understand the X-ray flare mechanism and coronal heating process?
HUBS, with its large area and high spectral resolution, will help clarify these unanswered questions about neutron stars, white dwarfs, and active stars.
§.§.§ Neutron Stars
Neutron stars formed by supernova explosions are the most compact objects in the universe. The equation of state for cold and dense matter is still inconclusive with respect to the understanding of the non-perturbative nature of the fundamental interactions between quarks <cit.>. The equation of state of a neutron star and a strangeon star predicts different mass-radius relations <cit.>. The accurate measurements of neutron star mass and radius could put stringent constraints on the equation of state <cit.>. Mass measurements of massive neutron stars, M>2M_⊙, have already excluded a number of equations of state that predict the maximum mass smaller than 2M_⊙ <cit.>. Although a number of masses of neutron stars in compact binaries have been measured from radio observations with high precision, radius measurements are much more difficult to achieve with comparable precision.
Usually, neutron star mass and radius can be measured from type I X-ray bursts occurring in NS LMXBs, pulse profile modeling of X-ray pulsars and so on. NS low-mass X-ray binaries (NS LMXBs) are composed of NS and a main sequence donor orbiting each other. The masses of the NS and its companion can be determined by kinetic methods, with the orbital motion of the star in the NS LMXBs causing its spectral lines to undergo periodic redshifts and blue shifts due to the Doppler effect. The optical and/or near-infrared (NIR) spectroscopic observations can determine the mass function (stellar apparent velocity profile) of the star. Over the past decades, optical/NIR observations have shown that this method has the potential to constrain the compact object mass of LMXBs. However, there are also some shortcomings, mainly in that (1) this method requires a relatively bright optical/infrared flux of stars in LMXBs with strong absorption or emission lines, which is difficult with current optical/NIR telescopes for optically faint LMXBs; (2) This method can only measure the stellar mass function, but not the dense stars. If we can use an X-ray telescope with high energy resolution and a large effective area, we will be able to measure the velocity profile of dense stars and obtain the mass function of dense stars, which can be combined with optical/NIR observations to measure the binary mass ratio. The masses of dense stars can be constrained more precisely if the stellar masses can be determined from optical observations. This has important implications for the mass spectrum of black holes and neutron stars, and for the solution of the “mass gap" problem. Even if the stellar masses cannot be determined, the mass ratio of the two objects, combined with other measurements, can be used to constrain the binary masses very well.
Zhang et al. <cit.> suggested that absorption lines from accretion disk winds are redshifted or blueshifted due to the Doppler effect of the orbital motion. These spectral features are produced in the vicinity of compact objects and trace their motion, which can constrain the mass of compact objects in LMXBs. This approach was subsequently applied to the eclipsing NS LMXB MXB 1659–298, but the uncertainties of the measured apparent velocities are large because the energy resolution of XMM-Newton and NuSTAR is not high enough <cit.>. In general, the maximum apparent velocity of compact objects in X-ray binaries is of the order of 100 km/s, which causes a spectral shift of order 10^-4. Therefore a high energy resolution of the detector is required. The energy resolution of HUBS makes it possible to measure the Doppler effect of the spectral lines with high precision.
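To make the numbers concrete, the sketch below converts an orbital radial velocity into the corresponding line shift for a line near 0.65 keV and compares it with a simple line-centroiding estimate. The 2 eV resolution is the instrument value quoted in the text, while the assumed number of line counts and the FWHM/√N centroiding rule of thumb are illustrative assumptions only.

```python
C_KMS = 299792.458   # speed of light [km/s]

def doppler_shift_eV(E_eV, v_kms):
    """Non-relativistic Doppler shift of a line at E_eV for radial velocity v_kms."""
    return E_eV * v_kms / C_KMS

E_line = 650.0                                 # eV, roughly the O VIII Ly-alpha energy
shift = doppler_shift_eV(E_line, 100.0)        # ~0.2 eV for 100 km/s
fwhm, n_counts = 2.0, 1.0e4                    # assumed resolution and line counts
centroid_error = fwhm / n_counts**0.5          # ~0.02 eV, rule-of-thumb estimate
print(f"line shift ~ {shift:.2f} eV, centroid uncertainty ~ {centroid_error:.3f} eV")
```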
The LMXB 4U 1700+24 has a red giant companion, and the X-ray emission is dominated by wind accretion. In the X-ray spectrum of 4U 1700+24, the O viii (hydrogen-like Ly-α) emission line is found with a central energy of about 0.65 keV <cit.>. The spectral line structure corresponds to a gravitational redshift of 0.009, suggesting that 4U 1700+24 is a candidate for a low-mass neutron star, which has to be verified by further observations <cit.>.
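For reference, the quoted redshift can be turned into a mass estimate under the assumption that the line forms at the stellar surface, using 1 + z = (1 - 2GM/Rc^2)^(-1/2). The 10 km emission radius in the sketch below is an assumed value; a larger line-forming radius would raise the inferred mass.

```python
G = 6.674e-8        # cm^3 g^-1 s^-2
C = 2.998e10        # cm/s
M_SUN = 1.989e33    # g

def mass_from_grav_redshift(z_grav, R_km):
    """Mass (in M_sun) implied by a gravitational redshift z_grav
    for emission at radius R_km."""
    compactness = 0.5 * (1.0 - (1.0 + z_grav) ** -2)   # GM / (R c^2)
    return compactness * (R_km * 1e5) * C**2 / G / M_SUN

# z ~ 0.009 as quoted for 4U 1700+24, with an assumed 10 km emission radius
print(mass_from_grav_redshift(0.009, 10.0))   # ~0.06 M_sun, hence the low-mass candidate
```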
Type-I X-ray bursts are the unstable thermonuclear burning of accreting matter on the NS surface. The unstable thermonuclear burning of hydrogen and helium, also known as a type I X-ray burst, usually has a duration of ∼ 10-100 s with a typical energy release of 10^39 erg, recurs on timescales from a few hours to days, and ignites at a column depth of ∼10^8 g cm^-2 <cit.>. In rare cases, superbursts, which are believed to be due to carbon burning, have been identified from the total energy release of ∼ 10^42 erg and the duration of >10^3 s <cit.>. Cottam et al. <cit.> reported the identification of absorption lines by stacking spectra of dozens of type-I X-ray bursts from the NS LMXB EXO 0748–676, which they claimed were gravitationally redshifted Fe and O lines from the stellar surface. However, the spectral lines have not been confirmed by subsequent observations. This particular source is now believed to be rotating rapidly with a frequency of 552 Hz from its burst oscillation <cit.>, which makes it challenging to explain the relatively narrow spectral features. in't Zand et al. <cit.> also reported the non-detection of spectral lines in the Rapid Burster from Chandra/HETG observations. Rauch et al. <cit.> calculated the possible spectral lines in the soft X-ray band, i.e., Fe and O, that can be generated during X-ray bursts. HUBS provides a larger effective area together with a higher energy resolution than Chandra and XMM-Newton, which could resolve spectral features with a high S/N ratio from NSs in LMXBs, either by adding many X-ray bursts or from a single superburst.
The X-ray dim isolated neutron stars (XDINSs) mainly emit blackbody spectra in the X-ray band, and show optical/ultraviolet (UV) excesses <cit.>. All seven known XDINSs were found by soft X-ray detectors. RX J1856–3754 is the brightest and closest neutron star. X-ray observations showed that RX J1856–3754 has an almost pure blackbody spectrum in the soft X-ray band, with no emission or absorption lines <cit.>, but absorption lines may be present in other XDINSs. It is generally believed that the soft X-ray thermal spectrum of an XDINS comes from the surface of the star. The absorption lines in the XDINS spectrum are produced in the magnetic environment of the neutron star. Moreover, the structure of the absorption lines is related to the stellar surface properties. The temperature and stellar radius determined from the continuum spectrum depend on the equation of state of the compact object (see <cit.> for neutron stars; <cit.> for strangeon stars). Therefore, XDINSs are also excellent targets to study the surface properties and the equation of state of NSs.
Besides measuring the NS mass and radius, HUBS can also study the magnetic fields of anomalous X-ray pulsars (AXPs) and soft gamma-ray repeaters (SGRs). AXPs and SGRs are slowly rotating, isolated, and ultra-magnetized neutron stars <cit.>. Their X-ray activities, short bursts and outbursts, are powered by magnetic energy. During outbursts, the X-ray spectra of AXPs and SGRs may show absorption lines, which are interpreted as proton cyclotron features. HUBS could resolve the absorption features in the 0.1–2 keV band from AXPs and SGRs, and measure the magnetic field (see e.g., <cit.>).
§.§.§ Cataclysmic Variables
Cataclysmic variables (CVs) are binaries consisting of a white dwarf (WD) and a late-type main sequence or sub-giant star. CVs are the most numerous binaries containing a compact star, and their spatial density can reach 10^-6 to 10^-5 pc^-3 in the solar neighborhood <cit.>. The WD in a CV accretes matter from its companion and emits mostly in the UV and X-ray energy range. CVs are not only laboratories for stellar evolution theory, but are also related to other important astrophysical questions. For example, CVs collectively contribute up to 80% of the Galactic diffuse X-ray emission (GDXE). Moreover, CVs are closely related to the progenitors of Type Ia supernovae, since the latter are supposed to be binaries harboring one or two WDs.
X-ray observations provide unique information for understanding the accretion and emission processes of CVs. The X-ray luminosity of CVs can reach 10^33-34 erg s^-1, high enough to study the structure of the X-ray emitting region through X-ray spectroscopy. For example, the hard X-ray (around 2 to 50 keV) spectra of CVs have been well described by the multi-temperature thermal plasma model (mkcflow), and are used to constrain the maximum emission temperature and the mass of the WD. In contrast, the soft (0.1-2 keV) X-ray spectra of CVs are less well understood, and the usual characterization (the same mkcflow emission partially covered by the accreted matter) fails to explain the He-like and H-like lines from different elements (e.g., C, N, O) <cit.>. Until now, only 15 CVs have high-resolution X-ray spectra, and about half of them are not well explained. Since the soft X-rays are supposed to originate from a region fairly close to the surface of the WD, the failure of a widely accepted model in this energy range reflects our lack of understanding of the accretion process near the WD itself.
HUBS provides a unique opportunity to explore the details of the emission region in CVs. The high-resolution spectra would certainly allow a detailed investigation of the distribution of the differential emission measure (dEM) and the metallicity for a large sample of CVs. Combined with hard X-ray data, a thorough understanding of the structure and evolution of the accreted matter in CVs could be within reach.
A rough estimate of the required exposures can be made. For CVs within 50 pc, the typical 0.1-2 keV X-ray flux is ∼10^-12 to ∼10^-11 erg s^-1 cm^-2. Simulations show that a snapshot of 10 to 100 ks (depending on the flux of the target) could provide a spectrum with sufficient photons to identify the important emission lines (e.g., of O and Ne) of a targeted CV for further investigation. A total sample of 30 bright CVs in the solar neighborhood requires 30 snapshots with a total exposure of 1.4×10^6 s (16 days).
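A back-of-the-envelope version of this estimate is sketched below: the expected photon counts in a snapshot follow from the source flux, an assumed mean photon energy, and the effective area. The ∼500 cm^2 area is the value quoted earlier in the CXB discussion, while the 0.7 keV mean photon energy is a guessed value for a soft CV spectrum; both are assumptions for illustration.

```python
KEV_TO_ERG = 1.602e-9

def snapshot_counts(flux_cgs, exposure_s, area_cm2=500.0, mean_E_keV=0.7):
    """Rough photon-count estimate for a snapshot of a point source.
    flux_cgs is the 0.1-2 keV flux in erg s^-1 cm^-2."""
    photon_flux = flux_cgs / (mean_E_keV * KEV_TO_ERG)   # photons s^-1 cm^-2
    return photon_flux * area_cm2 * exposure_s

print(snapshot_counts(1e-12, 1e5))   # faint CV, 100 ks snapshot
print(snapshot_counts(1e-11, 1e4))   # brighter CV, 10 ks snapshot
```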
With its large effective area and high spectral resolution, HUBS can greatly improve our understanding of the accretion process in CVs.
§.§.§ Stars
Stars located across almost all regions of a Hertzsprung-Russell diagram have been identified as X-ray sources, although with different mechanisms. Stellar magnetic corona is the predominant origin of X-rays for late-type stars, while for massive and hot stars, the X-ray emission is from shocks forming in unstable winds. The X-ray radiation of pre-main sequence stars may originate both in hot coronal plasma or shocks <cit.>.
The stellar magnetic activity provides substantial information on the magnetic dynamo and the coronal heating process. It is also of great value for exploring the interaction between stars and their planets and determining the habitable zone of different stars <cit.>. Stellar magnetic activity is ubiquitous in late-type stars, which can be traced by various proxies, including spots and flares from the photosphere, emission lines from the chromosphere, and X-ray and radio emissions from the corona.
The activity level strongly depends on stellar parameters (e.g., stellar mass, age).
X-ray astronomy has played a key role in stellar activity studies. The X-ray luminosity of active stars in the quiet state ranges from 10^27 to 10^31erg s^-1, while it is 1-2 orders of magnitude brighter during flares <cit.>.
It helps establish the famous activity-rotation relation (e.g., <cit.>).
In the relation, the X-ray activity is described as the ratio between X-ray luminosity and bolometric luminosity, while the Rossby number is used to trace stellar rotation, which is defined as the ratio of the rotation period to the convective turnover time.
The relation is usually suggested to consist of two distinct sequences: the saturated region for rapidly rotating stars, in which the activity level keeps constant, and the power-law decay region for slowly rotating stars, where the activity level is rotation-dependent <cit.>.
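This canonical two-regime description can be written down compactly, as in the sketch below; the saturation level, critical Rossby number, and power-law slope are commonly quoted round values, used here purely as illustrative assumptions rather than results from this work.

```python
import numpy as np

def activity_rotation(Ro, Ro_sat=0.13, Rx_sat=10**-3.1, beta=-2.0):
    """Two-regime activity-rotation relation: R_X = L_X / L_bol is constant
    (saturated) below Ro_sat and follows R_X = Rx_sat * (Ro/Ro_sat)**beta above it.
    Default parameter values are assumed, commonly quoted round numbers."""
    Ro = np.atleast_1d(np.asarray(Ro, dtype=float))
    return np.where(Ro <= Ro_sat, Rx_sat, Rx_sat * (Ro / Ro_sat) ** beta)

print(activity_rotation([0.05, 0.13, 0.5, 2.0]))
```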
X-ray spectral observations have yielded a typical temperature of about 0.1-1 keV for the stellar corona, belonging to the soft X-ray band.
Previous studies using low-resolution spectra (e.g., from Chandra/ACIS and XMM-Newton/MOS) have measured the coronal temperatures and discussed the distribution of differential emission measures (dEM) for some nearby active stars <cit.>.
High-resolution X-ray spectroscopy, on the other hand, is mainly done with the Chandra/HETG and XMM-Newton/RGS spectrometers.
By using the He-like and H-like lines from different elements (e.g., C, N, O), the distributions of some physical parameters (e.g., coronal temperature and density, dEM, metallicity) during quiet states and flares have been well constrained for dozens of stars <cit.>.
High-resolution spectra of active stars revealed a new trend that runs opposite to the solar FIP effect, called the “inverse FIP (IFIP) effect" <cit.>.
Although previous studies provide a number of surprising findings, there are many key issues unresolved. The standard picture of the activity-rotation relation has been challenged by recent studies, such as the variable activity level in the saturation region <cit.> and more sequences possibly divided in the relation <cit.>.
It is also unclear whether the distribution of coronal physical parameters is universal among stars, given the small and incomplete sample with high-resolution spectroscopic observations.
For example, more than 900 F/G/K-type stars are located within 30 pc of the solar system <cit.>, but only about 40 of them have been observed; most stars around the solar system are M dwarfs, but only a few have been observed (e.g., Proxima Cen <cit.> and CN Leo <cit.>).
More importantly, some basic physical questions including the mechanism of the saturation and the connection between the relation and magnetic dynamo are poorly understood.
A large sample covering different types of stars, with well-measured activities and spectral parameters, can help investigate the physical properties of the stellar magnetic dynamo and provide potential diagnostics of heating mechanisms.
HUBS can help establish a large high-resolution X-ray spectral sample of stars with different spectral types, rotation periods, ages, and metallicities.
For single stars, detailed diagnostics of the coronal temperature and density can be done with the emission lines from different elements. With further investigation of the dEM, the FIP and IFIP effects, and the area of the active region, a comparison with the Sun can help explore the flaring mechanism and heating process.
On the other hand, by using the large sample, the distribution of these parameters and their relationships with different stellar parameters (e.g., mass, age, rotation) will help understand the structure and evolution of stars.
For typical active stars, an exposure of 100 ks with HUBS can obtain a spectrum with a sufficiently high signal-to-noise ratio for the studies described above; for nearby stars, the exposure time can be reduced to around 10–30 ks. Therefore, the total exposure time for 100 stars is about 10^3 to 10^4 ks. Taking Proxima Cen as an example, the simulation shows that HUBS can clearly distinguish emission lines in its spectrum (typical for M-type active stars) compared with XMM-Newton/RGS observations with an exposure time of ≈800 s.
With its large effective area and high spectral resolution, HUBS is expected to provide a valuable opportunity to advance stellar magnetic activity studies.
§ EXPLOITING THE CAPABILITIES OF HUBS
As shown in previous sections, HUBS will observe various types of warm and hot plasmas across more than ten orders of magnitude in size, such as stellar coronae, supernova remnants, AGN winds, hot plasmas around individual galaxies and galaxy assemblies, and cosmic web filaments. Characteristic emission and absorption lines in the high-resolution X-ray spectra will enable us to measure various physical properties of these astrophysical plasmas, including but not limited to temperature, density, elemental abundances, and kinematics <cit.>. These fundamental parameters are essential to fill the gaps in our understanding of the role of warm and hot plasmas in the formation and evolution of the hot Universe. These astrophysical plasmas play an important role in the galactic ecosystem <cit.>.
As we have experienced in the era of the diffraction grating spectrometers aboard Chandra and XMM-Newton <cit.>, the next generation of high-resolution X-ray spectroscopy will offer both an opportunity and a challenge. On one hand, it will greatly advance our knowledge of the Universe beyond what we have learned from Chandra and XMM-Newton. On the other hand, it will also challenge us to quantify key observables precisely and efficiently. To better prepare us for the upcoming new era, we need to improve the status quo in the following three aspects: atomic data, plasma models, and spectral analysis techniques.
§.§ Atomic data
Various types of microscopic atomic processes give rise to continuum and line features in the observed spectra. Generally speaking, the interactions between electrons, ions, and photons can be divided into collision, ionization, and recombination <cit.>. Each category can be further divided into several sub-classes. For instance, radiative, di-electronic, and multi-electron recombination all contribute to the continuum and line emission in the observed spectrum. Even if we are limited to the simplest radiative recombination rates of H- to Na-like ions with Z≤30, there are 3×10^4 levels to consider <cit.>. Each level-resolved rate is provided either on a few temperature grids or described with a few parameters <cit.>. The entire atomic database can easily grow to a significant size.
The associated large amount of atomic data is the building block of the astrophysical plasma codes widely used in the community: APEC <cit.>/ACX <cit.>/NEI <cit.>, CHIANTI <cit.>, Cloudy <cit.>, SPEX <cit.>, SASAL <cit.>, and XSTAR <cit.>. Note, however, that the underlying atomic databases are not perfect (e.g., <cit.>). Continuous development, including both theoretical calculations and laboratory measurements, is required <cit.>.
In 2016, we had a test of the next generation of high-resolution X-ray spectroscopy with Hitomi <cit.>. While the statistical uncertainty of the observed spectrum is less than 1%, the Fe abundance measured with APEC and SPEX differ by 16% <cit.>. This is mostly attributed to the different atomic data used by these two plasma models <cit.>. The Fe abundance is measured from H- and He-like lines, but their transition rates (i.e., A-values) and electron-impact excitation rates can differ up to 40% <cit.>. That is to say, the accuracy of the atomic data is not adequately converged to match the accuracy of the observed data.
When the mysterious 3.5 keV line was in the spotlight <cit.>, it was unclear whether natural atomic processes like di-electronic recombination and charge exchange process can account for this instead of the dark matter decay process. This was largely due to the incompleteness of the atomic database. New theoretical calculations and lab measurements were then pursued to quantify the role of these two recombination processes <cit.>. Ar xvii di-electronic recombination line is at 3.62 keV, while the S xvi charge exchange recombination line is at 3.47±0.06 keV. Due to insufficient energy resolution of CCD instruments, the line center of the 3.5 keV line is not tightly constrained: 3.57±0.02 keV by <cit.> and 3.52±0.02 keV by <cit.>. On the other hand, while the 3.5 keV line is found in some mega-second CCD observations, it is absent in the ∼300 ks microcalorimeter (Hitomi/SXS) observation. Deeper microcalorimeter observations with fine energy resolution (e.g. HUBS) are certainly required.
§.§ Plasma diagnostics
Plasma diagnostics play a crucial role when interpreting characteristic continuum and line features in the observed high-resolution spectra. Fundamental physical properties of the observing target are measured by matching the data and model.
In the 0.1-2 keV soft X-ray bandpass covered by HUBS, the H-like Lyman series and He-like triplets are the most prominent emission line features. In a low-density CIE plasma, such as the majority of hot gas in individual galaxies and galaxy assemblies, the Lyα lines should have the highest intensity among the Lyman series. Lyα might be optically thick in some astrophysical environments so that the intensity of Lyα will be reduced by resonance scattering (a fraction of Lyα photons are scattered out of our line-of-sight). Other Lyman series lines with smaller oscillator strength suffer less from this issue, leading to larger ratios of Lyβ/Lyα, Lyγ/Lyα, and Lyδ/Lyα. Furthermore, at the interface between the hot plasma and cold media (e.g., comets), the charge-exchange process can selectively increase the intensity of e.g., Lyγ or Lyδ <cit.>.
The He-like triplet consists of the resonance (w), inter-combination (x and y), and forbidden (z) lines. The line ratio among the three is rather sensitive to a wide range of plasma temperature, density, and the astrophysical environment of the plasma <cit.>. In a CIE plasma, the G=(x+y+z)/w ratio decreases with an increasing plasma temperature, while the R=z/(x+y) ratio decreases with an increasing plasma density <cit.>. In photoionized plasmas (e.g., the X-ray narrow line region of AGN), the external radiation field can boost both the G- and R-ratios <cit.>. The charge exchange process can also increase the G-ratio <cit.>. The optical depth effect also applies to He-like resonance lines as well <cit.>.
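The bookkeeping behind these diagnostics is straightforward, as the short sketch below shows; the He-like line fluxes used in the example are arbitrary illustrative numbers, not measurements.

```python
def triplet_ratios(w, x, y, z):
    """He-like triplet diagnostics from measured line fluxes:
    resonance (w), intercombination (x, y), and forbidden (z) lines.
    G is sensitive to temperature and environment; R is sensitive to density."""
    G = (x + y + z) / w
    R = z / (x + y)
    return G, R

G, R = triplet_ratios(w=1.0, x=0.08, y=0.17, z=0.75)   # arbitrary fluxes
print(f"G = {G:.2f}, R = {R:.2f}")
```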
Apart from H- and He-like lines, the Fe-L complex is also prominent <cit.>. These n≥3 to n=2 (i.e., L-shell) transitions of Fe xvi to Fe xxiv are susceptible to a wide range of atomic processes: direct and resonance excitation, radiative and di-electronic recombination, and inner-shell ionization. Consequently, they are notoriously difficult to model. The line ratios among the Fe xvii 15.01 Å (3C), 15.26 Å (3D), 17.05 Å (3G), and 17.09 Å (M2) lines have been a hot topic for both theoretical calculations and lab measurements for decades <cit.>.
Thanks to the fine energy resolution and large effective area of HUBS (Figure <ref>), some weak-line diagnostics become possible and effective. The width of radiative recombination continua (RRC) is an effective measure of the plasma temperature. These RRC can be found in the hot recombining plasma of supernova remnants <cit.>, the warm photoionized gas of X-ray binaries <cit.>, or AGN <cit.>. Di-electronic recombination satellite lines of He-like ions can effectively verify the presence of non-Maxwellian electrons, such as the supra-thermal electrons behind the shocks of merging galaxy clusters <cit.>. Meta-stable absorption lines of Be-like to F-like ions can probe a wide range of number densities for AGN winds <cit.>.
All these diagnostics have been implemented in the astrophysical plasma codes widely used for X-ray spectral analysis: APEC/ACX/NEI, CHIANTI, Cloudy, SPEX, and XSTAR. Continuous developments of these plasma codes are still required. For instance, pre-calculated charge-state distribution tables <cit.> are not applicable to high-density plasma <cit.>. Self-consistent charge-state distribution calculations involving excitation, recombination, and ionization from and to meta-stable levels are required. Radiation transfer for the high-density plasma is also required. On one hand, this calls for a large amount of atomic data that is not yet available. On the other hand, as the complexity grows, computational efficiency needs to be improved.
§.§.§ Laboratory benchmark required by HUBS
Plasma diagnostics for various objects depend strongly on the models listed above, including SPEX/CX <cit.> and ACX <cit.> for charge-exchange emission in the SWCX foreground and SNRs. However, these two models are not perfect. To produce high-resolution spectra, both models use approximations to redistribute total, n-, or nl-resolved cross-sections. For the n-resolved cross-sections, limited experimental data are available for collisions with different neutrals at some energies <cit.>. For the nl-resolved cross-sections, only theoretical calculations are available, while experiments are lacking. In the high-resolution spectra obtained with HUBS (Δ E≤2 eV), most lines are resolved at the fine-structure (nLSJ) level. However, the present CX models, including ACX and SPEX-CX, have rather large uncertainties <cit.>. This calls for a laboratory benchmark of the CX model.
By comparison of the resultant spectra from both the experimental and theoretical cross-sections, the accuracy of the CX model will be examined at given collision energies. In some cases, CX high-resolution spectrum can be measured directly in the laboratory, which can be used to fit the observation. Besides the charge-exchange data, laboratory measurements on other atomic data including ionization, di-electronic/radiative recombination, excitation as well as spectra, will improve our interpretation of the observations.
In turn, HUBS spectroscopy will spur progress in the collision theories of atomic physics, including the nl-resolved charge-exchange cross-sections. Generally, the n-resolved CX cross-sections can be obtained in a heavy-ion source with the cold target recoil ion momentum spectroscopy (COLTRIMS) apparatus, with an electron energy resolution of ∼10 eV <cit.>. The nl-resolved cross-sections of He-like captured ions have never been obtained experimentally. For the collision of O^7+ with H, accurate close-coupling calculations show that the dominant channel for capture of the bound electron from the H donor by the O^7+ projectile is into n=4 states. The radiative decay rates of the dipole transitions of O vii have an accuracy of ≤5%. The O vii resonance, intercombination, and forbidden lines at rest-frame energies of 561 eV, 569 eV, and 574 eV are well resolved in the observation. The observed data can then be used to determine the l-distribution in the n=4 channel with an accuracy better than 10% by an iterative algorithm.
In summary, high-resolution HUBS spectroscopy requires laboratory measurements to benchmark the CX model; in turn, it provides constraints on the nl-distribution of the cross-sections measured in the laboratory. The two complement each other.
§.§ Spectral analysis techniques
With diffractive grating spectrometers, we typically obtain one high-resolution X-ray spectrum per observation. It might take weeks or months for experts to finish a thorough spectral analysis. With X-ray integral field units like those on HUBS, we may obtain up to thousands of high-resolution X-ray spectra in a single observation. We need to quantify key observables precisely and efficiently with limited manpower and computation resources.
For observations targeting point-like sources, an efficient and automated line detection algorithm without any prior knowledge of the targets is required as the first step <cit.>. If the spectrum is not featureless, we need to identify these lines and extract preliminary information such as the line center, velocity shift, line broadening, equivalent width, and plasma types according to the intensities of characteristic lines. This might call for a machine-learning approach. For observations targeting extended sources, imaging spectroscopic approaches including but not limited to Weighted Voronoi Tessellations <cit.> and smoothed particle inference <cit.> are to be pursued to get a comprehensive and self-consistent view of the observing target.
§ STATUS OF HUBS
The HUBS project is being funded by the China National Space Administration for key technology development (which corresponds roughly to Phase A, in terms of NASA project cycles). The critical technologies identified include the superconducting microcalorimeter (detector), wide field-of-view X-ray focusing optics (telescope), multiplexing signal readout electronics, the mechanical cooler, and the adiabatic demagnetization refrigerator. The goal is to advance the technical readiness levels (TRLs) of those technologies sufficiently by the end of 2023, before the project can enter the next phase. Looking ahead, the important milestones will include the completion of technology development and payload design, the construction of the satellite, and the launch and operation of the satellite (around 2030 and beyond).
Mock observations have been made to assess the scientific capabilities of HUBS and also to help formulate observing strategies <cit.>. The results suggest that CGM studies require deep exposures on carefully-selected targets, while group or cluster observations are likely quite efficient at low redshifts, thanks to the large field of view. For IGM studies, on the other hand, medium-exposure mosaic observations will be necessary to acquire sufficient spatial coverage, so the total exposure time is also expected to be long for each selected field. It is, therefore, clear that target selection is critical to the success of HUBS. Discussion is ongoing on the scientific value (vs resource investment) of an all-sky survey in the extended mission period.
§ SUMMARY
The Hot Universe Baryon Surveyor (HUBS) mission aims at studying the hot gas in the universe with unprecedented sensitivity and spatial resolution in X-rays. Among the core sciences for which HUBS is tailored, the feedback in the galactic ecosystem and the cosmic baryon budget are of particular importance. HUBS will provide a unique opportunity to study the hot gas in the ISM, the CGM and the ICM by resolving the X-ray spectrum in both emission and absorption, from which the spatial distribution and the kinematics of the hot gas can be confidently obtained. Since the thermal and kinematic status of the hot gas is closely related to star formation and AGN activity, the HUBS mission will be a huge leap forward in our understanding of galaxy formation and evolution. Moreover, with its high sensitivity and large field of view, HUBS is highly capable of searching for multi-phase hot gas in galaxy groups, clusters and the cosmic web, which will pave the way for the future study of the cosmic baryon budget.
HUBS may also extend its application to other X-ray-related observatory sciences. For instance, HUBS will be able to directly constrain the gas number densities and opening angles of AGN-driven outflows. HUBS can also probe the nearest X-ray sources within our Galaxy, such as the cosmic X-ray background, supernova remnants, the activities of stars and compact objects, and the emission from the Solar system.
The capability of HUBS can be exploited further by updating our knowledge of atomic data, plasma models, and the techniques of spectral analysis. Laboratory measurements of atomic data will be used to benchmark the CX model and to improve the interpretation of the observations. In addition, since HUBS will be able to obtain thousands of high-resolution X-ray spectra in one single observation, spectral analysis techniques will be developed to quantify key observables precisely and efficiently with limited manpower and computation resources.
A staged construction plan has been carefully designed for the HUBS mission. With the support from the China National Space Administration for key technology development, a number of critical technologies have been identified, and the current goal is to sufficiently enhance the technical readiness levels of those technologies by the end of 2023 before entering the next phase. In addition, mock observations have been carried out to test the feasibility of candidate targets and observing strategies.
As we celebrate the 60th anniversary of X-ray astronomy, the field is about to enter a new era, in which spatially-resolved, high-resolution spectroscopy is expected to become increasingly exquisite and routine, thanks to the advancement of new detector technologies. The imminent launch of XRISM <cit.> is highly anticipated, as the first mission employing microcalorimeters for spectroscopic observations. The scientific potential of such a spectrometer has been well illustrated by sounding-rocket experiments <cit.> and the Hitomi satellite mission <cit.>, so breakthroughs are expected of XRISM, especially in the studies of the ICM and AGN, as it is optimized to detect emission lines at higher energies than HUBS. With the new generation of microcalorimeters, HUBS, as well as Athena <cit.>, will not only provide higher spectral resolution, but also significantly improve the detection sensitivity at energies where the emission lines associated with the hot CGM/IGM are expected to lie, and thus provide new avenues for exploring baryonic processes in the cosmos. These improvements are expected to significantly advance our understanding of many important astrophysical fields, as we have stated in detail in the present paper.
This work is supported by the National Natural Science Foundation of China (Grant Nos. 11721303, 11821303, 11825303, 11873029, 11890693, 11973033, 11991052, 12025303, 12033004, 12041301, 12121003, 12133008, 12173018, 12192220, 12192223, 12221003, 12233001, 12233005, 12273010, 12273030, 12273057, 12011540375, U1931140), the China Manned Space Project (Grant Nos. CMS-CSST-2021-A04, CMS-CSST-2021-A06, CMS-CSST-2021-A10, CMS-CSST-2021-B02), the Ministry of Science and Technology of China through its National Key R&D Program (Grant No. 2018YFA0404502), the National SKA Program of China (Grant No. 2020SKA0120300), the National Key Research and Development Program of China (Grant No. 2022YFA1602903), the Outstanding Young and Middle-aged Science and Technology Innovation Teams from Hubei colleges and universities (Grant No. T2021026), the Young Top-notch Talent Cultivation Program of Hubei Province, the National Science Foundation (Grant Nos. AST-2107735 and AST-2219686), and NASA (Grant No. 80NSSC22K0668). Mr. Yongkai Zhu (Shanghai Jiao Tong University) provided useful comments on the manuscript.
The authors declare that they have no conflict of interest.
This paper was organized and structured by Wei Cui and Feng Yuan, and was primarily contributed by each author as follows: <ref> (Wei Cui, Suoqing Ji, Feng Yuan), <ref> (Suoqing Ji, Junjie Mao, Feng Yuan), <ref> (Hui Li, Miao Li), <ref> (Suoqing Ji, Jiangtao Li, Miao Li), <ref> (Suoqing Ji), <ref> (Jiangtao Li), <ref> (Dandan Xu, Haiguang Xu), <ref> (Dandan Xu), <ref> (Zhongli Zhang, Haiguang Xu), <ref> (Guiyun Liang, Wenhao Liu, Zhijie Qu, Hang Yang, Shuinai Zhang), <ref> (Lei Sun, Ping Zhou, Yang Chen), <ref> (Zhaosheng Li, Song Wang, Xiaojie Xu), <ref> (Junjie Mao), <ref> (Guiyun Liang, Junjie Mao), <ref> (Junjie Mao, Ping Zhou, Shuinai Zhang), <ref> (Wei Cui), and <ref> (Wei Cui, Suoqing Ji). All authors critically reviewed and made contributions to the manuscript.
Reducing Causality to Functions with Structural Models
Tianyi Miao
University of Pennsylvania
======================================================
The precise definition of causality is currently an open problem in philosophy and statistics. We believe causality should be defined as functions (in mathematics) that map causes to effects. We propose a reductive definition of causality based on Structural Functional Model (SFM). Using delta compression and contrastive forward inference, SFM can produce causal utterances like "X causes Y" and "X is the cause of Y" that match our intuitions. We compile a dataset of causal scenarios and use SFM in all of them. SFM is compatible with but not reducible to probability theory. We also compare SFM with other theories of causation and apply SFM to downstream problems like free will, causal explanation, and mental causation.
Keywords: Causal Modeling, Causation, Actual Causality
§ INTRODUCTION
What is causation? What does it mean to say one thing causes another? Is it possible to define causation in non-causal terms?
We can easily find examples where "correlation doesn't imply causation." Ice cream sales are positively correlated with deaths by drowning, but ice cream doesn't cause drowning. However, this doesn't tell us what causation really is. While probabilistic independence and correlation coefficients have clear mathematical definitions, the precise definition of causality remains a subject of ongoing debate.
Embracing a functional theory of causation, we argue that causality essentially is functions that map causes to effects.
While functions are distinct from probability theory and sufficiently general for scientific purposes, we can place additional constraints and formalize Structural Functional Model (SFM), which better fit intuitions in causal utterances:
* Forward inference from causes to effects:
* What if X? Y.
* Had it been X, it would have been Y.
* Actual causality (separating "actual causes" from background conditions):
* X causes/doesn't cause Y.
* X is/isn't the cause of Y.
* What is the cause of Y? X.
Throughout this paper, the word "function" exclusively denotes a mathematical function (Appendix <ref>). We'll never use it to mean "intended purpose or task" as in "the functions of cellphones include texting." The word "functional" is only used as the adjective form of "function."
For SFM, we'll explicitly separate its representation, inference, and learning <cit.>:
* Representation is the declarative model of "what the world is like."
* Inference assumes the representation is correct and answers queries regarding particular instances, such as computing values of unknown variables given known variables.
* Learning inductively constructs a representation from empirical data.
Such decoupling allows us to design general-purpose inference and learning algorithms that work for different task-specific representations.
§ REPRESENTATION: A ROADMAP
In this section, we build the representation of SFM by incrementally adding functions, directed graphs, composition, contrast, and delta compression into a unified model. Each additional component will help SFM better fit intuitions about causal utterances, sometimes at the cost of generality.
Motivated by theoretical and pragmatic benefits like simplicity, expressiveness, and computational efficiency, the definition of SFM is unambiguous, mathematical, and reductive. It contains no circular definition because it doesn't rely on causal concepts like intervention and agency.
§.§ Causal Relata
When we say "X causes Y", what kinds of things are X and Y? How do we represent a world? Classifying by causal relata, there are 4 kinds of causal relationships <cit.>:
* Token causation: I frequently water my flower in my garden, causing it to grow tall.
* Type causation: Watering a plant frequently causes it to grow tall.
* Token influence: How much I water my flower in my garden influences how tall it grows.
* Type influence: How much a plant is watered influences how tall it grows.
Influence relates variables (a variable can have one of many values); causation relates values of variables.
Tokens are specific; types are general. Since this type-token distinction applies to non-causal models too, it's not central to causality. SFM doesn't endorse any particular theory of physics or metaphysics, so it's up to the user to specify how variables correspond to real-world things.
Formally, let be a set of nodes (we use "nodes" instead of "variables" to avoid confusion with random variables) and be a function that maps nodes to their domains. For node u ∈, its domain [u] is the set of values it can take on. An assignment is a function that maps each node to a value in its domain.
* A complete assignment : →⋃_u ∈[u] assigns values to all nodes, satisfying ∀ u∈: (u)∈[u].
* A partial assignment _|: →⋃_u ∈[u] assigns values to a subset ⊆ of nodes, satisfying ∀ u∈: _|(u)∈[u].
* _|⊆ iff ∀ u∈: _|(u)=(u).
We use dictionary notations node1:value1, node2:value2, … for assignments (and discrete finite functions in general). Nodes, values, and assignments are different things.
Influence relates nodes ( influences ), while causation relates assignments (Water:High causes Growth:Tall).
* The set of all complete assignments forms the Cartesian product ∏_u ∈[u].
* A team R is a set of complete assignments <cit.>, so R ⊆∏_u ∈[u]. R is a relation.
* For a modal/counterfactual/possible-world interpretation, each complete assignment is a world.
Each node is a feature/property/aspect/variable of the world.
R is the set of possible worlds; (∏_u ∈[u]) ∖ R is the set of impossible worlds.
* For a database interpretation, each is an individual/person/record/item. R is a population containing many individuals. Each node is a property/attribute/feature of that individual.
* A complete assignment satisfies team R iff ∈ R.
* A team R is satisfiable iff R is nonempty. R is unsatisfiable iff R = ∅.
* If any domain [u] is empty, the Cartesian product ∏_u ∈[u] is empty and there's no satisfiable R, so we'll only consider nonempty domains.
* An assignment _| is permitted by R iff ∃∈ R: ⊇_|.
We call this an induced complete assignment of _|.
* Partial assignments _|_1, _|_2, …, _|_k are compatible with each other iff ∃∈ R: ∀ i ∈{1, 2, …, k}: ⊇_|_i.
We will say " influences " and "_| causes _|," where and are sets of nodes; _| and _| are partial assignments.
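The definitions above translate almost directly into code. The Python sketch below (with illustrative node names and a hand-picked team R, neither taken from the paper) represents assignments as dictionaries and checks whether a partial assignment is permitted; it is a minimal illustration of the formalism, not part of the formal apparatus itself.

```python
from itertools import product

# A minimal sketch (node names illustrative): assignments as dictionaries, a team R as a
# list of complete assignments, and a "permitted" check for partial assignments.
nodes = ["Water", "Growth"]
domains = {"Water": ["none", "moderate", "heavy"],
           "Growth": ["none", "moderate", "tall"]}

def all_complete_assignments(nodes, domains):
    for values in product(*(domains[u] for u in nodes)):
        yield dict(zip(nodes, values))

# Suppose the only possible worlds are those where growth matches watering.
possible = {("none", "none"), ("moderate", "moderate"), ("heavy", "tall")}
R = [w for w in all_complete_assignments(nodes, domains)
     if (w["Water"], w["Growth"]) in possible]

def permitted(partial, R):
    """A partial assignment is permitted iff some world in R extends it."""
    return any(all(w[u] == v for u, v in partial.items()) for w in R)

print(permitted({"Growth": "tall"}, R))                     # True
print(permitted({"Water": "none", "Growth": "tall"}, R))    # False
```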
§.§ A Functional Theory of Causation
Many causal scenarios are not reducible to probability theory. For example, flipping the light switch turns on the light, but doesn't affect the TV. This system of electric circuits is deterministic and fully-specified. We can consistently predict the "independence" between light switch and TV and what would happen given the switches' status, using functions alone without probabilities.
According to the functional theory of causality, causality essentially is mathematical functions (left-total, right-unique relations) that map causes to effects.
<cit.> briefly mentions that the cause (functionally) determines the effect.
<cit.> explicitly defend that causation is "a function of one variable (the cause) on to another (the effect)."
Structural Causal Model (SCM) <cit.> uses multi-input single-output functions in structural equations to represent "laws" or "mechanisms" of the world.
"Causality as functions" becomes immediately obvious once it's pointed out. For example,
* In y = f(x), we call x the independent variable and y the dependent variable, like how effects depend on causes.
* Describing "rain influences wheat growth" with = f(), the input-output mappings are:
* With no rain, wheat doesn't grow.
* With moderate rain, wheat grows moderately.
* With heavy rain, wheat grows very well.
* The light-switch-and-TV example can be described by =f_1() and =f_2().
Two key properties distinguish functions from other kinds of relations:
* Right-uniqueness: 1 input value cannot simultaneously associate with 2 or more different output values. Functions can only be many-to-one or one-to-one, never one-to-many.
This explains why causes "necessitate" or "are sufficient for" their effects (given the underlying function).
* (Possible) non-injectiveness: Some functions can map different input values to the same output value, like y=x^2 over real numbers. Non-injective functions cannot be inverted. This explains the asymmetry of causation: different causes can lead to the same effect.
Functional dependencies are properties of a team R ⊆∏_u ∈[u]: For , ⊆,
* Value-level dependency: We say " functionally depends on _|" (_|) or "_| functionally depends on _|" (_|_|) when given _|, there exists exactly one _| that's compatible with _|.
* Node-level dependency: We say " functionally depends on " () when _| for every permitted _|.
* Value-level and node-level dependencies can be different. In (Y) = (X_1) ∨(X_2) ∨(X_3), value-level {X_1: 1}{Y: 1} is true; node-level {X_1}{Y} is false; node-level {X_1, X_2, X_3}{Y} is true.
Node-level functional dependency satisfies right-uniqueness: ∀_1, _2 ∈ R: (_1| = _2|) ⇒ (_1| = _2|).
So there's a function f: {_| | ∃∈ R: ⊇_|}→{_| |∃∈ R: ⊇_|} such that ∀∈ R: _| = f(_|).
We thus define functional determination:
* Node-level determination: We say " functionally determines via f" () when ∀∈ R: _| = f(_|).
* Value-level determination: We say "_| functionally determines _| via f" (_|_|) when and _| = f(_|).
In compliance with conventions from dependence logic <cit.> and relational databases <cit.>, functional dependency doesn't contain f, while our functional determination does.
Influence is node-level functional determination; causation is value-level functional determination. In _| = f(_|), _| is the cause, _| is the effect, and f is an underlying mechanism/law-of-nature (since _| = f(_|) is true in every possible world ∈ R).
Generally, causality is the study of functional dependency (e.g. Armstrong's Axioms), functional determination, and relational independence <cit.>. It's nontrivial because these concepts cannot be reduced to probability theory.
We say "_| causes _|" when and _|=f(_|). We say " influences " when .
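A minimal sketch of the node-level dependency check (right-uniqueness over a team) is given below; reading the three-input example above as Boolean OR (an assumption consistent with the stated dependencies), it reproduces both the node-level and the value-level claims. The function names are ours.

```python
from itertools import product

# A sketch: check the node-level functional dependency "X determines Y" in a team R
# by testing right-uniqueness (equal values on X force equal values on Y).
def functionally_depends(R, X, Y):
    seen = {}
    for w in R:
        key = tuple(w[u] for u in X)
        val = tuple(w[u] for u in Y)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

# Three-input example with Y = X1 OR X2 OR X3 over binary domains.
R = [{"X1": a, "X2": b, "X3": c, "Y": a | b | c}
     for a, b, c in product([0, 1], repeat=3)]
print(functionally_depends(R, ["X1"], ["Y"]))               # False (node-level fails)
print(functionally_depends(R, ["X1", "X2", "X3"], ["Y"]))   # True
print(all(w["Y"] == 1 for w in R if w["X1"] == 1))          # True: value-level {X1:1} -> {Y:1}
```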
§.§ Directed Graphs
Previously, we first have a team R and then find functional determinations as properties of R. Now we take the opposite direction. We start with a set of functional determinations FDet = {_1 _1, _2 _2, …_n _n}, which then select R_FDet⊆∏_u ∈[u] as all that satisfies FDet. Here "all" is necessary for defining a unique R_FDet, because functional dependencies and determinations are downward-closed (if R_1 satisfies FDet, then any subset R_2 ⊆ R_1 also satisfies FDet <cit.>).
When we draw diagrams to illustrate causal relationships, we want arrows to point from causes to effects.
Structural Causal Model (SCM) <cit.> generalizes this intuition, subsumes the graphical and potential-outcome frameworks, and is the most popular causal model in statistics, econometrics, and epidemiology. Our SFM inherits the following ideas from SCM:
* A causal system is represented as a (usually finite and acyclic) directed graph.
* One mechanism's effect can be another mechanism's cause. One function's output can be another function's input.
* A node's value is functionally determined by the values of its parents.
* Unlike SCM, our SFM doesn't use "intervention" in its definition at all (Section <ref>).
Besides nodes and domains , an SFM = (, , , ) also has:
* ⊆× is a set of directed edges.
* In a directed graph = (, ), a node u is exogenous (exo-node u ∈_exo) iff it's a root node; otherwise, it's endogenous (endo-node u ∈_endo).
* We write exo-assignment _|_exo as _exo and endo-assignment _|_endo as _endo.
* maps every endo-node u ∈_endo to exactly one structural function [u]: (∏_p ∈(u)[p]) →[u].
* [u]: _|(u)↦(u) maps an assignment over u's parents to a value of u.
* R_={∈∏_u ∈[u] | ∀ u ∈_endo: (u) = [u](_|(u))} is the set of all complete assignments satisfying .
Equivalently, specifies functional determinations FDet_ = {(u) {u}}_u ∈_endo, where f_u(_|(u))={u:[u](_|(u))}.
Consider SFM = (, , , ):
* = {A, B, C, D, E}
* = {(A, B), (B, D), (C, D), (C, E)}
* = {A: ℝ, B: ℝ, C: ℝ, D: ℝ, E: ℝ}
* For simplicity, we'll abuse notations and write [u](_|(u)) as [u]():
[B]()=(A)^2
[D]()=(B)+(C)
[E]()=(C)× 7
* A, C ∈_exo are exo-nodes; B, D, E ∈_endo are endo-nodes.
* A → B → D forms a causal chain, B → D ← C forms a "common effect" structure, and D ← C → E forms a "common cause" structure.
* {A: i, B: -1, C: 10, D: 9, E: 70} isn't an assignment over (, ), because the complex number i ∉ℝ is outside of A's domain.
* {A: 2, B: 2, C: 2, D: 2, E: 2} is a complete assignment over (, ), but it doesn't satisfy .
* {A: 3, B: 9, C: -π, D: 9-π, E: -7π} is a complete assignment that satisfies , so is satisfiable.
* Therefore, partial assignments {A: 3, B: 9} and {D: 9-π, E: -7π} are permitted and compatible with each other.
* {D: -10, E: 7} isn't permitted because no ∈ R_ extends it.
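For concreteness, the example SFM above can be encoded with plain dictionaries; the sketch below checks which of the listed complete assignments satisfy the structural functions. This dictionary-based encoding is our own illustration, not a general library.

```python
import math

# A dictionary-based sketch of the example SFM above (for illustration only).
nodes = ["A", "B", "C", "D", "E"]
parents = {"B": ["A"], "D": ["B", "C"], "E": ["C"]}      # edges point parent -> child
F = {
    "B": lambda w: w["A"] ** 2,
    "D": lambda w: w["B"] + w["C"],
    "E": lambda w: w["C"] * 7,
}
exo = [u for u in nodes if u not in parents]
print(exo)                                               # ['A', 'C']

def satisfies(w):
    """Does the complete assignment w satisfy every structural equation?"""
    return all(w[u] == F[u](w) for u in F)

print(satisfies({"A": 2, "B": 2, "C": 2, "D": 2, "E": 2}))   # False
print(satisfies({"A": 3, "B": 9, "C": -math.pi,
                 "D": 9 - math.pi, "E": -7 * math.pi}))      # True
```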
Some design choices of SFM inevitably restrict the kinds of functional dependencies that we can talk about:
* For simplicity, we only consider finite nodes because no important application requires an infinite SFM.
* Not every set of functional determinations can be covered (entailed) by an SFM, even if we allow cycles.
Consider = {X, Y, Z} with real-valued domains, the team R_1 = { | (X)^2 = (Y) = (Z)^2} has functional determinations {X}{Y} and {Z}{Y}. There's no SFM with R_=R_1.
Generally, SFM cannot represent one node being functionally determined by multiple "separate" functions/mechanisms, each individually sufficient for its value. This differs from symmetric overdetermination (Section <ref>), which is just multi-input Boolean OR.
* The intersection of SFMs, however, can cover any set of functional determinations.
We say satisfies the SFM-intersection over (_1, _2, …, _n) if ∈⋂_i=1^n R__i ( satisfies every individual _i).
For any set of functional determinations FDet over finite , there exists a finite SFM-intersection that covers it.
Since is finite, FDet is finite.
For every _i _i in FDet, we construct _i = (, _i, , _i) with edges _i = _i ×_i and structural functions _i[y]: _|_i↦ f_i(_|_i)(y) for y ∈_i. The SFM-intersection over all _i entails FDet.
An SFM-intersection-proper is an SFM-intersection that cannot be entailed by an SFM.
Besides (X)^2=(Y)=(Z)^2, SFM-intersection-proper can express autonomous differential equations like d/dt x(t) = f(x(t)) while SFM cannot. The differential operator d/dt is also a function, so we derive 2 functional determinations: A B and A B. Here {A: x(t), B: x'(t)} is permitted iff x'(t)=f(x(t)).
* Why do people dislike SFM-intersection?
It's nearly impossible to find an uncontrived, everyday causal system that's only describable by SFM-intersection-proper. <cit.> even explicitly formulates the Principle of Causal Exclusion against "more than one sufficient cause" in this spirit.
This intuitive dislike is unjustified, but when taken as a primitive desideratum, it entails people's preference of some SFMs over others for modeling reality.
We suggest 2 possible reasons for disliking SFM-intersection-proper:
* Intersection of multiple SFMs creates too much mental computational burden and people prefer simpler models.
In many cases (Section <ref>, <ref>), people dislike the very form of SFM-intersection, even though the underlying R=R_ can be modeled by some SFM .
* SFM-intersection-proper suffers from the possibly-unsatisfiable-laws objection (PULO), which applies to any set of functional dependencies FDep={_i _i}_i=1^n such that some {f_i}_i=1^n makes FDet={_i _i}_i=1^n unsatisfiable.
No world satisfies FDet, but our actual world exists, so we must reject FDet. PULO takes one unjustified step further, suggesting that FDep should also be rejected, even if some other {g_i}_i=1^n makes {_i _i}_i=1^n satisfiable, because FDep "opens the gate" to unsatisfiable laws. From another perspective, PULO expresses a desire for guaranteed satisfiability under any function set.
For example, FDep = {{X}{Y}, {Z}{Y}} suffers from PULO because FDet = {{X}{Y}, {Z}{Y}} is unsatisfiable over real-valued domains.
* Why do we make SFM acyclic?
PULO strikes again: When there are self-loops or cycles in the graph, there exist function sets that make the SFM unsatisfiable, such as A=A+1 and {A=B+1; B=A+1}.
* Besides simplicity and intuitive appeals, finite acyclic SFM has other nice properties (Section <ref>):
* is satisfiable for any .
* _exo functionally determines _endo via _endo⊆ = (, _exo).
Are they worth the price of rejecting many (possibly satisfiable) sets of functional dependencies? We're unsure.
* Different SFMs _1 _2 over the same (, ) can be "semantically equivalent" R__1=R__2, which entails "_|=f(_|) in _1 iff _|=f(_|) in _2", including (_1, _exo)=(_2, _exo) for all _exo.
We'll only consider functional determinations that can be modeled by finite acyclic SFMs, where an endo-node is functionally determined by its parents.
§.§ Composition and Decomposition
Since _exo functionally determines _endo via _endo⊆(, _exo) (Section <ref>), we produce all causal utterances as "_exo causes _endo."
This syntax is simple, but an ostensible flaw is that only exo-assignments can be causes. In A → B → C, we cannot say "{B: b} causes {C: c}" because B is an endo-node. This problem is solved by considering the sub-SFM B → C, where B becomes an exo-node. Sub-SFM generalizes <cit.>'s surgical intervention, which cuts off all incoming edges to the nodes under intervention.
_sub = (_sub, _sub, _sub, _sub) is a sub-SFM of = (, , , ) when:
* (_sub, _sub) is a subgraph of (, ), i.e. _sub⊆, _sub⊆, and (u, v) ∈_sub⇒ (u ∈_sub) (v ∈_sub).
* ∀ u ∈_sub|endo: _sub(u)=(u).
* ∀ u ∈_sub: _sub[u] = [u]
* ∀ u ∈_sub|endo: _sub[u] = [u]
An exo-node in can be nonexistent or exogenous in _sub; an endo-node in can be nonexistent, exogenous, or endogenous (with the same parents and structural function) in _sub, so the mechanisms-of-nature are preserved.
We can compose a set of smaller SFMs {_1, _2, …, _m} into a bigger SFM without altering any structural function, if the following prerequisites are met for any pair of (_i, _j):
* ∀ u ∈_i ∩_j: _i[u]=_j[u]
* ∀ u ∈_i|endo∩_j|endo: _i(u) = _j(u)
* ∀ u ∈_i|endo∩_j|endo: _i(u) = _j(u)
These prerequisites ensure that the composition = (⋃_i=1^m _i, ⋃_i=1^m _i, ⋃_i=1^m _i, ⋃_i=1^m _i) is well-defined. For ⋃_i=1^m _i and ⋃_i=1^m _i,
* _i maps nodes to domains.
* _i maps nodes to structural functions.
* Functions (including and ) are binary relations.
* The union of sets/relations/functions is well defined.
* The prerequisites ensure that each node u has exactly one unique [u] and at most one unique [u] across all i, so ⋃_i=1^m _i and ⋃_i=1^m _i are right-unique and thus functions.
The decomposition of SFM is a set of sub-SFMs {_1, _2, …, _m} that can compose into . While composition of sub-SFMs (when allowed) is unique, there can be multiple different decompositions of an SFM, the most trivial being "keeping the original SFM itself" and the most fragmented being "one sub-SFM for each endo-node and its parents."
Composition shows how small, local, and simple sub-mechanisms can be pieced together into one big, global, and complex system, while decomposition breaks down a large system into small sub-mechanisms. Therefore, we can deductively reason about a big, unrepeatable event using its components and their interconnections.
With composition-decomposition, we can say "_exo causes _endo" relative to some sub-SFM.
§.§ Contrastive Causation
Currently, SFM can already perfectly express a causal system by correctly answering all "what's _endo if _exo" questions. But in causal utterances, people only say "the actual causes" and omit background conditions (Section <ref>).
The selection of actual causes takes 2 steps: contrast and omission. We'll discuss contrast in this section.
<cit.> believes causation is contrastive. Besides the 2-argument surface form (cause, effect), the 4-argument underlying form includes contrast on both sides:
* Surface form: Pam's throwing the rock caused the window to shatter.
* Contrastive form 1: Throwing the rock (rather than the pebble) caused the window to shatter (rather than crack).
* Contrastive form 2: Throwing the rock (rather than not throwing it) caused the window to shatter (rather than remain intact).
We specify 2 assignments _a, _c for contrastive causal utterance "_a|exo (rather than _c|exo) causes _a|endo (rather than _c|endo)":
* Actual assignment _a corresponds to the actual world (i.e. what actually happens).
* Contrastive assignment _c is selected using one of two heuristics:
* _c is a default/expected/normal/typical world; _a is an anomalous/unexpected deviation from the default.
Normality inevitably comes with value judgments, but contrast reduces "finding the actual causes" to "finding a default world," which is a nontrivial simplification.
* With _a available first, we tweak _a|exo into _c|exo by changing the values of a few exo-nodes of interest. We then obtain _c = (, _c|exo)=(, _a, _c|exo) through forward inference (Section <ref>).
This is common when too many nodes in _a have non-default values, or when there's no appropriate default world.
Our contrastive causation is slightly simpler than <cit.>'s and <cit.>'s, because we only need to specify one contrastive world _c (rather than many).
Contrast is common in our causal intuition:
* People often characterize causality as "changing the cause will also change the effect" or "making a difference." Ignoring the manipulation aspect of an agent changing an object, change is inherently contrastive - there's an old state that changes to a new state.
* Some philosophers try to define "event X causes event Y" as "X raises the probability of Y ([Y|X] > [Y|¬ X])." This definition fails to address causal asymmetry and spurious correlations <cit.>, so it's never popular among statisticians. However, the very idea of "raising" contains a contrast between a world with X and a world with ¬ X.
* The contrast of treatment effects is formalized in statistical causal inference. Using the potential outcome notations in <cit.>,
* causal risk difference: [Y^a=1 = 1] - [Y^a=0 = 1]
* causal risk ratio: [Y^a=1 = 1]/[Y^a=0 = 1]
* causal odds ratio: [Y^a=1 = 1] / [Y^a=1 = 0]/[Y^a=0 = 1] / [Y^a=0 = 0]
These measurements all involve a contrast between random variables Y^a=0 (effect under treatment 0) and Y^a=1 (effect under treatment 1).
* To understand a function y=f(x), we often record an initial input value x_0 and its corresponding output value y_0=f(x_0); we then change x_0 to x_1 and see how the output value y changes in response. For example, derivatives in calculus help quantify how "sensitive" the output is with respect to the input.
With actual assignment _a and contrastive assignment _c, we say "_a|exo (rather than _c|exo) causes _a|endo (rather than _c|endo)."
§.§ Delta Compression
To characterize omission in causal utterances, we consider = {u ∈ | _a(u) _c(u)}: the nodes that have different values in _a and _c. || is the Hamming distance between _a and _c. With _exo= ∩_exo and _endo= ∩_endo, the final causal utterance is "_a|_exo causes _a|_endo."
If something doesn't change, we don't mention it. We only mention the new values of changed nodes. This is an example of delta compression <cit.>:
Encoder wants to transmit a target file to Decoder. Encoder and Decoder can both access a reference file. The target file is only slightly different from the reference file, so their delta (change/difference) is much smaller than the target file itself. To reduce the amount of transferred data, Encoder computes the delta (using target and reference files) and sends it to Decoder; Decoder reconstructs the target file by adding the delta to the reference file.
Delta compression is widely used in version control, where we want to store many successive versions of the same file, but any 2 consecutive versions differ only slightly.
Consider nodes {A, B, C, D} with integer domains and assignments _0, _1:
* _0 = {A:1,B:2,C:3,D:4}
* _1 = {A:1,B:7,C:3,D:5}
* = {B, D}
* _0| = {B:2,D:4}
* _1| = {B:7,D:5}
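The sketch below implements this delta idea for the example above: the encoder keeps only the changed nodes, and the decoder reconstructs the target assignment from the reference plus the delta. The function names are illustrative.

```python
# A sketch of delta compression over assignments: keep only the changed nodes.
def delta(reference, target):
    """Encoder: the partial assignment of target restricted to changed nodes."""
    return {u: target[u] for u in target if target[u] != reference[u]}

def reconstruct(reference, d):
    """Decoder: apply the delta to the reference to recover the target."""
    return {**reference, **d}

w0 = {"A": 1, "B": 2, "C": 3, "D": 4}
w1 = {"A": 1, "B": 7, "C": 3, "D": 5}
d = delta(w0, w1)
print(d)                          # {'B': 7, 'D': 5}
print(reconstruct(w0, d) == w1)   # True
```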
People may prefer delta compression because it shortens causal utterances without losing information or introducing ambiguities. This saving of "mental bandwidth" is especially prominent when:
* We want to represent many _1 relative to one _0.
* Each _1 differs only slightly from _0, i.e. || is small relative to ||.
In default-actual contrasts, the default _c is kept constant for reference; in actual-tweaked contrasts, _a is held for reference.
We say _a|_exo causes _a|_endo, where = {u ∈ | _a(u) _c(u)} is the set of changed nodes.
§ INFERENCE
§.§ Constraint Satisfaction
During inference, we assume the SFM is true. An inference algorithm takes in assignment _| over known nodes ⊆ and a set of target nodes , whose values we're interested in inferring. It then checks whether _| is permitted and if so, returns one or more _| that's compatible with _|.
If all domains are finite, we can formulate SFM inference as a constraint satisfaction problem (CSP) <cit.> and use off-the-shelf CSP solvers for inference:
* The domain of node u ∈ is [u].
* For each u ∈_endo, its structural equation gives a (|(u)|+1)-ary constraint (u) = [u](_|(u)) over scope (u) ∪{u}.
* Each known value w_u=_|(u) for u ∈ is a unary constraint (u)=w_u over scope {u}.
CSP does have a few drawbacks:
* It's NP-complete in general.
* It's unnecessary for most thought experiments, where the SFMs are small and solvable by hand.
* It offers no guarantee for the existence or uniqueness of _|. For example, if y=f(x) is non-injective, different x can be compatible with the same y.
Thanks to right-uniqueness, inferring effects from causes is much easier.
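As an illustration of the CSP encoding (not a recommendation of any particular solver), the brute-force sketch below enumerates all complete assignments over finite domains, keeps those satisfying the structural-equation and known-value constraints, and projects onto the target nodes. Node names are illustrative.

```python
from itertools import product

# A brute-force sketch of SFM inference as constraint satisfaction over finite domains.
# Real CSP solvers prune far more aggressively; this only illustrates the encoding.
def csp_infer(nodes, domains, F, known, targets):
    solutions = []
    for values in product(*(domains[u] for u in nodes)):
        w = dict(zip(nodes, values))
        if any(w[u] != v for u, v in known.items()):
            continue                      # unary constraints from known node values
        if any(w[u] != F[u](w) for u in F):
            continue                      # structural-equation constraints
        solutions.append({u: w[u] for u in targets})
    return solutions

# Light-switch-and-TV example: Light = f1(Switch1), TV = f2(Switch2).
nodes = ["Switch1", "Switch2", "Light", "TV"]
domains = {u: [0, 1] for u in nodes}
F = {"Light": lambda w: w["Switch1"], "TV": lambda w: w["Switch2"]}
print(csp_infer(nodes, domains, F, known={"Light": 1}, targets=["Switch1", "TV"]))
# [{'Switch1': 1, 'TV': 0}, {'Switch1': 1, 'TV': 1}]  -- the TV is unconstrained by the light
```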
§.§ Forward Inference
Forward inference infers effects from causes.
Given SFM , vanilla forward inference (VFI) computes = (, _exo), where ⊇_exo and ∈ R_.
When =(, ) is finite and acyclic (and ∀ u ∈: [u] ∅),
* =(, , , ) is satisfiable for any ;
* _exo functionally determines _endo; itself is a function.
Intuitively, deterministically infers all effects given the root causes and mechanisms-of-nature.
(Forward Inference) In a finite acyclic SFM (with nonempty domains), for any exo-assignment _exo, there exists a unique complete assignment satisfying ⊇_exo and ∈ R_.
* Existence: Because is finite, is finite. Because is acyclic, there exists a topological order L of nodes: an ordered list of all nodes such that [(L[i], L[j]) ∈] ⇒ [i < j]. Using topological sort algorithms like depth-first-search and Kahn's algorithm, we can compute L in Θ(||+||) time; cycle detection is done simultaneously <cit.>.
Given _exo, we compute _1 sequentially from i=1 to i=|| inclusive:
* If L[i] ∈_exo, we assign _1(L[i]) ←_exo(L[i]).
* If L[i] ∈_endo, we assign _1(L[i]) ←[L[i]](_1|(L[i])).
* Any parent L[j] ∈(L[i]) must appear earlier (j < i) than its child L[i] because L is a topological order. _1(L[j]) must have already been assigned, so _1|(L[i]) is well-defined.
Because _1(L[i]) isn't modified after iteration i:
* If L[i] ∈_exo, _1(L[i])=_exo(L[i]) is always satisfied.
* If L[i] ∈_endo, _1(L[i]) = [L[i]](_1|(L[i])) is always satisfied.
Therefore, _1 ⊇_exo and _1 satisfies .
This constructive proof also specifies the algorithm =(, _exo), assuming every structural function [u] is computable.
* Uniqueness: Proving by contradiction, suppose instead that there's another _2 _1 satisfying _2 ⊇_exo and _2 ∈ R_. With topological order L, there exists a smallest integer i such that _1(L[i]) _2(L[i]).
Because L is a topological order, every parent L[j] ∈(L[i]) appears earlier (j < i). Since L[i] is the earliest node with different values, ∀ L[j] ∈(L[i]): _1(L[j])=_2(L[j]) and _1|(L[i]) = _2|(L[j]).
Because functions are right-unique, [L[i]](_1|(L[i])) = [L[i]](_2|(L[i])). Because (L[i]) is only modified at iteration i, _1(L[i])=_2(L[i]), which contradicts _1(L[i]) _2(L[i]). Therefore, _1 = _2; the induced complete assignment from an exo-assignment is unique.
Existence entails left-totality; uniqueness entails right-uniqueness, so (, _exo) itself is a function of _exo.
Since _exo functionally determines and _endo⊆, Armstrong's Axioms entail "_exo functionally determines _endo."
During forward inference, is also a computational graph, where edges indicate the order of computation. We start with exo-nodes and the computation "flows down" to endo-nodes, computing their values based on the previously computed values of their parents. Topological sort and graph traversal both take Θ(||+||) time under adjacency-list representation of graphs. For each u ∈_endo, [u] is computed exactly once.
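A minimal implementation of VFI along these lines is sketched below, using Kahn's algorithm for the topological order; the node names and the dictionary-based SFM encoding are illustrative, not prescribed by the paper.

```python
from collections import deque

# A sketch of vanilla forward inference (VFI): Kahn's topological sort, then evaluate
# each endogenous node's structural function exactly once, parents before children.
def vfi(nodes, parents, F, exo_assignment):
    children = {u: [] for u in nodes}
    indeg = {u: len(parents.get(u, [])) for u in nodes}
    for child, ps in parents.items():
        for p in ps:
            children[p].append(child)
    queue = deque(u for u in nodes if indeg[u] == 0)   # exogenous nodes first
    w = dict(exo_assignment)
    while queue:
        u = queue.popleft()
        if u in F:                                     # endogenous: compute from parents
            w[u] = F[u]({p: w[p] for p in parents[u]})
        for c in children[u]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return w

# Causal chain Assassin -> Bullet -> Death with binary domains.
parents = {"Bullet": ["Assassin"], "Death": ["Bullet"]}
F = {"Bullet": lambda pa: pa["Assassin"], "Death": lambda pa: pa["Bullet"]}
print(vfi(["Assassin", "Bullet", "Death"], parents, F, {"Assassin": 1}))
# {'Assassin': 1, 'Bullet': 1, 'Death': 1}
```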
In a finite acyclic SFM with nonempty domains, any partial assignment _| over any subset of exo-nodes ⊆_exo is permitted.
For every u ∈_exo∖, we assign an arbitrary _exo(u) ∈[u] since [u] ∅; for every u ∈, we assign _exo(u) ←_|(u), so _exo⊇_|. By Theorem <ref>, =(, _exo) satisfies _exo⊆∈ R_, so _|⊆_exo⊆∈ R_ and _| is permitted.
A finite acyclic SFM with nonempty domains is always satisfiable, regardless of its structural functions .
Because is finite (no infinite regress) and acyclic, there exists at least one root node u (Appendix <ref>). Because [u] ∅, we select an arbitrary value _|{u}(u) ∈[u]. Corollary <ref> says _|{u} is permitted, so ∃∈ R_: ⊇_|{u} and is satisfiable.
§.§ Functional Invariance
We use functional invariance to describe how a multi-input function's output doesn't change when some inputs have changed: TVs aren't affected by light switches; the output of f(x, y) = 2x is invariant to y given x. Notice that ceteris paribus (holding other input values constant) is well-defined only if there's a clear input-output distinction given by an underlying function.
In SFM, changing an exo-node's value cannot influence its non-descendants. This is deduced from alone. With non-injective functions, new parent values may map to the old child value, resulting in even fewer changed nodes. Equivalently, for ⊆_exo, _|_exo∖) functionally determines 's non-descendants.
(Invariance in SFM) In a finite acyclic SFM with _1, _2 ∈ R_ and changed nodes = {u ∈ | _0(u) _1(u)}:
If _0(u) _1(u), then u ∈⋃_v ∈_exoDe(v). (u's value differs in _0 and _1 only if it's the descendant of some node in _exo.)
Let u ∈ be any node such that _0(u) _1(u). If u ∈_exo, then u ∈_exo and we're done. If u ∈_endo, then because functions are right-unique, at least one parent p ∈(u) must have a different value (_0(p) _1(p)). We consider p as the new u and repeat this process recursively. Because the SFM graph is finite (no infinite regress) and acyclic, this path u ← p_1 ← p_2 ←… must terminate at some exo-node s ∈_exo (Appendix <ref>) such that _0(s) _1(s), which means s ∈_exo. The path shows u is a descendant of s.
§.§ Contrastive Forward Inference
Suppose we already have and _0 ∈ R_. To compute (, _1|exo), we still need to compute every [u]. Given all the unchanged nodes from functional invariance, can the graph structure help us reduce [u] evaluations?
Yes. With _exo = {u ∈_exo|_0|exo(u) _1|exo(u)}, the contrastive forward inference (CFI) algorithm _1 = (, _0, _1|_exo) evaluates [u] only when at least one parent of u has a changed value, so we don't recompute non-descendants of _exo. evaluates usually fewer (and always no more) structural functions than , especially when is small relative to , when there are many _i|exo queries relative to one reference _0, or when many structural functions are non-injective.
If we draw an SFM with all arrows pointing downwards, we visually cache a topological order of nodes. We can easily identify the descendants of changed nodes and only evaluate their structural functions, without recomputing the complete assignment.
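The sketch below implements this idea: starting from a reference world that already satisfies the SFM, it walks the nodes in topological order and re-evaluates a structural function only when a parent value has changed, so non-descendants of the changed exo-nodes are never recomputed. Names and encoding are again illustrative.

```python
# A sketch of contrastive forward inference (CFI): given a reference world w0 that already
# satisfies the SFM, re-evaluate a structural function only if some parent value changed.
def topo_order(nodes, parents):
    order, seen = [], set()
    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for p in parents.get(u, []):
            visit(p)                       # parents are appended before their children
        order.append(u)
    for u in nodes:
        visit(u)
    return order

def cfi(nodes, parents, F, w0, new_exo):
    w1 = {**w0, **new_exo}
    changed = {u for u in new_exo if w0[u] != new_exo[u]}
    evaluations = 0
    for u in topo_order(nodes, parents):
        if u in F and any(p in changed for p in parents[u]):
            w1[u] = F[u]({p: w1[p] for p in parents[u]})
            evaluations += 1
            if w1[u] != w0[u]:
                changed.add(u)
    return w1, changed, evaluations

# Light-switch-and-TV: flipping Switch1 never touches the TV's structural function.
nodes = ["Switch1", "Switch2", "Light", "TV"]
parents = {"Light": ["Switch1"], "TV": ["Switch2"]}
F = {"Light": lambda pa: pa["Switch1"], "TV": lambda pa: pa["Switch2"]}
w0 = {"Switch1": 0, "Switch2": 1, "Light": 0, "TV": 1}
w1, changed, n_evals = cfi(nodes, parents, F, w0, {"Switch1": 1})
print(w1, changed, n_evals)   # Light flips to 1, TV untouched; only 1 function evaluated
```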
Unlike functions, contrast isn't a fundamental and irreducible part of causality. It's just a popular heuristic with pragmatic benefits:
* Delta compression reduces the length of causal utterances.
* recomputes (usually) fewer structural functions than during forward inference.
§.§ Partial Forward Inference
By modifying depth-first search, we can also design partial forward inference algorithms, where we're only interested in a subset of endo-nodes ⊆_endo, so we don't have to compute values for all endo-nodes. Combined with , it further reduces the number of function evaluations, especially when is much smaller than _endo.
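A simple way to realize this is memoized recursion over parents, as in the sketch below: only the ancestors of the requested target nodes are ever evaluated. This is one possible realization, not the only one.

```python
# A sketch of partial forward inference: memoized recursion over parents evaluates
# structural functions only for the target nodes and their ancestors.
def pfi(parents, F, exo_assignment, targets):
    memo = dict(exo_assignment)
    def value(u):
        if u not in memo:
            memo[u] = F[u]({p: value(p) for p in parents[u]})
        return memo[u]
    return {t: value(t) for t in targets}

# Using the earlier example graph A -> B -> D <- C -> E: asking only for E
# never evaluates the structural functions of B or D.
parents = {"B": ["A"], "D": ["B", "C"], "E": ["C"]}
F = {"B": lambda pa: pa["A"] ** 2,
     "D": lambda pa: pa["B"] + pa["C"],
     "E": lambda pa: pa["C"] * 7}
print(pfi(parents, F, {"A": 3, "C": 2}, targets=["E"]))   # {'E': 14}
```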
§.§ Inference in Practice
* VFI in Boolean circuits: A combinational logic circuit <cit.> is a finite acyclic SFM with {0, 1} domains and Boolean functions.
Each wire's value is 0 (no electrical current) or 1 (has current). A logic gate receives input wires and returns an output wire, like a structural function.
The output wire of one gate can be the input wire of another gate.
To infer the values of all wires given all input wires, we use VFI and produce causal utterances like "setting this input wire to 1 causes the output wire to be 0."
* CFI in GNU Make: GNU Make is a popular open-source tool that automatically determines which pieces of a large program need to be recompiled <cit.>. Especially in C and C++, the source code needs to be compiled or linked into a target file, before the target file can be executed by the computer. In a Makefile, there are many rules. Each rule has a target file, a list of source files, and a recipe for compilation. The target file functionally depends on the source files. The target file of one rule can be a source file in another rule. This forms a finite SFM where files are nodes and rules specify edges and structural functions.
VFI compiles all files, but software development is a dynamic process:
We don't compile the files just once. We modify some files, see the results, and repeat.
Because compilation is time-consuming, it's costly to recompile all files after a modification.
Instead, we only need to recompile the descendants of modified files. Just like CFI, make only recompiles a target file if at least one of its source files (parents) has been modified since the previous compilation, saving lots of time. We can produce causal utterances like "modifying this file causes the final compiled program to crash."
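As a concrete instance of the Boolean-circuit reading of SFM from the first item above, the sketch below encodes a half-adder (Sum = A XOR B, Carry = A AND B) and runs forward inference over it; the wire names are ours.

```python
# A sketch: a half-adder as a finite acyclic SFM with Boolean structural functions
# (Sum = A XOR B, Carry = A AND B); the wire names are ours.
parents = {"Sum": ["A", "B"], "Carry": ["A", "B"]}
F = {"Sum": lambda pa: pa["A"] ^ pa["B"],
     "Carry": lambda pa: pa["A"] & pa["B"]}

def infer(exo):
    w = dict(exo)
    for u in ["Sum", "Carry"]:                     # already a topological order
        w[u] = F[u]({p: w[p] for p in parents[u]})
    return w

print(infer({"A": 1, "B": 1}))   # {'A': 1, 'B': 1, 'Sum': 0, 'Carry': 1}
# e.g. "Setting input wire B to 1 (rather than 0) causes Sum:0 and Carry:1
#       (rather than Sum:1 and Carry:0)."
```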
§ LEARNING
Learning causal models from statistical data is covered in depth by <cit.>, so we only discuss some philosophical cases where people prefer some SFM over others, given fully-specified possible worlds and laws-of-nature.
§.§ Thermometer and Temperature
We think high room temperature causes high thermometer reading, but not the other way round. Why?
It's common to introduce new nodes and see whether the small model remains true as a sub-SFM of a bigger model. Consider a new node "immersing thermometer in cold water" and all possible worlds are listed below:
World HighReading HighTemperature ColdWater
w_1 0 0 0
w_2 1 1 0
w_3 0 0 1
w_4 0 1 1
Without granting "intervention" any special status, we see that neither {HighReading} nor {HighReading, ColdWater} functionally determines {HighTemperature} in general (e.g., w_3 and w_4 agree on both HighReading and ColdWater but differ on HighTemperature), so the edge should point from HighTemperature to HighReading. People prefer simple SFMs that compose well with other SFMs that model the same world.
§.§ Light, Object, and Shadow
In a symmetric equation involving Light, Object, Shadow, any 2 nodes functionally determine the 1 remaining node. Why do we think the shadow is the effect? This asymmetric preference is entailed by people's general dislike of SFM-intersection:
* With multiple objects, {Light, Shadow} no longer functionally determines Object in general. When we add another object whose shadow rests entirely inside another object's shadow, the system's light and shadow remain the same, thus violating right-uniqueness.
* "{Light, Shadow} determines Object" cannot SFM-compose with "Factory determines Object" (objects determined by their production processes). Explicitly encoding both functional dependencies requires SFM-intersection.
* With one light source and multiple objects, "{Object(i), Shadow(i)} determines Light" holds for every object i, resulting in SFM-intersection.
* "{Object, Shadow} determines Light" cannot SFM-compose with "Hand determines Light" (flashlight direction determined by hand movement) or "TimeOfDay determines Light" (the Sun's position determined by time of the day), unless we use SFM-intersection.
* "{Object, Light} determines Shadow" can seamlessly compose with upstream and downstream SFMs without SFM-intersection.
§ BENCHMARK
Taking a data-centric approach, we compile a collection of thought experiments about causality and apply SFM to all of them. A good definition of causality should have no trouble fitting these causal scenarios. Unless otherwise mentioned, all domains are binary {0, 1}.
§.§ Sensitive to Default
* The assassin shoots the victim, causing the victim's death.
* →
* []() = ()
* Default _c = Assassin:0, Death:0
* Actual _a = Assassin:1, Death:1
* _exo = Assassin, _endo=Death
* _a|_exo=Assassin:1 causes _a|_endo=Death:1.
* At the last moment, the assassin changes his mind and doesn't shoot, causing the victim's survival.
* Same as above.
* Default _c = Assassin:1, Death:1
* Actual _a = Assassin:0, Death:0
* _exo = Assassin, _endo=Death
* _a|_exo=Assassin:0 causes _a|_endo=Death:0.
§.§ Causal Chain
The assassin shoots a bullet, which kills the victim.
* The assassin causes both the bullet and the death.
* →→
* []() = ()
[]() = ()
* Default _c = Assassin:0, Bullet:0, Death:0
* Actual _a = Assassin:1, Bullet:1, Death:1
* _exo = Assassin, _endo=Bullet, Death
* _a|_exo=Assassin:1 causes _a|_endo=Bullet:1, Death:1.
* (Sub-SFM) The bullet causes the death.
* →
* []() = ()
* Default _c = Bullet:0, Death:0
* Actual _a = Bullet:1, Death:1
* _exo = Bullet, _endo=Death
* _a|_exo=Bullet:1 causes _a|_endo=Death:1.
§.§ Connected Double Prevention
A bodyguard shoots the assassin before the assassin could shoot the victim. The victim survives.
* The bodyguard causes the assassin's death and the victim's survival.
* →→
* []() = ()
[]() = ()
* Actual _a = Bodyguard:1, Assassin:0, Survive:1
* _exo = Bodyguard
* Tweak _c|_exo=Bodyguard:0
* Tweaked _c = (, _a, _c|_exo) = Bodyguard:0, Assassin:1, Survive:0
* _endo = Assassin, Survive
* _a|_exo=Bodyguard:1 causes _a|_endo=Assassin:0, Survive:1.
§.§ Disconnected Double Prevention
The assassin puts poison in the victim's cup. The bodyguard puts antidote in the cup. The victim survives.
* Antidote causes the victim's survival.
*
(Poison) at (0,1) ;
(Antidote) at (2,1) ;
(Survive) at (1,0) ;
[->, style=thick] (Poison) edge (Survive);
[->, style=thick] (Antidote) edge (Survive);
* []() = () ()
* Actual _a = Poison:1, Antidote:1, Survive:1
* _exo=Antidote
* Tweak _c|_exo=Antidote:0
* Tweaked _c = (, _a, _c|_exo) = Poison:1, Antidote:0, Survive:0
* _endo = Survive
* _a|_exo=Antidote:1 causes _a|_endo=Survive:1.
§.§ No Appropriate Default
Two chess players use a coin flip to decide who moves first. If the coin lands on head, the Player 1 moves first; otherwise, Player 2 moves first. It's difficult to identify a "default" world <cit.>.
* Coin landing on head causes Player 1 to move first.
* →
* []() = ()
* Actual _a = Head:1, Player1:1
* _exo=Head
* Tweak _c|_exo=Head:0
* Tweaked _c = (, _a, _c|_exo) = Head:0, Player1:0
* _endo = Player1
* _a|_exo=Head:1 causes _a|_endo=Player1:1.
* Coin landing on tail causes Player 2 to move first.
* Same as above.
* Actual _a = Head:0, Player1:0
* _exo=Head
* Tweak _c|_exo=Head:1
* Tweaked _c = (, _a, _c|_exo) = Head:1, Player1:1
* _endo = Player1
* _a|_exo=Head:0 causes _a|_endo=Player1:0.
§.§ Gardener and Queen
The flower lives iff at least one person waters it. The gardener is responsible for watering the flower, but the queen isn't <cit.>.
* The gardener's not watering the flower causes the flower's death; the queen's not watering it doesn't cause the flower's death.
*
(Gardener) at (0,1) ;
(Queen) at (2,1) ;
(Flower) at (1,0) ;
[->, style=thick] (Gardener) edge (Flower);
[->, style=thick] (Queen) edge (Flower);
* []() = () ()
* Default _c = Gardener:1, Queen:0, Flower:1
* Actual _a = Gardener:0, Queen:0, Flower:0
* _exo = Gardener, _endo=Flower
* _a|_exo=Gardener:0 causes _a|_endo=Flower:0.
§.§ OR Firing Squad (Symmetric Overdetermination)
Two assassins simultaneously shoot the victim. It takes only 1 bullet to kill the victim.
* Both assassins are responsible because "not killing" is default.
*
(A1) at (0,1) ;
(A2) at (2,1) ;
(Death) at (1,0) ;
[->, style=thick] (A1) edge (Death);
[->, style=thick] (A2) edge (Death);
* []() = () ()
* Default _c = Assassin1:0, Assassin2:0, Death:0
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo = Assassin1, Assassin2, _endo=Death
* _a|_exo=Assassin1:1, Assassin2:1 causes _a|_endo=Death:1.
* Assassin 1 causes nothing because had he not shot, Assassin 2 would've still killed the victim.
* Same as above.
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo) = Assassin1:0, Assassin2:1, Death:1
* _endo = ∅
* _a|_exo=Assassin1:1 causes _a|_endo=∅.
§.§ AND Firing Squad
2 assassins simultaneously shoot the victim. It takes at least 2 bullets to kill the victim.
* Both assassins are responsible because "not killing" is default.
*
(A1) at (0,1) ;
(A2) at (2,1) ;
(Death) at (1,0) ;
[->, style=thick] (A1) edge (Death);
[->, style=thick] (A2) edge (Death);
* []() = () ()
* Default _c = Assassin1:0, Assassin2:0, Death:0
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo = Assassin1, Assassin2, _endo=Death
* _a|_exo=Assassin1:1, Assassin2:1 causes _a|_endo=Death:1.
* Assassin 1 is individually responsible because had he not shot, the victim would've survived.
* Same as above.
* Actual _a = Assassin1:1, Assassin2:1, Death:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo) = Assassin1:0, Assassin2:1, Death:0
* _endo = Death
* _a|_exo=Assassin1:1 causes _a|_endo=Death:1.
§.§ Connected Preemption
Assassin 1 shoots the victim first. If the victim doesn't die, Assassin 2 will shoot. Had Assassin 1 not shot, the victim still would've died.
* Assassin 1 causes the victim's death and Assassin 2's not-shooting.
*
(A1) at (0,2) ;
(A2) at (4,2) ;
(D1) at (0,0) ;
(D2) at (4,0) ;
[->, style=thick] (A1) edge (D1);
[->, style=thick] (D1) edge (A2);
[->, style=thick] (D1) edge (D2);
[->, style=thick] (A2) edge (D2);
* []() = ()
[]() = ()
[]() = () ()
* Actual _a = Assassin1:1, EarlyDeath:1, Assassin2:0, LateDeath:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo)
= Assassin1:0, EarlyDeath:0, Assassin2:1, LateDeath:1
* _endo = EarlyDeath, Assassin2
* _a|_exo=Assassin1:1 causes _a|_endo=EarlyDeath:1, Assassin2:0.
* We cannot say Assassin1:1 causes LateDeath:1 because LateDeath ∉_endo.
§.§ Disconnected Preemption
Assassin 1 shoots the victim first. Several moments later, Assassin 2 shoots unconditionally.
* Assassin 1 causes the victim's death.
*
(A1) at (0,2) ;
(A2) at (4,2) ;
(D1) at (0,0) ;
(D2) at (4,0) ;
[->, style=thick] (A1) edge (D1);
[->, style=thick] (D1) edge (D2);
[->, style=thick] (A2) edge (D2);
* []() = ()
[]() = () ()
* Actual _a = Assassin1:1, Assassin2:1, EarlyDeath:1, LateDeath:1
* _exo=Assassin1
* Tweak _c|_exo=Assassin1:0
* Tweaked _c = (, _a, _c|_exo)
= Assassin1:0, Assassin2:1, EarlyDeath:0, LateDeath:1
* _endo = EarlyDeath
* _a|_exo=Assassin1:1 causes _a|_endo=EarlyDeath:1.
Time difference distinguishes preemption from symmetric overdetermination: taken to an extreme, we wouldn't regard immediate death and death in 100 years as the same event.
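To make the preemption bookkeeping explicit, the sketch below encodes the connected-preemption SFM above and computes the actual and tweaked worlds by forward inference; the set of changed endo-nodes reproduces the causal utterance stated in that scenario. The encoding is illustrative.

```python
# A sketch of the connected-preemption SFM above; the node order used for forward
# inference is already topological, so no sorting machinery is needed here.
parents = {"EarlyDeath": ["Assassin1"],
           "Assassin2": ["EarlyDeath"],
           "LateDeath": ["EarlyDeath", "Assassin2"]}
F = {"EarlyDeath": lambda pa: pa["Assassin1"],
     "Assassin2": lambda pa: 1 - pa["EarlyDeath"],           # shoots only if victim not yet dead
     "LateDeath": lambda pa: pa["EarlyDeath"] | pa["Assassin2"]}

def forward(exo):
    w = dict(exo)
    for u in ["EarlyDeath", "Assassin2", "LateDeath"]:
        w[u] = F[u]({p: w[p] for p in parents[u]})
    return w

actual = forward({"Assassin1": 1})
tweaked = forward({"Assassin1": 0})
delta_endo = [u for u in ["EarlyDeath", "Assassin2", "LateDeath"] if actual[u] != tweaked[u]]
print(actual)       # {'Assassin1': 1, 'EarlyDeath': 1, 'Assassin2': 0, 'LateDeath': 1}
print(tweaked)      # {'Assassin1': 0, 'EarlyDeath': 0, 'Assassin2': 1, 'LateDeath': 1}
print(delta_endo)   # ['EarlyDeath', 'Assassin2']  -> Assassin1:1 causes EarlyDeath:1, Assassin2:0
```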
§.§ Relevant Background Conditions
Ignition requires both striking the match and oxygen present, but we only mention striking the match as the cause of fire.
* Striking the match causes ignition.
*
(Strike) at (0,1) ;
(Oxygen) at (2,1) ;
(Fire) at (1,0) ;
[->, style=thick] (Strike) edge (Fire);
[->, style=thick] (Oxygen) edge (Fire);
* Structural function: Fire = Strike ∧ Oxygen.
* Default _c = Strike:0, Oxygen:1, Fire:0
* Actual _a = Strike:1, Oxygen:1, Fire:1
* _exo = Strike, _endo=Fire
* _a|_exo=Strike:1 causes _a|_endo=Fire:1.
* While repeatedly striking a match in an oxygen-deprived container, there's no ignition. Pumping in oxygen causes the match to ignite.
* Same as above.
* Default _c = Strike:1, Oxygen:0, Fire:0
* Actual _a = Strike:1, Oxygen:1, Fire:1
* _exo = Oxygen, _endo=Fire
* _a|_exo=Oxygen:1 causes _a|_endo=Fire:1.
Similarly, a criminal wouldn't have committed the crime had the universe not existed/had he never been born, but we don't consider those as causes of the crime.
§.§ Irrelevant Background Conditions
The assassin simultaneously shoots the victim and whispers.
* Whispering doesn't cause anything.
* Graph: Shoot → Death; Whisper has no outgoing edges.
* Structural function: Death = Shoot.
* Actual _a = Whisper:1, Shoot:1, Death:1
* _exo=Whisper
* Tweak _c|_exo=Whisper:0
* Tweaked _c = (, _a, _c|_exo) = Whisper:0, Shoot:1, Death:1
* _endo = ∅
* _a|_exo=Whisper:1 causes _a|_endo=∅.
* Shooting causes death.
* Same , _a as above.
* _exo=Shoot
* Tweak _c|_exo=Shoot:0
* Tweaked _c = (, _a, _c|_exo) = Whisper:1, Shoot:0, Death:0
* _endo = Death
* _a|_exo=Shoot:1 causes _a|_endo=Death:1.
Similarly, Socrates drinks hemlock at dusk and dies. Hemlock causes death, but dusk doesn't cause anything <cit.>.
§.§ Boulder and Hiker
A hiker sees a boulder rolling towards him, so he dodges and survives. Had he not dodged, he wouldn't have survived <cit.>. This is an ostensible counterexample to the transitivity of causation (boulder causes dodge, dodge causes survival, but boulder doesn't cause survival). "Transitivity" is better understood as SFM-composition.
* Boulder causes dodge and doesn't cause survival.
* Graph: Boulder → Dodge; Boulder → Survive; Dodge → Survive.
* Structural functions: Dodge = Boulder; Survive = ¬Boulder ∨ Dodge.
* Actual _a = Boulder:1, Dodge:1, Survive:1
* _exo=Boulder
* Tweak _c|_exo=Boulder:0
* Tweaked _c = (, _a, _c|_exo) = Boulder:0, Dodge:0, Survive:1
* _endo = Dodge
* _a|_exo=Boulder:1 causes _a|_endo=Dodge:1, but Survive ∉_endo.
* (Sub-SFM) Dodge causes survival.
* Graph (sub-SFM): Boulder → Survive ← Dodge.
* Structural function: Survive = ¬Boulder ∨ Dodge.
* Actual _a = Boulder:1, Dodge:1, Survive:1
* _exo=Dodge
* Tweak _c|_exo=Dodge:0
* Tweaked _c = (, _a, _c|_exo) = Boulder:1, Dodge:0, Survive:0
* _endo = Survive
* _a|_exo=Dodge:1 causes _a|_endo=Survive:1.
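The full-SFM and sub-SFM verdicts can be checked in a few lines (our own encoding of the two structural functions):

# Boulder and hiker: Dodge = Boulder, Survive = (not Boulder) or Dodge.
def survive(boulder, dodge):
    return 1 if ((not boulder) or dodge) else 0

# Full SFM: tweak Boulder from 1 to 0 and re-infer everything downstream.
actual  = {"Dodge": 1, "Survive": survive(1, 1)}    # boulder rolls, hiker dodges
tweaked = {"Dodge": 0, "Survive": survive(0, 0)}    # no boulder, hence no dodge
print({k: v for k, v in actual.items() if v != tweaked[k]})
# {'Dodge': 1} -> the boulder causes the dodge but not the survival

# Sub-SFM: Dodge is exogenous, Boulder held at its actual value 1.
print(survive(1, 1) != survive(1, 0))
# True -> relative to the sub-SFM, dodging causes survival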
§.§ Bogus Prevention
Taking birth control pills is the cause of a woman not getting pregnant, but not the cause of a man not getting pregnant, although "birth control prevents pregnancy" is always true <cit.>.
* Birth control causes a woman to be unable to get pregnant.
* Graph: IsWoman → CanPregnant ← BirthControl.
* Structural function: CanPregnant = IsWoman ∧ ¬BirthControl.
* Actual _a = IsWoman:1, BirthControl:1, CanPregnant:0
* _exo=BirthControl
* Tweak _c|_exo=BirthControl:0
* Tweaked _c = (, _a, _c|_exo) = IsWoman:1, BirthControl:0, CanPregnant:1
* _endo = CanPregnant
* _a|_exo=BirthControl:1 causes _a|_endo=CanPregnant:0.
* Birth control doesn't cause anything for a man.
* Same as above.
* Actual _a = IsWoman:0, BirthControl:1, CanPregnant:0
* _exo=BirthControl
* Tweak _c|_exo=BirthControl:0
* Tweaked _c = (, _a, _c|_exo) = IsWoman:0, BirthControl:0, CanPregnant:0
* _endo = ∅
* _a|_exo=BirthControl:1 causes _a|_endo=∅.
§.§ Backtracking Counterfactuals
Subjunctive conditionals <cit.> use forward inference, while indicative/backtracking/non-causal conditionals don't.
* (Subjunctive) If Shakespeare didn't write Hamlet, someone else would have.
* Graph: Shakespeare → Writer2; Shakespeare → Hamlet; Writer2 → Hamlet.
* Structural functions: Writer2 = ¬Shakespeare; Hamlet = Shakespeare ∨ Writer2.
* Actual _a=Shakespeare:1, Writer2:0, Hamlet:1.
* Query _exo=Shakespeare:0
* Queried =(, _exo) = Shakespeare:0, Writer2:1, Hamlet:1.
* (Indicative) If Shakespeare didn't write Hamlet, someone else did.
* Graph: Shakespeare → Hamlet ← Writer2.
* Structural function: Hamlet = Shakespeare ∨ Writer2.
* Given , the only _|Writer2 compatible with Shakespeare:0, Hamlet:1 is Writer2:1.
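A minimal sketch of the indicative reading (our own encoding): enumerate the assignments permitted by the law Hamlet = Shakespeare ∨ Writer2 and keep those compatible with the observation.

from itertools import product

# Assignments permitted by the law, then filtered by the observation Shakespeare:0, Hamlet:1.
permitted = [
    {"Shakespeare": s, "Writer2": w, "Hamlet": 1 if (s or w) else 0}
    for s, w in product((0, 1), repeat=2)
]
compatible = [v for v in permitted if v["Shakespeare"] == 0 and v["Hamlet"] == 1]
print(compatible)   # [{'Shakespeare': 0, 'Writer2': 1, 'Hamlet': 1}] -> Writer2 must be 1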
§.§ Impossible Interventions
Unlike ordinary causal edges, some functional dependencies contain all their parents because of how the child is logically/conceptually/metaphysically defined, so it's impossible to add surgical interventions:
* The string "hello" is functionally determined by its first character being "h", second character being "e", …
* The average height of students in the class is functionally determined by the individual height of each student.
* Winning 2 out of 3 rounds is functionally determined by the result of each round.
They're often known as supervenience (Section <ref>).
§ DISCUSSION
§.§ Is SFM Insufficient?
Some may argue that since SFM and functions can have non-causal interpretations, they are insufficient for defining causality. We respond with 3 counterarguments:
* Some examples of insufficiency are results of misinterpretation. For example, student ID functionally determines all attributes (name, age, course registration, etc.) of a student in a database, but changing a student's ID won't cause changes in those attributes. This example doesn't hold because if we allow arbitrary changes to ID, there could be repeated IDs in different rows and ID no longer functionally determines other attributes.
* Incorrect causal models (e.g. "cancer causes smoking") are still causal, unlike non-causal models (e.g. correlations, symmetric equations), which don't use functions at all. Since SFM comes from the conceptual analysis of what "causation" should mean, its definition cannot include all empirical facts about our world.
* People often use causal interpretations to understand purely mathematical functions. When we say "changing the independent variable x causes the dependent variable y to change," we're using CFI.
§.§ A Case Against Actual Causality
Delta compression and CFI are slightly useful heuristics that also fit our intuitions. However, the assumption that there exists a fixed set of "actual causes" is questionable in complex systems.
A circuit has n binary switches _exo={X_1, X_2, …, X_n} and 1 light bulb _endo={Y}, where the n switches functionally determine the light via a Boolean function f: {0, 1}^n →{0, 1}.
Given the state of all switches and the light, which switches are the "actual causes" of the light being on/off?
There are 2^2^n different n-input 1-output Boolean functions f. For each f, there are 2^n different possible worlds. Proponents of actual causality must accept one of the following:
* Provide an algorithm that can identify actual causes in 2^2^n× 2^n = 2^(2^n+n) situations. Case-by-case analyses won't scale.
* Admit that contrast, default, and actual causality belong to an imperfect mental heuristic that would fail in complex systems.
Graphical models don't help because there's only one Boolean function and we shouldn't insert hypothetical intermediate nodes. For ⊆_exo, SFM can answer "is _0| the cause of _0|endo?" by tweaking _0| into _1|, inferring _1 = (, _0, _1|), and contrasting _1(Y) against _0(Y) . But "a fixed set of actual causes" given and _0 remains ill-defined.
People's intuitions may not give consistent answers and even if they do, such answers provide less information about f than the input-output mappings of f itself.
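A few lines of Python make the count explicit (the helper name situation_count is ours):

from itertools import product

# Count the situations an "actual cause" theory must classify for an n-switch circuit:
# every Boolean function f: {0,1}^n -> {0,1} times every setting of the switches.
def situation_count(n):
    inputs = list(product((0, 1), repeat=n))
    n_functions = 2 ** len(inputs)           # 2^(2^n) possible truth tables
    return n_functions * len(inputs)         # times 2^n possible worlds each

for n in (1, 2, 3, 4):
    print(n, situation_count(n))             # 8, 64, 2048, 1048576 = 2^(2^n + n)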
This example generalizes all "difficult causal scenarios" with binary variables and following features:
* Causal: We can manipulate the switches to control the light.
* Deterministic: It has no probabilistic component.
* Fully-specified: Epistemological skepticism like "how do we know these laws-of-nature are true" doesn't apply.
* Clear input-output distinction: There's no ambiguity in the direction of causal arrows.
Therefore, proposing and solving a few cases wouldn't dissolve our objection.
Intuitions are often unreliable for modeling reality. Outside simple, everyday causal utterances, there's no real downside in abandoning actual causality. The SFM itself perfectly describes the causal system and answers all "what if" inference queries like (, _exo). Instead of listing "actual causes," a scientist should try modeling the functional determinations in a system.
Actual causality is almost only used in normative theories (e.g. responsibility, blame, proximate causes, ethics, law <cit.>), which handle disagreements when everyone agrees on (laws-of-nature) and _a (what actually happens). Working with full SFMs instead of actual causality allows us to consider strictly more normative theories.
§ PROBABILISTIC SFM
To incorporate probability theory, we don't need to modify the definition of SFM. We just extend domains [u] to include random variables and modify structural functions [u] accordingly. Probability isn't required for most thought experiments on causality, but we'll provide a rigorous mathematical foundation for probabilistic SFM. Notably, nodes and random variables are not the same. We avoid calling nodes "variables" precisely for this reason.
§.§ Probabilistic Extension
Think of a node u as a name or index. Its value (u) can be a random variable: (u) = X. A random variable X: Ω→ℝ maps an outcome ω (in sample space Ω) to a real number X(ω) ∈ℝ. "X=x" is a shorthand for event {ω∈Ω | X(ω) = x}, so we can compute its probability [X=x]. X=x isn't an actual equation because X is a function and x is a real number. Again, "node u has value X; X is a random variable" and "random variable X takes on value x; x is a real number" are different things.
Most basically, functions of random variables are actually function compositions <cit.>. Consider real-valued function f(x)=2x and random variable X: Ω→ℝ. We want a new random variable Y that always "takes twice the value" of X:
Y(ω) = 2X(ω)
= f(X(ω))
= (f ∘ X)(ω)
Y = f ∘ X
The expression "Y = f(X)" is wrong by a rigorous standard, because random variable X isn't in f's domain of real numbers.
Formally, the probabilistic extension of SFM _old = (, , _old, _old) returns a new SFM _new=(, , _new, _new):
* Specify the sample space Ω.
* Specify a set of nodes 𝒮⊆ for probabilistic extension. Other nodes ∖𝒮 still don't have random variables in their domains.
The set 𝒮 must be downward closed: if u ∈𝒮, then every descendant of u must also be in 𝒮, as if randomness is "contagious" and flows down the computational graph.
For random variables to be well-defined, we also require _old[u] (e.g. real numbers, vectors, graphs, functions) to be measurable for all u ∈𝒮.
* Let RV[u] denote the set of random variables Ω→_old[u]. If u ∈𝒮, _new[u] = _old[u] ∪ RV[u]; otherwise, _new[u] = _old[u].
* Recall that a random variable X has realization X(ω) given outcome ω∈Ω.
For nodes ⊆, we define the realization of assignment _| given outcome ω as:
(_|, ω) = {u : _|(u) if _|(u) ∈_old[u] else (_|(u))(ω)}_u ∈
By realizing every random variable with ω and keeping other values as is, (_|, ω) is an assignment of both _old and _new because ∀ u ∈: (_|, ω)(u) ∈_old[u].
* For _new and endo-node u ∈_endo:
* If ∀ p ∈(u): _|(u)(p) ∈_old[p] (no parent value is a random variable), then _new[u](_|(u)) = _old[u](_|(u)) ∈_old[u].
* Otherwise (at least one parent value is a random variable), _new[u](_|(u)) is a random variable Ω→_old[u] in RV[u]. For outcome ω∈Ω, we compute _new[u](_|(u))(ω) = _old[u]((_|(u), ω)).
Some corollaries about probabilistic extension:
* For every u ∈, _new[u] ⊇_old[u] because both the domain and the codomain are strictly extended, hence the name "probabilistic extension."
* If satisfies ℳ_old, then it also satisfies ℳ_new. For ⊆, if _| is permitted by ℳ_old, then it's also permitted by ℳ_new.
* If satisfies _new, then its realization (, ω) also satisfies _old for every ω∈Ω.
Intuitively, random variables express uncertainty about which realization is actual. Each realization is a possible world in _old. Probability merely adds "weights" to these possible worlds, so causal mechanisms are deterministic and true in every realization. This is unlike Bayesian networks, where the mechanisms are inherently random.
We can now formalize "correlation doesn't imply causation" using SFM: The same "observational distribution" _| (where some nodes have random variables as values; ⊆) might be permitted by different SFMs _1 _2 with R__1 R__2, which cannot be treated as equal.
§.§ Bayesian Networks
With probabilistic extension, SFM generalizes Bayesian networks, which also use directed acyclic graphs. In a Bayesian network <cit.>, each node corresponds to a random variable, each exo-node stores a marginal distribution, and each endo-node stores a conditional distribution given the node's parents.
Bayesian networks require the exogenous random variables to be probabilistically independent, while we don't enforce that requirement (you may enforce it explicitly).
It's difficult for Bayesian networks to represent SFM. To encode functional determination (right-uniqueness), the conditional distributions must be degenerate. When input distributions cannot be assumed (e.g. light switch doesn't affect TV) and we only have specific input values, the marginal distributions are degenerate too. This sacrifices nearly all expressiveness of a Bayesian network.
Any Bayesian network can be expressed by an SFM. We'll use probability integral transform (PIT) to represent conditional probability distributions with deterministic functions:
* For simplicity, consider real-valued (Ω→ℝ) random variables Y, X_1, X_2, …, X_n and conditional distribution [Y|X_1=x_1, X_2=x_2, …, X_n=x_n].
* Create continuous uniform random variable U ∼Unif(0, 1) in range [0, 1], independent from all X_i.
* Let F_Y|X_i=x_i(y): ℝ→ [0, 1] be the conditional CDF (cumulative distribution function) of Y, such that it has an inverse F_Y|X_i=x_i^-1: [0, 1] →ℝ.
* By PIT <cit.>, random variable F_Y|X_i=x_i^-1∘ U has exactly the same CDF as F_Y|X_i=x_i.
* We've created a deterministic function f(x_1, x_2, …, x_n, U) that returns a random variable F_Y|X_i=x_i^-1∘ U, given real-valued x_i and random variable U.
Essentially, we can enforce "all mechanisms are deterministic" without sacrificing expressiveness. The inherent randomness of a mechanism is "injected" by an unobservable "noise" parent whose value is a random variable. This practice is fairly common:
* To sample values from 𝒩(μ, σ^2), the reparameterization trick uses deterministic function f(μ, σ, ϵ)=μ + σ×ϵ, where ϵ is sampled from an auxiliary "noise" distribution 𝒩(0, 1) <cit.>.
* Additive noise model Y = f_Y(X) + N_Y has random variables X, Y, N_Y, deterministic function f_Y, and additive noise N_Y ⊥ X <cit.>.
* Randomness in computer programs often comes from built-in random number generators, while the main program is deterministic.
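A small numerical sketch of both tricks (assuming numpy; the parameter values are arbitrary), where all randomness enters through an exogenous noise value and the mechanisms themselves are deterministic functions:

import numpy as np

rng = np.random.default_rng(0)

# Reparameterization trick: x = mu + sigma * eps, with eps ~ N(0, 1) as the only noise source.
def gaussian_mechanism(mu, sigma, eps):
    return mu + sigma * eps

eps = rng.standard_normal(100_000)
samples = gaussian_mechanism(mu=3.0, sigma=2.0, eps=eps)
print(samples.mean(), samples.std())        # close to 3.0 and 2.0

# Probability integral transform: pushing U ~ Unif(0,1) through an inverse CDF gives
# samples with that CDF, again via a purely deterministic function of the noise.
u = rng.uniform(size=100_000)
exponential_samples = -np.log(1.0 - u)      # inverse CDF of Exp(1)
print(exponential_samples.mean())           # close to 1.0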
§ COMPARISON
§.§ Symmetric Laws and Causal Eliminativism
In a symmetric equation of n variables, the values of any n-1 variables functionally determine the value of the 1 remaining variable. Newton's second law of motion F=ma, the ideal gas law pV=nRT, and Ohm's law V=IR are symmetric laws. This differs from the non-injective asymmetry of functions.
We usually view symmetric equations as non-causal, because 1 equation is simpler than n functional determinations.
As a causal eliminativist, <cit.> argues that causality doesn't appear in physics and should be removed from philosophy altogether. However, we've shown that functions and SFM are useful.
We only consider Russell's attack on the functional theory of causation, since we don't agree with other definitions either.
* Plurality of causes: Multiple alternative causes like gunshot, arsenic, etc. can map to the same effect - the person's death. (Some functions are non-injective.)
* Plurality of effects: The effect can be defined as the whole state of the world, which contains many variables. (The "cause" node has multiple descendants.)
Russell incorrectly dismisses functional asymmetry (non-injectiveness) as "illusory," as if the plurality of effects makes both sides symmetric. But these "pluralities" aren't the same. Non-injectiveness cannot be eliminated without changing the function itself.
SFM also addresses other eliminativist challenges on causality <cit.>. SFM-causality isn't vague; actual causality, while not appearing in physics, is a slightly useful heuristic that can be abandoned when necessary; probabilistic extension handles inherently random mechanisms; functions are compatible with different theories of space (e.g. action at a distance) and time (Section <ref>).
§.§ Hume, Regularity, and Problem of Induction
<cit.> challenges causality as follows. We say "striking a match causes it to ignite." But empirically, we only observe constant conjunctions of events like "match struck" followed by "match igniting." We don't directly observe the link/connection between cause and effect. So any causal "law" is an inductive generalization from particular events, with no necessary guarantee to remain true in the future <cit.>.
Hume conflates 2 distinct problems:
* Conceptual: What's the definition of causality?
Claiming "causality is just a special kind of regularity" is true but non-reductive: What is that "special kind"? All inductive models (e.g. correlations, symmetric equations) model "regular connections," but only functional determination captures our causal intuition.
Besides relying on unspecified physical/metaphysical models (e.g. time, space, contiguity), regularity conditions like "all events of type X are followed by an event of type Y" <cit.> cannot produce causal utterances in background condition cases (Section <ref>), which are deterministic and fully-specified.
* Epistemological: How to ensure the correctness of a causal model?
Non-probabilistic SFM (due to right-uniqueness) and symmetric laws make exceptionless claims about reality, while correlation doesn't. Perhaps that's why Hume attacks causality first. However, all inductive generalizations from empirical data are equally susceptible to the Problem of Induction (PoI) <cit.>. Causality isn't somehow "more unreliable" than symmetric laws or correlations.
We formulate PoI as follows. Consider a normal world W_1(t) and a piecewise world W_2(t). W_2(t) is exactly the same as W_1(t) for all time t before t_0, but is drastically different after t_0. Given a world W and all its information before t_0, there's no way of distinguishing whether W is W_1 or W_2.
By enumerating different ways of W_2 being "drastically different", such as "the world exploding after t_0" or "the gravitational constant doubling after t_0", we can construct worlds where symmetric laws and correlations break down under PoI. Therefore, PoI isn't an attack against causality alone. Similarly, a conceptual definition of causation won't solve PoI.
§.§ Logic and Counterfactuals
In retrospect, "if-then" material conditionals cannot replace causality because they violate right-uniqueness: both {p:0, q:0} and {p:0, q:1} satisfy p ⇒ q. They allow vacuously true propositions like "if I don't eat anything today, then I am a billionaire," which feels wrong causally/counterfactually. By adding the laws-of-nature () to the antecedents, we can perform rigorous deduction =(, _exo) without sacrificing causal intuitions. The underlying causal formula is q=f(p, …) instead of p ⇒ q, though functions and background conditions are often omitted in causal utterances.
Like the but-for test, many counterfactual definitions of causation are variations of "if x, then y; if not-x, then not-y" <cit.>. They're usually imperfect because they don't have the full expressiveness of functions. For example, <cit.>'s INUS condition is equivalent to disjunctive normal form <cit.>, which any Boolean function can be converted to, so it's just a circuitous way of stating "causality is functions."
<cit.> defines counterfactual conditional "if not-x, then not-y" as "in the closest possible world with not-x, there's not-y." However, without defining a distance metric and an algorithm to find the closest possible world, this definition cannot even describe a deterministic, fully-specified causal system. Using actual-tweaked contrast, SFM unambiguously computes the contrastive world as _c=(, _a, _c|_exo).
§.§ Intervention and SCM
The definition of SCM <cit.> relies on intervention, a causal concept, so it's often criticized for being circular and non-reductive. We develop SFM as an equally-expressive reformulation of SCM that only relies on functions, thus eliminating circularity and providing a philosophical foundation for SCM. The generality of functions also avoids anthropocentric objections that manipulation requires human agency <cit.>.
Although SCM's surgical intervention do(Y=y) is generalized by sub-SFM, we can also define it as a parent of Y, making intervention just a type of functional determination.
Given (Y) = {X_1, X_2, …, X_n, DoY}, DoY is a surgical intervention on Y when:
* [DoY] = [Y] ∪{}. ∉[Y] means "no intervention," like in option types and nullable types.
* There exists an "ordinary mechanism" function g: ∏_i=1^n[X_i] →[Y], such that
[Y](_|(Y)) =
g(_|{X_1, X_2, …, X_n}) if _|(Y)(DoY) = ,
_|(Y)(DoY) otherwise
"Conditionally overriding an ordinary mechanism" is the key intuition behind interventions. For example, barometer reading is ordinarily determined by atmospheric pressure, but it can also be manipulated by human intervention. SFM can also express more complicated interventions, like when intervention has a failure probability or when only some intervention options are possible for humans.
§ PHILOSOPHICAL APPLICATIONS
Many philosophical discussions take "causation" as given without mathematically defining what it is, so our functional definition of causality may help clarify some downstream concepts.
§.§ Desires for SFM Learning
Several alleged "metaphysical doctrines" about causality can now be seen as epistemological desires for learning new SFMs:
* The Principle of Sufficient Reason (PSR): "Everything has a cause" or "anything is an effect caused by earlier events" <cit.>.
PSR desires to add parents to exo-nodes that "have no causes" in old models.
* The Eleatic Principle (EP): For something to "exist" in an ontology, it must be able to cause changes in other things <cit.>.
EP desires to add descendants to sink nodes that "affect nothing" in old models.
* Causal Nexus (CN): "Any causal relation requires a nexus, some interface by means of which cause and effect are connected" <cit.>.
CN desires to insert intermediate nodes between old parent-child edges.
Strictly speaking, these desires are not satisfiable if we only allow finite acyclic SFM (Appendix <ref>), but they do encourage us to learn bigger SFMs to model the world.
§.§ The Uncaused
Since exo-nodes can never appear in the "effect" part of causal utterances, we define node u is uncaused relative to iff u ∈_exo. Being uncaused/exogenous is not a metaphysical fact, but a modeling choice we make: We don't want to model u as being determined by a mechanism and other nodes in .
If _uni is the SFM of the full world, we often only use some sub-SFM _sub for specific tasks.
Because a node can be uncaused (exo-node) in one sub-SFM and caused (endo-node) in another, regarding "uncaused" as a node's metaphysical property without specifying _sub is ill-defined. This is the source of many confusions.
For something with no causal parent anywhere, we say u is strongly-uncaused iff u isn't an endo-node in any sub-SFM _sub (i.e. it's an exo-node in _uni).
§.§ Free Will
Free will loosely describes an agent's ability to "freely" choose between different possible actions <cit.>. We often face seemingly conflicting intuitions:
* People have free will.
* The world's past and laws-of-nature functionally determine the world's future, making people's decisions unfree.
If we accept that something is free if it's "uncaused or not deterministically caused" <cit.>, then SFM offers a mathematical definition of freedom that resolves this conflict:
* Node u is free relative to iff u is exogenous in .
* Node u is unfree relative to iff u is endogenous in .
* Node u is strongly-free iff u is exogenous in every of interest that contains u.
* Node u is strongly-unfree iff u is endogenous in every of interest that contains u.
Whether an action is free depends on the model of interest.
When actions have consequences, we want to model the utility function Q(s, a) for taking action a ∈ A at state s ∈ S. This makes action free relative to Q(s, a), so any action with consequences is not strongly-unfree. Meanwhile, the best action a^* = π(s, Q) = argmax_{a ∈ A} Q(s, a) is determined/unfree relative to π(s, Q). But for discrete S, A without additional assumptions, finding the best action requires computing Q(s, a) for all a ∈ A, so modeling a as a "free" input is inevitable and useful: The agent evaluates the utility of each action before taking the best action.
Besides reinforcement learning <cit.>, this best-action-selection framework also applies to minimax search <cit.> and decision-making in general. Although we don't define causality using agency like <cit.>, we suggest that modeling "actions functionally determine consequences" could be an origin of human causal intuitions.
Generally, the freedom/arbitrariness/uncertainty of function inputs is closer to the universal quantifier "for all/any." It's not determined because we don't model it as another function's output; it's not random because we cannot reasonably specify its marginal distribution and even if we do, the distribution isn't helpful for the downstream task.
* The light switch is free to vary, while the light is determined.
* To maximize f(x), we freely vary x and record the maximum f(x).
* We freely change causes _1|_exo and infer effects _1|_endo⊆(, _0, _1|_exo).
* A sorting algorithm works for an arbitrary input list.
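A toy sketch of this best-action-selection pattern (the state names, actions, and Q-values are made up for illustration):

# The action is a free input to the utility function Q(s, a); the chosen action
# a* = argmax_a Q(s, a) is then determined relative to the selection rule pi.
Q = {("rainy", "umbrella"): 1.0, ("rainy", "no_umbrella"): -2.0,
     ("sunny", "umbrella"): 0.2, ("sunny", "no_umbrella"): 0.8}
actions = ("umbrella", "no_umbrella")

def pi(state):
    # Evaluating Q at every action requires treating the action as a free input.
    return max(actions, key=lambda a: Q[(state, a)])

print(pi("rainy"), pi("sunny"))   # umbrella no_umbrella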
§.§ Causal Explanation
We use explanans to explain explanandum. A causal explanation uses causes (and underlying mechanisms/laws-of-nature) to explain effects. There are also non-causal explanations that appeal to symmetric equations, correlation, or backtracking (using effects to explain causes).
With SFM, causal explanations become a subset of Deductive-Nomological (DN) explanations, where (1) explanans contains general laws and particular conditions; (2) explanandum is entailed by explanans <cit.>.
Using _a|_exo to explain _a|_endo, we use general laws and particular conditions _c, _a|exo; entailment comes from _a|_endo⊆_a=(, _c, _a|_exo).
In practice, can be learned from empirical data (inductive); some nodes' values can be random variables (probabilistic).
SFM solves many alleged counterexamples where DN model appears insufficient for defining explanation:
* In the symmetric equation involving shadow length, the Sun's position, and flagpole height, why is shadow the explanandum? Because we prefer causal explanations over non-causal explanations and shadow should be modeled as a child node (Section <ref>).
* Why do people omit irrelevant background conditions in explanations? Because we use delta compression in causal utterances (Section <ref>, <ref>).
The asymmetry of causal explanation comes from the asymmetry of causality, which comes from functions being right-unique and often non-injective.
§.§ Disposition
Glass is fragile because it has a disposition to shatter. Dispositions like fragility resemble properties of objects, but they describe possible (not necessarily actual) behaviors under certain conditions: Glass may not actually shatter <cit.>. We analyze dispositions with functions.
For a deterministic and fully-specified example, minerals higher on Mohs hardness scale (e.g. diamond) will scratch softer minerals (e.g. talc). Let function f(m_1, m_2) take in 2 minerals and return the mineral that gets scratched, so Talc = f(Diamond, Talc). Talc has the disposition to be scratched because ∀ m: Talc = f(m, Talc); diamond has the "power" to scratch because ∀ m: m = f(m, Diamond).
Therefore, dispositions are properties of a downstream function f, but people colloquially associate them with input nodes (scratch-hardness of minerals) or input values (scratch-hardness of diamond).
§.§ Supervenience
"Y supervenes on X" is equivalent to "Y functionally depends on X," because the formal definition of supervenience ("there cannot be an Y-difference without a X-difference" <cit.>) is the same as right-uniqueness: Consider team R,
([(x, y_1)∈ R] [(x, y_2) ∈ R] [y_1 y_2])
= ([(x, y_1)∈ R] [(x, y_2) ∈ R]) [y_1 y_2]
= ([(x, y_1)∈ R] [(x, y_2) ∈ R]) [y_1 = y_2]
= ([(x, y_1)∈ R] [(x, y_2) ∈ R]) ⇒ [y_1 = y_2]
§.§ Mental Causation
Can mental kinds (property, state, event) cause physical kinds? Mental causation faces 2 conflicting intuitions:
* It's common in everyday experiences: I want to raise my hand (mental state), so I raise my hand (body state).
* The Exclusion Problem: Physical effects like body movements are already determined by physical causes like brain activities, so there's no room for a mental cause, which is also sufficient for the physical effect <cit.>.
The Exclusion Problem arises whenever a functional determination entailed by (, , , ) cannot be deduced from the graph structure =(, ) (and Armstrong's Axioms) alone. People feel uneasy because they cannot find such dependency as a path in .
* (Assumption) On the lowest physical level, brain state functionally determines body state: Body = f_1(Brain).
* (Assumption) Mental state supervenes on brain state: Mind = f_2(Brain). Mental state is an abstract/aggregate description of physical brain state.
Multiple realizability (a single mental kind can be realized by many distinct physical kinds) <cit.> is true when f_2 is non-injective.
It's a coincidence that functionalism (name unrelated to mathematical functions) uses causality to define mental states <cit.> and we reduce causality to functions.
* So we have an SFM with graph Body ← Brain → Mind and structural functions {Body: f_1, Mind: f_2}.
* (Fact) There exists a function f_3 such that Body = f_3(Mind) is true in every assignment permitted by the SFM. Mental state does functionally determine body state.
Body = f_3(Mind) cannot be deduced from the graph alone. It's entailed by the specific functional mappings f_1, f_2 (together with the rest of the SFM). Although it cannot appear as a path in the graph, we see no reason to dismiss it as "excluded." We may use SFM-intersection to explicitly/graphically encode this functional dependency, although it is entailed by a single SFM (SFM-intersection-proper isn't required).
The Exclusion Problem appears in any system with hierarchical levels of abstraction, since supervenience is just functional dependency. In fully-specified and deterministic computers, what causes a video to play on screen, the low-level chip activities or the high-level video-player program? The same reasoning applies. The only empirical question is whether higher-level functional determinations like f_3 are true. If not, we simply say the abstraction is broken.
§.§ Time
SFM doesn't endorse any particular theory of time, but we can define T: →ℝ that maps each node u to a real-valued timestamp T(u). If ∀ (u, v) ∈: T(u) ≤ T(v), then causes always temporally precede their effects. But without additional assumptions, T might as well violate this condition.
Backward causation occurs when an effect temporally precedes its cause <cit.>. If most SFM edges (u, v) ∈ still point from past to future (T(u) ≤ T(v)), a backward edge can create cycles, resulting in PULO or actually unsatisfiable laws. That's why people intuitively dislike backward causation. But if the specific SFM is satisfiable or satisfied by empirical data, we cannot dismiss it a priori.
Are there fundamental properties of our physical world that make causal and temporal orders agree? Could it be the asymmetry of thermodynamics, radiation <cit.>, or our mental habit of "actions determining consequences" (Section <ref>)? Further research is required.
In a causal feedback loop A → A, node A influences its own next state. With discrete time, we can unroll it to an acyclic time-indexed causal chain A(0) → A(1) → A(2) → …, which may be countably infinite. When every [A(t)] is invertible, the system has time-symmetry and another equivalent SFM A(0) ← A(1) ← A(2) ← ….
Decreasing the interval between 2 consecutive timestamps towards the infinitesimal, we eventually get an uncountable number of nodes and cannot properly define an edge, because there are no 2 "consecutive" real numbers. In this case, it would make more sense for A(t) to determine its instantaneous rate of change d/dt A(t), like the exponential/logistic growth rate of bacteria population size and the predator-prey dynamics in Lotka-Volterra equations. SFM-intersection-proper can represent autonomous differential equations, which include causal loop diagrams <cit.>. However, differential equations in general are better tools for modeling continuous-time causality.
§ CONCLUSION
After our conceptual analysis that reduces causality to functions, there should be nothing mysterious about the definition of causality.
Using forward inference, contrast, and delta compression, Structural Functional Model (SFM) correctly produces intuitive causal utterances.
We've also supported intuitive practices from an algorithmic perspective: contrast saves space and time; finite acyclic SFM is required for guaranteed satisfiability (at the cost of expressiveness).
Distinct from but compatible with probability theory, "causality as functions" allows for interesting downstream applications.
§ MATHEMATICS REVIEW
Under ZFC set theory, a set is roughly an unordered collection of distinct elements. The binary Cartesian product between two sets X and Y is X × Y = {(x, y)|x ∈ X y ∈ Y}.
A binary relation R over X and Y is R ⊆ X × Y.
A relation R may have properties:
* Left-total: ∀ x ∈ X ∃ y ∈ Y: (x,y) ∈ R
* Right-total: ∀ y ∈ Y ∃ x ∈ X: (x,y) ∈ R
* Left-unique: ∀ x_1 ∈ X, x_2 ∈ X, y ∈ Y: ((x_1,y)∈ R) ((x_2,y) ∈ R) ⇒ x_1=x_2
* Right-unique: ∀ x ∈ X, y_1 ∈ Y, y_2 ∈ Y: ((x,y_1)∈ R) ((x,y_2)∈ R) ⇒ y_1=y_2
* Function (total function): left-total and right-unique.
* Partial function: right-unique.
* Injective function: left-unique function.
* Surjective function: right-total function.
* Bijective function: injective and surjective function.
Because of right-uniqueness, a function can be written as f: X → Y such that f(x) ∈ Y is unique for every x ∈ X. For functions f: X → Y and g: S → Y satisfying S ⊆ X, if ∀ x ∈ S: f(x)=g(x), we say g is a restriction of f and f is an extension of g (or f extends g). Because functions are relations, we write g ⊆ f or g = f_|S.
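As a small mechanical check (our own sketch; a relation is encoded as a Python set of pairs), the properties above can be tested directly:

# Check defining properties of a binary relation R over finite sets, R given as a set of (x, y) pairs.
def is_right_unique(R):
    return all(y1 == y2 for (x1, y1) in R for (x2, y2) in R if x1 == x2)

def is_left_total(R, X):
    return all(any(x == x0 for (x0, _) in R) for x in X)

def is_function(R, X):
    return is_left_total(R, X) and is_right_unique(R)

X = {1, 2, 3}
print(is_function({(1, "a"), (2, "a"), (3, "b")}, X))             # True
print(is_function({(1, "a"), (1, "b"), (2, "a"), (3, "a")}, X))   # False: 1 maps to two values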
An indexed collection of sets is a 3-tuple (I, 𝒜, A) written as {A_i}_i∈ I, where I is the index set, 𝒜 is a collection of sets, and A is a function A: I →𝒜. Every A_i = A(i) ∈𝒜 is a set. Now we can define Cartesian product over any (possibly infinite-sized) indexed collection of sets: ∏_i∈ IA_i is the set of all functions f: I →⋃_i∈ IA_i such that ∀ i∈ I: f(i) ∈ A_i.
Similarly, a relation over an indexed collection of sets is a subset of its Cartesian product.
A directed graph is an ordered pair = (, ), where is a set of nodes and ⊆× is a set of directed edges. A directed edge is an ordered pair (u, v) such that u ∈ and v ∈.
* If (u, v) ∈, u is a parent of v and v is a child of u.
* (u) denotes the set of parents of u; Ch(u) denotes the set of children of u.
* The indegree of u is the number of its parents (deg^-(u) = |(u)|); the outdegree of u is the number of its children (deg^+(u) = |Ch(u)|); the degree of u is the sum of its indegree and outdegree (deg(u)=deg^-(u)+deg^+(u)).
* A root node u has indegree deg^-(u) = 0. A sink node u has outdegree deg^+(u) = 0.
* A path is a sequence of nodes v_1, v_2, …, v_n such that (v_i, v_i+1) ∈ for any i ∈{1, 2, …, n-1}; a path is a cycle if v_1 = v_n.
* Node u is an ancestor of node v (u ∈An(v)) if there's a path from u to v. Otherwise, u is a non-ancestor of v.
* Node u is a descendant of node v (u ∈De(v)) if there's a path from v to u. Otherwise, u is a non-descendant of v.
* Under our convention, a node is the ancestor/descendant of itself.
§ GENERALIZED MÜNCHHAUSEN TRILEMMA
We formalize a theorem that generalizes the Münchhausen Trilemma <cit.> in epistemology:
Any directed graph G = (V, E) contains at least one of the following:
* A root.
* A cycle.
* An infinite regress: An infinite path of distinct nodes (…, u_2, u_1, u_0) ending at u_0, such that for any integer i ≥ 1, there exists u_i ∈ V satisfying (u_i, u_i-1) ∈ E and u_i ∉{u_j}_j=0^i-1
Proving by contradiction, suppose instead that a graph has no root, no cycle, and no infinite regress. Since there's no infinite regress, there exists a nonnegative integer n that is the maximum length of a path of distinct nodes ending at some u ∈ V. Let (u_n, u_n-1, …, u_1, u_0) be that maximum-length path.
Because G doesn't have a root, deg^-(u) ≥ 1 for all u ∈ V and thus deg^-(u_n) ≥ 1, so there exists a node v ∈ V such that (v, u_n) ∈ E is an edge.
If v ∈{u_i}_i=0^n, then v = u_i for some integer 0 ≤ i ≤ n. We can construct a new path (u_n, u_n-1, …, u_i, u_n). It's a path because (u_i, u_n) = (v, u_n) ∈ E and (u_i, u_i-1) ∈ E for all 1 ≤ i ≤ n; it's a cycle because it starts and ends at u_n. This violates the acyclic assumption, so v ∉{u_i}_i=0^n is distinct from all nodes in the path.
We can thus construct a new path (v, u_n, u_n-1, …, u_1, u_0) of length n+1, where all nodes have been shown to be distinct. However, this contradicts the condition that the maximum length of distinct-node paths is n. Therefore, it's impossible for a directed graph to have no root, no cycle, and no infinite regress at the same time.
GMT is a general theorem about directed graphs, proven mathematically. It applies to all problems characterized by objects and directed binary relations between them (i.e. describable by a directed graph), such as "X causes Y" and "X justifies Y."
If we define a directed graph where nodes are propositions and edge (u, v) means "u justifies v" or "u is a part of the justification for v", then GMT entails that we either settle with foundationalism (root that isn't justified), coherentism (cycle that justifies itself), or infinitism (infinite regress of justification chain) - we cannot simultaneously eliminate all 3 of them. Notice how we never used the meaning of "justification" in our proof, only the directed binary form of "A justifies B."
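For a finite graph the third alternative (infinite regress) cannot occur, so GMT reduces to "every finite directed graph has a root or a cycle." A small Python sketch (the helper root_or_cycle and the node → parents encoding are our own) finds one or the other by walking backwards along parents:

# Finite case of GMT: walk backwards until we hit a parentless node (root) or repeat a node (cycle).
def root_or_cycle(parents):
    node = next(iter(parents))
    seen = []
    while True:
        if not parents[node]:
            return ("root", node)
        if node in seen:
            cycle = seen[seen.index(node):] + [node]
            return ("cycle", cycle)       # listed against edge direction, but a cycle of the graph
        seen.append(node)
        node = next(iter(parents[node]))  # step to an arbitrary parent

print(root_or_cycle({"a": {"b"}, "b": {"c"}, "c": set()}))   # ('root', 'c')
print(root_or_cycle({"a": {"b"}, "b": {"c"}, "c": {"a"}}))   # ('cycle', ['a', 'b', 'c', 'a'])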
|
http://arxiv.org/abs/2307.07345v1 | 20230714134945 | Energy stability for a class of semilinear elliptic problems | [
"Danilo Gregorin Afonso",
"Alessandro Iacopetti",
"Filomena Pacella"
] | math.AP | [
"math.AP",
"35J61, 35B35, 35B38, 49Q10"
] |
Energy stability for a class of semilinear elliptic problems
Research partially supported by Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM)
[Danilo Gregorin Afonso]Dipartimento di Matematica Guido Castelnuovo, Sapienza Università di Roma, Italy [email protected]
[Alessandro Iacopetti]Dipartimento di Matematica “G. Peano", Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy
[email protected]
[Filomena Pacella]Dipartimento di Matematica Guido Castelnuovo, Sapienza Università di Roma, Piazzale Aldo Moro 5, 00185 Roma, Italy
[email protected]
[2010]35J61, 35B35, 35B38, 49Q10
August 12, 2023
In this paper, we consider semilinear elliptic problems in a bounded domain Ω contained in a given unbounded Lipschitz domain 𝒞⊂ℝ^N. Our aim is to study how the energy of a solution behaves with respect to volume-preserving variations of the domain Ω inside 𝒞. Once a rigorous variational approach to this question is set, we focus on the cases when 𝒞 is a cone or a cylinder and we consider spherical sectors and radial solutions or bounded cylinders and special one-dimensional solutions, respectively. In these cases, we show both stability and instability results, which have connections with related overdetermined problems.
§ INTRODUCTION
Let 𝒞⊂ℝ^N, N ≥ 2, be an unbounded uniformly Lipschitz domain and let Ω⊂𝒞 be a bounded Lipschitz domain with smooth relative boundary Γ_Ω := ∂Ω∩𝒞. More precisely, we assume that Γ_Ω is a smooth manifold of dimension N-1 with smooth boundary ∂Γ_Ω. We set Γ_1, Ω := ∂Ω∖Γ_Ω and assume that ℋ^N - 1(Γ_1, Ω) > 0, where ℋ^N - 1 denotes the (N - 1)-dimensional Hausdorff measure. Hence ∂Ω = Γ_Ω∪Γ_1, Ω∪∂Γ_Ω.
We consider the following semilinear elliptic problem:
{[ - Δ u = f(u) in Ω; u = 0 on Γ_Ω; ∂ u/∂ν = 0 on Γ_1, Ω ].
where f: ℝ→ℝ is a locally C^1,α nonlinearity and ν denotes the exterior unit normal vector to ∂Ω.
Let u_Ω be a positive weak solution of (<ref>) in the Sobolev space H_0^1(Ω∪Γ_1, Ω), which is the space of functions in H^1(Ω) whose trace vanishes on Γ_Ω. By standard variational methods we have that under suitable hypotheses on f such a solution exists and is a critical point of the energy functional
J(v) = 1/2∫_Ω |∇ v|^2 dx - ∫_Ω F(v) dx, v ∈ H_0^1(Ω∪Γ_1, Ω),
where F(s) = ∫_0^s f(τ) dτ.
A classical example of a nonlinearity for which a positive solution exists for any domain Ω in 𝒞 is the Lane-Emden nonlinearity, namely
f(u) = u^p, with 1 < p < (N + 2)/(N - 2) if N ≥ 3,
1 < p < + ∞ if N = 2.
In this case, u_Ω can be obtained, for instance, by minimizing the functional J on the Nehari manifold
𝒩(Ω) = {v ∈ H_0^1(Ω∪Γ_1, Ω) ∖{0} : J'(v)[v] = 0} .
Given the unbounded region 𝒞, an interesting question is to understand how the energy J(u_Ω) behaves with respect to variations of a domain Ω inside 𝒞. In particular, one could ask whether the energy J(u_Ω) increases or decreases by deforming Ω into a domain Ω sufficiently close to Ω and with the same measure.
Loosely speaking, one could consider the function Ω↦ T(Ω) = J(u_Ω) and study it in a suitable “neighborhood" of Ω. Under this aspect, domains Ω which are local minima of T could be particularly interesting. This question could be attacked by differentiating T(Ω) with respect to variations of Ω which leave the volume invariant and studying the stability or instability of its critical points. However, since (<ref>) is a nonlinear problem and solutions of (<ref>) are not unique in general, it is not clear a priori how to well define the functional T(Ω).
We will show in Section <ref> that for nondegenerate solutions u_Ω of (<ref>) the energy functional T(Ω) is well defined for domains obtained by small deformations of Ω induced by vector fields which leave 𝒞 invariant.
We remark that the study of the stationary domains of the energy functional T(Ω) with a volume constraint is strictly related to the overdetermined problem obtained from (<ref>) by adding the condition that the normal derivative ∂ u/∂ν is constant on Γ_Ω, see Proposition <ref>. This is well-known for a Dirichlet problem in ℝ^N and when T(Ω) is globally defined for all domains Ω⊂ℝ^N (as in the case of the torsion problem, i.e. f ≡ 1). It has been observed in <cit.> and <cit.> in the relative setting of the cone.
The existence or not of domains that are local minimizers of the energy and their shapes obviously depend on the unbounded region 𝒞 where the domains Ω are contained. In this paper, we consider unbounded cones and cylinders, in which there are some particular domains that, for symmetry or other geometric reasons, could be natural candidates for being local minimizers of the energy.
Let us first describe the case when 𝒞 is a cone Σ_D defined as
Σ_D := {x ∈ℝ^N : x = tq, q ∈ D, t > 0},
where D is a smooth domain on the unit sphere 𝕊^N - 1.
In Σ_D we consider the spherical sector Ω_D obtained by intersecting the cone with the unit ball centered at the origin, i.e. Ω_D=Σ_D∩ B_1. In Ω_D we can consider a radially symmetric solution u_D of problem (<ref>), for the nonlinearities f for which they exist. Obviously, u_D is a radial solution of the analogous Dirichlet problem in the unit ball B_1.
In Section <ref> we show that, whenever u_D is a nondegenerate solution of (<ref>), then the pair (Ω_D, u_D) is energy-stationary in the sense of Definition <ref> and investigate its “stability" as a critical point of the energy functional T, which is well defined for small perturbations of Ω_D (see Sections <ref> and <ref>).
The main result we get is that the stability of (Ω_D, u_D) depends on the first nontrivial Neumann eigenvalue λ_1(D) of the Laplace-Beltrami operator - Δ_𝕊^N - 1 on the domain D ⊂𝕊^N - 1 which spans the cone. In particular, we obtain a precise threshold for stability/instability which is independent of the nonlinearity, and on the radial positive solution considered, whenever multiple radial positive solutions exist. Let us remark that for several nonlinearities the radial positive solution is unique (see <cit.>). For example, this is the case if f(u) = u^p, p > 1.
To state precisely our result we need to introduce the first eigenvalue ν_1 of the following singular eigenvalue problem:
- z” - (N - 1)/r z' - f'(u_D)z = ν/r^2 z in (0, 1)
z(1) = 0
This problem arises naturally when studying the spectrum of the linearized operator - Δ - f'(u_D). We refer to Section <ref> for more details.
Let Σ_D be the cone spanned by the smooth domain D ⊂𝕊^N - 1, N ≥ 3, and let λ_1(D) be the first nonzero Neumann eigenvalue of the Laplace-Beltrami operator - Δ_𝕊^N - 1 on D. Let u_D be a radial positive solution of (<ref>) in the spherical sector Ω_D. We have:
* if - ν_1 < λ_1(D) < N - 1, then the pair (Ω_D, u_D) is an unstable energy-stationary pair;
* if λ_1(D) > N - 1, then (Ω_D, u_D) is a stable energy-stationary pair.
The case N = 2 is special and in this case, the overdetermined torsion problem has been completely solved in <cit.> using that the boundary of any cone in dimension 2 is flat. In the nonlinear case, the condition N ≥ 3 arises from the study of an auxiliary singular problem (see Proposition <ref>). It is important to observe that the singular eigenvalue ν_1 which appears in (i) is larger than -(N - 1) for all autonomous nonlinearities f(u) (see <cit.>). Thus the condition λ_1(D) ∈ (- ν_1, N - 1) is consistent.
Let us comment on the meaning of Theorem <ref>. The statement (ii) will be proved by showing that the quadratic form corresponding to the second derivative of the energy functional, with a fixed volume constraint, is positive definite in all directions. This means that the spherical sector locally minimizes the energy among small volume preserving perturbations of Ω_D and of the corresponding radial solution u_D.
On the contrary, when - ν_1 < λ_1(D) < N - 1, by (i) we have that the pair (Ω_D, u_D) is unstable and therefore Ω_D is not a local minimizer of the energy. This means that there exist small volume preserving deformations of the spherical sector Ω_D which produce domains Ω_t and solutions u_t of (<ref>) in Ω_t whose energy J(u_t) is smaller than the energy J(u_D) of the positive radial solution u_D in the spherical sector Ω_D.
Moreover, observe that the function f = f(s) could satisfy suitable hypotheses such that problem (<ref>) has a unique positive solution u_Ω in any domain Ω⊂Σ_D (or more generally in Ω⊂𝒞). This is the case, for example, when f ≡ 1, i.e., (<ref>) is a “relative" torsion problem. Then the energy functional T(Ω) = J(u_Ω) is well defined for any domain Ω⊂Σ_D. Hence we may ask whether a global minimum for T exists, once the volume of Ω is fixed, and is given by the spherical sector Ω_D. This question has been addressed in <cit.>, <cit.> and <cit.> when f ≡ 1, showing that Ω_D is a global minimizer if Σ_D is a convex cone (<cit.>), as a consequence of an isoperimetric inequality introduced in <cit.>, see also <cit.>. Instead, in <cit.> it is proved that Ω_D is not a local minimizer whenever λ_1(D) < N - 1, which is the same threshold we get in Theorem <ref> for general nonlinearities.
The other example of an unbounded domain we consider in the present paper is a half-cylinder, defined as
Σ_ω := ω× (0, + ∞) ⊂ℝ^N,
where ω⊂ℝ^N - 1 is a smooth bounded domain. We denote the points in Σ_ω by x = (x', x_N), x' ∈ω. In this case, a geometrically simple domain we consider is the bounded cylinder
Ω_ω := {(x', x_N) ∈ℝ^N : x' ∈ω, 0 < x_N < 1 }.
In Ω_ω we consider a positive solution
u_ω(x) = u_ω(x_N)
which is obtained by trivially extending to Ω_ω a positive one-dimensional solution of the problem
- u” = f(u) in (0, 1)
u'(0) = u(1)=0
for a nonlinearity f for which such a solution exists.
Before stating the results concerning the stability of the pair (Ω_ω, u_ω) we again consider an auxiliary eigenvalue problem (but not singular):
-z” - f'(u_ω) z = α z in (0, 1)
z'(0) = z(1) = 0
The problem (<ref>) is considered in Section <ref> to study the spectrum of the linearized operator - Δ - f'(u_ω). We denote by α_1 the first eigenvalue of (<ref>).
We start by stating a sharp stability/instability result for the torsion problem, i.e., taking f ≡ 1 in (<ref>).
Let Σ_ω⊂ℝ^N, N ≥ 2, and Ω_ω be, respectively, as in (<ref>) and (<ref>), and let u_ω be the one-dimensional positive solution of (<ref>) in Ω_ω obtained by (<ref>) for f ≡ 1. Let λ_1(ω) be the first nontrivial Neumann eigenvalue of the Laplace operator - Δ_ℝ^N - 1 in the domain ω⊂ℝ^N - 1. Then there exists a number β ≈ 1.439 such that
* if λ_1(ω) < β, then the pair (Ω_ω, u_ω) is an unstable energy-stationary pair;
* if λ_1(ω) > β, then the pair (Ω_ω, u_ω) is a stable energy-stationary pair.
Note that the number β that gives the threshold for the stability is independent of the dimension N. Its value is obtained by solving numerically the equation √(λ_1)tanh(√(λ_1)) - 1 = 0 (see (<ref>) in the proof of Theorem <ref>).
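For the reader who wants to reproduce the number, a few lines of Python recover it by a plain bisection (the bracketing interval [0.5, 3.0] is our choice; this is only an illustrative check):

from math import sqrt, tanh

def g(lam):
    return sqrt(lam) * tanh(sqrt(lam)) - 1.0

lo, hi = 0.5, 3.0              # g(0.5) < 0 < g(3.0), so the root is bracketed
for _ in range(60):            # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)

print(0.5 * (lo + hi))         # ~1.439, the threshold beta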
It is interesting to observe that the instability result of Theorem <ref> is related to a bifurcation theorem obtained in <cit.>. Indeed, if we consider the cylinder Σ_ω in ℝ^2, in which case ω is simply an interval in ℝ and Ω_ω is a rectangle, a byproduct of Theorem 1.1 of <cit.> is the existence of a domain Ω_ω in Σ_ω that is a small deformation of the rectangle Ω_ω and in which the overdetermined problem
{[ - Δ u = 1 in Ω_ω; u = 0 on Γ_Ω_ω; ∂ u/∂ν = c < 0 on Γ_Ω_ω; ∂ u/∂ν = 0 on Γ_1, Ω_ω ].
has a solution.
By looking at the proof of <cit.> and relating it to our instability result it is clear that the bifurcation should occur when the eigenvalue λ_1(ω) crosses the value β provided by Theorem <ref>.
The proof of Theorem <ref> can be derived from a general condition for the stability of the pair (Ω_ω, u_ω) in the nonlinear case, which is obtained in Theorem <ref>. The proof of Theorem <ref> involves auxiliary functions that appear naturally in the study of derivatives of the energy functional T, see Section <ref>.
Let us remark that in the case when f ≡ 1 we succeed in obtaining the sharp bound of Theorem <ref> because the solution given by (<ref>) and (<ref>) is explicit:
u_ω(x) = u_ω(x_N) = (1 - x_N^2)/2,
and so are the auxiliary functions which are solutions of simple linear ODEs. This allows us to use the condition of Theorem <ref> to obtain Theorem <ref>.
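Since the solution is explicit, the one-dimensional problem can be checked symbolically; the following sketch (assuming the sympy library) verifies the equation and both boundary conditions:

import sympy as sp

xN = sp.symbols('x_N')
u = (1 - xN**2) / 2

print(sp.simplify(-sp.diff(u, xN, 2)))   # 1 -> -u'' = 1 (torsion equation, f = 1)
print(sp.diff(u, xN).subs(xN, 0))        # 0 -> Neumann condition u'(0) = 0
print(u.subs(xN, 1))                     # 0 -> Dirichlet condition u(1) = 0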
The result of Theorem <ref> gives a striking difference between the torsional energy problem and the isoperimetric problem in cylinders. Indeed, Proposition 2.1 of <cit.> shows that the only stationary cartesian graphs for the perimeter functional are the flat ones. Instead, Theorem <ref> (as well as the result of <cit.>) indicate that there are domains for which the overdetermined problem relative to (<ref>), with f ≡ 1, has a solution and whose relative boundary is a non-flat cartesian graph.
For the semilinear problem, we obtain a stability result for a large class of nonlinearities as soon as the eigenvalue λ_1(ω) is sufficiently large. Indeed, we have
Let Σ_ω and Ω_ω be as in (<ref>) and (<ref>), and let u_ω be a positive one-dimensional solution of (<ref>) in Ω_ω. Let α_1 be the first eigenvalue of (<ref>) and let λ_1(ω) be as in Theorem <ref>. If the nonlinearity f satisfies f(0) = 0 and
λ_1(ω) > max{- α_1, ‖ f'(u_ω) ‖_∞},
then the pair (Ω_ω, u_ω) is a stable energy-stationary pair.
The condition (<ref>) shows that the stability depends on an interplay between the geometry of the cylinder Σ_ω (through the eigenvalue λ_1(ω)) and the nonlinearity f. On the contrary, numerical evidence shows, for the Lane-Emden nonlinearity (<ref>), that, if λ_1 is sufficiently close to -α_1, instability occurs, see Remark <ref>.
Concerning the eigenvalue α_1 in the bound (<ref>), as well as the analogous one, λ_1(D) > - ν_1, of Theorem <ref>, we point out that they are used in the proofs of both theorems to deduce the positivity of some auxiliary functions. It is an open problem to understand if they really play a role in the stability/instability result.
We delay further comments on the results and their proofs to the respective sections.
The paper is organized as follows. In Section <ref> we study problem (<ref>) in domains Ω contained in a general unbounded set 𝒞. We define the energy functional and its derivative with respect to variations of Ω which leave 𝒞 invariant and preserve the measure of Ω. This is done by considering nondegenerate solutions of (<ref>) in Ω.
In Section <ref> we consider the case when 𝒞 is a cone Σ_D. In this setting we take domains which are defined by smooth radial graphs over D, in particular we consider the spherical sector Ω_D and a corresponding radial solution u_D for which we prove the stability/instability result.
Finally in Section <ref> we study the case of the cylinder Σ_ω and prove the corresponding stability/instability result for the pair (Ω_ω, u_ω) when Ω_ω is a bounded cylinder and u_ω is as in (<ref>) and (<ref>).
§ SEMILINEAR ELLIPTIC PROBLEMS IN UNBOUNDED SETS
In this section we consider problem (<ref>) in a bounded Lipschitz domain Ω contained in an unbounded open set 𝒞 which we assume to be (uniformly) Lipschitz regular.
Starting from a positive nondegenerate solution of (<ref>) in Ω we show how to define an energy functional for small variations of Ω which preserve the volume.
§.§ Nondegenerate solutions
Let Ω⊂𝒞 be a bounded domain whose relative boundary Γ_Ω = ∂Ω∩𝒞 is a smooth manifold (with boundary). As in Section <ref> we set Γ_1, Ω = ∂Ω∖Γ_Ω.
We consider a positive weak solution u_Ω of (<ref>) in the Sobolev space H_0^1(Ω∪Γ_1, Ω), which is the subspace of H^1(Ω) of functions whose trace vanishes on Γ_Ω. By standard variational methods, such as constrained minimization, Mountain-Pass Theorem etc, it is easy to exhibit many nonlinearities f = f(s) for which such a solution exists. Moreover, with suitable assumptions on the growth of f we also have, by regularity results, that u_Ω is a classical solution of (<ref>) inside Ω and at any regular point of ∂Ω, and that u_Ω is bounded (see also <cit.>).
We assume that u_Ω is nondegenerate, i.e., the linearized operator
L_u_Ω = - Δ - f'(u_Ω)
does not have zero as an eigenvalue in H_0^1(Ω∪Γ_1, Ω) or, in other words, L_u_Ω defines an isomorphism between H_0^1(Ω∪Γ_1, Ω) and its dual space.
We consider small deformations of Ω which leave 𝒞 invariant and would like to show that the nondegeneracy of u_Ω induces a local uniqueness result for solutions of (<ref>) in the deformed domains. Thus we take a one-parameter family of diffeomorphisms ξ_t, for t ∈ (- η, η), η > 0, associated to a smooth vector field V such that V(x) ∈ T_x∂𝒞 for every x ∈∂𝒞^reg, V(x)=0 for x∈∂𝒞∖∂𝒞^reg, and set Ω_t:= ξ_t(Ω), where T_x∂𝒞 denotes the tangent space to ∂𝒞 at the point x, and ∂𝒞^reg denotes the regular part of ∂𝒞. In particular Ω_0=Ω and in order to simplify the notations we set
Γ_t := Γ_Ω_t, Γ_1, t := Γ_1, Ω_t.
Let u_Ω be a positive nondegenerate solution of (<ref>), which belongs to W^1, ∞(Ω) ∩ W^2, 2(Ω). Let V be a smooth vector field and let ξ_t be the associated family of diffeomorphisms. Then there exists δ > 0 such that for any t ∈ (- δ, δ) there is a unique solution u_t of the problem
{[ - Δ u = f(u) in Ω_t; u = 0 on Γ_t; ∂ u/∂ν = 0 on Γ_1, t ].
in a neighborhood of the function u_Ω∘ξ_t^-1 in the space H_0^1(Ω_t ∪Γ_1, t). Moreover, the map t ↦ u_t is differentiable.
By using the diffeomorphism ξ_t we can pass from the space H_0^1(Ω∪Γ_1, Ω) to the space H_0^1(Ω_t ∪Γ_1, t). Indeed,
H_0^1(Ω∪Γ_1, Ω) = {v ∘ξ_t : v ∈ H_0^1(Ω_t ∪Γ_1, t)}.
Moreover, u_t is a weak solution of (<ref>), i.e.,
∫_Ω_t∇ u_t ·∇ v dx - ∫_Ω_t f(u_t) v dx = 0 ∀ v ∈ H_0^1(Ω_t ∪Γ_1, t)
if and only if the function u_t = u_t ∘ξ_t ∈ H_0^1(Ω∪Γ_1, Ω) satisfies
∫_Ω (M_t ∇u_t)·∇ w J_t dx - ∫_Ω f(u_t) w J_t dx = 0 ∀ w ∈ H_0^1(Ω∪Γ_1, Ω)
where
J_t(x) = |det(Dξ_t(x))|
and
M_t = [Dξ_t^-1(ξ_t(x))][Dξ_t^-1(ξ_t(x))]^T.
In other words, setting M̃_t := M_t J_t, we have that ũ_t is a solution of
- div(M̃_t ∇ũ_t) - f(ũ_t)J_t = 0
in the space H_0^1(Ω∪Γ_1, Ω).
Now we consider the map
ℱ: (- η, η) × H_0^1(Ω∪Γ_1, Ω) → H_0^1(Ω∪Γ_1, Ω)^*
defined as
ℱ(t, v) = - div(M̃_t ∇ v) - f(v) J_t.
Since u_Ω is a solution in Ω and ξ_0 is the identity map we have
ℱ(0, u_Ω) = 0.
Notice that ℱ is differentiable with respect to v, and
∂_v ℱ(0, u_Ω) = - Δ - f'(u_Ω).
Indeed, for any h ∈ H_0^1(Ω∪Γ_1, Ω) we have
(ℱ(t, v + ε h) - ℱ(t, v))/ε = ( - div(M̃_t(∇ v + ε∇ h)) - f(v + ε h)J_t + div(M̃_t ∇ v) + f(v)J_t)/ε
= - div(M̃_t ∇ h) - (f(v + ε h) - f(v))/ε J_t
→ - div(M̃_t ∇ h) - f'(v) h J_t
as ε→ 0. Hence ℱ is differentiable and evaluating ∂_v ℱ at (0, u_Ω) we obtain (<ref>).
By the nondegeneracy assumption on the solution u_Ω, we infer that (<ref>) gives an isomorphism between H_0^1(Ω∪Γ_1, Ω) and H_0^1(Ω∪Γ_1, Ω)^*. Then, by the Implicit Function Theorem, there exists an interval (- δ, δ) and a neighborhood ℬ of u_Ω in H_0^1(Ω∪Γ_1, Ω) such that for every t ∈ (- δ, δ) there exists a unique function ũ_t ∈ H_0^1(Ω∪Γ_1, Ω) in ℬ such that ℱ(t, ũ_t) = 0, that is, ũ_t is the unique solution (in ℬ) of (<ref>). It follows that u_t = ũ_t ∘ξ_t^-1 is the unique solution of (<ref>) in a neighborhood of u_Ω∘ξ_t^-1 in H_0^1(Ω_t ∪Γ_1, t).
Finally, since the map t ↦ũ_t is smooth, so is the map t ↦ u_t. In addition
u̇ := d/dt|_t = 0 u_t = (d/dt|_t = 0ũ_t) - ⟨∇ u_Ω, V⟩.
The proof is complete.
Note that, as for u_Ω, u_t is a classical solution of (<ref>) in Ω_t and on the regular part of ∂Ω_t. By Proposition <ref> we have that the energy functional
T(Ω_t) = J(u_t) = 1/2∫_Ω_t |∇ u_t|^2 dx - ∫_Ω_t F(u_t) dx,
where F(s) = ∫_0^s f(τ) dτ, is well defined for all sufficiently small t. Observe that, since u_t is a solution to (<ref>), we have
∫_Ω_t |∇ u_t|^2 dx = ∫_Ω_t f(u_t)u_t dx,
so we can also write
T(Ω_t) = 1/2∫_Ω_t f(u_t)u_t dx - ∫_Ω_t F(u_t) dx.
In the next result we show that T is differentiable with respect to t and compute its derivative at t = 0, that is, at the initial domain Ω.
Assume that u_Ω is a positive nondegenerate solution of (<ref>) which belongs to W^1, ∞(Ω) ∩ W^2, 2(Ω). Then
. d/dt|_t = 0 T(Ω_t) = - 1/2∫_Γ_Ω |∇ u_Ω|^2 ⟨ V, ν⟩ dσ.
Recall from Proposition <ref> that t ↦ u_t is smooth and (<ref>) holds. Differentiating the equation - Δ u_t = f(u_t) with respect to t we obtain
- Δu̇ = f'(u_Ω) u̇ in Ω.
Now observe that by the hypotheses on u_Ω we have that
u̇ + ⟨∇ u_Ω, V ⟩ = (d/dt|_t = 0ũ_t) ∈ H_0^1(Ω∪Γ_1,Ω),
thus
u̇ = - ∂ u_Ω/∂ν⟨ V, ν⟩ on Γ_Ω.
Finally, since ξ_t maps ∂𝒞 into itself we have that, for all small t and x ∈(∂𝒞∩∂Ω)^reg
⟨∇ u_t(ξ_t(x)), ν(ξ_t(x)) ⟩ = 0.
Differentiating this relation with respect to t and evaluating at t=0 we obtain
0 = ⟨∇u̇(x), ν(x) ⟩ + d_x(⟨∇ u_Ω, ν⟩)[V(x)],
where d_x(⟨∇ u_Ω, ν⟩)[V(x)] is the differential of the function ⟨∇ u_Ω, ν⟩|_(∂𝒞∩∂Ω)^reg computed at x, along V(x). Then, since ⟨∇ u_Ω, ν⟩=0 on (∂𝒞∩∂Ω)^reg, and in view of (<ref>), (<ref>), we infer that u̇ satisfies
{[ - Δu̇ = f'(u_Ω) u̇ in Ω; u̇ = - ∂ u_Ω/∂ν⟨ V, ν⟩ on Γ_Ω; ∂u̇/∂ν = 0 on Γ_1,Ω ].
in the classical sense in the interior of Ω and on the regular part of ∂Ω.
Recalling (<ref>) we can write
T(Ω_t) = ∫_Ω_t1/2(f(u_t)u_t - F(u_t) ) dx.
Since t↦ f(u_t)u_t - F(u_t) is differentiable at t=0 and ∂Ω is Lipschitz, and taking into account that u_Ω∈ W^1, ∞(Ω) ∩ W^2, 2(Ω), we can apply <cit.> and compute the derivative of the functional T with respect to t, obtaining
. d/dt|_t = 0 T(Ω_t)
= 1/2∫_Ω (f'(u_Ω)u̇ u_Ω + f(u_Ω) u̇) dx - ∫_Ω f(u_Ω) u̇ dx
+ ∫_∂Ω( 1/2f(u_Ω)u_Ω - F(u_Ω)) ⟨ V, ν⟩ d σ
= 1/2∫_Ω(f'(u_Ω) u̇ u_Ω - f(u_Ω)u̇) dx
= 1/2∫_Ω( (- Δu̇) u_Ω + Δ u_Ω u̇) dx
= 1/2∫_∂Ω( u̇∂ u_Ω/∂ν - u_Ω∂u̇/∂ν) d σ
= - 1/2∫_Γ_Ω |∇ u_Ω|^2 ⟨ V, ν⟩ d σ.
The previous applications of the Divergence Theorem are justified by arguing as in <cit.>, where the regularity hypothesis on u_Ω comes into play.
It is not difficult to see that u̇ is also a weak solution of (<ref>). Indeed, let φ∈ C_c^∞(Ω∪Γ_1,Ω). Then, for all sufficiently small t, we also have φ∈ C_c^∞(Ω_t ∪Γ_1, t). Hence, since u_t is a weak solution to (<ref>), we have
0 = ∫_Ω_t∇ u_t ∇φ dx - ∫_Ω_t f(u_t) φ dx = ∫_Ω∇ u_t ∇φ dx - ∫_Ω f(u_t) φ dx.
Now, as proved in <cit.>, it holds that
. d/dt|_t = 0∇ u_t = ∇u̇.
Then, taking the derivative with respect to t in (<ref>), evaluating at t=0, and since φ is arbitrary, we easily conclude.
Let us now consider domains Ω⊂𝒞 of fixed measure c>0 and define
𝒜 := {Ω⊂𝒞 : Ω is admissible and |Ω| = c},
where admissible means that Ω⊂𝒞 is a bounded domain with smooth relative boundary Γ_Ω∂Ω∩𝒞, ∂Γ_Ω is a smooth (N-2)-dimensional manifold and Γ_1, Ω∂Ω∖Γ_Ω is such that ℋ^N - 1(Γ_1, Ω) > 0. We consider vector fields that induce deformations that preserve the volume. More precisely we take a one-parameter family of diffeomorphisms ξ_t, t∈ (-η,η), associated to a smooth vector field V such that V(x) ∈ T_x∂𝒞^reg for all x ∈∂𝒞^reg, and satisfying the condition |Ω_t| = |Ω|, for all t∈ (-η,η), where Ω_t = ξ_t(Ω).
We say that the pair (Ω, u_Ω) is energy-stationary under a volume constraint if
.d/dt|_t = 0 T(Ω_t) = 0
for any vector field tangent to ∂𝒞 such that the associated one-parameter family of diffeomorphisms preserves the volume.
A characterization of energy-stationary pairs in 𝒞 is the following:
Let Ω∈𝒜 and assume that u_Ω∈ W^1, ∞(Ω) ∩ W^2, 2(Ω) is a nondegenerate positive solution of (<ref>). Then (Ω, u_Ω) is energy-stationary under a volume constraint if and only if u_Ω satisfies the overdetermined condition |∇ u_Ω| = constant on Γ_Ω.
Let ξ_t be an arbitrary admissible one-parameter family of diffeomorphisms and let V be the associated vector field. Since the volume is preserved and V(x) ∈ T_x∂𝒞 on ∂𝒞,
0 = . d/dt|_t = 0 |Ω_t| = ∫_∂Ω⟨ V, ν⟩ d σ = ∫_Γ_Ω⟨ V, ν⟩ d σ.
If |∇ u_Ω| is constant on Γ_Ω, then (Ω, u_Ω) is energy-stationary, in view of (<ref>) and (<ref>). On the other hand, if (Ω, u_Ω) is energy stationary, then
∫_Γ_Ω (|∇ u_Ω|^2 - a) ⟨ V, ν⟩ d σ = 0
for every constant a and every admissible vector field V. Assume by contradiction that |∇ u_Ω| is not constant on Γ_Ω. Then there exists a compact set K ⊂Γ_Ω, with nonempty interior part, such that |∇ u_Ω| is not constant on K. Take a nonnegative cutoff function Θ such that Θ≡ 1 in K, and choose
a = ∫_Γ_ΩΘ |∇ u_Ω|^2 dσ/∫_Γ_ΩΘ dσ.
Then we can build a deformation from the vector field V = (|∇ u_Ω|^2 - a) Θν, and in this case, since (Ω, u_Ω) is energy stationary, we would have
∫_K (|∇ u_Ω|^2 - a)^2 d σ = 0,
which contradicts the fact that |∇ u_Ω| is not constant on K. The proof is complete.
It is relevant to observe that all concepts introduced in this section apply to the case when Γ_1, Ω is empty, or, equivalently, when 𝒞 = ℝ^N. Thus all the above results hold for Dirichlet problems in domains in the whole space. In this case it is known, by Serrin's Theorem (see <cit.>) that if a positive solution for the overdetermined problem
{[ - Δ u = f(u) in Ω; u = 0 on ∂Ω; ∂ u/∂ν = constant on ∂Ω ].
exists, then Ω is a ball. Therefore, in view of Proposition <ref>, it follows that the only energy-stationary pairs in ℝ^N are (B, u_B), where B is a ball and u_B is a nondegenerate positive solution.
We observe that all the results in this section also hold for non-degenerate sign-changing solutions u_Ω of (<ref>). However, since in the sequel we study stability in the case of positive solutions, we have considered only this case.
§ THE CASE OF THE CONE
Let D ⊂𝕊^N - 1 be a smooth domain on the unit sphere and let Σ_D be the cone spanned by D, which is defined as
Σ_D := {x ∈ℝ^N : x = t q, q ∈ D, t > 0}.
In Σ_D we consider admissible domains Ω, in the sense of (<ref>), that are strictly star-shaped with respect to the vertex of the cone, which we choose to be the origin 0 in ℝ^N. In other words, we consider domains whose relative boundary is the radial graph in Σ_D of a function in C^2(D, ℝ). Hence for φ∈ C^2(D, ℝ) we set
Γ_φ := {x ∈ℝ^N : x = e^φ(q)q, q ∈ D}
and consider the strictly star-shaped domain Ω_φ defined as
Ω_φ := {x ∈ℝ^N : x = tq, 0 < t < e^φ(q), q ∈ D}.
To simplify the notation we set
Γ_1, φ := Γ_1, Ω_φ = ∂Ω_φ∖Γ_φ.
§.§ Energy functional for star-shaped domains
In Ω_φ we consider the semilinear elliptic problem
{[ - Δ u = f(u) in Ω_φ; u = 0 on Γ_φ; ∂ u/∂ν = 0 on Γ_1, φ∖{0} ].
and assume throughout this section that a bounded positive nondegenerate solution u_Ω_φ exists and belongs to W^1, ∞(Ω_φ) ∩ W^2, 2(Ω_φ). Then we can apply the results of Section <ref> and define the energy functional T as in (<ref>) for small variations of Ω_φ. Since Ω_φ is strictly star-shaped, this property also holds for the domains obtained by small regular deformations. Thus it is convenient to parametrize the domains and their variations by C^2 functions defined on D. Hence, for v ∈ C^2(D, ℝ) and t ∈ (- η, η), where η > 0 is a fixed number sufficiently small, we consider the domain variations Ω_φ + tv⊂Σ_D.
Let ξ:(- η, η) ×Σ_D∖{0}→Σ_D∖{0} be the map defined by
ξ(t, x) = e^t v(x/|x|)x.
Then ξ|_Ω_φ(t, ·) : Ω_φ→Ω_φ + tv is a diffeomorphism, whose inverse is
(ξ|_Ω_φ)^-1(t, x) = e^-t v (x/|x|)x = ξ(-t, x).
By definition, ξ(t, x) ∈∂Σ_D ∖{0} for all (t, x) ∈ (- η, η) × (∂Σ_D ∖{0}) and ξ is the flow associated to the vector field
V(x) = v(x/|x|)x,
since ξ(0, x) = x and
d/dtξ(t, x) = e^tv(x/|x|) v(x/|x|)x = V(ξ(t, x)).
The energy functional T in (<ref>) becomes a functional defined on functions in C^2(D, ℝ). More precisely, we define, for every v ∈ C^2(D, ℝ),
T(φ + tv) := T(Ω_φ + tv) = J(u_φ + tv),
for t ∈ (- δ, δ) with δ > 0 small, where
u_φ + tv := u_Ω_φ + tv
is the unique positive solution of (<ref>) in the domain Ω_φ + tv, in a neighborhood of u_φ∘ξ(t, ·)^-1.
We now compute the first derivative of the functional T at φ along a direction v ∈ C^2(D, ℝ), i.e. the derivative with respect to t of (<ref>) computed at t=0.
Let φ∈ C^2(D, ℝ) and assume that u_φ is a bounded positive nondegenerate solution to (<ref>) and that u_φ belongs to W^1, ∞(Ω_φ) ∩ W^2, 2(Ω_φ). Then for any v ∈ C^2(D, ℝ) it holds that
T'(φ)[v] = - 1/2∫_D (∂ u_φ/∂ν(e^φ q) )^2 e^N φ v d σ
The result follows from Proposition <ref>. Indeed, since the exterior unit normal to Γ_φ is given by
ν (x)= x/|x| - ∇_𝕊^N - 1φ(x/|x|)/√(1 + |∇_𝕊^N - 1φ(x/|x|)|^2), x∈Γ_φ,
where ∇_𝕊^N - 1 is the gradient in 𝕊^N - 1 (see <cit.>), then, from (<ref>), it follows that
⟨ V, ν⟩ = |x|/√(1 + |∇_𝕊^N - 1φ(x/|x|)|^2) v (x/|x|) on Γ_φ.
Hence, using the parametrization x = e^φ(q) q, for q ∈ D, taking into account that the induced (N-1)-dimensional area element on Γ_φ is given by
d σ_Γ_φ = e^(N - 1)φ√(1 + |∇_𝕊^N - 1φ|^2) d σ,
and since u_φ=0 on Γ_φ, then, from (<ref>), we readily obtain (<ref>).
The next step is to compute the second derivative of T at Ω_φ with respect to directions v, w ∈ C^2(D, ℝ)
Let φ and u_φ be as in Lemma <ref>. Then for any v, w ∈ C^2(D, ℝ) it holds
T”(φ)[v, w]
= - N/2∫_D e^N φ v w (∂ u_φ/∂ν(e^φ q) )^2 d σ
- ∫_D e^N φ v ∂ u_φ/∂ν(e^φ q) ∂u_w/∂ν(e^φ q) d σ
- ∫_D e^N φ v w ∂ u_φ/∂ν(e^φ q) (D^2u_φ(e^φ q) e^φ q) ·ν d σ
+ ∫_D e^N φ v ∂ u_φ/∂ν(e^φ q) ∇ u_φ(e^φ q) ·∇_𝕊^N - 1 w/√(1 + |∇_𝕊^N - 1φ|^2) d σ
+ ∫_D e^N φ(∂ u_φ/∂ν(e^φ q) )^2 ∇_𝕊^N - 1φ·∇_𝕊^N - 1 w/1 + |∇_𝕊^N - 1φ|^2 d σ,
where u_w := d/ds|_s = 0 u_φ + sw satisfies (<ref>) with V(x) = w(x/|x|)x.
The proof is the same as that of <cit.> and therefore we omit it.
In view of Definition <ref>, we are interested in studying pairs (Ω_φ, u_φ) which are energy-stationary under a volume constraint. Thus we need to consider domains Ω_φ with a fixed volume. We recall that the volume of the domain defined by the radial graph of a function φ∈ C^2(D, ℝ) is given by
𝒱(φ) := 𝒱(Ω_φ) = |Ω_φ| = 1/N∫_D e^Nφ dσ.
Simple computations yield, for v, w ∈ C^2(D, ℝ):
𝒱'(φ)[v] = ∫_D e^N φ v dσ, 𝒱”(φ)[v, w] = N ∫_D e^Nφ v w d σ.
Then, for c > 0 we define the manifold
M := {φ∈ C^2(D, ℝ) : 𝒱(φ) = c},
whose tangent space at any point φ∈ M is given by
T_φ M = {v ∈ C^2(D, ℝ) : ∫_D e^N φ v dσ = 0 }.
We restrict the energy functional to the manifold M and denote it by
I(φ) := T|_M(φ).
Clearly, if the pair (Ω_φ, u_φ) is energy-stationary under a volume constraint, in the sense of Definition <ref>, then φ∈ M is a critical point of I. Hence, by the Theorem of Lagrange multipliers, there exists μ∈ℝ such that
T'(φ) = μ𝒱'(φ).
Moreover, the following result holds true:
Let φ∈ M such that (Ω_φ, u_φ) is energy-stationary under the volume constraint. Then the Lagrange multiplier μ is negative and
∂ u_φ/∂ν = - √(- 2 μ) on Γ_φ.
The proof is the same as in <cit.>
For the second derivative of I we have
Let φ∈ M and let v, w ∈ T_φ M. If (Ω_φ, u_φ) is energy-stationary under the volume constraint, then
I”(φ)[v, w] = T”(φ)[v, w] - μ𝒱”(φ)[v, w].
The proof is the same as in <cit.>.
§.§ Spherical sectors and radial solutions
Given a cone Σ_D we consider the spherical sector Ω_D obtained by intersecting Σ_D with the unit ball B_1. Obviously its relative boundary Γ_Ω_D is the radial graph obtained by taking φ≡ 0 in (<ref>), which is in fact the domain D which spans the cone, that is Γ_Ω_D = D.
In the spherical sector Ω_D we would like to consider a nondegenerate positive radial solution u_D := u_Ω_D of (<ref>), hence we first recall conditions on the nonlinearity f which ensure that a positive radial solution of (<ref>) in Ω_D exists. Observe that such u_D is just the restriction to Ω_D of a positive radial solution of the Dirichlet problem
{[ - Δ u = f(u) in B_1; u = 0 on ∂ B_1 ].
Let f : ℝ→ℝ be a locally Lipschitz continuous function. Assume that f satisfies one of the following:
* |f(s)| ≤ a|s| + b for all s > 0, where b > 0 and a ∈ (0, μ_1(B_1)), where μ_1(B_1) is the first eigenvalue of the operator - Δ in H_0^1(B_1).
* f:[0, + ∞) → [0, + ∞) is non-increasing.
*
* |f(s)| < c|s|^p + d, where c, d > 0 and p ∈(1, N + 2/N - 2) if N ≥ 3, p > 1 if N = 2;
* f(s) = o(s) as s → 0;
* There exist γ > 2, κ > 0 such that for |s| > κ it holds
0 < γ F(s) < s f(s);
* f'(s) > f(s)/s for all s > 0.
Then a radial positive solution of (<ref>) in B_1, and hence of (<ref>) in Ω_D, exists.
In cases (i) and (ii), the corresponding functional
J(u) = 1/2∫_B_1 |∇ u|^2 dx - ∫_B_1 F(u) dx
is coercive and weakly lower semicontinuous in the space H^1_0, rad(B_1), which is the subspace of H_0^1(B_1) of radial functions, and so it has a minimum which is a solution of (<ref>). In the case (iii) standard variational methods, such as minimization on the Nehari manifold or Mountain Pass type theorems give a positive solution of (<ref>), which is then radial by the Gidas-Ni-Nirenberg Theorem (see <cit.>). We refer to <cit.> and <cit.> for the details.
We point out that a radial solution u_D is always a classical solution of (<ref>) in B_1, and hence in Ω_D. In particular, u_D is bounded and u_D ∈ C^2(B_1)
Now we would like to study the nondegeneracy of a radial solution u_D of (<ref>) in Ω_D.
As recalled in Section <ref>, we need conditions that ensure that zero is not an eigenvalue of the linearized operator
L_u_D = - Δ - f'(u_D)
in the space H_0^1(Ω_D ∪Γ_1, 0), where Γ_1, 0 = ∂Ω_D∖Γ_Ω_D. Obviously, if the linearized operator L_u_D admits only positive eigenvalues, then u_D is nondegenerate. This is the case of stable solutions of (<ref>), which occur when f satisfies conditions (i) or (ii) in Proposition <ref>, in particular, if f is a constant.
In general, L_u_D could have negative eigenvalues, so to detect the nondegeneracy of u_D we have to analyze the spectrum of the linear operator (<ref>) in H_0^1(Ω_D ∪Γ_1, 0). As we will see, the fact that Ω_D is a spherical sector in the cone Σ_D (and not the ball B_1) plays a role.
The first remark is that zero is an eigenvalue for L_u_D if and only if it is an eigenvalue for the following singular problem:
{[ - Δψ - f'(u_D) ψ = Λ/|x|^2ψ in Ω_D; ψ = 0 on D; ∂ψ/∂ν = 0 on Γ_1, 0∖{0}. ].
Therefore we investigate the eigenvalues of (<ref>). The advantage of considering this singular eigenvalue problem is that, since u_D is radial, its eigenfunctions can be obtained by separation of variables, using polar coordinates in ℝ^N. To this aim we denote by {λ_j(D)}_j ∈ℕ, the eigenvalues of the Laplace-Beltrami operator - Δ_𝕊^N - 1 on the domain D with Neumann boundary conditions. It is well-known that
0 = λ_0(D) < λ_1(D) ≤λ_2(D) ≤…,
and the only accumulation point is + ∞. Then we consider the following singular eigenvalue problem in the interval (0, 1):
{[ - z” - N - 1/r z' - f'(u_D) z = ν/r^2 z in (0, 1); z(1) = 0 ].
It is shown in <cit.> (see also <cit.>) that nonpositive eigenvalues for (<ref>) can be defined. There are finitely many of them, and we denote them by ν_i, i = 1, …, k. It is immediate to check that the eigenvalues ν_i are the eigenvalues of (<ref>) which correspond to radial eigenfunctions. In particular, we consider the first eigenvalue ν_1 of (<ref>), referring to <cit.> for a variational definition and a study of its main properties.
By using (<ref>)-(<ref>) we obtain the following result:
The problem (<ref>) admits zero as an eigenvalue if and only if there exist i∈ℕ^+ and j∈ℕ such that
ν_i + λ_j(D) = 0.
The proof follows by <cit.>, where it is proved that the nonpositive eigenvalues of (<ref>) are obtained by summing the eigenvalues of the one-dimensional problem (<ref>) and the Neumann eigenvalues of - Δ_𝕊^N - 1 on D. We refer also to <cit.> for another approach, which consists in approximating the ball by annuli in order to avoid the singularity at 0.
From Proposition <ref> we get the following sufficient condition for a radial solution u_D to be nondegenerate.
A radial solution u_D of (<ref>) in Ω_D (i.e. for φ=0) is nondegenerate if both the following conditions are satisfied:
* the eigenvalue problem (<ref>) does not admit zero as an eigenvalue;
* λ_1(D) > - ν_1.
From Condition (I) we have
ν_i ≠ 0 ∀ i∈ℕ^+,
which means that zero is not an eigenvalue of (<ref>) with a corresponding radial eigenfunction. This, in turn, is equivalent to saying that zero is not a “radial" eigenvalue of the linearized operator (<ref>), i.e., u_D is a radial solution of (<ref>) in Ω_D (or of (<ref>) in B_1) which is nondegenerate in the subspace H_0, rad^1(Ω_D ∪Γ_1, 0), which is the subspace of H_0^1(Ω_D ∪Γ_1, 0) given by radial functions.
Now, since λ_0(D) = 0, λ_1(D) > 0 and since ν_1 is the smallest eigenvalue of (<ref>), then, from Condition (II) and (<ref>) we infer that the sum (<ref>) can never be zero. Hence, thanks to Proposition <ref>, we have that zero is not an eigenvalue of (<ref>) and so cannot be an eigenvalue for the linearized operator (<ref>) in the whole H_0^1(Ω_D ∪Γ_1, 0), i.e. u_D is a non-degenerate solution to (<ref>) in Ω_D.
Condition (I) in Corollary <ref>, i.e., the nondegeneracy of u_D in the space H^1_0, rad(Ω_D ∪Γ_1, 0), is satisfied by positive radial solutions of (<ref>) corresponding to many kinds of nonlinearities.
It holds if f satisfies conditions (i) or (ii) of Proposition <ref>, because in this case all the eigenvalues of (<ref>) and of (<ref>) are positive. It then follows that (II) holds as well. More precisely, in the case (i), since 0<a < μ_1(B_1), the first eigenvalue of L_u_D is positive, so
λ_0(D) + ν_1 = ν_1 > 0.
In the case (ii), since f'(u_D) ≤ 0, it follows that ν_1 > 0.
Among the nonlinearities satisfying condition (iii) of Proposition <ref> we could consider f(u) = u^p, 1 < p < N + 2/N - 2, N ≥ 3. Then it is known that the positive radial solution of (<ref>) is unique and nondegenerate (see <cit.>), so (I) holds. It is also well-known that for this nonlinearity it holds ν_1 < 0 and ν_1 is the only negative eigenvalue of (<ref>), because u_D can be obtained by the Mountain Pass Theorem or by minimization on the Nehari manifold and thus it has Morse index one. Then the validity of (II) depends on the cone, since it depends on λ_1(D). However, once p is fixed, since ν_1 does not depend on the cone, it is obvious that, by varying D, there are many cones for which (II) holds. Moreover, it has been proved in <cit.> that ν_1 > - (N - 1) for every autonomous nonlinearity, so that whenever λ_1(D) > N - 1 all radial solutions of (<ref>) are nondegenerate.
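To illustrate the role of ν_1, the following Python sketch (not taken from the paper; the dimension N = 3, the exponent p = 3, the inner radius of the annulus and all grid sizes are assumptions made only for this example) approximates ν_1 for the Lane-Emden nonlinearity via the annular approximation mentioned above: the radial profile u_D is obtained by shooting and rescaling, and the singular one-dimensional problem is discretized in its Sturm-Liouville form on (ε, 1) with zero boundary values (one possible choice at the inner boundary).

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import eigh

N, p = 3, 3                      # dimension and Lane-Emden exponent (assumed values)

# Radial Lane-Emden profile in the unit ball: shoot v'' + (N-1)/r v' + v^p = 0,
# v(0) = 1, v'(0) = 0, locate its first zero R, and set u_D(r) = R^(2/(p-1)) v(R r).
def rhs(r, y):
    return [y[1], -(N - 1) / r * y[1] - max(y[0], 0.0) ** p]
def first_zero(r, y):
    return y[0]
first_zero.terminal, first_zero.direction = True, -1

r0 = 1e-6
sol = solve_ivp(rhs, [r0, 50.0], [1.0 - r0**2 / (2 * N), -r0 / N],
                events=first_zero, dense_output=True, rtol=1e-10, atol=1e-12)
R = sol.t_events[0][0]

eps, n = 1e-3, 2000              # inner radius of the annulus and number of grid cells
r = np.linspace(eps, 1.0, n + 1)
h = r[1] - r[0]
u = R ** (2.0 / (p - 1)) * sol.sol(R * r)[0]
fp = p * u ** (p - 1)            # f'(u_D)

# Sturm-Liouville form -(r^(N-1) z')' - r^(N-1) f'(u_D) z = nu r^(N-3) z on (eps, 1),
# with z(eps) = z(1) = 0, discretized as a symmetric generalized eigenvalue problem.
w = (0.5 * (r[:-1] + r[1:])) ** (N - 1)
A = np.zeros((n - 1, n - 1))
for i in range(1, n):
    A[i - 1, i - 1] = (w[i - 1] + w[i]) / h**2 - r[i] ** (N - 1) * fp[i]
    if i > 1:
        A[i - 1, i - 2] = -w[i - 1] / h**2
    if i < n - 1:
        A[i - 1, i] = -w[i] / h**2
B = np.diag(r[1:n] ** (N - 3))
nu1 = eigh(A, B, eigvals_only=True)[0]
print(nu1, -(N - 1))             # one expects nu_1 > -(N-1), in line with the remark above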
§.§ Stability of (Ω_𝐃, 𝐮_𝐃)
Let us first observe that if u_D is a positive nondegenerate radial solution of (<ref>) for φ = 0, belonging to W^1, ∞(Ω_D) ∩ W^2, 2(Ω_D), then (Ω_D, u_D) is energy-stationary in the sense of Definition <ref>. Indeed, since u_D is radial, we have that ∂ u_D/∂ν = constant on Γ_0 = D and thanks to Proposition <ref> we easily conclude.
To investigate the stability of (Ω_D, u_D) we analyze the quadratic form corresponding to the second derivative I”(φ) at φ = 0. Fixing the constant c in the definition of M (see (<ref>)) as c = |Ω_D|, we have that the tangent space to M at φ = 0 is given by
T_0 M = {v ∈ C^2(D, ℝ) : ∫_D v dσ = 0 }.
Writing u_D(r) = u_D(|x|), we denote by u_D' and u_D” the derivatives of u_D with respect to r, so that
u_D'(1) = .∂ u_D/∂ν|_D, u_D”(1) = [(D^2u_D ν) ·ν] |_D.
By Hopf's Lemma we know that u_D'(1) < 0 and actually
u_D'(1) = - √(- 2 μ_D),
where μ_D denotes the Lagrange multiplier in the case φ = 0, see (<ref>).
For v ∈ T_0 M, we will denote by u_v the solution of
{[ - Δu_v - f'(u_D) u_v = 0 in Ω_D; u_v = - u_D'(1) v on D; ∂u_v/∂ν = 0 on Γ_1, 0∖{0} ].
.
Let us remark that for every q ∈ D the outer unit normal vector ν(q) is precisely q, hence (<ref>) corresponds to (<ref>) in Ω_D.
Note that, since u_D is a nondegenerate radial solution, then the weak solution u_v of (<ref>) is unique for every v.
Our next result shows that the quadratic form corresponding to the second derivative of I at φ = 0 has a simple expression.
For any v ∈ T_0M it holds
I”(0)[v, v] = - u_D'(1)(∫_D v ∂u_v/∂ν d σ + u_D”(1) ∫_D v^2 dσ),
where u_v is the solution of (<ref>).
From Lemma <ref>, (<ref>) and Lemma <ref>, with w = v, by simple substitutions and elementary computations we obtain:
I”(0)[v, v]
= - N/2∫_D (u_D'(1))^2 v^2 dσ - ∫_D u_D'(1) v ∂u_v/∂ν dσ
- ∫_D u_D'(1) v^2 (D^2u_D ν)·ν dσ - N μ_D ∫_D v^2 dσ.
Since u_v = -u_D'(1) v on D, by (<ref>) and (<ref>), we deduce that
- N/2∫_D (u_D'(1))^2 v^2 dσ = - N/2∫_D u_v^2 dσ;
- N μ_D ∫_D v^2 dσ = N/2∫_D u_v^2 dσ.
Then (<ref>) follows by substituting (<ref>)-(<ref>) into (<ref>).
To investigate the stability of (Ω_D, u_D) as an energy stationary pair for I we need to study the solution u_v of (<ref>), for any v ∈ T_0M (that is, for functions with mean value zero on D). As we will see, it will be enough to consider only functions v which are eigenfunctions of the Laplace-Beltrami operator - Δ_𝕊^N - 1 with Neumann boundary conditions on D. Hence we consider the eigenvalue problem
{[ - Δψ = λψ on D; ∂ψ/∂ν = 0 on ∂ D ].
and denote its eigenvalues as in (<ref>), counted with multiplicity: 0 = λ_0(D) < λ_1(D)≤λ_2(D) ≤…. The corresponding L^2-normalized eigenfunctions are denoted by {ψ_j}_j ∈ℕ, with ∫_D ψ_j^2 d σ = 1, ψ_0 = constant and ∫_D ψ_j dσ = 0 for j ≥ 1.
Let j ≥ 1 and u_j be the unique solution of (<ref>) for v = ψ_j. Then, writing u_j=u_j(r,q), the function
h_j(r) = ∫_D u_j(r, q) ψ_j(q) dσ, r ∈ (0, 1)
satisfies
{[ - h_j” - N - 1/r h_j' - f'(u_D) h_j = - λ_j(D)/r^2 h_j in (0, 1); h_j(1) = -u_D'(1) ].
Since the proof is the same for all j, we drop the index and the dependence on D and write simply h, ψ and λ.
It is immediate to check that h(1) = - u_D'(1). Moreover, since we can bring the radial derivative inside the integral on D, for every r ∈ (0, 1] we have:
- h”(r) - N - 1/r h'(r)
= ∫_D (- u_rr(r, q) - N - 1/ru_r(r, q) ) ψ(q) d σ
= ∫_D (- Δu + 1/r^2Δ_𝕊^N - 1u) ψ d σ
= ∫_D f'(u_D(r)) uψ dσ + 1/r^2∫_D (Δ_𝕊^N - 1u) ψ dσ.
Now, on the one hand,
∫_D f'(u_D(r)) uψ dσ = f'(u_D(r)) h(r).
On the other hand, applying Green's formula, taking into account the Neumann conditions on ψ and u, we infer that
1/r^2∫_D (Δ_𝕊^N - 1u) ψ d σ = 1/r^2∫_D uΔ_𝕊^N - 1ψ dσ
= - λ/r^2∫_D uψ dσ
= - λ/r^2 h(r).
Substituting (<ref>) and (<ref>) into (<ref>) we conclude the proof.
Note that with u_j and h_j as in Theorem <ref> we have that
u_j(r, q) = h_j(r) ψ_j(q).
Indeed, the boundary conditions are clearly satisfied by this function, and it holds
- Δ (h_j ψ_j)
= - h_j”ψ_j - (N - 1)/r h_j' ψ_j - h_j/r^2Δ_𝕊^N - 1ψ_j
= f'(u_D) h_j ψ_j - λ_j(D)/r^2 h_j ψ_j + λ_j(D)/r^2 h_j ψ_j
= f'(u_D) h_j ψ_j.
Let N ≥ 3. For any j ≥ 1 we have
∫_0^1 r^N - 3 h_j^2 dr < + ∞
and
∫_0^1 r^N - 1 (h_j')^2 dr < + ∞.
Moreover, h_j ∈ L^∞(0, 1) and h_j(0) = 0.
Again, for simplicity, we drop the index j. Since u∈ H^1(Ω_D) (see Sect. <ref>), writing u=u(r,q) and recalling that ψ is an L^2(D)-normalized solution to (<ref>), we get that
+ ∞ > ∫_Ω_D|∇u|^2 dx
= ∫_0^1 r^N - 1 (h')^2 ∫_D ψ^2 dσ dr + ∫_0^1 r^N - 3 h^2 ∫_D |∇_𝕊^N - 1ψ|^2 dσ dr
= ∫_0^1 r^N - 1(h')^2 dr + λ∫_0^1 r^N - 3 h^2 dr,
which proves (<ref>) and (<ref>). Once we have these estimates, we can proceed as in <cit.> to get the boundedness of h and h(0) = 0.
Let λ_j(D), j ≥ 1 be a nontrivial Neumann eigenvalue of - Δ_𝕊^N - 1 on D. Assume that
- ν_1 < λ_j(D),
where ν_1 is the smallest eigenvalue of (<ref>). Then for the solution h_j of (<ref>) it holds that
h_j > 0 in (0, 1).
Let z_1 be an L^2-normalized first eigenfunction of (<ref>). From <cit.> we know that z_1 does not change sign.
Writing the equations satisfied by h_j and z_1 in Sturm-Liouville form we have:
(r^N - 1 h_j')' + r^N - 1(f'(u_D) - r^- 2λ_j(D))h_j = 0,
(r^N - 1 z_1')' + r^N - 1(f'(u_D) + r^-2ν_1)z_1 = 0.
By Proposition <ref> we know that h_j(0) = 0 and h_j(1) = -u_D'(1)>0.
Now, assume by contradiction that h_j changes sign in (0, 1). Then there would exist r_0 ∈ (0, 1) such that h_j(r_0) = 0. Since - ν_1 < λ_j(D), then, by the Sturm-Picone Comparison Theorem it would follow that z_1 has a zero in (0, r_0). This is a contradiction, because z_1 does not change sign. Hence the only possibility is that h_j is strictly positive in (0, 1).
We are ready to prove our main result for problem (<ref>) in the case of the cone, i.e., Theorem <ref>, which is a sharp instability/stability result for the pair (Ω_D, u_D).
§.§ Proof of Theorem <ref>
Let us fix the domain D which spans the cone, so that we denote λ_1(D) simply by λ_1.
For (i), let u_1 = h_1 ψ_1 be the solution of (<ref>) with v = ψ_1. Then
I”(0)[ψ_1, ψ_1] = - u_D'(1) (h_1'(1) + u_D”(1)).
Putting (<ref>) in Sturm-Liouville form we get
- (r^N - 1 h_1')' - r^N - 1f'(u_D)h_1 = - r^N - 3λ_1 h_1.
On the other hand, writing - Δ u_D = f(u_D) in polar coordinates and differentiating with respect to r = |x| we get
- (u_D')” - N - 1/r (u_D')' - f'(u_D) u_D' = - N - 1/r^2 u_D',
which in Sturm-Liouville form is
- (r^N - 1 u_D”)' - r^N - 1 f'(u_D)u_D' = - r^N - 3 (N - 1) u_D'.
Multiplying (<ref>) by u_D' and integrating by parts in (r̅, 1) we get that
[ ∫_r̅^1 r^N - 1 h_1' u_D” dr - (r^N - 1 h_1' u_D')|_r̅^1- ∫_r̅^1 r^N - 1 f'(u_D) h_1 u_D' dr; = - λ_1 ∫_r̅^1 r^N - 3 h_1 u_D' dr. ]
Similarly, multiplying (<ref>) by h_1 and integrating by parts we deduce that
[ ∫_r̅^1 r^N - 1 h_1' u_D” dr - (r^N - 1 h_1 u_D”)|_r̅^1 - ∫_r̅^1 r^N - 1 f'(u_D) u_D' h_1 dr; = -(N - 1) ∫_r̅^1 r^N - 3 u_D' h_1 dr. ]
Notice that, in view of Proposition <ref>, the right-hand sides of (<ref>), (<ref>) remain finite when taking the limit as r̅→ 0^+. In addition, we claim that
lim_r̅→ 0^+r̅^N - 1 h_1'(r̅) u_D'(r̅) = 0.
Indeed, integrating (<ref>) and taking the absolute value we obtain
|∫_r̅^1 - (r^N - 1 h_1')' dr |
= |r̅^N - 1 h_1'(r̅) - h_1'(1)|
≤∫_r̅^1 r^N - 1 |f'(u_D)| h_1 dr + ∫_0^1 r^N - 3λ_1 h_1 dr
≤ C_1
for some C_1 > 0. Hence
lim sup_r̅→ 0^+r̅^N - 1|h_1'(r̅)| ≤ C_2
for some C_2>0, and thus, since lim_r̅→ 0^+ u_D'(r̅) = 0, (<ref>) follows.
Now, subtracting (<ref>) from (<ref>) and taking the limit as r̅→ 0^+, then, thanks to (<ref>) and since h_1(0)=0, h_1(1) = - u_D'(1), we obtain
- u_D'(1)(h_1'(1) + u_D”(1)) = (N - 1 - λ_1) ∫_0^1 r^N - 3 h_1 u_D' dr.
Since λ_1 > -ν_1, then, by Proposition <ref>, we have that h_1 > 0 in (0,1). On the other hand u_D' < 0 in (0, 1) and λ_1 < N - 1 by assumption. Hence by (<ref>) and (<ref>) we obtain
I”(0)[ψ_1, ψ_1] < 0,
which proves (i).
For (ii), we choose an orthonormal basis (ψ_j)_j of L^2(D) made of normalized eigenfunctions of (<ref>). Then any v ∈ T_0M can be written as
v = ∑_j = 1^∞ (v, ψ_j) ψ_j,
where (·, ·) denotes the inner product in L^2(D). We assume without loss of generality that ∫_D v^2 dσ = 1. Let u_j be the solution of (<ref>) with v = ψ_j, then we can check that
u_v = ∑_j = 1^∞ (v, ψ_j) u_j
is the solution of (<ref>). As observed in Remark <ref>, u_j(r,q) = h_j(r) ψ_j(q) for every j ∈ℕ, so
∂u_j/∂ν(1,q) = h_j'(1) ψ_j(q) on D.
By an argument analogous to the one presented in the proof of (i), we have that if k > j, then h_k'(1) ≥ h_j'(1) and in fact h_k'(1) > h_j'(1) if k>j are such that λ_k > λ_j.
Indeed, writing the equations for h_j, h_k, multiplying the first one by h_k and the second one by h_j, integrating by parts and subtracting we get
- u_D'(1) (h_k'(1) - h_j'(1)) = (λ_k - λ_j) ∫_0^1 r^N - 3h_k h_j dr ≥ 0.
Exploiting the orthogonality of the basis (ψ_j)_j and using (<ref>), we obtain
I”(0)[v, v]
= - u_D'(1) (∫_D (∑_j = 1^∞ (v, ψ_j) ψ_j )(∑_k = 1^∞ (v, ψ_k) h_k'(1) ψ_k) dσ + u_D”(1) ∫_D v^2 dσ)
= - u_D'(1) ((∑_j = 1^∞ (v, ψ_j)^2 h_j'(1) ) + u_D”(1))
≥ - u_D'(1) (h_1'(1) (∑_j = 1^∞ (v, ψ_j)^2 ) + u_D”(1) )
= - u_D'(1) (h_1'(1) + u_D”(1))
= (N - 1 - λ_1) ∫_0^1 r^N - 3 h_1 u_D' dr>0,
because h_1>0 in (0,1), u^'_D<0 in (0,1) and λ_1>N-1 by assumption. The proof is complete.
As already pointed out in Remark <ref>, in the case when 𝒞 = ℝ^N, the couples (B, u_B), where B is a ball and u_B is a positive nondegenerate radial solution, are the only energy-stationary pairs. Thus it remains to study the stability of (B, u_B) as critical point of the energy functional T. This can be done by looking at the problem as the case of a cone spanned by the domain D = 𝕊^N - 1.
As observed in Remark <ref>, the first eigenvalue ν_1 of the singular eigenvalue problem (<ref>) is always larger than -(N - 1). On the other hand, it is known that the first nontrivial eigenvalue of the Laplace-Beltrami operator on the whole 𝕊^N - 1 is precisely N - 1. Then any radial solution u_B is nondegenerate and we obtain that the pair (B, u_B) is a semistable stationary-point.
§ THE CASE OF THE CYLINDER
Let ω⊂ℝ^N - 1 be a smooth bounded domain and let Σ_ω be the half-cylinder spanned by ω, namely
Σ_ω := ω× (0, + ∞).
We denote by x = (x', x_N) the points in Σ_ω, where x'=(x_1,…,x_N-1) ∈ω and x_N ≥ 0.
In analogy with the case of the cone, we consider domains whose relative boundaries are the cartesian graphs of functions in C^2(ω). More precisely, for φ∈ C^2(ω) we set
Γ_φ := {(x', x_N) ∈Σ_ω : x_N = e^φ(x')}
and consider domains of the type
Ω_φ = {(x', x_N) ∈Σ_ω : x_N < e^φ(x')}.
Finally, let
Γ_1, φ := (∂Ω_φ∖Γ_φ).
Observe that the outer unit normal vector on Γ_φ at a point (x', e^φ(x')) is given by
ν=ν_φ(x^')= (- e^φ(x^')∇_^N-1φ(x^'), 1)/√(1 + |e^φ(x^')∇_^N-1φ(x^')|^2),
where ∇_^N-1 denotes the gradient with respect to the variables x_1,…,x_N-1.
§.§ Energy functional in cylindrical domains
We study the semilinear elliptic problem
{[ - Δ u = f(u) in Ω_φ; u = 0 on Γ_φ; ∂ u/∂ν = 0 on Γ_1, φ ].
and consider bounded positive weak solutions of (<ref>) in the Sobolev space H_0^1(Ω_φ∪Γ_1,φ), which is the space of functions in H^1(Ω_φ) whose trace vanishes on Γ_φ.
As before, we assume that a bounded nondegenerate positive solution u_φ of (<ref>) exists and belongs to W^1, ∞(Ω_φ) ∩ W^2, 2(Ω_φ), so that we can apply the results of Section <ref>.
We consider variations of the domain Ω_φ in the class of cartesian graphs of the type Ω_φ + tv, for v ∈ C^2(ω), which amounts to considering a one-parameter family of diffeomorphisms ξ:(-η,η)×Σ_ω→Σ_ω of the type
ξ(t, x) = (x', e^t v(x')x_N),
whose inverse, for any fixed t∈(-η, η), is given by
ξ(t, x)^-1 = (x', e^- t v(x') x_N) = ξ(- t, x).
This one-parameter family of diffeomorphisms is generated by the vector field
V(x) = (0^', v(x')x_N),
where 0^':=(0, …, 0)∈^N-1.
Indeed, ξ(0, x) = x for every x ∈Σ_ω,
dξ/dt(t, x) = (0^', e^tv(x')v(x')x_N) = V(ξ(t, x)) ∀ (t,x)∈ (-η,η)×Σ_ω
and ξ(t, x)∈∂Σ_ω, for all (t,x)∈ (-η,η)×∂Σ_ω.
We also observe that, in view of (<ref>), it holds
⟨ V, ν⟩ = ⟨ (0^', v e^φ), (- e^φ∇_^N-1φ, 1)/√(1 + |e^φ∇_^N-1φ|^2)⟩ = v e^φ/√(1 + |e^φ∇_^N-1φ|^2) on Γ_φ.
The energy functional T defined in (<ref>) becomes a functional depending only on functions in C^2(ω). More precisely, for every v ∈ C^2(ω), in view of Proposition <ref>, there exists δ > 0 sufficiently small such that for all t ∈ (- δ, δ)
T(φ + tv) = T(Ω_φ + tv) = J(u_φ + tv),
is well defined, where u_φ + tv := u_Ω_φ + tv is the unique positive solution of (<ref>) in the domain Ω_φ + tv, in a neighborhood of u_φ∘ξ_t^-1.
By the results of Section <ref> we know that the map t ↦ u_φ + tv is differentiable at t = 0, and the derivative u is a weak solution of
{[ - Δu = f'(u_φ) u in Ω_φ; u = - ∂ u_φ/∂ν v e^φ/√(1 + |e^φ∇_^N-1φ|^2) on Γ_φ; ∂u/∂ν = 0 on Γ_1, φ ].
We now compute the first derivative of T at Ω_φ, i.e., for t = 0, with respect to variations v ∈ C^2(ω).
Let φ∈ C^2(ω) and assume that u_φ is a positive nondegenerate solution of (<ref>) which belongs to W^1, ∞(Ω) ∩ W^2, 2(Ω). Then, for any v ∈ C^2(ω) we have
T'(φ)[v] = - 1/2∫_ω(∂ u_φ/∂ν(x^',e^φ))^2 v e^φ dx'.
The proof is similar to that of Lemma <ref>. It suffices to observe that for the parametrization of Γ_φ given by x=(x^',e^φ(x^')), for x^'∈ω, the induced (N-1)-dimensional area element on Γ_φ is expressed by
d σ_Γ_φ = √(1 + |e^φ∇_^N-1φ|^2) dx'.
Then the result follows immediately from Proposition <ref>, taking into account (<ref>).
Let φ and u_φ be as in Lemma <ref>. Then for any v, w ∈ C^2(ω) it holds
T”(φ)[v, w] =
- 1/2∫_ω(∂ u_φ/∂ν(x^',e^φ) )^2 e^φ v w dx'
- ∫_ω∂u_w/∂ν(x^',e^φ) ∂ u_φ/∂ν(x^',e^φ) e^φ v dx'
- ∫_ω∂ u_φ/∂ν(x^',e^φ) [(D^2u_φ(x^',e^φ) (0^',e^φ)) ·ν] vw dx'
+ ∫_ω∂ u_φ/∂ν(x^',e^φ)e^2φ v ∇ u_φ(x^',e^φ) · (w ∇_ℝ^N - 1φ + ∇_ℝ^N - 1 w, 0)/√(1 + |e^φ∇_ℝ^N - 1φ|^2) dx'
+ ∫_ω(∂ u_φ/∂ν(x^',e^φ) )^2 e^3φ v ∇_ℝ^N - 1φ· (w∇_ℝ^N - 1φ + ∇_ℝ^N - 1 w)/1 + |e^φ∇_ℝ^N - 1φ|^2 dx',
where u_w is the solution of (<ref>), with w in the place of v.
Let v, w ∈ C^2(ω). By definition, Lemma <ref> and using the Leibniz rule, we have:
T”(φ)[v, w]
= . d/ds|_s = 0(- 1/2∫_ω(∂ u_φ + sw/∂ν(x^',e^φ + sw))^2 e^φ + sw v dx' )
= - ∫_ω e^φ v ∂ u_φ/∂ν. d/ds|_s = 0(∂ u_φ + sw/∂ν(x^',e^φ + sw) ) dx'
- 1/2∫_ω(∂ u_φ/∂ν(x^',e^φ) )^2 e^φ v w dx'.
To conclude it suffices to compute the derivative in the first integral of the right-hand side of (<ref>). To this end we observe that
. d/ds|_s = 0(∂ u_φ + sw/∂ν(x^',e^φ + sw) )
= . d/ds|_s = 0(∇ u_φ + sw(x^',e^φ + sw) ·ν_φ + sw)
= . d/ds|_s = 0 (∇ u_φ + sw(x', e^φ + sw)) ·ν_φ
+ ∇ u_φ (x^', e^φ)·. d/ds|_s = 0ν_φ + sw
where ν_φ is given by (<ref>) and
ν_φ + sw = (- e^φ + sw∇_ℝ^N - 1 (φ + sw), 1)/√(1 + |e^φ + sw∇_ℝ^N - 1(φ + sw)|^2).
Now, for the first term in the right-hand side of (<ref>), thanks to the argument presented in <cit.>, we have
d/ds (∇ u_φ + sw) = ∇(d/ds u_φ + sw),
and thus we obtain
.d/ds|_s = 0 (∇ u_φ + sw (x', e^φ + sw)) = ∇u_w(x^',e^φ) + D^2u_φ(x^',e^φ) (0^',we^φ).
On the other hand, for the last term in (<ref>), we check that
. d/ds|_s = 0ν_φ + sw = - e^φ/√(1 + |e^φ∇_ℝ^N - 1φ|^2) (∇_ℝ^N - 1 w + w ∇_ℝ^N - 1φ, 0)
- (e^φ)^2 (w |∇_ℝ^N - 1φ|^2 + ∇_ℝ^N - 1φ·∇_ℝ^N - 1 w)/1 + |e^φ∇_ℝ^N - 1φ|^2ν_φ
Finally, substituting (<ref>)–(<ref>) into (<ref>) we obtain (<ref>).
As in Section <ref>, in view of Definition <ref>, we consider a volume constraint. In the case of cartesian graphs, the volume of the domain Ω_φ associated to φ∈ C^2(ω) is expressed by
𝒱(φ) = |Ω_φ| = ∫_ω e^φ dx'.
The functional 𝒱 is of class C^2 and for every v, w ∈ C^2(ω) it holds
𝒱'(φ) [v] = ∫_ω e^φ v dx', 𝒱”(φ)[v, w] = ∫_ω e^φ v w dx'.
For c > 0 we define the manifold
M {φ∈ C^2(ω) : ∫_ω e^φ dx' = c},
whose tangent space at any point φ∈ M is given by
T_φ M = {v ∈ C^2(ω) : ∫_ω e^φ v dx' = 0 }.
We consider the restricted functional
I(φ) = T|_M(φ), φ∈ M.
As before, if φ∈ M is a critical point for I, then there exists a Lagrange multiplier μ∈ℝ such that
T'(φ) = μ𝒱'(φ).
Results analogous to Proposition <ref> and Lemma <ref> hold with the same proofs. In particular, we point out that for an energy stationary pair (Ω_φ, u_φ) under a volume constraint the function u_φ has constant normal derivative on Γ_φ. For the reader's convenience, we restate here these results.
Let φ∈ M and let (Ω_φ, u_φ) be energy-stationary under a volume constraint. Then the Lagrange multiplier μ is negative and
∂ u_φ/∂ν = - √(- 2 μ) on Γ_φ.
The same as in <cit.>
For the second derivative of I we have
Let φ∈ M and let v, w ∈ T_φ M. If (Ω_φ, u_φ) is energy-stationary under a volume constraint, then
I”(φ)[v, w] = T”(φ)[v, w] - μ𝒱”(φ)[v, w].
The same as in <cit.>
§.§ The case φ≡ 0 and one-dimensional solutions
When φ≡ 0 (that is, Γ_φ = Γ_0 is the intersection of the cylinder with the plane x_N = 1), the domain Ω_0 is just the finite cylinder
Ω_ω:=ω× (0, 1).
Then, if f is a locally Lipschitz continuous function, any weak solution of (<ref>) is also a classical solution up to the boundary, i.e., it belongs to C^2(Ω_ω). This follows from standard regularity theory, taking into account the boundary conditions and the fact that ∂Ω_ω is the union of three (N - 1)-dimensional manifolds (with boundary) intersecting orthogonally (see also <cit.>).
In Ω_ω, for suitable nonlinearities, we can find a solution of (<ref>) in Ω_ω which depends only on x_N in the following way: first, we can apply some variational method to find a solution u of the ordinary differential equation
-u” = f(u) in (0, 1)
u'(0) = u(1) = 0
and then set
u_ω(x', x_N) := u(x_N), (x', x_N) ∈Ω_ω.
Recall that, in one dimension, there is no critical Sobolev exponent for the embedding into L^p. So one example of a suitable nonlinearity is f(u) = u^p with 1 < p < ∞, or those of Proposition <ref> with the only caution that in (iii), for N≥ 2 we can take 1 < p < ∞.
For our purposes we need to consider one-dimensional solutions u_ω of (<ref>) in Ω_ω that are nondegenerate, which means that the linearized operator
L_u_ω = - Δ - f'(u_ω)
does not admit zero as an eigenvalue. In other words, u_ω is nondegenerate if there are no nontrivial weak solutions ϕ∈ H_0^1(Ω_ω∪Γ_1, 0) of the problem
{[ - Δϕ - f'(u_ω) ϕ = 0 in Ω_ω; ϕ = 0 on Γ_0; ∂ϕ/∂ν = 0 on Γ_1, 0 ].
To analyze the spectrum of L_u_ω it is convenient to consider the following auxiliary one-dimensional eigenvalue problem:
- z” - f'(u_ω) z = α z in (0, 1)
z'(0) = z(1) = 0
We denote the eigenvalues of (<ref>) by α_i, for i∈ℕ. Clearly, they correspond to the eigenvalues of the linear operator
L̃_u_ω(z) = - z” - f'(u_ω) z
with the boundary conditions of (<ref>).
We also consider the following Neumann eigenvalue problem in the domain ω⊂ℝ^N - 1:
{[ - Δ_^N-1ψ = λψ in ω; ∂ψ/∂ν_∂ω = 0 on ∂ω ].
where - Δ_^N-1 = - ∑_i = 1^N - 1∂^2/∂ x_i^2 is the Laplacian in ℝ^N - 1, i.e. with respect to the variables x_1,…,x_N-1. We denote its eigenvalues by
0 = λ_0(ω) < λ_1(ω) ≤λ_2 (ω) ≤….
It is well-known that λ_j(ω) ↗ + ∞ as j →∞ and that the normalized eigenfunctions form a basis (ψ_j)_j of the tangent space T_0M defined in (<ref>) when φ≡ 0.
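As a concrete illustration (a sketch, not part of the paper; the choice ω = (0, L) with N = 2, the value of L and the grid size are assumptions made only for this example), the eigenvalues λ_j(ω) can be approximated by a standard finite-difference discretization and compared with the exact values (jπ/L)^2:

import numpy as np

L, n = 2.0, 400
h = L / n
# Second-order Neumann Laplacian on (0, L), with ghost points at both endpoints.
A = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    A[i, i] = 2.0 / h**2
    if i > 0:
        A[i, i - 1] = -1.0 / h**2
    if i < n:
        A[i, i + 1] = -1.0 / h**2
A[0, 1] = A[n, n - 1] = -2.0 / h**2
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs[:4])
print([(j * np.pi / L) ** 2 for j in range(4)])   # 0, (pi/L)^2, (2 pi/L)^2, ...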
The spectra of L_u_ω, L̃_u_ω and - Δ_^N-1 with respect to the above boundary conditions are related by
σ(L_u_ω) = σ(L̃_u_ω) + σ(- Δ_^N-1).
We begin by showing that σ(L_u_ω) ⊂σ(L̃_u_ω) + σ(- Δ_^N-1). Let τ∈σ(L_u_ω) and let ϕ∈ H_0^1(Ω_ω∪Γ_1,0) be an associated eigenfunction, that is, ϕ is a weak solution of
{[ - Δϕ - f'(u_ω) ϕ = τϕ in Ω_ω; ϕ = 0 on Γ_0; ∂ϕ/∂ν = 0 on Γ_1, 0 ].
As observed at the beginning of this subsection for the nonlinear problem (<ref>), by the shape of Ω_ω and the boundary conditions, and since f ∈ C^1, α(ℝ), standard elliptic regularity yields that ϕ is a classical solution of (<ref>) in Ω_ω.
Let λ be an eigenvalue of - Δ_^N-1 with homogeneous Neumann boundary condition on ω and let ψ be an associated eigenfunction. Define
z(x_N) := ∫_ωϕ(x', x_N) ψ(x') dx'.
Then, differentiating with respect to x_N, using Green's formulas and the boundary conditions we have
- z” = ∫_ω - ∂^2 ϕ/∂ x_N^2ψ dx'
= ∫_ω (- Δϕ + Δ_^N-1ϕ) ψ dx'
= ∫_ω f'(u_ω) ϕψ dx' + ∫_ωτϕψ dx' + ∫_ωΔ_^N-1ψϕ dx'
= f'(u_ω) z + τ z - λ z.
Thus (τ - λ) ∈σ(L̃_u_ω) and hence τ = (τ - λ) + λ∈σ(L̃_u_ω) + σ(- Δ_^N-1).
To show the reverse inclusion, let α∈σ(L̃_u_ω), λ∈σ( - Δ_^N-1) and let z, ψ be, respectively, the associated eigenfunctions. Setting for x = (x', x_N)∈Ω_ω
ϕ(x', x_N) := z(x_N) ψ(x'),
we note that
- Δϕ = - z”ψ - Δ_^N-1ψ z
= f'(u_ω) z ψ + α z ψ + λ z ψ
= f'(u_ω) ϕ + (α + λ)ϕ.
Finally, by construction, we easily check that ϕ satisfies the boundary conditions of (<ref>).
As a consequence, we deduce that
α + λ∈σ(L_u_ω)
and this concludes the proof.
The problem (<ref>) admits zero as an eigenvalue if and only if there exist i ∈ℕ^+ and j ∈ℕ such that
α_i + λ_j(ω) = 0
holds.
It follows immediately from Lemma <ref>.
A one-dimensional solution of (<ref>) is nondegenerate if both the following conditions are satisfied:
* the eigenvalue problem (<ref>) in (0, 1) does not admit zero as an eigenvalue;
* λ_1(ω) > - α_1.
Analogous to the proof of Corollary <ref>.
§.§ Stability/instability of the pair (Ω_ω, 𝐮_ω)
In this subsection, we prove a general stability/instability theorem for the pair (Ω_ω, u_ω). We begin with some preliminary results.
Firstly, we recall that when φ≡ 0 the tangent space T_0M is given by
T_0M = {v ∈ C^2(ω) : ∫_ω v dx' = 0 }.
Since u_ω depends on x_N only, in order to simplify the notation, we denote by a prime the derivative with respect to x_N, and thus we write
u_ω^'(x_N) = u_ω^'(x^',x_N) := ∂ u_ω/∂ x_N(x', x_N).
Then, for v ∈ T_0 M, we have that the function u (see (<ref>)), which belongs to H^1(Ω_ω), is a weak solution of
{[ - Δu = f'(u_ω) u in Ω_ω; u = - u_ω'(1) v on Γ_0; ∂u/∂ν = 0 on Γ_1, 0 ].
As before, by elliptic regularity we know that u is regular in Ω_ω, and thus it is a classical solution. We also note that, by the nondegeneracy of u_ω, there exists a unique solution of (<ref>).
Let λ_j > 0 be any positive eigenvalue for the Neumann problem (<ref>) and let ψ_j be any normalized eigenfunction associated to λ_j. Let ũ_j ∈ H^1(Ω_ω) be the solution of (<ref>) with v = ψ_j. Then the function
h_j(x_N) := ∫_ωũ_j(x', x_N) ψ_j(x') dx', x_N ∈ (0, 1]
satisfies
- h_j” - f'(u_ω)h_j = - λ_j h_j in (0, 1)
h_j(1) = - u_ω'(1)
h_j'(0) = 0
For simplicity of notation we drop the index j and simply write ũ, h, ψ and λ instead of ũ_j, h_j, ψ_j and λ_j.
First observe that, as ũ=- u_ω'(1)ψ on Γ_0, we have
h(1) = ∫_ω - u_ω'(1) ψ^2 dx' = - u_ω'(1).
Now, differentiating with respect to x_N under the integral sign and using Green's formula, taking into account the boundary conditions, we have
- h” = ∫_ω - ∂^2 ũ/∂ x_N^2ψ dx' = ∫_ω (- Δũ + Δ_^N-1ũ) ψ dx'
= ∫_ω f'(u_ω) ũψ dx' + ∫_ωΔ_^N-1ũψ dx'
= f'(u_ω) h + ∫_ωũΔ_^N-1ψ dx'
= f'(u_ω) h - λ∫_ωũψ dx' = f'(u_ω)h - λ h.
Finally, exploiting the Neumann condition for ũ on Γ_1, 0, we check that h'(0) = 0.
Note that for ũ_j and h_j as in Lemma <ref> we have that
ũ_j (x', x_N) = h_j(x_N) ψ_j(x').
Indeed:
- Δ (h_j(x_N) ψ_j(x'))
= - h_j(x_N) Δ_^N-1ψ_j(x') - h_j”(x_N) ψ_j(x')
= λ_j h_j(x_N) ψ_j(x') + f'(u_ω)h_j(x_N) ψ_j(x') - λ_j h_j(x_N) ψ_j(x')
= f'(u_ω) u_j.
Moreover, by (<ref>) and (<ref>), the function h_j ψ_j satisfies the boundary conditions in (<ref>), so that h_j ψ_j is the unique solution of (<ref>) and thus coincides with ũ_j.
Let j ≥ 1, λ_j be a positive Neumann eigenvalue of - Δ_^N-1 in ω, and let h_j be the solution of (<ref>). Assume that -α_1 < λ_j, where α_1 is the smallest eigenvalue of (<ref>). Then it holds that
h_j > 0 in [0, 1].
We extend h_j by even reflection with respect to 0, obtaining a solution of the linear problem
- h_j” - f'(u_ω) h_j + λ_j h_j = 0 in (-1, 1)
h_j(-1) = h_j(1) = -u_ω'(1) > 0.
By reflection and (<ref>), the first eigenvalue of the linear operator
- z” - f'(u_ω)z in (-1, 1)
with the boundary condition z(-1) = z(1) = 0 is exactly α_1. Therefore the first eigenvalue of the linear operator
L_u_ωg = - g”- f'(u_ω)g + λ_j g
with zero boundary condition in (-1, 1) is β_1 = α_1 + λ_j.
It is well-known that L_u_ω satisfies the maximum principle whenever β_1 > 0, i.e., when λ_j > - α_1. Therefore, by (<ref>), the function h_j satisfies h_j ≥ 0 in (-1,1), and by the strong maximum principle we conclude that h_j > 0 in (-1, 1).
We can now state and prove the main result of this section.
Let ω⊂ℝ^N - 1 be a smooth bounded domain. Let f∈ C^1, α_loc(ℝ) be such that there exists a positive one-dimensional non-degenerate solution u_ω of (<ref>) in Ω_ω, and let h_1 be the solution to (<ref>) with j=1. Let λ_1=λ_1(ω) be the first non-trivial eigenvalue of -Δ_^N-1 with homogeneous Neumann conditions, let α_1 be the first eigenvalue of (<ref>) and let ρ be the number defined by
ρ := - f(u_ω(0))h_1(0) -λ_1∫_0^1 h_1u_ω^' dx_N .
Assume that λ_1 > - α_1. Then
(i) if ρ<0, then (Ω_ω, u_ω) is an unstable energy-stationary pair;
(ii) if ρ>0, then (Ω_ω, u_ω) is a stable energy stationary pair.
We first observe that, since ∂ u_ω/∂ν is constant on Γ_0, by the analogue of Proposition <ref> for cylinders we infer that the pair (Ω_ω, u_ω) is energy-stationary.
Let w ∈ T_0M and assume without loss of generality that ∫_ω w^2 dx' = 1. In order to prove (i)-(ii) we first determine a suitable expression for I”(0)[w, w]. To this end, for each j ∈ℕ^+, let ũ_j be the solution of (<ref>) with v = ψ_j and let h_j be the solution of (<ref>). Then we can write
w = ∑_j = 1^∞ (w, ψ_j) ψ_j
where (·, ·) is the inner product in L^2(ω). Moreover, we can check that
ũ = ∑_j = 1^∞ (w, ψ_j) ũ_j
is the solution of (<ref>) corresponding to w. Then, taking φ=0 in Lemma <ref>, exploiting Lemma <ref>, taking into account that by Proposition <ref> the Lagrange multiplier μ is given by
μ = - 1/2(u_ω'(1))^2,
by Remark <ref> and observing that ∇ u_ω⊥ (∇_ℝ^N - 1 w, 0), we infer that
I”(0)[w, w]
= - 1/2∫_ω (u_ω'(1))^2 w^2 dx'
- ∫_ω u_ω'(1) (∑_j = 1^∞ (w, ψ_j) h_j'(1) ψ_j ) (∑_k = 1^∞ (w, ψ_k) ψ_k) dx'
- ∫_ω u_ω'(1) u_ω”(1) w^2 dx' + 1/2 (u_ω'(1))^2 ∫_ω w^2 dx'
= - u_ω'(1) ∫_ω(∑_j = 1^∞ (w, ψ_j)^2 h_j'(1) ψ_j^2 ) dx' - u_ω'(1) u_ω”(1)
Finally, since u_ω is a solution to (<ref>) we deduce that
I”(0)[w, w]=- u_ω'(1) ∫_ω(∑_j = 1^∞ (w, ψ_j)^2 h_j'(1) ψ_j^2 ) dx' + u_ω'(1) f(0).
In particular, choosing w=ψ_1 and plugging it into (<ref>) we infer that
I”(0)[ψ_1, ψ_1]=- u_ω'(1) h_1'(1) + u_ω'(1) f(0).
Multiplying the equation in (<ref>) (with j=1) by u^'_ω and integrating by parts we get
-(h_1^' u_ω^')|_0^1 + ∫_0^1h_1^' u_ω^'' dx_N = ∫_0^1(f^'(u_ω)-λ_1)h_1u_ω^' dx_N.
Exploiting (<ref>), integrating by parts and taking into account that h_1(1)=-u_ω^'(1) we obtain
[ -h_1^'(1)u_ω^'(1) - ∫_0^1h_1^' f(u_ω) dx_N; = ∫_0^1f^'(u_ω)u_ω^' h_1 dx_N -λ_1∫_0^1 h_1 u_ω^' dx_N; = (f(u_ω)h_1)|_0^1 - ∫_0^1 f(u_ω) h_1' dx_N -λ_1∫_0^1 h_1 u_ω^' dx_N; = -f(0)u_ω^'(1)-f(u_ω(0))h_1(0) - ∫_0^1 f(u_ω) h_1' dx_N -λ_1∫_0^1 h_1 u_ω^' dx_N; ]
Hence, we deduce that
-h_1^'(1)u_ω^'(1) =-f(0)u_ω^'(1)-f(u_ω(0))h_1(0) -λ_1∫_0^1 h_1 u_ω^' dx_N
In the end, from (<ref>), (<ref>) and recalling (<ref>), we obtain
I”(0)[ψ_1, ψ_1]=-f(u_ω(0))h_1(0) -λ_1∫_0^1 h_1 u_ω^' dx_N=ρ.
Therefore, if ρ<0 then I”(0)[ψ_1, ψ_1]<0, i.e., (Ω_ω, u_ω) is an unstable energy-stationary pair, and this proves (i).
Let us prove (ii). Let w ∈ T_0M such that ∫_ω w^2 dx' = 1. From (<ref>) we know that I”(0)[w, w]=- u_ω'(1) ∫_ω(∑_j = 1^∞ (w, ψ_j)^2 h_j'(1) ψ_j^2 ) dx' + u_ω'(1) f(0). Thanks to the assumption λ_1>-α_1 the following holds true.
Claim: if k > j, then
h_k'(1) ≥ h_j'(1),
and actually h_k'(1) > h_j'(1) if λ_k > λ_j.
Indeed, by definition h_k, h_j satisfy, respectively, the following:
- h_k” - f'(u_ω) h_k = - λ_k h_k,
- h_j” - f'(u_ω) h_j = - λ_j h_j.
Multiplying (<ref>) by h_j and integrating on (0, 1) we obtain
∫_0^1 - h_k” h_j d x_N
= ∫_0^1 h_k' h_j' d x_N - (h_k' h_j)|_0^1
= ∫_0^1 f'(u_ω) h_j h_k d x_N - λ_k ∫_0^1 h_j h_k d x_N
Similarly, multiplying (<ref>) by h_k, integrating on (0, 1) and then subtracting the result from (<ref>), we obtain
- (h_k' h_j - h_j' h_k)(1) = (λ_j - λ_k) ∫_0^1 h_j h_k dx_N ≤ 0,
because h_j > 0 and h_k > 0 (see Proposition <ref>, which holds true for any j∈ℕ^+ because λ_1>-α_1). Now, since h_j(1)=h_k(1)=-u_ω'(1), then by (<ref>) we deduce that
u_ω'(1)(h_k'(1) - h_j'(1))≤ 0.
Hence, as u_ω'(1)<0, Claim (<ref>) easily follows.
Now, thanks to (<ref>) and Claim (<ref>), recalling again that u'_ω(1) < 0 and exploiting (<ref>) it follows that
[ I”(0)[w, w] ≥ - u_ω'(1)h_1^'(1)∫_ω(∑_j = 1^∞ (w, ψ_j)^2ψ_j^2 ) dx' + u_ω'(1) f(0); = - u_ω'(1)h_1^'(1) + u_ω'(1) f(0); = -f(u_ω(0))h_1(0) -λ_1∫_0^1 h_1 u_ω^' dx_N=ρ. ]
Hence, if ρ>0 we have that I”(0)[w, w]>0 for all w ∈ T_0M, i.e., (Ω_ω, u_ω) is a stable energy-stationary pair, and this proves (ii). The proof is complete.
As a simple corollary of Theorem <ref> we can now prove the stability/instability result of Theorem <ref>, which concerns the case of the torsional energy, i.e. when f ≡ 1.
§.§ Proof of Theorem <ref>
When f≡ 1 the eigenvalue problem (<ref>) has only positive eigenvalues and therefore the condition λ_1 > - α_1 is automatically satisfied. The only solution of
{[ - Δ u = 1 in Ω_ω; u = 0 on Γ_0; ∂ u/∂ν = 0 on Γ_1, 0 ].
is the one-dimensional positive function given by
u_ω(x_N)=(1-x_N^2)/2.
Clearly, as u_ω'(1)=-1 and f≡ 1, then for any j∈ℕ^+ (<ref>) reduces to
- h_j” + λ_j h_j = 0 in (0, 1)
h_j(1) = - u_ω'(1)
h_j'(0) = 0
whose unique solution is given by
h_j(x_N)=1/cosh(√(λ_j))cosh(√(λ_j) x_N).
In particular, taking j=1 and exploiting (<ref>) we can compute explicitly the number ρ in (<ref>), namely
ρ=-1/cosh(√(λ_1)) +λ_1/cosh(√(λ_1))∫_0^1 cosh(√(λ_1) x_N) x_N dx_N .
Integrating by parts we readily check that
∫_0^1 cosh(√(λ_1) x_N) x_N dx_N=sinh(√(λ_1))/√(λ_1) -cosh(√(λ_1))/λ_1+1/λ_1,
and thus we obtain
ρ = √(λ_1)tanh(√(λ_1))-1.
Let us consider the function g:[0,+∞) →ℝ defined by g(t)= √(t)tanh(√(t))-1. Clearly g(0)=-1, g(t)→ +∞ as t→ +∞, and by monotonicity we infer that g has a unique zero in (0,+∞). We denote it by β, and from the previous argument and (<ref>) we infer that ρ<0 if and only if λ_1<β. Then, by Theorem <ref>-(i) we get that (Ω_ω, u_ω) is an unstable energy-stationary pair, and this proves (i).
Analogously, as ρ>0 if and only if λ_1>β, from Theorem <ref>-(ii) we obtain that (Ω_ω, u_ω) is a stable energy-stationary pair. The proof is complete.
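For a quick numerical check (an illustrative sketch, not part of the proof; the bracketing interval is an arbitrary choice), β can be computed as the unique zero of g:

import numpy as np
from scipy.optimize import brentq

g = lambda t: np.sqrt(t) * np.tanh(np.sqrt(t)) - 1.0
beta = brentq(g, 1e-8, 10.0)
print(beta)   # approximately 1.44

For instance, if ω = (0, L)^N - 1 is a cube, then λ_1(ω) = (π/L)^2, so the stability threshold λ_1(ω) > β amounts to L < π/√(β) ≈ 2.62.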
We conclude this section with the proof of Theorem <ref>.
§.§ Proof of Theorem <ref>
Let w ∈ T_0M be such that ∫_ω w^2 dx' = 1. Since λ_1>-α_1, we can argue as in the proof of Theorem <ref>-(ii); in particular, from the first two lines of (<ref>), taking into account that, by assumption, f(0)=0, we have
I”(0)[w, w]≥- u_ω'(1)h_1^'(1).
Now, since h_1” = (λ_1- f'(u_ω)) h_1 in (0,1) and h_1>0 in [0,1] by Proposition <ref>, then, thanks to the assumption λ_1>sup_x_N∈(0,1)|f^'(u_ω(x_N))| we infer that
h_1”>0 in [0,1]. In particular, as h_1^'(0)=0 we deduce that
h_1^'(1)>0.
Finally, combining (<ref>) and (<ref>) we obtain that I”(0)[w, w]>0 for all w ∈ T_0M, which means that (Ω_ω, u_ω) is a stable energy-stationary pair.
We notice that, if f is non-negative with monotone increasing derivative, as in the case of the Lane-Emden nonlinearity (<ref>), then by the Gidas-Ni-Nirenberg theorem (<cit.>), which gives the monotonicity of u_ω, we infer that sup_x_N∈(0,1)|f^'(u_ω(x_N))|=f^'(u_ω(0)). Thus the stability condition of Theorem <ref> reduces to
λ_1>f^'(u_ω(0)).
In the case of the Lane-Emden nonlinearity f(u) = u^p, at least for some integer values of p, it is possible to compute the solution u_ω numerically, as well as the eigenvalue α_1 and the function h_1 for different values of λ_1(ω). This allows to compute ρ numerically, so that, plotting the result for ρ as a function of λ_1(ω), we obtain a region of instability for λ_1(ω) close to - α_1.
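The following Python sketch outlines one possible way to carry out this computation; it is not taken from the paper, and the exponent p = 3, the grid sizes and the sampled values of λ_1(ω) are illustrative assumptions. The profile u_ω is obtained by shooting and rescaling, α_1 by a finite-difference discretization of the one-dimensional eigenvalue problem, h_1 by solving the corresponding linear system, and ρ from the formula above.

import numpy as np
from scipy.integrate import solve_ivp

p = 3                                   # Lane-Emden exponent (assumed value)

# 1) Profile: -u'' = u^p on (0,1), u'(0) = u(1) = 0, via shooting and rescaling:
#    if v'' = -v^p, v(0) = 1, v'(0) = 0 has its first zero at R, then
#    u(x) = R^(2/(p-1)) v(R x) is the desired positive solution.
def rhs(t, y):
    return [y[1], -max(y[0], 0.0) ** p]
def first_zero(t, y):
    return y[0]
first_zero.terminal, first_zero.direction = True, -1

sol = solve_ivp(rhs, [0.0, 50.0], [1.0, 0.0], events=first_zero,
                dense_output=True, rtol=1e-10, atol=1e-12)
R = sol.t_events[0][0]

n = 400
x = np.linspace(0.0, 1.0, n + 1)
v, vp = sol.sol(R * x)
u = R ** (2.0 / (p - 1)) * v            # u_omega
up = R ** ((p + 1.0) / (p - 1)) * vp    # u_omega'
fp = p * u ** (p - 1)                   # f'(u_omega)

# 2) alpha_1: smallest eigenvalue of -z'' - f'(u_omega) z with z'(0) = z(1) = 0.
h = 1.0 / n
A = np.zeros((n, n))                    # unknowns at x_0, ..., x_{n-1}; z(1) = 0
for i in range(n):
    A[i, i] = 2.0 / h**2 - fp[i]
    if i > 0:
        A[i, i - 1] = -1.0 / h**2
    if i < n - 1:
        A[i, i + 1] = -1.0 / h**2
A[0, 1] = -2.0 / h**2                   # ghost point for the Neumann condition at 0
alpha1 = np.min(np.linalg.eigvals(A).real)

# 3) h_1 and rho for a given value of lambda_1(omega) > -alpha_1.
def rho_of(lam1):
    M = A + lam1 * np.eye(n)
    b = np.zeros(n)
    b[-1] = -up[n] / h**2               # boundary value h_1(1) = -u_omega'(1)
    h1 = np.append(np.linalg.solve(M, b), -up[n])
    integral = np.sum(0.5 * (h1[1:] * up[1:] + h1[:-1] * up[:-1])) * h
    return -u[0] ** p * h1[0] - lam1 * integral

# One expects rho < 0 for lambda_1 close to -alpha_1 (instability) and rho > 0
# once lambda_1 exceeds f'(u_omega(0)) (stability), cf. the results above.
for lam1 in (-alpha1 + 0.2, -alpha1 + 2.0, fp[0] + 1.0):
    print(lam1, rho_of(lam1))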
§ ACKNOWLEDGEMENTS
We would like to thank David Ruiz for several useful discussions and Tobias Weth for pointing out a flaw in an early draft of the paper.
|
http://arxiv.org/abs/2307.03924v1 | 20230708074639 | Real-Time Simulation of Open Quantum Spin Chains with Inchworm Method | [
"Geshuo Wang",
"Zhenning Cai"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
We study the real-time simulation of open quantum systems,
where the system is modeled by a spin chain,
with each spin associated with its own harmonic bath.
Our method couples the inchworm method for the spin-boson model and the modular path integral methodology for spin systems.
In particular, the introduction of the inchworm method can significantly suppress the numerical sign problem.
Both methods are tweaked to make them work seamlessly with each other.
We represent our approach in the language of diagrammatic methods,
and analyze the asymptotic behavior of the computational cost.
Extensive numerical experiments are done to validate our method.
§ INTRODUCTION
An open quantum system refers to a quantum-mechanical system coupled to an environment.
The coupling can significantly affect the quantum dynamics,
resulting in effects such as quantum dissipation and quantum decoherence.
It can also lead to non-Markovian evolution of the quantum system,
posing significant challenges in the numerical simulation.
Nevertheless, the study of open quantum systems is becoming increasingly important and has practical applications in many fields <cit.>, as real-world systems are never completely isolated.
In the simulation of open quantum systems, a simple harmonic bath is generally assumed so that the effect of the bath on the system can be analytically given by the bath influence functional <cit.>,
allowing the path integral approach <cit.> to be used to formulate the system dynamics.
One classical method based on path integrals is the quasi-adiabatic propagator path integral (QuAPI) <cit.>.
Other methods have been developed based on QuAPI to improve simulation efficiency by reducing computational complexity or enhancing computational accuracy, including the iterative QuAPI method <cit.>,
the blip decomposition of the path integral <cit.>
and differential equation-based path integral method (DEBPI)
<cit.>.
Due to the non-Markovian nature of the dynamics,
the path-integral-based methods often suffer from increasing memory costs for longer simulation time.
The small matrix decomposition of the path integral (SMatPI) <cit.>, however, has successfully overcome the problem by summarizing the contribution of the paths into small matrices representing the kernel of the quantum master equation.
An alternative approach to dealing with the high memory cost in simulating quantum systems is to use the quantum Monte Carlo method to evaluate the high-dimensional integrals in the Dyson series <cit.>.
However, the Monte Carlo method introduces stochastic errors and can lead to the so-called “sign problem” for highly oscillatory integrands <cit.>.
To relieve the sign problem, the inchworm Monte Carlo method was developed in <cit.>,
which takes the idea of bold diagrammatic Monte Carlo method introduced in <cit.>.
The idea is to compute quantum propagators for shorter time intervals,
and then combine them into the propagators of longer time intervals.
The extension of the propagators can also be formulated into an integro-differential equation <cit.>,
so that classical numerical methods can be applied.
The inchworm Monte Carlo method has been proven to be successful in reducing the severity of the sign problem <cit.>.
Some efficient numerical methods for solving the integro-differential equation have been discussed in <cit.>.
The methods discussed above are mainly focused on simple systems such as a single spin or other systems with a small number of possible states,
since the dimension of the Hilbert space for a system grows exponentially with the number of particles.
As a result, simulating more complex systems requires new approaches.
One such approach is the method of modular path integral (MPI) <cit.>,
which leads to linear scaling with the number of particles.
Other methods apply tensor train decomposition to keep the memory cost low for large systems <cit.>, which utilizes low-rank approximations to reduce the computational and memory cost.
In these methods, a typical system under consideration is the Ising chain model, a one-dimensional chain of interacting spins <cit.>.
The Ising model has wide application in magnetism <cit.>, neuroscience <cit.> and many other fields.
The dynamics of closed Ising chains is well-studied in the literature <cit.>.
Recently, there has been more research focusing on the dissipative Ising chain <cit.>.
This paper focuses on the evolution of an Ising chain coupled with harmonic baths,
which are characterized by the Ohmic spectral density <cit.>.
The Ising model used in this study is introduced in <Ref>.
In <Ref>,
we propose a diagrammatic representation of the model based on the special structure of the Ising chain.
The computation of the diagrams is introduced in detail in <Ref> and <Ref>.
<Ref> mainly discusses the computation of diagrams for each single spin,
and <Ref> contains the algorithm for merging the diagrams.
The estimation of the computational cost is given in <Ref>, and numerical experiments are given in <Ref>.
Finally, in <Ref>, we provide some concluding remarks and introduce possible future works inspired by our results.
§ ISING CHAIN WITH SPIN-BATH COUPLING
This section provides a brief introduction to the model studied in this paper,
which is an Ising chain coupled with baths consisting of harmonic oscillators.
In this model, the baths for different spins are not directly coupled.
An isolated Ising chain is a chain of spins in which each spin couples with its nearest neighbors <cit.>.
The Hamiltonian for an Ising chain with K spins is generally given by
H_Ising
= ∑_k=1^K H_s^(k)
+ ∑_k=1^K-1 U^(k)⊗ V^(k+1).
where
H_s^(k) = ϵ^(k)σ_z^(k) + Δ^(k)σ_x^(k)
with σ_x^(k),σ_z^(k) being Pauli matrices
for the kth spin in the chain.
The parameter ϵ^(k) describes the energy difference between two spin states
and Δ^(k) is the frequency of the spin flipping.
The term U^(k)⊗ V^(k+1) describes the nearest-neighbor coupling between the kth and (k+1)th spins.
In this paper, a more complicated case is studied
where each spin in the Ising chain is coupled with a harmonic bath.
The total Hamiltonian for the whole system-bath is then given by
H = H_Ising + ∑_k=1^K H_b^(k) + ∑_k=1^K W_s^(k)⊗ W_b^(k)
where
H_b^(k) = ∑_j1/2[(p̂_j^(k))^2 + (ω_j^(k))^2 (q̂_j^(k))^2],
W_s^(k) = σ_z^(k),
W_b^(k) = ∑_j c_j^(k)q̂_j^(k).
In this expression, p̂_j^(k) and q̂_j^(k) are the momentum operator and the position operator of the jth harmonic oscillator in the bath of the kth spin, respectively.
ω_j^(k) is the frequency of the jth harmonic oscillator in the bath of the kth spin and c_j^(k) is the coupling intensity between the kth spin and the jth oscillator in its bath.
<Ref> illustrates the overall Hamiltonian and the coupling relations in this model more intuitively
for an Ising chain with 4 spins.
As assumed in <cit.>,
the baths attached to different spins are not directly coupled with each other in this paper.
Similar to <cit.>, we simply set U^(k) = V^(k) so that our method can be better illustrated by diagrams in the following sections;
the method discussed in this paper is also applicable to the more general case U^(k)≠ V^(k).
As for the initial condition, the spins and the baths are assumed to be decoupled.
More specifically,
the kth spin is assumed to be in the state |ς^(k)⟩
and the baths are at their respective thermal equilibrium states.
The initial density matrix for the whole system is then given by
ρ(0)
= ⊗_k=1^K ρ^(k)(0)
= ⊗_k=1^K ( ρ_s^(k)(0) ⊗ ρ_b^(k)(0) )
= ⊗_k=1^K ( |ς^(k)⟩⟨ς^(k)| ⊗ exp(-β^(k) H_b^(k)) / tr(exp(-β^(k) H_b^(k))) )
where β^(k) is the inverse temperature for the kth bath <cit.>.
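To make the model concrete, the following minimal sketch (in Python; an illustration of ours, not part of the original formulation) assembles the bare Ising-chain Hamiltonian H_Ising for a small K via Kronecker products, using the illustrative choice U^(k) = V^(k) = σ_z^(k) adopted in this paper. The bath and coupling terms are omitted here, since they enter the simulation only through the bath influence functional introduced below; all parameter values are placeholders.

```python
import numpy as np
from functools import reduce

# Pauli matrices and the single-spin identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def embed(op, k, K):
    """Embed a single-spin operator acting on site k (0-based) into the K-spin space."""
    factors = [id2] * K
    factors[k] = op
    return reduce(np.kron, factors)

def ising_hamiltonian(K, eps, delta, coupling=1.0):
    """H_Ising = sum_k (eps_k sigma_z^(k) + delta_k sigma_x^(k))
                 + sum_k coupling * sigma_z^(k) sigma_z^(k+1)
    (the choice U^(k) = V^(k) = sigma_z^(k) is purely illustrative)."""
    H = np.zeros((2**K, 2**K), dtype=complex)
    for k in range(K):
        H += eps[k] * embed(sz, k, K) + delta[k] * embed(sx, k, K)
    for k in range(K - 1):
        H += coupling * embed(sz, k, K) @ embed(sz, k + 1, K)
    return H

# Example: a 4-spin chain with identical, arbitrarily chosen site parameters
K = 4
H = ising_hamiltonian(K, eps=[0.0] * K, delta=[1.0] * K)
print(H.shape)  # (16, 16)
```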
§ DIAGRAMMATIC REPRESENTATION OF THE PATH INTEGRAL
In this section, we rewrite the evolution of the spin chain system using path integrals, so that the computation of each spin can be decoupled. Such an approach has been studied in many previous works <cit.>, and here we are going to represent the path integrals using diagrams to facilitate our future discussions.
We first split the total Hamiltonian in <ref> into two parts H = H_0 + V, where
H_0 ≔ ∑_k=1^K H_0^(k) ≔ ∑_k=1^K
(H_s^(k) + H_b^(k) + W_s^(k)⊗ W_b^(k)),
V ≔ ∑_k=1^K-1 V^(k)⊗ V^(k+1).
Below, we will assume that the interaction between spins V is a perturbation of the unperturbed Hamiltonian H_0,
and describe the dynamics in the interaction picture.
Given an observable O = O_s ⊗Id_b,
we can define the following propagator
G(-t,t) = e^{-i H_0 t} e^{i H t} O e^{-i H t} e^{i H_0 t},
which can be expanded into the following Dyson series
G(-t,t) = ∑_N=0^∞∫_-t⩽s⩽t (∏_n=1^N i sgn(s_n))
𝒯[V_I(s_N) ⋯ V_I(s_1) O_s,I(0)] ds,
where
V_I(s_n) ≔ e^{-i H_0 |s_n|} V e^{i H_0 |s_n|},
O_s,I(0) = O_s,
and 𝒯 is the time-ordering operator
that sorts all the operators in descending time order.
The integrals in the equation are interpreted as
∫_-t⩽s⩽t (integrand) ds
= ∫_-t^t∫_-t^s_N…∫_-t^s_2(integrand) ds_1
… ds_N-1 ds_N.
Note that the coefficient ∏_n=1^N i sgn(s_n) comes from the coupling operators V, meaning that each V_I(s_n) is attached by i or -i according to the sign of s_n.
With this propagator, the expectation of the observable can be expressed by <cit.>
⟨O_s(t)⟩
= tr(ρ_I(t) G(-t,t))
with ρ_I(t) = e^{-i H_0 t} ρ(0) e^{i H_0 t}.
If the observable has the form O_s = O_s^(1)⊗…⊗ O_s^(K),
we can plug the definition of V in <ref> into the Dyson series <ref>, so that the integrand will show N summation symbols, and each summand can be written in the tensor product form.
Precisely speaking, for the kth spin, the summand has the form:
𝒢^(k)(s')
= (∏_n'=1^N'√(i sgn(s'_n')))
𝒯[
V_I^(k)(s'_N') … V_I^(k)(s'_1) O_s,I^(k)(0)
],
where s' is a subsequence of s of length N' ⩽ N.
In particular, if s' is an empty sequence, we use the notation 𝒢^(k)(∅) ≔ O_s,I^(k)(0) to denote the above quantity.
Here we have again used the interaction picture:
V_I^(k)(s_n) ≔ e^{-i H_0^(k) |s_n|} V^(k) e^{i H_0^(k) |s_n|},
O_s,I^(k)(0) = O_s^(k).
In <ref>,
the subsequence s' depends on the number of operators V^(k) appearing in the summand,
and the reason for the square root is that the term i V^(k)⊗ V^(k+1) or -i V^(k)⊗ V^(k+1),
appearing in the expansion of iV or -iV,
is separated into the terms 𝒢^(k) and 𝒢^(k+1) after decomposition.
In this work, we stick to the choice √(i) = e^{iπ/4} and √(-i) = e^{-iπ/4}.
With these propagators, the terms in <ref>
can be represented by the sum of integrals whose integrands are tensor products of 𝒢^(k)(s).
For example, when N=1 and K=4,
we have
∫_-t^t
i sgn(s_1)
𝒯[V_I(s_1) O_s,I(0)]
ds_1
=∫_-t^t 𝒢^(1)(s_1)
⊗𝒢^(2)(s_1)
⊗𝒢^(3)(∅)
⊗𝒢^(4)(∅) ds_1
+∫_-t^t 𝒢^(1)(∅)
⊗𝒢^(2)(s_1)
⊗𝒢^(3)(s_1)
⊗𝒢^(4)(∅) ds_1
+∫_-t^t 𝒢^(1)(∅)
⊗𝒢^(2)(∅)
⊗𝒢^(3)(s_1)
⊗𝒢^(4)(s_1) ds_1.
In this equation,
different spins are separated inside the integrals, allowing us to perform computations for each spin independently.
For simplicity, we may express the above equation as a diagrammatic equation:
[baseline=0]
[fill=black] (-1,-0.1) rectangle (1,0.1);
[text=black,anchor=north] at (-1,0) -t;
[text=black,anchor=north] at (1,0) t;
plot[only marks, mark=x, mark options=color=red, scale=2, ultra thick] coordinates (-0.5,0);
[text=black,anchor=north] at (-0.5,0) s_1;
=
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.6) – (-0.5,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
In this diagrammatic equation,
the bold line on the left-hand side represents an operator acting on all spins.
The red cross indicates that only one coupling operator at time s_1 exists in the integral. On the right-hand side,
each gray line represents a single spin.
Since each interaction operator V consists of three terms, each acting on two neighboring spins,
we have three diagrams on the right-hand side,
and each diagram includes two red crosses connected by a dotted line,
indicating the two involved spins.
By comparison with (<ref>),
we can find that every diagram on the right-hand side is an integral with respect to s_1,
and the kth line corresponds to the expression 𝒢^(k)(…),
where the ellipses should be filled with the time points of the red crosses. In this case, the ellipses can only be a single point s_1 or an empty set.
Similarly, for the term with two coupling operators (N=2), the expansion is
[baseline=0]
[fill=black] (-1,-0.1) rectangle (1,0.1);
[text=black,anchor=north] at (-1,0) -t;
[text=black,anchor=north] at (1,0) t;
plot[only marks, mark=x, mark options=color=red, scale=2, ultra thick] coordinates (-0.5,0);
[text=black,anchor=north] at (-0.5,0) s_1;
plot[only marks, mark=x, mark options=color=red, scale=2, ultra thick] coordinates (0.3,0);
[text=black,anchor=north] at (-0.5,0) s_1;
[text=black,anchor=north] at (0.3,0) s_2;
=
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
.
Here the left-hand side corresponds to the two-dimensional integral in <ref>.
On the right-hand side,
we have nine diagrams since both interaction operators at s_1 and s_2 have three choices.
For general N and K,
the number of diagrams should be (K-1)^N.
In particular,
for the first term in <ref>
where no interaction exists,
no integral is required and we have
O_s(0) = O_s = O_s^(1)⊗ O_s^(2)⊗ O_s^(3)⊗ O_s^(4)
= 𝒢^(1) (∅)
⊗𝒢^(2) (∅)
⊗𝒢^(3) (∅)
⊗𝒢^(4) (∅),
which can be represented by the following diagrammatic equation:
[baseline=0]
[fill=black] (-1,-0.1) rectangle (1,0.1);
[text=black,anchor=north] at (-1,0) -t;
[text=black,anchor=north] at (1,0) t;
=
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
.
As a result, the final diagrammatic expansion of
G(-t,t) is
G(-t,t)
= [baseline=0]
[fill=black] (-1,-0.1) rectangle (1,0.1);
[text=black,anchor=north] at (-1,0) -t;
[text=black,anchor=north] at (1,0) t;
+
[baseline=0]
[fill=black] (-1,-0.1) rectangle (1,0.1);
[text=black,anchor=north] at (-1,0) -t;
[text=black,anchor=north] at (1,0) t;
plot[only marks, mark=x, mark options=color=red, scale=2, ultra thick] coordinates (-0.5,0);
[text=black,anchor=north] at (-0.5,0) s_1;
+
[baseline=0]
[fill=black] (-1,-0.1) rectangle (1,0.1);
[text=black,anchor=north] at (-1,0) -t;
[text=black,anchor=north] at (1,0) t;
plot[only marks, mark=x, mark options=color=red, scale=2, ultra thick] coordinates (-0.5,0);
[text=black,anchor=north] at (-0.5,0) s_1;
plot[only marks, mark=x, mark options=color=red, scale=2, ultra thick] coordinates (0.3,0);
[text=black,anchor=north] at (-0.5,0) s_1;
[text=black,anchor=north] at (0.3,0) s_2;
+ …
=
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.6) – (-0.5,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
+ [baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
+ …,
where the right-hand side includes all possible connections between neighboring spins.
The advantage of this expansion is two-fold:
* For each diagram, when the time points s_1, ⋯, s_N are fixed, the kth line with crosses is mathematically represented by 𝒢^(k)(s'), which involves only one spin, so that it can be computed relatively easily.
* We can shuffle the diagrams and truncate the series appropriately to obtain efficient algorithms.
The idea for the computation of each line on the right-hand side will be based on an efficient path integral method known as the inchworm method <cit.>,
and our algorithm for the integration over the time points and the summation of the diagrams is inspired by the method of modular path integrals <cit.>.
The following two sections will be devoted to these two steps, respectively.
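As a concrete illustration of the bookkeeping behind this expansion, the short sketch below (our own illustration, not taken from the references) enumerates the (K-1)^N bare diagrams at a fixed order N: each interaction time s_n is assigned to one of the K-1 neighboring bonds, and spin k collects the times of all bonds touching it, which form the argument of 𝒢^(k) on the kth line.

```python
from itertools import product

def spin_subsequences(bond_assignment, K):
    """Given an assignment of each interaction time index n to a bond (k, k+1),
    return, for every spin k, the indices of the times appearing in G^(k)."""
    subseq = {k: [] for k in range(K)}
    for n, bond in enumerate(bond_assignment):
        subseq[bond].append(n)       # left spin of the bond
        subseq[bond + 1].append(n)   # right spin of the bond
    return subseq

def enumerate_diagrams(N, K):
    """Yield all (K-1)^N bare diagrams; each diagram is one bond assignment."""
    for assignment in product(range(K - 1), repeat=N):
        yield assignment, spin_subsequences(assignment, K)

# Example: N = 2 interaction times on a K = 4 chain gives 3^2 = 9 diagrams
for assignment, subseq in enumerate_diagrams(2, 4):
    print(assignment, subseq)
```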
§ INCHWORM ALGORITHM FOR EACH SPIN
Recall that our purpose is to compute the expectation of the observable in the form of <ref>.
Based on our decomposition
<ref>,
we can first take the trace for each diagram,
and then sum up the results.
Thus, for each diagram, we need to compute tr( ρ_I^(k)(t) 𝒢^(k)(s) ) with ρ_I^(k)(t) = e^{-i H_0^(k) t} ρ^(k)(0) e^{i H_0^(k) t}.
In this section,
we will introduce an efficient algorithm to evaluate this single-spin quantity 𝒢^(k)(s) for given s.
The algorithm is inspired by the inchworm Monte Carlo Method for system-bath coupling <cit.>,
where a single heat bath interacts with the entire system.
Since each spin is associated with its own thermal bath,
we can apply the Dyson series expansion again to separate the spin from the bath.
Since the baths are initially in the thermal equilibrium states,
the trace with respect to the bath part can be calculated explicitly using Wick's theorem <cit.>.
We refer the readers to <cit.> for the detailed calculation,
and here we only present the final result:
tr(
ρ_I^(k)(t) 𝒢^(k)(s)
)
= tr_s^(k)[
ρ_s,I^(k)(t)
(∏_n=1^N √(i sgn(s_n)))
∑_M=0^∞ i^M ∫_-t⩽τ⩽t ( ∏_m=1^M sgn(τ_m) )
𝒰_0^(k)(τ,s)
ℒ_b^(k)(τ)
dτ],
where
𝒰_0^(k)(τ,s)
= 𝒯[V_s,I^(k)(s_1) … V_s,I^(k)(s_N) W_s,I^(k)(τ_1) … W_s,I^(k)(τ_M) O_s,I^(k)(0)]
with
V_s,I^(k)(s)
= e^{-i H_s^(k) |s|} V^(k) e^{i H_s^(k) |s|},
W_s,I^(k)(τ)
= e^{-i H_s^(k) |τ|} W_s^(k) e^{i H_s^(k) |τ|},
ρ_s,I^(k)(t) =
e^{-i H_s^(k) t} ρ_s^(k)(0) e^{i H_s^(k) t},
and the bath influence functional ℒ_b^(k)(τ) has the form <cit.>
ℒ_b^(k)(τ_1,…,τ_M)
=
0, if M is odd
∑_𝔮∈𝒬_M∏_(j,j')∈𝔮 B^(k)(τ_j,τ_j'),
if M is even.
Here B^(k) is the two-point correlation function to be defined later in our test cases,
and the set 𝒬_M contains all possible pairings of integers {1,2,⋯,M}.
For example,
𝒬_2 = {{(1,2)}},
𝒬_4
= {{(1,2),(3,4)}, {(1,3),(2,4)}, {(1,4),(2,3)}}.
The general definition of 𝒬_M for even M is
𝒬_M = {{(j_1,j_1'),…,(j_M/2,j_M/2')}|⋃_l=1^M/2{j_l,j_l'} = {1,…,M},
j_l < j_l' for l = 1,…,M/2
},
which includes (M-1)!! pairings.
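For completeness, here is a small sketch (our own, in Python) of how the pairings in 𝒬_M and the resulting Wick sum could be evaluated numerically; the correlation function B used at the end is a placeholder, since the actual B^(k) for our test cases is specified later.

```python
import math

def pairings(indices):
    """Yield all perfect pairings of a list of indices ((M-1)!! of them for even M)."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def bath_influence(taus, B):
    """L_b(tau_1, ..., tau_M): zero for odd M, otherwise the sum over pairings
    of the products of two-point correlation functions B."""
    M = len(taus)
    if M % 2 == 1:
        return 0.0
    total = 0.0
    for q in pairings(list(range(M))):
        prod = 1.0
        for j, jp in q:
            prod *= B(taus[j], taus[jp])
        total += prod
    return total

# Placeholder correlation function, for illustration only
B = lambda t1, t2: math.exp(-abs(t1 - t2))
print(bath_influence([0.1, 0.4, 0.7, 0.9], B))  # sums over the 3 pairings for M = 4
```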
According to <ref>, now our objective is to evaluate the following quantity
𝒢^(k)(-t,s,t)
≔ (∏_n=1^N √(i sgn(s_n)))
∑_M=0^∞∫_-t⩽τ⩽t ( ∏_m=1^M i sgn(τ_m) )
𝒰_0^(k)(τ,s)
ℒ_b^(k)(τ)
dτ,
which yields
tr(ρ_I^(k)(t) 𝒢^(k)(s)) = tr_s^(k)(ρ_s,I^(k)(t) 𝒢^(k)(-t,s,t)).
Recall that we have used a gray line with red crosses to represent 𝒢^(k)(s).
Due to the equivalence given in <ref>,
below we will use the same diagram to represent the quantity 𝒢^(k)(-t,s,t).
For example, given s = (s_1,s_2) with both s_1 and s_2 between -t and t,
<ref> can be represented diagrammatically as
[baseline=0]
[fill=lightgray] (-2,-0.05) rectangle (2,0.05);
[text=red] at (-1,0) ×;
[text=red] at (0.6,0) ×;
[text=black,anchor=north] at (-2,0) -t;
[text=black,anchor=north] at (2,0) t;
[text=black,anchor=north] at (-1,0) s_1;
[text=black,anchor=north] at (0.6,0) s_2;
=
[baseline=0]
[black] (-2,0) – (2,0);
[text=red] at (-1,0) ×;
[text=red] at (0.6,0) ×;
[text=black,anchor=north] at (-2,0) -t;
[text=black,anchor=north] at (2,0) t;
[text=black,anchor=north] at (-1,0) s_1;
[text=black,anchor=north] at (0.6,0) s_2;
+
[baseline=0]
[black] (-2,0) – (2,0);
[text=red] at (-1,0) ×;
[text=red] at (0.6,0) ×;
[text=black,anchor=north] at (-2,0) -t;
[text=black,anchor=north] at (2,0) t;
[text=black,anchor=north] at (-1,0) s_1;
[text=black,anchor=north] at (0.6,0) s_2;
[-] (-1.5,0) to[bend left=75] (-0.2,0);
[text=black,anchor=north] at (-1.5,0) τ_1;
[text=black,anchor=north] at (-0.2,0) τ_2;
+
[baseline=0]
[black] (-2,0) – (2,0);
[text=red] at (-1,0) ×;
[text=red] at (0.6,0) ×;
[text=black,anchor=north] at (-2,0) -t;
[text=black,anchor=north] at (2,0) t;
[text=black,anchor=north] at (-1,0) s_1;
[text=black,anchor=north] at (0.6,0) s_2;
[-] (-1.5,0) to[bend left=75] (-0.5,0);
[-] (0.1,0) to[bend left=75] (1.2,0);
[text=black,anchor=north] at (-1.5,0) τ_1;
[text=black,anchor=north] at (-0.5,0) τ_2;
[text=black,anchor=north] at (0.1,0) τ_3;
[text=black,anchor=north] at (1.2,0) τ_4;
+
[baseline=0]
[black] (-2,0) – (2,0);
[text=red] at (-1,0) ×;
[text=red] at (0.6,0) ×;
[text=black,anchor=north] at (-2,0) -t;
[text=black,anchor=north] at (2,0) t;
[text=black,anchor=north] at (-1,0) s_1;
[text=black,anchor=north] at (0.6,0) s_2;
[-] (-1.5,0) to[bend left=75] (0.1,0);
[-] (-0.5,0) to[bend left=75] (1.2,0);
[text=black,anchor=north] at (-1.5,0) τ_1;
[text=black,anchor=north] at (-0.5,0) τ_2;
[text=black,anchor=north] at (0.1,0) τ_3;
[text=black,anchor=north] at (1.2,0) τ_4;
+ …
In the diagrammatic equation, the location of the cross marks, given by , are fixed.
On the right hand side, τ's are integration variables.
Note that the time-ordering operator 𝒯 in the definition of 𝒰_0^(k)(τ, s) is required to guarantee that the operators are applied in the correct order.
Each arc represents a two-point correlation function B(τ_j, τ_j') in the bath influence functional ℒ_b.
The equation <ref> is ready for computation.
One can directly apply the Monte Carlo method to the right-hand side to approximate the sum of integrals,
which is known as the bare diagrammatic quantum Monte Carlo method (bare dQMC).
To design a more efficient approach,
we will follow the method in <cit.> to derive an integro-differential equation.
We first generalize the definition of 𝒢^(k)(-t,s,t)
to 𝒢^(k)(s_i, s, s_f) for any s_i < s_f:
𝒢^(k)(s_i,s,s_f)
= ( ∏_n=1^N √(i sgn(s_n)))
∑_M=0^∞∫_s_i⩽τ⩽s_f ( ∏_m=1^M i sgn(τ_m) )
𝒰_0^(k)(s_i,τ,s,s_f) ℒ_b^(k)(τ)
dτ,
where s is an increasing sequence of time points, each of which is between s_i and s_f, and
𝒰_0^(k)(s_i,τ,s,s_f)
=𝒯[V_s,I^(k)(s_1) … V_s,I^(k)(s_N) W_s,I^(k)(τ_1) … W_s,I^(k)(τ_M) O_s^(k)(0)],
if 0∈[s_i,s_f],
𝒯[V_s,I^(k)(s_1) … V_s,I^(k)(s_N) W_s,I^(k)(τ_1) … W_s,I^(k)(τ_M)],
if 0∉[s_i,s_f].
Note that only operators between s_i and s_f are included in the definition.
Therefore, when [s_i, s_f] does not include the origin,
O_s(0) should be excluded.
This definition can also be represented diagrammatically as <ref>,
only with -t replaced by s_i and t replaced by s_f.
It can then be seen that for two intervals satisfying [s_i, s_f] ⊂ [s_i', s_f'],
𝒢^(k)(s_i, s, s_f) can be understood as a portion of 𝒢^(k)(s_i', s', s_f') if s is the subvector of s' with all components between s_i and s_f.
To formulate an integro-differential equation for 𝒢^(k)(s_i, s, s_f),
we extend the gray line from s_i to s_f by a length of ds (see the left-hand side of <ref>).
Then in the expansion of the extended gray line,
all diagrams on the right-hand side of <ref> are included.
Besides, diagrams that are not included in <ref> are thin lines with arcs ending within the interval [s_f, s_f + ds] (second line in <ref>).
Since ds is infinitesimal,
it suffices to assume that there is only one time point inside [s_f, s_f + ds].
We can further assume that this time point is fixed at s_f,
and then this diagram must be multiplied by ds when being added to the sum (third to fifth lines of <ref>).
For simplicity,
we will name the arc ending at s_f as 𝒜_s_f (thick black arcs in <ref>).
We can now categorize all the diagrams with a point at s_f into classes characterized by the connected component of the arcs including the arc 𝒜_s_f.
Here the "connected component" can be established by beginning with a set including the arc (τ_k, τ_M) only,
and then expanding the set iteratively by including all arcs with intersections with any arc that is already in the set, until the set does not change.
In <ref>,
two categories are labeled by yellow and green backgrounds,
and the connected components are highlighted using thick lines (including both black and white lines).
For all diagrams with the same connected component including 𝒜_s_f,
we can sum them up and the result is the connection of a few thick lines with all arcs in this connected component,
which is known as a "bold diagram".
The derivation is summarized in the following diagrammatic equation:
[baseline=0,scale=0.8]
[fill=lightgray] (-2,-0.05) rectangle (2.2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (2.8,0.1) s_+ s;
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
=
[baseline=0,scale=0.8]
[fill=lightgray] (-2,-0.05) rectangle (2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
+ s (
[baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
+ [baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
[-] (0.5,0) to[bend left=75] (1.5,0);
+ [baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
[-] (0.5,0) to[bend left=75] (1.5,0);
[-] (-1.1,0) to[bend left=75] (-0.3,0);
+ [baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
[-,double] (-1.75,0) to[bend left=75] (0.25,0);
+ [baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
[-,double] (-1.75,0) to[bend left=75] (0.25,0);
[-] (0.5,0) to[bend left=75] (1.5,0);
+ [baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
[-,double] (-1.75,0) to[bend left=75] (0.25,0);
[-] (0.5,0) to[bend left=75] (1.5,0);
[-] (-1.1,0) to[bend left=75] (-0.3,0);
+ …)
= [baseline=0,scale=0.8]
[fill=lightgray] (-2,-0.05) rectangle (2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
+ s (
[baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[fill=lightgray] (-2,-0.05) rectangle (-0.012,0.05);
[fill=lightgray] (0.012,-0.05) rectangle (2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
(tl) at (0,-0.8) τ_1;
[->] (tl) – (0,-0.05);
+
[baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[fill=lightgray] (-2,-0.05) rectangle (-1.75-0.012,0.05);
[fill=lightgray] (-1.75+0.012,-0.05) rectangle (0-0.012,0.05);
[fill=lightgray] (0+0.012,-0.05) rectangle (0.25-0.012,0.05);
[fill=lightgray] (0.25+0.012,-0.05) rectangle (2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-,very thick] (0,0) to[bend left=75] (2,0);
[-,double] (-1.75,0) to[bend left=75] (0.25,0);
(tl) at (-1.75,-0.8) τ_1;
[->] (tl) – (-1.75,-0.05);
(tl) at (-0.2,-0.8) τ_2;
[->] (tl) – (0,-0.05);
(tl) at (0.5,-0.8) τ_3;
[->] (tl) – (0.25,-0.05);
+ …)
where the notations τ's are omitted in some diagrams without ambiguity.
The mathematical formulae of the bold diagrams can be easily read off.
For example, the bold diagram with the yellow background should be interpreted as
∫_s_i^s_f dτ_1
( i sgn(τ_1) )( i sgn(s_f) )
W_s^(k)(s_f)
𝒢^(k)(τ_1, s_1, s_f)
W_s^(k)(τ_1)
𝒢^(k)(s_i, s_0, τ_1)
B^(k)(τ_1,s_f),
where s_0,s_1 are subsequences of s
such that (s_0,τ_1,s_1) is an ascending sequence and s = (s_0,s_1),
and the bold diagram with the green background reads
∫_s_i^s_f dτ_1 dτ_2 dτ_3
( i sgn(τ_1) )( i sgn(τ_2) )( i sgn(τ_3) )( i sgn(s_f) )
W_s^(k)(s_f)
𝒢^(k)(τ_3, s_3, s_f)
W_s^(k)(τ_3)
𝒢^(k)(τ_2, s_2, τ_3)
W_s^(k)(τ_2)
𝒢^(k)(τ_1, s_1, τ_2)
W_s^(k)(τ_1)
𝒢^(k)(s_i, s_0, τ_1)
B^(k)(τ_1,τ_3) B^(k)(τ_2,s_f),
where s_0, s_1, s_2, s_3 are subsequences of s such that
(s_0,τ_1,s_1,τ_2,s_2,τ_3,s_3) is an ascending sequence and s = (s_0,s_1,s_2,s_3).
The explicit expression of the diagrammatic equation (<ref>) is as follows:
𝒢^(k)(s_i, s, s_f + ds) =
𝒢^(k)(s_i, s, s_f)
+
𝒦^(k)(s_i, s, s_f) ds,
where 𝒦^(k)(s_i, s, s_f) is the sum of bold diagrams inside the parentheses in <ref>:
𝒦^(k)(s_i,s,s_f)
=
[baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[fill=lightgray] (-2,-0.05) rectangle (-0.012,0.05);
[fill=lightgray] (0.012,-0.05) rectangle (2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-] (0,0) to[bend left=75] (2,0);
+
[baseline=0,scale=0.8]
[black] (-2,0) – (2,0);
[fill=lightgray] (-2,-0.05) rectangle (-1.75-0.012,0.05);
[fill=lightgray] (-1.75+0.012,-0.05) rectangle (0-0.012,0.05);
[fill=lightgray] (0+0.012,-0.05) rectangle (0.25-0.012,0.05);
[fill=lightgray] (0.25+0.012,-0.05) rectangle (2,0.05);
[text=red] at (-1.4,0) ×;
[text=red] at (-0.6,0) ×;
[text=red] at (1,0) ×;
[text=black,anchor=north] at (-2,0) s_;
[text=black,anchor=north] at (2,0) s_;
[gray] (2,0.05) – (2,-0.05);
[text=black,anchor=north] at (-1.4,0) s_1;
[text=black,anchor=north] at (-0.6,0) s_2;
[text=black,anchor=north] at (0.2,-0.1) …;
[text=black,anchor=north] at (1,0) s_N;
[-] (0,0) to[bend left=75] (2,0);
[-] (-1.75,0) to[bend left=75] (0.25,0);
+ …
The integro-differential equation for 𝒢^(k)(s_i, s, s_f) can then be derived as
∂𝒢^(k)(s_i,s,s_f) / ∂s_f
= 𝒦^(k)(s_i,s,s_f).
For the purpose of easier implementation,
we will also provide the mathematical expression of 𝒦^(k)(s_i, s, s_f).
The general form of 𝒦^(k)(s_i, s, s_f) is
𝒦^(k)(s_i,s,s_f)
= ∑_M=1, M odd^∞ ∫_s_i⩽τ_1⩽…⩽τ_M⩽s_f dτ_1 … dτ_M
( ∏_m=1^M+1 i sgn(τ_m) ) W_s^(k)(s_f)
𝒰^(k)(s_i,τ,s,s_f) ℒ_b^c(k)(τ),
where τ = (τ_1,…,τ_M,τ_M+1) and τ_M+1 = s_f.
The system-associated operator 𝒰^(k) is defined by
𝒰^(k)(s_i,τ,s,s_f)
= 𝒢^(k)(τ_M, s_M, s_f) W_s^(k)(τ_M) 𝒢^(k)(τ_M-1, s_M-1, τ_M) W_s^(k)(τ_M-1) ⋯ W_s^(k)(τ_1) 𝒢^(k)(s_i, s_0, τ_1)
with s_0, ⋯, s_M being subsequences of s such that s = (s_0, s_1, ⋯, s_M)
and the extended sequence
(s_i, s_0, τ_1, s_1, τ_2, ⋯, s_M-1, τ_M, s_M)
is increasing.
This indicates that s_0, ⋯, s_M are subsequences of s separated by τ_1, ⋯, τ_M.
The bath influence functional ℒ_b^c(k) is exactly the same as the bath influence functional in <cit.>:
ℒ_b^c(k)(τ_1,…,τ_M+1)
= ∑_𝔮∈𝒬_M+1^c∏_(j,j')∈𝔮
B(τ_j,τ_j')
where 𝒬_M+1^c is the set of connected diagrams.
For example,
𝒬_2^c = {{(1,2)}},
𝒬_4^c
= {{(1,3),(2,4)}},
𝒬_6^c
= {{(1,3),(2,5),(4,6)},
{(1,4),(2,5),(3,6)},
{(1,4),(2,6),(3,5)},
{(1,5),(2,4),(3,6)}}.
One may refer to <cit.> for more information about the set 𝒬_M+1^c.
In general, the number of pairings in 𝒬_M+1^c is asymptotically e^{-1} M!! when M is a large odd integer <cit.>.
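A possible way to generate 𝒬_M+1^c numerically is to filter all pairings by the crossing-connectivity criterion described above; the sketch below (our own illustration) reproduces the examples listed for 4 and 6 time points. The pairings generator is repeated from the earlier sketch for self-containedness.

```python
def pairings(indices):
    """Yield all perfect pairings of a list of indices."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def arcs_cross(a, b):
    """Two arcs (a1, a2) and (b1, b2), with a1 < a2 and b1 < b2, cross if they interleave."""
    (a1, a2), (b1, b2) = a, b
    return a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2

def is_connected(pairing):
    """Check whether all arcs belong to a single cluster under the crossing relation."""
    cluster, changed = {0}, True
    while changed:
        changed = False
        for i, arc in enumerate(pairing):
            if i not in cluster and any(arcs_cross(arc, pairing[j]) for j in cluster):
                cluster.add(i)
                changed = True
    return len(cluster) == len(pairing)

def connected_pairings(n_points):
    """Connected pairings of {1, ..., n_points} (n_points even)."""
    return [q for q in pairings(list(range(1, n_points + 1))) if is_connected(q)]

print(connected_pairings(4))        # [[(1, 3), (2, 4)]]
print(len(connected_pairings(6)))   # 4 connected pairings, as listed above
```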
For fixed s_i and s, solving the integro-differential equation <ref> requires an initial condition at s_f = s_N (or s_f = s_i if s is an empty sequence). By definition, it can be immediately seen that
𝒢^(k)(s_i, s_f = s_i) = 𝕀^(k), if s_i ≠ 0,
𝒢^(k)(s_i, s_1, ⋯, s_N, s_f = s_N) = √(i sgn(s_N)) V_s,I^(k)(s_N) 𝒢^(k)(s_i, s_1,⋯,s_N-1, s_f = s_N), if s_N ≠ 0.
Due to the observable O_s^(k) appearing in the definition of 𝒢^(k),
there is a discontinuity when any of the time points touches zero.
The jump condition needed in the computation is
lim_s_f→ 0^+ 𝒢^(k)(s_i,s_1,…,s_N,s_f)
= O_s^(k) lim_s_f→ 0^- 𝒢^(k)(s_i,s_1,…,s_N,s_f).
By these conditions,
all the full propagators 𝒢^(k)(s_i, s, s_f) can be uniquely determined.
To solve the integro-differential equation (<ref>) numerically,
we start with solving all 𝒢^(k)(s_i, s_f), i.e., N = 0,
and then increase the length of s iteratively.
Such an order guarantees that the initial condition <ref> can be applied whenever needed.
When solving 𝒢^(k)(s_i, s, s_f) for fixed s_i and s,
the second-order Heun's method is applied,
and the jump condition <ref> must be applied when s_f crosses zero.
For the series of integrals on the right-hand side of <ref>,
we select an odd positive integer M̅ and truncate the series up to M = M̅ as an approximation.
In our experiments,
the value of M̅ is at most 5,
and therefore the integrals in <ref> are computed numerically using the second-order composite trapezoidal rule.
If larger M̅ needs to be used,
one can use Monte Carlo methods to approximate the integrals, leading to the inchworm Monte Carlo method as introduced in <cit.>.
To save computational cost, we have also utilized the following property of the full propagators: for all T > 0,
𝒢^(k)(s_i+T,s_1+T,…,s_N+T,s_f+T)
= e^{-i H_s T} 𝒢^(k)(s_i,s_1,…,s_N,s_f)
e^{i H_s T},
if s_i>0;
𝒢^(k)(s_i-T,s_1-T,…,s_N-T,s_f-T)
= e^{-i H_s T} 𝒢^(k)(s_i,s_1,…,s_N,s_f)
e^{i H_s T},
if s_f<0.
Note that the property holds only when all the time points are on the same side of the origin.
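The overall time stepping described above can be organized as in the following schematic sketch (a simplification of ours; the evaluation of 𝒦 by the truncated series and the composite trapezoidal rule is left as a user-supplied callback and is not spelled out here). It advances 𝒢 in s_f by Heun's predictor-corrector steps on a grid that contains the origin, and applies the jump condition there.

```python
def heun_propagate(G_init, s_grid, kernel, O_s=None):
    """Advance G(s_f) along the grid of s_f values for dG/ds_f = K(s_f, G) with Heun's
    (explicit trapezoidal) method. `kernel(s, G_known)` must return the right-hand side
    evaluated from the propagator values computed so far; the values of G are, e.g.,
    2x2 complex matrices. `O_s` is the single-spin observable for the jump condition."""
    G = {s_grid[0]: G_init}
    for s_prev, s_next in zip(s_grid[:-1], s_grid[1:]):
        h = s_next - s_prev
        k1 = kernel(s_prev, G)
        G_pred = G[s_prev] + h * k1                      # predictor
        k2 = kernel(s_next, {**G, s_next: G_pred})
        G[s_next] = G[s_prev] + 0.5 * h * (k1 + k2)      # corrector
        if O_s is not None and abs(s_next) < 1e-12:
            # Jump condition at the origin: store the right limit O_s G(0^-),
            # which is the value used by all subsequent steps.
            G[s_next] = O_s @ G[s_next]
    return G

# Interface illustration with a trivial kernel (K = 0); real kernels sum the bold diagrams.
import numpy as np
grid = np.linspace(-1.0, 1.0, 21)  # symmetric grid containing s_f = 0
G = heun_propagate(np.eye(2, dtype=complex), grid,
                   kernel=lambda s, G_known: np.zeros((2, 2), dtype=complex),
                   O_s=np.diag([1.0, -1.0]).astype(complex))
```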
§ RESUMMATION OF THE FULL PROPAGATOR
Using the algorithm introduced in the previous section,
we are able to compute all the gray lines in <ref>.
In this section,
we will propose a fast algorithm to sum up all the diagrams.
Before introducing the algorithm,
we first note that the same gray line for the same spin can sometimes be used multiple times during the summation.
For example,
in the 4-spin case,
when the propagator 𝒢^(4)(-t, s_1, t) is computed for the fourth spin,
it can be applied in the following terms, all of which appear in <ref>:
[baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.6) – (-0.5,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
, [baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.6) ×;
[text=red] at (-0.5,0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.6) – (-0.5,0.2);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
, [baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,0.2) ×;
[text=red] at (-0.5,-0.2) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2) – (-0.5,-0.2);
[text=red] at (0.3,-0.2) ×;
[text=red] at (0.3,-0.6) ×;
[black, densely dotted, line width = 1pt] (0.3,-0.2) – (0.3,-0.6);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
, [baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,0.6) ×;
[text=red] at (0.3,0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.6) – (0.3,0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
, [baseline=0]
[fill=lightgray] (-1,0.55) rectangle (1,0.65);
[fill=lightgray] (-1,0.15) rectangle (1,0.25);
[fill=lightgray] (-1,-0.25) rectangle (1,-0.15);
[fill=lightgray] (-1,-0.65) rectangle (1,-0.55);
[text=red] at (-0.5,-0.2) ×;
[text=red] at (-0.5,-0.6) ×;
[black, densely dotted, line width = 1pt] (-0.5,-0.2) – (-0.5,-0.6);
[text=red] at (0.3,0.2) ×;
[text=red] at (0.3,-0.2) ×;
[black, densely dotted, line width = 1pt] (0.3,0.2) – (0.3,-0.2);
[text=black,anchor=north] at (-1,-0.6) -t;
[text=black,anchor=north] at (1,-0.6) t;
[text=black,anchor=north] at (-0.5,-0.6) s_1;
[text=black,anchor=north] at (0.3,-0.6) s_2;
.
Instead of applying <ref> directly to compute the summation, we will follow the idea of the modular path integral <cit.> to assemble all the gray lines by adding spins iteratively.
Suppose that we want to add up all five diagrams in <ref>.
Notice that the terms related to the last spin are essentially the same in all these diagrams.
Therefore, instead of computing all the diagrams,
a more efficient way is to apply the distributive law to separate the last spin and only add up the terms for the first three spins.
Similarly, when dealing with the sum involving the first three spins,
the first and the second diagrams in <ref> can be combined;
the third and the fifth diagrams in <ref> can also be combined.
In general,
to deal with the sum on the right-hand side of <ref>,
we can first separate all the diagrams into groups according to the number of crosses on the last line.
Then, for each of the groups,
we further separate the diagrams into subgroups according to the crosses on the third line.
For each of the subgroups,
we apply such grouping one more time according to the crosses on the second line.
When performing computations,
we first sum up the terms involving only the first spin in all the smallest groups.
For each group, we multiply the result by the corresponding term related to the second spin,
and then repeat a similar procedure for the rest of the spins.
Mathematically, this idea is based on the following iterative representation of the observable:
G^[1](-t,s,t) =
tr_s^(1)(
ρ_s,I^(1)(t)
𝒢^(1)(-t,s,t) );
G^[k+1](-t,s,t)
= ∑_N' = 0^∞∫_-t⩽s'⩽t
G^[k](-t,s',t)
tr_s^(k+1)(
ρ_s,I^(k+1)(t)
𝒢^(k+1)(-t,𝒫(s,s'),t)
) ds',
for k = 1,…,K-2;
G^[K](-t,t) = ∑_N=0^∞∫_-t⩽s⩽t
G^[K-1](-t,s,t)
tr_s^(K)(
ρ_s,I^(K)(t)
𝒢^(K)(-t,s,t) )
ds,
where s'=(s'_1,…,s'_N') and
s=(s_1,…,s_N) are two non-descending lists.
In <ref>, 𝒫 is the sorting operator to merge s and s' into a sorted list.
We start from the first spin with <ref>, add the middle spins by <ref> and close the diagram by <ref>.
These equations show that there are many duplicate computations in the procedure above,
which can be avoided.
The details of the final algorithm will again be illustrated using diagrams below.
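Before turning to the diagrams, we record a highly simplified sketch (our own, in Python) of the bookkeeping behind <ref> on a uniform time grid. It ignores quadrature details (endpoint weights, coincident grid points) as well as the cost savings discussed below, stores every G^[k] as a table indexed by sorted tuples of grid indices, and assumes that the single-spin tables g^(k), i.e. tr_s^(k)(ρ_s,I^(k)(t) 𝒢^(k)(-t,·,t)), have been precomputed by the algorithm of the previous section for sequences of length up to 2*N_max.

```python
from itertools import combinations

def add_spin(G_prev, g_next, n_grid, w, N_max):
    """One step of the recursion (schematic):
        G^[k+1](s) = sum_{N'} int G^[k](s') * g^(k+1)(P(s, s')) ds',
    with a uniform quadrature weight w per integrated time point and at most
    N_max crosses kept on each open line."""
    G_new = {}
    for N in range(N_max + 1):                       # crosses that will connect to spin k+2
        for s in combinations(range(n_grid), N):
            total = 0.0
            for Np in range(N_max + 1):              # crosses connecting to spin k
                for sp in combinations(range(n_grid), Np):
                    merged = tuple(sorted(s + sp))   # the sorting operator P(s, s')
                    # .get(...) skips sequences with repeated grid points (a set of
                    # measure zero in the continuous-time integral)
                    total += (w ** Np) * G_prev[sp] * g_next.get(merged, 0.0)
            G_new[s] = total
    return G_new

def close_chain(G_prev, g_last, n_grid, w, N_max):
    """Final step: G^[K](-t, t) = sum_N int G^[K-1](s) * g^(K)(s) ds."""
    total = 0.0
    for N in range(N_max + 1):
        for s in combinations(range(n_grid), N):
            total += (w ** N) * G_prev[s] * g_last[s]
    return total
```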
The computation of (<ref>) is straightforward.
We start our discussion with the case k = 1 in <ref>,
which becomes
G^[2](-t,s,t)
= ∑_N' = 0^∞∫_-t⩽s'⩽t
G^[1](-t,s',t)
tr_s^(2)( ρ_s,I^(2)(t) 𝒢^(2)(-t,𝒫(s,s'),t) )
ds'.
If s has length 1,
the equation can be diagrammatically represented by
G^[2](-t,s_1,t) =
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[black, line width=1.5pt] (-1,0.65-0.4) – (-1,0.15-0.4);
[black, line width=1.5pt] (+1,0.65-0.4) – (+1,0.15-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
=
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
+
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.2,0.6-0.4) ×;
[text=red] at (0.2,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (0.2,0.6-0.4) – (0.2,0.2-0.4);
+
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.1,0.6-0.4) ×;
[text=red] at (0.1,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (0.1,0.6-0.4) – (0.1,0.2-0.4);
[text=red] at (-0.8,0.6-0.4) ×;
[text=red] at (-0.8,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.8,0.6-0.4) – (-0.8,0.2-0.4);
+
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.1,0.6-0.4) ×;
[text=red] at (0.1,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (0.1,0.6-0.4) – (0.1,0.2-0.4);
[text=red] at (-0.8,0.6-0.4) ×;
[text=red] at (-0.8,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (-0.8,0.6-0.4) – (-0.8,0.2-0.4);
[text=red] at (0.7,0.6-0.4) ×;
[text=red] at (0.7,0.2-0.4) ×;
[black, densely dotted, line width = 1pt] (0.7,0.6-0.4) – (0.7,0.2-0.4);
+ ….
On the left-hand side,
the diagram represents the quantity G^[2](-t,s,t) where the two short black lines binding the bold lines indicate that all connections between the first two spins are taken into account.
The parameter s is shown as the cross on the second spin.
We use an open dashed line to indicate that it will be connected to the third spin in the next step.
The right-hand side of the equation represents the sum and the integral in <ref>.
The four diagrams represent the terms for N' = 0,1,2,3, respectively.
Similarly, if the length of s is 2, we have the following diagrammatic equation:
G^[2](-t,s_1,s_2,t) =
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[black, line width=1.5pt] (-1,0.65-0.4) – (-1,0.15-0.4);
[black, line width=1.5pt] (+1,0.65-0.4) – (+1,0.15-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.5,0.2-0.4) – (0.5,-0.2-0.4);
=
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.5,0.2-0.4) – (0.5,-0.2-0.4);
+
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.2,0.6-0.4) ×;
[text=red] at (0.2,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.2,0.6-0.4) – (0.2,0.2-0.4);
[text=red] at (0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.5,0.2-0.4) – (0.5,-0.2-0.4);
+
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.1,0.6-0.4) ×;
[text=red] at (0.1,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.1,0.6-0.4) – (0.1,0.2-0.4);
[text=red] at (-0.8,0.6-0.4) ×;
[text=red] at (-0.8,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.8,0.6-0.4) – (-0.8,0.2-0.4);
[text=red] at (0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.5,0.2-0.4) – (0.5,-0.2-0.4);
+
[baseline=0]
[fill=lightgray] (-1,0.55-0.4) rectangle (1,0.65-0.4);
[fill=lightgray] (-1,0.15-0.4) rectangle (1,0.25-0.4);
[text=red] at (-0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.5,0.2-0.4) – (-0.5,-0.2-0.4);
[text=red] at (0.1,0.6-0.4) ×;
[text=red] at (0.1,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.1,0.6-0.4) – (0.1,0.2-0.4);
[text=red] at (-0.8,0.6-0.4) ×;
[text=red] at (-0.8,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (-0.8,0.6-0.4) – (-0.8,0.2-0.4);
[text=red] at (0.7,0.6-0.4) ×;
[text=red] at (0.7,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.7,0.6-0.4) – (0.7,0.2-0.4);
[text=red] at (0.5,0.2-0.4) ×;
[black, densely dotted, line width=1pt] (0.5,0.2-0.4) – (0.5,-0.2-0.4);
+ ….
After computing the values of G^[2](-t,s,t)
for all s,
we can move forward to adding the third spin into the diagram.
An example for N=3 is
G^[3](-t,s_1,s_2,s_3,t) =
[diagrammatic equation, shown as bold-line figures in the original: the three-line full propagator is expanded into a sum of diagrams in which the added spin's line carries zero, one, two, … interaction crosses] + ⋯
We then repeat this process until the second-to-last spin has been added to the diagram. This completes the computation of <ref>.
To add the last spin, <ref> is applied instead of <ref>.
The only difference is that there are no further spins so that the time sequence s in G^[K](-t,s,t) can only be an empty list,
which will then be simply denoted by G^[K](-t,t).
Diagrammatically, in the 4-spin case, the last step can be represented by
[diagrammatic equation, shown as bold-line figures in the original: the full four-line propagator is written as a sum of diagrams with zero, one, two, three, … interaction crosses connecting the last spin's line to its neighbor] + …
Additionally, the quantity on the left-hand side is exactly the observable O_s = tr(ρ_I(t) G(-t,t)).
In practical simulations, it is impossible to consider an infinite number of diagrams.
Instead, a sufficiently large integer N̅ is chosen as the maximum number of interactions between any spin and its neighboring spins.
Diagrammatically, N̅ corresponds to the maximum number of red crosses on each line. Furthermore, as depicted in <ref>, each diagram corresponds to an integral over a simplex,
which is approximated using the composite trapezoidal quadrature rule in our numerical implementation.
Recall that the integro-differential equation is also solved using a second-order method.
The overall convergence rate of our method is second order.
Here we would like to comment on the relation and differences between the modular path integral (MPI) method proposed in <cit.> and our approach.
Both methods compute the Ising chain dynamics iteratively based on the connection of spins.
MPI utilizes QuAPI for the computation of a single spin dynamics
while our method uses the Inchworm algorithm.
Another significant difference between the two methods is that MPI considers all possible connections between spins for a given time discretization, while our method instead introduces a cutoff for the spin couplings.
With the cut-off,
it is possible to reduce the number of diagrams
and hence improve the computational efficiency.
§ ESTIMATION OF THE COMPUTATIONAL COST
In this section,
we estimate the computational cost for our method.
As discussed above, the computation contains two parts,
including the computation of all bold lines with red crosses for all the spins (<ref>)
and the summation of the full propagators (<ref>).
For simplicity,
a uniform time step is chosen throughout the computation.
All the discrete time points are therefore multiples of .
Below we will estimate the cost for computing G(-t,t) for t=,2,…,L given a positive integer L.
§.§ Computational cost for each spin
The integro-differential equation (<ref>) shows that the computation of longer diagrams depends on the knowledge of shorter diagrams.
To compute G(-t,t) for t up to L,
the maximum length of the diagrams is 2L.
For any l = 1,⋯,2L, we can then assume that all the diagrams of length less than l Δ t are already computed,
and focus on the diagrams of length l Δ t.
For fixed l, the computational costs for all diagrams of length lΔ t are generally the same.
The most costly part is the computation of 𝒢^(k)(s_,s,s_) in <ref>.
Taking the forward Euler method as an example,
we need to evaluate 𝒦^(k)(s_,s,s_ + (l-1)) to obtain 𝒢^(k)(s_,s,s_ + l ).
According to <ref>,
the computational cost can be estimated by
∑_M=1, M odd^M̅ C_M \binom{M+l}{M},
where the binomial coefficient \binom{M+l}{M} is the number of grid points in the M-dimensional simplex s_⩽⩽ s_ + (l-1), and C_M is the computational cost of the integrand.
Note that this estimation is based on the grid-based numerical quadrature,
which does not apply to Monte Carlo methods.
For large M,
the computation of the bath influence functional becomes dominant since the number of diagrams increases as 𝒪(M!!),
so that C_M can be estimated by 𝒪((M+2)!!).
In our tests,
M̅ is no more than 5.
Hence, we will regard C_M as a constant for simplicity.
With the cost of each diagram estimated by <ref>,
we now need to calculate the number of diagrams of length l.
The estimation of the computational cost starts from the number of different bold-lines with total length l
for l=1,…,2L.
When l ⩽ L,
the interval [s_,s_] may or may not contain the origin 0.
With the <ref>,
if 0∉[s_,s_], we may apply the shift invariant property to reduce the number of diagrams.
Since each spin has at most N̅ couplings,
the total number of different diagrams with length l⩽ L is
∑_N=0^N̅ (2L+1-l) \binom{N+l}{N} = (2L+1-l) \binom{N̅+l+1}{N̅},
where the factor 2L+1-l is the number of different choices of s_,
namely, s_ = -L, (-L+1), …, (L-l),
and the binomial coefficient \binom{N+l}{N}
represents the different choices of N spin interactions
on the set {s_, s_ + ,…, s_+l}.
Practically, when 0∉[s_,s_],
the translation relation <ref> can be applied for the reduction of diagrams.
However, the reduction does not change the order of the estimated cost.
Therefore, for the single-spin full propagators of all lengths, the computational cost is estimated by
∑_l=1^2L (2L+1-l) \binom{N̅+l+1}{N̅} ∑_M=1, M odd^M̅ C_M \binom{M+l}{M}
⩽ ∑_l=1^2L (2L+1-l) \binom{N̅+l+1}{N̅} C_M̅ (M̅+1)/2 \binom{M̅+l}{M̅}
≲ M̅ C_M̅ L ∑_l=1^2L l^N̅ l^M̅ ≲ L^M̅+N̅+2,
where M̅, N̅ are relatively small in practice and are regarded as constants in the above estimation.
For a spin chain with K spins,
the computational cost should be multiplied by K if all spins have different parameters.
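As a rough illustration of how this bound behaves, the following Python sketch tabulates the estimate above for a range of L, treating C_M as a constant equal to one and using the grid-based counting of diagrams and quadrature points described in this subsection; the function name and the sample values of M̅ and N̅ are illustrative assumptions rather than quantities taken from the experiments.

from math import comb

def cost_per_spin(L, M_bar, N_bar):
    # Bound on the single-spin cost: for every diagram length l, multiply the
    # number of diagrams of that length, (2L+1-l) * C(N_bar+l+1, N_bar), by the
    # cost of evaluating one diagram, sum over odd M of C(M+l, M) (with C_M = 1).
    total = 0
    for l in range(1, 2 * L + 1):
        n_diagrams = (2 * L + 1 - l) * comb(N_bar + l + 1, N_bar)
        one_diagram = sum(comb(M + l, M) for M in range(1, M_bar + 1, 2))
        total += n_diagrams * one_diagram
    return total

# Successive values should approach the predicted power law O(L^(M_bar+N_bar+2)).
for L in (5, 10, 20, 40):
    print(L, cost_per_spin(L, M_bar=3, N_bar=2))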
§.§ Computational cost for the summation
We now estimate the summation of diagrams described in <ref>.
Note that in this step,
we only need to use the values of 𝒢^(k)(s_,s,s_) with -s_=s_=l,
so that the total number of diagrams involved is much less than the previous step.
We now consider the computation of G^[k+1](-t,,t) with = (s_1, …, s_N) and t = l according to <ref>.
Recall that we have the values of 𝒢^(k+1)(-t,𝒫(,'), t) only for N + N' ⩽N̅ (see the text about the truncation before <Ref>).
The series (<ref>) should be truncated up to N' = N̅ in the computation.
As a result,
the computational cost of <ref> is
∑_N'=0^N̅-N \binom{2l+N'}{N'} = \binom{1+2l+N̅-N}{N̅-N},
where the binomial coefficient on the left-hand side is the number of grid points in the N'-dimensional simplex.
Since we need to evaluate G^[k+1](-t,,t) for on all the grid points of an N-dimensional simplex,
and N ranges from 0 to N̅,
we have the following estimation of the total computational cost:
∑_N=0^N̅ \binom{2l+N}{N} \binom{1+2l+N̅-N}{N̅-N} ≲ ∑_N=0^N̅ l^N l^N̅-N ≲ N̅ l^N̅.
Finally,
to compute observables on all time steps l = 1,…,L,
the time complexity is then 𝒪(L^N̅+1).
Compared to the solver of the inchworm equation,
the computational cost of the summation is relatively small.
Hence, the total computational cost remains at 𝒪(L^M̅+N̅+2)
as analyzed in <ref>.
§.§ Numerical verification
In agreement with our analysis,
our numerical experiments (to be presented in detail in <ref>) also show that the computational cost of the summation is nearly negligible compared with the solver of the inchworm equation.
Therefore, to verify our estimation of the computational cost,
we will focus only on the analysis in <ref>.
A convenient way to check the time complexity is to count the number of evaluations of the bath influence functional ℒ_b^(c)(), which depends only on L, M̅, N̅ and is independent of all other parameters.
Results for M̅ = 1, N̅ = 1 and M̅ = 3, N̅ = 2 with different values of L are plotted in <ref>.
It can be clearly seen that when L gets larger, the trend of growth agrees better with our analysis.
In general,
this estimation of the computational cost is the same as direct path-integral methods such as the summation of the Dyson series.
However,
the use of bold lines can significantly accelerate the convergence of the series,
resulting in a much smaller M̅ needed in the simulation.
The time complexity 𝒪(L^M̅ + N̅ + 2) shows that reducing M̅ has a great impact on the computational cost,
especially for large values of L.
We would like to comment that in the algorithm,
the most time-consuming step is the evaluation of 𝒢^(k)(s_, , s_).
To reduce the computational time,
multithreading is implemented to parallelize the computation.
In general,
according to the structure of the inchworm equation (<ref>),
the value of 𝒢^(k)(s_, , s_) for shorter is needed to obtain the full propagator for longer .
Therefore, we first compute 𝒢^(k)(s_, ∅, s_) for all s_,s_,
and then solve 𝒢^(k)(s_, s_1, s_) for all s_, s_1, s_, followed by the computation of 𝒢^(k)(s_, s_1,s_2, s_) for all s_, s_1, s_2, s_, and so forth until the maximum length of is reached.
The computations of 𝒢^(k)(s_, ∅, s_) and 𝒢^(k)(s_, s_1, s_) are carried out sequentially.
When the length of in 𝒢^(k)(s_, , s_) is greater than or equal to 2,
the algorithm is parallelized.
The parallelization is based on the fact that the inchworm equations (<ref>) for 𝒢^(k)(s_, s_1, …, s_N, s_) can actually be decoupled.
Precisely speaking,
the propagator 𝒢^(k)(s_', s_1', …, s_N', s_') can appear on the right-hand side of <ref> for = (s_1, …, s_N) only when s_k' = s_k for all k = 1,…,N,
and in this case,
we have s_' = τ_m and s_' = τ_m+1 for a certain m.
If 0 ∉(s_', s_'),
the value of 𝒢^(k)(s_', s_1', …, s_N', s_') (or 𝒢^(k)(τ_m, s_1, …, s_N, τ_m+1)) is actually obtained from <ref>.
Therefore, 𝒢^(k)(s_, s_1, …, s_N, s_) and 𝒢^(k)(s_', s_1', …, s_N', s_') are coupled only if there exists T such that s_j' = s_j + T for all j = 1,⋯,N.
This allows decoupling of equations according to the vector (s_2 - s_1, …, s_N - s_N-1), and thus the algorithm can be parallelized.
In fact, when 0 ∈ (s_1, s_N),
the equations of 𝒢^(k)(s_, , s_) are decoupled simply for different values of , since the translational relation <ref> cannot be applied. Using this structure helps with better distribution of computational cost across the threads.
§ NUMERICAL EXPERIMENTS
In this section, we evaluate our newly-proposed method using several numerical examples.
To begin with, we introduce the parameters used for the numerical tests.
For the coupling intensity between spins, the operator V^(k) is simply a scaled Pauli matrix:
V^(k) = J^(k)σ_z^(k)
where J^(k) indicates the coupling intensity between the kth spin and its neighboring spins.
The observable is chosen to be O_s = σ_z^(k) for k=1,…,K, respectively.
In <ref>, the two point correlation functions B^(k)(τ_1,τ_2) are set to be the same for every k:
B^(k)(τ_j,τ_j') = B^*(Δτ)
= 1/π∫_0^∞ J(ω) [ coth(βω/2) cos(ωΔτ) - i sin(ωΔτ) ] dω,
where Δτ = |τ_j | - |τ_j'| and J(ω) is the spectral density of the harmonic oscillators in the bath.
In this paper, we set it to be the Ohmic spectral density:
J(ω) = π/2∑_l=1^L c_l^2/ω_lδ(ω - ω_l)
where L is the number of harmonic oscillators and is set to be 400 in all our tests.
The coupling intensity c_l and frequency of each harmonic oscillator ω_l are given by
ω_l = -ω_c ln(1- l/L[1-exp(-ω_max/ω_c)]),
c_l = ω_l √(ξω_c/L[1-exp(-ω_max/ω_c)]).
The values of the parameters, including the Kondo parameter ξ, the primary frequency of the harmonic oscillators ω_c, and the maximum frequency ω_max, will be given later for each experiment.
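To make the bath setup concrete, the following Python sketch generates the discretized frequencies ω_l and couplings c_l from the formulas above and evaluates the resulting two-point correlation; because J(ω) is a sum of delta functions, the integral defining B^*(Δτ) reduces to a sum over the modes. This is only a small self-contained illustration; the variable names and the sample parameter values are our own choices.

import numpy as np

def discretize_bath(xi, omega_c, omega_max, n_modes=400):
    # Frequencies omega_l and couplings c_l of the discretized Ohmic bath.
    l = np.arange(1, n_modes + 1)
    scale = 1.0 - np.exp(-omega_max / omega_c)
    omega = -omega_c * np.log(1.0 - l / n_modes * scale)
    c = omega * np.sqrt(xi * omega_c / n_modes * scale)
    return omega, c

def bath_correlation(dtau, beta, omega, c):
    # B*(dtau): the delta functions in J(omega) turn the integral into a sum,
    # leaving a prefactor (1/2) c_l^2 / omega_l for each mode.
    pref = 0.5 * c**2 / omega
    return np.sum(pref * (np.cos(omega * dtau) / np.tanh(beta * omega / 2.0)
                          - 1j * np.sin(omega * dtau)))

omega, c = discretize_bath(xi=0.2, omega_c=2.5, omega_max=10.0)
print(bath_correlation(0.5, beta=5.0, omega=omega, c=c))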
In addition to the above physical parameters,
three numerical parameters need to be specified to carry out the simulation,
including two truncation parameters (M̅ for system-bath couplings and N̅ for interspin couplings) and the time step .
The convergence of the numerical results with respect to these parameters will be studied in the following subsection.
§.§ Convergence tests
This section carries out experiments on three convergence parameters, M̅,N̅ and ,
among which M̅,N̅ are two truncation parameters and stands for the time step.
In this section, all spins in the spin chain are prepared in the state |+1⟩.
In other words, ς^(k) = +1 for k=1,…,K in <ref>.
In the spin-boson model with a single spin,
the convergence with respect to the parameter M̅ has been studied numerically in <cit.>,
where it was shown that the convergence of the inchworm method was much faster than the Dyson series.
Here we will carry out a numerical test for the convergence of M̅ by considering a 5-spin system. We choose the time step to be = 0.2.
Other parameters are chosen as follows:
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c, N̅ = 2,
ϵ^(k) = 1,
Δ^(k) = 1,
J^(k) = 0.2,
∀ k = 1,…,5.
Our numerical results are given in <ref>,
which shows the evolution of ⟨σ_z^(k)⟩ for k = 1,⋯,5.
Note that due to the symmetry of the spin chain system,
we have σ_z^(1)(t) = σ_z^(5)(t) and σ_z^(2)(t) = σ_z^(4)(t) for all t,
and therefore only three figures are shown in <ref>.
These figures show fast convergence with respect to M̅ for this set of parameters,
due to the use of the inchworm method.
The curves for M̅ = 3 and M̅ = 5 are almost on top of each other,
while some slight differences can be observed for the computation with M̅ = 1,
which is less accurate.
We now fix M̅ and consider the convergence with respect to N̅.
We again consider a chain of 5 spins
and choose the time step to be = 0.2.
Other parameters are
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c, M̅ = 3,
ϵ^(k) = 0,
Δ^(k) = 1,
J^(k) = 0.5,
∀ k = 1,…,5.
The results for N̅=2,3,4,5 are shown in <ref>.
In general,
due to the numerical sign problem,
for longer-time simulations,
larger values of N̅ are needed to obtain accurate results.
For the first and the last spins,
since they are coupled only with one neighboring spin,
the results of N̅ = 3 already show good quality until t = 5.
For the remaining three spins,
the results for N̅ = 4 and N̅ = 5 almost coincide,
showing the convergence for the coupling intensity J^(k) = 0.5 up to t = 5.
Further increasing N̅ does not significantly improve the results.
Additionally, the convergence test is also carried out for the time step , with the parameters of the 5-spin Ising chain being
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c,
M̅ = 3,
N̅ = 2,
ϵ^(k) = 1,
Δ^(k) = 1,
J^(k) = 0.2,
∀ k = 1,…,5.
We perform simulations for the time step being 0.4, 0.2, 0.1, and 0.05 and present the results in <ref>.
Note that for M̅ = 3 and N̅ = 2,
according to our analysis in <ref>,
the computational cost is estimated by 𝒪(L^7) with L being the total number of time steps.
Therefore, to save computational time,
we run the simulation only up to t = 3.
It can be observed that for our second-order numerical method,
the time step Δ t = 0.2 can give sufficiently accurate results.
Such a time step will be taken for all the simulations in the following subsections.
§.§ Numerical tests for different coupling intensities
In this section, we conduct numerical experiments to examine the effects of varying coupling intensities between spins.
We again consider the 5-spin Ising chain with the following parameters:
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c,
M̅ = 3,
N̅ = 4,
ϵ^(k) = 1,
Δ^(k) = 1,
∀ k = 1,…,5.
As mentioned previously,
the time step is chosen as Δ t = 0.2, which is sufficient to guarantee a small truncation error.
We again set J^(k) to be the same for all k = 1,…,5.
Three values J^(k) = 0.2, 0.4, 0.6 are considered in our experiments,
and the results are given in <ref>
given that all spins are initially in the state |ς^(k)⟩ = |+1⟩ for all k.
Again, our results correctly reflect the symmetry of the Ising chain,
and therefore only three lines are plotted in each figure.
For the purpose of comparison, we also include the result for J^(k) = 0,
meaning that all the spins are decoupled.
In this case,
the evolution of the observable is identical for all the spins,
and they are the same as the spin-boson model studied in <cit.>.
Generally, for higher coupling intensity J^(k),
the discrepancy between spins is more significant,
and they differ more from the decoupled case.
In particular,
when J^(k) = 0,
all the curves coincide as predicted.
It can also be observed that the curve for the first and the last spin is more separated from the other three spins,
especially in the initial stage of the dynamics.
This is due to the fact that the two spins at the ends of the chain interact only with one spin instead of two.
In all cases,
the interaction between the spin and the bath causes smaller amplitude of the fluctuation as the system evolves.
Additionally, we also carry out an experiment where the first spin is initially at the state |ς^(1)⟩ = |-1⟩ and all other spins have the initial state |ς^(k)⟩ = |+1⟩ for k=2,…,5.
Such a spin chain is no longer symmetric.
The evolution of the observable ⟨σ_z^(k)(t) ⟩ is plotted in <ref>.
In this experiment, when J^(k)=0,
Spins 2 to 5 are physically identical,
so there are only two distinct curves in the figure.
For non-zero coupling intensities between spins,
it is clear that the behavior of the first spin is affected by the other spins.
The local minimum of the blue curves around t = 2.2 is obviously higher when the coupling intensity J^(k) gets larger.
Similar to <ref>,
the separation of the curves for Spins 2 to 5 also gets clearer for stronger coupling between spins.
§.§ Simulation of a long Ising chain
This section aims to study the behavior of a long spin chain, in which the middle part can mimic the behavior of an infinite Ising chain,
and meanwhile, one can observe the end effects.
We consider an Ising chain comprising 50 spins and 100 spins, respectively.
The parameters of all the spins are set to be the same.
Under such settings,
we anticipate observing very similar behaviours for the spins near the center of the chain.
Note that in our method, if the spins and baths have the same physical parameters,
the computational cost grows only linearly as the number of spins increases.
The parameters used in this experiment are
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c,
M̅ = 3,
N̅ = 4,
ϵ^(k) = 0,
Δ^(k) = 1,
J^(k) = 0.5,
∀ k = 1,…,K.
with K=50 or K=100.
The time step is chosen as = 0.2.
For comparison,
we also carry out the experiments for the same parameters with K=1 and K=5.
Since all spins have the same parameters,
the inchworm equation needs to be solved only once.
For longer spin chains, more computational cost is needed for the summation of full propagators.
But even so, according to our analysis in <Ref>,
the summation only takes a small proportion of the computational time.
Our numerical results are presented in <ref>.
In general, the case of a single spin is clearly different from the interacting spin chains,
while the three spin chains show very similar behaviors.
Due to the end effect,
the first and the last spins have a slightly higher flipping frequency.
Between the third and the third last spin,
the curves for all spins are indistinguishable in the plots,
and in this example, the five-spin case can already well represent a long spin chain.
§ CONCLUSION AND DISCUSSION
We proposed a method to simulate an Ising chain coupled with harmonic baths.
The algorithm is derived in two steps: first, a Dyson series decomposes the system into spin-boson units, reducing the problem to single-spin problems; second, the inchworm algorithm is applied to evaluate the evolution of each spin-boson unit, with special “crosses” representing the spin-spin couplings.
The algorithm leads to a sum of diagrams.
A special order of summation based on the distributive law is then proposed for faster evaluation of the sum, which accelerates the computation.
Under this order of summation, the most time-consuming step is the computation for a single spin-boson unit.
The computational cost is then estimated by 𝒪(L^M̅+N̅+2)
where L is the number of time steps
and M̅,N̅ are two truncation parameters for the series expansions.
Numerical experiments are carried out to validate our method.
While this paper focuses mainly on the Ising chain coupled with harmonic baths, a similar idea can be carried over to more complicated interacting systems in a way similar to <cit.>.
Also, since our approach can be regarded as a perturbative method, it is mainly applicable to short-time simulations.
Long-time simulations can be made possible by truncating the memory kernel, as in the iterative QuAPI method.
These extensions will be considered in our future work.
|
http://arxiv.org/abs/2307.04493v1 | 20230710113115 | Geometric Constraints in Probabilistic Manifolds: A Bridge from Molecular Dynamics to Structured Diffusion Processes | [
"Justin Diamond",
"Markus Lill"
] | cs.LG | [
"cs.LG",
"q-bio.QM"
] |
Geometric Constraints in Probabilistic Manifolds: A Bridge from Molecular Dynamics to Structured Diffusion Processes
Justin Diamond, Markus Lill
Department of Pharmaceutical Sciences, University of Basel, Basel, Switzerland
Correspondence: Justin Diamond ([email protected])
Understanding the macroscopic characteristics of biological complexes demands precision and specificity in statistical ensemble modeling. One of the primary challenges in this domain lies in sampling from particular subsets of the state-space, driven either by existing structural knowledge or specific areas of interest within the state-space.
We propose a method that enables sampling from distributions that rigorously adhere to arbitrary sets of geometric constraints in Euclidean spaces. This is achieved by integrating a constraint projection operator within the well-regarded architecture of Denoising Diffusion Probabilistic Models, a framework founded in generative modeling and probabilistic inference.
The significance of this work becomes apparent, for instance, in the context of deep learning-based drug design, where it is imperative to maintain specific molecular interaction profiles to realize the desired therapeutic outcomes and guarantee safety.
§ INTRODUCTION
Infinitesimal dynamics in classical mechanics is commonly formalized by Lagrangians.
By solving for trajectories that extremize the action functional, one obtains the equations of motion. In molecular systems, e.g. Molecular Dynamics, the EOM are:
Md^2x/dt^2=-∇U - ∑_aλ_a∇σ_a,
where M is the diagonal mass matrix, x the cartesian coordinates, t is time,
and U is the potential energy. The σ_a are a set of holonomic constraints and λ_a are the Lagrange multiplier coefficients. To generalize from holonomic to nonholonomic constraints, one can use slack variables to transform the latter into the first.
Starting with z_x, z_h = f(x,h) = [x(0), h(0)] + ∫_0^1ϕ(x(t), h(t))dt with z being a latent vector sampled from Gaussians and the indexes x and h indicate the latent variables associated
to the coordinates of each particle and the vector embedding of each particle, ϕ is the parameterized transformation defined by a equivariant graph neural network. This defines a Neural ODE <cit.> which generalizes to Denoising Diffusion Probabilistic Models <cit.>.
This form of transformation has the same infinitesimal nature as the EOM above, which makes it possible to apply sets of constraints via Lagrange multipliers, analogous to solving the constrained EOM; one can thus ensure the continual satisfaction of a set of constraints using the Shake algorithm from Molecular Dynamics.
The study of constrained dynamics in Molecular Dynamics and Machine Learning has traditionally focused on mostly linear constraints: e.g., removing high-frequency oscillations by constraining bond distances in the former, and in-painting in the latter by fixing certain pixel values to predetermined values. From a high level, these can be seen as linear constraint problems, since the constrained subset affects the unconstrained subset only to a minimal degree. In addition, our task is more challenging because different constraints induce different geometric and topological structures: some sets of distance constraints can determine the solution uniquely, and small modifications in the constraints may lead to vast changes in the solution set.
The problems we hope to model involve non-linear constraints, where constrained subsets of atoms determine the unconstrained subset to a high degree. We argue that these types of non-linear constraints are important in the field of generative drug development, where generated molecules must satisfy certain structural or analytic properties a priori. Take, for instance, the optimization of lead molecules, which is crucial at the final stages of the drug development pipeline, where off-target interactions should be minimized. Since these off-target interactions can often be described by structural or analytic properties, we can generate precisely those molecules that satisfy the constraint profile of the target of interest, while requiring that the generated molecules not lie within the subspace of off-target interaction profiles.
In the following, we give a summary of the Shake algorithm and of the parts of the equivariant normalizing flow necessary to explain how to combine them. Next, we point out that the spaces of latent embeddings and output samples are generally of a very different nature, and constraints defined in one space will not necessarily be useful in the other. We suggest a continuous transformation of the constraints such that they are always satisfied in the latent space, and become more restrictive throughout the integration. Lastly, we show simple examples where complex constraints are satisfied within small molecules. We leave the extension of this methodology to larger systems, and more application-based studies, to future work. Our approach builds a fruitful junction where probabilistic inference, structured data representation, and generative modeling meet, while emphasizing the necessity to encode domain knowledge effectively in these settings, offering a way to formally verify the distributions from which samples are drawn.
§ PREVIOUS RESEARCH
Generative models of graphs have been a subject of interest in recent years. A number of different approaches have been proposed in the literature. <cit.> generates valid Euclidean distance matrices ensuring the resulting molecular structures are physically realistic which are then reconstructed in 3D space. In <cit.>, Boltzmann Generators sample equilibrium states of many-body systems with deep learning, useful for generating molecular configurations that obey thermodynamics distributions.
<cit.> proposed Equivariant Graph Neural Networks, which can be applied to model molecules and proteins while ensuring that their predictions are consistent under different orientations and permutations of the molecule.<cit.> further extended the concept to the diffusion process for 3D molecule generation. <cit.> applied similar methodologies to diffusion models on protein ligand complexes, and <cit.> devise a method of protein generation models that diffuse over harmonic potentials.
The Shake algorithm, described in a parallelized fashion by <cit.>, enforces linear constraints on molecular dynamics simulations of chemicals and biomolecules. This algorithm is conventionally used in simulations to get rid of high frequency motions, i.e. those seen in bonds between atoms.
§ CONSTRAINED GENERATIVE PROCESSES
§.§ Geometric Constraints in Shake
First, we define the constraint functions for the pairwise distance (not necessarily between bonded atoms), bond angle, and dihedral angle.
σ_d_ij = (d_ij - d_ij,0)^2 = 0
σ_θ_ijk = (θ_ijk - θ_ijk,0)^2 = 0
σ_ψ_ijkl = (ψ_ijkl - ψ_ijkl,0)^2 = 0
These constraint functions compare the current pairwise distance, bond angle, and dihedral angle with their target values, and the goal is to minimize the difference. We can additionally create nonholonomic constraints via slack variables. For example, we can add a slack variable y ≥ 0 and define d_j as the boundary of a nonholonomic constraint. Then, we can express the constraint as:
σ_a := ||x_aj - x_ak||^2_2 - d_j ≤ 0 → ||x_aj - x_ak||^2_2 - d_j + y= 0.
Next, we modify the constraint matrix in the Shake algorithm to include the pairwise distance, bond angle, and dihedral angle constraints seen in equation 4, where the index sets ij, ijk, and ijkl run over the pairwise, bond angle, and torsion constraints, respectively, their lengths indicating the number of atoms involved in each constraint type.
The constraint matrix now accounts for the pairwise distance, bond angle, and dihedral angle constraints through their second-order derivatives with respect to the Cartesian coordinates and their contributions to the Lagrange multipliers. After solving for the Lagrange multipliers, the coordinates are updated using the adjusted coordinate equation as before.
It is also possible to optimize the coordinates with other optimization algorithms such as Adam or SGD.
In this section, we discuss the methods needed to understand how constraints can be represented, and define a novel diffusion process which projects the dynamics onto the submanifold defined by arbitrary sets of geometric constraints.
§.§ Shake Algorithm
The Shake algorithm takes as input a set of coordinates x of a molecular system and a set of constraints σ. At each time step the coordinates
are updated according to the equations of motion (EOM) at hand (without constraint terms) and subsequently are corrected. In general, the EOM will lead to dynamics that do not
satisfy the constraints, and thus this correction is mandatory.
Assuming unit masses for all particles and a unit time step, we have the following equation for updating x_i iteratively until the constraints are satisfied.
x_i^(n)= x_i^(n-1) - ∑_bλ_b^(n-1)∇σ_b(x_i)
where x_i^(n) is the updated coordinate after n iterations of
satisfying constraints at each time step, x_i is the initial coordinates at each time step, and λ_b^(n-1) is the lagrange multiplier for each
constraint σ_a. The equation to solve at each iteration of each time step is
∑_βλ_β^(n-1)A_αβ^(n-1)= σ_α(x_i^(n-1))
with
A_αβ^(n-1)= ∇σ_α(x_i^(n-1)) ∇σ_β(x_i).
The matrix A^(n-1)_αβ is a symmetric matrix that describes how changes in particle positions affect both potential energy and constraint violations. The elements of the matrix are given by:
A^(n-1)_αβ = ∂^2 U/∂ x_α∂ x_β + ∑_k=1^N_cλ^(n-1)_k∂^2 σ_k/∂ x_α∂ x_β
where N_c is the number of constraints. The matrix A^(n-1)_αβ is used to solve for the Lagrange multipliers λ^(n)_β , which are then used to adjust particle positions.
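A minimal Python sketch of this correction loop for pairwise distance constraints is given below. It follows the unit-mass simplification above, builds the matrix A from the constraint gradients only (the potential-energy term is omitted), and iterates until the constraints are met; the function name and the toy configuration are illustrative assumptions.

import numpy as np

def shake_project(x, pairs, d0, n_iter=100, tol=1e-10):
    # Iteratively adjust coordinates x (N, 3) so that
    # sigma_a = ||x_i - x_j||^2 - d_a^2 = 0 for every constrained pair (i, j).
    x = x.copy()
    for _ in range(n_iter):
        sigma = np.array([np.sum((x[i] - x[j]) ** 2) - d * d
                          for (i, j), d in zip(pairs, d0)])
        if np.max(np.abs(sigma)) < tol:
            break
        grads = np.zeros((len(pairs), x.size))   # gradient of each constraint
        for a, (i, j) in enumerate(pairs):
            g = 2.0 * (x[i] - x[j])
            grads[a, 3 * i:3 * i + 3] = g
            grads[a, 3 * j:3 * j + 3] = -g
        A = grads @ grads.T                       # A_ab = grad sigma_a . grad sigma_b
        lam = np.linalg.solve(A, sigma)           # solve A lambda = sigma
        x = x - (lam @ grads).reshape(x.shape)    # x <- x - sum_b lambda_b grad sigma_b
    return x

x0 = np.random.randn(4, 3)
x1 = shake_project(x0, pairs=[(0, 1), (1, 2)], d0=[1.5, 1.2])
print(np.linalg.norm(x1[0] - x1[1]), np.linalg.norm(x1[1] - x1[2]))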
§.§ Constraint-Induced Diffusion Process
Suppose we want to incorporate a constraint, such as a distance constraint between two atoms. Let's denote this constraint by f(x) = 0 for simplicity. We can modify the diffusion process to satisfy this constraint by projecting the noise term onto the nullspace of the gradient of the constraint function, analagous to the A matrix in Shake. This gives us:
dx = √(2D) (I - ∇ f(x) (∇ f(x))^T) dB - D ∇log p_t(x) dt
where D is the diffusion constant, B is a standard Brownian motion, and ∇log p_t(x) is the gradient of the log-probability density, which is equivalent to the negative of the potential energy function of the system.
Here, I is the identity matrix, and ∇ f(x) (∇ f(x))^T is the outer product of the gradient of the constraint function, which represents the direction in which the constraint is changing. This projection ensures that the noise term does not push the system out of the constraint-satisfying space.
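A discretized version of this projected process is easy to write down. The sketch below performs one Euler–Maruyama step for a single constraint f, normalizing the constraint gradient so that I - ∇f ∇f^T acts as an orthogonal projector; the drift term follows the equation as written above, and the toy score function and step size are assumptions made for illustration.

import numpy as np

def projected_diffusion_step(x, score, grad_f, D=1.0, dt=1e-3, rng=np.random):
    # One Euler-Maruyama step: project the Gaussian increment onto the
    # nullspace of grad f(x) so the noise cannot violate the constraint
    # to first order, then add the drift term.
    g = grad_f(x)
    g = g / (np.linalg.norm(g) + 1e-12)
    P = np.eye(x.size) - np.outer(g, g)
    noise = rng.standard_normal(x.size) * np.sqrt(2.0 * D * dt)
    drift = -D * score(x) * dt                 # drift as written in the SDE above
    return x + drift + P @ noise

# toy example: two scalar coordinates constrained to keep x0 - x1 fixed
grad_f = lambda x: np.array([2.0 * (x[0] - x[1]), -2.0 * (x[0] - x[1])])
score = lambda x: np.zeros_like(x)             # flat log-density for illustration
x = np.array([0.0, 1.0])
for _ in range(1000):
    x = projected_diffusion_step(x, score, grad_f)
print(x, x[0] - x[1])                          # the difference is preserved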
The covariance matrix of the perturbed Gaussian distribution of the denoising process can be understood formally using the Schur complement method, available in the Appendix. The key takeaway is the relation between constraints and correlations via projecting out the constraints in the Covariance matrix of a Multivariate Gaussian. This modified covariance matrix then defines the perturbed Gaussian distribution from which we can sample at each time step of the diffusion process. This is a good approximation when the constraints are nearly linear or when the changes in the variables are small. One note is that in if the projection operator is non-linear than the the process is no longer Gaussian, but since we deal with linearized constraints, or small changes at each time step, this is negligible as seen in the original Shake formalism. However, the Schur Complement method gives a more general formalism to ensure Gaussian-ness.
§.§ Constraints as Correlations
Consider, for instance, a scenario involving pairwise distance constraints between a set of variables denoted as d = d_ij, where d_ij signifies the distance separating variables i and j. These constraints can be mathematically expressed through the set of functions C_ij(ϵ) = ||ϵ_i - ϵ_j|| - dij = 0, which is applicable to all corresponding variable pairs (i, j) ∈d, influencing the samples drawn from a Multivariate Normal distribution.
The introduction of these geometric constraints essentially interrelates variables that were initially independent in the Gaussian distribution. In order to comprehend the implications of these constraints, the covariance matrix Σ' of the perturbed distribution p'(ϵ') is worth examining:
Σ' = 𝔼_ϵ' ∼ p' [ϵ' (ϵ')^T] - 𝔼_ϵ' ∼ p' [ϵ'] 𝔼_ϵ' ∼ p' [ϵ']^T,
Here, the expectations are calculated over the perturbed distribution. The covariance matrix Σ' elucidates the correlations among variables that emerge as a result of the geometric constraints.
Importantly, these correlations, which are encoded within the covariance matrix of a multivariate Gaussian distribution, represent the constraints in the distribution. This provides a way to naturally incorporate constraint-based information into the model.
§.§ Training and Sampling Algorithms
§.§.§ Training Process
During training, in Algorithm 2, we first sample a time step t and noise vector ϵ from uniform and Gaussian distributions respectively. Then subtract the center of gravity from the noise vector to ensure that it lies on a zero center of gravity subspace. Then compute the latent variable z_t by scaling and adding the input coordinates [x,h] with the noise vector. Finally, minimize the difference between the estimated noise vector and output of the neural network to optimize EDM. For each molecule between 5 and 15 constraints are sampled from x for each batch element. The constraints are uniformly sampled from the pairs, triples, and quadruplets of the atom set of each molecule. This adds an extra layer of complexity due to the constraint distribution which we need to sample from the true data distribution.
§.§.§ Generative Process
In this generative process, we first sample a latent variable z_T from a Gaussian distribution. Then iterate backwards through time and sample noise vectors ϵ at each step. Subtract the center of gravity of the coordinates from the noise vector to ensure that it lies on a zero center of gravity subspace. Then compute the latent variable z_s by scaling and adding the input coordinates with the noise vector and previous latent variable. Finally, sample the input coordinates [x,h] from a conditional distribution given the initial latent variable z_0. The Shake algorithm enforces the constraints, as in training, at each sampling step during generation.
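The loop below is a schematic Python rendering of this generative process: a standard ancestral denoising step, followed by the constraint projection and removal of the center of gravity at every iteration. The noise-prediction network, the projection routine, and the linear beta schedule are all placeholders; they are not the architecture or schedule used in the experiments.

import numpy as np

def constrained_sampling(denoiser, shake_project, n_atoms, T=1000, rng=np.random):
    # denoiser(z, t): predicted noise for latent coordinates z at step t.
    # shake_project(z): pulls coordinates back onto the constraint manifold.
    betas = np.linspace(1e-4, 2e-2, T)                 # placeholder schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    z = rng.standard_normal((n_atoms, 3))
    z -= z.mean(axis=0)                                # zero center-of-gravity subspace
    for t in range(T - 1, -1, -1):
        eps_hat = denoiser(z, t)
        mean = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            noise = rng.standard_normal(z.shape)
            noise -= noise.mean(axis=0)                # keep the noise on the subspace
            z = mean + np.sqrt(betas[t]) * noise
        else:
            z = mean
        z = shake_project(z)                           # enforce the geometric constraints
        z -= z.mean(axis=0)
    return z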
§ EXPERIMENTS
In the experimental section of our study, we evaluate our proposed method by generating molecules with cyclic constraints in Figure 1. The cyclic constraints impose specific geometric relationships among atoms in a molecule, such as the bond distances, bond angles, and torsional angles, which are essential for maintaining the chemical stability and physical plausibility of the generated molecules.
During the training phase, constraints are sampled from the dataset. This approach encourages the model to learn the distribution of constraints inherent in the training data, which reduces the Kullback-Leibler (KL) divergence between the data distribution and the model distribution. Consequently, the KL divergence during training is always minimized, promoting the model to generate molecules that closely resemble those in the training set.
For the practical implementation of this training procedure, we began with a pre-trained model provided by Welling et al. Our methodology then fine-tuned this pre-existing model using our constraint projection method. Due to time considerations and simplicity, our training and experiments focused on molecules consisting of 21 atoms.
§ DISCUSSION
Our method serves as a potent tool for incorporating complex constraints in denoising diffusion processes, specifically when dealing with multi-constraint specifications. Its iterative nature allows it to address nonlinear constraint problems and extends the power of denoising diffusion probabilistic models to work with constraints. Thus allowing these models to leverage the structure inherent in many physical systems. Indeed, many of these systems come with prior structural knowledge, including geometric information like distances, torsions, bond angles, and generalizeable to other piece-wise polynomial terms. Such information can significantly enhance the training process and enable explicit sampling of subsets of the state space.
Although constraints can guide generation towards more physically plausible structures, there can be potential instability in the generation process. This instability may originate from discrepancies between constraints used during training and those applied during generation. It underlines the need for further work to establish robust training procedures that align more closely with the generation constraints. Especially, with application focused studies like generating peptides or ligands with specific interaction profiles.
Though the language of our work is steeped in the semantics of Molecular Generation, the way we use geometric constraints to guide sampling mirrors a more general need of generative models in ML, which must navigate complex, structured probability spaces.
Further exploration could include adapting our methodology to discern constraints intrinsically or applying it to optimization processes like gradient-based learning and potentially lead to more efficient or robust learning algorithms.
§ APPENDIX A: GENERALIZED SCHUR COMPLEMENT FOR MULTIPLE CONSTRAINTS
To obtain a generalized approach of Schur Complement for multiple distance constraints, let's consider a set of M pairwise constraints between atoms. We can express each constraint as a function of the positions of the corresponding atoms:
f_m(𝐱_i, 𝐱_j) = ||𝐱_i - 𝐱_j||^2 - d_ij^2 = 0, m = 1, 2, …, M,
where d_ij is the distance constraint between atoms i and j.
To incorporate all the constraints, we can form the combined gradient and Hessian matrices by stacking the corresponding matrices for each constraint:
∇𝐟 = (∇ f_1, ∇ f_2, …, ∇ f_M)^T,
∇^2 𝐟 = (∇^2 f_1, ∇^2 f_2, …, ∇^2 f_M)^T.
To project the Gaussian distribution with the original covariance matrix Σ onto the space of distance constraints, we can use the following generalized Schur complement:
Σ' = Σ - Σ∇^2 𝐟^T (∇^2 𝐟Σ∇^2 𝐟^T)^-1∇^2 𝐟Σ.
While the Schur complement method can be implemented iteratively for non-linear systems, it is computationally intensive due to the inversion of the Hessian matrix. However, it serves as an excellent theoretical tool, providing a precise representation of how constraints can be formally incorporated into the diffusion process.
On the other hand, the Schur complement method provides a direct way to project the covariance matrix of the atomic positions onto the space that satisfies the distance constraints. It essentially modifies the covariance matrix in a way that embeds the constraints, without needing to adjust the atomic positions. This approach formally modifies the probability distribution of interest, and may be more useful for theoretic insight.
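A direct numerical rendering of this projection is short; in the sketch below, J stands for the stacked constraint-derivative matrix denoted ∇^2 𝐟 above, and a small jitter term guards against a singular middle block. The toy example with one linear constraint is an assumption for illustration only.

import numpy as np

def project_covariance(Sigma, J, jitter=1e-10):
    # Sigma' = Sigma - Sigma J^T (J Sigma J^T)^{-1} J Sigma
    S = J @ Sigma @ J.T + jitter * np.eye(J.shape[0])
    return Sigma - Sigma @ J.T @ np.linalg.solve(S, J @ Sigma)

Sigma = np.eye(3)                       # three independent unit-variance variables
J = np.array([[1.0, -1.0, 0.0]])        # one constraint coupling the first two
Sigma_c = project_covariance(Sigma, J)
print(Sigma_c)
print(J @ Sigma_c @ J.T)                # ~0: no remaining variance violates the constraint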
§ APPENDIX C: NONHOLONOMIC CONSTRAINTS
We are more interested in nonholonomic constraints where each constraint has possibly a lower and upper bound. As we mentioned earlier,
by adding a slack variable one can translate the nonholonomic constraints to holonomic ones. To formalize this, one sees that a constraint having
a lower and upper bound will either be completely satisfied or fail to satisfy a single boundary. Thus, we only have to consider
at most one holonomic constraint at each call to Shake meaning each constraint with a lower and upper bound may be replaced by a lower, upper,
or no bound for each call.
To calculate the slack variable y from σ_jk:=‖ x^l_i-x^l_j ‖ - d_jk which is ≤ or ≥ 0, one has
y = max(0, ||x^l_i - x^l_j|| - d^u_jk) if the constraint is of type "≤", and
y = max(0, d^l_jk - ||x^l_i - x^l_j||) if the constraint is of type "≥",
where d_jk is the lower or upper bound in the case of nonholonomic constraints and the defined constraint value for holonomic constraints.
In the generative process, we define the initial values of d_jk such that the constraints have little effects. The constraints are then linearly interpolated throughout the ODE until the predetermined boundary values of d_jk are reached.
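The two pieces above, the slack computation and the gradual tightening of the bounds, can be sketched as follows; the linear schedule mirrors the interpolation just described, while the function names and arguments are our own.

import numpy as np

def slack(x_i, x_j, d_lo=None, d_hi=None):
    # Slack y for a pairwise constraint with an optional lower and/or upper
    # bound: zero whenever the bound is already satisfied.
    r = np.linalg.norm(x_i - x_j)
    if d_hi is not None and r > d_hi:     # "<=" bound violated
        return r - d_hi
    if d_lo is not None and r < d_lo:     # ">=" bound violated
        return d_lo - r
    return 0.0

def interpolate_bound(step, n_steps, d_init, d_final):
    # Linearly tighten a bound from a loose initial value to its target value
    # over the course of the generative integration.
    w = step / max(n_steps - 1, 1)
    return (1.0 - w) * d_init + w * d_final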
§ APPENDIX B: INCORPORATION OF LOGICAL OPERATORS IN GEOMETRIC CONSTRAINTS
The application of logical operators such as'AND', 'OR' and 'NOT' within geometric constraints enables a more flexible and representative modeling of physical and chemical systems. Real-world scenarios frequently require the satisfaction of multiple constraints following complex logical rules. Below, we detail the basic implementation of 'OR' and 'NOT' logical operators within the geometric constraints of our diffusion process while noting that the 'AND' operator is the basis of the formalism:
§.§ 'OR' Logic
The 'OR' condition necessitates that at least one of two (or more) constraints be met. Let's denote two constraint functions as f_1(x) and f_2(x). The 'OR' logic can be integrated by constructing a composite constraint function that is satisfied when any of its constituent constraints is met. We can express this as:
g(x) = min(f_1(x), f_2(x))
In this case, if either f_1(x) = 0 or f_2(x) = 0 (or both), g(x) = 0, thereby meeting the 'OR' condition. Alternatively, we can employ a product of the constraints:
g(x) = f_1(x) · f_2(x)
If either f_1(x) = 0 or f_2(x) = 0 (or both), g(x) = 0, again adhering to the 'OR' logic. This method requires that both f_1(x) and f_2(x) are always non-negative.
§.§ 'NOT' Logic
The "NOT" operator in the context of geometric constraints could be defined using the following equations. Let's say we have a constraint f(x) = 0. We want to define a NOT operator for this constraint. We can then define "NOT f(x)" as regions where f(x) does not equal zero, which can be represented with two inequality constraints which can be combined via the 'OR' operator to designate the 'NOT' operator.
We denote ϵ as a small positive number, then "NOT f(x)" can be represented as:
g_1(x) = f(x) + ϵ < 0
g_2(x) = f(x) - ϵ > 0
In the equations above, we have defined two regions (when f(x) is smaller than -ϵ and larger than ϵ) where "NOT f(x)" is true, thus defining a NOT operator for our constraints. Note that these regions depend on the choice of ϵ.
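As a small illustration of how these rules compose, the sketch below combines non-negative constraint residuals (zero when satisfied) with the AND, OR, and NOT rules above; the epsilon band and the toy residual functions are assumptions for illustration.

def c_and(f1, f2):
    # AND: the sum of two non-negative residuals vanishes iff both vanish.
    return lambda x: f1(x) + f2(x)

def c_or(f1, f2):
    # OR: the minimum (or, alternatively, the product) vanishes iff either does.
    return lambda x: min(f1(x), f2(x))

def c_not(f, eps=1e-3):
    # NOT: satisfied only when f(x) lies outside the band around zero; with
    # non-negative residuals this reduces to requiring f(x) >= eps.
    return lambda x: max(0.0, eps - f(x))

f_close = lambda x: (x - 1.0) ** 2      # satisfied near x = 1
f_far = lambda x: (x - 3.0) ** 2        # satisfied near x = 3
either = c_or(f_close, f_far)
print(either(1.0), either(3.0), either(2.0))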
|
http://arxiv.org/abs/2307.04319v1 | 20230710032047 | New Variants of Frank-Wolfe Algorithm for Video Co-localization Problem | [
"Hamid Nazari"
] | cs.CV | [
"cs.CV",
"math.OC"
] |
New Variants of Frank-Wolfe Algorithm for Video Co-localization Problem
Hamid Nazari
Clemson University, Clemson, SC
The co-localization problem is a model that simultaneously localizes objects of the same class within a series of images or videos. In <cit.>, the authors present new variants of the Frank-Wolfe algorithm (aka conditional gradient) that increase the efficiency of solving the image and video co-localization problems. The authors show the efficiency of their methods through the rate of decrease of a value called the Wolfe gap in each iteration of the algorithm. In this project, inspired by the conditional gradient sliding algorithm (CGS) <cit.>, we propose algorithms for solving such problems and demonstrate the efficiency of the proposed algorithms through numerical experiments. The efficiency of these methods with respect to the Wolfe gap is compared by implementing them on the YouTube-Objects dataset for videos.
§ IMAGE AND VIDEO CO-LOCALIZATION PROBLEMS
Problems in recognizing and localizing particular objects in images and videos have received much attention recently as internet photo and video sharing have become increasingly popular.
Co-localization involves localizing a common object with bounding boxes across a set of images or across videos, viewed as sequences of images (frames).
§ MODEL SETUP FOR IMAGES
Our ultimate goal is to localize the common object in a set of images or in the series of frames of a video. Here we first give a brief review of the image and video models based on the formulation in <cit.>. To this end, we review the required background at each step, to the extent needed to make the features and variables of the mathematical programming model understandable. Note that this formulation is based on the formulation introduced in <cit.> for image co-localization. The quadratic formulation that we review in this section localizes objects in any set of images and videos simultaneously. Similar discrete optimization approaches for various computer vision applications can also be found in <cit.>.
§.§ Objectness for Images
Suppose that we have a set ℐ = {I_1, I_2, …, I_n} of n given images, and our goal is to localize the common object in each image. One approach is to find candidate boxes in each image that potentially contain an object using objectness <cit.>.
While object detectors for images are usually specialized for one object class such as cars, airplanes, cats, or dogs, objectness quantifies how likely it is for an image window to cover an object of any class. In an image, objects such as cats, dogs, and chairs have a well-defined boundary and center, as opposed to indefinite background such as walls, sky, grass, and road. Figure <ref> illustrates the desired behavior of an objectness measure: green windows, which fit an object tightly, should score highest; blue windows, which cover an object only partly together with some background, should score lower; and red windows, which contain only background, should score lowest. This way of scoring windows is designed in <cit.> and explicitly trained to distinguish windows containing an object from background windows.
Using objectness, we generate m candidate boxes (e.g. green boxes in Figure <ref>) for each image that could potentially contain an object. In other words, for j∈{1,2,…,n} we define ℬ_j to be the set of all boxes in image I_j∈ℐ. Then the goal is to select the box that contains the object, from each image, jointly. Also, for simplicity let ℬ = ℬ_1 ∪ℬ_2 ∪⋯∪ℬ_n and let n_b = nm be the total number of boxes in all images.
§.§ Feature representation
Assume that we have determined m candidate boxes in each of the two different images I_i and I_j for any i,j∈{1,2,…, n}. A common object in I_i and I_j might differ in shape, scale, color, brightness, viewing angle, and many other attributes. Therefore, it is critical to extract distinctive invariant features from images that can be used to perform reliable matching between different views of an object. David G. Lowe in <cit.> introduces a method that finds features that are invariant to image scaling and rotation, and partially invariant to changes in illumination and 3D camera viewpoint. Using his method, a large number of features can be extracted from typical images with efficient algorithms, while keeping the cost of extracting these features low. The major stages of computation used to generate the set of image features are as follows.
* Scale-space extrema detection: The first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and orientation.
* Keypoint localization:
At each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.
* Orientation assignment:
One or more orientations are assigned to each keypoint location based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location for each feature, thereby providing invariance to these transformations.
* Keypoint descriptor:
The local image gradients are measured at the selected scale in the region around each keypoint. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.
This process is called Scale Invariant Feature Transform (SIFT). SIFT transforms image data into scale-invariant coordinates relative to local features. Using SIFT we can generate large numbers of features that densely cover the image over full range of scales and locations.
Let b_k be a box in ℬ. Then we denote the SIFT feature representation of b_k as x_k∈ℝ^d, where d = 10,000 is the dimension of the feature descriptor for each box in ℬ. Finally, we stack the feature vectors to form the feature matrix X∈ℝ^n_b× d.
§.§ Prior, Similarity, and Discriminability of boxes
Let us denote the boxes that contain an instance of the common object as positive boxes, and the ones that do not as negative boxes. Then a prior is introduced for each box that represents a score for how likely the box is to be positive. This is done using a saliency map <cit.> for each box, and the prior is in fact the average saliency within the box, weighted by the size of the box. Finally, we stack these values into the n_b-dimensional vector m⃗ as the prior vector.
In addition, boxes that have a similar appearance should be labeled the same. This is handled through a matrix called the similarity matrix, denoted by S. The similarity matrix of the boxes in ℬ is based on the box feature matrix X described above. Let b_i and b_j be any two boxes in ℬ where i,j∈{1,2,…,n_b}. Then the similarity matrix S∈ℝ^n_b× n_b is computed based on the χ^2-distance as
S_ij = exp-γ∑_k=1^d(x_ik - x_jk)^2/x_ik + x_jk,
where γ = (10d)^-1/2. For i and j where boxes b_i and b_j belong to the same image we set S_ij=0. Then the normalized Laplacian matrix <cit.> is computed as
ℒ = I_n_b - D^-1/2SD^-1/2,
where D is the diagonal matrix composed of row sums of S.
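A compact Python sketch of this construction is given below: it computes the χ²-based similarity between box features, zeroes out pairs coming from the same image, and forms the normalized Laplacian. It assumes non-negative feature vectors (as is the case for SIFT-based descriptors) and uses illustrative array names and sizes.

import numpy as np

def chi2_similarity(X, image_ids, gamma=None, eps=1e-12):
    # S_ij = exp(-gamma * chi2(x_i, x_j)), with S_ij = 0 for boxes of the same image.
    n_b, d = X.shape
    if gamma is None:
        gamma = (10.0 * d) ** -0.5
    S = np.zeros((n_b, n_b))
    for i in range(n_b):
        chi2 = np.sum((X[i] - X) ** 2 / (X[i] + X + eps), axis=1)
        S[i] = np.exp(-gamma * chi2)
    S[image_ids[:, None] == image_ids[None, :]] = 0.0
    return S

def normalized_laplacian(S, eps=1e-12):
    # L = I - D^{-1/2} S D^{-1/2}, with D = diag(row sums of S).
    d_inv = 1.0 / np.sqrt(S.sum(axis=1) + eps)
    return np.eye(S.shape[0]) - (d_inv[:, None] * S) * d_inv[None, :]

X = np.abs(np.random.randn(6, 16))      # toy case: 2 images, 3 boxes each
ids = np.array([0, 0, 0, 1, 1, 1])
L = normalized_laplacian(chi2_similarity(X, ids))
print(L.shape)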
§.§ Model Formulation
Associated with each box b_j,k∈ℬ_j we define a binary variable z_j,k where z_j,k=1 when b_j,k is a positive box (contains an instance of the common object) and 0 otherwise. Then we define the integer vector variable
z⃗ = (z_1,1,…,z_1,m, …, z_n,1,…, z_n,m)^T∈{0,1}^n_b.
Making the assumption that in each image there exists at most one positive box, our set of constraints is defined by
∑_k = 1^m z_j,k = 1, ∀ j ∈{1,…, n}.
As we introduced a prior for each box and defined the n_b dimensional vector of average saliency within the boxes, we obtain a linear term that penalizes less salient boxes as part of the objective function:
f_p(z⃗) := -z⃗^Tlog(m⃗).
Similarly, our choice of normalized Laplacian matrix ℒ defined in (<ref>) results in a quadratic term that handles the selection of similar boxes:
f_L(z⃗) := z⃗^Tℒz⃗.
This is motivated by the work of Shi and Malik <cit.> in which they have taken advantage of eigenvalues of the Laplacian for clustering z⃗ by the similarity matrix. In fact, they have shown that with the eigenvector corresponding to the second smallest eigenvalue of a normalized Laplacian matrix we can cluster z⃗ along the graph defined by the similarity matrix, leading to normalized cuts when used for image segmentation. Also, Belkin and Niyogi <cit.> showed that this problem is equivalent to minimizing (<ref>) under linear constraints. In fact, the similarity term works as a generative term which selects boxes that cluster well together <cit.>.
Although discriminative learning techniques such as support vector machines and ridge regression have been widely used on many supervised problems in which there are known labels, they can also be used in this unsupervised case where the labels of the boxes are unknown <cit.>. Motivated by <cit.>, we consider the ridge regression objective function for boxes:
min_w∈ℝ^d, c∈ℝ 1/n_b∑_j=1^n∑_k=1^m‖ z_j,k - w^T x_j,k - c ‖_2^2 + κ/d‖ w‖_2^2,
where w is the d-dimensional weight vector of the classifier, and c is the bias. This cost function is chosen among discriminative cost functions because the ridge regression problem has an explicit (closed-form) solution for the weights w and bias c, which yields the following quadratic function in the box labels <cit.>:
f_D(z⃗):=z⃗^T𝒜z⃗,
where
𝒜 = 1/n_bΠ_n_b( I_n_b - X(X^TΠ_n_bX + n_bκ I_d)^-1X^T )Π_n_b,
is the discriminative clustering term and Π_n_b = I_nb - 1/n_b1⃗_n_b1⃗_n_b^T in (<ref>) is the centering projection matrix. Note that this quadratic term allows us to utilize a discriminative objective function to penalize the selection of boxes whose features are not easily linearly separable from other boxes.
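The closed-form matrix can be assembled in a few lines; the sketch below writes the inner identity with dimension d (the size of X^T Π X) and checks symmetry and positive semi-definiteness numerically. The routine name and the toy sizes are our own choices.

import numpy as np

def discriminative_term(X, kappa=1e-3):
    # A = (1/n_b) Pi (I - X (X^T Pi X + n_b kappa I_d)^{-1} X^T) Pi
    n_b, d = X.shape
    Pi = np.eye(n_b) - np.ones((n_b, n_b)) / n_b
    M = X.T @ Pi @ X + n_b * kappa * np.eye(d)
    inner = np.eye(n_b) - X @ np.linalg.solve(M, X.T)
    return Pi @ inner @ Pi / n_b

A = discriminative_term(np.random.randn(20, 8))
print(np.allclose(A, A.T), np.linalg.eigvalsh(A).min() >= -1e-10)  # symmetry and PSD check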
Summing up our results in (<ref>), (<ref>), (<ref>), and (<ref>), the optimization problem to select the best box in each image is given by
min_z⃗ z⃗^T(ℒ+μ𝒜)z⃗ - λ z⃗^Tlog(m⃗)
s.t ∑_k = 1^m z_j,k = 1, j=1,…, n
z⃗ = (z_1,1,…,z_1,m, …, z_n,1,…, z_n,m)^T∈{0,1}^n_b,
where the parameter μ regulates the trade-off between the quadratic terms (<ref>) and (<ref>), and the parameter λ handles the trade-off between the linear term (<ref>) and the quadratic terms (<ref>) and (<ref>). Recall that the linear constraints ensure that one box from each image is selected in the optimal solution. Note that Hastie, Tibshirani, and Friedman in <cit.> showed that 𝒜 is a positive semi-definite matrix. Also, since the matrix ℒ is positive semi-definite as well, the objective function of (<ref>) is convex.
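Since the Frank-Wolfe variants studied here operate on the relaxation of this problem in which the binary vector is relaxed to the product of simplices defined by the linear constraints, the two ingredients needed at every iteration are the gradient of the objective and the linear minimization oracle, which simply picks the box with the smallest gradient entry in each image; the Wolfe gap follows directly. The sketch below is a schematic of these ingredients under that standard relaxation, with illustrative function names.

import numpy as np

def gradient(z, L, A, log_prior, mu, lam):
    # Gradient of f(z) = z^T (L + mu*A) z - lam * z^T log(m).
    Q = L + mu * A
    return (Q + Q.T) @ z - lam * log_prior

def linear_oracle(grad, n_images, m):
    # Minimize grad^T s over { s >= 0 : the m entries of each image sum to 1 }:
    # put all mass of each image on its box with the smallest gradient entry.
    s = np.zeros(n_images * m)
    for j in range(n_images):
        k = np.argmin(grad[j * m:(j + 1) * m])
        s[j * m + k] = 1.0
    return s

def wolfe_gap(z, grad, n_images, m):
    # g(z) = max over feasible s of grad^T (z - s).
    return float(grad @ (z - linear_oracle(grad, n_images, m)))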
§ MODEL SETUP FOR VIDEOS
Co-localization in a video is very similar to the image case, as a video is a sequence of images called frames. Since an object is unlikely to change drastically in size, shape, color, or other attributes between two consecutive frames, co-localization in a video can in some respects be a simpler task. In this section we describe the localization of a common object in a set of videos. In fact, if 𝒱 = {V_1, V_2, …, V_n} is a set of n given videos, we explore an approach to localize a common object in each frame of each video. More precisely, we consider ℐ_i = {I_i1, I_i2, …, I_il_i} to be the temporally ordered set of frames of video V_i. Here I_ij is the j-th frame of the i-th video and l_i is the total number of frames, or the length, of V_i, for i=1,…,n and j=1,…, l_i. Similar to what we did in the image case, we set ℬ_i,j to be the set of m candidate boxes generated, using objectness <cit.>, for the j-th frame of the i-th video. Then, considering l_i frames in video i and m boxes in each frame, we set n_b^v = ∑_i=1^n l_im to be the total number of boxes in 𝒱, the set of all videos.
Note that, if we set ℐ = {ℐ_1, ℐ_2,…, ℐ_n} to be the ordered set of all frames in 𝒱, model (<ref>) returns a single box in each frame (image) as an optimal solution. Although the objective function of this model capture the box prior, similarity, and discriminability within different videos, as we can define a more efficient similarity mapping withing boxes in the sequence of frames in a video.
§.§ Temporal Consistency In Frames of a Video
As discussed earlier in this section, objects in consecutive frames of a video are unlikely to change drastically in appearance, position, or size. This motivates a separate prior for the frames in the video case. Temporal consistency <cit.> is a powerful prior that is often leveraged in video tasks such as tracking <cit.>. With this prior, boxes in consecutive frames that differ greatly in size and position should be unlikely to be selected together. To this end, a simple temporal similarity measure is defined between two boxes b_i and b_j from consecutive frames by:
s_temporal(b_i, b_j) := exp(-‖b_i^center - b_j^center‖_2 - ‖b_i^area - b_j^area‖_2/max(b_i^area , b_j^area)).
A few comments are in order about the prior defined in (<ref>). First, b_i^area is the pixel area of box b_i and b_i^center is the vector of the center coordinates of box b_i, normalized by the width and height of the frame. Second, the measure defined in (<ref>) is a similarity computed between all pairs of boxes in adjacent frames. From this measure we can define a weighted graph 𝒢_i for video 𝒱_i, i = 1,2, …, n, whose nodes are the boxes in each frame, whose edges connect boxes in consecutive frames, and whose edge weights are the temporal similarities in (<ref>). Figure <ref> is a graphical representation of the graph 𝒢_i. If the similarity falls below some threshold, we remove the corresponding edge and disconnect the nodes. Finally, as long as we can build a weighted graph over the boxes, any similarity measure other than the temporal consistency in (<ref>) can be used to weight the edges between two boxes, which makes the temporal framework quite flexible.
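A minimal sketch of this similarity is given below; the corner-coordinate box representation and the normalization choices are illustrative assumptions, since only the normalized centers and areas matter for (<ref>).

```python
import numpy as np

def temporal_similarity(box_i, box_j, frame_w, frame_h):
    """Temporal similarity between two boxes (x_min, y_min, x_max, y_max)
    taken from consecutive frames, as in the prior above."""
    def center(b):
        return np.array([(b[0] + b[2]) / (2.0 * frame_w),
                         (b[1] + b[3]) / (2.0 * frame_h)])
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1]) / float(frame_w * frame_h)

    a_i, a_j = area(box_i), area(box_j)
    center_dist = np.linalg.norm(center(box_i) - center(box_j))
    area_diff = abs(a_i - a_j) / max(a_i, a_j)
    return float(np.exp(-center_dist - area_diff))
```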
Let us define
S_t(i,j) = {[ s_temporal(b_i, b_j) if b_i and b_j lie in adjacent frames; 0 otherwise ].
to be the similarity matrix defined by the temporal similarity measure, where b_i and b_j are any two boxes in the set of all boxes in 𝒱. Similar to our approach to obtain (<ref>), from S_t we can compute the normalized Laplacian
U = I_n_b^v - D^-1/2S_tD^-1/2,
where D is the diagonal matrix of the row sums of S_t. Minimizing the quadratic form z⃗^TUz⃗ encourages the selection of boxes that are similar according to the temporal similarity measure (<ref>).
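A small sketch of this construction follows; the handling of isolated boxes (zero row sum after thresholding) is an implementation assumption.

```python
import numpy as np

def normalized_laplacian(S):
    """Normalized Laplacian U = I - D^{-1/2} S D^{-1/2} of a similarity matrix S."""
    deg = S.sum(axis=1)
    inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.where(deg > 0, deg, 1.0)), 0.0)
    D_inv_sqrt = np.diag(inv_sqrt)
    return np.eye(S.shape[0]) - D_inv_sqrt @ S @ D_inv_sqrt
```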
§.§ Video Model Formulation
As discussed above, temporal similarity gives rise to a weighted graph 𝒢_i for each video 𝒱_i, i=1,2,…,n. In fact, a valid path in 𝒢_i from the first to the last frame of 𝒱_i corresponds to a feasible choice of one box per frame of 𝒱_i. This motivates defining a binary variable that is on when the corresponding edge between two nodes of 𝒢_i is selected and off otherwise. More precisely, we define the binary variable y_i,j,k for video i and boxes b_j and b_k in 𝒱_i as
y_i,j,k = {[ 1 if boxes b_j and b_k contain the common object; 0 otherwise. ].
In fact, the variable y_i,j,k corresponds to the selection of the edge between boxes b_j and b_k in 𝒱_i. Also, we define the binary variable z_i,j,k to be 1 if box b_k in frame j of video i contains the common object, and 0 otherwise. One type of constraint we need here expresses the fact that an edge between boxes b_j and b_k may exist only if the two boxes lie in consecutive frames. Then, for a box b_k in frame j of video 𝒱_i, we define the index sets p(k_j) and c(k_j) as the sets of indices of parent and child boxes, in frames j+1 and j-1 respectively, that are connected to b_k in frame j in the graph 𝒢_i. Therefore, a required set of constraints for localization in the video case is given by:
z_i,j,k = ∑_l∈ p(k_j) y_i,l,k_j = ∑_l∈ c(k_j)y_i,k_j,l, i = 1,…, n, j=1,…,l_i, k=1,…,m.
The other set of constraints, quite similar to the image co-localization case, restricts each frame of each video to have exactly one box that contains the common object. These constraints are defined by:
∑_k = 1^m z_i,j,k = 1, i=1,2,…,n, j = 1,2,…, l_i.
Finally, we define the vector of variables
z⃗ = (z_1,1,1,z_1,1,2, …, z_i,j,k, …, z_n,l_n,m)^T∈{0,1}^n_b^v
where n_b^v = m∑_i=1^nl_i. If we combine the temporal term defined by (<ref>) with the terms in the objective function of the original image model (<ref>), together with the constraints defined in (<ref>) and (<ref>), we obtain the following optimization formulation to select the box containing the common object in each frame of each video:
min_z⃗, y z⃗^T(L+μ A + μ_t U)z⃗ - λ z⃗^Tlog(m⃗)
s.t. ∑_k = 1^m z_i,j,k = 1, i=1,2,…,n, j = 1,2,…, l_i,
z_i,j,k = ∑_l∈ p(k_j) y_i,l,k_j = ∑_l∈ c(k_j)y_i,k_j,l
i = 1,…, n, j=1,…,l_i, k_j=1,…,m,
y_i,s,t∈{0,1}, i = 1,…,n, s,t = 1,…,m
z⃗=(z_1,1,1,z_1,1,2, …, z_i,j,k, …, z_n,l_n,m)^T ∈{0,1}^n_b^v,
where μ_t is the trade-off weight for the temporal Laplacian matrix. Note that with the new objective function in problem (<ref>), the extra constraint (<ref>) of the video case is necessary: without it, the temporal Laplacian term could drive the solution to an invalid path. This formulation allows us to incorporate temporal consistency into the image model.
§ OPTIMIZATION
The formulation (<ref>) obtained to find the best box in each of the given images is a standard binary constrained quadratic problem. The only feature that makes this problem non-convex is the set of binary constraints. Relaxing them to continuous linear constraints turns the problem into a convex optimization problem that can be solved efficiently using standard methods. In fact, first-order methods such as the Frank-Wolfe method discussed in previous chapters handle the relaxed problem efficiently, as they linearize the quadratic objective function and call a linear optimization oracle in each iteration.
Denoting the feasible region of problem (<ref>) by 𝒫, we can follow a similar approach as for (<ref>): we relax the discrete non-convex set 𝒫 to its convex hull, the integer hull conv(𝒫) in this specific case. Standard algorithms such as interior-point methods can be applied to the relaxed problem, but their roughly 𝒪(N^3) complexity in the number of boxes N makes them perform poorly as the number of videos grows to hundreds and the dimension of the problem increases accordingly. For the relaxation of the video problem we will show in the implementation section that the suggested first-order methods perform efficiently. We will also propose a first-order method later in this chapter and show that it performs better than the other first-order methods that have been applied to this problem.
Note that the constraints defining the set 𝒫 are separable over videos. In fact, for each video, these constraints are exactly the constraints of a shortest-path problem. This implies that the linear optimization step arising in each iteration of a first-order method is a shortest-path problem that can be solved efficiently using dynamic programming.
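The following sketch illustrates such a dynamic-programming oracle for a single video; the data layout (per-frame cost arrays and boolean adjacency between consecutive frames) is an assumption made for illustration. The same routine also serves the rounding step discussed later, since maximizing ⟨p,y⟩ over paths is a shortest-path problem on the costs -y.

```python
import numpy as np

def video_lmo(costs, edges):
    """Linear minimization oracle for one video: pick one box per frame,
    connected along the temporal graph, minimizing the linearized cost.

    costs[t] : length-m array of linearized costs of the boxes in frame t
    edges[t] : (m, m) boolean array; edges[t][j, k] is True if box j in
               frame t may be followed by box k in frame t+1
    Returns the index of the selected box in every frame (a valid path).
    """
    T, m = len(costs), len(costs[0])
    dp = np.asarray(costs[0], dtype=float)   # best cost of a path ending at each box
    back = np.zeros((T, m), dtype=int)
    for t in range(1, T):
        dp_new = np.full(m, np.inf)
        for k in range(m):
            feas = np.flatnonzero(edges[t - 1][:, k])
            if feas.size:
                j = feas[np.argmin(dp[feas])]
                dp_new[k] = dp[j] + costs[t][k]
                back[t, k] = j
        dp = dp_new
    path = [int(np.argmin(dp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```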
Recall that the Frank-Wolfe algorithm is a first-order method that, in each iteration, updates the current point along a direction obtained by calling a linear optimization oracle. The objective of this linear subproblem is a linear approximation of the objective function of (<ref>) and (<ref>). For the image and video co-localization problems, the Frank-Wolfe linearization leads to simple subproblems with integer solutions. For the image model, the linearized cost function is separable over images, and the best integer solution is found efficiently for each image. For the video model, the cost function and the constraints are separable over videos, and optimizing the linearized function over the feasible region reduces to a shortest-path problem for each video.
In the following section we propose an algorithm that can be applied efficiently to the image and video co-localization problems, and we then compare its performance with that of the algorithms previously applied to these problems.
§ PROPOSED ALGORITHMS
The Conditional Gradient Sliding (CGS) algorithm <cit.> is a first-order, projection-free method for solving convex optimization problems whose feasible region is a convex and compact set. The major advantage of the CGS algorithm is that it skips gradient evaluations from time to time and reuses the same information within some inner iterations. This property becomes helpful when the dimension of the problem, i.e. the size of the variable, is relatively large and computations become expensive.
As shown in previous chapters, the CGS algorithm and its proposed variant, Conditional Gradient Sliding with Linesearch (CGS-ls), perform very well in many practical instances. Although the CGS and CGS-ls algorithms outperform the Frank-Wolfe (FW) algorithm in many cases, the variants of FW, such as Away-steps FW or Pairwise FW <cit.>, converge faster to the optimal value than CGS for the image and video co-localization problem, as we will show in the numerical experiments later in this chapter.
Motivated by the CGS algorithm and by the Away-steps and Pairwise FW methods, we propose two algorithms, called Away-Steps Conditional Gradient Sliding (ACGS) and Pairwise Conditional Gradient Sliding (PCGS), that perform very well for image and video co-localization problems. The ACGS and PCGS methods follow the iterations of the CGS method, but the direction used to update the iterate is motivated by the away and pairwise steps of the Away-steps and Pairwise FW algorithms. We will also show that ACGS and PCGS outperform all of the FW variants applied to the image and video co-localization problem.
§.§ Away-Steps and Pairwise Conditional Gradient Sliding
The basic scheme of the ACGS and PCGS methods is obtained by introducing a new search direction into the CGS method whenever that direction leads the algorithm to a smaller Wolfe gap. Also, as in the CGS algorithm, the classical FW method (the ℱ𝒲 procedure) is incorporated to solve the projection subproblems of the accelerated gradient (AG) scheme approximately. The ACGS and PCGS algorithms are described in Algorithms <ref> and <ref>.
Note that the proposed algorithms are intended to be applied to the image and video co-localization problems (<ref>) and (<ref>). The objective function in both problems, as discussed before, is convex, and the feasible region is a finite set of binary vectors, called atoms, in ℝ^d for some d. We denote this set by 𝒜 and its convex hull conv(𝒜) by ℳ. As 𝒜 is finite, ℳ is a polytope.
The first difference between ACGS (PCGS) and the CGS method is that we incorporate the set 𝒮^(k) of active atoms. This set keeps a record of the atoms (integer points) in 𝒜 that are used for the away direction d_k^away at each iteration, such that the current iterate y_k is the convex combination of the atoms in 𝒮^(k) with weights α^(k). The away direction, given in (<ref>), is defined by finding the atom v_k in 𝒮^(k) that maximizes the potential of descent ⟨-f'(y_k), y_k - v⟩. Note that obtaining v_k in (<ref>) is fundamentally easier, since the linear optimization is over 𝒮^(k), the active and possibly small finite set of points.
The second difference is in the way the step-size for the new iterate is chosen. As shown in (<ref>), we incorporate a line search to obtain the step-size with maximum reduction of the objective along a prespecified direction from the current iterate. With γ_max defined in (<ref>) and (<ref>) as the maximum step-size for the line-search step, the algorithm guarantees that the new iterate y_k = y_k-1 + γ_k d_k^away, with γ_k ≤γ_max, remains feasible in each iteration. Note that the parameter γ_k in the CGS algorithm has to be set appropriately to maintain feasibility in each iteration. Such settings are given in <cit.> as γ_k = 3/(k+2) and γ_k = 2/(k+1), and we can use them for the CGS steps in step (<ref>) as the upper bound on γ_k instead of 1 in the line-search step (<ref>). Also, it is easy to check that for the special case of the image and video co-localization problems, in which the objective is a convex quadratic function, γ_k in step (<ref>) has the closed form
γ_k = -d^T ∇ f(x_k)/d^T Q d,
where Q ≽0 is the quadratic term of the objective. This value is projected to 0 or γ_max if it lies outside the range [0, γ_max] in the case of (<ref>).
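A hedged sketch of this step-size rule is given below; it reads Q as the Hessian of the quadratic objective (for f(z) = z^T M z + c^T z this is 2M), which is one possible reading of the closed form above.

```python
import numpy as np

def exact_step_size(grad, d, Q, gamma_max):
    """Exact line search along direction d for a convex quadratic objective,
    clipped to the admissible range [0, gamma_max]."""
    curvature = float(d @ (Q @ d))
    if curvature <= 0.0:          # flat or degenerate direction: take the full step
        return gamma_max
    gamma = -float(d @ grad) / curvature
    return float(np.clip(gamma, 0.0, gamma_max))
```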
Finally, we incorporate the Wolfe gap as a stopping criterion in the ACGS and PCGS algorithms. In fact, at steps (<ref>) and (<ref>), the algorithms check whether the given threshold has been reached before the preset maximum number of iterations N. As in the classical FW method, the Wolfe gap is an upper bound on the unknown suboptimality: from the convexity of the objective f we have
f(x_k) - f(x^⋆) ≤⟨-f'(x_k), x^⋆-x_k⟩≤max_y∈ℳ⟨-f'(x_k), y-x_k⟩≤ϵ.
Note that for the image and video co-localization problem with binary decision variables in a CGS step we have
𝒮^(k+1) = {[ {x_k} if γ_k = 1; 𝒮^(k)∪{x_k} otherwise. ].
Also, for v∈𝒮^(k)∖{x_k} we have
α_x_k^(k+1):=(1-γ_k)α_x_k^(k) + γ_k and α_v^(k+1):= (1-γ_k)α_v^(k).
On the other hand, for an away step we have
𝒮^(k+1) = {[ 𝒮^(k)∖{v_k} if γ_k = γ_max; 𝒮^(k) otherwise. ].
This step is called a drop step. Also, for v∈𝒮^(k)∖{v_k} we have
α_v_k^(k+1):=(1+γ_k)α_v_k^(k) - γ_k and α_v^(k+1):= (1+γ_k)α_v^(k).
The ACGS and PCGS algorithms differ only slightly in the direction used to update the iterate. More precisely, steps (<ref>) to (<ref>) in Algorithm <ref> are replaced with steps (<ref>) and (<ref>) in Algorithm <ref>. As in Pairwise FW, the idea is to move weight only from the away atom v_k to the CGS atom x_k and keep all other α weights unchanged. In other words,
α_v_k^(k+1):=α_v_k^(k) - γ_k and α_x_k^(k+1):= α_x_k^(k)+γ_k,
for some γ_k ≤γ_max:=α_v_k^(k).
An important property of the formulations (<ref>) and (<ref>) is that their constraints are separable over images and videos. This makes the computation more efficient when parallelized; moreover, this separability can be exploited by any first-order method and keeps the memory footprint small in practice. In addition, since a solution of the convex relaxation is not necessarily an integer solution that is optimal, or even feasible, for the original problem, we need to produce a feasible solution as close as possible to the relaxation optimum. In the image and video co-localization case, the most natural way of finding such a solution is to solve
min_p∈𝒫 ‖p - y‖_2^2,
where 𝒫 is the feasible region of the original problem and y is the solution of the relaxed problem. It is easy to check that the projection problem (<ref>) is equivalent to
max_p∈𝒫 ⟨p,y⟩,
which for the video model is just a shortest path problem that can be solved efficiently using dynamic programming.
§ EXPERIMENTAL RESULTS
In this section we apply the proposed algorithms to the problems introduced in (<ref>) and (<ref>) for the image and video co-localization task. Recall that these are quadratic problems over the convex hull of paths in a network, so the linear minimization oracle used by first-order methods amounts to finding a shortest path in the network. We compare the performance of the proposed algorithms with the work in <cit.> and <cit.> on the FW algorithm and its variants for the same problem. For this comparison we reuse the publicly available code shared for <cit.> and the included aeroplane dataset, which consists of 660 variables.
We begin this section by reviewing the performance of Away-steps Frank-Wolfe (AFW) and its comparison to solvers such as Gurobi and Mosek. These results are derived and shown in <cit.>, and the goal here is to recall that AFW outperforms those methods for our problem of interest. In <cit.>, however, Joulin A., Tang K., and Fei-Fei L. showed that their proposed Pairwise Frank-Wolfe (PairFW) algorithm outperforms the other FW variants in solving this problem. We end this section by showing that our proposed ACGS algorithm performs better than any first-order method that has been applied to the video co-localization problem.
§.§ FW v.s. Mosek and Gurobi
Algorithm <ref> is a variant of the FW algorithm proposed in <cit.>, where the authors evaluate it on two datasets, the PASCAL VOC 2007 dataset <cit.> and the YouTube-Objects dataset <cit.>. This algorithm is in fact the AFW algorithm introduced in <cit.> with some slight changes and extra rounding steps. Also, the set 𝒟 in this algorithm is conv(𝒫), the convex hull of the feasible region of problems (<ref>) or (<ref>). Their implementation of Algorithm <ref> was coded in MATLAB, and they compare it to two standard Quadratic Programming (QP) solvers, Mosek and Gurobi, on a single-core 2.66GHz Intel CPU with 6GB of RAM. In addition, they set μ=0.4 for the image model, μ=0.6 for the video model, and μ_t=1.8 and λ= 0.1 for both image and video models. They extracted 20 objectness boxes from each image and sampled each video every 10 frames, as there is little change between frames over a short amount of time.
The stopping criterion of Algorithm <ref> is based on the relative duality gap. This criterion, computed by the function duality-gap(z) in the algorithm, is defined as d = (f-g)/g, where f is the objective value and g is its dual value. In the implementation of this algorithm, the authors consider two values, 1e-2 and 1e-3, for the stopping threshold ϵ.
Figure <ref> presents comparisons of Algorithm <ref>, as a variant of the FW algorithm, with the QP solvers Mosek and Gurobi on a logarithmic scale. The comparison is based on the CPU time of the algorithms as a function of the number of images and videos, or in other words, the dimension of the decision variable. This is the time the algorithms need to reach a duality gap below the threshold ϵ. As these plots show, the FW variant with away steps outperforms the standard QP solvers Mosek and Gurobi.
The reason we reproduce these comparisons directly from <cit.> is that in our implementations in the next section we only compare our proposed algorithms to other first-order methods. These first-order methods include the AFW algorithm, which, as this section shows, outperforms the standard QP solvers.
The PASCAL Visual Object Classes 2007 dataset <cit.> provides standardized image data for 20 object classes for object recognition, along with annotations including a bounding box and an object class label for each object. The associated challenges and competitions have been used to benchmark the recognition of objects from a number of visual object classes in realistic scenes. The YouTube-Objects dataset <cit.> consists of YouTube videos collected for 10 classes from PASCAL <cit.>: "aeroplane", "bird", "boat", "car", "cat", "cow", "dog", "horse", "motorbike", and "train". Although the authors of <cit.> studied multiple object classes of this dataset, in our implementations we focus on the "aeroplane" object class.
§.§ Implementations
Knowing from <cit.> that the AFW Algorithm <ref> outperforms the standard QP solvers Mosek and Gurobi, in this section we compare our proposed variants of the CGS algorithm, the ACGS Algorithm <ref> and the PCGS Algorithm <ref>, to other first-order methods, including the AFW method. More precisely, we compare the performance of our algorithms to all of the FW variants, namely the standard FW, the FW algorithm with away steps (AFW), and the pairwise FW algorithm, as discussed in <cit.>. We also compare our algorithms to the original CGS algorithm <cit.>. These comparisons include the duality gap, CPU time, and objective function value versus the iterations.
The implementations use the YouTube-Objects dataset <cit.> described in the previous section, specifically its "aeroplane" class. We obtained the dataset for this class and the code for the AFW and Pairwise FW algorithms from the repositories for <cit.>. We only consider the task of video co-localization with the problem formulation defined in (<ref>) for this implementation. All algorithms are coded in MATLAB and run on a computer with an Intel Core i5-6500 3.2 GHz processor and 16 GB of RAM.
In our implementations, all algorithms stop either after the maximum number of iterations or after reaching the Wolfe duality gap threshold. We set the threshold to ϵ=1e-5 and the maximum number of iterations to 2000. All parameters appearing in (<ref>) are set as in <cit.> for consistency of the comparison.
Note that neither the original FW nor the original CGS algorithm reaches the desired duality gap within the preset maximum of 2000 iterations. The AFW algorithm takes 628 iterations, Pairwise FW 436 iterations, ACGS 84 iterations, and PCGS 82 iterations to reach the duality-gap threshold.
As we observe in Figure <ref>, both proposed variants of the CGS algorithm, ACGS and PCGS, outperform the FW algorithm and its variants as well as the original CGS algorithm. The performance of the algorithms in terms of CPU time versus iteration count is also shown in Figure <ref>. As we observe in this figure, the CPU times per iteration of AFW, ACGS, and PCGS are quite similar, although the ACGS and PCGS algorithms reach the gap much earlier than the AFW algorithm.
In addition, while the FW algorithm requires only one linear optimization oracle call per iteration, its CPU time per iteration is not significantly better than that of the other algorithms. Also, note that out of the 84 iterations of the ACGS algorithm, the away direction is chosen in 34 iterations, which improves the performance over CGS (more than 2000 iterations) for this problem significantly.
Finally, the authors in <cit.> proved, for the first time, the global linear convergence of the FW variants AFW and Pairwise FW under strong convexity of the objective. A potential line of research related to the current chapter is to establish the convergence of the proposed Algorithms <ref> and <ref>.
CGS:Lan
Lan, G., Zhou, Y.: Conditional gradient sliding for convex optimization. SIAM Journal on Optimization 26(2), 1379–1409 (2016)
Nesterov
Nesterov, Y.: Introductory lectures on convex optimization: A basic course, vol. 87. Springer Science & Business Media (2013)
joulin2014efficient
Joulin, A., Tang, K., Fei-Fei, L.: Efficient image and video co-localization with frank-wolfe algorithm. In: European Conference on Computer Vision, pp. 253–268. Springer (2014)
tang2014co
Tang, K., Joulin, A., Li, L.J., Fei-Fei, L.: Co-localization in real-world images. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1464–1471 (2014)
alexe2012measuring
Alexe, B., Deselaers, T., Ferrari, V.: Measuring the objectness of image windows. IEEE trans-actions on pattern analysis and machine intelligence 34(11), 2189–2202 (2012)
boykov2001fast
Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222–1239 (2001)
delong2012minimizing
Delong, A., Gorelick, L., Veksler, O., Boykov, Y.: Minimizing energies with hierarchical costs. International journal of computer vision 100(1), 38–58 (2012)
delong2012fast
Delong, A., Osokin, A., Isack, H.N., Boykov, Y.: Fast approximate energy minimization with label costs. International journal of computer vision 96(1), 1–27 (2012)
lowe2004distinctive
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2), 91–110 (2004)
perazzi2012saliency
Perazzi, F., Krähenbühl, P., Pritch, Y., Hornung, A.: Saliency filters: Contrast based filtering for salient region detection. In: 2012 IEEE conference on computer vision and pattern recognition, pp. 733–740. IEEE (2012)
shi2000normalized
Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence 22(8), 888–905 (2000)
belkin2003laplacian
Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation 15(6), 1373–1396 (2003)
bach2007diffrac
Bach, F., Harchaoui, Z.: Diffrac: a discriminative and flexible framework for clustering. Advances in Neural Information Processing Systems 20 (2007)
xu2004maximum
Xu, L., Neufeld, J., Larson, B., Schuurmans, D.: Maximum margin clustering. Advances in neural information processing systems 17 (2004)
joulin2010discriminative
Joulin, A., Bach, F., Ponce, J.: Discriminative clustering for image co-segmentation. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1943–1950. IEEE (2010)
hastie2009elements
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition. Springer Series in Statistics. Springer (2009)
babenko2010robust
Babenko, B., Yang, M.H., Belongie, S.: Robust object tracking with online multiple instance learning. IEEE transactions on pattern analysis and machine intelligence 33(8), 1619–1632 (2010)
berclaz2011multiple
Berclaz, J., Fleuret, F., Turetken, E., Fua, P.: Multiple object tracking using k-shortest paths optimization. IEEE transactions on pattern analysis and machine intelligence 33(9), 1806–1819 (2011)
yilmaz2006object
Yilmaz, A., Javed, O., Shah, M.: Object tracking: A survey. Acm computing surveys (CSUR) 38(4), 13–es (2006)
tang2012shifting
Tang, K., Ramanathan, V., Fei-Fei, L., Koller, D.: Shifting weights: Adapting object detectors from image to video. Advances in Neural Information Processing Systems 25 (2012)
perez2002color
Perez, P., Hue, C., Vermaak, J., Gangnet, M.: Color-based probabilistic tracking. In: European Conference on Computer Vision, pp. 661–675. Springer (2002)
pang2013finding
Pang, Y., Ling, H.: Finding the best from the second bests-inhibiting subjective bias in evaluation of visual tracking algorithms. In: Proceedings of the IEEE International Conference on omputer Vision, pp. 2784–2791 (2013)
harestructured
Hare, S., Saffari, A., Torr, P.H.S.: Struck: Structured output tracking with kernels. In: IEEE International Conference on Computer Vision, pp. 263–270. IEEE (2011)
lacoste2015global
Lacoste-Julien, S., Jaggi, M.: On the global linear convergence of frank-wolfe optimization variants. Advances in neural information processing systems 28 (2015)
everingham2010pascal
Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The PASCAL visual object classes (VOC) challenge. International journal of computer vision 88(2), 303–338 (2010)
prest2012learning
Prest, A., Leistner, C., Civera, J., Schmid, C., Ferrari, V.: Learning object class detectors from weakly annotated video. In: 2012 IEEE Conference on computer vision and pattern recognition, pp. 3282–3289. IEEE (2012) |
http://arxiv.org/abs/2307.06251v1 | 20230712154006 | Realizing the entanglement Hamiltonian of a topological quantum Hall system | [
"Quentin Redon",
"Qi Liu",
"Jean-Baptiste Bouhiron",
"Nehal Mittal",
"Aurélien Fabre",
"Raphael Lopes",
"Sylvain Nascimbene"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"cond-mat.mes-hall",
"quant-ph"
] |
|
http://arxiv.org/abs/2307.07377v1 | 20230714143641 | Benchmarking Explanatory Models for Inertia Forecasting using Public Data of the Nordic Area | [
"Jemima Graham",
"Evelyn Heylen",
"Yuankai Bian",
"Fei Teng"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Benchmarking Explanatory Models for Inertia Forecasting using Public Data of the Nordic Area
Jemima Graham
Imperial College London
London, United Kingdom
[email protected]
Evelyn Heylen
Centrica Business Solutions
Antwerp, Belgium
[email protected]
Yuankai Bian
National Grid ESO
Wokingham, United Kingdom
[email protected]
Fei Teng
Imperial College London
London, United Kingdom
[email protected]
================================================================================================================================================================================================================================================================================================================================================================================
This paper investigates the performance of a day-ahead explanatory model for inertia forecasting based on field data in the Nordic system, which achieves a 43% reduction in mean absolute percentage error (MAPE) against a state-of-the-art time-series forecast model. The generalizability of the explanatory model is verified by its consistent performance on Nordic and Great Britain datasets. Also, it appears that a long duration of training data is not required to obtain accurate results with this model, but taking a more spatially granular approach reduces the MAPE by 3.6%. Finally, two further model enhancements are studied considering the specific features in Nordic system: (i) a monthly interaction variable applied to the day-ahead national demand forecast feature, reducing the MAPE by up to 18%; and (ii) a feature based on the inertia from hydropower, although this has a negligible impact. The field dataset used for benchmarking is also made publicly available.
benchmarking, explanatory models, energy forecasting, Nordic, power system inertia
§ INTRODUCTION
Inertia levels in power systems have been decreasing over the last decade causing operational challenges. Traditionally, inertia was abundant and originated from the rotation of masses in synchronous generators and motor loads; however, in recent years there has been an influx of renewable energy sources (RES) that are connected to the grid asynchronously (i.e. via power electronics). As these power sources do not deliver inertial response naturally, power systems are transitioning to low inertia systems <cit.>. Low inertia systems are at particular risk of frequency instabilities due to power imbalances. The reason for this is that the rate of change of frequency is higher in such systems which leaves the system less time to react to the imbalances <cit.>. As a result, having sufficient inertia in power systems at each point in time is essential to ensure that power system operation is secure and reliable <cit.>. For this reason, system operators need to accurately forecast the amount of inertia expected in the system to aid their decision-making for frequency response management <cit.>.
While this was not necessary in the past due to the consistent usage of synchronous generators <cit.>, in low-inertia systems with high RES penetration the number of synchronous generators required will vary greatly over time, as RES are often weather-dependent <cit.>. This variation will increase as more RES are introduced into energy systems in accordance with the push towards carbon-neutrality. For example, in the UK, the government aims to have 40 GW of offshore wind by 2030 <cit.>. This offshore wind will further displace traditional synchronous generators and will introduce more uncertainty into inertia forecasts. Contrastingly, in the Nordic, this is less likely to be the case as their main RES is hydropower, which still provides some inertia.
Nevertheless, even if some energy systems are less likely to be exposed to inertia variability than others, it is still necessary for system operators (SOs) to understand the uncertainty surrounding a forecast. SOs are risk-averse due to the large-scale disruption system mismanagement could cause. For this reason, a conservative approach must be taken when forecasting inertia.
Despite the need for accurate inertia forecasts, existing work in the field of inertia forecasting has been limited so far <cit.>. Although forecast models have been presented in the literature <cit.>, the authors are not aware of existing benchmark studies that objectively compare inertia forecast models on a publicly available dataset. Benchmarking, where models are compared to a validated base case, is crucial to meaningfully assess the performance of any newly developed inertia forecast models.
To facilitate objective comparisons of newly-developed day-ahead inertia forecast models with the state-of-the-art in the field, we developed a benchmarking methodology and benchmarking dataset in this paper. A case study is presented comparing two existing forecast models for which public datasets are available: an explanatory model we previously developed on a dataset of Great Britain (GB) <cit.>; and a time-series forecast model developed on a dataset of the Nordic area <cit.>. The performance of the two models is compared on the Nordic field dataset. Additionally, this study evaluates the generalizability of the explanatory model by applying it to datasets of both the Nordic and GB power systems.
These benchmarking efforts are coupled with a two-pronged investigation into the characteristics of the model: the first branch of the investigation explores the spatial and temporal dependencies of the model; and the second branch of the investigation considers whether the model can be further developed by: (i) considering the annual seasonality of the inertia through a monthly interaction variable; and (ii) considering whether a large amount of hydropower in the generation mix requires specific adaptations in the explanatory model. In addition to these investigations, this study examines the impact of explanatory variable forecast errors on the accuracy of the inertia forecast model.
The remainder of this paper is organized as follows: Section <ref> describes the benchmarking methodology; Section <ref> describes the Nordic dataset used to validate and test the model; Section <ref> discusses the results of this study; and Section <ref> considers any concluding remarks.
§ METHODOLOGY
This section introduces the explanatory and time-series forecast models that are compared in this study.
§.§ Explanatory and Time-series Inertia Forecast Models
The explanatory inertia forecast model is given in (<ref>):
Ê^I,G_t+k|t = α_d,1E^I,G_t + α_d,2P̂^ND_t + α_d,3P̂^wind_t
+ α_d,4P̂^solar_t + α_d,5P^IC_t + α_d,6t + α_d,7t^2
where E^I,G_t is the kinetic energy (inertial energy) of the system at time t; P̂^ND_t is the day-ahead national demand forecast at time t; P̂^wind_t is the day-ahead wind power forecast at time t; P̂^solar_t is the day-ahead solar power forecast at time t; P^IC_t is the interconnection flow at time t; and α_d,i are coefficients where d is whether it is a weekday or weekend/holiday <cit.>. A detailed discussion of the development of this model can be found in <cit.>. This work was validated on a publicly available GB dataset. As the data was half-hourly, data from 2016 and 2017 made up the training set and data from 2018 was used for the test set. Overall, the model obtains good results on this dataset, with a mean absolute percentage error (MAPE) of 4.2% <cit.>.
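As an illustration of how such a model can be fitted, the sketch below estimates one coefficient vector per day type by ordinary least squares; the estimator, the 0/1 day-type encoding, and the variable names are assumptions made for illustration, not details taken from the original study.

```python
import numpy as np

def fit_explanatory_model(E_prev, demand, wind, solar, ic_flow, t, day_type, E_target):
    """Least-squares fit of the explanatory model, with separate coefficients
    alpha_{d,1..7} for weekdays (day_type=0) and weekends/holidays (day_type=1).
    All inputs are 1-D arrays over the training samples."""
    X = np.column_stack([E_prev, demand, wind, solar, ic_flow, t, t ** 2])
    coefs = {}
    for d in (0, 1):
        mask = (day_type == d)
        coefs[d], *_ = np.linalg.lstsq(X[mask], E_target[mask], rcond=None)
    return coefs

def predict_inertia(coefs, E_prev, demand, wind, solar, ic_flow, t, day_type):
    X = np.column_stack([E_prev, demand, wind, solar, ic_flow, t, t ** 2])
    out = np.empty(len(X))
    for d in (0, 1):
        mask = (day_type == d)
        out[mask] = X[mask] @ coefs[d]
    return out
```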
In the past, the error bound on inertia measurements and forecasts has been unclear. To capture the uncertainty of the inertia forecast model described by (<ref>), the following Gaussian distribution can be assumed <cit.>:
F̂_t+k|t(E^I,G_t+k;x_t) = Φ(E^I,G_t+k; μ̂_t+k|t, σ̂)
where Φ(·) denotes the Gaussian distribution; the mean μ̂_t+k|t is equal to Ê^I,G_t+k|t; and standard deviation σ̂ is assumed to be a constant equal to the sample standard deviation of the training data <cit.>:
σ̂ = √(∑_t(E^I,G_t - μ)^2/N)
where μ is the mean of the training data; and N is the number of samples in the training set.
Contrastingly, the time-series forecast model is given in (<ref>):
Ê^I,G_t = g_t + s_t + h_t
where g_t is the trend component, defined as a logistic growth model; s_t is the seasonality component, defined as a Fourier series; and h_t is an irregular component that describes any holidays or special events. A detailed discussion of the development of this model can be found in <cit.>. This work was validated on a publicly available Nordic dataset. This dataset included data with minutely resolution. As a result, the training and test sets covered a shorter duration of time; the training set in this study spanned 1st - 30th January 2018 and the test set was 31st January 2018. This model is used to generate short-term forecasts which predict inertia at least one hour into the future and at most twenty-four hours into the future. Twenty-four hours into the future, this model achieves a MAPE of 7% <cit.>, which will be directly compared to the MAPE obtained when the explanatory model is applied to the Nordic. In order to provide a direct comparison with the work of González-Longatt et al., the 31st January 2018 is used as a test set in this portion of the study; however, a larger training set containing one year of data spanning 31st January 2017 to 30th January 2018 is used instead.
In addition, the generalizability of the explanatory model is investigated in order to understand the applicability of the model to other power systems. This section of the work will compare the accuracy of forecasts generated using the explanatory model on both the GB and Nordic case studies.
§.§ Adaptation of Explanatory Model for Nordic System
Alongside the aforementioned investigations, this paper studies the potential adaptation of the explanatory model to the Nordic system. In particular, the spatial and temporal dependencies of the explanatory model are explored. The temporal dependency of the model is evaluated by training the model with different durations of training data. In this part of the study, 1st January 2020 - 31st August 2020 is used as the test set, while data from 2016 - 2019 is used for the various training sets.
The impact of spatial granularity is also examined as it is particularly relevant in the case of the Nordic where the area is comprised of four different countries. As the aim of this work is to forecast inertia for the whole of the Nordic, this approach involves forecasting inertia by region and aggregating these results to produce a forecast for the whole of the Nordic. This model has the following form:
Ê^I,G_t+k|t = ∑_rÊ^I,G_r, t+k|t
where each of the Ê^I,G_t+k|t are forecasted according to (<ref>) and r indicates the region.
Furthermore, potential for further development of the explanatory model is considered in two ways: (i) the incorporation of a monthly interaction variable; and (ii) the inclusion of a feature based on the inertia from hydropower. The monthly interaction variable aims to capture the annual seasonality of the inertia. Some inspiration is taken from the work of Mirasgedis et al. <cit.> which highlights the importance of considering monthly periodicities when modelling electricity demand. As a result, both applying the monthly interaction variables to all explanatory variables, and applying the monthly interaction variable only to the day-ahead national demand forecast variable is trialled. Additionally, the feature based on inertia from hydropower the previous day allows the impact of having large amounts of hydropower in the generation mix to be considered. This feature is incorporated into the existing explanatory model and the impact on forecast accuracy is evaluated.
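As a sketch of how such an interaction can be encoded, the day-ahead demand column of the design matrix can be split into twelve month-specific columns so that the demand coefficient is re-estimated for each calendar month; the function below is illustrative and not taken from the original implementation.

```python
import numpy as np

def add_monthly_demand_interaction(X_base, demand, month):
    """Append month-specific demand columns to a design matrix.

    X_base : (n, p) design matrix without the plain demand column
    demand : (n,) day-ahead national demand forecast
    month  : (n,) calendar month of each sample, 1..12
    """
    interactions = np.zeros((len(demand), 12))
    interactions[np.arange(len(demand)), month - 1] = demand
    return np.hstack([X_base, interactions])
```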
Finally, this paper investigates how much the inertia forecast accuracy can be improved by improving the underlying day-ahead forecasts. In order to conduct this investigation, the real-time values for the national demand, wind power, and solar power are collected to mimic a forecast with 100% accuracy. These real-time values are each substituted into the explanatory model so that their individual impacts can be ascertained. The overall impact from having all three forecasts as 100% accurate forecasts is also examined.
§ THE NORDIC DATASET
The Nordic power system is chosen as a test case due to the abundance of publicly available data from the Nordic TSOs. Publicly available datasets that are verified by TSOs are difficult to find, especially one that contains all of the explanatory variables required for (<ref>) in Section <ref>. While the publicly available GB dataset used by Heylen et al. when developing (<ref>) does contain all of the relevant explanatory variables, it has not been verified by National Grid ESO.
Regardless, the power system in the Nordic shares some similarities with the GB power system.
In particular, both systems are of a similar size, with a peak load of 60-70 GW and minimum load of 20-25 GW <cit.>.
However, a key difference between the Nordic and GB power systems is the energy generation mix. In the Nordic, the energy generation mix is predominantly hydropower, followed by nuclear and wind. Contrastingly, in GB, over the beginning of the dataset used here, the energy generation mix is predominantly natural gas followed by nuclear and wind. This transitions to a more wind dominant energy system as we look towards present day with wind accounting for up to 50% of the energy generation mix. As a large proportion of hydropower is seen in the Nordic power system and not in the GB power system, the use of inertia from hydropower as an explanatory variable is explored in this work.
In addition to the general characteristics of the Nordic power system, it is important to understand the inertial energy characteristics of the region so that this can be contrasted with other regions in future work.
From Fig.<ref>, it can be seen that the amount of inertial energy in the Nordic is decreasing over time. This behaviour is likely due to the increased share of RES in the power system <cit.>. Multiple seasonalities also exist; not only is there an annual seasonality (as shown in Fig. <ref>), there is also a daily seasonality which can be seen in Fig.<ref>. These periodicities led the authors to consider a monthly interaction variable which will be discussed in greater depth in Section <ref>. Additionally, it is clear from Fig.<ref> that whether it is a weekend or weekday affects the quantity of inertia in the system. This was part of the motivation behind including an interaction variable based on whether it was a weekday or a weekend/holiday in (<ref>) <cit.>.
Given that hydropower makes up a large proportion of the energy generation mix in the Nordic, it is important to consider the characteristics of the inertia from hydropower. Fig.<ref> illustrates that the inertial energy from hydropower decreases over time in a similar way to the total inertial energy. However, it does appear to increase towards the end of the dataset, perhaps indicating a stabilization in the quantity of inertia from hydropower. Additionally, the inertial energy from hydropower displays similar periodicity patterns to the total inertial energy as shown in Fig.<ref> and Fig.<ref>.
In the dataset collected under this study, three features are day-ahead forecasts meaning that there is some predictive error associated with these values. As discussed in Section <ref>, the impact of this will be investigated by replacing the forecasts with real-time values. The authors considered that this may have an impact: a MAPE of approximately 1% is found for the national demand forecast; a symmetric MAPE (sMAPE) of approximately 38% is found for the solar power forecast; and a MAPE of 14% is found for the wind power forecast.
Overall, the data used in this case study spans January 2016 to August 2020. It has hourly resolution in contrast to the minutely resolution used in time-series forecasting studies in <cit.>. Consequently, in order to provide a direct comparison with the work in <cit.>, a test set of 31st January 2018 will be used with a training set between 31st January 2017 and 30th January 2018. For all other investigations, the test set will be between 1st January 2020 and 31st August 2020, and all other data will be used as the training set.
§ RESULTS & DISCUSSION
This section covers three topics: benchmarking of the explanatory model developed in <cit.> (Section <ref>); spatial-temporal dependencies of the model (Section <ref>); and additional variables developed to further improve model accuracy (Section <ref>). It must be noted that we consider a substantial improvement in model accuracy to be a difference of 1000 MVAs or more as this will cause a notable change in ESO planning.
§.§ Benchmarking
Benchmarking of the explanatory model was carried out in two ways: (i) the model was compared against the state-of-the-art time-series forecast model developed in <cit.>; and (ii) the model performance on the Nordic dataset is compared to the model performance on the GB dataset. These results are given in Table <ref> and Table <ref> respectively.
Table <ref> emphasizes that the explanatory model developed in <cit.> outperforms the time-series forecast model developed in <cit.>, reducing the MAPE by 43% (approximately 86000 MVAs). This implies that considering a variety of relevant variables such as: day-ahead national demand forecast; day-ahead wind power forecast; day-ahead solar power forecast; and interconnection flow; improves the accuracy of inertia forecasts. These results also set a benchmark of 4% MAPE for future inertia forecast development on the Nordic case study.
In addition, Table <ref> indicates that the accuracy of the explanatory model developed in <cit.> is consistent; there is little variation in training or test MAPE between the GB and Nordic case studies despite the differences between the case studies outlined in Section <ref>. This suggests that even though this explanatory model was developed for use on a GB dataset, it is generalizable to other power systems. One important point to note is that there are some key similarities between the GB and Nordic power systems in terms of size and composition. For this reason, application of this explanatory model to a system with a different size and composition may be useful to quantify the generalizability of this model.
§.§ Spatial-temporal dependencies
Spatial-temporal impacts are considered in two ways: (i) the impact of using a longer duration of training data is investigated by training the model using different amounts of training data; and (ii) the impact of spatial granularity on the explanatory model is investigated by treating each of the regions within the Nordic (Eastern Denmark, Finland, Norway, and Sweden) separately as outlined in Section <ref>.
Table <ref> shows the training and test MAPEs for the explanatory model trained with different durations of training data. As these results show no clear trend, it implies that the duration of training data has little impact on the accuracy of the inertia forecast. The best performing explanatory model by test MAPE was the model trained using only one year of data. This model will be used as the base case going forward.
The more spatially granular model outperforms the base case; while the base case has a test MAPE of 4.420%, this model achieves a MAPE of 4.261%, which is a 3.6% reduction (equivalent to approximately 7000 MVAs). Additionally, the training MAPE reduces by 2.6% from 4.539% to 4.420% (equivalent to approximately 5000 MVAs). The main reason for this improvement is that the energy generation characteristics differ between Nordic regions both in terms of composition and quantity, as shown in Fig.<ref>. For example, Denmark proportionally relies much more on wind and solar power compared to the other Nordic countries.
Conversely, this improvement was found to be unrelated to tailoring the weekday/weekend and holiday interaction variable to the region. In the model that considers the Nordic as a whole, only national holidays that are common among all of the regions are considered; however, during the spatial granularity investigation, holidays specific to each of the four regions were considered. This change in holiday definition had a negligible impact on model performance, likely due to the fact that the major (and therefore, most impactful) holidays are common among all regions. Altogether, spatial granularity has a notably positive impact on the model due to the increased ability to account for regional energy generation characteristics and is recommended for use in future models.
§.§ Additional variables
Two model developments were trialled as part of this section: (i) the introduction of a monthly interaction variable on the day-ahead national demand forecast feature; and (ii) the introduction of a feature that considers inertia from hydropower the previous day. Additionally, the impact of errors from the day-ahead forecast feature errors are considered.
As discussed in Section <ref>, a monthly interaction variable is applied to the day-ahead national demand forecast feature in order to better model the annual seasonality in this feature. The authors expected that this feature would be heavily influenced by month due to the work of Mirasgedis et al. <cit.>. The results of this investigation can be seen in Table <ref> and Fig.<ref>. In Table <ref>, the MAPEs of all forecasts improve apart from the forecast that is trained on only one year of training data. This suggests that the monthly interaction variable needs at least 2 years worth of data to be well tuned. Additionally, it can be seen in Fig.<ref> that the variation in the mean and spread of the residuals reduces between different months if a monthly interaction feature is applied to the day-ahead national demand forecast.
Conversely, this improvement in MAPE was not seen if the monthly interaction variable is applied to all features or just the feature based on inertial energy from the previous day. This suggests that the day-ahead national demand forecast has a monthly relationship that is not seen in the other features. Overall, applying a monthly interaction variable to the day-ahead national demand forecast feature improves the model accuracy provided that at least two years of training data is used.
In addition, the authors believed that using inertia from hydropower the previous day as a feature may improve the inertia forecast model accuracy because the majority of generation in the Nordic comes from hydropower. Despite this, using the inertial energy from hydropower the previous day as a feature seems to have a negligible impact. A potential reason for this behaviour could be that the contribution of the inertia from hydropower feature is already considered by using the feature of inertia from the previous day.
Finally, the impact of the day-ahead forecast feature errors was considered in order to ascertain whether model accuracy could be improved by improving the accuracy of the day-ahead national demand forecast, the day-ahead wind power forecast, or the day-ahead solar power forecast, which are used as features in the explanatory model. It was found that the difference between the base case and the scenario with all forecasts replaced with real-time values is only around 100 MVAs which is not large enough to affect operational decisions significantly. Therefore, while the feature forecast errors slightly reduce inertia forecast accuracy, it is considered to be a negligible effect.
§ CONCLUSION
Altogether, the day-ahead explanatory inertia forecast model is applied under this study, which demonstrates good performance compared to state-of-the-art inertia forecasting techniques based on time-series forecasting. It also shows similar performance on both the GB and Nordic datasets, implying that this model is transferable to other regions.
Consequently, the explanatory inertia forecast model discussed here is a suitable benchmark for future inertia forecast development projects on the Nordic and GB case studies.
In addition, the explanatory model was found to have limited dependence on the duration covered by the training dataset, but significant dependence on spatial granularity. The former highlights that when using this model, there is no need for especially long durations of training data, and the latter suggests that a more spatially granular approach is beneficial to model accuracy. The fact that a long duration of training data is not essential for good model performance will make it easier to collect further case studies on which this model can be trialled.
Another key finding of this work was that introducing a monthly interaction variable on the day-ahead national demand forecast feature notably improves the accuracy of the inertia forecast. Consequently, this, alongside the spatially granular modelling approach, is recommended for use in future inertia forecast models, particularly in the probabilistic inertia forecast model developed in <cit.>.
§ ACKNOWLEDGEMENTS
This work has been funded by National Grid ESO under Electricity Network Innovation
Allowance project “Short-term System Inertia Forecast"
(NIA-NGSO0020) and by EPSRC under Grant EP/R513052/1. The authors would like to thank Mr Mikko Kuivaniemi from Fingrid, Finland who contributed the data on inertial energy by country and inertial energy from hydropower both for the whole region and the Nordic countries individually. The dataset used for benchmarking can be accessed on zenodo.org (DOI: 10.5281/zenodo.5655048), and used under the Creative Commons Attribution Licence (CC BY).
|
http://arxiv.org/abs/2307.03971v1 | 20230708130330 | What is the meaning of proofs? A Fregean distinction in proof-theoretic semantics | [
"Sara Ayhan"
] | cs.LO | [
"cs.LO",
"math.LO",
"03F03 (Primary), 03F07 (Secondary)"
] |
A Fregean distinction in proof-theoretic semantics
Sara Ayhan Institute of Philosophy I, Ruhr University Bochum, Bochum, Germany
[email protected]
What is the meaning of proofs?
Sara Ayhan (I would like to thank several people for supporting me in improving this paper essentially, among them Luca Tranchini for his thorough feedback and vital input on an earlier version of this paper and also two anonymous referees for their very constructive and helpful reports. I am especially grateful to Heinrich Wansing for the numerous and encouraging occasions to discuss this paper extensively and for his valuable comments.)
Received: date / Accepted: date
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This is a post-peer-review, pre-copyedit version of an article published in the Journal of Philosophical Logic.
The final authenticated version will be available online at: DOI: 10.1007/s10992-020-09577-2
The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives and its response: the rules of inference that govern the use of the connective.
However, what if we go a step further and ask about the meaning of a proof as a whole?
In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs.
Two questions are central here.
First of all, if we have two (syntactically) different derivations, does this always lead to a difference, firstly, in sense, and secondly, in denotation?
The other question is about the relation between different kinds of proof systems (here: natural deduction vs. sequent calculi) with respect to this distinction.
Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given?
In our framework it will be possible to identify denotation as well as sense of proofs not only within one proof system but also between different kinds of proof systems.
Thus, we give an account to distinguish a mere syntactic divergence from a divergence in meaning and a divergence in meaning from a divergence of proof objects analogous to Frege's distinction for singular terms and sentences.
§ INTRODUCTION
In proof-theoretic semantics (PTS) the meaning of the logical constants is taken to be given by the rules of inference that govern their use.
As a proof is constituted by applications of rules of inference, it seems reasonable to ask what the meaning of proofs as a whole would consist of on this account.
What we are particularly interested in is a Fregean distinction between sense and denotation in the context of proofs.[We assume at least a basic familiarity with this idea, laid out in Frege's famous paper “Über Sinn und Bedeutung”, cf. <cit.> for an English translation.]
This account builds up on <cit.>, where such a distinction is proposed and used in a proof-theoretic explanation of paradoxes.
The notion of denotation is nothing new in the context of proofs.
It is common in the literature on proof theory and PTS (e.g. <cit.>, <cit.>, <cit.>) to distinguish between derivations, as linguistic objects, and proofs, as abstract (in the intuitionistic tradition: mental) entities.
Proofs are then said to be represented or denoted by derivations, i.e. the abstract proof object is the denotation of a derivation.
The notion of sense, on the other hand, has been more or less neglected.
Tranchini <cit.>, therefore, made a proposal that for a derivation to have sense means to be made up of applications of correct inference rules.
While this is an interesting approach to consider, Tranchini only determines whether a proof has sense or not but does not go further into what the sense of a proof exactly consists of, so there might be further questions worth pursuing.
We will spell out an account of a distinction between sense and denotation of proofs, which can be considered a full-fledged analogy to Frege's distinction concerning singular terms and sentences.[There is some literature also in the field of proof theory concerned with this Fregean distinction, however, to our knowledge, apart from <cit.> this is not concerned with the sense of derivations but with the sense of sentences: cf. P. Martin-Löf (2001). The Sense/Reference Distinction in Constructive Semantics. Transcription of a lecture given at a conference on Frege organised by G. Sundholm at Leiden, 25 August 2001, transcription by B. Jespersen, 9 August 2002: https://www.academia.edu/25695205/The_Sense_Reference_Distinction_in_Constructive_Semantics, or <cit.>.]
Another question concerns the relation of different kinds of proof systems (intuitionistic natural deduction (ND) and sequent calculus (SC) systems will be considered) with respect to such a distinction.
If we have two syntactically different derivations with the same denotation in different proof systems, do they always also differ in sense or can sense be shared over different systems?
§ CONNECTING STRUCTURE AND MEANING
The basic point of departure is the simple observation that there can be different ways leading from the same premises to the same conclusion, either in different proof systems or also within one system.
The focus in this matter so far has been on normal vs. non-normal derivations in ND and correspondingly on derivations containing cut vs. cut-free derivations in SC.
However, there can also simply be a change of the order of rule applications that can lead to syntactically different derivations from the same premises to the same conclusion.
Does this lead to a different denotation or should we say that it is only the sense that differs in such cases, while the underlying proof stays the same?
§.§ Normal form and the denotation of derivations
One and the same proof may be linguistically represented by different derivations.
We will follow the general opinion in taking proofs to be the denotation - the semantic value - of (valid) derivations.
In ND a derivation in normal form is the most direct form of representation of its denotation, i.e. the represented proof object.
For our purposes we will consider a derivation to be in normal form iff neither β- nor η-conversions (cf. rules below) can be applied to it.
A derivation in normal form in ND corresponds to a derivation in cut-free form in SC.
In intuitionistic logic derivations in non-normal form in ND (resp. with cut in SC) can be reduced to ones in normal form (resp. cut-free form).
These are then thought to represent the same underlying proof, just one more indirectly than the other, because, as Prawitz <cit.> says, they represent the same idea this proof is based on.
In order to make sense and denotation transparent, our approach will be to encode the derivations with λ-terms.
As is well known, by the Curry-Howard-isomorphism there is a correspondence between the intuitionistic ND calculus and the simply typed λ-calculus and we can formulate the following ND-rules annotated with λ-terms together with the usual β- and η-conversions for the terms.
The β-conversions correspond to the well-known reduction procedures, which can be formulated for every connective in ND <cit.>, while the η-conversions are usually taken to correspond to proof expansions <cit.>.
We use p, q, r,... for arbitrary atomic formulas, A, B, C,... for arbitrary formulas, and Γ, Δ,... for sets of formulas.
Γ, A stands for Γ∪{A}.
For variables in terms x, y, z,... is used and r, s, t,... for arbitrary terms.
Term-annotated ND-rules:
⊃I: from t : B, derived from Γ and the assumption [x : A] (discharged), infer λx.t : A ⊃ B
⊃E: from s : A ⊃ B (from Γ) and t : A (from Δ), infer App(s, t) : B
∧I: from s : A (from Γ) and t : B (from Δ), infer ⟨s, t⟩ : A ∧ B
∧E_1: from t : A ∧ B (from Γ), infer fst(t) : A
∧E_2: from t : A ∧ B (from Γ), infer snd(t) : B
∨I_1: from s : A (from Γ), infer s : A ∨ B
∨I_2: from s : B (from Γ), infer s : A ∨ B
∨E: from r : A ∨ B (from Γ), s : C (from Δ and the discharged assumption [x : A]) and t : C (from Θ and the discharged assumption [y : B]), infer r {x.s | y.t} : C
⊥E: from t : ⊥ (from Γ), infer abort(t) : A
β-conversions:
App(λx.t, s) ⇝ t[s/x]
fst(⟨s, t⟩) ⇝ s
snd(⟨s, t⟩) ⇝ t
r {x.s | y.t} ⇝ s[r/x]   (when r was introduced by ∨I_1)
r {x.s | y.t} ⇝ t[r/y]   (when r was introduced by ∨I_2)
η-conversions:
λx.App(t, x) ⇝ t (if x not free in t)
⟨fst(t), snd(t)⟩ ⇝ t
r {t.t | s.s} ⇝ r
We read x : A as “x is a proof of A".
t[t'/x] means that in term t every free occurrence of x is substituted with t'.
The usual capture-avoiding requirements for variable substitution are to be observed and α-equivalence of terms is assumed.
A term that cannot be converted by either β- or η-conversion is in normal form.
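To make the term language concrete, we give a minimal sketch in Haskell (it is not part of the formal development above, and the constructor names are our own; in particular, the explicit injections Inl/Inr for ∨I and the constructor Case for the terms written r {x.s | y.t} are illustrative additions). The sketch assumes globally distinct bound variables, so that substitution needs no capture-avoiding renaming.

data Term
  = Var String
  | Lam String Term                     -- λx.t
  | App Term Term                       -- App(s, t)
  | Pair Term Term                      -- ⟨s, t⟩
  | Fst Term
  | Snd Term
  | Inl Term                            -- explicit left injection (left implicit in the ∨I rules above)
  | Inr Term                            -- explicit right injection
  | Case Term String Term String Term   -- encodes r {x.s | y.t}
  | Abort Term                          -- abort(t)
  deriving (Eq, Show)

-- subst t s x computes t[s/x]; assumes no variable capture can occur.
subst :: Term -> Term -> String -> Term
subst (Var y)          s x = if y == x then s else Var y
subst (Lam y b)        s x = Lam y (if y == x then b else subst b s x)
subst (App f a)        s x = App (subst f s x) (subst a s x)
subst (Pair a b)       s x = Pair (subst a s x) (subst b s x)
subst (Fst t)          s x = Fst (subst t s x)
subst (Snd t)          s x = Snd (subst t s x)
subst (Inl t)          s x = Inl (subst t s x)
subst (Inr t)          s x = Inr (subst t s x)
subst (Abort t)        s x = Abort (subst t s x)
subst (Case r y u z v) s x =
  Case (subst r s x) y (if y == x then u else subst u s x)
                     z (if z == x then v else subst v s x)

-- One β-contraction at the root, if the term is a redex.
betaRoot :: Term -> Maybe Term
betaRoot (App (Lam x t) s)      = Just (subst t s x)   -- App(λx.t, s) ⇝ t[s/x]
betaRoot (Fst (Pair s _))       = Just s               -- fst(⟨s, t⟩) ⇝ s
betaRoot (Snd (Pair _ t))       = Just t               -- snd(⟨s, t⟩) ⇝ t
betaRoot (Case (Inl r) x s _ _) = Just (subst s r x)   -- β for ∨, first injection
betaRoot (Case (Inr r) _ _ y t) = Just (subst t r y)   -- β for ∨, second injection
betaRoot _                      = Nothing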
Since there is a correspondence between intuitionistic SC and intuitionistic ND, for every derivation in ND there must be a derivation in SC named by the same λ-term.
This correspondence is of course not one-to-one, but many-to-one, i.e. for each proof in ND there are, at least potentially, several different derivations in SC.[On the complications of such a correspondence and also on giving a term-annotated version of SC cf. e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Term-annotated sequent calculi can be found i.a. in <cit.> or <cit.>, from which our presentation is only a notational variant.]
The following are our respective SC-rules, where we use the propositional fragment of an intuitionistic SC with independent contexts <cit.>.
The reduction procedures remain the same as above in ND; β-reduction corresponds to the procedures needed to establish cut-elimination, while η-conversion corresponds to what may be called “identicals-elimination" <cit.> or “identity atomization" <cit.>[Showing that it is possible to get rid of axiomatic sequents with complex formulas and derive them from atomic axiomatic sequents. This is also part of cut-elimination but in principle those are separate procedures <cit.>.]:
Term-annotated G0ip:
Logical axiom:
Rf: x : A ⊢ x : A
Logical rules:
∧R: from Γ ⊢ s : A and Δ ⊢ t : B, infer Γ, Δ ⊢ ⟨s, t⟩ : A ∧ B
∧L: from Γ, x : A, y : B ⊢ s : C, infer Γ, z : A ∧ B ⊢ s[fst(z)/x][snd(z)/y] : C
∨R_1: from Γ ⊢ s : A, infer Γ ⊢ s : A ∨ B
∨R_2: from Γ ⊢ s : B, infer Γ ⊢ s : A ∨ B
∨L: from Γ, x : A ⊢ s : C and Δ, y : B ⊢ t : C, infer Γ, Δ, z : A ∨ B ⊢ {x.s | y.t} : C
⊃R: from Γ, x : A ⊢ t : B, infer Γ ⊢ λx.t : A ⊃ B
⊃L: from Γ ⊢ t : A and Δ, y : B ⊢ s : C, infer Γ, Δ, x : A ⊃ B ⊢ s[App(x, t)/y] : C
⊥L: x : ⊥ ⊢ abort(x) : C
Structural rules:
Weakening (W): from Γ ⊢ t : C, infer Γ, x : A ⊢ t : C
Contraction (C): from Γ, x : A, y : A ⊢ t : C, infer Γ, x : A ⊢ t[x/y] : C
The rule of cut (cut: from Γ ⊢ t : D and Δ, x : D ⊢ s : C, infer Γ, Δ ⊢ s[t/x] : C)
is admissible in G0ip.
In the left operational rules as well as in the weakening rule we have the case that variables occur beneath the line that are not explicitly mentioned above the line.
In these cases the variables must be either fresh or - together with the same type assignment - already occurring in the context Γ, Δ, etc.
The same variable can only (but need not) be chosen for the same type, i.e., if a new type occurs in a proof, then a fresh variable must be chosen.
If we allowed choosing the same variable for different types, i.e., letting, for example, x : A and x : B occur in the same derivation, this would amount to assuming that arbitrarily different formulas have the same proof, which is not desirable.
§.§ Identity of proofs and equivalence of derivations
Figuring prominently in the literature on identity of proofs is a conjecture by Prawitz <cit.> that two derivations represent the same proof iff they are equivalent.[Prawitz gives credit for this conjecture to Martin-Löf. Cf. also Martin-Löf <cit.> on this issue, in his terminology “definitional equality".]
This shifts the question of course to asking when two derivations can be considered equivalent.
Using the equational theory of the λ-calculus is one way to provide an answer here: terms on the right and the left hand side of the β- and η-conversions are considered denotationally equal <cit.>.
Hence, two derivations can be considered equivalent iff they are β-η-equal (cf. <cit.>, <cit.>, <cit.>).[There is some discussion about whether η-conversions are indeed identity-preserving. Martin-Löf <cit.> does not think so, for example. Prawitz <cit.> is not clearly decided but writes in the context of identity of proofs it would seem “unlikely that any interesting property of proofs is sensitive to differences created by an expansion". Widebäck <cit.>, relating to results in the literature on the typed λ-calculus like <cit.> and <cit.>, argues for β-η-equality to give the right account of identity of proofs and Girard <cit.> does the same, although he mentions, too, that η-equations “have never been given adequate status" compared to the β-equations.]
The denotation is then seen to be referred to by the term that annotates the formula or sequent to be proven.
We will call this the `end-term' henceforth so that we can cover and compare both ND and SC at once.
So if we have two derivations with essentially different end-terms (in the sense that they do not belong to the same equivalence class induced by β-η-conversion), we would say that they denote essentially different proofs.
On the other hand, for two ND-derivations, where one reduces to the other (or both reduce to the same), e.g. via normalization, we have corresponding λ-terms, one β-reducible to the other (or both β-reducible to the same term).
In this case we would say that they refer to the same proof.
Prawitz <cit.> stresses that this seems evident since two derivations reducing to identical normal derivations must be seen as equivalent.
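In the same illustrative style, identity of denotation for the β-fragment can be rendered as equality of β-normal forms (η-steps and α-renaming are omitted here for brevity; the sketch builds on the Term type and betaRoot from the sketch above and again presupposes distinct bound variables).

-- Apply a function to all immediate subterms.
descend :: (Term -> Term) -> Term -> Term
descend f (Lam x b)        = Lam x (f b)
descend f (App s u)        = App (f s) (f u)
descend f (Pair s u)       = Pair (f s) (f u)
descend f (Fst u)          = Fst (f u)
descend f (Snd u)          = Snd (f u)
descend f (Inl u)          = Inl (f u)
descend f (Inr u)          = Inr (f u)
descend f (Abort u)        = Abort (f u)
descend f (Case r x s y u) = Case (f r) x (f s) y (f u)
descend _ t                = t                          -- Var

-- β-normal form: normalize the subterms, then contract a root redex if any.
-- This terminates for simply typed terms.
normalize :: Term -> Term
normalize t =
  let t' = descend normalize t
  in case betaRoot t' of
       Just t'' -> normalize t''
       Nothing  -> t'

-- Two end-terms name the same proof object (with respect to β) iff they
-- have the same β-normal form.
sameDenotation :: Term -> Term -> Bool
sameDenotation s t = normalize s == normalize t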
Note that we can also have the case that two derivations of the same formula, which would look identical in a non-term-annotated version, here for example of ND, are distinguished on the grounds of our term annotation, like the following two derivations:
ND1p ⊃ (p ⊃ (p ∧ p))
ND2p ⊃ (p ⊃ (p ∧ p))
[⊃I^2]λy.λx.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
[⊃I^1]λx.⟨x, y ⟩: p ⊃(p ∧p)
[∧I]⟨x, y ⟩: p ∧p[x : p]^1 [y : p]^2
[⊃I^2]λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
[⊃I^1]λy.⟨x, y ⟩: p ⊃(p ∧p)
[∧I]⟨x, y ⟩: p ∧p[x : p]^2 [y : p]^1
The reason for this is that it is possible to generalize these derivations in different directions, which is made explicit by the variables.
Hence, the first one can be generalized to a derivation of B ⊃ (A ⊃ (A ∧ B)), while the second one generalizes to A ⊃ (B ⊃ (A ∧ B)).[For a more detailed examination of generalization cf. <cit.> or <cit.>.]
So, encoding derivations with λ-terms seems like a suitable method to clarify the underlying structure of proofs.
There is one kind of conversion left, though, that needs consideration, namely what we will call permutative conversions, or also γ-conversions.[It goes under various other names, as well, like permutation/permuting conversions or commuting/commutative conversions. Some also prefer “reductions" but we will go with the - to us seemingly - more neutral “conversions". The term γ-conversions appears in <cit.>. Cf. about these conversions in general e.g. <cit.>: 251-259, <cit.>: Ch. 10, <cit.>, <cit.>.]
They become relevant here because we have disjunction as part of our logical vocabulary.
Prawitz <cit.> was the first to introduce these conversions.
In the conjunction-implication-fragment of intuitionistic propositional logic derivations in normal form satisfy the subformula property, i.e. in a normal derivation 𝒟 of A from Γ each formula is either a subformula of A or of some formula in Γ.
However, with the disjunction elimination rule this property is messed up, since we get to derive a formula C from A ∨ B which is not necessarily related to A or B.
That is why, in order to recover the subformula property, permutation conversions are introduced, which can be presented in their most general form in the following way:
A derivation that ends with an application of ∨E, with major premise A ∨ B (from Γ) and minor premises C (from Δ, A) and C (from Θ, B), and that is continued below the conclusion C by a further derivation 𝒟, is converted into the derivation in which 𝒟 is appended to each of the two minor derivations of C, and ∨E is applied last.
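Rendered at the level of terms, in the illustrative Haskell encoding introduced above, and under the usual proviso that the bound variables x and y do not occur free in the surrounding context, such a permutative step pushes an elimination applied to a case term into both branches; the selection of eliminations shown here is our own.

-- One permutative (γ) step at the root, shown for fst, snd and application.
gammaRoot :: Term -> Maybe Term
gammaRoot (Fst (Case r x s y t))   = Just (Case r x (Fst s) y (Fst t))
gammaRoot (Snd (Case r x s y t))   = Just (Case r x (Snd s) y (Snd t))
gammaRoot (App (Case r x s y t) u) = Just (Case r x (App s u) y (App t u))
gammaRoot _                        = Nothing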
Whether or not these are supposed to be taken into the same league as β- and η-conversions in matters of identity preservation of proofs is an even bigger dispute than the one mentioned concerning η-conversions.
Prawitz <cit.> says that while there can be no doubt about the `proper reductions' having no influence on the identity of the proof, “[t]here may be some doubts concerning the permutative ∨E-[...]reductions in this connection" but does not go into that matter any further.
Since he needs these reductions to prove his normalization theorem, it seems that he would be inclined not to have too many doubts about identity preservation under the permutative conversions.
Girard <cit.>, on the other hand, does not seem to be convinced, as he says - considering an example of permutation conversion - that we are forced to identify “a priori different deductions" in these cases.
Even though he accepts these conversions for technical reasons, he does not seem to be willing to really identify the underlying proof objects.
Restall[Restall, G. (2017). Proof Terms for Classical Derivations. Article in progress: https://consequently.org/papers/proof-terms.pdf], however, analyzing derivations by assigning to them what he calls “proof terms" rather than λ-terms, considers the derivations above as merely distinct in representation but not in the underlying proof, which on his account is the same for both.
What is more, he does so not merely for technical but for philosophical reasons, since he claims the flow of information from premises to conclusion to be essentially the same.
Lindley <cit.> and Tranchini <cit.> both make a point about the connection between reductions and expansions (although they speak of certain kinds of “generalized" expansions) on the one hand and (“generalized") permutative conversions on the other, claiming that performing a (generalized) expansion on the left hand side of the conversion above followed by a reduction (and possibly α-conversion) just yields the right hand side.
To conclude, if we only consider the ⊃-∧-fragment of intuitionistic propositional logic, β-η-equality is enough, but if we consider a richer vocabulary, it seems to us at least that there are substantial reasons to include permutative conversions in our equational theory.[The consequence for this paper would be of course to add “γ-conversions" to the list of relevant conversions in our definitions about normal forms, identity of denotation, etc.]
We do not aim to make a final judgment on this issue here.
Rather, when we have laid out our distinction about sense and denotation of proofs below, we will consider the matter again and show why it makes no essential difference for our purposes whether we include permutative conversions or not.
§ THE SENSE OF DERIVATIONS
Let us spell out at this point what exactly we will consider as the sense and also again the denotation of a derivation in our approach:
Definition of denotation:
The denotation of a derivation in a system with λ-term assignment is referred to by the end-term of the derivation.
Identity of denotation holds modulo belonging to the same equivalence class induced by the set of α-, β- and η-conversions of λ-terms, i.e. derivations that are denoted by terms belonging to the same equivalence class induced by these conversions are identical, they refer to the same proof object.[We use the more accurate formulation of “belonging to the same equivalence class" here instead of the formulation we used before of two terms “having the same normal form". The reason for this is that while these two properties coincide for most standard cases, they do not necessarily concur when it comes to Lindley's “general permutative conversions" or also to SC in general because in these cases the confluence property is not guaranteed. We want to thank one of the anonymous referees for indicating this important point.]
Definition of sense:
The sense of a derivation in a system with λ-term assignment consists of the set[One could also consider the question whether multi-sets are an even better choice here, which would of course yield a much stronger differentiation of senses. The reason why we consider sets instead of multi-sets is that to us the distinctions brought about by multi-sets, by e.g. a variable occurrence more or less, do not seem to go hand in hand with substantial differences in how inferences are built up.] of λ-terms that occur within the derivation.
Only a derivation made up of applications of correct inference rules, i.e. rules that have reduction procedures, can have sense.
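In terms of the illustrative encoding introduced above, sense and denotation can be read off as follows; the representation of derivations as trees of term-annotated nodes (Deriv, Node) is our own simplification.

import Data.List (nub)

-- A derivation, viewed abstractly: the term annotating its conclusion and
-- its immediate sub-derivations.
data Deriv = Node Term [Deriv]

-- The denotation is named by the end-term, to be read modulo α-, β- and
-- η-equivalence (cf. the definition above).
denotation :: Deriv -> Term
denotation (Node t _) = t

-- The sense is the set of terms occurring anywhere within the derivation.
sense :: Deriv -> [Term]          -- a list read as a set: duplicates removed
sense (Node t ds) = nub (t : concatMap sense ds)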
§.§ Change of sense due to reducibility
Concerning a distinction between sense and denotation in the context of proofs, the rare cases where this is mentioned at all deal with derivations one of which is reducible to the other or with λ-terms which are β-convertible to the same term in normal form (cf. <cit.>, <cit.>, Restall 2017, p. 6).
Since Tranchini is the only one to spell out the part about sense in detail, we will briefly summarize his considerations.
As mentioned above, in his account, for a derivation to have sense means that it is made up of applications of correct inference rules.
The question to be asked then is of course: what makes up correct inference rules?
Tranchini's answer is that inference rules are correct if they have reduction procedures available, i.e. a procedure to eliminate any maximal formula resulting from an application of an introduction rule immediately followed by an elimination rule of the same connective.
From a PTS point of view, applying reduction procedures can be seen as a way of interpreting the derivation because it aims to bring the derivation to a normal form, i.e. the form in which the derivation represents the proof it denotes most directly <cit.>.[Tranchini does not restrict his examination to derivations that normalize, though, but to the contrary, uses it to analyze non-normalizable derivations, like paradoxical ones.]
So the reduction procedures are the instructions telling us how to identify the denotation of the derivation, which for Tranchini means that they give rise to the sense of the derivation.
If we have two derivations denoting the same proof, for example, one in normal form and the other in a form that can be reduced to the former, we could say in Fregean terminology that they have the same denotation but differ in their sense because they denote the proof in different ways, one directly, the other indirectly.
So, we can take as an example the following two derivations, one in normal and one in non-normal form:
NDp ⊃ p
[r]⊃I
[x : p]
λx.x: p ⊃p
NDnon-normal p ⊃ p
[r]∧E
[r]∧I
[r]⊃I
[x : p]
λx.x: p ⊃p
[r]⊃I
[y : q]
λy.y: q ⊃q
⟨λx.x, λy.y ⟩: (p ⊃p) ∧(q ⊃q)
fst(⟨λx.x, λy.y ⟩): p ⊃p
The latter obviously uses an unnecessary detour via the maximal formula (p ⊃ p) ∧ (q ⊃ q), which is introduced by conjunction introduction and then immediately eliminated again, thus, producing different and more complex terms than the former derivation.
The derivation can be easily reduced to the former, though, which can be also seen by β-reducing the term denoting the formula to be proven:
fst(⟨λx.x, λy.y ⟩)
⇝λx.x
We can also give an example analogous to the one above, where a non-normal term (highlighted in bold) in SC is created by using the cut rule:[Note, however, that an application of cut is necessary, but not sufficient, for creating a non-normal term, i.e., there can be applications of cut that do not create a non-normal term. A non-normal term is produced if both occurrences of the cut formula in the premises are principal.]
SC⊢ (p ∧ p) ⊃ (p ∨ p)
[r]⊃R
[r]∨R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y) : (p ∧p) ⊃(p ∨p)
SCcut⊢ (p ∧ p) ⊃ (p ∨ p)
[r]⊃R
[r]∨R
[r]cut
[r]C
[r]∧R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
x : p, z : p ⊢z : p
y : p ∧p ⊢snd(y) : p
y : p ∧p, y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p
y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
y : p ∧p ⊢fst⟨fst(y), snd(y)⟩: p
y : p ∧p ⊢fst⟨fst(y), snd(y)⟩ : p ∨p
⊢λy.fst⟨fst(y), snd(y)⟩ : (p ∧p) ⊃(p ∨p)
λy.fst⟨fst(y), snd(y)⟩
⇝λy.fst(y)
In this case again the two derivations are essentially the same because the latter can be reduced to the former by eliminating the application of the cut rule.
Again, the proof object they represent is thus the same, only the way of making the inference, represented by the different terms occurring within the derivation, differs, i.e. the sense is different.
§.§ Change of sense due to rule permutations
So far we only considered the case in which there is an identity of denotation but a difference in sense of derivations due to one being represented by a λ-term in non-normal form reducible to one in normal form.
However, we want to show that this is not the only case where we can make such a distinction.
This is also the reason why our approach differs from Tranchini's (who works solely in an ND system) in how we grasp the notion of sense of a derivation.
Following Tranchini, the derivation having sense at all depends on there being reduction procedures available for the rules that are applied in it.
Since we are also interested in a comparison of sense-and-denotation relations between ND and SC systems, our approach requires that there are reduction procedures available for the created terms.
Thereby we will be able to cover both systems at once.
Encoding the proof systems with λ-terms also makes the connection between changing the order of the rule applications and the sense-and-denotation distinction transparent, which is the other case we want to cover.
In ND with disjunction rules it is possible to have rule permutations producing derivations with end-terms identifiable by means of the permutative conversions.
In SC, however, there are more cases of rule permutations possible.
When the left disjunction rule is involved, this also leads to different - though γ-equal - terms; with the left conjunction or implication rule the end-term remains completely unchanged.
Consider e.g. the following three derivations in SC of the same sequent ⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)):
SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∧L
[r]W
[r]∨R
[r]Rf
q ⊢q
q ⊢p ∨q
q, r ⊢p ∨q
q ∧r ⊢p ∨q
[r]∧L
[r]W
[r]∨R
[r]Rf
r ⊢r
r ⊢p ∨r
q, r ⊢p ∨r
q ∧r ⊢p ∨r
q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r)
q ∧r ⊢(p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
p, p ⊢(p ∨q) ∧(p ∨r)
p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∨R
[r]∧L
[r]W
[r]Rf
q ⊢q
q, r ⊢q
q ∧r ⊢q
q ∧r ⊢p ∨q
[r]∨R
[r]∧L
[r]W
[r]Rf
r ⊢r
q, r ⊢r
q ∧r ⊢r
q ∧r ⊢p ∨r
q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r)
q ∧r ⊢(p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
p, p ⊢(p ∨q) ∧(p ∨r)
p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
q ⊢q
q ⊢p ∨q
q, r ⊢p ∨q
q ∧r ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
(q ∧r) ∨p ⊢p ∨q
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
r ⊢r
r ⊢p ∨r
q, r ⊢p ∨r
q ∧r ⊢p ∨r
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
(q ∧r) ∨p ⊢p ∨r
(q ∧r) ∨p, (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
The difference between SC1 and SC2 (highlighted in bold) is that the order of applying the right disjunction rule and the left conjunction rule is permuted.
The difference between SC1 and SC3 (highlighted with underlining) is that the order of applying the right conjunction rule and the left disjunction rule is permuted.
The order of applying the right disjunction rule and the left conjunction rule stays fixed this time.
Encoded with λ-terms, though, we see that in the first case, comparing SC1 and SC2, the permutation of rule applications produces exactly the same end-term.
Both derivations have the same end-term, namely:
λ u. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩}
SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∧L
[r]W
[r]∨R
[r]Rf
y : q ⊢y : q
y : q ⊢y : p ∨q
y : q, z : r ⊢y : p ∨q
v : q ∧r ⊢fst(v) : p ∨q
[r]∧L
[r]W
[r]∨R
[r]Rf
z : r ⊢z : r
z : r ⊢z : p ∨r
y : q, z : r ⊢z : p ∨r
v : q ∧r ⊢snd(v): p ∨r
v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r)
⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∨R
[r]∧L
[r]W
[r]Rf
y : q ⊢y : q
y : q, z : r ⊢y : q
v : q ∧r ⊢fst(v) : q
v : q ∧r ⊢fst(v) : p ∨q
[r]∨R
[r]∧L
[r]W
[r]Rf
z : r ⊢z : r
y : q, z : r ⊢z : r
v : q ∧r ⊢snd(v) : r
v : q ∧r ⊢snd(v): p ∨r
v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r)
⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
Considering the second comparison between SC1 and SC3 the situation is different: here the permutation of rule applications leads to a different end-term.
In the end-term for SC1 and SC2 the pairing operation is embedded within the case expression, whereas in the end-term for SC3 the case expression is embedded within the pairing:
λ u.⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩
SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
y : q ⊢y : q
y : q ⊢y : p ∨q
y : q, z : r ⊢y : p ∨q
v : q ∧r ⊢fst(v) : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
u : (q ∧r) ∨p ⊢ {v.fst(v) | x.x} : p ∨q
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
z : r ⊢z : r
z : r ⊢z : p ∨r
y : q, z : r ⊢z : p ∨r
v : q ∧r ⊢snd(v) : p ∨r
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
u : (q ∧r) ∨p ⊢ {v.snd(v) | x.x}: p ∨r
u : (q ∧r) ∨p, u : (q ∧r) ∨p ⊢⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: (p ∨q) ∧(p ∨r)
⊢λu.⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
When we take a look at how the term-annotated rules must be designed in order to have a correspondence to the respective rules in ND, we see why some permutations of rule applications lead to different end-terms, while others do not; and why SC is in general more flexible in this respect than ND.
In SC the left conjunction rule as well as the left implication rule are substitution operations, i.e. they can change their place in the order without affecting the basic term structure, because terms are only substituted within the inner term structure.[For ⊃L the only exception is when an application of this rule is permuted with an application of ∨L, which creates a different, though γ-convertible term.]
In ND, on the other hand, there are no substitution operations used in the term assignment, i.e. for each rule application a new basic term structure is created.
How is this related to the distinction between sense and denotation?
In cases like SC1 vs. SC2 the way the inference is given differs, which can also be seen in different terms annotating the formulas occurring within the derivation: with otherwise identical terms in the two derivations y and z only occur in SC1, while fst(v) and snd(v) only occur in SC2.
However, the resulting end-term stays the same, thus, we would describe the difference between these derivations as a difference in sense but not in denotation.
In other cases, when disjunction elimination or the left disjunction rule is involved, permutation of rule applications can lead to a different end-term, as we see above in SC1 vs. SC3.
Whether this corresponds to a difference in denotation depends on whether we accept γ-conversions to be identity-preserving.
What all cases have in common, though, is that rule permutation always leads to a difference in sense of the given derivations because the sets of terms occurring within the derivations differ from each other.
§.§ Philosophical motivation
Let us have a look at how the Fregean conception of sense is received in the literature in order to show the philosophical motivation for adopting such a definition of sense for derivations.
According to Dummett <cit.>, Fregean sense is to be considered as a procedure to determine its denotation.[This idea of sense as procedures also occurs in more recent publications like <cit.> or <cit.>.]
Girard <cit.>, in a passage about sense and denotation and the relation between proofs and programs, mentions that the sense is determined by a “sequence of instructions" and when we see in this context terms as representing programs and “the purpose of a program [...] to calculate [...] its denotation" (ibid., p. 17), then it seems plausible to view the terms occurring within the derivation, decorating the intermediate steps in the construction of the complex end-term that decorates the conclusion, as the sense of that derivation.
Tranchini holds the reduction procedures to be the sense because these `instructions' lead to the term in normal form.
However, in our framework - because we do not only consider normal vs. non-normal cases - it seems more plausible to look at the exact terms occurring within the derivations and view them as representing the steps in the process of construction encoding how the derivation is built up and leading us to the denotation, the end-term.
For us, containing only terms for which reduction procedures are available is therefore merely a necessary requirement for a derivation to have sense; it does not by itself make up the sense.
In the case of rule permutation we can then say that the proof is essentially the same but the way it is given to us, the way of inference, differs: i.e. the sense differs.
This can be read off from the set of terms that occur within the derivation: they end up building the same end-term, but the way it is built differs, the procedures to determine the denotation differ.
Thus, this allows us to compare differences in sense within one proof system as well as over different proof systems.
Troelstra and Schwichtenberg <cit.>, e.g., give an example of two derivations in SC producing the same end-term in different ways to show that just from the variables and the end-term we cannot read off how the derivation is built up:[For simplicity we omit the weakening steps that would, strictly speaking, have to precede the applications of the ∧L-rule.]
SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q))
[r]⊃R
[r]⊃R
[r]∧L
[r]∧L
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : q ⊢y : q
x : p, y : q ⊢⟨x, y ⟩: p ∧q
x : p, z : q ∧r ⊢⟨x, fst(z) ⟩: p ∧q
u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q
u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q)
⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q))
SC2⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q))
[r]⊃R
[r]⊃R
[r]∧L
[r]∧L
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : q ⊢y : q
x : p, y : q ⊢⟨x, y ⟩: p ∧q
u : s ∧p, y: q ⊢⟨snd(u), y ⟩: p ∧q
u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q
u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q)
⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q))
The senses of these derivations would be the following:
Sense of SC1:
{x, y, z, u, ⟨ x, y ⟩, ⟨ x, fst(z) ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩,
λ u.λ z.⟨ snd(u), fst(z) ⟩}
Sense of SC2:
{x, y, z, u, ⟨ x, y ⟩, ⟨ snd(u), y ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩,
λ u.λ z.⟨ snd(u), fst(z) ⟩}
The two sets only differ with regard to the underlined terms, otherwise they are identical.
Thus, they only differ in the order in which the two left conjunction rules are applied.
For the resulting end-term this is inessential, but we can see that when taking the sense, and not only the end-terms, i.e. the denotation, into account, it is indeed possible to read off the structure of the derivations.
As noted above (examples on p. 6), the term annotation of the calculi makes this structure of derivations explicit so that we can differentiate between derivations which would otherwise look identical.
As several authors point out, this is a desirable feature if one is not only interested in mere provability but wants to study the structure of the derivations in question (cf. <cit.>, <cit.>) and also, for simplicity, if one wants to compare proof systems of ND and SC with each other <cit.>.
Since we are interested in both of these points, it seems the right choice for our purposes to consider the annotated versions of the calculi and that is also why these annotated versions are indeed needed for our notions of sense and denotation.
Of course, one could argue that the underlying structure is still the same in the non-annotated versions and can be made explicit by other means, too, like showing the different generalizations of the derivations, but still, we do not see how in these calculi our notions could be easily applied.
Another issue that needs to be considered is the one of identity of senses, i.e. synonymy.
Therefore, we want to extend our definition of sense given above with an addition:
If a sense-representing set can be obtained from another by uniformly replacing (respecting the usual capture-avoiding conventions) any occurrence of a variable, bound or free, by another variable of the same type, they express the same sense.
What we ensure with this point is just that it does not (and should not) matter which variables one chooses for which proposition as long as one does it consistently.
So, it does not make a difference whether we have
ND1p ⊃ (q ⊃ p)
[r]⊃I
[r]⊃I
[x : p]
λz.x: q ⊃p
λx. λz.x: p ⊃(q⊃p)
Sense1: {x, λ z.x, λ x. λ z.x}
or
ND2p ⊃ (q ⊃ p)
[r]⊃I
[r]⊃I
[y : p]
λz.y: q ⊃p
λy. λz.y: p ⊃(q⊃p)
Sense2: {y, λ z.y, λ y. λ z.y}
Sense1 and Sense2 represent the same sense.
Or to give another example (pointed to by one of the anonymous referees) where we have free variables occurring within the derivation but not appearing in the end-term: If one would replace all occurrences of the free variable y by the variable w in derivation SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q)) (cf. above), then this would make no difference to the sense according to our definition since the sense-representing sets would be obtained from replacing y by w.
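The synonymy clause can likewise be made concrete: two sense-representing sets express the same sense iff one results from the other by a uniform, type-respecting renaming of variables. A sketch, again on the illustrative Term encoding above, with the renaming ρ given explicitly and assumed injective:

-- Uniformly rename variables, bound and free, according to ρ.
rename :: [(String, String)] -> Term -> Term
rename rho = go
  where
    rn x = maybe x id (lookup x rho)
    go (Var x)          = Var (rn x)
    go (Lam x t)        = Lam (rn x) (go t)
    go (App s t)        = App (go s) (go t)
    go (Pair s t)       = Pair (go s) (go t)
    go (Fst t)          = Fst (go t)
    go (Snd t)          = Snd (go t)
    go (Inl t)          = Inl (go t)
    go (Inr t)          = Inr (go t)
    go (Abort t)        = Abort (go t)
    go (Case r x s y t) = Case (go r) (rn x) (go s) (rn y) (go t)

-- Identity of sense under the renaming ρ: the renamed first set and the
-- second set contain exactly the same terms.
sameSenseUnder :: [(String, String)] -> [Term] -> [Term] -> Bool
sameSenseUnder rho s1 s2 =
  let s1' = map (rename rho) s1
  in all (`elem` s2) s1' && all (`elem` s1') s2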
This also fits the Fregean criterion of two sentences' identical sense, as Sundholm <cit.> depicts it within a broader analysis: two propositions express the same sense if it is not possible to hold different epistemic attitudes towards them, i.e. “if one holds the one true, one also must hold the other one true, and vice versa".
Whereas, if we have two sentences which only differ in two singular terms, referring to the same object but differing in sense, we can easily hold the one sentence to be true, while thinking the other is false, if we do not know that they are referring to the same object.
With proofs it is the same: Looking at ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) we may not know whether the derivation is valid or not, we do know, however, that if one is a valid derivation then so is the other.
With derivations differing in sense this is not so straightforward.
For Frege this point of considering cases where intensionality is directed towards sentences was crucial to develop his notion of sense, so the question arises how we can explain cases of intensionality directed towards proofs with our notions of sense and denotation.
Let us suppose we have two denotationally-identical proofs which are represented by two different derivations 𝒟 and 𝒟'.
In this case it could happen that a (rational) person believes that derivation 𝒟 is valid but does not believe that derivation 𝒟' is valid.
How can we account for that?
One explanation would be of course to point to the difference in linguistic representation.
After all, it can just be the case that one way of writing down a proof is more accessible to the person than another (they may not be familiar with a certain proof system, for example).
This would amount to letting the linguistic representation, the signs, collapse with the sense of a derivation.
However, then we would have no means to distinguish this case from cases in which we want to argue that it is not justified for a rational person to have different propositional attitudes towards propositions which are about derivations differing insignificantly from each other, like in the cases of ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) above.
For Frege <cit.> the referent of an expression in an intensional context is not its customary referent, i.e. the object it refers to or the truth value in the case of sentences, but its customary sense.
Here the situation is the same: What is referred to in such a setting, when speaking about the attitudes of a person towards propositions about derivations, is not the proof objects (which are identical in our situation) but their senses, which are in this context represented by the sets of terms encoding the steps of construction.
It seems plausible then to say that when the construction steps differ in two derivations, a person can have different attitudes towards propositions about them, because the different construction steps may lead to this person grasping the one derivation, while not understanding the other.
§ ANALOGY TO FREGE'S CASES
Let us finally compare how our conception of sense and denotation in the context of proofs fits the distinction Frege came up with for singular terms and sentences.
We can have the following two cases with Frege's distinction: firstly (cf. <cit.>), there can be different signs corresponding to exactly one sense (and then of course also only one denotation).
In the case of singular terms an example would be “Gottlob's brother” and “the brother of Gottlob".
The sense, the way the denoted individual object is given to us, is the same because there is only a minor grammatical difference between the two expressions.
More frequently, this occurs in comparing different languages, though, taking singular terms which express exactly the same sense only using different words, like “the capital of France" and “die Hauptstadt Frankreichs".
In the case of sentences an example would be changing from an active to a passive construction without changing the emphasis of the sentence; an example from Frege is the following: “M gave document A to N", “Document A was given to N by M" <cit.>.
In the case of proofs, finally, an example would be the following case:
ND(p∨ p) ⊃ (p∧ p)
[r]⊃I^3
[r]∧I
[r]∨E^1
[y : p ∨p]^3
[x : p]^1
[x : p]^1
{x.x | x.x} : p
[r]∨E^2
[y : p ∨p]^3
[x : p]^2
[x : p]^2
{x.x | x.x} : p
⟨ {x.x | x.x}, {x.x | x.x}⟩: p ∧p
λy.⟨ {x.x | x.x}, {x.x | x.x}⟩ : (p ∨p) ⊃(p ∧p)
SC⊢ (p∨ p) ⊃ (p ∧ p)
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]Rf
x : p ⊢x : p
[r]Rf
x : p ⊢x : p
y : p ∨p ⊢ {x.x | x.x} : p
[r]∨L
[r]Rf
x : p ⊢x : p
[r]Rf
x : p ⊢x : p
y : p ∨p ⊢ {x.x | x.x}: p
y : p ∨p , y : p ∨p ⊢⟨ {x.x | x.x}, {x.x | x.x} ⟩: p ∧p
y : p ∨p ⊢⟨ {x.x | x.x}, {x.x | x.x}⟩: p ∧p
⊢λy.⟨ {x.x | x.x}, {x.x | x.x}⟩: (p ∨p) ⊃(p ∧p)
Sense:
{x, y, {x.x | x.x}, ⟨ {x.x | x.x}, {x.x | x.x}⟩,
λ y.⟨ {x.x | x.x}, {x.x | x.x}⟩}
Or to give another example:
NDp ⊃ (p ⊃ (p ∧ p))
[r]⊃I^2
[r]⊃I^1
[r]∧I
[x : p]^2
[y : p]^1
⟨x, y ⟩: p ∧p
λy.⟨x, y ⟩: p ⊃(p ∧p)
λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
SC⊢ p ⊃ (p ⊃ (p ∧ p))
[r]⊃R
[r]⊃R
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : p ⊢y : p
x : p, y : p ⊢⟨x, y ⟩: p ∧p
x : p ⊢λy.⟨x, y ⟩: p ⊃(p ∧p)
⊢λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
Sense: {x, y, ⟨ x, y ⟩, λ y.⟨ x, y ⟩, λ x.λ y.⟨ x, y ⟩}
In these cases derivations can consist of different signs, namely by having one representation in SC and one in ND, which differ neither in sense nor in denotation, since they both contain exactly the same terms and produce the same end-term.
This comparison between different proof systems seems to fit nicely with Frege's <cit.> comment on “the same sense ha[ving] different expressions in different languages".
However, as we have seen above with the examples ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p), this case can also occur within the same proof system.
One could wonder whether there should not be a differentiation between the senses of the derivations in the first example since it seems that different rules are applied: in SC⊢ (p∨ p) ⊃ (p ∧ p) we have an application of contraction, which we do not have in ND(p∨ p) ⊃ (p∧ p).
This would also question whether our definition of sense distinguishes and identifies the right amount of cases.
We do believe that this is the case, though, because in the first example, where there is an application of the contraction rule in SC, there is also a multiple assumption discharge in the ND-derivation, which is generally seen as the corresponding procedure, just as cases of vacuous discharge of assumptions in ND correspond to the application of weakening in SC.
So just as in different languages of course not exactly the same expressions are used, here too, the rules differ from ND to SC but since the corresponding procedures are used, one can argue that the sense does not differ for that reason.
Another case that can occur according to Frege (ibid.) is that we have one denotation, i.e. one object a sign refers to, but different senses.
An example for this would be his famous “morning star" and “evening star" comparison, where both expressions refer to the same object, the planet Venus, but the denoted object is given differently.
On the sentence level this would amount to exchanging singular terms in a sentence by ones which have the same denotation: “The morning star is the planet Venus" and “The evening star is the planet Venus".
The denotation of the sentence - with Frege: its truth value - thus stays the same, only the sense of it differs, the information is conveyed differently to us.
For our proof cases we can say that this case is given when we have syntactically different derivations, be it in one or in different proof systems, which have end-terms belonging to the same equivalence class induced by the set of α-, β- and η-conversions.
Thus, examples would be corresponding proofs in ND and SC, which share the same end-term, but contain different terms occurring within the derivations.
The reason for this seems to be that in SC often more variables are necessary than in ND.
If we compare derivations within ND, one definite case in which we have the same denotation but a different sense is between equivalent but syntactically distinct derivations, e.g. non-normal and normal derivations, one reducible to the other.
Another case up for debate would be the one with rule permutations due to disjunction elimination.
Within SC we can have two cases: one due to rule permutation, one due to applications of cut.
For the first case, where the inference could be given in a different way, although ending on the same term, we gave examples above (cf. p. 12 and 14f.).
However, it is worth mentioning that our distinction still captures the usual distinction, the second case, where it is said that two derivations, one containing cut and the other one in cut-free form (as a result of cut-elimination applied to the former), have the same denotation but differ in sense:
SC⊢ (p∧ p) ⊃ (p∨ p)
[r]⊃R
[r]∨R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p∧p ⊢fst(y) : p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y) : (p ∧p) ⊃(p ∨p)
Sense: {z, x, y, fst(y), fst(y), λ y.fst(y)}
SCcut⊢ (p∧ p) ⊃ (p∨ p)
[r]⊃R
[r]cut
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
[r]∨R
[r]Rf
z : p ⊢z : p
z : p ⊢z: p ∨p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y): (p ∧p) ⊃(p ∨p)
Sense: {z, x, y, fst(y), z, fst(y), λ y.fst(y)}
As mentioned above (fn 14), cut need not create a non-normal term, as is the case here; but still, any application of cut will necessarily change the sense of a derivation as opposed to its cut-free form.
Finally, cases that need to be avoided in a formal language according to Frege <cit.> would be to have one sign, corresponding to different senses, or on the other hand, one sense corresponding to different denotations.
As he mentions, these cases of course occur in natural languages but should not happen in formal ones, so they should certainly not be possible in our present context either.
Fortunately, this cannot happen in the context of our annotated proof systems, either, since the signs (taken to be the derivation as it is written down) always express at most one sense in our annotated system, and likewise the sense always yields a unique denotation since the end-term is part of the sense-denoting set.[Another question would be whether there can be signs without any sense at all. Frege <cit.> dismisses this case, as well, with a remark that we need at least the requirement that our expressions are “grammatically well-formed". Tranchini <cit.> gives a good analogy pointing to the notorious connective playing this role in the case of proofs.]
§ CONCLUSION
The context in which Frege considered sense and denotation was the context of identity.
Likewise, we argued in this paper, if we use term-annotated calculi, we can also say something about proof identity: identity of proofs over different calculi or within the same calculus consists in having end-terms that belong to the same equivalence class induced by the set of α-, β- and η-conversions.
In ND this can happen when we have the same proof in normal and in non-normal form; in SC this can happen when we have the same proof with cut and in cut-free form, but also when there are forms of rule permutations where an application of the ∧L-rule or the ⊃L-rule switches place with another rule.
Including disjunction in our language creates for both calculi the additional question of whether rule permutations including disjunction elimination (resp. the left disjunction rule) lead to a different proof, or whether these proofs should be identified.
We are more interested in sense, however, and here we can conclude that what in all these cases changes is the sense of the derivation in question.
Finally, considering the question of identity of sense, i.e. synonymy, and trying to follow Frege's conception on this matter, too, we can say the following: if two derivations are supposed to be identical in sense, this means that the way the inference is given is essentially the same, so the set of terms building up the end-term must be the same.
The end-term itself does not necessarily tell us anything about the structure of the proof.
Sense, on the other hand, is more fine-grained in that the set of terms occurring within the derivation reflects how the derivation is built up.
Especially in SC, where we can have different orders of rule applications leading up to the same end-term, the sense gives us means to distinguish on a more fine-grained level.
BarendregtGhilezan Barendregt, H., & Ghilezan, S. (2000). Lambda terms for natural deduction, sequent calculus and cut elimination. Journal of Functional Programming, 10(1), 121–134.
Groote De Groote, P. (1999). On the Strong Normalisation of Natural Deduction with Permutation-Conversions. In P. Narendran, & M. Rusinowitch (Eds), Rewriting Techniques and Applications: RTA 1999 (pp. 45–59). Berlin/Heidelberg: Springer.
Dosen2003 Došen, K. (2003). Identity of Proofs Based on Normalization and Generality. Bulletin of Symbolic Logic, 9, 477–503.
Dosen2008 Došen, K. (2008). Cut Elimination in Categories. Springer.
Dummett Dummett, M. (1973). Frege: Philosophy of Language. New York: Harper & Row.
DJM Duží, M., Jespersen, B., & Materna, P. (2010). Procedural Semantics for
Hyperintensional Logic: Foundations and Applications of Transparent Intensional Logic. Springer.
Francez Francez, N. (2017). On harmony and permuting conversions. Journal of Applied Logic, 21, 14–23.
Frege1 Frege, G. (1948) [1892]. Sense and Reference. The Philosophical Review, 57(3), 209–230.
Frege2 Frege, G. (1979). Posthumous Writings. Oxford: Basil Blackwell.
Friedman Friedman, H. (1975). Equality between functionals. In R. Parikh (Ed.), Logic colloquium: Lecture notes in mathematics 453 (pp. 23–37). Berlin/Heidelberg: Springer.
Girard Girard, J.-Y. (1989). Proofs and Types. Cambridge: Cambridge University Press.
Hacking Hacking, I. (1979). What is Logic? The Journal of Philosophy, 76(6), 285–319.
Herbelin Herbelin, H. (1994). A Lambda-calculus Structure Isomorphic to Gentzen-style Sequent Calculus Structure. Computer Science Logic, 61–75.
Kreisel Kreisel, G. (1971). A survey of proof theory II. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 109–170). Amsterdam: North-Holland.
Lindley Lindley, S. (2007). Extensional Rewriting with Sums. In S. Ronchi Della Rocca (Ed.), Typed Lambda Calculi and Applications: TLCA 2007 (pp. 255–271). Berlin/Heidelberg: Springer.
M-L Martin-Löf, P. (1975). About Models for Intuitionistic Type Theories and the Notion of Definitional Equality. In S. Kanger (Ed.), Proceedings of the Third Scandinavian Logic Symposium (pp. 81–109). Amsterdam: North-Holland.
Muskens Muskens, R. (2005). Sense and the Computation of Reference. Linguistics and Philosophy, 28(4), 473–504.
NegrivonPlato Negri, S., & von Plato, J. (2001). Structural Proof Theory. Cambridge/New York: Cambridge University Press.
Pfenning Pfenning, F. (2000). Structural Cut Elimination: I. Intuitionistic and Classical Logic. Information and Computation, 157, 84–141.
Pottinger Pottinger, G. (1977). Normalization as a homomorphic image of cut-elimination. Annals of Mathematical Logic, 12, 323–357.
Prawitz1965 Prawitz, D. (1965). Natural Deduction. Stockholm: Almqvist & Wiksell.
Prawitz1971 Prawitz, D. (1971). Ideas and results in proof theory. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 235–307). Amsterdam: North-Holland.
SU Sørensen, M., & Urzyczyn, P. (2006). Lectures on the Curry-Howard Isomorphism. Amsterdam: Elsevier Science.
Statman Statman, R. (1983). λ-definable functionals and βη conversion. Archiv für Mathematische Logik, 23, 21–26.
Sundholm Sundholm, G. (1994). Proof-Theoretical Semantics and Fregean Identity Criteria for Propositions. The Monist, 77(3), 294–314.
Tranchini2016 Tranchini, L. (2016). Proof-theoretic semantics, paradoxes and the distinction between sense and denotation. Journal of Logic and Computation, 26(2), 495–512.
Tranchini2018 Tranchini, L. (2018). Stabilizing Quantum Disjunction. Journal of Philosophical Logic, 47, 1029–1047.
TS Troelstra, A., & Schwichtenberg, H. (2000). Basic Proof Theory. 2nd ed., Cambridge: Cambridge University Press.
Urban Urban, C. (2014). Revisiting Zucker's Work on the Correspondence Between Cut-Elimination and Normalisation. In L. Pereira, E. Haeusler, & V. de Paiva (Eds), Advances in Natural Deduction: A Celebration of Dag Prawitz's Work (pp. 31–50). Dordrecht: Springer.
Wideback Widebäck, F. (2001). Identity of Proofs. Stockholm: Almquist & Wiksell International.
Zucker Zucker, J. (1974). The correspondence between cut-elimination and normalization. Annals of Mathematical Logic, 7, 1–112.
|
http://arxiv.org/abs/2307.04173v1 | 20230709133912 | Budgeted Matroid Maximization: a Parameterized Viewpoint | [
"Ilan Doron-Arad",
"Ariel Kulik",
"Hadas Shachnai"
] | cs.DS | [
"cs.DS"
] |
We study budgeted variants of well known maximization problems with multiple matroid constraints. Given an ℓ-matchoid on a ground set E, a profit function p:E →ℝ_≥ 0, a cost function c:E →ℝ_≥ 0, and a budget B ∈ℝ_≥ 0, the goal is to find
in the ℓ-matchoid a feasible set S of maximum profit p(S) subject to the budget constraint, i.e., c(S) ≤ B. The budgeted ℓ-matchoid (BM) problem includes as special cases budgeted ℓ-dimensional matching and budgeted ℓ-matroid intersection. A strong motivation for studying BM from a parameterized viewpoint comes from the APX-hardness of unbudgeted
ℓ-dimensional matching (i.e., B = ∞) already for ℓ = 3.
Nevertheless, while there are known FPT algorithms for the unbudgeted variants of the above problems, the budgeted variants are studied here
for the first time through the lens of parameterized complexity.
We show that BM parametrized by solution size is W[1]-hard, already with a degenerate single matroid constraint. Thus, an exact parameterized algorithm is unlikely to exist, motivating the study of FPT-approximation schemes (FPAS). Our main result is an FPAS for BM (implying an FPAS for ℓ-dimensional matching and budgeted ℓ-matroid intersection), relying on the notion of representative set - a small cardinality subset of elements which preserves the optimum up to a small factor. We also give a lower bound on the minimum possible size of a representative set which can be computed in polynomial time.
§ INTRODUCTION
Numerous combinatorial optimization problems
can be interpreted as constrained budgeted problems. In this setting, we are given a ground set E of elements and a family ℱ ⊆ 2^E of subsets of E known as the feasible sets.
We are also given
a cost function c:E→ℝ, a profit function p:E→ℝ, and a budget B ∈ℝ. A solution is a feasible set S ∈ ℱ of bounded cost c(S) ≤ B.[For a function f:A →ℝ and a subset of elements C ⊆ A, define f(C) = ∑_e ∈ C f(e).] Broadly speaking, the goal is to find a solution S of maximum profit.
Notable examples include budgeted matching <cit.> and budgeted matroid intersection <cit.>,
shortest weight-constrained path <cit.>, and constrained minimum spanning trees <cit.>.
Despite the wide interest in constrained budgeted problems in approximation algorithms, not much is known about this intriguing
family of problems in terms of parameterized complexity. In this work, we study
budgeted maximization with the fairly general
ℓ-dimensional matching, ℓ-matroid intersection, and ℓ-matchoid constraints.
An ℓ-dimensional matching constraint is a set system (E, ℐ), where E ⊆ U_1 ×…× U_ℓ for ℓ sets U_1, …, U_ℓ. The feasible sets are all subsets S ⊆ E which satisfy the following.
For any two distinct tuples (e_1,…, e_ℓ), (f_1,…, f_ℓ) ∈ S and every i ∈ [ℓ] it holds that e_i ≠ f_i.[For any k ∈ℕ let [k] = {1,2,…,k}.] Informally, the input for budgeted ℓ-dimensional matching is an ℓ-dimensional matching constraint (E, ℐ), profits and costs for the elements in E, and a budget. The objective is to find a feasible set which maximizes the profit subject to the budget constraint (see the formal definition below).
We now define an ℓ-matroid intersection.
A matroid is a set system (E, ℐ), where E is a finite set and ℐ ⊆ 2^E, such that
* ∅ ∈ ℐ.
* The hereditary property: for all A ∈ ℐ and B ⊆ A it holds that B ∈ ℐ.
* The exchange property: for all A, B ∈ ℐ where |A| > |B| there is e ∈ A ∖ B such that B ∪ {e} ∈ ℐ.
For a fixed ℓ≥ 1,
let (E, ℐ_1), (E, ℐ_2), …, (E, ℐ_ℓ) be ℓ matroids on the same ground set E. An ℓ-matroid intersection is a set system (E, ℐ) where
ℐ = ℐ_1 ∩ ℐ_2 ∩ … ∩ ℐ_ℓ.
Observe that ℓ-dimensional matching, where E ⊆ U_1 ×…× U_ℓ, is a special case of ℓ-matroid intersection: For each i ∈ [ℓ], define a partition matroid (E,_i), where any
feasible set S ∈_i may contain each element e ∈ U_i in the i-th coordinate at most once, i.e.,
_i = {S ⊆ E | ∀ (e_1,…, e_ℓ) ≠ (f_1,…, f_ℓ) ∈ S : e_i ≠ f_i}.
We give an illustration in Figure <ref>.
It can be shown that (E, ℐ_i) is a matroid for all i ∈ [ℓ] (see, e.g., <cit.>).
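As a small illustration of this reduction (our own sketch, not taken from the paper), the following code checks independence in the i-th partition matroid and feasibility for ℓ-dimensional matching; the function names are ours.

    # Sketch: l-dimensional matching as an intersection of l partition matroids.
    # Matroid I_i allows each value of U_i to appear in coordinate i at most once.

    def independent_in_partition_matroid(S, i):
        """Check S in I_i: no two tuples of S share the i-th coordinate."""
        seen = set()
        for tup in S:
            if tup[i] in seen:
                return False
            seen.add(tup[i])
        return True

    def is_l_dimensional_matching(S, l):
        """S is feasible iff it is independent in all l partition matroids."""
        return all(independent_in_partition_matroid(S, i) for i in range(l))

    # Example with E a subset of U1 x U2 x U3
    S_ok = {(1, 'a', 'x'), (2, 'b', 'y')}
    S_bad = {(1, 'a', 'x'), (2, 'a', 'y')}   # coordinate 2 reuses 'a'
    assert is_l_dimensional_matching(S_ok, 3)
    assert not is_l_dimensional_matching(S_bad, 3)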
The above constraint families can be generalized to the notion of an ℓ-matchoid. Informally, an ℓ-matchoid is an intersection of an unbounded number of matroids, where each element belongs to at most ℓ of the matroids. Formally,
for any ℓ≥ 1, an ℓ-matchoid on a set E is a collection ℳ = { M_i = (E_i, ℐ_i) }_{i ∈ [s]} of s ∈ℕ matroids, where for each i ∈ [s] it holds that E_i ⊆ E, and every e ∈ E belongs to at most ℓ sets in {E_1, …, E_s}, i.e., |{i∈ [s] | e∈ E_i}| ≤ ℓ.
A set S ⊆ E is feasible for ℳ if for all i ∈ [s] it holds that S ∩ E_i ∈ ℐ_i. Let ℱ(ℳ) = {S ⊆ E | ∀ i ∈ [s]: S ∩ E_i ∈ ℐ_i} be all feasible sets of ℳ. For all k ∈ℕ, we use ℱ_k ⊆ ℱ(ℳ) to denote all feasible sets of ℳ of cardinality at most k. Clearly, ℓ-matroid intersection (and also ℓ-dimensional matching) is the special case of an ℓ-matchoid where the s (= ℓ) matroids are defined over the same ground set E.
In the budgeted ℓ-matchoid (BM) problem, we are given an ℓ-matchoid along with a cost function, profit function, and a budget; our goal is to maximize the profit of a feasible set under the budget constraint. The budgeted ℓ-matroid intersection (BMI) and budgeted ℓ-dimensional matching (BDM) are the special cases where the is an ℓ-matroid intersection and ℓ-dimensional matching, respectively. Each of these problems generalizes the classic 0/1-knapsack, where all sets are feasible. Figure <ref> shows the relations between the problems. Henceforth, we focus on the BM problem.
Formally, a BM instance is a tuple I = (E, ℳ, c, p, B, k, ℓ), where E is a ground set of elements, ℳ is an ℓ-matchoid on E, c:E →ℕ_{>0} is a cost function, p:E →ℕ_{>0} is a profit function, B ∈ℕ_{>0} is a budget, and k,ℓ∈ℕ_{>0} are integer parameters.[We assume integral values for simplicity; our results can be generalized also for real values.]
In addition, each matroid (E_i,_i) ∈ has a membership oracle, which tests whether a given subset of E_i belongs to _i or not in a single query.
A solution of I is a feasible set S ∈ ℱ_k such that c(S) ≤ B. The objective is to find a solution S of I such that p(S) is maximized. We consider algorithms parameterized by k and ℓ (equivalently, by k+ℓ).
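To make the oracle model concrete, the following sketch (with names of our own choosing, not from the paper) represents each matroid by a membership oracle and checks whether a candidate set is a solution of a BM instance.

    # Sketch: an l-matchoid as a list of matroid membership oracles, and the
    # feasibility/budget check for a candidate BM solution.

    class MatroidOracle:
        def __init__(self, ground_set, is_independent):
            self.E_i = frozenset(ground_set)       # E_i, the matroid's ground set
            self._is_independent = is_independent  # callable: set -> bool

        def contains(self, S):
            """Single membership query: does S belong to I_i?"""
            return self._is_independent(S)

    def matchoid_feasible(S, matroids):
        """S is feasible for the matchoid if S ∩ E_i ∈ I_i for every matroid."""
        return all(M.contains(S & M.E_i) for M in matroids)

    def is_bm_solution(S, matroids, cost, budget, k):
        """A BM solution: feasible, at most k elements, within the budget."""
        return (len(S) <= k
                and matchoid_feasible(S, matroids)
                and sum(cost[e] for e in S) <= budget)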
We note that even with no budget constraint (i.e., c(E) < B), where the ℓ-matchoid is restricted to be a 3-dimensional matching, BM
is MAX SNP-complete <cit.>, i.e., it cannot admit a polynomial time approximation scheme (PTAS) unless P=NP. On the other hand, ℓ-dimensional matching and even the ℓ-matchoid problem (without a budget), parameterized by ℓ and the solution size k, are fixed parameter tractable (FPT) <cit.>.
This motivates our study of BM through the lens of parameterized complexity. We first observe that BM parameterized by the solution size is W[1]-hard, already with a degenerate matroid where
all sets are feasible (i.e., knapsack parameterized by the cardinality of the solution, k).
BM is W[1]-hard.
By the hardness result in Lemma <ref>, the best we can expect for BM in terms of parameterized algorithms is an FPT-approximation scheme (FPAS). An FPAS with parameterization κ for a maximization problem Π is an algorithm whose input is an instance I of Π and an ε>0, which produces a solution S of I of value (1-ε) · OPT(I) in time f(ε,κ(|I|)) · |I|^{O(1)} for some computable function f, where |I| denotes the encoding size of I and OPT(I) is the optimum value of I. We refer the reader to <cit.> for comprehensive surveys on parameterized approximation schemes and parameterized approximations in general. To derive an FPAS for BM, we use a small cardinality representative set,
which is a subset of elements containing the elements of an almost optimal solution for the instance. The representative set has a cardinality depending solely on ℓ, k, ε^{-1} and is constructed in FPT time. Formally,
Let I = (E, ℳ, c, p, B, k, ℓ) be a BM instance, 0<ε<1/2 and R ⊆ E.
Then R is a representative set of I and ε if there is a solution S of I such that the following holds.
* S ⊆ R.
* p(S) ≥ (1-2ε) · OPT(I).
We remark that Definition <ref> slightly resembles the definition of lossy kernel <cit.>. Nonetheless, the definition of lossy kernel does not apply to problems in the oracle model, including BM (see Section <ref> for further details).
The main technical contribution of this paper is the design of a small cardinality representative set for BM. Our representative set is constructed by forming a collection of f(ℓ, k, ε^{-1}) profit classes, where the elements of each profit class have roughly the same profit. Then, to construct a representative set for the instance, we define a residual problem for each profit class which enables us to circumvent the budget constraint. These residual problems can be solved efficiently using a construction of <cit.>. We show that by combining the solutions for the residual problems we obtain a representative set. In the following, we use Õ(n) for O(n · poly(log n)).
There is an algorithm that, given a BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, returns in time |I|^{O(1)} a representative set R ⊆ E of I and ε such that |R| = Õ(ℓ^{(k-1)·ℓ} · k^2 · ε^{-2}).
Given a small cardinality representative set, it is easy to derive an FPAS. Specifically, using an exhaustive enumeration over the representative set as stated in Lemma <ref>, we can construct the following FPAS for BM, which naturally applies also for BMI and BDM.
For any BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, there is an FPAS whose running time is |I|^{O(1)} · Õ(ℓ^{k^2·ℓ} · k^{O(k)} · ε^{-2k}).
To complement the above construction of a representative set, we show that even for the special case of an ℓ-dimensional matching constraint, it is unlikely that a representative set of significantly smaller cardinality can be constructed in polynomial time. The next result applies to the special case of BDM.
For any function f:ℕ→ℕ, and c_1,c_2 ∈ℝ such that c_2-c_1<0, there is no algorithm which finds for a
given BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2
a representative set of size O( f(ℓ) · k^{ℓ-c_1} · (1/ε)^{c_2} ) of I and ε in time |I|^{O(1)},
unless coNP⊆NP / poly.
In the proof of Lemma <ref>, we use a lower bound on the kernel size of the Perfect 3-Dimensional Matching (3-PDM) problem, due to Dell and Marx <cit.>.[We refer the reader e.g., to <cit.>, for the formal definition of kernels.] In our hardness result, we are able to efficiently construct a kernel for 3-PDM using a representative set for BM, already for the special case of 3-dimensional matching constraint, uniform costs, and uniform profits.
§.§ Related Work
While BM is studied here for the first time, special cases of the problem have been extensively studied from both parameterized and approximative points of view. For maximum weighted ℓ-matchoid without a budget constraint, Huang and Ward <cit.> obtained a deterministic FPT algorithm, as well as algorithms for a more general problem involving a coverage function objective rather than a linear objective. Their result differentiates the problem from the matroid ℓ-parity problem, which cannot have an FPT algorithm in general matroids <cit.>. Interestingly, when the matroids are given a linear representation, the matroid ℓ-parity problem admits a randomized FPT algorithm <cit.> and a deterministic FPT algorithm <cit.>. We use a construction of <cit.> as a building block of our algorithm.
The ℓ-dimensional k-matching problem (i.e., the version of the problem with no budget, parameterized by k and ℓ) has received considerable attention in previous studies. Goyal et al. <cit.> presented a deterministic FPT algorithm whose running time is O^*(2.851^{(ℓ-1)·k}) for the weighted version of ℓ-dimensional k-matching, where O^* is used to suppress polynomial factors in the running time. This result improves a previous result of <cit.>. For the unweighted version of ℓ-dimensional k-matching, the state of the art is a randomized FPT algorithm with running time O^*(2^{(ℓ-2)·k}) <cit.>, improving a previous result for the problem <cit.>.
Budgeted problems are well studied in approximation algorithms. As BM is a generalization of classic 0/1-knapsack, it is known to be NP-hard. However, while knapsack admits a fully PTAS (FPTAS) <cit.>, BM is unlikely to admit a PTAS, even for the special case of 3-dimensional matching with no budget constraint <cit.>. Consequently, there has been extensive research work to identify special cases of BM which admit approximation schemes.
For the budgeted matroid independent set (i.e., the special case of BM where the ℓ-matchoid consists of a single matroid), Doron-Arad et al. <cit.> developed an efficient PTAS (EPTAS) using the representative set based technique. This algorithm was later generalized in <cit.> to tackle budgeted matroid intersection and budgeted matching (both are special cases of BM where ℓ = 2), improving upon a result of Berger et al. <cit.>. We generalize some of the technical ideas of <cit.> to the setting of ℓ-matchoids and parameterized approximations.
Organization of the paper: Section <ref> describes our construction of a representative set. In Section <ref> we present our FPAS for BM. Section <ref> contains the proofs of the hardness results given in Lemma <ref> and in Lemma <ref>. In Section <ref> we present an auxiliary approximation algorithm for BM.
We conclude in Section <ref> with a summary and some directions for future work.
§ REPRESENTATIVE SET
In this section we construct a representative set for BM.
Our first step is to round the profits of a given instance, and to determine the low profit elements that can be discarded without incurring significant loss of profit.
We find a small cardinality representative set from which an almost optimal solution can be selected via enumeration yielding an FPAS (see Section <ref>).
We proceed to construct a representative set whose cardinality depends only on ε^{-1}, k, and ℓ. This requires the definition of profit classes, namely, a partition of the elements into groups, where the elements in each group have
similar profits. Constructing a representative set using this method requires an approximation of the optimum value of the input BM instance I. To this end, we use a 1/(2ℓ)-approximation α of the optimum value OPT(I), described below.
Given a BM instance I = (E, ℳ, c, p, B, k, ℓ),
there is an algorithm which
returns in time |I|^{O(1)} a value α such that OPT(I)/(2ℓ) ≤ α ≤ OPT(I).
The proof
of Lemma <ref> is given in Section <ref>. The proof utilizes a known approximation algorithm for the unbudgeted version of BM <cit.> which is then transformed into an approximation algorithm for BM using a technique of <cit.>.
The first step in designing the profit classes is to determine a set of profitable elements required for obtaining an almost optimal solution.
This set allows us to
construct only a small number of profit classes. We define the set of
profitable elements w.r.t. I, α, and ε as
H[I, α, ε] = { e ∈ E | ε·α/k < p(e) ≤ 2·ℓ·α }.
When clear from the context, we simply use H = H[I, α, ε].
Consider the non-profitable elements. The next lemma states that omitting these elements indeed has small effect on the profit of the solution set.
For every BM instance I = (E, ℳ, c, p, B, k, ℓ), OPT(I)/(2ℓ) ≤ α ≤ OPT(I), 0<ε<1/2, and S ∈ ℱ_k, it holds that p( S ∖ H[I, α, ε] ) ≤ ε · OPT(I).
We note that
p( S ∖ H[I, α, ε] ) ≤ k · (ε·α/k) = ε·α ≤ ε · OPT(I).
The first inequality holds since each element in S ∖ H[I, α, ε] has profit at most ε·α/k by (<ref>); in addition, since S ∈ ℱ_k, it follows that S contains at most k elements. The second inequality holds as α ≤ OPT(I).
Using Lemma <ref>, our representative set can be constructed exclusively from profitable elements.
We can now partition the profitable elements into a small number of profit classes, where each profit class r corresponds to a suitable range of profit values.
Specifically, let
D(I, ε) = {r ∈ℕ_{>0} | (1-ε)^{r-1} ≥ ε/(2·ℓ·k)},
and for brevity write D = D(I, ε).
For all r ∈ D and OPT(I)/(2ℓ) ≤ α ≤ OPT(I), define the r-profit class as
𝒦_r(α)
= {e ∈ E | p(e)/(2·ℓ·α) ∈ ( (1-ε)^r, (1-ε)^{r-1} ]}.
In words, each profit class r ∈ D contains profitable elements (and may contain some elements that are almost profitable, due to our 1/(2ℓ)-approximation of OPT(I)), where the profits of any two elements that belong to the r-profit class can differ by at most a multiplicative factor of (1-ε). We use the following simple upper bound on the number of profit classes.
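For illustration, the sketch below (using our own function names, under the notation above) computes the index r of the profit class containing a profitable element, and the number of classes |D(I,ε)|; floating-point boundary cases are ignored.

    import math

    # Sketch: profit-class bookkeeping for the representative-set construction.

    def profit_class_index(p_e, alpha, l, eps):
        """Return r with p_e/(2*l*alpha) in ((1-eps)^r, (1-eps)^(r-1)]."""
        x = p_e / (2.0 * l * alpha)
        assert 0.0 < x <= 1.0, "only profitable elements are classified"
        # log base (1-eps) reverses order because 1-eps < 1
        return max(math.floor(math.log(x, 1.0 - eps)) + 1, 1)

    def num_profit_classes(l, k, eps):
        """|D(I,eps)|: largest r with (1-eps)^(r-1) >= eps/(2*l*k)."""
        return math.floor(math.log(eps / (2.0 * l * k), 1.0 - eps)) + 1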
For every BM instance I and 0<ε<1/2 there are O( k·ℓ·ε^{-2} ) profit classes.
We note that
log_{1-ε}( ε/(2ℓ·k) ) ≤ ln(2ℓ·k/ε) / ( -ln(1-ε) ) ≤ 2ℓ·k·ε^{-1} / ε.
The second inequality follows from x < -ln(1-x), ∀ x>-1, x ≠ 0, and ln(y) < y, ∀ y>0.
By (<ref>)
the number of profit classes is bounded by
|D| ≤ log_{1-ε}( ε/(2ℓ·k) ) + 1 = O( k·ℓ·ε^{-2} ).
The last inequality follows from (<ref>).
Next, we define an exchange set for each profit class. This facilitates the construction of a representative set.
Intuitively, a subset of elements X forms an exchange set for a profit class 𝒦_r(α) if any feasible set Δ and element a ∈ (Δ ∩ 𝒦_r(α)) ∖ X can be replaced (while maintaining feasibility) by some element b ∈ (X ∩ 𝒦_r(α)) ∖ Δ such that the cost of b is upper bounded by the cost of a. Formally,
Let I = (E, ℳ, c, p, B, k, ℓ) be a BM instance, 0<ε<1/2, OPT(I)/(2ℓ) ≤ α ≤ OPT(I), r ∈ D(I, ε), and X ⊆ 𝒦_r(α). We say that X is an exchange set for I, ε, α, and r if:
* For all Δ ∈ ℱ_k and a ∈ (Δ ∩ 𝒦_r(α)) ∖ X there is b ∈ (𝒦_r(α) ∩ X) ∖ Δ satisfying
* c(b) ≤ c(a).
* Δ - a + b ∈ ℱ_k.
The key argument in this section is that if a set R ⊆ E satisfies that R ∩ 𝒦_r(α) is an exchange set for every r ∈ D, then R is a representative set. This allows us to construct a representative set as a union of disjoint exchange sets, one for each profit class. We give an illustration in Figure <ref>.
Let I = (E, ℳ, c, p, B, k, ℓ) be a BM instance, 0<ε<1/2, OPT(I)/(2ℓ) ≤ α ≤ OPT(I), and R ⊆ E. If for all r ∈ D = D(I, ε) it holds that R ∩ 𝒦_r(α) is an exchange set for I, ε, α, and r, then R is a representative set of I and ε.
For the proof of Lemma <ref>, we define a substitution of some feasible set G ∈ ℱ_k. We will use G later only as an optimal solution; however, we can state the following claims for a general G ∈ ℱ_k.
We require that a substitution preserves the number of profitable elements in G from each profit class, so a substitution guarantees a profit similar to the profit of G.
For G ∈ ℱ_k and Z_G ⊆ ⋃_{r ∈ D} 𝒦_r(α), we say that Z_G is a substitution of G if the following holds.
* Z_G ∈ ℱ_k.
* c(Z_G) ≤ c(G).
* For all r ∈ D it holds that |𝒦_r(α) ∩ Z_G| = |𝒦_r(α) ∩ G|.
Proof of Lemma <ref>:
We first show that every set G ∈ ℱ_k has a substitution which is a subset of R.
For any G ∈ ℱ_k there is a substitution Z_G of G such that Z_G ⊆ R.
Let G ∈ ℱ_k and let Z_G be a substitution of G such that |Z_G ∩ R| is maximal among all substitutions of G; formally, let 𝒮(G) be all substitutions of G and let
Z_G ∈ { Z ∈ 𝒮(G) | |Z ∩ R| = max_{Z' ∈ 𝒮(G)} |Z' ∩ R| }.
Since G ∩ ⋃_{r ∈ D} 𝒦_r(α)
is in particular a substitution of G, it follows that 𝒮(G) ≠ ∅; thus, Z_G is well defined. Assume towards a contradiction that there is a ∈ Z_G ∖ R; then, by Definition <ref> there is r ∈ D such that a ∈ 𝒦_r(α).
Because R ∩ 𝒦_r(α) is an exchange set for I, ε, α, and r, by Definition <ref> there is b ∈ (𝒦_r(α) ∩ R) ∖ Z_G such that c(b) ≤ c(a) and Z_G - a + b ∈ ℱ_k. Then, the properties of Definition <ref> are satisfied for Z_G - a + b by the following.
* Z_G - a + b ∈ ℱ_k by the definition of b.
* c(Z_G - a + b) ≤ c(Z_G) ≤ c(G) because c(b) ≤ c(a).
* For all r' ∈ D it holds that |𝒦_{r'}(α) ∩ (Z_G - a + b)| = |𝒦_{r'}(α) ∩ Z_G| = |𝒦_{r'}(α) ∩ G| because a, b ∈ 𝒦_r(α).
By the above and Definition <ref>, we have that Z_G - a + b is a substitution of G; that is, Z_G - a + b ∈ 𝒮(G). Moreover,
|R ∩ (Z_G -a+b)|>|R ∩ Z_G| = max_Z ∈𝒮(G) |Z ∩ R|.
The first inequality holds since a ∈ Z_G ∖ R and b ∈ R. Thus, we have found a substitution of G which contains more elements in R than Z_G ∈𝒮(G). A contradiction to the definition of Z_G as a substitution of G having a maximum number of elements in R. Hence,
Z_G ⊆ R, as required.
Let G be an optimal solution for I. We complete the proof of Lemma <ref> by showing that a substitution of G which is a subset of R yields a profit of at least (1-2ε) · OPT(I). Let H = H[I, α, ε] be the set of profitable elements w.r.t. I, α, and ε (as defined in (<ref>)). By Claim <ref>, as G ∈ ℱ_k, it has a substitution Z_G ⊆ R. Then,
p(Z_G) ≥ ∑_{r ∈ D} p(𝒦_r(α) ∩ Z_G)
≥ ∑_{r ∈ D s.t. 𝒦_r(α) ≠ ∅} |𝒦_r(α) ∩ Z_G| · min_{e ∈ 𝒦_r(α)} p(e)
≥ ∑_{r ∈ D s.t. 𝒦_r(α) ≠ ∅} |𝒦_r(α) ∩ G| · (1-ε) · max_{e ∈ 𝒦_r(α)} p(e)
≥ (1-ε) · p(G ∩ H).
The third inequality follows from (<ref>), and from Property <ref> in Definition <ref>. The last inequality holds since for every e ∈ H there is r ∈ D such that e ∈ 𝒦_r(α), by (<ref>) and (<ref>).
Therefore,
p(Z_G) ≥ (1-ε) · p(G ∩ H)
= (1-ε) · ( p(G) - p(G ∖ H) )
≥ (1-ε) · p(G) - p(G ∖ H)
≥ (1-ε) · p(G) - ε · OPT(I)
= (1-ε) · OPT(I) - ε · OPT(I)
= (1-2ε) · OPT(I).
The first inequality follows from (<ref>). The last inequality holds by Lemma <ref>. The second equality holds since G is an optimal solution for I. To conclude, by Properties <ref> and <ref> in Definition <ref>, it holds that Z_G ∈ ℱ_k and c(Z_G) ≤ c(G) ≤ B; thus, Z_G is a solution for I. Also, by (<ref>), it holds that p(Z_G) ≥ (1-2ε) · OPT(I), as required (see Definition <ref>).
By Lemma <ref>, our end goal of constructing a representative set is reduced to efficiently finding exchange sets for all profit classes. This can be achieved by the following result, which is a direct consequence of Theorem 3.6 in <cit.>.[The result of <cit.> refers to a maximization version of exchange sets; however, the same construction and proof hold for our exchange sets as well.]
Given a BM instance I = (E, ℳ, c, p, B, k, ℓ), 0<ε<1/2, OPT(I)/(2ℓ) ≤ α ≤ OPT(I), and r ∈ D(I, ε), there is an algorithm which
returns in time Õ(ℓ^{(k-1)·ℓ} · k) · |I|^{O(1)} an exchange set X for I, ε, α, and r, such that |X| = Õ(ℓ^{(k-1)·ℓ} · k).
Using Lemmas <ref> and <ref>, a representative set of I can be constructed as follows.
If the parameters ℓ and k are too high w.r.t. |I|, return the trivial representative set E in polynomial time. Otherwise, compute an approximation for OPT(I), and define the profit classes. Then, the representative set is constructed by finding an exchange set for each profit class. The pseudocode of the algorithm is given in Algorithm <ref>.
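A high-level sketch of this construction is given below; exchange_set and approximate_opt are placeholders standing in for the subroutine of Lemma <ref> and the 1/(2ℓ)-approximation of Section <ref>, the attribute names on the instance object are our own, and the size threshold only roughly mirrors the fallback step of Algorithm <ref>.

    import math

    # Sketch of the representative-set construction (one exchange set per
    # profit class), under the assumptions stated above.

    def representative_set(I, eps, exchange_set, approximate_opt):
        E, l, k = I.elements, I.l, I.k
        # Fallback: if the FPT-sized bound already exceeds |I|, the whole
        # ground set is a (trivial) representative set in polynomial time.
        if (l ** ((k - 1) * l)) * k ** 2 / eps ** 2 > I.encoding_size:
            return set(E)
        alpha = approximate_opt(I)          # OPT(I)/(2l) <= alpha <= OPT(I)
        num_classes = math.floor(math.log(eps / (2 * l * k), 1 - eps)) + 1
        R = set()
        for r in range(1, num_classes + 1):
            R |= exchange_set(I, eps, alpha, r)   # exchange set for class r
        return R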
Given a BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, Algorithm <ref> returns in time
|I|^{O(1)} a representative set R ⊆ E of I and ε such that |R| = Õ(ℓ^{(k-1)·ℓ} · k^2 · ε^{-2}).
Clearly, if ℓ^{(k-1)·ℓ} · k^2 · ε^{-2} > |I|, then by Step <ref> the algorithm runs in time |I|^{O(1)} and returns the trivial representative set E. Thus, we may assume below that
ℓ^{(k-1)·ℓ} · k^2 · ε^{-2} ≤ |I|. The running time of Step <ref> is |I|^{O(1)} by Lemma <ref>. Each iteration of the for loop in Step <ref> can be computed in time Õ(ℓ^{(k-1)·ℓ} · k) · |I|^{O(1)}, by Lemma <ref>. Hence, as we have |D| = |D(I, ε)| iterations of the for loop, the running time of the algorithm is bounded by
|D| · Õ(ℓ^{(k-1)·ℓ} · k) · |I|^{O(1)} ≤
(2ℓ·k·ε^{-2} + 1) · Õ(ℓ^{(k-1)·ℓ} · k)
· |I|^{O(1)}
= Õ(ℓ^{(k-1)·ℓ+1} · k^2 · ε^{-2}) · |I|^{O(1)}.
The first inequality follows from (<ref>) and (<ref>).
As in this case ℓ^{(k-1)·ℓ} · k^2 · ε^{-2} ≤ |I|,
we have the desired running time.
For the cardinality of R, note that by Lemma <ref> OPT(I) ≥ α ≥ OPT(I)/(2ℓ). Thus, by Lemma <ref>, for all r ∈ D the set computed for I, ε, α, and r is an exchange set of cardinality
Õ(ℓ^{(k-1)·ℓ} · k). Then,
|R| ≤ |D| · Õ(ℓ^{(k-1)·ℓ} · k) ≤
(2ℓ·k·ε^{-2} + 1) · Õ(ℓ^{(k-1)·ℓ} · k)
= Õ(ℓ^{(k-1)·ℓ+1} · k^2 · ε^{-2}).
The second inequality follows from (<ref>) and (<ref>).
To conclude, we show that R is a representative set. By Lemma <ref>, for all r ∈ D, the set computed for I, ε, α, and r is an exchange set for I, ε, α, and r. Therefore,
R ∩ 𝒦_r(α) is an exchange set for I, ε, α, and r, for all r ∈ D. Hence, by Lemma <ref>, R is a representative set of I and ε.
Proof of Lemma <ref>: The statement of the lemma follows from Lemma <ref>.
§ AN FPT APPROXIMATION SCHEME
In this section we use the representative set constructed by Algorithm <ref>
to obtain an FPAS for BM. For the discussion below, fix a BM instance I = (E, ℳ, c, p, B, k, ℓ) and an error parameter 0<ε<1/2. Given the representative set R of I and ε output by
Algorithm <ref>, we derive an FPAS by exhaustive enumeration over all solutions of I within R. The pseudocode of our FPAS is given in Algorithm <ref>.
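The enumeration itself can be sketched as follows (our own illustrative code, not the paper's pseudocode); feasibility is supplied as a black-box predicate standing in for the matchoid membership checks via the oracles.

    from itertools import combinations

    # Sketch of the enumeration step of the FPAS: brute force over all
    # subsets of the representative set R of size at most k, keeping the
    # most profitable set that is feasible and within budget.

    def fpas_enumerate(R, feasible, profit, cost, budget, k):
        best, best_profit = set(), 0
        for size in range(1, k + 1):
            for F in combinations(R, size):
                F = set(F)
                if feasible(F) and sum(cost[e] for e in F) <= budget:
                    p = sum(profit[e] for e in F)
                    if p > best_profit:
                        best, best_profit = F, p
        return best, best_profit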
Given a BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, Algorithm <ref> returns in time |I|^{O(1)} · Õ(ℓ^{k^2·ℓ} · k^{2k} · ε^{-2k}) a solution for I of profit at least (1-2ε) · OPT(I).
We can now prove our main result.
Proof of Lemma <ref>: The proof follows
from Lemma <ref> by using in Algorithm <ref> an error parameter
ε' = ε/2.
For the proof of Lemma <ref>, we use the next auxiliary lemmas.
Given a BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, Algorithm <ref> returns a solution for I of profit at least (1-2ε) · OPT(I).
By Lemma <ref>, the set R computed by Algorithm <ref> on input I and ε is a representative set of I and ε. Therefore, by Definition <ref>, there is a solution S for I such that S ⊆ R, and
p(S) ≥ (1-2ε) · OPT(I).
Since S is a solution for I, it follows that S ∈ ℱ_k and therefore |S| ≤ k. Thus,
there is an iteration of Step <ref> in which F = S, and therefore the set A returned by the algorithm satisfies p(A) ≥ p(S) ≥ (1-2ε) · OPT(I). Also, the set A returned by the algorithm must be a solution for I: if A = ∅ the claim trivially follows, since ∅ is a solution for I.
Otherwise, the value of A has been updated in Step <ref> of Algorithm <ref> to be some set F ⊆ R, but this step is reached only if F is a solution for I.
Given a BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, the running time of Algorithm <ref> is |I|^{O(1)} · Õ(ℓ^{k^2·ℓ} · k^{2k} · ε^{-2k}).
Let
W' = {F ⊆ R | F ∈ ℱ_k, c(F) ≤ B}
be the solutions considered in Step <ref> of Algorithm <ref>, and let
W = {F ⊆ R | |F| ≤ k}.
Observe that the number of iterations of Step <ref> of Algorithm <ref> is bounded by |W|, since W' ⊆ W and for each F ∈ W we can verify in polynomial time if F ∈ W'. Thus, it suffices to upper bound W.
By a simple counting argument, we have that
|W| ≤ (
|R|+1)^k
≤ Õ( (ℓ^{(k-1)·ℓ+1} · k^2 · ε^{-2})^k )
= Õ(ℓ^{k^2·ℓ} · k^{2k} · ε^{-2k}).
The first equality follows from Lemma <ref>.
Hence, by (<ref>), the number of iterations of the for loop in Step <ref> is bounded by Õ(ℓ^{k^2·ℓ} · k^{2k} · ε^{-2k}). In addition, the running time of each iteration is at most |I|^{O(1)}. Finally, the running time of the steps outside the for loop is |I|^{O(1)}, by
Lemma <ref>. Hence, the running time of Algorithm <ref> can be bounded by |I|^{O(1)} · Õ(ℓ^{k^2·ℓ} · k^{2k} · ε^{-2k}).
Proof of Lemma <ref>: The proof follows from Lemmas <ref> and <ref>.
§ HARDNESS RESULTS
In this section we prove Lemma <ref> and Lemma <ref>. In the proof of Lemma <ref>, we use a reduction from the k-subset sum (KSS) problem. The input for KSS is a set X = {x_1, …, x_n} of strictly positive integers and two positive integers T,k>0. We need to decide if there is a subset S ⊆ [n], |S| = k such that ∑_i ∈ S x_i = T, where the problem is parameterized by k. KSS is known to be W[1]-hard <cit.>.
Proof of Lemma <ref>: Let U be a KSS instance with the set of numbers E = [n], target value T, and parameter k. We define the following BM instance I = (E, ℳ, c, p, B, k, ℓ).
* ℳ is a 1-matchoid ℳ = {(E, ℐ)} such that ℐ = 2^E. That is, ℳ consists of a single uniform matroid whose independent sets are all possible subsets of E.
* For any i ∈ E = [n] define c(i) = p(i) = x_i+2 ·∑_j ∈ [n] x_j.
* Define the budget as B = T+2 k ·∑_j ∈ [n] x_j.
If there is a solution for U then there is a solution for I of profit B.
Let S ⊆ [n], |S| = k such that ∑_i ∈ S x_i = T.
Then,
c(S) = p(S) = ∑_i ∈ S( x_i+2 ·∑_j ∈ [n] x_j ) = T+|S| · 2 ·∑_j ∈ [n] x_j = T+2 k·∑_j ∈ [n] x_j = B.
By the above, and as S ∈ ℱ_k, S is also a solution for I of profit exactly B.
If there is a solution for I of profit at least B then there is a solution for U.
Let F be a solution for I of profit at least B. Then, p(F) = c(F) ≤ B, since F satisfies the budget constraint. As p(F) ≥ B, we conclude that
p(F) = c(F) = B.
We now show that F is also a solution for U. First, assume towards contradiction that |F| ≠ k. If |F|< k then
p(F) = ∑_i ∈ F x_i+|F| · 2 ·∑_j ∈ [n] x_j ≤∑_i ∈ F x_i+(k-1) · 2 ·∑_j ∈ [n] x_j ≤ 2 k·∑_j ∈ [n] x_j < B.
We reach a contradiction to (<ref>).
Since F is a solution for I, it holds that F ∈ ℱ_k; thus, |F| ≤ k. By the above, |F| = k. Therefore,
∑_i ∈ F x_i = c(F) - |F| · 2 ·∑_j ∈ [n] x_j = c(F) - 2 k ·∑_j ∈ [n] x_j = B - 2 k ·∑_j ∈ [n] x_j = T.
By Claims <ref> and <ref>, there is a solution for U if and only if there is a solution for I of profit at least B. Furthermore, the construction of I can be done in polynomial time in the encoding size of U. Hence, an FPT algorithm which finds an optimal solution for I can decide the instance U in FPT time.
As KSS is known to be W[1]-hard <cit.>, we conclude that BM is also W[1]-hard. In the proof of <Ref> we use a lower bound on the kernel size of
Perfect ℓ-Dimensional Matching (ℓ-PDM), due to Dell and Marx <cit.>. The input for the problem consists of the finite sets U_1, …, U_ℓ and E ⊆ U_1 ×…× U_ℓ. Also, we have an
ℓ-dimensional matching constraint (E, ℱ), to which we refer as the associated set system of the instance (i.e., ℱ contains all subsets S ⊆ E such that for any two distinct tuples (e_1,…, e_ℓ), (f_1,…, f_ℓ) ∈ S and every i ∈ [ℓ] it holds that e_i ≠ f_i). The instance is associated also with the parameter k = n/ℓ, where n = ∑_{j=1}^{ℓ} |U_j|. We refer to |E| as the number of tuples in the instance.
The objective is to find S ∈ ℱ such that |S| = k. Let J = (U_1, …, U_ℓ, E) denote an instance of ℓ-PDM.
We say J is a “yes” instance if such a set S exists; otherwise, J is a “no” instance. Observe that the parameter k is set such that if S ∈ ℱ and |S| = k then every element in U_1 ∪…∪ U_ℓ appears in exactly one of the tuples in S.
Let ℓ≥3 and ε>0. If coNP⊈NP/poly then ℓ-PDM does not have a kernel in which the number of tuples is O(k^{ℓ-ε}).
Proof of Lemma <ref>:
Assume coNP⊈NP/poly. Furthermore,
assume towards a contradiction that there is a function f:ℕ→ℕ, constants c_1, c_2, where c_2-c_1<0, and an algorithm 𝒜 that, given a BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2, finds in time |I|^{O(1)} a representative set of I and ε of size O( f(ℓ) · k^{ℓ-c_1} · (1/ε)^{c_2} ). We use 𝒜 to construct a kernel for 3-PDM.
Consider the following kernelization algorithm for 3-PDM. Let J = (U_1, U_2, U_3, E) be the 3-PDM input instance. Define n = |U_1|+|U_2|+|U_3|, ℓ = 3, and k = n/ℓ.
Furthermore, let (E, ℱ) be the set system associated with the instance, and let ℳ be an ℓ-matchoid representing the set system (E, ℱ).
Run 𝒜 on the BM instance I = (E, ℳ, c, p, B, k, ℓ) with ε = 1/(3k), where c(e) = p(e) = 1 for all e ∈ E and B = k. Let R ⊆ E be the output of 𝒜. Return the 3-PDM instance J' = (U_1, U_2, U_3, R).
Since 𝒜 runs in polynomial time, the above algorithm runs in polynomial time as well. Moreover, as k = n/3 and R ⊆ E, it follows that the returned instance can be encoded using O(k^4) bits. Let (R, ℱ') be the set system associated with J'.
Since R ⊆ E, it follows that ℱ' ⊆ ℱ. Hence, if there is S ∈ ℱ' such that |S| = k, then S ∈ ℱ as well. That is,
if J' is a “yes” instance, so is J.
For the other direction, assume that J is a “yes” instance. That is, there is S ∈ ℱ such that |S| = k. Then S is a solution for the BM instance I (observe that c(S) = |S| = k = B). Therefore, as R is a representative set of I and ε = 1/(3k), there is a solution T for I such that T ⊆ R, and
p(T) ≥ (1-2ε) · OPT(I) ≥ (1-2ε) · p(S)
= (1 - 2/(3k)) · p(S) = (1 - 2/(3k)) · k = k - 2/3.
Since the profits are integral, we have that |T| = p(T) ≥ k. Furthermore, |T| ≤ k (since T is a solution for I), and thus |T| = k. Since T ∈ ℱ (as T is a solution for I) and T ⊆ R, it trivially holds that T ∈ ℱ'. That is, T ∈ ℱ' and |T| = k. Hence, J' is a “yes” instance. We have shown that the above procedure is indeed a kernelization for 3-PDM.
Now, consider the size of R.
Since 𝒜 returns a representative set of size O( f(ℓ) · k^{ℓ-c_1} · (1/ε)^{c_2} ), it follows that
|R| = O( f(3) · k^{3-c_1} · (3k)^{c_2} ) = O( k^{3-c_1+c_2} ).
As c_2-c_1<0, we have a contradiction to <Ref>. Thus, for any function f:ℕ→ℕ and constants c_1, c_2 satisfying c_2-c_1<0, there is no algorithm which finds for a given BM instance I = (E, ℳ, c, p, B, k, ℓ) and 0<ε<1/2 a representative set of I and ε of size O( f(ℓ) · k^{ℓ-c_1} · (1/ε)^{c_2} )
in time |I|^O(1).
§ A POLYNOMIAL TIME 1/(2ℓ)-APPROXIMATION FOR BM
In this section we prove <Ref>. The proof combines an existing approximation algorithm for the unbudgeted version of BM <cit.> with the Lagrangian relaxation technique of <cit.>. As the results in <cit.> are presented in the context of ℓ-extendible set systems, we first define these systems and use a simple argument to show that such systems are generalizations of matchoids. We refer the reader to <cit.> for further details about ℓ-extendible systems.
Given a finite set E, ℱ ⊆ 2^E, and ℓ∈ℕ, we say that (E, ℱ) is an ℓ-extendible system if for every S ∈ ℱ and e ∈ E ∖ S there is T ⊆ S, where |T| ≤ ℓ, such that (S ∖ T) ∪ {e} ∈ ℱ.
The next lemma shows that an ℓ-matchoid is in fact an ℓ-extendible set system.
For any ℓ∈ℕ_{>0} and an ℓ-matchoid ℳ = { M_i = (E_i, ℐ_i) }_{i ∈ [s]} on a set E, it holds that (E, ℱ(ℳ)) is an ℓ-extendible set system.
Let S ∈ ℱ(ℳ) and e ∈ E ∖ S. As ℳ is an ℓ-matchoid, there is H ⊆ [s] of cardinality |H| ≤ ℓ such that for all i ∈ [s] ∖ H it holds that e ∉ E_i and for all i ∈ H it holds that e ∈ E_i. Since for all i ∈ H it holds that (E_i, ℐ_i) is a matroid, either (S ∩ E_i) ∪ {e} ∈ ℐ_i, or there is a_i ∈ S ∩ E_i such that ((S ∩ E_i) ∖ {a_i}) ∪ {e} ∈ ℐ_i (this follows by repeatedly adding elements from S ∩ E_i to {e} using the exchange property of the matroid (E_i, ℐ_i)). Let L = {i ∈ H | (S ∩ E_i) ∪ {e} ∉ ℐ_i}. Then, there are |L| elements T = {a_i}_{i ∈ L} such that for all i ∈ L it holds that ((S ∩ E_i) ∖ {a_i}) ∪ {e} ∈ ℐ_i and for all i ∈ H ∖ L it holds that (S ∩ E_i) ∪ {e} ∈ ℐ_i. Thus, it follows that (S ∖ T) ∪ {e} ∈ ℱ(ℳ) by the definition of a matchoid. Since |T| = |L| ≤ |H| ≤ ℓ, we have the statement of the lemma.
Proof of Lemma <ref>: Consider the BM problem with no budget constraint (equivalently, B>c(E)) that we call the maximum weight matchoid maximization (MWM) problem. By Lemma <ref>, MWM is a special case of the maximum weight ℓ-extendible system maximization problem, which admits 1/ℓ-approximation <cit.>.[The algorithm of <cit.> can be applied also in the
more general setting of ℓ-systems. For more details on such set systems, see, e.g., <cit.>.] Therefore,
using a technique of <cit.>, we have the following. There is an algorithm that, given some ε>0, returns a solution for the BM instance I of profit at least ( (1/ℓ)/(1/ℓ+1) - ε ) · OPT(I), and whose running time is |I|^{O(1)} · O(log(ε^{-1})). Now, we can set ε = (1/ℓ)/(1/ℓ+1) - 1/(2ℓ); then, the above algorithm has a running time |I|^{O(1)}, since ε^{-1} is polynomial in ℓ and ℓ ≤ |I|. Moreover, the algorithm returns a solution S for I, such that
OPT(I) ≥ p(S) ≥ ( (1/ℓ)/(1/ℓ+1) - ε ) · OPT(I) = OPT(I)/(2ℓ).
To conclude, we define the algorithm of the lemma to return α = p(S). By the above discussion, OPT(I) ≥ α ≥ OPT(I)/(2ℓ), and the running time of the algorithm is
|I|^O(1).
§ DISCUSSION
In this paper we present an FPT-approximation scheme (FPAS) for the budgeted ℓ-matchoid problem (BM). As special cases, this yields FPAS for
the budgeted ℓ-dimensional matching problem (BDM) and the budgeted ℓ-matroid intersection problem (BMI). While the unbudgeted version of BM has been studied earlier from parameterized viewpoint, the budgeted version is studied here for the first time.
We show that BM parameterized by the solution size is W[1]-hard already with a degenerate matroid constraint (Lemma <ref>); thus, an exact FPT time algorithm is unlikely to exist. Furthermore, the special case of the unbudgeted ℓ-dimensional matching problem is APX-hard already for ℓ=3, implying that
a PTAS for this problem is also unlikely to exist.
These hardness results motivated the development of an FPT-approximation scheme for BM.
Our FPAS relies on the notion of representative set - a small cardinality subset of the ground set of the original instance which preserves the optimum value up to a small factor. We note that representative sets are not lossy kernels <cit.> as BM is defined in an oracle model; thus, the definitions of kernels or lossy kernels do not apply to our problem. Nevertheless, for some variants of BM in which the input is given explicitly (for instance, this is possible for BDM) our construction of representative sets can be used to obtain an approximate kernelization scheme.
Our results also include a lower bound on the minimum possible size of a representative set for BM which can be computed in polynomial time (<Ref>). The lower bound is based on the special case of the budgeted ℓ-dimensional matching problem (BDM). We note that there is a significant gap between the size of the representative sets found in this paper and the lower bound. This suggests the following questions for future work.
* Is there a representative set for the special case of BDM whose size matches the lower bound given in <Ref>?
* Can the generic structure of ℓ-matchoids be used to derive an improved lower bound on the size of a representative set for general BM instances?
The budgeted ℓ-matchoid problem can be naturally generalized to the d-budgeted ℓ-matchoid problem (d-BM). In the d-budgeted version, both the costs and the budget are replaced by d-dimensional vectors, for some constant d≥ 2.
A subset of elements is feasible if it is an independent set of the ℓ-matchoid, and the total cost of the elements in each dimension is bounded by the budget in this dimension. The problem is a generalization of the d-dimensional knapsack problem (d-KP), the special case of d-BM in which the feasible sets of the matchoid are all subsets of E. A PTAS for d-KP was first given in <cit.>, and the existence of an efficient
polynomial time approximation scheme was ruled out in <cit.>.
PTASs for the special cases of d-BM in which the matchoid is a single matroid, a matroid intersection, or a matching constraint were given in <cit.>.
It is likely that the lower bound in <cit.> can be used also to rule out the existence of an FPAS for d-BM. However, the question whether d-BM admits a
(1-ε)-approximation in time O( f(k+ℓ) · n^{g(ε)} ), for some functions f and g, remains open.
|
http://arxiv.org/abs/2307.04012v1 | 20230708164551 | Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning | [
"Alice E. A. Allen",
"Nicholas Lubbers",
"Sakib Matin",
"Justin Smith",
"Richard Messerly",
"Sergei Tretiak",
"Kipton Barros"
] | physics.chem-ph | [
"physics.chem-ph",
"physics.comp-ph"
] |
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Nvidia Corporation, Santa Clara, CA 9505, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States
Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning
The development of machine learning models has led to an abundance of datasets containing quantum mechanical (QM) calculations for molecular and material systems. However, traditional training methods for machine learning models are unable to leverage the plethora of data available as they require that each dataset be generated using the same QM method. Taking machine learning interatomic potentials (MLIPs) as an example, we show that meta-learning techniques, a recent advancement from the machine learning community, can be used to fit multiple levels of QM theory in the same training process. Meta-learning changes the training procedure to learn a representation that can be easily re-trained to new tasks with small amounts of data. We then demonstrate that meta-learning enables simultaneously training to multiple large organic molecule datasets. As a proof of concept, we examine the performance of a MLIP refit to a small drug-like molecule and show that pre-training potentials to multiple levels of theory with meta-learning improves performance. This difference in performance can be seen both in the reduced error and in the improved smoothness of the potential energy surface produced. We therefore show that meta-learning can utilize existing datasets with inconsistent QM levels of theory to produce models that are better at specializing to new datasets. This opens new routes for creating pre-trained, foundational models for interatomic potentials.
Kipton Barros
August 12, 2023
===================
§ INTRODUCTION
Machine learning is fundamentally changing and expanding our capabilities for modeling chemical and materials systems <cit.>. A growing array of properties have been successfully predicted with machine learning models from materials' band gaps and formation energies to molecular energies and bond orders <cit.>. The development of machine learning models for various applications has involved the creation of a large number of datasets containing quantum-mechanical calculations at different fidelities (levels of theory) <cit.>. However, incorporating this multi-fidelity information into machine learning models remains challenging. In this work, we show that multiple datasets can be used to fit a machine learning model, even if the datasets were calculated with many varying QM levels of theory. To overcome this challenge, we incorporate meta-learning techniques into the training process and subsequently demonstrate improvements in accuracy for multiple applications. The aim of meta-learning is to use a wide collection of data to train a machine learning model that can then be easily re-trained to specialized tasks and we demonstrate the applicability of the meta-learning method to MLIPs.
In the landscape of broader efforts to integrate machine learning with molecular and materials modelling, particular attention has been paid to MLIPs <cit.>. Accurate atomistic simulations rely on interatomic potentials that closely recreate the interactions present between atoms and molecules <cit.>. Recreating these interactions involves a trade-off between accuracy and computational cost, with quantum mechanical techniques offering highly accurate simulations whilst classical force fields are fast and capable of modelling much larger systems over long timescales <cit.>. Within the last decade, MLIPs have increasingly been seen as a method that could provide a model that is both fast and accurate <cit.>. However, the development of MLIPs that are transferable to unseen organic molecules requires datasets that cover a large fraction of chemical space. This requirement has led to the production of numerous datasets <cit.>. These datasets contain the quantum mechanical (QM) energies and forces of millions of structures spanning large regions of chemical space. However, the QM methods used to calculate the energies and forces vary considerably. As different QM methods result in different potential energy surfaces, this inconsistency in QM techniques limits the extent to which datasets can be used together to fit potentials.
Numerous organic molecule datasets have been created for training MLIPs <cit.>. However, a consensus on the best QM techniques to employ to create these datasets has never been reached as a compromise between accuracy and computational cost must always be considered when performing QM calculations. This lack of consensus has led to a variety of different software, methods, basis sets and exchange-correlation functionals being used. For example, the QM7-x and ANI-1x datasets both contain energies and forces for millions of small organic molecules. However, QM7-x was calculated using the PBE0 exchange-correlation functional with many body dispersion whilst ANI-1x was calculated with the ωB97x functional and 6-31G* basis set <cit.> and does not include dispersion effects. Therefore, these two datasets describe similar, but slightly different potential energy surfaces. If both datasets were joined together to train a potential then problems would likely arise as contradictory information is present. For example, identical structures at different levels of theory can have different energy and forces. Whilst datasets from different sources have been fit together without further refinement <cit.>, this approach does not account for differences in the interactions described. Techniques exist in the machine learning literature to address the difference in the potential energy surface.
Previous work on fitting MLIPs to multiple datasets is limited. In Ref. , a transferable molecular potential was first trained to ∼ 5 million density functional theory (DFT) training points before being refit, with frozen parameters, to 0.5 million CCSD(T)* energies. This technique, known as transfer learning has been used in several works <cit.>. The advantage of using transfer learning for training MLIPs is that it requires fewer calculations at a higher, and more expensive, level of theory. However, this kind of transfer learning technique, freezing neural network (NN) parameters, is limited to just two datasets. If we want to use multiple existing datasets, and expand the size and variety of training data, then new methods must be found.
Fortunately, this problem is being explored in a branch of machine learning research known as meta-learning <cit.>. Meta-learning seeks to build a model that, although not specialized to any particular task, can be quickly re-trained to many new tasks - where a task is a specific learning problem. Furthermore, this retraining can be effective even if the amount of new data is limited <cit.>.
For transferable MLIPs, the concept of tasks naturally lends itself to quantum mechanical datasets calculated with different methods. By using meta-learning techniques, we will show how information from multiple levels of theory can be incorporated together. We begin by investigating training data with multiple levels of theory for an individual aspirin molecule and for the QM9 dataset (which contains over 100,000 molecules in their equilibrium configuration). With these systems, the problems associated with naively combining datasets together are seen and the benefits of meta-learning are clearly observed in the test set errors. We then move on to combining several large molecule datasets to pre-train an MLIP. Combining large organic datasets to fit MLIPs has never previously been attempted. Subsets, chosen using active learning, of six existing datasets (ANI-1x, GEOM, QMugs, QM7-x, Transition-1x and the QM9 dataset from Ref. ) were used to fit an adaptable potential using meta-learning – see Fig. <ref> for a visualization of the space the datasets cover <cit.>. Figure <ref> demonstrates the increase in chemical space possible when multiple datasets are combined together. The benefits of pre-training are then shown by retraining to the 3BPA molecule and testing various properties. These tests show that pre-training models using meta-learning produces a more accurate and smoother potential. The benefits of pre-training include enhanced accuracy and generalization capabilities in modeling interatomic potentials.
Training machine learning models on large amounts of data before re-training to a specific task is related to the concept of foundational models <cit.>. This concept has been used to create large language models, e.g., GPT-4, which have been pre-trained on extremely large datasets before being fine-tuned to specific tasks, e.g., ChatGPT, which is fine-tuned for conversational usage <cit.>. Creating foundational models allows a wide range of information to be encoded before specialisation. With meta-learning techniques, we can now pre-train interatomic potentials to numerous large datasets and this is a step towards foundational models for MLIPs – MLIPs that could be quickly re-trained to diverse molecular systems.
The number of QM datasets has grown rapidly over the last few years. However, a major bottleneck in exploiting this information has been the absence of methods that can effectively combine all of this information. In this work, we have overcome this limitation by exploiting techniques which enable the incorporation of datasets with different fidelities. Whilst we focus on MLIPs, these techniques are applicable to the wide range of predictive models that exist for material and molecular property prediction. By showing how meta-learning can be applied, we aim to encourage researchers to fully utilize the vast amount of existing data that the scientific community has already collected.
§ METHODS
§.§ Meta-Learning Algorithm
Meta-learning is an area of machine learning concerned with improving the learning process to produce models that can easily adapt to new problems <cit.>. A key component of meta-learning is the concept of different `tasks'. Tasks are datasets with similar properties but slight differences. For example, if we were interested in animal classification of a cat and a dog, a similar task might be to classify a lion and a bear. The task is not the same but we would expect fundamental similarities in the model needed to perform the classification. By using a meta-learning algorithm to learn multiple different tasks, less data will be required when a new learning problem is introduced.
The objective of meta-learning algorithms is to train a model that can generalize more easily to new data<cit.>. We will use meta-learning to fit multiple different QM datasets with slightly different properties. To our knowledge, meta-learning for MLIPs has not been previously carried out, although it has been used in other areas of science <cit.>.
The meta-learning algorithm we have chosen to fit multiple datasets for MLIPs is called Reptile <cit.>. Reptile works by repeatedly sampling a task (a dataset), performing a limited number of optimization steps on the task and then updating the weights of the machine learning model towards the new weights. Reptile was chosen over other meta-learning algorithms such as MAML <cit.> as Reptile is simpler to implement and therefore more likely to be adopted by the wider community. A comparison of methods such as MAML for interatomic potentials will therefore be left to future work.
Reptile is described in Algorithm <ref> with a visual illustration also given. The algorithm works by separating the training data into distinct learning problems (tasks). An individual task is selected and multiple optimization steps are performed. The parameters of the model are then updated. A new task is then selected and the procedure is repeated multiple times. This moves the model to a region of parameter space where it can readily move between the different datasets present.
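As a rough illustration of how this outer loop can be set up in practice, the following PyTorch-style sketch samples one dataset (task) per meta-iteration, takes k inner optimization steps on a copy of the model, and then moves the meta-parameters a fraction ϵ of the way toward the adapted parameters. The function and variable names are ours, and the loss, optimizer and data handling are simplified placeholders rather than the actual training code used in this work.

    import copy
    import random
    import torch

    # Schematic Reptile outer loop, with one "task" per QM dataset/level of theory.

    def reptile_pretrain(model, task_loaders, n_meta_steps, k, epsilon, lr=1e-3):
        for _ in range(n_meta_steps):
            task = random.choice(task_loaders)       # sample one dataset (task)
            fast_model = copy.deepcopy(model)        # copy of current parameters
            opt = torch.optim.Adam(fast_model.parameters(), lr=lr)
            for _, (batch, target) in zip(range(k), task):
                opt.zero_grad()
                loss = torch.nn.functional.mse_loss(fast_model(batch), target)
                loss.backward()
                opt.step()                           # k inner optimization steps
            # Reptile update: move meta-parameters toward the adapted parameters
            with torch.no_grad():
                for p, p_task in zip(model.parameters(), fast_model.parameters()):
                    p += epsilon * (p_task - p)
        return model

For k = 1, a single inner step followed by the parameter update is equivalent (up to the learning rates) to stochastic gradient descent on the pooled data, which is why k = 1 serves as the joint-training baseline throughout this work.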
Throughout this work, the k=1 result is used as comparison point. This is because when k=1 the algorithm becomes equivalent to stochastic gradient descent on the expected loss over all the training tasks <cit.>. This is referred to as joint training in Ref. At k=1, the algorithm is not expected to account for differences in the QM theory but still uses all the information present from the datasets.
§.§ Interatomic Potential
In this work, we have used the NN architecture implemented in torchANI with the same structure as the ANI-1x model <cit.>. However, the meta-learning techniques described are not specific to this form of model and there is no reason that they could not be applied to other machine learning models that employ similar iterative solvers.
The hyperparameters used for the ANI potential are the same as those used for previous training to the ANI-1x and ANI-1ccx datasets, see Ref. for more details.
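For concreteness, a generic per-batch loss of the kind commonly used when fitting energies and forces is sketched below; this is an illustrative stand-in rather than the actual torchANI training script, the weighting w_f is an assumed placeholder, and the model call signature is assumed to follow the torchANI convention of returning predicted energies from (species, coordinates).

    import torch

    # Sketch of a combined energy + force loss for one batch; forces are the
    # negative gradient of the predicted energy w.r.t. atomic coordinates.

    def energy_force_loss(model, species, coords, e_ref, f_ref, w_f=0.1):
        coords = coords.clone().requires_grad_(True)
        e_pred = model((species, coords)).energies      # assumed torchANI-like call
        f_pred = -torch.autograd.grad(e_pred.sum(), coords, create_graph=True)[0]
        loss_e = torch.nn.functional.mse_loss(e_pred, e_ref)
        loss_f = torch.nn.functional.mse_loss(f_pred, f_ref)
        return loss_e + w_f * loss_f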
§.§ Datasets
§.§.§ Aspirin
Aspirin structures were produced by molecular dynamics simulations at 300K, 600K and 900K. Density Functional based Tight Binding (DFTB) was used to perform the MD simulations and a total of 400 structures were created for each temperature. QM calculations of the energies and forces were then performed on these structures with three levels of theory: DFT with the ωB97x exchange-correlation functional and 6-31G* basis set, DFT with the Becke, 3-parameter, Lee–Yang–Parr (B3LYP) exchange-correlation functional and def2-TZVP basis set, and Hartree-Fock with the def2-SVP basis set for 300K, 600K and 900K respectively. These datasets were used to pre-train a molecular potential. The pre-trained potential was then refit to a new dataset of MD configurations at the Møller–Plesset (MP2) level of theory with the def2-SVP basis set (a more accurate level of theory). The training dataset for refitting used 400 MD configurations sampled at 300K whilst the test set contained structures at 300K, 600K and 900K. A batch size of 8 was used for training.
§.§.§ QM9
The QM9 dataset contains over 100,000 equilibrium structures for small organic molecules with up to 9 heavy atoms <cit.>. In Ref. , the QM9 dataset was recalculated with 76 different exchange-correlation functionals and 3 basis sets <cit.>.
§.§.§ Multiple Organic Molecules
Seven separate datasets were chosen to fit an organic molecule potential that could be easily re-trained to new data. The seven datasets used for meta-learning were chosen to cover both diverse regions of chemical space and multiple levels of theory – including the accurate recreation of dispersion effects. The chemical space covered included reactive paths and biologically and pharmacologically relevant structures. Whilst ANI-1x does cover a large number of conformations for organic molecules, it has limitations. This is demonstrated by Fig. <ref> and Fig. S1. Figure <ref> demonstrates how the additional datasets increase the size of the molecules and range of energies included. The E_0 energy is calculated using linear fitting and then subtracted from each dataset. The minimum energy for each dataset is then shifted to zero. Whilst it is not covered in this work as we use the ANI potential, including larger molecules in datasets may be increasingly important for newer generations of interatomic potentials that include message passing and describe longer length scales <cit.>. Figure S1 shows the distribution of uncertainty for the ANI-1x potential across the dataset space. Whilst ANI-1x dz, ANI-1x tz, GEOM and QMugs have similar probability distributions, QM7-x and Transition-1x contain larger uncertainties. Transition-1x contains reactive structures that are not contained in the original dataset and therefore higher uncertainties are expected. For QM7-x, there are also higher uncertainties and this may be due to the different sampling techniques used.
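As an illustration of the E_0 removal step, the sketch below solves a least-squares fit of per-element reference energies from element counts and subtracts them before shifting the minimum to zero; the function name and toy numbers are our own and do not correspond to any dataset used here.

    import numpy as np

    # Sketch: linear fit of per-element reference energies E_0.
    # counts[i, j] = number of atoms of element j in structure i.

    def subtract_atomic_reference(counts, energies):
        e0, *_ = np.linalg.lstsq(counts, energies, rcond=None)
        residual = energies - counts @ e0      # energy with atomic references removed
        return residual - residual.min()       # shift the dataset minimum to zero

    # Toy example: two elements (H, C), three structures
    counts = np.array([[4.0, 1.0], [6.0, 2.0], [8.0, 3.0]])
    energies = np.array([-40.5, -79.8, -119.2])
    shifted = subtract_atomic_reference(counts, energies)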
A property that is not shown in Table 1 is the software used for the DFT calculations. Even when the same level of theory is used, we can expect different software to give slightly different results. This will cause further discrepancies between the datasets as a variety of codes are employed. For example, although Transition-1x and ANI-1x are calculated at the same level of theory, Transition-1x is calculated with the ORCA program whilst ANI-1x is calculated with Gaussian <cit.>.
The individual description and justification for including each dataset used is as follows:
* QM9 - This dataset contains a diverse range of 76 functionals and 3 basis sets for small equilibrium organic molecules <cit.>.
* ANI-1x - This is a large dataset of small (up to 8 heavy atoms) organic molecules generated with active learning methods <cit.>.
* QMugs - This dataset includes the largest molecules with up to 100 heavy atoms. It specializes in including drug-like molecules <cit.>
* GEOM - This is the largest dataset and contains both large molecules and drug-like molecules <cit.>.
* QM7-x - This is also a large dataset of small (up to 7 heavy atoms) organic molecules but has dispersion accurately described with many-body dispersion <cit.>
* Transition-1x - This datasets includes minimum energy paths for 12,000 reactions <cit.>.
* ANI-1ccx - This dataset contains coupled cluster level theory calculations for a subset of the ANI-1x dataset <cit.>.
Other datasets considered for inclusion include SPICE, PubChemQC-PM6 and Tensormol <cit.>. However, with the existing datasets a sufficient representation of chemical space is covered. It is also worth noting that retraining to recreate the specific properties of the excluded datasets would also be quickly possible with the meta-learning potential.
§.§ Meta-learning Hyperparameter Optimization
There are three parameters in the Reptile algorithm. These control the number of steps (k) taken at each optimization step, how the parameters are updated (ϵ) from the task's individual NN parameters, and the maximum number of epochs used for retraining. The number of epochs was investigated to see whether restricting the training improved accuracy by ensuring the potential remained close to the meta-learned potential, or if longer retraining improved results. For a detailed discussion of the hyperparameters chosen when fitting to the seven separate datasets, see Section S1.2. The ϵ value used throughout this work is ϵ=1 whilst the k value is changed depending on the problem. The maximum number of epochs used for retraining for the meta-learning algorithm with k>1 is restricted to 150 epochs.
§.§ Stages of Fitting for the Organic Molecule datasets
In the first iteration, 100,000 structures were taken randomly from the ANI-1x, QMugs, GEOM, QM7-x and Transition-1x datasets. For QM9, 10,000 structures were used for each level of theory. This is restricted as 276 levels of theory exist, and each theory level samples different structures in the QM9 dataset. After the first iteration, the highest error structures were added to the next iteration <cit.>. The cutoffs used for adding structures are described in SI 1.6. This process was repeated 3 times. A diagram of the process is shown in Fig. S3.
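A schematic of this selection loop is sketched below; the cutoffs, error function, and data handling are placeholders of our own standing in for the procedure described in SI 1.6.

    # Sketch: grow each dataset's training pool by adding structures whose
    # prediction error exceeds that dataset's cutoff.

    def grow_training_pools(model, datasets, current_pools, cutoffs, error_fn):
        new_pools = {}
        for name, data in datasets.items():
            pool = set(current_pools.get(name, ()))
            for idx, structure in enumerate(data):
                if idx in pool:
                    continue
                if error_fn(model, structure) > cutoffs[name]:
                    pool.add(idx)              # add highest-error structures
            new_pools[name] = pool
        return new_pools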
§ RESULTS
§.§ A Simple Case Study on Aspirin
As the initial test case we investigate the performance of meta-learning on a dataset containing a single aspirin molecule. Aspirin structures were produced by molecular dynamics simulations at 300K, 600K and 900K. The QM energies and forces were then calculated at three different levels of theory: two distinct DFT functionals, and Hartree-Fock. This created three different datasets, with each temperature corresponding to a different level of theory. These three datasets were used to pre-train a molecular potential to the energies and forces of 1,200 structures. The pre-trained potential was then refit to a new dataset of 400 MD configurations at the MP2 level of theory from the 300K simulation.
The change in the RMSE error for the forces is shown with the value of k used in the meta-learning algorithm in Fig. <ref>. The k parameter controls the number of steps taken towards each dataset. As k is increased the speed of the algorithm also increases and this is an additional consideration in choosing the optimal value. In the limit of k →∞ the algorithm would correspond to iterative training to each dataset and then transfer learning to a new task. However, while this may work for small problems, this approach is impractical for large datasets.
Figure <ref> shows that as the k parameter is increased the error in the test set decreases with the minimum error at around k=400. There is therefore an improvement in test set error in comparison to both no pre-training (5.35 ± 0.41 kcal/mol/ Å) and k=1 (3.38 ± 0.16 kcal/mol/ Å). Note that k=1 effectively corresponds to simultaneous training to all tasks. Therefore, when we attempt to combine multiple datasets at different levels of theory an improvement in performance can be seen when meta-learning is incorporated into the training process.
§.§ Meta-learning many levels of theory using QM9
Next, we move onto the QM9 dataset that contains multiple different small organic molecules in their equilibrium structures. The QM9 dataset has been calculated at 228 different levels of theory and therefore provides an ideal dataset for analysing meta-learning techniques. We can use this dataset to test whether meta-learning can develop a potential which can be refit to a new level of theory encountered for the QM9 dataset with less data. In order to do this, a subset of the QM9 dataset was used to train a potential to 10,000 molecules, 50 different exchange-correlation functionals and three different basis set. The potential was then refit to a new exchange-correlation functional, that had not been previously encountered, and the performance of this new model was assessed and compared to no pre-training and k=1 meta-learning.
The test set error for the meta-learning potential refit to a new level of theory in the QM9 dataset is shown in Fig. <ref>. Pre-training the potential greatly improves the test set error for this case. In Fig. S9 a comparison between meta-learning and k=1 is shown, and we see that k=1 does not perform as well as k=10. This is because k=1 does not account for the discrepancies between the different levels of theory present. These results show that even when the number of levels of theory is relatively large, at 150, and multiple molecules are present, meta-learning improves test set error over k=1.
§.§ Making the most of scarce data at CCSD(T) level
We will now move to the datasets used to train transferable interatomic potentials. As a starting example, we will look at pre-training to the multiple levels of theory (ωB97x/6-31G* and ωB97x/def2-TZVPP) contained in the ANI-1x dataset <cit.>. We will then retrain to the ANI-1ccx dataset <cit.>. Figure <ref> shows the distribution in error when pre-training to multiple levels of theory with meta-learning and k=1. The RMSE is 3.30 ± 0.10 kcal/mol and 2.39 ± 0.00 kcal/mol for k=1 and meta-learning respectively. Therefore, we can again see that meta-learning with a higher k value improves results compared to k=1. The comparative results for direct training to ωB97x/6-31G* and ωB97x/def2-TZVPP and then transfer learning to CCSD(T) are 2.20 ± 0.01 kcal/mol and 2.09 ± 0.02 kcal/mol respectively. Therefore, in this case fitting to multiple datasets does not improve results over fitting to just one. This is in part because both datasets contain the same structures and cover the same chemical and configurational space. The potential trained to multiple organic datasets was also refit to the CCSD(T) dataset and the benefits of meta-learning over k=1 were also seen, with errors of 2.89± and 3.32± respectively. However, this is notably higher than training to the ANI-1x dataset alone. The CCSD(T) dataset is a subset of the ANI-1x dataset and contains identical structures. For these cases, adding additional data in other areas of chemical space may not improve results.
§.§ Training to multiple transferable organic molecule datasets
Numerous datasets have been created that contain quantum mechanical calculations for organic molecules. However, as these datasets use different levels of theory and software, combining the information from different datasets requires advanced training techniques. By using meta-learning, a pre-trained model was created that uses information from seven different datasets. This is the first instance, to our knowledge, of combining information from multiple organic molecule datasets in this manner.
We have already seen that meta-learning can improve results compared to k=1 when multiple datasets are used. We will now use the pre-trained model to explore the benefits of pre-training with meta-learning in comparison to no pre-training and k=1 when retraining to a single molecular system. The pre-trained model was re-trained to the 3BPA dataset taken from Ref. , and various properties were explored <cit.>.
The first properties we will analyze are the energy and force RMSEs. The force errors for a dataset taken from MD at 1200K are shown in Fig. <ref>, with the energy and force learning curves for datasets at 300K, 600K and 1200K given in Fig. S4. From these graphs, the improved performance of pre-training using the meta-learning approach (with three passes through the dataset), relative to both k=1 and no pre-training, can be seen for energies and forces. Therefore, just by adapting the training scheme, with no change in the model architecture or the dataset itself, consistent improvements in accuracy can be seen with meta-learning. The importance of the training method used has previously been seen in Ref. . Here we see how it can improve performance for fitting multiple datasets together. In comparison to when the ANI-1x model is used for pre-training, meta-learning performs slightly better on force errors but slightly worse for energy predictions. Given that the ANI-1x model is fit to the same level of theory as the 3BPA dataset, the performance of the meta-learning potential is encouraging.
However, it is known that RMSE errors alone are not enough to verify the performance of a potential <cit.>. We will therefore examine additional properties. The 3BPA molecule has three central dihedral angles, which are illustrated in Fig. <ref>. The energy scans along these dihedral angles are shown in Fig. <ref>, with the model refit to the energies and forces of just 62 3BPA conformations. When no pre-training is used, the surface at β=120 significantly over-estimates the high energy point and lacks smoothness. A similar shape is seen for the k=1 potential. However, when meta-learning is used for pre-training the surface remains noticeably smoother with significantly less over-prediction. When k=1 is used, multiple different potential energy surfaces are combined together in a nonphysical way, which destroys the smoothness of the underlying potential. The error in the gradient of the 2D energy surface is shown in Fig. <ref> b) and emphasizes this difference in smoothness. When meta-learning is used, the contradiction in the potential energy surface described above is corrected, resulting in a smoother model. When no pre-training or k=1 is used, an additional problem can occur, with the high energy regions at α=0 failing to be recreated for the β=180 and β=150 scans respectively. In contrast, the model pre-trained with meta-learning correctly recreates this behaviour. The results for ANI-1x pre-training are given in Fig. S6.
One advantage of pre-training with multiple datasets over ANI-1x or QM7-x is that reactive systems can be added that are not contained in ANI-1x. To test if this information has been effectively passed to the meta-learning potential, hydrogen bond dissociation for the 3BPA molecule was performed. There is no reactive information contained within the 3BPA training set, and so this test relies entirely on the information contained in the pre-training.
Figure <ref> shows the change in energy as a hydrogen molecule is removed from the 3BPA. The potential pre-trained with meta-learning recreates the smooth dissociation curve expected. In contrast, when no pre-training, k=1 or ANI-1x is used, the curve lacks smoothness and has an additional barrier present. Fig. S7 shows the bond dissociation energy when just 31 structures are used for retraining. Even in this low-data limit, the smooth dissociation curves for the meta-learning potential remain. To demonstrate that this is not unique to 3BPA, the hydrogen bond dissociation for ethanol is shown in Fig. S8. Again, k=1 fails to recreate the smooth curve expected, whilst the meta-learning potential captures the correct shape.
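A scan of this kind can be generated with a few lines of code; the sketch below is illustrative only (the dictionary-based structure representation and the model.predict_energy interface are assumptions, not the workflow actually used here).

    import numpy as np

    def dissociation_curve(model, structure, h_index, direction, separations):
        # Displace one hydrogen along a fixed unit vector and record the model
        # energy relative to the first point of the scan.
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        energies = []
        for d in separations:
            displaced = {**structure, "positions": structure["positions"].copy()}
            displaced["positions"][h_index] += d * direction
            energies.append(model.predict_energy(displaced))        # assumed interface
        return np.asarray(energies) - energies[0]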
We have therefore shown how meta-learning can be used to combine multiple datasets, and the resulting improvements in the test error, torsion energy scans and bond dissociation. Joint fitting can improve on no pre-training. However, not accounting for the differences in QM level of theory causes a reduction in performance that can be seen in the test set errors, the smoothness of the potential, and the performance in extrapolation regions.
§ CONCLUSION
The quantum mechanical properties of millions of molecular species and many materials systems have already been calculated and composed into extended datasets <cit.>. However, the varying levels of theory used to perform the QM calculations have previously prevented different datasets from being used together to make machine learning models, for example for MLIPs. In this work, we have shown that meta-learning techniques can be used to jointly fit multiple datasets and demonstrated the improvement in performance that results from including a diverse selection of datasets.
We show the wide applicability of meta-learning by creating MLIPs for a variety of systems, from a single aspirin molecule to the ANI-1ccx dataset. We show that multiple large organic molecule datasets (QM7-x, QMugs, ANI-1x, Transition-1x and GEOM) can be combined to pre-train a single model. The benefits of using a pre-trained model are then shown for the 3BPA molecule, with a more accurate and smoother potential produced. Meta-learning greatly expands the variety of fitting data available for MLIPs and establishes the possibility of creating readily pre-trained, foundational models for MLIPs.
Pre-training machine learning models has been extensively discussed in the machine learning literature in recent years <cit.>. Whilst pre-training has been carried out for MLIPs, its use has been limited to training from one dataset to another <cit.>. With techniques such as meta-learning, this pre-training does not need to be limited to one specific dataset but can include large numbers of existing datasets. In this work, we added only a single reactive dataset to pre-train a model. However, many different reactive datasets exist, and combining this large amount of information could help build general transferable potentials for reactions in both the condensed and gas phase without the need for millions of new QM calculations. Additionally, datasets have been created for many different combinations of elements. Meta-learning techniques could help build more transferable MLIPs over a wider range of elements with fewer calculations required.
However, combining multiple datasets together and training with meta-learning will not always improve results. This was seen with the CCSD(T) results, where fitting straight from ANI-1x to CCSD(T) resulted in the lowest error. Therefore, adding more data when there is a specific application in mind is not always the best approach, particularly if the additional data is far from the final application. For specific applications, transfer learning from one dataset to another may yield the best training and test set errors. However, if multiple datasets need to be incorporated together, or a general model is desired which can be specialized to multiple different tasks, meta-learning methods are preferable.
With the techniques described in this work, multiple datasets can be fit at once. However, this advancement has exposed a more practical problem with the datasets currently published. There is not a standard format for storing information. Manual manipulation of datasets to a standard format is extremely time-consuming. The need for uniformity in the structure of datasets produced is therefore becoming increasingly important.
The growth of available datasets containing quantum mechanical information for molecular and material structures has given researchers unprecedented levels of QM information. However, combining data from multiple data-sources is a major challenge. We have shown how meta-learning can be used to combine information from multiple datasets generated with varying levels of theory. This advancement changes the way that existing datasets should be viewed, and opens up new avenues for MLIP fitting. Beyond this, the results suggest that meta-learning can be seen as a general approach for combining training datasets for the broad array of chemical and materials processes where data science models can benefit.
This work was supported by the United States Department of Energy (US DOE), Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC (‘Triad’) contract grant no. 89233218CNA000001 (FWP: LANLE3F2). A. E. A. Allen and S. Matin also acknowledge the Center for Nonlinear Studies. Computer time was provided by the CCS-7 Darwin cluster at LANL.
|
http://arxiv.org/abs/2307.05410v1 | 20230711162509 | BLUEX: A benchmark based on Brazilian Leading Universities Entrance eXams | ["Thales Sales Almeida", "Thiago Laitz", "Giovana K. Bonás", "Rodrigo Nogueira"] | cs.CL | ["cs.CL"] |
One common trend in recent studies of language models (LMs) is the use of standardized tests for evaluation. However, despite being the fifth most spoken language worldwide, few such evaluations have been conducted in Portuguese. This is mainly due to the lack of high-quality datasets available to the community for carrying out evaluations in Portuguese. To address this gap, we introduce the Brazilian Leading Universities Entrance eXams (BLUEX), a dataset of entrance exams from the two leading universities in Brazil: UNICAMP and USP. The dataset includes annotated metadata for evaluating the performance of NLP models on a variety of subjects. Furthermore, BLUEX includes a collection of recently administered exams that are unlikely to be included in the training data of many popular LMs as of 2023. The dataset is also annotated to indicate the position of images in each question, providing a valuable resource for advancing the state-of-the-art in multimodal language understanding and reasoning. We describe the creation and characteristics of BLUEX and establish a benchmark through experiments with state-of-the-art LMs, demonstrating its potential for advancing the state-of-the-art in natural language understanding and reasoning in Portuguese. The data and relevant code can be found at <https://github.com/Portuguese-Benchmark-Datasets/BLUEX>
§ INTRODUCTION
Recent advances in Language Models (LMs) have generated significant interest due to their demonstrated capabilities on a wide range of language tasks, including text classification, language translation, and text generation <cit.>. LM performance has been particularly impressive on standardized tests, which present challenging questions requiring high levels of domain-specific knowledge and reasoning. For instance, recent benchmarks on GPT-4 <cit.> showed that it can achieve human-level performance on a variety of graduate-level benchmarks.
Despite the impressive performance of LMs on standardized tests, few evaluations have been performed in Portuguese <cit.>, partially due to the lack of available datasets in the language. This lack of high-quality, standardized datasets presents a significant challenge for researchers interested in developing and evaluating LMs in Portuguese. To address this gap for Brazilian Portuguese, we introduce BLUEX, a dataset consisting of entrance exams for the two leading universities in Brazil. Our dataset offers a rich source of high-quality high school-level questions annotated with their respective subjects, as well as flags indicating the required capabilities necessary to respond accurately to the questions, such as knowledge of Brazilian culture and the application of mathematical reasoning. These annotations can be used to evaluate the performance of LMs on a variety of subjects and capabilities such as domain-specific knowledge and reasoning. Additionally, BLUEX includes a collection of recently administered entrance exams that are unlikely to be included in the training data of many currently popular LMs.
In anticipation of the emergence of multimodal models that combine text and image understanding, we have annotated BLUEX to indicate the position of images in each question. Additionally, we have included all necessary images with the dataset to facilitate research on multimodal language tasks. We believe that this resource will be essential in evaluating the performance of models that reason with both text and image inputs to solve complex problems.
In this paper, we describe the creation and characteristics of BLUEX and establish a benchmark through experiments with state-of-the-art LMs. Our findings suggest that BLUEX provides a valuable resource for benchmarking and advancing the state-of-the-art in natural language understanding and reasoning in Portuguese. This is particularly relevant since even the current state-of-the-art models, such as GPT-4, still have considerable room for improvement and do not achieve the highest cutoff grades for both universities.
§ RELATED WORK
In the realm of Portuguese Natural Language Processing (NLP), the availability of datasets appears to be limited.
For question-answering tasks, Faquad <cit.> is available, which exhibits an extractive style akin to SQuAD <cit.>. It features questions concerning Brazilian higher education institutions, with documents sourced from a federal university and supplemented by Wikipedia articles. Another option is the Multilingual Knowledge Questions and Answers (MKQA) dataset, which covers 26 languages <cit.>. This dataset was generated by selecting 10,000 queries from the Natural Questions dataset <cit.> and acquiring new passage-independent answers for each question. Subsequently, human translators translated the questions and answers into 25 non-English, typologically diverse languages, including Portuguese.
Regarding sentence entailment tasks, ASSIN 1 and 2 <cit.> are available. These datasets encompass Recognizing Textual Entailment (RTE), also referred to as Natural Language Inference (NLI), and Semantic Textual Similarity (STS) tasks. The former involves predicting if a given text (premise) implies another text (hypothesis), while the latter quantifies the semantic equivalence between two sentences.
The Portuguese Language Understanding Evaluation (PLUE) benchmark <cit.> provides Portuguese translations of the GLUE <cit.>, SNLI <cit.>, and SciTAIL <cit.> datasets. These translations have been generated using automatic translation tools including Google Translate and OpusMT <cit.>.
The Winograd Schema Challenge (WSC) dataset <cit.> contains pairs of sentences with minimal differences, featuring an ambiguous pronoun that is resolved divergently between the two sentences. Melo et al. <cit.> manually translated and adapted this dataset to Portuguese.
For sentiment analysis tasks, the TweetsentBr dataset <cit.> consists of 15,000 tweets related to the TV show domain, collected between January and July 2017. The tweets were manually annotated by seven annotators into three classes: positive, neutral, and negative.
The Multilingual Amazon Slu resource package (SLURP) for Slot-filling, Intent classification, and Virtual assistant Evaluation (MASSIVE) <cit.> is a 1M-example dataset containing realistic virtual utterances in 51 languages, including Portuguese. Professional translators translated the dataset from English, and it is annotated for slot (55 classes) and intent (60 classes) prediction tasks.
A dataset more closely related to BLUEX is the ENEM-challenge dataset <cit.>, which includes the editions of the Brazilian national exam, Exame Nacional do Ensino Medio (ENEM), from 2009 to 2017. Additionally, Nunes et al. <cit.> introduced a dataset containing the ENEM exam of 2022, the same paper evaluated the performance of LMs such as GPT-3.5-Turbo and GPT-4 on both the ENEM-challenge and the ENEM 2022 datasets.
§ THE BLUEX DATASET
§.§ Dataset Creation
BLUEX is a dataset comprising more than 1,000 multiple choice questions from the entrance exams of the two leading universities in Brazil, Unicamp and USP, administered between 2018 and 2023. The dataset was created by automatically extracting each question text, alternatives, and related images using scripts, and subsequently each example was manually annotated to correct extraction errors and provide additional metadata such as image positioning.
§.§ Annotated Question Metadata
The annotated metadata is described below.
* Prior Knowledge (PRK) - Indicates whether the question requires knowledge from outside of what has been provided in the question, such as familiarity with a particular author's work or a specific mathematical formula.
* Text Understanding (TU) - Indicates whether the question requires understanding of a particular text.
* Image Understanding (IU) - Indicates whether the question requires understanding of an image. It should be noted that not all questions with images require their understanding to answer the question.
* Mathematical Reasoning (MR) - Indicates whether the question requires mathematical reasoning, such as the ability to perform calculations and symbolic manipulations.
* Multilingual (ML) - Indicates whether the question requires knowledge of two or more languages, such as questions designed to test English skills of Portuguese speakers.
* Brazilian Knowledge (BK) - Indicates whether the question involves knowledge specific to Brazil, such as Brazilian history, literature, geography, or culture.
* Subjects - A list of subjects related to the question, such as geography, physics, etc.
* Related Images - A list of all the related images for the question.
* Alternative Type - Indicates whether the answer choices are presented as text or as images. This is important because some questions may use images as answer choices, which requires different processing techniques than questions with only textual answers.
By providing such annotations along with the questions we aim to facilitate research into language understanding and reasoning in Portuguese for both pure language models and multimodal models. We believe that BLUEX will be a valuable resource for researchers to evaluate and improve the performance of future language models in the context of Portuguese-language standardized tests.
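For illustration, a single annotated question could be represented roughly as the Python dictionary below. The field names and values are hypothetical and the exact keys in the released files may differ; the sketch only mirrors the annotation scheme described above.

    question = {
        "id": "UNICAMP_2023_12",          # hypothetical identifier
        "question": "...",                # question statement (in Portuguese)
        "alternatives": ["A) ...", "B) ...", "C) ...", "D) ..."],
        "answer": "C",
        "PRK": True,                      # prior knowledge required
        "TU": True,                       # text understanding
        "IU": False,                      # image understanding
        "MR": False,                      # mathematical reasoning
        "ML": False,                      # multilingual
        "BK": True,                       # Brazilian knowledge
        "subjects": ["history"],
        "related_images": [],
        "alternative_type": "text",       # answer choices given as text, not images
    }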
§.§ Image Positioning
Many of the questions in the exams require a contextual or informational understanding of images. Despite active research in the field of multimodal models, models that can adeptly process both text and image data and yield satisfactory results remain scarce in the public domain. We believe that BLUEX can serve as an essential evaluation tool for such models.
Anticipating the use of models that will process images and text in an interleaved manner, we also provide precise information regarding the placement of images within the question, as illustrated in Figure <ref>.
§.§ Dataset Distribution
The BLUEX dataset covers a wide range of high school subjects, including Mathematics, Physics, Chemistry, Biology, History, Geography, English, Philosophy and Portuguese, as well as multidisciplinary questions that involve two or more subjects. The distribution of questions is shown in Table <ref>, where we also provide the distribution for the subset of questions without images, which accounts for approximately 58% of the total dataset.
Furthermore, Table <ref> shows the distribution of the dataset across annotated categories, as explained in Section 3.2. We observe that the majority of questions require specific knowledge and the ability to comprehend text, two expected capabilities in students taking these exams. Note that any given question can be part of multiple categories.
§ RESULTS
To enable future comparisons, we evaluated our dataset using several language models, ranging from 6B to 66B parameters, including OpenAI's GPT-4 and GPT-3.5-Turbo models. Our experiments were conducted using large language models with no specific training for this task. Each model was provided with one example in the input and then asked to answer a question from the test set. The example was randomly selected from an exam of the same university as the current question, but from a different year. For example, if the current question is from UNICAMP 2019, the example provided in the prompt would be a question from a UNICAMP exam, but not from 2019. We excluded all questions containing images from our experiments since the language models we used can only process text. This resulted in a total of 638 questions being used, which corresponds to approximately 60% of the dataset.
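The selection of the in-context example can be sketched as follows. The field names are hypothetical and the exact prompt template is the one reproduced in the appendix; this sketch is only meant to illustrate the sampling constraint (same university, different year, no images).

    import random

    def build_one_shot_prompt(target, dataset):
        # In-context example: same university as the target question but a
        # different year; questions with images are excluded beforehand.
        candidates = [q for q in dataset
                      if q["university"] == target["university"]
                      and q["year"] != target["year"]
                      and not q["related_images"]]
        example = random.choice(candidates)

        def render(q, with_answer):
            text = q["question"] + "\n" + "\n".join(q["alternatives"])
            return text + ("\nResposta: " + q["answer"] if with_answer else "\nResposta:")

        return render(example, True) + "\n\n" + render(target, False)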
Table <ref> summarizes our experimental findings, including the mean score achieved by exam-taking students, as well as the mean cutoff score of the most competitive major, which is medicine in both universities.[The average and cutoff scores are reported by the entities responsible for administering the exams. The results presented in Table <ref> are the average of all the exams contained in the BLUEX dataset.] The BLUEX column shows the accuracy of the whole subset used in the evaluation, while the UNICAMP and USP columns account for only the questions from the respective universities. The MR and BK columns account only for questions that include those categories.
Among the language models tested in the 7B-parameter range, Sabiá <cit.>, a model further pre-trained in Portuguese, consistently outperformed all other models, coming close to matching the average human score. Among the open-source models in the 60B-parameter range, LLaMA 65B <cit.> significantly outperformed OPT 66B <cit.> and achieved similar performance to GPT-3.5-Turbo. Sabiá 65B achieved better performance than GPT-3.5-Turbo but still lagged behind GPT-4 by ten points. GPT-4 was by far the best model in our evaluations but did not achieve an average score high enough to pass in medicine, the most competitive major. It is worth noting that the average and cutoff scores provided in Table <ref> are computed taking into account the whole exam, including questions with images, while the scores obtained by the language models utilize only the subset of questions with no images.
We also conducted a more detailed analysis of the models' performance by examining their ability to handle specific question types. Table <ref> presents the findings for questions that required Mathematical Reasoning (MR) and Brazilian Knowledge (BK). We observe that, with the exception of GPT-4, all models struggled to perform significantly better than random chance in questions that required Mathematical Reasoning. Even GPT-4 only achieved an accuracy of 44% in MR questions. On the other hand, when considering questions that require brazilian knowledge, Sabiá greatly outperformed all the other models in the 7B-parameter range, indicating that the extra pretraining in Portuguese provided the model with additional regional knowledge. In the 60B-parameter range, Sabiá also showed improvement over LLaMA, increasing the accuracy in these questions by 10 points and slightly outperforming GPT-3.5-Turbo. Nevertheless, it could not match the remarkable performance of GPT-4.
Moreover, Figure <ref> displays the performance of the top four models on the exams conducted each year. It can be observed that the models have a small variance between the years, which is expected as the difficulty of each exam and the number of questions in the subset vary across years. A surprising result, however, is the increased performance that all models seem to exhibit in 2023. The average and highest cutoff scores also increased slightly over the years, indicating that the exams became slightly easier in recent years. Since the 2023 exams were very recently administered, it is unlikely that they are part of any of the studied models' training data. Therefore, since the models' performance in the most recent years is comparable to that in older exams, it is reasonable to assume that the models are not merely memorizing the answers for the questions in the dataset.
§ CONCLUSION
This work introduced BLUEX, a new dataset that consists of 13 college entrance exams applied between 2018 and 2023 from two of the leading Brazilian universities, UNICAMP and USP. Each question of these exams was extensively annotated to help measure different abilities across multiple subjects in Portuguese. Beyond that, by providing images and their corresponding positions within the text, BLUEX is one of the few Portuguese datasets that are ready to evaluate multimodal models. We provide results from multiple LMs as baselines and reference scores based on student performance to facilitate future comparisons. We believe that BLUEX will be an important benchmark in the evaluation of the Portuguese capabilities of future models.
§ FUTURE WORK
The models used in this study employed a single in-context example. However, there's room for further investigation, such as determining whether increasing the number of few-shot examples could boost the performance of each model, as well as assessing their zero-shot performance. Furthermore, Nunes et al. <cit.> showed that GPT-4's performance on ENEM questions was significantly boosted when chain-of-thought prompts <cit.> were used. Adopting a similar approach here could potentially lead to performance improvement.
Finally, regarding multimodal models, their performance can be assessed utilizing the BLUEX dataset. This provides an opportunity for researchers to investigate the models' capabilities in integrating visual and textual information to address high school level questions.
§ APPENDIX
§.§ Prompt for evaluation
The prompt used for all the experiments in this paper is shown in the Figure <ref>.
§.§ Benchmark per Subject
Table <ref> provides a detailed report of each model achieved accuracy by subject. Questions that were associated with more than one subject contributed to the accuracy of both scores. For example, a question related to mathematics and English will be taken into account when calculating the accuracy of both mathematics and English subjects.
|
http://arxiv.org/abs/2307.04714v1 | 20230710172406 | Global solutions versus finite time blow-up for the fast diffusion equation with spatially inhomogeneous source | ["Razvan Gabriel Iagar", "Ariel Sánchez"] | math.AP | ["math.AP"] |
|
http://arxiv.org/abs/2307.10190v1 | 20230708141246 | Summary of the 3rd BINA Workshop | ["Eugene Semenko", "Manfred Cuntz"] | astro-ph.IM | ["astro-ph.IM", "astro-ph.SR"] |
Eugene Semenko (1), Manfred Cuntz (2)

(1) National Astronomical Research Institute of Thailand (Public Organization), 260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand
(2) Department of Physics, University of Texas at Arlington, Arlington, TX 76019, USA

Summary of the BINA Workshop
============================
BINA-3 has been the third workshop of this series involving scientists from India and Belgium aimed at fostering future joint research in the view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects have been featured through invited and contributed talks, and poster presentations.
§ INDO-BELGIAN COLLABORATION IN SPACE AND TIME
Without comprehensive international collaborations, it is difficult to imagine sustainable scientific progress in the modern age. In astronomy and astrophysics, such collaborations enabled the operation of observational facilities in the best places on the ground and in space. In big international cooperations like the European Southern Observatory, we can see how the technology exchange and mobility of human resources promote research on all levels, from universities to international institutions. Especially promising collaborations pertain to India, the world's most populous country according to the United Nations <cit.>, with exceptionally rapid economic growth.
The Belgo-Indian Network for Astronomy and Astrophysics, or BINA, was initialized in 2014 to foster the existing contacts between Indian and Belgian researchers, mostly from the Aryabhatta Research Institute of Observational Sciences (ARIES) and the Royal Observatory of Brussels (ROB), and to expand this collaboration on the nation-wide scale in both countries. The third BINA workshop, which we have the pleasure of summarizing, marks the end of this project. Two previous workshops were held in 2016 in Nainital (India) and 2018 in Brussels (Belgium). We believe that our summary would not be complete without a brief comparison of the third workshop with the two preceding ones. This will help us to better understand BINA's importance and outcome.
The first workshop (BINA-1) took place in Nainital on 15–18 November 2016. According to available statistics <cit.>, 107 astronomers from eight countries participated in the meeting, giving 36 oral talks and presenting 42 posters. Eighty-eight people from twelve partner institutes represented the Indian astronomical community, whereas six Belgian institutions sent ten representatives. The meeting's agenda focused primarily on the instrumentation of the newly commissioned 3.6-m Devastal Optical Telescope (DOT) and on the future of the 4-m International Liquid-Mirror Telescope (ILMT). The scientific talks covered a wide range of subjects, from solar system studies to individual stars, stellar clusters, exoplanets and extragalactic astronomy.
The second BINA workshop (BINA-2) was held two years later, in 2018, in Brussels; it was aimed to further expand the existing collaborations. Despite the significantly smaller number of participants (i.e., 69 registered researchers from seven countries), the conference's scientific programme was rich in oral talks, totalling 44. Furthermore, there were eight poster presentations <cit.>. The scientific programme of the second workshop largely mirrored the agenda of the first meeting, accentuating the scientific application of the Belgo-Indian telescopes. A highly notable aspect of the second workshop's scientific programme was the presence of the review talks.
In terms of participation and the number of oral talks, BINA-3, the final workshop, resembles the previous events, although, fortunately, a significant increase in participation and contributions occurred. Nearly one hundred fifty scientists from eleven countries participated in BINA-3, with the lion's share from India and Belgium. A total of 37 talks (10: invited, 27: contributory) talks were given in the main programme, and 21 contributory talks were given in the solar physics sessions. There have been 81 poster presentations; many of those were led by graduate and undergraduate students.
There is significant progress hiding behind the numbers. Since 2016, the Belgo-Indian network has grown to involve new institutes from both partner countries. The members published numerous scientific papers with results obtained on the Belgo-Indian telescopes. Many of these were based on PhD theses pursued within BINA. The content of these proceedings, during 2016–2023, also reveals that many young researchers changed their affiliation, moving to new places and thus expanding the network of research contacts. Progress in instrumentation and scientific collaboration within BINA and with external institutes worldwide gave new impulses to solar and general physics studies. In general, we can count the significantly increased number of telescopes and instruments as the major indicator of progress achieved within the BINA project. The list of available instruments has been highly influential on BINA-3. In the following sections, we briefly summarize its scientific programme.
§ OBSERVATIONAL TECHNIQUES AND INSTRUMENTATION
Telescopes and their instruments were in the spotlight of all BINA workshops. The ILMT has become the central theme of the current meeting. From a number of oral talks and poster presentations, one could get a comprehensive view of such telescopes' operation principles. It was particularly interesting to find out about the data reduction, calibration and access to the processed images obtained with the ILMT. Numerous results of the first observations with the ILMT, shown mostly in the poster presentations, have demonstrated a wide range of possible scientific applications of zenith telescopes with liquid mirrors. Given the short time that has passed since the beginning of the operation and obtained results, we can confirm that the ILMT has proven its scientific concept and significantly strengthened the observational facilities for the current and future Indo-Belgian projects.
The Indo-Belgian 3.6-m Devastal Optical Telescope (DOT) remains, so far, Asia's largest fully steerable optical telescope, and has been in operation since 2016. Yet, right at the time of BINA-3, the park of Indian telescopes was strengthened by the commissioning of the 2.5-m telescope, which was built by the Advanced Mechanical and Optical Systems (AMOS) in Belgium for the Physical Research Laboratory (PRL) in Ahmedabad and installed at Mt Abu, Rajasthan, India.
The development of new instruments and the upgrade of existing facilities was the central theme of the instrumentation section of the current conference. Notably, by 2028, the TIFR-ARIES Multi-Object Optical to Near-infrared Spectrograph (TA-MOONS) will bring new capabilities useful for the studies of stars in star formation regions, open clusters, and extended sources with DOT. Also, for this telescope, adding the polarimetric mode to the Aries-Devasthal Faint Object Spectrograph & Camera (ADFOSC), the existing device for observations of faint objects, will enable both linear and circular polarimetry. This new regime is of critical importance to the study of processes in star-forming regions, interacting stellar systems, supernovae, active galactic nuclei, and beyond.
A spectropolarimetric mode might be a case to think of for the creators of the PRL Advanced Radial Velocity Abu Sky Search-2 (PARAS-2), a high-resolution spectrograph at the 2.5-m PRL telescope at Mt Abu. This highly stable device has been developed for precise measurements of radial velocities while providing very high spectral resolution. Due to the geographical location of Mt Abu, PARAS-2 can play a critical role in the continuous monitoring of radial velocities for a wide variety of relatively bright objects; however, with a spectropolarimetric mode being implemented (like HARPSpol at the High Accuracy Radial velocity Planet Searcher (HARPS); ), PARAS-2 can take its niche in observations of hot magnetic stars, either within Indo-Belgian collaboration or in third-party projects like MOBSTER <cit.>. (MOBSTER is an acronym for Magnetic OB[A] Stars with TESS: probing their Evolutionary and Rotational properties; it is a collaboration of more than 60 scientists from over the world.) With the completion of a High-Resolution Spectrograph for the 3.6-m Devastal Optical Telescope (DOT-HRS), the astronomical community of ARIES will possess the ability to independently carry out studies in the fields of asteroseismology and stellar abundances. Again, like in the case of PARAS-2, spectropolarimetry with DOT-HRS is expected to increase the list of potential applications of this device and could further expand the ongoing Nainital-Cape survey of pulsating early-type stars <cit.>.
The rising number of telescopes in India poses questions about the most adequate time allocation policies and the optimal distribution of observational proposals between existing astronomical facilities. We found that the analysis of the time allocation for the 3.6-m DOT regarding the last six observational cycles, as presented at the workshop, indicated that it was particularly useful and appropriate for all facilities of ARIES — especially considering that the ILMT has started its operation and the upcoming arrival of the next-generation instruments for the 3.6-m DOT. From our perspective, in addition to the proposed improvements, we would also recommend the organisation of regular (e.g., on a yearly basis) conferences of the telescope's users under the auspices of the Time Allocation Committee (TAC), where the existing and potential applicants would be able to present their proposals or give feedback on the approved or running programmes. Such mini-conferences could be held online, speeding up communication between the TAC and the astronomical community. Naturally, this experience could be applied to other instruments in India and beyond as well.
The theme of small telescopes has been raised in several talks. The Belgium-made High-Efficiency and high-Resolution Mercator Echelle Spectrograph (HERMES), operated at the 1.25-m Mercator telescope in La Palma (Spain), proved its effectiveness in studies of the chemical composition of single and multiple stars. This spectrograph is used for existing bilateral projects. Complimentary opportunities for high-resolution spectroscopy with the 1-m-class telescopes and the perspectives of affordable implementation of adaptive optics on small and moderate-size telescopes have been considered in BINA-3. The interest in these problems highlights the importance of small, properly equipped telescopes for big programmes complementary to missions like the Transiting Exoplanet Survey Satellite (TESS).
§ MAIN PROGRAMME SESSION
BINA provides access to a wide variety of observational facilities located worldwide <cit.>. The observational component mostly determined the agenda of the BINA-3.
Comets, planets, asteroids, and orbital debris were in the third BINA workshop's spotlight, though other topics such as stars, including stellar multiplicity, and compact objects have been discussed. The selection of objects is largely determined by the areas where optical spectroscopy and photometry are most effective with small and medium-sized telescopes. The exception is the study of planetary atmospheres using the method of stellar occultations. Similar techniques require bigger apertures, and being implemented in a 3–6-m class of telescopes can be very beneficial. The 3.6-m DOT is among those few instruments on the planet which have regularly been used for observation of such events <cit.>.
Various instruments available within the Indo-Belgian collaboration promote the comprehensive study of processes occurring in star formation regions and during the ongoing evolution of stars. The efficiency of multi-wavelength observations was demonstrated in the example of the study of the star formation H ii region Sh 2-305. However, this is not a unique case where the Indian telescopes exploring the Universe in optical, radio, and X-ray domains were successfully combined. We cannot pass by the numerous results of the study of massive binary stars, stars with discs and circumstellar envelopes, introduced in the BINA-3 workshop.
Stellar multiplicity runs like a golden thread through many talks given in Bhimtal during the workshop. As companions significantly influence stellar lives at all stages of evolution, proper accounting and evaluation of the companions' properties are crucial. In this regard, work with the catalogues of binary stars or their extensive study within the ongoing or future Indo-Belgian projects must receive high priority. In such programmes, high-resolution optical spectroscopy of binary and multiple stars must take a special place.
Another problem passing through the scientific content of BINA-3 is stellar magnetism. As pointed out in the workshop, magnetic fields are ubiquitous on and beyond the main sequence, with their strengths varying substantially. Magnetic fields are responsible for different kinds of stellar activity and can impact stellar evolution. Besides the theoretical aspects pertaining to the physics of these processes, we would like to attract attention to the lack of observational facilities in the Asian region suitable to direct observations of stellar magnetic fields and processes. The worldwide selection of medium-sized and big telescopes equipped with sensitive spectropolarimetric devices is very limited, and Indian telescopes could fill this gap.
Through the study of chemical composition, one can explore the evolution of individual stars, groups of stars, and the Galaxy at large. The last is the central task of galactic archaeology. Pursuing this task depends on the availability of spectra and proper modelling. Despite the various observational results presented in BINA-3, we find a lack of interactions between the BINA members and groups working, e.g., in the U.S., Sweden or Germany, on the theoretical aspects of abundance analysis. We believe tighter cooperation with the institutes outside of BINA would take the research of stellar abundances to a qualitatively new level.
In contrast to the previous workshops, asteroseismology, a powerful tool for probing stellar interiors and validating stellar parameters, appears underrepresented in BINA-3. (On a lighter note, a superb cultural show successfully compensated for the lack of “music of the stars” in the conference programme.) This fact looks surprising to us as the Belgian groups in Brussels and Leuven are famous for their proficiency in this field.
Apart from galactic archaeology, which deals with the evolution of chemical composition, probing the Galactic structure is another important direction of work within BINA. Even now, after decades of extensive exploration of the Galaxy using different methods, our knowledge of its structure is incomplete. Optical polarimetry helps to reveal the detailed fine structure of dust clouds in the star formation regions or in the areas of young open clusters. Indian astronomers are experienced in this kind of work, and their results, both published <cit.> and presented during BINA-3, deserve special attention. We look forward to further expanding this direction of galactic studies on a new technical level.
§ SOLAR PHYSICS SESSION
The main theme of the solar physics programme has been the study of small-scale structure, waves, flares as well as coronal mass ejections (CMEs). Science opportunities are often directly associated with instruments such as the Extreme Ultraviolet Imager (EUI) onboard the Solar Orbiter. The EUI provides a crucial link between the solar surface, on the one hand, and the corona and solar wind, on the other hand, that ultimately shapes the structure and dynamics of the interplanetary medium. Several contributions focused on wave propagation, including their relevance to small-scale structures of the solar chromosphere, transition region and corona, such as flares, spicules and loop systems.
This kind of research considered both observations and theoretical work, such as ab-initio simulations for standing waves and slow magneto-acoustic waves. Studies of the outer solar atmosphere also utilized the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA), both onboard of the Solar Dynamics Observatory (SDO). In alignment with previous studies given in the literature, the potential of spectral lines, including line asymmetries, for the identification of solar atmospheric heating processes has been pointed out and carefully examined. Clearly, this approach is relevant to both solar physics and studies of solar-type stars of different ages and activity levels; it allows to embed solar studies into a broader context.
Regarding CMEs, a major driver of space weather and geomagnetic storms, attention has been paid to the EUropean Heliosphere FORcasting Information Asset (EUHFORIA), which is relevant for MHD modelling and the study of the evolution of CMEs in the heliosphere. In this regard, a pivotal aspect is the study of thermodynamic and magnetic properties of CMEs as well as CME forward-modeling, aimed at predicting CME breakouts as well as CME topologies and magnitudes. Relevant spectral line features include Fe XIV and Fe XI data, obtained with existing instruments or available in the archive. Another notable item has been the presentation of long-term variations of solar differential rotation and the solar cycle; the latter still poses a large set of unanswered scientific questions.
§ RETROSPECTIVE AND RECOMMENDATIONS
A key element of BINA-3 is the future availability of the ILMT. The science goals of ILMT include cosmological research such as the statistical determination of key cosmological parameters through surveying quasars and supernovae as well as photometric variability studies of stars, transiting extra-solar planets and various types of transient events. Another aspect consists in the search for faint extended objects like low-surface brightness and star-forming galaxies. The pronounced use of ILMT, typically in conjunction with other available facilities, requires the ongoing pursuit of international collaborations; this activity is pivotal for future success. Another key aspect is the significance of theoretical studies.
Regarding solar physics research, previous work encompasses the study of MHD waves and small-scale transients, with a focus on the solar chromosphere, transition region and corona. Some of this work made extensive use of the EUI onboard of the Solar Orbiter. The study of outer solar atmosphere fine structure utilized the IRIS and the AIA, both onboard of the SDO. Time-dependent coronal studies, especially CMEs, are of great significance for the Earth, such as the onset of geomagnetic storms and the safety of equipment, including those associated with satellite communication[See <https://www.swpc.noaa.gov> for further information.]. Further advances in this field are expected to benefit from additional observational studies as well as advances in theory, particularly the interface of those two. Regarding theoretical work, ongoing and future efforts should continue to focus on 3-D magneto-hydrodynamics studies in conjunction with the adequate inclusion of radiative transfer and statistical phenomena, as well as aspects of chaos theory.
There are other items with the potential for future successful developments. Asteroseismology has been underrepresented in BINA-3. This is a powerful tool in the context of stellar evolution studies and the validation and improvement of stellar parameters; the latter is also relevant in the context of extrasolar planet investigations. Further important aspects concern the study of stellar magnetism and activity. Besides elementary stellar studies, these topics are also of critical importance regarding circumstellar habitability and astrobiology at large <cit.>. Moreover, studies of AGNs and GRBs are cardinal topics beyond solar and stellar physics; they have gained considerable steam within the scientific community.
Processes in the extragalactic objects are characterized by high energy and rich spectra. Among the variety of works presented during BINA-3, studies of active galactic nuclei (AGN) and different transients like gamma-ray bursts (GRB) continue to deserve special attention. The members of BINA have an exhaustive set of instruments available for multi-wavelength observations of these extragalactic sources, yet there is still room for improvement. Considerable advances are attainable both in instrumentation and in techniques of analysis. In the study of intra-night variability of blazars presented in the workshop's programme <cit.>, we noted the lack of international contributors, although these types of objects are in the spotlight of groups working, e.g., at the 6-m telescope of the Special Astrophysical Observatory, located in the North Caucasus region of Russia <cit.>. Given the absence of polarimetric devices for observation with the 3.6-m DOT at the moment, such cooperation could open new opportunities. Connections established on the personal level between the member institutions of BINA and observatories operating big telescopes would facilitate future studies in extragalactic astronomy where the aperture matters.
Similarly, we would recommend establishing collaborations with the institutes operating robotic telescopes for the observation of transients. However, a more radical future step might be an expansion of Indian observational facilities towards other continents, especially South America. A small network of medium-sized fully-robotic telescopes could provide easy access to observations and be used for educational purposes. It would reduce the dependence on astronomical monitoring occurring in South Asia — in consideration of possible drawbacks due to the regional climates.
Last but not least, in the field of data analysis, the leitmotif now is the use of machine learning (ML) and artificial intelligence (AI). This theme was raised several times during the workshop, but we believe that it could find broader applications in projects related to the classification of light curves and spectra. At the same time, we would recommend researchers using ML and AI in their work not to ignore advances in theory, as without proper constraints and background information, these methods might lead to impractical results, especially if based on small samples.
§.§.§ Acknowledgments
The authors are grateful to the scientific and local organizing committees of BINA-3 for inviting them to summarize the workshop and for further assistance in preparing these proceedings.
§.§.§ ORCID identifiers of the authors
0000-0002-1912-1342 (Eugene Semenko)
0000-0002-8883-2930 (Manfred Cuntz)
§.§.§ Author contributions
Both authors equally contributed to this publication.
§.§.§ Conflicts of interest
The authors declare no conflict of interest.
|
http://arxiv.org/abs/2307.04380v1 | 20230710072553 | Ghost polygons, Poisson bracket and convexity | ["Martin Bridgeman", "François Labourie"] | math.GT | ["math.GT", "math.DG", "53D30"] |
§ INTRODUCTION
The character variety of a discrete group Γ in a Lie group 𝖦 admits a natural class of functions: the algebra of regular functions generated as a polynomial algebra by trace functions or characters. When Γ is a surface group, the character variety becomes equipped with a symplectic form generalizing Poincaré intersection form – called the Atiyah–Bott–Goldman symplectic form <cit.> – and a fundamental theorem of Goldman <cit.> shows that the algebra of regular functions is stable under the Poisson bracket and more precisely that the bracket of two characters is expressed using a beautiful combinatorial structure on the ring generated by characters.
The Poisson bracket associated to a surface group has been heavily studied in <cit.>, <cit.>; and in the context of Hitchin representations the link between the symplectic structure, coordinates and cluster algebras discovered by Fock–Goncharov in <cit.>, has generated a lot of attention: for instance see <cit.>, <cit.>, <cit.>, <cit.> and <cit.> for more results, and also relations with the swapping algebra <cit.>.
On the other hand the deformation space of Anosov representations admits many other natural functions besides regular functions. Length functions, associated to any geodesic current, studied by Bonahon <cit.> in the context of Teichmüller theory, play a prominent role for Anosov representations for instance in <cit.> and <cit.>. Another class are the correlation functions, defined in <cit.> and <cit.>. These functions are defined as follows. For the sake of simplicity, we focus in this introduction on the case of a projective Anosov representation ρ of a hyperbolic group Γ: one can then associate to any geodesic g a rank 1-projector _ρ(g). The correlation function _G associated to a configuration of n-geodesics – that is an n-tuple G=(g_1,…,g_n) of geodesics up to cyclic transformation – is then
_G:ρ↦_G(ρ)(_ρ(g_n)…_ρ(g_1)) .
In Teichmüller theory, the correlation function of two geodesics is the cross-ratio of the endpoints. More generally, correlation functions of geodesics in Teichmüller theory are rational functions of cross-ratios. This is no longer the case in higher rank.
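As a purely numerical illustration of this definition (not taken from the paper), rank-one projectors and the resulting correlation function can be computed as follows, with placeholder vectors standing in for the data coming from the limit maps of the representation.

    import numpy as np

    def rank_one_projector(v, alpha):
        # Projector with image spanned by v and kernel ker(alpha): P(x) = alpha(x) v / alpha(v).
        v = np.asarray(v, dtype=float)
        alpha = np.asarray(alpha, dtype=float)
        return np.outer(v, alpha) / alpha.dot(v)

    def correlation(projectors):
        # Trace of the product P(g_n) ... P(g_1) for the configuration (g_1, ..., g_n).
        prod = np.eye(projectors[0].shape[0])
        for P in projectors:
            prod = P @ prod
        return np.trace(prod)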
For instance if C is a geodesic triangle given by the three oriented geodesics (g_1,g_2,g_3), the map
^*_C:ρ↦^*_C(ρ)(_ρ(g_1)_ρ(g_2)_ρ(g_3)) ,
is related to Goncharov triple ratio on the real projective plane.
For a geodesic current μ, its length function _μ is defined by an averaging process – see equation (<ref>). One can also average correlation functions: say a Γ-invariant measure μ on the set ^n of generic n-tuples of geodesics is an integrable cyclic current if it is invariant under cyclic transformations and satisfies some integrability conditions – see section <ref> for precise definitions. Then the μ-correlation function or μ-averaged correlation function is
_μ:ρ↦∫_^n/Γ_G(ρ) μ̣ .
The corresponding functions are analytic <cit.> but rarely algebraic.
In the case when Γ is a surface group, the algebra of functions on the deformation space of Anosov representations admits a Poisson bracket coming from the Atiyah–Bott–Goldman symplectic form.
To uniformize our notation, we write ^k_μ for _μ when μ is supported on ^k and ^1_ν=_ν for the length function of a geodesic current ν. Then, one of the main result of this article, Theorem <ref>, gives as a corollary
[Poisson stability]
The space of length functions and correlation functions is stable under the Poisson bracket. More precisely there exists a Lie bracket on the polynomial algebra formally generated by tuples of geodesics (G,H)↦ [G,H] so that
{^k_μ,^p_ν}=∫_^n+m/Γ_[G,H](ρ) μ̣(G)ν̣(H) .
The complete result, in particular Corollary <ref>, allows one to use this formula recursively.
In Theorem <ref> we compute explicitly the Hamiltonian vector field of the correlation functions. For instance, in Teichmüller theory, this allows us to compute the higher derivatives of a length function along twist orbits by a combinatorial formula involving cross-ratios.
The bracket (G,H)↦ [G,H] – that we call the ghost bracket – is combinatorially constructed.
In this introduction, we explain the ghost bracket in a simple case and refer to section <ref> for more details. Recall first that an ideal polygon – not necessarily embedded – is a sequence (h_1,…, h_n) of geodesics in such that the endpoint of h_i is the starting point of h_i+1. Let then G be the configuration of n geodesics (g_1,…, g_n), with the endpoint of g_i not equal the starting point of g_i+1. The associated ghost polygon is given by the uniquely defined configuration (θ_1,…θ_2n) of geodesics – see figure (<ref>) such that
* (θ̅_1,θ_2,θ̅_3 …,θ̅_2n-1,θ_2n) is an ideal polygon,
* for all i, θ_2i=g_i and is called a visible edge, while θ_2i+1 is called a ghost edge.
We now denote by ⌈ g,h⌉ the configuration of two geodesics g and h, ϵ(g,h) their algebraic intersection, and g̅ is the geodesic g with the opposite orientation.
Then if (θ_i,…,θ_2n) and (ζ_i,…,ζ_2p) are the two ghost polygons associated to the configurations G and H, we define the projective ghost bracket of G and H as
[G,H] G· H·(∑_i,j(-1)^i+jϵ(ζ_j,θ_i) ⌈ζ_j,θ_i⌉) ,
which we consider as an element of the polynomial algebra formally generated by configurations of geodesics. We have similar formulas when G or H are geodesics, thus generalizing Wolpert's cosine formula <cit.>. In the case presented in the introduction – the study of projective Anosov representations – the ghost bracket is actually a Poisson bracket and is easily expressed in paragraph <ref> using the swapping bracket introduced by the second author in <cit.>. Formula (<ref>) is very explicit and the Poisson Stability Theorem <ref> now becomes an efficient tool to compute recursively brackets of averaged correlations functions and length functions.
In this spirit, we give two applications of this stability theorem. Following Martone–Zhang <cit.>, we say that a projective Anosov representation ρ admits a positive cross ratio if
0<tr(_ρ(g)_ρ(h))<1 for any two intersecting geodesics g and h. Examples come from Teichmüller
spaces and Hitchin representations <cit.>. More generally positive representations are associated to positive cross ratios <cit.>. Our first application is a generalisation of the convexity theorem of Kerckhoff <cit.> and was the initial reason for our investigation:
[Convexity Theorem] Let μ be the geodesic current associated to a measured geodesic lamination, _μ the associated length function. Let ρ be a projective Anosov representation which admits a positive cross ratio, then for any geodesic current ν,
{_μ,{_μ,_ν}}≥ 0 .
Furthermore the inequality is strict if and only if i(μ,ν) ≠ 0.
Recall that in a symplectic manifold {f,{f,g}}≥ 0 is equivalent to the fact that g is convex along the Hamiltonian curves of f. This theorem involves a generalisation of Wolpert's sine formula <cit.>.
Our second result allows us to construct commuting subalgebras in the Poisson algebra of correlation functions. Let ℒ be a geodesic lamination whose complement is a union of geodesic triangles C_i. To each such triangle C_i we associate the correlation function ^*_C_i, called a triangle function. The subalgebra associated to the lamination is the subalgebra generated by the triangle functions and by the length functions of geodesic currents supported on ℒ.
[Commuting subalgebra]
For any geodesic lamination whose complement is a union of geodesic triangles, the associated subalgebra is commutative with respect to the Poisson bracket.
In a forthcoming paper with Dick Canary, we use Theorem <ref> to obtain a new proof of a theorem of Potrie–Sambarino <cit.> which says that the entropy for simple roots is 1 for Hitchin representations.
In order to give a flavour of the constructions of our article, let us explain that the first step is to integrate a closed form α with values in the Lie algebra of the group against a ghost polygon by a simple combinatorial process that we call ghost integration producing the number
∮_ρ(G)α ,
called the ghost integral – see section <ref>. We relate this ghost integration to the derivative of correlation functions using the dynamical cohomological equation – in a more general context than surface groups or hyperbolic groups.
More precisely, we have for a variation of a flat connection ∇̇
d_G(∇̇)=∮_ρ(G)∇̇ ,
see paragraph <ref>. We obtain this formula as a consequence of our study of the dynamical cohomological equation (proposition <ref>).
In order to get to the Hamiltonian, we have to introduce the dual objects to ghost integration in the twisted cohomology of the group, namely a form Ω_ρ(G) with values in the endomorphism bundle, so that
∫_(α∧Ω_ρ(G))=∮_ρ(G)α .
Then the ghost intersection of two ghosts polygons G and H is
_ρ(G,H)=∮_ρ(G)Ω_ρ(H) ,
and we show that
_[G,H](ρ)= _ρ(G,H) .
For details, see section <ref>.
In order to finally compute the Poisson brackets of averaged functions and proceed to the proof of Theorem <ref>, we have to carefully exchange some integrals – see section <ref>.
The constructions outlined above are the analogues of classical constructions (integration along a path, intersection of geodesics) in differential topology described in section <ref>, in some sense playing the role of non-abelian homology.
§.§ The general case
For the sake of simplicity, this introduction focused on the case of the so-called projective Anosov representations. More generally, one can construct correlation functions out of geodesics decorated with weights of the Lie group with respect to a Θ-Anosov representation. The Θ-decorated correlation functions are described by configurations of Θ-decorated geodesics. The full machinery developed in this article computes more generally the brackets of these decorated correlation functions. Using that terminology, the Poisson Stability Theorem <ref> still holds with the same statement, but the ghost bracket has to be replaced by a decorated ghost bracket which follows a construction given in paragraph <ref>, slightly more involved than formula (<ref>).
§.§ Beyond representations: uniformly hyperbolic bundles
We also introduce a new tool allowing us to describe “universal Anosov representations“ in the spirit of universal Teichmüller spaces: the definition of uniformly hyperbolic bundles. This new tool allows us to extend results obtained for Anosov representations, notably stability and limit curves, in a situation where no periodicity according to a discrete group is required. In particular, the (not averaged) correlation functions make sense and we are able to compute the variation of such a correlation function in proposition <ref>. This result follows in particular from the solution of the (dynamical) cohomological equation (proposition <ref>). Important constructions such as ghost integration – in section <ref> – and ghost intersection – in section <ref> – are also given in the context of uniformly hyperbolic bundles.
We would like to thank Dick Canary for very useful comments, Fanny Kassel, Curt McMullen, Andrés Sambarino and Tengren Zhang for their interest.
§ PRELIMINARY
In this section, we recall basic facts about intersection of geodesics in the hyperbolic plane, dual forms to geodesics and the Goldman symplectic form. We also introduce one of the notions important for this paper: geodesically bounded forms.
§.§ The hyperbolic plane, geodesics and forms
We first recall classical results and constructions about closed geodesics in the hyperbolic plane.
§.§.§ Geodesics and intersection
Let us choose an orientation in . We denote in this paper by the space of oriented geodesics of that we identify with the space of pairwise distinct points in . We denote by g̅ the geodesic g with the opposite orientation.
Let g_0 and g_1 be two oriented geodesics.
The intersection of g_0 and g_1 is the number ϵ(g_0,g_1) which satisfies the following rules
ϵ(g_0,g_1)=-ϵ(g_1,g_0)=-ϵ(g̅_0,g_1) ,
and verifying the following
* ϵ(g_0,g_1)=0 if g_0 and g_1 do not intersect or g_0=g_1.
* ϵ(g_0,g_1)=1 if g_0 and g_1 intersect and (g_0(∞),g_1(∞),g_0(-∞),g_1(-∞)) is oriented.
* ϵ(g_0,g_1)=1/2 if g_0(-∞)=g_1(-∞) and (g_0(∞),g_1(∞),g_1(-∞)) is oriented.
Observe that ϵ(g_0,g_1)∈{-1,-1/2,0,1/2,1} and that we have the cocycle property, if g_0,g_1,g_2 are the sides of an ideal triangle with the induced orientation, then for any geodesic g we have
∑_i=0^2ϵ(g,g_i)=0 .
We need an extra convention for coherence
A phantom geodesic is a pair g of identical points (x,x) of ∂_∞. If g is a phantom geodesic and h any geodesic (phantom or not), we define ϵ(g,h) := 0.
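For concreteness, the rules above can be transcribed directly. The following Python sketch encodes an oriented geodesic by the pair of angles (g(-∞),g(+∞)) of its endpoints on the circle at infinity, and assumes that "oriented" means counterclockwise cyclic order; the remaining shared-endpoint cases are reduced to the stated 1/2-rule by the symmetry relations (<ref>).

    import math

    def ccw(a, b, c):
        # True if (a, b, c) are pairwise distinct and in counterclockwise cyclic order
        return ((b - a) % (2 * math.pi)) < ((c - a) % (2 * math.pi))

    def eps(g0, g1):
        """Intersection number of the oriented (possibly phantom) geodesics
        g0 = (m0, p0) and g1 = (m1, p1); endpoints are compared exactly."""
        m0, p0 = g0
        m1, p1 = g1
        if m0 == p0 or m1 == p1:            # phantom geodesic
            return 0.0
        if {m0, p0} == {m1, p1}:            # g0 equals g1 or its reverse
            return 0.0
        if m0 == m1:                        # common starting point: the 1/2-rule
            return 0.5 if ccw(p0, p1, m1) else -0.5
        if p0 == p1 or p0 == m1:            # reduce with eps(g0, g1) = -eps(g0 reversed, g1)
            return -eps((p0, m0), g1)
        if m0 == p1:                        # reduce with antisymmetry
            return -eps(g1, g0)
        # transverse case: the chords cross iff the endpoints of g1 separate those of g0
        if ccw(p0, m1, m0) == ccw(p0, p1, m0):
            return 0.0
        return 1.0 if ccw(p0, p1, m0) else -1.0

    # cocycle property on the sides of an ideal triangle with vertices a, b, c
    a, b, c = 0.5, 2.5, 4.5
    sides = [(a, b), (b, c), (c, a)]
    g = (1.0, 3.0)
    assert sum(eps(g, s) for s in sides) == 0.0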
§.§.§ Geodesic forms
Let us denote by Ω^1() the space of 1-forms on the hyperbolic space. A form ω in Ω^1() is bounded if |ω_x(u)| is bounded uniformly for all (x,u) in U the unit tangent bundle of . We let ^∞ the vector space of bounded forms.
We have an equivariant mapping
Ω^1() , g ↦ ω_g ,
which satisfies the following properties
* ω_g is a closed 1-form in supported in the tubular neighbourhood of g at distance 1, outside the tubular neighbourhood of g at distance 1/2.
* ω_g=-ω_g̅
* Let g_0 be any geodesic; then
∫_g_0ω_g = ϵ(g_0,g) .
The construction runs as follows. Let us fix a function f from ℝ^+ to [0,1] with support in [0,1] which is constant and equal to 1/2 on the neighbourhood [0,1/2] of 0. We extend (non-continuously) f to ℝ as an odd function. Let finally R_g be the “signed distance" to g, so that R_g̅=-R_g. We finally set ω_g=-d(f∘ R_g). Then (<ref>) and (<ref>) are obvious. We leave it to the reader to check the last point in all possible cases.
We extend the above map to phantom geodesics by setting ω_g=0 for a phantom geodesic and observe that the corresponding assignment still obey proposition <ref>.
The form ω_g is called the geodesic form associated to g. Such an assignment is not unique, but we fix one, once and for all. Then we have
For any pair of geodesics g_0 and g_1, ω_g_1∧ω_g_0= f area_ with f bounded and in L^1.
The only non-trivial case is if g_0 and g_1 share an endpoint. In the upper half-plane model let g_0 be the geodesic x=0, while g_1 is the geodesic x=a. Observe that the support of ω_g_1∧ω_g_0 is contained in the sector V defined by the inequalities y>B>0 and | x/y| < C for some positive constants B and C. Finally, as the signed distance for g_0 satisfies sinh(R_g_0) = x/y, then
ω_g_0=f_0 d(x/y) , ω_g_1=f_1 d((x-a)/y) ,
where f_0 and f_1 are functions bounded by a constant D. An easy computation shows that
d(x/y)∧ d((x-a)/y) = a dx∧dy/y^3 .
Observe that | f_1 f_0 a| is bounded by D^2a, and
∫_V dx∧dy/y^3≤ 2C∫_B^∞1/y^2 dy = 2C/B < ∞ .
This completes the proof.
The above result is still true whenever g or h are phantom geodesics.
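The wedge-product computation in the proof can also be checked symbolically; here is a short sympy verification, assuming the upper half-plane coordinates (x,y) used above.

    import sympy as sp

    x, y, a = sp.symbols("x y a", positive=True)

    def d(f):
        # coefficients (A, B) of df = A dx + B dy
        return sp.diff(f, x), sp.diff(f, y)

    A1, B1 = d(x / y)             # d(x/y)
    A2, B2 = d((x - a) / y)       # d((x-a)/y)
    wedge = A1 * B2 - B1 * A2     # coefficient of dx ^ dy in d(x/y) ^ d((x-a)/y)
    assert sp.simplify(wedge - a / y**3) == 0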
From that it follows that
For any pair of geodesics, phantom or not, g and g_0, we have
∫_g_0ω_g = ϵ(g_0,g)=∫_ω_g_0∧ω_g .
Moreover for any (possibly ideal) triangle T in
∫_∂ Tω_g=0 .
§.§ The generic set and barycentric construction
For any oriented geodesic g in we denote by g̅ the geodesic with opposite orientation, and we write g≃ h, if either g=h or g=h̅. Let us also
denote the extremities of g by (∂^-g,∂^+g) in ×.
For n≥ 2, let us define the
singular set as
^n_1{(g_1,…,g_n)|∀ i,j, g_i≃ g_j } ,
and the
generic set to be
_⋆^n^n∖^n_1 .
We define a Γ-compact set in _⋆^n to be the preimage of a compact set in the quotient _⋆^n/Γ.
The barycenter of a family G=(g_1,…,g_n) of geodesics is the point (G) which attains the minimum of the sum of the distances to the geodesics g_i.
Choosing a uniformisation, the barycentric construction yields a -equivariant map from
:_⋆^n ,(g_1,…,g_n)(y) .
It follows from the existence of the barycenter map that the diagonal action of Γ on _⋆^n is proper. The barycentric section is then the section σ of the following fibration restricted to _⋆^n
F:()^n→^n ,
given by
σ=(σ_1,…,σ_n) ,
where σ_i(g_1,…,g_n) is the orthogonal projection of (g_1,…,g_n) on g_i.
Obviously
The barycentric section is equivariant under the diagonal action of on _⋆^n as well as the natural action of the symmetric group 𝔖_n.
§.§ Geodesically bounded forms
We abstract the properties of geodesic forms in the following definition:
Let α be a closed 1-form on . We say that α is geodesically bounded if
* α belongs to ^∞, ∇α is bounded.
* for any geodesic g, α(ġ) is in L^1(g, ṭ), ω_g∧α belongs to L^1() and
∫_gα =∫_ω_g∧α .
* Moreover for any (possibly ideal) triangle T in
∫_∂ Tα =0 .
We denote by the vector space of geodesically bounded forms.
We observe that any geodesically bounded form is closed and that any geodesic form belongs to .
§.§ Polygonal arcs form
We will have to consider geodesic polygonal arcs which are embedded finite union of oriented geodesic arcs
=γ_0∪⋯∪γ_p ,
such that γ_i joins γ_i^- to γ_i^+ and we have γ_i^-=γ_i-1^+, while γ_0^- and γ_p^+ are distinct points at infinity.
We say that γ_1,…,γ_p-1 are the interior arcs.
We have similarly to above
Given a polygonal arc =γ_0∪⋯∪γ_p there exists a closed 1-form ω_ so that
* the 1-form ω_ is supported on a 1-neighborhood of ,
* Let B be a ball containing a 1-neighbourhood of the interior arcs, such that outside of B the 1-neighbourhood V_0 of γ_0 and the 1-neighbourhood V_1 of γ_p are disjoint then
.ω_|_V_0=.ω_g_0|_V_0 , .ω_|_V_1=.ω_g_p|_V_1 .
where g_0 and g_p are the complete geodesics containing the arcs γ_0 and γ_p.
* For any element Φ of ,
ω_Φ()=Φ^*(ω_).
* For any geodesic g,
∫_gω_=ϵ(g,[γ_0^-,γ_p^+]).
*
Let be a polygonal arc with extremities at infinity x and y, then for any 1-form α in we have
∫_ω_∧α=∫_[x,y]α .
The construction runs as the one for geodesics. Let us fix a function f from ℝ^+ to [0,1] with support in [0,1] which is constant and equal to 1/2 on [0,1/2]. We extend (non-continuously) f to ℝ as an odd function. Let finally R_g be the “signed distance" to g, so that R_g̅=-R_g. We finally set ω_g=-(̣f∘ R_g). Then (1), (2), (3) and (4) are obvious.
Write ∖=U⊔ V, where U and V are open connected sets. We have that
∫_U ω_∧α=∫_U d(f∘ R_g)∧α=1/2∫_g α ,
by carefully applying Stokes' theorem. The same holds for the integral over V, giving the desired result.
The form ω_ is the polygonal arc form.
§.§ The Goldman symplectic form
Let S be a closed surface with Σ its universal cover that we identify with by choosing a complete hyperbolic structure on S. Given a representation ρ:π_1(S) → G we let E = Σ×_ρ𝔤 be the bundle over S by taking the quotient of the trivial bundle over Σ×𝔤 by the action of π_1(S) given by γ(x,v) = (γ(x), Adρ(γ) (v)). Let ∇ be the associated flat connection on the bundle E and denote by Ω^k(S)⊗(E) the vector space of k-forms on S with values in (E). Recall that ∇ gives rise to a differential
d^∇: Ω^k(S)⊗(E)→Ω^k+1(S)⊗(E) .
We say a 1-form α with values in (E) is closed if d^∇α=0 and exact if α=d^∇β.
Let then consider
C^1_ρ(S) {closed 1-forms with values in (E)} ,
E^1_ρ(S) {exact 1-forms with values in (E)} ,
H^1_ρ(S) C^1_ρ(S)/E^1_ρ(S) .
When S is closed, the Goldman symplectic form on H^1_ρ(S) is given by
(α,β) ↦ ∫_S tr(α∧β) ,
where for tangent vectors u and v to S: tr(α∧β)(u,v) := tr(α(u)β(v))-tr(α(v)β(u)).
Observe that if we consider complex bundles, the Goldman symplectic form is complex valued, while it is real valued for real bundles.
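Pointwise, the integrand of this pairing is easy to evaluate. The following small numpy sketch, with arbitrary placeholder matrices standing for α(u), α(v), β(u), β(v), illustrates the formula and its antisymmetry in (u,v).

    import numpy as np

    rng = np.random.default_rng(2)
    alpha_u, alpha_v, beta_u, beta_v = (rng.normal(size=(3, 3)) for _ in range(4))

    def goldman_integrand(au, av, bu, bv):
        # tr(alpha ^ beta)(u, v) = tr(alpha(u) beta(v)) - tr(alpha(v) beta(u))
        return np.trace(au @ bv) - np.trace(av @ bu)

    value_uv = goldman_integrand(alpha_u, alpha_v, beta_u, beta_v)
    value_vu = goldman_integrand(alpha_v, alpha_u, beta_v, beta_u)
    assert np.isclose(value_uv, -value_vu)   # the 2-form is antisymmetric in (u, v)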
§ UNIFORMLY HYPERBOLIC BUNDLES AND PROJECTORS
We introduce the notion of uniformly hyperbolic bundles over the unit tangent bundle of – see definition <ref>. This notion is a universal version of Anosov representations defined in <cit.>. More specifically, we explain in the projective case, that such objects, which are bundles with data, are associated to sections of the endomorphism bundle given by projectors. One of the main results – proposition <ref> – is a description of the variation of such a projector under a variation of the data defining the uniformly hyperbolic bundles. Finally, we recover Anosov representations as periodic cases of uniformly hyperbolic bundles. Uniformly hyperbolic bundle is the structure underlying the study of quasi-symmetric maps in <cit.>.
This notion has a further generalisation to all hyperbolic groups Γ, replacing by a real line bundle X over
∂_∞Γ×∂_∞Γ∖{(x,x)| x∈∂_∞Γ} ,
equipped with a Γ-action so that
X/Γ is the geodesic flow of Γ.
We will not discuss it in this paper, since this will uselessly burden our notation.
§.§ Uniformly hyperbolic bundles: definition
Let be the unit tangent bundle of . We denote by the vector field on generating the geodesic flow ϕ.
We consider the trivial bundle E=V×. For any flat connection ∇ on E, we consider the lift Φ^∇ of ϕ given by the parallel transport along the orbits of ∂_t. When D is the trivial connection on E, we just write ΦΦ^D and observe that Φ_t(x,u)=(ϕ_t(x),u) where x is in and v in V.
A rank k uniformly hyperbolic bundle is a pair (∇,h) where h is a section of the frame bundle on E and ∇ a trivializable connection on the bundle E, satisfying first the (standard) bounded cocycle hypothesis: ‖Φ_1^∇‖ is uniformly bounded.
Then we assume that we have a Φ^∇ invariant decomposition at every point x
E_x=L_x⊕ P_x ,
where L_x and P_x are subspaces with (L_x)=k and so that
* The bundle L⊗ P^* is contracting, that is there exist positive constant B and b so that for all positive real s, for all x in for all non-zero vector u and v in L_x and P_x respectively
‖Φ^∇_s(u)‖/‖ u‖≤ B e^-bs‖Φ^∇_s(v)‖/‖ v‖ .
* There exists a positive ϵ so that for any sequences (x_m) and (y_m) converging to a point x, and any sequences (u_m) and (v_m) converging to u and v in E_x, with u_m in L_x_m and v_m in P_y_m respectively, we have
|⟨ u| v⟩| ≤ (1-ϵ) ‖ u‖·‖ v‖ .
* There is a volume form on E, which is bounded with respect to h and ∇-parallel along orbits on the flow.
The metric and scalar products considered are with respect to the metric g_h for which h is orthonormal.
The fundamental projector associated to a uniformly hyperbolic bundle is the section of (E) given by the projection on L parallel to P.
Observe that we do not require a priori any continuity on the bundles L and P. When the dimension of L_x is 1, we talk of a projective uniformly hyperbolic bundle, when it is k, we talk of a rank k uniformly hyperbolic bundle.
The hypothesis (<ref>) is for simplification purposes. Using that hypothesis, one sees that (L) and (P) are respectively contracting and expanding bundles.
The bounded cocycle assumption, akin to a similar condition in Oseledets theorem, implies that there exists positive constants A, B and C so that
‖Φ_s^∇‖≤ A+ Be^Cs .
If we have a projection π from a set F to U, we write, for x in , F_x=π^-1(x).
Let (∇,h) be a uniformly hyperbolic bundle. Then there exists open sets 𝒱 and 𝒰 of (E) and (E^*), where k is the dimension of L_x, respectively as well as a positive real T, so that
* For every u in 𝒰_x and v in 𝒱_x, u and v are transverse.
* Φ_T sends 𝒰 to 𝒰 and is 1/2 Lipschitz.
* Φ_-T sends 𝒱 to 𝒱 and is 1/2 Lipschitz.
* L and P are sections of 𝒱 and 𝒰
This is a rephrasing of the definition of uniformly hyperbolic bundle: let us consider L and P as sections of (E) and (E^*) respectively and let ℒ and 𝒫 be the closure of the images of these sections. The second condition implies for any u in ℒ and v in 𝒫,
d(u,w)≥ϵ>0 , ∀ w not transverse to v .
It follows that we can find ϵ_0 so that the open sets
𝒱{u| d(u,ℒ)≤ϵ_0} , 𝒰{v| d(v,𝒫)≤ϵ_0} ,
satisfy the condition of the lemma.
As a classical consequence we have.
Let (∇,h) be a uniformly hyperbolic bundle. Let us choose a trivialisation so that ∇ is the trivial connection. Then
* The fundamental projector is a parallel along the geodesic flow and continuous bounded section of (E).
* L is constant along the strong stable foliation of the geodesic flow of .
* Finally P is constant along the strong unstable foliation of .
The second condition of the definition of uniformly hyperbolic bundles guarantees that there exist open sets 𝒱 and 𝒰 in (E) and (E^*) respectively, so that L and P are sections of 𝒱 and 𝒰 respectively and moreover the closure of 𝒱 and 𝒰 do not intersect. The first condition implies that for s large enough Φ_s is contracting as a map from 𝒱 to 𝒰. Thus L being an invariant section is continuous; the same holds for P. Hence is continuous.
Using now that the geodesic flow is contracting along the stable leaves towards the future, and contracting along the unstable leaves towards the past, it follows that L is constant along the strong stable leaves and P is constant along the strong unstable leaves.
This allows to define the limit maps of the unformly hyperbolic bundle (∇,h).
Let us choose a trivialization E=V× so that ∇ is trivial.
The limit map of the uniformly hyperbolic bundle is
ξ: ∂_∞→(V) ,
so that ξ(x)=L(y), if y belongs to the strong stable foliation defined by x.
Symmetrically, the dual limit map of the uniformly hyperbolic bundle is
ξ^*: ∂_∞→(V^*) ,
so that ξ^*(x)=P(y), if y belongs to the strong unstable foliation defined by x.
Finally let us define a notion of equivalence for uniformly hyperbolic bundles:
Two uniformly hyperbolic bundles (∇_0,h_0) and (∇_1,h_1) are equivalent if there is a section B of 𝖦𝖫(E) so that
* ∇_1=B^*∇_0 ,
* The metrics g_h_0 and B^*g_h_1 are uniformly equivalent.
§.§ Families of uniformly hyperbolic bundles
In order to study families of uniformly hyperbolic bundles, we will adopt two different gauge-fixing points of view:
* The fixed gauge point of view: we allow the frame to vary but fix the connection
* The fixed frame point of view: we allow the connection to vary but fix the frame.
A natural example comes from a projective Anosov representation of a cocompact surface group.
We call such an example, where the frame and the connections are invariant under the action of a cocompact surface group a periodic bundle. We discuss periodic bundles in <ref>.
For a vector bundle V over a topological space X, we denote by V_x the fiber at a point x in X.
A C^k-bounded variation of a uniformly hyperbolic bundle (∇,h) is a family (∇^t,h_t)_t∈ ]-ϵ,ϵ[ of connections and frames on E_0 so that
* (∇_0,h_0)=(∇,h),
* for all t, ∇^t is trivializable
* for all t close to 0, the C^k-derivatives of t↦∇_ḣ_t are bounded with respect to g_h_t.
We will see that any smooth family of periodic variation is of bounded variation.
Then we have the lemma:
Assume that ∇^t,h is a C^k bounded variation of a uniformly hyperbolic bundle where k∈ℕ∪{ω}. Then for t in some neighbourhood of zero, the bundle (∇^t,h_t) is uniformly hyperbolic. Let _t be the associated projector, then _t depends C^k on t.
We prove this lemma in paragraph <ref>.
§.§ The fundamental projector and its variation
Our goal is to compute the variation of the associated family of fundamental projectors of a bounded variation of a uniformly hyperbolic bundle. More precisely, let us assume we have a uniformly hyperbolic bundle (∇_0,h_0) with decomposition
E_0=L_0⊕ P_0 .
We prove in this paragraph the following proposition
Assume that we have a bounded variation ∇_t,h of the uniformly hyperbolic bundle (∇_0,h_0) in the fixed connection point of view, that is ∇_t is the trivial connection D.
The derivative of the fundamental projector at a point x on a geodesic g is given by
_0=[Ȧ,_0] + ∫_g^+ [dȦ,_0] ·_0+ ∫_g^-_0· [dȦ,_0] .
where g^+ is the geodesic arc from x to g(+∞) and g^- is the arc from x to g(-∞) (in other words with the opposite orientation to g), and Ȧ is the endomorphism so that
Ȧ h =.∂/∂ t|_t=0 h_t .
§.§.§ Preliminary: subbundles of (E_0)
We first adopt the fixed frame point of view.
Let ∇ be a flat connection on E_0,
Then is parallel for the induced flat connection on (E_0) along the flow. Let also F_0 be the subbundle of (E_0) given by
F_0{B∈(E_0)| B+ B=B} .
Observe that for any section C of (E_0), [C,] is a section of F_0 and that for any element A in F_0 we have (A)=0.
The bundle F_0 decompose as two parallel subbundles
F_0=F_0^+⊕ F_0^- ,
where we have the identification
F_0^+=P^*⊗ L , F_0^-=L^*⊗ P .
The projection of F_0 to F_0^+ parallel to F_0^- is given by
B↦ B,
while the projection on
F_0^- parallel to F_0^+ is given by
B↦ B.
Finally there exists positive constants A and a, so that for all positive time s, endomorphisms u^+ in F_0^+ and u^- in F_0^-, we have
‖Φ_-s(u^-)‖≤ A e^-as‖ u^-‖ , ‖Φ_s(u^+)‖≤ A e^-as‖ u^+‖ .
Consequently, for any section D of F_0, we write D=D^++D^- where D^± are sections of F_0^± according to the decomposition (<ref>).
Let us write
(E_0)=E_0^*⊗ E_0=(L^*⊗ L)⊕ (P^*⊗ P)⊕ (L^*⊗ P)⊕ (P^*⊗ L) ,
In that decomposition,
F_0=(P^*⊗ L)⊕(L^*⊗ P).
Let
F_0^+=P^*⊗ L , F_0^-=L^*⊗ P .
Thus, we can identify F_0^+ as the set of elements whose image lie in L and F_0^- are those whose kernel is in P. Thus
F_0^+ = {B∈ F_0| B=B}=
{B∈ F_0| B=0} ,
F_0^- = {B∈ F_0| B=0}={B∈ F_0| B=B} .
Then the equation for any element B of F_0,
B= B+ B ,
corresponds to the decomposition F_0=F_0^+⊕ F_0^-. Thus the projection on F_0^+ is given by B↦ B, while the projection on F_0^- is given by B↦ B.
The definition of F_0^+ and F_0^- and the corresponding contraction properties of the definition of a uniformly hyperbolic bundles give the contraction properties on F_0^+ and F_0^-.
§.§.§ The cohomological equation
Let σ be a bounded section of F_0, then there exists a unique section η of F_0 so that ∇_η=σ. This section η is given by
η(x)=∫_-∞^0 ·σ(ϕ_s(x)) ṣ-∫_0^∞σ(ϕ_s(x))· ṣ .
Classically, in dynamical systems, the equation ∇_η=σ is called the cohomological equation.
Since σ belongs to F_0^+ while σ belongs to F_0^- by lemma <ref>, the right hand side of equation (<ref>) makes sense using the exponential contraction properties given in the inequalities (<ref>). Indeed, for a positive s by lemma <ref> again,
‖Φ_-s(σ(ϕ_s) ·) ‖ ≤ Ae^-as‖σ‖_∞ ,
‖Φ_s(·σ(ϕ_-s)) ‖ ≤ Ae^-as‖σ‖_∞ .
It follows that using the above equation as a definition for η we have
η(ϕ_s(x))=∫_-∞^t·σ(ϕ_u(x)) ụ-∫_t^∞σ(ϕ_u(x))· ụ .
Thus
∇_η=σ +σ=σ ,
since σ is a section of F_0. Uniqueness follows from the fact that F_0 has no parallel section: indeed neither F_0^+ nor F_0^- have a parallel section.
§.§.§ Variation of the fundamental projector: metric gauge fixing
We continue to adopt the variation of connection point of view and consider after gauge fixing only hyperbolic bundles where the metric is fixed.
Let ∇^t,h give rise to a bounded variation of the uniformly hyperbolic bundle (∇_0,h), where ∇_0 is the trivial connection D.
Our first result is
The variation of the fundamental projector _t associated to (∇^t,h) is given by
(x)=∫_-∞^0 ( · [,∇̇_] )(x^s) ṣ- ∫_0^∞([,∇̇_]·)(x^s) ṣ ,
where x^s=ϕ_s(x) and ∇̇_(u)=.∂/∂ s|_t=0∇^t_ (u).
Let us distinguish for the sake of this proof the following connections. Let ∇ be the flat connection on E_0 and ∇^End the associated flat connection on (E_0). Then from the equation
^2=,
we obtain after differentiating,
+= .
Thus is a section of F_0. Moreover taking the variation of the equation ∇^End_=0 yields
∇^End_=-∇̇^End_=[,∇̇_ ] .
In other words, the variation of the fundamental projector is a solution of the cohomological equation ∇^End_η=σ, where σ=[,∇_] and η=. Applying proposition <ref>, yields the equation (<ref>).
§.§.§ The fixed connection point of view and the proof of proposition <ref>
We can now compute the variation of the projector in the fixed frame point of view and prove proposition <ref>.
We first need to switch from the fixed frame of view to the fixed connection point of view.
Let (∇^t,h) be a variation in the fixed frame point of view. Let A^t be so that ∇^t=A_t^-1 DA_t and A_0=Id. In particular, we have
∇̇_= D_Ȧ=̣̇A( ) .
Then the corresponding variation in the fixed connection point of view is ( D,h_s) where
h_t=A_t(h).
It follows that
ḣ= Ȧ(h) , ∇̇_ =̣̇A()=D_Ȧ .
Let now _0^t the projector – in the fixed connection point of view– associated to (D, h_t), while ^t is the projector associated to (∇^s, h). Obviously
_0^t=A_t^t A_t^-1 , _0_0^0=^0_0 .
Thus
_0=[Ȧ,]+ .
Using lemma <ref> and equations (<ref>), we have
=∫_-∞^0 · [,∇̇_] ∘ϕ_s ṣ- ∫_0^∞ [,∇̇_]·∘ϕ(s) ṣ ,
which yields (using the fact that _0=):
_0=[Ȧ,_0] +∫_-∞^0_0· [_0,∇̇_]∘ϕ_s ṣ- ∫_0^∞ [_0,∇̇_]·_0 ∘ϕ(s) ṣ .
From equation (<ref>), we get that
∫_0^∞ [_0,∇̇_]_0 ∘ϕ(s) ṣ=∫_g^+[_0,̣̇A]·_0=- ∫_g^+[̣̇A,_0]·_0 ,
while
∫_-∞^0 _0 [_0,∇̇_]∘ϕ(s) ṣ=-∫_g^-_0 [_0,̣̇A] =∫_g^-_0[̣̇A,_0] .
This concludes the proof of proposition <ref>.
§.§ Proof of the stability lemma <ref>
Let us first choose a continuous family of gauge transformations g so that g_t^*h_t=h. The bounded variation condition implies that for a given T, for any α, there exists β so that | s|≤β, implies that
‖Φ_T-Φ_T^s‖≤α ,
where Φ_Y^s is the parallel transport at time T for ∇^s and the norm is computed with respect to h. Thus from lemma <ref>, for α small enough, Φ_T^s preserves 𝒰 and is 3/4-Lipschitz, while the same holds for Φ_-T^s and 𝒱. This implies that for | s|≤β, (∇_s,h) is a uniformly hyperbolic bundle.
By the C^k bounded variation hypothesis, Φ_-T^s is a C^k-family of contracting maps, hence the fixed section is itself C^k as a function of s. This proves that the fundamental projector varies C^k in s.
§.§ Θ-Uniformly hyperbolic bundles
We now generalize the situation described in the previous paragraphs, using the same notational convention. Let V be a finite dimension vector space, let Θ=(K_1,…,K_n) be a strictly increasing n-tuple so that
1≤ K_1<…< K_n < (V) .
Then a Θ-uniformly hyperbolic bundle over is given by a pair (∇,h) for which there exists a Φ^∇- invariant decomposition
E_0=E_1⊕…⊕ E_n+1 ,
so that (∇,h) is uniformly hyperbolic of rank K_å for all å in {1,…,n}, with invariant decomposition given by
E_0=F_å⊕ F^∘_å , with F_å=E_1⊕…⊕ E_å , F^∘_å=E_å+1⊕…⊕ E_n+1 .
The flag (F_1,…,F_n) will be called a Θ-flag.
In other words, we generalized the situation described before for Grassmannians to flag varieties.
§.§ Projectors and notation
In this section, we will work in the context of a Θ-uniformly hyperbolic bundle ρ=(∇,h) associated to a decomposition of a trivialisable bundle
E=E_1⊕⋯⊕ E_n+1 .
Let us denote k_å(E_å) and K_å k_1+… k_å so that Θ=(K_1,…, K_n).
We then write for a geodesic g,
^å(g) ,
the projection on F_å=E_1⊕…⊕ E_å parallel to F_å^∘ E_å+1⊕…⊕ E_n+1.
When g is a phantom geodesic we set the convention that ^å(g).
Observe that all ^å(g) are well defined projectors in the finite dimensional vector space V which is the space of ∇-parallel sections of E. Or in other words the vector space so that in the trivialization given by ∇, E=V×.
Finally, we will consider a Θ-geodesic g, given by a geodesic g_0 labelled by an element å of Θ and write
(g) := ^å(g_0) , Θ_g := tr(^å(g_0))=K_å .
§.§ The periodic case
Let Σ be the universal cover of a closed surface S.
We denote by π the projection from Σ to S and p the projection from to Σ.
Let Γ be the fundamental group of S and ρ be a projective Anosov representation of Γ on some vector space ℰ. Let E be the associated flat bundle on S with connection ∇.
We will use in the sequel the associated trivialisation of the bundle E_0=p^*π^* E on which ∇ is trivial. Let us choose a Γ-invariant euclidean metric g on the bundle E_0. Let us finally choose a orthonormal frame h for g so that g=g_h.
It follows from the definition of projective Anosov representations that the corresponding bundle (∇,h) is uniformly hyperbolic. We call such a uniformly hyperbolic bundle periodic.
More generally, let 𝖯_Θ be the parabolic group stabilizing a Θ-flag. Then a 𝖯_Θ-Anosov representation defines a Θ-uniformly hyperbolic bundle.
Finally we observe
* Given a representation ρ, a different choice of a Γ-invariant metric yields an equivalent uniformly hyperbolic bundle.
* Similarly, two conjugate representations give equivalent uniformly hyperbolic bundles.
§ GHOST POLYGONS AND CONFIGURATIONS OF PROJECTORS
We introduce here our main tool, ghost polygons, and relate them to configurations of geodesics and correlation functions. This section is mainly concerned with definitions and notation.
We will consider the space of oriented geodesics of , and an oriented geodesic g as a pair (g^-,g^+) consisting of two distinct points in .
§.§ Ghost polygons
A ghost polygon is a cyclic collection of geodesics ϑ=(θ_1,…,θ_2p). The ghost edges are the geodesics (possibly phantom) θ_2i+1 , and the
visible edges are the even labelled edges θ_2i, such that
θ_2i+1^+=θ_2i^+ , θ_2i-1^-=θ_2i^- .
* The geodesics are allowed to be phantom geodesics,
* It will be convenient some time to relabel the ghost edges as ζ_iθ_2i+1.
* It follows from our definition that (θ̅_1,θ_2,θ̅_3,…,θ_2p) is closed ideal polygon.
We have an alternative point of view.
A configuration of geodesics of rank p is just a finite cyclically ordered set of p geodesics. We denote the cyclically ordered set of geodesics (g_1,…,g_p) by ⌈ g_1,…, g_p⌉. The cardinality of the configuration is called the rank of the configuration.
We see that the data of a ghost polygon and a configuration of geodesics is equivalent (see figure (<ref>)):
* we can remove the ghost edges to obtain a configuration of geodesics from a ghost polygon,
* conversely, given any configuration G=(g_1,…,g_p), the associated ghost polygon ϑ=(θ_1,…,θ_2p) is given by θ_2i g_i, θ_2i+1(g_i+1^-,g_i^+)
We finally say that two configurations are non-intersecting if their associated ghost polygons do not intersect.
Let us add some convenient definitions. Let ϑ=(θ_1,…,θ_2p) be a ghost polygon associated to the configuration ⌈ g_1,…, g_p⌉. We then define the opposite configurations as follows.
* For visible edge g_1 of G, the opposite configuration is tuple g_1^* (g_1,g_2,…,g_p,g_1).
* For ghost edge θ_1 of G, the ghost opposite configuration is the tuple θ_1^*(g_2,…,g_p,g_1).
Observe that both opposite configurations are not configurations per se but actually tuples – or ordered configurations.
We finally define the core diameter r(G) of a ghost polygon G to be the minimum of those R such that, if B(R) is the ball of radius R centered at the barycenter (G), then B(R) intersects all visible edges. We obviously have
The map G↦ r(G) is a continuous and proper map from _⋆^n/ to ℝ.
§.§ Θ-Ghost polygons
We now Θ-decorate the situation. Let as in paragraph <ref>, Θ=(K_1,…,K_n) with
K_å<K_å+1. Let G be a ghost polygon, a Θ-decoration is a map Å from the set of visible edges to 1,…,n.
We again have the equivalent description in terms of configurations. A Θ-configuration of geodesics of rank p is configuration (g_1,…,g_p) with a map Å – the Θ-decoration – from the collection of geodesics to {1,…,n}. We think of a Θ-decorated geodesic, or in short a Θ-geodesic, as a geodesic labelled with an element of Θ.
When ρ is a uniformly hyperbolic bundle and ^å(g) a fundamental projector associated to a geodesic g, we will commonly use the following shorthand.
Let G be ghost polygon (θ_1,θ_2,…,θ_2p) be given by configuration ⌈ g_1,…, g_p⌉.
* For visible edge g_i we write _i^Å_i^Å(i)(g_i).
* For visible edge g_i, the opposite ghost endomorphism is
_G^Å(g^*_i) _i·_i-1⋯_i+1·_i .
* For ghost edge ζ_i, the opposite ghost endormorphism is
_G^Å(ζ_i^*)_i·_i-1…_i+1 .
The reader should notice that in the product above, the indices are decreasing.
The opposite ghost endomorphisms have a simple structure in the context of projective uniformly hyperbolic bundles (that is when Θ={1}).
When Θ={1}, _G(θ^*_i)=_G(ρ) (θ_i).
Let G=(θ_1,…,θ_2p) be a ghost polygon with configuration ⌈ g_1,…, g_p⌉. If g_i^+ = g_i+1^- then p_i+1p_i = 0 and the equality holds trivially with both sides zero. We thus can assume there is a ghost edge ζ_i = θ_2i+1 for each i ∈{1,…,p}.
When Θ={1} all projectors have rank 1. Thus for a visible edge g_i
_G(g_i^*)=_i_i-1…_i+1_i = tr(_i…_i+1) _i= _G(ρ) (g_i) .
For a ghost edge ζ_i, as tr(_i+1_i) ≠ 0,
_G(ζ^*_i)=_i_i-1…_i+1 = _i_i+1 tr(_n…_1)/tr(_i_i+1)= _G(ρ) q ,
where q := _i_i+1/tr(_i_i+1). Then we see that q has trace 1, its image is the image of _i, and its kernel is the kernel of _i+1. Thus q is the rank 1 projector on the image of _i, parallel to the kernel of _i+1. Hence q=(ζ_i). The result follows.
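The identities in this proof are easy to test numerically. The sketch below uses random rank-1 projectors as placeholders for the fundamental projectors of a projective uniformly hyperbolic bundle and checks both displayed formulas.

    import numpy as np

    rng = np.random.default_rng(3)

    def rk1(v, w):
        return np.outer(v, w) / (w @ v)

    n, d = 4, 5
    p = [rk1(rng.normal(size=d), rng.normal(size=d)) for _ in range(n)]

    def prod(ms):
        out = np.eye(d)
        for m in ms:
            out = out @ m
        return out

    beta_G = np.trace(prod([p[i] for i in reversed(range(n))]))   # tr(p_n ... p_1)

    i = 1
    # visible edge g_i:  p_i p_{i-1} ... p_{i+1} p_i  =  beta_G(rho) p_i
    vis = prod([p[(i - k) % n] for k in range(n)] + [p[i]])
    assert np.allclose(vis, beta_G * p[i])

    # ghost edge between g_i and g_{i+1}:  p_i p_{i-1} ... p_{i+1}  =  beta_G(rho) q,
    # with q the rank-1 projector on the image of p_i parallel to the kernel of p_{i+1}
    gho = prod([p[(i - k) % n] for k in range(n)])
    q = p[i] @ p[(i + 1) % n]
    q = q / np.trace(q)
    assert np.allclose(gho, beta_G * q)
    assert np.allclose(q @ q, q)          # q is indeed a projector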
§.§ Correlation function
Given a Θ-configuration of geodesics G=⌈ g_1,…,g_p⌉, given by a p-tuple of geodesics (g^0_1,…,g^0_p) with a Θ-decoration Å, the correlation function associated to G is
_G: ρ↦_⌈ g_1,…,g_p⌉(ρ) := tr(^Å(p)(g^0_p)⋯^Å(1)(g^0_1))= tr((g_p)⋯(g_1)) ,
where is the projector associated to the uniformly hyperbolic bundle ρ. The reader should notice (again) that the geodesics and projectors appear in reverse order.
§.§ Analyticity in the periodic case
In this subsection we will first treat the case of complex bundles, that is representations
in 𝖲𝖫(n,ℂ) which are Anosov with respect to the (complex) parabolic group 𝖯^ℂ_Θ associated to Θ. We now have, as a consequence of <cit.>, the following
Let G be a ghost polygon.
Let ρ be an analytic family of 𝖯^ℂ_Θ-Anosov representations parametrized by the unit disk . Then, the
function u↦_G(ρ_u) is analytic. Moreover the map G↦_G is a continuous function with values in the analytic functions.
Indeed the correlation functions only depend on the limit curve of the representation, and thus the analyticity of the limit curve proved in <cit.> gives the result.
We deduce the general analyticity result from this proposition by complexifying the representation.
§ GHOST INTEGRATION
In this section, given a Θ-uniformly hyperbolic bundle ρ, a Θ-ghost polygon G and a 1-form α on with values in the endomorphism bundle of a uniformly hyperbolic bundle (of a special type), we produce a real number denoted
∮_ρ(G)α .
This procedure is called the ghost integration. We introduce the dual cohomology object Ω_ρ(G) which is a 1-form with values in the endomorphism bundle so that
∫_(α∧Ω_ρ(G))=∮_ρ(G)α .
The construction is motivated by the following formula that we shall derive and explain in paragraph <ref>
d_G(∇̇)=∮_ρ(G)∇̇ .
Observe that here we use an abuse of language: we use the same notation for 1-form on with values in (E) and their pull-backs which are 1-forms on U with values in (π^*(E)) where π is the projection from U to .
§.§ Bounded and geodesically bounded forms
In this paragraph, we define a certain type of 1-forms with values in (E), where E is a uniformly hyperbolic bundle (∇,h). All norms and metrics will be using the Euclidean metric g_h on E associated to a framing h.
A bounded 1-form ω on with values in (E) is a form so that ‖ω_x(u)‖_x is bounded uniformly for all (x,u) in U. Let us denote ^∞(E) the vector spaces of those forms and
‖ω‖_∞=sup_(x,u)∈ U‖ω_x(u)‖_x .
As an example of such forms, we have
* Given a Θ-geodesic g, given by a (possibly phantom) geodesic g_0, and an element å of Θ, the projector form is
β_ρ(g)ω_g (g)=ω_g ^å(g_0) .
where we used the notation (<ref>).
* Any Γ-equivariant continuous form in the case of a periodic bundle.
* Given (A_t)_t∈]-1,1[ a bounded variation of a uniformly hyperbolic bundle (see definition <ref>, the form
Ȧ.∂ A_t/∂ t|_t=0 ,
is by definition a bounded 1-form.
We do not require forms in ^∞(E) to be closed.
A form α is geodesically bounded if for any parallel section A of (E), (α A) is geodesically bounded as in definition <ref>. We denote by (E) the set of 1-forms which are geodesically bounded.
Again for any geodesic, the projector form β_ρ(g) is geodesically bounded. However Γ-equivariant forms are never geodesically bounded unless they vanish everywhere.
§.§ Line integration
Let ω be a 1-form in ^∞(E). Let x be a point on the oriented geodesic g and Q a parallel section of (E) along g. The line integration of ω – with respect to the uniformly hyperbolic bundle ρ – is given by
_x,g,(ω) ∫_g^+( [ω,] ) + ∫_g^-( [ω,]) .
Observe that since for a projector , we have
(A [B,]) =([,A] B) ,
we have the equivalent formulation
_x,g,(ω) = ∫_g^+(ω [,] ) + ∫_g^-(ω [,] ) .
Let now α be a section of (E) so that α̣ belongs to ^∞(E). We also define the primitive line integration of α by
_x,g,(α) (α(x) [,] )+ _x,g,(α̣) = ([α(x),] )+ _x,g,(α̣) .
§.§.§ Bounded linear forms and continuity
The line integration operator
ω↦_x,g,Q(ω) ,
a continuous linear form on ^∞(E).
This proposition is an immediate consequence of the following lemma
There exist positive constants B and b, only depending on and x, so that for any ω in ^∞(E)
if y is a point in g^+, z a point in g^- and denoting the tangent vector to the geodesic g.
|( [ω_y(),] )| ≤ Be^-bd(x,y)‖ω‖_∞ ,
‖ [,] ‖_z ≤ Be^-bd(x,z) ,
‖ [,]‖_y ≤ Be^-bd(x,y) .
Let us choose a trivialization of E so that ∇ is trivial. By hypothesis ω is in ^∞(E) and thus
‖ω_y()‖_y≤‖ω‖_∞ .
Then
σ: y↦σ(y) [ω_y(),] ,
is a section of F_0^-. Since is bounded – see proposition <ref> – there exists k_1 such that for all y
‖σ(y)‖_y≤ k_1 ‖ω‖_∞ .
By lemma <ref>, F_0^- is a contracting bundle in the negative direction, which means there exists positive constants A and a so that if y=ϕ_t(x) with t>0, then
‖Φ_-t^∇( σ(y))‖_x≤ A e^-at‖σ(y)‖_y ,
where ∇ is the connection.
However in our context, since we have trivialized the bundle, Φ_-t^∇ is the identity fiberwise, and thus combining the previous remarks we get that if y is in g^+, then
‖ [ω_y(),] ‖_x ≤ A e^-a(d(y,x)‖ω‖_∞ .
By Cauchy–Schwarz, for all endomorphisms U and V, we have
|(U V)|≤‖ U‖_x‖ V‖_x .
Thus combining equations (<ref>) and (<ref>) we obtain
|( [ω_y(),] )|≤‖‖_x ‖ [ω_y(),] ‖_x
≤ A e^-a(d(y,x)‖‖_x ‖ω‖_∞ ,
and the inequality (<ref>) follows. Similarly, [,] is a parallel section of F_0^-, thus the inequality (<ref>) is an immediate consequence of inequality <ref>.
The primitive line integration _x,g,Q(α) does not depend on the choice of x on g.
Let us write for the sake of this proof
_x_x,g,Q(α).
Let μ be the geodesic arc from y to x. Let us consider a parametrization of g so that x=g(s_0) and y=g(t_0). Then letting ω = α̣
_y-_x = ((α(y)-α(x)) [,]) + ∫_t_0^∞(ω(ġ) [,Q])ṭ +∫_t_0^-∞(ω(ġ) [,Q] )ṭ - ∫_s_0^∞(ω(ġ) [,Q])ṭ-∫_s_0^-∞(ω(ġ) [,Q] )ṭ = ∫_s_0^t_0(ω(ġ) ( [,Q]- [, Q]-[,Q] )) ṣ =0 ,
where the last equality comes form the fact that, since is a projector
[,Q] + [, Q]=[,Q] .
Finally we have,
Assume that β is bounded.
Then _m,Q(β)= 0.
Let ϖ=β̣. It follows that
(ϖ() [,])=∂/∂ t(β [,]) .
Thus by the exponential decay lemma <ref>, we have
∫_g^+(ϖ [,])=-(β(x) [,]) .
Similarly
∫_g^-(ϖ [,])=-(β(x) [,] ) .
It follows that
_x,g_0,Q(ϖ)= -(β(x) [,])-(β(x) [,] )=-(β(x) [,]) .
This concludes the proof.
§.§ Ghost integration: the construction
Let now G be a configuration of geodesics with a Θ-decoration Å. Let ρ be a Θ-uniformly hyperbolic bundle, where G=⌈ g_1,…, g_p⌉. Let _i=^Å(i)(g_i) and
_i=_i-1…_i+1 .
Let α be a closed 1-form with values in (E). Assume that α belongs to ^∞(E). Let β be a primitive of α – that is a section of (E) so that β̣=α – let
_ρ(G)(β)∑_i=1^n _g_i,_i(β) ,
The quantity _ρ(G)(β) only depends on the choice of α and not of its primitive.
Let β_0 and β_1 two primitives of α. Observe that Bβ_1-β_0 is constant, then
_G(β_1)-_G(β_0)=∑_i=1^p (B[_i,_i])=∑_i=1^p (B_i_i)-∑_i=1^p (B_i_i)=0 ,
since _i_i=_i-1_i-1.
We define the ghost integration of a 1-form α in ^∞(E) with respect to a Θ-ghost polygon G and a uniformly hyperbolic bundle ρ to be the quantity
∮_ρ(G)α_ρ(G)(β) ,
where β is a primitive of α.
Gathering our previous results, we summarize the important properties of ghost integration:
The ghost integration enjoys the following properties:
* The map α↦∮_ρ(G)α is a continuous linear form on ^∞(E).
* Assume α=β̣, where β is a bounded section of (E). Then
∮_ρ(G)α=0 .
We remark that the second item implies that ghost integration is naturally an element of the dual of the first bounded cohomology with coefficients associated to the bundle.
These are consequences of the corresponding properties for J_x,g,Q proved respectively in propositions <ref>, <ref> and <ref>.
§.§ Ghost integration of geodesic forms
Recall that we denoted by (E) the space of geodesically bounded forms, and observe that for any geodesic g, the projector form β_ρ(g) belongs to (E).
Let ρ be a Θ-uniformly hyperbolic bundle. Let G be configuration of geodesics of rank p associated to a ghost polygon ϑ(θ_1,…θ_2p) and a Θ-decoration. Assume that α is in (E). Then
∮_ρ(G)α = - (∑_i=1^2p(-1)^i∫_θ_i(α _G(θ_i^*))) ,
where _G^Å(θ_i^*) denotes the opposite ghost endomorphism to θ_i.
In the context of projective uniformly hyperbolic bundle, that is Θ={1}, then the previous formula is much simpler as an immediate consequence of lemma <ref>.
Let G be configuration of geodesics of rank p associated to a ghost polygon ϑ(θ_1,…θ_2p) and a Θ-decoration. Let ρ be a projective uniformly hyperbolic bundle. Assume that α is in (E). Then
∮_ρ(G)α = - _G(ρ)(∑_i=1^2p(-1)^i∫_θ_i(α (θ_i))) .
Observe that both formulae above do not make sense for a general bounded form.
Observe also that
Let G be a ghost polygon, and α a 1-form with values in the center of (E) then
∮_ρ(G)α=0 .
§.§.§ An alternative construction: a first step
Let x be a point in , γ_i^± the geodesic from x to g_i^±. Assume that α is in (E) then
∮_ρ(G)α = ∑_i=1^p(∫_γ_i^+(α _i [_i,_i])+ ∫_γ_i^-(α [_i,_i] _i )) .
Let us fix a point x_i on each of the g_i. Let β be a primitive of α so that β(x)=0. Let η_i be the geodesic from x to x_i. It follows that, since α is geodesically bounded, we have by the cocycle formula (<ref>)
∫_γ_i^+(_i [α,_i] _i)= ∫_η_i(_i [α,_i] _i)+∫_g_i^+(_i [α,_i] _i) .
Similarly
∫_γ_i^-(_i _i [α,_i])= ∫_η_i(_i _i [α,_i]+∫_g_i^-(_i _i [α,_i]) .
Observe now that, using the relation [,Q] + [, Q]=[,Q], we have
∫_η_i(_i [α,_i] _i)+∫_η_i(_i _i [α,_i])
=∫_η_i(_i [α,_i])=(_i [β(x_i),_i]) .
Thus, we can now conclude the proof:
_ρ(G)(β) = ∑_i=1^p(∫_γ_i^+(_i [α,_i] _i)+ ∫_γ_i^-(_i _i [α,_i])) = ∑_i=1^p(∫_γ_i^+(_i [_i,_i] α)+ ∫_γ_i^-([_i,_i] _i α)) .
§.§.§ Proof of proposition <ref>
Let us assume we have a ghost polygon ϑ = (θ_1,…,θ_2p) given by a configuration of geodesics G=⌈ g_1,…, g_p⌉. Let _i=(g_i) and α an element of (E). We have
_i [_i,_i] = _i_i- _i_i_i ,
[_i,_i] _i = _i_i_i- _i_i=_i_i_i- _i-1_i-1 .
Since α is geodesically bounded we have
∮_ρ(G)α
=∑_i=1^p(∫_γ_i^+(α _i _i )- ∫_γ_i+1^-(α _i _i)) -∑_i=1^p(∫_γ_i^+(α _i _i _i)-∫_γ_i^-(α _i _i _i) ) .
For i∈{1,…,p}, let ζ_i be the ghost edge joining g_i+1^- to g_i^+, that is ζ_i=θ_2i+1. For a closed form β which is geodesically bounded the cocycle formula (<ref>) yields
∫_γ_i^+β-∫_γ_i^-β=∫_g_iβ , ∫_γ_i+1^-β-∫_γ_i^+β=-∫_ζ_iβ .
Thus
_ρ(G)(α)
= ∑_i=1^p(∫_ζ_i(α_i_i) -∫_g_i(α_i_i_i)) .
To conclude we need first to observe that as g_i is a visible geodesic then _i_i_i is the opposite ghost endomorphism _G(g_i^*). On the other hand as ζ_j is a ghost edge then _j_j is the opposite ghost endomorphism _G(ζ_j^*). Thus
_ρ(G)(α)
= - (∑_i=1^2p(-1)^i∫_θ_i(α _G(θ_i^*))) .
§.§.§ Another altenative form with polygonal arcs
Let G = (θ_1,…,θ_2p) be a Θ-ghost polygon given by the configuration ⌈ g_1,…,g_p⌉ with g_i = θ_2i. Let x be the barycenter of G. Let x_i be the projection of x on g_i. For a ghost edge ζ_i = θ_2i+1, let us consider the polygonal arc _i given by
_i=a_i∪ b_i∪ c_i∪ d_i ,
where
* the geodesic arc a_i is the arc (along g_i+1) from g_i+1^- to x_i+1,
* the geodesic arc b_i joins x_i+1 to x,
* the geodesic arc c_i joins x to x_i,
* the geodesic arc d_i joins x_i to g_i^+.
We then have, using the same notation as in proposition <ref>
We have for α in (E)
∮_ρ(G)α = -∑_i∫_g_i(α _G(θ_i^*))
+
∫__i(α _G(θ_i^*)) .
The proof relies on the fact that for α in (E), and ζ_i a ghost edge we have
∫__iα=∫_ζ_iα .
Then the formula follows from proposition <ref>.
Ghost integration and Rhombus integration.
The process described for the ghost integration is a generalisation of the Rhombus integration described in <cit.>.
§.§ A dual cohomology class
Let ρ be a Θ-uniformly hyperbolic bundle. Let now G be a Θ-ghost polygon with configuration ⌈ g_1,…,g_p⌉ and Θ-decoration Å. Let ϑ=(θ_1,…,θ_2p) be the associated ghost polygon and denote by ζ_i=θ_2i+1 the ghost edges. Let _i be the associated polygonal arc associated to the ghost edge ζ_i as in paragraph <ref>.
The ghost dual form to ρ(G) is
Ω_ρ(G)∑_i=1^p(ω_g_i_G(g^*_i) - ω__i_G(ζ^*_i)) .
Observe that ρ(G) incorporates a Θ-decoration and so Ω_ρ(G) depends on the Θ-decoration.
We have the following properties
* The ghost dual form belongs to (E).
* Assume that α belongs to (E). Then
∮_ρ(G)α=∫_(α∧Ω_ρ(G)) .
* (exponential decay inequality) Finally,
there exist positive constants K and a only depending on ρ and R_0 so that if the core diameter of G is less than R_0, then
‖Ω_ρ(G)(y)‖_y≤ K e^-a d(y,(G)) ,
and, moreover, Ω_ρ(G)(y) vanishes when d(y,(G))≥ R_0+2 and d(y,g)>2 for all visible edges g of G.
Later we will need the following corollary which we prove right after we give the proof of the proposition.
We have the following bounds:
The map
ϕ_G : y ↦ ‖Ω_ρ(G)(y)‖_y ,
belongs to L^1(), and ‖ϕ_G‖_L^1() is bounded by a continuous function of the core diameter of G. The map
ψ_G,y : γ ↦ ‖Ω_ρ(G)(γ y)‖_y ,
belongs to ℓ^1(Γ), and ‖ψ_G,y‖_ℓ^1(Γ) is bounded by a continuous function of the core diameter of G. Finally the map
ϕ : H ↦ ‖Ω_ρ(H)‖_∞=
sup_y∈‖Ω_ρ(H)(y)‖_y ,
is bounded on every compact set of ^p_⋆
We first prove the exponential decay inequality (<ref>) which implies in particular that Ω_ρ(G) belongs to ^∞(E).
Let r(G) be the core diameter of G. Let as usual g_i be a visible edges, x be the barycenter of all g_i and x_i be the projection of x on g_i. By the construction of the polygonal arc _i, it follows that outside of the ball of radius r(G)+2 centered at x, then
Ω_ρ(G)= ∑_iω^-_g_i [_i,_i]_i + ∑_iω^+_g_i_i [_i,_i] ,
where ω^±_g_i=f^±_iω_g_i where f^±_i is a function with values in [0,1] with support in the 2-neighbourhood of the arc [x_i,g_i^±]. Then the decay given in equation (<ref>) is an immediate consequence of the exponential decay given in inequality (<ref>).
Observe now that Ω_ρ(G) is closed.
Let A be a parallel section of (E), then it is easily seen that (Ω_ρ(G)A) is geodesically bounded. It follows that Ω_ρ(G) is in (E).
Then the result follows from the alternative formula for ghost integration in proposition <ref>.
Given a ghost polygon H whose set of visible edges is g_H, and core diameter less than R_0. Let
V_H≤{y∈| d(y,(H))≤ R_0+2 or d(y,g)≤ 2 for some g ∈ g_H}
Observe that the volume of V_H(R) V_H∩ B((H),R) has some linear growth as a function of R, and more over this growth is controlled as a function of R_0. This, and the exponential decay inequality (<ref>), implies that ϕ_G, whose support is in V_H,
is in L^1() and that is norm is bounded by a constant that only depends on R_0. Similarly consider
F_H,y{γ∈Γ| d(γ(y),(H))≤ R_0+2 or d(γ(y),g)≤ 2 for g in g_H} .
and
F_H,y(R){γ∈ F_H,y| d(γ (y),(H))≤ R} .
Then the cardinal of the subset F_H,y(R) has linear growth depending only on R_0. Hence, for every y,
γ↦∑_γ∈ F_H,yK_0e^-a(d(γ y,(H)) ,
seen a function of H is in ℓ^1(Γ) and its ℓ^1 norm is bounded as a function of R_0.
Hence – as a consequence of the exponential decay inequality (<ref>) – for every y, the map
γ↦‖Ω_ρ(G)(γ y)‖_y ,
is in ℓ^1(Γ) and its ℓ^1 norm is bounded by a function of R_0.
Finally, from inequality (<ref>), we obtain that there is a constant R_1 only depending on R_0 such that
sup_y∈‖Ω_ρ(H)(y)‖_y≤sup_y∈ B((H), R_1)‖Ω_ρ(H)(y)‖_y +1 .
The bounded cocycle hypothesis, equation (<ref>), implies that sup_y∈ B((H), R_1)‖Ω_ρ(H)(y)‖_y is bounded by a function only depending on R_1,
and thus sup_y∈‖Ω_ρ(H)(y)‖_y is bounded by a function of R_0. This completes the proof of the corollary.
§.§ Derivative of correlation functions
In this paragraph, as a conclusion of this section, we relate the process of ghost integration with the derivative of correlation functions.
Let ∇_t,h be a bounded variation of a uniformly hyperbolic bundle ρ=(∇,h). Assume that G is a Θ-ghost polygon, then
d_G(∇̇)=∮_ρ(G)∇̇ .
This proposition is an immediate consequence of the following lemma, which is itself an immediate consequence of the definition of the line integration in paragraph <ref> and lemma <ref>:
Let (∇_t,h_t) be a family of uniformly hyperbolic bundles with bounded variation – see definition <ref> – associated to a family of fundamental projectors .
Then for a decorated geodesic g,
(_0(g)· Q)=_ρ(g),Q(∇̇) .
§.§ Integration along geodesics
For completeness, let us introduce ghost integration for geodesics: we define for any geodesically bounded 1-form α in Ξ(E) and a Θ-geodesic g,
∮_ρ(g)α := ∫_g tr(α_g) .
It is important to observe that, contrarily to a general ghost polygon, we only integrate geodesically bounded forms, not bounded ones. In particular, we cannot integrate variations of uniformly hyperbolic bundles.
§ GHOST INTERSECTION AND THE GHOST ALGEBRA
In this section we will effectively define and compute the ghost intersections of ghost polygons or geodesics. This is the objective of propositions <ref> and <ref>.
We define the associated ghost algebra in paragraph <ref> and relate in <ref> the corresponding ghost bracket for the projective case to the swapping bracket defined in <cit.> by the second author. Finally we relate the intersection of two ghost polygons to the correlation function of their bracket in the crucial proposition <ref>.
In the somewhat independent paragraph <ref>, we define and study natural maps from the ghost algebra to itself.
We will use freely the definitions given in section <ref> for ghost polygons.
§.§ Ghost intersection: definitions and computation
We proceed step by step with the definitions.
§.§.§ Intersecting two geodesics
Let g and h two Θ-geodesics (in other words, geodesics labelled with an element of Θ). Let us define
_ρ(g,h)∮_ρ(g)β^0_h ,
where β^0_hβ_h-Θ_h/(E) is the trace free part of β_h and Θ_h is defined in equation (<ref>).
A straightforward computation using equation (<ref>) and (<ref>)
then gives
_ρ(g,h)=ϵ(h,g)(_⌈ g,h⌉ (ρ) - 1/(E)Θ_gΘ_h) .
By convention, the quantity ϵ(g,h) for two Θ-decorated geodesics g and h is the same as the intersection of the underlying geodesics.
§.§.§ Intersecting a ghost polygon with a geodesic
Let ρ be a Θ-uniformly hyperbolic bundle. Let G be a Θ-ghost polygon and
h a Θ-geodesic. The ghost intersection of G and h
is
_ρ(G,h) -∮_ρ(G)β_ρ(h)=-∫_(β_ρ(h)∧Ω_ρ(G))=-∫_h(Ω_ρ(G) (h))-_ρ(h,G) .
By convention we set _ρ(h,G) -_ρ(G,h).
We will prove that we can effectively compute the ghost intersection. Then we have
Let G be a configuration of geodesics, associated to a ghost polygon ϑ=(θ_1,…,θ_2p).
The ghost intersection of h and G is given by
_ρ(G,h)=∑_i=1^2p(-1)^i+1ϵ(h,θ_i) _⌈ h, θ^*_i⌉(ρ) ,
where θ_i^* is the opposite configuration as in paragraph <ref>. In the projective case, that is Θ={1} we have
_ρ(G,h)=_G(ρ)(∑_i=1^2p(-1)^i+1ϵ(h,θ_i) _⌈ h, θ_i⌉(ρ)) .
§.§.§ Intersecting two ghost polygons
We define the ghost intersection of two ghost polygons, or equivalently of two configurations of geodesics G and H, to be
_ρ(G,H)∮_ρ(G)Ω_ρ(H)=∫_(Ω_ρ(H)∧Ω_ρ(G)) .
We can again compute this relatively effectively:
The ghost intersection of the two configuration G and H, associated respectively to the ghost polygons ϑ=(θ_i)_i∈ I, with I=[1,2p], and ς=(σ_j)_j∈ J, with J=[1,2m], respectively, is given by
_ρ(G,H) = ∑_i∈ I,j∈ J(-1)^i+jϵ(σ_j,θ_i) _⌈σ^*_j,θ^*_i⌉(ρ) .
In the projective case, this simplifies as
_ρ(G,H) = _G(ρ)_H(ρ)(∑_i∈ I,j∈ J(-1)^i+jϵ(σ_j,θ_i) _⌈σ_j,θ_i⌉(ρ)) .
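The projective formula above can be evaluated directly from the edge data. The following Python sketch is a toy illustration: geodesics are pairs of boundary angles, the rank-1 projectors attached to visible edges are random placeholders (they are not produced by an actual limit map), ghost-edge projectors are built as in the lemma of the previous section, and the intersection number eps is the simplified transverse version (shared endpoints are not handled). The only feature being tested is the structure of the formula, for instance its antisymmetry in (G,H).

    import math
    import numpy as np

    rng = np.random.default_rng(1)

    def ccw(a, b, c):
        return ((b - a) % (2 * math.pi)) < ((c - a) % (2 * math.pi))

    def eps(g, h):
        # simplified intersection number: +-1 for transverse geodesics, 0 otherwise
        gm, gp = g
        hm, hp = h
        if len({gm, gp, hm, hp}) < 4:
            return 0.0
        if ccw(gp, hm, gm) == ccw(gp, hp, gm):
            return 0.0
        return 1.0 if ccw(gp, hp, gm) else -1.0

    def rk1(v, w):
        return np.outer(v, w) / (w @ v)

    def correlation(projs):
        prod = np.eye(projs[0].shape[0])
        for q in projs:
            prod = q @ prod
        return np.trace(prod)

    def ghost_polygon(config, projs):
        # edges of the ghost polygon: (endpoints, projector, sign), sign +1 for
        # visible edges (even index) and -1 for ghost edges (odd index)
        p = len(config)
        edges = []
        for i in range(p):
            g, q = config[i], projs[i]
            g_next, q_next = config[(i + 1) % p], projs[(i + 1) % p]
            edges.append((g, q, 1.0))
            ghost = (g_next[0], g[1])            # ghost edge (g_{i+1}^-, g_i^+)
            qg = q @ q_next                      # image of q, kernel of q_next
            edges.append((ghost, qg / np.trace(qg), -1.0))
        return edges

    def ghost_intersection(confG, projG, confH, projH):
        # right-hand side of the projective formula for the ghost intersection
        total = 0.0
        for tg, qg, sg in ghost_polygon(confG, projG):
            for th, qh, sh in ghost_polygon(confH, projH):
                total += sg * sh * eps(th, tg) * np.trace(qh @ qg)
        return correlation(projG) * correlation(projH) * total

    confG = [(0.1, 2.0), (2.4, 4.0), (4.4, 6.0)]     # a configuration of three geodesics
    confH = [(1.0, 3.1), (3.5, 5.2), (5.6, 0.6)]     # a second one, in generic position
    projG = [rk1(rng.normal(size=3), rng.normal(size=3)) for _ in confG]
    projH = [rk1(rng.normal(size=3), rng.normal(size=3)) for _ in confH]

    assert np.isclose(ghost_intersection(confG, projG, confH, projH),
                      -ghost_intersection(confH, projH, confG, projG))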
§.§ Θ-Ghost bracket and the ghost space
We develop a more formal point of view. Our goal is proposition <ref> that identifies the intersection as a correlation function. Let 𝒜 be the vector space generated by Θ-ghost polygons (or equivalently configurations of Θ-geodesics) and Θ-geodesics. We add as a generator the element , and call it the Casimir element. By definition, we say has rank 0. We will see that the Casimir element will generate the center.
Recall also that we can reverse the orientation on geodesics. The corresponding reverse orientation on configuration is given by
⌈ g_1,… ,g_p⌉⌈g̅_p,… ,g̅_1⌉.
We define the bracket on the basis of 𝒜 and extend it by linearity.
* The bracket of with all elements is 0.
* Let G and H be two configurations of Θ-geodesics, associated respectively to the ghost polygons ϑ=(θ_i)_i∈ I, with I=[1, 2p] and ς=(σ_j)_j∈ J, with J=[1,2m] respectively. Their Θ-ghost bracket is given by
[G,H] ∑_i∈ I,j∈ Jϵ(σ_j,θ_i)(-1)^i+j⌈θ^*_i,σ^*_j⌉ ,
where we recall that θ^*_j is the opposite ghost configuration defined in paragraph <ref>.
* Let g and h be two Θ-geodesics and G a ghost polygon as above. Then we define
[g,h] ϵ(h,g)(⌈ h,g⌉ -Θ_hΘ_g·) ,
[G,h] ∑_i∈ I
(-1)^i+1ϵ(h,θ_i) ⌈ h,θ^*_i⌉ = -[h,G] ,
Finally 𝒜 equipped with the ghost bracket is called the ghost algebra.
We observe that the ghost bracket is antisymmetric. However, the Θ-ghost bracket does not always satisfy the Jacobi identity: there are some singular cases. We actually prove in the Appendix <ref>, as Theorem <ref> the following result
Assume A, B, and C are ghost polygons and that
V_A∩ V_B∩ V_C=∅ ,
where V_A, V_B and V_C are the set of vertices of A, B and C respectively, then
[A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0 .
Finally we now extend the map on 𝒜 so as to define _G(ρ) for G an element of 𝒜, while defining
_(ρ) 1/(E) .
The purpose of this formal point of view is to rewrite Propositions <ref> and <ref> as the simple formula:
We have for G, H ghost polygons then
_ρ(G,H)=_[G,H](ρ) .
This formula will allow us to compute recursively Poisson brackets of correlation functions.
§.§ The projective case: swapping and ghost algebras
Throughout this section, we will restrict ourselves to the projective case, that is Θ={1}.
§.§.§ Ghost polygons and multifractions
In <cit.>, the second author introduced the swapping algebra ℒ consisting of polynomials in variables (X,x), where X and x are points in S^1, together with the relation (x,x)=0. We introduced the swapping bracket defined on the generators by
[(X,x),(Y,y)]=ϵ((Y,y),(X,x)) ((X,y)· (Y,x) ) .
We proved that the swapping bracket gives to the swapping algebra the structure of a Poisson algebra.
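As a sanity check on this Poisson structure, here is a small symbolic sketch in Python with sympy. It realises the generators (X,x) as formal symbols, implements the bracket on generators with an assumed normalisation of the linking number ϵ, extends it to polynomials as a biderivation (the Leibniz rule in each argument), and verifies that antisymmetry and the Jacobi identity hold on a triple of geodesics with six distinct endpoints. This is a numerical illustration, not a proof; degenerate configurations with shared endpoints, where the precise convention of <cit.> matters, are avoided.

```python
import random
import sympy as sp

def cyc(a, b, c):
    # sign of the counterclockwise cyclic order of three angles; 0 if two coincide
    if len({a % 360, b % 360, c % 360}) < 3:
        return 0
    return 1 if ((b - a) % 360) < ((c - a) % 360) else -1

def eps(g, h):
    # assumed normalisation of the linking number of two oriented geodesics
    (gp, gm), (hp, hm) = g, h
    return sp.Rational(cyc(gp, hp, gm) - cyc(gp, hm, gm), 2)

SYM, REV = {}, {}

def pair(P, m):
    # formal generator (P, m) of the swapping algebra, with (p, p) = 0
    if P == m:
        return sp.Integer(0)
    if (P, m) not in SYM:
        s = sp.Symbol(f"b_{P}_{m}")
        SYM[(P, m)], REV[s] = s, (P, m)
    return SYM[(P, m)]

def bracket_gen(k1, k2):
    # swapping bracket on generators: [(X,x),(Y,y)] = eps((Y,y),(X,x)) (X,y)(Y,x)
    (X, x), (Y, y) = k1, k2
    return eps((Y, y), (X, x)) * pair(X, y) * pair(Y, x)

def bracket(P, Q):
    # extension to polynomials as a biderivation (Leibniz rule in each slot)
    out = sp.Integer(0)
    for s1 in P.free_symbols:
        for s2 in Q.free_symbols:
            if s1 in REV and s2 in REV:
                out += sp.diff(P, s1) * sp.diff(Q, s2) * bracket_gen(REV[s1], REV[s2])
    return sp.expand(out)

random.seed(1)
pts = random.sample(range(360), 6)          # six distinct endpoints: generic configuration
g1, g2, g3 = (pts[0], pts[3]), (pts[1], pts[4]), (pts[2], pts[5])
P1, P2, P3 = pair(*g1), pair(*g2), pair(*g3)

print(sp.simplify(bracket(P1, P2) + bracket(P2, P1)))      # antisymmetry: 0
jacobi = (bracket(P1, bracket(P2, P3)) + bracket(P2, bracket(P3, P1))
          + bracket(P3, bracket(P1, P2)))
print(sp.simplify(jacobi))                                  # Jacobi identity: 0
```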
We also introduced the multifraction algebra ℬ, which is the vector subspace of the fraction algebra of ℒ generated by the multifractions; these are the elements defined, when X and x are n-tuples of points in the circle and σ is an element of the symmetric group 𝔖(n), by
[X,x;σ]∏_i=1^n (X_i,x_σ(i))/∏_i=1^n(X_i,x_i) .
We proved that the multifraction algebra is stable by the Poisson bracket, while it is obviously stable by multiplication.
Let us consider the algebra ℬ_0 which is generated as a vector space by the multifraction algebra to which we add extra generators denoted ℓ_g for any geodesic g – which are formally logarithms of geodesics ℓ_g=log(g) as well as a central element ; we finally extend
the swapping bracket to ℬ_0 by adding
[ℓ_g,ℓ_h]1/g h[g,h]+ϵ(g,h) ,
[G,ℓ_h] 1/h[G,h] -[ℓ_h,G] .
We call ℬ_0 with the extended swapping bracket, the extended swapping algebra.
Orientation reversal is defined on the generators by ℓ_g↦ℓ_g̅.
We then have
The extended swapping algebra is a Poisson algebra. The reversing isomorphisms antipreserves the Poisson structure:
[G,H]=-[ G, H].
This is just a standard check that adding “logarithmic derivatives” to a Poisson algebra still gives a Poisson algebra.
We first see that
∂_g: z↦ [ℓ_g,z]=1/g[g,z] ,
is a derivation on the fraction algebra of the swapping bracket.
Indeed,
∂_g([z,w]) = 1/g[g,[z,w]]
=1/g([z,[g,w]]-[w,[g,z]])
=[z,∂_g(w)]+[∂_g(z),w] ,
where the correction terms [g,w]·[z,1/g] and [g,z]·[1/g,w], which appear when the factor 1/g is moved inside the brackets, cancel each other.
Moreover, the bracket of derivations gives
[∂_g,∂_h](z)=[[ℓ_g,ℓ_h],z] . Let us check this last point:
∂_g (∂_h (z))= 1/g[g,1/h[h,z]]=-1/gh^2[g,h][h,z] + 1/g h[g,[h,z]] .
Thus
[∂_g,∂_h](z)=[g,h](-[h,z]/gh^2- [g,z]/hg^2) + 1/gh[[g,h],z]=[[g,h]/gh,z] ,
which completes the proof of the proposition.
§.§ The projective case: ghost algebra and the extended swapping algebra
In the projective case, it is convenient to consider the free polynomial algebra 𝒜_P generated by the ghost polygons, and to extend the ghost bracket to 𝒜_P by the Leibniz rule.
In this paragraph, we will relate the algebras 𝒜_P and ℬ_0, more precisely we will show:
There exists a homomorphism of commutative algebras
π:𝒜_P→ℬ_0 ,
which is surjective, preserves the bracket and the orientation-reversing isomorphism:
[π(A),π(B)]=π[A,B] , π(A)=π(A) .
Finally if A belongs to the kernel of π, then for any projective Anosov representation ρ, _A(ρ)=0.
Thus, 𝒜_P/ker(π) is identified, as an algebra with bracket, with ℬ_0; in particular 𝒜_P/ker(π) is a Poisson algebra.
This will allow in the applications to reduce our computations to calculations in the extended swapping algebra, making use of the fact that the extended swapping algebra is a Poisson algebra by proposition <ref>.
Unfortunately, we do not have the analogue of the swapping bracket in the general Θ-case, although the construction and result above suggest to find a combinatorially defined ideal ℐ in the kernel of (ρ) for any ρ, so that 𝒜/ℐ satisfies the Jacobi identity.
§.§.§ From the ghost algebra to the extended swapping algebra
In this paragraph, we define the map π of Theorem <ref>. The map π is defined on the generators by
g ⟼ π(g)ℓ_g ,
G=⌈ g_1,… ,g_p⌉ ⟼ π(G) [X,x;σ]=∏_i=1^n (g^+_i,g^-_i+1)/∏_i=1^n(g^+_i,g^-_i) .
where X=(g^+_i), x=(g^-_i), σ(i)=i+1.
Cyclicity is reflected by
π(⌈ g_1,… ,g_p⌉) = π(⌈ g_2,… ,g_p, g_1 ⌉) .
Conversely, we then have the following easy construction.
Let X=(X_1,…,X_k), x=(x_1,…, x_k), and let g_i be the geodesic (X_i,x_i). Let σ be a permutation of {1,…,k} and write σ=σ_1⋯σ_q for the decomposition of σ into commuting cycles σ_i of order k_i with support I_i. For every i, let m_i be in I_i and let us define
h^i_j= g_σ_i^j-1(m_i) ,
G_i=⌈ h^i_1, … h^i_k_i⌉ .
We then have with the above notation
[X,x;σ]=π(G_1… G_q) .
The map π is surjective.
In the sequel, the decomposition (<ref>) will be referred to as the polygonal decomposition of the multifraction [X,x;σ]. We also obviously have
Any tuple of ghost polygons is the polygonal decomposition of a multifraction.
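The polygonal decomposition is straightforward to implement. The sketch below (Python with sympy; all symbol names are purely illustrative) decomposes a permutation into its cycles, builds the corresponding ghost polygons, and checks symbolically that the product of their images under π recovers the multifraction [X,x;σ].

```python
import sympy as sp

def pair(P, m):
    # formal generator (P, m) of the swapping algebra
    return sp.Symbol(f"b_{P}_{m}")

def pi_of_polygon(geods):
    # pi(<g_1,...,g_p>) = prod (g_i^+, g_{i+1}^-) / prod (g_i^+, g_i^-)
    p = len(geods)
    num = sp.Mul(*[pair(geods[i][0], geods[(i + 1) % p][1]) for i in range(p)])
    den = sp.Mul(*[pair(geods[i][0], geods[i][1]) for i in range(p)])
    return num / den

def multifraction(X, x, sigma):
    # [X, x; sigma] with sigma given as a list of images, indices starting at 0
    n = len(X)
    num = sp.Mul(*[pair(X[i], x[sigma[i]]) for i in range(n)])
    den = sp.Mul(*[pair(X[i], x[i]) for i in range(n)])
    return num / den

def cycles(sigma):
    # decomposition of the permutation into cycles (m, sigma(m), sigma^2(m), ...)
    seen, out = set(), []
    for m in range(len(sigma)):
        if m in seen:
            continue
        orbit, j = [], m
        while j not in seen:
            seen.add(j)
            orbit.append(j)
            j = sigma[j]
        out.append(orbit)
    return out

def polygonal_decomposition(X, x, sigma):
    # one ghost polygon per cycle, as in the lemma above
    return [[(X[j], x[j]) for j in orbit] for orbit in cycles(sigma)]

X, x = ("X1", "X2", "X3", "X4", "X5"), ("x1", "x2", "x3", "x4", "x5")
sigma = [2, 0, 1, 4, 3]                  # a 3-cycle times a 2-cycle
lhs = multifraction(X, x, sigma)
rhs = sp.Mul(*[pi_of_polygon(G) for G in polygonal_decomposition(X, x, sigma)])
print(sp.simplify(lhs / rhs))            # expected: 1
```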
§.§.§ The map π and the evaluation
For any multifraction B=[X,x;σ] and projective Anosov representation ρ associated to limit curves ξ and dual limit curves ξ^*, we define
^P_B(ρ)∏_i ⟨V_i,v_σ(i)|⟩/∏_i ⟨V_i,v_i|⟩ ,
where V_i is a non-zero vector in ξ^*(X_i) while v_i is a non-zero vector in ξ(x^i).
Given ρ, we now extend G↦_G(ρ) and ^P_B(ρ) to homomorphisms of commutative free algebras to 𝒜_p and ℬ_0.
We then have the following result which follows at once since we are only considering rank 1 projectors.
We have, for all projective Anosov representations ρ
^P_π(G)(ρ)=_G(ρ) , _G(ρ)=_G̅(ρ^*) ,
This proposition implies that for every G in the kernel of π, for every ρ, _G(ρ)=0.
§.§.§ Swapping bracket
We now compute the brackets of multifractions. We shall use the notation of paragraph <ref> where the opposite configuration g^* of a ghost or visible edge g is defined. Observe that g^* is an ordered configuration. Then we have
Let G and H be two multifractions that are images of ghost polygons:
G = π(θ_1,…,θ_2p) and H = π(ζ_1,…,ζ_2q). Then their swapping bracket is given by
[G ,H ] = (G H ( ∑_i,jϵ(ζ_j,θ_i)(-1)^i+jπ(⌈θ_i,ζ_j ⌉)) ) .
Moreover, for g=(X,x) and h=(Y,y) geodesics, we have in the fraction algebra of the swapping algebra.
[ℓ_h,ℓ_g] = ( ϵ(g,h) π(⌈ g,h⌉)) .
[G , ℓ_h] = (G ( ∑_iϵ(h,θ_i)(-1)^i+1π(⌈θ_i,h⌉))) .
Moreover, using the notation θ^*_i for the opposite edge, we have, for every i and j
π(⌈θ_i^*,ζ_j^*⌉ )=G H π(⌈θ_i,ζ_j⌉) .
In this proof, we will omit to write π and confuse a ghost polygon and its image under π. Equation (<ref>) follows at once from the definition. Let now G=⌈ g_1,…, g_p⌉, let η_i be the ghost edges joining g_i+1^- to g_i^+. Then we may write in the fraction algebra of the swapping algebra
⌈ g_1,…, g_p⌉ =∏_i=1^pη_i/∏_i=1^p g_i .
Using logarithmic derivatives we then have
1/G [G ,ℓ_h] =∑_i=1^p (1/h η_i[η_i,h] -1/h g_i[g_i,h] )=∑_i=1^p (ϵ(h,η_i)⌈η_i,h⌉ -ϵ(h,g_i)⌈ g_i,h⌉) ,
which gives equation (<ref>).
Writing now
G =⌈ g_1,…, g_p⌉ =∏_i=1^pη_i/∏_i=1^p g_i ,
H =⌈ h_1,…, h_q⌉ =∏_i=1^qν_i/∏_i=1^q h_i ,
where η_i and ν_i are ghost edges of G and H respectively, we get
[G ,H ] /G H = ∑_(i,j)(1/g_i h_j[g_i,h_j] -1/g_i ν_j[g_i,ν_j] +1/η_i ν_j[η_i,ν_j] - 1/η_i h_j[η_i,h_j] ) = ∑_(i,j)(ϵ(h_j,g_i)⌈ g_i,h_j⌉ - ϵ(ν_j,g_i)⌈ g_i,ν_j⌉ +ϵ(ν_j,η_i)⌈η_i,ν_j⌉ - ϵ(h_j,η_i)⌈η_i,h_j⌉) ,
which is what we wanted to prove. The equation (<ref>) follows from the definition of the map π.
As a corollary we obtain
The map π preserves the bracket.
The proof follows at once from proposition <ref> and <ref> which computes the ghost intersection and recognizing each term as the correlation functions of a term obtained in the corresponding ghost bracket in proposition <ref>.
§.§.§ Proof of Theorem <ref>
We have proved all that we needed to prove:
the theorem follows from corollary <ref> and <ref>, as well as lemma <ref>.
§.§ Natural maps into the ghost algebra
Let w be a p-multilinear map from the ghost algebra to itself. We say that w is natural if, for every tuple of integers (n_1,…,n_p), there exist an integer q and a real number A such that, given a tuple of ghost polygons G=(G_1,…, G_p) with G_i in ^n_i, we have
w(G_1,…,G_p)=∑_i=1^q λ_i H_i ,
where the H_i are ghost polygons, the λ_i are real numbers less than A in absolute value and, moreover, every visible edge of H_i is a visible edge of one of the G_i.[The existence of q is actually a consequence of the definition: there are only finitely many polygons with a given set of visible edges]
We will extend the definition of the core diameter to any element of the ghost algebra
by writing, whenever the H_i are distinct ghost polygons,
r(∑_i=1^q λ_i H_i)sup_i=1,…,q(|λ_i| r(H_i)) ,
We also recall that the core diameter of a ghost polygon only depends on the set of its visible edges. We then define the core diameter of a tuple of polygons G=(G_1,…,G_n) as the core diameter of the union of the sets of edges of the G_i's.
We then have the following inequality of core diameters for a natural map w, G=(G_1,…,G_p) and q and A as in the definition
r(w(G))≤ A r(G) .
We now give an example of a natural map
The map (G_1,…,G_n)↦[G_1,[G_2,[… [G_n-1,G_n]…]]]
is a natural map.
This follows at once from the definition of the ghost bracket and a simple induction argument.
§ GEODESIC AND CYCLIC CURRENTS
In this section, building on the classical notion of geodesic currents, we define the notion of higher order geodesic currents, called cyclic currents. Among them we identify integrable currents, show how they can average correlation functions and produce examples of them.
Recall that is the set of oriented geodesics in . The set of Θ-geodesics is then denoted ×Θ.
§.§ Cyclic current
First recall that a signed measure is a linear combination of finitely many positive measure. Any signed measure is the difference of two positive measures. A cyclic current is a Γ-invariant signed measure invariant under cyclic permutation.
As a first example let us consider for μ and ν geodesic current, the signed measure μ∧ν given by
μ∧ν1/2ϵ (μ⊗ν -ν⊗μ) ,
where we recall that ϵ(g,h) is the intersection number of the two geodesics g and h.
The signed measure μ∧ν is a cyclic current supported on intersecting geodesics. Moreover μ∧ν=-ν∧μ.
We have
2∫_^2/Γ f(g,h) μ̣∧ν(g,h)=
∫_^2/Γ f(g,h) ϵ(g,h)(μ̣(g) ν̣(h)-ν̣(g) μ̣(h) )
=
∫_^2/Γ f(h,g) ϵ(h,g)(μ̣(h) ν̣(g)-ν̣(h) μ̣(g) )
=
∫_^2/Γ f(h,g) ϵ(g,h)(ν̣(h) μ̣(g)-μ̣(h) ν̣(g) )
=2∫_^2/Γ f(h,g) μ̣∧ν(g,h) .
Hence μ∧ν is cyclic. The last assertions are obvious.
Our main definition is the following, let ρ be a Θ-Anosov representation of Γ, the fundamental group of a closed surface.
We give several definitions, let w be a natural map from ^p_1×⋯×^p_q to ^m
* a w-cyclic current is a Γ-invariant measure μ=μ_1⊗⋯⊗μ_q where μ_i are Γ-invariant cyclic currents on ^n_i,
* the w-cyclic current μ is a (ρ,w)-integrable current if there exists a neighborhood U of ρ in the moduli space of (complexified) Θ-Anosov representations of Γ, and a positive function F in L^1(_⋆^k/Γ,η) so that for all σ in U, and G in _⋆^k;
|_w(G)(σ)|≤ F(G) ,
where F_0 is the lift of F to _⋆^k.
* When w is the identity map , we just say a current is ρ-integrable, instead of (ρ,)-integrable.
* A current of order k, is w-integrable or integrable if it is (ρ,w)-integrable or ρ-integrable for all representations ρ.
§.§.§ Γ-compact currents
A Γ-invariant w-cyclic current μ is Γ-compact if it is supported on a Γ-compact set of _⋆^p. Obviously a Γ-compact cyclic current is integrable for any natural map w.
Here is an important example of a Γ-compact cyclic current:
Let ℒ be a geodesic lamination on S with component of its complement C being a geodesic triangle. Let π:↦ S be the universal covering of S and x a point in C
Then
π^-1(C)=⋃_i∈π^-1(x) C_i .
The closure of each C_i is an ideal triangle with cyclically ordered edges (g_i^1,g_i^2,g_i^3). We consider the opposite cyclic ordering (g_i^3,g_i^2,g_i^1). The notation δ_x denotes the Dirac measure on X supported on a point x of X. Then we obviously have
The measure defined on ^p by
μ^*_C=1/3∑_i∈π^-1(x)(δ_(g_i^1,g_i^3,g_i^2)+δ_(g_i^2,g_i^1,g_i^3)+δ_(g_i^3,g_i^2,g_i^1)) ,
is a Γ-compact cyclic current.
§.§.§ Intersecting geodesics
Let us give an example of integrable current.
Let μ be a Γ-invariant cyclic current supported on pairs of intersecting geodesics. Assume furthermore that μ(^2/Γ) is finite. Then μ is integrable.
This follows at once from the following
lemma.
Let ρ_0 be a Θ-Anosov representation. Then there exists a constant K_ρ in an neighborhood U of ρ_0 in the moduli space of Anosov representations, such that for any ρ in U, for any pair of intersecting geodesics
|_⌈ g, h⌉(ρ)|≤ K_ρ .
Given any pair of geodesics (g_1,g_0) intersecting at a point x, we can find an element γ in Γ so that γ x belongs to a fundamental domain V of Γ.
In particular, there exists a pair of geodesics h_0 and h_1 passing through V so that
_⌈ g_0,g_1⌉(η)=_⌈ h_0,h_1⌉(η)=(_η(h_0) _η(h_1)) ,
where _η is the fundamental projector for η. Since the set of geodesics passing through V is relatively compact, the result follows by the continuity of the fundamental projector _η(h) on h and η.
Given μ and ν, then μ∧ν is integrable.
Let
A{(g,h)|ϵ(g,h) = ± 1} , B{(g,h)|ϵ(g,h) = ±1/2} .
Observe first that denoting i the Bonahon intersection, we have
|μ∧ν (A/Γ)|≤ i(μ̅,ν̅)<∞ ,
where the last inequality is due to Bonahon <cit.>, and
λ̅ is the symmetrised current of λ.
As Γ acts with compact quotient on the set of triples of points on ∂, it follows that Γ acts on B with compact quotient and therefore μ∧ν(B) is finite. Therefore taking the sum we have that μ∧ν(^2/Γ) is finite.
§.§.§ A side remark
Here is an example of a (ρ,w)-integrable current. First we state the following inequality: given a representation ρ_0, there is a constant K_0 and a neighborhood U of ρ_0, such that for every k-configuration G of geodesics and every ρ in U we have
|_G(ρ)|≤ e^kK_0 r(G) .
Since this is just a pedagogical remark that we shall not use, we do not fill the details of the proofs.
From that inequality we see that if G↦ e^kK_0 r(G) is in L^1(_⋆^k/Γ,μ) then μ is (ρ,w)-integrable.
§ EXCHANGING INTEGRALS
To use ghost integration to compute the Hamiltonian of the average of correlation functions with respect to an integrable current, we will need to exchange integrals.
This section is concerned with proving the two Fubini-type exchange theorems we will need. Recall that the form β_ρ(g) is defined in equation <ref>.
Let μ a Γ-invariant geodesic current. Let G be a Θ-ghost polygon. Then
* ∫_β_gμ̣(g) — defined pointwise — is an element of ^∞(E),
* the map g↦∮_ρ(G)β_g is in L^1(,μ),
* finally, we have the exchange formula
∮_ρ(G)(∫_β_ρ(g) μ̣(g))=∫_(∮_ρ(G)β_ρ(g))μ̣(g) .
Similarly, we have a result concerning ghost intersection forms. We have to state it independently in order to clarify the statement. Let us first extend the assignment G↦Ω_G by linearity to the whole ghost algebra, and observe that if we have distinct ghost polygons G_i and
H=∑_i=1^q λ_iG_i , with sup_i∈{1,…,q}|λ_i|=A ,
Then
‖Ω_H(y)‖≤ qA sup_i∈{1,…,q}‖Ω_G_i(y)‖ .
Let μ be a w-cyclic and Γ-compact current of rank p. Let G be a ghost polygon. Let w be a natural map. Then
* ∫_^pΩ_ρ(w(H))μ̣(H) — defined pointwise — is an element of ^∞(E),
* the map H↦∮_ρ(G)Ω_ρ(w(H)) is in L^1(^p,μ),
* finally, we have the exchange formula
∮_ρ(G)(∫_^pΩ_ρ(w(H)) μ̣(H))=∫_^p(∮_ρ(G)Ω_ρ(w(H)))μ̣(H) .
We first concentrate on Theorem <ref>, then prove Theorem <ref> in paragraph <ref>.
§.§ Exchanging line integrals
Theorem <ref> is an immediate consequence of a similar result involving line integrals.
Let μ be a Γ-invariant geodesic current on , then
* ∫_β_gμ̣(g) — defined pointwise — is an element of ^∞(E),
* Let g_0 be a geodesic, x a point on g_0 and Q a parallel section of (E) along g_0, then the map
g↦_x,g_0,Q(β_g) ,
is in L^1(,μ).
* We have the exchange formula
_x,g_0,Q(∫_β_ρ(g) μ̣(g))=∫__x,g_0,Q(β_ρ(g)) μ̣(g) .
We prove the first item in proposition <ref>, the second item in <ref> and the third in <ref>.
§.§ Average of geodesic forms and the first item
Let μ be a Γ-invariant measure on . Let y be a point in , and
G(y,R){g∈| d(g,y)≤ R} .
As an immediate consequence of the Γ-invariance we have
For every positive R, there is a constant K(R) so that for every y in
μ(G(y,R))≤ K(R) .
Observe now that if g is not in G(y) G(y,2), then y is not in the support of ω_g and thus β_ρ(g)(y)=0. We then define
The μ-integral of geodesic forms is the form α so that at a point y in
α_y∫_G(y)β_g(y) μ̣(g)=∫_β_g(y) μ̣(g) .
We use some abuse of language and write
α∫_β_ρ(g)μ̣(g) .
The form α_y is well defined since G(y) is compact. Moreover, the next lemma gives the proof of the first item of proposition <ref>
The μ-integral of geodesic forms belongs to ^∞(E) and we have a constant K_5 only depending on ρ and μ so that
‖∫_β_ρ(g)μ̣(g)‖_∞≤ K_5
.
We have
|∫_β_ρ(g)(y) μ̣(g)|=|∫_G(y)β_ρ(g)(y) μ̣(g)|≤μ(G(y)) sup_g∈ G(y)‖β_ρ(g)‖_∞ .
Then by proposition <ref>, there is a constant k_1 so that μ(G(y))≤ k_1. Recall that β_ρ(g)=ω_g(g). Then by the equivariance, ω_g is bounded independently of g, while by lemma <ref>, is a bounded section of (E). The result follows.
§.§ Decay of line integrals
We now recall the following definition.
_x,g,(ω) = ∫_g^+(ω [,] ) + ∫_g^-(ω [,] ) .
We prove in this paragraph the following two lemmas.
Let g_0 be a geodesic and x a point in g_0,
Let g be a geodesic such that d(g,g_0)>1, then for any function ψ on with values in [0,1]:
_x,g_0,Q(ψβ_ρ(g))=0 .
This follows at once from the fact that under the stated hypothesis, the support of ω_g does not intersect g_0.
For any endomorphism and representation ρ, there exist positive constants K and k, so that for all g so that d(g,x)>R, for any function ψ on with values in [0,1]:
|_x,g_0,Q(ψβ_ρ(g))|≤ K e^-kR .
We assume x and g are so that d(g,x)> R.
It is enough (using a symmetric argument for g_0^-) to show that
|∫_g_0^+(ψβ_ρ(g)··[,])|≤ K e^-kR ,
where g_0^+ is the arc on g_0 from x to +∞.
Let us denote by g_0^+(R) the set of points of g_0^+ at distance at least R from x:
g_0^+(R){y∈ g_0^+| d(y,x)≥ R} .
Then if y belongs to g_0^+ and does not belong to g_0^+(R-1), then d(y,x)<R-1. Thus d(y,g)>1. Thus, by lemma <ref>, β_ρ(g)(y) vanishes for y in g_0^+ and not in g_0^+(R-1). Thus
|∫_g_0^+(ψβ_ρ(g)·· [,])|≤∫_g_0^+(R-1)|(β_ρ(g)()·· [,])| ṭ .
Then the result follows from the exponential decay lemma <ref>.
Lemma <ref> now follows immediately after using a symmetric result for g_0^-.
§.§ Cutting in pieces and dominating: the second item
We need to decompose into pieces. Let g_0 be an element of
and x a point on g_0. Let x^+(n) – respectively x^-(n) – the point in g_0^+ – respectively g^-_0 – at distance n from x.
Let us consider
U_0 {g∈| d(g,g_0)>1} ,
V^+_n {g∈| d(g,x^+(n))< 2 and for all 0≤ p<n , d(g,x^+(p))≥ 2 } ,
V^-_n {g∈| d(g,x^-(n))< 2 and for all 0≤ p<n, d(g,x^-(p))≥ 2 } .
This gives a covering of :
We have the decomposition
=U_0∪⋃_n∈ℕ V^±_n ,
When g does not belong to U_0, there is some y in g_0 so that d(g,y)≤ 1, hence some n so that either d(y,x^+(n))≤ 2, while for all 0≤ p<n we have d(y,x^+(p))> 2, or d(y,x^-(n))≤ 2, while for all 0≤ p<n we have d(y,x^-(p))> 2.
Let now
n(g)=sup{m∈ℕ| g∈ V^+_m∪ V^-_m} .
By convention, we write n(g)=+∞, whenever g does not belong to ⋃_n∈ℕ V^±_n.
The non-negative control function F_0 on is defined by F_0(g)=e^-n(g).
We now prove
For any positive k, the function (F_0)^k is in L^1(,μ).
Moreover, there exist positive constants K_9 and k_9 so that for all functions ψ on with values in [0,1] we have
|_x,g_0,Q(ψβ_g)|≤ K_9(F_0(g))^k_9 .
We now observe that the second item of proposition <ref> is an immediate consequence of this lemma.
We first prove that F_0 and all its powers are in L^1(,μ).
Observe that V^±_n⊂ G(x^±(n),2). It follows that μ(V^±_n)≤ K(2) by proposition <ref>. Moreover, for any g in V_n^±, F_0(g)^k≤ e^-kn. The decomposition of lemma <ref> implies that F_0^k is in L^1(,μ).
Let g be a element of .
* When g belongs to U_0, then by lemma <ref>, _x,g_0,Q(β_ρ(g))=0. Hence |_x,g_0,Q(β_ρ(g))|≤ A (F_0(g))^a, for any positive A and a.
* When g does not belong to U_0, then g belongs to V^±_n(g) with n(g)<∞. By lemma <ref>, we have d(x,g)≥ n(g). It follows from lemma <ref> that for any positive function ψ, we have
|_x,g_0,Q(ψβ_ρ(g))|≤ Ke^-kn(g)=KF_0(g)^k .
The last inequality concludes the proof.
§.§ Proof of the exchange formula of proposition <ref>
Let us choose, for any positive real R, a cut-off function ψ_R, namely a function on with values in [0,1], with support in the ball with center x and radius R+1, and equal to 1 on the ball with center x and radius R.
We write
|_x,g_0,Q(∫_β_ρ(g) μ̣(g))-∫__x,g_0,Q(β_ρ(g))μ̣(g)|≤ A(R)+B(R)+C(R) ,
where
A(R) = |_x,g_0,Q(∫_β_ρ(g) μ̣(g))-_x,g_0,Q(ψ_R∫_β_ρ(g)μ̣(g))| ,
B(R) = |_x,g_0,Q(ψ_R∫_β_ρ(g) μ̣(g))-∫__x,g_0,Q(ψ_R β_ρ(g)) μ̣(g)| ,
C(R) = |∫__x,g_0,Q( ψ_R β_ρ(g)) μ̣(g)-∫__x,g_0,Q( β_ρ(g)) μ̣(g) | .
We will prove the exchange formula (the third item of proposition <ref>) as an immediate consequence of the following three steps
0.2 truecm
Step 1: By lemma <ref>, α=∫_β_ρ(g)μ̣_g is in ^∞(E). By definition of a cutoff function, the support of (1-ψ(R)) α vanishes at any point y so that d(x,y)<R. Thus the exponential decay lemma <ref> guarantees that
A(R)=|_x,g_0,Q((1-ψ(R)) α)|≤ K_4e^-k_4R‖α‖_∞ .
Hence lim_R→∞A(R)=0.
0.2 truecm
Step 2: Observe that
ψ_R∫_β_ρ(g) μ̣(g)=∫_ψ_Rβ_ρ(g) μ̣(g) .
Moreover the function g↦ψ_Rβ_g is continuous from to ^∞(E). Thus the continuity of _x,g_0,Q proved in proposition
<ref> implies that B(R)=0.
0.2 truecm
Final Step: As a consequence of Lebesgue's dominated convergence theorem and the domination proved in lemma <ref>, we have that
lim_R→∞ C(R)=0.
0.1 truecm
Combining all steps
lim_R→∞(A(R)+B(R)+C(R))=0 .
Hence thanks to equation (<ref>), we have
_x,g_0,Q(∫_β_ρ(g) μ̣(g))=∫__x,g_0,Q(β_ρ(g)) μ̣(g) .
§.§ Proof of Theorem <ref>
We assume now that μ is a Γ-compact current of order k>1. We may also assume – by decomposing μ into its positive and negative parts – that μ is a positive current.
We want to show that
∫_^pΩ_ρ(w(H))μ̣(H) — defined pointwise — is an element of ^∞(E).
Since μ is Γ-compact, it follows that the core diameter of any H in the support of μ is bounded by some constant R_0 by proposition <ref>.
It will be enough to prove that
∫_^p‖Ω_ρ(w(H))(y)‖_y μ̣(H) ≤ K_0 ,
for some constant K_0 that depends on μ.
Let 𝒦 be a fundamental domain for the action of Γ on ^p. Observe now that
∫_^p‖Ω_ρ(w(H))(y)‖_y μ̣(H)
= ∑_γ∈Γ∫_γ𝒦‖Ω_ρ(w(H))(y)‖_y μ̣(H)
=∫_𝒦(∑_γ∈Γ‖Ω_ρ(w(H))(γ(y))‖_y) μ̣(H) = ∫_𝒦‖ψ_w(H),y‖_ℓ^1(Γ) μ̣(H) ,
where
ψ_w(H),y : γ ↦‖Ω_ρ(w(H))(γ(y))‖_y .
By the second assertion of corollary <ref>, the map ψ_H,y is in ℓ^1(Γ) and its norm is bounded by a continuous function of the core diameter r(w(H)) of w(H), hence by a continuous function of r(H) by inequality (<ref>), hence by a constant on the support of μ, since r is Γ-invariant and continuous by proposition <ref> and μ is Γ-compact.
Since r(H) is bounded on the support of μ,
the first item of the theorem follows.
Let us consider the map
Ψ: H↦∮_ρ(G)Ω_ρ(w(H))=∫_(Ω_ρ(w(H))∧Ω_ρ(G)) ,
where we used formula (<ref>) in the last equality. Our goal is to prove Ψ is in L^1(^p,μ). We have that
‖Ω_ρ(w(H))∧Ω_ρ(G)(y)‖≤‖Ω_ρ(w(H))(y)‖ ‖Ω_ρ(G)(y)‖ .
It follows that
∫_^p|Ψ(H)| μ̣(H) ≤ ∫_^p∫_‖Ω_ρ(G)(y)‖ ‖Ω_ρ(w(H))(y)‖ ỵ μ̣(H) ≤ ∫_‖Ω_ρ(G)(y)‖ ( ∫_^p‖Ω_ρ(w(H))(y)‖ μ̣(H)) ỵ ≤ K_0∫_‖Ω_ρ(G)(y)‖ ỵ = K_0 ‖Ω_ρ(G)‖_L^1() ,
where we used the first item in the third inequality. We can now conclude by using the first assertion of corollary <ref>.
We use again a family of cutoff functions {ψ_n}_n∈ℕ defined on ^p with values in [0,1] so that each ψ_n has a compact support, and ψ_n converges to 1 uniformly on every compact set.
It follows from the Lebesgue's dominated convergence theorem and the second item that
lim_n→∞∫_^p(
∮_ρ(G)Ω_ρ(w(H)))
ψ_n μ̣(H) =∫_^p(∮_ρ(G)Ω_ρ(w(H))) μ̣(H) .
Recall now that by the last assertion of corollary <ref>, ‖Ω_ρ(H)‖_∞ is bounded on every compact set and Γ-invariant, hence bounded on the support of μ. Thus we have the following convergence in ^∞(E)
lim_n→∞∫_^pΩ_ρ(w(H)) ψ_n μ̣(H) =∫_^pΩ_ρ(w(H)) μ̣(H) ,
From the continuity obtained in proposition <ref>, we then have that
lim_n→∞∮_ρ(G)∫_^pΩ_ρ(w(H)) ψ_n μ̣(H) =∮_ρ(G)∫_^pΩ_ρ(w(H)) μ̣(H) .
Finally, for every n, since ψ_n has compact support
the following formula holds
∮_ρ(G)( ∫_^pΩ_ρ(w(H)) ψ_n μ̣(H))= ∫_^p(∮_ρ(G)Ω_ρ(w(H))) ψ_n μ̣(H) .
The exchange formula now follows from both assertions (<ref>) and
(<ref>).
§ HAMILTONIAN AND BRACKETS: AVERAGE OF CORRELATION AND LENGTH FUNCTIONS
We now leave the realm of uniformly hyperbolic bundles in general and focus only on periodic ones. This corresponds to the study of Anosov representations of the fundamental group of a closed surface.
The fact that S is closed allows us to introduce a new structure: the smooth part of the representation variety of projective representations carries the Goldman symplectic form, defined in paragraph <ref>, see also <cit.>. Hence we have a Poisson bracket on functions on the character variety.
In this section, we will introduce averaged correlation functions and length functions and compute their Hamiltonian vector fields and Poisson bracket.
§.§ Averaged length function: definition
As a first step in the construction, let us consider a Θ-decorated current μ^å supported on ×{å} where å is in Θ. The associated length function on the character variety of Anosov representation is the function ^å_μ^å defined by
^å_μ^å(ρ)log(∫_/Γ R_å^σ μ̣^å) ,
where R_å^σ is the (complex valued in the case of complex bundles) 1-form associated to a section σ of (F_å) by ∇_uσ=R^σ(u)·σ. Although R^σ depends on the choice of the section σ, the integrand over does not. In the complex case, we see the length functions as taking values in ℂ/2π i ℤ due to the ambiguity of defining the logarithm.
Recall that in our convention (F_å) is a contracting bundle and thus the real part of _μ is positive. Moreover for a closed geodesic γ whose associated geodesic current, supported on ×{å} is also denoted by γ^å.
^å_γ(ρ)=-log(.Hol(γ)|_F_å) ,
where Hol(γ) is the holonomy of γ.
For a geodesic current δ supported on a closed geodesic, the length function _δ is analytic. This extends to all geodesic currents by density and Morera's Theorem (See <cit.> for a related discussion in the real case). The notion extends naturally – by additivity – to a general Θ-geodesic current.
We can now extend the length function to any Θ-geodesic current. Let μ be a Θ-geodesic current on ×Θ, we can then write uniquely
μ=∑_å∈Θμ^å ,
where μ^å is supported on ×{å}, then by definition the μ-averaged length function[In the complex case, since the logarithm, hence the length, is defined up to an additive constant, the Hamiltonian is well defined and the bracket of a length function and any other function makes sense.]
is
_μ(ρ)∑_å∈Θ^å_μ^å(ρ) .
§.§ Averaged correlation function: definition
When w is a natural map, μ a (ρ,w)-integrable cyclic current, the associated averaged correlation function of order n _w(μ) on the moduli space of Θ-Anosov representations is defined by
_w(μ)(ρ)∫_^n/Γ_w(G)(ρ) μ̣(G) ,
where G=(G_1,…,G_p) with and _G is the correlation function associated to a Θ-configuration of geodesics defined in paragraph <ref>. As we shall see in proposition <ref>, the function _w(μ) is analytic .
Our main result is a formula for the Poisson bracket of those functions. We use a slightly different convention, writing ^k for a correlation function of order k and ^1_μ=_μ.
Let μ be either a w-integrable Θ-cyclic current at ρ_0 or a Θ-geodesic current. Similarly, let ν be either a v-integrable Θ-cyclic current at ρ_0 or a Θ-geodesic current.
Then the measure μ⊗ν is z-integrable at ρ_0, where z(G,H)=[w(G),v(H)] and moreover
{^p_w(μ),^n_v(ν)}(ρ) = ∫_^p+n/Γ_ρ(w(G),v(H)) μ̣(G)ν̣(H) = ∫_^p+n/Γ_[w(G),v(H)](ρ) μ̣(G)ν̣(H) .
As a corollary, generalizing Theorem <ref> given in the introduction, using a simple induction and proposition <ref> we get
The vector space generated by length functions, averaged correlation functions and constants is stable under the Poisson bracket. More precisely, let μ_1, …, μ_p be cyclic currents of order n_1,…,n_p, and let N=n_1+⋯+ n_p; then
{^n_1_μ_1,{^n_2_μ_2,…{^n_p-1_μ_p-1,^n_p_μ_p}…}}(ρ) = ∫_^N/Γ^N_[G_1,[G_2,[…,[G_p-1,G_p]…]]](ρ) μ̣_1(G_1)…μ̣_1(G_p) .
In the course of the proof, we will also compute the Hamiltonians of the corresponding functions.
Let μ be a Θ-geodesic current.
The Hamiltonian of the length function _μ is H^0_μ the trace free part of H_μ, where
H_μ-∫_β_ρ(g) μ̣(g) ,
Let w be a natural function. Let ν be a (ρ,w) integrable cyclic current. The Hamiltonian of the correlation function _w(ν) of order n, with n>1 is
Ω_w(ν)∫_^nΩ_ρ(w(G)) ν̣(G) ,
Both H_μ and Ω_w(ν) are in ^∞(E).
§.§ Preliminary and convention in symplectic geometry
Our convention is that if f is a smooth function and a symplectic form, the Hamiltonian vector field X_f of f and the Poisson bracket {f,g} of f and g are defined by
f̣(Y) = (Y,X_f) ,
{f,g} = f̣(X_g)=(X_g,X_f)=- g̣(X_f) .
0.5 truecm
Observe that if Ω is a complex valued symplectic form – which naturally take entries in the complexified vector bundle – and f a complex valued function then the Hamiltonian vector field is a complexified vector field. The bracket of two complex valued functions is then a complex valued function.
In the sequel, we will not write different results in the complex case (complex valued symplectic form and functions) and the (usual) real case.
We first start by computing the bracket and Hamiltonian of length functions;
§.§ Regularity of averaged correlations functions
We prove here
Let w be a natural function. Let μ be a (ρ,w)-integrable current, then
* _w(μ) is an analytic function in a neighborhood of ρ,
* For any tangent vector v at ρ, then _w(G)(v) is in L^1(μ) and
_w(μ)(v)=∫__⋆^n/Γ_w(G)(v)μ̣(G) .
As in proposition <ref>, we work in the context of complex uniformly hyperbolic bundles, possibly after complexification of the whole situation.
Let us first treat the case when μ is Γ-compact. In that case, the functions _G:ρ↦ T_w(G) are all complex analytic by proposition <ref>, uniformly bounded with uniformly bounded derivatives in the support of μ. Thus the result follows from classical results.
We now treat the non-Γ-compact case. Let us now consider an exhaustion of _⋆^n/Γ by compact sets K_n and write μ_n=1_K_nμ. Let then
_n=∫_K_n_w(μ_n)μ̣ .
Then by our integrability hypothesis and Lebesgue dominated convergence Theorem _n converges uniformly to _w(μ). Since all _n are complex analytic, by Morera Theorem _w(μ) is complex analytic and _n converges C^∞ to _w(μ).
It thus follows that
_w(μ)(v)=lim_n→∞_n(v)=lim_n→∞∫_K_n_w(G)(v)μ̣(G) .
We now conclude by lemma <ref>.
§.§ Length functions: their Hamiltonians and brackets
The first step in our proof is to understand the variation of length,
The derivatives of a length function with respect to a variation ∇̇ is given by
_μ(∇̇)=∫_Θ×/Γ(∇̇) μ̣(x) .
By the linearity of the definition, see equation (<ref>), it is enough to consider a Θ-geodesic current μ^å supported on ×{å}.
Let E^å⋀^(F_å) E, and Λ^å the natural exterior representation from sl(E) to sl(E^å).
Then by <cit.> and formula (<ref>) we have
_μ(∇̇)=∫_{å}×/Γ(_åΛ^å(∇̇)) μ̣^å(x) ,
where ^1_å is the section of (E_å) given by the projection on the line (F_a) induced by the projection on F_å parallel to F_å^∘ – see section <ref> for notation.
We now conclude by observing – using just a little bit of linear algebra –
that for any element A in sl(E)
(^1_åΛ^å(A))=(_å A) .
Indeed let us choose a basis (e_1,…, e_p) of F_å completed by a basis (f_1,…, f_m) of F_å^∘ and choose a metric so that this basis is orthonormal. Then
Λ^å(A)(e_1∧…∧ e_p)=∑_i=1^pe_1∧… e_i-1∧ A(e_i)∧ e_i+1∧… e_p ,
(^1_åΛ^å(A))=⟨e_1∧…∧ e_p ,Λ^å(A)(e_1∧…∧ e_p)|=⟩∑_i=1^p⟨e_i , A(e_i)|=⟩(_å A) .
Let then
H_μ= -∫_β_ρ(g) μ̣(g) .
We proved that H_μ lies in ^∞(E) in lemma <ref>.
We now prove the following proposition
The Hamiltonian vector field of _μ is given by H^0_μ, which is the trace free part of H^μ. Then
{_ν,_μ}=(H^0_μ,H^0_ν)=∫__⋆^2/Γ_ρ(g,h) ν̣(g)⊗μ̣(h) ,
Observe that if μ and ν are both supported on finitely many geodesics, then the support of μ⊗ν is finite in ^2 and its cardinality is the geometric intersection number of the support of μ, with the support of ν. This is a generalization of Wolpert cosine formula, see <cit.>.
Remark that ϵμ⊗ν is supported in ^2 on a set on which Γ acts properly.
Let us first consider the computation of (H_μ,H_ν). Let Δ_0 be a fundamental domain for the action of Γ on the hyperbolic plane, and let Δ_1 be a fundamental domain for the action of Γ on ^2. Then, denoting by ^0_g the traceless part of _g,
(H^0_μ,H^0_ν) = ∫_Δ_0((∫_β^0_h μ̣(h))∧(∫_β^0_g ν̣(g))) = ∫_Δ_0∫_×ω_h∧ω_g (^0 (g)^0 (h)) μ̣(h)ν̣(g) = ∫_Δ_1∫_ω_h∧ω_g (^0 (g)^0 (h)) μ̣(h)ν̣(g) = ∫_Δ_1ϵ(h,g)(^0 (g)^0 (h)) μ̣(h) ν̣(g) .
Let us comment on this series of equalities: the first one is the definition of the symplectic form and that of H_μ and H_ν, for the second one we use the pointwise definition of H_μ and H_ν, for the third one we use proposition <ref>. Observe that the final equality gives formula (<ref>).
From the third equality we also have
(H^0_μ,H^0_ν)
= ∫_Δ_1(∫_gω_h (^0 (g)^0 (h))) μ̣(g)ν̣(h) .
Let us now consider the fibration z:×→^2 and observe that z^-1(Δ_1) is a fundamental domain for the action of Γ in ×.
Let Δ_2 be a fundamental domain for the action of Γ on and observe that Δ_2× is a fundamental domain for the action of Γ on ×. Then the above equation leads to
(H^0_μ,H^0_ν)
= ∫_z^-1(Δ_1)ω_h (^0 (g)^0 (h)) μ̣(g)ν̣(h)
=∫_Δ_2×ω_h (^0 (g)^0 (h)) μ̣(g)ν̣(h) = ∫_Δ_2(^0 (g)∫_β^0_ρ(h)ν̣(h) )μ̣(g)
=-∫_Δ_2(^0 (g)H^0_ν)μ̣(g) = -_μ(H^0_ν)=_ν(H^0_μ) .
As a conclusion, if Ham(_ν) is the Hamiltonian vector field of _ν, then for all length functions _μ
_μ(H^0_ν-Ham(_ν))=0 .
We proved in <cit.> that the derivatives of the length functions generate the cotangent space of the character variety on some open dense subset. This completes the proof.
As noted, the above gives a generalization of Wolpert's cosine formula. Explicitly we have for two Θ-geodesic currents μ,ν then
{_ν,_μ} = ∫_(^2)_⋆/Γϵ(g,h)(((g)(h)) - Θ(g)Θ(h)/(E)) μ̣(g)ν̣(h) .
§.§ Bracket of length function and discrete correlation function
We have
Let G be a Θ-configuration and μ a Θ-geodesic current, then
{_G,_μ}=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g)=∫__ρ(G,g) μ̣(g) .
By proposition <ref>, we have
_G(H_μ)=-∮_ρ(G)(∫_β_ρ(g)μ̣) .
Thus by the exchange formula (<ref>), we have
_G (H_μ)=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g) .
We conclude using equation (<ref>):
{_G,_μ}= _G (H_μ)=-∫_(∮_ρ(G)β_ρ(g)) μ̣(g)=∫__ρ(G,g) μ̣(g) .
§.§ Bracket of length functions and correlation functions
Our first objective is, given a family of flat connection ∇ whose variation at zero is ∇̇, to compute _μ(∇̇).
Assume that the Θ-cyclic current μ is (ρ,w)-integrable. Then
{_w(μ),_ν}(ρ) = ∫_^n+1/Γ_ρ(w(G),g) ν̣(g) μ̣(G) .
By Theorem <ref>, the hamiltonian vector field of _ν is given by
H^0_ν=-∫_β^0_ρ(g) ν̣(g) .
Let Δ be a fundamental domain for the action of Γ on ^n, and observe that Δ× is a fundamental domain for the action of Γ on ^n+1. It follows since H_ν is ρ-equivariant and proposition <ref>
that
{_w(μ),_ν}=_w(μ)(H^0_ν)
= ∫_Δ_w(G)(H^0_ν) μ̣(G)
=∫_Δ(∮_ρ(w(G))H^0_ν) μ̣(G) = -∫_Δ∫_(∮_ρ(w(G))β_ρ(g)) ν̣(g)μ̣(G)
= ∫_Δ∫_(_ρ(w(G),g)) ν̣(g)μ̣(G)
= ∫_^n+1/Γ_ρ(w(G),g) μ̣(G)ν̣(g) .
For the second equality we used proposition <ref> and that integrating a 1-form with values in the center gives a trivial result by proposition <ref>.
§.§ Hamiltonian of correlation functions
We are going to prove the following result
Let w a natural function.
Let μ be a (ρ,w)-integrable Θ-current. Then for every y in , Ω_ρ(G) belongs to L^1(^p,μ). Moreover
Ω_w(μ)(ρ)∫_^pΩ_ρ(w(G))μ̣(G) .
seen as vector field on the character variety is the Hamiltonian of the correlation function _w(μ).
We first prove proposition <ref> under the additional hypothesis that μ is a Γ-compact current, then move to the general case by approximation.
Assume μ is a Γ-compact current. By the density of derivatives of length functions, it is enough to prove that for any geodesic current ν associated to a length function _ν whose Hamiltonian is H_ν we have
{_ν,_w(μ)}=(Ω_w(μ),H_ν)=_ν(Ω_w(μ)) .
Then using a fundamental domain Δ_0 for the action of Γ on , and Δ_1 a fundamental domain for the action of Γ on ^n, and finally denoting ν_0 the flow invariant measure in associated to the current ν
_ν(Ω_w(μ))
=∫_Δ_0(Ω_w(μ))ν̣_0(g) = ∫_Δ_0(∫_^n( (g)Ω_ρ(w(G))) μ̣(G))ν̣_0(g)
=∫_^n(∫_Δ_0( (g)Ω_ρ(w(G))) ν̣_0(g))μ̣(G) = ∫_Δ_1(∫_( (g)Ω_ρ(w(G))) ν̣_0(g))μ̣(G)
=∫_Δ_1∫_∫_g( (g)Ω_ρ(w(G)))) ν̣(g)μ̣(G) = ∫_^n/Γ(∫_∫_(ω_g (g)∧Ω_ρ(w(G))) )ν̣(g) μ̣(G)
=-∫_(^n/Γ)×_ρ(w(G),g) μ̣(G)ν̣(g) = {_ν,_μ} .
The first equality uses equation (<ref>), the second uses the definition of Ω_μ, the third one comes from Fubini theorem, the fourth one from lemma
<ref>, the fifth one from the fibration from to , the sixth one from formula (<ref>), the seventh one definition (<ref>).
Let us now prove the general case when μ is a ρ-integrable current. Let us consider an exhaustion K of ^p/Γ by compact sets. Assume that the interior of K_m+1 contains K_m. Let 𝒦 be a fundamental domain of the action Γ on _⋆^p.
Let
_m(ρ)∫_K_m_w(G)(ρ) μ̣(G) .
The functions _m are analytic and converge C^0 to _μ on every compact set by the integrability of μ. Thus, by Morera's theorem, _μ is analytic and _m converges C^∞ to _μ on every compact set. Let us call X the Hamiltonian vector field of _μ and X_m the Hamiltonian vector field of _m. It follows that X_m converges to X.
We have just proven in the previous paragraph that the Hamiltonian of _m is
X_m=∫_C_mΩ_ρ(H) μ̣ .
From corollary <ref>, for every y and H, the function
γ↦‖Ω_ρ(γ w(H))(y)‖ is in ℓ^1(Γ).
It follows that
X_m(y)=∫_C_m𝒦(∑_γ∈ΓΩ_ρ(γ H)(y) μ̣(H)) .
Since {X_m(y)}_m∈ℕ converges to X(y) for any exhaustion of 𝒦, it follows by lemma <ref> that
H↦∑_γ∈ΓΩ_ρ(γ w(H))(y) ,
is in L^1(𝒦,μ) and that
X(y)= ∫_K∑_γ∈ΓΩ_ρ(γ w(H))(y) μ̣(H)=∫_^pΩ_ρ(w(H))(y) μ̣(H) ,
where we applied Fubini again in the last equality. This is what we wanted to prove.
§.§ Bracket of correlation functions
We have
Let μ and ν be two integrable Θ-currents of rank m and n respectively. Let p=m+n, then
{_w(ν),_v(μ)}=∫_^p/Γ_ρ(w(H),v(G)) ν̣⊗μ̣(H,G) .
We have
{_w(ν),_v(μ)}=_w(ν)(Ω_v(μ))
= ∫_^n/Γ_w(H)(Ω_v(μ)) ν̣(H)
=∫_^n/Γ(∮_ρ(w(H))Ω_v(μ)) ν̣(H) = ∫_^n/Γ(∮_ρ(w(H))∫_^mΩ_ρ(v(G))μ̣(G)) ν̣(H)
=∫_^n/Γ(∫_^m∮_ρ(w(H))Ω_ρ(v(G))μ̣(G)) ν̣(H)
= ∫_^p/Γ_ρ(w(H),v(G)) ν̣(H) μ̣(G) .
The crucial point in this series of equalities is the exchange formula for the fifth equality which comes from Theorem <ref>.
With the above, we have completed the proof of the ghost representation Theorem <ref>.
§ APPLICATIONS
In this section we give two applications of our previous results. The first one is a generalization of Kerckhoff's theorem <cit.> on the convexity of length functions, and of the related Wolpert sine formula for the second derivatives along twist orbits <cit.>. The second one is to give examples of commuting functions arising from laminations.
Both results will follow from computations in the ghost algebra combined with the Ghost Representation Theorem <ref>.
§.§ Convexity of length functions for positively ratioed representations
We can now prove our convexity theorem.
We work in the context of real projective Anosov representation, or 𝖲𝖫(n,ℝ) valued with Θ={1}. Let us first say, following Martone–Zhang <cit.> that a representation has a positive cross ratio if for all intersecting geodesics g and h
0<_⌈ g,h ⌉(ρ)<1 .
Let μ be an oriented geodesic current supported on non-intersecting geodesics. Then for any geodesic current ν for any projective representation with a positive cross ratio, we have
{_μ,{_μ,_ν}}(ρ)≥ 0 .
Furthermore the inequality is strict if and only if i(μ,ν) ≠ 0.
This will follow from the definition of a positive cross ratio and our generalisation of Wolpert sine formula:
Let μ be an oriented geodesic current supported on non-intersecting geodesics. Then for any geodesic current ν, for any projective representation ρ, we have
{_μ,{_μ,_ν}}(ρ)=2∫_^3,+/Γϵ(g_0,h)ϵ(g_1,h)(_⌈ g_1,h,g_0⌉ -⌈ g_1,h⌉⌈ g_0,h⌉)(ρ) μ̣^2 (g_1,g_0) ν̣(h) .
where ^3,+ is the set of (g_1,h,g_0) so that if h intersects both g_1 and g_0, then h intersects g_1 before g_0.
§.§ Commuting functions arising from laminations
Let ℒ be a lamination. Associated to this lamination we get several functions, which we call the functions associated to the lamination:
* The length functions associated to geodesic currents supported on the laminations,
* functions associated to any complementary region of the lamination.
Let F_ℒ be the vector space generated by these functions.
Our result is then
Let ℒ be a geodesic lamination, then the vector space F_ℒ consists of pairwise Poisson commuting functions.
An interesting example is the case of the maximal geodesic lamination coming from a decomposition into pairs of pants. An easy check gives that there are 6g-6 length functions and 4g-4 triangle functions. Thus we have 10g-10 commuting functions. However, in that case the dimension of the space is 16g-16 and it follows that there are relations between these functions. It is interesting to notice that these relations may not be algebraic ones: in that specific case some relations are given by the higher identities <cit.> generalizing the Mirzakhani–McShane identities.
§.§ Double derivatives of length functions in the swapping algebra
In order to prove our convexity result, we will need to calculate the double brackets. By Theorem <ref>, as the map A→_A on the ghost algebra factors through the extended swapping algebra ℬ_0, it suffices to do our calculations in ℬ_0. For simplicity, we will further denote the elements ℓ_g in ℬ_0 by g.
Let h be an oriented geodesic and g_0 and g_1 two geodesics so that ϵ(g_0,g_1)=0. Let ϵ_i=ϵ(g_i,h).
Assume first that ϵ_0ϵ_1=0, then
[g_1,[g_0,h]]=0.
Assume otherwise that h intersect g_1 before g_0 or that g_1=g_0. Then
[g_1,[g_0,h]] = ϵ_1ϵ_0 (⌈ g_1,h,g_0 ⌉- ⌈ g_1,h⌉ ⌈ g_0,h ⌉)
=ϵ_1ϵ_0 ⌈ g_1,h⌉ ⌈ g_0,h ⌉ (⌈γ_0 ,γ_1 ⌉-1 ) ,
where γ_0 (g_0^+,h^-) and γ_1 (h^+, g_1^-). Observe that γ_0 and γ_1 are not phantom geodesics by hypothesis.
First let us remark that by the Jacobi identity, since [g_0,g_1]=0, then
[g_1,[g_0,h]]=[g_0,[g_1,h]] .
We apply formulas of paragraph <ref>.
We first have from equation (<ref>).
[g_0,h]=ϵ(h,g_0)⌈ g_0,h ⌉ + ϵ(g_0,h) .
It follows that if ϵ(g_0,h)=0, then
[g_1,[g_0,h]]=0 .
The same holds whenever ϵ(g_1,h)=0 by the symmetry given by equation (<ref>).
Assume now that ϵ_0ϵ_1≠0.
Let then (g_0,ζ_0,h,η_0) be the associated ghost polygon to ⌈ g_0,h⌉ with ghost edges ζ_0 = (g_0^+,h^-) and η_0 = (h^+,g_0^-).
Thus using the hypothesis ϵ(g_0,g_1)=0, and using the notation ϵ_i=ϵ(g_i,h) we get from equation (<ref>)
[g_1,[g_0,h]] = -ϵ_0⌈ g_0,h ⌉(ϵ_1 ⌈ g_1,h⌉- ϵ(g_1,ζ_0)⌈ g_1,ζ_0⌉ -ϵ(g_1,η_0)⌈ g_1, η_0 ⌉) .
Since h intersects g_1 before g_0, we have ϵ(g_1,η_0)=0 and ϵ(g_1,ζ_0)=ϵ(g_1,h). Thus
[g_1,[g_0,h]]
= ϵ_1ϵ_0 ( ⌈ g_1, ζ_0 ⌉⌈ g_0,h ⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉) .
As ζ_0 = (g_0^+,h^-) by definition of the swapping algebra
⌈ g_1, ζ_0 ⌉⌈ g_0,h ⌉ = (g_1^+,h^-)(g_0^+,g_1^-)(g_0^+,h^-)(h^+,g_0^-)/(g_1^+,g_1^-)(g_0^+,h^-)(g_0^+,g_0^-)(h^+,h^-) = (g_1^+,h^-)(h^+,g_0^-)(g_0^+,g_1^-)/(g_1^+,g_1^-)(h^+,h^-)(g_0^+,g_0^-) = ⌈ g_1, h, g_0 ⌉ .
Similarly
⌈ g_1, h, g_0 ⌉/⌈ g_1, h ⌉⌈ g_0, h⌉ = (g_0^+,g_1^-)(h^+,h^-)/(g_0^+,h^-)(h^+,g_1^-) = ⌈ (g_0^+,h^-),(h^+,g_1^-)⌉ .
The result follows from equations (<ref>) and the fact that γ_0=(g_0^+,h^-) and γ_1=(h^+,g_1^-).
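The identities used in this proof are identities between rational expressions in the pair generators, so they can also be checked mechanically. A short sympy sketch, with purely illustrative symbol names, verifies the last one:

```python
import sympy as sp

def pair(P, m):
    return sp.Symbol(f"b_{P}_{m}")

def corr(*geods):
    # image under pi of the ghost polygon <g_1,...,g_p>, where g = (g+, g-)
    p = len(geods)
    num = sp.Mul(*[pair(geods[i][0], geods[(i + 1) % p][1]) for i in range(p)])
    den = sp.Mul(*[pair(geods[i][0], geods[i][1]) for i in range(p)])
    return num / den

g0, g1, h = ("G0p", "G0m"), ("G1p", "G1m"), ("Hp", "Hm")
gamma0, gamma1 = (g0[0], h[1]), (h[0], g1[1])     # (g0^+, h^-) and (h^+, g1^-)

lhs = corr(g1, h, g0) / (corr(g1, h) * corr(g0, h))
rhs = corr(gamma0, gamma1)
print(sp.simplify(lhs - rhs))                     # expected: 0
```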
§.§ Triangle functions and double brackets
Let δ_0=(a_1,a_2,a_3) be an oriented ideal triangle, we associate to such a triangle the configuration
t_0⌈ a_1,a_3,a_2⌉ .
The reader should notice the change of order.
One can make the following observation. First t t̅ =1. Thus for a self-dual representation ρ, we have _t(ρ)^2=1 and in particular _t is constant along self dual representations.
Let t_0 be a triangle, then
[t_0, g]
= ∑_j∈{1,2,3}ϵ(a_j,g) t_0 (⌈ g,a_j⌉+⌈ g,a̅_j⌉) .
Let t_0, t_1 be triangles. Then
[t_1,t_0] = t_1· t_0∑_i,j∈{1,2,3}ϵ(a_i,b_j)(⌈ a_i,b_j⌉ + ⌈ a_i, b_j⌉+⌈a_i,b_j⌉ + ⌈a_i, b_j⌉)= t_0∑_i∈{1,2,3} [t_1,a_i-a_i] .
Assume now that t_0 and t_1 are two non-intersecting triangles.
Then we have the formula:
[t_1,[t_0, g]]
=t_0 t_1∑_i,j∈{1,2,3}
α,β∈{-1,1}ϵ(b_i,g)ϵ(a_j,g) (⌈ b_i^β,g,a_j^α⌉ -⌈ a^α_j,g⌉ ⌈ b^β_i,g⌉) ,
where c^1=c, c^-1=c̅.
Observe first that the hypothesis implies that [t_0,t_1]=0. Thus, by the Jacobi identity,
[t_0,[t_1,g]]=[t_1,[t_0, g]] .
The ghost polygon associated to t is (a_1,a_2,a_3,a_1,a_2,a_3).
Thus
[t_0, g]
= t_0 ∑_j∈{1,2,3}ϵ(a_j,g)⌈ g,a_j⌉-ϵ(a_j,g)⌈ g,a̅_j⌉
= t_0 ∑_j∈{1,2,3}ϵ(a_j,g)(⌈ g,a_j⌉+⌈ g,a̅_j⌉) .
In particular, if ϵ(g,a_i)=0 for all i, then [t_0, g]=0. Hence, in that case
[t_0,[t_1, g]]=[t_1,[t_0, g]]=0 ,
and the formula (<ref>) is correct.
For t_0,t_1 we have
[t_1,t_0] = t_1· t_0∑_i,j∈{1,2,3}ϵ(a_i,b_j)⌈ a_i,b_j⌉ - ϵ(a_i,b_j)⌈ a_i, b_j⌉-ϵ(a_i,b_j)⌈a_i,b_j⌉ + ϵ(a_i,b_j)⌈a_i, b_j⌉
= t_0· t_1∑_i,j∈{1,2,3}ϵ(a_i,b_j)(⌈ a_i,b_j⌉ +⌈ a_i, b_j⌉+⌈a_i,b_j⌉ + ⌈a_i, b_j⌉)
= t_0∑_i∈{1,2,3} [t_1, a_i- a_i]
For the triple bracket, let us focus in the case where g intersects both t_0 and t_1 and by the above symmetry that g intersects t_1, then t_0.
Let (a_i,ζ_i,g,η_i) the ghost polygon to ⌈ a_i,g⌉ with ζ_i = (a_i^+,g^-) and η_i = (g^+,a_i^-).
Let t_1 = ⌈ b_1, b_3, b_2⌉ be another ideal triangle not intersecting t_0 and such that g intersects t_1, then t_0. Then the associated ghost polygon is (b_1,b_2,b_3,b_1,b_2,b_3). Let h be an edge of the ghost polygon of t_1. Then as g intersects t_1 before t_0
ϵ(h,η_j)=0 , ϵ(h,ζ_j)=ϵ(h,g) .
Thus
[ t_1,⌈ g, a_j⌉ ] = t_1⌈ g, a_j⌉∑_i∈{1,2,3}(ϵ(g,b_i) ⌈ b_i,g⌉ -ϵ(g,b_i) ⌈b_i,g⌉
- ϵ(ζ_j,b_i)⌈ζ_j, b_i ⌉+ϵ(ζ_j,b_i)⌈ζ_i,b_i⌉) .
Simplifying we obtain
[ t_1,⌈ g, a_j⌉ ] = t_1⌈ g, a_j⌉∑_i∈{1,2,3}ϵ(g,b_i)( ⌈ b_i,g⌉ + ⌈b_i,g⌉
-⌈ b_i, ζ_j ⌉-⌈b_i, ζ_j ⌉) .
By equation (<ref>)
⌈ g, a_j⌉⌈ b_i, ζ_j ⌉ = ⌈ b_i,g, a_j ⌉ ⌈ g, a_j⌉⌈b_i, ζ_j ⌉ = ⌈b_i,g, a_j ⌉ .
Thus we obtain
[t_1,⌈ g,a_j⌉]
= t_1∑_i∈{1,2,3}ϵ(b_i,g) (⌈ b_i,g,a_j⌉-⌈ a_j,g⌉ ⌈ b_i,g⌉
+⌈b̅_j,g,a_j⌉ - ⌈ g,a_j⌉⌈b̅_j, g ⌉) ,
t_1,⌈ g,a̅_j⌉]
= t_1∑_i∈{1,2,3}ϵ(b_i,g) (⌈ b_i,g,a̅_j⌉-⌈ a_j,g⌉ ⌈ b_i,g⌉
+⌈b̅_j,g,a̅_j⌉ - ⌈ g,a̅_j⌉⌈b̅_j, g ⌉) .
Combining the two last equations, and writing ϵ(b_i,g)=ϵ^1_i and ϵ(a_j,g)=ϵ^0_j we have (after some reordering)
[t_1,[t_0, g]]
= t_0 t_1 ∑_i,j∈{1,2,3}ϵ^1_iϵ^0_j(⌈ b_i,g,a_j⌉+⌈ b_i,g,a̅_j⌉+⌈b̅_i,g,a_j⌉+⌈b̅_i,g,a̅_j⌉ - ⌈ a_j,g⌉ ⌈ b_i,g⌉-⌈a̅_j,g⌉ ⌈ b_i,g⌉-⌈a̅_j,g⌉ ⌈b̅_i,g⌉-⌈ a_j,g⌉ ⌈b̅_i,g⌉)
,
which is what we wanted to prove.
Let g be disjoint from the interior of ideal triangle δ. Then g and the triangle function t commute.
Similarly let δ_0, δ_1 be ideal triangles with disjoint interiors. Then the associated triangle functions t_0,t_1 commute.
We first make an observation. If ϵ(g, h) = ± 1/2 then
⌈ g,h ⌉ + ⌈ g, h⌉ = 1 .
To see this, assume g^+=h^-. Then
⌈ g,h⌉ = 0 and ⌈ g, h⌉ has ghost polygon (g,h,h, g) giving
⌈ g, h⌉ = h· g/g·h =1 .
By symmetry, this holds for all g,h with ϵ(g,h) = ± 1/2.
Let g be disjoint from the interior of ideal triangle δ = (a_1,a_2,a_3). Then from above
[g, t] = t∑_i∈{1,2,3}ϵ(g,a_i)(⌈ g,a_i⌉ + ⌈ g, a_i⌉) = t∑_i∈{1,2,3}ϵ(g,a_i) .
If ϵ(g,a_i) = 0 for all i then trivially [ g, t] = 0. Thus we can assume ϵ(g, a_1) = 0 and ϵ(g,a_2),ϵ(g,a_3) ≠ 0. If g = a_1 then as ϵ(a_1, a_2) = -ϵ(a_1, a_3) then [g, t] =0. Similarly for g = a̅_1.
Otherwise g, a_2, a_3 share a common endpoint and a_2,a_3 have opposite orientation at the common endpoint. Therefore as g is not between a_2 and a_3 in the cyclic ordering about their common endpoint, then ϵ(g,a_2)= -ϵ(g,a_3) giving [g, t] =0.
Let t_0, t_1 be the triangle function associated to ideal polygons δ_0, δ_1 with t_0 = [a_1, a_3,a_2]. Then from above
[t_1,t_0] = t_0∑_i [t_1,a_i-a_i] .
Thus if t_0,t_1 have ideal triangles with disjoint interiors then by the above, [a_i,t_1] = [a_i,t_1]=0 giving [t_0,t_1]=0.
Let t_1, t_2 be ideal triangles intersecting triangle t_0 with sides a_i. Let u = ∑ a_i - a_i. Then
[t_2,[t_1,t_0]] = t_0([t_1,u][t_2,u]- [t_2,[t_1,u]]) .
From above [t_1,t_0] = -t_0[t_1,u] and [t_2,t_0] = -t_0[t_2,u]. Therefore
[t_2,[t_1,t_0]] = -[t_2,t_0][t_1,u] - t_0[t_2,[t_1,u]] = t_0[t_2,u][t_1,u]-t_0[t_2,[t_1,u]] .
§.§ Positivity
Recall that a projective representation ρ has a positive cross ratio if for all g,h intersecting geodesics 0 < _⌈ g,h⌉(ρ) < 1. We now give an equivalent definition which is the one originally given by Martone–Zhang in <cit.>.
A projective representation ρ has a positive cross ratio if and only if for all (X,Y,y,x) cyclically oriented
_⌈ (X,x),(Y,y)⌉(ρ)>1 .
Let X,x,Y,y be 4 points. We observe that (X,Y,y,x) is cyclically oriented if and only if geodesics (X,y),(Y,x) intersect. The result then follows from
⌈ (X,x),(Y,y)⌉ =(X,y) (Y,x)/(X,x) (Y,y)=((X,x) (Y,y)/(X,y) (Y,x))^-1=⌈ (X,y),(Y,x)⌉^-1 .
Assume ρ is a projective representation with a positive cross ratio. Let h be so that if h intersects both g_1 and g_0, then h intersects g_1 before g_0. Let g_1, g_0 be such that ϵ(g_0,g_1) = 0 Then we have the inequality
ϵ_1ϵ_0 _⌈ g_1 ,h , g_0⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉(ρ)≥ 0 .
Furthermore the inequality is strict if and only if h intersects both g_0, g_1 in their interiors (i.e. if and only if |ϵ_0ϵ_1| = 1).
By Lemma <ref> we have, since g_1 meets h before g_0.
ϵ_0ϵ_1(⌈ g_1,h,g_0⌉ - ⌈ g_1,h⌉⌈ g_0,h⌉) = ϵ_1ϵ_0 ⌈ g_1,h⌉ ⌈ g_0,h ⌉ (⌈γ_0 ,γ_1 ⌉-1 ) .
where γ_0 (g_0^+,h^-) and γ_1 (h^+, g_1^-). We will also freely use that if x^+=y^- or x^-=y^+, then _⌈ x,y⌉=0, while if x^+=y^+ or x^-=y^- then _⌈ x,y⌉=1.
0.2 truecm
First case: ϵ_0ϵ_1 =0.
In that case, we have equality.
0.2 truecm
Second case: 0<|ϵ_0ϵ_1| <1. In that situation one of the endpoints of h is an endpoint of g_0 or g_1.
* Firstly, the cases g_0^± = h^- or g_1^± = h^+ are impossible since h meets g_1 before g_0.
* Secondly if g_1^+= h^- or g_0^- = h^+, then
_⌈ g_1,h⌉_⌈ g_0,h⌉(ρ) = 0.
* Finally, if g_1^-= h^- or g_0^+ = h^+, then either γ_0^+=γ_1^+ or γ_0^-=γ_1^-. In both cases, _⌈γ_0,γ_1⌉(ρ)=1 and hence
it follows that _⌈ g_1 ,h , g_0⌉- ⌈ g_1,h⌉⌈ g_0,h ⌉(ρ)=0.
0.2 truecm
Final case: |ϵ_0ϵ_1|=1.
As both g_0 and g_1 intersect h and ρ has a positive cross ratio, then by proposition <ref>,
_⌈ g_1,h⌉ ⌈ g_0,h ⌉(ρ)=_⌈ g_1,h⌉(ρ) _⌈ g_0,h ⌉(ρ)>0 .
We can then split in two cases as in figure (<ref>):
* If ϵ_0ϵ_1>0, then γ_0 and γ_1 do not intersect, and (h^-,g_0^+,h^+,g_1^-) is a cyclically oriented quadruple. Hence, by definition
_⌈γ_0,γ_1⌉(ρ)>1. See figure (<ref>))
* If now ϵ_0ϵ_1<0, then γ_0 and γ_1 intersect, and by proposition <ref>
_⌈γ_0,γ_1⌉(ρ)<1.(see figure (<ref>))
Combining both cases, we get that
ϵ_0ϵ_1 (_⌈γ_0,γ_1⌉(ρ)-1)> 0 .
The result follows from equations (<ref>) and (<ref>).
Then we have
Assume ρ is a projective representation with a positive cross ratio. Let g_1, g_0 be such that ϵ(g_0,g_1) = 0. Then we have the inequality
_[g_1,[g_0, h]](ρ)≥0 .
Furthermore the inequality is strict if and only if h intersects both g_0, g_1 in their interiors.
The Jacobi identity for the swapping bracket <ref>
gives that [g_0,[g_1,h]]=[g_1,[g_0,h]], since [g_0,g_1]=0. Thus the proof follows from lemmas <ref> and <ref>.
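To make these inequalities concrete, here is a small numerical sketch in a toy Fuchsian model. For illustration only, we assume the limit curve ξ(x)=(cos(x/2),sin(x/2)) on the boundary circle and the dual curve given by the annihilator of ξ, so that the evaluation pairing becomes sin((x-X)/2) and the correlation of a ghost polygon is the explicit product of sines coded below. The script checks that the correlation of a crossing pair of geodesics lies strictly between 0 and 1, that the cross ratio exceeds 1 on a cyclically oriented quadruple, and evaluates the integrand of the convexity lemma on one configuration where h meets g_1 before g_0; the sign convention for ϵ and the orientation of h are assumptions of this toy model, not statements about the general case.

```python
import math, random

def pairing(P, m):
    # toy Fuchsian pairing <xi*(P), xi(m)> = sin((m - P)/2), angles in degrees
    return math.sin(math.radians(m - P) / 2.0)

def corr(*geods):
    # correlation function of the ghost polygon <g_1,...,g_p>, with g = (g+, g-)
    p = len(geods)
    num = den = 1.0
    for i in range(p):
        num *= pairing(geods[i][0], geods[(i + 1) % p][1])
        den *= pairing(geods[i][0], geods[i][1])
    return num / den

def crossing(g, h):
    # do the chords with endpoints g and h cross?
    (gp, gm), (hp, hm) = g, h
    inside = lambda t: 0 < (t - gp) % 360 < (gm - gp) % 360
    return inside(hp) != inside(hm)

random.seed(0)
for _ in range(1000):
    a = random.sample(range(360), 4)
    g, h = (a[0], a[1]), (a[2], a[3])
    if crossing(g, h):
        assert 0 < corr(g, h) < 1        # positive cross ratio in the toy model

X, Y, y, x = 0, 45, 90, 135              # (X, Y, y, x) cyclically oriented
print(corr((X, x), (Y, y)))              # about 1.41, larger than 1

# integrand of the convexity lemma on one configuration: h meets g1, then g0
h, g1, g0 = (180, 0), (330, 30), (150, 210)
e0, e1 = 1, -1                           # assumed signs of eps(g0, h), eps(g1, h)
print(e0 * e1 * (corr(g1, h, g0) - corr(g1, h) * corr(g0, h)))   # about +0.018 > 0
```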
§.§ Proof of the convexity theorem <ref> and the sine formula theorem <ref>
By the representation theorem and its corollary <ref>
{_μ,{_μ,_ν}}(ρ)=∫_^3/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) .
Since by lemma <ref>, the integrand is non-negative, the integral is non-negative.
If i(μ,ν)= 0 then for all g in the support of μ and h in the support of ν, |ϵ(g,h)| ≠ 1. Thus by lemma <ref> for g_0,g_1 in the support of μ and h in the support of ν then
_[g_1,[g_0, h]](ρ) = 0 .
Thus the integral is zero for i(μ,ν)= 0.
If i(μ,ν) ≠ 0 then there exist g_0, h in the supports of μ,ν respectively such that |ϵ(g_0,h)| = 1. If h descends to a closed geodesic, then it is invariant under an element γ of Γ, and we let g_1 = γ g_0. Then the triple (g_1,g_0,h) is in the support of μ⊗μ⊗ν. Thus _[g_1,[g_0, h]](ρ) > 0 and the integral is positive. If h does not descend to a closed geodesic, then, as any geodesic current is a limit of discrete geodesic currents, it follows that h intersects g_1 = γ g_0 for some γ in Γ. Again the triple (g_1,g_0,h) is in the support of μ⊗μ⊗ν with _[g_1,[g_0, h]](ρ) > 0. Thus the integral is positive.
This completes the proof of Theorem <ref>. For Theorem <ref>, we use the Jacobi identity for the swapping bracket to get
∫_^3/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) =2∫_^3,+/Γ_[g_1,[g_0, h]](ρ) μ̣(g_0)μ̣(g_1)ν̣(h) .
Then we use lemma <ref>.
§ THE JACOBI IDENTITY FOR A Θ-GHOST BRACKET
We now explain why the Jacobi identity for polygons with disjoint sets of vertices is satisfied.
§.§ Linking number on a set
Let us make a slightly more general construction, recalling some constructions of <cit.>. Let Z be a set, and let 𝒢_1 be the set of pairs of points of Z. We denote temporarily the pair (X,x) with the symbol Xx. We also define a linking number on Z to be a map from
Z^4 to a commutative ring 𝔸
(X,x,Y,y)→ϵ(Xx,Yy),
so that for all points X,x,Y,y,Z,z the following conditions are satisfied
ϵ(Xx,Yy)+ϵ(Xx,yY)=ϵ(Xx,Yy)+ϵ(Yy,Xx) = 0 ,
ϵ(zy,XY)+ϵ(zy,YZ)+ϵ(zy,ZX) = 0 ,
ϵ(Xx,Yy).ϵ(Xy,Yx) = 0 .
The second author proved in <cit.> the
Let (X,x,Z,z,Y,y) be 6 points on the set Z equipped with a linking number, then
ϵ(Xy,Zz)+ϵ(Yx,Zz)=ϵ(Xx,Zz)+ϵ(Yy,Zz).
Moreover, if {X,x}∩{Y,y}∩{Z,z}=∅,
then
ϵ(Xx,Yy)ϵ(Xy,Zz)+ϵ(Zz,Xx)ϵ(Zx,Yy)+ϵ(Yy,Zz)ϵ(Yz,Xx) = 0 ,
ϵ(Xx,Yy)ϵ(Yx,Zz)+ϵ(Zz,Xx)ϵ(Xz,Yy)+ϵ(Yy,Zz)ϵ(Zy,Xx) = 0 .
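These identities are easy to test numerically. The sketch below implements one assumed normalisation of the linking number for points on a circle (the same convention as in the earlier sketches) and checks the axioms together with the relations of the lemma on random configurations of six distinct points; configurations with repeated points, where the precise convention of <cit.> matters, are deliberately not exercised.

```python
import random
from fractions import Fraction

def cyc(a, b, c):
    # sign of the counterclockwise cyclic order of three angles; 0 if two coincide
    if len({a, b, c}) < 3:
        return 0
    return 1 if ((b - a) % 360) < ((c - a) % 360) else -1

def eps(X, x, Y, y):
    # assumed normalisation of the linking number eps(Xx, Yy)
    return Fraction(cyc(X, Y, x) - cyc(X, y, x), 2)

random.seed(0)
for _ in range(2000):
    X, x, Y, y, Z, z = random.sample(range(360), 6)
    # the axioms of a linking number
    assert eps(X, x, Y, y) + eps(X, x, y, Y) == 0
    assert eps(X, x, Y, y) + eps(Y, y, X, x) == 0
    assert eps(z, y, X, Y) + eps(z, y, Y, Z) + eps(z, y, Z, X) == 0
    assert eps(X, x, Y, y) * eps(X, y, Y, x) == 0
    # the relations of the lemma
    assert eps(X, y, Z, z) + eps(Y, x, Z, z) == eps(X, x, Z, z) + eps(Y, y, Z, z)
    assert (eps(X, x, Y, y) * eps(X, y, Z, z) + eps(Z, z, X, x) * eps(Z, x, Y, y)
            + eps(Y, y, Z, z) * eps(Y, z, X, x)) == 0
    assert (eps(X, x, Y, y) * eps(Y, x, Z, z) + eps(Z, z, X, x) * eps(X, z, Y, y)
            + eps(Y, y, Z, z) * eps(Z, y, X, x)) == 0
print("all identities hold on the sampled configurations")
```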
§.§ The ghost algebra of a set with a linking number
§.§.§ Ghost polygons and edges
We say a geodesic is a pair of points in . We write g=(g_-,g_+). A configuration G ⌈ g_1,… g_n⌉ is a tuple of geodesics (g_1,… g_n) up to cyclic ordering, with n≥ 1. The positive integer n is the rank of the configuration.
To a configuration of rank greater than 1, we associate a ghost polygon, also denoted G, which is a tuple G = (θ_1,…,θ_2n) where g_i = θ_2i are the visible edges and ϕ_i = θ_2i+1 ((g_i+1)_-,(g_i)_+) are the ghost edges.
The ghost index i_e of an edge e is an element of ℤ/2ℤ which is zero for a visible edge and one for a ghost edge. In other words
i_θ_k k [2].
We will then denote by G_∘ the set of edges (ghost or visible) of the configuration G.
Geodesics, or rank 1 configurations, play a special role. In that case G=⌈ g⌉; by convention G_∘ consists of the single element g, which is a visible edge.
§.§.§ Opposite edges
We now define the opposite of an edge in a reduced configuration. Recall that a configuration is a tuple up to cyclic permutation. in this section we will denote
⌊ g_1,… g_n⌋,
a tuple. We denote by ∙ the concatenation of tuples:
⌊ g_1,… g_n⌋∙⌊ h_1,… h_p⌋⌊ g_1,… g_n, h_1,… h_p⌋ .
We introduce the following notation. If θ is a visible edge of G, we define θ_+ = θ_- = θ and if θ is a ghost edge of G then we define θ_+ to be the visible edge after θ and θ_- the visible edge before. The opposite of an edge is
θ^* ⌊θ_+…θ_-⌋
where the ordering is an increasing ordering of visible edges from θ_+ to θ_-.
More specifically
* For a visible edge g_i, the opposite is the tuple
g_i^* = ⌊ g_i, g_i+1… g_i-1g_i⌋,
* while for a ghost edge ϕ_i the opposite is
ϕ_i^* = ⌊ g_i+1g_i+2… g_i-1 g_i⌋.
* If ⌈ h⌉ is a rank 1 configuration, the opposite of its unique edge h is h itself.
§.§ Ghost bracket and our main result
We now define the ghost algebra of to be the polynomial algebra 𝒜_0 freely generated by ghost polygons and geodesics. The ghost algebra is equipped with the antisymmetric ghost bracket, given on the generators 𝒜 by, for two ghosts polygons B and C and geodesics g and h,
[B,C] = ∑_(b,c)∈ B_∘× C_∘ϵ(c,b)(-1)^i_b+i_c⌈ c^* b^*⌉ .
It is worth writing down the brackets of two geodesics g and h, as well as the bracket of a geodesic g and a configuration B,
-[g,B]=[B,g] = ∑_b∈ B_∘ϵ(g,b)(-1)^i_b+1⌈ g,b^*⌉ ,
-[g,h]=[h,g] = ϵ(g,h) ⌈ g, h⌉ .
Our goal in this section is to prove
Let A, B, C be three polygons with no common vertices:
V_A∩ V_B∩ V_C=∅,
where V_G is the set of vertices of the polygon G. Then the ghost bracket satisfies the Jacobi identity for A, B, C:
[A,[B,C]]+[B,[C,A]]+[C,[A,B]] =0 .
As the formula for the bracket differs based on whether ghost polygons are rank 1 or higher, we will need to consider the different cases based on the rank of the three elements. We will denote rank 1 elements by a,b,c and higher rank ones by A,B,C. For a, b and c edges in A, B, C, ghost or otherwise, we label their ghost indexes by i_a ,i_b, i_c and their opposites by a^*, b^*, c^*.
§.§ Preliminary: more about opposite edges
Let us also use the following notation: if θ_k and θ_l are two edges, ghost or visible, of a ghost polygon ⌈ g_1,…, g_n⌉, then
G(θ_k,θ_l) ≔ ⌊θ_k_+…θ_l_-⌋ ,
where again this is an increasing ordering of visible edges. The tuple G(θ_k,θ_l) is an “interval” defined by θ_k and θ_l.
In order to continue our description of the triple brackets, we need to understand, in the above formula, what the opposites of the edges of ⌈ b^*,c^*⌉ are. Our preliminary result is the following
Let B and C be two ghost polygons, b and c edges in B and C respectively.
Let ϕ be an edge in ⌈ b^*,c^*⌉, then we have the following eight possibilities
1: Either ϕ is an edge of B, different from b or a ghost edge, then
ϕ^*= G(ϕ,b)∙ c^*∙ G(b,ϕ) ,
2: b is a visible edge, ϕ is the initial edge b in b^* and then
ϕ^*=b^*∙ c^*∙ b .
3: b is a visible edge, ϕ is the final edge b in b^* and then
ϕ^*=b∙ c^*∙ b^* .
4, 5, 6: Or ϕ is an edge of C, and the three items above apply with some obvious symmetry, giving three more possibilities.
7: Or ϕ is the edge u_b,c ≔ (c_-^-,b_+^+) of ⌈ b^*,c^*⌉, which is neither an edge of B nor an edge of C, a ghost edge, and
ϕ^*=⌊ c^*,b^*⌋ .
8: ϕ is the edge u_c,b ≔ (b_-^-,c_+^+) of ⌈ b^*,c^*⌉, which is neither an edge of B nor an edge of C, a ghost edge, and
ϕ^*=⌊ b^*,c^*⌋ .
This follows from careful book-keeping and the previous definitions.
§.§ Cancellations
Let us introduce the following quantities for any triple of polygons A, B, C, whatever their rank. They correspond to the cases observed in lemma <ref>:
Case 1: P_1(A,B,C) ≔ ∑_(a,c,b,ϕ)∈ A_∘× C_∘× B_∘^2, ϕ≠b ϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉ ,
Case 3: P_2(A,B,C) ≔ ∑_(a,b,c,ϕ)∈ A_∘× B_∘× C_∘^2, ϕ≠c ϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^*∙ G(ϕ,c)∙ b^*∙ G(c,ϕ) ⌉ ,
Case 4: Q_1(A,B,C) ≔ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,b)ϵ(c,b)(-1)^i_a+i_c⌈ a^*∙ b ∙ c^*∙ b^* ⌉ ,
Case 5: Q_2(A,B,C) ≔ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,b)ϵ(c,b)(-1)^i_a+i_c⌈ a^*∙ b^* ∙ c^*∙ b⌉ ,
Case 6: R_1(A,B,C) ≔ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ a^*∙ c ∙ b^*∙ c^* ⌉ ,
Case 7: R_2(A,B,C) ≔ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ a^*∙ c^* ∙ b^*∙ c ⌉ ,
Case 8: S_1(A,B,C) ≔ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,u_b,c)ϵ(c,b)(-1)^i_a+i_c+i_b⌈ a^*∙ c^* ∙ b^* ⌉ ,
Case 4: S_2(A,B,C) ≔ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,u_c,b)ϵ(c,b)(-1)^i_a+i_c+i_b⌈ a^*∙ b^* ∙ c^* ⌉ .
We then have
We have the following cancellations, where the two last ones use the hypothesis (<ref>)
P_1(A,B,C)+P_2(C,A,B) = 0 , first cancellation ,
Q_1(A,B,C)+R_2(B,C,A) = 0 , second cancellation ,
S_1(A,B,C)+S_1(B,C,A)+S_1(C,A,B) = 0 , hexagonal cancellation-1 ,
S_2(A,B,C)+S_2(B,C,A)+S_2(C,A,B) = 0 , hexagonal cancellation-2 .
For the first cancellation, we have
P_1(A,B,C)+P_2(C,A,B)
= ∑_(a,c,b,ϕ)∈ A_∘× C_∘× B_∘^2, ϕ≠b ϵ(a,ϕ)ϵ(c,b)(-1)^i_a+i_ϕ+i_b+i_c⌈ a^* ∙ G(ϕ,b)∙ c^*∙ G(b,ϕ) ⌉
+ ∑_(c,a,b,ϕ)∈ C_∘× A_∘× B_∘^2, ϕ≠b ϵ(c,ϕ)ϵ(b,a) (-1)^i_a+i_ϕ+i_b+i_c⌈ c^* ∙ G(ϕ,b)∙ a^*∙ G(b,ϕ) ⌉
= ∑_(a,c)∈ A_∘× C_∘, (b_0,b_1)∈ B_∘^2, b_0≠b_1 (ϵ(a,b_1)ϵ(c,b_0) + ϵ(c,b_0)ϵ(b_1,a) )(-1)^i_a+i_b_0+i_b_1+i_c⌈ a^* ∙ G(b_1,b_0)∙ c^*∙ G(b_0,b_1) ⌉=0 ,
where we used the change of variables (b_0,b_1)=(b,ϕ) in the second line and (b_0,b_1)=(ϕ,b) in the third, together with the cyclic invariance.
The second cancellation follows by a similar argument
R_1(A,B,C)+Q_2(B,C,A)
= ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(a,c)ϵ(c,b)(-1)^i_a+i_b⌈ b^*∙ c^* ∙ a^*∙ c ⌉
+ ∑_(a,b,c)∈ A_∘× B_∘× C_∘ϵ(b,c)ϵ(a,c)(-1)^i_a+i_b⌈ b^*∙ c^* ∙ a^*∙ c ⌉ =0 .
Finally the hexagonal cancellation-1 follows from the hexagonal relation
ϵ(a,u_b,c)ϵ(c,b)+ϵ(b,u_c,a)ϵ(a,c)+ϵ(c,u_a,b)ϵ(b,a)=0 ,
which is itself a consequence of lemma <ref> and the assumption (<ref>). A similar argument works for the second hexagonal relation.
§.§ The various possibilities for the triple bracket
We have to consider the different possibilities for the triple bracket [A,[B,C]], taking into account whether B and C have rank 1.
The following lemma will be a consequence of lemma <ref>. We will also use the following conventions:
if Q_1(U,V,W)=Q_2(U,V,W) , then we write Q(U,V,W) ≔ Q_1(U,V,W)=Q_2(U,V,W) ,
if R_1(U,V,W)=R_2(U,V,W) , then we write R(U,V,W) ≔ R_1(U,V,W)=R_2(U,V,W) .
We have the following four possibilities (independent of the rank of U) for the triple brackets
* The polygons V and W have both rank greater than 1, then
[U,[V,W]] = P_1(U,V,W)+P_2(U,V,W)+Q_1(U,V,W)+Q_2(U,V,W)
+ R_1(U,V,W)+R_2(U,V,W)+ S_1(U,V,W)+S_2(U,V,W) .
* Both v ≔ V and w ≔ W have rank 1, then
[U,[v,w]] = Q(U,v,w)+R(U,v,w)+S_1(U,v,w)+S_2(U,v,w) .
* The polygon W has rank greater than 1, while v ≔ V has rank 1, then
[U,[v,W]] = P_2(U,v,W)+Q(U,v,W)+R_1(U,v,W)+R_2(U,v,W)
+ S_1(U,v,W)+S_2(U,v,W) .
* The polygon V has rank greater than 1, while w ≔ W has rank 1, then
[U,[V,w]] = P_1(U,V,w)+Q_1(U,V,w)+Q_2(U,V,w)+R(U,V,w)
+ S_1(U,V,w)+S_2(U,V,w) .
This is deduced from lemma <ref>. Indeed, that lemma yields the following:
* if B is a geodesic, then case 1 does not happen, and case 4 and case 5 coincide, thus
P_1(U,V,W)=0 , Q_1(U,V,W)=Q_2(U,V,W) ≕ Q(U,V,W) .
* Symmetrically, if C is a geodesic, then case 2 does not happen, and case 6 and case 7 coincide, thus
P_2(U,V,W)=0 , R_1(U,V,W)=R_2(U,V,W) ≕ R(U,V,W) .
§.§ Proof of the Jacobi identity
We will freely use lemma <ref> in this paragraph.
The previous discussion gives
[A,[B,C]] = P_1(A,B,C)+P_2(A,B,C)+Q_1(A,B,C)+Q_2(A,B,C)
+ R_1(A,B,C)+R_2(A,B,C)+ S_1(A,B,C)+S_2(A,B,C) ,
[B,[C,A]] = P_1(B,C,A)+P_2(B,C,A)+Q_1(B,C,A)+Q_2(B,C,A)
+ R_1(B,C,A)+R_2(B,C,A)+ S_1(B,C,A)+S_2(B,C,A) ,
[C,[A,B]] = P_1(C,A,B)+P_2(C,A,B)+Q_1(C,A,B)+Q_2(C,A,B)
+ R_1(C,A,B)+R_2(C,A,B)+ S_1(C,A,B)+S_2(C,A,B) .
The proof of the Jacobi identity then follows from the cancellations (<ref>).
In that case, writing a ≔ A, b ≔ B and c ≔ C, we have
[a,[b,c]] = Q(a,b,c)+R(a,b,c)+S_1(a,b,c)+S_2(a,b,c) ,
[b,[c,a]] = Q(b,c,a)+R(b,c,a)+S_1(b,c,a)+S_2(b,c,a) ,
[c,[a,b]] = Q(c,a,b)+R(c,a,b)+S_1(c,a,b)+S_2(c,a,b) .
The Jacobi identity follows from the cancellations (<ref>).
Assume that a ≔ A is a geodesic, while B and C have rank greater than 1. Then
[a,[B,C]] = P_1(a,B,C)+P_2(a,B,C)+Q_1(a,B,C)+Q_2(a,B,C)
+ R_1(a,B,C)+R_2(a,B,C)+ S_1(a,B,C)+S_2(a,B,C) ,
[C,[a,B]] = P_2(C,a,B)+Q(C,a,B)+R_1(C,a,B)+R_2(C,a,B)
+ S_1(C,a,B)+S_2(C,a,B) ,
[B,[C,a]] = P_1(B,C,a)+Q_1(B,C,a)+Q_2(B,C,a)+R(B,C,a)
+ S_1(B,C,a)+S_2(B,C,a) .
Then again the cancellations (<ref>) yield the Jacobi identity in that case.
We have here that A has rank greater than 1, while b ≔ B and c ≔ C are geodesics; then
[A,[b,c]] = Q(A,b,c)+R(A,b,c)+S_1(A,b,c)+S_2(A,b,c) ,
[b,[c,A]] = P_2(b,c,A)+Q(b,c,A)+R_1(b,c,A)+R_2(b,c,A)
+ S_1(b,c,A)+S_2(b,c,A) ,
[c,[A,b]] = P_1(c,A,b)+Q_1(c,A,b)+Q_2(c,A,b)+R(c,A,b)
+ S_1(c,A,b)+S_2(c,A,b) .
For the last time, the cancellations (<ref>) yield the Jacobi identity in that case.
§ A LEMMA IN HYPERBOLIC GEOMETRY
For any geodesics g and g_0, where g_0 is parametrized by arc length, the following holds.
If R> 1 and d(g_0(R), g)<2, while d(g_0(R-1),g)≥ 2, then
d(g_0(0),g)≥ R .
We let h be a geodesic with d(g_0(R),h) = d(g_0(R-1),h) = 2. Then we observe that d(g_0(0),g) ≥ d(g_0(0), h). We drop perpendiculars from g_0(R-1), g_0(R-1/2) and g_0(0) to h. The perpendicular from g_0(R-1) to h has length 2; let a be the length of the perpendicular from g_0(R-1/2). Then considering the Lambert quadrilateral with opposite sides of length a, 2 gives
sinh(a)cosh(1/2) = sinh(2) , sinh(a)cosh( R-1/2)=sinh D ,
where D = d(g_0(0), h). It follows easily that
e^D/2 ≥ sinh(D) = sinh(a) cosh(R-1/2) ≥ (sinh(a)/2) e^(R-1/2) .
Thus
d(g_0(0),g) ≥ D ≥ R-1/2+log(sinh(a)) ≥ R ,
where the last inequality holds since sinh(a) = sinh(2)/cosh(1/2) > √e.
§ FUNDAMENTAL DOMAIN AND L^1-FUNCTIONS
If Γ is a countable group acting on X preserving a measure μ, a μ-fundamental domain for this action is a measurable set Δ so that
∑_γ∈Γ 1_γ(Δ)=1, μ-almost everywhere.
A function F on X is Γ-invariant if for every γ in Γ,
F=F∘γ, μ–almost everywhere. Then
For any Γ-invariant positive function F, if Δ_0 and Δ_1 are fundamental domains, then
∫_Δ_0 F dμ=∫_Δ_1 F dμ .
Using the Γ-invariance of F,
∫_Δ_0F dμ=∑_γ∈Γ∫_X F· 1_Δ_0∩γ(Δ_1) dμ=∑_η∈Γ∫_X F· 1_η(Δ_0)∩Δ_1 dμ= ∫_Δ_1F dμ .
We define, by a slight abuse of language, if Γ admits a μ-fundamental domain Δ on X,
∫_X/Γ F dμ ≔ ∫_Δ F dμ .
Let Γ be a group acting properly on X_0 and X_1 preserving μ_0 and μ_1 respectively. Assume that Δ_0 – respectively Δ_1 – is a fundamental domain for the action of Γ on X_0 and X_1, then
Let F be a positive function on X_0× X_1 which is Γ invariant, where Γ acts diagonally and the action on each factor preserves measures called μ_0 and μ_1 and admits a fundamental domain called Δ_0 and Δ_1, then
∫∫_Δ_0× X_1F dμ_0⊗ dμ_1=∫∫_X_0×Δ_1F dμ_0⊗ dμ_1 .
Indeed Δ_0× X_1 and X_0×Δ_1 are both fundamental domains for the diagonal action of Γ on X_0× X_1. The lemma then follows from the previous one and Fubini's theorem.
Let f be a continuous function defined on a topological space X. Let μ be a Radon measure on X. Then the following lemma holds.
Assume that there exists a real constant k so that for every exhausting sequence K of compacts of X,
lim_m→∞∫_K_m f dμ=k.
Then f belongs to L^1(X,μ) and ∫_X f dμ=k.
Atiyah:1983
Michael F Atiyah and Raoul Bott, The Yang-Mills equations over Riemann surfaces, Philos. Trans. Roy. Soc. London Ser. A 308 (1983), no. 1505, 523–615.
BGLPW
Jonas Beyrer, Olivier Guichard, François Labourie, Beatrice Pozzetti, and Anna Wienhard, Positivity, cross ratios and the collar lemma.
Bonahon:1988
Francis Bonahon, The geometry of Teichmüller space via geodesic currents, Inventiones Mathematicae 92 (1988), no. 1, 139–162.
Bonahon:2014woa
Francis Bonahon and Guillaume Dreyer, Hitchin characters and geodesic laminations, Acta Mathematica 218 (2017), no. 2, 201–295.
Bridgeman:2020vg
Martin Bridgeman, Richard Canary, and François Labourie, Simple length rigidity for Hitchin representations, Adv. Math. 360 (2020), 106901, 61. 4035950
Bridgeman:2015ba
Martin J Bridgeman, Richard Canary, François Labourie, and Andres Sambarino, The pressure metric for Anosov representations, Geometric And Functional Analysis 25 (2015), no. 4, 1089–1179.
Choi:2020aa
Suhyoung Choi, Hongtaek Jung, and Hong Chan Kim, Symplectic coordinates on PSL_3( R)-Hitchin components, Pure Appl. Math. Q. 16 (2020), no. 5, 1321–1386. 4220999
Fock:2006a
Vladimir V Fock and Alexander B Goncharov, Moduli spaces of local systems and higher Teichmüller theory, Publ. Math. Inst. Hautes Études Sci. (2006), no. 103, 1–211.
Goldman:1984
William M Goldman, The symplectic nature of fundamental groups of surfaces, Advances in Mathematics 54 (1984), no. 2, 200–225.
Goldman:1986
, Invariant functions on Lie groups and Hamiltonian flows of surface group representations, Inventiones Mathematicae 85 (1986), no. 2, 263–302.
Kerckhoff:1983th
Steven P. Kerckhoff, The Nielsen realization problem, Ann. of Math. (2) 117 (1983), no. 2, 235–265. 690845
Labourie:2020tv
François Labourie and Jérémy Toulisse, Quasicircles and quasiperiodic surfaces in pseudo-hyperbolic spaces, arXiv:2010.05704, 2020.
Labourie:2006
François Labourie, Anosov flows, surface groups and curves in projective space, Inventiones Mathematicae 165 (2006), no. 1, 51–114.
Labourie:2005
, Cross ratios, surface groups, PSL(n, R) and diffeomorphisms of the circle, Publ. Math. Inst. Hautes Études Sci. (2007), no. 106, 139–213.
Labourie:2013ka
, Lectures on representations of surface groups, Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zürich, 2013.
Labourie:2012vka
, Goldman algebra, opers and the swapping algebra, Geometry and Topology 22 (2018), no. 3, 1267–1348.
McShane-Lab
François Labourie and Gregory McShane, Cross ratios and identities for higher Teichmüller-Thurston theory, Duke Mathematical Journal 149 (2009), no. 2, 279 – 345.
Labourie:2018fj
François Labourie and Richard Wentworth, Variations along the Fuchsian locus, Annales Scientifiques de l'Ecole Normale Supérieure. Quatrième Série 51 (2018), no. 2, 487–547.
Martone:2019uf
Giuseppe Martone and Tengren Zhang, Positively ratioed representations, Comment. Math. Helv. 94 (2019), no. 2, 273–345.
Nie:2013tu
Xin Nie, The quasi-Poisson Goldman formula, J. Geom. Phys. 74 (2013), 1–17.
Potrie:2014uta
Rafael Potrie and Andrés Sambarino, Eigenvalues and Entropy of a Hitchin representation, Inventiones Mathematicae (2017), no. 3, 885–925.
Sun:2021tj
Zhe Sun, Rank n swapping algebra for PGL_n Fock-Goncharov X moduli space, Math. Ann. 380 (2021), no. 3-4, 1311–1353.
Sun:2020vm
Zhe Sun, Anna Wienhard, and Tengren Zhang, Flows on the PGL(V)-Hitchin component, Geom. Funct. Anal. 30 (2020), no. 2, 588–692.
Sun:2017
Zhe Sun and Tengren Zhang, The Goldman symplectic form on the PGL(V)-Hitchin component, arXiv:1709.03589.
Turaev:1991wk
Vladimir G Turaev, Skein quantization of Poisson algebras of loops on surfaces, Annales Scientifiques de l'Ecole Normale Supérieure. Quatrième Série 24 (1991), no. 6, 635–704.
Wolpert:1981vt
Scott Wolpert, An elementary formula for the Fenchel-Nielsen twist, Comment. Math. Helv. 56 (1981), no. 1, 132–135.
Wolpert:1983td
Scott A Wolpert, On the Symplectic Geometry of Deformations of a Hyperbolic Surface, Annals of Mathematics 117 (1983), no. 2, 207–234.
|
http://arxiv.org/abs/2307.05099v1 | 20230711081701 | Pseudomagnetic suppression of non-Hermitian skin effect | [
"Hau Tian Teo",
"Subhaskar Mandal",
"Yang Long",
"Haoran Xue",
"Baile Zhang"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"physics.optics"
] |
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore
[email protected]
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore
[email protected]
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore
Centre for Disruptive Photonic Technologies, Nanyang Technological University, Singapore 637371, Singapore
It has recently been shown that the non-Hermitian skin effect can be suppressed by magnetic fields. In this work, using a two-dimensional tight-binding lattice, we demonstrate that a pseudomagnetic field can also lead to the suppression of the non-Hermitian skin effect. With an increasing pseudomagnetic field, the skin modes are found to be pushed into the bulk, accompanied by the reduction of skin topological area and the restoration of Landau level energies. Our results provide a time-reversal invariant route to localization control and could be useful in various classical wave devices that are able to host the non-Hermitian skin effect but inert to magnetic fields.
Pseudomagnetic suppression of non-Hermitian skin effect
Baile Zhang
=======================================================
§ INTRODUCTION
Due to the absence of Hermiticity, non-Hermitian systems can exhibit many unprecedented phenomena without Hermitian counterparts <cit.>. The inherent point-gap topology in non-Hermitian systems leads to the emergence of the non-Hermitian skin effect (NHSE), where an extensive number of eigenstates accumulate at the boundaries <cit.>. Associated with the failure of conventional Bloch band theory and the breakdown of bulk-boundary correspondence in topological systems <cit.>, the NHSE has been successfully observed in a few platforms <cit.>, leading to potential applications in wave manipulation, lasing and sensing <cit.>.
In contrast to boundary-localized skin modes, a magnetic field can induce Landau levels whose eigenstates are localized in the bulk <cit.>. Recent theoretical studies show that magnetic fields can lead to a suppression of the NHSE: with an increasing magnetic field, the skin modes are gradually pushed into the bulk <cit.>. Since this suppression relies on magnetic fields that induce bulk-localized Landau levels, the breaking of time-reversal symmetry may appear to be intrinsic to the suppression of the boundary-localized NHSE.
Following the same motivation of spatially varying gauge fields, a time-reversal-invariant partner of the magnetic field, namely the pseudomagnetic field (PMF), has also been widely studied. Interestingly, using PMFs, which are artificially constructed from spatially inhomogeneous gauge fields, Landau levels and the associated bulk-localized modes can be induced even in magnetic-free systems without breaking time-reversal symmetry <cit.>. This idea has been adopted in various classical wave systems to mimic magnetic-like effects and, in particular, to realize bulk-localized Landau modes in time-reversal-invariant systems <cit.>.
Therefore, the PMF provides a potential alternative for suppressing the NHSE without breaking time-reversal symmetry. In addition, while a magnetic field acts on systems with charged particles such as electrons, current platforms for realizing the NHSE are mostly magnetically inert <cit.>. This separation in physical systems poses a challenge to exploiting the competition between the NHSE and magnetic fields. Since the PMF hosts localization mechanisms similar to those of magnetic fields <cit.>, its more feasible implementation suggests a novel type of suppression of the NHSE, thus offering a promising route for wave localization control in systems inert to magnetic fields.
In this work, we show that, instead of a real magnetic field, a PMF can also suppress the NHSE. We construct a two-dimensional lattice model with both NHSE and a PMF, induced by nonreciprocal hoppings and inhomogeneous hoppings, respectively. The NHSE can occur in either x or y direction, depending on where the nonreciprocal hoppings are implemented. We find that, for both cases, the NHSE will be suppressed when the PMF is introduced, as revealed by the calculations of skin mode profiles, skin topological areas and Landau level spectra. The suppression is prominent for energies within the first few Landau levels, where the effective theory of the PMF stays valid. Moreover, our model can be directly mapped to several realistic physical settings in photonic, acoustic and circuit systems, therefore paving a novel way to localization control for classical wave devices without breaking time-reversal symmetry.
§ GENERATION OF THE PMF
We consider a two-dimensional lattice model with π flux in each plaquette, as illustrated in Fig. <ref>. This lattice consists of nonreciprocal hoppings t_±=t±δ_x along x direction, together with nonreciprocal and dimerized hoppings J_1± = J_1±δ_y, J_2± = J_2±δ_y along y direction. Half of the y-directional hoppings are set to be negative (i.e., -J_1± and -J_2±), which introduce the π flux. Note that all the hopping parameters are real. The Bloch Hamiltonian of this lattice is
ℋ=
[ 0 T_x+Δ_x V+Δ_y 0; T_x+Δ_x 0 0 -(V^∗+Δ_y); V^∗+Δ_y 0 0 T_x+Δ_x; 0 -(V+Δ_y) T_x+Δ_x 0 ],
where T_x=2tcos(k_x/2), V=J_1 e^ik_y/2+J_2 e^-ik_y/2, Δ_x=-2iδ_xsin(k_x/2) and Δ_y=-2iδ_ysin(k_y/2).
Without loss of generality, we set the lattice constant as a=1 thereafter and fix hopping amplitudes t and J_2 to be 1 and 0.5, respectively. As we show below, the hopping dimerization and nonreciprocity can be used to generate a PMF and the NHSE, respectively. Therefore, we adopt this lattice to study the interplay between these two effects.
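For readers who wish to reproduce the calculations below, a minimal numerical transcription of the Bloch Hamiltonian in Eq. (<ref>) is sketched here; the function name and the use of numpy are our own choices and are not part of the model itself.

import numpy as np

def bloch_hamiltonian(kx, ky, t=1.0, J1=0.5, J2=0.5, dx=0.0, dy=0.0):
    # dx, dy play the role of delta_x, delta_y (nonreciprocal parts of the hoppings)
    Tx = 2 * t * np.cos(kx / 2)
    V = J1 * np.exp(1j * ky / 2) + J2 * np.exp(-1j * ky / 2)
    Dx = -2j * dx * np.sin(kx / 2)
    Dy = -2j * dy * np.sin(ky / 2)
    return np.array([
        [0, Tx + Dx, V + Dy, 0],
        [Tx + Dx, 0, 0, -(np.conj(V) + Dy)],
        [np.conj(V) + Dy, 0, 0, Tx + Dx],
        [0, -(V + Dy), Tx + Dx, 0],
    ], dtype=complex)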
We first discuss the generation of the PMF in such a lattice. To this end, we let the non-Hermitian parameters δ_x and δ_y vanish. We note that the Hermitian limit of this model was studied in Refs. <cit.>, which focus on the consequences of projective symmetry algebra induced by the π flux.
When J_1 and J_2 are equal, the model exhibits a four-fold band degeneracy at the corner of the Brillouin zone; when J_1 and J_2 differ, the four-fold degeneracy splits into two two-fold Dirac points at (k_x,k_y)=(τκ_x,π), where κ_x is related to the hoppings through 2tcos(κ_x/2)=|J_1-J_2| and τ=±1 is the valley index. To induce a uniform PMF, we introduce y-dependent hoppings ± J_1(y) of the following form:
J_1(y) = J_2 + 2tcos[1/2(π/2-2π y/N_yη)],
where N_y is the number of unit cells along y direction and η is a dimensionless factor that controls the PMF strength for fixed N_y. Under such a spatially inhomogeneous hopping texture, the position of the Dirac points varies linearly in space from (1/2+η)π to (1/2-η)π as y changes from -N_y/2 to N_y/2 [see Fig. <ref>(a)], yielding the gauge field:
A=(A_x,A_y) =(-τ2π/N_yη y, 0)
The corresponding PMF is
ℬ =∇× A= τ2π/N_yη
It is noteworthy that the field strength ℬ has opposite signs at opposite valleys. This indicates that the system is still time-reversal invariant, agreeing with the property of the PMF.
To see the effects of the PMF, we calculate the dispersion around τ=+1 valley with N_y=100 and η=0.2. As shown in Fig. <ref>(b), we can clearly see the formation of Landau levels, whose spacing matches the theoretical prediction elucidated in Appendix A:
E_N = sgn(N)ω√(|N|), N∈ℤ,
where
ω = √(2 v_x v_y |ℬ|)
is the cyclotron frequency and v_x,y are the group velocities. The eigenstates of the Landau levels of order N=0,1,2 are plotted in Figs. <ref>(c)-<ref>(e). As expected, they are all bulk-localized modes. By further increasing η, the eigenstates are squeezed further into the bulk due to a decreasing effective magnetic length, which makes it possible to manipulate the skin modes by tuning the strength of the PMF in our subsequent calculations.
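As a quick numerical check of Eqs. (<ref>)-(<ref>), the hopping profile, the PMF strength, the cyclotron frequency and the first few Landau levels can be evaluated from the parameters used above (a small sketch with our own variable names; the group velocities v_x=t/√2 and v_y=J_2+t/√2 are taken from Appendix A).

import numpy as np

t, J2, Ny, eta = 1.0, 0.5, 100, 0.2
vx, vy = t / np.sqrt(2), J2 + t / np.sqrt(2)
B = 2 * np.pi * eta / Ny                   # pseudomagnetic field strength at the tau = +1 valley
omega = np.sqrt(2 * vx * vy * abs(B))      # cyclotron frequency
E_N = [np.sign(N) * omega * np.sqrt(abs(N)) for N in range(-2, 3)]   # Landau levels N = -2 ... 2
J1 = lambda y: J2 + 2 * t * np.cos(0.5 * (np.pi / 2 - 2 * np.pi * y / Ny * eta))   # hopping profile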
§ GENERATION OF THE NHSE
Having introduced the PMF, we now set η to zero and turn to the generation of the NHSE. In this limit, we tune the non-Hermitian components δ_x and δ_y to induce the NHSE along the x and y directions, respectively. To investigate the NHSE, we conduct calculations under various boundary conditions. Henceforth, periodic and open boundary conditions along the x (y) direction are referred to as x(y)-PBC and x(y)-OBC, respectively.
We first illustrate the NHSE along x direction by setting (δ_x, δ_y)=(0.05,0). In this case, the eigenvalues under x-PBC and y-PBC form closed loops in the complex plane for each fixed k_y [Fig. <ref>(a)]. The spectral winding of the eigenvalues when k_x increases from -π to π can be captured by the winding number (for a fixed k_y):
w(E_0) = 1/2π i∫_-π^π dk_x d/dk_xlog det(ℋ(k_x)-E_0),
where E_0 is a reference energy in the complex plane. This winding number is a topological invariant of the point-gap topology, and a nonzero w indicates the emergence of the NHSE under OBC <cit.>. To see the NHSE, we plot the eigenvalues under x-OBC and y-PBC as cyan lines in Fig. <ref>(a), which form open arcs in the interior of the x-PBC and y-PBC spectra (the corresponding eigenmodes are skin modes). Therefore, for each fixed k_y, the corresponding 1D sub-system is a typical NHSE system. Accordingly, in a finite lattice (i.e., under x-OBC and y-OBC), the eigenstates are concentrated at the right boundary [Fig. <ref>(b)].
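Numerically, Eq. (<ref>) can be evaluated by accumulating the phase of det[ℋ(k_x)-E_0] as k_x sweeps the Brillouin zone. A sketch building on the bloch_hamiltonian function above (our own naming, not a prescription of this work) could read:

def winding_number(E0, ky, nk=600, **params):
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    dets = np.array([np.linalg.det(bloch_hamiltonian(k, ky, **params) - E0 * np.eye(4))
                     for k in ks])
    dets = np.append(dets, dets[0])          # close the loop in kx
    dphi = np.angle(dets[1:] / dets[:-1])    # phase increments, each in (-pi, pi]
    return int(np.rint(dphi.sum() / (2 * np.pi)))

# e.g. winding_number(0.5 + 0.2j, ky=np.pi, dx=0.05) probes the point gap at that reference energy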
The NHSE along y direction can be similarly induced by letting (δ_x, δ_y)=(0,-0.05). Under this condition, the eigenvalues under x-PBC and y-PBC form closed loops in the complex plane for each fixed k_x [Fig. <ref>(c)]. Consequently, the skin modes are now localized at the bottom boundary in a finite lattice, as shown in Fig. <ref>(d).
§ COMPETITION BETWEEN THE PMF AND THE NHSE
Comparing the PMF-induced Landau modes and the skin modes (see Figs. <ref> and <ref>), it is evident that they have distinct localization areas: the Landau modes are localized in the bulk, while the skin modes are concentrated at the boundaries. In the present model, these two types of modes are induced and controlled by independent parameters, i.e., the PMF strength parameter η and the nonreciprocal hopping parameters (δ_x, δ_y). Next, we turn on these parameters simultaneously to study the competition between the two localization mechanisms and to reveal the suppression of the NHSE by the PMF. Note that the introduction of the PMF breaks the translational symmetry along y direction, but still leaves k_x a good quantum number. Therefore, we use different methods to investigate the cases when the NHSE is along x and y directions.
§.§ Pseudomagnetic suppression of the NHSE along x direction
When the NHSE is along the x direction, we use a bulk probe, i.e., the winding number defined in Eq. (<ref>), to detect the influence of the PMF on the NHSE. Using this method, we avoid adopting the x-OBC geometry, where the computation is heavy and distinguishing skin modes from the bulk Dirac-cone modes is relatively hard.
Figure <ref>(a) shows the computed winding number in the complex plane for different PMF strengths η. As η is gradually increased from 0 to 0.4, the regions of high winding number shrink and move away from zero energy. The winding number around zero energy even approaches zero at stronger fields, signifying the suppression of skin modes by the PMF. The reduction of the winding number becomes weaker away from zero energy, consistent with the fact that the PMF description is only valid in the low-energy regime. In general, the winding number in the entire complex plane decays as η increases. To characterize the global behaviour of the NHSE strength, the skin topological area S_1 introduced in Ref. <cit.>, which is the weighted sum of the winding number over the entire complex energy plane, is employed here. Figure <ref>(b) highlights the relationship between S_1 (normalized by N_y) and η, showing that the skin topological area decreases as the PMF increases. This trend also demonstrates the pseudomagnetic suppression of the NHSE along the x direction.
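The precise definition of S_1 follows Ref. <cit.>; as a rough numerical proxy for the trend discussed here, one may sample the winding number on a grid of reference energies and sum its magnitude weighted by the grid-cell area. The sketch below is schematic (our own construction, not the exact formula of that reference) and takes any callable H_of_k returning the (quasi-)one-dimensional Bloch Hamiltonian of the strip.

def skin_area_proxy(H_of_k, Es_re, Es_im, nk=400):
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    Hs = [H_of_k(k) for k in ks]
    dim = Hs[0].shape[0]
    dA = (Es_re[1] - Es_re[0]) * (Es_im[1] - Es_im[0])   # area element of the energy grid
    S = 0.0
    for er in Es_re:
        for ei in Es_im:
            dets = np.array([np.linalg.det(H - (er + 1j * ei) * np.eye(dim)) for H in Hs])
            dets = np.append(dets, dets[0])
            w = np.rint(np.angle(dets[1:] / dets[:-1]).sum() / (2 * np.pi))
            S += abs(w) * dA
    return S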
§.§ Pseudomagnetic suppression of the NHSE along y direction
Now we turn to the other case where the NHSE is along y direction. In this case, the winding number and skin topological area employed above will not be useful since they can only capture x-directional skin modes. However, we can directly access the skin modes by taking x-PBC and y-OBC while fixing k_x at the valleys. Specifically, we set k_x=π/2 (i.e., the τ=+1 valley) and (δ_x,δ_y)=(0,-0.05) and adopt a semi-infinite strip (N_y=100) with x-PBC and y-OBC. In this setup, we are able to investigate in detail how the PMF will affect the NHSE.
The energy spectrum of the semi-infinite strip is first investigated near zero energy, where the PMF description is expected to hold. Indeed, the eigenenergies approach their Landau-quantized values once the PMF is introduced. This is numerically demonstrated in Fig. <ref>(a), where blue curves show the eigenenergies (real part) computed from the lattice Hamiltonian at k_x=π/2, and red curves denote the Landau levels predicted from Eq. (<ref>). As can be seen, the eigenenergies of the lowest few modes gradually approach the Landau level energies as η increases. As one of the signatures of the PMF, this quantization behaviour hints at the possibility of suppressing the NHSE using bulk-localized Landau modes.
By gradually tuning the PMF, the suppression is indeed observed in the eigenstates near zero energy. As shown in Fig. <ref>(b), an eigenstate pinned to zero energy is chosen to visualize its spatial variation when η increases. To be precise, this eigenstate hosts an eigenvalue characterized by the zeroth-order Landau level computed in the Hermitian limit. In the η=0 limit, the eigenstate displays an exponential decay from the boundary into the bulk, which is exactly a skin mode profile expected from an NHSE system. With an increasing η, this skin mode moves progressively into the bulk and forms a growing hump in the bulk. This mode profile evolution clearly demonstrates how the PMF can suppress the NHSE.
In addition, we observe that the center of the mode moves continuously as η varies, which could be a useful property for localization control. At a high PMF strength (e.g., η=0.4), the mode becomes very similar to a Landau mode with a Gaussian profile. As elucidated in Appendix A, its center is not exactly at the center of the lattice, but is shifted by a value y_0. In the presence of a nonzero δ_y, y_0 is determined by the non-Hermitian strength δ_y and the PMF indicator η as follows:
y_0=N_y δ_y/(π v_x η)
It is indicated by the blue dashed line in Fig. <ref>(b), thus highlighting the competition between the NHSE and the PMF through their indicators in Eq. (<ref>). Without loss of generality, the eigenstates characterized by higher-order Landau levels near zero energy [eigenvalues shown in Fig. <ref>(a)] can also be conceptually described by the shifted center y_0, in other words, by an additional tuning of the Landau mode solutions obtained in the Hermitian limit. Consequently, these higher-order eigenstates can also be suppressed by the PMF scheme, showing a large tunability from skin modes to bulk modes [Fig. <ref>].
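As a rough sense of scale (numbers plugged in by us from the parameters quoted above), taking N_y=100, v_x=t/√2≈0.71, δ_y=-0.05 and η=0.2, Eq. (<ref>) gives y_0≈-11 unit cells, i.e. the mode center sits roughly eleven cells away from the middle of the strip on the side favored by the nonreciprocal hopping, and increasing η pushes it back toward the center.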
§ CONCLUSION AND OUTLOOK
To conclude, we have shown that the PMF can induce a suppression of the NHSE. In particular, a boundary-localized skin mode can be transferred into a bulk-localized mode by the PMF. We note that, while in the main text we only show that edge skin modes can be pushed into the bulk by the PMF, corner skin modes behave similarly, as demonstrated in Appendix B. Compared to other methods of tuning the NHSE, such as using a real magnetic field <cit.> or an electric field <cit.>, the PMF approach would be easier to engineer in various artificial systems. In Appendix C, we provide an electric circuit design of the tight-binding model, showing its feasibility in physical systems. Moreover, the PMF can reach higher values than real magnetic fields <cit.>. However, one needs to keep in mind that the PMF only holds for certain low-energy modes. Thus, not all skin modes can be well manipulated by the PMF. In future works, it would be desirable to apply the tunable skin modes found here to various classical wave devices, especially in optical systems where active control of localization and nonlinear effects can be studied.
This work is supported by Singapore Ministry of Education Academic Research Fund Tier 3 under Grant No. MOE2016-T3-1-006.
§ APPENDIX A: PMF-INDUCED LANDAU LEVELS VS NON-HERMITICITY
Based on the non-Hermitian Hamiltonian in Eq. (<ref>), we will show that the PMF scheme in Eq. (<ref>) is able to generate Landau levels, at the same time suppressing the non-Hermitian skin effect. The Hermitian limit illustrated in Fig. <ref> is naturally included in the subsequent derivations, by setting the non-Hermitian terms to zero.
We start from the most generic form of non-Hermitian Hamiltonian, followed by the eigenvalue equation ℋ|ψ⟩ = E|ψ⟩. The Hamiltonian can be expressed in a block-diagonal form via a unitary transformation:
ℋ'=U^†ℋ U =
[ ℋ_0 0; 0 ℋ_1 ], U = 1/√(2)[ 1 0 0 1; 0 1 1 0; 0 i -i 0; i 0 0 -i ]
with the following blocks (W≡-iV):
ℋ_0 =[ 0 T_x+Δ_x-W+iΔ_y; T_x+Δ_x-W^∗-iΔ_y 0 ]
ℋ_1 =
[ 0 T_x+Δ_x+W^∗+iΔ_y; T_x+Δ_x+W-iΔ_y 0 ]
Upon this transformation, we observe the relation ℋ' |Φ⟩ = E |Φ⟩, where |Φ⟩≡ U^†|ψ⟩=[|Φ_0⟩, |Φ_1⟩]^T is expressed in the two-component vectors |Φ_0⟩ and |Φ_1⟩. To elucidate the suppression of y-directional NHSE, we set δ_x=0 throughout this section. Now, the Hamiltonian ℋ' is expanded around (k_x,k_y)=(τπ/2+q_x,π+q_y) valleys (τ=±1) up to first order in q_x and q_y:
T_x = t√(2)(1-τq_x/2) + 𝒪(q_x^2)
W = -iV = (J_1-J_2) + iq_y/2(J_1+J_2) + 𝒪(q_y^2)
Δ_y = -2iδ_y+𝒪(q_y^2)
By observing that ℋ_0 |Φ_0⟩ = E |Φ_0⟩, we arrive at the Dirac Hamiltonian with an additional imaginary gauge:
[-τ v_x σ_x q_x + (v_y q_y+2iδ_y)σ_y]|Φ_0⟩ = E|Φ_0⟩ + 𝒪(q^2)
with group velocity (v_x, v_y) = (t/√(2), J_2+t/√(2)). The relations J_1-J_2 = 2tcos(κ_x/2) at κ_x=τπ/2 valleys are utilized here, which correspond to the η=0 limit in Eq. (<ref>). The y-dependent hoppings in Eq. (<ref>) are now introduced to the Dirac Hamiltonian up to first order in y, leading to the following Hamiltonian:
[-τ v_xσ_x (-i∂_x - A_x) - i v_yσ_y ∂_y + 2iδ_y] Φ_0(r) ≃ E Φ_0(r)
where gauge field A_x is generated with respect to the reference Dirac point κ_0^τ≡κ_x^τ(J_1 = J_2+t√(2))=τπ/2:
A_x(y) = κ_x^τ [J_1(y)] - κ_0^τ = -τ2π/N_yη y
The valley-dependent pseudomagnetic field ℬ=∂_x A_y - ∂_y A_x is thus obtained, with fundamental constants a, ħ and e (electron charge) restored:
ℬ = τ2π/N_yηrestore a,ħ,e=1=τ2πħ/N_y e a^2η
Since J_1(y) is homogeneous along x direction, Φ(r) can be modulated by a plane wave with K_x= k_x - κ_0^τ, where Φ(r) = e^iK_x x Φ(y):
[-τ v_x σ_x (K_x - A_x) + v_y (-i∂_y+iq_y')σ_y]|Φ_0⟩≃ E|Φ_0⟩
where q_y'=2δ_y/v_y. To remove the imaginary gauge, we redefine the eigenstate |Φ_0'⟩=e^-q_y' y|Φ_0⟩, arriving at:
[-τ v_x σ_x (K_x-A_x) + v_yσ_y q_y]|Φ_0'⟩ ≃ E |Φ_0'⟩
This looks exactly like the Hermitian case, except with an additional similarity transformation on eigenstate. Therefore, Eq. (<ref>) can be expressed via these coupled equations, where Φ_0'(r) = [Φ_A'(r), Φ_B'(r)]^T:
[-τ v_x (K_x-A_x) - v_y ∂_y] Φ_B'(y) ≃ E Φ_A'(y)
[-τ v_x (K_x-A_x) + v_y ∂_y] Φ_A'(y) ≃ E Φ_B'(y)
By combining Eq. (<ref>)-(<ref>) with τ=+1, we arrive at the eigenproblem in the form of quantum harmonic oscillator even in the non-Hermitian setup:
ω^2 (a^† a)Φ_B' = E^2 Φ_B',
where ω = √(t√(2)(2J_2+t√(2))ηπ/N_y) = √(2v_x v_y |ℬ|), followed by annihilation operator a = [v_x (K_x-A_x) + v_y ∂_y]/ω and [a,a^†]=1. Consequently, the eigenvalues E^2 employ the form of quantum harmonic oscillator, leading to the Landau level energies labelled by order N:
E_N = sgn(N) ω√(|N|), N∈ℤ
As a result, the energy plateaus still exist in non-Hermitian case. We denote A_x = -ℬy as derived. For illustration in Fig. <ref>, we set K_x=0 here. By expanding Eq. (<ref>), we arrive at:
∂_y^2 Φ_B' + [ϵ^2 - (v_x/v_yℬ)^2 y^2]Φ_B' = 0
where ϵ^2 = (|N|+1/2)(ω/v_y)^2. This resembles the quantum harmonic oscillator problem, but with modified effective field B_c that characterizes magnetic length l_B:
B_c = v_x/v_yℬ, l_B^2 = B_c^-1 = v_y/v_xℬ^-1
It is noteworthy that B_c here is not exactly equal to ℬ due to the intrinsic anisotropy of the tight-binding model without any strain. By observing Eq. (<ref>), we can deduce that at the zeroth Landau level (E=0), the eigenstate employs the form: Φ_A'=0, Φ_B'∝ e^-y^2/2l_B^2. By recovering the eigenstate before imaginary gauge transformation |Φ_0⟩ = [Φ_A, Φ_B]^T, we obtain the final eigenstate:
Φ_A=0, Φ_B∝ e^-y^2/2l_B^2e^q_y' y∝ e^-(y-y_0)^2/2l_B^2
where the shifted center y_0=l_B^2 q_y'=N_y δ_y/(π v_x η). Since y=y_0 corresponds to the position with the largest intensity, we can deduce that the δ_y/η dependence of y_0 describes the competition between the non-Hermiticity δ_y and the PMF η.
Due to the intrinsic sublattice symmetry in this model (SℋS^-1 = -ℋ, S = σ_z ⊗σ_z), features of sublattice polarization are also captured here. Note that:
[ Φ_A; Φ_B; Φ_C; Φ_D ]≡[ |Φ_0⟩; |Φ_1⟩ ] = U^†|ψ⟩ = 1/√(2)[ ψ_1 - iψ_4; ψ_2 - iψ_3; ψ_2 + iψ_3; ψ_1 + iψ_4 ],
We can observe that Φ_A and Φ_B capture the features of (ψ_1, ψ_4) and (ψ_2,ψ_3), respectively. As a result, the solution in Eq. (<ref>) describes the complete distribution of the eigenstate on sublattices 2 and 3, in the form of a Landau mode. This corresponds to the complete dominance of Landau modes at large η in Fig. <ref>(b). The transition from skin modes to Landau modes is not described by this model, as the continuum description fails at the boundary. In other words, when y_0 is close to the boundary y=N_y/2 (large δ_y or small η), the failure of the model leads to the emergence of skin modes.
In general, the arguments can be applied to modes further from the valleys (i.e., K_x ≠ 0); this corresponds to an additional shift of y_0, as can be seen from Eq. (<ref>). Apart from the zero-energy modes, the suppression is also prominent for the first few Landau modes (small |N|), which fall in the region where the PMF description holds. The solutions are related to Hermite polynomials with an additional tuning by the exponential term e^q_y' y. The evolution of the eigenstate characterized by the first Landau level is illustrated in Fig. <ref> for better visualization.
§ APPENDIX B: MANIPULATION OF CORNER SKIN MODES IN A FINITE LATTICE
In this section, we study the competition between the PMF and NHSE when the skin modes are localized at the corner. To this end, we consider a finite lattice with 40×40 unit cells. The parameter δ_y is fixed to be 0.05 while δ_x and η are tunable. To demonstrate the suppression of skin modes, we investigate the evolution of an eigenstate belonging to the first-order Landau level in the Hermitian limit. As illustrated in Fig. <ref>, the PMF indicator η and the other non-Hermitian parameter δ_x are varied accordingly to highlight the competition.
When δ_x and η are both zero, the eigenstate localizes at the top edge as expected from nonreciprocal couplings along y direction. By increasing δ_x, the skin mode distributed along the top edge is then pushed to the top right corner, thus inducing a corner skin mode. Starting from this corner skin mode (i.e., the bottom left panel in Fig. <ref>), as can be seen, by varying η and δ_x, the skin mode can be driven along both x and y directions, showing a large degree of freedom in the skin mode manipulation in this finite lattice. In particular, with increasing η and decreasing δ_x, the corner skin mode is gradually transferred to a bulk Landau mode (see the top right panel in Fig. <ref>).
§ APPENDIX C: PROPOSAL FOR ELECTRIC CIRCUIT REALIZATION
Topolectrical circuits are an excellent platform for realizing NHSE-based phenomena. Based on previous developments in this field <cit.>, we hereby demonstrate a circuit realization of the lattice illustrated in Fig. <ref>. The reciprocal part of the circuit consists of inductors and capacitors, whereas the nonreciprocity is achieved via negative impedance converters with current inversion (INICs) <cit.>. A schematic diagram of such a circuit in a strip geometry is shown in Fig. <ref>(a), having the x axis as the periodic direction and consisting of three unit cells along the y axis. The nodes are shown in gray and green circles, which play the role of the sites. A zoomed view of the nth cell within the supercell is shown in Fig. <ref>(b), which consists of four nodes (a_n,b_n,c_n,d_n). The positive couplings along the x axis are realized by the capacitors (in black) C_x and those along the y axis are realized by the capacitors (in red) C_y. The negative couplings are realized by the proper choice of the inductors L_y. The INICs (represented by the red triangles) are connected in parallel with C_y and L_y. However, the directions of the INICs are reversed while connecting with L_y. Additional inductors and capacitors (L_n,C_n) are connected properly in order to realize the PMF, as labelled in blue in Fig. <ref>(b). All the nodes in the circuit are grounded as shown in Fig. <ref>(c).
The role of the Hamiltonian ℋ is played by the Laplacian 𝒥 of the circuit with 𝒥∝ -iℋ and I=𝒥 V, where I and V are the vectors representing the currents and voltages at each node. Following Ref. <cit.>, the currents at each node of the nth cell can be expressed as:
I_a_n = [1/iω( L_+^-1+ L_n^-1)+iω(2C_x+2C_y+C_n)]V_a_n-iω(C_y+c_q)V_c_n-1-iω(C_y+C_n-c_q)V_c_n
-iω C_x(1+e^-ik_xa)V_b_n,
I_b_n = [ (iω L_-)^-1+iω(C_n-1+2C_x)+2(iω L_y)^-1+(iω L_n-1)^-1]V_b_n-[1/iω( L_y^-1+ L_n-1^-1)-iω c_q] V_d_n-1
-[(iω L_y)^-1+iω c_q] V_d_n-iω C_x(1+e^ik_xa)V_a_n,
I_c_n = [1/iω( L_+^-1+ L_n^-1)+iω(2C_x+2C_y+C_n)]V_c_n-iω(C_y+C_n+c_q)V_a_n-iω(C_y-c_q)V_a_n+1
-iω C_x(1+e^-ik_xa)V_d_n,
I_d_n = [ (iω L_-)^-1+iω(C_n+2C_x)+2(iω L_y)^-1+(iω L_n)^-1]V_d_n-[(iω L_y)^-1-iω c_q] V_b_n
-[1/iω( L_y^-1+ L_n^-1)+iω c_q]V_b_n+1-iω C_x(1+e^ik_xa)V_c_n.
Here ω=2π f_0, where f_0 is the resonant frequency of the circuit. In a finite circuit, to realize y-OBC, the nodes at the edges are grounded properly, whereas x-PBC can be realized by connecting the two edges along the x direction through conducting wires. In order to realize the PMF, we choose:
C_n= 2C_xcos[1/2(π/2-2π n/N_yη)],
L_n= 1/(ω^2C_n).
We aim for resonant frequency f_0=100 kHz and choose the values of the circuit components accordingly: C_x=1 μF, C_y=C_x/2, L_y=1/(ω^2C_y)≈5 μH, L_+=1/(6ω^2C_y)≈0.8 μH, L_-=L_y/2, and c_q=50 nF. At the resonant frequency, all the diagonal terms of the circuit Laplacian 𝒥 become zero. In Fig. <ref>(d), the band structures without and with the PMF are presented, where the appearance of the Landau levels for η≠0 is clearly shown. Such a band structure can be measured in practice from the admittance response. Figure <ref>(e) shows the skin effect suppression due to the PMF, which can be obtained in practice by measuring the voltage across all the nodes of the circuit.
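For convenience, the component values quoted above can be regenerated from f_0 with a few lines (a sketch with our own variable names; C_n and L_n follow Eqs. (<ref>) and (<ref>)).

import numpy as np

f0 = 100e3                       # target resonant frequency (Hz)
w = 2 * np.pi * f0
Cx = 1e-6                        # C_x = 1 uF
Cy = Cx / 2
Ly = 1 / (w**2 * Cy)             # ~5 uH
Lp = 1 / (6 * w**2 * Cy)         # L_+, ~0.8 uH
Lm = Ly / 2                      # L_-
cq = 50e-9                       # 50 nF
Ny, eta = 100, 0.2
Cn = lambda n: 2 * Cx * np.cos(0.5 * (np.pi / 2 - 2 * np.pi * n / Ny * eta))
Ln = lambda n: 1 / (w**2 * Cn(n))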
[El-Ganainy et al.(2018)El-Ganainy, Makris, Khajavikhan,
Musslimani, Rotter, and Christodoulides]el2018non
author author R. El-Ganainy, author K. G. Makris, author M. Khajavikhan,
author Z. H. Musslimani,
author S. Rotter, and author D. N. Christodoulides, title title Non-Hermitian physics and PT symmetry, https://doi.org/10.1038/nphys4323 journal journal Nat. Phys. volume 14, pages
11 (year 2018)NoStop
[Ashida et al.(2020)Ashida,
Gong, and Ueda]ashida2020non
author author Y. Ashida, author Z. Gong, and author M. Ueda, title title Non-Hermitian physics, https://doi.org/10.1080/00018732.2021.1876991 journal
journal Adv. Phys. volume 69, pages 249 (year 2020)NoStop
[Bergholtz et al.(2021)Bergholtz, Budich, and Kunst]bergholtz2021exceptional
author author E. J. Bergholtz, author J. C. Budich, and author F. K. Kunst, title title Exceptional topology of
non-Hermitian systems, https://doi.org/10.1103/RevModPhys.93.015005 journal
journal Rev. Mod. Phys. volume 93, pages 015005 (year 2021)NoStop
[Bender and Boettcher(1998)]bender1998real
author author C. M. Bender and author S. Boettcher, title title Real spectra in
non-Hermitian Hamiltonians having PT symmetry, https://doi.org/10.1103/PhysRevLett.80.5243 journal journal Phys. Rev. Lett. volume 80, pages 5243 (year 1998)NoStop
[Heiss(2012)]heiss2012physics
author author W. D. Heiss, title title The physics of exceptional
points, https://doi.org/10.1088/1751-8113/45/44/444016 journal journal J. Phys. A: Math. Theor. volume 45, pages 444016 (year
2012)NoStop
[Chong et al.(2010)Chong,
Ge, Cao, and Stone]chong2010coherent
author author Y. D. Chong, author L. Ge, author H. Cao, and author
A. D. Stone, title title Coherent perfect absorbers: time-reversed lasers, https://doi.org/10.1103/PhysRevLett.105.053901 journal
journal Phys. Rev. Lett. volume 105, pages 053901 (year 2010)NoStop
[Lin et al.(2011)Lin,
Ramezani, Eichelkraut, Kottos, Cao, and Christodoulides]lin2011unidirectional
author author Z. Lin, author H. Ramezani,
author T. Eichelkraut, author T. Kottos, author
H. Cao, and author
D. N. Christodoulides, title
title Unidirectional invisibility induced by PT-symmetric
periodic structures, https://doi.org/10.1103/PhysRevLett.106.213901 journal
journal Phys. Rev. Lett. volume 106, pages 213901 (year 2011)NoStop
[Doppler et al.(2016)Doppler, Mailybaev, Böhm, Kuhl, Girschik, Libisch, Milburn, Rabl, Moiseyev, and Rotter]doppler2016dynamically
author author J. Doppler, author A. A. Mailybaev, author J. Böhm,
author U. Kuhl, author
A. Girschik, author
F. Libisch, author T. J. Milburn, author P. Rabl, author N. Moiseyev, and author S. Rotter, title title Dynamically encircling an
exceptional point for asymmetric mode switching, https://doi.org/10.1038/nature18605 journal journal Nature volume 537, pages
76 (year 2016)NoStop
[Chen et al.(2017)Chen,
Kaya Özdemir, Zhao, Wiersig, and Yang]chen2017exceptional
author author W. Chen, author Ş. Kaya Özdemir, author G. Zhao, author J. Wiersig, and author L. Yang, title title Exceptional points enhance sensing in an optical
microcavity, https://doi.org/10.1038/nature23281 journal journal Nature volume 548, pages 192 (year 2017)NoStop
[Yao and Wang(2018)]yao2018edge
author author S. Yao and author Z. Wang, title title Edge states and topological invariants
of non-Hermitian systems, https://doi.org/10.1103/PhysRevLett.121.086803 journal
journal Phys. Rev. Lett. volume 121, pages 086803 (year 2018)NoStop
[Martinez Alvarez et al.(2018)Martinez Alvarez, Barrios Vargas, and Foa Torres]alvarez2018
author author V. M. Martinez Alvarez, author J. E. Barrios Vargas, and author L. E. F. Foa Torres, title title
Non-Hermitian robust edge states in one dimension: Anomalous
localization and eigenspace condensation at exceptional points, https://doi.org/https://doi.org/10.1103/PhysRevB.97.121401 journal journal Phys. Rev. B volume
97, pages 121401(R) (year 2018)NoStop
[Xiong(2018)]xiong2018does
author author Y. Xiong, title title Why does bulk boundary
correspondence fail in some non-Hermitian topological models, https://doi.org/10.1088/2399-6528/aab64a journal journal J. Phys. Commun. volume 2, pages 035043 (year 2018)NoStop
[Gong et al.(2018)Gong,
Ashida, Kawabata, Takasan,
Higashikawa, and Ueda]gong2018topological
author author Z. Gong, author Y. Ashida,
author K. Kawabata, author K. Takasan, author
S. Higashikawa, and author
M. Ueda, title title Topological phases of non-Hermitian systems, https://doi.org/10.1103/PhysRevX.8.031079 journal journal Phys. Rev. X volume 8, pages
031079 (year 2018)NoStop
[Kunst et al.(2018)Kunst,
Edvardsson, Budich, and Bergholtz]kunst2018biorthogonal
author author F. K. Kunst, author E. Edvardsson,
author J. C. Budich, and author E. J. Bergholtz, title title Biorthogonal bulk-boundary
correspondence in non-Hermitian systems, https://doi.org/10.1103/PhysRevLett.121.026808 journal
journal Phys. Rev. Lett. volume 121, pages 026808 (year 2018)NoStop
[Brandenbourger et al.(2019)Brandenbourger, Locsin, Lerner, and Coulais]brandenbourger2019non
author author M. Brandenbourger, author X. Locsin, author E. Lerner, and author C. Coulais, title title Non-reciprocal robotic
metamaterials, https://doi.org/10.1038/s41467-019-12599-3
journal journal Nat. Commun. volume 10, pages 4608 (year
2019)NoStop
[Helbig et al.(2020)Helbig,
Hofmann, Imhof, Abdelghany,
Kiessling, Molenkamp, Lee,
Szameit, Greiter, and Thomale]helbig2020generalized
author author T. Helbig, author T. Hofmann,
author S. Imhof, author M. Abdelghany, author
T. Kiessling, author
L. W. Molenkamp, author
C. H. Lee, author A. Szameit, author M. Greiter, and author R. Thomale, title title
Generalized bulk–boundary correspondence in non-Hermitian topolectrical
circuits, https://doi.org/10.1038/s41567-020-0922-9 journal journal Nat. Phys. volume
16, pages 747 (year 2020)NoStop
[Xiao et al.(2020)Xiao,
Deng, Wang, Zhu,
Wang, Yi, and Xue]xiao2020non
author author L. Xiao, author T. Deng, author K. Wang, author
G. Zhu, author Z. Wang, author W. Yi, and author P. Xue, title title Non-Hermitian bulk–boundary
correspondence in quantum dynamics, https://doi.org/https://doi.org/10.1038/s41567-020-0836-6 journal journal Nat. Phys. volume
16, pages 761 (year 2020)NoStop
[Liu et al.(2021)Liu,
Shao, Ma, Zhang,
You, Wu, Xiang, Cui, and Zhang]liu2021non
author author S. Liu, author R. Shao, author S. Ma, author
L. Zhang, author O. You, author H. Wu, author Y. J. Xiang,
author T. J. Cui, and author S. Zhang, title
title Non-Hermitian skin effect in a non-Hermitian
electrical circuit, https://doi.org/10.34133/2021/5608038
journal journal Research volume 2021, pages 5608038 (year
2021)NoStop
[Zou et al.(2021)Zou,
Chen, He, Bao, Lee, Sun, and Zhang]zou2021observation
author author D. Zou, author T. Chen, author W. He, author
J. Bao, author C. H. Lee, author H. Sun, and author X. Zhang, title title
Observation of hybrid higher-order skin-topological effect in
non-Hermitian topolectrical circuits, https://doi.org/10.1038/s41467-021-26414-5 journal journal Nat. Commun. volume 12, pages 7201 (year 2021)NoStop
[Chen et al.(2021)Chen,
Li, Scheibner, Vitelli, and Huang]chen2021realization
author author Y. Chen, author X. Li, author C. Scheibner, author
V. Vitelli, and author
G. Huang, title title Realization of active metamaterials with odd micropolar
elasticity, https://doi.org/10.1038/s41467-021-26034-z journal journal Nat. Commun. volume
12, pages 5935 (year 2021)NoStop
[Zhang et al.(2021a)Zhang, Tian,
Jiang, Lu, and Chen]zhang2021observation
author author X. Zhang, author Y. Tian,
author J.-H. Jiang, author M.-H. Lu, and author
Y.-F. Chen, title title Observation of higher-order non-Hermitian skin effect, https://doi.org/10.1038/s41467-021-25716-y journal journal Nat. Commun. volume 12, pages 5377 (year 2021a)NoStop
[Zhang et al.(2021b)Zhang, Yang,
Ge, Guan, Chen, Yan, Chen, Xi, Li,
Jia et al.]zhang2021acoustic
author author L. Zhang, author Y. Yang,
author Y. Ge, author
Y.-J. Guan, author
Q. Chen, author Q. Yan, author F. Chen, author R. Xi, author Y. Li, author
D. Jia, et al., title
title Acoustic non-Hermitian skin effect from twisted winding
topology, https://doi.org/10.1038/s41467-021-26619-8 journal journal Nat. Commun. volume
12, pages 6297 (year
2021b)NoStop
[Gao et al.(2022)Gao,
Xue, Gu, Li, Zhu, Su, Zhu, Zhang, and Chong]gao2022non
author author H. Gao, author H. Xue, author Z. Gu, author
L. Li, author W. Zhu, author Z. Su, author J. Zhu, author B. Zhang, and author
Y. D. Chong, title title Anomalous Floquet non-Hermitian skin effect in a ring resonator
lattice, https://doi.org/10.1103/PhysRevB.106.134112 journal journal Phys. Rev. B volume
106, pages 134112 (year 2022)NoStop
[Gu et al.(2022)Gu,
Gao, Xue, Li, Su, and Zhu]gu2022transient
author author Z. Gu, author H. Gao, author H. Xue, author
J. Li, author Z. Su, and author J. Zhu, title title Transient
non-Hermitian skin effect, https://doi.org/10.1038/s41467-022-35448-2 journal journal Nat. Commun. volume 13, pages 7668 (year 2022)NoStop
[Weidemann et al.(2020)Weidemann, Kremer, Helbig, Hofmann, Stegmaier, Greiter, Thomale, and Szameit]weidemann2020topological
author author S. Weidemann, author M. Kremer,
author T. Helbig, author T. Hofmann, author
A. Stegmaier, author
M. Greiter, author R. Thomale, and author A. Szameit, title title
Topological funneling of light, https://doi.org/10.1126/science.aaz8727 journal journal Science volume 368, pages
311 (year 2020)NoStop
[Longhi(2018)]longhi2018non
author author S. Longhi, title title Non-Hermitian gauged
topological laser arrays, https://doi.org/10.1002/andp.201800023
journal journal Ann. Phys. volume 530, pages 1800023 (year
2018)NoStop
[Zhu et al.(2022)Zhu,
Wang, Leykam, Xue,
Wang, and Chong]zhu2022anomalous
author author B. Zhu, author Q. Wang, author D. Leykam, author
H. Xue, author Q. J. Wang, and author Y. D. Chong, title title
Anomalous single-mode lasing induced by nonlinearity and the non-Hermitian
skin effect, https://doi.org/10.1103/PhysRevLett.129.013903
journal journal Phys. Rev. Lett. volume 129, pages 013903 (year
2022)NoStop
[Teo et al.(2022)Teo,
Zhu, and Gong]teo2022
author author W. X. Teo, author W. Zhu, and author J. Gong, title title Tunable two-dimensional laser arrays with
zero-phase locking, https://doi.org/10.1103/PhysRevB.105.L201402
journal journal Phys. Rev. B volume 105, pages L201402 (year
2022)NoStop
[Budich and Bergholtz(2020)]budich2020non
author author J. C. Budich and author E. J. Bergholtz, title title Non-Hermitian
topological sensors, https://doi.org/10.1103/PhysRevLett.125.180403 journal
journal Phys. Rev. Lett. volume 125, pages 180403 (year 2020)NoStop
[McDonald and Clerk(2020)]mcdonald2020exponentially
author author A. McDonald and author A. A. Clerk, title title Exponentially-enhanced
quantum sensing with non-Hermitian lattice dynamics, https://doi.org/10.1038/s41467-020-19090-4 journal journal Nat. Commun. volume 11, pages 5382 (year 2020)NoStop
[Mandal et al.(2022)Mandal,
Banerjee, and Liew]mandal2022topological
author author S. Mandal, author R. Banerjee, and author T. C. H. Liew, title title From the topological spin-Hall
effect to the non-Hermitian skin effect in an elliptical micropillar
chain, https://doi.org/10.1021/acsphotonics.1c01425 journal journal ACS Photonics volume
9, pages 527 (year 2022)NoStop
[Landau and Lifshitz(2013)]landau2013quantum
author author L. D. Landau and author E. M. Lifshitz, @noop title Quantum mechanics:
non-relativistic theory, Vol. volume 3 (publisher Elsevier, year 2013)NoStop
[Klitzing et al.(1980)Klitzing, Dorda, and Pepper]klitzing1980new
author author K. v. Klitzing, author G. Dorda, and author M. Pepper, title title New method for high-accuracy
determination of the fine-structure constant based on quantized Hall
resistance, https://doi.org/10.1103/PhysRevLett.45.494 journal journal Phys. Rev. Lett. volume 45, pages 494 (year 1980)NoStop
[Thouless et al.(1982)Thouless, Kohmoto, Nightingale, and den Nijs]thouless1982quantized
author author D. J. Thouless, author M. Kohmoto,
|
http://arxiv.org/abs/2307.07315v1 | 20230714124831 | Minimum $k$-critical-bipartite graphs: the irregular Case | [
"Sylwia Cichacz",
"Agieszka Görlich",
"Karol Suchan"
] | math.CO | [
"math.CO",
"05C35, 05C70, 05C85, 68M10, 68M15"
] |
This work was partially supported by the Faculty of Applied Mathematics AGH UST statutory tasks within subsidy of Ministry of Science and Higher Education.
^1 AGH University, al. A. Mickiewicza 30, 30-059 Krakow, Poland
^2Universidad Diego Portales, Av. Ejército Libertador 441, 8370191 Santiago, Chile
[email protected], [email protected], [email protected]
We study the problem of finding a minimum k-critical-bipartite graph of order (n,m): a bipartite graph G=(U,V;E), with |U|=n, |V|=m, and n>m>1, which is k-critical-bipartite, and the tuple (|E|, Δ_U, Δ_V), where Δ_U and Δ_V denote the maximum degree in U and V, respectively, is lexicographically minimum over all such graphs. G is k-critical-bipartite if deleting at most k=n-m vertices from U yields G' that has a complete matching, i.e., a matching of size m. Cichacz and Suchan <cit.> solved the problem for biregular bipartite graphs. Here, we extend their results to bipartite graphs that are not biregular. We also prove tight lower bounds on the connectivity of k-critical-bipartite graphs.
Mathematics Subject Classification [2010]: 05C35, 05C70, 05C85, 68M10, 68M15
Minimum k-critical-bipartite graphs:
the irregular Case
Karol Suchan^2,1
August 12, 2023
========================================================
§ INTRODUCTION
An important body of knowledge has been developed on networks prone to faults. When representing the network as a simple undirected graph G=(V, E)[For standard terms and notations in graph theory, the reader is referred to the textbook by Diestel <cit.>], the faults can be modeled as vertex or edge deletions, depending if the faults occur to the nodes or links of the network, respectively. In this work, we focus on node faults.
Many applications in diverse fields consider the robustness of assignments. For example, consider a network of m sensing nodes and n relay nodes with n ≥ m. The sensing nodes need to transmit their readings through the relay nodes, based on a one-to-one assignment (due to some technological considerations), using a pre-established infrastructure of links. It is natural to ask whether we can design a network such that all the sensing nodes can do their work while no more than k=n-m relay nodes are faulty. This kind of network is called a k-critical-bipartite graph and is part of the wider field of research related to fault-tolerant networks.
Besides applications in the design of fault-tolerant networks, k-critical-bipartite graphs could find applications in the design of supercomputer architectures <cit.>, flexible processes <cit.>, personnel rostering <cit.>, and other areas of operations research <cit.>. Section 2 of <cit.> presents a brief non-exhaustive overview of connections to other areas of research.
§.§ Fault-tolerant graphs
Given a graph H and a positive integer k, a graph G is called k-fault-tolerant with respect to H, denoted by k-FT(H), if G-S contains a subgraph isomorphic to H for every S⊂ V(G) with |S|≤ k. Clearly, under this definition, it is enough to check that the property holds for S⊂ V(G) with |S| = k.
Fault-tolerance was introduced by Hayes <cit.> in 1976 as a graph theoretic model of computer or communication networks working correctly in the presence of faults. Therein, the main motivation for the problem of constructing k-fault-tolerant graphs lies in finding fault-tolerant network architectures. A graph H represents the desired interconnection network and a k-FT(H) graph G allows one to emulate the graph H even in the presence of k vertex (processor) faults.
The problem has been systematically studied in different aspects. Clearly, given a graph H, a complete graph on |V(H)|+k vertices is k-FT(H). So it is interesting to study different quality measures motivated by diverse applications. Hayes <cit.> and Ajtai et al. <cit.> considered k-FT(H) graphs with |V(H)|+k vertices and the number of edges as small as possible. A different quality measure of k-FT(H) graphs was introduced by Ueno et al. <cit.>, and independently by Dudek et al. <cit.>, where the authors were interested in k-FT(H) graphs having as few edges as possible, disregarding the number of vertices (see also <cit.>, <cit.>). Yet another setup was studied by Alon and Chung <cit.>, Ueno and Yamada <cit.>, and Zhang <cit.>. They allowed O(t) spare vertices in k-FT(H) graphs and focused on minimizing the maximum degree (giving priority to the scalability of a network). Other results on k-fault-tolerance can be found, for example in <cit.>.
§.§ k-critical-bipartite graphs
It is well known that the bipartite graphs are exactly the graphs that are 2-colorable. Throughout the paper, we will use the notation G=(U, V; E) for a bipartite graph G with color classes U and V. Let |U|=n and |V|=m. We say that G is of order (n, m). We say that G is biregular if the degrees of the vertices in both color classes are constant, and irregular otherwise. Let δ_U(G), Δ_U(G), δ_V(G), Δ_V(G), denote the minimum and maximum degree in G of a vertex in U and V, respectively. Where it does not lead to confusion, we do not mention the graph explicitly, for example, stating just δ_U instead of δ_U(G). If δ_U = Δ_U = a and δ_V = Δ_V = b, then we say that G is (a,b)-regular. A complete graph of order n is denoted K_n and a complete bipartite graph of order (n,m) is denoted K_n,m.
A k-critical-bipartite graph G=(U, V;E), with |U|=n and |V|=m, such that k=n-m≥ 0 can be seen as a k-FT(H) graph where H is a matching of size |V| and the k faults can occur only in U. Cichacz and Suchan <cit.> introduced the problem of finding a minimum k-critical-bipartite graph according to the following definition.
[<cit.>]
A bipartite graph G=(U,V;E), with |U|=n, |V|=m, and n>m>1, is a Minimum k-Critical-Bipartite Graph of order (n,m), MkCBG-(n,m), if it is k-critical-bipartite, and the tuple (|E|, Δ_U, Δ_V) is lexicographically minimum over all such graphs.
Note that, given integers n and m with n>m>1, the graph G^*=(U,V;E) with |U|=n and |V|=m, obtained by taking a matching of size m and adding to U another k=n-m vertices adjacent to every vertex in V, is minimal k-critical-bipartite, i.e., removing any edge from G^* yields a graph that is not k-critical-bipartite, but it is not minimum according to the definition given above. Indeed, Δ_U(G^*) = m, whereas we show in this paper that there exist k-critical-bipartite graphs G=(U,V;E) that also have |E(G)|=m(n-m+1), but with Δ_U(G) = ⌈m(n-m+1)/n⌉. So a minimum k-critical-bipartite graph of order (n,m) cannot be obtained by simply taking any k-critical-bipartite graph and removing edges, one by one, as long as the property is preserved.
Cichacz and Suchan <cit.> solved the problem of finding MkCBG-(n,m) in the case of biregular graphs, leaving open the case of irregular bipartite graphs. We solve it in this paper.
§.§ Related work
The concept of a k-critical-bipartite graph stems from older studies related to matchings.
In a graph G of even order n, a perfect matching (or 1-factor) M, is a matching containing n/2 edges. In other words, a perfect matching covers every vertex of G.
Let G be a graph of order n with a perfect matching M, and let k, n/2 > k ≥ 0, be an integer. A graph G of even order n≥ 2k+2 is called k-extendable if every matching of size k in G extends to (i.e., is a subset of) a perfect matching in G. This concept was introduced by Plummer in 1980 <cit.>.
By the following result of Plummer, k-extendability of a bipartite graph of order 2(n+k) can be seen as fault-tolerance for H being a matching of size n, under attacks that consist in removing (at most) k vertices from each color class.
Let G be a connected bipartite graph on n vertices with the color classes (U, V). Suppose k is a positive integer such that k ≤ (n-2)/2. Then the following are equivalent:
* G is k-extendable,
* |U|=|V| and for each non-empty subset X of U such that |X| ≤ |U|-k, there is |N(X)|≥ |X|+k.
* For all U' ⊂ U and V' ⊂ V, |U'|=|V'|=k, the graph G'=G - U' - V' has a perfect matching.
There is a close relation between k-extendability and k-factor-criticality. A graph G is called k-factor-critical (also called simply k-critical) if, after deleting any k vertices, the remaining subgraph has a perfect matching. This concept was first introduced and studied for k = 2 by Lovász <cit.> under the name of a bicritical graph. For k>2 it was introduced by Yu in 1993 <cit.> and independently by Favaron in 1996 <cit.>.
It is straightforward that a bipartite graph cannot be k-critical. Li and Nie amended the definition of a k-critical graph with respect to bipartite graphs <cit.>. It requires that the k vertices to be deleted lie in the color class with more vertices. Formally, a bipartite graph G=(U, V; E) such that k=|U|-|V|≥ 0 is a k-critical-bipartite graph if, after deleting any k vertices from the set U, the remaining subgraph has a perfect matching - and this is the definition that we are using.
The problem of designing k-critical graphs (for the class of general graphs) with the minimum number of edges was studied by Zhang et al. in <cit.>. Using the notation k-FT(pK_c), with positive integers k, p, and c, for a graph in which the removal of k vertices leaves a subgraph that contains p disjoint copies of K_c, the authors gave a construction for k-FT(pK_2) graphs of minimum size for any generally feasible values of p and k. This result was extended to higher values of c by Cichacz et al <cit.>, who characterized minimum k-FT(pK_c) graphs for k=1, any positive integer p, and c>3. Zhang et al. in <cit.> also gave a construction for minimum size k-extendable bipartite graphs.
§.§ Structure of the paper
The structure of this paper is as follows. In Section <ref>, we detail the problem of designing a minimum k-critical-bipartite graph. In Section <ref>, we give a construction that yields a minimum k-critical-bipartite graph of order (n,m) for any values of n and m such that n>m>1, with k=n-m. We show that a k-critical-bipartite graph G=(U,V;E) of order (n,m) is minimum if |E|=m(n-m+1), Δ(U)=⌈m(n-m+1)/n⌉, and Δ(V)=n-m+1. In Section <ref>, we give a construction that yields graphs G=(U,V;E) of order (n,m) that also have |E|=m(n-m+1), Δ(U)=⌈m(n-m+1)/n⌉, and Δ(V)=n-m+1, but are not k-critical-bipartite, so these properties are not sufficient for a graph to be k-critical-bipartite. In Section <ref> we present tight lower bounds for the connectivity of k-critical-bipartite graphs. We conclude with some final remarks in Section <ref>.
§ MAIN PROBLEM
Let G=(U, V;E) be a bipartite graph, with |U|=n and |V|=m, such that k=n-m > 0. Let G̃=(U, V ∪ D; E ∪ E^D) be the graph obtained from G by adding to V a set D of k vertices and making them adjacent to all vertices in U. Li and Nie <cit.> gave the following characterization of k-critical-bipartite graphs.
G is k-critical-bipartite if and only if G̃ is k-extendable.
They also described the connectivity of k-critical-bipartite graphs in the following theorem.
Let G=(U, V;E) be a bipartite graph such that k=|U|-|V|> 0. If G is k-critical-bipartite, then G is connected.
On the other hand, Laroche et al. <cit.> gave a Hall-style characterization of k-critical-bipartite graphs as follows:
Let G=(U, V;E) be a bipartite graph such that k=|U|-|V|> 0. The graph G is k-critical-bipartite if and only if |N(V')|≥ |V'|+k for all ∅≠ V'⊆ V.
Note that a k-critical-bipartite graph needs to have at least (k+1)m edges. Indeed, suppose that the total number of edges is smaller. Then at least one vertex v in V is connected to at most k distinct vertices in U. And there is a fault scenario where precisely the vertices in the neighborhood of v are removed, in which case v cannot be matched. A contradiction.
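To make this characterization concrete, here is a small illustrative sketch in Python (not part of the cited works; names are chosen for illustration). It checks the Hall-style condition |N(V')| ≥ |V'|+k of the theorem above by brute force over all non-empty subsets of V, so it is exponential in m and meant only for small examples.

```python
from itertools import combinations

def is_k_critical_bipartite(n, m, edges):
    """Brute-force check of the Hall-style condition |N(V')| >= |V'| + k
    for every non-empty subset V' of V, with k = n - m.
    `edges` is a list of pairs (i, j) meaning u_i v_j, i in [n], j in [m]."""
    k = n - m
    nbrs = [set() for _ in range(m)]
    for i, j in edges:
        nbrs[j].add(i)
    for size in range(1, m + 1):
        for subset in combinations(range(m), size):
            union = set().union(*(nbrs[j] for j in subset))
            if len(union) < size + k:
                return False
    return True
```

By the theorem, this brute-force test is equivalent to the matching-based definition, while avoiding the explicit computation of matchings.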
As Zhang et al. <cit.> for k-extendable bipartite graphs and Cichacz and Suchan <cit.> for k-critical-bipartite biregular graphs, we want to study topologies where not only the total number of links is low, but also the maximum number of links per node is small (in both color classes). Thus, for given positive integer values n,m such that n>m>1 and k=n-m, we want to find a bipartite graph G=(U, V; E) of order (n,m) that is a k-critical-bipartite graph and is lexicographically minimum with respect to (|E|, Δ_U, Δ_V) (see Definition <ref>).
The construction below is a generalization of the construction from <cit.>. Indeed, the construction was used only for integers m,n such that n>m>1 and a = (k+1)m/n is an integer. In the construction and throughout the paper we use the following notation: [o]={0,1,…,o-1} for any positive integer o.
Let n, m, a be positive integers such that n>m>1. Let G_n,m^a=(U,V; E) be a bipartite graph with U={u_i | i ∈ [n]}, V={v_j | j ∈ [m]}, and
E = { (u_i, v_(j+α) mod m) | i ∈ [n], α∈ [a], j=⌈im/n⌉}.
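The displayed formula is damaged in this extracted copy; the following sketch encodes one natural reading of it (indices reduced mod m and j = ⌈im/n⌉). It is an assumed reconstruction used purely for illustration and should be checked against the original paper.

```python
def construction_G_a(n, m, a):
    """Edge set of G_{n,m}^a under the assumed reading of Construction <ref>:
    u_i is joined to v_{(j+alpha) mod m} for alpha in [a], with j = ceil(i*m/n)."""
    edges = set()
    for i in range(n):
        j = (i * m + n - 1) // n          # integer ceiling of i*m/n
        for alpha in range(a):
            edges.add((i, (j + alpha) % m))
    return sorted(edges)

# biregular case: n = 6, m = 4, k = 2, a = (k+1)*m/n = 2; the result is (2,3)-regular
print(construction_G_a(6, 4, 2))
```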
Cichacz and Suchan <cit.> proved that, if n, m and a are positive integers such that n>m>1, k=n-m, and a n = (k+1) m, then the graph G_n,m^a=(U,V; E) is a (a, k+1)-regular k-critical-bipartite graph of size (k+1) m that is MkCBG-(n,m). Moreover, they stated the following conjecture for irregular k-critical-bipartite graphs.
Let n,m be positive integers such that n>m>1, Let k=n-m and a=m (k+1)/n is not an integer. Then G_n,m^a obtained by Construction <ref> is k-critical-bipartite.
In this paper, we prove that the conjecture is true. Moreover, for any pair n, m of positive integers such that n>m>1, k=n-m, we construct a bipartite graph G=(U, V; E) of order (n,m) that is k-critical-bipartite and is lexicographically minimum among all such graphs with respect to (|E|, Δ_U, Δ_V). In other words, we solve the problem of finding a Minimum k-Critical Bipartite Graph of order (n,m) (MkCBG-(n,m)) completely.
§ POSITIVE CONSTRUCTION
In this section, we give a construction that yields Minimum k-Critical Bipartite Graphs of order (n,m).
Let us start by recalling the following lemma that was proved in <cit.>.
Let x, y, c be positive integers such that x>y. Let n=cx, m=cy, and j ∈ [m]. Then the number of integer solutions to ⌈im/n⌉ ≡ j (mod m) with respect to i, with i ∈ [n], is equal to ⌊jx/y⌋ - ⌊(j-1)x/y⌋. Moreover:
* ⌊jx/y⌋ - ⌊(j-1)x/y⌋ = ⌊rx/y⌋ - ⌊(r-1)x/y⌋,
where r = j mod y.
* For any interval of consecutive y values of j, for (x mod y) of them, there are ⌈x/y⌉ solutions and, for the remaining (y - x mod y), there are ⌊x/y⌋ solutions.
* In general, the number of solutions is ⌈x/y⌉ for (n mod m), and ⌊x/y⌋ for (m - n mod m) values of j∈[m].
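Under the floor/ceiling reading adopted here (the brackets are lost in this extracted copy), the lemma is easy to check numerically for a small case; the sketch below is only such an illustrative check.

```python
def count_solutions(n, m, j):
    """Number of i in [n] with ceil(i*m/n) congruent to j modulo m."""
    return sum(1 for i in range(n) if ((i * m + n - 1) // n) % m == j)

n, m = 6, 4                                          # here c = 2, x = 3, y = 2
x, y = 3, 2
for j in range(m):
    predicted = (j * x) // y - ((j - 1) * x) // y    # floor(jx/y) - floor((j-1)x/y)
    print(j, count_solutions(n, m, j), predicted)    # the last two columns coincide
```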
With Lemma <ref>, we can prove the following lemma that allows us to further analyze the neighborhoods of vertices in graphs like the ones from Construction <ref>.
Let n, m be positive integers such that n>m>1. Let j ∈ [m]. Then
max{i | i ∈ [n], ⌈im/n⌉ ≡ j (mod m)} = ⌊jn/m⌋.
Let us write i_j=max{i | i ∈ [n], ⌈im/n⌉ ≡ j (mod m)}.
We have i_0=0=⌊0 · n/m⌋. For j>0, it is easy to check that i_j is equal to i_{j-1} plus the number of integer solutions to ⌈im/n⌉ ≡ j (mod m). By Lemma <ref>, the number of integer solutions to ⌈im/n⌉ ≡ j (mod m), with respect to i, with i ∈ [n], is equal to ⌊jn/m⌋ - ⌊(j-1)n/m⌋. Therefore, we have i_j=i_{j-1}+⌊jn/m⌋ - ⌊(j-1)n/m⌋. By the telescopic property, we have i_j=∑_{i=1}^{j}(⌊in/m⌋ - ⌊(i-1)n/m⌋)=⌊jn/m⌋ for j>0.
It is easy to check that the edge set of the graph from Construction <ref>, in the case where a n = (k+1) m, can also be written as:
E = { (u_(i-β) mod n, v_j) | j ∈ [m], β∈ [k+1], i=max{i | i ∈ [n], ⌈im/n⌉ ≡ j (mod m)}}.
So, by Lemma <ref>, when a = m(k+1)/n is an integer, the edge set of the graph from Construction <ref> can be described in the following way:
E = { (u_(i-β) mod n, v_j) | j ∈ [m], β∈ [k+1], i= ⌊jn/m⌋}.
Let us generalize this construction to the case where a = m(k+1)/n is not an integer.
Let n, m be positive integers such that n>m>1. Let k=n-m. Let G_n,m=(U,V; E) be a bipartite graph with U={u_i | i ∈ [n]}, V={v_j | j ∈ [m]}, and
E = { (u_(i-β) mod n, v_j) | j ∈ [m], β∈ [k+1], i= ⌊jn/m⌋}.
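As an illustration, and again under an assumed reading of the garbled indices (i = ⌊jn/m⌋ with reduction mod n), the sketch below generates G_{n,m} for a small irregular pair and verifies the Hall-style condition of the theorem of Laroche et al. by brute force. The helper duplicates the brute-force checker sketched earlier so that this snippet stays self-contained.

```python
from itertools import combinations

def construction_G(n, m):
    """Assumed reading of Construction <ref>: N(v_j) = {u_{(i-beta) mod n} : beta in [k+1]},
    where i = floor(j*n/m) and k = n - m."""
    k = n - m
    return sorted({((j * n // m - beta) % n, j) for j in range(m) for beta in range(k + 1)})

def hall_ok(n, m, edges):
    """|N(V')| >= |V'| + k for every non-empty V' (brute force, small cases only)."""
    k = n - m
    nbrs = [set() for _ in range(m)]
    for i, j in edges:
        nbrs[j].add(i)
    return all(len(set().union(*(nbrs[j] for j in sub))) >= len(sub) + k
               for size in range(1, m + 1) for sub in combinations(range(m), size))

E = construction_G(5, 3)          # n=5, m=3: a = m(k+1)/n = 9/5 is not an integer
print(len(E), hall_ok(5, 3, E))   # 9 True: (k+1)m edges and the graph is k-critical-bipartite
```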
So the following observation holds.
If a = m(k+1)/n is an integer then G_n,m=G_n,m^a.
See Figure <ref> for a small example of graphs G_n,m^a and G_n,m in which a = m(k+1)/n is not an integer. Let us present a lemma that relates the graphs obtained by Constructions <ref> and <ref> in general.
Let n,m be positive integers such that n>m>1. Let k=n-m and a=m(k+1)/n. Then the graph G_n,m=(U,V; E) obtained by Construction <ref> is a subgraph of the graph G_n,m^ a=(U,V; E) given in Construction <ref>.
Let us consider the graph G_n,m=(U,V; E) obtained by Construction <ref>. Let e_0 ∈E. Then
there is j_0 ∈ [m] and β_0 ∈ [k+1] such that
e_0=(u_(i_0 - β_0) n,v_j_0) for i_0=j_0n/m. Let us show that e_0 also belongs to E.
By Construction <ref>, there is {(u_(i_0 - β_0),v_(j_(i_0-β_0)+α) m), α∈ [ a]}⊂E, where j_(i_0-β_0)=(i_0-β_0)m/n. Let us choose γ∈ [m] such that
(i_0-β_0)m/n=(j_0n/m-β_0)m/n=(j_0n-γ/m-β_0)m/n=j_0-γ +β_0 m/n.
So, there is j_(i_0-β_0)=j_0-γ +β_0 m/n=j_0-γ +β_0 m/n.
Note that
γ +β_0 m/n≤m-1+km/n=a-1/n= a-1 if a ∈ℕ
b, b ∈{ a-2, a-1} otherwise.
So there exists α_0 ∈ [ a] such that j_0=j_(i_0-β_0)+α_0, and hence
e_0=(u_(i_0-β_0),v_j_0) ∈{(u_(i_0-β_0),v_(j_(i_0-β_0)+α) m), α∈ [ a]}⊂E.
Let n,m be positive integers such that n>m>1. Let k=n-m. Then G_n,m=(U,V; E) obtained by Construction <ref> is a minimum k-critical-bipartite graph of order (n,m).
Let us show that, for any S≠∅, S⊂ V, there is |N_G_n,m(S)|≥ |S|+k, which, by Theorem <ref>, implies that G_n,m is k-critical. The proof is by induction on |S|.
Let |S|=1. Let v_j, j ∈ [m], be the vertex in S. By definition, N_G_n,m(v_j)={u_(i-β) mod n | i= ⌊jn/m⌋, β∈ [k+1]}, so |N_G_n,m(v_j)|=k+1 and the conclusion is true.
Given an integer p, 1 ≤ p ≤ m-1, suppose that |N_G_n,m(S')|≥ p+k holds for any S', S' ⊂ V, such that |S'| = p.
Take any S, S⊂ V, with |S| = p + 1.
Suppose first that there exists v ∈ S such that N_G_n,m(v) ∖ N_G_n,m(S ∖ v) ≠∅ and hence |N_G_n,m(v) ∩ N_G_n,m(S ∖{v})|< _G_n,m(v). Let S'= S ∖{v}. Then |S'| = p and, by the induction hypothesis, |N_G_n,m(S')|≥ p+k. Hence,
|N_G_n,m(S)|≥ p+k +|N_G_n,m(v)∖ N_G_n,m(S')|≥ p+k+1=|S|+k.
Assume now that N_G_n,m(v) ⊂ N_G_n,m(S ∖ v) for every v ∈ S. Let us show that it implies that N_G_n,m(S) = U, and so |N_G_n,m(S)| = |V|+k ≥ |S|+k.
Suppose, to the contrary, that I = {i ∈ [n] u_i ∉N_G_n,m(S)}≠∅. Let i_0 = max{i i ∈ I, (i+1) n ∉ I}. Then there exists v_r ∈ S such that u_(i_0+1) n∈ N_G_n,m(v_r). Since N_G_n,m(v_r)⊂ N_G_n,m(S∖{v_r}), there exists v_l ∈ S, r≠ l, such that u_(i_0+1) n∈ N_G_n,m(v_l). By Construction <ref>, N_G_n,m(v_j)={u_i_j-k,u_i_j-(k-1),…,u_i_j} for every j∈[m]. It is easy to check that, for any j_1,j_2 ∈ [m], j_1≠ j_2 implies i_j_1≠ i_j_2. So there is N_G_n,m(v_j_1)∖ N_G_n,m(v_j_2)≥ 1.
On the other hand, since u_(i_0+1) n∈ N_G_n,m(v_l) ∩ N_G_n,m(v_r) and u_i_0 n∉ N_G_n,m(v_l) ∪ N_G_n,m(v_r), there is N_G_n,m(v_l)=N_G_n,m(v_r)={u_i_0+1,u_i_0+2,…,u_i_0+k+1}, a contradiction.
It is easy to check, by the pigeonhole principle, that there is δ_V≥ k+1 for any k-critical-bipartite graph G=(U, V; E) of order (n,m). By definition, G_n,m has (k+1)m edges. So the construction is minimum.
Combining Theorem <ref> and Lemma <ref>, we obtain that the Conjecture <ref> is true.
Let n,m be positive integers such that n>m>1, Let k=n-m and a=m (k+1)/n is not an integer. Then G_n,m^a obtained by Construction <ref> is k-critical-bipartite.
§ NEGATIVE CONSTRUCTION
In this section we give a construction that yields graphs G=(U,V;E) of order (n,m) that also have |E|=m(k+1), Δ(U)=⌈m(k+1)/n⌉, and Δ(V)=k+1, where k=n-m, but are not k-critical-bipartite. So these properties are not sufficient for a graph to be k-critical-bipartite.
Note that Theorem <ref>, together with the simple observation that there is δ(V) ≥ k+1 in any k-critical-bipartite G=(U,V;E) with n>m>1 and k=n-m, implies that there is δ(V) = Δ(V) = k+1, Δ(U) =⌈m(k+1)/n⌉, and δ(U) ≤ k if G=(U,V;E) is minimum k-critical-bipartite. Both in Construction <ref> and Construction <ref> that follows, there is δ(U) =⌊m(n-m+1)/n⌋. The graphs obtained by the two constructions have the same degree sequences, so even fixing the vertex degrees does not make a graph k-critical-bipartite.
Cichacz and Suchan in <cit.> gave the following construction of a class of biregular graphs.
[<cit.>]
Let n, m be positive integers such that n>m>1, and a=(n-m+1)m/n is an integer. Let (n,m)=c, n=cx, and m=cy. Let Ǧ_n,m^a=(U,V; E) be the bipartite graph with U={u_i | i ∈ [n]}, V={v_j | j ∈ [m]}, E = { (u_i, v_(j+α) mod m) | i ∈ [n], α∈ [a], j=⌊i/x⌋y}.
It is easy to check that the graph Ǧ_n,m^a=(U, V; E) can also be constructed in the following way. Let b=n-m+1 and d=(a,b). Note that there is a=dy and b=dx (see <cit.>). Let Ǧ be a d-regular bipartite graph having color classes U^'={u_i | i ∈ [c]} and V^'={v_j | j ∈ [c]}, where c=(n,m), such that E(Ǧ)={u_iv_(i+δ) mod c, i ∈ [c], δ∈ [d] }. We construct the graph Ǧ_n,m^a=(U,V; E) by “blowing up” each vertex u_i into x=n/c vertices u_i,α, α∈ [x], and each vertex v_j into y=m/c>1 vertices v_j,β, β∈ [y]. Each edge from Ǧ is substituted by the corresponding complete bipartite graph K_x,y. Note that Ǧ_n,m^a is (a,b)-regular.
The authors showed that, despite having the same degrees as minimum biregular k-critical-bipartite graphs, the graphs obtained by Construction <ref> tend not to be k-critical-bipartite.
[<cit.>]
Let n, m be positive integers such that n>m>1, k=n-m, and a=m (k+1)/n is an integer. The graph Ǧ_n,m^a given in Construction <ref> is biregular k-critical-bipartite if and only if (n,m)=m.
Cichacz and Suchan <cit.> complemented Construction <ref> with another construction for the case where (n,m)=m and a = m(n-m+1)/n is an integer to get the following result.
[<cit.>]
Let n=|U| and m=|V| be positive integers such that 1 < m < n, k=n-m, and a = m(k+1)/n is an integer. There exists an (a, k+1)-regular bipartite graph G = (U, V; E) that is not k-critical if and only if a < m - 1.
So we know constructions of (a,b)-regular bipartite graphs of order (n,m) that are not k-critical-bipartite, where b=n-m+1 and a=m(k+1)/n, whenever such graphs exist. In what follows, we focus on the cases where a=m(k+1)/n is not an integer.
Let us recall the results of Havel-Hakimi on constructing bipartite graphs based on degree sequences that are useful for constructing graphs that are not k-critical-bipartite. Let P : p_0 ≥ p_1 ≥…≥ p_n-1 and Q : q_0 ≥ q_1 ≥…≥ q_m-1 be sequences of non-negative integers. The pair (P, Q) is bigraphic if there exists a bipartite graph G=(U,V; E) with |U|=n and |V|=m in which P and Q describe the degrees of the vertices in U and V, respectively. The following theorem is a version of Havel-Hakimi’s theorem for bigraphic sequences.
The pair (P, Q) is bigraphic if and only if the pair (P',Q') is bigraphic, where (P',Q') is obtained from (P, Q) by deleting the largest element p_1 of P and subtracting one from each of the p_1 largest elements of Q.
Let x, y, b be positive integers such that x>y>1 and b≤ x. Let d=⌊yb/x⌋ and r=by-dx (so that by = dx+r), and let l=by-x⌊yb/x⌋=r. Let p_i=⌈yb/x⌉ for i∈[l] and p_i=⌊yb/x⌋ for i∈{l,…,x-1}, and let q_j=b for j∈[y]. Let P=(p_0, p_1, …, p_x-1) and Q=(q_0, q_1, …, q_y-1). Note that there is r=by-dx=by-x⌊yb/x⌋ and by=(x-r)d+r(d+1), which implies that ∑_i ∈ [x]p_i = by. So the pair (P,Q) is bigraphic by Theorem <ref>.
Note that (P,Q) is bigraphic if and only if (Q,P) is.
Let us present two constructions based on P and Q defined above: one for (P,Q) and the other for (Q,P). The graphs thus obtained have different properties.
Let x, y, b be positive integers such that x>y>1 and b≤ x. Let l=by-x⌊yb/x⌋. Let p_i=⌈yb/x⌉ for i∈[l] and p_i=⌊yb/x⌋ for i∈{l,…,x-1}. Let D_0=0 and D_i=(∑_j=0^i-1 p_j) mod y for i ∈{1, …, x-1}. Let Ġ_x,y^b=(U,V; E) be the bipartite graph with U={u_i | i ∈ [x]}, V={v_j | j ∈ [y]} such that for every i ∈ [x]:
N_Ġ_x,y^b(u_i)={v_(D_i+π) mod y | π∈ [p_i]}.
Let x, y, b be positive integers such that x>y>1 and b≤ x. Then the graph Ġ_x,y^b=(U,V; E) obtained by Construction <ref> has size |E|=by, deg(u)∈{⌊yb/x⌋,⌈yb/x⌉} for every u∈ U, and deg(v)=b for every v∈ V.
Note that the graph Ġ_x,y^b is a graph constructed as in Theorem <ref> for (P,Q) for P and Q defined above, where P and Q describe the degrees of vertices in U and V, respectively. Thus deg(u)∈{⌊yb/x⌋,⌈yb/x⌉} for every u∈ U and deg(v)=b for every v∈ V.
Let x, y,b be positive integers such that b≤ x.
Let G̈_x,y^b=(U,V; E) be the bipartite graph with U={u_i | i ∈ [x]}, V={v_j | j ∈ [y]}, such that for each j ∈ [y]:
N_G̈_x,y^b(v_j)={u_(jb+β) mod x | β∈ [b]}.
Let x,y,b be positive integers such that b≤ x. Then G̈_x,y^b=(U,V; E) has size |E|=by, deg(u)∈{⌊yb/x⌋,⌈yb/x⌉} for every u∈ U and deg(v)=b for every v∈ V.
Note that the graph G̈_x,y^b is a graph constructed as in Theorem <ref> for (Q,P) for Q and P defined above, where Q and P describe the degrees of vertices in V and U, respectively. Thus deg(u)∈{⌊yb/x⌋,⌈yb/x⌉} for every u∈ U and deg(v)=b for every v∈ V.
Note that, in general, Ġ_x,y^b≇G̈_x,y^b. For example, Ġ_6,5^2 is connected, whereas G̈_6,5^2 is not (see Figure <ref>). Based on the above constructions we are able to show the following result.
Let n,m be positive integers such that n>m>1. Let k=n-m. Let c be a positive integer such that n=cx and m=cy. If c>1 and k+1≤ x, or d=n/(k+1)>1 and d is an integer, then there exists a graph G=(U,V;E) such that deg(u)∈{⌊m(k+1)/n⌋,⌈m(k+1)/n⌉} for any u∈ U and deg(v)=k+1 for any v∈ V which is not k-critical-bipartite.
If c>1 and k+1≤ x, define a graph ⃛G_n,m^k+1=(U,V; E) as the disjoint union of c copies of Ġ_x,y^k+1=(U',V'; E'). Since c>1, the graph ⃛G_n,m^k+1 is disconnected, and therefore it is not k-critical-bipartite by Theorem <ref>.
Suppose now that d=n/(k+1)>1 and d is an integer. Then the graph G̈^k+1_n,m is disconnected. Indeed, since N_G̈_n,m^k+1(v_j)={u_(j(k+1)+β) mod n | β∈ [k+1]}, every vertex in V has one of d=n/(k+1)>1 disjoint neighborhoods in U. So G̈^k+1_n,m has d connected components. By Theorem <ref>, it means that it is not k-critical-bipartite.
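For completeness, here is a small sketch reproducing the disconnectedness phenomenon for the example mentioned in the text, (x, y, b) = (6, 5, 2). The index formulas follow the assumed readings of the two constructions above and are meant only as an illustration.

```python
def construction_G_dot(x, y, b):
    """Assumed reading of the Ġ construction: the degrees p_i are floor/ceil of yb/x,
    and u_i is joined to p_i consecutive vertices of V starting at offset D_i (mod y)."""
    q, l = divmod(y * b, x)                   # q = floor(yb/x), l = yb - x*q
    p = [q + 1] * l + [q] * (x - l)
    edges, offset = [], 0
    for i in range(x):
        for t in range(p[i]):
            edges.append((i, (offset + t) % y))
        offset = (offset + p[i]) % y
    return edges

def construction_G_ddot(x, y, b):
    """Assumed reading of the G̈ construction: N(v_j) = {u_{(jb+beta) mod x} : beta in [b]}."""
    return [((j * b + beta) % x, j) for j in range(y) for beta in range(b)]

def n_components(x, y, edges):
    """Number of connected components of the bipartite graph, via union-find."""
    parent = list(range(x + y))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in edges:
        ri, rj = find(i), find(x + j)
        if ri != rj:
            parent[ri] = rj
    return len({find(a) for a in range(x + y)})

print(n_components(6, 5, construction_G_dot(6, 5, 2)))    # 1: connected
print(n_components(6, 5, construction_G_ddot(6, 5, 2)))   # 3: disconnected, hence not k-critical-bipartite
```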
Note that Observation <ref> does not cover all cases of n and m. Assumption n ≤ k+1 implies that we deal with a complete bipartite graph, which is k-critical-bipartite. So the cases that are left open have that n/n-m+1 is not an integer, and (n,m)=1 or n-m+1 > n/c for any c non-trivial common divisor of n and m.
§ CONNECTIVITY
A graph G=(V, E) is said to be k-connected if it has more than k vertices and remains connected whenever strictly fewer than k vertices are removed. The connectivity of G, denoted κ(G), is the maximum k such that G is k-connected.
Given a graph G and two vertices u and v that belong to the same component of G, a vertex cut in G separating u and v is a set S of vertices of G whose removal leaves u and v in different components of G - S. The local connectivity κ_u, v(G) of u and v in G is the size of a smallest vertex cut separating u and v. Given a graph G, κ(G) equals the minimum κ_u, v(G) over all nonadjacent pairs of vertices u, v (except for complete graphs).
For a set S⊂ V(G), the set connectivity of S, denoted by κ_S(G), is the size of a smallest vertex cut separating any u,v∈ S.
Favaron <cit.> showed that every k-critical graph G of order n > k is k-connected and this result is sharp. On the other hand, Li and Nie <cit.> only showed that every k-critical-bipartite graph G is 1-connected. We improve this result to show that for any k-critical-bipartite graph G=(U, V; E) with |U|=n, |V|=m, and k=n-m > 0, there is:
* κ_V(G)≥ k,
* κ_U(G) ≥min{δ_U(G),k},
* κ(G) ≥min{δ(G),k}.
Let n,m be positive integers such that 1<m<n. Let k=n-m. Then κ_V(G)≥ k for any k-critical-bipartite graph G=(U,V;E) with |U|=n and |V|=m.
Let G=(U,V;E) be a k-critical-bipartite graph of order (n,m). Towards a contradiction, suppose that κ_V(G)< k. So there exists a set Z ⊂ U ∪ V with |Z|<k that separates two vertices v_1, v_2 in V. Let Z_1 = Z ∩ U and Z_2 = Z ∩ V. Note that there is no path between v_1 and v_2 in G'=(U',V'; E')=G[(U∖ Z_1)∪(V∖ Z_2)]. So we can choose a partition of G' into two subgraphs G'_1=(U'_1,V'_1;E'_1) and G'_2=(U'_2,V'_2;E'_2) that are unions of components of G' with v_1 ∈ V'_1 and v_2 ∈ V'_2.
Let |U'_i|= |V'_i|+ε_i for i=1,2. So there is |V'_1|+|V'_2|+|Z_2|+k=|U'_1|+|U'_2|+|Z_1|=|V'_1|+|V'_2|+|Z_1|+ε_1+ε_2. Thus |Z_2|+k=|Z_1|+ ε_1+ε_2.
By Theorem <ref>, there is |N_G(V'_1)| ≥ |V'_1|+k. On the other hand, since U'_1 ∪ Z_1 ⊇ N_G(V'_1), there is |U'_1 ∪ Z_1| ≥ |N_G(V'_1)|. So, by simplifying |V'_1|+ε_1+|Z_1| = |U'_1| + |Z_1| ≥ |V'_1|+k, we get ε_1+|Z_1| ≥ k. In a similar way, we get that ε_2+|Z_1| ≥ k.
On one hand, since |Z_2|+k=|Z_1|+ ε_1+ε_2≥ k+ε_2, we obtain that |Z_2|≥ε_2. On the other hand, there is |Z_1|+|Z_2|<k≤ |Z_1|+ε_2, so |Z_2|<ε_2, a contradiction.
Let n,m be positive integers such that 1<m<n. Let k=n-m. Then κ_U(G) ≥min{δ_U(G),k} for any k-critical-bipartite graph G=(U,V;E) with |U|=n and |V|=m. Moreover, for every separator Z in G with |Z|<k, there exists a vertex u ∈ U with N(u) ⊆ Z.
Let G=(U,V;E) be a k-critical-bipartite graph of order (n,m). Towards a contradiction, suppose that κ_U(G)< min{δ_U(G),k}. So there exists a set Z ⊂ U ∪ V with |Z|<min{δ_U(G),k} that separates two vertices u_1, u_2 in U. Let Z_1 = Z ∩ U and Z_2 = Z ∩ V. Note that there is no path between u_1 and u_2 in G'=(U',V'; E')=G[(U∖ Z_1)∪(V∖ Z_2)]. So we can choose a partition of G' into two subgraphs G'_1=(U'_1,V'_1;E'_1) and G'_2=(U'_2,V'_2;E'_2) that are unions of components of G' with u_1 ∈ U'_1 and u_2 ∈ U'_2.
Suppose first that the graph G' contains an isolated vertex u∈ U', then |Z_2|≥δ_U since N(u)⊂ Z_2, a contradiction.
Assume now that the graph G' does not contain an isolated vertex u∈ U', hence |V_i'|>0 for i=1,2. Suppose that |Z|=|Z_1|+|Z_2|<k. We will proceed now like in the proof of Theorem <ref>.
Let |U'_i|= |V'_i|+ε_i for i=1,2. So there is |V'_1|+|V'_2|+|Z_2|+k=|U'_1|+|U'_2|+|Z_1|=|V'_1|+|V'_2|+|Z_1|+ε_1+ε_2. Thus |Z_2|+k=|Z_1|+ ε_1+ε_2.
Since V_1'≠∅ by Theorem <ref>, there is |N_G(V'_1)| ≥ |V'_1|+k. On the other hand, since U'_1 ∪ Z_1 ⊇ N_G(V'_1), there is |U'_1 ∪ Z_1| ≥ |N_G(V'_1)|. So, by simplifying |V'_1|+ε_1+|Z_1| = |U'_1| + |Z_1| ≥ |V'_1|+k, we get ε_1+|Z_1| ≥ k. In a similar way, we get that ε_2+|Z_1| ≥ k.
On one hand, since |Z_2|+k=|Z_1|+ ε_1+ε_2≥ k+ε_2, we obtain that |Z_2|≥ε_2. On the other hand, there is |Z_1|+|Z_2|<k≤ |Z_1|+ε_2, so |Z_2|<ε_2, a contradiction.
Finally, the last part of the thesis of the theorem follows from the previous analyses.
Let n,m be positive integers such that 1<m<n. Let k=n-m. Then, for any k-critical-bipartite graph G=(U,V;E) with |U|=n and |V|=m, there is κ(G) ≥min{δ(G),k}.
Let G=(U,V;E) be a k-critical-bipartite graph of order (n,m). Towards a contradiction, suppose that Z is a vertex cut for two vertices x and y in G with |Z| < min{δ_U(G),k}. Let G'=G-Z.
If x,y∈ V, then |Z|≥ k by Theorem <ref>, a contradiction. For x,y∈ U, we have |Z|≥min{δ_U(G),k} by Theorem <ref>, a contradiction. Finally, consider the case where x∈ U and y∈ V. Choose x'∈ U' ∖{x}∩ N_G'(y). Such a vertex exists since G is k-critical and n ≥ k+2 > |Z|+2. So Z is a vertex cut for x and x', and the thesis holds by Theorem <ref>.
By applying Theorems <ref>, <ref>, and <ref> to Construction <ref>, we get the following corollary that shows that the given lower bounds are tight.
Let n,m be positive integers such that 1<m<n. Let k=n-m and G_n,m=(U,V; E) be a graph given by Construction <ref>. Then the following properties hold:
* κ_V(G_n,m) ∈{k,k+1},
* κ_U(G_n,m)=δ_U(G_n,m),
* κ(G_n,m)=δ(G_n,m).
Moreover, κ_V(G_n,m) = k if k=1.
Since Δ_V(G_n,m)=k+1, by Theorem <ref>, there is κ_V(G_n,m) ∈{k,k+1}. Since δ_U(G_n,m)=⌊m(k+1)/n⌋ and ⌊m(k+1)/n⌋<k+1 for n>m>1, by Theorem <ref>, there is κ_U(G_n,m)=δ_U(G_n,m). By Theorem <ref>, there is κ(G_n,m)=δ(G_n,m). Finally, the case where k=1 is easy to check (for example, consider removing u_0 in the graph G_6,5 in Figure <ref>).
Note that, given positive integer values n,m such that n>m>1 and k=n-m, for any κ∈{1,…,m}, there exists a k-critical-bipartite graph G=(U,V; E) of order (n,m) with connectivity κ(G)=κ. Indeed, if κ=m, then G=K_n,m; otherwise (i.e., κ<m), let G'=(U,V;E') be a complete bipartite graph K_n,m. If we pick any vertex u∈ U and delete m-κ edges incident to u, the obtained graph G=(U,V;E) is k-critical-bipartite and κ-connected.
§ FINAL REMARKS
Let G=(U,V;E) with |U|=n, |V|=m n>m>1, k=n-m be a minimum k-critical-bipartite graph. Then δ_V=k+1, therefore κ_V∈{k,k+1} by Theorem <ref>. For example, note that κ_V(G_6,5)=1 (see Figure <ref>). Therefore we pose the following open problem.
Characterize all minimum k-critical-bipartite graphs for which κ_V=k.
Recall that for any minimum k-critical-bipartite graph G=(U,V;E) of order (n,m), with k=n-m, there is |E|=(k+1)m and Δ_U=⌈(k+1)m/n⌉. And, to have (k+1)m edges, the number of vertices in U of degree Δ_U has to be at least (n-m+1)m-n⌊(k+1)m/n⌋. But there is some flexibility with respect to the degree of other vertices in U: there may be δ_U < ⌊(k+1)m/n⌋ (see Figure <ref> for an example). So we pose the following open problem.
Determine if there exist minimum k-critical-bipartite graphs G=(U,V;E) of order (n,m) with δ_U=δ for any δ∈{1,2,…,⌊m(k+1)/n⌋-1}.
The property of being k-critical-bipartite, and fault-tolerance in general, has strong relations with connectivity (besides the results presented in this paper, see, for example, the work of Cichacz et al. <cit.>). Let us recall that, by Theorem <ref>, G is k-critical-bipartite if and only if G̃ is k-extendable, and there is the following result by Robertson et al.
Let G=(U,V; E) be a connected bipartite graph, let M be a perfect matching in G, and let k ≥ 1 be an integer. Then G is k-extendable if and only if D(G,M) is strongly k-connected, where D(G,M) is the directed graph obtained by directing every edge from U to V, and contracting every edge of M.
Since the respective auxiliary graphs can be constructed in polynomial time, testing if a graph is k-critical-bipartite reduces to testing connectivity in directed graphs, which can also be done efficiently (see, for example, the results of Henziger et al. <cit.>). But, given a bipartite graph G=(U,V;E), besides testing if G is k-critical-bipartite, it is valuable for applications to find a minimum supergraph (adding edges, augmentation) or minimum subgraph (removing edges, sparsification) of G that is k-critical-bipartite. However, unlike testing connectivity, the corresponding edge modification problems tend to be harder and not well understood (see the work of Crespelle et al. <cit.> for a recent review). We believe that the relations between k-critical-bipartiteness and connectivity will permit us to adapt methods developed for edge modification problems related to connectivity to work with k-critical-bipartiteness. And characterizing minimum k-critical-bipartite graphs is a valuable step in this direction. We terminate with the following open problem.
Given a bipartite graph G=(U,V;E) of order (n,m), what is the complexity of finding a minimum supergraph (subgraph) of G that is k-critical-bipartite.
|
http://arxiv.org/abs/2307.07611v1 | 20230714201604 | Combinatorial and Recurrent Approaches for Efficient Matrix Inversion: Sub-cubic algorithms leveraging Fast Matrix products | [
"Mohamed Kamel Riahi"
] | math.NA | [
"math.NA",
"cs.NA",
"math.CO",
"15A09, 15A23, 65F05, 68R05"
] |
Combinatorial and Recurrent Approaches for Efficient Matrix Inversion: Sub-cubic algorithms leveraging Fast Matrix products
Mohamed Kamel Riahi
==============================================================================================================================
In this paper, we introduce novel fast matrix inversion algorithms that leverage triangular decomposition and recurrent formalism, incorporating Strassen's fast matrix multiplication. Our research places particular emphasis on triangular matrices, where we propose a novel computational approach based on combinatorial techniques for finding the inverse of a general non-singular triangular matrix. Unlike iterative methods, our combinatorial approach for (block) triangular-type matrices enables direct computation of the matrix inverse through a nonlinear combination of carefully selected combinatorial entries from the initial matrix. This unique characteristic makes our proposed method fully parallelizable, offering significant potential for efficient implementation on parallel computing architectures. While it is widely acknowledged that combinatorial algorithms typically suffer from exponential time complexity, thus limiting their practicality, our approach demonstrates intriguing features that allow the derivation of recurrent relations for constructing the matrix inverse. By combining the (block) combinatorial approach, with a recursive triangular split method for inverting triangular matrices, we develop potentially competitive algorithms that strike a balance between efficiency and accuracy.
To establish the validity and effectiveness of our approach, we provide rigorous mathematical proofs of the newly presented method. Additionally, we conduct extensive numerical tests to showcase its applicability and efficiency. Furthermore, we propose several innovative numerical linear algebra algorithms that directly factorize the inverse of a given general matrix. These algorithms hold immense potential for offering preconditioners to accelerate Krylov subspace iterative methods and address large-scale systems of linear equations more efficiently.
The comprehensive evaluation and experimental results presented in this paper confirm the practical utility of our proposed algorithms, demonstrating their superiority over classical approaches in terms of computational efficiency. Our research opens up new avenues for exploring advanced matrix inversion techniques, paving the way for improved numerical linear algebra algorithms and the development of effective preconditioners for various applications.
Keywords: Combinatorial for matrix inversion, Fast inversion Algorithm, Strassen's method, Recurrent algorithms, and Triangular Factorization.
Mathematics Subject Classification [2022]: 15A09, 15A23, 65F05, 68R05
§ INTRODUCTION
In recent years, there has been significant research focused on developing efficient and scalable techniques for matrix inversion, which is a fundamental operation in linear algebra. Matrix inversion plays a critical role in various fields, including science and engineering, where it is used to solve systems of linear equations, calculate determinants, eigenvalues, and eigenvectors, and perform other essential computations. However, traditional matrix inversion methods can be computationally expensive and impractical for large-scale problems. Therefore, the development of fast and efficient algorithms for matrix inversion is crucial, particularly for applications involving large matrices.
Over the years, several research papers have offered insight into the ongoing efforts to improve matrix inversion and multiplication efficiency. In the late sixties, Strassen <cit.> proposed the first fast approach to multiply two square matrices, which, through a divide-and-conquer approach, yields the inverse of a matrix in less than 5.64 n^log_2(7) operations. Shortly after, Strassen made a further development <cit.> that led to reducing the complexity exponent. Coppersmith and Winograd <cit.> benefited from the idea. Furthermore, Davie and Stothers improved the results in <cit.>. Davie <cit.> extends the method used by Coppersmith and Winograd to derive an upper bound of ω < 2.37369 for the exponent of complexity.
Later Vassilevska Williams <cit.> made a further improvement, which recently got beaten by Duan, Wu, and Zhou in <cit.>. The latter work relies on an asymmetric hashing method, and it is the fastest method for matrix multiplication as of today. It is worth noting that the above improvement benefited from tensor formatting calculations.
The non-singular matrix inverse is fundamentally interrelated with the matrix-matrix product. This interconnection is clear if one, for example, considers the block decomposition of a given non-singular matrix and uses the Schur complement for the inverse calculation. Several methods have been developed that benefit from block matrix decomposition <cit.>, and they have provided valuable insights into the challenges and potential solutions for matrix inversion in real-world applications.
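As a quick illustration of that interconnection (a generic sketch, not an algorithm from the works cited here), the 2×2 block inverse of a matrix can be assembled from two smaller inverses and a handful of matrix products via the Schur complement:

```python
import numpy as np

def block_inverse(M, p):
    """Inverse of a non-singular matrix M via its 2x2 block decomposition,
    with leading p x p block A assumed non-singular; S = D - C A^{-1} B is
    the Schur complement of A in M."""
    A, B = M[:p, :p], M[:p, p:]
    C, D = M[p:, :p], M[p:, p:]
    Ainv = np.linalg.inv(A)
    Sinv = np.linalg.inv(D - C @ Ainv @ B)
    return np.block([[Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
                     [-Sinv @ C @ Ainv,                   Sinv]])

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8)) + 8 * np.eye(8)            # a well-conditioned test matrix
print(np.allclose(block_inverse(M, 3), np.linalg.inv(M)))  # True
```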
Besides, in the context of Matrix inverse and Generalized matrix inverse, Petković et. al in <cit.> introduced a recursive algorithm for the generalized Cholesky factorization of a given symmetric, positive semi-definite matrix. They used the Strassen method for matrix inversion along with the recursive Cholesky factorization algorithm resulting in better running times while the matrix multiplication is considered to consume time complexity 𝒪(n^3). In <cit.> Stanimirović et. al. introduced a successive matrix squaring algorithm for approximating outer generalized inverses of a given matrix with a prescribed range and null space.
Additionally, high-performance computing has contributed to further accelerating the proposed algorithms for matrix inversion, for instance, using graphics processing units as in Sharma et al. <cit.>, who redesigned the classical Gauss-Jordan algorithm to optimize matrix inversion. At the ultra-large scale, HouZhen et al. <cit.> discussed the challenge of inverting matrices, particularly in the domain of cryptography. Their paper proposes a parallel distributed block recursive computing method based on Strassen's method, which can process matrices at a significantly increased scale.
Moreover, in the context of sparse matrices over finite fields, a recent research paper by Casacuberta et. al
<cit.> proposed an improvement to the current best running time for matrix inversion, achieving an expected 𝒪(n^2.2131) time using fast rectangular matrix multiplication. The paper generalizes the inversion method to block-structured matrices with other displacement operators and strengthens the upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices.
In <cit.> Amestoy et. al exploited the sparsity within the resulting blocks of multiple right-hand sides in the computation of multiple entries of the inverse of a large sparse matrix in a massively parallel setting. The matrix is assumed to be already factorized by a direct method and the factors are distributed.
Like the previous works, our paper focuses on the speed of matrix inversion, which directly affects the overall computational cost. Nonetheless, our approach is completely different from the above, although it can benefit from any advancement made in matrix-matrix products in terms of algorithms and software. In practice, we shall discuss a new technique for inverting non-singular triangular matrices that uses combinatorics to fill in the entries of the matrix inverse directly.
Triangular matrices are among the type of matrices that are of particular interest in the basic linear algebra theory. This goes from the elementary Gauss row elimination to produce a reduced echelon form matrix to matrix factorization such as the famous QR and the LU matrix decomposition methods. The nature of the triangular matrices automatically suggests direct forward/backward substitution, whether to solve a given linear system of equations (with possibly multiple right-hand sides) or to invert the matrix through the Gauss-Jordan operation. Both ways are, indeed, sequential and it is hard to fill in any arbitrary entries in the resulting inverse matrix without pre-computation of the row below or above.
Inverse QR factorization has been adopted in <cit.> using Givens rotations and a dropping strategy for incomplete factorization. In <cit.> the author incorporates information about the inverse factors L^-1 and U^-1 to efficiently produce an incomplete LU (ILU) factorization; this method constructs robust preconditioners <cit.>.
Our main contribution consists of providing a range of numerical linear algebra algorithms specifically designed for the inverse factorization of non-singular square matrices. The central focus of these algorithms revolves around the utilization of triangular decomposition. Notably, we introduce a novel and pioneering combinatorial-based approach that allows for the direct inversion of triangular matrices. This approach is particularly advantageous in its block version, as it leverages recurrence to expedite the computational burden by reducing the number of sub-blocks involved. By incorporating this recurrence mechanism, our algorithms demonstrate accelerated performance, enabling efficient computation of matrix inverses. A comprehensive analysis of the time complexity for our proposed algorithms was conducted, taking into account the utilization of Strassen's matrix by matrix product within the recurrence. Our analysis revealed a significantly reduced coefficient in the sub-cubic complexity compared to other techniques that employ Strassen's fast method <cit.>. This demonstrates the computational efficiency and effectiveness of our algorithms. By leveraging Strassen's matrix multiplication, we achieve improved time complexity, making our algorithms highly competitive for matrix inversion tasks.
The rest of the paper is organized as follows. In Section <ref> we provide the motivation for our approach in tackling the challenges associated with triangular matrices. We then present, in Section <ref>, our novel approach COMBRIT for inverting triangular matrices using combinatorics on the indices of the entries of the matrix. Later, in Section <ref>, we introduce a recursive technique to speed up the computation of the inverse of a non-singular triangular matrix. In Section <ref>, we investigate the use of the proposed inverse triangular method to decompose the inverse of a matrix directly. Within this section, we propose two new algorithms, namely SQR and SKUL, for the inverse decomposition of QR and LU, respectively, as described in subsection <ref>. Additionally, in subsection <ref> we introduce a novel technique for triangular decomposition (LU or UL) based on a recurrent split and recurrent fast triangular inversion. The numerical tests and implementations are reported in Section <ref>. Finally, we close this paper with some concluding remarks.
§ MOTIVATION
The study of matrix linear algebra is fundamental in many areas of science and technology, and one of the key concepts within this field is the use of triangular matrices. These matrices are momentous because they deliver a simpler and more efficient way to solve systems of linear equations. In particular, when a matrix is triangular, its solution can be easily computed through back-substitution. This can be especially useful when dealing with large systems of equations, where the complexity of the problem can quickly become overwhelming. Additionally, triangular matrices offer an elegant way to compute determinants and eigenvalues, two important mathematical concepts used extensively in many scientific and engineering disciplines. The fact that the product of two triangular matrices is also a triangular matrix further highlights the usefulness of this concept in matrix multiplication. Furthermore, the LU factorization, which is a powerful technique for solving systems of linear equations, relies heavily on the use of triangular matrices. We also find the triangular matrices in the QR decomposition, which is very useful in the calculation of least square solutions and also in finding the eigendecomposition of a given matrix.
The importance of triangular matrices in linear algebra cannot be overstated, as they provide a powerful tool for simplifying and solving complex mathematical problems.
In this work, we focus on the triangular matrices and provide novel concepts on utilizing such a specific format toward accelerating the computation of the inverse of general non-singular matrices.
Let us consider the following n× n unit upper triangular matrix T
T_i,j=
1 if i=j
T_i,j if j>i
0 otherwise.
Our focus will be on the triangular matrix described in Eq.(<ref>), which defines an upper triangular matrix. For a lower triangular matrix, we just consider the transpose operator and all of the results in this paper apply. Furthermore, Eq.(<ref>) assumes ones on the diagonal of T; this specific form is important for our study, and a generalization to arbitrary non-singular triangular matrices follows straightforwardly through an appropriate diagonal matrix multiplication.
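A minimal sketch of that generalization (our own illustration, not taken from the paper): any non-singular upper triangular matrix R factors as R = D T with D its diagonal and T unit upper triangular, so R^-1 = T^-1 D^-1 and it suffices to invert the unit-diagonal factor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
R = np.triu(rng.standard_normal((n, n)))
R[np.diag_indices(n)] = rng.uniform(1.0, 2.0, size=n)   # non-singular upper triangular

D = np.diag(np.diag(R))                                  # diagonal part of R
T = np.linalg.solve(D, R)                                # unit upper triangular factor, R = D @ T
print(np.allclose(np.diag(T), 1.0))                      # True: T has a unit diagonal
print(np.allclose(np.linalg.inv(R), np.linalg.inv(T) @ np.linalg.inv(D)))  # True
```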
§ THE COMBINATORICS TRIANGULAR MATRIX INVERSION
The Hopscotch series H(a_0,a_n+1), 0<a_0≤ a_n+1, is defined as a collection of non-cyclic sequences with fixed integer endpoints a_0 and a_n+1 at the left and right ends, respectively. These sequences are sorted in increasing order. To construct the series, we consider all possible sorted sequences formed by removing k integers, for every k with 0 ≤ k ≤ a_n+1-a_0-1, from the complete sequence containing all integers from a_0 to a_n+1:
H(a_0,a_n+1)={⟨ a_0, a_1,a_2,…, a_n+1⟩,⟨ a_0,a_2,a_3,…,a_n+1⟩,⟨ a_0, a_4,…,a_j,…,a_n, a_n+1⟩, …, ⟨ a_0, a_n+1⟩}.
Note that
H(a_0,a_0)=∅.
Examples of sequences for the H(1,5) series read:
[ k_1 5= 1 , ⟨α^k_1,α^k_2,α^k_3,α^k_4,α^k_5⟩ = ⟨ 1,2,3,4,5⟩ , ℓ^1_⋆=5; k_1 5=2 , ⟨α^k_1,α^k_2,α^k_3,α^k_4⟩ = ⟨ 1,3,4,5⟩ , ℓ^2_⋆=4; k_1 5=3 , ⟨α^k_1,α^k_2,α^k_3,α^k_4⟩ = ⟨ 1,2,4,5⟩ , ℓ^3_⋆=4; k_1 5=4 , ⟨α^k_1,α^k_2,α^k_3,α^k_4⟩ = ⟨ 1,2,3,5⟩ , ℓ^4_⋆=4; k_1 5=5 , ⟨α^k_1,α^k_2,α^k_3⟩ = ⟨ 1,4,5 ⟩ , ℓ^5_⋆=3; k_1 5=6 , ⟨α^k_1,α^k_2,α^k_3⟩ = ⟨ 1,3,5⟩ , ℓ^6_⋆=3; k_1 5=7 , ⟨α^k_1,α^k_2,α^k_3⟩ = ⟨ 1,2,5 ⟩ , ℓ^7_⋆=3; k_1 5=8 , ⟨α^k_1,α^k_2⟩ = ⟨ 1,5 ⟩ , ℓ^8_⋆=2 ]
Here, ℓ_⋆^k_i j stands for the number of elements in the k_i j^th sequence of H(i,j).
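The example above is easy to reproduce programmatically; the sketch below (an illustration of the definition, with names of our choosing) enumerates H(a,b) as all increasing sequences with fixed endpoints a and b and an arbitrary subset of the integers strictly between them.

```python
from itertools import combinations

def hopscotch(a, b):
    """All sequences of the Hopscotch series H(a, b)."""
    inner = range(a + 1, b)
    seqs = []
    for size in range(b - a - 1, -1, -1):            # from the full sequence down to <a, b>
        for mid in combinations(inner, size):
            seqs.append((a,) + mid + (b,))
    return seqs

H15 = hopscotch(1, 5)
print(len(H15))              # 8, in agreement with the count 2^(b-a-1) stated below
for seq in H15:
    print(seq, len(seq))     # each sequence together with its length l_star
```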
For two integers 0<a<b, the total number of Hopscotch sequences is
H_#(a,b)=2^(b-a-1).
The concept is quite simple. Let us consider a=a_0 and b=a_n+1, where in between these two integer values we have n other integers greater than a and lower than b. Consider then the set { a_1,a_2,…, a_n-1,a_n} of all the integers in between a and b. By excluding the fixed endpoints we are left with n integers. The Hopscotch series may then be taken as all possible combinations (sequences) that consist of taking off (or hiding) k of these n elements, for all k ∈{ 0,1, …,n}. Counting these choices gives
H_#(a_0,a_n+1)= ∑_k=0^n nk = 2^n.
For any positive integers a_0,a_n+1 and ξ, we have
H(a_0+ξ,a_n+1+ξ) = ξ + H(a_0,a_n+1)
The proof follows the construction method for the Hopscotch series, over a given sorted sequence { a_i}_i=0^n+1.
Here we show the example of a_0=1, a_n+1=4 and ξ=4.
[ H(1,4) H(5,8); ⟨ 1,2,3,4⟩ ⟨ 5,6,7,8⟩=4+⟨ 1,2,3,4⟩; ⟨ 1,3,4⟩ ⟨ 5,7,8⟩=4+ ⟨ 1,3,4⟩; ⟨ 1,2,4⟩ ⟨ 5,6,8⟩=4+⟨ 1,2,4⟩; ⟨ 1,4⟩ ⟨ 5,8⟩=4+⟨ 1,4⟩. ]
Following the results of Corollary <ref>, one remarks that the entry indices along each superdiagonal (illustrated below) share the same Hopscotch sequences up to a constant shift.
[Illustration: a 5×5 unit upper triangular matrix in which the superdiagonal bands are highlighted; the entries m_{1,2}, …, m_{4,5} of a given band (and similarly for the shorter bands m_{1,3}, …, m_{3,5}, and so on) are associated with the same Hopscotch sequences up to a shift.]
In the sequel, in order to design a combinatorial-based algorithm for the inversion of a (unitary) triangular matrix, we shall associate to a given matrix T the following tensor (of sequences):
[ H(1,1)  H(1,2)  H(1,3)  ⋯  H(1,n)
          H(2,2)  H(2,3)  ⋯  H(2,n)
                    ⋱          ⋮
                              H(n,n) ],
Next, we define T^{-1} as S, where each element S_{i,j} of the inverse matrix is associated with its corresponding Hopscotch series H(i,j). This association allows the evaluation of the inverse matrix to be completely independent for each element, enabling natural parallelization of the computation. Furthermore, based on the observation in Eq. (<ref>), the time complexity of the inversion reduces further: essentially, we only need the Hopscotch series of the first row. Unfortunately, the complexity of Eq. (<ref>) remains of order 2^n. Nonetheless, we will see in the following sections that the patterns provided by Eq. (<ref>) dramatically reduce this exponential complexity.
Moving forward, we shall explain now how to use these combinatorial calculations, namely the Hopscotch series associated with the given triangular matrix, to evaluate its inverse directly.
We will employ flexible notations to encompass the various combinatorial possibilities. Indeed,
∏_{ℓ=1}^{ℓ_⋆^{k_{i,j}} - 1} T_{α_ℓ^{k_{i,j}}, α_{ℓ+1}^{k_{i,j}}}
reads, for example, when (i,j) = (1,4) and k_{1,4} indexes the full sequence, as follows:
∏_{ℓ=1}^{ℓ_⋆^{k_{1,4}} - 1} T_{α_ℓ^{k_{1,4}}, α_{ℓ+1}^{k_{1,4}}} = T_{1,2} · T_{2,3} · T_{3,4},  with ℓ_⋆ = 4.
Here the subscript α_ℓ^{k_{1,4}} represents the value of the ℓ-th element in the Hopscotch sequence indexed by k_{1,4}; see Example <ref>.
The matrix S, inverse of the matrix T defined in (<ref>), reads
S_{i,j} =
  ∑_{k_{i,j}=1}^{2^{j-i-1}} (-1)^{ℓ_⋆^{k_{i,j}} - 1} ∏_{ℓ=1}^{ℓ_⋆^{k_{i,j}} - 1} T_{α_ℓ^{k_{i,j}}, α_{ℓ+1}^{k_{i,j}}},   if j > i,
  1,   if i = j,
  0,   otherwise,
where ℓ_⋆^{k_{i,j}} stands for the length of the k_{i,j}-th sequence of H(i,j), while α_ℓ^{k_{i,j}} stands for the ℓ-th element of that sequence.
The proof is conducted through induction arguments.
For the cases n = 1, 2 the formula is trivially validated. Let us start by considering n = 3, in which case the matrix T becomes 3 × 3; we then have
T^-1=[ 1 T_1,2 T_1,3; 0 1 T_2,3; 0 0 1 ]^-1 = [ 1 S_1,2 S_1,3; 0 1 S_2,3; 0 0 1 ].
Using Eq. (<ref>) we can write
S_{i,i} = 1 for i ∈ {1,2,3},
S_{1,2} = (-1)^1 · T_{1,2},
S_{2,3} = (-1)^1 · T_{2,3},
S_{1,3} = (-1)^1 · T_{1,3} + (-1)^2 · T_{1,2} T_{2,3}.
It is then trivial that S stands for the inverse of T, hence the formula is validated for n=1,2,3. Let us next assume that the formula is true for an arbitrary rank n, to prove that the formula holds for the rank n+1. Consider the following matrix
on which we use the block structure: for T ∈ ℝ^{n×n} as defined in Eq. (<ref>) and v ∈ ℝ^{n×1}, we write
[ T  v
  0  1 ],
which has a block matrix inverse that writes
[ S -S· v; 0 1 ]
where we have assumed that S is the matrix inverse of T of rank n. Here it turns out that we only have to prove the formula for the entry S_{1,n+1}, as per the property of Corollary <ref> (see also matrix (<ref>)).
Actually, from the matrix of rank n+1 we can eliminate the first row and the first column to fall back into the assumption of the rank n with the exact same type of matrix R̃ (new matrix) as of Eq.(<ref>).
Let us now focus on the top-right corner entry of the matrix S, i.e. S_{1,n+1}. Using the notation in Eq. (<ref>), S_{1,n+1} is the first component of the vector resulting from the matrix-by-vector product -S·v, which writes explicitly as
S_{1,n+1} = -T_{1,n+1} - ∑_{p=2}^{n} S_{1,p} T_{p,n+1}
          = -T_{1,n+1} - ∑_{p=2}^{n} ( ∑_{k_{1,p}=1}^{2^{p-2}} (-1)^{ℓ_⋆^{k_{1,p}} - 1} ∏_{ℓ=1}^{ℓ_⋆^{k_{1,p}} - 1} T_{α_ℓ^{k_{1,p}}, α_{ℓ+1}^{k_{1,p}}} ) T_{p,n+1}
          = (-1)^1 · T_{1,n+1} + ∑_{p=2}^{n} ( ∑_{k_{1,p}=1}^{2^{p-2}} (-1)^{ℓ_⋆^{k_{1,p}}} ∏_{ℓ=1}^{ℓ_⋆^{k_{1,p}} - 1} T_{α_ℓ^{k_{1,p}}, α_{ℓ+1}^{k_{1,p}}} ) T_{p,n+1}.
Now we just focus on the finite series
[ ∑_p=2^n( ∑_k_1 p=1^2^(p-2) (-1)^ℓ_⋆^k_1 p∏_ℓ=1^ℓ_⋆^k_1 p-1 T_α^k_1 p_ℓ,α^k_1 p_ℓ+1 T_p,n+1); = ∑_k_1,2=1^2^0 (-1)^ℓ_⋆^k_1,2∏_ℓ=1^ℓ_⋆^k_1,2-1 T_α^k_1,2_ℓ,α^k_1,2_ℓ+1 T_2,n+1; +∑_k_1,3=1^2^1 (-1)^ℓ_⋆^k_1,3∏_ℓ=1^ℓ_⋆^k_1,3-1 T_α^k_1,3_ℓ,α^k_1,3_ℓ+1 T_3,n+1; ⋮; +∑_k_1 n=1^2^(n-2) (-1)^ℓ_⋆^k_1 n∏_ℓ=1^ℓ_⋆^k_1 n-1 T_α^k_1 n_ℓ,α^k_1 n_ℓ+1 T_n,n+1, ]
which sums up as the geometric series 2^0 + 2^1 + ⋯ + 2^{n-2} = 2^{n-1} - 1; therefore, the finite series simplifies to
[ ∑_p=2^n( ∑_k_1 p=1^2^(p-2) (-1)^ℓ_⋆^k_1 p∏_ℓ=1^ℓ_⋆^k_1 p-1 T_α^k_1 p_ℓ,α^k_1 p_ℓ+1 T_p,n+1) = ∑_k_1 p=1^2^(n-1)-1 (-1)^ℓ_⋆^k_1 p∏_ℓ=1^ℓ_⋆^k_1 p-1 T_α^k_1 p_ℓ,α^k_1 p_ℓ+1 T_p,n+1; = ∑_k^'_1,p=1^2^(n-1)-1 (-1)^ℓ_⋆^k^'_1,p-1∏_ℓ=1^ℓ_⋆^k^'_1,p-1 T_α^k^'_1,p_ℓ,α^k^'_1,p_ℓ+1 ]
Note in the latter the use of k'_{1,p} instead of k_{1,p}; this is because the length of the Hopscotch sequence has increased by one. Indeed, the product symbol computes T_{1,·} ⋯ T_{·,p}, and the multiplication by T_{p,n+1} increases the sequence length by one. This means ℓ_⋆^{k_{1,p}} = ℓ_⋆^{k'_{1,p}} - 1, which explains why the power of minus one becomes (-1)^{ℓ_⋆^{k'_{1,p}} - 1}.
It now becomes clear that
S_1,n+1 = (-1)^1 · T_1,n+1 + ∑_k^'_1,p=1^2^(n-1)-1 (-1)^ℓ_⋆^k^'_1,p-1∏_ℓ=1^ℓ_⋆^k^'_1,p-1 T_α^k^'_1,p_ℓ,α^k^'_1,p_ℓ+1
which simplifies further to
S_1,n+1 = ∑_k^'_1,p=1^2^(n-1) (-1)^ℓ_⋆^k^'_1,p-1∏_ℓ=1^ℓ_⋆^k^'_1,p-1 T_α^k^'_1,p_ℓ,α^k^'_1,p_ℓ+1
[Direct application of the main Theorem <ref>]
Given the matrix
T=
[ 1 2 4 1; 1 3 2; 1 5; 1 ]
Following Theorem <ref>, the matrix inverse T^{-1} reads
T^-1=
[ 1 S_1,2 S_1,3 S_1,4; 1 S_2,3 S_2,4; 1 S_3,4; 1; ]
where
S_{1,2} = -2 = -1×2,
S_{1,3} =  2 = -1×4 + 2×3,
S_{1,4} = -7 = -1×1 + 2×2 + 4×5 - 1×2×3×5,
S_{2,3} = -3 = -1×3,
S_{2,4} = 13 = -1×2 + 3×5,
S_{3,4} = -5 = -1×5.
hence
T^-1=
[ 1 -2 2 -7; 1 -3 13; 1 -5; 1 ]
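As a sanity check of Theorem <ref>, the formula can be evaluated directly by enumerating the Hopscotch sequences. The sketch below is an illustration only, written in Python (the codes published with this paper are MATLAB, and the helper names are ours); indices are 1-based to match the text. It reproduces the worked example above and verifies T S = I.

import numpy as np
from itertools import combinations

def hopscotch(i, j):
    """Yield the sequences of H(i, j): start at i, end at j, keep any interior subset."""
    interior = list(range(i + 1, j))
    for size in range(len(interior) + 1):
        for kept in combinations(interior, size):
            yield (i,) + kept + (j,)

def combinatorial_inverse(T):
    """Apply the combinatorial formula entry by entry (unit upper triangular T)."""
    n = T.shape[0]
    S = np.eye(n)
    for i in range(1, n + 1):                      # 1-based indices, as in the text
        for j in range(i + 1, n + 1):
            total = 0.0
            for seq in hopscotch(i, j):
                prod = 1.0
                for a, b in zip(seq[:-1], seq[1:]):
                    prod *= T[a - 1, b - 1]
                total += (-1) ** (len(seq) - 1) * prod   # sign (-1)^(l_star - 1)
            S[i - 1, j - 1] = total
    return S

T = np.array([[1., 2., 4., 1.],
              [0., 1., 3., 2.],
              [0., 0., 1., 5.],
              [0., 0., 0., 1.]])
S = combinatorial_inverse(T)
print(S[0, 3])                                     # -7, as in the worked example
print(np.allclose(T @ S, np.eye(4)))               # True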
Unit triangular matrices are closed under inversion. Furthermore, unit triangular matrices T with integer off-diagonal entries are also closed under inversion.
The process only involves sums and products of integers; therefore, every entry of the matrix inverse is an integer.
Obviously, and at first glance, the formula of Theorem <ref> does not benefit any programming implementation, because of the major drawback of an exponential time complexity, i.e. 2^n.
Nevertheless, the proof of Theorem <ref> shows considerable promise for reducing the time complexity. The recurrent patterns revealed in the proof suggest a block version that outperforms the element-wise approach. Indeed, by using the combinatorics up to a moderate order n, one can proceed with the inversion recursively, and in this way get rid of the exponential time complexity. The details of the optimized algorithm are given in the next section.
Notations
We shall consider the following setting
16· 2^k≤ n ≤ m2^k
where the natural numbers k and m are given as such <cit.>
k =[log_2(n)-4]
m =[n2^-k] + 1.
In our analysis of time complexity, the notation we will employ is as follows:
§ FAST RECURSIVE TRIANGULAR INVERSION USING STRASSEN'S METHOD
We present in this section a fast method based on the combinatorial approach. The method relies on having the combinatorial combinations of indices available up to a given rank; the inversion of a given triangular matrix (of rank mβ^k) is then made possible by recursion.
Additionally, we treat the block-recursive approach to invert any given triangular nonsingular matrix.
For simplicity, and to avoid notational abuse, we shall consider unit triangular matrices. The general case can easily be treated through the trivial decomposition R = D T, where D is a diagonal matrix with entries D_{i,i} = R_{i,i}.
[Illustration of the decomposition R = D T: a general non-singular triangular matrix (left) written as the product of a factor carrying its diagonal entries and a unit triangular matrix with ones on the diagonal (right).]
§.§ Combinatorics Block-Recursive Inverse Triangular Matrix
There is no doubt that the combinatorics involve exponential time complexity to identify the patterns used in Theorem <ref>. It is worth noting, however, that the process generated by the combinatorics does not depend on the matrix entries: it depends only on the indices, and tells us which indices are involved in each calculation. Therefore, this process can be done in an offline fashion, as it is the same regardless of the triangular matrix to be handled. For this reason, we assume that we possess a combinatorial card ready before the inversion. For generality, let us assume that such a combinatorial card is given up to an index β. This means that, using such a card: i) we are able to invert any triangular matrix (modulo transpose) of size mβ × mβ, and ii) if we have to deal with a larger matrix, we can find the first β-off-diagonal band of the inverse matrix S using the translation property described in Corollary (<ref>).
T = [Illustration: a 9×9 unit upper triangular matrix partitioned into 3×3 blocks (m = 3, β = 3); the diagonal blocks are themselves unit upper triangular, while the blocks above the diagonal are full.]
Eq. (<ref>) illustrates a schematic representation of a sizable matrix of size mβ × mβ with m = 3 and β = 3. With this view, instead of inverting the whole 9×9 matrix, we shall use the combinatorial card with β = 3.
§.§.§ COMBRIT: A Combinatorics Block-recursive Inversion algorithm
Algorithm <ref> utilizes the combinatorial card to invert a given triangular matrix T using block-wise matrix operations. The workflow of the recursion is very simple: it reduces the matrix size (assumed n = mβ^k) recursively until it reaches the base size m. Whenever it meets an inversion instruction, the algorithm calls itself again on the appropriate sub-blocks.
As input, and regardless of the exponent k, the algorithm will consider each time that n = m'β := (mβ^{k-1})β.
The input matrix T is assumed to be a square matrix with a block structure, where each block is of size m by m. The number of blocks in each row and column of T is specified by the parameter β, which stands for the size of the combinatorial card.
The method first initializes several tensors (multidimensional arrays) of size m' × m' × β × β to store intermediate results. These arrays include 𝕋, 𝔹, i𝔻, i𝕋, and T^{-1}. The algorithm then loops over each pair of block indices of T and extracts the corresponding blocks into the 𝕋 array. Next, the algorithm performs a block-wise matrix inversion of the diagonal blocks of T, by recalling the algorithm itself, and stores the resulting inverse matrices in the i𝔻 array. The algorithm then uses the inverse diagonal blocks to compute the off-diagonal blocks of 𝔹.
Finally, the algorithm computes the inverse of the 𝔹 array through the use of the combinatorial card for block matrices following theorem <ref> (in its block version). At this stage, the algorithm may benefit from fast matrix multiplication methods. The resulting inverse of 𝔹 is then used to compute the inverse of 𝕋 block-wise and store the result in the i𝕋 array.
The tensor i𝕋 is then unfolded to reconstruct the inverse as a two-dimensional array (i.e., the regular matrix format).
The COMBRIT algorithm offers a versatile approach by providing flexibility in choosing the inversion method for the triangular diagonal blocks. A notable feature of this algorithm is the incorporation of a recursive iteration, achieved by recalling the function itself. This recursive iteration enables the reduction of the size of the triangular matrix that needs to be inverted.
By employing this recursive strategy, the COMBRIT algorithm efficiently handles the inversion of triangular matrices by iteratively solving smaller subproblems. At each iteration, the size of the triangular matrix decreases, leading to a step-by-step computation of the inverse. This recursive approach allows for a systematic and structured inversion process, facilitating the efficient handling of larger triangular matrices.
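To convey only the recursive skeleton of this strategy, the sketch below halves the matrix at each call and applies the standard block identity for triangular inverses. It is a simplified, illustrative stand-in written in Python: it omits the combinatorial card, the tensor bookkeeping, and the β-way blocking of the actual COMBRIT algorithm, and its base case simply calls a generic inverse.

import numpy as np

def block_recursive_inverse(T, base=4):
    """Invert a non-singular upper triangular T by 2x2 block recursion."""
    n = T.shape[0]
    if n <= base:
        return np.linalg.inv(T)          # stand-in for the combinatorial-card base case
    h = n // 2
    A, B, C = T[:h, :h], T[:h, h:], T[h:, h:]
    iA = block_recursive_inverse(A, base)
    iC = block_recursive_inverse(C, base)
    S = np.zeros_like(T, dtype=float)
    S[:h, :h] = iA
    S[h:, h:] = iC
    S[:h, h:] = -iA @ B @ iC             # block identity for triangular inverses
    return S

T = np.triu(np.random.rand(16, 16)) + np.eye(16)
print(np.allclose(T @ block_recursive_inverse(T), np.eye(16)))   # True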
§.§.§ Asymptotic time complexity analysis
It is clear that, in the matrix inversion using our combinatorial-based approach, the top-right corner entry of the matrix is the most demanding element in terms of time complexity.
Let us consider a unit triangular matrix T of size n × n ≡ mβ^k × mβ^k. We assume that we hold the combinatorial card that allows inverting triangular matrices up to order β; this card provides the Hopscotch series.
For the small block of size m × m, the evaluation of the element S_{1,j}, j ≥ 3, of the matrix inverse requires the Hopscotch series H(1,j), which counts
𝒮(j) := ∑_{ℓ=0}^{j-2} \binom{j-2}{ℓ} = 2^{j-2}
sequences. The evaluation of S_1,j sums up over the sequences of the Hopscotch series, where each sequence implies (j-2) multiplications. The total multiplication operations count then
ℳ(j) := ∑_{ℓ=0}^{j-2} \binom{j-2}{ℓ} (j-ℓ) = j · 2^{j-3}
With regards to Corollary <ref> results (i.e. diagonal bands elements have the same complexity), the total complexity for inverting a unitary matrix counts
^COMBRIT^⋆(m) := ∑_j=2^m (m-j+1)(ℳ(j) + 𝒮(j) )
= ∑_j=2^m (m-j+1) (j+2) 2^j-3
= m/2(2^m-2)
An additional 𝒟(m) := m(m-1)/2 divisions are required to transform a general triangular matrix into a unit one. Therefore, the total complexity for any triangular matrix reads
^COMBRIT(m) = m/2(2^m+m-1)
At first glance, the above time complexity is worse than the well-known m^3. However, it is worth noting that, using the block structure based on the offline card, we can invert any matrix of size mβ^k, m ≥ 2, β ≥ 2, k > 1. Furthermore, the combinatorial-based method is highly parallelizable, since the computation of every single element of the matrix inverse S is totally independent of the other entries of S.
In a parallel setting, it should be noted that when dealing with a general triangular matrix and utilizing β
processors, we have the capability to invert block diagonal triangular matrices of size m, either using the card itself or employing any desired inversion technique. Additionally, in a parallel manner, performing a row-wise block multiplication of the primary matrix T with its diagonal inverse yields the following structure.
[ I  B_{1,2}  …  B_{1,β}
     I        …  ⋮
              I  B_{β-1,β}
                 I ],  denoted T^⋆.
Next, we will examine the time complexity analysis for the inversion of a general triangular matrix T. This inversion process consists of two main steps: first, inverting the diagonal blocks and multiplying them (from the right) by their respective rows in T to achieve a unitary matrix structure <ref>; second, utilizing combinatorial techniques to invert β blocks. This inversion process is recursively repeated for each matrix until all matrices have been inverted.
^COMBRIT(mβ^k+1)
:= β^COMBRIT(mβ^k) + β/2(β-1) (mβ^k) + ∑_j=2^β (β-j+1)(ℳ(j) (mβ^k) + 𝒮(j) m^2β^2k)
= β^COMBRIT(mβ^k) + β/2(β-1) (mβ^k) + ∑_j=2^β (β-j+1)(j· 2^j-3(mβ^k) + 2^j-2 m^2β^2k)
= β^COMBRIT(mβ^k) + ( 2^β - β-1) m^2β^2k + 1/2((2^β-1)(β-2)+β^2) (mβ^k)
= β^k+1^COMBRIT(m) + ( 2^β - β-1) m^2∑_ℓ=0^kβ^2k-ℓ
+1/2((2^β-1)(β-2)+β^2)∑_ℓ=0^kβ^ℓ(mβ^k-ℓ)
= β^k+1^COMBRIT(m) + m^2( 2^β- β -1)(β ^k (β ^k+1-1))/β -1
+1/2((2^β-1)(β-2)+β^2)∑_ℓ=0^kβ^ℓ(mβ^k-ℓ).
It is worth noting that the dominant effect, particularly in higher-order terms, is governed by the convex function (1/2)((2^β-1)(β-2)+β^2). This function exhibits an increasing trend as β increases, and takes its smallest values at β = 1 and β = 2. Choosing β = 1 is not interesting since it results in n always being equal to m, for which we do not benefit from the recursion. On the other hand, selecting β = 2 leads to n = m2^k; this choice minimizes complexity by avoiding matrix multiplications. Nonetheless, in the sequel, we shall allow β to reach moderate values such as β = 2^1, 2^2, 2^3 and 2^4 in order to exercise the combinatorial calculation and potentially exploit its parallel nature.
However, for larger values of β>2 (with n=β^k), matrix-matrix multiplications come into play. In such cases, one can rely on the sub-cubic complexity of fast Strassen's method. Incorporating the parallel nature of the earlier combinatorial-based approach alongside this reduction in complexity can potentially result in rapid convergence towards the inverse.
Although the design of parallel algorithms for combinatorial-based methods can be complex and beyond the scope of the current study, their effectiveness becomes evident in a parallel computing environment. In this study, our emphasis will be on traditional recursive approaches, through which we aim to design straightforward yet efficient matrix inversion algorithms.
Moreover, in many applications, matrices are typically sparse or even banded. In such cases, the COMBRIT method could offer reduced complexity compared to standard methods. Specifically, the number of non-zero elements will further diminish the complexity. However, a comprehensive exploration of these topics, especially those involving sparsity, falls beyond the scope of the present work.
§.§ Column-Recursive Inversion Triangular Matrix Algorithm
§.§.§ CRIT algorithm
The algorithm we employ in this case is based on classical linear algebra principles for block matrices. It is worth mentioning that this approach does not introduce any significant novelty to our work; it is rather used as a reference for numerical comparison purposes. However, as demonstrated in the proof of our main theorem, there is a strong relationship between the two approaches, namely classical linear algebra and combinatorics. This connection opens up a new avenue for exploring the potential combination of these approaches, particularly considering the high level of parallelization achievable with the combinatorial method.
[Illustration: a 5×5 unit upper triangular matrix; the entry S_{1,5} of the inverse is obtained from the already computed entries S_{1,2}, …, S_{1,4} of the same row (blue), the column entries T_{2,5}, …, T_{4,5} of the original matrix (red), and the entry T_{1,5} itself (brown).]
Put simply, the efficient inversion of a unit triangular matrix (specifically, the optimized version) can be accomplished by a straightforward computation. Each element S_{i,j} of the resulting inverse matrix is obtained by taking the scalar product of the row (blue encircled line) from the newly computed block of the inverse with the column vector (red encircled column) from the original matrix being inverted, and then subtracting the corresponding element T_{i,j}. These steps are illustrated in the following pseudo-algorithm (Algorithm <ref>).
The CRIT^⋆ algorithm computes the inverse of an n × n unit lower triangular matrix T. The algorithm produces an n × n matrix S such that TS = I, where I is the identity matrix. The time complexity of the CRIT algorithm is clearly O(n^3), which we analyze thoroughly in the following section.
Roughly speaking, the outer loop of the algorithm runs n-1 times, and for each iteration of the outer loop, the inner loop runs n-i times. Within each iteration of the inner loop, scalar multiplication and a dot product are performed. The former operation takes constant time while the latter takes O(n). Hence, the time complexity of each iteration of the inner loop is O(n). Therefore, the time complexity of the inner loop is O(n^2), and the time complexity of the outer loop is O(n^3). The final assignment statement outside the loops takes constant time, so the overall time complexity of the algorithm is O(n^3).
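A minimal sketch of this column-recursive idea is given below, written in Python for illustration (the published CRIT/CRIT^⋆ codes are MATLAB, and the loop organization here may differ in detail from Algorithm <ref>). The upper triangular variant is shown; the lower triangular case follows by transposition.

import numpy as np

def crit_star(T):
    """Inverse of a unit upper triangular T, built column by column."""
    n = T.shape[0]
    S = np.eye(n)
    for j in range(1, n):                  # columns of S, left to right (0-based)
        for i in range(j - 1, -1, -1):
            # S[i, j] = -T[i, j] - sum_{k=i+1}^{j-1} S[i, k] * T[k, j]
            S[i, j] = -T[i, j] - S[i, i + 1:j] @ T[i + 1:j, j]
    return S

T = np.triu(np.random.rand(6, 6), k=1) + np.eye(6)
print(np.allclose(T @ crit_star(T), np.eye(6)))    # True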
§.§.§ CRIT asymptotic time complexity analysis
The CRIT algorithm is an extension of the CRIT^⋆ algorithm designed to handle general non-singular triangular matrices. The key distinction lies in the preliminary step where we transform a non-singular triangular matrix into a unit triangular matrix by dividing each element by its corresponding diagonal element; it is to the resulting unit triangular matrix that the CRIT^⋆ step is applied.
The time complexity of the inversion of a unitary triangular matrix, of order β≥ 2, demands
φ_[×](β) := ∑_{i=1}^{β-1} ∑_{j=i+2}^{β} (j-i-1) = (β^3 - 3β^2 + 2β)/6 = β(β-1)(β-2)/6
multiplications, and
φ_[±](β) := ∑_{i=1}^{β-1} ∑_{j=i+2}^{β} (j-i-2) = (β^3 - 6β^2 + 11β - 6)/6 = (β-1)(β-2)(β-3)/6,
additions and subtractions. Hence, the inversion of the triangular matrix of order m requires
^CRIT(m) = m(m+1)/2 + φ_[×](m) + φ_[±](m) = (m^3 - 3m^2 + 8m - 3)/3
flops operations (multiplications and addition combined).
Besides, the product of two matrices of order m (i.e., an upper triangular matrix times a full matrix), exploiting the sparsity of the triangular factor, requires
(m) = m ∑_{i=1}^{m} (2i-1) = m^3
flop operations (multiplications and additions combined).
On the other hand, the time complexity (mβ^k) for block matrices of order mβ^k enjoys the following recursion formula, where the order β is assumed to be a power of two, i.e. β = 2^α:
(mβ^k)
= 4(mβ^k/2) + 2 (mβ^k/2) + m^2β^2k/2,
= 4^α k(m) + 2∑_r=0^α k-1 4^r(m2^α k-r-1)+∑_r=0^α k-14^r 2(m2^α k/2^r+1)^2,
= (2 m^3+α k m^2/2)4^α k + 2∑_r=0^α k-1 4^r((5+2m)m^27^α k-r-1-6(m2^α k-r-1)^2),
= (2 m^3+α k m^2/2)4^α k + 2m^2 (5 + 2 m)/3 (7^k α -4^k α ) - 3· 4^k α k m^2 α
= 2m^2 (5 + 2 m)/3 7^α k
+( 2 m^3+α k m^2/2
-2m^2 (5 + 2 m)/3 - 3 k m^2 α) 4^k α
= 2m^2 (5 + 2 m)/3 7^α k
-( m^2(2m+15kα+20)/6) 4^α k
= (2m^2 (5 + 2 m)/3 - m^2(2m+15kα+20)/6(4/7)^α k) 7^α k
Now, we can consolidate these formulas Eqs.(<ref>)-(<ref>) to calculate the overall complexity of inverting a non-singular triangular matrix, which can be summarized as follows.
^CRIT(mβ^k+1)
= β^CRIT(mβ^k) + β(β-1) (mβ^k) + φ_[×](β)(mβ^k)+φ_[±](β) (mβ^k)^2
= β^k+1^CRIT(m) + β(β-1)∑_j=0^kβ^j(mβ^k-j)
+ φ_[×](β) ∑_j=0^kβ^j(mβ^k-j)+φ_[±](β)m^2∑_j=0^kβ^2k-j
= 2^α(k+1)m^3-3m^2+8m-3/3 + φ_[±](2^α)m^22^α k(2^α(k+1)-1 )/2^α-1
+ φ_[×](2^α) ∑_j=0^k 2^α j( m^2(5+2m)7^α k-j-6(m2^α k-j)^2)
+ 2^α(2^α-1)∑_j=0^k( 2m^2 (5 + 2 m)/3 7^α k-j
-( m^2(2m+15(α k-j)+20)/6) 4^α k-j)
It now becomes clear that the choice β = 2^1, a common root of the polynomials φ_[×](β) and φ_[±](β), optimally reduces the complexity of the inversion of the triangular matrix to
^CRIT(m 2^k+1) = 4m^2 (5 + 2 m)/3∑_j=0^k 7^k-j + 2^k+1m^3-3m^2+8m-3/3
- ∑_j=0^km^2(2m+15(k-j)+20)/6 4^ k-j
= 4m^2 (5 + 2 m)/37^k+1-1/6 - m^2/9 (15· 2^2k+1(k+1) + m(4^k+1-1))
+ 2^k+1m^3-3m^2+8m-3/3
= 4m^2 (5 + 2 m)/18 7^k+1 - 2m^3 + (k+1)15m^2/18 4^k+1
+m^3-3m^2+8m-3/3 2^k+1-m^3+2m^2 (5 + 2 m)/9
^CRIT(m 2^k) = (4m^2(5 + 2m)/18)·7^k - ((2m^3 + 15k m^2)/18)·4^k + ((m^3 - 3m^2 + 8m - 3)/3)·2^k - (m^3 + 2m^2(5 + 2m))/9.
Using the fact that m=[n 2^-k]+1 we have
4m^2 (5 + 2 m)/18 7^k ≤ 4 (n/2^k+1)^2 (5 + 2 (n/2^k+1))/18 7^k
≤ ( 2^-3k+2n^3/9+11· 2^1-2kn^2/9+2^-k+5n/9+14/9) 7^k
≤ 4/9(7/8)^k n^3
+22/9(7/4)^kn^2
+32/9(7/2)^kn +14/9 7^k
≤ ( 4/9(8/7)^log_2(n)-k +22/9(4/7)^log_2(n)-k +32/9(2/7)^log_2(n)-k
+0.00065
) n^log_2(7)
≤ 1.023 n^log_2(7)
and
-2m^3 + k 15m^2/18 4^k ≤ 1/9n^3/8^k 4^k =-1/9n/2^k n^2 = -32/9 n^2
we have also,
m^3-3m^2+8m-3/3 2^k ≤ 1/3( ( n/2^k+1)^3-3(n/2^k+1)^2+8(n/2^k+1)-3) 2^k
= (1+5n/3· 2^k+n^3/3· 8^k) 2^k
= 2^k+5/3 n +1/3n^3/8^k2^k
= (2^k-log_2(n) + 1/3 8^log_2(n)-k 2^k-log_2(n)) n^2 + 5/3 n
≤ (1/2^4 + 8^5/31/2^4) n^2 + 5/3 n
≤ 682.73 n^2 + 5/3 n
and,
-m^3+2m^2 (5 + 2 m)/9 = - 1/9( 5m^3 +10 m^2)
≤ - 5/9n^3/8^k -10/9n^2/4^k
≤ - 5/9 8^log_2(n)-k -10/9 2^log_2(n)-k
≤ - 5/9 8^4 -10/9 2^4=-6880/3
Hence, summing up Eqs. (<ref>)–(<ref>), we obtain the asymptotic complexity of the CRIT algorithm; Eq. (<ref>) becomes
^CRIT(m 2^k) ≤ 1.023 n^log_2(7) + 679.18 n^2 -6880/3
§ NON-SINGULAR SQUARE MATRIX INVERSE
We present in this section two approaches for calculating the inverse of a non-singular matrix. The first approach is based on classical factorization methods such as the LU and the QR decomposition. These classic decomposition methods are augmented to incorporate the calculation of the inverse decomposition online; we modify these classic algorithms so that the direct decomposition and the inverse decomposition share the same time frame. The second approach is a completely new method, although it is simple to understand and implement: with the help of the method developed in this article (based on the combinatorial calculations), we split a given non-singular matrix into the sum of an upper and a lower triangular matrix, and proceed with the inverse calculation iteratively, where each iteration calls the fast triangular inversion.
§.§ Inverse matrix by augmenting classic decomposition
We present in the sequel a modified version of both QR and LU (Crout) algorithms to compute the inverse factorization directly and not after already having the forward decomposition. In fact, we accordingly incorporate triangular inversion algorithms such as (CRIT or COMBRIT) in these well-known matrix decomposition algorithms to produce the inverse factorization.
§.§.§ SQR: Inverse QR factorization
A Modified Gram-Schmidt algorithm for the forward QR factorization is augmented to incorporate the inverse evaluation of the upper triangular matrix R.
As described in the SQR algorithm below, the evaluation of the matrix S does not depend on the complete evaluation of the upper triangular matrix R. Instead, as each column of R is computed, the corresponding column of S is simultaneously computed.
To decompose a matrix into an orthogonal matrix and an upper triangular matrix, we can use the QR decomposition. The classical QR decomposition can be achieved using the modified Gram-Schmidt algorithm. However, this algorithm is known to have numerical stability issues. It is therefore advisable to use more stable algorithms, such as Householder QR or Givens QR for more dedicated applications.
The SQR algorithm introduces an iterative approach for computing the inverse matrix S (hence the inverse decomposition of the initial matrix), avoiding the need to wait for the complete evaluation of the matrix R. By computing S column by column in parallel with the evaluation of R, the algorithm provides immediate results at each step. Additionally, the SQR algorithm incorporates the triangular inversion method, allowing the inverse of R, denoted as S, to be computed as the decomposition progresses. As each new column of R is computed, its corresponding column in the inverse matrix S is simultaneously calculated. This integration of the triangular inversion method within the SQR algorithm enables a dynamic and progressive computation of the inverse, enhancing the algorithm's efficiency and making it suitable for handling large matrices and various computational applications.
It is important to note that the SQR algorithm <ref> assumes that the matrix has full column rank. If a column is zero, the algorithm will stop, and the matrix cannot be decomposed.
Algorithm SQR <ref> then yields the decompositions
Q R = A
S Q^t = A^-1
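A hedged sketch of this idea is given below in Python (the published SQR implementation is MATLAB and may differ in detail): modified Gram–Schmidt produces the columns of Q and R, and each new column of S = R^{-1} is computed as soon as the corresponding column of R is available, so that A^{-1} = S Q^t requires no extra pass. Full column rank is assumed and no re-orthogonalization is performed.

import numpy as np

def sqr(A):
    """Simultaneous QR factorization and inverse factor: A = Q R, S = R^{-1}."""
    n = A.shape[0]
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    S = np.zeros((n, n))
    for j in range(n):
        for i in range(j):                    # modified Gram-Schmidt sweep
            R[i, j] = Q[:, i] @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        # column j of S = R^{-1}, using only the columns of R and S already known
        S[j, j] = 1.0 / R[j, j]
        S[:j, j] = -S[:j, :j] @ R[:j, j] * S[j, j]
    return Q, R, S

A = np.random.rand(5, 5) + 5 * np.eye(5)       # well-conditioned test matrix
Q, R, S = sqr(A)
print(np.allclose(Q @ R, A), np.allclose(S @ Q.T, np.linalg.inv(A)))   # True True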
§.§.§ SKUL: Inverse LU factorization
We introduce an augmented algorithm based on the Crout method. This algorithm incorporates the triangular inversion technique, allowing for the simultaneous computation of the inverse decomposition alongside the direct LU decomposition. Similar to the previous approach, we utilize the CRIT or COMBRIT algorithm twice within this algorithm, as we construct two triangular matrices.
The augmented LU algorithm, outlined below, combines the advantages of the LU decomposition and the triangular inversion technique:
In the SKUL algorithm, the calculation of the inverse matrices S and K is done in a synchronized manner with the construction of the matrices L and U during the LU decomposition. As each column of L is computed, the corresponding row of K is immediately calculated using CRIT. Likewise, as each column of U is computed, the corresponding row of S is simultaneously evaluated using CRIT^⋆. This simultaneous computation of the inverse matrices ensures that the inverse decomposition is formed progressively and dynamically throughout the LU decomposition process. By synchronizing the calculation of rows in K with the computation of columns in L, and the calculation of rows in S with the computation of columns in U, the SKUL algorithm achieves efficient and real-time computation of the inverse decomposition
Algorithm SKUL <ref> then yields the decompositions
L U = A
S K = A^-1
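The following is an illustrative Python sketch of the same principle (the paper's SKUL builds on the Crout scheme; a Doolittle-style elimination without pivoting is used here for brevity, and the row/column orientation of the updates may differ from Algorithm <ref>): at step k, row k of U and column k of L are produced, and the corresponding column of S = U^{-1} and row of K = L^{-1} are formed immediately, so that A^{-1} = S K becomes available together with A = L U.

import numpy as np

def skul(A):
    """Doolittle LU without pivoting, with S = U^{-1} and K = L^{-1} built on the fly."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    K, S = np.eye(n), np.zeros((n, n))
    for k in range(n):
        # factorization step: row k of U, then column k of L
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
        # column k of S = U^{-1}: solve U[:k+1, :k+1] S[:k+1, k] = e_k
        S[k, k] = 1.0 / U[k, k]
        for i in range(k - 1, -1, -1):
            S[i, k] = -(U[i, i + 1:k + 1] @ S[i + 1:k + 1, k]) / U[i, i]
        # row k of K = L^{-1}: solve K[k, :k+1] L[:k+1, :k+1] = e_k
        for j in range(k - 1, -1, -1):
            K[k, j] = -(K[k, j + 1:k + 1] @ L[j + 1:k + 1, j])
    return L, U, K, S

A = np.random.rand(5, 5) + 5 * np.eye(5)
L, U, K, S = skul(A)
print(np.allclose(L @ U, A), np.allclose(S @ K, np.linalg.inv(A)))   # True True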
§.§ Recursive Split and Inverse Method for Non-singular Matrices
§.§.§ Element-wise RSI method
Given a non-singular matrix A, we can express it as the sum of two triangular matrices, M (non-singular) and N (singular), such that A = M + N. Let's assume that M is non-singular, while N is a singular matrix. By appropriately permuting the rows or columns of the original matrix A, we can ensure the non-singularity of M, satisfying A = M - N. Starting from the definitions of M and N, we have:
A=M+N=M(I+M^-1N).
If we let A_0=A, M_0=M, and N_0=N, we can re-iterate as such A_1=I+M_0^-1N_0, which we decompose into a sum of singular and non-singular matrices N_1 and M_1 respectively. We have then
A_1=I+M_0^-1N_0=M_1+N_1=M_1(I+M_1^-1N_1)
By combining Eq.(<ref>) and Eq.(<ref>) we have
A=M_0 A_1=M_0M_1(I+M_1^-1N_1)
With a straightforward bootstrapping argument we infer
A = M_0M_1⋯ M_n_Lower triangular(I+M_n^-1N_n)_Upper triangular
Which is a re-invention of the triangular decomposition (UL or LU), by means of recursive matrix split and product of inverse triangular matrices, taking advantage of the closedness under product of triangular matrices. Besides, as N is a singular matrix with a diagonal full of zeros, the matrix product M_i^-1N_i results in a matrix with rank n_i-1 where n_i stands for the rank of N. Therefore, and by construction, we have the following Theorems.
Let A=M+N, where M is an n× n lower (or upper) triangular matrix extracted from A, where N is the complement upper (or lower) triangular matrix respectively. Assume that M (or N) is invertible matrix, then
{[ A = ( ∏_i=0^n N_i) (I+N_n^-1M_n); M_i+1 + N_i+1 = A for i=0
I+N_i^-1M_i for i≥ 1. ].
Equivalently, by symmetry under the condition of M being an invertible matrix
{[ A = ( ∏_i=0^n M_i) (I+M_n^-1N_n); M_i+1 + N_i+1 = A for i=0
I+M_i^-1N_i for i≥ 1. ].
By taking the inverse of Eq.(<ref>) we have the following results
Let A=M+N, where M is an n× n lower (or upper) triangular matrix extracted from A, where N is the complement upper (or lower) triangular matrix respectively. Assume that M (or N) is invertible matrix, then
{[ A^-1 = (I+N_n^-1M_n)^-1( ∏_i=0^n N_i^-1); M_i+1 + N_i+1 = A for i=0
I+N_i^-1M_i for i≥ 1. ].
Equivalently, by symmetry under the condition of M being an invertible matrix
{[ A^-1 = (I+M_n^-1N_n)^-1( ∏_i=0^n M_i^-1); M_i+1 + N_i+1 = A for i=0
I+M_i^-1N_i for i≥ 1. ].
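A one-step numerical check of these identities is immediate. The sketch below is illustrative Python (a diagonally dominant test matrix is assumed so that no permutation is needed); it verifies A^{-1} = (I + M^{-1}N)^{-1} M^{-1} for a single lower/upper split, the triangular inverse M^{-1} being the place where CRIT or COMBRIT would be called in the paper's setting.

import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 6)) + 6 * np.eye(6)     # diagonally dominant, so M below is non-singular
M = np.tril(A)                             # lower triangular part, including the diagonal
N = A - M                                  # strictly upper triangular, zero diagonal
iM = np.linalg.inv(M)                      # in the paper this inverse comes from CRIT/COMBRIT
Ainv = np.linalg.inv(np.eye(6) + iM @ N) @ iM
print(np.allclose(Ainv, np.linalg.inv(A)))   # True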
We consider a given non-singular matrix A of size n × n, to which we apply a permutation (swap of columns) P, i.e. AP, such that the diagonal entries of the resulting permuted matrix AP are all non-zero. This way, the split M + N = AP produces a non-singular triangular matrix M. Such a permutation is applied at every iteration to make sure that all splits lead to a non-singular triangular matrix M_j, where M_j + N_j = (I + M_{j-1}^{-1}N_{j-1})P_j. The former assertion is trivial: by assumption A is non-singular and writes as the product of the two matrices M and (I + M_{j-1}^{-1}N_{j-1}), where M is non-singular by construction; hence (I + M_{j-1}^{-1}N_{j-1}) is also non-singular.
Furthermore, it is worth mentioning that the singular triangular matrix N_j has zeros on the diagonal; in addition, by the closedness of triangular matrices under inversion, M_j^{-1} is also a triangular matrix. Therefore, in the case of M an upper triangular matrix and N a lower triangular one, the resulting product M_j^{-1}N_j has its last column full of zeros. The shift by the identity matrix in each iteration of Algorithm <ref> makes the resulting matrix
I + M_j^{-1}N_j have its last column equal to e_n (the n-th vector of the canonical basis). This process further reduces the split by rank one at each iteration, where the size of the matrix M_j to be inverted becomes only (n-j) × (n-j):
M_j=
[ a^(j)_1,1 a^(j)_1,2 ⋯ a^(j)_1,n-j 0 ⋯ 0; a^(j)_2,2 ⋯ a^(j)_2,n-j 0 ⋯ 0; ⋱ ⋮ ⋮ ⋱ ⋮; a^(j)_n-j,n-j 0 0; 1 ⋯ 0; ⋱ ⋮; 1; ]
Although the algorithm constructs a mathematically correct inverse of a given non-singular square matrix, it suffers from an overwhelming computational complexity overhead. In fact, its total time complexity sums up to
^SRI(m 2^k+1) = ∑_j=0^m 2^k+1-1(^CRIT(m 2^k+1-j) + (m 2^k+1-j) + m 2^k+1-j )
For reader convenience, we include the detailed calculation of the formula in the appendix.
It is clear that such complexity is worse than 𝒪(n^3) < (m 2^{k+1}). However, it is easy, on the other hand, to verify the correctness of the algorithm for inverting non-singular matrices.
§.§.§ Block-wise RSI method: BRSI
Despite the fact that the element-wise approach has a complexity that exceeds cubic order, it forms the foundation for the block-wise approach, which significantly reduces the time complexity to a subcubic order by leveraging Strassen's method for matrix-matrix multiplication.
In the sequel, we present the BRSI by promoting a γ-block splitting approach. We shall assume the order of the matrix n=m 2^k+1 where m=γ q, with 2≤γ≤ 2^4 being an integer.
16· 2^k≤ n ≤ m2^k = γ q 2^k
17 ≤ qγ≤ 31
q= [ n 2^-k]+1γ
One remarks that, with regard to the setting in Eqs. (<ref>) (see <cit.>), we have 16 < m < 32. As we shall consider the factorization m = qγ, prime values of m can be avoided by rewriting n = m2^k = (2m)2^{k-1}.
Established on Eq.(<ref>), we provide the following preliminary formulas that we use in the complexity analysis.
q^3≤(n 2^-k+1/γ)^3 = 1/γ^3(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )
q^2≤(n 2^-k+1/γ)^2 = 1/γ^2(n^2/4^k+2n/2^k+1 )
𝒬_3(γ):=∑_j=0^γ-1 (γ-j)^3 =γ^2(γ+1)^2/4
𝒬_2(γ):=∑_j=0^γ-1 (γ-j)^2 = γ(2γ+1)(γ+1)/6
𝒬_1(γ) :=∑_j=0^γ-1 (γ-j)^2 =γ(γ+1)/2
The Block-wise Recursive Inversion (BRSI) algorithm, which is an extension of Algorithm <ref> with γ-blocks, is introduced in a similar manner. In the following sections, we outline the key steps involved in analyzing the time complexity of the BRSI algorithm.
Steps of BRSI(γ)
Given a non-singular square matrix A and a permutation P_0 the BRSI proceeds as follows
* : Set ℓ=0, and A_ℓ=AP_0
* : Split the matrix A_ℓ of order n=m2^k into γ blocks of order q2^k each.
* : Form L_ℓ Low blocks triangular from A_ℓ, and U_0 upper blocks triangular from A, such that
L_ℓ+U_ℓ=A_ℓ P_ℓ
* :
* If U is triangular use COMBRITE(β) Algorithm <ref> to evaluate U^-1_ℓ
* Else use BRSI(β) Algorithm to evaluate U^-1_ℓ
* : Set
A_ℓ+1= I_d + U^-1_0L^_0
* : Repeat Steps1–5 times (γ-1), with ℓ=ℓ+1.
Hence, the BRSI terminates after performing γ calls of: i) a permutation operation, needed to handle the appropriate pivoting while extracting a non-singular matrix 𝐔_ℓ before the split; ii) COMBRIT, for block/triangular matrix inversion; and iii) a block matrix multiplication of upper-block-triangular and lower-block-triangular matrices. Finally, after the loop, we obtain
A P_0P_1⋯ P_γ-1 = 𝐔_0𝐔_1⋯𝐔_γ-1𝐋_γ-1
A P = 𝐔𝐋
where P = P_0P_1⋯ P_γ-1 hence P^-1=P_γ-1⋯ P_1P_0, and 𝐔=𝐔_0𝐔_1⋯𝐔_γ-1 and 𝐋=𝐋_γ-1.
An important aspect to highlight is the presence of double recursion within the BRSI algorithm, as described in <ref>. This feature provides flexibility in selecting the parameters γ and β, which respectively determine the splitting of the square matrix into γ
blocks for inversion and recursion for handling the inverse of triangular matrices in the combinatorial-based algorithm. The simultaneous presence of these two levels of recursion in the BRSI algorithm offers a powerful tool for optimizing the inversion process. By carefully adjusting the values of γ and β, researchers can fine-tune the algorithm to suit specific problem characteristics and computational requirements. This flexibility allows for tailoring the BRSI algorithm to achieve optimal performance and efficiency in various scenarios.
The inversion's complexity of a full non-singular square matrix of order n=m2^k satisfies
[ ^BSRI(m 2^k) -^algo(m 2^k)-(m2^k) -(γ-1)(m2^k); =; ∑_j=0^γ-1(^algo((γ-j) q 2^k) + ( (γ-j) q 2^k) + γ q 2^k +((γ-j) q 2^k)) ]
In our analysis, we will consider the worst-case complexity scenario, where the choice of algorithm, denoted as algo, can be either CRIT or COMBRIT. To provide a fair comparison, we will focus on the worst-case complexity that does not benefit from recursion. Specifically, when considering CRIT, we obtain the following expression:
∑_j=0^γ-1^CRIT((γ-j) q 2^k ≤ 1.16· n^log_2(7) - 2.78· n^2+ 461· n - 3472.
∑_j=0^γ-1((γ-j) q 2^k) ≤ 1.0432 n^log_2(7)+26.56 n^2 + 309.67 nlog_2(n) - 693.25 n
∑_j=0^γ-1((γ-j) q 2^k) ≤ 2 n^2 -3.32 · n.
The complete derivations of the upper bounds in Eqs. (<ref>)-(<ref>) are reported in the appendix. It has been found that the best γ-block split strategy is actually the 2-block split: the complexity is an increasing function of γ, which imposes taking γ at its minimum value, namely 2. The upper bounds above are therefore given for γ = 2. Furthermore, with Eq. (<ref>) and Eq. (<ref>), the upper bound for the total complexity sums up to
(m 2^k) ≤ 4.1472n^log _2(7) + 728.76n^2 - 98.807n+564.905nlog _2(n)-5765.33
Hereafter, we describe how the algorithm BRSI performs for γ=2. We suppose A is a positive definite matrix,
A=
[ A_11 A_12; A_21 A_22 ],
with non-singular principal sub-blocks A_11 and A_22. It is worth noting here, that a permutation matrix should be applied in the case where a direct splitting doesn't lead to non-singular diagonal sub-blocks. Following the split performed in Figure <ref> we have
A = [ A_11 A_12; 0 A_22 ](
[ I_11 0; 0 I_22 ] +
[ S_11 S_12; 0 S_22 ][ 0 0; A_21 0 ])
= [ A_11 A_12; 0 A_22 ][ I_11 + S_12A_21 0; S_22A_21 I_22 ]
where
[ S_11 S_12; 0 S_22 ]=[ A_11 A_12; 0 A_22 ]^-1=[ A_11^-1 -A_11^-1A_12A_22^-1; 0 A_22^-1 ].
Then computing the inverse A^-1 simply reads
A^-1=[ A_11 A_12; A_21 A_22 ]^-1 = [ I_11 + S_12A_21 0; S_22A_21 I_22 ]^-1[ S_11 S_12; 0 S_22 ]
By further exploring the evaluation of the matrix inverse, we arrive at a formulation that is equivalent to the Schur complement. In fact,
A^-1=
= [ (I_11 + S_12A_21)^-1 0; -S_22A_21(I_11 + S_12A_21)^-1 I_22 ][ S_11 S_12; 0 S_22 ]
= [ (I_11 -A_11^-1A_12A_22^-1A_21)^-1 0; -A_22^-1A_21(I_11 -A_11^-1A_12A_22^-1A_21)^-1 I_22 ][ A_11^-1 -A_11^-1A_12A_22^-1; 0 A_22^-1. ],
By using the inversion lemma (also known as Woodbury identity), we can identify the result to the standard 2-blocks (LD)U identity.
Therefore,
A^-1=
[ (I_11 -A_11^-1A_12A_22^-1A_21)^-1A_11^-1 -(I_11 -A_11^-1A_12A_22^-1A_21)^-1A_11^-1A_12A_22^-1; -A_22^-1A_21(I_11 -A_11^-1A_12A_22^-1A_21)^-1A_11^-1 A_22^-1A_21(I_11 -A_11^-1A_12A_22^-1A_21)^-1A_11^-1A_12A_22^-1+A_22^-1, ]
further,
A^-1=
[ (A_11 -A_12A_22^-1A_21)^-1 -(A_11 -A_12A_22^-1A_21)^-1A_12A_22^-1; -A_22^-1A_21(A_11 -A_12A_22^-1A_21)^-1 A_22^-1A_21(A_11 -A_12A_22^-1A_21)^-1A_12A_22^-1+A_22^-1 ]
It is also worth mentioning that, if one chooses to invert U instead of L in the split steps, the result ends up with a UL-like decomposition of the initial matrix. Moreover, the splitting approach represents a generalization of the γ-block-LU (or γ-block-UL) decompositions.
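The 2-block identities above can be checked numerically as follows (illustrative Python; the diagonal blocks and I + S_{12}A_{21} are assumed non-singular, so no permutation step is shown, and the block triangular inverses would in practice come from COMBRIT/CRIT):

import numpy as np

rng = np.random.default_rng(0)
n, h = 8, 4
A = rng.random((n, n)) + n * np.eye(n)          # well-conditioned test matrix
A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]

S11, S22 = np.linalg.inv(A11), np.linalg.inv(A22)
S12 = -S11 @ A12 @ S22                          # inverse of the block upper triangular factor
top = np.linalg.inv(np.eye(h) + S12 @ A21)      # (I + S12 A21)^{-1}

Ainv = np.block([[top,              np.zeros((h, h))],
                 [-S22 @ A21 @ top, np.eye(h)     ]]) @ np.block([[S11,              S12],
                                                                  [np.zeros((h, h)), S22]])
print(np.allclose(Ainv, np.linalg.inv(A)))      # True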
§ NUMERICAL TESTS
The numerical simulations were conducted on an Intel Dell i7 core machine running the Ubuntu operating system (kernel version 5.15.0 SMP). The specific configuration of the machine is x86_64. The simulations were implemented using MATLAB version R2021b.
In this section, numerical tests of the proposed methods, namely, SQR, SKUL, and BRSI are presented.
§.§ SQR and SKUL approaches
We present the results obtained from executing two methods: SQR and SKUL. These methods incorporate the CRIT method while performing the classic QR and classic LU (Crout) methods, respectively. We aim to provide simultaneous direct and inverse matrix decompositions.
The SQR method combines the classic QR method with CRIT. It utilizes the CRIT algorithm during the QR decomposition process to efficiently handle triangular matrices. Similarly, the SKUL method integrates CRIT within the classic LU (Crout) method.
This integration allows for improved performance and accuracy in matrix decomposition tasks.
Its advantage reveals itself in the case of truncated decompositions, where both the direct and the inverse decomposition are provided, which is an asset for preconditioning techniques.
Let us recall that the SQR algorithm <ref> constructs three matrices S, Q, R such that
A = QR and A^{-1} = R^{-1}Q^t = SQ^t, where S is calculated using the newly proposed method described in Algorithm <ref>.
For a given (non-singular) random matrix such as
A=([ 3.4932 1.6986 1.0719 1.5279 2.0805; 1.6986 2.9712 1.1746 1.0648 2.3337; 1.0719 1.1746 1.8540 0.6077 1.4418; 1.5279 1.0648 0.6077 2.1238 1.2346; 2.0805 2.3337 1.4418 1.2346 3.8037 ])
the corresponding Q, R decomposition is given by
([ 0.7300 -0.5538 -0.1454 -0.3245 -0.1844; 0.3550 0.7573 -0.3205 -0.1045 -0.4323; 0.2240 0.1427 0.9303 -0.0130 -0.2526; 0.3193 -0.0882 -0.0688 0.9386 -0.0683; 0.4348 0.3028 0.0771 -0.0524 0.8430 ])_Q,
([ 4.7854 3.9123 2.4356 2.8443 4.7179; 0 2.0897 0.9435 0.2334 1.8639; 0 0 1.2617 -0.0491 0.4990; 0 0 0 1.3136 0.0215; 0 0 0 0 1.3652 ])_R
while S and Q^t are calculated as
([ 0.2090 -0.3912 -0.1108 -0.3871 -0.1414; 0 0.4785 -0.3578 -0.0984 -0.5210; 0 0 0.7926 0.0296 -0.2901; 0 0 0 0.7613 -0.0120; 0 0 0 0 0.7325 ])_S,
([ 0.7300 0.3550 0.2240 0.3193 0.4348; -0.5538 0.7573 0.1427 -0.0882 0.3028; -0.1454 -0.3205 0.9303 -0.0688 0.0771; -0.3245 -0.1045 -0.0130 0.9386 -0.0524; -0.1844 -0.4323 -0.2526 -0.0683 0.8430 ])_Q^t
On the other hand, and for the same matrix A as above, the results of the S, K, U, and L decomposition are as follows
([ 1 0.4863 0.3069 0.4374 0.5956; 0 1 0.3046 0.1500 0.6163; 0 0 1 0.0308 0.3022; 0 0 0 1 0.0810; 0 0 0 0 1 ])_U,
([ 3.4932 0 0 0 0; 1.6986 2.1452 0 0 0; 1.0719 0.6534 1.3261 0 0; 1.5279 0.3218 0.0408 1.4059 0; 2.0805 1.3221 0.4007 0.1139 1.6195 ])_L
([ 1 -0.4863 -0.1588 -0.3596 -0.2188; 0 1 -0.3046 -0.1407 -0.5129; 0 0 1 -0.0308 -0.2997; 0 0 0 1 -0.0810; 0 0 0 0 1 ])_S,
([ 0.2863 0 0 0 0; -0.2267 0.4662 0 0 0; -0.1197 -0.2297 0.7541 0 0; -0.2557 -0.1000 -0.0219 0.7113 0; -0.1351 -0.3167 -0.1851 -0.0500 0.6175 ])_K
Table <ref> presents the overhead results obtained from comparing the LU and SKUL algorithms, as well as the QR and SQR algorithms, for various matrix sizes. The overhead represents the additional time required by the modified algorithms compared to their conventional counterparts. From the results, it is evident that both the SKUL and SQR algorithms exhibit higher overhead ratios compared to the LU and QR algorithms, respectively. This is because incorporating the triangular inversion technique introduces additional computations, leading to longer execution times.
For the LU and SKUL pair, the overhead ratios range from approximately 2.06 to 2.43, indicating that the SKUL algorithm has an overhead of around 2 times compared to the LU algorithm. Similarly, for the QR and SQR pair, the overhead ratios range from approximately 1.46 to 2.76, indicating that the SQR algorithm has an overhead of around 1.5 to 2.8 times compared to the QR algorithm. This indicates that the modified algorithms take roughly two times longer than their conventional counterparts while producing both the direct and the inverse decomposition. These findings highlight the trade-off between the benefits of triangular inversion incorporated into classical methods and the associated increase in computational complexity. Note that performing LU first and only then applying CRIT to invert U and L in order to form S and K, respectively, would take a longer wall-time. A report of the run-time performance of the CRIT method is given in the next section, where one can clearly see that the cost of producing S, K, U, L in that way would be the usual runtime of the direct decomposition plus roughly twice the runtime of CRIT; this shows how important the incorporation of CRIT within the classical codes is. Furthermore, if the implementation used low-level programming such as BLAS, the ratio would decrease greatly.
It is worth mentioning that these overheads are mainly due to the fact that the CRIT doesn't benefit from any recurrent relation that decreases the handled matrix size within the iterations. On the other hand, we believe that the incorporation of COMBRIT method in the classical decomposition algorithms would further accelerate their computation wall-time and reduce the computational overhead.
§.§ Runtime performance of the combinatorial-based square matrix inversion
Table <ref> showcases a comparison of the CPU time (in seconds) for two different methods, namely the Column Recursive Inversion of Triangular matrices (CRIT) and the Combinatorial-based Block Recursive Inversion of Triangular matrices (COMBRIT). These methods are specifically designed for non-singular triangular matrices.
The table includes various matrix sizes, ranging from 16×16 to 1024×1024, and displays the average CPU time obtained from 10 runs of each algorithm.
Upon examining the results, we observe that the COMBRIT method consistently exhibits lower CPU times, especially when β = 2, as dictated by the theoretical complexity analysis. Furthermore, COMBRIT outperforms the reference CRIT method, and this is expected since CRIT does not benefit from the reduction of calculations through recursive iterations.
However, as the matrix size grows, the COMBRIT method experiences a noticeable increase in CPU time when higher values of the parameter β are considered. This is due to the complexity overhead generated by the combinatorial approach. Nonetheless, it is recalled that these implementations are sequential, and the COMBRIT approach does not yet take advantage of its naturally parallel structure.
However, the COMBRIT method maintains its efficiency and outperforms CRIT for larger matrices when β is chosen as β=2,4.
Table <ref> provides a comprehensive comparison of the CPU time (in seconds) for three different matrix inversion methods: Gauss-Jordan Inversion (GJI), RSI, and its block version BRSI. The algorithms were evaluated on various matrix sizes, ranging from 16×16 to 1024×1024. The results were obtained by averaging the run-time performance over 10 executions of each algorithm.
Upon analyzing the results, it is evident that the RSI method consistently demonstrates slower computation times across all matrix sizes. This is expected since as demonstrated theoretically its complexity is super-cubic. On the other hand, the BRSI method exhibits a notable decrease in CPU time as the matrix size grows, indicating its computational efficiency for larger matrices. However, as the parameter γ=β increases, representing the size of the blocks used in the recursion in the splitting and combinatorial, the BRSI method achieves increasingly lower CPU times.
For relatively smaller matrix sizes, the BRSI method with β = 2^4 lags behind RSI/GJI in terms of CPU time; this is due to the complexity overhead of the combinatorics, which involves more matrix multiplications. However, as the matrix size grows, the BRSI method quickly surpasses RSI/GJI, demonstrating its ability to handle larger-scale computations efficiently.
Notably, for the largest matrix size in the table (1024x1024), the BRSI method achieves a considerable improvement over RSI/GJI, with a significantly lower CPU time. This highlights the effectiveness of the BRSI approach for handling complex and computationally demanding tasks, such as large-scale matrix inversions. The results provide valuable insights into the efficiency of the proposed block method (BRSI) compared to traditional inversion methods. These findings make the BRSI method a promising approach for practical applications that require fast and accurate matrix inversions.
§ CONCLUSION
In this paper, we have presented novel methods for computing the inverse of non-singular triangular matrices. Our study includes the analysis of several algorithms, namely COMBRIT, SQR, SKUL, and BRSI, which provide efficient and accurate solutions for inverse factorization tasks.
The SQR and SKUL algorithms are specifically designed for the inverse decomposition of QR and LU matrices, respectively. The COMBRIT method utilizes combinatorial calculations based on the indexes of the entries in the initial triangular matrix, enabling a direct computation of its inverse without the need for iterative procedures. On the other hand, the BRSI method employs a matrix splitting approach, where the given square matrix is divided into a sum of triangular matrices. This technique takes advantage of the recurrence provided by COMBRIT to construct the inverse matrix iteratively.
We have conducted a comprehensive analysis of the time complexity of these algorithms, demonstrating their effectiveness across various matrix sizes. The results of numerical tests and implementations indicate that our proposed algorithms outperform traditional techniques, especially for larger matrices. Notably, the BRSI method exhibits promising performance when the parameter β is appropriately chosen, making it a valuable tool for practical applications that require efficient and accurate matrix inversions.
Furthermore, our research introduces the concept of combinatorial-based approaches and recurrent techniques for triangular decomposition and inversion. These innovative methods enhance the efficiency and speed of the computations, allowing for more dynamic computation of inverse matrices.
* CRIT: Column Recursive inverse of triangular matrices.
* COMBRIT: Combinatorial (Block) recursive inverse of triangular matrices.
* SKUL: Inverse factorization with augmented classical LU factorization.
* SQR: Inverse factorization with augmented classical QR factorization.
* BRSI: Inverse Factorization based on recursive, split, and block inverse of triangular matrices.
§ ACKNOWLEDGMENT
The author would like to acknowledge the support received through the external research grant number 8434000491 at the Emirates Nuclear Technology Center at Khalifa University.
§ DATA AND CODES AVAILABILITY
In the interest of transparency and reproducibility, the MATLAB codes used in this research study, including the implementation of all algorithms, are made publicly available online through the GitHub repository <https://github.com/riahimk/Combinatorial_Inversion.git>. This allows researchers and interested parties to access and review the codes, thereby promoting transparency and facilitating the replication of our findings.
§ APPENDIX
Complexity calculation for SRI
We break down the calculation of the complexity formula Eq. (<ref>) as follows:
Using the fact that
(m 2^k+1) = 2 (m 2^k) + (m 2^k) + 2(m 2^k)
= 2^k+1(m) + ∑_j=0^k2^j(m 2^k-j) + 2^j+1(m 2^k-j)
= 2^k+1(m) + ∑_j=0^k 2^j( (5+2m)m^27^k-j-6(m2^k-j)^2)
+ ∑_j=0^k2^j+1( (m^3-13/2m^2-13) 2^2(k-j) +m^215/22^(k-j)
+7^(k-j)/3)
= 2^k+1(m) + m^2(2m+5)5 7^k+1 -2m^2(m-5)52^k+1 -3· m^2 4^k+1
+ (m^3-13/2m^2-13)(4^k+1-2^k+1)
+ (1+k)15m^2/2 2^k+1
+ 2· 7^k+1-2^k+215
= 6m^3+15m^2+215 7^k+1+ (m^3-19/2m^2-13)4^k+1
((m)-14m^3-75km^2-160m^2-210) 2^k+1
Knowing that, and by promoting the sparsity, the product of an upper triangular matrix of size m× m with a lower triangular matrix L of size m× m, where L has diagonal entry zeros requires (m-1)m^2/3 multiplications and (5 m^3-27m^2+46m-24)/6 additions. Hence,
(m) = (7m^3 - 27m^2 + 44m - 24)/6.
(m 2^k)
= (6m^3+15m^2+215) 7^k+ (m^3-19/2m^2-13)4^k
+ (-7m^3+345m^2+220m+225km^2-11430) 2^k
≤ (6(n/2^k+1)^3+15(n/2^k+1)^2+215) 7^k
+ ((n/2^k+1)^3-19/2(n/2^k+1)^2-13)4^k
+(-7(n/2^k+1)^3+345(n/2^k+1)^2+220(n/2^k+1)+225k(n/2^k+1)^2-11430) 2^k
≤ ( 2/5n^3/8^k+11/5n^2/4^k+16/5n/2^k+23/15) 7^k
+ (n^3/8^k - 13/2n^2/ 4^k -16n/2^k-53/6)4^k
+(-7/30n^3/8^k+75 k+108/10n^2/4^k
+ ( 15k+889/30) n/2^k+148+75k/10) 2^k
= ( 2/5(8/7)^log_2(n)-k+11/5(4/7)^log_2(n)-k+16/5(2/7)^log_2(n)-k+23/15(1/7)^log_2(n)-k) n^log_2(7)
+ (2^log_2(n)-k - 13/2 -16(1/2)^log_2(n)-k-53/6(1/4)^log_2(n)-k)n^2
+(-7/304^log_2(n)-k+75 k+108/10 2^log_2(n)-k
+ ( 15k+889/30) +148+75k/10(1/2)^log_2(n)-k) n
≤ ( 2/5(8/7)^5+11/5(4/7)^5+16/5(2/7)^5+23/15(1/7)^5) n^log_2(7)
+ (2^5 - 13/2 -16(1/2)^5-53/6(1/4)^5 )n^2
+(-7/304^5+75 k+108/10 2^5
+ ( 15k+889/30) +148+75k/10(1/2)^5) n
≤ ( 33137/36015) n^log_2(7)
+ (153547/6144)n^2 +16335/64 nlog_2(n) +(10941/80) n
≤ 0.921 · n^log_2(7) + 23.8 · n^2 +255.235 · nlog_2(n) +136.763 · n
We note that this formula uses sub-blocks of size m 2^k from the initial order m 2^k+1. Despite this fact, the matrix multiplication operation still works for the matrices of order m2^k+1-j by the simple fact that we can adjust the size of the new matrix accordingly by adding necessary columns formed by the canonical basis. We have the following upper bound for the time complexity
^SRI(m 2^k+1) ≤ ∑_j=0^m 2^k+1-1(^CRIT(m 2^k+1-j) + (m 2^k+1) + m 2^k+1-j )
= ∑_j=0^m 2^k+1-1(215 7^k+1 + m 2^k+1-j )
+∑_j=0^m 2^k+1-1 2 (m 2^k) + 2(m 2^k) + (m 2^k)
= ∑_j=0^m 2^k+1-1(6m^3+15m^2+415 7^k+1 +(m^3-19/2m^2-13)4^k+1)
+∑_j=0^m 2^k+1-1( (m+(m)-14m^3-75km^2-160m^2-210) 2^k+1 -j )
Complexity calculation for formula Eq.(<ref>)
∑_j=0^γ-1^CRIT((γ-j) q 2^k)
= ∑_j=0^γ-14m^2 (5 + 2 m)18 7^k - 2m^3 + k 15m^218 4^k
+m^3-3m^2+8m-33 2^k-m^3+2m^2 (5 + 2 m)9.
= ∑_j=0^γ-1( 4m^3/9+10m^2/9) 7^k
-∑_j=0^γ-1(m^3/9+k^15m^2/18) 4^k
+∑_j=0^γ-1(m^3/3-m^2+8m/3-1) 2^k
-∑_j=0^γ-1( 5m^3/9+10m^2/9).
= ∑_j=0^γ-1( 4/9q^3(γ-j)^3+10/9q^2(γ-j)^2) 7^k
-∑_j=0^γ-1(q^3/9(γ-j)^3+k^15/18q^2(γ-j)^2) 4^k
+∑_j=0^γ-1(q^3/3(γ-j)^3-q^2(γ-j)^2+8q/3(γ-j)-1) 2^k
-∑_j=0^γ-1( 5q^3/9(γ-j)^3+10q^2/9(γ-j)^2).
= ( 4/9q^3𝒬_3(γ)+10/9q^2𝒬_2(γ) ) 7^k
- (q^3/9𝒬_3(γ)+k15/18q^2𝒬_2(γ)) 4^k
+ (q^3/3𝒬_3(γ)-q^2𝒬_2(γ)+8q/3𝒬_1(γ)-γ) 2^k
- ( 5q^3/9𝒬_3(γ)+10q^2/9𝒬_2(γ) ).
≤ ( 1/9(γ+1)^2/γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )+5/27(2γ+1)(γ+1)/γ(n^2/4^k+2n/2^k+1 ) ) 7^k
- (1/36(γ+1)^2/γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )+k 5/36(2γ+1)(γ+1)/6γ(n^2/4^k+2n/2^k+1 )) 4^k
+ (1/12(γ+1)^2/γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )-(2γ+1)(γ+1)/6γ(n^2/4^k+2n/2^k+1 )+1-2γ/3) 2^k
- ( 5/36(γ+1)^2/γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )+5/27(2γ+1)(γ+1)/γ(n^2/4^k+2n/2^k+1 ) ).
≤ ( 1/9(γ+1)^2/γ((8/7)^log_2(n)-k+3· (4/7)^log_2(n)-k+3·(2/7)^log_2(n)-k+(1/7)^log_2(n)-k) ) n^log_2(7)
+( 5/27(2γ+1)(γ+1)/γ((4/7)^log_2(n)-k+2(2/7)^log_2(n)-k+(1/7)^log_2(n)-k) ) n^log_2(7)
- (1/36(γ+1)^2/γ(2^log_2(n)-k+3+3(1/2)^log_2(n)-k+(1/4)^log_2(n)-k) ) n^2
- ( k 5/36(2γ+1)(γ+1)/6γ(1+2(1/2)^log_2(n)-k+(1/4)^log_2(n)-k)) n^2
+ (1/12(γ+1)^2/γ(4^log_2(n)-k)+3· 2^log_2(n)-k +3+ (1/2)^log_2(n)-k) n
-((2γ+1)(γ+1)/6γ(2^log_2(n)-k +1+(1/2)^log_2(n)-k)+1-2γ/3) n
- 5/36(γ+1)^2/γ(8^log_2(n)-k+3 · 4^log_2(n)-k+3· 2^log_2(n)-k+1 )
-5/27(2γ+1)(γ+1)/γ(4^log_2(n)-k+2· 2^log_2(n)-k+1 )
≤ ( 1/9(γ+1)^2/γ((8/7)^5+3· (4/7)^5+3·(2/7)^5+(1/7)^5 ) ) n^log_2(7)
+( 5/27(2γ+1)(γ+1)/γ((4/7)^5+2(2/7)^5+(1/7)^5 ) ) n^log_2(7)
- (1/36(γ+1)^2/γ(2^4+3+3(1/2)^4+(1/4)^4 ) ) n^2
- ( 10/36(2γ+1)(γ+1)/6γ(1+2(1/2)^4+(1/4)^4 )) n^2
+ (1/12(γ+1)^2/γ(4^5 )+3· 2^5 +3+ (1/2)^5 ) n
-((2γ+1)(γ+1)/6γ(2^4 +1+(1/2)^4 )+1-2γ/3) n
- 5/36(γ+1)^2/γ(8^4+3 · 4^4+3· 2^4+1 )
-5/27(2γ+1)(γ+1)/γ(4^4+2· 2^4+1 )
Finally,
∑_j=0^γ-1^CRIT((γ-j) q 2^k ≤ (121(109γ+104)(γ+1)/50421γ) n^log_2(7)
- ( 289(61γ+56)(γ+1)/27648γ) n^2
+ ( 3169/32+7582γ^2+15597γ+7919/96γ) n
- 1445(59γ+55)(γ+1)/108γ.
∑_j=0^γ-1^CRIT((γ-j) q 2^k≤ 1.16· n^log_2(7) - 2.78· n^2+ 461· n - 3472.
Complexity calculation for formula Eq.(<ref>)
∑_j=0^γ-1((γ-j) q 2^k)
= ∑_j=0^γ-1(6(γ-j)^3q^3+15(γ-j)^2q^2+215) 7^k+ ((γ-j)^3q^3-19/2(γ-j)^2q^2-13)4^k
+ (-7(γ-j)^3q^3+345(γ-j)^2q^2+220(γ-j)q+225k(γ-j)^2q^2-11430) 2^k
= ∑_j=0^γ-1(2/5q^3(γ-j)^3+q^2(γ-j)^2+2/15) 7^k+ ((γ-j)^3q^3-19/2(γ-j)^2q^2-13)4^k
+ (-730q^3(γ-j)^3+34530q^2(γ-j)^2+22030q (γ-j)+225k30q^2(γ-j)^2-11430) 2^k
= (2/5𝒬_3(γ) q^3+𝒬_2(γ)q^2+2/15γ) 7^k+ ( 𝒬_3(γ) q^3-19/2𝒬_2(γ)q^2-13γ)4^k
+ (-730𝒬_3(γ)q^3+34530𝒬_2(γ)q^2+22030𝒬_1(γ) q+225k30𝒬_2(γ)q^2-11430γ) 2^k
= (2/5𝒬_3(γ) q^3+𝒬_2(γ)q^2+2/15γ) 7^k+ ( 𝒬_3(γ) q^3-19/2𝒬_2(γ)q^2-13γ)4^k
+ (-730𝒬_3(γ)q^3+34530𝒬_2(γ)q^2+22030𝒬_1(γ) q-11430γ) 2^k
+(225k30𝒬_2(γ)q^2) 2^k
≤ (γ+1)^2/10γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1) 7^k +((2γ+1)(γ+1)/6γ(n^2/4^k+2n/2^k+1 ) + 2γ/15) 7^k
+ ( (γ+1)^2/4γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )
-19/12(2γ+1)(γ+1)/γ(n^4/4^k+2n/2^k+1 )
-γ/3) 2^k
+ (-730(γ+1)^2/γ(n^3/8^k+3· n^2/4^k+3·n/2^k+1 )
+345180(2γ+1)(γ+1)/γ(n^2/4^k+2n/2^k+1 )) 2^k
+ ( 22030γ+1/2 (n/2^k+1) -11430γ) 2^k + (225k180(2γ+1)(γ+1)/γ(n^2/4^k+2n/2^k+1 ) ) 2^k
= (γ+1)^2/10γ( (8/7)^log_2(n)-k+3·(4/7)^log_2(n)-k+3·(2/7)^log_2(n)-k+(1/7)^log_2(n)-k) n^log_2(7)
+(2γ+1)(γ+1)/6γ(
(4/7)^log_2(n)-k
+2(2/7)^log_2(n)-k
+ (1/7)^log_2(n)-k) n^log_2(7)
+( 2γ/15(1/7)^log_2(n)-k) n^log_2(7)
+ ( (γ+1)^2/4γ(2^log_2(n)-k+3·(4/4)^log_2(n)-k+3·(1/2)^log_2(n)-k+(1/4)^log_2(n)-k) ) n^2
-( 19/12(2γ+1)(γ+1)/γ((1/1)^log_2(n)-k+(1/2)^log_2(n)-k+(1/4)^log_2(n)-k)
-γ/3) n^2
+ (-730(γ+1)^2/γ(4^log_2(n)-k+3· 2^log_2(n)-k+3 +(1/2)^log_2(n)-k) ) n
+(345180(2γ+1)(γ+1)/γ(2^log_2(n)-k+1+(1/2)^log_2(n)-k)) n
+ ( 22030γ+1/2 (1+(1/2)^log_2(n)-k) -11430γ(1/2)^log_2(n)-k) n
+ (225180(2γ+1)(γ+1)/γ(2^log_2(n)-k+1+(1/2)^log_2(n)-k) ) n log_2(n)
≤ (γ+1)^2/10γ( (8/7)^5+3·(4/7)^5+3·(2/7)^5+(1/7)^5) n^log_2(7)
+( (2γ+1)(γ+1)/6γ(
(4/7)^5
+2(2/7)^5
+ (1/7)^5)
+ 2γ/15(1/7)^5) n^log_2(7)
+ ( (γ+1)^2/4γ(2^5+3·(4/4)^5+3·(1/2)^5+(1/4)^5) ) n^2
-( 19/12(2γ+1)(γ+1)/γ((1/1)^5+(1/2)^5+(1/4)^5)
+γ/3) n^2
+ (-730(γ+1)^2/γ(4^5+3· 2^5+3 +(1/2)^5) ) n
+(345180(2γ+1)(γ+1)/γ(2^5+1+(1/2)^5)) n
+ ( 22030γ+1/2 (1+(1/2)^5) -11430γ(1/2)^5) n
+ (225180(2γ+1)(γ+1)/γ(2^5+1+(1/2)^5) ) n log_2(n)
Finally, we have
∑_j=0^γ-1((γ-j) q 2^k) ≤ (
(γ+1)^2/10γ35937/16807
+(2γ+1)(γ+1)/6γ1089/16807+ 2γ/252105)_θ_a(γ) n^log_2(7)
+ ( (γ+1)^2/4γ35937/1024
- 19/12(2γ+1)(γ+1)/γ1057/1024
-γ/3)_θ_b(γ) n^2
+ ((2γ+1)(γ+1)/γ2251801057/32)_θ_c(γ) n log_2(n)
+ (-260008γ^2-641571γ-381563/1920γ + 293γ/80+121/32)_θ_d(γ) n
= θ_a(γ) n^log_2(7)+θ_b(γ) n^2 + θ_c(γ) nlog_2(n) + θ_d(γ) n
Hence,
∑_j=0^γ-1((γ-j) q 2^k) ≤θ_a(γ) n^log_2(7)+θ_b(γ) n^2 + θ_c(γ) nlog_2(n) + θ_d(γ) n
Besides, for a matrix of order n, a permutation involves n(n-1)/2 comparisons and n(n-1) column interchanges.
Complexity calculation for formula Eq.(<ref>)
∑_j=0^γ-1((γ-j) q 2^k) = 3/2∑_j=0^γ-1 ((γ-j) q 2^k)^2 - (γ-j) q 2^k
= 3/2∑_j=0^γ-1 (γ-j)^2 q^2 4^k - (γ-j) q 2^k
= 3/2𝒬_2(γ) q^2 4^k - 𝒬_1(γ) q 2^k
≤ 3/12(2γ+1)(γ+1)/γ(n^2/4^k+2n/2^k+1 ) 4^k - 3/4(γ+1) (n/2^k+1) 2^k
= 3/12(2γ+1)(γ+1)/γ( 1 + 2(1/2)^log_2(n)-k+(1/4)^log_2(n)-k) n^2
- 3/4(γ+1) (1+(1/2)^log_2(n)-k) n
≤ 1089/4096(2γ+1)(γ+1)/γ n^2 -99/128(γ+1) n
≤ 2 n^2 -3.32 · n
|
http://arxiv.org/abs/2307.04876v1 | 20230710195438 | Density and Velocity Correlations in Isothermal Supersonic Turbulence | [
"Branislav Rabatin",
"David C. Collins"
] | astro-ph.GA | [
"astro-ph.GA"
] |
§ INTRODUCTION
Star-forming clouds of molecular hydrogen, which are known to be undergoing turbulent supersonic motion, are often modeled as isothermal in astrophysical simulations. This approximation is facilitated by rapid cooling rates of the molecular clouds <cit.>, which keeps the temperature roughly constant.
This reasonably simple yet powerful model is capable of explaining the observed density fluctuations within the molecular clouds, which can be used to predict many properties of star formation, such as the star formation rate <cit.> and the initial stellar mass distribution <cit.>. While supersonic turbulent motion inhibits the collapse and star formation by increasing the effective Jeans mass, at the same time it gives rise to large density variations allowing for a local collapse <cit.>.
The interplay between density and velocity fluctuations is fundamental to understanding star formation <cit.>. Describing the statistics of the fundamental dynamical quantities including the correlations between them reveals the statistical behavior of all derived quantities, including kinetic energy and the joint PDF of kinetic and thermal energy.
The main purpose of this work is to explore f_sv(s,v), the joint probability distribution
function (PDF) between the log of density, s=logρ, and speed, v. The simplest
assumption is that s and v are independent of one another, in which case the
joint distribution is the product of the marginalized distributions:
f_(s,v) = f_s(s) f_v(v)
f_s(s) =∫_-∞^∞ v f_(s,v)(s,v)
f_v(v) =∫_0^∞ s f_(s,v)(s,v).
The density PDF is typically treated as lognormal, f_s(s) = 𝒩(s;μ, σ), a Gaussian 𝒩 with mean μ and variance σ^2.
Speed, v, is usually modeled with a Maxwellian distribution; f_v (v) = ℳ (v; M) with the 1D Mach number M = √(⟨ v^2 ⟩ / 3). In this work, we improve on all three assumptions.
The finite shock model <cit.> as an extension of a simple Gaussian PDF of density is discussed in Section <ref>. In Section <ref> we introduce a tilted Maxwellian to better fit the statistics of speed. Finally, we find a correction to the joint PDF in Section <ref>.
Figure <ref> shows three models for the joint distribution along with simulated data. The color and solid contours are taken from simulations
described in Section <ref>. In the left panel, the dashed contours show the simple assumption of uncorrelated variables. Clearly the
shape of the model does not agree with the simulated data. The second panel shows our first correction to the joint PDF, which introduces a correlation between density and speed, but continues to assume a lognormal for density and Maxwellian for speed. The third panel shows our detailed model, with the corrected joint PDF and improved density and speed PDFs.
An important aspect of this work is the lack of fitting of any kind. All of the results come from moments of the data, and not by fitting a model to the simulated histograms.
The paper is organized as follows. In Section <ref> we discuss the code, simulations, and analysis. In
Section <ref> we describe the finite shock model for the density
PDF. In Section <ref> we discuss our updated distribution of speed. In Sections <ref> and <ref> we show our new joint
distribution. In Section <ref> we show that our model works
well even for higher order moments of the distribution. Finally we conclude in
Section <ref>.
§ METHODS
The suite of numerical simulations was performed using the hydrodynamic code Enzo <cit.> using the piecewise parabolic method <cit.>. The simulation domain consists of a cube of unit length with periodic boundary conditions. Each simulation is described by two parameters, the forcing mode ξ and Mach number M, both introduced via the Stochastic forcing module implemented within Enzo (Schmidt, Federrath, 2008). The forcing mode ξ∈ [0,1] is the weight of the solenoidal components of the forcing field. The value of ξ = 0 corresponds to the purely compressive forcing field, whereas ξ = 1 represents the purely solenoidal forcing.
The target Mach number is achieved by adding energy at the large scale at a rate equal to the Mach-number dissipation rate, ϵ M^3/L <cit.>.
For each Mach number M we consider the turnover scale τ as the time scale at which two frames become statistically uncorrelated. The turnover time is roughly equal to the turbulent crossing time τ_turb. = (L/2)/M, where L is the size of the box with L/2 being the size of the driving pattern and M is the 1D r.m.s. Mach number, M = √(⟨ v^2 / 3 ⟩). Each simulation is run for 9 τ with a step of 0.1 τ. For statistical purposes, only frames with t ≥ 2 τ are considered, as the fluid settles in its chaotic turbulent motion. This yields 71 snapshots of statistics for each simulation. This approach to obtaining statistical data is common in similar astrophysical simulations <cit.>.
The simulation grid consists of N = 1024^3 cells with each cell ℓ containing the same volume δ V_ℓ = 1/1024^3. Our suite of simulations employed 1D r.m.s. Mach numbers 1, 2, 4, 8, and three values of the forcing parameter, ξ = 0, 1/2, 1.
Table <ref> describes the simulations and the resulting parameters. The first column names the simulation by way of forcing parameter and target Mach number. The second column shows the actual 1d Mach number realized by the simulation. The third column shows the ratio of volume-weighted Mach number to mass-weighted Mach number, 𝔛. The following two columns show the volume-weighted mean speed ⟨ v ⟩ and its mass-weighted counterpart ⟨ρ v ⟩. The final three columns show the volume-weighted mean and variance of s, μ and σ, and the number of shocks.
§.§ Analysis
The probability distribution function, f_Q(q), for a random quantity, Q, is
the probability that Q will realize a value within the interval [q,q+dq].
This can be found as
f_Q(q) =1/V∫_V d^3 x δ( q-Q(x⃗)),
where V is the volume of the sample.
We can alternatively weight our PDF with other quantities, W, as
f^(W)_Q(q) = 1/W_net∫_V d^3 x W(x⃗) δ( q -
Q(x⃗)),
where W_net is the total of W on the domain. This is useful as it gives
an alternative view of the variable.
We will find it valuable to explore weighting by volume (V), mass (M), and kinetic
energy (E). 2D PDFs weighted by different quantities are related to one another by the following useful formulae:
f^(M)_(s,v) (s, v) = e^s f^(V)_(s,v) (s, v)
f^(E)_(s,v) (s, v) = e^s v^2/⟨ e^s v^2 ⟩ f^(V)_(s,v) (s, v)
f^(E)_(s,v) (s, v) = v^2/⟨ e^s v^2 ⟩ f^(M)_(s,v) (s, v)
For 1D PDFs, the only simple analytic expressions possible are the following
f^(M)_s (s) = e^s f^(V)_s (s)
f^(E)_v (v) = v^2/⟨ e^s v^2 ⟩ f^(M)_v (v).
Relationships between other weights and quantities, e.g., f_v^(M)(v) and f_v^(V)(v), are only possible by integrating the joint distributions.
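As a concrete illustration, such weighted histograms can be binned with a few lines of NumPy; the arrays below are synthetic stand-ins for the cell density and speed of one snapshot (the actual analysis uses the Enzo grid data), and the check at the end illustrates the relation f^(M)_s (s) = e^s f^(V)_s (s).

```python
import numpy as np

# Hypothetical stand-in for one uniform-grid snapshot: cell density and speed.
rng = np.random.default_rng(0)
rho = rng.lognormal(mean=-0.5, sigma=1.0, size=512**2)   # <rho> = 1 by construction
v = rng.rayleigh(scale=2.0, size=512**2)
s = np.log(rho)

# Volume, mass and kinetic-energy weights (all cells have the same volume).
w_V = np.ones_like(rho)
w_M = rho
w_E = rho * v**2

bins = np.linspace(s.min(), s.max(), 101)
centres = 0.5 * (bins[1:] + bins[:-1])
f_V, _ = np.histogram(s, bins=bins, weights=w_V, density=True)
f_M, _ = np.histogram(s, bins=bins, weights=w_M, density=True)
f_E, _ = np.histogram(s, bins=bins, weights=w_E, density=True)

# f^(M)(s) = e^s f^(V)(s), up to binning and sampling noise (well-sampled bins only).
mask = f_V > 1e-2 * f_V.max()
dev = np.abs(f_M[mask] - np.exp(centres[mask]) * f_V[mask]) / f_M[mask]
print("max relative deviation:", dev.max())

# The correlation measure X = <v^2> / <rho v^2>.
X = (v**2).mean() / (rho * v**2).mean()
```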
The ratio of the squared volume-weighted Mach number to its mass-weighted counterpart will prove to be a useful quantity:
𝔛 = ⟨ v^2 ⟩/⟨ e^s v^2 ⟩ = M^2/M_M^2
which serves as a loose measure of the correlation between density and velocity. Here we have introduced the mass-weighted Mach number, M_M = √(⟨ρ v^2 ⟩/3).
For the purposes of numerically comparing histograms binned from data, f^(data), with a theoretical model f^(theory) we employ the L_1 norm
δ = ∑_bin b| f^(data)_b - f^(theory) (b_cen.) | |b|
where the model function is evaluated at the bin center b_cen. and |b| indicates the bin measure (length, area, volume, ...). This formula closely mimics the analogous integral L_1 norm.
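A minimal implementation of this comparison metric could read as follows (here f_data, edges, and model are a hypothetical array of binned values, the bin edges, and a callable evaluating f^(theory); for 2D histograms the bin measure is simply the product of the two bin widths):

```python
import numpy as np

def l1_norm(f_data, edges, model):
    """L1 distance between a binned 1D PDF and a model evaluated at bin centres."""
    centres = 0.5 * (edges[1:] + edges[:-1])
    widths = np.diff(edges)
    return np.sum(np.abs(f_data - model(centres)) * widths)
```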
§ DENSITY IN SUPERSONIC ISOTHERMAL TURBULENCE
The knowledge of the statistical properties of density within the star-forming clouds is one of the cornerstones of many star formation theories <cit.>. A turbulent medium without self-gravity can be shown to exhibit near lognormal density fluctuations, a result of the self-similar statistics of isothermal, supersonic flows <cit.>, later also extended to flows magnetized with ideal MHD <cit.>. In the scope of isothermal turbulence the PDF of log density s = logρ / ρ_0 can be approximated by a Gaussian
f_s (s; σ) = 𝒩 (s; - σ^2 / 2, σ) = 1/√(2 πσ^2)exp( - ( s + σ^2 / 2 )^2/2 σ^2)
with variance σ^2 = ⟨ s^2 ⟩ - ⟨ s ⟩^2 and mean value μ = ⟨ s ⟩ = - σ^2 / 2 that fixes the mean density, ⟨ e^s ⟩ = 1. In the lognormal approximation, the variance is known to depend on the r.m.s. sonic Mach number ℳ = √(⟨ v^2 ⟩) and the weight of the solenoidal components of the forcing, ξ; σ^2 ≈log( 1 + b^2 ℳ^2 ) <cit.>.
While the lognormal approximation already provides a reasonably accurate picture of the density fluctuations, several works propose various corrections to the PDF of density, either purely within the context of turbulence <cit.>, or due to other phenomena extending beyond the framework of isothermal turbulence <cit.>.
In this work we make use of the finite shock model of density fluctuations <cit.>, that describes the PDF of log density s arising from a series of shocks traversing the turbulent medium, each adjusting the local density by a factor proportional to the local sonic Mach number, drawn from an idealized Maxwell distribution. When the number of the shocks grows to infinity, the PDF of density approaches a lognormal. However, for a finite number of shocks n, the distribution in s can be described via its characteristic function, ϕ
(s; μ, σ, n) = 1/σ∫_- ∞^∞ω ϕ (ω; n) exp( - i ω (s - μ)/σ)
where the parameters μ≡⟨ s ⟩ and σ^2 ≡⟨ s^2 ⟩ - μ^2 are the mean value of s and variance in s. The additional parameter n represents the number of shocks giving rise to a distribution with a negative skew. More details, along with the explicit form for ϕ can be found in <cit.>.
By default, the finite shock model PDF without a superscript is assumed to describe the volume-weighted statistics of log density s. To obtain its mass-weighted counterpart, we employ (<ref>)
f^(M)_s (s; μ, σ, n) = e^s (s; μ, σ, n)
The kinetic energy-weighted PDF of log density is derived in sec. <ref>.
§.§ Generating function of the finite shock model
For the purposes of calculating various expectation values within the finite shock model, we introduce the following parametric expectation value involving only (log) density
E (u, k; μ, σ, n) ≡⟨ s^k e^u s⟩ = ∫_-∞^∞ s^k e^u s (s; μ, σ, n)
Using the analytic properties of the characteristic function, we can easily calculate the expectation value for k = 0. Moreover, differentiation with respect to u brings down one power of s, increasing k by 1, which gives rise to a recurrent formula for k ≥ 1,
E (u, 0; μ, σ, n) = e^u μϕ (- i u σ; n)
E (u, k+1; μ, σ, n) = / u E (u, k; μ, σ, n)
In order to extract useful quantities from the characteristic function, we introduce two normalized functions, Φ_k(x) and F(Δ), which normalize out the first and second arguments of ϕ(ω;n) as follows
Φ_0 (x) ≡1/nlogϕ (- i √(n) x; n)
F (Δ) = 1/σ^2logϕ (- i σ; σ^2 / Δ^2).
If μ, σ, n are parameters of the volume-based distribution of log density, the conservation of total mass, ⟨ e^s ⟩ = 1, following equations (<ref>) and (<ref>), constrains μ as follows
μ = - logϕ (- i σ; n) = - n Φ_0 (σ / √(n))
which, as expected, reduces to - σ^2 / 2 when n →∞.
The number of shocks, n, for given values μ, σ can be estimated from equation (<ref>) and by inverting equation (<ref>)
n = σ^2/Δ(- μ / σ^2 )
where Δ≡ F^-1 denotes the solution to equation (<ref>).
Φ_k(x) for k > 0 are calculated as the derivative of Φ_0, and their explicit form for k = 1,2 is
Φ_1 (x) ≡Φ^' (x) = - i/√(n)ϕ^' (- i √(n) x; n)/ϕ (- i √(n) x; n)
Φ_2 (x) ≡Φ^'' (x) = - ϕ^'' (- i √(n) x; n)/ϕ (- i √(n) x; n) + ( ϕ^' (- i √(n) x; n)/ϕ (- i √(n) x; n))^2
The mass-weighted counterpart of the average log density, μ_M ≡⟨ s ⟩_M = ⟨ρ s ⟩ can be calculated using the generating function E with u = 1, k = 1, utilizing equation (<ref>),
μ_M = μ + √(n) σ Φ_1 (σ / √(n))
reducing to + σ^2 / 2 when n →∞.
Finally, it is possible to express the variance in s weighted by mass, σ_M^2 = ⟨ρ s^2 ⟩ - ⟨ρ s ⟩^2, using equation (<ref>) as follows
σ_M^2 = σ^2 Φ_2 (σ / √(n))
which reduces to σ_M = σ in the lognormal limit.
§.§ Energy-weighted density PDF
For the construction of the joint PDF of density and speed as outlined in sec. <ref>, the kinetic energy-weighted histogram of density must be known. We already explored the mass-weighted PDF, f^(M)_s (s; μ, σ, n) = e^s (s; μ, σ, n), and its statistics in the previous paragraph. However, equation (<ref>) indicates that the conversion from the mass-weighted to the energy-weighted instance of the density PDF would require marginalization of the full joint PDF weighted by a factor of v^2. Since the full PDF is not known, this approach is not feasible. To sidestep this problem, we propose an explicit form for the energy-weighted PDF based on the finite shock model. First, we notice that the mass- and energy-weighted standard deviations of logρ are approximately equal,
σ_E ≈σ_M
to a high degree of accuracy. The highest relative difference between the two is observed to be less than 3% in the compressive simulation with Mach number 2 (see Figure <ref>). This remarkable match allows for the following educated guess: since the width of the log density PDF does not change between the mass- and energy-weighted instances, we assume that the two share the same general shape. The only freedom left after this assumption has been made is an arbitrary argument shift, which can be expressed as
f^(E)_s (s) = f^(M)_s (s + δ s) = e^s + δ s (s + δ s; μ, σ, n).
As a consequence, the difference between the mean of s weighted by energy and mass is δ s; μ_M - μ_E = δ s. To determine δ s we look at the energy-weighted mean of 1/ρ,
⟨ e^-s⟩_E = ⟨ v^2 ⟩/⟨ e^s v^2 ⟩ = 𝔛
where 𝔛 = ⟨ v^2 ⟩ / ⟨ρ v^2 ⟩ was introduced in equation <ref>.
Going back to our proposed shape for f^(E)_s, we use this newly found mean value to determine δ s
𝔛 = ⟨ e^-s⟩_E = ∫_-∞^∞ s e^-s f^(E)_s (s) = e^δ s δ s = log𝔛
which translates to the following shift in μ_E
μ_E = μ_M - log𝔛
Given the shift, the energy-weighted PDF can be written using the finite shock model as
f^(E)_s (s; 𝔛, μ, σ, n) = 𝔛 e^s (s; μ - log𝔛, σ, n)
Figure <ref> shows the relative error between the estimators for σ_M, E and the values measured from the simulations as filled circles. The calculated value for σ_M was obtained from μ, σ, n using equation (<ref>), where n is given by equation (<ref>). Subsequently, σ_E is assumed to be equal to σ_M per equation (<ref>). Figure <ref> also shows the error between the estimated and measured means μ_M, E (filled stars) obtained from equations (<ref>, <ref>). These errors are taken relative to their respective σ_M,E, | μ_M, E^(data) - μ_M, E^(est.)| / σ_M, E^(data). This reduction was chosen due to the overall scale of a Gaussian-like distribution being set by its respective standard deviation σ; two Gaussian distributions with equal widths σ only differ substantially from each other if their means μ disagree significantly on the scale given by σ. The difference between the estimated and measured mass- and energy-weighted values of mean and standard deviation of log density is below 5 % for all simulations, demonstrating the accuracy and consistency of the approximations derived in this section.
Figure <ref> shows the plots of f^(E)_s (s; 𝔛, μ, σ, n) compared to the histograms extracted from the simulations, by using the values of 𝔛, μ, σ directly measured from the histograms. These values are used to determine n using equation (<ref>). Subsequently, equation (<ref>) with the determined parameters and the finite shock model for the volume-weighted basis is plotted alongside the data. The match between the model equipped by estimated parameters and the histograms is remarkable, considering the approximations made along the way.
§ PDF OF SPEED
The velocity field within an isothermally turbulent medium can, due to the chaotic nature of turbulence, also be treated as a random variable with certain statistical properties. While the exact distribution depends on the driving, several assumptions can be made to derive a simple distribution for the magnitude of velocity.
Assuming independence of all components of velocity and isotropic driving, the argument similar to that of <cit.> can be used to infer that the velocity is a Gaussian in all directions with variance equal in each component. Thus, the speed is drawn from the following Maxwellian distribution
f_v (v; M) = ℳ (v; M) = 4 π v^2/(2 π c_s^2 M^2)^3/2exp( - v^2/2 c_s^2 M^2),
where M is the 1D r.m.s. Mach number.
In what follows we will set c_s = 1 for the sake of brevity.
Although the vast majority of the literature regarding velocity fluctuations focuses on two-point statistics and power spectra, several previous works address the deviations from the ideal Maxwellian shape of the PDF of speed in compressible and incompressible isothermal turbulence <cit.>. The slope of the distribution above the maximum is observed to be steepened compared to the ideal Maxwellian, as can be seen from a direct comparison in Figure <ref>. The three-dimensional geometry of the simulation necessarily implies that the prefactor v^2 is preserved under very general assumptions about the original distribution for the velocity, f (v⃗) → f (v) ∼ v^2 + ⋯. Thus, this steepening can only be reflected as a higher-order term, for example a quartic correction inside the exponential,
f^(V,M)_v (v; M, b) = (v; M, b) ∝ v^2 exp[ - v^2/2 a^2( 1 - b + b v^2/a^2) ]
where a is a parameter carrying the units of speed, which is adjusted so that the root mean square of v matches the desired Mach number, 3 M^2 = ⟨ v^2 ⟩. The parameter b ∈ [0, 1] adjusts the amount of steepening; when b = 0, the ideal Maxwellian shape is restored, whereas for b = 1, the tail behaves like ∼ v^2 e^- v^4.
Note that the functional form of equation (<ref>) can be used to describe both the volume- and the mass-weighted PDF of speed, with distinct parameters M, b in each case. The kinetic energy-weighted histogram of speed can then be determined using equation (<ref>).
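As an illustration of how the scale a is fixed in practice, the constraint 3 M^2 = ⟨ v^2 ⟩ can be solved numerically; the sketch below (our own, with c_s = 1) uses the raw moments of the distribution evaluated at unit scale:

```python
import numpy as np
from scipy.integrate import quad

def scale_from_mach(M, b):
    """Scale a of the tilted distribution such that <v^2> = 3 M^2 (c_s = 1)."""
    # Raw moments J_m = int_0^inf x^m exp[-(x^2/2)(1 - b + b x^2)] dx at unit scale;
    # since <v^2> scales as a^2, the constraint gives a^2 = 3 M^2 J_2 / J_4.
    J = lambda m: quad(lambda x: x**m * np.exp(-0.5 * x**2 * (1 - b + b * x**2)),
                       0.0, np.inf)[0]
    return np.sqrt(3.0 * M**2 * J(2) / J(4))
```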
The difference between the newly introduced correction and its Maxwellian counterpart when b = 0, apart from the shape of the PDF, manifests in the following ratio of the expectation values of powers of magnitude of speed
⟨( v⃗·v⃗)^α⟩/⟨( v⃗·v⃗)^α⟩_(b=0)≡ h_α (b).
The function h_α only depends on the power, α, and the tilt parameter, b. While it does not have an analytic form, it can easily be tabulated and inverted numerically.
Specifically, for the pure Maxwellian, the expected results are
⟨( v⃗·v⃗)^α⟩_(b=0) = ∫_0^∞ v^2 α f_v (v; M) v = 2^α + 1/√(π) M^2 αΓ (α + 3/2)
which simplifies to (2n+1)!! M^2n for integer α = n, however, extra care should be taken for half-integer α, as the double-factorial formula does not match the form in equation (<ref>). Lower values of α are most numerically reliable, for example, for α = 1/2, we can relate the ensemble average of ⟨ v ⟩ to the sloping parameter b as follows
√(π/8)⟨ v ⟩/M = √(π/8)/M⟨√(v⃗·v⃗)⟩ = h_1/2 (b) → b = h_1/2^-1( √(π/8)⟨ v ⟩/M)
This equation can be used to estimate the value of the parameter b for a given set of measured ensemble averages v and the Mach number M. Table <ref> lists the simulation parameters along with the ensemble averages of v and Mach number (both volume- and mass-weighted). Figure <ref> compares the ideal Maxwellian shape, obtained from the Mach number M alone, with the correction (<ref>), obtained by additionally measuring the parameter v ≡⟨ v ⟩ for each simulation. While the Maxwellian form fails to fit the data for v > M due to the prominent steepening of the slope of the distribution in this region, the quartic correction approximates the dataset much better.
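The inverse h_1/2^-1 has no closed form, but it is easily obtained numerically. The sketch below (an illustration of the procedure rather than the actual analysis pipeline) tabulates h_1/2(b) at unit scale, using the fact that the ratio ⟨ v ⟩/√(⟨ v^2 ⟩) does not depend on a, and inverts the tabulated relation by interpolation:

```python
import numpy as np
from scipy.integrate import quad

def h_half(b):
    """h_{1/2}(b) = sqrt(pi/8) <v> / M for the tilted distribution (scale-free)."""
    J = lambda m: quad(lambda x: x**m * np.exp(-0.5 * x**2 * (1 - b + b * x**2)),
                       0.0, np.inf)[0]
    J2, J3, J4 = J(2), J(3), J(4)
    return np.sqrt(3.0 * np.pi / 8.0) * J3 / np.sqrt(J2 * J4)

b_tab = np.linspace(0.0, 1.0, 201)
h_tab = np.array([h_half(b) for b in b_tab])     # increases monotonically from 1

def b_from_mean_speed(v_mean, M):
    """Invert sqrt(pi/8) <v> / M = h_{1/2}(b) by interpolation on the table."""
    return float(np.interp(np.sqrt(np.pi / 8.0) * v_mean / M, h_tab, b_tab))
```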
In the line of the original argument for the Maxwellian distribution of speeds based on the rotational symmetry and independence of individual components of velocity, one might wonder which assumption (if not both) is violated. Arguments from the power spectrum of velocity <cit.> and direct numerical simulations <cit.> show, that the tails of the PDFs of the individual components of velocity are sub-Gaussian, which does not leave any indication of dependence or independence of the components. The full study of the velocity statistics is interesting, but outside the scope of this work.
§ JOINT PDF OF DENSITY AND SPEED: GENERAL THEORY
We now turn to the joint distribution of density and speed, f_(s,v)(s,v). Having already described the statistics of each variable separately, the dependence between the two comes to question, as
(s, v) independent f_(s,v) (s,v) = f_s (s) f_v (v).
If the random variables were truly independent, the joint PDF would be fully described by the product of its marginalized parts, f_(s,v) (s,v) = f_s (s) f_v (v). Conversely, if there is dependence between s and v, f_(s,v) is not the product of the marginalized distributions. We will first show that such a dependence is indeed present, then develop a model for the actual joint PDF. Our correction will be developed in the next section.
To demonstrate dependence between s and v, we exploit another, equivalent, definition of independence of random variables. For any two functions h_1(s), h_2(v): ⟨ h_1 (s) h_2 (v) ⟩ = ⟨ h_1 (s) ⟩⟨ h_2 (v) ⟩ iff s, v are independent random variables. That is, the average of the product is the product of the averages, iff s and v are independent. Conversely, if we find a certain combination for which ⟨ h_1 (s) h_2 (v) ⟩≠⟨ h_1 (s) ⟩⟨ h_2 (v) ⟩, the variables must be dependent.
One such choice is h_1(s)=s and h_2(v)=v^2. We will show that ⟨ρ v^2 ⟩≠⟨ρ⟩⟨ v^2 ⟩. We can interpret this as the mass-weighted r.m.s. Mach number, also related to the mean kinetic energy density ε,
ε = E / V = ⟨1/2ρ v^2 ⟩ = 3/2ρ_0 M_M^2
where E is the total kinetic energy, E = ε V. We parameterize the correlation using 𝔛=⟨ v^2⟩/⟨ρ v^2⟩ and show that it is different from one, demonstrating dependence.
Table <ref> features all parameters measured from the simulations. As seen from the values of 𝔛, the values of ⟨ v^2 ⟩ and ⟨ρ v^2 ⟩ differ by at least 10% in all simulations which indicates, that density and speed are correlated and therefore, to some extent, dependent quantities.
The non-zero correlation between density and speed complicates the joint statistics, since the joint PDF cannot be written as a product of the 1D marginalized PDFs. However, motivated by the fact that the product of 1D marginalized PDFs is relatively close to the joint PDF, in the following section <ref> we propose a simple correction term added to the product of marginalized distributions, allowing for a simple, consistent description of the joint statistics.
§.§ Correction term to the joint PDF
The relative proximity between the true joint PDF and the product of its marginalized subparts leads us to believe that a simple, small correction to the latter can be used to model the dependence between s and v,
f_(s,v) (s,v) = f_s (s) f_v (v) + g (s, v).
Given full freedom in g, this approach can perfectly describe the joint PDF. However, full knowledge of such a correction is akin to knowing the joint PDF itself. Instead, we resort to a reasonable approximation: let us assume that the function g can also be written as a product of two single-variable functions,
g (s, v) = g_s (s) g_v (v).
The main task is to determine the single-variable functions g_s, v using the various methods of weighting outlined in <ref>. Note that, since integrating out one of the variables must yield the marginalized PDF of the other variable, the integral over each single-variable g_s,v must be equal to zero. Therefore, to reveal the correction term in each variable, we need a way to break this symmetry by introducing a factor involving one of the variables. This can be done using the paradigm of weighted histograms, as weighting by different positive quantities naturally imposes factors involving density and speed.
To proceed, we consider the mass-weighted joint PDF of s and v as the basis for our calculations,
f_(s,v)^(M) (s, v) = f_s^(M) (s) f_v^(M) (v) + g_s^(M) (s) g_v^(M) (v),
and compare it to the volume- and kinetic energy-weighted joint PDFs, that can be related to the mass-weighted basis using equations (<ref>, <ref>)
f_(s,v)^(V) (s, v) = e^-s[ f_s^(M) (s) f_v^(M) (v) + g_s^(M) (s) g_v^(M) (v) ]
f_(s,v)^(E) (s, v) = v^2/3 M_M^2[ f_s^(M) (s) f_v^(M) (v) + g_s^(M) (s) g_v^(M) (v) ]
The factors introduced this way break the symmetry of the correction terms under integration over the involved variable. Firstly, by definition, integrating over the mass-weighted instances of the joint PDF yields the baseline mass-weighted marginalized distribution of the other variable
∫_-∞^∞ s f_(s,v)^(M) (s, v) = f_v^(M) (v)
∫_0^∞ v f_(s,v)^(M) (s, v) = f_s^(M) (s)
If we now use the fact, that ⟨ e^-s⟩_M = ⟨ 1 ⟩ = 1 and ⟨ v^2 ⟩_M = ⟨ e^s v^2 ⟩ = 3 M_M^2 = 2 ε, we can explicitly integrate out s in the volume-weighted case and v in the energy-weighted instance to get
∫_-∞^∞ s f_(s,v)^(V) (s, v) ≡ f_v^(V) (v) = f_v^(M) (v) + A g_v^(M) (v)
∫_0^∞ v f_(s,v)^(E) (s, v) ≡ f_s^(E) (s) = f_s^(M) (s) + B g_s^(M) (s)
where A, B are non-zero constants associated with the integrals of the mass-weighted g-functions of variable s and v with additional factors of e^-s and v^2 in the density and speed terms, respectively. As we can see, the terms associated with different weightings break the symmetry of an otherwise identically vanishing integral. Solving equations (<ref>) and (<ref>) for the g-functions, we find:
g_s^(M) (s) ∼ f_s^(E) (s) - f_s^(M) (s)
g_v^(M) (v) ∼ f_v^(V) (v) - f_v^(M) (v).
The corrected joint PDF of logρ and v can be found by inserting these into equation (<ref>) to find
f_(s,v)^(M) (s,v) = f_s^(M) (s) f_v^(M) (v) +
+ C ( f_s^(E) (s) - f_s^(M) (s) ) ( f_v^(V) (v) - f_v^(M) (v) )
where C is a constant accommodating the proportionality relation of the g-terms to the differences in the brackets.
This method is successful under two conditions: first, we had to assume that the correction g can be written as a product of two single-variable functions. Second, the single-variable functions must be well described by the finite shock model function and the tilted Maxwellian, for some choice of the parameters, regardless of the method of weighting. It should be noted that, although the derivation mainly focuses on the mass-weighted version of the histogram, this functional form can be converted to the volume-weighted instance of the joint PDF by multiplying by a factor e^-s. Since f^(M)_s (s) = e^s f^(V)_s (s), we can write the volume-weighted joint PDF as follows
f_(s,v)^(V) (s,v) ≈ f_s^(V) (s) f_v^(M) (v) +
+ C ( e^-s f_s^(E) (s) - f_s^(V) (s) ) ( f_v^(V) (v) - f_v^(M) (v) )
The expression for C,
C = (𝔛 - 1)^-1,
can be found by multiplying equation (<ref>) by v^2, integrating over speed and demanding both sides to be equal to 3 M_M^2 f^(E)_s (s).
§ JOINT PDF: SPECIFIC REALIZATIONS
In what follows we suggest several choices of basis functions to build up the joint distribution; first, we use the simplest basis possible, consisting of Gaussian in s and Maxwellian in v. We then utilize our updated marginalized pictures using the finite shock model and a tilted Maxwellian to obtain a much better description of the joint distribution.
§.§ Minimal model
In this section we describe the joint PDF using the simplest basis distributions; the normal distribution 𝒩 (s; μ, σ) with a mean μ and variance σ^2, and a simple Maxwellian ℳ (v; M) where M is the 1D r.m.s. Mach number. The minimum amount of parameters needed to describe the distribution is 3; M, 𝔛, σ. These three allow to directly describe the volume-weighted distribution of density, approximated by 𝒩 (s; μ, σ) where μ = -σ^2 / 2, volume-weighted distribution of speed approximated by ℳ (v; M) and also the mass-weighted distribution of speed using the Maxwellian with the parameter M_M = M / √(𝔛). The energy-weighted distribution of log density is approximated as exp (s + log𝔛) 𝒩 (s; μ - log𝔛, σ). With these considerations in mind, the joint PDF can be then written as
f^(V)_(s,v) (s,v; M, 𝔛, σ) = 𝒩 (s; μ, σ) ℳ (v, M_M) +
+ (𝔛 - 1)^-1( 𝔛 𝒩 (s; μ - log𝔛, σ) - 𝒩 (s; μ, σ) ) ×
× ( ℳ (v; M) - ℳ (v; M_M) )
While this model does not aspire to fit the true shape of the 2D histogram, it fully preserves the measured parameters and expected relations between them.
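For reference, the minimal model translates directly into code; the sketch below (our own transcription, with c_s = 1 and assuming 𝔛≠ 1) evaluates it from the three measured parameters:

```python
import numpy as np

def gauss(s, mu, sigma):
    return np.exp(-(s - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def maxwell(v, M):
    return 4 * np.pi * v**2 / (2 * np.pi * M**2)**1.5 * np.exp(-v**2 / (2 * M**2))

def joint_minimal(s, v, M, X, sigma):
    """Volume-weighted minimal model of the joint PDF of s = log(rho) and v."""
    mu = -0.5 * sigma**2                 # fixes <rho> = 1 in the lognormal limit
    M_M = M / np.sqrt(X)                 # mass-weighted 1D Mach number
    base = gauss(s, mu, sigma) * maxwell(v, M_M)
    corr = (X * gauss(s, mu - np.log(X), sigma) - gauss(s, mu, sigma)) \
           * (maxwell(v, M) - maxwell(v, M_M)) / (X - 1.0)
    return base + corr
```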
Figure <ref> shows the joint PDF of s (horizontal axis) and v (vertical axis). Histograms obtained from the simulated data are displayed via solid contours and color denoting the fraction of probability, our minimal model of the joint PDF is overlaid as dashed contours. Since the minimal model only uses three parameters directly measured from the data, it cannot, in its simplicity, fully capture the joint PDF. The most jarring difference occurs in the compressively driven simulations with high r.m.s. Mach number, manifesting in a large shift of the maximum. This is due to a crude approximation μ = - σ^2 / 2. In reality, μ is far away from this value, moreover, the true maximum of the density PDF is further shifted to the right due to the very low number of shocks inferred from these datasets.
While the maximum of the proposed simple model is shifted with respect to the true maximum of the distribution due to the approximations we used, the general shape matches that of the measured histograms.
§.§ Detailed basis
For the final, most detailed form of our model of the joint distribution, we replace each function with its more detailed counterpart; the finite shock model function (s; μ, σ, n) instead of a simple Gaussian and the tilted Maxwellian for speed (v; M, b) in place of the ideal Maxwellian. This way, we need to provide 6 parameters to fully describe the joint distribution; M, 𝔛, u, u_M, μ, σ, where u, u_M are two new measured quantities equal to u=⟨√(v⃗·v⃗)⟩ and u_M=⟨√(v⃗·v⃗)⟩_M = ⟨ρ√(v⃗·v⃗)⟩, which define b and b_M via equation (<ref>). Parameter n is inferred from μ, σ using equation (<ref>).
The function can then be written as
f^(V)_(s, v) (s, v; M, 𝔛, u, u_M, μ, σ) = (s; μ, σ, n) (v; M_M, b_M) +
+ (𝔛 - 1)^-1( 𝔛 (s; μ - log𝔛, σ, n) - (s; μ, σ, n) ) ×
×( (v; M, b) - (v; M_M, b_M) )
Figure <ref> shows the comparison between the model with detailed basis to the histograms extracted from the datasets. Notice the remarkable match between the two without any additional fitting. Even the noisiest dataset, the compressible Mach 8 simulation, is described very closely by our model in the regions with low noise and extrapolates naturally into the region with larger density and higher noise.
§ MOMENTS OF THE JOINT DISTRIBUTION
To corroborate our model of joint distribution, we compare various moments, C_ℓ, m = ⟨ s^ℓ v^2m⟩ between our model and the data. The moments implied from our model can be expressed via the measured quantities as follows
C_ℓ,m = (2m+1)!! M_M^2m[ E (0, ℓ; μ, σ, n) h_m (b_M) +
(𝔛 - 1)^-1( 𝔛 E (0, ℓ; μ - log𝔛, σ, n) - E (0, ℓ; μ, σ, n) ) ×
( 𝔛^m h_m (b) - h_m (b_M) ) ].
In case of the simple model using parameters M, 𝔛, σ, the correlators can be obtained from the same formula by taking n →∞, μ = -σ^2 / 2 and b = b_M = 0.
Figure <ref> shows the ratio of the calculated vs. simulated moments of the joint distribution, C_ℓ,m for integers 1 ≤ℓ, m ≤ 5. For the sake of clarity, all moments are normalized by their uncorrelated value assuming lognormal density and Maxwellian speed, C̃_ℓ, m = C_ℓ, m / ( ⟨ s^ℓ⟩⟨ v^m ⟩ ). Moments generated using the simple model are depicted by red points, those of the detailed model by blue points. The shape of the points represents the size of ℓ^2+m^2; the lowest powers are denoted by circles, intermediate powers by diamonds and the highest combinations of powers by stars. It can be seen that for most combinations of exponents, the detailed model matches the simulated moments substantially better than the simple model.
§.§ Correlation coefficient
The Pearson correlation coefficient corr (s,v) is a special case of a normalized moment of the joint distribution and can be expressed using our model. The term ⟨ s v ⟩ needed to calculate corr (s,v) can be obtained from equation (<ref>) by setting ℓ = 1, m = 1/2,
corr (s,v) = (⟨ s v ⟩ - ⟨ s ⟩⟨ v ⟩)/(σ_s σ_v) = - (u - u_M)/(σ√(3 M^2 - u^2)) 𝔛log𝔛/(𝔛 - 1)
This expression is compared to the measured correlation coefficients in Figure <ref>. The largest correlation coefficients, occurring in the datasets with the lowest Mach numbers, match the measurement more accurately, whereas with increasing Mach number and decreasing correlation, the estimate of the correlation deviates somewhat from the measured value.
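As a consistency check, the left-hand side can also be measured directly from a snapshot; on a uniform grid, volume weighting amounts to a plain average over cells (the arrays rho and v below are hypothetical cell values):

```python
import numpy as np

def pearson_corr_s_v(rho, v):
    """Volume-weighted Pearson correlation between s = log(rho) and the speed v."""
    return np.corrcoef(np.log(rho).ravel(), v.ravel())[0, 1]
```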
§ CONCLUSIONS
In the present work we developed a new model of the joint distribution of log density s and speed v by introducing a correction term to the product of marginalized 1D PDFs of the individual variables. By marginalizing over differently weighted instances of the proposed 2-dimensional PDF we were able to describe the correction term using a simple set of 1D distributions of each variable weighted by volume, mass or kinetic energy. We proposed 3 different shapes of the overall distribution, depending on the complexity of the basis functions; ranging from the simplest Gaussian in s and Maxwellian in v to the most detailed basis comprised of the finite shock model in s and tilted Maxwellian (with a quartic correction) in v. Along the way we found that the kinetic energy-weighted histogram of log density has the same overall shape as its mass-weighted counterpart, and is shifted by δ s = log𝔛 = log( ⟨ v^2 ⟩ / ⟨ρ v^2 ⟩) to the left. The overall match between the shapes is closely related to the fact that σ_M = σ_E, i.e. the mass- and kinetic energy-weighted variances of log density are equal to each other. The shift between the PDFs can be interpreted as the difference between the mass- and kinetic energy-weighted means of log density, μ_M - μ_E = log𝔛.
Our model was confronted with simulated data from Enzo with compressive, mixed and solenoidal driving, each at 4 different 1D sonic Mach numbers M = 1, 2, 4, 8. The parameters of the model are directly measured from each simulation, with no additional fitting needed. The model using the detailed basis functions matches the simulated histograms to a high degree of precision even when density and speed are correlated to a considerable degree. The match between each model and the histograms is measured by the L_1 norm, and for the detailed basis, the overall difference is at most 4.5% in the worst case scenario. It should be noted that feeding the model parameters taken from an ensemble leads to a reasonable match even upon re-weighting by mass or energy, e.g. see Figure <ref>. This is opposed to fitting one of the instances (for example the volume-weighted histogram) by varying the parameters of the model, which, however, makes the match between a differently weighted histogram and its measured counterpart suboptimal.
In addition to matching histograms we computed a set of 25 correlation coefficients for each model, ⟨ s^ℓ v^2 m⟩ (1 ≤ℓ, m ≤ 5) that are compared to the coefficients measured directly from each simulation. Unsurprisingly, the model utilizing the detailed basis functions provides the closest match between the estimated values of the coefficients and their measured counterparts, with the factor of 2 at most, occurring in the case of the highest powers in ℓ, m.
In this work we focused on the supersonic turbulent flows, in which the density and speed become less correlated with increasing Mach number, regardless of the forcing mode. At the same time, the number of shocks, inferred from the statistics of density alone, decreases with Mach number, resulting in a more tilted distribution. Both of these effects can be explained in the same framework of shocks and rarefaction waves. The shock waves propagating through a supersonic, turbulent medium exhibit, on average, higher density with increasing Mach number. However, due to overall mass conservation, the volume available for such shock to occupy is smaller, resulting in a limited longitudinal size of the shock wave. On the other hand, rarefaction waves, following behind the shocks, tend to reset the density towards the mean. Since the shock waves are faster and smaller in more turbulent gas, the number of shocks experienced by the gas before it resets to ambient density is smaller. This is paralleled by the weakening correlation between the density and speed.
Overall, our model suggests that the correlations between density and speed are an integral part of the complete picture of the statistics of a turbulent, supersonic, isothermal flow. Moreover, with the knowledge of the full joint PDF of density and speed, further insight into the statistics of turbulence can be attained, such as exploring the statistics of thermal and kinetic energy.
§ DATA AVAILABILITY
Simulation data present here is available on request ([email protected]).
§ ACKNOWLEDGEMENTS
tocsectionAcknowledgements
Support for this work was provided in part by the National Science Foundation
under Grant AAG-1616026 and AAG-2009870. Simulations were performed on Stampede2, part of the Extreme Science and Engineering Discovery Environment <cit.>, which is supported by National Science Foundation grant number
ACI-1548562, under XSEDE allocation TG-AST140008.
|
http://arxiv.org/abs/2307.04062v1 | 20230708235140 | CR compactification for asymptotically locally complex hyperbolic almost Hermitian manifolds | [
"Alan Pinoy"
] | math.DG | [
"math.DG",
"53C21, 53C35, 53C55, 58J60"
] |
In this article, we consider a complete, non-compact almost Hermitian manifold whose curvature is asymptotic to that of the complex hyperbolic plane.
Under natural geometric conditions, we show that such a manifold arises as the interior of a compact almost complex manifold whose boundary is a strictly pseudoconvex CR manifold.
Moreover, the geometric structure of the boundary can be recovered by analysing the expansion of the metric near infinity.
§ INTRODUCTION
The complex hyperbolic space is the unique simply connected, complete, Kähler manifold of constant negative holomorphic sectional curvature (we adopt the convention that this constant is -1).
It is the complex analogue of the real hyperbolic space, and similarly to its real counterpart, the complex hyperbolic space can be compactified by a sphere at infinity.
This sphere at infinity carries a natural geometric structure, which is closely related to the Riemannian geometry of the complex hyperbolic space: their respective groups of automorphisms are in one-to-one correspondence.
This structure is that of a strictly pseudoconvex CR manifold, namely, the CR sphere (𝕊,H,J).
If 𝕊 is thought of as the unit sphere of ^N, then H = (T𝕊)∩ (iT𝕊) is the standard contact distribution, and J is given by the multiplication by i in H.
Set ρ = e^-r with r the distance function to a fixed point.
Then ρ is a defining function for the boundary of the above compactification, and as ρ→ 0, the complex hyperbolic metric has the asymptotic expansion
1/ρ^2 dρ⊗ dρ + 1/ρ^2θ⊗θ + 1/ργ + o(1),
with θ the standard contact form of 𝕊, and γ = dθ|_H× H(·,J·) the associated Levi-form.
The strict pseudoconvexity of the boundary means that the Levi-form is positive definite on H.
The aim of this paper is to construct a similar compactification by a strictly pseudoconvex CR structure for complete, non-compact, almost Hermitian manifolds satisfying some natural geometric conditions.
These conditions are the existence of a convex core (called an essential subset), the convergence of the curvature tensor R to that of the complex hyperbolic space R^0 near infinity, and the fact that the underlying almost complex structure J is asymptotically Kähler at infinity.
More precisely, we show the following.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of real dimension at least 4, which admits an essential subset.
Let r be the distance function to any compact subset.
Assume that there exists a > 1 such that
R-R^0_g, ∇ J_g, ∇ R_g, and ∇^2 J_g = 𝒪(e^-ar).
Then (M,J) is the interior of a compact almost complex manifold (M̅,J̅), whose underlying almost complex structure J̅ is continuous.
The hyperplane distribution H_0 = (T∂M̅)∩ (J̅T∂M̅) and the restriction J_0 = J̅|_H_0 are of class 𝒞^1.
Moreover, H_0 is a contact distribution, and J_0 is formally integrable, and (∂M̅,H_0,J_0) is a strictly pseudoconvex CR manifold.
In addition, the metric g is asymptotically complex hyperbolic: there exists a defining function ρ for the boundary, a 𝒞^1 contact form η^0 calibrating H_0, and a continuous Carnot metric γ, with η^0 and γ^0 = γ|_H_0× H_0 > 0 of class 𝒞^1, such that
g ρ→ 0=1/ρ^2 dρ⊗ dρ + 1/ρ^2η^0⊗η^0 + 1/ργ +
𝒪_g(ρ^a-1) if 1 < a < 3/2,
𝒪_g(ρ^1/2lnρ) if a = 3/2,
𝒪_g(ρ^1/2) if a > 3/2.
The contact form and the Carnot metric are related by the relation dη^0|_H_0× H_0(·,J_0·) = γ^0.
This result gives a geometric characterisation of complete, non-compact, almost Hermitian manifolds admitting a compactification by a strictly pseudoconvex CR structure.
Notice the similarity between equations (<ref>) and (<ref>).
The real analogue of this result, involving a compactification by a conformal boundary for asymptotically locally real hyperbolic manifolds, has been proven by E. Bahuaud, J. M. Lee, T. Marsh and R. Gicquaud <cit.>, pursuing the seminal work of M. T. Anderson and R. Schoen <cit.>.
In a previous paper <cit.>, the author proved a similar result in the Kähler case.
The improvement here is twofold.
First, we are able to remove the Kähler assumption, which was of great importance in the previous proof.
Here, the almost complex structure is no more assumed to be parallel, and in fact, needs not even be formally integrable, nor the associated almost symplectic form needs to be closed.
In particular, the result applies to perturbations of asymptotically complex hyperbolic Kähler metrics which are only almost Hermitian.
Second, the strict pseudoconvexity of the boundary is obtained with an exponential decay of order a > 1, while the earlier version of this result needed a decay of order a > 3/2.
Note that this has a cost: the Carnot metric can be shown to be 𝒞^1 only in the direction of the contact distribution.
This is the reason why the extended almost complex structure J̅ is only continuous in the transverse direction.
Both improvements imply that the set of examples to which the result applies is much increased.
A compactification by a CR structure for some complete, non-compact, Kähler manifolds was already given by J. Bland <cit.>, under assumptions that are rather analytic and not totally geometric.
To obtain a continuous compactification with no regularity on the CR structure, these assumptions imply the a posteriori estimates R-R^0_g, ∇ R_g = 𝒪(e^-4r)[At first, one sees that these assumptions imply that R-R^0_g = 𝒪(e^-3r) and ∇ R_g = 𝒪(e^-4r).
Since on a Kähler manifold it holds that ∇ R^0 = 0, applying Kato's inequality to R-R^0 yields the claimed estimate.].
A strictly pseudoconvex boundary of class 𝒞^1 is obtained under assumptions that imply the even stronger estimates R-R^0_g,∇ R_g,∇^2 R_g = 𝒪(e^-5r).
It was proven by O. Biquard and M. Herzlich <cit.> that for asymptotically complex hyperbolic Kähler-Einstein metrics in real dimension 4, the curvature tensor has the form R = R^0 + Ce^-2r + o_g(e^-2r), where C is a non-zero multiple of the Cartan tensor of the CR boundary.
It is known that the Cartan tensor vanishes exactly when the CR structure is locally equivalent to that of the sphere (such CR manifolds are called spherical).
Many examples are then not covered by J. Bland's results.
The paper is organized as follows.
In Section <ref>, we set up the notations and explain the main idea of the proof of our main Theorem.
In Section <ref>, we compute the expansion of the metric near infinity and prove the existence of the objects η^0 and γ, see Theorem <ref>.
Section <ref> is dedicated to prove the existence of J_0, see Theorem <ref>.
At this step, η^0, γ and J_0 are continuous tensor fields.
We show in Section <ref> that they have higher regularity and that they induce a strictly pseudoconvex CR structure, see Theorems <ref>, <ref> and <ref>.
Finally, we prove our main Theorem in Section <ref>.
§ PRELIMINARIES
§.§ Notations
Let (M,g) be a Riemannian manifold.
Its Levi-Civita connection is denoted by ∇.
Our convention on the Riemann curvature tensor is Besse's convention <cit.>, namely
R(X,Y)Z = -(∇^2_X,Y Z - ∇^2_Y,XZ) = ∇_[X,Y]Z - ∇_X(∇_YZ) + ∇_Y(∇_XZ),
for vector fields X, Y and Z.
By abuse of notation, we still denote by R its four times covariant version: this means that we write R(X,Y,Z,T) = g(R(X,Y)Z,T) for vector fields X, Y, Z and T.
With this convention, the sectional curvature of a tangent plane P with orthonormal basis {u,v} is sec(P) = sec(u,v) = R(u,v,u,v).
§.§.§ Essential subsets and normal exponential map
Following <cit.>, an essential subset K ⊂ M is a codimension 0, compact, totally convex submanifold, with smooth boundary which is oriented by a unit outward vector field ν, and such that sec(M∖ K) < 0.
In that case, the normal exponential map
ℰ : ℝ_+ ×∂ K ⟶M̅∖̅ ̅K̅, (r,p) ⟼exp_p(rν_p)
is a diffeomorphism.
The level hypersurface at distance r above K is denoted by ∂ K_r.
For r ⩾ 0, ℰ induces a diffeomorphism ℰ_r: ∂ K→∂ K_r given by ℰ_r(p)=ℰ(r,p); the induced Riemannian metric ℰ_r^*g on ∂ K is denoted by g_r.
Gauss Lemma states that ℰ^*g = dr ⊗ dr + g_r.
Note that g_0 = g|_∂ K.
The gradient of the distance function r on M̅∖̅ ̅K̅, called the radial vector field, is denoted by ∂_r.
A radial geodesic is a unit speed geodesic ray of the form r ↦ℰ(r,p) with p∈∂ K.
Note that the restriction of ∂_r to a radial geodesic is its tangent vector field: therefore, ∂_r satisfies the equation of geodesics ∇_∂_r∂_r=0.
More generally, a vector field X on M̅∖̅ ̅K̅ is called radially parallel if ∇_∂_rX=0.
The shape operator S is the field of symmetric endomorphisms on M̅∖̅ ̅K̅ defined by SX = ∇_X∂_r.
The normal Jacobi field on M̅∖̅ ̅K̅ associated to a vector field v on ∂ K is defined by Y_v = ℰ_*v.
Such vector fields are orthogonal to ∂_r and commute with the radial vector field ∂_r.
They satisfy the Jacobi field equation ∇_∂_r(∇_∂_rY_v) = -R(∂_r,Y_v)∂_r, and their restrictions to any radial geodesic are thus Jacobi fields.
Normal Jacobi fields are related to the shape operator S by the first order linear differential equation ∇_∂_rY_v = SY_v.
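Note an immediate consequence for the induced metrics, obtained by combining the last relation with the symmetry of S and the compatibility of ∇ with g: for any vector fields u and v on ∂ K, one has
∂_r ( g(Y_u,Y_v) ) = g(∇_∂_rY_u,Y_v) + g(Y_u,∇_∂_rY_v) = 2 g(SY_u,Y_v),
which ties the radial evolution of the metrics g_r to the shape operator.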
§.§.§ Almost Hermitian manifolds
An almost Hermitian manifold (M,g,J) is a Riemannian manifold (M,g) together with an almost complex structure J which is compatible with the metric, in the sense that it induces linear isometries in the tangent spaces: one has g(JX,JY) = g(X,Y) for all vector fields X and Y.
Note that this implies that J is skew-symmetric (in fact, these two properties are equivalent).
A tangent plane P⊂ TM is called J-holomorphic (respectively totally real) if JP=P (respectively JP⊥ P).
The constant -1 J-holomorphic sectional curvature tensor R^0 on (M,g,J) is defined by the equality
R^0(X,Y)Z = 1/4( g(Y,Z)X - g(X,Z)Y + g(JY,Z)JX - g(JX,Z)JY + 2g(X,JY)JZ)
for X, Y and Z vector fields on M.
Similarly to the Riemann curvature tensor, we still denote by R^0 its fully covariant version, meaning that R^0(X,Y,Z,T) = g(R^0(X,Y)Z,T) for all vector fields X, Y, Z and T.
Note that R^0_g ⩽3/2.
For any pair of orthogonal unit tangent vectors u and v, R^0(u,v,u,v) = -1/4(1+3g(Ju,v)^2);
the minimal value -1 (respectively the maximal value -1/4) is achieved precisely when {u,v} spans a J-holomorphic plane (respectively a totally real plane).
In the specific case of the complex hyperbolic space, R^0 coincides with the curvature tensor of the complex hyperbolic metric (see <cit.>).
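For the reader's convenience, here is the short computation behind the previous formula (a direct verification, with u and v orthonormal): using g(u,v) = 0, g(Ju,u) = 0 and the skew-symmetry of J, the defining formula for R^0 gives
R^0(u,v)u = 1/4( -v + g(Jv,u)Ju + 2g(u,Jv)Ju ) = 1/4( -v + 3g(u,Jv)Ju ),
so that
R^0(u,v,u,v) = g(R^0(u,v)u,v) = 1/4( -1 + 3g(u,Jv)g(Ju,v) ) = -1/4( 1 + 3g(Ju,v)^2 ).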
§.§.§ CR manifolds
A CR manifold (for Cauchy-Riemann) is a triplet (M,H,J) where H is a tangent distribution of hyperplanes and J is an almost complex structure on H, such that the distribution H^1,0 = { X - iJX | X ∈ H}⊂ TM⊗_ is involutive (i.e. [X,Y] is a section of H^1,0 whenever X and Y are).
In this case, J is said to be formally integrable.
A CR manifold is called strictly pseudoconvex if there exists a contact form η calibrating the distribution H (i.e. H=ker η and dη induces a non-degenerate 2-form on H), and if the associated Levi form dη|_H× H(·,J·) is positive definite on H.
§.§ The asymptotic conditions
Throughout the paper, (M,g,J) will denote a complete, non-compact, almost Hermitian manifold of dimension 2n+2⩾ 4, with an essential subset K.
We define the following asymptotic geometric conditions.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold.
Let r be the distance function to a compact subset.
* We say that (M,g,J) satisfies the ALCH condition of order a > 0, for asymptotically locally complex hyperbolic[for this condition implies that the local geometry at infinity resembles that of the complex hyperbolic space], if R-R^0_g = 𝒪(e^-ar).
* We say that (M,g,J) satisfies the AK condition of order a > 0, for asymptotically Kähler, if ∇ J_g = 𝒪(e^-ar).
Note that R^0_g ⩽3/2.
The ALCH condition of order a > 0 implies R_g = 𝒪(1).
One readily verifies that the ALCH condition implies that the sectional curvature of M is bounded as follows: -1 + 𝒪(e^-ar) ⩽sec⩽ - 1/4 + 𝒪(e^-ar).
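Indeed, if {u,v} is an orthonormal basis of a tangent plane P, then
sec(P) = R(u,v,u,v) = R^0(u,v,u,v) + (R-R^0)(u,v,u,v),
where |(R-R^0)(u,v,u,v)| ⩽R-R^0_g = 𝒪(e^-ar), while -1 ⩽ R^0(u,v,u,v) ⩽ -1/4 by the discussion of the previous subsection.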
The lower bound implies the following Lemma, proven in <cit.>.
Assume that (M,g,J) is a complete, non-compact, almost Hermitian manifold, admitting an essential subset K, and satisfying the ALCH condition of order a > 0.
Let S = ∇∂_r be the shape operator of the level hypersurfaces above K.
Then one has
S_g ⩽ 1 + 𝒪(e^-ar) if 0 < a < 2,
𝒪((r+1)e^-2r) if a = 2,
𝒪(e^-2r) if a > 2.
In any case, one has S_g = 𝒪(1), and exp(∫_0^r (S_g-1)) = 𝒪(1), since in each of the three cases S_g-1 is bounded above by a function of r that is integrable on [0,∞).
We also define the following analogous asymptotic conditions of higher order.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold.
Let r be the distance function to a compact subset.
* We say that (M,g,J) satisfies the ALCH+ condition of order a > 0 if one has the estimates R-R^0_g = 𝒪(e^-ar) and ∇ R_g = 𝒪(e^-ar).
* We say that (M,g,J) satisfies the AK+ condition of order a > 0 if one has the estimates ∇ J_g = 𝒪(e^-ar) and ∇^2 J_g = 𝒪(e^-ar).
Under the AK condition of order a > 0, one has ∇ R^0_g = 𝒪(e^-ar).
Thus, under the AK condition of order a > 0, Kato's inequality shows that the ALCH+ condition of order a > 0 is equivalent to R-R^0_g r →∞⟶ 0 and ∇(R-R^0)_g = 𝒪(e^-ar).
In practice, r will be the distance function to the essential subset K.
The constants involved in the previous estimates are global.
Moreover, in what follows, all estimates of the form f = 𝒪(h) will involve a constant that is global.
When built out of the choice of a reference frame (which will soon be called an
admissible frame, see Definition <ref>), the constant will be independent of that choice.
By the expressions Y_u_g = 𝒪(u_g_0e^r) or Y_u = 𝒪_g(u_g_0e^r), we mean that there exists C > 0 such that for any vector field u on ∂ K, one has
∀ r ⩾ 0, ∀ p ∈∂ K, (Y_u)_ℰ(r,p)_g ⩽ C u_p_g_0e^r.
§.§ Outline of the proof
If (M,g,J) is assumed to be Kähler (that is, if ∇ J=0), the author showed in a previous paper <cit.> the following result.
[<cit.>]
Let (M,g,J) be a complete, non-compact, Kähler manifold admitting an essential subset K.
Assume that there is a constant a>1 such that the estimates R-R^0_g,∇ R_g=𝒪(e^-ar) hold, where r is the distance function to any compact subset.
Then on ∂ K, there exist a contact form η of class 𝒞^1, and a continuous symmetric positive bilinear form γ, positive definite on the contact distribution H=ker η, such that
ℰ^*g = dr^2 + e^2rη⊗η + e^r γ + lower order terms.
If moreover a>3/2, then γ is of class 𝒞^1, and there exists a 𝒞^1 formally integrable almost complex structure J_H on H, such that γ|_H× H = dη(·, J_H·).
In particular, (∂ K,H,J_H) is a strictly pseudoconvex CR manifold.
Notice the similarity between equations (<ref>) and (<ref>) by setting ρ = e^-r.
This result provides a compactification by a strictly pseudoconvex CR structure for a Kähler manifold whose curvature is asymptotically close to that of the complex hyperbolic space.
The proof is quite long, but can be summarised as follows:
* For {Jν,e_1,…,e_2n} an orthonormal frame on ∂ K, with ν the outward unit normal, let {J∂_r,E_1,…,E_2n} denote its parallel transport along radial geodesics.
* For r ⩾ 0, define η_r = ℰ_r^*(e^-rg(·,J∂_r)), and η^j_r = ℰ_r^*(e^-r/2g(·,E_j)), j∈{1,…,2n}, which are local 1-forms on ∂ K.
* If R-R^0_g = 𝒪(e^-ar), with a > 1/2, then {η_r,η^1_r…,η^2n_r}_r⩾ 0 converges to continuous 1-forms {η,η^1,…,η^2n}.
This implies that the metric reads as in equation (<ref>), where γ = ∑_j=1^2nη^j⊗η^j.
If moreover a > 1, volume comparison techniques show that the limit is a coframe.
* If in addition, ∇ R_g=𝒪(e^-ar), then the family of 1-forms (η_r)_r⩾ 0 converges in 𝒞^1 topology, the limit η is of class 𝒞^1, and is contact.
The proof uses several estimates, and tedious computations involving many curvature terms.
* If a>3/2, then (η_r^j)_r⩾ 0 locally uniformly converges in 𝒞^1 topology, for any j∈{1,…,2n}.
Hence, γ is of class 𝒞^1.
* If φ_r = ℰ_r^*(J - g(·,∂_r)⊗ J∂_r + g(·,J∂_r)⊗∂_r), then (φ_r)_r⩾ 0 uniformly converges to a tensor φ of class 𝒞^1.
Its restriction to H= ker η gives the desired formally integrable almost complex structure J_H.
The very first step of the proof crucially relies on the fact that J∂_r is parallel in the radial direction, and in fact, the equality ∇ J = 0 is used many times.
Note that the Kähler assumption is rather rigid: for instance, one has ∇ J = 0 if and only if the 2-form g(J·,·) is closed and J is formally integrable.
In this paper, we extend and improve the results of <cit.>.
First, the Kähler condition is removed: in fact, neither the closedness of g(J·,·) nor the formal integrability of J need to be met.
We instead consider an almost Hermitian manifold (M,g,J) whose almost complex structure J is only parallel at infinity, by imposing the condition ∇^k J_g = 𝒪(e^-ar), k∈{1,2}.
Second, we show that the strict pseudoconvexity of the boundary can be obtained with a > 1 instead of a > 3/2.
This sharper bound comes from deriving sharp geometric estimates in the direction of the contact structure.
In the context of this paper, the vector field J∂_r is not radially parallel, and one cannot even initiate the above strategy as it stands.
The main trick is to prove the existence, under our assumptions, of a unit vector field E_0 on M̅ ̅∖̅ ̅K̅ that is radially parallel, and that satisfies E_0-J∂_r_g = 𝒪(e^-ar).
This latter vector field is unique.
One can then consider a reference frame {E_0,…,E_2n} having nice properties, which we call an admissible frame (see Definition <ref> below), and try to mimic the above proof.
The counterpart is that the computations become longer and more involved; one also needs to show numerous extra estimates.
§ METRIC ESTIMATES
This section is dedicated to the derivation of the expansion near infinity of the metric g under the and conditions.
We first define the notion of admissible frames, which simplify future computations.
We then derive estimates on the asymptotic expansion of normal Jacobi fields, which turns out to be the main ingredients to show our results.
§.§ Admissible frames
We give a construction for some parallel orthonormal frames along radial geodesics in which later computations will be easier.
For v a vector field on , let V be the vector field on M̅∖̅ ̅K̅ obtained by the parallel transport of v along radial geodesics.
Finally, for r ⩾ 0, define β_r(v) = g(,V)|__r.
This defines a family of 1-forms (β_r)_r⩾ 0 on .
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Then there exists a continuous 1-form β on such that
β_r - β = 𝒪_g_0(e^-ar).
Fix v a vector field on ∂K and r ⩾ 0.
Both ν and V are radially parallel, so that one has β_r(v)-β_0(v) = ∫_0^r ∂_t(g(Jν,V)) dt = ∫_0^r g((∇_ν J)ν,V) dt.
By the assumption, there exists C > 0 such that ‖∇ J‖_g ⩽ Ce^{-ar}.
The Cauchy-Schwarz inequality now implies that ∫_0^r |g((∇_ν J)ν,V)| dt ⩽ ∫_0^r ‖∇ J‖_g ‖V‖_g dt ⩽ C(1-e^{-ar})/a ‖v‖_{g_0}.
Therefore, (β_r(v))_r⩾ 0 pointwise converges: let β(v) to be its pointwise limit.
It defines a pointwise linear form on the tangent spaces of , satisfying
|β(v)-β_r(v)|
= | ∫_r^∞ g(( J),V) |
⩽∫_r^∞|g(( J),V)|
⩽C/ae^-arv_g_0,
from which is derived equation (<ref>).
The convergence is thus uniform, and β is continuous.
We shall now show that β is nowhere vanishing.
For all r ⩾ 0, one has β_r_g_0 = 1 pointwise.
Indeed, for any v, Cauchy-Schwarz inequality implies that |β_r(v)| ⩽V_g = v_g_0.
Equality is reached for v = ι_r^-1(), where ι_r T→ T_r is induced by the parallel transport along radial geodesics.
It follows that β_g_0 = 1 pointwise, and that β is nowhere vanishing.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let U⊂ be an open subset on which the continuous distribution β is trivialisable.
Let {e_0,…,e_2n} be an orthonormal frame on U such that β(e_0) > 0 and β(e_j) = 0 if j∈{1,…,2n}.
The associated admissible frame {E_0,…,E_2n} on the cone E(_+× U) is defined as the parallel transport of {e_0,…,e_2n} along the radial geodesics.
If {E_0,…,E_2n} is an admissible frame, then {,E_0,…,E_2n} is an orthonormal frame on the cone E(_+× U) whose elements are parallel in the radial direction even though they need not be differentiable in the directions that are orthogonal to .
In the following, we will often refer to admissible frames without mentioning the open subset U⊂ on which they are defined.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let {E_0,…,E_2n} be an admissible frame.
Then β(e_0) = 1.
One has 1 = _g^2 = ∑_j=0^2nβ_r(e_j)^2.
The result follows by taking the limit as r →∞.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let {E_0,…,E_2n} be an admissible frame and δ be the Kronecker symbol.
Then
* g(,E_j) - δ_0j = 𝒪(e^-ar) for j∈{0,…,2n},
* E_0 - = 𝒪_g(e^-ar).
The first point is a consequence of the equality g(,E_j)=β_r(e_j) and of equation (<ref>).
For the second point, notice that
E_0- = ∑_j=0^2ng(E_0-,E_j)E_j = ∑_j=0^2n(δ_0j- g(,E_j))E_j,
from which is derived the claimed estimate.
One easily shows that the vector field E_0 is the unique unit vector field X on E(_+× U) such that X = 0 and g(X,) = 1 + o(1).
If (M,g,J) is Kähler (if ∇ J = 0), then ∇_ν(Jν) = 0, and thus E_0 = Jν.
In this specific case, admissible frames can be chosen to be smooth, and correspond to the radially parallel orthonormal frames defined in <cit.>.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 0.
Let {E_0,…,E_2n} be an admissible frame.
Then
* (,E_0) + 1 = 𝒪(e^-ar),
* (,E_j) + 1/4 = 𝒪(e^-ar) for j ∈{1,…,2n},
* R(,E_i,,E_k) = 𝒪(e^-ar) for any i ≠ j ∈{0,…,2n}.
We prove the first point, the other being shown similarly.
One readily verifies from the definition of R^0 that R^0(,,,) = -1, and therefore, it holds that
(,E_0)
= R^0(, + (E_0-), , + (E_0-))+ (R-R^0)(,E_0,,E_0)
= -1 + 2R^0(,E_0-,E_0,)
+ R^0(,E_0-,,E_0-)
+ (R-R^0)(,E_0,,E_0).
The definition of R^0 (see equation (<ref>)) yields R^0_g ⩽3/2, and the result follows from the assumption and from the second point of Corollary <ref>.
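For the reader's convenience, recall that the curvature tensor of constant holomorphic sectional curvature -1 (hence with sectional curvatures pinched between -1 and -1/4) may be written
R^0(X,Y,Z,W) = -1/4 ( g(X,Z)g(Y,W) - g(X,W)g(Y,Z) + g(JX,Z)g(JY,W) - g(JX,W)g(JY,Z) + 2 g(JX,Y)g(JZ,W) );
the precise expression used in this paper is that of equation (<ref>), and the formula above is only recalled here because it is consistent with the values appearing in the proof: for a unit vector ν one computes R^0(ν,Jν,ν,Jν) = -1, and R^0(ν,E,ν,E) = -1/4 for any unit vector E orthogonal to both ν and Jν.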
§.§ Associated coframes and normal Jacobi fields estimates
Recall that for r ⩾ 0, the diffeomorphism ℰ_r→_r is defined by ℰ_r(p) = ℰ(r,p).
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold with essential subset K.
Assume that it satisfies the condition of order a > 0.
Let {E_0,…,E_2n} be an admissible frame on the cone E(_+× U).
The associated coframe {η^0_r,…,η^2n_r}_r ⩾ 0 on U is defined by
η^0_r = ℰ_r^* (e^-r g(·,E_0)) and η^j_r = ℰ_r^*(e^-r/2g(·,E_j))
if j∈{1,…,2n}.
In any admissible frame, the normal Jacobi field Y_v associated to the vector field v on reads
Y_v = η^0_r(v) e^r E_0
+ ∑_j=1^2nη^j_r(v) e^r/2E_j.
Applying twice the radial covariant derivative ∇_ν to this last equality, one has
∇_ν∇_ν Y_v = (∂_r^2 η^0_r(v) + 2∂_r η^0_r(v) + η^0_r(v)) e^r E_0
+ ∑_{j=1}^{2n} (∂_r^2 η^j_r(v) + ∂_r η^j_r(v) + 1/4 η^j_r(v)) e^{r/2} E_j.
Recall that radial Jacobi fields are actual Jacobi fields, which means that they satisfy the second order linear differential equation ∇_ν∇_ν Y_v = -R(ν,Y_v)ν.
An identification of the components of ∇_ν∇_ν Y_v in the given admissible frame shows that the coefficients {η^j_r(v)}_{j ∈{0,…,2n}} satisfy the differential system
∂_r^2 η^0_r(v) + 2 ∂_r η^0_r(v) = ∑_{k=0}^{2n} u^0_k η^k_r(v),
∂_r^2 η^j_r(v) + ∂_r η^j_r(v) = ∑_{k=0}^{2n} u^j_k η^k_r(v), j∈{1,…,2n},
where the functions {u^j_k}_{j,k∈{0,…,2n}} are defined by
u^j_k = -
(sec(ν,E_0) + 1) if j=k=0,
e^{-r/2} R(ν,E_0,ν,E_k) if j=0, k≠ 0,
e^{r/2} R(ν,E_j,ν,E_0) if j≠ 0, k=0,
R(ν,E_j,ν,E_k) if j,k ∈{1,…,2n}, j≠ k,
(sec(ν,E_j) + 1/4) if j,k∈{1,…,2n}, j=k.
Proposition <ref> implies that one has the uniform estimates |u^j_k| = 𝒪(e^-(a-1/2)r).
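To indicate how the successive integrations referred to below begin, note that the first equation of the system can be integrated once using the integrating factor e^{2r}:
∂_r(e^{2r} ∂_r η^0_r(v)) = e^{2r} ∑_{k=0}^{2n} u^0_k η^k_r(v),
so that
∂_r η^0_r(v) = e^{-2r} ( ∂_r η^0_r(v)|_{r=0} + ∫_0^r e^{2s} ∑_{k=0}^{2n} u^0_k η^k_s(v) ds ).
Injecting the decay of the u^j_k into such integral representations and controlling the remaining unknowns with Grönwall's Lemma yields the convergence stated next; this is only a sketch of the first step.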
Combining the proofs of <cit.>, relying on successive integrations, an application of Grönwall's Lemma, and a bootstrap argument, one obtains the following result.
The last claim relies on estimates on the growth of the volume (see <cit.>).
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a>1/2.
Let {η^0_r,…,η^2n_r}_r ⩾ 0 be the coframes associated to an admissible frame on U⊂.
Then there exist continuous 1-forms {η^0,…,η^2n} on U such that
∂_r η^0_r, η^0_r - η^0 =
𝒪_g_0(e^-ar) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-3/2r) if
a = 3/2,
𝒪_g_0(e^-3/2r) if
a > 3/2,
∀ j ∈{1,…,2n}, ∂_r η^j_r, η^j_r - η^j =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
If furthermore one assumes that a > 1, the family {η^0,…,η^2n} is a continuous coframe on U.
If a > 1/2, then η^j_r_g_0 is bounded independently of r, j, the choice of an admissible frame, and U.
For j∈{0,…, 2n} and r ⩾ 0, write η^j_r = η^j_0 + ∫_0^r η^j_r.
Notice that η^j_0_g_0 = 1.
Then by Proposition <ref>, η^j_r_g_0⩽η^j_0_g_0 + ∫_0^r η^j_r_g_0⩽ 1 + ∫_0^∞η^j_r_g_0 = 𝒪(1).
Recall that a normal Jacobi field Y_v satisfies Y_v = SY_v.
The following corollary is an immediate consequence of Proposition <ref>.
In any admissible frame, the normal Jacobi field Y_v associated to a vector field v on satisfies
Y_v = η^0(v) e^r E_0 + ∑_j=1^2nη^j(v)e^r/2 E_j +
𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2,
𝒪_g(v_g_0 (r+1)e^-r/2) if
a = 3/2,
𝒪_g(v_g_0 e^-r/2) if
a > 3/2,
and
SY_v = η^0(v) e^r E_0 + ∑_j=1^2n1/2η^j(v)e^r/2 E_j +
𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2,
𝒪_g(v_g_0 (r+1)e^-r/2) if
a = 3/2,
𝒪_g(v_g_0 e^-r/2) if
a > 3/2.
As a consequence, one has the global estimates Y_v, SY_v = 𝒪_g(v_g_0e^r).
If moreover, v is everywhere tangent to η^0, then Y_v, SY_v = 𝒪_g(v_g_0e^r/2).
Note that although the estimates of Proposition <ref> are not uniform in all directions, they contribute equally to the lower order term in equations (<ref>) and (<ref>) thanks to the remaining exponential factors.
§.§ Global consequences and metric estimates
We shall now highlight global consequences of the study conducted in Subsections <ref> and <ref>.
We then prove the first of our main results.
Assume that (M,g,J) satisfies the condition of order a > 0.
Then the local vector field e_0 defined in Definition <ref> defines a global continuous vector field on , independently of the construction of any admissible frame.
The 1-form β defined in Lemma <ref> is continuous and nowhere vanishing.
Hence, the distribution β⊂ T is a continuous distribution of hyperplanes.
It follows that its g_0-orthogonal complement L is a well-defined and continuous line bundle.
Notice that the restriction of β trivialises L.
It follows that e_0 is the unique section of L that is positive for β, and of unit g_0-norm.
This concludes the proof.
The family of 1-forms {η^0_r}_r ⩾ 0 is then globally defined on , independently of the choice of the admissible frame.
As a consequence, one has the following global version of Proposition <ref>.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and condition of order a > 1/2.
Then there exists a continuous 1-form η^0 on such that
∂_r η^0_r, η^0_r - η^0 =
𝒪_g_0(e^-ar) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-3/2r) if
a = 3/2,
𝒪_g_0(e^-3/2r) if
a > 3/2.
If furthermore one assumes that a > 1, then η^0 is nowhere vanishing.
The following Corollary is a straightforward application of the triangle inequality and of Corollary <ref>.
One has the following estimates
η^0_r ⊗η^0_r - η^0 ⊗η^0 =
𝒪_g_0(e^-ar) if 1/2 < a < 3/2,
𝒪_g_0((r+1)e^-3/2r) if a = 3/2,
𝒪_g_0(e^-3/2r) if a > 3/2.
From Gauss's Lemma, the Riemannian metric g reads as ℰ^*g = dr ⊗ dr + g_r, with (g_r)_{r ⩾ 0} the smooth family of Riemannian metrics on ∂K defined by g_r = ℰ_r^* g.
By construction, the first term that appears in the asymptotic expansion of the metric g near infinity is e^2rη^0 ⊗η^0.
For r⩾ 0, γ_r is defined as γ_r = e^-r( g_r - e^2rη^0_r ⊗η^0_r).
By definition, (γ_r)_r⩾ 0 is a family of symmetric 2-tensors on .
Let {η^0_r,…,η^2n_r}_r ⩾ 0 be the coframes associated to an admissible frame {E_0,…,E_2n}.
Then locally, γ _r = ∑_j=1^2nη^j_r⊗η^j_r.
Consequently, γ_r is positive semi-definite, and is positive definite on η^0_r, for any r ⩾ 0.
The following proposition shows that (γ_r)_r ⩾ 0 converges to some tensor that shares similar properties.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, and admitting an essential subset K. Assume that it satisfies the and conditions of order a > 1/2.
Then there exists a continuous positive semi-definite symmetric 2-tensor γ on , which we call the Carnot metric, such that
γ_r - γ =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a < 3/2,
𝒪_g_0((r+1)e^-r) if a = 3/2,
𝒪_g_0(e^-r) if a > 3/2.
If furthermore one assumes that a > 1, then γ is positive definite on η^0.
For r ⩾ 0, one has g_r = e^2rη^0_r⊗η^0_r + e^r γ_r.
Let {η^0_r,…,η^2n}_r ⩾ 0 be the coframes associated with an admissible frame.
Locally, one can express γ_r as γ_r = ∑_j=1^2nη^j_r⊗η^j_r.
Therefore, (γ_r)_r ⩾ 0 converges pointwise to a limit we call γ which is locally given by ∑_j=1^2nη^j⊗η^j.
In addition, one has the local expression
γ_r - γ = ∑_j=1^2nη^j_r⊗ (η^j_r-η^j) + (η^j_r-η^j) ⊗η^j.
The global estimates (<ref>) now follow from the triangle inequality and from an application of Proposition <ref> and Corollary <ref>.
As a consequence, γ is a continuous symmetric positive semi-definite 2-tensor.
If a > 1, then {η^0,…,η^2n} is a coframe (Proposition <ref>), and γ is hence positive definite on η^0.
The previous study implies the following comparison between quadratic forms.
If a > 1, there exists a constant λ > 1 such that for all r ⩾ 0, the comparison between quadratic forms 1/λ e^rg_0 ⩽ g_r ⩽λ e^2r g_0 holds.
For r ⩾ 0, η^0_r ⊗η^0_r and γ_r are positive symmetric 2-tensors.
Define q_r = η_r^0⊗η_r^0 + γ_r, which is a Riemannian metric on .
From g_r = e^2rη^0_r ⊗η^0_r + e^r γ_r, one readily checks that
∀ r ⩾ 0, e^r q_r ⩽ g_r ⩽ e^2rq_r.
According to Propositions <ref> and <ref>, q_r uniformly converges to the continuous Riemannian metric q_∞ = η^0 ⊗η^0 + γ as r→∞.
Let S^g_0 be the unit sphere bundle of (,g_0), which is compact by compactness of .
The map (r,v) ∈ [0,∞]× S^g_0↦ q_r(v,v)∈ (0,∞) is then continuous on the compact space [0,∞]× S^g_0.
Therefore, there exists λ > 1 such that for all r⩾ 0, 1/λ⩽ q_r ⩽λ on S^g_0.
The result now follows from equation (<ref>) and from the homogeneity of quadratic forms.
We shall now show the first of our main results.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and assumptions of order a > 1/2.
Then on ∂K, there exist a continuous 1-form η^0 and a continuous positive semi-definite symmetric 2-tensor γ such that, in the normal exponential map ℰ, the Riemannian metric g reads
ℰ^*g = dr ⊗ dr + e^{2r}η^0 ⊗η^0 + e^r γ +
𝒪_{g_0}(e^{(2-a)r}) if 1/2 < a < 3/2,
𝒪_{g_0}((r+1)e^{r/2}) if a = 3/2,
𝒪_{g_0}(e^{r/2}) if a > 3/2.
If furthermore one assumes that a > 1, then η^0 is nowhere vanishing, and γ is positive definite on the distribution of hyperplanes ker η^0.
Let (η^0_r)_r ⩾ 0, (γ_r)_r ⩾ 0 and their limits η^0 and γ be given by
Propositions <ref> and <ref>.
By construction, one has
ℰ^*g = dr ⊗ dr + e^{2r}η^0_r ⊗η^0_r + e^r γ_r
= dr ⊗ dr + e^{2r}η^0 ⊗η^0 + e^r γ + ε_r,
with ε_r = e^2r(η^0_r ⊗η^0_r - η^0 ⊗η^0) + e^r (γ_r - γ).
Estimates (<ref>) now follow from Corollary <ref> (estimates on η^0_r⊗η^0_r - η^0⊗η^0)
and Proposition <ref> (estimates on γ_r-γ).
Ultimately, if a > 1, the last claim follows from Propositions <ref> (η^0 is nowhere vanishing) and <ref> (γ is positive semi-definite, positive definite on η^0).
Setting g̃ = ℰ_*(dr⊗ dr + e^{2r}η^0⊗η^0 + e^r γ) on M∖K, Corollary <ref> shows that estimates (<ref>) read
g - g̃ =
𝒪_g(e^-(a-1)r)
if 1/2 < a < 3/2,
𝒪_g((r+1)e^-r/2)
if a = 3/2,
𝒪_g(e^-r/2)
if a > 3/2.
If η^0 were a contact form and γ a Carnot metric on its kernel distribution, then g would be asymptotically complex hyperbolic in the sense of <cit.>.
§.§ Estimates on the shape operator
Before we conclude this section, we give another consequence of the previous study: we derive asymptotic estimates on the shape operator S.
First, we introduce a natural vector field ξ_0, which is closely related to S.
The vector fields (ξ_0^r)_r ⩾ 0 on are defined as ξ_0^r = ℰ_r^* (e^r E_0).
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then there exists a continuous vector field ξ_0 on such that
ξ_0^r - ξ_0 =
𝒪_g_0(e^-(a-1/2)r) if
1 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
It is uniquely characterised by the fact that η^0(ξ_0) = 1 and γ(ξ_0,ξ_0) = 0.
Define g̅_0 = η^0⊗η^0 + γ, which is a continuous Riemannian metric on according to Theorem <ref>.
Consider the continuous line bundle L̅ = (η^0)^⊥_g̅_0 on .
The restriction of η^0 trivialises L̅, which thus has a continuous nowhere vanishing section ξ.
Define ξ_0 = ξ/η^0(ξ), which is continuous by construction.
Let {η^0,…,η^2n} be the limit coframe associated with any admissible frame.
Then η^0(ξ_0) = 1 and η^j(ξ_0) = 0 for j∈{1,…,2n}.
In particular, ξ_0 is uniquely characterised by the relations η^0(ξ_0)=1 and γ(ξ_0,ξ_0)=∑_j=1^2nη^j(ξ_0)^2 = 0.
Notice that for j∈{1,…,2n} and r ⩾ 0, one has
η^j_r(ξ_0 - ξ_0^r) = η^j_r(ξ_0^r) - η^j_r(ξ) = δ^j_0 - η^j_r(ξ_0) = η^j(ξ_0) - η^j_r(ξ_0)= (η^j-η^j_r)(ξ_0),
where δ stands for the Kronecker symbol.
Corollary <ref> yields the existence of a constant c > 0 such that ξ_0^r - ξ_0_g_0⩽ c e^-r/2Y_(ξ_0^r - ξ_0)_g for all r ⩾ 0.
The triangle inequality together with equation (<ref>) now yield
Y_(ξ_0^r - ξ_0)_g ⩽(e^r η^0-η^0_r_g_0 + e^r/2∑_j=1^2nη^j-η^j_r_g_0) ξ_0_g_0.
Estimates (<ref>) now follow from the estimates of Proposition <ref>, together with the fact that ξ_0_g_0 is uniformly bounded by continuity of ξ_0 and compactness of .
Fix an admissible frame on U⊂.
If ξ_j^r = ℰ_r^* (e^r/2E_j) and if {ξ_0,…,ξ_2n} is the dual frame of {η^0,…,η^2n}, a similar study shows that
∀ j ∈{1,…,2n}, ξ_j - ξ_j^r =
𝒪_g_0(e^-(a-1/2)r) if
1 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
The constants involved in the upper bounds are independent of the choice of the admissible frame and of U.
It relies on the fact that one can uniformly bound ξ_j_g_0 if j∈{1,…,2n}, for instance, as an application of Corollary <ref>.
For v a vector field on , the associated normal Jacobi fields Y_v satisfies Y_v = SY_v.
It follows from equation (<ref>) that in an admissible frame, one has
SY_v = (η^0_r(v) + η^0_r(v) )e^r E_0
+ ∑_j=1^2n(η^j_r(v) + 1/2η^j_r(v) )e^r/2E_j.
For r ⩾ 0, consider the pull-back S_r = ℰ_r^*S of the shape operator S through the diffeomorphism ℰ_r →_r.
It is well defined since S leaves stable the tangent bundle of the level hypersurfaces _r.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1/2.
Then the family (S_r)_r ⩾ 0 satisfies the estimates
S_r - 1/2( + η^0_r ⊗ξ_0^r) =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2,
In particular, if a > 1, then S_r r →∞⟶1/2( + η^0 ⊗ξ_0), and one can substitute η^0_r⊗ξ_0^r with η^0 ⊗ξ_0 in estimates (<ref>).
Let v be a vector field on .
It follows from Proposition <ref> and from Corollary <ref> that
SY_v -1/2(Y_v + η^0_r(v)e^rE_0) = 𝒪_g(v_g_0 e^-(a-1)r) if 1/2 < a <3/2,
𝒪_g(v_g_0 (r+1)e^-r/2) if
a = 3/2,
𝒪_g(v_g_0 e^-r/2) if
a > 3/2,
By the very definition of S_r, ξ_0^r and g_r, it follows that
S_r-1/2( + η^0_r⊗ξ_0^r)_g_r =
𝒪(e^-(a-1)r) if 1/2 < a <3/2,
𝒪((r+1)e^-r/2) if
a = 3/2,
𝒪(e^-r/2) if
a > 3/2,
Finally, Corollary <ref> implies that
S_r - 1/2( + η^0_r ⊗ξ_0^r)
= 𝒪_g_0(e^-r/2S_r - 1/2( + η^0_r ⊗ξ_0^r) _g_r),
and estimates (<ref>) now follow.
If a > 1, then estimates on η^0-η^0_r_g_0 (Proposition <ref>)
and on ξ_0-ξ_0^r_g_0 (Proposition <ref>), together with the triangle inequality, show that one can replace η^0_r⊗ξ_0^r with η^0⊗ξ_0 in estimates (<ref>).
This concludes the proof.
In the complex hyperbolic space, the shape operator of a geodesic sphere of radius r, with outward unit normal ν, is given by S = coth(r) Id_{ℝJν} + 1/2 coth(r/2) Id_{{ν,Jν}^⊥}.
Proposition <ref> implies that the local extrinsic geometry of the level hypersurfaces _r is then asymptotic to that of horospheres in the complex hyperbolic space.
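Indeed, letting r →∞ in this model formula gives a consistency check against Proposition <ref>:
coth(r) → 1 on ℝJν, 1/2 coth(r/2) → 1/2 on {ν,Jν}^⊥,
while the limit endomorphism 1/2(Id + η^0⊗ξ_0) has eigenvalue 1 on the line ℝξ_0 (since η^0(ξ_0)=1) and eigenvalue 1/2 on the hyperplane distribution H_0 = ker η^0.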
§ THE ALMOST COMPLEX STRUCTURE
This section is dedicated to prove the existence of a natural almost complex structure J_0 on the distribution of hyperplanes H_0 = η^0, obtained as the restriction of a naturally defined tensor φ on .
The ambient almost complex structure J does not leave stable the ambient distribution of hyperplanes {ν}^⊥.
Consider the orthogonal projection π : T(M∖K) → T(M∖K) onto {ν}^⊥.
Define Φ to be the field of endomorphisms on M∖K defined by Φ = π J π.
Since π and J have unit norms, then ‖Φ‖_g ⩽ 1.
Formally, one has π = Id - g(ν,·) ⊗ν, and Φ then reads Φ = J + g(·,Jν) ⊗ν - g(·,ν)⊗ Jν.
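The closed formula for Φ follows from a direct computation (writing ν, as before, for the unit radial vector field): for any vector field X,
Φ X = π J π X = π(JX - g(X,ν) Jν)
= JX - g(JX,ν)ν - g(X,ν)(Jν - g(Jν,ν)ν)
= JX + g(X,Jν)ν - g(X,ν)Jν,
where the last line uses g(JX,ν) = -g(X,Jν) and g(Jν,ν) = 0, both valid since g is J-invariant.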
Assume that (M,g,J) satisfies the condition of order a > 0.
For any admissible frame {E_0,…,E_2n} and any vector fields X and Y, one has:
* g(Φ X,Φ Y) = g(X,Y) - g(X,)g(Y,) - g(X,)g(Y,),
* Φ(E_0) = 𝒪_g(e^-ar),
* Φ(E_j) - _j = 𝒪_g(e^-ar) if j∈{1,…,2n}.
The first point is a straightforward computation.
To prove the second point, note that Φ() = 0, so that Φ(E_0)_g = Φ(E_0-)_g ⩽E_0-_g.
The result follows from Corollary <ref>.
Finally, by the very definition of Φ, Φ(E_j)=_j - g(E_j,), and the last point follows from Corollary <ref>.
The tensor Φ leaves stable the tangent distribution {,}^⊥.
Therefore, one can pull it back through the family of diffeomorphisms (ℰ_r)_r⩾ 0.
The family of endomorphisms (φ_r)_r ⩾ 0 is defined by φ_r = ℰ_r^*Φ for r ⩾ 0.
Recall that (S_r)_r ⩾ 0 is the family of endomorphisms ℰ_r^*S induced by the shape operator.
Assume that (M,g,J) satisfies the and assumption of order a > 1.
Then the following estimates hold:
* φ_rξ_0^r = 𝒪_g_0(e^-(a-1/2)r).
* φ_r = 𝒪_g_0(1),
* η^0_r∘φ_r = 𝒪_g_0(e^-ar),
* γ_r(φ_r·,φ_r·) - γ_r = 𝒪_g_0(e^-(a-1)r),
* φ_r S_r - S_r φ_r =
𝒪_g_0(e^-(a-1/2)r) if
1 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
We first show the first point.
From Corollary <ref>, there exists c > 0 such that for r ⩾ 0, φ_rξ_0^r_g_0⩽ c Φ (e^rE_0)_g e^-r/2 = cΦ (E_0)_g e^r/2.
The result now follows from Lemma <ref>
Let us now focus on the second point.
Let v be a vector field on .
Corollary <ref> states that there exists c>0 such that φ_rv_g_0⩽ c Φ(Y_v)_g e^-r/2,
for all r ⩾ 0.
The result follows from the fourth point of Lemma <ref>.
For the third point, let v be a vector field on .
In an admissible frame, one has Φ(Y_v) = η^0_r(v) e^r Φ(E_0) + e^r/2∑_j=1^2nη^j_r(v) Φ(E_j).
It then follows that
(η^0_r∘φ_r)(v) = η^0_r(v) g(Φ(E_0),E_0) + e^-r/2∑_j=1^2nη^j_r(v) g(Φ(E_j), E_0).
Notice that Φ has range in {}^⊥, so that g(Φ(E_j), E_0)) = g(Φ(E_j), E_0-) for all j∈{0,…,2n}.
Recall that Φ_g ⩽ 1 and that E_j_g=1 for all j∈{0,…,2n}.
The triangle inequality now yields
η^0_r∘φ_r_g_0⩽ (η^0_r_g_0
+ e^-r/2∑_j=1^n η^j_r_g_0) E_0-_g
for all r ⩾ 0.
The result follows from Corollary <ref> (estimates on E_0-) and from Corollary <ref> (uniform bounds on {η^j_r_g_0}_j ∈{0,…,2n}).
Let us now consider the fourth point.
Let u and v be vector fields on , and fix r ⩾ 0.
By Lemma <ref>, one has
g_r(φ_ru,φ_rv) = g(Φ Y_u,Φ Y_v) = g(Y_u,Y_v) - g(Y_u,)g(Y_v,).
Cauchy-Schwarz inequality now yields
g_r(φ_ru,φ_rv) = g_r(u,v) - e^2rη^0_r(u)η^0_r(v) + 𝒪(Y_u_gY_v_gE_0-_g).
It follows from Corollaries <ref> and <ref>, and from the very definition of γ_r, that
g_r(φ_r·,φ_r·) = e^rγ_r + 𝒪_g_0( e^(2-a)r).
Therefore, e^2r(η^0_r∘φ_r)⊗(η^0_r∘φ_r) + e^r γ_r(φ_r·,φ_r·) = e^r γ_r + 𝒪_g_0(e^(2-a)r).
From the preceding point, one has e^2r(η^0_r∘φ_r)⊗(η^0_r∘φ_r) = 𝒪_g_0(e^(2-2a)r), from which is deduced that γ_r(φ_r·,φ_r·) = γ_r + 𝒪_g_0(e^-(a-1)r)
This concludes the proof of the fourth point.
Finally, let us prove the last point.
Write S_r = S_r - 1/2( + η^0_r ⊗ξ_0^r) + 1/2( + η^0_r ⊗ξ_0^r), for r ⩾ 0.
By the triangle inequality, one has
φ_r S_r - S_r φ_r _g_0 ⩽ 2 φ_r_g_0S_r - 1/2( + η^0_r ⊗ξ_0^r)_g_0
+1/2(η^0_r_g_0φ_rξ_0^r_g_0 + η^0_r∘φ_r_g_0ξ_0^r_g_0).
The result now follows from uniform bounds on η^0_r_g_0 and ξ_0^r_g_0 (by uniform convergence), the estimates on S_r - 1/2( + η^0_r ⊗ξ_0^r) (Proposition <ref>),
and the estimates on φ_r, η^0_r∘φ_r,
and φ_r ξ_0^r, given by the three first points.
We are now able to prove that the family (φ_r)_r ⩾ 0 converges to a continuous field of endomorphisms, provided that a > 1.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then there exists a continuous field of endomorphisms φ on such that
φ_r - φ =
𝒪_g_0(e^-(a-1/2)r) if
1 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
In addition, φ satisfies:
* η^0∘φ = 0 and φξ_0 = 0,
* γ(φ·,φ·) = γ,
* φ^2 = - + η^0 ⊗ξ_0 and φ^3 = -φ.
Let us first show the existence of φ.
The proof goes in two steps.
We first derive a differential equation for (φ_r)_r ⩾ 0.
Let X be a vector field on M̅∖̅ ̅K̅.
Then
( J)X = [,JX] - J[,X]
= ((JX) - ∇_JX) - J( X - ∇_X)
= ( J) X + J X - S(JX) - J X + J(SX)
= JSX - SJX + ( J)X.
It follows that J = JS - SJ + J.
Recall that Φ = π J π, where π = - g(,·)⊗ is the orthogonal projection onto {}^⊥.
It is a standard fact that g = 2g(S·,·).
Moreover, S = = 0.
It follows that π = 0, and consequently, that Φ = π (JS - SJ + J) π.
Note that the eigenspaces of the projector π are π = and (π - ) = {}^⊥, which are both left stable by the shape operator S.
Hence, S commutes with π, from which is derived that that Φ = Φ S - S Φ + π ( J) π.
Define ψ_r = ℰ_r^*(π ( J) π), so that one has φ_r = φ_r S_r - S_r φ_r + ψ_r.
A direct application of the assumption and Corollary <ref> yields ψ_r= 𝒪_g_0(e^-(a-1/2)r).
Therefore, it follows from Lemma <ref> that
φ_r =
𝒪_g_0(e^-(a-1/2)r) if 1/2 < a <3/2,
𝒪_g_0((r+1)e^-r) if
a = 3/2,
𝒪_g_0(e^-r) if
a > 3/2.
Consequently, (φ_r)_r ⩾ 0 uniformly converges to some continuous tensor φ, which satisfies the inequality φ_r - φ_g_0 = ∫_r^∞φ_r_g_0⩽∫_r^∞φ_r_g_0 for all r ⩾ 0.
This implies estimates (<ref>).
Let us now establish the claimed properties satisfied by φ.
The first two points are immediate consequences of Lemma <ref>.
We thus focus on the last claim.
One easily checks that Φ satisfies the equality
Φ^2 = - + g(·,) ⊗ + g(·,) ⊗.
Hence, one has φ_r^2 = - + η^0_r ⊗ξ_0^r + ϵ_r, for all r ⩾ 0, where ϵ_r = ℰ_r^*(g(·, - E_0) ⊗ + g(·,E_0)⊗ ( - E_0)).
As usual, Corollary <ref> yields that
ϵ_r_g_0 = 𝒪(e^r/2E_0-_g) = 𝒪(e^-(a-1/2)r), where the last equality is due to Corollary <ref>.
The first part of the result now follows from the convergence of (η^0_r)_r ⩾ 0 and of (ξ_0^r)_r⩾ 0 when a > 1.
The second part of the claim is a consequence of the first point.
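Explicitly, the identity φ^3 = -φ follows in one line from the previous points: since φξ_0 = 0, one has
φ^3 = φ∘φ^2 = φ∘(-Id + η^0 ⊗ξ_0) = -φ + η^0 ⊗φ(ξ_0) = -φ.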
Proposition <ref> implies that when a > 1, (,η^0,φ,ξ_0) is an almost contact manifold (see <cit.> for an introduction to this notion).
In particular, φ induces an almost complex structure on the distribution of hyperplanes H_0 = η^0.
The study conducted in this section finally implies the second of our main Theorems.
Let (M,g,J) be a complete, non-compact almost Hermitian manifold of dimension greater than or equal to 4
Assume that M satisfies the and conditions of order a > 1.
Let η^0 and γ be given by Theorem <ref>, and let φ be defined as in Proposition <ref>.
The restriction J_0= φ|_H_0 of φ to the hyperplane distribution H_0 = η^0 then induces an almost complex structure, and γ^0=γ|_H_0× H_0 is J_0-invariant.
§ HIGHER REGULARITY
This section is dedicated to show that under the stronger conditions and of order a>1, the tensors η^0, γ, and φ defined previously gain in regularity.
As a consequence, we highlight a strictly pseudoconvex CR structure related to the expansion of the metric near infinity.
§.§ Order one estimates
We first provide several estimates that will be useful in the following study.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the condition of order a > 1/2.
Let u and v be vector fields on .
Let V be the parallel transport of v along radial geodesics.
Then ∇_Y_u V = 𝒪_g(u_g_0v_g_0 e^r).
Since V = 0 and [,Y_u]=0, one has (∇_Y_uV) = -R(,Y_u)V.
Hence, Kato's inequality yields | ∇_Y_uV_g | ⩽R_g Y_u_g V_g almost everywhere.
Recall that R_g= 𝒪(1) (Remark <ref>) and that V_g = v_g_0.
Under the condition of order a > 1/2, one has Y_u_g = 𝒪(u_g_0e^r) (Corollary <ref>).
The result follows from a straightforward integration.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1/2.
Then ∇_Y_u = 𝒪_g(u_g_0e^r).
Write ∇_Y_u = (∇_Y_uJ) + J SY_u.
Then ∇_Y_u_g ⩽ (∇ J_g+ S_g) Y_u_g, and the result follows from Lemma <ref>, the assumption and the estimates of Corollary <ref>.
Assume that (M,g,J) satisfies the and conditions of order a > 1/2.
Then ∇_Y_u() = 𝒪_g(u_g_0e^-(a-1)r).
Since = 0 and ∇_Y_u = SY_u, it follows that
∇_Y_u( (J)) = ∇_Y_u(( J))
= (∇_Y_u( J)) + ( J) ∇_Y_u
= (∇^2_Y_u,J) + (∇_∇_Y_u J) + ( J)∇_Y_u
= (∇^2_Y_u,J) + (∇_SY_uJ) + ( J)SY_u.
The result follows from Corollary <ref> (estimates on SY_u) and from the assumption.
Assume that (M,g,J) satisfies the and conditions of order a > 1/2.
Let π be the orthogonal projection onto {}^⊥.
For u and v vector fields on , one has:
* π((∇_Y_uS)Y_v) = 𝒪_g(u_g_0v_g_0e^3/2r).
* π(∇_Y_uY_v) = 𝒪_g((v_g_0+∇^g_0v_g_0)u_g_0e^3/2r).
We first consider the first point.
By Kato's inequality, and noticing that π = 0, one has the almost everywhere inequality π(∇_Y_uS)Y_v)_g ⩽π( ((∇_Y_uS)Y_u))_g.
The shape operator S satisfies the Riccati equation S = -S^2 - R(,·).
Moreover, one has π S = S π.
Direct computations using the equalities Y_v = SY_v and (SY_v) = -R(,Y_v) now yield
(π ((∇_Y_uS)Y_v))) = π SR(,Y_u)Y_v - π R(,Y_u)SY_v - π R(SY_u,Y_v)
-π R(,Y_v)SY_u - π (∇_Y_uR)(,Y_v) - S π (∇_Y_uS)Y_v
= ℜ - S(π ((∇_Y_uS)Y_v))),
where ℜ contains all the curvature terms.
From this is deduced the almost everywhere inequality (e^-rπ ((∇_Y_uS)Y_v))_g) ⩽ e^-rℜ_g + (S_g-1) (e^-rπ ((∇_Y_uS)Y_v))_g).
After a straightforward integration, Grönwall's Lemma yields
e^-rπ ((∇_Y_uS)Y_v))_g ⩽((∇^g_uS)v_g + ∫_0^r e^-sℜ_g s)exp(∫_0^r (S_g-1) s).
By tensoriality and compactness of , one has (∇^g_uS)v_g = 𝒪(u_g_0v_g_0).
Moreover, Lemma <ref> yields the estimate exp(∫_0^r (S_g-1) s) = 𝒪(1).
To conclude, it suffices to show that ℜ = 𝒪_g(u_g_0v_g_0e^3/2r).
The assumption of order a > 1/2 yields
ℜ = π SR^0(,Y_u)Y_v - π R^0(,Y_u)SY_v - π R^0(SY_u,Y_v)
-π R^0(,Y_v)SY_u + 𝒪_g( u_g_0v_g_0e^-(a-2)r).
A close look at the definition of R^0 (see equation (<ref>)) shows that the leading terms in ℜ_g are of the form cη^0(u)η^j(v)e^3/2r or cη^0(v)η^j(u)e^3/2r for c a constant and j ∈{1,…,2n}.
The result follows.
Let us now show the second point.
Similarly, Kato's inequality yields the almost everywhere inequality
π(∇_Y_uY_v)_g ⩽(π(∇_Y_uY_v))_g.
Straightforward computations, using that π = 0, that π and S commute, and that Y_v = SY_v, now yield the equality (π(∇_Y_uY_v)) = -π R(Y_u,Y_v) + π ((∇_Y_uS)Y_u) + S π (∇_Y_uY_v).
Hence, one has
(e^-rπ(∇_Y_uY_v)_g) ⩽ e^-rπ R(Y_u,Y_v)_g + e^-rπ((∇_Y_uS)Y_v)_g
+ (S_g-1) (e^-rπ(∇_Y_uY_v)_g) a.e.
The rest of the proof goes similarly to that of the first point, using the estimates derived on π((∇_Y_uS)Y_v)_g.
The main difference is that the initial data here is not tensorial in v, but instead is π (∇_uv)_g = ∇^g_0_uv_g_0⩽∇^g_0v_g_0u_g_0.
If one considers the whole vector field ∇_Y_uY_v instead, then one only has the estimates ∇_Y_uY_v_g = 𝒪((v_g_0+∇^gv_g)u_g_0e^2r).
Indeed, the radial component is given by g(∇_Y_uY_v,) = -g(SY_u,Y_v) ≃ -η^0(u)η^0(v)e^2r when η^0(u) and η^0(v) do not vanish.
§.§ Regularity of the admissible frames
We shall now show that under the and conditions of order a > 1, the vector field e_0, defined in Definition <ref>, is actually of class 𝒞^1.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then the vector field e_0 is of class 𝒞^1; admissible frames can be chosen to have the same regularity.
It suffices to show that the 1-form β defined in Section <ref> is of class 𝒞^1.
To do so, we shall show that β(v) is a 𝒞^1 function for any 𝒞^1 vector field v.
We prove this later fact by showing that (u(β_r(v)))_r⩾ 0 uniformly converges for any 𝒞^1 vector fields u and v on .
Let u and v be such vector fields, and r ⩾ 0.
Then u(β_r(v)) = Y_u(g(,V)) = ∇_Y_u(g(,V)), where V is the parallel transport of v along radial geodesics.
Since [,Y_u] = 0 and V = 0, one has
(u (β_r(v))) = (∇_Y_u(g(,V))) = ∇_Y_u((g(,V))),
so that (u (β_r(v))) = g(∇_Y_u(()),V) + g((),∇_Y_uV).
It now follows that one has |(u (β_r(v)))| ⩽∇_Y_uV_g()_g + V_g∇_Y_u(())_g.
Recall that S_g = 𝒪(1) (Lemma <ref>), V_g = v_g_0, and Y_u_g = 𝒪(u_g_0e^r) (Corollary <ref>).
It now follows from Lemma <ref>, Lemma <ref>, and the assumption, that
(u (β_r(v))) = 𝒪(u_g_0v_g_0e^-(a-1)r).
Consequently, (u (β_r(v))) uniformly converges for any vector fields u and v.
This concludes the proof.
In what follows, we will need to differentiate expressions involving ∇_{Y_u}E_j in the radial direction, with Y_u a normal Jacobi field and E_j an element of an admissible frame.
At a first glance, this is a priori justified only if E_j is of class 𝒞^2.
One could prove such regularity by requiring the stronger condition ∇^3 J_g = 𝒪(e^-ar).
It turns out that one needs not assume this last condition, as a consequence of the fact that E_j is solution to the first order linear differential equation E_j=0.
Indeed, let {r,x^1,…,x^2n+1} be Fermi coordinates[That is, {x^1,…,x^2n+1} are coordinates on , and that if (x^1,…,x^2n+1) corresponds to p∈, then (r,x^1,…,x^2n+1) corresponds to ℰ(r,p)∈ M.], and write E_j = ∑_i=1^2n+1E_j^i ∂_i.
Then {E_j^i} are solutions to the ODE (E^i_j)' + ∑_k=1^2n+1E_j^kS_k^i = 0, with (S_k^i) the components of the shape operator S.
As a consequence, one can consider elements of the form (∇_Y_u E_j) even though E_j is only of class 𝒞^1.
In fact, one has (∇_Y_u E_j) = -R(,Y_u)E_j.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, admitting an essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Let u be a vector field on .
Then
∇_Y_u(E_0 - ) = 𝒪_g(u_g_0e^-(a-1)r).
Let u be a vector field on , and {E_0,…,E_2n} be an admissible frame of class 𝒞^1.
Equation (<ref>) yields that
∇_Y_u(E_0-) = -∑_j=0^2n u(β_r(e_j)) E_j + ∑_j=0^2n (δ_0j - β_r(e_j)) ∇_Y_uE_j.
During the proof of Proposition <ref>, we have shown that (β_r)_r ⩾ 0 converges in 𝒞^1 topology.
Hence,
∀ j ∈{0,…,2n}, lim_r →∞ u (β_r(e_j)) = u ( lim_r →∞β_r(e_j)) = u(β(e_j)) = u(δ_0j) = 0.
Therefore, |u(β_r(e_j))| = |∫_r^∞ (u(β_r(e_j)))| ⩽∫_r^∞ | (u(β_r(e_j)))| for j ∈{0,…,2n} and r ⩾ 0.
It follows from equation (<ref>) that u(β_r(e_j)) = 𝒪(u_g_0e^-(a-1)r).
Moreover, by Corollary <ref>, one has |δ_0j-β_r(e_j)| = 𝒪(e^-ar).
Finally, Lemma <ref> yields ∇_Y_uE_j = 𝒪_g(u_ge^r).
The result now follows.
§.§ The contact form and the Carnot metric
We shall now show that if the and conditions of order a>1 are satisfied, then η^0 and γ|_{H_0× H_0} are of class 𝒞^1 and that dη^0(·,φ·) = γ.
In particular, η^0 is contact.
These results are analogous to <cit.>, although we give slightly different and considerably shorter proofs here.
The main difference is that we prove the 𝒞^1 convergence of elements of the form (η^j_r(v))_r⩾ 0, instead of 𝒞^0 convergence of elements of the form (ℒ_uη^j_r)_r⩾ 0.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then η^0 is a contact form of class 𝒞^1.
Moreover, dη^0(·,φ·) = γ, and the Reeb vector field of η^0 is ξ_0.
The proof is divided in three parts.
First, we show that η^0 is of class 𝒞^1.
Then we derive an expression for η^0(·,φ·), and deduce that η^0 is contact.
Finally, we show that ξ_0 is the Reeb vector field of η^0.
To show that η^0 is of class 𝒞^1, we show that for any vector field v, the function η^0(v) is of class 𝒞^1.
To do so, we show that for any other vector field u, (u(η^0_r(v)))_r ⩾ 0 uniformly converges on .
Let u and v be vector fields on .
Let f be the function on M̅∖̅ ̅K̅ defined by the expression f= e^r(u(η^0_r(v)) = Y_u(g(Y_v,E_0)) = ∇_Y_u(g(Y_u,E_0) ).
Then f is smooth in the radial direction.
Since [,Y_u]=0 and E_0=0, one has
f = (∇_Y_u ((g(Y_v,E_0))) = ∇_Y_u( (g(Y_v,E_0))) = ∇_Y_u(g( Y_v,E_0)).
Similarly, one has ^2f = ∇_Y_u(g(( Y_v),E_0)).
For Y_v is a Jacobi field, one has the equality ( Y_v) = -R(,Y_v), and it follows that ^2f = -∇_Y_u(R(,Y_v,,E_0)).
Notice that
R(,Y_v,,E_0) = R(,Y_v,,) + R(,Y_v,,E_0-)
= R^0(,Y_v,,) + R(,Y_v,,E_0-)
+ (R-R^0)(,Y_v,,).
One readily checks from the definition of R^0 that R^0(,Y_v,,) = -g(Y_v,), so that R^0(,Y_v,,) = -g(Y_v,E_0) - g(Y_v, - E_0).
Hence, it follows that
^2f - f = g(∇_Y_uY_v, -E_0) + g(Y_v,∇_Y_u(-E_0))
- (∇_Y_uR)(,Y_v,,E_0-) - R(SY_u,Y_v,,E_0-)
- R(,∇_Y_uY_u,,E_0-) - R(,Y_v,SY_u,E_0-)
- R(,Y_v,,∇_Y_u(E_0-)) - (∇_Y_u(R-R^0))(,Y_v,,)
- (R-R^0)(SY_u,Y_v,,) - (R-R^0)(,∇_Y_uY_v,,)
- (R-R^0)(,Y_v,SY_u,) - (R-R^0)(,Y_v,,∇_Y_u).
Note that the radial part of ∇_Y_uY_v plays no role here due to the symmetries of the Riemann curvature tensor, so that one can substitute ∇_Y_uY_v with π(∇_Y_uY_v) in this latter expression.
Recall that one has the following estimates:
* R, S = 𝒪_g(1) (Remark <ref> and Lemma <ref>),
* R-R^0,∇ R, ∇(R-R^0) = 𝒪_g(e^-ar) (condition and Remark <ref>),
* E_0- = 𝒪_g(e^-ar) (Corollary <ref>),
* Y_u,Y_v = 𝒪_g(u_g_0e^r) (Corollary <ref>),
* ∇_Y_u = 𝒪_g(u_g_0e^r) (Lemma <ref>),
* π(∇_Y_uY_v) = 𝒪_g((v_g_0+∇^g_0v_g_0)u_g_0e^3/2r) (Lemma <ref>),
* ∇_Y_u(E_0-) = 𝒪_g(u_g_0e^-(a-1)r) (Corollary <ref>).
Hence, the triangle inequality yields
^2f - f = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(2-a)r).
Define h = f - f, and notice that h + h = ^2f - f.
It now follows from equation (<ref>) that (e^rh) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(3-a)r).
Therefore, one has
e^rh =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(3-a)r) if 1 < a < 3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)) if a=3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0) if a > 3.
Notice that e^-rh = (e^-rf) = (u(η^0_r(v)) ).
Hence,
(u(η^0_r(v)) ) =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-1)r) if 1 < a < 3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)e^-2r) if a=3,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-2r) if a > 3.
Consequently, (u(η^0_r(v)))_r⩾ 0 uniformly converges as r→∞,
and η^0 is then of class 𝒞^1.
We shall now derive an expression for η^0(·,φ·), by computing the limit of η^0_r(·,φ_r·) as r →∞.
Let u and v be vector fields on .
For r ⩾ 0, it holds that
η^0_r(u,φ_rv) = u(η^0_r(φ_rv)) - (φ_rv)(η^0_r(u)) - η^0_r([u,φ_rv])
= e^-r( Y_u g(Φ Y_v,E_0) - (Φ Y_v)g(Y_u,E_0) - g([Y_u,Φ Y_v],E_0) )
= e^-r(g(Φ Y_v,∇_Y_uE_0) - g(Y_u,∇_Φ Y_vE_0)).
On the one hand, it holds that
g(Φ Y_v,∇_Y_uE_0) = g(Φ Y_v,∇_Y_u) + g(Φ Y_v,∇_Y_u(E_0-))
= g(Φ Y_v,JSY_u) + g(Φ Y_v,(∇_Y_uJ))+ g(Φ Y_v,∇_Y_u(E_0-))
= -g(JΦ Y_v,SY_u) + g(Φ Y_v,(∇_Y_uJ))+ g(Φ Y_v,∇_Y_u(E_0-)).
On the other hand, one has
g(Y_u,∇_Φ Y_vE_0) = g(Y_u,∇_Φ Y_v) + g(Y_u,∇_Φ Y_v(E_0-))
= g(Y_u,JSΦ Y_v) + g(Y_u, (∇_Φ Y_vJ)) + g(Y_u,∇_Φ Y_v(E_0-))
= -g(JY_u,SΦ Y_v) + g(Y_u, (∇_Φ Y_vJ)) + g(Y_u,∇_Φ Y_v(E_0-)).
It then follows from the assumption, Corollary <ref> and Corollary <ref> that
η^0_r(u,φ_rv) = e^-r(g(JY_u,SΦ Y_v) - g(JΦ Y_v,SY_u)) + 𝒪(u_g_0v_g_0e^-(a-1)r).
Fix {E_0,…,E_2n} an admissible frame.
From Corollary <ref> and Corollary <ref>, one has the estimate Y_v = η^0(v) e^r + ∑_j=1^2nη^j(v)e^r/2E_j + 𝒪_g(v_g_0e^-(a-1)r).
It now follows from Lemma <ref> that JΦ Y_v = -∑_j=1^2nη^j(v) e^r/2 E_j + 𝒪_g(v_g_0e^-(a-1)r).
Corollary <ref> now yields
g(JΦ Y_v,SY_u) = -e^r/2∑_j=1^2nη^j(v)η^j(u) + 𝒪(u_g_0v_g_0e^-(a-2)r).
Similarly, one shows that
g(JY_u,SΦ Y_v) = e^r/2∑_j=1^2nη^j(u)η^j(v) + 𝒪(u_g_0v_g_0e^-(a-2)r).
Recall the local expression γ = ∑_j=1^2nη^j⊗η^j.
Equations (<ref>), (<ref>) and (<ref>) now yield
η^0_r(u,φ_rv) = γ(u,v) + 𝒪(u_g_0v_g_0e^-(a-1)r).
By uniform convergence of the first derivatives of (η^0_r)_r⩾ 0, it follows that η^0(·,φ·) = γ.
Proposition <ref> hence shows that η^0 is non-degenerate on η^0.
In particular, η^0 is a contact form.
To conclude, let us show that ξ_0 is the Reeb vector field of η^0.
Since η^0(ξ_0) = 1, it remains to show that η^0(ξ_0,v) = 0 for all vector field v tangent to H_0.
Let v be such a vector field.
The image of φ being exactly H_0, there exists a vector field u on such that v = φ u.
By Proposition <ref>, γ is φ-invariant and φξ_0=0.
From the preceding point, η^0(·,φ·) = γ.
Hence, η^0(ξ_0,v) = η^0(ξ_0,φ u) = γ(ξ_0,u) = γ(φξ_0,φ u) = γ(0,φ u) = 0.
This concludes the proof.
Under the assumptions of Theorem <ref>, the distribution H_0 = η^0 is a contact distribution of class 𝒞^1.
The next result shows that under the assumptions of Theorem <ref>, the Carnot metric γ^0 on H_0 is of the same regularity.
The proof is very similar.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that it satisfies the and conditions of order a > 1.
Then γ^0 = γ|_H_0× H_0 is of class 𝒞^1.
Let {E_0,…,E_2n} be an admissible frame of class 𝒞^1 defined on a cone E(_+× U), and fix j∈{1,…,2n}.
Let us first show that η^j is of class 𝒞^1 on the distribution H_0|_U.
To do so, we shall prove that (u(η^j_r(v)))_r ⩾ 0 locally uniformly converges on U for v tangent to H_0|_U and u any vector field on U.
Let u and v be such vector fields, and r ⩾ 0 be fixed.
Let f^j = e^r/2 u(η^j_r(v)) = ∇_Y_u(g(Y_v,E_j)), which is smooth in the radial direction.
Since [,Y_u] = 0 and E_j = 0, one has
^2 f^j = ((∇_Y_u(g(Y_v,E_j)))) = ∇_Y_u g(( Y_v),E_j),
and, for Y_v is a Jacobi field, one has ^2f^j = - ∇_Y_u(R(,Y_v,,E_j)).
One checks from the very definition of R^0 that R^0(,Y_v,,E_j) = -1/4g(Y_v,E_j) - 3/4g(Y_v,)g(E_j,).
Therefore, one has the equality
^2f^j - 1/4f^j = 3/4g(∇_Y_uY_v,)g(E_j,) + 3/4g(Y_v,∇_Y_u)g(E_j,)
+ 3/4g(Y_v,)g(∇_Y_uE_j,) + 3/4g(Y_v,)g(E_j,∇_Y_u)
- ∇_Y_u(R-R^0)(,Y_v,,E_j) - (R-R^0)(SY_u,Y_v,,E_j)
- (R-R^0)(,∇_Y_uY_v,,E_j) - (R-R^0)(,Y_v,SY_u,E_j)
- (R-R^0)(,Y_v,,∇_Y_uE_j).
As in the proof of Theorem <ref>, the radial component of ∇_Y_uY_v plays no role due to the symmetries of R, so that one can substitute this term with π(∇_Y_uY_v).
Moreover, g(E_j,) = β_r(e_j), where (β_r)_r ⩾ 0 is the family defined in Section <ref>.
Recall that one has the following estimates:
* R, S = 𝒪_g(1) (Remark <ref> and Lemma <ref>),
* R-R^0,∇ (R-R^0) = 𝒪_g(e^-ar), (condition and Remark <ref>),
* β_r(e_j) = 𝒪(e^-ar) (Corollary <ref>),
* Y_u = 𝒪_g(u_g_0e^r) and Y_v = 𝒪_g(v_g_0e^r/2) (Corollary <ref>),
* ∇_Y_uE_j = 𝒪_g(u_g_0e^r) (Lemma <ref>),
* ∇_Y_u = 𝒪_g(u_g_0e^r) (Lemma <ref>),
* π(∇_Y_uY_v) = 𝒪_g((∇^g_0u_g_0 + u_g_0)v_g_0e^3/2r)
(Lemma <ref>).
It follows from the triangle inequality that ^2 f^j - 1/4f^j = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-3/2)r).
Let h^j be the function defined by h^j = f^j - 1/2f^j.
Then h^j + 1/2h^j = ^2f^j - 1/4f^j, from which is derived that (e^r/2h^j) = 𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-2)r).
A straightforward integration now yields
e^r/2h^j =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^(2-a)r) if 1 < a < 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)) if a = 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0) if a > 2.
Notice that e^-r/2h^j = (e^-r/2f^j) = ( u(η^j_r(v))), from which is deduced that
( u (η^j_r(v))) =
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-(a-1)r) if 1 < a < 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0(r+1)e^-r) if a = 2,
𝒪((v_g_0+∇^g_0v_g_0)u_g_0e^-r) if a > 2.
In any case, ( u(η^j_r(v)))_r⩾ 0 locally uniformly converges.
As a consequence, η^j|_H_0|_U is of class 𝒞^1.
We immediately deduce from the local expression γ = ∑_j=1^2nη^j⊗η^j that γ^0=γ|_H_0× H_0 is of class 𝒞^1.
This concludes the proof.
With the stronger assumption a > 3/2, the same proof shows that for j∈{1,…,2n}, η^j is of class 𝒞^1 in all directions, and so is γ.
Indeed, in this case, on has to consider the estimate Y_v = 𝒪_g(v_g_0e^r) instead.
§.§ The almost complex structure
We shall now show that the almost complex structure J_0 defined on the 𝒞^1 distribution H_0 is of the same regularity, and that it is formally integrable.
We first remark that the local vector fields {ξ_1,…,ξ_2n} are of class 𝒞^1, although the Reeb vector field ξ_0 might only be continuous.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, with essential subset K.
Assume that (M,g,J) satisfies the and conditions of order a > 1.
Let {η^0,…,η^2n} be the local coframe associated to any admissible frame {E_0,…,E_2n}.
Let {ξ_0,ξ_1,…,ξ_2n} be its dual frame.
Then for j∈{1,…,2n}, ξ_j is a vector field of class 𝒞^1.
Throughout the proof of Theorem <ref>, we have shown that {η^1,…,η^2n} is a 𝒞^1 trivialisation of the 𝒞^1 vector bundle (H_0,).
Consequently, {ξ_1,…,ξ_2n} is a 𝒞^1 trivialisation of the vector bundle H_0.
We now show that under the condition of order a > 0, admissible frames can almost be chosen to be J-frames, in the following sense.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at least 4, and with essential subset K.
Assume that it satisfies the condition of order a > 0.
Then there exists an admissible frame {E_0,…,E_2n} such that
∀ j ∈{1,…,n}, _2j-1 - E_2j = 𝒪_g(e^-ar).
Let U⊂ be an open domain on which H_0 is trivialisable.
Let e_1 be a unit section of H_0|_U of class 𝒞^1, and let E_1 be its parallel transport along radial geodesics.
Consider the family of 1-forms β^1_r H_0|_U → defined by β^1_r(v) = g(V, _1)|__r,
where V is the parallel transport of v along radial geodesics.
The same study as that conducted for the proofs of Lemma <ref> and Proposition <ref> shows that under the condition of order a >1, there exists a nowhere vanishing 1-form β^1 on U, which is of class 𝒞^1, such that ‖β^1_r - β^1‖_{g_0} = 𝒪(e^{-ar}).
Let e_2 be the unique 𝒞^1 section of H_0|_U such that e_2 ⊥^g_0β^1, e_2_g_0 = 1 and β^1(e_2) > 0.
Define E_2 to be its parallel transport along radial geodesics.
Similarly to Corollary <ref>, one shows that E_2-_1 = 𝒪_g(e^-ar).
The rest of the proof follows by induction.
We refer to such an admissible frame as a J-admissible frame.
We are now able to show the last Theorem of this section, exhibiting a strictly pseudoconvex CR structure at infinity.
Let (M,g,J) be a complete, non-compact, almost Hermitian manifold of dimension at last 4, with essential subset K.
Assume that it satisfies the and condition of order a > 1.
Let J_0 be the almost complex structure on H_0 induced by φ.
Then J_0 is of class 𝒞^1, and is formally integrable.
In particular, (,H_0,J_0) is a strictly pseudoconvex CR manifold of class 𝒞^1.
Let {E_0,…,E_2n} be a J-admissible frame of class 𝒞^1, and {η^1,…,η^2n} and {ξ_1,…,ξ_2n} be the associated 𝒞^1 coframe and frame.
Then {,E_0,…,E_2n} is an orthonormal frame.
Since Φ() = Φ()= 0, one has Φ = ∑_j=0^2n g(·,E_j)⊗Φ(E_j).
It then follows from Lemma <ref> and Lemma <ref> that Φ = ∑_j=1^n g(·,E_2j-1)⊗ E_2j - g(·,E_2j)⊗ E_2j-1 + 𝒪_g(e^-ar).
Corollary <ref> now yields
φ_r = ∑_j=1^nη^2j-1_r⊗ξ_2j^r - η^2j_r⊗ξ_2j-1^r + 𝒪_g_0(e^-(a-1/2)r).
Taking the limit as r→∞ shows that φ = ∑_j=1^n η^2j-1⊗ξ_2j - η^2j⊗ξ_2j-1.
Therefore, the restriction J_0= φ|_H_0 has at least the same regularity as {η^1|_H_0,…,η^2n|_H_0} and {ξ_1,…,ξ_2n}.
It follows from Theorem <ref> and Lemma <ref> that J_0 is of class 𝒞^1.
Let us now show that J_0 is formally integrable.
Recall that γ|_H_0× H_0 is J_0-invariant, so that by <cit.>, it suffices to show that N_φ|_H_0× H_0 = η^0|_H_0× H_0⊗ξ_0,
where N_A stands for the Nijenhuis tensor of the field of endomorphisms A, defined by N_A(X,Y) = -A^2[X,Y] - [A X,AY] + A[A X,Y] + A[X,A Y].
Let u and v be any vector fields on .
Using the fact that ∇ is torsion-free, one first obtains N_Φ(Y_u,Y_v) = Φ(∇_Y_uΦ)Y_v - (∇_Φ Y_uΦ) Y_v - Φ(∇_Y_vΦ)Y_u + (∇_Φ Y_vΦ) Y_u.
Recall that Φ = J - g(·,)⊗ + g(·,)⊗.
Since ∇ g = 0, ∇ = S, Φ() = Φ()=0 and Y_u,Y_v ⊥, one has
Φ(∇_Y_uΦ)Y_v = g(Y_v,)Φ(SY_u) + Φ(∇_Y_u J)Y_v,
(∇_Φ Y_uΦ)Y_v = -g(Y_v,SΦ Y_u) + g(Y_v,JSΦ Y_u) + g(Y_v,)SΦ Y_u
+(∇_Φ Y_uJ)Y_v - g(Y_v,(∇_Φ Y_uJ)),
Φ(∇_Y_vΦ)Y_u = g(Y_u,)Φ(SY_v) + Φ(∇_Y_v J)Y_u, and
(∇_Φ Y_vΦ)Y_u = -g(Y_u,SΦ Y_v) + g(Y_u,JSΦ Y_v) + g(Y_u,)SΦ Y_v
+ (∇_Φ Y_vJ)Y_u - g(Y_u,(∇_Φ Y_vJ)).
Recall that Φ takes values in the distribution {}^⊥, which is involutive as the tangent field to the foliation (_r)_r ⩾ 0 of M̅∖̅ ̅K̅.
The definition of the Nijenhuis tensor then shows that N_Φ has range in {}^⊥.
Hence, the terms in the radial direction cancel out each others, and the remaining terms yield
N_ϕ(Y_u,Y_v) = (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))
+ g(Y_v,)(Φ S Y_u - S Φ Y_u) - g(Y_u,)(Φ S Y_v - S Φ Y_v)
+ Φ((∇_Y_uJ)Y_v - (∇_Y_vJ)Y_u) - π((∇_Φ Y_uJ)Y_v) + π((∇_Φ Y_vJ)Y_u)
= (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))E_0
+ g(Y_v,E_0)(Φ S Y_u - S Φ Y_u) - g(Y_u,E_0)(Φ S Y_v - S Φ Y_v)
+ (g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))(-E_0)
+ g(Y_v,-E_0)(Φ S Y_u - S Φ Y_u) - g(Y_u,-E_0)(Φ S Y_v - S Φ Y_v)
+ Φ((∇_Y_uJ)Y_v - (∇_Y_vJ)Y_u) - π((∇_Φ Y_uJ)Y_v) + π((∇_Φ Y_vJ)Y_u),
where π is the orthogonal projection onto {}^⊥.
From now, and until the rest of the proof, we assume that u and v are tangent to H_0.
Let r ⩾ 0, and note that N_φ_r = ℰ_r^* N_Φ.
The condition,
the uniform bound on S_g (Lemma <ref>),
estimates on E_0- (Corollary <ref>),
estimates on Y_u and Y_v (Corollary <ref>),
comparison between g_0 and g_r (Corollary <ref>),
and estimates on φ_r S_r - S_r φ_r (Lemma <ref>),
now yield the existence of α_1 > 0, depending on a, such that N_φ_r(u,v) = e^-r(g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v))ξ_0^r + 𝒪_g_0(u_g_0v_g_0e^-α_1 r).
Similar calculations that the ones conducted to derive an expression for η^0_r(u,φ_rv) (see the proof of Theorem <ref>) show that there exists α_2 > 0 depending on a with
e^-r(g(Y_v,SΦ Y_u) - g(Y_u,SΦ Y_v)) = η^0(u,v) + 𝒪(u_g_0v_g_0e^-α_2 r).
The 𝒞^1 convergence of (φ_r|_H_0)_r ⩾ 0 to φ|_H_0, and the 𝒞^0 convergence of (ξ_0^r)_r ⩾ 0 to ξ_0 finally imply that N_φ|_H_0 × H_0 = lim_r→∞ N_φ_r|_H_0 × H_0 = η^0|_H_0× H_0⊗ξ_0.
Consequently, J_0 is formally integrable.
The associated Levi-form η^0|_H_0× H_0(·,J_0·) coincides with γ|_H_0× H_0, and is thus positive definite.
Ultimately, (,H_0,J_0) is a strictly pseudoconvex CR manifold, which concludes the proof.
If M has dimension 4, then J_0 is an almost complex structure of class 𝒞^1 defined on a 2-dimensional vector bundle.
Its integrability is automatic in this specific case.
Similarly to Remark <ref>, under the stronger assumption a > 3/2, one shows that φ is of class 𝒞^1 in all directions.
§ THE COMPACTIFICATION
We conclude this paper by showing our main Theorem.
We first give a construction for M̅.
Fix K an essential subset and E its normal exponential map.
Let M(∞) be the visual boundary of (M,g), which is the set of equivalent classes [σ] of untrapped unit speed geodesic rays σ, where two rays σ_1 and σ_2 are equivalent if and only if the function t⩾ 0 ↦ d_g(σ_1(t),σ_2(t)) is bounded.
By <cit.>, is in bijection with M(∞) by the map p ↦ [E(·,p)].
Define M̅ = M ∪ M(∞).
The following map
ℰ̅ : [0,1) × ∂K ⟶ M̅∖ K, ℰ̅(ρ, p) = ℰ(-lnρ, p) ∈ M∖ K if ρ > 0, and ℰ̅(0, p) = [ℰ(·,p)] ∈ M(∞) if ρ = 0,
is thus a bijection.
We endow M̅ with the structure of a compact manifold with boundary through this latter bijection.
This identifies M with the interior of M̅.
Note that if ρ > 0, then r = -lnρ is the distance to K for g in M.
A compactly supported modification of ρ in a neighbourhood of K in M provides a smooth defining function for the boundary ∂M̅ = M(∞).
By abuse of notation, we still denote it ρ.
Let η^0 be the contact form and γ be the Carnot metric given by Theorem <ref>.
Let H_0 be the associated contact distribution, and let J_0 be the integrable almost complex structure on H_0 given by Theorem <ref>.
We see these objects as defined on ∂M̅ through the diffeomorphism E̅(0,·) {0}×→∂M̅.
Then (∂M̅,H_0,J_0) is a strictly pseudoconvex CR manifold of class 𝒞^1 by Theorem <ref>.
Theorem <ref> and Remark <ref> show that the metric g has the desired asymptotic expansion (<ref>) near the boundary ∂M̅ = ρ^-1({0}).
Let us show that H_0 and J_0 are induced by a continuous ambient almost complex structure J̅.
To that end, we show that J extends continuously to the boundary.
Let {E_0,…,E_2n} be a J-admissible frame on a cone E(_+× U), and consider the frame {-∂_ρ, ξ̅_0,…,ξ̅_2n} on E̅((0,1)× U) defined by ξ̅_0 = E̅^*(ρ^-1E_0) and ξ̅_j = E̅^*(ρ^-1/2E_j) for j∈{1,…,2n}.
Notice that -∂_ρ = e^r ν on M∖ K.
Proposition <ref> and Remark <ref> show that {ξ̅_0,…,ξ̅_2n} extends continuously on the boundary E̅({0}× U), with limit {ξ_0,…,ξ_2n}.
The tangent bundle of M̅ at the boundary splits as TM̅|_∂M̅ = ∂_ρ⊕ T∂M̅ =∂_ρ⊕ξ_0 ⊕ H_0.
From the very definition of a J-admissible frame, one has
J(e^r ν) - e^r E_0, J(e^r E_0) + e^r ν = 𝒪_g(e^{-(a-1)r}),
J(e^r/2E_2j-1) - e^r/2E_2j, J(e^r/2E_2j) + e^r/2E_2j-1 = 𝒪_g(e^-(a-1/2)r), j∈{1,…, n}.
It follows that in the continuous frame
{-∂_ρ,ξ̅_0,…,ξ̅_2n},
the matrix of J reads
([ 0 -1; 1 0 ] ⊕ [ 0 -1; 1 0 ] ⊕ ⋯ ⊕ [ 0 -1; 1 0 ])
+
([ 𝒪(ρ^a) 𝒪(ρ^{a+1/2}); 𝒪(ρ^{a-1/2}) 𝒪(ρ^a) ]),
where the top left block is of size 2× 2 and the bottom right block is of size 2n × 2n.
Hence, J extends uniquely as a continuous almost complex structure J̅ up to boundary.
In addition, J̅ satisfies
J̅(-∂_ρ) = ξ_0, J̅ξ_0 = ∂_ρ, J̅ξ_{2j-1} = ξ_{2j}, and J̅ξ_{2j} = -ξ_{2j-1}, j∈{1,…,n}.
It follows that J̅|_H_0 = J_0, and that H_0 = (T∂M̅)∩(J̅T∂M̅).
This concludes the proof.
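As a consistency check, the relations above do define an almost complex structure on TM̅|_{∂M̅}: on the frame one computes
J̅^2(-∂_ρ) = J̅ξ_0 = ∂_ρ = -(-∂_ρ), J̅^2ξ_0 = J̅(∂_ρ) = -ξ_0, J̅^2ξ_{2j-1} = J̅ξ_{2j} = -ξ_{2j-1}, J̅^2ξ_{2j} = -J̅ξ_{2j-1} = -ξ_{2j},
so that J̅^2 = -Id along the boundary, as required.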
Under the stronger assumption that a > 3/2, one can show that J̅ is of class 𝒞^1 up to the boundary in all directions (see Remark <ref>).
When (M,g,J) is Kähler, (that is, if ∇ J = 0), then (M̅,J̅) is a compact complex manifold with strictly pseudoconvex CR boundary.
|
http://arxiv.org/abs/2307.07213v1 | 20230714081525 | On the maximal spectral type of nilsystems | [
"Ethan Ackelsberg",
"Florian K. Richter",
"Or Shalom"
] | math.DS | [
"math.DS"
] |
Let (G/Γ,R_a) be an ergodic k-step nilsystem for k≥ 2. We adapt an argument of Parry <cit.> to show that L^2(G/Γ) decomposes as a sum of a subspace with discrete spectrum and a subspace of Lebesgue spectrum with infinite multiplicity. In particular, we generalize a result previously established by Host–Kra–Maass <cit.> for 2-step nilsystems and a result by Stepin <cit.> for nilsystems G/Γ with connected, simply connected G.
§ INTRODUCTION
A nilmanifold is a compact manifold of the form G/Γ where G is a nilpotent Lie group and Γ is a discrete co-compact subgroup. If G is k-step nilpotent, then we say that G/Γ is a k-step nilmanifold.
For any a∈ G let R_a G/Γ→ G/Γ denote the left-translation by a on G/Γ, that is, R_a (g Γ) = (a g) Γ for all gΓ∈ G/Γ. The resulting topological dynamical system (G/Γ, R_a) is called a nilsystem. Every nilmanifold G/Γ admits a unique left-translation invariant Borel probability measure μ_G/Γ called the Haar measure on G/Γ. This allows us to associate to every topological nilsystem (G/Γ, R_a) a natural measure-preserving system (G/Γ,μ_G/Γ, R_a). The Koopman-representation of the transformation R_a is the unitary operator (which, in an abuse of notation, we also denote by R_a) on L^2(G/Γ,μ_G/Γ) defined by R_a f = f∘ R_a. We say that the nilsystem is ergodic if the only R_a-invariant functions in L^2(G/Γ,μ_G/Γ) are the constant functions. Due to their rich underlying algebraic structure, the dynamical behaviour of nilsystems is remarkably complaisant:
Let G/Γ be a nilmanifold and a ∈ G. The following are equivalent:
* The topological system (G/Γ, R_a) is transitive, i.e., there exists a point with dense orbit.
* The topological system (G/Γ, R_a) is minimal, i.e., every point has a dense orbit.
* The topological system (G/Γ, R_a) is uniquely ergodic with unique G-invariant measure μ_G/Γ.
* The measure-preserving system (G/Γ,μ_G/Γ, R_a) is ergodic.
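For orientation, consider the simplest (abelian) example: G = ℝ, Γ = ℤ, and a = α∈ℝ, so that G/Γ = 𝕋 = ℝ/ℤ and R_α(x) = x + α. The characters e_n(x) = e^{2π i n x} satisfy R_α e_n = e^{2π i n α} e_n, so the Koopman operator has purely discrete spectrum, and the four conditions of the theorem hold simultaneously precisely when α is irrational. The results below concern the genuinely nilpotent case k ≥ 2, where a Lebesgue component appears on top of this discrete part.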
Let S^1 denote the unit circle in the complex plane. If μ and ν are two Borel measures on S^1, we say that μ is absolutely continuous with respect to ν and write μ≪ν if ν(A)=0⇒μ(A)=0 for any Borel subset A⊆ S^1. This introduces a natural partial order on the family of Borel measures on S^1. We say that μ and ν are equivalent if μ≪ν and ν≪μ and write ν≈μ. The type of a Borel measure μ is defined as the equivalence class of all Borel measures that are equivalent to μ.
Let U be a unitary operator on a Hilbert space ℋ. The spectral measure of h∈ℋ is the unique finite Borel measure σ_h on S^1 satisfying
⟨U^n h , h⟩ = ∫_{S^1} t^n dσ_h(t), ∀ n∈ℤ.
The existence of this measure is guaranteed by Herglotz's theorem (see <cit.>). Moreover, there exists a finite Borel measure σ on S^1, unique up to equivalence, called the maximal spectral type of the operator U, with the property that every spectral measure of an element of ℋ is absolutely continuous with respect to σ and, conversely, every finite Borel measure absolutely continuous with respect to σ is the spectral measure of some element of ℋ <cit.>.
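For instance, if h is an eigenfunction of U, say Uh = λh with |λ| = 1, then ⟨U^n h, h⟩ = λ^n ‖h‖^2 for all n∈ℤ, and therefore
σ_h = ‖h‖^2 δ_λ,
a Dirac mass at the eigenvalue. Spectral measures of this type are the building blocks of the discrete part of the spectrum discussed below.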
Spectral theory provides an important framework for analyzing linear operators and links operator theory to harmonic analysis.
In ergodic theory, the study of the spectral properties of the Koopman representation of a measure-preserving transformation follows a long history, dating back to the foundational works of von Neumann and Koopman in the 1930s (<cit.>, <cit.>, <cit.>).
It directly relates to important ergodic-theoretic properties of the underlying dynamical system (such as mixing properties or rigidity phenomena), plays an important role in the classification of measure-preserving systems and their joinings
, and aids the study of the stability and long-term behaviour of the system, which connects to recurrence and convergence problems in ergodic theory.
For more information about the spectral theory of dynamical systems and its applications, we refer the reader to the survey <cit.>.
In general, spectral measures are difficult to compute and even for some of the most well-studied systems the maximal spectral type remains unknown. The purpose of this paper is to settle this problem for the class of nilsystems. Nilsystems and some generalizations thereof have become increasingly important in ergodic theory due to their connections to the structure theory of ergodic averages <cit.>, <cit.>, <cit.>, <cit.>, additive combinatorics <cit.>, number theory <cit.>, <cit.>, <cit.>, nilspace theory <cit.>, <cit.>, <cit.>, <cit.>, and Higher-order Fourier analysis <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, <cit.>. Therefore, determining the maximal spectral type of nilsystems finds several applications. To state our main result, we need one more definition.
Let U be a unitary operator on a Hilbert space ℋ.
* U has discrete spectrum if the maximal spectral type is a discrete measure on S^1, or equivalently, if the eigenfunctions of U span a dense subspace of ℋ.
* U has infinite Lebesgue spectrum if ℋ decomposes into a direct sum of infinitely many pairwise orthogonal closed U-invariant subspaces, each with maximal spectral type equivalent to the Lebesgue measure on S^1.
* U has compact-Lebesgue spectrum if ℋ = ℋ_d ⊕ ℋ_l is an orthogonal decomposition of ℋ into a closed subspace ℋ_d with discrete spectrum and a closed subspace ℋ_l with infinite Lebesgue spectrum.
In this paper we will only consider separable Hilbert spaces. Therefore, infinite Lebesgue spectrum is a synonym of countable Lebesgue spectrum.
The main result of this paper is the following:
Let k≥ 2, and let (G/Γ,μ_G/Γ, R_a) be an ergodic k-step nilsystem. Then R_a has compact-Lebesgue spectrum on L^2(G/Γ,μ_G/Γ).
This result was previously established by <cit.> under the additional assumption that G is connected and simply connected (see also <cit.> for history and comments) and by Host, Kra and Maass <cit.> for 2-step nilsystems (k=2) in full generality. We give an alternative proof of the k=2 case in <ref>.
Nilsystems are closely related to another class of systems called skew products. Yet, the results of Theorem <ref> fail for these systems. Consider the system X = (𝕋^2,μ_𝕋^2,T) equipped with the Haar measure μ_𝕋^2 and the action T(x,y) = (x + α , y+φ(x)), where α is irrational and φ:𝕋→ℝ is continuous. When α is a Liouville number, Herman <cit.> constructed an absolutely continuous function φ for which X is ergodic, rigid and is not a group rotation. Since rigid systems have singular maximal spectral type, this example contradicts the conclusion of Theorem <ref>. More related examples are surveyed in <cit.>.
As an application of our main theorem, we also recover the following theorem.
Let k≥ 1 and let (G/Γ,μ_G/Γ, R_a) be an ergodic k-step nilsystem. Let f∈ L^∞(G/Γ,μ_G/Γ) and suppose that f is orthogonal to the subspace spanned by all eigenfunctions of R_a. Then
lim_n→∞∫_G/Γ T^n f ·f dμ_G/Γ = 0.
This result was previously established for k=2 by Host-Kra-Maass <cit.>, and for general k by Griesmer <cit.>, and recently by Frantizkinakis and Kuca <cit.>, using different methods. We provide yet another proof.
By <ref> we have ∫_G/Γ T^n f ·f dμ_G/Γ = ∫_S^1 t^n dσ_f where σ_f is the spectral measure of f. Write L^2(G/Γ,μ_G/Γ) = ℋ_d ⊕ℋ_l, where ℋ_d has discrete spectrum and ℋ_l has infinite Lebesgue spectrum. The subspace spanned by all eigenfunctions corresponds to ℋ_d. By assumption, f is orthogonal to ℋ_d and hence belongs to ℋ_l. Therefore σ_f is absolutely continuous with respect to Lebesgue. The result now follows by Riemann-Lebesgue lemma.
§.§ Acknowledgements
The first and third author are supported by the National Science Foundation under grant DMS-1926686. We would like to thank Mariusz Lemańczyk for helpful discussions leading to Remark <ref>.
§ RELEVANT RESULTS ABOUT NILSYSTEMS
In this preparatory section we survey some well known results about nilsystems used in the proof of <ref>.
Let G be a Lie group. For a, b ∈ G, the commutator of a and b is the element [a,b] = a^-1b^-1ab ∈ G. For subgroups H_1, H_2 ≤ G, we denote by [H_1, H_2] the group generated by the commutators [a,b] with a ∈ H_1, b ∈ H_2. For nilpotent Lie groups, an important associated object is its lower central series,
G=G_1 ⊇ G_2 ⊇ … ⊇ G_k ⊇ G_k+1={e},
defined recursively as G_1 = G and G_i+1 = [G, G_i] for i ≥ 1.
Recall that G_k+1 = {e} if and only if G is nilpotent of step ≤ k.
The following properties of nilsystems are useful for us.
(1) If any of the conditions in <ref> hold, then (G/Γ, R_a) is isomorphic to a nilsystem (H/Λ, R_b) where H is generated by b ∈ Y and H^0, the connected component of the identity in H (see <cit.>).
(2) If N ≤Γ is normal in G, then G/Γ≅ (G/N)/(Γ/N). Thus, by taking the maximal such N with respect to inclusion, we may assume without loss of generality that Γ contains no normal subgroup of G.
Let G/Γ be a nilmanifold, let a ∈ G such that (G/Γ, μ_G/Γ, R_a) is ergodic, and assume that G = ⟨ G^0, a ⟩.
Then G_i is connected for every i ≥ 2.
First observe that a∉G_2: otherwise X admits a non-ergodic factor G/G_2Γ, which is a contradiction. It then follows by <cit.> that G_i is connected for i ≥ 2.
Fix k ≥ 2, and
let (G/Γ, μ_G/Γ, R_a) be an ergodic k-step nilsystem.
If f ∈ L^2(X) is an eigenfunction, then f is measurable with respect to the factor G/G_2Γ.
This was established by Leibman in <cit.>.
§ CRITERION FOR LEBESGUE SPECTRUM
The following lemma is a corollary of the Peter-Weyl theorem and is due to Parry <cit.>.
Let k≥ 1 and let G/Γ and a∈ G, be such that (G/Γ,μ_G/Γ,R_a) is an ergodic k-step nilsystem and suppose that Γ contains no non-trivial normal subgroups of G. Then
L^2(G/Γ,μ_G/Γ) = ⊕_γ∈Ĝ_k V_γ,
where
V_γ = {f∈ L^2(G/Γ,μ_G/Γ) : f(ux) = γ(u) f(x) ∀ u∈ G_k},
and Ĝ_k is the character group of G_k.
Our main tool for establishing Lebesgue spectrum is the following criterion of Parry.
Let G/Γ and a ∈ G be such that (G/Γ, μ_G/Γ, R_a) is an ergodic nilsystem, and assume that G = ⟨ G^0, a ⟩ and Γ contains no nontrivial normal subgroups of G, which is an assumption that we can always make without loss of generality due to <ref>. Let G=G_1 ⊇ G_2 ⊇ … ⊇ G_k ⊇ G_k+1={e} denote the lower central series of G and write L^2(G/Γ, μ_G/Γ) = ⊕_γ∈Ĝ_k V_γ as in (<ref>).
Suppose that for every u∈ G_k there exists b_u∈ G_k-1 such that [a,b_u]=u. Then for any nontrivial γ∈Ĝ_k∖{1}, the maximal spectral type of (V_γ,R_a) is Lebesgue.
Let γ be a nontrivial character and f∈ V_γ. Let u∈ G_k and let b_u be as in the proposition. Then
ab_u = b_u a u
and more generally, for all n∈ℤ we have
a^nb_u = b_ua^n u^n.
For f∈ V_γ we let σ_f denote the spectral measure of f with respect to R_a. Since b_u commutes with G_k, it maps V_γ to itself. We then have
∫_S^1 λ^n dσ_{b_u f} = ⟨ a^n b_u f, b_u f ⟩ = ⟨ b_u a^n u^n f, b_u f ⟩ = ⟨ a^n u^n f, f ⟩ = ∫_S^1 γ(u)^n λ^n dσ_f.
That is, σ_{b_u f} is equal to the measure σ_f^γ(u) obtained by the change of variables λ↦γ(u) λ. Let σ be the maximal spectral type of V_γ. In view of <ref> the set γ(G_k) is a connected subgroup of S^1. Since γ∈Ĝ_k∖{1} we conclude that γ(G_k)≠{1} and hence γ(G_k)=S^1. So
if ν≪σ, then ν^s ≪σ for all s∈ S^1. We deduce from this observation that σ is equivalent to the Lebesgue measure. First note that ∫_S^1σ^t dt is a rotation-invariant measure on S^1, and so it must be equal to Lebesgue. Therefore, the Lebesgue measure is absolutely continuous with respect to σ. For the other direction, let A be a set with Lebesgue measure zero and assume for contradiction that σ(A)≠ 0. By definition it follows that σ^t(t^-1A)> 0, but since σ^t is absolutely continuous with respect to σ we get σ(t^-1A)> 0 for all t∈ S^1. Therefore, ∫_S^1σ(t^-1A) dt >0, but the measure defined by this integral is a multiple of the Lebesgue measure, which leads to a contradiction since A has Lebesgue measure zero.
§ PROOF OF THE MAIN RESULT
The k=2 case is proved in <cit.> (see also <ref> below), so assume k > 2 and (G/Γ, μ_G/Γ, R_a) is an ergodic k-step nilsystem.
By <ref>, we may assume G = ⟨ G^0, a ⟩ and Γ does not contain any normal subgroup of G. In particular, G_k is compact and connected, and G_k ∩Γ = {e}.
By <ref>, and induction on the degree of nilpotency of G, it suffices to show
L^2(G/Γ, μ_G/Γ) ≅ L^2(G/G_kΓ,μ_G/G_kΓ) ⊕ V
where V is a closed and invariant subspace of L^2(G/Γ, μ_G/Γ) on which R_a has infinite Lebesgue spectrum.
If G is (k-1)-step nilpotent, then V=0 and there is nothing to prove. We therefore assume that this is not the case.
As in the proof of <ref>, the space V decomposes as V = ⊕_γ V_γ with V_γ = { v ∈ V : u · v = γ(u)v for all u ∈ G_k }. Taking a quotient by the kernel of γ, for γ∈Ĝ_k, we may assume that G_k is a subgroup of S^1. Since G_k is non-trivial and connected, we then actually have G_k = S^1.
We claim that A:={[a,g] : g∈ G_k-1} = G_k. Since g↦ [a,g] is continuous, and G_k-1 is connected (by <ref>), we have that A is connected.
Moreover, x ↦ [a, x] is a homomorphism from G_k-1 to G_k. Indeed,
[a,xy] = a^-1 y^-1 x^-1 a x y = a^-1 y^-1 (a a^-1) x^-1 a x y = a^-1 y^-1 a [a,x] y = [a,x][a,y].
Therefore, A is a connected subgroup of S^1 and so must be all of S^1 or trivial.
Suppose for contradiction that A = {e}. We claim that this implies [G, G_k-1] = {e}, or equivalently that G is (k-1)-step nilpotent which contradicts the assumption. Since G = ⟨ G^0, a ⟩, it suffices to show [G^0, G_k-1] = {e}. Fix y ∈ G. Since (G/Γ, R_a) is minimal (cf. <ref>), we may find sequences γ_n ∈Γ and t_n ∈ such that a^t_nγ_n → y in G. Let γ∈ G_k-1∩Γ.
Then, on the one hand, γ a^t_nγ_n →γ y. On the other hand, γ a^t_nγ_n = a^t_nγγ_n=a^t_nγ_n γ [γ,γ_n]. But since [γ,γ_n]∈Γ∩ G_k ={e}, we have a^t_nγ_n γ→ yγ.
Hence, γ y = y γ. Thus,
[G, G_k-1∩Γ] = {e}.
Now consider the map s ↦ [s, ·] from G^0 to Hom(G_k-1, G_k). By (<ref>), the homomorphism [s, ·] is (G_k-1∩Γ)-invariant for each s ∈ G^0. It is also G_k-invariant. Hence, [s, ·] descends to a homomorphism from H = G_k-1/G_k(G_k-1∩Γ) to G_k = S^1. The group H is a compact abelian group, so Hom(H, G_k) = Ĥ is discrete. But since G^0 is connected, s↦ [s,·] is continuous, and [e, ·] is trivial, we conclude [G^0, G_k-1] = {e}, yielding a contradiction.
We deduce that A=G_k. In this case the assumption in <ref> is satisfied and so the maximal spectral type of V_γ is Lebesgue. Since G_k is connected it has infinitely many non-trivial characters, showing that the Lebesgue spectrum has infinite multiplicity. This completes the proof of <ref>.
§ MAXIMAL SPECTRAL TYPE OF 2-STEP NILSYSTEMS
In this appendix, we present a short proof of <ref> for 2-step nilsystems. The main fact which enables us to produce compact-Lebesgue spectrum in this case is that compact-Lebesgue spectrum is preserved by relatively independent joinings:
Let (X, μ, T) and (Y, ν, S) be measure-preserving systems with compact-Lebesgue spectrum.
Then the relatively independent joining (X × Y, μ×_Z ν, T × S) over any common factor Z
also has compact-Lebesgue spectrum.
Let π_1 : X → Z and π_2 : Y → Z be factor maps.
Let K_T and K_S be the Kronecker factors of (X, μ, T) and (Y, ν, S) respectively.
Note that the meet K_T Z is equal to the Kronecker factor of Z, which we denote by K_Z.
Similarly, K_S Z = K_Z.
We have the following splittings into invariant subspaces
L^2(X, μ) = L^2(K_Z) ⊕ U ⊕ V_1 ⊕ V_2
and
L^2(Y, ν) = L^2(K_Z) ⊕ U ⊕ W_1 ⊕ W_2,
where
L^2(K_Z) ⊕ U = L^2(Z),
L^2(K_Z) ⊕ V_1 = L^2(K_T),
L^2(K_Z) ⊕ W_1 = L^2(K_S).
By assumption, . T |_U ⊕ V_2 and . S |_U ⊕ W_2 have Lebesgue spectrum.
We claim
L^2(X × Y, μ×_Z ν) = L^2(Z) ⊕ V ⊕ W ⊕ (V ⊗ W),
where V = V_1 ⊕ V_2, W = W_1 ⊕ W_2, and V ⊗ W is the Hilbert space tensor product of V and W.
Indeed, suppose f ∈ L^2(X × Y, μ×_Z ν).
We may assume without loss of generality that f(x,y) = g(x) h(y) for some functions g ∈ L^2(X, μ), h ∈ L^2(Y, ν).
Using the splittings above, we may write
g = g̃∘π_1 + v and h = h̃∘π_2 + w,
where g̃ = (π_1)_*g, h̃ = (π_2)_*h, v ∈ V, and w ∈ W.
Since π_1(x) = π_2(y) for (μ×_Z ν)-a.e. (x,y) ∈ X × Y, we have
f(x,y) = g̃(z) h̃(z) + v(x) h̃(z) + g̃(z) w(y) + v(x) w(y),
where z = π_1(x) = π_2(y).
This decomposition consists of a sum of elements from L^2(Z), V, W, and V ⊗ W as desired.
Decomposing L^2(Z), V, and W further, the space L^2(X × Y, μ×_Z ν) is equal to
L^2(K_Z) ⊕ U ⊕ V_1 ⊕ V_2
⊕ W_1 ⊕ W_2 ⊕ (V_1 ⊗ W_1) ⊕ (V_1 ⊗ W_2)
⊕ (V_2 ⊗ W_1) ⊕ (V_2 ⊗ W_2).
It is easily checked that T × S has discrete spectrum on the subspace
L^2(K_Z) ⊕ V_1 ⊕ W_1 ⊕ (V_1 ⊗ W_1).
Moreover, on the orthogonal complement
U ⊕ V_2 ⊕ W_2 ⊕ (V_1 ⊗ W_2) ⊕ (V_2 ⊗ W_1) ⊕ (V_2 ⊗ W_2),
T × S has Lebesgue spectrum.
This is clear for the subspaces U, V_2, and W_2.
For the remaining subspaces, note that for v ∈ V and w ∈ W, one has σ_{v ⊗ w} = σ_v * σ_w.
Hence, if the spectral measure associated to either v or w is the Lebesgue measure,
then σ_v ⊗ w is also the Lebesgue measure.
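For the reader's convenience we record the standard one-line computation behind the last two claims (it is not spelled out in the original text): for v ∈ V and w ∈ W,
σ̂_{v⊗w}(n) = ⟨ (T× S)^n (v⊗ w), v⊗ w ⟩ = ⟨ T^n v, v ⟩ ⟨ S^n w, w ⟩ = σ̂_v(n) σ̂_w(n), n ∈ ℤ,
and since the Fourier coefficients of a convolution are the products of the Fourier coefficients, σ_{v⊗w} = σ_v * σ_w; convolving any finite measure on S^1 with the Lebesgue measure returns a multiple of the Lebesgue measure.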
The following description of the structure of 2-step nilsystems allows us, with the help of <ref>, to reduce to the case that either G is connected or the niltranslation is isomorphic to an affine transformation on a finite-dimensional torus.
Let G/Γ be a connected 2-step nilmanifold, and suppose (G/Γ, R_a) is a minimal nilsystem. Let G^0 denote the connected component of the identity in G and Γ^0=Γ∩ G^0.
Then there exist b ∈ G^0, d ∈ ℕ, and a minimal unipotent affine transformation S : 𝕋^2d→𝕋^2d such that
(G/Γ, R_a) is a factor of a relatively independent joining of (G^0/Γ^0, R_b) with (𝕋^2d, S).
Since G/Γ is connected, there exists γ∈Γ so that b = a γ∈ G^0.
Then for any x = g Γ∈ X, we have ax = bγ^-1gΓ = bg[g, γ]Γ.
Since G is a 2-step nilpotent group, the commutator [g, γ] belongs to the center of G, so ax = [g, γ] bx.
Moreover, one can check that if g ≡ h Γ, then [g, γ] ≡ [h, γ] Γ, so we may write ax = [x, γ] bx without any ambiguity.
If [x, γ] = Γ for every x ∈ X, then R_a = R_b, so there is nothing to prove.
Assume [x, γ] ≠ Γ for some x ∈ X.
Denote by [X, γ] the set {[x, γ] : x ∈ X}⊆ G_2/(G_2 ∩Γ).
Then [X, γ] is a connected abelian Lie group, so there is an isomorphism φ : [X, γ] →𝕋^d for some d ∈ ℕ.
Define S : 𝕋^2d→𝕋^2d by S(u,v) = (u+α, v+u), where α = φ([b, γ]).
The map (u,v) ↦ u is clearly a factor map (𝕋^2d, S) → (𝕋^d, R_α).
Moreover, the map x ↦φ([x, γ]) is a factor map (X, R_b) → (𝕋^d, R_α).
Indeed, for any x ∈ X,
φ([bx, γ]) = φ([b, γ] [x, γ]) = φ([x, γ]) + α.
The relatively independent joining of (X, R_b) and (^2d, S) over the factor (^d, R_α) is isomorphic to the system (Z, T), where Z = X ×^d and
T(x,v) = (bx, v + φ([x, γ])).
We claim (X, R_a) is a factor of (Z, T).
Define π : Z → X by π(x,v) = φ^-1(v)x.
Then
π(T(x,v)) = π(bx, v + φ([x, γ]))
= φ^-1(v) [x, γ] bx = φ^-1(v) ax = a φ^-1(v)x = R_a π(x,v).
Moreover, π(x,1) = x, so π is surjective.
Thus, π is a factor map (Z, T) → (X, R_a). Since (X,R_b) and (G^0/Γ^0,R_b) are isomorphic, the proof is complete.
It is natural to ask what would be a version of Theorem <ref> for higher values of k. The system (^2d, S) appearing in the conclusion of Theorem <ref> belongs to a class of systems called Weyl systems introduced in <cit.>. A Weyl system (Y,S) consists of a compact abelian Lie group Y and a unipotent affine transformation S : Y → Y. Weyl systems form a subclass of the class of nilsystems and enjoy many nice dynamical properties (see <cit.>).
We believe that Theorem <ref> can be generalized to higher values of k as follows:
Let G/Γ be a connected k-step nilmanifold, and suppose that (G/Γ,R_a) is a minimal nilsystem. Let G^0 denote the connected component of the identity in G and Γ^0=Γ∩ G^0. Then there exists b∈ G^0, and a k-step Weyl system (Y,S) such that (G/Γ,R_a) is a factor of a relatively independent joining of (G^0/Γ^0,R_b) with (Y,S).
We can now prove <ref> when k=2.
We will first carry out the proof in the case that X = G/Γ is a connected 2-step nilmanifold and then deduce the general case from this one.
By <ref>, (X, μ_X, R_a) is a factor of a relatively independent joining of an ergodic nilsystem (X, μ_X, R_b) with b ∈ G^0 and an ergodic affine transformation (𝕋^2d, μ_𝕋^2d, S) for some d ∈ ℕ.
A factor of a system with compact-Lebesgue spectrum clearly has compact-Lebesgue spectrum, so it suffices to prove that the relatively independent joining of (X, μ_X, R_b) with (𝕋^2d, μ_𝕋^2d, S) has compact-Lebesgue spectrum. By <ref>, we may further reduce to showing that each of the systems (X, μ_X, R_b) and (𝕋^2d, μ_𝕋^2d, S) has compact-Lebesgue spectrum.
To show that (X, μ_X, R_b) has compact-Lebesgue spectrum, we may assume X = G/Γ with G connected by <ref>, since b ∈ G^0. In this case, one may follow the proof in <ref> as written, since G_k-1 = G is now a connected group.
The fact that (𝕋^2d, μ_𝕋^2d, S) has compact-Lebesgue spectrum follows from a straightforward calculation. We give the details for S of the form S(u,v) = (u + α, v + u) (as appears in the proof of <ref>) for completeness.
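For the computation below we need the k-th iterate of S; the following elementary induction is our own addition and is only recorded for completeness:
S^k(u,v) = ( u + kα, v + ku + (k(k-1)/2) α ), k ≥ 1,
since each application of S adds α to the first coordinate and adds the current value of the first coordinate to the second, so after k steps the second coordinate has gained ku + (0 + 1 + ⋯ + (k-1))α = ku + (k(k-1)/2)α.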
First, L^2(𝕋^2d) splits as U ⊕ V, where U ≅ L^2(𝕋^d) consists of functions f(u,v) that depend only on u.
The transformation S acts with discrete spectrum on U, with eigenvalues e(n ·α), n ∈ ℤ^d, corresponding to the eigenfunctions e_n,0(u,v) = e(n · u).
Now V is spanned by the functions e_n,m(u,v) = e(n · u + m · v) with n, m ∈ ℤ^d, m ≠ 0.
By direct calculation, the spectral measure σ_n,m of e_n,m has Fourier coefficients
σ̂_{n,m}(k) = ⟨ S^k e_n,m, e_n,m ⟩ = ∫_{𝕋^d ×𝕋^d} e_n,m( u+kα, v + ku + (k(k-1)/2)α) e_n,m(u,v) du dv
= e ( ( kn + (k(k-1)/2) m ) ·α) ∫_{𝕋^d} e(k m · u) du = 0
for k ≠ 0, since km ∈ ℤ^d ∖{0} gives rise to a nontrivial character on 𝕋^d.
Hence, σ_n,m is equal to the Lebesgue measure on S^1.
Now suppose X = G/Γ is a (not necessarily connected) 2-step nilmanifold and a ∈ G such that (X, R_a) is minimal. Then X has finitely many connected components, say X = ⋃_i=1^lX_i, and R_a^k(X_i) ⊆ X_i for k = l! ∈. Let f ∈ L^2(X) with f orthogonal to the Kronecker factor of (X, μ_X, R_a). Then f is orthogonal to the Kronecker factor on each ergodic component (X_i, μ_X_i, R_a^k) for the transformation R_a^k. The spectral measure of f with respect to the transformation R_a^k has Fourier coefficients
σ̂_{f; R_a^k}(n) = ∫_X f(a^kn x) f(x) dμ_X
= 1/l ∑_i=1^l ∫_X_i f_i(a^kn x) f_i(x) dμ_X_i,
where f_i = f|_X_i.
Since each of the systems (X_i, μ_X_i, R_a^k) is an ergodic 2-step nilsystem on a connected nilmanifold X_i, the sequence
c_i(n) = ∫_X_if_i(a^knx) f_i(x) dμ_X_i
is the Fourier transform of a function φ_i ∈ L^1(S^1). Therefore, σ_f; R_a^k is absolutely continuous with respect to the Lebesgue measure on S^1, with Radon–Nikodym derivative φ = 1/l∑_i=1^lφ_i.
Now, for A ⊆ S^1, σ_f; R_a^k(A) = σ_f; R_a({z ∈ S^1 : z^k ∈ A}). Hence, σ_f; R_a(A) ≤σ_f; R_a^k({z^k : z ∈ A}), so σ_f; R_a is also absolutely continuous with respect to the Lebesgue measure on S^1. The following observation then completes the proof:
Let (X, μ, T) be an ergodic measure-preserving system with an irrational eigenvalue.
Let σ be the maximal spectral type of (X, μ, T).
Decompose σ = σ_d + σ_s + σ_ac as a sum of
discrete, singular, and absolutely continuous measures.
Then σ_ac is equivalent to Lebesgue measure.
Let f ∈ L^2(X) with σ_f ≈σ_ac.
Let g : X → S^1 be an eigenfunction Tg = e(α)g with α∉.
Then
σ̂_{fg}(n) = ⟨ T^n(fg), fg ⟩
= e(nα) ⟨ g T^n f, fg ⟩
= e(nα) ⟨ T^n f, f ⟩
= e(nα) σ̂_f(n).
Hence, σ_fg(A) = σ_f(e(α)A).
Define a measure
ν(A) = ∫_S^1σ_f(tA) dt.
By construction, ν is translation-invariant, so ν is a nonzero scalar multiple of Lebesgue measure.
We want to show ν≪σ_f.
Suppose σ_f(A) = 0.
Since σ_ac≪σ_f, we have σ_fg^n≪σ_f for every n ∈.
Hence σ_f(e(nα) A) = 0 for all n ∈.
But t ↦σ_f(tA) is a continuous function and {e(nα) : n ∈} is dense in S^1,
so σ_f(tA) = 0 for every t ∈ S^1.
By the definition of ν, it follows that ν(A) = 0.
That is, ν≪σ_f as claimed.
|
http://arxiv.org/abs/2307.04353v1 | 20230710053014 | On Sufficient Graphical Models | [
"Bing Li",
"Kyongwon Kim"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
On Sufficient Graphical Models
Bing Li [email protected]
Department of Statistics, Pennsylvania State University
326 Thomas Building, University Park, PA 16802
Kyongwon Kim [email protected]
Department of Statistics, Ewha Womans University
52 Ewhayeodae-gil, Seodaemun-gu, Seoul, Republic of Korea, 03760
August 12, 2023
We introduce a sufficient graphical model by applying the recently developed nonlinear sufficient dimension reduction techniques to the evaluation of conditional independence. The graphical model is nonparametric in nature, as it does not make distributional assumptions such as the Gaussian or copula Gaussian assumptions. However, unlike a fully nonparametric graphical model, which relies on the high-dimensional kernel to characterize conditional independence, our graphical model is based on conditional independence given a set of sufficient predictors with a substantially reduced dimension. In this way we avoid the curse of dimensionality that comes with a high-dimensional kernel. We develop the population-level properties, convergence rate, and variable selection consistency of our estimate.
By simulation comparisons and an analysis of the DREAM 4 Challenge data set, we demonstrate that our method outperforms the existing methods when the Gaussian or copula Gaussian assumptions are violated, and its performance remains excellent in the high-dimensional setting.
conjoined conditional covariance operator, generalized sliced inverse regression, nonlinear sufficient dimension reduction, reproducing kernel Hilbert space
§ INTRODUCTION
In this paper we propose a new nonparametric statistical graphical model, which we call the sufficient graphical model, by incorporating the recently developed nonlinear sufficient dimension reduction techniques to the construction of the distribution-free graphical models.
Let G = ( Γ, E) be an undirected graph consisting of a finite set of nodes Γ={1, …, p} and set of edges
ℰ⊆{(i,j)∈Γ×Γ : i ≠ j }.
Since (i,j) and (j,i) represent the same edge in an undirected graph, we can assume without loss of generality that i>j.
A statistical graphical model links G with a random vector X=(X 1, …, X p) by the conditional independence:
(i,j) ∉ℰ⇔ X i ⫫ X j | X -(i,j),
where
X -(i,j)= {X 1, …, X p }∖{X i, X j}, and
A ⫫ B | C means that A and B are conditionally independent given C. Thus, nodes i and j are connected if and only if X i and X j are dependent given X -(i,j).
Our goal is to estimate the set E based on a sample X 1, …, X n of X.
See <cit.>.
One of the most popular statistical graphical models is the Gaussian graphical model, which assumes that X ∼ N(μ, Σ). Under the Gaussian assumption, conditional independence in (<ref>) is encoded in the precision matrix Θ = Σ in the following sense
X i ⫫ X j | X -(i,j) ⇔ θ ij =0,
where θij is the (i,j)th entry of the precision matrix Θ. By this equivalence, estimating E amounts to identifying the positions of the zero entries of the precision matrix, which can be achieved by sparse estimation methods
such as the <cit.>, <cit.>, and <cit.>. A variety of methods have been developed for estimating the Gaussian graphical model, which include, for example, <cit.>, <cit.>, <cit.>, and <cit.>. See also <cit.>, <cit.>, and <cit.>.
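As a concrete illustration of how the zero pattern of an estimated precision matrix is turned into an edge set, the following minimal Python sketch (ours, not part of the original text) uses the graphical lasso implementation in scikit-learn; the data matrix X, the penalty alpha, and the zero tolerance tol are placeholder choices.

import numpy as np
from sklearn.covariance import GraphicalLasso

def gaussian_graph_edges(X, alpha=0.1, tol=1e-6):
    # Fit a sparse precision matrix Theta under the Gaussian assumption.
    model = GraphicalLasso(alpha=alpha).fit(X)
    theta = model.precision_
    p = theta.shape[1]
    # (i, j) with i > j is declared an edge iff theta_ij is numerically nonzero.
    return [(i, j) for i in range(p) for j in range(i) if abs(theta[i, j]) > tol]

# toy usage with simulated data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
print(gaussian_graph_edges(X, alpha=0.2))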
Since the Gaussian distribution assumption is restrictive, many recent advances have focused on relaxing this assumption. A main challenge in doing so is to avoid the curse of dimensionality <cit.>: a straightforward nonparametric extension would resort to a high-dimensional kernel, which is known to be ineffective.
One way to relax the Gaussian assumption without evoking a high dimensional kernel is to use the copula Gaussian distribution, which is the approach taken by <cit.>, <cit.>, and <cit.>, and is further extended to the transelliptical model by <cit.>.
However, the copula Gaussian assumption could still be restrictive: for example, if A and B are random variables satisfying B=A2+ϵ, where A and ϵ are i.i.d. N(0,1), then (A,B) does not satisfy the copula Gaussian assumption. To further relax the distributional assumption, <cit.> proposed a new statistical relation called the additive conditional independence as an alternative criterion for constructing the graphical model. This relation has the advantage of achieving nonparametric model flexibility without using a high-dimensional kernel, while obeying the same set of semi-graphoid axioms that govern the conditional independence <cit.>. See also <cit.> and <cit.>. Other approaches to nonparametric graphical models include <cit.> and <cit.>.
In this paper, instead of relying on additivity to avoid the curse of dimensionality, we apply the recently developed nonparametric sufficient dimension reduction <cit.> to achieve this goal. The estimation proceeds in two steps: first, we use nonlinear sufficient dimension reduction to reduce X -(i,j) to a low-dimensional random vector U ij; second, we use the kernel method to construct a nonparametric graphical model based on (X i, X j) and the dimension-reduced random vectors U ij. The main differences between this approach and <cit.> are, first, we are able to retain conditional independence as the criterion for constructing the network, which is a widely accepted criterion with a more direct interpretation, and second, we are no longer restricted by the additive structure in the graphical model. Another attractive feature of our method is due to the “kernel trick”, which means its computational complexity depends on the sample size rather than the size of the networks.
The rest of the paper is organized as follows. In Sections <ref> and <ref>, we introduce the sufficient graphical model and describe its estimation method at the population level. In Section <ref> we lay out the detailed algorithms to implement the method. In Section <ref> we develop the asymptotic properties such as estimation consistency, variable selection consistency, and convergence rates. In Section <ref>, we conduct simulation studies to compare of our method with the existing methods. In Section <ref>, we apply our method to the DREAM 4 Challenge gene network data set. Section <ref> concludes the paper with some further discussions. Due to limited space we put all proofs and some additional results in the Supplementary Material.
§ SUFFICIENT GRAPHICAL MODEL
In classical sufficient dimension reduction, we seek the lowest dimensional subspace S of ℝ^p, such that, after projecting X ∈ ℝ^p onto S, the information about the response Y is preserved; that is, Y ⫫ X | P S X, where P S is the projection onto S. This subspace is called the central subspace, written as S Y|X. See, for example, <cit.>, <cit.>, and <cit.>. <cit.> and <cit.> extended this framework to the nonlinear setting by considering the more general problem: Y ⫫ X | G, where G is a sub-σ-field of the σ-field generated by X. The class of functions in a Hilbert space that are measurable with respect to G is called the central class, written as 𝔖 Y|X. <cit.> introduced the Principal Support Vector Machine, and <cit.> generalized the Sliced Inverse Regression <cit.> and the Sliced Average Variance Estimate <cit.> to estimate the central class. Precursors of this theory include <cit.>, <cit.>, and <cit.>.
To link this up with the statistical graphical model, let (Ω, F, P) be a probability space, (Ω X, F X) a Borel measurable space with Ω X ⊆ p, and X: Ω→Ω X a random vector with distribution P X.
The ith component of X is denoted by X i and its range denoted by ΩX i. We assume Ω X = ΩX 1×⋯×ΩX p. Let X (i,j)=(X i, X j) and X -(i,j) be as defined in the Introduction. Let σ (X - (i,j)) be the σ-field generated by X -(i,j).
We assume, for each (i,j) ∈Γ×Γ, there is a proper sub σ-field G -(i,j) of σ (X -(i,j)) such that
X (i,j) ⫫ X -(i,j) | G -(i,j).
Without loss of generality, we assume G -(i,j) is the smallest sub σ-field of σ ( X -(i,j) ) that satisfies the above relation; that is, G -(i,j) is the central σ-field for X (i,j) versus X -(i,j). There are plenty examples of joint distributions of X for which the condition (<ref>) holds for every pair (i,j): see Section S10 of the Supplementary Material.
Using the properties of conditional independence developed in <cit.> (with a detailed proof given in <cit.>), we can show that (<ref>) implies the following equivalence.
If X (i,j) ⫫ X -(i,j) | G -(i,j), then
X i ⫫ X j | X -(i,j) ⇔ X i ⫫ X j | G -(i,j).
This equivalence motivates us to use X i ⫫ X j | G -(i,j) as the criterion to construct the graph G after performing nonlinear sufficient dimension reduction of X (i,j) versus X -(i,j) for each (i,j) ∈Γ×Γ, i > j.
Under condition (<ref>), the graph defined by
(i,j) ∉ E ⇔ X i ⫫ X j | G -(i,j)
is called the sufficient graphical model.
§ ESTIMATION: POPULATION-LEVEL DEVELOPMENT
The estimation of the sufficient graphical model involves two steps: the first step is to use nonlinear sufficient dimension reduction to estimate G -(i,j); the second is to construct a graph G based on reduced data
{ (X (i,j), G -(i,j)): (i,j) ∈Γ×Γ, i > j }.
In this section we describe the two steps at the population level. To do so, we need some preliminary concepts such as the covariance operator between two reproducing kernel Hilbert spaces, the mean element in a reproducing kernel Hilbert space, the inverse of an operator, as well as the centered reproducing kernel Hilbert spaces. These concepts are defined in the Supplementary Material, Section S1.2. A fuller development of the related theory can be found in <cit.>. The symbols ran(·) and cl(ran(·)) will be used to denote the range and the closure of the range of a linear operator.
§.§ Step 1: Nonlinear dimension reduction
We use the generalized sliced inverse regression <cit.>, <cit.> to perform the nonlinear dimension reduction. For each pair (i,j) ∈Γ×Γ, i > j, let ΩX -(i,j) be the range of X -(i,j), which is the Cartesian product of ΩX 1, …, ΩX p with ΩX i and ΩX j removed. Let
κ X -(i,j): ΩX -(i,j)×ΩX -(i,j)→ ℝ
be a positive semidefinite kernel.
Let H X -(i,j) be the centered reproducing kernel Hilbert space generated by κ X -(i,j). Let ΩX (i,j), κ X (i,j), and H X (i,j) be the similar objects defined for X (i,j).
E[ κ X -(i,j) ( X -(i,j), X -(i,j) ) ]< ∞, E[ κ X (i,j) ( X (i,j), X (i,j) )] < ∞.
This is a very mild assumption that is satisfied by most kernels.
Under this assumption, the following covariance operators are well defined:
ΣX -(i,j) X (i,j): H X (i,j)→ H X -(i,j), ΣX -(i,j) X -(i,j): H X -(i,j)→ H X -(i,j).
For the formal definition of the covariance operator, see S1.2. Next, we introduce the regression operator from H X (i,j) to H X -(i,j). For this purpose we need to make the following assumption.
( ΣX -(i,j) X (i,j) ) ⊆ ( ΣX -(i,j) X -(i,j) ).
As argued in <cit.>, this assumption can be interpreted as a type of collective smoothness in the relation between X (i,j) and X -(i,j): intuitively, it requires the operator ΣX -(i,j) X (i,j) sends all the input functions to the low-frequency domain of the operator ΣX -(i,j) X -(i,j). Under Assumption <ref>, the linear operator
R X -(i,j) X (i,j) = (ΣX -(i,j) X -(i,j))^-1 ΣX -(i,j) X (i,j)
is defined, and we call it the regression operator from H X (i,j) to H X -(i,j). The meaning of the inverse
(ΣX -(i,j) X -(i,j))^-1 is defined in Section S1.2 in the Supplementary Material.
The regression operator in this form was formally defined in <cit.>, but earlier forms existed in <cit.>; see also <cit.>.
R X -(i,j) X (i,j) is a finite-rank operator, with rank d ij.
Intuitively, this assumption means that R X -(i,j)X (i,j) filters out the high frequency functions of X (i,j), so that, for any f ∈ H (i,j), R X -(i,j)X (i,j) f is relatively smooth. It will be violated, for example, if one can find an f ∈ H (i,j) that makes R X -(i,j)X (i,j) f arbitrarily choppy.
The regression operator plays a crucial role in nonlinear sufficient dimension reduction. Let L 2 ( P X -(i,j) ) be the L 2-space with respect to the distribution P X -(i,j) of X -(i,j). As shown in <cit.>, the closure of the range of the regression operator is equal to the central class; that is,
cl(ran( R X -(i,j) X (i,j) )) = 𝔖X (i,j) | X -(i,j)
under the following assumption.
* H X -(i,j) is dense in L 2 (P X -(i,j) ) modulo constants; that is, for any f ∈ L 2 (P X -(i,j) ) and any ϵ > 0, there is a g ∈ H X -(i,j) such that [ f( X -(i,j) ) - g( X -(i,j) ) ] < ϵ;
* 𝔖X (i,j) | X -(i,j) is sufficient and complete.
The first condition essentially requires the kernel X -(i,j) to be a universal kernel with respect to the L 2(P X -(i,j))-norm. It means H -(i,j) is rich enough to approximate any L 2(P X -(i,j))-function arbitrarily closely. For example, it is satisfied by the Gaussian radial basis function kernel, but not by the polynomial kernel. For more information on universal kernels, see <cit.>. The completeness in the second condition means
E[ g (X -(i,j)) | X (i,j)] = 0 ⇒ g (X -(i,j)) = 0 .
This concept is defined in <cit.>,
and is similar to the classical definition of completeness treating X -(i,j) as the parameter. <cit.> showed that completeness is a mild condition, and is satisfied by most nonparametric models.
A basis of the central class 𝔖X (i,j) | X -(i,j) can be found by
solving the generalized eigenvalue problem: for k = 1, …, d ij,
maximize ⟨ f, ΣX -(i,j) X (i,j) A ΣX (i,j) X -(i,j) f ⟩-(i,j)
subject to ⟨ f, ΣX -(i,j) X -(i,j) f ⟩-(i,j) = 1,
⟨ f, ΣX -(i,j) X -(i,j) f ℓ⟩-(i,j) = 0, ℓ=1, …, k-1,
where A: H X (i,j)→ H X (i,j) is any nonsingular and self adjoint operator, and ⟨·, ·⟩-(i,j) is the inner product in H X -(i,j). That is, if f ij 1, … f ijd ij are the first d ij eigenfunctions of this eigenvalue problem, then they span the central class. This type of estimate of the central class is called generalized sliced inverse regression.
Convenient choices of A are the identity mapping I or the operator (ΣX (i,j) X (i,j))^-1. If we use the latter, then we need the following assumption.
( ΣX (i,j) X -(i,j) ) ⊆ ( ΣX (i,j) X (i,j) ).
This assumption has the similar interpretation as Assumption <ref>; see Section S11 in the Supplementary Material.
At the population level, choosing A to be ΣX -(i,j) X -(i,j) achieves better scaling because it down weights those components of the output of ΣX -(i,j)X (i,j) with larger variances. However, if the sample size is not sufficiently large, involving an estimate of ΣX -(i,j)X (i,j) in the procedure could incur extra variations that overwhelm the benefit brought by ΣX -(i,j)X (i,j). In this case, a nonrandom operator such as A=I is preferable.
In this paper we use A = (Σ X (i,j) X (i,j))^-1. Let U ij denote the random vector
( f ij 1 (X -(i,j)) , … f ijd ij(X -(i,j)) ).
The set of random vectors { U ij: (i,j) ∈Γ×Γ, i > j } is the output for the nonlinear sufficient dimension reduction step.
§.§ Step 2:Estimation of sufficient graphical model
To estimate the edge set of the sufficient graphical model
we need to find a way to determine whether X i ⫫ X j | U ij is true. We use a linear operator introduced by <cit.> to perform this task, which is briefly described as follows.
Let U, V, W be random vectors taking values in measurable spaces (Ω U, F U), (Ω V, F V), and (Ω W, F W).
Let ΩUW = Ω U ×Ω W, ΩVW = Ω V ×Ω W, F UW= F U × F W, and F VW = F V × F W.
Let
UW: ΩUW×ΩUW→, VW: ΩVW×ΩVW→, W: Ω W ×Ω W →
be positive kernels. For example, for (u 1, w 1), (u 2, w 2) ∈ΩUW×ΩUW, UW returns a real number denoted by UW[(u 1, w 1), (u 2, w 2)]. Let H UW, H VW, and H W be the centered reproducing kernel Hilbert space's generated by the kernels UW, VW, and W.
Define the covariance operators
Σ(UW)(VW): H VW→ H UW, Σ(UW)W: H W → H UW,
Σ(VW)W: H W → H VW, ΣWW: H W → H W
as before.
The following definition is due to <cit.>. Since it plays a special role in this paper, we give it a name – “conjoined conditional covariance operator” that figuratively depicts its form.
Suppose
* If S is W, or (U,W), or (V, W), then E [ S (S, S) ] < ∞;
* (ΣW (VW) ) ⊆ (ΣWW), (ΣW (UW) ) ⊆ (ΣWW).
Then the operator
ΣÜV̈|W = Σ(UW)(VW) - Σ(UW)W (ΣWW)^-1 ΣW(VW)
is called the conjoined conditional covariance operator between U and V given W.
The word “conjoined” describes the peculiar way in which W appears in Σ(UW)W and ΣW(VW), which differs from an ordinary conditional covariance operator, where these operators are replaced by ΣUW and ΣWV. The following proposition is due to <cit.>, a proof of a special case of which is given in <cit.>.
Suppose
* H UW⊗ H VW is probability determining;
* for each f ∈ H UW, the function E[ f(U, W) | W=·] belongs to H W;
* for each g ∈ H VW, the function E[ g(V, W) | W =· ] belongs to H W;
Then ΣÜV̈|W = 0 if and only if U ⫫ V | W.
The notion of probability determining in the context of reproducing kernel Hilbert space was defined in <cit.>. For a generic random vector X, an reproducing kernel Hilbert space H X based on a kernel X is probability determining if and only if the mapping
P ↦ E P [ X(·, X)]
is injective.
Intuitively, this requires the family of expectations { E P f(X): f ∈ H X } to be rich enough to identify P. For example, the Gaussian radial basis function is probability determining, but a polynomial kernel is not. We apply the above proposition to X i, X j, U ij for each (i,j) ∈Γ×Γ, i > j. Let
XUi,ij: (ΩX i×ΩU ij ) × (ΩX i×ΩU ij ) →
be a positive definite kernel, and H XUi,ij the centered reproducing kernel Hilbert space generated by XUi,ij. Similarly, let
Uij: ΩU ij×ΩU ij→
be a positive kernel, and H Uij the centered reproducing kernel Hilbert space generated by Uij.
Conditions (1) and (2) of Definition <ref> and conditions (1), (2), and (3) of Proposition <ref> are satisfied with U, V, and W therein replaced by
X i, X j, and U ij, respectively, for each (i,j) ∈Γ×Γ and i > j.
Under this assumption, the conjoined conditional covariance operator ΣẌ i Ẍ j | U ij is well defined and has the following property.
Under Assumption <ref>, we have
(i,j) ∉ℰ⇔ΣẌ i Ẍ j | U ij = 0.
This corollary motivates us to estimate the graph by thresholding the norm of the estimated conjoined conditional covariance operator.
§ ESTIMATION: SAMPLE-LEVEL IMPLEMENTATION
§.§ Implementation of step 1
Let (X 1, Y 1), …, (X n, Y n) be an i.i.d. sample of (X,Y). At the sample level, the centered reproducing kernel Hilbert space H X -(i,j) is spanned by the functions
{ X -(i,j) ( ·, X -(i,j) a ) - E n [ X -(i,j) ( ·, X -(i,j))]: a = 1, …, n },
where X -(i,j) (·, X -(i,j) ) stands for the function u ↦ X -(i,j) (u, X -(i,j) ), and
E n [ X -(i,j) (·, X -(i,j) )] the function u ↦ E n [ X -(i,j) (u, X -(i,j) )].
We estimate the covariance operators ΣX -(i,j) X (i,j) and Σ X -(i,j) X -(i,j) by
Σ̂X -(i,j) X (i,j) =
E n {[ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )]
⊗
[ X (i,j) ( ·, X (i,j) )
-E n X (i,j) ( ·, X (i,j) )] }
Σ̂ X -(i,j) X -(i,j) =
E n { [ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )]
⊗
[ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )] },
respectively. We estimate (ΣX (i,j) X (i,j))^-1 by the Tychonoff-regularized inverse
( Σ̂X (i,j) X (i,j) + ϵ X (i,j) I )^-1,
where I: H X (i,j)→ H X (i,j) is the identity operator.
The regularized inverse is used to avoid over fitting. It plays the same role as ridge regression <cit.> that alleviates over fitting by adding a multiple of the identity matrix to the sample covariance matrix before inverting it.
At the sample level, the generalized eigenvalue problem (<ref>) takes the following form: at the kth iteration,
maximize ⟨ f, Σ̂X -(i,j) X (i,j) ( Σ̂X (i,j) X (i,j) + ϵ X (i,j) I )^-1 Σ̂X (i,j) X -(i,j) f ⟩-(i,j)
subject to ⟨ f, Σ̂X -(i,j) X -(i,j) f ⟩-(i,j) = 1,
⟨ f, Σ̂X -(i,j) X -(i,j) f ℓ⟩-(i,j) = 0, ℓ = 1, …, k-1,
where f 1, …, f k-1 are the maximizers in the previous steps. The first d ij eigenfunctions are an estimate of a basis in the central class S X (i,j) | X -(i,j).
Let K X -(i,j) be the n × n matrix whose (a,b)th entry is κ X -(i,j) (X a -(i,j), X b -(i,j)), Q = I n - 1 n 1 n^⊤ / n, and
G X -(i,j) = Q K X -(i,j) Q,
with K X (i,j) and G X (i,j) = Q K X (i,j) Q defined in the same way from κ X (i,j).
Let a 1, …, a d ij be the first d ij eigenvectors of the matrix
( G X -(i,j) + ϵ X -(i,j) I n )^-1 G X -(i,j) G X (i,j) ( G X (i,j) + ϵ X (i,j) I n )^-1 G X -(i,j) ( G X -(i,j) + ϵ X -(i,j) I n )^-1.
Let
b r = ( G X -(i,j) + ϵ X -(i,j) I n )^-1 a r for r = 1, …, d ij.
As shown in Section S12.2, the eigenfunctions f 1 ij, …, f d ijij are calculated by
f r ij = ∑a=1 n b r a { κ X -(i,j) ( ·, X -(i,j) a ) - E n [ κ X -(i,j) ( ·, X -(i,j))]}.
The statistics Ûij a = ( f 1 ij (X a -(i,j)) , …, f d ijij (X a -(i,j))), a = 1, …, n, will be used as the input for the second step.
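To make these matrix computations concrete, here is a short Python sketch of step 1 for a single pair (i,j); it is our own illustration rather than the authors' code, the regularized inverses follow the displayed matrix above, and the inputs X_minus, X_pair, the bandwidths, and the ridge constants are placeholders.

import numpy as np
from numpy.linalg import solve, inv, eigh
from scipy.spatial.distance import cdist

def rbf_gram(S, gamma):
    # Gaussian radial basis function kernel matrix for the rows of S.
    return np.exp(-gamma * cdist(S, S, "sqeuclidean"))

def sufficient_predictor(X_minus, X_pair, d, gamma_m, gamma_p, eps_m, eps_p):
    # X_minus: (n, p-2) sample of X^{-(i,j)}; X_pair: (n, 2) sample of (X^i, X^j).
    n = X_minus.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    G_m = Q @ rbf_gram(X_minus, gamma_m) @ Q         # G for X^{-(i,j)}
    G_p = Q @ rbf_gram(X_pair, gamma_p) @ Q          # G for X^{(i,j)}
    R_m = G_m + eps_m * np.eye(n)                    # regularized Gram matrices
    R_p = G_p + eps_p * np.eye(n)
    M = solve(R_m, G_m) @ G_p @ solve(R_p, G_m) @ inv(R_m)
    M = (M + M.T) / 2                                # symmetrize for numerical stability
    vals, vecs = eigh(M)
    A = vecs[:, np.argsort(vals)[::-1][:d]]          # leading eigenvectors a_1, ..., a_d
    B = solve(R_m, A)                                # b_r = (G_m + eps_m I_n)^{-1} a_r
    return G_m @ B                                   # (n, d) matrix of centered values of U^{ij}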
§.§ Implementation of step 2
This step consists of estimating the conjoined conditional covariance operator's for each (i,j) and thresholding their norms. At the sample level, the centered reproducing kernel Hilbert space's generated by the kernels XUi,ij, XUj,ij, and U ij are
H XU i,ij= {XUi,ij ( ·, (X a i, U a ij)) - E n [ XUi,ij ( ·, (X i, U ij)) ]: a = 1, …, n },
H XU j,ij= {XUj,ij ( ·, (X a j, U a ij)) - E n [ XUj,ij ( ·, (X j, U ij)) ]: a = 1, …, n },
H U ij= {Uij ( ·, U a ij) - E n [ Uij ( ·, U ij) ]: a = 1, …, n },
where, for example, XUi,ij ( ·, (X a i, U a ij)) denotes the function
ΩX i×ΩU ij→, (x i, u ij ) ↦XUi,ij ( (x i, u ij ), (X a i, U a ij))
and E n [ XUi,ij ( ·, (X i, U ij)) ] denotes the function
ΩX i×ΩU ij→, (x i, u ij ) ↦ E n [ XUi,ij ( (x i, u ij ), (X i, U ij))].
We estimate the covariance operators
Σ(X i U ij)(X j U ij), Σ(X i U ij)U ij, ΣU ij (X j U ij), and ΣU ij U ij by
Σ̂(X i U ij) (X j U ij) = E n { [ XUi,ij ( ·, ( X i, U ij))- E n XUi,ij ( ·, ( X i, U ij)) ]
⊗ [ XUj,ij ( ·, ( X j, U ij))- E n XUj,ij ( ·, ( X j, U ij)) ] }
Σ̂(X i U ij) U ij = E n { [ XUi,ij ( ·, ( X i, U ij))- E n XUi,ij ( ·, ( X i, U ij)) ]
⊗ [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ] }
Σ̂U ij(X j U ij) = E n { [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ]
⊗ [ XUj,ij ( ·, ( X j, U ij))- E n XUj,ij ( ·, ( X j, U ij)) ] }
Σ̂U ij U ij = E n { [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ]
⊗ [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ] },
respectively. We then estimate the conjoined conditional covariance operator by
Σ̂Ẍ i Ẍ j | U ij=
Σ̂(X i U ij) (X j U ij) -
Σ̂(X i U ij) U ij
(Σ̂U ij U ij + ϵ U (i,j) I )^-1 Σ̂U ij(X j U ij) ,
where, again, we have used Tychonoff regularization to estimate the inverted covariance operator (ΣU ij U ij)^-1.
Let K U ij, K X i U ij, and K X j U ij be the Gram matrices
K U ij= { U ij (U a ij, U b ij) }a, b = 1 n,
K X i U ij= {XUi, ij ((X i a, U a ij), (X i b, U b ij)) }a, b =1 n,
K X j U ij = {XUj, ij ((X j a, U a ij), (X j b, U b ij)) }a,b=1 n,
and G X i U ij,
G X j Uij, and
G U ij their centered versions
G X i U ij = Q K X i U ij Q,
G X j Uij = Q K X jU ij Q,
G U ij = Q K U ij Q.
As shown in Section S12 in the Supplementary Material,
‖ Σ̂Ẍ i Ẍ j | U ij ‖ hs
= ‖ (G X i U ij)^1/2 (G X j U ij)^1/2 - (G X i U ij)^1/2 G U ij ( G U ij + ϵ U (i,j) Q )^† (G X j U ij)^1/2 ‖ F,
where ‖·‖ F is the Frobenius norm.
Estimation of the edge set is then based on thresholding this norm; that is,
Ê = { (i,j) ∈Γ×Γ: i > j, Σ̂Ẍ i Ẍ j| U ijhs > ρ n }
for some chosen ρ n > 0.
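Since the estimate above is a finite matrix expression, the thresholding step can be coded directly; the sketch below is our own illustration (the helper psd_sqrt, the pseudo-inverse, and the ridge constant eps_u are implementation choices of ours), and it computes, for one pair (i,j), the Frobenius norm that is compared with ρ n.

import numpy as np

def psd_sqrt(G):
    # Symmetric square root of a positive semidefinite matrix.
    vals, vecs = np.linalg.eigh(G)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def ccco_norm(G_xi_u, G_xj_u, G_u, eps_u):
    # Hilbert-Schmidt norm of the estimated conjoined conditional covariance
    # operator, from the centered Gram matrices G_{X^i U}, G_{X^j U}, and G_U.
    n = G_u.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    Si, Sj = psd_sqrt(G_xi_u), psd_sqrt(G_xj_u)
    ridge = np.linalg.pinv(G_u + eps_u * Q)          # (G_U + eps Q)^dagger
    return np.linalg.norm(Si @ Sj - Si @ G_u @ ridge @ Sj, "fro")

def estimate_edges(norms, rho):
    # norms: dictionary {(i, j): norm value} with i > j; keep pairs above the threshold.
    return [pair for pair, value in norms.items() if value > rho]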
§.§ Tuning
We have three types of tuning constants: those for the kernels, those for Tychonoff regularization, and the threshold ρ n. For the Tychonoff regularization, we have ϵ X (i,j) and ϵ X -(i,j) for step 1, and ϵ U (i,j) for step 2. In this paper we use the Gaussian radial basis function as the kernel:
κ(u,v) = exp ( - γ ‖ u - v ‖^2 ).
For each (i,j), we have five γ's to determine: γ X (i,j) for the kernel κ X (i,j), γ X -(i,j) for κ X -(i,j), γXUi,ij for κ XUi,ij, γXUj,ij for κ XUj,ij, and γ U ij for κ U ij, which are chosen by the following formula (see, for example, <cit.>)
1/ √(γ) = {n(n-1)/2}^-1 ∑_{a < b} ‖ s a - s b ‖,
where s 1, …, s n are the sample of random vectors corresponding to the mentioned five kernels. For example, for the kernel κ XUj,ij, s a = (X a j, U a ij).
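In code, this rule simply sets γ to the inverse of the squared average pairwise distance; the short function below is our own illustration with a placeholder sample S.

import numpy as np
from scipy.spatial.distance import pdist

def rbf_bandwidth(S):
    # 1/sqrt(gamma) is the average Euclidean distance over all pairs a < b.
    return 1.0 / pdist(np.asarray(S)).mean() ** 2

S = np.random.default_rng(1).standard_normal((100, 3))
print(rbf_bandwidth(S))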
For the tuning parameters in Tychonoff regularization, we use the following generalized cross validation scheme (GCV; see <cit.>):
GCV(ϵ) = ∑_{i<j} ‖ G 1 - G 2 [ G 2 + ϵ λmax(G 2) I n ]^-1 G 1 ‖ F / ( (1/n) tr{ I n - G 2 [ G 2 + ϵ λmax(G 2) I n ]^-1 } ),
where G 1, G 2 ∈ ℝ^{n × n} are positive semidefinite matrices, and λmax (G 2) is the largest eigenvalue of G 2. The matrices G 1 and G 2 are the following matrices for the three tuning parameters:
* G 1 = G X -(i,j), G 2 = G X (i,j) for ϵ X (i,j),
* G 1 = G X (i,j), G 2 = G X -(i,j) for ϵ X -(i,j),
* G 1 = G X (i,j), G 2 = G U ij for ϵ U (i,j),
We minimize (<ref>) over a grid to choose ϵ, as detailed in Section <ref>.
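For one pair of Gram matrices, the criterion above can be transcribed as follows; this is our own sketch (G1, G2, and the grid are placeholders, the grid being the set of values used later in the simulations), and the chosen ϵ is the grid point minimizing the score.

import numpy as np

def gcv_epsilon(G1, G2, eps):
    # Generalized cross validation score for one regularization constant.
    n = G1.shape[0]
    lam = np.max(np.linalg.eigvalsh(G2))
    H = G2 @ np.linalg.inv(G2 + eps * lam * np.eye(n))
    resid = np.linalg.norm(G1 - H @ G1, "fro")
    return resid / (np.trace(np.eye(n) - H) / n)

def choose_epsilon(G1, G2, grid=(10.0, 1.0, 0.1, 1e-2, 1e-3, 1e-4)):
    return min(grid, key=lambda e: gcv_epsilon(G1, G2, e))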
We also use
generalized cross validation to determine the thresholding parameter ρ n. Let Ê(ρ) be the estimated edge set using a threshold ρ, and, for each i ∈Γ, let C i (ρ)={ X j: j ∈Γ, (i,j) ∈Ê(ρ) } be the subset of components of X at the neighborhood of the node i in the graph (Γ, Ê ( ρ)). The basic idea is to apply the generalized cross validation to the regression of the feature of X i on the feature of C i (ρ). The generalized cross validation for this regression takes the form
GCV (ρ) = ∑_{i=1}^p ‖ G X i - G C i (ρ) [ G C i (ρ) + ϵ λmax(G C i (ρ)) I n ]^-1 G X i ‖ F / ( (1/n) tr{ I n - G C i (ρ) [ G C i (ρ) + ϵ λmax(G C i (ρ)) I n ]^-1 } ),
where G C i (ρ)= Q K C i (ρ) Q, and K C i (ρ) is the n × n kernel matrix for the sample of C i (ρ).
We minimize GCV(ρ) over the grid ρ∈{ℓ× 10^-2: ℓ=2, …, 7} to determine the optimal threshold ρ n.
Regarding the selection of the dimension of U ij, to our knowledge there has been no systematic procedure available to determine the dimension of the central class for nonlinear sufficient dimension reduction. While some recently developed methods for order determination for linear sufficient dimension reduction, such as the ladle estimate and predictor augmentation estimator <cit.>, may be generalizable to the nonlinear sufficient dimension reduction setting, we will leave this topic to future research. Our experience and intuition indicate that a small dimension, such as 1 or 2, for the central class would be sufficient in most cases. For example, in the classical nonparametric regression problem Y = f(X) + ϵ with X ⫫ ϵ, the dimension of the central class is by definition equal to 1.
§ ASYMPTOTIC THEORY
In this section we develop the consistency and convergence rates of our estimate and related operators. The challenge of this analysis is that our procedure involves two steps: we first extract the sufficient predictor using one set of kernels, and then substitute it into another set of kernels to get the final result. Thus we need to understand how the error propagates from the first step to the second. We also develop the asymptotic theory allowing p to go to infinity with n, which is presented in the Supplementary Material.
§.§ Overview
Our goal is to derive the convergence rate of
| ‖ Σ̂Ẍ i Ẍ j | Ûij ‖ hs - ‖ ΣẌ i Ẍ j | U ij ‖ hs |,
as ‖ Σ̂Ẍ i Ẍ j | Ûij ‖ hs is the quantity we threshold to determine the edge set.
By the triangle inequality,
| ‖ Σ̂Ẍ i Ẍ j | Ûij ‖ hs - ‖ ΣẌ i Ẍ j | U ij ‖ hs |
≤ ‖ Σ̂Ẍ i Ẍ j | Ûij - ΣẌ i Ẍ j | U ij ‖ hs
≤ ‖ Σ̂Ẍ i Ẍ j | Ûij - Σ̂Ẍ i Ẍ j | U ij ‖ hs + ‖ Σ̂Ẍ i Ẍ j | U ij - ΣẌ i Ẍ j | U ij ‖ hs.
So we need to derive the convergence rates of the following quantities:
‖ Ûij - U ij ‖ [ H -(i,j) (X)]^d ij,
‖ Σ̂Ẍ i Ẍ j | Ûij - Σ̂Ẍ i Ẍ j | U ij ‖ hs,
‖ Σ̂Ẍ i Ẍ j | U ij - ΣẌ i Ẍ j | U ij ‖ hs,
where, to avoid overly crowded subscripts, we have used H -(i,j) (X) to denote H -(i,j) X when it occurs as a subscript.
The first and third convergence rates can be derived using the asymptotic tools for linear operators developed in <cit.>, <cit.>, <cit.>, and <cit.>. The second convergence rate is, however, a new problem, and it will also be useful in similar settings that require constructing estimators based on predictors extracted by sufficient dimension reduction. In some sense, this is akin to the post dimension reduction problem considered in <cit.>.
In the following, if {a n } and { b n } are sequences of positive numbers, then we write a n ≺ b n if a n / b n → 0. We write a n ≍ b n if 0< lim inf n (b n / a n) ≤lim sup n (b n / a n) < ∞. We write b n ≼ a n if either b n ≺ a n or b n ≍ a n. Because (i,j) is fixed in the asymptotic development, and also to emphasize the dependence on n, in the rest of this section we denote ϵ X (i,j), ϵ X -(i,j), and ϵ U (i,j) by ϵ n, η n, and δ n, respectively.
§.§ Transparent kernel
We first develop what we call the “transparent kernel” that passes information from step 1 to step 2 efficiently. Let Ω be a nonempty set, and κ: Ω×Ω→ ℝ a positive kernel.
We say that κ is a transparent kernel if, for each t ∈Ω, the function s ↦ κ(s,t) is twice differentiable and
* ∂ κ(s,t)/ ∂ s | s=t = 0;
* the matrix H(s,t) = ∂^2 κ(s,t) / ∂ s ∂ s^⊤ has a bounded operator norm; that is, there exist -∞ < C 1 ≤ C 2 < ∞ such that
C 1 ≤λmin (H(s,t)) ≤λmax (H(s,t)) < C 2
for all (s,t) ∈Ω×Ω, where λmin(·) and λmax (·) indicate the smallest and largest eigenvalues.
For example, the Gaussian radial basis function kernel is transparent, but the exponential kernel
κ(u,v) = τ^2 exp(-γ‖ u-v ‖ ) is not.
This condition implies a type of Lipschitz continuity in a setting that involves two reproducing kernels 0 and 1, where the argument of 1 is the evaluation of a member of the reproducing kernel Hilbert space generated by 0.
Suppose H 0 is the reproducing kernel Hilbert space generated by 0, H 0 d is the d-fold Cartesian product of H 0 with inner product defined by
⟨ U, V ⟩ H 0 d = ⟨ u 1, v 1 ⟩ H 0 + ⋯ + ⟨ u d, v d ⟩ H 0
where U = (u 1, …, u d) and V = (v 1, …, v d) are members of H 0 d,
H 1 is the reproducing kernel Hilbert space generated by 1. Then:
(i) for any U, V ∈ H 0 d, a ∈Ω, we have
U (a)- V(a) d≤ [ 0(a, a) ] 1/2 U- V H 0 d;
(ii)
if 1(s,t) is a transparent kernel,
then there exists a C> 0 such that, for each U, V ∈ H 0 d and a ∈Ω,
1 ( ·, U ( a) ) - 1 ( ·, V ( a) ) H 1≤ C [ 0 (a , a)]1/2 U - V H 0 d.
A direct consequence of this theorem is that, if Û is an estimate of some U, a member of H 0 d, with Û - U H 0 d = O P ( b n) for some 0 < b n → 0, Σ̂(Û) is a linear operator estimated from the sample Û 1, …, Û n (and perhaps some other random vectors), and Σ̂(U) is a linear operator estimated from the sample U 1, …, U n, then,
Σ̂( Û) - Σ̂(U) hs = O P ( b n).
This result is somewhat surprising, because sample estimates such as Σ̂( Û) can be viewed as E n 𝔾 ( X, Û ), where Û is an estimate of a function U in a functional space with norm · and 𝔾 is an operator-valued function. If Û - U = O P (b n) for some b n → 0, then it is not necessarily true that
E n 𝔾 ( X, Û) - E n 𝔾 ( X, U) = O P (b n),
particularly when U is an infinite dimensional object. Yet relation (<ref>) states exactly this. The reason behind this is that the reproducing kernel property separates the function Û and its argument X a (i.e. Û (x) = ⟨Û, (·, x) ⟩), which implies a type of uniformity among Û (X 1), …, Û (X n). This point will be made clear in the proof in the Supplementary Material.
Statement (<ref>) is made precise by the next theorem.
Suppose conditions (1) and (2) of Definition <ref> are satisfied with U, V, W therein replaced by X i, X j, and U ij. Suppose, furthermore:
(a) U ij, XUi,ij, and XUj,ij are transparent kernels;
(b) Ûij - U ij[ H -(i,j) (X) ] d ij = O P ( b n ) for some 0 < b n → 0.
Then
(i) Σ̂ÛijÛij - Σ̂ U ij U ijhs=O P ( b n );
(ii) Σ̂ (X iÛij) Ûij - Σ̂ (X i U ij) U ijhs=O P ( b n );
(iii) Σ̂ (X iÛij) (X jÛij) - Σ̂ (X i U ij) (X j U ij) hs=O P ( b n ).
Using Theorem <ref> we can derive the convergence rate of Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs.
Suppose conditions in Theorem <ref> are satisfied and, furthermore,
(a)
ΣU ijU ijΣU ij(X i U ij) and ΣU ijU ijΣU ij(X j U ij)
are bounded linear operators;
(b) b n ≼δ n ≺ 1.
Then
Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs = O P ( b n ).
Note that, unlike in Theorem <ref>, where our assumptions imply
ΣX -(i,j) X -(i,j)ΣX -(i,j) X (i,j)
is a finite-rank operator, here, we do not assume
ΣU ij(U ij)ΣU ij(X j U ij) to be a finite-rank (or even Hilbert-Schmidt) operator; instead, we assume it to be a bounded operator.
This is because (X j, U ij) contains U ij, which makes it unreasonable to assume ΣU ijU ijΣU ij(X j U ij) to be finite-rank or Hilbert Schmidt. For example, when X j is a constant, ΣU ij(X j U ij) is the same as ΣU ij U ij and ΣU ij U ijΣU ij U ij is not a Hilbert Schmidt operator, though it is bounded.
Theorem <ref> shows that convergence rate of (ii) in (<ref>) is the same as the convergence rate of (i) in (<ref>); it now remains to derive the convergence rate of (i) and (iii).
§.§ Convergence rates of (i) and (iii) in (<ref>)
We first present the convergence rate of Ûij to U ij. The proof is similar to that of Theorem 5 of <cit.> but with two differences. First, <cit.> took A in (<ref>) to be I, whereas we take it to be ΣYY. In particular, the generalized sliced inverse regression in <cit.> only has one tuning parameter η n, but we have two tuning parameters η n and ϵ n. Second, <cit.> defined (in the current notation) f r ij to be the eigenfunctions of
ΣX -(i,j)X -(i,j)ΣX -(i,j)X (i,j)ΣX (i,j)X (i,j)ΣX (i,j)X -(i,j)ΣX -(i,j)X -(i,j),
which is different from the generalized eigenvalue problem (<ref>).
For these reasons we need to re-derive the convergence rate of Ûij.
Suppose
(a) Assumption <ref> is satisfied;
(b) ΣX -(i,j) X (i,j) is a finite-rank operator with
( ΣX -(i,j) X (i,j) ) ⊆ ( ΣX -(i,j) X -(i,j)2),
( ΣX (i,j) X -(i,j) ) ⊆ ( ΣX (i,j) X (i,j));
(c) n -1/2≺η n ≺ 1, n -1/2≺ϵ n ≺ 1;
(d) for each r = 1, …, d ij, λij 1 > ⋯ > λijd ij.
Then,
Ûij - U ij[ H -(i,j) (X) ] d ij= O P (
η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n ).
An immediate consequence is that, under the transparent kernel assumption, the b n in Theorem <ref> is the same as this rate. We next derive the convergence rate in (iii) of (<ref>). This rate depends on the tuning parameter δ n in the estimate of conjoined conditional covariance operator, and it reaches b n for the optimal choice of δ n.
Suppose conditions (1) and (2) of Definition <ref> are satisfied with U, V, W therein replaced by X i, X j, and U ij. Suppose, furthermore,
(a)
ΣU ijU ijΣU ij(X i U ij) and ΣU ijU ijΣU ij(X j U ij)
are bounded linear operators;
(b) b n ≼δ n ≺ 1.
Then
Σ̂Ẍ i Ẍ j | U ij- ΣẌ i Ẍ j | U ijhs = O P (δ n). Consequently, if δ n ≍ b n, then
Σ̂Ẍ i Ẍ j | U ij- ΣẌ i Ẍ j | U ijhs = O P (b n).
Finally, we combine Theorem <ref> through Theorem <ref> to come up with the convergence rate of Σ̂Ẍ i Ẍ j | Ûij. Since there are numerous cross references among the conditions in these theorems, to make a clear presentation we list all the original conditions in the next theorem, even if they already appeared. These conditions are of two categories: those for the step 1 that involves sufficient dimension reduction of X (i,j) versus X -(i,j), and those for the step 2 that involves the estimation of the conjoined conditional covariance operator. We refer to them as the first-level and second-level conditions, respectively.
Suppose the following conditions hold:
(a) (First-level kernel) E[κ(S, S)] < ∞ holds for the kernels of X_{(i,j)} and of X_{-(i,j)};
(b) (First-level operator) Σ_{X_{-(i,j)} X_{(i,j)}} is a finite-rank operator with rank d_{ij} and
ran( Σ_{X_{-(i,j)} X_{(i,j)}} ) ⊆ ran( Σ_{X_{-(i,j)} X_{-(i,j)}}^2 ),
ran( Σ_{X_{(i,j)} X_{-(i,j)}} ) ⊆ ran( Σ_{X_{(i,j)} X_{(i,j)}} );
all the nonzero eigenvalues of Σ_{X_{(i,j)} X_{-(i,j)}} Σ_{X_{-(i,j)} X_{-(i,j)}} Σ_{X_{-(i,j)} X_{(i,j)}} are distinct;
(c) (First-level tuning parameters) n^{-1/2} ≺ η_n ≺ 1, n^{-1/2} ≺ ϵ_n ≺ 1, η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n^{1/2} + ϵ_n ≺ 1;
(d) (Second-level kernel) E[κ(S, S)] < ∞ is satisfied for the kernels of U^{ij}, of (X^i, U^{ij}), and of (X^j, U^{ij}); furthermore, these kernels are transparent kernels;
(e) (Second-level operators) Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^i U^{ij})} and Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^j U^{ij})}
are bounded linear operators;
(f) (Second-level tuning parameter) δ_n ≍ η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n.
Then
‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{HS} = O_P (η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n).
Using this result we immediately arrive at the variable selection consistency of the Sufficient Graphical Model.
Under the conditions in Theorem <ref>, if
η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n ≺ ρ_n ≺ 1,
and Ê is defined as Ê = { (i,j) ∈ Γ × Γ : i > j, ‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} ‖_{HS} < ρ_n },
then lim_{n → ∞} P ( Ê = E ) = 1.
§.§ Optimal rates of tuning parameters
The convergence rate in Theorem <ref> depends on ϵ_n and η_n explicitly, and on δ_n implicitly (in the sense that δ_n ≍ η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n is optimal for fixed ϵ_n and η_n). Intuitively, when ϵ_n, η_n, and δ_n increase, the biases increase and the variances decrease; when they decrease, the biases decrease and the variances increase. Thus there should be critical rates for them that balance the bias and variance, which are the optimal rates.
Under the conditions in Theorem <ref>, if ϵ_n, η_n, and δ_n are of the form n^{-a}, n^{-b}, and n^{-c} for some a > 0, b > 0, and c > 0, then
(i) the optimal rates of the tuning parameters are
n^{-3/8} ≼ ϵ_n ≼ n^{-1/4}, η_n ≍ n^{-1/4}, δ_n ≍ n^{-1/4};
(ii) the optimal convergence rate of the estimated conjoined conditional covariance operator is
‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{HS} = O_P (n^{-1/4}).
Note that a range of ϵ_n is optimal; this is because the convergence rate does not have a unique minimizer. This also means that the result is not very sensitive to this tuning parameter.
In the above asymptotic analysis, we have treated p as fixed as n → ∞. We have also developed the consistency and convergence rate in the scenario where the dimension p_n of X goes to infinity with n, which is placed in the Supplementary Material (Section S9) due to limited space.
§ SIMULATION
In this section we compare the performance of our sufficient graphical model with previous methods such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and a Naïve method which is based on the conjoined conditional covariance operator without the dimension reduction step.
By design, the sufficient graphical model has advantages over these existing methods under the following circumstances. First, since the sufficient graphical model does not make any distributional assumption, it should outperform <cit.> and <cit.> when the Gaussian or copula Gaussian assumptions are violated; second, due to the sufficient dimension reduction in sufficient graphical model, it avoids the curse of dimensionality and should outperform <cit.>, <cit.>, and a Naïve method in the high-dimensional setting; third, since sufficient graphical model does not require additive structure, it should outperform <cit.> when there is severe nonadditivity in the model. Our simulation comparisons will reflect these aspects.
For the sufficient graphical model, <cit.>, and the Naïve method, we use the Gaussian radial basis function as the kernel. The regularization constants ϵ_{X_{(i,j)}}, ϵ_{X_{-(i,j)}}, and ϵ_{U^{(i,j)}} are chosen by the generalized cross validation criterion described in Section <ref> with the grid {10^{-ℓ} : ℓ = -1, 0, 1, 2, 3, 4}. The kernel parameters γ_{X_{(i,j)}}, γ_{X_{-(i,j)}}, γ_{(X^i, U^{ij})}, γ_{(X^j, U^{ij})}, and γ_{U^{ij}} are chosen according to (<ref>). Because the outcomes of the tuning parameters are stable, for each model we compute the generalized cross validation for the first five samples and use their average value for the rest of the simulation.
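For illustration only, the kernel and regularization ingredients just mentioned could be set up as in the following minimal sketch; the median-distance bandwidth heuristic is an assumption made purely for this sketch (the rule (<ref>) referred to above is not reproduced here), and none of this is the code used in the simulations.

import numpy as np

def rbf_gram(X, gamma=None):
    # Gaussian RBF Gram matrix K[a, b] = exp(-gamma * ||X_a - X_b||^2)
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    if gamma is None:
        # assumed heuristic: inverse median of the nonzero squared distances
        gamma = 1.0 / np.median(d2[d2 > 0]) if np.any(d2 > 0) else 1.0
    return np.exp(-gamma * d2)

# grid of regularization constants, as described in the text
eps_grid = [10.0 ** (-l) for l in (-1, 0, 1, 2, 3, 4)]

X = np.random.default_rng(0).normal(size=(100, 5))
K = rbf_gram(X)
# regularized kernel matrices K + n * eps * I, a typical ingredient of
# kernel estimates involving (regularized inverses of) covariance operators
K_reg = {eps: K + X.shape[0] * eps * np.eye(X.shape[0]) for eps in eps_grid}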
The performance of each estimate is assessed using the averaged receiver operating characteristic curve as a function of the threshold ρ.
The accuracy of a method across all ρ is measured by the area under the receiver operating characteristic curve.
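As an illustration of how such curves could be obtained for edge recovery, the following hedged sketch assumes that every candidate pair (i, j) receives a nonnegative dependence score (for instance the Hilbert-Schmidt norm of the estimated conjoined conditional covariance operator) and that an edge is declared whenever the score is at least the threshold ρ; the function name and data layout are merely illustrative.

def roc_and_auc(scores, true_edges):
    # scores: dict mapping a pair (i, j) with i > j to a nonnegative score;
    # true_edges: set of pairs that are edges in the generating model.
    pairs = list(scores)
    P = sum(1 for p in pairs if p in true_edges)
    N = len(pairs) - P
    pts = {(0.0, 0.0), (1.0, 1.0)}
    for rho in sorted({scores[p] for p in pairs}, reverse=True):
        declared = {p for p in pairs if scores[p] >= rho}
        tpr = len(declared & true_edges) / P if P else 0.0
        fpr = len(declared - true_edges) / N if N else 0.0
        pts.add((fpr, tpr))
    pts = sorted(pts)
    # area under the ROC curve by the trapezoid rule
    auc = sum((x2 - x1) * (y2 + y1) / 2.0
              for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return pts, auc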
To isolate the factors that affect accuracy, we first consider two models with relatively small dimensions and large sample sizes, which are
Model 1: X^1 = ϵ^1, X^2 = ϵ^2, X^3 = sin(2X^1) + ϵ^3,
X^4 = (X^1)^2 + (X^2)^2 + ϵ^4, X^5 = ϵ^5,
Model 2: X^1 = ϵ^1, X^2 = X^1 + ϵ^2, X^3 = ϵ^3, X^4 = (X^1 + X^3)^2 + ϵ^4,
X^5 = cos(2X^2X^3) + ϵ^5, X^6 = X^4 + ϵ^6,
where the ϵ^i, i = 1, …, p, are independent and identically distributed standard normal random variables. The edge sets of the two models are
Model 1: E = {(1,3), (1,4), (2,4), (1,2)},
Model 2: E = {(1,2), (1,4), (3,4), (1,3), (2,5), (3,5), (2,3), (4,6)}.
We use n = 100, 1000 for each model, and for each n, we generate 50 samples to compute the averaged receiver operating characteristic curves. The dimension d_{ij} for sufficient graphical model is taken to be 2 for all cases (we have also used d_{ij} = 1 and the results are very similar to those presented here).
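For concreteness, data from Models 1 and 2 could be generated as in the sketch below (variable indices are shifted to 0-based columns; this is an illustration, not the code used to produce the figures).

import numpy as np

def sample_model1(n, rng):
    # Model 1: column k holds X^{k+1}; dependencies as given in the text
    e = rng.standard_normal((n, 5))
    X = np.empty((n, 5))
    X[:, 0] = e[:, 0]
    X[:, 1] = e[:, 1]
    X[:, 2] = np.sin(2 * X[:, 0]) + e[:, 2]
    X[:, 3] = X[:, 0] ** 2 + X[:, 1] ** 2 + e[:, 3]
    X[:, 4] = e[:, 4]
    return X

def sample_model2(n, rng):
    # Model 2: six variables with the dependencies given in the text
    e = rng.standard_normal((n, 6))
    X = np.empty((n, 6))
    X[:, 0] = e[:, 0]
    X[:, 1] = X[:, 0] + e[:, 1]
    X[:, 2] = e[:, 2]
    X[:, 3] = (X[:, 0] + X[:, 2]) ** 2 + e[:, 3]
    X[:, 4] = np.cos(2 * X[:, 1] * X[:, 2]) + e[:, 4]
    X[:, 5] = X[:, 3] + e[:, 5]
    return X

rng = np.random.default_rng(0)
X1 = sample_model1(1000, rng)   # n = 100 or 1000 in the text
X2 = sample_model2(1000, rng)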
The plots in the first row of Figure <ref> show the averaged receiver operating characteristic curves for the seven methods, with the following plotting symbol assignment:
Sufficient graphical model: red solid line; <cit.>: red dotted line;
<cit.>: black solid line; <cit.>: black dotted line;
<cit.>: red dashed line; Naïve: blue dotted line;
<cit.>: black dashed line.
From these figures we see that
the two top performers are clearly sufficient graphical model and <cit.>, and their performances are very similar. Note that neither of the two models satisfies the Gaussian or copula Gaussian assumption, which explains why sufficient graphical model and <cit.> outperform <cit.> and <cit.>. Sufficient graphical model and <cit.> also outperform <cit.>, <cit.>, and the Naïve method, indicating that the curse of dimensionality already takes effect for the fully nonparametric methods. The three nonparametric estimators have similar performances. Also note that Model I has an additive structure, which explains the slight advantage of <cit.> over sufficient graphical model in subfigure (a) of Figure <ref>; Model II is not additive, and the advantage of <cit.> disappears in subfigure (b) of Figure <ref>.
We next consider two models with relatively high dimensions and small sample sizes. A convenient systematic way to generate larger networks is via the hub structure. We choose p = 200, and randomly generate ten hubs h 1, …, h 10 from the 200 vertices. For each h k, we randomly select a set H k of 19 vertices to form the neighborhood of h k. With the network structures thus specified, our two probabilistic models are
Model 3: X^i = 1 + |X^{h_k}|^2 + ϵ^i, where i ∈ H_k ∖ {h_k},
Model 4: X^i = sin((X^{h_k})^3) ϵ^i, where i ∈ H_k ∖ {h_k},
and the ϵ^i's are the same as in Models 1 and 2. Note that, in Model III, the dependence of X^i on X^{h_k} is through the conditional mean E(X^i | X^{h_k}), whereas in Model IV, the dependence is through the conditional variance var(X^i | X^{h_k}).
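A sketch of how data with this hub structure could be generated is given below; it ignores possible overlaps between the randomly chosen neighborhoods and is meant only as an illustration of Models 3 and 4, not as the simulation code itself.

import numpy as np

def sample_hub_model(n, p=200, n_hubs=10, neighborhood=19, variant=3, seed=0):
    # Each hub h_k drives the members of its neighborhood H_k, either through
    # the conditional mean (Model 3) or through the conditional variance (Model 4).
    rng = np.random.default_rng(seed)
    hubs = rng.choice(p, size=n_hubs, replace=False)
    members = {h: rng.choice([j for j in range(p) if j != h],
                             size=neighborhood, replace=False)
               for h in hubs}
    X = rng.standard_normal((n, p))            # start from the noise terms epsilon
    for h, mem in members.items():
        for i in mem:
            if variant == 3:
                X[:, i] = 1 + np.abs(X[:, h]) ** 2 + X[:, i]
            else:                              # Model 4
                X[:, i] = np.sin(X[:, h] ** 3) * X[:, i]
    edges = {tuple(sorted((int(h), int(i)))) for h, mem in members.items() for i in mem}
    return X, edges

X3, edges3 = sample_hub_model(50, variant=3)   # n = 50 or 100 in the text
X4, edges4 = sample_hub_model(100, variant=4)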
For each model, we choose two sample sizes n=50 and n=100. The averaged receiver operating characteristic curves (again averaged over 50 samples) are presented in the second row in Figure <ref>. From the figures we see that, in the high-dimensional setting with p > n, sufficient graphical model substantially outperforms all the other methods, which clearly indicates the benefit of dimension reduction in constructing graphical models.
We now consider a Gaussian graphical model to investigate any efficiency loss incurred by sufficient graphical model. Following a structure similar to that used in <cit.>, we choose p = 20, n = 100, 200, and the model
Model 5: X ∼ N(0, Θ^{-1}),
where Θ is a 20 × 20 precision matrix with diagonal entries 1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3, 1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1, and nonzero off-diagonal entries θ_{3,5} = 1.418, θ_{4,10} = -0.744, θ_{5,9} = 0.519, θ_{5,10} = -0.577, θ_{13,17} = 0.287, θ_{17,20} = 0.542, θ_{14,15} = 0.998. As expected, Figure <ref> shows that <cit.>, <cit.>, and <cit.> perform better than sufficient graphical model in this case. However, sufficient graphical model still performs reasonably well and significantly outperforms the fully nonparametric methods.
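For illustration, the precision matrix of Model 5 can be assembled from the entries listed above and data drawn from the corresponding Gaussian graphical model as in the following sketch (it assumes, as in the reference being followed, that the resulting matrix is positive definite; this is not the simulation code).

import numpy as np

diag = [1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3,
        1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1]
off = {(3, 5): 1.418, (4, 10): -0.744, (5, 9): 0.519, (5, 10): -0.577,
       (13, 17): 0.287, (17, 20): 0.542, (14, 15): 0.998}

Theta = np.diag(np.array(diag, dtype=float))
for (i, j), v in off.items():                  # indices in the text are 1-based
    Theta[i - 1, j - 1] = Theta[j - 1, i - 1] = v

Sigma = np.linalg.inv(Theta)                   # X ~ N(0, Theta^{-1})
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(20), Sigma, size=200)   # n = 100 or 200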
Finally, we conducted some simulations on the generalized cross validation criterion (<ref>) for determining the threshold ρ_n. We generated samples from Models I through V as described above, produced the receiver operating characteristic curves using sufficient graphical model, and determined the threshold ρ_n by (<ref>). The results are presented in Figure S1 in the Supplementary Material. In each panel, the generalized cross validation-determined threshold ρ_n is represented by the black dots on the red receiver operating characteristic curves.
§ APPLICATION
We now apply sufficient graphical model to a data set from the DREAM 4 Challenge project and compare it with other methods.
The goal of this Challenge is to recover gene regulation networks from simulated steady-state data.
A description of this data set can be found in <cit.>.
Since <cit.> already compared their method with <cit.>, <cit.>, <cit.>, <cit.>, and Naïve method for this dataset and demonstrated the superiority of <cit.> among these estimators, here we will focus on the comparison of the sufficient graphical model with <cit.> and the champion method for the DREAM 4 Challenge.
The data set contains data from five networks, each of dimension 100 and sample size 201. We use the Gaussian radial basis function kernel for sufficient graphical model and <cit.>, together with the tuning methods described in Section <ref>. For sufficient graphical model, the dimensions d_{ij} are taken to be 1. We have also experimented with d_{ij} = 2, but the results (not presented here) show no significant difference. Because the true networks are available, we can compare the receiver operating characteristic curves and their areas under the curve, which are shown in Table <ref>.
As we can see from Table <ref>, sufficient graphical model has the same areas under the receiver operating characteristic curve as <cit.> for Networks 2, 3, and 4, performs better than <cit.> for Network 5, but trails slightly behind <cit.> for Network 1; sufficient graphical model has the same areas under the curve as the champion method, performs better for Network 5, and worse for Network 1. Overall, sufficient graphical model and <cit.> perform similarly on this dataset, and they are on a par with the champion method. We should point out that sufficient graphical model and <cit.> are purely empirical; they employ no knowledge about the underlying physical mechanism generating the gene expression data. However, according to <cit.>, the champion method did use a differential equation that reflects the underlying physical mechanism.
The results for threshold determination are presented in Figure S2 in the Supplementary Material.
§ DISCUSSION
This paper is a first attempt to take advantage of the recently developed nonlinear sufficient dimension reduction method to nonparametrically estimate the statistical graphical model while avoiding the curse of dimensionality. Nonlinear sufficient dimension reduction is used as a module and applied repeatedly to evaluate conditional independence, which leads to a substantial gain in accuracy in the high-dimensional setting.
Compared with the Gaussian and copula Gaussian methods, our method is not affected by the violation of the Gaussian and copula Gaussian assumptions. Compared with the additive method <cit.>, our method does not require an additive structure and retains the conditional independence as the criterion to determine the edges, which is a commonly accepted criterion. Compared with fully nonparametric methods, sufficient graphical model avoids the curse of dimensionality and significantly enhances the performance.
The present framework opens up several directions for further research. First, the current model assumes that the central class S X (i,j) | X -(i,j) is complete, so that generalized sliced inverse regression is the exhaustive nonlinear sufficient dimension reduction estimate. When this condition is violated, generalized sliced inverse regression is no longer exhaustive and we can employ other nonlinear sufficient dimension reduction methods such as the generalized sliced averaged variance estimation <cit.> to recover the part of the central class that generalized sliced inverse regression misses. Second, though we have assumed that there is a proper sufficient sub-σ-field G -(i,j) for each (i,j), the proposed estimation procedure is still justifiable when no such sub-σ-field exists. In this case, U ij is still the most important set of functions that characterize the statistical dependence of X (i,j) on X -(i,j) – even though it is not sufficient. Without sufficiency, our method may be more appropriately called the Principal Graphical Model than the sufficient graphical model. Third, the current method can be extended to functional graphical model, which are common in medical applications such as EEG and fMRI. Several functional graphical models have been proposed recently, by
<cit.>, <cit.>, <cit.>, and <cit.>. The idea of a sufficient graph can be applied to this setting to improve efficiency.
This paper also contains some theoretical advances that are novel to nonlinear sufficient dimension reduction. For example, it introduces a general framework to characterize how the error of nonlinear sufficient dimension reduction propagates to the downstream analysis in terms of convergence rates. Furthermore, the results on convergence rates of various linear operators allowing the dimension of the predictor to go to infinity are the first of their kind in nonlinear sufficient dimension reduction. These advances will benefit the future development of sufficient dimension reduction in general, beyond the current context of estimating graphical models.
Bing Li's research on this work was supported in part by the NSF Grant DMS-1713078. Kyongwon Kim's work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2021R1F1A1046976, RS-2023-00219212), basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education (2021R1A6A1A10039823).
§ SUPPLEMENTARY MATERIAL
Supplementary material includes proofs of all theorems, lemmas, corollaries, and propositions in the paper, asymptotic development for the high-dimensional setting, some additional simulation plots for threshold determination.
0.2in
|
http://arxiv.org/abs/2307.03971v1 | 20230708130330 | What is the meaning of proofs? A Fregean distinction in proof-theoretic semantics | [
"Sara Ayhan"
] | cs.LO | [
"cs.LO",
"math.LO",
"03F03 (Primary), 03F07 (Secondary)"
] |
A Fregean distinction in proof-theoretic semantics
Sara Ayhan Institute of Philosophy I, Ruhr University Bochum, Bochum, Germany
[email protected]
What is the meaning of proofs?
Sara Ayhan[I would like to thank several people for supporting me in improving this paper essentially, among them Luca Tranchini for his thorough feedback and vital input on an earlier version of this paper, and also two anonymous referees for their very constructive and helpful reports. I am especially grateful to Heinrich Wansing for the numerous and encouraging occasions to discuss this paper extensively and for his valuable comments.]
This is a post-peer-review, pre-copyedit version of an article published in the Journal of Philosophical Logic.
The final authenticated version will be available online at: DOI: 10.1007/s10992-020-09577-2
The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives and its response: the rules of inference that govern the use of the connective.
However, what if we go a step further and ask about the meaning of a proof as a whole?
In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs.
Two questions are central here.
First of all, if we have two (syntactically) different derivations, does this always lead to a difference, firstly, in sense, and secondly, in denotation?
The other question is about the relation between different kinds of proof systems (here: natural deduction vs. sequent calculi) with respect to this distinction.
Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given?
In our framework it will be possible to identify denotation as well as sense of proofs not only within one proof system but also between different kinds of proof systems.
Thus, we give an account to distinguish a mere syntactic divergence from a divergence in meaning and a divergence in meaning from a divergence of proof objects analogous to Frege's distinction for singular terms and sentences.
§ INTRODUCTION
In proof-theoretic semantics (PTS) the meaning of the logical constants is taken to be given by the rules of inference that govern their use.
As a proof is constituted by applications of rules of inference, it seems reasonable to ask what the meaning of proofs as a whole would consist of on this account.
What we are particularly interested in is a Fregean distinction between sense and denotation in the context of proofs.[We assume at least a basic familiarity with this idea, laid out in Frege's famous paper “Über Sinn und Bedeutung”, cf. <cit.> for an English translation.]
This account builds up on <cit.>, where such a distinction is proposed and used in a proof-theoretic explanation of paradoxes.
The notion of denotation is nothing new in the context of proofs.
It is common in the literature on proof theory and PTS (e.g. <cit.>, <cit.>, <cit.>) to distinguish between derivations, as linguistic objects, and proofs, as abstract (in the intuitionistic tradition: mental) entities.
Proofs are then said to be represented or denoted by derivations, i.e. the abstract proof object is the denotation of a derivation.
The notion of sense, on the other hand, has been more or less neglected.
Tranchini <cit.>, therefore, made a proposal that for a derivation to have sense means to be made up of applications of correct inference rules.
While this is an interesting approach to consider, Tranchini only determines whether a proof has sense or not but does not go further into what the sense of a proof exactly consists of, so there might be further questions worth pursuing.
We will spell out an account of a distinction between sense and denotation of proofs, which can be considered a full-fledged analogy to Frege's distinction concerning singular terms and sentences.[There is some literature also in the field of proof theory concerned with this Fregean distinction, however, to our knowledge, apart from <cit.> this is not concerned with the sense of derivations but with the sense of sentences: cf. P. Martin-Löf (2001). The Sense/Reference Distinction in Constructive Semantics. Transcription of a lecture given at a conference on Frege organised by G. Sundholm at Leiden, 25 August 2001, transcription by B. Jespersen, 9 August 2002: https://www.academia.edu/25695205/The_Sense_Reference_Distinction_in_Constructive_Semantics, or <cit.>.]
Another question concerns the relation of different kinds of proof systems (intuitionistic natural deduction (ND) and sequent calculus (SC) systems will be considered) with respect to such a distinction.
If we have two syntactically different derivations with the same denotation in different proof systems, do they always also differ in sense or can sense be shared over different systems?
§ CONNECTING STRUCTURE AND MEANING
The basic point of departure is the simple observation that there can be different ways leading from the same premises to the same conclusion, either in different proof systems or also within one system.
The focus in this matter so far has been on normal vs. non-normal derivations in ND and correspondingly on derivations containing cut vs. cut-free derivations in SC.
However, there can also simply be a change of the order of rule applications that can lead to syntactically different derivations from the same premises to the same conclusion.
Does this lead to a different denotation or should we say that it is only the sense that differs in such cases, while the underlying proof stays the same?
§.§ Normal form and the denotation of derivations
One and the same proof may be linguistically represented by different derivations.
We will follow the general opinion in taking proofs to be the denotation - the semantic value - of (valid) derivations.
In ND a derivation in normal form is the most direct form of representation of its denotation, i.e. the represented proof object.
For our purposes we will consider a derivation to be in normal form iff neither β- nor η-conversions (cf. rules below) can be applied to it.
A derivation in normal form in ND corresponds to a derivation in cut-free form in SC.
In intuitionistic logic derivations in non-normal form in ND (resp. with cut in SC) can be reduced to ones in normal form (resp. cut-free form).
These are then thought to represent the same underlying proof, just one more indirectly than the other, because, as Prawitz <cit.> says, they represent the same idea this proof is based on.
In order to make sense and denotation transparent, our approach will be to encode the derivations with λ-terms.
As is well known, by the Curry-Howard-isomorphism there is a correspondence between the intuitionistic ND calculus and the simply typed λ-calculus and we can formulate the following ND-rules annotated with λ-terms together with the usual β- and η-conversions for the terms.
The β-conversions correspond to the well-known reduction procedures, which can be formulated for every connective in ND <cit.>, while the η-conversions are usually taken to correspond to proof expansions <cit.>.
We use p, q, r,... for arbitrary atomic formulas, A, B, C,... for arbitrary formulas, and Γ, Δ,... for sets of formulas.
Γ, A stands for Γ∪{A}.
For variables in terms x, y, z,... is used and r, s, t,... for arbitrary terms.
Term-annotated ND-rules:
[⊃I]λx.t:A ⊃B*t:BΓ,[x:A]
[⊃E]App(s, t):B*s: A ⊃BΓ *t:AΔ
[∧I]⟨s, t⟩: A ∧B*s:AΓ *t:BΔ
[∧E_1]fst(t):A*t:A ∧BΓ
[∧E_2]snd(t):B*t:A ∧BΓ
[∨I_1]s:A ∨B*s:AΓ
[∨I_2]s:A ∨B*s:BΓ
[∨E] r {x.s | y.t}:C *r: A ∨BΓ *s:CΔ, [x:A] *t: CΘ, [y:B]
[E]abort(t):A*t:Γ
β-conversions:
App(λx.t, s)
⇝t[s/x]
fst(⟨s, t ⟩)
⇝s
snd(⟨s, t ⟩)
⇝t
r {x.s | y.t}
⇝s[r/x]
r {x.s | y.t}
⇝t[r/y]
η-conversions:
λ x.App(t, x) ⇝ t (if x not free in t)
⟨fst(t), snd(t) ⟩⇝t
r {t.t | s.s}
⇝r
We read x : A as “x is a proof of A".
t[t'/x] means that in term t every free occurrence of x is substituted with t'.
The usual capture-avoiding requirements for variable substitution are to be observed and α-equivalence of terms is assumed.
A term that cannot be converted by either β- or η-conversion is in normal form.
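Since the β- and η-conversions are purely syntactic rewriting steps on terms, they can be made concrete with a small program. The following minimal sketch is given only for illustration: it covers just the ⊃-∧ fragment, implements only β-steps with a capture-naive substitution (so bound variables are assumed to be named apart, as in the derivations below), and the datatype and function names are merely illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:  name: str
@dataclass(frozen=True)
class Lam:  var: str; body: object          # lambda x . t
@dataclass(frozen=True)
class App:  fun: object; arg: object        # App(s, t)
@dataclass(frozen=True)
class Pair: left: object; right: object     # <s, t>
@dataclass(frozen=True)
class Fst:  arg: object                     # fst(t)
@dataclass(frozen=True)
class Snd:  arg: object                     # snd(t)

def subst(t, x, s):
    # t[s/x], ignoring capture-avoidance (terms are assumed named apart)
    if isinstance(t, Var):  return s if t.name == x else t
    if isinstance(t, Lam):  return t if t.var == x else Lam(t.var, subst(t.body, x, s))
    if isinstance(t, App):  return App(subst(t.fun, x, s), subst(t.arg, x, s))
    if isinstance(t, Pair): return Pair(subst(t.left, x, s), subst(t.right, x, s))
    if isinstance(t, Fst):  return Fst(subst(t.arg, x, s))
    if isinstance(t, Snd):  return Snd(subst(t.arg, x, s))

def beta_step(t):
    # perform one beta-conversion somewhere in t, or return None if none applies
    if isinstance(t, App) and isinstance(t.fun, Lam):
        return subst(t.fun.body, t.fun.var, t.arg)      # App(lambda x.t, s) ~> t[s/x]
    if isinstance(t, Fst) and isinstance(t.arg, Pair):
        return t.arg.left                               # fst(<s, t>) ~> s
    if isinstance(t, Snd) and isinstance(t.arg, Pair):
        return t.arg.right                              # snd(<s, t>) ~> t
    for f in getattr(t, '__dataclass_fields__', {}):
        sub = getattr(t, f)
        if not isinstance(sub, str):
            r = beta_step(sub)
            if r is not None:
                return type(t)(**{g: (r if g == f else getattr(t, g))
                                  for g in t.__dataclass_fields__})
    return None

def normalize(t):
    while (r := beta_step(t)) is not None:
        t = r
    return t

# e.g. fst(<lambda x.x, lambda y.y>) reduces to lambda x.x, as used further below
assert normalize(Fst(Pair(Lam('x', Var('x')), Lam('y', Var('y'))))) == Lam('x', Var('x'))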
Since there is a correspondence between intuitionistic SC and intuitionistic ND, for every derivation in ND there must be a derivation in SC named by the same λ-term.
This correspondence is of course not one-to-one, but many-to-one, i.e. for each proof in ND there are at least potentially different derivations in SC.[On the complications of such a correspondence and also on giving a term-annotated version of SC cf. e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Term-annotated sequent calculi can be found i.a. in <cit.> or <cit.>, from which our presentation is only a notational variant.]
The following are our respective SC-rules, where we use the propositional fragment of an intuitionistic SC with independent contexts <cit.>.
The reduction procedures remain the same as above in ND; β-reduction corresponds to the procedures needed to establish cut-elimination, while η-conversion corresponds to what may be called “identicals-elimination" <cit.> or “identity atomization" <cit.>[Showing that it is possible to get rid of axiomatic sequents with complex formulas and derive them from atomic axiomatic sequents. This is also part of cut-elimination but in principle those are separate procedures <cit.>.]:
Term-annotated G0ip:
Logical axiom:
[Rf]x : A ⊢x : A
Logical rules:
[∧R]Γ, Δ⊢⟨s, t⟩: A ∧BΓ⊢s: A Δ⊢t: B
[∧L]Γ, z: A ∧B ⊢s[[fst(z)/x]snd(z)/y] : CΓ, x: A, y : B ⊢s : C
[∨R_1]Γ⊢s :A ∨BΓ⊢s:A
[∨R_2]Γ⊢s:A ∨BΓ⊢s:B
[∨L]Γ, Δ, z:A ∨B ⊢ {x.s | y.t} : CΓ, x:A ⊢s:C Δ, y:B ⊢t:C
[⊃R]Γ⊢λx.t:A ⊃BΓ, x:A ⊢t:B
[⊃L]Γ, Δ, x:A ⊃B ⊢s[App(x, t)/y]:CΓ⊢t: A Δ, y:B ⊢s:C
[L]x: ⊢abort(x): C
Structural rules:
Weakening:
[W]Γ, x:A ⊢t:CΓ⊢t:C
Contraction:
[C]Γ, x : A ⊢t[x/y] : CΓ, x : A, y : A ⊢t : C
The rule of cut
[cut]Γ, Δ⊢s[t/x] : CΓ⊢t : D Δ, x : D ⊢s : C
is admissible in G0ip.
In the left operational rules as well as in the weakening rule we have the case that variables occur beneath the line that are not explicitly mentioned above the line.
In these cases the variables must be either fresh or - together with the same type assignment - already occurring in the context Γ, Δ, etc.
Same variables can only (but need not) be chosen for the same type, i.e., if a new type occurs in a proof, then a fresh variable must be chosen.
If we allowed choosing the same variable for different types, i.e., letting, for example, x:A and x:B occur in the same derivation, this would amount to assuming that arbitrarily different formulas have the same proof, which is not desirable.
§.§ Identity of proofs and equivalence of derivations
Figuring prominently in the literature on identity of proofs is a conjecture by Prawitz <cit.> that two derivations represent the same proof iff they are equivalent.[Prawitz gives credit for this conjecture to Martin-Löf. Cf. also Martin-Löf <cit.> on this issue, in his terminology “definitional equality".]
This shifts the question of course to asking when two derivations can be considered equivalent.
Using the equational theory of the λ-calculus is one way to provide an answer here: terms on the right and the left hand side of the β- and η-conversions are considered denotationally equal <cit.>.
Hence, two derivations can be considered equivalent iff they are β-η-equal (cf. <cit.>, <cit.>, <cit.>).[There is some discussion about whether η-conversions are indeed identity-preserving. Martin-Löf <cit.> does not think so, for example. Prawitz <cit.> is not clearly decided but writes in the context of identity of proofs it would seem “unlikely that any interesting property of proofs is sensitive to differences created by an expansion". Widebäck <cit.>, relating to results in the literature on the typed λ-calculus like <cit.> and <cit.>, argues for β-η-equality to give the right account of identity of proofs and Girard <cit.> does the same, although he mentions, too, that η-equations “have never been given adequate status" compared to the β-equations.]
The denotation is then seen to be referred to by the term that annotates the formula or sequent to be proven.
We will call this the `end-term' henceforth so that we can cover and compare both ND and SC at once.
So if we have two derivations with essentially different end-terms (in the sense that they are not belonging to the same equivalence class induced by β-η-conversion), we would say that they denote essentially different proofs.
On the other hand, for two ND-derivations, where one reduces to the other (or both reduce to the same), e.g. via normalization, we have corresponding λ-terms, one β-reducible to the other (or both β-reducible to the same term).
In this case we would say that they refer to the same proof.
Prawitz <cit.> stresses that this seems evident since two derivations reducing to identical normal derivations must be seen as equivalent.
Note that we can also have the case that two derivations of the same formula, which would look identical in a non-term-annotated version, here for example of ND, are distinguished on the grounds of our term annotation, like the following two derivations:
ND1p ⊃ (p ⊃ (p ∧ p))
ND2p ⊃ (p ⊃ (p ∧ p))
[⊃I^2]λy.λx.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
[⊃I^1]λx.⟨x, y ⟩: p ⊃(p ∧p)
[∧I]⟨x, y ⟩: p ∧p[x : p]^1 [y : p]^2
[⊃I^2]λx.λy.⟨x, y ⟩: p ⊃(p ⊃(p ∧p))
[⊃I^1]λy.⟨x, y ⟩: p ⊃(p ∧p)
[∧I]⟨x, y ⟩: p ∧p[x : p]^2 [y : p]^1
The reason for this is that it is possible to generalize these derivations in different directions, which is made explicit by the variables.
Hence, the first one can be generalized to a derivation of B ⊃ (A ⊃ (A ∧ B)), while the second one generalizes to A ⊃ (B ⊃ (A ∧ B)).[For a more detailed examination of generalization cf. <cit.> or <cit.>.]
So, encoding derivations with λ-terms seems like a suitable method to clarify the underlying structure of proofs.
There is one kind of conversion left, though, that needs consideration, namely what we will call permutative conversions, or also γ-conversions.[It goes under various other names, as well, like permutation/permuting conversions or commuting/commutative conversions. Some also prefer “reductions" but we will go with the - to us seemingly - more neutral “conversions". The term γ-conversions appears in <cit.>. Cf. about these conversions in general e.g. <cit.>: 251-259, <cit.>: Ch. 10, <cit.>, <cit.>.]
They become relevant here because we have disjunction as part of our logical vocabulary.
Prawitz <cit.> was the first to introduce these conversions.
In the conjunction-implication-fragment of intuitionistic propositional logic derivations in normal form satisfy the subformula property, i.e. in a normal derivation 𝒟 of A from Γ each formula is either a subformula of A or of some formula in Γ.
However, with the disjunction elimination rule this property is messed up, since we get to derive a formula C from A ∨ B which is not necessarily related to A or B.
That is why, in order to recover the subformula property, permutation conversions are introduced, which can be presented in their most general form in the following way:
D[∨E]C *A ∨BΓ *CΔ, A *CΘ, B
⇝
[∨E]D *A ∨BΓ D*CΔ, A D*CΘ, B
Whether or not these are supposed to be taken into the same league as β- and η-conversions in matters of identity preservation of proofs is an even bigger dispute than the one mentioned concerning η-conversions.
Prawitz <cit.> says that while there can be no doubt about the `proper reductions' having no influence on the identity of the proof, “[t]here may be some doubts concerning the permutative ∨E-[...]reductions in this connection" but does not go into that matter any further.
Since he needs these reductions to prove his normalization theorem, it seems that he would be inclined not to have too many doubts about identity preservation under the permutative conversions.
Girard <cit.>, on the other hand, does not seem to be convinced, as he says - considering an example of permutation conversion - that we are forced to identify “a priori different deductions" in these cases.
Even though he accepts these conversions for technical reasons, he does not seem to be willing to really identify the underlying proof objects.
Restall[Restall, G. (2017). Proof Terms for Classical Derivations. Article in progress: https://consequently.org/papers/proof-terms.pdf], however, analyzing derivations by assigning to them what he calls “proof terms" rather than λ-terms, considers the derivations above as merely distinct in representation but not in the underlying proof, which on his account is the same for both.
What is more, he does so not only for technical but rather philosophical reasons, since he claims the flow of information from premises to conclusion to be essentially the same.
Lindley <cit.> and Tranchini <cit.> both make a point about the connection between reductions and expansions (although they speak of certain kinds of “generalized" expansions) on the one hand and (“generalized") permutative conversions on the other, claiming that performing a (generalized) expansion on the left hand side of the conversion above followed by a reduction (and possibly α-conversion) just yields the right hand side.
To conclude, if we only consider the ⊃-∧-fragment of intuitionistic propositional logic, β-η-equality is enough, but if we consider a richer vocabulary, it seems to us at least that there are substantial reasons to include permutative conversions in our equational theory.[The consequence for this paper would be of course to add “γ-conversions" to the list of relevant conversions in our definitions about normal forms, identity of denotation, etc.]
We do not aim to make a final judgment on this issue here.
Rather, when we have laid out our distinction about sense and denotation of proofs below, we will consider the matter again and show why it makes no essential difference for our purposes whether we include permutative conversions or not.
§ THE SENSE OF DERIVATIONS
Let us spell out at this point what exactly we will consider as the sense and also again the denotation of a derivation in our approach:
Definition of denotation:
The denotation of a derivation in a system with λ-term assignment is referred to by the end-term of the derivation.
Identity of denotation holds modulo belonging to the same equivalence class induced by the set of α-, β- and η-conversions of λ-terms, i.e. derivations that are denoted by terms belonging to the same equivalence class induced by these conversions are identical, they refer to the same proof object.[We use the more accurate formulation of “belonging to the same equivalence class" here instead of the formulation we used before of two terms “having the same normal form". The reason for this is that while these two properties coincide for most standard cases, they do not necessarily concur when it comes to Lindley's “general permutative conversions" or also to SC in general because in these cases the confluence property is not guaranteed. We want to thank one of the anonymous referees for indicating this important point.]
Definition of sense:
The sense of a derivation in a system with λ-term assignment consists of the set[One could also consider the question whether multi-sets are an even better choice here, which would of course yield a much stronger differentiation of senses. The reason why we consider sets instead of multi-sets is that to us the distinctions brought about by multi-sets, by e.g. a variable occurrence more or less, do not seem to go hand in hand with substantial differences in how inferences are built up.] of λ-terms that occur within the derivation.
Only a derivation made up of applications of correct inference rules, i.e. rules that have reduction procedures, can have sense.
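To see how the two notions operate on a concrete derivation, the following toy sketch (purely illustrative: a derivation is encoded simply as the list of its term annotations with the end-term last, terms are kept as strings, and the α-, β- and η-equivalence classes are not implemented) reads off sense and denotation for the two ND-derivations of p ⊃ (p ⊃ (p ∧ p)) displayed earlier.

def sense(derivation_terms):
    # the sense: the set of lambda-terms occurring within the derivation
    return set(derivation_terms)

def denotation(derivation_terms):
    # the denotation is referred to by the end-term (identity of the denoted
    # proof objects would further be taken modulo alpha/beta/eta-conversion)
    return derivation_terms[-1]

nd1 = ['x', 'y', '<x, y>', 'lambda x.<x, y>', 'lambda y.lambda x.<x, y>']
nd2 = ['x', 'y', '<x, y>', 'lambda y.<x, y>', 'lambda x.lambda y.<x, y>']

print(sense(nd1) == sense(nd2))            # False: the senses differ
print(denotation(nd1) == denotation(nd2))  # False: so do the end-terms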
§.§ Change of sense due to reducibility
Concerning a distinction between sense and denotation in the context of proofs, the rare cases where this is mentioned at all deal with derivations one of which is reducible to the other or with λ-terms which are β-convertible to the same term in normal form (cf. <cit.>, <cit.>, Restall 2017, p. 6).
Since Tranchini is the only one to spell out the part about sense in detail, we will briefly summarize his considerations.
As mentioned above, in his account, for a derivation to have sense means that it is made up of applications of correct inference rules.
The question to be asked then is of course what makes up correct inference rules?
Tranchini's answer is that inference rules are correct if they have reduction procedures available, i.e. a procedure to eliminate any maximal formula resulting from an application of an introduction rule immediately followed by an elimination rule of the same connective.
From a PTS point of view, applying reduction procedures can be seen as a way of interpreting the derivation because it aims to bring the derivation to a normal form, i.e. the form in which the derivation represents the proof it denotes most directly <cit.>.[Tranchini does not restrict his examination to derivations that normalize, though, but to the contrary, uses it to analyze non-normalizable derivations, like paradoxical ones.]
So the reduction procedures are the instructions telling us how to identify the denotation of the derivation, which for Tranchini means that they give rise to the sense of the derivation.
If we have two derivations denoting the same proof, for example, one in normal form and the other in a form that can be reduced to the former, we could say in Fregean terminology that they have the same denotation but differ in their sense because they denote the proof in different ways, one directly, the other indirectly.
So, we can take as an example the following two derivations, one in normal and one in non-normal form:
NDp ⊃ p
=1.2em
[r]⊃I
[x : p]
λx.x: p ⊃p
NDnon-normal p ⊃ p
=1.2em
[r]∧E
[r]∧I
[r]⊃I
[x : p]
λx.x: p ⊃p
[r]⊃I
[y : q]
λy.y: q ⊃q
⟨λx.x, λy.y ⟩: (p ⊃p) ∧(q ⊃q)
fst(⟨λx.x, λy.y ⟩): p ⊃p
The latter obviously uses an unnecessary detour via the maximal formula (p ⊃ p) ∧ (q ⊃ q), which is introduced by conjunction introduction and then immediately eliminated again, thus, producing different and more complex terms than the former derivation.
The derivation can be easily reduced to the former, though, which can be also seen by β-reducing the term denoting the formula to be proven:
fst(⟨λx.x, λy.y ⟩)
⇝λx.x
We can also give an example analogous to the one above, where a non-normal term (highlighted in bold) in SC is created by using the cut rule:[Note however, that the connection between the application of cut and the resulting non-normal term is necessary but not sufficient, i.e. there can be applications of cut not creating a non-normal term. A non-normal term is produced if both occurrences of the cut formula in the premises are principal.]
SC⊢ (p ∧ p) ⊃ (p ∨ p)
=1.2em
[r]⊃R
[r]∨R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
y : p ∧p ⊢fst(y) : p ∨p
⊢λy.fst(y) : (p ∧p) ⊃(p ∨p)
SCcut⊢ (p ∧ p) ⊃ (p ∨ p)
=1.2em
[r]⊃R
[r]∨R
[r]cut
[r]C
[r]∧R
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
x : p, z : p ⊢z : p
y : p ∧p ⊢snd(y) : p
y : p ∧p, y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p
y : p ∧p ⊢⟨fst(y), snd(y)⟩: p ∧p
[r]∧L
[r]W
[r]Rf
z : p ⊢z : p
z : p, x : p ⊢z : p
y : p ∧p ⊢fst(y) : p
y : p ∧p ⊢fst⟨fst(y), snd(y)⟩: p
y : p ∧p ⊢fst⟨fst(y), snd(y)⟩ : p ∨p
⊢λy.fst⟨fst(y), snd(y)⟩ : (p ∧p) ⊃(p ∨p)
λy.fst⟨fst(y), snd(y)⟩
⇝λy.fst(y)
In this case again the two derivations are essentially the same because the latter can be reduced to the former by eliminating the application of the cut rule.
Again, the proof object they represent is thus the same, only the way of making the inference, represented by the different terms occurring within the derivation, differs, i.e. the sense is different.
§.§ Change of sense due to rule permutations
So far we only considered the case in which there is an identity of denotation but a difference in sense of derivations due to one being represented by a λ-term in non-normal form reducible to one in normal form.
However, we want to show that this is not the only case where we can make such a distinction.
This is also the reason why our approach differs from Tranchini's (who works solely in an ND system) in how we grasp the notion of sense of a derivation.
Following Tranchini, the derivation having sense at all depends on there being reduction procedures available for the rules that are applied in it.
Since we are also interested in a comparison of sense-and-denotation relations between ND and SC systems, our approach requires that there are reduction procedures available for the created terms.
Thereby we will be able to cover both systems at once.
Encoding the proof systems with λ-terms also makes the connection between changing the order of the rule applications and the sense-and-denotation distinction transparent, which is the other case we want to cover.
In ND with disjunction rules it is possible to have rule permutations producing derivations with end-terms identifiable by means of the permutative conversions.
In SC, however, there are more cases of rule permutations possible.
When the left disjunction rule is involved, this also leads to different - though γ-equal - terms; with the left conjunction or implication rule the end-term remains completely unchanged.
Consider e.g. the following three derivations in SC of the same sequent ⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r)):
SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∧L
[r]W
[r]∨R
[r]Rf
q ⊢q
q ⊢p ∨q
q, r ⊢p ∨q
q ∧r ⊢p ∨q
[r]∧L
[r]W
[r]∨R
[r]Rf
r ⊢r
r ⊢p ∨r
q, r ⊢p ∨r
q ∧r ⊢p ∨r
q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r)
q ∧r ⊢(p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
p, p ⊢(p ∨q) ∧(p ∨r)
p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∨R
[r]∧L
[r]W
[r]Rf
q ⊢q
q, r ⊢q
q ∧r ⊢q
q ∧r ⊢p ∨q
[r]∨R
[r]∧L
[r]W
[r]Rf
r ⊢r
q, r ⊢r
q ∧r ⊢r
q ∧r ⊢p ∨r
q ∧r, q ∧r ⊢(p ∨q) ∧(p ∨r)
q ∧r ⊢(p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
p, p ⊢(p ∨q) ∧(p ∨r)
p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
q ⊢q
q ⊢p ∨q
q, r ⊢p ∨q
q ∧r ⊢p ∨q
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨q
(q ∧r) ∨p ⊢p ∨q
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
r ⊢r
r ⊢p ∨r
q, r ⊢p ∨r
q ∧r ⊢p ∨r
[r]∨R
[r]Rf
p ⊢p
p ⊢p ∨r
(q ∧r) ∨p ⊢p ∨r
(q ∧r) ∨p, (q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
(q ∧r) ∨p ⊢(p ∨q) ∧(p ∨r)
⊢((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
The difference between SC1 and SC2 (highlighted in bold) is that the order of applying the right disjunction rule and the left conjunction rule is permuted.
The difference between SC1 and SC3 (highlighted with underlining) is that the order of applying the right conjunction rule and the left disjunction rule is permuted.
The order of applying the right disjunction rule and the left conjunction rule stays fixed this time.
Encoded with λ-terms, though, we see that in the first case, comparing SC1 and SC2, the permutation of rule applications produces exactly the same end-term.
Both derivations have the same end-term, namely:
λ u. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩}
SC_1⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∧L
[r]W
[r]∨R
[r]Rf
y : q ⊢y : q
y : q ⊢y : p ∨q
y : q, z : r ⊢y : p ∨q
v : q ∧r ⊢fst(v) : p ∨q
[r]∧L
[r]W
[r]∨R
[r]Rf
z : r ⊢z : r
z : r ⊢z : p ∨r
y : q, z : r ⊢z : p ∨r
v : q ∧r ⊢snd(v): p ∨r
v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r)
⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
SC_2⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]∨L
[r]C
[r]∧R
[r]∨R
[r]∧L
[r]W
[r]Rf
y : q ⊢y : q
y : q, z : r ⊢y : q
v : q ∧r ⊢fst(v) : q
v : q ∧r ⊢fst(v) : p ∨q
[r]∨R
[r]∧L
[r]W
[r]Rf
z : r ⊢z : r
y : q, z : r ⊢z : r
v : q ∧r ⊢snd(v) : r
v : q ∧r ⊢snd(v): p ∨r
v : q ∧r, v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
v : q ∧r ⊢⟨fst(v), snd(v) ⟩: (p ∨q) ∧(p ∨r)
[r]C
[r]∧R
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
x : p, x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
x : p ⊢⟨x, x⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢ {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : (p ∨q) ∧(p ∨r)
⊢λu. {v.⟨fst(v), snd(v) ⟩ | x.⟨x, x⟩} : ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
Considering the second comparison between SC1 and SC3 the situation is different: here the permutation of rule applications leads to a different end-term.
In the end-term for SC1 and SC2 the pairing operation is embedded within the case expression, whereas in the end-term for SC3 the case expression is embedded within the pairing:
λ u.⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩
SC_3⊢ ((q ∧ r) ∨ p) ⊃ ((p ∨ q) ∧ (p ∨ r))
=1.2em
[r]⊃R
[r]C
[r]∧R
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
y : q ⊢y : q
y : q ⊢y : p ∨q
y : q, z : r ⊢y : p ∨q
v : q ∧r ⊢fst(v) : p ∨q
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨q
u : (q ∧r) ∨p ⊢ {v.fst(v) | x.x} : p ∨q
[r]∨L
[r]∧L
[r]W
[r]∨R
[r]Rf
z : r ⊢z : r
z : r ⊢z : p ∨r
y : q, z : r ⊢z : p ∨r
v : q ∧r ⊢snd(v) : p ∨r
[r]∨R
[r]Rf
x : p ⊢x : p
x : p ⊢x : p ∨r
u : (q ∧r) ∨p ⊢ {v.snd(v) | x.x}: p ∨r
u : (q ∧r) ∨p, u : (q ∧r) ∨p ⊢⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: (p ∨q) ∧(p ∨r)
u : (q ∧r) ∨p ⊢⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: (p ∨q) ∧(p ∨r)
⊢λu.⟨ {v.fst(v) | x.x}, {v.snd(v) | x.x}⟩: ((q ∧r) ∨p) ⊃((p ∨q) ∧(p ∨r))
When we take a look at how the term-annotated rules must be designed in order to have a correspondence to the respective rules in ND, we see why some permutations of rule applications lead to different end-terms, while others do not; and why SC is in general more flexible in this respect than ND.
In SC the left conjunction rule as well as the left implication rule are substitution operations, i.e. they can change their place in the order without affecting the basic term structure because only in the inner term structure terms are substituted with other terms.[For ⊃L the only exception is when an application of this rule is permuted with an application of ∨L, which creates a different, though γ-convertible term.]
In ND, on the other hand, there are no substitution operations used in the term assignment, i.e. for each rule application a new basic term structure is created.
How is this related to the distinction between sense and denotation?
In cases like SC1 vs. SC2 the way the inference is given differs, which can also be seen in different terms annotating the formulas occurring within the derivation: with otherwise identical terms in the two derivations y and z only occur in SC1, while fst(v) and snd(v) only occur in SC2.
However, the resulting end-term stays the same, thus, we would describe the difference between these derivations as a difference in sense but not in denotation.
In other cases, when disjunction elimination or the left disjunction rule is involved, permutation of rule applications can lead to a different end-term, as we see above in SC1 vs. SC3.
Whether this corresponds to a difference in denotation depends on whether we accept γ-conversions to be identity-preserving.
What all cases have in common, though, is that rule permutation always leads to a difference in sense of the given derivations because the sets of terms occurring within the derivations differ from each other.
§.§ Philosophical motivation
Let us have a look at how the Fregean conception of sense is received in the literature in order to show the philosophical motivation for adopting such a definition of sense for derivations.
According to Dummett <cit.>, Fregean sense is to be considered as a procedure to determine its denotation.[This idea of sense as procedures also occurs in more recent publications like <cit.> or <cit.>.]
Girard <cit.>, in a passage about sense and denotation and the relation between proofs and programs, mentions that the sense is determined by a “sequence of instructions" and when we see in this context terms as representing programs and “the purpose of a program [...] to calculate [...] its denotation" (ibid., p. 17), then it seems plausible to view the terms occurring within the derivation, decorating the intermediate steps in the construction of the complex end-term that decorates the conclusion, as the sense of that derivation.
Tranchini holds the reduction procedures to be the sense because these `instructions' lead to the term in normal form.
However, in our framework - because we do not only consider normal vs. non-normal cases - it seems more plausible to look at the exact terms occurring within the derivations and view them as representing the steps in the process of construction encoding how the derivation is built up and leading us to the denotation, the end-term.
For us it is therefore only a necessary requirement for the derivation to have sense to contain only terms for which reduction procedures are available but it does not make up the sense.
In the case of rule permutation we can then say that the proof is essentially the same but the way it is given to us, the way of inference, differs: i.e. the sense differs.
This can be read off from the set of terms that occur within the derivation: they end up building the same end-term, but the way it is built differs, the procedures to determine the denotation differ.
Thus, this allows us to compare differences in sense within one proof system as well as over different proof systems.
Troelstra and Schwichtenberg <cit.> e.g. give an example of two derivations in SC producing the same end-term in different ways to show that just from the variables and the end-term we cannot read off how the derivation is built up:[For simplicity we omit the weakening steps that would strictly seen have to precede the applications of the ∧L-rule.]
SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q))
=1.2em
[r]⊃R
[r]⊃R
[r]∧L
[r]∧L
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : q ⊢y : q
x : p, y : q ⊢⟨x, y ⟩: p ∧q
x : p, z : q ∧r ⊢⟨x, fst(z) ⟩: p ∧q
u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q
u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q)
⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q))
SC2⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q))
=1.2em
[r]⊃R
[r]⊃R
[r]∧L
[r]∧L
[r]∧R
[r]Rf
x : p ⊢x : p
[r]Rf
y : q ⊢y : q
x : p, y : q ⊢⟨x, y ⟩: p ∧q
u : s ∧p, y: q ⊢⟨snd(u), y ⟩: p ∧q
u: s ∧p, z : q ∧r ⊢⟨snd(u), fst(z) ⟩: p ∧q
u : s ∧p ⊢λz.⟨snd(u), fst(z) ⟩: (q ∧r) ⊃(p ∧q)
⊢λu.λz.⟨snd(u), fst(z) ⟩: (s ∧p) ⊃((q ∧r) ⊃(p ∧q))
The senses of these derivations would be the following:
Sense of SC1:
{x, y, z, u, ⟨ x, y ⟩, ⟨ x, fst(z) ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩,
λ u.λ z.⟨ snd(u), fst(z) ⟩}
Sense of SC2:
{x, y, z, u, ⟨ x, y ⟩, ⟨ snd(u), y ⟩, ⟨ snd(u), fst(z) ⟩, λ z.⟨ snd(u), fst(z) ⟩,
λ u.λ z.⟨ snd(u), fst(z) ⟩}
The two sets only differ with regard to the underlined terms, otherwise they are identical.
Thus, they only differ in the order in which the two left conjunction rules are applied.
For the resulting end-term this is inessential, but we can see that when taking the sense, and not only the end-terms, i.e. the denotation, into account, it is indeed possible to read off the structure of the derivations.
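As a small mechanical check (the terms are written here as plain strings mirroring the notation above; this is only an illustration), one can verify that the two derivations share their end-term while their sense-representing sets differ in exactly the two underlined terms.

sense_sc1 = {'x', 'y', 'z', 'u', '<x, y>', '<x, fst(z)>', '<snd(u), fst(z)>',
             'lambda z.<snd(u), fst(z)>', 'lambda u.lambda z.<snd(u), fst(z)>'}
sense_sc2 = {'x', 'y', 'z', 'u', '<x, y>', '<snd(u), y>', '<snd(u), fst(z)>',
             'lambda z.<snd(u), fst(z)>', 'lambda u.lambda z.<snd(u), fst(z)>'}

end_term_sc1 = end_term_sc2 = 'lambda u.lambda z.<snd(u), fst(z)>'

print(end_term_sc1 == end_term_sc2)   # True: same denotation
print(sense_sc1 == sense_sc2)         # False: the senses differ
print(sense_sc1 ^ sense_sc2)          # exactly the two terms in which they differ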
As noted above (examples on p. 6), the term annotation of the calculi makes this structure of derivations explicit so that we can differentiate between derivations which would otherwise look identical.
As several authors point out, this is a desirable feature if one is not only interested in mere provability but wants to study the structure of the derivations in question (cf. <cit.>, <cit.>) and also, for simplicity, if one wants to compare proof systems of ND and SC with each other <cit.>.
Since we are interested in both of these points, it seems the right choice for our purposes to consider the annotated versions of the calculi and that is also why these annotated versions are indeed needed for our notions of sense and denotation.
Of course, one could argue that the underlying structure is still the same in the non-annotated versions and can be made explicit by other means, too, like showing the different generalizations of the derivations, but still, we do not see how in these calculi our notions could be easily applied.
Another issue that needs to be considered is the one of identity of senses, i.e. synonymy.
Therefore, we want to extend our definition of sense given above with an addition:
If a sense-representing set can be obtained from another by uniformly replacing (respecting the usual capture-avoiding conventions) any occurrence of a variable, bound or free, by another variable of the same type, they express the same sense.
What we ensure with this point is just that it does not (and should not) matter which variables one chooses for which proposition as long as one does it consistently.
So, it does not make a difference whether we have
ND1p ⊃ (q ⊃ p)
=1.2em
[r]⊃I
[r]⊃I
[x : p]
λz.x: q ⊃p
λx. λz.x: p ⊃(q⊃p)
Sense1: {x, λ z.x, λ x. λ z.x}
or
ND2p ⊃ (q ⊃ p)
=1.2em
[r]⊃I
[r]⊃I
[y : p]
λz.y: q ⊃p
λy. λz.y: p ⊃(q⊃p)
Sense2: {y, λ z.y, λ y. λ z.y}
Sense1 and Sense2 represent the same sense.
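This clause can likewise be illustrated with a small sketch: the function below applies a uniform renaming to term strings and checks whether one sense-representing set is obtained from the other in this way. Treating any single lowercase letter as a variable suffices for the toy examples here, but it is of course not a general implementation of the capture-avoiding conventions.

import re

def rename(term, mapping):
    # uniformly replace variable names (single lowercase letters) in a term string
    return re.sub(r'\b[a-z]\b', lambda m: mapping.get(m.group(0), m.group(0)), term)

def same_sense(sense_a, sense_b, mapping):
    # does the uniform renaming `mapping` carry sense_a onto sense_b?
    return {rename(t, mapping) for t in sense_a} == set(sense_b)

sense1 = {'x', 'lambda z.x', 'lambda x.lambda z.x'}
sense2 = {'y', 'lambda z.y', 'lambda y.lambda z.y'}

print(same_sense(sense1, sense2, {'x': 'y'}))   # True: the same sense is expressed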
Or to give another example (pointed to by one of the anonymous referees) where we have free variables occurring within the derivation but not appearing in the end-term: If one were to replace all occurrences of the free variable y by the variable w in derivation SC1⊢ (s ∧ p) ⊃ ((q ∧ r) ⊃ (p ∧ q)) (cf. above), then this would make no difference to the sense according to our definition, since the sense-representing sets would be obtained from one another by replacing y by w.
This also fits the Fregean criterion of two sentences' identical sense, as Sundholm <cit.> depicts it within a broader analysis: two propositions express the same sense if it is not possible to hold different epistemic attitudes towards them, i.e. “if one holds the one true, one also must hold the other one true, and vice versa".
Whereas, if we have two sentences which only differ in two singular terms, referring to the same object but differing in sense, we can easily hold the one sentence to be true, while thinking the other is false, if we do not know that they are referring to the same object.
With proofs it is the same: Looking at ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) we may not know whether the derivation is valid or not, we do know, however, that if one is a valid derivation then so is the other.
With derivations differing in sense this is not so straightforward.
For Frege this point of considering cases where intensionality is directed towards sentences was crucial to develop his notion of sense, so the question arises how we can explain cases of intensionality directed towards proofs with our notions of sense and denotation.
Let us suppose we have two denotationally-identical proofs which are represented by two different derivations 𝒟 and 𝒟'.
In this case it could happen that a (rational) person believes that derivation 𝒟 is valid but does not believe that derivation 𝒟' is valid.
How can we account for that?
One explanation would be of course to point to the difference in linguistic representation.
After all, it can just be the case that one way of writing down a proof is more accessible to the person than another (they may not be familiar with a certain proof system, for example).
This would amount to letting the linguistic representation, the signs, collapse with the sense of a derivation.
However, then we would have no means to distinguish this case from cases in which we want to argue that it is not justified for a rational person to have different propositional attitudes towards propositions which are about derivations differing insignificantly from each other, like in the cases of ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p) above.
For Frege <cit.> the referent of an expression in an intensional context is not its customary referent, i.e. the object it refers to or the truth value in the case of sentences, but its customary sense.
Here the situation is the same: What is referred to in such a setting, when speaking about the attitudes of a person towards propositions about derivations, is not the proof objects (which are identical in our situation) but their senses, which are in this context represented by the sets of terms encoding the steps of construction.
It seems plausible then to say that when the construction steps differ in two derivations, a person can have different attitudes towards propositions about them, because the different construction steps may lead to this person grasping the one derivation, while not understanding the other.
§ ANALOGY TO FREGE'S CASES
Let us finally compare how our conception of sense and denotation in the context of proofs fits the distinction Frege came up with for singular terms and sentences.
We can have the following two cases with Frege's distinction: firstly (cf. <cit.>), there can be different signs corresponding to exactly one sense (and then of course also only one denotation).
In the case of singular terms an example would be “Gottlob's brother” and “the brother of Gottlob".
The sense, the way the denoted individual object is given to us, is the same because there is only a minor grammatical difference between the two expressions.
More frequently, this occurs in comparing different languages, though, taking singular terms which express exactly the same sense only using different words, like “the capital of France" and “die Hauptstadt Frankreichs".
In the case of sentences an example would be changing from an active to a passive construction without changing the emphasis of the sentence; an example from Frege is the following: “M gave document A to N", “Document A was given to N by M" <cit.>.
In the case of proofs, finally, an example would be the following case:
ND(p∨ p) ⊃ (p∧ p): from the assumption [y : p ∨ p] (used twice and discharged by the final ⊃I), two applications of ∨E, each discharging assumptions [x : p], yield {x.x | x.x} : p twice; ∧I combines the two copies into ⟨{x.x | x.x}, {x.x | x.x}⟩ : p ∧ p, and ⊃I gives the end-term
λy.⟨{x.x | x.x}, {x.x | x.x}⟩ : (p ∨ p) ⊃ (p ∧ p).
SC⊢ (p∨ p) ⊃ (p ∧ p): from reflexivity axioms x : p ⊢ x : p, two applications of ∨L yield y : p ∨ p ⊢ {x.x | x.x} : p twice; ∧R combines them into y : p ∨ p, y : p ∨ p ⊢ ⟨{x.x | x.x}, {x.x | x.x}⟩ : p ∧ p, contraction (C) merges the duplicated antecedent into y : p ∨ p ⊢ ⟨{x.x | x.x}, {x.x | x.x}⟩ : p ∧ p, and ⊃R gives the end-sequent
⊢ λy.⟨{x.x | x.x}, {x.x | x.x}⟩ : (p ∨ p) ⊃ (p ∧ p).
Sense (of both derivations):
{x, y, {x.x | x.x}, ⟨{x.x | x.x}, {x.x | x.x}⟩, λy.⟨{x.x | x.x}, {x.x | x.x}⟩}
Or to give another example:
NDp ⊃ (p ⊃ (p ∧ p)): from the assumptions [x : p] and [y : p], ∧I gives ⟨x, y⟩ : p ∧ p; two applications of ⊃I then discharge y and x in turn, yielding λy.⟨x, y⟩ : p ⊃ (p ∧ p) and the end-term
λx.λy.⟨x, y⟩ : p ⊃ (p ⊃ (p ∧ p)).
SC⊢ p ⊃ (p ⊃ (p ∧ p)): from the reflexivity axioms x : p ⊢ x : p and y : p ⊢ y : p, ∧R gives x : p, y : p ⊢ ⟨x, y⟩ : p ∧ p; two applications of ⊃R then yield x : p ⊢ λy.⟨x, y⟩ : p ⊃ (p ∧ p) and the end-sequent
⊢ λx.λy.⟨x, y⟩ : p ⊃ (p ⊃ (p ∧ p)).
Sense: {x, y, ⟨x, y⟩, λy.⟨x, y⟩, λx.λy.⟨x, y⟩}
In these cases derivations can consist of different signs, namely by having one representation in SC and one in ND, which do not differ in sense nor in denotation, since they both contain exactly the same terms and produce the same end-term.
This comparison between different proof systems seems to fit nicely with Frege's <cit.> comment on “the same sense ha[ving] different expressions in different languages".
However, as we have seen above with the examples ND1p ⊃ (q ⊃ p) and ND2p ⊃ (q ⊃ p), this case can also occur within the same proof system.
One could wonder whether there should not be a differentiation between the senses of the derivations in the first example since it seems that different rules are applied: in SC⊢ (p∨ p) ⊃ (p ∧ p) we have an application of contraction, which we do not have in ND(p∨ p) ⊃ (p∧ p).
This would also question whether our definition of sense distinguishes and identifies the right amount of cases.
We do believe that this is the case, though, because in the first example, where there is an application of the contraction rule in SC, there is also a multiple assumption discharge in the ND-derivation, which is generally seen as the corresponding procedure, just as cases of vacuous discharge of assumptions in ND correspond to the application of weakening in SC.
So just as different languages naturally do not use exactly the same expressions, here too the rules differ from ND to SC; but since the corresponding procedures are used, one can argue that the sense does not differ for that reason.
Another case that can occur according to Frege (ibid.) is that we have one denotation, i.e. one object a sign refers to, but different senses.
An example for this would be his famous “morning star" and “evening star" comparison, where both expressions refer to the same object, the planet Venus, but the denoted object is given differently.
On the sentence level this would amount to exchanging singular terms in a sentence by ones which have the same denotation: “The morning star is the planet Venus" and “The evening star is the planet Venus".
The denotation of the sentence - with Frege: its truth value - thus stays the same, only the sense of it differs, the information is conveyed differently to us.
For our proof cases we can say that this case is given when we have syntactically different derivations, be it in one or in different proof systems, which have end-terms belonging to the same equivalence class induced by the set of α-, β- and η-conversions.
Thus, examples would be corresponding proofs in ND and SC, which share the same end-term, but contain different terms occurring within the derivations.
The reason for this to happen seems that in SC often more variables are necessary than in ND.
If we compare derivations within ND, one definite case in which we have the same denotation but a different sense is between equivalent but syntactically distinct derivations, e.g. non-normal and normal derivations, one reducible to the other.
Another case up for debate would be the one with rule permutations due to disjunction elimination.
Within SC we can have two cases: one due to rule permutation, one due to applications of cut.
For the first case, where the inference could be given in a different way, although ending on the same term, we gave examples above (cf. p. 12 and 14f.).
However, it is worth mentioning that our distinction still captures the usual distinction, the second case, where it is said that two derivations, one containing cut and the other one in cut-free form (as a result of cut-elimination applied to the former), have the same denotation but differ in sense:
SC⊢ (p∧ p) ⊃ (p∨ p): from the reflexivity axiom z : p ⊢ z : p, weakening (W) gives z : p, x : p ⊢ z : p, ∧L gives y : p ∧ p ⊢ fst(y) : p, ∨R gives y : p ∧ p ⊢ fst(y) : p ∨ p, and ⊃R yields the end-sequent
⊢ λy.fst(y) : (p ∧ p) ⊃ (p ∨ p).
Sense: {z, x, y, fst(y), fst(y), λ y.fst(y)}
SCcut⊢ (p∧ p) ⊃ (p∨ p): the left premise y : p ∧ p ⊢ fst(y) : p is obtained as before (Rf, W, ∧L), the right premise z : p ⊢ z : p ∨ p is obtained from a reflexivity axiom by ∨R, an application of cut on p combines them into y : p ∧ p ⊢ fst(y) : p ∨ p, and ⊃R again yields
⊢ λy.fst(y) : (p ∧ p) ⊃ (p ∨ p).
Sense: {z, x, y, fst(y), z, fst(y), λ y.fst(y)}
As mentioned above (fn 14), cut need not create a non-normal term, as is the case here, but still any application of cut will necessarily change the sense of a derivation as opposed to its cut-free form.
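To make the comparison tangible, the following small Python sketch encodes the two SC-derivations above simply by their end-term and by the collection of terms occurring in them, and checks that they agree in denotation while differing in sense. It is an illustration only, not part of the formal apparatus; in particular, reading the sense as a multiset of construction-step terms (so that the extra occurrence of z introduced by cut counts) is an assumption made for this sketch, and no βη-normalization is needed here because the end-terms are syntactically identical.
from collections import Counter

# Terms occurring in each derivation, copied from the examples above.
cut_free = {
    "end_term": "λy.fst(y) : (p∧p)⊃(p∨p)",
    "terms": ["z", "x", "y", "fst(y)", "fst(y)", "λy.fst(y)"],
}
with_cut = {
    "end_term": "λy.fst(y) : (p∧p)⊃(p∨p)",
    "terms": ["z", "x", "y", "fst(y)", "z", "fst(y)", "λy.fst(y)"],
}

def same_denotation(d1, d2):
    # Both derivations end on the same term, so no normalization is required here.
    return d1["end_term"] == d2["end_term"]

def same_sense(d1, d2):
    # Sense compared as a multiset of construction-step terms (an assumption of this sketch).
    return Counter(d1["terms"]) == Counter(d2["terms"])

print(same_denotation(cut_free, with_cut))  # True: one and the same proof object
print(same_sense(cut_free, with_cut))       # False: the cut adds a construction step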
Finally, cases that need to be avoided in a formal language according to Frege <cit.> would be to have one sign, corresponding to different senses, or on the other hand, one sense corresponding to different denotations.
As he mentions, these cases of course occur in natural languages but should not happen in formal ones, so it should also not be possible in our present context, for sure.
Fortunately, this cannot happen in the context of our annotated proof systems, either, since the signs (taken to be the derivation as it is written down) always express at most one sense in our annotated system, and likewise the sense always yields a unique denotation since the end-term is part of the sense-denoting set.[Another question would be whether there can be signs without any sense at all. Frege <cit.> dismisses this case, as well, with a remark that we need at least the requirement that our expressions are “grammatically well-formed". Tranchini <cit.> gives a good analogy pointing to the notorious connective playing this role in the case of proofs.]
§ CONCLUSION
The context in which Frege considered sense and denotation was the context of identity.
Likewise, we argued in this paper, if we use term-annotated calculi, we can also say something about proof identity: identity of proofs over different calculi or within the same calculus consists in having end-terms that belong to the same equivalence class induced by the set of α-, β- and η-conversions.
In ND this can happen when we have the same proof in normal and non-normal form, in SC this can happen when we have the same proof using cut and in cut-free form but also when there are forms of rule permutations where an application of the ∧L-rule or the ⊃L-rule switches place with another rule.
Including disjunction in our language creates for both calculi the additional question of whether rule permutations including disjunction elimination (resp. the left disjunction rule) lead to a different proof, or whether these proofs should be identified.
We are more interested in sense, however, and here we can conclude that what in all these cases changes is the sense of the derivation in question.
Finally, considering the question of identity of sense, i.e. synonymy, and trying to follow Frege's conception on this matter, too, we can say the following: if two derivations are supposed to be identical in sense, this means that the way the inference is given is essentially the same, so the set of terms building up the end-term must be the same.
The end-term itself does not necessarily tell us anything about the structure of the proof.
Sense, on the other hand, is more fine-grained in that the set of terms occurring within the derivation reflects how the derivation is built up.
Especially in SC, where we can have different orders of rule applications leading up to the same end-term, the sense gives us means to distinguish on a more fine-grained level.
BarendregtGhilezan Barendregt, H., & Ghilezan, S. (2000). Lambda terms for natural deduction, sequent calculus and cut elimination. Journal of Functional Programming, 10(1), 121–134.
Groote De Groote, P. (1999). On the Strong Normalisation of Natural Deduction with Permutation-Conversions. In P. Narendran, & M. Rusinowitch (Eds), Rewriting Techniques and Applications: RTA 1999 (pp. 45–59). Berlin/Heidelberg: Springer.
Dosen2003 Došen, K. (2003). Identity of Proofs Based on Normalization and Generality. Bulletin of Symbolic Logic, 9, 477–503.
Dosen2008 Došen, K. (2008). Cut Elimination in Categories. Springer.
Dummett Dummett, M. (1973). Frege: Philosophy of Language. New York: Harper & Row.
DJM Duží, M., Jespersen, B., & Materna, P. (2010). Procedural Semantics for
Hyperintensional Logic: Foundations and Applications of Transparent Intensional Logic. Springer.
Francez Francez, N. (2017). On harmony and permuting conversions. Journal of Applied Logic, 21, 14–23.
Frege1 Frege, G. (1948) [1892]. Sense and Reference. The Philosophical Review, 57(3), 209–230.
Frege2 Frege, G. (1979). Posthumous Writings. Oxford: Basil Blackwell.
Friedman Friedman, H. (1975). Equality between functionals. In R. Parikh (Ed.), Logic colloquium: Lecture notes in mathematics 453 (pp. 23–37). Berlin/Heidelberg: Springer.
Girard Girard, J.-Y. (1989). Proofs and Types. Cambridge: Cambridge University Press.
Hacking Hacking, I. (1979). What is Logic? The Journal of Philosophy, 76(6), 285–319.
Herbelin Herbelin, H. (1994). A Lambda-calculus Structure Isomorphic to Gentzen-style Sequent Calculus Structure. Computer Science Logic, 61–75.
Kreisel Kreisel, G. (1971). A survey of proof theory II. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 109–170). Amsterdam: North-Holland.
Lindley Lindley, S. (2007). Extensional Rewriting with Sums. In S. Ronchi Della Rocca (Ed.), Typed Lambda Calculi and Applications: TLCA 2007 (pp. 255–271). Berlin/Heidelberg: Springer.
M-L Martin-Löf, P. (1975). About Models for Intuitionistic Type Theories and the Notion of Definitional Equality. In S. Kanger (Ed.), Proceedings of the Third Scandinavian Logic Symposium (pp. 81–109). Amsterdam: North-Holland.
Muskens Muskens, R. (2005). Sense and the Computation of Reference. Linguistics and Philosophy, 28(4), 473–504.
NegrivonPlato Negri, S., & von Plato, J. (2001). Structural Proof Theory. Cambridge/New York: Cambridge University Press.
Pfenning Pfenning, F. (2000). Structural Cut Elimination: I. Intuitionistic and Classical Logic. Information and Computation, 157, 84–141.
Pottinger Pottinger, G. (1977). Normalization as a homomorphic image of cut-elimination. Annals of Mathematical Logic, 12, 323–357.
Prawitz1965 Prawitz, D. (1965). Natural Deduction. Stockholm: Almqvist & Wiksell.
Prawitz1971 Prawitz, D. (1971). Ideas and results in proof theory. In J.E. Fenstad (Ed.), Proceedings of the Second Scandinavian Logic Symposium (pp. 235–307). Amsterdam: North-Holland.
SU Sørensen, M., & Urzyczyn, P. (2006). Lectures on the Curry-Howard Isomorphism. Amsterdam: Elsevier Science.
Statman Statman, R. (1983). λ-definable functionals and βη conversion. Archiv für Mathematische Logik, 23, 21–26.
Sundholm Sundholm, G. (1994). Proof-Theoretical Semantics and Fregean Identity Criteria for Propositions. The Monist, 77(3), 294–314.
Tranchini2016 Tranchini, L. (2016). Proof-theoretic semantics, paradoxes and the distinction between sense and denotation. Journal of Logic and Computation, 26(2), 495–512.
Tranchini2018 Tranchini, L. (2018). Stabilizing Quantum Disjunction. Journal of Philosophical Logic, 47, 1029–1047.
TS Troelstra, A., & Schwichtenberg, H. (2000). Basic Proof Theory. 2nd ed., Cambridge: Cambridge University Press.
Urban Urban, C. (2014). Revisiting Zucker's Work on the Correspondence Between Cut-Elimination and Normalisation. In L. Pereira, E. Haeusler, & V. de Paiva (Eds), Advances in Natural Deduction: A Celebration of Dag Prawitz's Work (pp. 31–50). Dordrecht: Springer.
Wideback Widebäck, F. (2001). Identity of Proofs. Stockholm: Almquist & Wiksell International.
Zucker Zucker, J. (1974). The correspondence between cut-elimination and normalization. Annals of Mathematical Logic, 7, 1–112.
|
http://arxiv.org/abs/2307.10992v1 | 20230712071927 | Modeling Motion Dynamics in Psychotherapy: a Dynamical Systems Approach | [
"Itai Dattner"
] | q-bio.NC | [
"q-bio.NC"
] |
Modeling Motion Dynamics in Psychotherapy: a Dynamical Systems Approach
Itai Dattner
Department of Statistics, University of Haifa
[email protected]
August 12, 2023
========================================================================================
This study introduces a novel mechanistic modeling and statistical framework for analyzing motion energy dynamics within psychotherapy sessions. We transform raw motion energy data into an interpretable narrative of therapist-patient interactions, thereby revealing unique insights into the nature of these dynamics. Our methodology is established through three detailed case studies, each shedding light on the complexities of dyadic interactions. A key component of our approach is an analysis spanning four years of one therapist's sessions, allowing us to distinguish between trait-like and state-like dynamics. This research represents a significant advancement in the quantitative understanding of motion dynamics in psychotherapy, with the potential to substantially influence both future research and therapeutic practice.
§ INTRODUCTION
Psychotherapy is a vital tool in the management and treatment of various mental health disorders, and its effectiveness is dependent on a multitude of factors <cit.>. Among these, the quality of the therapeutic alliance between the patient and the therapist is a crucial determinant of the treatment outcome <cit.>. Research in this domain has highlighted the significance of non-verbal cues, such as body language and facial expressions, as essential components of the therapeutic relationship <cit.>. The phenomenon of non-verbal synchrony, where the patient and therapist unconsciously mirror each other's movements, is of particular interest, as it is associated with positive therapeutic outcomes (<cit.>, <cit.>). Traditionally, such non-verbal information has been studied using observational methods, relying on human coders to evaluate the degree of movement similarity between the dyadic partners. These approaches are time-consuming, subjective, and prone to human error. The advent of motion capture technology and advanced computational techniques has paved the way for more reliable and objective assessments of non-verbal information. Motion energy analysis (MEA) is one such promising approach that quantifies the spatial and temporal patterns of movement in a dyad. The MEA software generates data that capture the movement patterns of participants in a dyadic interaction, such as those occurring during psychotherapy sessions. These data are obtained by processing video recordings of the sessions, where the software extracts and quantifies the motion energy present in the temporal sequences. The resulting output consists of time series data that represent the movement intensity for each individual in the dyad. Specifically,
frame-differencing algorithms quantify movement dynamics by measuring differences between consecutive frames in a sequence . These algorithms compare each frame to its predecessor and extract the differences based on the number and magnitude of pixel changes. While frame-differencing methods effectively quantify the degree of change over time, they do not capture the direction or form of movement, as they solely focus on the extent of change between frames.
In this study we use data obtained from MEA, which are publicly accessible as detailed in <cit.>. These data, thoroughly studied by <cit.>, allow us to implement and evaluate our innovative methods and inference framework. The data showcase therapist and patient movement patterns during sessions. Figure <ref> depicts a 45-minute segment of therapist and patient motion energy data, derived from the MEA software. This segment does not include the initial ten minutes of each session, which typically involve logistical discussions such as setting up video recording and discussing financial details. The substantive portion of the session usually begins with a question about the patient's motivation for seeking the appointment. The decision to exclude the first ten minutes ensures that the organizational aspects of the session are not part of the analysis, focusing instead on the psychotherapeutic interaction. The selected segment for analysis, therefore, spans from the 10th to the 55th minute of the session, providing a consistent timeframe for all sessions analyzed in the sequel. Motion energy time series such as the one displayed in the figure serve as the basis for our exploration of therapist and patient motion dynamics.
Although MEA has been applied in various contexts, including psychotherapy research , its full potential remains untapped, as current models do not fully capture the intricate motion dynamics within a dyad. There is a pressing need for novel mathematical models that can elucidate the complex interplay of motion dynamics and offer a more comprehensive understanding of the underlying mechanism.
We use the notation x_0(t)=(x_01(t),x_02(t))^⊤ for the continuous movement velocities of the therapist and patient, respectively; here ⊤ stands for the transpose of a vector. In what follows we use the terms 'motion energy' and 'velocity' interchangeably. Without loss of generality, we use t∈[0,1] to represent a full session of 45 minutes. Denote the vector of derivatives of x_0(t) w.r.t. t by
f_0(t)=(x^'_01(t),x^'_02(t))^⊤, t∈[0,1].
The scientific question posed in this work is essentially that of finding an adequate parametric description for f_0(·) defined in Equation (<ref>), one that expresses f_0(t) in terms of x_01(t) and x_02(t) and thereby describes the process mechanistically, in the sense that the current rate of change depends on the current state. In undertaking this study, we are cognizant that human motion within a psychotherapy session is an inherently complex phenomenon, intricately entwined with a multitude of psychological, emotional, and contextual factors. It is, therefore, important to acknowledge that any model we propose is an approximation of reality and, in this sense, is inherently misspecified. However, the goal of our work is not to propose a perfect model that captures all facets of motion dynamics, but rather to develop a useful model that affords us insights into the underlying mechanisms governing these dynamics.
The structure of the paper is as follows. Section 2 introduces the ordinary differential equation model and the corresponding mechanistic implications. Section 3 establishes the statistical framework, while an empirical analysis of three specific dyadic interactions is the topic of Section 4.
A key result is developed in Section 5 where we distinguish between trait and state characteristics in motion dynamics. Finally, Section 6 discusses findings and future research directions. This work aims to integrate mathematical modeling, statistics, and psychotherapy research to better understand and quantify motion dynamics in psychotherapy dyadic interactions.
§ MECHANISTIC MODELING OF MOTION DYNAMICS
The exploration of motion dynamics in psychotherapy interactions can be significantly enriched by considering not just velocity but also acceleration—the rate of change in velocity. The analysis of acceleration, a derivative of velocity, affords a detailed understanding of the temporal dynamics of motion, potentially revealing subtle alterations in movement patterns that are otherwise missed when focusing solely on velocity. For example, in the engineering domain, acceleration is used as an input to better control complex dynamics, see, e.g., <cit.>. Applying this understanding to psychotherapy, therapists' awareness of acceleration changes in their own and their patients' movements could serve as a novel 'control mechanism.' This could enable therapists to better guide the therapeutic alliance by mirroring or complementing their patients' non-verbal cues, thereby enhancing rapport and mutual understanding. Thus, we leverage dynamical systems theory to develop novel methodologies for assessing motion dynamics within psychotherapy dyadic interactions. We consider a parametric model given by a coupled system of ordinary differential equations (ODEs). The linear ODEs system is given by
x_1^'(t)=α x_1(t)+β x_2(t),
x_2^'(t)=γ x_1(t)+δ x_2(t),
which encodes the dynamics of a psychotherapy session. Here, the states x_1(t) and x_2(t) represent the therapist's and patient's motion energy levels, respectively. Their temporal derivatives, x_1^'(t) and x_2^'(t), capture the movements acceleration or deceleration. The ODEs are characterized by their ability to model interactions between multiple entities or processes, making them particularly suitable for capturing the complex interplay between therapist and patient during a psychotherapy session, see e.g., <cit.>.
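Before turning to the analytic solution, the system can be explored numerically with any standard ODE solver. The following Python sketch, with arbitrary illustrative parameter values (it is not the estimation code used later), integrates the system over a session rescaled to [0, 1].
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (hypothetical) parameter values and initial motion energy levels
alpha, beta, gamma, delta = -0.5, -1.5, 1.0, 0.3
A = np.array([[alpha, beta],
              [gamma, delta]])
x0 = [0.5, -0.5]

def rhs(t, x):
    # x = (x1, x2): therapist and patient motion energy levels
    return A @ x

t_eval = np.linspace(0.0, 1.0, 200)          # a full session rescaled to [0, 1]
sol = solve_ivp(rhs, (0.0, 1.0), x0, t_eval=t_eval)
x1_traj, x2_traj = sol.y                      # therapist and patient trajectories
print(x1_traj[-1], x2_traj[-1])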
The above coupled linear system of ODEs
has been thoroughly studied, and its qualitative properties, which include stability, periodicity, and sensitivity to initial conditions, among others, are well understood. In particular, the analytic solution to the system of ODEs (<ref>) can be obtained via matrix exponentiation:
[ x_1(t); x_2(t) ] = e^At[ x_1(0); x_2(0) ],
where A is the matrix of coefficients:
A = [ α β; γ δ ].
Here e^At is the matrix exponential, which can be computed using a series expansion or via eigendecomposition. The specific form of the solution depends on the eigenvalues and eigenvectors of A. The eigenvalues λ_1 and λ_2 of A are the solutions to the characteristic equation, which is given by det(A - λ I) = λ^2 - (α + δ) λ + (αδ - βγ) = 0; here I is the identity matrix. For instance, in case of real and distinct eigenvalues (λ_1 ≠λ_2) the general solution will be of the form
[ x_1(t); x_2(t) ] = c_1 e^λ_1 t𝐯_1 + c_2 e^λ_2 t𝐯_2,
where c_1 and c_2 are constants determined by the initial conditions, and 𝐯_1 and 𝐯_2 are the eigenvectors corresponding to λ_1 and λ_2, respectively. On the other hand, if the eigenvalues are complex, they will come in complex conjugate pairs, λ_1,2 = a ± bi, and the general solution will be of the form
[ x_1(t); x_2(t) ] = e^at (c_1 cos(bt) + c_2 sin(bt)) 𝐯.
Here c_1 and c_2 are constants determined by the initial conditions, and 𝐯 is the eigenvector corresponding to λ_1,2. In each case, the specific form of the solution and its behavior over time will depend on the values of the parameters α, β, γ, and δ. The parameter α is a self-damping/reinforcing term for the therapist, indicating the rate at which the therapist's motion energy tends to stabilize/de-stabilize, respectively. The coefficient α multiplies the term x_1(t) in the equation for x_1^'(t). As such, α x_1(t) describes the component of the therapist's motion energy change that depends solely on the therapist's current motion energy level. A negative/positive α suggests a damping/reinforcing effect, with the therapist's motion energy decelerating/accelerating as it increases. For instance, when α is negative this might represent a more reserved therapeutic style or a more structured therapeutic approach. The magnitude of α modulates this effect: a larger (negative) magnitude implies a quicker return to a stable state. The parameter δ serves a similar role for the patient, being the self-damping/reinforcing factor that governs how quickly the patient's motion energy tends to stabilize/de-stabilize.
On the other hand, the parameters β and γ reflect cross-influences between the therapist and patient. A positive β implies that an increase in the patient's motion energy tends to raise the therapist's motion energy, indicating a synchronous dynamic. Conversely, a negative β suggests a counterbalancing dynamic, where the patient's increased motion energy tends to decelerate the therapist's motion energy. The parameter γ mirrors this dynamic, but with the roles of the therapist and patient reversed.
For completeness, recall that the matrix exponential is defined by the convergent series
e^A = I + A + 1/2!A^2 + 1/3!A^3 + 1/4!A^4 + ⋯ = ∑_n=0^∞1/n!A^n,
where I is the identity matrix of the same size as A, A^n is the nth power of A, and n! is the factorial of n; the series converges for any square matrix A. A third case, not covered above, arises when the eigenvalues are real and equal (λ_1 = λ_2). If A is not diagonalizable, the general solution then takes the form
[ x_1(t); x_2(t) ] = e^λ_1 t( c_1 𝐯_1 + c_2 (t 𝐯_1 + 𝐰) ),
where c_1 and c_2 are constants determined by the initial conditions, 𝐯_1 is the eigenvector corresponding to λ_1, and 𝐰 is a generalized eigenvector satisfying (A - λ_1 I)𝐰 = 𝐯_1; if A is diagonalizable, so that A = λ_1 I, every trajectory is simply e^λ_1 t times its initial vector.
In our analysis, we also consider transformations of the model parameters into a set of ratios that represent the relative contribution of each factor within the psychotherapy session. This transformation allows for a more interpretable insight into the dyadic dynamics. The transformed parameters are:
* Therapist self-damping/reinforcing ratio (th_self): This is defined as the absolute value of α divided by the sum of the absolute values of all parameters. Mathematically, this is expressed as th_self=|α|/(|α|+|β|+|γ|+|δ|).
* Therapist interaction ratio (th_int): Similar to the damping/reinforcing ratio, the therapist interaction ratio is calculated as the absolute value of β divided by the sum of the absolute values of all parameters: th_int=|β|/(|α|+|β|+|γ|+|δ|).
* Patient interaction ratio (pa_int): The patient interaction ratio is the absolute value of γ divided by the sum of the absolute values of all parameters: pa_int=|γ|/(|α|+|β|+|γ|+|δ|).
* Patient self-damping/reinforcing ratio (pa_self): This ratio is calculated for the δ parameter: pa_self=|δ|/(|α|+|β|+|γ|+|δ|).
Incorporating these parameters and their respective ratio definitions, our model provides a nuanced understanding of psychotherapy dyadic interactions. It accounts not just for the motion energy levels but also for their rates of change - acceleration and deceleration. The four ratio parameters allow us to quantify the specific contribution of each participant's motion dynamics to the overall session. With these parameters and ratios, we can better understand and interpret the motion dynamics and non-verbal synchrony observed in therapist-patient interactions.
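The mapping from raw coefficients to these ratios is a one-line computation; a minimal Python helper (with hypothetical parameter values) is given below.
import numpy as np

def contribution_ratios(alpha, beta, gamma, delta):
    """Relative contribution of each term, as defined above."""
    mags = np.abs([alpha, beta, gamma, delta])
    th_self, th_int, pa_int, pa_self = mags / mags.sum()
    return {"th_self": th_self, "th_int": th_int,
            "pa_int": pa_int, "pa_self": pa_self}

# Hypothetical estimates, for illustration only
print(contribution_ratios(-0.4, -0.5, 0.45, 0.45))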
As mentioned above, model misspecification is an important consideration in statistical analysis. In our study, we use a system of ordinary differential equations to model the dynamics of motion energy within psychotherapy sessions. If the solution of this ODE system precisely represents the true motion velocities, then the model is well specified. However, if the solution only approximates the true velocities, then the model is, in fact, misspecified. Under such circumstances, the parameters of the ODE system, say θ:=(α,β,γ,δ)^⊤ and initial values ξ:=(x_1(0),x_2(0))^⊤, do not necessarily correspond to the "true" parameters but rather those giving rise to solutions that are closest to the true velocities in the following sense:
(θ,ξ):= arg min_θ∈Θ,ξ∈Ξ∫_0^1 || x(θ,ξ;t)-x_0(t) ||^2 t,
where ∥·∥ denotes the Euclidean norm.
This interpretation is based on the work of <cit.> on model misspecification. In the presence of model misspecification, the parameters of the ODEs are still interpretable, but their interpretations may differ from the case where the model is correctly specified. This caveat should be borne in mind when interpreting the estimated parameters and the model's insights. However, the potential for model misspecification does not undermine the utility of our approach. It merely provides a reminder of the need for careful interpretation of the results and the context-dependent nature of the parameter estimates.
Following this general overview of the modeling approach, we will delve deeper into a specific property of these systems: sensitivity to model parameters. This sensitivity analysis offers valuable insights into the responsiveness of the model to changes in the parameters, helping us to assess the robustness of our findings and their implications for psychotherapy research.
§.§ Sensitivity Analysis
Here we conduct a small sensitivity analysis. This analysis measures the influence of each parameter on the model output, providing insights into which parameters have the greatest impact on the system dynamics.
To illustrate, let's consider a simulation scenario where we run the ODE model for 100 time points, evenly spaced from 0 to 10. The initial conditions for the motion energy levels of the therapist and patient (x_1 and x_2) are set at 1. The parameter values are set to α = 0.5, β = 0.5, γ = 0.5, and δ = 0.5. In this scenario, we compute the sensitivities of the maximum value of x_1 to a 10% increase in each of the parameters. The results are as follows:
Sensitivities: α = 0.32, β = 0.31, γ = 0.25, δ = 0.26.
These values can be interpreted as follows: a 10% increase in the value of α will lead to approximately a 32% change in the maximum value of x_1; a 10% increase in the value of β will lead to approximately a 31% change in the maximum value of x_1; a 10% increase in the value of γ will lead to approximately a 25% change in the maximum value of x_1; a 10% increase in the value of δ will lead to approximately a 26% change in the maximum value of x_1.
In this case, the output x_1 is most sensitive to changes in α and β, and less sensitive to changes in γ and δ. This indicates that changes in the therapist's self-damping factor and the influence of the patient's motion on the therapist will have a relatively larger impact on the system dynamics than changes in the influence of the therapist's motion on the patient and the patient's self-damping factor.
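The computation behind these numbers can be sketched as follows. We interpret the sensitivity as the relative change in the maximum of x_1 induced by a 10% increase in a single parameter; with that reading, the following Python sketch reproduces values of the reported magnitude (up to solver and grid details).
import numpy as np
from scipy.integrate import solve_ivp

def max_x1(params, x0=(1.0, 1.0), t_end=10.0, n_points=100):
    A = np.array(params, dtype=float).reshape(2, 2)   # rows: (alpha, beta) and (gamma, delta)
    sol = solve_ivp(lambda t, x: A @ x, (0.0, t_end), x0,
                    t_eval=np.linspace(0.0, t_end, n_points))
    return sol.y[0].max()

base = [0.5, 0.5, 0.5, 0.5]                 # alpha, beta, gamma, delta
base_max = max_x1(base)

for i, name in enumerate(["alpha", "beta", "gamma", "delta"]):
    perturbed = base.copy()
    perturbed[i] *= 1.10                    # 10% increase in one parameter at a time
    rel_change = (max_x1(perturbed) - base_max) / base_max
    print(f"{name}: {rel_change:.2f}")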
However, it's important to note that sensitivity analysis is a local method and assumes that the effects of changes in different parameters are independent. This may not hold true if there are interactions between the parameters, and the sensitivities could be different for other values of the parameters.
§ STATISTICAL INFERENCE FRAMEWORK
The observed time series of motion energy consist of positive values over a session as displayed in Figure <ref>. The data for our study were sourced from video-recorded intake interviews at a psychotherapy clinic in Bern, Switzerland <cit.>. Each interview lasted between 60 to 90 minutes, with a focus on understanding the patient's reasons for seeking therapy and their personal life history. To ensure consistency in our analysis, we omitted the initial ten minutes of each session, typically spent on logistics, and confined our analysis to a 45-minute segment starting from the 10th minute.
§.§ Statistical Model
We split a session into ten equidistant segments and summarize each segment by the mean motion energy values denoted by Y_j(t_i), i=1,...,n, j=1,2 (in our case n=10). This is a noisy version of the underlying mean of the true motion energy process denoted above by x_0j(t_i). After standardization of these values we consider the statistical model
Y_j(t_i)=x_0j(t_i)+ϵ_ij, i=1,…,n j=1,2,
where Y_j(t_i) is a scalar random variable, t_1,…,t_n are deterministic distinct design points; and the unobserved random variables ϵ_ij are independent measurement errors having zero expectation and finite variance. With a slight abuse of notation that simplifies the presentation, x_0j in the observation model now denotes the standardized underlying process.
The choice to segment psychotherapy sessions into ten equidistant periods for model fitting was informed by existing practices in the field. For instance, in the psychotherapy research literature, it is common for human coders to divide sessions into five-minute segments, assigning various clinical labels to each of these segments based on the observed dynamics (see, e.g., <cit.>). Here we work with 45 minutes so each segment is of 4.5 minutes. This approach allows for a more granular understanding of the therapeutic process, capturing the evolving nature of therapist-patient interaction and potential shifts in clinical dynamics throughout the session. Our type of motion energy data are inherently noisy, and by taking averages over short intervals, we ensure that our estimates better represent the underlying process. This technique reduces the impact of momentary fluctuations and enhances the reliability of our parameter estimates, despite the noise in the raw data. We have also studied the finite sample properties of our method. The Monte Carlo simulation results, as seen in Table 1 and Table 2, suggest that our choice of ten segments is not only practical but also statistically reliable, even in the face of measurement error.
By aligning our model fitting process with this established method of data organization, we position our modeling approach to readily incorporate these clinically-relevant labels whenever they become available. This ensures that our model, based on motion energy dynamics, and the clinical labels share the same temporal scale, facilitating a more integrated and clinically nuanced analysis, providing a potentially valuable tool for understanding and interpreting therapeutic processes.
§.§ Parameter Estimation
The next crucial step in our analysis is parameter estimation for our system of ordinary differential equations. This task forms the heart of the mechanistic modeling process, as it enables us to quantitatively characterize the dynamics of therapist-patient interactions. Recent advancements in the field have provided a suite of techniques for ODE parameter estimation, as comprehensively reviewed in <cit.>.
Let θ:=(α,β,γ,δ)^⊤ and note that the ODEs are equivalent to the integral equations
x(t)=ξ + ∫_0^t g(x(s)) s θ, t∈[0,1],
where ξ=(ξ_1,ξ_2)^⊤ stands for the initial values of the system x(0)=(x_1(0),x_2(0))^⊤, and the matrix g is given by
g(x) = [ x_1 x_2 0 0; 0 0 x_1 x_2 ].
Let
G(t) = ∫_0^t g(x(s)) s, t ∈ [0,1],
A = ∫_0^1 G(t) t, B= ∫_0^1 G^T(t) G(t) t.
<cit.> show that if B is nonsingular then I - A B^-1A^T is nonsingular as well, and
ξ = (I - A B^-1A^T)^-1∫_0^1 (I - A B^-1 G^T(t)) x(t) t,
θ = B^-1∫_0^1 G^T(t) ( x(t) -ξ) t
hold; here I denotes the 2 × 2 identity matrix. Moreover, they show that if x(t) determines θ, then B is nonsingular. This provides necessary and sufficient conditions for identifiability of θ.
Note that the system of ODEs representing the motion dynamics within a psychotherapy session is linear in the parameters. This property is instrumental in the estimation process. Although an analytic solution exists for the ODEs, as described above, a natural route to parameter estimation is to minimize the distance between the observations and the ODE solution, which leads to a nonlinear least squares problem. This can be circumvented, however, by adopting the direct integral approach proposed by <cit.>; see also <cit.>. The direct integral approach provides a more accurate and computationally efficient mechanism for estimating parameters of ODEs that are linear in (functions of) the parameters. The aforementioned works have demonstrated the robustness and efficiency of this approach, making it a valuable tool for our analysis.
In order to estimate the parameter θ the observations are first smoothed, which
results in an estimator x̂_n(·) for the solution
x(·;θ,ξ) of the system, and by differentiation in the
estimator x̂_n^'(·) for
x^'(·;θ,ξ). Then in view of Equation (<ref>) we estimate the parameters θ and ξ by minimizing
∫_0^1 || x̂_n(t)-ζ-∫_0^tg(x̂_n(s)) s η||^2 t.
Denote
Ĝ_n(t) = ∫_0^t g(x̂_n(s)) s , t ∈ [0,1],
Â_n = ∫_0^1 Ĝ_n(t) t,
B̂_n = ∫_0^1 Ĝ_n^⊤ (t) Ĝ_n(t) t.
Minimizing the criterion function (<ref>) with respect to ζ and η results in the direct estimators
ξ̂_n = (I - Â_n B̂_n^-1Â_n^⊤)^-1∫_0^1 (I - Â_n B̂_n^-1Ĝ_n^⊤ (t)) x̂_n(t) t,
θ̂_n = B̂_n^-1∫_0^1 Ĝ_n^⊤ (t) ( x̂_n(t) -ξ̂_n ) t.
<cit.> present conditions that guarantee √(n)-consistency of the estimators ξ̂_n and θ̂_n.
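For concreteness, a schematic Python implementation of these two displays is given below; the smoothing step is a generic smoothing spline and the integrals are evaluated by the trapezoidal rule. The actual analysis in this paper relies on the simode package in R, so this sketch only illustrates the structure of the direct estimators.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.integrate import cumulative_trapezoid, trapezoid

def direct_integral_fit(t_obs, y_obs, grid_size=401, smoothing=None):
    """Direct integral estimators for the linear system above.

    t_obs: observation times; y_obs: (n, 2) noisy standardized segment means.
    Returns (xi_hat, theta_hat) with theta = (alpha, beta, gamma, delta).
    """
    tgrid = np.linspace(t_obs.min(), t_obs.max(), grid_size)
    # Step 1: nonparametric smoothing of each observed trajectory
    xhat = np.column_stack(
        [UnivariateSpline(t_obs, y_obs[:, j], s=smoothing)(tgrid) for j in range(2)]
    )
    # Step 2: G_hat(t) = int_0^t g(xhat(u)) du with g(x) = [[x1, x2, 0, 0], [0, 0, x1, x2]]
    cum = cumulative_trapezoid(xhat, tgrid, axis=0, initial=0.0)
    G = np.zeros((grid_size, 2, 4))
    G[:, 0, :2] = cum
    G[:, 1, 2:] = cum
    # Step 3: A_hat (2 x 4) and B_hat (4 x 4) as time integrals
    A_hat = trapezoid(G, tgrid, axis=0)
    B_hat = trapezoid(np.einsum("tij,tik->tjk", G, G), tgrid, axis=0)
    B_inv = np.linalg.inv(B_hat)
    # Step 4: xi_hat = (I - A B^{-1} A^T)^{-1} int (I - A B^{-1} G(t)^T) xhat(t) dt
    Gt_x = np.einsum("tij,ti->tj", G, xhat)            # G(t)^T xhat(t)
    v = trapezoid(xhat - Gt_x @ (A_hat @ B_inv).T, tgrid, axis=0)
    xi_hat = np.linalg.solve(np.eye(2) - A_hat @ B_inv @ A_hat.T, v)
    # Step 5: theta_hat = B^{-1} int G(t)^T (xhat(t) - xi_hat) dt
    theta_hat = B_inv @ trapezoid(
        np.einsum("tij,ti->tj", G, xhat - xi_hat), tgrid, axis=0
    )
    return xi_hat, theta_hat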
§.§ Finite Sample Properties
Following the introduction of our estimation method, we now turn our attention to assessing the finite sample properties of our estimators. Given the structure of our data, with each psychotherapy session being divided into ten equidistant segments, it becomes crucial to understand how our estimators behave under this finite sample scenario. Thus, Monte Carlo simulations are integral to our study, offering invaluable insights into the distributional properties of our estimators under controlled conditions. This small numerical study aims to ensure our findings' reliability and validity, laying a solid groundwork for future psychotherapy research applications of this methodology. However, it's a targeted inquiry driven by our modeling approach's practical considerations, not a comprehensive exploration of the estimator’s properties. More comprehensive Monte Carlo studies and a thorough theoretical investigation of the estimators defined in Equations (<ref>)-(<ref>) have been conducted in the foundational work by <cit.>. For implementation we employ the `simode` package in R developed by <cit.>. The `simode` package is particularly well-suited for our needs, as it is designed to handle ODE models that are linear in their parameters, which aligns perfectly with the structure of our model. This package utilizes state-of-the-art techniques to reliably estimate model parameters, even in complex settings. Using `simode`, we can extract meaningful parameter values from our preprocessed motion energy data, thereby providing a quantitative foundation for understanding the dyadic dynamics within psychotherapy sessions.
Guided by the real data analysis presented in the sequel we study the case of a sample size of n=10. We set the parameters to α=-0.5,β= -1.5,γ= 1,δ= 0.3. The initial conditions for the ODEs were set at 0.5, -0.5 for the therapist and patient, respectively. We added Gaussian noise to the ODE solutions to simulate measurement errors at two levels, 20% and 50% of the mean value of the solution. The time interval for the simulation ranged from 1 to 10, and the simulations were repeated 100 times.
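One replication of this design can be sketched in Python as follows; the actual simulations were run with the simode package in R, and interpreting the noise level as a per-coordinate standard deviation equal to the stated percentage of the mean trajectory level is an assumption of this sketch. The resulting (t_obs, y_obs) could then be fed to an estimator such as the direct_integral_fit sketch above.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
alpha, beta, gamma, delta = -0.5, -1.5, 1.0, 0.3
A = np.array([[alpha, beta], [gamma, delta]])
x0 = [0.5, -0.5]
t_obs = np.linspace(1.0, 10.0, 10)                  # n = 10 design points

sol = solve_ivp(lambda t, x: A @ x, (t_obs[0], t_obs[-1]), x0, t_eval=t_obs)
x_true = sol.y.T                                     # (10, 2) noise-free solution

noise_level = 0.20                                   # 20% scenario; use 0.50 for 50%
sigma = noise_level * np.abs(x_true.mean(axis=0))
y_obs = x_true + rng.normal(scale=sigma, size=x_true.shape)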
In the case of a 20% measurement error, as shown in Table <ref>, the estimated means for the therapist and patient parameters come remarkably close to the truth, providing excellent results. The variance is also suitably managed, ensuring reliable estimations. The estimation of the damping/reinforcing and interaction ratios is particularly accurate, demonstrating the robustness of our method.
When the measurement error is increased to 50%, the results, although somewhat affected, remain promising. As Table <ref> illustrates, even with a larger measurement error and a small sample size of 10, the estimated means for parameters are close to the truth. The variance, while present, is not prohibitive for reliable estimation. The estimation of the self-damping/reinforcing and interaction ratios remains reasonably accurate, further underlining the efficacy of our approach. These results illuminate the potential of our method to consistently estimate these pivotal ratios, providing a more interpretable and significant understanding of the dynamics in psychotherapy.
§.§ Structural Identifiability Analysis
Structural identifiability pertains to whether the parameters of a model can be uniquely determined from perfect, noise-free data.
For the linear ODE system modeling the psychotherapy dyadic interaction, a basic method to determine structural identifiability is to examine the singularity of the matrix of coefficients. In our case, this is the matrix A given by:
A = [ α β; γ δ ].
The model is structurally identifiable (both locally and globally) if the matrix A is non-singular, i.e., its determinant is non-zero. If the determinant is zero, the matrix A is singular, and the model is not structurally identifiable.
Consider the case where α=δ=1 and β=γ=-1. In this scenario, the determinant of the resulting matrix A is:
det(A) = α·δ - β·γ = 1 · 1 - (-1) · (-1) = 0.
Hence, the matrix A is singular, implying that the model is not structurally identifiable in this instance. In a mechanistic interpretation, this means that given perfect noise-free data, we would not be able to discern whether the observed motion dynamics are due to the therapist's and patient's self-damping/reinforcing factors (α and δ) or due to their influence on each other (β and γ).
Furthermore, the eigenvalues of matrix A can provide further insight into the dynamics of the system. Indeed, the coupled system of linear ordinary differential equations considered above is well-studied, and its qualitative behavior is well-understood in terms of the nature of the eigenvalues of the coefficient matrix (<cit.>). This can be written in matrix form as X' = AX, where X = [x_1, x_2]^T. The qualitative behavior of the system can be determined from the eigenvalues of the matrix A. The eigenvalues are the roots of the characteristic equation, which is given by
det(A - λ I) = 0,
where I is the identity matrix and λ are the eigenvalues. For this 2x2 matrix, this simplifies to
λ^2 - (α + δ) λ + (αδ - βγ) = 0.
The roots of this equation, λ_1,2, determine the qualitative behavior of the system:
Real and distinct eigenvalues: This situation occurs when the parameters are set such that the eigenvalues of the matrix are real and distinct. For instance, consider α = -1, β = 2, γ = 1, and δ = -1. This configuration leads to real and distinct eigenvalues. If the eigenvalues are negative, the unique equilibrium point at the origin could represent a state of mutual calmness or rest, to which both individuals naturally tend. This might be indicative of a functional therapist-patient relationship where both parties are capable of returning to a calm state after periods of increased activity or agitation. However, if at least one eigenvalue is positive (as in the provided example, whose eigenvalues are -1 ± √2, one of which is positive), the system is unstable and the behaviors (motion energy) of the individuals do not return to a resting or stable state but instead amplify over time. This could be interpreted as a runaway interaction, where the energy of one individual fuels the energy of the other, leading to an escalating situation.
In the context of the model, a negative α could represent a therapist who is very quick to return to a resting state after a period of activity, while a negative δ could represent a patient with a similar tendency. The positive β and γ values indicate that an increase in the motion energy of one individual leads to an increase in the motion energy of the other individual. When both eigenvalues are negative, this suggests that despite these influences, both the therapist and the patient are able to return to a resting state, indicating a balanced and potentially effective therapeutic relationship.
Real and equal eigenvalues: This situation might occur when the parameters are set such that α = δ and β = γ. For example, if α = δ = -1 and β = γ = 0, the eigenvalues are real and equal. This might correspond to a psychotherapeutic situation where the therapist and patient have an identical influence on each other and their energy levels tend to stabilize at the same rate. The unique equilibrium point at the origin could again represent a state of mutual calmness or rest.
Complex conjugate eigenvalues: This situation occurs when the discriminant of the characteristic equation is negative, that is, when (α - δ)^2 + 4βγ < 0. For instance, consider α = δ = 1 and β = -2, γ = 2, for which (α - δ)^2 + 4βγ = -16 < 0. This configuration leads to complex conjugate eigenvalues. In a psychotherapy context, this might reflect a more dynamic interplay between therapist and patient, with their energy levels oscillating in a complex manner around a state of equilibrium. The spirals or ellipses in the phase plane could represent cycles of activity and calmness that the therapist and patient go through during their interactions.
In all these cases, the specific interpretations depend on the context and the exact meanings assigned to the motion energy levels and the parameters of the model. These are just possible interpretations based on the mathematical properties of the system.
Note that these analyses assume that the system is linear and time-invariant. If the coefficients α, β, γ, and δ are functions of time or the system is nonlinear, the behavior can be much more complex.
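The singularity check and the classification of regimes can be automated; the Python sketch below applies both to the example parameter configurations discussed in this subsection.
import numpy as np

def classify(alpha, beta, gamma, delta, tol=1e-9):
    A = np.array([[alpha, beta], [gamma, delta]], dtype=float)
    identifiable = abs(np.linalg.det(A)) > tol       # non-singular coefficient matrix
    eig = np.linalg.eigvals(A)
    if np.abs(np.imag(eig)).max() > tol:
        regime = "complex conjugate eigenvalues (oscillatory dynamics)"
    elif abs(np.real(eig[0]) - np.real(eig[1])) < tol:
        regime = "real and equal eigenvalues"
    else:
        regime = "real and distinct eigenvalues"
    return identifiable, regime, eig

examples = {
    "singular (alpha=delta=1, beta=gamma=-1)": (1, -1, -1, 1),
    "real and distinct": (-1, 2, 1, -1),
    "real and equal": (-1, 0, 0, -1),
    "complex conjugate": (1, -2, 2, 1),
}
for label, pars in examples.items():
    print(label, classify(*pars))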
As noted above, the eigenvalues λ_1 and λ_2 are the roots of det(A - λ I) = 0, where I is the identity matrix. The real parts of the eigenvalues determine the stability of the system: if all eigenvalues have negative real parts, the system is stable, implying that the motion energy of the dyad will converge to a stable equilibrium over time. Conversely, if any eigenvalue has a positive real part, the system is unstable, indicating that the motion energy may diverge over time. The imaginary parts of the eigenvalues, on the other hand, signify oscillatory behavior in the system's dynamics.
This signifies the need for careful parameter estimation and model validation to ensure the robustness and utility of the model in capturing the intricacies of psychotherapy dyadic interactions.
§ EMPIRICAL ANALYSIS OF DYADIC INTERACTIONS IN PSYCHOTHERAPY SESSIONS USING A MECHANISTIC MODEL
In this section we apply our proposed mechanistic model to real-world data obtained from psychotherapy sessions. This empirical analysis aims to elucidate the dynamic interactions between therapists and patients during these sessions, providing a novel quantitative perspective on the psychotherapeutic process. The motion energy data, derived from video recordings of the sessions, serves as a proxy for the movements velocities of the involved individuals. By fitting our model to these data, we seek to capture the inherent dyadic dynamics and provide a robust framework for exploring various hypotheses related to psychotherapy practice. Herein, we present the results of this empirical analysis, highlighting key findings and interpreting them in the context of psychotherapeutic interaction.
In our analysis of motion energy data, we specifically focused on the regions of interest (ROI) corresponding to the heads of the therapist and the patient. This decision was based on the capabilities of the software used to capture and process the motion energy data, which allows for the definition of specific ROIs. By concentrating on the head regions, we aimed to capture a significant portion of the expressive behavior and nonverbal communication cues often crucial in psychotherapy sessions. This includes various head movements and postures which can convey agreement, attention, emotion, and other psychological states. Therefore, the data derived from these ROIs present a rich source of information for understanding the dynamic interaction between the therapist and patient.
§.§ Data Preprocessing
As a preliminary step in the data processing pipeline, each psychotherapy session is partitioned into ten equal segments from reasons detailed above. We then calculate the mean motion energy for each of these segments, which provides a coarse-grained representation of the activity levels throughout the session. To account for potential differences in baseline activity levels across different sessions or individuals, these mean motion energy values are standardized. This ensures that the values for each segment reflect deviations from the average activity level, rather than absolute measures of motion energy. Thus, through this preprocessing pipeline, we ensure that the dataset is adequately prepared for the application of the mechanistic model, allowing us to capture the essential dynamics of therapist-patient interactions within each session.
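A compact Python version of this pipeline might look as follows; the per-session, per-individual standardization used here reflects our reading of the procedure and should be regarded as an assumption of the sketch.
import numpy as np

def preprocess_session(motion_energy, n_segments=10):
    """motion_energy: (T, 2) array of raw therapist/patient motion energy frames
    for the 45-minute analysis window; returns standardized segment means."""
    segments = np.array_split(motion_energy, n_segments, axis=0)
    seg_means = np.vstack([seg.mean(axis=0) for seg in segments])
    # Standardize each series so values are deviations from the session's own level
    return (seg_means - seg_means.mean(axis=0)) / seg_means.std(axis=0)

# Example with simulated raw frames (arbitrary length, for illustration only)
raw = np.abs(np.random.default_rng(0).normal(size=(27_000, 2)))
print(preprocess_session(raw).shape)   # (10, 2)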
§.§ Case Studies: In-Depth Analysis of Single Psychotherapy Sessions
Psychotherapy sessions are complex, involving myriad subtle interactions and dynamics, and the intent of the following case studies is to illuminate these complexities in a way that brings our analytical approach to life. By focusing on individual sessions, we can draw out the nuances of our model and elucidate the meaningful interpretations of the parameters and ratios in the specific context of psychotherapy. This process provides the reader with a deeper understanding of our approach, laying the groundwork for the subsequent large-scale data analysis. The case studies, therefore, serve as a bridge, translating the abstract methodology into a practical framework for analysis, and setting the stage for the broader investigation that follows in the next section.
In the first case study we analyze Patient ID 115067. The parameter estimates are α̂ = -0.433, β̂ = -0.478, γ̂ = 0.442, and δ̂ = 0.452. Here α and β have negative estimates, and the eigenvalues are complex conjugates, which suggests an oscillatory pattern in the interaction dynamics, as can be seen in Figure <ref>.
We also provide confidence intervals for the parameters using profile likelihood, as generated by the 'simode' package.
The 95% confidence intervals for the parameter estimates are provided in Table <ref>. Notably, all the parameters have confidence intervals that exclude zero, indicating that they are statistically significant. The intervals are relatively narrow, reflecting a high degree of precision in these estimates. This suggests that the parameters are identifiable under the conditions of this analysis.
Given the estimated parameters, the derived ratios are t̂ĥ_self = 0.24, t̂ĥ_int = 0.26, p̂â_int = 0.25, p̂â_self = 0.25. The ratios indicate a highly balanced therapist-patient interaction in terms of both input and output energy. The therapist and patient are equally active in influencing the overall dynamics of the session, and equally receptive to the other's motions. This shows a strong mutual influence and engagement within the session, which can be indicative of a highly collaborative therapeutic process.
Analyzing the second case study, Patient ID 117022, the parameter estimates were calculated as α̂ = 0.426, β̂ = 0.484, γ̂ = -0.313, and δ̂ = -0.349. The 95% confidence intervals for the parameter estimates are displayed in Table <ref>. Parameters α and β have positive estimates, indicating a positive effect in the corresponding variables. The parameter δ is negative, reflecting a damping or regulatory effect. The small differences between the lower and upper bounds of the confidence intervals suggest a high degree of precision in the estimation of these parameters, reinforcing the reliability of our model. Based on the estimated values, the derived ratios are t̂ĥ_self = 0.27, t̂ĥ_int = 0.30, p̂â_int = 0.20, p̂â_self = 0.23.
Unlike the previous case study, here the eigenvalues are real and distinct, indicating a non-oscillatory, exponential pattern in the interaction dynamics, see Figure <ref>.
Last, we analyze the session of Patient ID 117105. The model fitting procedure yielded parameter estimates of α̂=-0.094, β̂=-0.323, γ̂=0.221, and δ̂=0.063.
The 95% confidence intervals for the parameter estimates are displayed in Table <ref>. The parameters α and β have negative estimates, while γ and δ are positive. The confidence intervals here seem to be wider than in previous case studies. Furthermore, the confidence interval for δ includes zero, implying the parameter is not statistically significant at the 0.05 level. This might suggest that the damping effect of the patient is not as influential as other dynamics in this particular model.
The corresponding ratio values are t̂ĥ_self=0.14, t̂ĥ_int=0.46, p̂â_int=0.31, and p̂â_self=0.09. The eigenvalues in this case are complex conjugates, which suggests an oscillatory pattern in the dynamics of motion energy. Upon examining the time series for the therapist and patient in Figure <ref>, clear oscillations can be observed with alternating periods of high and low activity. A close inspection of the oscillatory pattern suggests that the therapist's movements often precede the patient's, indicative of a leader-follower dynamic. The patient's responses seem to be influenced by the therapist's actions, suggesting a reactive role. This observation aligns with the calculated interaction ratios which is higher for the therapist. A plausible interpretation is that the patient is responding more to the therapist's cues rather than initiating interactions.
It is important to emphasize that the above analysis and interpretations are based solely on motion energy data and should be considered with caution. The precise content or context of the therapy session was not taken into account, and further research is necessary to fully understand the complex interplay between these motion dynamics and other factors influencing the therapeutic process.
In the first of two further sessions analyzed, that of Patient ID 113138 (see Figure <ref>), the estimated parameters were α̂ = 1.13, β̂ = -0.61, γ̂ = 3.45, and δ̂ = -1.06. These estimates yielded therapist self and interaction ratios of 0.18 and 0.10 respectively, and patient self and interaction ratios of 0.55 and 0.17. The eigenvalues for this set of parameters were complex conjugate, indicating a cyclical, oscillatory dyadic interaction during the session.
The therapist's interaction and self factors together make up less than 30% of the total interaction within the session. This suggests that the therapist adopted a non-directive stance, allowing the patient to lead the dynamics of the session. The patient's factors accounted for over 70% of the interaction, indicating that the patient was highly active in the session, both responding to the therapist and initiating interactions.
The complex conjugate eigenvalues suggest that the patient and therapist were engaged in a cyclical interaction, each responding to the other in a repeating pattern. This might be indicative of a session where there was a good rapport and understanding between the therapist and patient, with each party taking turns to lead and follow in the conversation. It might also suggest that the session was characterized by recurrent topics or emotional states. However, more detailed content analysis would be necessary to confirm these interpretations.
In the second of these sessions, that of Patient ID 114164 (see Figure <ref>), the estimated parameters were α̂ = -0.11, β̂ = 3.75, γ̂ = -0.08, and δ̂ = -1.45. These estimates yielded therapist self and interaction ratios of 0.02 and 0.70 respectively, and patient self and interaction ratios of 0.02 and 0.27. The eigenvalues for this set of parameters were real and distinct, indicating a dyadic interaction that is not cyclical, but instead tends to a stable equilibrium.
The therapist's interaction factor is notably high, accounting for 70% of the total interaction within the session. This suggests a directive approach from the therapist, taking a leading role in driving the dynamics of the session. The therapist's self factor is very low, suggesting minimal efforts to dampen or reduce the patient's emotional intensity. The patient's factors accounted for less than 30% of the interaction, suggesting a more passive role in the session, mainly responding to the therapist's prompts and interventions.
The real and distinct eigenvalues indicate a stable equilibrium in the dyadic interaction. This could suggest a session where the patient and therapist quickly found a comfortable rhythm of interaction, with the therapist taking the lead and the patient responding. It may also suggest a less varied session, with fewer shifts in topic or emotional state. Further qualitative analysis of the session content could provide additional insights into these dynamics.
§.§ Exploring Dyadic Dynamics Across Multiple Sessions
In this part of our research we broaden our lens to examine the dynamics of psychotherapy across a multitude of sessions. This expansive analysis allows us to tackle the three main research questions of our study: the comparison of novice versus experienced therapists, the exploration of potential gender differences, and the investigation of trait-state dynamics over time. This comprehensive analysis presents an opportunity to delve into the intricacies of dyadic interactions in therapy and their potential variability based on factors such as therapist experience, gender dynamics, and time. By scrutinizing these aspects across a larger dataset, we aim to uncover meaningful patterns and insights that could contribute to our understanding of psychotherapy dynamics and potentially guide future therapeutic practices.
§.§.§ Novice vs Experienced Therapists
The mean interaction ratio for the experienced therapist was 0.29, while the mean for the novice therapist was 0.40.
This statistical analysis was performed to compare the therapist motion energy ratios between the experienced therapist and the novice therapist. A two-sample t-test was conducted to determine if there was a significant difference in the mean ratios between the two therapist groups. The t-test resulted in a p-value of 0.06. The p-value is just above the conventional significance threshold of 0.05, suggesting that the difference in means between the two groups is not statistically significant at the 5% level, but it is at the 10% level.
§.§.§ Gender Differences
This analysis aimed to compare the "self" motion energy ratios (th_self) between the experienced therapist (thA, n=100 sessions) and the novice therapist (thX, n=20 sessions). The Welch Two Sample t-test was employed to determine if there is a statistically significant difference in the mean "self" ratios between the two therapist groups.
The t-test resulted in a t-value of 2.005 and a p-value of 0.05338. The p-value is slightly above the conventional significance threshold of 0.05, suggesting that the difference in means between the two groups is not statistically significant at the 5% level, but is significant at the 10% level.
In conclusion, although there is some evidence of a difference in the "self" motion energy ratios between the experienced and novice therapists, this difference is not statistically significant at the conventional 5% level. Further data may be required to draw more definitive conclusions.
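Both of the comparisons above follow the same recipe; a minimal sketch using Welch's two-sample t-test, with placeholder arrays standing in for the per-session ratios (the generated numbers are illustrative, not the study data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ratios_experienced = rng.normal(0.29, 0.10, size=100)  # placeholder for therapist thA's per-session ratios
ratios_novice = rng.normal(0.40, 0.12, size=20)        # placeholder for therapist thX's per-session ratios

# Welch's two-sample t-test (unequal variances).
t_stat, p_value = stats.ttest_ind(ratios_experienced, ratios_novice, equal_var=False)
print(t_stat, p_value)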
§ DISTINGUISHING BETWEEN TRAIT AND STATE CHARACTERISTICS IN MOTION DYNAMICS
In two recent papers, <cit.> and <cit.> argue for the critical role of distinguishing between trait-like (stable) and state-like (dynamic) aspects of psychotherapy. They propose that this distinction provides a more personalized approach to psychotherapy, contributing to a deeper understanding of therapeutic change and the patient-therapist alliance. These works underline the necessity of considering both enduring attributes and momentary states when examining psychotherapeutic processes. They provide a significant motivation for the following analysis, which aims to explore trait-like and state-like characteristics within the dynamics of patient
and therapist motion energy. Specifically, we propose that the self-damping/reinforcing ratio parameters may represent trait-like characteristics of therapists and patients, while the interaction parameters may reflect state-like features of the therapeutic process. We analyzed the sessions of an experienced therapist spanning the years 2015-2018, finding compelling evidence for this proposed distinction. Notably, the therapist in question remained the same individual throughout this period, providing a consistent reference point for our analysis.
§.§ Self-damping/reinforcing ratio
In Figure <ref> we can see that the therapist's self-damping/reinforcing parameter remained stable throughout this period, with a small positive trend towards the end of the period, possibly due to an outlier. Overall, this parameter hovers around the value 0.2, namely about 20% of the session dynamics. A linear regression model showed no significant trend, indicating that this parameter, and by implication the trait it represents, remains consistent over time. This suggests that certain aspects of a therapist's non-verbal communication style, such as his natural rhythm, may remain relatively fixed over time, functioning as a kind of therapeutic 'signature'.
Further evidence for this distinction comes from the analysis of the patient's parameters over time, as displayed in Figure <ref>. It is important to note that, unlike the therapist, the patients are different individuals, as the data is drawn solely from intake interviews. Despite this variability, the patient's self parameter showed no significant trend over time. The loess smoothing line oscillates around a value of 0.2, again about 20% of the session dynamics, suggesting that this variability may be attributed to differences between patients, rather than a directional shift over time. This aligns with the notion that self-damping/reinforcing might represent a trait characteristic, and underscores the complexity of individual differences in psychotherapy dynamics.
§.§ Interaction ratio
We now analyze the interaction ratio of both therapist and patient. The therapist's interaction parameter showed a significant positive trend (p-value = 0.005), while the patient's interaction parameter showed a significant negative trend (p-value = 0.0006). Figure <ref> visualizes the interaction levels of the therapist and patients over time, with linear regression lines highlighting the underlying trends. The therapist's interaction shows a slight upward trend, while the patients' interaction reveals a downward trend. This inverse relationship between the therapist and patients' interaction levels could potentially suggest a shift in the dynamics of therapy sessions over time, with the therapist becoming more active and patients becoming less engaged in their interaction.
This may suggest an increased responsiveness to the patient's non-verbal cues over time, reflecting a possible evolution in therapeutic style or increased sensitivity to the patient's non-verbal expressions.
In both cases the models' R-squared values are about 7.8%-11.5%, suggesting that roughly 10% of the variation in the interaction ratio can be explained by the time of the session. This is not a very large amount of explained variance, suggesting that other factors not included in the model might also be influencing the interaction parameter over time.
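A minimal sketch of the trend analysis, with a placeholder series standing in for the session-ordered interaction ratios (the generated numbers are illustrative only):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
session_index = np.arange(100)                                        # proxy for session time
interaction_ratio = 0.30 + 0.0005 * session_index + rng.normal(0, 0.05, size=100)

fit = stats.linregress(session_index, interaction_ratio)
print(fit.slope, fit.pvalue, fit.rvalue**2)                           # slope, trend p-value, R-squared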
Notably, <cit.> reported findings of decreasing synchrony between this specific experienced therapist and patients over time, suggesting that our model captures key features of therapeutic dynamics. Indeed, this intriguing parallel development of the interaction parameters - increasing for the therapist while decreasing for the patient - may jointly suggest a shift in the dyadic synchrony over time. This shift is not merely reflected in the synchrony measure, but is rooted in the underlying mechanisms of interaction and self-damping/reinforcing that our model captures. These changes in the therapist and patient interactions not only reflect the individual engagement levels but also the interplay of these components, contributing to the overall synchrony in the dyad. This finding further substantiates that our mechanistic model captures critical aspects of the evolving therapeutic dynamics.
In summary, our results suggest that both trait-like and state-like characteristics play a role in motion dynamics within psychotherapy sessions. The stability of the self-damping/reinforcing parameters highlights the enduring influence of individual traits, while the variability of the interaction parameters underscores the dynamic, evolving nature of the therapeutic process. Specifically, based on the specific data analyzed here, it seems that the self-damping/reinforcing trait-like characteristics are 'responsible' for about 40% of the motion dynamics, while the interaction state-like characteristics account for about 60%. These insights further our understanding of psychotherapy dynamics and underscore the potential of our modeling approach for capturing the complexity of therapeutic interactions. Future research should continue to explore these dynamics, investigating the potential impacts of these trait-like and state-like characteristics on therapy outcomes and the professional development of therapists.
§ DISCUSSION AND CONCLUSIONS
This study introduces a pioneering mathematical and statistical framework for exploring the dynamics of psychotherapy, with an emphasis on the analysis of motion energy data obtained during therapy sessions. Our methodology, anchored in a system of coupled linear ordinary differential equations, delves into the intricate mechanisms propelling motion dynamics in therapeutic dyads. Furthermore, the ability of our approach to manage measurement errors and deliver trustworthy parameter estimates and confidence intervals highlights its accuracy and reliability. By providing a more comprehensive understanding of therapeutic dynamics, this research opens the door to advanced data-driven insights in the field of psychotherapy.
Through the analysis of three case studies, we demonstrated the practical utility of our model. By transforming raw motion energy data into interpretable narratives of non-verbal communication patterns, we identified meaningful dynamics and roles within therapist-patient dyads. This ability to extract actionable insights from motion energy data showcases its potential in revealing unique perspectives on psychotherapy dynamics. Our in-depth investigation also brought forth the importance of distinguishing between trait and state characteristics of the dyadic interaction dynamics, taking inspiration from recent works in the field of psychotherapy research. We observed how the trait-like and state-like characteristics of therapist and patient interactions manifested in the therapy sessions, providing a nuanced understanding of the dynamic interplay between consistent patterns and moment-to-moment fluctuations in non-verbal communication. The insights gained from this analysis not only shed light on the nuances of therapist-patient dynamics but also underscore the potential for a broader applicability of our mechanistic model to other dyadic interactions.
While the mechanistic modeling approach provides significant insights into the dynamics of psychotherapy sessions, it is important to underscore the fundamental difference between mechanism and causality. By examining the patterns of interaction ratios over time for both the experienced therapist and patients, we observe noticeable changes, indicating evolving dynamics. However, attributing causality based solely on these changes could lead to multiple, and potentially conflicting interpretations. For instance, one possible interpretation could be that as the therapist gains more experience, he tends to engage with more challenging patients, necessitating a higher degree of involvement on his part. Another plausible explanation could be that as the therapist's experience grows, he becomes more proactive in the therapeutic interaction, potentially overshadowing the patient's participation and thus reducing the degree of synchrony.
Both explanations are plausible based on the available motion energy data. Nevertheless, they represent contrasting views on the cause-and-effect dynamics at play: the first suggests a reaction to the changing patient population, while the second implies an inherent change in the therapist's approach over time. The key takeaway is that the observed mechanism – the changing dynamics of interaction ratios over time – does not inherently reveal the underlying causality. Further research, perhaps incorporating additional data sources or methods, would be needed to untangle the complex web of cause and effect in these interactions. Mechanistic modeling thus serves as a powerful tool for revealing patterns and generating hypotheses, but it must be complemented with careful interpretation and further investigatory work to draw robust causal inferences. Indeed, our motion-based analyses, while offering a new perspective on psychotherapy dynamics, do not provide direct insights into therapy session content or participant subjective experiences. Traditional data sources, such as session transcripts or self-report measures, are better equipped to capture these elements.
Future research directions offer exciting prospects. Incorporating process noise within our models could yield a more comprehensive understanding of psychotherapy dynamics. Applying our methodology to various types of dyadic interactions or different therapeutic modalities is another promising direction. Furthermore, delving deeper into the relationship between motion dynamics and therapy outcomes could enhance our understanding of different therapeutic interventions' effectiveness.
In conclusion, our research marks a significant step forward in the quantitative analysis of non-verbal synchrony in psychotherapy. By adopting a mechanistic understanding of these dynamics, we pave the way for further advancements in this field. The potential to derive meaningful insights from data collected non-intrusively and analyzed objectively opens up new avenues in therapeutic research and practice.
|
http://arxiv.org/abs/2307.04343v1 | 20230710045405 | Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification | ["Haixing Dai", "Lu Zhang", "Lin Zhao", "Zihao Wu", "Zhengliang Liu", "David Liu", "Xiaowei Yu", "Yanjun Lyu", "Changying Li", "Ninghao Liu", "Tianming Liu", "Dajiang Zhu"] | cs.CV | ["cs.CV"] |
Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification
Haixing Dai*, Lu Zhang*, Lin Zhao, Zihao Wu, Zhengliang Liu, David Liu,
Xiaowei Yu, Yanjun Lyu, Changying Li, Ninghao Liu, Tianming Liu, Dajiang Zhu.
* Co-first authors.
Haixing Dai, Lin Zhao, Zihao Wu, Zhengliang Liu, Ninghao Liu, Tianming Liu and Changying Li are with the
Department of Computer Science, University of Georgia, Athens, GA, USA.
(e-mail: hd54134, lin.zhao, zw63397,zl18864, ninghao.liu, [email protected], [email protected]).
Lu Zhang, Xiaowei Yu, Yanjun Lyu and Dajiang Zhu are with the Department of Computer Science
and Engineering, The University of Texas at Arlington, Arlington, TX, USA.
(e-mail: lu.zhang2, xxy1302, [email protected], [email protected])
David Weizhong Liu is with Athens Academy, Athens, GA, USA.(e-mail:
[email protected])
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
With the popularity of deep neural networks (DNNs), model interpretability is becoming a critical concern. Many approaches have been developed to tackle the problem through post-hoc analysis, such as explaining how predictions are made or understanding the meaning of neurons in middle layers. Nevertheless, these methods can only discover the patterns or rules that naturally exist in models. In this work, rather than relying on post-hoc schemes, we proactively instill knowledge to alter the representation of human-understandable concepts in hidden layers. Specifically, we use a hierarchical tree of semantic concepts to store the knowledge, which is leveraged to regularize the representations of image data instances while training deep models. The axes of the latent space are aligned with the semantic concepts, where the hierarchical relations between concepts are also preserved. Experiments on real-world image datasets show that our method improves model interpretability, showing better disentanglement of semantic concepts, without negatively affecting model classification performance.
Explainable AI (XAI), hierarchical tree of semantic concepts, image embedding, image interpretation.
§ INTRODUCTION
Machine learning interpretability has recently received considerable attention in various domains <cit.>. An important challenge that arises with deep neural networks (DNNs) is the opacity of semantic meanings of data representations in hidden layers. Several types of methods have been proposed to tackle the problem. First, recent works have shown that some neurons could be aligned with certain high-level semantic patterns in data <cit.>. Second, it is possible to extract concept vectors <cit.> or clusters <cit.> to identify semantic meanings from latent representations. However, these methods are built upon the assumption that semantic patterns are already learned by DNNs, and the models would admit the post-hoc method of a specific form. There is no guarantee that the assumption holds true for any model, especially when meaningful patterns or rules may not be manifested in the model, thus leading to over-interpretation <cit.>. Meanwhile, although many post-hoc explanation methods are proposed with the expectation of improving or debugging models, it is challenging to achieve this goal in practice. Although we could collect human annotations to guide prediction explanations and improve model credibility <cit.>, manually labeling or checking semantic concepts is rather difficult. Unlike explaining individual predictions, which is a local and instance-level task, extracting concepts provides a global understanding of models, where manual inspection of such interpretation is time-consuming and much harder, if not impossible.
Instead of relying on post-hoc approaches, we aim to instill interpretability as a constraint into model establishment. For example, explanation regularization is proposed in <cit.>, but it constrains gradient magnitude instead of focusing on semantic concepts. Meanwhile, β-VAE and its variants <cit.> add independence constraints to learn disentangled factors in latent representations, but it is difficult to explicitly specify and align latent dimensions with semantic meanings. Ideally, we want to construct DNNs whose latent space could tell us how it is encoding concepts. The recent decorrelated batch normalization (DBN) method <cit.> normalizes representations, providing an end-to-end technique for manipulating representations, but it is not directly related to interpretability.
In this work, we propose a novel Hierarchical Semantic Tree Concept Whitening (HaST-CW) model to decorrelate the latent representations in image classification for disentangling concepts with hierarchical relations. The idea of our work is illustrated in Fig. <ref>. Specifically, we define each concept as one class of objects, where the concepts are of different granularities and form a hierarchical tree structure. We decorrelate the activations of neural network layers, so that each concept is aligned with one or several latent dimensions. Unlike the traditional DBN method (Fig. <ref>a), which treats different concepts as independent, our method is able to leverage the hierarchically related organization of label concepts inherent in domain knowledge (Fig. <ref>b).
The consideration of relations between different concepts is crucial in many real-world applications <cit.>. For example, in the healthcare domain, the relationship of different disease stages (concepts) may reflect the progression of the disease, which is significant for reversing pathology <cit.>. Also, in the precision agriculture domain <cit.>, real-time monitoring of interactions of multiple agricultural objects (concepts) with each other and with the environment is crucial in maintaining agro-ecological balance <cit.>.
In our model, a novel semantic constraint (SC) loss function is designed to regularize representations. As a result, the data representations of two concepts with higher semantic similarity will be closer with each other in the latent space. Moreover, a new hierarchical concept whitening (HCW) method is proposed to decorrelate different label concepts hierarchically. We evaluated the proposed HaST-CW model using a novel agriculture image dataset called Agri-ImageNet. The results suggest that our model could preserve the semantic relationship between the label concepts, and provide a clear understanding of how the network gradually learns the concept in different layers, without hurting classification performance.
§ RELATED WORK
Post-Hoc Interpretation. Post-Hoc interpretation can be divided into approaches that explain predictions or models <cit.>. Prediction-oriented interpretation aims to develop faithful and robust measures to quantify feature importance towards individual predictions for identifying those features (e.g., pixels, super-pixels, words) that made most contributions <cit.>. Model-oriented interpretation analyzes behaviors of neural networks either by characterizing the function of model components <cit.> or analyzing semantic concepts from latent representations <cit.>. The proposed method also targets concept-level interpretation in deep neural networks. Different from post-hoc techniques that focus on discovering existing patterns in models, the newly proposed HaST-CW proactively injects concept-related knowledge into training and disentangles different concepts to promote model interpretability.
Inherently Interpretable Models. Another school of thought favors building inherently explainable machine learning models <cit.>. Some approaches design models that highlight prototypical features of samples as interpretation. For example, Chen et al. <cit.> classifies images by dissecting images into parts and comparing these components to similar prototypes towards prediction. Li et al. <cit.> designs an encoder-decoder framework to allow comparisons between inputs and the learned prototypes in latent space. Some other works such as β-VAE and its variants <cit.> regularize representation learning for autoencoders to produce disentangled factors in representation dimensions, but the semantic meaning of each dimension remains unknown without further manual inspection. In contrast, our method attempts to explicitly align latent dimensions with specific semantic concepts contained in external knowledge. A recent technique called Concept Whitening (CW) <cit.> constrains the latent space, after revising Batch Whitening <cit.>, such that it aligns with predefined classes. Our method attempts to infuse more complex knowledge of concept relations into representation learning.
Applying Whitening to Computer Vision. Whitening is a standard image preprocessing technique, which refers to transforming the covariance matrix of input vectors into the identity matrix. In fact, the well-known Batch Normalization <cit.> can be regarded as a variant of whitening where only the normalization process is retained. There are many works in deep learning that describe the effectiveness of whitening <cit.> and the process of finding the whitening matrix <cit.>. Our work further takes semantics into consideration during the whitening process towards more interpretable representation learning.
§ METHODOLOGY
§.§ Overview
The proposed HaST-CW model aims to preserve the underlying hierarchical relationship of label concepts, as well as to disentangle these concepts by decorrelating their latent representations. To achieve this goal, we leverage the hierarchical tree structure of the label concepts extracted from specific domain knowledge (<ref>). Then, the obtained structure of label concepts is used as prior knowledge to be instilled into the model for guiding the representation learning process. There are two key components in the knowledge instillation process – the hierarchical concept whitening (HCW) module and the semantic constraint (SC) loss, which will be elaborated in <ref> and <ref>, respectively.
§.§ The Hierarchical Semantic Tree of Concepts
In this work, we used a newly collected and curated Agri-ImageNet dataset to develop and evaluate the HaST-CW model. There are 9173 high quality images in Agri-ImageNet, covering 21 different types of agricultural objects. Taking each type of agricultural object as one class, we have 21 label concepts in total. Some pairs of agriculture objects have the supertype-subtype relationship between them, so we obtain the parent-child relationship between the corresponding labels. As a result, a tree structure is built to represent the underlying hierarchically related organization of label concepts, which is shown in <ref>. Two concepts connected in the tree structure means they have parent-child relationship, where the parent is located at the lower hierarchy level. Besides the parent-child relation, we further introduce two notions – brother and cousin. If two concepts have the same parent, then they are brothers. If the parents of two concepts are brothers, then the two concepts are cousins. According to the laws of inheritance: (1) objects with the parent-child relation should be more similar than those with the uncle-child relation (vertical parent-child relationship); and (2) the traits of brothers should be more similar than cousins (horizontal brother-cousin relationship). An effective model should be able to capture both of the vertical relationship and horizontal relationship, so that the representation of any concept in the latent space should be closer to its parent than uncles, and closer to brothers than cousins. For our HaST-CW model shown in <ref>, a new HCW module (<ref>) is proposed to preserve the vertical relationship, and a novel SC loss (<ref>) is proposed to preserve the horizontal relationship.
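To make the vertical and horizontal relations concrete, the following sketch derives brothers and cousins from a parent map; the tree fragment shown is hypothetical and serves only to illustrate the definitions.

# Hypothetical fragment of a concept tree (not the full Agri-ImageNet hierarchy).
parent = {"Fruit": None, "Weed": None,
          "Apple": "Fruit", "Orange": "Fruit", "Snake Weed": "Weed",
          "Apple Fuji": "Apple", "Apple Golden": "Apple", "Orange Navel": "Orange"}

def brothers(concept):
    p = parent[concept]
    return [c for c in parent if c != concept and p is not None and parent[c] == p]

def cousins(concept):
    p = parent[concept]
    if p is None:
        return []
    return [c for c in parent if parent[c] in brothers(p)]

print(brothers("Apple Fuji"))   # ['Apple Golden']
print(cousins("Apple Fuji"))    # ['Orange Navel']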
§.§ Hierarchical Concept Whitening
The hierarchical concept whitening (HCW) module is one of the key components in the HaST-CW model, which aims to disentangle different label concepts while preserving their underlying hierarchical relationship. Specifically, in this work, the set of label concepts is denoted by C={C_i}_i=1^N_c, where C_i represents the i^th concept and N_c = 21 is the number of concepts. For C_i, its parent, children, brothers and cousins are denoted as C_i.𝒫, {C_i.children}, {C_i.ℬ} and {C_i.𝒞}, respectively. A dataset is denoted as 𝒟={x_i,y_i}_i=1^n. We use X^C_i={x_j^C_i}_j=1^n_i to denote the set of i^th-class samples labeled by C_i.
In traditional whitening transformation <cit.>, during the training process, data samples are first fed into the model in mini-batches to obtain the latent representation matrix Z_d× n, where n is the mini-batch size and d is the dimension of latent representation. We use ResNet as the model backbone in this work. Then a transformation ψ is applied to decorrelate and standardize Z_d× n:
ψ(Z)=W(Z-μ1_n× 1^T),
where W_d× d is the orthogonal whitening matrix, and μ=1/n∑^n_i=1z_i is the sample mean. A property of representation whitening is that Q^TW is still a valid whitening matrix if Q is an orthogonal matrix. We leverage this property for interpretable representation learning. In our model, besides decorrelation and standardization, we expect that the transformed representation of samples from concept C_i, namely Q^Tψ(Z^C_i), can align well with the i^th axis of latent space. Meanwhile, the underlying hierarchical relationship of concepts should also be preserved in their latent representations. That is, we need to find an orthogonal matrix Q= [q_1, q_2, …, q_N_c] with two requirements: (1) Z^C_i should be most activated by q_i, i.e., the i^th column of Q; (2) Z^C_i should also be activated by {q_c}, where c∈{C_i.children} is the child of concept C_i. The first constraint makes the representation align together with the corresponding concept dimension, and the second one maintains the vertical parent-child relationship between concepts. To this end, the optimization problem can be formulated as:
max_q_1,…,q_N_c ∑^N_c_i=1[ 1/n_i q^T_iψ(Z^C_i)1_n_i× 1 + ∑_c∈{C_i.children}1/(n_i· N_cd) (q_c)^Tψ(Z^C_i)1_n_i× 1 ],
s.t. Q^TQ= I_d,
where N_cd = |{C_i.children}| is the number of child concepts of C_i. To solve this optimization problem with the orthogonality constraint, a gradient descent method with the curvilinear search algorithm <cit.> is adopted. With the whitening matrix W and rotation orthogonality matrix Q, HaST-CW can replace any batch normalization layer in deep neural networks. The details of representation whitening for HaST-CW is summarized in Algorithm <ref>.
The overall training pipeline of our HaST-CW model is shown in <ref>. We adopt an alternative training scheme. In the first stage, the deep neural network is trained with the traditional classification loss. In the second stage, we solve for Q to align representation dimension with semantic concepts. The two stages work alternatively during the training process. The classification loss of the first stage is defined as:
min_θ,ω,W,μ 1/m∑^m_i=1ℓ(g(Q^Tψ( Φ(x_i;θ);W,μ);ω);y_i),
where Φ(·) and g(·) are layers before and after the HaST-CW module parameterized by θ and ω, respectively. ψ(·) is the whitening transformation parameterized by the sample mean μ and whitening matrix W. The rotation orthogonal matrix Q will be updated according to <ref> in the second stage. The operation of Q^Tψ(·) forms the HCW module. During the first training stage, Q will be fixed and other parameters (θ,ω,W,μ) will be optimized according to <ref> to minimize the classification error. The first stage will take T_thre mini batches (we set T_thre=30 in experiments). After that, Q will be updated by the Cayley transform <cit.>:
Q^' = (I+η/2A)^-1(I-η/2A)Q,
A = GQ^T-QG^T,
where A is a skew-symmetric matrix. G is the gradient of the concept alignment loss, which is defined in <ref>. η is the learning rate. At the end of the second stage, an updated Q^' will participate in the first training stage of the next iteration.
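A minimal sketch of this update step, assuming G is the gradient of the concept alignment loss with respect to Q and that all matrices are square of the same size:

import numpy as np

def cayley_update(Q, G, eta):
    # Skew-symmetric matrix built from the gradient, as in the equations above.
    A = G @ Q.T - Q @ G.T
    I = np.eye(Q.shape[0])
    # (I + eta/2 A)^{-1} (I - eta/2 A) Q; the Cayley transform keeps Q orthogonal up to numerical error.
    return np.linalg.solve(I + (eta / 2.0) * A, (I - (eta / 2.0) * A) @ Q)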
Algorithm: The Overall Framework of HaST-CW
§.§ Semantic Constraint Loss
Besides preserving the vertical parent-child relationship of concepts, we further model the horizontal relation between concepts that are at the same hierarchy level (i.e., brothers or cousins). Different from the HCW in <ref> that focuses on concept alignment, here we directly control the distance between representations of different concepts with the horizontal relation <cit.>. To this end, we propose a Semantic Constraint (SC) loss to model the horizontal brother-cousin relationship as below:
ℒ_SC = αℒ_ℬ + βℒ_𝒞,
ℒ_ℬ=∑_j ∑_ℬ_i∈{C_i.ℬ}∑_k max{0,m_ℬ-d(z^C_i_j,z^ℬ_i_k)},
ℒ_𝒞 =∑_j ∑_ℬ_i∈{C_i.ℬ}∑_𝒞_i∈{C_i.𝒞}∑_k∑_l max{0, d(z^C_i_j,z^ℬ_i_k) - d(z^C_i_j,z^𝒞_i_l) + m_𝒞}.
There are two components in the SC loss and their contributions are controlled by two hyperparameters – α and β. The first term ℒ_ℬ is a contrastive loss, which takes a pair of image representations labeled by two brother concepts as input and enlarges the distance between them. It uses a hyperparameter m_ℬ to control the distance. The distance between two concepts increases when m_ℬ is set larger. ℬ_i∈{C_i.ℬ} denotes one of the brothers of concept C_i. The second term ℒ_𝒞 is a triplet loss. It takes three inputs: the anchor image representation z^C_i_j, the image representation z^ℬ_i_k labeled by brother concept of the anchor, and the image representation z^𝒞_i_l labeled by cousin concept of the anchor. 𝒞_i∈{C_i.𝒞} denotes the cousins of concept C_i. The triplet loss encourages the anchor-brother distance to be smaller compared with the anchor-cousin distance in representation space. In this way, the distance of image representations from brother classes tends to be smaller than the distance of image representations from cousin classes. The gap between the two types of distance is controlled by the margin value m_𝒞. Consequently, the hierarchical concept whitening module, together with the SC loss, enables the latent representations of concepts with similar semantics to be close with each other in the latent space.
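A minimal PyTorch-style sketch of the two terms; the aligned anchor/brother/cousin batches, the Euclidean distance, and the default margins are simplifications for illustration and do not reproduce the exact summation structure above.

import torch

def sc_loss(z_anchor, z_brother, z_cousin, m_B=1.0, m_C=1.0, alpha=1.0, beta=1.0):
    # z_*: tensors of shape (batch, d) holding representations of anchor, brother-class and cousin-class samples.
    d_brother = torch.norm(z_anchor - z_brother, dim=1)
    d_cousin = torch.norm(z_anchor - z_cousin, dim=1)
    loss_B = torch.clamp(m_B - d_brother, min=0).sum()              # push brother classes at least m_B apart
    loss_C = torch.clamp(d_brother - d_cousin + m_C, min=0).sum()   # keep brothers closer than cousins by m_C
    return alpha * loss_B + beta * loss_C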
§.§ Latent Feature Maps Activation
The proposed HaST-CW model can generate latent representations (ẑ_i) for input images (x_i) at each neural network layer by ẑ_i=Q^Tψ( Φ(x_i;θ);W,μ). The latent representation can be used to assess the interpretability of the learning process by measuring the degree of activation of ẑ_i at different concept dimensions (i.e. {q_i}). In the implementation, Φ(·) is a CNN based deep network, whose convolution output z_i= Φ(x_i;θ) is a tensor with the dimension z_i∈ R^d× h× w. Since ẑ_i is calculated by ẑ_i = Q^Tψ(z_i) where Q^T∈ R^d× d, we obtain ẑ_i∈ R^d× h× w, where d is the channel dimension and h× w is the feature map dimension. The hierarchical concept whitening operation Q^Tψ(·) is conducted upon the d feature maps. Therefore, different feature maps contain the information of whether and where the concept patterns exist in the image. However, as a tensor the feature map cannot directly measure the degree of concept activation. To solve this problem, and at the same time preserve both high-level and low-level information, we first apply max pooling to the feature map and then use the mean value of the downstream feature map to represent the original one. In this way, we reshape the original feature map z_i∈ R^d× h× w to z_i^'∈ R^d× 1. Finally, z_i^' is used to measure the activation of image x_i at each concept dimension.
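A minimal sketch of this reduction; the pooling kernel size is an illustrative choice and is not specified above.

import torch
import torch.nn.functional as F

def concept_activation(z):
    # z: feature map of shape (d, h, w) produced after the HCW layer.
    pooled = F.max_pool2d(z.unsqueeze(0), kernel_size=2)     # max pooling (kernel size assumed)
    return pooled.mean(dim=(-2, -1)).squeeze(0)              # one activation value per concept dimension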
§ EXPERIMENTS
In the experiments section, we first visually demonstrate how our method can effectively learn and hierarchically organize concepts in the latent space (<ref>). We also show (<ref>) that, compared to existing concept whitening methods, HaST-CW not only separates individual concepts, but also separates groups of semantically related concepts in the latent space. After that, we discuss the advantages offered by our method with quantitative results and intuitive examples (<ref>), compared with baselines including the CW module and ablated versions of our method.
§.§ Experimental Setting
§.§.§ Data Preparation
In this work, we use a newly collected and curated Agri-ImageNet dataset to evaluate the proposed HaST-CW model. In total, 9173 images from 21 classes are used in our experiments. Each image is labeled with the class at the highest possible hierarchy level. For example, an image of Melrose apple will be labeled as "Melrose" rather than the superclass "Apple". Then we divide images per class into three parts by 60%/20%/20% for a standardized training/validation/test splitting. Because the resolution of the original images can range from 300 to 5000, we adopt the following steps to normalize the image data: 1) we first lock aspect ratio and resize the images so that the short edge is 256; 2) During each training epoch, the images in the training and validation datasets are randomly cropped to 224×224; 3) During the testing process, images in the test dataset are center-cropped to 224×224; 4) After cropping, the pixel values of images are normalized to [0,1]. Then, the whole training dataset is divided into two parts (𝒟_T and 𝒟_C in <ref>). 𝒟_C is the concept dataset used to update the matrix Q in the second stage (<ref>). It is created by randomly selecting 64 images from each class in the training dataset. The remaining images in the training dataset 𝒟_T are used in the first stage to train the model parameters (<ref>).
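A sketch of these preprocessing steps with torchvision-style transforms; scaling to [0, 1] is handled by ToTensor, and the exact composition is illustrative.

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize(256),        # lock aspect ratio, short edge resized to 256
    transforms.RandomCrop(224),    # random 224x224 crop for training/validation epochs
    transforms.ToTensor(),         # pixel values scaled to [0, 1]
])
test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),    # center 224x224 crop at test time
    transforms.ToTensor(),
])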
§.§.§ Model Setting
In this work, we use several ResNet structures <cit.> to extract features from images, including ResNet18 and ResNet50. During the training process, the two-stage training scheme adopts a 30-to-1 ratio to alternately train the whole framework. In this case, after 30 mini-batches of continuous training, the model will pause and the rotation orthogonal matrix Q will be optimized at the next mini-batch. Two hyper-parameters α and β in the SC loss are set to be 1.0. The Adam optimizer is used to train the whole model with a learning rate of 0.1, a batch size of 64, a weight decay of 0.01, and a momentum rate of 0.9.
§.§ Visualization of Semantic Map
To illustrate the learned semantic hierarchical structure, we show the representations extracted from the latent hidden layer of all the samples in <ref>. For better visualization, we use Uniform Manifold Approximation and Projection (UMAP) <cit.> to project the representations to a two-dimensional space. All the images are color coded using the 17 sub-concepts which are defined on the left of <ref>. The top panel shows the result using CW method. In general, all the concepts are assembled as small groups, but neither semantic relations nor hierarchical structures have been learned. We highlight the super-concept of “Weed" (black) and three sub-concepts ( “Apple Golden" - green, “Apple Fuji" - red and “Apple Melrose" - blue) in the right column. We can see that the three types of apple (sub-concepts) are evenly distributed along with other fruits samples. The bottom panel shows our HaST-CW results. All the different concepts successfully keep their distinct cluster patterns as CW result. After our two-stage training process to instill the semantic and hierarchical knowledge, the three types of apple images have been pulled together and form a new concept (“Apple" with orange circle) at a higher level. Moreover, the newly learned concept of “Apple" simultaneously possesses sufficient distance to “Weed" (different super-concept) and maintains relatively close relations to “Strawberry", “Orange", “Mango" as well as other types of “Fruit". This result demonstrates the effectiveness of our hierarchical semantic concept learning framework, without negatively affecting the overall classification performance.
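A minimal sketch of the projection step, with a placeholder feature matrix standing in for the pooled latent representations:

import numpy as np
import umap

features = np.random.rand(500, 64)   # placeholder for the (n_samples, d) latent representations
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(features)
print(embedding.shape)               # (500, 2); these coordinates are what the scatter plots display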
§.§ Efficiency and Accuracy of Concept Alignment
In this section, we compare the learning efficiency and accuracy of the proposed HaST-CW with that of the conventional CW method. We track the alignment between image representations and their corresponding concepts at each layer. Specifically, we randomly select six concepts, and for each concept we sort and select the top five images whose representations show the strongest activation at the corresponding concept axis. We show the results at both shallow and deep layers (layer 4 vs. layer 8) in <ref>. From the results of layer 4 (the left column) we can see that most of the top five images obtained by conventional CW (the rows marked by green boxes) are mismatched with the corresponding concepts. For example, the five images under the concept of “Apple-Melrose" obtained by CW are from the “Weed" class. The five images under the concept of “Snake Weed" are actually from other subclasses of “Weed". Moreover, this situation continues in the following layers and does not change until layer 8. On the contrary, with the help of our designed semantic constraint loss, our HaST-CW (the rows marked by orange boxes) learns the intrinsic concepts faster and achieves the best performance at an earlier training stage (e.g., at a shallow layer). This result demonstrates that by paralleling multiple HCW layers the proposed HaST-CW model can capture the high-level features more efficiently.
To further demonstrate the alignment between images and the corresponding concepts, we project each image in the test dataset into a latent space where each concept can be represented by an axis. To visualize the alignments at different concept hierarchies (<ref>), we show three pairs of concepts which belong to different hierarchical levels as examples: “Apple-Melrose"-“Apple-Fuji" is from hierarchy 3 (H-3), “Snake Weed"-“Parkinsonia" is from hierarchy 2 (H-2), and “Weed"-“Apple" crosses hierarchies 1 and 2 (“Weed": H-1, “Apple": H-2). Within each concept pair, a two-dimensional space has been built by taking the two concepts as axes. Thus, each image can be mapped into the space by calculating the similarity between image representation and the two concept representations. The results are shown in <ref>. Different rows correspond to different methods and the concept axes (space) are defined at the bottom.
The first column of <ref> shows the data distribution in the two-dimensional space of “Apple-Melrose"-“Apple-Fuji" concept pair. The images belonging to Apple-Melrose class should have the highest similarity with the concept of “Apple-Melrose", and thereby they should be located at the right-bottom corner. Similarly, the images of Apple-Fuji class should be located at the left-top corner. The other images should distribute in the space according to the similarity with the two concepts. For example, compared to images of fruit-related classes, images of weed-related classes will have lower semantic similarity with the two concepts, so they should locate near the origin point (left-bottom corner). As shown in the first column, the two models which adopt the HaST-CW method (the second and third rows) can better follow the above-mentioned patterns. While in the CW model (the first row), nearly all the images are gathered at the right-bottom corner. This may be due to the high similarity between the two concepts considered, since they share the same super-class of “Apple". As a result, CW model may be limited in distinguishing different classes with high semantic similarity. A similar situation happens in the second column with the concept pair of “Snake Weed"-“Parkinsonia". These results suggest that compared to CW method, HaST-CW can better capture the subtle differences of semantic-related classes.
The third column shows the results of the concept pair of two super-classes: “Weed" and “Apple". As each of the super-class concept contains multiple sub-classes, the intra-class variability is greater. Our proposed HaST-CW, together with the SC loss (the third row), can effectively capture the common visual features and project the “Weed" and “Apple" images to the left-top and right-bottom, respectively. At the same time, the images belonging to different sub-classes under “Weed" and “Apple" are assembled as blocks instead of scattered along the diagonal line. In the other two methods, especially in the CW method (the first row), the images of “Weed" class spread out over a wide range along the vertical axis. This result suggests that the proposed HaST-CW with SC loss can effectively model both the inter- and intra- class similarity.
§.§ Interpretable Image Classification
In this section, we compare the classification performance of the proposed HaST-CW method and the SC loss function with the conventional CW method using different backbones: ResNet18 and ResNet50. The results are summarized in <Ref>. Different rows correspond to different model settings. Within each model setting, we repeat the experiments five times to reduce the effect of random noise. The mean and variance of accuracy (ACC.) are reported in the fourth column. From the results, we can see that the classification performance of our full model is slightly better than that of the other three model settings. This result indicates that the proposed HaST-CW model can improve interpretability without hurting predictive performance.
To track and visualize the classification process, we randomly select two images from the Apple-Melrose class and the Snake Weed class. The activation values of each image with respect to the six relevant concepts are calculated and normalized to [0, 1]. The images, concepts and activation values are organized into a hierarchical activation tree. The results are shown in <ref>. We can observe that the activation values of each image correctly represent the semantic relationship between the images and the concepts. For example, in <ref> (a), the image located at the root is from the Snake Weed class, which is a subclass of Weed. The activation values of the image are consistent with this relationship and possess the highest activation values on the two concepts – “Weed" and “Snake Weed".
§ CONCLUSION AND FUTURE WORK
In this study, we propose a new HaST-CW and demonstrate its superiority over Concept Whitening <cit.>. HaST-CW decorrelates representations in the latent space and aligns concepts with corresponding dimensions. In addition, it correctly groups concepts at different granularity levels in the latent space and preserves hierarchical structures of concepts of interest. By doing so, we can interpret concepts better and observe the semantic relationships among concepts.
We believe there are many possibilities for future work. One promising direction is automatically learning concepts from data. In this scenario, we can jointly learn possible concepts from common abstract features among images and how to represent these learned concepts in the latent space. For example, it might be possible to develop unsupervised or weakly-supervised methods to automatically learn the concept tree from data. By jointly learning concepts, their representations, and relations, the model may discover more data-driven semantic structures.
HaST-CW can also be extended with post-hoc interpretability strategies (such as saliency-based methods that highlight focused areas used for classification). Such explanations at the concept level can provide a more global view of model behaviors.
In addition, while this work focuses on the natural image domain, the idea of leveraging hierarchical knowledge to guide representation learning is generalizable to other domains such as natural language processing <cit.> and medical image analysis <cit.>. Exploring knowledge-infused learning in different domains <cit.> and tasks <cit.>, including innovative applications <cit.>, is an interesting future direction.
In conclusion, as deep learning models become increasingly complex, model interpretability is crucial for understanding behaviors, gaining trust, and enabling human-AI collaboration. Our work complements previous work and lays a solid foundation for further exploration.
|
http://arxiv.org/abs/2307.04213v1 | 20230709155609 | Family Floer theory, non-abelianization, and Spectral Networks | [
"Yoon Jae Nho"
] | math.SG | [
"math.SG",
"53D40"
] |
Family Floer theory, non-abelianization, and Spectral Networks
Yoon Jae Nho
August 12, 2023
================================================================================================
In this paper, we study the relationship between Gaiotto-Moore-Neitzke's non-abelianization map and Floer theory. Given a complete GMN quadratic differential ϕ defined on a closed Riemann surface C, let C̃ be the complement of the poles of ϕ. In the case where the spectral curve Σ_ϕ is exact with respect to the canonical Liouville form on T^∗C̃, we show that an “almost flat" GL(1;ℂ)-local system ℒ on Σ_ϕ defines a Floer cohomology local system HF_t(Σ_ϕ,ℒ;ℂ) on C̃ for 0< t≤ 1. Then we show that for small enough t, the non-abelianization of ℒ is isomorphic to the family Floer cohomology local system HF_t(Σ_ϕ,ℒ;ℂ).
§ INTRODUCTION
§.§ Main result
Let C be a closed Riemann surface, and let ω_C be the canonical line bundle of C. A quadratic differential ϕ is a meromorphic section of ω_C^⊗ 2. We say that a quadratic differential ϕ is a complete GMN quadratic differential if all of its zeroes are simple, it does not have poles of order 1, and has at least one pole.
Let C̃ be the complement of the poles of ϕ, and let (T^∗_ℂ)^1,0C̃ denote the total space of ω_C̃. The complete GMN
quadratic differential ϕ defines a smooth embedded algebraic subvariety Σ_ϕ of (T^∗_ℂ)^1,0C̃
called the spectral curve associated to ϕ. It becomes a simple branched double covering of C̃ by restricting the projection map π:(T^∗_ℂ)^1,0C̃→C̃ to Σ_ϕ. This curve is defined using the canonical holomorphic Liouville form λ=p^z dz via
Σ_ϕ:={λ^2-π^∗ϕ=0}⊂T^1,0_ℂ^∗C̃.
Here p^z is the complex fibre coordinate and z is the complex base coordinate [For the definition, see Section <ref>].
We have the canonical holomorphic symplectic form Ω on (T^∗_ℂ)^1,0C̃ defined by Ω:=dλ. The spectral curve Σ_ϕ is a holomorphic Lagrangian submanifold in the sense that it is a holomorphic submanifold of (T^∗_ℂ)^1,0C̃ and the holomorphic symplectic form Ω vanishes on Σ_ϕ. There is also a diffeomorphism between (T^∗_ℂ)^1,0C̃ and the real cotangent bundle T^∗C̃, sending the real part of the holomorphic Liouville form to the canonical real Liouville form λ_re=∑ p^i dq_i. Then the spectral curve Σ_ϕ becomes an ω-Lagrangian submanifold of T^∗C̃ under this identification, where ω=dλ_re.
Suppose now that the spectral curve Σ_ϕ is exact with respect to λ_re, then so is tΣ_ϕ for any t∈ℝ_>0. Now let C^∘ be the complement of the zeroes and poles of ϕ, and Σ_ϕ^∘=π^-1(C^∘). Following <cit.>, we say that a rank 1 local system ℒ over Σ_ϕ^∘ is almost flat if the monodromy along a small loop around any of the ramification points in π^-1(zero(ϕ)) is -Id. Let 𝔰 be a spin structure on C, and let 𝔣_z be the induced spin structure on the cotangent fibre F_z, for z∈ C. Given an almost flat GL(1;ℤ)-local system ℬ, the spin structure 𝔰̃=𝔰⊗ℬ on Σ_ϕ^∘ extends to a global spin structure on Σ_ϕ, which we still denote as 𝔰̃. Furthermore, given an almost flat GL(1;ℂ)-local system ℒ, ℒ⊗ℬ extends to a GL(1;ℂ)-local system on Σ_ϕ.
We show that together with an almost flat GL(1;ℂ)-local system ℒ on Σ_ϕ, spin structures 𝔰̃ and 𝔣_z, and a choice of ℬ, we can define the family Floer cohomology local system
HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ): z↦ HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ),
for any t∈ℝ_>0. Here we are taking Floer cohomology over ℂ twisted by the GL(1;ℂ)-local system ℒ⊗ℬ. It turns out that (<ref>) is concentrated in the zeroth degree, is free and has rank 2.
In <cit.>, Gaiotto, Moore and Neitzke constructed the non-abelianization map which sends an almost flat GL(1;ℂ)-local system on Σ_ϕ^∘ to a GL(2;ℂ)-local system on C̃. The main theorem of this paper is that for small enough t, HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ) and the non-abelianization of ℒ are isomorphic.
Suppose Σ_ϕ is exact with respect to the real Liouville form λ_re. Given an E≫ 1 and small enough δ>0, there exists a t_0(δ;E)>0 such that the following holds. Let 0<t<t_0. Let 𝔰 be a global spin structure on C, and let ℒ be an almost flat GL(1;ℂ)-local system on the spectral curve. Let ℬ be an almost flat GL(1;ℤ)-local system, and extend 𝔰̃=π^∗𝔰⊗ℬ and ℒ⊗ℬ to Σ_ϕ. Then the Floer cohomology local system
HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ): z↦ HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ)
is isomorphic to the non-abelianization of ℒ.
In particular, the isomorphism class of the local system does not depend on the choice of 𝔰 and ℬ and so we write HF_t(Σ_ϕ,ℒ;ℂ) instead.
We take a direct Floer-theoretic approach to study GMN non-abelianization. The microlocal side of GMN non-abelianization has been studied in various other papers, such as <cit.>, <cit.> and <cit.>.
We now split the rest of the introduction into two parts. The first part contains a brief review of the theory of quadratic differentials, and the GMN non-abelianization map. The second part outlines the main ideas involved in the proof of the main theorem.
§.§ Quadratic differentials and non-abelianization
§.§.§ Quadratic differentials
We review the theory of quadratic differentials in more detail. We will describe the local structure of zeroes and poles, the spectral curve, and the induced singular flat metric on the base. We follow the expositions from <cit.> and <cit.>. Again, let C be a closed Riemann surface, and let ω_C be the canonical line bundle. Then
A quadratic differential is a meromorphic section of ω_C^⊗ 2. Equivalently, a quadratic differential is a collection of open conformal charts (U_μ,z_μ) where z_μ:U_μ→ℂ is a biholomorphism onto its image, together with a collection of meromorphic functions ϕ_μ on U_μ such that
ϕ_μ'=ϕ_μ(dz_μ/dz_μ')^2 on U_μ∩ U_μ'.
We then locally write ϕ=ϕ_μdz_μ^2
on U_μ.
A zero or a pole of ϕ is called a critical point.
We say that a critical point is finite if it is either a simple pole or a zero of ϕ. Otherwise, we say that a critical point is an infinite critical point. The critical points of ϕ have the following local structure. For details, see <cit.>. The following is Theorem 6.1—Theorem 6.4 in <cit.> combined.
Let b be either a finite critical point of ϕ or a pole of an odd order. Let n be the exponent of b. Then there exists a neighbourhood U_b of b, an open set D of ℂ containing zero, and a biholomorphism ξ=ξ_b:(D,0)→ (U_b,b) such that
ϕ(ξ)dξ^2=(n+2/2)^2 ξ^n dξ^2.
Furthermore, the germ of the biholomorphism is unique up to a factor of some c=exp(k/n+2(2π i)) for k=0,1,2,..,n+1.
In particular, for n=1, we get
ϕ(ξ)dξ^2=(3/2)^2 ξ dξ^2.
Let b be a pole of order 2. Then there exists a local conformal parameter ξ which is unique up to a factor of a constant c∈ℂ such that
ϕ(ξ)dξ^2=a_-2ξ^-2 dξ^2.
Let b be a pole of ϕ with even order n≥ 4. Then there exists a local conformal parameter ξ and a constant r∈ℂ such that
ϕ(ξ)dξ^2=(1/2(2-n)ξ^-m+rξ^-1)^2 dξ^2.
Spectral curves
A quadratic differential ϕ gives rises to a holomorphic Lagrangian submanifold of T^∗C̃ called the spectral curve Σ_ϕ. To define this, let (T^∗_ℂ)^1,0C̃ denote the holomorphic cotangent bundle of C̃. There exists a canonical holomorphic Liouville 1-form λ on (T^∗_ℂ)^1,0C̃; for (q,p)∈(T^∗_ℂ)^1,0C̃ and V∈ T(T^∗_ℂ)^1,0C̃, we define
λ(q,p)(V)=p(π_∗(V))
where π:(T^∗_ℂ)^1,0C̃→C̃ is the projection map. We evaluate π_∗V∈ T_q C̃ on p∈ (T_q C̃)^∗ with respect to the canonical pairing
(T_q C̃)^∗⊗ (T_qC̃)→ℂ.
In local coordinates, we can write λ=p^z dz where p^z is the complex fibre coordinate and z is the complex base coordinate. We see that λ gives the canonical section of the line bundle π^∗ω_C̃. We obtain the canonical holomorphic symplectic form on (T^∗_ℂ)^1,0C̃ by taking the exterior derivative Ω=dλ.
There exists a diffeomorphism of the total space of the real fibre bundles
T^∗C̃→ (T^∗_ℂ)^1,0C̃
between the real cotangent bundle and the holomorphic cotangent bundle, under which the real part of λ is pull-backed to the canonical real Liouville form λ_re on T^∗C̃. The diffeomorphism is induced by the identification of V_ℂ^1,0≃ V with V a real vector space with a complex structure I:V→ V, I^2=-Id (see <cit.>).
The algebraic variety
Σ_ϕ:={λ^2-π^∗ϕ=0}⊂ (T^∗_ℂ)^1,0C̃
is called the spectral curve associated to the quadratic differential ϕ. The projection π:Σ_ϕ→C̃ gives a branched double covering of C̃. The spectral curve is smooth if the zeros of ϕ are simple. In this case, the map Σ_ϕ→C̃ becomes a simple branched covering. To see this, note that by Proposition <ref>, if z_0∈ϕ is a zero of ϕ, then one can find some conformal coordinate charts near z_0 such that ϕ reads locally zdz^2. Then realizing ℂ^2 as the holomorphic cotangent bundle of ℂ, we see that the germ of the spectral curve near z_0 is equivalent to the germ of {(p^z)^2-z=0}⊂ℂ^2 at (z,p^z)=(0,0), which is smooth. Now observe that the holomorphic symplectic form Ω vanishes on any smooth codimension one algebraic subvariety of (T^∗_ℂ)^1,0C̃. Under the identification of the holomorphic and the real cotangent bundle, we see that Σ_ϕ becomes a real Lagrangian submanifold of T^∗C̃.
ϕ-metric.
We need a further ingredient to describe non-abelianization, which is the natural flat singular metric structure on C. To define this, let C^∘ denote the complement of the zeros and poles of ϕ. On C^∘, we have a corresponding Riemannian metric
g^ϕ=|ϕ(z)||dz|^2,
which we regard as a singular metric on C̃. The metric g^ϕ is actually flat because in the local conformal coordinate W=∫√(ϕ), ϕ≡ dW^2 and g^ϕ≡|dW|^2 by (<ref>). Note that g^ϕ induces a topological metric space structure on C̃.
We will be interested in the following class of quadratic differentials with nice g^ϕ-metric properties.
<cit.> A meromorphic quadratic differential is GMN[For Gaiotto, Moore and Neitzke who first introduced the theory of spectral networks with which we are concerned.] if:
* all the zeroes of ϕ are simple,
* ϕ has at least one pole,
* ϕ has at least one finite critical point (either an order one pole or zero).
We say that a GMN quadratic differential ϕ is complete if ϕ has no simple poles.
If ϕ is complete, then the metric space is complete as well. To see this, note that the integral lim_a→ 0^+∫_a^1 (1/x^b) dx for 0<b<∞ converges for b=1/2, but not for b≥ 1. Now comparing with the local forms in Proposition <ref>, we see that the integral of the line element √(ϕ)∼1/z^b for b≥ 1 blows up as z→ 0.
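Explicitly,
∫_a^1 x^-1/2 dx = 2(1-√(a)) → 2, ∫_a^1 x^-1 dx = -log a →∞, ∫_a^1 x^-b dx = (1-a^1-b)/(1-b) →∞ for b>1,
as a→ 0^+, so points around which the line element grows like 1/z^b with b≥ 1 lie at infinite ϕ-distance, while finite critical points lie at finite distance.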
Each g^ϕ-geodesic, or ϕ-geodesic for short, admits a unique phase in ℝ/πℤ since ϕ-geodesics are just straight lines in the W-coordinate. We call geodesics with phase θ=0 horizontal
and geodesics with phase θ= π/2 vertical. We call maximal solutions of the ϕ-geodesic equation trajectories.
There are several types of trajectories. If the trajectory γ has its maximal interval of definition a finite open interval, or equivalently, approaches finite critical points at both ends, we say that γ is a saddle trajectory. If it
is defined over (-∞,∞), or equivalently approaches infinite critical points at both ends, then we say that it is a generic trajectory. If the trajectory approaches a finite critical point at a single end, then we say that it is a separating trajectory. Note that horizontal generic trajectories do not intersect with each other. We say that ϕ is saddle-free if there are no horizontal saddle trajectories on C. We can always rotate ϕ by e^i2θ for a generic θ to obtain a saddle-free quadratic differential <cit.>.
The phase θ trajectories in C^∘ give a singular foliation on C̃. The critical graph of this singular foliation is called the spectral network S(θ). The spectral network S(θ) is stratified into a 0-th dimensional stratum consisting of all the zeroes of ϕ and a 1-dimensional stratum consisting of the separating θ-trajectories, also called walls.
The complement of the spectral network for a saddle-free GMN quadratic differential is a disjoint union of chambers; chambers are connected contractible conformal subdomains of C̃. Given a chamber 𝒵^h, there exists a conformal equivalence of (𝒵^h,ϕ)≃ (𝒵^h(a,b),dz^2) where 𝒵^h(a,b) is either the upper half-plane or a finite horizontal strip subdomain of ℂ (c.f. <cit.>). These chambers are maximal horizontal domains, meaning that they are spanned by generic horizontal trajectories. Thus we have a cellular decomposition of C̃, where the 2-cells are the chambers, the 1-cells the walls, and the 0-cells the zeroes of ϕ. The spectral curve Σ_ϕ restricted to a chamber is sent under the conformal equivalence (𝒵^h,ϕ)≃ (𝒵^h(a,b),dz^2), to the two disjoint affine hyperplanes {p^x=± 1, p^y=0}.
§.§.§ Non-abelianization
We now state what we mean by non-abelianization. Let ℛ be either ℤ or ℂ. We define a GL(k;ℛ)-local system to be a rank k locally constant sheaf of free ℛ-modules. Inspired by <cit.>, we look at the “integrated version" of local systems, or the path groupoid representation of ℛ-local systems (Definition <ref>). For this purpose, let M be a two dimensional real manifold. Suppose furthermore that M admits a compactification M, by which we mean that there is an embedding of M into a closed two-dimensional manifold M such that M_∞=M-M consists of finite set of points.
In the above set-up, M admits a wall-chamber decomposition if there exists a finite collection M^0 of points on M, and a collection M^1 of embedded arcs (called walls) in M satisfying the following conditions.
* If w∈ M^1, then w connects a point in M^0 to either a point in M^0 or a point in M_∞.
* Given a point m_0∈ M^0, there exists a wall W such that m_0∈∂ W, and given a point m_∞∈ M_∞, there exists at least one point m_0 in M^0 and a wall W∈ M^1 such that ∂ W={m_0}∪{m_∞}.
* The walls in M^1 only meet at the points in M^0∪ M_∞.
* The complement M^2 of all the walls in M^1 decomposes M into a finite disjoint union of contractible components (called chambers).
<cit.>.
Given a wall-chamber decomposition (M^0,M^1,M^2), we say that a collection of points 𝒫_M in M is a set of base points if each component of M^2 contains at least one element of 𝒫_M. Given the base points 𝒫_M, the path groupoid 𝒢_M=𝒢_M(𝒫_M) is the groupoid whose objects are points in 𝒫_M, and whose morphisms are path-homotopy classes between the points in 𝒫_M. A collection of morphisms of 𝒢_M is said to be a path groupoid generating set if their concatenations generate 𝒢_M.
A path groupoid representation of a GL(k;ℛ)-local system consists
of the following data.
* A free rank k ℛ-module E_b for each b∈𝒫_M together with an isomorphism
ℛ^⊕ k≃ E_b.
* A morphism
Γ(α):E_b→ E_b'
given a path homotopy class α∈π_1(b,b')_M, b,b'∈𝒫_M, such that Γ(α) is compatible with path concatenations.
Two path groupoid representations (𝒫_M,E,Γ) and (𝒫_M,E',Γ') are said to be equivalent if for each b∈𝒫_M there are isomorphisms
g_b:E_b→ E'_b
such that (i) for every α∈π_1(b,b') with b,b'∈𝒫_M, the square formed by Γ(α), Γ'(α), g_b and g_b' commutes, that is,
g_b'∘Γ(α)=Γ'(α)∘ g_b,
and (ii) the isomorphisms g_b are compatible with the isomorphisms (<ref>) to ℛ^⊕ k above.
Given a path groupoid representation of a GL(k;ℛ)-local system, we can build a genuine ℛ-local system on M. To see this, we borrow the argument in <cit.>. Consider the following space
𝒫̃^M:={γ:I→ M: γ(0)∈𝒫_M}/∼
of path homotopy classes that begin at some m∈𝒫_M and end at some other point m'. Let 𝒫̃^M_b be the connected component of 𝒫̃^M containing the constant path at b∈𝒫_M. Then we glue the constant sheaves E_b×𝒫̃^M_b by (v,b)∼ (Γ(α)v,b') for α∈π_1(b,b').
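For instance, when the decomposition has a single chamber and 𝒫_M={b} consists of a single base point, the morphisms of 𝒢_M are just the classes in π_1(M,b), and the construction above simply recovers the ℛ-local system associated to the monodromy representation
π_1(M,b)→ GL(k;ℛ), [γ]↦Γ(γ),
determined by the values of Γ on loops based at b.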
The spectral network S(0) induces a wall-chamber decomposition of C̃. Suppose we choose a collection of base points b(w), one for each wall w in S(0). The wall w picks out a unique sheet of √(ϕ) in the following sense: choose any parametrization w:[0,∞)→C̃ in the outward orientation; there exists a unique sheet of √(ϕ) such that the function s↦∫_0^s √(ϕ) along w takes values in ℝ_≥ 0, independently of the choice of an oriented parametrization of w. We can then similarly choose a pair of points b^u(w) and b^d(w) connected by an oriented vertical arc α, called a short path, passing through b(w) such that the integral s↦∫_α(0)^α(s) Im√(ϕ) is non-negative and increasing. Furthermore, we can give ± labels to the lifts of b^u(w) (or b^d(w)) by letting b(w)^u,+ (or b(w)^d,+) be the lift corresponding to the positive sheet of √(ϕ) along w.
Let 𝒫_C be the resulting collection of points b^u(w) and b^d(w) for w a wall in S(0). Let 𝒫_Σ_ϕ^∘=π^-1(𝒫_C^∘), and lift the wall-chamber decomposition of C to a wall-chamber decomposition of Σ_ϕ. Recall that we call a GL(1;ℛ)-local system on Σ_ϕ almost flat if the monodromy around a ramification point is -Id. Similar to Definition <ref>, we introduce a path-groupoid representation analogue of an almost flat GL(1;ℛ)-local system introduced in <cit.>.
A path groupoid representation of an almost flat GL(1;ℛ)-local system ℒ on Σ_ϕ^∘ is a collection of the following data:
* A one-dimensional free ℛ-module ℒ_b̃ for each of the points b̃∈𝒫_Σ_ϕ^∘ with a preferred choice of basis.
* A morphism of vector spaces
Φ^ℒ(α): ℒ_b̃→ℒ_b̃'
given a morphism α∈ Hom(b̃,b̃') of the path groupoid 𝒢_Σ_ϕ^∘=𝒢_Σ_ϕ^∘(𝒫_Σ_ϕ^∘).
This data is subject to the following conditions:
* The morphisms Φ^ℒ(α) are compatible with composition of path homotopy classes.
* The holonomy around a based loop encircling a ramification point of π is -Id.
We now define non-abelianization.
Given a path groupoid representation ℒ of an almost flat GL(1;ℂ)-local system on Σ_ϕ^∘ and a path groupoid representation E of a GL(2;ℂ)-local system on C̃, we say that ℒ and E form a 𝒲-pair, or equivalently that E is a non-abelianization of ℒ, if:
* There is an isomorphism
i_b:E_b→π_∗(ℒ)_b
for each b∈𝒫_C.
* If α does not cross walls of S(0), then
Γ(α)=i_f(α)^-1(π_∗Φ^ℒ(α)) i_i(α).
* If α is a short path between b(w)^- and b(w)^+, then
Γ(α)=i_f(α)^-1𝒮_w(Φ^ℒ(α)) i_i(α)
where 𝒮_w is a unipotent matrix of form Id+μ_w, for some ℂ-morphism
μ_w:ℒ_b(w)^d,-→ℒ_b(w)^u,+.
Furthermore, we say that the induced local systems on C̃ and Σ_ϕ form a 𝒲-pair if their path groupoid representations form a 𝒲-pair.
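Schematically, in ordered bases adapted to the splittings π_∗(ℒ)_b^d(w)=ℒ_b(w)^d,+⊕ℒ_b(w)^d,- and π_∗(ℒ)_b^u(w)=ℒ_b(w)^u,+⊕ℒ_b(w)^u,-, one may picture the composite 𝒮_w(Φ^ℒ(α)) as the block matrix
𝒮_w(Φ^ℒ(α))=[ Φ^+ μ_w; 0 Φ^- ],
where Φ^± denote the diagonal, sheet-preserving transports of ℒ along the lifts of the short path α; in a trivialization where Φ^±=1 this is exactly the unipotent shape Id+μ_w of the definition.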
One of the main insights of <cit.> was that homotopy invariance and the data of ℒ uniquely determine the matrices μ_w. We will revisit this idea in Section <ref>.
Consider Σ_ϕ as a real Lagrangian submanifold in T^∗C̃ with respect to the real canonical Liouville form λ_re under the identification of the real cotangent bundle and the holomorphic cotangent bundle. We are interested in complete GMN quadratic differentials ϕ such that the corresponding spectral curve Σ_ϕ is exact with respect to the canonical real Liouville form on T^∗C̃. We call such quadratic differentials real exact. The space of real exact quadratic differentials constitutes a totally real submanifold of the space of quadratic differentials (see the remark after Proposition <ref>; the space of GMN quadratic differentials is a complex manifold by <cit.>).
We show in Section <ref> that given a real exact quadratic differential ϕ, the Floer cohomology local system
HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ): z↦ HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ)
is well defined. The construction of the precise Floer-theoretic set-up uses only standard techniques, but is slightly involved. This is carried out in Section <ref>.
We also show that the points 𝒫_C and the ℂ-vector spaces HF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,ℒ⊗ℬ;ℂ) for z∈𝒫_C, along with the Floer-theoretic parallel transports, define a path groupoid representation HF_t(Σ_ϕ,ℒ,𝔰,ℬ,𝒫_C;ℂ) of a GL(2;ℂ)-local system over C̃. In addition, we show that compact Hamiltonian isotopies of tΣ_ϕ which are supported away from the points in π^-1(𝒫_C) define an equivalent path groupoid representation (Proposition <ref>).
We can now restate our main theorem as follows. Note that there are some constants involved for technical reasons.
Let Σ_ϕ be the spectral curve associated to a real-exact GMN quadratic differential on a closed Riemann surface C. Given a small deformation parameter δ>0 and a large energy cut-off E≫ 1, there exists a t_0>0 and a collection of points 𝒫_C=𝒫_C(δ;E) (with lifts P_Σ_ϕ^∘) such that the following holds for all 0<t<t_0.
Let ℒ=ℒ(P_Σ_ϕ^∘) be a path groupoid representation of an almost flat GL(1;ℂ)-local system, 𝔰 be a spin structure on C, and ℬ be an almost flat GL(1;ℤ)-local system. Then
HF_t(Σ_ϕ,ℒ,𝔰, ℬ, 𝒫_C;ℂ) and ℒ(P_Σ_ϕ^∘) form a 𝒲-pair, or equivalently, HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ) is a non-abelianization of ℒ.
§.§ Towards the proof of Theorem <ref>
We outline the strategy towards the proof of Theorem <ref>. Given some small parameter δ>0, we find a suitable Kähler metric g^ϕ_δ over C̃ which agrees with g^ϕ outside a small neighbourhood of the zeroes of ϕ. Then as we said in Section <ref>, we restrict our attention to spectral curves which are exact with respect to the real Liouville form λ_re. Given such a quadratic differential and a large energy cut-off E≫ 0, we construct some bounded open subdomain C(δ;E) of C which is a deformation retract of C̃-S(0), such that g^ϕ_δ=g^ϕ on C(δ;E).
The metric g^ϕ_δ gives rise to an induced almost complex structure J on T^∗C̃ called the Sasaki almost complex structure; for the definition, see Definition <ref>. For z∈ C^∘ and 0<t≤ 1, a J-holomorphic strip u bounded between tΣ_ϕ and F_z that travels between distinct lifts of z on Σ_ϕ is called a t-BPS disc ending at z. The main analytic theorem of the paper is the following non-existence result for t-BPS discs ending at z, for small enough t>0 and z∈ C(δ;E).
Given E≫ 0, δ≪ 1, there exists a metric g^ϕ_δ on C̃, a deformation retract C(δ;E) of C̃-S(0) over which g^ϕ_δ=g^ϕ such that the following holds.
Let J be the Sasaki almost complex structure associated to g^ϕ_δ. Then there exists a scaling parameter t_0=t_0(δ;E)>0 such that for 0<t≤ t_0, there are no non-constant J-holomorphic strips bounded between F_z and tΣ_ϕ for z∈ C(δ;E).
The main motivation behind the proof of Theorem <ref> comes from the following general expectation. Suppose we have a sequence of discs u_t with boundary on a t-rescaling of an exact multi-graph. Then we expect the sequence u_t to degenerate to sets of solutions of Morse-like local differential equations on C̃, after possibly passing to a subsequence. The resulting set of solutions on C̃ is called the adiabatic degeneration of the sequence u_t. In our case, the resulting Morse-like local differential equation turns out to agree with the horizontal ϕ-geodesic equations on C̃ over the region C(δ;E). Furthermore, we show in Section <ref> that the flow lines for points z∈ C(δ;E) can never enter some neighbourhood of the branch points.
Using these observations, in Section <ref>, we modify Ekholm's Morse flow tree techniques <cit.> to show that after passing to a subsequence, u_t maps arbitrarily close to a horizontal trajectory γ passing through some z∈ C(δ;E) for small enough t. By construction, we can find a small neighbourhood of γ contained in C^∘ over which the metric g^ϕ_δ agrees with g^ϕ. We show that such discs cannot exist under the finite energy assumption, and prove the main analytic theorem.
We now explain briefly how to relate Theorem <ref> to Theorem <ref>. Let z∈𝒫_C. By choosing a suitable grading, we show that the chain complex CF(Σ_ϕ,F_z) is concentrated in degree 0. Thus the intersection points in F_z⋔Σ_ϕ give rise to a natural decomposition of HF(Σ_ϕ,F_z) for z∈𝒫_C. The key part is using Theorem <ref> to show that the parallel transport is diagonal along (homotopy classes of) paths that are strictly contained in a connected component of C(δ;E). This is necessary to show that (<ref>) holds. To understand the heuristics, consider the holomorphic strips that contribute to the non-diagonal terms in the Floer-theoretic parallel transport map along horizontal or vertical arcs in C(δ;E). We show that these strips Gromov-converge to broken holomorphic strips bounded between F_z and tΣ_ϕ as the corresponding arcs converge to the point z. Since Theorem <ref> implies that such holomorphic strips cannot exist, we deduce that the parallel transport must be diagonal.
We now explain why the parallel transport along a short arc is of the form (<ref>). This is essentially due to positivity of energy. From real exactness, we have W:Σ_ϕ→ℝ such that λ_re|_Σ_ϕ=dW. From Stokes' theorem, we see that the energy of t-BPS discs ending at z is bounded above by ± t (W(z^+)-W(z^-)). We show that W(z^+)=W(z^-) if and only if z lies on the spectral network S(π/2). Thus for z∉ S(π/2), we can choose the ordering of z^+ and z^- in such a way that W(z^+)>W(z^-). This means that there are no t-BPS discs ending at z that travel from z^+ to z^-, regarded as J-holomorphic strips. A similar energy argument applies to show that the parallel transport (<ref>) should be strictly upper-triangular for short enough α. We leave the discussion of the holonomy contributions of ℒ in (<ref>) and (<ref>) to Section <ref>.
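Schematically, the Stokes computation behind this bound reads as follows, using that λ_re vanishes on the cotangent fibre F_z and that, up to the rescaling identification, tW provides a primitive of λ_re along tΣ_ϕ:
E(u)=∫ u^∗ω=∫_∂ u u^∗λ_re=± t(W(z^+)-W(z^-)),
so positivity of energy rules out t-BPS discs travelling from z^+ to z^- whenever W(z^+)>W(z^-), as claimed above.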
It is worth remarking here that similar applications of Morse flow-tree techniques to study degeneration of holomorphic discs for (certain generalizations of) spectral curves also appeared in <cit.> and implicitly in <cit.>.
§.§ Set-up of the paper
The setup of the paper is now as follows. In Section <ref>, we introduce and gather the necessary ingredients from pseudo-holomorphic curve theory (namely, monotonicity) to establish the Floer-theoretic set-up that we will use throughout this paper. In Section <ref>, we discuss the geometry of ϕ-metrics and the wall-chamber decomposition induced by S(0). We will then study conditions under which the spectral curve is real exact and find particular deformation retracts of C̃-S(0), called C(δ;E), with the properties described in Section <ref>.
In Section <ref>, we will adapt the adiabatic degeneration techniques of <cit.> to prove Theorem <ref>. Finally, in Section <ref>, we will use a Gromov compactness argument to show that the local system is a non-abelianization up to signs. We will then compute the signs explicitly, by making careful choices of the spin structures, and prove Theorem <ref>.
§.§ Conventions
We use the following conventions:
* The canonical symplectic form on the cotangent bundle is dp∧dq.
* The Hamiltonian vector field associated to a smooth function H on T^∗M is defined by
i_X_Hω=-dH.
* All the holomorphic polygons are given anticlockwise boundary orientations regarded as the unit disc with punctures on the boundary in ℂ.
* When we take the identification ℂ^n≃ T^∗ℝ^n as a target symplectic manifold, we take the induced “standard complex structure" on ℂ^n to be given by z_k=x_k-iy_k, k=1,...,n.
* When we regard ℂ as a Riemann surface, or a conformal domain, we take the complex structure given by z=x+iy.
* The contact form on the jet bundle is given by dz-pdq.
* W^k,p denotes the (k,p)-Sobolev Space.
* Given a topological metric space (X,d), a subset N⊂ X, and x∈ X, we set d(N,x) to be the distance between N and x. Given l>0, we set B_l(N) to be the set of points x in X with d(x,N)<l.
* Given a complete Riemannian manifold (M,g) and x,y∈ M, we consider the induced topological metric d on M, and we define d(N,x) and B_l(N) accordingly.
* Given a complete Riemannian manifold (M,g), we set
r:T^∗M→ℝ, r(q,p)=‖p‖.
Here the norm of the covector p is taken with respect to g. Then we set
D_l^∗M ={(q,p)∈ T^∗M: r(q,p)<l}
S_l^∗M ={(q,p)∈ T^∗M, r(q,p)=l}.
* For l>0, we set:
A_l ={z∈ℂ:|z|<l}
∂ A_l ={z∈ℂ:|z|=l}
E_l ={z∈ℂ:|z|<l, Im(z)≥ 0}
∂ E_l = {z∈ℂ:|z|<l, Im(z)= 0}.
* Given a real positive function a_t defined on some subset I of ℝ and α∈ℝ_>0, we say that a_t is of size O(t^α) if there exists some C>0 such that
a_t<Ct^α
for all t∈ I.
* We adopt the convention that the infinite strip 𝒵=(-∞,∞)× [0,1] is given the conformal coordinate z=s+iτ. The parameter t is reserved for either the scaling parameter or family parameter.
§.§ Acknowledgements
The author would like to thank his supervisor, Ailsa Keating, for her continued support and encouragement. The author is also grateful to Ivan Smith for sharing his insights on general Floer theory; to Andy Neitzke for explaining non-abelianization; to Tobias Ekholm for explaining his paper on Morse flow-trees; to Roger Casals for interesting discussions on spectral networks and their relationship to Legendrian weaves, as well as for his kind hospitality at UC Davis; to Jack Smith for explaining how to orient Floer-theoretic moduli spaces which played an immense role for the sign computations in Section <ref>; to Jean-Philippe Chassé and Jeff Hicks for the joint collaboration on reverse isoperimetric inequalities, which played a crucial role in resolving a specific technical issue in the paper; to Yoel Groman and Sheel Ganatra for conversations on Floer theory on open manifolds; and to Noah Porcelli, Aleksander Doan, and Benedict Morrissey for useful conversations.
This project owes a significant amount of debt to the works of Davide Gaiotto, Greg Moore and Andy Neitzke <cit.>, <cit.>, Tobias Ekholm's work on Morse flowtrees <cit.>, and Ganatra-Pardon-Shende and Yoel Groman's works on Floer theory on open manifolds, namely, <cit.> and <cit.>. The author was sponsored by the Cambridge Trust scholarship.
§ FLOER THEORY ON COTANGENT BUNDLES OF OPEN MANIFOLDS
The main aim of this section is to establish a Floer-theoretic set-up on T^∗C̃ such that the Floer cohomology local system HF(tΣ_ϕ,F_z) is well-defined on C̃. To do this, we define Floer theory on cotangent bundles of the more general class of Riemannian open manifolds that are “flat at infinity".
In Section <ref> we introduce the notion of flatness at infinity, define finiteness conditions for Lagrangians, Hamiltonians and almost complex structures. In particular, we will introduce the class of vertically finite Lagrangians, which includes spectral curves associated to GMN complete quadratic differentials ϕ.
In Section <ref>, we review the notion of geometric boundedness. In Section <ref>, we discuss the basic monotonicity techniques.
In Section <ref>, we show using the monotonicity techniques and the arguments in <cit.> that the moduli space of Floer solutions satisfy the usual compactness and transversality properties. The key is showing that the relevant pseudo-holomorphic curves do not escape off to infinity. Here, the boundary conditions are given with respect to the classes of Lagrangians and almost complex structures defined in Section <ref>. This allows us to define, for instance, CF(tΣ_ϕ,F_z). In Section <ref>, we show that the Floer chain complex satisfies certain invariance properties up to isomorphism in cohomology.
This section is heavily based on the works of Sikorav<cit.>, Groman<cit.> and Ganatra-Pardon-Shende<cit.>.
§.§ Finiteness conditions and confinement via monotonicity
§.§.§ Flatness at infinity and finiteness conditions
We start with the following definition.
A Riemannian manifold (M,g) is flat at infinity if g is complete, there exists a compact subset K⊂ M such that g|_M-K is flat, and there exists an R_0>0 such that the injectivity radius of g is bounded below by R_0.
For instance, Euclidean space ℝ^n with its standard flat metric is flat at infinity. We will also see in Section <ref> that C̃ equipped with the flat metric desingularized at the branch points is also flat at infinity.
Consider the cotangent bundle T^∗M. Although it is not a Liouville manifold, it is very close to one. T^∗M admits the canonical Liouville form λ_re=p·dq and the canonical symplectic form ω=dp∧dq. Furthermore, the standard Liouville vector field Z:=p∂_p satisfies L_Zω=ω and ι_Z ω=λ_re. In addition to this, given any metric g on M, the unit sphere bundle S^∗M is a codimension 1 submanifold of T^∗M and the restriction of the Liouville form defines a contact form α on S^∗M.
Consider the diffeomorphism of the positive cone of (S^∗M,α) into T^∗M
[1,∞)× S^∗M→ T^∗M
given by sending the point (r,(q,p)) to its time-log(r) image under the Liouville flow. Since this is simply the map
(r,(q,p))→ (q,rp),
the pullback of the canonical Liouville form λ=pdq is equal to rα and the canonical symplectic form reads d(rα) on the positive cone. The Liouville vector field then takes the form rd/dr over [1,∞)× S^∗M. We see that the positive flow of the Liouville vector field is complete and that the image of [1,∞)× S^∗M covers the neighbourhood of the vertical infinity.
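Explicitly, if we temporarily write Ψ for the map (r,(q,p))↦ (q,rp) above, a tangent vector (δ r,δ q,δ p) at (r,(q,p)) is sent to (δ q, rδ p+pδ r), so that
Ψ^∗λ_re=rp·dq=rα and Ψ_∗(r d/dr)=rp∂_p=Z,
recovering both claims: λ_re pulls back to rα, and the Liouville field Z reads rd/dr in the cone coordinates.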
The Reeb field R over S^∗M is the unique vector field defined by the condition α(R,-)=1, dα(R,-)=0. At (q,p)∈ S^∗M, in geodesic normal coordinates, the Reeb vector field reads:
R:=∑_i=1^n p_i/‖p‖∂/∂ q_i.
The Reeb field over [1,∞)× S^∗M is defined as the field 0⊕ R⊂ TS^∗M. This is the Hamiltonian vector field associated to the linear function r. We now introduce objects which are compatible with the Liouville-like structure. We say that an almost complex structure or a Lagrangian submanifold is cylindrical if it is invariant under the positive Liouville flow. Furthermore,
An ω-compatible almost complex structure J is of general contact type if there exists a positive smooth function h:ℝ_>0→ℝ_>0 such that
h(r)dr=λ_re∘ J.
If this condition holds over {r>R} for some R≫ 1, we say that it is of general contact type at vertical infinity.
The almost complex structure J is of contact type if h(r)=1 and of rescaled contact type if h(r)=r.
The notion of almost complex structures of rescaled contact type comes from <cit.>.
Definition <ref> is equivalent to J mapping the kernel of α to itself on the level sets of r and swapping the Liouville flow Z with h(r)R.
In this paper, we will utilize the “canonical" ω-compatible almost complex structure on T^∗M induced from the metric g, called the Sasaki almost complex structure (see <cit.>, or for the full exposition, <cit.>). To define this, first note that given the projection π: T^∗M→ M, the kernel V of the derivative dπ:T(T^∗M)→ TM gives the canonical vertical distribution on T^∗M. Then the metric g gives rise to a distribution H on T^∗M called the horizontal distribution, for which the restriction dπ:H_p→ T_π(p)M for p∈ T^∗M gives a vector space isomorphism. We then identify H with (TM,g) with respect to dπ; we have the following covariant decomposition
TT^∗M =H⊕ V
=(TM,g)⊕ (T^∗M,g),
of TT^∗M. Regarding g as a real vector bundle isomorphism g:TM→ T^∗M, we get:
The Sasaki almost complex structure J_g is the almost complex structure on T^∗M defined by the following matrix
J_g:=
[ 0 +g^-1; -g 0 ],
with respect to the covariant decomposition (<ref>). We write g^S for the metric on T^∗M induced from ω and J_g.
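As a quick sanity check, squaring the matrix gives
J_g^2=[ 0 +g^-1; -g 0 ][ 0 +g^-1; -g 0 ]=[ -g^-1g 0; 0 -gg^-1 ]=-Id,
so J_g is indeed an almost complex structure on T^∗M.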
We have the following simple local computation.
Let η be a differential 1-form on M and let c:I→ M be a curve. Then the velocity vector of the curve C(t)=(c(t),η(c(t))):I→ T^∗M
has the following decomposition
dC(t)/dt=c'(t)^H⊕∇_c'(η)^V
with respect to TT^∗M=H⊕ V.
The Sasaki almost complex structure is not of contact type at infinity, so we deform the almost complex structure J_g as in <cit.> to find some conical deformations of J_g. The same conical deformation also appeared in <cit.>.
Let ρ:[1,∞)→ [1,∞) be a smooth increasing positive function such that ρ(r)=1 for r<3/2 and ρ(r)=r for r≫ 2. The following deformation of the Sasaki almost complex structure
J_con=[ 0 +ρ(r)^-1g^-1; -ρ(r) g 0. ],
is called the (ρ-)conical deformation of J_g. We write g_con for the Riemannian metric induced from ω and J_con.
Here the matrix is taken with respect to the decomposition (<ref>). Fixing a smooth ρ once and for all, we obtain our background almost complex structure J_con and our reference metric g_con on T^∗M. The following proposition is mentioned in <cit.>. We prove it for the sake of completeness.
Let λ_re be the canonical Liouville form on T^∗M. The Sasaki almost complex structure is of rescaled contact type. The deformed almost complex structure is invariant under the Liouville flow and satisfies
λ_re∘ J_con=dr
for r≫ 1. Hence, it is of contact type at infinity.
Let x be a point on M. Take the geodesic normal coordinates q=(q_1,...,q_n) and the corresponding covector coordinates p=(p^1,...,p^n) centred at x=(0,....,0). Since the statement of Proposition <ref> is local, we will show that the statement of Proposition <ref> holds for any (x,p).
We first observe that in the coordinate system (q_1,...,q_n,p^1,...,p^n), the Liouville field Z defined by ι_Z ω=λ can be written as
Z=∑_i=1^n p^i∂/∂ p_i
where n is the dimension of M. Therefore, we see that the Liouville flow is given by
ϕ_t:(x,p)→ (x,e^tp).
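As a quick check that this is indeed the Liouville field in these coordinates:
ι_Z ω=ι_∑_i p^i∂/∂ p_i(∑_j dp^j∧dq_j)=∑_i p^i dq_i=λ_re,
as required by the defining condition ι_Zω=λ_re.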
We now check that (ϕ_t)^∗J_con=J_con.
Computing (ϕ_t)^∗ J_con(x,p), we get
(ϕ_t)^∗J_con(x,p) =[ I 0; 0 e^-tI ]
J_con(x,e^tp)
[ I 0; 0 e^tI ]
=[ I 0; 0 e^-t I ][ 0 e^-tr^-1; -e^tr 0 ][ I 0; 0 e^tI ]= [ 0 r^-1; -r 0 ]=J_con(x,p).
This finishes the proof of invariance under the Liouville flow.
Furthermore, from
J_conZ= ∑_i=1^n p^i/ρ(r)∂/∂ q_i
which is r/ρ(r) times the Reeb vector field (unit geodesic flow) of λ_re, we see that the deformed Sasaki almost complex structure swaps the Reeb vector field and the Liouville vector field.
We now check that ker λ_re|_S^∗M is J_con-orthogonal to the distribution ⟨ R,Z⟩ generated by R and Z. We first check that ker λ_re|_S^∗M is J_g-orthogonal to the distribution ⟨ R,Z⟩ generated by R and Z. Indeed, given X=∑ a^i∂/∂ q_i+∑ b^i ∂/∂ p^i in ⟨ R,Z⟩^⊥_g^S, we get:
0=g^S(x,p)(Z,X) =∑ p^ib_i⇒ X∈ T_(x,p)(S^∗M)
0=g^S(x,p)(R,X) =1/r∑ p^ia_i=1/rλ_re(X)⇒ X∈ker λ_re(x,p).
Since the dimensions match up, we see that ⟨ R,Z⟩^⊥_g^S=ker λ_re|_S^∗M. The vector Z is totally vertical and R is totally horizontal, and since we are just rescaling the norm in each horizontal and vertical tangent space, a vector is orthogonal to Z (or R) in g^S if and only if it is orthogonal to Z (or R) in g_con. Hence ⟨ R,Z⟩^⊥_g^S=⟨ R,Z⟩^⊥_g_con and we have a J_con-invariant decomposition
T(T^∗M)=⟨ R,Z⟩⊕ker λ_re|_S^∗M.
Since J_con(Z)=r/ρ(r)R, we get that
λ_re∘ J_con vanishes on ⟨ R,Z⟩^⊥_g_con, so that
λ_re∘ J_con=λ_re∘ J_con|_⟨ R,Z⟩=r/ρ(r)· dr
where the last equality can be checked directly. This finishes the proof.
We introduce the class of horizontally finite Hamiltonians and Lagrangians.
Let (M,g) be a Riemannian manifold which is flat at infinity. Equip the cotangent bundle T^∗M with its background almost complex structure J_con.
* Let L be a Lagrangian submanifold in T^∗M which is cylindrical at infinity. We say that L is horizontally finite if π(L)⊂ K for some compact subset K⊂ M.
* Let H be a Hamiltonian function on T^∗M. We say that it is cylindrical if ZH=H at infinity, or equivalently, if H=hr for r≥ R, R≫ 1, where h:S^∗M→ℝ is a contact Hamiltonian. We say that H is horizontally finite if there exists a compact subset K⊂ M such that the support of H lies inside T^∗K.
We restrict to the following class of almost complex structures on T^∗M.
Let J be an ω-compatible almost complex structure. We say that J is an admissible almost complex structure if J is cylindrical at infinity and if there exists a compact subset K⊂ M such that J=J_con outside of T^∗K. We say that K is the horizontal support of J.
Let 𝒥(T^∗M) denote the space of ω-compatible admissible almost complex structures. Let S be a Riemann surface with boundary. A family of admissible almost complex structures parametrized by S is a smooth map
J:S→𝒥(T^∗M).
We will be concerned with a family of almost complex structures that is uniform in the following sense.
Let J:S→𝒥(T^∗M) be a family of admissible almost complex structures. Then J is uniformly cylindrical if there exists a subset of S× T^∗M, which is proper over S, such that outside of this subset the almost complex structures J_s|_s∈ S are invariant under the Liouville flow. A family of admissible almost complex structures is called uniformly admissible if there exists a uniform horizontal support, and if the family is uniformly cylindrical at infinity.
We now introduce the notion of vertically finite Lagrangians.
A properly embedded Lagrangian submanifold L in T^∗M which is a closed subspace of T^∗M is vertically finite if there exists an R≫ 1, ϵ_L>0 and a compact subset K_L⊂ M such that:
* L is contained in D_R^∗M,
* the complement M-K_L is an open submanifold of M and outside of T^∗K_L, the projection π:L→ M is a proper covering map,
* the space π^-1(K_L)∩ L is a manifold with boundary and consists of finitely many connected components,
* the submanifold L∩ T^∗(M-K_L) is totally g_con-geodesic and contained in the subset D_1^∗M,
* for all x∈ M-K_L, and x'∈π^-1(x) , B^g_con_ϵ_L(x')∩ L|_(M-K_L) is connected.
We say that a Lagrangian is finite at infinity if it is either horizontally finite or vertically finite.
We will show in Corollary <ref> that spectral curves associated to complete GMN quadratic differentials are vertically finite. Note that on D_1^∗M, g_con coincides with g^S. This is why we had ρ(r)=1 in a neighbourhood of S^∗M.
§.§.§ Geometric boundedness
We review the notion of geometric boundedness and tameness for almost complex manifolds (V,J) and totally real submanifolds of V, following <cit.>. This will be necessary for controlling the C^0 images of pseudo-holomorphic curves using monotonicity techniques. Recall that an almost complex manifold (V,ω,J) equipped with a symplectic form ω such that J is ω-compatible is called almost Kähler. The following definition of geometric boundedness is due to Ganatra-Pardon-Shende.
<cit.>
Let (V,ω,J) be a 2n-dimensional almost Kähler manifold equipped with a symplectic form. We say that (V,ω,J) is geometrically bounded if there is an open cover {V_α} of V and charts ϕ_α: B_1(0)⊂ℝ^2n→ V_α such that:
* the collection {ϕ_α(B_1/2(0))} also covers V,
* with respect to the standard metric on B_1(0),
sup_α‖ϕ_α^∗J‖_C^r<∞,
sup_α‖ϕ_α^∗ω‖_C^r<∞,
* there exists some r_0>0 such that
ϕ_α^∗ω(v,(ϕ_α^∗J)v)>r_0 g_std(v,v).
Furthermore, we say that an ω-Lagrangian submanifold L of V is geometrically bounded if the charts ϕ_α can be chosen in a way that ϕ_α^-1(L) is either empty or a linear subspace of B_1(0) for all α.
Let (S,ω_S,j_S) be an almost Kähler manifold. Suppose we have a family (V,ω_s,J_s) of almost Kähler structures over S. Then we say that (V,ω_s,J_s) is uniformly geometrically bounded if the almost Kähler manifold (V× S,ω_s⊕ω_S,J_s⊕ j_S) is geometrically bounded.
From geometric boundedness, one can obtain the tameness condition, which is originally due to Sikorav <cit.>.
<cit.>.
Let (V,g) be a Riemannian manifold. We say that g is (δ,c)-isoperimetric at p if given any closed curve γ:S^1→ B_δ(p), there is a disc D_γ in V such that ∂ D_γ=γ and
Area(D_γ)≤ cℓ (γ)^2.
Here ℓ(γ) is the length of γ.
Let (V,J,g) be an almost complex manifold equipped with a Riemannian metric g. Then we say that (V,J,g) is tame if there exist constants r_V,C_0,C_1,C_2>0 such that the following holds.
* The metric is complete, r_g=inf_x∈ M inj_x g>0, and r_V<r_g.
* (V,g) is uniformly (r_V,C_1)-isoperimetric.
* Over each ball B(p,r_V), there exists a local symplectic form ω_p such that ‖ω_p‖_g≤ C_0. Furthermore, ‖X‖_g^2≤ C_2 ω_p(X,JX).
Suppose (S,ω_S,j_S,g_S) is an almost Kähler manifold. A family of quadruples (V,J_s,ω_s,g_s) on S is said to be uniformly tame if (S× V,ω_S⊕ω,j⊕ J_s,g_s⊕ g) is tame.
Let (V,ω,J) be an almost Kähler manifold. Let g_J=ω(-,J) be the induced metric on V. Then (V,ω,J) is said to be tame if (V,J,g_J) is tame with respect to symplectic forms ω_p=ω|_B(p,r_V).
A family of almost Kähler structures (V,ω_s,J_s) parametrized on S is said to be uniformly tame if (S× V, ω_S⊕ω,j⊕ J_s) is tame.
Recall that a submanifold W of V is called totally real if TW∩ JTW=0.
<cit.>
Let W be a properly embedded totally real submanifold of V. A point p∈ W is (δ,c)-isoperimetric with respect to g if g is (δ,c)-isoperimetric at p, and for any chord γ:[0,1]→ B_δ(p) with endpoints on W, there is a half disc D with ∂ D=γ∪γ̃, γ̃⊂ W, such that
Area(D)≤ cℓ (γ)^2.
(<cit.>)
Let (V,J,μ) be as in Definition <ref>.
Let W⊂ V be a properly embedded totally real submanifold of V. Then W is said to be tame if there exists an r_W>0, C_W>0 such that the following holds.
* For x,y∈ W with d(x,y)_V<r_W, we have
d(x,y)_W≤ C_W d(x,y)_V.
* For all p∈ W, B(r_W,p)∩ W is contractible and there exists a symplectic form ω_p on B(r_W,p) satisfying the conditions in Definition <ref>, such that W∩ B(r_W,p) is ω_p-Lagrangian.
Given a uniformly tame family (V,ω_s,J_s) over an almost Kähler manifold S, we say that L is uniformly tame if ∂ S× L is tame in (S× V,ω_s⊕ω_S, J_s⊕ j_S,g_s⊕ g_S).
In particular, any Lagrangian submanifold of an almost Kähler manifold is totally real.
A properly embedded Lagrangian submanifold W⊂ V which is a closed subspace of V, in a tame almost Kähler manifold (V,J,ω,g), is said to be tame if there exists an r_W>0, C_W>0 such that:
* for x,y∈ W with d(x,y)_V<r_W, we have
d(x,y)_W≤ C_W d(x,y)_V;
* each B(r_W,p)∩ W is contractible.
Given a uniformly tame family (V,ω_s,J_s) parametrized on S, we say that W is uniformly tame if the totally real manifold ∂ S× W is tame in (S× V,ω_s⊕ω_S, J_s⊕ j_S,g_s⊕ g_S).
See Remark (<ref>) for the relationship between tameness and isoperimetricity.
The following well known proposition relates tameness with geometric boundedness.
Suppose (V,ω,J) is geometrically bounded, then it is tame. Furthermore, if W is a geometrically bounded Lagrangian submanifold of V, then W is also tame.
Groman's estimate <cit.> gives control over the isoperimetricity constants in terms of the sectional curvature and the injectivity radius. Jean-Philippe Chassé's estimate <cit.> shows tameness for geometrically bounded Lagrangian submanifolds.
Controlling the injectivity radius requires control over the sectional curvature and the volume comparison between the Euclidean volume and the volume induced from g_J; this requires the theorem in <cit.>. In particular, a uniform lower bound on the g-volume of the unit ball and an upper bound on the sectional curvature give a uniform lower bound on the injectivity radius. Controlling the injectivity radius and the cut locus distance (the supremum of the radial radius of the embedded tubular neighbourhood) for Lagrangians was done in <cit.> and <cit.> respectively.
The following proposition verifies geometrical boundedness of almost complex manifolds (T^∗M,ω,J) for J an admissible almost complex structure. This is a modification of <cit.> and we follow their proof closely.
Let J be an admissible complex structure on T^∗M. Let g_J be the metric induced from J and ω. Then the almost Kähler manifold (T^∗M,J,ω,g_J) is geometrically bounded. Furthermore, Lagrangians which are finite at infinity are also geometrically bounded.
Since admissible almost complex structures are cylindrical at infinity, we may assume that J is cylindrical without loss of generality, over the positive cone over some fixed sphere bundle of J_g-radius R>0 which depends only on J. This R only depends on the auxiliary function ρ.
Let p be a point near vertical infinity. Take the reverse Liouville flow to bring it down to the point q on the sphere bundle. Since J is invariant under the Liouville flow, we see that the geometry of (T^∗M,J,ω) near p is the same as the geometry of (T^∗M,J,rω) near q for some real number r≥ 1. Take geodesic normal coordinates (x_1,...,x_m) near q with respect to g=ω(J,-). Changing ω to rω rescales the metric by rg, but we can zoom in and send x_i'→ x_i=√(r)^-1x_i'. Taylor expansion in the x-coordinates gives
g=g_ijdx^idx^j=(g_ij(0)+O(x^2))dx^idx^j.
Here the O(x^2) term depends only on the curvature, the inverse metric g^ij at q and its covariant derivatives. In the rescaled coordinates, we get
rg =rg_i'j'(√(r)^-1dx^i')(√(r)^-1dx^j')
=(g_i'j'(0)+√(r)^-1O(x'^2))dx^i'dx^j'.
This implies that as r→∞, the local geometry at q uniformly converges to the linear Kähler geometry at T_q(T^∗M) induced by the triple (ω,J,g(0)). Hence, it suffices to bound the geometry of the sphere bundle. Let R_0 be as in Definition <ref>. Outside some compact subset of K, we can cover M with countably many balls B_r_i(x_i) such that 0<R_1'<r_i<R_0 for some R_1'>0. Since the curvature vanishes, the exponential map exp_x_i:B_r_i(0)→ B_r_i(x_i) is a local isometry. Taking the pullback via the exponential map and using the covariance of J_con, we see that the unit sphere bundle is trivial and we are simply bounding the geometry of S^n-1× B_r_i(0) equipped with the standard metric scaled by R^1/2 (The R^1/2 factor only appears because we had taken the sphere near infinity where the almost complex structure becomes conical). This is automatic.
Now suppose L is a horizontally finite Lagrangian. Then the same argument holds since it is conical at infinity and the horizontal support is contained in a compact subset of M. Suppose now L is a vertically finite Lagrangian. Let K_L and ϵ_L be as in Definition <ref>. By definition, there exists some compact subset K_1⊂ M such that J=J_con on T^∗(B_R_0(M-K_1)) and K_L∩ B_R_0(M-K_1)=∅.
For x∈ M-K_1, the restriction of the exponential map exp_x:B_R_0(0)→ B_R_0(x) is an isometry since the sectional curvature vanishes identically on the image, by the flatness at infinity condition. Consider the induced map
(exp_x,d(exp^-1_x)^∗):T^∗B_R_0(0)→ T^∗B_R_0(x).
Then
J|_D_3/2^∗B_R_0(x)=J_con|_D_3/2^∗B_R_0(x)=J_g|_D_3/2^∗B_R_0(x)
by definition, so that by the covariance of J_g,
(exp_x,d(exp^-1_x)^∗)^∗J|_D_3/2^∗B_R_0(x)=(exp_x,d(exp^-1_x)^∗)^∗J_g|_D_3/2^∗B_R_0(x)=J_g_std|_D_3/2^∗B_R_0(0).
Here g_std is the standard metric on ℝ^n. Of course, the metric induced from J_g_std is the standard metric on ℝ^2n.
Now any totally geodesic submanifold of ℝ^2n equipped with the standard flat metric is a linear subplane of ℝ^2n. Furthermore, since L→ M on B_R_0(x) is a proper covering, and any covering on a contractible open set is trivial, (exp_x,d(exp^-1_x)^∗)^-1(L) consists of finitely many disjoint Lagrangian subplanes of T^∗B_R_0(0)⊂ℝ^2n. By the final condition in Definition <ref>, setting ϵ'_L=min{ϵ_L,1/4,1/2(R_0)}, for any x'∈ (exp_x,d(exp^-1_x)^∗)^-1(L), B_ϵ'_L(x')∩ L is connected and consists of a single Lagrangian plane. Furthermore, (exp_x,d(exp^-1_x)^∗)^∗ω=ω_std. This finishes the proof of geometric boundedness of L. Note that we have derived tameness of L directly in the proof as well.
We also need the following “family" version of the geometric boundedness statement. This is a modification of <cit.>.
Let J:A_1→𝒥(T^∗M) be a uniformly admissible family of almost complex structures over S, then (A_1× T^∗M, j_A_1⊕ J,ω_A_1⊕ω_T^∗M) is geometrically bounded. Furthermore, if a Lagrangian submanifold L⊂ T^∗M is finite at infinity, then ∂A_1× L is geometrically bounded.
Suppose L is vertically finite. Since the family is uniformly admissible, there exists a compact subset K⊂ M such that over T^∗(M-K), J agrees with the background almost complex structure J_con^g. Furthermore, L is totally geodesic outside of T^∗K' for some compact subset K'⊂ M. Then the manifold ∂ A_1× L|_T^∗((K∪ K')^c) is geometrically bounded, and the statement for the compact part of L also follows. For the case where L is horizontally finite, repeat the argument in the proof of Proposition <ref>.
Replacing a family of almost complex structures on A_1 with an almost complex structure on A_1× T^∗M is called the Gromov Trick.
We now focus our attention back to C̃. We first show flatness at infinity.
Let ϕ be a complete GMN quadratic differential over C.
Let g be a Riemannian metric on C̃ that agrees with the singular metric g^ϕ outside a compact subset of C̃. Then (C̃,g) is flat at infinity.
By definition, the metric g is equal to g^ϕ in some neighbourhood of infinity. We consider the union U of the neighbourhoods of the poles contained in this region on which g=g^ϕ, chosen so that the points in U are of distance >1 away from the zeros of ϕ. Then if p∈ U, the flat coordinate W=∫√(ϕ(z)) can be extended over a disc of radius ≥ 1 centred at p. This shows that the minimal injectivity radius is positive. Hence g is flat at infinity.
Let ϕ be a GMN complete quadratic differential. Let (C̃,g) be as above. Then the pair T^∗C̃ and Σ_ϕ equipped with an admissible almost complex structure is geometrically bounded. Furthermore, the spectral curve Σ_ϕ is vertically finite.
Note that on D_1^∗M, J_con=J_g. Since Σ_ϕ lies in D_1^∗M, it suffices to show that outside T^∗K for some compact subset K of C̃ containing the branch points, Σ_ϕ is totally geodesic with respect to J_g induced by the flat singular metric g^ϕ. We now show that the spectral curve is vertically finite.
Let p^x and p^y denote the dual coordinates in the W-coordinate system. Outside of a compact set in C̃, the metric on the base equals dW^2 and the spectral curve reads {p^x=± 1, p^y=0} in the W=∫√(ϕ) coordinate system. Hence we may take the vertical sheet gap to be ϵ_Σ_ϕ=1/2.
§.§.§ Monotonicity techniques
Now we introduce monotonicity techniques and apply them to find a priori restriction on the diameter of the Floer trajectories. We first start with the statement of the monotonicity lemma from <cit.>,<cit.>.
Suppose J is such that g_J is (δ,C)-isoperimetric at p. Then any J-holomorphic curve u passing through p and with boundary in M-B_δ(p) satisfies
Area(u;u^-1(B_δ(p)))=∫_u^-1(B_δ(p))1/2‖du‖^2≥δ^2/2C.
If p∈ L and J,L are (δ,C) isoperimetric, the same holds if ∂ u ∩ B_δ(p)⊂ L.
For tame manifolds, we can set δ=r_W and C=1/4(C_1+1+C_W).
From the monotonicity lemma, we derive the following C^0 boundary estimate on the image of the J-holomorphic curves. This is <cit.>.
Let (V,J,ω,g) be a tame manifold. Let W be a tame Lagrangian submanifold of V. Let S be a connected Riemann surface with boundary and let K be a compact subset of V. Then there exists a constant C_5(W,K,E)>0 with the following property.
Let u:S→ V be a J-holomorphic map with Area(u)<E such that u(S)∩ K≠∅ and u(∂ S)⊂ K∪ W; then u(S)⊂ B_C_5(W,K,E)(K). Here C_5(W,K,E) can be chosen to depend linearly on E and r_W^-1.
We follow the proof of <cit.>. Set r_0=min(r_V,r_W). First, note that the compact subsets B(K,2nr_0) of V for n∈ℕ give an exhaustion of the manifold V. We will be done if we can show that there exists an N=N(W,E,K)∈ℕ such that the image of any u:S→ (V,K∪ W) is contained in B(K,2Nr_0). Intuitively, we regard the subsets B(K,2nr_0) as giving the nth “energy levels", and their boundaries ∂ B(K,2nr_0) as giving “energy shells". The idea of the proof is to show that every time u enters and leaves an energy shell, it loses a fixed finite amount of energy which does not depend on u.
Let u:S→ (V,K∪ W) be a J-holomorphic curve, then u(S)⊂ B(K,2(N+1)r_0) for some N>0. Choose the smallest such N. Then we can find points x_1,...,x_N such that u(x_i)∈∂ B(K,2ir_0).
We claim that the subsets B_r_0(u(x_i))∩ u(∂ S) lie in W. This is simply because of the condition u(∂ S)⊂ K∪ W. Applying the monotonicity lemma on each B_r_0(u(x_i)), we see that there exist constants (δ_i,C_i) such that for M=1,...,N,
∑_i=1^M∫_u^-1(B_r_0(u(x_i)))‖du‖^2>∑_i=1^M δ_i^2/C_i.
However, by tameness, we may choose the δ_is so that δ_i>r_W and 1/C_i>C_4 for some C_4. Therefore, we see that
∑_i=1^M∫_u^-1(B_r_0(u(x_i)))‖du‖^2>MC_4r_W^2,
giving the energy bound
E≥ NC_4 r_W^2
so that
E/C_4 r_W^2≥ N.
Hence N is bounded.
This finishes the first part of the proof. We now show the scaling properties of the constant C_5(W,K,E). Suppose that we rescale so that r_W is sent to t r_W. Then we need to take N/t^2 many t r_W-balls; hence we see that the image of u lies in the (N+2)r_W/t-neighbourhood of K. So C_5(W',K,E)=1/tC_5(W,K,E). Alternatively, scaling E to t E scales N by t. In this case, we see that u(S) is in the t(N+2)r_W-neighbourhood of K. So we have C_5(W,K,tE)=tC_5(W,K,E). This finishes the proof.
On the other hand, we will also need the interior estimate <cit.> by Groman. For the proof, see <cit.>.
Let (V,J,ω,g) be a tame almost Kähler manifold. Let L be a tame Lagrangian submanifold of V. Let E>0 and let K be a compact set of V. Then there exists an R=R(V,L,E,K)>0 such that the following holds.
* For any J-holomorphic map
u:A_1→ V
satisfying Area(u;A_1)≤ E and u(A_1/2)∩ K≠∅, we have
u(A_1/2)⊂ B_R(K).
* For any J-holomorphic map
u:(E_1,∂ E_1)→ (V,L)
satisfying
Area(u;E_1)≤ E and u(E_1/2)∩ K≠∅, we have
u(E_1/2)⊂ B_R(K).
We also have the following “family" version of the interior estimate:
Let (J_s,ω,g_s), s∈A_1 be a family of uniformly tame compatible triples on V. Let E>0 and let K be a compact set of V. Then there exists a compact subset R(K,E,J_s,ω,g_s) such that the following holds.
If u:A_1→ V satisfies (du)^0,1_J_s=0, Area(u;A_1)≤ E and u(A_1/2)∩ K≠∅, then
u(A_1/2)⊂ R(K,E,J_s,ω,g_s).
Furthermore, suppose L is a Lagrangian submanifold of V which is uniformly tame with respect to (J_s,ω,g_s). Then there exists a compact subset R'(K,E,J_s,ω,g_s,L) of V such that the following holds.
If u:(E_1,∂ E_1)→ (V,L) satisfies
(du)^0,1_J_s=0, Area(u;E_1)≤ E, and u(E_1/2)∩ K≠∅, then
u(E_1/2)⊂ R'(K,E,J_s,ω,g_s,L).
Reduce to Proposition <ref> by taking (A_1× V, j_A_1⊕ J_s,ω_A_1⊕ω).
§.§ Floer operations
We now utilize the estimates in Section <ref>. In Section <ref>, we discuss the compactification and transversality of moduli space of stable pseudo-holomorphic polygons. The key idea is to use geometric boundedness and convexity to bound the images of pseudo-holomorphic polygons. We use this to define the Floer chain complex. In Section <ref>, we define the notion of passive continuation strips. In Section <ref>, we derive various formulas for the geometric energy of the continuation strips. In Section <ref>, we show the C^0 confinement of the passive continuation strips. In Section <ref>, we construct continuation chain maps and discuss their properties. Finally, in Section <ref>, we discuss the path groupoid representation of the Floer cohomology local system given a wall-chamber decomposition on the base M. We only consider the case where M is two-dimensional. From now on, we assume that all the Lagrangians are exact, with respect to λ_re.
§.§.§ Compactness and transversality
Moduli Spaces
We first start with the compactness and transversality properties of the Floer moduli spaces. We follow <cit.> closely. In this section, we assume that all the Lagrangians are exact with respect to the canonical Liouville form λ_re.
Let L_1 and L_2 be a pair of transversely intersecting Lagrangians in T^∗M such that L_1 is finite at infinity, and L_2 is horizontally finite (see Definition <ref>). Since L_i is λ_re-exact, there are smooth functions f_i:L_i→ℝ on L_i such that df_i=λ_re|_L_i. Such functions f_i are unique up to constants. We choose the primitives f_1 and f_2 once and for all for L_1 and L_2 respectively. We define the action of an intersection point x∈ L_1⋔ L_2 by:
a(x):=f_1(x)-f_2(x).
Given such a pair (L_1,L_2), a choice of an s-invariant admissible family of almost complex structures J_L_i,L_j=J_L_i,L_j(τ) on the infinite strip 𝒵 is called the Floer datum of the pair L_1, L_2. For our purpose, it suffices to only consider the case where J_L_1,L_2 is given by a compact deformation of J_con, and we will assume so from now on.
For x,y∈ L_1⋔ L_2, let ℛ(L_1,L_2,J_L_1,L_2)_x↦ y be the moduli space of unparametrized J_L_1,L_2(τ)-holomorphic strips u between L_1 and L_2 with lim_s→ -∞ u(s,τ)=x and lim_s→ +∞ u(s,τ)=y. The space ℛ(L_1,L_2,J_L_1,L_2(τ)) is the union of the unparametrized moduli spaces ℛ(L_1,L_2,J_L_1,L_2(τ))_x↦ y for x,y∈ L_1⋔ L_2. The corresponding compactified moduli space is the union of the spaces of broken J_L_1,L_2-holomorphic strips from x to y.
More generally, let L_1 be a Lagrangian which is finite at infinity and let L_2,...,L_n be any finite collection of mutually transverse, horizontally finite Lagrangians which is transverse to L_1. For k≥ 2, let ℛ_k,1 be the compactified Deligne-Mumford moduli space [For details of Deligne-Mumford moduli spaces, see <cit.>).] of stable discs with k+1 marked points x_1...,x_k,y (labelled in anticlockwise directions). Let 𝒮_k,1 be the universal bundle over ℛ_k,1. Given a fibre S of 𝒮_k,1→ℛ_k,1, we say that a neighbourhood of the boundary marked points x_1,...,x_k,y and the nodes of the fibre inside the total space 𝒮_k,1 gives the thin part of the fibre S. The complement of the thin parts gives the thick part of the fibre S. For k=1, we set ℛ_1,1 to be the stack pt/ℝ.
Choose a Floer datum for each pair (L_i,L_j) for i,j=1,...,n, i≠ j. For every sequence 1≤ i_0< ....< i_k≤ n, choose “universal strip-like coordinates":
End^+_i_0,....,i_k;j :[0,∞)× [0,1]×ℛ_k,1→𝒮_k,1;j=1,....,k,
End^-_i_0,...,i_k :(-∞,0]× [0,1]×ℛ_k,1→𝒮_k,1,
and a uniformly admissible family of almost complex structures
J_i_0,...,i_k: 𝒮_k,1→𝒥(T^∗M)
such that the strip-like coordinates are compatible with gluing and the almost complex structures are compatible with gluing. By this we mean the following.
* For each j=1,...,l there is a boundary collar
ℛ_k,1×ℛ_l,1× (0,∞)→ℛ_k+l-1,1
given by glueing two ends at x_j with respect to the glueing parameter in (0,∞). Then the “glued" strip-like coordinates on ℛ_k+l-1,1 must agree with the universal strip-like coordinates specified on ℛ_k+l-1,1.
* The almost complex structures J must be compatible with glueing via End^±. By this we mean the following: End^±_j^∗J_i_0,...,i_k must be s-invariant; J_i_0,...,i_k+l-1 must be given by glueing with respect to End over the image of (<ref>); J_i_0,...,i_k+l-1 must coincide with the product of J_i_j-1,...,i_k+j-1 and J_i_0,...,i_j-1,i_k+j-1,...,i_k+l-1 over the image of ℛ_k,1×ℛ_l,1 under (<ref>).
For details, see <cit.>. We simply remark that given a negative (or positive) strip-like end with Lagrangian labels L_i_m and L_i_m+1, then End_m^-,∗J_i_0,...,i_k (or End_m^+,∗J_i_0,...,i_k) must be equal to the Floer datum J_L_i_m,i_m+1(τ) of the pair L_i_m and L_i_m+1. Note that here we have not said anything about the regularity of the moduli spaces.
We consider the moduli spaces
ℛ_k,1(y;x_1,....,x_k)=ℛ_k,1(y;x_1,...,x_k;L_i_0,...,L_i_k)
of stable J_i_0,...,i_k-holomorphic maps u:S→ T^∗M. Here the Lagrangian boundary conditions are given as in Figure <ref>; the marked point x_j is mapped to an intersection point of L_i_j-1 and L_i_j, and the marked point y is mapped to an intersection point of L_i_0 and L_i_k.
Compactness
We want to show that ℛ_k,1(y;x_1,...,x_k) is compact. First, we need the following lemma, which is originally due to Seidel-Abouzaid <cit.>, whose current form and the proof we have borrowed from Ganatra-Pardon-Shende <cit.>. For related ideas, see <cit.>.
Let (S,j) be a Riemann surface with boundary. Let J be an ω-compatible almost complex structure of general contact type (Definition <ref>) on {r>a} for some a>0. Let u:S→ T^∗M be a (j,J)-holomorphic curve such that u^∗λ_re|_∂ S≤ 0 on u^-1({r>a}); then u is locally constant over u^-1({r>a}). By u^∗λ_re|_∂ S≤ 0, we mean that the evaluation of u^∗λ_re on a positively oriented vector field along T∂ S is non-positive. Here the orientation is given with respect to j.
We follow the proof in <cit.>. For any smooth function f:ℝ→ℝ_≥ 0 satisfying f'≥ 0 and f(r)=0 for r≤ a, we have
0≤∫_S f(r(u))u^∗ω=∫_∂ Sf(r(u))· u^∗λ_re- ∫_S f'(r(u))· u^∗(dr∧λ_re).
To see this, note that
d(f(r(u))· u^∗λ_re)=f'(r(u))· u^∗(dr)∧ u^∗λ_re+f(r(u))· u^∗ω.
The first term on the right hand side of (<ref>) is ≤ 0, because of the condition u^∗λ_re≤ 0. Since (dr∧λ_re)(X,JX)≥ 0 for any vector field X, the second term is also ≤ 0. To see this, since h(r)dr=λ_re∘ J, we have
(dr∧λ_re)(X,JX)=dr(X)λ_re(JX)-λ_re(X)dr(JX)=h(r)(dr(X)^2+dr(JX)^2)≥ 0.
So
0≤∫_S f(r(u))u^∗ω=∫_∂ Sf(r(u))· u^∗λ_re- ∫_S f'(r(u))· u^∗(dr∧λ_re)≤ 0,
since u^∗λ_re≤ 0. As this holds for every such f, the non-negative form u^∗ω vanishes identically on u^-1({r>a}), so du=0 there and u is locally constant. This finishes the proof.
In particular, the condition u^∗λ_re≤ 0 in Lemma <ref> holds when connected components of ∂ S belong to Lagrangians that are cylindrical over {r>a} since on the cylindrical part u^∗λ_re=0. We now show that the moduli space is compact.
The moduli space ℛ_k,1(y;x_1,...,x_k) is compact.
We modify the proof in <cit.>. Let K_L_i, i≠ n, be compact subsets of M such that π(L_i)⊂ K_L_i. In the case L_n is vertically finite, let K_L_n be as in Definition <ref>. In the case L_n is horizontally finite, let K_L_n be a compact subset of M such that π(L_n)⊂ K_L_n.
Let R>0 be such that: (i) the Lagrangians L_is are either cylindrical or empty outside of D_R^∗M, and (ii) the almost complex structure J(s,τ) is cylindrical outside D_R^∗M. We furthermore demand that the Legendrian submanifolds Λ_i=L_i∩ S_R^∗M are either compact or empty, and that they are disjoint. Let K_0 be a compact codimension 0 submanifold with boundary of M such that on T^∗K_0^c, J(s,τ)=J_con. We assume that K_0 is large enough so that it contains all the compact subsets K_L_i of M. Here the radius of the codisc bundle is taken with respect to the metric g on the base M.
We first estimate the energy. Since the Lagrangians L_i_0,...,L_i_k are all exact and the almost complex structures in the family are ω-compatible, there is an upper bound E>0 such that if u∈ℛ_k,1(y;x_1,...,x_k) then du^2_L^2,J≤ E. Indeed, this follows from
1/2‖du‖^2_L^2,J=∫ u^∗ω=a(y)-∑ a(x_i)
where we used ω-compatibility in the first equality, and Stokes' theorem on dλ_re=ω and df_L_i=λ_re|_L_i in the second equality. Here a is the action of the intersection point defined in (<ref>).
Since the geometric energy admits a uniform finite upper bound, Proposition <ref> will be true by Gromov compactness if we can find a fixed compact subset K of M and R_3>0 such that if u∈ℛ_k,1(y;x_1,...,x_k) then the image of u lies in D_R_3^∗K. To do this, we show that outside of some compact subset the Lagrangians are uniformly separated near infinity, and argue via monotonicity to control the images of the thick parts and the thin parts.
We first show that the Lagrangians in question are uniformly separated at infinity, outside D_R^∗K_0, that is, we have a lower bound C>0 on the J(s,τ)-distance between the Lagrangians L_is outside of D_R^∗K_0.
When L_n is vertically finite, such a lower bound between L_i and L_n for i≠ n is obvious. We now show that horizontally finite Lagrangians are also uniformly separated outside of D_R^∗K_0. This was asserted in the proof of <cit.>, but we supply the full argument here.
Let h denote the metric obtained from a cylindrical ω-compatible almost complex structure. Take the pullback of the metric h to the positive cone S^∗M× [R,∞) over S^∗M and consider a vector field Y⊕ 0 in S^∗M× [R,∞) tangent to S^∗M. Let Z be the Liouville vector field. Since L_Z h=h and L_Z Y=0 for any vector field tangent to S^∗M, the S^∗M-component of the metric grows with a factor of r, and the norm of ∂_r scales with a factor of r^-1.
We are interested in how h(Y,Z) grows, for Y a vector field on T(S^∗M). Taking the Lie derivative, we check that:
h(Y,Z)=(L_Z h)(Y,Z)=Z· (h(Y,Z))-h(L_Z Y,Z)-h(Y,L_Z Z)=Z· (h(Y,Z)).
This can only happen if h(Y,∂_r) is r-invariant. Indeed,
L_r∂_r(h_yrdydr)=h_yr,rdydr+h_yrdydr
where we have used Cartan's formula: L_r∂_r(dr)=dr. Hence the metric is of the form
h=rh|_S^∗M+r^-1dr^2+∑ h_yrdydr
Taking r=s^2, we find that the metric is now of the form of the standard metric on the cone:
h=s^2h|_S^∗M+4ds^2+2s∑ h_yrdyds.
Let P be the local orthogonal projection to TS^∗M. Then
h(·,·)≥ h(P·,P·)=s^2h_S^∗M(P·,P·).
So it follows that
d_h(L_i,L_j)|_S^∗M× [R,∞)≥
R^2 d_h(Λ_i,Λ_j)>0.
This shows that the horizontally finite Lagrangians are uniformly separated at infinity. Modifying C_2 in the case L_n is vertically finite, we get our uniform lower bound C.
Having separated the Lagrangians at infinity, we deal with the thick part. Lemma <ref> tells us that the almost Kähler structures and Lagrangian boundary conditions seen by u restricted to some disc A_l (or half-disc E_l) in the thick part of S are uniformly geometrically bounded, and the geometric boundedness constants only depend on l, the family J, and the Lagrangians L_is. Lemma <ref> then tells us that if the image of u restricted to the aforementioned unit disc (or unit half-disc) intersects, on A_l/2 (or E_l/2), a large enough compact subset A⊂ T^∗M that separates the Lagrangians near infinity, then the image of u restricted to A_l/2 (or E_l/2) is contained in some R(A,E,J_s,ω,g_s,l) (or R'(A,E,J_s,ω,g_s,L_s,l)).
Since the thick parts are compact Riemann surfaces with boundaries and corners with uniform topology, they can be covered by a uniformly finite number of half-discs and discs of some uniform radius l_1>0, so that the shrunk discs of radius l_1/2 also cover the thick part of S. This constant does not depend on u but only on the topology of the thick part of S. The boundary conditions corresponding to horizontally finite Lagrangians lie in a compact set, since L_i∩ D_R^∗M for i≠ n is compact. So by repeatedly applying Lemma <ref>, we can use monotonicity to show that over the thick parts the J-holomorphic curves are a priori contained in D_R_1^∗ K' for some compact subset K'⊂ M and R_1>R>0.
So now it remains to show that we can compactly enlarge K' and R_1 so that the whole image of u is contained in the compact enlargement. We follow the strategy in the proof of <cit.>.
On the thin-parts, we have s-invariant almost complex structures J(τ). We take some constants R_2>R_1>0 so that the image of u restricted to the thick part is contained in D_R_2^∗M and outside of D_R_2^∗M, J(τ) is cylindrical. Let K_thin be a compact subset of M such that outside of T^∗K_thin, J(τ)=J_con. Then we take a codimension 0 submanifold-with-boundary K_base of M containing K_thin, K' and K_0, such that the g-distance d_base between ∂ K_base and K_L_i is positive.
Suppose now that the interval [a,b]× [0,1] in the thin part is mapped outside of the disc bundle D_R_2^∗K_base. Then we have
E≥∫_[a,b](∫_0^1 ∂_t u)^2 ≥ C (b-a),
Therefore, taking L=E/C, we see that if (a-b)>L then u cannot map [a,b]× [0,1] outside of D_R_2^∗K_base. So there exists some ϵ>0 such that [a,b]× [0,1] is covered by uniformly finite half-discs and discs of radius ϵ. Hence applying the interior estimate (Lemma <ref>) again, we can enlarge D_R_2^∗K_base to D_R_3^∗K so that the image of u is wholly contained in D_R_3^∗K. This finishes the proof.
Transversality
We now proceed on with the construction of A_∞-structures. For details, see <cit.>. In particular, we will postpone detailed discussion of orientation lines and spin structures to Section <ref> since there we will carry out explicit computations.
We now assume that the Lagrangians are graded, and that they are spin. Given each pair L_1, L_2, choose an initial s-invariant uniformly admissible family of almost complex structure J^in_L_1,L_2(τ)[For our purpose, it suffices to set J^in_L_1,L_2(τ)=J_con]. By Proposition <ref>, we can find some H>0 and a compact subset K⊂ M such that: (i) J^in_L_1,L_2 is cylindrical outside D_H^∗M, (ii) J^in=J_con outside T^∗K and u∈ℛ(L_1,L_2,J_L_1,L_2^in) are contained in the interior of D_H^∗K, (iii) D_H^∗K contains all the intersection points of L_1 and L_2, and (iv) if L_i is horizontally finite then K contains the horizontal support of L_i, and if L_i is vertically finite, then K contains the compact subsets K_L_i in the sense of Definition <ref>.
Consider the following space of ω-compatible almost complex structures
𝒥(K,H):={J: J=J^in_L_1,L_2(τ) outside D_H^∗(K).}
This space 𝒥(K,H), equipped with the C^k topology for large enough k>0 (which is equivalent to the uniform topology induced from g_con) is a Banach manifold modelled on the space of C^k-infinitesimal deformations 𝒴(U) that satisfies the conditions
YJ+JY=0 ω(Y·,·)+ω(·,Y·)=0 (Y)⊂D_H^∗K.
Note then under J→ Jexp(-JY), the class of horizontally finite almost complex structures stays invariant. Similarly, we can equip the space 𝒥(K,H) with the C^∞-topology which makes it a Fréchet manifold. Furthermore, applying the proof of Proposition <ref>, we see that there exists a compact set P containing every u∈ℛ(L_1,L_2,J) for J∈𝒥(K,H). Indeed, we can run the argument of Proposition <ref> outside of D_H^∗K where J=J_con where the uniform separation of the Lagrangians and tameness constants coincide for J∈𝒥(K,H).
[Given a general symplectic manifold M, the space 𝒥(M) of (tame, or compatible) almost complex structures is given the weak C^∞-topology. When the base M is compact, the space 𝒥^k(M) and the space 𝒥^∞(M) of C^k and smooth compatible almost complex structures are Banach and Fréchet manifold. However, when M is not compact, endowing Banach/Fréchet structures on such spaces become much more involved, unless one specifies appropriate decay condition at infinity for maps in W^k,p_loc.]
Since we are now in the situation covered in <cit.>, we can perturb the family J_L_0,L_1(τ) generically so that moduli spaces of holomorphic strips ℛ(L_0,L_1,J_L_1,L_2)_x↦ y are transversely cut out for all x,y∈ L_0⋔ L_1. Indeed, note that given a J^in_L_1,L_2-holomorphic curve u, the set of injective points is dense, and the images of such points must necessarily lie in the interior of D_H^∗K.
To ensure smoothness of the J-holomorphic strips via elliptic regularity, we need to find a Baire dense subset in the C^∞-topology whose associated moduli spaces of strips are transversely cut-out. This is done either using the Floer C^∞_ϵ-space or Taubes' trick. For details, see <cit.>. Note that since there are only finitely many intersections, and since finite intersections of Baire dense subsets are Baire dense, we can find a generic J(τ) such that all the moduli spaces ℛ(L_0,L_1,J_L_1,L_2)_x↦ y,x,y∈ L_0⋔ L_1 are transversely cut out.
A uniformly admissible family J(τ) of almost complex structures such that the moduli space of holomorphic strips ℛ(L_0,L_1,J(τ)) is transversely cut out is called a regular Floer-datum for the pair L_0,L_1.
Choose a regular Floer datum for each pair once and for all. We can regard each intersection point x∈ L_0⋔ L_1 as a constant holomorphic half-strip with boundary conditions given by a path L_s of Lagrangian subspaces of T_x (T^∗M) that begin at T_x L_0 and end at T_x L_1, that satisfy the grading constraints[If A is the induced Maslov grading on LGr(T_x (T^∗M)) and A_0,A_1 are chosen grading functions on L_0 and L_1 respectively, the path L_s must satisfy A(L_0)=A_0(x) and A(L_1)=A_1(x).]. Then we define the orientation line o_L_i,L_j,p to be the determinant of the linearized Cauchy-Riemann operator associated to x. Any other choice of paths that satisfy the grading constraints give a canonically isomorphic real line. For details, see <cit.>.
We define the Floer intersection complex
CF(L_1,L_2)=⊕_p∈ L_1∩ L_2o_L_1,L_2,p.
Since this is standard in literature, we conclude that we can define a chain complex structure on CF(L_i,L_j) by counting regular J-holomorphic strips. As usual, we call the cohomology HF(L_i,L_j) of this chain complex the Floer cohomology.
Carrying over to the general moduli space of stable discs is an inductive procedure. Again, we begin with an initial admissible family J_i_0,...,i_k^in of almost complex structures that satisfy the consistency conditions in (<cit.>). In particular, at the strip-like ends, the pullback of the family with the universal strip-like coordinates agrees with the chosen regular Floer datum of the pair (L_i_m,L_i_m+1). We can use Proposition <ref> again to find a compact subset K of M and H>0 such that J is cylindrical outside D_H^∗M, J=J_con outside T^∗K and u∈ℛ(y;x_i_0,...,x_i_k) is contained in D_H^∗K.
Then using 𝒥(K,H) again, we are back in the situation covered in <cit.>; we can perturb the family of almost complex structures
J_i_0,...,i_k:𝒮_k,1→𝒥(T^∗M)
generically so that all the moduli spaces ℛ(y;x_1,...,x_k) are transversely cut out and so that the consistency conditions are still satisfied. By using the same trick, we see that we have a Baire dense subset of regular (family of) almost complex structures in the C^∞-Fréchet space of ω-compatible almost complex structures.
Furthermore, the higher operations
μ_k: CF(L_i_0,L_i_1)⊗...⊗ CF(L_i_k-1,L_i_k)→ CF(L_i_0, L_i_k)[2-k]
can be defined by counting holomorphic discs in the zeroth dimensional part of the moduli spaces ℛ_k,1(y;x_1,...,x_k). Again, we will describe in detail how orientation lines enter the story in Section <ref>, so we skip the discussion of that. Then studying the boundary stratification of the 1-dimensional part of the moduli spaces give the A_∞ relations.
§.§.§ Continuation strips
We now pose the continuation strip moduli problem. Let L_s be an exact Lagrangian isotopy of horizontally finite Lagrangians. We say that L_s is uniformly horizontally supported if there exists a compact subset K of M such that π(L_s)⊂ K. Given such an exact Lagrangian isotopy, we can find some time-dependent horizontally finite Hamiltonian H_s such that L_s=ψ_s(L) with uniform horizontal support (Definition <ref>). Here ψ_s is the Hamiltonian flow associated to H_s. Let V be a vertically finite Lagrangian submanifold. We say that an exact Lagrangian isotopy V_s:V× [0,1]→ T^∗M of vertically finite Lagrangians is compactly supported if there exists a compact subset K of V such that V_s(v,s)=v for v outside of K. Note that given a uniformly horizontally finite isotopy ψ_s, the isotopy V_s=ψ_s^-1(V) is compactly supported.
Let L_s be a uniformly horizontally supported isotopy of horizontally finite Lagrangians. Suppose V is transverse to L_0 and L_1. As we explained in Section <ref>, we can compactly perturb the constant family J_con=J_con(τ) to find a regular Floer datum J_0(τ),J_1(τ) for the pair (V,L_0) and (V,L_1), respectively, such that the moduli spaces ℛ(V,L_0,J_0(τ)) and ℛ(V,L_1,J_1(τ)) are transversely cut out.
Choose a uniformly horizontally supported Hamiltonian isotopy ψ_s generating L_s=ψ_s(L_0). Given a pair ((L_0,V,J_0),(L_1,V,J_1)), fix a uniformly admissible family J̃(s,τ) of almost complex structures on 𝒵
such that J̃(s,τ)=J_0(τ) for s≤ -N, J̃(s,τ)=J_1 for s≥ N, and J̃ is given by a compact perturbation of the constant family J_con on [-N,N]. [By this, we mean that J(s,τ)=J^in(s,τ) outside some compact subset of T^∗M.] Let
J(s,τ)=(ψ_l(s)^∗)J̃
Note that J satisfies J(s,τ)=(Dψ_1^-1)_∗ J_1(τ)(Dψ_1)_∗=(ψ_1)^∗J_1 for s≥ N.
Choose a smooth increasing elongation function l:[0,∞)→ [0,1] such that l(s)=0 for s<-N and l(s)=1 for s>N.
We say that a map u:𝒵→ T^∗M is a J-holomorphic strip with a passive moving Lagrangian boundary condition if the following equation is satisfied.
∂̅_J u=0
u(s,0)⊂ V_l(s)
u(s,1)⊂ L_0
lim_s→∞ u(s,τ)∈ L_0∩ V
lim_s→ -∞ u(s,τ)∈ L_0∩ψ_1^-1(V).
We call the solutions passive continuation strips.
We now introduce the classes of homotopies of families that we will use to show certain invariance properties of HF. Suppose we are given two uniformly admissible families of almost complex structures J̃^0 and J̃^1 on 𝒵 such that J̃^i(s,τ)=J_0 for s≤ -N and J̃^i(s,τ)=J_1 for s≥ N for some N>0. Then we say that a path J̃^t of family of almost complex structures between J̃^0 and J̃^1 over 𝒵 is a uniformly admissible homotopy if (i) there exists some N'>0 such that J̃^t(s,τ)=J_0 for s≤ -N' and J̃^t(s,τ)=J_1 for s≥ N', and (ii) there exists a compact set K⊂ M and some R>0 such that outside T^∗K, J̃^t=J_con and J̃^t(s,τ) is cylindrical for all s,t∈𝒵 and τ∈ [0,1] outside D_R^∗M. The Hamiltonian counterpart is as follows; suppose we are given a family of time-dependent Hamiltonians H^t_s and suppose there exists an R>0 and a compact subset K⊂ M such that H_s^t is cylindrical outside D_R^∗M and π( H_s^t)⊂ K for all s and τ. Then we say that such a family is a uniformly cylindrical and horizontal.
We now state the result, whose proof we postpone to Section <ref>.
Let V be a vertically finite Lagrangian and let L be a horizontally finite Lagrangian. For uniformly horizontally supported isotopies L_s such that L_s⋔ V,s=0,1 there exists a chain map called passive continuation map
c^passive =c_(L_0,J_0)→(L_1,J_1):CF(V,L_0,J_0)→ CF(V,L_1,J_1).
The passive continuation map has the following properties.
* A uniformly horizontally finite generic homotopy (L_s^t,J̃^t) relative to end points generated by a uniformly cylindrical and horizontally supported family of Hamiltonians H_s^t induces a chain homotopy map
H:CF^∗(V,L_0)→ CF^∗+1(V,L_1)
for the passive continuation maps.
* There is a chain homotopy between c_L_0→ L_1∘ c_L_1→ L_2 and the continuation map c_L_0→ L_2 associated to concatenation of isotopies. Hence the continuation maps are well-defined up to isomorphisms in cohomology.
* For constant maps L_s=L, the induced continuation map is the identity.
* For any uniformly horizontally finite isotopy, the passive continuation maps (<ref>) are quasi-isomorphisms.
As usual, we define the continuation chain maps using holomorphic strips with moving Lagrangian boundary conditions. The difficulty lies in making sure we pose the right moduli problem for the holomorphic strips so that they don't escape off to infinity; we will shortly show that the images of passive continuation strips are a priori confined. (Proposition <ref>)
Then an argument as in the end of Section <ref> tells us that for generic J, the moduli spaces of solutions of (<ref>) are transversely cut out. This will show the existence of a chain map
ĉ:CF(V_0,L_0,J_0)→ CF(V_1,L_0,(Dψ_1^-1)_∗J_1 (Dψ_1)_∗).
Making use of the trivial isomorphism
CF(V_1,L_0,(Dψ^-1)_∗J_1 (Dψ)_∗)≃ CF(V,L_1,J_1)
induced by the global Hamiltonian isotopy ψ^-1_1:T^∗M→ T^∗M,
we get the passive continuation map
c̃:CF(V,L_0,J_0)→ CF(V,L_1,J_1)
defined by the following commutative diagram.
CF(V,L_0,J_0) CF(V,L_0,J_0)
CF(V,L_1,J_1) CF((ψ_1)^-1V,L_0,(Dψ_1^-1)_∗J_1(Dψ_1)_∗).
["Id", from=1-1, to=1-2]
["ĉ", from=1-2, to=2-2]
["Id"', from=2-1, to=2-2]
["c̃"', from=1-1, to=2-1]
§.§.§ Energy Formula
We now derive energy upper bounds for solutions of (<ref>). We first start with the following formula from Oh <cit.>.
Let (X,dα) be an exact symplectic manifold and let L⊂ X be an exact Lagrangian. Let ψ_s be a Hamiltonian isotopy on X and let F be the primitive of α on L. Let L_s=ψ_s(L), then
F_s= F+∫_0^s (-H_t∘ i_t+α(X_H_t)∘ i_t)dt
satisfies dF_s=i_s^∗α. Here i_s=ψ_s∘ i where i:L→ X is the inclusion map.
We also need the following formula for the integral on the moving part.
Suppose we have (X,dα,L,ψ_s) as above. Suppose γ:[0,1]→ X is a curve such that γ(s)∈ L_s. Then we have
∫γ^∗α= F_0(γ(0))-F_1(γ(1))+∫_0^1 H_s(γ(s)) ds.
Let γ̃=ψ_s^-1(γ(s)). Then γ̃ is a curve on L. Consider the following homotopy on [0,1]× [0,1] defined by:
v(s,t)=ψ_st(γ̃(s)).
Then it follows that:
v^∗ω=∫_0^1 ∫_0^1 sdsdt d(H_st∘ψ_st)(γ̃(̃s̃)̃)ds.
Taking the change of coordinate τ=st, we get
v^∗ω =∫_0^1 ∫_τ^1 dτ ds d(H_τ∘ψ_τ)(γ̃(s))
=∫_0^1 ∫_τ^1 ds ∂(H_τ∘ψ_τ)(γ̃(s))/∂ s
=-∫_0^1 (H_τ∘ψ_τ)(γ̃(τ))dτ +∫_0^1 (H_τ∘ψ_τ)(γ̃(1)) dτ
=∫_0^1 H_τ(γ(τ))dτ+∫_0^1(H_τ∘ψ_τ)(γ̃(1)) dτ
On the other hand, Stokes' theorem gives us
v^∗ω= ∫γ̃^∗α+∫α(X_H_t)(ψ_t(γ̃(1)))-∫γ^∗α
Equating the two sides give us the desired formula.
Combining the two lemmas, we arrive at the following expression for the energy of discs with moving boundary conditions
Suppose S is a disc with k+1 marked points x_0,...,x_k+1. Identify each of the anticlockwise ordered boundary segments ∂ S_1,...,∂ S_k+1,∂ S_0 with [0,1]. Suppose we have moving Lagrangian labels L_0,...,L_k+1, L^s_i=ψ_s(L_i) as above, with L_i^0=L_i, L_i^1=L_i+1. Suppose the Lagrangians L_j,j=0,...,ks are mutually transverse. Let u:S→ T^∗M be a continuation disc with moving Lagrangian labels with respect to Hamiltonians H_s:S→ C^∞(T^∗ M,ℝ). Choose the primitives of L_i^s as in Proposition <ref>. Then the geometric energy of the solutions of (<ref>) satisfy:
∫_S 1/2du^2_J=∫_S u^∗ω=∑_i a^+(x_i)-∑_i a(x_i^-)+ ∫_∂ S H_s(u)ds.
In particular, if the isotopies H_s are compactly supported on L_is, then the geometric energy is bounded by a constant which depends only on the original action of the intersection points and H_s.
There exists some N≫ 1 such that l(s) is locally constant outside of [-N,N]. Fix such an N. Then we can regard L_l(s) as a family on [-N,N]. Define the primitives of λ_re of L_l(s) with respect to this family on [-N,N] using Lemma <ref>. Then the proof follows from Lemmas <ref> and <ref>.
§.§.§ Confinement of continuation Strips
We now show the C^0 confinement of passive continuation strips and construct continuation chain homomorphisms. Firstly, given a moving Lagrangian boundary condition L_s, let ℒ:={(s,p):s∈ [0,1], p∈ L_s}. Then ℒ is a totally real submanifold of A_1× T^∗M.
We have the following analogue of Lemma <ref>.
Let J:A_1→𝒥(T^∗M) be a uniformly admissible family of almost complex structures over A_1, then (A_1× T^∗M, j_A_1⊕ J,ω_A_1⊕ω_T^∗M) is geometrically bounded. Furthermore, if L⊂ T^∗M is finite at infinity, then ∂A_1× L is geometrically bounded. If the submanifold W⊂ (A_1× T^∗M,j_A_1⊕ J,ω_A_1⊕ω_T^∗M) is totally real, and coincides with some ∂A_1× L outside a compact subset for L a Lagrangian finite at infinity, then W must be tame.
The proof is as before; the Lagrangian submanifold ∂A_1× L is geometrically bounded by Lemma <ref>. So since W agrees with ∂A_1× L outside a compact subset, W must be geometrically bounded as well.
From now on, we do not distinguish horizontally finite Hamiltonian isotopies with a uniform horizontal support, and exact Lagrangian isotopies with a uniform horizontal support. From Lemma <ref>, we arrive at the following corollary:
Suppose L_s is an exact Lagrangian isotopy of horizontally finite Lagrangians with uniform horizontal support. Given an R>0, let
ℒ_p≤ R:={(s,p):s ∈ [0,1], p∈ L_s, p≤ R }.
Then ℒ_p≤ R is tame.
Alternatively, suppose K_s is an exact Lagrangian isotopy of vertically finite Lagrangians with uniform horizontal support. The totally real submanifold
𝒦={(s,p):s∈ [0,1], p∈ K_s}
is tame.
The first case follows immediately since ℒ_p≤ R is compact. The second case satisfies the hypothesis of Lemma <ref> so we are done.
We have the following analogue of Proposition <ref> for J-holomorphic curves with moving boundary conditions:
Let L_s,s∈ [0,1], be an exact Lagrangian isotopy of horizontally finite Lagrangians with uniform horizontal support and let ψ_s be the horizontally finite Hamiltonian isotopy generating L_s. Let V be a vertically finite Lagrangian. Then the following holds.
There exists a compact set K=K(J(s,τ),L_s,V,l)⊂ T^∗M such that the solutions of (<ref>) are contained in K.
We modify the proof of <cit.>. The boundary conditions are fixed for s>>0 and s<<0, so the moving boundary conditions appear only on the compact part S_N:=[-(N+1),(N+1)]× [0,1] for some N≫0. We can split the strip 𝒵 into the thin part (-∞,-N-1)× [0,1]∪ (N+1,∞)× [0,1] and the thick part S_N.
We control the thick part using tameness for totally real submanifolds. Consider the compatible triple (S_N× T^∗M,j⊕ J,ω_ℂ⊕ω_T^∗M). Note that the manifold
𝒱:={(s,p):s∈[0,1],p∈ V_s}
is totally real with respect to j⊕ J. Furthermore, it is (ω_S_N⊕ω_T^∗M)-Lagrangian outside some compact subset of (V_s(l)). So by Lemma <ref> and Corollary <ref>, the manifold 𝒱 must be actually tame with respect to (j⊕ J). Then from the a priori bound on the geometric energy and tameness, we see that the image of the thick part S_N must be a priori confined by Proposition <ref>. The analysis for the thin-part is unchanged. This finishes the proof.
Note that the proof does not extend to the case where L and K are both vertically finite.
§.§.§ Continuation maps
Recall the set-up in <ref>. From the discussion in Section <ref>, we had chosen a generic compact perturbation J_i(τ),i=0,1 of the constant family J_con as regular Floer datum for pairs (V,L_0) and (V,L_1). Then we further chose an initial family of uniformly admissible almost complex structures J̃^in on (-∞,∞)× [0,1] for (<ref>) such that J̃^in(s,τ)=J_0(τ) for s≪ 0 and J̃(s,τ)=J_1(τ) for s≫ 0. Then we set J^in(s,τ)=(ψ_l(s))^∗J̃^in.
We see from Proposition <ref> that for such a family J^in, the solutions of (<ref>) are compactly confined. Just as we did in the end of Section <ref>, (See expression (<ref>)), we perturb the family J^in to J over this compact set such that the solutions of (<ref>) are transversely cut out. Then for this perturbed J, J̃=(ψ_s^-1)^∗J is called a regular perturbation datum for ((V,L_0),(V,L_1),ψ_s,J̃^in). Furthermore, the 1 dimensional part of this moduli space is compactified as usual.
We briefly explain how to orient the moduli space of passive continuation strips. We adopt the notions from <cit.>. Suppose we have Lagrangian branes (V,A_V,P_V) and (L_s,A_s,P_L_s). For x∈ V⋔ L_0 and y∈ V⋔ L_1, choose a path of Lagrangian subspaces L_T(x), T∈ [0,1] from T_x V to T_x L_0, and a path L_T(y) from T_y V to T_y L_1, that satisfy the grading constraint. We furthermore choose a path of spin structures (P_T)_x over L_T(x) and isomorphisms of Spin torsors (P_0)_x≃ P_V(x) and (P_1)_x≃ P_L_0(x). We choose an analogous path of spin structures (P_T)_y over L_T(y). Then given a regular u satisfying (<ref>) we glue the constant half-strips x and ψ_1^-1(y) to the strip-like ends of ψ_s∘ u. Then given the glued disc x♯ u♯ψ_1^-1(y), the induced spin structure on the boundary is given by glueing (P_T)_x, ψ_s^∗P_V(u), ψ_1^∗(P_T)_y and ψ_s^∗P_L_s(u) along the boundary of x♯ u♯ y. We orient the tangent space at u by the induced spin structure— for details, see <cit.>.
So just as we suggested above, by counting the 0th dimensional parts, we get an induced chain map
c^passive =c_(L_0,J_0)→(L_1,J_1):CF(V,L_0,J_0)→ CF(V,L_1,J_0)
which we call the passive continuation map.
Two passive continuation maps are concatenated as indicated by the following commutative diagram
CF(V,L_0,J_0) CF(V,L_0,J_0)
CF(V,L_1,J_1) CF(ψ_1^-1(V),L_0,ψ_1^∗J_1)
CF(ψ_2^-1(V),L_1,(ψ_2^-1)^∗J_2) CF((ψ_1^-1∘ψ_2^-1)(V),L_0,(ψ_2∘ψ_1)^∗J_2)
CF(V,L_2,J_2) CF(V,L_2,J_2) ["Id", from=3-1, to=4-1]
["Id"', tail reversed, from=2-1, to=2-2]
["ĉ_̂1̂2̂", from=2-1, to=3-1]
["c̃_̃0̃1̃", from=1-1, to=2-1]
["ĉ_̂0̂1̂", from=1-2, to=2-2]
["Id", tail reversed, from=1-1, to=1-2]
["Id", tail reversed, from=4-1, to=4-2]
["Id", tail reversed, from=3-1, to=3-2]
["ĉ_̂1̂2̂'", from=2-2, to=3-2]
["Id"', from=3-2, to=4-2]
We are now ready to show Proposition <ref>.
We only sketch the proof. See <cit.> and the construction in <cit.> for details. We first show the first assertion. Suppose we are given a homotopy of Lagrangian isotopies L^t_s=ψ_s^t(L), that is fixed at the endpoints s=0,s=1 generated by a homotopy ψ_s^t of uniformly cylindrical and horizontally supported Hamiltonian isotopies. Set V^t_s=(ψ_s^t)^-1(V).
Recall that we had chosen a uniformly admissible family J̃^in for the pair of triples ((V,L_0,J_0),(V,L_1,J_1)) such that J̃^in(s,τ)=J_0 for s≪ 0, J̃^in(s,τ)=J_1 for s≫ 0. Suppose J̃^0 is a regular perturbation datum for ((V,L_0),(V,L_1),ψ^0_s,J̃^in) and J̃^1 is a regular perturbation datum for ((V,L_0),(V,L_1),ψ^1_s,J̃^in). Suppose furthermore there exists an initial uniformly admissible homotopy of ω-compatible almost complex structures J̃^t,t∈ [0,1] extending J̃^0 and J̃^1 such that each J̃^t is given by compactly perturbing J̃^in(s,τ) for s∈ [-2,2]. Set J^t(s,τ)=(ψ^t_l(s))^∗J̃^t. The corresponding family of passive continuation strip equations is given by:
∂̅_J^t u=0
u(s,0)⊂ V^t_l(s)
u(s,1)⊂ L_0
lim_s→∞ u(s,τ)∈ L_0∩ V
lim_s→ -∞ u(s,τ)∈ L_0∩(ψ^t_1)^-1(V).
By the properness of the map (ψ^t_s)^-1:[0,1]× [0,1]× T^∗M→ T^∗M, and application of the arguments in the proof of Proposition <ref>, we may enlarge K, and R>0 such that: (i) the horizontal support of ψ^t_s is contained in K, (ii) ψ^t_s is cylindrical outside D_R^∗, (iii) J^t=J_con outside T^∗K, and (iv) solutions of (<ref>) are contained in D_R^∗K for t∈[0,1]. In particular, condition (i) implies that the set T^∗K is invariant under ψ^t_s. Let R_1> R be such that ψ^t_s(D_R^∗K)⊂ D_R_1^∗K.
Replacing u(s,τ) with ũ(s,τ)=ψ_s^t(u(s,τ)), we arrive at an equivalent family of equations
∂ũ/∂ s+J̃^t(s,τ)∂ũ/∂τ-l'(s)X^t_l(s)(ũ)=0
ũ(s,0)⊂ V
ũ(s,1)⊂ L_l(s)
lim_s→∞ũ(s,τ)∈ L_0∩ V
lim_s→ -∞ũ(s,τ)∈ L_1∩ V.
The Hamiltonian perturbation datum in the sense of Seidel <cit.> is given by the Hamiltonian valued 1-form B(s,τ)=l'(s)H^t_l(s)ds. Indeed, the corresponding Hamiltonian vector field valued 1-form is Y^t=l'(s)X^t_l(s)ds and (<ref>) just reads (dũ-Y^t)^0,1=0 as usual. Let 𝒥(K,R_1,J^0,J^1) be the space of homotopy of uniformly admissible almost complex structures Ĵ^t rel endpoints such that Ĵ^t=J̃^t outside D_R_1 ^∗K. Let ℋ(R_1,K) be the space of Hamiltonians supported inside D_R_1^∗K.
Now further perturb the equation (<ref>) by replacing J̃^t with Ĵ^t in 𝒥(K,R_1,J^0,J^1) and B^t(s,τ) with B̂^t(s,τ)=B^t(s,τ)+Q^t(s,τ) where Q^t(s,τ) is a family of Hamiltonian valued 1-forms taking values in ℋ(R_1,K). We may assume that the 1-form vanishes on the boundary. Let Q̂^t be the vector field valued 1-form obtained from Q(s,τ)^t and Ŷ^t=Y^t+Q̂^t. Consider the following equation
(dũ-Ŷ^t)_Ĵ^t^0,1=0
ũ(s,0)⊂ V
ũ(s,1)⊂ L_l(s)
lim_s→∞ũ(s,τ)∈ L_0∩ V
lim_s→ -∞ũ(s,τ)∈ L_1∩ V.
Let R_2>R_1 such that the image of (ψ_s^t)^-1(D_R_1^∗K) is contained in D_R_2^∗K. The most important feature of (<ref>) is that the endpoint Lagrangian conditions now match. Abusing notation, let J^t be the pullback of Ĵ^t via ψ^t_l(s). The pullback equation (ψ_s^t)^-1(ũ) solves (du-Z^t)^0,1_J^t=0 for Hamiltonian vector field valued 1-forms Z^t coming from the Hamiltonian-valued 1-form Q(s,τ)∘ψ^t_s. Indeed, as before,
(dψ_s^t)^-1(dũ-Y^t)_Ĵ^t^0,1=(du)^0,1_J^t
and so
(dψ_s^t)^-1(dũ-Ŷ^t)^0,1_Ĵ^t=(du-Z^t)^0,1_J^t.
Note that Z^t is supported on D_R_2^∗K. In particular, the geometric energy ∫du-Z^t^2 is bounded above in terms of u^∗ω and the curvature integrand (See <cit.>). [Actually, the curvature integrand vanishes in this situation since Z^t vanishes on the boundary and
ω(∂_s u-Z,J(∂_s u-Z))=ω(∂_s u-Z,∂_t u)=u^∗ω-dH(∂_t u).
] The boundary conditions are the same as in (<ref>) and outside D_R_2^∗K, solutions of (du-Z^t)^0,1_J^t solves (<ref>) that the solutions of (<ref>) are still compactly confined on, say, D_R_3^∗ K_1, for any Ĵ^t and with respect to the bound on sup_t ∇ Q^t. Note that the bound on sup_t∇ Q^t will however, depend on Ĵ^t.
Then we may use the solutions of (<ref>) to construct the desired chain homotopy H. To achieve transversality, we use the Banach manifold 𝒥(K,R_1,J^0,J^1) and ℋ(R_1,K) and run the standard transversality argument, say as in the proof of <cit.>). This is essentially the same strategy as in <cit.>. Then we count the zero dimensional component of the moduli space of solutions of (<ref>) for generic Ĵ^t and Ŷ; the 1-dimensional component has boundary either that induced from strip breaking or the solutions of the equation (<ref>) for t=0,1 so we get the desired chain homotopy relation.
We now discuss the second bullet point. Note that showing that the following commutative diagram holds up to chain homotopy,
CF(ψ_1^-1(V),L_0,ψ_1^∗J_1)
CF(V,L_0,J_0) CF((ψ_1^-1∘ψ_2^-1)(V),L_0,(ψ_2∘ψ_1)^∗J_2)["ĉ_̂0̂1̂", from=2-1, to=1-2]
["ĉ_̂1̂2̂'", from=1-2, to=2-3]
["ĉ_ψ_2 ∘ψ_1"', from=2-1, to=2-3]
reduces to the standard case discussed in <cit.> since the endpoint conditions match.
The last point on passive continuation maps being quasi-isomorphisms follows because uniformly horizontally finite isotopies are compactly supported on V, and so are their inverses. Therefore, the “inverse movie" 𝒱^-:={(s,v):p∈ V_-s} is still tame. Hence the same argument applies and we can explicitly construct the chain inverse map.
Suppose now that V is a vertical finite Lagrangian in T^∗M and suppose that the set
V(F):={m∈ M: T_m^∗M is transverse to V}
is dense. Given two points m,m'∈ V(F) and a path homotopy class α between m and m', we can find a piecewise smooth representative of α such that i) each of the smooth components α_i are embedded curves in M, and ii) the endpoints are contained in the set V(F). We call the induced passive continuation map the parallel transport map associated to α. From Proposition <ref>, we readily obtain:
A relative path homotopy class α between m,m∈ V(F) as above induces a parallel transport map
Γ(α): HF(V,F_α(0),J_0)→ HF(V,F_α(1),J_1)
with the following properties:
* parallel transport maps are isomorphisms,
* parallel transport maps are compatible with respect to concatenation of paths,
* parallel transport maps only depend on the path homotopy classes.
In particular, the assignment
z↦ HF(V,F_z)
equipped with the parallel transport maps defines a local system on M.
§.§.§ Path groupoid representation
We now relate everything we discussed to path groupoid representations of the Floer cohomology local system. We first recall the definition in Section <ref>. Let M be a two dimensional manifold which is flat at infinity. Suppose furthermore that M admits a compactification M, by which we mean that there is a proper embedding of M into a compact two dimensional manifold M such that M_∞=M-M consist of finite set of points.
(Definition <ref>)
In the above set-up, M admits a wall-chamber decomposition if there exists a finite collection M^0 of points on M, and a collection M^1 of embedded arcs (called walls) in M satisfying the following conditions.
* If w∈ M^1, then w connects a point in M^0 to either a point in M^0 or a point in M_∞.
* Given a point m_0∈ M^0, there exists a wall W such that m_0∈∂ W, and given a point m_∞∈ M_∞, there exists at least one point m_0 in M^0 and a wall W∈ M^1 such that ∂ W={m_0}∪{m_∞}.
* The walls in M^1 only meet at the points in M^0∪ M_∞.
* The complement M^2 of all the walls s in M^1 decompose M into a finite disjoint union of contractible components (called chambers).
(Definition <ref>)
Given a wall-chamber decomposition (M^0,M^1,M^2), we say that a collection of points 𝒫_M in M is a set of base points if each component of M^2 contains at least one element of 𝒫_M. Given the base points 𝒫_M, the path groupoid 𝒢_M=𝒢_M(𝒫_M) is the groupoid whose objects are points in 𝒫_M, and whose morphisms are path-homotopy classes between the points in 𝒫_M. A collection of morphisms of 𝒢_M is said to be a path groupoid generating set if their concatenations generate 𝒢_M.
A path groupoid representation of a GL(k;ℛ)-local system consists
of the following data.
* A free rank k ℛ-module E_b for each b∈𝒫_M together with an isomorphism
ℛ^⊕ k≃ E_b.
* A morphism
Γ(α):E_b→ E_b'
given a path homotopy class α∈π_1(b,b')_M, b,b'∈𝒫_M, such that Γ(α) is compatible with path concatenations.
Two path groupoid representations (𝒫_M,E',P) and (𝒫_M,E',P') are said to be equivalent if for each b∈ M there are isomorphisms
g_b:E_b→ E_b'
such that (i) the following diagram commutes for α∈π_1(b,b'), b,b'∈𝒫_M:
E_b E_b'
E'_b E'_b'["Γ'(α)", from=2-1, to=2-2]
["Γ(α)", from=1-1, to=1-2]
["g_b"', from=1-1, to=2-1]
["g_b'"', from=1-2, to=2-2]
,
and (ii) the isomorphisms g_b are compatible with the isomorphisms (<ref>) to ℛ^⊕ k above.
Suppose now V is a vertically finite Lagrangian over M. Suppose we can choose a finite set of points 𝒫_M(V) on M such that 𝒫_M(V) contains at least one point from each chamber of (M^0,M^1,M^2) and that F_b and V are transverse for b∈𝒫_M(V). Choose a grading on T^∗M. Suppose furthermore that V is spin and graded. Choose a spin structure on M and the induced spin structure on F_b (See Section <ref> for details), and a spin structure on V. Then consistent sign choices can be made so that the chain complex CF(V,F_b) is a ℤ-graded ℤ-module, for b∈ P_M(V). Suppose we have a compactly supported exact Lagrangian isotopy V∼ V' such that the support lies outside π^-1(𝒫_M(V)). Then CF(V',F_b) can also be made a ℤ-graded ℤ-module in a compatible manner. In particular, the quasi-isomorphisms CF(V,F_b)→ CF(V',F_b) for b∈ P_M(V) commute with parallel transport maps. Then the following proposition rewrites our discussion in Section <ref>, in the language of path groupoids.
Let k= HF(V,·). The following data forms a path groupoid representation of a GL(ℤ;k)-local system.
* Points 𝒫_M(V).
* The free ℤ-module HF(V,F_b).
* Parallel transport maps
Γ(α):HF(V,F_b)→ HF(V,F_b')
defined as in <ref>
Furthermore, let Γ'(α) denote the parallel transport maps associated to the ℤ-modules HF(V',F_b) for b∈𝒫_M(V). Then the two path groupoid representations (𝒫_M(V),HF(V,F_b),Γ(α)) and
(𝒫_M(V),HF(V',F_b),Γ'(α)) are equivalent.
We may instead think of the global local system z↦ HF(V,F_z) as the local system induced from the path groupoid representation (𝒫_M(V),HF(V,F_b),Γ(α)). We will switch between these two conceptual pictures depending on whichever is more convenient.
§ DESINGULARIZATION AND REAL-EXACT SPECTRAL CURVES
In this section we discuss the geometry and topology of real-exact spectral curves. In Section <ref>, given a small deformation parameter δ>0, we deform the singular metric g^ϕ on C^∘ to a Kähler metric g^ϕ_δ on C̃ as we discussed briefly in Section <ref>. In Section <ref>, we define what we mean by “non-constant discs bounded between the fibre and the spectral curve", also called in this paper as BPS discs, and provide various conformal models. In Section <ref>, we look at the toy case of ϕ=zdz^2 on ℂ=ℂP^1-{∞}, whose spectral curve is isomorphic to Σ_ϕ={(p^z)^2-z=0} on ℂ^2=T^∗ℂ. Then we discuss how the associated spectral network is related to BPS discs.
In Section <ref>, we discuss the wall-chamber decomposition of C induced from a complete saddle-free GMN quadratic differential ϕ. In Section <ref>, we discuss the geometry of real-exact spectral curves. Like we said in Section <ref>, we show that given an energy cut-off E≫ 1 we can deform C-S(0) to a bounded open subdomain C(δ;E) of C such that horizontal trajectories passing through z∈ C(δ;E) never enter sufficiently small neighbourhoods of the zeroes of ϕ. Furthermore, we show in Proposition <ref> that outside of S(π/2), we have a canonical ± ordering on the lifts of the points in z∈C̃ with respect to the projection π:Σ_ϕ→C̃.
§.§ Desingularization
We provide a way of deforming the singular ϕ-metric to a smooth metric on C̃. We first start with the case of ϕ=zdz^2. This deformation depends on some auxiliary choices but all the deformed metrics are conformally equivalent and they agree near infinity. The singular flat metric g^ϕ=zdz^2 in polar coordinates reads
g^ϕ=r(dr^2+r^2 dθ^2).
Choose a δ>0 and a smooth strictly increasing positive function ψ_δ:[0,∞)→ [1,∞) such that ψ_δ(r)=r for r<δ and r=1 for r>3/2δ. The metric
g_δ^ϕ=r/ψ_δ(r)(dr^2+r^2 dθ^2)
is now globally defined on ℂ, and conformal hence invariant with respect to the standard complex structure on ℂ. So g_δ^ϕ is actually a Kähler metric since we are in complex dimension 1, though it is not real analytic.
Recall that we call a quadratic differential complete if it does not admit poles of order one. Let ϕ be a complete GMN quadratic differential and let b_1,...,b_n be the zeroes of ϕ. Recall from Proposition <ref> that:
Let b be a simple zero of ϕ. Then there exists a neighbourhood U_b of b, an open set D of ℂ containing zero, and a biholomorphism ξ=ξ_b:(D,0)→ (U_b,b) such that
ϕ(ξ)dξ^2=(3/2)^2 ξ dξ^2.
Furthermore, the germ of the biholomorphism is unique up to a factor of some c=exp(k/3(2π i)) for k=0,1,2.
We may assume that ϕ in (<ref>) is dξ^2. Let U_i=U_b_i and ξ_i=ξ_b_i. By shrinking if necessary, we may assume that the open sets U_i are disjoint and that ξ_i^-1(U_i)=D(r_i) for some r_i>0. Having made these choices, we define:
Let 0<r<min{r_1,...,r_n}. Let b_i, U_i, ξ_i and D(r_i) be as above. Let U_i(r)=ξ_i(D(r)). We define
U(r)=⋃_i=1^n U_i(r).
By choosing δ<1/2min{r_1,...,r_n}, we use the local form (<ref>) near each branch point to conformally deform the flat metric g^ϕ to obtain a global smooth metric on C̃ which we still denote as g_δ^ϕ. Note that for any other choice of δ'<δ and ψ'_δ', the resulting metric g_ϕ^δ' is conformally equivalent to g_δ^ϕ. Furthermore, the conformal factor is a smooth positive function which is equal to 1 except on some small annular regions near each of the zeroes of ϕ. We call the metrics obtained by this general method (conformally) desingularized metrics.
§.§ The Moduli problem
We now define the moduli problem that we are interested in. Let ϕ be a complete GMN quadratic differential, g^ϕ the induced singular flat metric on C, and g^ϕ_δ a desingularization of g^ϕ that we constructed in Section <ref>. We do not require ϕ to be saddle-free. Here J=J_ϕ is the induced almost complex structure on T^∗C̃ with respect to g_δ^ϕ, and J_con its conical deformation.
§.§.§ Conformal structures
We start with a brief discussion on the conformal model _m of the closed unit disc with m punctures on the boundary, which was constructed by Ekholm in <cit.>. Given points c=(c_1,...,c_m-2)∈ℝ^m-2, we consider the subdomain of (-∞,∞)×[0,m] given by removing m-2 horizontal slits in the direction of +∞, of width 0<ϵ≪ 1, starting from the points (c_j,j) for j=1,...,m-2. A boundary component I with both of its ends at +∞ is called a slit boundary component. Given a slit boundary component I, the boundary minimum of I is the unique point with the smallest real part along I. We can regard each of these subdomains as giving conformal structures on _m induced by z=s+it.
Note that translating by (t,...,t) on ℝ^m-2 for t∈ℝ gives a biholomorphism of this subdomain and hence a conformal equivalence between two different conformal structures on _m. Quotienting ℝ^m-2 by the t-action gives ℝ^m-3. In <cit.>, Ekholm shows that there is a diffeomorphism between ℝ^m-2/ℝ and the space of conformal structures on _m. In particular, we recover the unique conformal structure on _3.
§.§.§ t-BPS discs ending at z
We now provide several models of t-BPS discs ending at z. Let 𝒵=(-∞,∞)× [0,1] be the infinite strip. Then
Let 0≤ t≤ 1. A map u:𝒵→ T^∗C̃ is a t-BPS disc ending at z in the infinite strip model if it satisfies the following equation:
∂̅_̅J̅u=0
u((-∞,∞)×{0})⊂ tΣ_ϕ
u((-∞,∞)×{1})⊂ F_z
lim_s→±∞u(s,τ)∈ F_z∩ tΣ_ϕ
lim_s→ -∞u(s,τ)≠lim_s→∞u(s,τ).
A map u:_3→ T^∗C̃ is a t-BPS disc ending at z in the _3-model if it satisfies the following equation:
∂̅_Ju=0
u(s,0)⊂Σ_ϕ
u(s,1)⊂Σ_ϕ
lim_s→ -∞u(s,τ)∈ tΣ_ϕ
lim_s→ +∞, 0≤ t≤ 1-ϵ u(s,τ)∈ tΣ_ϕ∩ F_z.
lim_s→ +∞ 1-ϵ≤τ≤ 2 u(s,τ)∈ tΣ_ϕ∩ F_z
u(slit)⊂ F_z
lim_s→ +∞, 0≤τ≤ 1-ϵ u(s,τ)≠lim_s→ -∞ 1+ϵ≤ t≤ 2.
where slit is the unique slit boundary component of _3. The notion of J_con-BPS discs with the various conformal models is defined by replacing J with J_con. When t=1, we just write BPS discs.
See Figure <ref> for the triangle model. The infinite strip model is useful when dealing with degeneration of continuation strips and the slit model is useful when carrying out adiabatic degeneration techniques. One can pass from the strip model to the slit model by removing the point (0,0) on 𝒵. Similarly, by removal of singularity, one can remove the strip-like end at s=-∞ from the _3-model and return to the strip model.
We will now construct explicitly some BPS discs ending at z. The J-disc we will construct here as a submanifold coincides with the vertical strip constructed in <cit.>. In fact, the construction of the metric g_δ^ϕ was initially motivated by the problem of finding a suitable metric on C̃ making the vertical strips J-holomorphic for the Sasaki almost complex structure J. This construction will not be used in Sections <ref>– <ref>, but we have included it here for the sake of completion.
Let ϕ be a complete GMN quadratic differential. Let z∈ S(0), then there exists a BPS disc at z.
Recall that the spectral network S(0) is the critical graph of the singular foliation on C̃ given by horizontal trajectories. In particular, the walls on S(0) are ϕ-trajectories with maximal domain of definition an open interval of form (a,∞) or (a,b) where a and b are both finite. The metric g_δ^ϕ (<ref>) constructed in Section <ref> is radial near the zeroes of ϕ and all the radial rays are geodesics with respect to g_δ^ϕ. Furthermore, the walls on the spectral network S(0) initially propagate at the zeroes of ϕ as positive radial rays of phase 0 and ± 2π/3. Thus the walls on the spectral network lie on some g_δ^ϕ-geodesics.
Let g=g_δ^ϕ. Given an arc-length parametrized h-geodesic γ:(-ϵ,ϵ)→C̃, let S_γ be the embedded plane in T^∗C̃ given by the parametrization
(s,τ)→(γ(s),-τ g((∂_s γ)(s)))
for (s,τ)∈ (-ϵ,ϵ)×ℝ, where we regard h as an isomorphism TC̃→ T^∗C̃. To see that the S_γ is J_h-holomorphic, note that by Lemma <ref>:
∂_s(S_γ)(s,τ) =((∂_s γ)(s))^H
∂_t(S_γ)(s,τ) =-(g((∂_s γ)(s)))^V
and
J=[ 0 g^-1; -g 0 ]
in the horizontal-vertical decomposition, that
∂_s (S_γ)(s,τ)+ J(∂_t(S_γ)(s,τ)) =((∂_s γ)(s))^H-(g^-1(g(∂_s γ(s))))^H
= ((∂_s γ)(s))^H-((∂_s γ)(s))^H=0.
Here the fact that γ was a geodesic was crucial; we have ∇_∂_s γ(s)∂_s γ(s)=0. In particular, suppose η is any reparametrization of γ, then the curve (∂_s η(s),-g(∂_s η(s))) in T^∗C̃ traces out an arc in S_γ.
Let z∈ S(0). Let w be the wall of S(0) containing z. Let γ:(a,b)→ C be a unit speed geodesic defined on (a,b) containing a finite closed interval [a',b'] such that γ|_[a',b'] is contained in the wall w, γ(a')=z_0≠ z, and γ(b')=z, where z_0 is a zero of ϕ contained in w. Then one can check that the ϕ-geodesic equation implies that the complex vector √(ϕ(γ(s))) is actually colinear to (∂_s γ)(s). So the arcs (γ(s),±√(ϕ(γ(s)))) and (γ(b'),-τ g(∂_s γ)(b')), for s∈ [a',b'] and t∈ℝ, are contained in the J-holormophic plane S_γ (<ref>) associated to γ. Furthermore, they bound a simply connected domain in S_γ. So by the Riemann mapping theorem, we have constructed our J-holomorphic disc u.
§.§ The toy case
Now we discuss the case of ϕ=z dz^2 to illustrate how the spectral network relates to the existence of BPS discs. We remark that for general complete GMN quadratic differentials, one needs the adiabatic degeneration argument in Section <ref>.
We can identify ℂ^2≃ T^∗ℂ and Σ_ϕ with {(p^z)^2-z=0} in ℂ^2. Recall that g^S is the metric induced on ℂ^2 and Ω is the canonical holomorphic symplectic form on ℂ^2 (Defined in <ref>). Let Ĩ be the horizontal lift of I. In conformal normal Kähler coordinates, we have:
g^S= dz^2+d(p^z)^2
Ω=dp^z∧ dz
J=[ 0 Id; -Id 0 ] Ĩ=[ i 0; 0 -i ].
Note that g^S is Ĩ and J invariant. Let K=ĨJ and ω_I=g^S(Ĩ-,-). Then the imaginary part ω_π/2=1/2i(Ω-Ω̅) of Ω is given by g^S(K̃-,-). Furthermore, since
ω_K(v,Jv)=g^S(Kv,Jv)=-g^S(v,KJv)=-g^S(Iv,v)=ω_I(v,v)=0,
the imaginary part of Ω vanishes on the interior of a J-holomorphic disc.
The spectral curve Σ_ϕ is exact with respect to the holomorphic Liouville form λ. We choose a primitive W of λ
W(p^z,z)=2(p^z)^3/3.
Then we have dW|_Σ_ϕ=λ.
Given a complex number z∈ℂ, write
z_θ= e^-iθz+e^iθz̅/2.
For the quadratic differential ϕ=zdz^2, the spectral network S(θ) consists of three positive rays of phases e^i2θ+2π k/3,k=0,1,2 emanating from the origin. Comparing with (<ref>), we see that we have the following alternative characterization of the spectral network S(θ) in terms of the holomorphic primitive W:
The spectral network S(θ) is the locus of points z on ℂ such that
W(π^-1(z))=W(±√(z),z)_θ+π/2=(± z√(z))_θ+π/2=0.
For a∈ℂ-{0}, let {a^0,a^1} be the set of the lifts of a on Σ. Since W(w,z)=±2/3 z√(z) on (z,w)∈Σ, we see that S(0) is the locus of points a∈ℂ such that the imaginary value of W(a^i) is equal to zero for i=0,1.
We now give an ordering to the pair provided that Re(W(a^0))≠Re(W(a^1)). The equality happens if and only if the real part vanishes, which then implies that a is on S(π/2). Based on this fact, for a∉ S(π/2), we order the two lifts of a by a^± with respect to the relation
Re(W(a^+))>Re(W(a^-).
We will construct a simillar ordering in Section <ref>.
We now provide a Floer theoretic reformulation of the characterisation of the spectral network for {(p^z)^2-z=0}. From now on we fix the phase θ=0.
Let ϕ=zdz^2 on ℂ. The spectral network S(0) is locus of the points z on ℂ such that there exists a BPS disc (<ref>) ending at z.
We utilise the exactness of the holomorphic Liouville form. Since
∫ u^∗Ω=W(z^+)-W(z^-)=4z^3/2/3,
and ω_π/2 vanishes in the interior of any J-holomorphic disc, there can be no BPS disc ending at z for z∉ S(0). From Proposition <ref>, we see that we can construct explicitly some BPS disc ending at z∈ S(0).
Notice that for the case ϕ=zdz^2, Proposition <ref> is much stronger than Theorem <ref>. However, since most spectral curves are not exact with respect to the holomorphic Liouville form λ, our argument in this section cannot be applied for general spectral curves Σ_ϕ.
§.§ Domain decomposition
We now discuss the domain decomposition that comes from the spectral network S(0) associated to a saddle-free GMN quadratic differential ϕ. We assume that ϕ is GMN and complete. We use the conventions introduced in the ϕ-metric part in Section <ref>.
Given a class [γ]∈ H_1(Σ;ℤ), its charge Z(γ) is defined by the following formula:
Z(γ)=∫_γλ
where γ is a smooth representative of [γ]. The induced ℤ-additive homomorphism
Z:H_1(Σ;ℤ)→ℂ
is called the charge homomorphism.
Given a saddle trajectory γ of phase θ we can join the two lifts of γ so that the charge of the corresponding class in H_1(Σ,ℤ) is of phase e^-iθ. Furthermore, by rotating the quadratic differential ϕ to e^i2θϕ for generic θ, we can make the image of Z avoid ℝ_>0∪ℝ_<0. This means that by rotating the quadratic differential by a generic phase, we can always obtain a saddle-free quadratic differential (See <cit.>).
We have the following result on the conformal equivalence classes of the connected components (which we called the chambers) of C-S(0) for saddle-free, complete quadratic differentials ϕ. For the proof, see Chapters 6 and 9-11 of <cit.>, and Sections 3.4-3.5 and Lemma 3.1 of <cit.>.
Let ϕ be a complete, saddle-free quadratic differential. Then the connected components of C̃-S(0)) are conformally equivalent to one of the following.
* Vertically finite horizontal strips
𝒵(a,b)={z∈ℂ:a<Im(z)<b}
for some -∞<a,b<∞. The boundary of 𝒵(a,b) consist of separating horizontal trajectories given by extending the biholomorphism to the lines {Im(z)=a, Re(z)> a_0}, {Im(z)=a, Re(z)< a_0}, {Im(z)=b, Re(z)< b_0}, {Im(z)=b, Re(z)> b_0} for some a_0,b_0∈ℝ. In other words, the biholomorphism extends to a continuous map 𝒵(a,b)→ℂ which is a surjection onto the closure of the corresponding horizontal chamber component, such that the points a_0+ia and b_0+ib are mapped to zeroes of ϕ.
* The open upper half-plane
ℋ:={z∈ℂ:im(z)>0}.
Again, there exists some x_0∈ℝ such that the biholomorphism extends to a continuous map ℋ→C̃ which is a surjection onto the closure of the corresponding horizontal chamber component, where the point x_0+i· 0 is mapped to a zero of ϕ, and the lines {im(z)=0, re(z)>x_0} and {im(z)=0, re(z)<x_0} are mapped to separating horizontal trajectories.
In both cases, the pullback of ϕ under the conformal equivalence is equal to dz^2. In fact, these domains are given by maximal analytic continuations of ∫√(ϕ(z)) along open neighbourhoods of generic horizontal trajectories. Both of these domains are traced out by generic horizontal trajectories.
From now on, we will not distinguish the horizontal chambers of ϕ (which are open conformal subdomains of C̃) from their conformally equivalent counterparts 𝒵(a,b) and ℋ (which are open conformal subdomains of ℂ). From the proposition, we see that given a δ>0 we have an ϵ(δ)>0, h(δ)>0 and η(δ)>0 such that the h(δ)-neighborhoods of horizontal trajectories which trace out the horizontal subdomains
𝒵(δ;a,b) =𝒵(a+ϵ(δ),b-ϵ(δ))⊂𝒵(a,b)
ℋ(δ) =ℋ∩{y>ϵ(δ)}
never enter the (slightly thickened) neighbourhood U((2+η)δ). For later purposes, we demand that η>0 is small enough so that outside U((2-η)δ), g_δ^ϕ=g^ϕ. We sometimes call the latter neighbourhood the desingularization region.
Note that 𝒵(δ;a,b) and ℋ(δ) are naturally deformation retracts of the horizontal chambers of ϕ. Taking the union of the horizontal subdomains 𝒵(δ;a,b) and ℋ(δ) inside each of the horizontal chambers, we obtain our domain C(δ;∞).
There exists a conformal subdomain C(δ;∞)⊂C̃ which is a disjoint union of deformation retracts of connected components of C̃-S(0) which satisfies the following.
There exists an h=h(δ;E)>0 and an η(δ)>0 such that if γ is a horizontal trajectory passing through z∈ C(δ;∞), then the h(δ)-neighborhood of γ lies strictly outside the desingularization region U((2+η)δ).
§.§ Real-exact spectral curves
We now look at real-exact quadratic differentials ϕ, which, as stated in the introduction, is the main object of our interest. Recall that (Section <ref>) we have the identification of the real cotangent bundle and the holomorphic cotangent bundle via
dx→ dz, dy→ -idz.
Recall that a complete GMN quadratic differential ϕ is called real-exact if the spectral curve Σ_ϕ associated to ϕ is sent to a λ_re-exact Lagrangian. Equivalently, this means that Σ_ϕ is exact with respect to the real part of the holomorphic Liouville form:
λ_θ=0:=λ+λ̅/2.
We discuss when saddle-free GMN quadratic differentials give real exact spectral curves. Then for ϕ real-exact, we find an open subdomain C(δ;E)⊂C̃ which is a deformation retract of C(δ;∞), such that the energy of a BPS disc ending at z∈ C(δ;E) is a priori bounded above by 2E. Furthermore, we also construct a vertical neighbourhood 𝒱 of the “truncated" spectral network (see Definition <ref>) such that we have a preferred ordering z^+,z^- of the lifts π^-1(z), for which the geometric energy of a t-BPS disc ending at z, that travels from z^+ to z^-, is strictly negative; hence we show the non-existence of such J-discs.
§.§.§ Criterion for real exactness
Given a horizontal strip (𝒵(a,b),ϕ=dz^2), consider the saddle trajectory given by connecting the two zeroes of ϕ on the horizontal boundary segments of 𝒵(a,b). Such saddle trajectories are called standard saddle trajectories. The corresponding homology classes in H_1(Σ_ϕ;ℤ) given by joining the two lifts of the straight line are called standard saddle classes.
From standard saddle classes, we obtain the following criterion for real-exactness.
The Lagrangian Σ_ϕ with respect to the canonical symplectic form ω of the real cotangent bundle is real-exact if and only if the standard saddle trajectories all have purely imaginary charge.
The natural involution on the spectral curve induces a ℤ_2-action on the homology group H_1(Σ;ℤ). Define the hat-homology group H_1(ϕ) to be the ℤ_2 anti-invariant part of H_1(Σ;ℤ). Then <cit.> shows that the hat-homology group H_1(ϕ) is generated by the standard saddle classes of ϕ.
Since λ is ℤ_2-anti-invariant, the ℤ_2-invariant part of H_1(Σ;ℤ) lies in Z. Hence the charge homomorphism factors through H_1(ϕ). However, the image of a standard saddle class under Z is equal to ± e^iθ times its ϕ-length. So we see that Σ_ϕ is real-exact if and only if the phases associated to the standard saddle trajectories are all imaginary.
The standard saddle classes give a ℤ-basis in the lattice H_1(ϕ). Hence we can identify it with ℤ^⊕ n. Following Bridgeland and Smith <cit.>, let Quad_free(g,m=({m_1,p_1},...,{m_k,p_k}) be the space of pairs (C,ϕ), where C is a genus g closed Riemann surface and ϕ is a quadratic differential over C, such that the poles of ϕ are the points p_i with the order m_i. We identify the pairs (C,ϕ) and (C',ϕ') up to conformal equivalence. Then in <cit.>, Bridgeland and Smith show that Quad_free(g,m) is locally isomorphic to the space of ℤ-homomorphisms from ℤ^n to ℂ, which implies that it is a complex manifold of dimension H_1(ϕ). Restricting to the homomorphisms which map entirely to iℝ⊂ℂ, we see that the real-exact quadratic differentials form a totally real submanifold of Quad_free(g,m).
§.§.§ Energy and horizontal distance
Let W be the primitive of λ_Re over Σ_ϕ. We now relate W to the ϕ-length. Given an arc-length parametrized ϕ-geodesic α:[0,l]→ C^∘ of phase θ, let α̃ be a lift α onto Σ_ϕ. Then
We have
∫_α̃λ=± e^-iθl.
Take a flat local conformal coordinate ∫√(ϕ(z))dz, sending α(0) to 0, over which α reads e^-iθt, and Σ_ϕ:={p^x=± 1, p^y=0}. So the value of the holomorphic Liouville form is just ± 1· e^-iθ. Integrating this from t=0 to t=l gives (<ref>).
Let 𝒵^h be a horizontal chamber. Suppose z,z' are two points on the closure of 𝒵^h in C̃. Choose the shortest straight line segment l in 𝒵^h connecting z and z' and a lift l̃ of l to Σ_ϕ. Then the horizontal distance d_hor(z,z') is defined by:
d_hor(z,z'):=∫_l̃λ_re.
Since any other choice of l̃ reverses the sign of ∫_l̃λ_re by -1, the horizontal distance is well-defined. Since ϕ is real-exact, all the standard saddle trajectories are vertical. By translating if necessary, we may assume that the standard saddle trajectory on 𝒵^h lies on x=0.
Let 𝒵^h be a horizontal chamber and let z,z' be two points on the closure of 𝒵^h. Suppose z=x+iy, z'=x'+iy' under some ϕ-flat conformal equivalence (𝒵^h,ϕ)≃ (ℋ,dz^2) or (𝒵^h,ϕ)≃ (𝒵(a,b),dz^2). Then
d_hor(z,z')=x-x'.
Furthermore, let d=d_hor(z,b) for some zero b of ϕ on the boundray of 𝒵^h. Then d only depends on z and not on b. Finally, if z^0 and z^1 are the two lifts of z, then
W(z^0)-W(z^1)=2d.
By translating the standard vertical saddle trajectory in 𝒵^h into the line Re(z)=0, we may assume that the branch points lie over the line Re(z)=0. Let z=x+iy and z'=x'+iy'. The straight line segment l connecting (x,y) and (x',y') is homotopic to the concatenation of the horizontal line segment from (x,y) to (x',y) and the vertical line segment from (x',y) to (x',y'). Call the concatenation of these two line segments γ. Note that γ and l bound a triangle in 𝒵^h. Consider the lift of the homotopy given by the triangle. Call γ̃ the resulting lift of γ. (See Figure <ref>)
Since the vertical line segment is a π/2-geodesic, we see that integrating the real Liouville form over the vertical component vanishes. Furthermore, since the horizontal line segment is a phase zero geodesic, the integral of the real Liouville form over γ̃ is equal to ± (x-x') by (<ref>). This finishes the proof.
In particular, set b to be one of the zeroes. Then the straight line segment connecting b and z attains two lifts, both of which meet at the point π^-1(b). We can join the two lifts at b. Consider the resulting curve on Σ_ϕ. Integral of the real Liouville form then gives us ±(W(z^0)-W(z^1)) but this is also equal to twice the horizontal distance between z and b, up to sign. This finishes the proof.
Given a point z∈C̃-S(π/2), we can now order the two lifts of z to Σ_ϕ by the condition
W(z^+)>W(z^-).
Furthermore, we have the following corollary:
Let z be a point in C̃-S(π/2). Connect z to a point z̃ on the wall γ of S(0) by a vertical trajectory. Then
W(z^+)-W(z^-)=W(z̃^+)-W(z̃^-)=2l>0,
where l is the ϕ distance of z̃ from the branch point end of the wall γ. l does not depend on the choice of z̃.
§.§.§ Chamber deformations
We now construct the region C(δ;E) and the bridge region 𝒱(δ;E).
Constructing C(δ;E).
The conformal subdomain C(δ;E) is a deformation retract of C(δ;∞) such that we have a bound on W(z^+)-W(z^-) for z∈ C(δ;E). Again, since all the standard saddle trajectories are vertical, we can translate the vertically finite horizontal strip domains and half-plane domains as in Proposition <ref> so that all the branch points lie over x=0. Then for E>0, set
𝒵(a,b;E) :={z∈𝒵(a,b): Re(z)<E}
𝒵(δ;a,b;E) :=𝒵(a,b;E)∩𝒵(δ;a,b)
ℋ(E) :={z∈ℋ: Re(z),Im(z)<E.}
ℋ(δ;E) :=ℋ(δ)∩ℋ(E).
We define
C(δ;E):=⋃𝒵(a,b;E) ∪⋃ℋ(δ;E)
where we take the union over all the horizontal chambers of C.
Note that C̃-S(0) deformation retracts to C(δ;E) and W(z^+)-W(z^-)<2E by Proposition <ref>.
Constructing 𝒱(δ;E).
We now construct the bridge region 𝒱(δ;E). We start with a definition.
Suppose γ:[0,∞)→C̃ is a wall on the spectral network S(0), arc-length parametrized with respect to ϕ. Then for T>0, the T-truncated wall γ is the restriction of γ to the interval [T,∞). The T-truncated spectral network (or the truncated spectral network for short) S(0)_T is the union of the images of the T-truncated walls.
The following definition will be useful:
Let γ be an open geodesic arc in C^∘. Then we say that a neighbourhood V of γ is a vertical neighbourhood if V is traced out by open vertical segments that passes through γ.
Let h_v≪min_𝒵(a,b)⊂ C-S(0)b-a/2 and let 𝒱(h_v) be the set of points in C that are connected to points on S(0)_T by a vertical geodesic of length less than h_v. Each component 𝒱 of 𝒱(h_v) is a vertical neighbourhood of a unique truncated wall γ|_[T,∞] which we call the core of 𝒱. By taking δ≪ 1, we can ensure that 𝒱 intersects all the horizontal chambers that are adjacent to the core wall γ.
If z is a point on 𝒱, then W(z^+)-W(z^-) only depends on the core horizontal geodesic. For small enough δ>0, 𝒱 serves as a “connecting bridge" between the connected components of C(δ;∞) for T=D((2+η)δ) for some continuous function D that only depends on ϕ and the choice of identifications U_i≃ D(r_i)⊂ℂ made in Section <ref>. Note that 𝒱 now lies outside S(π/2) and U(2+η)δ.
We summarize the discussion.
Let δ≪1 be a small deformation parameter and let E≫1 be an energy cut-off. Then there are precompact open conformal subdomains 𝒱(δ;E) and C(δ;E) contained in the set C̃-S(π/2) and C̃- U(2+η)δ respectively, with the following properties.
* There exists a h(δ)>0 and an η(δ)>0 such that if γ is a generic horizontal trajectory passing through a point z contained in C(δ;E) then γ never enters U((2+η)δ). Furthermore, if z∈ C(δ;E) then
W(z^+)-W(z^-)<2E.
* Given a connected component 𝒱 of 𝒱(δ;E), there exists a unique wall γ:(0,∞)→ C called the core of 𝒱 and a truncated portion γ|_(δ;∞) lying in 𝒱, such that the component of 𝒱 is given by some vertical thickening of γ|_(δ,∞). Furthermore, for z∈𝒱, we can order the lifts z^+,z^- of z on Σ_ϕ such that
W(z^+)-W(z^-)>0.
Finally, the connected component 𝒱 overlaps with all the components of 𝒞(δ;E) adjacent to its core wall γ.
Let J be a compatible almost complex structure on T^∗C̃. Let z∈𝒱(δ;E). Then there are no J-discs bounded between F_z and Σ_ϕ going from z^+ to z^-.
From Stokes' theorem, and ω-compatibility,
Area(u)= ∫ u^∗ω=W(z^-)-W(z^+)<0.
This is a contradiction, since ω-compatibility forces the area of u to be non-negative (it equals the geometric energy of u up to a positive factor). This finishes the proof.
Corollary <ref> says nothing about the discs going from z^- to z^+. However, it is important because otherwise we do not know a priori that the parallel transport across the connected components of 𝒱 is upper-triangular.
§ ADIABATIC DEGENERATION
We now study the adiabatic degeneration of t-BPS discs ending at z as t→ 0. From now on, we work in T^∗C̃ and stick to the _3-conformal model introduced in Definition <ref>.
Recall that we had constructed a wall-chamber decomposition of C and a deformation retract C(δ;E) of its horizontal chambers, with respect to a parameter δ>0 and an energy cut-off E≫ 1. The region C(δ;E) has the property that the maximal horizontal trajectory passing through z∈ C(δ;E) never enters the region U((2+η)δ). In Section <ref>, we define the notion of holomorphic flow lines for (slight generalizations of) spectral curves and describe how they relate to ϕ-trajectories. In Section <ref>, we find an a priori energy and boundary length estimate for t-BPS discs ending at z for z∈ C(δ;E). In Section <ref>, we establish some gradient estimates.
In Section <ref>, we follow <cit.> closely and introduce a t-uniformly finite number of punctures on the boundary of _3 for each t to obtain a conformal domain _r with the conformal structure defined as in Section <ref>. On this new conformal domain _r of the map u_t, we construct a domain subdivision D_0(t)∪ D_1(t) with the following two properties.
* The discs u_t map D_0(t) outside of T^∗U(2δ) and map D_1(t) into T^∗(U(2+η)δ).
* The size of the derivatives of u_t over D_0(t) is O(t). We show this by utilising the gradient estimates in Section <ref>.
In Section <ref> and <ref>, we study the limiting behaviour of u_t restricted to D_0(t) as t→ 0. We introduce auxiliary subdomains W_0(t) of D_0(t) so that D_0(t)-W_0(t) consist of uniformly finitely many strip-like domains, and u_t|_W_0(t) converges to points on C̃ (Lemma <ref>). The components of D_0(t)-W_0(t) satisfy the following properties.
* A 0-special domain is a strip domain that contains a horizontal boundary component with an F_z-label. By Lemma <ref>, a 0-special domain uniformly converges to z.
* A non-0-special strip domain is either a vertex region or a non-vertex region. By Proposition <ref> a vertex region is mapped very close to a point in C̃, after taking a subsequence.
* By Proposition <ref> a non-vertex region is mapped very close to a horizontal trajectory, after taking a subsequence.
In Section <ref>, we will prove the main analytic Theorem <ref> by combining the results in Sections <ref>-<ref>.
§.§ Flow lines
We adapt the notion of flow lines introduced in <cit.> to the holomorphic setting. Let C̃ be a Riemann surface and let (T^∗_ℂ)^1,0C̃ be the holomorphic cotangent bundle. Let Y be a codimension 1 holomorphic submanifold of (T^∗_ℂ)^1,0C̃ such that the holomorphic projection π:Y→C̃ is a simple branched covering of C̃. Let n be the degree of the branched covering Y→C̃. Suppose z∈C̃ is a regular value of π. Then there exist an open neighbourhood U of z, and locally defined holomorphic functions f_1,...,f_n such that Y∩π^-1(U) locally reads as Γ_df_1⊔...⊔Γ_df_n over U. Suppose now that z is a branch point. Then the germ of Y near the ramification point over z is isomorphic to the germ at (0,0) of the zero set {(p^z)^2-z=0}. The other smooth sheets of Y over z are given by the disjoint union Γ_dg_1⊔...⊔Γ_dg_n-2 for some locally defined holomorphic functions g_1,...,g_n-2.
Equip C̃ with a Kähler metric h=h_zz̅ dz dz̅. We can regard h as an isomorphism h:T^1,0_ℂC̃→T^∗_ℂ^1,0C̃. Following <cit.> and <cit.>
Let W be a locally defined holomorphic function on C̃. Then the h-gradient of W is defined by
∇_h(W)=h^-1(dW).
Given a curve z:[0,1]→C̃, a cotangent lift of z is an ordered pair {z_1,z_2} of lifts z_i:[0,1]→ Y such that z_1≠ z_2, or their common value is a branch point of the projection Y→C̃.
A curve z:I→C̃, defined over an open interval I, with an ordered cotangent lift (z_1,z_2), is called a holomorphic (Morse) flow line of phase θ associated to Y if the following equation is satisfied:
dz/dt=-e^-iθ∇_h(W_1-W_2),
whenever the local holomorphic functions W_1, W_2 associated to the lifts z_1 and z_2 are defined.
Now we restrict to the case Y=Σ_ϕ. Given a point z∈C^∘, the quadratic differential ϕ determines local functions W^±(z) such that
W^±(0)=0,
∂_z W^±=±√(ϕ)(z).
Furthermore, a GMN quadratic differential admits a conformal coordinate near a zero which pulls back ϕ to zdz^2.
Recall that the GMN equation is the ODE
Im(e^-2iθϕ(γ'))=0.
Holomorphic flow lines associated to the spectral curve satisfy the GMN equation (<ref>). In particular, holomorphic Morse flow lines of phase 0 lie on a horizontal trajectory.
This follows from
e^2iθϕ(z)(dz/dt)^2= 2h^-2|ϕ(z)|^2≥ 0.
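For concreteness, here is the substitution in the phase θ=0 case, the only one used in the sequel. This is a sketch assuming the normalization ∂_z(W_1-W_2)=2√(ϕ) for the two local primitives and the convention that ∇_h W is the vector field h_zz̅^-1\overline{∂_z W}; the overall positive constant is immaterial, only its sign matters:
dz/dt=-∇_h(W_1-W_2)=-h_zz̅^-1\overline{∂_z(W_1-W_2)},
ϕ(γ')=ϕ(z)(dz/dt)^2=h_zz̅^-2 ϕ(z)\overline{∂_z(W_1-W_2)}^2=4h_zz̅^-2|ϕ(z)|^2≥ 0,
which is real, so Im ϕ(γ')=0 and the flow line lies on a horizontal trajectory.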
§.§ The energy and boundary length estimate
We prove the crucial energy and boundary length estimate.
Suppose u:𝒵→ T^∗C̃ is a t-BPS disc ending at z for z∈ C(δ;E). Then
Area(u)≤ 2Et.
Let W be the primitive of λ_re on Σ_ϕ. By Stokes' theorem,
Area_J(u)=∫ u^∗ω=t(W(z^0)-W(z^1))≤ 2Et
where the last inequality follows from Proposition <ref>.
This finishes the proof.
We now need the estimate on the length of the boundary of u_t on tΣ_ϕ lying outside T^∗U(2δ)∩ tΣ_ϕ.
There exists some c=c(δ,η)>0 such that for all sufficiently small 0<t≤ 1, the following holds. Let u be a t-BPS disc ending at z. Let ∂_3^hor be the union of the horizontal boundary components of _3. The length of u(∂_3^hor) outside (T^∗U(2δ)∪ T^∗B_η/2(z))∩ tΣ_ϕ is bounded above by c.
Let K=(T^∗U((2-η/2)δ)∪ T^∗B_η/3(z)) and l be the length of u(∂_3^hor) on the region outside (T^∗U(2δ)∪ T^∗B_η/2(z))∩ tΣ_ϕ. Recall that we had chosen an η>0 such that over U((2-η)δ)^c, g_δ^ϕ=g^ϕ. Outside K, the normal injectivity radius of tΣ_ϕ is r'_0t for some r'_0>0 independent of t. We take r_0=min(r_0',η/8,δ).
On U((2-η)δ)^c, the W=∫√(ϕ)-coordinate brings J=J_std, tΣ_ϕ={y_1=± t, y_2=0} and ω=ω_std. Translating the chosen sheet to {y_1=y_2=0}, we see the following.
* In the neighbourhood of the boundary ∂ K∩ tΣ_ϕ, there are charts of radius r_0 t contained in the complement of (T^∗U((2-η)δ)∪ T^∗B_η/4(z)) such that: J=J_std, g=g_std, tΣ_ϕ=ℝ^2⊂ℂ^2, and K=D× iℝ^2. Here ℂ^2 is given by the coordinates (x_1,x_2,y_1,y_2) and D is some open subdomain of ℝ^2.
* Each point of tΣ_ϕ∩ (N_r_0K)^c admits a chart of radius r_0t/2 in the complement of (T^∗U((2-η)δ)∪ T^∗B_η/4(z)) such that: J=J_std, g=g_std, and tΣ_ϕ=ℝ^2⊂ℂ^2.
We choose a non-negative support function β:T^∗C̃→ℝ_≥ 0 such that β=β(x_1,x_2) on each standard open chart chosen above, β is positive on K^c, β vanishes on K, and β is equal to 1 on (T^∗U(2δ)∪ T^∗B_η/2(z))^c. Such a β can be chosen such that ξ=sup‖∇β‖ depends only on δ and η. Let ρ be the distance function from tΣ_ϕ. Note that in the above local charts, ρ=‖y‖=√(y_1^2+y_2^2).
Let r≤ r_0t. We define the functions a(r),l(r),a^β(r), and l^β(r):
a(r)=∫_{ρ≤β r}∩ u∩ K^c dA, a^β(r) =∫_{ρ≤β r}∩ u∩K^cβ dA,
l(r)=∫_{ρ= β r}∩ u∩K^c dl, l^β(r) =∫_{ρ=β r}∩ u∩K^cβ dl.
For ϵ>0, let K_ϵ=N_ϵ(K). Since β>0 on K^c, it follows that ρ/β is Lipschitz on K_ϵ^c. Hence applying the coarea formula, we get
∫_{ρ≤β r}∩ u∩ K_ϵ^c dA=∫_{ρ≤β r}∩ u∩ K_ϵ^c(1/‖∇ (ρ/β)‖)·‖∇ (ρ/β)‖ dA=∫_0^r ∫_{ρ=βτ}∩ u∩ K_ϵ^c1/‖∇ (ρ/β)‖ dl dτ.
On ρ=τβ,
‖∇(ρ/β)‖≤(1/β)(‖∇ρ‖+τ‖∇β‖)≤(1+τξ)/β.
So combining (<ref>)-(<ref>) and using the monotone convergence theorem, it follows that
a(r)≥∫_0^r l^β(τ)/(1+τξ) dτ and d/dr a(r)≥ l^β(r)/(1+rξ) a.e.
Now, observe that
rl^β(r) =∫_{ρ=β r}∩ u∩ K^c rβ dl≥∫_{ρ=β r}∩ u∩ K^c1/2 d^c (ρ^2)
=∫_{ρ≤β r}∩ u∩ K^c1/2dd^c(ρ^2)=∫_{ρ≤β r}∩ u∩ K^cω_std=a(r).
For the inequality in (<ref>), we use ‖d^c (ρ^2)‖≤ 2ρ, which follows from d^c (ρ^2)=d^c(y^2)= 2∑ y^i dx^i. To arrive at the first equality in (<ref>) is a bit more involved. We first use Stokes' theorem:
∫_{ρ≤β r}∩ u∩ K^c dd^c(ρ^2) =
∫_{ρ≤β r}∩ u∩∂ K d^c(ρ^2)+∫_{ρ≤β r}∩∂ u∩ K^cd^c(ρ^2)
+∫_{ρ=β r}∩ u∩ K^cd^c(ρ^2).
Now d^c(ρ^2)=0 on {ρ≤β r}∩ u∩∂ K, since β=0 on ∂ K and hence ρ=β r=0 there. Furthermore, d^c(ρ^2)=0 on {ρ≤β r}∩∂ u∩ K^c as well, since this set is contained in tΣ_ϕ. Hence the first two terms in (<ref>) vanish and we arrive at the first equality in (<ref>). For the second equality, note that 1/2 dd^c(y^2)=ω_std and for J-holomorphic curves, the area density is just equal to u^∗ω. So we get (<ref>)–(<ref>).
Combining (<ref>) and (<ref>)–(<ref>), we see that
ra'(r)≥ a(r)/(1+rξ).
Hence we get the differential inequality
d/dr log(a(r)·(ξ r+1)/r)≥ 0,
which implies that the function
r→ a(r)·(ξ r+1)/r
is nondecreasing.
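Spelling out the computation behind this monotonicity (a routine check, valid wherever a(r)>0): by (<ref>),
d/dr log(a(r)·(ξ r+1)/r)=a'(r)/a(r)+ξ/(1+ξ r)-1/r≥ 1/(r(1+ξ r))+ξ/(1+ξ r)-1/r=(1+ξ r-(1+ξ r))/(r(1+ξ r))=0.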
Now, if r<ξ^-1, then we get
2a(r)/r≥lim_s→ 0Area(u;(T^∗U(2δ)∪ T^∗B_η/2(z))^c∩{ρ≤ s})/s⇒ 2a(r)/r≥ l.
The total energy of u is bounded above by 2Et by Proposition <ref>. Setting r=r_0 t it follows that we have
Er_0^-1>l.
Set c=Er_0^-1. This finishes the proof.
There exists a compact subset K=K(δ,ϕ,E)⊂C̃ containing C(δ;E) such that if u is a t-BPS disc ending at z for z∈ C(δ;E), then u lies in P=D_1^∗K^∘ for all small enough t.
The rescaled spectral curves tΣ_ϕ for 0<t≤ 1 lie inside the unit disc bundle D_1^∗C̃. By the integrated maximum principle, we see that the disc u must also lie in the unit disc bundle D_1^∗C̃. Let V be a sufficiently small neighbourhood of the poles of ϕ lying outside the region C(δ;E)∪ U(2δ) such that g|_V=g^ϕ. Let K_1 be the complement of V. The spectral curve tΣ_ϕ is (Gt,H)-isoperimetric outside T^∗K_1, for sufficiently small t>0 and some G,H>0 independent of t. Furthermore, by Proposition <ref>, the total energy of u is bounded above by 2Et. So we can apply the proof of Proposition <ref> to see that the discs cannot leave D_1^∗K for some precompact open subset K containing K_1. Set P=D_1^∗K^∘.
§.§ Gradient estimate
We now follow <cit.> to prove the gradient estimates, which will be needed for the rest of the Section. We will only consider those fibres F_z for z∈ C(δ;E). From Proposition <ref>, we see that the discs of our interest are contained in a precompact neighbourhood P of C(δ;E) in T^∗C̃. For this reason, from now on we only consider smooth functions that map into P.
We start with the following gradient estimate:
<cit.>
There exists some ħ>0 such that for all 0<t≤ 1, the following inequalities hold.
* If u:A_r→ T^∗C̃ is a J-holomorphic disc, then
Area(u)<ħ⇒‖du(0)‖^2≤ 8/(π r^2)∫_A_r‖du‖^2.
* If u:E_2r→ (T^∗C̃,tΣ_ϕ) is a J-holomorphic half-disc with u(∂ E_2r)⊂ T^∗U(2δ)^c then
Area(u)<ħ⇒sup_E_r‖du‖^2≤ 8/(π r^2)∫_E_2r‖du‖^2.
The same statement holds replacing tΣ_ϕ with F_z for z∈ C(δ;E).
The Sasaki almost complex structure J already satisfies the conditions in <cit.> that outside T^∗U(2δ), tΣ_ϕ is totally geodesic, JT(tΣ_ϕ) is orthogonal to T(tΣ_ϕ) and J is skew-adjoint with respect to g^S. Then by <cit.>, there exists some ħ=ħ(g^ϕ_δ,η)>0 such that the statement of Lemma <ref> holds. The same argument applies for F_z,z∈ C(δ;E)
Fix now some ϵ>0. Suppose we have a t-BPS disc u ending at z and suppose u admits a subdomain (E_ϵ,∂ E_ϵ)⊂ (_3,∂_3) such that u|_∂ E_ϵ maps outside T^∗U(2δ). Suppose t is small enough so that 2Et<ħ. By Proposition <ref>, the total energy of u is bounded above by 2Et, so u|_E_ϵ satisfies the conditions in Lemma <ref>. From this, we see that sup_E_ϵ/2‖du‖ must be bounded above by √(8E/2πϵ^2)t^1/2.
The following estimate by Ekholm improves the above O(t^1/2)-estimate to an O(t)-estimate. The crucial ingredient is that for u=(q,p), we get ‖p‖≤ t, from the integrated maximum principle (see also <cit.>).
<cit.> Fix some positive constants ϵ,C_1,C_2>0, then for sufficiently small t>0, the following holds.
* Let u:A_8ϵ→ D_C_1t^∗C̃ be a J-holomorphic disc such that Area(u)<C_2t. Then there exists a constant k(ϵ,δ,η,ϕ,C_1,C_2)>0 such that
sup_A_ϵ‖Du‖≤ kt.
* Let u:E_8ϵ→ D_C_1t^∗C̃ be a J-holomorphic half-disc such that Area(u)<C_2t and u(∂ E_8ϵ) lies on either tΣ_ϕ outside T^∗U(2δ), or on F_z for z∈ C(δ;E). Then there exists a constant k(ϵ,δ,η,ϕ,C_1,C_2)>0 such that
sup_E_ϵ‖Du‖≤ kt.
Take a small enough t so that C_2 t<ħ.
The idea is to show that the geometric energy of u restricted to E_2ϵ(p) is actually of the size O(t^2). Hence applying Lemma <ref>, we see that Du on E_ϵ is of size O(t) which is precisely (<ref>) in the case ∂ E_8ϵ maps to either tΣ_ϕ or F_z. The proof is essentially the same as the proof of <cit.>.
The case where ∂ E_8ϵ∩∂_m maps to tΣ_ϕ is unchanged. For the case the boundary maps to F_z, note that since the energy of u is bounded above by C_1 t on E_8ϵ, the C^1 norm of u_t on E_4ϵ is of O(t^1/2) by Lemma <ref>. This implies that after taking a uniformly bounded conformal isomorphism Φ:E_4ϵ≃ E_1, the image of E_1 under u∘Φ^-1 remains O(t^1/2)-close to z. So for t>0 small, we can ensure that for z∈ C(δ;E), the image of u∘Φ^-1 on E_1 maps inside T^∗C(δ;E).
However, we have a local isometry G:(T^∗C̃,F_z)≃ (ℂ^2,iℝ^2) sending J to the standard almost complex structure on ℂ^2 (induced from taking the coordinate ∫√(ϕ) near z). Composing with this isometry, we get holomorphic maps v=G∘ u∘Φ^-1: E_1→ℂ^2, with the imaginary part bounded above by C_1t. Furthermore, we can double along iℝ^2 to get maps v̂:A_1→ℂ^2. Let ṽ=t^-1v̂; then the imaginary part of ṽ is bounded above by C_1.
Let F(z_1,z_2)=(e^iz_1,e^iz_2), then f=F∘ṽ is holomorphic. Furthermore, the image is uniformly bounded since the imaginary part of ṽ is uniformly bounded, and so is the derivative of F on the images of ṽ. The L^2-norm of Df on the disc of radius 1/2 can be uniformly bounded by sup‖f‖ by Cauchy's inequality[If f:A_1→ℂ is holomorphic, and z∈ A_1/2, then
‖D^nf(z)‖≤ n!·‖f‖_∞,D/(1/4)^n.
]. Furthermore, since by the chain rule Df=DF(ṽ)Dṽ and both ‖Df‖ and ‖DF(ṽ)^-1‖ are bounded on A_1/2, so is the norm of Dṽ. So we see that there exists some k_1>0 such that ‖Dṽ‖_L^2,A_1/2≤ k_1.
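For concreteness, the n=1 case of the footnoted inequality is the one used here; for a bounded holomorphic function g:A_1→ℂ (a scalar placeholder — apply it to each component of f) and z∈ A_1/2, taking radius 1/4 gives
|g'(z)|≤ ‖g‖_∞/(1/4)=4‖g‖_∞, hence ‖Dg‖_L^2,A_1/2^2≤ Area(A_1/2)·(4‖g‖_∞)^2=4π‖g‖_∞^2,
so the L^2-norm of Df on A_1/2 is indeed controlled by a fixed multiple of sup‖f‖.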
Now ‖Dṽ‖^2_L^2,A_1/2=t^-2‖Dv̂‖_L^2,A_1/2^2, hence
‖D(u∘Φ^-1)‖_L^2,E_1/2^2=‖Dv‖^2_L^2,E_1/2=1/4‖Dv̂‖^2_L^2,A_1/2≤1/4k_1^2t^2,
where the first equality follows from v=G∘ u∘Φ^-1 and G being an isometry, and the second equality follows from v̂ being a doubling of v. Here recall that we had composed with a conformal equivalence E_4ϵ≃ E_1. Hence we have managed to show that the energy of u is of size O(t^2) on E_2ϵ, just as claimed. [The actual proof is more or less the same, except that there are some diffeomorphisms involved sending the local graph t· graph(dg) uniformly to ℝ^n, and comparing the almost complex structure with the standard almost complex structure J_0 on ℂ^n. The resulting function f in <cit.> is not fully holomorphic, but it is very close to one.]
§.§ Domain subdivision
To show Theorem <ref>, we argue by contradiction. We assume that there exists a sequence of positive real numbers t_n→ 0 and a sequence of points z_t_n∈ C(δ;E) converging to a point z∈ C(δ;E) such that there exist t_n-BPS discs
u_t_n:_3→ T^∗C̃
ending at z_t_n. We will find a subsequence of (z_t_n,t_n) such that the corresponding discs lie strictly outside the desingularization region T^∗U((2+η)δ).
In order to do this, we modify the construction in <cit.>, which will take the rest of Section <ref>. We introduce a uniformly finite number of punctures on the boundary of the domain _3 of u_t mapping to tΣ_ϕ. The new domain _r admits a subdivision into domains D_0(t) and D_1(t). Throughout this construction, we have to make choices for some auxiliary functions δ_0(t). We now summarize their properties.
* ∂ D_j(t)-∂_r consist of vertical line segments disjoint from the boundary minima.
* (Corollary <ref>) Over D_0(t), we have
sup_z∈ D_0‖Du_t(z)‖≤ kt.
* (Lemma <ref>) The subdomain D_0(t) is mapped outside of U((2+1/2δ_0(t))δ) for some function δ_0(t) satisfying 0<δ_0(t)<η/10.
* The subdomain D_1(t) is mapped inside U((2+9/2δ_0(t))δ).
Construction of domain subdivision
Now we begin the construction. Fix a constant 0<δ_0<η/10 such that u|_∂_3 is transverse to ∂(T^∗U((2+cδ_0)δ)) for c∈{1,2,3,4}. Let I≃ℝ be a boundary component of _3. Let
b_1^c<b_2^c<....<b^c_n(c), c=1,2,3,4
be the points in I such that u(b_j^c) lies in the boundary ∂(T^∗U((2+cδ_0)δ)). Set ∞=b^c_k for any k>n(c). Let
B_c={b_1^c,....,b^c_n(c)}, B=∪_c B_c, and c(b):B→{1,2,3,4} be the indexing function. For 2≤ c≤ 4, we add a puncture at each b_j^c and b_j+1^c with the property that there exists some b_k^c-1 with b_j^c<b_k^c-1<b_j+1^c.
Intuitively, we are adding a puncture every time the image of the boundary enters the same “level" ∂(T^∗U((2+cδ_0)δ)) at a point b_j^c and then leaves it at the point b_j+1^c. Note also that at b_j+1^c, the image of the boundary points outward.
Removing the punctures, we arrive at a new domain _r=_3+m_1 with a holomorphic map u:_r→ T^∗C̃. It can be readily checked that the boundary components Ĩ of _r separate into three different types:
* out: u(I)⊂ T^∗(C̃-U((2+3δ_0)δ))
* 0: u(I)⊂ T^∗(U((2+4δ_0)δ)-U((2+δ_0)δ))
* in: u(I)⊂ T^∗U((2+2δ_0)δ).
One very important property is that the number of added punctures is uniformly finite.
<cit.>
There exists a constant R=R(δ_0)>0 such that the number m_1 of added punctures satisfies m_1≤ R.
Each new puncture corresponds to a segment in u(∂_m) connecting the boundary ∂(T^∗U((2+cδ_0)δ)) to ∂(T^∗U((2+(c-1)δ_0)δ)), c=2,3,4. The lengths of these segments admit a positive lower bound given by
min_c=2,3,4 d_g^ϕ_δ(∂(U((2+cδ_0)δ)),∂(U((2+(c-1)δ_0)δ)))
by the definition of the Sasaki almost complex structure. Then the proof follows from the a priori bound on the total length of the boundary components outside T^∗U(2δ) (Lemma <ref>).
Note that a boundary component I which maps into fibres F_z for z∈ C(δ;E) is automatically an out boundary component.
From now on, given a subset S⊂_r and l>0, let B_l(S) denote the l-neighbourhood of S in _r.
For 1/4>d>0 let Ω_d=_r-⋃_I⊂∂_r B_d(I). Fix a small ϵ>0 so that for p∈out∪0, the conformal domain B_ϵ(p) is uniformly conformally equivalent to E_ϵ/2(p) independent of t. Let Θ_ϵ=Ω_ϵ∪⋃_I∈out∪0 B_ϵ(I).
We have from Theorem <ref> that:
<cit.>
There exist a constant k>0 such that if t>0 is sufficiently small then
sup_z∈Θ_ϵ‖Du‖≤ kt.
By the integrated maximum principle, for u_t=(q_t,p_t), ‖p_t‖≤ t (see also <cit.>). Now suppose t is small enough so that 2Et<ħ. By Proposition <ref>, the total energy of u_t is bounded above by 2Et, so u|_Θ_ϵ satisfies the conditions in Theorem <ref>, after restricting to a smaller neighbourhood of radius ϵ on the boundary which is uniformly conformally equivalent to E_ϵ/2.
Now at each of the boundary minima of _r, introduce a vertical ray in _r passing through the boundary minimum, connecting a boundary point to a boundary point, and consider the resulting subdivison of _r. Since the number of the punctures is uniformly finite, so is the number of the components. Colour a component blue if the component contains an in horizontal boundary segment. Consider the union of all the blue connected components.
Equivalently, let D'⊂_r be the union of all the vertical line segments in _r connecting a point in a type-in boundary component to some other boundary point on ∂_r. Observe that D' is the same as the union of all the blue subdomains. The set ∂ D'-∂_r is a collection of vertical line segments. We state <cit.> without proof since the proof is word-to-word the same.
For any 0<a<1 and for sufficiently small t>0 we have d(p,D')>t^-a for any point p∈ I, where I is a boundary segment of type out. In particular, a vertical segment l in ∂ D'-∂_r has its end points either on the boundary minimum of a boundary segment of type in or on a boundary segment of type 0.
Now, colour a component of the vertical ray subdivision red if the component contains an out horizontal boundary segment. The lemma states that the union of all the red components is separated from D' by a distance of at least t^-a. Note that t^-a grows much faster than log t^-1.
Let log(t^-1)≤ d ≤ 2log(t^-1) be chosen such that ∂B_d(D')-∂_r and ∂ B_d/2(D')-∂_r consist of vertical line segments disjoint from all the boundary minima. Intuitively, we are taking a horizontal thickening of the blue and the red subdomains by length d. Let D_0=_r-B_d/2(D') and D_1=B_d(D'). We see that if p∈∂ D_0∩∂_r, then p lies in a boundary component of type 0 or type out, and if q∈∂ D_1∩∂_r then q lies in a boundary segment of type 0 or in. Note also that by thickening the red subdomain (or the blue subdomain) to D_0 (or D_1), we have not increased the number of connected components of the red subdomain (or the blue subdomain). Hence the number of the components of D_0 and D_1 are still uniformly bounded. Furthermore,
sup_z∈ D_0‖Du(z)‖≤ kt.
This follows from Lemma <ref>.
The following is adapted from <cit.>. Again, the proof is word-to-word the same.
u(D_1)⊂ T^∗U((2+9/2δ_0)δ) and u(D_0)⊂ T^∗(U((2+1/2δ_0)δ)^c) for sufficiently small t.
The upshot is that D_1 is mapped inside the region that h-neighbourhoods of horizontal trajectories passing through points z∈ C(δ;E) cannot enter, and D_0 is mapped into a region outside all the deformations, where the metric coincides with g^ϕ.
Now, given the sequence of t_n-BPS discs u_t_n:_3→ T^∗C̃, ending at z_n, we apply the same subdivision procedure by letting δ_0 be a function of t which is a very small variation of the constant δ_0 such that 0<δ_0(t)<η/10. By taking a subsequence, we assume that the number of added punctures is in fact constant. Construct a decorated graph by assigning red vertices for D_0 and blue for D_1 and assign an edge between vertices if the intersection between the corresponding components is non-empty. Since there are only finitely many vertices, the number of all possible configurations is also finite. So by taking a subsequence if necessary, we may assume that the resulting graph is constant. Furthermore, we may also assume that the topology of the components of D_j(t) is also constant. We have now finished the construction.
§.§ Convergence to gradient flow lines
In this subsection, we introduce the auxiliary subdomain W_0(t) of D_0(t) such that the components of W_0(t)-D_0(t) consist of strip-like domains and they degenerate to solutions of gradient flow line equations. We also study limits of the auxiliary subdomains W_0(t) and the 0-special domains which we recall to be the components of D_0(t)-W_0(t) that contains a horizontal F_z-labelling.
We have the domain subdivision _r= D_0(t)∪ D_1(t) as constructed in Section <ref>. Let W_j(t) be the neighbourhood of the boundary minima of D_j(t) such that:
* the boundary ∂ W_j(t) consist of arcs in ∂ D_j(t) and vertical line segments,
* there is at least one boundary minimum on each component of W_j(t).
For such W_j(t), D_j(t)-W_j(t) is a finite collection of strip regions. For a connected component W⊂ W_j(t), we define the width of W as the maximum distance from a vertical line segment in the boundary of W to a boundary minima inside W. We define the width of the neighbourhood W_j(t) to be the maximum of the width of the finitely many connected components of W_j(t).
Given a vertical segment l≃{0}×[0,1]⊂ D_0(t)-W_0(t) with ∂ l⊂∂ D_0(t), let [-c,c]× [0,1]⊂ D_0(t) be a strip-like domain centred around l. With (s,τ)∈ [-c,c]× [0,1], we write u_t(s,τ)=(q_t(s,τ),p_t(s,τ)). Let tb_σ denote the (1-form) section of the sheet that contains u_t(0,σ) for σ=0,1.
We have the following estimate due to Ekholm <cit.> which describe the degenerative behaviour of components of D_0(t)-W_0(t).
For all sufficiently small t>0, we can find neighborhoods W_0(t) of the above type with width at most 2log(t^-1), such that the following holds. Let Θ be a component of D_0(t)-W_0(t) that is not a 0-special domain. Then along any vertical line segment l⊂Θ, we have
1/t∇_τ p_t(0,τ)-(b_1(q_t(0,0))-b_0(q_t(0,0)))=O(t)
1/t∇_s p_t(0,τ)=O(t).
In particular, if Θ=[-c_t,c_t]× [0,1] is a non-0-special component of D_0(t)-W_0(t), then the rescaled strips ũ_t=u_t(t^-1s,t^-1τ) on [-tc_t,tc_t]× [0,t] locally converge to a solution of the gradient-flow equation determined by b_σ. Observe that since the 1-form sections b_σ are holomorphic, the resulting gradient flow equation is a holomorphic gradient flow equation. The question is when we can ensure that b_0≠ b_1. This issue will be discussed in Section <ref>.
We now deal with the 0-special domains.
Let Θ⊂ D_0(t)-W_0(t) be a 0-special domain. Then lim_t→ 0d(u_t|_Θ,z)=0.
The size of the derivative of u_t on Θ is O(t). Let l be a vertical line segment in Θ, then l intersects a boundary component labelled F_z. So any point on u_t(l) is O(t)-close to the point z. Since Θ⊂ℝ× [0,r], the length of a vertical line segment in Θ is bounded above by r. Hence as t→ 0, u(l)→ z. Furthermore, the speed of convergence is independent of l since it only depends on r and the O(t)-estimate. This finishes the proof.
Furthermore, we show that the domains W_0(t) are mapped very close to points in C̃.
Let Θ be a component of W_0(t). Then after taking a subsequence if necessary, there exists a point p in C̃ such that d(u_t|_Θ,p)→ 0 as t→ 0.
The widths of the domains W_0(t) are controlled by 2log(t^-1). From the O(t)-estimate, we see that the diameters of the discs restricted to the domains W_0(t) are of size O(tlog t^-1). Since tlog(t^-1) converges to 0 as t→ 0, we see, after taking a subsequence if necessary, that u_t|_Θ uniformly converges to a point in C̃.
§.§ Convergence to horizontal geodesics
In this subsection, we further investigate the convergence of strip-like domains in D_0(t)-W_0(t). We separate the non 0-special strip domains in D_0(t)-W_0(t) into vertex and non-vertex regions. We show that the vertex regions converge to points and the non-vertex regions converge to ϕ-horizontal geodesics. We modify the approach in <cit.>.
We first fix some conventions. From now on, Θ means a strip domain either of the form [a,b]× [0,1] with both a and b finite, [a,∞)× [0,1] or (-∞,b]× [0,1]. We write j for the standard complex structure on Θ given by z=s+iτ.
We regard ℂ^2 as T^∗_ℂ^1,0ℂ and write J_0 for the Sasaki almost complex structure induced from the standard flat metric on ℂ. We take the complex coordinates z_1=x-ip^x and z_2=y-ip^y where p^x,p^y are dual coordinates. As before, let C^∘ denote the complement of both the zeroes and poles of ϕ, and let J_ϕ be the Sasaki almost complex structure on T^∗C^∘ with respect to the flat metric g^ϕ.
We need the following technical proposition.
Suppose u:Θ→ T^∗C^∘ is a (j,J_ϕ)-holomorphic map with the horizontal boundary components [a,b]×{0,1} mapping to Σ_ϕ.
Then there exists a (j,J_0)-holomorphic map v: Θ→ℂ^2 such that the pointwise equality holds:
‖Du‖=‖Dv‖,
and v maps the horizontal boundary components into {p^x=± 1,p^y=0}. In particular, the L^2 energy of the maps u and v agree. The same applies for tΣ_ϕ with {p^x=± t,p^y=0} instead.
We explain where we use Proposition <ref>. Recall that the strip-like domains Θ inside D_0(t)-W_0(t) are mapped outside of U(2δ). The restriction of u_t on Θ then satisfies the conditions in Proposition <ref>. So we obtain a holomorphic map v_t:Θ→ℂ^2 with the horizontal boundary components now mapping to {p^x=± t, p^y=0}. Observe that the Lagrangian boundary condition now splits globally into distinct sheets.
We can use this global sheet splitting to prove the following two Lemmas. Suppose there exists a subsequence of v_t such that the horizontal boundary segments of Θ under v_t map to the same sheets of {p^x=± t, p^y=0}. Then we show in Lemma <ref> that the corresponding subsequence of u_t|_Θ must uniformly converge to a point. On the other hand, suppose the horizontal boundary segments of Θ under v_t maps to distinct sheets of {p^x=± t,p^y=0}. Then we show in Lemma <ref> that u_t|_Θ must stay C^0 close to a horizontal trajectory passing through points in C(δ;E).
We now briefly explain the motivation behind Proposition <ref>. Over the ∫√(ϕ) coordinate, Σ_ϕ splits into two distinct hyperplanes {p^x=± 1, p^y=0}. The idea is simply to take the analytic continuation of ∫√(ϕ) along the disc.
Assume that Θ=[0,1]× [0,1]. We regard the map u:Θ→ T^∗C^∘ as a section p(s,τ) of u^∗(T^∗C^∘) over π(u(Θ)) by taking the point (s,τ)∈Θ to the element u(s,τ)∈ T^∗_π∘ u(s,τ)C^∘. Since the domain Θ is contractible, we can choose a lift l(s,τ) of π(u(Θ)) to Σ_ϕ as it is a genuine cover of C^∘.
We let (0,0)∈ [0,1]× [0,1] to be our basepoint. For (s,τ)∈ [0,1]× [0,1], consider a smooth family of parametrized line segments
L_(s,τ)(T)=T(s+iτ).
Consider the smooth map
v(s,τ)=(v_1(s,τ),v_2(s,τ))=( ∫_0^1 l(L_(s,τ)(T))^∗λ dT ,p(s,τ)/l(s,τ))
into ℂ^2. Here we regard both p(s,τ) and l(s,τ) as complex vectors in the 1-dimensional complex vector space T^∗_π∘ u(s,τ) C^∘ and we compare their ratio. Note that l(s,τ) is never equal to zero, and that we can rewrite (<ref>) as
(s,τ)↦ (∮ _L_(s,τ)(T)√(ϕ) dT,p(s,τ)/√(ϕ))
where we regard l as a sheet of √(ϕ). This clarifies the meaning of the map (<ref>).
We show that it is (j,J_0)-holomorphic. Let (s,τ)∈ [0,1]× [0,1] and choose the ϕ-flat coordinate near u(s,τ) so that ϕ=dz^2, and the choice of √(ϕ) agrees with that of l. In this coordinate system, we may write u(s,τ)=(k(s,τ),p(s,τ))∈ℂ^2.
Let x∈ T_(s,τ)([0,1]× [0,1]) and h be sufficiently small. Let L_h(s,τ,x) be the line segment between (s,τ)+hx and (s,τ). The 1-chain
(s,τ,x)=[L_(s,τ)+hx]-[L_h(s,τ,x)]-[L_(s,τ)]
is null-homologous in [0,1]× [0,1] (See Figure <ref>). Since dλ=Ω=0 on Σ_ϕ,
∫_(s,τ,x) l^∗λ=0.
So we see that
v_1((s,τ)+xh)-v_1(s,τ)=∫_L_h(s,τ,x) l^∗λ.
For h≪ 1, l(s,τ)=1 and so the right hand side just computes k((s,τ)+xh)-k(s,τ). Dividing both sides by h and sending h to zero, we get
D_xv_1= dk/dx.
The computation is easier for v_2 since in the ϕ-flat coordinate, v_2(s,τ)=p(s,τ). Thus
D_x v=(dk/dx,dp/dx).
So since u is J_ϕ-holomorphic, and J_ϕ is covariant, the map (s,τ)↦(k(s,τ),p(s,τ))∈ T^∗ℂ is holomorphic with respect to the Sasaki almost complex structure associated to the standard flat metric on ℂ, and so the holomorphicity of v follows. Furthermore, it is straightforward to see that the norms of the derivatives agree.
The general case where Θ=[a,b]× [0,1] or [a,∞)× [0,1], or (-∞,b]× [0,1] is entirely analogous since it is conformally equivalent to either [0,1]× [0,1], (-∞,0]× [0,1] or [0,∞)× [0,1]. But for these domains, the same argument applies. This finishes the proof.
Now, note that given Θ⊂ D_0(t)-W_0(t) a strip region, since ‖Du_t‖=O(t), possibly passing to a subsequence, we see that given a vertical line segment l⊂Θ, π(u_t(l)) is contained in an O(t)-ball around a point. Since this point lies outside U(2δ), we have two sheets of Σ_ϕ over this point. Call the region a vertex region if we can find a subsequence of t converging to 0 such that the endpoints of the vertical segments lie on the same sheet.
We have the following lemma:
Let Θ⊂ D_0(t)-W_0(t) be a vertex region and let ϵ>0. Then, after passing to a subsequence, there exists a point p∈ C(δ;E)^c such that u_t(Θ) is contained in an ϵ-ball around p in T^∗C̃.
We modify the proof of <cit.>.
After passing to a subsequence, we assume that π(u_t(l)) converges to some p∈ U((2+η)δ)^c. Assume that for all small t>0, u_t|_Θ does not stay entirely in an ϵ-ball around p. Then there exists a sequence of points q_t∈Θ such that u_t(q_t) lies strictly outside the ϵ-ball around p for small enough t. By taking a subsequence, we may assume that u_t(q_t) converges to a point q. By the O(t)-estimate on the derivative of u_t restricted to D_0(t), the vertical line segment passing through q_t must map outside the 1/2ϵ-ball around p and must also uniformly converge to the point q.
Let Θ_t be the strip region inside Θ bound by the vertical line segment l and the vertical line segment passing through q_t. We claim that there exists a k>0 such that Area(u_t;Θ_t)<kt^2. Suppose for now this is true, and consider the disjoint union of balls
B=B_ϵ/4(q)∪ B_ϵ/4(p)
in T^∗C̃. Again, by the O(t)-estimate and the convergence u_t(q_t)→ q, the boundary of u_t|_Θ_t is contained in B ∪ tΣ_ϕ for small enough t. In particular, since u_t(Θ) lies outside T^∗U(2δ), u_t maps the horizontal boundary segments of Θ_t to the same sheet of tΣ_ϕ. Since each sheet of tΣ_ϕ over π(u(Θ)) is uniformly geometrically bounded, we see that the curve u_t restricted to Θ_t cannot leave some O(t)-small neighbourhood of B by the boundary estimate (Proposition <ref>). For small enough t, such a neighbourhood of B is disconnected, but the image of u_t over Θ_t must be connected, a contradiction.
To show the claim, let v_t be the holomorphic maps v_t:Θ_t→ℂ^2 obtained from u_t via Proposition <ref>. We know that the norms of the derivatives of v_t and u_t agree, and that Area(v_t)=Area(u_t). The advantage is that now we are looking at holomorphic maps of Θ into ℂ^2 with the horizontal boundary components mapping into {p^x=± t, p^y=0}. Furthermore, we have a primitive of the real Liouville form, simply given by ± tq^x. Observe that if the vertical segment l has its endpoints on a single sheet, say {p^x=+t, p^y=0}, then the entire v_t must map its horizontal boundary into {p^x=+t, p^y=0}.
Let Θ_t=[a_t,b_t]× [0,1]. By Stokes' theorem, we have
Area(v_t) = ∫_∂([a_t,b_t]×[0,1]) p dq
=-∫_{a_t}×[0,1] pdq+ ∫_{b_t}×[0,1] pdq
± t(q_x(v_t(a_t,1))-q_x(v_t(b_t,1))-q_x(v_t(a_t,0))+q_x(v_t(b_t,0))).
Now since p=O(t), and Du=O(t) over D_0(t), the first two terms are O(t^2). For the four terms after that, note that
t|q_x(v_t(a_t,1))-q_x(v_t(a_t,0))|≤ t sup‖Dv_t‖.
Indeed, q_x(v_t(a_t,1))-q_x(v_t(a_t,0)) is controlled by the q_x-component of the velocity of v_t along the vertical segment {a_t}×[0,1], which has length 1. Similarly,
t|q_x(v_t(b_t,1))-q_x(v_t(b_t,0))|≤ t sup‖Dv_t‖.
Since ‖Dv_t‖=‖Du_t‖=O(t), the area of u_t on the region Θ_t must be of O(t^2).
We have the following statement on the adiabatic degeneration of non-vertex strip regions.
Let Θ(t)⊂ D_0(t)-W_0(t) be a non 0-special, non-vertex strip region and let ϵ>0. Then, after passing to a subsequence, there exists a horizontal trajectory γ passing through a point in C(δ;E) such that u_t(Θ(t)) is contained in an ϵ-neighborhood of γ.
The proof is a very small modification of <cit.>. We split into two cases. First, assume that Θ=[-c_t,c_t]× [0,1] is such that tc_t≤ K for some K. Write u_t=(q_t,p_t). Since u_t is J-holomorphic, we have
∂ q_t/∂ s+g^-1(∇_τp_t)=0,
∂ q_t/∂τ-g^-1(∇_s p_t)=0.
Then consider the rescaling ũ_t=(q̃_t,p̃_t)=u_t(t^-1s,t^-1τ) defined on [-tc_t,tc_t]× [0,t]. We see from Proposition <ref>
that
∂q̃_t/∂ s-Y=O(t),
∂q̃_t/∂τ=O(t).
Here Y is the local gradient difference determined by the two local sheets of tΣ_ϕ. Pass to a subsequence for which both the rescaled lengths tc_t and the points u_t(-c_t,0) converge (recall that all the discs map into P; see Lemma <ref>). We see that the image of the strip region must lie in a small neighborhood of a flow line. Since these flow lines are contained in C(δ;E), they must correspond to a horizontal trajectory.
We next consider the case where tc_t is unbounded. In this case, the strips map outside the regions where the gradient difference vanishes. By applying the same argument, we see that this cannot happen since otherwise the length of the boundary will be unbounded, a contradiction.
§.§ Proof of Theorem <ref>
We now show the main theorem:
(Theorem <ref>)
Given E≫ 0, δ≪ 1, there exist a metric g^ϕ_δ on C̃ and a deformation retract C(δ;E) of C̃-S(0), over which g^ϕ_δ=g^ϕ, such that the following holds.
Let J be the Sasaki almost complex structure associated to g^ϕ_δ. Then there exists a scaling parameter t_0=t_0(δ;E)>0 such that for 0<t≤ t_0, there are no non-constant J-holomorphic discs bounded between F_z and tΣ_ϕ for z∈ C(δ;E).
We argue by contradiction. Let u_t:_3→ T^∗C̃ be a sequence of t-BPS discs ending at z_t with z_t∈ C(δ;E) such that z_t→ z and t→ 0. Let D_0(t), W_0(t) and D_1(t) be as in Sections <ref> and <ref>. Recall that the strip-like regions in D_0(t)-W_0(t) with an F_z_t-horizontal labelling were called 0-special.
Consider the decorated graph 𝒢 constructed as follows: we associate a red vertex to each of the connected components of W_0(t) and D_0(t)-W_0(t); a blue vertex for each of the connected components of D_1(t). We connect two vertices with an edge if the corresponding components are not disjoint. Since there are finitely many vertices, there are finitely many possible configurations, and so by taking a subsequence, we can ensure that the graph configuration remains constant. Now, choose a red vertex x corresponding to a 0-special component. We argue via induction that for any finite length path P beginning at x and ending at y with all the vertices red, there exists a subsequence of u_t such that for small enough t, the restriction of u_t to the connected component corresponding to y lies in C(δ;∞).
Suppose the length of P is one. By Lemma <ref>, the 0-special regions inside D_0(t)-W_0(t) converge to the point z. This proves the case when the length of P is equal to 1. Suppose now the claim holds for any red path with length less or equal to k-1. Consider a red path of length k. Then the red path of length k-1 connecting x to the penultimate point p satisfies the condition above. Let D_0(y) be the component corresponding to y.
Suppose D_0(y) is a component of W_0(t). Then by Lemma <ref>, the region D_0(y) must converge to points on C(δ;E). Now suppose D_0(y) is contained in D_0(t)-W_0(t) and that D_0(y) is not 0-special. Then either D_0(y) is a vertex region, or a non-vertex region. Since y is red, by taking a subsequence, the restriction of u_t on D_0(y) admits a reparameterization that converges to a holomorphic flow line (Lemma <ref>) in the case y is a non-vertex region, or to a point in C(δ;∞) in the case y is a vertex region (Lemma <ref>). In the former case, since the restriction of u_t on p for small enough t lies in C(δ;∞), the holomorphic flow line must lie on a horizontal trajectory which passes through a point in C(δ;∞). However, such a horizontal trajectory belongs entirely in C(δ;∞), proving the claim.
Now suppose the set of blue vertices is non-empty, then there exists a finite path of minimal length beginning at x and ending at a blue vertex y, such that all the intermediate vertices are red. Let p be the penultimate vertex in the path, let D_0(p) be the component corresponding to p and D_1(y) be the component corresponding to y. Since the restriction of u_t on D_0(p) maps to C(δ;∞) for small enough t, it follows that this component cannot intersect D_1(y) for small enough t, a contradiction. Therefore, the set of blue vertices is empty and for small enough t, u_t maps entirely into T^∗C(δ;E). In fact, the argument implies that u_t lies in a small neighbourhood of the unique horizontal trajectory γ passing through z.
So we see that we can find a t_0>0 such that if t<t_0 then the t-BPS disc ending at z lies entirely outside T^∗U(2δ) and lies over a small vertical neighbourhood of γ. On this neighbourhood, ϕ=dz^2 and so we reduce to the case of holomorphic discs of finite energy
u:ℛ→ℂ^2=ℂ(z_1=x-ip^x)⊕ℂ(z_2=y-ip^y)
with the following boundary conditions: u extends to a continuous map on the closed half-disc ℋ∩D_1 mapping [-1,1] to {p^x=± t, p^y=0} and mapping the boundary arc {r=1, Im≥ 0} to x=y=0. However, no such disc exists, by the maximum principle. So we have arrived at the contradiction, finishing the proof.
§ WALL-CROSSING ANALYSIS
In this section, we compute the Floer cohomology local system z↦ HF(Σ,F_z) and prove the main theorem.
(Theorem <ref>) Let Σ_ϕ be the spectral curve associated to a real-exact GMN quadratic differential on a closed Riemann surface C. Given a small deformation parameter δ>0 and a large energy cut-off E≫ 1, there exists a t_0>0 and a collection of points 𝒫_C=𝒫_C(δ;E) (with lifts P_Σ_ϕ^∘) such that the following holds.
Let ℒ=ℒ(P_Σ_ϕ^∘) be a path groupoid representation of an almost flat GL(1;ℂ)-local system, 𝔰 be a spin structure on C, and ℬ be an almost flat GL(1;ℤ)-local system. For 0<t<t_0,
HF_t(Σ_ϕ,ℒ,𝔰, ℬ, 𝒫_C;ℂ) and ℒ(P_Σ_ϕ^∘) form a 𝒲-pair, or equivalently, HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ) is a non-abelianization of ℒ.
To do this, we must show that the Floer-theoretic parallel transport along a path α contained in C(δ;E) is given by the pushforward of ℒ, and the Floer-theoretic parallel transport along the “short paths" (see Section <ref>) admits the form (<ref>). In Section <ref>, we define and study the relevant passive continuation strips. In Section <ref>, we set up some conventions, fix the branch cut and the sheet ordering data once and for all, and specify the path groupoid generators on C that we will use throughout the section. In Section <ref>, we specify the Floer data that we will use for the Lagrangian pair (tΣ_ϕ,F_z). In Section <ref>, we study the moduli problem for parallel transports along the “short paths". The main result is Proposition <ref> which explains the form (<ref>) up to sign. In Section <ref>, we study the moduli problem for parallel transports along arcs contained in C(δ;E). The main result is Proposition <ref>; we show using Theorem <ref> that for infinitesimal fibre parallel transports, the relevant continuation strips are all constant strips. This explains the form (<ref>) up to sign.
In Section <ref>, we specify the necessary grading and spin structure data in order to compute the Floer cohomology local system. In Section <ref>, we define the grading functions. In Section <ref>, we introduce a finite subset M_C of C, a good open cover {G_α}_α∈ M_C [An open cover such that any arbitrary finite intersection is contractible.] and a “good" local framing on C, using the material from Sections <ref>- <ref>.
We use this data to define spin structures as Čech cocycles.
In Section <ref>, we use the Čech formalism to prove Lemma <ref>, which states that for constant passive continuation strips, the sign difference between the Floer-theoretic parallel transport map induced from π^∗𝔰 and 𝔰̃ is given precisely by Φ^ℬ. In Section <ref>, we use the sign difference lemma <ref> to compute the Floer-theoretic parallel transport maps and prove Theorem <ref>.
§.§ Moduli problem for parallel transports
In this subsection, we define and study the various moduli problem for Floer-theoretic parallel transport maps associated to horizontal and vertical geodesic arcs on the base. We use the conventions from Section <ref>.
§.§.§ Wall-chamber data
In this section, we fix the branch cut data, a choice of a “positive sheet" of ϕ for each component of C-S(0), the set of base points 𝒫_C for the path groupoid over C̃ and the generators for the path groupoid morphisms. We will need the following definition.
Let γ be an oriented horizontal trajectory in C. Then the positive sheet +√(ϕ) along γ is the unique sheet of √(ϕ) such that the line element √(ϕ)(γ(s))·γ'(s)ds is real and positive, for any smooth parametrization of γ that respects the chosen orientation.
Let 𝒱⊂ C^∘ be a vertical neighbourhood of γ, and let +√(ϕ) be the positive sheet along γ. Then we say that a point z in 𝒱-γ lies above γ if the integral ∫ Im(+√(ϕ)) along the unique vertical segment between a point on γ and z is positive. Otherwise, we say the point z lies below γ.
Note that for small enough 𝒱, 𝒱-γ consists of two connected components, the one that lies above γ, and the one that lies below γ.
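A toy example, for orientation (not part of the construction): take ϕ=dz^2 and let γ be a segment of the real axis, oriented so that +√(ϕ)=dz. Parametrizing the vertical segment from x∈γ to z=x+iy as τ↦ x+iτ, the form +√(ϕ)=dz pulls back to i dτ, so
∫ Im(+√(ϕ))=∫_0^y dτ=y,
and z lies above γ exactly when y>0, below γ when y<0.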
Now let w be a wall on S(0). We always orient the walls in the outward direction, travelling away from the branch points. This orientation on the wall w picks out a unique positive sheet of √(ϕ) along w.
Let w be a wall. We define 𝒵^h(w) to be the unique component of C-S(0) containing the points that lie above w.
Note that the conformal equivalence in Proposition <ref> defined using the positive sheet +√(ϕ) sends w to the lower right bottom corner.
Let 𝒵^h be a component of C-S(0). Then 𝒵^h=𝒵^h(w) for at most two walls. In fact, w is unique if and only if 𝒵^h has the conformal type of the upper half plane.
Given a conformal equivalence of 𝒵^h with a finite horizontal strip (which sends ϕ to dz^2), w corresponds to either the right bottom boundary or the left upper boundary. Reversing the parametrization by z→ -z swaps the two.
Branch-cut data
We fix the branch-cut data. Let b be a zero of ϕ. By Proposition <ref>, there exists a neighbourhood U_b of b and a biholomorphism (U_b,b,ϕ)≃ (D,0,zdz^2) whose germ at b is unique up to a phase factor of e^2π i k/3,k=0,1,2. Choose a phase factor once and for all and introduce a branch cut on the negative real axis.
Label the wall corresponding to the positive ray ℝ_>0e^i· 0 by w_0, the wall corresponding to ℝ_>0· e^i· 2π/3 by w_1, and the wall corresponding to ℝ_>0· e^i4π/3 by w_-1. Give ± labels for the two sheets of √(ϕ), + and - with respect to the branch cut. Let v be a positive tangent vector along a wall. If +√(ϕ)(v) is positive, we label the wall -+. If -√(ϕ)(v) is positive, we label the wall +-. So we see that at each vertex of the spectral network S(0), the three walls are labelled +-, +- and -+. In particular, w_0 is now labelled -+ and w_1 and w_-1 are labelled +-. We do this for each of the zeroes of ϕ. See Figure <ref>.
Chamber sheet data
We fix the “positive sheet" of √(ϕ) over each component of C-S(0). Let 𝒵^h be a component of C-S(0) and choose an oriented generic horizontal trajectory γ(𝒵^h). This orientation picks out a positive sheet of √(ϕ) along γ. We use the positive sheet of √(ϕ) with respect to γ(𝒵^h) to identify 𝒵^h with the corresponding horizontal subdomain in ℂ. From now on, we'll write 𝒵^h(δ;E)=C(δ;E)∩𝒵^h, and we will abuse notation and let 𝒵^h also denote its representative as a horizontal subdomain in ℂ. On the other hand, when we write 𝒵^h(w) for w a wall, we will find its representative as a horizontal subdomain in ℂ, using the trivialization induced from the positive sheet of √(ϕ) along w. Note that there exists a single wall w on the boundary of the closure of γ(𝒵^h) respecting the choice of +√(ϕ) making 𝒵^h=𝒵^h(w).
Path groupoid data
We now fix the path groupoid base points 𝒫_C and the path groupoid generators. Let 𝒱^v be a component of 𝒱(δ;E), and let w be its core wall. On this component, we take the trivialization induced by the positive sheet of √(ϕ) with respect to w. For each wall w, we choose a point b(w)∈𝒱(δ;E) and the adjacent points b^u(w)=b(w)+iη(w) and b^d(w)=b(w)-iη(w) for some η(w)>0. We choose them in a way that the adjacent points contained in a component 𝒵^h(δ;E) are connected by either a vertical or a horizontal arc fully contained in 𝒵^h(δ;E). We can always arrange this by taking δ much smaller than min (b-a)/2, where the minimum is taken over all the horizontal strips 𝒵(a,b) as in Proposition <ref>.
Having made these choices, we choose the set
𝒫_C:={b^∙(w): w is a wall,∙=u,d},
to be the set of base points for the path groupoid of C̃.
Let w be a wall, and let w' be a wall that lies on the left bottom boundary of 𝒵^h(w). The arc α(w,w') is the unique horizontal arc contained in 𝒵^h(w)∩ C(δ;E) connecting b^u(w) to b^d(w'). The arc α(w) is the shortest vertical arc connecting b^d(w) to b^u(w). The arc γ(w,w”) is the unique vertical arc contained in 𝒵^h(w) connecting b^u(w) to b^d(w”) where w” is the wall on the right top boundary of 𝒵^h(w). For any other w', we set α(w,w') and γ(w,w') to be the emptyset.
We then set the path groupoid generators to be
{α(w,w')^± 1,γ(w,w')^± 1,α(w)^± 1: w, w' a wall on S(0)}.
§.§.§ Floer data
We now study the passive continuation strips associated to fibre parallel transports. To define the moduli spaces, we fix a regular Floer datum for the pair tΣ_ϕ and F_z for z∈ C(δ;E) and 0<t<t_0(δ;E). We start with the following lemma:
There exists an auxiliary function ρ:[1,∞)→ [1,∞) satisfying ρ(r)=r for r≫ 1 such that u is a J_g^ϕ_δ-strip bounded between tΣ_ϕ and F_z if and only if it is a J_con-strip bounded between tΣ_ϕ and F_z, where J_con is the conical deformation of J_g^ϕ_δ obtained using ρ.
Choose a smooth, positive increasing function ρ:[1,∞)→ [1,∞) such that ρ(r)=1 for r<3 and ρ(r)=r for r>5. Let J_con be the ρ-conically deformed almost complex structure. By Lemma <ref>, since J_con is of general contact type, the discs lie in D_2.5^∗C̃, where J_con=J_g^ϕ_δ. This finishes the proof.
For z∈ C(δ;E) and 0<t<t_0, the Floer datum (tΣ_ϕ,F_z,J_con) is regular.
By Theorem <ref> and Lemma <ref> we see that there are no non-trivial J_con holomorphic strips bounded between F_z and tΣ_ϕ for z∈ C(δ;E) and 0<t<t_0. So all the strips are constant, which are regular because local configurations near the intersection points coincide with the intersection of ℝ^2 and iℝ^2 in ℂ^2 equipped with the standard complex structure.
We now fix some 0<t<t_0 for the rest of the section. Following Corollary <ref>, we set the Floer datum to be (tΣ_ϕ,F_z,J_con).
§.§.§ Moduli problem for arcs α(w)
We now study the moduli problem associated to Floer-theoretic parallel transports along the arcs α(w). As before, let w be a wall and let 𝒱^v(w) be the component of 𝒱(δ;E) containing w as its core.
Choose a bump function ξ: 𝒱^v(w)→ [0,1] such that ξ=1 in a small neighbourhood of the arc α(w). Trivialize 𝒱^v(w) using the flat coordinate ∫√(ϕ) with respect to the outward orientation on the wall w. The outward orientation allows us to order the lifts of z∈𝒱^v(w) to z^±. Note that the canonical ordering introduced in Proposition <ref> agrees with the ordering of the lifts z^±.
Consider the Hamiltonian S^v=ξ(x,y)p^y. Recall that 2η(w) is the distance between b^d(w) and b^u(w). Let χ^v denote the time-2η(w) flow of the Hamiltonian isotopy generated by S^v. The flow χ^v has the following property:
Σ_ϕ and its ℝ_>0-rescalings are invariant under χ^v.
The Hamiltonian vector field is given by:
X_S^v=-p^y(∂ξ/∂ x∂/∂ p^x+∂ξ/∂ y∂/∂ p^y)+ξ(x,y)∂/∂ y.
However, on T^∗(𝒱^v(w)), Σ_ϕ equals {p^x=± 1, p^y=0}. So the vector field X_S^v restricts there as ξ(x,y)∂/∂ y, the flow of which preserves the set {p^x=± 1,p^y=0}.
By Corollary <ref>, the Floer datum (tΣ_ϕ,F_b^u(w),J_con) and (tΣ_ϕ,F_b^d(w),J_con) are all regular. Let J^short be a uniformly admissible family of almost complex structures on 𝒵 such that J^short(s,τ)=J_con for s≪ 0 and J^short(s,τ)=(χ^v)^∗J_con for s≫ 0. Let ℳ^short(w) be the moduli space of J^short-holomorphic maps u:𝒵→ T^∗C̃ satisfying the following boundary conditions
u(s,0)⊂ tΣ_ϕ
u(s,1)⊂ F_b^d(w)
lim_s→ -∞ u(s,τ)∈ F_b^d(w)∩ tΣ_ϕ
lim_s→ +∞ u(s,τ)∈ F_b^d(w)∩ tΣ_ϕ
By Lemma <ref>, ℳ^short coincides with the moduli space of passive continuation strip equation associated to tΣ_ϕ and χ^v. We choose a generic J^short so that ℳ^short is transversely cut out. We have the decomposition
ℳ^short=ℳ^short,diag(w)⊔ℳ^short,nondiag(w)
where ℳ^short,diag(w) is the moduli of passive continuation strips that travel from (tb^d(w))^± to (tb^d(w))^±, and ℳ^short,nondiag(w) is the moduli of passive continuation strips that travel from (tb^d(w))^± to (tb^d(w))^∓. Let ℳ^short,-(w) denote the moduli space of continuation strips that travel from (tb^d(w))^+ to (tb^d(w))^-.
ℳ^short,diag(w) consists of constant maps and ℳ^short,-(w) is empty.
If u∈ℳ^short,diag(w), then lim_s→ -∞ u(s,τ)=lim_s→ +∞ u(s,τ)=(tb^d(w))^±. This implies that ∫ u^∗ω=0. Since the energy vanishes and J^short is ω-compatible, the moduli space ℳ^short,diag(w) must consist of constant maps. By the same argument, the energy of discs in ℳ^short,- must be negative. By positivity of energy, ℳ^short,-(w) must be empty.
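A sketch of the energy computation implicit in the last step, with the sign conventions of the proof of Corollary <ref> and using that λ_re restricts to zero on the cotangent fibre while its primitive on tΣ_ϕ is tW: for a strip u∈ℳ^short,-(w) travelling from (tb^d(w))^+ to (tb^d(w))^-,
∫ u^∗ω=t(W((b^d(w))^-)-W((b^d(w))^+))<0,
since W((b^d(w))^+)-W((b^d(w))^-)>0 for the ordering fixed above (the point b^d(w) is joined to the core wall w by a vertical segment, so Corollary <ref> applies).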
§.§.§ Moduli problem for arcs contained in C(δ;E)
We now study the moduli problem associated to Floer-theoretic parallel transports along arcs contained in C(δ;E). As before, let 𝒵^h be a horizontal chamber and let 𝒵^h(δ;E)=C(δ;E)∩𝒵^h. Let +√(ϕ) be the positive sheet of ϕ picked out by γ(𝒵^h). Let z^± be the corresponding ordering on the lifts of z∈𝒵^h(δ;E).
Choose a compactly supported smooth positive bump function ρ(𝒵^h):𝒵^h→ [0,1] once and for all such that ρ(𝒵^h)=1 on C(δ;E)∩𝒵^h and ρ=0 on 𝒵^h∩ U((2+η)δ). We will consider the following two Hamiltonians.
H^h := ρ(𝒵^h)p^x
H^v := ρ(𝒵^h)p^y.
By Proposition <ref>, we see that we have the canonical ordering on the lifts of z to Σ_ϕ, for z∉ S(π/2). On the other hand, we also have the ordering on the lifts given by the choice of +√(ϕ)|_𝒵^h. For convenience, we may regard the first type of ordering as an energy ordering, and the second type of ordering as a sheet ordering. Note that for points contained in the right-hand side of 𝒵^h-S(π/2), the sheet ordering and the energy ordering coincide, but they become opposite when we cross S(π/2).
For z∈𝒵^h(δ;E), let α^h_z and α_z^v be arc-length parametrized horizontal and vertical arcs, respectively, beginning at z. Let ψ^h_s denote the time-s flow of the constant Hamiltonian H^h. Similarly, let ψ^v_s denote the time-s flow of the constant Hamiltonian H^v. Note that the time-s flow ψ^∙_s sends F_z to F_α^∙_z(s),∙=v,h. From now on, the superscript ∙ will denote either v or h. Given z∈𝒵^h(δ;E), we will only consider those s∈ℝ such that α^∙_z(s)∈𝒵^h(δ;E).
We will need the following formula.
Let z∈𝒵^h(δ;E). The primitive W^∙_s of λ_re on (ψ^∙_s)^-1(t Σ_ϕ) at tz^± satisfies:
W^∙_s(tz^±)=tW(ψ^∙_s(z^±)).
In particular, W^h_s(tz^±)=tW(z^±)± ts and W^v_s(tz^±)=tW(z^±).
This follows from Lemma <ref>.
(W^∙_s∘ (ψ^∙_s)^-1)(tz^±) =tW(z^±)+∫_0^s[(H^∙∘ (ψ^∙_σ)^-1)(tz^±)-λ_re(X_H^∙)((ψ^∙_σ)^-1(tz^±))] dσ.
One can check directly that the integrand in the second term on the right hand side of (<ref>) vanishes. The next statement follows from the observation that W depends only on the horizontal distance to a zero of ϕ on the boundary of 𝒵^h; ψ_s^h changes this distance by s whereas ψ_s^v leaves this distance invariant. (see Lemma <ref>)
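Concretely (a sketch of this last observation): work in the flat coordinate w=x+iy=∫√(ϕ) on 𝒵^h, on the region where the bump function ρ(𝒵^h) equals 1, so that dW pulls back to ±Re(dw) on the two sheets, ψ^h_s(x,y,p)=(x+s,y,p) and ψ^v_s(x,y,p)=(x,y+s,p). Then
W(ψ^h_s(z^±))-W(z^±)=±∫ Re(dw)=± s along the horizontal arc from z to ψ^h_s(z), while W(ψ^v_s(z^±))-W(z^±)=±∫ Re(dw)=0 along the vertical arc,
which combined with the first equality of the lemma gives W^h_s(tz^±)=tW(z^±)± ts and W^v_s(tz^±)=tW(z^±).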
Choose a smooth non-decreasing elongation function l:(-∞,∞)→ [0,1] once and for all such that l(s)=0 for s≤ -2 and l(s)=1 for s≥ 2. Write:
J^∙_±ϵ(s,τ)=(ψ_±ϵ l(s)^∙)^∗J_con,
and consider the moduli spaces ℳ^∙_±ϵ,z of solutions of
∂̅_J^∙_±ϵ u=0
u(s,0)⊂ (ψ^∙_±ϵ l(s))^-1(tΣ_ϕ)
u(s,1)⊂ F_z
lim_s→ -∞ u(s,τ)∈ F_z ∩ tΣ_ϕ
lim_s→ +∞ u(s,τ)∈ F_z ∩ tΣ_ϕ.
Intuitively, these are the continuation strips that contribute to the parallel transport along the paths s↦α_z^∙(±ϵ s),∙=v,h. Keep in mind that the flow ψ^∙_ϵ l(s) leaves tΣ_ϕ and J_con invariant in a neighbourhood of z. Hence, F_z∩(ψ^∙_±ϵ)^-1(tΣ_ϕ)=F_z ∩ tΣ_ϕ. As before, we split the moduli space ℳ^∙_±ϵ,z into the diagonal part and the non-diagonal part:
ℳ^∙_±ϵ,z=ℳ^diag,∙_±ϵ,z⊔ℳ^nondiag,∙_±ϵ,z.
The diagonal part consists of solutions of (<ref>) that travel from tz^± to tz^± with respect to the sheet ordering. The non-diagonal part consists of solutions of (<ref>) that travel from tz^∓ to tz^±. The moduli space ℳ^nondiag,∙_±ϵ,z further decomposes into ℳ^+,∙_±ϵ,z and ℳ^-,∙_±ϵ,z consisting of passive continuation strips travelling from z^- to z^+ and z^+ to z^-, respectively.
We now state the main analytic result of Section <ref>.
Given z∈𝒵^h(δ;E), there exists some ϵ(z)>0 such that for any 0<ϵ<ϵ(z), the following holds.
* The moduli spaces ℳ^diag,∙_±ϵ,z consist of constant strips.
* The moduli spaces ℳ^nondiag,∙_±ϵ,z are empty.
In particular, the moduli spaces ℳ^∙_±ϵ,z are regular.
For the proof of Proposition <ref>, we will need the following statement:
Let z∉ S(π/2) and let ϵ_n be a sequence of positive real numbers converging to zero. Let u_n∈ℳ^+,∙_±ϵ_n,z be a sequence of non-constant J^∙_ϵ_n-holomorphic strips with respect to the energy ordering. Then u_n Gromov converges to a non-constant broken strip bounded between F_z and tΣ_ϕ.
We will show Proposition <ref> in Section <ref>.
We first treat the case of ℳ^v_ϵ,z. The case of ℳ^v_-ϵ,z is entirely analogous. We prove the first assertion in Proposition <ref>. Let u be a solution of the equation
∂̅_J^v_ϵ u=0
u(s,0)⊂ (ψ^v_ϵ l(s))^-1(tΣ_ϕ)
u(s,1)⊂ F_z
lim_s→ -∞ u(s,τ)= tz^±
lim_s→ +∞ u(s,τ)= tz^±.
The action of the pair ((ψ^v_ϵ)^-1(tΣ_ϕ),F_z) is given by W_ϵ^v and the pair (tΣ_ϕ,F_z) by tW=W_0^v. By Lemma <ref>, the geometric energy is equal to
Area(u)= W_ϵ^v(tz^±)- W_0^v(tz^±) +ϵ∫_-∞^∞ H^v(u(s,τ))l'(s)ds=ϵ∫_-∞^∞H^v(u(s,τ))l'(s)ds.
For small enough ϵ, (ψ^v_ϵ l(s))^-1(tΣ_ϕ)∩ supp(H^v) lies inside D_1^∗C̃ for all s. Hence sup H^v(u) is bounded above. So as ϵ→ 0, (<ref>) uniformly converges to zero.
Now the Hamiltonian isotopy ψ^v_s leaves tΣ_ϕ and J_con invariant in some neighbourhood of F_z, since the generating function is locally of the form ±ρ(x,y)p^y. In this neighbourhood, the configuration (T^∗C̃,J^v_ϵ,(ψ^v_ϵ l(s))^-1(tΣ_ϕ),F_z) is isometric to the standard configuration (T^∗ℝ^2,J_std,{p^x=± t, p^y=0},{x=y=0}). In the latter configuration, the equation (<ref>) becomes the standard J_std-holomorphic strip equation with non-moving boundary conditions. Applying the boundary estimate, we see that the disc cannot escape such a neighbourhood of F_z, for small enough ϵ. However, in the standard configuration, there are no non-constant J_std-holomorphic strips. So we conclude that for small enough ϵ, ℳ^diag,v_ϵ,z must consist of constant strips.
We now show that the non-diagonal part of ℳ^v is empty and hence prove the second assertion. We first treat the case z∈ S(π/2). The difference W(z^+)-W(z^-) of the primitive between the two lifts vanishes by Lemma <ref>. Then the difference of the action of the intersection point vanishes at z, and the geometric energy is again of size O(ϵ). By the previous observation, it follows that for some ϵ(z)>0, all the passive continuation strips associated to the path s↦α^v_z (s) for 0<ϵ<ϵ(z) and s∈ [0,±ϵ] are the constant strips.
Now suppose that z∈𝒵^h(δ;E)-S(π/2). Without loss of generality, we assume that z lies on the right-hand side of 𝒵^h(δ;E)-S(π/2). We see that ℳ^-,v_ϵ,z must be empty for small enough ϵ by Equation (<ref>) below and positivity of energy.
We now show that ℳ^+,v_ϵ,z is empty for small enough ϵ for z∉ S(π/2). Suppose there exists a strictly decreasing sequence of positive real numbers 0<ϵ_n<1,ϵ_n→ 0, such that the moduli spaces ℳ^+,v_ϵ_n,z are all non-empty. We have a sequence of J^v_ϵ_n-holomorphic strips u_n satisfying the equation
∂̅_J^v_ϵ_n u=0
u(s,0)⊂ (ψ^v_ϵ_n l(s))^-1(tΣ_ϕ)
u(s,1)⊂ F_z
lim_s→ -∞ u(s,τ)=t z^±
lim_s→ +∞ u(s,τ)= tz^∓ .
However, by Proposition <ref>, the sequence of the strips u_n Gromov converges to a non-constant broken J_con-strip between F_z and tΣ_ϕ. By Theorem <ref>, such strips cannot exist, a contradiction.
For the case ∙=h, the proof is entirely analogous except that W^h_ϵ(tz^±)-W^h_0(tz^±) is now equal to tϵ, by Proposition <ref>. Hence we get
Area(u)= tϵ+ϵ∫_-∞^∞H^h(u(s,τ))l'(s)ds.
The same monotonicity argument applies for diagonal continuation strips and for z∈ S(π/2). We treat ℳ^nondiag,h_ϵ,z as before, using (<ref>) and Proposition <ref>. This finishes the proof of Proposition <ref>.
§.§.§ Proof of Proposition <ref>
We now proceed with the proof of Proposition <ref> for ℳ_ϵ_n,z^+,∙. The case where ϵ_n is replaced by -ϵ_n is entirely analogous. We first establish lower and upper bounds for energy. Recall that we had the following expression for the geometric energy
∫_𝒵‖ du_n‖^2_J^∙_ϵ_n=∫_𝒵 u_n^∗ω= W_ϵ_n^∙(tz^+)- W_0^∙(tz^+)+ϵ_n ∫_-∞^∞ H^∙(u_n(s,τ))l'(s)ds.
From Lemma <ref>, we see that (<ref>) is bounded above by 2E+ϵ_n C for some C>0 and bounded below by ħ=t/2(W(z^+)-W(z^-)) for small enough ϵ_n. Note that this lower bound depends on the fixed t.
We now carry out the “blow-up" analysis estimate:
Let u_n be a sequence of J^v_ϵ_n-holomorphic strips with moving boundary conditions as above. Let p>2. Then there exists a constant C=C(p) such that
‖ Du_n‖_∞≤ C.
This is standard blow-up analysis so we sketch the proof. See <cit.> and <cit.> for details. Suppose by contradiction we have a subsequence of u_n and points v_n=(s_n,τ_n)∈ (-∞,∞)× [0,1] such that Du_n(v_n) blows up. Suppose v_n converges to a point v_0∈ (-∞,∞)× [0,1]. Note that from the set-up, the moving data appears only on [-2,2]× [0,1].
We first homogenize the problem. Since the family J^v_ϵ_n is uniformly geometrically bounded, the images of the holomorphic discs u_n are contained in an a priori compact subset K of T^∗C̃. We consider the manifold [-2,2]× [0,1]× K and the compact submanifold
𝒦_n:={(s,p):s∈ [0,1], p∈ψ^-1_ϵ_n l(s)(tΣ_ϕ)∩ K}
which is totally real with respect to (j⊕ J_ϵ_n^v(s,τ)). It is Lagrangian outside a compact subset since ψ_s has compact support on tΣ_ϕ. Furthermore, as ϵ_n→ 0, the manifolds 𝒦_n converge in the C^∞ topology to the compact Lagrangian submanifold [-2,2]× (tΣ_ϕ∩ K)⊂ [-2,2]× [0,1]× K.
The graph (s,τ)↦ (s,τ,u_n(s,τ)) is j⊕ J^v_ϵ_n holomorphic. Identify the neighbourhood of v_0 with an open subset of the upper half-plane. By Hofer's lemma <cit.>, we have sequences c_n∈ℍ, positive real numbers e_n>0, and d_n=‖ du_n(v_n)‖ such that
c_n→ v_0, ‖ Du_n‖_B_e_n(c_n)≤ 2d_n, e_n→ 0, e_n d_n→∞ .
Consider the zoomed-in curve z↦ (c_n+z/d_n,u_n(c_n+z/d_n)). We split into two cases: either Im(c_n)d_n is unbounded or is bounded. These are cases I and II, respectively, in <cit.>. For the first case, sphere bubbles develop, but we know that they cannot exist since [ω]∘π_2(T^∗C̃)=0. For the second case, disc bubbles may develop localised on a boundary point. Such a boundary point either lies on the moving-boundary or the non-moving boundary. Suppose the boundary point lies on the non-moving boundary, and s_n stays bounded. Then the boundary bubble is a J_con-disc bounded in tΣ_ϕ or F_z. However, such discs cannot exist by exactness. So we arrive at a contradiction.
Suppose now the boundary bubble develops at a point on the moving boundary. Since H_n(s,τ)=ϵ_nH^v(s,τ)→ 0 and 𝒦_n→ [-2,2]× tΣ_ϕ, in the C^∞-topology, the configuration (j⊕ J^v_ϵ_n(s,τ),[-2,2]× [0,1]× K,𝒦_n) converges to (j⊕ J_con,[-2,2]× [0,1]× K,tΣ_ϕ∩ K) in the C^∞-topology.
By Gromov compactness for totally real submanifolds (<cit.> and Remark <cit.>), the disc bubble is a j⊕ J_con-disc bounded in [-2,2]× tΣ_ϕ. Since bubbles localize, the projection of the bubble to the [-2,2]× [0,1] component must be constant. Furthermore, projecting to T^∗C̃, we see that the T^∗C̃-component of the disc must be constant since the Lagrangian tΣ_ϕ is exact. So no bubbles can develop for s_n bounded.
Now we treat the case where s_n is unbounded. Choose ξ>0 such that the ξ-neighbourhood of v_n does not intersect [-2,2]× [0,1]. Then consider the translated strips (ψ_ϵ_n∘ u_n)(s-s_n,τ_n) over [-ξ,ξ]× [0,1] which is now J_con-holomorphic and has non-moving boundary conditions on tΣ_ϕ and F_α^v(ϵ_n). Arguing as before, we see that no bubbles can develop by exactness. This finishes the proof.
We conclude:
From Lemma <ref>, we see that the sequence u_n is equicontinuous on any compact subset of (-∞,∞)× [0,1]. By Arzela-Ascoli, given some sequence N_n∈ℝ, the translated localised strips
ψ_ϵ_n∘ u_n(s-N_n,τ)|_[-R,R]× [0,1]
admit a subsequence that converges uniformly for all R>0. The Arzela-Ascoli limit in the C^∞_loc-topology must be a J_con-holomorphic strip between tΣ_ϕ and F_z. We call such an Arzela-Ascoli limit a local strip. [Smoothness is given by elliptic regularity. Showing that the endpoints are indeed the intersection points between F_z and tΣ_ϕ requires the exponential decay estimate at the transverse intersection points.]
Consider a chain of such non-constant local strips (see <cit.>). By <cit.>, the length of the chain must be a priori bounded because of uniform upper and lower bound on energy of u_n, and by <cit.>, there exists a maximal chain. From <cit.>, we see that the strip-like ends of the local limits are consecutively glueable. Furthermore, according to <cit.>, the total energy of the maximal chain agrees with the limit of the geometric energy of u_n. Hence by the uniform lower bound on the energy of u_n, the broken strip must have positive total energy and therefore must be non-constant. This finishes the proof.
§.§.§ Subdividing path groupoid generators
With Propositions <ref> and <ref> established, we subdivide the arcs α(w,w') and γ(w,w') (see Definition <ref>) into smaller paths. Regard α(w,w') as a closed bounded interval in ℝ. By Proposition <ref>, there exists an open cover I_z of α(w,w') indexed by z∈α(w,w') such that if z'∈ I_z, then the passive continuation strips from z to z' are all constant. By Lebesgue's number lemma, there exists some δ(w,w')>0 such that any set of diameter <δ(w,w') is contained in some I_z. Take a partition of the interval α(w,w') into segments of length <δ(w,w'). Each subinterval of the partition belongs in some I_z. Choose one such I_z for each subinterval once and for all. By adding these points z, further refine the partition, and obtain a sequence of points b(w,w')^0,....,b(w,w')^m(w,w'), which are in increasing order regarded as points in the interval α(w,w'), with b^u(w)=b(w,w')^0 and b^d(w')=b(w,w')^m(w,w'). Then the points have the following property that:
for 0≤ i<m(w,w'), there exists 0≤ j≤ m(w,w') such that the passive continuation strips from b(w,w')^i to b(w,w')^j and b(w,w')^i+1 to b(w,w')^j are all constants.
Similarly, do the same for γ(w,w”) and obtain a sequence of points c(w,w”)^0,...,c(w,w”)^k(w,w”) which are in increasing order regarded as points in the interval γ(w,w”), with
b^u(w)=c(w,w”)^0 and b^d(w”)=c(w,w”)^k(w,w”) so that:
for 0≤ i<k(w,w”), there exists 0≤ j≤ k(w,w”) such that the passive continuation strips from c(w,w”)^i to c(w,w”)^j and c(w,w”)^i+1 to c(w,w”)^j are all constants.
We will now write b(w,w')^k→ b(w,w')^l for the horizontal arc between b(w,w')^k and b(w,w')^l contained in α(w,w') for w,w' walls and 0≤ k,l≤ m(w,w'), and similarly for c(w,w')^i→ c(w,w')^j for 0≤ i,j≤ k(w,w').
§.§ Computation of family Floer cohomology local system
In this section, we compute the family Floer cohomology local system and prove Theorem <ref>. In Section <ref>, we define the grading data for the spectral curve. In Section <ref>, we introduce a good open cover and the local framing data to define spin structures on the base and the spectral curve as a Čech cocycle. In Section <ref>, we use spin structures to orient the moduli spaces of continuation strips and derive the sign comparison formula (Lemma <ref>). In Section <ref>, we use the sign comparison formula to prove Theorem <ref>.
§.§.§ Grading
Let I be the complex structure on C. We have the following almost complex structure on T^∗C̃
Ĩ:=[ I 0; 0 I^t ]
with respect to TT^∗C̃=H⊕ V. Let ω_I be a non-degenerate 2-form defined by
ω_I=g^S(I ·, ·)
where g^S is the Sasaki metric on T^∗C̃. Let ω_Im denote the imaginary part of the holomorphic volume form Ω, then the 2-form
ω_I+iω_Im
is non-degenerate and gives a preferred section of ω_T^∗C̃^⊗ 2.
The corresponding phase for Σ_ϕ is constant since ω_Im|Σ_ϕ=0 and so we choose the grading function to be the constant map 0. Similarly, we choose the grading function on any of the fibres to be the constant map 0 as well. This implies that the chain complex CF(Σ_ϕ,F_z) is concentrated in degree 0, for z∈ C^∘ (recall that C^∘ is the complement of the zeroes and the poles of ϕ).
§.§.§ Spin structures
Open cover data
With respect to the points b(w),b^u(w),b^d(w), b(w,w')^i and c(w,w')^j, we choose a finite subset M_C of points in C and a good open cover {G_α}_α∈ M_C such that the following conditions hold (see Figure <ref>).
* The critical points of ϕ, and the points b(w), b(w,w')^i, c(w,w')^j, for w,w' a wall in S(0) and 0<i<m(w,w'), 0<j<k(w,w'), are all contained in M_C.
* The open set G_α contains the point α, and does not contain any other β∈ M_C.
* The open set G_α for α∈ zero(ϕ) is contained in U(δ).
* The open set G_α for α∈ pole(ϕ) is contained in a small conformal coordinate chart near α as in Proposition <ref>. So is any other G_β such that G_α∩ G_β≠∅.
* For α∉ crit(ϕ), the open set G_α intersects at most one wall and the covering π:Σ_ϕ→ C is trivial over G_α.
* For w a wall, the open set G_b(w) contains the closed minimal horizontal arc between b^u(w) and b^d(w) and is contained in 𝒱^v(w). The horizontal arc does not intersect any other G_α.
* The open sets G_b(w,w')^i and G_c(w,w')^j, for w and w' walls on S(0), are contained in C(δ;E).
* For each b∈ zero(ϕ) and a wall w emanating from b, there exists a unique q(w)∈ M_C∩ w such that (G_q(w)∩ G_b)≠∅.
Recall that at each branch point b∈ zero(ϕ), we made a choice for the branch cut, and that for each component 𝒵^h of C-S(0), we made a choice for an oriented generic horizontal trajectory γ(𝒵^h) which determined a positive sheet of √(ϕ) over 𝒵^h. With respect to this, we choose the following local orthonormal framing data:
We define the good local frame on the open cover {G_α}_α∈ M_C to consist of the following data:
* For α∈ zero(ϕ), we take the conformal equivalence (G_α,ϕ)≃ (ℂ,zdz^2) with respect to the choice of the branch cut. We take the local frame given by ⟨d/dx,d/dy⟩ on G_α with respect to z=x+iy.
*
For α such that G_α intersects a wall, take the local conformal chart defined using the positive sheet +√(ϕ) along w. We take the pullback of the local conformal frame given by ⟨d/dx,d/dy⟩ with respect to z=x+iy. The frame is orthonormal outside U_δ∩ G_α, but it is only orthogonal on U(δ)∩ G_α. On U(δ)∩ G_α, we conformally normalize and consider the resulting frame instead.
* For those α such that G_α lie in the interior of some 𝒵^h, take the sheet of √(ϕ) induced from the chosen orientation of a generic horizontal trajectory γ(𝒵^h). We take the pullback of the conformal frame ⟨d/dx,d/dy⟩ for z=x+iy, with respect to the orientation of the trajectory. The frame is orthonormal outside U_δ∩ G_α, but it is only orthogonal on U(δ)∩ G_α. On U(δ)∩ G_α, we conformally normalize and consider the resulting frame instead.
A choice of a local orthonormal frame ⟨ e_1,e_2 ⟩ gives a local trivialization of the orthonormal frame bundle by sending A∈ SO(2) to the orthonormal frame ⟨ Ae_1,Ae_2 ⟩. Hence our good local frame gives rise to a Čech cocycle in Č^1({G_α},α∈ M_C;SO(2)). From now on, let P_SO(2)(z) denote the SO(2)-torsor of orthonormal frames of T_z C and let ϕ_αβ denote the SO(2)-transition functions induced from the good choice of the local frame data (Definition <ref>).
Spin structures
Using the open cover {G_α;α∈ M_C}, we now take spin structures on C̃ as a Spin(2)-Čech cocycle in terms of {G_α;α∈ M_C}.
A spin structure 𝔰 on C̃ is a Čech cocycle {ϕ̃_αβ} in the group Č^1({G_α;α∈ M_C};Spin(2)) lifting the cocycles {ϕ_αβ}.
Choose a spin structure 𝔰. The corresponding Spin(2)-bundle is the bundle obtained by glueing the trivial copies of Spin(2)× G_α with respect to ϕ̃_αβ. The fibrewise double cover structure is given by the commutative diagram in which the covering map G_α× Spin(2)→ G_α× SO(2) factors through ⋃_z∈ G_αP_SO(2)(z), via the isomorphism ⋃_z∈ G_αP_SO(2)(z)≃ G_α× SO(2) induced by the chosen local frame.
We will still denote the resulting Spin(2) bundle as 𝔰. We now define the induced spin structure on each of the fibre. Identify the orthonormal coframes on the vector space T^∗_z C with the orthonormal frames on T_z C using the metric g, and let P_SO(2)(z)^-1 denote the fibre of the orthonormal coframe bundle over z. Then P_SO(2)(z)^-1 defines a trivial local frame on the cotangent fibre T_z^∗C regarded as a submanifold. In other words, the bundle F_z× P_SO(2)(z)^-1 defines a trivial SO(2) bundle over F_z.
Let z∈C̃. The spin structure 𝔣_z is the trivial Spin(2)-bundle over T^∗_z C whose fibre torsor is the fibre of the bundle P_Spin(2)C over z; it is mapped to the orthonormal coframe torsor P_SO(2)^-1(z) through the chain Spin(2)→ SO(2)≃ P_SO(2)(z)→ P_SO(2)^-1(z), where the last identification uses the metric g to convert orthonormal frames into orthonormal coframes.
We will need the following technical lemma for later computations.
Let γ:[0,1]→ C be a smooth path with γ(0),γ(1)∈ M_C. Consider a complex vector bundle γ^∗(T^∗C) over [0,1] and the real subbundle given by F_γ(s). Consider the spin structure on F_γ(s) given by P_s=𝔣_γ(s), and let γ^-1(G_α),α∈ M_C be the resulting open cover of γ.
Trivialize P_s by pulling back the trivialization of P_SO(2)(z)^-1 over {G_p}. Then the transition functions are given by γ^∗ψ_αβ=ψ_αβ∘γ.
Spin structures—the spectral curve
We lift the good open cover {G_α;α∈ M_C} to a good open cover {G̃_α̃;α̃∈π^-1(M_C)}. To do this, for α∉ crit(ϕ), we take G̃_α̃ for α̃∈π^-1(α) to be the component of π^-1(G_α) containing α̃. For b∈ zero(ϕ), we simply take the preimage π^-1(G_b).
We explain how the explicit choice of local orthonormal frames (Definition <ref>) gives rise to spin structures on the spectral curve. We first define a convenient metric on Σ_ϕ. Let π^∗(g_δ)_reg be a conformal desingularization of the pullback metric π^∗(g_δ) such that π^∗(g_δ)_reg agrees with π^∗(g_δ) outside of π^-1(U(2δ)) and agrees with the pushforward of the metric π_∗dp^z^2 with respect to the map p^z→ ((p^z)^2,p^z) mapping from the p^z-plane to a local germ of a branch point over U(δ). The orthonormal frame bundle on Σ_ϕ, with respect to the metric π^∗(g_δ)_reg restricted to Σ_ϕ^∘, is isomorphic to the orthonormal frame bundle with respect to π^∗(g_δ). Hence the pullback spin structure π^∗𝔰 gives rise to a spin structure on (P_Σ_ϕ^∘SO(2),π^∗(g_δ)_reg). Note that we are regarding the pullback metric bundle (π^∗(TC^∘),π^∗(g_δ)) on Σ_ϕ^∘ as a subbundle of TΣ_ϕ with respect to the isomorphism dπ:T_z̃Σ_ϕ^∘→ T_π(z̃)C^∘ for z̃∈Σ_ϕ^∘.
We now assign a Čech cochain representing π^∗𝔰. For α̃∈π^-1(M_C)-π^-1(zero(ϕ)), we can pull back the orthonormal frames in Definition <ref> and take a suitable conformal normalization of the basis to define the SO(2)-transition functions ψ_α̃β̃=ϕ_αβ∘π. For α̃ in π^-1(zero(ϕ)), we take the pushforward of the trivial frame ⟨d/dx,d/dy⟩ on the (p^z)-coordinate with respect to the map (p^z)→ ((p^z)^2,(p^z)). This defines a principal SO(2)-bundle structure on the orthonormal frame bundle of (Σ_ϕ,π^∗(g_δ)_reg). The transition functions ψ_α̃β̃ and their spin lifts ψ̃_α̃β̃:G̃_α̃∩G̃_β̃→ Spin(2) for α̃,β̃∈π^-1(M_C)-π^-1(zero(ϕ)) now define a spin structure on Σ_ϕ^∘. We can regard this as the pullback of the spin structure 𝔰 over (P_Σ_ϕ^∘SO(2),π^∗(g_δ)_reg). Hence we get
Čech cocycles
ψ∈ Č^1(G̃_α̃,α̃∈π^-1(M_C)-π^-1(zero(ϕ));SO(2))
ψ̃∈ Č^1(G̃_α̃,α̃∈π^-1(M_C)-π^-1(zero(ϕ));Spin(2))
given by transition functions ψ_α̃β̃ and ψ̃_α̃β̃, respectively, such that the lifts are compatible with the double covering Spin(2)=U(1)→ SO(2)=U(1), z↦ z^2; that is, (ψ̃_α̃β̃)^2=ψ_α̃β̃ on G̃_α̃∩G̃_β̃.
We have the following lemma:
The pullback spin structure π^∗𝔰 on Σ_ϕ^∘ does not extend to Σ_ϕ.
The local frame on ℂ^∗ given by ⟨p^z/p^z,i·p^z/p^z⟩ for p^z∈ℂ^∗ maps to the frame
⟨ 2p^z,2p^zi⟩ in ℂ under the projection map π:p^z→ (p^z)^2 whose differential is dπ=2p^z. Take the unit circle S^1 in the p^z-plane. The trivial frame ⟨d/dx,d/dy⟩ gives rise to a section of (P_Σ_ϕ^∘ SO(2),π^∗(g_δ)_reg) which we regard as a constant map S^1→ 1. We regard the trivial spin structure on S^1 as the lift 1∈ S^1 of 1∈ S^1. On the other hand, the frame ⟨p^z/p^z,i·p^z/p^z⟩, for p^z∈ℂ^∗, restricted to the unit circle can be regarded as a map S^1→ S^1,z↦ z^-1. However, there is no lift of this map to Spin(2)→ SO(2), z↦ z^2. Hence we see that the induced spin structure on (P_Σ_ϕ^∘SO(2),π^∗(g_δ)_reg) is non-trivial when restricted to the unit circle in the p^z-plane, and hence does not extend to the whole of the spectral curve.
Recall that ℛ is the coefficient ring which is either ℤ or ℂ. We now reintroduce the notion of almost flat GL(1;ℛ)-local systems.
A cocycle ℬ∈Č^1({G̃_α̃}_α̃∈π^-1(M_C)∩Σ_ϕ^∘;ℤ) given by transition functions
ℬ_α̃β̃:G̃_α̃∩G̃_β̃→ GL(1;ℤ)
is a Čech almost flat GL(1;ℤ)-local system if the induced GL(1;ℤ)-bundle on Σ_ϕ^∘ has monodromy -1 along small loops encircling the ramification points.
At this point, we make the following choices.
* We fix a reference Čech almost flat GL(1;ℤ)-local system ℬ once and for all. Let Q_ℬ be the induced
GL(1;ℤ)-principal bundle. Then Q_ℬ comes equipped with the canonical flat Ehresmann connection since GL(1;ℤ) is discrete. The induced Koszul connection on the associated ℤ-bundle is then flat and so, together with the parallel transport maps Φ^ℬ, we have defined a path groupoid representation of a GL(1;ℤ)-local system which we still denote as ℬ. Observe that the stalks of ℬ at α̃∈π^-1(M_C)∩Σ_ϕ^∘ are now identified with ℤ and so with respect to these identifications, the parallel transport map Φ^ℬ lies in {± 1}.
* Given the pullback spin structure π^∗𝔰∈Č^1({G̃_α̃}:α̃∈π^-1(M_C)∩Σ_ϕ^∘;Spin(2)) and an almost flat GL(1;ℤ)-local system ℒ∈Č^1({G̃_α̃}_α̃∈π^-1(M_C)∩Σ_ϕ^∘,ℤ_2), we choose s̃ to be a cocycle in Č^1({G̃_α̃}_α̃∈π^-1(M_C);Spin(2)) extending the cocycle ℬ_α̃β̃ψ̃_α̃β̃.
* Given a path groupoid representation ℒ=(M_C,ℒ_α̃∈π^-1(M_C)∩Σ_ϕ^∘,ℒ(⋅)) of an almost flat GL(1;ℂ)-local system, let ℒ⊗ℬ be the GL(1;ℂ)-local system on Σ_ϕ^∘ induced from the tensor product of ℒ with the local system induced from ℬ. Since the monodromy around the ramification points is trivial, ℒ⊗ℬ extends to Σ_ϕ and defines a global GL(1;ℂ)-local system.
We will shortly see that any other choices of 𝔰̃ and ℒ⊗ℬ will yield an isomorphic local system. Note that as Principal Spin(2)-homogenous spaces, the fibre of 𝔰̃ is identified with the fibre π^∗𝔰 over x∈ M_C. For the rest of the section, we fix ℬ,ℒ,𝔰̃ and ℒ⊗ℬ.
§.§.§ Spin structures and orientation lines
Using the grading and the spin structure, we now follow <cit.> to define the ℤ-graded chain complex CF(Σ,F_z) over ℤ. For z∈{M_C-crit(ϕ)}∪𝒫_C, let p∈Σ⋔ F_z be an intersection point. We regard T_p Σ_ϕ and T_pF_z as linear subspaces in V_p:=T_p T^∗C̃ which we regard as a complex vector space of dimension 2. Furthermore, we regard (π^∗𝔰)_p and 𝔰̃_p as spin structures on the linear subspaces T_p Σ_ϕ and 𝔣_z as a spin structure on T_pF_z.
(<cit.>)
Let V be a complex vector space and let A be a grading on LGr(V). Then we call the triple (L,A,P) a linear Lagrangian brane, given a linear Lagrangian L in LGr(V), a grading A of L, and principal Spin(n)-torsor P on L equipped with an isomorphism
P×_spin(n)ℝ^n≃ L.
Since the grading functions for Σ_ϕ and F_z are both equal to zero, we see that the triples (T_pΣ_ϕ,0,π^∗𝔰_p), (T_p Σ_ϕ,0,𝔰_p), (T_p F_z,0,𝔣_z) form linear Lagrangian branes. We omit the grading function from this point onwards.
We now choose an explicit path of Lagrangians and spin structures. With respect to the good local frame (see Definition <ref>), we use the path of Lagrangian subspaces L_T(p) given by the subspaces of T_p(T^∗C̃) generated by cos (1/2π T)d/dx+ sin(1/2π T)d/dp^x and cos (1/2π T)d/dy+ sin(1/2π T)d/dp^y for T∈ [0,1]. This Lagrangian path has a constant grading since it is I_p-invariant.
We then use the following path of spin structures. Over (L_T(p),g_p), we fix the base point of the SO(2)-torsor of orthonormal frame over L_T(p) to SO(2) by identifying the basis
⟨cos (π/2T)d/dx+sin(π/2T) d/dp^x, cos (π/2T)d/dy+sin(π/2T) d/dp^y⟩
with the unit. This gives a trivialization of the SO(2)-bundle associated to L_T over [0,1] and we take the trivial Spin(2) bundle which gives a spin structure P_T over L_T. Notice then that (P_T)_0 is identified with 𝔰̃_p and (P_T)_1 is identified with 𝔣_z(p). We do the same for any ℝ_>0-rescaling of Σ_ϕ.
Observe that when z∈{M_C-crit(ϕ)}∪𝒫_C, the principal Spin(2)-homogeneous spaces π^∗𝔰_z and 𝔰̃_z are identified. So we choose the same path of Lagrangians, the same spin structure P_T over L_T, and the same pair of isomorphisms for each brane pair ((T_pΣ_ϕ,π^∗𝔰),(T_p F_z,𝔣_z)).
The path L_s of Lagrangians gives a boundary condition for the Cauchy-Riemann operator ∂̅_∇ on the upper half-plane ℋ. This gives rise to an abstract real line D_ℋ. Let us consider the space 𝒫(L_0,L_1) of all the paths between L_0 and L_1 which satisfy the grading condition. Then the real lines D_ℋ form a line bundle on 𝒫(L_0,L_1). We then take the double cover consisting of the triples (P_T,f_0,f_1), over which the (pullback) line bundle D_ℋ becomes trivial. For details, see <cit.>.
We then choose a trivialization once, and define the orientation line 𝔬_p to be the fibre of D_ℋ over the triple (P_T,f_0,f_1). The choice of the trivialization of D_ℋ makes 𝔬_p an oriented real vector space. Regarding the orientation as a choice of an element in (𝔬_p-{0})/ℝ^∗, we write 𝔬_p=ℤ((𝔬_p-{0})/ℝ^∗) with +1 identified with the orientation of 𝔬_p. We then form the ℤ-group as a direct sum:
CF(Σ_ϕ,F_z;ℤ)=⊕_p∈ F_z⋔Σ_ϕ𝔬_p.
For z∈ M_C-crit(ϕ)∪𝒫_C, order the intersection points of Σ_ϕ and F_z with respect to the choice of the positive sheet of √(ϕ) on G_z (see Definition <ref>). Then for
CF(Σ_ϕ,F_z;ℤ)=𝔬_z^+⊕𝔬_z^-
we use the ordered basis {(+1,0),(0,+1)}.
Now CF(Σ_ϕ,F_z;ℤ) is a ℤ-graded ℤ-module. Since it is concentrated in degree 0, all the differentials vanish, so we have
CF^∗(Σ_ϕ,F_z;ℤ)=HF^∗(Σ_ϕ,F_z;ℤ).
The same discussion applies for any ℝ_>0-rescaling of Σ_ϕ.
We now look at the case of tΣ_ϕ in detail. Again, let 𝒵^h be a horizontal chamber and 𝒵^h(δ;E) be the unique connected component of C(δ;E) contained in 𝒵^h. Let z∈𝒵^h∩ M_C and let z'∈𝒵^h∩ M_C be a point connected to z by a geodesic arc α^∙_z of length d less than ϵ(z) in the sense of Proposition <ref>. Let u∈ℳ^∙_(-1)^i d,z , where i=0 if the positive sheet picked out by α coincides with the positive sheet of √(ϕ) on 𝒵^h(δ;E), or 1 otherwise. By Proposition <ref>, u is a constant map. The induced spin structure P_u(s,·) on the boundary of the infinite strip 𝒵 is given by:
P_u(s,1)= (ψ^∙)^∗_(-1)^idl(s)𝔣_ψ^∙_(-1)^idl(s)u(s,1) ,
P_u(s,0)= (ψ^∙)^∗_(-1)^idl(s)𝔰̃_ψ^∙_(-1)^idl(s)u(s,0) .
We now describe what happens when we pass to ℒ⊗ℬ-twisted family Floer cohomology local system over ℂ. We set
CF(tΣ_ϕ,F_z,ℒ;ℂ)=CF(tΣ_ϕ,F_z;𝔰̃,𝔣_z,ℒ⊗ℬ,ℂ)=CF(tΣ_ϕ,F_z,𝔰̃,𝔣_z,;ℤ)⊗ (ℒ⊗ℬ).
We denote the resulting local system by HF_t(Σ_ϕ,ℒ,𝔰,ℬ;ℂ), the resulting parallel transport map by Γ_ℒ, and the resulting path groupoid representation by HF_t(Σ_ϕ,ℒ,𝔰,ℬ,𝒫_C;ℂ). For example, given u∈ℳ^∙_(-1)^id,z, the contribution of ℒ⊗ℬ is given via
Φ^ℒ⊗ℬ(∂ (ψ_s∘ u)|_(-∞,∞)×{1}): (ℒ⊗ℬ)_z̃→ (ℒ⊗ℬ)_z̃'.
Then the corresponding component of the induced ℤ-parallel transport map 𝔬_z̃→𝔬_z̃' is twisted by Φ^ℒ⊗ℬ.
We have the following technical lemma:
For u∈ℳ^∙_(-1)^i d,z let g_1:𝔬_z̃→𝔬_z̃' be the induced isomorphism with respect to (𝔰̃,𝔣,ℳ^∙_(-1)^i d,z) and let g_2:𝔬_z̃→𝔬_z̃' be the induced isomorphism with respect to (π^∗𝔰,𝔣,ℳ^∙_(-1)^i d,z). Then g_1 and g_2 differ by a sign Φ^ℬ.
To determine the maps g_i,i=1,2, we glue the half-strip operators ∂̅_H determined by Lagrangian paths L_T(z̃) and (ψ^∙_(-1)^i d)^-1L_T(z̃') at the negative strip-like end and the positive strip-like end of the strip u, respectively. Let v:z̃♯ u♯ (ψ^∙_(-1)^i d)^-1z̃':A_1→ T^∗C̃ be the glued map with respect to a sufficiently large glueing parameter.
The maps g_1 and g_2 are induced by the canonical isomorphism
D_A_1,v= D_A_1,z̃♯ u ♯(ψ^∙_(-1)^i d)^-1z̃' ≃𝔬^∨_z̃⊗ D_𝒵,u⊗(ψ^∙_(-1)^i d)^∗𝔬_z̃'̃,
after orienting D_A_1,v. To do this, choose a base point on A_1 that maps to F_z and lies on the non-moving part. We may trivialize the pullback bundle v^∗(TT^∗C̃)≃ℂ^2 on A_1, and obtain a loop ρ of Lagrangian subspaces in ℂ^2. Let P_π^∗(𝔰) be the induced spin structure from (ψ_s^∗(π^∗𝔰),ψ_s^∗𝔣). Similarly, let P_𝔰̃ be the induced spin structure from (ψ_s^∗𝔰̃,ψ_s^∗𝔣).
By <cit.>, we see that the two possible isomorphism classes of spin structures over Maslov zero loops ρ' inside ℒ_0 Gr(V) form a double cover isomorphic to the covering obtained from the line bundle D∂̅_A_1,ρ'⊗⋀^top(ρ'(0)). Using D∂̅_A_1,const≃ρ'(0), we trivialize the bundle by choosing the trivial spin structure on S^1. This choice of the trivialization orients the vector spaces ∂̅_A_1,ρ'.
Deform the linearized Cauchy-Riemann operator D ∂̅_J v to D∂̅_A_1. By the deformation invariance of determinant lines for Fredholm operators, we see that the two induced orientations on the moduli space ℳ^∙_(-1)^id,z are equivalent if and only if the two spin structures P_π^∗(𝔰) and P_𝔰̃ are isomorphic. Since we have made the same choices for 𝔰̃ and π^∗𝔰 at each of the intersection points z̃ and z̃', we see that P_π^∗(𝔰) and P_𝔰̃ are isomorphic if and only if Φ^ℬ along ∂ (ψ_s∘ u) restricted to V_s is the identity. This finishes the proof.
We now have all the ingredients needed to prove Theorem <ref>.
§.§.§ Proof of non-abelianization
We now use Lemma <ref> to compute Floer-theoretic parallel transports along the arcs α(w,w') and γ(w,w'). By construction, given any pair b(w,w')^i, b(w,w')^i+1, there exists some b(w,w')^j such that the continuation strips from b(w,w')^j to b(w,w')^i and b(w,w')^i+1 are all constants, and so necessarily lie on T^∗𝒵^h. So ℒ and the different spin structures π^∗𝔰 and 𝔰̃ yield, for k=i,i+1, the maps
Γ_ℒ(π^∗𝔰)(b(w,w')^j→ b(w,w')^k) :CF(tΣ_ϕ,F_b(w,w')^j;ℂ)→ CF( tΣ_ϕ,F_b(w,w')^k;ℂ)
Γ_ℒ(𝔰̃)(b(w,w')^j→ b(w,w')^k) : CF(tΣ_ϕ,F_b(w,w')^j;ℂ)→ CF( tΣ_ϕ,F_b(w,w')^k;ℂ)
For α an arc contained in a horizontal chamber 𝒵^h, let Φ^ℒ(α)^± (or Φ^ℒ⊗ℬ(α)^±) denote the parallel transport map of ℒ (or ℒ⊗ℬ) restricted to the ±-lift of α to Σ^∘ with respect to the sheet ordering of the chamber 𝒵^h.
The maps (<ref>) and (<ref>) read as follows:
Γ_ℒ(π^∗𝔰) =[ Φ^ℒ⊗ℬ^+ 0; 0 Φ^ℒ⊗ℬ^- ]
Γ_ℒ(𝔰̃) =[ Φ^ℒ^+ 0; 0 Φ^ℒ^- ].
with respect to the ordered basis (<ref>).
We claim that it is sufficient to prove
Γ_ℒ(π^∗𝔰)=[ Φ^ℒ⊗ℬ^+ 0; 0 Φ^ℒ⊗ℬ^- ].
Indeed, by Lemma <ref>, the sign difference is Φ^B. So (<ref>) follows from (<ref>) since
(Φ^ℒ⊗ℬ)Φ^ℬ=Φ^ℒ
which follows because ℬ is a GL(1;ℤ)-local system.
We now proceed on with the proof of (<ref>).
The induced loop of Lagrangian spaces is the same as the concatenation of L_s with the reversed path L̄_s=L_1-s, and so it is homotopic to the constant loop. We get an open cover on S^1 induced from the open covers of Σ^∘_ϕ. The transition functions on the boundary segment that maps to Σ_ϕ are given by ψ_αβ. By Lemma <ref>, the same transition functions appear on the boundary segment that maps to the fibres. By construction of P_s over L_s, the transition functions induced from the half-disc operators glued at the strip-like ends are equal to the identity. This implies that the spin structure is trivial. Therefore, the induced count on the moduli space of continuation strips must be +1.
But since all the continuation strips are constant, the twisted contribution must be equal to Φ^ℒ⊗ℬ. This finishes the proof.
Then, composing the parallel transport map from b(w,w')^i to b(w,w')^j with the parallel transport map from b(w,w')^j to b(w,w')^i+1, we see that
We have
Γ_ℒ(𝔰̃)(b(w,w')_i→ b(w,w')_i+1) =[ Φ^ℒ^+ 0; 0 Φ^ℒ^- ]
Therefore,
The parallel transport along α(w,w') is given by the matrix
[ Φ^ℒ(w,w')^+ 0; 0 Φ^ℒ(w,w')^- ] .
The same argument gives us
The parallel transport along γ(w,w”) is given by the matrix
[ Φ^ℒ(w,w”)^+ 0; 0 Φ^ℒ(w,w”)^- ].
Finally, repeating the argument in the proof of <ref>, we obtain the following:
The parallel transport along α(w) is given by the matrix
[ 1 μ(w); 0 1; ]
Note that we get an upper triangular matrix because the moduli spaces ℳ^short,-(w) are empty. Here the bases of ℤ^2 are chosen with respect to the orientations on the orientation lines. We now compute the number μ(w) explicitly using Φ^ℒ and finish the proof of the main theorem. The proof is essentially due to <cit.>. We will abbreviate Φ^ℒ(α^±(w_i,w_j)) by Φ^ℒ(w_i,w_j)^± .
Let z be a zero of ϕ and order the three walls w_0,w_± 1 as above. Let Γ_ℒ(w_i,w_j) denote the parallel transport map with respect to α(w_i,w_j).
Then we have
μ(w_0) =-Φ^ℒ(w_1,w_-1)^+Φ^ℒ(w_0,w_1)^-Φ^ℒ(w_-1,w_0)^-
μ(w_1) =-Φ^ℒ(w_1,w_-1)^-Φ^ℒ(w_0,w_1)^-Φ^ℒ(w_-1,w_0)^+
μ(w_-1) =-Φ^ℒ(w_0,w_1)^+Φ^ℒ(w_1,w_-1)^-Φ^ℒ(w_-1,w_0)^-.
Consider the concatenation of paths α(w_0), α(w_0,w_1), α(w_1), α(w_1,w_-1), α(w_-1) and α(w_-1,w_0) in that order. This gives a loop encircling z once and on C it is contractible. Notice that when we go from 𝒵^h(w) to 𝒵^h(w') along the loop, we reverse the ordering of the basis. The configuration is illustrated in Figure <ref>.
Let Γ_ℒ(w) denote the parallel transport map with respect to α(w). Then from homotopy invariance, we have
Id= Γ_ℒ(w_-1,w_0)∘Γ_ℒ(w_-1)∘Γ_ℒ(w_1,w_-1)∘Γ_ℒ(w_1)∘Γ_ℒ(w_0,w_1)∘Γ_ℒ(w_0)
which we rewrite in the form
Id=
[ 0 Φ^ℒ(w_-1,w_0)^-; Φ^ℒ(w_-1,w_0)^+ 0 ][ 1 μ(w_-1); 0 1 ][ 0 Φ^ℒ(w_1,w_-1)^-; Φ^ℒ(w_1,w_-1)^+ 0 ]
[ 1 μ(w_1); 0 1 ][ 0 Φ^ℒ(w_0,w_1)^-; Φ^ℒ(w_0,w_1)^+ 0 ][ 1 μ(w_0); 0 1 ] .
Expanding the product out, it follows that μ(w_0), μ(w_1), and μ(w_-1) are given by products of the transport coefficients Φ^ℒ(w_i,w_j)^± as in (<ref>).
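As a quick consistency check (not part of the argument above), the matrix identity can be verified symbolically. In the sketch below the symbol names ap, am, bp, bm, cp, cm are ours and stand for Φ^ℒ(w_0,w_1)^±, Φ^ℒ(w_1,w_-1)^± and Φ^ℒ(w_-1,w_0)^±; almost flatness of ℒ (monodromy -1 around the ramification point, i.e. ap·am·bp·bm·cp·cm = -1) is imposed by eliminating cp, and with the detour coefficients μ(w_i) of (<ref>) the product of the six matrices reduces to the identity.

```python
import sympy as sp

# transport coefficients of L along the +/- lifts of the three arcs:
# (ap, am) = Phi(w0,w1)^{+,-}, (bp, bm) = Phi(w1,w-1)^{+,-}, (cp, cm) = Phi(w-1,w0)^{+,-}
ap, am, bp, bm, cp, cm = sp.symbols('ap am bp bm cp cm', nonzero=True)

def wall_cross(plus, minus):      # crossing a wall swaps the two sheets
    return sp.Matrix([[0, minus], [plus, 0]])

def detour(mu):                   # transport along a wall is unipotent
    return sp.Matrix([[1, mu], [0, 1]])

# detour coefficients claimed in the text
mu0, mu1, mum1 = -bp * am * cm, -bm * am * cp, -ap * bm * cm

loop = (wall_cross(cp, cm) * detour(mum1) * wall_cross(bp, bm)
        * detour(mu1) * wall_cross(ap, am) * detour(mu0))

# impose almost flatness: ap*am*bp*bm*cp*cm = -1
print(sp.simplify(loop.subs(cp, -1 / (ap * am * bp * bm * cm))))
# -> Matrix([[1, 0], [0, 1]])
```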
Summarizing everything, we obtain our main theorem:
(Theorem <ref>)
Corollaries <ref>, <ref>, <ref>, and <ref> give the full description of the path groupoid representation of the Floer cohomology local system, in terms of ℒ. This is the non-abelianization.
|
http://arxiv.org/abs/2307.07360v1 | 20230714140556 | Quasinormal modes of black holes encircled by a gravitating thin disk | [
"Che-Yu Chen",
"Petr Kotlařík"
] | gr-qc | [
"gr-qc",
"astro-ph.HE"
] |
[email protected]
RIKEN iTHEMS, Wako, Saitama 351-0198, Japan
Institute of Physics, Academia Sinica, Taipei 11529, Taiwan
[email protected]
Institute of Theoretical Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 180 00 Prague 8, Czech Republic
The ringdown phase of gravitational waves emitted by a perturbed black hole is described by a superposition of exponentially decaying sinusoidal modes, called quasinormal modes (QNMs), whose frequencies depend only on the property of the black hole geometry. The extraction of QNM frequencies of an isolated black hole would allow for testing how well the black hole is described by general relativity. However, astrophysical black holes are not isolated. It remains unclear whether the extra matter surrounding the black holes such as accretion disks would affect the validity of the black hole spectroscopy when the gravitational effects of the disks are taken into account. In this paper, we study the QNMs of a Schwarzschild black hole superposed with a gravitating thin disk. Considering up to the first order of the mass ratio between the disk and the black hole, we find that the existence of the disk would decrease the oscillating frequency and the decay rate. In addition, within the parameter space where the disk model can be regarded as physical, there seems to be a universal relation that the QNM frequencies tend to obey. The relation, if it holds generically, would assist in disentangling the QNM shifts caused by the disk contributions from those induced by other putative effects beyond general relativity. The QNMs in the eikonal limit, as well as their correspondence with bound photon orbits in this model, are briefly discussed.
Quasinormal modes of black holes encircled by a gravitating thin disk
Che-Yu Chen and Petr Kotlařík
August 12, 2023
=====================================================================
§ INTRODUCTION
Black holes ring when they are perturbed, with the ringing frequencies determined by the underlying spacetime geometry. The ringing of black holes is tightly related to the fact that the whole system is dissipative. For an asymptotically flat black-hole spacetime, the emitted gravitational waves propagate outward, escaping from the system to spatial infinity. In addition, the event horizon, i.e., a surface beyond which no infalling matter can return, acts as the other boundary of dissipation of the system. Because of the dissipation, the ringings of black holes decay. Such a “ringdown” phase can be described by a superposition of exponentially decaying sinusoidal oscillations, called quasinormal modes (QNMs) <cit.>. The QNM frequencies are complex-valued, with the real part describing the oscillations, and the imaginary part determining the decay of the amplitudes. Importantly, for an isolated black hole in general relativity (GR), the spacetime geometry dictates the QNM spectrum, and they both satisfy the no-hair theorem, i.e., they are purely determined by the mass and the spin of the black hole. Therefore, based on the current achievements <cit.> and with the upcoming advancements in the gravitational wave detection of binary merger events <cit.>, the extraction of QNM frequencies from ringdown signals may be accessible, helping us to identify the black hole parameters and even to test GR.
However, astrophysical black holes are generally not isolated. They may be surrounded by dark matter halos, or be encircled by accretion disks. The validity of using black hole QNMs to extract parameters describing the black hole spacetime requires a sufficient understanding of how the surrounding matter would alter the QNMs. One has to ensure that the contributions from the environments can be disentangled from those induced by the black hole geometry itself, at least under suitable approximations.
The QNM spectra of black holes surrounded by matter – the dirty black holes – have been explored in the literature. In Refs. <cit.>, the surrounding matter was modeled by a spherical dust thin-shell. It was shown, both numerically and analytically, that the QNM spectrum could deviate significantly from the vacuum case, especially when the shell is far away from the black hole. It was later clearly elucidated in Refs. <cit.>, assuming again spherically symmetric matter configurations, that this large amount of frequency shifts could be actually due to the existence of the double-barrier structure on the effective potential in the QNM master equations. The surrounding matter induces an additional barrier in the effective potential, which could induce pseudospectral instability of black hole QNMs <cit.>. Even an additional tiny bump on the effective potential would already trigger the instability and excite additional modes. The instability could happen even to the fundamental modes – the longest-lived modes <cit.>, but their frequencies may still be extracted robustly from the prompt ringdown signals in time-domain <cit.>. The pseudospectral instability can be avoided when the contributions of the surrounding matter on the effective potential are sufficiently mild in the sense that, in the case of nonrotating black holes, the effective potential retains its single-peak structure. This can be achieved as shown by the model of Ref. <cit.>, in which the authors, assuming spherical symmetry, proposed an effective metric that can describe the spacetime geometry of a whole galaxy harboring a supermassive black hole. In this case, the frequencies of fundamental modes are shifted mildly by the environmental effects. The highly damped QNMs of spherically symmetric dirty black holes were also studied <cit.>.
Apparently, the discussion of the QNM spectrum for dirty black holes so far is still quite confined to the assumption that the overall spacetime remains spherically symmetric. However, in a more realistic scenario, such as a black hole encircled by a gravitating accretion disk, the spherical symmetry is no longer preserved. But, the complicated structure of the Einstein equations makes obtaining the common gravitational field of the black hole with the disk a rather difficult task, at least for analytical work. Some reasonable simplifications (symmetries) are still needed. The simplest viable option is to consider an axially symmetric disk and neglect (or compensate) the total rotation present in the spacetime, so the spacetime is also static, and the black hole is described by the Schwarzschild metric. Another assumption that can be made is that the typical thickness of the disk is much smaller than the black hole radius, thus it is effectively infinitesimally thin. Then the Einstein equations are simplified considerably. Nevertheless, not many models of the Schwarzschild black hole encircled by a thin disk (SBH-disk models) are known in the literature. The first “superposition” was made in Ref. <cit.> (further studied in Ref. <cit.>) using inverted Morgan-Morgan disk <cit.>. It was also used to calculate the influence of a heavy accretion disk on the black-hole shadow in the more recent work <cit.>. Another class of disk solutions was proposed in Ref. <cit.>, revisited recently in Ref. <cit.>. Both of these models have a slight disadvantage in that only a part of the metric was obtained explicitly, the rest being left to numerical treatments when needed. Yet recently, new solutions have been found <cit.>, where the whole metric of the entire superposition was derived explicitly and in closed-forms. In this paper, we consider the SBH-disk model proposed in Ref. <cit.>.
From the astrophysical point of view, the SBH-disk model <cit.> may not properly describe any realistic scenario of accretion processes. In addition, being static, it does not include the spin of the central black hole nor the rotation of the disk. However, the disk possesses physically reasonable properties, and it can demonstrate the effects that may actually occur in the real astrophysical setup where the gravitation from the disk cannot be totally neglected.
In the presence of the gravitating disk, the calculations of the QNMs for the SBH-disk model become substantially challenging because the master equations in general are nontrivial partial differential equations. This is true even for the calculations of the QNMs of massless scalar fields. To proceed, we assume that the mass of the disk is much smaller than the black hole one. Up to the first order of the mass ratio, we adopt the projection method, which was proposed in Ref. <cit.> then applied in Refs. <cit.>, to derive the master equation and investigate how the QNM frequencies of a massless scalar field are shifted by the gravitating disk. To ensure the validity of the projection method and the stability of the disk, we can fairly consider the parameter space of the model in which the aforementioned pseudospectral instability of fundamental modes does not happen. This can be achieved by focusing only on the effective potential, which can be defined in our treatment, with a single-peak structure. Furthermore, adopting the geometric optics approximations, we consider the frequencies of eikonal QNMs and identify their correspondence with bound photon orbits in the SBH-disk model.
The rest of this paper is organized as follows. In sec. <ref>, we briefly review the SBH-disk model proposed in Ref. <cit.>. In order to analyze the QNMs of the SBH-disk model, in sec. <ref> we consider a deformed Schwarzschild black hole, and demonstrate how to recast the master equation for scalar field perturbations in a Schrödinger-like form. This section is based on the results of Ref. <cit.>. The main results of our paper are presented in sec. <ref>, in which we show how the effective potentials of the master equation (sec. <ref>) and the QNM frequencies (sec. <ref>) vary with respect to the parameters in the SBH-disk model. Then, in sec. <ref>, we comment on the eikonal correspondence between QNMs and bound photon orbits in the SBH-disk model. Finally, we conclude in sec. <ref>.
§ THE SBH-DISK MODEL
Due to the inherent non-linearity of the Einstein equations, it is difficult to “superpose” multiple sources in GR. However, in the static and axially symmetric case, the situation is much simpler. In fact, in Weyl cylindrical coordinates (t, ρ, z, φ) the Einstein equations outside of sources (i.e. in vacuum) reduce to the Laplace equation and a line integration
Δν = 0 ,
λ_,ρ = ρ ( ν_,ρ^2 - ν_,z^2) , λ_,z = 2ρν_,ρν_,z ,
where ν(ρ, z) and λ(ρ, z) are the only nontrivial metric functions of the Weyl-type metric
d s^2 = - e^2ν d t^2 + ρ^2 e^-2ν dφ^2 + e^2(λ - ν) (dρ^2 + d z^2) .
Thus any axially symmetric gravitational field with its potential ν known from Newton's theory has its GR counterpart. However, the potential ν does not tell the whole story. The presence of the second metric function λ may significantly depart from the pure Newtonian picture. Moreover, while the Laplace equation (<ref>) is linear, and thus makes the superposition problem for the potential ν trivial, it is not the case for λ as Eqs. (<ref>) are quadratic in ν.
Here, we wish to study the QNMs of a black hole that is surrounded by some matter in a physically appealing configuration. Namely, we take a recently derived solution <cit.> describing a Schwarzschild black hole encircled by a thin disk (SBH-disk model). Such a structure is of clear astrophysical importance as disk-like sources often result from an accretion of matter onto a compact central body. While the total potential is a simple sum ν = ν_Schw + ν_disk, for the second metric function we write λ = λ_Schw + λ_disk + λ_int, where λ_Schw and λ_disk denote contributions from the Schwarzschild black hole and the disk (thus each satisfying (<ref>) with their corresponding ν_Schw, or, ν_disk respectively). The non-linear “interaction" part λ_int satisfies
λ_int, ρ = 2 ρ (ν_Schw, ρν_disk, ρ - ν_Schw, zν_disk, z) ,
λ_int, z = 2 ρ (ν_Schw, ρν_disk, z + ν_Schw, zν_disk, ρ) .
Notice that when we treat the existence of the disk as a small perturbation of the black hole, i.e. |ν_disk| ≪ |ν_Schw|, and consider its contributions up to the first order, only the interaction part λ_int is relevant because λ_disk is of second order.
In Weyl coordinates, the Schwarzschild black hole is a singular rod of length 2M – twice the black-hole mass M – placed symmetrically on the z axis described by
ν_Schw = 1/2ln[(R_++R_--2M)/(R_++R_-+2M)] ,
λ_Schw = 1/2ln[((R_++R_-)^2-4M^2)/(4R_+R_-)] ,
where
R_± = √(ρ^2+(|z| ∓ M)^2) .
The disks considered in Ref. <cit.> are infinitesimally thin and spatially infinite (with a finite total mass) extending from the horizon. The disk density falls off quickly enough both at the horizon and at infinity – see the schematic Fig. <ref>. The Newtonian surface density profiles[The quantity w(ρ) satisfies exactly the Poisson equation Δν = 4 π w(ρ) δ(z), where δ(z) is the delta distribution, so it is the precise counterpart of the Newtonian surface density.] read
w^(m, n) = W^(m, n)ρ^2n/(ρ^2 + b^2)^(m + n + 3/2) , m,n ∈N_0
where b is a parameter of the dimension of length and the normalization W^(m,n) is chosen in such a way that the total mass of the disk ∫_0^∞ w^(m,n)(ρ) ρ d ρ = ℳ. In particular,
W^(m,n) = (2m+1)\binom{m+n+1/2}{n}ℳ .
The densities (<ref>) have a single maximum located at ρ_max = b √(2n/(3 + 2m)). Thus, increasing b (or n) when m,n (or m,b) are fixed means shifting the maximum further from the central region, as well as expanding the width of the peak. In contrast, increasing m when n and b are fixed corresponds to shifting the maximum towards the central region while shrinking the width of the peak. When keeping the total disk mass ℳ constant, the maximum density decreases when increasing b (or n), while it increases when increasing m.
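For concreteness, the location of the maximum can be checked numerically from the profile (<ref>); the short sketch below (with the overall normalization omitted, and hypothetical example values m=0, n=2, b=10) compares the numerically located peak with the closed-form b√(2n/(3+2m)).

```python
import numpy as np

def disk_profile(rho, m, n, b):
    """Shape of the surface density w^(m,n); overall normalization omitted."""
    return rho**(2 * n) / (rho**2 + b**2) ** (m + n + 1.5)

m, n, b = 0, 2, 10.0                      # example parameters, b in units of M
rho = np.linspace(1e-3, 200.0, 200001)
rho_num = rho[np.argmax(disk_profile(rho, m, n, b))]
rho_ana = b * np.sqrt(2 * n / (3 + 2 * m))
print(rho_num, rho_ana)                   # both ~11.55
```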
If we denote
r_b^2 := ρ^2 + (|z| + b)^2 , |cosθ_b| := (|z| + b)/r_b ,
the potential is given by
ν^(m, n) = - W^(m,n)∑_j=0^m + n𝒬_j^(m,n)b^j/r_b^j+1 P_j(|cosθ_b |) ,
where P_j are the Legendre polynomials and the coefficients
𝒬_j^(m,n) =
∑_k=0^n (-1)^k \binom{n}{k} 2^(j-k-m) (2m + 2k - j)!/[(m + k - j)!(2m + 2k + 1)!!] if j ≤ m ,
∑_k=j^m+n (-1)^(k-m) \binom{n}{k-m} 2^(j-k) (2k - j)!/[(k-j)!(2k + 1)!!] if j > m .
The potential (<ref>) was first obtained by Vogt & Letelier <cit.> by taking a specific superposition of the Kuzmin-Toomre family of discs <cit.>.
The second metric function λ_disk was also found explicitly (see <cit.> Eq. (21)), but we will not repeat it here as we shall not need it. The interaction part λ_int satisfies following recurrence relations
λ^(0,0)_int = - (ℳ/r_b)( R_+/(b+M) - R_-/(b-M)) - 2ℳ M/(b^2 - M^2) ,
λ^(0,n + 1)_int = λ^(0,n)_int + [b/(2(n + 1))] ∂/∂ b λ^(0,n)_int ,
[(2m + 1)(2n + 3)/(2m + 2n + 3)] λ_int^(m+1, n) = λ_int^(m, n) + [4m(n+1)/(2m + 2n + 3)] λ_int^(m, n+1) - b ∂/∂ b λ_int^(m, n) .
Thus the whole metric (both metric functions) of the SBH-disk model is known explicitly and in closed-form. From now on, to simplify the expression, the notation (m,n) that indicates the explicit dependence of the disk functions on the indices m and n will be dropped. One should keep in mind that ν and λ explicitly depend on ℳ, b, m, and n.
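Since the recurrences above only involve differentiation with respect to b, they are straightforward to implement symbolically. The following sketch is a direct transcription (the function and symbol names are ours); only the interaction part λ_int is generated, which is all that is needed at first order in ℳ/M.

```python
import sympy as sp

rho, z, b, M, Mdisk = sp.symbols('rho z b M Mdisk', positive=True)

r_b = sp.sqrt(rho**2 + (sp.Abs(z) + b)**2)
R_p = sp.sqrt(rho**2 + (sp.Abs(z) - M)**2)     # R_+
R_m = sp.sqrt(rho**2 + (sp.Abs(z) + M)**2)     # R_-

# closed-form seed lambda_int^{(0,0)}
lam = {(0, 0): -Mdisk / r_b * (R_p / (b + M) - R_m / (b - M))
               - 2 * Mdisk * M / (b**2 - M**2)}

def lam_int(m, n):
    """lambda_int^{(m,n)} built from the two recurrences quoted in the text."""
    if (m, n) not in lam:
        if m == 0:        # raise n at m = 0
            prev = lam_int(0, n - 1)
            lam[(m, n)] = prev + b / (2 * n) * sp.diff(prev, b)
        else:             # raise m
            mm, nn = m - 1, n
            pref = sp.Rational(2 * mm + 2 * nn + 3, (2 * mm + 1) * (2 * nn + 3))
            lam[(m, n)] = pref * (
                lam_int(mm, nn)
                + sp.Rational(4 * mm * (nn + 1), 2 * mm + 2 * nn + 3) * lam_int(mm, nn + 1)
                - b * sp.diff(lam_int(mm, nn), b))
    return lam[(m, n)]

print(sp.simplify(lam_int(0, 1)))     # e.g. the (m,n) = (0,1) interaction term
```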
Two physical interpretations of these disks are possible: a) a single component ideal fluid with density σ and azimuthal pressure P (a set of solid rings with internal azimuthal stress), or, b) two equally counter-rotating pressureless dust streams with the densities σ_± = σ/2 following circular geodesics. Both characteristics follow from the metric
σ + P = e^(ν - λ) ν_,z(z=0^+)/(2π) = e^(ν - λ) w(ρ) ,
P = e^(ν - λ) [ν_,z(z=0^+)/(2π)] ρν_,ρ = e^(ν - λ) w(ρ) ρν_,ρ ,
where w(ρ) is the Newtonian surface density (<ref>). See Appendix <ref> for the derivation in more detail.
Clearly σ + P ≥ 0, so the strong energy condition is satisfied automatically for any disk. The dominant energy condition is generally satisfied everywhere (for a broad range of parameters) except close to the black-hole horizon, where σ < P. In fact, the accretion disks are usually assumed to end around the innermost stable circular orbit (ISCO). However, we argue that (i) our disk density drops to zero toward the horizon, so there is really no matter on the horizon itself, and, (ii) accretion disks around realistic black holes would indeed stretch toward the horizon, although the matter will infall there rather than orbiting on circular trajectories. Thus, in this sense, it is more realistic to model the gravitational field with some modest density going down to the horizon. By choosing appropriate parameters (m,n) and b, the density can be made arbitrarily small below a chosen radius, e.g., the ISCO orbit.
For the double-stream interpretation, both energy conditions considered above require σ_±≥ 0, which also implies P ≥ 0 for the single component interpretation. Finally, the energy conditions are satisfied for both interpretations if the speed of a particle on a circular geodesic in the equatorial plane
v^2 = P/σ = ρν_,ρ/(1 - ρν_,ρ)
acquires timelike values 0 ≤ |v| < 1.
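As a rough numerical illustration of this criterion, the sketch below evaluates v^2 on the equatorial plane for the simplest (m,n)=(0,0) member, whose potential reduces to the Kuzmin form ν_disk = -ℳ/r_b according to the expressions above, superposed on the Schwarzschild potential; the parameter values are arbitrary examples. In the vacuum limit the expression reproduces the familiar v^2 = M/(r-2M) for circular geodesics.

```python
import numpy as np

M, Mdisk, b = 1.0, 0.02, 10.0                   # example parameters (geometrized units)

def rho_nu_rho(rho):
    """rho * d(nu)/d(rho) on the equatorial plane z = 0 (Schwarzschild + (0,0) disk)."""
    S = np.sqrt(rho**2 + M**2)                  # on z = 0: R_+ = R_- = S
    schw = M / S
    disk = Mdisk * rho**2 / (rho**2 + b**2)**1.5
    return schw + disk

rho = np.linspace(2.0, 100.0, 500)              # Weyl radius, in units of M
v2 = rho_nu_rho(rho) / (1.0 - rho_nu_rho(rho))
print(bool(np.all((v2 >= 0) & (v2 < 1))))       # subluminal everywhere in this range

# vacuum cross-check: with the disk off, v^2 = M/(r - 2M) with r = sqrt(rho^2 + M^2) + M
r = np.sqrt(rho**2 + M**2) + M
schw_only = (M / np.sqrt(rho**2 + M**2)) / (1.0 - M / np.sqrt(rho**2 + M**2))
print(np.allclose(schw_only, M / (r - 2.0 * M)))
```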
While superposition can be carried out very straightforwardly in Weyl coordinates, it will be convenient to work in Schwarzschild coordinates (t,r,θ,φ) from now on. The two sets of coordinates are related as follows[Note the difference between r_b and θ_b with the subscript b defined in (<ref>) and the Schwarzschild coordinates r, θ.]
ρ = √(r (r-2M))sinθ , z = (r-M) cosθ ,
and the metric of the SBH-disk model in Schwarzschild coordinates then reads
ds^2= -f(r)e^(2ν_disk)dt^2+e^(2λ_ext-2ν_disk)dr^2/f(r)
+r^2e^(-2ν_disk)(e^(2λ_ext)dθ^2+sin^2θ dφ^2) ,
where λ_ext=λ_disk+λ_int and f(r) ≡ e^(2ν_Schw) = 1 - 2M/r after the transformation into Schwarzschild coordinates.
§ DEFORMED SCHWARZSCHILD BLACK HOLES – MASTER EQUATION
The SBH-disk metric of Eq. (<ref>) describes the spacetime of a Schwarzschild black hole encircled by a gravitating thin disk. The main goal of this work is to investigate the QNMs propagating in this superposed spacetime. However, due to the general (r,θ) dependence appearing in the metric functions through ν_disk and λ_ext, the radial and the latitudinal sectors of the wave equation are not separable. In order to proceed, we assume that the disk mass ℳ is much smaller than the black hole mass M and consider the contributions up to O(ℳ/M). Besides having its astrophysical applicability, this assumption, as mentioned in the previous section, allows us to simplify the calculations by omitting λ_disk term because it is of second order in ℳ/M. Then, we focus on the QNMs of scalar field perturbations. Adopting the projection method <cit.> to the master equation up to O(ℳ/M), one can separate the radial component of the master equation from the latitudinal one. This has been shown explicitly in Ref. <cit.> for a very general class of deformed Schwarzschild spacetimes. In this section, we briefly review the results in Ref. <cit.>, based on which one can compute the scalar field QNMs of the SBH-disk model.
We consider a deformed Schwarzschild spacetime and assume that the spacetime remains static and axially symmetric in the presence of deformations. The nonzero metric components of the deformed spacetime can be expressed as <cit.>
g_tt(r,θ) =-f(r)(1+ϵ A_j(r)|cos^jθ|) ,
g_rr(r,θ) =1/f(r)(1+ϵ B_j(r)|cos^jθ|) ,
g_θθ(r,θ) =r^2(1+ϵ C_j(r)|cos^jθ|) ,
g_φφ(r,θ) =r^2sin^2θ(1+ϵ D_j(r)|cos^jθ|) ,
where ϵ is a dimensionless parameter that quantifies the amount of deformations. In general, the spacetime deformations are functions of r and θ. In Eqs. (<ref>), we expand the latitudinal part of the deformation functions as a Taylor series in terms of cosθ. Each term in the series is weighted by a function of r, i.e., the functions A_j(r), B_j(r), C_j(r), and D_j(r) that appear in the expansion. The dummy index j stands for summations running upward from j=0. The absolute value in each term in the expansion is to preserve the equatorial reflection symmetry, with the possibility of having a nonzero surface density at the equatorial plane. When the deformations are small, i.e., |ϵ|≪1, we can consider terms up to O(ϵ). As we will show later, the radial sector of the Klein-Gordon equation can then be separated from the latitudinal one, and it can be further recast into the Schrödinger-like form.
§.§ Massless scalar field: Effective potential
In this work, we will focus on the massless scalar field perturbations, whose QNMs are governed by the Klein-Gordon equation
□ψ=0 .
Indeed, the investigation of the ringdown phase in real gravitational wave emission has to be based on the computations of linearized gravitational equations rather than Eq. (<ref>). However, as the simplest scenario, the consideration of scalar field perturbations already allows us to address interesting issues, such as the (in)stability of the system, without suffering the computational complexity in linearized gravitational equations of deformed background spacetimes. In addition, according to the geometric optics approximations, the behaviors of scalar field QNMs should be able to capture those of the gravitational perturbations at least in the eikonal regimes, that is, when the multipole number l is large. This will be discussed later in sec. <ref>.
For the master equation of scalar fields in the Schwarzschild spacetime, one can use the associated Legendre functions P_l^m_z(x), where x≡cosθ and m_z is the azimuthal number, as the angular basis to separate the radial and latitudinal sectors of the wave equation. The radial equation is labeled by the multipole number l and determines the evolution of the mode of l. The azimuthal number m_z degenerates because of the spherical symmetry of the spacetime. In the presence of deformations of O(ϵ), there would appear off-diagonal terms that correspond to the modes with multipole numbers l different from that of the zeroth-order one. These off-diagonal terms in the wave equations are O(ϵ). Therefore, by taking advantage of the orthogonality of P_l^m_z among multipole numbers, one can project out the off-diagonal terms and focus only on the corrections on the zeroth-order equation. In the following, we will only show the main results of the calculations and refer the readers to sec. IV of Ref. <cit.> for more details.
Essentially, the projection method allows us to separate the radial and the latitudinal sectors of the wave equation. To further recast the radial equation into the Schrödinger-like form, we find it convenient to define the following coefficients:
a_lm_z^j =2m_z^2/𝒩_lm_z∫_0^1x^j(P_l^m_z)^2/1-x^2dx ,
b_lm_z^j =2/𝒩_lm_z∫_0^1x^j(P_l^m_z)^2dx ,
c_lm_z^j =2/𝒩_lm_z∫_0^1x^jP_l^m_z[(1-x^2)∂_x^2-2x∂_x]P_l^m_zdx ,
d_lm_z^j =2/𝒩_lm_z∫_0^1P_l^m_z(1-x^2)(∂_xx^j)(∂_x P_l^m_z)dx ,
where the normalization constant 𝒩_lm_z≡ 2(l+m_z)!/[(2l+1)(l-m_z)!] is determined by the orthogonality condition
∫_-1^1dx P_l^m_z(x)P_k^m_z(x)=𝒩_lm_zδ_lk .
Note that the coefficients given by Eqs. (<ref>)-(<ref>) depend on l and m_z, but they are invariant under m_z↔-m_z.
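For completeness, the coefficients (<ref>)-(<ref>) are simple one-dimensional integrals and can be evaluated numerically; a minimal sketch is given below (the function names are ours). Note that, by the associated Legendre equation, c_lm_z^j = a_lm_z^j - l(l+1) b_lm_z^j, which avoids any numerical differentiation, and b_lm_z^0 = 1 follows directly from Eq. (<ref>), providing a simple check.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lpmv, factorial

def P(l, m, x):                                   # associated Legendre P_l^m(x)
    return lpmv(m, l, x)

def norm(l, m):                                   # normalization constant N_{l m_z}
    return 2.0 * factorial(l + m) / ((2 * l + 1) * factorial(l - m))

def coeff_a(l, m, j):
    if m == 0:
        return 0.0                                # the m_z^2 prefactor vanishes
    val, _ = quad(lambda x: x**j * P(l, m, x)**2 / (1.0 - x**2), 0.0, 1.0)
    return 2.0 * m**2 * val / norm(l, m)

def coeff_b(l, m, j):
    val, _ = quad(lambda x: x**j * P(l, m, x)**2, 0.0, 1.0)
    return 2.0 * val / norm(l, m)

def coeff_c(l, m, j):
    # (1-x^2)P'' - 2xP' = [m^2/(1-x^2) - l(l+1)] P  (associated Legendre equation)
    return coeff_a(l, m, j) - l * (l + 1) * coeff_b(l, m, j)

def coeff_d(l, m, j):
    if j == 0:
        return 0.0                                # d(x^0)/dx = 0
    def wdP(x):                                   # (1-x^2) dP_l^m/dx via a standard recurrence
        lower = lpmv(m, l - 1, x) if l - 1 >= m else 0.0
        return (l + m) * lower - l * x * P(l, m, x)
    val, _ = quad(lambda x: P(l, m, x) * j * x**(j - 1) * wdP(x), 0.0, 1.0)
    return 2.0 * val / norm(l, m)

print(coeff_b(2, 2, 0))                           # -> 1.0, as implied by Eq. (<ref>)
```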
After Fourier transformations, we denote the radial part of the Fourier modes of the scalar field as Ψ_l,m_z(r). By using the projection method, the radial wave function is found to satisfy the following Schrödinger-like equation <cit.>
∂_r_*^2Ψ_l,m_z(r)+ω^2Ψ_l,m_z(r)=V_eff(r)Ψ_l,m_z(r) ,
where ω is the mode frequency. The effective potential V_eff(r) can be expressed as
V_eff(r) =l(l+1)f(r)/r^2+f(r)/rdf/dr[1+ϵ b_lm_z^j(A_j(r)-B_j(r))]
+ϵ{f(r)/r^2[a_lm_z^j(A_j(r)-D_j(r))-c_lm_z^j(A_j(r)-C_j(r))-d_lm_z^j/2(A_j(r)+B_j(r)-C_j(r)+D_j(r))]
-b_lm_z^j/4d^2/dr_*^2[A_j(r)-B_j(r)]+1/4r^2d/dr_*[b_lm_z^jr^2d/dr_*(A_j(r)-B_j(r)+C_j(r)+D_j(r))]} ,
which explicitly contains the coefficients given by Eqs. (<ref>)-(<ref>). The tortoise radius r_* is defined as follows
dr/dr_*=f(r){1+ϵ/2b_lm_z^j[A_j(r)-B_j(r)]} .
On the above equations (<ref>) and (<ref>), the summations over j are implicitly assumed. It can be seen that when ϵ=0, the effective potential and the whole master equation reduce to those of the Schwarzschild spacetime. In this case, as we have mentioned, the azimuthal numbers m_z degenerate, and Eq. (<ref>) is labeled only by l. However, in the presence of deformations, the spacetime is no longer spherically symmetric, hence the degeneracy among m_z splits. Different values of |m_z| in the range of 0≤|m_z|≤ l give distinctive QNM frequencies.
§ QNMS OF SBH-DISK MODEL
Having discussed the master equation of the scalar field perturbations in a general deformed Schwarzschild spacetime, we then consider the SBH-disk model whose metric is given by Eq. (<ref>). The SBH-disk model can also be treated as a deformed Schwarzschild spacetime whose deformations are caused by the thin disk. The typical mass M of an astrophysical black hole is usually expected to dominate over the mass of the accretion disk ℳ. Therefore, it is natural to set ϵ=ℳ/M and consider terms up to O(ℳ/M). As we have mentioned, in this linear approximation, we have λ_ext≈λ_int because λ_disk is quadratic in ϵ. The metric components of the SBH-disk model can then be approximated as
g_tt(r,θ) ≈-f(r)(1+2ν_disk) ,
g_rr(r,θ) ≈1/f(r)(1+2λ_int-2ν_disk) ,
g_θθ(r,θ) ≈ r^2(1+2λ_int-2ν_disk) ,
g_φφ(r,θ) ≈ r^2sin^2θ(1-2ν_disk) .
The approximated metric (<ref>) belongs to the class of deformed Schwarzschild metrics of Eq. (<ref>), as will be shown more explicitly below.
§.§ Effective potential
The identification between the metrics (<ref>) and (<ref>) is made by first expanding ν_disk and λ_int in terms of |x| as follows:
ν_disk=ϵ𝒱_j(r)|x^j| , λ_int=ϵℒ_j(r)|x^j| ,
where 𝒱_j(r) and ℒ_j(r) depend on m, n, and b, but are independent of ϵ. Again, the summations over j are implicitly imposed as before. One then identifies the weighting functions in Eq. (<ref>) as follows
A_k(r) =-D_k(r)=2𝒱_k(r) ,
B_k(r) =C_k(r)=2ℒ_k(r)-2𝒱_k(r) ,
for all k. With these mappings, one sees that the approximated metric (<ref>) does belong to the class of metrics (<ref>). As a result, the effective potential (<ref>) can be written as
V_eff(r)=l(l+1)f(r)/r^2 +f(r)/rdf/dr[1+ϵ b_lm_z^j(4𝒱_j(r)-2ℒ_j(r))]
+ϵ{f/r^2[4a_lm_z^j𝒱_j(r)-c_lm_z^j(4𝒱_j(r)-2ℒ_j(r))]-b_lm_z^j/2d^2/dr_*^2[2𝒱_j(r)-ℒ_j(r)]} ,
and the definition of the tortoise radius r_*, which is given by Eq. (<ref>), becomes
dr/dr_*=f(r){1+ϵ b_lm_z^j[2𝒱_j(r)-ℒ_j(r)]} .
Note that when ϵ=0, the spacetime recovers a pure Schwarzschild one and the effective potential is given by
V_eff^Sch(r)≡ l(l+1)f(r)/r^2+f(r)/rdf/dr .
With the master equation (<ref>) and the effective potential (<ref>), we can calculate the QNM frequencies of the scalar field perturbations of the SBH-disk model.
We first check that the effective potential, which is formally defined as an infinite sum in j, converges sufficiently fast as the summation order increases. In Fig. <ref>, we consider the effective potential V_eff(r) of the SBH-disk model with m=0, n=1, b=10M, l=|m_z|=2, and ℳ=0.02M, and calculate the radius of its peak, r_m, for various values of the truncation order, which we denote by j_t. We find that for j_t≥4 the values of r_m are already well converged. Therefore, in the rest of this paper, the effective potential of the SBH-disk model will be calculated with the summation truncated at j_t=4.
In Fig. <ref>, we set m=0, n=2, b=10M, and l=|m_z|=2. The effective potentials of SBH-disk models are shown with respect to different values of the disk mass ℳ. The black curve corresponds to the pure Schwarzschild black hole, i.e., ℳ=0, whose effective potential is given by V_eff^Sch(r). The inset shows the deviation of the effective potentials in the presence of the disk with respect to the pure Schwarzschild one (δ V_eff≡ V_eff-V_eff^Sch). One can see that the effective potential is flattened in the presence of the disk. This is consistent with the findings in Ref. <cit.> that the disk provides additional gravitational attractions and makes the horizon, as well as the effective potential as a whole, more flattened. Also, from the inset, one finds that the effective potential reduces to V_eff^Sch(r) both near the horizon and at the spatial infinity. This is also expected as the surface density of the disk drops to zero there, as one can see in Fig. <ref>. In fact, the inset of Fig. <ref> also indicates that the effective potential in the presence of the disk acquires the largest deviation from the pure Schwarzschild one near the peak r_m.
Then, we explore the shape of the effective potential within the parameter space of the disk model itself, i.e., m, n, and b. In Figs. <ref>, <ref>, and <ref>, we focus on how the effective potentials vary with respect to the changes of b, n, and m, respectively. We find that the effective potentials gradually reduce to V_eff^Sch when increasing b or n. On the other hand, increasing the index m would further flatten the effective potential, as can be seen from Fig. <ref>. As we have mentioned in sec. <ref>, when keeping ℳ/M constant and increasing either n, b, or 1/m, the density peak of the disk would get lower and move further away from the black hole. Therefore, the net effects due to the disk become weaker. It is also worth remarking that in the presence of the disk, |δ V_Sch| seems to always get its largest value near the potential peak r_m.
§.§ Scalar field QNMs
The QNM frequencies of the scalar field perturbations in the SBH-disk model can be calculated by solving Eq. (<ref>) with the effective potential (<ref>) after imposing proper boundary conditions. Typical boundary conditions for black hole QNMs require that there are purely outgoing waves at spatial infinity and purely ingoing waves at the event horizon. The system can be treated as a wave-scattering problem through the peak of the effective potential. The whole system is dissipative because of the boundary conditions. Therefore, the QNM frequencies in general would acquire an imaginary part that quantifies the decay of the modes.
In this section, we focus on the cases where the effective potential retains its single-peak structure. This is easily ensured when only terms of O(ϵ) are considered. The single-peak structure of the effective potential allows us to calculate the QNM frequencies using the third-order Wentzel-Kramers-Brillouin (WKB) method <cit.>[The WKB method for calculating black hole QNMs has been extended to higher orders <cit.>. We refer the readers to Ref. <cit.> for a review of the method and, in particular, its range of applicability.]. We also make use of the asymptotic iteration method (AIM) <cit.> to check the consistency of the results. We shall focus on the fundamental modes with l=|m_z| because the fundamental modes have the longest decay time and hence are the most astrophysically relevant. In addition, our numerical results suggest that changing |m_z| only shifts the frequencies very weakly as compared to the frequency shifts generated by other model parameters.
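To illustrate the idea behind the WKB treatment, the short Python sketch below (our own illustration; it implements only the lowest-order Schutz-Will formula applied to the pure Schwarzschild potential, not the third-order scheme used for the results that follow, and the search bounds and finite-difference step are arbitrary choices) produces a rough estimate of a scalar QNM frequency.

import numpy as np
from scipy.optimize import minimize_scalar

M, l, n = 1.0, 2, 0
f = lambda r: 1.0 - 2.0 * M / r
V = lambda r: f(r) * (l * (l + 1) / r**2 + 2.0 * M / r**3)   # V_eff^Sch for a scalar field

# locate the peak of the potential
rm = minimize_scalar(lambda r: -V(r), bounds=(2.1 * M, 20.0 * M), method="bounded").x

# d^2 V / dr_*^2 = f d/dr ( f dV/dr ), by central finite differences
h = 1e-4
dV = lambda r: (V(r + h) - V(r - h)) / (2 * h)
d2V = f(rm) * (f(rm + h) * dV(rm + h) - f(rm - h) * dV(rm - h)) / (2 * h)

# lowest-order WKB estimate: omega^2 ~ V(r_m) - i (n + 1/2) sqrt(-2 d^2V/dr_*^2)
omega = np.sqrt(V(rm) - 1j * (n + 0.5) * np.sqrt(-2.0 * d2V))
print(rm, omega)   # rough estimate; accurate methods give about (0.484 - 0.097 i)/M for l = 2, n = 0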
The complex planes of QNM frequencies within some parameter space are shown in Fig. <ref>. In each panel, the three branches correspond to the complex QNM frequencies with multipole numbers l=2, l=4, and l=6, from left to right, respectively. For each branch in the top panel, we fix the set of parameters {m,n,b} as that in Fig. <ref> and use the third-order WKB method to calculate the QNM frequencies with respect to the disk mass, which is chosen to be ℳ=0, 0.02M, 0.04M, 0.06M, 0.08M, and 0.1M (black points from top to bottom). The green (magenta) points are the results calculated using AIM, with disk mass ℳ=0 (ℳ=0.1M). In this case, the topmost points in each branch correspond to the QNM frequencies of a pure Schwarzschild black hole. In the bottom panel of Fig. <ref>, we fix the set of parameters {m,n,ℳ} as that in Fig. <ref>, and vary only b=25M, 20M, 15M, 10M, and 5M (black points from top to bottom) for each branch. The green (magenta) points are the results calculated using AIM, with b=5M (b=25M).
From Fig. <ref>, one first sees that the WKB method and AIM give quite consistent results, particularly in the regime of large l where the WKB method is expected to be accurate. Second, increasing the disk mass ℳ would reduce the values of ω_R and |ω_I| [The change of ω_R in each branch may not be easily seen in Fig. <ref>. See Fig. <ref> for more details.]. In addition, given a non-zero ℳ, the pure Schwarzschild results can be recovered when b→∞. Reducing the value of b decreases the values of ω_R and |ω_I|, as compared with the pure Schwarzschild case. This is consistent with our previous finding that when b increases, the effective potential gradually reduces to V_eff^Sch.
In fact, after a careful examination of the parameter space, we find that the presence of a thin disk would always reduce the values of ω_R and |ω_I|, as long as ℳ/M stays reasonably small and the index m remains O(1). In such cases, the validity of the first-order approximation used to derive the effective potential (<ref>) is ensured. Moreover, the effective potential has a single-peak structure, whose shape monotonically deviates from the pure Schwarzschild one. In Fig. <ref>, we focus on l=2 and investigate the QNM frequencies in the parameter space {m,n,b,ℳ}. The solid curve shows the results of fixing m=0, n=2, b=10M, and varying ℳ/M from 0 to 0.1. The cross indicates the pure Schwarzschild frequency ℳ=0. The colored points and open circles show the results of fixing ℳ/M=0.02 and other parameters except for those indicated in the legend. From Fig. <ref>, we find that the larger the parameters b (red circle) or n (green circle) are, the closer the QNM frequencies are to the Schwarzschild one. On the other hand, increasing m would reduce ω_R and |ω_I|.
Another important observation from Fig. <ref> is that almost all the colored points are neatly aligned along the black curve. Although the SBH-disk model has a large parameter space {m,n,b,ℳ,l,m_z}, there seems to be a universal relation that the QNM frequencies of the model have to obey. We also consider other multipole numbers l, and a universal relation seems to exist among the modes, as can be seen in Fig. <ref>. A similar trend of QNM shifts also appears in the model in which the black hole spacetime is superposed with a spherically symmetric matter distribution <cit.>, indicating that the relation may be genuinely universal in the sense that it is insensitive to the configuration of the matter distribution. In general, the universal relation inevitably implies a strong degeneracy among the intrinsic disk parameters. However, the relation could be helpful to distinguish the disk effects from those contributed by other putative external parameters not belonging to the disk model. For example, if the black hole QNM frequencies are found to deviate from this universal relation, e.g., they are not aligned along the black curve in Fig. <ref>, such a frequency shift must be induced by effects other than the disk contributions. In fact, several quantum-corrected black hole models predict larger values of ω_R <cit.>, hence the QNMs of those models would not lie on the black curve. The quantum parameters in these models are thus robustly disentangled from the disk effects, enhancing the possibility of testing these quantum-corrected black hole models through black hole spectroscopy.
Having said that, if there does exist a universal relation, one should still be careful about the range of its validity. In particular, from Fig. <ref>, the relation seems to break down when one keeps increasing m or ℳ. Indeed, disk models with sufficiently large m and ℳ could acquire a very dense and narrow peak in the density profile outside the black hole, resembling a flattened torus or ring rather than a disk. This extreme density profile could largely alter the shape of the effective potential, including the possibility of generating extra peaks outside the original one (see Fig. <ref> for an example). If this happens, pseudospectral instability may be triggered, which would completely destroy the QNM spectrum <cit.>, and the universal relation would no longer hold. In fact, when a second peak appears in the effective potential, gravitational echoes following the main sinusoidal-decaying phase may appear in the time-domain signals. These echoes correspond to long-lived modes which are trapped between the potential barriers before they slowly leak through the outer one. However, we would like to emphasize that the possibility of having multiple peaks in the effective potential has to be treated with great care. This is because increasing the disk mass ℳ would, at some point, violate the validity of the first-order approximations from which we derive the effective potential (<ref>). In addition, although all energy conditions discussed at the end of sec. <ref> hold for the disk parameters considered in Fig. <ref>, for high m the density peak is located around the Schwarzschild value of the ISCO radius. Thus, for the most part, those disks would not be stable. Therefore, the double-peak structure in the effective potential demonstrated in Fig. <ref> may have issues regarding its theoretical and physical viability, and we will not discuss it further in the present paper.
§ QNMS IN EIKONAL LIMITS
Consider a test field propagating in a curved spacetime. In the geometric optics approximation, the wavelength of the field is assumed to be much smaller than any other length scale in the system. At leading order in this approximation, sometimes also called the eikonal approximation, the equations of motion of the propagating field take the same form as those of freely moving photons. When applying the approximation to black hole spacetimes, it is well known that the eikonal black hole QNMs have properties that can be directly linked to the photon orbits in such a spacetime. More explicitly, one can identify the so-called eikonal correspondence between the eikonal QNMs and the bound photon orbits around the black hole.
In a static and spherically symmetric black hole spacetime, the QNMs are determined up to their multipole number l because the azimuthal numbers m_z degenerate. As for the bound photon orbits, it turns out that all the bound photon orbits in this case are circular orbits and have a single radius, called the photon sphere. In this simple spacetime configuration, the eikonal correspondence can be identified straightforwardly through the fact that the peak of the effective potential of QNMs of l≫1 is precisely at the photon sphere. Based on this identification, the real and the imaginary parts of the large-l QNMs would correspond to the orbital frequency and the Lyapunov exponent of photons on the photon sphere, respectively <cit.>. The eikonal correspondence can be extended to rotating black hole spacetimes <cit.>, black holes with multiple photon spheres <cit.>, and even deformed black hole spacetimes <cit.>. The possibility of testing eikonal correspondence through black hole observations has been proposed in Ref. <cit.>.
When the black hole is slightly deformed, as the SBH-disk model considered in this paper, the QNM equations depend on both l and m_z. Therefore, the identification of the eikonal correspondence has to be carried out with care. In fact, for the SBH-disk model, circular orbits only exist at the equatorial plane. Any inclined bound photon orbits would acquire θ-dependent deformations such that they do not have a constant radius. In Ref. <cit.>, it has been demonstrated that the eikonal correspondence of deformed Schwarzschild black hole spacetimes can be identified by defining the averaged radius of the bound photon orbits along one complete period. More explicitly, the averaged radius would correspond to the peak of the effective potentials of QNMs with l≫1 and arbitrary m_z.
In this section, we will investigate the eikonal correspondence for the SBH-disk model. Specifically, we will consider the equatorial eikonal modes (l=|m_z|≫1) and the polar eikonal modes (m_z=0 and l≫1). The results obtained in this section can be treated as a consistency check with those exhibited in Ref. <cit.>.
§.§ The equatorial modes l=|m_z|
When l=|m_z|, the coefficients (<ref>)-(<ref>) can be expressed as
a_ll^2k =a_l-l^2k=l(2l+1)/2X_even(l,k) ,
b_ll^2k =b_l-l^2k=2l+1/2l+2k+1X_even(l,k) ,
c_ll^2k =c_l-l^2k=l(2l+1)(2k-1)/2(2l+2k+1)X_even(l,k) ,
d_ll^2k =d_l-l^2k=-2kl(2l+1)/2l+2k+1X_even(l,k) ,
a_ll^2k+1 =a_l-l^2k+1=l(2l+1)/2X_odd(l,k) ,
b_ll^2k+1 =b_l-l^2k+1=2l+1/2l+2k+1X_odd(l,k) ,
c_ll^2k+1 =c_l-l^2k+1=l(2l+1)(2k-1)/2(2l+2k+1)X_odd(l,k) ,
d_ll^2k+1 =d_l-l^2k+1=-2kl(2l+1)/2l+2k+1X_odd(l,k) ,
where k are non-negative integers, and
X_even(l,k)≡C^l+k_k/C^2l+2k_2k , X_odd(l,k)≡C^2l_l/4^lC^l+k_k ,
where C^i_j are the binomial coefficients. Therefore, in the eikonal limit l≫1, only the coefficients a_ll^0 and a_l-l^0 dominate and read
a_ll^0=a_l-l^0≈ l^2 .
The effective potential (<ref>) of the SBH-disk model can thus be approximated as
V_eff(r)≈ l^2f(r)/r^2[1+4ϵ𝒱_0(r)] .
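A quick numerical check of this dominance (our own sanity check, not part of the original derivation) can be done with the closed forms above: a_ll^0 = l(2l+1)/2 grows like l^2, while, for instance, a_ll^2 = l/2 and b_ll^0 = 1 remain subleading.

from math import comb

def X_even(l, k):
    return comb(l + k, k) / comb(2 * l + 2 * k, 2 * k)

a = lambda l, k: l * (2 * l + 1) / 2 * X_even(l, k)                 # a_{ll}^{2k}
b = lambda l, k: (2 * l + 1) / (2 * l + 2 * k + 1) * X_even(l, k)   # b_{ll}^{2k}

for l in (2, 10, 50):
    print(l, a(l, 0), a(l, 1), b(l, 0), l**2)
    # a_{ll}^0 ~ l^2 dominates in the eikonal limit; a_{ll}^2 = l/2 and b_{ll}^0 = 1 are subleading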
In this case, the eikonal QNMs with l=|m_z|≫1 correspond to the photons that undergo bound circular motion on the equatorial plane. According to Ref. <cit.>, the radius of these orbits is determined by the root of ∂_r(g_tt/g_φφ)_x=0=0. For the SBH-disk model with approximated metric (<ref>), this equation can be written as
∂_r[f(r)/r^2(1+4ν_disk)]_x=0=0 ,
which is precisely the equation that determines the peak of the effective potential (<ref>) for l=|m_z|≫1 because ν_disk|_x=0=ϵ𝒱_0.
§.§ The polar modes m_z=0
When m_z=0, the dominant coefficients in the eikonal limit l≫1 are
c_l0^j≈
-1/4^kC^2k_kl^2 , if j=2k
-4^k+1l^2/π(k+1) C^2k+2_k+1 , if j=2k+1 .
The effective potential (<ref>) is then approximated as
V_eff(r)≈ l^2f(r)/r^2{1+ϵ∑_k=0^∞[C_k^2k/4^k(4𝒱_2k-2ℒ_2k)
+4^k+1/π(k+1)C_k+1^2k+2(4𝒱_2k+1-2ℒ_2k+1)]} .
Note that the last term on the right-hand side comes from the terms with odd j.
The peak of the effective potential (<ref>) can also be obtained through the photon geodesic equations. Consider the polar photon orbits on the photon sphere around the Schwarzschild black hole. Those orbits have zero azimuthal angular momentum L_z and repeatedly reach the poles x=±1. When the spacetime is deformed, i.e., ϵ≠0, these polar orbits would also be deformed such that ṙ=O(ϵ) and L_z=O(ϵ), where the dot denotes the derivative along the geodesic with respect to the affine parameter λ. Up to first order in ϵ, the radial component of the geodesic equations
d/dλ(g_μνẋ^ν)=1/2(∂_μ g_αβ)ẋ^αẋ^β
can be written as
d/dλ(g_rrṙ)=1/2E^2/g_tt∂_rln|g_tt/g_θθ|+O(ϵ^2) ,
where E is the energy of the photons. Following Ref. <cit.>, we assume that the deformed orbits remain periodic and form a class of limit cycles in phase space. We can then integrate Eq. (<ref>) over one complete period in λ and obtain
o(ϵ) ∝∫_0^2πdθ∂_r(g_tt/g_θθ)
∝∫_0^2πdθ∂_r{f(r)/r^2[1+(4𝒱_j(r)-2ℒ_j(r))|cos^jθ|]} ,
with j a dummy index standing for summations over all non-negative integers. Because of the absolute value of cos^jθ, both even and odd powers of j contribute to the integration[In Ref. <cit.>, the metric functions are expressed in series of cosθ without absolute values. Therefore, in that case, only even powers of j would contribute.]. One can eventually get
∂_r {f(r)/r^2[1+ϵ∑_k=0^∞(C_k^2k/4^k(4𝒱_2k-2ℒ_2k)
+4^k+1/π(k+1)C_k+1^2k+2(4𝒱_2k+1-2ℒ_2k+1))]}=o(ϵ) ,
and then see that the root of Eq. (<ref>) coincides with the peak of the effective potential (<ref>). In Ref. <cit.>, it has been proved that the root of Eq. (<ref>) is precisely the averaged radius of the polar photon orbits along full periods. The averaged radius of bound photon orbits, both in the cases of equatorial orbits (<ref>) and polar orbits (<ref>), can be captured by their corresponding effective potentials of QNMs in the eikonal limit, i.e., Eqs. (<ref>) and (<ref>), respectively. This is the manifestation of eikonal correspondence between bound photon orbits and high-frequency QNMs. Here, we show explicitly that even in the presence of spacetime deformations induced by a gravitating thin disk, as long as the disk mass is much smaller than the black hole mass, i.e., ϵ≪1, the eikonal correspondence can be identified through the definition of the averaged radius of bound photon orbits. This is consistent with the results of Ref. <cit.>.
§ CONCLUSIONS
In this paper, we consider a recently obtained solution of deformed Schwarzschild black holes (SBH-disk model) <cit.> and investigate the QNMs of a massless scalar field of this spacetime. The SBH-disk model describes the spacetime geometry of a Schwarzschild black hole encircled by a gravitating thin accretion disk. The superposed spacetime is an exact solution to GR and the gravitational field is regular everywhere outside the event horizon. In particular, the presence of the gravitating thin disk breaks the spherical symmetry, which is usually assumed in the literature when considering the gravitating fluid in the environment around astrophysical black holes.
The lack of spherical symmetry of the SBH-disk model inevitably leads to the computational complexity of QNM frequencies because the angular and radial sectors of the QNM master equation are highly coupled. We overcome this difficulty by assuming that the disk mass ℳ is much smaller than the black hole mass M. Up to the first order of ℳ/M, one can obtain the master equation that allows us to investigate the frequency shifts of QNMs in the presence of the disk. In particular, the radial sector of the master equation can be recast in a Schrödinger-like form in which the effective potential V_eff(r) can be defined unambiguously and it reduces to the Schwarzschild one V_eff^Sch(r) in proper limits.
Besides the black hole mass M, the SBH-disk model contains four additional parameters, which essentially control the shape of the surface density profile for the disk. Taking a physically reasonable density profile, we find that the disk gravity would flatten the effective potential V_eff(r) as compared with the Schwarzschild one. This behavior is robust among different choices of disk parameters. Furthermore, the presence of the gravitating disk would lower the real part of the QNM frequencies, while increase the damping time. In particular, the shifts of the real and imaginary parts with respect to their Schwarzschild counterparts, seem to follow a universal relation in the sense that they are shifted toward the same direction on the complex plane by the same amount in the presence of the disk (Fig. <ref>). Similar results also appear when the matter around the black hole is modeled based on the assumption of spherical symmetry <cit.>. Although still far away from a rigorous proof, if such a universal relation is indeed robust against the changes of matter configuration around the black hole, it would aid the discrimination between the disk effects on the QNM spectrum and those contributed by other putative physics beyond GR. This line of research deserves further investigation.
In addition to QNM frequencies, we investigate two special kinds of bound photon orbits around the SBH-disk model. Since the equatorial symmetry is still preserved, the circular photon orbits on the equatorial plane exist, and the radius of the orbits is precisely at the peak of the effective potential of the eikonal equatorial QNMs. On the other hand, each polar orbit has a θ-dependent radius because of the spacetime deformations. Assuming periodicity of the orbits, we find that the averaged radius of the orbits along a full period would correspond to the peak of the effective potential of the eikonal polar modes. This result is consistent with that found in the literature.
In order to connect directly to the ringdown phase of gravitational waves, extending the present work to gravitational perturbations is necessary[A similar analysis has been carried out in Ref. <cit.>, in which the matter field around the black hole is assumed not to deform the black hole geometry at the background level, but to interact gravitationally only at the perturbation level.]. In addition, the physical properties of the SBH-disk model have not been much explored so far; these include a detailed investigation of the geodesic dynamics of photons and massive particles. An extension towards a more realistic situation would be to include rotation of the black hole, the disk, or both in the black-hole–disk model and in the analysis of QNMs. We leave these interesting issues for future work.
CYC is supported by the Institute of Physics of
Academia Sinica and the Special Postdoctoral Researcher (SPDR) Program at RIKEN. PK acknowledges support from GACR 21-11268S of the Czech Science Foundation.
§ PHYSICAL PROPERTIES OF THE DISKS
To describe the matter content of infinitesimally thin disks, we have to introduce the stress-energy tensor on a singular hypersurface. Using the formalism developed by Israel <cit.>, the surface stress-energy tensor of a singular layer of matter located at z=const reads <cit.>
S_αβ = -√(g_ρρ)/8π( g_αβ/g_ρρ)_,z .
This expression holds for any axially symmetric and stationary spacetime in Weyl coordinates. If the spacetime is static described by a metric (<ref>), the stress-energy tensor (<ref>) has only two non-trivial components which read
S_tt = 1/4π e^3ν - λν_,z (1 - ρν_,ρ) ,
S_φφ = 1/4π e^-λ - νρ^3 ν_,zν_,ρ ,
where the right-hand sides are evaluated in the singular hypersurface, i.e., in the equatorial plane z = 0 where our disk lies. Consider a static observer equipped with a tetrad
e_(t)^α = 1/√(-g_tt)δ^α_t , e_(φ)^α = 1/√(g_φφ)δ^α_φ ,
e_(ρ)^α = 1/√(g_ρρ)δ^α_ρ , e_(z)^α = 1/√(g_zz)δ^α_z .
In this tetrad, we easily observe that the disk can be interpreted as ideal fluid with density and azimuthal pressure (measured by the static observer hovering above the disk)
σ ≡ S_αβe^α_(t) e^β_(t) = 1/2π e^ν - λν_,z (1 - ρν_,ρ) ,
P ≡ S_αβ e^α_(φ) e^β_(φ) = 1/2π e^ν - λρν_,zν_,ρ .
The relation between the z derivative of the potential and the Newtonian surface density w(ρ) can be obtained by integrating the Poisson equation Δν = 4 π w(ρ) δ(z) over the z coordinate. Assuming that the spacetime is reflection symmetric with respect to the equatorial plane, only the term ν_,zz gives some non-zero contributions, thus
w(ρ) = 1/2πlim_z → 0^+ν_,z .
Substituting this relation into (<ref>) we get precisely (<ref>).
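As a simple illustration of these expressions (our own toy example; the potential below is a Kuzmin-Toomre-type choice, not the disk family used in the main text, and we approximate e^{ν-λ}≈1), the Newtonian surface density and the leading-order σ and P can be evaluated symbolically:

import sympy as sp

rho, z, b, m_d = sp.symbols('rho z b m_d', positive=True)

# toy potential in the half-space z > 0 (reflection-symmetric continuation assumed)
nu = -m_d / sp.sqrt(rho**2 + (z + b)**2)

# w(rho) = (1/2 pi) lim_{z -> 0^+} nu_{,z}
w = sp.limit(sp.diff(nu, z), z, 0, '+') / (2 * sp.pi)
print(sp.simplify(w))            # m_d*b / (2*pi*(b**2 + rho**2)**(3/2)), the Kuzmin disk density

# leading-order surface density and azimuthal pressure, with e^{nu - lambda} ~ 1
nu_rho0 = sp.diff(nu, rho).subs(z, 0)
sigma = w * (1 - rho * nu_rho0)
P = w * rho * nu_rho0
print(sp.simplify(sigma), sp.simplify(P))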
99
Kokkotas:1999bd
K. D. Kokkotas and B. G. Schmidt,
Living Rev. Rel. 2, 2 (1999).
Berti:2009kk
E. Berti, V. Cardoso and A. O. Starinets,
Class. Quant. Grav. 26, 163001 (2009).
Konoplya:2011qq
R. A. Konoplya and A. Zhidenko,
Rev. Mod. Phys. 83, 793-836 (2011).
LIGOScientific:2016aoc
B. P. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. Lett. 116, no.6, 061102 (2016).
LIGOScientific:2021djp
R. Abbott et al. [LIGO Scientific, VIRGO and KAGRA],
[arXiv:2111.03606 [gr-qc]].
Reitze:2019iox
D. Reitze, R. X. Adhikari, S. Ballmer, B. Barish, L. Barsotti, G. Billingsley, D. A. Brown, Y. Chen, D. Coyne and R. Eisenstein, et al.
Bull. Am. Astron. Soc. 51, no.7, 035 (2019)
[arXiv:1907.04833 [astro-ph.IM]].
Maggiore:2019uih
M. Maggiore, C. Van Den Broeck, N. Bartolo, E. Belgacem, D. Bertacca, M. A. Bizouard, M. Branchesi, S. Clesse, S. Foffa and J. García-Bellido, et al.
JCAP 03, 050 (2020).
Leung:1997was
P. T. Leung, Y. T. Liu, W. M. Suen, C. Y. Tam and K. Young,
Phys. Rev. Lett. 78, 2894-2897 (1997).
Leung:1999iq
P. T. Leung, Y. T. Liu, W. M. Suen, C. Y. Tam and K. Young,
Phys. Rev. D 59, 044034 (1999).
Barausse:2014tra
E. Barausse, V. Cardoso and P. Pani,
Phys. Rev. D 89, no.10, 104059 (2014).
Barausse:2014pra
E. Barausse, V. Cardoso and P. Pani,
J. Phys. Conf. Ser. 610, no.1, 012044 (2015).
Jaramillo:2020tuu
J. L. Jaramillo, R. Panosso Macedo and L. Al Sheikh,
Phys. Rev. X 11, no.3, 031003 (2021).
Cheung:2021bol
M. H. Y. Cheung, K. Destounis, R. P. Macedo, E. Berti and V. Cardoso,
Phys. Rev. Lett. 128, no.11, 111103 (2022).
Berti:2022xfj
E. Berti, V. Cardoso, M. H. Y. Cheung, F. Di Filippo, F. Duque, P. Martens and S. Mukohyama,
Phys. Rev. D 106, no.8, 084011 (2022).
Cardoso:2021wlq
V. Cardoso, K. Destounis, F. Duque, R. P. Macedo and A. Maselli,
Phys. Rev. D 105, no.6, L061501 (2022).
Medved:2003rga
A. J. M. Medved, D. Martin and M. Visser,
Class. Quant. Grav. 21, 1393-1406 (2004).
Lemos:1994
J. P. S. Lemos and P. S. Letelier,
Phys. Rev. D, 49, no.10, 5135–5143 (1994).
Semerak:2000
O. Semerák and M. Žáček,
Class. Quantum Grav., 17, no.7, 1613–1626 (2000).
Morgan:1969
T. Morgan and L. Morgan,
Phys. Rev., 183, no.5, 1097–1101 (1969).
Cunha:2020
P. V. P. Cunha, N. A. Eiró, C. A. R. Herdeiro, and J. P. S. Lemos,
J. Cosmol. Astropart. Phys., 2020, no.3, 035, (2020).
Semerak:2004
O. Semerák,
Class. Quantum Grav., 21, no.8, 2203–2218 (2004).
Kotlarik:2022
P. Kotlařík, D. Kofroň, and O. Semerák,
ApJ, 931, no.2, 161 (2022).
Vieira:2020
R. S. S. Vieira,
Class. Quantum Grav., 37, no.20, 205013 (2020).
Kotlarik:2022spo
P. Kotlařík and D. Kofroň,
Astrophys. J. 941, no.1, 25 (2022).
Cano:2020cao
P. A. Cano, K. Fransen and T. Hertog,
Phys. Rev. D 102, no.4, 044047 (2020).
Chen:2022ynz
C. Y. Chen, H. W. Chiang and J. S. Tsao,
Phys. Rev. D 106, no.4, 044068 (2022).
Cardoso:2021qqu
V. Cardoso and A. Foschi,
Phys. Rev. D 104, no.2, 024004 (2021).
Zhao:2023uam
Y. Zhao, Y. Cai, S. Das, G. Lambiase, E. N. Saridakis and E. C. Vagenas,
[arXiv:2301.09147 [gr-qc]].
Ghosh:2023etd
R. Ghosh, N. Franchini, S. H. Völkel and E. Barausse,
[arXiv:2303.00088 [gr-qc]].
Vogt:2009
D. Vogt and P. S. Letelier
MNRAS 396, no. 3, pp. 1487–1498 (2009).
Toomre:1963
A. Toomre
ApJ 138, p. 385 (1963).
Schutz:1985km
B. F. Schutz and C. M. Will,
Astrophys. J. Lett. 291, L33-L36 (1985).
Iyer:1986np
S. Iyer and C. M. Will,
Phys. Rev. D 35, 3621 (1987).
Konoplya:2003ii
R. A. Konoplya,
Phys. Rev. D 68, 024018 (2003).
Matyjasek:2017psv
J. Matyjasek and M. Opala,
Phys. Rev. D 96, no.2, 024011 (2017).
Matyjasek:2019eeu
J. Matyjasek and M. Telecka,
Phys. Rev. D 100, no.12, 124006 (2019).
Hatsuda:2019eoj
Y. Hatsuda,
Phys. Rev. D 101, no.2, 024008 (2020).
Konoplya:2019hlu
R. A. Konoplya, A. Zhidenko and A. F. Zinhailo,
Class. Quant. Grav. 36, 155002 (2019).
Cho:2009cj
H. T. Cho, A. S. Cornell, J. Doukas and W. Naylor,
Class. Quant. Grav. 27, 155004 (2010).
Cho:2011sf
H. T. Cho, A. S. Cornell, J. Doukas, T. R. Huang and W. Naylor,
Adv. Math. Phys. 2012, 281705 (2012).
Konoplya:2021ube
R. A. Konoplya,
Phys. Lett. B 823, 136734 (2021).
Liu:2012ee
D. J. Liu, B. Yang, Y. J. Zhai and X. Z. Li,
Class. Quant. Grav. 29, 145009 (2012).
Fernando:2012yw
S. Fernando and J. Correa,
Phys. Rev. D 86, 064039 (2012).
Flachi:2012nv
A. Flachi and J. P. S. Lemos,
Phys. Rev. D 87, no.2, 024034 (2013).
Bouhmadi-Lopez:2020oia
M. Bouhmadi-López, S. Brahma, C. Y. Chen, P. Chen and D. h. Yeom,
JCAP 07, 066 (2020).
Daghigh:2020fmw
R. G. Daghigh, M. D. Green and G. Kunstatter,
Phys. Rev. D 103, no.8, 084031 (2021).
Jafarzade:2021umv
K. Jafarzade, M. Kord Zangeneh and F. S. N. Lobo,
Annals Phys. 446, 169126 (2022).
del-Corral:2022kbk
D. del-Corral and J. Olmedo,
Phys. Rev. D 105, no.6, 064053 (2022).
Cardoso:2008bp
V. Cardoso, A. S. Miranda, E. Berti, H. Witek and V. T. Zanchin,
Phys. Rev. D 79, no.6, 064016 (2009).
Yang:2012he
H. Yang, D. A. Nichols, F. Zhang, A. Zimmerman, Z. Zhang and Y. Chen,
Phys. Rev. D 86, 104006 (2012).
Li:2021zct
P. C. Li, T. C. Lee, M. Guo and B. Chen,
Phys. Rev. D 104, no.8, 084044 (2021).
Guo:2021enm
G. Guo, P. Wang, H. Wu and H. Yang,
JHEP 06, 060 (2022).
Chen:2022nlw
C. Y. Chen, Y. J. Chen, M. Y. Ho and Y. H. Tseng,
[arXiv:2212.10028 [gr-qc]].
Nagar:2006eu
A. Nagar, O. Zanotti, J. A. Font and L. Rezzolla,
Phys. Rev. D 75, 044016 (2007).
Israel:1966
W. Israel
Nuovo Cim. B 44 p. 14, (1966).
Ledvinka:2019
T. Ledvinka and J. Bičák
Phys. Rev. D 99, no. 6, p. 064046 (2019).
|
http://arxiv.org/abs/2307.04896v1 | 20230710204144 | An abstract formulation of the flat band condition | [
"Jeffrey Galkowski",
"Maciej Zworski"
] | math.AP | [
"math.AP",
"math-ph",
"math.MP",
"math.SP"
] |
Motivated by the study of flat bands in models of twisted bilayer graphene (TBG), we give abstract conditions which guarantee the existence of a discrete set of parameters for which periodic Hamiltonians exhibit flat bands. As an application, we show that a scalar operator derived from the chiral model of TBG has flat bands for a discrete set of parameters.
§ INTRODUCTION
Existence of flat bands for periodic operators (in the sense of Floquet theory) has interesting
physical consequences, especially in the case of nontrivial band topology. A celebrated
recent example is given by the Bistritzer–MacDonald Hamiltonian <cit.> modeling twisted
bilayer graphene
(see <cit.> and <cit.> for its mathematical derivation). A model exhibiting exact flat bands is given by the chiral limit of the Bistritzer–MacDonald
model considered by Tarnopolsky–Kruchkov–Vishwanath <cit.>. Both the Bistritzer–MacDonald model and its chiral limit depend on a parameter corresponding to the angle of twisting between two graphene sheets and, in the chiral model, the perfectly flat bands
appear for a discrete set of values of this parameter.
This follows from a spectral characterization of those magic angles
given by Becker–Embree–Wittsten–Zworski <cit.>. Existence of the first real magic angle
was provided by Watson–Luskin <cit.>, with its simplicity established by
Becker–Humbert–Zworski <cit.>. That paper also showed existence of infinitely many,
possibly complex, magic angles.
The purpose of this note is to provide a simple abstract version of the spectral
characterization of magic angles given in <cit.> (see also <cit.>). In <ref> we
apply this spectral characterization of flat bands in a model to which the argument from <cit.> does not apply.
To formulate our result we consider Banach spaces,
X⊂ Y, and a connected open set Ω⊂ℂ.
The result concerns a holomorphic family of Fredholm operators
of index 0
(see <cit.>):
Q : Ω×ℂ→ℒ ( X, Y ) ,
( α , k ) ↦ Q ( α, k ) .
We make the following assumption: there exists a lattice
Γ^* ⊂ℂ,
and families of invertible operators γ↦ W_∙ (γ ) : ∙→∙,
∙ = X, Y, γ∈Γ^*,
such that
Q ( α , k + γ ) = W_Y ( γ )^-1 Q ( α, k ) W_X ( γ ) , γ∈Γ^* .
A guiding example is given by the chiral model of twisted bilayer graphene (TBG) <cit.>, <cit.>,
<cit.>:
Q ( α, k ) := D ( α ) + k ,
D ( α ) := [ 2 D_z̅ α U ( z ); α U ( - z ) 2 D_z̅ ] ,
Ω = ℂ ,
2D_z̅ = 1 i ( ∂_x_1 + i ∂_x_2 ) , z = x_1 + i x_2 ∈ℂ ,
where U satisfies
U ( z + γ ) = e^ i ⟨γ , K ⟩ U ( z ) , U ( ω z ) = ω U(z) , U ( z̅ ) = - U̅ ( - z ) , ω = e^ 2 π i/3,
γ∈Λ := ωℤ⊕ℤ , ω K ≡ K ≢0 mod Λ^* , Λ^* := 4 π i/√(3)Λ , ⟨ z , w ⟩ := Re ( z w̅ ) .
An example of U is given by the Bistritzer–MacDonald potential
U ( z ) = - (4/3) π i ∑_ℓ = 0 ^2 ω^ℓ e^ i ⟨ z , ω^ℓ K ⟩, K = 4π/3 .
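The symmetries above can be verified numerically for this potential. The short Python check below is our own illustration (the test point and the lattice element are arbitrary); the last line tests the reflection-type property in the form involving complex conjugation.

import numpy as np

omega = np.exp(2j * np.pi / 3)
K = 4 * np.pi / 3
ip = lambda z, w: (z * np.conj(w)).real          # <z, w> = Re(z conj(w))

def U(z):
    return -(4.0 / 3.0) * np.pi * 1j * sum(
        omega**l * np.exp(1j * ip(z, omega**l * K)) for l in range(3))

z = 0.31 - 0.27j                                  # arbitrary test point
gamma = 2 * omega + 3                             # an element of Lambda = omega Z + Z

print(np.isclose(U(z + gamma), np.exp(1j * ip(gamma, K)) * U(z)))    # translation symmetry
print(np.isclose(U(omega * z), omega * U(z)))                        # rotation symmetry
print(np.isclose(U(np.conj(z)), -np.conj(U(-z))))                    # reflection-type symmetry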
We note that a potential satisfying (<ref>) is periodic with respect to the lattice 3 Λ
and that we can take
Y := L^2 ( ℂ / Γ ; ℂ^2 ) ,
X := H^1 ( ℂ / Γ ; ℂ^2 ) , Γ := 3 Λ .
(For the Fredholm property of D ( α ) + k : X → Y see
<cit.>; the index is equal to 0.) The operators W_∙ ( γ )
are given by multiplication by e^ i ⟨γ, z ⟩, γ∈Γ^*,
with Γ^* the dual lattice to Γ. (The operator is the same but acts on different spaces.)
The self-adjoint Hamiltonian for the chiral model of TBG is given by
H ( α ) := [ 0 D( α )^*; D ( α ) 0 ] ,
and Bloch–Floquet theory means considering the spectrum of
H_k ( α ) := e^ - i ⟨ z, k ⟩ H ( α ) e^ i ⟨ z, k ⟩
: H^1 ( ℂ/Γ ; ℂ^4 ) → L^2 ( ℂ/Γ ; ℂ^4 ) ,
H_k ( α ) = [ 0 Q ( α, k )^*; Q ( α, k ) 0 ] , Q ( α, k ) = D ( α ) + k ,
see <cit.> (we should stress that it is better to consider a modified boundary
condition <cit.> rather than Γ-periodicity but this plays no role in the discussion here).
A flat band at zero energy for the Hamiltonian (<ref>) means that
∀ k ∈ℂ 0 ∈ Spec_ L^2 ( ℂ/Γ; ℂ^4 ) H_k ( α )
⟺ ∀ k ∈ℂ ker_ H^1 ( ℂ/Γ; ℂ^4 ) H_k ( α ) ≠{ 0 }
⟺ ∀ k ∈ℂ ker_ H^1 ( ℂ/Γ; ℂ^2 ) Q ( k, α ) ≠{ 0 } .
We generalize the result of <cit.> stating that the set of α's for which
(<ref>) holds, which we denote by 𝒜_ch, is a discrete subset of ℂ and that (<ref>) is
equivalent to
∃ k ∈ℂ∖Γ^* ker_ H^1 ( ℂ/Γ; ℂ^2 ) Q ( k, α ) ≠{ 0 } .
The key property in showing this is the existence of protected states <cit.>, <cit.>:
∀ α∈ℂ ,
k ∈Γ^* dim ker_ H^1 ( ℂ/Γ; ℂ^2 ) Q ( k, α )
≥ 2, dim ker_ H^1 ( ℂ/Γ; ℂ^2 ) Q ( k , 0 ) = 2 .
This is replaced by the hypothesis (<ref>). We use
1_𝒦 to denote the indicator function of 𝒦.
In the notation of (<ref>) and assuming (<ref>), suppose that
there exists a discrete set 𝒦⊂ℂ such that
for some m_0 ∈ℕ and α_0 ∈Ω, we
have,
dim ker Q ( α_0,k ) = m_0 1_𝒦 ( k ) , dim ker Q ( α ,k ) ≥ m_0 1_𝒦 ( k ),
k ∈ℂ, α∈Ω .
Then there exists a discrete set 𝒜⊂Ω such that
ker Q ( α ,k) ≠{ 0 } for α∈𝒜 and k ∈ℂ,
dim ker Q ( α,k ) = m_0 1_𝒦 ( k ) for
α∈Ω∖𝒜 and k ∈ℂ.
In view of (<ref>) we see that (<ref>) is satisfied for Q given in
(<ref>) with m_0 = 2, α_0 = 0, Ω =ℂ and 𝒦 =
Γ^*. For a direct proof see <cit.> or <cit.>.
Remarks.
Theorem <ref> is valid under a weaker condition than (<ref>).
As seen in <ref>, we need to control the dimension of ker Q(α,k) for every k using
the dimension of ker Q ( α, k) for k in some fixed compact set.
That some condition is needed (other than holomorphy and the Fredholm property)
can be seen by considering the simple example of Q(α,k)=1-α k,
X = Y = ℂ. In this case (<ref>) is satisfied with α_0=0 and 𝒦=∅. Nevertheless,
dim ker Q(α,k) = 0 for k≠α^-1 and dim ker Q(α,k) = 1 for k=α^-1,
and (<ref>) fails. We opted for the easy to state condition (<ref>) in view
of the motivation from condensed matter physics.
§ PROOF OF THEOREM <REF>
We first fix k_0 ∈ℂ∖𝒦 and define
𝒜_k_0 := ∁{α∈Ω : Q(α, k_0 )^-1:Y→ X exists}.
Since α↦ Q( α , k_0 ) is a holomorphic family of Fredholm operators
of index zero,
and ker Q( α_0, k_0 ) = { 0 }, we conclude that α↦ Q ( α , k_0 )^-1 is
a meromorphic family of operators and, in particular, 𝒜_k_0 is a discrete set –
see <cit.>. Also, for
α∉𝒜_k_0, k ↦ Q ( α, k )^-1 is a meromorphic
family of operators and the multiplicity
m ( α,k ) := 1/(2 π i) tr ∮_∂ D Q( α, ζ ) ^-1∂_ζ Q ( α , ζ ) d ζ ,
is well defined. The integral is over the positively oriented boundary of a disc
D which contains k as the
only possible pole of ζ↦ Q ( α, ζ )^-1. For such D
there exists ε > 0 such that
m ( α,k ) = ∑_ k' ∈ D m( α',k' ) , if |α - α' | < ε.
In particular for a fixed k ∈ℂ, α↦ m ( α ,k ) is upper semicontinuous.
We now define
U := {α∈Ω∖𝒜_k_0 : ∀ k, m ( α,k ) = m_0 1_𝒦 ( k ) }.
We note that
α_0∈ U and that Ω∖𝒜_k_0 is connected. Hence U = Ω∖𝒜_k_0 if we show that U is open and closed in the relative topology of
Ω∖𝒜_k_0.
Let α∈ U. We start by showing that for any compact subset K⊂ℂ, there
exists ε_K>0 such that
m(α',k)= m_0 1_𝒦(k)=m(α,k) for all k∈ K and |α-α'|<ε_K.
To see this we note that for any fixed k ∈ℂ
there exist D_k = D ( k, δ_k ),
and ε_k>0 such that (<ref>) holds for |α-α'|<ε_k.
By shrinking D_k (and consequently ε_k) we can assume that (here we use
the discreteness of 𝒦)
D_k ∖{ k }⊂∁𝒦.
Since K is compact, we can find a finite cover
K⊂⋃_i=1^N D_k_i.
Then k_i is the only possible pole for k↦ Q(α,k)^-1 in D_k_i
and for |α-α'|<ε_K:=min_i=1,… Nε_k_i, we have
m(α,k_i)=∑_k∈ D_k_im(α',k).
If k_i∉𝒦 then, as α∈ U, m ( α, k_i ) = 0
and consequently m ( α' , k ) = 0 for k ∈ D_k_i ⊂∁𝒦.
On the other hand, if k_i∈𝒦 then,
m_0=∑_k∈𝒟_k_im(α',k).
and since m(α',k_i)≥ m_0 (by the assumption (<ref>)) we have m(α',k)=0 for k∈ D_k_i∖{ k_i}⊂∁𝒦 and m(α',k_i)=m_0.
Putting those two cases together, we have m(α',k)=m_0 1_𝒦(k) for k∈ K and |α-α'|<ε_K, as claimed in (<ref>).
Now, to complete the proof that U is open, we use (<ref>). Let K⊂ℂ contain the fundamental domain of Γ^* and ε_K as in (<ref>). Then, for all k∈ℂ, there is γ∈Γ^* such that k+γ∈ K. Using (<ref>), we have for |α-α'|<ε_K,
m(α',k+γ)=m(α,k+γ).
But then, by (<ref>)
m(α',k+γ)=m(α',k), m(α,k+γ)=m(α,k),
and hence
m(α',k)=m(α,k)=m_0 1_𝒦(k).
Since k∈ℂ was arbitrary, this implies α'∈ U.
To show that U is closed suppose that 𝒜_k_0∌α_j →α∉𝒜_k_0
and m ( α_j, k ) = m_0 1_𝒦 ( k ). Then, since α∉𝒜_k_0, for every k∈ℂ, there exist ε_k>0 and D_k
such that (<ref>) and (<ref>) hold. In particular, for j large enough (depending on k),
m(α,k)=∑_k'∈ D_k m(α_j,k')=∑_k'∈ D_k m_0 1_𝒦(k')=m_0 1_𝒦(k).
Hence U is closed and open which means that U = Ω∖𝒜_k_0.
Recalling the definition (<ref>), we proved that
Ω∖𝒜_k_0⊂{α: ∀ k, m ( α,k ) = m_0 1_𝒦 ( k ) }⊂Ω∖𝒜_ k_1 ,
for any k_1 ∉𝒦. But this means that 𝒜_k_0 is independent of k_0
and for α∈𝒜 := 𝒜_k_0, Q ( α, k )^-1 does not exist for any k
∈ℂ. Since Q ( α, k ) is a Fredholm operator of index 0, this shows
that ker Q ( α, k ) ≠{ 0 } for all k.
§ A SCALAR MODEL FOR FLAT BANDS
One of the difficulties of dealing with the model described by (<ref>), (<ref>)
is the fact that D ( α ) acts on ℂ^2-valued functions. Here we propose the following
model in which D ( α ) is replaced by a scalar (albeit second order) operator. This is done
as follows. We first consider P ( α ) : H^2 ( ℂ/Γ ; ℂ^2 )
→ L^2 ( ℂ/Γ ; ℂ^2 ) defined as follows:
P ( α ) := D ( - α ) D ( α ) = Q ( α ) ⊗ I_ℂ^2 + R ( α ),
Q ( α ): = ( 2 D_z̅ )^2 - α^2 V ( z ) ,
R ( α ) := - α[ 0 V_1 ( z ); V_1 ( -z ) 0 ] , V ( z ) := U ( z ) U( -z ) , V_1 ( z ) := 2 D_z̅ U ( z ) .
If we think of P ( α ) as a semiclassical differential system with h = 1/α
(see <cit.>) then Q ( α ) is the quantization of the determinant of
the symbol of D( α ) and R ( α ) is a lower order term.
We lose no information when considering P ( α ) in the characterization
of flat bands (<ref>):
If P ( α, k ) := e^ - i ⟨ z, k ⟩ P ( α ) e^ i ⟨ z, k ⟩
then
ker_ H^1 ( ℂ/Γ ) ( D ( α ) + k ) ≠{0 } ⟺ ker_ H^2 ( ℂ/Γ ) P ( α, k ) ≠{ 0 } .
In particular α∈𝒜_ch if and only if
ker_ L^2 ( ℂ / Γ ) P ( α, k ) ≠{ 0 } for some
k ∉Γ^* (which then implies this for all k).
We note that P ( α , k ) = ( D ( - α ) + k ) ( D ( α) + k ) and that
D ( - α ) - k = - ℛ ( D ( α ) + k ) ℛ,
ℛ[ u_1; u_2 ] ( z ) = [ u_2 ( -z ); u_1 ( - z ) ]
and hence
ker_ H^1 ( ℂ/Γ ) ( D ( α ) + k ) =
ℛ ker_ H^1 ( ℂ/Γ ) ( D ( - α ) - k ) .
Since D ( α ) is elliptic, the elements of the kernels above are in C^∞
( ℂ/Γ ) and hence H^1 can be replaced by H^s for any s
– see <cit.>. Hence if ker_ H^2 P ( α, k ) ≠{ 0 }
then either ker_ H^2 ( D ( α ) + k ) = ker_H^1 ( D ( α) + k ) ≠{ 0 } or ker_ H^1 ( D ( - α ) + k ) ≠{ 0 }. If k ∉Γ^* then the equivalence of (<ref>) and
(<ref>) gives the conclusion.
We now consider a model in which we drop the matrix terms in
(<ref>), the definition of P ( α ), and have Q ( α ) act on scalar valued functions. The self-adjoint Hamiltonian corresponding to
(<ref>) is now given by
H ( α ) := [ 0 Q ( α )^*; Q ( α ) 0 ], Q ( α ): = ( 2 D_z̅ )^2 - α^2 V ( z ) , V ∈ C^∞ ( ℂ ) ,
V ( x + γ ) = V ( x ) , γ∈Λ:=
ωℤ⊕ℤ , V ( ω x ) = ω̅V ( x ) , ω := e^ 2 π i/3 .
The potential is periodic with respect to Λ, and hence the usual Floquet theory
applies:
H( α , k ) := [ 0 Q( α , k )^*; Q ( α, k ) 0 ], Q ( α , k ): = ( 2 D_z̅ + k )^2 - α^2 V ( z ) ,
Spec_L^2 ( ℂ ) H ( α ) = ⋃_ k ∈ℂ/Λ^* Spec_ L^2 (
ℂ / Λ ) H ( α, k ) ,
where Spec_ L^2 ( ℂ / Λ ) H ( α , k ) is discrete and
is symmetric under E ↦ - E. Just as for the chiral model of
TBG, a flat band at zero for a given α means that
∀ k ∈ℂ 0 ∈ Spec_ L^2 (
ℂ / Λ ; ℂ^2 ) H ( α , k ) ⟺ ∀ k ∈ℂ ker_ H^2 (
ℂ / Λ; ℂ ) Q ( α, k ) ≠{ 0 } .
As in the chiral model, we take W_X(γ)=W_Y(γ)=e^i⟨γ,z⟩, γ∈Λ^*, the dual lattice to obtain (<ref>).
Theorem <ref> shows that as in the case of (<ref>) this happens
for a discrete set of α∈ℂ:
For H and Q given in (<ref>) there exists a discrete set
𝒜_sc⊂ℂ such that
ker_ H^2 ( ℂ / Λ; ℂ ) Q ( α, k ) ≠{ 0 } for
α∈𝒜_sc, k ∈ℂ,
dim ker_ H^2 ( ℂ / Λ; ℂ ) Q ( α, k ) =
1_Λ^* ( k ) for α∉𝒜_sc.
This is an immediate consequence of Theorem <ref> once we establish
(<ref>) with m_0 = 1 (and α_0 = 0). The kernel of
Q ( 0 , k ) = ( 2 D_z̅ + k )^2 ,
on H^2 ( ℂ/Λ ) is trivial for k ∉Λ^* and
is given by ℂ e^ i ⟨ k , z ⟩, when k ∈Λ^*.
This gives the
first condition in (<ref>). The second one is provided by
For all α∈ℂ and k ∈Λ^*,
dim ker_ H^2 (
ℂ / Λ; ℂ ) Q ( α, k ) ≥ 1.
The proof is essentially the same as that of <cit.> and it uses symmetries of
H (α ) in (<ref>): for u ∈ L^2 ( ℂ/Λ; ℂ^2 ),
ℒ_γ u ( z ) := u ( z + γ ) , γ∈Λ , 𝒞 u ( z ) := [ 1 0; 0 ω̅ ] u ( ω z ) , 𝒲 u = [ -1 0; 0 1 ] u ,
ℒ_γ H ( α ) =H ( α ) ℒ_γ, 𝒞H ( α ) = H ( α ) 𝒞, 𝒞ℒ_γ= ℒ_ωγ𝒞,
𝒲 H ( α ) 𝒲 = - H ( α ) , ℒ_γ𝒲 = 𝒲ℒ_γ , 𝒞𝒲 = 𝒲𝒞 .
We introduce two orthogonal subspaces of L^2 ( ℂ/Γ ):
L^2_j := { u ∈ L^2 ( ℂ/Γ ) : ℒ_γ u = u ,γ∈Λ, 𝒞 u = ω̅^j u } , j = 0 , 1 .
Then the standard basis of ℂ^2 satisfies 𝐞_j ∈ L^2_j
and H ( 0 ) 𝐞_j = 0. Using 𝒲 we see that
the spectrum of H ( α ) on L^2_j (with the domain given by H^2 ( ℂ/Γ )
∩ L^2_j) is symmetric under E ↦ - E. Since 0 is a simple eigenvalue
of H( 0 ) |_L^2_j, j = 0, 1 and the eigenvalues of H(α)|_L^2_j are continuous in α, 0 remains an eigenvalue for all α.
That means that ker_H^2 Q ( α, 0 ) is at least one dimensional. The same argument
applies at all k ∈Λ^* by conjugation with e^ i ⟨ z , k ⟩.
Remarks. 1. The proof of Theorem <ref>
also shows the following spectral characterization of 𝒜_sc: if
T_k := ( 2 D_z̅ + k)^-2 V , k ∉Λ^* ,
then
α∈𝒜_sc ⟺ ∃ k ∉Λ^* α^-2∈ Spec_ L^2 ( ℂ/Λ ) T_k
⟺ ∀ k ∉Λ^* α^-2∈ Spec_ L^2 ( ℂ/Λ ) T_k ,
Using the methods of <cit.> one can show that for V ( z ) = U ( z ) U ( - z )
with U given by (<ref>) (or for more general classes of potentials described
in <cit.>), tr T_k^p∈ ( π/√(3) ) ℚ, p ≥ 2. Together
with a calculation for p = 2 (as in <cit.>) this
shows that | 𝒜_sc | =∞. With numerical assistance one can also show
existence of a real α∈𝒜_sc.
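The spectral characterization above suggests a direct numerical scheme: discretize T_k in the plane-wave basis e^{i⟨z,g⟩}, g∈Λ^* (on which 2D_z̅ acts as multiplication by g, and V acts by shifting g by the finitely many Fourier frequencies of U(z)U(-z)), truncate, and read off candidate α's from the eigenvalues. The Python sketch below is our own illustration; the truncation order N and the choice of k are arbitrary, and convergence should be monitored by increasing N.

import itertools
import numpy as np

omega = np.exp(2j * np.pi / 3)
K = 4 * np.pi / 3
c = -4j * np.pi / 3
e1, e2 = 4j * np.pi / np.sqrt(3) * omega, 4j * np.pi / np.sqrt(3)   # generators of Lambda^*

# Fourier modes of V = U(z) U(-z): frequencies (omega^l - omega^m) K, coefficients c^2 omega^(l+m)
A = np.array([[e1.real, e2.real], [e1.imag, e2.imag]])
V_modes = {}
for l, m in itertools.product(range(3), repeat=2):
    q = (omega**l - omega**m) * K
    a, b = np.rint(np.linalg.solve(A, [q.real, q.imag])).astype(int)
    key = (int(a), int(b))
    V_modes[key] = V_modes.get(key, 0) + c**2 * omega**(l + m)

N = 12                                             # truncation: |a|, |b| <= N
sites = [(a, b) for a in range(-N, N + 1) for b in range(-N, N + 1)]
index = {s: i for i, s in enumerate(sites)}
k = 1.0 + 0.5j                                     # any k outside Lambda^*

T = np.zeros((len(sites), len(sites)), dtype=complex)
for (a, b), i in index.items():                    # column i: input plane wave
    for (da, db), coeff in V_modes.items():
        out = (a + da, b + db)
        if out in index:
            g_out = out[0] * e1 + out[1] * e2
            T[index[out], i] = coeff / (g_out + k)**2   # (2 D_zbar + k)^{-2} acts on the output frequency

eigs = np.linalg.eigvals(T)
alphas = 1.0 / np.sqrt(eigs[np.argsort(-np.abs(eigs))][:10])
print(alphas)                                      # candidate elements of A_sc, smallest |alpha| first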
2. We can strengthen Proposition <ref> as in <cit.>:
there exists a holomorphic family ℂ∋α↦ u ( α ) ≢0,
such that u ( 0 ) = 1 and Q ( α, 0 ) u ( α ) = 0.
§ NUMERICAL OBSERVATIONS
The spectral characterization (<ref>) allows for an accurate computation
of α's for which (<ref>) exhibits flat bands at energy 0.
For large α's however, pseudospectral effects described in <cit.>
make calculations unreliable. The set (shown as ∙)
𝒜_sc∩{ Re α≥ 0 }
where 𝒜_sc is given in Theorem <ref> looks as follows
(for comparison we show the corresponding set, 𝒜_ch, for the chiral model ∘):
[Figure: elements of 𝒜_sc (∙) and of 𝒜_ch (∘) in the complex α-plane.]
The real elements of 𝒜_sc are shown as ∙. They appear to have multiplicity
two. An adaptation of the theta function argument <cit.>, <cit.>, <cit.>, <cit.>
should apply to this case and the evenness of eigenfunctions in Proposition <ref> shows that
they have (at least) two zeros at α∈𝒜_sc. That implies multiplicity of at least 2.
This is illustrated by an animation <https://math.berkeley.edu/ zworski/scalar_magic.mp4> (shown
in the coordinates of <cit.>). When we interpolate between the chiral model and the
scalar model, the multiplicity two real α's split and travel in opposite directions to become
magic α's for the chiral model: see <https://math.berkeley.edu/ zworski/Spec.mp4>.
One of the most striking observations made in <cit.> was a quantization rule for
real elements of 𝒜_ch with the exact potential
(<ref>): if α_1 < α_2 < ⋯α_j < ⋯ is the
sequence of all real α's for which (<ref>) holds, then
α_j+1 - α_j = γ + o ( 1 ) , j → + ∞ , γ≃ 3/2.
The more accurate computations made in <cit.> suggests that
γ≃ 1.515.
In the scalar model (<ref>) with V( z ) = U ( z ) U ( -z ) where
U is given by (<ref>) we numerically observe the following rule for real elements of 𝒜_sc:
α_j+1 - α_j = 2 γ + o ( 1 ) , j → + ∞ ,
where γ is the same as in (<ref>).
Acknowledgements
We would like to thank Simon Becker for help with matlab and in particular for
producing the movies referred to above.
JG acknowledges support from EPSRC grants EP/V001760/1 and EP/V051636/1 and
MZ from the NSF grant DMS-1901462
and the Simons Foundation under a “Moiré Materials Magic" grant.
0
[Be*21]suppl S. Becker, M. Embree, J. Wittsten and M. Zworski,
Spectral characterization of magic angles in twisted bilayer graphene, Phys. Rev. B 103, 165113, 2021.
[Be*22]beta S. Becker, M. Embree, J. Wittsten and M. Zworski,
Mathematics of magic angles in a model of twisted bilayer graphene,
Probab. Math. Phys. 3(2022), 69–103.
[BHZ22a]bhz1 S. Becker, T. Humbert and M. Zworski,
Integrability in the chiral model of magic angles, 2208.01620.
[BHZ22b]bhz2 S. Becker, T. Humbert and M. Zworski,
Fine structure of flat bands in a chiral model of magic angles,
2208.01628.
[BiMa11]BM11 R. Bistritzer and A. MacDonald, Moiré bands in twisted double-layer graphene. PNAS, 108, 12233–12237, 2011.
[CGG22]CGG E. Cancès, L. Garrigue, D. Gontier, A simple derivation of moiré-scale continuous models for twisted bilayer graphene. 2206.05685.
[DuNo80]dun B.A. Dubrovin and S.P. Novikov, Ground states in a periodic field. Magnetic Bloch functions and vector
bundles. Soviet Math. Dokl. 22, 1, 240–244, 1980.
[DyZw19]res S. Dyatlov and M. Zworski,
Mathematical Theory of Scattering Resonances,
AMS 2019, <http://math.mit.edu/ dyatlov/res/>
[TKV19]magic
G. Tarnopolsky, A.J. Kruchkov and A. Vishwanath,
Origin of magic angles in twisted bilayer graphene,
Phys. Rev. Lett. 122, 106405, 2019
[Wa^*22]wats A. B. Watson, T. Kong, A. H. MacDonald, and M. Luskin, Bistritzer-MacDonald dynamics in twisted bilayer graphene, 2207.13767.
[WaLa21]lawa A. Watson and M. Luskin, Existence of the first magic angle for the chiral model of bilayer graphene, J. Math. Phys. 62, 091502 (2021).
|
http://arxiv.org/abs/2307.05704v1 | 20230711181205 | A Causal Ordering Prior for Unsupervised Representation Learning | [
"Avinash Kori",
"Pedro Sanchez",
"Konstantinos Vilouras",
"Ben Glocker",
"Sotirios A. Tsaftaris"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Unsupervised representation learning with variational inference relies heavily on independence assumptions over latent variables. Causal representation learning (CRL), however, argues that factors of variation in a dataset are, in fact, causally related. Allowing latent variables to be correlated, as a consequence of causal relationships, is more realistic and generalisable. So far, provably identifiable methods rely on auxiliary information, weak labels, or interventional or even counterfactual data. Inspired by causal discovery with functional causal models, we propose a fully unsupervised representation learning method that considers a data generation process with a latent additive noise model (ANM). We encourage the latent space to follow a causal ordering via a loss function based on the Hessian of the latent distribution.
§ INTRODUCTION
The objective of extracting meaningful representations from unlabelled data is a longstanding pursuit in the field of deep learning <cit.>. Conventionally, methods of unsupervised representation learning have concentrated on unveiling statistically independent latent variables <cit.>, demonstrating appreciable success in synthetic benchmarks and datasets where generation parameters can be carefully manipulated <cit.>. However, it is essential to acknowledge the differences between controlled environments and real-world scenarios. In the latter, the factors contributing to data variation are often intertwined within causal relationships. Therefore, it is not merely advantageous but imperative to integrate causal understanding into the process of learning representations <cit.>, which can improve the models from a generalisation, and interpretability, viewpoint.
The main challenge in learning meaningful and disentangled latent representations is identifiability,
i.e. ensuring that the true distribution of a data generation process can be learned (up to a simple transformation, given the inherent limitation that we can never observe the hidden latent factors from observational data alone); this requires the model to be injective (a one-to-one mapping) onto the observed distribution. Identifiability ensures that if an estimation method perfectly fits the data distribution, the learned parameters will correspond to the true generative model.
For example, discovering independent sources of variation which are observed via a nonlinear mixing function is impossible <cit.>. This established result from the nonlinear ICA literature has been replicated for disentangled representation learning with variational autoencoders <cit.>.
Representation learning becomes identifiable when non-i.i.d. (independent and identically distributed) samples from a given data generation process are considered <cit.>. For instance, temporal contrastive learning <cit.> and iVAE <cit.> can provably ensure identifiability by utilising knowledge of auxiliary information. Indeed, <cit.> develops a comprehensive proof that generative models become identifiable when variables in the latent space are conditionally independent, given the auxiliary information. Conditional independence given external information allows variables to be dependent (or correlated) <cit.>, which is more realistic. Further reinforcing the notion of dependence between latent variables, the identifiability of unsupervised representations can be proven by assuming a latent space to follow a Gaussian Mixture Model (GMM) and an injective decoder <cit.>. Any distribution can be approximated by a mixture model with sufficiently many components, including distributions following a causal model. In fact, <cit.> assumes that latent variables are conditionally independent, given a component of the mixture model. The mixture component can correspond to using a “learned” auxiliary variable <cit.>, bridging the gap with <cit.>.
These works <cit.> on identifiable representation learning from observational data do not consider latent causal structure. They build up, however, a theory around identifiable representation learning which allows arbitrary distribution encoding statistical dependencies in latent variables. Discovering the dependency structure in the latent space is at the core of causal representation learning (CRL) <cit.> via the common cause principle[“If two observables X and Y are statistically dependent, then there exists a variable Z that causally influences both and explains all the dependence in the sense of making them independent when conditioned on Z. As a special case, Z can coincide with X or Y.”] <cit.>. Learning causally related variables enable
[label=(*)]
* robustness to distribution shifts via the independent causal mechanism (ICM) principle;
* better generalisation, e.g. in transfer learning settings;
* answering causal queries, i.e. estimation of interventional and counterfactual distributions.
Previous work on CRL, however, utilise data from interventional <cit.> or counterfactual (pre- and post-intervention) <cit.> distributions for learning identifiable causal representations.
In this work, we bridge the gap between identifiable representation learning from observational data and CRL by using functional constraints (which are very common in the causal discovery <cit.> literature). We propose the first (to the best of our knowledge) method for unsupervised CRL under some data and model assumptions. This can be done by assuming a data generation process in which the latent space adheres to an additive noise model (ANM) and applies an injective nonlinear mapping to generate observational data. The main contributions in this work include
[label=(*)]
* Based on the universal approximation capabilities of GMMs, we show that models with a latent ANM prior are identifiable to block diagonal transformation; and
* We propose an estimation method that encourages the latent space to follow an ANM by leveraging asymmetries in the learned latent distribution.
More specifically, the latent distribution's second-order derivatives (Hessian) can be incorporated into a loss function that promotes latent ordering. We term models trained with the proposed estimation method as coVAE (causally ordered Variational AutoEncoders).
§ RELATED WORKS
Disentangled Representation Learning.
Early efforts on unsupervised representation learning focused on the Variational Autoencoder framework <cit.>. β-VAE <cit.> and extensions <cit.> rely on independence assumptions between latent variables to learn disentangled representations <cit.>. Despite showing some success, there is a lack of theory around the identifiability of independent representations. In fact, learning independent (disentangled) representations from i.i.d. data in an unsupervised manner is provably impossible <cit.>.
Representation Learning with Auxiliary Information.
A line of work based on nonlinear ICA leverages auxiliary information to learn identifiable models. <cit.> derive a more general proof of identifiability using the concept of conditional independence given auxiliary variables. An extension of nonlinear ICA, called Independently Modulated Component Analysis (IMCA) was proposed in <cit.>, where the components are allowed to be dependent. On the contrary, <cit.> prove that identifiability of deep generative models can also be achieved without auxiliary information by considering a GMM prior in the latent space. In the same line, empirical results in <cit.> show that the GMM prior assumption is as efficient as utilising auxiliary information in terms of learning stability (latents learned for different training seeds are correlated).
Causal Representation Learning.
Following the common cause principle <cit.>, causal relationships between variables also imply statistical dependencies. Recent works have shown that it is possible to model causal relationships given access to either interventional or non-i.i.d. data. To this end, the method in <cit.> uses an injective polynomial decoder and the overall model is trained on both observational and interventional data. Similarly, <cit.> consider the case of an injective linear decoder and directly optimize the score function of the distribution (in both the latent and observation space). In <cit.> a setting where observations are collected before and after unknown interventions (i.e. counterfactual data) is introduced, while <cit.> extends this idea to causal graphs of higher complexity. Under the non-iid scenario, <cit.> focuses on extracting causal factors from spatiotemporal data by performing interventions across different time steps. There also exist works that assume some level of supervision, i.e. having access to ground-truth causal factors. <cit.> propose a method based on the GAN framework where the prior follows a nonlinear Structural Causal Model (SCM). Others <cit.> instead model exogenous noise directly, which is then mapped to causal latent variables via a linear SCM. Table <ref> describes data and latent space assumptions of previously existing models in comparison to the proposed method.
§ IDENTIFIABILITY OF LATENT ADDITIVE NOISE MODELS
A key challenge in unsupervised representation learning is identifiability. The intuition is that if two parameters result in an identical distribution of observations, then they must be equivalent in order to ensure model identifiability. Note that identifiability is the property of the data generation process, and not of the estimation method. Model identifiability is important because it gives theoretical guarantees that an estimation method is capable of learning the true variables that generated the observed data. Therefore, we first define our model assumptions, show identifiability results and leave the description of the estimation method for the next section. In this section, we define and distinguish between the different forms of identifiability and theoretically show that stronger forms of identifiability can be guaranteed when the latent variables are causally ordered.
§.§ Preliminaries
We assume the data generation process maps a latent space 𝐳, following a structural causal model (SCM), to an observational space 𝐱 as
𝐱 = 𝐟_o(𝐳) + ϵ_x , ℙ(𝐳) = ∏_iℙ(𝐳_i |𝐩𝐚(𝐳_i)).
𝐟_o: ℝ^d →ℝ^o is a non-linear injective mapping (or mixing function), d is the number of latent variables, and o = |𝒪| ≥ d. ℙ(𝐳) is a distribution entailed by an SCM following a directed acyclic graph (DAG) 𝒢, containing d nodes, which describes the true causal structure of the latents. 𝐩𝐚(𝐳_i) are the parents of 𝐳_i in 𝒢.
[Figure: graphical model of the assumed data generation process. Exogenous noise terms ϵ_1, …, ϵ_n feed the endogenous latents 𝐳_1, …, 𝐳_n, which, together with the observation noise ϵ_x, generate the observation 𝐱. Caption: Data generation process with a latent SCM (endogenous and exogenous variables) causing an observation space.]
Additive Noise Models. We assume that the latent SCM consists of a collection of assignments following an additive noise model (ANM), 𝐳_i := f_i(𝐩𝐚(𝐳_i)) + ϵ_i, where ϵ_i is a noise term independent of 𝐩𝐚(𝐳_i), also called exogenous noise, and the ϵ_i are i.i.d. from a smooth distribution ℙ^ϵ. Under the ANM assumption on 𝐳, the latent distribution in <ref> becomes
ℙ(𝐳) = ∏_iℙ(𝐳_i |𝐩𝐚(𝐳_i)) = ∏_iℙ^ϵ(𝐳_i - f_i(𝐩𝐚(𝐳_i))).
This assumption is particularly important to demonstrate guarantees on stronger forms of identifiability.
Assuming a functional form for the causal mechanism between variables, such as ANMs <cit.>, is an established method for identifying causal relationships <cit.> due to asymmetries in the joint distribution. Moreover, the ANM assumption has been shown to perform well on real benchmarks from various domains such as meteorology, biology, medicine, engineering and economy <cit.>, for the task of causal discovery.
Causal Ordering. Since we assume 𝒢 to be a DAG, there exists a (generally non-unique) permutation τ of the d nodes such that every node appears before all of its descendants. Formally, τ_i < τ_j for all j ∈𝐝𝐞(𝐳_i), where 𝐝𝐞(𝐳_i) are the descendants of 𝐳_i in 𝒢 (Appendix B in <cit.>).
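To make the assumed data generation process concrete, the sketch below (our own illustration, not code from this work) samples latents from a small ANM that respects a fixed causal ordering and then applies a generically injective nonlinear mixing to produce observations; the particular graph, mechanisms f_i, noise scale, and dimensions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent_anm(n, d=3):
    """Toy latent ANM z_i := f_i(pa(z_i)) + eps_i with causal order z_0 -> z_1 -> z_2."""
    eps = rng.normal(size=(n, d))
    z = np.zeros((n, d))
    z[:, 0] = eps[:, 0]
    z[:, 1] = np.tanh(2.0 * z[:, 0]) + eps[:, 1]                     # nonlinear mechanism
    z[:, 2] = 0.5 * z[:, 1] ** 3 / (1.0 + z[:, 1] ** 2) + eps[:, 2]  # another mechanism
    return z

def injective_mixing(z, o=6):
    """Map d latents to o >= d observations: a (generically full-row-rank) linear lift
    followed by an elementwise strictly increasing nonlinearity keeps the map injective."""
    d = z.shape[1]
    A = rng.normal(size=(d, o))
    u = z @ A
    return np.tanh(u) + 0.1 * u

z = sample_latent_anm(5000)
x = injective_mixing(z)
print(z.shape, x.shape)   # (5000, 3) (5000, 6)
```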
§.§ Identifiability Equivalence
The exact definition of model identifiability can be too restrictive. In reality, identifying a representation up to a simple transformation is enough. Therefore, we now formally define identifiability <ref> and its weaker forms, which guarantee identifiability up to affine transformation <ref>, permutation and scaling <ref>, and block diagonal and scaling transformations <ref>.
For ANM data-generating processes, <cit.> demonstrates identifiability of models from observational data alone; further, <cit.> discuss the identifiability of such models via the score function of the data distribution. However, neither work addresses the identifiability of latent ANM models.
(Strong Identifiability)
For parameter domain Θ and equivalence relation ∼ on Θ, the considered model is ∼-identifiable if equation <ref> is satisfied.
ℙ_θ_1(𝐱) = ℙ_θ_2(𝐱) ⇒θ_1 ∼θ_2.
According to <cit.>, strong model identifiability makes the latent space ℙ() identifiable.
(Affine Equivalence, ∼_A)
For θ = {𝐟, 𝐩} a set of parameters corresponding to the mixing function and prior, the affine equivalence relation ∼_A on Θ is defined as:
(𝐟, 𝐩) ∼_A (𝐟̃, 𝐩̃) ⟺ ∃ 𝐀∈ℝ^n× n, 𝐜∈ℝ^n s.t. 𝐟^-1(𝐱) = 𝐀𝐟̃^-1(𝐱) + 𝐜, ∀𝐱∈𝒪.
where 𝐀 is an invertible matrix and 𝒪 is an observational data space.
∼_A states that the images of 𝐟^-1 and 𝐟̃^-1 are related by an affine transformation.
(Permutation Equivalence, ∼_P)
For θ = {𝐟, 𝐩} a set of parameters corresponding to the mixing function and prior, the permutation equivalence relation ∼_P on Θ is defined as:
(𝐟, 𝐩) ∼_P (𝐟̃, 𝐩̃) ⟺ ∃ 𝐏∈ℝ^n× n, 𝐜∈ℝ^n s.t. 𝐟^-1(𝐱) = 𝐏𝐟̃^-1(𝐱) + 𝐜, ∀𝐱∈𝒪.
where 𝐏 is a block permutation matrix and 𝒪 is an observational data space.
∼_P states that the images of 𝐟^-1 and 𝐟̃^-1 are related by permutation, scaling, and translation.
(Block Diagonal Equivalence, ∼_D)
For θ = {𝐟, 𝐩} a set of parameters corresponding to the mixing function and prior, the block diagonal equivalence relation ∼_D on Θ is defined as:
(𝐟, 𝐩) ∼_D (𝐟̃, 𝐩̃) ⟺ ∃ 𝐃, 𝐜 s.t. 𝐟^-1(𝐱) = 𝐃𝐟̃^-1(𝐱) + 𝐜, ∀𝐱∈𝒪.
where 𝐃 is a block diagonal matrix, 𝐜∈ℝ^d is a shift vector, and 𝒪 is an observational data space.
∼_D states that the images of 𝐟^-1 and 𝐟̃^-1 are related just by translation and scaling.
§.§ Identifiability of Latent ANMs
Universal Approximation of GMMs.
Assuming the data generating process is an affine or piece-wise affine function, GMMs with a sufficient number of components can model any density in the limiting case <cit.>; this breaks the symmetry in the latent space and plays a role analogous to the auxiliary information used in iVAE <cit.>.
In light of this, we model our latent distribution ℙ(𝐳) = ∏_iℙ^ϵ(𝐳_i - f_i(𝐩𝐚(𝐳_i))) = ∑_j=1^J π_j 𝒩(μ_j, Σ_j) as a mixture of densities.
(Identifiability of the latent distribution under 𝒢)
Let 𝐟_o, 𝐟̃_o satisfy the injectivity assumption, and let 𝐲∼ℙ(𝐳), 𝐲' ∼ℙ̃(𝐳), where ℙ, ℙ̃ follow the same causal graph 𝒢. Suppose 𝐟_o(𝐲) and 𝐟̃_o(𝐲') are equally distributed; then ℙ(𝐳) ∼ℙ̃(𝐳).
This theorem is similar to, but goes beyond, Theorem E.1 in <cit.>. We show equivalence up to ∼ rather than ∼_P, given that the latent variables are constrained with respect to some causal graph (with all conditional independencies).
The proof is detailed in the appendix. Its main outline is to show that, under the constraint that the latent distribution respects the same causal graph 𝒢, the block permutation matrix (in Theorem E.1 of <cit.>) can be reduced to a diagonal matrix. Similar to <cit.>, we approximate the posterior distribution using GMMs.
(Identifiability of the latent distribution under causal ordering) When only the causal ordering is known, the strong identifiability of Theorem <ref> reduces to block diagonal identifiability (∼_D).
Given the fact that constraining latent variables based on the complete causal graph may not be feasible, the lemma relaxes this constraint to enforce causal ordering, which guarantees ∼_D identifiability. In section <ref>, we show how to achieve causal ordering in the latent space.
(Model Identifiability)
Let 𝐟_o, 𝐟̃_o satisfy the injectivity assumption, let 𝐲∼ℙ(𝐳), 𝐲' ∼ℙ̃(𝐳), where ℙ, ℙ̃ follow the same causal graph 𝒢, and let 𝒟⊆ℝ^o, with o = |𝒪|, be such that 𝐟_o, 𝐟̃_o are injective onto 𝒟.
Suppose 𝐟_o(𝐲) and 𝐟̃_o(𝐲') are equally distributed; then 𝐟_o(𝐳) = 𝐟̃_o(𝐳).
This theorem is similar to, but goes beyond Theorem D.4 in <cit.>. We show equivalence up to ∼ rather than ∼_A, given that the latent variables are constrained with respect to some causal graph (with all conditional independencies).
We detail the proof in the appendix. Similar to the proof of Theorem <ref>, we use GMMs to model our posterior distribution. The main component of the proof is to reduce affine transformation in Theorem D.4 <cit.> to an identity transformation.
(Model identifiability under causal ordering) When the latent variables follow a particular causal ordering τ rather than the entire causal graph 𝒢, there exists a block diagonal transformation 𝐃 such that 𝐟_o(𝐳) = (𝐟̃_o ∘𝐃)(𝐳).
§ ESTIMATION
We now derive an estimation procedure for learning the data generation process in equation <ref>. The findings of the previous section show that a data generation process with an ANM in the latent space is identifiable if the causal graph (or causal ordering) is known. Therefore, we proceed to define a loss function that will ensure that the latent space is causally ordered. Then, we describe a variational inference estimation method which models latent variables using a GMM.
§.§ Causal Ordering Loss
In causal representation learning, the goal is to learn causal variables from observations without information about the causal structure. However, there is always a causal ordering associated with a DAG. It is well known in the causal discovery literature that a complete causal graph is not identifiable from observational data without extra assumptions. If the functional form of the causal mechanism is assumed to be an ANM, causal directions become identifiable due to asymmetries. Interestingly, previous works on causal discovery <cit.> explore a property of the distribution of ANMs to find a causal ordering. Here, we use the same property to enforce causal ordering instead of discovering it.
Enforcing causal ordering allows us to approximate the assumption of known causal ordering from Lemma <ref>. We use this property as a loss function for learning the latent representations.
The property is based on the Jacobian of an ANM distribution's score function. First, let the latent distribution ℙ(𝐳) follow an ANM and let ℙ^ϵ be any quadratic exponential noise prior (e.g., Gaussian-like) <cit.>. We can express its score function as
∇_𝐳_ilogℙ(𝐳) = ∂logℙ^ϵ(𝐳_i - f_i(𝐩𝐚(𝐳_i)))/∂𝐳_i - ∑_j ∈𝐜𝐡(𝐳_i)∂ f_j/∂𝐳_i∂logℙ^ϵ(𝐳_j - f_j(𝐩𝐚(𝐳_j)))/∂𝐳_j.
Based on the above formalism,
it can be derived that ∇_𝐳_i^2 logℙ(𝐳) = a if and only if 𝐳_i is a leaf node, where a is some constant and ∇_𝐳_i^2 logℙ(𝐳) is the i-th diagonal element of the distribution's Hessian.
Assume that ℙ(𝐳) follows an ANM and let H_var^i(𝐳) = var(∇^2_𝐳_ilogℙ(𝐳)). The latent space can be causally ordered by minimising the causal ordering loss defined as
ℒ_order = -∑_i=1^d-1logH_var^i(𝐳_i, …, 𝐳_d)^-1/∑_j = i^d H_var^j(𝐳_i, …, 𝐳_d)^-1
The proof directly extends from analysing equation <ref>. As described in <cit.>, the minimum variance among the diagonal elements of the latent log-likelihood's Hessian corresponds to a leaf node.
The loss term ℒ_order is minimal if, and only if, the node at each position i is a leaf of the subgraph over {𝐳_i, …, 𝐳_d}.
We show this by contradiction. Without loss of generality, consider a latent order τ such that τ_i ≠ i for some i; then
H_var^0(𝐳) ≥ϵ > 0 ⇒ℒ_order > 0.
Hence ℒ_order→ 0 only if τ_i = i for all i, where τ corresponds to the true causal order.
It is important to note that as the representations are learned end-to-end, enforcing this loss would organise the latent order to follow the sorted true causal ordering.
Hessian Estimation. To compute H_var^i(𝐳), we approximate the score's Jacobian (Hessian) with Stein kernel estimators <cit.>, as described in <cit.>:
𝐉^Stein = -diag(𝐆^Stein(𝐆^Stein)^T) + (𝐊 + η𝐈)^-1⟨∇^2_diag, 𝐊⟩
where 𝐆^Stein = -(𝐊 + η𝐈)^-1⟨∇, 𝐊⟩ is the Stein gradient estimator <cit.>, 𝐊 is the kernel matrix with median-heuristic bandwidth, 𝐈 is the identity matrix, and ⟨ a, b ⟩ corresponds to applying operation a on b element-wise.
The final algorithm for computing ℒ_order is described in Alg. <ref>.
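As a concrete illustration of the estimation step, the sketch below evaluates the ordering loss of Proposition <ref> from per-sample estimates of the diagonal of the log-density Hessian (which in our pipeline would come from the Stein estimator above or from automatic differentiation). For readability, the variance over the remaining coordinates {𝐳_i,…,𝐳_d} is approximated by simply restricting to those columns of the full-joint estimate; recomputing the estimate on the reduced set, as in Alg. <ref>, is the more faithful variant.

```python
import numpy as np

def causal_ordering_loss(hess_diag, eps=1e-12):
    """Ordering loss: hess_diag has shape (n, d), columns follow the current latent order.

    For each position i, a softmin over the Hessian-diagonal variances of the remaining
    coordinates is formed; a small variance marks a leaf of the (remaining) latent ANM,
    and the loss rewards the coordinate at position i for being that leaf.
    """
    n, d = hess_diag.shape
    loss = 0.0
    for i in range(d - 1):
        h_var = hess_diag[:, i:].var(axis=0)       # variance over samples, coords i..d-1
        inv = 1.0 / (h_var + eps)
        loss += -np.log(inv[0] / inv.sum())
    return loss

# toy check: variances increasing with position -> small loss
rng = np.random.default_rng(0)
fake_hess = rng.normal(size=(1000, 4)) * np.array([0.1, 0.3, 0.6, 1.0])
print(causal_ordering_loss(fake_hess))
```

In training, the same computation is written with differentiable tensor operations so that the gradient of ℒ_order can flow back into the encoder.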
§.§ Variational Inference
We are now interested in modelling a latent space with an arbitrarily complex distribution based on an ANM using the deep variational framework. That is, we want to learn a posterior distribution that can approximate the ANM prior ℙ(𝐳) given a sample from the observational distribution. A multivariate diagonal Gaussian prior cannot model these distributions. Therefore, we consider a prior following a GMM, in line with established literature <cit.>, which is proven to be identifiable and to have universal approximation capabilities <cit.>.
In particular, we utilise the framework from MFC-VAE <cit.>.
We consider the generative model ℙ(𝐱,𝐳,𝐜) = ℙ(𝐱|𝐳)ℙ(𝐳|𝐜)ℙ(𝐜). MFC-VAE chooses a posterior ℚ(𝐳,𝐜|𝐱) = ℚ(𝐳|𝐱)ℚ(𝐜|𝐱), where ℚ(𝐳|𝐱) is a multivariate Gaussian with diagonal covariance and ℚ(𝐜|𝐱) a categorical distribution over GMM components.
Similar to MFC-VAE <cit.>, we consider the inference model described above, where the mixture components are inferred via the prior (as ℚ(𝐜|𝐱) ∝exp (𝔼_ℚ(𝐳|𝐱)logℙ(𝐜|𝐳))).
In this case, the posterior ℚ(𝐳,𝐜|𝐱) is a GMM and can approximate the prior ℙ(𝐳) following an ANM. The ELBO-based loss for this model is given in Eqn. <ref>, where the expectation 𝔼 is taken over ℚ(𝐳|𝐱).
ℒ_ELBO = - 𝔼[logℙ(𝐱|𝐳)] + KL(ℚ(𝐜|𝐱) ||ℙ(𝐜)) + KL(ℚ(𝐳|𝐱) ||ℙ(𝐳|𝐜))
(Training Objective)
Based on Proposition <ref> and Lemmas <ref> and <ref>, models trained with the following objective: ℒ_total = ℒ_ELBO + αℒ_order, where α > 0 is a weighting hyperparameter,
will converge to the true latents up to ∼_D equivalence.
§.§ Neural Network Constraints
Injective Decoder. It is common to assume an injective decoder for proving the identifiability of a data generation process <cit.>. When implementing a deep generative model in practice, some constraints in the decoder are necessary to ensure that neural networks are modelling injective functions. We follow similar modelling assumptions of ICE-BeeM <cit.>:
* Monotonicity: The latent dimension of the decoder is monotonically increasing, i.e., d_l+1≥ d_l ∀ l ∈{0, …, L-1 }, where d_l corresponds to the feature dimension at layer l and L is the total number of layers in the decoder.
* Activation: The activation function after every layer corresponds to LeakyReLU (max(0, x) + αmin(0, x), α∈ (0, 1)).
* Full rank: All weight matrices 𝐟_l have full row rank; this is possible since the number of columns is greater than or equal to the number of rows (d_l+1≥ d_l).
* Invertible sub-matrix: All weight sub-matrices 𝐟'_l of size d_l × d_l are invertible.
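A minimal PyTorch sketch of a decoder respecting constraints (i)–(ii) above — monotonically non-decreasing widths and LeakyReLU activations. The full-rank and invertible-sub-matrix conditions (iii)–(iv) are not explicitly enforced here; they hold generically at random initialisation, and the widths and slope below are arbitrary choices.

```python
import torch
import torch.nn as nn

class InjectiveDecoder(nn.Module):
    """Decoder whose layer widths never decrease and whose activations are strictly
    monotone (LeakyReLU), so each layer is generically an injective map."""
    def __init__(self, widths=(3, 8, 16, 32), negative_slope=0.2):
        super().__init__()
        assert all(widths[l + 1] >= widths[l] for l in range(len(widths) - 1))
        layers = []
        for l in range(len(widths) - 1):
            layers.append(nn.Linear(widths[l], widths[l + 1]))
            layers.append(nn.LeakyReLU(negative_slope))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

decoder = InjectiveDecoder()
print(decoder(torch.randn(5, 3)).shape)   # torch.Size([5, 32])
```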
Discussion: Proposition <ref> shows that, given sufficient data and compute, under the non-linear ANM assumption, the latent representations are organised with respect to the evidential ordering. Additionally, given the organised latent representations, the causal relationships among them can be estimated using conditional independencies, similar to <cit.>. We later discuss how latent causal discovery can be achieved.
As previously discussed in equation <ref>, it is important to note that we consider all latent features in 𝐳 to be direct parents of 𝐱; thus, any indirect cause y → (𝐳_i ∈𝐳) →𝐱 cannot be recovered by our approach.
§ EXPERIMENTS
Here, we demonstrate the effectiveness of latent ANM models with topological constraints on both tabular (including a synthetic data generating process) and image (MorphoMNIST and Causal3DIdent) datasets. We compare the proposed model against two baseline methods β-VAE and MFC-VAE with a single facet on mean correlation coefficient (MCC) and causal ordering divergence (COD).
§.§ Metrics
We compute different variants of MCC: (i) across multiple random seeds (MCC-R): measures the stability of the training process given the model; (ii) with respect to ground-truth variables (MCC-G): measures the faithfulness of the estimated latent variables to the true latent variables <cit.>; and (iii) subset MCC (MCC-SG): in the case when not all parents of 𝐱 are observed, we measure faithfulness by considering a subset of the latent variables. All three variants are formally described in definition <ref>.
As these MCC measures are permutation invariant by nature, to capture the perceived order among latent variables, we also calculate COD, which measures the divergence of the topological order in an estimated causal graph from the causal order, formally defined in equation <ref>.
In addition, to quantify the injectivity of the model we compute MIC and RRO, defined in <ref>.
(Mean Correlation Coefficient)
We compute the mean correlation coefficient with respect to ground truth (MCC-G) as described in <cit.>. MCC-SG and MCC-R are based on MCC-G and are defined as:
MCC-SG(𝐳̂, 𝐳) = max{MCC-G(𝐳̂[S_j], 𝐳), ∀ j ∈{1,…, |𝒮|}}
MCC-R({𝐳̂_0, …, 𝐳̂_K}) = 1/K-1∑_k MCC-G(𝐳̂_k, 𝐳̂_0),
where 𝐳̂_k = 𝐟_k^-1(𝐱), 𝒮 is the set of all index subsets of 𝐳̂ of size |𝐳|, 𝐳 corresponds to the ground-truth latent features, and K is the total number of experimental runs.
(Causal Order Divergence, COD) Similar to divergence metric in <cit.>, we define COD as:
COD(τ, A) = ∑_i=0^d ∑_j>i^d A_ij
where τ={0, …, d} is the expected causal order and A is the adjacency matrix of a causal graph estimated from the learned latent space after training.
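COD is straightforward to evaluate once the latent dimensions are arranged in the expected order; the sketch below implements the formula as stated, summing the strictly upper-triangular entries of the estimated adjacency matrix (whether those entries count order-violating edges depends on the direction convention of the graph estimator).

```python
import numpy as np

def causal_order_divergence(A):
    """COD(tau, A): sum of A[i, j] over j > i, with the latent dimensions already
    permuted into the expected causal order tau."""
    return np.triu(np.asarray(A), k=1).sum()

A_hat = np.array([[0, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]])
print(causal_order_divergence(A_hat))   # 1
```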
(Mean Injectivity Coefficient, MIC)
Based on the network constraints described in section <ref>, we compute the MIC to measure the injectivity of the model. MIC is formally described as:
MIC(𝐟) = min_i {1/|𝒞|∑_j Rank(𝐟_i(𝒞_j)^T)/r_i}, i ∈{0, …, L}
where c_i, r_i correspond to the number of columns and rows of 𝐟_i; with a slight abuse of notation, 𝒞 denotes the set of all subsets of column indices of 𝐟_i of size r_i, and |S| is the cardinality of a set S.
We measure the average row-rank ratio RRO = 1/L∑_l Rank(𝐟_l)/d_l and MIC (cf. definition <ref>) to quantify the injectivity of the decoder.
§.§ Data Generation
Simulation Data: To generate the synthetic dataset we first randomly generate a latent causal DAG with n nodes and e edges using <cit.>. We randomly select the structural causal models f_i such that each mapping from 𝐩𝐚(𝐳_i) to 𝐳_i is injective. Finally, we select an injective random transformation function 𝐟_o mapping the latent space to the observational data 𝐱. In our experiments we generate 2,000 datapoints from the Syn-2, Syn-15, and Syn-50 processes, where Syn-k corresponds to the above data-generating process with latent variable 𝐳∈ℝ^k and observational data 𝐱∈ℝ^2k.
Image Datasets: We further evaluate our method on imaging datasets, namely MorphoMNIST <cit.> variants and Causal3DIdent <cit.>. For MorphoMNIST, we use the MorphoMNIST-IT, MorphoMNIST-TI, MorphoMNIST-TS, and MorphoMNIST-TSWI variants, where I, T, S, and W
correspond to latent variables with the semantics of intensity, thickness, slant, and width, respectively. We detail all the data-generating processes in the Appendix. All the MorphoMNIST variants have 60,000 training images and 10,000 testing images. Similarly, Causal3DIdent includes 252,000 training samples and 25,200 test samples that were generated using a fixed causal graph with 10 nodes (more details about this dataset can be found in <cit.>, Appendix B).
§.§ Results
Table: MCC and COD results on synthetic datasets with 2, 15, and 50 nodes in the latent space, along with the imaging datasets MorphoMNIST-IT and MorphoMNIST-TSWI.

Dataset            Method    COD (↓)      MCC-R (↑)  MCC-G (↑)
Syn-2              VAE       0.13 ± 0.08  0.11       0.26 ± 0.03
                   MFC-VAE   0.17 ± 0.09  0.14       0.35 ± 0.06
                   coVAE     0.00 ± 0.01  0.62       0.52 ± 0.07
Syn-15             VAE       1.68 ± 0.22  0.21       0.22 ± 0.02
                   MFC-VAE   1.43 ± 0.24  0.26       0.26 ± 0.03
                   coVAE     0.03 ± 0.01  0.42       0.34 ± 0.03
Syn-50             VAE       5.53 ± 0.81  0.23       0.28 ± 0.24
                   MFC-VAE   5.17 ± 0.62  0.31       0.26 ± 0.01
                   coVAE     0.78 ± 0.46  0.39       0.34 ± 0.02

Dataset            Method    COD (↓)      MCC-R (↑)  MCC-SG (↑)
MorphoMNIST-IT     VAE       1.61 ± 0.44  0.29       0.23 ± 0.11
                   MFC-VAE   1.04 ± 0.46  0.36       0.34 ± 0.09
                   coVAE     0.00         0.59       0.47 ± 0.08
MorphoMNIST-TSWI   VAE       0.81 ± 0.26  0.47       0.21 ± 0.00
                   MFC-VAE   1.35 ± 0.24  0.52       0.28 ± 0.04
                   coVAE     0.00         0.61       0.31 ± 0.04
In each of our experiments, we adopt a model adhering to the properties delineated in Section <ref>. The MIC and RRO measures suggest that the injectivity of the decoder is predominantly influenced by the choice of architecture and the dataset under consideration.
For instance, the MIC for the Syn-2, Syn-15, and Syn-50 datasets is 1.0, 0.68, and 1.0, respectively, while the corresponding RRO values are 0.88, 0.93, and 0.95. To gauge effectiveness in terms of stability and faithfulness, Table <ref> reports the MCC-R and MCC-G metrics for the synthetic and image datasets. Here, we employed five random seeds to compute MCC-R and report the mean and standard deviation across these five runs for COD and MCC-G.
These results illustrate that, given additive noise models in the latent space, the proposed loss enforces the evidential structure (COD goes to 0) and achieves stronger identifiability, as can be inferred from the MCC-R and MCC-G values.
Similarly, in the case of the imaging datasets, for both MorphoMNIST-IT and MorphoMNIST-TSWI we observed an MIC of 1.0 and an RRO of 0.85, and the resulting MCC-SG (as previously described, in the case of image datasets not all the parents are observed) and COD measures are reported in Table <ref>.
In all our experiments, we observed that topological ordering with respect to the evidential graph is better enforced in coVAE and even in terms of stability and faithfulness of the latent representations, coVAE outperforms VAE and MFC-VAE.
Additional experiments on other variants of the MorphoMNIST dataset and Causal3DIdent are detailed in the Appendix.
§ CONCLUSION
In this work, we propose the first fully unsupervised causal representation learning method for data
adhering to ANM by imposing a topological ordering on the latent space that corresponds to the underlying causal graph.
We present a number of results pertaining to the identifiability of the latent representations, demonstrating these outcomes both theoretically and empirically. Evaluations on synthetic and image datasets corroborate the efficacy of the proposed estimation method, which in practice exhibits superior identifiability.
Possible future works would be to investigate sample efficiency and robustness of the models trained with the proposed estimation method. Additionally, extending the proposed approach from ANM to post-ANM and simplifying modelling assumptions would be of particular interest.
Although modelling assumptions are standard and widely used in practice, formulating a model and estimation methods without these assumptions would be ideal.
http://arxiv.org/abs/2307.04111v1 | 20230709070831 | Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication | ["José Miguel Mateos-Ramos", "Christian Häger", "Musa Furkan Keskin", "Luc Le Magoarou", "Henk Wymeersch"] | eess.SP | ["eess.SP"] |
Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication
José Miguel Mateos-Ramos, Student Member, IEEE,
Christian Häger, Member, IEEE,
Musa Furkan Keskin, Member, IEEE,
Luc Le Magoarou, Member, IEEE,
Henk Wymeersch, Senior Member, IEEE
This work was supported, in part, by a grant from the Chalmers AI Research Center Consortium (CHAIR), by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), the Swedish Foundation for Strategic Research (SSF) (grant FUS21-0004, SAICOM), Hexa-X-II, part of the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101095759., and Swedish Research Council (VR grant 2022-03007). The work of C. Häger was also supported by the Swedish Research Council under grant no. 2020-04718.
José Miguel Mateos-Ramos, Christian Häger, Musa Furkan Keskin and Henk Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Sweden (email: [email protected]; [email protected]; [email protected]; [email protected]).
Luc Le Magoarou is with INSA Rennes, CNRS, IETR - UMR 6164, F-35000, Rennes, France (email: [email protected]).
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
We study model-based end-to-end learning in the context of integrated sensing and communication (ISAC) under hardware impairments.
A monostatic orthogonal frequency-division multiplexing (OFDM) sensing and multiple-input single-output (MISO) communication scenario is considered, incorporating hardware imperfections at the ISAC transceiver antenna array.
To enable end-to-end learning of the ISAC transmitter and sensing receiver, we propose a novel differentiable version of the orthogonal matching pursuit (OMP) algorithm that is suitable for multi-target sensing.
Based on the differentiable OMP, we devise two model-based parameterization strategies to account for hardware impairments: (i) learning a dictionary of steering vectors for different angles, and (ii) learning the parameterized hardware impairments.
For the single-target case, we carry out a comprehensive performance analysis of the proposed model-based learning approaches, a neural-network-based learning approach and a strong baseline consisting of least-squares beamforming, conventional OMP, and maximum-likelihood symbol detection for communication.
Results show that learning the parameterized hardware impairments offers higher detection probability, better angle and range estimation accuracy, lower communication symbol error rate (SER), and exhibits the lowest complexity among all learning methods.
Lastly, we demonstrate that learning the parameterized hardware impairments is scalable also to multiple targets, revealing significant improvements in terms of ISAC performance over the baseline.
Hardware impairments, integrated sensing and communication (ISAC), joint communication and sensing (JCAS), machine learning, model-based learning, orthogonal matching pursuit (OMP).
§ INTRODUCTION
Next-generation wireless communication systems are expected to operate at higher carrier frequencies to meet the data rate requirements necessary for emerging use cases such as smart cities, e-health, and digital twins for manufacturing <cit.>. Higher carrier frequencies also enable new functionalities, such as ISAC. ISAC aims to integrate radar and communication capabilities in one joint system, which enables hardware sharing, energy savings, communication in high-frequency radar bands, and improved channel estimation via sensing-assisted communications, among other advantages <cit.>.
ISAC has been mainly considered by means of dual-functional waveforms. For instance, radar signals have been used for communication <cit.>, while communication waveforms have proven to yield radar-like capabilities <cit.>. Furthermore, optimization of waveforms to perform both tasks simultaneously has also been studied <cit.>, where the results depend on the cost function to optimize and the ISAC optimization variables. However, conventional ISAC approaches degrade in performance under model mismatch, i.e., if the underlying reality does not match the assumed mathematical models. In particular at high carrier frequencies, hardware impairments can severely affect the system performance and hardware design becomes very challenging <cit.>. This increases the likelihood of model mismatch in standard approaches, and problems become increasingly difficult to solve analytically if hardware impairments are considered.
Deep learning (DL) approaches based on large neural networks (NNs) have proven to be useful under model mismatch or in complex optimization problems <cit.>. DL does not require any knowledge about the underlying models, as it is optimized based on training data, which inherently captures the potential impairments of the system.
DL has been investigated in the context of ISAC for a vast range of applications, such as predictive beamforming in vehicular networks <cit.>, waveform design <cit.> and channel estimation <cit.> in IRS-assisted ISAC scenarios, multi-target sensing and communication in THz transmissions <cit.>, or efficient resource management <cit.>.
However, most previous works on DL for ISAC consider single-component optimization, either at the transmitter or receiver. On the other hand, end-to-end learning <cit.> of both the transmitter and receiver has proven to enhance the final performance of radar <cit.> and communication <cit.> systems. End-to-end learning in ISAC was applied by means of an autoencoder (AE) architecture in <cit.> to perform single-target angle estimation and communication symbol estimation under hardware impairments. This was recently extended to multiple targets in <cit.>, although without considering impairments, where the AE outperformed conventional ESPRIT <cit.> in terms of angle estimation for single- and dual-snapshot transmissions.
Nevertheless, DL approaches often lack interpretability and require large amounts of training data to obtain satisfactory performance.
To overcome the disadvantages of large DL models, model-based machine learning (MB-ML) <cit.> instead parameterizes existing models and algorithms while maintaining their overall computation graph as a blueprint.
This allows training initialization from an already good starting point, requiring less training data to optimize, and typically also offers a better understanding of the learned parameters.
A popular example of MB-ML learning is deep unfolding <cit.>, where iterative algorithms are “unrolled” and interpreted as multi-layer computation graphs.
In the context of sensing, deep unfolding of the fixed-point continuation algorithm with one-sided l_1-norm was applied to angle estimation of multiple targets <cit.>, showing enhanced accuracy with respect to DL and model-based benchmark approaches. In <cit.>, the ISTA was unfolded to perform angle estimation in the presence of array imperfections.
Related to communications, deep unfolding has been applied to massive MIMO channel estimation in <cit.>, where classical steering vector models are used as a starting point and then optimized to learn the system hardware impairments, by unfolding the matching pursuit algorithm <cit.>. This approach was later refined to reduce the required number of learnable parameters in <cit.>.
Previous MB-ML approaches <cit.> exhibit three primary shortcomings that can limit their effectiveness in practical scenarios. Firstly, they focus only on receiver learning; however, end-to-end learning of transmitter and receiver, which holds great potential given its promising performance in model-free DL applications <cit.>, remains unexplored in MB-ML. Secondly, sensing works <cit.> only investigate angle estimation, although range estimation is also required to estimate target locations. Hence, end-to-end MB-ML for multi-target positioning has not been studied before. Finally, while MB-ML has been utilized to address individual challenges related to sensing and communications, its untapped potential to significantly improve system performance in ISAC applications remains undiscovered.
In view of the current literature on DL and MB-ML for ISAC, three questions arise: (i) How can efficient end-to-end MB-ML strategies be developed for multi-target positioning? (ii) What computational and performance benefits can be harnessed by employing MB-ML in ISAC systems compared to large DL models and model-based approaches? (iii) To what extent can ISAC trade-offs be improved under hardware impairments by employing MB-ML strategies compared to large DL models and model-based approaches?
This paper aims to answer the above questions by studying end-to-end MB-ML for ISAC, focusing on the effect of hardware impairments in the uniform linear array (ULA) of the ISAC transceiver.
Considering a MIMO monostatic sensing and MISO communication scenario (as depicted in Fig. <ref>), we propose novel end-to-end MB-ML strategies for joint optimization of the ISAC transmitter and sensing receiver, suitable for both single- and multi-target scenarios.
Building upon our preliminary analysis in <cit.>, the main contributions of this work can be summarized as follows:
* Multi-target position estimation via end-to-end learning of OFDM ISAC systems:
For the first time in the literature, we investigate end-to-end learning of OFDM ISAC systems under hardware impairments at the ISAC ULA. To combat these hardware imperfections, we introduce novel learning architectures to simultaneously optimize the ISAC beamformer and sensing receiver. OFDM transmission enables joint angle and range (and, hence, position) estimation of multiple targets, significantly extending the single-carrier models and methods in our previous work <cit.>, and the recent works <cit.>.
* MB-ML via differentiable OMP:
Expanding upon the foundation laid by <cit.>, we propose a differentiable version of the OMP algorithm that is suitable for single- and multi-target sensing.
This new algorithm allows for end-to-end gradient-based optimization, where we consider two different MB-ML parameterization approaches.
The first approach learns a dictionary of steering vectors at each OMP iteration, extending our results in <cit.> to joint range-angle estimation and multiple targets.
The second approach is new compared to <cit.> and directly learns the parameterized ULA impairments at each iteration.
This offers the advantage of drastically reducing the number of parameters to be learned.
* Single- and multi-target performance comparison and ISAC trade-off characterization:
We first consider the single-target case (corresponding to one OMP iteration) and compare different solutions based on the extent of model knowledge: (i) neural-network-based learning (NNBL)[Note that the neural-network architectures in <cit.> do not directly apply to the scenario considered here due to the use of OFDM signals.], representing no knowledge of the system model, (ii) the two MB-ML approaches, where model knowledge is utilized, but impairments are learned, and (iii) a strong baseline, which fully relies on the mathematical description of the system model under no hardware impairments.
Our results show that under hardware impairments, the new MB-ML ULA impairment learning outperforms all other approaches in terms of target detection and range-angle estimation, with fewer trainable parameters.
Lastly, we show that impairment learning scales smoothly also to multiple targets, where it achieves better sensing and communication performance than the baseline.
In the rest of this paper, we first describe the mathematical ISAC system model in Sec. <ref>. Then, we describe the two approaches to perform target positioning and communication:
the baseline in Sec. <ref>, and MB-ML in Sec. <ref>. The main ISAC results are presented and discussed in Sec. <ref> before the concluding remarks of Sec. <ref>.
Notation. We denote column vectors as bold-faced lower-case letters, a, and matrices as bold-faced upper-case letters, A. A column vector whose entries are all equal to 1 is denoted as 1. The identity matrix of size N× N is denoted as I_N. The transpose and conjugate transpose operations are denoted by (·)^⊤ and (·)^H, respectively. The i-th element of a vector and the (i,j)-th element of a matrix are denoted by [a]_i and [A]_i,j. The element-wise product between two matrices is denoted by ⊙, while ⊘ denotes element-wise division, and ⊗ denotes the Kronecker product. vec(·) denotes the matrix vectorization operator. Sets of elements are enclosed by curly brackets and intervals are enclosed by square brackets. The set {x∈ℝ | x≥0} is denoted as ℝ_≥0. The cardinality of a set 𝒳 is denoted by |𝒳|. The uniform distribution is denoted by 𝒰, and 𝒞𝒩 denotes the circularly-symmetric complex Gaussian distribution. The Euclidean vector norm is represented by ‖·‖_2, while the matrix Frobenius norm is denoted by ‖·‖_F. The indicator function is denoted by 𝕀{·}.
§ SYSTEM MODEL
This section provides the mathematical models for the received sensing and communication signals, the ISAC transmitted signal and the hardware impairments. In Fig. <ref>, a block diagram of the considered ISAC system is depicted.
§.§ Multi-target MIMO Sensing
We consider an ISAC transceiver consisting of an ISAC transmitter and a sensing receiver sharing the same ULA of K antennas, as shown in Fig. <ref>.
The transmitted signal consists of an OFDM waveform across S subcarriers, with an inter-carrier spacing of Δ_f Hz. In the sensing channel, we consider at most T_max possible targets. Then, the backscattered signal impinging onto the sensing receiver can be expressed over antenna elements and subcarriers as <cit.>
𝐘 = ∑_t=1^T1/√(S)ψ_t 𝐚(θ_t) 𝐚^⊤(θ_t) 𝐟 [𝐱(𝐦) ⊙𝛒(τ_t)]^⊤ + 𝐖,
where 𝐘∈ℂ^K× S collects the observations in the spatial-frequency domains, T ∼𝒰{0,...,T_max} is the instantaneous number of targets in the scene,
and ψ_t ∼𝒞𝒩(0,σ_r^2) represents the complex channel gain of the t-th target. The steering vector of the ISAC transceiver ULA for an angular direction θ is, under no hardware impairments, [𝐚(θ)]_k= exp(-j 2 π (k-(K-1)/2) d sin (θ ) / λ), k=0,...,K-1, with d = λ / 2, λ = c/f_c, c the speed of light in vacuum, and f_c the carrier frequency[In case of different ULAs for transmitting and receiving, different steering vector models should be used in (<ref>).]. The precoder 𝐟∈ℂ^K permits steering the antenna energy into a particular direction. Target ranges are conveyed by 𝛒(τ_t) ∈ℂ^S, with [𝛒(τ_t)]_s = exp(-j2π s Δ_f τ_t), s=0,...,S-1, where τ_t = 2R_t/c represents the round-trip time of the t-th target at R_t meters away from the transmitter. Moreover, the communication symbol vector 𝐱(𝐦) ∈ℂ^S conveys a vector of messages 𝐦∈ℳ^S, each uniformly distributed over a set of possible messages ℳ. Finally, the receiver noise is represented by 𝐖, with [𝐖]_i,j∼𝒞𝒩(0,N_0). Note that if T=0, only noise is received. From the complex channel gain and the noise, we define the integrated sensing SNR across antenna elements as SNR_r = K^2σ_r^2/N_0.
The angles and ranges of the targets are uniformly distributed within an uncertainty region, i.e., θ_t ∼𝒰[θ_min, θ_max] and R_t ∼𝒰[R_min, R_max]. However, uncertainty regions might change at each new transmission. The position of each target is computed from target angle θ_t and range R_t as
𝐩_t = [ R_tcos(θ_t); R_tsin(θ_t) ].
The transmitter and the sensing receiver are assumed to have knowledge of {θ_min, θ_max, R_min, R_max}. In the considered monostatic sensing setup, the receiver has access to the communication data 𝐱(𝐦), which enables removing its impact on the received signal (<ref>) via reciprocal filtering <cit.>
𝐘_r = 𝐘⊘𝐱^⊤(𝐦) = ∑_t=1^Tα_t 𝐚(θ_t) 𝛒^⊤(τ_t) + 𝐍,
where α_t=1/√(S)𝐚^⊤(θ_t) 𝐟ψ_t and 𝐍 = 𝐖⊘𝐱^⊤(𝐦).
The goal of the sensing receiver is to estimate the presence probability of each target in the scene, denoted as û∈ [0,1]^T_max, which is later thresholded to provide a hard estimate of the target presence, t̂∈{0,1}^T_max. For all detected targets, the sensing receiver estimates their angles, θ̂∈ [-π/2, π/2]^T_max, and their ranges, R̂∈ℝ_≥ 0^T_max, from which target positions can be estimated according to (<ref>).
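For concreteness, the sketch below simulates the filtered observation in (<ref>) for ideal (impairment-free) half-wavelength steering vectors, a uniform placeholder precoder, and arbitrary target parameters; the parameter choices are ours and only illustrate the signal structure.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 3e8
fc, K, S, df = 60e9, 64, 256, 120e3
lam = c / fc

def steer(theta, K=K, d=lam / 2):
    """Nominal ULA steering vector a(theta)."""
    k = np.arange(K) - (K - 1) / 2
    return np.exp(-1j * 2 * np.pi * k * d * np.sin(theta) / lam)

def rho(tau, S=S, df=df):
    """Frequency-domain delay signature rho(tau)."""
    return np.exp(-1j * 2 * np.pi * np.arange(S) * df * tau)

def received_after_filtering(thetas, ranges, f, N0=1e-2):
    """Y_r = sum_t alpha_t a(theta_t) rho(tau_t)^T + N  (K x S)."""
    Y = np.zeros((K, S), dtype=complex)
    for th, R in zip(thetas, ranges):
        psi = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # unit-variance channel gain
        alpha = psi * (steer(th) @ f) / np.sqrt(S)
        Y += alpha * np.outer(steer(th), rho(2 * R / c))
    N = (rng.normal(size=(K, S)) + 1j * rng.normal(size=(K, S))) * np.sqrt(N0 / 2)
    return Y + N

f = np.ones(K) / np.sqrt(K)                       # placeholder precoder
Yr = received_after_filtering(np.deg2rad([-30, 10]), [40.0, 120.0], f)
print(Yr.shape)                                    # (64, 256)
```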
§.§ MISO Communication
In the considered ISAC scenario, communication and sensing share the same transmitter. We assume that the communication receiver is equipped with a single antenna element. In this setting, the received OFDM signal at the communication receiver in the frequency domain is given by <cit.>
𝐲 = [𝐱(𝐦)⊙𝐡] 𝐚^⊤(φ) 𝐟 + 𝐧,
with 𝐡∈ℂ^S denoting the S-point DFT of the channel taps [β_0, β_1, ..., β_L-1,0,...,0], where each tap is distributed as β_l ∼𝒞𝒩(0,σ_l^2). Complex Gaussian noise 𝐧∼𝒞𝒩(0,N_0𝐈_S) is added at the receiver side. The average communication SNR per subcarrier is defined as SNR_c = ∑_l=1^Lσ_l^2/(SN_0).
The communication receiver is assumed to be always present at a random position, such that φ∼𝒰[φ_min, φ_max]. The transmitter also has knowledge of {φ_min, φ_max}. The receiver is fed with the CSI 𝐠 = 𝐡 𝐚^⊤(φ) 𝐟.
The goal of the receiver is to retrieve the communication messages that were transmitted.
§.§ ISAC Transmitter
ISAC scenarios require the use of a radar-communication beamformer to provide adjustable trade-offs between the two functionalities. Using the multi-beam approach from <cit.>, we design the ISAC
beamformer, based on a sensing precoder 𝐟_r ∈ℂ^K and a communication precoder 𝐟_c∈ℂ^K, as
𝐟(η,ϕ) = √(P)√(η)𝐟_r + √(1-η)e^jϕ𝐟_c/‖√(η)𝐟_r + √(1-η)e^jϕ𝐟_c ‖ ,
where P is the transmitted power, η∈ [0,1] is the ISAC trade-off parameter, and ϕ∈ [0,2 π) is a phase ensuring coherency between multiple beams.
By sweeping over η and ϕ, we can explore the ISAC trade-offs of the considered system. The sensing precoder 𝐟_r points to the angular sector of the targets, {θ_min, θ_max}, whereas the communication precoder 𝐟_c points to the angular sector of the communication receiver, {φ_min, φ_max}. In Secs. <ref> and <ref>, we detail how 𝐟_r and 𝐟_c are computed for the baseline and MB-ML, respectively. However, the same precoding function is applied for sensing and communication, as represented in Fig. <ref>.
§.§ Hardware Impairments
We study the effect of hardware impairments in the ULA in the ISAC transceiver, which affect the steering vectors of (<ref>), (<ref>), (<ref>). Impairments in the antenna array include mutual coupling, array gain errors, or antenna displacement errors, among others <cit.>. Following the impairment models of <cit.>, we consider two types of impairments:
* Unstructured impairments: In this case, the true steering vector is unknown for all angles θ, while the methods for beamforming design and signal processing assume the nominal steering vector 𝐚(θ). If we consider a grid of possible angles with N_θ points, then the steering vectors require K× N_θ complex values to be described.
* Structured impairments: In this case, the steering vector model is known, conditional on an unknown perturbation vector 𝐝. We can thus write 𝐚(θ;𝐝), where the meaning and dimensionality of 𝐝 depend on the type of impairment. In contrast to the unstructured impairments, the impairments are often described with a low-dimensional vector, independent of N_θ.
[Impact of structured impairments]
Consider the example of inter-antenna spacing errors, where 𝐝∈ℂ^K and [𝐚(θ; 𝐝)]_k = exp(-j 2 π (k-(K-1)/2) [𝐝]_k sin (θ ) / λ), k=0,...,K-1.
In Fig. <ref>, the angle-delay map (defined in Sec. <ref>) is depicted under ideal conditions (top) and hardware impairments (bottom), when T = 4 targets are present. The main effect of hardware impairments is to expand target lobes in the angle domain. In the example shown in Fig. <ref>, two targets become indistinguishable due to impairments, and the appearance of spurious lobes hinders the detection of the target at the highest range. Another effect of hardware impairments is that the magnitude of the target lobes is decreased, which makes them harder to differentiate from noise. These results highlight the relevance of addressing hardware impairments in our sensing scenario.
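The structured impairment of this example is easy to reproduce numerically: each element spacing is perturbed around λ/2 and plugged into the steering vector model above. The perturbation level below matches the value used later in the simulations; everything else is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
c, fc, K = 3e8, 60e9, 64
lam = c / fc
d_true = rng.normal(loc=lam / 2, scale=lam / 25, size=K)   # perturbed element spacings

def steer_impaired(theta, d):
    """[a(theta; d)]_k = exp(-j 2*pi*(k-(K-1)/2) * d_k * sin(theta) / lambda)."""
    k = np.arange(len(d)) - (len(d) - 1) / 2
    return np.exp(-1j * 2 * np.pi * k * d * np.sin(theta) / lam)

a_nom = steer_impaired(np.deg2rad(20), np.full(K, lam / 2))
a_imp = steer_impaired(np.deg2rad(20), d_true)
print(np.abs(a_nom.conj() @ a_imp) / K)   # correlation loss caused by the impairment
```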
§ BASELINE
In this section, we derive the baseline method according to model-based benchmarks, which will later be compared with end-to-end learning approaches in Sec. <ref>.
§.§ ISAC Beamformer
We design the baseline for the precoding mapping in Fig. <ref>, which affects both the sensing precoder 𝐟_r and the communication precoder 𝐟_c in (<ref>), by resorting to the beampattern synthesis approach in <cit.>.
We define a uniform angular grid covering [-π/2, π/2] with grid locations {θ_i}_i=1^N_θ. For a given angular interval θ_interval (i.e., θ_interval = [φ_min, φ_max] for communications, and θ_interval = [θ_min, θ_max] for sensing),
we denote by 𝐛∈ℝ^N_θ× 1 the desired beampattern over the defined angular grid, given by
[𝐛]_i =
K, if θ_i ∈θ_interval
0, otherwise.
The problem of beampattern synthesis can then be formulated as
min_𝐟_bs‖𝐛 - 𝐀^⊤𝐟_bs‖_2^2, where 𝐀 = [𝐚(θ_1) … 𝐚(θ_N_θ)] ∈ℂ^K× N_θ denotes the transmit steering matrix evaluated at the grid locations. This least-squares (LS) problem
has a simple closed-form solution
𝐟_bs = (𝐀^* 𝐀^⊤)^-1𝐀^* 𝐛,
which yields, after normalization according to the transmit power constraints, a communication-optimal beam 𝐟_c or a radar-optimal beam 𝐟_r, which can then be used to compute the joint ISAC beam in (<ref>).
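A sketch of this LS beampattern synthesis: build the nominal steering dictionary over the angular grid, form the flat-top desired beampattern, and solve the normal equations. The unit-norm normalisation stands in for the transmit power constraint, and the usage below assumes the steering-vector helper from the sketch in Sec. II.

```python
import numpy as np

def ls_beamformer(theta_grid, theta_lo, theta_hi, steer, K=64):
    """Solve min_f || b - A^T f ||_2^2 for a flat-top desired beampattern b."""
    A = np.stack([steer(th) for th in theta_grid], axis=1)        # K x N_theta
    b = np.where((theta_grid >= theta_lo) & (theta_grid <= theta_hi), float(K), 0.0)
    f = np.linalg.solve(A.conj() @ A.T, A.conj() @ b)             # (A* A^T)^-1 A* b
    return f / np.linalg.norm(f)                                   # power-constraint placeholder

# usage (steer from the earlier sketch; 720 grid points as in the simulations):
# grid = np.linspace(-np.pi / 2, np.pi / 2, 720)
# f_r = ls_beamformer(grid, np.deg2rad(-40), np.deg2rad(-20), steer)
```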
§.§ Multi-target Sensing Receiver
We propose to formulate the multi-target sensing problem based on the received signal in (<ref>) as a sparse signal recovery problem <cit.> and employ the OMP algorithm <cit.> to solve it, which represents our model-based benchmark.
To construct an overcomplete dictionary for OMP, we specify an angular grid {θ_i}_i=1^N_θ and a delay grid {τ_j}_j=1^N_τ depending on the region of interest for target detection (i.e., the a priori information {θ_min, θ_max, R_min, R_max}). Then, a spatial-domain and a frequency-domain dictionary covering the angular and delay grids can be constructed as
𝐀_a = [ 𝐚(θ_1) ⋯ 𝐚(θ_N_θ) ] ∈ℂ^K× N_θ ,
𝐀_d = [ 𝛒(τ_1) ⋯ 𝛒(τ_N_τ) ] ∈ℂ^S× N_τ .
Using (<ref>), the problem of multi-target sensing based on the observation in (<ref>) becomes a sparse recovery problem
𝐘_r = ∑_i=1^N_θ∑_j=1^N_τ [𝐗]_i,j [𝐀_a]_:, i ([𝐀_d]_:, j)^⊤ + 𝐍,
where 𝐗∈ℂ^N_θ× N_τ. Here, the goal is to estimate the T-sparse vector vec(𝐗) ∈ℂ^N_θ N_τ× 1 under the assumption T ≪ N_θ N_τ. The baseline OMP algorithm <cit.> to solve this problem is summarized in Algorithm <ref>, which will serve as a foundation for the proposed MB-ML approaches in Sec. <ref>.
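The baseline OMP reduces, for this separable angle-delay dictionary, to the loop sketched below: correlate the residual with all atoms, pick the strongest angle-delay pair, re-estimate the gains of the selected atoms by least squares, and update the residual. The stopping rule shown (a threshold relative to the residual norm) is a placeholder for the detection threshold discussed later.

```python
import numpy as np

def omp_angle_delay(Yr, Aa, Ad, theta_grid, tau_grid, max_iter=5, thr=0.1):
    """Greedy OMP on the separable angle-delay dictionary.

    Yr : K x S filtered observation, Aa : K x N_theta, Ad : S x N_tau.
    Returns the list of detected (theta, tau) grid points.
    """
    K, S = Yr.shape
    R = Yr.copy()
    atoms, detections = [], []
    for _ in range(max_iter):
        M = np.abs(Aa.conj().T @ R @ Ad.conj())          # angle-delay map, N_theta x N_tau
        i, j = np.unravel_index(np.argmax(M), M.shape)
        if M[i, j] < thr * np.linalg.norm(R):            # placeholder stopping rule
            break
        detections.append((theta_grid[i], tau_grid[j]))
        atoms.append(np.outer(Aa[:, i], Ad[:, j]).reshape(-1))   # vec(a rho^T)
        B = np.stack(atoms, axis=1)                      # (K*S) x I selected atoms
        gains, *_ = np.linalg.lstsq(B, Yr.reshape(-1), rcond=None)
        R = (Yr.reshape(-1) - B @ gains).reshape(K, S)   # updated residual
    return detections
```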
§.§ Communication Receiver
We assume that the communication receiver has access to the CSI 𝐠 = 𝐡 𝐚^⊤(φ) 𝐟. Hence, the received signal can be expressed as 𝐲 = 𝐠⊙𝐱(𝐦) + 𝐧. Optimal decoding in this case corresponds to subcarrier-wise maximum-likelihood estimation according to
m̂_s = argmin_m_s ∈ℳ|[𝐲]_s - [𝐠]_s x(m_s)|^2,
for s=0,...,S-1. Since communication decoding is already optimal, given the CSI, learning methods described in Sec. <ref> apply (<ref>) for communication message estimation.
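The detector above is a per-subcarrier nearest-symbol rule once the effective channel 𝐠 is known; a sketch for a generic constellation follows, with QPSK (the constellation used in the experiments) as an example.

```python
import numpy as np

def ml_detect(y, g, constellation):
    """m_hat_s = argmin_m |y_s - g_s * x(m)|^2, evaluated independently per subcarrier."""
    dist = np.abs(y[:, None] - g[:, None] * constellation[None, :]) ** 2
    return np.argmin(dist, axis=1)                       # message indices, length S

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK (our choice)
# m_hat = ml_detect(y, g, qpsk)   # y, g: received samples and CSI per subcarrier
```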
§ MODEL-BASED LEARNING
MB-ML is inspired by the baseline of Sec. <ref>, although we need to develop differentiable beamforming and estimation algorithms that permit end-to-end learning, as well as a suitable loss function for multiple targets. This section describes the two MB-ML methods developed for multi-target sensing: (i) dictionary learning, which learns a dictionary of steering vectors for different angles as in <cit.>, and is suitable for unstructured impairments, as defined in Sec. <ref>; (ii) impairment learning, which directly learns a parameterization of the hardware impairments and thus is suitable for structured impairments, also defined in Sec. <ref>. This section also defines the loss function to train them.
§.§ Beamformer
MB-ML follows the same operations (<ref>) and (<ref>) to compute the precoding vector 𝐟_r or 𝐟_c, given an angular interval θ_interval. Dictionary learning considers 𝐀∈ℂ^K× N_θ from (<ref>) as a free learnable parameter to account for unstructured impairments, comprising KN_θ complex parameters.
The newly proposed impairment learning considers instead as a free learnable parameter the vector 𝐝∈ℂ^K, which represents a parameterization of the structured hardware impairments. From 𝐝, the dictionary of steering vectors is computed as 𝐀(𝐝) = [𝐚(θ_1;𝐝) … 𝐚(θ_N_θ;𝐝)], such that 𝐀(𝐝) is used in (<ref>) instead of 𝐀. Impairment learning reduces the number of learnable parameters by taking into account the structured hardware impairments of Sec. <ref>. Indeed, it has only K complex parameters, which can be several orders of magnitude fewer than the dictionary learning approach, since the dictionary of steering vectors needs a relatively large number of columns N_θ to perform well. Note that the operation in (<ref>), which involves the learnable parameters of both MB-ML methods, is already differentiable.
§.§ Sensing Receiver
Range-angle estimation of targets is based on Algorithm <ref>.
However, the max operation in line <ref> of Algorithm <ref> is not differentiable, so no loss-function gradient could be backpropagated through it in MB-ML.
To circumvent this issue, we develop a differentiable algorithm which is represented in Fig. <ref>. The difference with the conventional OMP in Algorithm <ref> is that we replace the operations of lines <ref>-<ref> by the following steps:
* max_i,j: We still perform this nondifferentiable operation as a temporary result to obtain the final estimation. Note that the angle-delay map is computed over an angular grid {θ_i}_i=1^N_θ and a delay grid {τ_j}_j=1^N_τ. In line <ref> in Algorithm <ref>, this calculation yields the estimated angle-delay pair, which serves as a foundation for the following step of the differentiable OMP algorithm.
* Mask the angle-delay map based on the angle and range resolution: in order to consider only elements of the map that correspond to a single target, we select the elements around the maximum of the angle-delay map that are within the angle and range resolution. This operation also helps to obtain a differentiable angle-delay estimation, similar to line <ref> in Algorithm <ref>.
We create the mask based on the angle and range resolution, since it determines the minimum angle or range for which two targets are indistinguishable.
The angle and range resolutions in our case are
Δθ≈2/K , Δ R ≈c/2B = c/2SΔ_f,
with B = SΔ_f the bandwidth of the transmitted signal. The resolutions are considered in terms of the number of pixels of the angle-delay map, depending on N_θ and N_τ.
* Softmax: We apply a softmax operation to the masked matrix from the previous operation, so that the sum of its elements is equal to 1. Unlike line <ref> in Algorithm <ref>, the softmax function is differentiable, enabling end-to-end learning.
* Weighted sum: A weighted sum of the grid angles {θ_i} and delays {τ_j} is implemented, where each weight corresponds to the output of the previous softmax operation, and the weights represent an estimate of the probability that a certain angle-delay pair is the true value. From this interpolation operation, an angle-delay pair (θ̂_I, τ̂_I) is obtained, which may not be included in the angular or delay grids. From this computation, the set of angle-delay pairs is updated, as in line <ref> in Algorithm <ref>. Note that these first four steps (center column of Fig. <ref>) amount to looking in the dictionary for the atoms most correlated with the input, and then estimating the angle-delay pair as a convex combination of the corresponding angle-delays on the grid. This kind of similarity-based learning has been applied to other tasks within MIMO systems <cit.>, and is reminiscent of the attention mechanism <cit.>. A compact sketch of this soft selection is given below.
* Compute the estimated spatial-domain and frequency-domain vectors 𝐚(θ̂_I), 𝛒(τ̂_I): unlike line <ref> in Algorithm <ref>, we recompute the spatial-domain and frequency-domain vectors based on the estimated angle-delay pair of the previous step, since (θ̂_I, τ̂_I) might not be contained in the grids. The sets of selected spatial- and frequency-domain atoms are updated with the new vectors, as represented in Fig. <ref>.
After the previous steps, differentiable OMP continues as lines <ref>-<ref> in Algorithm <ref> to obtain the new residual 𝐘_r^(I+1), as depicted in Fig. <ref>. This differentiable OMP algorithm still involves searching over a grid of possible angles. As the dictionary of angles 𝐀_a, we utilize the same matrices 𝐀 and 𝐀(𝐝) from the beamformer of Sec. <ref> to compute the angle-delay map, which allows parameter sharing between the co-located transmitter and receiver. The gradient of the loss function does not flow through the max operation, as illustrated in Fig. <ref>. To further improve memory efficiency, gradient flow is also discarded when computing the new residual 𝐘_r^(I+1) from the estimates (θ̂_I, τ̂_I).
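The soft selection of steps 2)–4) can be written compactly: window the angle-delay map around the hard peak, apply a softmax over the window, and take the convex combination of the grid angles and delays. A NumPy sketch is given below (the differentiable training version is identical up to the tensor type); the window half-widths, given in map pixels, and the temperature are our placeholders for the resolution-based mask.

```python
import numpy as np

def soft_peak(M, theta_grid, tau_grid, half_win=(3, 2), temp=1.0):
    """Differentiable angle-delay estimate from an angle-delay map M (N_theta x N_tau)."""
    i0, j0 = np.unravel_index(np.argmax(M), M.shape)     # hard peak (no gradient through this)
    i_lo, i_hi = max(i0 - half_win[0], 0), min(i0 + half_win[0] + 1, M.shape[0])
    j_lo, j_hi = max(j0 - half_win[1], 0), min(j0 + half_win[1] + 1, M.shape[1])
    W = M[i_lo:i_hi, j_lo:j_hi] / temp
    P = np.exp(W - W.max())
    P /= P.sum()                                         # softmax over the masked window
    theta_hat = (P.sum(axis=1) * theta_grid[i_lo:i_hi]).sum()
    tau_hat = (P.sum(axis=0) * tau_grid[j_lo:j_hi]).sum()
    return theta_hat, tau_hat
```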
§.§ Loss Function
As the loss function for MB-ML multi-target sensing, we select the GOSPA (generalized optimal sub-pattern assignment) loss from <cit.>. In our case, the GOSPA loss is defined as follows. Let γ>0, 0<μ≤2 and 1≤ p < ∞. Let 𝒫 = {𝐩_1, ..., 𝐩_|𝒫|} and 𝒫̂ = {𝐩̂_1,...,𝐩̂_|𝒫̂|} be the finite subsets of ℝ^2 corresponding to the true and estimated target positions, respectively, with 0≤|𝒫|≤ T_max, 0≤|𝒫̂|≤ T_max. Let d(𝐩, 𝐩̂) = ‖𝐩 - 𝐩̂‖_2 be the distance between true and estimated positions, and d^(γ)(𝐩, 𝐩̂) = min(d(𝐩, 𝐩̂),γ) be the cut-off distance. Let Π_n be the set of all permutations of {1,...,n} for any n ∈ℕ, and let any element π∈Π_n be a sequence (π(1),...,π(n)). For |𝒫| ≤ |𝒫̂|, the GOSPA loss function is defined as
d_p^(γ,μ)(𝒫, 𝒫̂) =
( min_π∈Π_|𝒫̂|∑_i=1^|𝒫| d^(γ)(𝐩_i, 𝐩̂_π(i))^p + γ^p/μ (|𝒫̂|-|𝒫|) )^1/p.
If |𝒫| > |𝒫̂|, then d_p^(γ,μ)(𝒫, 𝒫̂) = d_p^(γ,μ)(𝒫̂, 𝒫). The parameter p is proportional to the penalization of outliers, and the value of γ dictates the maximum allowable distance error. The role of μ, together with γ, is to control the detection penalization. This loss function is suitable for multiple targets, since it considers the association between estimated and true positions that gives the minimum loss, thereby tackling the data association problem of multiple targets. In terms of target detection, we follow the same principle as the baseline, i.e., we stop the OMP algorithm when the maximum of the angle-delay map drops below a threshold. Sweeping this threshold over different values yields a trade-off in terms of detection and false alarm rates.
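For the small target counts considered here (at most T_max = 5 per scene), the GOSPA loss can be evaluated by brute force over assignments, as sketched below; an assignment solver would replace the enumeration for larger sets.

```python
import numpy as np
from itertools import permutations

def gospa(P, P_hat, gamma=90.0, p=2, mu=2.0):
    """GOSPA distance between two lists of 2-D positions."""
    if len(P) > len(P_hat):
        P, P_hat = P_hat, P                                 # enforce |P| <= |P_hat|
    if len(P_hat) == 0:
        return 0.0
    best = np.inf
    for perm in permutations(range(len(P_hat))):
        cost = sum(
            min(np.linalg.norm(np.asarray(P[i]) - np.asarray(P_hat[perm[i]])), gamma) ** p
            for i in range(len(P))
        )
        best = min(best, cost)
    best += (gamma ** p / mu) * (len(P_hat) - len(P))       # cardinality mismatch penalty
    return best ** (1 / p)

print(gospa([(10.0, 5.0)], [(11.0, 5.5), (80.0, -20.0)]))
```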
§ RESULTS
This section details the simulation parameters and the results for single- and multi-target ISAC.[Source code to reproduce all numerical results in this paper will be made available at <https://github.com/josemateosramos/MBE2EMTISAC> after the peer-review process.]
Four methods will be evaluated and compared:
* The model-based baseline from Sec. <ref>, working under the mismatched assumption of no hardware impairments.
* A NNBL method, extending <cit.>, which replaces the precoding and sensing estimation mappings in Fig. <ref> by NN, and can operate in the absence of any knowledge of the ISAC system (including the hardware impairments). More details can be found in Appendix <ref>.
* Dictionary learning from Sec. <ref>, where the unstructured impaired steering vectors 𝐚(θ) are learned for both precoding and sensing.
* Impairment learning from Sec. <ref>, where the structured impairment vector d is learned for precoding and sensing.
§.§ Simulation Parameters
We consider a ULA of K=64 antennas, S=256 subcarriers, and a subcarrier spacing of Δ_f = 120 kHz. We set the maximum number of targets in the scene as T_max = 5. The transmitted power is P=1 and the carrier frequency is f_c = 60 GHz. The sensing SNR across antenna elements was set to SNR_r = K^2σ_r^2/N_0 = 15 dB, and the average communication SNR per subcarrier was fixed to SNR_c = ∑_l=1^Lσ_c,l^2/(SN_0) = 20 dB. The number of channel taps in the communication channel is L=5, with an exponential power delay profile, i.e., σ_l^2 = exp(-l), l=0,...,L-1. The power delay profile is later normalized to obtain the desired average SNR. The number of grid points for angle and range is set as N_θ = 720 and N_τ = 200.
To train the learning methods for a wide range of angles, we randomly draw {θ_min, θ_max} as in <cit.>, i.e.,
we draw a realization of the sector center θ_c ∼𝒰[-60°, 60°] and width Δ∼𝒰[10°, 20°] for each new transmission. The target angular sector is computed as θ_min = θ_c - Δ/2, θ_max = θ_c + Δ/2. The communication angular sector and the range uncertainty region are set as {φ_min, φ_max} = {30°, 50°}, {R_min, R_max} = {10, 190} m, for all transmissions.
For hardware impairments, we consider the model of <cit.>, i.e., we assume structured hardware impairments where the antenna element spacings in the ULA are drawn as 𝐝∼𝒩((λ/2) 1, σ_d^2I_K). We select a standard deviation of σ_d = λ/25 = 0.2 mm. MB-ML is initialized with the same knowledge as the baseline, i.e., the steering vector models initially assume that 𝐝=(λ/2) 1.
In the GOSPA loss, we set μ=2, as recommended in <cit.>, p=2, and γ = (R_max-R_min)/2=90 m. The cardinality mismatch term in (<ref>) implies the use of a threshold during training. However, our goal is to train the learning methods regardless of the threshold, and then explore the sensing performance by changing the threshold. Hence, during training the actual number of targets T is assumed to be known, which means that |𝒫| = |𝒫̂| = T, and the GOSPA loss during training becomes
d_p^(γ,μ)(𝒫, 𝒫̂) = (min_π∈Π_|𝒫̂|∑_i=1^|𝒫| d^(γ)(𝐩_i, 𝐩̂_π(i))^p)^1/p.
However, there is no detection penalization term in (<ref>), which implies that the detection probability estimation NN of NNBL cannot be optimized. Hence, we adopt a two-step training approach for NNBL, as follows:
* We first train the precoding and position-estimation networks based on the simplified GOSPA loss of (<ref>).
* While freezing the previously learned parameters ξ, we then train the target-detection network by minimizing
d_u^(γ_u,μ)(𝒰, 𝒰̂) = (min_π∈Π_|𝒰̂|∑_i=1^|𝒰| d^(γ_u)(u_i, û_π(i))^p)^1/p,
where 𝒰 = {u_1, ..., u_|𝒰|} and 𝒰̂ = {û_1, ..., û_|𝒰̂|} are the true and estimated sets of target presence probabilities, d^(γ_u)(u_i, û_π(i)) = min(d(u_i, û_π(i)),γ_u), and d(u_i, û_π(i)) = -u_ilog(û_π(i)) - (1-u_i)log(1-û_π(i)). That is, we replace the position distance error in (<ref>) with the binary cross-entropy (BCE) loss. Note that in (<ref>) we also assume that |𝒰|=|𝒰̂|=T.
The previous two-step training approach was observed to yield better performance, compared to joint training of all NN parameters ε, ξ, ζ based on the sum of the losses (<ref>) and (<ref>).
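To make the two-step procedure concrete, the sketch below (an illustrative PyTorch fragment of our own, not the authors' code) freezes the position-estimation parameters and updates the detection network with the capped BCE-based loss; for brevity the assignment search of the loss above is omitted and targets are assumed already matched, and the cap value is an assumption.

import torch

def train_detection_step(f_det, f_pos, optimizer, batches, gamma_u=10.0, p=2):
    # Step 2: freeze the position network and optimize the detection network.
    for param in f_pos.parameters():
        param.requires_grad_(False)
    bce = torch.nn.BCELoss(reduction='none')
    for angle_delay_map, prior, u_true in batches:
        u_hat = f_det(angle_delay_map, prior)              # probabilities in [0, 1]
        d = torch.clamp(bce(u_hat, u_true), max=gamma_u)   # capped per-target BCE
        loss = (d ** p).sum(dim=-1).pow(1.0 / p).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()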
Network optimization is performed using the Adam optimizer <cit.>, with a batch size of B=3000 and 100,000 training iterations. The learning rates of dictionary and impairment learning were set to 5·10^-3 and 10^-7, respectively. In the two-step training approach for NNBL, 100,000 training iterations are applied to each of the steps. Position estimation training used a learning rate of 10^-2, while target detection used a learning rate of 10^-3. The architecture of NNBL is described in Appendix <ref>. NNBL also benefited from using a scheduler that reduces the learning rate when the loss function reaches a plateau. Details of the scheduler parameters can be found in Appendix <ref>.
§.§ Performance Metrics
Concerning testing, we compute as detection performance metrics a measure of the probability of misdetection and the probability of false alarm, for multiple targets. We use the same definitions as in <cit.>, which correspond to
= 1-∑_i=1^Bmin{T_i, _i}/∑_i=1^B T_i,
= ∑_i=1^B max{T_i, _i} - T_i/∑_i=1^B - T_i,
where
T_i, _i are the true and estimated number of targets in each batch sample, respectively. The regression performance is measured via the GOSPA (for multiple-target sensing) and the RMSE (for single-target sensing).
As communication performance metric, we use the average SER across subcarriers, computed as
SER = 1/BS∑_i=1^B ∑_j=1^S 𝕀{[_i]_j ≠ [_i]_j},
with _i and _i the true and estimated message vectors at the i-th batch sample. All described methods in this paper (baseline of Sec. <ref>, MB-ML of Sec. <ref>, and NNBL) use a QPSK encoder, and the message estimation rule in (<ref>).
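The test metrics above can be computed directly from per-sample target counts and decoded messages; the following NumPy sketch is our own illustration (array names and the default maximum number of targets are assumptions consistent with the simulation parameters).

import numpy as np

def detection_metrics(T_true, T_est, T_max=5):
    # Misdetection and false-alarm probabilities over a test set of B samples.
    T_true, T_est = np.asarray(T_true), np.asarray(T_est)
    p_md = 1.0 - np.minimum(T_true, T_est).sum() / T_true.sum()
    p_fa = (np.maximum(T_true, T_est) - T_true).sum() / (T_max * T_true.size - T_true.sum())
    return p_md, p_fa

def symbol_error_rate(M_true, M_est):
    # Average SER across subcarriers; M_true, M_est: (B, S) arrays of symbol indices.
    return float(np.mean(np.asarray(M_true) != np.asarray(M_est)))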
§.§ Single-target ISAC
In single-target ISAC, the maximum number of targets is =1, which implies that the GOSPA loss function in (<ref>) becomes (, ).
However, in order to compare with our previous work <cit.>, we train MB-ML and position estimation of NNBL using the MSE loss d(, )^p = - _2^2, and detection estimation of NNBL using the BCE loss, d(u, ) = -ulog() - (1-u)log(1-). Position estimation is assessed by the angle RMSE, √([(θ-θ̂)^2]), and the range RMSE, √([(R-R̂)^2]).
ISAC performance results are represented in Fig. <ref>, where
we sweep over [0,1] and [0,7π/4], taking 8 uniformly spaced values, to set η and ϕ in (<ref>), respectively. For testing, we fixed {, } = {-40, -20}[Unless otherwise stated, the authors also tested other values of {, }, and the results were qualitatively the same.]. The probability of false alarm was set to = 10^-2.
Results show that, under hardware impairments and with no complexity limitations (solid lines), the learning methods outperform the baseline in terms of misdetection probability, angle and range estimation, and SER, which implies that the learning methods have adapted to the hardware impairments. Communication performance, even in the case of optimal symbol estimation, is enhanced by the learning approaches, which suggests that the impairments have a significant impact on the optimal communication precoder. In addition, dictionary learning outperforms NNBL for range estimation, although the converse happens for misdetection probability. Impairment learning yields the best performance among all learning methods, and with fewer parameters, which usually implies less training time. Indeed, NNBL is composed of a total of 7.78 million real learnable parameters, while dictionary learning uses K = 40,080 complex parameters, and impairment learning consists of K=64 complex parameters.
Under limited complexity, the number of parameters of dictionary learning and NNBL is restricted. We follow the approach of <cit.>, and restrict the number of (complex) parameters of dictionary learning by setting = 156, which reduces the number of parameters to 9,984 complex parameters. The complexity constraints applied to NNBL are detailed in Appendix <ref>, which decreases the number of real parameters to 10,555. From Fig. <ref>, it is observed that while NNBL drops in performance, especially for angle and range estimation, dictionary learning still yields better results than the baseline. However, dictionary learning also decreased in performance compared to the unconstrained approach, which means that dictionary learning cannot achieve the same performance as impairment learning for the same number of parameters.
Lastly, we test all learning approaches for a scenario that was not encountered during training, to assess their generalization capabilities. Fig. <ref> depicts the performance of the learning methods for {, } = {-20, 20}, which includes a span of the angular uncertainty region wider than expected. The complexity of the networks is not restricted. The performance of all learning approaches has dropped compared to Fig. <ref>. However, while NNBL performs worse than the baseline, and dictionary learning yields similar results to the baseline, impairment learning is the only approach that still outperforms the baseline. NNBL and dictionary learning appear to overfit to the training data and degrade for unexpected inputs. This means that for new testing scenarios, impairment learning is the learning approach that best generalizes in terms of performance. This is due to the fact that impairment learning is the only method for which parameters are shared between all directions (all columns of the dictionary are affected each time the parameters are updated). Dictionary learning does not exhibit this feature, since each column of the dictionary (corresponding to a direction) is considered an independent set of parameters.
§.§ Multi-target ISAC
Based on the results of Sec. <ref>, impairment learning performs the best among all considered learning methods for the simpler case of single-target ISAC.
Hence, we only consider impairment learning to compare against the baseline for multi-target sensing. The batch size for MB-ML is decreased to B=1500 due to memory restrictions. The number of iterations was also reduced to 25,000, since finding the association between estimated and true data that minimizes the GOSPA loss of (<ref>) increases training time. In addition, the resulting ISAC performance is very close to that obtained with perfect knowledge of the impairments, as observed in the following.
We first compare the performance of the differentiable OMP algorithm of Sec. <ref> with the baseline, when hardware impairments are perfectly known. In Fig. <ref>, the sensing performance of both approaches is depicted. Results show that differentiable OMP performs close to the baseline. The difference in performance might be because the dictionary _a in the baseline only covers the angular range {, }, while differentiable OMP uses a fixed dictionary that covers [-π/2, π/2]. However, this allows for efficient parameter sharing in MB-ML. Differentiable OMP takes a weighted sum of angles and ranges, which permits selecting an angle or range outside the predefined dictionaries, unlike the baseline.
The GOSPA loss in Fig. <ref> achieves a minimum for different false alarm probabilities, since it takes into account both position and detection errors. For high , OMP estimates a higher number of targets than the true value, and conversely for low .
Fig. <ref> shows the results of the baseline without impairment knowledge, differentiable OMP with perfect impairment knowledge, and impairment learning. Impairment learning outperforms the baseline, which illustrates the adaptability of impairment learning to antenna imperfections in multi-target sensing. Moreover, the performance is very close to perfect knowledge of the impairments, which suggests that the learned spacing is quite similar to the underlying reality.
In terms of ISAC trade-off, Fig. <ref> presents the ISAC trade-offs in case of multiple targets when = 10^-2. In this case, we sweep over η in (<ref>) and fix ϕ = 0, since in Figs. <ref> and <ref> we observed that the effect of ϕ is not very significant. Compared to Fig. <ref>, it is observed that impairment learning also outperforms the baseline in terms of communication performance when the impairments are not known, due to the impact of hardware impairments on the communication precoder.
§ CONCLUSIONS
In this work, we studied the effect of antenna spacing impairments in multi-target ISAC, and different learning approaches to compensate for such impairments. A new efficient MB-ML approach to perform end-to-end learning and impairment compensation was proposed, based on a differentiable OMP algorithm. Simulation results showed that the learning approaches outperform the baseline and can compensate for hardware impairments. Among the learning methods, the newly proposed impairment learning approach outperformed all other considered methods, also exhibiting better generalization capabilities to new testing data, with much fewer parameters to optimize. Simulation results verify that injection of system and impairment knowledge into the learning methods improves their performance and reduces their complexity.
§ NNBL
Since the optimal detection and estimation rules might not be tractable, NNBL can be trained based on data to achieve optimality. Moreover, when no information about the impairments is available, NNBL can provide data-driven solutions to account for them. This appendix describes the principles and architecture of the considered NNBL approach.
§.§ Principles
NNBL replaces the precoding and sensing estimation mappings in Fig. <ref> by NNs. The precoding network, :^2→^2K, takes as input and produces a precoder as output, where ε corresponds to the learnable parameters. The NNs in this work operate on real-valued numbers; hence, the output dimension is doubled. The same mapping is applied to both sensing and communication precoders, to obtain _r and _c, which are later used to design the ISAC precoder according to (<ref>).
Sensing estimation is divided into two tasks, each corresponding to a different NN: (i) detection probability estimation, and (ii) position estimation. As input to both NN, we use ∈^× defined in Sec. <ref>, instead of , since we observed a better sensing performance.
In addition to the angle-delay map, the input is also composed of the a priori information {, , , }, as shown in Fig. <ref>, to improve network performance.
The output of each NN is task-dependent. The detection probability network, : ^××^4→ [0,1]^, outputs a probability vector û whose elements correspond to the probability that each target is present in the scene, which is later thresholded to provide an estimate of the number of targets. The position estimation network, : ^××^4→^×2, outputs a matrix P̂ whose columns represent the position estimation of each potential target. The learnable parameters of each network are ζ and ξ, respectively. Both NN are trained based on the GOSPA loss function of Sec. <ref>.
§.§ NN Architectures
The precoding operation of Fig. <ref> was implemented as a MLP, whose input is an angular sector ({, } or {, }), with 3 hidden layers of 8K neurons and an output layer of 2K neurons, where we recall that K=64 is the number of antennas in the ULA transceiver. The activation function after each layer is the ReLU function, except for the final layer, which contains a normalization layer to ensure a unit-norm output, i.e., ‖_bs‖_2=1.
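A minimal PyTorch sketch of this precoding network is given below; the layer sizes follow the description above, while everything else (class name, dtype handling, initialization) is an assumption of ours.

import torch
import torch.nn as nn

class PrecoderMLP(nn.Module):
    # Maps an angular sector [theta_min, theta_max] to a unit-norm K-antenna precoder.
    def __init__(self, K=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 8 * K), nn.ReLU(),
            nn.Linear(8 * K, 8 * K), nn.ReLU(),
            nn.Linear(8 * K, 8 * K), nn.ReLU(),
            nn.Linear(8 * K, 2 * K),
        )

    def forward(self, sector):                          # sector: (batch, 2)
        out = self.net(sector)                          # (batch, 2K), real-valued
        out = out / out.norm(dim=-1, keepdim=True)      # enforce unit norm
        re, im = out.chunk(2, dim=-1)
        return torch.complex(re, im)                    # (batch, K) complex precoder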
For the receiver side, we resort to CNN given the 2-dimensional nature of the input , as represented in Fig. <ref>. The receiver architecture repeats a set of layers, represented in Fig. <ref>, which we call residual bottleneck block. This block was inspired by the ResNet architecture <cit.>. A convolutional layer is first introduced with some stride to decrease the number of pixels to process. Then, 2 bottleneck blocks with skipped connections similar to <cit.> follow. However, we reduce the number of activation functions and normalization layers, as suggested in <cit.>. Another residual connection is introduced from the beginning to the end of both bottleneck blocks to help with gradient computation.
We observed that splitting position estimation into angle and range estimation, each of them involving a CNN, yielded better results than using a single network. Angle and range estimates are later combined into a position vector following (<ref>). The common architecture for all CNN (detection, angle and range estimation) is shown in Table <ref>. Convolutional layers introduce zero-padding so that the number of pixels is preserved. After the first and last convolutional layers, a 2-dimensional batch normalization and a ReLU activation function are also applied. The resulting feature map of the CNN has / 2^12 elements. For NNBL, = 320 and = 128 due to memory constraints. The resulting feature map from the convolutional layers, together with the a priori information {, , , } of the target locations, are processed by MLP. The angle estimation network only uses {, }, the range estimation network {, }, and the detection network utilizes both of them. The architecture of each MLP is described in Table <ref>. The activation function after each fully-connected layer is the ReLU function. Unless stated otherwise, all NN architectures were optimized to give the best ISAC performance, where we explored, for instance, kernel sizes up to 13x13, the number of residual bottleneck blocks from 3 to 7, or the number of layers of the MLP of Table <ref>, from K to 64K, among others.
When training NNBL, a scheduler is used to reduce the learning rate if the loss function plateaus. The patience of the scheduler was set as 10^4 iterations. If the loss function was regarded to plateau, the learning rate was decreased by half, with a minimum attainable learning rate of 10^-6.
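For reproducibility, a plausible optimizer/scheduler configuration matching this description is sketched below; the placeholder parameters and loss are ours, while the numerical settings follow the text.

import torch

params = torch.nn.Parameter(torch.zeros(10))      # placeholder for the NNBL parameters
optimizer = torch.optim.Adam([params], lr=1e-2)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=10_000, min_lr=1e-6)

for iteration in range(100_000):
    loss = (params ** 2).sum()                    # stand-in for the training loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())                   # halve the LR when the loss plateaus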
When complexity limitations are considered, the number of neurons in each hidden layer of the transmitter network is reduced to 4. At the receiver side, the kernel size of the Maxpool layer is increased to 4x4, the number of residual bottleneck blocks is changed from 6 to 3, the number of channels in the network is reduced by a factor of 4, and the number of neurons in the hidden layer of the last MLP is constrained to 4.
|
http://arxiv.org/abs/2307.03881v1 | 20230708024615 | On Delay Performance in Mega Satellite Networks with Inter-Satellite Links | [
"Kosta Dakic",
"Chiu Chun Chan",
"Bassel Al Homssi",
"Kandeepan Sithamparanathan",
"Akram Al-Hourani"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Utilizing Low Earth Orbit (LEO) satellite networks equipped with Inter-Satellite Links (ISL) is envisioned to provide lower delay compared to traditional optical networks. However, LEO satellites have constrained energy resources as they rely on solar energy in their operations. Thus requiring special consideration when designing network topologies that do not only have low-delay link paths but also low-power consumption. In this paper, we study different satellite constellation types and network typologies and propose a novel power-efficient topology. As such, we compare three common satellite architectures, namely; (i) the theoretical random constellation, the widely deployed (ii) Walker-Delta, and (iii) Walker-Star constellations. The comparison is performed based on both the power efficiency and end-to-end delay. The results show that the proposed algorithm outperforms long-haul ISL paths in terms of energy efficiency with only a slight hit to delay performance relative to the conventional ISL topology.
Low Earth orbit constellations, inter-satellite links, mega satellite constellations, dense satellite constellations, delay.
§ INTRODUCTION
The deployment of Low Earth Orbit (LEO) constellation networks is at an accelerating pace aiming to provide global connectivity. Many satellite projects, such as OneWeb, SpaceX's Starlink, and Amazon's Kupier <cit.> are currently been deployed with thousands of satellites into LEO orbit to provide seamless Internet coverage around the globe. These constellations will complement the existing terrestrial communications including both 5G and future 6G networks.
One of the key enablers of such constellations envisioned inter-satellite connectivity. A connection between two satellites is referred to as Inter-Satellite Link (ISL) and is being widely anticipated (and demonstrated) in the upcoming LEO constellations. ISLs relay data directly between satellites, unlike current methods which depend on the large network of ground stations <cit.>. ISLs could liberate constellations from the burden of establishing costly, and sometimes infeasible, ground station networks. An additional advantage is that the data carried by ISLs travel in free space and thus at the exact speed of light as opposed to conventional optical fiber networks. The average propagation speed in a typical single-mode optical fiber cables network is around 65-70% the speed of light <cit.>. Hence, ISLs can potentially enable new low-delay applications such as remote control industry operations, cloud-controlled autonomous vehicles and farming, and telesurgery, in addition to enabling faster financial transactions. However, despite these advantages, satellite networks with ISLs face their own set of challenges, such as power constraints due to reliance on solar energy and the need to maintain reliable inter-satellite connections in a dynamic orbital environment. Consequently, the development of power-efficient and low-delay satellite constellation topologies that effectively leverage ISLs remains an active area of research and innovation.
Nevertheless, using satellites rather than optical fiber submarine cable possesses the potential to decrease the propagation delay by a few tens of milliseconds depending on the distance. Apart from delay reduction, the LEO satellite constellation can provide an access network for remote and rural communities, and to locations with extreme terrain such as mountains <cit.>. This is also particularly important for industries that require real-time monitoring and control, such as manufacturing and logistics. Recent studies have also shown that LEO satellites with ISLs offer better resilience to cyber-attacks, making them a more secure option for data transfer in Industry 4.0 applications <cit.>. Furthermore, due to their low altitude, LEO satellites offer lower latency and higher bandwidth capacity compared to traditional geostationary satellites, enabling faster and more efficient communication services for users in remote areas <cit.>. Recent studies have demonstrated the potential benefits of LEO satellite networks for improving connectivity in developing countries and bridging the digital divide <cit.>.
Simulations presented in <cit.> and in <cit.> illustrate the comparative delay advantages of ISL as a data relay technology versus optical cable. Additionally, authors in <cit.>, delve deeper into the idea of using optical satellite links rather than terrestrial fiber by developing a crossover function to optimize delay. The crossover function is a mathematical formula that determines the optimal point at which to switch from using a terrestrial fiber link to an optical satellite link. Authors in <cit.> revealed the limitations of traditional network design approaches in the context of ISL-enable satellite communications and suggested the use of repetitive 3-satellite link patterns to address the temporal dynamics, achieving a higher efficiency than previous state-of-the-art methods. Nevertheless, more investigation is needed to better evaluate the benefits of optical satellite links for data relaying as well as to develop satellite-aware ISL topologies rather than just applying the shortest path ISLs. For example, a power-efficient ISL topology would take into consideration the power limitations of satellites due to their reliance on intermittent solar energy.
In this paper, we further analyze the prospect of using LEO satellite ISLs to relay data. The analysis is made through the simulation of different LEO satellite constellations (shown in Fig. <ref>) where network topologies are proposed; (i) Nearest hop topology and (ii) Cutoff distance topology. We concentrate on the performance in terms of delay between the transmitting and receiving device because ISL is expected to facilitate lower delays, however, the performance of ISL-enabled networks needs to be carefully studied and assessed. An illustration of data communications with ISLs is shown in <ref>. The contributions of this work are summarized as follows:
* We compare the performance of three common satellite constellations, namely the theoretical random constellation, the widely deployed Walker-Delta, and the Walker-Star constellations in terms of end-to-end delay.
* We propose and evaluate two network topologies for LEO satellite constellations with ISLs: Nearest hop topology and Cutoff distance topology, focusing on their impact on delay performance comparing a theoretical great-circle optical fiber connection.
* We show that our proposed topologies achieve competitive delay performance relative to the conventional ISL topology, highlighting their potential for use in future LEO satellite networks.
§ SYSTEM MODEL
§.§ Geomtric model
In this section, we introduce the three constellation models used for benchmarking the performance.
§.§.§ Random
Satellites are distributed randomly on random circular orbits. This distribution is used in the simulations, as outlined in <cit.>, under the assumption that satellite collisions are disregarded. The left-hand side of Fig. <ref> provides an illustration of the random constellation along with the common Walker constellations.
§.§.§ Walker-Star Constellation
Satellite providers, such as OneWeb and Iridium, use the Walker-Star constellation. The orbits in such a constellation follow a near-polar configuration with an inclination angle close to 90^∘, which ensures global coverage, including the poles. However, this inherently results in an increased density of satellites at higher latitudes. The right ascension of the ascending node (RAAN) of the orbital planes in a Walker-Star configuration is spread across 0 to π, unlike the Walker-Delta constellation which uses 0 to 2π. The middle of Fig. <ref> depicts a typical Walker-Star constellation with 200 satellites along with their orbital planes.
§.§.§ Walker-Delta Constellation
The Walker-Delta constellation, employed in satellite networks such as Kuiper and Starlink <cit.>, reduces inter-satellite distance variations. These networks are currently considering inter-satellite links (ISLs) to support end-to-end communication without the need for a large network of terrestrial gateways. In the Walker-Delta configuration, the orbital planes are equally spaced and rotate around the Earth's axis of rotation, with a RAAN of Ω∈{0, 2π/P, 2·2π/P, …, (P-1)2π/P}. The right-hand side of Fig. <ref> displays the constellation and orbital planes for the Walker-Delta constellation.
§.§ Footprint model
In order to avoid terrestrial interference and heavy signal fading, the user terminal only connects to satellites that have an elevation angle larger than a given threshold θ_min. One reasonable threshold is 25^∘, as indicated in FCC filings by Starlink <cit.>. Thus, according to <cit.>, the effective footprint of a satellite is bounded by the minimum permissible elevation angle. In a practical sense, the footprint might be even smaller than this bound depending on the antenna beamwidth ψ. For the purpose of this study, the footprint projection is assumed to be an ideal spherical cap bounded by an earth-centered zenith angle, denoted as φ; see Fig. <ref> for details. In order to calculate the beamwidth, the maximum slant distance between the satellite and the ground device needs to be calculated with the cosine rule as follows,
a = R_e cos(π/2+θ_min)+√(R^2-R_e^2 sin^2(π/2+θ_min)) ,
where R_e is Earth's average radius, R = R_e + h, and h is the satellite altitude above the Earth's mean sea level. Thus, the maximum effective beamwidth can be again calculated with the cosine rule as follows <cit.>,
ψ = acos((R_e^2+R^2-a^2)/(2Ra)) .
Then the earth-centered zenith angle is calculated using the law of sines as follows <cit.>,
φ = asin((1/α) sin(ψ/2)) - ψ/2 ,
where α = R_e/R. Finally, the area of the spherical cap (footprint) of the beam is calculated as follows,
A_fp = 2π R_e^2(1-cosφ) ,
The perimeter of a spherical cap can then be drawn on the earth's surface to define each satellite footprint, where if a device is located within the footprint, it is able to connect to the satellite. For defining the perimeter of the footprint, the latitude and longitude of the footprint boundary need to be calculated with the heading formulae <cit.> as follows,
ϕ_fp = asin(sinϕ_sat cosφ + cosϕ_sat sinφ cosθ) ,
and the longitude,
ρ_fp = ρ_sat + atan2(sinθ sinφ cosϕ_sat , cosφ - sinϕ_sat sinϕ_fp) ,
where θ is an array from 0 to 2π with 360 elements and ϕ_sat and ρ_sat is the latitude and longitude of the satellite in radians. An illustration of the geometry of a LEO satellite is shown in Fig. <ref>.
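The footprint construction above can be summarized in a short NumPy routine; the sketch below is our own (it obtains the Earth-centered angle from the triangle angle sum, which is equivalent to the law-of-sines expression, and all names and defaults are assumptions).

import numpy as np

R_E = 6371e3  # mean Earth radius [m]

def footprint_boundary(lat_sat, lon_sat, h, theta_min=np.deg2rad(25.0), n=360):
    # Spherical-cap footprint of a satellite at altitude h [m] whose
    # sub-satellite point is (lat_sat, lon_sat) [rad], bounded by theta_min.
    R = R_E + h
    a = R_E * np.cos(np.pi / 2 + theta_min) + np.sqrt(
        R**2 - (R_E * np.sin(np.pi / 2 + theta_min))**2)        # max slant range
    psi_half = np.arccos((R**2 + a**2 - R_E**2) / (2 * R * a))  # half beamwidth at satellite
    phi = np.pi - (np.pi / 2 + theta_min) - psi_half            # Earth-centered zenith angle
    area = 2 * np.pi * R_E**2 * (1 - np.cos(phi))               # footprint area
    theta = np.linspace(0.0, 2 * np.pi, n)
    lat_fp = np.arcsin(np.sin(lat_sat) * np.cos(phi)
                       + np.cos(lat_sat) * np.sin(phi) * np.cos(theta))
    lon_fp = lon_sat + np.arctan2(np.sin(theta) * np.sin(phi) * np.cos(lat_sat),
                                  np.cos(phi) - np.sin(lat_sat) * np.sin(lat_fp))
    return lat_fp, lon_fp, area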
§.§ Communication Delay
In order to benchmark the satellite network, a direct point-to-point fiber link is assumed between the communicating ground terminals. The link thus follows the great-circle which is the shortest path on a spherical surface. The delay is calculated as follows,
τ_gc = d_gc/(c/n)
where c is the speed of light, n is the refractive index of the optical fiber cable, and d_gc is the great circle distance between the communicating ground terminals. When using the satellite network, the total delay is related to the sum of all hop distances plus the approximated processing delay and is formulated as follows,
τ_sat = (d_sat+d_tx→ sat+d_sat → rx) /c + N_hopsτ_process ,
where d_sat is the sum of all the ISL distances from the first satellite point to the last satellite point, d_tx → sat is the distance from the transmitter to the first satellite, and d_sat → rx is the distance from the last satellite in the path to the receiver. The processing delay is denoted as τ_process.
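The two delay expressions translate directly into code; the example values in the sketch below (refractive index, distances, hop count) are assumptions for illustration only.

C = 299_792_458.0  # speed of light [m/s]

def fiber_delay(d_gc, n=1.47):
    # Great-circle optical-fiber delay; n is the assumed refractive index.
    return d_gc / (C / n)

def satellite_delay(hop_distances, d_tx_sat, d_sat_rx, n_hops, tau_process):
    # Free-space propagation over all ISL hops plus per-hop processing delay.
    return (sum(hop_distances) + d_tx_sat + d_sat_rx) / C + n_hops * tau_process

# Example with assumed numbers: 8000 km ground distance, six 1500 km ISL hops.
print(fiber_delay(8000e3))
print(satellite_delay([1500e3] * 6, 600e3, 600e3, 7, 3000 / 533e6))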
§ TOPOLOGIES
In this paper, we evaluate the end-to-end delay performance of different ISL-enabled constellations with two different network topologies. The topologies being utilized are (i) cutoff distance-based topology and (ii) nearest hop-based topology. For both topologies, after the links are constructed, Dijkstra's algorithm is used to calculate the lowest delay path. For Dijkstra's algorithm, the distance of each link is used for the link weight because it is directly proportional to the propagation delay.
§.§ Nearest Hop-based Topology
In a practical LEO satellite constellation, the number of available optical ISL ports (links) is limited for various reasons, including cost, energy consumption, and satellite size. Therefore, ISL links to neighboring satellites need to be optimized according to the given criteria. One way to form such links is to connect with the next satellites (next hops) that minimize the transmission energy. One method to establish the lowest-energy next hops is to evaluate every satellite within a given vicinity and keep only the neighbors for which a direct link is more energy efficient than any relayed alternative. In geometric terms, this topology can be realized using the following steps:
* Draw a virtual sphere between the current satellite and the candidate satellite residing on the opposing side of the diameter segment.
* Create a link from the test node to the candidate only if there is no other candidate node in the sphere.
* Repeat for every candidate satellite in the vicinity
The illustration in Fig. <ref> shows an ISL path from node A to node D. If we assume the transmission power is variable, the link from A → C cannot be made, since relaying via B is more efficient: the link between A and C is excluded because d_AC^2 ≥ d_AB^2 + d_BC^2. The inverse-square relationship between distance and transmit power due to the free space path-loss (FSPL) exponent means that less transmit power is required if the signal travels A → B → C rather than directly from A → C. The FSPL is calculated as follows,
l = [4π d/λ]^2 ,
The total energy consumption of the system is then formulated as follows,
E = α∑_i=0^K d_i^2 + (E_processing× K) ,
where K is the number of hops and α is the transmit power.
After all the ISLs are made, Dijkstra's algorithm <cit.> is then used to calculate the shortest path between the transmitter and receiver using the satellite network. A figure showing the ISLs between each satellite using the nearest hop algorithm is shown in Fig. <ref>. Additionally, the pseudocode for constructing the nearest hop topology is shown in <ref>.
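A direct, if naive, implementation of this construction is sketched below; this is our own illustration (the O(N^3) scan and variable names are assumptions, and a spatial search structure could restrict the candidate set in practice).

import numpy as np

def nearest_hop_links(positions, vicinity=None):
    # Create a link i -> k only if no third satellite j lies inside the sphere
    # having segment i-k as diameter, i.e. no j with d_ij^2 + d_jk^2 <= d_ik^2.
    # positions: (N, 3) array of satellite positions [m].
    N = len(positions)
    d2 = np.sum((positions[:, None, :] - positions[None, :, :])**2, axis=-1)
    links = []
    for i in range(N):
        for k in range(N):
            if i == k or (vicinity is not None and d2[i, k] > vicinity**2):
                continue
            blocked = any(j != i and j != k and d2[i, j] + d2[j, k] <= d2[i, k]
                          for j in range(N))
            if not blocked:
                links.append((i, k, float(np.sqrt(d2[i, k]))))  # weight = distance
    return links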
§.§ Cutoff Distance-based Topology
This topology assumes that a satellite can link with any other satellite if it is closer than a given distance threshold. The practical sense of such a topology is that ISL links would have a maximum viable distance either limited by the link budget or by the occlusion caused by the Earth's curvature. For a generic case, we assume that the links are limited by the Earth's curvature, which imposes an upper bound on the distance of feasible ISL links. Accordingly, the connection between the neighbor pairs is removed when the weight exceeds the maximum horizon range, denoted as d_max. To be more accurate, the practical visibility constraint of ISL is limited by the troposphere, which contains 99% of the atmospheric water vapor and aerosols <cit.>. The troposphere has an average height of around 18 km above the Earth's surface, so we use this as a threshold <cit.>. The maximum ISL link distance is then calculated as follows,
d_max= 2√((R_⊕+h_s)^2-(R_⊕+h_t)^2) ,
where h_s is the height of the satellite and h_t is the average height of the troposphere. Dijkstra's algorithm <cit.> is then utilized to find the shortest path in the topology. Note, the cutoff algorithm provides a lower bound for ISL delay, as this is the maximum distance at which ISLs could be formed. An illustration of how the links for the Cutoff routing concept are formed is shown in Fig. <ref>.
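The cutoff topology and the subsequent shortest-path search can be sketched as follows; this is our own illustration using NetworkX, with constants and names as assumptions.

import numpy as np
import networkx as nx

R_E, H_T = 6371e3, 18e3   # Earth radius and average troposphere height [m]

def cutoff_topology(positions, h_s):
    # Link two satellites whenever their distance is below the horizon-limited
    # maximum ISL range of the expression above (grazing the troposphere).
    d_max = 2.0 * np.sqrt((R_E + h_s)**2 - (R_E + H_T)**2)
    G = nx.Graph()
    N = len(positions)
    for i in range(N):
        for k in range(i + 1, N):
            d = float(np.linalg.norm(positions[i] - positions[k]))
            if d <= d_max:
                G.add_edge(i, k, weight=d)   # weight proportional to propagation delay
    return G

# Lowest-delay path between the satellites serving the Tx and Rx:
# path = nx.dijkstra_path(G, source=i_tx, target=i_rx, weight='weight')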
§ RESULTS
In this section, we present the performance results compared to terrestrial optical fiber connections between two arbitrary locations on Earth. Additionally, we explore three different LEO satellite constellations: (i) random, (ii) Walker-Star, and (iii) Walker-Delta. Note that we explore the performance of the random constellation as a baseline because it has been shown to be analytically tractable for coverage and handover problems <cit.>.
For calculating the processing delay, we assume a processing clock operating at 533 MHz, chosen to match the Zynq UltraScale+ RFSoC <cit.> as an example of a real-time single-chip radio platform. We also assume 3000 instructions to decide the ISL for the next satellite; the low number of instructions is due to the orbits being deterministic, thus all the topologies can be calculated a priori and the satellites would just need to use a look-up table to determine the next link. The processing delay can then be calculated as D_p = Number of Instructions / CLK. As the processing delay within satellites is likely to be fixed and similar across satellites, we assumed the same processing delay for all satellites. Therefore, the processing delay is multiplied by the number of satellite hops K.
§.§ Distance vs. Delay
When comparing the delay of LEO satellite networks for data communication and the great circle optical fiber path, we normalize the delay to the great circle optical fiber path. The Improvement = D_Optical fiber/D_Satellite - 1, where D is the delay. As such, the improvement relative to the great circle optical fiber path is plotted relative to the great circle distance.
The great circle distance is plotted against the delay in Fig. <ref>, where we normalize the delay to the delay achieved by using an optical fiber path. The plot shows a clear trend: as the number of orbiting satellites increases, the performance also improves. The improvement is due to the greater coverage of satellites around the globe, so the probability that a more efficient path exists also increases. Additionally, in the same plot, the performance of the nearest hop topology is compared when processing is included in the calculation as well as when it is assumed negligible. The plot shows that the performance decreases only very slightly, due to the high clock rate of the hardware considered in this work <cit.>. The performance of the cutoff topology with and without processing was also considered; however, due to the smaller hop count relative to the nearest hop topology, the effect is negligible and thus not shown on the plot. Note that the random constellation is used as a baseline example.
Fig. <ref> shows the delay performance against distance for the LEO Walker constellations, again normalized to the great-circle optical fiber path. From the plot, it can be seen that, as with the random constellation, the delay performance improves as the number of satellites increases. Also, the Walker-Star constellation seems to outperform the Walker-Delta constellation when the nearest hop topology is used. Furthermore, the performance contrast between the Walker-Delta and Walker-Star is greatest when the nearest hop algorithm is used and also increases as the distance between the transmitting device and the receiving device grows. The performance disparity between the Walker constellations is much smaller when the cutoff topology is used. The Walker-Star constellation outperforms the Walker-Delta constellation in terms of delay because the Walker-Star has better coverage over the globe compared to the Walker-Delta, and random locations for the ground-based transmitter and receiver are used.
§.§ Alternate Paths
To determine the robustness of each LEO constellation to link failures, we investigate how the link delay increases as we introduce alternate paths. We construct the best possible path and calculate the delay, then remove the links that correspond to the best path and form a new path. The delay of the new path is then calculated. The process is repeated until we have 10 distinct paths from the transmitting device to the receiving device. The results are shown in Fig. <ref> where the delay performance is shown for Walker-Delta as well as Walker-Star satellite constellations. In addition, the delay for using terrestrial optical fiber cable over the great circle distance between the cities is shown. An illustration showing the different types of links is shown in Fig. <ref>. The Walker-Delta constellation using the nearest hop topology has the greatest deviation between cities, particularly for the path between New York and London. Moreover, the deviation from using the cutoff algorithm is very low, thereby showing greater robustness to failed links between satellites. Finally, the performance of satellites for ISL paths compared to traditional fiber has a greater pay-off when the distance between the two locations increases such as in the delay between Perth and Brest.
§ CONCLUSION
In this study, we analyzed ISL-enabled LEO satellite constellations as a low-delay alternative to traditional terrestrial optical fiber networks. We investigated three LEO constellations: Walker-Delta, Walker-Star, and random. Additionally, we explored two topologies, presenting the power-efficient nearest hop topology and comparing it to the delay-minimizing cutoff topology. Our results demonstrated that satellite networks improve delay performance compared to optical fiber connections as the transmitter-receiver distance increases. The proposed nearest hop topology maintains a better delay performance compared to the great-circle fiber path while also utilizing more energy-efficient ISLs. Future work will involve using machine learning to develop a topology for dynamically changing ISLs due to high traffic loads and link failures.
The Walker-Star constellation had a lower delay, but the Walker-Delta constellation was more power-efficient in terms of average ISL length with the nearest hop algorithm.
§ ACKNOWLEDGMENT
The authors would like to acknowledge the discussions with Dr. Ben Allen and Mr. Ben Moores, also the partial funding by SmartSat CRC under the UK-Australia spacebridge program.
|
http://arxiv.org/abs/2307.04554v1 | 20230710133817 | Non-unit quaternion parametrization of a Petrov-Galerkin Cosserat rod finite element | [
"Jonas Harsch",
"Simon R. Eugster"
] | math.NA | [
"math.NA",
"cs.NA",
"math-ph",
"math.MP"
] |
Non-unit quaternion parametrization of a Petrov–Galerkin Cosserat rod finite element
1 Institute for Nonlinear Mechanics, University of Stuttgart, Stuttgart, Germany
The application of the Petrov–Galerkin projection method in Cosserat rod finite element formulations offers significant advantages in simplifying the expressions within the discrete virtual work functionals. Moreover, it enables a straight-forward and systematic exchange of the ansatz functions, specifically for centerline positions and cross-section orientations. In this concise communication, we present a total Lagrangian finite element formulation for Cosserat rods that attempts to come up with the least required concepts. The chosen discretization preserves objectivity and allows for large displacements/ rotations and for large strains. The orientation parametrization with non-unit quaternions results in a singularity-free formulation.
Jonas Harsch, Simon R. Eugster
August 12, 2023
§ INTRODUCTION
This article complements the two papers <cit.> on Petrov–Galerkin finite element formulations for Cosserat rods. The cross-section orientations are parameterized using non-unit quaternions instead of total rotation vectors, which additionally require the concept of the complement rotation vector for a singularity-free parametrization. To keep the formulation as simple as possible, we opt for the ^12-interpolation for the ansatz functions, see <cit.>.
The paper is structured as follows. In Section <ref>, the Cosserat rod theory is recapitulated very briefly; mainly to introduce all quantities required for the further finite element formulation. For those interested in additional comments as well as a thorough introduction and explanation of the chosen notation, we recommend reading <cit.>. The Petrov–Galerkin finite element formulation in terms of nodal non-unit quaternions is presented in Section <ref>. The last section on numerical experiments, investigates the static analysis of a helical spring in line with <cit.>. Additionally, the Wilberforce example from <cit.> with a helical spring with three coils is discussed.
§ COSSERAT ROD THEORY
Let ξ∈𝒥 = [0, 1] ⊂ be the centerline parameter and let t denote time. The motion of a Cosserat rod is captured by a time-dependent centerline curve represented in an inertial I-basis _I_OP=_I_OP(ξ, t) ∈^3 augmented by the cross-section orientations _IK=_IK(ξ, t) ∈ SO(3)={∈^3 × 3| = 1_3 × 3∧()= 1}. The subscripts O and P in the centerline curve refer to the origin and the centerline point, respectively. The cross-section orientation _IK can also be interpreted as a transformation matrix that relates the representation of a vector in the cross-section-fixed K-basis to its representation in the inertial I-basis.
The derivatives with respect to time t and centerline parameter ξ are denoted by (̇∙̇)̇ and (∙)_,ξ, respectively. The variation of a function is indicated by δ(∙). With this, we can introduce the centerline velocity _I _P= (_I _OP)^· and the virtual displacement _I δ_P = δ(_I _OP).
The angular velocity of the cross-section-fixed K-basis relative to the inertial I-basis, in components with respect to the K-basis, is defined by _K _IK j^-1(_IK(_IK)^·),
where j ^3 →(3) = {∈^3×3 | = -} is the linear and bijective map such that = j() = × for all , ∈^3. Analogously, the virtual rotations and the scaled curvature are defined as
_K δ_IK j^-1( _IKδ(_IK) ) and _K _IK j^-1( _IK (_IK)_,ξ), respectively.
For the reference centerline curve _I _OP^0, the length of the rod's tangent vector is J = _I _OP, ξ^0. Thus, for a given centerline parameter ξ, the reference arc length increment is s = J ξ. The derivative with respect to the reference arc length s of a function = (ξ,t) ∈^3 can then be defined as _,s(ξ,t) _,ξ(ξ,t) /J(ξ).
The objective strain measures of a Cosserat rod are the curvature _K _IK = _K _IK / J, which measures torsion and bending, together with the measures for dilatation and shear strains contained in _K = _K / J determined by _K (_IK)_I _OP, ξ.
The internal virtual work of a Cosserat rod is defined as
δ W^int -∫_𝒥{
(_I δ_P,ξ)_IK_K + (_K δ_IK,ξ)_K
- (_K δ_IK)[ _K ×_K + _K _IK×_K ]
}[ξ] ,
where _K and _K denote the resultant contact forces and moments, respectively. For hyperelastic material models with a strain energy density with respect to the reference arc length W = W(_K , _K _IK; ξ), they can be determined by the constitutive relations _K = (∂ W / ∂_K ) and _K = (∂ W / ∂_K _IK).
Assume the line distributed external forces _I = _I (ξ,t) ∈^3 and moments _K =_K (ξ,t) ∈^3 to be given as densities with respect to the reference arc length. Moreover, for i∈{0,1}, point forces _I _i = _I _i(t) ∈^3 and point moments _K _i = _K _i(t) ∈^3 can be applied to the rod's boundaries at ξ_0=0 and ξ_1=1. The corresponding external virtual work functional is defined as
δ W^ext∫_𝒥{ (_Iδ_P)_I + (_K δ_IK)_K } J [ξ]
+ ∑_i = 0^1 [ (_Iδ_P)_I _i + (_K δ_IK)_K _i ]_ξ_i .
In case _I_OP is the line of centroids, the inertial virtual work functional of the Cosserat rod can be written as
δ W^dyn -∫_𝒥{(_I δ_P ) A_ρ_0 (_I_p)^· + (_K δ_IK)
(_K _ρ_0 (_K _IK)^· + _K _IK×_K _ρ_0_K _IK)} J [ξ] ,
where A_ρ_0 is the cross-section mass density and _K _ρ_0 the constant cross-section inertia tensor represented in the cross-section-fixed K-basis.
§ PETROV–GALERKIN FINITE ELEMENT FORMULATION
The rod's parameter space 𝒥 is divided into n_el linearly spaced element intervals 𝒥^e = [ξ^e, ξ^e+1) via 𝒥 = ⋃_e=0^n_el-1𝒥^e. For a p-th order finite element, the closure of each of the intervals 𝒥^e contains p + 1 evenly spaced points ξ^e_i ∈cl(𝒥^e) = [ξ^e, ξ^e+1] with i ∈{0, …, p} such that ξ^e_0 = ξ^e < ξ^e_1 < … < ξ^e_p = ξ^e+1. Note, for e ∈{0, …, n_el -2 }, the points ξ^e_p=ξ^e+1_0 denote the same point ξ^e+1, which is the boundary point of the adjacent element intervals. It is convenient to use both indexations in the following. For a given element interval 𝒥^e = [ξ^e, ξ^e+1), the p-th order Lagrange basis function and derivative of node i∈{0,…,p} are
N^p,e_i(ξ) = 0 ≤ j ≤ p
j≠ i∏ξ - ξ^e_j/ξ^e_i - ξ^e_j and
N^p,e_i,ξ(ξ) = N_i^p,e(ξ) k=0
k ≠ i∑^p1/ξ - ξ^e_k ,
where ξ^e_i, ξ^e_j, and ξ^e_k are the points contained in the set {ξ^e_0 = ξ^e, ξ^e_1, …, ξ^e_p = ξ^e+1}.
The centerline curve _I _OP and the cross-section orientations _IK are approximated by interpolating nodal centerline points _I_OP^e_i(t)∈^3 and nodal transformation matrices _IK^e_i(t)∈(3). For each node i ∈{0,…,p} within element e ∈{0,…, n_el-1}, it will hold that _I_OP^e_i(t) = _I _OP(ξ^e_i,t) and _IK^e_i(t) = _IK(ξ^e_i,t). In contrast to <cit.>, the nodal transformation matrices
_IK^e_i = (^e_i) = 1_3 × 3 + 2 ((^e_i)^2 + p^e_0,i ^e_i) / _i^e^2
are parametrized by nodal non-unit quaternions ^e_i(t) = (p^e_0,i(t), ^e_i(t)) ∈^4 with the scalar part p^e_0,i(t) ∈ and the vectorial part ^e_i(t) ∈^3, see <cit.>. Note that (<ref>) is formulated in such a way to return orthogonal matrices also for non-unit quaternions.
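For clarity, the evaluation of this transformation matrix from a non-unit quaternion can be written compactly; the NumPy sketch below is our own illustration of the formula above (function names are assumptions).

import numpy as np

def skew(v):
    # Linear map j: R^3 -> so(3) with j(v) w = v x w.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_quaternion(P):
    # Orthogonal matrix from the non-unit quaternion P = (p0, p); the division
    # by ||P||^2 keeps the result orthogonal even without normalization.
    p0, p = P[0], P[1:]
    p_tilde = skew(p)
    return np.eye(3) + 2.0 * (p_tilde @ p_tilde + p0 * p_tilde) / (P @ P)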
Accordingly, the N=(p n_el + 1) nodal generalized position coordinates ^e_i(t) = (_I _OP^e_i, ^e_i)(t) ∈^7 are given by the nodal centerline points _I _OP^e_i and the nodal non-unit quaternions ^e_i resulting in n_ = 7N positional degrees of freedom for the discretized rod. The nodal quantities can be assembled in the global tuple of generalized position coordinates (t) = (^0_0, …, ^0_p-1, …, ^e_0, …, ^e_p-1, …, ^n_el-1_0, …,^n_el-1_p-1, ^n_el-1_p)(t) ∈^n_. For e ∈{0, …, n_el -2 }, the coordinates ^e_p=^e+1_0 refer to the same nodal coordinates. Introducing an appropriate Boolean connectivity matrix _e ∈^7(p+1) × n_, the element generalized position coordinates ^e(t) = (^e_0, …, ^e_p)(t) ∈^7(p+1) can be extracted from via ^e = _e. Note that during a numerical implementation it is advisable to slice arrays instead of multiply them with Boolean matrices.
In the sense of <cit.>, both the nodal centerline points and the cross-section orientations are interpolated by p-th order Lagrangian polynomials. Using the characteristic function χ_𝒥^e𝒥→{0, 1}, which is one for ξ∈𝒥^e = [ξ^e, ξ^e+1) and zero elsewhere, together with the p-th order Lagrange basis functions (<ref>), the ansatz functions for centerline and cross-section orientations are
_I _OP(ξ, ) = ∑_e=0^n_el-1χ_𝒥^e(ξ) ∑_i=0^p
N^p,e_i(ξ) _I _OP^e_i and
_IK(ξ, ) = ∑_e=0^n_el-1χ_𝒥^e(ξ)
∑_i=0^p
N^p,e_i(ξ) (^e_i) .
The discretized version of the curvature strain is computed as
_K _IK = j^-1((_IK_IK,ξ)) / J ,
where the map () = 1/2( - ) ∈(3) extracts the skew-symmetric part of the matrix ∈^3×3. Hence, the curvature can efficiently be computed using j^-1(()) = 1/2 (M_32 - M_23, M_13 - M_31, M_21 - M_12).
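A small NumPy sketch of this curvature evaluation is given below (our own illustration; only the skew-symmetric part of the matrix argument contributes).

import numpy as np

def inv_skew_of_skew_part(M):
    # j^{-1}(skw(M)) for an arbitrary 3x3 matrix M.
    return 0.5 * np.array([M[2, 1] - M[1, 2],
                           M[0, 2] - M[2, 0],
                           M[1, 0] - M[0, 1]])

def discrete_curvature(A_IK, A_IK_xi, J):
    # Discretized curvature strain: j^{-1}(skw(A_IK^T A_IK,xi)) / J.
    return inv_skew_of_skew_part(A_IK.T @ A_IK_xi) / J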
At the same N nodes as for the nodal generalized position coordinates, we introduce the nodal generalized virtual displacements δ^e_i(t) = (_I δ_P^e_i, _K^e_iδ_IK^e_i)(t) ∈^6 given by the nodal virtual centerline displacement _I δ_P^e_i(t) ∈^3 and the nodal virtual rotation _K^e_iδ_IK^e_i(t) ∈^3. In analogy to the generalized virtual displacements, we also introduce the nodal generalized velocities ^e_i(t) = (_I _P^e_i, _K^e_i_IK^e_i)(t) ∈^6 given by the nodal centerline velocity _I _P^e_i(t) ∈^3 and the nodal angular velocity _K^e_i_IK^e_i(t) ∈^3. Similar to the generalized position coordinates , the nodal generalized virtual displacements and velocities are assembled in the global tuple of generalized virtual displacements δ(t) ∈^n_ and velocities (t) ∈^n_. In contrast to the nodal position coordinates, there are only six nodal generalized virtual displacements or velocity coordinates resulting in n_ = 6N generalized virtual displacements or velocity degrees of freedom for the discretized rod.
Consequently, we require a new Boolean connectivity matrix _, e∈^6(p+1) × n_, which extracts the element generalized virtual displacements δ^e(t) = (δ^e_0, …, δ^e_p)(t) ∈^6(p+1) and velocities ^e(t) = (^e_0, …, ^e_p)(t) ∈^6(p+1) from the global quantities via δ^e = _,eδ and ^e = _,e. By further introducing the Boolean connectivity matrices _, i∈^3 × 6(p+1), the nodal virtual centerline displacements _I δ_P^e_i and centerline velocities _I _P^e_i can be extracted from the element generalized virtual displacements δ^e and velocities ^e via _I δ_P^e_i = _, iδ^e and _I _P^e_i = _, i^e, respectively. Identical extraction operations hold for the nodal virtual rotations _K^e_iδ_IK^e_i = _,iδ^e and angular velocities _K^e_i_IK^e_i= _,i^e, where _, i∈^3 × 6(p+1). The test functions are then given by interpolating the nodal generalized virtual displacements by p-th order Lagrangian basis functions (<ref>) in agreement with
_I δ_P(ξ, δ) = ∑_e=0^n_el - 1χ_𝒥^e(ξ) ∑_i=0^p N^p,e_i(ξ) _I δ_P^e_i and
_K δ_IK(ξ, δ) = ∑_e=0^n_el - 1χ_𝒥^e(ξ)∑_i=0^p N^p,e_i(ξ) _K^e_iδ_IK^e_i .
Note that the interpolation of the virtual rotations must be understood in the sense of a Petrov–Galerkin projection, where the virtual rotations are not obtained from a consistent variation of the ansatz functions (<ref>).
To obtain a constant and symmetric mass matrix in the discretized formulation, see (<ref>) below, the velocities are considered as independent fields and are interpolated with the same interpolation as the virtual displacements and rotations as
_I _P(ξ, ) = ∑_e=0^n_el - 1χ_𝒥^e(ξ)∑_i=0^p N^p,e_i(ξ) _I _P^e_i and
_K _IK(ξ, ) = ∑_e=0^n_el - 1χ_𝒥^e(ξ) ∑_i=0^p N^p,e_i(ξ) _K^e_i_IK^e_i .
The independent introduction of velocity fields (<ref>) demands an additional relation defining the coupling between position coordinates and velocity coordinates . This coupling is given by the nodal kinematic differential equations
^e_i =
[ _I_OP^e_i; ^e_i ] =
[ 1_3 × 3 0_3 × 3; 0_4 × 3 (^e_i) ][ _I _P^e_i; _K^e_i_IK^e_i ] =
(^e_i) ^e_i , where () = 1/2[ -; p_0 1_3 × 3 + ] ,
cf. <cit.>. The nodal kinematic equations (<ref>) can easily be assembled to a global kinematic differential equation of the form = (). Note that the kinematic differential equation is linear in too. This allows to write the relation also in the form = (), see <cit.> for more details.
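For illustration, the nodal quaternion part of this kinematic differential equation can be evaluated with the standard non-unit quaternion map; the sketch below is ours and assumes the conventional form of the matrix relating the angular velocity to the quaternion rate.

import numpy as np

def quaternion_rate(P, omega_K):
    # Pdot = Q(P) omega with the standard map Q(P) = 0.5 * [[-p^T], [p0*I + skew(p)]],
    # omega_K being the angular velocity in the cross-section-fixed basis.
    p0, p = P[0], P[1:]
    p_tilde = np.array([[0.0, -p[2], p[1]],
                        [p[2], 0.0, -p[0]],
                        [-p[1], p[0], 0.0]])
    Q = 0.5 * np.vstack((-p[None, :], p0 * np.eye(3) + p_tilde))
    return Q @ omega_K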
Inserting the test functions (<ref>) together with the corresponding approximations for centerline, cross-section orientations (<ref>) and strain measures into (<ref>), the continuous internal virtual work is approximated by δ W^int(; δ) = δ^int(), where the internal generalized forces are computed element-wise by
^int() = ∑_e=0^n_el - 1_, e^int_e(_e ) ,
^int_e(^e) = -∫_𝒥^e∑_i=0^p{ N^p,e_i,ξ_, i_IK_K + N^p,e_i,ξ_, i_K
-N^p,e_i_, i(_K ×_K + _K_IK×_K ) }[ξ] .
Similarly, the external virtual work (<ref>) is discretized by δ W^ext(t, ; δ) = δ^ext(t, ) with
^ext(t, ) = ∑_e=0^n_el - 1_,e^ext_e(t, _e ) + _, 0[_, 0_I _0 +_, 0_K _0 ]_ξ=0 +
_, n_el - 1[_,p_I _1 +_, p_K _1 ]_ξ=1 ,
^ext_e(t, ^e) = ∫_𝒥^e∑_i=0^p{ N^p,e_i_, i_I + N^p,e_i_, i_K } J [ξ]
.
Finally, inserting (<ref>) and (<ref>) into the inertial virtual work functional (<ref>) yields the discrete counterpart δ W^dyn(;δ) = -δ( + ^gyr() ),
where we have introduced the symmetric and constant mass matrix
= ∑_e=0^n_el - 1_,e_e _,e , _e =
∫_𝒥^e∑_i=0^p∑_k=0^p N^p,e_i N^p,e_k{
A_ρ_0_, i_, k
+ _, i_K _ρ_0_, k} J [ξ] ,
and the gyroscopic forces
^gyr() = ∑_e=0^n_el-1_,e_e^gyr(_,e) , ^gyr_e(^e) = ∫_𝒥^e∑_i=0^p N^p,e_i {_, i (_K _IK×_K _ρ_0_K _IK) } J [ξ] .
Element integrals of the form ∫_𝒥^e f(ξ) [ξ] arising in the discretized external and gyroscopic forces, as well as in the mass matrix, are subsequently computed using a Gauss–Legendre quadrature rule with ceil[(p + 1)^2 / 2] quadrature points. To alleviate locking, the internal generalized forces (<ref>) are integrated by a reduced p-point quadrature rule.
Applying the principle of virtual work, which requires the total virtual work functional to vanish, we readily obtain the system dynamics in the form
= () ,
= ^-1(^gyr() + ^int() + ^ext(t, )) ,
where the two lines correspond to the global kinematic differential equation and the equations of motion, respectively. Even though deviations from unit length of ^e_i do not affect the kinematic differential equation, to avoid numerical issues due to quaternion magnitudes near zero or floating point overflow, the nodal quaternions are normalized after each time-step, i.e., ^e_i = ^e_i / _i. For static problems, the n_ = 6N nonlinear generalized force equilibrium equations
0 = ^int() + ^ext()
must be augmented by the N constraint equations
0 = () = (^0_0^2 - 1, …, ^n_el-1_p^2 - 1)
to ensure solvability.
§ NUMERICAL EXPERIMENTS
In the following, the quadratic strain energy density
W(_K , _K _IK; ξ) = 1/2(_K - _K ^0)_(_K - _K ^0) + 1/2(_K _IK - _K _IK^0)_(_K _IK - _K _IK^0)
is used. The superscript 0 refers to the evaluation in the rod's reference configuration. Moreover, _ = diag(EA, GA, GA) and _ = diag(G (I_y + I_z), E I_y, E I_z) denote the diagonal elasticity matrices with constant coefficients given by Saint-Venant’s relations from linear elasticity. Therein, E and G, respectively denote the Young's and shear modulus. The cross-sectional surface is denoted A and I_y, I_z are the respective second moments of area.
§.§ Helical spring
Following <cit.>, we investigate the elongation of an initially curved helical rod due to an applied external force at its tip, pointing in positive _z^I-direction. The rod has a Young's modulus E=10^11 N/m^2 and Poisson's ratio ν=0.2, i.e., a shear modulus G = E / 2 (1 + ν). It has an undeformed shape of a perfect helix with n_c=10 coils, coil radius R=10 mm, wire diameter d=1 mm and unloaded pitch k=5 mm, i.e., a total height of h=50 mm.
In the simulation, the spring was discretized using 75 elements of the presented finite element formulation with p=2. Reduced integration was performed with 2 quadrature points, while 5 points were used for all other integrals. The rod's curved initial configuration was obtained by solving the following minimization problem. Let ξ_j = jm - 1∈ [0, 1] for j ∈{0,1,…,m-1} denote the m linearly spaced evaluation points of the reference helix curve
_I (ξ) = R [ sinφ(ξ); -cosφ(ξ); c φ(ξ) ] , with c = k/2 π R and φ(ξ) = 2 π n_cξ .
Hence, the evaluation of the reference curve (<ref>) at all ξ_j's leads to m target centerline points _I _j = _I (ξ_j). Similarly, the corresponding cross-section orientations are given by evaluating the Serret–Frenet basis _IK_j = (_I _x^K_j _I _y^K_j _I _z^K_j) with _I _x^K_j = _I_,ξ(ξ_j) / _I_,ξ(ξ_j), _I _y^K_j = _I_,ξξ(ξ_j) / _I_,ξξ(ξ_j) and _z^K_j = _I _x^K_j×_I _y^K_j for the individual ξ_j's. Following <cit.>, the centerline positions and cross-section orientations can be assembled in the Euclidean transformations
_j = [ _IK_j _I _j; 0_1×3 1 ] and
(ξ_j) = [ _IK(ξ_j) _I _OP(ξ_j); 0_1×3 1 ] , with
_j^-1 = [ _IK_j^T -_IK_j^T _I _j; 0_1×3 1 ] .
Using the (3)-logarithm map Log_(3) introduced in <cit.>, the optimal initial generalized position coordinates _0 results from the nonlinear least squares problem
_0 = ∈ℝ^n_argmin K() , with K() = 1/2∑_j=0^m-1_j()^2 and _j() = Log_(3)(_j^-1(ξ_j)) ,
in terms of the metric of relative twists. The minimization problem (<ref>) can efficiently be solved using a Levenberg–Marquardt algorithm. The unity constraints of the nodal quaternions (<ref>) can be incorporated into the optimization problem as equality constraints, albeit at the expense of employing a complex constrained nonlinear least squares solver. To simplify the process, we initially solved the unconstrained minimization problem and subsequently applied a projection step to normalize all nodal quaternions.
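To indicate how the target data for this fit can be generated, the sketch below evaluates the reference helix and its Serret–Frenet bases (our own illustration; the least-squares fit itself and the SE(3) logarithm are omitted, and all names are assumptions).

import numpy as np

def helix_targets(m=50, n_c=10, R=10e-3, k=5e-3):
    # Target centerline points and Serret-Frenet cross-section bases of the helix.
    c = k / (2 * np.pi * R)
    xi = np.linspace(0.0, 1.0, m)
    phi = 2 * np.pi * n_c * xi
    r = R * np.stack((np.sin(phi), -np.cos(phi), c * phi), axis=1)
    dphi = 2 * np.pi * n_c
    r_xi = R * dphi * np.stack((np.cos(phi), np.sin(phi), c * np.ones_like(phi)), axis=1)
    r_xixi = R * dphi**2 * np.stack((-np.sin(phi), np.cos(phi), np.zeros_like(phi)), axis=1)
    frames = []
    for t, nvec in zip(r_xi, r_xixi):
        e_x = t / np.linalg.norm(t)
        e_y = nvec / np.linalg.norm(nvec)
        frames.append(np.column_stack((e_x, e_y, np.cross(e_x, e_y))))
    return r, np.array(frames)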
Starting from _0, the maximal force of 100 N was applied within 500 linearly spaced force increments. During each iteration, the nonlinear equations (<ref>) and (<ref>) were solved up to an absolute error of 10^-8. As can be seen in Fig. <ref>, the helical spring initially elongates proportional to the applied load. This is in line with classical helical spring theory <cit.>, which assumes a linear force-displacement relation with linear equivalent stiffness G d^4 / (64 n_c R^3) ≈ 65.1 N/m. When the elongation exceeds a certain value (approx. 10 N), the linear theory does not longer agree with the numerically obtained nonlinear solution. This observation was also made by <cit.> and can be explained as follows. The helical spring unwinds gradually and approaches slowly a straight line with an altered linear stiffness EA. For comparison, we also solved the problem with the two-node (3)-interpolation strategy proposed in <cit.>, using the same number of unknowns. As depicted in Fig. <ref>, the results are in line with the proposed quaternion formulation.
§.§ Wilberforce pendulum
More than 100 years ago, Lionel Robert Wilberforce did investigations On the Vibrations of a Loaded Spiral Spring <cit.>. The experimental setup can be described as follows. While one end of a helical spring is clamped, at the other end a cylindrical bob is attached, see Fig. <ref>. When the cylinder in the gravitational field is displaced vertically, it starts oscillating up and down. Due to the coupling of bending and torsion of the deformed spring an additional torsional oscillation around the vertical axis of the cylinder is induced. When the cylinder's moment of inertia is properly adjusted, a beat phenomenon can be observed. In that case, the envelope of the vertical and torsional oscillations possess an almost perfect phase shift of π/2, i.e., the maximal amplitude of the vertical oscillations coincide with a zero torsional amplitude and vice versa.
To have a benchmark example that can be reproduced with reasonable computational effort, we introduce here a Wilberforce pendulum consisting of a spring with three coils modeled as a precurved rod. The rod has the properties of steel with mass density ρ_0=7850 kg/m^3, shear modulus G=81·10^9 N/m^2 and Poisson's ratio ν=0.23, i.e., a Young's modulus E = 2 G (1 + ν)=199·10^9 N/m^2. The undeformed shape is given by a perfect helix with n_c=3 coils, coil radius R=16 mm, wire diameter d=1 mm and an unloaded pitch of k=1 mm. The bob is modeled as a cylindrical rigid body with radius r=23 mm and height h=36 mm, also having the mass density of steel.
In the simulations, the rod was discretized using 18 elements of the presented Cosserat rod finite element with p=2. Gravitational forces for the rod were neglected. Again, reduced integration was performed with 2 quadrature points, while for all other integrals 5 points were used. The bob was parameterized by the inertial position of the center of mass _I r_OS together with a non-unit quaternion for the orientation. The bob was subjected to gravity with gravity constant g=9.81 m/s^2. For the governing equations describing such a parameterized rigid body under the influence of gravity, we refer to model 4 in <cit.>. Cylinder and rod were rigidly connected by perfect bilateral constraints <cit.>.
Again, the optimal helical initial configuration q_0 was found by solving the minimization problem (<ref>). The system was initialized at rest, i.e., with vanishing initial velocities. The resulting differential algebraic equations were solved using a first-order generalized-alpha method <cit.> for constrained mechanical systems of differential index 3, similar to the implementation found in <cit.>. A constant step-size Δ t = 5·10^-3 s was chosen and the governing equations were solved up to a final time of t_1 = 8 s. Since the example includes high-frequency oscillations, we chose a spectral radius at infinity of ρ_∞ = 0.8. The internal Newton–Raphson method satisfied a tolerance of 10^-8 with respect to the maximum absolute error. In Fig. <ref>, the vertical position and the torsional angle of the rigid cylinder are plotted, clearly showing the beat phenomenon of the Wilberforce pendulum.
Harsch2023a: J. Harsch, S. Sailer, and S. R. Eugster, A total Lagrangian, objective and intrinsically locking-free Petrov–Galerkin SE(3) Cosserat rod finite element formulation, Int. J. Numer. Meth. Eng. 124(13) (2023).
Eugster2023a: S. R. Eugster and J. Harsch, A family of total Lagrangian Petrov–Galerkin Cosserat rod finite element formulations, GAMM Mitt. 46(2) (2023).
Betsch2002: P. Betsch and P. Steinmann, Frame-indifferent beam finite elements based upon the geometrically exact beam theory, Int. J. Numer. Meth. Eng. 54(12), 1775–1788 (2002).
Romero2002: I. Romero and F. Armero, An objective finite element approximation of the kinematics of geometrically exact rods and its use in the formulation of an energy-momentum conserving scheme in dynamics, Int. J. Numer. Meth. Eng. 54, 1683–1716 (2002).
Marino2017: E. Marino, Locking-free isogeometric collocation formulation for three-dimensional geometrically exact shear-deformable beams with arbitrary initial curvature, Comput. Method Appl. M. 324, 546–572 (2017).
Harsch2021a: J. Harsch, G. Capobianco, and S. R. Eugster, Finite element formulations for constrained spatial nonlinear beam theories, Math. Mech. Solids 26(12), 1838–1863 (2021).
Rucker2018: C. Rucker, Integrating rotations using nonunit quaternions, IEEE Robot. Autom. Lett. 3(4), 2979–2986 (2018).
Berg1991: R. E. Berg and T. S. Marshall, Wilberforce pendulum oscillations and normal modes, Am. J. Phys. 59(1), 32–38 (1991).
Wilberforce1894: L. R. Wilberforce, On the vibrations of a loaded spiral spring, Lond. Edinb. Dublin Philos. Mag. J. Sci. 38(233), 386–392 (1894).
Sailer2020: S. Sailer, S. R. Eugster, and R. I. Leine, The tippedisk: a tippetop without rotational symmetry, Regul. Chaotic Dyn. 25(6), 553–580 (2020).
Geradin2001: M. Géradin and A. Cardona, Flexible Multibody Dynamics: A Finite Element Approach (Wiley, 2001).
Jansen2000: K. E. Jansen, C. H. Whiting, and G. M. Hulbert, A generalized-α method for integrating the filtered Navier–Stokes equations with a stabilized finite element method, Comput. Method Appl. M. 190(3), 305–319 (2000).
Arnold2007: M. Arnold and O. Brüls, Convergence of the generalized-α scheme for constrained mechanical systems, Multibody Syst. Dyn. 18(2), 185–202 (2007).
|
http://arxiv.org/abs/2307.03928v1 | 20230708080247 | Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy | [
"Georgios Kaissis",
"Jamie Hayes",
"Alexander Ziller",
"Daniel Rueckert"
] | cs.CR | [
"cs.CR",
"cs.AI"
] |
Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy
Georgios Kaissis, Jamie Hayes, Alexander Ziller, Daniel Rueckert
August 12, 2023
=========================================================================================================
We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
Previous research has demonstrated that differential privacy (DP) mechanisms also provide ReRo, but so far, only asymptotic Monte Carlo estimates of a tight ReRo bound have been shown.
Directly computable ReRo bounds for general DP mechanisms are thus desirable.
In this work, we establish a connection between hypothesis testing DP and ReRo and derive closed-form, analytic or numerical ReRo bounds for the Laplace and Gaussian mechanisms and their subsampled variants.
§ INTRODUCTION
In the rapidly advancing field of machine learning (ML), the importance of preserving privacy cannot be understated, particularly in critical tasks where privacy may be compromised through attacks on unprotected ML models.
Among these, membership inference (MI) poses a considerable risk <cit.>.
Here, an adversary attempts to determine whether a candidate record was part of the model's training database.
Differential privacy (DP) <cit.> plays a crucial role as a safeguard against privacy risks in ML.
Its guarantees can be interpreted in terms of the protection it offers against MI, a notion termed the hypothesis testing interpretation of DP <cit.>.
Broadly speaking, protecting against MI also serves to protect against all weaker forms of attack <cit.>.
For example, data reconstruction (DR) attacks <cit.>, where adversaries attempt to extract records from the model's weights or gradients <cit.>, are also prevented by DP mechanisms.
In fact, it can be shown that protecting against DR requires substantially less noise than protecting against MI <cit.>.
Recent works have proposed formal bounds tailored to DR.
For instance, Guo et al. <cit.> frame DR as a signal estimation problem and use the properties of the Fisher information matrix to lower-bound reconstruction error.
Moreover, Guo et al. <cit.> utilise Fano's inequality to bound the mutual information between the training data and the model's parameters.
Last but not least, Balle et al. <cit.> recently proposed Reconstruction Robustness (ReRo), which serves as a high-probability bound on successful DR.
Moreover, this work's authors prove a strong relationship between DP and ReRo in the sense that (Rényi-)DP <cit.> implies ReRo (and vice versa under some preconditions).
Very recently, Hayes et al. <cit.> strengthened the aforementioned results by circumventing the utilisation of Rényi-DP and bounding ReRo directly.
In this work, we expand upon the previous investigations on ReRo, which we regard as the most promising DR bound (as it both outperforms previous DR guarantees and is closely matched by the results of empirical DR attacks against ML models).
The aforementioned work by Hayes et al. <cit.> limits its purview to DP-SGD <cit.> and utilises a Monte Carlo (MC) technique to estimate the ReRo bound.
This MC bound only holds asymptotically and cannot be used efficiently in workflows involving large datasets.
Methods to directly obtain ReRo upper bounds for arbitrary datasets and mechanisms (e.g. also the Laplace mechanism and its subsampled variant), would thus be of value to practitioners.
Contributions
The contributions of our work are as follows:
(1) We extend the work of Hayes et al. by proposing ReRo bounds derived from the hypothesis testing interpretation of DP.
(2) We furnish closed-form bounds for the Gaussian and Laplace mechanisms and provide an analytic formulation for the Poisson-sampled Gaussian and Laplace mechanisms using an Edgeworth series.
Both techniques are very efficient in terms of memory and run time, even for very large datasets and across broad ranges of the mechanism parameters.
(3) We experimentally corroborate the accuracy of our bounds against a numerical ground truth, provide the first ReRo bounds for ImageNet-scale workflows and explain a finding by <cit.> regarding differences in ReRo bounds when DP-SGD parameters are varied at a fixed (ε, δ)-value.
Background
We assume familiarity with the fundamentals of DP and omit a detailed introduction due to space constraints.
In brief, we will focus on the global model of DP and the add/remove one adjacency relation between databases D and D'.
An extension to replacement adjacency is straightforward.
We will denote the deterministic query function (e.g. a single step of SGD outputting a gradient containing sensitive information) by q and its global sensitivity by Δ with an appropriate subscript to indicate the order of the norm it is measured in.
We will use ℳ for an (additive noise) mechanism, i.e. the Laplace mechanism (LM), Gaussian mechanism (GM) or their Poisson-subsampled variants (SLM and SGM).
For details on subsampling, we refer to <cit.>; in brief, to realise Poisson subsampling, each record in a database participates in the query with individual probability p.
In the hypothesis testing interpretation of DP, we presume that an adversary 𝒜 who has complete knowledge of D, D', q, and all specifications of ℳ observes a mechanism output y and must decide: ℋ_0: y ∼ℳ(D) vs. ℋ_1: y ∼ℳ(D').
ℋ_0 and ℋ_1 are called the null and alternative hypothesis, respectively.
We stress that the only unknown in the aforementioned hypothesis testing problem is the exact noise draw realised by ℳ.
The privacy guarantee of ℳ thus expresses how difficult it is to distinguish between the distributions ℳ(D) and ℳ(D') as measured in terms of trade-off between the fundamental errors of hypothesis testing: the Type-I error α and the Type-II error β.
Since the aforementioned hypothesis testing problem is one between two simple hypotheses, 𝒜 is endowed with the optimality properties furnished by the Neyman-Pearson (NP) lemma <cit.>.
In other words, their test has the highest power 1-β at any given level α∈ [0, 1].
f-DP <cit.> utilises a trade-off function T: α↦β to express DP guarantees.
Concretely, let ϕ be a rejection rule for the aforementioned hypothesis testing problem.
Then, T(ℳ(D), ℳ(D'))(α) = inf_ϕ{β_ϕ|α_ϕ≤α}.
A mechanism is said to satisfy f-DP, if, for all α∈ [0,1] and all adjacent D, D' it holds that T(ℳ(D), ℳ(D'))(α) ≥ f(α), where f is some reference trade-off function.
The inf_ϕ means that, by definition, f-DP only considers the rejection rule with the highest power among all realisable rejection rules at the same level α, which is consistent with the optimality properties of 𝒜.
For rejection rules with asymmetric trade-off functions (e.g. for sub-sampled mechanisms), one must also consider T^-1=T(ℳ(D'), ℳ(D)) and obtain the symmetrised/convexified curve C(T, T^-1).
This is important as the DP guarantee must hold identically for the add one and the remove one adjacency relations.
A mechanism whose trade-off function is β(α)=1-α, i.e. the off-diagonal of the unit square, offers perfect privacy.
As a worst-case guarantee, f-DP thus additionally only considers the trade-off function which is farthest from this off-diagonal, corresponding to the pair of mechanism distributions exhibiting the greatest effect size.
This pair is called the dominating pair of a mechanism <cit.>.
For the GM, the dominating pair is (𝒩(0, σ^2), 𝒩(Δ_2, σ^2)) and for the LM it is (Lap(0, b), Lap(Δ_1, b)).
For the SGM two pairs must be considered: (𝒩(0, σ^2), (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2)) and ((1-p)𝒩(Δ_2, σ^2)+p𝒩(0, σ^2), 𝒩(Δ_2, σ^2)); this transfers to the SLM by replacing the Gaussian by the Laplace density.
ReRo <cit.> is an upper bound on the probability of a successful DR attack.
In this work, we will study ReRo under a pessimistic threat model which is very similar to that of DP:
𝒜 has access to all database records and executes a DR attack R on a model w outputting a reconstructed record z^∗∼ R(w).
The goal of 𝒜 is to select the correct database record z corresponding to z^∗ (i.e. record matching).
Formally, let π denote 𝒜's prior distribution (i.e. auxiliary information) and let ρ be a reconstruction error function.
Then, ℳ satisfies (η, γ)-ReRo if, for any fixed D, it holds that ℙ_z∼π, w ∼ℳ(D ∪{ z })(ρ(z, R(w))≤η) ≤γ.
Note the difference to DP: ReRo is defined purely through the add one adjacency relation.
The authors of <cit.> directly show that mechanisms whose output distributions satisfy a bound on the so-called blow-up function ℬ_κ(η) also satisfy ReRo.
Concretely, let μ and ν be ℳ's dominating pair distributions for the add one adjacency relation and E be a measurable event.
Then, ℳ satisfies (η, γ)-ReRo with respect to a prior κ(η) with γ = ℬ_κ(η)(μ, ν) = sup{ℙ_μ(E) |ℙ_ν(E) ≤κ(η) }.
Throughout, we follow <cit.> and let ρ=1(z≠ z^∗) (i.e. an exact match) and assign a uniform prior κ(η)=1/n, where n can e.g. be the cardinality of the database, since 𝒜 has an a priori probability of 1/n to select the correct candidate record without observing R(w), or some more pessimistic fixed prior, e.g. 1/10.
Although general hypothesis testing theory is used in <cit.> to prove the ReRo bound for DP mechanisms, the authors do not directly use f-DP to bound ReRo and instead estimate γ using MC (Algorithm 1 of <cit.>).
This strategy has the drawback of holding only at the limit as the number of MC samples approaches infinity and is impracticable for very large n or very small κ.
Next, we will show that ℬ_κ(η)(μ, ν) has a natural hypothesis testing interpretation, allowing us to circumvent the MC procedure and directly bound γ.
§ RERO BOUNDS FOR DP MECHANISMS THROUGH HYPOTHESIS TESTING
We begin by expressing ℬ_κ(η) in terms of the hypothesis testing problem between ℳ(D) and ℳ(D').
Assume that 𝒜 employs a rejection rule ϕ with power 1-β_ϕ(α) at a pre-selected level α.
Consistent with the worst-case guarantee, we will only consider the rejection rule with the highest power among all realisable rejection rules and denote this supremum power as 𝒫(α).
Two remarks are in order. (1) We make no further specifications about the rejection rule.
Therefore, although we will consider the DP threat model which assumes an optimal ϕ using the likelihood ratio test statistic evaluated at the dominating pair, all results transfer to threat model relaxations, provided the realisable rejection rules and their corresponding test statistics can be specified.
(2) We formulate our results in terms of the test ℋ_0:ℳ(D) vs. ℋ_1:ℳ(D') because we only need to bound the add one adjacency relation to bound ReRo.
The upshot of this choice can be seen in Figure 1, panel f.
If ℳ upper-bounds the adversary's supremum power 𝒫(α), then it also satisfies (η, 𝒫(κ(η)))-ReRo for a prior κ.
In particular, if ℳ satisfies f-DP, it also satisfies (η, 1-f(κ(η)))-ReRo and if it satisfies (ε, δ)-DP, it also satisfies (η, min{e^εκ(η) + δ, 1})-ReRo.
The special case of (ε, 0)-DP appeared previously in <cit.>.
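As a small numerical illustration of the theorem (a sketch, not code from the paper), both bounds can be evaluated directly; `f` stands for any trade-off function of the mechanism at hand:

import numpy as np

def rero_from_dp(eps, delta, kappa):
    # (eps, delta)-DP implies (eta, gamma)-ReRo with gamma = min(e^eps * kappa + delta, 1).
    return min(np.exp(eps) * kappa + delta, 1.0)

def rero_from_tradeoff(f, kappa):
    # f-DP implies (eta, gamma)-ReRo with gamma = 1 - f(kappa).
    return 1.0 - f(kappa)

print(rero_from_dp(eps=2.0, delta=1e-6, kappa=0.1))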
The theorem's main advantage is that it allows us to think about the relationship between DP and ReRo in terms of statistical power analysis, for which robust tools and an extensive body of theory exist.
Moreover, it explains the finding by <cit.> that directly bounding ReRo using ℬ_κ(η)(μ, ν) instead of taking a detour via Rényi DP results in a tighter bound: ReRo has a natural hypothesis testing interpretation, whereas Rényi DP does not <cit.>.
Furthermore, the theorem establishes ReRo as a weaker guarantee than f-DP in the sense that f-DP bounds 𝒜's supremum power at all levels α∈ [0,1], whereas ReRo is a bound on the supremum power at a single level α = κ(η).
Consequently, achieving ReRo is easier (i.e. requires less noise) than achieving f-DP.
In terms of concrete mechanisms, we obtain the following results:
Let μ_1 = Δ_1/b and
f_Lap(α, μ_1) =
 1 - α e^μ_1,   if α < e^-μ_1/2,
 e^-μ_1/(4α),   if e^-μ_1/2 ≤ α ≤ 1/2,
 (1-α) e^-μ_1,   if α > 1/2.
Then, the LM satisfies (η, γ)-ReRo with γ = 1-f_Lap(κ(η), μ_1).
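A direct transcription of Corollary 1 into Python might look as follows (illustrative sketch; the branch boundaries follow the closed-form trade-off function above):

import numpy as np

def f_lap(alpha, mu1):
    # trade-off function of the Laplace mechanism, mu1 = Delta_1 / b
    if alpha < 0.5 * np.exp(-mu1):
        return 1.0 - alpha * np.exp(mu1)
    if alpha <= 0.5:
        return np.exp(-mu1) / (4.0 * alpha)
    return (1.0 - alpha) * np.exp(-mu1)

def rero_laplace(kappa, mu1):
    # (eta, gamma)-ReRo bound of the LM at prior kappa
    return 1.0 - f_lap(kappa, mu1)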
Let μ_2 = Δ_2/σ and f_Gauss(α, μ_2) = Φ(Φ^-1(1-α) - μ_2), where Φ and Φ^-1 are the cumulative distribution and quantile function of the standard normal distribution. Under N-fold homogeneous composition, the GM satisfies (η, γ)-ReRo with γ = 1-f_Gauss(κ(η), √(N)μ_2).
Under heterogeneous composition of mechanisms with μ_a, μ_b, …, we have γ = 1-f_Gauss(κ(η), √(μ_a^2+μ_b^2+…)).
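Corollary 2 is equally easy to evaluate numerically; the following sketch uses scipy's standard normal distribution for Φ and Φ^-1:

import numpy as np
from scipy.stats import norm

def f_gauss(alpha, mu):
    # Gaussian trade-off function f_Gauss(alpha, mu) = Phi(Phi^{-1}(1 - alpha) - mu)
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

def rero_gaussian(kappa, mu2, N=1):
    # N-fold homogeneous composition of the GM: mu -> sqrt(N) * mu2, mu2 = Delta_2 / sigma
    return 1.0 - f_gauss(kappa, np.sqrt(N) * mu2)

def rero_gaussian_hetero(kappa, mus):
    # heterogeneous composition: mu -> sqrt(mu_a^2 + mu_b^2 + ...)
    return 1.0 - f_gauss(kappa, np.sqrt(np.sum(np.square(mus))))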
These two corollaries allow us to obtain an exact bound on ReRo for the respective mechanisms.
Unfortunately, the trade-off functions for the LM under composition and for the SLM and SGM are not available in closed form.
Three distinct options exist for evaluating these functions:
(1) Compute the trade-off functions numerically either through direct numerical integration or e.g. using the technique by <cit.>.
This approach can be optimal in the sense that it can provide an exact bound up to numerical precision (or with a controlled error tolerance).
To obtain a valid ground truth, we use direct numerical integration by performing a grid discretisation over G points and using an arbitrary-precision floating point library such as <cit.>.
This technique is extremely time-consuming, as (for N composition steps) it requires G · N numerical integrations (in neural network applications N = 𝒪(10^4)) and thus only serves as a gold standard.
An approach using the technique by <cit.> can be found in the appendix.
(2) One can leverage an analytic (e.g. Edgeworth or saddle-point) finite sample approximation to the trade-off function which can be computed in constant time for homogeneous composition.
Such approximations are a cornerstone of statistical power analysis <cit.>, and have been previously used for (ε, δ)-DP accounting <cit.>.
For our experiments, we use an improved version of the technique proposed by <cit.>, i.e. a fourth order Edgeworth approximation, which has error 𝒪(N^-2).
(3) Asymptotically, the trade-off function of a (Poisson-)subsampled mechanism with sampling rate p converges to f_Gauss(α, μ̃) with μ̃ = p√(N(e^μ_2^2-1)) when p√(N) converges to a positive constant as the number of compositions N→∞ <cit.>.
This so-called CLT approximation is essentially an order zero Edgeworth approximation and has an error of 𝒪(N^-1/2).
We note that, although the MC technique of <cit.> has a nominally even higher error rate of 𝒪((κ N)^-1/2), it performs better than the CLT approximation in practice because it is unbiased, whereas the CLT approximation presupposes that the approximated trade-off function is Gaussian, which leads to poor performance when its assumptions are violated (see experiments below and <cit.> for discussion).
Independent of the technique used to approximate the trade-off function, we can formulate the following results:
Let f̃_SLM(α, μ_1, N, p) denote the approximate trade-off function for the SLM with sampling rate 0<p≤ 1 under N-fold composition using one of the approximation techniques above.
Then, the SLM satisfies (η, γ)-ReRo with γ≈ 1-f̃_SLM(κ(η), μ_1, N, p).
Similarly, let f̃_SGM(α, μ_2, N, p) denote the approximate trade-off function for the SGM with sampling rate 0<p<1.
Then, the SGM satisfies (η, γ)-ReRo with γ≈ 1-f̃_SGM(κ(η), μ_2, N, p).
Note that for the SGM, when p=1, we revert to the GM and can use the closed-form bound from Corollary 2 (see Figure 1c below).
We remark for completeness that heterogeneous composition is also possible using the techniques above and that approximations are not necessarily valid upper bounds unless verified, e.g. using the technique by <cit.>.
We omit a detailed discussion of these points due to space constraints.
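For the SGM, the cheapest of the three options above, the CLT approximation, yields a particularly compact ReRo estimate. The sketch below implements it (valid only in the asymptotic regime discussed above; the example parameters are hypothetical DP-SGD settings, not taken from the paper):

import numpy as np
from scipy.stats import norm

def rero_sgm_clt(kappa, mu2, N, p):
    # CLT (order-zero Edgeworth) approximation of the SGM after N steps with
    # sampling rate p: f ~ f_Gauss(alpha, mu_tilde), mu_tilde = p*sqrt(N*(exp(mu2^2)-1)).
    mu_tilde = p * np.sqrt(N * (np.exp(mu2**2) - 1.0))
    return 1.0 - norm.cdf(norm.ppf(1.0 - kappa) - mu_tilde)

# hypothetical ImageNet-style schedule: noise multiplier 2.5, batch 4096 of 1.2M records, 70k steps
print(rero_sgm_clt(kappa=1e-5, mu2=1.0 / 2.5, N=70_000, p=4096 / 1.2e6))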
§ EXPERIMENTAL EVALUATION AND CONCLUSION
Figure <ref> compares the MC estimate <cit.> of γ (10^6 samples) at a fixed prior κ to the asymptotic CLT approximation <cit.>, the fourth-order Edgeworth approximation and the Ground Truth computed by numerical integration (a,b,d) or in closed form (c).
γ is plotted against the effect size (Δ_1/b or Δ_2/σ), corresponding to increasing privacy loss: a: ε_max=20, b/c: ε_max=100, d: ε_max=𝒪(10^8) at δ=10^-6 for c/d.
Observe that in panel d, the MC algorithm of <cit.> already has too high variance to provide an accurate estimate of γ.
This means that the analysis of ImageNet-sized datasets where the values of κ and p are very low and the number of steps N is very high is infeasible using MC (or the Ground Truth) due to memory or time constraints.
In contrast, estimating γ using the Edgeworth approximation yields excellent precision at a constant memory consumption and run time of only about 1.5s, exactly matching the Ground Truth.
Panel e shows γ as a function of κ for a very low p and a very large N, similar to the hyperparameters used by <cit.> when training ImageNet from scratch.
Even at κ=10^-7, our presented techniques allow for estimating γ, and the CLT approximation matches the Edgeworth approximation very well.
Further examples of CIFAR-10 and ImageNet workflows are shown in the Appendix.
Panel f explains the observation by <cit.>, where, at a constant (ε, δ), the authors find that different sampling rates p lead to different values of γ.
The crux of this finding is that the authors of <cit.> choose mechanism parameter combinations which result in the same privacy guarantee in terms of a single (ε, δ)-pair (recall that two SGMs are only identical if they coincide for all possible (ε, δ)-pairs).
Thus, mechanisms with different p are fundamentally distinct and thus lead to different γ values across the range of κ.
In particular, the trade-off function (and thus the ReRo bound function) is increasingly asymmetric at low values of p.
As seen in the figure, for κ=0.1 (used by <cit.>), γ is lower at p=0.1 (blue) compared to 0.9 (lavender), matching Figure 6 of <cit.>.
A detailed discussion on this topic can be found in the Appendix.
Conclusion
In this work, we expanded on the connection between ReRo and DP by leveraging hypothesis testing theory and techniques from statistical power estimation.
This allowed us to formulate refined ReRo bounds for relevant DP mechanisms and propose techniques to estimate them with high precision across a broad range of use-cases.
Our results can thus help ML practitioners to evaluate the vulnerability of sensitive data processing systems against data reconstruction attacks, thereby increasing user trust.
In future work, we intend to assess ReRo bound tightness for large vision and language models/datasets, provide ReRo bounds in the shuffle model of DP and for individual privacy accounting schemes, and expand our analysis to non-uniform priors, other reconstruction error functions, and heterogeneous compositions.
§ APPENDIX
§.§ Proofs
Proof of Theorem 1
Let y be a mechanism output, μ, ν be the dominating pair distributions of ℳ and κ(η) ∈ [0,1] be a prior.
Since E is an arbitrary measurable event, we can fix E to be the event of rejecting ℋ_0 (this mirrors the event definitions in Corollary 3 of <cit.> and standard hypothesis testing theory).
Moreover, let ϕ be a rejection rule for ℋ_0: y ∼ν and ℋ_1: y ∼μ.
This is without loss of generality since f can always be considered (or made) symmetric, and thus the following statements also hold when the role of the hypotheses is exchanged, although this is not required to bound ReRo, which only considers the add one adjacency relation.
From the definition of ReRo, γ = ℬ_κ(η)(μ, ν) = sup{ℙ_μ(E) |ℙ_ν(E) ≤κ(η) }.
From our assumption above, ℙ_μ(E)=1-β_ϕ (correctly reject ℋ_0 given ℋ_1) and ℙ_ν(E) = α_ϕ (wrongly reject ℋ_0 given ℋ_0).
Substituting, we obtain γ = sup{ 1-β_ϕ | α_ϕ≤κ(η) }.
In other words, γ exactly corresponds to the supremum power of ϕ given a pre-selected bound on Type-I error rate, i.e. γ = 𝒫(α), and thus a bound on γ is implied by a bound on 𝒫(α) with α=κ(η).
To prove the ReRo bound implied by f-DP, we consider the definition of the trade-off function: f(κ(η)) = inf{β_ϕ | α_ϕ ≤ κ(η)}.
Since f is convex, continuous and non-increasing on the unit square, 1-f(κ(η)) = sup{ 1-β_ϕ | α_ϕ ≤ κ(η)} = 𝒫(κ(η)) = γ.
We note that the reverse does not hold in general: bounding ReRo through a bound on γ implies a bound on 𝒫(α) for a specific level α, whereas f-DP implies a bound on 𝒫(α) at all levels α∈ [0,1].
To prove the ReRo bound implied by (ε, δ)-DP, we leverage a result by <cit.>, who show that, if a mechanism satisfies (ε, δ)-DP, it imposes a bound on the power 1-β at a level α of the optimal hypothesis test ϕ such that 1-β_ϕ(α_ϕ) ≤ e^εα_ϕ + δ, i.e. 𝒫(α) ≤ e^εα + δ.
Finally, we substitute κ(η) as the desired level α_ϕ and take the min since γ is a probability, from which the claim follows.
Algorithm 1 of <cit.> essentially computes an MC estimate of the complementary trade-off function for 𝒩(0, σ^2) vs. (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2).
The sampling inefficiency and high variance at small values of κ is due to the fact that the algorithm draws S MC samples but discards all but ⌈κ· S ⌉ of them.
This percolates to extreme parameter regimes such as the ones discussed above, necessitating orders of magnitude more samples to be drawn to correctly estimate the bound, which eventually becomes infeasible due to memory constraints.
In terms of the distributions of the likelihood ratio test statistics under ℋ_0 and ℋ_1, constructing 𝒫(α) corresponds to the following steps:
Per the Neyman-Pearson lemma, the optimal test ϕ is realised by thresholding the likelihood ratio test statistics.
Let c be the critical value for rejecting ℋ_0.
Then, (1) determine the value of c for which α_ϕ(c)<κ(η) by computing the quantile function of the test statistic under ℋ_0 at 1-κ(η) and
(2) compute the complementary cumulative distribution function (i.e., the power) of the test statistic under ℋ_1 evaluated at c.
The likelihood ratios under ℋ_0 and ℋ_1 are also called the privacy loss random variables in DP.
The equivalence between the privacy loss random variables and the test statistics from which the trade-off function f is computed represents the intuitive link between f-DP, (ε, δ)-DP and ReRo and reinforces the pivotal role of the privacy loss random variable.
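The construction above can also be mimicked by Monte Carlo sampling of the privacy loss random variables. The following sketch does this for the SGM dominating pair of the add-one relation (an illustration in the spirit of the MC estimator discussed above, not a reproduction of the cited algorithm; it inherits the same inefficiency at small κ, and memory grows with S·N):

import numpy as np

def mc_rero_sgm(kappa, sigma, Delta, p, N, S=100_000, seed=0):
    rng = np.random.default_rng(seed)

    def log_lr(y):
        # summed per-step log-likelihood ratio dH1/dH0 with
        # H0: N(0, sigma^2), H1: (1-p) N(0, sigma^2) + p N(Delta, sigma^2)
        return np.log1p(p * (np.exp((y * Delta - 0.5 * Delta**2) / sigma**2) - 1.0)).sum(axis=1)

    y0 = rng.normal(0.0, sigma, size=(S, N))                   # samples under H0
    y1 = rng.normal((rng.random((S, N)) < p) * Delta, sigma)   # samples under H1
    c = np.quantile(log_lr(y0), 1.0 - kappa)                   # critical value: type-I error <= kappa
    return np.mean(log_lr(y1) > c)                             # estimated power, i.e. gamma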
The Edgeworth approximation utilises the cumulant generating functions of the likelihood ratio test statistics computed numerically, followed by a series approximation combined with a numerical inverse for the quantile function.
The CLT approximation is equivalent to an Edgeworth approximation of order zero, rendering it quite inflexible, which explains its poor performance when its assumptions are violated.
Proof of Corollaries 1 and 2
The claims of both corollaries follow directly from the closed-form expressions of the trade-off functions of the LM and the GM.
The derivations of the trade-off functions themselves can be found e.g. in <cit.>.
Proof of Corollary 3
The claims follow from the ReRo bound implied by f-DP proven in Theorem <ref>.
We remark that since we are dealing with trade-off function approximations, minimising the approximation error is crucial for obtaining an exact bound on γ.
§.§ Supplementary Figure
The following figure illustrates further scenarios in which the Edgeworth and CLT approximation yield excellent results, whereas the MC technique of <cit.> would not be usable due to an impracticably high number of MC samples required to obtain an accurate estimate.
Moreover, in these scenarios, the numerical ground truth would take on the order of weeks to compute and is thus unavailable.
In contrast, the Edgeworth and CLT approximations are computable in constant time.
Moreover, the assumptions of the CLT approximation kick in for these parameter values and thus the two methods yield identical results.
The top figure row shows ReRo bounds for CIFAR-10-style workflows with hyperparameters taken from Table 13 of <cit.> (left) and an even smaller sampling rate (right), whereas the bottom row shows ImageNet-style workflows with the hyperparameters from Table 15 of <cit.> (left) and an even smaller batch size (right).
The bottom right panel is identical to Figure 1, panel e in the main manuscript.
For all panels, κ∈ [10^-7, 10^-1].
§.§ Experimental details
Conversion to (ε, δ)-DP
Conversions to (ε, δ)-DP were performed as follows:
* For the LM, following the simple composition theorem: ε = NΔ_1/b.
* For the SLM, following <cit.>: ε = log(1 + p(e^NΔ_1/b-1)).
* For the GM, following <cit.>: Compute δ(ε) = Φ(-σε/Δ_2 + Δ_2/2σ) - e^εΦ(-σε/Δ_2 - Δ_2/2σ), then solve for ε at a given δ numerically.
* For the SGM, following <cit.>: Compute the (symmetrised) trade-off function, then compute the convex conjugate numerically and solve for ε at a given δ.
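For reference, the first three conversions above can be scripted directly; the sketch below mirrors those bullet points (the GM profile is inverted numerically, assuming the requested δ is attainable, i.e. smaller than δ(0)):

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def eps_laplace(N, Delta1, b):
    return N * Delta1 / b                                 # simple composition

def eps_subsampled_laplace(N, Delta1, b, p):
    return np.log1p(p * (np.exp(N * Delta1 / b) - 1.0))   # amplification by subsampling

def delta_gaussian(eps, Delta2, sigma):
    # privacy profile delta(eps) of the Gaussian mechanism
    a = sigma * eps / Delta2
    c = Delta2 / (2.0 * sigma)
    return norm.cdf(-a + c) - np.exp(eps) * norm.cdf(-a - c)

def eps_gaussian(delta, Delta2, sigma, eps_hi=200.0):
    # numerically invert delta(eps) at the requested delta
    return brentq(lambda e: delta_gaussian(e, Delta2, sigma) - delta, 0.0, eps_hi)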
Details of numerical techniques
The numerical Ground Truth was evaluated by using the technique proposed in Section 5.1 of <cit.> with G=1 000 grid points and using 25 digits of numerical precision in <cit.> (for reference, a 64-bit floating point value provides ≈ 15 digits of precision).
We recall that this technique requires one numerical integral per step N and grid point, rendering it extremely time consuming and impracticable for any use beyond establishing a gold standard.
The fourth-order Edgeworth approximation was computed as previously described (see Section 3.1 of <cit.>).
However, we expanded the Edgeworth series up to order four as described in the main manuscript.
Moreover, the original work <cit.> approximates the trade-off function for only one of the two dominating pairs of the SGM, (𝒩(0, σ^2), (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2)).
Whenever required (e.g. for conversions to (ε, δ)-DP or for Figure 1, panel e), we also instantiated the trade-off function for the other dominating pair ((1-p)𝒩(Δ_2, σ^2)+p𝒩(0, σ^2), 𝒩(Δ_2, σ^2)) and obtained the symmetrisation/convexification of the two trade-off functions, in line with the assumption that the trade-off function is symmetric.
Monte Carlo (MC) estimation of γ was performed according to Algorithm 1 of <cit.>.
All MC experiments were performed with S=1 000 000 samples.
We used multi-core sampling with 16 concurrent processes on a single 2019 Apple MacBook Pro with an 8 core Intel i9 CPU and 64 GB of memory.
The CLT and Edgeworth approximations have constant run time, the latter provided the composition is homogeneous (i.e. the effect size is constant over all N).
In terms of memory usage, the MC algorithm allocates an array of size S · N, where S is the number of MC samples and N is the number of SGM steps.
The numerical Ground Truth, Edgeworth and CLT approximations require constant memory.
§.§ Discussion of ReRo bound sensitivity to subsampling probability
In <cit.>, the authors found that the ReRo upper bound is dependent on the subsampling probability, p.
They showed this by fixing the number of steps in DP-SGD and the gradient clipping norm, and finding a σ that would give a fixed (ε, δ)-DP guarantee across different subsampling rates.
In <cit.>, the authors chose a small number of steps (100) for this experiment, due to the computational overhead of their MC estimation method.
For this relatively small number of compositions, the CLT assumption for Gaussian DP is not yet fully in effect, meaning that the mechanisms the authors selected at different values of p are not identical; they only intersect for a specific choice of ε and δ.
We plot this in <Ref>, and show the corresponding trade-off curves for the add one and remove one adjacency relations along with their symmetrised version (see Definition F.1 in <cit.>).
When the CLT does not apply, this comparison is across fundamentally distinct mechanisms with different trade-off curves, and so the upper bounds for ReRo are different.
We compare different values of subsampling probabilities when the CLT is assumed to hold (number of steps N=10,000).
From <Ref>, the three mechanisms intersect at identical (ε, δ) pairs, and so they are identical mechanisms.
In the right figure, we plot the trade-off curves under the assumption that the CLT is valid [We use μ̃ = p √(N ( e^1/σ^2 -1 )), so the collection of all σ that obtain the same μ̃ can be found through σ = 1/√(log(1 + μ̃^2/(N p^2))); see <cit.> for details.].
For each mechanism, we also numerically compute its privacy profile using <cit.> and convert to a trade-off curve using the (ε, δ) trade-off function (Eq. 5 in <cit.>).
These all coincide perfectly.
When the CLT holds, the mechanisms are identical, the trade-off curves are independent of p, and since the curves are symmetric, the add one and the remove one curves are one and the same.
|
http://arxiv.org/abs/2307.05583v1 | 20230710144048 | Resistivity in Quantum Vortex Liquid of Clean Two-Dimensional Superconductor | [
"Naratip Nunchot",
"Ryusuke Ikeda"
] | cond-mat.supr-con | [
"cond-mat.supr-con"
] |
Department of Physics, Kyoto University, Kyoto 606-8502, Japan
Motivated by a recent controversy on a possible quantum phase in thin films of relatively clean superconductors under an out-of-plane magnetic field, the quantum fluctuation effects on the phase diagram and the resistivity are reexamined. It is argued that most of the features seen in the corresponding resistivity data in relatively clean systems reported recently are explained within the present theory, and that the fan-shaped resistivity curves in the vortex liquid regime, suggestive of the presence of a superconductor to insulator transition at zero temperature, are a consequence of the insulating behavior of the Aslamasov-Larkin fluctuation conductivity in the quantum regime.
Resistivity in Quantum Vortex Liquid of Clean Two-Dimensional Superconductor
Naratip Nunchot and Ryusuke Ikeda
August 12, 2023
============================================================================
§ INTRODUCTION
In thin films of type II superconductors under a magnetic field perpendicular to the plane, the resistivity often shows a behavior insensitive to the temperature T over wide field and temperature ranges <cit.>. Possibilities of a novel two-dimensional (2D) quantum phase based on this quantum metallic behavior have been discussed repeatedly over the past two decades <cit.>. However, it has been clarified recently that most of the T-independent behavior of the resistivity is removed by adequately filtering external radiation from the film sample <cit.>, strongly suggesting that external noise has created the quantum metallic behavior in experiments. The presence of a quantum metal state has been still argued in some recent experimental works on relatively clean systems, i.e., with weak disorder <cit.>. Since a nearly flat resistivity curve is seen even in the temperature range of the same order as the mean field T_c in some film samples, such a peculiar resistive behavior cannot be due to the randomness or
the sample disorder which becomes more effective at lower temperatures. In addition, a crossing behavior leading to assuming the presence of a superconductor to insulator quantum transition (SIT) at zero temperature <cit.> is seen at relatively higher fields in samples of relatively clean films <cit.>. Then, one might wonder what the flat resistivity curve appearing in clean samples in lower fields than the apparent SIT field implies.
In the present work, the quantum superconducting (SC) fluctuation effects on the resistivity in clean and 2D superconductors are reexamined by performing a detailed analysis within the framework of the renormalized fluctuation theory <cit.>. It was argued in a previous theoretical work of one of the present authors <cit.> that, based on a dimensional analysis, the melting curve H_m of the 2D vortex lattice becomes insensitive to T at low enough temperatures due to the quantum SC fluctuation, and that, in such a quantum regime, the vortex flow resistance in a narrow field range close to H_m is also insensitive to T and takes a value of the order of the quantum resistance R_q=πħ/(2 e^2) = 6.45(kΩ). However, this explanation of the crossing behavior seen in the field dependence of the resistivity curves seems to be inconsistent with the observation of the apparent SIT behavior in a couple of experiments <cit.> where the crossing of the resistivity is seen in a much higher field than the nominal vortex lattice melting field at low temperatures. Below, the vortex lattice melting transition line will first be examined without resorting to the rough argument <cit.> and by comparing the free energy of the renormalized fluctuation of the SC order parameter with that of the vortex lattice corrected by the Gaussian fluctuation <cit.>. In contrast to the previous estimate of the quantum melting line <cit.>, the resulting melting field H_m grows upon cooling everywhere at nonzero temperatures, while H_m(T=0) can take a much lower value than H_c2(T=0), and the resulting quantum vortex liquid regime becomes well-defined <cit.>. Next, the in-plane resistivity computed within the renormalized fluctuation theory is examined in a way consistent with the calculation of the melting line. Bearing in mind that the characteristic features of the resistivity curves in the quantum regime seem to depend on the details of the materials, the resistivity curves will be discussed by focusing on two extremely different cases: One is the case with a moderate strength of the thermal fluctuation and an extremely strong quantum fluctuation, and the other is the case with strong thermal fluctuation and weak quantum fluctuation. In both cases, the crossing behavior of the resistivity leading to erroneously assuming the presence of an SIT at zero temperature appears in a finite temperature range, as a consequence of the fact that the Aslamasov-Larkin (AL) term of the dc fluctuation conductivity vanishes in the vortex liquid in the zero temperature limit <cit.>. The resistivity curve insensitive to T tends to appear more frequently when the thermal fluctuation is stronger.
This paper is organized as follows. We explain the theoretical treatment used in the present work in sec.2. The resulting numerical results on the phase diagram and the resistivity curves are presented in sec.3. A summary of our results and their relevance to the experimental data are given in sec.4.
§ THEORETICAL EXPRESSIONS
In the unit of k_ B=ħ=1, we start from the partition function
Z= Trexp(- S).
Here, in the high field approximation where the pair field ψ( r) consists
only of the lowest Landau level (LLL) modes ψ_0( r), the action S expressing the Ginzburg-Landau (GL) model takes the form <cit.>
S = ∑_ω, p (s ω^2 + γ_0|ω| + ε_0) |ψ̃_0(p; ω)|^2 + g/(2 d β^2) ∫_0^β dτ∫ d^2r |ψ_0(r, τ)|^4.
Here, the order parameter field was rescaled so that the dependences on the film thickness d and the temperature T=β^-1 appear only in the quartic term. Further, the order parameter field was expanded in terms of the normalized eigen functions u_p( r) in LLL in the manner ψ_0( r, τ) = ∑_p, ωψ̃_0(p, ω) e^-i ωτ u_p( r), ω is the Matsubara frequency for bosons, and p measures the macroscopic degeneracy in LLL. The microscopic T and H dependences of the positive coefficients s, γ_0, and g are, for simplicity, neglected, and the bare mass ε_0 will be assumed to be linearly dependent on H and T like
ε_0 = t-1+h,
where h=H/H_c2(0), and t=T/T_c0. The mean field H_c2(T) line is given by ε_0=0. Further, since the ω^2 term in the action S was introduced only to cut off an inessential divergence in the frequency summation, the coefficient s is assumed to be small so
that s ≪γ_0^2.
The simplest approximation describing reasonably the fluctuation renormalization is the Hartree approximation which is reached through the self-consistent replacement
|ψ_0|^4 → 4 ⟨ |ψ_0|^2 ⟩ |ψ_0|^2
in the quartic term, where ⟨ ⟩ denotes the statistical average within the Hartree approximation. Then, the fluctuation propagator G_0(p, ω)=⟨ |ψ̃_0(p, ω)|^2 ⟩ is given by 1/[r_0 + γ_0|ω| + s ω^2], where
r_0 = ε_0 + g h/πξ_0^2 d β^-1∑_ω1/r_0 + γ_0|ω| + s ω^2,
where ξ_0 is the coherence length in zero temperature limit.
Note that, according to the BCS theory <cit.>, the mode-coupling strength g is a positive constant of the order of (N(0) T_c0^2)^-1, where N(0) is the density of states of the quasiparticles on the Fermi energy in the normal state. To rewrite the frequency summation into a tractable form, the spectral representation <cit.>
1/r_0 + γ_0|ω| + s ω^2 = 1/π∫_-∞^∞ du ρ(r_0; u)/u - iγ_0 ω
will be used, where
ρ(r; u) = u/(u^2 + (a r)^2)((us/γ_0^2)^2 + a^-2).
This expression (7) of the spectral function is valid when r < γ_0^2/(4s). Then, the coefficient a in eq.(<ref>) is given by a^-1= (1 + √(1 - 4sr/γ_0^2))/2. Since we are interested in the region below H_c2(T)-line where r_0 ≪ 1, the coefficient a will be replaced by unity in the ensuing expressions.
Therefore, we will use hereafter the following self-consistent relation on the renormalized mass r_0 of the LLL fluctuation
r_0 = ε_0 + 2 ε_ G^(2) h/πγ_0 T_c0∫_0^∞ du coth(u/2 γ_0 T) u/(u^2 + r_0^2)( 1 + (s u/γ_0^2)^2),
where H_c2(0) is the depairing field in zero temperature limit,
ε_ G^(2) = g T_c0/2 πξ_0^2 d
is the Ginzburg-number in 2D, and the identity
coth(u/2 γ_0 T) = 2 γ_0 T ∑_ω1/u - iγ_0 ω
was used. Note that eq.(<ref>) can be regarded as being a definition of ε_0(r_0) as a function of r_0. Then, we have
∂ε_0(r_0)/∂r_0 = 1 + 2 ε_ G^(2) h/πγ_0 T_c0∫_0^∞ du coth(u/2 γ_0 T) 2 r_0 u/(u^2 + r_0^2)^2.
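A minimal numerical sketch of how this self-consistency can be solved is given below (not from the original work; it uses dimensionless units with k_B = ħ = T_c0 = 1, so that, e.g., ħ/(γ_0 k_B T_c0) = 100 corresponds to γ_0 = 0.01, and adopts the cut-off choice s = (10^-6 γ_0)^2 used later in the paper):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def solve_r0(t, h, eps_G, gamma0, s=None, r_max=10.0):
    # Solve r0 = eps0 + (2 eps_G h / (pi gamma0)) * integral, cf. eq. (8), for the
    # renormalized LLL mass r0 at reduced temperature t = T/T_c0 and field h = H/H_c2(0).
    if s is None:
        s = (1e-6 * gamma0) ** 2
    eps0 = t - 1.0 + h                     # bare mass, eq. (3)

    def rhs(r0):
        def integrand(u):
            return (1.0 / np.tanh(u / (2.0 * gamma0 * t))) * u / (
                (u**2 + r0**2) * (1.0 + (s * u / gamma0**2) ** 2))
        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return eps0 + (2.0 * eps_G * h / (np.pi * gamma0)) * val

    return brentq(lambda r0: rhs(r0) - r0, 1e-8, r_max)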
§.§ Free energy
Next, the expressions on the free energy density will be derived. Using the identity on the fluctuation free energy F_>
∂ F_>/∂ε_0 = ∑_p, ω G_0(p, ω),
the fluctuation free energy density f_> in the vortex liquid regime of a SC thin film with thickness d is simply given by
f_> = h/2 π^2 ξ_0^2 d γ_0∫_r_c^r_0 dμ∫_0^∞ dx coth(x/2 γ_0 T) ρ_μ(x) ∂ε_0(μ)/∂μ,
where the prefactor proportional to h arises from the degeneracy in LLL. Then, f_> will be expressed in terms of eq.(<ref>) as
f_> = f_ G(r_0) + f_ H,
where
f_ H = h/2 π^2 ξ_0^2 d γ_0∫_r_c^r_0 dμ∫_0^∞ dx coth(x/2 γ_0 T) ρ_μ(x) (∂ε_0(μ)/∂μ - 1 ).
The cut-off r_c will be determined in examining f_ G(r_0)
(see below).
Regarding the remaining term f_ G(r_0) = f_> - f_ H which is nonvanishing even when g=0, i.e., even in the absence of the mode-couplings, the μ-integral will be performed firstly. Then, f_ G(r_0)
takes the form
f_ G(r_0) = H/(ϕ_0 π d γ_0) ∫_0^∞ dx 1/(1+(sx/γ_0^2)^2) coth(x/(2 γ_0 T)) [ tan^-1(x/r_c) - tan^-1(x/r_0) ].
Here, to determine the cut-off r_c, we take the thermal limit of eq.(<ref>) in which coth(x/(2 γ_0 T)) is replaced by 2 γ_0 T/x. By comparing it with the corresponding result in ref.16, h T ln(r_0/r_c)/(2 πξ_0^2 d), the cut-off will be chosen hereafter as
r_c = πγ_0 T.
On the other hand, by making use of eq.(<ref>) determining the T and H dependences of the renormalized mass r_0, f_ H may be rewritten in the following simpler form
f_ H = - 1/(4g) (r_0 - ε_0 )^2.
The free energy derived above can be used as the SC fluctuation contribution to the free energy in the normal phase. To determine the 2D quantum melting transition line, the corresponding free energy density f_< in the vortex lattice phase corrected by the Gaussian fluctuations is needed. Within the GL approach, the contribution of the shear elastic energy is smaller in order of magnitude <cit.> than that of the amplitude (or Higgs) mode and hence will simply be neglected. Then, f_< becomes
f_< = - 1/2 β_ A( ε_0^2/g - 1/√(2) f_ G(-2 ε_0) ),
where β_ A is the Abrikosov factor 1.1596 of the triangular lattice. Using these expressions, the transition line of the 2D vortex lattice melting occurring through not only the thermal but also the quantum fluctuations of the SC order parameter is determined by the relation f_>=f_<.
§.§ Fluctuation Conductivity
The fluctuation conductivity in the moderately clean case is dominated by the Aslamasov-Larkin (AL) term of the conductivity due to the renormalized SC fluctuation which is expressed in dc limit by <cit.>
d R_q σ_ AL = 2 T r_1^2 ∑_ω[ G_0(ω) G_1(ω) ( γ_0 G_0(ω) + γ_1 G_1(ω) ) - (γ_0^2 [ G_0(ω)]^2 + γ_1^2 [ G_1(ω)]^2)/(γ_0 r_1 + γ_1 r_0)],
where G_n(ω)= 1/(γ_n|ω| + r_n) (see also eq.(15) of Ref.14 and eq.(21) of Ref.22). The time scale γ_1 is the counterpart in the second lowest (n=1) Landau level (LL) fluctuation of γ_0 of the LLL fluctuation, and the tiny ω^2 term introduced in eq.(5) as a cut-off term for the frequency summation is unnecessary in obtaining σ_ AL and hence, has been neglected in eq.(20). As shown previously <cit.>, the renormalized mass r_1 of the n=1 LL fluctuation is renormalized to be 2h deep in the vortex liquid regime in the Hartree approximation. Hereafter, the relations r_1=2h and γ_0=γ_1 will be assumed for simplicity in our numerical analysis.
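The Matsubara sum in eq. (20) converges and can be evaluated by simple truncation. The sketch below does this under the simplifying assumptions stated above (r_1 = 2h and γ_1 = γ_0 are the choices used in the numerical analysis, and the frequency sum is assumed to run over all integer Matsubara indices up to a cut-off); it is an illustration only, not the authors' code:

import numpy as np

def sigma_al(T, r0, r1, gamma0, gamma1, m_max=200_000):
    # dc Aslamasov-Larkin conductivity, eq. (20), with the bosonic Matsubara sum
    # over omega_m = 2*pi*m*T truncated at |m| <= m_max; returns d*R_q*sigma_AL.
    m = np.arange(-m_max, m_max + 1)
    w = 2.0 * np.pi * T * np.abs(m)
    G0 = 1.0 / (gamma0 * w + r0)
    G1 = 1.0 / (gamma1 * w + r1)
    bracket = (G0 * G1 * (gamma0 * G0 + gamma1 * G1)
               - (gamma0**2 * G0**2 + gamma1**2 * G1**2) / (gamma0 * r1 + gamma1 * r0))
    return 2.0 * T * r1**2 * bracket.sum()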
§ NUMERICAL RESULTS
Now, we will explain typical examples of the resistivity curves following from eqs.(<ref>) and (<ref>) together with the corresponding phase diagrams which follow from eqs.(<ref>) and (<ref>). Below, the coefficient of the ω^2 term of eq.(<ref>) which plays the role of a cutoff on the dissipative dynamics will be chosen as s = (10^-6γ_0)^2 throughout this paper.
In our work, the DOS and Maki-Thompson fluctuation terms of the conductivity are not taken into account from the outset based on the well-known fact <cit.> that, in clean limit, those terms and the subleading contribution of the Aslamasov-Larkin term cancel with one another in 2D systems with no Pauli paramagnetic depairing. For this reason, the total dimensionless conductivity R_q d σ_ tot is assumed hereafter to be given by the sum of the leading contribution of the Aslamasov-Larkin term in dc limit, eq.(<ref>), and the dimensionless normal conductivity d R_q σ_ N. Regarding σ_ N, the same model as used in Ref.14, d R_q σ_ N = (1 + (8 π)^-1 ln(T_c0/T))^-1, will be used here to describe a weakly insulating resistivity curve in the normal state of a couple of materials <cit.>.
To clarify what are typical consequences originating from strong quantum SC fluctuations, typical results following from two highly different sets of the parameter values will be compared with each other. Below, the strengths of the thermal fluctuation and the quantum fluctuation will be measured, respectively, by ε_ G^(2) = [λ(0)]^2/(d Λ(T_c0)) and ħ/(γ_0 k_ BT_c0), where λ(0) is the magnetic penetration depth at T=0, Λ(T)=ϕ_0^2/(16 π^2 T), and ϕ_0 is the flux quantum <cit.>. Here, we have used the relation between g and λ(0) in the BCS theory <cit.>.
First, the results of the phase diagram in a case with moderately strong thermal fluctuation and unusually strong quantum fluctuation are shown in Fig.1 where ħ/(γ_0 k_ BT_c0)= 100 and ε_ G^(2)=2.0 × 10^-4. This ε_ G^(2)-value corresponds to, e.g., the set of the parameter values T_c0=10(K), d=25(A), and λ(0)=330(A). It is found that the melting field H_m(T) is linear in the temperature over a wide field range except close to T_c0. Close to T_c0, the quantum fluctuation is negligible so that H_m(T) in the present LLL-GL approach obeys the 2D LLL scaling <cit.> H_m(T) ≃ (T_c0 - T)^2 (see the Inset of Fig.1). Such a large deviation of the melting line from its LLL scaling behavior over the wide field range is a consequence of the strong quantum fluctuation in this case, and the T=0 melting field H_m(0) becomes 0.62 H_c2(0).
Figure 2 expresses the resistivity curves ρ(T) at various magnetic fields, H/H_c2(0) = 0.5, 0.55, 0.6, 0.65, 0.66, 0.67, 0.68, 0.69, and 0.7. The two curves in lower fields than H_m(0) are found to become flat, i.e., insensitive to T, below the melting line, while each of other curves in H > H_m(0) simply shows a drop at a temperature without a clear flat portion accompanied. We note that each temperature T_d at which the resistivity starts to drop is much lower than T_c2(H) corresponding to the mean field H_c2(T)-line. For instance, at H=0.66 H_c2(0), T_d/T_c0=0.06, while T_c2/T_c0=0.34 <cit.>. Such a large deviation of T_d from T_c2 is a consequence of strong reduction of σ_ AL (eq.(20)) due to the unusually strong quantum fluctuation assumed in Figs.1 and 2. On the other hand, the flat (i.e., metallic) portion is not clearly seen in those resistivity curves. As will be stressed below, it appears that the flat portion does not become remarkable as far as the thermal fluctuation is not strong enough. Nevertheless, as a consequence of the strong quantum superconducting fluctuation, the so-called fan-shaped T-dependence of the resistivity curves which often leads to assuming the presence of a superconductor to insulator transition (SIT) at T=0 is seen in the field range (H_m(0) <) 0.67 H_c2(0) < H < 0.7 H_c2(0) in spite of the absence of a quantum continuous transition. It seems that these resistivity curves are qualitatively similar to the data in Refs. 5, 8, and 9. Of course, it should be noted that those resistive behaviors explained above in H > 0.6 H_c2(0) are not their genuine low T results. Since there are no quantum transitions above H_m(0) in the present clean limit, all curves of the normalized resistance 1/(d R_q σ_ tot) in H > H_m(0) start to grow at much lower temperatures than 0.01 T_c0 and reduce to their normal values 1/(d R_q σ_ N) on approaching T=0 reflecting the vanishing of σ_ AL at T=0 <cit.>.
Next, the case with exceptionally strong thermal fluctuation and a moderate strength of the quantum fluctuation will be considered. In Fig.3 and 4, we have used ε_ G^(2) = 0.12, corresponding to, e.g., the set of the parameter values T_c0=30(K), d=5(A), and λ(0)=2000(A), and ħ/(γ_0 k_ BT_c0)= 1.0. As Fig.3 shows, the vortex liquid regime is expanded particularly at higher temperatures reflecting the large ε_ G^(2), and the melting curve is bent upwardly at low enough temperatures reflecting the relatively weaker quantum fluctuation. Nevertheless, the H_m(0)/H_c2(0)-value is remarkably low, and, in H > 0.2 H_c2(0), we have only the vortex liquid regime at any temperature.
In Fig.4, the corresponding resistivity curves are shown. It is noticeable that nearly flat resistivity curves are seen over a wide field range. This is a consequence of the strong thermal fluctuation assumed here. In particular, the flat resistivity curves appear in fields below H_m(0), i.e., the fluctuating vortex solid phase. Here, we stress that, in 2D case, the freezing from the vortex liquid to the vortex solid tends not to be reflected in the resistivity curve. It has been recently clarified through a detailed diagrammatic analysis <cit.> that this feature on the resistivity in 2D case has a theoretical foundation.
Even in the resistivity data of Fig.4, a crossing behavior of the resistivity curves is seen at nonzero temperatures. As is seen in the lower figure of Fig.4, the resistivity curves in the field range 0.2 H_c2(0) < H < 0.34 H_c2(0) obey an approximate crossing behavior around H=0.28 H_c2(0) in the temperature range 0.04 T_c0 < T < 0.16 T_c0. Again, this crossing behavior never implies the presence of a genuine quantum transition
and is merely a reflection of the insulating behavior of the fluctuation conductivity <cit.> arising from the vanishing of eq.(<ref>)
at T=0.
§ SUMMARY AND DISCUSSION
In this work, we have examined possible field v.s. temperature phase diagrams and the corresponding resisitivity curves to be seen in thin films of clean superconductors under a magnetic field perpendicular to the two-dimensional plane. Since a moderately strong fluctuation has been assumed in obtaining those figures, the field range of our interest in which the vortex lattice melting occurs at zero temperature is low enough to neglect the paramagnetic pair-breaking effect. In this situation, the fluctuation conductivity in clean 2D superconductors is given by only the well-known Aslamasov-Larkin term <cit.>. For this reason, we have been able to assume that, even at low temperatures, the total conductivity is the sum of a quasiparticle contribution and the conventional fluctuation conductivity following from a time-dependent GL dynamics. We note that, in a moderately dirty system <cit.> case, the sum of the Maki-Thompson and DOS terms of the fluctuation conductivity has a contribution leading to a negative magnetoresistance in the fluctuation regime <cit.>. Therefore, the absence of such a negative magnetoresistance would play a key role in judging whether the present theory is applicable to experimental data of the resistivity or not.
The resistivity curves obtained based on the renormalized fluctuation theory <cit.> are highly dependent on the relative magnitude of the quantum fluctuation to the thermal one. When the thermal fluctuation is of a moderate strength, enhanced quantum fluctuation tends to create fan-shaped resistivity curves R(T), often leading to erroneously assuming the presence of a quantum SIT, in the quantum vortex liquid but far above the vortex lattice melting field in T=0 limit. This type of resistivity data have been reported in several works <cit.>.
Further, even in quite a different case where the thermal fluctuation is quite strong, while the quantum fluctuation has a moderate strength, the resistive behavior suggestive of the presence of an apparent SIT is visible in the experimentally measurable temperature range. We conclude that, except the observations in dirty systems <cit.>, the SIT behavior of the resistivity in relatively clean systems is a consequence of the insulating behavior <cit.> of the Aslamasov-Larkin fluctuation conductivity in dc limit in the quantum regime.
In the present work, any pinning effect arising from some randomness or defects in the SC material has been neglected. In analyzing resistivity data in thin films, the resistivity drop upon cooling at intermediate temperatures is often modelled according to the empirical thermal activation (TA) (or the so-called Arrhenius) formula. Within the GL model, this TA behavior may be conveniently incorporated as an exponential growth in the inverse temperature T^-1 of the coefficient γ_1. As far as the vortex lattice melting transition does not occur due to weak disorder in the material, the present fluctuation theory can be used even for the lower temperature region, in which a flat resistive behavior may be seen, than the region of the vortex liquid in which the TA behavior is seen. In fact, it is interesting to regard a (if any) flat resistivity curve as a consequence of a competition between the insulating fluctuation conductivity <cit.> and an increase of γ_1 on cooling.
The present work was supported by a Grant-in-Aid for Scientific Research [Grant No.21K03468] from the Japan Society for the Promotion of Science.
9
Kapi1 D. Ephron, A. Yazdani, A. Kapitulnik, and M. R. Beasley, Phys. Rev. Lett. 76, 1529 (1996).
Kapi2 N. Mason and A. Kapitulnik, Phys. Rev. Lett. 82, 5341 (1999).
Kapi3 J. A. Chervenak and J. M. Valles, Jr., Phys. Rev. B 61, 9245(R) (2000).
nature Y. Qin, C. L. Vicente, and J. Yoon, Phys. Rev. B 73, 100505(R) (2006).
Nojima1 Y. Saito, T. Nojima, and Y. Iwasa, Nature Comm. 9, 778 (2018).
Tamir I. Tamir, A. Benyamini, E. J. Telford, F. Gorniaczyk, A. Doron, T. Levinson, D. Wang, F. Gay, B. Sacepe, J. Hone, K. Watanabe, T. Taniguchi, C. R. Dean, A. N. Pasupathy, and D. Shahar, Sci. Adv. 5, 3826 (2019).
India Surajit Dutta, Indranil Roy, Soumyajit Mandal, John Jesudasan, Vivas Bagwe, and Pratap Raychaudhuri, Phys. Rev. B 100, 214518 (2019).
Ienaga K. Ienaga, T. Hayashi, Y. Tamoto, S. Kaneko, and S. Okuma, Phys. Rev. Lett. 125, 257001 (2020).
Masonjyanaihou Wei Liu. LiDong Pan, Jiajia Wen, M. Kim, G. Sambandamurthy, and N. P. Armitage, Phys. Rev. Lett. 111, 067003 (2013).
Shahar23 A. Haug and D. Shahar, arXiv: 2305.1593.
MPAF M. P. A. Fisher, Phys. Rev. Lett. 65, 923 (1990).
HP A. F. Hebard and M. A. Paalanen, Phys. Rev. Lett. 65, 927 (1990).
IOT R. Ikeda, T. Ohmi, and T. Tsuneto, J. Phys. Soc. Jpn. 58, 3770 (1989).
IOT2 R. Ikeda, J. Phys. Soc. Jpn. 72, 2930 (2003).
RI96b R. Ikeda, Int. J. Mod. Phys. B 10, 601 (1996).
Hikami S. Hikami, A. Fujita, and A. I. Larkin, Phys. Rev. B 44, 10400(R) (1991).
Blatter G. Blatter, B. Ivlev, Y. Kagan, M. Theunissen, Y. Volokitin, and P. Kes, Phys. Rev. B 50, 13013 (1994).
RI96a R. Ikeda, J. Phys. Soc. Jpn. 65, 33 (1996).
deGennes P. G. de Gennes, Superconductivity of Metals and Alloys (Addison Wesley, 1989).
Tsuneto E. Abrahams and T. Tsuneto, Phys. Rev. B 11, 4498
(1975).
RI90 G. Eilenberger, Phys. Rev. 164,628 (1967).
Nunchot N. Nunchot, D. Nakashima, and R. Ikeda, Phys. Rev. B 105, 174510 (2022).
Varlamov D. V. Livanov, G. Savona, and A. A. Varlamov, Phys. Rev. B 62, 8675 (2000).
Galitski V. M. Galitski and A. I. Larkin, Phys. Rev. B 63, 174506 (2001).
com For simplicity, effects of the higher LL modes making the vertical portion of the melting curve in lower fields in the field v.s. temperature phase diagram will be neglected. See T. Saiki and R. Ikeda, Phys. Rev. B 83, 174501 (2011).
comhc2 In Ref.5, the H_c2(T)-curve has been determined based on the LLL scaling relation <cit.> formulated by neglecting the quantum fluctuation in spite of the fact that the resistivity curves show the fan-shaped SIT behavior. In the phase diagram proposed in Ref.5 (Fig.4 there), the correct H_c2(T)-curve must lie at a much higher temperature at least in higher fields, and it seems to us that their erroneous determination of the H_c2(T)-curve has led to their argument on the presence of a quantum Griffiths state which should not appear in cleaner systems of a type studied in Ref.5.
Nunchot2 N. Nunchot and R. Ikeda, unpublished.
Gant V. F. Gantmakher, M. V. Golubkov, V. T. Dolgopolov, G. E. Tsydynzhapov, and A. A. Shashkin, JETP Letters 68, 344 (1998).
|
http://arxiv.org/abs/2307.03921v1 | 20230708072624 | Social-Mobility-Aware Joint Communication and Computation Resource Management in NOMA-Enabled Vehicular Networks | [
"Tong Xue",
"Haixia Zhang",
"Hui Ding",
"Dongfeng Yuan"
] | eess.SP | [
"eess.SP"
] |
Social-Mobility-Aware Joint Communication and Computation Resource Management in NOMA-Enabled Vehicular Networks
Tong Xue,
Haixia Zhang, Senior Member, IEEE, Hui Ding, and
Dongfeng Yuan, Senior Member, IEEE
T. Xue, H. Zhang, H. Ding and D. Yuan are all with Shandong Key Laboratory of Wireless Communication Technologies, Shandong University, Jinan, Shandong, 250061, China.
T. Xue and H. Zhang are also with School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China (e-mail: [email protected]; [email protected]).
August 12, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The existing computation and communication (2C) optimization schemes for vehicular edge computing (VEC) networks mainly focus on the physical domain without considering the influence of the social domain. This may greatly limit the potential of task offloading, making it difficult to fully exploit the task offloading rate achievable with a given power budget and resulting in low energy efficiency (EE). To address this issue, this letter investigates a social-mobility-aware VEC framework and proposes a novel EE-oriented 2C assignment scheme. In doing so, we assume that the task vehicular user (T-VU) can offload computation tasks to the service vehicular user (S-VU) and the road side unit (RSU) by non-orthogonal multiple access (NOMA). An optimization problem is formulated to jointly assign the 2C resources to maximize the system EE; the resulting problem turns out to be mixed-integer and non-convex. To solve it, we decompose it into separate computation and communication resource allocation subproblems. For the first subproblem, we propose a social-mobility-aware edge server selection and task splitting algorithm (SM-SSTSA) to achieve edge server selection and task splitting. Then, by solving the second subproblem, the power allocation and spectrum assignment solutions are obtained utilizing a tightening lower bound method and a Kuhn-Munkres algorithm. Finally, we solve the original problem through an iterative method. Simulation results demonstrate the superior EE performance of the proposed scheme.
VEC, NOMA, edge server selection, task splitting, spectrum assignment, power allocation.
§ INTRODUCTION
With the booming development of intelligent vehicles and wireless communications, a variety of advanced vehicular entertainment services, such as high-definition maps, have emerged in vehicular networks. Many of these emerging vehicular entertainment services are computationally intensive, but vehicular users (VUs) with constrained computation capability cannot satisfy the quality of service (QoS) requirements of such services. To overcome this, it is crucial to utilize vehicular edge computing (VEC) technology that leverages the abundant computation resources at proximate edge servers (i.e., road side units (RSUs) and idle service vehicular users (S-VUs)) <cit.>. However, when the VUs offload tasks to the edge servers, the power consumption increases significantly. Improving the transmission rate of offloaded tasks with limited power, i.e., the energy efficiency (EE), has therefore become a major concern in VEC networks. One feasible method is to optimize the communication resources, such as spectrum and power. In addition, designing appropriate task computation policies, such as determining where to offload the computational tasks, is another way to enhance the EE <cit.>. There are works focusing on jointly optimizing communication and computation (2C) resource allocation strategies to maximize the EE in orthogonal multiple access (OMA)-enabled VEC networks <cit.>.
In addition, non-orthogonal multiple access (NOMA) has also been regarded as a promising technology to further enhance the system EE <cit.>. With the help of successive interference cancellation (SIC) at the receiver, co-channel interference can be suppressed, which enhances the system sum-rate and ultimately yields a significant improvement of the system EE. Therefore, there are works focusing on optimizing 2C resources by integrating NOMA into VEC networks <cit.>. For instance, Cheng et al. <cit.> proposed a joint optimization strategy for binary task splitting and power control to maximize the EE, where the task VU (T-VU) can offload its computation task to the S-VU or RSU by NOMA. With the same goal, based on the minimum distance S-VU selection (MDSS) strategy, Wen et al. studied a NOMA-enabled three-sided matching theory to jointly optimize the task splitting and power control in cognitive vehicular networks <cit.>. The 2C optimization strategies in <cit.> are based on the physical domain without considering the influence of the social domain. This may greatly limit the potential of task offloading, making it difficult to fully exploit the task offloading rate achievable with a given power budget, resulting in a low EE. Therefore, it is indispensable to improve the system EE by designing a social-mobility-aware 2C optimization strategy.
Inspired by the aforementioned analysis, this work designs a social-mobility-aware VEC framework and proposes a novel EE-oriented 2C assignment scheme. In doing so, we assume that the T-VU offloads computation tasks to the S-VU and the RSU by NOMA. Meanwhile, to improve the resource utilization, we enable T-VUs to reuse the spectrum resource with cellular users (CUs). An optimization problem is formulated to jointly allocate the 2C resources to maximize the system EE, while guaranteeing the QoS requirements of all CUs and T-VUs. The formulated optimization problem is mixed-integer and non-convex. To solve it, we decompose it into separate computation and communication resource allocation subproblems. To deal with the computation subproblem, we propose a social-mobility-aware edge server selection and task splitting algorithm (SM-SSTSA) to determine the edge server selection and task splitting. Then, by solving the communication subproblem, the power allocation and spectrum assignment solutions are obtained by using a tightening lower bound method and a Kuhn-Munkres algorithm. Finally, we solve the original problem by iteratively solving the two subproblems. Simulation results demonstrate the superiority of the proposed scheme in terms of the EE.
§ SYSTEM MODEL AND PROBLEM FORMULATION
§.§ Physical and Social Domain Model
This work studies a social-mobility-aware VEC network that utilizes NOMA technology to ensure the differentiated QoS requirements of each CU and T-VU, as shown in SystemModel. In the physical domain, a macro base station (MBS) is deployed to support high-rate data transmission of U CUs indexed by u∈𝒰={1,2,...,U}, and S RSUs indexed by s∈𝒮={1,2,...,S} with coverage radius r are deployed to support the computationally-intensive services of M T-VUs indexed by m∈ℳ={1,2,...,M}. Each RSU is equipped with a mobile edge computing (MEC) server. Given the limited computation capability of T-VUs, we allow the T-VUs to offload computational tasks to the proximate RSUs through vehicle-to-infrastructure (V2I) links and to the idle S-VUs through vehicle-to-vehicle (V2V) links. It is assumed that there are N idle S-VUs indexed by n∈𝒩={1,2,...,N}. Based on the characteristics of task offloading, this work allows the T-VU to offload tasks to the RSU server and the S-VU by utilizing NOMA. In the social domain, leveraging social relationships can help build trustworthy V2V offloading links and improve the effective task offloading rate with limited power, i.e., the EE <cit.>. In this work, the social relationship graph among VUs is denoted by 𝒢=(Z,δ), where Z denotes the set of all VUs with Z=ℳ∪𝒩, and δ_m,n∈δ={δ_1,1,δ_1,2,...δ_M,N} is a binary variable representing the social relationship between the mth T-VU and the nth S-VU. If the mth T-VU agrees to share its computation task with the nth S-VU, then δ_m,n=1; otherwise, δ_m,n=0.
§.§ Communication and Computation Model
In the NOMA-enabled VEC network, it is assumed that there are in total F available sub-channels (SCs) indexed by f∈ℱ={1,2,…,F}. Without loss of generality, we assume F = U, and each CU uses a single SC. To improve the spectrum resource utilization, the CUs and the T-VUs are allowed to share the spectrum band. It is assumed that only one V2I link and one V2V link utilize NOMA mode to share the SC occupied by one CU. Therefore, the achievable data rate of the uth CU at the time slot t, t∈𝒯={1,2,...,T}, can be expressed as
R_u(t)=∑_f∈ℱBlog_2(1+ P_u^op(t)X_u,f(t)H_u(t)/∑_m∈ℳQ_1+σ^2),
where B represents the bandwidth of each SC, Q_1=( ϵ_m,1(t)+ϵ_m,2(t))P_m^thX_m,f(t)H_m,u(t), with ϵ_m,1(t) and ϵ_m,2(t) represent the power allocation coefficients from the mth T-VU to the RSU and to the S-VU at the tth time slot, respectively, P_m^th is the maximum transmit power of the mth T-VU, P_u^op(t) denotes the optimal transmit power of the uth CU at the tth time slot, σ^2 is the noise power, the binary variable X_m,f(t)∈{0,1} is defined as the spectrum assignment factor. If the mth T-VU occupies the fth SC at the tth time slot, X_m,f(t)=1, otherwise, X_m,f(t)=0. Similarly, X_u,f(t) is also a spectrum assignment indicator of the uth CU at the tth time slot. H_u(t) and H_m,u(t) are the channel and interference channel power gain of the uth CU at the tth time slot.
For the receiver of each NOMA-enabled V2V link and V2I link, it is assumed that the received messages can be decoded via SIC, and the decoding order is based on the increasing order of the channel coefficients. If H_m,s<H_m,n, the mth T-VU tends to allocate higher power to the sth RSU than to the nth S-VU, such that ϵ_m,1>ϵ_m,2. Under the NOMA protocol, the mth V2I receiver is decoded first. The mth V2V link is then decoded, and the co-channel interference from the mth V2I link is removed[If H_m,s>H_m,n, the mth V2V link will be decoded first, and the SINR of the receiver will change accordingly.] by SIC. Therefore, the achievable rate of the mth V2I link's receiver (i.e., the sth RSU) at the tth time slot can be expressed as
R_m,s(t)=∑_f∈ℱBlog_2(1+ ϵ_m,1(t)P_m^thX_m,f(t)H_m,s(t)/Q_2+σ^2_γ_m,f(t) ),
where Q_2=∑_u∈𝒰P_u^op(t)X_u,f(t)H_u,s(t)+ϵ_m,2(t)P_m^thX_m,f(t)H_m,s(t). The achievable rate of the mth V2V link's receiver (i.e., the nth S-VU) at the tth time slot can be expressed as
R_m,n(t)=∑_f∈ℱBlog_2(1+ X_m,f(t)Ψ_m,n(t)Q_3/Q_4+σ^2_γ_m,n,f(t)),
where Q_3=ϵ_m,2(t)P_m^thH_m,n(t), Q_4=∑_u∈𝒰P_u^op(t)X_u,f(t)Ψ_m,n(t)H_u,n(t), H_m,s(t) and H_m,n(t) are the channel power gains from the mth T-VU to the sth RSU server and to the nth S-VU at the tth time slot, respectively, H_u,s(t) denotes the interference channel power gain from the uth CU to the sth RSU at the tth time slot, H_u,n(t) is the interference channel power gain from the uth CU to the nth S-VU at the tth time slot. The binary variable Ψ_m,n(t) composed of both mobility and social relationships is denoted as
Ψ_m,n(t)=k_m, n(t)·δ_m,n(t),
where k_m, n(t) is the mobility relationship between the mth T-VU and the nth S-VU at the tth time slot, where
k_m, n(t)=
{[ 1,
if ρ_m, n(t)<ζ_th,; 0, otherwise, ].
where ζ_th represents the threshold of physical domain, and ρ_m,n(t) is written as
ρ_m,n(t)= ψ· f(Δ d_m,n(t))+(1-ψ) · f(Δ v_m,n(t)),
where Δ d_m,n(t) is the distance between T-VU and S-VU, Δ v_m,n(t) represents the difference in velocity between T-VU and S-VU, ψ∈[0,1] is the weight of the distance, f(·) is the normalized function.
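For concreteness, the following Python sketch shows one way to evaluate Ψ_m,n(t) from the above definitions; the min-max normalization used for f(·), the weight ψ and the threshold ζ_th are illustrative assumptions rather than values prescribed by this letter.
[bgcolor=mygray]python
import numpy as np

def social_mobility_indicator(dist, dvel, social, psi=0.5, zeta_th=0.6):
    # dist, dvel: (M, N) arrays of distance and velocity differences between
    # T-VUs and S-VUs; social: (M, N) binary array delta_{m,n}.
    # psi and zeta_th are assumed illustrative values.
    f = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)  # assumed min-max normalization
    rho = psi * f(dist) + (1.0 - psi) * f(dvel)   # physical-domain closeness rho_{m,n}
    k = (rho < zeta_th).astype(int)               # mobility relationship k_{m,n}
    return k * social                             # Psi_{m,n} = k_{m,n} * delta_{m,n}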
We define a tuple (D_m(t), C_m,β_m(t)) to characterize the task of the mth T-VU at the tth time slot, where D_m(t) is the size of the computation task, C_m is the number of CPU cycles required for computing 1-bit data, and β_m(t)={β_m,1(t),β_m,2(t)}∈[0,1], with β_m,1(t) representing the task splitting factor from the mth T-VU to the RSU server and β_m,2(t) the fraction computed by the S-VU. Thus, (1-β_m,1(t)-β_m,2(t)) denotes the portion of the computing task left for local execution (i.e., at the mth T-VU). Therefore, the task executing delay at the mth T-VU satisfies
D_m(t)(1-β_m,1(t)-β_m,2(t)) C_m/y_m<T_tol,
where y_m (in CPU cycle/s) is the assigned computing resource for executing local tasks, T_tol denotes the maximum tolerant delay of each T-VU.
The task offloading and executing delays from the mth T-VU to the RSU server and to the nth S-VU must satisfy
D_m(t)β_m,1(t)/R_m,s(t)+D_m(t)β_m,1(t) C_m/y_m,s<T_tol,
D_m(t)β_m,2(t)/R_m,n(t)+D_m(t)β_m,2(t)C_m/y_m,n<T_tol,
where y_m,s (in CPU cycle/s) and y_m,n (in CPU cycle/s) are the computing resource allocated to the mth T-VU served by the RSU server and the nth S-VU, respectively. The EE of the NOMA-enabled VEC networks is expressed as
ξ=∑_t∈𝒯R_total(t)/P_total(t)=∑_t∈𝒯∑_n∈𝒩∑_m∈ℳ∑_s∈𝒮R_m,s(t)+R_m,n(t)/P_cir+P̃_m,n,s(t),
where P̃_m,n,s(t)=κ y_m^3+ϵ_m,1(t)P_m^th+κ y_m,s^3+ϵ_m,1(t)P_m^th+κ y_m,n^3, κ is the effective switched capacitance depending on the CPU architecture, and P_cir is the circuit power consumption.
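A minimal sketch of how the delay terms and the EE contribution of a single T-VU can be evaluated is given below; the values of κ and P_cir are placeholders, and the transmit-power term is read here as (ϵ_m,1+ϵ_m,2)P_m^th.
[bgcolor=mygray]python
def delays_and_ee(D, C, beta1, beta2, R_ms, R_mn, y_m, y_ms, y_mn,
                  eps1, eps2, P_th, kappa=1e-27, P_cir=0.1):
    # Per-T-VU delay terms and EE contribution for one time slot;
    # kappa (switched capacitance) and P_cir are placeholder values.
    d_local = D * (1.0 - beta1 - beta2) * C / y_m        # local execution delay
    d_rsu = D * beta1 / R_ms + D * beta1 * C / y_ms      # V2I offloading + RSU execution
    d_svu = D * beta2 / R_mn + D * beta2 * C / y_mn      # V2V offloading + S-VU execution
    rate = R_ms + R_mn                                   # offloading sum-rate (numerator)
    # Transmit plus computation power; the letter's expression is read here as
    # (eps1 + eps2) * P_th plus the dynamic CPU power of the three executors.
    power = P_cir + (eps1 + eps2) * P_th + kappa * (y_m**3 + y_ms**3 + y_mn**3)
    return (d_local, d_rsu, d_svu), rate / power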
§.§ Problem Formulation
In this work, our objective is to maximize the EE for task offloading of the NOMA-enabled VEC network by optimizing the edge server selection Ψ, the task splitting β, the spectrum assignment X and the power allocation ϵ. Notably, Ψ, β, X and ϵ are matrices composed of variables Ψ_m,n(t), {β_m,1(t),β_m,2(t)}, {X_u,f(t), X_m,f(t)} and {ϵ_m,1(t),ϵ_m,2(t)}, respectively. Mathematically, the problem is formulated as
𝒫1:max_Ψ_m,n(t),β_m,1(t),β_m,2(t),X_u,f(t),
X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)ξ
s. t. Ψ_m,n(t), X_m,f(t), X_u,f(t)∈{0,1},∀ m,s,n,u,f,t,
0≤β_m(t)≤1,∀ m,t,
ϵ_m,1(t)≥ 0, ϵ_m,2(t)≥ 0,ϵ_m,1(t)+ϵ_m,2(t)≤ 1,∀ m,t,
R_u(t)≥ R_th,u,∀ f,u,t,
∑_f∈ℱX_u,f(t)= ∑_f∈ℱX_m,f(t)=1, ∀ u,m,t,
∑_u∈𝒰X_u,f(t)=1, ∑_m∈ℳX_m,f(t)≤1,∀ f,t,
(<ref>),(<ref>),(<ref>),
where R_th,u is the minimum data rate threshold for the uth CU, constraints (<ref>)-(<ref>) list the feasible task splitting and power allocation of the T-VUs, respectively, constraint (<ref>) represents the QoS requirements of the CUs, constraint (<ref>) restricts each user (T-VU and CU) to access only one SC, and each SC can be shared by one CU and at most one T-VU according to constraint (<ref>).
It is obvious that (<ref>) is a fractional program, which can be converted into a subtractive form <cit.>. Therefore, (<ref>) is reformulated as
max_Ψ_m,n(t),β_m,1(t),β_m,2(t),X_u,f(t),
X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)∑_t∈𝒯(R_total(t)-ξ P_total(t)).
§ SOLUTION OF THE EE OPTIMIZATION PROBLEM
Since the communication and computation resource decisions of 𝒫1 are made in each time slot and there is no interdependence among time slots, we transform the optimization problem over all time slots into a per-time-slot optimization problem. However, the resulting single-slot optimization problem is still non-convex, and it is difficult to obtain the global optimal solution. As an alternative, we decompose it into 1) a computation resource optimization subproblem 𝒫2 and 2) a communication resource optimization subproblem 𝒫3. 𝒫2 and 𝒫3 can be given by
𝒫2: max_Ψ_m,n(t),β_m,1(t),β_m,2(t)
(R_total(t)-ξ P_total(t))
s. t. Ψ_m,n(t),Ψ_m,s(t)∈{0,1},∀ m,s,n,
(<ref>), (<ref>),
𝒫3: max_X_u,f(t),X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)(R_total(t)-ξ P_total(t))
s. t. X_m,f(t), X_u,f(t)∈{0,1},∀ m,u,f,
(<ref>), (<ref>), (<ref>)-(<ref>).
It is seen that 𝒫2 is NP-hard. To find a tractable solution, we design a heuristic SM-SSTSA as shown in Algorithm 1. Then, to solve the communication resource allocation subproblem, we decouple 𝒫3 into a power allocation subproblem and a spectrum assignment subproblem, which can be solved iteratively.
As proved in <cit.>, when the spectrum assignment variable is fixed, (<ref>) is written as
R_m,s(t)+R_m,n(t)-ξ P_total(t)
≥ b_1log_2γ_m,f(t)+c_1 _Φ_1(t) + b_2log_2γ_m,n,f(t)+c_2 _Φ_2(t) -ξ P_total(t),
where b_1, b_2, c_1 and c_2 are
b_1=γ̃_m,f(t)/1+γ̃_m,f(t), b_2=γ̃_m,n,f(t)/1+γ̃_m,n,f(t),
c_1=log_2(1+γ̃_m,f(t))-γ̃_m,f(t)/1+γ̃_m,f(t)log_2 γ̃_m,f(t),
c_2=log_2(1+γ̃_m,n,f(t))-γ̃_m,n,f(t)/1+γ̃_m,n,f(t)log_2 γ̃_m,n,f(t).
Then, the lower bound of the objective function in (<ref>) can be written as
max_ϵ ( Φ_1(t) + Φ_2(t) -ξ P_total(t)).
Denote ϵ_m,1(t)=2^w_m,1(t) and ϵ_m,2(t)=2^w_m,2(t), the power control subproblem can be rewritten as
𝒫4: max_w_m,1(t),w_m,2(t)( Φ_1(t) + Φ_2(t) -ξ P_total(t))
s. t. 2^w_m,1(t)≥ 0, 2^w_m,2(t)≥ 0,∀ m,
2^w_m,1(t)+2^w_m,2(t)≤ 1,∀ m,
(<ref>),(<ref>),(<ref>).
Since (<ref>) is a standard convex optimization problem, we adopt Lagrange dual decomposition to solve it.
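For illustration, the tightening-lower-bound step can be reproduced from the expressions for b_1, b_2, c_1 and c_2 above; the sketch below evaluates the logarithmic surrogate at a given SINR operating point γ̃ and is not a substitute for the full Lagrange dual solver.
[bgcolor=mygray]python
import numpy as np

def sca_coefficients(gamma_tilde):
    # Coefficients of the bound log2(1 + g) >= b*log2(g) + c, tight at g = gamma_tilde.
    b = gamma_tilde / (1.0 + gamma_tilde)
    c = np.log2(1.0 + gamma_tilde) - b * np.log2(gamma_tilde)
    return b, c

def surrogate_rate(gamma, gamma_tilde):
    # Concave (in log-power) surrogate used in place of log2(1 + gamma).
    b, c = sca_coefficients(gamma_tilde)
    return b * np.log2(gamma) + c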
Given ϵ, the spectrum assignment subproblem is a complicated matching among CUs, T-VUs and SCs, which is proved to be NP-hard. From (<ref>)-(<ref>), the relationship between a CU and an SC is a one-to-one match. To facilitate the solution, the complex matching among CUs, T-VUs and SCs is transformed into a new matching between CUs and T-VUs. The new spectrum assignment variable between the uth CU and the mth T-VU at the tth time slot is denoted as X_u,m(t). Therefore, the spectrum assignment subproblem can be rewritten as
𝒫5: max_X_u,m(t)(R_total(t)-ξ P_total(t))
s. t. X_u,m(t)∈{0,1},∀ u,m,
∑_m∈ℳX_u,m(t)≤1, ∀ u,
∑_u∈𝒰X_u,m(t)=1,∀ m,
which can be solved by a Kuhn-Munkres algorithm.
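Since 𝒫5 reduces to a one-to-one assignment between CUs and T-VUs, it can be handled by a standard Hungarian (Kuhn-Munkres) routine; a minimal sketch using SciPy is given below, where the utility matrix is assumed to hold the objective value of each CU/T-VU pairing under the current power allocation.
[bgcolor=mygray]python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_spectrum(utility):
    # utility: (U, M) array; entry (u, m) is the objective value obtained when
    # T-VU m reuses the SC occupied by CU u, under the current power allocation.
    row, col = linear_sum_assignment(-np.asarray(utility))  # Hungarian method, maximization
    return {int(m): int(u) for u, m in zip(row, col)}       # T-VU -> assigned CU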
To solve 𝒫1, the JCCRAA is proposed as shown in Algorithm 2, which consists of solving the computation resource allocation subproblem and the communication resource allocation subproblem. In Algorithm 2, Ψ and β are first obtained by running Algorithm 1. Then, the analytical expressions of X and ϵ are derived by using the tightening lower bound method and the Kuhn-Munkres algorithm. Next, by substituting the obtained spectrum assignment and power allocation into Algorithm 1, the S-VU selection and task splitting strategies are updated. This process is repeated until convergence, thereby solving the original problem.
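The alternating structure of the proposed algorithm can be summarized by the following sketch, in which solve_computation and solve_communication are placeholders standing in for Algorithm 1 and the communication resource allocation step, respectively; the structure of the state dictionary is likewise assumed.
[bgcolor=mygray]python
def jccraa(state, solve_computation, solve_communication, max_iter=20, tol=1e-3):
    # Alternate between the 2C subproblems until the EE objective converges.
    ee_prev = float("-inf")
    for _ in range(max_iter):
        psi, beta = solve_computation(state)                # edge server selection + task splitting
        x, eps, ee = solve_communication(state, psi, beta)  # spectrum assignment + power allocation
        state = {**state, "psi": psi, "beta": beta, "x": x, "eps": eps}
        if abs(ee - ee_prev) < tol:                         # stop once the EE value converges
            break
        ee_prev = ee
    return state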
§ SIMULATION RESULTS AND ANALYSIS
Extensive simulations are conducted to show the performance of the proposed algorithm. It is assumed that all the users are located within a target rectangular area of 1000 m × 1000 m. The simulation parameters are set according to 3GPP TR 36.885 <cit.>, where an MBS is located at the center of the area and a number of RSUs with r = 150 m are located at the roadside in the area. The number of lanes is 6, and the width of each lane is 4 m. The average inter-VU distance for vehicles driving in the same lane is 2.5v m, with v representing the moving speed of vehicles in meters per second. Besides, we set P_u^op=20 dBm, P_m^th= [15, 30] dBm, D_m=[10^4,10^5] bits.
The impact of the number of T-VUs, the size of the offloaded tasks and the number of SCs on the system EE is simulated, respectively. The obtained results are shown in Figs. <ref>-<ref>. To show the superiority of the proposed JCCRAA, three baselines are simulated and compared:
1) NOMA-MDSS-TSCRA algorithm, which is composed of the MDSS and the proposed task splitting and communication resource allocation algorithm.
2) RSU-SAPC algorithm, which is composed of the RSU-based offloading strategy and the proposed communication resource allocation algorithm.
3) OMA-JCCRA algorithm, which is adopted by the proposed JCCRA algorithm based on the orthogonal multiple access.
The system EE for different numbers of T-VUs is shown in Sim1, from which we see that the EE decreases as the number of T-VUs increases for all the simulated algorithms. The reason is that, as the number of T-VUs increases, the competition for limited communication resources intensifies, resulting in severe co-channel interference and a degradation in EE performance. In addition, we see that the proposed NOMA-JCCRAA performs best, and with the social-mobility-aware algorithm, a gain of approximately 17%-32% can be achieved. Sim2 shows the effect of the size of the offloaded tasks at each T-VU on the EE performance. The simulation results reveal that as the size of the offloaded tasks increases, the EE decreases. This is attributed to an increase in task delay, making it difficult to satisfy the delay constraint and ultimately reducing the EE performance of the VEC network. From Sim3, we see that the EE increases when the number of available SCs increases from 30 to 60 for all the simulated algorithms. This is because when the number of available SCs increases, more users can occupy spectrum bands individually, improving the system EE.
§ CONCLUSIONS
This letter investigated social-mobility-aware EE maximization in VEC networks, where the T-VUs can offload their computation tasks to the S-VUs and the RSUs by NOMA. An EE maximization problem was formulated to jointly assign the 2C resources. Since the optimization turned out to be NP-hard, an iterative JCCRAA was proposed to solve it. Simulation results have shown that the proposed JCCRAA not only helps appropriately allocate the communication and computation resources, but also achieves a system EE gain of approximately 17%-32% by using the proposed social-mobility-aware strategy.
IEEEtran
|
http://arxiv.org/abs/2307.04291v1 | 20230710005229 | Wait, wasn't that code here before? Detecting Outdated Software Documentation | [
"Wen Siang Tan",
"Markus Wagner",
"Christoph Treude"
] | cs.SE | [
"cs.SE"
] |
Wait, wasn't that code here before?
Detecting Outdated Software Documentation
Wen Siang Tan
School of Computer Science
University of Adelaide
Adelaide, SA, Australia
[email protected]
Markus Wagner
Department of Data Science & AI
Monash University
Melbourne, VIC, Australia
[email protected]
Christoph Treude
School of Computing and Information Systems
The University of Melbourne
Melbourne, VIC, Australia
[email protected]
August 12, 2023
==============================================================================================================================================================================================================================================================================================================================================================================================================
Encountering outdated documentation is not a rare occurrence for developers and users in the software engineering community. To ensure that software documentation is up-to-date, developers often have to manually check whether the documentation needs to be updated whenever changes are made to the source code. In our previous work, we proposed an approach to automatically detect outdated code element references in software repositories and found that more than a quarter of the 1000 most popular projects on GitHub contained at least one outdated reference. In this paper, we present a GitHub Actions tool that builds on our previous work's approach that GitHub developers can configure to automatically scan for outdated code element references in their GitHub project's documentation whenever a pull request is submitted.
Video—<https://www.youtube.com/watch?v=4cA10vdlmns>
software repositories, outdated documentation, outdated references, code elements, workflow automation
§ INTRODUCTION
Not only developers but also users often find encountering outdated software documentation a frustrating experience. In our previous work <cit.>, we found that 28.9% of the top 1000 most popular projects[Top 1000 projects ranked by the number of stars] on GitHub contain at least one outdated reference to source code in their documentation. In the same paper, we proposed an approach named DOCER (Detecting Outdated Code Element References) to automatically detect outdated code element references in software repository documentation. The approach works by extracting code element references from documentation (README and wiki pages) using a list of regular expressions. These extracted references include variables, functions and class names found in the documentation such as HttpClient, Promise.reject(err) and ArrayList<String>. To determine if a reference is outdated, we match the reference to two revisions of the source code: the repository snapshot when the documentation was last updated and the current revision. We compare the number of instances found in the two versions and flag the reference as outdated if it existed in the snapshot but is no longer found in the current revision. <Ref> shows an overview of the DOCER approach.
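As a rough illustration of the detection step, the Python sketch below extracts candidate code elements from documentation text and flags a reference as outdated if it existed in the repository snapshot but is absent from the current revision. The single backtick-based pattern is a simplified stand-in for the full list of regular expressions from our previous work, and the helper names are illustrative only.
[bgcolor=mygray]python
import re
from pathlib import Path

# Simplified stand-in for the DOCER code-element patterns: here we only pick up
# elements written in backticks, e.g. `HttpClient` or `Promise.reject(err)`.
CODE_ELEMENT = re.compile(r"`([^`\s]+)`")

def extract_references(doc_text):
    # Candidate code element references found in README/wiki text.
    return set(CODE_ELEMENT.findall(doc_text))

def count_occurrences(root, element):
    # Literal occurrences of `element` in all files of a repository checkout.
    return sum(p.read_text(errors="ignore").count(element)
               for p in Path(root).rglob("*") if p.is_file())

def is_outdated(element, snapshot_dir, current_dir):
    # Outdated: present when the documentation was last updated,
    # no longer found in the current revision.
    return (count_occurrences(snapshot_dir, element) > 0
            and count_occurrences(current_dir, element) == 0)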
In our previous paper, we provided an implementation that developers can use to scan for outdated code element references. However, running the script whenever new changes are proposed may be mundane and repetitive. To simplify this process, we created a tool based on GitHub Actions workflow that is automatically triggered whenever a pull request is submitted to the repository. This workflow automates all the steps mentioned above and reports outdated references by commenting on the pull request.
In the following sections of this paper, we provide an in-depth introduction to the tool's implementation (Section <ref>), and describe real-world examples where the DOCER approach successfully detected outdated documentation (Section <ref>). Limitations of the tool are discussed in Section <ref> before we conclude the paper with related (Section <ref>) and future work (Section <ref>).
§ TOOL
In this section, we introduce: (1) the GitHub Actions workflow that the tool is based on, (2) an example repository showing how the tool can be configured to run whenever a pull request is submitted, and (3) how false positives reported by the tool can be ignored.
§.§ Implementation
GitHub Actions,[<https://github.com/features/actions>] a feature on GitHub, enables developers to automate workflows based on events. This feature is typically employed for building Continuous Integration and Continuous Delivery (CI/CD) pipelines. We created the tool using GitHub Actions because it provides developers a convenient way to integrate the tool with existing GitHub projects. Developers also have the flexibility to configure their projects in a way that the tool automatically scans for outdated code element references in their documentation, whenever a pull request is submitted.
The workflow is defined by a YAML file[<https://yaml.org/>] containing a series of actions that gets executed when the workflow is triggered. To begin, we list the name of the workflow (DOCER), the events that trigger the workflow (pull requests), followed by the name of the GitHub-hosted runner[<https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners>] (latest Long Term Support version of Ubuntu) and the permissions needed for the job (read repository contents and write to pull requests).
[bgcolor=mygray]yaml
name: DOCER
on: pull_request
jobs:
run:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
The rest of the file defines the steps to execute in the workflow. Three repositories are cloned on the runner (repositories containing the source code, wiki pages, and scripts for the analysis) using a GitHub Action named checkout.[<https://github.com/actions/checkout>]
[bgcolor=mygray]yaml
- name: Checkout repository
uses: actions/checkout@v3
with:
repository: ${{ github.repository }}
ref: ${{ github.event.pull_request.head.sha }}
path: repo
fetch-depth: 0
- name: Checkout wiki
continue-on-error: true
uses: actions/checkout@v3
with:
repository: ${{ github.repository }}.wiki
path: wiki
- name: Checkout tool
uses: actions/checkout@v3
with:
repository: wesleytanws/DOCER_tool
path: tool
Once the repositories are cloned, the runner possesses all the necessary files to scan for outdated references. The workflow then commences the analysis, installs the necessary Python packages used by the report, generates the report and finally stores the results in an environment variable.
[bgcolor=mygray]yaml
- name: Run tool
run: |
bash tool/analysis.sh
pip install pandas
pip install numpy
echo 'report<<EOF' >> $GITHUB_ENV
python tool/report.py ${{ github.repository }} ${{ github.run_id }} >> $GITHUB_ENV
echo 'EOF' >> $GITHUB_ENV
In the case where merging the pull request may result in outdated documentation, the workflow uses a GitHub Action named github-script[<https://github.com/actions/github-script>] to post a comment on the pull request listing the potentially outdated references.
[bgcolor=mygray]yaml
- name: Comment on pull request
if: env.report
uses: actions/github-script@v6
env:
report: ${{ env.report }}
with:
script: |
github.rest.issues.createComment({
  issue_number: context.issue.number,
  owner: context.repo.owner,
  repo: context.repo.repo,
  body: process.env.report
})
Figuring out why a code element reference has been flagged as potentially outdated can be challenging, especially when there are numerous modifications in the pull request. This final step uploads the report and summary files to GitHub using a GitHub Action named upload-artifact,[<https://github.com/actions/upload-artifact>] allowing developers to view the full report.
[bgcolor=mygray]yaml
- name: Upload artifact
if: env.report
uses: actions/upload-artifact@v3
with:
name: report
path: |
output/report.csv
output/summary.csv
output/summary.md
The GitHub repository, which includes the workflow outlined above and the source code for the tool, is available for public access.[<https://github.com/wesleytanws/DOCER_tool/tree/v1.0.1>]<Ref> summarises the steps defined by the workflow.
§.§ Adding to GitHub projects
To demonstrate how the GitHub Actions tool works, we will integrate the tool with an example repository with three files (<Ref>):
* README.md documents the mathematical functions defined in arithmetic.py
* arithmetic.py defines the mathematical functions
* main.py calls the functions defined in arithmetic.py
Integrating the tool into a repository is as convenient as copying the YAML file defining the workflow[<https://github.com/wesleytanws/DOCER_tool/blob/v1.0.1/DOCER.yml>] to the .github/workflows folder. Suppose a pull request as shown in <Ref> is submitted to the repository.
Looking at the pull request submitted, two files in the repository have been modified. In arithmetic.py, the subtract and divide functions were removed and a new power function was added. Similarly, the main.py file was modified to remove the subtract function and the chained multiply functions were refactored into a power function. Notice that the tool reports that continuing to merge the pull request may result in two outdated references in the documentation (<Ref>). This discrepancy arises because the README file was not updated to reflect the removal of `divide' and `subtract' functions from the source code.
To keep the documentation up-to-date, we can simply remove the two outdated references in the README file. Better still, we can document the new function and mention that the two functions are now deprecated as shown in <Ref>.
§.§ Excluding code elements
One useful feature that we added to the tool is the ability to exclude certain code elements from the report, which allows developers to stop keeping track of code elements that have been determined to be false positives. Developers can add a list of code elements separated by newlines in a file named .DOCER_exclude located at the root of the repository. Code elements in the exclude list will be ignored by the tool when scanning for outdated references.
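A minimal sketch of how such an exclude list can be applied is shown below; the file name .DOCER_exclude follows the convention described above, while the helper functions themselves are only illustrative.
[bgcolor=mygray]python
from pathlib import Path

def load_exclude_list(repo_root):
    # Code elements listed one per line in .DOCER_exclude at the repository root.
    exclude_file = Path(repo_root) / ".DOCER_exclude"
    if not exclude_file.exists():
        return set()
    return {line.strip() for line in exclude_file.read_text().splitlines()
            if line.strip()}

def filter_references(outdated_refs, repo_root):
    # Drop references that developers have marked as false positives.
    excluded = load_exclude_list(repo_root)
    return [ref for ref in outdated_refs if ref not in excluded]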
§ EXAMPLES
In our previous work <cit.>, we evaluated the approach's usefulness in real-world software projects by submitting GitHub issues to 15 different projects. Here, we present two examples each of true positives and false positives from the issues submitted <cit.>. The tool presented in this paper automates the creation of such notifications.
True positives The google/cctz project was one of the 15 projects that responded positively to our GitHub issue.[<https://github.com/google/cctz/issues/210>] All instances of the code element int64_t were removed from the source code in one of the commits, but the documentation continued to reference the deleted code element. In response to our GitHub issue, the developer updated the documentation to align with the changes in the source code (<Ref>). In the google/hs-portray project, the function prettyShow was renamed to showPortrayal in the source code, but the README file was not updated (<Ref>). We alerted the developers to this discrepancy, and the issue was subsequently fixed.[<https://github.com/google/hs-portray/issues/7>]
False positives In another Google project google/clif (<Ref>), a CMake flag was removed from the source code but the documentation was not updated. The developer responded that the flag is no longer required in the source code but it is still relevant for users that have installed multiple versions of Python to configure the installation directory correctly.[<https://github.com/google/clif/issues/52>] A false positive was reported in the google/gnostic project (<Ref>) where the code element text_out was deleted from the source code. Although the code element is no longer found in the source code, the functionality remains in the program logic. This leads to the code element reference getting falsely flagged as outdated.[<https://github.com/google/gnostic/issues/273>]
§ LIMITATIONS
Trying to understand and use documentation which features code elements that do not exist is just one of many frustrations that software developers encounter when they are confronted with outdated documentation. Addressing this particular frustration is the goal of our tool. Other forms of outdated documentation, such as inaccurate descriptions of the functionality of code elements or not-yet-documented code elements, are beyond the scope of our current work. The tool is currently limited to detecting outdated documentation in GitHub (README and wiki pages) and would not be able to find issues in documentation hosted externally. It detects code elements in documentation using a set of regular expressions from previous work. These regular expressions have not been validated on all possible programming languages, and refining them to work on popular programming languages is part of our future work.
Our tool may sometimes falsely categorise references as outdated due to limitations of the approach. For example, the change log of a project may contain references to deleted code elements in the source code. However, these references should not be flagged as outdated, as they only serve as a notice. As a workaround, developers can add the code elements to the .DOCER_exclude file to avoid the tool reporting the references as outdated. In addition, our tool only detects code elements written as text. Other kinds of outdated documentation, such as images and videos in the documentation, cannot be detected.
§ RELATED WORK
There is a large body of existing work related to detecting and fixing inconsistencies between source code and documentation, with source code comments being one of the main focuses. Wen et al. <cit.> conducted an empirical study of 1500 Java systems, citing deprecation and refactoring as causes of code-comment inconsistencies. In one of the earliest attempts to address these inconsistencies, Tan et al. <cit.> proposed @tcomment, aiming to catch exceptions related to null values in Javadoc comments. Ratol and Robillard <cit.> introduced Fraco, a tool targeting source code comments and identifier renaming. Panthaplackel et al. <cit.> proposed a model that can modify natural language comments based on source code changes, outperforming existing comment generation models.
Other work related to documentation but not limited to source code comments include DocRef by Zhong and Su <cit.>. Combining natural language tools and code analysis techniques to identify discrepancies between source code and documentation, DocRef was able to detect more than 1000 errors in API documentation. Designed to report documentation changes, AdDoc by Dagenais and Robillard <cit.> uses traceability links to identify changes to the documentation that deviate from existing code patterns. Using static program analysis, Zhou et al. <cit.> proposed a framework DRONE, that automatically discovers defects in Java API documentation and generates helpful recommendations. Another work addressing API documentation is FreshDoc by Lee at al. <cit.>. By using a grammar parser and analysing multiple source code versions, FreshDoc can automatically update class, method and field names found in the documentation.
In contrast to these approaches and to the best of our knowledge, our tool is the first which attempts to prevent inconsistent and outdated documentation by alerting software developers before their documentation becomes outdated. We accomplish this through a GitHub Action, which is GitHub's implementation of a software bot <cit.>. Software bots have recently attracted the attention of the software engineering research community, with a particular focus on code review bots which, similar to our tool, comment on pull requests. For example, Wessel et al. <cit.> found that the adoption of code review bots increases the number of monthly merged pull requests, decreases monthly non-merged pull requests, and decreases unnecessary communication among developers. Our goal with this work is to enable code review bots to also decrease the amount of outdated documentation.
§ FUTURE WORK AND CONCLUSION
In this paper, we presented a GitHub Actions tool that developers can use to automatically scan for outdated code element references. The tool analyses the repository and generates a report on the state of code element references whenever a pull request is submitted. If merging the pull request would result in outdated references in the documentation, the tool uploads the report and comments on the pull request, alerting developers to the situation. Developers can choose to fix the outdated references in their documentation, or add the references to the exclude list if they have been determined to be false positives.
As mentioned in <Ref>, refining the list of regular expressions used to detect code elements is part of our future work. One such refinement could be ensuring that the regular expressions can accurately extract code elements found in popular programming languages such as JavaScript, Python and Java. In addition, several improvements can be made to the tool. Adding a feature where developers can reply to the tool's comment for code elements they do not want to keep track of could be helpful; the tool would then automatically add the code elements to the project's exclude list. Another improvement could be adding a file that defines a list of documentation files to exclude, e.g., a wiki page that contains the project's change log. Expanding the tool to work not only on GitHub, but also on other version control platforms, is another direction worth exploring. This would allow more developers to scan for outdated code element references in their projects.
IEEEtran |
http://arxiv.org/abs/2307.07419v1 | 20230714154327 | The dual nature of the tidal tails of NGC 5904 (M5) | [
"Andrés E. Piatti"
] | astro-ph.GA | [
"astro-ph.GA"
] |
The tangential velocity dispersion of stars belonging to the tidal tails of Milky Way globular clusters has recently been found, from N-body simulations, to be a parameter that distinguishes between cored and cuspy profiles of the low-mass dwarf galaxy dark matter subhaloes where those globular clusters formed, and the in-situ formation scenario. In this context, we discovered that M5's tidal tails are composed of stars at two different metallicity regimes ([Fe/H] ∼ -1.4 dex and -2.0 dex). The more metal-rich tidal tail stars have the same metal content as M5's members and a tangential velocity dispersion that coincides with the value predicted for a cuspy formation scenario (subhalo mass ∼ 10^9 ). The more metal-poor stars, which are found along the entire M5 tidal tails and have distributions similar to those of their more metal-rich counterparts in the M5 colour-magnitude diagram and orbit trajectory, have a tangential velocity dispersion that points to a cored subhalo (mass ∼ 10^9 ) or an in-situ formation scenario. In order to reconcile the dual distribution of M5 tidal tail stars, in kinematics and chemistry, we propose that M5 collided with another, more metal-poor and less massive globular cluster at some time before or after it was accreted into the Milky Way.
globular clusters: general – globular cluster: individual: M5 – methods: numerical
§ INTRODUCTION
Some recent detections of tidal tails of Milky Way globular clusters using clustering search techniques in an N-dimensional phase space assume that their stars and those belonging to the cluster have similar proper motions. In general, an upper limit of 2 mas/yr around the mean cluster proper motion has been used to identify tidal tail stars, which have been called a cold stellar stream <cit.>. However, because tidal tail stars have sped up in order to escape the cluster, their proper motions can differ from the mean cluster proper motion. There are, additionally, other reasons that can make the space motion of tidal tail stars different from the mean cluster space velocity, among them projection effects of the tidal tails, the Milky Way tidal interaction, and the intrinsic kinematic agitation of a stellar stream <cit.>.
Furthermore, some stellar streams have turned out to be kinematically hot, like the C-19 stream <cit.>, which has been found to be dominated by a dark matter halo <cit.>. Rising velocity dispersion profiles toward the outer regions of globular clusters were also suggested by <cit.> for globular clusters placed at the centre of dark matter mini-haloes; a behaviour that was also explained by the effects of the Milky Way tidal interaction <cit.>.
Recently, <cit.> showed that globular clusters formed in low-mass
dwarf galaxy dark matter subhaloes, later accreted into the Milky Way, have tidal tails
with a mean tangential velocity dispersion larger than that for tidal tails of globular
clusters formed in-situ. Based on these outcomes, globular cluster origins
could be inferred by differentiating whether their tidal tails
are kinematically cold (in-situ formation) or hot (accreted
origin).
Although some globular clusters do not present tidal tails <cit.>, it is worth measuring the tangential velocity dispersion of globular cluster tidal tails in order to distinguish the clusters formed in cored or cuspy cold dark matter subhaloes (those accreted) from those formed in the Milky Way. These outcomes can be useful, for instance, to establish the nature of the dwarf galaxy dark matter subhaloes where the clusters formed and the mass of those subhaloes, as well as to confirm the previously known origins of the Milky Way globular clusters <cit.>. We embarked on this challenging analysis by examining the tidal tails of NGC 5904 (M5), a globular cluster associated with one of the latest merger events that occurred in the Milky Way <cit.>.
In this Letter we report the discovery of the dual nature of the tidal tails of M5. They contain stars consistent with both the cored and cuspy formation scenarios, which in turn have clearly different overall metallicity contents. In Section 2 we present the analysis of the data, while in Section 3 we speculate on a possible scenario for the resulting outcomes.
§ DATA ANALYSIS
<cit.> detected, using the second release of the Gaia database <cit.>, a long trailing tidal tail extending westward from M5. He selected the 50 highest-ranked tidal tail member candidates based on their similar distances, their magnitudes and colours being distributed along the M5 colour-magnitude diagram, and their proper motions being consistent with the cluster's trajectory, at a detection significance ≈ 10σ. The long tidal tail is included in the recent atlas of Milky Way streams compiled by <cit.>. We used the derived Gaia DR3 parameters[Kindly provided by Cecilia Mateu.] for these 50 stars to compute their tangential velocities. The top-left panel of Fig. <ref> shows the distribution of the stars in the sky. As for their physical distances, we relied on the results found by <cit.> and <cit.>, who place the long tidal tail at a constant distance, that of M5's mean heliocentric distance <cit.>, as illustrated by the solid line in the parallax versus R.A. plot drawn in the top-right panel of Fig. <ref>. Hence, for the subsequent analysis we adopted the R.A. coordinates as the tidal-tail tracing coordinates.
The tangential velocities were computed as V_Tan = k × d_⊙ × μ, where k = 4.7405 km s^-1 kpc^-1 (mas/yr)^-1, d_⊙ is the tidal tail distance, and μ = √(μ_α^*^2 + μ_δ^2), with μ_α^* and μ_δ being the proper motions in R.A. and Dec. as provided by Gaia DR3. We used Figure 15 of <cit.> to estimate the mean d_⊙ in intervals of Δ(R.A.) = 5, and used those values to compute V_Tan for stars in the respective R.A. bins.
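For reproducibility, the conversion from proper motions to tangential velocities can be sketched in a few lines of Python; the per-bin heliocentric distances are supplied by the user, since they are read off Figure 15 of <cit.> rather than derived here.
[bgcolor=mygray]python
import numpy as np

K = 4.7405  # km s^-1 kpc^-1 (mas/yr)^-1

def tangential_velocity(pmra, pmdec, d_kpc):
    # V_Tan = k * d_sun * sqrt(pmra*^2 + pmdec^2), with proper motions in mas/yr.
    return K * d_kpc * np.hypot(pmra, pmdec)

def bin_distances(ra, ra_edges, d_per_bin):
    # Assign each star the mean heliocentric distance of its R.A. bin
    # (bin edges and per-bin distances are supplied by the user).
    idx = np.clip(np.digitize(ra, ra_edges) - 1, 0, len(d_per_bin) - 1)
    return np.asarray(d_per_bin)[idx]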
The bottom-right panel of Fig. <ref> shows the
observed relation between V_Tan and R.A. The error bars come from
propagation of errors of the V_Tan expression. We then fitted a second order
polynomial function, represented by the solid line in the bottom-right panel of
Fig. <ref>, and computed the difference between the measured V_Tan
values and the corresponding ones on the fitted function for the respective
R.A.
In order to obtain the tangential velocity dispersion,
we derived the dispersion of the resulting residual distribution
by employing a maximum likelihood approach <cit.>.
For that purpose, we optimized the probability ℒ given by:
ℒ = ∏_i=1^N [ 2π (σ_i^2 + W^2) ]^-1/2 exp( -(Δ(V_Tan)_i - <Δ(V_Tan)>)^2 / [2(σ_i^2 + W^2)] )
where Δ(V_Tan)_i and σ_i are the residual V_Tan value and the corresponding error for the i-th star. We obtained a mean tangential velocity
dispersion W = 15.65 ± 0.47 km/s. This result largely exceeds the highest predicted
tangential velocity dispersion for globular cluster streams in dark
matter subhaloes with a mass of 10^9 (∼ 8.5 km/s).
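In practice, W can be obtained by minimizing the negative log-likelihood corresponding to the expression above; the short Python sketch below treats the mean residual and W as free parameters, with the optimizer choice being purely illustrative.
[bgcolor=mygray]python
import numpy as np
from scipy.optimize import minimize

def velocity_dispersion(residuals, errors):
    # Maximum-likelihood mean and intrinsic dispersion W of the V_Tan residuals.
    residuals = np.asarray(residuals, dtype=float)
    errors = np.asarray(errors, dtype=float)

    def neg_log_like(params):
        mean, w = params
        var = errors**2 + w**2
        return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (residuals - mean)**2 / var)

    res = minimize(neg_log_like, x0=[residuals.mean(), residuals.std()],
                   method="Nelder-Mead")
    mean, w = res.x
    return mean, abs(w)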
We used the overall metallicity estimates ([Fe/H]) and their uncertainties
provided by GSP-Phot in Gaia DR3 to
check whether the resulting W value can be biased by the presence of field stars.
Only 25 out of the 50 stars have available Gaia DR3 metallicities. For them, we first corrected
the [Fe/H] values following the prescriptions given by <cit.>[https://www.cosmos.esa.int/web/gaia/dr3-gspphot-metallicity-calibration] and then plotted the
resulting values as a function of R.A. (see bottom-left panel of Fig. <ref>). As can be seen,
there are two different metallicity regimes, centred at [Fe/H] ∼ -1.4 dex and -2.0 dex,
respectively. For each of them, we repeated the above procedure to compute the
tangential velocity dispersion, and obtained W = 7.50 ± 1.38 km/s and
2.00 ± 4.07 km/s, for the more metal-rich and more metal-poor samples, respectively.
For the most metal-rich regime, we did not consider the star at [Fe/H] ≈ -0.5 dex,
because it is beyond the mean value by more than 7 times the metallicity dispersion.
We finally searched the Gaia DR3 database looking for M5 members with
metallicity estimates, with the aim of validating the above procedure and results.
We applied the selection cuts as in <cit.>, selecting stars with
μ_α^* and μ_δ values within the dispersion found by
<cit.>[https://people.smp.uq.edu.au/HolgerBaumgardt/globular/],
< 1.15, ≤ 10, and
≤ 2. We found four stars within the cluster area (< 5) with corrected [Fe/H]
values of ∼ -1.4 dex, which is in excellent agreement with the known M5 metal content
<cit.>.
§ DISCUSSION AND CONCLUSIONS
M5, with an age of 11.46±0.44 Gyr <cit.>, is associated with the Helmi stream, the aftermath of a merger event (5-8 Gyr ago) between the Milky Way and a small-mass dwarf galaxy <cit.>. Its orbital eccentricity (0.79±0.01) and inclination (74.09±0.66) also attest to its accreted origin <cit.>. If we enter the bottom panel of Figure 1 of <cit.> with W = 7.5 km/s (the tangential velocity dispersion derived here for M5 tidal tail stars with cluster-like metallicities), we find that the cluster formed in a dwarf galaxy dark matter subhalo with a cuspy profile and a mass of ∼ 10^8.8±0.1 . Therefore, we can conclude that this component of the stream is consistent with that formation scenario.
However, the dual metallicity distribution found among the highest-ranked tidal tail candidate members complicates our understanding of M5's origin. Stars at both metallicity regimes are found distributed along the entire extension of the examined tidal tail (see Fig. <ref>), which undoubtedly removes any speculation that most of them are stars unrelated to M5. On the other hand, overall metallicity differences Δ[Fe/H] larger than ∼ 0.6 dex (see Fig. <ref>) were found in the building block globular clusters Terzan 5 <cit.> and Liller 1 <cit.> in the Milky Way and M54 <cit.> in the Sagittarius dwarf spheroidal galaxy.
We note that the
difference in metallicity between M5 1G and 2G stars is 0.03 dex <cit.>. Hence,
M5 would not seem to belong to that globular cluster group, since it does not show the age spread expected for a long star formation history, as observed in those building block globular clusters (see its HST colour-magnitude diagram in <cit.>). Therefore, we can conclude that only tidal tail stars with a metallicity similar to that of M5 were born in the cluster itself. Because of their remarkably different metallicity, and according to the known nucleosynthesis processes, the more metal-poor tidal tail stars could not have formed in M5.
The mean tangential velocity dispersion of the more metal-poor stars (W = 2.00 km/s) matches the cored dark matter subhalo formation scenario (mass ∼ 10^9 ) in the N-body simulation performed by <cit.>. If this were the case, then these more metal-poor stars could be the relics of a globular cluster formed in another dwarf galaxy that collided with M5. Note that the lack of metallicity spread among them (see Fig. <ref>) rules out any possible accretion of stellar structures with a long star formation history or of scattered field stars. The collision could have taken place before M5 was accreted into the Milky Way, or in the Milky Way itself after the two clusters were accreted into the Galaxy separately. Because of the uncertainty in the resulting W value (σ_W = 4.07 km/s), the more metal-poor stars could also be the fossils of a globular cluster formed in-situ that collided with M5.
Whichever scenario is considered, it would seem that a reasonable interpretation of the dual nature of the stars belonging to the tidal tails of M5, in kinematics and chemistry, is that the cluster experienced an encounter with another globular cluster.
This speculation opens this research field to further analyses. Indeed, a spectroscopic survey of tidal tail stars in M5, as well as of the stars in M5's main body, is mandatory. According to the number of analysed tidal tail stars with metallicity estimates, the putative colliding cluster could contain ∼ 1/3 of M5's tidal tail mass. We also wonder whether some stars of such a disrupting cluster could be trapped within M5's main body. Alternatively, future N-body simulations will be very useful to analyse this scenario in detail. <cit.> simulated a dwarf galaxy passing within M5's tidal tails and found that the tails are negligibly perturbed by the dwarf galaxy. If future simulations find that the collisional scenario is feasible, that would further imply that observations of a metallicity spread in M5 do not necessarily require multiple episodes of star formation, but that such a spread can arise from a collision between two globular clusters hosting different stellar populations.
The speculated collision between M5 and another, more metal-poor globular cluster is not the prototypical phenomenon proposed by theories of globular cluster formation; however, its discovery is both encouraging and interesting. Therefore, M5 provides a unique laboratory to explore various new aspects of the formation and evolution theory of globular clusters.
§ ACKNOWLEDGEMENTS
We thank the referee for the thorough reading of the manuscript and timely
suggestions to improve it.
This work has made use of data from the European Space Agency (ESA) mission
Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia
Data Processing and Analysis Consortium (DPAC,
<https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement.
§ DATA AVAILABILITY
Data used in this work are available upon request to the author.
Alfaro-Cuello M., et al., 2019, ApJ, 886, 57
Andrae R., et al., 2022, arXiv:2206.06138
Babusiaux C., et al., 2022, arXiv:2206.05989
Baumgardt H., Vasiliev E., 2021, MNRAS, 505, 5957
Baumgardt H., Hilker M., Sollima A., Bellini A., 2019, MNRAS, 482, 5138
Carlberg R. G., Grillmair C. J., 2021, arXiv:2106.00751
Crociati C., et al., 2023, arXiv:2305.04595
El-Falou N., Webb J. J., 2022, MNRAS, 510, 2437
Errani R., et al., 2022, MNRAS, 514, 3532
Forbes D. A., 2020, MNRAS, 493, 847
Gaia Collaboration et al., 2016, A&A, 595, A1
Grillmair C. J., 2019, arXiv:1909.05927
Ibata R., et al., 2021, ApJ, 914, 123
Koposov S. E., et al., 2023, MNRAS, 521, 4936
Kruijssen J. M. D., Pfeffer J. L., Reina-Campos M., Crain R. A., Bastian N., 2019, MNRAS, 486, 3180
Malhan K., Valluri M., Freese K., Ibata R. A., 2022, ApJ, 941, L38
Marino A. F., et al., 2019, MNRAS, 487, 3815
Massari D., Koppelman H. H., Helmi A., 2019, A&A, 630, L4
Mateu C., 2023, MNRAS, 520, 5225
Piatti A. E., Webb J. J., Carlberg R. G., 2019, MNRAS, 489, 4367
Pryor C., Meylan G., 1993, in Djorgovski S. G., Meylan G., eds, ASP Conf. Ser. Vol. 50, Structure and Dynamics of Globular Clusters, p. 357
Romano D., et al., 2023
Sollima A., 2020, MNRAS, 495, 2222
VandenBerg D. A., Casagrande L., Edvardsson B., 2022, MNRAS, 509, 4208
Vasiliev E., Baumgardt H., 2021, MNRAS, 505, 5978
Vitral E., Boldrini P., 2022, A&A, 667, A112
Walker M. G., Mateo M., Olszewski E. W., Bernstein R., Wang X., Woodroofe M., 2006, AJ, 131, 2114
Wan Z., et al., 2023, MNRAS, 519, 192
Yang Y., Zhao J.-K., Ishigaki M. N., Chiba M., Yang C.-Q., Xue X.-X., Ye X.-H., Zhao G., 2022, A&A, 667, A37
Yuan Z., et al., 2022, MNRAS, 514, 1664
Zhang S., Mackey D., Da Costa G. S., 2022, MNRAS, 513, 3136
|
http://arxiv.org/abs/2307.04273v1 | 20230709222249 | Characterization of a novel proton-CT scanner based on Silicon and LaBr$_3$(Ce) detectors | [
"E. Nácher",
"J. A. Briz",
"A. N. Nerio",
"A. Perea",
"V. G. Távora",
"O. Tengblad",
"M. Ciemala",
"N. Cieplicka-Orynczak",
"A. Maj",
"K. Mazurek",
"P. Olko",
"M. Zieblinski",
"M. J. G. Borge"
] | physics.med-ph | [
"physics.med-ph",
"nucl-ex",
"physics.ins-det"
] |
Characterization of a novel proton-CT scanner based on Silicon and LaBr_3(Ce) detectors

E. Nácher^1,a, J.A. Briz^2,b, A.N. Nerio^2, A. Perea^2, V.G. Távora^2, O. Tengblad^2, M. Ciemala^3, N. Cieplicka-Orynczak^3, A. Maj^3, K. Mazurek^3, P. Olko^3, M. Zieblinski^3, M.J.G. Borge^2

^1 Instituto de Física Corpuscular, CSIC - Universidad de Valencia, Spain
^2 Instituto de Estructura de la Materia, CSIC, Madrid, Spain
^3 Instytut Fizyki Jadrowej PAN, 31-342 Krakow, Poland

^a Corresponding author: [email protected]
^b Present address: Universidad Complutense de Madrid, CEI Moncloa, E-28040 Madrid, Spain
Treatment planning systems at proton-therapy centres generally use X-ray computed tomography (CT) as the primary imaging technique to infer the proton treatment doses to tumour and healthy tissues. However, proton stopping powers in the body, as derived from X-ray images, suffer from important proton-range uncertainties. In order to reduce this uncertainty in range, one could use proton-CT images instead. The main goal of this work is to test the capabilities of a newly developed proton-CT scanner, based on the use of a set of tracking detectors and a high-energy-resolution scintillator for the residual energy of the protons. Different custom-made phantoms were positioned in the field of view of the scanner and were irradiated with protons at the CCB proton-therapy centre in Krakow. We measured the phantoms at different angles and produced sinograms that were used to obtain reconstructed images by filtered back-projection (FBP). The obtained images were used to determine the capabilities of our scanner in terms of spatial resolution and proton relative stopping power (RSP) mapping, and to validate its use as a proton-CT scanner. The results show that the scanner can produce medium-to-high-quality images, with a spatial resolution better than 2 mm in radiography and below 3 mm in tomography, and a resolving power in RSP comparable to other state-of-the-art pCT cameras.
42.30.Wb 42.79.Pw 07.77.Ka
§ INTRODUCTION
According to the World Health Organisation, cancer is the leading cause of death in the world. More than 50% of cancer patients receive some kind of radiation therapy (radiotherapy) during their course of treatment. Conventional radiotherapy for deep tumours makes use of X rays to control or kill malignant cells. Unfortunately, healthy tissue is not immune to the ionisation produced by the X rays and, therefore, the areas surrounding the cancerous tumour are severely damaged. Proton therapy is a technique that uses proton beams instead of X rays as ionising radiation. It has a far higher selectivity than conventional radiotherapy, which makes it ideal for the treatment of localised tumours in highly sensitive areas, e.g. the brain, heart or spinal cord.
The application of proton therapy, however, is not free of difficulties. The precision in the determination of the distal position of the dose distribution is crucial for a complete irradiation of the tumour and to avoid, as much as possible, any dose to the surrounding healthy tissue. So far, treatment planning systems at proton-therapy centres use X-ray computed tomography (X-ray CT) as the primary imaging technique to calculate doses to tumour and healthy tissues. This produces a map of the linear attenuation coefficient of the tissue for X rays, the so-called Hounsfield Units (HU). In the production of the treatment plan, one has to transform the map of HU into a map of relative proton stopping powers (RSP), since the patient is going to be treated with a beam of protons. However, there are unavoidable uncertainties associated with the derivation of the RSP map from the X-ray CT scan. Apart from the fact that the HU-to-RSP conversion depends on the chemical composition of the volume traversed by the protons and not only on its HU value, it is not possible to ignore the ambiguity and limitations of the different HU-to-RSP conversion algorithms in use nowadays <cit.>.
The aforementioned effects may lead to proton range uncertainties up to 5% in the abdomen and 11% in the head (see <cit.> and refs. therein). These large proton range uncertainties result in higher dose to healthy tissues or in a far too conservative treatment plan to avoid that. Reducing these uncertainties would allow a better planning that maximises the dose to the tumour, minimising at the same time the dose to the surrounding tissue. In order to reduce the uncertainty in proton range and take full advantage of the therapeutic potential of proton therapy, it is necessary to provide the treatment planning software with RSP maps obtained with proton beams rather than those derived after a conversion from the HU maps obtained with X rays. Proton computed tomography (proton CT) is the appropriate tool to produce such images since it makes use of proton beams provided by the same accelerator that is used later for the treatment, but this time at higher energy, so that the protons go through the patient and reach an appropriate proton scanner to form an image. See for instance the work of Takabe et al. <cit.> for a descriptive introduction to proton CT. Some other recent studies with more advanced scanners are described in Dedes et al. <cit.> and Esposito et al. <cit.>. In the next section we will describe the basis of proton CT and our approach to a proton scanner for imaging.
Besides medical physics and imaging, basic nuclear-physics research in general involves the development of nuclear instrumentation, in the form of spectrometers and radiation detectors, to perform nuclear reactions and study the structure of matter. Within the detector R&D process, the design and test of prototypes is rather frequent. Sometimes, the prototypes are just small parts, the building blocks of the final device. In these cases, the optimised prototypes are often an integral part of the final product. However, some other times, the prototypes, although being very valuable radiation detectors themselves, cannot be used in the final device because they do not comply with the requirements or simply because they do not have the appropriate geometry: shape or size.
In this work we will show how we have re-used one of these prototype detectors, which was developed as part of the R&D of a larger device but can no longer be used as part of it. This detector, in combination with other instrumentation used for nuclear-reaction and structure experiments, has been converted into a proton scanner capable of performing proton radiography and tomography, as will be described in the next sections.
§ MATERIALS AND METHODS
§.§ The proton CT scanner
Any scanning technique based on the use of a penetrating probe to obtain images by sections is known as tomography. These images by sections can be combined, using the appropriate reconstruction method, to form a 3-dimensional (3D) model of the object under study. In the case of transmission tomography, one generally starts by obtaining plane 2-dimensional (2D) projections, that, using the appropriate reconstruction algorithm, are turned into the final tomographic sections or 3D image. The most typical case of medical tomography technique by transmission is the X-ray CT, obtained from plane X-ray radiographs. The subject of this paper refers to tomography with proton beams, in other words, the obtention of images by sections using a proton beam as probe. Therefore, at the basis of proton CT stands the use of a proton accelerator that provides a proton beam with enough energy to go through the object of study. In clinical practice, the object of study is a part of a patient body, however, since we are presenting here a pre-clinical instrument, from now on the object of study will be referred to as phantom. As for the case of X-ray CT, we will start by obtaining proton radiographs that will be useful by themselves as explained later.
Since the main goal of the proton CT scan is to produce a map of RSP, we need to detect the individual protons that form the beam, once they have gone through the phantom, and determine their trajectory and energy deposited in the phantom. With this aim, we will make use of tracking detectors to determine the trajectories and a calorimeter, a detector that absorbs all the energy of the penetrating particles, to measure the residual energy of the protons. Fig. <ref> shows a sketch of a simple proton-CT scanner. The tracking detectors, in green, are placed at the entrance and exit sides of the object of study, and are meant to determine the entrance and exit point of each proton trajectory. In this work these trajectories are taken as straight lines as a zero-order approximation, although we know that there is always a certain deviation within the object due to multiple Coulomb scattering (lateral deviation of the proton trajectories due to Coulomb interaction with the atomic nuclei). Apart from the trajectory followed, to calculate the RSP of the different materials within the phantom, we need to know the energy deposited by the protons. Since we know the energy of the beam delivered by the accelerator, what we really want to measure is the residual energy of the protons after having passed through the object and tracking detectors. For that, we need to place a calorimeter, or residual-energy detector, right after the rear tracking detector.
For a concise description of the process let us refer to Fig. <ref> again. The proton beam reaches the setup from the left side in the figure and passes through the front position-sensitive detector, the phantom, and the rear position-sensitive detector. From the positions recorded in the tracking detectors we can trace back a straight line, our zero-order approximation for the trajectory. After traversing the rear tracking detector, the protons leave their remaining kinetic energy in the bulk of the calorimeter, at the right-hand side in the figure. Combining the trajectories and residual energy measured for each proton, we can reconstruct tomographic images of the RSP in the bulk of the phantom following any of the methods detailed in <cit.>.
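For illustration, the event-by-event combination just described can be sketched in a few lines of Python. The 3 mm pixel pitch, the 8 cm separation between the trackers and the assumption of straight paths follow the description given in this section; the function names and the numerical example are purely illustrative and do not correspond to our actual analysis code.

import numpy as np

PITCH_MM = 3.0   # DSSD strip pitch
GAP_MM = 80.0    # separation between front and rear DSSD planes

def pixel_to_xy(strip_x, strip_y, n_strips=16):
    # Convert strip indices (0..15) into coordinates centred on the detector
    x = (strip_x - (n_strips - 1) / 2.0) * PITCH_MM
    y = (strip_y - (n_strips - 1) / 2.0) * PITCH_MM
    return x, y

def straight_line_event(front_pix, rear_pix, e_residual, e_beam=100.0):
    # Zero-order reconstruction of one proton: straight path + energy lost
    x0, y0 = pixel_to_xy(*front_pix)
    x1, y1 = pixel_to_xy(*rear_pix)
    x_mid, y_mid = 0.5 * (x0 + x1), 0.5 * (y0 + y1)   # central plane of the phantom
    tx, ty = (x1 - x0) / GAP_MM, (y1 - y0) / GAP_MM   # direction tangents of the straight line
    # Energy lost in phantom + trackers; the DSSD losses can be subtracted
    # using their own spectroscopic signals
    e_lost = e_beam - e_residual
    return (x_mid, y_mid), (tx, ty), e_lost

# Example with made-up values: entrance pixel (7, 8), exit pixel (8, 8),
# 62 MeV left in the calorimeter out of a 100 MeV beam
print(straight_line_event((7, 8), (8, 8), e_residual=62.0))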
Fig. <ref> Panel A shows a 3D-CAD design of our proton-CT scanner. In this sketch one can clearly see the front and rear tracking detectors held by their red supports, the green phantom cylinder between them, and the calorimeter at the right end of the setup. The full setup is enclosed in an opaque box to prevent the passage of light that would produce spurious signals in the tracking detectors. At the right-hand side, Fig. <ref> panel B shows a real picture of the actual setup. Details on the tracking and residual energy detectors are given in what follows.
Double-Sided Silicon Detectors for proton tracking
The tracking detector system comprises two Double-Sided Silicon Strip Detectors (DSSD), manufactured by Micron Semiconductor Ltd. The first DSSD detector is placed directly facing the proton beam, at the front side of the phantom, to determine the entrance point. The second one is placed at the rear position, to determine the exit point of the protons. Both DSSDs are 1 mm thick, and segmented into 16 vertical and 16 horizontal strips, giving a total of 256 pixels of 3×3 mm^2 per tracking detector. The two DSSD detectors were set 8 cm apart from each other, covering a field of view of 48×48×80 mm^3. A full description of these detectors and a very thorough characterisation of their response function to charged particles is given in <cit.>. During the measurements presented in this work, the signals from the DSSDs went through Mesytec preamplifiers and shapers before entering the CAEN V785 ADCs at the data acquisition system.
CEPA4: The Residual-Energy Detector
The calorimeter, or residual-energy detector, used in our scanner is an array of four scintillation units, each comprising two scintillator crystals in a phoswich configuration: 4 cm of LaBr_3(Ce) and 6 cm of LaCl_3(Ce) with a common photomultiplier tube (see Fig. <ref>). The crystals are individually wrapped in reflecting material and close-packed in a 0.5 mm aluminium can. The full detector array, called CEPA4, is a prototype detector for the endcap of CALIFA, the electromagnetic calorimeter of R^3B at FAIR. A full description of the CEPA4 and its response to high-energy proton beams can be found in <cit.>. In all the measurements described in this work, the signals from the photomultipliers were directly acquired by a Mesytec MDPP-16-QDC high-resolution time and charge integrating digitizer at the data acquisition system.
The main advantage of using CEPA4 as the residual-energy detector lies in its energy resolution, which translates into a better contrast in the final RSP image. For protons of 80-130 MeV, which are the relevant energies for our study, the protons are stopped by the first crystal, namely the LaBr_3(Ce) part of the phoswich, and the resolution of CEPA4 improves from 3.5 to 2%. For higher energies the protons penetrate the second crystal, namely the LaCl_3(Ce), and the resolution deteriorates to a maximum of 7%.
§.§ In-beam experiments
Apart from the setup and fine-tuning of the system at the laboratory of Instituto de Estructura de la Materia (IEM-CSIC, Madrid), we have carried out two experiments with proton beams: one at Centro de Microanálisis de Materiales (CMAM, Madrid), and the other at the Centrum Cyklotronowe Bronowice (CCB, Krakow). The former, at CMAM, was a proof-of-concept experiment with low-energy proton beams to test the DSSD as tracking detectors, details on the results can be found in <cit.>. The latter, at CCB, was the first test of the full setup with high-energy protons and is detailed in what follows.
For a realistic test of our proton-CT scanner we used the high-energy proton beams provided by the IBA PROTEUS C235 proton cyclotron at the CCB in Krakow. The latter is part of the Henryk Niewodniczański Institute of Nuclear Physics Polish Academy of Sciences in Krakow (IFJ PAN) and its main focus is the application of cyclotrons in scientific research and tumor radiotherapy. For our measurements we were provided with mono-energetic proton beams at energies 100 and 110 MeV, with an energy spread of 1.5% (FWHM).
The accelerator provided a high-current pencil beam (≈ 1 nA, ≈ 10 mm diameter); for our purposes, however, we needed a low-current fan beam covering the full field of view of our scanner. Thus, we measured using the protons scattered on a 25-μm-thick (11.25 mg/cm^2) titanium foil. The measurement was performed in air, with the proton-CT scanner at an angle of 12.5 degrees with respect to the beam direction; the alignment of the system was done using a laser system provided by the local team. In these conditions, our acquisition rate was kept around 10 kHz (triggered by an OR condition between the 3 detector signals). The energy loss due to the scattering angle and the losses in the Ti foil have been calculated using the GEANT4 Monte Carlo code, while the losses in the DSSD tracking detectors have been directly measured by the detectors themselves, since they also perform well as spectrometers. We used proton beams of 95, 100 and 120 MeV to calibrate the tracking and residual-energy detectors, but the final measurements for radiography and tomography were carried out at 100 and 110 MeV, respectively. A detailed Monte Carlo simulation of the experiment was performed using Geant4 <cit.> to calculate the values of the energy deposited in the different volumes for the three proton-beam energies, which allowed for an accurate calibration in the energy range of interest. A picture of the cyclotron providing the proton beam at CCB and a schematic view of the setup are shown in Fig. <ref>.
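To illustrate how the Geant4-simulated deposits and the measured spectra can be combined, a linear energy calibration of the residual-energy detector could be sketched as follows; the three beam energies are those quoted above, whereas the deposited energies and ADC centroids used below are placeholders, not our measured values.

import numpy as np

# One calibration point per beam energy (95, 100 and 120 MeV)
e_deposited = np.array([90.2, 95.1, 114.8])        # MeV, placeholder Geant4 deposits
adc_centroid = np.array([2410.0, 2542.0, 3065.0])  # channels, placeholder centroids

gain, offset = np.polyfit(adc_centroid, e_deposited, deg=1)  # E = gain*ADC + offset
print(f"E [MeV] = {gain:.4f} x ADC + {offset:.2f}")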
§ RESULTS
§.§ Proton Radiography
As we explained before, in the measuring process for proton CT we will obtain proton radiographs, 2D images that are useful by themselves. While the slices from the tomographic reconstruction hold a direct measurement of the RSP, the plane radiographs hold information on the line integrals of the RSP. These line integrals of the RSP are referred to as the water-equivalent path length (WEPL), and when they are averaged within a certain spatial bin, they turn into the so-called water-equivalent thickness (WET). Proton radiographs, when the spatial resolution allows for it, can be used for patient alignment/positioning. Furthermore, a comparison between a real proton radiograph and a virtual proton radiograph reconstructed from the X-ray CT used for the treatment plan can be a very powerful tool to detect possible proton-range errors due to the conversion of HU to RSP before the treatment.
For our test beam at CCB we used some custom-made phantoms specially designed to test the spatial resolution of the system in realistic conditions. For that, we enclosed our aluminium phantom inserts in a thick PMMA square box (a cube with 50 mm sides). We tested two different patterns: a cross and a point/line regular spatial pattern. A picture of these two aluminium inserts included in the two phantoms can be seen in Fig. <ref>, left column, next to the proton radiographs obtained with our scanner with proton beams of 100 MeV, in the right column. The radiographs reconstructed in the figure were obtained at the central plane of the phantom; the X,Y coordinates were determined by simply averaging the X and Y coordinates at both detector planes, always assuming straight proton trajectories. The colour scale represents the average energy deposited per detected proton. It is important to recall here that the energy deposited in the phantom is not proportional to the RSP but to its line integral along the proton trajectory and averaged within a spatial bin, namely the WET. For a rough estimation of the spatial resolution one can look carefully at the bottom half of Fig. <ref>. In the picture of the phantom, displayed in C, at the left-hand side, the holes in the third row starting from the bottom have a diameter of 2 mm and are 2 mm apart from each other. In the radiograph, at the right-hand side, one can clearly see that these holes are well resolved in the image, whereas the 1 mm holes in the upper row are not. This allows us to conclude that the spatial resolution is better than 2 mm.
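The way the colour scale of these radiographs is built, i.e. the average energy deposited per detected proton in each (X, Y) bin of the central plane, can be sketched as follows; the event arrays are random placeholders standing in for the quantities produced by the event-by-event reconstruction described above.

import numpy as np

rng = np.random.default_rng(0)
n_events = 100_000
x_mid = rng.uniform(-24, 24, n_events)    # mm, central-plane coordinates (placeholders)
y_mid = rng.uniform(-24, 24, n_events)
e_lost = rng.normal(35.0, 3.0, n_events)  # MeV deposited in the phantom (placeholders)

bins = np.linspace(-24, 24, 49)           # 1 mm pixels over the 48x48 mm^2 field of view
e_sum, _, _ = np.histogram2d(x_mid, y_mid, bins=[bins, bins], weights=e_lost)
counts, _, _ = np.histogram2d(x_mid, y_mid, bins=[bins, bins])
# Mean deposited energy per proton in each pixel: a WET-like radiograph
radiograph = np.divide(e_sum, counts, out=np.zeros_like(e_sum), where=counts > 0)
print(radiograph.shape)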
A detailed description of the radiography measurements and results has already been published in <cit.>. The quality of the images was studied via a Modulation Transfer Function (MTF) analysis using the profiles obtained with the regular spatial pattern (holes in C and D panels in Fig. <ref>). The MTF is a measure of the capability of our device to transfer contrast at a particular resolution from the object to the image. In other words, the MTF is a way to incorporate resolution and contrast into a single specification. In this study, the MTF is calculated as the contrast (in percentage of grey level) in the image between one hole and the Aluminum spacing, and it is represented, in Fig. <ref>, as a function of the number of line pairs (hole-spacing pairs in our case) per mm. Looking at the resolved lines in Fig. <ref>, and the MTF analysis shown in Fig. <ref>, we concluded that the spatial resolution of the device is better than 2 mm and the MTF-10% = 0.3 line pairs / mm, comparable to those of other existing devices (e.g. 0.35 in <cit.> from a tomographic image). For more details the reader is referred to <cit.>.
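A minimal sketch of such an MTF estimate is given below: the contrast of each hole/spacing pattern is taken as the modulation, in per cent of grey level, between the mean intensity over the holes and over the aluminium spacing, plotted against the corresponding number of line pairs per mm. The profile values used here are placeholders, not the measured ones.

import numpy as np

def mtf_point(profile, hole_mask, pitch_mm):
    # Contrast (%) of one hole/spacing pattern at 1/(2*pitch) line pairs per mm
    i_hole = profile[hole_mask].mean()
    i_space = profile[~hole_mask].mean()
    contrast = 100.0 * abs(i_hole - i_space) / (i_hole + i_space)
    return 1.0 / (2.0 * pitch_mm), contrast

# Example: row of 2 mm holes separated by 2 mm of aluminium (placeholder profile)
profile = np.array([40.0, 42.0, 80.0, 78.0, 41.0, 43.0, 79.0, 77.0])
hole_mask = np.array([False, False, True, True, False, False, True, True])
print(mtf_point(profile, hole_mask, pitch_mm=2.0))   # (0.25 lp/mm, ~31% contrast)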
§.§ Proton Tomography
As far as the 3D image reconstruction is concerned, we are currently implementing different algorithms from those described in the very detailed review of <cit.>. Furthermore, we are concerned with different approaches to correct for the multiple Coulomb scattering effect in the phantom. In that respect, a solution based on the use of neural networks has been recently published for the 2D radiographs in <cit.> and we are considering a similar approach. However, for the purpose of this work, with emphasis in the validation of the device as proton-CT scanner, we will only present images obtained with a simple filtered back-projection algorithm, using a ramp filter, that assumes straight paths for the protons inside the phantom.
With the aforementioned approach, we have performed two different measurements, one to work on the different reconstruction algorithms and estimate the spatial resolution of the system, and another one to check its capability to resolve the proton RSP values of different materials. For these measurements we designed two different phantoms based on PMMA cylinders. The first one was a Derenzo-like pattern with holes of 7, 5 and 3 mm diameter and with separations of the same length. A picture of this phantom and a schematic top view are shown at the leftmost panels of Fig. <ref>. In order to take several projections at different angular positions, the phantom was placed on a rotatory platform connected to a step motor. The measurements were carried out at a proton energy of 110 MeV.
The A, B, C and D panels of Fig. <ref> are the filtered back-projected images obtained for the four different sets of measurements that were carried out: A) 10 projections of 20 minutes each, in steps of 18^∘; B) 20 projections of 5.5 minutes each, in steps of 9^∘; C) 20 projections of 20 minutes each, in steps of 9^∘; D) 100 projections of 5.5 minutes each, in steps of 1.8^∘. The total number of projections of each measurement shown in the figure always cover half a turn, i.e., 180^∘. During these measurements the proton current was stable at around 1 nA and, at this intensity, we counted ≈700 triple coincidences per second (front DSSD and rear DSSD and Calorimeter). In these conditions, the projections of 5.5 minutes recorded ≈2.3× 10^5 events, whereas the projections of 20 minutes recorded ≈8.4× 10^5. The difference in statistics per projection, as well as the different number of projections affect considerably the image quality. Looking at the four images of Fig. <ref> we can clearly appreciate, firstly, that the image with the lowest number of projections, panel A), does not reproduce fairly the pattern, since one of the cylinders of 7 mm has not the shape of a cylinder and two of the cylinders of 3 mm are blurred and practically absent. Secondly, panel B) shows the image with low statistics per projection (5.5 min) but 20 projections in total, and it already reproduces fairly well the pattern, since all cylinders are seen with the right shape and position. Going from A) to B) shows that the effect of lowering the statistics per projection is well compensated by taking a higher number of projections. The third panel C) keeps the same number of projections than B) but increasing the statistics per projection and the improvement is obvious. Finally, we took a longer measurement of 100 projections of lower statistics that is shown in D). In this case the result is more uniform, but we do not see a better resolution than in the previous image, indicating that, with a proper measurement of a uniform cylinder for normalization, 20 projections covering 180^∘ is an acceptable sampling for our purposes. A far deeper study of the effects on the quality of the images due to different sampling rates, statistics, addition of subsets or reconstruction algorithms, will be published soon <cit.>. The limitation in statistics/time in this study was due to the high dead time of the data acquisition system. However, recently we have carried out a new series of measurements with the same system but an improved electronic setup and digitization configuration, being able to take similar images with less than 10% of dead time at counting rates of 45 kHz. This compares well to other similar devices in the field (see Table 1 of Ref. <cit.> for a complete list).
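For reference, the reconstruction used here (ramp-filtered back-projection of straight-line projections) can be written compactly with numpy only; this is a minimal sketch, not our actual reconstruction code, and the synthetic sinogram of an off-centre disc is included solely so that the snippet runs standalone.

import numpy as np

def fbp(sinogram, angles_deg):
    # sinogram: (n_angles, n_bins), one projection per angle
    n_angles, n_bins = sinogram.shape
    # 1) ramp filter applied row by row in Fourier space
    ramp = np.abs(np.fft.fftfreq(n_bins))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # 2) back-project each filtered profile along its direction
    x = np.arange(n_bins) - (n_bins - 1) / 2.0
    xx, yy = np.meshgrid(x, x)
    image = np.zeros((n_bins, n_bins))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta)   # detector coordinate of each pixel
        image += np.interp(t, x, proj)
    return image * np.pi / n_angles

# Synthetic sinogram of an off-centre uniform disc, 20 projections over 180 degrees
angles = np.arange(0.0, 180.0, 9.0)
n_bins = 101
x = np.arange(n_bins) - 50.0
sino = np.array([2.0 * np.sqrt(np.clip(15.0**2 - (x - 10.0 * np.cos(np.deg2rad(a)))**2, 0.0, None))
                 for a in angles])
reconstruction = fbp(sino, angles)
print(reconstruction.shape, round(float(reconstruction.max()), 2))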
Beyond the capability to produce images with a high resolving power, our main goal in this work is to produce reliable RSP maps. In this context, the energy resolution of the residual-energy detector is crucial, since the energy deposited by the protons in the traversed volumes depends completely on the RSP of the material. This is why our proton-CT scanner, even though it is made of detectors originally designed for other purposes, is very promising in terms of RSP mapping, since the residual-energy detector is made of high-resolution scintillators. To test the RSP mapping capabilities of our setup we designed a special phantom, a PMMA cylinder of 60 mm diameter with two inserts of 9 mm diameter each that can be filled with different liquids, gels or powders. A picture of this phantom is shown in the leftmost panel of Fig. <ref>.
We performed proton scans at 110 MeV of the phantom with the inserts filled with ethanol and water. We took 10 projections of 20 minutes each, in steps of 18^∘, covering 180^∘ in total. As with the previous scans of the Derenzo-like phantom, we have used a simple filtered back-projection with the ramp filter to reconstruct the images.
The rightmost panel of Fig. <ref> shows the four regions of interest that have been defined to study each material present in our phantom, one region of water, one of ethanol and two regions covering the PMMA matrix. The reconstructed image was consequently normalised to water in order to estimate the RSP of PMMA and ethanol. The resulting values are shown in Table 1, where they are compared with the experimental values reported in Ref. <cit.>, that were measured using proton beams of 149 MeV. The values and uncertainties of the RSP of PMMA and ethanol have been obtained, after the normalisation of the image with respect to the region of water (R4 in Fig. <ref>), as the mean and the standard deviation of the RSP values obtained inside the respective regions indicated in Fig. <ref> as R1 and R2 for PMMA and R3 for ethanol. The last column of Table 1 shows the relative difference between the present RSP values and those taken from <cit.> as reference values, being in both cases of the order of 1%.
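The normalisation and ROI analysis leading to Table 1 can be sketched as follows; the reconstructed slice and the circular regions below are synthetic placeholders, the point being only to show how the image is rescaled so that the water region averages to RSP = 1 before taking the mean and standard deviation in each region.

import numpy as np

def rsp_in_rois(rec, roi_masks, water_key="water"):
    # Normalise the reconstructed slice to the water region, then report
    # mean and standard deviation of the RSP inside each region of interest
    rsp_map = rec / rec[roi_masks[water_key]].mean()
    return {name: (rsp_map[mask].mean(), rsp_map[mask].std())
            for name, mask in roi_masks.items()}

# Synthetic 101x101 slice: PMMA-like background with water and ethanol-like inserts
n = 101
yy, xx = np.mgrid[:n, :n] - 50
rec = np.full((n, n), 1.16)
rec[(xx + 15)**2 + yy**2 < 45] = 1.00     # "water" insert
rec[(xx - 15)**2 + yy**2 < 45] = 0.85     # "ethanol" insert
rois = {"water": (xx + 15)**2 + yy**2 < 45,
        "ethanol": (xx - 15)**2 + yy**2 < 45,
        "pmma": (xx**2 + yy**2 > 1600) & (xx**2 + yy**2 < 2000)}
print(rsp_in_rois(rec, rois))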
Our results and those of Ref. <cit.> are in agreement within the uncertainties. The resulting proton RSP map from our test beam is satisfactory for a first experiment. However, the relative differences are not negligible, definitely a bit worse than those reported e.g. in the recent work of Dedes et al. <cit.>, and the relative errors on our values are 8% for ethanol and 4% for PMMA, far worse than those of <cit.>. We remind the reader that this was the first test of this proton-CT scanner, which is made of detectors originally designed for different applications. This RSP map was obtained with only 10 projections and without any uniformity correction. Taking the optimal 20 projections and correcting the data with a dedicated measurement of a uniform PMMA phantom will decrease the uncertainties and improve both the spatial and RSP resolution and accuracy.
§ DISCUSSION
The results shown in the previous section validate our setup as a prototype proton-CT scanner. The spatial resolution of our setup has room for improvement; for instance, there are DSSDs on the market with much higher granularity, and position-sensitive photomultipliers for the scintillator, but for a proof of concept we have demonstrated that our scanner can resolve 2 mm holes in radiography and 3 mm in tomography images. Unfortunately, we did not expect such a good resolution in tomography mode, and this is why we did not build a Derenzo-like cylinder with smaller holes to really find the limit. Therefore, we can say that the spatial resolution is better than 3 mm but, at this stage, we cannot state how much better. The excellent energy resolution of the scintillator crystals used for the residual-energy detector, namely the LaBr_3-LaCl_3 phoswich detectors, allows for a fairly good resolving power in RSP in the tomographic images, and it was shown how the RSP of ethanol and PMMA materials can be reproduced accurately, although there is room for improvement in terms of precision. Additional tests will be performed to evaluate more deeply the potential of our first prototype in tomography and to reduce the relative uncertainties in our reconstructed RSP values.
The main concern during the first set of measurements presented here was the dead time of the acquisition system, which only allowed for measurements at low counting rates (below 10 kHz), meaning very long scanning times. This would be a showstopper for the future of our device as a proton-CT scanner; recently, however, we have optimised our electronics and data acquisition system and carried out a new set of measurements with different phantoms. With some improvements at the digitisation level, we have been able to take images with less than 10% of dead time at counting rates of 45 kHz, far closer to clinical levels. This translates into a much faster system, capable of taking the images presented here in a few minutes rather than hours, and compares well to other similar devices in the field.
This work has been mainly supported by the PRONTO-CM B2017/BMD-3888 project (Comunidad de Madrid, Spain) that has sponsored J.A. Briz and A.N. Nerio. The experiments have been carried out with the support of the European Union Horizon 2020 research and innovation programme under grant agreement no. 654002 (ENSAR2) and grant agreement No [730983] (INSPIRE). This publication is also part of the R&D grants PID2019-104714GB-C21, PID2019-104390GB-I00 and PDC2022-133382-I00, funded by MCIN/ AEI/10.13039/501100011033 (Spanish Ministry of Science) and grant CIPROM/2021/064 from Generalitat Valenciana. The authors want to express their gratefulness to the CCB crew for their unconditional help during the data taking.
|
http://arxiv.org/abs/2307.04162v1 | 20230709125749 | A threshold model of plastic waste fragmentation: New insights into the distribution of microplastics in the ocean and its evolution over time | [
"Matthieu George",
"Frédéric Nallet",
"Pascale Fabre"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.mtrl-sci"
] |
Laboratoire Charles-Coulomb, UMR 5221 CNRS – université de Montpellier, Campus Triolet, Place Eugène-Bataillon – CC069, F-34095 Montpellier Cedex 5 – FRANCE
Centre de recherche Paul-Pascal, UMR 5031 CNRS – université de Bordeaux, 115 avenue du Docteur-Schweitzer, F-33600 Pessac – FRANCE
Email for correspondence: [email protected]
Plastic pollution in the aquatic environment has been assessed for
many years by ocean waste collection expeditions around the globe or
by river sampling. While the total amount of plastic produced
worldwide is well documented, the amount of plastic found in the
ocean, the distribution of particles on its surface and its evolution
over time are still the subject of much debate. In this article, we
propose a general fragmentation model, postulating the existence of a
critical size below which particle fragmentation becomes extremely
unlikely. In the frame of this model, an abundance peak appears for
sizes around 1mm, in agreement with real environmental data. Using, in
addition, a realistic exponential waste feed to the ocean, we discuss
the relative impact of fragmentation and feed rates, and the temporal
evolution of microplastics (MP) distribution. New conclusions on the
temporal trend of MP pollution are drawn.
A threshold model of plastic waste fragmentation: new insights into the distribution of microplastics in the ocean and its evolution over time

Matthieu George, Frédéric Nallet, Pascale Fabre

August 12, 2023
§ INTRODUCTION
Plastic waste has been dumped into the environment for nearly 70
years, and more and more data are being collected in order to quantify
the extent of this pollution. Under the action of degradation agents
(UV, water, stress), plastic breaks down into smaller pieces that
gradually invade all marine compartments. If the plastic pollution
awareness initially stemmed from the ubiquitous presence of
macro-waste, it has now become clear that the most problematic
pollution is “invisible” i.e. due to smaller size debris, and
the literature exploring microplastics (MPs, size between 1 μm and
5 mm) and nanoplastics (NPs, size below 1 μm) quantities and
effects is rapidly increasing. The toxicity of plastic particles being
dependent on their size and their concentration, it is crucial to know
these two parameters in the natural environment to better predict
their impacts. While the total amount of plastic produced worldwide
is well-documented <cit.>, the total amount of plastic
found in the ocean and its time evolution are still under debate:
while many repeated surveys and monitoring efforts have failed to
demonstrate any convincing temporal trend <cit.>,
increasing amounts of plastic are found in some regions, especially in
remote areas, and a global increase from ca. 2005 has been
suggested <cit.>. Still, some features can be drawn from
the available data from the
field <cit.>
about the size distribution of plastic particles. When browsing the
sizes from the largest to the smallest, a first abundance peak is
observed around
1 mm <cit.>. Between
1 mm and approximately 150 μm, very few particles are
found <cit.>. The abundance increases again from
150 μm down to 10 μm, with an amount of particles which is
several orders of magnitude larger than what is found around
1 mm <cit.>. To the best of our knowledge,
the physical reason <cit.> for the existence of
two very different size classes for microplastics (small MP
<150 μm, large MP between 1 and 5 mm) is that there are two
fragmentation pathways: i) bulk fragmentation with iterative
splitting of one piece into two daughters for large MPs, and
ii) delamination and disintegration of a thin surface layer
(around 100 μm depth) into many particles for small MPs. This
description does not, however, explain the deficit of microplastics of
sizes between 150 μm and 1 mm. Early authors attempted to
describe the large MP distribution by invoking a simple iterative
fragmentation of plastic pieces into smaller objects, conserving the
total plastic
mass <cit.>,
in accordance to pathway i). These models lead to a
time-invariant power-law dependence of the MP abundance with size
(refer to Supplementary Information <ref> for an
elementary version of such models), which is in fair agreement with
experimental observations for large MP. However, they fail to describe
the occurrence of an abundance peak and the subsequent decrease of the
number of MP when going to smaller sizes. Other mechanisms such as
sinking, ingestion, etc. have been invoked to qualitatively explain
the absence of particles smaller than 1 mm. Very recently, two papers
have addressed this issue using arguments related to the fragmentation
process itself. Considering the mechanical properties of a
one-dimensional material (flexible and brittle fibres) submitted to
controlled stresses in laboratory mimicking ocean turbulent flow,
Brouzet et al <cit.> have shown both theoretically
and experimentally in the one-dimensional case that smaller pieces are
less likely to break. Aoki and Furue <cit.> reached
theoretically the same conclusion in a two-dimensional case using a
statistical mechanics model. Note that both approaches are based on
the classical theory of rupture, insofar as plastics fragmenting at
sea have generally been made brittle by a long exposure to UVs.
In this paper, we also explore pathway i), leaving pathway ii) out of
focus, since the delamination process directly produces very
small plastic pieces. Regardless of the fracture-mechanics details,
i.e. the specific characteristics of the plastic waste (shape,
elastic moduli, aging behavior) and the exerted stresses, we postulate
the existence of a critical size below which bulk fragmentation
becomes extremely unlikely. Since many of the microplastics recovered
from the surface of the ocean are film-like objects (two dimensions
exceeding by a large margin the third one) like those coming from
packaging, we construct the particle size distribution over time based
on the very idea of a universal failure threshold for breaking
two-dimensional objects. A very simple hand-waving argument from
everyday's life that illustrates this breaking threshold, is that the
smaller a parallelepipedic piece of sugar is, the harder it is to
break it, hence the nickname sugar lump model used in this
paper. Unlike many previous models, which make the implicit
assumption of a stationary distribution, we explicitly describe
the temporal evolution of the large MP quantity (see
Sections <ref> and <ref>). Moreover, by
injecting a realistic waste feed into the model, we discuss the
synergistic effect of feeding and fragmentation rates on the large MP
distribution, in particular in terms of evolution with time, and
compare to the observed data in Section <ref>.
§ FRAGMENTATION MODEL WITH THRESHOLD
The sugar lump iterative model implements the two following
essential features: a size-biased probability of fragmentation on the
one hand, and a controlled waste feed rate on the other
hand. Initially, a constant feeding rate is used in the model. In a
second step, the more realistic assumption of an exponentially growing
feeding rate is introduced and discussed in comparison with field data
(See Section <ref>).
At each iteration, we assume that the ocean is fed with a given amount
of large parallelepipedic fragments of length L_init,
width ℓ_init and thickness h, where h is much
smaller than the other two dimensions and length L_init
is, by convention, larger than width ℓ_init. At each
time step, every fragment potentially breaks into two parallelepipedic
pieces of unchanged thickness h. The total volume (or mass) is kept
invariant during the process. In addition, we assume that, if the
fragment ever breaks during a given step, it always breaks
perpendicular to its largest dimension L: A fragment of dimensions
(L, ℓ, h) thus produces two fragments of respective
dimensions (ρ L,ℓ,h) and ([1-ρ]L,ℓ,h),
ρ being in our model a random number between 0 and 0.5. Note
that, depending on the initial values of L,ℓ and ρ, one or
both of the new dimensions ρ L and [1-ρ]L may
become smaller than the previous intermediate size ℓ: the
fragmentation of a film-like object, at contrast to the case of a
fibre-like object, is not conservative in terms of its largest
dimension <cit.>. Furthermore, in
order to ensure that the fragment thickness h remains (nearly)
constant all along the fragmentation process, ρ values leading to
ρ L or (1-ρ)L significantly less than h are
rejected in the simulation. This obviously introduces a short length
scale cutoff, in the order of h, and a limiting, nearly cubic, shape
for the smaller fragments (an “atomic limit”, according to the
ancient Greek meaning).
A second length scale, L_c, also enters the present model,
originating in the mechanical sugar lump approach, described
heuristically by means of a breaking efficiency E(L)
sigmoidal in L. For the sake of convenience, this efficiency is
built here from the classical Gauss error function. It is therefore
close to 1 above a threshold value L_c (chosen large enough compared
to h) and close to 0 below L_c. A representative example is shown
in Fig. <ref>, with L_c/h=100. Note that throughout
this paper, all lengths involved in the numerical model will be scaled
by the thickness h.
Qualitatively speaking, this feature of the model means that when
the larger dimension L is below the threshold value L_c, fragments
will “almost never” break, even if they haven't reached yet the
limiting (approximately) cubic shape of fragments of size ≈
h. For the sake of simplicity, the threshold value is assumed
not to depend on plastic type or on residence time in the
ocean, considering that weathering occurs from the moment the waste is
thrown in the environment and quickly renders all common plastics
brittle. A unique L_c is thus used for all fragments.
Technical
details about the model are given in supplementary
information <ref>.
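Before turning to the results, a minimal sketch (in Python) of one possible implementation of this iteration may help fix ideas. The error-function efficiency and the value L_c/h = 100 are those used in this paper; the sigmoid width, the feeding rate, the initial aspect ratio and the number of iterations are arbitrary illustrative choices, the actual simulation parameters being given in the supplementary information.

import numpy as np
from math import erf

L_C = 100.0       # breaking threshold L_c, in units of the thickness h
WIDTH = 20.0      # width of the sigmoid (illustrative choice)
L_INIT = 1000.0   # initial largest dimension (~1-5 cm for h = 10-50 um)
ELL_INIT = 500.0  # initial width (illustrative choice)
N_FEED = 50       # fragments fed to the ocean at each iteration

def efficiency(L):
    # Breaking efficiency E(L): close to 1 well above L_c, close to 0 below
    return 0.5 * (1.0 + erf((L - L_C) / WIDTH))

def step(fragments, rng):
    # One iteration: constant feeding, then each fragment may break in two
    fragments = fragments + [(L_INIT, ELL_INIT)] * N_FEED
    new = []
    for a, b in fragments:
        L, ell = max(a, b), min(a, b)        # break along the largest dimension
        if L >= 2.0 and rng.random() < efficiency(L):
            rho = rng.uniform(1.0 / L, 0.5)  # reject cuts thinner than h (= 1)
            new += [(rho * L, ell), ((1.0 - rho) * L, ell)]
        else:
            new.append((L, ell))
    return new

rng = np.random.default_rng(1)
fragments = []
for t in range(15):                          # ~15 "years" of feeding and breaking
    fragments = step(fragments, rng)
sizes = np.array([max(f) for f in fragments])
print(len(fragments), np.median(sizes))      # abundance builds up around L_c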
§ RESULTS AND COMPARISON WITH FIELD DATA
In this whole section, we discuss the results obtained with the
sugar lump model and systematically compare with what we call
the standard model
<cit.>, that is to
say the case where fragments always break into two (identical) pieces
at each generation, whatever their size. Whenever possible and
meaningful, we also compare our results with available field
data. Therefore, one needs to assign a numerical correspondence
between the physical time scale and the duration of a step in the
iterative models. The fragmentation rate of plastic pieces can be
assessed using accelerated aging
experiments <cit.>. The
half-life time, corresponding to the time when the average particle
size is divided by 2, has been found to be around 1000 hours, which roughly
corresponds to one year of solar exposure <cit.>. Hence,
the iterative step t used in all following sections can be
considered to be in the order of one year. For typical plastic film
dimensions, it is reasonable to assume that the thickness h is
between 10 and 50 μm, and the initial largest lateral dimension
L_init is in the range of 1 to 5 cm. These characteristic
lengths, together with the other length scales involved in this paper
are positioned relative to each other in Fig. <ref>.
§.§ Evolution of the size distribution and of the total abundance of fragments with time
The size distribution of plastic fragments over time is represented in
Fig. <ref> for the sugar lump and
confronted to the standard model size distribution. The origin
of time corresponds to the date when the very first plastic waste was
dumped into the ocean.
According to the standard model (see
Eq. (<ref>), Section <ref>), the
amount of particles as a function of their size follows a power law of
exponent -2 which leads to a divergence of the number of particles
at very small size (dotted line in Fig
<ref>). For large MP, the prediction of
the sugar lump model is broadly similar, i.e. following
the same power law. By contrast, the existence of a mechanism
inhibiting the break of smaller objects, as introduced in the
sugar lump model, does lead to the progressive built of an
abundance peak for intermediate size fragments due to the accumulation
of fragments with size around L_c (see Section <ref> for
details). Moreover, the particle abundance at the peak increases with
time while the peak position shifts towards smaller size classes. This
shift is fast for the first generations, and then slows down when time
passes: Fig. <ref>. The inset in
Fig. <ref> shows how the existence of a
breaking threshold significantly slows down the production of very
small particles compared to the standard model. As can be observed
from the inset in Fig. <ref>, the peak
position L_peak^th, around L_c, decreases in
a small range typically between 1.5L_c and 0.5L_c for time periods
up to a few tens of years.
Let us now discuss the comparison to the experimental data. A
sample of various field data from different authors
<cit.>
is displayed in Fig. <ref>. In order to obtain a
collapse of the data points for large MPs, a vertical scaling factor
has been applied, since abundance values from different sources can
not be directly compared in absolute units. The two main features of
these curves are: A maximum abundance at a value of a few millimeters
(indicated by a grey zone) and the collapse of the data points onto a
single 1/L^2 master curve (indicated by a dashed line).
The threshold value L_c is presumably defined by the energy balance
between the bending energy required for breaking a film and the
available turbulent energy of the ocean. The bending energy depends on
the film geometry and on the mechanical properties of the weathered
polymer. As shown by Brouzet et
al <cit.>, for a fiber (1D), the
threshold L_c is proportional to the fiber diameter d and varies
as
L_c= kE^1/4/(ρηϵ)^1/8d
where E is the Young modulus of the brittle polymer fiber, ρ
and η are the mass density and viscosity of water, ϵ is
the mean turbulent dissipation rate and k is a prefactor in the
order of 1. In two dimensions, the expression for the threshold L_c
is more complex, since it depends both on the width ℓ and
thickness h of the film. However, based on 2D mechanics, one can
show that the order of magnitude and h-dependency for L_c remain
the same as in 1D, while the prefactor slightly varies with
ℓ. Reasonable assumptions on film geometry, mechanical properties
of weathered brittle plastic and highly turbulent ocean events, such
as made by Brouzet et al. <cit.> allow us to
evaluate that L_c/h ≈ 100. For films of typical thicknesses
lying between 10 and 50 μm, this gives a position of the peak
between 1 and 5 mm in good agreement with the field data represented
in Fig. <ref>. It is also interesting to
discuss the power law exponent value exhibited by both standard
and sugar-lump models at large MP sizes. In time-invariant
models, the theoretical exponent actually varies with the
dimensionality of the considered objects (fibres, films, lumps)
ranging from -1 (fibres) to -3 (lumps). As expected, when the
objects dimensionality is fixed, the value -2 observed in
Fig. <ref> for the sugar-lump
model is due to the hypothesis of film-like pieces breaking along
their larger dimension only, keeping their thickness constant. In the
same way, regarding the laboratory experiments performed on glass
fibres <cit.>, the large MP distribution is compatible in
the long-time limit with the expected -1 power
law [provided that, of course, the depletion of very large
objects that originates from the absence of feeding is disregarded.].
Coming back to the field data as displayed in
Fig. <ref>, one can note that for large MP all
data points collapse onto a single 1/L^2 master curve. This suggests
that either most collected waste comprises film-like objects breaking
along their larger dimension only, or, perhaps more likely, that one
collects a mixture of all three types of objects leading to an
“average” exponent, obviously lying somewhere between -1 and -3,
that turns out to be close to -2. The total abundance N_tot of fragments (all sizes included) as a function
of time is represented in Fig. <ref> for both the
sugar lump and standard models. In the latter case, the
abundance is simply described by an exponential law: N_tot = [2^(t+1)-1] N_0 when the ocean is fed by
a constant number N_0 of (nearly identical) large fragments
per iteration (Eq. <ref>,
Section <ref>). The sugar lump model predicts a
time evolution which deviates from the standard model
prediction: The increase of total abundance slows down with time, due
to the hindering of smaller fragments production, and the effect is
all the more pronounced for larger threshold parameters L_c, as
could have been expected. In the realistic case where L_c/h ≈
100, the increasing rate of fragments production becomes very small
for the largest feeding times, as can be observed in
Fig. <ref> which shows that the number of MP would be
multiplied every ten years by only a factor 2, compared to a factor of
1000 in the standard model. These theoretical results might explain
why no clear temporal trend is observed in the field
data <cit.>.
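For completeness, the expression quoted above simply follows from summing the successive generations: the N_0 fragments fed k iterations earlier have each undergone k doublings, so that

N_tot(t) = ∑_{k=0}^{t} N_0 2^k = [2^(t+1)-1] N_0 .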
§.§ Role of the mesh size on the size distribution and on its temporal evolution
If one wants to go further in confronting models to field data, one
needs to take into account that the experimental collection of
particles in the environment always involves an observation window,
and in particular a lower size limit L_mesh, e.g.
due to the mesh size of the net used during ocean campaigns. The very
existence of a lower limit leads to the appearance of transitory and
steady-state regimes for the temporal evolution of the number of
collected particles, as will be shown below. In the
standard model case, when the feeding and breaking process
starts, larger size classes are first filled, while smaller size
classes are still empty (Fig. <ref>,
Section <ref>). As long as the smaller fragments
produced by the breaking process are larger than the lower size limit
L_mesh of the collection tool, the number of collected
fragments increases with time, de facto producing a transitory
regime in the observed total abundance. The size of the smaller
fragments reaches L_mesh after a given number of
fragmentation steps corresponding to the duration of the transitory
regime:
t_c≈2ln(L_init/L_mesh)/ln2
where L_init is the initial largest dimension of the
plastic fragments released into the ocean. From this time onward,
both the size distribution and total number of collected fragments in
the observation window no longer change. Even though the production of
fragments smaller than L_mesh continues to occur, as well
as the continuous feeding of large-scale objects, one therefore
observes a steady-state regime. This is illustrated in
Fig. <ref> for
two different values of the mesh size L_mesh (filled
symbols ∙ and ▪).
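As a short worked example of the expression for t_c: each break halves the film area at constant thickness, so the typical lateral size decreases by a factor √2 per iteration, L(t) ≈ L_init 2^(-t/2). Setting L(t_c) = L_mesh gives the expression above; for L_init ≈ 1 cm and L_mesh = 330 μm (the mesh size quoted below), one obtains

t_c ≈ 2 ln(10000/330)/ln2 ≈ 10 iterations,

i.e. about ten years with the one-year time step adopted here.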
For the sugar lump model case, one needs to also consider
the size threshold length scale L_c, below which fragmentation is
inhibited. When L_c is much smaller than L_mesh, the
threshold length L_c is not in the observation window, hence the
analysis is the same as in the standard case. At contrast,
when L_c is close to L_mesh or larger, the transitory
regime is expected to exhibit two successive time dependencies. This
behavior is displayed in
Fig. <ref> (open
symbols ∘ and □) for the same mesh size values as in the
standard model for comparison. At short times, since the
smaller fragment size has not reached yet the breaking threshold
L_c, the number of collected fragments follows the same law as in
the standard case. When the smaller fragments get close to the
size L_c, however, the inhibition of their breaking creates an
accumulation of fragments around L_c, hence the abundance peak. As a
consequence, the increase in the total number of fragments slows
down. Since the abundance peak position shifts towards smaller values
with time (Fig. <ref>, inset) albeit
slowly, a final stationary state should be observed when the abundance
peak position becomes significantly smaller than
L_mesh. As shown in
Fig. <ref>, this
occurs within the explored time window for large L_mesh
(∘), but the stationary state is not observed for small
L_mesh (□), presumably because our simulation has
not explored times large enough. When the steady-state regime is
reached, the number of fragments above L_mesh,
i.e. likely to be collected, remains constant with a value
larger than that of the standard model, due to the overshoot induced
by the accumulation on the right-hand side of the peak.
Let us recall that the characteristic fragmentation time, defined
as the typical duration for a piece to break into two, has been
evaluated at one year. In the case of the standard model, this
means that the size of each fragment is reduced by a factor 30 in
about 10 years. Therefore, starting with debris sizes of the order of a
centimeter, small MPs with sizes comparable to the mesh size (330 μm in
Fig. <ref>) will be obtained within only 10 years. Thus, 10
years correspond to the duration of the transitory regime t_c
established in Eq. (<ref>), and the oceans should
by now be far into the steady-state regime, since pollution started in
the 1950s. It is, however, now widely accepted that the
standard (steady-state) model fails to describe the size
distribution of the field data. By contrast, the sugar
lump model predicts the existence of an abundance peak, in agreement
with what is observed during collection campaigns. This peak is due to
the accumulation of fragments whose size is of the order of the
breaking threshold L_c. As discussed in paragraph <ref>, the failure threshold L_c can be soundly estimated
to lie between 1 and 5 mm. Comparison with field data then
corresponds to the case where L_c is about ten times larger than the
mesh size L_mesh. As just shown in
Fig. <ref>, this
implies a drastic increase in the duration of the transitory regime, which
can be estimated to exceed 100 years. These considerations lead us
to the important conclusion that we are, even today, still in the
transitory regime. Moreover, the sugar lump model also implies
that the total abundance is correctly estimated through field data
collection, i.e. that it is not biased by the mesh
size. Because the peak position slowly shifts towards smaller sizes,
the mesh size will eventually play a role, but at some much later
point in time. Finally, let us recall that this paper does not take
delamination processes into account, so the previous statement only holds
for millimetric debris, that is to say debris produced through
fragmentation; micrometric debris might exhibit a
completely different behavior and is probably much more numerous.
§.§ Constant versus exponential feeding
In the results discussed in Section <ref>, it was
assumed that the rate of waste feeding in the ocean is constant with
time. However, it is common knowledge that the production of plastics
has increased significantly since the 1950's. Geyer et
al <cit.> have shown that the discarded waste follows the
same trend. Data from the above-quoted article have been extracted and
fitted in Fig. <ref> and Fig. <ref> with
exponential laws N = N_0(1+τ)^t for plastic production and discarded
waste, respectively, where τ represents an annual growth rate.
For plastic production, the annual growth rate is
found to be about 16% until 1974, the year of the oil crisis, and close
to 6% after 1974, with perhaps a further decrease of the
rate in recent years. Not unexpectedly, the same trends are found
when considering the discarded waste, with growth rates of
17% and 5%, respectively. In order to discuss the effects of an
increasing waste feeding of the ocean, we inject, for simplicity, a
single exponential with an intermediate rate of 7% into the two
models.
When comparing this feeding law and the standard fragmentation
law [2^t+1-1] N_0, one easily concludes that the total
number of plastic items in the ocean is mainly determined by the
fragmentation rate, regardless of the feeding rate. In order to
verify what happens in the case of the sugar lump model, where
the fragmentation process is hindered, the size distributions for both
feeding hypotheses are numerically compared in
Figs. <ref> and
<ref>,
respectively after 14 and 40 years.
It can be observed that at short times, the size distribution is very
little altered by the change in feeding. At longer times, a
significant increase of the amount of the largest particles can be
observed, while the amount of small particles increases much
less. Besides, the position of the abundance peak is barely
shifted.
The total amount of fragments is represented in
Fig. <ref> for the
standard and sugar lump models for the two feeding cases
considered. For exponential feeding, the sugar lump model still
predicts a significant decrease in the rate of fragment generation
over time, even though one might have expected exponential feeding
to cancel out this slowdown.
The conclusions drawn above (Section <ref>)
therefore remain valid in the more realistic case of exponential
feeding.
Finally, one should keep in mind that, although the feeding rate is a
reasonable indicator of plastic pollution, since it describes the
evolution over time of the total mass of plastic present in
the ocean, it is not enough to describe that pollution properly.
For a given mass, the number–hence size–of particles produced is the
major factor in assessing potential impacts. Indeed, the smaller the
size, the larger the particle number concentration, the larger their
specific area (hence their adsorption ability), and the larger the
ensuing eco-toxicity. It is shown here that the mass of waste roughly
doubles every 10 years, whereas the number of particles doubles every
year, making fragmentation the main factor driving plastic pollution
and impacts. Many studies are devoted to establishing a mass balance
and understanding the fluxes of plastic waste
<cit.>, but even in the
case of a drastic and immediate reduction of waste production, plastic
pollution and its impacts will affect ocean life for many years
to come, due to fragmentation.
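The contrast between these two timescales is easy to check. With an annual feeding growth of about 7%, the mass (1+τ)^t doubles in ln 2/ln(1.07) ≈ 10 years, while the fragment number, growing roughly as 2^t for the one-year characteristic fragmentation time adopted here, doubles every year. A minimal sketch (Python; our own illustrative check):

import math

tau = 0.07                                             # annual growth rate of the waste feeding
mass_doubling = math.log(2.0) / math.log(1.0 + tau)    # ~10.2 yr
number_doubling = math.log(2.0) / math.log(2.0)        # 1 yr (fragment number ~ 2^t)
print(mass_doubling, number_doubling)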
§ CONCLUSION
The generalist model presented here is based on a few sound physical
assumptions and sheds new light on global temporal trends in the
distribution of microplastics at the surface of the oceans. The model
shows that the existence of a physical size threshold below which
fragmentation is strongly inhibited, leads to the accumulation of
fragments at a given size, in line with what is observed in the field
data. In other words, if one does not collect particles in the range
100 μm–1 mm, it is because only a few of them are actually
generated by fragmentation at this scale. One would not necessarily
need to invoke any other mechanism or bias such as ingestion by living
organisms <cit.> or the mesh size of collection nets
<cit.>, to explain the field data for floating
debris. As a consequence, the observed distribution does reflect, in
our opinion, the real distribution of MPs at the surface of the ocean,
down to 100 μm. Besides, the sugar lump model implies a
slowdown in the rate of MPs production by fragmentation, due to the
fact that fragmentation is inhibited when particles approach the
threshold size. This may explain the absence of a clear increase in
the MP numbers in different geographical
areas <cit.>.
Two other general facts have been pointed out in this paper:
* for large MP, the predicted size distribution follows a
power law, whose exponent depends on the dimensionality of the
object (-1 for a fibre, -2 for a film and -3 for a lump). It is
therefore worth sorting out collected objects according to their
geometry, as it is done for instance when fibres are separated
from 2D objects <cit.>. It is however interesting
to note that, when the objects are not sorted in this way, an
“average value” -2 is found for the exponent.
* the model takes into account an exponentially-increasing
waste feeding rate. We have fitted the plastic production since
the 1950s and found that there is not one but two exponential
laws, the second one, slower than the first one, becoming visible
after the oil crisis in 1974. Comparing this feeding to
the exponential fragmentation rate, we show that the number of
fragments is mainly predicted by the fragmentation process,
regardless of the feeding details.
To go further and estimate absolute values of MP concentrations in the
whole range of sizes, it would be necessary, on the one hand, to take
delamination into account in order to obtain the small-particle distribution. On
the other hand, one should also be aware of the spatial heterogeneity
of particle concentrations, and an interesting development
could therefore be to combine fragmentation with flow models developed for
instance in Refs. <cit.>.
§ SUPPORTING INFORMATION
§.§ Standard model
In this model, as pictorially represented in
Fig. <ref>, the ocean is fed at each iteration
n with a fixed number a_0 of large 2D-like objects, mimicking
plastic films.
Neglecting size and shape dispersity for convenience, all
0^th-generation objects are assumed to be large square platelets
of lateral size L_init and thickness h, with
L_init≫ h. Between consecutive iteration steps,
fragmentation produces p^th-generation objects, by splitting in
two equal parts (p-1)^th-generation objects, thus generating
square platelets when p is even, but rectangular
platelets with aspect ratio 2:1 for odd p. If size is measured by
the diagonal, a p^th-generation object has size
√(2)L_init/2^p/2 (even p) or
√(5)L_init/2^(p+1)/2 (odd p). With size classes
described by the number of p^th-generation objects at iteration
step n, C(n,p), the filling law of size classes is:
C(n,0) = a_0 ,
C(n,p) = 0 if p > n ,
C(n,p) = 2 C(n-1,p-1) if 1 ≤ p ≤ n .
The set of equations (<ref>) is readily solved:
C(n,p)=2^pa_0 for 0≤ p≤ n, and C(n,p)=0 for p>n. Since
size L scales with generation index p as 2^-p/2, the
steady-state scaling for the filling of size classes is C∝
L^-2.
The cumulative abundance S_n≡∑_pC(n,p) at iteration step n is also easily obtained:
S_n=[2^n+1-1]a_0
and displayed as a dashed line in Figs. <ref> and
<ref>.
As noticed in Ref. <cit.> where experimental
data and model predictions are matched together, the standard model
fails for small objects, and this occurs when a (nearly) cubic shape
is reached. Since the typical (lateral) size of p^th-generation
objects is ≈ L_init/2^p/2, the limit is reached
for
p_max ≈ 2 log(L_init/h)/log 2
that is to say in about 20 generations with the rough estimate
L_init/h=10^3. The set of equations describing the
size-class filling law has to be altered to take into account this
limit. Assuming for simplicity that p_max-generation
objects cannot be fragmented anymore (“atomic” fragments), this set
of equations becomes:
C(n,0) = a_0 ,
C(n,p) = 0 if p > n or p > p_max ,
C(n,p) = 2 C(n-1,p-1) if 1 ≤ p < p_max and p ≤ n ,
C(n,p_max) = C(n-1,p_max) + 2 C(n-1,p_max-1) if n ≥ p_max .
As shown by the explicit solution, Eq. (<ref>)
below, the last line in this set of equations leads to an
accumulation of “atomic” fragments (see also
Fig. <ref> for a pictorial representation of this
feature)
C(n,p) = 2^p a_0 if 0 ≤ p ≤ n and p < p_max ,
C(n,p_max) = (n+1-p_max) 2^p_max a_0 if n ≥ p_max ,
C(n,p) = 0 in all other cases ,
associated with a significant (exponential to linear) slowing down of the cumulative abundance:
S_n=[2^p_max(2+n-p_max)-1]a_0
for iteration steps n≥ p_max.
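These recursions are simple enough to iterate directly. The short Python sketch below (ours, with a_0, p_max and n chosen arbitrarily for illustration) fills the size classes according to the two sets of equations above and checks the closed-form expressions for C(n,p) and S_n:

def fill_classes(n_steps, p_max, a0=1):
    # Iterate the standard-model filling law with an 'atomic' limit at p_max.
    C = [0] * (p_max + 1)              # C[p]: abundance of generation-p fragments
    totals = []                        # cumulative abundance S_n at each step
    for n in range(n_steps + 1):
        new = [0] * (p_max + 1)
        new[0] = a0                                    # constant feeding of large objects
        for p in range(1, p_max):
            if p <= n:
                new[p] = 2 * C[p - 1]                  # binary splitting
        if n >= p_max:
            new[p_max] = C[p_max] + 2 * C[p_max - 1]   # 'atomic' fragments accumulate
        C = new
        totals.append(sum(C))
    return C, totals

p_max, n = 20, 30
C, S = fill_classes(n, p_max)
assert C[5] == 2**5                             # C(n,p) = 2^p a_0 below the atomic limit
assert C[p_max] == (n + 1 - p_max) * 2**p_max   # linear accumulation of atomic fragments
assert S[-1] == 2**p_max * (2 + n - p_max) - 1  # cumulative abundance S_n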
§.§ Standard model with inflation
As a first extension of the standard model, inflation in the
feeding of the ocean with large 2D-like objects is now
considered. Taking simultaneously into account the “atomic” nature
of small fragments beyond p_max generations, the
size-class filling set of equations (<ref>) has
to be replaced by:
C(n,0) = a_0 (1+τ)^n ,
C(n,p) = 0 if p > n or p > p_max ,
C(n,p) = 2 C(n-1,p-1) if 1 ≤ p < p_max and p ≤ n ,
C(n,p_max) = C(n-1,p_max) + 2 C(n-1,p_max-1) if n ≥ p_max .
Size classes are now described by
C(n,p) = 2^p (1+τ)^(n-p) a_0 for 0 ≤ p ≤ n as long
as the generation index p remains smaller than p_max
and
C(n,p_max) = 2^p_max [ (1+τ)^(n-p_max+1) - 1 ] a_0/τ
for n≥ p_max. Whereas the filling of the size class
associated to “atomic” fragments was linear in n without
inflation, it becomes here exponential. Consequently, the
cumulative abundance, definitely slowed down, remains exponential in
n for n>p_max:
S_n = { (1+τ)^(n+1) [ (2/(1+τ))^p_max - 1 ]/(1-τ) + 2^p_max [ (1+τ)^(n-p_max+1) - 1 ]/τ } a_0
As long as the “atomic limit” is not reached, the cumulative
abundance exhibits a simpler form, namely:
S_n = [ 2^(n+1) - (1+τ)^(n+1) ] a_0/(1-τ)
that does not significantly differ from
Eq. (<ref>). The time-invariant features of
the size distribution are nevertheless modified in two respects (see
Fig. <ref>):
* Inflation spoils the strict time-invariant feature
previously observed for the size distribution N(L);
* A (nearly) time-invariant behaviour remains as far as
scaling is concerned, since N∝1/L^ν,
but ν does depend, albeit rather weakly, on the time index
n, while being significantly smaller than 2. Fitting data to
a power law, an exponent ν close to 1.8 is obtained for
inflation τ=7%.
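The weak time dependence of the exponent can be checked directly from the closed form above; a short sketch (again ours and purely illustrative) evaluates C(n,p) and fits the power-law exponent for τ = 7%:

import math

tau, a0, n, p_max = 0.07, 1.0, 30, 20

# Per-class abundance C(n,p) = 2^p (1+tau)^(n-p) a_0, with size L ~ 2^(-p/2)
logL = [-0.5 * p * math.log(2.0) for p in range(p_max)]
logC = [math.log(a0) + p * math.log(2.0) + (n - p) * math.log(1.0 + tau)
        for p in range(p_max)]

# Least-squares slope of log C versus log L gives -nu
xm, ym = sum(logL) / p_max, sum(logC) / p_max
slope = (sum((x - xm) * (y - ym) for x, y in zip(logL, logC))
         / sum((x - xm) ** 2 for x in logL))
print(round(-slope, 2))   # ~1.8, below the value 2 obtained for constant feeding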
§.§ Sugar lump model
Taking inspiration from the standard model,
Section <ref>, at each iteration the ocean is fed with
large parallelepipedic fragments of length L, width ℓ and
thickness h, where h is much smaller than the other two dimensions
and length L is, by convention, larger than width ℓ. Some size
dispersity is introduced when populating the largest size class, by
randomly distributing L in the interval [0.9L_init,
L_init], and ℓ in [0.7L_init, 0.9L_init],
but h is kept fixed. The number of
objects feeding the system can be controlled at each iteration step,
and two simple limits have been investigated: Constant, or
exponentially-growing feeding rates, mimicking two variants of the
Standard model, Sections <ref> and
<ref>, respectively. Size classes evenly sampling
(on a logarithmic scale) the full range of L/h, [1,
L_init/h], are populated by sorting the fragments present in the
system into the proper size class. Except for the
0^th, initialisation step, these fragments are either
0^th-generation fragments just introduced into the
system, obviously belonging to the largest size class, or
g-generation fragments (g≥1) that have been “weathered”
during the time step from step n to step n+1 and then split, with
an L-dependent efficiency, into two smaller fragments. As explained
in Section <ref>, the splitting process, albeit
random, explicitly ensures the existence of an “atomic” limit:
Fragments belonging to the smallest size class cannot be fragmented
any further. As tentatively illustrated in
Fig. <ref>, a special feature of the model is
that generations (g) and size-class (p) indices have to be
distinguished because, at contrast with the standard model, although
for a given fragment a “weathering” event (n→ n+1) is
always associated to an “ageing” event (g increased by one), it is
not always associated to populating one or two lower-size classes (and
simultaneously decreasing by 1 the abundance of the considered
size-class) because the splitting process is not 100% efficient.
Since keeping track of abundances in terms of time (n), age (g) and size
(p) is computationally demanding for exponentially-growing
populations, our simulations have been limited to, at most,
n=g=40. The number of distinct size classes has also been limited to
28, as this corresponds to the number of size-classes reported in
Ref. <cit.>.
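The following compact Python sketch (entirely our own simplification: in particular, the splitting-probability function below is an arbitrary placeholder for the L-dependent efficiency of Section <ref>, and the feeding rate is constant) illustrates the kind of stochastic bookkeeping described above:

import random

L_init, L_c = 1.0, 5e-3      # illustrative initial size and breaking threshold (arbitrary units)
p_split = lambda L: 0.0 if L <= L_c else 1.0 - L_c / L   # placeholder: breaking inhibited below L_c

frags = []
for n in range(18):                        # abundances grow roughly as 2^n, so n is kept modest
    weathered = []
    for (L, l) in frags:
        if random.random() < p_split(L):   # random splitting of the longest dimension in two
            piece = (max(l, L / 2.0), min(l, L / 2.0))
            weathered += [piece, piece]
        else:
            weathered.append((L, l))       # aged, but not broken
    frags = weathered
    # constant feeding: one large fragment per step, with the size dispersity described above
    frags.append((random.uniform(0.9, 1.0) * L_init, random.uniform(0.7, 0.9) * L_init))

near_threshold = sum(1 for (L, _) in frags if L <= 2 * L_c)
print(len(frags), "fragments,", near_threshold, "of which have L below 2 L_c")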
|
http://arxiv.org/abs/2307.05736v1 | 20230711190431 | Two-fluid reconnection jets in a gravitationally stratified atmosphere | [
"B. Popescu Braileanu",
"R. Keppens"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE",
"physics.plasm-ph",
"physics.space-ph"
] |
Two-fluid reconnection jets in a gravitationally stratified atmosphere
B. Popescu Braileanu, R. Keppens
===============================================
§ INTRODUCTION
The temperature increases steeply towards the corona of the quiet sun, throughout the transition region located at ≈ 2.5 Mm, and coronal plasma becomes fully ionized <cit.>.
Observations showed that field lines below ≈ 2.5 Mm can change their connectivity in about 1.5 h,
suggesting fast reconnection mechanisms in the solar chromosphere <cit.>.
The reconnection mechanism involved in solar flares releases a huge amount of magnetic energy; therefore, it is believed that the
reconnection is driven at large scales, evolving towards a Petschek-type configuration <cit.>.
Petschek <cit.> proposed a stationary 2D reconnection model that extends a Sweet-Parker <cit.> layer (the diffusion region) at the center with standing slow-mode shocks from its ends.
The Sweet-Parker reconnection rate scales as ∝ S^-1/2, where the nondimensional Lundquist number S is defined as S=v_A L/η, with v_A being the Alfvén
speed, L a hydrodynamic characteristic length, and η the value of the resistivity (in units of m^2/s).
This classical resistive MHD model proposed by <cit.> predicts reconnection rates much higher than the Sweet-Parker model, as the rate now
scales as ∝ln^-1(S). For a value of S=10^6, the Petschek-type reconnection gives a reconnection rate 65 times larger than the Sweet-Parker model <cit.>.
With larger resistivity, the slow shocks become wider <cit.>.
The magnetic energy decreases, being converted into thermal energy through Ohmic heating near the diffusion region, and into
kinetic energy through the work done by the Lorentz force, away from the diffusion region, especially at the location of the slow shocks <cit.>. In simulations of driven reconnection, anomalous resistivity is assumed in order to impede the unbounded growth of the current density <cit.>.
Anomalous resistivity can be produced by ion acoustic turbulence, and is then a function of the ion-electron drift velocity, as shown by <cit.>.
A simpler, but frequently adopted, model of anomalous resistivity is a spatially localized resistivity around the X-point, which is also used in this paper.
Unlike anomalous resistivity which depends on the current density, the localized resistivity only depends on space, but both prescriptions lead to fast magnetic reconnection <cit.>.
This locally enhanced resistivity leads to fast reconnection; however, it is not very clear whether fast reconnection is due to a high value of the local resistivity in the diffusion region, or to the localization <cit.>. Numerical 2D resistive MHD simulations found that having a flat local resistivity profile near the X-point can induce spontaneous symmetry breaking in the otherwise symmetric Petschek configuration <cit.>. In this paper, we will have an additional broken up-down symmetry due to gravity from the beginning, and extend Petschek-reconnection findings to a two-fluid, plasma-neutral setting.
Many simulations study the standard solar flare scenario, where a vertical current sheet (CS) evolves to form post-flare loops.
<cit.> studied post-flare loops in the MHD approximation using a 2.5D setup, where gravity is neglected and thermal conductivity is adopted along the field lines. In their simulations, Petschek-type reconnection develops because of localized resistivity
and the slow shocks are essentially isothermal due to effective thermal conductivity.
<cit.> study reconnection in a 2.5D MHD setup, using a spatially localized resistivity and an analytic density profile, with uniform pressure. In their model, gravity is neglected, while anisotropic thermal conductivity is incorporated.
The reconnection rate was found to be slightly smaller when plasma β increases, and the rate is also smaller with thermal conductivity. The reconnection rate reaches a maximal value of 0.01.
The authors show that reconnection at the termination shock due to interaction between magnetic islands formed along the primary current sheet (CS) and the magnetic arcade below is almost as important as reconnection in the main CS for releasing magnetic energy. Jets produced by MHD simulations of Petschek reconnection in a 2D setup without gravity have properties of small-scale flares observed in the solar atmosphere <cit.>. State-of-the-art solar flare simulations extend these efforts with the inclusion of gravitational stratification, and even
include the effect of fast electron beams that self-consistently interact with large-scale MHD simulations, identifying many ingredients found in actual observations <cit.>. The step to full 3D MHD simulations, including gravity and thermal conduction and reproducing turbulent regions consistent with observed non-thermal velocities, was made in <cit.>. EUV synthetic images produced from these 3D flare models show very good agreement with observations. However, extensions to plasma-neutral setups are needed to address the chromospheric counterparts of flares, and this work is a first step towards that goal.
Reconnection jets have been observed at all heights in the solar atmosphere, from photosphere to corona <cit.> in both cool and hot spectral lines.
<cit.> suggest that spicules are chromospheric jets in emerging flux regions, which disappear in chromospheric line images before returning, probably due to heating.
Hot and cool jets observed by <cit.> were suggested to form in the lower corona or upper chromosphere. Aspects of coronal X-ray jets were successfully reproduced in simulations by <cit.>.
Chromospheric anemone jets with velocities of 10 km/s comparable to the local Alfvén speed were observed in the upper chromosphere, but could not be observed in the lower chromosphere, where the Alfvén speed is much lower <cit.>.
Many observations show clear signatures of plasmoids formed during the reconnection process, and track their motions using emission signatures. Plasmoids have been observed as periodic blobs in optically thin AIA lines <cit.>.
These plasmoids can appear in the nonlinear evolution of current sheets that are liable to linear resistive tearing modes. Since plasmoids form on the current sheets that also develop Petschek-type configurations with outflows, once they are formed, it is relevant to study the linear stability of a CS in the presence of outflows. In an early 2D purely linear MHD analytical study, it has been shown that outflows have a stabilizing effect for the tearing mode <cit.>. In this paper, we will discuss stability aspects of a CS due to tearing in a two-fluid setting. Our simulations contain plasmoids, and we will make synthetic images that directly relate to the observed blob features.
Since our work is using a plasma-neutral two-fluid model, several works that looked into two-fluid reconnection are of direct relevance to our study.
2D simulations in the two-fluid approach show that the reconnection rate is increased because of recombination and larger outflows <cit.>.
Ionization/recombination processes would put additional constraints on the background stratification, leading to nontrivial equilibrium conditions <cit.>.
A non-static equilibrium introduces a new free parameter through the gradient in the vertical velocity; moreover, it can explain the formation and the properties of the transition region <cit.>.
In this paper, we generalize the work to stratified settings, but will not include ionization/recombination processes in our simulations.
<cit.> obtain high reconnection rates around 0.1 due to two-fluid effects, which otherwise would be obtained by using Hall, kinetic effects or localized resistivity.
In stratified setups that were liable to the Rayleigh-Taylor instability, secondary reconnection events showed that two-fluid effects are locally very important <cit.>. <cit.> found that
plasmoid coalescence happens faster in the two-fluid model than in MHD, because the effective Alfvén speed,
based on a two-fluid density defined by the collisional coupling, is larger. In 1D two-fluid simulations of slow magneto-acoustic shocks (of the type relevant in a Petschek-type reconnection region), frictional heating leads to a localized region around the `reconnection' point with increased temperature in the neutral fluid <cit.>. As a consequence, a blast wave in the neutral fluid develops
with overshoots in the neutral density and velocity <cit.>. We will show in our multi-dimensional setup how the detailed CS structure may show intricate decoupling (and runaway) effects in the Petschek-type configuration.
Here, we extend previous works by studying reconnection in a 2D stratified setup, accounting for the presence of coupled plasma-neutral species.
We present the numerical setup in Section <ref> and
the results of our simulations in Section <ref>.
We then create synthetic views from the simulation snapshots, presented in Section <ref>, and we summarize our conclusions in Section <ref>.
§ NUMERICAL SETUP
We consider a gravitationally stratified atmosphere where we define a temperature profile with height z described by:
T(z) = T_ch + (T_co - T_ch)/2 [ tanh( (z - z_tr)/w_tr ) + 1 ] ,
where w_tr = 0.2 Mm is the width of the transition region, whose height is
z_tr = 2 Mm, and the chromospheric and coronal temperatures are set through
T_ch = 8×10^3 K and T_co = 1.8×10^6 K.
The initial profiles of temperature and the neutral and charged, as well as total fluid densities are shown in Figure <ref>.
We use an ideal equation of state for both neutrals and charges and the normalized mean molecular weight is considered uniform and constant, having the values of 0.5 for charges and 1 for neutral species <cit.>.
We use a force-free sheared magnetic field, changing its far-field (large horizontal coordinate |x|) vertical orientation near x=0, with uniform magnitude B_0=10^-3 T, given by
B_ z0 = - B_0 tanh( x/L_s) ,
B_y0 = B_0 / cosh( x/L_s) ,
where L_s=0.02 Mm. A similar force-free profile has been used
in the simulations of <cit.>.
The current sheet width, calculated as the full width at half maximum (FWHM) of the corresponding current density component J_y0, is 0.036 Mm.
We will use a localized resistivity of the particular form
η(x,z) = (η_0-η_1) exp( -x^2/(2 L_s) - (z-z_rec)^2/(2 L_s) ) + η_1 ,
where η_0≈8 Ω m and η_1≈ 0.8 Ω m.
The reconnection point will be at x=0 always, but we will vary the reconnection height z_ rec, as mentioned
in Section <ref> below.
To trigger the reconnection, we adopt an initial velocity variation. The initial perturbation is in the x-direction only, trying to bring the field lines closer around the reconnection point, having the form:
v_x(x,z; t=0) = -V(x,z), for x>0 ,
v_x(x,z; t=0) = +V(x,z), for x<0 ,
with
V(x,z) = A v_A(z) exp( -x^2/(2 L_s) - (z-z_rec)^2/(2 L_s) ) ,
and
v_A(z; t=0)=B_0/√(ρ_ tot(z)) ,
the total Alfvén speed, where ρ_tot=ρ_n+ρ_c is the total density. We choose the amplitude A=10^-1.
In a two-fluid simulation, this initial perturbation is the same for the velocity of charges and neutrals.
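The initial state is straightforward to reproduce; the Python sketch below (our own illustration, not the actual MPI-AMRVAC setup: μ_0 is written explicitly in the Alfvén speed, lengths are in Mm, and the stratified density ρ_tot(z) must come from the hydrostatic integration discussed below) evaluates the profiles defined above:

import numpy as np

T_ch, T_co, z_tr, w_tr = 8.0e3, 1.8e6, 2.0, 0.2     # K, K, Mm, Mm
B0, L_s, z_rec, A = 1.0e-3, 0.02, 2.0, 0.1          # T, Mm, Mm (z_rec is varied per run)
eta0, eta1 = 8.0, 0.8                               # Ohm m

def temperature(z):
    return T_ch + 0.5 * (T_co - T_ch) * (np.tanh((z - z_tr) / w_tr) + 1.0)

def field(x):
    Bz = -B0 * np.tanh(x / L_s)
    By = B0 / np.cosh(x / L_s)          # sech profile keeps |B| = B0 (force-free)
    return Bz, By

def resistivity(x, z):
    return (eta0 - eta1) * np.exp(-x**2 / (2 * L_s) - (z - z_rec)**2 / (2 * L_s)) + eta1

def vx_perturbation(x, z, rho_tot):
    vA = B0 / np.sqrt(4e-7 * np.pi * rho_tot)       # mu_0 included explicitly here
    V = A * vA * np.exp(-x**2 / (2 * L_s) - (z - z_rec)**2 / (2 * L_s))
    return -np.sign(x) * V                          # converging flow towards x = 0

print(temperature(np.array([0.0, 2.0, 7.0])))       # chromosphere, transition region, corona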
The two-fluid model uses the newly developed module in the fully open-source MPI-AMRVAC code <cit.>.
The equations solved are the nonlinear, compressible, resistive two-fluid Eqs. (1)-(7) from <cit.>.
In a 2.5D geometry, the domain (x,z) with x∈[-0.5,0.5] Mm and z∈[0,7] Mm is covered by a grid with a base resolution of 256×1024,
and we use five levels of refinement, giving an effective resolution of 1024×4096 points and a finest cell size of Δ x=9.76×10^-4 Mm
and Δ z=1.7×10^-3 Mm.
The bottom boundary in the z-direction is closed (antisymmetric for vertical velocities and symmetric for the rest of the variables)
and we use open boundary conditions (symmetric for all the variables) for the top boundary and both side boundaries in the x-direction.
The region -0.2≤ x≤ 0.2 is always refined at the highest level, so that the CS is properly resolved.
The refinement criterion is based on density only for the MHD cases and equally on charged and neutral density in the two-fluid runs. We use the splitting of the equilibrium force-free magnetic field <cit.> and the gravity stratification for both neutrals and charges <cit.>.
In this approach, the magnetic field 𝐁, the densities ρ_n, ρ_c and the pressures p_n and p_c are split into time-independent
(𝐁_0, ρ_n0, ρ_c0, p_n0, p_c0) and time-dependent (𝐁_1, ρ_n1, ρ_c1, p_n1, p_c1) parts.
The equations solved in the code are for time-dependent quantities, while the equilibrium conditions are explicitly removed from the equations:
𝐉_0×𝐁_0=0 , -∇p_ c0-ρ_ c0𝐠 =0 ,-∇p_ n0-ρ_ n0𝐠 =0 .
Mathematically, the split equations are equivalent to the unsplit equations, but numerically, the splitting helps avoiding an unwanted evolution due to numerical dissipation of the equilibrium.
§.§ Coupling aspects
Because of the very low mass of electrons compared to ions, the collisions between charges and neutrals are effectively collisions between ions and neutrals,
and the mean free path between ions and neutrals and between neutrals and ions can be defined as:
λ_ in = v_A/ν_ in ; λ_ ni = v_A/ν_ ni ,
where ν_ in=αρ_n and ν_ ni=αρ_c denote collision frequencies between ions and neutrals and between neutrals and ions, respectively <cit.>.
The characteristic velocity is the Alfvén speed of the whole fluid, as defined (generalized to v_A(x,z;t)) by Eq. (<ref>)
<cit.>.
The collisional parameter α <cit.> is defined as:
α = 2 √(k_B T_cn) Σ_in / ( m_H^(3/2) √(π) ) ,
where the collisional cross-section considered here is Σ_in = 10^-19m^2.
T_ cn is the average of temperatures of the neutrals and charges.
In this paper, we consider two cases for the collisional coupling, one when α(x,z;t) is calculated consistently from instantaneous plasma parameters and another
where it is set to a constant value.
When α is calculated self-consistently, its profile varies slowly with height, with values initially between 5.8 × 10^12 m^3/kg/s and 8.2 × 10^12 m^3/kg/s. These minimum and maximum values remain practically unchanged at the end of the simulation. These high values imply that the coupling is near perfect (and hence MHD-like behavior is expected) throughout the domain. We will compare this to a run where we instead set α to a constant value throughout. This constant value of α=3.84 × 10^8 m^3/kg/s is almost four orders of magnitude smaller than the consistently computed values. However, as we argue in the section below, this reduced coupling value ends up being more representative of actual solar settings. Moreover, in that regime, we will have two-fluid effects that are important in stratified reconnection setups.
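For reference, the collisional parameter and mean free paths defined above can be evaluated with a few lines of Python (ours; the density values passed in the example are placeholders rather than the actual stratified profiles of Figure <ref>):

import numpy as np

k_B, m_H = 1.380649e-23, 1.6726e-27     # SI units
Sigma_in = 1.0e-19                      # m^2, the cross-section adopted here

def alpha(T_cn):
    # Collisional parameter in m^3 kg^-1 s^-1, following the expression above.
    return 2.0 * np.sqrt(k_B * T_cn) * Sigma_in / (m_H**1.5 * np.sqrt(np.pi))

def mean_free_paths(rho_c, rho_n, T_cn, B0=1.0e-3):
    a = alpha(T_cn)
    nu_in, nu_ni = a * rho_n, a * rho_c                 # ion-neutral and neutral-ion frequencies
    vA = B0 / np.sqrt(4e-7 * np.pi * (rho_c + rho_n))   # Alfven speed of the whole fluid
    return vA / nu_in, vA / nu_ni                       # lambda_in, lambda_ni

print(alpha(1.8e6))                           # ~8e12 m^3/kg/s, the upper end of the 2fl range above
print(mean_free_paths(1e-11, 1e-13, 1.8e6))   # placeholder densities in kg/m^3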
§.§ Parameters of the simulations
For our study of reconnection in stratified settings, we will compare three collisional regimes, namely: (1) a single fluid
MHD model (label “MHD”); (2) a
two-fluid plasma-neutral model where the collisional parameter α is calculated self-consistently from plasma values (label “2fl”); (3) a
two-fluid model where the collisional parameter α is constant and set to a smaller value of 3.84 × 10^8 m^3/kg/s (label “2flα”). We will study 6 cases in total, since for each collisional regime we vary the reconnection point between z_rec=2 Mm, with reconnection in the upper chromosphere to low transition region, and z_rec=4.5 Mm, for coronal reconnection.
In the MHD model, the initial equilibrium atmosphere is constructed by summing the densities (shown in Figure <ref>) and pressures for charged and neutral fluid at all heights.
The resulting mean free paths between ions and neutrals and between neutrals and ions for the two two-fluid models, 2fl and 2flα,
are shown in Figure <ref>. In this 2flα case, both values of the mean free path in the transition region and above are O(0.01 Mm), hence they are similar
to the width of the CS, while in the 2fl case the values of the mean free path are O(10^-6 Mm). Because of the weak dependence on height of the self-consistently calculated α, both profiles of the mean free paths (2fl and 2flα) look similar, being dominated by the variation of the density, which enters the calculation of the mean free path via Eq. <ref>.
The mean free path does not change significantly during the simulation, being rather determined by the equilibrium plasma parameters.
We will now argue that the 2flα case is more solar-relevant. Indeed, we used a temperature profile defined by Eq. <ref>, which is slightly smoother than VALC <cit.>, but it has the advantage that we directly control the width (w_ tr) of the transition region (TR). A similar temperature profile defined by an analytic function
has been used by other authors <cit.>. This overly smooth temperature variation also implies a larger fraction of neutrals
at both used reconnection heights, especially at the coronal reconnection point
z_rec=4.5 Mm, taking into account that the scale height of the neutrals is half that of the charged particles.
The other reconnection case has z_ rec=2 Mm, i.e. starts its reconnection at the middle of the TR. How effective the collisional coupling between plasma-neutral species is at these heights, is largely set by the densities they attain there.
When we integrated the vertical profiles, at the base of the atmosphere at z=0,
we used the total number density from the VALC model namely n_T≈10^23 m^-3, but we had to consider an ionization fraction of 0.1,
so that we still obtain more charges in the corona despite the smoothing of the temperature profile.
Hence, the adopted temperature variation, together with the imposed bottom densities, leads to a reversal of the dominance of neutrals over charges at z=1 Mm, while the entire transition region and corona are charge-dominated.
We find that the overly smoothed temperature profile actually increased the
density in the upper chromosphere and corona above observed values, making collisional coupling now larger there. This justifies the use of a smaller and more representative value of α in the simulation 2flα, which would mimic the actual coupling found for the normally smaller densities. Indeed, the mean free path between ions and neutrals in the two-fluid reconnection simulations of <cit.>, which was self-consistently calculated, was about 100 m. This is nicely situated between our 2fl and 2flα case.
Moreover, there are different recipes for calculating collisional frequencies, due to different values for the collisional cross sections available in the literature, which lead to large differences <cit.>.
The very small mean free path in our 2fl case suggests that it can be considered an MHD limit case, and this will be confirmed in our simulations below.
§ RESULTS
In all six cases considered (MHD, 2fl, 2flα for the coronal versus TR reconnection case) the reconnection develops, producing bidirectional outflow jets, traveling away from the reconnection point, which can be seen in the snapshots of the total density
shown in Figure <ref>.
As the atmosphere is gravitationally stratified, the jets traveling upwards are denser and those traveling downwards are less dense than the surrounding
fluid located at the same height.
The figure shows that for TR reconnection, we get a pronounced upwards jet that is accompanied by a vertically oriented CS that lengthens as time progresses.
From the time evolution of the upwards moving jets in the z_ rec=2 Mm case we can estimate a vertical velocity of ≈ 8 km/s. An online animation for this case (z2.0-2flaplha.mp4) overplots the adaptive grid for the neutral density. In the case of coronal reconnection, we find a clear two-sided (up-down) jet forming, where the lower one ultimately interacts with the TR and chromosphere (forming post-flare loops).
For both reconnection heights considered, z_ rec=2 Mm and z_ rec=4.5 Mm, the MHD (top row) and the 2fl models (middle row) give very similar results, as expected according to the collisionality regime.
A clear difference appears in the snapshots for the 2flα model (bottom row of Figure <ref>), with the development of a less dense region with an edge of increased density, and this is seen in both z_ rec cases.
In order to better understand these differences, we further analyze
snapshots for z_ rec=2 Mm at t=622.6 s, the last time shown in Figure <ref> (left column).
§.§ Analysis for the case with TR reconnection
We plot in Figure <ref> the out-of-plane current density J_y for the 2fl and 2flα models, and overplot magnetic field lines and isocontours of total density.
Except for the fact that the 2flα seems a bit further evolved in time (as the top magnetic island is located a bit higher), the current density structures are very similar for the two models.
We observe that the dense edges in the 2flα case are located just outside the current sheets.
Because of the localized resistivity, the simulations evolve towards a Petschek-like reconnection.
Slow shocks that accompany the reconnection,
traveling in the x-direction, gradually widen the region over which B_z≈ 0, splitting the CS into two current sheets, seen as “V” structures in the images.
Theoretically, in an assumed MHD stationary Petschek reconnection state (without stratification), the slow shock discontinuity is located along this V-pattern,
B_z is exactly zero inside the V and B_x is uniform within. Hence, the two current sheets should actually be infinitely thin in the ideal MHD limit that holds away from the resistive reconnection layer.
However, in our more realistic setup we find that it is possible for B_z to locally reverse sign, creating another current sheet with J_y of opposite sign, as seen most clearly in the snapshot
for the 2flα case at height z≈5 Mm, at the location of the corresponding ripple in the magnetic field line.
In 1D two-fluid models, this reversal in the sign of B_z was associated with a reversal in the sign of v_x and was found to be related to the heating produced by the two-fluid effects <cit.>.
However, this reversal in the sign of B_z is observed in our MHD (and 2fl) simulations as well, so that the Ohmic heating might be the cause in our simulations. Note that the magnetic field lines in Figure <ref> show the expected post-flare loop configuration below the reconnection site. The 2flα case also shows a plasmoid structure forming at a later stage,
and we will discuss this in Section <ref>.
We compare snapshots of density of charges and neutrals, separately for the two models in Figure <ref> (note that our earlier Figure <ref> showed total densities).
We observe that in the 2fl case the densities of charges and neutrals have similar structures, the difference being mainly in magnitude.
The magnitude scales with the background density, the neutral density being smaller in the corona than the charged density.
The neutral density shows more variation with height, since the scale height of the neutrals is half of that of the charges.
In contrast, in the 2flα case, we observe a clear reversal of contrast, namely a central region with low neutral density and high charged density,
surrounded by a shell of high neutral density and low charged density.
We also show in the charged density plots the density snapshot of an MHD run where we used the charged fluid properties only, corresponding to the zero-collisions (α=0) limit.
The snapshot seems more evolved in time than the previous cases, 2fl and 2flα. A smaller (effective) density implies a higher Alfvén speed and shorter hydrodynamic timescales; however, the edge of increased neutrals is rather related to an incomplete coupling regime, and not to a smaller effective density. Similarly to the conclusion of <cit.>, a smaller effective density due to the collisions is associated with smaller timescales and faster evolution, as we have seen
previously in Figure <ref>.
The same reversal can be seen in the temperature maps as well for the 2flα case, as shown in Figure <ref>, where the higher density
regions have smaller temperatures, for both neutrals and charged fluids.
The 2fl case (top panels of Figure <ref>), however, shows similar structures in the temperatures of neutrals and charges.
In order to understand this clear difference in the 2flα case, which must be caused by two-fluid effects,
we plot in Figure <ref> different quantities along a horizontal cut located at z=4 Mm.
These 1D profiles are consistent with the previous 2D images, most notably Figures <ref> and <ref>, where they cut across the V-shaped current sheet structures discussed earlier.
For the 2fl model the structures in the charged and neutral density (panels (a) and (b)) are similar. The x-velocity (panel (c)) and temperature (panel (d)) profiles overlap for both fluids,
meaning that the two fluids are coupled in both velocity and temperature.
For the 2flα model, more charges imply fewer neutrals, and higher density implies lower temperature.
The temperature of the neutrals increases by more than 1.6 × 10^6 K at the center of the CS.
Farther from the center of the CS, the velocities of charges and neutrals are coupled; however, inside the CS they are completely different.
The neutral velocity changes sign at the center of the CS, meaning that the neutrals are leaving the CS. Hence, the 2flα case, which shows a pronounced anticorrelated structure in density and temperature for charges versus neutrals, clearly demonstrates decoupling between the two species across the CS structure.
Panels (e) and (f) in Figure <ref> show the total density, ρ,
and the temperature of the center of mass, defined as
T^ 2fl = 1/ρ(ρ_c T_c + ρ_n T_n) .
The total density profile is very similar to the separate neutral and charged density profile for the 2fl model. For this case,
this positive peak in the density, and a corresponding negative peak in the temperature, are related to the fact that the (vertically flowing)
reconnection outflow comes from a lower height with higher density and lower temperature. However, the two-fluid solution in the 2flα case, where the collisional effects are enhanced, behaves rather differently and demonstrates a nonlinear runaway effect.
In the case of the 2flα model, there is a central depletion in total density, where charges accumulate and neutrals deplete, and this is surrounded by a layer of enhanced total density where charges deplete and neutrals accumulate. A tiny central positive peak, of similar magnitude as in the 2fl model, still remains at the very center of the CS.
Seen from an MHD point of view, the whole fluid (both neutrals and charges) heats because of the collisions. This creates a peak in the center of mass temperature (panel (f), dotted line) and the entire fluid expands, so that the total density decreases towards the center of the CS (panel (e), dotted line). In the various panels of Fig. <ref>, vertical dotted lines show extremal positions in the neutral collisional heating term for case 2flα, discussed in more detail in what follows.
In order to better understand the temperature and velocity profiles seen in Fig. <ref>, we show the terms which enter the temperature (top panel) and velocity (bottom panel) equations in Fig. <ref>. The primary and secondary maximum peaks in the neutral frictional heating term (blue dashed line) for the 2flα case (top-right panel) are located at distances of 0.03 Mm and 0.1 Mm from the center of the CS, and these are indicated by vertical violet and gray dotted lines, respectively (those are repeated in each frame of Fig. <ref>).
The peak in the Ohmic heating term (red solid line) is located close to the primary maximum peak in the neutral frictional heating term of the 2flα case (i.e. near the vertical violet dotted line). This frictional heating is negligible in the 2fl case (top-left panel). In the 2flα case (top-right panel),
the charged fluid frictional heating term (red dashed line) has a similar profile to the neutral frictional heating term, but with smaller values, because of the higher charged density compared to the neutral density. In the 2flα case, the neutral frictional heating peak is five times larger than the peak value in the Ohmic heating.
We now turn attention to what causes the velocity decoupling effects, by discussing the forces as shown in Fig. <ref>, bottom panels.
The inflow into the CS (as seen in panel (c) of Fig. <ref>)
is driven by the magnetic pressure gradient which acts only on charges, producing initial decoupling in the velocities between neutrals and charges when entering the CS.
This magnetic pressure gradient (red dashed line in bottom panels of Fig. <ref>) pushes the charges and the collisionally coupled neutrals inside the CS
against the gradient of their pressures (solid lines). In the 2fl case, only Ohmic heating matters, and the overall decoupling between neutrals and charges stays minimal throughout the CS.
In the 2flα case, the neutral and charged fluids decouple in velocity in a more pronounced way throughout the entire CS, again indirectly driven by the magnetic forces which drive the reconnection. But the initial decoupling grows, and frictional heating causes the neutrals to heat up, expand and leave the CS, accumulating between the primary and secondary heating points (violet and gray lines, respectively), as observed in panel (b) of Fig. <ref>.
The inflow of charges there decreases because of the increased collisions with the neutrals that accumulated outside the CS (see the local minimum observed for the dotted red curve at the location of the gray line in panel (c) of Fig. <ref>).
Therefore, the decoupling in velocity increases further, and also the frictional heating at this secondary neutral heating maximum point (the gray vertical line), which again increases the neutral pressure.
Thus, the neutral inflow increases as a local maximum is observed for the dotted blue curve at the location of the gray line in panel (c) of Fig. <ref>. This is because of the neutral pressure gradient, which shows a local maximum peak in the solid blue curve at the location of the gray line in the bottom-right panel of Fig. <ref>. This in turn drags the charges into the CS: a local maximum peak is seen for the red dotted line in the bottom-right panel of Fig. <ref>, which has positive values around the gray line. This implies acceleration of the plasma towards the center of the CS, in the same direction and partially overlapping the magnetic pressure gradient curve. Therefore, the velocity of charges towards the center of the CS is increased and has opposite sign to that of the neutrals, leading to more decoupling and to a runaway nonlinear instability. The heating of the charges at the primary heating point (violet line) slows down in this runaway instability process.
The charges enter the CS faster because of decreasing collisions with neutrals, and this accelerates the process (besides leading to a thinner CS and faster outflows).
The runaway process is further enhanced by the charges expanding due to the (collisional) heating at the secondary point (gray line).
The runaway process creates a region of accumulation of neutrals bordering the CS (within the hydrodynamic timescale on which the charges enter the CS due
to the magnetic pressure gradient),
which creates a secondary (collisional) heating point whereby the charges get pushed faster into the CS.
§.§ Analysis for coronal reconnection
We next study the case z_ rec=4.5 Mm where the reconnection was triggered in the coronal region. A zoomed view shown in Figure <ref> shows what happens near and above the TR, after the downwards reconnection outflow hits
the dense material in the chromosphere and is reflected.
The snapshots of density of neutrals versus charges (top row) show again a pattern of evacuated versus enhanced regions with dense versus evacuated edges.
The decoupling in velocity (bottom left panel of Figure <ref>) points across the magnetic field lines and the isocontours of the decoupling in temperature (T_c-T_n),
shown in the bottom right panel, follow the structures in current density,
suggesting that the decoupling is related to the magnetic fields.
Smaller current sheets are formed at different locations and with different orientation,
meaning that this separation of neutral versus charged densities across the current sheet is generally related to reconnection and not determined by gravity or localized resistivity.
In a time evolution (an animation z4.5-2flalpha.mp4 is provided) we see that this process of separation seems to reverse in this case, when the structure rises again, but this is influenced by mixing with both neutral and charged material coming from below.
Hence, in both 2flα cases, we find clear evidence of a nonlinear runaway process that enhances the spatial separation between charged and neutral fluids near current sheets.
This runaway process is clearly related to having a large collisional mean free path compared to the width of the CS, as it is not seen at all in the 2fl model.
Overall, we identified a runaway (decoupling) instability in a weakly collisional regime, which occurs in a non-stationary two-fluid setup of the reconnection problem. This will ultimately break down the fluid assumptions if the neutral and charged densities decrease towards zero in separate locations.
The exact conditions for the onset of this nonlinear instability, such as the range of the collisional parameter α, can be the subject of a more idealized (gravity-free) setup in the future.
§.§ Reconnection rate
We then calculate the reconnection rate
M = η^X J_y^X / ( v_A^* B_z^up ) ,
in the same way as <cit.>.
The superscripts indicate the points where quantities are evaluated.
The reconnection point X is defined as the point where the dissipation η J_y^2 is maximum (we included η in this calculation because of the localized resistivity).
X will be located at the center of the current sheet (x=0) at some height, which might not be the initial reconnection point z_ rec, because this reconnection point migrates vertically,
especially in the TR reconnection case z_ rec=2 Mm, where the stratification is stronger.
Initially, the X-point is located at the height z=z_ rec, which is one of the parameters of the simulations.
The border of the CS, indicated by the superscript ^up, is defined as the point located at the same height as X, but displaced along the x-direction to the point where the current density is half
the current density measured at X. Because of the horizontal symmetry, the values calculated at the two opposing symmetric points at the borders of the CS are averaged, because numerically exact left-right symmetry might not be preserved.
The Alfvén velocity v_A^* is calculated using the total density evaluated at the reconnection point X and the magnetic field B_z^ up.
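In practice this measurement reduces to a few array operations per snapshot. A schematic version (Python/NumPy; ours, and it assumes the snapshot is available as 2D arrays eta, Jy, Bz and rho_tot indexed as [z, x], which is an assumption about the data layout rather than the actual MPI-AMRVAC output format) reads:

import numpy as np

def reconnection_rate(eta, Jy, Bz, rho_tot, mu0=4e-7 * np.pi):
    # M = eta^X Jy^X / (vA^* Bz^up), arrays indexed as [z, x].
    iz, ix = np.unravel_index(np.argmax(eta * Jy**2), Jy.shape)   # X-point: maximum dissipation
    row = np.abs(Jy[iz, :])
    half = 0.5 * row[ix]
    right = ix + np.argmax(row[ix:] < half)       # CS border: |Jy| drops to half (right side)
    left = ix - np.argmax(row[ix::-1] < half)     # same on the left side
    Bz_up = 0.5 * (np.abs(Bz[iz, right]) + np.abs(Bz[iz, left]))  # average over the two borders
    vA_star = Bz_up / np.sqrt(mu0 * rho_tot[iz, ix])              # total density at the X-point
    return eta[iz, ix] * np.abs(Jy[iz, ix]) / (vA_star * Bz_up)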
We plot the thus computed reconnection rate as a function of time in Figure <ref>, for both TR and coronal reconnection, each time comparing MHD and both two-fluid rates.
The reconnection rate has an initial increase, which is steeper for z_ rec=4.5 Mm than z_ rec=2.0 Mm.
This is because at z_rec=4.5 Mm the density is lower and the CS thins faster; however, the minimum width of the CS achieved during the simulations
is similar for both heights.
The maximum reconnection rate reached after this increase phase is M≈ 0.1, a typical value for reconnection scenarios which use localized resistivity.
Then, it decreases for both cases, mainly because the magnetic field dissipates.
The reconnection rate decreases much faster in the TR reconnection case that started at z_ rec=2.0 Mm, because of the quick and continuous upwards migration of the X-point over a distance of ≈0.5 Mm at the end of the simulation. It is seen that the reconnection rates remain similar for the MHD and the two two-fluid setups.
It is known that plasmoid formation increases the reconnection rate <cit.>; however, in our case the plasmoids appear too late in the simulations to significantly influence the reconnection rate.
§.§ Secondary plasmoids
In the last snapshots of the simulations with TR reconnection (z_ rec=2 Mm) we can observe the formation of secondary plasmoids, but not for the coronal case z_ rec=4.5 Mm.
Their shape can be clearly seen in the current density map in Figure <ref> for the 2flα case, where we can visually estimate a size of ≈0.3 Mm.
The initial formation phase of the plasmoids can be best seen in the vertical profile of the vertical velocity.
Several equidistant moments of time are shown in Figure <ref> for the three models: 2flα, 2fl and MHD. In these profiles, the plasmoids are seen as secondary velocity extrema that develop and get advected upwards.
The plasmoids move with the outflow from the primary reconnection point, which is located where the vertical velocity changes sign (fitted locally by a linear slope in the three panels). We can visually estimate that the fastest growing length scales are similar
in the three cases; however, the growth is largest in the MHD case, followed by 2flα and then 2fl.
This suggests that the two-fluid effects do not affect the initial phase of the growth of these plasmoids. We do expect that the later stages of this plasmoid evolution will be similarly affected by the runaway decoupling process in the 2flα case, when charges accumulate towards the middle of the CS.
The linear growth of the tearing mode is affected by the gradient in the vertical flow (parallel to the CS), as shown in a linear resistive MHD analysis of a 2D configuration by
<cit.>. In their analysis, the vertical profile is assumed to be linear, v_z(z)=a z, where a quantifies the vertical velocity variation.
<cit.> show that the growth rate γ(a) of the tearing mode at finite a is reduced from the case without flow gradient, a=0, in the sense that γ(a) ≈γ(0)-a.
A linear analysis of the tearing mode in the two-fluid approach, using simplified assumptions of uniform density, and hence no gravity (and no added vertical flow) is
presented in Appendix <ref>. This
shows that two-fluid effects will define an effective density ρ_ c0≤ρ_ eff≤ρ_ T0 between the background charged ρ_ c0 and total ρ_ T0 densities.
The linear growth of the tearing mode under this simplified two-fluid assumption is bounded between the growth rates calculated in the MHD assumption when the density is the total density (ρ_T0) and the charged density (ρ_c0). In our setup the ionization fraction is high, and the difference
that the two-fluid effects might introduce in the growth of the tearing mode is of the order of 10^-4 s^-1. This is shown in our appendix, and can be seen as the difference on the y-axis of Fig. <ref> between the orange point, which corresponds to the collisional parameter used in the 2flα simulations, and the red point, which corresponds to a value of the collisional parameter larger than the maximum value of α in the 2fl cases.
This two-fluid related difference is
one order of magnitude smaller than the obtained difference in the velocity slope, which we show for the three cases in Figure <ref>.
The ordering of the growth rates of the plasmoids, as visually estimated from Figure <ref>, is reversed compared to the ordering of these slopes. This is consistent with the expected reduction in tearing growth rates due to added vertical velocity gradients. Similarly, the large (vertical) gradients in the outflow velocities are probably the reason why we do not observe secondary plasmoids in the simulations
with z_ rec=4.5 Mm, since then the vertical velocities and corresponding vertical gradients are larger than for the z_ rec=2 Mm case.
Therefore, in our simulations, the initial linear growth of the plasmoids is rather influenced by the fact that they form at different moments and slightly different heights
during the simulations, when the vertical gradients in the velocity are different. The difference in the growth rate produced by this gradient in velocity is much larger than the difference in the growth introduced by the collisional effects.
§ SYNTHETIC VIEWS
As a relatively straightforward observational validation, we can produce synthetic images resulting from emission from optically thin spectral lines. To do so, we calculate the emission in the Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO/AIA) channel AIA 193 Å, which has its highest response for temperatures between 10^6 and
2 × 10^6 K. We do so only above the height z ≈2 Mm in our initially stratified atmosphere, because the chromospheric region itself is not appropriately dealt with in an optically thin limit.
For optically thin emission, we essentially deal with a function of the local temperature T and number density n given by
Λ(n,T)=n^2 R(T) ,
where R(T) is a response function which depends on temperature, and a log R-log T view is shown in Figure <ref> for the wavelength 193 Å, obtained using the CHIANTI atomic database <cit.>.
Images in these Extreme Ultra-Violet (EUV) channels of AIA can be synthesized from 3D data cubes by integrating
Eq. <ref> along the line of sight (LOS). Since our data are 2.5D, with an assumed invariance in the third (y) direction, the images are shown at the simulation resolution and there is no integration along the LOS, which also makes the units arbitrary.
As the AIA 193 channel is dominated by Fe XII emission, the emission obviously comes from charged species. From our two-fluid plasma-neutral model, we must choose which number density and temperature are taken in Eq. <ref>, and a more accurate result might be obtained by using only the charged density, instead of the total density.
In particular, plasmoids are usually observed as bright blobs due to their increased density compared to the surrounding medium.
In this case, models which consider the total density in the calculation of the synthetic image (as is usually done from a single-fluid MHD simulation) would give wrong results, as opposed to considering the charged particle density.
Figure <ref> shows synthetic images in the AIA 193 channel which capture the temporal evolution of the plasmoids. We found secondary plasmoids when z_ rec=2 Mm, and show synthetic views for the 2fl case (top row) and the 2flα case (middle row), where we use the charged fluid temperature and density, i.e., we adopt Λ(n_c,T_c) in Eq. <ref>.
The bottom row shows, for the 2flα case, the same snapshots as in the middle row, but instead uses the total density and the center-of-mass temperature defined by Eq. <ref>, in practice Λ(n_T,T^ 2fl) in Eq. <ref>.
On all panels of Figure <ref>, we show an emission isocontour at a fixed value.
We can observe that the plasmoid fades away as the surface delimited by the isocontour becomes smaller, while it accelerates upwards in the 2fl case (top row).
On the contrary, the plasmoid decelerates and becomes brighter in the 2flα case (middle row), most probably because of the increasing charged density due to the runaway effect.
A completely different interpretation results from the images constructed using total density for the 2flα case (bottom row), where the plasmoids appear as dark structures that get
surrounded by two bright and descending spikes, coming down from z≈4 Mm, where the accumulation of the neutrals outside the CS seems to be maximal (Figure <ref>).
Because of the runaway process, the synthetic image that uses the total density gives a misleading picture, since the neutrals accumulated outside the current sheet (which are included in the total density) would not actually contribute to the emission in this line.
Therefore, charged density only should be used in generating synthetic images in optically thin lines in order to produce more accurate results.
Finally, we investigate how the MHD model estimates the charged density, and quantify differences obtained in synthetic images for the three models.
In the MHD model the ionization fraction is set by the background profile: it depends on height only and is constant in time. The MHD model therefore cannot track how the ionization fraction changes during the simulation as the two species evolve, but
we can define a quantity R, the inverse of the normalized mean molecular weight, calculated from the two-fluid equilibrium atmosphere (hence the 0 subscripts),
R = p_ c0 + p_ n0/T_0 (ρ_ c0+ρ_ n0) ,
which can be used to retrieve the temperature in the MHD model in a consistent way with the two-fluid model
T = p/R ρ .
As we also know the mean molecular weight of charges and neutrals at each height[When we use a purely Hydrogen plasma, the normalized mean molecular weights for charges and neutrals are uniform and constant <cit.>,
being equal to 0.5 and 1, respectively.], then we can
estimate the charged and neutral density from the MHD model:
ρ_c = ρ (R-1) , ρ_n = ρ (2-R) .
With these identifications, we can mimic consistent two-fluid like quantities from a pure MHD model, and then also turn the MHD in a synthetic image based on temperature and charged density.
In practice, for the MHD model the density of charges is then estimated using Eq. <ref>.
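A minimal sketch of this reconstruction is given below. It assumes a pure-hydrogen plasma and that the equilibrium profiles (subscript 0) are stored as arrays that broadcast against the MHD snapshot; the function and variable names are ours, not those of the simulation code.

```python
import numpy as np

def split_mhd_density(rho, p, T0, p_c0, p_n0, rho_c0, rho_n0):
    """Estimate charged/neutral densities and a consistent temperature from an
    MHD snapshot (rho, p), using the fixed two-fluid equilibrium profiles."""
    R = (p_c0 + p_n0) / (T0 * (rho_c0 + rho_n0))   # inverse normalized mean molecular weight
    T = p / (R * rho)                              # MHD temperature consistent with the 2fl model
    rho_c = rho * (R - 1.0)                        # pure-hydrogen identification
    rho_n = rho * (2.0 - R)
    return rho_c, rho_n, T
```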
Figure <ref> shows the resulting images for the three models and for the two cases of reconnection heights.
We can observe that the MHD and 2fl models give very similar results.
There is a small difference: in the z_ rec=2 Mm case the 2fl model shows less intensity than the MHD model, but the reverse happens for the z_ rec=4.5 Mm case. This is because the density of charges is slightly overestimated for the fluid coming from below and underestimated for the fluid coming from above.
Because of different scale heights between charges and neutrals, the ionization fraction is different at different heights.
This is an intrinsic limitation of the MHD model which only keeps the information of the initial ionization state.
The snapshots used for z_ rec=4.5 Mm are at an earlier time than the final time of this simulation, where we see a clear upward jet feature as the reconnection outflows impact the chromosphere. This jet-like feature with width comparable to the width of the CS (≈30 km) is present in all three images, and may be observable in high cadence, high resolution observations.
§ SUMMARY
We performed simulations of two-fluid reconnection in a gravitationally stratified atmosphere, similar to the solar atmosphere,
where we studied the collisional effects for reconnection points situated at different heights.
Because of the localized resistivity used, a Petschek-type reconnection developed
with slow shocks propagating along the x-direction, disrupting the current sheet in a V shape. The upward vertical velocities of ≈ 10 km/s in the z_ rec=2 Mm case, and the widths of the jets, which vary from being similar to the width of the CS (≈ 30 km) near the X-point to ≈ 700 km higher up due to the slow shocks, are consistent with properties of type I spicules.
The MHD and 2fl simulations showed very similar results.
When z_ rec=4.5 Mm the reconnection outflow hit the denser material below and interacted with reconnected magnetic field,
creating secondary thin current sheets,
leading to locally more turbulent behavior in the post-flare loop region.
The thermal effect of the collisions on the evolution of the neutral fluid has been observed earlier in simplified 1D slow shock simulations, where the heating of the neutrals produces an overshoot in the neutral velocity
<cit.>.
In our case, the decoupling is larger when the collisional effects are increased (the 2flα cases) and the heating of the neutrals will produce a runaway effect which separates the neutrals and the charges across the CS.
The neutrals accumulate outside the CS, while the charges tend towards the center of the CS.
This gives reversed contrasts in the charged density and neutral or total density maps.
The regions with very low density of neutrals have high density of charges (inside the CS) and
the regions with very high density of charges have low density of neutrals (outside the CS), and these differences increase over time.
The temperature maps are consistent with the density maps: high-density regions have low temperatures, while low-density regions have high temperatures. This was analyzed in detail, and a nonlinear decoupling runaway effect was identified.
We obtain high reconnection rates in all the simulations because of the localized resistivity.
The localized resistivity has the effect of bounding the current density, similar to an anomalous prescription.
We obtain the maximum reconnection rate expected for the Petschek model, which is 0.1 <cit.>.
Two-fluid effects do not increase our reconnection rates further, as opposed to the results of <cit.>. Our setup is closer to conditions relevant for the stratified solar atmosphere.
At later times we observe the formation of secondary plasmoids. They are observed for all the models when z_ rec=2 Mm.
The large outflows and associated gradients in the flow when z_ rec=4.5 Mm are likely inhibiting the linear tearing mode. This effect is much more important
than collisions in the early formation of plasmoids. In simplified assumptions of uniform density,
collisions define an effective density in the two-fluid model, similar to the idea presented by <cit.> in simulations of the coalescence instability.
At later stages, however, the accumulation of plasma adds to the nonlinear evolution of the tearing mode.
Because of a smaller effective density when the collisional coupling is reduced, the effective Alfvén speed is larger and the simulations evolve slightly faster.
The secondary plasmoid formation process would look completely different in synthetic images which use total density instead of charged density in the calculation
of emission in optically thin lines.
When the charged density is used for synthetic image generation, an MHD model that does not keep track of the ionization fraction produces slightly different results because of this limitation. It is evident that using the charged particle density leads to more realistic behavior, in line with the enhanced emission blobs seen in observations.
§ LINEAR TEARING IN TWO-FLUID SETTINGS
The linearized incompressible, resistive MHD equations, where the background density ρ_0 is uniform, the background magnetic field
𝐁_0=(0,B_ y0(x),B_ z0(x) )
is force-free, and the evolution of the perturbed magnetic field 𝐁_1 neglects the
changes due to the Ohmic diffusion of the equilibrium magnetic field, are:
ρ_0 ∂𝐯/∂ t = 𝐉_0×𝐁_1 + 𝐉_1×𝐁_0 ,
∂𝐁_1/∂ t = -∇× (𝐯×𝐁_0) + η_0 ∇^2𝐁_1 ,
∇·𝐯=0 .
In these equations, 𝐉_1=∇×𝐁_1 is the perturbed current.
In a 2.5D geometry (xz plane) we consider the y-component after taking ∇× of Eq. <ref>, the x-component of Eq. <ref>, and
∇·𝐁=0 ,
which is equivalent in the linear assumption to the z-component of Eq. <ref>.
After assuming a solution of the form {v_x(x,z,t), B_ x1(x,z,t)} = {u_x(x),b_x(x)}·exp(γ t - i k_z z), the following system is obtained <cit.>:
i γρ_0 (d^2 u_x/d x^2 - k_z^2 u_x) =
k_z [ B_ z0d^2 b_x/d x^2 - b_x ( k_z^2 B_ z0 + d^2 B_ z0/d x^2) ] ,
γ b_x = -i k_z B_ z0 u_x + η_0 ( d^2 b_x/d x^2 - k_z^2 b_x ) .
In the two-fluid model, when there are separate momentum equations for charges and neutrals, Eqs. <ref>- <ref> are replaced by:
ρ_ c0∂𝐯_𝐜/∂ t = 𝐉_0×𝐁_1 + 𝐉_1×𝐁_0 + αρ_ c0ρ_ n0( 𝐯_𝐧 - 𝐯_𝐜) ,
∂𝐁_1/∂ t = -∇× (𝐯_𝐜×𝐁_0) + η_0 ∇^2𝐁_1 ,
∇·𝐯_𝐜=0 ,
ρ_ n0∂𝐯_𝐧/∂ t = αρ_ c0ρ_ n0( 𝐯_𝐜 - 𝐯_𝐧) .
In the two-fluid assumption Eq. <ref> remains unmodified, and Eq. <ref> can be rewritten, so we now get the following coupled system of two governing linear ordinary differential equations:
i γρ(γ) (d^2 u_ cx/d x^2 - k_z^2 u_ cx) =
k_z [ B_ z0d^2 b_x/d x^2 - b_x ( k_z^2 B_ z0 + d^2 B_ z0/d x^2) ] ,
γ b_x = -i k_z B_ z0 u_ cx + η_0 ( d^2 b_x/d x^2 - k_z^2 b_x ) .
where
ρ(γ) = ρ_ c0 (γ + αρ_ T0)/(γ + αρ_ c0) ,
and ρ_ T0 = ρ_ c0 + ρ_ n0 is the background total density. Since for every γ we have ρ_ c0≤ρ(γ) ≤ρ_ T0,
this defines an effective density in the two-fluid model, based on the collisional coupling α.
We observed in Fig. <ref> that all plasmoids formed have a similar length scale, estimated as λ≈ 0.3 Mm.
We consider the densities at z=3.5 Mm and the value of the resistivity η_0=8 Ω m, and solve numerically (using the NumPy eigenvalue solver) the eigenvalue problem resulting from the spatial discretization of Eqs. (<ref>), (<ref>), with boundary conditions that set both variables to zero, thus obtaining the growth rate in the MHD assumption.
Doing so, we found that the largest growing mode (with wavelength larger than 0.1 Mm and
smaller than 0.5 Mm) has wavelength λ=0.4 Mm for the parameters considered.
The density gradient scale height at z=3.5 Mm is much larger than the vertical size of the domain and than the wavelength considered, so this justifies neglecting the gravity and all variations of the quantities in the vertical direction.
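For concreteness, the sketch below shows one way to discretize this eigenvalue problem with NumPy/SciPy, assuming a tanh profile for B_z0 and u_x = b_x = 0 at the boundaries; the field strength, sheet width, domain size, and resolution are illustrative and are not the exact values used for Fig. <ref>. The substitution w = i u_x makes the generalized eigenvalue problem real.

```python
import numpy as np
from scipy.linalg import eigvals

def tearing_growth_mhd(rho0, eta0, kz, B0=1.0e-3, L=3.0e4, xmax=4.0e5, N=400):
    """Largest tearing growth rate of the linearized incompressible MHD system,
    for B_z0(x) = B0*tanh(x/L), on N interior points with Dirichlet boundaries."""
    x = np.linspace(-xmax, xmax, N + 2)[1:-1]
    dx = x[1] - x[0]
    D2 = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N)
          + np.diag(np.ones(N - 1), -1)) / dx**2          # d^2/dx^2 with zero boundary values
    I = np.eye(N)
    Bz = B0 * np.tanh(x / L)
    Bzpp = -2.0 * B0 * np.tanh(x / L) / (L**2 * np.cosh(x / L)**2)
    # Generalized problem A y = gamma M y, with y = (w, b_x) and w = i*u_x
    A = np.block([[np.zeros((N, N)),
                   kz * (np.diag(Bz) @ D2 - np.diag(kz**2 * Bz + Bzpp))],
                  [-kz * np.diag(Bz), eta0 * (D2 - kz**2 * I)]])
    M = np.block([[rho0 * (D2 - kz**2 * I), np.zeros((N, N))],
                  [np.zeros((N, N)), I]])
    gam = eigvals(A, M)
    return gam.real[np.isfinite(gam)].max()

# the fastest-growing wavelength is found by scanning kz = 2*pi/lambda
```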
Fig. <ref> shows the computed growth rate as a function of density (varying between ρ_ c0≤ρ≤ρ_ T0) for the MHD case, and also gives the two-fluid growth rate for several values of α as indicated in the legend. To get the black solid line in Fig. <ref> which quantifies growth rates in the MHD approximation, we fix λ=0.4 Mm and solve numerically the eigenvalue problem in the MHD approximation for densities with values between the charged density and the total density.
The growth rate in the two-fluid approach is given by the intersection points between the black line and the curve ρ^-1(γ), the inverse of ρ(γ) from Eq. (<ref>) (only the intersection points are shown in Fig. <ref>, not the curve ρ^-1).
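In practice the intersection can be obtained with a simple root solve. The sketch below assumes that gamma_mhd is a callable (e.g., an interpolant of the MHD growth rates computed for ρ_ c0 ≤ ρ ≤ ρ_ T0, i.e., the black line); the bracket values are illustrative.

```python
from scipy.optimize import brentq

def rho_eff(gamma, alpha, rho_c0, rho_n0):
    """Effective density rho(gamma) defined above."""
    rho_t0 = rho_c0 + rho_n0
    return rho_c0 * (gamma + alpha * rho_t0) / (gamma + alpha * rho_c0)

def two_fluid_growth(gamma_mhd, alpha, rho_c0, rho_n0, bracket=(1e-4, 1.0)):
    """Solve gamma = gamma_mhd(rho_eff(gamma)) for the two-fluid growth rate."""
    f = lambda g: g - gamma_mhd(rho_eff(g, alpha, rho_c0, rho_n0))
    return brentq(f, *bracket)
```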
This work was supported by the FWO grant 1232122N and a FWO grant G0B4521N.
This project has received funding from the European Research Council (ERC) under
the European Union’s Horizon 2020 research and innovation programme (grant
agreement No. 833251 PROMINENT ERC-ADG 2018). This research is further supported by Internal funds KU Leuven, through the project C14/19/089 TRACESpace.
The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government.
[Baty et al.(2014)Baty, Forbes, & Priest]2014Baty
Baty, H., Forbes, T. G., & Priest, E. R. 2014, Physics of Plasmas, 21,
112111
[Biskamp & Schwarz(2001)]biskamp-loc
Biskamp, D. & Schwarz, E. 2001, Physics of Plasmas, 8, 4729
[Bulanov et al.(1978)Bulanov, Syrovatskii, &
Sakai]stabOutflow
Bulanov, S. V., Syrovatskii, S. I., & Sakai, J. 1978, ZhETF Pisma
Redaktsiiu, 28, 193
[Close et al.(2004)Close, Parnell, Longcope, &
Priest]recObs
Close, R. M., Parnell, C. E., Longcope, D. W., & Priest, E. R. 2004,
, 612, L81
[Del Zanna et al.(2015)Del Zanna, Dere, Young, Landi, &
Mason]chianti
Del Zanna, G., Dere, K. P., Young, P. R., Landi, E., & Mason, H. E.
2015, A&A, 582, A56
[Dover et al.(2021)Dover, Sharma, & Erdélyi]tempProfile
Dover, F. M., Sharma, R., & Erdélyi, R. 2021, The Astrophysical Journal, 913,
19
[Furth et al.(1963)Furth, Killeen, & Rosenbluth]furth
Furth, H. P., Killeen, J., & Rosenbluth, M. N. 1963, The Physics of Fluids, 6,
459
[Hillier et al.(2016)Hillier, Takasao, &
Nakamura]andrewShocks1
Hillier, A., Takasao, S., & Nakamura, N. 2016, , 591, A112
[Innes & Tóth(1999)]1999gabor
Innes, D. E. & Tóth, G. 1999, , 185, 127
[Keppens et al.(2023)Keppens, Popescu Braileanu, Zhou,
Ruan, Xia, Guo, Claes, & Bacchini]mpiamrvac3
Keppens, R., Popescu Braileanu, B., Zhou, Y., et al. 2023, arXiv
e-prints, arXiv:2303.03026
[Kumar et al.(2023)Kumar, Karpen, Antiochos, DeVore, Wyper, &
Cho]obsPlasm
Kumar, P., Karpen, J. T., Antiochos, S. K., et al. 2023, The Astrophysical
Journal, 943, 156
[Leake et al.(2013)Leake, Lukin, & Linton]slavaRec2
Leake, J. E., Lukin, V. S., & Linton, M. G. 2013, Physics of Plasmas, 20,
061202
[Leake et al.(2012)Leake, Lukin, Linton, & Meier]slavaRec1
Leake, J. E., Lukin, V. S., Linton, M. G., & Meier, E. T. 2012, The
Astrophysical Journal, 760, 109
[Loureiro et al.(2007)Loureiro, Schekochihin, & Cowley]loureiro
Loureiro, N. F., Schekochihin, A. A., & Cowley, S. C. 2007, Physics of
Plasmas, 14, 100703
[Manheimer & Flynn(1971)]anomalousRes
Manheimer, W. M. & Flynn, R. 1971, Phys. Rev. Lett., 27, 1175
[Martínez-Sykora et al.(2017)Martínez-Sykora, De
Pontieu, Carlsson, Hansteen, Nóbrega-Siverio, &
Gudiksen]2017sykora
Martínez-Sykora, J., De Pontieu, B., Carlsson, M., et al. 2017,
, 847, 36
[Murtas et al.(2021)Murtas, Hillier, & Snow]murtas
Murtas, G., Hillier, A., & Snow, B. 2021, Physics of Plasmas, 28, 032901
[O'Flannagain et al.(2015)O'Flannagain, Brown, & Gallagher]ionFr
O'Flannagain, A., Brown, J., & Gallagher, P. 2015, The Astrophysical Journal,
799, 127
[Parker(1957)]parker
Parker, E. N. 1957, , 62, 509
[Petschek(1963)]petschek
Petschek, H. E. 1963
[Popescu Braileanu & Keppens(2022)]2flpaper
Popescu Braileanu, B. & Keppens, R. 2022, , 664, A55
[Popescu Braileanu et al.(2023)Popescu Braileanu, Lukin, &
Khomenko]recBeatrice
Popescu Braileanu, B., Lukin, V. S., & Khomenko, E. 2023, , 670, A31
[Ruan et al.(2020)Ruan, Xia, & Keppens]wenzhi2
Ruan, W., Xia, C., & Keppens, R. 2020, , 896, 97
[Ruan et al.(2023)Ruan, Yan, & Keppens]wenzhi
Ruan, W., Yan, L., & Keppens, R. 2023, arXiv e-prints, arXiv:2210.09856
[Sato & Hayashi(1979)]sato
Sato, T. & Hayashi, T. 1979, The Physics of Fluids, 22, 1189
[Schmieder et al.(2022)Schmieder, Joshi, & Chandra]obsJets2
Schmieder, B., Joshi, R., & Chandra, R. 2022, Advances in Space Research, 70,
1580, magnetic Flux Ropes in Solar Environments
[Schmieder et al.(1995)Schmieder, Shibata, van
Driel-Gesztelyi, & Freeland]obsJets
Schmieder, B., Shibata, K., van Driel-Gesztelyi, L., & Freeland, S.
1995, , 156, 245
[Snow & Hillier(2019)]andrewShocks2
Snow, B. & Hillier, A. 2019, , 626, A46
[Snow & Hillier(2021)]2021ben
Snow, B. & Hillier, A. 2021, , 645, A81
[Song et al.(2023)Song, Tu, & Wexler]2023song
Song, P., Tu, J., & Wexler, D. B. 2023, , 948, L4
[Sweet(1958)]sweet
Sweet, P. A. 1958, in Electromagnetic Phenomena in Cosmical Physics, ed.
B. Lehnert, Vol. 6, 123
[Takasao et al.(2013)Takasao, Isobe, &
Shibata]2013takasao
Takasao, S., Isobe, H., & Shibata, K. 2013, , 65, 62
[Takasao et al.(2015)Takasao, Matsumoto, Nakamura, &
Shibata]Takasao2015
Takasao, S., Matsumoto, T., Nakamura, N., & Shibata, K. 2015, ,
805, 135
[Ugai(1995)]ugai
Ugai, M. 1995, Physics of Plasmas, 2, 388
[Uzdensky(2003)]Uzdensky_2003
Uzdensky, D. A. 2003, The Astrophysical Journal, 587, 450
[Vasyliunas(1975)]vasi
Vasyliunas, V. M. 1975, Reviews of Geophysics and Space Physics, 13, 303
[Vernazza et al.(1981)Vernazza, Avrett, & Loeser]VALC
Vernazza, J. E., Avrett, E. H., & Loeser, R. 1981, , 45, 635
[Wang et al.(2021)Wang, Cheng, Ding, & Lu]2021wang
Wang, Y., Cheng, X., Ding, M., & Lu, Q. 2021, , 923, 227
[Xia et al.(2018)Xia, Teunissen, El Mellah, Chané, &
Keppens]B0split
Xia, C., Teunissen, J., El Mellah, I., Chané, E., & Keppens, R.
2018, , 234, 30
[Yadav et al.(2022)Yadav, Keppens, & Popescu
Braileanu]nitin
Yadav, N., Keppens, R., & Popescu Braileanu, B. 2022, , 660, A21
[Yamada et al.(2010)Yamada, Kulsrud, & Ji]recReview
Yamada, M., Kulsrud, R., & Ji, H. 2010, Reviews of Modern Physics, 82,
603
[Yokoyama & Shibata(1996)]1996yoko
Yokoyama, T. & Shibata, K. 1996, , 48, 353
[Zanna et al.(2016)Zanna, Landi, Papini, Pucci, & Velli]velli
Zanna, L. D., Landi, S., Papini, E., Pucci, F., & Velli, M. 2016, Journal of
Physics: Conference Series, 719, 012016
|
http://arxiv.org/abs/2307.04878v1 | 20230710195656 | The Impact of Black Hole Scaling Relation Assumptions on the Mass Density of Black Holes | [
"Cayenne Matt",
"Kayhan Gültekin",
"Joseph Simon"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
We examine the effect of the choice of supermassive black hole (SMBH) mass scaling relation on the inferred SMBH mass population since redshift z ∼ 3. To make robust predictions for the gravitational wave background (GWB) we must have a solid understanding of the underlying SMBH demographics. Using the SDSS and 3D-HST+CANDELS surveys for 0 < z < 3 we evaluate the inferred SMBH masses from two SMBH–galaxy scaling relations: M_BH–M_bulge and M_BH–σ. Our SMBH mass functions come directly from stellar mass measurements for M_BH–M_bulge, and indirectly from stellar mass and galaxy radius measurements along with the galaxy mass fundamental plane for M_BH–σ.
We find that there is a substantial difference in predictions, especially for z > 1, and this difference increases out to z = 3. In particular, we find that using velocity dispersion predicts a greater number of SMBHs with masses greater than 10^9 M_⊙. The GWB for which pulsar timing arrays find evidence is higher in amplitude than expected from GWB predictions that rely on high-redshift extrapolations of local SMBH mass–galaxy scaling relations. The difference in SMBH demographics resulting from different scaling relations may be the origin of the mismatch between the signal amplitude and predictions. Generally, our results suggest that a deeper understanding of the potential redshift evolution of these relations is needed if we are to draw significant insight from their predictions at z > 1.
black hole physics – gravitational waves
§ INTRODUCTION
Supermassive black holes (SMBHs) reside in the nuclei of nearly all massive galaxies <cit.>. Through galaxy mergers, these SMBHs can form dual and binary SMBHs <cit.>. In the final stages of their evolution, before coalescence, SMBH binaries lose energy and angular momentum purely though gravitational waves (GW). The combined GW signal from SMBH binaries is expected to be a stochastic background known as the gravitational wave background <cit.>. Though GW detectors such as LIGO, VIRGO, and KAGRA have successfully detected many GW events from stellar mass compact objects <cit.>, the frequency range of GWs emitted by SMBH binaries is far below even the lowest detectable limit for Earth-based detectors. For such GWs, a much longer baseline is needed. To achieve this, pulsar timing arrays <cit.> use high-precision time-of-arrival measurements of millisecond pulsars to measure the change in Earth–pulsar distances for ∼kpc-scale baselines. There are several years-long PTA campaigns, including North American Nanohertz Observatory for Gravitational Waves <cit.>, European Pulsar Timing Array <cit.>, Parkes Pulsar Timing Array <cit.>, and Chinese Pulsar Timing Array <cit.>, Indian Pulsar Timing Array <cit.>, South Africa Pulsar Timing Array <cit.>.
Several PTAs have individually made significant progress towards detecting the GWB, finding evidence for a GWB with the characteristic quadrupolar signal of GWs <cit.>. Previously, the NANOGrav 12.5-year data <cit.>, while not having sufficient signal-to-noise to see the <cit.> correlation, showed a common red-noise process that shared many traits characteristic of the expected GWB. NANOGrav's signal, however, is significantly higher in amplitude than many predictions of the GWB <cit.>. The newest PTA data increase the significance of the high-amplitude GWB, with support for a characteristic strain amplitude of h_c ∼ 2 × 10^-15 consistent across all of the data sets finding evidence for <cit.> correlations <cit.>. In fact, three of the analyses are inconsistent with h_c ≤ 1 × 10^-15 <cit.>. The discrepancy between the observed high amplitude and that expected from SMBH binaries has been explained with exotic theories such as cosmic strings <cit.> and inflationary universe models <cit.>, or with extreme parameterizations of our current models <cit.>. This opens the possibility that the explanations for the GWB signal should be revised <cit.>.
Though there are many SMBH properties that influence the emitted GWs, the mass distribution of SMBHs is fundamentally linked to the characteristic strain amplitude of the GWB and may be the most significant contributor to the amplitude we observe. <cit.> noted that the characteristic strain amplitude from an isotropic background of binary SMBHs depends on four key quantities:
(i) the chirp mass of the binary, ℳ^5 / 3≡M_1 M_2(M_1+M_2)^-1 / 3, where M_1, M_2 are the masses of the SMBHs in the system with M_1 ≥M_2; (ii) the frequency of the emitted GWs, f, which is twice the orbital frequency; (iii) the present-day comoving number density of merged remnants, N_0; and (iv) the redshift, z as
h_c ∼ℳ^5/6 f^-2/3 N_0^1/2⟨(1+z)^-1 / 6⟩.
Note that the amplitude has the strongest dependence on chirp mass, and so the signal is dominated by the most massive black holes. Below z = 1 the PTA band is dominated by local SMBH binaries, but the GWB amplitude is additionally influenced by galaxies that merged at higher redshifts. SMBH evolution is determined, among other things, by mass and so a higher mass population of SMBHs at z > 1 may reflect a higher redshift evolution, thus the astrophysical history of SMBH mass evolution is encoded in the GWB.
Since direct measurements of SMBH masses are only possible for nearby sources, we are often left to infer masses from properties of their host galaxies <cit.>. There exists a wealth of relations between galaxy properties and the mass of their central black hole, all with varying degrees of scatter <cit.>. Here we focus on two relations in particular: the correlations of SMBH mass with velocity dispersion (σ) and with bulge stellar mass (M_bulge). In the local universe, despite the M_BH–σ relation having lower scatter <cit.>, both relations were found to be remarkably accurate in reproducing known SMBH masses from either stellar mass or velocity dispersion. These scaling relations are based on direct, dynamical mass measurements, which have been shown to be robust. For example, SMBH mass estimates in M87 have previously had discrepancies of up to a factor of 2.5 when using stellar kinematics <cit.> versus gas dynamics <cit.>. These are now seen as due to gas filaments <cit.>, in agreement with the mass found by the Event Horizon Telescope collaboration <cit.>.
While there is general agreement in the local universe between SMBH masses predicted from stellar mass and velocity dispersion, it is worth discussing instances where these relations are thought to break down. Though we do not investigate it in this paper, SMBH mass is well-predicted from host luminosity. When investigating SMBH masses of large, luminous, brightest cluster galaxies (BCGs), <cit.> found that M_BH–σ fails to reproduce the extreme masses above M_BH ∼ 3 × 10^9 M_⊙ measured and predicted from M_BH–L. Similarly, <cit.> discuss this same trend, which they call a “saturation” effect, in which not only M_BH–σ but also M_BH–M_bulge under-predicts the highest mass SMBHs in core galaxies. Both relations display this saturation at the high end, which is not seen in M_BH–L.
We see a strikingly different pattern, however, when considering red nugget galaxies—galaxies with relatively small radii for their masses and high velocity dispersions that are more typical of younger galaxies.
Red nugget galaxies may be representative of the high-redshift galaxy population, possibly because they have avoided mergers for a large portion of their lives <cit.>. One red nugget is NGC 1277, which hosts a SMBH with a mass of (4.9 ± 1.6) × 10^9 M_⊙ <cit.>. NGC 1277's SMBH is over-massive compared to the total stellar mass of the galaxy (1.2 × 10^11 M_⊙) and is an outlier in the M_BH–M_bulge relation, which predicts a mass of around (4.9–6.23) × 10^8 M_⊙. However, because of its high velocity dispersion, M_BH–σ reproduces the measured SMBH mass more accurately, predicting a mass of (2.9–3.7) × 10^9 M_⊙, and the dynamical mass lies within the intrinsic scatter of the relation <cit.>. Recently, it has been found that NGC 1277 may have lost the majority of its dark matter, suggesting an alternative evolutionary path <cit.>, but NGC 1277 is not the only galaxy for which σ has been found to be a better predictor of SMBH mass. MRK 1216 is another of several well-studied examples of this type of object that exhibit similar traits <cit.>.
Despite the great promise of the M_BH–σ relation as a SMBH mass predictor, it is resource intensive to measure velocity dispersion at high redshift due to the spectral quality required to resolve the necessary spectral features. To overcome this, the M_BH–M_bulge method is commonly used because it relates the relatively easily measured bulge stellar mass directly to the SMBH mass. This relationship is well measured within our local universe, but a more accurate mass predictor may be needed for high redshifts (z > 1), where a significant fraction of the GWB signal originates.
To circumvent the spectral limitations on measuring velocity dispersion, in this paper we use the mass fundamental plane (MFP) of galaxies, which links total stellar mass and half-light radius to stellar velocity dispersion. The MFP therefore allows us to infer velocity dispersion for distant galaxies and thus extend the M_BH–σ relationship to higher redshifts. <cit.> investigated the evolution of the relationship between galaxy total stellar mass (M_*) and effective radius (R_eff). They found that galaxy masses do not evolve along the z = 0 M_*–R_eff relation; rather, from redshift 0 to 3 the effective radii decrease substantially. This evolution of the M_*–R_eff relationship indicates that galaxies start off relatively compact and become more diffuse as they age as a result of mergers, feedback processes, and other galaxy interactions. This change in radius is not incorporated in any way into the M_BH–M_bulge relation. Applying the local M_BH–M_bulge relationship to high-redshift galaxies therefore results in a relatively unchanging SMBH mass population throughout time.
Because of the known evolution of the M_*–R_eff relationship, the lack of evolution in the MFP is not immediately obvious. Velocity dispersions tend to be higher, however, for more compact galaxies, which would suggest that younger galaxies have higher velocity dispersions and therefore higher SMBH masses. This does not mean that black holes decrease in mass, of course, but suggests that black holes grow faster (relatively) than their host galaxies at first. This inference is supported by observations of red nugget galaxies. We therefore investigate how the assumption of SMBH mass galaxy scaling relation affects the inferred SMBH mass population.
The structure of this paper is as follows: In section <ref> we describe the data we used. Section <ref> provides the details of our methods and choices of scaling relations. Section <ref> is where we present the results of our analysis. We discuss the implications of our results in section <ref> and then summarize our work in section <ref>. Tables of our fit posterior values can be found in the appendix. Throughout this work we adopted a WMAP9 cosmology <cit.> where H_0 = 69.33, Ω_b = 0.0472, and Ω_c= 0.2408.
§ DATA
The data we use in this work come from SDSS <cit.> and the 3D-HST+CANDELS survey <cit.>. A summary of the data is presented in the mass–radius plots in Figure <ref>.
§.§ Local Sample from SDSS
<cit.> did not provide mass estimates for galaxies below a redshift of 0.5 so, to supplement this, we compiled a sample of local galaxies with velocity dispersion measurements from the 7th data release of SDSS <cit.> at 0.05 < z < 0.07 (top-left panel in Fig. <ref>). All galaxies were selected from the SDSS Main Galaxy Sample <cit.>, which is ∼95% complete <cit.>. We cross-matched our initial sample with galaxies that had circularized half-light radii and stellar mass estimates from <cit.> and <cit.>, respectively. Quiescent and star-forming galaxies were separated using their u-r and r-z colors, using the criteria in <cit.>. These criteria are nearly identical to those laid out in <cit.>, and we found them to be consistent with other methods of separation based on, e.g., star formation rates. The data were selected for reliability of measurements and completeness of the sample from the SDSS DR7 database. We excluded flagged galaxies using the same criteria detailed in <cit.>. For plotting purposes we include galaxies below log(M_* / M_⊙) = 10.5 which <cit.> removed from their sample entirely. Our sample contains 10,863 galaxies split into 1,241 star-forming and 9,622 quiescent galaxies.
§.§ 0.5 < z < 3 Sample from 3D-HST+CANDELS
For our high-redshift sample (all panels except top-left in Fig. <ref>), we use data from the 3D-HST+CANDELS survey. For this work we infer SMBH mass from stellar mass and velocity dispersion, the latter of which can be calculated from stellar mass and half-light radius. Half-light radii used here are those determined by <cit.>. Half-light radius estimates can differ when measured at one wavelength versus another so we normalized these radii to a rest frame of 5000 Å following equation 2 in <cit.>. We circularized the radii according to R_eff = R_hl q^1/2 where R_hl is the wavelength-corrected half-light radius and q is the axis ratio reported by <cit.>. We also made cuts to the data according to <cit.> and <cit.> based on, e.g., completion limits resulting in a sample that is ≥ 95% complete <cit.>.
Masses for each galaxy were determined by <cit.> using the galaxy SED-fitting code <cit.>. In their work, <cit.> report that the mass-radius relationship evolves as R_eff = 5.6 ( M_* / 5 × 10^9 M_⊙)^0.8 (1 + z)^-1.48 for quiescent galaxies and R_eff = 8.9 ( M_* / 5 × 10^9 M_⊙)^0.2 (1 + z)^-0.75 for star forming galaxies. Because their analysis was performed with different mass estimates, we provide our own fits to the data to demonstrate this evolution. Those interested in the evolution of this relationship should refer to <cit.> for a more rigorous characterization of this relationship. Our final sample consists of 13,232 galaxies from the UDS, GOODS-S, and COSMOS fields. For all galaxies in this sample, <cit.> determined star formation rates from infrared (IR) and ultraviolet (UV) luminosity. We followed their galaxy type selection criteria shown in their figure 5 resulting in a final sample of 11,107 star-forming and 2,125 quiescent galaxies.
§ METHODS
Here we describe how we use the <ref> to infer velocity dispersions for all galaxies in our sample, as well as the two methods of predicting SMBH mass that are our main focus of this paper. The resulting SMBH mass predictions are converted to number density functions, the process for which is detailed at the end of this section.
§.§ Scaling Relations
In this section we give the relations for the MFP, M_BH–M_bulge, and M_BH–σ.
§.§.§ High Redshift Velocity Dispersion
We infer velocity dispersions for our sample using the galaxy MFP; a three-dimensional relation between galaxy stellar mass, half-light radius, and stellar velocity dispersion <cit.>. This relation can be used reliably to predict any of the three properties if the other two are known. Several works in the last decade have investigated both the possibility of an evolution in the MFP and the effect galaxy type may have on the parameterization <cit.>. Now, with large volumes of deep data a picture is emerging where all galaxies lie on one plane that does not evolve <cit.>. In particular, <cit.> recently performed a thorough analysis of the galaxy type dependence and redshift evolution and came to this same conclusion. Motivated by these results we used the MFP described described by
logσ = (log R_eff - β logΣ_⋆ - γ) / α
and
Σ_⋆≡M_* / (2 πR_eff^2),
where α = 1.6287 and β = -0.84 as determined by <cit.> and the offset is γ= 4.482 <cit.>.
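A minimal sketch of this step is given below. We assume R_eff in kpc, M_* in M_⊙, and Σ_⋆ in M_⊙ kpc^-2, which returns σ in km s^-1; these unit conventions are our assumption and should be checked against the adopted MFP calibration.

```python
import numpy as np

ALPHA, BETA, GAMMA_OFF = 1.6287, -0.84, 4.482   # MFP coefficients quoted above

def mfp_sigma(m_star, r_eff_kpc):
    """Velocity dispersion inferred from stellar mass and circularized half-light radius."""
    sigma_star = m_star / (2.0 * np.pi * r_eff_kpc**2)   # stellar surface density
    log_sigma = (np.log10(r_eff_kpc)
                 - BETA * np.log10(sigma_star) - GAMMA_OFF) / ALPHA
    return 10.0 ** log_sigma
```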
If the MFP is a valid prescription, we should be able to reproduce measured velocity dispersions using the stellar mass and effective radii of each galaxy. We compare the measured velocity dispersions from galaxies in both the SDSS and LEGA-C surveys to those we predict using the MFP. We plot the results of these comparisons in Fig. <ref> for each set of galaxies. We find that our predicted values are consistent with measurements for all galaxy types across both samples (0.1 dex or below), even with scatter introduced (0.16 dex or lower). Because our predictions are able to reproduce the measured values, we can treat the MFP velocity dispersions functionally as measured velocity dispersions. From here on we use σ to indicate the velocity dispersion predicted from the MFP unless otherwise specified.
§.§.§ Supermassive Black Hole Mass
To infer SMBH mass from host galaxy properties we used the relations presented in <cit.> for the M_BH–M_bulge and M_BH–σ scaling relationships, given by
M_BH/10^9 M_⊙=α_1 (M_bulge/10^11M_⊙)^β_1
and
M_BH/10^9 M_⊙=α_2(σ/200 km s^-1)^β_2.
The two relations are well studied in the local universe, but there is a lack of consensus surrounding the evolution (or lack thereof) of either relation beyond nearby galaxies <cit.>. For this work we assumed the local parametrizations [α_1, β_1] = [0.49, 1.16] and [α_2, β_2] = [0.309, 4.38] to be non-evolving with redshift. We revisit this assumption in section <ref>. When using mass and radius to predict velocity dispersion, the M_BH–σ relation becomes a function of both bulge mass and radius, therefore including an additional galaxy property in the mass estimation in contrast with M_BH–M_bulge. Because of this consideration of galactic radius, M_BH–σ implicitly incorporates the evolution of the M_*–R_eff relationship with redshift without defining an explicit redshift evolution <cit.>.
Because SMBH mass is derived from host bulge properties, we assigned each star-forming galaxy a bulge mass fraction of 40% of its total stellar mass. Our choice of bulge mass fraction has an effect on the degree to which the two relationships disagree, but our overall results do not change when using significantly higher or lower fractions. We also performed our analysis for each galaxy type separately, so results including only quiescent galaxies are not affected by this choice.
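The sketch below applies the two relations with the parameter values quoted above, together with the assumed 40% bulge fraction for star-forming hosts (treating quiescent galaxies as pure bulges is our assumption here); the function names are illustrative.

```python
import numpy as np

A1, B1 = 0.49, 1.16      # M_BH-M_bulge local parametrization quoted above
A2, B2 = 0.309, 4.38     # M_BH-sigma

def mbh_from_bulge(m_bulge):
    """M_BH in Msun from bulge stellar mass in Msun."""
    return 1e9 * A1 * (m_bulge / 1e11) ** B1

def mbh_from_sigma(sigma_kms):
    """M_BH in Msun from velocity dispersion in km/s."""
    return 1e9 * A2 * (sigma_kms / 200.0) ** B2

def bulge_mass(m_star, star_forming):
    """Bulge mass: 40% of M_* for star-forming hosts, all of M_* otherwise (assumed)."""
    return np.where(star_forming, 0.4 * m_star, m_star)
```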
§.§ Number Density Functions
The stellar mass function (SMF) of galaxies is a useful tool for understanding galaxy formation and evolution. The SMF informs us of the total number of galaxies per unit volume per logarithmic mass interval as a function of stellar mass. Though stellar mass and luminosity are the most commonly discussed, this type of number density function, Φ(X), can be constructed for virtually any galaxy property.
There are several ways of estimating Φ(X), but the most straightforward is Schmidt's 1 / V_max method <cit.>. We calculate the density functions as
V_max , i=Ω/3(r(z_max , i)^3-r(z_min , i)^3)
and
Φ(X) = 1/Δ X∑_i 1/V_max , i ,
where X represents the property in question, e.g., stellar mass, velocity dispersion, or SMBH mass and V_max , i is the co-moving volume between redshifts z_min , i and z_max , i. The solid angle subtended by the survey is represented by Ω, and ΔX is the width of the bins. This method is functionally similar to a histogram making it computationally efficient and it is robust against bias as long as no clustering is present <cit.>. Given the high completeness of the data sets we use, this is sufficient for our purposes.
Because Φ(X) is a function of redshift, it is common to split the data into narrow redshift bins and fit each independently. We used the survey areas listed in <cit.> to calculate our co-moving volume for each redshift bin.
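An illustrative implementation of this estimator is sketched below, using astropy's built-in WMAP9 cosmology (close to, though not exactly, the parameters adopted above) and per-object redshift limits; the function signature is our own.

```python
import numpy as np
from astropy.cosmology import WMAP9

def phi_vmax(log_x, z_min, z_max, omega_sr, bin_edges):
    """1/Vmax number density per dex for log10 property values `log_x`."""
    r_max = WMAP9.comoving_distance(z_max).value    # Mpc, per object
    r_min = WMAP9.comoving_distance(z_min).value
    v_max = (omega_sr / 3.0) * (r_max**3 - r_min**3)
    phi = np.zeros(len(bin_edges) - 1)
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        sel = (log_x >= lo) & (log_x < hi)
        phi[i] = np.sum(1.0 / v_max[sel]) / (hi - lo)
    return phi
```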
The number of galaxies within a given volume is expected to undergo an overall decline with increasing redshift and with increasing extremity of the property in question (e.g., very high mass or luminosity). Distributions of Φ(X) of this sort are well described by Schechter functions. The logarithmic form of a “single Schechter”, which we used for all our fitting, is described by
Φ(Y)=ln (10) ϕ_* 10^(Y-Y_c)(α_s+1)exp(-10^Y-Y_c),
where Y is the base 10 logarithm of the property in question, i.e. Y = log_10(X), Y_c is the (log) characteristic value of said property, α_s is the slope of the lower power-law, and ϕ_* is density normalization. Especially in the local universe, a “double Schechter” is sometimes used which is simply the sum of two single Schechter functions.
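For reference, the logarithmic single Schechter form can be transcribed directly; a double Schechter is then the sum of two such components (sharing Y_c here, a common but not universal convention).

```python
import numpy as np

def log_schechter(Y, phi_star, Y_c, alpha_s):
    """Single Schechter function of the log10 quantity Y (Eq. above)."""
    t = 10.0 ** (Y - Y_c)
    return np.log(10.0) * phi_star * t ** (alpha_s + 1.0) * np.exp(-t)

def double_schechter(Y, phi1, alpha1, phi2, alpha2, Y_c):
    """Sum of two single Schechter components sharing a characteristic value."""
    return log_schechter(Y, phi1, Y_c, alpha1) + log_schechter(Y, phi2, Y_c, alpha2)
```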
After obtaining values for our stellar mass functions, we compared our estimates to those obtained in <cit.>. We compiled the data into one figure and over plotted our SMF estimates and found that we were in good agreement (Fig. <ref>).
We repeated the same process to produce number density functions for velocity dispersion and SMBH mass predicted from both and . Our parameterization for the Schechter fits was found using PyMC <cit.>, a modeling software that uses Markov chain Monte Carlo sampling. The priors we used are listed in Table <ref>. We used four chains with 15,000 total steps, the first 5,000 of which were tuning steps. In all cases, the data were not fitted for values below the completion limits. We determined our completion limits for stellar mass from <cit.> and converted these into SMBH mass completion limits using the relation. Velocity dispersion completion limits are informed by the aforementioned limits on stellar mass and the completion limits for effective radius used by <cit.>. A more complete breakdown can be found in Table <ref>.
Error estimates were obtained by performing 100 fits to the data where we introduced random scatter into the data based on the errors of the values involved in the fits and the known intrinsic scatter of the relations used for our inferred quantities. Cosmic variance estimates were obtained following the methods outlined in <cit.>. Because accurate determinations of cosmic variance for velocity dispersion and SMBH mass would require a large volume of in-depth measurements for each of these values, an exact estimate does not exist. For these values we approximated the cosmic variance based on the values we calculated for stellar mass.
§ RESULTS
In Figures <ref>, <ref>, <ref>, and <ref> we present the number density functions of galaxy stellar mass, MFP velocity dispersion, and inferred SMBH mass from both the M_BH–M_bulge and M_BH–σ scaling relations.
§.§ Stellar Mass and Velocity Dispersion Functions
Our stellar mass and velocity dispersion function fits to all galaxies are shown in Figures <ref>–<ref>. The stellar mass functions (Figs. <ref> and <ref>) are described here by a double Schechter function at all redshifts. At the highest redshifts the data are well described by a single Schechter function, which is consistent with others' results <cit.>, but we chose to fit these with a double Schechter to maintain consistency within our results across all redshifts. There is a general decline in the total number density between the lowest and highest redshifts; the number of galaxies with log(M_*/M_⊙) > 11.5 is 8.3 times higher at z̅ = 0.65 than at z̅ = 2.8. The distribution Φ(M_*) drops off steeply for masses greater than log(M_*/M_⊙) ∼ 11, but the slope for lower masses is much flatter, with no clear trends across time.
The velocity dispersion functions (Figs. <ref> and <ref>) are parameterized by a single Schechter function across all redshifts. We see an overall decrease in number density of galaxies as redshift increases. There appears to be a mild change in the slope of the distribution that is steepest at z̅ = 0.65 and is at its shallowest for 1.6 < z̅ < 2.0. This flattening of the curve leads to an apparent broadening of the whole distribution, though we cannot be sure if the flattening of the values to the left of the completion limits are reliable. Perhaps the most notable results of these fits are the evolution of the characteristic velocity dispersion which increases from 1.6 to 1.9 over the entire redshift range. An increase of the characteristic velocity dispersion suggests that galaxy velocity dispersion is increasing with increasing redshift.
The large difference between the results of <cit.> and our functions (Fig. <ref>) has several possible explanations. First, their results consider only quiescent galaxies while ours are for both galaxy types combined. Number density functions of separate galaxy types often have shapes different from the combined functions, as we find in this paper and as was found by, e.g., <cit.>. There is also a large gap in cosmic time between their z̅ = 0.07 results and our lowest redshift sample at z̅ = 0.65, corresponding to approximately 5.2 Gyr. Because we see lower characteristic velocity dispersions at lower redshift, it is possible that the relation evolves over this time. Additionally, <cit.> found an increase in the number of galaxies with high velocity dispersions for z > 0.6, which could indicate an evolution in the intrinsic scatter of the relation they used to infer velocity dispersion. Though they used dynamical mass to infer virial velocity dispersions, which differs from what we do here, a similar scatter evolution could contribute to this difference, since we include the measured intrinsic scatter from <cit.>, which was measured at z ∼ 0.8.
§.§ Supermassive Black Hole Mass Functions
We show histograms of resulting distributions of SMBH masses in Figure <ref>. As we look back to earlier times the shape of the histogram of SMBH masses inferred from velocity dispersions flattens out leading to a lower peak, but a much thicker and longer tail than for SMBH masses inferred from stellar mass. These same data are shown in Figure <ref> showing only our quiescent galaxy population. We see the same trends here despite having far fewer galaxies; the high mass tail of the distribution is larger for masses predicted from velocity dispersion than from stellar mass. It is from these same data that we constructed the mass functions for each relationship for star-forming, quiescent, and combined galaxy types.
If our results are to be trusted, they should be independent of survey choice. We can compare CANDELS to the LEGA-C survey for quiescent galaxies between 0.5 ≲ z ≲ 1. In this redshift range the two surveys have comparable coverage, and even though our results are robust to the choice of bulge fraction, we see these same results even when restricting to quiescent galaxies only. When repeating our analysis on LEGA-C (Fig. <ref>), we get SMBH mass distributions that have all of the same properties we have highlighted. Namely, M_BH–σ predicts a larger number of SMBHs with masses greater than ∼ 10^9 M_⊙ and also extends to higher masses than M_BH–M_bulge. The fact that we find similar trends between both data sets with quiescent galaxies suggests that our results are both reproducible and unbiased by survey choice or bulge stellar mass fraction.
The resulting SMBH mass functions for both galaxy types as well as quiescent and star-forming galaxies are shown in Figures <ref>, <ref>, and <ref>, respectively. Here median fits and errors are presented in the same way as for the stellar mass and velocity dispersion fits. We find that, independent of galaxy type, there are significant differences between the predicted SMBH masses from M_BH–M_bulge and M_BH–σ, especially for redshifts above 1. For all redshift bins higher than z ∼ 1, M_BH–σ predicts a notably higher number density of large (M_BH > 10^9 M_⊙) SMBHs. While both relationships undergo a decrease in total number density with increasing redshift, the overall predictions between high and low masses evolve. The number density of the highest mass black holes derived from stellar mass does not change significantly. The slope of the distribution around M_BH ∼ 10^8 M_⊙ and higher remains consistent across all snapshots until a slight flattening in the two highest redshift bins. The characteristic logarithmic SMBH mass is also highest at these two times, while it does not follow a noticeable trend in either direction for redshifts below z̅ = 2.5. The characteristic logarithmic SMBH mass for those derived from velocity dispersion increases from 9.8 to 10.8 over the range of redshifts considered here. This change is related to the similar increase we see in characteristic velocity dispersion. The highest SMBH masses in this distribution tend towards higher values with increasing redshift, which leads to a growing division further back in time.
Especially at z ∼ 3, the distributions of SMBH masses inferred by either galaxy stellar mass or velocity dispersion do not agree. This tension is apparent when considering galaxy types both separately and together and is present across at least two different high-redshift samples (Fig. <ref>). The bulk of the distributions overlap (Fig. <ref>) and so these relationships are suggesting similar populations of SMBHs for the majority of galaxies. The amplitude of the GWB is most impacted by the largest SMBHs, where the distributions differ most significantly, so an accurate picture of the high-mass population is necessary. Further study and high redshift tests of the MFP are needed.
§ DISCUSSION
We derive the distribution of SMBH mass for 0 < z < 3. The masses we used were inferred from either the host bulge stellar mass or velocity dispersion, the latter being inferred from host stellar mass and radius using the MFP. When comparing these mass distributions we find that using MFP velocity dispersion implies a greater number density of SMBHs at the high mass end, particularly for M_BH > 10^9 M_⊙.
Throughout the course of this work we checked our methods against others (Figs. <ref>, <ref>, <ref>) and we were able to consistently reproduce their results and/or measured values. We additionally demonstrated that our results are not limited or biased by our choice of sample. Because higher numbers of high-mass SMBHs are predicted by M_BH–σ even when only considering quiescent galaxies, we can also be confident that our choice of bulge fraction is not the reason for this difference. Additionally, these results are not sensitive to which version of the SMBH mass scaling relationship is used. When comparing to other forms of these relations, such as those determined by <cit.> or <cit.>, we found no significant differences in the respective SMBH mass distributions. Finally, assuming larger values for the intrinsic scatter in the MFP and SMBH mass relations does not impact our predicted values unless we assume non-physically large scatter.
Given the known observed evolution of galaxy properties, it is not possible for the z = 0 M_BH–M_bulge and M_BH–σ relations to both be correct and non-evolving at high redshift. There have been observational studies investigating the evolution of black hole scaling relations, with sometimes contradictory results <cit.>. A recent study by <cit.> uses results from HETDEX and takes into account a number of potential observational biases, including the potential selection bias discussed in <cit.>; they find a 0.52 ± 0.14 dex offset between the local relation and the relation at z ∼ 2. This alone, however, does not entirely bridge the gap we find at z ∼ 2; moreover, their results primarily consider SMBHs with masses lower than 10^9 M_⊙, so their applicability is limited when comparing to the population of large SMBHs we discuss here. Very little analysis has been performed for M_BH–σ in this manner, though <cit.> found no evolution in M_BH–σ using observational data out to z ∼ 1. Without a high-redshift survey of velocity dispersions for galaxies with known SMBH mass, we have extremely limited insight into how this relation may or may not evolve.
If the observed lack of evolution in the MFP out to redshift 1 is a robust result, we would expect that any evolution in the MFP velocity dispersions out to this same redshift would reflect a physical reality. Because we see an increasing difference between the distribution of SMBH masses predicted from bulge mass and velocity dispersion even below z = 1, it is likely that this change is because one (or both) of these scaling relationships evolve with redshift.
We find an inescapable tension between predictions made with M_BH–σ versus M_BH–M_bulge that cannot be otherwise explained given our modest assumptions. This difference in the number density of high-mass SMBHs has several implications for predictions, such as for the sizes of galactic cores. Galaxies with more massive central SMBHs have larger cores <cit.>, and so using M_BH–σ may predict a population of galaxies with larger cores than when using M_BH–M_bulge.
Our results indicate that an analysis similar to <cit.> would point to a larger GWB amplitude when using M_BH–σ. For masses above 10^9 M_⊙ we can make an approximate calculation of the GWB amplitude suggested by these number densities. Following the relation between number density and GWB amplitude given in equation (<ref>), we see that the amplitude depends on number density as h_c ∝ N_0^1/2. From this, the ratio of the amplitudes predicted by M_BH–σ versus M_BH–M_bulge is the square root of the ratio of the number densities of SMBHs predicted from each relation, i.e.,
h_c(σ)/h_c(M_bulge) = √(N_0(σ)/N_0(M_bulge)).
Using our reported number densities (Table <ref>) we find that using M_BH–σ implies a higher amplitude by a factor of 2.1 on average across 0.5 < z < 3.0.
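As a quick check of this arithmetic (with placeholder number densities rather than the tabulated values):

```python
import numpy as np

# hypothetical number densities of M_BH > 1e9 Msun black holes [Mpc^-3]
n0_sigma, n0_bulge = 4.4e-4, 1.0e-4
print(np.sqrt(n0_sigma / n0_bulge))   # ~2.1, i.e. a factor like the average quoted above
```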
From the 15 year results of NANOGrav's PTA, the offset between the signal amplitude and the highest value predictions for the GWB amplitude is at least a factor of 2 though potentially more <cit.>. An in-depth analysis of how our results affect predictions for the GWB will be presented in future work, but the initial estimate we provide here suggests an origin for this difference. It is uncertain at this point whether velocity dispersion or stellar mass is necessarily a better SMBH mass indicator. It is clear, however, that further investigation is necessary so that we can further understand why these relations differ so greatly.
Future work investigating our findings is necessary. A good test of the MFP would involve obtaining velocity dispersion measurements for a sub-sample of the galaxies in this survey at z > 1; even with a relatively small sample, it would be possible to quantify the accuracy of the MFP at z > 1. Measured velocity dispersion estimates are the first step for evaluating the potential evolution of the MFP, but to thoroughly analyze how SMBH mass scaling relations may change with time, dynamical mass estimates at z > 1 are needed. 30-m class telescopes, suitable for high-redshift observations, make this feat a realistic goal and will expand our understanding of how galaxies and their SMBHs evolve <cit.>. Aside from tests of the results we show here, extending our work to include a robust analysis of lower mass (M_BH < 10^8 M_⊙) black holes will inform our predictions for the Laser Interferometer Space Antenna (LISA) mission, which will be vital in our characterization of black hole seed formation. With upcoming missions and the continued refinement of GWB detection efforts, a full picture of the potential evolution of galaxy SMBH scaling relations can emerge.
§ SUMMARY
In this paper we examined the difference between SMBH mass predictions when assuming M_BH–M_bulge versus M_BH–σ. To do this we used the three-parameter relationship between galaxy stellar mass, effective radius, and velocity dispersion to infer velocity dispersion for galaxies up to z = 3. We created SMBH mass density functions for all galaxies in our sample for 0.5 < z < 3 and compared how using stellar mass versus MFP velocity dispersion affected the inferred SMBH demographics. We found that the number of SMBHs with masses M_BH > 10^9 M_⊙ differs between these relations, especially for z > 1. In particular, we find that M_BH–σ predicts a greater number of these high-mass SMBHs. Our results suggest that the relationship between SMBH mass and stellar mass and/or velocity dispersion must evolve at high redshift. Assuming the local relations to be constant across time leads to substantial differences when extrapolated beyond z = 0.5, and this difference must be reconciled.
Our results do not inform us of the accuracy of either relation. It remains unclear whether one or both relations are evolving. Recent work has found that the stellar mass to SMBH mass relation may have evolved at least since z ∼ 2 <cit.>, but no evolution has been investigated for velocity dispersion. Circumstantial evidence from, e.g., red nugget galaxies points toward M_BH–σ being a more accurate predictor of SMBH mass at these higher redshifts <cit.>. Prediction and interpretation of the GWB from PTAs rely heavily on the assumptions made for the SMBH demographics at high redshift. Here we have shown that the choice of scaling relation used to infer high-redshift SMBH mass can lead to meaningfully different demographics. If we are to refine our ability to explore the physics of galaxy and SMBH evolution at z > 1, we must also re-examine how the local scaling relations may evolve.
§ ACKNOWLEDGEMENTS
The authors would like to thank Eric Bell and Rachel Bezanson for their helpful conversations. We additionally thank Anna de Graaff, Joel Leja, and Arjen van der Wel for readily sharing their knowledge and data with us.
CM acknowledges financial support through the University of Michigan’s Rackham Merit Fellowship Program.
JS is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2202388.
We thank the anonymous referee for their insightful comments.
Anishinaabeg gaa bi dinokiiwaad temigad manda Michigan Kichi Kinoomaagegamig. Mdaaswi nshwaaswaak shi mdaaswi shi niizhawaaswi gii-sababoonagak, Ojibweg, Odawaag, minwaa Bodwe’aadamiig wiiba gii-miigwenaa’aa maamoonjiniibina Kichi Kinoomaagegamigoong wi pii-gaa aanjibiigaadeg Kichi-Naakonigewinning, debendang manda aki, mampii Niisaajiwan, gewiinwaa niijaansiwaan ji kinoomaagaazinid. Daapanaming ninda kidwinan, megwaa minwaa gaa bi aankoosejig zhinda akiing minwaa gii-miigwewaad Kichi-Kinoomaagegamigoong aanji-daapinanigaade minwaa mshkowenjigaade.
The University of Michigan is located on the traditional territory of the Anishinaabe people. In 1817, the Ojibwe, Odawa, and Bodewadami Nations made the largest single land transfer to the University of Michigan. This was offered ceremonially as a gift through the Treaty at the Foot of the Rapids so that their children could be educated. Through these words of acknowledgment, their contemporary and ancestral ties to the land and their contributions to the University are renewed and reaffirmed.
§ DATA AVAILABILITY
The data generated through this project will be deposited into Deep Blue Data, the University of Michigan's institutional data repository. Data that we supply but is based on formatted versions of others' work will include attribution and notices that they are downstream products of others' work.
mnras
§ FIT PARAMETERS
The posterior fit parameters for stellar mass, velocity dispersion, and black hole mass functions are presented in the tables <ref>, <ref>, <ref>, and <ref> found here. The errors listed are 68% confidence intervals. Because of degeneracy between some of the fit parameters, e.g., ϕ_* and α, the errors reported here are the confidence intervals on a given variable and are not the same as the 68% confidence fits shown by the darker shaded region in each plot.
|
http://arxiv.org/abs/2307.04132v2 | 20230709090426 | Reasoning over the Behaviour of Objects in Video-Clips for Adverb-Type Recognition | [
"Amrit Diggavi Seshadri",
"Alessandra Russo"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.SC"
] |
In this work, following the intuition that adverbs describing scene-sequences are best identified by reasoning over high-level concepts of object-behavior, we propose the design of a new framework that reasons over object-behaviours extracted from raw video clips to recognize the clip's corresponding adverb-types. Importantly, while previous works for general scene adverb-recognition assume knowledge of the clip's underlying action-types, our method is directly applicable in the more general problem setting where the action-type of a video-clip is unknown. Specifically, we propose a novel pipeline that extracts human-interpretable object-behaviour-facts from raw video clips and propose novel symbolic and transformer based reasoning methods that operate over these extracted facts to identify adverb-types. Experiment results demonstrate that our proposed methods perform favourably against the previous state-of-the-art. Additionally, to support efforts in symbolic video-processing, we release two new datasets of object-behaviour-facts extracted from raw video clips - the MSR-VTT-ASP and ActivityNet-ASP datasets.
§ INTRODUCTION
In recent years, the task of recognizing the type of actions being performed in video-clips has gained much of the vision community's attention <cit.>. It is a relatively well studied problem, with practical applications in smart-home systems and robotics. Current state-of-the-art methods for action-type recognition fuse predictions from two-streams of convolutional neural networks (CNNs). One stream predicts action-type probabilities from stacked image-frames of the input video-clip while another stream predicts probabilities from stacked frames of the clip's optical-flow. The output from these two streams are fused together for inference. In particular, the Inflated 3D Convolutional Network (I3D) architecture <cit.> has demonstrated much success for action-type recognition by employing this two-stream paradigm with 3D-convolutional operations.
In contrast to action-type recognition - for which numerous architectures have been proposed, the problem of adverb-type recognition is less well explored. Adverbs further describe the nature and execution of generic action types, providing additional detail regarding intent, meaning and consequences. A device recording recipes in a kitchen for example might deem a cook in the action of “stirring" a pot slowly and completely to be performing a required and delicate step. The same action performed fast or partially on the other hand, might be of less consequence. Interestingly, adverbs can also prove useful even without any knowledge of the underlying action-type. We might for example deem all recordings that are performed slowly and completely to take precedence over partially executed work.
To our knowledge, there have been two architectures proposed to solve the task of adverb-type recognition in general-scene video clips <cit.>. However, these previous methods both assume the availability of ground-truth action-types as a prerequisite for adverb-type recognition and follow the trend set by previous action-recognition systems - encoding video clips using an I3D backbone. These practices make them unsatisfactory for two important reasons.
Firstly, ground-truth action-types are not usually known for raw video clips, making the previous methods inapplicable in many scenarios. One might attempt to compensate for this by using a pretrained action-type predictor to feed into the adverb-recognition model. However, doing so is a non-trivial and complex task - as previous methods <cit.> assume knowledge of over 100 distinct ground-truth action-type categories. Using predictions for more than 100 action-type categories invariably leads to incorrect and noisy input data - especially if the number of training samples is limited. Additionally, under such a bootstrapped framework, one would be forced to re-train both action-type and adverb-type predictors when new video-clips with new or unseen action-types emerge. Ideally, we would instead prefer to have a framework and model wherein adverb-type predictions are made without requiring any knowledge of the video-clip's action-types.
Secondly, while end-to-end black-box CNN models such as the I3D architecture have proved successful for action-type recognition, a key reason for this success has been the fact that CNNs excel at object recognition, and a video-clip's action-type is greatly constrained by the type of objects present within scenes. The same is not true for adverb-type recognition. Object-type may vary widely across different instances of the same adverb. A person cooking slowly for example presents a very different scene from a dog running slowly in a park. And while the use of optical-flow input does mitigate some of this problem by providing motion-related information, end-to-end CNN models fail to generalize well over such diverse scenes.
However, despite this added complexity of not being constrained by object-type, humans are usually able to easily identify adverb-types by reasoning over high-level concepts of object behaviour. Something happening slowly for example might be identified to mean that objects change very little between frames. Properties of other adverbs such as partially or completely are less straightforward to define, but again seem easier to identify by reasoning over higher-level concepts of object-behaviour than they are to identify by pattern-matching over diverse scenes that vary widely.
In this work, following the intuition that adverbs are best described by reasoning over higher-level concepts, we propose a novel framework that (1) extracts discrete facts of object-behaviour from raw video clips, (2) reasons over those extracted facts to produce high-level summaries of object-behaviour, and (3) predicts and aggregates adverb-types using down-stream models over those high-level summaries.
Importantly, unlike previous work for general scene adverb-recognition <cit.>, our framework does not assume any knowledge of the video-clip's action-type during training or inference - making it directly applicable to the more general problem setting wherein the action-type of a video clip is unknown. Our main contributions are summarized as follows:
* We propose the design of a novel action-free framework for adverb-type recognition in video clips - that extracts object-facts from raw video clips; reasons over those facts to learn high-level behaviour summaries; and makes predictions of adverb-types from those summaries.
* We propose a novel extraction phase for our framework that converts raw video clips to discrete Answer Set Programs (ASP) of facts - capturing information regarding objects moving within each clip. Using this new extraction phase, we release two new datasets of object-behaviour-facts - the `MSR-VTT-ASP' and the `ActivityNet-ASP' datasets.
* For the reasoning phase of our framework, we propose novel symbolic and transformer-based reasoning methods over our extracted ASP-facts to obtain higher-level summary vectors of object-behaviour.
* Finally, we evaluate the performances of the different symbolic and transformer-based-architectures that we propose within our framework, and make a comparison against the previous state-of-the-art.
Experiment results demonstrate that our new methods for adverb-type recognition perform favourably against the previous state of the art on video-clips from the MSR-VTT and ActivityNet datasets, providing a new means for adverb-type recognition when the action-type of a video-clip is unknown.
§ RELATED WORK
Action-Type Recognition: Simonyan et al. <cit.> was first to propose a two-stream 2D CNN network for action-type recognition - that employs a separate stream to process image-frames and a separate stream to process their optical flow. This two-stream method outperformed the previous method of predicting actions from features pooled across video-frame snips <cit.> by a large margin. Subsequently, 3D CNNs <cit.> were shown to outperform their 2D counterparts by better preserving temporal information across input frame sequences, and building on these ideas, the Two-Stream Inflated 3D CNN (I3D) network was proposed <cit.> - using two streams of 3D convolutional networks over stacked frames of a video clip's image frames and optical flow. This I3D model significantly outperformed the previous methods and is employed as the backbone of a number of state-of-the-art action-type recognition systems <cit.>. However, as pointed out earlier, the two-stream and 3D CNN paradigms operate end-to-end, directly over raw pixel maps of image frames or optical flow and fail to cleanly separate out and reason over individual object-behaviours across time-steps.
To reason about objects in scenes or possible next-actions, Rueda et al. <cit.> proposed the use of a Computational Causal Behaviour Model (CCBM) alongside a two-stream 3D CNN for action-type detection. While this method does involve reasoning components, it is very different from our proposed approach. While they maintain a state-space-model alongside a conventional two-stream network for better interpretability, our framework automatedly learns summary-representations of object-behaviour as a means to adverb-type recognition. Further, their state-space-model learning methodology requires the availability of predefined action precondition and effect templates. Adverb-types on the other hand are not generally associated with preconditions or post-conditions, and our system does not require any such knowledge.
Adverb-Type Recognition: Pang et al. <cit.> was first to explore the problem of adverb-type recognition in video clips, introducing the “Adverbs Describing Human Actions" (ADHA) dataset and employing a hybrid two-stream CNN along with expression detectors and human pose-estimates. However, their work addresses a problem setting different from the one that we are interested in. The ADHA dataset is focused on adverbs for human subjects, and places special focus on human pose and expression informed adverbs. We are interested in scenes comprising more general content that may not be human. Doughty et al. <cit.> scaled up the problem of adverb-type recognition to general-scene video clips, and released adverb-annotations for subsets of video-clips from the HowTo100M<cit.>, VATEX<cit.>, MSR-VTT<cit.> and ActivityNet<cit.> datasets while proposing new architectures for the task. However, as mentioned earlier, both of these prior works <cit.> assume knowledge of the video-clip's underlying action-types as a prerequisite for adverb-type recognition (with over 100 distinct ground-truth action-type categories), and they encode video-clips using an I3D backbone without attempting to reason over individual object-behaviours. In our work, we use the adverb-annotations released by Doughty et al. <cit.> for experiments on video-clips from the MSR-VTT and ActivityNet datasets - datasets for which raw video files are publicly available.
§ METHOD
Our adverb-type recognition framework (Figure <ref>) comprises three phases - an Extraction phase, a Reasoning phase and a Prediction phase. The Extraction phase extracts separate discrete, and human-interpretable object-behaviour-facts for each object detected to be of interest within the video clip. The Reasoning Phase summarizes those facts across time-steps into summary vectors for each object. The Prediction Phase makes downstream classifications, using separate SVMs to classify between
each adverb and its antonym. Finally, as we obtain separate SVM predictions for each object detected to be of interest in a clip, we aggregate results by majority-voting.
§.§ Extraction Phase
Figure <ref> shows a depiction of our extraction pipeline. Given a raw video clip, we first employ MaskRCNN <cit.> over delayed-captures of static frames from the video clip's image sequence - considering every fifth frame of the original clip. In doing so, we avoid processing successive frames between which very little changes. MaskRCNN gives us a collection of predicted object-types and their corresponding bounding boxes with confidence scores between 0 and 1. We ignore all detections made with a confidence score less than 0.3, and flag all detected patches with low confidence scores between 0.3 and 0.5 as `unknown' object-types[ To simplify our explorations and to reduce noise in the input data, in this work, we ignore `unknown' type object-behaviours detected by our extraction phase - leaving reasoning over those less-confident object-facts as scope for future work.]. Patches detected with a confidence score above 0.5 are recorded along with their predicted object-types.
To capture properties of motion for each of these detected object patches, we compute the pixel-wise Gunnar-Farneback optical flow <cit.> between consecutive delayed capture frames. These per-pixel optical-flow values are averaged within each detected object-bounding-box to give us a single average numeric value of optical-flow magnitude and a single average numeric value of optical-flow angle for each detected object-patch. To filter these numerous detections, we then slide a non-overlapping sliding window over the delayed capture frames (each window detection corresponding to a single time-step), and we assume that (1) of the objects detected in a frame, only objects moving faster than the frame's average are of interest for adverb-type recognition, and (2) of those filtered cases, only objects detected consistently in at least half the frames of the sliding window can be considered important enough for adverb-recognition. It is necessary for us to make these assumptions/choices to reduce the complexity of the problem faced by subsequent phases. Automatedly learning optimal property-extractions for adverb-type recognition is scope for future work and poses a significantly more challenging task.
A consistently detected object of interest has its properties averaged across a time-step's window, and these properties are recorded as Answer Set Programming (ASP) <cit.> facts as shown in Figure <ref>, where “detected(person, 2)" means that an object of type `person' is detected at time-step 2. We also capture local temporal properties of objects such as `operation-area' and `movement-in-place' at each time step, and record the region of the frame `cell_occupancy' that the object occupies for that time-step. To simplify processing, optical-flow angles are bucketed into discrete directions north(n), north-east(ne), east(e), etc.; while numeric-values besides optical-flow magnitude are thresholded to very_small, small, medium, large and very_large. The implementation details of these extracted predicate properties are discussed in greater detail in Appendix Section 6.1.
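To make the extraction step above concrete, the sketch below shows how per-frame detection, per-box optical-flow averaging, and fact emission could be implemented. It is an illustrative reconstruction rather than the authors' released code: the use of torchvision's pretrained Mask R-CNN, OpenCV's Farneback implementation with default-style parameters, and the exact predicate spellings are all assumptions.

```python
# Illustrative sketch of the extraction phase (not the authors' implementation).
import cv2
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

def detect_objects(frame_bgr, skip_below=0.3, known_above=0.5):
    """Run Mask R-CNN on one frame; low-confidence patches become 'unknown'."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    detections = []
    for label, score, box in zip(out["labels"], out["scores"], out["boxes"]):
        s = float(score)
        if s < skip_below:
            continue                                    # ignored entirely
        name = "unknown" if s < known_above else f"class_{int(label)}"
        detections.append((name, box.tolist()))         # box = [x0, y0, x1, y1]
    return detections

def frame_flow(prev_gray, next_gray):
    """Dense Gunnar-Farneback optical flow between two delayed-capture frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def box_flow_stats(flow, box):
    """Average flow magnitude and angle (degrees) inside one bounding box."""
    x0, y0, x1, y1 = [int(v) for v in box]
    mag, ang = cv2.cartToPolar(flow[y0:y1, x0:x1, 0], flow[y0:y1, x0:x1, 1],
                               angleInDegrees=True)
    return float(mag.mean()), float(ang.mean())

def emit_facts(name, time_step, mag, sector):
    """Format one filtered object's window-averaged properties as illustrative ASP facts."""
    return [f"detected({name},{time_step}).",
            f"magnitude({name},{int(mag)},{time_step}).",
            f"angle({name},{sector},{time_step})."]
```

Mapping class ids to human-readable names and the bucketing of magnitudes, angles, and areas (Appendix Section 6.1) are omitted here for brevity.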
Our video-to-ASP extraction pipeline can be used to convert raw video-clips to ASP-programs in an online-fashion. However, to simplify our training and evaluation procedures and to further research efforts in symbolic and neuro-symbolic video-processing, we instead preprocess video-clips taken from the MSR-VTT and ActivityNet datasets using our proposed pipeline in an offline-manner and release new `MSR-VTT-ASP' and `ActivityNet-ASP' datasets consisting of extracted object-behaviour-facts and relevant background knowledge over predicate properties. Details of these new datasets are discussed further in Section <ref>.
§.§ Reasoning Phase
§.§.§ Symbolic-Based Reasoning
In this work, we first consider employing the FastLAS inductive learning method <cit.> to automatedly learn indicator rules that plausibly define adverb-types. However, automatedly learning symbolic rules that reason over more than one time-step is an extremely challenging task (owing to the large number of possible variable groundings of rules). Rather than attempting to overcome the multi-time-step challenge, in this initial exploration, we consider learning a large number of simple single-time-step rules, that compositionally might inform overall adverb-type. In particular, for magnitude, angle, and operation_area predicates, we focus on learning range-rules that define upper or lower bounds of an object's predicate-properties at single time-steps, and for the cell-occupancy predicate, we focus on learning single-time step rules that outline rough left/right or up/down placement of an object within the frame.
As an example, Figures <ref> and <ref> depict FastLAS-learnt ASP range-rules that classify between categories of `adverb_A' and `antonym_A' motion for some collection of object-behaviour facts. According to these indicator rules, objects moving with an optical-flow magnitude between five and twenty at some time-step are considered to exhibit `adverb_A' behavior, while those objects moving with optical-flow magnitude outside those limits are considered to exhibit `antonym_A' behavior. These rules might not hold universally; however, they are identified by FastLAS as plausible explanations for some given batch of input behaviour-examples.
Similarly, from the training data, we learn indicator rules classifying between each (adverb, antonym) pair. To do so we sample small balanced batches of object-behaviour-facts from the training data - choosing 10 randomly sampled object-behaviours for each adverb, and 10 randomly sampled object-behaviours for its antonym. We then run FastLAS separately over each balanced-batch along with common background-knowledge to obtain a large number of batch-wise plausible adverb/antonym indicator-rules over predicate-properties (such as those rules shown in Figure <ref>).
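A minimal sketch of this batch-wise learning loop is given below. The data layout (dicts holding an adverb label and a string of ASP facts), the example-identifier form of the #pos statements, and the assumption that FastLAS is invoked as a command-line binary named FastLAS are illustrative choices, not details taken from the paper.

```python
# Sketch of balanced-batch indicator-rule learning with FastLAS (assumed CLI usage).
import random
import subprocess
import tempfile

def learn_batch_rules(behaviours, adverb, antonym, background, bias, batch_size=10):
    pos = [b for b in behaviours if b["label"] == adverb]
    neg = [b for b in behaviours if b["label"] == antonym]
    batch = random.sample(pos, batch_size) + random.sample(neg, batch_size)

    parts = [background, bias]
    for i, b in enumerate(batch):
        included = b["label"]
        excluded = antonym if included == adverb else adverb
        parts.append(f"#pos(e{i}, {{{included}}}, {{{excluded}}}, {{\n{b['facts']}\n}}).")

    with tempfile.NamedTemporaryFile("w", suffix=".las", delete=False) as task:
        task.write("\n".join(parts))
        path = task.name

    result = subprocess.run(["FastLAS", path], capture_output=True, text=True)
    # Keep only non-trivial rules, i.e., those with body conditions.
    return [line.strip() for line in result.stdout.splitlines() if ":-" in line]
```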
After all such single-time-step batch-plausible indicator-rules have been learnt for each adverb vs antonym task across the training data, we use those symbolic ASP rules to summarize object behaviours. Specifically, for an object's collection of behaviour-facts, we assign a 1 for an indicator-rule if that rule logically-fires for the given object's behaviour-facts, and we assign a 0 otherwise - so that from our collection of indicator-rules we obtain a vector of 0s and 1s (such as [1,1,1,0,1,1,...]) for each object-behaviour. All object-behaviours are converted in this manner for each adverb vs antonym task, and those vectors are used as rough behaviour-summaries for downstream adverb-type recognition. (Implementation details are further discussed in Appendix Section 6.3).
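The conversion from learnt indicator rules to 0/1 summary vectors can be sketched with the clingo Python API as below; the head-atom naming is an assumption, and the check simply asks whether a rule's head becomes derivable once the rule is added to an object's facts and the background knowledge.

```python
# Sketch: one binary feature per indicator rule, computed with the clingo Python API.
import clingo

def rule_fires(rule, object_facts, background, head_name):
    """True if the rule's head atom is derivable from the object's facts."""
    ctl = clingo.Control()
    ctl.add("base", [], "\n".join([background, object_facts, rule]))
    ctl.ground([("base", [])])
    fired = False

    def on_model(model):
        nonlocal fired
        if any(atom.name == head_name for atom in model.symbols(atoms=True)):
            fired = True

    ctl.solve(on_model=on_model)
    return fired

def summary_vector(indicator_rules, object_facts, background):
    """indicator_rules: list of (rule_text, head_name) pairs for one adverb/antonym task."""
    return [int(rule_fires(rule, object_facts, background, head))
            for rule, head in indicator_rules]
```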
§.§.§ Transformer-Based Reasoning
As an alternative to our single-time-step based symbolic-reasoning, we also propose multi-time-step transformer-based reasoning. We start by flattening the ASP-format object-behaviour properties detected by our Extraction Phase (as shown in Figure <ref>). We get rid of unnecessary syntactical detail and special characters that might otherwise confuse a sentence tokenizer, and record object-type only once per time-step to avoid redundancy. We also eliminate the explicit time-stamps (1,2,3...) associated with each logical fact. We are able to do this, provided that we maintain the correct chronological ordering of detected object-properties since transformer models already have provisions allowing them to recognize and reason over the positional-ordering of words in sentences.
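A sketch of this flattening step is shown below. The fact format mirrors the illustrative predicates used in the extraction sketch earlier, so the regular expression and argument handling are assumptions rather than the authors' exact conversion.

```python
# Sketch: strip ASP syntax and time-stamps, preserving the chronological order of facts.
import re

FACT = re.compile(r"(\w+)\(([^)]*)\)\.")

def flatten_behaviour(asp_facts):
    """e.g. 'magnitude(person,12,3).' contributes the words 'magnitude 12'."""
    words = []
    for fact in asp_facts:                      # facts assumed already time-ordered
        match = FACT.match(fact.strip())
        if match is None:
            continue
        predicate, raw_args = match.group(1), match.group(2)
        args = [a.strip() for a in raw_args.split(",")][:-1]   # drop the time-step
        if predicate == "detected":
            words.extend(args)                  # object-type, once per time-step
        else:
            words.append(predicate)
            words.extend(args)
    return " ".join(words)
```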
Next, we consider Masked Language Modeling (MLM) <cit.> over object-behaviours to learn useful object-behaviour representations (Figure <ref>). In conventional MLM, some of the words of a natural-language sentence are masked out and a transformer model fitted with a shallow prediction-layer is trained to predict those masked words from the rest of the unmasked sentence - forcing the transformer to learn to encode sentence-structure and overall sentence meaning. Features output by the last transformer-layer are then typically extracted and used for related down-stream tasks such as text-classification. In the context of our object-behaviours problem setting, we directly extend this idea, by masking out some of the `value-words' that correspond to each object's particular behaviour (that might be a value of magnitude/angle/operation-area/etc. at some time-steps). We then train an MLM transformer model to predict those masked values from the rest of the unmasked object-behaviour (as shown in Figure <ref> [Specifically, we mask value-words with a probability of 20% and do not mask-out prompt-words such as `magnitude' and `angle' that occur in every example. Importantly, we also make sure not to mask object-types as they can be difficult to infer from object-motion, and forcing a model to predict them would detract from learning other behavioural-properties.]). In doing so, we force the transformer to learn to encode some overall meaning or dynamics of object-behaviour. The features output by the last transformer-layer are then used for down-stream adverb-type recognition.
In particular, we do not train the transformer model from scratch, but rather fine-tune a model that has been pretrained for natural-language MLM. We do this transfer-learning in order to exploit complex network reasoning properties that have already been learnt over very large datasets of natural language[Note: to limit the computational costs of fine-tuning, we truncate flat object behaviour inputs (Figure <ref>) at 512 tokens.].
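The fine-tuning stage can be sketched with the HuggingFace transformers library as below. Two simplifications relative to the description above should be noted: the variable flat_behaviours (a list of flattened behaviour strings) is assumed to exist, and the stock data collator masks all tokens uniformly at a 20% rate, whereas the paper masks only value-words - that selective masking would require a custom collator. The training hyper-parameters shown are placeholders.

```python
# Sketch: MLM fine-tuning of a pretrained DistilBERT over flattened behaviour strings.
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)  # 512-token cap

train_ds = (Dataset.from_dict({"text": flat_behaviours})   # assumed list of strings
            .map(tokenize, batched=True, remove_columns=["text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-behaviours",
                           num_train_epochs=3,              # assumed training budget
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                                  mlm_probability=0.2),
)
trainer.train()
```

Swapping the checkpoint name for an ALBERT checkpoint would cover the second architecture used in the experiments.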
Once we have fine-tuned our MLM model to reason over and unmask object-behaviours, we then use it to extract object-behaviour summary vectors. For each input object-behavior snippet in the dataset, we feed that flattened object-behavior to our trained transformer-model and extract the word-level vectors output by the transformer's final layer. Those word-level features are then averaged across the entire flattened sentence to give us a single summary vector - that encodes some overall multi-time-step object-behaviour information.
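Extracting the per-object summary vectors from the fine-tuned model can then be sketched as mean-pooling over the final hidden layer, with padding positions excluded via the attention mask:

```python
# Sketch: one summary vector per behaviour snippet (no masking at extraction time).
import torch

def behaviour_summary(flat_text, tokenizer, mlm_model):
    inputs = tokenizer(flat_text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = mlm_model(**inputs, output_hidden_states=True)
    last_hidden = out.hidden_states[-1][0]                  # (num_tokens, hidden_dim)
    mask = inputs["attention_mask"][0].unsqueeze(-1).float()
    return ((last_hidden * mask).sum(dim=0) / mask.sum()).numpy()
```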
§.§ Prediction Phase
Finally, each summary object-behaviour feature vector (output by either the single-time-step symbolic-reasoning approach or the multi-time-step transformer-reasoning approach) is then fed into a separate Support-Vector Machine (SVM) with rbf kernel for binary classification between each adverb-type and its antonym. At test time, the adverb-vs-antonym predictions from multiple object-behaviours detected to be of interest in a single clip are aggregated to make a single decision by a majority-vote.
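A sketch of this final stage is given below; the per-pair training data is assumed to be arrays of summary vectors with adverb/antonym labels, and the clip-level decision is a simple majority vote over that clip's object predictions.

```python
# Sketch: one rbf-kernel SVM per (adverb, antonym) pair, plus clip-level majority voting.
from collections import Counter
from sklearn.svm import SVC

def train_pair_classifier(summary_vectors, labels):
    """labels contain only the adverb or its antonym for this pair."""
    clf = SVC(kernel="rbf")
    clf.fit(summary_vectors, labels)
    return clf

def predict_clip(clf, clip_object_vectors):
    """Aggregate per-object predictions for one clip by majority vote."""
    votes = Counter(clf.predict(clip_object_vectors))
    return votes.most_common(1)[0][0]
```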
§ EXPERIMENTS
Datasets: We evaluate our method on subsets of the MSR-VTT <cit.> and ActivityNet <cit.> datasets, using adverb-annotations by Doughty et al. <cit.>. We process clips where both raw-footage and adverb-annotations are available using our Extraction Phase (Section <ref>), to obtain 1309 ASP-programs for our new MSR-VTT-ASP dataset and 1751 ASP-programs for our new ActivityNet-ASP dataset - where each program contains facts of multiple object-behaviours detected to be of interest within the corresponding video-clip, along with background knowledge of predicate properties (Appendix 6.1). Each program is labeled with one or more of 22 adverb-types (11 adverb/antonym pairs) according to the source clip's labels[We drop the loudly/quietly category since neither our method nor the previous work uses a clip's audio.]: (1) upwards/downwards, (2) forwards/backwards, (3) outdoor/indoor, (4) slowly/quickly, (5) gently/firmly, (6) out/in, (7) partially/completely, (8) properly/improperly, (9) periodically/continuously, (10) instantly/gradually, (11) off/on. This leaves us with 1674 unique (asp-program, adverb) pairs from MSR-VTT and 1824 unique (asp-program, adverb) pairs from ActivityNet. We randomly split these datasets into training and testing sets using 70/30 stratified splits (stratified by adverb-type) to obtain 1171 training and 503 testing samples for MSR-VTT-ASP and 1276 training and 548 testing samples for ActivityNet-ASP. Finally, with these two new ASP-datasets and splits having been created, for experiments, we turn to the requirements of our adverb-type recognition framework. We require snippets of individual object-behaviours to reason over for adverb-type prediction. So, for each ASP-program, we cut-out behaviour snippets for separate detected object-types - so that one snippet corresponds to one object-type's behaviour over the course of a video (as shown in Figure <ref>). Each object behaviour snippet is annotated with the adverb-type of its source program.[Note: As each video-clip contains multiple objects, the number of
object-behaviour snippets is much larger than the number of video-clips.] These snippets of object-behaviour are then repeated within each adverb-category to balance out the number of samples used for training and testing in each category.
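The split and balancing described here can be sketched as follows; the dict-based data layout and the fixed random seed are assumptions made only for illustration.

```python
# Sketch: 70/30 adverb-stratified split of programs, then balancing snippets by repetition.
from sklearn.model_selection import train_test_split

train_programs, test_programs = train_test_split(
    programs,                                        # assumed list of labelled programs
    test_size=0.3,
    stratify=[p["adverb"] for p in programs],
    random_state=0)

def balance_by_repetition(snippets):
    """Repeat object-behaviour snippets until every adverb category has equal counts."""
    by_adverb = {}
    for snippet in snippets:
        by_adverb.setdefault(snippet["adverb"], []).append(snippet)
    target = max(len(group) for group in by_adverb.values())
    balanced = []
    for group in by_adverb.values():
        repeats = -(-target // len(group))           # ceiling division
        balanced.extend((group * repeats)[:target])
    return balanced
```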
§.§ Symbolic Based Reasoning
As mentioned in Section <ref>, we learn a large number of batch-wise plausible indicator rules using FastLAS over balanced batches of object-behaviours from the training-set within each adverb-vs-antonym category, and use those learnt indicator-rules to extract summary vectors of object behaviors for each adverb-vs-antonym classification task. Those summary vectors from the training-set are then used to train separate SVM classifiers to distinguish between each adverb and its antonym. At test time, SVM predictions from multiple object-behaviours within individual source-video-clips from the test-set are aggregated by majority-vote to distinguish between adverbs and antonyms in each category. As shown in Figure <ref>, the accuracy of prediction of this single-time-step based symbolic method is highest for both MSR-VTT and ActivityNet datasets when distinguishing between forwards-and-backwards type adverbs - which might plausibly be inferred from a grouping of single-time-step behaviour-properties. Performance is worst (zero) for more complex adverb-types: periodically-continuously, instantly-gradually and off-on - for which no single-time-step batch-wise-plausible range rules are found. Table <ref> shows averaged accuracies across all adverb/antonym categories in each dataset.
§.§ Transformer Based Reasoning
For our experiments, we consider two light-weight versions of the landmark BERT <cit.> transformer model - namely the ALBERT <cit.> and DistilBERT <cit.> architectures. As outlined in Section <ref>, we fine-tune each pre-trained transformer model by flattening object-behaviour snippets and making them unmask randomly masked `value-words'. We then obtain behaviour summary-vectors for each object-snippet by feeding their flat representations to the trained transformers without masking and averaging word-level features output by the last hidden layer across the entire sentence. As with the symbolic case, a separate SVM with rbf kernel is used over these extracted summary-vectors, along with majority-voting, to distinguish between each adverb and its antonym. We find that both ALBERT and DistilBERT achieve comparable average performance, while out-performing our symbolic approach by a wide margin (Table <ref>). However, when each adverb-vs-antonym recognition task is viewed separately (Figure <ref>), results are mixed. The symbolic approach performs best in the `forward/backward' category, while one or the other of our transformer-based methods works best for other adverbs. The general superiority of the transformer approach is largely to be expected, given that it jointly reasons over multiple time-steps and multiple predicate properties, while our symbolic approach composes single-time-step, single-predicate properties. It can be difficult to interpret why one reasoning method outperforms another within a given category. However, it is encouraging that not all reasoning models exhibit the same performance, since we can achieve higher overall accuracy by separately using the most appropriate reasoning method for each category - as shown in Table <ref>.
§.§ Comparison with State-of-the-Art
We next make a strict comparison between our approaches and the action-dependent previous state-of-the-art <cit.>. For the previous methods, we randomly flip action-type labels for 5% of action-categories in the train and test sets, so that we obtain `imperfect-actions' that represent an action-type prediction-accuracy of 95% (which one might approach if a state-of-the-art action-type predictor <cit.> is trained on full versions of the two datasets and used for prediction). In the case of MSR-VTT-ASP, our joint Symbolic and Transformer action-free reasoning method is highly competitive, making a 3.71% improvement over PseudoAdverbs <cit.> in the imperfect-actions case and outperforming previous works even in the scenario where all ground truth actions are explicitly known (Table <ref>). In the case of ActivityNet-ASP, our joint Symbolic and Transformer action-free reasoning method achieves 1.86% lower accuracy than PseudoAdverbs in the imperfect-actions case, but is still highly useful, as it offers an action-free alternative to previous methods at a relatively small drop in performance - with no requirement to train or maintain action-type predictors.
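For reference, the `imperfect-actions' setting can be reproduced with a sketch like the one below, which flips the labels of a randomly chosen 5% of action categories; the data layout and seed are illustrative assumptions.

```python
# Sketch: simulate an imperfect action-type predictor by corrupting 5% of categories.
import random

def corrupt_action_labels(samples, flip_fraction=0.05, seed=0):
    rng = random.Random(seed)
    categories = sorted({s["action"] for s in samples})
    n_flip = max(1, int(flip_fraction * len(categories)))
    flipped = set(rng.sample(categories, n_flip))
    for s in samples:
        if s["action"] in flipped:
            s["action"] = rng.choice([c for c in categories if c != s["action"]])
    return samples
```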
Broader Impact: Similar to action-type recognition methods, we reflect that one may attempt to use adverb-type recognition to maliciously interpret and monitor video-footage. However, we also note that improved adverb-type recognition, when used ethically for improved video-interpretation, offers significant benefits to human computer interaction and robotics.
Limitations and Scope for Future Work: To our knowledge, we are the first to propose action-free methods and first to reason-over object behaviours for adverb-type recognition. As such there are several possible directions for further investigation. Primarily, for transformers we explored MLM-modeling using light-weight transformer models, and our symbolic-reasoning method is limited to single-time step and single-predicate type rules. Scope for future work then includes exploring alternative transformer-modeling/architectures (such as causal modeling using GPT-3 <cit.>), and reasoning over multiple-time steps and multiple-predicate-properties using symbolic-reasoning.
§ CONCLUSION
In this work, we proposed the design of a new framework that reasons over object-behaviours to recognize a video-clip's adverb-types. Importantly, unlike previous work, our method is action-free and is directly applicable when the action-type of a video-clip is unknown. We proposed a novel pipeline to extract human-interpretable object-behaviour-facts from raw video clips and used that pipeline to create two new datasets of object-behaviour-facts - the MSR-VTT-ASP and ActivityNet-ASP datasets. Finally, we proposed novel symbolic and transformer based reasoning methods that reason over those extracted facts to distinguish between adverb/antonym types. Experiment results demonstrate that our proposed methods perform favourably against the previous state-of-the-art.
ieee_fullname
§ APPENDIX
§.§ Implementation Details of the Extraction Phase
In this section, we describe the design of our extraction-phase in greater detail. Figure <ref> shows a depiction of this pipeline.
As mentioned earlier, given a raw video clip, we first employ MaskRCNN <cit.> over delayed-captures of static frames from the video clip's image sequence - the delay is added so that we only consider every fifth frame of the original clip and avoid processing immediately successive frames (between which very little changes). MaskRCNN gives us a collection of predicted object-types and bounding boxes along with their corresponding confidence scores between 0 and 1. We ignore all detections made with a confidence score less than 0.3, and flag all detected patches with low confidence scores between 0.3 and 0.5 as `unknown' object-types. Patches detected with a confidence score above 0.5 are recorded along with their predicted object-types.
To capture properties of motion for each of these detected object patches, we compute the pixel-wise Gunnar-Farneback optical flow <cit.> between consecutive delayed capture frames. These per-pixel optical-flow values are averaged within each detected object bounding box to give us a single numeric value of optical-flow magnitude and a single numeric value of optical-flow angle for each detected object-patch.
To filter these numerous detections for the most adverb-relevant information, we make two important assumptions.
* First, we assume that of the objects detected in a scene, faster moving objects are of more interest for adverb-type recognition then slower moving objects within the same scene. This is a reasonable assumption to make since we are trying to design a system that mimics human judgement of adverb recognition in video clips, and to humans, faster moving objects are usually more eye-catching and take precedence over slower-moving objects.
* Next, we make the assumption that objects whose behaviour determine the video clip's overall adverb-type must be detected to be of interest (moving faster than other objects within the same scene) with some level of consistency. If an object is deemed to be of interest only fleetingly, then it is unlikely to determine the overall categorization of a video clip.
Acting on these two assumptions, after computing averaged optical-flow properties for each bounding box as described above, we then run a non-overlapping sliding window over the delayed capture frames and filter out objects that (1) do not have optical-flow magnitude above the average of all objects detected in the same frame, or (2) do not pass the first filtering step for at least half the delayed-capture frames encompassed by the sliding window. In our implementation, we use a sliding window of size five - a period within which the types of objects being portrayed do not usually change much.
Finally, to simplify the tracking of object behaviour between frames, we ignore duplicate object-types detected within each delayed frame and record only the object with the highest optical-flow magnitude in a contest between two or more objects of the same type. Once we have filtered our MaskRCNN detections this way, the per-frame properties of optical-flow magnitude, optical-flow angle and bounding-box size for objects are averaged across all detections of the same object-type within each window. Since scenes do not usually change much within the span of a window, these object-properties that we are averaging for a given object-type usually pertain to the same physical object.
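The two filtering assumptions can be sketched as below, where each frame in a window is represented as a mapping from detected object-type to its average flow magnitude (an assumed data layout, with deduplication of repeated object-types already applied):

```python
# Sketch: keep only object-types that move faster than the frame average and that are
# detected as "of interest" in at least half of the window's frames.
def objects_of_interest(window_frames):
    counts = {}
    for frame in window_frames:                      # frame: {object_type: avg_magnitude}
        if not frame:
            continue
        frame_avg = sum(frame.values()) / len(frame)
        for obj, magnitude in frame.items():
            if magnitude > frame_avg:                # assumption (1): faster than average
                counts[obj] = counts.get(obj, 0) + 1
    half = len(window_frames) / 2.0
    return {obj for obj, c in counts.items() if c >= half}   # assumption (2): consistency
```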
Properties recorded from each window correspond to a separate time-step and are time-stamped as ASP facts accordingly. As shown in Figure <ref>, “detected(person, 3)" means that an object of type `person' is detected at time-step 3 - corresponding to the third window scan.
In addition to these averaged per-frame properties, we also capture local temporal properties of `operation-area' and `movement-in-place' for each object within a window. Operation-area captures the size of the area within which a single object-type `lives' for the span of a window, i.e., it is the product of (xmax-xmin) and (ymax-ymin) computed over all detected bounding box coordinates. Movement-in-place on the other hand is the ratio of the operation-area to the average bounding-box-size of the detected object within a window. The more an object moves around within a given window, the larger this ratio will be. To simplify the down-stream reasoning process, all numeric properties besides optical-flow magnitude are placed into discrete buckets, such as `small', `very-small', `medium', etc.; while angles are categorized into discrete sectors `north(n)', `north-east(ne)', `east(e)' and so on. The exact numerical-range of each discrete bucket/sector is specified in the accompanying code-implementation.
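These two local temporal properties reduce to a few lines, sketched below; the bucket edges shown are placeholders, since the real thresholds are only specified in the released implementation.

```python
# Sketch: operation-area, movement-in-place, and placeholder bucketing of numeric values.
def operation_area(boxes):
    """boxes: list of (x0, y0, x1, y1) for one object-type across a window."""
    xs = [x for (x0, _, x1, _) in boxes for x in (x0, x1)]
    ys = [y for (_, y0, _, y1) in boxes for y in (y0, y1)]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def movement_in_place(boxes):
    avg_box = sum((x1 - x0) * (y1 - y0) for (x0, y0, x1, y1) in boxes) / len(boxes)
    return operation_area(boxes) / avg_box

def bucket(value, edges=(0.5, 1.5, 3.0, 6.0)):       # placeholder thresholds
    names = ("very_small", "small", "medium", "large", "very_large")
    for name, edge in zip(names, edges):
        if value < edge:
            return name
    return names[-1]
```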
Besides the object properties already mentioned, we also record the region of the video frame within which each object's operation-area is maximum (i.e., the section of the frame within which the object `lives'). Rather than mapping each frame's area using a series of flat grid cell locations such as C_0,C_1,C_2,C_3... and assigning cell-ids to each object, in order to allow a more intuitive reasoning process, we employ a hierarchy of relative placement.
As shown in Figure <ref>, at the highest level of this hierarchy (level 0), the frame is split into 4 regions, (top, left), (top, right), (bottom, left) and (bottom, right). At the second level (level 1), each of these regions is further split into another 4 regions, and again at level 2, the process is repeated. Describing the location of an object's operation-area using this hierarchy of relative placement then allows us to make fairly simple inferences. For example, if level 1 placement stays unchanged and the level 2 toggles from left to right, we can easily determine that the object has moved right by a small amount, and if instead the level 1 placement toggles from top to bottom, then we can say that the object has moved downward by a significant amount. As shown in Figure <ref>, cell-occupancy predicates map the operation-area of detected objects in each window using this hierarchy of relative-placement.
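The hierarchy of relative placement can be sketched as a recursive quadrant descent, shown below for the centre of an object's region (using a single centre point is an assumption made for brevity; the paper localises the region where the operation-area is maximal):

```python
# Sketch: three-level relative placement of a point within the frame.
def cell_occupancy(cx, cy, frame_w, frame_h, levels=3):
    placements, x0, y0, w, h = [], 0.0, 0.0, float(frame_w), float(frame_h)
    for _ in range(levels):
        vertical = "top" if cy < y0 + h / 2 else "bottom"
        horizontal = "left" if cx < x0 + w / 2 else "right"
        placements.append((vertical, horizontal))
        # descend into the chosen quadrant for the next, finer level
        if horizontal == "right":
            x0 += w / 2
        if vertical == "bottom":
            y0 += h / 2
        w, h = w / 2, h / 2
    return placements    # e.g. [('top', 'left'), ('bottom', 'right'), ('top', 'left')]
```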
Finally, for clarity, Figure <ref> shows a trimmed example of an ASP-program, computed from a video-clip using this pipeline. The figure shows some selected background knowledge and object-properties detected over a single time step. Importantly, we highlight that our background knowledge includes information on the `opposites' of relative directions, as well as `less-than', `clockwise' and `anticlockwise' orderings over bucketed numeric values - so that we can reason about ranges in symbolic approaches. Importantly, these ordering predicates are formulated with a measure of distance to allow for range-reasoning. For example, `clockwise(n, ne, 1)' indicates that northeast is clockwise of north by one-tick while `clockwise(n, e, 2)' indicates that east is clockwise from north by two-ticks. Similarly `less-than(very-small, small, 1)' indicates that small is larger than very-small by one step, whereas `less-than(very-small, medium, 2)' indicates that medium is larger than very-small by two steps.
§.§ MSR-VTT-ASP and ActivityNet-ASP datasets
As discussed in Section <ref>, in this work, using our extraction-phase pipeline, we process clips where both raw-footage and adverb-annotations (from Doughty et al. <cit.>) are available for the MSR-VTT<cit.> and ActivityNet<cit.> video-datasets to create the new MSR-VTT-ASP and ActivityNet-ASP datasets of ASP-programs. Each dataset contains facts of multiple object-behaviours detected to be of interest within the corresponding video clip, along with background knowledge of predicate properties - as shown in Figure <ref> (the full background-knowledge of predicate-properties is specified in the ASP-files of the new datasets).
As discussed in Section <ref>, each program is labeled with one or more of 22 adverb-types (11 adverb/antonym pairs) according to the source clip's labels: (1) upwards/downwards, (2) forwards/backwards, (3) outdoor/indoor, (4) slowly/quickly, (5) gently/firmly, (6) out/in, (7) partially/completely, (8) properly/improperly, (9) periodically/continuously, (10) instantly/gradually, (11) off/on, and we split these datasets into 70/30 train/test stratified splits (stratified by adverb-type) to obtain 1171 training and 503 testing samples for MSR-VTT-ASP and 1276 training and 548 testing samples for ActivityNet-ASP.
Table <ref> shows summary properties of these two new datasets.
§.§ Implementation Details of Symbolic-Based Reasoning
As mentioned in Section <ref>, we consider employing the FastLAS inductive learning method <cit.> to automatedly learn some governing rules over these extracted facts - so as to explain the overall adverb-type categorization of each video clip.
In using FastLAS to learn such rules, we are primarily constrained by the number and type of variables that each rule can use. The more variables a rule allows, the more possible groundings it can take, and the longer rule learning takes to complete. And while each extracted instance of object behaviour possesses multiple facts of the same predicates across different time-steps (such as magnitude at time step 1, magnitude at time step 2, etc.), owing to the large number of possible groundings, automatedly learning rules that reason over more than one time-step is especially challenging for this task.
In this work, rather than attempting to overcome these challenges and learn a few very complex rules over multiple time-steps and multiple predicate-properties, we instead consider learning a large number of simpler rules, that compositionally might inform overall adverb-type.
To limit the number of free variables that FastLAS has to deal with, we focus on learning range-rules that define upper or lower bounds of an object's predicate-properties at single time-steps. To illustrate this idea, Figure <ref> shows a toy example that we feed into FastLAS. The example specifies the behavior of four objects: a car, a plane, a person, and a cat. Each of these behaviors has a corresponding optical flow magnitude for an arbitrary time-step and each object-behaviour (that is a positive example for our rule-learning problem) is also associated with a particular class type - either `strange' or `not-strange'. The first class-type mentioned in a #pos header in the figure is the one that we wish to associate with the object-behaviour. The second class type mentioned in the header is what the object-behaviour is not.
As shown in this toy example, in order to reduce the number of variable-values that FastLAS deals with, we also use discretized versions of optical-flow magnitude such as `five-to-ten', `ten-to-fifteen', etc.
The language-bias shown in the figure specifies that the head atom of any learnt rule must be a class type (`strange' or `not-strange'), and must generalize over variable objects-types. The language-bias also specifies that body atoms (if used) must capture some range property over magnitude. As FastLAS can learn to use one or none of each of the specified `less-than' body atoms, a learnt-rule might enforce an upper bound on magnitude value, a lower bound on magnitude value, or neither. Additionally, as the `number-of-steps' field in the less-than predicate is specified as a FastLAS numeric-variable (num-var), FastLAS is allowed to learn numeric-constraints that further explain range-rules.
For this particular toy example, we can explain all of the provided object-behaviours by deeming magnitudes between 5 and 20 to correspond to the class `strange' and other magnitude values to correspond to the class `not-strange'. FastLAS does in fact discover such corresponding rules, as shown in Figure <ref>. Figure <ref> shows a depiction of these learnt ranges for better clarity.
We can similarly employ these range-style language-biases for other predicate properties such as optical-flow angle and operation area. Figure <ref> shows how we might do so. As also shown in Figure <ref>, for cell-occupancy we consider using a slightly different language bias - we allow for rule-body conditions that consist of: (A) a variable relative direction along the vertical (top/bottom) and a constant horizontal direction (left/right), (B) a variable direction along the horizontal (left/right) and a constant vertical direction (top/bottom), or (C) both horizontal and vertical relative directions specified as constants. We also use a numeric-variable (num-var) value for the level of cell-hierarchy used by a rule, so that we can learn rules that apply to different levels.
An important facet of this type of observational-predicate rule learning is that it specifically requires numeric rule learning (either to learn constraints over the number-of-steps range property or the level-of-hierarchy as described), and we have chosen to employ FastLAS as it is the only framework that allows this type of automated numeric-rule learning over ASP programs.
Generalizing our toy example for the more complex problem-setting of recognizing adverb-types from object-behaviours is straightforward. We use recorded object behaviours from our video-to-ASP pipeline as positive examples in the rule-learning setup, wherein the class associated with each object-behaviour is the ground-truth adverb-type of the overall video clip that an object hails from. Naturally, the class not to be associated with each object behaviour is the antonym of its adverb-type. Figure <ref> shows a truncated example of a detected object's behaviour formatted for FastLAS rule-learning. To simplify our explorations and to reduce noise in the training data, we ignore `unknown' object-behaviours detected by our pipeline.
However, problems arise in using this learning methodology as-is over sets of object-behaviours for given (adverb, antonym) pairs. Firstly if the set of object-behaviours is not-balanced to have an equal number of objects for both adverb and antonym, then we might get best coverage by just predicting a single class rule without any body conditions. This problem of unbalanced data is easily solved by repeating object-behavior examples in the training data to balance out the adverb/antonym classes. The next problem is more important - since the adverb-recognition setting is quite noisy (with many of our detected object-behaviours not necessarily impacting a video's overall adverb-type), for large sets of object-behaviour, even after balancing, we find that rules learnt from our simple language biases (Figure <ref>) are not able to cover more than 50% of behaviours. Then, directly predicting one or another adverb/antonym head with no body conditions again becomes the best strategy for maximum coverage.
As mentioned earlier, increasing the complexity of possible rules to abate this problem comes with its own set of challenges (namely rule-learning can slow down to the point of becoming computationally impractical). So, rather than increasing the complexity of our language biases, we consider smaller-batches of balanced subsets of the training data - which pose a less noisy and less complex problem setting when each batch is viewed separately. We run FastLAS separately over each balanced-batch of object behaviours to get batch-wise plausible rules for each of our language-biases. When the rules returned by FastLAS for a batch and language bias are non-trivial (possess body-conditions), we then record them as indicators of adverb-type.
After all such indicator-rules have been extracted from a stream of balanced-batches of the full training set for each (adverb, antonym) pair using our language-biases (Figure <ref>), we then consider composing their results together using Support Vector Machines (SVMs). Specifically, given an input object-behaviour and an adverb vs antonym task, we assign a 1 to each corresponding indicator-rule if the rule fires for the given object's behaviour, and assign a 0 otherwise - so that from our collection of indicator-rules of the adverb/antonym task, we obtain for the object-behaviour a feature vector of 1s and 0s (such as [1,1,1,0,1,1,...]). The entire balanced training-set is converted in this manner for each adverb vs antonym task. A separate binary-SVM with rbf kernel is trained over these extracted features to classify between every adverb and its antonym.
At inference time, a raw video clip is converted to an ASP program of object behaviors, and all the indicator-rules are checked to obtain vectors of zeros and ones for each object. All the SVMs make their adverb/antonym predictions (over features from their corresponding indicator-rules) for each detected object, and predictions from multiple objects detected within a single-clip are aggregated by a simple voting mechanism in each adverb vs antonym category.
§.§ Compute Requirements for Experiments
All experiments presented in this work can be reproduced using a single P5000 GPU device. Using this resource, ASP-program facts were extracted for the full MSR-VTT-ASP and ActivityNet-ASP datasets sequentially over 2 days, while transformer-based finetuning completes in under 1 hr per dataset for both the DistilBERT and ALBERT architectures. Learning rules from balanced-batches of object-behaviours using FastLAS and a single CPU requires roughly 20 hours to complete for each dataset. All SVM training and inference completes in under 5 min.
|
http://arxiv.org/abs/2307.07411v1 | 20230710121834 | Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases | [
"Michael Sheinman Orenstrakh",
"Oscar Karnalim",
"Carlos Anibal Suarez",
"Michael Liut"
] | cs.CL | [
"cs.CL",
"cs.CY"
] |
Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases
[email protected]
University of Toronto Mississauga
Mississauga
Canada
[email protected]
0000-0003-4930-6249
Maranatha Christian University
Bandung
Indonesia
[email protected]
0000-0002-6012-932X
Escuela Superior Politécnica del Litoral
Guayaquil
Ecuador
[email protected]
0000-0003-2965-5302
University of Toronto Mississauga
Mississauga
Canada
Due to the recent improvements and wide availability of Large Language Models (LLMs), they have posed a serious threat to academic integrity in education. Modern LLM-generated text detectors attempt to combat the problem by offering educators services to assess whether some text is LLM-generated. In this work, we have collected 124 submissions from computer science students before the creation of ChatGPT. We then generated 40 ChatGPT submissions. We used this data to evaluate eight publicly-available LLM-generated text detectors through the measures of accuracy, false positives, and resilience. The purpose of this work is to inform the community of which LLM-generated text detectors work and which do not, but also to provide insights for educators to better maintain academic integrity in their courses. Our results find that CopyLeaks is the most accurate LLM-generated text detector, GPTKit is the best LLM-generated text detector to reduce false positives, and GLTR is the most resilient LLM-generated text detector. We also express concerns over 52 false positives (of 114 human written submissions) generated by GPTZero. Finally, we note that all LLM-generated text detectors are less accurate with code, other languages (aside from English), and after the use of paraphrasing tools (like QuillBot). Modern detectors are still in need of improvements so that they can offer a foolproof solution to help maintain academic integrity. Further, their usability can be improved by facilitating a smooth API integration, providing clear documentation of their features and the understandability of their model(s), and supporting more commonly used languages.
<ccs2012>
<concept>
<concept_id>10003456.10003457.10003527</concept_id>
<concept_desc>Social and professional topics Computing education</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003317.10003347.10003355</concept_id>
<concept_desc>Information systems Near-duplicate and plagiarism detection</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003317.10003338.10003341</concept_id>
<concept_desc>Information systems Language models</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Social and professional topics Computing education
[500]Information systems Near-duplicate and plagiarism detection
[500]Information systems Language models
§ INTRODUCTION
In academia, a way to encourage students to utilize all learning opportunities and experiences is to properly maintain academic integrity in courses <cit.>. Students need to complete any exams and assessments with their best effort. Further, they need to actively engage with the instructors (and tutors).
Although Artificial Intelligence (AI) can foster education <cit.>, it might be misused to breach academic integrity. Paraphrasing tools <cit.> and code obfuscation tools <cit.>, for example, are misused to cover up evidence of plagiarism (a breach of academic integrity involving copying one's work and reusing it without proper acknowledgment <cit.>).
Misuse of AI chatbots built on large language models (LLMs) <cit.>, such as ChatGPT[<https://openai.com/blog/chatgpt>], is another trending threat to academic integrity. Students can complete exams or assessments with limited effort, resulting in questionable performance; it is unclear whether the learning objectives are actually met. The misuse can be considered contract cheating (i.e., getting help in exchange for mutual incentives <cit.>) since AI chatbots provide responses in exchange for additional user data. However, considering that AI responses are generated based on other people's textual data without proper acknowledgment, we believe it is more justifiable to consider the misuse as plagiarism.
While checking student work for plagiarism, instructors are often aided by automated detectors. A number of detectors have been developed to detect whether a work is a result of an LLM. Two of them are GPT-2 Output Detector <cit.> and Giant Language model Test Room (GLTR) <cit.>. Nevertheless, due to the recency of misuse of AI chatbots, Computing educators might have limited information about publicly available detectors. Further, it is challenging to choose the most suitable detector for their teaching environment. To the best of our knowledge, there are no empirical studies comparing the detectors in terms of effectiveness.
In response to the aforementioned gaps, we investigate LLM-generated text detectors and formulate the following research question (RQ): “How effective are LLM-generated text detectors?”
It is clear that there is a need in the community to understand if the currently available detectors are able to detect LLM-generated content <cit.> and what their reliability is.
As an additional contribution, we also report our experience in using the LLM-generated text detectors. It might be useful for readers interested in employing those detectors in their classrooms.
§ RELATED WORK
This section discusses common breaches of academic integrity in computing education and misuse of AI to breach academic integrity.
§.§ Common Breaches of Academic Integrity
Academic integrity encourages students to act honestly, trustworthy, respectfully, and responsibly in learning[<https://lo.unisa.edu.au/course/view.php?id=6751&section=6>]. <cit.> lists five common breaches of academic integrity in computing education: plagiarism, collusion, contract cheating, exam cheating, and research fraud. It is important to inform students about instructors' expectations about academic integrity in their courses <cit.> and penalize those who breach academic integrity.
Plagiarism happens when ideas, words, or even code are reused without proper acknowledgment and permission from the original author(s) <cit.>.
It is commonly identified with the help of automated detectors <cit.> such as Turnitin[<https://www.turnitin.com/>], Lichen <cit.>, MOSS[<https://theory.stanford.edu/ aiken/moss/>], and JPlag <cit.>. Any submissions with high similarity will be investigated and if they are indeed a result of misconduct, the students will be penalized <cit.>.
Nevertheless, identifying plagiarism is not always straightforward; some perpetrators disguise their act with automated paraphrasing <cit.>, essay spinning <cit.> or code obfuscation <cit.>. The automated detectors should be resilient to common disguising practices in addition to being effective and efficient.
GPlag <cit.> and BPlag <cit.>, for example, focus on content semantics while measuring similarity among submissions.
<cit.> and <cit.> developed detectors that detect substantial changes among consecutive saves.
<cit.> developed a detector that is automatically integrated to a programming workspace to record any code edits.
Collusion is also about reusing ideas, words, or code without proper acknowledgment. However, the original author(s) is aware of the matter and implicitly allows it <cit.>. Typically, this occurs when two or more students work closely beyond reasonable levels of collaboration <cit.>. Collusion can be identified in the same manner as plagiarism with the help of automated detectors. Similar submissions are reported by the detectors and then manually investigated by the instructors; students whose submissions are indeed a result of misconduct will be penalized.
Contract cheating occurs when third parties are paid to complete student assessments <cit.>. The third parties can be professional companies or even the students' colleagues. Contract cheating is quite challenging to identify as the third parties tend to know how to evade detection. It is only identifiable when the writing style and the quality of the submission are substantially different from those of the student's prior submissions. To expedite the identification process, instructors can either use the help of authorship identification detectors <cit.> such as Turnitin Authorship Investigate[<https://help.turnitin.com/MicroContent/authorship-investigate.htm>] <cit.> or check contract cheating sites <cit.>.
Exam cheating happens when some students have unfair advantages in the exams <cit.>. The advantages can vary from concealed notes during exams, leaked exam questions, to impersonation (i.e., an individual switch places with a student to take the exam). Exam cheating can be identified via careful investigation on the whole process of the exams. Sometimes, such identification can be aided with online proctoring systems <cit.> (e.g., Proctorio[<https://proctorio.com/>] and ProctorExam[<https://proctorexam.com/>]) or local monitoring tools (e.g., NetSupport[<https://www.netsupportschool.com/>]).
Research fraud means reporting research results without verifiable evidence <cit.>. It can be data fabrication (i.e., generating artificial data to benefit the students) or data falsification (i.e., updating the data so that it aligns with the students' desired findings). Both are parts of research misconduct[<https://grants.nih.gov/policy/research_integrity/definitions.htm>] and they can happen in research-related assessments. Research fraud can be identified via careful investigation on the whole process of research. Due to its complex nature, such misbehaviour is manually identified on most cases. However, instructors can get some help from source metadata <cit.> and automated image manipulation detection <cit.>.
§.§ Misuse of AI
AI substantially affects education <cit.>. It improves student learning experience via the help of intelligent tutoring systems <cit.> and personalized learning materials <cit.>. AI expedites the process of providing feedback <cit.>, identifying breaches of academic integrity <cit.>, maintaining student retention <cit.>, learning programming <cit.>, creating programming exercises <cit.>, and recording attendance <cit.>.
Advances in AI might also be misused for breaching academic integrity.
Paraphrasing tools <cit.> which are intended to help students learn paraphrasing are misused to cover up plagiarism.
Code generators like GitHub Copilot <cit.> which are intended to help programmers in developing software are misused to complete programming tasks that should be solved independently.
Code obfuscation tools <cit.> which are intended to secure code in production are misused to disguise similarities in copied code submissions.
AI chatbots <cit.>, especially those with Large Language Model (LLM) <cit.> are intended to help people searching information, but they are misused to unethically complete exams[<https://edition.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html>] and assessments[<https://theconversation.com/chatgpt-students-could-use-ai-to-cheat-but-its-a-chance-to-rethink-assessment-altogether-198019>].
An LLM is derived from a language model (LM), a statistical model in which each sequence of words is assigned a probability <cit.>. For each query or question, the response is generated by concatenating sequences of words that have a high probability given the query or question.
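As a rough illustration of this idea (and nothing more), the toy snippet below assigns probabilities to next words and samples a short continuation; the vocabulary and the probabilities are entirely made up, and real LLMs such as GPT-3 operate over vastly larger vocabularies with learned, context-dependent distributions.
import random

# Toy bigram-style illustration of "assigning probabilities to word sequences";
# the words and numbers are purely illustrative and bear no relation to GPT-3.
next_word_probs = {
    "academic": {"integrity": 0.7, "writing": 0.3},
    "integrity": {"matters": 0.6, "policies": 0.4},
}

def generate(start, length=3):
    words = [start]
    for _ in range(length):
        dist = next_word_probs.get(words[-1])
        if not dist:
            break
        choices = list(dist.keys())
        weights = list(dist.values())
        # sample the next word in proportion to its assigned probability
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("academic"))  # e.g., "academic integrity matters"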
ChatGPT is a popular example of an LLM-based chatbot. The tool is developed by OpenAI, a non-profit American research laboratory, on top of GPT-3, an LLM that uses deep learning to generate human-like text. The tool relies on reinforcement and supervised learning to further tune the model.
A number of automated detectors have been developed to help instructors identifying AI misuses for breaching academic integrity. In the context of plagiarism and collusion, automated detectors nullify common alterations that can be done without understanding the content <cit.> and remove contents that are not evident for raising suspicion <cit.>.
In dealing with misuses of AI chatbots, a few automated detectors have been developed in the same way as the chatbots, via pretrained models, but are dedicated to detecting AI-generated text. The GPT-2 Output Detector <cit.> and GLTR <cit.> are two examples.
§ METHODOLOGY
This section discusses how the research question stated in the introduction would be addressed and our preliminary work to discover publicly available LLM-generated text detectors.
We collected historical assignment data dating back to 2016 from two publicly funded research-focused institutions, one in North America and one in South America. The data collected was from upper-year undergraduate computer science and engineering students.
We analyzed a total of 164 submissions (124 were submitted by humans, 30 were generated using ChatGPT, and 10 were generated by ChatGPT and altered using the Quillbot paraphrasing tool) and compared them against eight LLM-generated text detectors. This yields a total of 1,312 predictions.
Of the 164 submissions, 134 were written in English (20 of which were generated by a LLM, and another 10 which were LLM-generated and paraphrased) and 20 were written in Spanish (10 of which were AI-generated). The submissions were collected between 2016 and 2018 (prior to the release of ChatGPT), and were made in “databases”, “networking”, and a “final thesis project” course. These courses were specifically selected as they are upper-year computer science major courses that touch on a mix of systems and theory (databases and networking), as well as technical writing in computer science with a programming/development component (final thesis project). The students in these courses were primarily in a computer science major. It should also be noted that Spanish was selected as an alternative language to analyse because it is one of the world's most popular languages, and some of the authors have experience writing and evaluating technical material in this language.
The assessments analyzed in this study (see Table <ref>) are taken from three undergraduate courses. The first course is a databases course offered to third-year computer science students in their first or second semester. It is a mix of database theory and practical systems application. There are 101 paper submissions from this course, which involved a final assessment where students wrote a report analyzing two industry players and their use of databases and data centers; these were written in English.
The second course is a networking course offered to third-year computer science students in their second semester. It is a mix of theoretical concepts and practical system application. There are 13 paper submissions from this course, which involved an exam question where students explain how they would implement the NOVEL-SMTP and NEO-SMTP email protocols using only UDP; these were written in English.
The third course is a final thesis project course offered to fourth-year computer science students throughout their final year of study (across both semesters). It is meant to bridge theory and practice to develop something that can be used or implemented in the real world. There are 10 paper submissions from this course, which involved improving computing systems and engineering processes in their local community; these were written in Spanish.
Due to the character limitations of the detectors, submissions below 1,000 characters were excluded and submissions above 2,500 characters were truncated to the last complete sentence. This ensures the input data fits within the range of all detectors. As many LLM-generated text detection platforms have a 2,500-character maximum, we used 2,500 characters as our upper bound to ensure fairness across platforms.
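For concreteness, the character-limit rule described above can be sketched as follows; the function name and the sentence-boundary heuristic are illustrative choices rather than the exact scripts used in this study.
from typing import Optional

def preprocess(text: str, min_chars: int = 1000, max_chars: int = 2500) -> Optional[str]:
    # Exclude submissions that are too short for the detectors.
    text = text.strip()
    if len(text) < min_chars:
        return None
    # Truncate long submissions to the last complete sentence within the limit.
    if len(text) > max_chars:
        clipped = text[:max_chars]
        cut = max(clipped.rfind("."), clipped.rfind("!"), clipped.rfind("?"))
        return clipped[:cut + 1] if cut != -1 else clipped
    return text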
LLM-generated texts were created with the help of ChatGPT[<https://openai.com/blog/chatgpt>], a popular LLM. The handouts were parsed to prompts by removing irrelevant information (course code, deadlines, submission instruction) so the prompts only contain the core requirements of the task. These prompts were then fed into ChatGPT to generate a solution to the assignment.
It should be noted that the authors mined over 2,000 submissions from programming, data structures and algorithms, and compilers courses; however, the submission data varied too much for the content to be easily extracted and analyzed by the detectors, often due to a lack of context after removing any code. The selected submissions were purely writing-based and did not involve coding components, although in some cases they discussed theoretical concepts in computer science.
Finally, all of the detectors were tested in April 2023.
§.§ Discovering Publicly Available LLM-generated Text Detectors
Publicly available LLM-generated text detectors were discovered from January to February 2023 from social media (e.g., Twitter, Facebook, and blogs), online news, and previous literature on LLM-generated text detection (GPT-2, GLTR). Public interest in LLM-generated text detectors followed the release of GPTZero, which went viral in January 2023. After GPTZero, many other companies launched their own LLM-generated text detectors.
A number of LLM-generated text detectors were discovered but we limited this study to LLM-generated text detectors that appear to offer proprietary solutions to LLM-generated text detection. We found that some LLM-generated text detectors are likely to be replicas of open source work (GPT-2) and hence we excluded such detectors from the study.
We identified eight such publicly available LLM-generated text detectors, as shown in Table <ref>. Two of them (GPT-2 Output Detector and GLTR) are featured with technical reports <cit.>.
GPT-2 Output Detector <cit.> is an LLM-generated text detector based on the RoBERTa large pretrained model <cit.>. RoBERTa is a transformers model trained on a large corpus of raw English data. The GPT-2 Output Detector starts with the pre-trained RoBERTa-large model and trains a classifier on web data and the GPT-2 output dataset. The detector returns the probability that an input text is real, with an accuracy on GPT-2 text of 88% for the 124-million-parameter model and 74% for the 1.5-billion-parameter model <cit.>. The detector is limited to the first 510 tokens, although there are extensions that raise this limit <cit.>.
GLTR <cit.> is a detector that applies statistical methods to detect GPT-2 text. The model is based on three simple tests: the probability of the word, the absolute rank of a word, and the entropy of the predicted distribution. This detector shows an interface where each word is highlighted along with a top-k class for that word.
The GLTR detector does not provide a quantifiable overall probability that a text is AI-generated. To make a fair comparison between GLTR and other detectors, we define a detector on top of GLTR that makes probability predictions using the normal distribution. We compute an average μ and a standard deviation σ over a sample dataset of 20 human and 20 ChatGPT submissions. The results were μ = 35.33 and σ = 15.68. We then used those results to normalize a prediction by computing the standard score of a data point x as (x - μ)/σ. This score is passed to the sigmoid function to obtain a probability prediction.
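A minimal sketch of this normalization is given below, where x stands for the per-document GLTR score discussed above and the constants are the sample values just reported; the function name is our own.
import math

MU, SIGMA = 35.33, 15.68  # sample mean and standard deviation quoted above

def gltr_probability(x):
    # Standard score of the per-document GLTR score x, then the sigmoid,
    # as described in the text.
    z = (x - MU) / SIGMA
    return 1.0 / (1.0 + math.exp(-z))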
GPTZero was the first detector <cit.> to claim to detect ChatGPT data. The original version of the detector used two measures: perplexity and burstiness. Perplexity refers to a measurement of how well GPT-2 can predict the next word in the text. This appears similar to the way the GLTR detector works <cit.>. The second measure is burstiness: the distribution of sentences. The idea is that humans tend to write with bursts of creativity and are more likely to have a mix of short and long sentences. The current version of GPTZero gives four classes of results. Table <ref> shows how the different classes are interpreted as probabilities. GPTZero claims an 88% accuracy for human text and 72% accuracy for AI text for this detector <cit.>.
AI Text Classifier is OpenAI's 2023 model fine-tuned to distinguish between human-written and AI-generated text <cit.>. The model is trained on text generated from 34 models from 5 different organizations. The model provides 5 different categories for the results based on the internal probabilities the model provides. Table <ref> shows how the different classes are interpreted as probabilities. The interpretations are based on the final category, not the internal model. Usage of this classifier requires at least 1,000 characters.
GPTKit uses an ensemble of 6 other models, including DistilBERT <cit.>, GLTR, Perplexity, PPL, RoBERTa <cit.>, and RoBERTa (base). The predictions of these models are used to form an overall probability that a text is LLM-generated. However, the exact weight used for each of the detectors is unclear. The detector claims an accuracy of 93% based on testing on a dataset of 100K+ responses <cit.>.
CheckForAI claims to combine the GPT-2 Output Detector along with custom models to help limit false readings <cit.>. The detector also supports account sign up, history storage, and file uploads. The detector provides four classes to compute the probability of text, as shown in Table <ref>. This detector is currently limited to 2,500 characters.
CopyLeaks offers products for plagiarism and AI content detection targeted broadly for individuals, educators, and enterprises. The detector highlights paragraphs written by a human and by AI. CopyLeaks also claims detection across multiple languages, including Spanish (tested in this paper). CopyLeaks claims an accuracy of 99.12% <cit.>. The detector is currently available publicly <cit.>.
Originality.AI is a detector targeted at content publishers. The detector is available through a commercial sign-up page <cit.> with a minimum fee of $20. We received research access for analysis of the detector. The detector comes with API access and a number of additional features for content creators. A study conducted by Originality itself on ChatGPT suggests that the detector has an accuracy of 98.65% <cit.>.
We did not employ a systematic approach <cit.> to discover publicly available LLM-generated text detectors. Most of the detectors are recent and cannot easily be found on the internet or in academic papers; a systematic approach might therefore have covered fewer detectors.
§.§ Addressing the RQ: Effectiveness of LLM-generated text detectors
A detector is only worthy of use if it is reasonably effective. We addressed the RQ by comparing detectors listed in Table <ref> under three metrics: accuracy, false positives, and resilience. Instructors prefer to use detectors that are reasonably accurate, reporting a minimal number of false positives, and are resilient to disguises.
Accuracy refers to how effective the detectors are in identifying LLM-generated texts. We present all accuracy results using two measures of accuracy, as we have found that using only one measure may mislead about some aspect of the results.
The first method (averages) takes the average prediction of each detector across a dataset. As discussed in the discovery section, each detector either provides a probability that a text is LLM-generated or a category that represents such a probability. We apply our category-to-probability conversion tables to obtain a probability for each detector. These probabilities are averaged for the final results.
The second method (thresholds) is calculated as the proportion of correctly-classified LLM-generated texts. These are measured as the number of texts that correctly receive above or below a 50% score out of the total number of texts. This measure is strict, so a prediction of 50% is always considered to be incorrect.
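The two measures can be sketched as follows, assuming each prediction has already been converted to a probability that the text is LLM-generated and the ground-truth labels are known; the function names are our own.
def average_score(predictions):
    # Method 1 (averages): mean predicted probability over a dataset.
    return sum(predictions) / len(predictions)

def threshold_accuracy(predictions, labels):
    # Method 2 (thresholds): fraction of texts strictly on the correct side
    # of 50%; a prediction of exactly 0.5 is always counted as incorrect.
    correct = 0
    for p, is_llm in zip(predictions, labels):
        if is_llm and p > 0.5:
            correct += 1
        elif not is_llm and p < 0.5:
            correct += 1
    return correct / len(predictions)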
False positives are original, human-written submissions that are flagged as LLM-generated by the detectors. Fewer false positives are preferred. For this metric, we used the student submissions collected before the release of ChatGPT (2016-2018) and measured their degree of originality with the detectors. Any suspected submissions (originality degree less than 50%) were counted as false positives.
Resilience refers to how good LLM-generated text detectors are at seeing through disguises. Some students might disguise their LLM-generated texts to avoid getting caught. QuillBot <cit.> is a paraphrasing tool that uses artificial intelligence to reword writing. We paraphrased 10 ChatGPT submissions through QuillBot and measured the results.
It is worth noting that measuring effectiveness of LLM-generated text detectors is time consuming and labour intensive. Further, some detectors are not supported with API integration; the authors needed to manually copy and paste each test case.
§.§ Summarizing our experience using the LLM-generated text detectors
We also report our experience in using the LLM-generated text detectors. Several aspects are considered: intuitiveness, clarity of documentation, extendability, variety of inputs, quality of reports, number of supported LLM-generated languages, and pricing.
§ RESULTS
This section discusses our findings from addressing the research question and our experience using LLM-generated text detectors.
§.§ Addressing the RQ: Effectiveness of LLM-generated Text Detector
Table <ref> shows accuracy of each detector across human and ChatGPT data using the threshold method. The data shows CopyLeaks to be the most accurate LLM-generated text detector, with an accuracy of 97.06%. CopyLeaks is followed by the GPT-2 Output Detector/CheckForAI (96.62%), GLTR (88.73%), GPTKit (87.50%), OpenAI's Detector (77.37%), and GPTZero (49.69%).
Table <ref> shows the results using averages instead of thresholds. The results show CopyLeaks to provide the best probabilities (99.53%), followed by CheckForAI (96.56%), the GPT-2 Output Detector (96.29%), GPTKit (82.09%), OpenAI's Detector (82%), OriginalityAI (76.63%), GLTR (65.84%), GPTZero (64.47%).
The data in Tables <ref> and <ref> are both normally distributed, verified using the Shapiro-Wilk and Kolmogorov-Smirnov tests. Thus, no correction needed to be applied. Overall, from the t-tests (Table <ref>: t = 1.67 and p = 0.116, Table <ref>: t = 1.154, p = 0.268, both with 14 degrees of freedom) we did not find significant differences in the accuracy of LLM-generated text detectors between human and ChatGPT data.
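For reproducibility, the statistical checks reported here (and for the Spanish data below) can be carried out along the following lines with SciPy; the helper takes the per-detector accuracies on human data and on ChatGPT data as its two inputs, which for eight detectors per group gives the 14 degrees of freedom quoted above.
from scipy import stats

def compare_groups(acc_a, acc_b):
    # Normality checks (Shapiro-Wilk, and Kolmogorov-Smirnov on standardized
    # scores), followed by an independent two-sample t-test.
    for name, acc in (("group A", acc_a), ("group B", acc_b)):
        _, p_sw = stats.shapiro(acc)
        _, p_ks = stats.kstest(stats.zscore(acc), "norm")
        print(f"{name}: Shapiro-Wilk p = {p_sw:.3f}, KS p = {p_ks:.3f}")
    t, p = stats.ttest_ind(acc_a, acc_b)  # df = len(acc_a) + len(acc_b) - 2
    print(f"t = {t:.3f}, p = {p:.3f}")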
Table <ref> shows the false positive results on the human data from the databases and network assignments. GPTKit is the only detector that managed to achieve no false positives across the entire set of human submissions. This is followed by CopyLeaks (1), the GPT-2 Output Detector/CheckForAI (2), OpenAI's detector (6), OriginalityAI (7), GLTR (20), and finally GPTZero (52).
A further investigation of GPTKit, which appears to be the best detector for avoiding false positives, shows that this detector is still prone to false positives. While none of our original test samples appeared more than 50% fake, we found that some submissions scored up to 37% fake from GPTKit. In some cases, removing the last paragraph(s) from these submissions led to a false positive. Figures <ref> and <ref> show such a case. We note that in this case the output of GPTKit also shows that the detector merged separate paragraphs into a single one. This unexpected merge may contribute to the problem.
Table <ref> shows results of 10 ChatGPT papers before and after the Quillbot paraphraser. The results are measured using overall accuracy. The GLTR detector was the most resilient, with none of the predictions changing. It is worth noting that the overall weighted result of GLTR also decreased by 10%, although the change did not affect the accuracy. In contrast, the rest of the detectors saw a significant drop following the Quillbot transformation.
Figures <ref> and <ref> show an example of a ChatGPT data point that went from 98% before Quillbot to 5% after Quillbot on Originality.
Tables <ref> and <ref> show results from the capstone course data, written in Spanish. We found that CopyLeaks and the AI Text Classifier tend to always output fake predictions on the AI data. In contrast, the GPT-2 Output Detector, GPTZero, CheckForAI, GLTR, GPTKit, and Originality tend to output human predictions.
The data in Tables <ref> and <ref> are both normally distributed, verified using the Shapiro-Wilk and Kolmogorov-Smirnov tests. Thus, no correction needed to be applied. Overall, from the t-tests (Table <ref>: t = 1.766 and p = 0.099, Table <ref>: t = 1.862, p = 0.084, both with 14 degrees of freedom) we did not find significant differences in the accuracy of LLM-generated text detectors between human (Spanish text) and ChatGPT (Spanish text) data.
The GLTR detector shows an interesting mild success with Spanish data. The average top-k score on human data was 104, while the average top-k score on ChatGPT data was 85. When we changed the implementation of GLTR to set a mean of a 94.5 top-k score, GLTR managed to achieve the highest accuracy of 65% on Spanish text.
§.§ Our experience using the LLM-generated text detectors
Generally, many LLM-generated text detectors are intuitive to use, similar to many online similarity detectors for identifying text plagiarism <cit.>. They have a web-based interface where a user can paste the text whose originality they want to check. GPTZero and CheckForAI allow their users to upload a document instead.
While there are a number of LLM-generated text detectors, only two of them have their technical reports publicly available (GPT-2 Output Detector <cit.> and GLTR <cit.>). This is possibly due to at least two reasons. First, technical reports might be misused by some individuals to trick the detectors. Second, some detectors are commercial.
Most LLM-generated text detectors do not facilitate API integration. GPTZero, GPTKit, OriginalityAI, and CopyLeaks provide such a feature for a fee. Without API integration, it is challenging to integrate the detectors into existing teaching environments, especially learning management systems. Used on their own, the detectors are labor intensive, since each submission has to be checked by hand.
As many of the detectors are commercial, their code is not publicly available. This makes it difficult for instructors to further develop the detectors to fit their particular needs. The only open-source detectors are the GPT-2 Output Detector and GLTR.
The detectors are also limited in the input formats they support. Most of them only allow raw text pasted in a form, making them difficult to automate. The PDF parsers that we attempted to use often parsed content in an incorrect order and had a tendency to include unwanted characters. We had to write custom scripts to extract all of the relevant information into plain text.
Detection results are challenging to interpret. Detectors attempt to combat this problem by highlighting content that is more likely to be AI-generated. Table <ref> shows the highlighting support each detector provides. Highlighting is provided on either a paragraph, sentence, or a word basis.
While highlighting does seem to mitigate some barriers, we found that the highlighting feature can still be misleading. This was particularly evident in GPTZero, which highlighted 52 human submissions as either possibly or entirely AI-generated. Figure <ref> shows a sample human report where some sentences were highlighted as more likely to be written by AI. It is unclear what makes the highlighted text more likely be written by AI than the other sentences.
In terms of output quality, it seems like the detectors are limited in their ability to export results. Nevertheless, some detectors were more effective than others. We provided screenshots of GPTKit, GPTZero, and Originality in this report since they provided more detailed results and it was easier to screenshot the results along with the text in contrast to the other detectors. It was more challenging to show full results of other detectors as they did not allow side-by-side results.
Most LLM-generated text detectors only support English as the language of LLM-generated text. While one can still send text in other languages, the results do not appear meaningful as we previously showed.
As many LLM-generated text detectors are commercial and relatively new, there appear to be mostly individual pricing options. GPTZero and CopyLeaks, for instance, have business pricing; GPTZero currently has a subscription plan for business users at $19.99 USD per month.
These detectors might be far less useful for instructors living in countries with weaker currencies, as the pricing options are only available in USD.
§ DISCUSSION
The current state of LLM-generated text detectors suggests that they are not yet ready to be trusted blindly for academic integrity purposes or to serve as reliable plagiarism detectors in the way that Turnitin, MOSS, or JPlag do. Our study demonstrates that several of the newer detectors under-perform compared to the GPT-2 Output Detector and GLTR, which are older, freely available detectors from 2019.
At first glance, it appears that LLM-generated text detectors are fairly accurate with human data being correctly detected ∼89.59%[this percentage is the average accuracy for human data using Tables <ref> and <ref>.] while the average accuracy for ChatGPT-generated data is substantially lower; ∼77.42%[this percentage is the average accuracy for ChatGPT-generated data using Tables <ref> and <ref>.]. Upon deeper inspection, it is apparent that the number of potential false positives can lead to a wide array of issues, especially if being trusted for plagiarism detection at educational institutions.
Delving further, when a paraphraser (in this case, QuillBot) is utilized, the average accuracy is slightly reduced for human data, ∼89.02%[this percentage is the average accuracy for human data using Tables <ref> and <ref>.], but the accuracy on ChatGPT-generated data is substantially reduced, to ∼49.17%[this percentage is the average accuracy for ChatGPT-generated data using Tables <ref> and <ref>.]. This means that in more than half of all cases, ChatGPT-generated data cannot correctly be identified by these detectors. Though some detectors perform better than others (e.g., GLTR), this is still a serious concern for users of these detectors.
Additionally, once non-English languages are introduced, the performance of these detectors degrades considerably. We investigate submissions made in Spanish and see that the average accuracy for human data lowers to an average of ∼70.99%[this percentage is the average accuracy for human data using Tables <ref> and <ref>.], and that for ChatGPT-generated data reduces to an abysmal ∼17.50%[this percentage is the average accuracy for ChatGPT-generated data using Tables <ref> and <ref>.]. Though only Spanish was investigated, this highlights the need for additional research into other non-English languages.
Presently, all LLM-generated text detectors struggle with languages other than English, code, and special symbols, resulting in fairly inaccurate results. As a point of clarity, it would be ideal for these detectors to explicitly state their limitations and aim to produce human predictions in such cases.
In terms of usability, LLM-generated text detectors need some improvements. Although they are intuitive to use and generate acceptable reports, many of them are not well documented at a technical level, some do not have APIs making them more difficult to integrate into local and larger systems (e.g., Learning Management Systems), and the support of these detectors is limited. Furthermore, some of these detectors require processing fees.
From our results, LLM-generated text detectors appear to lack in understandability. We are aware that all of these detectors leverage similar large language models for detection purposes. However, they might differ in terms of their technical implementation, parameters, pre-trained data, etc. These are unlikely to be revealed since most of the detectors are for commercial-use and, thus, proprietary. While some detectors highlight sentences that are more likely to be AI-generated (Table <ref>), the results produced by the detectors are not clear enough for users of these detectors.
§ THREATS TO VALIDITY
Our study has several threats to validity:
* The findings of the study reflect detector results that are accurate as of April 2023. The detectors are volatile, and owners of these detectors could update their models. Results could change based on updates to LLM-generated text detectors.
* Accuracy, false positives, and resilience were arguably sufficient to represent effectiveness. However, additional findings can be obtained by considering other effectiveness metrics.
* The data sets were obtained from two institutions; one uses English as the operational language while another uses Spanish. This means that the findings might not be generalizable to other institutions, especially those with different operational languages.
* While we believe that the data sets are sufficient to support our findings, we acknowledge that more data sets can strengthen the findings.
§ CONCLUSION
This paper examines eight LLM-generated text detectors on the basis of effectiveness. The paper shows that while detectors manage to achieve a reasonable accuracy, they are still prone to flaws and can be challenging to interpret by the human eye. Ultimately, LLM-generated text detectors, while not yet reliable for academic integrity or plagiarism detection, show relatively accurate results for human-generated data compared to ChatGPT-generated data. However, false positives are a significant concern, especially when used for plagiarism detection in educational institutions. When a paraphrasing tool like QuillBot is employed, the accuracy decreases for both human and ChatGPT-generated data. Additionally, the detectors struggle with non-English languages, resulting in even lower accuracy. It is crucial for these detectors to acknowledge their limitations and aim for improved performance in various language contexts.
§.§ Future Work
Future detectors could attempt to incorporate a combination of metrics along with their accuracy for AI detectors. A combination of many factors along with the accuracy and false positive rates may give educators better insights into the predictions. This could include text-based features such as burstiness and repetition as well as AI-learned features such as probabilities. These detectors could further be fine-tuned for specific domains to improve their reliability.
Additionally, there is a fundamental need to have accurate and understandable LLM-generated text detectors available for educators to combat against the rising concern of academic integrity due to these publicly available LLMs. It is also important for the researchers to contact the creators of these detectors to better understand the related issues and needs of the end users, but also to facilitate a deeper conversation about the functionality and correctness of their instruments.
Finally, there is an apparent need to investigate the use of non-English languages using these detectors as large language models, like the one(s) used by ChatGPT, can produce content in languages other than English.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.04685v1 | 20230710164026 | The Mikheyev-Smirnov-Wolfenstein Matter Potential at the One-loop Level in the Standard Model | [
"Jihong Huang",
"Shun Zhou"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
The Mikheyev-Smirnov-Wolfenstein Matter Potential at the One-loop Level in the Standard Model
Jihong Huang [E-mail: [email protected]],
Shun Zhou [E-mail: [email protected] (corresponding author)]
Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
When neutrinos are propagating in ordinary matter, their coherent forward scattering off background particles results in the so-called Mikheyev-Smirnov-Wolfenstein (MSW) matter potential, which plays an important role in neutrino flavor conversions. In this paper, we present a complete one-loop calculation of the MSW matter potential in the Standard Model (SM). First, we carry out the one-loop renormalization of the SM in the on-shell scheme, where the electromagnetic fine-structure constant α, the weak gauge-boson masses m^_W and m^_Z, the Higgs-boson mass m^_h and the fermion masses m^_f are chosen as input parameters. Then, the finite corrections to the scattering amplitudes of neutrinos with the electrons and quarks are calculated, and the one-loop MSW matter potentials are derived. Adopting the latest values of all physical parameters, we find that the relative size of one-loop correction to the charged-current matter potential of electron-type neutrinos or antineutrinos turns out to be 6%, whereas that to the neutral-current matter potential of all-flavor neutrinos or antineutrinos can be as large as 8%. The implications of such corrections for neutrino oscillations are briefly discussed.
§ INTRODUCTION
In the past quarter of a century, neutrino oscillation experiments have provided us with robust evidence that neutrinos are massive and leptonic flavor mixing is significant <cit.>. For the neutrinos propagating in matter, the coherent forward scattering of neutrinos off the background particles leads to the Mikheyev-Smirnov-Wolfenstein (MSW) matter potential and could modify neutrino flavor conversions in a remarkable way <cit.>. To be explicit, at the tree level in the Standard Model (SM), the effective Hamiltonian for neutrino oscillations in matter receives extra potential terms, i.e., V_e^ = V_ CC^ + V_ NC^ for electron neutrinos and V_μ^ = V_τ^ = V_ NC^ for muon and tau neutrinos, where the charged-current (CC) and the neutral-current (NC) contributions are given by
V_ CC^ = √(2) G_μ^ N_e^ , V_ NC^ = -G_μ^/√(2)[(1 - 4 sin^2θ_ w^) (N_e^ - N_p^) + N_n^] .
In Eq. (<ref>), G_μ^ is the Fermi constant determined from the muon lifetime, N_e^, N_p^ and N_n^ are respectively the net number densities of electrons, protons and neutrons, and θ_ w^ is the weak mixing angle. For antineutrinos, the MSW matter potentials V^_α (for α = e, μ, τ) change accordingly to opposite signs. As the NC potential V^_ NC is universal for three neutrino flavors, only the CC potential V^_ CC for electron (anti)neutrinos is relevant for neutrino flavor conversions in matter.
At the one-loop level in the SM, it has been known for a long time that the NC potentials V^α_ NC become dependent on the charged-lepton masses m^_α (for α = e, μ, τ). Given the strong hierarchy of charged-lepton masses m_e^≪ m_μ^≪ m_τ^ and N_n^ = N_p^ = N_e^ for ordinary matter, one can estimate the ratio of the flavor-dependent part of one-loop NC potential to the tree-level CC potential as below <cit.>
ϵ^_μτ≡ (V^τ_ NC - V^μ_ NC)/ V_ CC≈ - [3α/(2πsin^2θ_ w^)] (m_τ^2/m_W^2)[ln(m_τ^2/m_W^2) + 5/6] ,
where α≡ e^2/(4π) denotes the electromagnetic fine-structure constant. With the input values of α = 1/137, m^_W = 80.377 GeV, m^_Z = 91.1876 GeV and m^_τ = 1.777 GeV, one has sin^2θ^_ w = 1 - m^2_W/m^2_Z ≈ 0.223 and thus finds the ratio in Eq. (<ref>) to be ϵ^_μτ≈ 5.19× 10^-5. Although such a correction is extremely small, it causes the difference between the matter potential of ν^_μ and that of ν^_τ, which affects greatly the flavor conversions of supernova neutrinos in the dense-matter environment <cit.>. Further discussions about the impact of ϵ^_μτ on neutrino oscillations can be found in Refs. <cit.>.
In the calculation of ϵ^_μτ, however, the previous works <cit.> concentrate on the flavor-dependent radiative corrections, e.g., V^τ_ NC - V^μ_ NC, instead of the one-loop NC potentials V^α_ NC themselves (for α = e, μ, τ). Moreover, the one-loop radiative corrections to the CC potential have not been studied thus far. Therefore, it is interesting to calculate neutrino matter potentials in the SM at the one-loop level, including the NC potential V^α_ NC for three-flavor neutrinos and the CC potential V^_ CC for the electron neutrino. The motivation for such a calculation is two-fold. First, the flavor-independent part of the one-loop NC potential V^α_ NC is irrelevant for flavor oscillations of three active neutrinos, but may be important for active-sterile neutrino oscillations, particularly in the supernova environment <cit.>. Second, the future long-baseline accelerator neutrino oscillation experiments, such as DUNE <cit.> and T2HK <cit.>, will be able to determine neutrino mass ordering and probe leptonic CP violation, and they are already sensitive enough to the Earth matter effects. Obviously, the precise calculation of V^_ CC at the one-loop level is necessary to achieve high-precision measurements of the neutrino mass ordering and the CP-violating phase.
In this work, we carry out a complete one-loop calculation of the MSW potentials in the SM. More explicitly, after performing one-loop renormalization of the SM in the on-shell scheme <cit.>, we compute the scattering amplitudes for ν_α^ + f →ν_α^ + f at one loop, where f = u, d, e are the SM fermions in ordinary matter. For the electron neutrino ν_e^, both CC and NC interactions must be taken into account, while only the latter is considered for ν_μ,τ^. For both NC and CC interactions, since the distributions of background particles are assumed to be homogeneous and isotropic, only the vector-type couplings c_ V, NC^f and c^f_ V, CC are directly involved in matter potentials. After obtaining finite scattering amplitudes, we extract the matter potentials by comparing the obtained amplitudes and those generated by the effective weak Hamiltonian of neutrino interactions in the forward limit. After inputting the latest values of all physical parameters, we find that the one-loop correction to the NC potential is about 8%, while that to the CC potential is about 6%. In the future long-baseline accelerator neutrino oscillation experiments, e.g., DUNE and T2HK, it is promising to probe the one-loop correction to the CC potential.
The remaining part of this paper is organized as follows. In Sec. <ref>, we outline the basic strategy for one-loop calculations of the MSW matter potentials in the SM, and explain the notations and the on-shell scheme of the one-loop renormalization implemented in our calculations. The analytical results for the one-loop NC and CC potentials are presented in Sec. <ref> and Sec. <ref>, respectively. Then, in Sec. <ref>, we specify the input parameters and evaluate the one-loop corrections. The impact of such corrections on the long-baseline accelerator neutrino oscillation experiments is briefly discussed. We summarize our main results in Sec. <ref>. For completeness, the renormalization of the SM and some details of our calculations are given in Appendix <ref>.
§ STRATEGY FOR ONE-LOOP CALCULATIONS
In this section, we explain how to calculate the one-loop potentials in the SM. For the low-energy neutrinos propagating in ordinary matter, the coherent forward scattering with background particles modifies their dispersion relations and its impact on neutrino flavor conversions can be described by the effective potentials at the amplitude level. The ordinary matter is composed of protons, neutrons and electrons, so the NC interactions contribute to the matter potentials for all-flavor neutrinos, whereas the CC interaction is relevant only for the electron neutrinos.
§.§ Effective Hamiltonians and Matter Potentials
The amplitudes for relevant two-body scattering processes ν^_α + f →ν^_α + f, with α = e, μ, τ and f = u, d, e, can be divided into the NC and CC parts. For the NC part, we can directly read it off from the low-energy effective Hamiltonian
H_ eff^ NC (x) = G_μ^/√(2)[ν_α^ (x)γ^μ(1-γ^5) ν_α^ (x)] [f(x)γ_μ(c_ V, NC^f - c_ A, NC^f γ^5 ) f (x)] ,
where c^f_ V, NC and c^f_ A, NC refer respectively to the vector-type and axial-vector-type couplings for the NC interaction. At the tree level, these couplings in the SM have been collected in Table <ref>.
Assuming the distribution of background fermions to be homogeneous and isotropic, one can average the effective Hamiltonian over all possible states of background fermions and then obtain the effective potential for the SM left-handed neutrinos <cit.>
V_ NC^ = √(2) G_μ^ N_f^ c_ V, NC^f ,
where N^_f is the net number density of the background fermion f and only the vector-type coupling c^f_ V, NC is involved. Notice that the NC potential is independent of neutrino flavors at the tree level.
For electron neutrinos, the CC part of the two-body scattering amplitude can be derived from the effective Hamiltonian
H_ eff^ CC (x) = G_μ^/√(2)[ν_e^ (x)γ^μ(1 - γ^5 ) ν_e^ (x)] [e(x)γ_μ(c^e_ V, CC -c^e_ A, CCγ^5) e (x)] ,
where the Fierz transformation has been performed and c^e_ V, CC = c^e_ A, CC = 1 in the SM. In a similar way to the derivation of the NC potential, one can easily get the CC potential of electron neutrinos
V_ CC^ = √(2) G_μ^ N_e^ c^e_ V, CC .
Therefore, the total matter potential for electron neutrinos is V^_e = V^_ CC + V^_ NC, while those for muon and tau neutrinos are V^_μ = V^_τ = V^_ NC. For ordinary matter composed of protons, neutrons and electrons, together with the vector-type couplings in Table <ref>, one can simply use N^_u = 2N^_p + N^_n and N^_d = 2 N^_n + N^_p and the condition of charge neutrality N^_p = N^_e to reproduce the results of V^_ CC and V^_ NC in Eq. (<ref>).
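This bookkeeping can be made explicit with a short sketch, in which the tree-level vector-type couplings are assumed to take their standard SM values (i.e., the entries of Table <ref>) and the number densities are left as free inputs.
import math

def tree_level_potentials(Ne, Np, Nn, G_mu, s2w):
    # Standard SM tree-level NC vector couplings (assumed values of Table 1).
    cV = {"u": 0.5 - 4.0 * s2w / 3.0,
          "d": -0.5 + 2.0 * s2w / 3.0,
          "e": -0.5 + 2.0 * s2w}
    # Number densities of u and d quarks in ordinary matter.
    Nf = {"u": 2.0 * Np + Nn, "d": Np + 2.0 * Nn, "e": Ne}
    V_NC = math.sqrt(2.0) * G_mu * sum(cV[f] * Nf[f] for f in cV)
    V_CC = math.sqrt(2.0) * G_mu * Ne
    return V_CC, V_NC
With N^_u = 2N^_p + N^_n and N^_d = 2N^_n + N^_p, the sum reproduces V^_ NC = -(G^_μ/√(2))[(1 - 4 sin^2θ_ w)(N^_e - N^_p) + N^_n] of Eq. (<ref>), and imposing charge neutrality N^_p = N^_e reduces it to -(G^_μ/√(2)) N^_n, independently of the weak mixing angle.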
From the above derivations of the tree-level matter potentials, it is evident that one should calculate the renormalized scattering amplitude of ν^_α + f →ν^_α + f at the one-loop level and then find out the effective Hamiltonian corresponding to the loop-corrected amplitude. Starting with the loop-level effective Hamiltonian, we can extract the coefficient of the vector-type interactions involving the background particles. More explicitly, for the NC part, we identify the correction to the vector-type coupling c_ V,NC^f, denoted as Δ c_ V,NC^f, i.e., the difference between the loop-corrected coupling and its tree-level value c^f_ V,NC. For definiteness, we take the Fermi constant to be G^_μ as determined precisely from muon decays. The one-loop NC potential is then given by V^α_ NC = √(2)G^_μ N^_f (c^f_ V,NC + Δ c^f_ V,NC), whereas the tree-level one reads V^_ NC = √(2)G^_μ N^_f c^f_ V,NC. In this case, the relative magnitude of the one-loop correction to the NC potential is characterized by Δ c^f_ V,NC/c^f_ V,NC, as G^_μ will anyway be assigned the experimentally measured value in both tree- and loop-level calculations. Similarly, the correction to the CC potential will be represented by Δ c^e_ V, CC/c^e_ V, CC, where Δ c_ V,CC^e denotes the difference between the loop-corrected coupling and the tree-level value c^e_ V,CC.
§.§ On-shell Renormalization
The one-loop renormalization of the SM in the on-shell scheme can be found in the monograph <cit.> and also in many excellent review papers <cit.>. For completeness, a brief summary of the on-shell renormalization of the SM is presented in Appendix <ref>, and the basic procedure is sketched in this subsection in order to explain our conventions.
For the classical Lagrangian of the standard electroweak theory, we shall closely follow the definitions and notations in Ref. <cit.>. As usual, the quantization of the SM can be performed by introducing the gauge-fixing terms and the Faddeev-Popov ghosts, and then the Feynman rules can be derived, where the 't Hooft-Feynman gauge will be chosen for simplicity. At the one-loop level, the ultraviolet (UV) divergences in the one-point Green's function (i.e., the Higgs tadpole diagrams), one-particle-irreducible two-point Green's functions and three-point vertex functions can be separated out by using the dimensional regularization, where the space-time dimension is set to d = 4 - 2ϵ and the UV-divergent term in the limit of ϵ→ 0 shows up as
Δ≡1/ϵ - γ_ E^ + ln (4π) ,
where γ_ E^≈ 0.577 is the Euler-Mascheroni constant. In principle, only the particle masses and coupling constants need to be renormalized to guarantee finite S-matrix elements in the SM <cit.>, but the wave-function renormalization of physical fields is necessary to keep the Green's functions finite as well.
After expressing the bare model parameters and physical fields in terms of the renormalized ones and the corresponding counterterms, as summarized in Appendix <ref>, one can calculate the Higgs tadpole diagrams, two-point Green's functions and three-point vertex functions, which are in general UV-divergent. Then, the on-shell renormalization conditions on the renormalized Green functions are imposed to remove the UV-divergences and thus determine the counterterms. Finally, a complete set of renormalized parameters are chosen as inputs and implemented to calculate the S-matrix elements of our interest. Some comments are helpful.
* Input parameters. As has been done in Ref. <cit.>, we shall choose the input parameters as the fine structure constant α, the W-boson mass m_W^, the Z-boson mass m^_Z, the Higgs-boson mass m_h^, and the charged-fermion masses m_f^. Since m_W^ and m_Z^ have been chosen as input parameters, the weak mixing angle is defined via cosθ_ w≡ m_W^ / m_Z^. For later convenience, the abbreviations c ≡cosθ_ w^ and s ≡sinθ_ w^ will be used. Moreover, s^_2 w≡sin2θ_ w^ = 2sc and c_2 w^≡cos 2θ_ w^ = c^2 - s^2 are also implemented to simplify the expressions.
With the physical parameters chosen above, the electromagnetic coupling constant e = √(4πα) is related to the weak gauge coupling constant g via the weak mixing angle, i.e., e = g s. Whenever the coupling constants e and g appear, their definitions should be understood in terms of the fine-structure constant α and the weak mixing angle θ^_ w.
* One-loop amplitudes. The contributions to the amplitudes of ν^_α + f →ν^_α + f at the one-loop level can be divided into three categories, i.e., the self-energies of weak gauge bosons including the tadpole diagrams, the vertex corrections and the box diagrams. With all the counterterms previously determined in the on-shell scheme, the UV-divergent terms are all canceled out and the finite corrections are obtained. The one-loop diagrams have been calculated by using Package-X <cit.>, and the Passarino-Veltman functions <cit.> are implemented to express one-loop integrals as in Appendix <ref>.
In the following expressions, x_i^≡ m_i^2/m_W^2 and y_i^≡ m_i^2/m_Z^2 are introduced with “i" referring to the particle type. The fermion masses for external legs are retained, but they are much smaller compared to the gauge-boson masses and thus all the terms of O(x_e^) or O(x_q^) for q = u,d can be safely neglected. It should be noticed that as we are interested in the forward scattering amplitudes, the diagrams with the photon propagator with p^2 = 0 attached to the charged fermions will not contribute due to the on-shell renormalization of the electric charge. In addition, neutrinos are massless in the SM and the quark flavor mixing is ignored. For the latter assumption, the reason is simply that the off-diagonal elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix are much smaller than the diagonal ones and the vertices involving a pair of quarks not in the same isospin-doublet are highly suppressed.
* Finite corrections. Once the finite corrections to the amplitudes are obtained, one can extract the vector-type coefficients in the corresponding effective Hamiltonian and derive the one-loop corrections to the matter potentials of neutrinos. For the NC part, the renormalized self-energy of the Z-boson, the neutrino or charged-fermion vertex, and the box diagrams are denoted as iΣ_Z^ r, i e Γ^ r_ν_αν_α Z or i e Γ^ r_ffZ and i M_ NC^f, respectively, so the correction to the vector-type coupling is
Δ c_ V,NC^f = (-Σ_Z^ r/m_Z^2 + s_2 w^Γ_ν_α^ν_α^ Z^ r) c_ V,NC^f + s_2 w^Γ_ffZ^ r - 4m_W^2/g^2 M_ NC^f .
Similarly, for the CC part, with the renormalized self-energy of the W-boson, the corrected vertex, and the box diagrams denoted as iΣ_W^ r, i e Γ^ r_ν_e^ e W and i M_ CC^, respectively, the correction to the vector-type coupling turns out to be
Δ c_ V,CC^e = (-Σ_W^ r/m_W^2 + 2×√(2) s Γ_ν_e^ e W^ r) c_ V,CC^e - 4m_W^2/g^2 M_ CC^ .
Note that the factor of two associated with the vertex correction Γ_ν_e^ e W^ r in Eq. (<ref>) arises from the fact that the ν^_e-e-W vertex appears twice in the diagrams.
The self-energy, vertex and box contributions on the right-hand sides of Eqs. (<ref>) and (<ref>) will be presented in Sec. <ref> and Sec. <ref>, respectively. With the latest values of the input parameters, we shall evaluate these finite corrections in Sec. <ref>.
§ THE NEUTRAL-CURRENT POTENTIAL
§.§ The Fermi Constant
As shown in Eqs. (<ref>) and (<ref>), the NC and CC potentials at the tree level are usually given in terms of the Fermi constant G^_μ, which is related to the adopted physical parameters by G^_μ = g^2/(4√(2)m^2_W) = πα/(√(2) m^2_W s^2). At the one-loop level, however, such a relation is corrected as
g^2/4 √(2) m_W^2≡G_μ^(1 - Δ r ) ,
where G_μ^ stands for the one-loop corrected Fermi constant and the finite radiative corrections are collected in Δ r. With the help of Eqs. (<ref>)-(<ref>), we can evaluate Δ r by <cit.>
Δ r = - . ∂Σ_ T^A(p^2)/∂ p^2|_p^2=0 + c^2/s^2[Σ_ T^Z(m_Z^2)/m_Z^2 - Σ_ T^W(m_W^2)/m_W^2] + Σ_ T^W(m_W^2) - Σ_ T^W(0)/m_W^2
- 2c/sΣ_ T^AZ (0)/m_Z^2 + α/4π s^2[6+7-4s^2/2s^2ln(m_W^2/m_Z^2)] .
Since the Fermi constant determined from the muon lifetime is the most precise, it is convenient to use it in studies of low-energy weak interactions. For the tree-level matter potential, one may just input the value of G^_μ extracted from the muon lifetime, namely, G^_μ = G^ exp_μ. On the other hand, at the one-loop level, we implement the relation in Eq. (<ref>) to determine G^_μ from the same experimental observation, i.e., G^_μ (1 - Δ r) = G^ exp_μ. In this case, the tree-level matter potential is given by V = √(2)G^ exp_μ N^_f c^f_ V, while the one-loop potential reads V^ 1-loop = √(2)G^_μ (1 - Δ r) N^_f (c^f_ V + Δ c^f_ V) = √(2)G^ exp_μ N^_f (c^f_ V + Δ c^f_ V). As the experimental value G^ exp_μ is used to evaluate the matter potential at either the tree or the one-loop level, we shall characterize the magnitude of radiative corrections by
V^ 1-loop/V - 1 = [√(2) G_μ^ exp N_f^ (c_ V^f + Δ c_ V^f )]/[√(2)G^ exp_μ N^_f c^f_ V] - 1 = Δ c_ V^f/c_ V^f .
It is worthwhile to mention that Eq. (<ref>) is applicable to both NC and CC potentials, for which one should make use of the corresponding vector-type couplings and their radiative corrections. Therefore, in the subsequent discussions, we focus only on the radiative corrections to the vector-type couplings.
§.§ Self-energy of Z-boson
The relevant Feynman diagrams of the scattering ν_α^ + f →ν_α^ + f for the NC potential have been shown in Fig. <ref>. After calculating the one-loop amplitudes, we can extract the corrections to the vector-type coupling c_ V,NC^f.
First, let us look at the self-energy of Z-boson in Fig. <ref>-(3), where the shaded circle represents all possible contributions. The self-energy of Z-boson contributes to Δ c_ V,NC^f as -(c_ V,NC^f / m_Z^2) Σ_Z^ r, where iΣ_Z^ r denotes the renormalized self-energy.
* Bosonic Contributions. The bosonic contributions to the Z-boson self-energy involve gauge bosons, the Higgs boson, the Goldstone bosons and the Faddeev-Popov ghosts running in the loop. The final result can be written as
(4π)^2 Σ_Z- b^ r = g^2 m_Z^2/8 c^2 (1-y_h^)(y_h^4-6y_h^3+17y_h^2-22y_h^+4)ln y_h^
-3/2 g^2 m_Z^2 (4 c^4+4 c^2-1) DiscB(m_Z^2,m_W^,m^_W)
+g^2 m_Z^2/4 c^2 (y_h^ - 4 )(y_h^3 -7y_h^2 + 20y_h^ -28) DiscB(m_Z^2,m_h^,m_Z^)
+g^2 m_Z^2/24 c^2(6y_h^2 -21y_h^ -288c^6-264c^4 +112c^2 +49 ) ,
where the function DiscB(p^2,m_0^,m_1^) is related to the Passarino-Veltman function via
B^_0 (p^2;m^_0,m^_1) = Δ + ln(μ^2/m_1^2) + 2 + DiscB(p^2,m^_0,m^_1) - [(m_0^2-m_1^2+p^2)/(2 p^2)] ln(m_0^2/m_1^2) ,
with μ being the renormalization mass scale. The explicit form of DiscB(p^2,m_0^,m_1^) reads
DiscB(p^2, m^_0, m^_1) = [√(λ(m_0^2,m_1^2,p^2))/p^2] ln[(m_0^2 + m_1^2 - p^2 + √(λ(m_0^2,m_1^2,p^2)))/(2m^_0 m^_1)] ,
where the Källén function
λ(x,y,z) ≡ x^2+y^2+z^2-2xy-2yz-2zx
has been defined (a short numerical sketch of these two auxiliary functions is given below, after the fermionic contributions).
* Fermionic Contributions. For the fermions running in the loop, we have
(4π)^2 Σ_Z- f^ r = ∑_f 4 e^2 m_Z^2 /12 y^_f-3{ 6 y^_f [a_f^2 (1-4 y^_f)+2 v_f^2 y^_f] DiscB(m_Z^2,m^_f,m^_f) .
. +(4 y^_f-1) [a_f^2 (1-12 y^_f)+v_f^2 (6 y^_f+1)] } ,
where we have defined v_f^≡ c_ V,NC^f/s^_2 w and a_f^≡ c_ A,NC^f/s^_2 w. Note that the summation is over all the SM fermions and three colors for each type of quarks are taken into account.
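For completeness, the auxiliary functions DiscB and λ defined above can be evaluated numerically as in the following sketch; complex arithmetic keeps the expressions well defined above and below thresholds, although the proper iε prescription for the branch choice is not tracked here.
import cmath

def kallen(x, y, z):
    # Kallen function lambda(x, y, z) defined above.
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def disc_b(p2, m0, m1):
    # DiscB(p^2, m0, m1) as defined above, with arguments in the same units.
    lam = kallen(m0 * m0, m1 * m1, p2)
    sqrt_lam = cmath.sqrt(lam)
    arg = (m0 * m0 + m1 * m1 - p2 + sqrt_lam) / (2.0 * m0 * m1)
    return sqrt_lam / p2 * cmath.log(arg)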
§.§ Vertex Contributions
Then, we calculate the vertex corrections, for which the Feynman diagrams have been depicted in Fig. <ref>-(2) and (4). For later convenience, we introduce the following functions
F_Z^ (p^2) = ∑_f {[4 a_f^2 m_f^2-p^2(a_f^2+v_f^2)] B_0^(p^2;m_f^,m_f^) .
. -4 (a_f^2+v_f^2) B^_00(p^2;m_f^,m_f^) +2 (a_f^2+v_f^2) A^_0(m_f^) } ,
F_W^ (p^2) = ∑_{f,f'}[ (m_f^2+m_f^'^2) B^_0(p^2;m^_f,m^_f')-4 B^_00(p^2;m^_f,m^_f') .
. -p^2 B^_0(p^2;m^_f,m^_f') + A^_0(m^_f)+ A^_0(m^_f') ] ,
F_A^ (p^2) = ∑_f Q_f^2 [ -4 B^_00(p^2;m^_f,m^_f)-p^2 B^_0(p^2;m^_f,m^_f) + 2 A^_0(m^_f) ] ,
F_AZ^ (p^2) = ∑_f Q_f^ v_f^[ -4 B^_00(p^2;m^_f,m^_f) - p^2 B^_0(m_Z^2;m^_f,m^_f) + 2 A^_0(m^_f) ] ,
where Q^_f denotes the electric charge and {f, f^'} refers to the pair of fermions in the same isospin-doublet. As the subscripts of these functions indicate, they represent the contributions from the self-energies of the Z-boson, the W-boson, the photon and the A-Z mixing in Eqs. (<ref>)-(<ref>). In addition, their derivatives F_V^'(m_V^2) ≡ dF_V^(p^2)/dp^2 |_p^2=m_V^2 for V = W,Z,A are also needed.
* The ν_α^-ν_α^-Z Vertex. The contribution to Δ c_ V,NC^f is given by s^_2 w c_ V,NC^f Γ_ν_α^ν_α^ Z^ r with
(4π)^2 Γ_ν_α^ν_α^ Z^ r = -g^2 x_α^/s^_2 w(ln x_α^ +3) + g^2 c^_2 w/s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2 ] + g^2 s/2c[ F_Z^'(m_Z^2) - F_A^'(0)]
+g^2/48 c s^3(120 c^6+68 c^4-106 c^2+17) DiscB(m_Z^2,m^_W,m^_W)
-g^2/6 s^3_2 w(y^_h-4)[ (4 c^2-3) y_h^3-(29 c^2-21) y_h^2 .
.+(88 c^2-60) y^_h -132 c^2+84 ] DiscB(m_Z^2,m^_h,m^_Z)
-g^2 /48 c^5 s^3(96 c^8+88 c^6-100 c^4+14c^2+1) DiscB(m_W^2,m^_W,m^_Z)
+ g^2 c^_2 w/48 c s^3(x_h^2-4 x^_h +12 ) DiscB(m_W^2,m^_h,m^_W)
+g^2 /12 s_2 w^3 [(4c^2-3) y_h^3-(21 c^2-15) y_h^2+(42 c^2-30) y^_h-60 c^2+36] ln y^_h
-g^2 c^_2 w/12 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x^_h-24)ln x^_h
+g^2 /96 c^7 s^3[(12 c^6-6 c^4) y^_h-158 c^6+106 c^4-12 c^2-1 ]ln(m_W^2/m_Z^2)
+g^2 /48 c^5 s [(4 c^2-1) y_h^2-6 c^2 y^_h-240 c^8-356 c^6+252 c^4+10 c^2-1] .
Notice that the flavor-dependent terms proportional to x_α^ are the same as those in Ref. <cit.>, and our results are also consistent with Eqs. (5.46) and (5.47) in Ref. <cit.>.
* The f-f-Z Vertex. With the radiative corrections to the vector-type couplings in the renormalized vertices Γ_ffZ^ r, the total contributions to Δ c_ V,NC^f can be expressed as s_2 w^Γ_ffZ^ r for f=u,d,e. All the terms proportional to the quark and electron masses of O(x_f^) can always be neglected due to the suppression by the W-boson mass.
* u-u-Z Vertex. The renormalized vertex reads
(4π)^2Γ_uuZ^ r = g^2(5-2c^2)/6 s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2] + e^2 (8c^2-5)/6 s^_2 w[ F_Z^'(m_Z^2) - F_A^'(0)]
+4 e^2 /3 m_Z^2 F_AZ^ (m_Z^2) -g^2 (2 c^2-5)/288 c s^3(x_h^2-4 x_h^+12) DiscB(m_W^2,m_h^,m_W^)
-g^2/36 s^3_2 w(y_h^-4)[ (16 c^4-28 c^2+15) y_h^3 - (104 c^4 - 185 c^2+105) y_h^2 .
. +(256 c^4-472 c^2+300) y_h^ -288 c^4+564 c^2-420 ] DiscB(m_Z^2,m_h^,m_Z^)
+g^2 /96 c s^3(320 c^8-360 c^6-236 c^4+398 c^2-23) DiscB(m_Z^2,m_W^,m_W^)
+ g^2/288 c^5 s^3(96 c^8-104 c^6-372 c^4+78 c^2+5) DiscB(m_W^2,m_Z^,m_W^)
+g^2 /72 s^3_2 w[ (16 c^4-28 c^2+15) y_h^3 -(72 c^4 - 129 c^2 + 75) y_h^2 .
. +(144 c^4-258 c^2+150) y_h^ - 96 c^4+204 c^2-180 ] ln y_h^
+ g^2(2 c^2-5) /72 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x_h^-24) ln x_h^
+ g^2 /576 c^7 s^3[(30 c^4-12 c^6) y_h^ + 16 c^8+134 c^6-418 c^4+68 c^2+5]ln(m_W^2/m_Z^2)
+ g^2/288 c^5 s [ (16 c^4-12 c^2+5) y_h^2 -6 c^2 (8 c^2-5) y_h^.
. -1920 c^10+400 c^8+1652 c^6-1100 c^4-42 c^2+5 ] .
* d-d-Z Vertex. The renormalized vertex is given by
(4π)^2Γ_ddZ^ r = - g^2(2c^2+1)/6 s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2 ] + e^2(1-4c^2)/6 s^_2 w[ F_Z^'(m_Z^2) - F_A^'(0)]
-2 e^2/3 m_Z^2 F_AZ^ (m_Z^2) -g^2(2 c^2+1)/288 c s^3(x_h^2-4 x^_h+12) DiscB(m_W^2,m^_h,m^_W)
+g^2/36 s^3_2 w(y^_h-4)[ (8 c^4-8 c^2+3) y_h^3-(52 c^4-49 c^2+21) y_h^2 .
. +(128 c^4-104 c^2+60) y^_h - 144 c^4+84 c^2-84 ] DiscB(m_Z^2,m^_h,m^_Z)
-g^2 /96 c s^3(160 c^8-120 c^6-84 c^4+146 c^2-3) DiscB(m_Z^2,m^_W,m^_W)
+g^2 /288 c^5 s^3(96 c^8+184 c^6+36 c^4-18 c^2-1) DiscB(m_W^2,m^_Z,m^_W)
-g^2 /72 s^3_2 w[ (8 c^4-8 c^2+3) y_h^3 - (36 c^4 - 33 c^2 + 15) y_h^2 .
. +(72 c^4-66 c^2+30) y^_h -48 c^4+12 c^2 -36 ] ln y^_h
+ g^2 (2 c^2+1)/72 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x^_h-24)ln x^_h
-g^2 /576 c^7 s^3[6 (2 c^2+1) c^4 y_h + 8 c^8-170 c^6-50 c^4+16 c^2+1] ln(m_W^2/m_Z^2)
-g^2 /288 c^5 s [ (8 c^4+1) y_h^2 + 6 c^2 (1-4 c^2) y^_h .
. + (960 c^10+160 c^8-292 c^6+172 c^4+6 c^2-1) ] .
* e-e-Z Vertex. The renormalized vertex is
(4π)^2Γ_eeZ^ r = g^2 (2 c^2-3) /2 s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2] + e^2 (3-4 c^2)/2 s^_2 w[ F_Z^'(m_Z^2) - F_A^'(0)]
- 2 e^2/m_Z^2 F_AZ^ (m_Z^2) +g^2 (2 c^2-3) /96 c s^3(x_h^2-4 x_h^+12) DiscB(m_W^2,m_h^,m_W^)
+g^2/12 s^3_2 w(y_h-4)[ (8 c^4-16 c^2+9) y_h^3-(52 c^4-107 c^2+63) y_h^2 .
. +(128 c^4-280 c^2+180) y_h^-144 c^4+348 c^2-252 ] DiscB(m_Z^2,m_h^,m_Z^)
-g^2 /96 c s^3(480 c^8-600 c^6-388 c^4+650 c^2-43) DiscB(m_Z^2,m_W^,m_W^)
-g^2/96 c^5 s^3(96 c^8-8 c^6-236 c^4+46 c^2+3) DiscB(m_W^2,m_W^,m_Z^)
-g^2 /24 s^3_2 w[ (8 c^4-16 c^2+9) y_h^3 - (36 c^4 - 75 c^2 + 45) y_h^2 .
. +(72 c^4-150 c^2+90) y_h^-48 c^4+132 c^2 -108 ] ln y_h^
-g^2 (2 c^2-3)/24 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x_h^-24) ln x_h^
-g^2/192 c^7 s^3[ (18 c^4-12 c^6) y_h^ + 96 c^12-240 c^10+224 c^8 .
. +62 c^6-250 c^4+40 c^2 +3 ] ln(m_W^2/m_Z^2)
- g^2 /96 c^5 s[ (8 c^4-8 c^2+3) y_h^2 - 6 c^2 (4 c^2-3) y_h^.
. - 960 c^10+320 c^8+1004 c^6-676 c^4-26 c^2+3 ] .
This renormalized vertex has also been calculated in Ref. <cit.>, where the results in Eqs. (5.42)-(5.44) agree perfectly with ours.
§.§ Box-diagram Contributions
Finally, we consider the box diagrams shown in Fig. <ref>-(5). The contribution to Δ c_ V,NC^f is actually given by -(4 m_W^2/g^2) M^f_ NC, where the relevant amplitudes from the one-loop box diagrams are expressed as i M^f_ NC with f=u,d,e. These amplitudes are UV-finite, and no renormalization is needed. For the scattering with the neutrino ν_α^, the box diagrams for three different types of background particles lead to
(4π)^2 M_ NC^u = -g^4/8 m_W^2[5-4c^2/4c^2+ x_α^(ln x_α^ +1)] ,
(4π)^2 M_ NC^d = +g^4/2m_W^2[20 c^2-1/16 c^2 +x_α^(ln x_α^+1)] ,
(4π)^2 M_ NC^e = +g^4/2m_W^2[28c^2-9/16c^2+x_α^(ln x_α^+1)] .
The first two results are consistent with Eqs. (7.1)-(7.3) in Ref. <cit.>, whereas the final one is the same as in Eq. (5.51) of Ref. <cit.>. The neutrino flavor-dependent parts have been found to be compatible with the previous calculations in Refs. <cit.>.
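To illustrate the statement about the size of the flavor-dependent pieces, the box-diagram contributions above can be evaluated numerically. The short Python sketch below is not part of the paper's calculation; it assumes the on-shell relation e = g s, uses the input values quoted in the numerical section, and includes the overall factor -(4 m_W^2/g^2) mentioned at the beginning of this subsection.

```python
import math

alpha = 1 / 137.035999084
m_W, m_Z, m_tau = 80.377, 91.1876, 1.777       # GeV
s2 = 1 - (m_W / m_Z) ** 2                      # sin^2(theta_W)
c2 = 1 - s2
g2 = 4 * math.pi * alpha / s2                  # g^2 = e^2 / s^2 (assumed on-shell relation e = g s)
loop = 1 / (16 * math.pi ** 2)                 # the 1/(4 pi)^2 loop factor

x_tau = (m_tau / m_W) ** 2                     # flavor-dependent piece for alpha = tau
fd = x_tau * (math.log(x_tau) + 1)

# Delta c_V,NC^f from the box diagrams = -(4 m_W^2 / g^2) * M_NC^f
dc_u = +0.5 * g2 * loop * ((5 - 4 * c2) / (4 * c2) + fd)
dc_d = -2.0 * g2 * loop * ((20 * c2 - 1) / (16 * c2) + fd)
dc_e = -2.0 * g2 * loop * ((28 * c2 - 9) / (16 * c2) + fd)

print(dc_u, dc_d, dc_e)
# Flavor-dependent part alone: O(1e-6) in magnitude, much smaller than the flavor-independent terms.
print("flavor-dependent piece:", 0.5 * g2 * loop * fd)
```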
§ THE CHARGED-CURRENT POTENTIAL
In parallel to the discussion of the NC potential, there are also three types of radiative corrections to the CC potential V_ CC^; their total contribution will be denoted by Δ c_ V,CC^e. The relevant Feynman diagrams of the elastic scattering between electron neutrinos and electrons ν_e^ +e →ν_e^ + e for the CC potential have been given in Fig. <ref>.
§.§ Self-energy of W-boson
First, we consider the self-energy of W-boson in Fig. <ref>-(3), where the shaded circle represents all possible contributions. The contribution to Δ c_ V,CC^e from the W-boson self-energy can be expressed as -(c_ V,CC^e / m_W^2) Σ_W^ r, where iΣ_W^ r denotes the renormalized self-energy.
* Bosonic Contributions. The W-boson self-energy receives the contributions from all the bosons running in the loop, and the renormalized self-energy is
(4π)^2Σ_W- b^ r = -g^2 m_Z^2/4 c^2(12 c^6+44 c^4-13 c^2-1) DiscB(m_W^2,m_Z^,m_W^)
+g^2 m_W^2/4(x_h-4)(x_h^3-7 x_h^2+20 x_h^ -28 ) DiscB(m_W^2,m_h^,m_W^)
-g^2 m_W^2 /8 (x_h^-1)(x_h^4-6 x_h^3+17 x_h^2-22 x_h^+4) ln x_h^
- g^2 m_Z^2/8 c^4 s^2(16 c^10-4 c^8-118 c^6+83 c^4-10 c^2-1) ln(m_W^2/m_Z^2)
+ g^2 m_W^2 /24 c^4[c^4 (6 x_h^2-21 x_h^ -370)+75 c^2+6] .
* Fermionic Contributions. The self-energy correction with fermions in the loop reads
(4π)^2Σ_W- f^ r = g^2 m_W^2∑_{f,f^'}{m_W^4/6{-x_f^3+x_f^2 x_f^'^+x_f^(x_f^'^2-4 x_f^'^ +3 )-x_f^'^3+3 x_f^'^ -2 /λ(m_f^2,m_f^'^2,m_W^2). .
. -1/m_W^4[3 x_f^2-2x_f^(3 x_f^'^-1)+3 x_f^'^2+2 x_f^'^-2] } DiscB(m_W^2,m_f^,m_f^'^)
+ 1/4(x_f^-x_f^'^)[x_f^4-4 x_f^3 x_f^'^ +x_f^2 (6 x_f^'^2-1)-4 x_f^ x_f^'^3+x_f^'^4-x_f^'^2] ln(x_f^/x_f^'^)
. -1/12[6 x_f^2+3 x_f^(1-4 x_f^'^)+6 x_f^'^2+3 x_f^'^-4 ] } .
We should sum over all the contributions from the SM fermions, where {f, f^'} denotes the pair of fermions in the same isospin-doublet, and take into account three colors for each type of quark.
§.§ Vertex Contributions
Then, we turn to the CC vertex corrections, which have been shown in Fig. <ref>-(2) and (4). The total contribution to Δ c_ V,CC^e from the ν_e^-e-W vertex can be expressed as √(2) s Γ_ν_e^ e W^ r c_ V,CC^e with the renormalized vertex iΓ_ν_e^ e W^ r defined as follows
(4π)^2Γ_ν_e^ e W^ r = g^2/c^2[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2] - e^2 [ F_A^'(0) - F_W^'(m_W^2)/4 s^2]
+g^2 /24 s^2(4-x_h^)[ (c^2-2) x_h^3 - (5 c^2-13) x_h^2 .
. +4 (c^2-8) x_h^ +12 (c^2+3) ] DiscB(m_W^2,m_h^,m_W^)
-g^2/24 c^4 s^2 (60 c^8-8 c^6+71 c^4-22 c^2-2) DiscB(m_W^2,m_W^,m_Z^)
- g^2/24 s^2(y_h^2-4 y_h^+12) DiscB(m_Z^2,m_h^,m_Z^)
+g^2/24 s^2 (48 c^6+68 c^4-16 c^2-1) DiscB(m_Z^2,m_W^,m_W^)
+g^2/48 s^2(y_h^3-6 y_h^2 +18 y_h^ -20 c^2 )ln y_h^
-g^2/48[(c^4+c^2+2) x_h^3 - (6 c^2+9) x_h^2 +18 x_h^ +168 c^2-8] ln x_h^
+g^2/48 c^6 s^2( c^6 y_h^3 -6 c^6 y_h^2 +18 c^6 y_h^ - 48 c^10-36 c^8 .
. +166 c^6-119 c^4+18 c^2+2 )ln(m_W^2/m_Z^2)
+g^2/24 c^4[(c^2+2) y_h^2-6 c^2 y_h^ -96 c^8-224 c^6+32 c^4+23 c^2+2 ] .
As mentioned before, the same CC vertex appears both in Fig. <ref>-(2) and (4), so a factor of two is present in the vertex correction in Eq. (<ref>).
§.§ Box-diagram Contributions
Finally, the contributions from the UV-finite box diagrams should be included, for which the Feynman diagram has been shown in Fig. <ref>-(5). Since the electrons are present in the background, electron neutrinos interact with them via both NC and CC processes. In particular, for the box diagrams, it is impossible to categorize the contributions into either NC or CC type. However, it is clear that both ν^_μ and ν^_τ interact with the background particles only through the NC interaction. For this reason, we select the box diagrams that are universal for all three types of neutrinos as the NC part, whereas the remaining ones as the CC part. The contribution from box diagrams can be written as -(4 m_W^2/g^2) M^_ CC with the amplitude
(4π)^2 M^_ CC = -g^4/8 m_W^2 s^2[2 s^4 (ln x_e^-1)+(2 c^4+6 c^2-3) ln(m_W^2/m_Z^2)] .
Here it is worth mentioning that for the box diagram involving the internal photon propagator, the generalized Fierz identity <cit.>
ν_e^ (x)(1+γ^5) e(x) e(x)(1-γ^5) ν_e^ (x) = -1/2ν_e^ (x)γ_μ(1-γ^5) ν_e^ (x) e(x)γ^μ(1+γ^5) e(x) ,
has been utilized to transform the contributions into the correction to the vector-type coupling.
§ NUMERICAL RESULTS
Given the finite corrections in the previous sections, we now specify the input parameters and evaluate the one-loop corrections to the matter potentials. The latest values of relevant input parameters are quoted from the Particle Data Group <cit.> and summarized below:
* The fine structure constant
α≡ e^2/(4π) = 1/137.035999084 ;
* The gauge-boson and Higgs-boson masses[The latest measurement of W-boson mass given by the CDF-II collaboration is m_W^ = 80.433 GeV <cit.>, yielding a 7σ discrepancy with the SM expectation. However, we have checked that the difference in the correction to the matter potential caused by such a discrepancy appears at the order of O(10^-4).]
m_W^ = 80.377 GeV , m_Z^ = 91.1876 GeV , m_h^ = 125.25 GeV ;
* The quark masses
m_u^ = 2.16 MeV , m_c^ = 1.67 GeV , m_t^ = 172.5 GeV ,
m_d^ = 4.67 MeV , m_s^ = 93.4 MeV , m^_b = 4.78 GeV ;
* The charged-lepton masses
m_e^ = 0.511 MeV , m_μ^ = 105.658 MeV , m_τ^= 1.777 GeV .
All the particle masses quoted above refer to the on-shell masses, except for those of the three light quarks (i.e., u, d and s). Instead, the running masses of the three light quarks at the energy scale of μ = 2 GeV are used, since the on-shell masses of light quarks are not well-defined due to the non-perturbative nature of quantum chromodynamics at low energies.
From Eq. (<ref>), we can observe that the tree-level NC potential induced by each type of fermions in the matter is proportional to the vector-type coupling c_ V,NC^u = 0.2026, c_ V,NC^d = -0.3514 and c_ V,NC^e = -0.0539, where these couplings have been displayed in Table <ref> and evaluated by using s^2 = 1 - m^2_W/m^2_Z ≈ 0.223. The corresponding corrections to these vector-type couplings from the Z-boson self-energy, vertex corrections and box diagrams are listed in Table <ref>, accordingly. The flavor-dependent corrections are labeled as “fd", where we have chosen the flavor α = τ for example. It shows clearly that the flavor-dependent contributions are two to three orders of magnitude smaller than the flavor-independent ones. Therefore, in the final results of Δ c_ V,NC^f in the last column of Table <ref>, we only list the dominant flavor-independent values.
Then, we can translate the NC potential induced by quarks and electrons into that by protons, neutrons and electrons via the relations among their number densities, namely, N_u^ = 2 N_p^ + N_n^, N_d^ = N_p^ + 2 N_n^ and N_e^ = N_p^. The one-loop correction to the NC potential is thus given by
Δ c_ V,NC^/c_ V,NC^ = N_p^(2Δ c_ V,NC^u + Δ c_ V,NC^d + Δ c_ V,NC^e) + N_n^(Δ c_ V,NC^u + 2 Δ c_ V,NC^d)/N_n^( c_ V,NC^u + 2 c_ V,NC^d)≈ 0.062 + 0.02 N^_p/N^_n ,
where the relation 2 c_ V,NC^u + c_ V,NC^d + c_ V,NC^e = 0 has been implemented. Therefore, for the ordinary matter with N^_p ≈ N^_n, the one-loop correction to the NC potential is about 8.2%.
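The quoted numbers can be reproduced with a minimal Python sketch (illustrative only); it assumes the standard tree-level expression c_V,NC^f = I_3^f - 2 Q_f s^2 for the vector-type couplings and simply evaluates the relation above for N_p ≈ N_n.

```python
# Minimal numerical sketch (not from the paper's code): tree-level vector couplings
# and the relative one-loop shift of the NC potential quoted in the text.
m_W, m_Z = 80.377, 91.1876           # on-shell masses in GeV
s2 = 1.0 - (m_W / m_Z) ** 2          # sin^2(theta_W) ~ 0.223

# Assumed standard convention: c_V,NC^f = I_3^f - 2 Q_f s^2.
c_V = {
    "u": +0.5 - 2 * (+2/3) * s2,     # ~ +0.2026
    "d": -0.5 - 2 * (-1/3) * s2,     # ~ -0.3514
    "e": -0.5 - 2 * (-1.0) * s2,     # ~ -0.0539
}
print({f: round(v, 4) for f, v in c_V.items()}, "s^2 =", round(s2, 4))

# Relative one-loop correction to the NC potential from the relation above:
# Delta c_V,NC / c_V,NC ~ 0.062 + 0.02 * N_p / N_n.
def nc_correction(np_over_nn=1.0):
    return 0.062 + 0.02 * np_over_nn

print("NC correction for N_p ~ N_n:", nc_correction(1.0))   # ~ 0.082, i.e. 8.2%
```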
Similar to the case of the NC potential, we collect all the contributions to Δ c_ V,CC^e in Table <ref>. It shows that there is a correction of about 6% to the CC matter potential. Whereas the NC potentials are the same for three-flavor neutrinos, except for the tiny flavor-dependent contributions, this correction to the CC potential of electron neutrinos will play an important role in neutrino flavor conversions. In the near future, the long-baseline accelerator neutrino experiments DUNE and T2HK will make use of the MSW effect to resolve the sign of Δ m^2_31, and also determine the octant of θ_23^ and the CP-violating phase δ_ CP^. The oscillation probability in the appearance channel ν^_μ→ν_e^ with matter effects can be written as <cit.>
P(ν_μ^→ν_e^) ≈ sin ^2θ_23sin ^2 2 θ_13sin ^2(Δ_31-a L)/(Δ_31-a L)^2Δ_31^2
+sin 2 θ_23sin 2 θ_13sin 2 θ_12sin(Δ_31-a L)/(Δ_31-a L)Δ_31sin (a L)/(a L)Δ_21cos(Δ_31+δ_ CP^)
+cos ^2θ_23sin ^2 2 θ_12sin ^2(a L)/(a L)^2Δ_21^2 ,
where Δ_ij^≡Δ m_ij^2 L/(4E) with Δ m^2_ij≡ m^2_i - m^2_j for ij = 21, 31 being the neutrino mass-squared differences and a ≡ V/2 have been defined. Here L is the baseline length and E is the beam energy of neutrinos. The first line on the right-hand side of Eq. (<ref>) denotes the dominant oscillation term driven by Δ m^2_31. As the contributions from the NC potential are identical for three neutrino flavors (when the tiny flavor-dependent parts are neglected), only the CC potential is relevant. At the tree level, we have a = G_μ^ N_e^ /√(2), while the 5.8% correction to the CC potential at the one-loop level should be included.
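For illustration, the following Python sketch implements the approximate appearance probability above. It is not taken from the paper: the unit conversions (the factor 1.267 and the coefficient entering a L) and the oscillation parameters used in the example are standard global-fit-style numbers, and the 5.8% rescaling of the CC potential is applied by hand via the parameter cc_scale.

```python
import math

def prob_mu_to_e(E, L, dm2_31, dm2_21, th12, th13, th23, delta_cp, rho, Ye=0.5, cc_scale=1.0):
    """Approximate P(nu_mu -> nu_e) in matter, following the expansion quoted above.

    E in GeV, L in km, mass-squared differences in eV^2, angles in radians,
    rho in g/cm^3.  cc_scale rescales the CC potential, e.g. 1.058 mimics a
    5.8% one-loop enhancement (illustrative only).
    """
    D31 = 1.267 * dm2_31 * L / E            # Delta_ij = dm2_ij L / (4E), standard conversion
    D21 = 1.267 * dm2_21 * L / E
    # a*L with a = G_F N_e / sqrt(2); 1.93e-4 per (g/cm^3 * km) is an assumed standard conversion.
    aL = cc_scale * 1.93e-4 * rho * Ye * L
    sinc = lambda x: math.sin(x) / x if x != 0.0 else 1.0
    t1 = math.sin(th23)**2 * math.sin(2*th13)**2 * sinc(D31 - aL)**2 * D31**2
    t2 = (math.sin(2*th23) * math.sin(2*th13) * math.sin(2*th12)
          * sinc(D31 - aL) * D31 * sinc(aL) * D21 * math.cos(D31 + delta_cp))
    t3 = math.cos(th23)**2 * math.sin(2*th12)**2 * sinc(aL)**2 * D21**2
    return t1 + t2 + t3

# DUNE-like illustration (L = 1300 km, rho = 2.848 g/cm^3, normal ordering):
p_tree = prob_mu_to_e(2.5, 1300, 2.5e-3, 7.4e-5, 0.59, 0.15, 0.85, -math.pi/2, 2.848)
p_loop = prob_mu_to_e(2.5, 1300, 2.5e-3, 7.4e-5, 0.59, 0.15, 0.85, -math.pi/2, 2.848, cc_scale=1.058)
print(p_tree, p_loop, p_loop - p_tree)
```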
As an example, we now investigate the impact of the one-loop correction to the matter potential on the sensitivity to neutrino mass ordering at DUNE, for which the baseline length is L = 1300 km <cit.> and the average matter density is ρ_ avg^ = 2.848 g/ cm^3 <cit.>. With the global-fit results of neutrino oscillation parameters <cit.>, the oscillation probability in Eq. (<ref>) can be numerically calculated in both cases of normal mass ordering (NO) and inverted mass ordering (IO). Since DUNE is sufficiently sensitive to the difference in the oscillation probabilities between NO and IO cases, we expect that the difference caused by the one-loop correction to the matter potential can also be observed experimentally.
The difference of oscillation probabilities Δ P(ν^_μ→ν^_e) ≡ P^ NO_(ν_μ^→ν_e^) - P^ IO_(ν_μ^→ν_e^) between NO and IO cases at DUNE has been plotted in Fig. <ref>. In the left and middle panels, the tree- and one-loop-level results of Δ P(ν^_μ→ν^_e) are respectively denoted by the black solid curve and blue dashed curve. The difference of Δ P(ν^_μ→ν^_e) between tree- and one-loop-level results is represented by the red dot-dashed curve, where the CP phase has been chosen as δ_ CP^ = -90^∘ and δ^_ CP = 0 in the left and middle panel, respectively. To display the impact of the one-loop correction to the matter potential on the sensitivity to neutrino mass ordering, we show the difference between tree- and one-loop-level results as a function of the neutrino energy E ∈ [1, 5] GeV and δ_ CP^∈ [-180^∘, 180^∘] in the right panel. It can be seen that a per-mille-level difference can be generated by the radiative corrections. In particular, in the region where E ≈ 2 GeV and δ_ CP^ is negative, the difference reaches more than 3‰. Although such a difference is small, it is promising that it can be observed at DUNE, where it has been demonstrated that the matter density can be determined at the percent level <cit.>. Taking into account both the one-loop corrections to the matter potential and the uncertainty in the matter density, we shall carry out a more dedicated study to explore their impact on the determination of neutrino mass ordering and the precise measurements of the CP-violating phase at both DUNE and T2HK in a separate work.
§ SUMMARY
In this paper, we have performed a complete calculation of the MSW matter potential for all-flavor neutrinos at the one-loop level in the SM. Following the on-shell renormalization of the SM, we have calculated the one-loop amplitudes for the coherent forward scattering of neutrinos with the SM fermions present in the ordinary matter. The radiative corrections to the vector-type couplings of neutrinos in both NC and CC processes have been obtained and used to determine the MSW matter potential. With the latest values of the SM parameters, we evaluate the finite corrections to the matter potentials and find that the correction to the NC potential is about 8% while that to the CC potential is about 6%.
In the coming precision era of neutrino oscillation physics, one has to reconsider the radiative corrections at the percent level to the interactions of neutrinos with matter. For instance, the JUNO experiment will push the relative errors in the measurement of the oscillation parameters sin^2θ^_12, Δ m^2_21 and Δ m^2_31 even down to the sub-percent level <cit.>. The next-generation long-baseline accelerator neutrino experiments are expected to determine the neutrino mass ordering, the octant of θ_23^ and the value of the CP-violating phase δ_ CP^. The experimental sensitivities of DUNE and T2HK to these unknown parameters are also sufficiently high to probe the one-loop corrections to the MSW matter potential. In this sense, we believe that our calculations are not only useful for the study of neutrino oscillation phenomenology, but also serve as an instructive example for precision calculations in the whole field of neutrino physics.
§ ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China under grant No. 11835013. One of the authors (J.H.) would like to thank Dr. Di Zhang for helpful suggestions on using FeynArts. All Feynman diagrams in this work are generated by FeynArts <cit.>, and the loop integrals are calculated with the help of Package-X <cit.>.
§ RENORMALIZATION OF THE STANDARD MODEL
In this appendix, we explain some details about the on-shell renormalization of the Standard Model (SM) and list all the relevant one-loop diagrams for completeness.
The renormalization procedure that we have adopted follows closely that in Ref. <cit.>. Instead of repeating the derivations of all the counterterms, we just highlight some key points relevant to our calculations. More details of the on-shell renormalization can be found in a number of excellent reviews <cit.>, where the SM Lagrangian and the Feynman rules are explicitly given.
§.§ Renormalization Constants
Once the set of input physical parameters is chosen, one can decompose the bare parameters and fields, which will be marked by the subscript “0", into the renormalized ones and the counterterms. More explicitly, the bare parameters are given by
e_0^ = Z_e^ e = (1 + δ Z_e^) e ,
m_W,0^2 = m_W^2 + δ m_W^2 ,
m_Z,0^2 = m_Z^2 + δ m_Z^2 ,
m_h,0^2 = m_h^2 + δ m_h^2 ,
m_f,0^2 = m_f^2 + δ m_f^2 ,
while the renormalization of the physical fields is as follows
W_0μ^± = √(Z_W^) W_μ^± = (1 + 1/2δ Z_W^) W_μ^± ,
[ Z_0μ^; A_0μ^ ] = [ √(Z_ZZ^) √(Z_ZA^); √(Z_AZ^) √(Z_AA^) ][ Z_μ^; A_μ ] = [ 1+ 1/2δ Z_ZZ^ 1/2δ Z_ZA^; 1/2δ Z_AZ^ 1 + 1/2δ Z_AA^ ][ Z_μ^; A_μ^ ] ,
h_0^ = √(Z_h^) h = (1 + 1/2δ Z_h^) h ,
f_i,0^ L = √(Z_ij^f, L) f_j^ L = (1+1/2δ Z_ij^f, L) f_j^ L ,
f_i,0^ R = √(Z_ij^f, R) f_j^ R = (1+1/2δ Z_ij^f, R) f_j^ R .
The subscripts i and j of the fermion fields refer to different generations. In our calculations, the flavor mixing among different generations of quarks plays an insignificant role, so we ignore it and its radiative corrections. Hence only the i=j case is considered and the CKM matrix is taken to be the identity matrix. A more careful treatment of the renormalization of the CKM matrix can be found in Refs. <cit.>. In addition, the renormalization of unphysical fields is irrelevant to the one-loop scattering amplitudes and will be neglected as well.
§.§ Fixing the Counterterms
The one-loop self-energies of the scalar and fermion fields are denoted as iΣ, while those of gauge fields as iΣ^_ T with
iΣ_μν^V (p^2) = iΣ_ T^V(g_μν^ - p_μ^ p_ν^/p^2) + iΣ_ L^Vp_μ^ p^_ν/p^2 ,
for V = W, Z, A, AZ. The counterterms are fixed by imposing the on-shell conditions and can be expressed in terms of the self-energies. The mass and wave-function counterterms of gauge bosons and the Higgs boson are given by
δ m_W^2 = - Re Σ_ T^W (m_W^2) , δ Z_W^ = . Re ∂Σ_ T^W (p^2)/∂ p^2|_p^2_ = m_W^2 ,
δ m_Z^2 = - Re Σ_ T^Z (m_Z^2) , δ Z_Z^ = . Re ∂Σ_ T^Z (p^2)/∂ p^2|_p^2_ = m_Z^2 ,
δ m_h^2 = + Re Σ_^h (m_h^2) , δ Z_h^ = - . Re ∂Σ_^h (p^2)/∂ p^2|_p^2_ = m_h^2 .
The counterterms for the photon and A-Z mixing are
δ Z_AA^ = .∂Σ_ T^AA(p^2)/∂ p^2|_p^2 = 0 , δ Z_AZ^ = 2 Re Σ_ T^AZ(m_Z^2)/m_Z^2 , δ Z_Z A^ = - 2 Σ_ T^A Z(0)/m_Z^2 .
Notice that there is a minus sign for the gauge-boson self-energy in our notations compared to those in Refs. <cit.>. Such a difference just arises from the definition of the gauge-boson self-energy, which is denoted as iΣ_ T^ in our work while as - iΣ_ T^ in the previous literature. As a result, all the counterterms corresponding to the gauge-boson self-energies in Eqs. (<ref>) and (<ref>) have an opposite sign.
For the fermion masses and wave functions, the counterterms are fixed by
δ m_f^ = m_f^/2 Re[Σ^f, L_ii(m_f^2) + Σ^f, R_ii(m_f^2) + 2 Σ^f, S_ii(m_f^2)] ,
δ Z_ii^f, L = - Re Σ_i i^f, L(m_f^2) - .m_f^2∂/∂ p^2 Re[Σ_ii^f, L(p^2) + Σ_ii^f, R(p^2) + 2 Σ_ii^f, S(p^2)]|_p^2=m_f^2 ,
δ Z_ii^f, R = - Re Σ_i i^f, R(m_f^2) - .m_f^2∂/∂ p^2 Re[Σ_ii^f, L(p^2) + Σ_ii^f, R(p^2) + 2 Σ_ii^f, S(p^2)]|_p^2=m_f^2 .
As has been mentioned in the main text, the terms of O(x^_f) can be safely neglected, so only the first terms in the wave-function counterterms of fermions need to be taken into account. Note that the fermion self-energy has been decomposed as below
Σ_ii^f(p^2) = p P_ L^Σ_ii^f, L(p^2) + p P_ R^Σ_ii^f, R(p^2) + m^_f Σ_ii^f, S(p^2) ,
with the chiral projection operators P_ L, R^ = (1∓γ^5_)/2.
The renormalization constant of the electric charge can be expressed in terms of the self-energies by implementing the Ward identity, namely,
δ Z_e^ = -1/2δ Z_AA^ - s/2cδ Z_ZA^ ,
which is independent of the fermion species. This occurs as the consequence of the universality of the electric charge.
Finally, although the weak mixing angle has not been chosen as an input parameter, it is usually convenient to introduce a counterterm for it as well and use it to simplify the Feynman rules of the vertex counterterms. However, the counterterms of the cosine and sine of the weak mixing angle are related to the counterterms of gauge-boson masses by
δ c/c = 1/2(δ m_W^2/m_W^2 - δ m_Z^2/m_Z^2) , δ s/s = - c^2/2s^2(δ m_W^2/m_W^2 - δ m_Z^2/m_Z^2) .
§.§ Self-energies
As all the relevant counterterms are governed by the self-energies, we shall explicitly show the results of the self-energies and give some explanations whenever necessary. In our calculations, the tadpole contribution to the gauge-boson self-energies is included. In subsequent discussions, we focus only on the real parts of the transverse self-energies that contribute to the counterterms.
§.§.§ Tadpole
The inclusion of the tadpole diagrams iT renders the mass counterterms of the gauge bosons gauge-independent. All the tadpole diagrams are plotted in Fig. <ref>, and the total contribution is
i T = i g/(4π)^2 4 m_W^[ -8 m_f^2 A_0^(m_f^) + 2 m_h^2 A_0^(m_W^) + m_h^2 A_0^(m_Z^) + 3 m_h^2 A_0^(m_h^) .
. + 4 d m_W^2 A_0^(m_W^) - 4 m_W^2 A_0^(m_W^) + 2 d m_Z^2 A_0^(m_Z^) - 2 m_Z^2 A_0^(m_Z^) ] .
Notice that a symmetry factor of 1/2 should be considered in Fig. <ref>-(1), -(2) and -(7), while a minus sign for the ghost loops in the diagrams (4)-(6) and the fermion loop in the diagram (9) must be included.
§.§.§ Z-boson
The one-loop self-energy corrections for Z-boson are shown in Fig. <ref>. The contribution to the self-energy of Z-boson is
Σ^Z_ T (p^2) = g^2 /(4π)^2 4 c^2{(16 c^4 p^2+8c_2 w^ m_W^2) B_0^(p^2;m_W^,m_W^)-4m_Z^2 B_0^(p^2;m_Z^,m_h^) .
+4 B_00^(p^2;m_h^,m_Z^)+[16 c^4 (d-1)-16 c^2+4] B_00^(p^2;m_W^,m_W^) - A_0^(m_h^)
. - A_0^(m_Z^) -[8 c^4 (d-1)-8 c^2+2] A^_0(m_W^) }
+ 2 e^2/(4π)^2∑_f{[4 a_f^2 m_f^2-p^2 (a_f^2+v_f^2)] B_0^(p^2;m_f^,m_f^) .
. -4 (a^2_f+v^2_f) B_00^(p^2;m_f^,m^_f) +2 (a^2_f+v^2_f) A_0^(m^_f) } .
The summation is over all the SM fermions. In addition, the tadpole diagrams contribute the term of g m_Z^ T/(m_h^2 c).
§.§.§ W-boson
The one-loop diagrams for the W-boson self-energy are listed in Fig. <ref>. The total result is
Σ_ T^W (p^2) = g^2/4(4π)^2{(8 m_W^2 - 4m_Z^2 s^2 + 16 p^2 c^2) B_0^(p^2;m^_W,m^_Z)-4m_W^2 B_0^(p^2;m^_W,m_h^) .
+ 4[4 c^2 (d-2) + 1] B_00^(p^2;m_W^,m_Z^)+4 B_00^(p^2;m_W^,m_h^)
+ 4s^2(λ ^2+4 p^2) B_0^(p^2;m_W^,λ)+16 s^2 (d-2) B_00^(p^2;m_W^,λ)
. +(6-4 d) A_0^(m_W^) -4(d-2) s^2 A_0^(λ)- A_0^(m_h^) +[4c^2(2-d)-1] A_0^(m_Z^) }
+ g^2/2(4π)^2∑_{f,f'}[ (m_f^2 +m_f^'^2) B_0^(p^2;m_f^,m_f^'^)-4 B_00^(p^2;m_f^,m^_f^') .
. -p^2 B_0^(p^2;m_f^,m_f^'^)+ A_0^(m_f^)+ A_0^(m_f^'^) ] .
To avoid the infrared divergence, we have introduced a tiny mass λ for the photon, which should be kept during the whole calculation and then set to zero in the end. The summation is performed over {f,f^'}, which denotes a pair of fermions in the same isospin-doublet. The tadpole contribution is given by g m_W^ T/ m_h^2.
§.§.§ Photon and A-Z Mixing
Different from the cases of gauge bosons, whose self-energies directly contribute to the corrections of the matter potential, the self-energy of the photon and the A-Z mixing are relevant for the counterterm of the electric charge as indicated in Eq. (<ref>). Given one-loop diagrams in Fig. <ref>, the self-energy of the photon A reads
Σ_ T^A (p^2) = 2 e^2/(4π)^2[ (3p^2 + 4m_W^2) B_0^(p^2;m_W^,m_W^)-2(d-2) A_0^(m_W^)]
+ 2 e^2/(4π)^2∑_f Q_f^2 [-4 B_00^(p^2;m_f^,m_f^)-p^2 B_0^(p^2;m_f^,m_f^)+2 A_0^(m_f^)] .
Note that there is no correction to the longitudinal self-energy of the photon, as expected from the unbroken U(1) gauge symmetry.
The Feynman diagrams for the A-Z mixing are similar to those for the photon self-energy, as shown in Fig. <ref>. The analytical expression reads
Σ_ T^AZ(p^2) = g^2 s /(4π)^2 c{[c^2 (3-2d)+s^2] A_0^(m_W^) +2[c^2 (2 d-3)-s^2] B_00^(p^2;m_W^,m_W^) .
. + 2[c^2 (m_W^2+2 p^2)+m_W^2 s^2] B_0^(p^2;m_W^,m_W^) }
+ 2 e^2/(4π)^2∑_f Q_f^ v_f^[-4 B_00^(p^2;m_f^,m_f^)-p^2 B_0^(p^2;m_f^,m_f^) +2 A_0^(m_f^)] .
As in the case of the photon self-energy, the diagrams with the ghost loops and those with the W-ϕ loops give identical corrections.
§.§.§ Fermion
The fermion self-energy will be involved in the vertex counterterms. From the one-loop diagrams in Fig. <ref> and with the decomposition in Eq. (<ref>), we obtain
Σ^f, L (p^2) = -g^2/4(4π)^2{[4 (d-2) s^2 (a_f^+v_f^)^2+x_f^] B_1(p^2;m_f^,m_Z^)+x_f^ B_1(p^2;m_f^,m_h^) .
. +4 (d-2) Q_f^2 s^2 B_1(p^2;m_f^,λ)+2 (d+x_f^'^-2) B_1(p^2;m_f^'^,m_W^) } ,
Σ^f, R (p^2) = -g^2/4(4π)^2{ 4 (d-2) s^2 [(a_f^-v_f^)^2 B_1(p^2;m_f^,m_Z^)+Q_f^2 B_1(p^2;m_f^,λ)] .
+ . x_f^[ B_1(p^2;m_f^,m_h^)+ B_1(p^2;m_f^,m_Z^)+2 B_1(p^2;m_f^'^,m_W^)] } ,
Σ^f, S (p^2) = g^2/4(4π)^2{[4 s^2 d (a_f^2-v_f^2)-x_f^] B_0^(p^2;m_f^,m_Z^) .
. -2 [2 d Q_f^2 s^2 B_0^(p^2;m_f^,λ)+x_f^'^ B_0^(p^2;m_f^'^,m_W^)]+x_f^ B_0^(p^2;m_f^,m_h^) } .
For massless and electrically-neutral neutrinos, the contributions from diagrams (1), (2) or (4) are vanishing, since the relevant interaction vertices are proportional to either the fermion mass or the electric charge.
It is worthwhile to mention that although the obtained self-energies are seemingly different from those in Ref. <cit.>, cf. Eqs. (B.1)-(B.4) and (B.6)-(B.8) therein, they are actually identical after transforming the Passarino-Veltman functions A_0^, B_00^ and B_1^ into B_0^. With these self-energies, we can fix all the counterterms as in Eqs. (<ref>)-(<ref>).
§.§ Amplitudes from the Counterterms
The counterterms result in new interaction vertices and additional diagrams to the scattering amplitudes of our interest. The Feynman rules for the counterterms have been derived in the previous literature <cit.>, and the amplitudes from the counterterms can be easily obtained.
§.§.§ Self-energies of Gauge Bosons
The mass and wave-function counterterms of gauge bosons induce the following contribution
i(m_Z,W^2 δ Z_ZZ,W^ + δ m_Z,W^2) g_μν^ ,
where p^2 = 0 has been assumed for the intermediate gauge bosons in the case of forward scattering. Furthermore, considering the external fermions, one obtains the scattering amplitudes of ν_α^ + f →ν_α^ + f from the self-energy counterterms
i M_ c^Z = ig^2/4m_Z^4 c^2(m_Z^2 δ Z_ZZ^ + δ m_Z^2) ν_α^γ_μ P_ L^ν_α^ fγ^μ(c_ V,NC^f-c_ A,NC^fγ^5)f ,
i M_ c^W = ig^2/4m_W^4(m_W^2 δ Z_W^ + δ m_W^2) ν_α^γ_μ P_ L^ν_α^ fγ^μ(c_ V,CC^f - c_ A,CC^f γ^5 ) f .
§.§.§ Vertex Counterterms
The general fermion-vector-boson interaction from the counterterms can be expressed as
δΓ^FFV_μ = i e γ_μ( C^-_f P_ L^ + C^+_f P_ R^) ,
where F stands for the relevant fermions interacting with a given gauge boson V. All the one-loop diagrams for the corrections to the f-f-Z vertex have been shown in Fig. <ref>. The coefficients in front of the chiral projection operators are defined as
C^±_f = g_f^±(δ g_f^±/g_f^±+1/2δ Z_Z Z^ + δ Z_i i^f, R(L)) + 1/2 Q_f^δ Z_A Z^ ,
where
g_f^+ = -s/c Q_f^ , δ g_f^+=-s/c Q_f^(δ Z_e+1/c^2δ s/s) ,
g_f^- = I_f^3 - s^2 Q_f^/sc , δ g_f^-=I_f^3/s c(δ Z_e+s^2-c^2/c^2δ s/s)+δ g_f^+ ,
with the weak isospin generator I_f^3 of the SM fermions. The scattering amplitude from the counterterms turns out to be
i M_ c^Γ = - i g^2 s/2m_Z^2 c[ν_α^γ_μ C^-_ν_α^ P_ L^ν_α^ fγ^μ(c_ V,NC^f - c_ A,NC^f γ^5) f + fγ_μ( C^-_f P_ L^ + C^+_f P_ R^) f ν_α^γ^μ P_ L^ν_α^] ,
from which one can see that some corrections to the vector-type coupling are proportional to the tree-level coupling c_ V,NC^f whereas others are not.
Several comments on Fig. <ref> are helpful. For massless and electrically-neutral neutrinos, the contributions from the diagrams (1), (2), (4), (5), (7), (10) or (12) are vanishing. As the corrections of O(x_f^) for f=u,d,e are highly suppressed, the contributions from those diagrams can also be neglected. The flavor-dependent terms in the vertex correction come from the diagrams (3), (6), (9), (11), (13) and (14), which are consistent with the observations in Refs. <cit.>. Meanwhile, since neutrinos are purely left-handed in the SM, only C^-_ν_α^ takes part in the correction.
The one-loop diagrams for the corrections to the ν_e^-e-W vertex are given in Fig. <ref>. The counterterm is similar to that in Eq. (<ref>) but with
C^-_f = 1/√(2)s[δ Z_e^ - δ s/s + 1/2δ Z_W^ + 1/2(δ Z_ii^α, L+δ Z_ii^ν_α^, L)] , C^+_f = 0 .
As the diagrams with the vertices proportional to the electron mass can be neglected, we just concentrate on those in (3), (8), (9) and (10).
§.§ Box Diagrams
The box diagrams are presented in Figs. <ref>, <ref> and <ref>, which are actually UV-finite. The final results of the amplitudes have been given and discussed in the main text. Notice that the diagrams involving W or ϕ lead to the flavor-dependent corrections.
To simplify the expressions, one can expand the analytical formulas around the small fermion masses. However, there are two types of small fermion masses, namely, the charged-lepton masses and light quark masses. Given the strong mass hierarchy, i.e., m_e^≪ m_u^≈ m_d^≪ m_μ^≪ m_τ^, we should first expand the results around m^_u,d=0 and m^_e = 0 and safely neglect O(x_u,d,e) terms.
99
ParticleDataGroup:2022pth
R. L. Workman et al. [Particle Data Group],
“Review of Particle Physics,”
PTEP 2022, 083C01 (2022)
Xing:2020ijf
Z. z. Xing,
“Flavor structures of charged fermions and massive neutrinos,”
Phys. Rept. 854, 1-147 (2020)
[arXiv:1909.09610 [hep-ph]].
Wolfenstein:1977ue
L. Wolfenstein,
“Neutrino Oscillations in Matter,”
Phys. Rev. D 17, 2369-2374 (1978)
Wolfenstein:1979ni
L. Wolfenstein,
“Neutrino Oscillations and Stellar Collapse,”
Phys. Rev. D 20, 2634-2635 (1979)
Mikheyev:1985zog
S. P. Mikheyev and A. Y. Smirnov,
“Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos,”
Sov. J. Nucl. Phys. 42, 913-917 (1985)
Mikheev:1986wj
S. P. Mikheev and A. Y. Smirnov,
“Resonant amplification of neutrino oscillations in matter and solar neutrino spectroscopy,”
Nuovo Cim. C 9, 17-26 (1986)
Botella:1986wy
F. J. Botella, C. S. Lim and W. J. Marciano,
“Radiative Corrections to Neutrino Indices of Refraction,”
Phys. Rev. D 35, 896 (1987)
Mirizzi:2009td
A. Mirizzi, S. Pozzorini, G. G. Raffelt and P. D. Serpico,
“Flavour-dependent radiative correction to neutrino-neutrino refraction,”
JHEP 10, 020 (2009)
[arXiv:0907.3674 [hep-ph]].
Dutta:1999ir
G. Dutta, D. Indumathi, M. V. N. Murthy and G. Rajasekaran,
“Neutrinos from stellar collapse: Effects of flavor mixing,”
Phys. Rev. D 61, 013009 (2000)
[arXiv:hep-ph/9907372 [hep-ph]].
Dighe:1999bi
A. S. Dighe and A. Y. Smirnov,
“Identifying the neutrino mass spectrum from the neutrino burst from a supernova,”
Phys. Rev. D 62, 033007 (2000)
[arXiv:hep-ph/9907423 [hep-ph]].
Zhu:2020wuy
J. y. Zhu,
“Radiative corrections to the lepton flavor mixing in dense matter,”
JHEP 05, 097 (2020)
[arXiv:2002.12182 [hep-ph]].
Xing:2022efm
Z. z. Xing and J. y. Zhu,
“One-loop radiative correction to the Toshev relation for neutrino oscillations in matter,”
[arXiv:2208.03488 [hep-ph]].
Tamborra:2011is
I. Tamborra, G. G. Raffelt, L. Hudepohl and H. T. Janka,
“Impact of eV-mass sterile neutrinos on neutrino-driven supernova outflows,”
JCAP 01, 013 (2012)
[arXiv:1110.2104 [astro-ph.SR]].
Wu:2013gxa
M. R. Wu, T. Fischer, L. Huther, G. Martínez-Pinedo and Y. Z. Qian,
“Impact of active-sterile neutrino mixing on supernova explosion and nucleosynthesis,”
Phys. Rev. D 89, no.6, 061303 (2014)
[arXiv:1305.2382 [astro-ph.HE]].
DUNE:2020ypp
B. Abi et al. [DUNE],
“Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume II: DUNE Physics,”
[arXiv:2002.03005 [hep-ex]].
Hyper-Kamiokande:2022smq
J. Bian et al. [Hyper-Kamiokande],
“Hyper-Kamiokande Experiment: A Snowmass White Paper,”
[arXiv:2203.02029 [hep-ex]].
Aoki:1982ed
K. I. Aoki, Z. Hioki, M. Konuma, R. Kawabe and T. Muta,
“Electroweak Theory. Framework of On-Shell Renormalization and Study of Higher Order Effects,”
Prog. Theor. Phys. Suppl. 73, 1-225 (1982)
Bohm:1986rj
M. Bohm, H. Spiesberger and W. Hollik,
“On the One Loop Renormalization of the Electroweak Standard Model and Its Application to Leptonic Processes,”
Fortsch. Phys. 34, 687-751 (1986)
Hollik:1988ii
W. F. L. Hollik,
“Radiative Corrections in the Standard Model and their Role for Precision Tests of the Electroweak Theory,”
Fortsch. Phys. 38, 165-260 (1990)
Denner:1991kt
A. Denner,
“Techniques for calculation of electroweak radiative corrections at the one loop level and results for W physics at LEP-200,”
Fortsch. Phys. 41, 307-420 (1993)
[arXiv:0709.1075 [hep-ph]].
Giunti:2007ry
C. Giunti and C. W. Kim,
“Fundamentals of Neutrino Physics and Astrophysics,”
Oxford University Press, 2007,
Xing:2011zza
Z. z. Xing and S. Zhou,
“Neutrinos in particle physics, astronomy and cosmology,”
Springer Berlin, 2011,
Bohm:2001yx
M. Bohm, A. Denner and H. Joos,
“Gauge theories of the strong and electroweak interaction,”
Vieweg+Teubner Verlag, 2001
Sirlin:1977sv
A. Sirlin,
“Current Algebra Formulation of Radiative Corrections in Gauge Theories and the Universality of the Weak Interactions,”
Rev. Mod. Phys. 50, 573 (1978)
[erratum: Rev. Mod. Phys. 50, 905 (1978)]
Sirlin:1980nh
A. Sirlin,
“Radiative Corrections in the SU(2)-L x U(1) Theory: A Simple Renormalization Framework,”
Phys. Rev. D 22, 971-981 (1980)
Patel:2015tea
H. H. Patel,
“Package-X: A Mathematica package for the analytic calculation of one-loop integrals,”
Comput. Phys. Commun. 197, 276-290 (2015)
[arXiv:1503.01469 [hep-ph]].
Patel:2016fam
H. H. Patel,
“Package-X 2.0: A Mathematica package for the analytic calculation of one-loop integrals,”
Comput. Phys. Commun. 218, 66-70 (2017)
[arXiv:1612.00009 [hep-ph]].
Passarino:1978jh
G. Passarino and M. J. G. Veltman,
“One Loop Corrections for e+ e- Annihilation Into mu+ mu- in the Weinberg Model,”
Nucl. Phys. B 160, 151-207 (1979)
Sakakibara:1980hw
S. Sakakibara,
“Radiative Corrections to the Neutral Current Interactions in the Weinberg-Salam Model,”
Phys. Rev. D 24, 1149 (1981)
Nieves:2003in
J. F. Nieves and P. B. Pal,
“Generalized Fierz identities,”
Am. J. Phys. 72, 1100-1108 (2004)
[arXiv:hep-ph/0306087 [hep-ph]].
CDF:2022hxs
T. Aaltonen et al. [CDF],
“High-precision measurement of the W boson mass with the CDF II detector,”
Science 376, no.6589, 170-176 (2022)
Nunokawa:2007qh
H. Nunokawa, S. J. Parke and J. W. F. Valle,
“CP Violation and Neutrino Oscillations,”
Prog. Part. Nucl. Phys. 60, 338-402 (2008)
[arXiv:0710.0554 [hep-ph]].
DUNE:2015lol
R. Acciarri et al. [DUNE],
“Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report, Volume 2: The Physics Program for DUNE at LBNF,”
[arXiv:1512.06148 [physics.ins-det]].
DUNE:2021cuw
B. Abi et al. [DUNE],
“Experiment Simulation Configurations Approximating DUNE TDR,”
[arXiv:2103.04797 [hep-ex]].
Esteban:2020cvm
I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou,
“The fate of hints: updated global analysis of three-flavor neutrino oscillations,”
JHEP 09, 178 (2020)
[arXiv:2007.14792 [hep-ph]].
Kelly:2018kmb
K. J. Kelly and S. J. Parke,
“Matter Density Profile Shape Effects at DUNE,”
Phys. Rev. D 98, no.1, 015025 (2018)
[arXiv:1802.06784 [hep-ph]].
JUNO:2015zny
F. An et al. [JUNO],
“Neutrino Physics with JUNO,”
J. Phys. G 43, no.3, 030401 (2016)
[arXiv:1507.05613 [physics.ins-det]].
JUNO:2022mxj
A. Abusleme et al. [JUNO],
“Sub-percent precision measurement of neutrino oscillation parameters with JUNO,”
Chin. Phys. C 46, no.12, 123001 (2022)
[arXiv:2204.13249 [hep-ex]].
Hahn:2000kx
T. Hahn,
“Generating Feynman diagrams and amplitudes with FeynArts 3,”
Comput. Phys. Commun. 140, 418-431 (2001)
[arXiv:hep-ph/0012260 [hep-ph]].
Denner:1990yz
A. Denner and T. Sack,
“Renormalization of the Quark Mixing Matrix,”
Nucl. Phys. B 347, 203-216 (1990)
Gambino:1998ec
P. Gambino, P. A. Grassi and F. Madricardo,
“Fermion mixing renormalization and gauge invariance,”
Phys. Lett. B 454, 98-104 (1999)
[arXiv:hep-ph/9811470 [hep-ph]].
Pilaftsis:2002nc
A. Pilaftsis,
“Gauge and scheme dependence of mixing matrix renormalization,”
Phys. Rev. D 65, 115013 (2002)
[arXiv:hep-ph/0203210 [hep-ph]].
|
http://arxiv.org/abs/2307.04628v1 | 20230710151413 | Tight Algorithmic Applications of Clique-Width Generalizations | [
"Vera Chekan",
"Stefan Kratsch"
] | cs.DS | [
"cs.DS"
] |
In this work, we study two natural generalizations of clique-width introduced by Martin Fürer.
Multi-clique-width (mcw) allows every vertex to hold multiple labels [ITCS 2017], while fusion-width (fw) additionally allows merging all vertices of a certain label [LATIN 2014].
Fürer has shown that both parameters are upper-bounded by treewidth, thus making them more appealing from an algorithmic perspective than clique-width, and asked for applications of these parameters for problem solving.
First, we determine the relation between these two parameters by showing that
mcw ≤ fw + 1.
Then we show that when parameterized by multi-clique-width, many problems (e.g., Connected Dominating Set) admit algorithms with the same running time as for clique-width despite the exponential gap between these two parameters.
For some problems (e.g., Hamiltonian Cycle) we show an analogous result for fusion-width:
For this we present an alternative view on fusion-width by introducing so-called glue-expressions which might be interesting on their own.
All algorithms obtained in this work are tight up to (Strong) Exponential Time Hypothesis.
§ INTRODUCTION
In parameterized complexity apart from the input size we consider a so-called parameter and study the complexity of problems depending on both the input size and the parameter where the allowed dependency on the input size is polynomial.
In a more fine-grained setting one is interested in the best possible dependency on the parameter under reasonable conjectures.
A broad line of research is devoted to so-called structural parameters measuring how simple the graph structure is: different parameters quantify various notions of possibly useful input structure.
Probably the most prominent structural parameter is treewidth, which reflects how well a graph can be decomposed using small vertex separators.
For a variety of problems, the tight complexity parameterized by treewidth (or its path-like analogue pathwidth) has been determined under the so-called Strong Exponential Time Hypothesis (e.g., <cit.>).
However, the main drawback of treewidth is that it is only bounded in sparse graphs: a graph on n vertices of treewidth k has no more than nk edges.
To capture the structure of dense graphs, several parameters have been introduced and considered.
One of the most studied is clique-width.
The clique-width of a graph is at most k if it can be constructed using the following four operations on k-labeled graphs: create a vertex with some label from 1, …, k; form a disjoint union of two already constructed graphs; give all vertices with label i label j instead; or create all edges between vertices with labels i and j.
It is known that if a graph has treewidth k, then it has clique-width at most 3 · 2^k-1 and it is also known that an exponential dependence in this bound is necessary <cit.>.
Conversely, cliques have clique-width at most 2 and unbounded treewidth.
So on the one hand, clique-width is strictly more expressive than treewidth in the sense that if we can solve a problem efficiently on classes of graphs of bounded clique-width, then this is also true for classes of graphs of bounded treewidth.
On the other hand, the exponential gap has the effect that as the price of solving the problem for larger graph classes we potentially obtain worse running times for some graph families.
Fürer introduced and studied two natural generalizations of clique-width, namely fusion-width (fw) <cit.> and multi-clique-width (mcw) <cit.>.
For fusion-width, in addition to the clique-width operations, he allows an operator that fuses (i.e., merges) all vertices with a given label.
Originally, fusion-width (under a different name) was introduced by Courcelle and Makowsky <cit.>.
However, they did not suggest studying it as a new width parameter since it is parametrically (i.e., up to some function) equivalent to clique-width.
For multi-clique-width, the operations remain roughly the same as for clique-width but now every vertex is allowed to have multiple labels.
For these parameters, Fürer showed the following relations to clique-width (cw) and treewidth (tw):
≤≤· 2^ ≤≤ 2^ ≤ + 2 ≤ + 2
Fürer also observed that the exponential gaps between clique-width and both fusion- and multi-clique-width are necessary.
As our first result, we determine the relation between fusion-width and multi-clique-width:
For every graph G, it holds that mcw(G) ≤ fw(G) + 1.
Moreover, given a fuse-k-expression ϕ of G, a multi-clique-width-(k + 1)-expression of G can be created in time polynomial in |ϕ| and k.
The relations in (<ref>) imply that a problem is FPT parameterized by fusion-width resp. multi-clique-width if and only if this is the case for clique-width.
However, the running times of such algorithms might strongly differ.
Fürer initiated a fine-grained study of problem complexities relative to multi-clique-width, starting with the Independent Set problem.
He showed that this problem can be solved in time 𝒪^*(2^mcw), where 𝒪^* hides factors polynomial in the input size.
On the other hand, Lokshtanov et al. proved that under SETH no algorithm can solve this problem in time 𝒪^*((2-ε)^pw), where pw denotes the parameter called pathwidth <cit.>.
The clique-width of a graph is at most its pathwidth plus two <cit.>, so the same lower bound holds for clique-width and hence for multi-clique-width as well.
Therefore, the tight dependence on both clique-width and multi-clique-width is the same, namely 𝒪^*(2^k).
We show that this is the case for many further problems.
Let G be a graph given together with a multi-k-expression of G. Then:
* Dominating Set can be solved in time 𝒪^*(4^k);
* q-Coloring can be solved in time 𝒪^*((2^q - 2)^k);
* Connected Vertex Cover can be solved in time 𝒪^*(6^k);
* Connected Dominating Set can be solved in time 𝒪^*(5^k).
And these results are tight under SETH.
Further, Chromatic Number can be solved in time f(k) · n^2^𝒪(k) and this is tight under ETH.
We prove this by providing algorithms for multi-clique-width with the same running time as the known tight algorithms for clique-width.
The lower bounds for clique-width known from the literature then apply to multi-clique-width as well, proving the tightness of our results.
By <ref>, these results also apply to fusion-width.
For the following three problems we obtain similar tight bounds relative to fusion-width as for clique-width, but it remains open whether the same is true relative to multi-clique-width:
Let G be a graph given together with a fuse-k-expression of G. Then:
* Max Cut can be solved in time f(k) · n^𝒪(k);
* Edge Dominating Set can be solved in time f(k) · n^𝒪(k);
* Hamiltonian Cycle can be solved in time f(k) · n^𝒪(k).
And these results are tight under ETH.
To prove these upper bounds, we provide an alternative view on fuse-expressions, called glue-expressions, which is interesting on its own.
We show that a fuse-k-expression can be transformed into a glue-k-expression in polynomial time and then present dynamic-programming algorithms on glue-expressions.
Due to the exponential gap between clique-width and both fusion- and multi-clique-width, our results provide exponentially faster algorithms on graphs witnessing these gaps.
Related Work
Two parameters related to both treewidth and clique-width are modular treewidth (mtw) <cit.> and twinclass-treewidth <cit.> (unfortunately, sometimes also referred to as modular treewidth).
It is known that ≤mtw + 3 (personal communication with Falko Hegerfeld).
Further dense parameters have been widely studied in the literature.
Rank-width (rw) was introduced by Oum and Seymour and it reflects the 𝔽_2-rank of the adjacency matrices in the so-called branch decompositions.
Originally, it was defined to obtain a fixed-parameter approximation of clique-width <cit.> by showing that rw ≤ cw ≤ 2^rw + 1 - 1.
Later, Bui-Xuan et al. started the study of algorithmic properties of rank-width <cit.>.
Recently, Bergougnoux et al. proved the tightness of first ETH-tight lower bounds for this parameterization <cit.>.
Another parameter defined via branch-decompositions and reflecting the number of different neighborhoods across certain cuts is boolean-width (boolw), introduced by Bui-Xuan et al. <cit.>.
Fürer <cit.> showed that boolw≤≤ 2^boolw.
Recently, Eiben et al. presented a framework unifying the definitions and algorithms for computation of many graph parameters <cit.>.
Organization
We start with some required definitions and notations in <ref>.
In <ref> we prove the relation between fusion-width and multi-clique-width from <ref>.
After that, in <ref> we introduce glue-k-expressions and show how to obtain such an expression given a fuse-k-expression of a graph.
Then in <ref> we employ these expressions to obtain algorithms parameterized by fusion-width.
In <ref> we present algorithms parameterized by multi-clique-width.
We conclude with some open questions in <ref>.
§ PRELIMINARIES
For k ∈ ℕ_0, we denote by [k] the set {1, …, k} and we denote by [k]_0 the set [k] ∪{0}.
We use standard graph-theoretic notation.
Our graphs are simple and undirected if not explicitly stated otherwise.
For a graph H and a partition (V_1, V_2) of V(H), by E_H(V_1, V_2) = {{v_1, v_2}∈ E(H) | v_1 ∈ V_1, v_2 ∈ V_2} we denote the set of edges between V_1 and V_2.
For a set S of edges in a graph H, by V(S) we denote the set of vertices incident with the edges in S.
A k-labeled graph is a pair (H, _H) where _H V(H) → [k] is a labeling function of H.
Sometimes to simplify the notation in our proofs we will allow the labeling function to map to some set of cardinality k instead of the set [k].
In the following, if the number k of labels does not matter, or it is clear from the context, we omit k from the notions (e.g., a labeled graph instead of a k-labeled graph).
Also, if the labeling function is clear from the context, then we simply call H a labeled graph as well.
Also we sometimes omit the subscript H of the labeling function _H for simplicity.
For i ∈ [k], by U^H_i = _H^-1(i) we denote the set of vertices of H with label i.
We consider the following four operations on k-labeled graphs.
* Introduce: For i ∈ [k], the operator v⟨ i ⟩ creates a graph with a single vertex v that has label i. We call v the title of the vertex.
* Union: The operator ⊕ takes two vertex-disjoint k-labeled graphs and creates their disjoint union. The labels are preserved.
* Join: For i ≠ j ∈ [k], the operator η_i, j takes a k-labeled graph H and creates the supergraph H' on the same vertex set with E(H') = E(H) ∪{{u, v}|_H(u) = i, _H(v) = j}.
The labels are preserved.
* Relabel: For i ≠ j, the operator ρ_i → j takes a k-labeled graph H and creates the same k-labeled graph H' apart from the fact that every vertex with label i in H instead has label j in H'.
A well-formed sequence of such operations is called a k-expression or a clique-expression.
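To make the semantics of these four operators concrete, here is a small Python sketch (not from the paper); vertex titles are arbitrary hashable objects and labels are integers from [k].

```python
class LabeledGraph:
    """A k-labeled graph: a vertex -> label map plus an edge set (edges as frozensets)."""

    def __init__(self):
        self.label = {}      # vertex title -> label in 1..k
        self.edges = set()   # set of frozenset({u, v})

    @staticmethod
    def introduce(v, i):                      # v<i>
        g = LabeledGraph()
        g.label[v] = i
        return g

    def union(self, other):                   # disjoint union (titles must differ)
        g = LabeledGraph()
        g.label = {**self.label, **other.label}
        g.edges = self.edges | other.edges
        return g

    def join(self, i, j):                     # eta_{i,j}: all edges between labels i and j
        g = self.copy()
        for u, lu in g.label.items():
            for v, lv in g.label.items():
                if lu == i and lv == j:
                    g.edges.add(frozenset((u, v)))
        return g

    def relabel(self, i, j):                  # rho_{i -> j}
        g = self.copy()
        g.label = {v: (j if l == i else l) for v, l in g.label.items()}
        return g

    def copy(self):
        g = LabeledGraph()
        g.label = dict(self.label)
        g.edges = set(self.edges)
        return g

# Example: a 2-expression for the path a - b - c (P3 is a cograph, so two labels suffice).
g = LabeledGraph.introduce("a", 1).union(LabeledGraph.introduce("b", 2)).join(1, 2)
g = g.union(LabeledGraph.introduce("c", 1)).join(1, 2)
print(sorted(tuple(sorted(e)) for e in g.edges))   # [('a', 'b'), ('b', 'c')]
```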
With a k-expression ϕ one can associate a rooted tree such that every node corresponds to an operator; this tree is called a parse tree of ϕ.
With a slight abuse of notation, we denote it by ϕ as well.
By G^ϕ we denote the labeled graph arising in ϕ.
And for a node t of ϕ, by G^ϕ_t we denote the labeled graph arising in the subtree (sometimes also called a sub-expression) rooted at t; this subtree is denoted by ϕ_t.
The graph G^ϕ_t is then a subgraph of G^ϕ.
A graph H has clique-width of at most k if there is a labeling function _H of H and a k-expression ϕ such that G^ϕ is equal to (H, _H).
By cw(H) we denote the smallest integer k such that H has clique-width at most k.
Fürer has studied two generalizations of k-expressions <cit.>.
Fuse: For i ∈ [k], the operator θ_i takes a k-labeled graph H with ^-1_H(i) ≠∅ and fuses the vertices with label i, i.e., the arising graph H' has vertex set (V(H) - ^-1_H(i)) ∪̇{v}, the edge relation in V(H) - ^-1_H(i) is preserved, and N_H'(v) = N_H(^-1_H(i)).
The labels of vertices in V(H') - v are preserved, and vertex v has label i.
A fuse-k-expression is a well-formed expression that additionally to the above four operations is allowed to use fuses.
We adopt the above notations from k-expressions to fuse-k-expressions.
Let us only remark that for a node t of a fuse-k-expression ϕ, the graph G^ϕ_t is not necessarily a subgraph of G^ϕ since some vertices of G^ϕ_t might be fused later in ϕ.
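Continuing the illustrative sketch from above, the fuse operator θ_i can be modeled as follows (again only a toy rendering of the definition; the title chosen for the fused vertex is arbitrary).

```python
def fuse(self, i, new_title=None):
    """theta_i: merge all vertices of label i into one vertex, which keeps label i."""
    fused = [v for v, l in self.label.items() if l == i]
    assert fused, "theta_i is only allowed if some vertex has label i"
    w = new_title if new_title is not None else ("fused", i, id(self))  # arbitrary fresh title
    g = LabeledGraph()
    g.label = {v: l for v, l in self.label.items() if l != i}
    g.label[w] = i
    rename = lambda v: w if v in fused else v
    g.edges = {frozenset((rename(u), rename(v)))
               for u, v in map(tuple, self.edges)
               if rename(u) != rename(v)}        # drop loops created by the merge
    return g

LabeledGraph.fuse = fuse
```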
Originally, Fürer allows that a single introduce-node creates multiple, say q, vertices with the same label.
However, we can eliminate such operations from a fuse-expression ϕ as follows.
If the vertices introduced at some node participate in some fuse later in the expression, then it suffices to introduce only one of them.
Otherwise, we can replace this introduce-node by q nodes introducing single vertices combined using union-nodes.
These vertices are then also the vertices of G^ϕ.
So in total, replacing all such introduce-nodes would increase the number of nodes of the parse tree by at most 𝒪(|V(G^ϕ)|), which is not a problem for our algorithmic applications.
Another generalization of clique-width introduced by Fürer is multi-clique-width (mcw) <cit.>.
A multi-k-labeled graph is a pair (H, _H) where _H V(H) → 2^[k] is a multi-labeling function.
We consider the following four operations of multi-k-labeled graphs.
* Introduce: For q ∈ [k] and i_1, … i_q ∈ [k], the operator v ⟨ i_1, …, i_q ⟩ creates a multi-k-labeled graph with a single vertex that has label set {i_1, …, i_q}.
* Union: The operator ⊕ takes two vertex-disjoint multi-k-labeled graphs and creates their disjoint union. The labels are preserved.
* Join: For i ≠ j ∈ [k], the operator η_i, j takes a multi-k-labeled graph H and creates its supergraph H' on the same vertex set with E(H') = E(H) ∪{{u, v}| i ∈_H(u), j ∈_H(v)}.
This operation is only allowed when there is no vertex in H with labels i and j simultaneously, i.e., for every vertex v of H we have {i, j}⊈_H(v).
The labels are preserved.
* Relabel: For i ∈ [k] and S ⊆ [k], the operator ρ_i → S takes a multi-k-labeled graph H and creates the same multi-labeled graph apart from the fact that every vertex with label set L ⊆ [k] such that i ∈ L in H instead has label set (L ∖{i}) ∪ S in H'.
Note that S = ∅ is allowed.
A well-formed sequence of these four operations is called a multi-k-expression.
As for fuse-expressions, Fürer allows introduce-nodes to create multiple vertices but we can eliminate this by increasing the number of nodes in the expression by at most 𝒪(|V(G^ϕ)|).
We adopt the analogous notations from k-expressions to multi-k-expressions.
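Analogously to the sketch for k-labeled graphs above, multi-k-labeled graphs and their four operators can be rendered in Python as follows (illustrative only); note the guard in the join operation and that relabeling to the empty set simply removes the label.

```python
class MultiLabeledGraph:
    """A multi-k-labeled graph: every vertex carries a set of labels."""

    def __init__(self):
        self.labels = {}     # vertex title -> frozenset of labels
        self.edges = set()

    @staticmethod
    def introduce(v, label_set):                         # v<i_1, ..., i_q>
        g = MultiLabeledGraph()
        g.labels[v] = frozenset(label_set)
        return g

    def union(self, other):
        g = MultiLabeledGraph()
        g.labels = {**self.labels, **other.labels}
        g.edges = self.edges | other.edges
        return g

    def join(self, i, j):                                # eta_{i,j}
        # Only allowed if no vertex holds both labels i and j simultaneously.
        assert all(not {i, j} <= L for L in self.labels.values())
        g = self._copy()
        for u, Lu in g.labels.items():
            for v, Lv in g.labels.items():
                if i in Lu and j in Lv:
                    g.edges.add(frozenset((u, v)))
        return g

    def relabel(self, i, S):                             # rho_{i -> S}, S may be empty
        g = self._copy()
        g.labels = {v: (L - {i}) | frozenset(S) if i in L else L
                    for v, L in g.labels.items()}
        return g

    def _copy(self):
        g = MultiLabeledGraph()
        g.labels = dict(self.labels)
        g.edges = set(self.edges)
        return g
```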
Complexity
To the best of our knowledge, the only known way to approximate multi-clique-width and fusion-width is via clique-width, i.e., to employ the relation (<ref>).
The only known way to approximate clique-width is, in turn, via rank-width.
This way we obtain a 2^2^k-approximation of multi-clique-width and fusion-width running in FPT time.
For this reason, to obtain tight running times in our algorithms we always assume that a fuse- or multi-k-expression is provided.
Let us emphasize that this is also the case for all tight results for clique-width in the literature (see e.g., <cit.>).
In this work, we will show that if a graph admits a multi-k-expression resp. a fuse-k-expression, then it also admits one whose size is polynomial in the size of the graph.
Moreover, such a “compression” can be carried out in time polynomial in the size of the original expression.
Therefore, we delegate this compression to a black-box algorithm computing or approximating multi-clique-width or fusion-width and assume that provided expressions have size polynomial in the graph size.
(Strong) Exponential Time Hypothesis
The algorithms in this work are tight under one of the following conjectures formulated by Impagliazzo et al. <cit.>.
The Exponential Time Hypothesis (ETH) states that there is 0 < ε < 1 such that 3-Sat with n variables and m clauses cannot be solved in time (2^ε n).
The Strong Exponential Time Hypothesis (SETH) states that for every 0 < ε < 1 there is an integer q such that q-Sat cannot be solved in time (2^ε n).
In this work, hides factors polynomial in the input size.
Simplifications
If the graph is clear from the context, by n we denote the number of its vertices.
If not stated otherwise, the number of labels is denoted by k and a label is a number from [k].
§ RELATION BETWEEN FUSION-WIDTH AND MULTI-CLIQUE-WIDTH
In this section, we show that for every graph, its multi-clique-width is at most as large as its fusion-width plus one.
Since we are interested in parameterized complexity of problems, the constant additive term to the value of a parameter does not matter.
To prove the statement, we show how to transform a fuse-k-expression of a graph H into a multi-(k+1)-expression of H.
Fürer has proven the following relation:
For every graph H, it holds that cw(H) ≤ fw(H) · 2^fw(H).
We will use his idea behind the proof of this lemma to prove our result.
For every graph H, it holds that mcw(H) ≤ fw(H) + 1.
Moreover, given a fuse-k-expression ϕ of H, a multi-(k + 1)-expression of H can be created in time polynomial in |ϕ| and k.
Let H be a graph.
We start by showing that mcw(H) ≤ 2 · fw(H) holds.
To prove this, we will consider a fuse-k-expression of H and from it, we will construct a multi-2k-expression of H using labels {1, …, k, 1̅, …, k̅}.
For simplicity of notation, let [k̅] = {1̅, …, k̅}.
For this first step, we strongly follow the construction of Fürer in his proof of <ref>.
There he uses k · 2^k labels from the set [k] × 2^[k] so the second component of such a label is a subset of [k].
We will use the fact that the multi-clique-width perspective already allows vertices to have sets of labels and model the second component of a label via subsets of [k̅].
Then we will make an observation that allows us to (almost) unify the labels i and i̅ for every i ∈ [k].
Using one additional label ⋆, we will then obtain a multi-(k+1)-expression of H using labels [k] ∪{⋆}.
First of all, we perform several simple transformations on ϕ without changing the arising graph.
We suppress all join-nodes that do not create new edges, i.e., we suppress a join-node t if for its child t' it holds G_t = G_t'.
Then we suppress all nodes fusing fewer than two vertices, i.e., a θ_i-node t for some i ∈ [k] is suppressed if for its child t', the labeled graph G^ϕ_t' contains fewer than two vertices with label i.
Now we provide a short intuition for the upcoming transformation.
Let x be a θ_i-node creating a new vertex, say u, by fusing some vertices, say U.
And let y be an ancestor of x such that y is a fuse-node that fuses vertex u with some further vertices, say W.
Then we can safely suppress the node x: the fuse of vertices from U is then simply postponed to y, where these vertices are fused together with W.
Now we fix some notation used in the rest of the proof.
Let x be a node, let y be an ancestor of x, and let t_1, …, t_q be all inner relabel-nodes on the path from x to y in the order they occur on this path.
Further, let s_1, …, s_q ∈ [k] and r_1, …, r_q ∈ [k] be such that the node t_j is a ρ_s_j → r_j-node for every j ∈ [q].
Then for all i ∈ [k], we define
ρ^*_x,y(i) = σ_q ( σ_q-1 ( …σ_1(i) ) )
where
σ_j(i') =
i' if i' ≠ s_j
r_j if i' = s_j
for j ∈ [q].
Intuitively, if we have some vertex v of label i in G^ϕ_x, then ρ^*_x, y(i) denotes the label of v in G^ϕ_y' where y' denotes the child of y, i.e., ρ^*_x, y(i) is the label of v right before the application of y.
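In code-like terms (purely illustrative), ρ^*_x,y is just the left-to-right composition of the relabel operations encountered on the path, applied to a single label:

```python
def rho_star(i, relabels_on_path):
    # relabels_on_path: the pairs (s_j, r_j) of the inner relabel-nodes between x and y,
    # listed in the order in which they occur on the path from x up to y.
    for s, r in relabels_on_path:
        if i == s:
            i = r
    return i
```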
Now for every i ∈ [k] and every θ_i-node x, if there exists an ancestor y of x in ϕ such that y is a θ_ρ^*_x,y(i)-node, we suppress the node x.
In this case, we call x skippable.
Finally, we transform the expression in such a way that a parent of every leaf is a union-node as follows.
Let x be a leaf introducing a vertex v of label i for some i ∈ [k].
As a result of the previous transformations, we know that the parent y of x is either a relabel- or a union-node.
In the latter case, we skip this node.
Otherwise, let i_1 ≠ i_2 ∈ [k] be such that y is a ρ_i_1 → i_2-node.
If i_1 ≠ i, then we suppress y.
Otherwise, we suppress y and replace x with a node introducing the same vertex but with label i_2.
This process is repeated for every leaf.
We denote the arising fuse-k-expression of H by ψ.
Now let x be a node of ψ and let v be a vertex of G^ψ_x with label i for some i ∈ [k].
We say that v is a fuse-vertex at x if v participates in some fuse-operation above x, that is, there is an ancestor y of x (in ψ) such that y is a θ_ρ^*_x, y(i)-node.
Note that first, since we have removed skippable fuse-nodes, if such a node y exists for x, then it is unique.
And second, in this case all vertices of label i in G^ψ_x will participate in the fuse-operation.
So we also say i is a fuse-label at x.
Hence, instead of first creating these vertices via introduce-nodes and then fusing them, we will introduce only one vertex representing the result of the fusion.
And the creation of the edges incident with these vertices needs to be postponed until the moment where the new vertex is introduced.
For this, we will store the label of the new vertex in the label set of the other end-vertex.
But for postpone-purposes we will use labels from [k̄] to distinguish them from the original labels.
We now formalize this idea to obtain a multi-2k-expression ξ of H.
In the following, the constructed expression will temporarily contain, at the same time, vertices with multiple labels and fuse-nodes; we call such an expression mixed.
First, we mark all fuse-nodes in ψ as unprocessed and start with ξ := ψ.
We proceed for every leaf ℓ of ψ as follows.
Let v and i ∈ [k] be such that ℓ is a v⟨ i ⟩-node.
If v is not a fuse-vertex at ℓ in ψ, we simply change the operation at ℓ in ξ to be 1 ⟨{i}⟩.
Otherwise, let x be the fuse-node in ψ in which v participates.
Note that since we have suppressed skippable fuse-nodes, such a node x is unique.
Let i ∈ [k] be such that x is a θ_i-node.
First, we remove the leaf ℓ from ξ and suppress its parent in ξ.
Note that since the parent of ℓ in ψ is a union-node, the mixed expression remains well-formed.
Second, if x is marked as unprocessed, we replace the operation at x in ξ to be a union, add a new 1 ⟨{i}⟩-node as a child of x, and mark x as processed.
We refer to the introduce-nodes created in this process as well as to the vertices introduced by these nodes as new.
Observe that first, the arising mixed expression does not contain any fuse-nodes.
Second, the set of leaves of ξ is now in bijection with the set of vertices of H.
Also, the set of edges induced by vertices, that do not participate in any fuse-operation in ψ, has not been affected.
So it remains to correctly create the edges for which at least one end-point is new.
This will be handled by adapting the label sets of vertices.
First, for every i ≠ j ∈ [k], every ρ_i → j-node is replaced with a path consisting of a ρ_i →{j}-node and a ρ_ī→{j̄}-node.
Now let i ≠ j ∈ [k] and let x be a η_i, j-node in ξ.
In order to correctly handle the join-operation, we make a case distinction.
If both i and j are not fuse-labels at x in ψ, we skip x.
Next, assume that exactly one of the labels i and j, say i, is a fuse-label at x in ψ.
Then we replace the operation in x in ξ with ρ_j →{j, ī} to store the information about the postponed edges in the vertices of label j.
From now on, we may assume that both i and j are fuse-labels at x in ψ.
Observe that x creates only one edge of H since all vertices of label i (resp. j) are fused into one vertex later in ψ.
Let x_i (resp. x_j) be the ancestor of x in ψ such that x_i (resp. x_j) is a θ_p_i-node (resp. θ_p_j-node) where p_i = ρ^*_x, x_i(i) (resp. p_j = ρ^*_x, x_j(j)).
Since we have suppressed skippable fuse-nodes, the nodes x_i and x_j are unique.
By our construction, x_i (resp. x_j) is in ξ a union-node that has a child y_i (resp. y_j) being an introduce-node.
Without loss of generality, we may assume that x_i is above x_j in ξ.
Then, we store the information about the postponed edge in y_j as follows.
Let S ⊆ [k] ∪ [k̄] be the label set such that y_j is currently a 1 ⟨ S ⟩-node.
Note: initially, S consists of the single label p_j, but after several join-nodes have been processed this is, in general, no longer the case.
We now replace the operation in y_j with 1 ⟨ S ∪{p̄}⟩ where p = ρ^*_x,x_j(i).
After all join-nodes are processed, we create the postponed edges at every new introduce-node x of ξ as follows.
Let y be the parent of x in ξ and let S ⊆ [k] ∪ [k̄] be such that x is a 1 ⟨ S ⟩-node.
By construction, there exists a unique label i ∈ [k] ∩ S.
Then right above y, we add the sequence ρ_ī→∅∘η_i, ī and we refer to this sequence together with y as the postponed sequence of x.
This concludes the construction of a multi-2k-expression, say α, of H.
It can be verified that we have not changed the construction of Fürer <cit.> but only stated it in terms of multi-clique-width.
Therefore, the construction is correct.
Now as promised, we argue that the number of required labels can be decreased to k + 1.
Before formally stating this, we provide an intuition.
First, observe that moving from ξ to α, we did not change the unique label from [k] kept by each vertex at any step; only the labels from [k̄] have been affected.
We claim that for i ∈ [k], both labels i and ī may appear in a subgraph G^α_y only in very restricted cases, namely when y belongs to a postponed sequence of a new introduce-node.
We now sketch why this is the case.
Let x be a node such that G^α_x contains a vertex with label ī.
This can only occur if i is a fuse-label at x in ψ, i.e., there exists a unique fuse-node z such that z is an ancestor of x and the vertices from G^ψ_x of label i participate in the fuse at z.
By the construction of ξ, all introduce-nodes creating these vertices have been removed so G^α_z contains a unique vertex holding the label j := ρ^*_x, z(i), namely the one introduced at its child, say t.
Then in the end of the postponed sequence of t, the label j̄ is removed from all vertices.
So the only moment where both labels j and j̄ occur is during the postponed sequence of t.
Also note that postponed sequences do not overlap so if such j exists, then it is unique.
This is formalized as follows.
Let y be a node in α and let i ∈ [k] be such that the labeled graph G^α_y contains a vertex containing label i and a vertex containing label ī.
Then y belongs to the postponed sequence of some new 1 ⟨ S ⟩-node x with S ⊆ [k] ∪ [k̄] and i ∈ S.
Moreover, the only vertex in G^α_y containing label i is the vertex introduced at x.
In particular, since the postponed sequences for distinct nodes are disjoint by construction, for every j ≠ i ∈ [k], the graph G^α_y does not contain a vertex containing label j or it does not contain a vertex containing label j̄.
So up to postponed sequences, we can unify the labels i and ī for every i ∈ [k].
And inside postponed sequences, we will use an additional label ⋆ to distinguish between i and ī.
So we process new introduce-nodes as follows.
Let x be a new 1 ⟨ S ⟩-node for some S ⊆ [k] ∪ [k̄] and let i ∈ [k] be the unique value in S ∖ [k̄].
We replace the operation in x with a 1 ⟨ S ∖{i}∪{⋆}⟩ and we replace the postponed sequence of x with the sequence ρ_⋆→ i∘ρ_ī→∅∘η_ī, ⋆∘⊕.
After processing all new introduce-nodes, we replace every occurrence of label ī with label i for all i ∈ [k].
The new multi-expression uses k+1 labels and by the above observation, it still creates H.
Also it can be easily seen that the whole transformation can be carried out in polynomial time.
§ REDUCED GLUE-EXPRESSIONS
In this section, we show that a fuse-k-expression can be transformed into a so-called reduced glue-k-expression of the same graph in polynomial time.
Such expressions will provide an alternative view on fusion-width helpful for algorithms.
We formally define them later.
In the following, we assume that the titles used in introduce-nodes of a fuse-expression are pairwise distinct.
Along this section, the number of labels is denoted by k and polynomial is a shorthand for “polynomial in the size of the expression and k”.
To avoid edge cases, we will assume that any expression in this section does not contain any useless nodes in the following sense.
If a join-node does not create new edges, it is suppressed.
Similarly, if a fuse-node fuses at most one vertex, it is suppressed.
Also, during our construction, nodes of the form ρ_i → i might arise; they are also suppressed.
Further, if ρ_i → j is applied to a labeled graph with no vertices of label i, it is suppressed.
Clearly, useless nodes can be found and suppressed in polynomial time.
For this reason, from now on we always implicitly assume that useless nodes are not present.
We say that fuse-expressions ϕ_1 and ϕ_2 are equivalent if there exists a label-preserving isomorphism between G^ϕ_1 and G^ϕ_2.
In this section, we provide rules allowing to replace sub-expressions with equivalent ones.
For simplicity, the arising expression will often be denoted by the same symbol as the original one.
The following equivalencies can be verified straight-forwardly.
Although some of them might seem to be unnatural to use at first sight, they will be applied in the proofs of <ref>.
Let k ∈ ℕ, let H be a k-labeled graph, let q ∈ ℕ, let i, j, a, b, a_1, …, a_q ∈ [k] be integers, let H_1, H_2 be k-labeled graphs, and let v be a title.
Then the following holds if none of the operators on the left-hand side is useless:
* θ_i ∘η_a, b (H) = η_a, b∘θ_i(H);
* If i ∉{a, b}, then θ_i ∘ρ_a → b(H) = ρ_a → b∘θ_i(H);
* If a, b ∈{a_1, …, a_q, i}, then:
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘η_a, b(H) = θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H);
* If a ∈{a_1, …, a_q, i} and b ∉{a_1, …, a_q, i}, then:
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘η_a, b(H) = η_i, b∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H);
* If a, b ∉{a_1, …, a_q, i}, then:
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘η_a, b(H) = η_a, b∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H);
* If a, b ∉{a_1, …, a_q, i}, then
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_a, b(H) = ρ_a, b∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H);
* If b ∈{a_1, …, a_q}, then
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_a → b(H) = θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_a → i(H);
* If b ∉{a_1, …, a_q, i}, then
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_i → b(H) = ρ_a_1 → i∘θ_a_1∘ρ_a_2 → a_1∘…∘ρ_a_q → a_1∘ρ_i → b(H);
*
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i (H_1 ⊕ H_2) = θ_i ((ρ_a_1 → i∘…∘ρ_a_q → i(H_1)) ⊕(ρ_a_1 → i∘…∘ρ_a_q → i(H_2)) );
* If b ∈{a_1, …, a_q, i}, then:
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘θ_b(H) = θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i (H);
* If b ∉{a_1, …, a_q, i}, then:
θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘θ_b(H) = θ_b ∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i (H);
* ρ_a → j∘ v ⟨ a ⟩ = v ⟨ j ⟩;
* η_i, j∘ρ_a → i (H) = ρ_a → i∘η_i, j∘η_a, j (H);
* If a, b ∉{i, j}, then:
η_i, j∘ρ_a → b (H) = ρ_a → b∘η_i,j (H).
We fix some notation.
Let t be a fuse-node in some fuse-expression ϕ.
Since t is not useless, there is at least one successor of t being a union-node.
The union-nodes are the only nodes with more than one child so there exists a unique topmost successor of t being a union-node, we denote it by t_⊕.
The children of t_⊕ are denoted by t_1 and t_2.
For a node t, we call the maximum number of union-nodes on a path from t to any leaf in the subtree rooted at t the ⊕-height of t.
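The ⊕-height can be computed by a simple bottom-up traversal of the parse tree; the sketch below is a minimal illustration assuming a hypothetical Node class (of our own) with kind and children attributes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # e.g. "introduce", "relabel", "join", "fuse", "union"
    children: List["Node"] = field(default_factory=list)

def plus_height(t: Node) -> int:
    # maximum number of union-nodes on a path from t to a leaf of its subtree
    if not t.children:
        return 0
    below = max(plus_height(c) for c in t.children)
    return below + (1 if t.kind == "union" else 0)

# Example: a fuse-node above a union of two introduce-leaves has ⊕-height 1.
expr = Node("fuse", [Node("union", [Node("introduce"), Node("introduce")])])
assert plus_height(expr) == 1
```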
Informally speaking, a fuse-expression we want to achieve in this section has the following two properties.
First, for any pair of distinct vertices that are fused at some point, their fuse happens “as early as possible”.
Namely, two vertices are fused right after the earliest union these vertices participate in together: in particular, these vertices come from different sides of the union.
This will allow us to replace a sequence of fuse-nodes by a so-called glue-node that carries out non-disjoint union of two graphs under a certain restriction.
Second, we want that each edge of the graph is created exactly once.
We split the transformation into several steps.
In the very first step, we shift every fuse-node to the closest union-node below it (see <ref>).
Let k ∈ ℕ and let ϕ be a fuse-k-expression.
Then in time polynomial in |ϕ| + k we can compute a fuse-k-expression of the same labeled graph such that for every fuse node t, every inner node on the path from t to t_⊕ is a fuse-node.
We start with ϕ and transform it as long as there is a fuse-node violating the property of the lemma.
We say that such a node is simply violating.
If there are multiple such nodes, we choose a node t to be processed such that every fuse-node being a proper successor of t satisfies the desired property.
Since any parse tree is acyclic, such a node exists.
So let t be a fuse-node to be processed and let i ∈ [k] be such that t is a θ_i-node.
We will shift t to t_⊕ by applying the rules from <ref> to t and its successors as follows.
When we achieve that the child of t is a union- or a fuse-node, we are done with processing t.
While processing t, with t_c we always refer to the current child of t and by α we denote the operation in t.
Recall that t is not useless so as long as t is processed, α is a join or a relabel.
If α is a join-node, then we apply the rule from <ref> to swap t and t_c.
Otherwise, we have α = ρ_a → b for some a ≠ b ∈ [k].
We proceed depending on the values of a and b.
If a ≠ i and b ≠ i, then the rule from <ref> is applied to swap t and α.
If a = i and b ≠ i (resp. and b = i), then t (resp. t_c) would be useless so this is not the case.
We are left with the case a ≠ i and b = i, i.e., α = ρ_a → i.
Note that here we cannot simply swap the nodes t and t_c since the vertices that have label a at the child of t_c also participate in the fuse at t.
So this is where we will have to apply the rules from <ref> to longer sequences of nodes.
From now on, we always consider the maximal sequence (t_0 = t), t_1, …, t_q (for some q ∈ ℕ) such that for every j ∈ [q], the node t_j is a relabel-node ρ_a_j → i for some a_j ∈ [k] and t_j is a child of t_j-1.
In particular, we have t_1 = t_c.
Since the nodes are not useless, the values {a_1, …, a_q} are pairwise distinct.
Let t' be the child of t_q.
If t' is a join-node, then depending on the joined labels, we apply one of the rules from <ref> to either suppress t' (see <ref> (a)) or shift it above t with possibly changing the labels joined in t' (see <ref> (b)).
If t' is a relabel-node, let c, d ∈ [k] be such that it is a ρ_c → d-node.
By maximality, we have d ≠ i.
Now depending on c and d, we can apply one of the rules from <ref>.
In the case of <ref>, the length of the maximal sequence t_0, …, t_q increases.
In the cases of <ref>, the height of t decreases.
So in any case we make progress.
If t' is a union-node, we apply the rule from <ref>.
Now we may assume that t' is a fuse-node.
Observe that while processing t we have not affected the subtree rooted at t', hence all inner nodes on the path from t' to t'_⊕ are still fuse-nodes.
So there exist r ∈ ℕ with r > q, the nodes (t' = t_q+1), …, t_r, and values b_q+1, …, b_r-1∈ [k] with the following two properties.
First, for every j ∈ [q+1, r-1], the node t_j is a θ_b_j-node while t_r is a union-node.
And second, for every j ∈ [q+1, r], the node t_j is a child of t_j-1.
For ℓ = q+1, …, r we do the following to achieve that t_ℓ is the child of t_q.
This holds at the beginning for ℓ = q+1.
Now let ℓ > q + 1 and suppose this holds for ℓ - 1.
Depending on b_ℓ we apply the rule from <ref> to the sequence (t = t_0), …, t_q, t_ℓ-1.
This either suppresses t_ℓ-1 or shifts it to become the parent of t.
In any case, t_ℓ becomes the child of t_q as desired.
In the end, this holds for ℓ = r, i.e.,
the vertices t, t_1, …, t_q, t_r form a path in the parse tree (see <ref> (c)).
Finally, we apply the rule from <ref> to achieve that t is the parent of the union-node t_r, i.e., t now satisfies the desired condition (see <ref> (d)).
This concludes the description of the algorithm processing t.
Now we argue that the algorithm terminates and takes only polynomial time.
We analyze the process for the node t and then conclude about the whole algorithm.
It can be verified that every application of a rule either decreases the height of t or increases q.
The latter case can only occur at most k times: if q > k, then at least one of t_1, …, t_q would be redundant.
So only a polynomial number of rules is applied until t satisfies the property of the lemma.
The application of any of these rules increases neither the height nor the number of leaves of the parse tree.
On the other hand, suppressing a useless node below t decreases the height of t as well.
So to conclude the proof, it suffices to show that for any fuse-node s, if s satisfied the property of the lemma before processing t, then this still holds after processing t, i.e., the number of violating fuse-nodes decreases.
While processing the node t, some fuse-node t^* ∈{t_q+1, …, t_r-1} might violate our desired property when this node is shifted to become the parent of t after the application of the rule from <ref>.
But observe that after processing t, the path between t^* and t contains only fuse-nodes (which similarly to s have been shifted there as a result of the rule from <ref>) and the child of t is a union-node.
So t^* again satisfies the desired condition.
Therefore, every fuse-node is processed at most once and no new fuse-nodes are created.
There is a linear number of fuse-nodes and a single application of any rule from <ref> can be accomplished in polynomial time.
Above we have argued that per fuse-node, the number of rule applications is polynomial.
Altogether, the algorithm runs in polynomial time.
As the next step, we will shift the fuse-nodes further below so that every fuse-node t fuses exactly two vertices, namely one from G_t_1 with another from G_t_2.
Let k ∈ ℕ and let ϕ be a fuse-k-expression.
Then in time polynomial in |ϕ| + k we can compute a fuse-k-expression of the same labeled graph such that for every fuse node t, the following holds.
First, every inner node on the path from t to t_⊕ is a fuse-node.
Second, let i ∈ [k] be such that t is a θ_i-node.
Then for every pair u ≠ v ∈ lab_t^-1(i), it holds that |{u, v}∩ V(G_t_1)| = |{u, v}∩ V(G_t_2)| = 1.
In particular, we have |lab_t^-1(i)| = 2.
First of all, we apply <ref> to transform ϕ into a fuse-k-expression of the same labeled graph satisfying the properties of that lemma in polynomial time.
We still denote the arising fuse-expression by ϕ for simplicity.
We will now describe how to transform ϕ to achieve the desired property.
We will process fuse-nodes one by one and as invariant, we will maintain that after processing any number of fuse-nodes, the expression still satisfies <ref>.
As long as ϕ does not satisfy the desired property, there is a fuse-node t such that at least two fused vertices u and v come from the same side, say G_t_1 of t_⊕.
We call t violating in this case.
Since ϕ satisfies <ref>, all inner nodes on the path between t and t_⊕ are fuse-nodes.
The vertices u and v have therefore the same label in G_t_1 and they can already be fused before the union.
The way we do it might increase the number of fuse-nodes in the expression so we have to proceed carefully to ensure the desired time complexity.
Now we formalize this idea.
Among violating fuse-nodes, we always pick a node t with the largest ⊕-height to be processed.
Let i ∈ [k] be such that t is a θ_i-node.
First, we subdivide the edge t_⊕ t_1 resp. t_⊕ t_2 with a fresh θ_i-node t_1' resp. t_2' (see <ref> (a)).
Clearly, this does not change the arising labeled graph and t now fuses at most two vertices: at most one from each side of the union.
Now the following sets of nodes may become useless: {t_1', t}, {t_2', t}, {t_1'}, {t_2'}, or ∅, these nodes are therefore suppressed.
In particular, if the node t is not suppressed, then it is not violating anymore.
Let now ∅≠ S ⊆{t_1', t_2'} denote the set of non-suppressed nodes in {t_1', t_2'}.
These nodes now potentially violate <ref>.
So for every node x' in S, we proceed as in the proof of <ref> to “shift” x' to x'_⊕ to achieve that all nodes between x' and x'_⊕ are fuse-nodes (see <ref> (b)).
Note that this shift only affects the path from x' to x'_⊕.
Observe that every node x' ∈ S has strictly smaller ⊕-height than t.
Thus the order of processing violating fuse-nodes implies that after processing any node, the value (a, b) lexicographically decreases where a denotes the maximum ⊕-height over violating fuse-nodes and b denotes the number of violating nodes with ⊕-height a.
Indeed, if t was the only violating node with the maximum ⊕-height, then a decreases after this process.
Otherwise, a remains the same, and b decreases.
Since no new introduce-nodes are created, the maximum ⊕-height does not increase and the value of a is always upper-bounded by |ϕ|.
Further, recall that after processing a fuse-node, the expression again satisfies <ref>, i.e., all inner nodes of a path from any fuse-node s to s_⊕ are fuse-nodes.
Let us map every fuse-node s to s_⊕.
The expression never contains useless fuse-nodes so at most k nodes (i.e., one per label) are mapped to any union-node and the value of b never exceeds k |ϕ|.
Therefore, the whole process terminates after processing at most k |ϕ|^2 fuse-nodes.
Next observe that none of the rules from <ref> increases the length of some root-to-leaf path.
Thus, processing a fuse-node t might increase the maximum length of a root-to-leaf path by at most one, namely due to the creation of nodes t_1' and t_2'.
Since on any root-to-leaf path there are at most |ϕ| ⊕-nodes and there are no useless fuse-nodes, there are at most k |ϕ| fuse-nodes on any root-to-leaf path at any moment.
Initially, the length of any root-to-leaf path is bounded by |ϕ| and during the process it increases by at most one for any fuse-node on it.
Hence, the length of any root-to-leaf path is always bounded by (k+1)|ϕ|.
Altogether, processing a single fuse-node can be done in time polynomial in k and |ϕ| and the running time of the algorithm is polynomial in k and |ϕ|.
Now we may assume that a fuse-expression looks as follows: every union-operation is followed by a sequence of fuse-nodes and each fuse-node fuses exactly two vertices from different sides of the corresponding union.
Thus, we can see the sequence consisting of a union-node and following fuse-nodes as a union of two graphs that are not necessarily vertex-disjoint.
So these graphs are glued at the pairs of fused vertices.
Now we formalize this notion.
A glue-k-expression is a well-formed expression constructed from introduce-, join-, relabel-, and glue-nodes using k labels.
A glue-operation takes as input two k-labeled graphs (H_1, lab_1) and (H_2, lab_2) satisfying the following two properties:
* For every v ∈ V(H_1) ∩ V(H_2), the vertex v has the same label in H_1 and H_2, i.e., we have lab_1(v) = lab_2(v).
* For every v ∈ V(H_1) ∩ V(H_2) and every j ∈ [2], the vertex v is the unique vertex with its label in H_j, i.e., we have |lab_1^-1(lab_1(v))| = |lab_2^-1(lab_2(v))| = 1.
In this case, we call the k-labeled graphs H_1 and H_2 glueable.
The output of this operation is then the union of these graphs denoted by H_1 ⊔ H_2, i.e., the labeled graph (H, lab) with V(H) = V(H_1) ∪ V(H_2) and E(H) = E(H_1) ∪ E(H_2) where the vertex-labels are preserved, i.e., lab(v) = lab_1(v) if v ∈ V(H_1) and lab(v) = lab_2(v) if v ∈ V(H_2); this is well-defined by the first property.
We denote the arising k-labeled graph with H_1 ⊔ H_2 (omitting lab for simplicity) and we call the vertices in V(H_1) ∩ V(H_2) glue-vertices.
Unlike fuse-expressions, if t is a node of a glue-expression ϕ, then G^ϕ_t is a subgraph of G^ϕ.
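To make the two glueability conditions and the effect of ⊔ concrete, here is a small Python sketch of the glue-operation; the representation of a k-labeled graph as a triple (vertex set, edge set, label dictionary) is our own choice for illustration and is not prescribed by the definition.

```python
def glueable(g1, g2):
    # g1, g2: (vertices, edges, lab) with lab mapping every vertex to a label in [k]
    v1, _, lab1 = g1
    v2, _, lab2 = g2
    for v in v1 & v2:
        # shared vertices must agree on their label ...
        if lab1[v] != lab2[v]:
            return False
        # ... and must be the unique vertex of that label on both sides
        if sum(1 for u in v1 if lab1[u] == lab1[v]) != 1:
            return False
        if sum(1 for u in v2 if lab2[u] == lab2[v]) != 1:
            return False
    return True

def glue(g1, g2):
    v1, e1, lab1 = g1
    v2, e2, lab2 = g2
    assert glueable(g1, g2)
    # labels agree on shared vertices, so the merge order of the dictionaries is irrelevant
    return (v1 | v2, e1 | e2, {**lab2, **lab1})

# Example: two edges sharing the glue-vertex "c" (label 2, unique on both sides).
h1 = ({"a", "c"}, {frozenset({"a", "c"})}, {"a": 1, "c": 2})
h2 = ({"b", "c"}, {frozenset({"b", "c"})}, {"b": 1, "c": 2})
h = glue(h1, h2)
assert h[0] == {"a", "b", "c"} and len(h[1]) == 2
```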
Let k ∈ ℕ and let ϕ be a fuse-k-expression.
Then in time polynomial in |ϕ| + k we can compute a glue-k-expression of the same labeled graph.
In polynomial time we can obtain a fuse-k-expression satisfying <ref> that creates the same graph.
For simplicity, we still denote this expression by ϕ.
We assume that the introduce-nodes of ϕ use pairwise distinct titles.
For titles v and w, by identification of v with w we denote the operation that for every i ∈ [k], replaces every leaf v ⟨ i ⟩ of the current expression with a leaf w ⟨ i ⟩.
Informally speaking, our goal is to assign the vertices that are fused at some point in the expression the same title.
Then such vertices will be “automatically” glued by a glue-node.
We start with α := ϕ, and α will always denote the current “mixed” expression, i.e., it potentially contains union-, fuse-, and glue-nodes simultaneously.
We process ⊕-nodes in the order of increasing ⊕-height as follows.
Let t be the union-node to be processed.
Let f^t_1, …, f^t_p for some p ∈ ℕ_0 denote the maximal sequence of predecessors of t in the parse tree of α such that f^t_1, …, f^t_p are fuse-nodes and t, f^t_1, …, f^t_p form a path in α.
Simply speaking, f^t_1, …, f^t_p are exactly the fuse-operations following t.
For j ∈ [p], let i^t_j ∈ [k] be such that f^t_j is a θ_i^t_j-node.
If p > 0, we denote f^t_p by t_θ for simplicity.
If p = 0, then with t_θ we denote t itself.
If t is clear from the context, we sometimes omit this superscript.
We will replace the path t, f^t_1, …, f^t_p with a single glue-node denoted by t_⊔ and identify some titles in the sub-expression of α rooted at t so that we maintain the following invariant.
For every union-node t of ϕ such that t has already been processed, the labeled graphs G^ϕ_t_θ and G^α_t_⊔ are isomorphic.
This has the following implication.
For every node t in α such that t is not a glue-node, the labeled graphs G^α_t and G^ϕ_t are isomorphic.
Up to some formalities, this property just ensures that all sub-expressions still create the same labeled graph.
Let s_1 and s_2 be the children of t in α.
The order of processing then implies that α_s_1 and α_s_2 are glue-k-expressions (i.e., contain no fuse-nodes).
Let t_1 and t_2 be the children of t in ϕ.
By invariant, the labeled graphs G_t_q^ϕ and G_s_q^α are isomorphic for each q ∈ [2].
So since ϕ satisfies <ref>,
for every j ∈ [p] and q ∈ [2], there exists exactly one vertex v^q_i_j in G_t_q^α with label i_j.
Now for every j ∈ [p], we identify v^1_i_j with v^2_i_j in α.
Let ξ denote the arising expression.
Now we replace the sequence t, f_1, …, f_p with a single glue-node denoted by t_⊔.
And let ζ denote the constructed expression.
We claim that G_t_⊔^ζ is isomorphic to G^α_t_θ.
First, note that since union-nodes are processed from bottom to top and in ϕ all titles are pairwise distinct, there was no title occurring in both α_t_1 and α_t_2.
Therefore, after identifying v^1_i_j with v^2_i_j for every j ∈ [p], we still have that the labeled graph G^ξ_t_1 (resp. G^ξ_t_2) is isomorphic to the labeled graph G^α_t_1 (resp. G^α_t_2).
In simple words, no identification has lead to the gluing of vertices inside G^α_t_1 or G^α_t_2.
Moreover, the ordering of processed nodes implies that the titles other than v^2_i_1, …, v^2_i_p are pairwise distinct in ξ_t_1 and ξ_t_2.
Therefore, the glue-node t_⊔ takes two labeled graphs G^ξ_t_1≅ G^α_t_1 and G^ξ_t_2≅ G^α_t_2 with shared vertices {v^2_i_1, …, v^2_i_p} and produces their union.
Observe that this is the same as applying the sequence f_p ∘…∘ f_1 ∘ t to G^α_t_1 and G^α_t_2.
Therefore, we have G^ζ_t_⊔≅ G^α_t_θ as desired.
After t is processed, we set α := ζ to denote the current expression and move on to the next union-node (if exists).
After all nodes are processed, the expression α contains neither union- nor fuse-nodes.
So α is a glue-k-expression such that (by invariant) G^α≅ G^ϕ holds, i.e., α creates the same labeled graph.
The number of union-nodes in ϕ is bounded by |ϕ| and processing a single node can be done in time polynomial in k and the number of leaves of ϕ (i.e., also bounded by |ϕ|).
Hence, the transformation takes time polynomial in k and |ϕ|.
Transforming a glue-k-expression into a fuse-k-expression is trivial: Replace every glue-node by a union-node followed by a sequence θ_i_1, …, θ_i_q of fuse-nodes where i_1, …, i_q are the labels of vertices shared by the glued graphs.
This implies that fuse- and glue-expressions are equivalent and there is no reason to define “glue-width” as a new parameter.
As a last step of our transformations, we show that similarly to the existence of irredundant k-expressions defined by Courcelle and Olariu <cit.> and widely used in dynamic-programming algorithms (e.g., <cit.>), certain irredundancy can be achieved for glue-expressions.
Let k ∈ ℕ and let ϕ be a fuse-k-expression.
Then in time polynomial in |ϕ| + k we can compute a glue-k-expression ξ of the same labeled graph without useless nodes such that:
* Let i, j ∈ [k], let t be a η_i, j-node in ξ, and let t' be the child of t in ξ.
Then G^ξ_t' contains no edge {v, w} with lab^ξ_t'(v) = i and lab^ξ_t'(w) = j.
* Let t be a glue-node in ξ and let t_1 and t_2 be its children.
Then the graphs G^ξ_t_1 and G^ξ_t_2 are edge-disjoint.
* Let t be a glue-node in ξ, let t_1 and t_2 be its children, and let v be a glue-vertex.
Then for every q ∈ [2], the vertex v has an incident edge in G^ξ_t_q.
We call a glue-k-expression satisfying these properties reduced.
First, we apply <ref> to obtain a glue-k-expression ϕ of the same graph in polynomial time.
As in the previous proofs of this section, we will transform ϕ iteratively until it satisfies the desired properties.
In the first phase, as long as there is a join-node t and an edge {v, w} violating the first property, we proceed as follows.
There exists a successor t' of t in ϕ such that t' is a η_i', j'-node for some i', j' ∈ [k], the vertices v and w are vertices of G_t', and it holds that lab^ϕ_t'(v) = i' and lab^ϕ_t'(w) = j'.
There can be multiple such nodes t' so we fix an arbitrary one.
We suppress the node t'.
Let ϕ' denote the arising expression.
Note that once two vertices have the same label, this property is maintained along the expression.
So similarly to the construction of irredundant clique-expressions (see <cit.>), it holds that every edge e' created by t' is also created by t.
Formally, the following holds.
Since t' is a successor of t, for all v' ∈ V(G^ϕ_t') the property lab^ϕ_t'(v') = lab^ϕ_t'(v) implies lab^ϕ_t(v') = lab^ϕ_t(v) and hence, also lab^ϕ'_t(v') = lab^ϕ'_t(v).
The analogous statement holds for vertices w' ∈ V(G^ϕ_t') with lab^ϕ_t'(w') = lab^ϕ_t'(w).
Therefore, the labeled graphs G^ϕ and G^ϕ' are isomorphic.
Now we set ϕ := ϕ', and the process is repeated until ϕ satisfies the first property.
As mentioned above, the node t' is not necessarily unique for t so after t' is suppressed, t and {v, w} can still violate the first property of the lemma.
The number of join-nodes decreased by one though.
Therefore, the process terminates after at most |ϕ| steps and it results in a glue-k-expression of the same labeled graph.
Clearly, each step takes only polynomial time so the running time of this transformation is polynomial.
In the second phase, we proceed similarly to satisfy the second property.
As long as there exist a glue-node t and an edge e = {v, w}∈ E(G^ϕ_t_1) ∩ E(G^ϕ_t_2) violating the second property, we proceed as follows.
Note that v and w are then glue-vertices.
There exists a successor t' of t_1 such that t' is a η_i', j'-node for some i', j' ∈ [k], the vertices v and w are vertices of G_t', and it holds that lab^ϕ_t'(v) = i' and lab^ϕ_t'(w) = j'.
We claim that e is then the only edge created by t', i.e., v (resp. w) is the unique vertex with label i' (resp. j') in G^ϕ_t'.
Suppose not, then without loss of generality, there exists a vertex v' ≠ v in G^ϕ_t' with label i'.
Then the vertices v and v' would also have the same label in G^ϕ_t_1.
But the glueability of G_t_1^ϕ and G_t_2^ϕ implies that v is the unique vertex with label lab^ϕ_t_1(v) in G^ϕ_t_1 – a contradiction.
Let now ϕ' denote the expression arising from ϕ by suppressing t'.
Then it holds G^ϕ'_t_1 = G^ϕ_t_1 - e.
Therefore, G^ϕ'_t = G^ϕ_t and also G^ϕ' = G^ϕ.
Now we set ϕ := ϕ' and repeat the process until ϕ satisfies the second property.
Since in each repetition the expression loses one join-node, the process terminates after at most |ϕ| steps.
Also, one step takes only polynomial time so the total running time is also polynomial.
Clearly, the first condition remains satisfied during the process.
Now we move on to the third property.
Let t and v violate it, i.e., without loss of generality v has no incident edge in G^ϕ_t_1.
The crucial observation for the transformation described below is that since v also belongs to G^ϕ_t_2, it holds that
G^ϕ_t_1⊔ G^ϕ_t_2 = (G^ϕ_t_1 - {v}) ⊔ G^ϕ_t_2
(in particular, the glueability precondition applies to the right-hand side as well).
Hence, we want to transform the sub-expression rooted at t_1 so that it does not create the vertex v.
For this, we simply need to cut off all introduce-nodes using title v from this sub-expression.
Formally, we start with ϕ' := ϕ and as long as ϕ'_t_1 contains a leaf ℓ being a v⟨ i ⟩-node for some i ∈ [k]:
* As long as the parent t' of ℓ is not a glue-node we repeat the following.
Since there are no useless nodes, t' is a ρ_i → j-node so we apply the rule from <ref> to suppress it.
* Now t' is a glue-node so we remove ℓ and suppress t'.
Note that when the last item is reached, the parent t' is a glue-node one of whose input graphs consists of the single vertex v.
A simple inductive argument shows that G^ϕ'_t_1 = G^ϕ_t_1 - {v} holds as desired.
Now we set ϕ := ϕ' and repeat until the third property is satisfied.
Clearly, the first two properties are maintained and since in each repetition the size of the expression decreases by at least one, the process takes only polynomial time.
It is known that for clique-expressions,
the number of leaves of an expression is equal to the number of vertices in the arising graph.
For fuse-, and hence, glue-expressions, the situation is different.
Since a fuse-node, in general, decreases the number of vertices in the arising graph, the number of leaves in a fuse-expression can be unbounded.
However, we now show that the number of leaves of a reduced glue-expression is bounded by 𝒪(m+n) where n resp. m is the number of vertices resp. edges of the arising graph.
This will lead to an upper bound of 𝒪(k^2(m+n)) on the number of nodes of a reduced glue-expression.
Let ϕ be a fuse-k-expression of a graph H on n vertices and m edges.
Then in time polynomial in |ϕ| and k we can compute a reduced glue-k-expression ζ of H such that the parse tree of ζ contains 𝒪(k^2(m+n)) nodes.
By <ref>, in polynomial time we can compute a reduced glue-k-expression ξ of the same labeled graph.
Let L(ξ) denote the set of leaves of ξ.
We start by bounding the size of this set.
Let H = G^ξ.
We will now define a mapping h: L(ξ) → V(H) ∪ E(H) and then show some properties of it.
Let ℓ∈ L(ξ) and let v ∈ V(H) and i ∈ [k] be such that ℓ is a v⟨ i ⟩-node.
If no other leaf in L(ξ) has title v, then we simply set h(ℓ) = v.
Otherwise, there exists at least one other leaf with title v.
Hence, there exists at least one glue-node t such that v is a glue-vertex at t.
Note that any such t is an ancestor of ℓ.
So let g(ℓ) denote the bottommost among such glue-nodes and let s_1 and s_2 be its children.
Without loss of generality, we assume that ℓ belongs to the sub-expression rooted at s_1.
The reducedness of ξ implies that there is an edge e ∈ E(G^ξ_s_1) ∖ E(G^ξ_s_2) incident with v.
We set h(ℓ) = e.
In particular we have e ∈ G^ξ_g(ℓ).
Observe that any vertex is mapped by h either to itself or to an incident edge.
Now let e be an edge of H and let v be one of its end-vertices.
We claim that there exists at most one leaf with title v mapped to e.
For the sake of contradiction, suppose there exist leaves ℓ_1 ≠ℓ_2 and i_1, i_2 ∈ [k] such that ℓ_1 (resp. ℓ_2) is a v⟨ i_1 ⟩-node (resp. v⟨ i_2 ⟩-node) and h(ℓ_1) = h(ℓ_2) = e.
Then let t denote the lowest common ancestor of g(ℓ_1) and g(ℓ_2).
Let q ∈ [2] be arbitrary.
Let t_q be the child of t such that g(ℓ_q) is a (not necessarily proper) descendant of t_q.
The property h(ℓ_q) = e implies that e ∈ E(G^ξ_g(ℓ_q)) ⊆ E(G^ξ_t_q) holds.
Therefore, we obtain e ∈ E(G^ξ_t_1) ∩ E(G^ξ_t_2) contradicting the reducedness of ξ.
So there indeed exists at most one leaf in ξ with title v mapped to e.
Since e has two end-points, there are at most two leaves of ξ mapped to e.
Finally, for every vertex u of H in the image of h, we know that there exists exactly one leaf with title u so there exists at most one leaf of ξ mapped to u.
These two properties imply that the size of the domain of h, i.e., the size of L(ξ), is bounded by 2m + n.
It is folklore that a rooted tree with at most 2m+n leaves has 𝒪(m+n) inner nodes with at least two children.
Hence, ξ has 𝒪(m+n) glue-nodes.
Now we will apply a simple folklore trick (originally applied to clique-expressions)
to bound the number of nodes between any two consecutive glue-nodes.
For this, we apply rules from <ref> of <ref> to ensure that for any two consecutive glue-nodes t_1 and t_2 (where t_1 is an ancestor of t_2), the path from t_2 to t_1 first contains join-nodes and then relabel-nodes.
Any duplicate on this path would be useless.
Therefore, this path contains 𝒪(k^2) relabel- and join-nodes (at most one per possible operation).
Finally, by applying the rule from <ref> of <ref> we can ensure that a parent of any introduce-node is a glue-node.
Let ζ be the arising glue-expression.
Clearly, ζ is still reduced.
The number of relabel- and join-nodes in ζ is now bounded by 𝒪(k^2(m+n)) and so the total numbers of nodes is also bounded by this value as claimed.
§ ALGORITHMS PARAMETERIZED BY FUSION-WIDTH
In this section, we parameterize by fusion-width.
We will present three algorithms for problems that are W[1]-hard when parameterized by clique-width: Hamiltonian Cycle, Max Cut, and Edge Dominating Set.
The algorithms are XP-algorithms and they have the same running time as the tight (under ETH) algorithms parameterized by clique-width.
Fusion-width is upper-bounded by clique-width and this has two implications.
First, the lower bounds from clique-width apply to fusion-width as well, and hence, our algorithms are also ETH-tight.
And second, our results show that these problems can be solved for a larger class of graphs within the same running time.
Each of the following algorithms gets a fuse-k-expression of the input graph and as the very first step transforms it into a reduced glue-k-expression of the same graph in polynomial time (cf. <ref>).
By <ref>, the size of the expression is then linear in the size of the graph.
§.§ Max Cut
In this problem, given a graph G = (V, E) we are asked about the maximum cardinality of E_G(V_1, V_2) over all partitions (V_1, V_2) of V.
In this subsection we solve the Max Cut problem in time n^𝒪(k) given a fuse-k-expression of a graph.
For this, we will rely on the clique-width algorithm by Fomin et al. and use the same dynamic-programming tables <cit.>.
Then, it suffices to only show how to handle glue-nodes.
Later, in the description of the algorithm, we also sketch why general fuse-expressions (i.e., with unrestricted fuse-nodes) seem to be problematic for the algorithm.
Let H be a k-labeled graph.
The table T_H contains all vectors h = (s_1, …, s_k, r) with 0 ≤ s_i ≤ |U^H_i| for every i ∈ [k] and 0 ≤ r ≤ |E(H)| for which there exists a partition (V_1, V_2) of V(H) such that |V_1 ∩ U^H_i| = s_i for every i ∈ [k] and there are at least r edges between V_1 and V_2 in H.
We say that the partition (V_1, V_2) witnesses the vector h.
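For very small labeled graphs, T_H can be generated directly from this definition by enumerating all partitions; the following brute-force Python sketch (our own test harness, not the dynamic program of Fomin et al.) may serve as a reference when checking the glue-rule given below.

```python
from itertools import product

def max_cut_table(vertices, edges, lab, k):
    # Returns the set of vectors (s_1, ..., s_k, r) admitted by the definition of T_H.
    verts = sorted(vertices)
    table = set()
    for assignment in product([0, 1], repeat=len(verts)):
        v1 = {v for v, side in zip(verts, assignment) if side == 0}
        cut = sum(1 for e in edges if len(e & v1) == 1)
        s = tuple(sum(1 for v in v1 if lab[v] == i) for i in range(1, k + 1))
        for r in range(cut + 1):       # "at least r edges": every r up to the cut size
            table.add(s + (r,))
    return table

# Example: a single edge whose end-vertices carry labels 1 and 2 (k = 2).
T = max_cut_table({"a", "b"}, [frozenset({"a", "b"})], {"a": 1, "b": 2}, 2)
assert (1, 0, 1) in T and (1, 1, 0) in T and (1, 1, 1) not in T
```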
For their algorithm, Fomin et al. provide how to compute these tables for nodes of a k-expression of the input graph.
In particular, they show that this table can be computed for a k-labeled graph H with n vertices in time n^𝒪(k) in the following cases:
* If H consists of a single vertex.
* If H = ρ_i → j(H') for some i, j ∈ [k] and a k-labeled graph H' given T_H'.
* If H = η_i, j(H') for some i ≠ j ∈ [k] and a k-labeled graph H' given T_H'.
The correctness of their algorithm requires that a k-expression is irredundant, i.e., no join-node creates an already existing edge.
Our extended algorithm will process a reduced glue-expression, and the first property in <ref> will ensure that this holds, so the approach of Fomin et al. can indeed be adopted for the above three types of nodes.
First, let us mention that processing a fuse-node θ_i seems problematic (at least using the same records).
Some vertices of label i might have common neighbors.
So when fusing these vertices, multiple edges fall together but we do not know how many of them so it is unclear how to update the records correctly.
For this reason, we make use of glue-expressions where, as we will see, the information stored in the records suffices.
To complete the algorithm for the fusion-width parameterization, we provide a way to compute the table T_H for H = H^1 ⊔ H^2, where H^1 and H^2 are glueable edge-disjoint k-labeled graphs, given the tables T_H^1 and T_H^2.
Let {v_1, …, v_q} = V(H^1) ∩ V(H^2) for some q ∈ ℕ_0 and let i_1, …, i_q be the labels of v_1, …, v_q in H^1, respectively.
The glueability implies that for every j ∈ [q], it holds that |U^H^1_i_j| = |U^H^2_i_j| = 1.
Hence, for every entry (s_1, …, s_k, r) of T_H^1 and every j ∈ [q], it also holds that s_i_j∈{0, 1} with s_i_j = 1 if and only if v_j is put into V_1 in the partition witnessing this entry.
The same holds for the entries in T_H^2.
This gives the following way to compute the table T_H.
It will be computed in a table S_H.
We initialize this table to be empty.
Then we iterate through all pairs of vectors h^1 = (s_1^1, …, s_k^1, r^1) from T_H^1 and h^2 = (s_1^2, …, s_k^2, r^2) from T_H^2.
If there is an index j ∈ [q] such that s_i_j^1 ≠ s_i_j^2, then we skip this pair.
Otherwise, for every 0 ≤ r ≤ r^1 + r^2, we add to S_H the vector h = (s_1, …, s_k, r) where for all i ∈ [k] we set s_i = s_i^1 + s_i^2 if i ∉{i_1, …, i_q} and s_i = s_i^1 · s_i^2 if i ∈{i_1, …, i_q},
and we call h^1 and h^2 compatible in this case.
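The combination step just described translates directly into the following Python sketch; entries are represented as tuples (s_1, …, s_k, r), glue_labels stands for {i_1, …, i_q}, and all names are ours.

```python
def combine_glue_maxcut(T1, T2, glue_labels, k):
    # T1, T2: the tables of the two glued parts; entries are tuples (s_1, ..., s_k, r).
    S = set()
    for h1 in T1:
        for h2 in T2:
            # compatibility: the 0/1 choice for every glue-vertex must coincide
            if any(h1[i - 1] != h2[i - 1] for i in glue_labels):
                continue
            s = tuple(
                h1[i - 1] * h2[i - 1] if i in glue_labels else h1[i - 1] + h2[i - 1]
                for i in range(1, k + 1)
            )
            for r in range(h1[k] + h2[k] + 1):
                S.add(s + (r,))
    return S

# Example: glue two single edges a-c and b-c at the vertex c (label 2, k = 2).
# Both parts have the same table; the entry (1, 0, 1) puts the label-1 end into V_1,
# c into V_2, and cuts one edge.
T_part = {(0, 0, 0), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0)}
assert (2, 0, 2) in combine_glue_maxcut(T_part, T_part, {2}, 2)  # cut of size 2 in the path a-c-b
```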
The table S_H contains exactly the same entries as T_H.
For the one direction, let h^1 = (s_1^1, …, s_k^1, r^1) and h^2 = (s_1^2, …, s_k^2, r^2) be compatible entries of T_H^1 and T_H^2, respectively.
And let (V^1_1, V^1_2) and (V^2_1,V^2_2) be the partitions witnessing h^1 in H^1 and h^2 in H^2, respectively.
Also let 0 ≤ r ≤ r_1 + r_2.
We claim that then (V_1, V_2) with V_1 = V^1_1 ∪ V^2_1 and V_2 = V^1_2 ∪ V^2_2 is a partition witnessing a vector h = (s_1, …, s_k, r) of S_H constructed as above in H so h belongs to T_H.
First, we show that this is a partition of V(H).
We have
V(H) = V(H^1) ∪ V(H^2) = (V^1_1 ∪ V^1_2) ∪ (V^2_1 ∪ V^2_2) = (V^1_1 ∪ V^2_1) ∪ (V^1_2 ∪ V^2_2) = V_1 ∪ V_2.
Since the sets V^1_1 and V^1_2 are disjoint and also the sets V^2_1 and V^2_2 are disjoint, we have:
V_1 ∩ V_2 = (V^1_1 ∩ V^2_2) ∪ (V^2_1 ∩ V^1_2).
Any vertex v in V^1_1 ∩ V^2_2 belongs to both H^1 and H^2 so there exists an index j ∈ [q] with v = v_j.
The property v ∈ V^1_1 then implies s^1_i_j = 1 while the property v ∈ V^2_2 implies s^2_i_j = 0.
So we obtain s^1_i_j≠ s^2_i_j contradicting the fact that h^1 and h^2 are compatible.
Therefore, (V^1_1 ∩ V^2_2) is empty.
A symmetric argument shows that (V^2_1 ∩ V^1_2) is empty as well.
Hence V_1 and V_2 are disjoint and they indeed form a partition of V(H).
Let j ∈ [q].
Since v_j is the unique vertex with label i_j in H, the set V_1 contains exactly one vertex of this label if s_i_j^1 = s_i_j^2 = 1 and zero such vertices if s_i_j^1 = s_i_j^2 = 0.
So we have
|V_1 ∩ U^H_i_j| = s_i_j^1 · s_i_j^2 = s_i_j.
For every i ∈ [k] ∖{i_1, …, i_q} the sets U^H^1_i and V(H^2) as well as U^H^2_i and V(H^1) are disjoint by definition of {i_1, …, i_q} and therefore:
|V_1 ∩ U^H_i| = |V_1 ∩ (U^H^1_i ∪ U^H^2_i)| =
|(V_1 ∩ U^H^1_i) ∪ (V_1 ∩ U^H^2_i)| =
|(V_1 ∩ U^H^1_i)| + |(V_1 ∩ U^H^2_i)| =
|(V_1^1 ∩ U^H^1_i)| + |(V_1^2 ∩ U^H^2_i)| =
s_i^1 + s_i^2 =
s_i.
Finally, we bound the number of edges in E_H(V_1, V_2).
It holds that E_H^b(V^b_1, V^b_2) ⊆ E_H(V^b_1, V^b_2) ⊆ E_H(V_1, V_2) for every b ∈ [2] so we obtain
E_H^1(V^1_1, V^1_2) ∪ E_H^2(V^2_1, V^2_2) ⊆ E_H(V_1, V_2).
Recall that the graphs H^1 and H^2 are edge-disjoint, then we have
r ≤ r^1 + r^2 ≤ |E_H^1(V^1_1, V^1_2)| + |E_H^2(V^2_1, V^2_2)| ≤ |E_H(V_1, V_2)|.
So (V_1, V_2) is indeed a partition witnessing h in H.
For the other direction, let h = (s_1, …, s_k, r) be an entry of T_H.
Then there exists a partition (V_1, V_2) of V(H) witnessing h.
We will show that there exist entries h^1 and h^2 of T_H^1 and T_H^2, respectively, such that the above algorithm adds h to the table S_H at the iteration of h^1 and h^2.
We set V^j_2_j_1 = V_j_1∩ V(H^j_2) for j_1, j_2 ∈ [2].
Let j ∈ [2] be arbitrary but fixed.
Since (V_1, V_2) is a partition of V(H) and we have V(H^j) ⊆ V(H), the pair (V_1^j, V_2^j) is a partition of V(H^j).
Let r^j = |E_H^j(V_1^j, V_2^j)| and let h^j = (s_1^j, …, s_k^j, r^j) be the vector such that (V_1^j, V_2^j) witnesses h^j.
First, consider i ∈{i_1, …, i_q} and let p ∈ [q] be such that i = i_p.
Recall that glueability implies that s_i^j, s_i ∈{0, 1}.
If s_i = 1, then we have v_p ∈ V_1 and therefore also v_p ∈ V_1^j so we obtain s_i^j = 1.
Similarly, if s_i = 0, then we have v_p ∈ V_2 and therefore also v_p ∈ V_2^j so we obtain s_i^j = 0.
Therefore, it holds that s_i = s_i^1 · s_i^2.
Next, consider i ∈ [k] ∖{i_1, …, i_q}.
The sets U^H^1_i and U^H^2_i are disjoint.
Therefore, the sets V_1 ∩ U^H^1_i = V_1^1 ∩ U^H^1_i and V_1 ∩ U^H^2_i = V_1^2 ∩ U^H^2_i partition V_1 ∩ U^H_i so we obtain s_i = s_i^1 + s_i^2.
Let W^1 = V(H^1) ∖ V(H^2), W^2 = V(H^2) ∖ V(H^1), and let I = V(H^1) ∩ V(H^2).
For j_1, j_2 ∈ [2], let W^j_1_j_2 = W^j_1∩ V_j_2 and let I_j_2 = I ∩ V_j_2.
Then W^1_j, W^2_j, and I_j partition V_j for j ∈ [2].
The following holds:
E_H(V_1, V_2) =
E_H(W^1_1, W_2^1) ∪ E_H(W^1_1, I_2) ∪ E_H(W^1_1, W_2^2) ∪
E_H(I_1, W_2^1) ∪ E_H(I_1, I_2) ∪ E_H(I_1, W_2^2) ∪
E_H(W^2_1, W_2^1) ∪ E_H(W^2_1, I_2) ∪ E_H(W^2_1, W_2^2) =
E_H(W^1_1, W_2^1) ∪ E_H(W^1_1, I_2) ∪ E_H(W^1_1, W_2^2) ∪
E_H(I_1, W_2^1) ∪(E_H^1(I_1, I_2) ∪ E_H^2(I_1, I_2)) ∪ E_H(I_1, W_2^2) ∪
E_H(W^2_1, W_2^1) ∪ E_H(W^2_1, I_2) ∪ E_H(W^2_1, W_2^2).
Therefore, the sets occurring after the second equality are pairwise disjoint so the size of E_H(V_1, V_2) is the sum of their sizes.
Next, recall that every edge of H is either an edge of H^1 or of H^2 and therefore, for every edge of H, there exists an index j^* such that both end-points of this edge belong to H^j*.
Therefore, the sets E_H(W^1_1, W_2^2) and E_H(W^2_1, W_2^1) are empty.
This also implies the following equalities:
E_H(W^1_1, W_2^1) = E_H^1(W^1_1, W_2^1)
E_H(W^1_1, I_2) = E_H^1(W^1_1, I_2)
E_H(I_1, W_2^1) = E_H^1(I_1, W_2^1)
E_H(I_1, W_2^2) = E_H^2(I_1, W_2^2)
E_H(W^2_1, I_2) = E_H^2(W^2_1, I_2)
E_H(W^2_1, W_2^2) = E_H^2(W^2_1, W_2^2)
Now by using these properties we obtain
E_H(V_1, V_2) =
E_H^1(W^1_1, W_2^1) ∪ E_H^1(W^1_1, I_2) ∪
E_H^1(I_1, W_2^1) ∪ E_H^1(I_1, I_2) ∪ E_H^2(I_1, I_2) ∪ E_H^2(I_1, W_2^2) ∪
E_H^2(W^2_1, I_2) ∪ E_H^2(W^2_1, W_2^2).
By rearranging the terms we get
E_H(V_1, V_2) =
(E_H^1(W^1_1, W_2^1) ∪ E_H^1(W^1_1, I_2) ∪ E_H^1(I_1, W_2^1) ∪ E_H^1(I_1, I_2)) ∪
(E_H^2(I_1, I_2) ∪ E_H^2(I_1, W_2^2) ∪ E_H^2(W^2_1, I_2) ∪ E_H^2(W^2_1, W_2^2)).
Finally, note that for j_1, j_2 ∈ [2], the pair (W^j_1_j_2, I_j_2) is a partition of V^j_1_j_2.
So we obtain
E_H(V_1, V_2) = E_H^1(V^1_1, V^1_2) ∪ E_H^2(V^2_1, V^2_2).
Since the graphs H^1 and H^2 are edge-disjoint, we get
r^1 + r^2 = |E_H^1(V^1_1, V^1_2)| + |E_H^2(V^2_1, V^2_2)| = |E_H(V_1, V_2)| = r.
Therefore, at the iteration corresponding to h^1 and h^2 the algorithm indeed adds h to S_H.
This concludes the proof of the correctness of the algorithm.
Observe that if a graph H has n nodes, the table T_H contains n^𝒪(k) entries.
Therefore, this table can be computed from T_H^1 and T_H^2 in time n^𝒪(k) as well.
This results in an algorithm that given a graph H together with a reduced glue-k-expression ξ of H, traverses the nodes x of ξ in a standard bottom-up manner and computes the tables T_G^ξ_x in time n^𝒪(k).
Let y denote the root of ξ.
Then G^ξ_y is exactly the graph H so we output the largest integer r such that T_G^ξ_y contains an entry (s_1, …, s_k, r) for some s_1, …, s_k ∈ ℕ_0.
By definition, this value is then the size of the maximum cardinality cut of the graph H.
Given a fuse-k-expression of a graph H, the Max Cut problem can be solved in time n^𝒪(k).
Fomin et al. have also shown the following lower bound:
<cit.>
Let H be an n-vertex graph given together with a k-expression of H.
Then the Max Cut problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails.
Since any k-expression of a graph is, in particular, a fuse-k-expression of the same graph, the lower bound transfers to fuse-k-expressions as well thus showing that our algorithm is tight under ETH.
Let H be an n-vertex graph given together with a fuse-k-expression of H.
Then the Max Cut problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails.
§.§ Edge Dominating Set
In this problem, given a graph G = (V, E) we are asked about the cardinality of a minimum set X ⊆ E such that every edge in E either belongs to X itself or it has an incident edge in X.
In this section, we provide a way to handle the glue-nodes in order to solve the Edge Dominating Set problem.
As in the previous subsection, we rely on the dynamic programming algorithm for the clique-width parameterization by Fomin et al. and use their set of records defined as follows.
For a k-labeled graph H, the table T_H contains all vectors (s_1, …, s_k, r_1, …, r_k, ℓ) of non-negative integers such that there exists a set S ⊆ E(H) and a set R ⊆ V(H) ∖ V(S) with the following properties:
* |S| ≤ℓ≤ |E(H)|;
* for every i ∈ [k], exactly s_i vertices of U^H_i are incident with the edges of S;
* for every i ∈ [k], we have |R ∩ U^H_i| = r_i;
* every edge of H undominated by S has an end-vertex in R.
We say that the pair (S, R) witnesses the vector (s_1, …, s_k, r_1, …, r_k, ℓ) in H.
The last property reflects that it is possible to attach a pendant edge to every vertex in R so that the set S together with these pendant edges dominates all edges of H.
In the following, we will sometimes use this view in our arguments and denote the set of edges pendant at vertices of R by E^R.
Note that since no vertex incident with S belongs to R, we have s_i + r_i ≤ |U^H_i| for any i ∈ [k].
In particular, for every i ∈{i_1, …, i_q}, the property r_i = 1 implies s_i = 0 and therefore
r_i ∧ ¬s_i = r_i.
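As with Max Cut, T_H can be generated by brute force for tiny instances; the sketch below (again our own test harness, not part of the algorithm) enumerates all pairs (S, R) and may be used to check the glue-rule given below.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def eds_table(vertices, edges, lab, k):
    edges = [frozenset(e) for e in edges]
    table = set()
    for S in powerset(edges):
        covered = set().union(*S)          # vertices incident with an edge of S
        for R in powerset(set(vertices) - covered):
            R = set(R)
            # every edge undominated by S must have an end-vertex in R
            if any(not (e & covered) and not (e & R) for e in edges):
                continue
            s = tuple(sum(1 for v in covered if lab[v] == i) for i in range(1, k + 1))
            r = tuple(sum(1 for v in R if lab[v] == i) for i in range(1, k + 1))
            for l in range(len(S), len(edges) + 1):
                table.add(s + r + (l,))
    return table

# Example: the single edge a(label 1)-b(label 2); choosing S = {ab} yields (1, 1, 0, 0, 1).
T = eds_table({"a", "b"}, [{"a", "b"}], {"a": 1, "b": 2}, 2)
assert (1, 1, 0, 0, 1) in T and (0, 0, 0, 0, 0) not in T
```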
For their algorithm, Fomin et al. provide how to compute these tables for nodes of a k-expression of the input graph.
In particular, they show that this table can be computed for a k-labeled graph H with n vertices in time n^𝒪(k) in the following cases:
* If H consists of a single vertex.
* If H = ρ_i → j(H') for some i, j ∈ [k] and a k-labeled graph H' given the table T_H'.
* If H = η_i, j(H') for some i ≠ j ∈ [k] and a k-labeled graph H' given the table T_H'.
Similarly to the previous subsection, let us mention that processing a fuse-node θ_i seems problematic (at least using the same records).
Some vertices of label i might have common neighbors.
So when fusing these vertices, multiple edges of the set S of a partial solution fall together but we do not know how many of them so it is unclear how to update the records correctly.
For this reason, we make use of glue-expressions where, as we will see, the information stored in the records suffices.
To complete the algorithm for the fusion-width parameterization, we provide a way to compute the table T_H for H = H^1 ⊔ H^2, where H^1 and H^2 are glueable edge-disjoint k-labeled graphs, given the tables T_H^1 and T_H^2.
Let {v_1, …, v_q} = V(H^1) ∩ V(H^2) for some q ∈ ℕ_0 and let i_1, …, i_q be the labels of v_1, …, v_q in H^1, respectively.
Then for every j ∈ [q], it holds that |U^H^1_i_j| = |U^H^2_i_j| = 1.
Hence, for every entry (s_1, …, s_k, r_1, …, r_k, ℓ) of T_H^1 and every j ∈ [q], it holds that s_i_j + r_i_j≤ 1.
The same holds for the entries in T_H^2.
This motivates the following way to compute the table T_H.
It will be computed in a table S_H.
We initialize this table to be empty.
Then we iterate through all pairs of vectors h^1 = (s_1^1, …, s_k^1, r_1^1, …, r_k^1, ℓ^1) from T_H^1 and h^2 = (s_1^2, …, s_k^2, r_1^2, …, r_k^2, ℓ^2) from T_H^2 and for every ℓ^1 + ℓ^2 ≤ℓ≤ |E(H)|, we add the vector (s_1, …, s_k, r_1, …, r_k, ℓ) defined as follows.
For every i ∈ [k] ∖{i_1, …, i_q}, it holds that s_i = s_i^1 + s_i^2 and r_i = r_i^1 + r_i^2.
And for every i ∈{i_1, …, i_q}, it holds that s_i = s_i^1 ∨ s_i^2 and r_i = ¬s_i^1 ∧ ¬s_i^2 ∧ (r_i^1 ∨ r_i^2).
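In code, this glue-rule for Edge Dominating Set reads as follows; entries are tuples (s_1, …, s_k, r_1, …, r_k, ℓ), m stands for |E(H)| of the glued graph, and the names are again ours.

```python
def combine_glue_eds(T1, T2, glue_labels, k, m):
    # T1, T2: tables of the two glued parts; entries are tuples
    # (s_1, ..., s_k, r_1, ..., r_k, l);  m = |E(H)| of the glued graph.
    S = set()
    for h1 in T1:
        for h2 in T2:
            s, r = [], []
            for i in range(k):
                s1, s2 = h1[i], h2[i]
                r1, r2 = h1[k + i], h2[k + i]
                if (i + 1) in glue_labels:
                    s.append(s1 | s2)                          # matched on either side
                    r.append((1 - s1) & (1 - s2) & (r1 | r2))  # pendant edge only if unmatched
                else:
                    s.append(s1 + s2)
                    r.append(r1 + r2)
            for l in range(h1[2 * k] + h2[2 * k], m + 1):
                S.add(tuple(s) + tuple(r) + (l,))
    return S

# Toy check: H^1 is the edge a(1)-c(2), H^2 is the edge b(1)-c(2), glued at c.
# Combining "a-c is in the solution" with "c carries a pendant edge" yields an entry
# witnessing an edge dominating set of size 1 for the path a-c-b.
h1 = {(1, 1, 0, 0, 1)}   # s = (1, 1), r = (0, 0), l = 1
h2 = {(0, 0, 0, 1, 0)}   # s = (0, 0), r = (0, 1), l = 0
assert (1, 1, 0, 0, 1) in combine_glue_eds(h1, h2, {2}, 2, 2)
```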
The table S_H contains exactly the same entries as T_H.
For the one direction, let h^1 = (s_1^1, …, s_k^1, r_1^1, …, r_k^1, ℓ^1) be an entry of T_H^1 and h^2 = (s_1^2, …, s_k^2, r_1^2, …, r_k^2, ℓ^2) be an entry of T_H^2.
So for j ∈ [2], there exists a pair (S^j, R^j) witnessing h^j in H^j.
Also let ℓ∈_0 be such that ℓ^1 + ℓ^2 ≤ℓ≤ |E(H)| and let h = (s_1, …, s_k, r_1, …, r_k, ℓ) be the entry constructed by the algorithm from h^1 and h^2.
We now show how to construct a pair (S, R) witnessing h in H.
First, let S = S^1 ∪ S^2.
Then we have
|S| ≤ |S^1| + |S^2| ≤ℓ^1 + ℓ^2 ≤ℓ.
Now for every i ∈ [k], we determine the number s_i' of vertices in U^H_i = U^H^1_i∪ U^H^2_i incident with an edge of S.
For i ∈ [k] ∖{i_1, …, i_q}, the sets U^H^1_i and U^H^2_i are disjoint so we obtain s_i' = s^1_i + s^2_i = s_i.
For every j ∈{1, …, q}, the value s_i_j' reflects whether the vertex v_j has an incident edge in S.
Similarly, the values s_i_j^1 and s_i_j^2 reflect whether v_j has an incident edge in S^1 and S^2, respectively.
Due to S = S^1 ∪ S^2, we obtain s_i_j' = s_i_j^1 ∨ s_i_j^2 = s_i_j.
Altogether, we obtain s_i = s_i' for every i ∈ [k].
Next we set R = (R^1 ∪ R^2) ∖ V(S).
Now for every i ∈ [k], let r_i' denote the size of R ∩ U^H_i, i.e., the number of vertices with label i that have a pendant edge attached to them.
Recall that we have U^H_i = U^H^1_i ∪ U^H^2_i.
First, consider i ∈ [k] ∖{i_1, …, i_q}.
In this case, the sets U^H^1_i and U^H^2_i are disjoint.
We claim that in this case we simply have R ∩ U^H_i = (R^1 ∪ R^2) ∩ U^H_i.
Consider a vertex v ∈ R^1 ∩ U^H^1_i.
Since (S^1, R^1) witnesses h^1 in H, the vertex v has no incident edge in S^1 and since v does not belong to H^2, it also has no incident edge in S^2.
So v has no incident edge in S and therefore belongs to R.
The symmetric argument shows that the vertices of R^2 ∩ U^H^2_i belong to R.
So we obtain R ∩ U^H_i = (R^1 ∪ R^2) ∩ U^H_i and since the sets R^1 ∩ U^H^1_i and R^2 ∩ U^H^2_i are disjoint, we get
|R ∩ U^H_i| = |R^1 ∩ U^H^1_i| + |R^2 ∩ U^H^2_i| = r^1_i + r^2_i = r_i.
Now let j ∈ [q].
Recall that there exists a unique vertex v_j ∈ U^H_i_j.
Also, the vertex v_j is the unique vertex in the set U^H^1_i_j as well as in U^H^2_i_j.
By construction, this vertex belongs to R if and only if it belongs to R^1 ∪ R^2 and has no incident edge in S, i.e.,
r_i_j' = (r^1_i_j ∨ r^2_i_j) ∧ ¬s_i_j =
(r^1_i_j ∨ r^2_i_j) ∧ ¬(s^1_i_j ∨ s^2_i_j) =
(r^1_i_j ∨ r^2_i_j) ∧ ¬s^1_i_j ∧ ¬s^2_i_j
= r_i_j.
So we obtain r_i' = r_i for every i ∈ [k].
It remains to prove that the pendant edges from E^R dominate all edges of E(H) undominated by S.
So let e be an edge of E(H) undominated by S.
Without loss of generality, assume that e belongs to H^1.
First, e is an edge of H^1 undominated by S^1 and therefore, it has an end-point v in R^1.
Second, since e is not dominated by S, in particular, the vertex v has no incident edge in S and therefore, by construction, the vertex v also belongs to R, so e is dominated by a pendant edge from E^R as desired.
Altogether, we have shown that (S, R) witnesses h in H and therefore, the vector h belongs to T_H.
For the other direction, we consider a vector h = (s_1, …, s_k, r_1, …, r_k, ℓ) from T_H.
Let (S, R) be the pair witnessing h in H.
For j ∈ [2], let S^j = S ∩ E(H^j) and let ℓ^j = |S^j|.
We then have ℓ^j ≤ |E(H^j)|.
Since the graphs H^1 and H^2 are edge-disjoint, we obtain
ℓ^1 + ℓ^2 = |S^1| + |S^2| = |S| ≤ℓ.
For i ∈ [k] and j ∈ [2], let s_i^j = |V(S^j) ∩ U^H^j_i|.
First, let i ∈ [k] ∖{i_1, …, i_q}, j ∈ [2], and let v be a vertex from U^H_i ∩ V(H^j).
Recall that U^H^1_i and U^H^2_i are disjoint.
Then by construction, the vertex v has an incident edge in S if and only if it has one in S^j.
This, together with, again, the disjointness of U^H^1_i and U^H^2_i implies that s_i = s_i^1 + s_i^2 holds.
Now let j ∈ [q].
Then the unique vertex v_j ∈ U^H_i_j has an incident edge in S if and only if it has an incident edge in S^1 or in S^2, i.e., we have s_i_j = s^1_i_j ∨ s^2_i_j.
Now we construct the sets R^1 and R^2 from S and R as follows.
For j ∈ [2], we set
R^j = (R ∩ V(H^j)) ∪{v_p | p ∈ [q], s_i_p = 1, s_i_p^j = 0}.
And for i ∈ [k] and j ∈ [2], we set r^j_i = |R^j ∩ U^H^j_i|.
Observe that for i ∈{i_1, …, i_q} and j ∈ [2], we then have
r^j_i = r_i ∨ (s_i ∧ ¬ s_i^j)
and for i ∈ [k] ∖{i_1, …, i_q}, we have
r_i = r^1_i + r^2_i
since U^H^1_i and U^H^2_i are disjoint.
Let now j ∈ [2] be arbitrary but fixed.
We show that the edges of S^j together with pendant edges at vertices in R^j dominate all edges of H^j.
So let e be an edge of H^j.
Since e is also an edge of H, it is dominated by S ∪ E^R.
So there exists an end-vertex u of e such that u is incident with an edge in S or u belongs to R.
If u belongs to R, then by construction u also belongs to R^j and so e is dominated by a pendant edge from E^R^j.
So we now may assume that u does not belong to R and it has an incident edge e' of H in S.
First, assume that we have u ∉{v_1, …, v_q}.
Since u is not a glue-vertex, any edge of H incident with u must be an edge of H^j, i.e., we have e' ∈ S ∩ E(H^j) = S^j.
So e is dominated by S^j.
Now we may assume that u ∈{v_1, …, v_q} holds and let p ∈ [q] be such that u = v_p.
Suppose e is not dominated by S^j and the edges pendant at R^j.
In particular, it implies that u does not belong to R^j.
Since u = v_p is the unique vertex in U^H^j_i_p, we have s^j_i_p = 0 and r^j_i_p = 0.
Recall that by the above assumption, the vertex u has an incident edge in S, i.e., s_i_p = 1.
But this contradicts the equality <ref>.
Thus, the edge e is dominated by S^j ∪ E^R^j.
Since e was chosen arbitrarily, this holds for any edge of H^j.
Altogether, we have shown that (S^j, R^j) witnesses h^j in H^j and therefore, h^j is an entry of T_H^j.
It remains to show that at the iteration corresponding to h^1 and h^2, the algorithm adds h to S_H.
So let (s_1', r_1', …, s_k', r_k', ℓ') be an entry added to S_H such that ℓ' = ℓ holds.
Above we have shown that ℓ^1 + ℓ^2 ≤ℓ holds so such an entry indeed exists.
Also, we have already shown that s_i = s_i^1 + s_i^2 for any i ∈ [k] ∖{i_1, …, i_q} and s_i = s_i^1 ∨ s_i^2 for any i ∈{i_1, …, i_q}.
So it holds that s_i = s_i' for any i ∈ [k].
It remains to show that r_i = r_i' holds for any i ∈ [k] as well.
Recall that by the construction of the algorithm, we have r_i' = r_i^1 + r_i^2 for any i ∈ [k] ∖{i_1, …, i_q}.
The equality (<ref>) then implies that r_i = r_i' holds.
For i ∈{i_1, …, i_q}, the algorithm sets r_i' = ¬ s_i^1 ∧ ¬ s_i^2 ∧ (r_i^1 ∨ r_i^2).
We then obtain
r_i' = ¬ s_i^1 ∧ ¬ s_i^2 ∧ (r_i^1 ∨ r_i^2) (<ref>)=
¬ s_i^1 ∧ ¬ s_i^2 ∧ ((r_i ∨ [s_i ∧ ¬ s_i^1]) ∨ (r_i ∨ [s_i ∧ ¬ s_i^2])) s_i = s_i^1 ∨ s_i^2=
¬ s_i^1 ∧ ¬ s_i^2 ∧ ((r_i ∨ [(s_i^1 ∨ s_i^2) ∧ ¬ s_i^1]) ∨ (r_i ∨ [(s_i^1 ∨ s_i^2) ∧ ¬ s_i^2])) =
¬ s_i^1 ∧ ¬ s_i^2 ∧ ((r_i ∨ (s_i^2 ∧ ¬ s_i^1)) ∨ (r_i ∨ (s_i^1 ∧ ¬ s_i^2))) =
¬ s_i^1 ∧ ¬ s_i^2 ∧ (r_i ∨ (s_i^2 ∧ ¬ s_i^1) ∨ (s_i^1 ∧ ¬ s_i^2)) =
¬ s_i^1 ∧ ¬ s_i^2 ∧ r_i =
¬ (s_i^1 ∨ s_i^2) ∧ r_i =
¬ s_i ∧ r_i (<ref>)=
r_i.
So we indeed obtain r_i = r_i' for every i ∈ [k].
Therefore, at the iteration corresponding to the entries h^1 and h^2 of T_H^1 and T_H^2, respectively, the algorithm indeed adds the entry h to S_H.
Altogether, we obtain that S_H = T_H and the provided algorithm indeed computes the table T_H given the tables T_H^1 and T_H^2.
Observe that if a graph H has n nodes, the table T_H contains n^𝒪(k) entries.
Therefore, this table can be computed from T_H^1 and T_H^2 in time n^𝒪(k) as well.
This results in an algorithm that given a graph H together with a reduced glue-k-expression ξ of H, traverses the nodes x of the expression in a standard bottom-up manner and computes the tables T_G^ξ_x in time n^𝒪(k).
Let y denote the root of ξ.
Then G^ξ_y is exactly the graph H.
As noted by Fomin et al., the size of the minimum edge dominating set of H is the smallest integer ℓ such that the table T_G^ξ_y contains an entry (s_1, …, s_k, 0, …, 0, ℓ) for some s_1, …, s_k ∈_0.
So the algorithm outputs this value.
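For concreteness, the combine step at a glue-node can be sketched as follows in Python; the function and parameter names are ours and purely illustrative. Entries are assumed to be stored as flat tuples (s_1, r_1, …, s_k, r_k, ℓ), glue_labels collects the (0-based) positions of the labels i_1, …, i_q holding a single shared glue-vertex, and, as in the correctness argument, a combined entry is kept for every budget ℓ ≥ ℓ^1 + ℓ^2:

from itertools import product

def combine_glue_tables(T1, T2, k, glue_labels, max_edges):
    """Sketch of the combine step at a glue-node (illustrative names)."""
    S_H = set()
    for e1, e2 in product(T1, T2):
        l1, l2 = e1[-1], e2[-1]
        entry = []
        for i in range(k):
            s1, r1 = e1[2 * i], e1[2 * i + 1]
            s2, r2 = e2[2 * i], e2[2 * i + 1]
            if i in glue_labels:
                # one shared vertex: matched if matched on either side; it keeps
                # a pendant edge only if unmatched on both sides but marked on one
                s = int(s1 or s2)
                r = int((not s1) and (not s2) and (r1 or r2))
            else:
                # the two label classes are disjoint, so the counts add up
                s, r = s1 + s2, r1 + r2
            entry += [s, r]
        # keep the combined entry for every budget ell >= ell^1 + ell^2
        for ell in range(l1 + l2, max_edges + 1):
            S_H.add(tuple(entry + [ell]))
    return S_H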
Given a fuse-k-expression of a graph H, the Edge Dominating Set problem can be solved in time n^𝒪(k).
Fomin et al. have also shown the following lower bound:
<cit.>
Let H be an n-vertex graph given together with a k-expression of H.
Then the Edge Dominating Set problem cannot be solved in time f(k)n^o(k) for any computable function f unless the ETH fails.
Since any k-expression of a graph is, in particular, its fuse-k-expression, the lower bound transfers to fuse-k-expressions as well thus showing that our algorithm is tight under ETH.
Let H be an n-vertex graph given together with a fuse-k-expression of H.
Then the Edge Dominating Set problem cannot be solved in time f(k)n^o(k) for any computable function f unless the ETH fails.
§.§ Hamiltonian Cycle
In this problem, given a graph G = (V, E) we are asked about the existence of a cycle visiting each vertex exactly once.
Here we provide an algorithm solving this problem.
Similarly to the previous two problems, we will rely on an existing algorithm for the parameterization by clique-width.
The algorithm is by Bergougnoux et al. and runs in time n^𝒪(k) when a k-expression is provided <cit.>.
We will show how to handle glue-nodes in the same running time.
We will follow the general idea for union-nodes from the original paper.
However, with multiple vertices being glued, the situation becomes more complicated and the proof of correctness gets more involved.
We start with some preliminary definitions, most of which were already introduced by Bergougnoux et al <cit.>.
A path packing is a graph such that each of its connected components is a path.
We say that a path packing is a path packing in H if is a subgraph of H.
A maximal path packing of a graph H is a spanning subgraph of H that is a path packing.
Note that no restrictions on the length of the paths are imposed so in particular, paths consisting of a single vertex are allowed.
With a slight abuse of notation, depending on the context, we will refer to a path packing either as a graph or as a set of paths.
If not stated otherwise, speaking about paths in a path packing we always refer to its connected components, i.e., the maximal paths it contains.
We sometimes refer to maximal path packings of a graph as partial solutions and we denote the set of all partial solutions of a graph H by Π(H).
With a (not necessarily maximal) path packing in a k-labeled graph H we associate an auxiliary multigraph _H() on the vertex set [k] such that for every i ≠ j ∈ [k], the multiplicity of the edge {i, j} is equal to the number of paths in whose end-points have labels i and j; and for every i ∈ [k], the multiplicity of the loop at the vertex i is equal to the number of paths whose both end-vertices have label i (in particular, this contains the paths consisting of a single vertex of label i).
Note that this multigraph depends on the labeling of H.
The edges of such a multigraph will often be referred to as red, this will allow us to speak about red-blue trails later.
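As a small illustration, the auxiliary multigraph can be read off from the end-vertices of the paths; the following Python sketch (our own, with paths given as vertex lists and the labeling as a dictionary) returns its edge multiplicities:

from collections import Counter

def aux_multigraph(paths, label):
    """Edge multiplicities of the red multigraph associated with a path packing.

    paths: list of paths, each a list of vertices (single-vertex paths allowed);
    label: dict mapping every vertex to its label in [k].
    Keys (i, j) with i <= j are edges; (i, i) counts loops.
    """
    R = Counter()
    for p in paths:
        a, b = label[p[0]], label[p[-1]]     # labels of the two end-vertices
        R[tuple(sorted((a, b)))] += 1        # a == b yields a loop
    return R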
In their work, Bergougnoux et al. use the technique of so-called representative sets <cit.>.
For two maximal path packings _1 and _2 of a k-labeled graph H they write _1 ≃_H _2 if for every i ∈ [k], it holds that __H(_1)(i) = __H(_2)(i) and the graphs _H(_1) and _H(_2) have the same set of connected components.
This defines an equivalence relation on Π(H).
For a set ⊆Π(H) of partial solutions, the operation _H() returns a set containing one element of each equivalence class of / ≃_H.
The following has been shown by Bergougnoux et al. <cit.>:
For every ⊆Π(H), we have |()| ≤ n^k · 2^k(log_2(k)+1) and we can moreover compute _H() in time 𝒪(|| · n k^2 log_2(nk)).
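One way to realize the reduce operation is to bucket partial solutions by the degree sequence and the connected components of their auxiliary multigraphs and to keep one packing per bucket. The sketch below is illustrative only; it reuses the aux_multigraph helper from the previous sketch and, as an assumption on the intended definition, takes components over the labels touched by at least one red edge:

def reduce_table(packings, label, k):
    """Keep one partial solution per equivalence class (a sketch)."""
    def signature(paths):
        R = aux_multigraph(paths, label)
        deg = [0] * (k + 1)
        for (i, j), m in R.items():
            deg[i] += m
            deg[j] += m                      # a loop contributes 2 to deg[i]
        # union-find over labels to extract connected components
        parent = list(range(k + 1))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (i, j) in R:
            parent[find(i)] = find(j)
        comps = {}
        for i in range(1, k + 1):
            if deg[i] > 0:
                comps.setdefault(find(i), set()).add(i)
        return (tuple(deg[1:]),
                frozenset(frozenset(c) for c in comps.values()))

    seen = {}
    for P in packings:
        seen.setdefault(signature(P), P)     # one representative per class
    return list(seen.values())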
In the following, we will work a lot with multigraphs on the vertex set [k].
For two such multigraphs A and B, with A ⊎ B we denote the multigraph on the vertex set [k] such that the multiplicity of every edge is given by the sum of multiplicities of this edge in A and B.
As in the work of Bergougnoux et al. <cit.>, we say that the edges of a multigraph on the left resp. right of ⊎ are colored red resp. blue.
They also use the following notion of representativity.
Let _1, _2 ⊆Π(H) be two families of partial solutions of a k-labeled graph H.
We write _1 ≲_H _2 and say that _1 H-represents _2 if, for every multigraph on the vertex set [k] whose edges are colored blue, whenever there exists a path packing _2 ∈_2 such that _H(_2) ⊎ admits a red-blue Eulerian trail, there also exists a path packing _1 ∈_1 such that _H(_1) ⊎ admits a red-blue Eulerian trail, where a red-blue Eulerian trail is a closed walk containing each edge exactly once and such that red and blue edges alternate on this walk.
Crucially, they have shown the following lemma:
For every ⊆Π(H), it holds that _H() ≲_H.
Together with <ref>, this allows to keep the number of partial solutions maintained along the dynamic programming small.
Recall that we aim at handling glue-nodes.
As in standard algorithms based on representative sets, our goal is the following: given two k-labeled glueable edge-disjoint graphs H_1 and H_2 and families _1 ≲_H_1Π(H_1) and _2 ≲_H_2Π(H_2) of partial solutions of H_1 and H_2, respectively, we would like to compute a family of partial solutions of H = H_1 ⊔ H_2 with ≲_H Π(H) such that has bounded size.
After that, the operation _H can be applied to to obtain a smaller representative.
Bergougnoux et al. have shown that for two vertex-disjoint graphs H_1 and H_2, the set of partial solutions of the graph H_1 ⊕ H_2 can be computed by simply iterating through all partial solutions _1 of H_1 and _2 of H_2 and forming their union _1 ∪_2 <cit.>.
For glue-nodes our process will be analogous but there is more interaction between partial solutions.
At a glue-node, multiple paths in partial solutions _1 and _2 can be glued together.
First, this can result in longer paths that contain several paths of _1 and _2 as subpaths (see <ref> (a)).
But also, the glueing can create cycles (see <ref> (b)) as well as vertices of degree more than two (see <ref> (c)) so that the result of gluing of two partial solutions of H_1 and H_2, respectively, is not a partial solution of H_1 ⊔ H_2 anymore.
Now we formalize this.
Along this section, let H_1 and H_2 be two k-labeled glueable edge-disjoint graphs.
First, we show that the set of partial solutions of H_1 ⊔ H_2 can be obtained in a natural way by gluing all pairs of partial solutions of H_1 and H_2 and then filtering out graphs that are not path packings.
For a family ℋ of graphs, by (ℋ) we denote the set of all path packings in ℋ.
Clearly, the set (ℋ) can be computed in time polynomial in the cardinality of ℋ and the largest graph in ℋ.
Let H_1 and H_2 be two edge-disjoint graphs.
And let
S = ({_1 ⊔_2 |_1 ∈Π(H_1), _2 ∈Π(H_2)}).
Then it holds that S = Π(H_1 ⊔ H_2).
For the one direction, let ∈ S and let _1 ∈Π(H_1), _2 ∈Π(H_2) be such that = _1 ⊔_2.
First, recall that _1 resp. _2 contains all vertices of H_1 resp. H_2 and we have V(H_1 ⊔ H_2) = V(H_1) ∪ V(H_2).
So contains all vertices of H_1 ⊔ H_2.
Second, since S is the result of the application of the operator, the graph is a path packing.
Therefore, the graph is a maximal path packing of H, i.e., ∈Π(H_1 ⊔ H_2), and we obtain S ⊆Π(H_1 ⊔ H_2).
For the other direction, consider a path packing ∈Π(H_1 ⊔ H_2).
For i ∈ [2], let
_i = { Q | Q is an inclusion-maximal subpath of some path in
such that all edges of Q belong to H_i}
Clearly, _i is a subgraph of H_i and it is a path packing due to being a subgraph of .
Each vertex v of V(H_i) ⊆ V(H_1 ⊔ H_2) = V(H_1) ∪ V(H_2) lies on exactly one path, say P, in .
Then there is a unique inclusion-maximal subpath Q of P containing v that uses edges of H_i. By definition, the subpath Q belongs to _i.
Therefore, the set _i is a maximal path packing of H_i, i.e., _i ∈Π(H_i).
It remains to show that _1 ⊔_2 =.
Since _1 and _2 are maximal path packings of H_1 and H_2, respectively, the graph _1 ⊔_2 contains all vertices of H_1 ⊔ H_2, i.e., all vertices of .
For i ∈ [2], every edge of E(H_i) ∩ E() is contained in a unique maximal subpath Q of some path in such that this subpath contains the edges of H_i only, i.e., Q ∈_i.
Therefore, we have _1 ⊔_2 =.
And since is a path packing, the operation does not discard it.
So we obtain Π(H_1 ⊔ H_2) ⊆ S and this concludes the proof.
As the next step, we will show that the representativity is maintained in this process, formally:
Let H_1 and H_2 be two glueable edge-disjoint k-labeled graphs.
Further, let _1 ≲_H_1Π(H_1) and _2 ≲_H_2Π(H_2).
Then for the set S defined by
S = ({_1 ⊔_2 |_1 ∈_1, _2 ∈_2})
it holds that
S ≲_H_1 ⊔ H_2Π(H_1 ⊔ H_2).
Further, given _1 and _2, the set S can be computed in (|_1| |_2|).
This lemma will be the key component of our procedure for glue-nodes.
In the remainder of this subsection we mostly concentrate on the proof of this lemma.
It will follow the general idea behind the proof of Bergougnoux et al. for union-nodes <cit.> but the technicalities will become more involved.
We start with some simple claims.
Let H be a k-labeled graph, let i ∈ [k] be such that there exists a unique vertex v of label i in H, and let be a path packing in H that contains v.
Then for the unique path P ∈ containing v, it holds that:
* If P has length zero, then __H()((v)) = 2 and there is a loop at (v) in _H.
* If P has non-zero length and v is an end-vertex of P, then __H()((v)) = 1.
* If P has non-zero length and v is an internal vertex of P, then __H()((v)) = 0.
In particular, we have __H()((v)) = 2 - _(v).
We can apply this observation to glue-vertices as follows:
Let H_1 and H_2 be two k-labeled glueable edge-disjoint graphs and let v ∈ V(H_1) ∩ V(H_2) be a glue-vertex of label i for some i ∈ [k].
Further let _1 and _2 be path packings in H_1 and H_2, respectively, both containing v such that the graph _1 ⊔_2 is a path packing.
Then it holds that
__H_1 ⊔ H_2(_1 ⊔_2)((v)) = __H_1(_1)((v)) + __H_2(_2)((v)) - 2.
The vertex v is the unique vertex of label i in H_1 ⊔ H_2.
Then we have
__H_1 ⊔ H_2(_1 ⊔_2)((v)) <ref>=
2 - __1 ⊔_2(v) E(H_1) ∩ E(H_2) = ∅=
2 - (__1(v) + __2(v)) <ref>=
2 - ((2 - __H_1(_1)((v))) + (2 - __H_2(_2)((v)))) =
__H_1(_1)((v)) + __H_2(_2)((v)) - 2.
Let H be a k-labeled graph and let and ' be path packings in H with V() = V(').
Further, let be a multigraph on the vertex set [k] such that each of the graphs _H() ⊎ and _H(') ⊎ admits .
Finally, let v ∈ V() be a vertex of unique label in H, i.e., |_H^-1(_H(v))| = 1.
Then the graphs _H() and _H(') have the same degree sequence and in particular, the following properties hold:
* The vertex v is an internal vertex of a path in iff v is an internal vertex of a path in '.
* The vertex v is an end-vertex of a non-zero length path in iff v is an end-vertex of a non-zero length path in '.
* The vertex v forms a zero-length path in iff v forms a zero-length path in '.
Since each of the graphs _H() ⊎ and _H(') ⊎ admits , for every i ∈ [k] we have
__H()(i) = _(i) = __H(')(i)
by a result of Kotzig <cit.>.
Therefore, the graphs _H() and _H(') have the same degree sequence.
The remainder of the claim follows by <ref>.
With these technical lemmas in hand, we can now prove <ref>.
In the proof we will work a lot with multigraphs on vertex set [k].
For i ∈ [k], by _i we will denote a loop at the vertex i.
Similarly, for i, j ∈ [k], by e_i, j we denote an edge with end-points i and j where i = j is allowed.
This edge is not necessarily unique so with a slight abuse of notation, this way we denote one fixed edge clear from the context between these vertices.
If ℋ is a multigraph and e = e_i, j resp. e = _i, by ℋ∪̇{e} we denote the multigraph arising from ℋ by adding an edge with end-points i and j resp. adding a loop at i.
Here, ∪̇ emphasizes that we add a new edge and in particular, increase the number of edges in the multigraph.
Similarly, by ℋ - e we denote the multigraph arising from ℋ after removing the edge e and emphasize that e was present in ℋ.
For simplicity, we denote D = H_1 ⊔ H_2.
Along this proof no relabeling occurs so every vertex of H_1 resp. H_2 has the same label in H. For this reason we omit the subscripts of labeling functions to improve readability: the label of a vertex v is simply denoted by (v).
Now let ∈Π(D) be a maximal path packing of D.
By <ref>, there exist maximal path packings _1 ∈Π(H_1) and _2 ∈Π(H_2) such that = _1 ⊔_2.
Further, let be a blue multigraph on the vertex set [k] such that _H() ⊎ admits , say T.
To prove the lemma, we need to show that there exists a maximal path packing ' ∈ S = ({_1' ⊔_2' |_1' ∈_1, _2' ∈_2}) of D such that _H(') ⊎ admits a red-blue Eulerian trail as well, i.e., there exist maximal path packings '_1 ∈_1 and '_2 ∈_2 such that '_1 ⊔'_2 is a path packing and _H(_1' ⊔'_2) ⊎ admits a red-blue Eulerian trail.
Let t = |_2| and let us fix some ordering _2 = {P^1, …, P^t} of the paths (i.e., connected components) in _2.
For i ∈ [t]_0, we define
^i = _2 ∖{P^1, …, P^i}.
Now we will construct blue multigraphs ^0, …, ^t with the following properties.
There exist blue multigraphs ^0, …, ^t such that for every i ∈ [t]_0, the following two properties hold.
First, the multigraph _D(_1 ⊔^i) ⊎^i admits , we fix one and denote it by T^i.
And second, if i > 0 and _1' is a maximal path packing of H_1 such that _D(_1' ⊔^i) ⊎^i admits , then _D(_1' ⊔^i-1) ⊎^i-1 also admits .
Along the proof of the claim, we will inter alia show that if the graph _1' ⊔^i (as above in the claim) is a path packing, then _1' ⊔^i-1 is also a path packing, i.e., _D(_1' ⊔^i-1) is well-defined.
Recall that _1 ⊔^i-1 (a subgraph of _1 ⊔_2) is a path packing and by <ref>, _D(_1 ⊔^i-1) has the same degree sequence as _D(_1' ⊔^i-1).
So to prove that _1' ⊔^i-1 is a path packing, it will suffice to show its acyclicity.
The proof is by induction.
Base case i = 0:
Since ^0 = _2, we can use ^0 = and the statement is true by using T^0 = T.
So now let 1 ≤ i ≤ t and suppose the statement holds for i - 1.
We make a case distinction based on the path P := P^i.
For simplicity of notation, after we construct ^i in each case, with _1' we refer to any maximal path packing of H_1 such that _D(_1' ⊔^i) ⊎^i admits .
We emphasize that in every case, the construction of ^i and T^i will be independent of _1'.
The cases 1.2 and 2 will be similar to a union-node as handled by Bergougnoux et al. <cit.> while the remaining cases are different.
Case 1.1
First, suppose that the path P has zero length and the unique vertex, say v, of P is a glue-vertex.
Since _1 and _1' are maximal path packings of H_1, both of them contain v on some path.
So in this case, we simply have
_1 ⊔^i = _1 ⊔ (^i-1 - P) = _1 ⊔^i - 1 and
_1' ⊔^i = '_1 ⊔ (^i-1 - P) = _1' ⊔^i - 1
(here and in the analogous equalities we treat a path packing as a set of vertex-disjoint paths and the operation - P removes the path P from it in terms of set difference).
Therefore, ^i := ^i-1 and T^i := T^i-1 satisfy the desired condition.
Case 1.2
Now again suppose that P is a zero-length path but now the unique vertex, say v, of P is not a glue-vertex.
Then we have
_1 ⊔^i = _1 ⊔ (^i-1 - P) = (_1 ⊔^i-1) - P and
_1' ⊔^i = _1' ⊔ (^i-1 - P) = (_1' ⊔^i-1) - P
So
_D(_1 ⊔^i) = _D(_1 ⊔^i-1) - _(v) and
_D(_1' ⊔^i) = _D(_1' ⊔^i-1) - _(v).
Since T^i-1 is of _D(_1 ⊔^i-1) ⊎^i-1, the loop _(v) occurs on it.
So let e^1 resp. e^2 be the blue edge in ^i-1 preceding resp. following _(v) in T^i-1.
And let a resp. b be the label such that a and (v) resp. b and (v) are the end-points of e^1 resp. e^2.
We then define
^i = (^i-1 - e^1 - e^2) ∪̇{e_a, b}.
Now on the one hand, we can easily obtain T^i of _D(_1 ⊔^i) ⊎^i by taking T^i-1 and replacing the subtrail e^1, _(v), e^2 with a blue edge e_a, b in it (see <ref>): by (<ref>) and (<ref>), each edge is indeed visited exactly once.
For the other direction, let S^i be of _D(_1' ⊔^i) ⊎^i.
Then we can obtain of _D(_1' ⊔^i-1) ⊎^i-1 by taking S^i and replacing the occurrence of the blue edge e_a, b by a subtrail e^1, _(v), e^2 (see <ref>): By (<ref>) and (<ref>), each edge is again visited exactly once.
This concludes the proof for Case 1.2.
Now it remains to prove the claim for the case that P has non-zero length.
We further distinguish on whether P contains glue-vertices and if so, how many of them are end-vertices of P.
Let v and w denote the end-vertices of P.
Case 2
Suppose the path P does not contain glue-vertices.
This case is very similar to Case 1.2 but we provide the proof for completeness.
It again holds that
_1 ⊔^i = _1 ⊔ (^i-1 - P) = (_1 ⊔^i-1) - P and
_1' ⊔^i = _1' ⊔ (^i-1 - P) = (_1' ⊔^i-1) - P.
So
_D(_1 ⊔^i) = _D(_1 ⊔^i-1) - e_(v), (w) and
_D(_1' ⊔^i) = _D(_1' ⊔^i-1) - e_(v), (w).
Without loss of generality, we may assume that in T^i-1 the edge e_(v), (w) is traversed from (v) to (w): otherwise, we can use the reverse of T^i-1 instead.
Let again e^1 resp. e^2 be the blue edge in ^i-1 preceding resp. following the red edge e_(v), (w) in T^i-1.
And let a resp. b be the labels such that e^1 resp. e^2 has end-vertices a and (v) resp. b and (w).
Then we define
^i = (^i-1 - e^1 - e^2) ∪̇{e_a, b}.
Now on the one hand, we can easily obtain T^i of _D(_1 ⊔^i) ⊎^i by taking T^i-1 and replacing the subtrail e^1, e_(v), (w), e^2 with a blue edge e_a, b in it (see <ref>):
By (<ref>) and (<ref>), each edge is indeed visited exactly once.
For the other direction, let S^i be of _D(_1' ⊔^i) ⊎^i.
Then we can obtain of _D(_1' ⊔^i-1) ⊎^i-1 by replacing the occurrence of the blue edge e_a, b in S^i by a subtrail e^1, e_(v), (w), e^2 (see <ref>): By (<ref>) and (<ref>), each edge is again visited exactly once and the claim holds.
This concludes the proof for the Case 2.
From now on, we may assume that P has non-zero length and contains at least one glue-vertex.
Let u_1, …, u_q denote all internal glue-vertices of P for some q ∈_0.
Since these vertices are internal, we have __2(u_j) = 2 for every j ∈ [q].
Since _1 ⊔_2 is a maximal path packing of D, the path packing _1 resp. _2 is maximal for H_1 resp. H_2, and the graphs H_1 and H_2 are edge-disjoint, we have
2 ≥__1 ⊔_2(u_j) = __1(u_j) + __2(u_j) = __1(u_j) + 2.
Therefore, we have __1(u_j) = 0 i.e., u_j forms a zero-length path in _1.
Since the path packing ^i does not contain u_j by construction, the vertex u_j also forms a zero-length path in _1 ⊔^i.
Next recall that u_j is a glue-vertex so it is a unique vertex with label (u_j) in H_1 and H_2.
If there exists a blue multigraph such that each of _D(_1 ⊔^i) ⊎ and _D(_1' ⊔^i) ⊎ admits a red-blue Eulerian trail, then by <ref>, the vertex u_j also forms a path in _1' ⊔^i (and hence, also in _1').
We will apply this property to = ^i to prove the latter direction of <ref>.
As mentioned before, the previous two cases were very similar to the correctness of the procedure for a union-node in the clique-width algorithm (see <cit.>).
In the remaining cases, the approach will be more involved but still natural.
In the remainder of the paragraph we try to sketch this process and the main difficulty of these cases.
The details will become clear in the description of the remaining cases though.
As before, we will replace a certain subtrail A of T^i-1 of _D(_1 ⊔^i-1) ⊎^i-1 by a sequence B of edges to obtain T^i of _D(_1 ⊔^i) ⊎^i.
In the previous cases, the sequence B consisted of a single edge.
So when we then considered S^i of _D('_1 ⊔^i) ⊎^i, it was easy to replace this edge by A again to obtain S^i-1 of _D('_1 ⊔^i-1) ⊎^i-1.
In the following cases, the situation might become less simple since B possibly consists of multiple edges there.
For this reason, first, B does not necessarily occur as a subtrail in S^i, i.e., the edges of B do not necessarily occur consecutively.
And second, some of the edges of B possibly do not even occur in _D('_1 ⊔^i).
Therefore, the construction of S^i-1 from S^i is less straight-forward in these cases.
Therefore, in general it is not possible to simply replace B with A to obtain S^i-1.
For this reason, a more careful analysis is required to show that such a S^i-1 of _D(_1' ⊔^i-1) still exists.
Now we move on to details.
First, observe that if at least one of the vertices v and w is not a glue-vertex, the acyclicity of _1' ⊔^i also implies the acyclicity of _1' ⊔^i-1 so _1' ⊔^i-1 is a path packing.
We will come back to this issue in Case 3.3 where both v and w are glue-vertices.
Case 3.1
First, assume that neither v nor w is a glue-vertex.
Then P is a path in _1 ⊔^i-1.
Since we know by assumption that P contains a glue-vertex, we have q > 0 and it holds that
_1 ⊔^i = ((_1 ⊔^i-1) - P) ∪̇{{u_1}, …, {u_q}}.
And hence
_D(^1 ⊔^i) = (_D(_1 ⊔^i-1) - e_(v), (w)) ∪̇
{_(u_1), …, _(u_q)}.
As before, we may assume that the edge e_(v), (w) is traversed from (v) to (w) in T^i-1.
So let again e^1 resp. e^2 be the blue edge in ^i-1 preceding resp. following e_(v), (w) in T^i-1.
And let a resp. b be the label such that the end-vertices of e^1 resp. e^2 are a and (v) resp. b and (w).
We then set
^i = (^i-1 - e^1 - e^2) ∪̇{e_a, (u_1), e_(u_q), b}∪̇{e_(u_d), (u_d+1)| d ∈ [q-1]}.
Then we can obtain T^i of _D(_1 ⊔^i) ⊎^i by taking T^i-1 and replacing the subtrail e^1, e_(v), (w), e^2 with the sequence L given by
L = e_a, (u_1), _(u_1),
e_(u_1), (u_2), _(u_2),
…,
e_(u_q-1), (u_q) _(u_q),
e_(u_q), b
(see <ref>).
Note that by (<ref>) and (<ref>), the trail T^i indeed contains all edges.
On the other hand, recall two properties.
First, every vertex in u_1, …, u_q forms a zero-length path in _1' (argued above).
And second u_1, …, u_q is the set of glue-vertices on P.
Thus, P forms a path in _1' ⊔^i-1 and we have
_1' ⊔^i = ((_1' ⊔^i-1) - P) ∪̇{{u_1}, …, {u_q}}.
Therefore
_D(_1' ⊔^i) = (_D(_1' ⊔^i-1) -
e_(v), (w)) ∪̇
{_(u_1), …, _(u_q)}.
Now consider S^i of _D('_1 ⊔^i) ⊎^i.
We make the following observation.
For every j ∈ [q], since the only red edge incident with (u_j) in _D(_1' ⊔^i) is a loop, there are exactly two blue edges, say f^1 and f^2, incident with (u_j) in ^i, namely the two edges from the set
{e_a, (u_1), e_(u_q), b}∪{e_(u_d), (u_d+1)| d ∈ [q-1]}
that are incident with (u_j).
This implies that f^1, _(u_j), and f^2 appear consecutively in S^i.
Since this holds for every j ∈ [q], the sequence L or its reverse is a subtrail of S^i.
As before, we may assume that L is a subtrail of S^i.
Now we can obtain of _D('_1 ⊔^i-1) ⊎^i by taking S^i and replacing L with e^1, e_(v), (w), e^2 (see <ref>):
By (<ref>) and (<ref>), it indeed contains all edges of _D(_1' ⊔^i-1) ⊎^i.
In this case, for both directions of the claim we still were able to replace two subtrails e^1, e_(v), (w), e^2 and L with each other.
In the remaining two cases, this will not be true anymore so we will work with different pairs of subtrails in the “forward” and “backward” directions of the proof (the details follow).
For the remainder of the proof we may assume that at least one end-vertex of P is a glue-vertex, say v.
Observe the following: since v is an end-vertex of a path P of non-zero length, its degree in _2 is exactly one.
The vertex v is a glue-vertex so by <ref> the degree of (v) in _D(_2) is one as well.
Since _1 ⊔_2 is a path packing containing v, we have __1 ⊔_2(v) ≤ 2.
Recall that _1 is maximal in H_1 so it contains v.
Together with __2(v) = 1 this implies __1(v) ≤ 1.
Thus, the vertex v is an end-vertex of some path in the path packing _1.
So this also holds for the family _1 ⊔^i.
By <ref>, this also holds for _1' ⊔^i and therefore, also for _1'.
This observation also implies that the path in _1 ⊔^i with end-vertex v has non-zero length if and only if this holds for the path in _1' ⊔^i with end-vertex v.
If w is a glue-vertex as well, the symmetric property holds.
Case 3.2
In this case, we assume that w is not a glue-vertex.
Let P̂ be the path in _1 ⊔^i-1 containing P as a subpath.
Then w is an end-vertex of P̂.
Let r ∈ [k] denote the label of the other end-vertex of P̂.
Note that r = (v) is possible.
Our path P is a suffix or a prefix of P̂.
Without loss of generality, we assume that P is a suffix of P̂: otherwise, we could take the reverse of P̂ instead.
Let P̂ v denote the prefix of P̂ ending in v.
Then the following holds
_1 ⊔^i = ((_1 ⊔^i-1) - P̂) ∪̇{P̂v, {u_1}, …, {u_q}}
and
_D(_1 ⊔^i-1) = (_D(_1 ⊔^i) - e_r, (w)) ∪̇
{e_r, (v), _(u_1), …, _(u_q)}.
As before, we may assume that in T^i-1 the edge e_r, (w) is traversed from r to (w).
So let e be the blue edge following e_r, (w) in T^i-1 and let b ∈ [k] be such that (w) and b are the end-vertices of e.
For simplicity of notation, we set u_0 = v.
We define
^i = (^i-1 - e) ∪̇
{ e_(u_d), (u_d+1)| d ∈ [q - 1]_0}∪̇{e_(u_q), b}.
Then we can construct T^i by taking T^i-1 and replacing the subtrail e_r, (w), e with a sequence L where
L = e_r, (v),
e_(v), (u_1), _(u_1),
e_(u_1),(u_2), _(u_2),
…,
e_(u_q-1),(u_q), _(u_q),
e_(u_q), b
(see <ref> (a)).
Note that T^i indeed uses all edges of _D(_1 ⊔^i) ⊎^i
(see (<ref>) and (<ref>)).
For the other direction, we consider S^i of _D(_1' ⊔^i) ⊎^i.
Above we have argued that v is an end-vertex of some path, say P^*, in _1' ⊔^i, and for every j ∈ [q], the vertex u_j forms a zero-length path in _1'.
Let r' ∈ [k] be such that r' and (v) are the labels of the end-vertices of P^*.
Note that r' = (v) is possible.
Then it holds that
_1' ⊔^i-1 = ((_1' ⊔^i) - {P^*, {u_1}, …, {u_q}}) ∪̇{P^* ⊔ P}
and therefore, also
_D(_1' ⊔^i-1) = (_D(_1' ⊔^i) - e_r', (v) - _(u_1) - … - _(u_q)) ∪̇
{e_r', (w)}.
Now observe the following.
Since v is a glue-vertex and v ∈_1' holds,
there is exactly one edge, say e^*, incident with (v) in _D(_1' ⊔^i).
And this edge is a loop if and only if P^* = {v} holds.
So the degree of (v) in ^i is one if this edge is not a loop and two if it is.
Now we can make some observations about the occurrence of e^* in S^i.
First, consider the case that P^* consists of v only.
Above we have argued that then in _1 ⊔^i there is also a path consisting of v only (and in particular, we have r = (v)).
The existence of implies that in this case there are exactly two blue edges in ^i incident with (v) so they must precede and follow e^* in S^i.
Crucial is that these edges are the same (possibly with their order swapped) that follow and precede e_r, (v) in T^i.
The same holds for edges incident with a vertex u_j for every j ∈ [q].
Now consider the case that P^* has non-zero length, i.e., (v) has the red degree of exactly one in _D(_1' ⊔^i).
Since S^i is , the vertex (v) also has the blue degree of exactly one in ^i.
Now we may again assume that e^* is traversed from r' to (v) in S^i.
Then the edge following e^* in S^i is the unique blue edge incident with (v) in ^i, i.e., the same blue edge that follows e_r, (v) in T^i.
Altogether, this implies that a sequence L' is a subtrail of S^i where
L' = e_r', (v),
e_(v), (u_1), _(u_1),
e_(u_1), (u_2), _(u_2),
…
e_(u_q-1), (u_q), _(u_q),
e_(u_q), b.
Hence, it can be replaced with a sequence e_r', (w), e (see <ref> (b)) to obtain of _D(_1' ⊔^i-1) ⊎^i-1 (see (<ref>) and (<ref>) to verify that every edge is indeed used exactly once).
Case 3.3
Now we remain with the case where both v and w are glue-vertices.
For simplicity of notation, let us denote u_0 = v and u_q+1 = w.
By Case 1.1 we may assume that v ≠ w holds.
Since both v and w are glue-vertices, it also holds that (v) ≠(w).
Let P̂ again be the path in _1 ⊔^i-1 that contains P as a subpath.
We may assume that v occurs on both P and P̂ before w: otherwise we may use the reverse of the violating path instead.
Let r ∈ [k] resp. s ∈ [k] be the label of the start- resp. end-vertex of P̂.
Then it holds that
_1 ⊔^i = ((_1 ⊔^i-1) - P̂) ∪̇{P̂ v, {u_1}, …, {u_q}, w P̂}
where P̂ v denotes the prefix of P̂ ending in v and w P̂ denotes the suffix of P̂ starting in w.
So we have
_D(_1 ⊔^i) = (_D(_1 ⊔^i-1) - {e_r, s})
∪̇{e_r, (v), _(u_1), …, _(u_q), e_(w), s}.
We define
^i = ^i-1∪̇{e_(u_d), (u_d+1)| d ∈ [q]_0 }.
Then we can obtain T^i by taking T^i-1 and replacing the red edge e_r, s with a subtrail L where
L = e_r, (v),
e_(v), (u_1), _(u_1),
e_(u_1), (u_2), _(u_2),
…,
e_(u_q), (w), e_(w), s
(see <ref> (a)).
Note that by (<ref>) and (<ref>), the trail T^i indeed contains all edges.
Now we prove the other direction.
Let S^i be of _D(_1' ⊔^i) ⊎^i.
First, to show that _D('_1 ⊔^i-1) is well-defined, we prove that '_1 ⊔^i-1 is a path packing by showing its acyclicity (above we explained why this needs to be proven in this case only).
Above we have argued that for every j ∈ [q] the vertex u_j forms a path in _1' ⊔^i.
Thus, the only edge incident with the vertex (u_j) in _D(_1' ⊔^i) is a loop.
So the only two blue edges incident with (u_j) in ^i appear right before and after this loop in S^i.
Thus, the sequence
Q = e_(v), (u_1), _(u_1),
e_(u_1), (u_2), _(u_2),
…,
e_(u_q), (w).
(or its reverse) is a subtrail of S^i.
As before we may assume that Q is a subtrail of S^i.
Let e^1 resp. e^2 be the red edges preceding resp. following e_(v), (u_1) resp. e_(u_q), (w) in S^i.
And let P_v' resp. P_w' be the path in _D(_1' ⊔^i) with an end-point v resp. w (above we have argued that such a path exists).
Let r' ∈ [k] resp. s' ∈ [k] be such that r' and (v) resp. (w) and s' are the labels of end-vertices of P_v' resp. P_w'.
Observe that we have e^1 = e_r', (v) and e^2 = e_(w), s' since v and w are vertices with unique labels in D.
To prove the claimed acyclicity, suppose it holds that e^1 = e^2.
Then S^i consists exactly of the following edges
S^i = (e^1 = e^2),
e_(v), (u_1), _(u_1),
e_(u_1), (u_2), _(u_2),
…,
e_(u_q), (w)
and we have r' = (w) and s' = (v).
This implies that first, P_v' = P_w' (since v and w are vertices of unique labels) and second, the path packing _1' ⊔^i consists of a path from v to w and paths {u_1}, …, {u_q} only.
Therefore, the multigraph _D(_1 ⊔^i) consists of loops at (u_1), …, (u_q) and an edge between (v) and (w).
The degree sequences of _D(_1 ⊔^i) and _D('_1 ⊔^i) coincide (recall <ref>) so the degree sequences of _1 ⊔^i and '_1 ⊔^i coincide as well (recall <ref>).
But then both _1 ⊔^i and {P} contain a path from v to w (where v and w are distinct glue-vertices) so
(_1 ⊔^i) ⊔{P} = _1 ⊔^i-1
contains a cycle – this contradicts the fact that the graph _1 ⊔^i-1 (a subgraph of the path packing _1 ⊔_2) is a path packing.
Hence, it holds that e^1 ≠ e^2 and therefore, P_v' ≠ P_w'.
Now we have:
_1' ⊔^i-1 = ((_1' ⊔^i) - {P_v', {u_1}, …, {u_q}, P_w'}) ∪̇{P_v' ⊔ P ⊔ P_w'}.
Since the paths P_v' and P_w' are distinct, in particular, this implies that _1' ⊔^i-1 is acyclic and therefore, it is a path packing.
We then have
_D(_1' ⊔^i-1) =
(_D(_1' ⊔^i) - e_r', (v) - _(u_1) - … - _(u_q) - e_(w), s') ∪̇{e_r', s'}.
Above we have argued that the sequence L' is a subtrail of S^i where
L' = e_r', (v),
e_(v), (u_1), _(u_1),
e_(u_1), (u_2), _(u_2),
…,
e_(u_q), (w), e_(w), s'.
Now this subtrail of S^i can be replaced by the red edge e_r', s' (see <ref> (b)) to obtain T_i-1' of _D(_1' ⊔^i-1) ⊎^i-1 (see (<ref>) and (<ref>) to verify that every edge is indeed used exactly once).
This concludes the proof of the claim.
Applied to i = t, the claim implies the existence of a blue multigraph ^t with the following two properties.
First, the multigraph _D(_1 ⊔^t) ⊎^t admits a red-blue Eulerian trail T^t.
Since ^t = ∅, we obtain that
_D(_1 ⊔^t) ⊎^t =_D(_1) ⊎^t = _H_1(_1) ⊎^t
admits a red-blue Eulerian trail T^t.
Then the property _1 ≲_H_1Π(H_1) implies that there exists a maximal path packing _1' ∈_1 such that
_H_1(_1') ⊎^t = _D(_1') ⊎^t
admits a red-blue Eulerian trail, say S^t.
Then the second part of the claim applied to i = t, …, 1 implies that the multigraph
_D(_1' ⊔^0) ⊎^0 = _D(_1' ⊔_2) ⊎
admits a red-blue Eulerian trail.
Now, a symmetric argument implies that there also exists a maximal path packing _2' ∈_2 such that _D(_1' ⊔_2')⊎ admits a red-blue Eulerian trail (and in particular, _1' ⊔_2' is a path packing so the auxiliary graph is well-defined).
By definition of S, we obtain that _1' ⊔_2' belongs to S.
Altogether, we have shown that for every blue multigraph and every maximal path packing ∈Π(D), if _D() ⊎ admits a red-blue Eulerian trail, then there exists a maximal path packing ' ∈ S of D such that _D(') ⊎ also admits one, i.e., we have S ≲_D Π(D) as desired.
Computing S in time (|_1| |_2|) is trivial: we iterate over all pairs _1 ∈_1 and _2 ∈_2, compute _1 ⊔_2 in time polynomial in the size of H and then check whether this subgraph is a path packing.
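A minimal Python sketch of this computation (illustrative names; packings are assumed to be given as pairs of a vertex set and a set of frozenset edges, with glue-vertices sharing the same name in both graphs) glues every pair and keeps only the results that are still path packings, i.e., acyclic and of maximum degree two:

def glue_and_filter(A1, A2):
    """All path packings obtained by gluing a packing from A1 with one from A2."""
    def is_path_packing(vertices, edges):
        deg = {v: 0 for v in vertices}
        parent = {v: v for v in vertices}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        for e in edges:
            u, v = tuple(e)
            deg[u] += 1
            deg[v] += 1
            if deg[u] > 2 or deg[v] > 2:
                return False
            ru, rv = find(u), find(v)
            if ru == rv:               # this edge would close a cycle
                return False
            parent[ru] = rv
        return True

    result = []
    for (V1, E1) in A1:
        for (V2, E2) in A2:
            V, E = V1 | V2, E1 | E2    # gluing is a plain union of vertex and edge sets
            if is_path_packing(V, E):
                result.append((V, E))
    return result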
Now we are ready to provide the algorithm solving the problem parameterized by fusion-width.
Given a fuse-k-expression of a graph H, the Hamiltonian Cycle problem can be solved in time n^𝒪(k).
First, by <ref>, in time polynomial in the size of the given fuse-k-expression and k, we can compute a reduced glue-k-expression ϕ of H whose size is polynomial in the size of H and k.
Let e = uv be an arbitrary but fixed edge of H.
In the following, our algorithm will decide, whether H admits a Hamiltonian cycle containing e.
Then, by checking this for every edge of H, we can solve the problem.
First, we slightly transform ϕ into a reduced glue-(k+2)-expression ξ of H such that the root of ξ is a join-node that creates the edge e only.
For this we proceed as follows.
For the simplicity of notation, let i_u = k+1 and i_v = k+2.
First, all leaves of ϕ with title u (resp. v) are replaced with u ⟨ i_u ⟩ (resp. v ⟨ i_v ⟩).
After that, we iterate through all join-nodes t.
Let i, j ∈ [k] be such that t is a η_i, j-node.
If the vertex u belongs to G^ϕ_t and the label j_u of u in G^ϕ_t is equal to i (resp. j), we add a new η_i_u, j-node (resp. η_i_u, i-node) right above t.
Similarly, if the vertex v belongs to G^ϕ_t and the label j_v of v in G^ϕ_t is equal to i (resp. j), we add a new η_i_v, j-node (resp. η_i_v, i-node) right above t.
After processing all nodes, we add a new η_i_u, i_v-node above the root making it a new root.
The process takes only polynomial time.
By construction, this expression still creates the graph G.
Moreover, it still satisfies the first two properties of a reduced glue-k-expression: no join-node creates an already existing edge and the glued graphs are always edge-disjoint (see <ref> for a formal definition).
After that, we proceed as in the proof of <ref> to ensure that the last property of a reduced glue-(k+2)-expression is still satisfied.
If we look into that proof, we note that this transformation does not change the root so the join-node creating the edge e is still the root.
We denote the arisen glue-(k+2)-expression by ξ.
By x we denote the child of the root of ξ.
Note that since u and v now have unique labels, the root creates the edge e only, so we have G^ξ_x = H - e.
Now given the result of Bergougnoux et al. for introduce-, join-, and relabel-nodes <cit.> as well as our <ref>, we can traverse ξ bottom-up to compute a set _x of partial solutions of G^ξ_x such that _x ≲_G^ξ_xΠ(G^ξ_x).
Namely, we start with the leaves and then given a set (or sets) of partial solutions representing the set (resp. sets) of all partial solutions of the child (resp. children) of the current node, say y, we first compute a set of partial solutions of y representing all partial solutions of G^ξ_y and then apply the _G^ξ_y-operation to ensure that the number of partial solutions kept at each step is bounded by n^𝒪(k).
We emphasize that the first two properties of reduced a glue-(k+2)-expression (see <ref>) ensure that every edge is created exactly once.
So the expression is “irredundant” in terms of clique-width and therefore, the procedures for introduce-, relabel-, and join-nodes by Bergougnoux et al. are still correct <cit.>.
Now we show how to decide whether H admits a Hamiltonian cycle using the edge e, given the set _x.
We claim that this is the case if and only if _x contains a maximal path packing consisting of a single path P with end-vertices u and v.
One direction is almost trivial: Recall that ∈_x ⊆Π(G^ξ_x) and G^ξ_x is a subgraph of H so P is a path in H.
Since V(H) = V(G^ξ_x) and is maximal, the path P contains every vertex of H and together with the edge e it forms a Hamiltonian cycle of H.
For the other direction, let H contain a Hamiltonian cycle using the edge e, let P' denote the path connecting u and v on this cycle and not using the edge e.
Let ' denote the path packing of H consisting of P' only.
Note that since we consider a Hamiltonian cycle, the path P' contains every vertex of H so ' is indeed a maximal path packing of H.
Moreover, since it does not contain the edge e, ' is a path packing of G^ξ_x as well, i.e., ' ∈Π(G^ξ_x).
Since the vertices u and v have unique labels and they are never relabeled, the red graph _G^ξ_x(') on the vertex set [k+2] has a single edge and this edge has end-points i_u and i_v.
Consider a blue multigraph on the vertex set [k+2] whose single edge is between i_u and i_v.
Trivially, the multigraph _G^ξ_x(') ⊎ admits .
Then the property _x ≲_G^ξ_xΠ(G^ξ_x) implies that there is a path packing ∈_x such that _G^ξ_x() ⊎ admits as well.
Since consists of a single blue edge between i_u and i_v, the red multigraph _G^ξ_x() consists of a single red edge between i_u and i_v.
Again, recall that u resp. v are the unique vertices with label i_u resp. i_v so the path packing consists of a single path P containing all vertices from V(G^ξ_x) = V(H) (i.e., maximal) with end-points u and v.
Recall that the number of partial solutions kept at each node is bounded by n^𝒪(k) due to the application of the reduce operator after processing every node.
By <ref>, a glue-node is processed in time polynomial in the number of partial solutions kept for its children, i.e., in n^𝒪(k).
Also by the results of Bergougnoux et al. <cit.>, the remaining nodes can also be handled in time n^𝒪(k).
Finally, recall that a reduced glue-(k+2)-expression contains a polynomial number of nodes.
So the algorithm runs in time n^𝒪(k).
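The final test at the node x can be sketched as follows (illustrative Python; partial solutions are assumed to be stored as lists of paths, each path a list of vertices):

def admits_hamiltonian_cycle_with(packings, u, v, vertices):
    """True iff some kept partial solution is a single u-v path on all vertices."""
    for packing in packings:
        if len(packing) != 1:
            continue
        path = packing[0]
        if set(path) == set(vertices) and {path[0], path[-1]} == {u, v}:
            return True
    return False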
Fomin et al. have also shown the following lower bound:
<cit.>
Let H be an n-vertex graph given together with a k-expression of H.
Then the Hamiltonian Cycle problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails.
Since any k-expression of a graph is, in particular, its fuse-k-expression, the lower bound transfers to fuse-k-expressions as well thus showing that our algorithm is tight under ETH.
Let H be an n-vertex graph given together with a fuse-k-expression of H.
Then the Hamiltonian Cycle problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails.
§ ALGORITHMS PARAMETERIZED BY MULTI-CLIQUE-WIDTH
In this section we show that for many problems we can obtain algorithms parameterized by multi-clique-width with the same running time as known SETH-tight (and for Chromatic Number even an ETH-tight) algorithms parameterized by clique-width.
Due to the relation between multi-clique-width, fusion-width, and clique-width stated in <ref>, the obtained running times are then (S)ETH-tight for all three parameters.
For these problems, we will use known algorithms for the clique-width parameterization and describe what adaptations are needed to handle multiple labels per vertex.
First, we make some simple observations to restrict ourselves to simpler expressions.
Let H be a k-labeled graph, let r ∈ and let i, s_1, …, s_r ∈ [k].
Then if i ∈{s_1, …, s_r}, then it holds that
ρ_i →{s_1, …, s_r} (H) = ρ_i →{i, s_r}∘…∘ρ_i →{i, s_1} (H)
and if i ∉{s_1, …, s_r}, then it holds that
ρ_i →{s_1, …, s_r} (H) = ρ_i →∅∘ρ_i →{i, s_r}∘…∘ρ_i →{i, s_1} (H).
We also have
1 ⟨ s_1, …, s_r ⟩ = ρ_s_1 →{s_1, s_r}∘…∘ρ_s_1 →{s_1, s_2}∘ 1 ⟨ s_1 ⟩
if r > 0
and
1 ⟨∅⟩ = ρ_1 →∅ 1 ⟨ 1 ⟩.
Therefore, by first applying the above rules to all relabel-nodes whose right side has size one or more than two and then suppressing relabel-nodes of form ρ_i →{i}, we may restrict ourselves to relabel-nodes that either remove some label i or add a label j to every vertex with label i.
Similarly, using the last two equalities we may assume that every introduce-node uses exactly one label.
Note that the length of the multi-k-expression increases by at most a factor of k after these transformations.
Also we can assume that for every join-node, the joined label sets are non-empty.
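As a concrete illustration of these rewriting rules, the following Python sketch (our own and purely illustrative) decomposes a general relabel operation into the elementary steps used below and applies them to an explicit multi-labeling:

def decompose_relabel(i, S):
    """Rewrite rho_{i -> S} into elementary steps, following the identities above.

    Returns operations applied left to right: ('add', i, s) stands for
    rho_{i -> {i, s}} and ('drop', i) for rho_{i -> {}}.
    """
    ops = [('add', i, s) for s in sorted(S) if s != i]
    if i not in S:
        ops.append(('drop', i))
    return ops


def apply_ops(labelling, ops):
    """Apply elementary relabel steps to a list of per-vertex label sets."""
    for op in ops:
        if op[0] == 'add':
            _, i, s = op
            labelling = [L | {s} if i in L else L for L in labelling]
        else:
            _, i = op
            labelling = [L - {i} for L in labelling]
    return labelling


# Example: rho_{2 -> {3, 4}} applied to the label sets {1, 2} and {2, 3} yields
# {1, 3, 4} and {3, 4}:
# apply_ops([{1, 2}, {2, 3}], decompose_relabel(2, {3, 4}))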
Finally, we may reduce the number of nodes in a multi-k-expression to polynomial as follows.
First, similarly to clique-expressions, we may assume that every union-node is followed first by a sequence of join-nodes, then by a sequence of relabel-nodes, and then by a union-node (if any).
Then we may assume that between any two consecutive union-nodes, there are at most k^2 join-nodes: at most one per pair i, j ∈ [k].
Now we sketch how to achieve that we also have at most k relabel-nodes between two consecutive union-nodes, namely at most one per possible left side i ∈ [k].
Suppose there are two distinct nodes x_1 and x_2 being ρ_i → S_1- and ρ_i → S_2-nodes, respectively, for S_1, S_2 ⊆ [k].
We choose x_1 and x_2 so that there are no further ρ_i → S'-nodes between them.
If the label set i is empty right before the application of x_2, we simply suppress x_2.
Otherwise, observe that in all vertices that had label i before x_1, this label was replaced by S_1 (with possibly i ∈ S_1).
So every vertex that has label i right before x_2 got this label at some relabel-node on the path from x_1 to x_2 (including x_1).
Therefore, for every ρ_j → S-node x on this path with i ∈ S, we replace the operation in x with ρ_j → (S ∖{i}) ∪ S_2.
And after that we suppress x_2.
Note that this is correct since no ρ_i → S'-node occurs between x_1 and x_2.
By repeating this process, we obtain that for each i ∈ [k] there is at most one ρ_i → S-node between any two consecutive union-nodes.
As for a clique-expression, the leaves of a multi-expression are in bijection with the vertices of the arising graph (i.e., there are at most n leaves).
And since union-nodes are the only nodes with more than one child, there are at most 𝒪(n) union-nodes.
Finally, the above argument implies that there are at most 𝒪(k^2 n) relabel- and join-nodes.
Let ϕ be a multi-k-expression of a graph H on n vertices.
Then given ϕ, in time polynomial in |ϕ| and k, we can compute a multi-k-expression ξ of H such that there are at most 𝒪(k^2 n) nodes and for every node t of ξ the following holds.
If t is a ρ_i → S-node for some i ∈ [k] and S ⊆ [k], then we have S = ∅ or S = {i, j} for some j ≠ i ∈ [k].
If t is a η_i, j-node for some i ≠ j ∈ [k], then we have U^t_i ≠∅ and U^t_j ≠∅.
And if t is a 1⟨ S ⟩-node for some S ⊆ [k], then we have |S| = 1.
For the remainder of this section we assume that an expression has this form.
Now we show how existing algorithms for clique-width can be adapted to achieve the same running time for multi-clique-width.
§.§ Dominating Set
In the Dominating Set problem, given a graph G = (V, E) we are asked about the cardinality of the smallest set S ⊆ V with N_G[S] = V.
Bodlaender et al. have developed a (4^k) algorithm <cit.>.
The idea behind it is to store a pair of Boolean values for every label: the first value reflects whether the label set contains a vertex from the partial solution while the second reflects whether all vertices of the label set are dominated.
Crucially (as for most problems handled below), the algorithm does not make use of the fact that every vertex holds exactly one label.
So we can use almost the same algorithm to process a multi-k-expression.
The procedures for introduce-, join-, and union-nodes can be reused from Bodlaender et al.
And it remains to handle relabel-nodes: in this case, the state of every single vertex remains the same and we only need to represent the states with respect to the new labeling function.
For a ρ_i →∅-node, all vertices of label i are now dominated and no vertex belongs to the dominating set while the other labels remain unaffected.
And for a ρ_i →{i, j}-node, first, all vertices of label j are dominated iff this was true for labels i and j before the relabeling; and second, a vertex of label j belongs to a partial solution iff this was the case for the label i or the label j before; the state of the other labels (in particular, of the label i) remains.
We omit a formal description since analogous ideas occur several times in the next problems.
This yields a (4^k) algorithm.
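A sketch of the relabel-node update in Python (illustrative names; the table maps state vectors to minimum partial-solution sizes, and the ρ_i →∅ case records the emptied class as having no chosen vertex and nothing left to dominate, following the convention above) could look as follows:

def relabel_update(table, i, j=None):
    """Sketch of the Dominating Set table update at a relabel-node.

    table maps a state vector (one pair per label, 0-based indices) to the
    minimum size of a partial solution; the pair (has_p, dom_p) of label p
    records whether some chosen vertex holds p and whether every vertex
    holding p is dominated.  j=None encodes rho_{i -> {}}; otherwise the node
    is rho_{i -> {i, j}}.
    """
    new = {}
    for state, value in table.items():
        s = list(state)
        if j is None:
            s[i] = (False, True)          # the class of label i becomes empty
        else:
            has_i, dom_i = s[i]
            has_j, dom_j = s[j]
            # label j now additionally covers the vertices that held label i
            s[j] = (has_i or has_j, dom_i and dom_j)
        key = tuple(s)
        new[key] = min(new.get(key, value), value)
    return new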
Katsikarelis et al. have proven the matching lower bound for clique-width which then also applies to multi-clique-width <cit.>.
Let G be a graph given together with a multi-k-expression of G. Then Dominating Set can be solved in time (4^k).
Unless SETH fails, this problem cannot be solved in time ((4 - ε)^k) for any ε > 0.
§.§ Chromatic Number
In the Chromatic Number problem, given a graph G = (V, E) we are asked about the smallest integer q such that there exists a proper q-coloring of G, that is, a mapping ϕ V → [q] such that ϕ(u) ≠ϕ(v) for all uv ∈ E.
Kobler and Rotics have developed an algorithm that given a graph and a k-expression of this graph, solves the Chromatic Number problem in time f(k) · n^2^𝒪(k) <cit.>.
Later, Fomin et al. have proven the ETH-tightness of this result by showing that under ETH, there is no algorithm solving this problem in f(k) · n^2^o(k) even if a k-expression of the graph is provided <cit.>.
The algorithm by Kobler and Rotics is based on dynamic programming.
The records are of the form N: 2^[k]∖{∅}→ [n]_0 and for a node t of a k-expression, a mapping N is a record if there exists a proper coloring c: V(G_t) → of G_t such that for every subset ∅≠ S ⊆ [k] of labels, we have
|{j ∈|∀ i ∈ S U^t_i ∩ c^-1(j) ≠∅, ∀ i ∈ [k] ∖ S U^t_i ∩ c^-1(j) = ∅}| = N[S].
Simply speaking, N[S] reflects how many colors are there that occur at each label from S but at no further label.
Let T_t denote the set of records at the node t.
Let r denote the root of the k-expression.
In the end, their algorithm outputs the smallest number q such that there exists a record N in T_r with ∑_∅≠ S ⊆ [k] N[S] = q.
This sum is exactly the number of colors used by a corresponding coloring since for every color, there exists a (unique) set S of labels on which this color is used.
For the multi-clique-width setting, there is a small hurdle: it might happen that at some point a vertex loses all its labels so we might thus “forget” that such a vertex also uses some color.
To overcome this issue, given a multi-k-expression ϕ of a graph G, we create a multi-(k+1)-expression of the same graph by replacing every 1⟨ i ⟩-node (for i ∈ [k]) with a 1⟨ i, k+1 ⟩-node.
After that, we apply the simple transformations described earlier to achieve that the expression satisfies <ref>.
This ensures that at every sub-expression, every vertex has at least one label (namely k+1) so for any record N, the value ∑_∅≠ S ⊆ [k+1] N[S] still reflects the total number of colors used by the corresponding coloring.
Apart from that, their algorithm never uses that a vertex holds exactly one label and therefore can be easily adapted to multi-clique-width as follows.
Introduce-, join-, and union-nodes can be adopted from the original algorithm by Kobler and Rotics.
And it remains to handle relabel-nodes.
First, let t be a ρ_i →∅-node for some i ∈ [k+1] and let t' be its child.
For every record N' of t', we create a record N such that for every ∅≠ S ⊆ [k+1], we have:
N[S] =
0 i ∈ S
N'[S] + N'[S ∪{i}] i ∉ S
.
Then we add N to the set of records of t.
It is straightforward to verify that this process results in exactly the set of records of t.
Second, let t be a ρ_i →{i, j}-node for some i ≠ j ∈ [k] and let t' be its child.
We may assume that i is non-empty at t' since t can be suppressed otherwise.
For every record N' of t', we create a record N as follows.
If the label j is empty in t', then for all ∅≠ S ⊆ [k+1] we set
N[S] = N'[S']
where S' is obtained from S by swapping the roles of i and j, i.e.,
S' =
S i, j ∉ S
S i, j ∈ S
S ∖{i}∪{j} i ∈ S, j ∉ S
S ∖{j}∪{i} i ∉ S, j ∈ S
.
Otherwise, we may assume that j is non-empty in t'.
Then for every ∅≠ S ⊆ [k+1], we set:
N[S] =
0 i ∈ S, j ∉ S
N'[S] + N'[S ∖{j}] i ∈ S, j ∈ S
N'[S] i ∉ S
.
Again, it is easy to verify these equalities.
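The two relabel rules can also be stated in a forward manner: instead of computing N[S] from the old record, each old count is pushed to the label set its colors occupy after the relabeling. The following Python sketch (illustrative only) implements exactly this reformulation and relies on the auxiliary label k+1 being present on every vertex, so no color ever loses all of its labels:

def relabel_record(record, i, j=None):
    """Sketch of the Chromatic Number record update at a relabel-node.

    record maps a nonempty frozenset S of labels to the number of colours used
    on exactly the labels in S.  j=None encodes rho_{i -> {}}; otherwise the
    node is rho_{i -> {i, j}}.
    """
    out = {}
    for S, cnt in record.items():
        if j is None:
            T = S - {i}                   # colours seen on i are now seen on S \ {i}
        else:
            T = S | {j} if i in S else S  # every vertex holding i now also holds j
        out[T] = out.get(T, 0) + cnt
    return out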
In the end, as in the original algorithm, we output the smallest number q for which there is a record N in T_r with ∑_∅≠ S ⊆ [k+1] N[S] = q.
The process runs in time n^2^𝒪(k) for every node and since the number of nodes can be assumed to be polynomial, the whole algorithm has the same running time.
Fomin et al. showed that under ETH the problem cannot be solved in time f(k) · n^2^o(k) for any computable function f.
Since multi-clique-width lower-bounds clique-width, this also applies in our case.
Let G be a graph given together with a multi-k-expression of G. Then the Chromatic Number problem can be solved in time f(k) · n^2^𝒪(k).
Unless ETH fails, this problem cannot be solved in time f(k) · n^2^o(k) for any computable function f.
§.§ q-Coloring
In the q-Coloring problem, given a graph G = (V, E) we are asked about the existence of a proper q-coloring of G, that is, a mapping ϕ V → [q] such that ϕ(u) ≠ϕ(v) for all uv ∈ E.
Now we sketch the main idea behind the SETH-tight ((2^q - 2)^k) algorithm for the q-Coloring problem parameterized by clique-width by Lampis <cit.>.
The naive algorithm traversing a clique-expression would store, for every label, the set of colors occurring on the vertices of that label, and would thus have a running time of ((2^q)^k).
But Lampis observed that two states can be excluded.
First, the empty set of colors occurs at some label if and only if this label is empty so such information is trivial and there is no need to store it.
Second, if some label i contains all q colors and later the vertices of this label obtain a common neighbor v, then such a coloring would become improper since the color of v necessarily occurs on the label i as well.
Therefore, the set of all colors can only appear on a label that would not get any new common neighbors (and in particular, this label does not participate in any join later).
This led Lampis to a notion of a so-called live label.
A label is live at some node t of the expression if it contains a vertex which will later get an incident edge (and hence, all vertices in the label set will get a common neighbor).
In particular, a live label is non-empty.
For multi-clique-width we will follow a similar idea, however we need to slightly adapt the notion of a live label.
For motivation, we provide the following example.
Let q = 3 and consider a multi-3-labeled graph on four vertices with label sets {1, 2}, {3}, {1}, and {1}.
Suppose this graph is edgeless, and a partial solution colors these vertices so that the three vertices holding label 1 receive three distinct colors (see <ref>).
And now a η_2, 3-operation occurs.
Although all three colors appear on label 1 and a vertex of label 1 now gets a neighbor, this does not make a partial solution invalid.
So although the label 1 is live as defined by Lampis, it can still hold all colors.
The reason is that the edge creation happens due to some other label held by a vertex with label 1.
This motivates our following definition.
We say that a label i is active at a node t of a multi-k-expression ϕ if U^t_i is non-empty and there exist labels a ≠ b ∈ [k] and a η_a, b-node t' such that:
* The node t' is an ancestor of t;
* Let t_1, …, t_q be all inner relabel-nodes on the path from t to t' (in the order they occur on this path) and let s_1, …, s_q ∈ [k] and S_1, …, S_q ⊆ [k] be such that for every j ∈ [q], the node t_j is a ρ_s_j → S_j-node. Then it holds that
a ∈σ_q ( σ_q-1 ( … (σ_1({i}) ) ) )
where for T ⊆ [k]
σ_j(T) =
T if s_j ∉ T
(T ∖{s_j}) ∪ S_j if s_j ∈ T
;
* The set U^t'_b is non-empty.
Informally speaking, the set σ_q ( σ_q-1 ( …σ_1({i}) ) ) contains the labels to which the label i has been relabeled on the way to t'.
Then, for any label a from this set, all vertices with label i in t contain the label a in t' so the join-node t' creates a common neighbor for these vertices.
By ℓ(t) we denote the set of all active labels at node t.
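The composition σ_q ∘…∘σ_1 and the resulting activity test can be sketched as follows (illustrative Python; the relabel-nodes on the path to a join are passed as a list of pairs (s_j, S_j), and only joins whose other side is nonempty at that node are passed to the test):

def labels_reachable(i, relabels):
    """Labels carried, just before t', by the vertices that have label i at t.

    relabels is the ordered sequence (s_1, S_1), ..., (s_q, S_q) of the inner
    relabel-nodes on the path from t to t'.
    """
    T = {i}
    for s, S in relabels:
        if s in T:
            T = (T - {s}) | set(S)
    return T


def is_active(i, class_nonempty, joins_above):
    """Sketch of the activity test for label i at node t.

    class_nonempty tells whether U^t_i is nonempty; joins_above contains one
    pair (relabels, a) for every join-node eta_{a,b} above t whose other side
    b is nonempty there (both orientations of the join are expected).
    """
    if not class_nonempty:
        return False
    return any(a in labels_reachable(i, relabels) for relabels, a in joins_above)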
With this definition, we are now ready to provide an algorithm that given a graph and its multi-k-expression solves the q-Coloring problem in time ((2^q - 2)^k) by keeping track of the sets of colors used by every active label.
By <ref>, we may assume that we are given a multi-k-expression of H satisfying the properties of the lemma.
We will follow the dynamic programming by Lampis, show how to handle relabel-nodes, and observe that the approach for introduce-, join-, and union-nodes can be adopted from his work without changes.
By we denote the set 2^[q]∖{∅, [q]} of relevant sets of colors, then we have || = 2^q - 2.
The dynamic programming table A_t is indexed by ^ℓ(t), i.e., the assignments of colors to active labels.
And for f ∈^ℓ(t), the value A_t[f] is the number of proper q-colorings of G_t such that for every active label i, the coloring uses exactly the colors f(i).
Now let t be a ρ_i →∅-node with i ∈ [k] and let t' be its child.
Observe that the label i is then inactive in both t and t'.
The remaining labels are not affected so we simply have ℓ(t) = ℓ(t') and A_t = A_t'.
Next let t be a ρ_i →{i, j}-node for i, j ∈ [k] and let t' be its child.
We may assume that i ≠ j holds since otherwise t can be suppressed.
Now we have to make a case distinction based on the activity of labels i and j.
Case 1: Assume that i is inactive in t'.
Then by definition of activity, both labels i and j are inactive in t.
Other labels are not affected so we again have ℓ(t) = ℓ(t') and A_t = A_t'.
Case 2: From now on, we assume that i is active in t'.
Then by definition, at least one of labels i and j is active in t.
Case 2.a: Assume that both i and j are active in t.
Then label j is either active (and then ℓ(t) = ℓ(t')) or empty in t' (then ℓ(t) = ℓ(t') ∪̇{j} by definition).
We will compute the entries of the table A_t via an auxiliary table B_t indexed by 𝒬^ℓ(t) as follows.
First, all entries of B_t are initialized with zeros.
After that we iterate over all footprints f ∈ 𝒬^ℓ(t').
Let f' ∈ 𝒬^ℓ(t) be such that for all p ∈ ℓ(t) we have:
f'(p) = f(p) if p ≠ j; f'(p) = f(i) ∪ f(j) if p = j and j is active in t'; and f'(p) = f(i) if p = j and j is empty in t'.
If j is active in t' and f(i) ∪ f(j) = [q], then we skip this footprint: since j is an active label, all vertices with label j will later get a common neighbor, so all q-colorings with such footprints will be invalidated.
Otherwise, we increase the value B_t[f'] by the value A_t'[f].
This process requires a number of arithmetic operations linear in the number of entries of A_t'.
As a result, the entries of B_t coincide with those of A_t: indeed, in this process we have simply recomputed the footprints of every q-coloring and eliminated the colorings that would be invalidated later in the expression anyway.
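As an illustration of this bookkeeping, the following Python sketch (our own; the footprint representation as a tuple of (label, frozenset-of-colors) pairs is an assumption) recomputes footprints at a ρ_i →{i, j}-node and discards colorings whose active label j would end up holding all q colors.

from collections import defaultdict

def relabel_case_2a(A_prev, i, j, q):
    # A_prev maps footprints (tuples of (label, frozenset of colors)) to counts at node t'.
    B = defaultdict(int)
    for footprint, count in A_prev.items():
        f = dict(footprint)
        merged = f.get(i, frozenset()) | f.get(j, frozenset())
        if merged == frozenset(range(q)):
            continue                      # label j is active and will get a common neighbor: skip
        f[j] = merged
        B[tuple(sorted(f.items()))] += count
    return dict(B)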
Case 2.b: Assume that i is active in t and j is inactive in t.
Then by definition j is inactive in t' as well.
Then we simply have ℓ(t) = ℓ(t') and A_t = A_t' since the underlying graph has not been changed.
Case 2.c: Assume that i is inactive in t and j is active in t.
Then j is either active (and then ℓ(t) = ℓ(t') - {i}) or empty (then ℓ(t) = (ℓ(t') - {i}) ∪{j}) in t'.
This case is similar to Case 2.a and is handled as follows.
We will compute the entries of the table A_t via an auxiliary table B_t indexed by 𝒬^ℓ(t) as follows.
First, we initialize all values of B_t with zeros.
After that we iterate over all footprints f ∈ 𝒬^ℓ(t').
Let f' ∈ 𝒬^ℓ(t) be such that for all p ∈ ℓ(t) we have:
f'(p) = f(p) if p ≠ j; f'(p) = f(i) ∪ f(j) if p = j and j is active in t'; and f'(p) = f(i) if p = j and j is empty in t'.
If j is active in t' and f(i) ∪ f(j) = [q], then we skip this footprint: since j is an active label, all vertices with label j will later get a common neighbor, so all q-colorings with such footprints will be invalidated.
Otherwise, we increase the value B_t[f'] by the value A_t'[f].
This process requires a number of arithmetic operations linear in the number of entries of A_t'.
As a result, the table B_t contains the same entries as A_t and the argument is analogous to Case 2.a.
This concludes the procedure for relabel-nodes.
Now let t be a η_i, j-node for some i ≠ j ∈ [k] and let t' be its child.
If i were inactive in t', then i would be empty in t' and the join-node t could be suppressed.
So we may assume that i, and similarly j, is active in t'.
Now we can proceed the same way as Lampis so we only sketch it very briefly.
Some of the labels i and j may become inactive in t so let I = ℓ(t') ∖ℓ(t) ⊆{i, j}.
Then we set
A_t[f] = 0 if f(i) ∩ f(j) ≠ ∅, and A_t[f] = ∑_c ∈ 𝒬^I A_t'[f × c] if f(i) ∩ f(j) = ∅.
The first case reflects that every q-coloring of G_t in which labels i and j share a color is not proper.
In the second case, we keep the information about the coloring and store it with the correct footprint.
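A corresponding sketch for the join-node, restricted for simplicity to the situation where both labels stay active (so I = ∅ and no summation is needed), just removes the footprints in which labels i and j share a color; missing entries are treated as zero, and the footprint representation is the same assumed one as above.

def join_update(A_prev, i, j):
    # Keep only footprints whose color sets at labels i and j are disjoint.
    return {f: cnt for f, cnt in A_prev.items() if not (dict(f)[i] & dict(f)[j])}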
Crucially, although we use a different notion of active labels (compared to the live labels of Lampis), the recurrence remains the same.
As we see next, the same holds for union-nodes so that their fast processing can be adopted from Lampis.
Let t be a union-node and let t_1 and t_2 be its children.
Observe that we have ℓ(t) = ℓ(t_1) ∪ℓ(t_2).
Moreover, if some label i ∈ [k] is active in t but not in t_1, then i is empty in t_1 so the set of colors used on it in any coloring is empty.
This is the property that ensures that the approach of Lampis is correctly applicable in our case.
The approach relies on fast subset convolution and can be described as follows.
Lampis first computes the entries B_t_1[f] and B_t_2[f] of auxiliary tables in which, for a label i, the value f(i) provides the set of colors allowed to be used on label i, i.e., an upper bound on the set of used colors instead of the exact value.
Then the analogous table B_t for the node t can be computed as pointwise multiplication of B_t_1 and B_t_2 and finally, the reverse procedure is used to compute A_t from B_t.
We refer to the paper by Lampis for all details <cit.>.
The whole process requires the number of arithmetic operations linear in the number of entries of A_t.
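Lampis achieves this running time with fast subset convolution; the naive combination below (our own sketch, quadratic in the table sizes rather than linear) only illustrates what the union-node has to compute: a coloring of G_t is a pair of colorings of G_t_1 and G_t_2, and the footprint of every label is the pointwise union.

from collections import defaultdict

def union_update_naive(A1, A2):
    B = defaultdict(int)
    for f1, c1 in A1.items():
        for f2, c2 in A2.items():
            d1, d2 = dict(f1), dict(f2)
            merged = {lbl: d1.get(lbl, frozenset()) | d2.get(lbl, frozenset())
                      for lbl in set(d1) | set(d2)}      # empty labels contribute the empty set
            B[tuple(sorted(merged.items()))] += c1 * c2
    return dict(B)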
Now we are able to handle all types of nodes of a multi-k-expression and compute the table A_r where r denotes the root of the expression.
Observe that no label is active in r so this table contains a unique entry equal to the number of proper q-colorings of G_r = H and this approach even solves the counting version of the problem.
Each of the considered tables has at most (2^q - 2)^k entries and as argued above, at each node we carry out a number of arithmetic operations linear in the size of the table.
Each entry is bounded by q^n (the largest possible number of q-colorings of H) and the number of nodes in the expression is polynomial in the size of H and k, so the total running time is bounded by 𝒪^*((2^q - 2)^k) as claimed.
Lampis also showed that his algorithm is tight under SETH and since multi-clique-width lower-bounds clique-width, this also applies in our case.
Let G be a graph given together with a multi-k-expression of G. Then q-Coloring can be solved in time 𝒪^*((2^q - 2)^k).
Unless SETH fails, this problem cannot be solved in time 𝒪^*((2^q - 2 - ε)^k) for any ε > 0.
§.§ Connected Vertex Cover
In the Connected Vertex Cover problem, given a graph G = (V, E) we are asked about the cardinality of the smallest set S ⊆ V such that G[S] is connected and for every edge uv ∈ E we have u ∈ S or v ∈ S.
Although the Connected Vertex Cover problem seems to be very different from q-Coloring at first sight, for most parameterizations known (e.g., <cit.>), the SETH-tight algorithms rely on the Cut&Count technique introduced by Cygan et al. <cit.>.
Very briefly they show that to decide whether a graph admits a connected vertex cover of certain size, it suffices to count the number of pairs (L, R) where L ∪̇R is a vertex cover of the desired size and there are no edges between L and R.
This very rough description hides some technical details like fixing a vertex in L, assuming that a vertex cover of minimum weight is unique via Isolation lemma etc. (see <cit.>).
But this shows that on a high level, solving this problem reduces to counting the number of colorings of the graph with colors L, R, and N (stands for not being in a vertex cover) where LR and NN edges are forbidden: the former pair forbids edges between L and R and the latter ensures that every edge of the input graph has at least one vertex in the vertex cover L ∪ R.
Hegerfeld and Kratsch employ this observation to obtain an 𝒪^*(6^cw) algorithm for Connected Vertex Cover parameterized by the clique-width cw, similar to the above algorithm for q-Coloring by Lampis.
There are 8 subsets of {L, R, N}; however, the empty set of colors can only occur on an empty label, while if a label contains all colors, then joining this label with some other label would necessarily lead to an LR or an NN edge <cit.>.
Thus, stated in our terms, a label containing all colors from {L, R, N} is necessarily inactive.
Therefore, it suffices to keep track of only six possible combinations of colors that may occur on an active label.
These observations together with a sophisticated convolution at union-nodes yield their 𝒪^*(6^cw) algorithm.
For us, two properties are crucial.
First, neither the definition of relevant colorings nor the procedure at any node-type uses the fact that any vertex holds only one label.
The second property is more technical and requires a closer look at the algorithm by Kratsch and Hegerfeld.
For correctness of the algorithm, they assume that a k-expression of a graph is irredundant, namely no edge of the graph is created by multiple join-nodes.
It is folklore that any k-expression can be transformed into an irredundant k-expression of the same graph in polynomial time.
Unfortunately, we do not know whether an analogous statement holds for multi-k-expressions.
Also, instead of active labels as we define them in the previous section, they use the live labels as defined by Lampis <cit.>.
However, a closer look at their procedure for the union-node and its correctness reveals that it still works if an expression is not necessarily irredundant but we work with active labels instead of live labels.
Namely, they only use irredundant k-expressions to ensure that whenever there is a join-node η_i,j, the vertices of label i will later get a new common neighbor.
We observe that the expression does not need to be irredundant for this: even if some edges incident with the label i and created by the join-operation are already present in the graph, the vertices of label i will still share a neighbor after this join and therefore, they cannot use all colors.
With this observation, we can adapt the algorithm by Hegerfeld and Kratsch.
All node types apart from relabel-nodes can be handled the same way while relabel-nodes are analogous to the previous subsection.
Hegerfeld and Kratsch also showed that their algorithm is tight under SETH and since multi-clique-width lower-bounds clique-width, this also applies in our case.
Let G be a graph given together with a multi-k-expression of G. Then the Connected Vertex Cover problem can be solved in time 𝒪^*(6^k).
The algorithm is randomized; it does not return false positives and returns false negatives with probability at most 1/2.
Unless SETH fails, this problem cannot be solved in time 𝒪^*((6 - ε)^k) for any ε > 0.
§.§ Connected Dominating Set
Another problem for which Hegerfeld and Kratsch provided an algorithm is Connected Dominating Set <cit.>.
In the Connected Dominating Set problem, given a graph G = (V, E) we are asked about the cardinality of the smallest set S ⊆ V such that G[S] is connected and N_G[S] = V.
As for some other parameterizations (e.g., <cit.>), the algorithm is based on a combination of Cut&Count and the inclusion-exclusion approach.
However, the idea behind the reduction of the number of states is different from Connected Vertex Cover or q-Coloring: for this problem, there is usually (e.g., <cit.>) a state, called Allowed, that allows edges to any other state.
So unlike the previous algorithms, if some label i contains all states and it is joined with a label j containing only Allowed vertices, no conflict occurs.
Instead, they unify multiple combinations of states into the same state and then show that the precise state combination does not matter if one is interested in counting the solutions modulo 2.
We will rely on the main idea behind this algorithm; however, several minor changes need to be carried out, so we provide our whole algorithm for multi-clique-width here.
We will also try to emphasize the parts that are different from the original algorithm by Kratsch and Hegerfeld and argue why they are needed.
Now we define a consistent cut of a graph to state the Cut&Count result for Connected Dominating Set and use it as a black-box later.
Let G be a graph, let v^* be an arbitrary but fixed vertex of G, and let ω: V(G) → [2|V(G)|] be a weight function.
We say that (L, R) is a consistent cut in G if the following properties hold: v^* ∈ L, the sets L and R are disjoint, and there are no edges between L and R in G.
For c ∈ ℕ_0 and w ∈ ℕ_0 with |L ∪ R| = c and ω(L ∪ R) = w, we say that (L, R) has weight w and cardinality c.
By 𝒞^c, w_G we denote the family of all consistent cuts of cardinality c and weight w in G.
We also denote
𝒟^c, w = {(L, R) ∈ 𝒞^c, w_G | L ∪ R is a dominating set of G}.
The Cut&Count result for Connected Dominating Set by Cygan et al. <cit.> can be stated as follows:
Let G = (V, E) be a graph, v^* ∈ V a fixed vertex, ω: V → [2|V|] a weight function, and c ∈ [n]_0, w ∈ [2|V|^2]_0 integers.
If there exists an algorithm that, given the above input, computes the size of 𝒟^c, w modulo 2 in time 𝒪^*(T) for some computable non-decreasing function T, then there exists a randomized algorithm that, given a graph G, solves the Connected Dominating Set problem in time 𝒪^*(T). The algorithm cannot give false positives and may give false negatives with probability at most 1/2.
So from now on we concentrate on the computation of |𝒟^c, w| mod 2, given a multi-k-expression ϕ of G.
First, we may assume that ϕ satisfies <ref>.
To simplify the algorithm, at the top of ϕ we insert k relabel-nodes: a node ρ_i →∅ for each i ∈ [k].
Clearly, this does not change the underlying graph so with a slight abuse of notation, we still denote the arising expression by ϕ.
As Hegerfeld and Kratsch, we define the following families of partial solutions.
We say that (A, B, C) is a subpartition of G if A ∪ B ∪ C ⊆ V(G) and A, B, C are pairwise disjoint.
For a node t of ϕ and c, w ∈ ℕ_0, by 𝒫^c, w_t we denote the family of all subpartitions (L, R, F) of G_t such that
(L, R) ∈ 𝒞^c, w_G_t and there is no edge between L ∪ R and F.
We call such subpartitions partial solutions of size c and weight w.
Let us emphasize the key difference to the partial solutions by Hegerfeld and Kratsch: they distinguish between live and dead labels and additionally require that L ∪ R dominates all vertices of dead labels.
We will take care of domination in the very end.
Now we introduce the signatures of partial solutions as defined by Hegerfeld and Kratsch.
Unlike their work, our signatures are over all labels instead of live labels.
First, for a subpartition (L, R, F) of V_t and a label i ∈ [k], we define S^i_t(L, R, F) ⊆ {𝐋, 𝐑, 𝐅} so that
* 𝐋 ∈ S^i_t(L, R, F) iff L ∩ U^i_t ≠ ∅,
* 𝐑 ∈ S^i_t(L, R, F) iff R ∩ U^i_t ≠ ∅,
* 𝐅 ∈ S^i_t(L, R, F) iff F ∩ U^i_t ≠ ∅.
As already mentioned before, Hegerfeld and Kratsch unify the subsets of {𝐋, 𝐑, 𝐅} that contain at least two elements into a single state 𝐌, so the set of used states is defined as 𝐒 = {{𝐋}, {𝐑}, {𝐅}, 𝐌, ∅}.
A signature is a mapping f: [k] → 𝐒.
We say that a subpartition (L, R, F) of G_t is compatible with f if the following holds for every i ∈ [k]:
* If |S^i_t(L, R, F)| < 2, then f(i) = S^i_t(L, R, F).
* If |S^i_t(L, R, F)| ≥ 2, then f(i) = 𝐌.
Observe that there exists exactly one signature with which (L, R, F) is compatible.
For a signature f, we define
𝒫^c, w_t(f) = {(L, R, F) ∈ 𝒫^c, w_t | (L, R, F) is compatible with f}
and B^c, w_t(f) = |𝒫^c, w_t(f)| mod 2.
So our goal for now is to traverse a multi-k-expression bottom-up to compute the values B^c, w_t(f).
In the end, we will summarize how to obtain the size of 𝒟^c, w modulo 2 from them in order to apply <ref>.
In the following, we assume that the values c and w are reasonable, namely 0 ≤ c ≤ |V(G)| and 0 ≤ w ≤ 2|V(G)|^2.
For values outside these ranges, we implicitly treat any B^c, w_t(f) as zero.
In the following, by ≡ we denote equality in ℤ_2.
First, let t be a 1⟨ i ⟩-node for some i ∈ [k] introducing a vertex v.
Then it holds that
B^c, w_t(f) = [v ≠ v^* ∨ f(i) = {𝐋}] · [(c = w = 0 ∧ f(i) ∈ {∅, {𝐅}}) ∨ (c = 1 ∧ w = ω(v) ∧ f(i) ∈ {{𝐋}, {𝐑}})] · [∀ j ∈ [k] ∖ {i}: f(j) = ∅].
Next, let t be a η_i, j-node for some i ≠ j ∈ [k] and let t' be its child.
Here, we can adopt the approach of Kratsch and Hegerfeld without changes, namely:
B^c, w_t(f) = feas(f(i), f(j)) · B^c, w_t'(f),
where feas: 𝐒 × 𝐒 → {0, 1} is given by

feas   | ∅   {𝐋}  {𝐑}  {𝐅}  𝐌
∅      | 1    1    1    1    1
{𝐋}    | 1    1    0    0    0
{𝐑}    | 1    0    1    0    0
{𝐅}    | 1    0    0    1    0
𝐌      | 1    0    0    0    0
In simple words, feas invalidates partial solutions with LR-, LF-, or RF-edges between i and j.
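The table translates directly into a small helper. In the following Python sketch the state names "L", "R", "F", "M" (the unified state), and "E" (the empty state) are our own shorthand.

OK_PAIRS = {("L", "L"), ("R", "R"), ("F", "F")}

def feas(s1, s2):
    # Joining is harmless with an empty label, or when both labels carry the same single state.
    if s1 == "E" or s2 == "E":
        return 1
    return 1 if (s1, s2) in OK_PAIRS else 0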
Now let t be a ρ_i →∅-node for some i ∈ [k] and let t' be its child.
This operation does not change the set of partial solutions of the arising graph, only their signatures, by making the set U^i_t empty.
So it holds that
B^c, w_t(f) ≡ [f(i) = ∅] · ∑_s ∈ 𝐒 B^c, w_t'(f[i → s]).
Similarly, let t be a ρ_i →{i, j}-node for i ≠ j ∈ [k] and let t' be its child.
Again, the set of partial solutions remains the same but signatures change: the signature at label j is now the union of the old signatures at labels i and j.
Therefore, we have
B^c, w_t(f) ≡ ∑_s ∈ 𝐒: merge(s, f(i)) = f(j) B^c, w_t'(f[j → s]),
where merge: 𝐒 × 𝐒 → 𝐒 is defined by Hegerfeld and Kratsch as

merge  | ∅    {𝐋}  {𝐑}  {𝐅}  𝐌
∅      | ∅    {𝐋}  {𝐑}  {𝐅}  𝐌
{𝐋}    | {𝐋}  {𝐋}  𝐌    𝐌    𝐌
{𝐑}    | {𝐑}  𝐌    {𝐑}  𝐌    𝐌
{𝐅}    | {𝐅}  𝐌    𝐌    {𝐅}  𝐌
𝐌      | 𝐌    𝐌    𝐌    𝐌    𝐌
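Analogously, the merge table boils down to a two-line rule (same shorthand for the states as above).

def merge(s1, s2):
    # Union of the two state sets: the empty state is neutral, equal singletons stay, anything else mixes.
    if s1 == "E":
        return s2
    if s2 == "E":
        return s1
    return s1 if s1 == s2 else "M"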
It is easy to see that for the previous node types, all values B^c, w_t(f) for reasonable c and w can be computed in time 𝒪^*(5^k).
Finally, let t be a union-node and let t_1 and t_2 be its children.
Then, informally speaking, every partial solution of G_t corresponds to a pair of partial solutions of G_t_1 and G_t_2 by forming their pointwise union where a signature at some label is the union of signatures at this label in both partial solutions.
Formally, we have
B^c, w_t(f) ≡ ∑_c_1 + c_2 = c, w_1 + w_2 = w ∑_f_1, f_2: [k] → 𝐒, merge(f_1, f_2) = f B^c_1, w_1_t_1(f_1) · B^c_2, w_2_t_2(f_2),
where merge of two functions is componentwise, i.e., merge(f_1, f_2)(i) = merge(f_1(i), f_2(i)) for all i ∈ [k].
This equality is analogous to Hegerfeld and Kratsch with the only difference that in our case, the signatures keep track of all labels and not only live ones.
Similarly to their work, we may observe that there is only a polynomial number of reasonable tuples (c_1, c_2, w_1, w_2).
Then we may iterate over all of them in polynomial time.
Now we may assume that such a tuple is fixed and we aim to compute
∑_f_1, f_2: [k] → 𝐒, merge(f_1, f_2) = f B^c_1, w_1_t_1(f_1) · B^c_2, w_2_t_2(f_2) mod 2.
By Lemma 4.6 in <cit.>, this can be accomplished in time 𝒪^*(5^k).
These equalities provide a way to compute the values B^c, w_r(f), where r denotes the root of ϕ, for all signatures f.
Any node is processed in time 𝒪^*(5^k) and since we may assume that the number of nodes in ϕ is polynomial in k and |V(G)|, the values can be computed in time 𝒪^*(5^k).
At the beginning, we already mentioned that the transformation from the numbers B^c, w_r(f) to the value |𝒟^c, w| mod 2 has to be carried out differently for multi-clique-width.
For clique-width, Hegerfeld and Kratsch do it labelwise: namely, they ensure that the vertices of a label are dominated once the label is not live anymore.
This ensures that every vertex is processed exactly once like this.
There are two issues related to this in our case.
First, for this they rely on the existence of irredundant clique-expressions and we do not know whether this is true for multi-expressions.
Second, transforming a vertex every time one of its labels is not active anymore would potentially lead to transforming this vertex multiple times resulting in an uncontrolled behaviour.
To overcome these issues, we will carry out such a transformation at the very end.
Let us note that although the procedure is different from the original work of Hegerfeld and Kratsch, the idea behind it remains the same.
Recall that at the top of ϕ, we have a ρ_i →∅-node for every i ∈ [k].
Thus, we have U^i_r = ∅ for all i ∈ [k] and every partial solution of G_r = G has the signature f_∅: [k] → 𝐒 where f_∅(i) = ∅ for every i ∈ [k].
So we have
|𝒫^c, w_r| ≡ B^c, w_r(f_∅).
Now we claim that |𝒟^c, w| ≡ |𝒫^c, w_r| holds.
For simplicity of notation, let
𝒟'^c, w = { (L, R, ∅) | (L, R) ∈ 𝒟^c, w}.
Clearly, it holds that |𝒟'^c, w| = |𝒟^c, w|, so it suffices to show that |𝒟'^c, w| ≡ |𝒫^c, w_r|.
First, observe that 𝒟'^c, w ⊆ 𝒫^c, w_r holds: indeed, for every element (L, R, ∅) of 𝒟'^c, w, the pair (L, R) is a consistent cut of G_r = G and the cardinality resp. weight of L ∪ R is c resp. w.
Next we show that the cardinality of 𝒫^c, w_r ∖ 𝒟'^c, w is even.
For this, consider an arbitrary but fixed pair (L, R) such that there exists F with
(L, R, F) ∈ 𝒫^c, w_r ∖ 𝒟'^c, w.
First, we claim that L ∪ R is not a dominating set of G = G_r.
If F = ∅, then by definition of these sets, the only reason for (L, R, F) to belong to 𝒫^c, w_r but not 𝒟'^c, w is that L ∪ R is not a dominating set of G;
On the other hand, if there is a vertex v ∈ F, then there is no edge between L ∪ R and v ∈ F so v is undominated by L ∪ R.
Let U = V(G) ∖ N_G[L ∪ R].
By our claim, the set U is non-empty.
Then the sets F satisfying (L, R, F) ∈ 𝒫^c, w_r ∖ 𝒟'^c, w are exactly the subsets of U, since there is no edge between L ∪ R and F.
Therefore there are exactly 2^|U| such sets F.
Recall that U is non-empty so 2^|U| is even.
Altogether, for every fixed pair (L, R) there exist either no or an even number of sets F with (L, R, F) ∈ 𝒫^c, w_r ∖ 𝒟'^c, w.
So the size of 𝒫^c, w_r ∖ 𝒟'^c, w is indeed even.
Altogether we obtain that
|𝒟^c, w| = |𝒟'^c, w| ≡ |𝒫^c, w_r|.
Thus, the above algorithm computes the size of 𝒟^c, w modulo 2 in time 𝒪^*(5^k) and by <ref>, the Connected Dominating Set problem can also be solved in time 𝒪^*(5^k).
Hegerfeld and Kratsch also showed that their algorithm is tight under SETH and since multi-clique-width lower-bounds clique-width, this also applies in our case.
Let G be a graph given together with a multi-k-expression of G. Then the Connected Dominating Set problem can be solved in time 𝒪^*(5^k).
The algorithm is randomized; it does not return false positives and returns false negatives with probability at most 1/2.
Unless SETH fails, this problem cannot be solved in time 𝒪^*((5 - ε)^k) for any ε > 0.
§ CONCLUSION
In this work, we studied two generalizations of clique-width, namely fusion-width and multi-clique-width, both introduced by Fürer <cit.>.
First, we showed that the fusion-width of a graph is an upper bound for its multi-clique-width.
For the other direction, the best upper bound we are aware of is fw(G) ≤ 2^mcw(G), and we leave open whether this is tight.
By extending existing algorithms for clique-width, we have
obtained tight algorithms parameterized by multi-clique-width for
Dominating Set, Chromatic Number,
q-Coloring, Connected Vertex Cover, and Connected Dominating Set.
The running times are the same as for (S)ETH-optimal algorithms parameterized by clique-width.
For Hamiltonian Cycle, MaxCut, and Edge Dominating Set, we were not able to achieve analogous results and these complexities remain open.
Instead, we have introduced glue-expressions equivalent to fuse-expressions and then we employed them for these three problems to obtain tight algorithms parameterized by fusion-width with the same running times as ETH-optimal algorithms for clique-width.
Finally, in all algorithms we assume that a multi-k-expression / fuse-k-expression is provided.
However, the complexity of computing these parameters is unknown.
To the best of our knowledge, the best approximation would proceed via clique-width, have FPT running time, and a double-exponential approximation ratio.
|
http://arxiv.org/abs/2307.04466v1 | 20230710103140 | Decay of long-lived oscillations after quantum quenches in gapped interacting quantum systems | [
"Jacob H. Robertson",
"Riccardo Senese",
"Fabian H. L. Essler"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
|
http://arxiv.org/abs/2307.03967v1 | 20230708124657 | End-to-End Supervised Multilabel Contrastive Learning | [
"Ahmad Sajedi",
"Samir Khaki",
"Konstantinos N. Plataniotis",
"Mahdi S. Hosseini"
] | cs.CV | [
"cs.CV"
] |
End-to-End Supervised Multilabel Contrastive Learning
Ahmad Sajedi, Samir Khaki, Konstantinos N. Plataniotis, Mahdi S. Hosseini
=========================================================================
Multilabel representation learning is recognized as a challenging problem that can be associated with either label dependencies between object categories or data-related issues such as the inherent imbalance of positive/negative samples. Recent advances address these challenges from model- and data-centric viewpoints. In the model-centric view, label correlation is obtained by an external model design (e.g., graph CNN) to incorporate an inductive bias for training. However, these methods fail to design an end-to-end training framework, leading to high computational complexity. On the contrary, in the data-centric view, the realistic nature of the dataset is considered for improving the classification while ignoring the label dependencies. In this paper, we propose a new end-to-end training framework–dubbed KMCL (Kernel-based Multilabel Contrastive Learning)–to address the shortcomings of both model- and data-centric designs. The KMCL first transforms the embedded features into a mixture of exponential kernels in Gaussian RKHS. It is then followed by encoding an objective loss that is comprised of (a) a reconstruction loss to reconstruct the kernel representation, (b) an asymmetric classification loss to address the inherent imbalance problem, and (c) a contrastive loss to capture label correlation. The KMCL models the uncertainty of the feature encoder while maintaining a low computational footprint. Extensive experiments are conducted on image classification tasks to showcase the consistent improvements of KMCL over the SOTA methods. PyTorch implementation is provided in <https://github.com/mahdihosseini/KMCL>.
§ INTRODUCTION
Learning from multilabel representation is a common practice that is considered in both computer vision <cit.> and medical image <cit.> application domains. Images usually contain more than one object for classification, where they can be semantically related to each other. The idea is to create an embedded feature space that can capture label dependencies to improve the classification task <cit.>. However, effectively learning such embedded space is known to be a challenging problem and various methods have been proposed over the past few years, including sequence-to-sequence modeling <cit.>, graph approaches <cit.>, and new loss-function designs <cit.>. Generally, there are two main approaches to addressing the multilabel representation learning problem: the data-centric approach and the model-centric approach. The data-centric approach focuses on addressing data-related issues like inherent imbalance <cit.>, impartial label training <cit.>, and hierarchical relationships <cit.> while ignoring label dependencies. On the contrary, the model-centric approach aims to capture label interactions for semantic embedding such as graph convolutional networks <cit.>, attention mechanisms <cit.>, and transformer-based learning <cit.>. Despite the benefits, they fail to design an end-to-end learning framework due to their high computational costs or the laborious task of capturing heuristic label dependencies like using correlation matrices. These limitations make them challenging to implement, optimize, and interpret.
In this paper, we aim to combine the benefits of both data-centric and model-centric approaches while addressing their potential drawbacks. The solution lays on the foundation of asymmetric loss <cit.> which tackles the imbalance between positive and negative samples in multilabel classification. Our design augments this loss function by capturing the semantic relationships between labels using a kernel-based contrastive loss. This is achieved through two steps: (a) leveraging a Kernel Mixture Module (KMM) to explore the epistemic uncertainty of the feature encoder (see Figs. <ref> and <ref>). This is done by converting the embedded features of multilabel images into a Gaussian Reproducing Kernel Hilbert Space (RKHS) ℋ, and (b) employing a contrastive learning framework on the Gaussian RKHS to capture label dependencies through a weighted loss-function design (see Fig. <ref>). The resulting loss is trainable from end-to-end, providing high numerical stability during training. The following summarizes the contribution of the paper:
[C1]: We propose a novel end-to-end framework –dubbed KMCL– to strike a balance between model-centric and data-centric approaches using a new contrastive loss augmented on asymmetric classification loss from <cit.>. KMCL is capable of capturing both the epistemic uncertainty of the model and label dependencies between classes simultaneously.
[C2]: We introduce a KMM block design within the KMCL framework to generate a mixture of exponential kernels in Gaussian RKHS to model the uncertainty of the feature encoder and improve the robustness of the classification task. To reconstruct the mixture kernels from data, we propose a loss function ℒ_REC (in Eq. <ref>) as an alternative to the negative log-likelihood loss that addresses the numerical instabilities mentioned in <cit.>.
[C3]: We construct the ℒ_KMCL (in Eq. <ref>) as a complementary loss to ℒ_ASL <cit.> to capture label dependencies and enhance classification performance. We utilize the Bhattacharyya coefficient (ρ) as a similarity metric between two kernel representations to pull together similar classes (positive) from a pair of multilabel images while contrasting dissimilar ones (negative) in Gaussian RKHS.
[C4]: We consistently improve classification performance on both computer vision and medical imaging tasks with low computational footprints. Our loss design yields robust behavior toward a range of hyperparameters that are fixed across all experiments.
§.§ Related Work
Multilabel Image Representation. Multilabel image representation problems have been extensively studied, focusing on exploiting label dependencies within semantically aware regions. Previous approaches include RNN-CNN models for sequence-to-sequence modeling <cit.>, transforming the problem into a multi-instance problem <cit.>, and using recurrent attention reinforcement learning <cit.>. Later, efforts were made to incorporate linguistic embedding of training labels into graph neural network designs <cit.>. However, graph-based approaches assume the presence of coexisting label dependencies, which may not hold true when labels co-occur infrequently. Attention mechanisms have been introduced in dynamic graph modeling networks to address this issue <cit.>. Despite their effectiveness, these approaches often result in complex models with heavy computational requirements and limited generalization in different domains. A residual attention mechanism was introduced <cit.> to reduce such complexities by augmenting independent class feature scores using a class-agnostic average pooling method for aggregation scoring. Recent developments in this field emphasize the realistic nature of multilabel data representation. For example, the design proposed in <cit.> introduces an asymmetric loss function to balance the frequency of positive and negative classes. Other approaches include class-aware loss design for impartial label training <cit.> and exploring hierarchical relationships of multilabel data in a contrastive learning framework <cit.>. In this paper, we leverage both data- and model-centric approaches to reduce the above-mentioned complexities.
Contrastive Learning. Self-supervised learning methods primarily focus on contrastive learning, which involves capturing inter-relational object information in image representation. This is achieved through the use of contrastive loss functions, either in unsupervised contrastive learning where labels are absent <cit.>, or in supervised contrastive learning where labels are available <cit.>. The framework has been extended to multilabel representation learning <cit.> by considering shared label images as positive and unshared label images as negative. The existing multilabel contrastive loss designs rely on hard-coded features and lack flexibility in representing semantically aware objects and their label dependencies. However, we propose transforming embedded features into a mixture of exponential kernels in Gaussian RKHS to account for the potential uncertainty of model parameters and accordingly relax the embeddings.
§ BACKGROUND ON BHATTACHARYYA COEFFICIENT BETWEEN EXPONENTIAL KERNELS
The Bhattacharyya coefficient is a widely used metric to measure the similarity between probability distributions in various fields, including computer vision, pattern recognition, and statistical analysis <cit.>. Normal distributions are commonly evaluated using this metric to determine class separability in transfer learning <cit.>, perform point cloud instance segmentation <cit.>, and employ pseudo-labels for semi-supervised classification <cit.>. However, the Gaussian probability may not always be the best option for estimating the target variable due to normality assumptions which leads to numerical instabilities such as singularity <cit.>. A mixture of exponential kernels can be used as a reliable alternative to estimate the relative likelihood of the target variable, especially when the distribution is unknown or multimodal. In such cases, the Bhattacharyya coefficient ρ between the normalized versions of the kernel components can assess the geometric similarity and degree of overlap. Compared to Kullback-Leibler divergence <cit.> or L_p norms, ρ takes values in the range of [0, 1], which makes it a practical choice for comparing two statistical samples. In the following remark, we will elaborate on the closed-form expression of ρ between two exponential kernels.
Let p(𝐱):= K_Σ_p(𝐱, μ_p) = exp(-1/2𝐱 - μ_p^2_Σ_p^-1) and q(𝐱) := K_Σ_q(𝐱, μ_q) = exp(-1/2𝐱 - μ_q^2_Σ_q^-1) be anisotropic multivariate squared exponential kernels that define a Gaussian RKHS ℋ <cit.>. Then, the Bhattacharyya coefficient between the normalized p(𝐱) and q(𝐱)
is:
ρ(p(𝐱), q(𝐱)) = ∫ (p(𝐱)/∫ p(𝐱) d𝐱)^1/2 (q(𝐱)/∫ q(𝐱) d𝐱)^1/2 d𝐱 = (|Σ_p|^1/4 |Σ_q|^1/4 / |Σ|^1/2) exp(-1/8 ‖μ_p-μ_q‖^2_Σ^-1),
where, μ_p-μ_q^2_Σ^-1 = (μ_p-μ_q)^TΣ^-1(μ_p-μ_q) and Σ = Σ_p+Σ_q/2. The μ_p, μ_q∈ℝ^M and Σ_p, Σ_q∈𝕊_++^M are the mean vectors and the covariance matrices, respectively, and the operation |·| represents the determinant of a matrix. The proof of Remark <ref> is provided in Supplementary material.
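For illustration, Equation <ref> can be evaluated in a few lines of NumPy; the function below is our own sketch of the anisotropic case and is not taken from the released implementation.

import numpy as np

def bhattacharyya_coeff(mu_p, cov_p, mu_q, cov_q):
    # rho between two normalized anisotropic squared-exponential kernels (Eq. <ref>).
    cov = 0.5 * (cov_p + cov_q)
    diff = mu_p - mu_q
    scale = (np.linalg.det(cov_p) ** 0.25 * np.linalg.det(cov_q) ** 0.25
             / np.sqrt(np.linalg.det(cov)))
    maha = diff @ np.linalg.solve(cov, diff)              # (mu_p - mu_q)^T cov^{-1} (mu_p - mu_q)
    return scale * np.exp(-0.125 * maha)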
The Bhattacharyya coefficient, also known as the Hellinger affinity <cit.>, measures the normalized correlation between the square roots of kernels over the entire space. This similarity metric compares p(𝐱) and q(𝐱) by projecting their square roots onto a unit hypersphere and measuring the cosine of the angle between them in the complete inner product space ℋ.
A careful examination of Equation <ref> reveals that the Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) consists of two terms: a scale factor and an exponential component. The scale factor measures overlap by comparing the generalized variances of the kernels, which are determined by the determinant of their covariance matrices. The scale factor converges to one when the covariance matrices of the two kernels are similar, indicating an overlap between them. The generalized variance of a kernel is related to its entropy and power entropy <cit.>, which measure uncertainty and spread. This allows the scale factor to consider differences in information content and orientation, resulting in separability due to covariance dissimilarity. On the other hand, the second term measures the similarity between the means μ_p and μ_q weighted by the precision matrix Σ^-1, providing separability based on positional differences. This exponential component represents the Mahalanobis kernel similarity <cit.> between μ_p and μ_q with respect to Σ^-1. The following corollary will further elucidate the connection of the Bhattacharyya coefficient with the Mahalanobis and Gaussian similarities.
Let p(𝐱) := K_Σ_p(𝐱, μ_p) and q(𝐱) := K_Σ_q(𝐱, μ_q) be multivariate kernels defined in Remark <ref>. The Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) can be reduced to either the Mahalanobis or the RBF kernel similarity, depending on the covariance matrices:
(i) The Mahalanobis kernel similarity, Sim_M(p(𝐱), q(𝐱)), is obtained when the covariance matrices are homoscedastic, i.e., Σ_p = Σ_q = Σ. It has the following closed-form expression:
Sim_M(p(𝐱), q(𝐱)) = ρ(K_Σ(𝐱, μ_p), K_Σ(𝐱, μ_q)) = exp(-1/8 ‖μ_p-μ_q‖^2_Σ^-1).
The described Mahalanobis metric evaluates the similarity between p(𝐱) and q(𝐱) based on their mean difference and relative positions (see Fig. <ref>d).
(ii) The Gaussian kernel similarity, Sim_G(p(𝐱), q(𝐱)), is obtained when the covariance matrices are equal and isotropic, meaning Σ_p = Σ_q = σ^2I. The closed-form expression will be:
Sim_G(p(𝐱), q(𝐱)) = ρ(K_Σ(𝐱, μ_p), K_Σ(𝐱, μ_q)) = exp(-‖μ_p-μ_q‖^2 / (8σ^2)).
In cases where two kernels have similar means but different covariance matrices, the Mahalanobis and Gaussian kernel similarities often exhibit a perfect correlation that may not precisely reflect true similarities (Figs. <ref>a and c). Instead, the Bhattacharyya coefficient evaluates the generalized variances of the kernels and identifies similarities in their orientation, shape, and means (Figs. <ref>a and c). Therefore, it is often a superior metric to the Mahalanobis and the Gaussian kernel similarities.
The process of computing the final value of the closed-form expression between high-dimensional kernels can be time-consuming and resource-intensive. This problem can be alleviated by imposing constraints on the mean vectors and/or the covariance matrices. Following <cit.>, we will cover how specific constraints can be applied to improve computational efficiency in a subsequent corollary.
Let p(𝐱) := K_Σ_p(𝐱, μ_p) and q(𝐱) := K_Σ_q(𝐱, μ_q) be two multivariate kernels as defined in Remark <ref>. The following statements hold:
(i) If the covariance matrices are diagonal, meaning that
Σ_p = diag(σ_p,1^2, ⋯, σ_p,M^2) and Σ_q = diag(σ_q,1^2, ⋯, σ_q,M^2), the Bhattacharyya coefficient between normalized p(𝐱) and q(𝐱) will be
ρ(p(𝐱), q(𝐱)) = (∏_i=1^M ((σ_p,i^2+σ_q,i^2)/(2σ_p,iσ_q,i))^-1/2) exp(-(1/4) ∑_i=1^M (μ_p,i -μ_q,i)^2/(σ_p,i^2+σ_q,i^2)). (Anisotropic)
(ii) If the mean vectors have identical values across all dimensions (μ_p = μ_p1, μ_q = μ_q1, where 1 = [1, ⋯, 1]^T∈ℝ^M is the one vector), and the covariance matrices are diagonal with homogeneous variances (Σ_p = σ_p^2I, Σ_q = σ_q^2I, where I∈𝕊^M_++ is the identity matrix), then the Bhattacharyya coefficient between two normalized isotropic kernels p(𝐱) and q(𝐱) can be calculated as
ρ(p(𝐱), q(𝐱)) = ((σ_p^2+σ_q^2)/(2σ_pσ_q))^-M/2 exp(-(M/4)(μ_p -μ_q)^2/(σ_p^2+σ_q^2)). (Isotropic)
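The isotropic expression, which is the one used later in the KMCL loss, is even cheaper to evaluate; the following sketch assumes scalar means and variances shared across the M feature dimensions.

import numpy as np

def bhattacharyya_isotropic(mu_p, var_p, mu_q, var_q, M):
    # rho between two normalized isotropic kernels (Eq. <ref>).
    scale = ((var_p + var_q) / (2.0 * np.sqrt(var_p * var_q))) ** (-M / 2.0)
    return scale * np.exp(-M * (mu_p - mu_q) ** 2 / (4.0 * (var_p + var_q)))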
§ PROPOSED METHOD
Figure: Overview of KMCL framework. The training pipeline comprises a feature encoder that feeds into the KMM, which outputs the parameters of a mixture model in the Gaussian RKHS ℋ. These parameters then define the objective function that captures label correlation to aid in training the model for the multi-label classification.
The multi-label classification task involves assigning multiple labels to an image 𝐱^n from sample space 𝐗. These labels are typically correlated with each other and represented by a multi-hot binary vector 𝐲^n∈{0,1}^K, where K denotes the number of labels. In this section, we propose an end-to-end multi-label learning framework–dubbed Kernel-based multi-label Contrastive Learning (KMCL), that captures label correlations to improve recognition performance. Given an input batch of data, we first propagate it through the encoder network to obtain the feature embedding. The embedding is then inputted into a novel fully connected layer called the Kernel Mixture Module (KMM), which produces a Gaussian Reproducing Kernel Hilbert Space ℋ. The Gaussian RKHS embedding can handle higher-order statistics of the features and has a complete inner product that enables linear geometry, making it richer than the deterministic feature embedding. Finally, we compute the loss function using the KMM outputs on space ℋ to capture label correlation and train the model for multi-label classification. Figure <ref> provides a visual explanation.
§.§ KMCL Framework
The main components of the KMCL framework are:
Figure: Internal architecture of KMM.
Feature Encoder. The encoder network takes two samples from the input batch separately and generates corresponding feature representation vectors 𝐟 ∈ ℝ^M. The dimension of the feature vector depends on the encoder type.
KMM.
Most feature encoders produce deterministic results that do not quantify or control uncertainty, leading to low confidence in robust multi-label classification tasks and errors in interpreting the output predictions. Uncertainty in deep learning arises from two sources: epistemic uncertainty (model uncertainty), resulting from uncertainty in model parameters, and aleatoric uncertainty (data uncertainty), which stems from the inherent noise in data and label ambiguity. In this study, we propose the Kernel Mixture Module (KMM) to estimate epistemic uncertainty in predictions. The KMM takes the feature vector 𝐟 from the encoder network and generates a mixture of exponential kernels within the Hilbert space, each corresponding to a specific class in an image. Specifically, the fully connected layer in the KMM utilizes learnable weights and biases to produce three outputs for each unimodal exponential kernel component: the mixture coefficient π_k, mean vector μ_k, and covariance matrix Σ_k (Fig. <ref>). The parameters π_k, μ_k, and Σ_k quantify the existence, relative spatial positioning, and relative statistical complexities (measures of spread and uncertainty) of the kth class membership. These parameters are then used to model the label representation of a given sample 𝐱^n associated with a class vector 𝐲^n using the following expression:
𝒢_𝒮(𝐟^n) := ∑_k ∈𝒮π_k^n g_k(𝐟^n) =
∑_k ∈𝒮π_k^nexp(-‖𝐟^n - μ_k^n1‖^2/2(σ_k^n)^2),
where, 𝒮 = {k: y_k^n = 1} and 𝐟^n is the extracted feature vector of the input sample. The component g_k(𝐟^n) := K_Σ_k^n(𝐟^n, μ_k^n) is an isotropic exponential kernel where μ_k^n = μ_k^n1, Σ_k^n = (σ^n_k)^2I, and π_k^n∈ [0, 1]. These adaptive parameters i.e., θ_k^n = [μ_k^n, (σ^n_k)^2, π_k^n] are calculated through forward propagation, using suitable activation functions to ensure that the parameters adhere to their constraints. The sigmoid activation function is used to normalize the mixture coefficient for efficient multi-label classification, accurately predicting the likelihood of multiple labels. The modified version of the exponential linear unit (ELU) <cit.> is also used as an activation function for variances, ensuring their semi-positivity. The detailed architecture of KMM can be found in Fig. <ref> and Supplementary material.
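A minimal PyTorch sketch of the KMM head is given below; the single fully connected layer and the exact activation offsets are assumptions made for illustration and may differ from the released implementation, which should be consulted for details.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelMixtureModule(nn.Module):
    # Emits (pi_k, mu_k, sigma_k^2) per class for the isotropic case.
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 3 * num_classes)
        self.num_classes = num_classes

    def forward(self, feats):                         # feats: (B, M)
        raw = self.fc(feats).view(-1, self.num_classes, 3)
        pi = torch.sigmoid(raw[..., 0])               # mixture coefficients in [0, 1]
        mu = raw[..., 1]                              # scalar mean per class (broadcast over features)
        var = F.elu(raw[..., 2]) + 1.0 + 1e-6         # modified ELU keeps variances positive
        return pi, mu, var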
§.§ Multi-label Learning with KMCL
Building upon the KMCL framework, we aim to provide insights into the learning process of multi-label tasks. To achieve this, we introduce the details of our objective function, which comprises three components: reconstruction loss, classification loss, and contrastive loss. Throughout this paper, we use N and K to denote the mini-batch size and the total number of classes, respectively.
Reconstruction Loss.
Figure: Relative frequency histograms of class distributions in four datasets show that most images have 2, 2, 4, and 1 labels in Pascal-VOC, MS-COCO, ADP, and ChestX-ray14, respectively.
It is straightforward to compute the mixture model defined in Equation <ref> using the KMM output parameters, which provide 3K values for each input sample. Following this calculation, the model can be used to learn label-level representations in the Hilbert space ℋ by minimizing its negative log-likelihood. Therefore, we propose to optimize the following reconstruction loss over the data batch to train the mixture model
ℒ_REC = 1/N∑_n=1^N-log𝒢_𝒮(𝐟^n)/𝒢_𝒴(𝐟^n),
where, 𝒢_𝒴(𝐟^n) := ∑_k∈𝒴={1, ⋯, K}π_kg_k and 𝒢_𝒮(𝐟^n) denotes the kernel mixture associated with image 𝐱^n defined in Equation <ref>. The log-ratio term in Equation <ref> is always negative i.e. 𝒢_𝒮(𝐟^n)≤𝒢_𝒴(𝐟^n), where the loss is led by the supervised labels for reconstruction. We propose this as an alternative choice for reconstruction loss, which is commonly used in the literature <cit.>. Our new loss function ℒ_REC exhibits robust behavior without relying on numerical tricks for stabilization.
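With the same isotropic parameterization, ℒ_REC can be sketched as follows (an illustration rather than the released code; the small constants are only for numerical safety).

import torch

def kernel_values(feats, mu, var):
    # g_k(f) = exp(-||f - mu_k 1||^2 / (2 sigma_k^2)); feats: (B, M), mu and var: (B, K).
    sq = ((feats.unsqueeze(1) - mu.unsqueeze(-1)) ** 2).sum(dim=-1)    # (B, K)
    return torch.exp(-sq / (2.0 * var))

def reconstruction_loss(feats, pi, mu, var, targets):
    # L_REC = -mean log( sum_{k in S} pi_k g_k / sum_{k in Y} pi_k g_k ); targets: multi-hot (B, K).
    g = kernel_values(feats, mu, var)
    num = (targets * pi * g).sum(dim=1) + 1e-12
    den = (pi * g).sum(dim=1) + 1e-12
    return -torch.log(num / den).mean()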
Classification Loss.
The analysis in Figure <ref> reveals that despite varying statistical and conceptual properties across datasets, most images have only a fraction of labels, causing a significant imbalance between positive and negative samples. This imbalance can lead to poor training accuracy as gradients from positive labels may be underemphasized. To mitigate this issue, we use ASL <cit.> as a classification loss function that adjusts the contributions of positive and negative samples by down-weighting easy negative samples and focusing on the hard ones. Therefore, given the predictive mixture of coefficients π^n from KMM and the ground-truth multi-hot label vector 𝐲^n, the classification loss for a batch is obtained as
ℒ_ASL = 1/N∑_n=1^N∑_k=1^K -y_k^n(L_k^n)_+-(1-y_k^n)(L_k^n)_-,
where, (L_k^n)_+ = (1-π_k^n)^γ_+log (π_k^n), and (L_k^n)_- = (max(π_k^n-m, 0))^γ_-log (1-max(π_k^n-m, 0)) represent the positive and negative loss parts, respectively, such that γ_+, γ_-, and m are the hyper-parameters used to balance the loss. For additional information on ℒ_ASL, please refer to <cit.>.
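The ASL term can be sketched in the same style; the default values of γ_+, γ_-, and the margin m below are illustrative placeholders and not necessarily the ones used in our experiments.

import torch

def asymmetric_loss(pi, targets, gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    # Down-weights easy negatives via the shifted probability max(pi - m, 0).
    pi_neg = (pi - margin).clamp(min=0.0)
    loss_pos = targets * (1.0 - pi) ** gamma_pos * torch.log(pi.clamp(min=1e-8))
    loss_neg = (1.0 - targets) * pi_neg ** gamma_neg * torch.log((1.0 - pi_neg).clamp(min=1e-8))
    return -(loss_pos + loss_neg).sum(dim=1).mean()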
Kernel-based Contrastive Loss.
The ASL loss function classifies labels independently, making it difficult to capture correlations between co-occurring semantic labels. Moreover, it fails to account for uncertainty in predictions, which can undermine decision-making confidence. To address these limitations, we propose a new loss function, ℒ_KMCL, which incorporates label correlation and epistemic uncertainty into supervised contrastive learning to improve representation.
The objective of kernel-based multi-label contrastive loss ℒ_KMCL is to pull together the kernel representations of positive images that have shared classes with the anchor image 𝐱^n in the embedding space ℋ, while pushing apart negative samples that do not share any classes. This approach differs from deterministic supervised contrastive losses <cit.> as ℒ_KMCL constructs the positive and negative pairs using similarity measures that consider the uncertainty of kernel representations. The similarity is measured by a Bhattacharyya coefficient discussed in Corollary <ref> (isotropic), which determines the overlap between these exponential kernels and their confidence in proximity. Essentially, the kernel-based contrastive loss optimizes the similarity of frequently co-occurring labels and captures their statistical dependencies, making it a valuable complement to ASL. The contrastive loss is defined for the entire minibatch as follows:
ℒ_KMCL = 1/N∑_n=1^N -1/|𝒜(n)| ∑_m∈𝒜(n)J(n, m) (∑_k∈𝒦(n, m) logexp(ρ_k,k^n,m/τ)/∑_i∈{N\n}exp(ρ_k,k^n,i/τ)),
where ρ_k,l^n,m := ρ(g_k(𝐟^n), g_l(𝐟^m)) indicates the Bhattacharyya coefficient between the normalized exponential kernels g_k(𝐟^n) and g_l(𝐟^m) (see Corollary <ref>) and τ is the temperature parameter. The positive set 𝒜(n) = {m ∈ {N \ n}: 𝐲^n·𝐲^m ≠ 0}, where · denotes the dot product, includes samples that share at least one label with the anchor image 𝐱^n, while 𝒦(n,m) = {k ∈ 𝒴: y_m^k = y_n^k = 1} represents the indices of shared labels between 𝐱^n and 𝐱^m. The Jaccard index J(n, m) = 𝐲^n·𝐲^m / (‖𝐲^n‖^2 + ‖𝐲^m‖^2 - 𝐲^n·𝐲^m) serves as a weighting factor for positive samples based on the number of shared labels with the anchor. It measures the intersection over union (IoU) of the label vectors of the anchor and the positive image, taking object co-occurrences into account. In this way, ℒ_KMCL prioritizes positive samples with a high Jaccard index for a given anchor while downplaying samples with few shared labels.
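A direct, loop-based sketch of ℒ_KMCL is shown below. It assumes the coefficients ρ_k,k^n,m have already been precomputed (e.g., with the isotropic expression of Corollary <ref>) and stored in a tensor rho of shape (N, N, K); the temperature value is a placeholder, and targets is the float multi-hot label matrix of shape (N, K).

import torch

def kmcl_contrastive_loss(rho, targets, tau=0.1):
    N, K = targets.shape
    total = 0.0
    for n in range(N):
        others = [m for m in range(N) if m != n]
        positives = [m for m in others if (targets[n] * targets[m]).sum() > 0]
        if not positives:
            continue
        acc = 0.0
        for m in positives:
            inter = (targets[n] * targets[m]).sum()
            union = targets[n].sum() + targets[m].sum() - inter
            jac = inter / union                                     # Jaccard weighting J(n, m)
            shared = (targets[n] * targets[m]).nonzero().flatten()  # indices in K(n, m)
            denom = torch.stack([torch.exp(rho[n, i, shared] / tau) for i in others]).sum(dim=0)
            acc = acc + jac * torch.log(torch.exp(rho[n, m, shared] / tau) / denom).sum()
        total = total - acc / len(positives)
    return total / N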
Figure: (a) Training loss across training epochs, showing the normalized total loss ℒ and the normalized sub-losses; (b) training accuracy of the KMCL pipeline across training epochs.
Objective Function. The overall training loss of the KMCL is the augmented Lagrangian of the three aforementioned losses, which can be expressed as:
ℒ = ℒ_REC + λ_1 ℒ_ASL + λ_2 ℒ_KMCL,
where λ_1 and λ_2 are the Lagrangian multipliers used to balance the gradients of ℒ_ASL and ℒ_KMCL, respectively. We use an end-to-end pipeline to incorporate contrastive learning into supervised classification, which simultaneously trains the feature encoder and classification parts. This approach is different from previous methods that use contrastive losses <cit.>. In those methods, the encoder is trained with a contrastive loss and then frozen before being transferred to the classifier for tuning. Instead, the KMCL framework combines these training regimes into one formulation, enabling us to learn multi-label classification and label correlations with data-driven techniques.
§.§ KMCL Algorithm
The pseudo-code of the proposed KMCL framework is outlined in Algorithm <ref>, which takes a set of batches and a specified number of epochs as inputs. The pair of anchor images and their positive set are fed through the network depicted in Figure <ref> to obtain the feature vectors and parameters of the corresponding kernel mixtures (lines <ref>-<ref>). The overall loss is then computed as an augmented Lagrangian of the ℒ_REC, ℒ_ASL, and ℒ_KMCL using the KMM parameters (lines <ref>-<ref>). Finally, the objective function is back-propagated through the KMM and the feature encoder for each iteration to update the weights based on the gradients associated with the subsequent forward pass (line <ref>). This iterative process continues until convergence is reached.
Figures <ref> (a) and (b) demonstrate the results of implementing the KMCL framework with TResNet-L <cit.> as the encoder network on the Pascal-VOC dataset <cit.>. Fig. <ref> (a) displays the objective loss behavior along with the evolution of the three loss terms for the training and test sets, whereas the mean average precision (mAP) accuracy is presented in Fig. <ref> (b). The losses decrease with different multiplicative factors due to the tuned Lagrangian multipliers. The convergence speed of the method on multi-label tasks is impressive, reaching 96.2% mAP accuracy in fewer than 30 epochs.
§ EXPERIMENTS
In this section, we present the experimental setup and demonstrate the superior performance of KMCL in both general computer vision and medical imaging domains. To ensure robust feature extraction, we utilized TResNet-M and TResNet-L <cit.>, state-of-the-art architectures designed for different image resolutions (224 and 448, respectively). The features are then passed through the KMM to obtain the mixture parameters π, μ, and Σ. Additional information regarding the encoders, KMM, datasets, evaluation metrics, and training details can be found in Supplementary material.
Datasets. We evaluate the KMCL's performance on popular computer vision datasets, PASCAL-VOC <cit.> and MS-COCO <cit.>, as well as on medical datasets, ADP <cit.> and ChestX-ray14 <cit.>.
Evaluation Metrics.
Following SOTA <cit.>, we report the standard metrics of mean average precision (mAP), average overall precision (OP), recall (OR), and F1 score (OF1) in addition to per-class precision (CP), recall (CR), and F1 score (CF1). We considered the number of parameters (M) and GMAC as measures of computational costs. Finally, for the ChestX-ray14 dataset <cit.>, we reported per-class AUC scores to assess model discriminability for specific classes.
Training Details.
We implemented the KMCL framework using PyTorch, following Alg. <ref>. The backbone feature encoders were initialized with pre-trained architectures, while the mixture parameters were initialized by applying a uniform distribution to π and μ and setting Σ to a constant value of 1. In all experiments, we assign fixed values of 0.1 and 0.3 to λ_1 and λ_2 respectively, as specified in Eq. <ref>. The Adam optimizer <cit.> was used with an initial learning rate of 2e-4, and the OneCycleLR scheduler <cit.> for 40 epochs. Standard augmentations from RandAugment policy <cit.> were applied to the training data. Experiments were conducted on four NVIDIA GeForce RTX 2080Ti GPUs.
How does KMCL compare to SOTA methods on computer vision datasets? We evaluate KMCL with SOTA methods on computer vision datasets in Table <ref> and Fig. <ref>. KMCL outperforms the best competitors on PascalVOC and MS-COCO, achieving superior performance with a margin of 0.4% and 0.2% in mAP score, respectively. In particular, KMCL excels in challenging classes on PascalVOC, such as the sofa and bus classes, with an improvement of over 3.0%. On MS-COCO, KMCL demonstrates significant improvements across multiple metrics, including mAP, OF1, and CF1. Using the TResNet-M encoder at resolution 224, we achieve state-of-the-art results with a 5.0% increase in mAP compared to the best method. Similarly, with TResNet-L at a resolution of 448, KMCL surpasses other methods in overall and per-class metrics. These achievements are attained by integrating the proposed contrastive learning with ASL classification loss, to capture label correlation and enhance prediction accuracy. This is illustrated through the Top3-metrics on MS-COCO, where our 3 classes are better selected by considering label correlation when ranking the predictions.
How well does KMCL generalize to medical imaging datasets?
Table: Comparisons with state-of-the-art methods on the ADP dataset.

Method                      | mAP   OP    OR    OF1   CP    CR    CF1  | Params (MM)  GMAC
ML-GCN (Binary) <cit.>      | 94.9  92.0  86.9  89.7  91.8  87.0  89.3 | 44.90        31.39
ASL (TResNet-L) <cit.>      | 96.1  92.1  90.7  91.4  92.5  89.2  90.8 | 44.14        35.28
TDRG <cit.>                 | 95.5  94.3  86.2  90.5  94.6  84.8  89.4 | 75.20        64.40
CSRA <cit.>                 | 96.1  93.0  89.7  91.7  93.1  88.6  90.8 | 42.52        31.39
KMCL (TResNet-M)            | 95.1  94.2  91.0  90.4  94.7  88.9  89.8 | 29.41         5.74
KMCL (TResNet-L)            | 96.5  92.7  92.9  92.8  92.6  92.0  92.3 | 44.20        35.28
We evaluate KMCL against SOTA methods on medical imaging datasets presented in Tables <ref> and<ref>. The recall is a crucial factor in these datasets, as it reflects the likelihood of missing a medical diagnosis. The proposed method achieves a superior tradeoff between precision and recall by significantly improving recall metrics while maintaining competitive precision scores, including SOTA mAP. On the ADP dataset, KMCL outperforms the surveyed SOTA with margins of 0.4%, 2.2%, and 2.8% for mAP, OR, and CR, respectively. Similarly, on the ChestX-ray14 dataset, both TResNet-M and TResNet-L models exhibit significant improvements, with our best model surpassing SOTA results by 5.2%, 7.0%, and 11.6% in mAP, OR, and CR, respectively. In comparison, competing methods such as ML-GCN <cit.> use label correlation but suffer from increased computational complexity and a multi-stage approach, as shown in Table <ref>. However, our method surpasses the SOTA while maintaining a small model size and low GMAC scores. These findings highlight the advantage of KMCL in computationally constrained environments.
How does KMCL's performance vary with different similarity measures? In this ablation study, we examine the impact of changing the Bhattacharyya coefficient to either the Mahalanobis kernel similarity or the Gaussian kernel similarity in the KMCL framework (Corollary <ref> (i) and (ii)). Under the Mahalanobis kernel similarity, the performance decreases across PascalVOC and ADP, as indicated in Table <ref>. This is likely due to the constraint that the variance must be identical across all classes, leading to an inability to capture entropy and uncertainty as reported in Section <ref>.
Table: Ablative comparison for similarity measures and kernel representation cases.

Similarity Metric   Case         | ADP: mAP  OP    OR    OF1   CP    CR    CF1  | VOC: mAP | Params (MM)  GMAC
Bhattacharyya       Anisotropic  |     95.4  94.0  92.7  90.6  94.8  90.7  90.5 |     95.4 | 104.91       5.81
Bhattacharyya       Isotropic    |     95.1  94.2  91.0  90.4  94.7  88.9  89.8 |     95.2 |  29.41       5.74
Mahalanobis         -            |     94.7  92.0  92.4  90.9  92.6  90.5  90.4 |     95.1 |  71.34       5.78
Gaussian Kernel     -            |     94.5  91.5  89.7  90.6  92.3  86.5  89.3 |     95.0 |  29.40       5.74
Similarly, when utilizing the Gaussian kernel similarity, the performance further deteriorates because the model is constrained to learn a single variance value that applies to both the label classes and the feature dimensions. Therefore, it is more meaningful to use the Bhattacharyya coefficient since it evaluates the generalized variances of the kernels and identifies similarities in their orientation, shape, and means (Eq. <ref>). We further investigate the assumptions of both the isotropic and anisotropic cases of the exponential kernel representations in the KMCL framework as discussed in Corollary <ref>. The anisotropic case leads to improved performance, as shown in Table <ref>, but results in an increase in learnable parameters at the cost of higher computational complexity. By incorporating variances over the feature dimensions, we better capture epistemic uncertainty and achieve enhanced overall results. Thus, if computational resources are available, one can best leverage our framework in the anisotropic case to achieve SOTA results.
Figure: Reduced t-SNEs for ASL (left) and KMCL (center) on PascalVOC, color-coded by user-defined super-classes in the legend; (right) ground-truth correlation matrix for PascalVOC.
Intuitive Visualizations.
KMCL presents an end-to-end framework for contrastive learning that has achieved quantitatively significant results compared to existing methods. In this section, we visualize how the learned feature representation incorporates label correlation and epistemic uncertainty. Figure <ref> shows a reduced t-SNE <cit.> visualization of the feature representation for ASL and KMCL on the Pascal VOC dataset. Both methods accurately discriminate between different classes, as seen from the plotted centroids of each cluster. Notably, both methods exhibit a clustering pattern based on user-defined super-classes (e.g., car and bus are both forms of Transportation). Upon analyzing the ground truth correlation matrix, it becomes apparent that KMCL captures label correlation more effectively. Specifically, the sofa class exhibits the highest correlation with the chair class, resulting in their closer proximity in the t-SNE visualization for KMCL compared to ASL.
Figure: GradCam visualization of KMCL and a competing SOTA method. Bolded class labels indicate instances where KMCL outperforms the SOTA by a large margin.
Figure <ref> showcases the GradCam visualization for KMCL and a competing SOTA method. KMCL effectively distinguishes the sofa and chair classes, consistent with the t-SNE visualization results. Moreover, by capturing epistemic uncertainty from the kernel representation, our method accurately identifies the correct classes in the ADP sample with minimal extraneous activations. For more visualizations, please refer to the Supplementary material.
§ BROADER IMPACT
KMCL provides an end-to-end supervised contrastive learning framework for multilabel datasets. It requires fewer resources for the design and implementation of downstream tasks such as classification. Contrastive learning methods like <cit.> typically involve two stages of encoder training and fine-tuning for the task, which can take several hundred epochs. In contrast, KMCL only requires one stage of training with significantly fewer epochs. This translates into a much smaller carbon emission footprint, as highlighted in <cit.> for using more compact models for training. Although KMCL has been successfully applied in computer vision and medical imaging domains, its effectiveness has not yet been tested for segmentation/detection tasks or in other modalities like natural language processing. In future work, we will consider broadening our experiments for further validation. Additionally, we believe that society can benefit from the theoretical analysis of the similarity metrics presented in this paper, which can be adapted to different application domains.
§ ACKNOWLEDGMENT
The authors would like to thank Rahavi Selvarajan, Xiao Hu and Jiarui Zhang for their assistance and helpful discussions.
§ APPENDIX
§.§ Proof of Remark 1.
The Bhattacharyya coefficient between the normalized kernels $p(\mathbf{x}) := K_{\Sigma_p}(\mathbf{x}, \mu_p) = \exp\!\left(-\tfrac{1}{2}\|\mathbf{x} - \mu_p\|^2_{\Sigma_p^{-1}}\right)$ and $q(\mathbf{x}) := K_{\Sigma_q}(\mathbf{x}, \mu_q) = \exp\!\left(-\tfrac{1}{2}\|\mathbf{x} - \mu_q\|^2_{\Sigma_q^{-1}}\right)$ is defined as
$$\rho\big(p(\mathbf{x}), q(\mathbf{x})\big) = \int_{\mathcal{X}} \left(\frac{p(\mathbf{x})}{\int_{\mathcal{X}} p(\mathbf{x})\, d\mathbf{x}}\right)^{1/2} \left(\frac{q(\mathbf{x})}{\int_{\mathcal{X}} q(\mathbf{x})\, d\mathbf{x}}\right)^{1/2} d\mathbf{x} = \frac{\int_{\mathcal{X}} p(\mathbf{x})^{1/2}\, q(\mathbf{x})^{1/2}\, d\mathbf{x}}{\sqrt{\int_{\mathcal{X}} p(\mathbf{x})\, d\mathbf{x}}\;\sqrt{\int_{\mathcal{X}} q(\mathbf{x})\, d\mathbf{x}}}.$$
To begin, we expand the integrand of the numerator, i.e., $\sqrt{p(\mathbf{x})q(\mathbf{x})}$, as follows:
$$\exp\!\left(-\tfrac{1}{4}\mathbf{x}^T(\Sigma_p^{-1}+\Sigma_q^{-1})\mathbf{x}+\tfrac{1}{2}(\Sigma_p^{-1}\mu_p+\Sigma_q^{-1}\mu_q)^T\mathbf{x} -\tfrac{1}{4}\left(\mu_p^T\Sigma_p^{-1}\mu_p + \mu_q^T\Sigma_q^{-1}\mu_q \right)\right).$$
In order to overcome the challenge of integrating the derived integrand in Equation <ref>, we introduce a new approach: we represent $\sqrt{p(\mathbf{x})q(\mathbf{x})}$ as the product of a constant, denoted $h(\mu_p, \mu_q, \Sigma_p, \Sigma_q)$, and a newly defined anisotropic multivariate squared exponential kernel, denoted $r(\mathbf{x}):= K_{\Sigma_r}(\mathbf{x}, \mu_r)$. This representation can be expressed formally as follows:
√(p(𝐱)q(𝐱)) = h(μ_p, μ_q, Σ_p, Σ_q)r(𝐱).
We define the new exponential kernel of Equation <ref> as
$$r(\mathbf{x}) := K_{\Sigma_r}(\mathbf{x}, \mu_r) = \exp\!\left(-\tfrac{1}{2}\|\mathbf{x} - \mu_r\|^2_{\Sigma_r^{-1}}\right) = \exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\mu_r)^T\Sigma_r^{-1}(\mathbf{x}-\mu_r)\right),$$
where $\Sigma_r\triangleq\left(\tfrac{1}{2}\Sigma_p^{-1}+\tfrac{1}{2}\Sigma_q^{-1}\right)^{-1}$ and $\mu_r\triangleq\Sigma_r\left(\tfrac{1}{2}\Sigma_p^{-1}\mu_p+\tfrac{1}{2}\Sigma_q^{-1}\mu_q\right)$. Once the values of $\Sigma_r$ and $\mu_r$ are substituted into Equation <ref>, the kernel $r(\mathbf{x})$ becomes
$$r(\mathbf{x}) = \exp\!\Big(-\tfrac{1}{4}\mathbf{x}^T(\Sigma_p^{-1}+\Sigma_q^{-1})\mathbf{x} + \tfrac{1}{2}(\Sigma_p^{-1}\mu_p+\Sigma_q^{-1}\mu_q)^T\mathbf{x} -\tfrac{1}{4}(\Sigma_p^{-1}\mu_p+\Sigma_q^{-1}\mu_q)^T(\Sigma_p^{-1}+\Sigma_q^{-1})^{-1}(\Sigma_p^{-1}\mu_p+\Sigma_q^{-1}\mu_q)\Big).$$
By substituting Equations <ref> and <ref> into Equation <ref>, we obtain the closed-form expression of h(μ_p, μ_q, Σ_p, Σ_q) as presented below.
$$\exp\!\Bigg(-\tfrac{1}{4}\Big(
\mu_p^T\big(\Sigma_p^{-1}-\Sigma_p^{-1}(\Sigma_p^{-1}+\Sigma_q^{-1})^{-1}\Sigma_p^{-1}\big)\mu_p
+\mu_q^T\big(\Sigma_q^{-1}-\Sigma_q^{-1}(\Sigma_p^{-1}+\Sigma_q^{-1})^{-1}\Sigma_q^{-1}\big)\mu_q
-\mu_p^T\big(\Sigma_p^{-1}(\Sigma_p^{-1}+\Sigma_q^{-1})^{-1}\Sigma_q^{-1}\big)\mu_q
-\mu_q^T\big(\Sigma_q^{-1}(\Sigma_p^{-1}+\Sigma_q^{-1})^{-1}\Sigma_p^{-1}\big)\mu_p
\Big)\Bigg)
Given the fact that Σ_p^-1-Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1 = Σ_q^-1-Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1 = Σ_p^-1(Σ_p^-1+Σ_q^-1)^-1Σ_q^-1 = Σ_q^-1(Σ_p^-1+Σ_q^-1)^-1Σ_p^-1 = (Σ_p+Σ_q)^-1 <cit.>, we can simplify Equation <ref> and derive
$$\exp\!\left(-\tfrac{1}{4}\Big(\mu_p^T(\Sigma_p+\Sigma_q)^{-1}\mu_p+\mu_q^T(\Sigma_p+\Sigma_q)^{-1}\mu_q-\mu_p^T(\Sigma_p+\Sigma_q)^{-1}\mu_q-\mu_q^T(\Sigma_p+\Sigma_q)^{-1}\mu_p\Big)\right),$$
which can be further simplified to yield the following expression:
$$h(\mu_p, \mu_q, \Sigma_p, \Sigma_q) = \exp\!\left(-\tfrac{1}{8}(\mu_p-\mu_q)^T\Sigma^{-1}(\mu_p-\mu_q)\right),$$
where $\Sigma = \frac{\Sigma_p+\Sigma_q}{2}$. Ultimately, by utilizing the definition of the Bhattacharyya coefficient, Equation <ref>, and Equation <ref>, we can deduce the following conclusion:
$$\rho\big(p(\mathbf{x}), q(\mathbf{x})\big) = \frac{\int_{\mathbb{R}^M} p(\mathbf{x})^{1/2}\, q(\mathbf{x})^{1/2}\, d\mathbf{x}}{\sqrt{\int_{\mathbb{R}^M} p(\mathbf{x})\, d\mathbf{x}}\;\sqrt{\int_{\mathbb{R}^M} q(\mathbf{x})\, d\mathbf{x}}}
= \frac{h(\mu_p, \mu_q, \Sigma_p, \Sigma_q)\int_{\mathbb{R}^M}|2\pi\Sigma_r|^{1/2}\,\mathcal{N}(\mathbf{x};\mu_r, \Sigma_r)\, d\mathbf{x}}{\sqrt{\int_{\mathbb{R}^M}|2\pi\Sigma_p|^{1/2}\,\mathcal{N}(\mathbf{x};\mu_p, \Sigma_p)\, d\mathbf{x}}\;\sqrt{\int_{\mathbb{R}^M}|2\pi\Sigma_q|^{1/2}\,\mathcal{N}(\mathbf{x};\mu_q, \Sigma_q)\, d\mathbf{x}}}$$
$$= \frac{|\Sigma_r|^{1/2}}{|\Sigma_p|^{1/4}|\Sigma_q|^{1/4}}\, h(\mu_p, \mu_q, \Sigma_p, \Sigma_q)
= \frac{|2\Sigma_p(\Sigma_p+\Sigma_q)^{-1}\Sigma_q|^{1/2}}{|\Sigma_p|^{1/4}|\Sigma_q|^{1/4}}\, h(\mu_p, \mu_q, \Sigma_p, \Sigma_q)
\stackrel{(a)}{=} \frac{|\Sigma_p|^{1/4}|\Sigma_q|^{1/4}}{|\Sigma|^{1/2}}\exp\!\left(-\tfrac{1}{8}(\mu_p-\mu_q)^T\Sigma^{-1}(\mu_p-\mu_q)\right),$$
where $\Sigma = \frac{\Sigma_p+\Sigma_q}{2}$, and step (a) follows from the property that the total area underneath a probability density function is 1. The notation $\mathcal{N}(\mathbf{x};\mu, \Sigma)$ represents a multivariate Gaussian probability distribution in M dimensions, characterized by a mean vector $\mu$ and a covariance matrix $\Sigma$. This completes the proof of Remark 1.
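As an informal numerical sanity check (not part of the original derivation), the closed form above can be compared against a brute-force integration of the Bhattacharyya coefficient. The NumPy sketch below is ours; the two-dimensional diagonal covariances and all parameter values are illustrative assumptions.

```python
import numpy as np

def bhattacharyya_coefficient(mu_p, cov_p, mu_q, cov_q):
    # Closed form derived above: |S_p|^(1/4)|S_q|^(1/4)/|S|^(1/2) *
    # exp(-1/8 (mu_p - mu_q)^T S^{-1} (mu_p - mu_q)), with S = (S_p + S_q)/2.
    mu_p, mu_q = np.asarray(mu_p, float), np.asarray(mu_q, float)
    cov_p, cov_q = np.asarray(cov_p, float), np.asarray(cov_q, float)
    cov = 0.5 * (cov_p + cov_q)
    diff = mu_p - mu_q
    maha = diff @ np.linalg.solve(cov, diff)
    pref = (np.linalg.det(cov_p) * np.linalg.det(cov_q)) ** 0.25 / np.sqrt(np.linalg.det(cov))
    return pref * np.exp(-0.125 * maha)

def numerical_check_2d(mu_p, cov_p, mu_q, cov_q, lim=12.0, n=1201):
    # Brute-force integral of sqrt(p_hat * q_hat) over a 2-D grid,
    # where p_hat, q_hat are the normalized kernels (Gaussian densities).
    xs = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    def density(mu, cov):
        inv = np.linalg.inv(cov)
        d = pts - np.asarray(mu, float)
        quad = np.einsum("ni,ij,nj->n", d, inv, d)
        return np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    integrand = np.sqrt(density(mu_p, cov_p) * density(mu_q, cov_q))
    dx = xs[1] - xs[0]
    return float(integrand.sum() * dx * dx)

mu_p, cov_p = [0.0, 0.0], np.diag([1.0, 2.0])
mu_q, cov_q = [1.0, -0.5], np.diag([0.5, 1.5])
print(bhattacharyya_coefficient(mu_p, cov_p, mu_q, cov_q))  # closed form
print(numerical_check_2d(mu_p, cov_p, mu_q, cov_q))         # grid integral, should agree
```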
§.§ Forward Propagation in KMM.
The KMM (Kernel Mixture Module) takes the feature vector 𝐟^n∈ℝ^M produced by the encoder network as input and outputs the parameters of each exponential kernel component in the kernel mixture model. This transformation maps the feature vector to 3K values, three for each of the K kernel components (existing classes): μ_k^n∈ℝ, σ_k^n∈ℝ^+, and π_k^n∈ [0, 1]. The adaptive parameters are computed through forward propagation, employing suitable activation functions to ensure that the parameters satisfy their respective constraints. The activations corresponding to the parameters of the kth component of the KMM, ((a_k^μ)^n, (a_k^σ^2)^n, (a_k^π)^n), are used to accomplish this, and they are calculated through the forward propagation of a fully connected layer by
(a_k^μ)^n = 𝐰_k^μ𝐟^n + b^μ_k, (a_k^σ^2)^n = 𝐰_k^σ^2𝐟^n + b^σ^2_k, (a_k^π)^n = 𝐰_k^π𝐟^n + b^π_k,
where {𝐰_k^μ, 𝐰_k^σ^2, 𝐰_k^π}∈ℝ^M are the weights and {b^μ_k, b^σ^2_k, b^π_k}∈ℝ are the biases associated with {(a_k^μ)^n, (a_k^σ^2)^n, (a_k^π)^n}, respectively. We make a minor revision to the nonlinear activations used in <cit.> by replacing the softmax with a sigmoid to normalize the mixture coefficients, which addresses the multilabel setting. In the following, we define the nonlinear and linear transformations applied to (a_k^μ)^n, (a_k^σ^2)^n, (a_k^π)^n:
$$\pi_k^n = \frac{1}{1+\exp\left(-(a_k^\pi)^n\right)},$$
$$\mu_k^n = (a_k^\mu)^n, \qquad (\sigma_k^n)^2 = \mathrm{ELU}\!\left((a_k^{\sigma^2})^n\right)+2+\epsilon,$$
where ELU(·) and ϵ are the exponential linear unit function <cit.> and the hyperparameter used to ensure training stability, respectively. We use a modified ELU function rather than the exponential function as the activation on (a_k^σ^2)^n in order to ensure that variances remain non-negative ((σ_k^n)^2≥ 0). This modification is necessary because the vanilla exponential function exhibits rapid growth for larger values, which can lead to training instability, particularly when dealing with high-variance datasets. It is important to note that there is no constraint on the mean μ_k^n, as it is obtained directly from the activation (a_k^μ)^n.
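A minimal PyTorch sketch of this forward pass is given below for illustration; the module name, the use of three separate linear heads, and the example shapes are our assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelMixtureModule(nn.Module):
    """Sketch of the KMM head: maps a feature vector f^n (dim M) to the
    3K kernel-mixture parameters (pi_k, mu_k, sigma_k^2) described above."""

    def __init__(self, feature_dim: int, num_classes: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.fc_pi = nn.Linear(feature_dim, num_classes)      # (a_k^pi)^n
        self.fc_mu = nn.Linear(feature_dim, num_classes)      # (a_k^mu)^n
        self.fc_sigma2 = nn.Linear(feature_dim, num_classes)  # (a_k^sigma^2)^n

    def forward(self, f: torch.Tensor):
        # Mixture coefficients: sigmoid (not softmax) so labels stay independent.
        pi = torch.sigmoid(self.fc_pi(f))
        # Means are taken directly from the linear activation (no constraint).
        mu = self.fc_mu(f)
        # Modified ELU keeps the variance strictly positive and avoids the
        # blow-up of a plain exponential; the constant offset follows the text.
        sigma2 = F.elu(self.fc_sigma2(f)) + 2.0 + self.eps
        return pi, mu, sigma2

# Example: batch of 4 feature vectors, M = 512, K = 20 classes.
kmm = KernelMixtureModule(feature_dim=512, num_classes=20)
pi, mu, sigma2 = kmm(torch.randn(4, 512))
```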
§ DATASETS
PASCAL-VOC
The PASCAL Visual Object Classes Challenge (2007) <cit.> is a common computer vision dataset used in multi-label classification. It contains a total of 9963 images over 20 classes, including 'cat', 'bottle', and 'person'. Consistent with the state of the art, we trained our architecture on the trainval set and evaluated it on the test set, which contain a total of 5011 and 4952 images, respectively. Referring to the relative frequency plot in the main paper, the ratio of the number of classes per image to the total number of classes is heavily unbalanced, with the majority of images containing only 2-4 classes.
MS-COCO
The Microsoft COCO dataset <cit.> is another common computer vision dataset used in multi-label classification. This dataset includes 82,081 training and 40,504 validation images across 80 different classes including 'person', 'bicycle', and 'elephant'. Following the state of the art, we test our method on the validation dataset making it comparable with competitive approaches.
ADP
The Atlas of Digital Pathology for Histological Tissue Type Classification <cit.> is composed of digital histology images taken from several organ tissues, including the colon, brain, stomach, etc. These images were generated via a Whole Slide Image (WSI) scanner. The database includes 17,668 image patches that are multilabel in nature. The training, validation, and test sets contain 14,134, 1767, and 1767 images, respectively. The labeling scheme follows a three-tier hierarchy: L1 (9 labels), L2 (11 labels), and L3 (22 labels). As we progress down the levels, the annotated features gradually progress from coarse to fine detail. The highest level (L1) contains classes that amalgamate several lower-level classes.
For example, Dense Regular Connective (C.D.R) is an L3 precise label that falls under the more coarse L1 category of Connective (C). For the purpose of our work, we have selected L1 as it seems to be the most statistically significant selection with a better balance of per-class distribution.
ChestXray-14
The ChestX-Ray 14 dataset contains hospital-scale frontal-view chest X-ray images from 30,805 unique patients. Each image either contains multiple common thoracic illnesses, including 'cardiomegaly' or 'pneumonia', or is designated 'normal', indicating no illness. The released version of the dataset catalogs 14 common illnesses to date, as opposed to the original 8 that were released at the time of publication.
§.§ Hyperparameters & Tuning
In this section, we list all the parameters necessary for the reproducibility of our method. We have categorized our hyperparameters depending on which part of the pipeline they relate to (i.e., Training Optimization refers to any parameters used in setting up the training phase). A special note is made for the Loss Development λ values: in order to tune our method, we sampled a 15-point log-random search in a subset of the provided range to adapt our model to the given datasets. See Table <ref>.
§.§ Additional Information on Metrics
Consistent with state-of-the-art methods, we calculate the average overall precision (OP), recall (OR), and F1 score (OF1), in addition to the average per-class precision (CP), recall (CR), and F1 score (CF1), as metrics for evaluating the different methods on the datasets <cit.>. Overall, these metrics challenge the model's ability to accurately discriminate the class of interest in terms of false positives and false negatives. Superior OF1 and CF1 indicate that the model is well tuned for class discrimination, as these metrics encompass both recall and precision in their calculation. For some experiments, we include the following computational complexity measures: Parameters (MM) to indicate model size, and GMAC to indicate the forward computational resources required. The motivation behind these metrics is to illustrate that performance is measured not only through how well the method discriminates classes but also through the complexity of deploying the method in the real world. Finally, due to the increased difficulty of the ChestX-ray14 dataset, we additionally report per-class AUC scores to identify model discriminability for the class of interest; this has been a common trend in papers that report results on this dataset <cit.>.
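For reference, one common convention for computing these overall and per-class metrics from already-thresholded predictions is sketched below in NumPy; the function name and the toy example are ours and purely illustrative.

```python
import numpy as np

def overall_and_per_class_metrics(y_true, y_pred, eps=1e-12):
    """y_true, y_pred: binary arrays of shape (num_samples, num_classes)."""
    tp = (y_true * y_pred).sum(axis=0).astype(float)
    fp = ((1 - y_true) * y_pred).sum(axis=0).astype(float)
    fn = (y_true * (1 - y_pred)).sum(axis=0).astype(float)

    # Overall metrics: pool true/false positives over all classes first.
    op = tp.sum() / (tp.sum() + fp.sum() + eps)
    orr = tp.sum() / (tp.sum() + fn.sum() + eps)
    of1 = 2 * op * orr / (op + orr + eps)

    # Per-class metrics: compute per class, then average over classes.
    cp = np.mean(tp / (tp + fp + eps))
    cr = np.mean(tp / (tp + fn + eps))
    cf1 = 2 * cp * cr / (cp + cr + eps)
    return dict(OP=op, OR=orr, OF1=of1, CP=cp, CR=cr, CF1=cf1)

# Toy example with 3 samples and 4 classes (predictions already thresholded).
y_true = np.array([[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 1, 1], [1, 0, 0, 1]])
print(overall_and_per_class_metrics(y_true, y_pred))
```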
§.§ Additional Visualizations
To further augment the visualizations in the main paper, we attach supplemental visualizations on the two additional datasets: MS-COCO and ChestXray-14. As can be seen from the visualizations, our model is more precise at localizing the correct features. By capturing the epistemic uncertainty from the kernel representation, our method is able to focus the activation on the correct class, limiting extraneous false positive results. See Figure <ref>.
|
http://arxiv.org/abs/2307.04144v1 | 20230709101707 | Shadow, absorption and Hawking radiation of a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity | [
"Qian Li",
"Chen Ma",
"Yu Zhang",
"Zhi-Wen Lin",
"Peng-Fei Duan"
] | gr-qc | [
"gr-qc"
] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] (Corresponding author)
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
City College, Kunming University of Science and Technology, Kunming, Yunnan 650051, China.
This paper studies the black hole shadow, absorption cross section, and Hawking radiation of a massless scalar field in the background of a static spherically symmetric black hole spacetime that is surrounded by a cloud of strings in Rastall gravity. Specifically, the effects of the parameters a and β on the photon sphere and shadow radii are investigated. The results show that as the negative parameter β decreases, the photon sphere and shadow radii change in an N-shape. In addition, the absorption cross section obtained after solving the massless Klein-Gordon equation is calculated using the sinc approximation and the partial waves method. We compare the absorption cross section obtained by the sinc approximation and the partial waves method, and find it to be exceptionally consistent in the mid-to-high frequency region. Furthermore, the effects of parameters a and β on absorption are examined in detail. Finally, we study in detail the effects of the parameters a, β and l on the Hawking radiation power emission spectrum of the considered black hole. It turns out that the string parameter a always suppresses the power emission spectrum, indicating that such black holes live longer when the string parameter a is increased while other parameters are fixed.
Shadow, absorption and Hawking radiation of a Schwarzschild black
hole surrounded by a cloud of strings in Rastall gravity
Peng-Fei Duan
August 12, 2023
===========================================================================================================================
§ INTRODUCTION
General relativity, proposed by Einstein in 1915 <cit.>, is by far the most widely accepted theory of gravity. Its predictions have been tested and verified under both weak- and strong-field conditions. In particular, black holes, as one of these predictions, are arguably the most interesting and mysterious objects in our universe. The mystery of a black hole is that nothing, including light, can escape its event horizon. For the past few decades, the existence of black holes could only be studied through indirect methods, until the first image of a black hole appeared in 2019 <cit.>. This discovery provides many inspiring answers for our exploration of Einstein's theory of general relativity and for testing other modified theories of gravity, taking our understanding of black hole physics a major step forward. However, the basic theory proposed by Einstein cannot explain some phenomena or solve certain fundamental problems, e.g., the singularity problem and the conjecture that the covariant divergence of the energy-momentum tensor may be non-zero.
To account for the special case where the covariant divergence of the energy-momentum tensor does not vanish, Rastall <cit.> proposed a special modification of general relativity in which the field equation is T^μν_;μ = λ R^,ν, with λ = 0 corresponding to the Einstein equations. An important feature of Rastall gravity is that the field equation T^μν_;μ = λ R^,ν is obtained directly by relaxing the usual conservation law, without relying on the metric or Palatini formalism <cit.>. It is also important to note that Rastall gravity appears to be consistent with experimental observations in the context of cosmology <cit.>. Specifically, the observational data include, but are not limited to, the age of the universe, helium nucleosynthesis, and the Hubble parameter. More interestingly, this modified gravity yields many novel and interesting results at the cosmological level. Besides, some attention has been focused on the debate over whether Rastall gravity is equivalent to Einstein gravity. Visser <cit.> argued that the modification proposed by Rastall is a rearrangement of the matter sector of Einstein gravity. In other words, the geometrical part of the field equation is the same in both theories, and one only needs to construct a new energy-momentum tensor that fulfills the ordinary conservation law. The author therefore claimed that there is nothing new, such as a different description of gravity, in the Rastall proposal. Das et al. <cit.> concluded that, in the framework of non-equilibrium thermodynamics (for a homogeneous and isotropic FLRW model), generalized Rastall gravity is equivalent to Einstein gravity. However, other researchers disagree with Visser's conclusions; see, for example, the work of Darabi and his colleagues <cit.>, who maintain that Rastall theory is not equivalent to Einstein gravity and give a simple example to show that Visser's claim is incorrect. Moreover, they indicated that Rastall gravity is an "open" theory of gravity in comparison to basic general relativity and is more compatible with observational cosmology. Hansraj et al. <cit.> also discussed this dispute, and their results are consistent with those of Darabi et al. <cit.>. In that work, they showed that Rastall gravity can satisfy the fundamental conditions for a physically viable model, whereas Einstein gravity does not fulfill these requirements (see <cit.> for a more detailed discussion). Several works <cit.> have shown the difference between Rastall gravity and Einstein gravity from theoretical or cosmological perspectives. Finally, regardless of whether Rastall gravity is equivalent to Einstein gravity, the theory is worth studying and discussing because it can be confronted with cosmological and astrophysical observations.
String theory, on the other hand, holds that the fundamental unit of nature is not the point particle of particle physics but an extended one-dimensional string. Letelier <cit.> first proposed that the source of the gravitational field could be a cloud of strings, and gave an exact solution for a Schwarzschild black hole surrounded by a cloud of strings in the context of Einstein's general relativity. In addition, black holes that treat a cloud of strings as the source of the gravitational field in modified gravities have been studied <cit.>. For instance, Cai and Miao proposed a black hole solution in which a cloud of strings is the source of the gravitational field of a Schwarzschild black hole in the context of Rastall gravity <cit.>. The authors also analyzed the fundamental thermal properties, quasinormal modes of gravitational perturbations, area spectra <cit.>, and entropy spectra.
The experimental results reported by the Event Horizon Telescope Collaboration <cit.> not only directly prove the existence of black holes but also allow us to observe black hole shadows directly. The theoretical analysis of black hole shadows has a long history. For example, Synge <cit.> discussed the shadow of the Schwarzschild spacetime, and Bardeen et al. <cit.> analyzed the shadow of the Kerr black hole. In addition to analyses performed in basic general relativity, such studies extend to various modified theories of gravity and to spacetimes of arbitrary dimension. Abbas and Sabiullah <cit.> studied the structure of timelike as well as null geodesics of the regular Hayward black hole and found that massive particles moving along timelike geodesics are dragged toward the black hole. To the best of our knowledge, numerous studies <cit.> have been devoted to the shadows of black holes in various modified gravities. More concretely, Gyulchev et al. <cit.> analyzed the shadows cast by different rotating traversable wormholes. Interestingly, the near-horizon geometry determines the shadow cast by the black hole. On the other hand, the trajectory of light is affected by the plasma surrounding the black hole, which changes the geometric size and shape of the shadow in the Kerr spacetime <cit.>. In general, gravitational light deflection produces black hole shadows, and the trajectory of a photon in vacuum depends on its impact parameter <cit.>. Therefore, we cannot ignore the role of the impact parameter in shadow formation.
Due to the special properties of black holes, we cannot directly study their internal structure. However, a black hole is not an isolated system, because it interacts with its surrounding environment through, e.g., absorption, scattering, and Hawking radiation. These interactions can convey information about the interior of the event horizon. In particular, the absorption cross section of black holes, as one of these interactions, has received extensive attention from researchers. This is because one of the most useful and efficient ways to understand the properties of a black hole is to analyze the absorption of matter waves and test fields around it. This series of studies began in the 1970s <cit.>. During that period, Sanchez found that the absorption cross section of the Schwarzschild spacetime for scalar waves oscillates around the geometric capture cross section. About twenty years later, Das et al. <cit.> presented a key result that, in the low-energy regime, the absorption cross section of a minimally coupled massless scalar field is equal to the area of the black hole's event horizon. Consequently, the literature on this particular topic has proliferated over the past few decades, covering various fields of research and several modified theories <cit.>.
Furthermore, Hawking predicted that black holes are thermal systems, like black bodies, and thus have an associated temperature and entropy. Based on the analysis of quantum field dynamics in curved spacetime, Hawking pointed out that black holes emit radiation, known as Hawking radiation, from their event horizons <cit.>. Intriguingly, Hawking radiation depends on the type of particle and the geometry of the black hole, since the Hawking temperature T_BH=f'(r_+)/4π is one of the influencing factors. Moreover, Yale <cit.> analyzed the Hawking radiation of scalar particles, fermions, and spin-1 bosons using the tunneling method. In recent years, a large body of literature <cit.> has emerged on Hawking radiation in various modified gravities, including higher-dimensional black holes.
This paper investigates the black hole shadow, absorption cross section, and Hawking radiation of a test scalar field for a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity. Specifically, Cai and Miao <cit.> presented the corresponding quasinormal modes of the odd-parity gravitational field using the WKB approximation. On this basis, our research contributes to a further understanding of this black hole and its physical characteristics.
This paper is organized as follows. The second section outlines the basic information of the black hole solution, that is, a Schwarzschild black hole surrounded by a cloud of strings in the context of Rastall gravity, and also gives the meaning of the influencing parameters. The third part is devoted to the derivation of massless scalar equations and the analysis of related effective potentials. Section 4 analyzes the radius of the photon sphere and the shadow radius of the black hole in detail. Next, the absorption cross section of the scalar field is calculated using the sinc approximation and the partial wave method, and the effects of the parameters are also investigated. Section 6 gives the expression of Hawking radiation and the corresponding results for the Hawking radiation power emission spectra. The last section contains the summary and conclusions. Besides, we use the natural unit that c = G = ħ = 1 in this paper.
§ THE SOLUTION OF A SCHWARZSCHILD BLACK HOLE SURROUNDED BY A CLOUD OF STRINGS IN RASTALL GRAVITY
The field equations of the Rastall gravity <cit.> are as follows,
G_μν + β g_μν R= κ T_μν ,
T^μν_ ;μ = λ R^,ν ,
where κ and λ represent the Rastall gravitational coupling constant and the Rastall parameter, respectively. Moreover, β is defined as the product of these two parameters, i.e., β≡κλ. From the above equations we have that
R = \frac{\kappa}{4\beta-1}\,T ,
T^{\mu\nu}_{\ \ ;\mu} = \frac{\beta}{4\beta-1}\,T^{,\nu} ,
where R and T denote the Ricci scalar and the trace of the energy-momentum tensor, respectively. Besides, \kappa = \frac{4\beta-1}{6\beta-1}\,8\pi under the Newtonian limit <cit.>. It can be seen from the above equations that Einstein gravity is recovered and the energy-momentum tensor is conserved when the Rastall parameter λ vanishes, i.e., β = 0.
We consider the case where the metric is static and spherically symmetric,
ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dθ^2+r^2 sin^2θdϕ^2
with the metric <cit.>
f(r) = 1 - \frac{2M}{r} + \frac{4a\left(\beta-\frac{1}{2}\right)^2}{8\beta^2+2\beta-1}\, r^{\frac{4\beta}{2\beta-1}}.
It is worth noting that Rastall theory should satisfy the Newtonian limit <cit.>. Therefore, the cases β=1/6 and β=1/4 are not allowed. The parameter a needs to satisfy a specific constraint, namely a ≡ κ b, where b is an integration constant associated with the cloud of strings. Specifically, β and a represent the influence of the Rastall gravity and of the string cloud, respectively. Consequently, Rastall gravity reduces to Einstein gravity when β=0, while the Schwarzschild spacetime is recovered when a equals 0.
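As a purely illustrative aside (not taken from the original paper), the metric function and the location of the event horizon can be evaluated numerically. The Python sketch below assumes the coefficient and exponent exactly as written in the equation above, natural units G = c = 1, and hypothetical parameter values.

```python
import numpy as np
from scipy.optimize import brentq

def f_metric(r, M=1.0, a=0.1, beta=0.1):
    # Metric function f(r) as written above; a = 0 recovers Schwarzschild.
    expo = 4.0 * beta / (2.0 * beta - 1.0)
    coef = 4.0 * a * (beta - 0.5) ** 2 / (8.0 * beta ** 2 + 2.0 * beta - 1.0)
    return 1.0 - 2.0 * M / r + coef * r ** expo

def event_horizon(M=1.0, a=0.1, beta=0.1, r_min=1e-3, r_max=50.0, n=5000):
    # Outermost root of f(r) = 0, located by scanning for a sign change from large r inward.
    rs = np.linspace(r_min, r_max, n)
    vals = f_metric(rs, M, a, beta)
    for i in range(n - 1, 0, -1):
        if vals[i] * vals[i - 1] < 0.0:
            return brentq(f_metric, rs[i - 1], rs[i], args=(M, a, beta))
    raise ValueError("no horizon found in the scanned range")

print(event_horizon(M=1.0, a=0.0, beta=1.0 / 10.0))  # a = 0 recovers r_h = 2M = 2
print(event_horizon(M=1.0, a=0.1, beta=1.0 / 10.0))  # horizon shifted by the string cloud
```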
§ SCALAR WAVE EQUATION
The massless scalar field Ψ governed by the massless Klein-Gordon equation in curved spacetime can be formulated as
1/√(-g)∂_μ(√(-g)g^μν∂_ν)Ψ=0,
and then the massless scalar field Ψ can be decomposed as follows
\Psi_{\omega l m}=\frac{\psi_{\omega l}(r)}{r}\, P_l(\cos\theta)\, e^{-i\omega t},
where P_l(cosθ) denotes the Legendre polynomial, and l and m represent the corresponding angular quantum number and magnetic quantum number, respectively. In addition, the radial function ψ_ω l satisfies the following ordinary differential equation,
f(r)d/dr[f(r)dψ_ω l/dr]+[ω^2-V_eff(r)]ψ_ω l=0,
where V_eff(r) stands for the corresponding effective potential that is defined as
V_eff(r)=f(r)(1/rdf(r)/dr+l(l+1)/r^2).
Moreover, by substituting the metric in the effective potential, the specific potential is reformulated as
V_{\rm eff}(r)=\left(1-\frac{2M}{r}+\frac{4a\left(\beta-\frac{1}{2}\right)^2}{8\beta^2+2\beta-1}\,r^{\frac{4\beta}{2\beta-1}}\right)\times\left(\frac{l(l+1)}{r^2}+\frac{2M}{r^3}-\frac{16 a\left(-\frac{1}{2}+\beta^2 \right)r^{-2-\frac{4\beta}{-1+2\beta}}}{(-1+2\beta)(-1+2\beta+8\beta^2)}\right).
Additionally, we define the following tortoise coordinate change
r_*=∫dr/f.
Consequently, the equation (<ref>) is equivalent to
d^2ψ/dr_*^2+(ω^2-V_eff)ψ=0.
Note that both the metric f(r) and the effective potential V_eff(r) diverge when β is set to -0.5. Besides, to satisfy the condition that the effective potential V(r) → 0 as r→∞, we require β < 1/6. Hence, the domain of β is taken to be (-0.5, 1/6). Moreover, due to the condition a ≡ κ b, the admissible range of a depends on the sign of the parameter β. For β < 0, the barrier of the effective potential V_eff(r) disappears as the parameter a approaches 1. Accordingly, the domain of the parameter a is set to [0,1). In contrast, for β > 0, a black hole surrounded by a cloud of strings in Rastall gravity has no event horizon when the parameter a is too large. For instance, when β=0.1, the domain of a is set to [0,0.3].
Fig.<ref> shows the behaviour of the effective potential V_eff(r) with respect to r for different angular quantum numbers l when a=0.1, β=1/10. We find that the peak value of the effective potential increases when the angular quantum number l is increased. Furthermore, the potential V_eff(r) first increases, then decreases, and finally tends to zero at r→∞.
As shown in Fig.<ref>, to compare the effects of parameters a and β on the effective potential V_eff(r), we depict the behaviour of V_eff(r) with respect to a and β when β < 0 and β > 0, respectively. Specifically, for β > 0, i.e., when the parameter β is fixed to 1/10, the barrier height of the effective potential decreases as the string parameter a increases. It is clear that the peak of the effective potential becomes smaller and shifts to the right side as a increases. Next, we vary the Rastall parameter β and fix a to 0.1. It can be seen that with the increase of β, the peak value of the effective potential decreases, and the position of the peak value does not change much compared with the case where the string parameter a changes.
Meanwhile, when β < 0, one can see that for the same value of a, the barrier height of the potential first increases and then decreases with decreasing β. Also, the peak position firstly shifts to the left and then to the right. Furthermore, when the parameter a is varied, at the same value of the Rastall parameter β, the barrier height decreases and the peak position shifts to the right as the parameter a increases.
We now impose boundary conditions on the Schrödinger-like equation (<ref>), because we are interested in the absorption cross section and Hawking radiation. Near the horizon and at infinity, one finds that ψ_ω l(r_*) must satisfy the following boundary conditions
\psi_{\omega l}(r_*)\sim
\begin{cases}
I_{\omega l}\, e^{-i\omega r_*}+R_{\omega l}\, e^{i\omega r_*}, & r_* \to +\infty,\\
T_{\omega l}\, e^{-i\omega r_*}, & r_*\to-\infty,
\end{cases}
where R_ω l and T_ω l denote the reflection and transmission coefficients, respectively. Due to the conservation of flux,
R_ω l and T_ω l satisfy the following constraint
|R_ω l|^2+|T_ω l|^2=|I_ω l|^2.
Furthermore, the phase shift δ_l can be defined as
e^{2i\delta_l}=(-1)^{l+1}\,\frac{R_{\omega l}}{I_{\omega l}}.
Next, we will discuss the black hole shadows, absorption cross section and Hawking radiation based on the last two sections.
§ SHADOWS
In this section, we investigate the role of the Rastall parameter β and the string parameter a on the shadow radius of a black hole enclosed by a cloud of strings in Rastall gravity. Moreover, the results will be compared to those of Schwarzschild spacetime (i.e. a=0) and Einstein gravity (i.e. β=0), respectively.
The photon trajectories around a black hole surrounded by a cloud of strings in Rastall gravity are described by null geodesics <cit.>. The Lagrangian of the geodesic equations in this curved spacetime has the following form
0=-f(r)ṫ^2+1/f(r)ṙ^2+r^2θ̇^2+r^2sin^2θϕ̇^2,
where the overdot symbol denotes the differentiation with respect to the affine parameter τ. Without loss of generality, we consider an analysis restricted to the equatorial plane, i.e., θ=π/2. By using the Euler-Lagrange equation, the t and ϕ coordinates are expressed as,
ṫ=E/f(r),
ϕ̇=L/r^2,
where E, L are motion constants, representing the energy and angular momentum of the massless test particle, respectively.
Hence, by substituting Eq. (<ref>) and Eq. (<ref>) in the Lagrangian equation (<ref>), the Lagrangian expression can be written as
\dot{r}^2+f(r)\,\frac{L^2}{r^2}=E^2,
furthermore, we define
V=f(r)L^2/r^2,
where V stands for the effective potential of the massless test particle. Besides, null geodesics in equatorial circular motion in a static spherically symmetric spacetime satisfy the conditions ṙ=0 and r̈=0. Consequently, we have V=E^2 and dV/dr=0 for circular null geodesics. The conditions V(r_p)=E^2 and V^'(r)|_{r=r_p}=0 <cit.> determine the circular orbit of the photon, that is, the photon sphere radius r_p.
Moreover, the critical impact parameter b_c can be expressed as
b_c=\frac{L}{E}=\frac{r_p}{\sqrt{f(r_p)}},
\qquad r_p\, f^{\prime}(r_p)-2f(r_p)=0.
On the other hand, the black hole shadow radius r_s is represented by the celestial coordinates (x,y) as follows
r_s=√(x^2+y^2)=r_p/√(f(r_p)).
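For illustration only, the photon sphere and shadow radii can be obtained numerically from these conditions. The sketch below reuses f_metric() and event_horizon() from the earlier sketch; the bracketing interval and the finite-difference step are our assumptions.

```python
from scipy.optimize import brentq

# Reuses f_metric() and event_horizon() from the earlier sketch.

def df_dr(r, M, a, beta, h=1e-6):
    # central-difference derivative of the metric function
    return (f_metric(r + h, M, a, beta) - f_metric(r - h, M, a, beta)) / (2.0 * h)

def photon_sphere_radius(M=1.0, a=0.1, beta=0.1, r_hi=30.0):
    # Root of r f'(r) - 2 f(r) = 0 outside the event horizon.
    r_lo = 1.01 * event_horizon(M, a, beta)
    g = lambda r: r * df_dr(r, M, a, beta) - 2.0 * f_metric(r, M, a, beta)
    return brentq(g, r_lo, r_hi)

def shadow_radius(M=1.0, a=0.1, beta=0.1):
    rp = photon_sphere_radius(M, a, beta)
    return rp / f_metric(rp, M, a, beta) ** 0.5

# Sanity check: a = 0 recovers the Schwarzschild values r_p = 3M and r_s = 3*sqrt(3)*M.
print(photon_sphere_radius(a=0.0), shadow_radius(a=0.0))
print(photon_sphere_radius(a=0.1, beta=1.0 / 10.0), shadow_radius(a=0.1, beta=1.0 / 10.0))
```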
Specifically, the effects of the parameters a and β on the photon sphere and shadow radii are shown in Table <ref>. For β > 0, one can see from Table <ref> that for fixed a=0.1 (β=1/10), the photon sphere and shadow radii increase as the parameter β (a) increases. Furthermore, as the string parameter a tends to 0.3 with β=1/10, the black hole shadow radius increases rapidly. For β < 0, when we set a=0.3, we observe that the photon sphere and shadow radii first increase, then decrease, and finally increase again as the parameter β decreases. A possible reason is that the metric f(r) is not a monotonic function of the Rastall parameter β in the range -0.5 < β < 0. In addition, when the parameter β is set to -1/3, the photon sphere and shadow radii increase as a approaches its maximum value.
§ ABSORPTION CROSS SECTION
In this section, we calculate the absorption cross section using two methods, viz., the sinc approximation and the partial-wave method, where the gray-body factor is calculated by the sixth-order WKB method. Besides, we use the capture cross section as a reference. It is known that the absorption cross section in the low-frequency and high-frequency limits can be calculated by different analytical approximations. The total absorption cross section of massless scalar waves in an arbitrary-dimensional, general spherically symmetric black hole tends to the horizon area <cit.> in the low-frequency regime. In the high-frequency regime, the total absorption cross section of the massless scalar field converges to the geometric capture cross section, determined by null geodesics as
σ_geo≡π b^2_c,
where b_c denotes the above critical impact parameter.
§.§ sinc approximation
Sanchez <cit.> proposed that in the high-frequency regime, the total absorption cross section oscillates around the above-mentioned capture cross section (27/4)π r^2_s, where r_s=2M, with an interval between oscillation peaks of Δ=2/√(27)M. In addition, Sanchez also presented the following analytical approximation of the absorption cross section
\sigma_{\rm San}=\frac{27\pi}{4}-\frac{A}{\omega r_s}\,\sin\!\left[\pi\left(3\sqrt{3}\right)\left(\omega r_s+B\right)\right],
which has the best fit when A= 1.14 ∼√(2) and B<10^-4.
Furthermore, the Sanchez approximation was generalized by Décanini et al. to static spherically symmetric spacetimes of arbitrary dimension. Décanini et al. <cit.> showed that in the eikonal regime, the fluctuation of the absorption cross section is completely and very simply described by the properties of the unstable null geodesics located on the photon sphere; the important characteristics are the orbital period and the Lyapunov exponent. Specifically, the sinc approximation of the absorption cross section of a d-dimensional static and spherically symmetric black hole is given by
σ≈σ_geo +σ_abs^osc,
where the oscillation part of the absorption, i.e., σ_abs^osc, is expressed as
\sigma_{\rm abs}^{\rm osc}\equiv (-1)^{d-3}\, 4(d-2)\,\pi\,\eta_c\, e^{-\pi\eta_c}\,{\rm sinc}\!\left(\frac{2\pi r_c\,\omega}{\sqrt{f(r_c)}}\right) \sigma_{\rm geo},
with sinc(x) denoting the sine cardinal
sinc(x) ≡sin x/x,
and d representing the dimension of the spacetime. Besides, 2\pi r_c/\sqrt{f(r_c)} = 2\pi b_c is the orbital period of null geodesics on the photon sphere <cit.>. The parameter η_c, which measures the instability of the circular orbit on the photon sphere, is defined as
\eta_c =\frac{1}{2}\sqrt{4f(r_c)-2r^2_c\, f^{\prime\prime}(r_c)}.
For instance, the sinc approximation of the absorption cross section of a Schwarzschild black hole in the high-frequency limit is written as
\sigma\approx\sigma_{\rm geo} - 8\pi\, e^{-\pi}\,{\rm sinc}\!\left[2\pi \left(3\sqrt{3}M\right)\omega\right] \sigma_{\rm geo}.
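The sinc approximation itself is straightforward to evaluate once f(r) and the photon sphere are known. The sketch below is ours, reusing f_metric() and photon_sphere_radius() from the earlier sketches, with d = 4 and illustrative parameters; it assembles σ_geo, η_c, and the oscillatory term.

```python
import numpy as np

# Reuses f_metric() and photon_sphere_radius() from the sketches above.

def sinc_absorption(omega, M=1.0, a=0.1, beta=0.1, d=4, h=1e-4):
    rc = photon_sphere_radius(M, a, beta)
    fc = f_metric(rc, M, a, beta)
    bc = rc / np.sqrt(fc)                       # critical impact parameter
    sigma_geo = np.pi * bc ** 2                 # geometric capture cross section
    # second derivative of f at r_c by central differences
    fpp = (f_metric(rc + h, M, a, beta) - 2.0 * fc + f_metric(rc - h, M, a, beta)) / h ** 2
    eta_c = 0.5 * np.sqrt(4.0 * fc - 2.0 * rc ** 2 * fpp)
    x = 2.0 * np.pi * bc * np.asarray(omega, float)
    sinc = np.where(x == 0.0, 1.0, np.sin(x) / x)        # sine cardinal sin(x)/x
    osc = (-1.0) ** (d - 3) * 4.0 * (d - 2) * np.pi * eta_c * np.exp(-np.pi * eta_c) * sinc
    return sigma_geo * (1.0 + osc)

omega = np.linspace(0.05, 1.5, 200)
sigma = sinc_absorption(omega, M=1.0, a=0.1, beta=1.0 / 10.0)
# For a = 0 (Schwarzschild, eta_c = 1) this reduces to the formula quoted above.
```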
§.§ Partial wave approach
We consider a field Φ that is purely ingoing at the event horizon and that, in the far field, is the sum of a monochromatic incident plane wave Φ^I and an outgoing scattered wave Φ^S, that is,
Φ∼Φ^I+Φ^S.
Without loss of generality, we assume that the direction of wave propagation is along the z-axis. Accordingly, the monochromatic incident plane wave Φ^I and the outgoing scattered wave Φ^S are respectively defined as
Φ^I= e^-iω(t-z),
Φ^S= 1/rf̂(θ) e^-iω(t-r),
where f̂(θ) denotes the scattering amplitude. Moreover, e^iω z can be decomposed as <cit.>
e^iω z= ∑_l=0^∞(2l+1)i^lj_l(ω r)P_l(cosθ),
with j_l(.) representing the spherical Bessel function.
Hence, Eq. (<ref>) in the far-field can be rewritten as follows,
Φ^I∼e^-iω t/r∑_l=0^∞ C_ω l(e^-iω r+ e^-i π(l+1) e^iω r)P_l(cosθ),
where C_ω l is given by
C_{\omega l}=\frac{2l+1}{2i\omega}\, e^{i \pi(l+1)}.
The field solution Φ depends on the boundary conditions (<ref>). This means that the ingoing part of Φ should match the incident plane wave Φ^I. Therefore we obtain
Φ=e^-iω t/r∑_l=0^∞ C_ω lϕ_ω l(r) P_l(cosθ).
The absorption cross section depends on the flux of particles that enter the black hole through the effective potential. Hence, we can introduce the four-current density vector as follows
J^{\mu}=\frac{i}{2}\left(\Phi^*\nabla^{\mu}\Phi-\Phi\nabla^{\mu}\Phi^*\right),
which satisfies the conservation law, that is,
\nabla_{\alpha}J^{\alpha}=0.
By substituting Eq.(<ref>) into Eq.(<ref>) under the boundary condition Eq.(<ref>), we obtain the four-current density vector by surface integral as
N(r)=-\int_{\Sigma} r^2 J^r\, d\Omega = -\frac{\pi}{\omega}\sum_{l=0}^{\infty} (2l+1)\left(1-|e^{2 i \delta_l}|^2\right),
where N(r) is the flux passing through the surface Σ of constant radius r and dΩ=sinθ dθ dφ. The flux is constant, and in stationary scenarios the (negative) N represents the particles that pass through the potential barrier and enter the black hole <cit.>. Besides, we have used the orthogonality of the Legendre polynomials, i.e.,
∫_-1^1 P_l(x) P_l'(x) dx =2/(2l+1)δ_l l'.
where x=cosθ.
Furthermore, the absorption cross section σ_abs is defined as the ratio of the particle flux |N| to the plane wave incident current ω. Hence, the absorption cross section can be written as
\sigma_{\rm abs}(\omega)\equiv\frac{|N|}{\omega}= \frac{\pi}{\omega^2}\sum_{l=0}^{\infty}(2l+1)\left(1-|e^{2 i \delta_l}|^2\right)
=\frac{\pi}{\omega^2}\sum_{l=0}^{\infty} (2l+1)\,|T_{\omega l}|^2,
and the partial absorption cross section can be expressed as
\sigma_l(\omega)= \frac{\pi}{\omega^2}(2l+1)\left(1-|e^{2 i \delta_l}|^2\right)= \frac{\pi}{\omega^2}(2l+1)\, |T_{\omega l}|^2.
In order to study the effects of the Rastall and string parameters on the absorption cross section of the scalar field, we need to calculate the phase shift δ_l and hence the transmission coefficient. In this paper, we use the WKB approximation to obtain the transmission coefficient T_ω l. Assuming that the probability flux of the incident plane wave is normalized to 1, Eq.(<ref>) can be expressed as
|R_ω l|^2+|T_ω l|^2=1.
The transmission probability of different multipole numbers l can be obtained with the help of the sixth-order WKB method,
1- |R_ω l|^2=|T_ω l|^2,
with
R_{\omega l}=\left(1+e^{2i \pi\alpha}\right)^{-1/2},
where α is obtained from
\alpha-\frac{i\,(\omega^2-V_0)}{\sqrt{-2V_0^{\prime\prime}}} -\sum_{i=2}^{6}\Lambda_i(K)=0.
In Eq.(<ref>), V_0 represents the maximum value of the potential at r=r_0, and the prime denotes the derivative of the potential at r=r_0 with respect to r^*. Moreover, Λ_i(K) indicates a higher-order correction of the WKB method, which depends on K and the 2i order derivative of the potential at its maximum position <cit.>.
Specifically, we express the third-order method as follows,
\Lambda_2=\frac{1}{\sqrt{-2V^{(2)}_0}}\left[\frac{1}{8}\,\frac{V^{(4)}_0}{V^{(2)}_0}\left(b^2+\frac{1}{4}\right)-\frac{1}{288}\left(\frac{V^{(3)}_0}{V^{(2)}_0}\right)^2\left(7+60b^2\right)\right],
\Lambda_3=\frac{n+\frac{1}{2}}{-2V^{(2)}_0}\left[\frac{5}{6912}\left(\frac{V^{(3)}_0}{V^{(2)}_0}\right)^4\left(77+188b^2\right)
-\frac{1}{384}\,\frac{\left(V^{(3)}_0\right)^2V^{(4)}_0}{\left(V^{(2)}_0\right)^3}\left(51+100b^2\right)+\frac{1}{2304}\left(\frac{V^{(4)}_0}{V^{(2)}_0}\right)^2\left(67+68b^2\right)-\frac{1}{288}\,\frac{V^{(6)}_0}{V^{(2)}_0}\left(5+4b^2\right)+\frac{1}{288}\,\frac{V^{(3)}_0V^{(5)}_0}{\left(V^{(2)}_0\right)^2}\left(19+28b^2\right)\right].
In Eqs. (<ref>), the superscripts (2,3,4,5,6) of the effective potential represent the corresponding derivatives with respect to the tortoise coordinate r_*, and b=n+1/2. Besides, during the calculation, we find that when the Rastall parameter β is set as a fraction, the results and figures of the WKB approximation are more accurate than when β is set as a decimal <cit.>. This phenomenon can be attributed to the term r^{4\beta/(2\beta-1)} in the metric f(r). Hence, in order to maintain the consistency of the data, we choose the fractional form of β throughout the paper.
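Purely as an illustration of the partial-wave machinery, and not of the sixth-order scheme actually used in this paper, the sketch below evaluates the transmission probability keeping only the lowest-order WKB term and then performs the partial-wave sum. It reuses the helper functions from the earlier sketches; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Reuses f_metric(), event_horizon() and df_dr() defined in the earlier sketches.

def V_eff(r, l, M=1.0, a=0.1, beta=0.1):
    f = f_metric(r, M, a, beta)
    return f * (l * (l + 1) / r ** 2 + df_dr(r, M, a, beta) / r)

def transmission_wkb1(omega, l, M=1.0, a=0.1, beta=0.1, h=1e-4):
    """|T_{omega l}|^2 from the lowest-order WKB term only, as a rough stand-in
    for the sixth-order expression used in the text."""
    r_h = event_horizon(M, a, beta)
    res = minimize_scalar(lambda r: -V_eff(r, l, M, a, beta),
                          bounds=(1.05 * r_h, 30.0), method="bounded")
    r0, V0 = res.x, -res.fun
    # d^2 V / d r_*^2 = f d/dr ( f dV/dr ), evaluated at the barrier peak r0
    dV = lambda r: (V_eff(r + h, l, M, a, beta) - V_eff(r - h, l, M, a, beta)) / (2.0 * h)
    d2V_rstar = f_metric(r0, M, a, beta) * (
        f_metric(r0 + h, M, a, beta) * dV(r0 + h)
        - f_metric(r0 - h, M, a, beta) * dV(r0 - h)) / (2.0 * h)
    K = (omega ** 2 - V0) / np.sqrt(-2.0 * d2V_rstar)
    return 1.0 / (1.0 + np.exp(np.clip(-2.0 * np.pi * K, -700.0, 700.0)))

def sigma_abs(omega, l_max=10, **kw):
    # Partial-wave sum: sigma = (pi / omega^2) * sum_l (2l+1) |T_{omega l}|^2
    return np.pi / omega ** 2 * sum((2 * l + 1) * transmission_wkb1(omega, l, **kw)
                                    for l in range(l_max + 1))

print(sigma_abs(0.6, M=1.0, a=0.1, beta=1.0 / 10.0))
```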
From Fig. <ref>, we can compare the effects of the Rastall parameter and the string parameter on the partial absorption cross section when the Rastall parameter is positive. As shown in the left plot, for different values of the string parameter a, the corresponding partial absorption cross section starts at zero, reaches a maximum, and finally decreases to almost the same value as ω increases. Furthermore, it is easy to see that as the string parameter a increases, the partial absorption increases and its peak position shifts to the left. When we fix the string parameter and vary the Rastall parameter, we find that the peak value of the partial absorption cross section increases as the Rastall parameter β increases.
In Fig. <ref> we present the total absorption cross section of a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity for different values of the string parameter, where l runs from 0 to 10 and β=1/10. Specifically, the horizontal solid line represents the geometric capture cross section. As shown in Fig. 4, the dashed curve is the sinc approximation result, and the solid curve is the partial-wave result using the sixth-order WKB approximation. We show that increasing the parameter a increases the absorption cross section. We also notice that the two curves differ significantly at small frequencies. Moreover, as the string parameter increases, the difference is more pronounced and the range of oscillation amplitudes is significantly wider. However, in the high-frequency regime, the total absorption cross sections obtained by these two methods are in good agreement and converge to the geometric capture cross section. In Fig. <ref>, our results show that when we fix the value of a and increase β, the absorption cross section increases. The difference between the two curves also increases significantly in the low-frequency regime due to the Rastall parameter.
As shown in Fig. <ref>, we describe the behavior of the partial absorption cross section for different parameters obtained by the sixth-order WKB method when β < 0. From the left figure, where the Rastall parameter is treated as a variable and the parameter a is fixed, we observe that the partial absorption cross section does not monotonically increase as the Rastall parameter β decreases. The partial cross sections intersect in the range of 0.2<ω< 0.3 due to the effective potential. Therefore, the variation trend of the partial cross section is 'N' type. We also observe in the right plot that when we increase the string parameter, the partial cross section increases monotonically. Moreover, the peak position of the partial cross section is evidently shifted to the left.
Finally, we observe that as the multipole number l increases, the partial cross section decreases and its peak position shifts to the right.
In Fig. <ref> we give the total absorption cross sections of the massless scalar field by varying the Rastall parameter β (β<0) and fixing the string parameter a = 0.6. We can observe that the change of the total absorption cross section as a function of β is similar to that of the partial absorption cross section. This is because the higher the potential barrier, the more particles are scattered back to the black hole by the potential barrier. In addition, we can see that when we reduce the Rastall parameter to -0.5, the difference between the solid curve and the dashed curve gradually decreases.
In Fig. <ref> we present the total absorption cross section of the massless scalar field for β<0, varying the string parameter and fixing β=-1/3. It can be observed that the difference between the two curves is the smallest at the low-frequency limit compared to the above three cases. Furthermore, when ω is large, the total absorption cross section as a function of ω approaches the capture cross section. We also notice that the total absorption cross section, as well as the oscillation amplitude, increases with increasing string parameter.
§ HAWKING RADIATION
In this section, we employ the sixth-order WKB method to calculate the Hawking radiation for massless scalar fields. Furthermore, we analyze the effects of the string and the Rastall parameters on Hawking radiation in the background of a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity.
A black hole behaves almost in the same way as a black body, emitting particles at a temperature proportional to its surface gravity <cit.>. Hawking showed that black holes radiate particles thermally, due to the quantum tunneling effect created by vacuum fluctuations near the event horizon of the black hole. Therefore, if quantum effects are considered and the laws of thermodynamics are satisfied, black holes produce radiation. This phenomenon is known as Hawking radiation.
The Hawking radiation calculated by the gray-body factor has the following expression <cit.>
\frac{dE}{dt}=\sum_{l}N_l\int |T_{\omega l}|^2\,\frac{\omega}{\exp\left(\omega/T_{BH}\right)- 1}\,\frac{d\omega}{2\pi},
where N_l is the multiplicity that depends only on the black hole dimension. Moreover, for the massless scalar field in a four-dimensional black hole, l and N_l satisfy the condition N_l=2l+1. T_ω l denotes the above gray-body factor and T_BH represents the Hawking temperature.
Specifically, the Hawking temperature of static spherically symmetric spacetime can be written as
T_{BH}=\frac{1}{4\pi}\,f'(r)\Big|_{r=r_h}.
By substituting Eq.(<ref>) into Eq.(<ref>), we can obtain
T_{BH}=\frac{1}{4\pi r_h}\left(1+\frac{a\,(1-2\beta)\,r_h^{\frac{4\beta}{1-2\beta}}}{4\beta-1}\right),
where f(r_h)=0 and r_h is the radius of the event horizon. Besides, the string and Rastall parameters need to satisfy the previous parameter range, i.e., -0.5< β< 1/6 and 0 ≤ a < 1. By substituting Eq.(<ref>) and N_l=2l+1 into Eq.(<ref>), we can further obtain the Hawking power emission spectrum
\frac{d^2E}{dt\, d\omega}=\frac{1}{2\pi}\sum_{l}(2l+1)\,|T_{\omega l}|^2\,\frac{\omega}{e^{\omega/T_{BH}}-1}.
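Once gray-body factors are available, the emission spectrum can be tabulated numerically. The sketch below is ours: it reuses the helpers defined in the earlier sketches and substitutes the lowest-order WKB transmission for the sixth-order gray-body factor, so it is only a rough illustration with hypothetical parameters.

```python
import numpy as np

# Reuses f_metric(), df_dr(), event_horizon() and transmission_wkb1() from the sketches above.

def hawking_temperature(M=1.0, a=0.1, beta=0.1):
    r_h = event_horizon(M, a, beta)
    return df_dr(r_h, M, a, beta) / (4.0 * np.pi)      # T_BH = f'(r_h) / (4 pi)

def power_emission_spectrum(omega, l_max=6, M=1.0, a=0.1, beta=0.1):
    """d^2 E / (dt domega) for the massless scalar field, with the lowest-order
    WKB transmission standing in for the sixth-order gray-body factor."""
    T = hawking_temperature(M, a, beta)
    total = 0.0
    for l in range(l_max + 1):
        gray = transmission_wkb1(omega, l, M, a, beta)
        total += (2 * l + 1) * gray
    return total * omega / np.expm1(omega / T) / (2.0 * np.pi)

omegas = np.linspace(0.05, 1.0, 100)
spectrum = [power_emission_spectrum(w, a=0.1, beta=1.0 / 10.0) for w in omegas]
```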
Fig.<ref> compares the effects of parameters a and β on the Hawking power emission spectrum of the massless scalar wave when β is non-negative. We can clearly observe in the upper panel that for given l and β, increasing the parameter a depresses the power emission spectrum. Moreover, the peak of the power emission spectrum gradually shifts to lower frequencies as a increases. It is clear from the middle panel that when we fix l and a but increase the parameter β, the peak of the power emission spectrum gradually decreases and moves to lower frequencies. As the multipole number l increases, we can see from the lower panel that, for a massless scalar field, the power emission spectrum decreases and the peak position shifts towards higher frequencies. In conclusion, the parameters a, β and l all suppress the power emission spectrum. Besides, it is easy to see that if the values of the parameters a and β are chosen larger, the lifespan of the black hole will be longer.
This trait is more easily observed in Fig.<ref>, which plots the effects of parameters a, β and l on the power emission rate (as a function of ω) for the scalar wave in the range β<0. From the upper figure we can see that when we increase the parameter a, the power emission spectrum decreases. That is, under the condition that β is constant, the increase of the string parameter a leads to a decrease in the energy emission rate, thus making the lifetime of the black hole longer.
Furthermore, we also observe in the center panel that with decreasing Rastall parameter, for fixed l and a, the peak value of the power emission rate increases and then decreases, and the peak position first shifts to high frequency and then moves to low frequency. Finally, we fix the two parameters a=0.6 and β=-1/3 and analyze the effects of the multipole number l in the lower panel. It is clear that a larger multipole number results in a lower power emission spectrum. Besides, it is worth noting that the low multipole number l dominates the energy emission rate, while the contribution of the high multipole number l is extremely small and thus negligible.
§ CONCLUSION AND DISCUSSION
In the previous sections, we have comprehensively studied the black hole shadow, absorption cross section and power emission spectrum of Hawking radiation for the massless scalar field in a Schwarzschild black hole surrounded by a cloud of strings in Rastall gravity. The ranges of the string parameter and Rastall parameter are chosen according to the effective potential in the context of the scalar field. Notably, we have calculated the absorption cross section and Hawking radiation with the help of the sixth-order WKB method.
First, in Figs.<ref> and <ref>, we carefully analyzed the effective potential for different values of the parameters a, β and l.
For β>0, the parameters a and β depress the barrier of the effective potential, so fewer waves are reflected. For β<0, a reduces the barrier height when β is fixed, whereas the effective potentials intersect when β varies. Moreover, we studied the shadow and photon sphere radii arising from the bending of light rays. Because we consider a static spherically symmetric black hole, the radii of the photon sphere and shadow are constant; in other words, the black hole shadow is spherically symmetric. Besides, the radius of the photon sphere increases as the parameter a increases. However, when we treat β as a variable, the photon sphere and shadow radii vary non-monotonically. The reason is that when the Rastall parameter is less than zero, the metric f(r) behaves non-monotonically.
Second, with the help of the sixth-order WKB method, we calculated the absorption cross section of the scalar field in detail. To compare the accuracy of the sixth-order WKB, we also presented the results of the sinc approximation with the geometric capture cross section as a reference. From Figs.<ref>, <ref> and <ref>, we can clearly observe that larger values of the parameters a and β enhance the partial or total absorption cross section when β>0. However, in the low frequency range, when a or β is set to a larger value, the results calculated by the two methods are quite different. Furthermore, in Figs.<ref>, <ref> and <ref>, we plotted the partial and total absorption cross sections when β<0. Unlike the case where β is positive, the absorption cross section does not always grow as the Rastall parameter decreases. Since the potential barrier reflects waves, the change in the absorption cross section is exactly the opposite of the change in the potential barrier. Hence, as β decreases, the total absorption cross section first increases, then decreases and finally increases again. It is worth mentioning that the smaller the value of β, the smaller the difference between the two approximations. Very importantly, in the mid-high frequency region, the total absorption cross section and the sinc approximation are in good agreement and in all cases oscillate around the geometric capture cross section σ_geo.
Finally, we investigated the energy emission rate of Hawking radiation. Specifically, the power emission rate is affected by the string parameter, the Rastall parameter as well as the multipole number. In Fig.<ref>, we found that both a and β suppress the power emission spectrum, and the peak position shifts to a lower energy region. Moreover, the multipole number l also significantly depresses the power emission spectra whereas the peak position shifts to the higher frequency regime. The case of β<0 is also similar to the case of β>0 above, except the case where β varies and a is fixed. As the Rastall parameter decreases, the power emission spectrum first increases and then decreases, at the same time, the peak position first moves to the higher frequency region and then enters the lower energy region.
§.§ acknowledgments
This work was supported partly by the National Natural Science Foundation of China (Grants No. 12065012, No. 12065013), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (Grants No. YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006).
§.§ Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors’ comment: This present study is a theoretical work.]
Einstein1914
A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys. ) 1914, 1030-1085 (1914).
Einstein1915
A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys. ) 1915, 831-839 (1915).
Event2019
K. Akiyama et al. [Event Horizon Telescope], Astrophys. J. Lett. 875, L1 (2019).
Rastall1972
P. Rastall, Phys. Rev. D 6, 3357-3359 (1972).
Gogoi2021
D.J. Gogoi, U.D. Goswami, Phys. Dark Univ. 33, 100860 (2021).
Al-Rawaf1996
A.S. Al-Rawaf, M.O. Taha, Phys. Lett. B 366, 69-71 (1996).
Visser:2017gpz
M. Visser,
Phys. Lett. B 782, 83-86 (2018).
Das:2018dzp
D. Das, S. Dutta and S. Chakraborty,
Eur. Phys. J. C 78, 810 (2018).
Darabi:2017coc
F. Darabi, H. Moradpour, I. Licata, Y. Heydarzade and C. Corda,
Eur. Phys. J. C 78, 25 (2018).
Hansraj:2018zwl
S. Hansraj, A. Banerjee and P. Channuie,
Annals Phys. 400, 320-345 (2019).
Ziaie:2019jfl
A. H. Ziaie, H. Moradpour and S. Ghaffari,
Phys. Lett. B 793, 276-280 (2019).
Moradpour:2017ycq
H. Moradpour, A. Bonilla, E. M. C. Abreu and J. A. Neto,
Phys. Rev. D 96, 123504 (2017).
Li:2019jkv
R. Li, J. Wang, Z. Xu and X. Guo,
Mon. Not. Roy. Astron. Soc. 486, 2407-2411 (2019).
Abbas:2018ffk
G. Abbas and M. R. Shahzad,
Eur. Phys. J. A 54, 211 (2018).
Letelier1979
P.S. Letelier, Phys. Rev. D 20, 1294-1302 (1979).
Herscovich2010
E. Herscovich, M.G. Richarte,
Phys. Lett. B 689, 192-200 (2010).
Toledo2019
J.M. Toledo, V.B. Bezerra,
Eur. Phys. J. C 79, 117 (2019).
Morais2018
J.P. Morais Graça, I.P. Lobo, I.G. Salako,
|
http://arxiv.org/abs/2307.04302v1 | 20230710014146 | Auction Design for Value Maximizers with Budget and Return-on-spend Constraints | [
"Pinyan Lu",
"Chenyang Xu",
"Ruilong Zhang"
] | cs.GT | [
"cs.GT"
] |
Auction Design for Value Maximizers with Budget and Return-on-spend Constraints
Pinyan Lu1,2, Chenyang Xu3, Ruilong Zhang4
1ITCS, Shanghai University of Finance and Economics, China
2Huawei TCS Lab, China
3Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, China
4Department of Computer Science and Engineering, University at Buffalo, USA
[email protected], [email protected], [email protected]
All authors (ordered alphabetically) have equal contributions and are corresponding authors.
August 12, 2023
The paper designs revenue-maximizing auction mechanisms for agents who aim to maximize their total obtained values rather than the classical quasi-linear utilities. Several models have been proposed to capture the behaviors of such agents in the literature. In the paper, we consider the model where agents are subject to budget and return-on-spend constraints.
The budget constraint of an agent limits the maximum payment she can afford, while the return-on-spend constraint means that the ratio of the total obtained value (return) to the total payment (spend) cannot be lower than the targeted bar set by the agent.
The problem was first coined by <cit.>.
In their work, only Bayesian mechanisms were considered.
We initiate the study of the problem in the worst-case model and compare the revenue of our mechanisms to an offline optimal solution, the most ambitious benchmark. The paper distinguishes two main auction settings based on the accessibility of agents' information: fully private and partially private. In the fully private setting, an agent's valuation, budget, and target bar are all private. We show that if agents are unit-demand, constant approximation mechanisms can be obtained; while for additive agents, there exists a mechanism that achieves a constant approximation ratio under a large market assumption. The partially private setting is the setting considered in the previous work <cit.> where only the agents' target bars are private. We show that in this setting, the approximation ratio of the single-item auction can be further improved, and an Ω(1/√(n))-approximation mechanism can be derived for additive agents.
§ INTRODUCTION
In an auction with n agents and m items, the auctioneer decides the allocation x = {x_ij}_{i∈[n], j∈[m]} of the items and the agents' payments p = {p_i}_{i∈[n]}.
The agent i's obtained value is usually denoted by a valuation function v_i of the allocation;
while the agent's utility depends on both the obtained value and the payment made to the auctioneer.
Combining the valuation and payment to get the final utility function is a tricky modeling problem.
In the classic auction theory and the vast majority of literature from the algorithmic game theory community, one uses the quasi-linear utility function u_i=v_i-p_i, i.e., the utility is simply the obtained value subtracting the payment.
This natural definition admits many elegant mathematical properties and thus has been widely investigated in the literature (e.g. <cit.>).
However, as argued in some economical literature <cit.>, this utility function may fail to capture the agents' behaviors and thus cannot fit reality well in some circumstances.
In these circumstances, one usually uses a generic function u=f(v,p) (with monotonicity and possibly convexity properties) to model the utility function.
Such treatment is surely general enough, but usually not explicitly enough to get a clear conclusion.
In particular, designing non-trivial truthful mechanisms for agents with a generic and inexplicit utility function is difficult.
Is there some other explicit utility function (beyond the quasi-linear one) that appears in some real applications? One simple and well-studied model is agents with budget constraints (e.g. <cit.>). In this setting, besides the valuation function, an agent i also has a budget constraint B_i for the maximum payment he can make. In the formal language, the utility is
u_i :=
  v_i - p_i   if p_i ≤ B_i,
  -∞          otherwise.
In mechanism design, the valuation function v_i(·) is considered as the private information for agent i.
Thus the auctioneer needs to design a truthful mechanism to incentivize the agents to report their true information.
For these models beyond the simplest quasi-linear utility, other parameters might be involved in the agents’ utility functions besides the valuation function, such as budget B in the above example.
For the mechanism design problem faced by the auctioneer, one can naturally ask the question of whether these additional parameters are public information or private.
Both cases can be studied, and usually, the private information setting is more realistic and, at the same time, much more challenging.
This is the case for the budget constraint agents.
Both public budget and private budget models are studied in the literature (e.g. <cit.>).
Value Maximizer.
The above budget-constrained agent is only slightly beyond the quasi-linear model, since the utility is still quasi-linear as long as the payment is within the budget. However, it is not uncommon for budget-constrained agents to aim at maximizing the obtained value alone rather than the difference between value and payment.
This is because in many scenarios the objective/KPI for the department/agent/person who really decides the bidding strategy is indeed the final value obtained.
On the other hand, they cannot collect the remaining unspent money by themselves anyway, and as a result, they do not care about the payment that much as long as it is within the budget given to them.
For example, in a company or government's procurement process, the agent may be only concerned with whether the procurement budget can generate the maximum possible value. We notice that with the development of modern auto-bidding systems, value maximization is becoming the prevalent behavior model for the bidders <cit.>. This motivates the study of value maximizer agents, another interesting explicit non-quasi-linear utility model. In many such applications, there is another return-on-spend (RoS) constraint
τ_i for each agent i which represents the targeted minimum ratio between the obtained value and the payment and is referred to as the target ratio in the following. Formally, the utility function is
u_i :=
  v_i   if p_i ≤ B_i and p_i·τ_i ≤ v_i,
  -∞    otherwise.
As one can see, the value maximizer's utility function is still a function of v and p but with two additional parameters B and τ, which result from two constraints.
Note that the above utility function is identical to that of <cit.>.
Their paper focused on one particular setting where both value and budget are public information, with RoS parameter τ being the only single-dimensional private information.
Considering τ as the only private information helps design better auctions, but it may fail to capture a wider range of applications.
On top of capturing more practical applications, we
consider the setting where all these pieces of information are private, which we call the fully private setting.
This makes designing an efficient auction for the problem challenging.
With the focus on the fully private setting, we also consider some partially private settings, for which we can design better mechanisms.
There are other definitions of value maximizer in the literature, most of which can be viewed as a special case of the above model <cit.>.
For example, there might be no budget constraint (B=∞) or no RoS constraint. Another example is to combine v_i/τ_i as a single value (function).
A mechanism for the fully private setting in our model is automatically a mechanism with the same guarantee in all these other models.
That is another reason why the fully private setting is the most general one.
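To make the two constraints concrete, the following is a minimal Python sketch (ours, not part of the original paper) of the value-maximizer utility; the function name and the treatment of infeasible outcomes as -∞ are our own choices.

def value_maximizer_utility(v, p, B, tau):
    # Utility of a value maximizer: the obtained value v if both the budget
    # constraint (p <= B) and the RoS constraint (tau * p <= v) hold,
    # and -infinity otherwise (the outcome is infeasible for the agent).
    if p <= B and tau * p <= v:
        return v
    return float("-inf")

# Example: value 10, payment 4, budget 5, target ratio 2 is feasible (utility 10);
# raising the payment to 6 violates both constraints.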
Revenue maximization and benchmarks.
This paper considers the revenue maximization objective for the auctioneer when designing truthful mechanisms for value maximizers; a mechanism is truthful if, for any agent i, reporting the true private information always maximizes her utility regardless of the other agents' reported profiles, and the utility of any truthtelling agent is non-negative.
For the revenue maximization objective, there are usually two benchmarks, called “first-best" and “second-best".
The first-best benchmark refers to the optimal objective we could get if we knew all the information. In our setting, it is max_x ∑_i min{B_i, v_i(x)/τ_i}.
For the traditional quasi-linear utility function, the first-best benchmark is simply the maximum social welfare one can generate, max_x ∑_i v_i(x). It is known that in the traditional setting this benchmark cannot be achieved, or even approximated to within a constant ratio.
Thus the research there mainly focuses on the second-best benchmark, which refers to the setting where the auctioneer additionally knows the distribution of each agent's private information and designs a mechanism that maximizes the expected revenue with respect to the known distribution. The benchmark in <cit.> is also this second-best benchmark, and they provide an optimal mechanism when the number of agents is at most two.
It is clear that the first-best benchmark is more ambitious and more robust since it is prior-free.
The literature focuses on the second-best in the traditional setting because the first-best is not even approximable there.
In our new value maximizer agents setting, we believe it is more important to investigate if we can achieve the first-best approximately.
Thus, we focus on the first-best benchmark in this paper. This is significantly different from that of <cit.>.
§.§ Our Results
Problem Formulation.
The formal description of the auction model considered in the paper follows.
One auctioneer wants to distribute m heterogeneous items among n agents.
Each agent i∈ [n] has a value v_ij per unit for each item j∈ [m] and a budget B_i, representing the maximum amount of money agent i can pay.
The agent also has a RoS constraint τ_i, representing the minimum ratio of the received value (return) to the total payment (spend) that she can tolerate.
As mentioned above, several settings are considered in the paper, depending on which of (B, v, τ) are public and which are private.
Agents are value maximizers subject to their budget constraints and RoS constraints (see <ref> for the formula).
The auctioneer aims to design a truthful mechanism that maximizes the total payment.
We investigate our model in a few important auction environments.
We study both indivisible and divisible items, and both the single-item and the multiple-item auctions.
When there are multiple items, we consider the two most important valuation classes: unit demand and additive.
The unit-demand model captures the setting where the items are exclusive to each other, so each agent wants at most one item.
Additive models are the setting where an agent can get multiple items and their values add up. We leave the more generic valuation function, such as submodular or sub-additive, to future study.
In the fully private information setting, we obtain constant approximation truthful mechanisms for both the single-item auction and the multiple items auction among unit demand agents.
This is quite surprising given the fact that such a constant approximation to the first-best benchmark is proved to be impossible for the classic quasi-linear utility agents even in the single-item setting.
The intuitive reason is that the agent is less sensitive to the payment in the value maximizer setting than in the quasi-linear utility setting and thus the auctioneer has a chance to extract more revenue from them.
But this does not imply that designing a good truthful mechanism is easy.
Quite the opposite, we need to bring in some new design and analysis ideas since the truthfulness here significantly differs from the traditional one as agents’ utility functions are different.
For the additive valuation, we provide constant approximation only under an additional large market assumption. This is obtained by observing an interesting and surprising relationship between our model and the model of “liquid welfare for budget-constrained agents".
We also consider the partially private information setting.
For the public-budget setting (but with private value and target ratio), we obtain an improved constant-approximation truthful mechanism for the single-item environment.
The improved mechanism for the single-item setting has a much better approximation since we cleverly use the public budget information in the mechanism.
For the additive valuation without the large market assumption, we also investigate the problem
in the private target ratio (but public budget and valuation) setting, which is the setting used in <cit.>. There we obtain an Ω(1/√(n))-approximation truthful mechanism. In the additive setting, an agent may get multiple items, so the payment she saves on one item can be used for other items, which is impossible in the unit-demand setting.
For this reason, agents may become somewhat more sensitive to payment, which leads to an Ω(1/√(n)) approximation.
§.§ Related Works
The most relevant work is <cit.>, in which they also aim to design a revenue-maximizing Bayesian mechanism for value maximizers with a generic valuation and utility function under budget and RoS constraints. As mentioned above, they focus on the setting where each agent's only private information is the target ratio, which is referred to as the partially private setting in our paper. They show that under the second-best benchmark, an optimal mechanism can be obtained for the two-agent case.
Another closely related line of work is “liquid welfare for budget-constrained agents” <cit.>.
We observe an interesting and surprising relationship between these two models since the liquid welfare benchmark is almost identical to the first-best benchmark in our setting. Therefore, some algorithmic ideas there can be adapted here.
However, there are two significant differences: (1) the objective for the auctioneer is (liquid) welfare rather than revenue.
This difference mainly affects the approximation;
(2) the bidders are quasi-linear utility maximizers (within the budget constraint) rather than value maximizers.
This difference mainly affects truthfulness.
Observing this relation and difference, some auction design ideas from their literature inspire part of our methods.
Furthermore, building deeper connections or ideal black-box reductions between these two models would be an interesting future direction.
The model of budget feasible mechanism <cit.> also models the agent as a value maximizer rather than a quasi-linear utility maximizer as long as the payment is within the budget.
The difference is that the value maximizer agent is the auctioneer rather than the bidders.
§.§ Paper Organization
In the main body, we focus on the fully private setting, where all the budgets, valuations, and target ratios are private.
We first consider the single-item auction in <ref> and then extend the algorithmic ideas to the multiple items auction for unit demand agents in <ref>.
Both of the two environments can be constant-approximated.
Finally, we turn to the multiple items auction for additive agents, the most challenging environment, and show a constant approximation under an assumption on the budgets in <ref>.
For the partially private setting, due to space limits, we defer all the results to the appendix. In <ref>, we show that a better constant approximation for the single-item environment can be obtained when the budgets become public. Then we leverage this new mechanism to give an Ω(1/√(n)) approximation for the multiple items auction among additive agents in <ref>.
§ WARM-UP: SINGLE ITEM AUCTION
Let us warm up by considering the environment where the auctioneer has only one item to sell. Our first observation is that if the item is indivisible, we can achieve a truthful optimal solution by directly assigning the item to the agent k with the maximum min{B_k, v_k/τ_k} and charging her that value. Basically, this is a first-price auction with respect to min{B_i, v_i/τ_i}.
The optimality is obvious.
For truthfulness, since min{ B_i,v_i/τ_i} is the maximum willingness-to-pay of each agent i, if someone other than k misreports the profile and gets assigned the item, one of the two constraints must be violated.
On the other hand, misreporting a lower profile can only lead to a lower possibility of winning but without any benefit.
There exists a truthful optimal mechanism for the single indivisible item auction.
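The following Python sketch (ours, not a float from the paper) implements the indivisible single-item rule just described; we assume ties are broken by agent index.

def single_indivisible_item(B, v, tau):
    # Assign the item to the agent with the largest maximum willingness-to-pay
    # min{B_i, v_i / tau_i}, and charge her exactly that amount.
    wtp = [min(B[i], v[i] / tau[i]) for i in range(len(B))]
    winner = max(range(len(B)), key=lambda i: wtp[i])
    return winner, wtp[winner]

# Example: B = [3, 10], v = [8, 6], tau = [2, 3] gives wtp = [3, 2],
# so agent 0 wins and pays 3.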
The above theorem gives some intuition for the divisible item environment. If the indivisible optimum is at least a constant fraction c of the divisible optimum, selling the item indivisibly can give a constant approximation. We refer to this idea as indivisibly selling in the following.
In contrast, for the case that the indivisible optimum is smaller than a constant fraction c of the divisible optimum (denoted by OPT in the following), we have min{B_i, v_i/τ_i} ≤ c·OPT for any agent i.
This property implies that the random sampling technique can be applied here. More specifically, we randomly divide the agents into two groups, gather information from one group, and then use the information to guide the item's selling price for the agents in the other group.
Since in an optimal solution each agent does not contribute much to the objective, a constant approximation can be proved by some concentration inequalities.
Based on the above two strategies, we give our mechanism in <ref>.
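Since the mechanism itself is only referenced as a float above, the following Python sketch is a hedged reconstruction assembled from the quantities that appear in the analysis below: the branch probabilities 9/13 and 4/13, the reserve price r = (1/4)·∑_{i∈S} p_i(x^S), and the rule that an arriving agent in R may buy up to the remaining fraction if she can afford the reserve price. It should be read as an illustration under these assumptions, not as the authors' exact pseudocode.

import random

def optimal_divisible_revenue(agents):
    # Optimal revenue for one divisible item: serve agents in decreasing
    # v/tau order; each pays at rate v_i/tau_i per unit until her budget
    # B_i or the supply runs out.
    remaining, revenue = 1.0, 0.0
    for B, v, tau in sorted(agents, key=lambda a: a[1] / a[2], reverse=True):
        x = min(remaining, B * tau / v)   # fraction exhausting budget or supply
        revenue += x * v / tau
        remaining -= x
    return revenue

def combined_single_item(agents):
    if random.random() < 9 / 13:
        # Indivisibly selling: the winner pays her willingness-to-pay.
        return max(min(B, v / tau) for B, v, tau in agents)
    # Random sampling: learn a reserve price from S, then sell only to R.
    in_S = [random.random() < 0.5 for _ in agents]
    S = [a for a, s in zip(agents, in_S) if s]
    R = [a for a, s in zip(agents, in_S) if not s]
    r = optimal_divisible_revenue(S) / 4  # reserve price per unit
    remaining, revenue = 1.0, 0.0
    for B, v, tau in R:                   # agents in R arrive in a fixed order
        if r > 0 and v / tau >= r:
            x = min(remaining, B / r)     # what she can afford at price r
            revenue += x * r
            remaining -= x
    return revenue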
<ref> is feasible, truthful, and achieves an expected approximation ratio of 1/52.
The feasibility is obvious. Firstly, since x_i≤α when each agent i comes, ∑_i∈ [n]x_i≤ 1. Secondly, due to x_i≤B_i/r for each agent i, p_i=x_i · r ≤ B_i. Thirdly, for each agent i, we have x_iv_i≥ p_iτ_i because an agent buys some fractions of the item and gets charged only if r≤v_i/τ_i.
Then we show that regardless of which procedure is executed, the mechanism is truthful. The truthfulness of the first procedure is proved by <ref> directly. For the second procedure, we show that agents in neither S nor R have the incentive to lie. For an agent in S, she will not be assigned anything, and therefore, misreporting her information cannot improve her utility; while for the agents in R, they are also truthtelling because their reported information determines neither the arrival order nor the reserve price, and misreporting a higher v_i/τ_i (resp. a larger B_i) to buy more fractions of the item must violate the RoS (resp. budget) constraint of agent i.
Finally, we analyze the approximation ratio. Let (x^*, p^*) be an optimal solution. Use OPT and ALG to denote the optimal payment and our total payment, respectively. Without loss of generality, we can assume that p_i^* = x_i^*·v_i/τ_i ≤ B_i.
Clearly, if there exists an agent l with p_l^* ≥ OPT/36, we can easily bound the expected total payment by the first procedure:
E(ALG) ≥ (9/13)·max_{i∈[n]} min{B_i, v_i/τ_i} ≥ (9/13)·min{B_l, v_l/τ_l} ≥ (9/13)·p_l^* ≥ OPT/52.
Otherwise, we have p_i^* < OPT/36 for all i∈[n]. Then according to the concentration lemma proved in <cit.>, we can establish the relationship between ∑_{i∈S} p_i^* and OPT in the second procedure:
Pr[ OPT/3 ≤ ∑_{i∈S} p_i^* ≤ 2·OPT/3 ] ≥ 3/4.
Namely, with probability at least 3/4, both ∑_{i∈S} p_i^* and ∑_{i∈R} p_i^* are in [OPT/3, 2·OPT/3].
Let us focus on the second procedure and consider a subset S such that ∑_{i∈S} p_i^* ∈ [OPT/3, 2·OPT/3]. We distinguish two cases based on the final remaining fraction of the item.
If the item is sold out, our payment is at least (1/4)·∑_{i∈S} p_i(x^S). Since (x^S, p(x^S)) is the optimal solution for distributing the item among the agents in S, we have
ALG ≥ (1/4)·∑_{i∈S} p_i(x^S) ≥ (1/4)·∑_{i∈S} p_i^* ≥ OPT/12.
If the procedure does not sell out the item, then for any agent i∈R who does not exhaust her budget, v_i/τ_i < r = (1/4)·∑_{i∈S} p_i(x^S). Using T ⊆ R to denote such agents, we have
OPT/3 ≤ ∑_{i∈R} p_i^* ≤ ∑_{i∈R∖T} B_i + ∑_{i∈T} p^*_i ≤ ALG + ∑_{i∈T} (v_i/τ_i)·x_i^*
≤ ALG + (1/4)·∑_{i∈S} p_i(x^S)·∑_{i∈T} x_i^* ≤ ALG + (1/4)·∑_{i∈S} p_i(x^S)
≤ ALG + OPT/4.
We have ALG ≥ OPT/12 from the above inequality.
Thus, in either case, ALG is at least OPT/12 under such a subset S. Then according to <ref>, we can complete the proof:
E(ALG) ≥ (4/13)·(3/4)·(OPT/12) = OPT/52.
§ MULTIPLE ITEMS AUCTION FOR UNIT DEMAND AGENTS
This section considers the environment where the auctioneer sells multiple items to unit-demand agents, a set of agents who each desires to buy at most one item.
We extend the results in the last section and show that a constant approximation can still be obtained.
Similar to the study of the single-item auction, <ref> starts from the indivisible goods environment and shows a 1/2-approximation.
For the divisible goods environment, our mechanism is also a random combination of the “indivisibly selling" procedure and the “random sampling" procedure. However, the mechanism and its analysis are much more complicate than that for single item environment and this section is also the most technical part of this paper.
We describe the indivisibly selling procedure in <ref>. For the random sampling procedure, the multiple-item setting needs a variant of greedy matching (<ref>) to compute the reserved prices of each item and <ref> has a discussion about this algorithm. Finally, <ref> analyzes the combined mechanism (<ref>). In order to analyse the approximation ratio of <ref>, we introduce <ref>, a non-truthful mechanism and purely in analysis, to bridge <ref> and <ref>.
§.§ Indivisibly Selling
We first prove the claimed truthful constant approximation in the scenario of selling indivisible items and then give two corollaries to show the performance of applying the indivisibly selling idea to distributing divisible items.
Consider the indivisible goods setting. For each agent-item pair (i,j), define its weight w_ij to be the maximum money that we can charge agent i if assigning item j to her, i.e., w_ij=min{B_i,v_ij/τ_i}. Since items are indivisible and each agent only wants to buy at most one item, a feasible solution is essentially a matching between the agent set and the item set, and the goal is to find a maximum weighted matching.
However, the algorithm to output the maximum weighted matching is not truthful.
We observe that a natural greedy matching algorithm can return a constant approximation while retaining the truthfulness.
The mechanism is described in <ref>.
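Since the float itself is not reproduced here, the following Python sketch is a hedged rendering of the greedy matching mechanism as described in the surrounding text: agent-item pairs are scanned in decreasing lexicographical order of (min{B_i, v_ij/τ_i}, v_ij), a pair is matched when both sides are still free, and a matched agent pays min{B_i, v_ij/τ_i}.

def greedy_matching_indivisible(B, v, tau):
    n, m = len(B), len(v[0])
    pairs = sorted(
        ((min(B[i], v[i][j] / tau[i]), v[i][j], i, j)
         for i in range(n) for j in range(m)),
        reverse=True,
    )
    matched_agents, matched_items = set(), set()
    allocation, payment = {}, [0.0] * n
    for w, _, i, j in pairs:
        if w > 0 and i not in matched_agents and j not in matched_items:
            matched_agents.add(i)
            matched_items.add(j)
            allocation[i] = j       # agent i receives item j
            payment[i] = w          # and pays min{B_i, v_ij / tau_i}
    return allocation, payment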
<ref> is feasible, truthful and achieves an approximation ratio of 1/2 when items are indivisible.
The feasibility is obvious since min{B_i,v_ij/τ_i} is the maximum willingness-to-pay of agent i when adding (i,j) into the matching.
To prove the truthfulness, we show that once an agent misreports the profile and obtains a higher value, either the budget constraint or the RoS constraint must be violated. Since the agent-item pairs are sorted in decreasing lexicographical order of (min{B_i, v_ij/τ_i}, v_ij), the matched item value of agent i is non-increasing as long as none of the related agent-item pairs are ranked higher. Thus, once the agent misreports a profile (B_i',
v_i', τ_i') and gets assigned an item k with a higher value, the rank of the pair (i,k) must get improved, implying that min{B_i', v'_ik/τ_i'} > min{B_i, v_ik/τ_i}. Since the mechanism charges this agent min{B_i', v'_ik/τ_i'} under the new reported profile, either the budget constraint or the RoS constraint must be unsatisfied.
Finally, we prove the approximation ratio by the standard analysis of the greedy matching algorithm. For each pair (i,j) in an optimal matching, there must exist a pair (either (i,j') or (i',j)) in the greedy matching whose weight is at least w_ij. Thus, the maximum matching weight is at most twice the weight of our matching, and <ref> gets a 1/2-approximation.
Consider a feasible solution z = {z_ij}_{i∈[n], j∈[m]} (not necessarily truthful) for the multiple divisible items auction among unit-demand agents. We assume that each unit-demand agent i has at most one variable z_ij > 0. Define P_j(z) := ∑_{i: z_ij>0} p_i to be the total payment related to item j. We observe the following two corollaries.
If solution z is an α-approximation and, for any item j, max_{i∈[n]} min{(v_ij/τ_i)·z_ij, B_i} ≥ β·P_j(z), then running <ref> directly obtains an approximation ratio of αβ/2.
For a constant β∈[0,1], define the item subset H(z,β) ⊆ [m] to be the set of items with max_{i∈[n]} min{(v_ij/τ_i)·z_ij, B_i} ≥ β·P_j(z). Running <ref> directly obtains a total payment of at least (β/2)·∑_{j∈H(z,β)} P_j(z) for any β∈[0,1].
§.§ Foundations of Random Sampling
The subsection explores generalizing the random sampling procedure in <ref> to multiple items auction.
We first randomly sample half of the agents and investigate how much revenue can be earned per unit of each item if the auctioneer only sells the items to these sampled agents. Recall that the mechanism does not actually distribute any item to the sampled agents.
Then, the auctioneer sets the reserve price of each item based on the investigated revenues and sells them to all the remaining agents.
More specifically, let these agents arrive in a fixed order.
When an agent arrives, she is allowed to buy any remaining fraction of any item as long as she can afford the reserve price.
It is easy to observe that the mechanism is still truthful according to the same argument in the proof of <ref>: for a sampled agent, she will not be assigned anything, and therefore, she does not have any incentive to lie; while for the agents that do not get sampled, they are also truthtelling because neither the arrival order nor the reserve prices are determined by their reported profiles and a fake profile that can improve the agent's obtained value must violate at least one constraint.
The key condition that random sampling can achieve a constant approximation ratio is that the revenue earned by each item among the sampled agents is (w.h.p.) close to its contribution to the objective in an optimal solution or a constant approximation solution; otherwise, there is no reason that the reserve prices are set based on the investigated revenues. Unfortunately, unlike the single-item environment, we cannot guarantee that an optimal solution of the multiple items auction satisfies this condition.
Thus, to obtain such a nice structural property, we present an algorithm based on greedy matching and item supply clipping in <ref>. Note that this algorithm is untruthful, and we only use it to simulate the auction among the sampled agents. We first prove that it obtains a constant approximation, and then show several nice structural properties of the algorithm.
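The algorithm itself appears only as a float, so the following Python sketch is a hedged reconstruction from the properties used in its analysis: pairs are processed greedily by weight w_ij = min{B_i, v_ij/τ_i}; an item is no longer sold once its remaining fraction drops below the clipping parameter 1/2; and a unit-demand agent who buys item j gets min{remaining fraction, B_i/w_ij} and pays w_ij per unit.

def greedy_with_clipping(B, v, tau, clip=0.5):
    n, m = len(B), len(v[0])
    w = [[min(B[i], v[i][j] / tau[i]) for j in range(m)] for i in range(n)]
    pairs = sorted(((w[i][j], i, j) for i in range(n) for j in range(m)),
                   reverse=True)
    remaining = [1.0] * m          # R_j: unsold fraction of each item
    bought = [False] * n           # unit demand: at most one item per agent
    x = [[0.0] * m for _ in range(n)]
    payment = [0.0] * n
    for wij, i, j in pairs:
        if wij <= 0 or bought[i] or remaining[j] < clip:
            continue               # item supply clipping / agent already served
        x[i][j] = min(remaining[j], B[i] / wij)
        payment[i] = x[i][j] * wij
        remaining[j] -= x[i][j]
        bought[i] = True
    return x, payment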
The approximation ratio of <ref> is 1/6.
Use (x^* = {x_ij^*}, p^* = {p_i^*}) and (x = {x_ij}_{i∈[n], j∈[m]}, p = {p_i}) to represent the allocations and the payments in an optimal solution and in <ref>'s solution, respectively. Without loss of generality, we can assume that p_i^* = ∑_{j∈[m]} x_ij^* w_ij ≤ B_i for any i∈[n].
For each item j∈[m], define A_j to be the set of agents who buy some fractions of item j in the optimal solution, i.e., A_j := { i∈[n] | x_ij^* > 0 }, and then, based on x, we partition A_j into three groups:
A_j^(1) = { i∈[n] | x_ij > 0 },
A_j^(2) = { i∈[n] | x_ij = 0 because R_j ≤ 1/2 },
A_j^(3) = { i∈[n] | x_ij = 0 because agent i has bought another item}.
Note that if some agent does not buy item j in x due to both of the two reasons, we add the agent into an arbitrary one of A_j^(2) and A_j^(3).
Use OPT and ALG to denote the objective values of the optimal solution and our solution, respectively. Based on the partition mentioned above, we split the optimal objective into three parts:
OPT = ∑_{i∈[n], j∈[m]} x_ij^* w_ij = ∑_{j∈[m]} ∑_{i∈A_j^(1)} x_ij^* w_ij + ∑_{j∈[m]} ∑_{i∈A_j^(2)} x_ij^* w_ij + ∑_{j∈[m]} ∑_{i∈A_j^(3)} x_ij^* w_ij .
In the following, we analyze the three parts one by one and show that each part is at most twice ALG, which implies that ALG is a 1/6-approximation.
Due to the definition of A_j^(1), for each (i,j) pair in the first part, <ref> assigns some fractions of item j to agent i, and therefore, x_ij≥min{1/2,B_i/w_ij}. Since x_ij^*≤ 1 and we assume w.l.o.g. that x_ij^*≤B_i/w_ij, we have
∑_{j∈[m]} ∑_{i∈A_j^(1)} x_ij^* w_ij ≤ ∑_{j∈[m]} ∑_{i∈A_j^(1)} min{w_ij, B_i} ≤ ∑_{j∈[m]} ∑_{i∈A_j^(1)} 2·w_ij·min{1/2, B_i/w_ij}
≤ ∑_{j∈[m]} ∑_{i∈A_j^(1)} 2·x_ij·w_ij ≤ 2·ALG.
For each item j with non-empty A_j^(2), <ref> must sell at least half of the item, and then due to the greedy property of the algorithm, we have
∑_{i∈A_j^(2)} x_ij^* w_ij ≤ 2·P_j(x),
recalling that P_j(x) = ∑_{i: x_ij>0} p_i. Thus,
∑_{j∈[m]} ∑_{i∈A_j^(2)} x_ij^* w_ij ≤ ∑_{j∈[m]} 2·P_j(x)
≤ 2·ALG.
Finally, for each item j and agent i∈A_j^(3), suppose that agent i buys some fractions of item j' in solution x. Due to the greedy property, w_ij ≤ w_ij'. Hence,
x^*_ij·w_ij ≤ min{B_i, w_ij'} ≤ 2·min{B_i/w_ij', 1/2}·w_ij' ≤ 2·x_ij'·w_ij' = 2·p_i.
Summing over these (i,j) pairs,
∑_{j∈[m]} ∑_{i∈A_j^(3)} x_ij^* w_ij = ∑_{i∈[n]} ∑_{j: i∈A_j^(3)} x_ij^* w_ij ≤ ∑_{i∈[n]} 2·p_i = 2·ALG.
Combining <ref>, <ref> and <ref> completes the proof.
Note that the item supply clipping parameter 1/2 in <ref> can be replaced by any other constant in (0,1). By setting this parameter to √(2)/(1+√(2)), the algorithm gets an approximation ratio of 1/(3+2√(2)).
For an agent subset S⊆[n], use (x^S, p^S) to denote the allocation and the payments if using <ref> to distribute all the items to the agents in S.
We claim the following lemma.
For any agent subset S⊆[n], we have
* agent payment monotonicity: p_i^S ≥ (1/2)·p_i, ∀ i∈S.
* selling revenue monotonicity: P_j(x^S) ≤ 2·P_j(x), ∀ j∈[m].
Use R_j(i,k) and R_j^S(i,k) to denote the remaining fractions of item j at the end of pair (i,k)'s iteration when running <ref> for all the agents and for the agent subset S, respectively. Note that if i∉S, the corresponding iterations are viewed as empty iterations. We first show a key lemma that helps prove the two properties.
Consider an agent i and let k and k' be the items that she buys in x and x^S, respectively. We have, ∀ j∈[m],
max{ R_j(i,k),1/2}≤max{ R_j^S(i,k'),1/2}.
We first show that for any pair (i,k) and any item j,
max{ R_j(i,k),1/2}≤max{ R_j^S(i,k),1/2},
Assume for contradiction that <ref> is violated for some agent-item pairs. Let (i,k) be the first such pair in the order stated in <ref>. Notice that in this iteration, only the remaining fraction of item k could change. We distinguish three cases:
(1) x_ik^S=0,
(2) x_ik^S>0 and x_ik > 0,
and (3) x_ik^S>0 and x_ik = 0.
With some abuse of notation, we use R_j^-(i,k) (resp. R_j^S-(i,k)) to denote the remaining fraction of item j at the beginning of the iteration.
For case (1), the remaining fraction R_k^S remains unchanged. Thus,
max{ R_k^-(i,k),1/2}≥max{ R_k(i,k),1/2} > max{ R_k^S(i,k),1/2} = max{ R_k^S-(i,k),1/2},
contradicting the assumption that (i,k) is the first such pair.
For case (2), we have x_ik^S=min{R_k^S-(i,k), B_i/w_ik} and x_ik=min{R_k^-(i,k), B_i/w_ik} according to the algorithm. If x_ik = R_k^-(i,k), then clearly, R_k^-(i,k) becomes 0 and <ref> certainly holds; while if x_ik=B_i/w_ik, we have
x_ik^S ≤B_i/w_ik= x_ik,
and R_k^S(i,k) = R_k^S-(i,k) - x_ik^S≥ R_k^-(i,k)-x_ik=R_k(i,k),
contradicting the definition of pair (i,k).
For case (3), if x_ik = 0 is due to R_j^-(i,k)<1/2, it is impossible that <ref> gets violated. Hence, the only reason that x_ik = 0, in this case, is that agent i has bought another item k'. This implies that in the iteration of pair (i,k'), we have x_ik'^S=0 and x_ik' > 0. Since agent i had not bought any item that time, the only reason for x_ik'^S=0 is that R_k'^S-(i,k')< 1/2. Due to the definition of (i,k) and the fact that (i,k') is in front of (i,k) in the order, we have
R_k'^-(i,k') ≤max{ R_k'^-(i,k'),1/2}≤max{ R_k'^S-(i,k'),1/2} = 1/2,
contradicting to x_ik' > 0.
Thus, <ref> holds for any agent-item pair. Then due to the same argument in the analysis of case (3) above, we see that (i,k') must be in front of (i,k) in the order, implying that R_j^S(i,k)≤ R_j^S(i,k'). Finally,
max{ R_j(i,k),1/2}≤max{ R_j^S(i,k),1/2}≤max{ R_j^S(i,k'),1/2}.
We build on <ref> to prove the two properties one by one.
Consider an agent i∈S and let k and k' be the items that she buys in x and x^S, respectively (w.l.o.g., we can assume that each agent always buys something by adding some dummy items with value 0).
Due to <ref> and the greedy property of <ref>, we have w_ik'≥ w_ik. Thus,
p_i^S = w_ik'· x^S_ik'≥ w_ik'·min{B_i/w_ik', 1/2}
≥ w_ik·1/2·min{B_i/w_ik, 1 }≥ w_ik·1/2· x_ik
≥1/2p_i,
which proves the agent payment monotonicity.
Now we prove the selling revenue monotonicity.
Consider an arbitrary item j. Use A_j^S and A_j to denote the agents who buy some fractions of item j in solutions x^S and x, respectively. Further, let l^S and l be the last buyers in A_j^S and A_j, respectively. According to the assignment rule in the algorithm, for each agent i∈ A_j^S ∩ A_j ∖{l}, we have
x_ij^S ≤ B_i/w_ij = x_ij;
while for agent l, similar to the analysis in the last paragraph,
x_lj^S ≤ min{B_l/w_lj, 1} ≤ 2·min{B_l/w_lj, 1/2} ≤ 2·x_lj.
Thus, if A_j^S ⊆ A_j, clearly, we have
P_j(x^S) = ∑_{i∈A_j^S} x_ij^S w_ij ≤ 2·∑_{i∈A_j^S} x_ij w_ij ≤ 2·P_j(x).
It remains to show the case that A_j^S ∖ A_j ≠ ∅. For an agent i∈ A_j^S ∖ A_j, we have x_ij^S > 0 but x_ij = 0. Again, due to <ref>, we see the only reason is that, in the process of computing solution x, the remaining fraction of item j in that iteration is less than 1/2; otherwise, agent i must buy an item with a larger weight in solution x^S. Then, due to the greedy property, we have, ∀ i∈ A_j^S ∖ A_j, w_ij ≤ min_{i'∈A_j} w_i'j = w_lj.
Thus, the property can be proved:
P_j(x^S) = ∑_{i∈A_j^S ∩ A_j ∖{l}} x_ij^S w_ij + ∑_{i∈A_j^S ∖ A_j} x_ij^S w_ij + x_lj^S w_lj
≤ ∑_{i∈A_j^S ∩ A_j ∖{l}} x^S_ij w_ij + (∑_{i∈(A_j^S ∖ A_j) ∪{l}} x^S_ij)·w_lj
≤ ∑_{i∈A_j^S ∩ A_j ∖{l}} x^S_ij w_ij + (1 − ∑_{i∈A_j^S ∩ A_j ∖{l}} x^S_ij)·w_lj
≤ ∑_{i∈A_j^S ∩ A_j ∖{l}} x_ij w_ij + (1 − ∑_{i∈A_j^S ∩ A_j ∖{l}} x_ij)·w_lj
≤ 2·P_j(x),
where the last inequality used the fact that at least half of the item has been sold out in solution x.
By <ref>, we have the following corollary.
Randomly dividing all the agents with equal probability into set S and R, we have
E(∑_{j∈[m]} P_j(x^S)) = E(∑_{i∈S} p^S_i) ≥ (1/2)·E(∑_{i∈S} p_i) = (1/4)·∑_{i∈[n]} p_i ≥ (1/4)·∑_{j∈[m]} P_j(x).
§.§ Final Mechanism
This subsection states the final mechanism, which is a random combination of the indivisibly selling idea and the random sampling idea. To streamline the analysis, we first introduce an auxiliary mechanism which is constant-approximate but not truthful, and then show it can be altered to a truthful mechanism by losing only a constant factor on the approximation ratio.
<ref> obtains a constant approximation ratio.
Recall that H(z,β) := { j∈[m] | max_{i∈[n]} min{(v_ij/τ_i)·z_ij, B_i} ≥ β·P_j(z) }, as defined in <ref>.
To prove <ref>, we partition all the items into two sets: H(z,1/144) and H̄(z,1/144) = [m] ∖ H(z,1/144). <ref> directly implies that the first procedure (<ref>) guarantees our objective value is at least a constant fraction of ∑_{j∈H(z,1/144)} P_j(z).
The revenue obtained by the first procedure in <ref> is at least (1/288)·∑_{j∈H(z,1/144)} P_j(z).
For the second procedure, we show that ∑_{j∈H̄(z,1/144)} P_j(z) can be bounded by the total payment obtained by this procedure. More specifically, we prove the following technical lemma.
The expected revenue obtained by the second procedure in <ref> is at least
(1/192)·∑_{j∈H̄(z,1/144)} P_j(z) − (7/96)·∑_{j∈H(z,1/144)} P_j(z).
Let F and D be the set of items that are sold out and the set of agents that use up their budgets in our solution, respectively. According to <ref>, for a pair (i,k(i)), if i∉D and k(i)∉F,
w_{i,k(i)} < r_{k(i)} = (1/12)·P_{k(i)}(x^S).
We observe two lower bounds on the objective value of our solution: ALG ≥ ∑_{j∈F} (1/12)·P_j(x^S),
and
ALG ≥ ∑_{i∈D} B_i ≥ ∑_{j∉F} ∑_{i∈D} z_ij w_ij = ∑_{j∉F} ( ∑_{i∈R} z_ij w_ij − ∑_{i∈R∖D} z_ij w_ij )
≥ ∑_{j∉F} max{0, ( ∑_{i∈R} z_ij w_ij − (1/12)·P_j(x^S) )},
where the last inequality used <ref>.
For simplicity, use P_j(z ∩ S) to denote ∑_{i∈S} z_ij w_ij. Combining the two lower bounds, we have
2·ALG ≥ ∑_{j∈F} (1/12)·P_j(x^S) + ∑_{j∉F} max{0, ( P_j(z ∩ R) − (1/12)·P_j(x^S) )},
and thus,
2·E(ALG) ≥ ∑_{j∈[m]} E( 1_{j∈F}·(1/12)·P_j(x^S) + 1_{j∉F}·( P_j(z ∩ R) − (1/12)·P_j(x^S) ) ),
where 1_{(·)} is the indicator function of the event (·).
According to the definition of H̄(z,1/144), Chebyshev's inequality and the concentration lemma <cit.>, for any item j∈H̄(z,1/144), we have
Pr[ (1/3)·P_j(z) ≤ P_j(z ∩ S) ≤ (2/3)·P_j(z) ] ≥ 15/16,
which implies that, with high probability,
P_j(z ∩ R) − (1/12)·P_j(x^S) ≥ (1/3)·P_j(z) − (1/12)·P_j(x^S) ≥ (1/12)·P_j(x^S),
where the last inequality used the selling revenue monotonicity.
Use Π_j to denote the event that the sampled subset S satisfies (1/3)·P_j(z) ≤ P_j(z ∩ S) ≤ (2/3)·P_j(z). Combining <ref> and <ref>,
2·E(ALG) ≥ ∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( 1_{j∈F}·(1/12)·P_j(x^S) + 1_{j∉F}·( P_j(z ∩ R) − (1/12)·P_j(x^S) ) | Π_j )
≥ (1/12)·∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j ).
We continue to find a lower bound on ∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j ). Observe that
∑_{j∈[m]} E( P_j(x^S) ) = ∑_{j∈H(z,1/144)} E( P_j(x^S) ) + ∑_{j∈H̄(z,1/144)} E( P_j(x^S) )
= ∑_{j∈H(z,1/144)} E( P_j(x^S) ) + ∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j )
+ ∑_{j∈H̄(z,1/144)} Pr[¬Π_j]·E( P_j(x^S) | ¬Π_j )
≤ ∑_{j∈H(z,1/144)} E( P_j(x^S) ) + ∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j )
+ (1/16)·∑_{j∈H̄(z,1/144)} E( P_j(x^S) | ¬Π_j )
≤ ∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j ) + 2·∑_{j∈H(z,1/144)} P_j(z)
+ (1/8)·∑_{j∈H̄(z,1/144)} P_j(z).
Combining the above inequality and <ref>, we get the lower bound:
∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j ) + 2·∑_{j∈H(z,1/144)} P_j(z) + (1/8)·∑_{j∈H̄(z,1/144)} P_j(z) ≥ (1/4)·∑_{j∈[m]} P_j(z),
which implies
∑_{j∈H̄(z,1/144)} Pr[Π_j]·E( P_j(x^S) | Π_j ) ≥ (1/8)·∑_{j∈H̄(z,1/144)} P_j(z) − (7/4)·∑_{j∈H(z,1/144)} P_j(z).
Thus, due to <ref>, we complete the proof:
E(ALG) ≥ (1/192)·∑_{j∈H̄(z,1/144)} P_j(z) − (7/96)·∑_{j∈H(z,1/144)} P_j(z).
Combining <ref>, <ref> and the probabilities set in <ref>,
E(ALG) ≥ (45/47)·(1/288)·∑_{j∈H(z,1/144)} P_j(z) + (2/47)·( (1/192)·∑_{j∈H̄(z,1/144)} P_j(z) − (7/96)·∑_{j∈H(z,1/144)} P_j(z) )
= (1/4512)·∑_{j∈[m]} P_j(z) ≥ OPT/27072,
where the last inequality used <ref>.
Finally, we present our final mechanism in <ref>. The only difference from <ref> is that in the last step of the second procedure, we let the agent choose any item she wants as long as she can afford the reserve price, and then charge her the maximum willingness-to-pay.
<ref> is feasible, truthful, and constant-approximate.
According to <ref>, the first procedure is feasible and truthful.
For the second procedure, the mechanism is truthful since any agent is charged her maximum willingness-to-pay.
Then according to the same argument in the proof of <ref>, we can prove the truthfulness.
The following focuses on analyzing the approximation ratio.
To this end, we couple the randomness in <ref> and <ref>.
The two algorithms are almost identical to each other except for one line and their randomness can be coupled
perfectly. If by the coupling of randomness, both algorithms execute the first procedure, they are exactly identical and thus <ref>
also applies to <ref>.
Now, by randomness, they both execute the second procedure. In the second procedure, we can further couple the randomness so that they randomly sample the same set S. Conditional on all these (they both execute the second procedure and sample the same set S), we prove that the revenue of <ref> is at least 1/4 of that of <ref>.
Let (x, p) and (x', p') be the two solutions of <ref> and <ref>, respectively, under the above conditions. Let ALG and ALG' be their revenues, respectively.
Use A_j' to denote the agents who buy some fractions of item j in solution x'. According to <ref>,
ALG' = ∑_{j∈[m]} ∑_{i∈A_j'} x_ij'·r_j.
For an item j, if the corresponding revenue in <ref>'s solution satisfies P_j(x) ≥ (1/2)·r_j, we have
∑_{i∈A_j'} x_ij'·r_j ≤ 2·P_j(x),
and then, summing over all such items,
∑_{j: P_j(x) ≥ (1/2)·r_j} ∑_{i∈A_j'} x_ij'·r_j ≤ ∑_{j: P_j(x) ≥ (1/2)·r_j} 2·P_j(x) ≤ 2·ALG.
For each item j with P_j(x) < (1/2)·r_j, we distinguish three cases for agents in A_j' based on (x, p):
(1) p_i=B_i,
(2) p_i<B_i and x_ij>0,
and (3) p_i<B_i and x_ij=0.
For case (1), clearly,
x_ij'r_j ≤ B_i = p_i .
For case (2), since P_j(x) < (1/2)·r_j, the remaining fraction of item j is at least 1/2 when <ref> lets agent i buy, and therefore, x_ij ≥ min{1/2, B_i/r_j}.
Since p_i < B_i, we have p_i = w_ij·x_ij ≥ r_j·min{1/2, B_i/r_j}. Then, due to x_ij' ≤ min{1, B_i/r_j},
x_ij'·r_j ≤ 2·p_i.
For case (3), suppose that agent i buys item k in solution x. Since the remaining fraction of item j is at least 1/2 and the agent always picks the most profitable part in <ref>, we have
min{1/2, B_i/r_j}·v_ij ≤ x_ik·v_ik, and hence min{1/2, B_i/r_j}·w_ij ≤ x_ik·w_ik.
Again, due to p_i < B_i, r_j ≤ w_ij and x_ij' ≤ min{1, B_i/r_j}, we have
(1/2)·x_ij'·r_j ≤ min{1/2, B_i/r_j}·w_ij ≤ x_ik·w_ik = p_i.
Due to <ref>, <ref> and <ref>, for an item with P_j(x) < (1/2)·r_j, in each of the three cases we have
x_ij'·r_j ≤ 2·p_i. Thus, summing over all such items and the corresponding agents,
∑_{j: P_j(x) < (1/2)·r_j} ∑_{i∈A_j'} x_ij'·r_j ≤ ∑_{j: P_j(x) < (1/2)·r_j} ∑_{i∈A_j'} 2·p_i ≤ 2·ALG.
Combining <ref> and <ref> proves ALG' ≤ 4·ALG.
Combining this with <ref>, we know that
the expected revenue obtained by the second procedure in <ref> is at least
(1/768)·∑_{j∈H̄(z,1/144)} P_j(z) − (7/384)·∑_{j∈H(z,1/144)} P_j(z).
Further combining with <ref>, which we argued also applies to <ref>, we have
that the expected revenue obtained by <ref> is at least
(45/53)·(1/288)·∑_{j∈H(z,1/144)} P_j(z) + (8/53)·( (1/768)·∑_{j∈H̄(z,1/144)} P_j(z) − (7/384)·∑_{j∈H(z,1/144)} P_j(z) )
= (1/5088)·∑_{j∈[m]} P_j(z) ≥ OPT/30528.
In the proof of <ref>, we have not tried to optimize the constants in our analysis in the interests of expositional simplicity.
The parameters (e.g. 45/47 and 1/144) in our algorithm and analysis can be easily replaced by some other constants in (0,1) to obtain another constant approximation ratio.
§ MULTIPLE ITEMS AUCTION FOR ADDITIVE AGENTS
This section studies the setting where the auctioneer has multiple items to sell and the agents are additive, that is, everyone can buy multiple items and obtain the sum value of the items.
This environment is more challenging than the previous one, and some algorithmic ideas introduced in the last section are hard to apply. For example, one of the most critical components of <ref> is indivisibly selling, which is based on the observation that selling indivisible goods to unit-demand agents is much easier than selling divisible goods. However, this is not true in the additive valuation environment. To quickly see this, suppose that we have an approximation mechanism for selling indivisible goods to additive agents. Then we can obtain a mechanism with almost the same approximation ratio by splitting each item into tiny sub-items and selling them indivisibly. Thus, in the additive valuation environment, selling indivisible items is harder than selling divisible items.
Fortunately, we find that the idea of random sampling still works in this environment. Due to the relationship between our model and the liquid welfare maximization model, the theoretical guarantee of the random sampling mechanism in <cit.> directly implies a constant approximation for our problem under a large market assumption on the agents' budgets (that is, B_i ≤ OPT/(m·c) for any agent i, where c is a sufficiently large constant). This part is technically simple. We only state the theorem and the high-level idea here and defer some details to <ref>.
There exists a truthful constant approximation for multiple items auction among additive agents under the large market assumption.
For an instance I = (B, v, τ) of our model, we can easily construct a liquid welfare maximization instance I' = (B', w'), where for each agent i, the budget is B_i' = B_i and the valuation is w_ij' = v_ij/τ_i for all j∈[m].
Since, given the same allocation, the maximum willingness-to-pay of an agent in I is exactly the agent's liquid welfare in I', we see that the two instances share the same offline optimal objective values.
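The reduction is straightforward to state in code; the following Python sketch (ours) simply builds the liquid welfare instance I' = (B', w') from I = (B, v, τ).

def reduce_to_liquid_welfare(B, v, tau):
    # B'_i = B_i and w'_ij = v_ij / tau_i: under any allocation, an agent's
    # maximum willingness-to-pay in I equals her liquid welfare in I'.
    B_prime = list(B)
    w_prime = [[v_ij / tau[i] for v_ij in row] for i, row in enumerate(v)]
    return B_prime, w_prime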
Our mechanism is simply running the random sampling mechanism proposed in <cit.> (see <ref> for a description of the mechanism) on the reduced instance I'.
<cit.> showed that when the agents are quasi-linear utility maximizers subject to budget constraints, the total revenue obtained by the mechanism is at least a constant fraction of the optimal objective.
We note that the behavior of a value maximizer in the random sampling mechanism is different from the behavior of a quasi-linear utility maximizer. Thus, we cannot directly say that the proof has been completed due to OPT(I) = OPT(I'). The mechanism lets the agents come in a fixed order and allows each arrived agent to buy any fraction of the items she wants at the reserve prices. A quasi-linear utility maximizer will never buy any fraction of the items with reserve prices higher than the valuations (over the target ratio). However, a value maximizer may be interested in buying such items, because the overall RoS constraint can still be satisfied even if, for some items, the purchase prices are higher than the valuations (over the target ratio).
We complete the proof by showing that the revenue obtained among value maximizers is always at least that obtained among quasi-linear utility maximizers. The key observation is that, when an agent comes, regardless of her type, she solves a knapsack-like optimization problem with constraints. In other words, the agent sorts all the available items in decreasing order of the ratio of the valuation w'_ij to the reserve price r_j, and then buys them sequentially as long as the constraints are satisfied. A quasi-linear utility maximizer will keep buying until the budget is exhausted or all the remaining items have w'_ij/r_j < 1; a value maximizer, in contrast, may not stop immediately once all the remaining items have w'_ij/r_j < 1 while she still has budget left; instead, she will continue buying until the budget is used up or the overall RoS constraint is about to be violated. According to the above argument, it is easy to verify that the sold fraction of each item is non-decreasing when the agent becomes a value maximizer, and thus the total revenue obtained among value maximizers is non-decreasing and is at least a constant fraction of OPT(I).
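The buying step described above can be made concrete with the following Python sketch (ours); the reserve prices r_j, the reduced valuations w'_ij = v_ij/τ_i, and the remaining supplies are assumed to be given, and we assume all reserve prices are positive. In these terms the overall RoS constraint reads "total payment ≤ total w'-value obtained".

def buy_at_reserve_prices(w_i, r, supply, B_i, value_maximizer=False):
    order = sorted(range(len(r)), key=lambda j: w_i[j] / r[j], reverse=True)
    budget_left, spent, value = B_i, 0.0, 0.0
    bought = [0.0] * len(r)
    for j in order:
        x = min(supply[j], budget_left / r[j])
        if w_i[j] < r[j]:
            if not value_maximizer:
                break                 # a quasi-linear maximizer stops here
            # a value maximizer keeps buying while the RoS slack allows it
            x = min(x, (value - spent) / (r[j] - w_i[j]))
        if x <= 0:
            continue
        bought[j] = x
        spent += x * r[j]
        value += x * w_i[j]
        budget_left -= x * r[j]
    return bought, spent

For any fixed reserve prices and supplies, the value-maximizer run buys at least as much of every item as the quasi-linear run, which is exactly the monotonicity used in the argument above.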
§ CONCLUSION AND OPEN PROBLEMS
We investigate the emerging value maximizer in recent literature but also significantly depart from their modeling.
We believe that the model and benchmark proposed in this paper are, on the one hand, more realistic and, on the other hand, friendlier to the AGT community.
We obtain a few non-trivial positive results, which indicate that this model and benchmark are indeed tractable.
There are also many more open questions left.
For additive valuation, it is open if we can get a constant approximation.
It is interesting to design mechanisms with better approximations for the single-item and unit-demand settings, since the constants in our current ratios are fairly large.
We also want to point out that no lower bound is obtained in this model, and thus any non-trivial lower bound is interesting.
We get a much better approximation ratio for the single-item environment when valuation and budget are public than in the fully private setting.
However, this is not a separation since we have no lower bound.
Any separation result for different information models in terms of public and private is interesting.
§ ACKNOWLEDGMENT
Chenyang Xu and Pinyan Lu were supported in part by Science and Technology Innovation 2030 –“The Next Generation of Artificial Intelligence” Major Project No.2018AAA0100900. Additionally, Chenyang Xu received support from the Dean's Fund of Shanghai Key Laboratory of Trustworthy Computing, East China Normal University.
Ruilong Zhang was supported by NSF grant CCF-1844890.
§ PARTIALLY PRIVATE SETTING
This section studies a partially private setting proposed by <cit.> where the budgets and values are all public. We first show that a better constant approximation for the single item auction can be obtained when the budgets become public in <ref>. Then we build on the new single item auction to give an Ω(1/√(n)) approximation for the multiple items auction among additive agents with public budgets and values in <ref>.
§.§ Single Item Auction with Public Budgets
This subsection improves upon the previous approximation in <ref> when the agents' budgets become public.
The high-level idea is similar to the uniform price auction for liquid welfare maximization proposed in <cit.>, which is allocating the item according to the maximum selling price such that if all agents buy the item at this price per unit, the item is guaranteed to be sold out.
Such a selling price is referred to be a market clearing price.
However, new truthfulness challenges arise when applying the market clearing price idea to our auction environment. For example, there may exist such a case that the market clearing price remains unchanged when some agent changes the reported profile. Then in this case, the agent may misreport a lower target ratio or a larger value to obtain more goods without violating any constraint.
To solve this issue, we use a simple scaling technique to partition the agents into two levels according to their reported profile and let the agents in the lower level buy the item at the market clearing price while the agents in the higher level have to pay a slightly higher price. The agent who determines the market clearing price always stays in the lower level, and she can obtain more goods only if she jumps into the higher level by increasing the reported v_i/τ_i. However, in that case, the agent needs to pay a higher price that violates her RoS constraint. Thus, the agent has no incentive to misreport a lower ratio. The detailed mechanism is stated in <ref>. This subsection aims to show the following theorem.
For any parameter ϵ>0, <ref> is truthful and achieves an approximation ratio of 1/(1+ϵ)(2+ϵ), which tends to 1/2 when ϵ approaches 0.
We first show that the allocation satisfies the budget constraint and the reported RoS constraint of each agent, then discuss the truthfulness, and finally give the analysis of the approximation ratio.
Given any reported profiles, for each agent i, we have p_i ≤ B_i and τ_i·p_i ≤ x_i·v_i.
We discuss case by case. If B[k] > w_{k+1}, for an agent i ≤ k, we have
p_i = B_i·C[k]/((1+ϵ)·B[k]) < B_i,
and
p_i/x_i = C[k] ≤ w_k ≤ w_i ≤ v_i/τ_i.
The first inequality in the second formula used the fact that k is an index with B[k] ≤ w_k and that w_k is an exponential multiple of 1+ϵ.
Consider the case that B[k] ≤ w_{k+1}. For an agent i ≤ k, clearly, the budget constraint is satisfied.
If w_i > w_{k+1}, since each w_i is an exponential multiple of 1+ϵ, we have w_i ≥ (1+ϵ)·w_{k+1}, and
p_i/x_i = (1+ϵ)·w_{k+1} ≤ w_i ≤ v_i/τ_i.
Otherwise, we have w_i = w_{k+1} and
p_i/x_i = w_{k+1} = w_i ≤ v_i/τ_i.
For agent k+1, the budget constraint holds because, for index k+1, B[k+1] > w_{k+1} (otherwise, k is not the maximum index with B[k] ≤ w_k). More specifically,
p_{k+1} = (w_{k+1} − B[k])/(1+ϵ) = (B_{k+1} + w_{k+1} − B[k+1])/(1+ϵ) < B_{k+1}.
The RoS constraint is also easy to show:
p_{k+1}/x_{k+1} ≤ w_{k+1} ≤ v_{k+1}/τ_{k+1}.
Finally, for all other agents, the two constraints are satisfied since their payments are 0.
Then we prove the truthfulness. Notice that changing the reported profile may change the indices of the agents in step <ref>. To avoid confusion, we use agent a to represent a certain agent.
We first show that any agent a will not misreport a lower v_a/τ_a, because when v_a/τ_a becomes smaller, x_a cannot increase (<ref>); we then build on the RoS constraints to prove the other direction (<ref>).
For any agent a, x_a is non-increasing as v_a/τ_a decreases.
Given a reported profile (v, τ), refer to Q = max{B[k], w_{k+1}} as the market clearing price. Decreasing v_a/τ_a unilaterally may change the value of k, the top-k agent set S, the index π(a) of agent a, and the market clearing price Q. Use k', S', π'(a) and Q' to denote these quantities, respectively, after decreasing v_a/τ_a to v_a'/τ_a'.
Clearly, if the current index π(a) is already larger than k, x_a is either 0 or 1/(1+ϵ) − B[k]/((1+ϵ)·w_{k+1}), and it will not increase as v_a/τ_a decreases. Thus, we only need to consider the case that π(a) is at most k, i.e., x_a = B_a/((1+ϵ)·Q).
Due to the observation that min_{i∈S∖{a}} w_i ≥ min_{i∈S} w_i ≥ ∑_{i∈S} B_i > ∑_{i∈S∖{a}} B_i, we have k' ≥ k−1 and S∖{a} ⊆ S' after decreasing v_a/τ_a. If k' = k−1, w.l.o.g., we can assume that the new index π'(a) is k'+1 and the new market clearing price Q' is w'_a; otherwise, agent a obtains nothing. Let agent b be the (k+1)-th player when the reported profile is (v, τ). Since π'(a) = k'+1 = k, agent a still ranks higher than agent b, i.e., w'_a ≥ w_b.
Then, according to the definition of k', we see that the market clearing price decreases:
Q = ∑_{i∈S} B_i > w'_a = Q'.
Thus,
x_a' = 1/(1+ϵ) − ∑_{i∈S∖{a}} B_i/((1+ϵ)·Q') < 1/(1+ϵ) − ∑_{i∈S∖{a}} B_i/((1+ϵ)·Q) = (∑_{i∈S} B_i − ∑_{i∈S∖{a}} B_i)/((1+ϵ)·Q) = x_a.
For the case that k' ≥ k, we claim that either agent a or agent b is contained in S'. Suppose that b ∉ S'. Since only agent a changes the reported profile, it is easy to verify that k' = k and S' = S, implying that Q' = Q = max{∑_{i∈S} B_i, w_b} and x_a' = x_a. If b ∈ S', due to the fact that ∑_{i∈S∖{a}} B_i + B_a + B_b > w_b (the definition of k), agent a cannot belong to S'. Without loss of generality, assume that π'(a) = k'+1 and Q' = w'_a; otherwise, x'_a = 0.
We also see that the market clearing price is non-increasing: Q ≥ w_b ≥ Q'. Thus,
x'_a = 1/(1+ϵ) − ∑_{i∈S'} B_i/((1+ϵ)·Q') ≤ 1/(1+ϵ) − (∑_{i∈S∖{a}} B_i + B_b)/((1+ϵ)·Q) = (Q − ∑_{i∈S∖{a}} B_i − B_b)/((1+ϵ)·Q).
Regardless of whether Q takes the value w_b or ∑_{i∈S} B_i, we always have Q − ∑_{i∈S∖{a}} B_i − B_b < B_a, which implies that x'_a < x_a and completes the proof.
Consider any agent a and any v_a'/τ_a'> v_a/τ_a. If x'_a >x_a, then v_a x'_a < τ_a p'_a.
Use Q to denote the market clearing price when the reported profile is (v, τ).
Clearly, if w_a > Q, we have x'_a = x_a for any v_a'/τ_a' > v_a/τ_a. In other words, x'_a > x_a can happen only when w_a ≤ Q.
We distinguish two cases. First, if w_a < Q, the current price of the item (for agent a) is at least (1+ϵ)·w_a. Noticing that increasing v_a/τ_a cannot decrease the price, we have
p'_a/x'_a ≥ (1+ϵ)·w_a > v_a/τ_a.
For the case that w_a = Q: since <ref> breaks ties in a fixed manner, x_a increases only when agent a jumps into the higher level, i.e., w_a' > Q. Thus, according to the payment rule, we still have
p'_a/x'_a ≥ (1+ϵ)·w_a > v_a/τ_a.
Combining <ref> and <ref> proves the truthfulness of the mechanism.
Finally, we analyze the approximation ratio of the mechanism.
<ref> is 1/(1+ϵ)(2+ϵ)-approximation.
The proof is technically simple and similar to the analysis in <cit.>. Use OPT and ALG to denote the optimal payment and our payment, respectively. We first give an upper bound on OPT and then relate this upper bound to ALG. For the top-k agents, due to the budget constraints, the optimal mechanism charges them at most B[k] in total; while for all the remaining agents together, due to the RoS constraints and the unit supply, the optimal mechanism charges at most max_{i>k} v_i/τ_i ≤ (1+ϵ)·w_{k+1}. Namely,
OPT ≤ B[k] + (1+ϵ)·w_{k+1}.
Then we analyze ALG. If B[k] > w_{k+1}, our total payment is
ALG = ∑_{i∈[k]} p_i = ∑_{i∈[k]} B_i·C[k]/((1+ϵ)·B[k]) ≥ B[k]/(1+ϵ) > w_{k+1}/(1+ϵ);
while if B[k] ≤ w_{k+1}, the total payment is
ALG = ∑_{i∈[k]} p_i + p_{k+1} ≥ B[k]/(1+ϵ) + (w_{k+1} − B[k])/(1+ϵ) = w_{k+1}/(1+ϵ) ≥ B[k]/(1+ϵ).
Thus, in either case, we have
(1+ϵ)·ALG + (1+ϵ)^2·ALG ≥ OPT,
i.e., ALG ≥ OPT/((1+ϵ)(2+ϵ)).
§.§ Multiple Items Auction for Additive Agents
In this subsection, we build on the aforementioned single-item auction to give a truthful mechanism for the multiple-item auction.
The mechanism is described in <ref>.
One critical part of the mechanism is that it splits the budget of each agent and runs <ref> for each item to get the per-item solution (z,p(z)). We observe that although each single-item auction is truthful individually, outputting (z,p(z)) directly gives an untruthful mechanism. An agent may misreport a lower target ratio to obtain more value because even if for some item j, the RoS constraint is violated (i.e., ∃ j∈ [m], v_ijz_ij/ p_i(z_j) < τ_i), it is possible that the overall RoS constraint still holds when summing over all items because the return-on-spend ratio v_ijz_ij/ p_i(z_j) of each bought item j is different.
A natural idea to handle this issue is raising the purchase prices of some items for an agent to guarantee that the agent's return-on-spend ratio of each bought item equals min_j:p_i(z_j)>0 v_ijz_ij/ p_i(z_j), so that once the agent violates the RoS constraint on some item, the overall RoS constraint must also be violated.
Following this line, since the purchase prices are raised, to maintain the budget constraints, we need to reduce the number of items assigned to each agent.
Thus, in <ref>, we introduce T_i(j) and let agent i buy at most z_ij fraction of any item j'∈ T_i(j). Finally, to maximize the total revenue, the mechanism charges each agent her maximum willingness-to-pay.
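The assembly step just described can be sketched as follows. Here the z_ij are the per-item fractions returned by the single-item auctions run with the split budgets B_ij = B_i v_ij/∑_j' v_ij' (cf. the analysis below), T_i(j) collects the items j' with z_ij' ≥ z_ij, and U_i(j) is read off from the feasibility argument as the total value z_ij ∑_j'∈ T_i(j) v_ij'; these helper definitions are inferred from the surrounding text and should be treated as assumptions rather than the literal content of <ref>.

def assemble_allocation(z, v, B, tau):
    """Sketch of the assembly step: the inputs z[i][j] come from the per-item auctions."""
    n, m = len(v), len(v[0])
    x = [[0.0] * m for _ in range(n)]
    p = [0.0] * n
    for i in range(n):
        bought = [j for j in range(m) if z[i][j] > 0.0]
        if not bought:
            continue
        def T(j):            # items on which agent i gets at least the fraction z[i][j]
            return [j2 for j2 in range(m) if z[i][j2] >= z[i][j]]
        def U(j):            # value obtained if agent i buys a z[i][j] fraction of every item in T(j)
            return z[i][j] * sum(v[i][j2] for j2 in T(j))
        h = max(bought, key=U)              # h(i): the item maximizing U_i(j)
        for j2 in T(h):
            x[i][j2] = z[i][h]              # buy at most a z_{i,h(i)} fraction of each item in T_i(h(i))
        p[i] = min(B[i], U(h) / tau[i])     # charge the maximum willingness-to-pay
    return x, p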
We state the main theorem in the following.
<ref> is feasible, truthful, and obtains an approximation ratio of Ω(1/√(n)) when the budget profile and the value profile are public.
§.§.§ Feasibility and Truthfulness
We start by proving the feasibility and the truthfulness of the mechanism.
For each item j∈ [m], <ref> satisfies the unit item supply constraint: ∑_i∈ [n] x_ij≤ 1. For each agent i∈ [n], the mechanism satisfies the budget constraint and the RoS constraint: p_i≤ B_i and τ_ip_i ≤∑_j∈[m]x_ijv_ij.
For each item j, since z_j is the assignment returned by running <ref> and applying an item supply clipping, we have ∑_i∈ [n]z_ij≤ 1. According to the definition of T_i(h(i)), for any item j∈ T_i(h(i)), z_ij≥ z_i,h(i) and thus, x_ij = z_i,h(i)≤ z_ij, proving that the unit item supply constraints are satisfied.
For each agent i, the mechanism charges her min{ B_i,U_i(h(i))/τ_i}. According to the definition of U_i(h(i)), we see that this is exactly the total value of the obtained items. Hence, the mechanism satisfies the budget constraint and the RoS constraint.
Similar to the last subsection, we use two lemmas to prove the truthfulness.
For any agent i, ∑_j∈ [m]v_ijx_ij is non-increasing as τ_i increases.
For each agent-item pair (i,j), according to <ref>, z_ij is non-increasing as τ_i increases, which implies that U_i(j) is also non-increasing. Since h(i) is the item that obtains the maximum value of U_i(j), U_i(h(i)) is non-increasing. As mentioned above, U_i(h(i)) is exactly the total obtained value. Thus, ∑_j∈ [m]v_ijx_ij = U_i(h(i)) is non-increasing as τ_i increases.
Consider any agent i and any τ'_i < τ_i. If ∑_j∈ [m]v_ijx'_ij>∑_j∈ [m]v_ijx_ij, then ∑_j∈ [m]v_ijx'_ij< τ_i p'_i.
Consider an agent i and any τ'_i < τ_i. If ∑_j∈ [m]v_ijx'_ij>∑_j∈ [m]v_ijx_ij, there must exist at least one item l∈ T'_i(h'(i)) such that z'_il> z_il≥ 0; otherwise, the agent cannot obtain more valuable items. According to <ref>, we have
p_i'(z'_l)/z'_il > v_il/τ_i.
Consider the following payment rule: for each item j, we charge the agent
q'_ij = x'_ij·p_i'(z'_l)/z'_il·v_ij/v_il.
Clearly, this payment rule violates the RoS constraint for any item j:
q'_ij/x'_ij = p_i'(z'_l)/z'_il·v_ij/v_il > v_il/τ_i·v_ij/v_il = v_ij/τ_i,
and thus,
∑_j∈ [m] q'_ij > ∑_j∈[m] v_ijx'_ij/τ_i.
Finally, we show that p'_i = min{ B_i,U'_i(h'(i))/τ'_i}≥∑_j∈ [m] q'_ij. According to <ref>, the single-item auction mechanism satisfies p_i'(z'_l)≤ B_il and p_i'(z'_l) ≤ v_ilz_il'/τ_i'. Thus, for each item j∈ T'_i(h'(i)), due to x_ij' ≤ z'_il and B_il/v_il = B_ij/v_ij, we have
q'_ij = x'_ij·p_i'(z'_l)/z'_il·v_ij/v_il≤ B_ij,
and
q'_ij = x'_ij·p_i'(z'_l)/z'_il·v_ij/v_il≤ x_ij'·v_ij/τ_i'.
Summing over all the items,
∑_j∈ [m]q_ij' ≤min{ B_i,∑_j∈ [m]x_ij'v_ij/τ_i'} = p_i' ,
completing the proof.
<ref> prevents an agent from misreporting a target ratio higher than the actual ratio since the agent is a value maximizer, while <ref> guarantees that the agent cannot misreport a ratio lower than the actual ratio because otherwise, her RoS constraint will be violated. Thus, combining these two lemmas proves the truthfulness[We can also claim that <ref> and <ref> immediately prove the truthfulness according to <cit.>]
§.§.§ Approximation Ratio
This subsection analyzes the approximation ratio of <ref>. As mentioned above, at the beginning stage of the mechanism, we split the budget of each agent based on the value profile.
To streamline the analysis, we consider the setting where each agent i can only use the sub-budget B_ij to buy some fraction of each item j. Use OPT_sub to denote the optimal objective of this sub-budget constrained setting. According to the approximation ratio of <ref> (<ref>) and the item supply clipping bar 1/2, we have
∑_i∈ [n],j∈[m] z_ij· p_i(z_j) ≥ 1/(2(1+ϵ)(2+ϵ))·OPT_sub
for any ϵ>0.
This inequality splits our proof into two parts. We first show that is at least 1/2√(n)+3·, and then establish the relationship between our objective value and ∑_i∈ [n],j∈[m] z_ij p_i(_j).
≥1/2√(n)+3·
Instead of comparing and directly, we introduce a simple greedy algorithm for the sub-budget constrained setting in <ref> and show that the objective obtained by the algorithm is at least 1/2√(n)+3·.
Use (,) and (^*,^* ) to represent the solution of <ref> and the optimal solution (of the original setting) respectively. We partition all the agents into two groups: S:={i∈ [n] | p_i ≥ B_i/ √(n)} and R:={i∈ [n] | p_i < B_i/ √(n)}, and get an upper bound of :
= ∑_i∈[n],j∈[m] x_ij^*w_ij≤∑_i∈ S B_i + ∑_j∈ [m]∑_i∈ R x_ij^*w_ij≤√(n)· + ∑_j∈ [m]∑_i∈ R x_ij^*w_ij .
The remaining part is to prove that ∑_j∈ [m]∑_i∈ R x_ij^*w_ij can also be bounded by O(√(n)) ·.
For each item j, define a(j) := _i∈ R w_ij to be the agent i∈ R with the maximum w_ij. Clearly,
∑_j∈ [m]∑_i∈ R x_ij^*w_ij≤∑_j∈ [m] w_a(j),j .
We further partition all the items into two groups based on their assignments in the greedy solution: P:= {j∈ [m] | x_a(j),jw_a(j),j < B_a(j),j} and Q:={j∈ [m] | x_a(j),jw_a(j),j = B_a(j),j}.
For each item j∈ P, if sorting all agents in the decreasing order of {w_ij}, agent a(j) is either the last agent who buys item j in <ref>, or ranks behind the last agent buying item j; otherwise, agent a(j) must exhaust the sub-budget B_a(j),j. Thus, w_a(j),j≤ R_j(x), where R_j(x) denotes the total payment collected for item j under the greedy solution, and therefore,
∑_j∈ P w_a(j),j≤∑_j∈ P R_j(x) ≤ GRD.
For the items in Q, we reorganize the corresponding formula:
∑_j∈ Q w_a(j),j = ∑_i∈ R ∑_j∈ Q : a(j)=i w_ij.
For simplicity, use Q(i) to denote the item subset {j∈ Q | a(j)=i }.
We aim to show that ∀ i∈ R, ∑_j∈ Q(i) w_ij is at most GRD/(√(n)-1), and thus, their sum can be bounded by O(√(n)) ·GRD.
For each agent i∈ R, due to the similar argument in the last paragraph, we have
∑_j∉ Q(i) w_ij≤∑_j∉ Q(i) R_j(x) ≤ GRD.
Recall that any agent i ∈ R pays less than B_i/√(n). It is easy to observe that for an agent i∈ R, the sum budget of the items in Q(i) is very limited because the agent spends very little compared to the budget even though she has exhausted the sub-budgets of these items. More formally, we have
∑_j∈ Q(i) B_ij < B_i/√(n)
∑_j∈ Q(i) B_i ·v_ij/∑_j'∈ [m]v_ij' ≤B_i/√(n)
∑_j∈ Q(i) w_ij/∑_j∈ Q(i) w_ij + ∑_j∉ Q(i) w_ij ≤1/√(n)
∑_j∈ Q(i) w_ij ≤1/√(n)-1∑_j∉ Q(i) w_ij .
Combining <ref>, <ref> and <ref> and then summing over all agents in R, we have
∑_j∈ Q w_a(j),j = ∑_i∈ R ∑_j∈ Q(i) w_ij≤ n/(√(n)-1)·GRD .
Finally, combining <ref>, <ref>, <ref> and <ref> completes the proof:
OPT ≤( √(n) + 1 + n/(√(n)-1)) ·GRD ≤ (2√(n) + 3)·GRD .
For any ϵ>0, ∑_i∈[n] p_i ≥ min{1/2,1/(1+ϵ)}·∑_i∈ [n],j∈[m] z_ij p_i(z_j)
We prove the lemma by showing that for any agent i, p_i≥min{1/2,1/(1+ϵ)}·∑_j∈[m] z_ij p_i(z_j) .
Consider an arbitrary agent i. Use g(i) to denote the item j with the minimum non-zero z_ij, i.e., g(i):= argmin_j: z_ij>0 z_ij. We construct an auxiliary allocation {y_ij}_j∈ [m] and payment q_i as follows:
* For each item j, set y_ij=z_i,g(i) if j∈ T_i(g(i)) and 0 otherwise.
* Find the most cost-effective available item l:= argmin_j∈ T_i(g(i)) p_i(z_j)/(z_ijv_ij) and set
q_i = ∑_j∈ [m] y_ij· p_i(z_l)/(z_ilv_il)· v_ij .
Similarly to the analysis in the last part of the proof of <ref>, we see that the payment q_i is at most min{B_i,U_i(g(i))/τ_i}, and therefore,
q_i ≤min{B_i,U_i(g(i))/τ_i}≤min{B_i, U_i(h(i))/τ_i} = p_i,
where the second inequality used the fact that h(i):= argmax_j∈ [m] U_i(j).
Now we show that q_i is at least a constant fraction of ∑_j∈[m] z_ij p_i(z_j).
Noting that g(i) is the item with the minimum non-zero z-value,
∑_j∈[m] z_ij· p_i(z_j) = ∑_j∈ T_i(g(i)) z_ij· p_i(z_j).
We distinguish two cases based on the value of z_i,g(i): (1) z_i,g(i)≥ 1/2, (2) z_i,g(i) < 1/2.
If z_i,g(i)≥ 1/2, we have
y_ij· p_i(z_l)/(z_ilv_il)· v_ij ≥ 1/2· p_i(z_j)/(z_ijv_ij)· v_ij ≥ 1/2· z_ij· p_i(z_j)
for any item j∈ T_i(g(i)).
For the second case, due to the item supply clipping in <ref>, agent i must be one of the top-k agents in the single-item auction that sells item g(i). Thus, according to <ref>, we have p_i(z_g(i)) ≥ B_i,g(i)/(1+ϵ). Thus, for any item j∈ T_i(g(i)),
y_ij· p_i(z_l)/(z_ilv_il)· v_ij ≥ z_i,g(i)· p_i(z_g(i))/(z_i,g(i)v_i,g(i))· v_ij
≥ B_i,g(i)/(1+ϵ)·v_ij/v_i,g(i)
= B_ij/(1+ϵ)
≥ 1/(1+ϵ)· z_ij· p_i(z_j).
Thus, in either case, we have
p_i ≥ q_i =∑_j∈ [m] y_ij· p_i(z_l)/(z_ilv_il)· v_ij ≥ min{1/2,1/(1+ϵ)}·∑_j∈ [m] z_ij· p_i(z_j),
which completes the proof.
Combining <ref>, <ref> and <ref> proves an approximation ratio of Ω(1/√(n)).
§ OMITTED DETAILS IN SECTION <REF>
In this section, we restate the random sampling mechanism proposed in <cit.> and the results they obtained. The mechanism is described in <ref>.
The random sampling mechanism is a universally truthful budget feasible mechanism which guarantees a constant fraction of the liquid welfare under the large market assumption.
The correctness of the above theorem heavily depends on <cit.>, which states that the liquid welfare obtained from the random sampling algorithm is at least some constant fraction of the optimal mechanism.
To prove <cit.>, they use the revenue obtained by a truthful auction as a lower bound of the liquid welfare.
Thus, <cit.> actually holds for the revenue maximization objective.
Hence, we have the following corollary.
The random sampling mechanism is a budget feasible and truthful mechanism which achieves a constant approximation under the large market assumption.
Suppose that there is a random sampling mechanism ℳ for the liquid welfare maximizing model, whose input is the budget profile ℬ={B_i}_i∈ [n] and the value profile 𝒲 = {w_ij}_i∈ [n],j∈ [m]. Our mechanism ℳ' is constructed as follows. Given an input profile (ℬ,𝐯,τ), define 𝒲 = {w_ij=v_ij/τ_i}_i∈ [n], j∈ [m]. Then, run mechanism ℳ on the input (ℬ,𝒲) to get the allocation 𝐱. Finally, we charge each agent i p_i=min{B_i, ∑_jv_ijx_ij/τ_i}.
Essentially, mechanism ℳ' is constructed by simply changing the payment of each agent i in ℳ to her maximum willingness-to-pay min{B_i, ∑_jv_ijx_ij/τ_i}.
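A minimal sketch of this construction, with the liquid-welfare mechanism passed in as a black-box function M that returns an allocation (the function names are ours, not part of <cit.>):

def revenue_mechanism_from_liquid_welfare(M, B, v, tau):
    """Build the payment mechanism M' from a liquid-welfare mechanism M (black box)."""
    n, m = len(v), len(v[0])
    # transformed value profile w_ij = v_ij / tau_i fed to the liquid-welfare mechanism
    w = [[v[i][j] / tau[i] for j in range(m)] for i in range(n)]
    x = M(B, w)                                   # allocation returned by M on (B, w)
    # charge every agent her maximum willingness-to-pay
    p = [min(B[i], sum(v[i][j] * x[i][j] for j in range(m)) / tau[i]) for i in range(n)]
    return x, p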
According to the arguments in <ref>, for the random sampling mechanism, the truthfulness can always be guaranteed.
For the approximation ratio, we observe that the new payment rule does not violate the budget constraints and the RoS constraints, and guarantees that the total payment in ℳ' equals the liquid welfare obtained by ℳ. Since the constructed liquid welfare instance and our instance share the same optimal objective value, <ref> can be proved directly by the following theorem.
There exists a random sampling mechanism which is a universally truthful budget feasible mechanism and guarantees a constant fraction of the liquid welfare under the large market assumption.
|
http://arxiv.org/abs/2307.04496v1 | 20230710113448 | Distinguishing between Dirac and Majorana neutrinos using temporal correlations | [
"Bhavya Soni",
"Sheeba Shafaq",
"Poonam Mehta"
] | hep-ph | [
"hep-ph",
"quant-ph"
] |
|
http://arxiv.org/abs/2307.04324v1 | 20230710033812 | Study of the $B^-\to K^-ηη_c$ decay due to the $D\bar{D}$ bound state | [
"Xin-Qiang Li",
"Li-Juan Liu",
"En Wang",
"Le-Le Wei"
] | hep-ph | [
"hep-ph"
] |
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei 430079, China
Center for High Energy Physics, Peking University, Beijing 100871, China
[email protected]
School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China
[email protected]
School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei 430079, China
We study the B^- → K^- ηη_c decay by taking into account the S-wave contributions from the pseudoscalar meson–pseudoscalar meson interactions within the unitary coupled-channel approach, where the DD̅ bound state is dynamically generated. In addition, the contribution from the intermediate resonance K_0^*(1430), with K_0^*(1430)→ K^-η, is also considered. Our results show that there is a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which could be associated with the D D̅ bound state. The future precise measurements of the B^- → K^- ηη_c process at the Belle II and LHCb experiments could be, therefore, used to check the existence of the D D̅ bound state, and to deepen our understanding of the hadron-hadron interactions.
Study of the B^- → K^- ηη_c decay due to the DD̅ bound state
Le-Le Wei
============================================================
§ INTRODUCTION
Since the discovery of X(3872) by the Belle Collaboration in 2003 <cit.>, many exotic states, which do not fit into the expectations of the conventional quark models, have been observed experimentally during the past two decades <cit.>. Many of these exotic states, especially the ones observed in the charmonium sector, are observed around the threshold of a pair of heavy hadrons; some of them, such as X(3872) <cit.>, Z_c(3900) <cit.> and X(4160) <cit.>, can be explained as the hadronic molecules. However, the hadronic molecular states with mass near the D D̅ threshold have not yet been observed experimentally, and further detailed studies are therefore required both theoretically and experimentally <cit.>.
In Ref. <cit.>, by taking into account the ππ, K K̅, D D̅, D_s D̅_s, ηη, and ηη_c coupled channels, the authors predicted a narrow hidden charm resonance with quantum numbers I(J^PC)=0(0^++) and mass around 3700 MeV [denoted as X(3700) throughout this paper] within the unitary coupled-channel approach. Furthermore, by considering the η_c as a pure c c̅ state and the η–η^' mixing, together with the same parameters as used in Ref. <cit.>, the pole of the new X(3700) state was predicted to be √(s)=(3722-i18) MeV within the unitary coupled-channel approach <cit.>. The mass of the D D̅ bound state predicted by other different models is also basically around the threshold of D D̅ <cit.>, and the theoretical studies of the experimental measurements of the processes e^+ e^- → J/ψ D D̅ <cit.>, B^+ → D^0 D̅^0 K^+ <cit.> and γγ→ D D̅ <cit.> all support the existence of such a D D̅ bound state. Meanwhile, some processes have also been suggested to search for the D D̅ bound state, such as ψ(3770) →γ X(3700) →γηη^', ψ(4040) →γ X(3700) →γηη^', e^+ e^- → J/ψ X(3700) → J/ψηη^' <cit.>, ψ(3770) →γ D D̅ <cit.>, and Λ_b →Λ D D̅ <cit.>. It is worth mentioning that the BESIII Collaboration has recently searched for the X(3700) in the ψ(3770) →γηη^' decay for the first time, observing however no significant signals due to the low detection efficiencies of the photons <cit.>.
Although the DD̅ bound state X(3700) couples mainly to the D D̅ and D_s D̅_s channels, it is not easy to search for any signals of the state in these systems. This is due to the fact that, since its mass is a little bit lower than the D D̅ threshold, the X(3700) state would manifest itself as a near-threshold enhancement in the D D̅ invariant mass distributions, which may be difficult to identify due to the low detection efficiencies near the threshold. On the other hand, the X(3700) state has also a sizeable coupling to the ηη_c channel, as observed in Refs. <cit.>. Since the ηη_c threshold is about 200 MeV lower than the predicted mass of X(3700), one expects that, if the D D̅ bound state exists, a clear peak near the D D̅ threshold would appear in the ηη_c invariant mass distributions of some processes with large phase space.
As is well known, the three-body weak decays of the B mesons involve more complicated dynamics than the two-body decays and can, therefore, provide a wealth of information about the meson-meson interactions and hadron resonances <cit.> (see e.g. Ref. <cit.> for a recent review). For instance, the B → K + X/Y/Z decay is an ideal process to produce the charmoniumlike hadronic molecular states <cit.>, and many exotic states have been observed experimentally through the B-meson weak decays during the past few years, such as Z_cs(4000) and Z_cs(4220) <cit.>, X(4140) <cit.> in B^+ → J/ψϕ K^+, as well as X_0(2900) and X_1(2900) in B^+ → D^+ D^- K^+ decay <cit.>. In this paper, we propose to search for the D D̅ bound state X(3700) in the B^- → K^- ηη_c decay. It is worth mentioning that the Belle Collaboration has already searched for the process in 2015 based on 772×10^6 BB̅ pairs collected at the Υ(4S) resonance <cit.>, and no significant signal of the D D̅ bound state was observed due to insufficient statistics. However, the Belle II Collaboration will accumulate about 50 times the Belle data set <cit.>, and is expected to make the further precise measurements of the B^- → K^- ηη_c decay, which will shed more light on the existence of the D D̅ bound state in this process. In addition, the authors of Ref. <cit.> have suggested to search for the D D̅ bound state in the ηη_c mass distribution of the B^+ → K^+ ηη_c decay, and predicted a branching ratio of ℬ(B^+ → ( X_q q̅→η_c η ) K^+ )= ( 0.9 ∼ 6.7) × 10^-4.
In this paper, motivated by the observations made above, we study the B^- → K^- ηη_c decay by taking into account the pseudoscalar meson–pseudoscalar interactions within the chiral unitary approach, where the DD̅ bound state is generated dynamically. On the other hand, the B^- → K^- ηη_c decay can also proceed through the subsequent decay of the intermediate resonance K^*_0(1430), i.e. K^*_0(1430) → K η, whose contribution will be considered in this paper too. We will demonstrate that, besides a peak of K_0^*(1430) in the K^-η invariant mass distribution, there is a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which could be associated with the D D̅ bound state. Therefore, future precise measurements of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments could be used to check the existence of the D D̅ bound state, and to deepen our understanding of the hadron-hadron interactions.
This paper is organized as follows. In Sec. <ref>, we will firstly introduce our formalism for the B^- → K^- ηη_c decay. Our numerical results and discussions are then presented in Sec. <ref>. In Sec. <ref>, we give our final conclusion.
§ FORMALISM
In analogy to the discussions made in Refs. <cit.>, the B^- → K^- ηη_c decay proceeds via the following three steps: the weak decay, the hadronization and the final state interactions. Explicitly, the b quark of the B^- meson firstly decays into a c quark and a W^- boson, and then the W^- boson turns into a c̅ s pair. In order to give rise to the K^- ηη_c final state, the u̅ antiquark of the initial B^- meson and the c̅ s pair from the W^- subsequent decay have to hadronize together with the q̅ q (≡u̅ u + d̅ d + s̅ s) created from the vacuum with the quantum numbers J^PC=0^++. The relevant quark level diagrams can be classified as the internal W^- emission mechanisms and external W^- emission mechanisms, as depicted in Figs. <ref>(a)–(b) and <ref>(c)–(d), respectively. Here we have neglected all the CKM suppressed diagrams that are proportional to the CKM element V_ub.
The meson-meson systems formed by the hadronization of q_i, q̅_j and q̅_k q_k are given by
∑^3_k=1q_i(q̅_k q_k)q̅_j=∑^3_k=1M_ikM_kj=(M^2)_ij,
with the SU(4) q q̅ matrix defined as
M=( [ uu̅ ud̅ us̅ uc̅; du̅ dd̅ ds̅ dc̅; su̅ sd̅ ss̅ sc̅; cu̅ cd̅ cs̅ cc̅ ]),
which could be expressed in terms of the physical pseudoscalar mesons as <cit.>,
M =
( [ π^0/√(2)+ η/√(3)+η^'/√(6) π^+ K^+ D̅^0; π^- -π^0/√(2)+η/√(3)+η^'/√(6) K^0 D^-; K^- K̅^0 -η/√(3) +√(2/3)η^' D_s^-; D^0 D^+ D_s^+ η_c ]).
Thus, by isolating the meson K^-, one could easily obtain the components of the meson systems for Figs. <ref>(a) and <ref>(b) as follows:
| H ⟩^a = V_p V_cb V_cs^∗ c(u̅ u + d̅ d + s̅ s) c̅su̅
= V_p V_cb V_cs^∗(M^2)_44 K^-
= V_p V_cb V_cs^∗( D^0 D̅^0 + D^+ D^- + D_s^+ D_s^- ) K^-,
| H ⟩^b = V_p V_cb V_cs cc̅s(u̅ u + d̅ d + s̅ s) u̅
= V_p V_cb V_cs^∗(M^2)_31η_c
= V_p V_cb V_cs^∗( 1/√(2)K^- π^0 + 3/√(6)K^- η^') η_c,
where V_cb=0.04182 and V_cs=0.97349 are the elements of the CKM matrix, and V_p encodes all the remaining factors arising from the production vertex. Then, the final state interactions of DD̅, D_sD̅_s, and η'η_c will dynamically generate the DD̅ bound state, which could decay into ηη_c system. Here we do not consider the component K^-π^0η_c, since the isospin of the π^0η_c system is I=1.
Similarly, we can write the hadron components for Figs. <ref>(c) and <ref>(d) that could couple to the K^-ηη_c system as follows:
| H ⟩^c = V_p V_cb V_cs^∗× C ×( K^- D_s^+ ) D_s^-,
| H ⟩^d = V_p V_cb V_cs^∗× C ×( K^- D̅^0 ) D^0,
where we have introduced the color factor C to account for the relative weight of the external W^- emission mechanisms with respect to the internal W^- emission mechanism, and will take C=3 in the case of color number N_C=3, as done in Refs. <cit.>.
According to the above discussions, the K^- ηη_c final state could not be produced directly through the tree-level diagrams of the B^- decay, but can via the final state interactions of the coupled channels D^0 D̅^0, D^+ D^-, D_s^+ D_s^-, and η'η_c, which could then generate the DD̅ bound state, as shown in Fig. <ref>. The total amplitude of Fig. <ref> can be expressed as
𝒯_X = V_p V_cb V_cs^∗[ G_D^+ D^- t_D^+ D^- →ηη_c
+ (1+C) × G_D^0 D̅^0 t_D^0 D̅^0 →ηη_c
+ (1+C) × G_D_s^+ D_s^- t_D_s^+ D_s^- →ηη_c
+ 3/√(6)× G_η'η_c t_η'η_c →ηη_c],
where G_l is the loop function for the two-meson propagator in the l-th channel, and its explicit expression is given by <cit.>
G_l = i ∫ d^4 q/(2π)^4 · 1/(q^2 - m_1^2 + iϵ) · 1/((P-q)^2 - m_2^2 + iϵ)
= 1/(16π^2)[α_l + ln(m_1^2/μ^2) + (m_2^2 - m_1^2 + s)/(2s) · ln(m_2^2/m_1^2)
+ p/√(s)×(ln((s - m_2^2 + m_1^2 + 2p√(s))/(-s + m_2^2 - m_1^2 + 2p√(s)))
+ ln((s + m_2^2 - m_1^2 + 2p√(s))/(-s - m_2^2 + m_1^2 + 2p√(s)))) ],
with the subtraction constant α_l= -1.3 for the coupled channels D^+ D^-, D^0 D̅^0, D_s^+ D_s^-, and η^'η_c, and μ= 1500 MeV, being the same as used in Ref. <cit.>. √(s)=M_ηη_c is the invariant mass of the two mesons in the l-th channel, and m_1 and m_2 are the mass of these two mesons. P is the total four-momentum of the two mesons in the l-th channel, and p is the magnitude of the three-momentum of each meson in the meson-meson center of mass frame, with
p = λ^1/2( s, m_1^2, m_2^2 )/2 √(s),
where λ(x,y,z) = x^2 + y^2 + z^2 - 2xy - 2yz -2zx is the Källen function. The transition amplitudes in Eq. (<ref>) can be generically written as
t_j → k = g_j × g_k/M_ηη_c^2 - M_X(3700)^2 + i M_X(3700)Γ_X(3700),
where the mass M_X(3700) = 3722 MeV, the width Γ_X(3700) = 36 MeV, and the coupling constants g_j are taken from Ref. <cit.>. For convenience, we also show in Table <ref> the values of these couplings.
On the other hand, the B^- → K^- ηη_c decay could also proceed via the intermediate excited kaon mesons. According to the Dalitz plot shown in Fig. <ref>, one can see that only the well-established resonance K^*_0(1430) could contribute to this process, since the K^*_0(1430) couples to the channel K^-η in an S-wave way with a branching fraction ℬ(K^*_0(1430)→ Kη)=(8.6^+2.7_-3.4)% <cit.>. Therefore, in this paper, we will neglect all the other excited kaon mesons, and only take into account the contribution from the intermediate K^*_0(1430) as shown by Fig. <ref>, whose amplitude can be expressed as
𝒯_K^*_0 = V_p×β× M_K^*_0(1430)^2/M_K^- η^2 - M_K^*_0(1430)^2 + i M_K^*_0(1430)Γ_K^*_0(1430),
where the parameter β stands for the relative weight of the K^*_0(1430) contribution with respect to that of the DD̅ bound state X(3700), and M_K^- η is the invariant mass of the K^- η system. We will take as input M_K^*_0(1430) = 1425 MeV and Γ_K^*_0(1430) = 270 MeV <cit.>.
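For orientation, the two amplitude building blocks of Eqs. (<ref>) and (<ref>) can be coded directly. The sketch below (in Python, with all masses and widths in MeV) is only illustrative: the coupling constants g_j of Table <ref> enter as inputs, and the loop functions G_l needed for the full amplitude 𝒯_X are not included here.

def t_coupled(M_etaetac, g_j, g_k, M_X=3722.0, Gamma_X=36.0):
    """Transition amplitude t_{j -> eta eta_c} of Eq. (<ref>), driven by the X(3700) pole."""
    return g_j * g_k / (M_etaetac**2 - M_X**2 + 1j * M_X * Gamma_X)

def T_Kstar0(M_Keta, beta, V_p=1.0, M_K0=1425.0, Gamma_K0=270.0):
    """K*_0(1430) contribution of Eq. (<ref>) to B- -> K- eta eta_c."""
    return V_p * beta * M_K0**2 / (M_Keta**2 - M_K0**2 + 1j * M_K0 * Gamma_K0)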
With the amplitudes of Eqs. (<ref>) and (<ref>) at hand, the doubly differential decay width of the B^- → K^- ηη_c process can be written as
d^2 Γ/dM_ηη_cdM_K^- η = 1/(2 π)^3M_ηη_c M_K^- η/8 M_B^-^3|𝒯_X + 𝒯_K^*_0|^2.
The differential decay width dΓ/dM_ηη_c can then be obtained by integrating Eq. (<ref>) over the K^- η invariant mass M_K^- η, whose integration range is given by
( M^2_K^- η)_min
= ( E_K^-^* + E_η^* )^2 - ( √(E_η^*2 - m_η^2) + √(E_K^-^*2 - m_K^-^2))^2,
( M^2_K^- η)_max
= ( E_K^-^* + E_η^* )^2 - ( √(E_η^*2 - m_η^2) - √(E_K^-^*2 - m_K^-^2))^2,
where E_K^-^* and E_η^* are the energies of K^- and η in the ηη_c rest frame, respectively. Explicitly, we have
E_K^-^* = M^2_B^- - M^2_ηη_c - M^2_K^-/2 M_ηη_c,
E_η^* = M^2_ηη_c - M^2_η_c + M^2_η/2 M_ηη_c.
Similarly, we can also obtain the differential decay width dΓ/dM_K^- η by integrating Eq. (<ref>) over the ηη_c invariant mass M_ηη_c, and the range of integration can be obtained by exchanging K^- and η_c in Eqs. (<ref>)–(<ref>). Finally, by integrating the differential width dΓ/dM_ηη_c (dΓ/dM_K^- η) over M_ηη_c (M_K^- η), we can obtain the partial decay width of the B^- → K^- ηη_c process,
Γ = ∫dM_ηη_c∫dM_K^- η1/(2 π)^3M_ηη_c M_K^- η/8 M_B^-^3|𝒯_X + 𝒯_K^*_0|^2.
Here all the meson masses involved are taken from the Particle Data Group <cit.>.
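A short numerical sketch of Eqs. (<ref>)–(<ref>) is given below: it computes the kinematical K^-η boundaries for a given M_ηη_c and integrates the doubly differential width over them. The amplitude is left as an input; for illustration only a K^*_0(1430) Breit-Wigner term is inserted, and the meson masses are approximate PDG values.

import numpy as np

M_B, M_K, M_ETA, M_ETAC = 5279.34, 493.68, 547.86, 2983.9   # MeV, approximate PDG values

def mKeta_limits(M_etaetac):
    """Boundaries (M_K-eta)_min/max for fixed M_eta eta_c, following the expressions above."""
    EK   = (M_B**2 - M_etaetac**2 - M_K**2) / (2.0 * M_etaetac)      # E*_K
    Eeta = (M_etaetac**2 - M_ETAC**2 + M_ETA**2) / (2.0 * M_etaetac) # E*_eta
    pK   = np.sqrt(max(EK**2 - M_K**2, 0.0))
    peta = np.sqrt(max(Eeta**2 - M_ETA**2, 0.0))
    m2_min = (EK + Eeta)**2 - (peta + pK)**2
    m2_max = (EK + Eeta)**2 - (peta - pK)**2
    return np.sqrt(m2_min), np.sqrt(m2_max)

def dGamma_dM_etaetac(M_etaetac, amplitude, npts=500):
    """Integrate the doubly differential width over M_K-eta at fixed M_eta eta_c."""
    lo, hi = mKeta_limits(M_etaetac)
    M_Keta = np.linspace(lo, hi, npts)
    dd = (M_etaetac * M_Keta / (8.0 * (2.0 * np.pi)**3 * M_B**3)
          * np.abs(amplitude(M_etaetac, M_Keta))**2)
    return np.trapz(dd, M_Keta)

# illustration: K*_0(1430) Breit-Wigner only (the X(3700) piece would need the loop functions G_l)
amp = lambda M23, M12: 0.004 * 1425.0**2 / (M12**2 - 1425.0**2 + 1j * 1425.0 * 270.0)
print(dGamma_dM_etaetac(3700.0, amp))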
§ RESULTS AND DISCUSSION
In our model, we have two free parameters, V_p and β. The parameter V_p is a global factor and its value does not affect the shapes of the ηη_c and K^- η invariant mass distributions, and thus we take V_p=1 for simplicity. The parameter β represents the relative weight of the contribution from K^*_0(1430) with respect to that from X(3700), and we take the default value β=0.004 in order to make the contributions from X(3700) and K^*_0(1430) within the same order of magnitude.
Firstly, we show in Fig. <ref> the normalized ηη_c and K^- η invariant mass distributions with β=0.004. One can see a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which should be associated with the D D̅ bound state X(3700). In addition, a K^*_0(1430) signal appears in the K^- η invariant mass distribution, but gives rise to a smooth shape in the ηη_c invariant mass distribution and does not affect the peak structure of the X(3700) significantly. It should be stressed that the line shape of the X(3700) in the ηη_c invariant mass distribution is different from that of a Breit-Wigner form, which is a typical feature of the DD̅ molecular state.
We also show in Fig. <ref> the Dalitz plot for the B^- → K^- ηη_c decay in the (M_ηη_c^2, M_K^- η^2) plane, where one can see two clear bands corresponding to the X(3700) and K^*_0(1430) resonances, respectively.
The value of the parameter β is unknown, and could be determined if the experimental measurements of the B^- → K^- ηη_c decay are available in the future. In order to study the dependence of our results on β, we show in Fig. <ref> the predicted ηη_c and K^- η (b) invariant mass distributions of the process with three different values of β = 0.003, 0.004, 0.005. One can see that the peak of the K^*_0(1430) resonance in the K^- η invariant mass distribution becomes more significant when the value of β increases. The signal corresponding to the D D̅ bound state X(3700) is, however, always clear in the ηη_c invariant mass distribution.
On the other hand, the value of the color factor C, which represents the relative weight of the external W^- emission mechanism with respect to the internal W^- emission mechanism, could vary around 3 in order to account for the potential nonfactorizable contributions <cit.>. To this end, we show in Fig. <ref> the normalized ηη_c and K^- η invariant mass distributions of the B^- → K^- ηη_c decay by taking three different values of C = 3.0, 2.5, 2.0. One can see that, although the peak of the X(3700) state in the ηη_c invariant mass distribution becomes weaker when the value of C decreases, its signal is still clear and will be easy to be distinguished from the background contribution. Meanwhile, the peak of the K^*_0(1430) resonance in the K^-η invariant mass distribution has little changes for these three different values of the parameter C, because the contribution from the DD̅ bound state is smooth around the peak of K^*_0(1430) in the K^-η invariant mass distribution.
From the above analyses, one can find that within the variation ranges of the two free parameters, there is always a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which corresponds to the D D̅ bound state. Thus, we suggest strongly that our experimental colleagues can perform more precise measurements of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments in the future, which is very important for confirming the existence of the predicted D D̅ bound state.
§ CONCLUSIONS
In this paper, motivated by the theoretical predictions for the DD̅ bound state, we propose to search for this state in the B^- → K^- ηη_c decay. To this end, we have investigated the process within the unitary coupled-channel approach, by taking into account the contributions from the S-wave pseudoscalar meson–pseudoscalar meson interactions, which can dynamically generate the DD̅ bound state X(3700). We have also taken into account the contribution from the intermediate resonance K^*_0(1430), since it couples to the Kη channel in an S-wave way with a branching fraction of ℬ(K^*_0(1430)→ Kη)=(8.6^+2.7_-3.4)%.
Our results show that a clear peak appears around 3730 MeV in the ηη_c invariant mass distribution, which should be associated with the DD̅ bound state. It should be stressed that the line shape of the DD̅ bound state is significantly different from that of a Breit-Wigner form, which is a typical feature of the DD̅ molecular state. On the other hand, one can also find the peak of the resonance K^*_0(1430) in the K^-η invariant mass distribution, and the resonance gives a smooth contribution in the ηη_c invariant mass distribution.
In summary, we strongly encourage our experimental colleagues to perform a more precise measurement of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments in the future, which will be very helpful to confirm the existence of the predicted D D̅ bound state, as well as to deepen our understanding of the hadron-hadron interactions.
§ ACKNOWLEDGEMENTS
This work is supported by the National Natural Science Foundation of China under Grant Nos. 12135006, 12075097 and 12192263, the Natural Science Foundation of Henan under Grand Nos. 222300420554 and 232300421140, the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (2020GGJS017), the Youth Talent Support Project of Henan (2021HYTP002), the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology (No. NLK2021-08), as well as the Fundamental Research Funds for the Central Universities under Grant Nos. CCNU19TD012 and CCNU22LJ004.
99
Belle:2003nnu
S. K. Choi et al. [Belle],
Observation of a narrow charmonium-like state in exclusive B^±→ K^±π^+ π^- J/ψ decays,
Phys. Rev. Lett. 91 (2003), 262001.
ParticleDataGroup:2022pth
R. L. Workman et al. [Particle Data Group],
Review of Particle Physics,
PTEP 2022 (2022), 083C01.
Pakvasa:2003ea
S. Pakvasa and M. Suzuki,
On the hidden charm state at 3872 MeV,
Phys. Lett. B 579 (2004), 67-73.
Chen:2015ata
W. Chen, T. G. Steele, H. X. Chen and S. L. Zhu,
Mass spectra of Z_c and Z_b exotic states as hadron molecules,
Phys. Rev. D 92 (2015), 054002.
Molina:2009ct
R. Molina and E. Oset,
The Y(3940), Z(3930) and the X(4160) as dynamically generated resonances from the vector-vector interaction,
Phys. Rev. D 80 (2009), 114013.
Guo:2017jvc
F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou,
Rev. Mod. Phys. 90 (2018) no.1, 015004
[erratum: Rev. Mod. Phys. 94 (2022) no.2, 029901].
Gamermann:2006nm
D. Gamermann, E. Oset, D. Strottman and M. J. Vicente Vacas,
Dynamically generated open and hidden charm meson systems,
Phys. Rev. D 76 (2007), 074016.
Gamermann:2009ouq
D. Gamermann, E. Oset and B. S. Zou,
The radiative decay of ψ(3770) into the predicted scalar state X(3700),
Eur. Phys. J. A 41 (2009), 85-91.
Prelovsek:2020eiw
S. Prelovsek, S. Collins, D. Mohler, M. Padmanath and S. Piemonte,
Charmonium-like resonances with J^PC = 0^++, 2^++ in coupled D D̅, D_s D̅_s scattering on the lattice,
JHEP 06 (2021), 035.
Dong:2021bvy
X. K. Dong, F. K. Guo and B. S. Zou,
A survey of heavy–heavy hadronic molecules,
Commun. Theor. Phys. 73 (2021), 125201.
Chen:2021erj
H. X. Chen,
Hadronic molecules in B decays,
Phys. Rev. D 105 (2022) 9, 094003.
Shi:2021hzm
P. P. Shi, Z. H. Zhang, F. K. Guo and Z. Yang,
D^+ D^- hadronic atom and its production in pp and p p̅ collisions,
Phys. Rev. D 105 (2022), 034024.
Xin:2022bzt
Q. Xin, Z. G. Wang and X. S. Yang,
Analysis of the X(3960) and related tetraquark molecular states via the QCD sum rules,
AAPPS Bull. 32 (2022) 1, 37.
Peng:2023lfw
F. Z. Peng, M. J. Yan and M. Pavon Valderrama,
Heavy- and light-flavor symmetry partners of the T_cc^+(3875), the X(3872) and the X(3960) from light-meson exchange saturation,
[arXiv:2304.13515 [hep-ph]].
Gamermann:2007mu
D. Gamermann and E. Oset,
Hidden charm dynamically generated resonances and the e^+ e^- → J/ψ D D̅, J/ψ D D̅^* reactions,
Eur. Phys. J. A 36 (2008), 189-194.
Wang:2019evy
E. Wang, W. H. Liang and E. Oset,
Analysis of the e^+e^- → J/ψ D D̅ reaction close to the threshold concerning claims of a χ_c0(2P) state,
Eur. Phys. J. A 57 (2021), 38.
Belle:2017egg
K. Chilikin et al. [Belle],
Observation of an alternative χ_c0(2P) candidate in e^+ e^- → J/ψ D D̅,
Phys. Rev. D 95 (2017), 112003.
Dai:2015bcc
L. R. Dai, J. J. Xie and E. Oset,
B^0 → D^0 D̅^0 K^0 , B^+ → D^0 D̅^0 K^+ , and the scalar D D̅ bound state,
Eur. Phys. J. C 76 (2016) 3, 121.
Belle:2005rte
S. Uehara et al. [Belle],
Observation of a χ^'_c2 candidate in γγ→ D D̅ production at BELLE,
Phys. Rev. Lett. 96 (2006), 082003.
BaBar:2010jfn
B. Aubert et al. [BaBar],
Observation of the χ_c2(2P) meson in the reaction γγ→ D D̅ at BaBar,
Phys. Rev. D 81 (2010), 092003.
Deineka:2021aeu
O. Deineka, I. Danilkin and M. Vanderhaeghen,
Dispersive analysis of the γγ→ D D̅ data and the confirmation of the D D̅ bound state,
Phys. Lett. B 827 (2022), 136982.
Wang:2020elp
E. Wang, H. S. Li, W. H. Liang and E. Oset,
Analysis of the γγ→ DD̅ reaction and the DD̅ bound state,
Phys. Rev. D 103 (2021), 054008.
Xiao:2012iq
C. W. Xiao and E. Oset,
Three methods to detect the predicted D D̅ scalar meson X(3700),
Eur. Phys. J. A 49 (2013), 52.
Dai:2020yfu
L. Dai, G. Toledo and E. Oset,
Searching for a D D̅ bound state with the ψ (3770) →γ D^0 D̅^0 decay,
Eur. Phys. J. C 80 (2020) 6, 510.
Wei:2021usz
L. L. Wei, H. S. Li, E. Wang, J. J. Xie, D. M. Li and Y. X. Li,
Search for a D D̅ bound state in the Λ_b →Λ DD̅ process,
Phys. Rev. D 103 (2021), 114013.
BESIII:2023bgk
M. Ablikim et al. [BESIII],
Search for a scalar partner of the X(3872) via ψ(3770) decays into γηη' and γπ^+π^- J/ψ,
[arXiv:2305.11682 [hep-ex]].
Xing:2022uqu
Z. P. Xing, F. Huang and W. Wang,
Angular distributions for Λ_b →Λ^*_J (p K^-) J/ψ (→ℓ^+ ℓ^-) decays,
Phys. Rev. D 106 (2022), 114041.
Duan:2023qsg
M. Y. Duan, E. Wang and D. Y. Chen,
Searching for the open flavor tetraquark T^++_cs̅0(2900) in the process B^+→ K^+ D^+ D^-,
[arXiv:2305.09436 [hep-ph]].
Lyu:2023jos
W. T. Lyu, Y. H. Lyu, M. Y. Duan, D. M. Li, D. Y. Chen and E. Wang,
The roles of the T_cs̅0(2900)^0 and D_0^*(2300) in the process B^-→ D_s^+K^-π^-,
[arXiv:2306.16101 [hep-ph]].
Bediaga:2020qxg
I. Bediaga and C. Göbel,
Direct CP violation in beauty and charm hadron decays,
Prog. Part. Nucl. Phys. 114, 103808 (2020).
Wang:2021aql
F. L. Wang, X. D. Yang, R. Chen and X. Liu,
Correlation of the hidden-charm molecular tetraquarks and the charmoniumlike structures existing in the B→ XYZ+K process,
Phys. Rev. D 104 (2021), 094010.
Dai:2018nmw
L. R. Dai, G. Y. Wang, X. Chen, E. Wang, E. Oset and D. M. Li,
The B^+→ J/ψω K^+ reaction and D^∗D̅^∗ molecular states,
Eur. Phys. J. A 55 (2019) no.3, 36.
Zhang:2020rqr
Y. Zhang, E. Wang, D. M. Li and Y. X. Li,
Search for the D^*D̅^* molecular state Z_c(4000) in the reaction B^-→ J/ψρ^0 K^-,
Chin. Phys. C 44 (2020) no.9, 093107.
Wang:2017mrt
E. Wang, J. J. Xie, L. S. Geng and E. Oset,
Analysis of the B^+→ J/ψϕ K^+ data at low J/ψϕ invariant masses and the X(4140) and X(4160) resonances,
Phys. Rev. D 97 (2018), 014017.
LHCb:2021uow
R. Aaij et al. [LHCb],
Observation of New Resonances Decaying to J/ψ K^+ and J/ψϕ,
Phys. Rev. Lett. 127 (2021), 082001.
CDF:2009jgo
T. Aaltonen et al. [CDF],
Evidence for a Narrow Near-Threshold Structure in the J/ψϕ Mass Spectrum in B^+→ J/ψϕ K^+ Decays,
Phys. Rev. Lett. 102 (2009), 242002.
D0:2013jvp
V. M. Abazov et al. [D0],
Search for the X(4140) state in B^+ → J/ψϕ K^+ decays with the D0 Detector,
Phys. Rev. D 89 (2014), 012004.
LHCb:2020bls
R. Aaij et al. [LHCb],
A model-independent study of resonant structure in B^+→ D^+D^-K^+ decays,
Phys. Rev. Lett. 125 (2020), 242001.
LHCb:2020pxc
R. Aaij et al. [LHCb],
Amplitude analysis of the B^+→ D^+D^-K^+ decay,
Phys. Rev. D 102 (2020), 112003.
Belle:2015yoa
A. Vinokurova et al. [Belle],
Search for B decays to final states with the η_c meson,
JHEP 06 (2015), 132
[erratum: JHEP 02 (2017), 088].
Belle-II:2018jsg
E. Kou et al. [Belle-II],
The Belle II Physics Book,
PTEP 2019 (2019), 123C01
[erratum: PTEP 2020 (2020), 029201].
Bhardwaj:2018ffc
V. Bhardwaj [Belle-II],
Prospects in spectroscopy with Belle II,
Springer Proc. Phys. 234 (2019), 181-187.
Xie:2022lyw
J. M. Xie, M. Z. Liu and L. S. Geng,
Production rates of D_s^+ D_s^- and D D̅ molecules in B decays,
Phys. Rev. D 107 (2023), 016003.
Wang:2020pem
Z. Wang, Y. Y. Wang, E. Wang, D. M. Li and J. J. Xie,
The scalar f_0(500) and f_0(980) resonances and vector mesons in the single Cabibbo-suppressed decays Λ_c → p K^+K^- and pπ^+π^-,
Eur. Phys. J. C 80 (2020) 9, 842.
Wang:2021naf
J. Y. Wang, M. Y. Duan, G. Y. Wang, D. M. Li, L. J. Liu and E. Wang,
The a_0(980) and f_0(980) in the process D_s^+ → K^+ K^- π^+,
Phys. Lett. B 821 (2021), 136617.
Liu:2020ajv
W. Y. Liu, W. Hao, G. Y. Wang, Y. Y. Wang, E. Wang and D. M. Li,
Resonances X(4140), X(4160), and P_cs(4459) in the decay of Λ_b→ J/ψΛϕ,
Phys. Rev. D 103 (2021), 034019.
Duan:2020vye
M. Y. Duan, J. Y. Wang, G. Y. Wang, E. Wang and D. M. Li,
Role of scalar a_0(980) in the single Cabibbo suppressed process D^+ →π ^+π ^0η,
Eur. Phys. J. C 80 (2020) 11, 1041.
Zhang:2022xpf
H. Zhang, Y. H. Lyu, L. J. Liu and E. Wang,
Role of the scalar f_0(980) in the process D_s^+ →π^+π^0π^0,
Chin. Phys. C 47 (2023) no.4, 043101.
Li:2020fqp
X. C. Feng, L. L. Wei, M. Y. Duan, E. Wang and D. M. Li,
The a_0(980) in the single Cabibbo-suppressed process Λ_c →π^0η p,
[arXiv:2009.08600 [hep-ph]].
Ali:1998eb
A. Ali, G. Kramer and C. D. Lu,
Experimental tests of factorization in charmless nonleptonic two-body B decays,
Phys. Rev. D 58, 094009 (1998).
|
http://arxiv.org/abs/2307.05116v2 | 20230711085052 | Topological interface states -- a possible path towards a Landau-level laser in the THz regime | [
"Mark O. Goerbig"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
[email protected]
Laboratoire de Physique des Solides, Université Paris Saclay, CNRS UMR 8502, F-91405 Orsay Cedex, France
Volkov-Pankratov surface bands arise in smooth topological interfaces, i.e. interfaces between a topological and a trivial insulator, in addition to the chiral surface state imposed by the bulk-surface correspondence of topological materials. These two-dimensional bands become Landau-quantized if a magnetic field is applied perpendicular to the interface. I show that the energy scales, which are typically in the 10-100 meV range, can be controlled both by the perpendicular magnetic field and the interface width. The latter can still be varied with the help of a magnetic-field component in the interface. The Landau levels of the different Volkov-Pankratov bands are optically coupled, and their arrangement may allow one to obtain population inversion by resonant optical pumping. This could serve as the elementary brick of a multi-level laser based on Landau levels. Moreover, the photons are absorbed and emitted either parallel or perpendicular to the magnetic field, respectively in the Voigt and Faraday geometry, depending on the Volkov-Pankratov bands and Landau levels involved in the optical transitions.
Topological interface states – a possible path towards a Landau-level laser in the THz regime
Mark O. Goerbig
August 12, 2023
==============================================================================================
§ INTRODUCTION
Landau levels (LLs), which arise due to the quantization of the electrons' energy in a strong magnetic field, have been regularly proposed to be a promising system for a frequency-tunable laser in the THz regime <cit.>. Indeed, upon a putative population inversion between the LLs n and n+1 in parabolic bands, one may expect cyclotron emission with a typical frequency Ω_n+1,n=ω_c given by the cyclotron frequency ω_c=eB/m_B, which is directly controlled by the strength of the magnetic field B and the band mass m_B.
In spite of this conceptually appealing proposal, the path to the realization of a working LL laser is barred by strong obstacles that are mainly concerned with population inversion. The latter requires rather long-lived electrons in the excited LL, but their lifetime is strongly reduced by non-radiative recombinations, namely Auger processes that are prominent due to the equidistant LL separation <cit.> (for a detailed discussion of these processes, see Ref. [but19]). In such processes, an electron in the excited LL n+1 can be promoted due to electron-electron interactions to the LL n+2 while the required energy is provided by a simultaneous desexcitation of another electron from n+1 to n. Instead of using one excited electron to emit a photon of frequency ω_c, two electrons in the LL n+1 are thus lost without emission of any photon. Another obstacle equally related to equisitant LLs is reabsorption of cyclotron light due to the transition (n+1)→ (n+2), which is resonant with the (n+1)→ n transition used in the emission of light <cit.>.
Soon after the isolation of graphene, physicists explored this material in cyclotron-emission experiments in the perspective of realizing a LL laser <cit.>. Due to the linearly dispersing bands of graphene electrons in the vicinity of charge neutrality, the LL spectrum is given by E_n=±ħ (v/l_B)√(2n), in terms of the Fermi velocity v≃ 10^6 m/s and the magnetic length l_B=√(ħ/eB)≃ 26 nm/√(B[T]), i.e. the levels are no longer equidistant. While the orders of magnitude with a fundamental gap of ħΩ_1,0∼ 100 meV for magnetic fields B∼ 10 T are promising for possible THz applications, Auger processes remain a relevant source of non-radiative recombination processes also in these relativistic systems <cit.>. For example, while the 1→ 0 transition is no longer in resonance with the neighboring 2→ 1 transition, it is in resonance with the transition 4→ 1 due to the square-root dependence of the LL on the level index n <cit.>. Furthermore, it has been shown that the optical phonon responsible for the G band in graphene (at ∼ 200 meV) also enhances decay processes that are detrimental to population inversion <cit.>. The drawback of resonant transitions and enhanced Auger processes can to some extent be healed, e.g. in gapless HgTe/CdTe quantum wells, where the low-energy electrons are described in terms of so-called Kane fermions. While their zero-field spectrum is similar to that of massless Dirac fermions, LLs with even indices are absent in the spectrum so that some transitions are absent, such as the above-mentioned transition 4→ 1.
An extremely interesting route towards the realization of a LL laser is the use of Dirac materials with a (mass) gap Δ that is on the same order of magnitude as the typical LL spacing, i.e. in the 100 meV range, for systems with a characteristic velocity parameter of v≃ 10^6 m/s. In this case, the LL spectrum is given by
E_λ, n =λ√(Δ^2+2ħ^2 v^2 n/l_B^2),
where λ=± is the band index. Indeed, if Δ∼ħ v/l_B, the LL spectrum is neither (approximately) linear in n and B as it would be in the limit Δ≫ħ v/l_B nor does it follow the square-root dependence of graphene in the opposite limit Δ≪ħ v/l_B. In this case, the absence of simultaneous resonant transitions suppress both reabsorption and non-radiative Auger scattering. First encouraging results in this direction have been obtained in gapped HgTe/CdTe quantum wells <cit.>. Another system in which massive Dirac fermions occur is the interface of a topological and a trivial insulator, in the form of Volkov-Pankratov (VP) states <cit.>. The bulk-surface correspondence for topological materials enforces indeed the occurence of a massless chiral state at such an interface, but it has been shown that the interface spectrum is much richer in systems with smooth interfaces, e.g. when the gap changes over a certain distance ℓ that characterizes the interface width and that is larger than an intrinsic length _C=ħ v/Δ.
In smooth interfaces between a topological and a trivial insulator, one finds a whole family of surface states the spectrum of which is indeed given by <cit.>
ϵ_m(𝐪)≃λħ v√(𝐪^2+2m/l_S^2).
Here, =(q_x,q_y) is the two-dimensional (2D) wave vector in the interface, m denotes the index of the surface band, and l_S=√(ℓ_C) is a characteristic length determining the extension of the interface states in the z direction perpendicular to the interface. Equation (<ref>) is indeed valid as long as the energy of the surface bands at =0 is smaller than the bulk gap, √(2m)ħ v/l_S≤Δ. The latter condition is equivalent to requiring that the interface width ℓ be larger than m times the intrinsic length _C <cit.>. The n=0 surface state is precisely the chiral state that survives in the abrupt limit, ℓ→ 0, while the VP states (for m≠ 0) disappear in the continuum of bulk states as soon as ℓ<_C. Notice that the formation of VP states is a universal property of topological materials that has been studied not only in topological insulators <cit.>, but also in Weyl semimetals <cit.>, graphene <cit.>, and topological superconductors <cit.>.
Very recently, inter-VP transitions have been measured within magneto-optical spectroscopy in Pb_1-xSn_xSe crystals <cit.> in which the Sn concentration determines whether the system is a trivial or a topological (crystalline) insulator <cit.>. Moreover, the concentration determines the size of the bulk gap so that smooth interfaces may be obtained by molecular-beam epitaxy (MBE) in which the Sn concentration is smoothly varied during the growth process, and where the absolute band gap in the topological regime can be designed such as to be identical to that in the trivial insulator <cit.>. This allows for a strong versatility in the fabrication of interfaces of various widths and thus of systems with specially designed fundamental gaps
Δ_VP=√(2)ħ v/l_S=√(2)Δ√(_C/ℓ)=√(2ħ vΔ/ℓ)
between the m=1 VP and the chiral (m=0) surface states.
In the present paper, I argue that smooth topological interfaces, such as in the above-mentioned Pb_1-xSn_xSe crystals, may be extremely promising systems for the realization of long-lived population inversion if a magnetic field is applied perpendicular to the interface that quantizes the 2D electronic motion in the interface into LLs. The main reason for this expectation is the fact that VP bands provide us with several families of LLs that can to some extent be brought into close energetic proximity with LLs of the chiral surface band. This would allow for devices similar to three- or four-level lasers in which population inversion could be more easily achieved than in the usual LL setup. Furthermore, optical pumping and radiative desexcitation can be chosen to happen in different directions via an intelligent choice of the involved transitions. Indeed, while the optical selection rules in the Faraday geometry impose that the emitted or absorbed photons propagate in the direction of the magnetic field for a transition coupling the LLs n and n± 1, it has been shown previously <cit.> that such transitions must obey an optical selection rule m→ m for the VP states. The selection rules are inverted in the Voigt geometry, where the emitted or absorbed photon propagates in a direction perpendicular to the magnetic field. The underlying reason for these selection rules and their geometry dependence is an intriguing analogy between the spatially changing gap parameter and LL quantization. Indeed, the spatially varying gap parameter can be viewed as a fake magnetic field that is oriented in the plane of the interface, and the characteristic length l_S plays the role of an effective magnetic length. Via an intelligent choice of the geometry of a cavity hosting the topological material and the involved transitions, one may therefore expected to obtain a strong cyclotron emission in the direction of the interface while pumping the system with photons propagating perpendicular to the interface, or vice versa.
§ VOLKOV-PANKRATOV STATES AND OPTICAL SELECTIONS IN A MAGNETIC FIELD
Let us first review some basic features of VP states and their coupling to light in the presence of a magnetic field along the lines of Ref. [Lu_2019]. We consider an interface between a trivial insulator in the lower part of the device (z<-ℓ) and a topological one in the upper part (z>ℓ) (see inset of Fig. <ref>). Due to the magnetic field, the VP bands (<ref>) get quantized into LLs whose spectrum reads (for n≠ 0)
E_λ, m,n≠ 0 = λħ v√(2|m|/l_S^2 + 2|n|/l_B^2),
where we have considered the magnetic field to be oriented perpendicular to the interface (in the z-direction). For notational simplicity, we merge the band index λ from now on with the VP and LL indices so that (-m,-n) corresponds to the n-th LL in the m-th VP band of negative energy (λ=-), whence the modulus of the indices in the spectrum (<ref>) to avoid confusion. Due to the parity anomaly, the above spectrum is only valid for LLs with an index n≠ 0, while the n=0 LLs of the VP bands stick either to the bottom of the positive-energy bands (ξ=+) or to the top of the negative-energy VP state (ξ=-)
E_m,n=0 = ξħ v√(2|m|/l_S^2),
depending on the chirality index ξ. The latter can be changed if we change the order between the topological and the trivial insulator (interface between a topological insulator in the lower part and a trivial one in the upper part), and it can also be altered easily by changing the orientation of the magnetic field. The LL spectrum for the m=0 and the m=1 VP states [both for the conduction (m=+1) and the valence band (m=-1)] are shown in Fig. <ref>.
Notice that the surface-state-width parameter l_S can be decreased effectively with the help of an inplane magnetic field B_∥, l_S(B_∥=0)^-4=1/ℓ^2_C^2→ l_S^-4=l_S(B_∥=0)^-4 + (eB_∥ /ħ)^2 so that the effective energy separation between the VP states, given by Eq. (<ref>) is increased to <cit.>
Δ_VP = √(2ħ vΔ/ℓ)(1+ e^2v^2B_∥^2 ℓ^2/Δ^2)^1/4.
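As a numerical illustration of the expressions above, the following Python sketch evaluates the surface Landau levels and the in-plane-field-dependent surface width; the parameter values (v, ℓ, ℓ_C, writing ℓ_C = ħ v/Δ for the intrinsic length introduced in the Introduction) are purely illustrative inputs and are not meant to represent a specific sample.

import numpy as np

hbar = 1.054571817e-34   # J s
e    = 1.602176634e-19   # C
meV  = 1e-3 * e          # J

def l_S(ell, ell_C, B_par=0.0):
    """Effective surface width, reduced by an in-plane field B_par, as in the relation above."""
    return (1.0 / (ell**2 * ell_C**2) + (e * B_par / hbar)**2) ** (-0.25)

def l_B(B):
    """Magnetic length for a perpendicular field B."""
    return np.sqrt(hbar / (e * B))

def E_mn(m, n, v, ell, ell_C, B, B_par=0.0, lam=+1):
    """Landau level n (n != 0) of the m-th VP band, following the surface LL spectrum above."""
    return lam * hbar * v * np.sqrt(2 * abs(m) / l_S(ell, ell_C, B_par)**2
                                    + 2 * abs(n) / l_B(B)**2) / meV    # in meV

# illustrative (assumed) numbers
v, ell, ell_C = 1.0e6, 100e-9, 6e-9
B_match = hbar / (e * l_S(ell, ell_C)**2)            # field at which l_B = l_S
E_VP0 = hbar * v * np.sqrt(2.0) / l_S(ell, ell_C) / meV   # (m=1, n=0) level with xi = +
E_LL1 = E_mn(0, 1, v, ell, ell_C, B_match)                # (m=0, n=1) level
print(E_VP0, E_LL1, B_match)   # the two levels coincide when l_B = l_S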
The writing of Eq. (<ref>) unveils the reminiscence of LLs and VP states. Indeed, if we linearize the gap inversion over the smooth interface by a linear function connecting a gap parameter of +Δ in the trivial insulator (at z<-ℓ) and -Δ in the topological insulator (at z>ℓ), i.e. -Δ z/ℓ, the system may be mapped to the LL problem of massive Dirac fermions <cit.>. Within this analogy, the variation of the gap parameter in the z-direction may be viewed as a vector potential that stems from a “fake” magnetic field oriented in the interface, while the physical magnetic field is oriented in the z-direction. Notice furthermore that the above description can easily be generalized to a situation where the gap in the topological insulator is not of the same size as that in the trivial one <cit.>, in which case the effective interface width l_S is determined by an average between the two gaps. The analogy between interface width and magnetic field finally yields a physical understanding of the optical selection rules between the levels (m,n) and (m',n'), where henceforth the first index indicates the VP band and the second one the physical LL.
In the Faraday geometry, in which the absorbed or emitted photon propagates in the direction of the magnetic field, angular-momentum conservation imposes that the only optically active transitions involve adjacent LL indices, n→ n'=± (n± 1), regardless of the band index λ. This needs to be contrasted to the Voigt geometry, where the photon propagates in the plane perpendicular to the magnetic field and where the LL index remains unchanged n→ n'=± n. Since the fake magnetic field that yields the VP bands is oriented in the interface, Voigt and Faraday geometry are inverted, and a photon propagating perpendicular to the interface couples VP bands with the same index (m→ m'=± m) while a photon with a wave vector in the interface couples adjacent VP bands [m→ m'= ± (m± 1)]. As in the LL problem, the selection rules, which are summarized in the table above, do not depend on the band index. In both cases, VP states and LLs, it is the circular polarization of the photon determines which of the adjacent levels or bands are optically coupled.
§ THREE-LEVEL SCHEME
Let us first illustrate schematically the different emission processes in terms of resonant (optical) pumping within a three-level picture to fix some basic ideas. In a first step, we consider the situation depicted in Fig. <ref>(a) where the LL energy scale √(2)ħ v/l_B is slightly larger than the VP gap Δ_VP given in Eq. (<ref>), i.e. the magnetic length is slightly smaller than the effective surface width l_S. We show below in Sec. <ref> that this situation can be easily achieved e.g. in MBE-grown Pb_1-xSn_xSe crystals. In this case, the n=1 LL of the chiral m=0 surface state is slightly above the n=0 level of the upper VP band with an index m=1.
In Fig. <ref>(a), we consider optical pumping in the Faraday geometry, where the light frequency is resonant with the (m=0,n=0)→ (m=0,n=1) transition. If the target level is only slightly above the lowest LL of the m=1 VP band, (m=1,n=0), one may expect rapid non-radiative decay of the excited electrons to the latter level. These electrons may then decay to the zero-energy level (m=0,n=0) by emitting light of the frequency ω=√(2) v/l_S in the Voigt geometry, i.e. absorbed and emitted photons, even if they may be almost resonant, propagate in perpendicular directions. While the magnetic field does then not allow one to control the frequency of the transition, which is determined by the interface width ℓ, it allows us to bring the levels (m=1,n=0) and (m=0,n=1) into close energetic vicinity and thus to increase the transition rate between the two levels, which is proportional to
Γ∼ (1/τ)/(1/τ^2 + 2v^2(1/l_B -1/l_S)^2),
if we consider Lorentzian level broadening due to a dephasing time τ <cit.>. For a typical value of τ∼ 100 fs, the level broadening is then on the order of some meV. Notice, however, that the frequency of the emitted light may to some extent be varied with the help of an inplane magnetic field, according to Eq. (<ref>).
Similarly, one may use the Voigt goemetry for pumping the transition (m=0,n=0)→ (m=1,n=0). If the latter is now slightly above the (m=0,n=1) level [see Fig. <ref>(b)], i.e. for smaller magnetic fields with √(2)ħ v/l_B< Δ_VP, the n=1 LL of the chiral surface band may be populated by non-radiative decay processes, and one may expect a population inversion between the (m=0,n=0) and (m=0,n=1) levels, with cyclotron emission at the fundamental frequency ω_C=√(2) v/l_B. As before, the emitted photon propagates then in a direction perpendicular to that of the absorbed photon, but Faraday and Voigt geometries are inverted.
§ FOUR-LEVEL SCHEME
We now investigate a possible four-level scheme for population inversion, as shown in Fig. <ref>. For the sake of the argument, we consider the n=0 LLs of the VP bands now to be situated in the negative-energy branch. As already mentioned, this can easily be achieved by switching the orientation of the magnetic field. Let us choose optical pumping by light in the Voigt geometry that is resonant with the transition (m=0,n=-1)→ (m=1,n=1). In contrast to the three-level scheme discussed in the previous section, the target level is no longer in close vicinity of the level below it, that is (m=0,n=1). However, both are optically coupled, and an electron can transit from (m=1,n=1) to (m=0,n=1) by emitting a photon, again in the Voigt geometry. While this photon is sacrificed in the present scheme, its emission allows for an enhanced population of the n=1 LL in the chiral surface band. This is particularly interesting since the transition (m=0,n=1)→ (m=0,n=0) to the central zero-energy level, which we consider to be unpopulated or only sparsely populated, is resonant with the (m=0,n=0)→ (m=0,n=-1) transition to the original level that serves as the starting point of the pumping process. Under strong pumping and thus a strong depletion of the (m=0,n=-1) level, it is therefore possible to emit two photons at the cyclotron frequency.
It is noteworthy that the above-mentioned resonant cyclotron transitions are also involved in non-radiative Auger processes. Such Auger processes have been shown to be detrimental to population inversion in GaAs and graphene. Here, however, this is not the case. Indeed, one of the two electrons that take part in the Auger process, where both electrons originally reside in the (m=0,n=0) LL, is kicked back into the (m=0,n=1) LL thus maintaining the fertile population inversion. While the electron that transits simultaneously to the level (m=0,n=-1) is energetically lost, i.e. it does not emit a photon, the first electron emits another photon at the cyclotron frequency before it can take part in another Auger process or radiatively transit to (m=0,n=-1).
§ POSSIBLE REALIZATION IN PB_1-XSN_XSE CRYSTALS
While the above arguments are not restricted to a particular topological insulator, it is useful to discuss the orders of magnitude for what is probably the best-controlled system in which VP states occur, namely MBE-grown Pb_1-xSn_xSe crystals <cit.>. As mentioned in the introduction, the MBE growth allows one to obtain interfaces with a well-controlled interface width in which the VP states obey to great accuracy the dispersion (<ref>) <cit.>. Most saliently, the Sn concentration x in the Pb substitution allows one to tune the electronic nature of the material. While, for x=0, the system is a trivial band insulator, it becomes a crystalline topological insulator above a critical concentration on the order of x_c≃ 0.12. Moreover, the Sn concentration determines the size of the gap, which is on the order of 90 meV in the trivial insulator at x=0 <cit.>. The choice x=0.24 allows one to obtain the same magnitude for the gap in the topological insulator (2Δ∼ 90 meV) <cit.>, but even larger gaps on the order of 2Δ∼ 200 meV may be obtained upon variation of temperature and strain on the crystals <cit.>. Magneto-optical experiments indicate that the fundamental VP gap (<ref>) scales as <cit.>
Δ_VP≃ 45 meV/√(ℓ/100 nm),
and samples with interface widths between ℓ=50 and 200 nm have been obtained, while the intrinsic length has been estimated to be ℓ_C≃ 6 nm, so that the effective surface width varies between l_S∼ 17 nm and l_S∼ 35 nm. In order for the magnetic length to be on the same order of magnitude as l_S – the situation considered in the present paper – one would require magnetic fields in the range 0.5–3 T, which are easily accessible experimentally.
Finally, the Fermi velocity is roughly half of that in graphene so that v/c∼ 1/600, in terms of the speed of light c. If we consider the fundamental cyclotron resonance associated with the transition (m=0,n=1)→ (m=0,n=0), the energy of the transition is thus roughly
√(2)ħ v/l_B≃ 20 meV×√(B[T]).
This implies a transition rate <cit.>
Γ_(m=0,n=1)→ (m=0,n=0) = 2α(v/c)^2ω
≃ 2.4 × 10^6 s^-1×√(B[T]),
in terms of the fine-structure constant α=1/137, if we consider dipolar light coupling. This is roughly a factor of four smaller than in graphene due to the reduced Fermi velocity v. Notice that interaction-induced decay processes take place at much shorter time scales, typically in the fs range. In the case of almost resonant levels, as discussed in the previous section [see e.g. the levels (m=1,n=0) and (m=0,n=1) in Fig. <ref>(a) and (b)], the decay rate from the higher to the lower level is on the order of <cit.>
Γ∼2π/ħ(e^2/ϵ l_B)^2 (τ/ħ)≃ϵ^-1× 10^16 s^-1× B[T],
where ϵ is the dielectric constant of the host material.
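As a quick numerical cross-check of the orders of magnitude quoted in this section (a sketch only, using nothing beyond the numbers given above and standard constants):

```python
import numpy as np

# Cross-check of the scales quoted above: magnetic length, the field window
# for which l_B matches l_S = 17...35 nm, the VP gap Delta_VP(l), and the
# fundamental cyclotron energy sqrt(2) hbar v / l_B with v = c/600.
hbar, e, c = 1.054571817e-34, 1.602176634e-19, 2.99792458e8
v = c / 600.0

def l_B_nm(B):                       # magnetic length in nm
    return np.sqrt(hbar / (e * B)) * 1e9

for B in (1.0, 2.0, 4.0):
    E_meV = np.sqrt(2) * hbar * v / (l_B_nm(B) * 1e-9) / e * 1e3
    print("B = %.0f T: l_B = %.1f nm, sqrt(2) hbar v/l_B = %.0f meV"
          % (B, l_B_nm(B), E_meV))

for l_S in (17.0, 35.0):             # fields for which l_B = l_S
    print("l_S = %.0f nm matched at B = %.1f T" % (l_S, hbar / (e * (l_S * 1e-9) ** 2)))

for ell in (50.0, 100.0, 200.0):     # VP gap, Delta_VP = 45 meV / sqrt(l/100 nm)
    print("l = %.0f nm: Delta_VP = %.0f meV" % (ell, 45.0 / np.sqrt(ell / 100.0)))
```

The output reproduces, to the quoted accuracy, the ~20 meV×√(B[T]) cyclotron scale as well as the 0.5–3 T field window for which l_B is comparable to l_S.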
Finally, one should notice that the (m=1,n=1) level, used e.g. in the above four-level scheme, can also be brought into close energetic vicinity of the bottom of the bulk conduction band. While the schemes discussed above in terms of resonant pumping in a specific geometry themselves require a THz source, one might then alternatively use the conduction band as a target of pulsed or continuous pumping at higher energies and rely on rapid decay processes towards the band bottom, which then serves as a reservoir for the (m=1,n=1) level. However, to test this possibility, one would need to rely on a decay towards this target level at the interface that is more rapid than the bulk recombination. The quantitative study of these decay processes is beyond the scope of the present paper.
§ CONCLUSIONS
In conclusion, I have argued that the particular surface-state spectrum that is formed in smooth interfaces between a trivial and a topological insulator is a promising path towards the LL laser. In addition to the chiral surface state, which may be described in terms of a massless Dirac fermion, VP states are formed if the gap parameter varies over a width ℓ that must be larger than the intrinsic length scale ℓ_C=ħ v/Δ, in terms of the bulk gap Δ. These surface bands have the form of a massive 2D Dirac fermion, and each of the bands gives rise to LLs if a magnetic field is applied perpendicular to the interface. One is thus confronted with families of LLs whose energies can to a large extent be controlled, by the magnetic field for the LL separation and by the interface width for the energy separation between the VP bands. While the latter is fixed by the sample growth, it can still be varied in situ with the help of an in-plane magnetic field that effectively reduces the interface width and thus increases the gap between the VP bands. The magnetic field does not only allow one to change the cyclotron frequency, at which light is emitted in certain setups, but also to bring LLs associated with different VP bands into close energetic vicinity. When the gap between the VP bands is on the same order of magnitude as the typical LL separation – this situation can be easily achieved experimentally, e.g. in Pb_(1-x)Sn_xSe crystals – the LL spectra are neither equidistant nor do they follow a square-root law, so that both Auger and reabsorption processes are maximally suppressed.
Another highly unusual and, for devices, potentially extremely fertile aspect of light emission in VP LLs is the direction of propagation of the absorbed and emitted photons. Indeed, photons with a wave vector perpendicular to the interface (Faraday geometry) are absorbed and emitted in transitions involving adjacent LL indices n and n± 1 but the same VP band index m, regardless of whether the LLs are formed in the positive- or negative energy branch of the VP bands. On the other hand, photons propagate inside the interface (Voigt geometry) for transitions (m,n)→ (m± 1,n). This would allow for a smart design of the Fabry-Pérot cavities such that the extension in the z- and x/y-directions match the photon wavelength of the respective transitions, especially if pumping and emission are associated with the two different geometries (Faraday and Voigt). Finally, I have argued that the often detrimental Auger processes may be less efficient in the proposed setup so that they do not hinder the population inversion required for a LL laser, in contrast to most proposals for LL lasers.
I would like to thank Gauthier Krizman, Louis-Anne de Vaulcher, and Milan Orlita for fruitful discussions.
|
http://arxiv.org/abs/2307.07587v1 | 20230714192632 | Modulated logarithmic Sobolev inequalities and generation of chaos | [
"Matthew Rosenzweig",
"Sylvia Serfaty"
] | math.PR | [
"math.PR",
"math-ph",
"math.AP",
"math.FA",
"math.MP",
"39B62, 82C22, 82B40, 82C40, 35Q70, 35Q82, 94A17"
] |
We consider mean-field limits for overdamped Langevin dynamics of N particles with possibly singular interactions.
It has been shown that a modulated free energy method can be used to prove the mean-field convergence or propagation of chaos for a certain class of interactions, including Riesz kernels. We show here that generation of chaos, i.e. exponential in time convergence to a tensorized (or iid) state starting from a nontensorized one, can be deduced from the modulated free energy method provided a uniform-in-N “modulated logarithmic Sobolev inequality" holds. Proving such an inequality is a question of independent interest, which is generally difficult.
As an illustration, we show that uniform modulated logarithmic Sobolev inequalities can be proven for a class of situations in one dimension.
A large `Active Magnetic Shield' for a high-precision experiment
C. AbelSussex
N. J. AyresETH
G. BanCAEN
G. BisonPSI
K. BodekCracow
V. BondarETH,e8
T. BouillaudLPSC
E. ChanelBern,e7
J. ChenCAEN
W. ChenETH,PSI
P.-J. ChiuETH,PSI,e6
C. B. CrawfordKentucky
M. DaumPSI
C. B. DoorenbosETH,PSI
S. EmmeneggerETH,e5
L. Ferraris-BouchezLPSC
M. FertlMainz
A. FratangeloBern
W. C. GriffithSussex
Z. D. GrujicSerbia
P. HarrisSussex
K. KirchETH,PSI,e9
V. KletzlETH,PSI
P. A. KossLeuven,e4
J. KrempelETH,e10
B. LaussPSI
T. LefortCAEN
P. MullanETH
O. Naviliat-CuncicCAEN
D. PaisETH,PSI
F. M. PiegsaBern
G. PignolLPSC
M. RawlikETH,e3
I. RienäckerPSI
D. RiesPSI
S. RocciaLPSC
D. RozpedzikCracow
W. Saenz-ArevaloCAEN
P. Schmidt-WellenburgPSI
A. SchnabelPTB
E. P. SegarraPSI
N. SeverijnsLeuven
T. SheltonKentucky
K. SvirinaLPSC
R. Tavakoli DinaniLeuven
J. ThorneBern
R. VirotLPSC
N. YazdandoostMainz2
J. ZejmaCracow
N. ZiehlETH
G. ZsigmondPSI
§ INTRODUCTION
Consider a canonical Gibbs measure for N particles with energy ℋ_N(X_N), with X_N ≔ (x_1, …, x_N), x_i∈ℝ^d, of the form
dℙ_N,β(X_N)= 1/Z_N,β e^-βℋ_N(X_N) dx_1… dx_N.
It is well known that if ℙ_N,β satisfies a Poincaré (or spectral gap) or logarithmic Sobolev inequality (LSI), with a constant independent of N, then the joint law of the particles under the overdamped Langevin (Glauber) dynamics
dx_i^t = -∇_i ℋ_N(X_N^t) dt + √(2/β) dW_i^t, i∈{1,…,N},
where the W_i^t are independent standard Brownian motions, converges exponentially fast in time to the steady state. See, for instance, <cit.>. A Poincaré inequality or LSI is satisfied as soon as ℋ_N satisfies a uniform strict convexity condition of the form Hess ℋ_N ≥ c I_dN×dN with c>0, with the Poincaré/LSI constant only depending on c <cit.>. Proving uniform LSIs meaningfully beyond this uniformly convex case is in general hard and the object of current efforts. We refer to <cit.> for some instances of progress. For a taste of the extensive literature on LSIs, we refer to <cit.>.
In this note, we are interested in the particular case of pair interaction energies of the form
ℋ_N(X_N)= 1/2N∑_1≤ i≠ j≤ N g(x_i,x_j) + ∑_i=1^N V(x_i),
where again X_N = (x_1, …, x_N ) ∈(ℝ^d)^N; g:(ℝ^d)^2→[-∞,∞] is some symmetric interaction potential belonging to a class to be specified later and is similar to that considered in <cit.>, which includes repulsive Coulomb and Riesz interactions of the form
g(x,y) = -log|x-y|, s=0
1/s |x-y|^-s, s<d,
as well as moderately attractive ones; and V is some confinement potential.
In that case, the overdamped Langevin dynamics is of the form
dx_i^t = (-1/N∑_1≤ j≤ N : j≠ i∇_1 g(x_i^t,x_j^t) -∇ V(x_i^t)) dt + √(2/β)dW_i^t
x_i^t|_t=0 = x_i^0
i∈{1,…,N},
with x_i^0∈ℝ^d the pairwise distinct initial positions. Here, ∇_1 denotes the gradient with respect to the first argument of g. The mean-field limit, or equivalently propagation of chaos, for such evolutions has been proved for sufficiently regular g by many classical methods <cit.>, for possibly singular g (with V=0) by the relative entropy method in <cit.>, and for even more singular and Coulomb/Riesz-like g by the modulated free energy method <cit.> using the modulated energy of <cit.>. We will recall these methods below, but suffice it to say that propagation of chaos means that if the initial data is distributed according to the probability distribution f_N^0(X_N)
= μ^0(x_1) …μ^0(x_N) on (ℝ^d)^N (i.e., the particles are iid with common law μ^0), then the solution f_N^t of the N-particle Liouville/forward Kolmogorov equation associated to the dynamics (<ref>) is such that
f_N,k^t⇀ (μ^t )^⊗ k as N→∞,
where f_N,k^t denotes the k-point marginal of f_N^t and μ^t is a solution to the mean-field evolution[In (<ref>) and the remainder of the paper, we abuse the convolution notation by defining g∗μ(x) ≔ ∫_ℝ^d g(x,y)dμ(y)=∫_ℝ^d g(y,x)dμ(y), since g is assumed to be symmetric.]
∂_t μ^t - div (( ∇ g*μ^t + ∇ V) μ^t)=1/βΔμ^t
μ^t|_t=0= μ^0.
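For concreteness, here is a minimal simulation sketch of the particle dynamics (<ref>) (not taken from the paper: the smooth Gaussian interaction standing in for g, the quadratic confinement V(x)=x^2/2, and all parameter values are illustrative assumptions), using a plain Euler–Maruyama discretization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (assumptions, not the paper's kernels): a smooth
# bounded interaction g(x,y) = exp(-(x-y)^2/2) and confinement V(x) = x^2/2.
def grad1_g(x, y):                  # d/dx g(x,y)
    return -(x - y) * np.exp(-0.5 * (x - y) ** 2)

def grad_V(x):
    return x

N, beta, dt, n_steps = 200, 2.0, 1e-3, 5000
x = rng.normal(size=N)              # iid initial data with common law mu^0 = N(0,1)

for _ in range(n_steps):
    pair = grad1_g(x[:, None], x[None, :])
    np.fill_diagonal(pair, 0.0)     # exclude the j = i term
    drift = -pair.mean(axis=1) - grad_V(x)   # -(1/N) sum_{j != i} grad_1 g - grad V
    x = x + drift * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=N)

print("empirical mean/variance at the final time:", x.mean(), x.var())
```

As N grows, the empirical measure of such a simulation should track the solution μ^t of (<ref>), which is precisely the content of propagation of chaos.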
The convergence (<ref>) is for fixed k, as N →∞. There has been recent progress on understanding the optimal rate of this convergence in the context of the relative entropy method <cit.>. The modulated free energy method yields convergence in relative entropy, which in turn implies convergence of all the fixed marginals. Here, the (normalized) relative entropy is defined by
H_N(f_N|g_N) ≔ 1/N∫_(ℝ^d)^N log(f_N/g_N) df_N.
There has also been progress on showing bounds for the relative entropy which vanish as N→∞ and hold uniformly in time, hence proving uniform-in-time propagation of chaos in <cit.>. Informally, the distance between the laws f_N^t, (μ^t)^⊗ N does not grow arbitrarily large as time becomes large.
The notion of generation of chaos, a term coined recently by Lukkarinen <cit.>, consists in a similar convergence as time gets large, even when the initial data f_N^0 is not tensorized, i.e. does not exhibit chaos or independence. Interpreted in an entropic sense (see <cit.> for a discussion of various notions of chaos), we have generation of chaos if H_N(f_N^t |(μ^t) ^⊗ N) → 0 as t →∞, uniformly in N, and without any smallness assumptions on H_N(f_N^0|(μ^0)^⊗ N). This is what we wish to demonstrate here holds, under a uniform-in-N modulated LSI condition, that we will define below.
§.§ Modulated energies and modulated Gibbs measures
Before going further, let us review the notion of modulated energy. This object was first introduced as a next-order electric energy in <cit.> and used in the dynamics context as a modulated energy in <cit.> and following works—in the spirit of <cit.>.
Given a probability density μ on ℝ^d,
we define the modulated energy of the configuration X_N as
F_N(X_N, μ) ≔ 1/2∫_(ℝ^d)^2∖Δ g(x,y) d(1/N∑_i=1^N δ_x_i- μ) (x)
d(1/N∑_i=1^N δ_x_i- μ) (y) ,
where Δ denotes the diagonal in (ℝ^d)^2. This is the total interaction of the system of N discrete charges at x_i against a negative (neutralizing) background charge μ, with the self-interaction of the points, which is infinite if g(x,x)=∞,[If g(x,x) is finite, then the renormalization is unnecessary. See <ref> for elaboration.] removed.
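As an illustration of the definition (for a kernel that is finite on the diagonal, so that no renormalization is needed), the following sketch evaluates F_N(X_N,μ) term by term by Monte Carlo; the Gaussian kernel and Gaussian μ are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x, y):                                   # bounded smooth stand-in kernel
    return np.exp(-0.5 * (x - y) ** 2)

def modulated_energy(x, mu_sampler, n_mc=20000):
    """Monte Carlo estimate of F_N(X_N, mu) for a kernel finite on the diagonal."""
    N = len(x)
    gxx = g(x[:, None], x[None, :])
    dd = (gxx.sum() - np.trace(gxx)) / N ** 2               # (1/N^2) sum_{i != j}
    y = mu_sampler(n_mc)
    cross = g(x[:, None], y[None, :]).mean(axis=1).mean()   # (1/N) sum_i g*mu(x_i)
    y1, y2 = mu_sampler(n_mc), mu_sampler(n_mc)
    cc = g(y1, y2).mean()                                   # iint g dmu dmu
    return 0.5 * (dd - 2.0 * cross + cc)

x = rng.normal(size=100)                       # a configuration sampled from mu
print(modulated_energy(x, lambda n: rng.normal(size=n)))
```

For X_N drawn iid from μ, the expectation of this quantity is -(1/2N)∬ g dμ⊗², in line with the computation carried out in the appendix.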
Given the densityμ, we can also define the modulated Gibbs measure
ℚ_N,β(μ) 1/K_N,β(μ) e^-β N F_N(, μ) dμ(x_1) … dμ(x_N),
where
K_N,β(μ) ∫_(^)̣^N e^-β N F_N(, μ) dμ(x_1) … dμ(x_N)
is the associated partition function. An example of use of such a modulated Gibbs measure is provided in <cit.> in the study of (<ref>) for the energy (<ref>) in the case whereis the Coulomb interaction.
Following for instance <cit.>, we may introduce in the context of (<ref>) the thermal equilibrium measure μ_β, which is defined as the minimizer among probability densities of the mean-field free energy
ℰ_β(μ) ≔ 1/2∫_(ℝ^d)^2 g(x,y) dμ(x)dμ(y)+ ∫_ℝ^d V(x) dμ(x) + 1/β∫_ℝ^d logμ(x)dμ(x).
If V grows sufficiently fast at infinity, then ℰ_β has a unique minimizer, which is characterized by the existence of a constant c_β∈ℝ such that
g* μ_β + V + 1/βlogμ_β=c_β in ℝ^d.
In the Coulomb case, <cit.> studied how μ_β converges to the usual equilibrium measure as β→∞.
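Numerically, the characterization above can be turned into the self-consistent iteration μ ↦ e^{-β(V+g∗μ)}/Z. The following one-dimensional sketch (with the illustrative assumptions V(x)=x^2/2 and a regularized logarithmic kernel, and with a damped update whose convergence is not guaranteed in general) approximates μ_β on a grid.

```python
import numpy as np

# 1d sketch of the thermal equilibrium measure: iterate mu -> exp(-beta(V+g*mu))/Z,
# which is the fixed-point form of  g*mu_beta + V + (1/beta) log mu_beta = c_beta.
# Assumed ingredients: V(x) = x^2/2 and g(x,y) = -log|x-y|, regularised at 0.
beta, L, n = 2.0, 4.0, 600
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 0.5 * x ** 2
G = -np.log(np.abs(x[:, None] - x[None, :]) + 1e-3)

mu = np.exp(-beta * V)
mu /= mu.sum() * dx
for _ in range(3000):
    U = G @ mu * dx                            # g * mu on the grid
    new = np.exp(-beta * (V + U))
    new /= new.sum() * dx
    mu = 0.95 * mu + 0.05 * new                # damped update

U = G @ mu * dx
c = U + V + np.log(np.maximum(mu, 1e-300)) / beta
bulk = np.abs(x) < 1.0
print("variation of g*mu + V + (1/beta) log mu on the bulk:", np.ptp(c[bulk]))
```

If the iteration has converged, the printed variation is small, reflecting that the multiplier c_β is constant on the support of μ_β.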
The thermal equilibrium measure allows, as seen in <cit.>, for a nice splitting of the energy and thus of the Gibbs measure, as follows. For any, using (<ref>), (<ref>), and direct computations, we have
ℋ_N()= N ℰ_β(μ_β) + N F_N(, μ_β) - 1/β∑_i=1^N logμ_β(x_i).
Inserting this identity into (<ref>), we find that
dℙ_N,β^V(X_N) = (e^-β Nℰ_β(μ_β) /Z_N,β^V) e^-β N F_N(X_N, μ_β) dμ_β(x_1)… dμ_β(x_N).
In other words, comparing with (<ref>), we have found that
ℙ_N,β^V = ℚ_N,β(μ_β)
and
Z_N,β^V= K_N,β(μ_β) e^-β N ℰ_β(μ_β).
Thus, the Gibbs measure is itself a modulated Gibbs measure, relative to the thermal equilibrium measure.
Conversely, given a probability measureμ, it is easy to see the modulated Gibbs measureℚ_N,β(μ)as a Gibbs measure through a change of the confining potential. Following (<ref>), let
V_μ,β ≔ - g* μ - 1/βlogμ.
Then retracing the steps of the splitting formula above, one has
ℚ_N,β(μ)= ℙ_N,β^V_μ, β.
With the rewriting (<ref>), a crucial condition, appearing in all that follows, is
|log K_N,β(μ) |=o(N)
with ao(N)uniform inβ∈[β_0,2β_0], for some fixedβ_0, which corresponds for instance to the “large deviations estimates" in <cit.>. We will call it a smallness of the free energy. This condition—and even a stronger quantitative one—can be proven in the Riesz cases (<ref>) and for bounded continuous interactions. We give a short proof of this fact in the appendix. In the attractive log case, it is proven in <cit.> and later streamlined in <cit.>.
In several cases of interest, including in particular (<ref>) (cf. <cit.>),F_Nis positive up to a small additive constant
and controls a form of distance (e.g., a squared Sobolev norm). In such cases one may easily obtain a concentration estimate aroundμas follows.[If this is true for -F_N instead of F_N, the same reasoning below applies, using 2β and β instead of β/2 and β in (<ref>).]
By definition (<ref>) ofℚ_N,β(μ), we may rewrite the exponential moments of the modulated energyF_Nas
log_ℚ_N,β(μ)[ e^β/2 N F_N(, μ)] = logK_N,β/2(μ)/K_N,β(μ).
If (<ref>) holds, we obtain the exponential moment control
| log_ℚ_N,β(μ)[ e^β/2 N F_N(, μ) ]|≤ o(N).
Thus, using the almost positivity ofF_Nand the fact that it controls a squared distance between the empirical measure and reference density, this provides a concentration estimate aroundμand implies a law of large numbers in the form
𝔼_ℚ_N,β(μ)[1/N∑_i=1^N δ_x_i-μ^2] → 0,
where·is a suitable norm. By standard arguments, this convergence also implies propagation of chaos for the statistical equilibriumℚ_N,β(μ)(see for instance <cit.>):
ℚ_N,β^(k)(μ) []μ^⊗ k as N→∞,
whereℚ_N,β^(k)(μ) denotes thek-point marginal ofℚ_N,β(μ)andkis fixed.
§.§ Modulated free energy
We may now define the modulated free energy, as introduced in <cit.>. Given a reference probability density μ on ℝ^d as above and a probability density f_N on (ℝ^d)^N, the modulated free energy is defined by
E_N(f_N, μ) ≔ 1/βH_N(f_N|μ^⊗ N) +𝔼_f_N[F_N(X_N,μ)],
where H_N is the relative entropy as in (<ref>) and 𝔼_f_N denotes the expectation with respect to the measure f_N, viewing the configuration X_N as a random variable.
E_N(f_N, μ)= 1/βH_N(f_N| ℚ_N,β(μ)) - log K_N,β(μ)/β N.
In other words, up to a constant related to the smallness of free energy condition (<ref>), the modulated free energy is another relative entropy. Note that this provides an easy proof of the
fact that E_N(f_N, μ) is essentially positive if the smallness of free energy condition
(<ref>) holds.
Moreover,
controlling the relative entropy from f_N to ℚ_N,β proves closeness of the particle density
to ℚ_N,β(μ) and is, in reality, more precise than the mean-field limit and propagation of chaos provided by the control of H_N(f_N |μ^⊗ N). As t →∞, the solution μ^t to (<ref>) converges to the thermal equilibrium measure μ_β, and ℚ_N,β(μ_β) is, as already noticed in (<ref>), equal to ℙ_N,β, so we retrieve the fact, provided by usual LSI, that there is convergence in large time to ℙ_N,β, the invariant measure for the dynamics (<ref>). See <ref> below for a further discussion on the advantages of ℚ_N,β(μ) over ℙ_N,β.
Finally, if one wishes to retrieve closeness of f_N to μ^⊗ N, one may either use a control of the
negative part of the modulated energy by the relative entropy, as ensured by condition (<ref>) below, or use the concentration inequality via its consequence (<ref>).
§.§ Evolution of modulated energy, Fisher information, and uniform LSI
The crucial computation of <cit.> (performed on the torus, but the whole-space with confining potential case is similar) is that when differentiating in time E_N(f_N^t, μ^t), for f_N^t solving the forward Kolmogorov equation and μ^t solving the mean-field evolution equation (<ref>), a cancellation occurs, leading to
d/dtE_N(f_N^t, μ^t) ≤ -1/2∫_(ℝ^d)^N∫_(ℝ^d)^2∖Δ (u^t(x)-u^t(y))·∇_1 g(x,y) d(1/N∑_i=1^Nδ_x_i - μ^t)^⊗ 2(x,y) df_N^t
-1/β^2 N∫_(ℝ^d)^N∑_i=1^N |∇_i log(f_N^t/(μ^t)^⊗ N) +β/N∑_j≠ i∇_1 g(x_i,x_j) - β∇ g* μ^t (x_i) |^2 df_N^t,
where
u^t ≔ 1/β∇logμ^t +∇ V+ ∇ g∗μ^t
is the velocity field associated to the mean-field dynamics (<ref>).
At first pass, the second term on the right-hand side of (<ref>), which is nonpositive, may be discarded, and, assuming is translation-invariant, the first term in the right-hand side can be controlled, for instance in Riesz cases (<ref>) via the second author's inequality from <cit.> and its refinements and generalizations <cit.>, by the modulated energy itself, allowing to close a Grönwall loop. When V=0, this is what is done in <cit.> and revisited in <cit.>. More precisely, the following type of inequality is used: for any sufficiently regular vector field v and any pairwise distinct ∈ (^)̣^N ,
|∫_(^)̣^2∖ (v(x)-v(y))·∇_1(x,y) d(1/N∑_i=1^Nδ_x_i - μ)^⊗ 2(x,y)|
≤ C v_*F_N(, μ) + o_N(1),
where v_* is some homogeneous Sobolev norm of v and o_N(1) depends only on (and is increasing with respect to) the L^∞ norm of μ and vanishes as N→∞. This inequality was first proven in full generality in <cit.> for all Coulomb/super-Coulombic Riesz potentials, following a previous work for the =̣2 Coulomb case <cit.>. A sharp additive error o_N(1)= O( μ_L^∞^s/ N^s/-1) with v_* = ∇ v_L^∞ was proven in <cit.>, following earlier Coulomb results <cit.>. The estimate (<ref>) was generalized to Riesz-like kernels in <cit.>. For satisfying |(x-y)·∇_1(x,y)| ≤ C, one may extract from <cit.>, as was done in <cit.>, the averaged inequality
|∫_(^)̣^N∫_(^)̣^2∖ (v(x)-v(y))·∇_1 (x,y) d(1/N∑_i=1^Nδ_x_i - μ)^⊗ 2(x,y)df_N|
≤∇ v_L^∞(C_1 H_N(f_N |μ^⊗ N) + C_2/N).
Let us now examine the nonpositive term in the right-hand side of (<ref>). We rewrite it
as
-1/β^2 N∫_(ℝ^d)^N∑_i=1^N |∇_i log(f_N^t/(μ^t)^⊗ N) + β/N∑_j≠ i∇_1 g(x_i,x_j) - β∇ g* μ^t (x_i) |^2 df_N^t
= - 1/β^2 N∫_(ℝ^d)^N |∇log(f_N^t/ℚ_N,β (μ^t)) |^2 df_N^t = - 4/β^2 N∫_(ℝ^d)^N|∇ √(f_N^t/ℚ_N,β (μ^t) )|^2 d ℚ_N,β(μ^t) .
Indeed, one may check that by definition (<ref>) of ℚ_N,β(μ),
∇_i logℚ_N,β(μ) = -β N ∇_i F_N(X_N, μ) + ∇logμ(x_i),
and in view of the definition (<ref>) of F_N(X_N,μ),
∇_i F_N (X_N,μ) = 1/N^2∑_1≤ j≤ N: j≠ i∇_1 g(x_i,x_j) - 1/N ∇(g* μ) (x_i).
For any f_N and any reference probability density μ, we call
the quantity
1/N∫_(ℝ^d)^N |∇√(f_N/ℚ_N,β (μ) )|^2 d ℚ_N,β(μ)
the modulated Fisher information, which is nothing but the normalized relative Fisher information I_N(f_N |ℚ_N,β(μ)), and the relation (<ref>) transforms into
d/dtE_N(f_N^t, μ^t) ≤ -4/β^2 N∫_(ℝ^d)^N |∇√(f_N^t/ℚ_N,β (μ^t) )|^2 d ℚ_N,β(μ^t)
-1/2∫_(ℝ^d)^N∫_(ℝ^d)^2∖Δ (u^t(x)-u^t(y))·∇_1 g(x,y) d(1/N∑_i=1^Nδ_x_i - μ^t)^⊗ 2(x,y) df_N^t .
The goal is then to exploit a functional inequality relating the modulated Fisher information to the modulated free energy to take advantage of the negative term in (<ref>).
We say that a family of probability measures {P_N}_N≥ 1 satisfies a uniform logarithmic Sobolev inequality (LSI) if there exists a constant C_LS>0, such that for any N≥ 1 and f ∈ C^1((^)̣^N), we have
∫_(^)̣^N f^2 logf^2/∫ f^2 dP_N dP_N ≤ C_LS∫_(^)̣^N | f|^2 dP_N.
Given data (,V,β), we say that a uniform μ-modulated LSI (μ-LSI) holds if the family of probability measures {ℚ_N,β(μ)}_N≥ 1 of the form (<ref>) satisfies a uniform LSI.
Our main observation is that if ℚ_N,β(μ) satisfies a uniform LSI, then applying (<ref>) to f= √(f_N /ℚ_N,β (μ)), with f_N a probability density on (^)̣^N, we find
∫_(^)̣^N|√(f_N /ℚ_N,β (μ))|^2 dℚ_N,β(μ)
≥1/C_LS∫_(^)̣^Nlog(f_N /ℚ_N,β (μ)/∫_(^)̣^N f_N /ℚ_N,β (μ)dℚ_N,β(μ) )df_N .
Using that f_N is a probability density, we recognize on the right-hand side N H_N(f_N|ℚ_N,β(μ)). In light of (<ref>), we then have
1/N∫_(^)̣^N|√(f_N /ℚ_N,β (μ))|^2 dℚ_N,β(μ)
≥1/C_LSβ E_N(f_N, μ) - 1/Nlog K_N,β(μ).
In other words, a uniform LSI for ℚ_N,β (μ) implies that the modulated Fisher information is bounded below by the modulated free energy and an additive error that is o_N(1) assuming smallness of free energy. If (<ref>) holds for all μ^t along the flow, then it can be inserted into (<ref>) to obtain an exponential decay of the modulated free energy, provided (<ref>) or (<ref>) holds.
In <cit.>, in the context of conservative dynamics on the torus ^d (see remarks at the end of <ref>), a uniform LSI is used in the context of the relative entropy method <cit.>. In that method, one differentiates in time H_N(f_N^t| (μ^t)^⊗ N) instead of (<ref>), leading to a Fisher information relative to the reference measure (μ^t)^⊗ N instead of ℚ_N,β(μ^t). Proving the needed uniform LSI holds is straightforward, as it follows from upper and lower bounds on μ^t (a consequence of maximum principle and only possible on compact domains) and the Holley-Stroock perturbation lemma. See also <cit.> for a similar idea applied to the hierarchal relative entropy method of <cit.>.
§.§ Main result
To present the main result of this note, we list some assumptions that we make on the potential : (^)̣^2→ [-∞,∞]. We will explain below specific cases in which these assumptions hold.
*
∈ C^2((^)̣^2∖) is symmetric and for some s<$̣, satisfies
|(x,y)| ≤ C1+|log|x-y||, s=0
1+ |x-y|^-s, s>0
for some constantC>0.
*
There exists a constantC_β∈[0,1/β)such that for anyf_N ∈_ac((^)̣^N)andμ∈(^)̣ ∩L^∞(^)̣, with∫_^log(1+|x|)dμ(x)<∞ifs=0,
𝔼_f_N[_N(,μ)] ≥ -C_β H_N(f_N |μ^⊗ N) - o_N(1),
whereo_N(1)only depends (in an increasing fashion) onμthroughμ_L^∞.
*
There exist constantsC_RE,C_ME≥0, such that
|∫_(^)̣^N∫_(^)̣^2∖ (v(x)-v(y))·∇_1(x,y) d(1/N∑_i=1^Nδ_x_i - μ)^⊗ 2(x,y)df_N|
≤v_*(C_REH_N(f_N |μ^⊗ N) + C_ME𝔼_f_N[_N(,μ)] + o_N(1))
for all pairwise distinct configurations∈(^)̣^N, densitiesf_N∈_ac((^)̣^N)andμ∈(^)̣∩L^∞(^)̣, and continuous vector fieldsvwith finite homogeneous Sobolev norm·_*of some order.
Assumption (<ref>) is to ensure that all energy expressions are well-defined and that all differential identities can be justified. Assumption (<ref>) ensures that the modulated energy does overwhelm the relative entropy, which is not a priori forbidden, since we make no sign assumptions on . Since C_β<1/β, it ensures that the modulated free energy is nonnegative up to o_N(1) error. In fact, it shows that the modulated free energy controls the relative entropy.
Let us introduce the quantity
ℰ_N^t ≔ E_N(f_N^t,μ^t) + o_N^t(1)
as a substitute for the modulated free energy. The additive error o_N^t(1) is a constant multiple of the maximum of the additive errors in assumptions (<ref>), (<ref>) and ensures that ℰ_N^t≥0, which allows one to perform a Grönwall argument on this quantity. It depends only on μ^t through the L^∞ norm, hence the t superscript, and is increasing in this dependence. Also, it is easier to write the statements with ℰ_N^t, as these additive constants appear as the errors o_N(1) in (<ref>).
Let β>0. Assume that equation (<ref>) admits a solution μ∈ C([0,∞), 𝒫(ℝ^d)∩ L^∞(ℝ^d)), such that ‖μ^t‖_L^∞ is bounded uniformly in t and u^t∈ L^∞ locally uniformly in t. If s=0, further assume that ∫_ℝ^d log(1+|x|)dμ^t<∞ for every t≥ 0. If ℚ_N,β(μ^t) satisfies a uniform LSI with constant C_LS>0 for every t≥ 0, then
∀ t≥ 0, ℰ_N^t ≤ e^{-4 t/β C_LS+∫_0^t ‖u^τ‖_*/2 dτ}ℰ_N^0
+e^{-4 t/β C_LS+∫_0^t ‖u^τ‖_*/2 dτ}∫_0^t e^{4τ/β C_LS-∫_0^τ ‖u^τ'‖_*/2 dτ'}[ȯ_N^τ(1)+ 4/β C_LS(o_N^τ(1) - log K_N,β(μ^τ) /β N)]dτ,
where K_N,β(μ^τ) is as in (<ref>), o_N^τ(1) is as above, and ȯ_N^τ(1) denotes the derivative of o_N^τ(1) with respect to time.
We see here that provided∫_0^∞u^τ_*dτ<∞, the first term on the right-hand side converges exponentially fast to0ast→∞, while the second term iso_N(1)uniformly bounded int, assuminglogK_N,(μ^τ)=o(N)uniformly inτand that∫_0^∞|ȯ_N^τ(1)|dτ<∞, by the fundamental theorem of calculus and our assumption thatμ^t_L^∞is uniformly bounded. Sinceℰ_Ndiffers fromE_Nonly by additive constants which areo_N(1), and the modulated free energyE_Ncontrols the relative entropyH_N, as explained in <ref>, it follows that the estimate (<ref>) implies entropic generation of chaos and also gives a uniform-in-time propagation of chaos if the initial data is such thatℰ_N^0=o_N(1). In the next subsection, we give cases of interest to which <ref> applies.
Generation of chaos for potentialswith∇inL^∞, which does not allow for singular potentials, was shown in <cit.>, with a rate of convergence inNthat is sharp for relative entropy, under smallness assumptions onβ. In <cit.>, a generation of chaos result was shown for conservative dynamics (replace∇with∇for an antisymmetric matrix) withhaving a log-type singularity. Both <cit.> are restricted to the torus^$̣. A weaker generation of chaos result in 2-Wasserstein distance was shown in <cit.> for the Riesz case on with uniformly convex confinement via coupling methods. We mention that convergence in relative entropy implies convergence in W_2 by a theorem of Otto-Villani <cit.>.
The long-time analysis of equation (<ref>) that allows to show in the Riesz case that ∫_0^∞∇ u^τ_L^∞dτ < ∞ and K_N,(μ^τ)=o(N) uniformly in τ is the subject of forthcoming work with J. Huang <cit.>. In fact, this work shows that solutions converge as t→∞ to the thermal equilibrium μ_β in a strong sense at a quantifiable rate and even covers the case of ^$̣ without confinement, which has been an outstanding problem.
One could also consider the periodic setting ^$̣, as in <cit.>. But the case of^is mathematically more interesting.
In the case where=̣1and(x)=-log|x|or|x|^-sfors∈ (0,1),Vis aC^2uniformly convex potential (e.g.,V(x)=|x|^2), andμis a probability density which is not too far from the thermal equilibriumμ_β, we are able to verify a uniformμ-modulated LSI. The general$̣-dimensional Riesz case is challenging: it is at least as difficult as the uniform LSI for ℙ_N,^V, which is a well-known open problem.
§.§ Applications
We can give a more precise form of the estimate (<ref>) in the repulsive singular Riesz case (<ref>) so that (-Δ)^-̣s/2 = 𝖼_,̣sδ_0. One has that
_N(_N,μ) ≥ -log(Nμ_L^∞)/2N_s=0 + μ_L^∞^s/N^s/-1, s≥-̣2
𝖢log(Nμ_L^∞)/N_s=0 + 𝖢μ_L^∞^s/ N^-2(-̣s)/2(-̣s)+s(+̣2), s<-̣2.
Here, 𝖢>0 is an absolute constant. The additive errors for the sub-Coulomb case s<-̣2 are expected to be suboptimal, while they are sharp in the Coulomb/super-Coulomb case s≥-̣2.[The L^∞ condition here—and by implication, the L^∞ condition in <ref>—can be relaxed quite a bit (e.g., see <cit.>) at the cost of increasing the additive errors; but we will not concern ourselves with such generality.] For details, we refer to <cit.> in the case s<-̣2 and <cit.> in the case s≥-̣2. In particular, (<ref>) shows that
E_N(f_N,μ) ≥1/βH_N(f_N |μ^⊗ N) -log(Nμ_L^∞)/2N_s=0 + μ_L^∞^s/N^s/-1, s≥-̣2
𝖢log(Nμ_L^∞)/N_s=0 + 𝖢μ_L^∞^s/ N^-2(-̣s)/2(-̣s)+s(+̣2), s<-̣2.
We take
ℰ_N^t E_N(f_N^t,μ^t) + log(Nμ^t_L^∞)/2N_s=0 + μ^t_L^∞^s/N^s/-1, s≥-̣2
𝖢log(Nμ^t_L^∞)/N_s=0 + 𝖢μ^t_L^∞^s/ N^-2(-̣s)/2(-̣s)+s(+̣2), s<-̣2
The estimate (<ref>) holds with C_RE=0,
v_* = ∇ v_L^∞, s≥-̣2
∇ v_L^∞ + (-Δ)^-̣s/4v_L^2/-̣2-s, s<-̣2,
and
o_N^t(1) = log(Nμ^t_L^∞)/2N_s=0 + μ^t_L^∞^s/N^s/-1, s≥-̣2
(-Δ)^s+1-/2μ^t_L^∞ N^-s+1+(2(-̣s)/+̣2/(s+(2(-̣s)/+̣2)(1+s) + μ^t_L^∞^2+s/+̣2N^-(2(-̣s)/+̣2/(s+(2(-̣s)/+̣2)(1+s), s<-̣2.
In the attractive log case on the torus ^$̣, it is shown in <cit.> (building on <cit.>) that there existsβ_>̣0such that for any0≤β<β_$̣, there are constants _β∈ (0,1) and C_β>0 such that
β𝔼_f_N[_N(,μ) ] ≤_β H_N(f_N |μ^⊗ N) + C_β/N.
Therefore, assumption (<ref>) is satisfied. The conjectured optimal value of β_$̣ is2$̣ (e.g., β_=̣4 in the =̣2 case, which corresponds to the Patlak-Keller-Segel model). It is shown in <cit.> that that β_≤̣2$̣ and further that if=̣2, thenβ_=̣2$̣ provided one restricts to densities μ sufficiently close to the uniform measure.
For potentials :(^)̣^2→ that are continuous along the diagonal, one can skip the renormalization and simply define
_N(,μ) = ∫_(^)̣^2(x,y)d(1/N∑_i=1^Nδ_x_i-μ)(x)d(1/N∑_i=1^Nδ_x_i-μ)(y).
If is repulsive in the sense that (x,y) is the integral kernel of a positive semidefinite operator on the space of finite Borel measures, as in the case of the equations used for neural networks parameters evolution <cit.>, then _N(,μ)≥ 0.
Continuing to assume that is continuous at the origin, but dropping the repulsive assumption, we may use the Donsker-Varadhan lemma to estimate
𝔼_f_N[_N(,μ)] ≤1/η(H_N(f_N|μ^⊗ N) +1/Nlog𝔼_μ^⊗ N[e^Nη_N(,μ)])
for any η>0. If ∈ L^∞, then one may use <cit.> (see also <cit.> for a simpler proof) with
ϕ(x,z) ((x,z) - ∫_^(x,y)dμ(y) - ∫_^(y,z)dμ(y) + ∫_(^)̣^2(y,y')dμ(y)dμ(y')).
The conclusion is that if √(C_0)ηϕ_L^∞<1, where C_0 is a universal constant, then
log𝔼_μ^⊗ N[e^Nη_N(,μ)] ≤log(2/1-C_0η^2ϕ_L^∞^2).
Replacing F_N by -F_N and repeating the preceding reason, we then find that
|𝔼_f_N[_N(,μ)]| ≤1/ηH_N(f_N|μ^⊗ N) + 1/η Nlog(2/1-C_0η^2ϕ_L^∞^2).
If 1/β>1/√(C_0)ϕ_L^∞, then we may choose 1/η∈ (1/√(C_0)ϕ_L^∞, 1/β), implying that assumption (<ref>) holds.
§.§ Advantages of modulated LSI over LSI
Let us explain the advantage of a uniform modulated LSI over merely a uniform LSI for ℙ_N,. For simplicity, let us assume that is translation-invariant. Ignoring regularity questions,
d/dtH_N(f_N^t |ℙ_N,) = -1/β I_N(f_N^t |ℙ_N,β),
where we recall that I_N is the normalized relative Fisher information. If there is a uniform LSI constant C_LS for ℙ_N,, then by Grönwall's lemma,
H_N(f_N^t |ℙ_N,) ≤ e^-t/C_LSβH_N(f_N^0 |ℙ_N,).
By subadditivity of relative entropy and Pinsker's inequality, for any fixed 1≤ k≤ N,
f_N,k^t - ℙ_N,^(k)_TV^2 ≤ 2k e^-t/C_LSβ H_N(f_N^0 |ℙ_N,).
If μ^t is a solution of the mean-field evolution (<ref>), then
d/dt[_β(μ^t) - _β(μ^β) ] = -∫_^|1/β∇logμ^t + ∇∗μ^t + ∇ V|^2 dμ^t,
where ℰ_β is the mean-field free energy as defined in (<ref>).
One may check by direct computation that
lim_N→∞1/β H_N(μ^⊗ N|ℙ_N,β) = ℰ_β(μ) - ℰ_β(μ_β)
and
lim_N→∞1/βI_N(μ^⊗ N|ℙ_N,β) = β∫_^|1/βlogμ +∇ V + ∇∗μ|^2dμ,
which together with the uniform LSI for ℙ_N,β imply the infinite-volume LSI
β[ ℰ_β(μ) - ℰ_β(μ_β)] = lim_N→∞ H_N(μ^⊗ N|ℙ_N,β) ≤lim_N→∞ C_LSI_N(μ^⊗ N|ℙ_N,β)
= C_LSβ^2∫_^|1/βlogμ +∇ V + ∇∗μ|^2dμ.
Inserting this inequality into the right-hand side of (<ref>) and applying Grönwall again,
[_β(μ^t) - _β(μ^β) ] ≤ e^-t/C_LSβ[_β(μ^0) - _β(μ^β) ].
Using (<ref>) and direct computation, one may also check that
ℰ_β(μ) - ℰ_β(μ_β) = 1/β∫_^log(μ/μ_β)dμ + 1/2∫_(^)̣^2(x-y)d(μ-μ_β)^⊗ 2(x,y).
Assuming, say, that ≥ 0, we may discard the potential energy term and then apply Pinsker's inequality again to obtain
μ^t-μ__TV^2 ≤ 2β e^-t/C_LSβ[_β(μ^0) - _β(μ^β) ].
Considering just the case k=1 to simplify the analysis, we have by triangle inequality that
f_N,1^t - μ^t_TV ≤f_N,1^t - ℙ_N,β^(1)_TV + ℙ_N,β^(1) - μ_β_TV + μ^t- μ_β_TV
≤√(2e^-t/C_LSβ H_N(f_N^0 |ℙ_N,)) + √(2β e^-t/C_LSβ(_β(μ^0) - _β(μ^β) )) + ℙ_N,β^(1) - μ_β_TV.
Supposing that[Such a bound is known (with a sharp estimate for o_N(1)), for instance, in the high-temperature case where has bounded gradient <cit.>.]
ℙ_N,β^(1)-μ_β_TV = o_N(1),
the right-hand side of (<ref>) tends to zero as t→∞ and N→∞. But this estimate does not imply propagation of chaos, even locally in time, as the second term does not vanish as N→∞. To address this unsatisfactory feature, one would also need a local-in-time estimate with which to interpolate, say, of the form
f_N,1^t - μ^t_TV≤ e^Cto_N(1),
where o_N(1) vanishes as N→∞ assuming some form of chaos for the initial data.
The above described argument is rather inefficient. We had to pass from relative entropy to a genuine metric, total variation distance, to implement this triangle inequality argument. In doing so, one loses the optimality of the rate in N<cit.>. Moreover, by trying to balance t and N, the rate of convergence further deteriorates. In contrast, a uniform modulated LSI addresses propagation/generation of chaos in one swoop, because it is dynamic: it not only depends on N but also allows for dependence on t through the flowing of μ according to (<ref>).
We mention that this classical triangle inequality/interpolation idea was used in <cit.> for energies with regular interactions, except with total variation distance replaced by 2-Wasserstein distance, which works just as well since LSI implies a Talagrand inequality <cit.>. Though, only a statement of uniform-in-time propagation of chaos (with suboptimal rate), as opposed to generation of chaos, is presented in <cit.>.
§.§ Organization of the paper
Let us conclude the introduction with some remarks on the organization of the body of the paper. In <ref>, we give the details of the proof of the main result, <ref>. Then in <ref>, we turn to proving that a uniform modulated LSI holds in the log/Riesz case for a certain class of densities μ in dimension =̣1.
§.§ Acknowledgments
The authors thank Djalil Chafaï for helpful discussion and references. The second author also acknowledges the Fondation Sciences Mathématiques de Paris and PSL Research University who supported her visit to ENS-PSL, where this work was completed.
§ PROOF OF THE MAIN THEOREM
Applying the uniform LSI for ℚ_N,β(μ^t) to the first term in the right-hand side of (<ref>) via (<ref>), we find, abbreviating K_N^t K_N,β(μ^t),
d/dtE_N(f_N^t, μ^t) ≤ -1/2∫_(^)̣^N∫_(^)̣^2∖ (u^τ(x)-u^τ(y))·∇_1(x,y) d(1/N∑_i=1^Nδ_x_i - μ^t)^⊗ 2(x,y)d f_N^t
-4/β C_LS(E_N(f_N^t,μ^t) + log K_N^t/β N).
Under the assumption (<ref>), we have
∫_(^)̣^N|∫_(^)̣^2∖ (u^τ(x)-u^τ(y))·∇_1(x,y) d(1/N∑_i=1^Nδ_x_i - μ^t)^⊗ 2(x,y)| d f_N^t
≤u^t _*(C_REH_N(f_N^t| (μ^t)^⊗ N) + C_ME𝔼_f_N^t[_N(,μ^t)] + o_N^t(1)).
If C_RE/C_ME≤1/β, then since H_N(f_N^t | (μ^t)^⊗ N)≥ 0, we may assume without loss of generality that C_RE/C_ME=1/β. If C_RE/C_ME> 1/β, then using assumption (<ref>),
C_ME𝔼_f_N^t[_N(^t,μ^t)] ≤ C_ME( 𝔼_f_N^t[_N(,μ^t)] + C_βH_N(f_N^t| (μ^t)^⊗ N) + o_N^t(1))
≤ C_ME'( 𝔼_f_N^t[_N(,μ^t)] + C_βH_N(f_N^t| (μ^t)^⊗ N) + o_N^t(1))
for any C_ME'≥ C_ME. So, choosing C_ME' sufficiently large so that C_RE/C_ME' + C_β≤1/β (remember that C_β<1/β by assumption), we see that in all cases,
∫_(^)̣^N|∫_(^)̣^2∖ (u^τ(x)-u^τ(y))·∇_1(x,y) d(1/N∑_i=1^Nδ_x_i - μ^t)^⊗ 2(x,y)| d f_N^t
≤𝖢u^t _*(1/βH_N(f_N^t | (μ^t)^⊗ N) +𝔼_f_N^t[_N(,μ^t)] + o_N^t(1)),
for some constant 𝖢>0. To establish a Grönwall relation, we use the quantity (<ref>). We see from combining (<ref>) and (<ref>) that
d/dtℰ_N^t ≤ -4/β C_LS(E_N(f_N^t,μ^t) + log K_N^t/β N) + /2 u^t_*_N^t + ȯ_N^t(1)
=(-4/β C_LS+ u^t_*/2)_N^t + 4/β C_LS(o_N^t(1)- log K_N^t/β N).
Recall that ȯ_N^t(1) denotes the time derivative. Multiplying both sides by e^∫_0^t(4/β C_LS- u^τ'_*/2)dτ', we obtain
d/dt[e^∫_0^t(4/β C_LS- u^τ'_*/2)dτ'ℰ_N^t ] ≤ e^∫_0^t(4/β C_LS- u^τ'_*/2)dτ'[ȯ_N^t(1) +4/β C_LS(o_N^t(1) - log K_N^t/β N)].
Now using the fundamental theorem of calculus followed by a little rearrangement,
_N^t ≤ e^-4 t/β C_LS+∫_0^tu^τ_*/2dτ_N^0
+e^-4 t/β C_LS+∫_0^tu^τ_*/2dτ∫_0^t e^4τ/β C_LS-∫_0^τ u^τ'_*/2dτ'[ȯ_N^τ(1) + 4/β C_LS(o_N^τ(1) - log K_N^τ/β N)]dτ.
This gives the estimate (<ref>) and therefore completes the proof of <ref>.
§ UNIFORM LSI FOR =̣1 RIESZ CASE
We show in this section that a uniform modulated LSI holds in the d=1 repulsive Riesz case (<ref>)
for uniformly convex confinement V. Using the notation from the introduction,
ℋ_N(X_N) = ∑_i=1^N V(x_i) + 1/N∑_1≤ i<j≤ N g(x_j-x_i).
In fact, the proof will show that a modulated LSI holds for any interaction potential which is convex or for any C^2 interaction potential with _Ċ^2 sufficiently small depending on the convexity of V. We leave the details as an exercise for the reader. We expect that one could generalize further by following the proof of Zegarlinski's theorem <cit.>, as used to show uniform LSIs in <cit.>, or the two-scale approach of <cit.>, but will not pursue this.
As explained in <cit.>, the approach of <cit.> implies the LSI up to the critical inverse temperature for the Gibbs measure of the mean-field classical XY/O(2)/planar rotator/Kuramoto model, whose energy is far from convex. It is straightforward to adapt the reasoning of <ref> to obtain a modulated LSI for μ close enough to μ_β=1.
§.§ Uniform LSI for ℙ_N,^V
Following Chafaï-Lehec <cit.>,[Strictly speaking, <cit.> considers the =̣1log case; but the argument works with trivial modification in the general Riesz case. Furthermore, Chafaï-Lehec present more than one proof; but we choose to highlight the one based on Caffarelli's contraction theorem.] we present the LSI for the Gibbs measure ℙ_N,^V in the =̣1 Riesz case with uniformly convex confinement V. This is a warm-up for proving the modulated LSI in the next subsection.
Let V: ℝ→ℝ be κ-convex for some κ>0. For β> 0, the probability measure ℙ_N,β^V has LSI constant 2/(βκ).
As V is fixed, we omit the superscript in ℙ_N,^V in what follows. By exchangeability, it suffices to restrict to the Weyl chamber[This ability to order is, of course, a special feature of the one-dimensional setting.]Δ_N {∈^N : x_1≤⋯≤ x_N}. More precisely, define
(x) (x), x>0
∞, x≤ 0 and ℋ_N() ∑_i=1^N V(x_i) + 1/N∑_1≤ i<j≤ N(x_j-x_i),
and dℙ_N, = e^-βℋ_N/Z_N,dX_N. Since
∫_^N e^-βℋ_Nd = N!∫__Ne^-βℋ_Nd,
it follows that if φ is invariant under permutation of coordinates, then
∫_^Nφ^2log(φ^2/∫φ^2dℙ_N,)dℙ_N, = ∫_^Nφ^2log(φ^2/∫φ^2dℙ_N,)dℙ_N,.
So, ℙ_N, has LSI constant C_LS if and only if ℙ_N, has LSI constant C_LS. Going forward, we drop the superscript in ,ℋ_N,ℙ_N,.
Assuming that V is -convex, for some >0, we claim that ℋ_N is -convex. Indeed, let , ∈Δ_N and ρ∈ (0,1). We want to show that
ℋ_N(ρ + (1-ρ)) ≤ρℋ_N() + (1-ρ)ℋ_N() -ρ(1-ρ)/2|-|^2.
If x_i=x_j or y_i=y_j for some 1≤ i<j≤ N, then the right-hand side is infinite and the inequality holds trivially; so, suppose otherwise. Since V is -convex, we have for each i,
V(ρ x_i + (1-ρ)y_i) ≤ρ V(x_i) + (1-ρ)V(y_i) - ρ(1-ρ)/2|y_i-x_i|^2.
So, it only remains to show that for each pair i<j,
([ρ x_j + (1-ρ)y_j] - [ρ x_i + (1-ρ)y_i]) = (ρ(x_j-x_i) + (1-ρ)(y_j-y_i) )
≤ρ(x_j-x_i) + (1-ρ)(y_j-y_i).
Fix a pair i<j. If x_j-x_i = y_j-y_i, then there is nothing further to show; so, suppose otherwise. Without loss of generality, suppose y_j-y_i > x_j-x_i>0. Then by the fact that
∀ x>0, ”(x) = 1/x^2, s=0
s(s+1)/|x|^s+2, s≠ 0,
and therefore is convex on _+, we see that (<ref>) holds.
We perform a qualitative regularization argument that reduces us to the case when ℙ_N,β has full support ^N and ℋ_N ∈ C^2(^N). Let 𝔾_N be the Gaussian measure with covariance (β)^-1/2I_N× N,
d𝔾_N = (2π/β )^-N/2e^-β||^2/2d.
Since
log(dℙ_N,/d𝔾_N) = -βℋ_N - log(Z_N,) + N/2log(2π/βκ) + βκ||^2/2,
we see that ℋ_N is -convex if and only if log(dℙ_N,/d𝔾_N) is concave. Let {Q_t}_t≥ 0 be the Ornstein-Uhlenbeck semigroup with stationary measure 𝔾_N: for any test function f,
∀∈^N, (Q_t f)() ∫_^N f(e^-t+√(1-e^-2t))d𝔾_N().
The measure 𝔾_N is reversible for {Q_t}_t≥ 0. Therefore, Q_t#ℙ_N, is absolutely continuous with respect to 𝔾_N, and its Radon-Nikodym derivative dQ_t#ℙ_N,/d𝔾_N = Q_t(dℙ_N,/d𝔾_N). Moreover, as consequence of the Prékopa-Leindler inequality, Q_t preserves log concavity. Hence, ℋ_N^t -1/βlog(Q_t#ℙ_N,) is -convex and belongs to C^∞(^N). Finally, since lim_t→ 0(Q_t f)(x) = f(x) for any continuous f, it follows that Q_t#ℙ_N,[]ℙ_N, as t→ 0. Thus, if Q_t#ℙ_N, has LSI constant C_LS for every t>0, then so does ℙ_N,.
We proceed under the C^2 and full support assumptions. According to Caffarelli's contraction theorem <cit.> (see also <cit.> for an alternative proof), if ℋ_N is β-convex, then the Brenier map <cit.>T from 𝔾_N to ℙ_N, (i.e., T#𝔾_N = ℙ_N,) is 1-Lipschitz. So, for any test function φ≥ 0,
∫_^Nφlog(φ) dℙ_N, = ∫_^Nφlog(φ) d(T#𝔾_N)
=∫_^N(φ∘ T) log(φ∘ T)d𝔾_N
≤2/β∫_^N |∇(φ∘ T)|^2 d𝔾_N
≤2/β∫_^N |(∇φ)∘ T|^2 |∇ T|^2 d𝔾_N
≤2/β∫_^N |∇φ|^2 dℙ_N,.
In the third line, we have used the well-known LSI for 𝔾_N<cit.>; and in the final line we have used that ∇ T_L^∞≤ 1 together with another application of T#𝔾_N = ℙ_N,. This completes the proof.
§.§ Uniform LSI for ℚ_N,(μ)
Given a density μ, recall from (<ref>) and (<ref>) that
ℚ_N,β(μ)= ℙ_N,β^V_μ, β, where V_μ,β - * μ - 1/βlogμ.
We recycle the notation ℋ_N, so that
ℋ_N() = ∑_i=1^N V_μ,β(x_i) + 1/2N∑_1≤ i≠ j≤ N(x_i-x_j).
The advantage of this notation is that assuming V_μ,β is κ-convex, for some κ>0, we may apply <ref> with V replaced by V_μ,β to obtain a uniform LSI for ℚ_N,β(μ).
Suppose that μ∈𝒫(ℝ) ∩ L^∞(ℝ) and if s=0, also suppose that ∫log(1+|x|)dμ<∞.[The L^∞ and log moment assumptions are just to ensure that the convolution g∗μ is well-defined.] For β>0, suppose that V_μ,β is κ-convex, for some κ>0. Then the probability measure ℚ_N,β(μ) has LSI constant 2/(βκ).
We follow the proof of <ref>. We start by restricting the support of P_N to the Weyl chamber _N and with an abuse of notation, recycle the notation U_N,, P_N. The only thing we have to check is that U_N, is (_1+_2)-convex: given , such that x_1≤⋯≤ x_N and y_1≤⋯≤ y_N, and ρ∈ (0,1),
U_N,β(ρ + (1-ρ)) ≤ρ U_N,() + (1-ρ)U_N,() -(_1+_2)ρ(1-ρ)/2|-|^2.
If x_i=x_j or y_i=y_j for some 1≤ i<j≤ N, then the right-hand side infinite and there is nothing to prove; so suppose otherwise. Since V is _1-convex and -ζ is _2 convex by assumption, we have for each i,
(V-ζ)(ρ x_i + (1-ρ)y_i) ≤ρ(V-ζ)(x_i) + (1-ρ)(V-ζ)(y_i) - (_1+_2)ρ(1-ρ)/2|y_i-x_i|^2.
Appealing to (<ref>), the desired statement (<ref>) follows.
To give meaning to <ref>, we now specify conditions under which V_μ,β is uniformly convex.
Let μ∈𝒫(ℝ) be such that log(μ/μ_β)∈ C^2(ℝ).[Since μ_β∈ C^2, this assumption implies by the chain rule that μ∈ C^2.] Suppose V∈ C^2 and β>0. Then V_μ,β is κ-convex with
κ ≔ inf V'' -(‖1/βlog(μ/μ_β)‖_Ċ^2 + ‖g∗(μ-μ_β)‖_Ċ^2).
Recalling the definition of V_μ,β,
V_μ,β = - ∗(μ-μ_β + μ_β) - 1/βlog(μ/μ_βμ_β)
=-∗(μ-μ_β)-1/βlogμ/μ_β - (∗μ_β + 1/βlogμ_β)
=-∗(μ-μ_β)-1/βlogμ/μ_β + V - c_β,
where to obtain the third line, we have applied (<ref>) to the last term of the second line. By triangle inequality,
V_μ,β”≥ V” - ∗(μ-μ_β)_Ċ^2 -1/βlogμ/μ_β_Ċ^2,
from which the desired conclusion is immediate.
ϱ' = h'e^h, ϱ'_L^∞≤h'_L^∞ e^h_L^∞
ϱ” = [h” + (h')^2]e^h, ϱ”≤(h”_L^∞ +h'_L^∞^2)e^h_L^∞.
Also by chain rule,
(∗(ϱμ_))” = ∗(ϱ”μ_ + 2ϱ'μ_' + ϱμ_”),
which is in L^∞ since ∈ L^1 + L^∞ and μ_∈∩ C^2 (e.g., see ?). Hence, ζ∈ C^2 and
ζ”_L^∞≤(h”_L^∞ + (∗(ϱμ_))”_L^∞) .
In particular, this implies that -ζ is (-)-convex.
One can produce probability measures μ such that logμ/μ_∈ C^2 by choosing h∈ C^2 and then setting μe^hμ_/∫ e^hdμ_, which is tautologically a probability density. One can make the quantities logμ/μ__Ċ^2, ∗(μ-μ_)_Ċ^2 arbitrarily small by taking e^h-1_C^2 arbitrarily small. In particular, we see that there exist non-equilibrium densities μ such that V_μ,β is uniformly convex.
As a corollary in this one-dimensional Riesz case with uniformly convex confinement, suppose we start the dynamics (<ref>) from an initial data μ^0 that is close enough to μ_β in the sense that (<ref>) with μ = μ^0 is strictly positive. If this closeness persists throughout the dynamics (<ref>) in the sense that (<ref>) with μ=μ^t is bounded from below by some _0>0 uniformly in t (this is a consequence of the aforementioned forthcoming work <cit.>), then <ref> applies, showing entropic generation of chaos.
§ PROOF OF THE SMALLNESS OF FREE ENERGY IN RIESZ AND REGULAR CASES
In this appendix, we prove the smallness of the free energy (<ref>) in the cases (<ref>) and in the case of bounded continuous nonnegative interactions, by showing
|log K_N,β(μ)|≤β o(N),
for o(N) independent of β.
The upper bound is obtained straightforwardly in the Riesz cases from inserting (<ref>) into (<ref>), then using that μ is a probability measure:
log K_N,β(μ) ≤βlog(Nμ_L^∞)/2 _s=0 + μ_L^∞^s/N^s/, -̣2≤ s <
𝖢log(Nμ_L^∞)_s=0 + 𝖢μ_L^∞^s/ N^1 -2(-̣s)/2(-̣s)+s(+̣2), s<-̣2.
In all cases, the preceding right-hand side is β o(N). When is nonnegative and continuous, one can insert the diagonal back into the definition of F_N, which implies that
F_N(, μ) ≥ - 1/2N(0,0),
and the proof of the upper bound is concluded in the same way.
The lower bound follows from Jensen's inequality. Indeed,
log K_N,β(μ) ≥ - β N 𝔼_μ^⊗ N[F_N(, μ)].
We then expand out the definition (<ref>) of F_N and use the symmetry of to find that
𝔼_μ^⊗ N[F_N(, μ)] = ∫_(^)̣^N( 1/2N^2∑_i ≠ j(x_i,x_j) - 1/N∑_i=1^N∫_^(x_i,y)dμ(y)
+∫_(^)̣^2(x,y)dμ^⊗ 2(x,y)) dμ^⊗ N ()
=- 1/2N∫_(^)̣^2(x,y) dμ^⊗ 2(x,y).
Inserting the last line back into the right-hand side of (<ref>) yields
log K_N,β(μ) ≥β/2∫_(^)̣^2(x-y) dμ^⊗2(x,y),
which gives the desired lower bound in all cases (<ref>) and all cases where is bounded.
http://arxiv.org/abs/2307.04956v2 | 20230711011700 | PKU-GoodsAD: A Supermarket Goods Dataset for Unsupervised Anomaly Detection and Segmentation | [
"Jian Zhang",
"Runwei Ding",
"Miaoju Ban",
"Ge Yang"
] | cs.CV | [
"cs.CV"
] |
Reinforcement Learning with
Non-Cumulative Objective
Wei Cui, Student Member, IEEE, and Wei Yu, Fellow, IEEE
Manuscript submitted on November 10, 2022, revised on August 12, 2023. This work is supported by Natural Sciences and Engineering Research Council (NSERC) of Canada via the Canada Research Chairs Program.
The authors are with The
Edward S. Rogers Sr. Department of Electrical and Computer Engineering,
University of Toronto, Toronto, ON M5S 3G4, Canada
(e-mails: {cuiwei2, weiyu}@ece.utoronto.ca).
October 2023
Visual anomaly detection is essential and commonly used for many tasks in the field of computer vision. Recent anomaly detection datasets mainly focus on industrial automated inspection, medical image analysis and video surveillance. In order to broaden the application and research of anomaly detection in unmanned supermarkets and smart manufacturing, we introduce the supermarket goods anomaly detection (GoodsAD) dataset. It contains 6124 high-resolution images of 484 different appearance goods divided into 6 categories. Each category contains several common different types of anomalies such as deformation, surface damage and opened. Anomalies contain both texture changes and structural changes. It follows the unsupervised setting and only normal (defect-free) images are used for training. Pixel-precise ground truth regions are provided for all anomalies. Moreover, we also conduct a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods. This initial benchmark indicates that some methods which perform well on the industrial anomaly detection dataset (e.g., MVTec AD), show poor performance on our dataset. This is a comprehensive, multi-object dataset for supermarket goods anomaly detection that focuses on real-world applications.
Data Sets for Robotic Vision, Computer Vision for Automation, Deep Learning Methods.
§ INTRODUCTION
Anomaly areas are regions that differ from normal areas. While humans can easily identify anomaly areas on the surface of objects based on their learned knowledge, it is challenging for machines to do the same.
Visual anomaly detection (VAD) is one of the essential applications in the field of computer vision; it aims to classify and locate anomalous regions. Currently, anomaly detection algorithms are widely used in various fields such as industrial quality inspection, medical diagnosis, and intelligent surveillance. Specifically, in industrial quality inspection, anomaly detection can be used to detect defects on the surface of industrial products; in medical diagnosis, it can be used to detect lesions on the surface of organs; and in intelligent surveillance, it can be used to detect the occurrence of anomalous events. Therefore, it has broad application prospects and research significance. Due to the scarcity of anomalous data, unsupervised anomaly detection algorithms have drawn much attention in research. Their goal is to train models using only a large amount of easily obtainable normal samples, enabling the models to differentiate anomalous samples. At present, unsupervised anomaly detection algorithms can be divided into three categories: those based on pre-trained models, those based on pseudo-anomaly generation, and those based on generative models. The first category uses feature representations from a model pre-trained on the ImageNet dataset to describe normal samples and distinguish anomalous ones. The second category generates pseudo anomalies that resemble real anomalies during training, thereby transforming the unsupervised paradigm into a supervised one. The third category trains a model to fit the distribution of normal samples and distinguishes anomalies by measuring how far a test sample deviates from the learned normal distribution. Due to their ability to distinguish anomalies without using anomalous samples, these unsupervised anomaly detection methods have gained increasing attention and achieved remarkable results at various academic conferences.
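To make the first of these categories concrete, the following is a minimal sketch of a pre-trained-feature, nearest-neighbour scorer in the spirit of such methods; the backbone, the chosen layer, and all shapes are illustrative assumptions rather than a reference implementation.

```python
import torch
import torchvision
from torchvision.models.feature_extraction import create_feature_extractor

# Sketch of the "pre-trained model" family: patch features of normal training
# images form a memory bank, and test patches are scored by their
# nearest-neighbour distance to that bank.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
extractor = create_feature_extractor(backbone, return_nodes={"layer2": "feat"})

@torch.no_grad()
def patch_features(images):                  # images: (B, 3, H, W), normalised
    f = extractor(images)["feat"]            # (B, C, h, w)
    b, c, h, w = f.shape
    return f.permute(0, 2, 3, 1).reshape(-1, c), (h, w)

@torch.no_grad()
def build_memory_bank(normal_images):
    feats, _ = patch_features(normal_images)
    return feats                              # (num_patches, C)

@torch.no_grad()
def anomaly_map(test_image, memory_bank):    # test_image: (1, 3, H, W)
    q, (h, w) = patch_features(test_image)
    d = torch.cdist(q, memory_bank).min(dim=1).values
    m = d.reshape(1, 1, h, w)
    return torch.nn.functional.interpolate(
        m, size=test_image.shape[-2:], mode="bilinear", align_corners=False)
```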
However, most existing anomaly detection datasets are limited and mainly concentrated in the industrial quality inspection, medical diagnosis, and intelligent monitoring fields. The diversity of datasets in these fields is also limited. Currently, widely used unsupervised anomaly detection datasets include MVTec AD<cit.> in industrial quality inspection, Chest X-ray<cit.> in medical diagnosis, and ShanghaiTech<cit.> in intelligent monitoring. Due to the rapid development of this field and the high cost of dataset construction, performance on existing datasets has approached saturation, limiting the development of anomaly detection. With the continuous development of intelligent technology, unmanned supermarkets (Fig. <ref>) have entered people's lives. Although the shopping process does not require human intervention, detecting and replacing damaged goods in unmanned supermarkets often requires a large amount of manpower. The demand for anomalous-goods detection in supermarkets is increasing day by day, but there is currently a lack of large-scale anomalous-goods datasets. Therefore, establishing an unsupervised anomalous-goods dataset has significant research value and application prospects.
Based on this, we collected a large number of images of normal and anomalous goods in a real unmanned-supermarket application scenario and performed pixel-level anomaly annotation, creatively establishing the first goods anomaly detection (GoodsAD) dataset[https://github.com/jianzhang96/GoodsAD] in the field of artificial intelligence. The dataset contains a total of six goods categories, including boxed cigarettes, bottled drinks, canned drinks, bottled foods, boxed foods, and packaged foods. Each category covers multiple types of anomalies, 8 different types in total. The dataset includes 6,124 images, with 4,464 images of normal goods and 1,660 images of anomalous goods. The resolution of the images is 3000 × 3000. In the experiments, we selected 3,136 normal images as the training set and used the remaining 2,988 normal and anomalous images as the test set. In addition, we evaluated current state-of-the-art (SOTA) unsupervised anomaly detection methods on the dataset and compared their performance; a minimal sketch of the evaluation metrics used for such comparisons is given after the contribution list below. Our contribution is twofold:
* We establish the first unsupervised anomaly detection dataset for goods, which is used to classify and localize anomalous regions on the surface of goods, increases the diversity of data in the anomaly detection field, and promotes the development of unmanned supermarkets.
* Extensive experiments are conducted on the established dataset with current unsupervised anomaly detection methods, laying a foundation for subsequent anomaly detection work and promoting the improvement of related algorithms.
§ RELATED WORK
Some previous anomaly detection methods conducted experiments on image classification datasets such as MNIST and CIFAR10, assuming that a certain category of the dataset is normal and the rest are anomalous. For applications of visual anomaly detection, industrial vision <cit.>, medical image analysis and video anomaly detection <cit.> are the fields of greatest interest. Table <ref> shows commonly used datasets for visual anomaly detection. In the field of medical images, there are anomaly detection datasets such as Chest X-ray <cit.> and CheXpert <cit.>. ShanghaiTech <cit.> and Avenue <cit.> are two commonly used datasets for video anomaly detection. Several anomaly detection datasets <cit.> in the industrial field have been proposed in recent years; these datasets all provide pixel-level annotations. DAGM <cit.> and NEU-SDD <cit.> are early datasets. DAGM contains 10 types of texture images with artificial defects. NEU-SDD contains 6 kinds of typical surface defects of hot-rolled steel strips. MTD <cit.> includes 6 types of defects on the surface of magnetic tiles; this dataset is somewhat difficult because the contrast between some defects and the background is low. The MSD <cit.> dataset contains three types of defects in mobile phone screens. These four datasets all follow the supervised learning setting.
In actual industrial manufacturing, the vast majority of products are normal samples, while anomalous samples account for only a very small fraction. Therefore, in 2019, P. Bergmann et al. proposed an industrial dataset called MVTec AD using a one-class classification setting, which means that only normal samples are used during the training phase. This setting is more in line with industrial scenarios and is called semi-supervised or unsupervised anomaly detection. MVTec AD contains 10 object and 5 texture categories with a total of 5354 images. Each image in the dataset contains only one object, and the camera is perpendicular to the object with the same shooting angle. This dataset has drawn a lot of attention, and many methods focus on unsupervised anomaly detection based on it. Since then, several datasets <cit.> have been proposed using the same setting. BTAD <cit.> contains 2830 images with 3 different classes (industrial products), of which 1799 anomaly-free images are for training and the rest for testing. Compared with MVTec AD, the shooting conditions of the images in MPDD <cit.> are more complex: under different light intensities and non-homogeneous backgrounds, the image captured by the camera contains multiple objects with different spatial orientations, positions and distances. VisA <cit.> is a newly proposed dataset with multiple objects per image, and the number of images is about twice that of MVTec AD.
However, the current datasets contain at most a dozen classes of objects, and there is no goods anomaly detection dataset, which is needed in unmanned supermarkets and commodity production. Different types of datasets are also needed in anomaly detection research to test the universality of current state-of-the-art methods and promote real-world applications.
§ DATASET DESCRIPTION
§.§ Problem Statement and Definition
In practical applications, commodity anomalies are difficult to define in advance for supervised learning, and it is easy to acquire normal samples but costly and limited to collect anomalous samples. Therefore, GoodsAD adopts the same unsupervised setting as the previous datasets <cit.>: the training set contains only images without defects, while the test set contains both images with various types of defects and defect-free images. VAD consists of two sub-tasks, image-level anomaly detection (classification) and pixel-level anomaly localization (segmentation). The input is an image I ∈ℝ^H× W × 3, and the output is an anomaly score η∈ [0,1] for anomaly classification or a segmentation mask M ∈ℝ^H× W for anomaly segmentation. The value of each pixel of M lies in [0,1], indicating the degree of anomaly.
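As a minimal illustration of how the two sub-tasks are related, the sketch below (our own simplified convention, not part of the GoodsAD tools) aggregates a pixel-level anomaly map into an image-level score by taking its maximum, a common choice among the methods benchmarked later.

    import numpy as np

    def image_score_from_mask(anomaly_map: np.ndarray) -> float:
        """Aggregate a pixel-level anomaly map M (H x W, values in [0, 1])
        into an image-level anomaly score eta in [0, 1].
        Taking the maximum is a common convention; mean or top-k pooling
        are possible alternatives."""
        return float(anomaly_map.max())

    # toy example: a 224 x 224 map with one small anomalous blob
    M = np.zeros((224, 224), dtype=np.float32)
    M[100:110, 50:60] = 0.9          # simulated defect region
    eta = image_score_from_mask(M)   # -> 0.9, the image would be flagged as anomalous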
§.§ Dataset Details
The GoodsAD dataset comprises 6 categories with 3136 images for training and 2988 images for testing. Table <ref> gives an overview of each category, and Fig. <ref> shows example images for every category together with example defects. We collected 6 kinds of common commodities in supermarkets: drink_bottle (d_b), drink_can (d_c), food_bottle (f_bt), food_box (f_bx), food_package (f_p) and cigarette_box (c_b). Each category can be used and evaluated individually if necessary. Each category contains multiple goods, and the dataset contains a total of 484 goods; as a result, the appearance of the items varies greatly, for example in colour and texture. Each category contains several common defects such as surface damage, deformation and opened packaging; the defects involve both surface texture changes and structural changes. The defects were manually generated to produce realistic anomalies as they would occur in real-world application scenarios.
All images are acquired at a high resolution of 3000 × 3000 pixels. The object locations in the images are not aligned; most objects are in the center of the images, and each image contains only a single object. For each item, we collected multiple images from different angles. For bottled and canned goods, we collected images from different angles around the cylinder. The images were acquired under the illumination conditions of a real supermarket, so the apparent texture of goods may change with illumination.
The image background is a natural white commodity shelf.
Both image-level and pixel-level annotations are provided.
Fig. <ref> shows the region size of different anomalies in the six categories. Different types of anomalies differ in size, and anomalies of the same type also vary in size. Most anomalies, such as surface damage and cap open, occupy only a small fraction (less than 2%) of image pixels, whereas opened and deformation are two kinds of anomalies with a relatively large proportion.
§ BENCHMARK
§.§ Methods for Visual Anomaly Detection
Different types of unsupervised SOTA VAD methods are tested on the proposed GoodsAD dataset. We divide current methods into three categories: based on pre-trained models, based on pseudo-anomaly, and based on generative models. Pseudo anomaly-based methods adopt contrastive learning <cit.> paradigms or auto-encoders <cit.> for image reconstruction. Generative Adversarial Networks (GAN) <cit.>, Normalizing Flow <cit.> and Diffusion Model <cit.> are the most commonly used generative models, which can be used in VAD.
§.§.§ Based on pre-trained models
This type of approach uses models pre-trained on ImageNet and does not require a training stage. Because deep learning libraries such as PyTorch provide pre-trained models, it is convenient to use. The basic idea of this type of approach is comparison: we can tell whether a test image is anomalous by comparing it with the normal training images. Direct pixel-level comparison of images makes the detection results overly sensitive to pixel values and suffers from object misalignment. Therefore, Niv Cohen and Yedid Hoshen first proposed a method based on a pre-trained model, SPADE <cit.>. They used ResNet <cit.> to extract image features and compare the feature vectors at the image and patch level. The K-Nearest Neighbors (KNN) algorithm is adopted to obtain more robust results.
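A minimal sketch of this idea is given below. It is our own simplified illustration rather than the authors' released code: it assumes a torchvision ResNet-18 backbone, image-level features only (SPADE additionally compares patch-level features), and a memory bank of normal-image features prepared beforehand.

    import torch
    import torchvision.models as models

    # Frozen ImageNet-pretrained backbone used purely as a feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    @torch.no_grad()
    def embed(batch: torch.Tensor) -> torch.Tensor:
        """Return L2-normalised global features of shape (N, 512)."""
        feats = backbone(batch)
        return torch.nn.functional.normalize(feats, dim=1)

    @torch.no_grad()
    def knn_anomaly_score(test_img: torch.Tensor, normal_bank: torch.Tensor, k: int = 5) -> float:
        """Image-level score = mean distance to the k nearest normal features.
        normal_bank: (N_normal, 512) features of the normal training images,
        obtained by calling embed() on them in advance."""
        f = embed(test_img.unsqueeze(0))             # (1, 512)
        d = torch.cdist(f, normal_bank).squeeze(0)   # (N_normal,)
        return d.topk(k, largest=False).values.mean().item()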
PaDiM <cit.> improves on SPADE by assuming that the distribution of patch features of normal images follows a multivariate Gaussian distribution, whose mean and covariance are estimated in the training stage. In the test stage, the Mahalanobis distance between the feature vector of the test image and this distribution is calculated as the anomaly score. PatchCore <cit.> uses greedy coreset subsampling to reduce the memory bank of normal samples. It uses the second- and third-level feature maps extracted by a Convolutional Neural Network (CNN) such as WideResNet <cit.>, and average pooling is adopted on these feature maps to obtain global information. SimpleNet <cit.> adopts a simple network architecture and combines the ideas of pre-trained models and pseudo-anomalies; it improves on PatchCore by adding Gaussian noise in feature space and a discriminator.
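The core scoring step of such Gaussian-based methods can be sketched as follows; this is an illustrative reimplementation under our own simplifications (a single patch position, features assumed to be already extracted), not the reference PaDiM code.

    import numpy as np

    def fit_gaussian(normal_feats: np.ndarray, eps: float = 0.01):
        """normal_feats: (N, D) features of one patch position over all normal images.
        Returns the mean and the regularised inverse covariance."""
        mu = normal_feats.mean(axis=0)
        cov = np.cov(normal_feats, rowvar=False) + eps * np.eye(normal_feats.shape[1])
        return mu, np.linalg.inv(cov)

    def mahalanobis_score(test_feat: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> float:
        """Anomaly score of one test patch = Mahalanobis distance to the normal distribution."""
        d = test_feat - mu
        return float(np.sqrt(d @ cov_inv @ d))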
The methods based on knowledge distillation <cit.> assume that the teacher network and the student network will output different feature maps for anomalous samples in the test stage. MKD <cit.> uses a smaller student network and multi-layer feature synthesis. RD4AD <cit.> proposes a reverse distillation paradigm and uses the residual block of ResNet to restrict the features acquired by the student network.
§.§.§ Based on pseudo-anomaly
This type of method simulates natural anomalies to generate pseudo anomalies in the training phase, so the unsupervised task is transformed into a supervised one. Contrastive-learning-based methods including CutPaste <cit.>, NSA <cit.> and SPD <cit.> introduce the ideas and classical methods of contrastive learning into VAD. Classical contrastive learning aims to learn general image features, while the VAD task needs to detect anomalous areas in images, so the classical methods need to be modified to suit this task. CutPaste cuts an image patch, applies colour jitter, and pastes it at a random location of a large image to generate an anomalous sample; in the training stage, it uses anomaly classification as the proxy task. NSA extracts foreground objects before cutting the image patch and uses a Poisson image editing approach to blend the patch.
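A bare-bones version of the cut-and-paste idea is sketched below; the patch-size range is our own placeholder choice rather than the values used by CutPaste or NSA, and the colour jitter and Poisson blending steps are omitted.

    import random
    import numpy as np

    def cut_paste(img: np.ndarray, min_frac: float = 0.02, max_frac: float = 0.15):
        """Cut a random rectangular patch and paste it at another random location,
        returning the augmented image and a binary mask of the pasted region."""
        h, w = img.shape[:2]
        ph = random.randint(int(min_frac * h), int(max_frac * h))
        pw = random.randint(int(min_frac * w), int(max_frac * w))
        sy, sx = random.randint(0, h - ph), random.randint(0, w - pw)
        ty, tx = random.randint(0, h - ph), random.randint(0, w - pw)
        patch = img[sy:sy + ph, sx:sx + pw].copy()
        out = img.copy()
        out[ty:ty + ph, tx:tx + pw] = patch          # pseudo-anomalous image
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[ty:ty + ph, tx:tx + pw] = 1             # ground truth for the proxy task
        return out, mask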
Image-reconstruction-based methods such as RIAD <cit.>, DRAEM <cit.> and DSR <cit.> use auto-encoders (implemented with U-Net <cit.>). RIAD introduces image inpainting into image reconstruction to obtain large reconstruction errors for anomalous samples. DRAEM adds a segmentation network after the reconstruction network to obtain more accurate results. DSR adopts a quantized feature space and moves the anomaly generation process into the feature space. CRDN <cit.> improves DRAEM with a cascade network architecture and structural anomaly generation. MemAE <cit.> also uses an auto-encoder, but an innovative memory module is adopted to suppress the tendency of the auto-encoder to generalize well to anomalous regions.
§.§.§ Based on generative models
The basic idea of this type of method is to use a generative model to fit the distribution of normal samples and measure the distance between the test sample and this distribution during testing. AnoGAN <cit.> introduces GANs into VAD; the backpropagation algorithm is needed to find the sample in the learned distribution closest to the test sample. f-AnoGAN <cit.> solves the problem of slow testing: the method trains a WGAN <cit.> in the first stage, as in AnoGAN, and trains an encoder in the second stage to find the latent encoding of the test sample. CFLOW-AD <cit.> adopts Normalizing Flows to fit the distribution of features extracted by a CNN from normal samples; it differs from PaDiM by using a different model to fit the distribution. AnoDDPM <cit.> uses the DDPM <cit.>, the basic idea being that anomalous images with added noise can be restored to normal images. Some recent works <cit.> focus on a more challenging application scenario in which only a few (fewer than 8) normal samples are used in the training stage.
§.§ Evaluation Metric
The standard classification metrics AUROC and AUPR are used for image-level anomaly classification and pixel-level anomaly segmentation. AUPR is more informative for datasets with strongly imbalanced classes. PRO <cit.> is also adopted to weight anomalous regions of different sizes equally.
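For reference, the image-level and pixel-level metrics can be computed with standard tooling, as sketched below using scikit-learn on flattened masks and anomaly maps. PRO is not provided by scikit-learn and is therefore not shown here.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    def image_level_metrics(y_true: np.ndarray, scores: np.ndarray):
        """y_true: 0/1 labels per image; scores: image-level anomaly scores.
        Returns (AUROC, AUPR)."""
        return roc_auc_score(y_true, scores), average_precision_score(y_true, scores)

    def pixel_level_metrics(masks: np.ndarray, maps: np.ndarray):
        """masks: (N, H, W) binary ground truth; maps: (N, H, W) anomaly maps.
        Returns (pixel AUROC, pixel AUPR)."""
        y, s = masks.reshape(-1), maps.reshape(-1)
        return roc_auc_score(y, s), average_precision_score(y, s)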
§.§ Implementation Details
For each method, we follow the one-model-per-category learning paradigm and train one model for each category. Training a separate model for each commodity would be time-consuming and memory-consuming, although the accuracy would be higher in that case.
The images are resized to 224×224 during training and test. All experiments are conducted on NVIDIA GTX 1080Ti GPUs with PyTorch 2.0.
For each method, we adopt the default standard parameters. We set base_width and base_channels in reconstructive and discriminative sub-networks of DRAEM to 64 and 32, respectively. For f-AnoGAN, we train 100000 iterations for WGAN and 50000 iterations for the encoder. More details such as batch size (bs) and learning rate (lr) are listed in Table <ref>.
§.§ Experimental Results and Discussion
We test the performance of the different types of methods on the proposed GoodsAD dataset. Table <ref> shows the image-level anomaly classification results and Table <ref> shows the pixel-level anomaly segmentation results. Fig. <ref> shows qualitative examples of anomaly localization for the methods DRAEM, NSA, RD4AD, SimpleNet and PatchCore-100%. Compared to previous datasets such as MVTec AD, GoodsAD has two distinguishing attributes: (1) the object's location in the image is not aligned, and (2) the same category contains many items with different appearances. These two characteristics cause the poor performance of current VAD methods. SPADE, RD4AD and CFLOW-AD assume that the location of the object in the image is unchanged, and thus their detection results are not accurate; in particular, their localization scores are low. Because each category contains many goods with varying appearances, it is challenging for the student network of RD4AD to learn representations similar to those of the teacher network for normal samples. Therefore RD4AD incorrectly predicts almost all commodity regions as anomalous, as shown by the anomaly segmentation examples in the seventh row of Fig. <ref>; RD4AD only achieves 15.4% AUPR on the anomaly localization sub-task.
The third and fourth rows of Fig. <ref> show the anomalous test images x and the normal images G(E(x)) generated by f-AnoGAN <cit.>. The commodities in the generated images are blurry and the text on the packages is not clear. The appearance of the commodity in the generated image is mixed up with that of other commodities (Fig. <ref>, fifth column). Therefore the anomaly segmentation masks obtained from the L1 distance |x-G(E(x))| are not accurate. We attribute this problem to the diverse commodity appearances; more training epochs may improve the performance.
As shown in Fig. <ref>, CutPaste and NSA cut a random image patch and blend it into a large image to generate anomalous samples. The generated anomalies differ considerably from the natural anomalies of commodities. Therefore, the detection results of these methods, shown in the sixth row of Fig. <ref>, are not accurate. NSA obtains only 15.8% AUPR on the anomaly segmentation task.
Due to the appearance changes and location misalignment of the various goods, it is difficult for DRAEM to learn a proper distance function to recognize anomalies. The pseudo-anomaly samples generated in the training stage are shown in Fig. <ref>. DRAEM adopts a Perlin noise generator and extra texture images to generate anomalous samples with texture changes; therefore it cannot recognize anomalies with small texture changes, such as bottle cap opening and box deformation (see Fig. <ref>, fifth row), and it also fails to detect small anomalies. Apart from PatchCore, DRAEM achieves the second-best performance on the category cigarette_box, because the opened regions of boxed cigarettes are relatively large, their texture changes are obvious, and the locations of boxed cigarettes are relatively well aligned.
SimpleNet performs second only to PatchCore on anomaly classification, reaching 75.3% AUROC, but its anomaly localization score is low, at only 24.4% AUPR. This indicates that SimpleNet can determine whether an image is anomalous but cannot output an accurate anomaly mask. The anomaly masks of boxed cigarettes produced by SimpleNet in Fig. <ref> are not continuous. It also fails on several samples, such as deformation of bottled food, surface_damage on boxed and packaged food, and cap_half_open of bottled drink. We believe that the discriminator of SimpleNet is effective in detecting anomalies, but the Gaussian noise is not well suited to commodity anomalies. The accuracy and loss of SimpleNet in the training phase are also unstable.
From Table <ref> and Table <ref>, PatchCore achieves the best performance among all tested methods. Without subsampling of the memory bank, PatchCore-100% achieves better performance than PatchCore-1%; it reaches a state-of-the-art 85.5% AUROC on anomaly classification and 53.8% AUPR, 89.9% PRO on anomaly segmentation. PatchCore uses a patch-feature memory bank equally accessible to all patches evaluated at test time, and thus it is less reliant on image alignment. PatchCore adopts the KNN algorithm to estimate anomaly scores at test time, and thus it is more robust to the diverse appearance of goods. As shown in the sixth row of Fig. <ref>, PatchCore-100% can predict relatively accurate anomalous regions. Nevertheless, the disadvantage of PatchCore is that the score of the predicted anomalous regions is not high, because image patches sometimes contain both normal and anomalous pixels; PatchCore cannot predict segmentation masks with sharp edges and high confidence like DRAEM (Fig. <ref>, fifth row, first and seventh columns). In the ninth column of Fig. <ref>, PatchCore and DRAEM predict normal regions of packaged goods as anomalies due to changes in texture and illumination.
In order to comprehensively evaluate each method, we also test the inference speed and storage space. The results are shown in Fig. <ref>. f-AnoGAN is the fastest method and reaches 236.6 FPS. Although the inference speed of f-AnoGAN and NSA is fast, their performance on the GoodsAD dataset is not high. If the pre-trained model of the PyTorch library is not counted, PatchCore and SimpleNet only need to save the extracted features and the discriminator, respectively. PatchCore-1% requires less storage space, and its inference speed is 6.4 FPS. PatchCore-100% requires much space to store the extracted features, but compared to the lightweight PatchCore-1%, the performance is only slightly improved. CFLOW-AD also occupies much storage space.
From Table <ref>, the scores of the AUROC metric are very high, with most methods exceeding 90%, but the actual detection results are not accurate. The reason is that the anomalies occupy only a small fraction of image pixels (Fig. <ref>), and the classes of normal and anomalous pixels are extremely unbalanced. The scores of the PRO metric are also high. Tables <ref> and <ref> show that most methods perform well on the category cigarette_box, while the accuracy on food_box and food_package is the lowest.
In general, current VAD methods do not perform well on the GoodsAD dataset. For real supermarket application scenarios containing a large number of goods, the current methods are not accurate enough for practical application.
§ CONCLUSION
In this work, we introduce the GoodsAD dataset, a novel dataset for unsupervised anomaly detection mimicking real-world supermarkets and industrial inspection scenarios. The dataset provides the possibility to evaluate unsupervised anomaly detection methods on a variety of goods with various appearances and different types of anomalies. Pixel-precise ground truth labels are provided to evaluate both image-level classification and pixel-level segmentation. Several current state-of-the-art methods are thoroughly evaluated on this dataset. The best-performing method for all categories is PatchCore. The evaluations show that current methods are not accurate enough for goods anomaly detection and there is still considerable room for improvement. We hope that the proposed dataset will stimulate the development of unmanned supermarkets and smart manufacturing.
|
http://arxiv.org/abs/2307.04854v2 | 20230710185323 | Unconventional quantum oscillations and evidence of non-trivial electronic states in quasi-two-dimensional electron system at complex oxide interfaces | [
"Km Rubi",
"Manish Duman",
"Shengwei Zeng",
"Andrew Ammerlaan",
"Femke Bangma",
"Mun K. Chan",
"Michel Goiran",
"Ariando Ariando",
"Suvankar Chakraverty",
"Walter Escoffier",
"Uli Zeitler",
"Neil Harrison"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
Corresponding author: [email protected]
National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87544 USA
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
Quantum Materials and Devices Unit, Institute of Nano Science and Technology, Mohali, Punjab 140306, India
Present address: Institute of Materials Research and Engineering (IMRE), Agency for Science, Technology and Research (A*STAR), 2 Fusionopolis Way, Innovis #08-03, Singapore 138634, Republic of Singapore
Department of Physics, National University of Singapore, 117551 Singapore
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87544 USA
Laboratoire National des Champs Magnétiques Intenses (LNCMI-EMFL), Université de Toulouse, CNRS, INSA, UPS, 143 Avenue de Rangueil, 31400 Toulouse, France
Department of Physics, National University of Singapore, 117551 Singapore
Quantum Materials and Devices Unit, Institute of Nano Science and Technology, Mohali, Punjab 140306, India
Laboratoire National des Champs Magnétiques Intenses (LNCMI-EMFL), Université de Toulouse, CNRS, INSA, UPS, 143 Avenue de Rangueil, 31400 Toulouse, France
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87544 USA
The simultaneous occurrence of electric-field controlled superconductivity and spin-orbit interaction makes two-dimensional electron systems (2DES) constructed from perovskite transition metal oxides promising candidates for the next generation of spintronics and quantum computing. It is, however, essential to understand the electronic bands thoroughly and verify the predicted electronic states experimentally in these 2DES to advance technological applications. Here, we present novel insights into the electronic states of the 2DES at oxide interfaces through comprehensive investigations of Shubnikov-de Haas oscillations in two different systems: EuO/KTaO_3 (EuO/KTO) and LaAlO_3/SrTiO_3 (LAO/STO). To accurately resolve these oscillations, we conducted transport measurements in high magnetic fields up to 60 T and low temperatures down to 100 mK. For 2D confined electrons at both interfaces, we observed a progressive increase of oscillations frequency and cyclotron mass with the magnetic field. We interpret these intriguing findings by considering the existence of non-trivial electronic bands, for which the E-k dispersion incorporates both linear and parabolic dispersion relations. In addition to providing experimental evidence for topological-like electronic states in KTO-2DES and STO-2DES, the unconventional oscillations presented in this study establish a new paradigm for quantum oscillations in 2DES based on perovskite transition metal oxides, where the oscillations frequency exhibits quadratic dependence on the magnetic field.
Unconventional quantum oscillations and evidence of non-trivial electronic states in quasi-two-dimensional electron system at complex oxide interfaces
Neil Harrison
August 12, 2023
======================================================================================================================================================
§ INTRODUCTION
Two-dimensional electron systems (2DES) have been observed at the surface and interface of many perovskite transition metal oxides, so-called complex oxides. Particularly, widely studied 2DES based on SrTiO_3 (STO) and KTaO_3 (KTO) exhibit various intriguing phenomena, including a large magnetoresistance, Rashba spin-orbit interaction <cit.>, 2D superconductivity <cit.>, and magnetism <cit.>, which do not exist in their bulk counterparts.
The coexistence of these phenomena gives these systems a multi-functional character, with potential applications in spintronics <cit.> as well as in the field of topological quantum computing <cit.>. However, a comprehensive understanding of the electronic structure that gives rise to these interesting phenomena remains elusive.
STO-2DES and KTO-2DES exhibit several similarities in terms of their calculated band structures. For example, the electrons occupy the crystal-field-split t_2g orbitals of the d bands (3d for STO and 5d for KTO), and the combination of 2D confinement and spin-orbit interactions gives rise to multiple bands with mixed orbital characters of d_xy, d_xz, and d_yz, due to the avoided crossings between light (d_xy) and heavy (d_xz/d_yz) subbands. Heeringen et al. <cit.> predicted strongly anisotropic nonparabolic subbands for the 2DES at the LAO/STO interface.
Furthermore, topological states with linear dispersion are predicted for STO-2DES in the vicinity of the avoided crossing points along the Γ-M direction of the first Brillouin zone <cit.>. While experiments based on the Shubnikov-de Haas (SdH) effect and angle-resolved photoemission spectroscopy (ARPES) have verified the existence of several subbands with different effective masses for both STO <cit.> and KTO-2DES <cit.>, the signatures of nonparabolic subbands or topological states in these systems have not yet been observed experimentally. Interestingly, the STO-2DES exhibits peculiar SdH oscillations that are not periodic in inverse magnetic field <cit.>. The aperiodicity of the oscillations perceived in high magnetic fields has been tentatively attributed to different mechanisms (e.g., Rashba spin-orbit interaction <cit.>, Zeeman splitting <cit.>, magnetic depopulation of magnetoelectric subbands <cit.>, and a magnetic-field-induced change in carrier density <cit.>) in different investigations, and no consensus on its physical origin has yet been reached. Furthermore, despite an electronic band structure comparable to that of the STO-2DES, the existence of aperiodic SdH oscillations in KTO-2DES remains unclear from previous studies <cit.>.
In our quest to unravel the origin of aperiodic quantum oscillations and uncover topological states in STO and KTO-related 2DES, we conducted a thorough experimental investigation of the SdH oscillations at the interfaces of EuO/KTO and LaAlO_3(LAO)/STO. In order to capture the oscillations with utmost precision, we measured electrical transport in high magnetic fields (utilizing both a 60 T pulsed field and a 35 T dc field) and ultra-low temperatures (as low as 0.1 K). By examining the tilt-angle dependence of the quantum oscillations, we reveal the presence of itinerant electrons that are confined in the 2D interface region, coexisting with the carriers that disperse deeper into the STO and KTO. Interestingly, we observed that both interfaces exhibit a progressive increase in the cyclotron mass, estimated from the SdH oscillations, as well as an apparent increase of the oscillations frequency with increasing magnetic field. Notably, we found that the increase in cyclotron mass follows an almost linear trend, while the change in frequencies exhibits a quadratic relationship with the magnetic field. We explain this behavior through the existence of non-trivial electronic subbands, where the energy dispersion in k-space combines both linear and quadratic terms. These findings provide valuable insights into the unique electronic properties and subband structure at the interfaces of these oxides.
§ METHODS
As depicted in Fig.1(a) and (b), the EuO/KTO sample consists of a 10 nm thin film of EuO on KTO (001) substrate, while LAO/STO is made of ∼ 3.2 nm (8 u.c.) thin film of LAO on STO (001) substrate. Both KTO and STO substrates are 0.5 mm thick. We used a pulsed laser deposition technique to grow EuO and LAO thin films. For the LAO/STO sample, a mask of amorphous AlN was deposited on STO before LAO growth to obtain a Hall-bar patterned sample. One can find the growth details for EuO/KTO in Ref. <cit.> and for LAO/STO in Ref. <cit.>.
We carried out longitudinal and Hall resistance measurements simultaneously on the EuO/KTO sample in high pulsed magnetic fields (B_max = 60 T and pulse time ∼ 80 ms) and down to a temperature of 0.5 K in a ^3He system. We measured LAO/STO in a high continuous magnetic field (B_max = 35 T) and at low temperatures down to 0.1 K in a dilution fridge. To achieve a high signal-to-noise ratio for the measurements in a pulsed field, we used an excitation current of amplitude 30 μA and frequency up to 256 kHz. We applied a quasi-DC excitation of 0.1 μA for measurements on LAO/STO in continuous magnetic fields. The measurements at different tilt angles were performed using in-situ sample rotators devised explicitly for the dilution fridge and the ^3He fridge used in the extreme environment of high magnetic fields. To probe the interface using transport measurements, we made electrical contacts for both samples using a wire bonder. In particular, we measured an unpatterned EuO/KTO sample at the lowest temperature in both up and down field directions (for details see Fig. A1(a) in Appendix A1), and obtained the antisymmetric R_yx and symmetric R_xx using the formulas R_yx = (R_yx(B↑) - R_yx(B↓))/2 and R_xx = (R_xx(B↑) + R_xx(B↓))/2.
§ EXPERIMENTAL RESULTS
§.§ Electrical properties and quantum oscillations
Fig.1 (c) and (d) show the Hall resistance R_yx(B) for EuO/KTO and LAO/STO interfaces, respectively, measured at the lowest temperature possible for each case and in magnetic fields (B) oriented perpendicular to the interface. Except in the low field regime of 0 - 4 T, the R_yx(B) is linear for both interfaces. From the slope of the linear fit to the R_yx(B) for B > 5 T, we estimate the carrier density of 2.2 × 10^14 cm^-2 for EuO/KTO and 3.1 × 10^13 cm^-2 for LAO/STO.
However, despite having a lower effective mass <cit.>, the carriers at the EuO/KTO interface exhibit a lower Hall mobility (∼ 1500 cm^2V^-1s^-1) than those at the LAO/STO interface (∼ 2350 cm^2V^-1s^-1). We attribute the lower carrier mobility in EuO/KTO to the spin-disorder scattering induced by the magnetic proximity effect of EuO on the conducting TaO_2 planes at the interface <cit.>. The left y-axes of Fig.1(e) and (f) display the magnetic field dependence of the longitudinal resistance R_xx(B) for LAO/STO and EuO/KTO, respectively. For both interfaces, the quantum oscillations originating from the quantization of closed cyclotron orbits are superimposed on a positive magnetoresistance. We show the oscillating resistance Δ R_xx after subtracting a smooth background (dashed lines) on the right y-axes of each panel. For both interfaces, the non-monotonic enhancement of the oscillation amplitude with increasing magnetic field indicates the presence of more than one frequency, as verified by multiple peaks in the Fast Fourier Transform (FFT), which will be discussed in detail later. The two-orders-of-magnitude smaller amplitude of the quantum oscillations in EuO/KTO compared with LAO/STO is consistent with the lower mobility of carriers at the EuO/KTO interface.
§.§ Tilt-angle dependence of quantum oscillations
To examine the dimensionality of the electron systems at the EuO/KTO and LAO/STO interfaces, we measured both samples at different tilt angles ranging from 0^∘ to 90^∘. The tilt angle θ, as illustrated in Fig. 2(a), is defined as the angle between the magnetic field B and the normal to the interface; for all field orientations, B is perpendicular to the current. First, it is worth mentioning that both interfaces show a large negative magnetoresistance (MR) for the in-plane field orientation, θ = 90^∘, as shown in the main panels of Fig.2(b) and 2(c). The magnitude of the negative MR is larger for LAO/STO (see Appendix A2), even though this system does not contain any magnetic material that could induce a magnetic proximity effect on the interfacial conducting sheets, as reported for EuO/KTO <cit.>. The negative MR in complex oxide interfaces under an in-plane magnetic field can be attributed to the combined effect of spin-orbit coupling and long-range impurity scattering <cit.>. Additionally, the positive MR for LAO/STO in high magnetic fields (inset of Fig. 2(c)) can be explained by the domination of the conventional orbital MR in this regime, as the higher carrier mobility in LAO/STO leads to the completion of more cyclotron orbits compared to EuO/KTO.
After subtracting a smooth background from R_xx(B) measured at different tilt angles, we show Δ R_xx as a function of the total magnetic field in Fig.2(d) and (e) for EuO/KTO and LAO/STO, respectively.
Both systems show a complex shift of the oscillation minima and maxima positions at least up to θ = 45^∘. We, however, do not observe a noticeable change in the oscillations for θ > 65^∘. For both interfaces, the fixed quantum oscillation pattern in the regime θ = 75^∘ - 90^∘, as verified by the FFT analysis in Appendix A3, provides evidence for the existence of three-dimensional conduction channels.
To identify the two-dimensional (2D) nature of the electron systems, we plot Δ R_xx of EuO/KTO and LAO/STO as a function of the perpendicular component of magnetic field, Bcos(θ), in Fig. 2(f) and (g), respectively. For EuO/KTO, the low-field oscillations follow a cos(θ) scaling for θ < 30^∘, indicating the 2D confinement of conduction electrons at the interface. However, on comparing Fig. 2(d) and (e), we find the high field oscillations (> 25 T) to follow a scaling of neither B_total nor Bcos(θ), indicating the superposition of oscillations originating from 2D and 3D Fermi surfaces. In contrast, the oscillations in LAO/STO exhibit a Bcos(θ) scaling (depicted by vertical dashed lines) up to the high fields (35 T), except a few minima that might be affected by the crossover of Landau levels of multiple electronic subbands. Overall, both samples reveal a 2D confinement of electrons at the interface, along with a fraction of electrons dispersed deep into KTO and STO.
§.§ Magnetic field dependence of cyclotron mass
Since the temperature dependence of quantum oscillations amplitude provides a means for determining the effective mass, we measure both systems at different temperatures. In particular, we measure EuO/KTO at various selected temperatures for two different field orientations θ = 0^∘ and 90^∘ and show the oscillations resistance in Fig. 3(a) and (b). It is to be noted that to improve the signal-to-noise ratio in pulsed magnetic fields, the measurements on EuO/KTO at different temperatures were performed using a higher frequency (256 kHz) of excitation. The higher frequency excitation did not modify the frequency and amplitude of quantum oscillations, as compared in Fig.A1(b) of Appendix A1. As expected, the oscillations amplitude progressively decreases with increasing temperature for θ=0^∘. We, however, noticed a nonmonotonic temperature dependence of the oscillations amplitude for θ = 90^∘, most likely due to imperfect subtraction of the smooth background from the raw data. Overall, the oscillations amplitude and frequency at θ = 0^∘ are larger than those at θ = 90^∘, and therefore, we assume that the oscillations from the carriers confined at the interface dominate at θ = 0^∘.
We determine the cyclotron mass m_c by fitting the temperature dependence of oscillations amplitude to the temperature damping factor in the Lifshitz-Kosevich (L-K) equation <cit.> given below
R(T) = R_0 [2π^2k_Bm_cT/(ħ eB)] / sinh[2π^2k_Bm_cT/(ħ eB)]
For θ = 90^∘, we fit the maxima-to-minima difference to minimize the error in m_c induced from the imperfect background subtraction. The m_c values normalized with free electron mass m_e are displayed in Fig. 3(e). At θ = 0^∘, m_c = 0.56 ± 0.04 m_e in moderate fields (10-14 T) is comparable to the effective mass for heavy subbands (0.50 m_e) predicted theoretically <cit.> and confirmed with ARPES experiments <cit.> and SdH oscillations measurements <cit.> on KTO-2DES.
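As an illustration of this fitting procedure, Eq. (1) can be implemented and fitted with a few lines of Python. The sketch below is our own minimal example using scipy.optimize.curve_fit and synthetic amplitudes; it is not the actual analysis code or the measured data.

    import numpy as np
    from scipy.constants import k as k_B, hbar, e, m_e
    from scipy.optimize import curve_fit

    B = 14.0  # tesla; field at which the oscillation amplitude is read off

    def lk_amplitude(T, R0, mc_over_me):
        """Lifshitz-Kosevich thermal damping factor of Eq. (1)."""
        x = 2 * np.pi**2 * k_B * mc_over_me * m_e * T / (hbar * e * B)
        return R0 * x / np.sinh(x)

    # example: temperatures (K) and oscillation amplitudes (arb. units), here synthetic
    T_data = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    A_data = lk_amplitude(T_data, 1.0, 0.56) * (1 + 0.02 * np.random.randn(T_data.size))

    popt, pcov = curve_fit(lk_amplitude, T_data, A_data, p0=[1.0, 0.5])
    print(f"m_c = {popt[1]:.2f} m_e  (+/- {np.sqrt(pcov[1, 1]):.2f})")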
Most interestingly, above 14 T, m_c for θ = 0^∘ increases almost linearly, as depicted by a dashed line, with increasing magnetic field strength. We, however, did not observe such a progressive field-dependent enhancement in m_c values at θ = 90^∘. The average m_c at θ = 90^∘ is 0.61± 0.07 m_e, which corresponds well with the effective mass of the heavy band of bulk KTO <cit.>.
Next, we analyze the oscillations for LAO/STO measured at different temperatures in the range of 0.1 - 3.0 K (Fig. 4(a)). To ensure that the analysis is not primarily influenced by the superimposition of oscillations of multiple frequencies, we estimate m_c for this system by fitting not only the temperature dependence of the oscillations amplitude (Fig. 4(b)) but also the FFT amplitude (Appendix A4) to Eq. (1). We attribute the large discrepancy between the low-field m_c values determined from the oscillations amplitude and the FFT amplitude to the superimposition effect and poor resolution of oscillations in low fields (B < 14 T). Very similar to EuO/KTO, the lowest value of m_c (1.6 ± 0.1 m_e) corresponds to the heavy subband of STO-2DES <cit.>. Interestingly, m_c for LAO/STO also increases with the magnetic field, bearing a resemblance with the data reported by Y. Xie et al.<cit.> in the moderate field range (4 - 15 T).
In conclusion, both interfaces exhibit a progressive enhancement of the cyclotron mass for θ = 0^∘ as the magnetic field intensifies. Since both the cyclotron mass m_c (= (ħ^2/2π) ∂ A_k/∂ E) and the frequency of the quantum oscillations F (= (ħ/2π e) A_k) are related to the k-space area enclosed by the cyclotron orbit A_k, we next examine any eventual variation of the oscillations periodicity with magnetic field.
§.§ Aperiodicity in quantum oscillations
From the semiclassical theory of Landau quantization developed by Onsager and Lifshitz <cit.>, the oscillations in the magnetoresistivity are periodic in 1/B. To examine the periodicity of the oscillations in the 2DES at the EuO/KTO and LAO/STO interfaces, we plot Δ R_xx at θ = 0^∘ as a function of the inverse magnetic field in Fig. 5(a) and (b), respectively. As a quality check of the oscillations in LAO/STO in the low-field regime (< 14 T), we also measured the same sample in a superconducting magnet at a temperature of 0.3 K and display these data in the inset as well as in the main panel of Fig. 5(b). While for B > 6 T the minima and maxima of the oscillations from these measurements overlap perfectly with the high-field measurement data, we acquired better-resolved oscillations in low fields (B < 6 T). As one can see, for both interfaces the oscillation period decreases as the magnetic field increases. We perform the FFT analysis for both systems in a few selected field ranges to evaluate the magnetic field dependence of the oscillation frequencies. The FFT spectra of EuO/KTO (Fig. 5(c)) reveal one or two peaks in each field window, and the dominant peak moves to higher frequency with decreasing average inverse field of the selected windows. The shoulder peaks (on the left of the dominant peak) noticed in two field windows (8.3 - 14.9 T and 27.8 - 59.3 T) most likely arise from the 3D oscillations, as the FFT of the oscillations at θ = 90^∘ produces peaks at the same frequencies (see Appendix A3). Unlike EuO/KTO, the LAO/STO interface exhibits at least two prominent peaks in each field window, and these peaks shift to higher frequency with increasing field. In Fig. 5(e) and 5(f), we plot the estimated frequencies as a function of the effective field B_eff for EuO/KTO and LAO/STO, respectively. B_eff, defined by 1/B_eff = (1/B_min + 1/B_max)/2, depends on the size of the field range used in the FFT analysis. It is worth mentioning that the FFT analysis of the data at θ = 0^∘ over the full field range gives 7-8 peaks for both interfaces (Appendix A5, Fig A5) because of the progressive increase in oscillation frequency with field. Contrary to the observation at θ = 0^∘, the FFT analysis of the oscillations at θ = 90^∘ reveals only two frequencies (Appendix A3, Fig A3).
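The windowed FFT analysis described above can be reproduced with standard numerical tools. The following sketch is a simplified pipeline of our own (synthetic single-frequency data, assumed variable names): the oscillatory resistance is interpolated onto a uniform 1/B grid within a chosen field window before transforming.

    import numpy as np

    def window_fft(B, dRxx, B_min, B_max, n=2048):
        """FFT of the oscillatory resistance on a uniform 1/B grid
        restricted to the window [B_min, B_max]."""
        sel = (B >= B_min) & (B <= B_max)
        invB = 1.0 / B[sel]
        order = np.argsort(invB)
        grid = np.linspace(invB.min(), invB.max(), n)
        sig = np.interp(grid, invB[order], dRxx[sel][order])
        sig = sig * np.hanning(n)                         # window to reduce spectral leakage
        freqs = np.fft.rfftfreq(n, d=grid[1] - grid[0])   # frequencies in tesla
        amp = np.abs(np.fft.rfft(sig))
        return freqs, amp

    # synthetic example: a single SdH frequency of 50 T analysed between 10 and 30 T
    B = np.linspace(5, 35, 4000)
    dR = np.cos(2 * np.pi * 50 / B)
    f, a = window_fft(B, dR, 10, 30)
    print(f"dominant frequency ~ {f[np.argmax(a[1:]) + 1]:.1f} T")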
In conclusion, the FFT analyses for the data at θ = 0^∘ reveal that the 2DESs at the studied interfaces exhibit a continuous increase in quantum oscillations frequency as the magnetic field strength rises, in line with the previous observation on LAO/STO interface <cit.>.
§ INTERPRETATION OF UNCONVENTIONAL FINDINGS FROM QUANTUM OSCILLATIONS
The shared unconventional findings from the analysis of the quantum oscillations in 2DES at the EuO/KTO and LAO/STO interfaces are as follows: (1) The cyclotron mass estimated at low field values is comparable to the effective mass for heavy subbands. (2) Both the cyclotron mass and the frequency of the oscillations increase with the magnetic field.
The fact that quantum oscillations are resolved only from the heavy subbands can be attributed to the low carrier mobility in the light subbands. Despite a lighter effective mass, electrons in the light subbands, mainly composed of d_xy orbitals, exhibit reduced mobility because they reside in the planes adjacent to the interface (TiO_2 planes for LAO/STO and TaO_2 planes for EuO/KTO), which typically experience significant disorder (e.g., intermixed ions and dislocations) induced during the growth of the top oxide layers.
Of particular interest are the mass enhancement and the large cyclotron mass observed at high magnetic fields. In the case of EuO/KTO, the cyclotron mass reaches approximately 1.8 m_e, while in LAO/STO, it reaches around 3.0 m_e. These values cannot be explained solely based on the predicted mass of electronic subbands <cit.> or magnetic breakdown <cit.>. While the magnetic-field-induced change in density or chemical potential can reasonably explain the increase in oscillations frequency, the mass enhancement contradicts this scenario if the electronic bands follow a parabolic dispersion relation, for which ∂ A_k/∂ E is constant. Therefore, the magnetic-field-induced simultaneous change in frequency and cyclotron mass (i.e. change in A_k and ∂ A_k/∂ E) implies a correction to the parabolic dispersion of the electronic bands. Since for STO-2DES, a linear E-k dispersion is predicted at the avoided crossings of the light and heavy subbands along Γ M direction<cit.>, we consider a combination of linear and parabolic dispersion relation to interpret the B dependence of A_k and ∂ A_k/∂ E.
Combining the parabolic and linear dispersion terms, the Hamiltonian for a 2DES in a magnetic field perpendicular to its plane will be <cit.>
H = Π^2/2m+ v_F(Π_xσ_y + Π_yσ_x) - 1/2gμ_B Bσ_z
where Π_i = ħ k_i+ eA_i, m is the density of states (DOS) mass, v_F is the Fermi velocity, σ_i are the Pauli matrices, g is the Landé g-factor, and μ_B is the Bohr magneton.
The associated Landau levels for the Hamiltonian in Eq. (2) will be <cit.>
E_N = ħω_SN±√((ħω_D)^2N+(ħω_S/2 - gμ_BB/2)^2)
where ω_S = eB/m, ω_D = √(2ev_F^2B/ħ), and N is the Landau level index.
Taking E_N = E_F and converting Eq. (3) as a quadratic equation for N, we get
(ħω_S)^2N^2 - [2ħω_S E_F + (ħω_D)^2]N + E_F^2 - (1/4)(ħω_S - gμ_BB)^2 = 0
and by solving Eq. (4) for N, we have
N = [m^2v_F^2/(eħ) + mE_F/(eħ)]/B - √((m^2v_F^2/(eħ))^2 + 2mE_F(mv_F/(eħ))^2 + (1/4)(1 - mgμ_B/(eħ))^2B^2)/B .
Considering that the first two terms in the square root of Eq. (5) are larger than the last one, owing to the heavy DOS mass in EuO/KTO and LAO/STO, we perform a Taylor expansion of the square root and get an approximate expression for the Landau level index N:
N ∼ F_0/B + C × B + ⋯
where
F_0 = (m^2v_F^2/(eħ))(1 + E_F/(mv_F^2) - √(1 + 2E_F/(mv_F^2)))
and
C = -(eħ/(8m^2v_F^2)) (1 - mgμ_B/(eħ))^2/√(1 + 2E_F/(mv_F^2))
Note that Eq. (6) is the well-known Onsager relation <cit.> with an additional term C × B that causes a deviation of the oscillation periodicity from 1/B and leads to a non-linear Landau plot, i.e., a plot of the Landau level index as a function of the inverse magnetic field. Constructing the Landau plot from oscillations containing two or more different frequencies (EuO/KTO data for B > 30 T in Fig. 5(a) and LAO/STO data in the full field range in Fig. 5(b)) is not feasible. We therefore display the Landau plot for EuO/KTO, with a reasonably good fit to Eq. (6), only for B < 30 T in Fig. 6(a) and list the fitting parameters in Table 1.
In order to determine a relationship between the oscillation frequency and the magnetic field, we take the first-order derivative of N with respect to 1/B, i.e.
F = ∂ N/∂ (1/B) = F_0 - C × B^2, where we have used ∂ B/∂(1/B) = -B^2.
Next, we apply this phenomenological model to the frequency extracted from the FFT analysis and show the best fit of the experimental data to Eq. (9) in the Fig. 5(e) and (f). Interestingly, the fitting parameters F_0 and C for EuO/KTO extracted from two different methods of analyzing quantum oscillations, the Landau plot and the FFT, are comparable (Table 1). Further, as displayed in Table 1, the carrier density calculated from the oscillations frequencies is smaller than the Hall carrier density for both interfaces, in line with previous reports <cit.>.
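For completeness, the fit of the FFT frequencies to Eq. (9) can be written compactly as below; the numbers are placeholders for illustration, not the values of Table 1.

    import numpy as np
    from scipy.optimize import curve_fit

    def freq_model(B_eff, F0, C):
        """Eq. (9): apparent SdH frequency versus effective field."""
        return F0 - C * B_eff**2

    # placeholder data: effective fields (T) and FFT frequencies (T)
    B_eff = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
    F_obs = np.array([48.0, 55.0, 65.0, 78.0, 95.0])

    (F0, C), _ = curve_fit(freq_model, B_eff, F_obs, p0=[40.0, -0.05])
    print(f"F_0 = {F0:.1f} T,  C = {C:.3f} T^-1")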
Next, to examine the field dependence of the cyclotron mass, we estimate the energy difference between two consecutive Landau levels, as given below
E_N+1-E_N = ħω_c^*.
where ω_c^* = eB/m_c^* is the cyclotron frequency and m_c^* is the cyclotron mass, including linear and parabolic dispersion as given in Hamiltonian in Eq. (2). It is important to note that the L-K formula in Eq. (1) is based on the effective mass theory with a parabolic dispersion. In the case of nonparabolic dispersion, the cyclotron mass extracted from the L-K analysis or cyclotron resonance naturally exhibits a dependence on energy and magnetic fields <cit.>.
By substituting E_N and E_N+1 in Eq. (10) and treating ħω_D as a correction to ħω_S, we get an approximate expression for ω_c^*:
ω_c^* = ω_S + ħω_D^2/[2(ħω_S - gμ_BB)]
Using ω_c^* = eB/m_c^*, ω_S = eB/m, and ω_D = √(2ev_F^2B/ħ), we obtain
1/m_c^* = 1/m + [mv_F^2/(ħ e - gμ_B m)](1/B)
This expression for the cyclotron mass at g = 0 is the same as that derived directly from the cyclotron mass definition m_c = (ħ^2/2π) ∂ A_k/∂ E, with A_k = π k^2 and E = ħ^2k^2/(2m) + ħ v_Fk (see Appendix A6). By fitting the experimental m_c(B) data for EuO/KTO to Eq. (12) in Fig. 6(b), we get the Fermi velocity v_F ∼ 2 × 10^4 m/s, one order of magnitude smaller than that of Dirac fermions in topological materials <cit.>.
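The corresponding fit of the cyclotron-mass data to Eq. (12) can be sketched in the same spirit (illustrative placeholder values only, with g set to zero):

    import numpy as np
    from scipy.constants import hbar, e, m_e
    from scipy.optimize import curve_fit

    def inv_mc(B, m_band_me, v_F):
        """Eq. (12) with g = 0: 1/m_c* = 1/m + m v_F^2 / (hbar e B); masses in kg."""
        m_band = m_band_me * m_e
        return 1.0 / m_band + m_band * v_F**2 / (hbar * e * B)

    # placeholder data: field (T) and cyclotron mass (units of m_e), not the measured values
    B = np.array([14.0, 20.0, 26.0, 32.0, 40.0, 50.0])
    mc_me = np.array([0.60, 0.75, 0.95, 1.15, 1.40, 1.75])

    popt, _ = curve_fit(inv_mc, B, 1.0 / (mc_me * m_e), p0=[0.55, 2e4])
    print(f"band mass = {popt[0]:.2f} m_e,  v_F = {popt[1]:.2e} m/s")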
§ DISCUSSION AND CONCLUSION
After establishing that the peculiar findings in this study, namely the enhancement in cyclotron mass and oscillations frequency with the magnetic field, can be reasonably explained by considering a summation of linear and parabolic dispersion as described in Eq. (2), we now explore the potential origin of this non-ideal E-k dispersion close to the Fermi level.
The spin-orbit interaction is one of the key elements that can modify the parabolic bands, even in more conventional semiconductor heterointerfaces. For instance, in GaAs/Al_0.3Ga_0.7As heterostructures, the quasi-2D hole system experiences nonparabolic valence bands (with higher-order corrections in k) due to the anticrossings of light and heavy hole subbands <cit.>. In the case of STO(001)-2DES, the spin-orbit interaction leads to partial avoidance of band crossings between the light (d_xy) and heavy (d_xz and d_yz) bands along ΓM, resulting in an orbital dispersion reminiscent of Dirac cones <cit.>. Additionally, density functional theory predicts the existence of non-trivial topological states at the avoided crossings of subbands in the EuO/KTO(001) interface <cit.>. Given that there are four such points in the Brillouin zone of the STO and KTO 2DES where a Dirac-like dispersion occurs, it is plausible that electrons orbiting within the electronic states reconstructed from the combination of d_xy and d_xz/d_yz will encounter an unusual summation of linear and parabolic dispersion. Furthermore, the observed similarity in the oscillations aperiodicity in the studied systems and the surface states of topological insulators <cit.> hints at the existence of unique electronic states in the oxides-based 2DES, possibly related to topological effects. We note that the Hamiltonian stated in Eq. (2) bears a striking resemblance to the Hamiltonian of a 2D electron gas with Rashba spin-orbit coupling and a Zeeman effect <cit.>. Therefore, the obtained results from our analysis can be fairly interpreted using the Rashba model as well.
In summary, to gain a deeper understanding of the electronic band structure of the 2DES based on STO and KTO, we conducted a thorough investigation of the quantum oscillations in the magnetoresistance of high-mobility LAO/STO and EuO/KTO interfaces. By analyzing the observed oscillations at various tilt angles, we identified that both interfaces exhibit electron confinement in the two-dimensional plane at the interface, while a portion of the carriers extends deep into the STO and KTO. Remarkably, for both interfaces, the oscillations originating from the 2D confined electrons displayed an increase in frequency and cyclotron mass with increasing magnetic field strength. To explain these findings, we propose a scenario involving a combination of linear and parabolic dispersion relations. The presence of both types of dispersion relations reasonably explains the experimental observations and indicates the existence of non-trivial electronic states, possibly related to topological effects. Furthermore, these interesting results shed light on the topological states predicted recently <cit.> and their experimental realization through anomalous effects in transport measurements <cit.> on the 2DES based on related perovskite oxides.
§ ACKNOWLEDGEMENTS
We acknowledge support from the National High Magnetic Field Laboratory, supported by the National Science Foundation through NSF/DMR-1644779 and the state of Florida. K.R., M.K.C. and N. H. and pulsed field measurements were supported by the US Department of Energy "Science of 100 Tesla" BES program. We acknowledge the support of HFML-RU/FOM, member of the European Magnetic Field Laboratory. M.K.C. acknowledges support from NSF IR/D program while serving at the National Science Foundation. Any opinion, findings, and conclusions or recommendations expressed in these materials are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
§ APPENDICES
§.§ Magnetotransport details for EuO/KTO
In order to check the data symmetry in up and down fields for the unpatterned EuO/KTO sample, we measured transport on this sample in both field directions at the lowest possible temperature T = 0.7 K and in a field perpendicular to the interface. The R_xx and R_yx (Fig. A1(a)) both show an asymmetry in field. Despite the different magnitudes of R_xx and R_yx, we did not see a noticeable shift in the position or amplitude of the oscillations. We used the (anti)symmetrized data (Fig. 1(b)), as described in the main text, to determine the carrier density and the mobility. Furthermore, since a high excitation frequency improves the signal-to-noise ratio of the data measured in the pulsed magnetic field, we checked its influence on the amplitude and frequency of the oscillations. As displayed in Fig. A1(b), we do not observe any noticeable change in the oscillation pattern, except that the oscillation quality improves with increasing excitation frequency.
§.§ In-plane magnetoresistance
§.§ FFT analysis of quantum oscillations at angles close to θ = 90^∘
To further verify that the position of SdH oscillations does not move by varying the angles in the vicinity of θ = 90^∘, we performed FFT analysis of the Δ R_xx(1/B) at a few angles. As shown in Fig A3, the position of the prominent peaks for both interfaces does not move with the angle.
§.§ L-K fit to FFT amplitude for LAO/STO
In order to evaluate the cyclotron mass m_c from the FFT spectra in A4(a), we fit the temperature dependence of FFT amplitude with L-K equation given below:
X(T) = X_0 [2π^2k_Bm_cT/(ħ eB_eff)] / sinh[2π^2k_Bm_cT/(ħ eB_eff)]
where 1/B_eff = (1/B_min + 1/B_max)/2. The calculated m_c values are displayed in Fig. A4(b) and (c) for frequencies F_1 and F_2, respectively.
§.§ FFT analysis of quantum oscillations in full-field range
§.§ Cyclotron mass in the case of combined linear and parabolic dispersion
Combining the linear and parabolic dispersion terms, we have E = ħ^2k^2/(2m) + ħ v_Fk. The area of the Fermi surface in k-space is A_k = π k^2. Substituting E and A_k into the cyclotron mass formula, we get
1/m_c = (2π/ħ^2)/(∂ A_k/∂ E) = 1/m + v_F/(ħ k)
As we know, ω_c = v/r = eB/m. Converting r into reciprocal space, we have k ≈ eB/(mv_F). Substituting this k value into Eq. (A2), we get
1/m_c ≈ 1/m + [mv_F^2/(eħ)](1/B)
|
http://arxiv.org/abs/2307.05313v1 | 20230711150007 | Programmable and arbitrary-trajectory ultrafast flying focus pulses | [
"M. V. Ambat",
"J. L. Shaw",
"J. J. Pigeon",
"K. G. Miller",
"T. T. Simpson",
"D. H. Froula",
"J. P. Palastro"
] | physics.optics | [
"physics.optics"
] |
1Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14623, USA
[email protected]
[email protected]
“Flying focus” techniques produce laser pulses with dynamic focal points that travel distances much greater than a Rayleigh length. The implementation of these techniques in laser-based applications requires the design of optical configurations that can both extend the focal range and structure the radial group delay. This article describes a method for designing optical configurations that produce ultrashort flying focus pulses with arbitrary-trajectory focal points. The method is illustrated by several examples that employ an axiparabola for extending the focal range and either a reflective echelon or a deformable mirror-spatial light modulator pair for structuring the radial group delay. The latter configuration enables rapid exploration and optimization of flying foci, which could be ideal for experiments.
§ INTRODUCTION
The intensity peak of a flying focus pulse can travel at any velocity, independent of the group velocity, over distances much longer than a Rayleigh range <cit.>. These properties offer a new approach to optimizing the wide range of laser-based applications that require velocity matching or extended interaction lengths. For instance, recent experiments have used a flying focus to create long, contiguous plasma channels <cit.> and to synchronize the pump and probe pulses in soft x-ray lasers <cit.>. The potential uses of flying focus pulses extend beyond these demonstrations to enhancing laser wakefield acceleration <cit.>, nonlinear Thomson scattering <cit.>, or THz generation <cit.> and to facilitating observations of fundamental processes, such as radiation reaction <cit.> and Compton scattering <cit.>. The ultimate success of these applications relies on the design of practical, and preferably adaptive, optical configurations for preparing flying focus pulses.
The first experimental realization of a flying focus used a highly chromatic diffractive optic to focus a chirped laser pulse <cit.>. The diffractive optic focuses each wavelength of the pulse to a different longitudinal location, while the chirp controls the arrival time of each wavelength at its focus. The resulting intensity peak traverses the focal range, i.e., the distance between the focal points of the minimum and maximum wavelengths, with a constant velocity that can be adjusted by changing the chirp. More complex spectral phases allow for more complex focal trajectories <cit.>. Despite its tunability, this “chromatic flying focus” has several limitations. First, because the extended focal range is produced by a static diffractive optic, it cannot be modified from shot to shot. Second and more importantly, the bandwidth of the pulse is spread across the focal region. This precludes the formation of an ultrashort (<100 fs) intensity peak, which is a requirement for many applications.
The need for ultrashort intensity peaks has motivated the development of flying focus techniques that preserve the entire bandwidth of the laser pulse at every location within the focal range <cit.>. In contrast to the chromatic flying focus, which uses radial group delay to extend the focal range, these “ultrafast flying focus” schemes employ separate optics to independently extend the focal range and structure the radial group delay. As an example, a recent demonstration of an ultrafast, constant-velocity flying focus <cit.> used the geometric aberration of an axiparabola <cit.> to focus different annuli in the near field to different longitudinal locations in the far field and the radial group delay imparted by an echelon <cit.> to control the relative timing of the annuli. Despite the success of these experiments, the configuration relies on the use of a static echelon designed for a specific focal trajectory. An alternative configuration that replaces the echelon with adaptive optics, such as a deformable mirror-spatial light modulator pair <cit.>, would allow for on-shot programmability of the radial group delay and, as a result, the focal trajectory.
This work describes a method for designing optical configurations that produce ultrashort flying focus pulses with arbitrary focal trajectories at velocities close to the speed of light (Section II). The general method is independent of the optical configuration but is illustrated for specific examples of an axiparabola combined with either an echelon or a deformable mirror-spatial light modulator pair (Section III). The method is applied to create flying focus pulses exhibiting constant velocity, constant acceleration, and oscillating focal trajectories (Section IV). In each case, the intensity peak of the flying focus maintains an ultrashort duration as it traverses the extended focal range. The flexibility afforded by this method and the deformable mirror-spatial light modulator pair (DM-SLM) enable rapid and automated control over the focal trajectory, which can facilitate the use of the ultrafast flying focus in laser-based applications.
§ THE FOCAL TRAJECTORY OF AN ULTRAFAST FLYING FOCUS
Figure <ref> compares the trajectories of focal points produced by a focusing optic alone (a) and a focusing optic used in combination with optics that structure the radial group delay (b) and (c). In Fig. <ref>(a), a laser pulse with a flat phase front and a flat pulse front is incident at z=0 on a focusing optic with a surface defined by the sag function s_f(r). The focusing optic extends the range of high intensity by using geometric aberration to focus different radial locations r in the near field to different longitudinal locations in the far field z = f(r). The resulting focal point travels a distance L = max(f) - min(f) along a trajectory that is fully determined by the sag function. In Figs. <ref>(b) and (c), additional optics are used to structure the pulse front, or radial group delay τ_D(r), before focusing. Structuring the delay provides control over the trajectory of the focus and can produce a constant-velocity (b), oscillating (c), or otherwise dynamic focal point.
Each optical element in Fig. <ref> applies a spatio-spectral phase to the laser pulse. The phase imparted by the entire optical assembly ϕ(ω,r) can be written as the sum of contributions from the focusing optic and the elements that structure the radial group delay (RGD). In the paraxial approximation (see Appendix A),
ϕ(ω,r) =
-2ω/c s_f(r) +
ϕ_D(ω,r).
The first term provides the initial phase front curvature required to focus each radius to the location z = f(r). With f(r) specified, the sag function s_f(r) can be found by solving
ds_f/dr= r/2f(r).
The second term in Eq. (<ref>) modifies the relative timing of the near-field radii,
τ_D(r) = ∂ϕ_D(ω,r)/∂ω.
To preserve the desired focusing, the elements that structure the RGD cannot significantly distort the phase fronts. The constraint ∂_rϕ_D(ω,r)|_ω=ω_0 = 0 ensures that ϕ_D only modifies the RGD and, equivalently, that the central frequency of the laser pulse ω_0 focuses to the locations described by f(r).
For applications, one would like to specify a focal trajectory, i.e., the time-dependent velocity of the focus v_f(t), and use this trajectory to determine the required τ_D(r). To calculate the required τ_D(r), first note that each near-field radius of the laser pulse can arrive at its focal location z = f(r) at a different time. The focal time t_f(r) for each radius has contributions from the structured RGD and the focal geometry:
t_f(r) ≈τ_D(r) + 1/c[f(r) + r^2/2f(r) - 2s_f(r)].
The variation in the focal time and location with radius results in a moving focal point with a velocity
v_f(r) = df/dr(dt_f/dr)^-1 ≈ c[1 + r^2/(2f^2(r)) - c(df/dr)^-1 dτ_D(r)/dr].
Equation <ref> demonstrates that the structured RGD can be used to control the trajectory of the focus independently of the focal geometry. If τ_D(r) = 0, v_f(r) = c[1+r^2/(2f^2(r))], which is dictated solely by f(r). Rearranging Eq. (<ref>) provides a differential equation for the τ_D(r) needed to produce a specified trajectory v_f(t):
c dτ_D/dr = [1 - v_f(t_f(r))/c + r^2/(2f^2(r))]df/dr,
where v_f(t_f(r)) = v_f(r) depends on τ_D through Eq. (<ref>) and a one-to-one mapping between near-field radius and time has been assumed. The solutions to Eqs. (<ref>) and (<ref>) form the basis for designing the optical elements necessary to create an ultrafast flying focus.
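As an illustration of how the required delay follows from a chosen trajectory, the Python sketch below numerically integrates the delay equation above for a constant target velocity and a user-supplied focal curve f(r); the focal curve, aperture, and velocity used in the example are assumed for illustration (they match the axiparabola example discussed in the next section).

```python
import numpy as np

c = 299792458.0  # speed of light (m/s)

def radial_group_delay(f, v_f_over_c, R, n=2001):
    """Integrate c*dtau_D/dr = [1 - v_f/c + r^2/(2 f^2)] df/dr for a constant
    target velocity v_f/c and a focal curve f(r), returning r and tau_D(r).
    (A time-dependent v_f(t) would additionally require iterating on t_f(r).)"""
    r = np.linspace(0.0, R, n)
    fr = f(r)
    dfdr = np.gradient(fr, r)
    integrand = (1.0 - v_f_over_c + r**2 / (2.0 * fr**2)) * dfdr / c
    tau = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    return r, tau

# Assumed example: f(r) = f0 + L (r/R)^2 with f0 = 50 cm, L = 1 cm, R = 5 cm.
f0, L, R = 0.50, 0.01, 0.05
r, tau = radial_group_delay(lambda r: f0 + L * (r / R)**2, v_f_over_c=1.0, R=R)
print(f"delay required at the edge of the aperture: {tau[-1] * 1e15:.0f} fs")
```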
In order to preserve the ultrashort duration of the intensity peak at every point within the focal range, the focal velocity must be close to the speed of light, _f(t) ≈ c. Even if a ϕ_D satisfies the constraint ∂_rϕ_D|_ω=ω_0 = 0 and maintains the focal locations of the central frequency, it will modify the focal locations of every other frequency. This spreads the frequency content of the laser pulse across the focal region, which reduces the bandwidth available at each location and places a lower bound on the minimum duration. Noting that the transverse wavenumber is the radial derivative of the phase and using similar triangles, one can show that the RGD modifies the focal locations by a distance Δ f(ω,r) ≈ -cf^2(∂_rϕ_D)/(rω). This longitudinal chromatism will have a negligible effect on the duration of the intensity peak when Δ f is much smaller than the focal range L, i.e., when
(Δω/ω_0)(f^2/rL)| df/dr(1-v_f/c) | ≪ 1,
where Δω is the bandwidth of the laser pulse and Eq. (<ref>) has been used with a simple form of ϕ_D(ω,r) = (ω-ω_0)τ_D(r).
§ OPTICAL ELEMENTS TO CREATE AN ULTRAFAST FLYING FOCUS
§.§ Optics to extend the focal range
The optics that extend the focal range use geometric aberration to focus different radial locations r in the near field to different longitudinal locations in the far field z = f(r). In principle, this can be accomplished using refractive optics like lenses. However, for broadband, ultrashort pulses, the B-integral, group velocity dispersion, and higher-order dispersion of these optics can broaden or distort the temporal profile. In addition, the damage threshold of refractive optics typically prohibits their use as final focusing elements for high-intensity pulses. Thus, reflective optics are often preferable for extending the focal range of high-intensity, ultrashort flying focus pulses.
One such optic, the axiparabola <cit.>, produces a near-constant, on-axis intensity maximum over the entire focal range, making it ideal for many applications. The focal length as a function of near-field radius f(r) is designed so that a flattop transverse intensity profile incident on the optic results in a uniform on-axis intensity maximum in the far field. Specifically,
f(r) = f_0 + L(r/R)^2,
s_f(r) = R^2/4Lln[1 + L/f_0(r/R)^2],
where f_0 is the nominal focal length, R is the maximum radius of the axiparabola, and L determines the length of the focal range. Expanding Eq. (<ref>) in powers of q≡ L/f_0 shows that the axiparabola is primarily a parabolic mirror 𝒪(q^0) with spherical aberration 𝒪(q^1). For L>0 (<0), rays incident at larger radii are focused farther from (closer to) the optic than rays incident at smaller radii. With this choice of f(r), Eq. (<ref>) simplifies to 2(Δω/ω_0)(f_0/R)^2|1-v_f/c| ≪ 1, which is independent of L.
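For reference, the design formulas above can be evaluated directly; the short sketch below does so for the nominal parameters of the example that follows and verifies the sag against the defining relation ds_f/dr = r/2f(r).

```python
import numpy as np

# Sketch of the axiparabola design formulas above, f(r) = f0 + L (r/R)^2 and
# s_f(r) = (R^2 / 4L) * ln[1 + (L/f0)(r/R)^2], for the nominal parameters used
# in the example below (f0 = 50 cm, R = 5 cm, L = 1 cm).
f0, R, L = 0.50, 0.05, 0.01          # metres

r = np.linspace(0.0, R, 501)
f = f0 + L * (r / R)**2
s = (R**2 / (4.0 * L)) * np.log(1.0 + (L / f0) * (r / R)**2)

# Consistency check: ds_f/dr should equal r / (2 f(r)).
max_err = np.max(np.abs(np.gradient(s, r)[1:-1] - (r / (2.0 * f))[1:-1]))
print(f"focal range: {f[-1] - f[0]:.3f} m,  maximum sag: {s[-1] * 1e3:.2f} mm")
print(f"max |ds/dr - r/(2f)|: {max_err:.2e}")
```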
Figure <ref> displays the results of propagation simulations (see Appendix B) for a laser pulse focused by an axiparabola with f_0= 50 cm, R= 5 cm, and L= 1 cm. The laser pulse had a central wavelength λ_0 = 2π c/ω_0= 920 nm and Δλ = 78 nm of bandwidth in a Gaussian power spectrum, corresponding to a 27 fs full-width at half-maximum (FWHM) duration. The transverse profile was initialized as a flattop with a 5 cm radius that filled the aperture of the axiparabola. The maximum on-axis intensity is nearly uniform over the entire focal range L, which is ∼340× longer than the Rayleigh range of the full-aperture focal spot Z_R = λ_0f_0^2/π R^2 [Fig. <ref>(b)]. The modulations in the on-axis intensity result from diffraction of the spherically aberrated phase fronts (see Appendix C). The near-uniform on-axis intensity comes at the cost of a spot size w that narrows over the focal range [Fig. <ref>(c)]. More specifically, the effective f/# at the beginning of the focal range is larger than that at the end, such that within the focal region
w(z) ≈ (λ_0f_0/π R) |L/(z-f_0)|^1/2.
The ring-like structures visible in the fluence [Fig. <ref>(c)] are the natural diffraction pattern created by the axiparabola.
Figure <ref>(d) illustrates the focal trajectory produced by the axiparabola. Here, the on-axis intensity is plotted as a function of propagation distance z-f_0 and the moving frame coordinate ξ = t-z/c. In these coordinates, a vertical line indicates a signal travelling at the vacuum speed of light. The intensity peak accelerates from its initial focal point at z-f_0 = 0 and ξ = 0 to its final focal point at z-f_0 = L and ξ≈ -75 fs, following a trajectory consistent with _f(r) = c[1+r^2/2f^2(r)]. The pulse maintains its ultrashort duration over the entire focal range as shown by the white lineouts taken at the start (right) and end (left) of the focal region.
§.§ Optics to structure the radial group delay
The trajectory of the focus can be programmed by structuring the radial group delay of the laser pulse. Ideal, achromatic focusing optics impart the exact amount of RGD needed to ensure that all frequency components within a pulse arrive at their focus at the same time. More generally, optics can impart unwanted RGD, resulting in asynchronous focusing and a reduction in the maximum focused intensity. For instance, with refractive optics, the combination of group velocity dispersion and the radially dependent thickness of the optic produces unfavorable RGD <cit.>. Below, optical elements are discussed that can impart favorable RGD, thereby enabling control over the trajectory of the focal point and the peak laser intensity.
The recently proposed and demonstrated radial echelon provides a reflective approach to structuring the radial group delay <cit.>. The mirrored surface of the echelon consists of concentric rings with variable widths determined by the desired RGD and depths d equal to a half-integer multiple of the central wavelength d = (ℓ/2)λ_0 = πℓ c/ω_0, where ℓ is a positive integer. For a given τ_D(r) and ℓ = 1, the phase imparted by the echelon is given by
ϕ^ech_D(ω,r) = -2ω/c{ (λ_0/4) [ ceil(cτ_D(r)/λ_0) + floor( cτ_D(r)/λ_0) ] }.
By discretizing the continuous delay cτ_D(r) in steps of the central wavelength, the echelon satisfies the constraint ∂_rϕ^ech_D(ω,r)|_ω=ω_0 = 0 and thus does not affect the focusing of the frequency component ω_0. Said differently, the phase fronts of the central wavelength maintain their transverse coherence upon reflection from the echelon. For any other wavelength, the echelon introduces a shear in the phase front between each ring. This shear smooths out as higher-spatial orders diffract, leaving the desired radial group delay. The widths of the echelon rings can also lead to diffractive losses. These losses are negligible when Δ R ≫λ_0f_0/2R, which is easily satisfied for a large range of designs. Importantly, for _f(t) ≈ c, the combined axiparabola-echelon system preserves an ultrashort pulse duration.
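A minimal numerical sketch of this discretization is given below: it builds the effective echelon surface from an arbitrary delay profile and checks that the imparted group delay reproduces the requested one to within half a wavelength of path. The delay values are placeholders.

```python
import numpy as np

c, lam0 = 299792458.0, 920e-9

def echelon_surface(tau_D):
    """Effective surface depth of the l = 1 echelon: the continuous path c*tau_D
    is rounded onto half-wavelength steps via (lam0/4)*(ceil + floor)."""
    x = c * tau_D / lam0
    return 0.25 * lam0 * (np.ceil(x) + np.floor(x))

def echelon_phase(omega, tau_D):
    return -2.0 * omega / c * echelon_surface(tau_D)

# Check: the group delay 2*s_ech/c reproduces the requested delay to < lam0/(2c).
tau_req = np.linspace(0.0, 80e-15, 9)                 # placeholder delay values
tau_imparted = 2.0 * echelon_surface(tau_req) / c
print(np.max(np.abs(tau_imparted - tau_req)) * c / lam0)   # < 0.5
```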
Despite its advantage as a reflective optic with a higher damage threshold, each echelon is a static optical element that can only impart a single, pre-designed RGD. Adaptive optics, such as deformable mirrors and spatial light modulators, offer dynamic programmability of the radial group delay and, as a result, the focal trajectory. A deformable mirror (DM) consists of pistons or piezoelectric segments that shape a flexible, reflective membrane <cit.>. A DM can be programmed to apply the continuous phase
Φ_dm(ω,r) = -2ω/cs_dm(r) = ωτ_D(r),
where s_dm(r) = -cτ_D(r)/2 is the sag function of the membrane. However, the phase Φ_dm(ω,r) does not satisfy the constraint ∂_rΦ_dm(ω,r)|_ω=ω_0 = 0. Thus a second optical element must be introduced to eliminate the phase distortion at the central frequency.
A spatial light modulator (SLM) can partially correct the phase front distortion at the central frequency <cit.>. An SLM consists of a pixelated, two-dimensional array of liquid crystals that possess electrical and optical anisotropy. The voltage delivered to each pixel can be adjusted to change the optical path length of an incident laser pulse as a function of transverse location <cit.>. By appropriately programming the SLM voltages, the phase front of the central frequency can be flattened to an extent allowed by the discreteness of the pixels. Specifically, for the DM phase in Eq. (<ref>),
Φ_slm(ω,r) = - (ω/c) λ_0 mod[ cτ_D (r_p)/λ_0, 1 ],
where r_p = (1/2)[floor(r/p) + ceil(r/p)]p and p is the SLM pixel size. The total phase of the DM-SLM pair is then
ϕ_D^dm-slm(ω,r) = Φ_dm(ω,r) + Φ_slm(ω,r).
In the limit of infinitesimal pixels, p→0 and ϕ_D^dm-slm(ω,r)→ϕ_D^ech(ω,r). Note that Eq. (<ref>) was discretized into radial zones; for Cartesian zones, one can instead use τ_D(x_p, y_p).
Figures <ref> and <ref> illustrate how these optics modify the electric field profile of a laser pulse in the near field to produce a constant-velocity focus. Figure <ref>(a) shows the τ_D(r) required for subluminal (v_f <c), luminal (v_f = c), and superluminal (v_f>c) focal velocities when using the axiparabola described in Fig. <ref>. Because the axiparabola naturally produces a superluminal and accelerating focus, the subluminal (superluminal) velocity requires a larger (smaller) delay than the luminal velocity at larger radii. The echelon and DM-SLM designs for v_f = c are displayed in Figs. <ref>(b) and (c). In this configuration, the incident laser pulse propagates from right to left, so that the center of the pulse encounters the optics first. Figure <ref> shows the effect that each optic has on the electric field profile. After the echelon [Fig. <ref>(b)], the field has flat phase fronts and a radially dependent delay consistent with τ_D(r). After the DM [Fig. <ref>(c)], the field has the correct delay, but also has curved phase fronts. The SLM undoes this curvature [Fig. <ref>(d)]. The combined DM-SLM system reproduces the field profile created by the echelon to within the resolution limits of the SLM.
A DM-SLM pair with sufficiently small pixels can create a flying focus that is virtually indistinguishable from a flying focus created by an echelon [Fig. <ref>]. While an echelon flattens the phase fronts globally and locally, an SLM can only flatten the phase fronts globally. Within each pixel, the phase fronts remain curved [Fig. <ref>(d) inset]. As a result, the constraint ∂_rϕ^dm-slm_D(ω,r)|_ω=ω_0 = 0 is only approximately satisfied. When the SLM pixel size is too large, the local curvature of the phase fronts affects the structure of the flying focus pulse in the far field. The inequality
max(∂_r ϕ_D^dm-slm)p ≪ 1 provides a rough condition for the SLM pixel size required to reproduce the flying focus created with an echelon. Failing to meet this condition in the near field results in a decreased intensity at corresponding locations in the far field [cf. Figs. <ref>(b) and (c)]. As the pixel size is reduced, the intensity profile converges to the profile produced using an echelon [cf. Figs. <ref>(a) and (d)].
§ EXAMPLES OF ULTRASHORT FLYING FOCUS TRAJECTORIES
This section presents examples that demonstrate the flexibility and far-field properties of the ultrafast flying focus. The examples, i.e., constant-velocity, accelerating, and oscillating focal trajectories, are motivated by applications in plasma physics and nonlinear optics. The propagation of pulses that exhibit these trajectories was simulated in the near and far fields using a combination of the Fresnel diffraction integral and the modified paraxial wave equation (see Appendix B for details) <cit.>. In all cases, an axiparabola with f_0= 50 cm, R= 5 cm, and L= 1 cm, a deformable mirror with a 5 cm radius, and a spatial light modulator with a pixel size of p = 50 μm were used to extend the focal range and structure the RGD. The parameters were chosen based on the capabilities of current technology.
§.§ Constant-velocity focal trajectories
A constant-velocity flying focus can enhance applications that rely on velocity matching over long distances, such as laser wakefield acceleration <cit.>, THz generation <cit.>, and photon acceleration <cit.>. Figure <ref> shows the on-axis intensity for the (a) superluminal, (b) luminal, and (c) subluminal velocities described in Fig. <ref>. In each case, the intensity peak travels along the designed constant-velocity trajectory. The images also reveal that the combination of the DM-SLM and axiparabola produce features similar to those of the axiparabola alone. Namely, the on-axis intensity is modulated, and the ultrashort pulse duration is preserved over the entire focal region [cf. Fig. <ref>].
§.§ Exotic focal trajectories
An accelerating focus can be used to control the trapping and acceleration of electrons in a laser wakefield accelerator. Initializing the intensity peak, and therefore the wakefield, with a subluminal velocity would facilitate the trapping of background plasma electrons in the plasma wave <cit.>. After sufficient trapping has occurred, the intensity peak can be accelerated to a luminal or superluminal velocity. This change in velocity has the dual benefit of preventing electrons from outrunning the accelerating phase of the wakefield, i.e., dephasing, and of improving the quality of the electron bunch by eliminating unwanted trapping <cit.>.
Figure <ref> illustrates an ultrafast flying focus that accelerates from an initial subluminal velocity to a superluminal velocity over the focal range. The design trajectory was specified as
v_f(t) = v_0 + Δ v (ct - f_0)/L,
with an initial velocity v_0 = 0.99c and a velocity increment Δ v = 0.02c.
Over the first half of the focal range, the on-axis intensity falls back in a frame moving at the vacuum speed of light [Fig. <ref>(a)]. At the half-way point the velocity has increased to c, and thereafter the intensity peak advances in the speed of light frame. Interestingly, the radial group delay required for this trajectory [Figs. <ref>(b) and (c)] smooths the intensity modulations that were observed with both the axiparabola alone and with the DM-SLM constant-velocity trajectories [cf. Figs. <ref> and <ref>].
A pulse with an oscillating focal point could provide a novel method for quasi-phase-matching nonlinear optical processes, a wiggler for generating radiation from relativistic electrons, or an additional degree of freedom for accessing new parametric resonances in direct laser acceleration <cit.>. An example of such a focus is shown in Fig. <ref>. In this case, the design focal trajectory was specified as
v_f(t) = v_0 + Δ v sin(2π N(ct - f_0)/L),
with a nominal velocity v_0 = c, an oscillation magnitude Δ v = 0.002c, and N=3 periods. As shown in Fig. <ref>(a), the on-axis intensity peak oscillates between the expected velocities. While the pulse maintains its ultrashort duration, the maximum value of the intensity exhibits modulations, as it did in the case of the axiparabola alone. In general, the oscillation period of the velocity should be much greater than the Rayleigh range of the full-aperture focal spot, so that the intensity modulations do not obscure the velocity oscillations, i.e., N ≪ π R^2 L/(λ_0 f_0^2).
§ CONCLUSIONS AND OUTLOOK
This work has described a method for structuring ultrashort laser pulses with dynamic focal points. The moving focal point, or “flying focus,” can follow a near-arbitrary trajectory over distances much greater than a Rayleigh range, while maintaining an ultrashort duration. The method employs separate optics to extend the focal range and structure the radial group delay (RGD). This overcomes a disadvantage of previous flying focus techniques, which place a lower bound on the duration of the moving intensity peak. Two specific optical configurations were considered: an axiparabola, which uses geometric aberration to extend the focal range, combined with either an echelon or a deformable mirror-spatial light modulator (DM-SLM) pair to structure the RGD. While an echelon can apply the exact RGD required for a particular focal trajectory, it is a static optic that cannot be modified on a shot-to-shot basis. The DM-SLM pair, on the other hand, has constraints imposed by the resolution of the SLM, but allows for dynamic programmability and optimization of the focal trajectory. This capability could enable rapid exploration of exotic flying foci that benefit laser-based applications in plasma physics and nonlinear optics.
§ FOCAL TRAJECTORY PRODUCED BY AN EXTENDED FOCAL RANGE OPTIC
Consider a laser pulse with an initially flat phase front and flat pulse front
propagating in the negative 𝐳̂-direction. Assuming cylindrical symmetry, the rays composing the phase and pulse front can be identified by their radial distance r = (x^2+y^2)^1/2 from the propagation axis and their frequency ω. The rays travel parallel to the axis and are incident on a reflective optic defined by the sag function s_f(r). At the point of reflection, each ray acquires a transverse wavenumber k_r(ω,r)=(ω/c)sin[2θ(r)], where θ(r) = arccos[𝐳̂·𝐧̂(r)] defines the angle between the +𝐳̂-direction and the normal vector to the surface of the optic 𝐧̂(r) = [D(r)r̂ - ẑ]/√(1+D^2(r)) with D(r) ≡ ds_f/dr. After some algebra, one finds
k_r(ω,r)= - 2ω/cD(r)/1+D^2(r).
The perpendicular wavenumber is simply the radial derivative of the phase, such that
ϕ_f(ω,r) = -2ω/c∫D(r)/1+D^2(r) dr.
In the paraxial approximation, Eq. (<ref>) simplifies to ϕ_f(ω,r)=-2ω s_f(r)/c, which is the first term on the right-hand side of Eq. (<ref>).
The trajectory of the rays as they travel to the far field can be found by integrating the ray equations 𝐱̇' = c^2𝐤/ω, where the overdot denotes a total time derivative and the prime denotes the instantaneous location of the ray. The radial and longitudinal locations of the rays evolve according to
r'(t) = r + ck_r(ω,r)/ω[ct + s_f(r)]
z'(t) = s_f(r) + ck_z(ω,r)/ω[ct + s_f(r)] ,
where ct ≥ -s_f(r), t = 0 corresponds to the time at which the ray with r=0 reflects from the optic, and k_z(ω,r) = [ω^2/c^2 - k_r^2(ω,r)]^1/2. The focal time t_f(r) and location f(r) of each ray are defined as the values of t and z' where r' = 0. Solving for the value of t where Eq. (<ref>) equals zero and using this in Eq. (<ref>) yields
ct_f(r) = -s_f(r) + [(1+D^2(r))/(2D(r))] r
f(r) = s_f(r) + [(1-D^2(r))/(2D(r))] r,
where Eq. (<ref>) has been used. The focal time and location are both independent of frequency.
The focal location depends implicitly on the focal time through their shared dependence on r. This dependence results in a focal point that moves in time. The velocity of the focal point v_f(r) is given by
v_f(r)/c = (df/dr)(dct_f/dr)^-1 = (1+D^2(r))/(1-D^2(r)),
which is constrained by the focal geometry D(r) and is always superluminal (D^2 is positive definite).
When each ray is delayed by a time τ_D(r) before reflecting from the optic, the focal time t_f(r) → t_f(r) + τ_D(r), and Eq. (<ref>) can be rewritten as a differential equation for the delay needed to produce a specified focal trajectory v_f(t):
c dτ_D/dr = [c/v_f(t_f(r)) - (1-D^2(r))/(1+D^2(r))]df/dr,
where v_f(t_f(r)) = v_f(r). The paraxial limits of these equations are presented in the main text for simplicity.
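As a numerical cross-check of these relations, the sketch below evaluates D(r), the focal location, and the focal velocity for the axiparabola sag of the main text (parameters taken from the earlier example).

```python
import numpy as np

# Cross-check of the exact focusing relations above for an assumed axiparabola
# sag s_f(r): compute D = ds_f/dr, then f(r) and v_f/c = (1+D^2)/(1-D^2).
f0, R, L = 0.50, 0.05, 0.01

r = np.linspace(1e-4, R, 4001)          # avoid r = 0 where D vanishes
s = (R**2 / (4.0 * L)) * np.log(1.0 + (L / f0) * (r / R)**2)
D = np.gradient(s, r)

f_exact = s + (1.0 - D**2) / (2.0 * D) * r
v_over_c = (1.0 + D**2) / (1.0 - D**2)

print(f"focal range          : {f_exact[-1] - f_exact[0]:.4f} m (nominal L = {L} m)")
print(f"focal velocity, v_f/c: {v_over_c[0]:.6f} (axis) to {v_over_c[-1]:.6f} (edge)")
```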
§ SIMULATION DETAILS
The evolution of the flying focus pulse was simulated in two steps. The first step used the frequency-domain Fresnel integral to propagate the laser pulse from the flying focus optical configuration to the far field. The second step used the modified paraxial wave equation to propagate the pulse through the far field <cit.>. The results shown in the figures were obtained from this second step.
To solve for the evolution of the flying focus pulse, the transverse electric field was written as a carrier modulating an envelope: E(ξ,r,z) = 1/2e^-iω_0ξE(ξ,r,z) + c.c., where ξ = t - z/c is the moving frame coordinate. The carrier frequency ω_0 was chosen so that the central wavelength λ_0 = 2π c/ω_0 = 920 nm. The envelope E was initialized just before the optical configuration in the frequency domain with the profile
Ẽ_0(δω,r) = Ẽ_i Θ(R-r) exp(-τ^2δω^2/4),
where ∼ denotes a frequency domain field, δω = ω - ω_0, Θ is the Heaviside function, Ẽ_i is the initial amplitude, R = 5 cm, and τ = 23 fs, corresponding to a full width at half maximum duration and bandwidth of 27 fs and Δλ = 78 nm, respectively.
The phase imparted by the optical configuration, i.e., an axiparabola combined with either an echelon or a deformable mirror-spatial light modulator pair, was applied to the initial envelope. Just after the optical configuration at z=0, the envelope can be expressed as Ẽ_0(δω,r)e^iϕ(ω,r), where ϕ(ω,r) is the phase applied by the optical configuration [Eq. (<ref>)]. The envelope was propagated in vacuum from z=0 to the far-field location z=z_i using the frequency-domain Fresnel integral:
Ẽ(δω,r,z=z_i) =
ω/ic z_i∫ J_0(ω r r'/cz_i)
exp[iω(r^2+r'^2)/2cz_i+iϕ(ω,r')]Ẽ_0(δω,r')r' dr',
where J_0 is the zeroth-order Bessel function of the first kind. The electric field from the Fresnel integral Ẽ(ω,r,z=z_i) provided the initial condition for the modified paraxial wave equation <cit.>:
[2(iω_0-∂_ξ)∂_z + c∇_⊥^2]
E(r,z,ξ) = 0.
The mixed space-time derivative in Eq. (<ref>) ensures that effects such as radial group delay and angular dispersion are modelled correctly—a requirement for accurately modeling an ultrafast flying focus. Note that Eqs. (<ref>) and (<ref>) are fully consistent with one another: Eq. (<ref>) is the integral solution to Eq. (<ref>). The use of the Fresnel integral decouples the radial grids in the near field and far field, reducing computational expense compared to using Eq. (<ref>) over the entire domain, especially when considering smaller f/#'s <cit.>.
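For illustration, a single-frequency version of this first propagation step is sketched below using direct radial quadrature of the Fresnel integral; the grid sizes and the simple parabolic test phase are chosen only to keep the example small and are not meant to reproduce the production simulations.

```python
import numpy as np
from scipy.special import j0

c = 299792458.0

def fresnel_step(E0, r_near, r_far, z, omega, phi):
    """Single-frequency Fresnel step: E(r) = (omega/(i c z)) exp(i omega r^2/(2 c z))
    * int J0(omega r r'/(c z)) exp[i omega r'^2/(2 c z) + i phi(r')] E0(r') r' dr'."""
    w = np.gradient(r_near) * r_near                    # quadrature weights r' dr'
    kernel = j0(np.outer(r_far, r_near) * omega / (c * z))
    quad = np.exp(1j * omega * r_near**2 / (2 * c * z) + 1j * phi) * E0 * w
    prefac = (omega / (1j * c * z)) * np.exp(1j * omega * r_far**2 / (2 * c * z))
    return prefac * (kernel @ quad)

# Small test: a flattop focused by an ideal parabolic phase -omega r^2/(2 c f0).
f0, R, lam0 = 0.5, 0.05, 920e-9
omega = 2.0 * np.pi * c / lam0
r_near = np.linspace(0.0, R, 4000)
r_far = np.linspace(0.0, 20e-6, 200)
E_far = fresnel_step(np.ones_like(r_near), r_near, r_far, f0, omega,
                     -omega * r_near**2 / (2 * c * f0))
print(f"on-axis intensity gain at focus: {abs(E_far[0])**2:.3e}")
```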
The simulation parameters were motivated by the MTW-OPAL laser system at the Laboratory for Laser Energetics <cit.>, where future ultrafast flying focus experiments are being planned. The longitudinal step size Δ z = 2.83 μm, temporal resolution Δξ = 0.74 fs, and radial resolution Δ r = 0.60 μm, were chosen to resolve the Rayleigh range, transform-limited pulse duration, and spot size, respectively.
§ ON-AXIS INTENSITY MODULATION FROM AN AXIPARABOLA
The Fresnel diffraction integral can be used to derive an approximate expression for the far-field, on-axis intensity profile of a laser pulse focused by an axiparabola. The expression reveals that the on-axis intensity modulations result from the spherical aberration imparted by the axiparabola and provides a condition for mitigating these modulations. The derivation begins by substituting Eq. (<ref>) into Eq. (<ref>) and approximating the axiparabola phase as
ϕ(ω,r') = -[ω r'^2/(2cf_0)][1-(L/2f_0)(r'^2/R^2)],
which includes the parabolic and spherical contributions and is accurate to second order in L/f_0. Evaluating Eq. (<ref>) on-axis, i.e., at r=0, provides
Ẽ(δω,0,z) =
ω/ic z∫^R_0exp[iω r'^2/2c(1/z - 1/f_0)+iω Lr'^4/4cf_0^2R^2]Ẽ_0(δω)r' dr',
where Ẽ_0(δω) = Ẽ_i exp(-τ^2δω^2/4). Upon integrating, one finds
|Ẽ(δω,0,z)|^2/|Ẽ_0(δω)|^2≈πω R^2/4cL| erfi[(iω R^2/4cLf_0^2)^1/2(f_0-z)] - erfi[(iω R^2/4cLf_0^2)^1/2(f_0+L-z)] |^2,
where erfi is the imaginary error function and z≈ f_0 has been assumed. Equation (<ref>) oscillates with a period that varies throughout the focal region. The scale length apparent in Eq. (<ref>) provides a rough estimate for the modulation period: L_M ∼ (4Lf_0^2λ_0/R^2)^1/2. The modulations can be mitigated when L ≫ L_M or L ≫ 4π Z_R, where Z_R = λ_0f_0^2/π R^2 is the Rayleigh range of the full-aperture focal spot.
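The expression above is straightforward to evaluate numerically; the sketch below does so for the axiparabola parameters used throughout (f_0 = 50 cm, R = 5 cm, L = 1 cm) and reports the estimated modulation scale.

```python
import numpy as np
from scipy.special import erfi

c = 299792458.0
f0, R, L, lam0 = 0.50, 0.05, 0.01, 920e-9
omega = 2.0 * np.pi * c / lam0

z = np.linspace(f0 + 0.02 * L, f0 + L, 2000)
a = np.sqrt(1j * omega * R**2 / (4.0 * c * L * f0**2))        # complex argument scale
ratio = (np.pi * omega * R**2 / (4.0 * c * L)) * np.abs(
    erfi(a * (f0 - z)) - erfi(a * (f0 + L - z)))**2

L_M = np.sqrt(4.0 * L * f0**2 * lam0 / R**2)
print(f"modulation scale L_M ~ {L_M * 1e3:.1f} mm over a focal range of {L * 1e3:.0f} mm")
print(f"on-axis intensity modulation, max/mean: {ratio.max() / ratio.mean():.2f}")
```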
Funding
U.S. Department of Energy Office of Fusion Energy Award Number DE-SC00215057, U.S. Department of Energy National Nuclear Security Administration Award Number DE-NA0003856.
Acknowledgments
The authors would like to thank D. Ramsey, J. Bromage, C. Dorrer, S.-W. Bahk, C. Jeon, B. Webb, and I. Begishev for productive discussions.
This material is based upon work supported by the Department of Energy Office of Fusion Energy under Award Number DE-SC00215057 and by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856.
This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S.
Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof.
Disclosures
The authors declare no conflicts of interest
Data Availability Statement
Data underlying the results presented in this paper are not publicly available at this time but may
be obtained from the authors upon reasonable request.
|
http://arxiv.org/abs/2307.07651v1 | 20230714225741 | An Overview and Comparison of Spectral Bundle Methods for Primal and Dual Semidefinite Programs | [
"Feng-Yi Liao",
"Lijun Ding",
"Yang Zheng"
] | math.OC | [
"math.OC",
"cs.SY",
"eess.SY"
] |
Feng-Yi Liao^1, Lijun Ding^2, Yang Zheng^1
^1 Department of Electrical and Computer Engineering, University of California, San Diego ([email protected], [email protected])
^2 Wisconsin Institute for Discovery, University of Wisconsin–Madison, Madison ([email protected])
An Overview and Comparison of Spectral Bundle Methods for Primal and Dual Semidefinite Programs. This work is supported by NSF ECCS-2154650. Corresponding author: Yang Zheng ([email protected]).
The spectral bundle method developed by Helmberg and Rendl is well-established for solving large-scale semidefinite programs (SDPs) in the dual form, especially when the SDPs admit low-rank primal solutions. Under mild regularity conditions, a recent result by Ding and Grimmer has established fast linear convergence rates when the bundle method captures the rank of primal solutions.
In this paper, we present an overview and comparison of spectral bundle methods for solving both primal and dual SDPs. In particular, we introduce a new family of spectral bundle methods for solving SDPs in the primal form. The algorithm developments are parallel to those by Helmberg and Rendl, mirroring the elegant duality between primal and dual SDPs. The new family of spectral bundle methods also achieves linear convergence rates for primal feasibility, dual feasibility, and duality gap when the algorithm captures the rank of the dual solutions.
Therefore, the original spectral bundle method by Helmberg and Rendl is well-suited for SDPs with low-rank primal solutions, while on the other hand, our new spectral bundle method works well for SDPs with low-rank dual solutions. These theoretical findings are supported by a range of large-scale numerical experiments. Finally, we demonstrate that our new spectral bundle method achieves state-of-the-art efficiency and scalability for solving polynomial optimization compared to a set of baseline solvers , , , and .
§ INTRODUCTION
Semidefinite programs (SDPs) are an important class of convex optimization problems that minimize a linear function in the space of positive semidefinite (PSD) matrices subject to linear equality constraints <cit.>. Mathematically, the standard primal and dual SDPs are in the form of
min_X ⟨ C, X⟩
subject to ⟨ A_i, X⟩ = b_i, i = 1, …, m,
X ∈𝕊^n_+,
P
and
max_y, Z b^⊤ y
subject to Z + ∑_i=1^m A_i y_i = C,
Z ∈𝕊^n_+,
D
where b ∈ℝ^m, C, A_1, …, A_m ∈𝕊^n are the problem data, 𝕊^n_+ denotes the set of n × n PSD matrices (we also write X ≽ 0 to denote X ∈𝕊^n_+ when the dimension is clear from the context or not important), and ⟨·,·⟩ denotes the standard trace inner product on the space of symmetric matrices.
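As a concrete illustration of this primal-dual pair, the sketch below builds and solves a small random instance of <ref> with the CVXPY modeling package; the data are random, the instance is constructed to be strictly feasible, and the code only illustrates the formulation rather than any algorithm developed in this paper.

```python
import numpy as np
import cvxpy as cp

# Small random, strictly feasible instance of the primal SDP (P); the dual (D) is
# strictly feasible as well because C is chosen positive definite. Illustrative only.
rng = np.random.default_rng(1)
n, m = 8, 5
sym = lambda M: 0.5 * (M + M.T)
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
W = rng.standard_normal((n, n))
C = W @ W.T + np.eye(n)                       # C > 0, so (P) is bounded below
b = np.array([np.trace(Ai) for Ai in A])      # makes X = I strictly feasible

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(A[i] @ X) == b[i] for i in range(m)])
prob.solve()

eigvals = np.linalg.eigvalsh(X.value)
print(f"optimal value p* = {prob.value:.4f}")
print(f"numerical rank of X* (eigenvalues > 1e-6): {int(np.sum(eigvals > 1e-6))}")
```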
SDPs offer a powerful mathematical framework that has gained significant attention for decades <cit.> and still receives strong research interests today <cit.>. Indeed, SDP provides a versatile and robust modeling and optimization approach for solving a wide range of problems in different fields.
Undoubtedly, SDPs have become powerful tools in control theory <cit.>, combinatorial optimization <cit.>, polynomial optimization <cit.>, machine learning <cit.>, and beyond <cit.>.
In theory, one can solve any SDP instance up to arbitrary precision in polynomial time using second-order interior point methods (IPMs)<cit.>.
At each iteration of second-order IPMs, one usually needs to solve a linear system with a coefficient matrix (i.e., the Schur complement matrix) being generally dense and ill-conditioned. Consequently,
IPMs often suffer from both computational and memory issues when solving SDPs from large-scale practical applications.
Improving the scalability of SDPs has gained significant attention in recent years; see <cit.> for surveys.
In particular, first-order methods (FOMs) are at the forefront of developing scalable algorithms for solving large-scale SDPs thanks to their low complexity per iteration. For instance, the alternating direction method of multipliers (ADMM) is used to solve large-scale SDPs in the dual form <ref> <cit.>. The ADMM framework has been extended to solve the homogenous self-dual embedding of SDPs <ref> in <cit.>.
In <cit.>, ADMM has been applied to solving SDPs with a quadratic cost function.
It is known that augmented Lagrangian methods (ALMs) are also suitable for solving large-scale optimization problems. Some efficient ALM-based algorithms have recently been developed to solve large-scale SDPs. For example, a Newton-CG augmented Lagrangian method is proposed to solve SDPs with a large number of affine constraints. An enhanced version is developed in <cit.> to further tackle degenerate SDPs by employing a semi-smooth Newton-CG scheme coupled with a warm start strategy <cit.>. The algorithms <cit.> have been implemented in a MATLAB package, , which has shown promising numerical performance.
To tackle the storage issue,
the sketching idea, approximating a large matrix X without explicitly forming it, is exploited in the ALM framework together with a conditional gradient method for SDPs
<cit.>. In <cit.>, an optimal storage scheme is developed to solve SDPs by using a first-order method to solve the dual SDP <ref> and recovering the primal solution in <ref>. Finally, a class of efficient first-order spectral bundle methods has been developed to solve an equivalent eigenvalue problem when primal SDPs enjoy a constant trace property <cit.>.
Another important idea in designing efficient algorithms is to exploit the underlying sparsity and structures in SDPs <cit.>.
When SDPs have an aggregate sparsity pattern, chordal decomposition <cit.> has been exploited to reduce the dimension of PSD constraints in the design of both IPMs <cit.> and ADMM <cit.>. In <cit.>, partially separable properties in conic programs (including chordal decomposition) have been investigated to design efficient first-order algorithms. On the other hand, when the SDPs have low-rank solutions, low-rank factorization decomposing a big PSD matrix X ∈𝕊^n_+ into VV^⊤, where V ∈ℝ^n × r with r ≪ n, has been utilized to reduce the search space in <cit.>. This low-rank factorization leads to a nonconvex optimization problem, and there are significant efforts in advancing theoretical understanding of the factorization approach <cit.>. Similar to the low-rank factorization, one can approximate the PSD constraint X ∈𝕊^n_+ by X = V S V^⊤ where S ∈𝕊^r_+ and the factor V ∈ℝ^n × r is fixed. This is one of the main ideas in the design of the spectral bundle method <cit.> and the spectral Frank-Wolfe algorithm <cit.>. Another similar approximation strategy is the basis pursuit techniques in <cit.>.
In this paper, we focus on the development of the low-rank approximation X = V S V^⊤ in spectral bundle methods <cit.>. The spectral bundle method, originally developed by Helmberg and Rendl in <cit.>, is well-established for solving large-scale SDPs, thanks to its low per-iteration complexity and fast practical convergence. Further developments of spectral bundle methods appear in <cit.>. Very recently, Ding and Grimmer established sublinear convergence rates of the spectral bundle method in terms of primal feasibility, dual feasibility, and duality gap, and further proved a linear convergence rate when the algorithm captures a rank condition <cit.>. To the best of our knowledge, all existing spectral bundle methods <cit.> focus on solving dual SDPs in the form of <ref>. As shown in <cit.>, these spectral bundle methods are more desirable when the primal SDP <ref> admits low-rank solutions, in which case it is easier to enforce the rank condition that guarantees linear convergence.
the existing spectral bundle methods may offer less benefit in terms of convergence and efficiency. Indeed, SDPs arising from moment/sum-of-squares (SOS) optimization problems and their applications <cit.> are likely to admit low-rank solutions in the dual SDP (<ref>) (this low-rank property is consistent with the flat extension theory on the moment side <cit.> when it is formulated as a dual SDP).
In this work, we present an overview and comparison of spectral bundle methods for solving both primal and dual SDPs. In particular, we introduce a new family of spectral bundle methods for solving SDPs in the primal form (<ref>). Our algorithm developments are parallel to those by Helmberg and Rendl <cit.> which focuses on solving dual SDPs (<ref>), mirroring the elegant duality between primal and dual SDPs. In particular, our contributions are as follows.
* We propose a new family of spectral bundle methods, called , for solving primal SDPs <ref>, while all existing methods <cit.> focus on dual SDPs (<ref>). We first translate the primal SDP (<ref>) into an eigenvalue optimization problem using the exact penalty method (<Ref>). Then, each iteration of solves a small subproblem formulated from past eigenvectors and current eigenvectors of the primal variable X evaluated at the past and current iterates respectively (<Ref>).
* We show that any configuration of admits an 𝒪(1/ϵ^3) convergence rate in primal feasibility, dual feasibility, and duality gap. Similar to <cit.>, has a faster convergence rate, 𝒪(1/ϵ), when the SDPs <ref> satisfy strict complementarity (<Ref>). In <Ref>, we further show linear convergence of if (1) strict complementarity holds, and (2) the number of eigenvectors is larger than the rank of dual optimal solutions.
Our proofs largely follow the strategies in <cit.>, <cit.> and <cit.>, and we complete some detailed calculations and handle the constrained case for the primal SDPs. As a byproduct, we revisit the results for generic bundle methods in <cit.> for constrained convex optimization (see <Ref>).
* We present a detailed comparison between the primal and dual formulations of the spectral bundle methods by showing the symmetry of the parameters and convergence behaviors from both sides. It becomes clear that the existing dual formulation in <cit.> is advantageous when the primal SDP <ref> admits low-rank solutions. On the other hand, our primal formulation is more suitable when the dual SDP <ref> admits low-rank solutions. These theoretical findings are supported by a range of large-scale numerical experiments.
* Finally, we present an open-source implementation of the spectral bundle algorithms for both <ref> and <ref>, while the existing implementations of spectral bundle algorithms for <ref> are not open-source or not easily accessible. We demonstrate that our new spectral bundle method achieves state-of-the-art efficiency and scalability for solving polynomial optimization compared to a set of baseline solvers <cit.>, <cit.>, <cit.>, and <cit.>.
The rest of the paper is structured as follows. <ref> covers some preliminaries on SDPs and nonsmooth optimization. <Ref> presents the exact penalty formulations for primal and dual SDPs. This is followed by our new family of spectral bundle methods and the convergence results in <Ref>. <Ref> reviews the classical spectral bundle methods for dual SDPs and clarifies the connections and differences between the primal and dual variants. Our open-source implementation and numerical experiments are presented in <Ref>. <Ref> concludes the paper. Some detailed calculations and technical proofs are postponed to the appendix.
Notation. We use ⟨·,·⟩ to denote the dot product on ℝ^n and the trace inner product on 𝕊^n, respectively. For a symmetric matrix A ∈𝕊^n, we denote its eigenvalues in decreasing order as λ_max(A) = λ_1(A) ≥⋯≥λ_n(A). Given a vector in ℝ^n, we use ‖·‖ to denote its two-norm. For a matrix M ∈ℝ^m × n, its Frobenius norm, operator two-norm, and nuclear norm are denoted by ‖·‖, ‖·‖_op, and ‖·‖_*, respectively. In the primal and dual SDPs <ref>, for notational simplicity, we also denote a linear map 𝒜: 𝕊^n→ℝ^m as
𝒜(X) := [ ⟨ A_1, X ⟩, …, ⟨ A_m, X ⟩ ]^⊤, and its adjoint map, a linear mapping from ℝ^m to
𝕊^n, as 𝒜^*(y) := ∑_i=1^m A_i y_i.
The optimal cost values of the SDPs <ref> are denoted as p^⋆ and d^⋆, respectively.
Finally, given a closed set 𝒞⊂ℝ^n and a point Y ∈ℝ^n, the distance of Y to 𝒞 is defined as dist(Y,𝒞) = inf_X ∈𝒞 ‖X-Y‖.
§ PRELIMINARIES
In this section, we first introduce standard assumptions and an important notion of strict complementarity for <ref>. We then briefly overview the exact penalization for constrained nonsmooth convex optimization and the generic bundle method.
§.§ Strict complementarity of SDPs
Throughout this paper, we make the following standard assumptions for well-behaved SDPs <ref> and <ref>.
The matrices A_i, i = 1, …, m in <ref> and <ref> are linearly independent.
The SDPs <ref> and <ref> satisfy Slater's constraint qualification, i.e., they are both strictly feasible.
<Ref> allows us to uniquely determine y from a given dual feasible Z, i.e., the point y satisfying Z+𝒜^* (y) = C is unique for a given feasible Z. Under <Ref>, strong duality holds for <ref> and <ref> (i.e., p^⋆ = d^⋆), and both <ref> and <ref> are solvable (i.e., there exist at least a primal minimizer X^⋆ and a dual maximizer (y^⋆, Z^⋆) that achieve the optimal cost) <cit.>.
We denote the set of primal optimal solutions to <ref> as 𝒫^⋆ and the set of dual optimal solutions to <ref> as 𝒟^⋆, i.e.,
𝒫^⋆ = {X ∈𝕊^n| p^⋆ = ⟨ C, X⟩, 𝒜(X) = b, X ∈𝕊^n_+},
𝒟^⋆ = {(y,Z) ∈ℝ^m ×𝕊^n| d^⋆ = b^⊤ y, Z+𝒜^* (y) = C, Z ∈𝕊^n_+}.
<Ref> ensures that 𝒫^⋆≠∅ and 𝒟^⋆≠∅. In addition, if <Ref> holds, then the solution sets 𝒫^⋆ and 𝒟^⋆ are nonempty and compact <cit.>.
Under <Ref>, the mapping 𝒜 is surjective and the optimal solution sets 𝒫^⋆ and 𝒟^⋆ are nonempty and compact.
It is clear that 𝒫^⋆ and 𝒟^⋆ are closed. Given any (y^⋆, Z^⋆) ∈𝒟^⋆ and a strictly feasible primal point X̂, we have a finite duality gap ⟨ C ,X̂⟩ - b^⊤ y^⋆ = ⟨X̂ , Z^⋆⟩≥ 0. If ‖Z^⋆‖→∞, then ⟨X̂ , Z^⋆⟩→∞ since Z^⋆∈𝕊^n_+ and X̂ is positive definite.
This is impossible due to the finite duality gap, and thus, any optimal Z^⋆ is bounded. <Ref> ensures that 𝒜 is a surjective mapping, which means 𝒜^* is injective. Thus, any optimal y^⋆ is bounded, and 𝒟^⋆ is bounded. Similarly, the existence of a strictly feasible dual point (ŷ, Ẑ) ensures the compactness of 𝒫^⋆.
The following result is a version of the KKT optimality condition for the SDPs <ref> and <ref>.
Given a pair of primal and dual feasible solutions X^⋆ and (y^⋆, Z^⋆), they are optimal if and only if
there exists an orthonormal matrix Q ∈ℝ^n × n with Q^⊤ Q = I, such that
X^⋆ = Q ·diag(λ_1, …, λ_n) · Q^⊤, Z^⋆ = Q ·diag (w_1, …, w_n) · Q^⊤
and λ_i w_i = 0, i = 1,…,n.
Given a pair of optimal solutions X^⋆ and (y^⋆, Z^⋆), the complementary slackness condition
(<ref>)
is equivalent to X^⋆ Z^⋆ = 0 (X^⋆ and Z^⋆ commute, so they share a common set of eigenvectors given by the columns of Q). This implies that
rank(X^⋆) + rank(Z^⋆) ≤ n, range(X^⋆) ⊂null(Z^⋆), and range(Z^⋆) ⊂null(X^⋆).
We now introduce the notion of strict complementarity for a pair of
optimal solutions.
A pair of primal and dual optimal solutions X^⋆∈𝒫^⋆ and (y^⋆, Z^⋆) ∈𝒟^⋆ satisfies strict complementarity if rank(X^⋆) + rank(Z^⋆) = n holds, i.e., for each i, exactly one of the two conditions λ_i = 0 and w_i = 0 is true in (<ref>).
If such a pair X^⋆ and (y^⋆, Z^⋆) exists, we also say that the SDPs <ref> and <ref> satisfy strict complementarity. We note that strict complementarity is not restrictive. It is a generic property of SDPs <cit.>, and many structured SDPs from practical applications also satisfy strict complementarity
<cit.>.
§.§ Exact penalization for constrained convex optimization
Consider a constrained convex optimization problem of the form
min f(x)
subject to g_i(x) ≤ 0, i = 1, …, m,
x ∈𝒳_0,
where f: ℝ^n →ℝ and g_i: ℝ^n →ℝ, i = 1, …, m are (possibly nondifferentiable) convex functions, and 𝒳_0 ⊆ℝ^n is a closed convex set (which are defined by some simple constraints). The idea of exact penalty methods is to reformulate the constrained optimization problem <ref> by a problem with simple constraints. In particular, upon defining an exact penalty function
P(x) = ∑_i=1^m max{0, g_i(x)},
we consider a penalized problem
min Φ_ρ (x) := f(x) + ρ P(x)
subject to x ∈𝒳_0,
where ρ > 0 is a penalty parameter. When choosing ρ large enough, problems (<ref>) and (<ref>) are equivalent to each other in the sense that they have the same optimal value and solution set.
Suppose that problem (<ref>) satisfies Slater's constraint qualification. There exists a constant ρ_0 ≥ 0 such that for each ρ > ρ_0, a point x̂ is an optimal solution of (<ref>) if and only if it is an optimal solution of (<ref>). In particular, we can choose ρ_0 = sup_λ∈Λ ‖λ‖_∞, where Λ⊂ℝ^m is the set of optimal Lagrange multipliers associated with g_i(x) ≤ 0, i = 1, …, m.
Therefore, we can transform some nonsmooth constraints that are hard to handle in (<ref>) into the nonsmooth cost function of (<ref>). Then, we can apply cutting plane or bundle methods to solve the nonsmooth optimization (<ref>). However, it should be noted that the resulting problem (<ref>) may be difficult to solve if the penalty parameter ρ is too large. As we will discuss in <Ref>, in some SDPs that arise from practical applications, the penalty parameter ρ is known a priori <cit.>.
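As a simple illustration (ours, not from the original text), consider min_x ∈ℝ x subject to 1 - x ≤ 0. The unique optimal Lagrange multiplier is λ = 1, so <Ref> applies with ρ_0 = 1: for any ρ > 1, the penalized function Φ_ρ(x) = x + ρ max{0, 1-x} has slope 1 - ρ < 0 for x < 1 and slope 1 for x > 1, so its unique minimizer is x̂ = 1, exactly the solution of the constrained problem. For ρ < 1, the penalized problem is unbounded below, showing that the threshold ρ_0 cannot be removed.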
§.§ The cutting plane and bundle methods
In this subsection, we briefly overview the generic bundle method; see <cit.> for details. Consider a generic nonsmooth constrained convex optimization
f^⋆ = min_x ∈𝒳_0 f(x),
where f: ℝ^n →ℝ is convex but not necessarily differentiable and 𝒳_0 ⊆ℝ^n is a closed convex set. It is clear that problem <ref> includes <ref> as a special case.
The simplest method for solving <ref> is arguably the subgradient method, which constructs a sequence of points x_t iteratively by updating
x_t+1 = Π_𝒳_0 ( x_t - τ_t g_t), t = 1, 2, …
where g_t ∈∂ f(x_t) is a subgradient of f(·) at the current point x_t, and τ_t ∈ℝ is a step size, and Π_𝒳_0 (x) denotes the orthogonal projection of the point x∈ℝ^n onto 𝒳_0. Recall that for a convex function f: ℝ^n →ℝ, a vector g ∈ℝ^n is called a subgradient of f at x if
f(y) ≥ f(x) + ⟨ g, y - x ⟩, ∀ y ∈ℝ^n.
The set of all subgradients of f at x is called the subdifferential, denoted by ∂ f(x).
With mild assumptions and an appropriate choice of decreasing step sizes, the subgradient method is guaranteed to generate a converging sequence {x_t} to an optimal solution of problem (<ref>) (see <cit.>). Despite the simplicity of subgradient methods, it is generally challenging to develop reliable and efficient step size rules for practical optimization instances.
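For concreteness, the update <ref> can be prototyped in a few lines. The sketch below is our own illustration (not part of the original text): it assumes 𝒳_0 is the nonnegative orthant, so the projection Π_𝒳_0 is a componentwise clipping, and uses the standard diminishing step size τ_t = 1/√(t).

```python
import numpy as np

def projected_subgradient(f_and_subgrad, x0, num_iters=500):
    # Minimal sketch: minimize a convex f over the nonnegative orthant
    # (an illustrative choice of X_0) with step size tau_t = 1/sqrt(t).
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)   # project the initial point onto X_0
    best_x, best_f = x.copy(), np.inf
    for t in range(1, num_iters + 1):
        f_val, g = f_and_subgrad(x)                    # g is any subgradient of f at x
        if f_val < best_f:
            best_x, best_f = x.copy(), f_val
        x = np.maximum(x - g / np.sqrt(t), 0.0)        # subgradient step + projection
    return best_x, best_f

# Toy example: f(x) = ||x - c||_1; the constrained minimizer clips c at zero.
c = np.array([1.0, -2.0, 0.5])
f_sg = lambda x: (np.abs(x - c).sum(), np.sign(x - c))
print(projected_subgradient(f_sg, np.zeros(3)))
```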
§.§.§ The cutting plane method.
Another useful way to utilize subgradients is the idea of the cutting plane method, which solves a lower approximation of the function f(x) at every iteration. Here, we assume that 𝒳_0 is compact in <ref>; otherwise, we consider minimizing f(x) over 𝒳_0 ∩𝒞 where 𝒞 is a compact set containing an optimal solution.
The basic idea of the cutting plane method is to use the subgradient inequality to construct lower approximations of f(·). In particular, at iteration t, having points x_1, x_2, …, x_t, function values f(x_1), f(x_2), …, f(x_t), and the corresponding subgradients g_1, g_2, …, g_t, we construct a lower approximation using a piece-wise affine function
f̂_t (x) = max_i=1,…,t f(x_i) + ⟨ g_i,x-x_i ⟩.
By definition of subgradients, it is clear that f(x) ≥f̂_t (x), ∀ x ∈ℝ^n. Starting from a triple {x_1 ∈𝒳_0, f(x_1),g_1 ∈∂ f(x_1)}, the cutting plane method solves the following master problem to generate the next point,
x_t+1 ∈ argmin_x ∈𝒳_0 f̂_t (x), t = 1, 2, ….
When 𝒳_0 is a convex set defined by simple constraints (e.g., a polyhedron), the above problem becomes a linear program (LP) for which very efficient algorithms exist.
The sequence generated by the cutting plane method is guaranteed to satisfy lim_t →∞ f(x_t) = f^⋆ <cit.>. However, the theoretical convergence rate is rather slow; it generally takes 𝒪(1/ϵ^n) iterations to reach f(x_t) - f^⋆≤ϵ <cit.>.
The practical convergence of the cutting plane method may be faster.
§.§.§ The bundle method.
The bundle method improves the convergence rate and numerical behavior of the cutting plane method by incorporating a regularization strategy. Unlike the cutting plane method that only considers the lower approximation function, the bundle method updates its iterates by solving a regularized master problem (i.e., a proximal step to the lower approximation model f̂_t(x)):
x^⋆_t+1 ∈ argmin_x ∈𝒳_0 f̂_t (x) + α/2 ‖x - ω_t‖_2^2,
where ω_t ∈ℝ^n is the current reference point and α > 0 penalizes the deviation from ω_t. The bundle method only updates the reference point ω_t when the decrease of the objective value f(·) is at least a fraction of the decrease that the approximate model f̂_t(·) predicts. In particular, letting 0< β <1, if
β(f(ω_t) - f̂_t(x^⋆_t+1)) ≤ f(ω_t) - f(x^⋆_t+1)
then we set ω_t+1 = x^⋆_t+1 (descent step); otherwise, we set ω_t+1 = ω_t (null step). In any case, the subgradient g_t+1∈∂ f(x^⋆_t+1) at the new point x^⋆_t+1 is used to update the lower approximation f̂_t+1(x), e.g., using <ref>.
The seemingly subtle modifications above have a rather surprising consequence: the cost value f(ω_t) generated by the bundle method converges to f^⋆ for any constant α >0 with a rate of (1/ϵ^3) when the objective function is Lipschitz continuous <cit.>. Faster convergence rates appear under different assumptions of f(x); see <cit.> for a detailed comparison. Note that subgradient methods rely on very carefully controlled decreasing stepsizes which might be inefficient and unreliable in practice, and the cutting plane method has a slow convergence rate theoretically. On the contrary, the bundle method appears more suitable to solve the nonsmooth problem <ref>.
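The descent test <ref> is the only ingredient that distinguishes the bundle method from the cutting plane method, and it is easy to prototype. The sketch below is our own illustration (not the paper's implementation): it assumes 𝒳_0 = ℝ^n, keeps every past cut instead of the two-cut aggregation discussed next, and solves the regularized master problem with a generic convex modeling tool.

```python
import cvxpy as cp
import numpy as np

def proximal_bundle(f_and_subgrad, omega0, alpha=1.0, beta=0.1, num_iters=40):
    omega = np.asarray(omega0, dtype=float)
    n = omega.size
    f_om, g_om = f_and_subgrad(omega)
    cuts = [(f_om, g_om, omega.copy())]        # affine minorants f(x_i) + <g_i, x - x_i>
    for _ in range(num_iters):
        x = cp.Variable(n)
        model = cp.max(cp.hstack(
            [fi + cp.sum(cp.multiply(gi, x - xi)) for fi, gi, xi in cuts]))
        cp.Problem(cp.Minimize(model + alpha / 2 * cp.sum_squares(x - omega))).solve()
        cand = x.value
        f_cand, g_cand = f_and_subgrad(cand)
        cuts.append((f_cand, g_cand, cand.copy()))     # always add the cut at the candidate
        if beta * (f_om - model.value) <= f_om - f_cand:
            omega, f_om = cand, f_cand                 # descent step
        # otherwise: null step, the reference point omega is unchanged
    return omega

# Toy nonsmooth example: f(x) = |x_1| + 2|x_2|, minimized at the origin.
f_sg = lambda x: (abs(x[0]) + 2 * abs(x[1]),
                  np.array([np.sign(x[0]), 2 * np.sign(x[1])]))
print(proximal_bundle(f_sg, np.array([3.0, -2.0])))
```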
§.§.§ The bundle method with cut-aggregation.
The lower approximation model f̂_t (x) can be constructed using all past subgradients as in <ref>, but this leads to a growing number of cuts or constraints when solving the regularized master problem <ref>. Another useful cut-aggregation idea <cit.> allows us to simplify the collection of t lower bounds used by <ref> into just two linear lower bounds. In particular, the convergence of the bundle method is guaranteed as long as the lower approximation model f̂_t+1 satisfies the following three properties <cit.>[The analysis in <cit.> focuses on unconstrained optimization where 𝒳_0 = ℝ^n in (<ref>). We have extended the analysis <cit.> for constrained optimization where 𝒳_0 is a closed convex set; see <Ref>.]:
* Minorant: the function f̂_t+1 is a lower bound on f, i.e.
f̂_t+1(x) ≤ f(x), ∀ x ∈𝒳_0 .
* Subgradient lower bound: f̂_t+1 is lower bounded by the linearization given by some subgradient g_t+1∈∂ f(x^⋆_t+1) computed after <ref>, i.e.
f̂_t+1 (x) ≥ f(x^⋆_t+1) + ⟨ g_t+1, x-x^⋆_t+1⟩, ∀ x ∈𝒳_0 .
* Model subgradient lower bound: f̂_t+1 is lower bounded by the linearization of the model f̂_t given by the subgradient s_t+1:= α(ω_t- x^⋆_t+1) ∈∂f̂_t(x^⋆_t+1) + 𝒩_𝒳_0(x^⋆_t+1),
i.e.
f̂_t+1 (x) ≥f̂_t(x^⋆_t+1) + ⟨ s_t+1, x-x^⋆_t+1⟩, ∀ x ∈𝒳_0 ,
where 𝒩_𝒳_0(x^⋆_t+1) denotes the normal cone of 𝒳_0 at the point x^⋆_t+1, i.e., 𝒩_𝒳_0(x^⋆_t+1) = { v ∈ℝ^n |⟨ v, x - x^⋆_t+1⟩≤ 0, ∀ x ∈𝒳_0}. Note that s_t+1 certifies the optimality of x^⋆_t+1 for problem <ref>.
The lower bound <ref> serves as an aggregation of all previous subgradient lower bounds. Instead of (<ref>), we can construct the lower approximation f̂_t+1(x) as the maximum of two lower bounds as
f̂_t+1 (x) = max{f(x^⋆_t+1) + ⟨ g_t+1, x-x^⋆_t+1⟩, f̂_t(x^⋆_t+1) + ⟨ s_t+1, x-x^⋆_t+1⟩}.
The overall process of the general bundle method is listed in <Ref>. The convergence rates for <Ref> have been recently revisited in <cit.>. The big-O notation below suppresses some universal constants.
Consider a convex and M-Lipschitz function f(x) in <ref>. Let f^⋆ = inf_x∈𝒳_0 f(x)
and 𝒞 ={x ∈𝒳_0 | f(x) = f^⋆}. If 𝒞 is nonempty, the number of steps for <Ref> before reaching an ϵ > 0 optimality, i.e. f(x)- f^⋆≤ϵ, is bounded by
t ≤𝒪( (12 α M^2 D^4)/(β (1-β)^2 ϵ^3) ),
where D = sup_k dist(x_k,𝒞) < ∞.
If f(x) further satisfies the quadratic growth condition
f(x) - f^⋆≥μ· dist^2(x,𝒞), ∀ x ∈𝒳_0,
where μ>0 is a positive constant, then the number of steps for <Ref> before reaching an ϵ optimality is bounded by
t ≤𝒪( 16M^2/(β (1-β)^2 min{α,μ}ϵ) ).
When solving SDPs <ref>-<ref>, a specialized version called spectral bundle method constructs a special lower approximation model satisfying <ref>-<ref>.
This idea was first proposed in <cit.> for the dual SDP (<ref>) with a constant trace property. In this paper, we will show spectral bundle methods can be developed for both general primal and dual SDPs (<ref>) and (<ref>).
We will present the details in <Ref> and <Ref>.
§ PENALIZED NONSMOOTH FORMULATIONS FOR SDPS
We here present an exact nonsmooth penalization of primal and dual SDPs <ref>-<ref> in the form of <ref>, which allows us to apply the bundle method in <Ref> and <Ref>.
§.§ Exact penalization of primal and dual SDPs
The semidefinite constraints in <ref> and <ref> are nonsmooth and typically non-trivial to deal with for numerical algorithms. A useful method proposed in <cit.> is to move nonsmooth semidefinite constraints into the cost function. In particular, for the primal SDP <ref>, we consider a penalized nonsmooth formulation
min_X ⟨ C, X⟩ + ρmax{λ_max (-X) ,0 }
subject to ⟨ A_i, X⟩ = b_i, i = 1, …, m,
and for the dual SDP <ref>, we consider the following penalized nonsmooth formulation
min_y -b^⊤ y + ρmax{λ_max(∑_i=1^m A_i y_i - C) ,0 }.
From <Ref>, we expect that if the penalty parameter ρ is large enough, <ref> and <ref> are equivalent to the primal and dual SDPs <ref> and <ref>, respectively. We have the following results (recall that 𝒫^⋆ and 𝒟^⋆ are the sets of primal and dual optimal solutions, respectively; see <ref>).
Let
ρ > max_(y^⋆, Z^⋆) ∈𝒟^⋆ tr(Z^⋆).
A point X̂ is an optimal solution of the primal SDP <ref> if and only if it is an optimal solution of (<ref>).
Let
ρ > max_X^⋆∈𝒫^⋆ tr(X^⋆).
A point ŷ (with Ẑ = C - ∑_i=1^m A_i ŷ_i) is an optimal solution of the dual SDP <ref> if and only if it is an optimal solution of (<ref>).
Both <Ref>
are direct consequences of <Ref>. In particular, a proof of <Ref> appeared in <cit.>.
For completeness, we provide a proof of <Ref> in <Ref>. In some applications, we may have prior information on bounds for tr(Z^⋆) and tr(X^⋆) (for example, one may have explicit trace constraints; see <Ref>). In these cases, we can choose the penalty parameter ρ a priori.
The exact penalization for dual SDPs <ref> is in the form of unconstrained eigenvalue minimization; see <cit.> for excellent discussions on eigenvalue optimization. To our best knowledge, all the existing results on the application of bundle methods for solving SDPs focus on the dual formulation <ref>. One of the early results in <cit.> assumes a constant trace constraint (X) = k>0. This has been generalized to standard SDPs (i.e. <ref>) in <cit.>. However, the exact penalization for primal SDPs <ref> has been less studied. We cannot find a formal statement of <Ref> in the literature. For completeness, we provide a proof of <Ref> in <Ref>.
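The exactness of the penalization is easy to check numerically on a toy instance. The sketch below is illustrative only (our own code, not the paper's): it constructs a small SDP backwards from a known complementary pair (X^⋆, Z^⋆), so that a valid penalty parameter ρ > tr(Z^⋆) is available a priori, and then compares the constrained SDP with the penalized problem <ref>.

```python
import cvxpy as cp
import numpy as np

# Build a 5x5 SDP backwards from a known optimal pair, so tr(Z*) is known.
rng = np.random.default_rng(1)
n, m = 5, 3
A = []
for _ in range(m):
    M = rng.standard_normal((n, n))
    A.append((M + M.T) / 2)
y_true = rng.standard_normal(m)
Z_true = np.diag([0.0, 0.0, 1.0, 2.0, 3.0])      # dual slack matrix, rank 3
X_true = np.diag([1.0, 2.0, 0.0, 0.0, 0.0])      # primal solution, complementary to Z_true
C = Z_true + sum(y_true[i] * A[i] for i in range(m))
b = np.array([np.trace(A[i] @ X_true) for i in range(m)])

rho = np.trace(Z_true) + 1.0                      # any rho > tr(Z*) gives an exact penalty

X = cp.Variable((n, n), symmetric=True)
cons = [cp.trace(A[i] @ X) == b[i] for i in range(m)]
sdp = cp.Problem(cp.Minimize(cp.trace(C @ X)), cons + [X >> 0])
sdp.solve()
pen = cp.Problem(cp.Minimize(cp.trace(C @ X)
                             + rho * cp.maximum(cp.lambda_max(-X), 0)), cons)
pen.solve()
print(sdp.value, pen.value)   # the two optimal values should agree up to solver accuracy
```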
§.§ SDPs with trace constraints
Here, we show that the exact penalty formulations can be viewed as standard SDPs with an explicit trace constraint (X) ≤ρ or (Z) ≤ρ. In particular, let us consider
max_y, Z b^⊤ y
subject to Z + ∑_i=1^m A_i y_i = C,
tr(Z) ≤ρ, Z ∈𝕊^n_+,
and
-min_X ⟨ C, X⟩
subject to ⟨ A_i, X⟩ = b_i, i = 1, …, m,
tr(X) ≤ρ, X ∈𝕊^n_+.
The following statements hold:
* For any ρ >0 such that <ref> is strictly feasible, the exact penalization <ref> and the modified SDP <ref> have the same optimal cost value.
* For any ρ >0 such that <ref> is strictly feasible, the exact penalization <ref> and the modified SDP <ref> have the same optimal cost value.
The equivalence comes from the strong duality. We present simple arguments below. It is straightforward to verify that the Lagrange dual problem for <ref> is
min_X,t,Q ⟨ C,Q ⟩ + ρ t
subject to ⟨ A_i,Q ⟩ = b_i, i=1,…,m,
Q + t I = X, t ≥ 0, X ∈𝕊^n_+.
Eliminating the variable X leads to Q + t I∈𝕊^n_+, which is equivalent to t ≥λ_max(-Q). Since t ≥ 0, upon partially minimizing over t, the problem <ref> is equivalent to
min_Q ⟨ C,Q ⟩ + ρmax{λ_max(-Q),0}
subject to ⟨ A_i,Q ⟩ = b_i, i=1,…,m,
which is clearly equivalent to <ref>. Since <ref> is strictly feasible, strong duality holds for <ref> and <ref>, which confirms that <ref> and <ref> have the same optimal cost value.
Similarly, the Lagrange dual problem for <ref> is
-max_y,t b^⊤ y - t ρ
subject to Z + ∑_i=1^m A_i y_i - t I = C ,
t ≥ 0, Z ∈𝕊^n_+.
Eliminating the variable Z leads to C-∑_i=1^m A_i y_i+ t I ∈𝕊^n_+, which is equivalent to t ≥λ_max( ∑_i=1^m A_i y_i -C ). Combining this bound with the constraint t ≥ 0 leads to t ≥max{λ_max(∑_i=1^m A_i y_i -C ),0 }. Thus, the problem <ref> is equivalent to
-max_y b^⊤ y - ρmax{λ_max(∑_i=1^m A_i y_i- C ),0},
which is equivalent to <ref>. Since the strongly duality holds for <ref> and <ref>, we know that <ref> and <ref> have the same optimal cost value.
We note that <Ref> implies that when ρ is large enough, i.e., satisfying the bounds in <Ref>, <Ref> is equivalent to the primal SDP <Ref>, and <Ref> is equivalent to the dual SDP <Ref>. This result becomes obvious since the extra trace constraint tr(X) ≤ρ or tr(Z) ≤ρ does not affect the optimal solutions.
If the constraints ⟨ A_i, X ⟩ = b_i, i = 1, …, m imply that tr(X) = k for some k>0, i.e., the primal SDP <Ref> has an implicit constant trace constraint, the exact dual penalization <ref>
can be simplified as
min_y -b^⊤ y + k λ_max(∑_i=1^m A_i y_i - C).
This was first used to derive the original spectral bundle method in <cit.>. Similarly, if Z + ∑_i=1^m A_i y_i = C implies that tr(Z) = k, i.e., tr(A_i) = 0 for i = 1, …, m (so that k = tr(C)), the exact primal penalization <ref> can be simplified as
min_X ⟨ C, X⟩ + k λ_max (-X)
subject to ⟨ A_i, X⟩ = b_i, i = 1, …, m.
For self-completeness, we provide short derivations for <Ref> and <Ref> in <Ref>.
We note that primal SDPs with a constant trace constraint are very common in semidefinite relaxations of binary combinatorial optimization problems, such as MaxCut <cit.> and the Lovász theta number <cit.>. Also, dual SDPs with a constant trace constraint appear in certain matrix completion problems <cit.> (see <Ref>) and in the moment/sum-of-squares relaxation of polynomial optimization <cit.> (see <Ref>). For these problems, the penalty parameter ρ is thus known a priori.
§ SPECTRAL BUNDLE METHODS FOR PRIMAL SDPS
We can apply the standard bundle method in <Ref> to solve the penalized primal formulation <ref> or dual formulation <ref>. This idea was first proposed in <cit.>, and further revised and developed in <cit.>. To our best knowledge, however, all previous studies <cit.> only consider the penalized dual formulation <ref>. The dual formulation is in the form of unconstrained eigenvalue optimization <cit.>, for which it seems more convenient to apply the bundle method.
In this section, we apply the bundle method to solve the penalized primal formulation <ref>, which leads to a new family of spectral bundle algorithms. Differences and connections between our new algorithms and the existing spectral bundle algorithms will be clarified in <Ref>.
§.§ A new family of spectral bundle algorithms for primal SDPs
For notational convenience, we denote the cost function in <ref> as
F(X) := ⟨ C,X ⟩ + ρmax{λ_max (-X) ,0 }.
Directly applying the bundle method in <ref> to solve <ref> requires computing a subgradient of F(X) at every iteration t. It is known that for every X_t ∈𝕊^n, a subgradient g_t∈∂ F(X_t) is given by <cit.>
g_t =
C - ρ v_t v_t^⊤ , if λ_max(-X_t)>0,
C, otherwise,
where v_t ∈ℝ^n is a normalized eigenvector associated with λ_max(-X_t).
§.§.§ Lower approximation models.
As discussed in <Ref>, one key step in the bundle method is to construct a valid lower approximation model of F(X) at each iteration t. Similar to (<ref>), one natural choice is to use a piece-wise affine function,
F̂_t(X) = max_i=1,…,t ⟨ C,X_i ⟩ +ρmax{λ_max (-X_i) ,0 } + ⟨ g_i , X-X_i ⟩,
where X_i, i = 1, …, t are the past iterates, and g_i ∈∂ F(X_i), i = 1, …, t are subgradients.
Via simple derivations, each affine function corresponding to X_i becomes
⟨ C,X_i ⟩ +ρmax{λ_max (-X_i) ,0 } + ⟨ g_i , X-X_i ⟩
= ⟨ C,X⟩ + ρ⟨ v_i v_i^⊤, -X ⟩ , if λ_max (-X_i) > 0,
⟨ C,X ⟩, otherwise,
where v_i ∈ℝ^n is a normalized eigenvector corresponding to λ_max(-X_i).
For this special cost function,
one key idea of the original spectral bundle method in <cit.> is to improve the lower bound <ref> using infinitely many affine minorants. In particular, at iteration t, we compute a matrix P_t ∈ℝ^n × r with some small value 1 ≤ r<n and orthonormal columns (i.e., P_t^⊤ P_t = I ∈𝕊^r), and define a lower approximation function
F̂_P_t(X) = ⟨ C,X ⟩ + ρmax_S ∈𝕊^r_+ , tr(S) ≤ 1⟨ P_t S P_t^⊤, -X ⟩.
It is clear that F(X) ≥F̂_P_t(X), ∀ X ∈𝕊^n thanks to the fact that
max{λ_max(-X) ,0 } = max_S ∈𝕊^n_+ , tr(S) ≤ 1⟨ S,-X⟩, ∀ X ∈𝕊^n,
and
{P_t S P_t^⊤∈𝕊^n_+ | S ∈𝕊^r_+, tr(S) ≤ 1}⊂{S ∈𝕊^n_+| tr(S) ≤ 1}.
Meanwhile, it is not difficult to check that if r = 1, and P_t = v_t with v_t being the top eigenvector of -X_t, then F̂_P_t(X) defined in <ref> is reduced to the approximation function in <ref> with i = t. Thus, when choosing r > 1 and selecting P_t spanning v_t, we have a strictly better lower approximation using <ref> than the simple linear function <ref> based on one subgradient. In principle, the columns of P_t ∈ℝ^n × r should consist of both top eigenvectors of the current iterate X_t and the accumulation of spectral information from past iterates X_1, …, X_t-1 <cit.>.
Therefore, by construction, F̂_P_t(X) in <ref> is naturally a minorant satisfying <ref>, and it also satisfies the subgradient lower bound <ref>. However, a further refinement is needed for <ref> to fulfill the model subgradient lower bound <ref>. The spectral bundle method <cit.> maintains a carefully selected weight matrix to capture past information. In particular, we introduce a matrix W̅_t ∈𝕊^n_+ with tr(W̅_t) = 1, and then build the lower approximation model using W̅_t and P_t ∈ℝ^n × r,
F̂_(W̅_t,P_t)(X) = ⟨ C,X ⟩ + ρmax_S ∈𝕊^r_+ , γ≥ 0, γ + tr(S) ≤ 1 ⟨γW̅_t + P_t S P_t^⊤, -X ⟩.
It is clear that the lower approximation model <ref> improves on <ref> (e.g., letting γ = 0 in <ref> recovers the approximation model <ref>). Thus, it satisfies the inequalities <ref> and <ref>. With a careful construction of W̅_t at each iteration, we will show that <ref> also satisfies <ref>. The construction details are presented below.
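For intuition, the inner maximization in <ref> can be evaluated in closed form: since the objective is linear and every feasible point is a (sub)convex combination of 0, W̅_t, and rank-one matrices P_t v v^⊤ P_t^⊤ with ‖v‖ = 1, the maximum equals max{0, ⟨W̅_t, -X⟩, λ_max(P_t^⊤(-X)P_t)}. The snippet below (our own sanity check, not from the paper) uses this formula and verifies the minorant property <ref> on random data.

```python
import numpy as np

def model_value(C, rho, X, W_bar, P):
    # Evaluate the lower model F-hat_(W_bar, P)(X): the inner maximization over
    # {gamma*W_bar + P S P^T : gamma >= 0, S >= 0, gamma + tr(S) <= 1} of <., -X>
    # is attained at 0, at W_bar, or at a rank-one matrix P v v^T P^T.
    inner = max(0.0,
                np.sum(W_bar * (-X)),                       # candidate W_bar
                np.linalg.eigvalsh(P.T @ (-X) @ P).max())   # best rank-one candidate
    return np.trace(C @ X) + rho * inner

def true_penalized_cost(C, rho, X):
    return np.trace(C @ X) + rho * max(np.linalg.eigvalsh(-X).max(), 0.0)

# Sanity check on random data: the model never exceeds the true cost F(X).
rng = np.random.default_rng(0)
n, r, rho = 30, 4, 5.0
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
X = rng.standard_normal((n, n)); X = (X + X.T) / 2
P, _ = np.linalg.qr(rng.standard_normal((n, r)))                  # orthonormal columns
w = np.abs(rng.standard_normal(n)); W_bar = np.diag(w / w.sum())  # PSD, unit trace
assert model_value(C, rho, X, W_bar, P) <= true_penalized_cost(C, rho, X) + 1e-9
print("lower model check passed")
```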
§.§.§ Spectral bundle algorithms.
We are ready to introduce a new family of spectral bundle algorithms for primal SDPs, which we call [This name is consistent with <cit.>, which focuses on solving the dual penalization formulation <ref>.]. As we will detail below, the family of spectral bundle algorithms considers r = r_p + r_c (where r_p ≥ 0, r_c ≥ 1) normalized eigenvectors to form the orthonormal matrix P_t ∈ℝ^n × r that is used in <ref>. The algorithms have the following steps.
Initialization: the algorithm starts with an initial guess Ω_0 ∈𝕊^n, a matrix P_0 ∈ℝ^n ×r formed by the r = r_p + r_c top normalized eigenvectors of -Ω_0, and any weight matrix W̅_0 ∈𝕊^n_+ with tr(W̅_0) = 1. Matrices P_0 and W̅_0 are used to construct an initial lower approximation model F̂_(W̅_0,P_0)(X) in <ref>.
Solve the master problem: Similar to <ref>, at iteration t ≥ 0, the algorithm solves the following regularized master problem
(X^⋆_t+1, S_t^⋆,γ^⋆_t) = argmin_X ∈𝒳_0 F̂_(W̅_t,P_t)(X) + α/2 ‖X - Ω_t‖^2,
where Ω_t ∈𝕊^n is the current reference point (proximal center), α > 0 serves as a penalty for the deviation from Ω_t, and
𝒳_0 := { X ∈𝕊^n | 𝒜(X) = b }.
Solving <ref> is the main computation in each iteration of , and we provide its computational details in <Ref>.
Update reference point: Similar to <ref>, the algorithm updates the next reference point Ω_t+1 as follows: given β∈ (0,1), if
β(F(Ω_t) - F̂_(W̅_t,P_t)(X^⋆_t+1)) ≤ F(Ω_t) - F(X^⋆_t+1)
holds true (i.e., at the candidate point X^⋆_t+1, the decrease of the objective value F(·) is at least a β fraction of the decrease in objective value that the model F̂_(W̅_t,P_t)(·) predicts), we set Ω_t+1 = X^⋆_t+1, which is called a descent step. Otherwise, we let Ω_t+1 = Ω_t, which is called a null step.
Update the lower approximation model: the algorithm updates the spectral matrices W̅_t+1, P_t+1 for the lower approximation model F̂_(W̅_t+1,P_t+1)(·) using a strategy similar to that in <cit.>. We first compute the eigenvalue decomposition of the small r × r matrix S^⋆_t as
S^⋆_t = [ Q_1 Q_2 ][ Σ_1 0; 0 Σ_2 ][ Q_1^⊤; Q_2^⊤ ],
where Q_1 ∈ℝ^r × r_p consists of the top r_p≥ 0 orthonormal eigenvectors, Σ_1 ∈𝕊^r_p is a diagonal matrix formed by the top r_p eigenvalues of S^⋆_t, and Q_2 ∈ℝ^r × (r- r_p) and Σ_2 ∈𝕊^(r-r_p) captures the remaining orthonormal eigenvectors and eigenvalues, respectively.
* The orthonormal matrix P_t+1: we compute V_t ∈ℝ^n × r_c with its columns being the top r_c≥ 1 eigenvectors of -X^⋆_t+1, which naturally contain the subgradient information of the true objective function F(X) at X^⋆_t+1. Letting the range space of P_t+1 span V_t guarantees an improved lower approximation model. We also let P_t+1 contain the important past information P_tQ_1 associated with the top r_p eigenvalues of S^⋆_t. Therefore, we update P_t+1 as
P_t+1 = orth([ V_t P_tQ_1 ]),
where orth (·) denotes an orthonormalization process such that P_t+1^⊤ P_t+1 = I_r with r = r_p + r_c.
* The weight matrix W̅_t+1: we keep the rest of the past information by updating W̅_t+1 as
W̅_t+1 = 1/(γ^⋆_t + tr(Σ_2)) ( γ^⋆_t W̅_t + P_t Q_2 Σ_2 Q_2^⊤ P_t^⊤).
Note that W̅_t+1 has been normalized such that tr(W̅_t+1) = 1.
If r_p = 0, the updates in <ref> and <ref> become
P_t+1 = V_t , and W̅_t+1 = 1/tr(W^⋆_t) W^⋆_t ,
where we denote the optimal solution of γW̅_t + P_t S P_t^⊤ in <ref> as
W^⋆_t = γ^⋆_t W̅_t + P_t S^⋆_t P_t^⊤.
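A compact sketch of the model update <ref>-<ref> is given below. It is illustrative only: the variable names are ours, the orthonormalization is done with a reduced QR factorization (any orthonormalization routine would do), and no effort is made to reuse previously computed eigenvectors.

```python
import numpy as np

def update_model(S_star, gamma_star, P_t, W_bar_t, X_cand, r_c, r_p):
    # Keep the top r_p eigen-directions of the small matrix S_star inside P_{t+1},
    # aggregate the remainder into the unit-trace weight matrix, and add the top
    # r_c eigenvectors of -X_cand (the current candidate) as fresh information.
    evals, evecs = np.linalg.eigh(S_star)            # ascending order
    order = np.argsort(evals)[::-1]                  # descending
    Q1 = evecs[:, order[:r_p]]
    Q2, Sig2 = evecs[:, order[r_p:]], evals[order[r_p:]]
    w, V = np.linalg.eigh(-X_cand)
    V_t = V[:, np.argsort(w)[::-1][:r_c]]            # top r_c eigenvectors of -X_cand
    if r_p > 0:
        P_next, _ = np.linalg.qr(np.hstack([V_t, P_t @ Q1]))   # orthonormalize
    else:
        P_next = V_t
    W_agg = gamma_star * W_bar_t + P_t @ Q2 @ np.diag(Sig2) @ Q2.T @ P_t.T
    trace_agg = gamma_star + Sig2.sum()              # equals tr(W_agg) since tr(W_bar_t) = 1
    W_bar_next = W_agg / trace_agg if trace_agg > 1e-12 else W_bar_t
    return P_next, W_bar_next
```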
Overall, the algorithm generates a sequence of points {X^⋆_t+1, y_t^⋆, W^⋆_t}, where y_t^⋆ is the dual variable corresponding to the affine constraint in <ref>, and a sequence of monotonically decreasing cost values {F(Ω_t)}. The detailed steps are listed in <Ref>.
§.§ Computational details
At every iteration t, we need to solve the subproblem <ref>, which is the main computation in .
Therefore, it is crucial to solve the master problem <ref> efficiently.
We summarize the computational details of solving <ref> in the following proposition.
The master problem <ref> is equivalent to the following problem
min_W ∈𝒲̂_t, y ∈ℝ^m ⟨ W-C, Ω_t⟩ - ⟨ b- 𝒜(Ω_t) ,y ⟩+ 1/(2α) ‖W-C+∑_i=1^mA_i y_i‖^2,
where
the constraint set is defined as
𝒲̂_t := {γW̅_t + P_t S P_t^⊤∈𝕊^n | S ∈𝕊^r_+ , γ≥ 0, γ + tr(S) ≤ρ}.
The optimal X in <ref> is recovered by
X^⋆_t+1 = Ω_t + 1/α(W^⋆_t -C+ 𝒜^*(y_t^⋆)),
where (W^⋆_t, y_t^⋆) is a minimizer of <ref>.
Our proof relies on the strong duality of convex optimization. Upon applying the definition of F̂_(W̅_t,P_t)(·) in (<ref>), it is clear that <ref> becomes
min_X ∈𝒳_0⟨ C,X ⟩ + ρmax_S ∈𝕊^r_+ , γ≥ 0, γ + tr(S) ≤ 1 ⟨γW̅_t + P_t S P_t^⊤, -X⟩ + α/2 ‖X - Ω_t‖^2
= min_X ∈𝒳_0max_S ∈𝕊^r_+ , γ≥ 0, γ + tr(S) ≤ρ ⟨ C,X ⟩ +⟨γW̅_t + P_t S P_t^⊤, -X⟩ + α/2 ‖X - Ω_t‖^2
= min_X ∈𝒳_0 max_W ∈𝒲̂_t ⟨ C-W,X ⟩ + α/2 ‖X - Ω_t‖^2,
where the first equality brings the constant ρ into the constraint, and the second equality applies the change of variables W = γW̅_t + P_t S P_t^⊤ and uses the set 𝒲̂_t defined in <ref>. Since 𝒲̂_t is bounded, by strong duality <cit.>, we can switch the min-max order
and obtain the following equivalence
min_X ∈𝒳_0 max_W ∈𝒲̂_t ⟨ C-W,X ⟩ + α/2 ‖X - Ω_t‖^2
= max_W ∈𝒲̂_t min_X ∈𝒳_0 ⟨ C-W,X ⟩ + α/2 ‖X - Ω_t‖^2.
Note that the inner minimization is an equality-constrained quadratic program,
min_X ∈𝒳_0 ⟨ C-W,X ⟩ + α/2 ‖X - Ω_t‖^2,
which can be simplified by considering its dual formulation. Specifically, we introduce a dual variable y ∈ℝ^m and construct the Lagrangian for <ref> as follows
L(X,y) = ⟨ C-W,X ⟩ + α/2 ‖X-Ω_t‖^2 + y^⊤(b-𝒜(X)),
which is strongly convex in X. The dual function for <ref> is given by g(y) := min_X L(X,y), where the unique minimizer X is
X = Ω_t + 1/α(W-C+∑_i=1^mA_i y_i).
Therefore, the dual function becomes
g(y) = ⟨ C-W,Ω_t⟩ + ⟨ b -𝒜(Ω_t) ,y ⟩- 1/(2α) ‖W-C+∑_i=1^mA_i y_i‖^2.
By strong duality of <ref>, we have
max_W ∈𝒲̂_t min_X ∈𝒳_0 ⟨ C-W,X ⟩ + α/2 ‖X - Ω_t‖^2
= max_W ∈𝒲̂_t max_y ∈ℝ^m ⟨ C-W,Ω_t⟩ + ⟨ b -𝒜(Ω_t) ,y ⟩- 1/(2α) ‖W-C+∑_i=1^mA_i y_i‖^2
= max_W ∈𝒲̂_t, y ∈ℝ^m ⟨ C-W,Ω_t⟩ + ⟨ b -𝒜(Ω_t) ,y ⟩- 1/(2α) ‖W-C+∑_i=1^mA_i y_i‖^2,
which is clearly equivalent to <ref>. Finally, the optimal X in <ref> is recovered in <ref> once we obtain the optimal variables W^⋆_t and y_t^⋆ from solving <ref>. This completes the proof.
After the first iteration, if 𝒜(Ω_0) ≠ b, we update the proximal center
Ω_1 = X^⋆_1.
Then, the rest of the iterates are naturally feasible with respect to the affine constraint, i.e., 𝒜(Ω_t) = b, ∀ t > 0.
Therefore, the master problem <ref> can be further simplified as
min_W ∈𝒲̂_t, y ∈ℝ^m ⟨ W-C,Ω_t⟩ + 1/(2α) ‖W-C+∑_i=1^mA_i y_i‖^2.
The subproblem <ref> is a semidefinite program with a convex quadratic cost function. The dimension of the PSD constraint is r = r_p + r_c, which can be chosen to be very small (i.e., r ≪ n). Thus, <ref> can be efficiently solved using either standard conic solvers (such as SeDuMi <cit.> and Mosek <cit.>) or customized interior-point algorithms <cit.>. In addition, we note that the dual variable y ∈ℝ^m in <ref> admits an analytical closed-form solution in terms of W, and thus <ref> can be further converted into the following form
min_v ∈ℝ^1+r^2
v^⊤ Q v + q^⊤ v + c
subject to v = [ γ vec(S)^⊤ ]^⊤,
γ≥ 0, S ∈𝕊^r_+, γ + tr(S) ≤ρ,
We present the details of transforming <ref> into <ref>
in <Ref>. Therefore, at each iteration of , we only need to solve (<ref>), which admits very efficient solutions since r can be much smaller than the original dimension n.
As we shall see in <Ref>, the computational complexity of solving the regularized sub-problem <ref> in spectral bundle methods for primal SDPs is very similar to that in spectral bundle methods for dual SDPs. Furthermore, when r = 1 (i.e., r_p = 0, r_c = 1), the problem <ref> admits an analytical closed-form solution, and no other solver is required; see <Ref>.
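For completeness, a minimal prototype of the simplified master problem <ref> (i.e., assuming the proximal center Ω_t is already affine-feasible) using a generic conic modeling tool is sketched below; it is not the customized solver discussed above, and the function and variable names are our own. The only conic variable is the small r × r matrix S, which is what keeps the per-iteration cost low.

```python
import cvxpy as cp
import numpy as np

def solve_master(C, A_list, Omega, P, W_bar, rho, alpha):
    # Hedged sketch of the regularized master problem: W ranges over the set
    # W-hat_t = {gamma*W_bar + P S P^T : S >= 0, gamma >= 0, gamma + tr(S) <= rho}.
    n, r = P.shape
    m = len(A_list)
    S = cp.Variable((r, r), symmetric=True)
    gamma = cp.Variable(nonneg=True)
    y = cp.Variable(m)
    W = gamma * W_bar + P @ S @ P.T
    Aty = sum(y[i] * A_list[i] for i in range(m))          # adjoint map A^*(y)
    obj = cp.trace((W - C) @ Omega) + cp.sum_squares(W - C + Aty) / (2 * alpha)
    prob = cp.Problem(cp.Minimize(obj), [S >> 0, gamma + cp.trace(S) <= rho])
    prob.solve()
    # recover the candidate primal point from the optimal (W, y)
    X_next = Omega + (W.value - C + sum(y.value[i] * A_list[i] for i in range(m))) / alpha
    return X_next, y.value, W.value
```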
§.§ Convergence results
We here present two convergence guarantees <Ref> and <Ref> for .
Consistent with <Ref>, when strong duality holds for <ref> and <ref>,
<Ref> below provides a convergence rate of (1/ϵ^3) in terms of cost value gap, approximate primal feasibility, approximate dual feasibility, and approximate primal-dual optimality gap. The convergence rate improves to (1/ϵ) under the condition of strict complementarity (see <Ref>).
Suppose <ref> are satisfied.
Given any β∈(0,1), r_c ≥ 1, r_p ≥ 0, α>0, r = r_p + r_c,
ρ > 2+1,
and target accuracy ϵ > 0, the algorithm in <Ref>
produces iterates Ω_t, y_t^⋆, and W^⋆_t with
F(Ω_t) - F(X^⋆) ≤ϵ and
approximate primal feasibility: 𝒜(Ω_t) - b = 0, λ_min(Ω_t) ≥ -ϵ,
approximate dual feasibility: ‖W^⋆_t + 𝒜^*(y_t^⋆) - C‖^2 ≤ϵ, W^⋆_t ≽ 0,
approximate primal-dual optimality: | ⟨ C, Ω_t ⟩ - ⟨ b, y_t^⋆ ⟩ | ≤√(ϵ)
by some iteration t≤𝒪(1/ϵ^3).
If additionally, strict complementarity holds for <ref> and <ref>, then these conditions <ref> are reached by some iteration t≤𝒪(1/ϵ).
In addition to strict complementarity, an improved convergence rate can be established if the number of current eigenvectors at every iteration satisfies
r_c ≥ max_X^⋆∈𝒫^⋆ dim(null(X^⋆))
with proper choices of α and β. Under these conditions, <Ref> ensures that the algorithm converges linearly once the iterate Ω_t is close enough to the set of primal optimal solutions 𝒫^⋆.
Suppose <ref> are satisfied and strict complementarity holds for <ref> and <ref>.
There exist constants T_0>0 and η>0. Under a proper selection of α≥η, β∈(0,1/2], r_c satisfying <ref>, r_p ≥ 0, r = r_p + r_c,
ρ > 2+1,
and target accuracy ϵ > 0, after at most T_0 iterations, the algorithm in <Ref> only takes descent steps and converges linearly to an optimal solution. Consequently, it produces iterates Ω_t, y_t^⋆, and W^⋆_t satisfying F(Ω_t) - F(X^⋆) ≤ϵ and <ref>
by at most T_0 + 𝒪(log(1/ϵ)) iterations.
The proof sketches are provided in <Ref>. The constants T_0 and η only depend on problem data and are independent of the sub-optimality ϵ, and we provide some discussions in <Ref> (see <Ref> and <Ref> in the appendix for further details).
The convergence results in <Ref> can be viewed as the counterparts of <cit.> when applying the spectral bundle method for solving primal SDPs. Note that <cit.> focus on solving dual SDPs only (we will review some details in <Ref>). As highlighted earlier, all existing studies <cit.> only consider the penalized dual formulation <ref>. Here, we establish a class of spectral bundle methods, i.e., in <Ref>, to directly solve primal SDPs with similar computational complexity and convergence behavior. However, we remark that the value of r_c in (<ref>) can be drastically different from that in <cit.> for linear convergence. A detailed comparison will be presented in <Ref>.
§.§ Proof sketches
Here, we provide some proof sketches for <Ref>. The proofs largely follow the strategies in <cit.>, <cit.> and <cit.>. We complete some detailed calculations, handle the constrained case for the penalized primal SDPs (<ref>) (note that the penalized dual case is unconstrained), and fix minor typos in <cit.>. We have also provided an extension in <Ref> compared to <cit.>. We do not claim main contributions for establishing the proofs, and we provide them for self-completeness and for the convenience of interested readers.
§.§.§ Proof of <ref>
One main step in the proof of <ref> is to characterize the improvements in terms of primal feasibility, dual feasibility, and primal-dual optimality at every descent step, which is summarized in the following lemma. Its proof is provided in <Ref>.
In the primal spectral bundle method (<Ref>), let β∈(0,1), r_c ≥ 1, r_p ≥ 0, α>0, r = r_p + r_c,
ρ > 2+1. Then, at every descent step t>0, the following results hold.
* The approximate primal feasibility for Ω_t+1 satisfies
λ_min (Ω_t+1) ≥-(F()-F())/+1, and 𝒜(Ω_t+1) = b.
* The approximate dual feasibility for (, ) satisfies
≽ 0, and -C+ () ^2 ≤2 α/β (F()-F()).
* The approximate primal-dual optimality for (, , ) satisfies
⟨ C, ⟩ - ⟨ b,⟩ ≥ -ρ (F()-F())/ +1 - √(2 α/β (F()-F())),
⟨ C, ⟩ - ⟨ b,⟩ ≤1-β/β (F()-F()) + √(2 α/β (F()-F())),
where := max_F(Ω_t) ≤ F(Ω_0) Ω_t
is bounded due to the compactness of (see <ref>).
<Ref> is a direct consequence by combining <Ref> with the convergence results of the generic bundle method in <Ref>.
Proof of <Ref>: Consider the generic convex optimization <ref>. For any lower approximation function f̂_t+1 (·) satisfying <ref>-<ref>, <Ref> guarantees that the generic bundle method in <Ref> generates iterates ω_t with the gap f(ω_t) - f(x^⋆) converging to zero at a rate of 𝒪(1/ϵ^3).
This rate improves to 𝒪(1/ϵ) whenever the quadratic growth condition <ref> occurs.
In our case of SDPs, the quadratic growth of the primal penalized cost function F(X) in <ref>
holds whenever strict complementarity holds (see <Ref>): fix ϵ > 0 and define the ϵ-sublevel set
𝒫_ϵ = {X∈𝕊^n | 𝒜(X)=b, F(X) - F(X^⋆) ≤ϵ},
then there exists a constant μ > 0 such that
F(X)-F(X^⋆) ≥μ· dist^2(X,𝒫^⋆), ∀ X ∈𝒫_ϵ.
In other words, the squared distance between a point X and the set of primal optimal solutions is bounded by the suboptimality of the objective value when strict complementarity holds.
Therefore, if the lower approximation model F̂_(W̅_t,P_t)(X) in <ref> satisfies <ref>, the gap F(Ω_t) - F(X^⋆) converges to zero at the rate of 𝒪(1/ϵ^3). This rate improves to 𝒪(1/ϵ) whenever strict complementarity holds.
By <ref>, the convergence results for approximate primal feasibility <ref>, approximate dual feasibility <ref>, and primal and dual gap <ref> are naturally established. It remains to verify <ref> for the lower approximation F̂_(W̅_t,P_t)(X), which is provided in <Ref>. This completes the proof. ▪
§.§.§ Proof of <Ref>
Thanks to <Ref>, the iterate will be sufficiently close to the set of optimal solutions after some finite number of iterations (which is independent of ϵ). In particular, let T_0 be the number of iterations that ensures
min_∈ - _op≤δ/3,
where δ is a constant eigenvalue gap parameter (see <ref> in <Ref>). Then, the lower approximation model F̂_(W̅_t,P_t)(X) in <ref> becomes quadratically close to the true penalized cost function F(X), as summarized in <Ref>. Thanks to the quadratic closeness of F̂_(W̅_t,P_t)(X), <Ref> will take only descent steps after T_0 iterations, and we have contractions in terms of the cost value gap and the distance to the set of primal optimal solutions (<Ref>).
Suppose <ref> are satisfied and strict complementarity holds for <ref> and <ref>. Let r_c satisfy the rank condition <ref>. After T_0 iterations, there exists a constant η > 0 (independent of ϵ) such that
F̂_(W̅_t,P_t) (X) ≤ F(X) ≤F̂_(W̅_t,P_t) (X) + η/2 ‖X-Ω_t‖^2, ∀ X ∈𝒳_0.
Under the conditions in <ref>, for any α≥η and t ≥T_0, <Ref> with β∈ (0, 1/2] takes only descent steps and guarantees two contractions[Note that <cit.> only shows the linear convergence in terms of
the distance to the set of optimal solutions.
We here further establish the linear convergence in terms of the cost value gap F(Ω_t) - F(X^⋆), which can be directly used in <Ref>.]
dist(Ω_t+1,𝒫^⋆) ≤√( (α/2)/(μ +α/2) ) · dist(Ω_t,𝒫^⋆),
F(Ω_t+1) - F(X^⋆) ≤(1- min{μ/(2 α),1/2}β) (F(Ω_t) - F(X^⋆)),
where μ is the quadratic growth constant in <ref> for the initial sublevel set.
Proof of <Ref>: <Ref> is a direct consequence of combining <Ref> with <Ref>. After T_0 iterations, (<ref>) guarantees the linear convergence of F(Ω_t) - F(X^⋆). Thus, together with <Ref>, this ensures the linear convergence of approximate primal feasibility, approximate dual feasibility, and approximate primal-dual optimality after T_0 iterations. ▪
We finally note that T_0 is a constant depending on problem data only. The proofs of <Ref> are provided in <Ref>.
§ SPECTRAL BUNDLE METHODS FOR DUAL SDPS
In this section, we first review the existing spectral bundle method for dual SDPs that was originally proposed in <cit.> and further developed in <cit.>. The (sub)linear convergence results have been recently established in <cit.>. We then compare the spectral bundle method for dual SDPs with that for primal SDPs developed in <ref>.
§.§ Spectral bundle algorithms for dual SDPs
The idea in <cit.> applies the standard bundle method in <Ref> to the penalized dual formulation <ref>. One key step is to construct an appropriate lower approximation model.
For notational simplicity, let us denote the objective function in <ref> as
F_d (y) := -b^⊤ y + ρmax{λ_max(𝒜^*(y) - C) ,0 }.
Similar to <ref>, the family of spectral bundle methods for dual SDPs uses a positive semidefinite matrix W̅_t ∈𝕊^n_+ with tr(W̅_t) = 1 and a matrix P_t ∈ℝ^n × r with orthonormal columns at iteration t to lower approximate F_d(y). Specifically, the lower approximation model is constructed as
F̂_d,(W̅_t,P_t)(y) =-b^⊤ y + ρmax_S ∈𝕊^r_+ , γ≥ 0, γ + tr(S) ≤ 1 ⟨γW̅_t + P_t S P_t^⊤, 𝒜^*(y) -C ⟩.
It is clear that F̂_d,(W̅_t,P_t)(y) ≤ F_d (y), ∀ y ∈ℝ^m.
Following <cit.>, we present a family of spectral bundle algorithms for dual SDPs, called ,
which has the following steps:
* Initialization: starts with an initial guess ∈ℝ^m, P_0 ∈ℝ^n × r formed by top r normalized eigenvectors of () - C, and W̅_0 ∈𝕊^n with (W̅_0) = 1. The matrices P_0 and W̅_0 are used to construct an initial lower approximation model F̂_d,(W̅_t,V_t)(y) in <ref>.
* Solve the master problem: Similar to <ref>, at iteration t ≥ 0, the algorithm solves the following regularized master problem
(y^⋆_t+1, S_t^⋆,γ^⋆_t) = argmin_y ∈ℝ^m F̂_d,(W̅_t,P_t)(y) + α/2 ‖y - ω_t‖^2,
where ω_t ∈ℝ^m is the current reference point, and α > 0 is a penalty parameter.
* Update reference point: Similar to <ref>, the algorithm updates the next reference point ω_t+1 as follows: given β∈ (0,1), if
β(F_d(ω_t) - F̂_d,(W̅_t,P_t)(y^⋆_t+1)) ≤ F_d(ω_t) - F_d(y^⋆_t+1)
holds true, we let ω_t+1 = y^⋆_t+1 (descent step); otherwise ω_t+1 = ω_t (null step).
* Update the lower approximation model: the algorithm updates the matrices W̅_t+1, P_t+1 for the lower approximation model F̂_d,(W̅_t+1,P_t+1)(y) using the formulas in <ref>, with the one difference that the matrix V_t ∈ℝ^n × r_c is formed by the top r_c ≥ 1 eigenvectors of 𝒜^*(y^⋆_t+1) - C.
The overall process for is listed in <Ref>. generates a sequence of iterates {, } with monotonically decreasing cost values {F_d()}.
Solving the subproblem <ref> is the main computation in each iteration of . Similar to <ref>, we have the following result.
The master problem <ref> is equivalent to
min_W ∈𝒲̂_t ⟨ b, ω_t⟩ + ⟨ W, C - 𝒜^*(ω_t) ⟩ + 1/(2α) ‖b - 𝒜(W)‖^2,
where the constraint set is defined as
𝒲̂_t := {γW̅_t + P_t S P_t^⊤∈𝕊^n | S ∈𝕊^r_+ , γ≥ 0, γ + tr(S) ≤ρ}.
The optimal y in <ref> is recovered as y^⋆_t+1 = ω_t + 1/α( b- 𝒜(W^⋆_t) ), where W^⋆_t is a minimizer of <ref>.
Similar to <Ref>, problem <ref> is a quadratic problem with a small semidefinite constraint, which can be reformulated into a problem of the form <ref>; see <ref>.
The convergence results of are summarized in <ref>.
Suppose <ref> are satisfied.
Given any β∈(0,1), ≥ 1, ≥ 0, α>0, r= +,
ρ > 2, P_0∈^ n × r, ω_0 ∈ℝ^m, and accuracy ϵ > 0, the in <Ref>
produces iterates ω_t and W^⋆_t with
F_d(ω_t)-F_d(y^⋆)≤ϵ and
approximate primal feasibility: ‖b - 𝒜(W^⋆_t)‖^2 ≤ϵ, W^⋆_t ≽ 0,
approximate dual feasibility: λ_min(C - 𝒜^*(ω_t) )≥ -ϵ,
approximate primal-dual optimality: |⟨ C, W^⋆_t ⟩- ⟨ b, ω_t ⟩ | ≤√(ϵ)
by some iteration t≤𝒪(1/ϵ^3).
If additionally, strict complementarity holds, then these conditions are reached by some iteration t≤𝒪(1/ϵ).
Along with strict complementarity, a further improvement on the convergence rate can be shown if the number of selected current eigenvectors at every iteration satisfies
r_c ≥ max_(y^⋆, Z^⋆) ∈𝒟^⋆ dim(null(Z^⋆)).
Suppose <ref> are satisfied and strict complementarity holds for <ref> and <ref>. There exist constants T_0>0 and η>0. Under a proper selection of α and any β∈(0,1/2], r_c satisfying <ref>, r_p ≥ 0, r = r_p + r_c,
ρ >2, P_0∈ℝ^ n × r, ω_0 ∈ℝ^m, and accuracy ϵ > 0, after at most T_0 iterations, the in <Ref> only takes descent steps and converges linearly to an optimal solution. Consequently, produces iterates and with
F_d()-F_d( )≤ϵ and <ref>
by at most T_0 + 𝒪(log(1/ϵ)) iterations.
The constants T_0>0 and η>0 only depend on problem data and are independent of ϵ. We refer the interested reader to <cit.> for details.
Historically, the spectral bundle method was first introduced in <cit.> to tackle SDP relaxations for large-scale combinatorial problems. The method in <cit.> works for dual SDPs with an explicit constant trace constraint; see <ref>. The algorithm in <cit.> requires one current eigenvector, i.e., r_c = 1, and allows different values of r_p. This selection of parameters is different from the selection reviewed in <Ref> (we follow the choice in <cit.>). The convergence results in <Ref> show that the parameter r_p can be chosen as zero, which still guarantees linear convergence whenever r_c satisfies <ref>, α is chosen correctly, and the SDP satisfies strict complementarity. The result is based on one critical observation: γW̅_t + P_t S P_t^⊤ in <ref> is an approximation of the optimal primal variable X^⋆ and P_t is an approximation of the null space of Z^⋆. A key result in the analysis is based on a novel eigenvalue approximation in <cit.>.
One immediate benefit of selecting instead of is that becomes particularly efficient when the primal SDP admits low-rank solutions as can be chosen small to fulfill the rank condition <ref>. In this case, the master problem <ref> can be solved efficiently. The low-rank property indeed holds for many SDPs from combinatorial problems and phase retrieval <cit.>.
§.§ Comparison and connections
In this subsection, we compare the differences and draw the connections between primal and dual formulations of the spectral bundle method.
It is clear that both in <Ref> (primal SDPs) and in <Ref> (dual SDPs) follow the same framework of the generic bundle method (see <Ref>). One main difference is that solves a constrained nonsmooth problem <ref> while deals with an unconstrained nonsmooth problem <ref>. The unconstrained nonsmooth problem <ref> is in the form of eigenvalue minimization which has extensive literature <cit.>. <Ref> presents a comparison of the computational details and convergence results between and .
In particular, we have the following observations:
* The primal method generates iterates (Ω_t, y_t^⋆, W^⋆_t) that 1) exactly satisfy the primal affine constraint 𝒜(Ω_t) = b and the dual PSD constraint λ_min(W^⋆_t) ≥ 0, but 2) do not exactly satisfy the primal PSD constraint (λ_min(Ω_t) ≥ -ϵ) or the dual affine constraint (‖W^⋆_t + 𝒜^*(y_t^⋆) - C‖^2 ≤ϵ).
* The dual method outputs iterates (ω_t, W^⋆_t) that 1) exactly satisfy the dual affine constraint (by construction) and the primal PSD constraint (λ_min(W^⋆_t) ≥ 0), but 2) do not exactly satisfy the primal affine constraint (‖𝒜(W^⋆_t) -b‖^2 ≤ϵ) or the dual PSD constraint (λ_min(C-𝒜^*(ω_t))≥ -ϵ).
* Each iteration of and requires solving a small quadratic SDP in the form of <ref>. When choosing the same values for and , the subproblems from and have the same dimension, and thus the computational complexity is very similar (although their constructions of <ref> are slightly different; see <Ref>).
Both and have very similar sublinear and linear convergence rates, as shown in <Ref> and <Ref>, respectively. However, we here highlight a key difference in the condition for linear convergence.
When constructing the lower approximation model at each iteration, the primal method selects the top eigenvectors of -X^⋆_t+1 to approximate the null space of the primal optimal solutions X^⋆, while the dual method uses eigenvectors of 𝒜^*(y^⋆_t+1)-C to approximate the null space of the dual optimal solutions Z^⋆. If the null space of the primal optimal solutions in 𝒫^⋆ has a smaller dimension than that of the dual optimal solutions in 𝒟^⋆, i.e.,
max_X^⋆∈𝒫^⋆ dim(null(X^⋆)) ≪ max_(y^⋆, Z^⋆) ∈𝒟^⋆ dim(null(Z^⋆)),
then the master problem in can select a smaller number of eigenvectors, leading to a smaller PSD constraint in <ref>. This is reflected in the linear convergence condition on r_c; see <ref>. In this case, it will be more numerically beneficial to apply that solves the primal SDPs directly as it is easier to solve the subproblem <ref> at each iteration. If, on the other hand, we have the following relationship
max_X^⋆∈𝒫^⋆ dim(null(X^⋆)) ≫ max_(y^⋆, Z^⋆) ∈𝒟^⋆ dim(null(Z^⋆)),
then it will be more numerically beneficial to apply that solves the dual SDPs directly.
Note that <Ref> implies that the dual SDP <ref> admits low-rank optimal solutions (i.e., rank(Z^⋆) is small), while <Ref> indicates that the primal SDP <ref> has low-rank optimal solutions (i.e., rank(X^⋆) is small). Therefore, it is crucial to choose the appropriate algorithm to solve the problem if we have prior knowledge of the rank property of SDPs. For instance, SDP-based optimization problems from sum-of-square relaxation and their applications <cit.> are likely to admit low-rank dual optimal solutions in <ref>. On the other hand, SDP relaxation of combinatorial problems such as Max-Cut <cit.>, matrix completion <cit.>, and phase retrieval <cit.> are likely to admit low-rank primal solutions in <ref>. We provide numerical experiments of SOS optimization and Max-Cut in <Ref>, which indeed validate the effect of the rank property on the convergence behavior of <Ref>. We finally note that the low-rank properties also depend on how SDPs are formulated in different applications; see the conversion between primal and dual SDPs in <Ref>.
§ IMPLEMENTATION AND NUMERICAL EXPERIMENTS
We have implemented and in <Ref> in an open-source MATLAB package, which is available at
<https://github.com/soc-ucsd/specBM>.
In this section, we discuss some implementation details and present three sets of numerical experiments to validate the performance of and , especially their linear convergence behaviors. The numerical results confirm the discussions in <Ref>: works better when the dual SDP <ref> admits low-rank solutions (i.e., <ref> holds). Similarly, is more beneficial when the primal SDP <ref> admits low-rank solutions (i.e., <ref> holds).
In <Ref>, similar to <cit.>, we consider SDPs with randomly generated problem data, which demonstrates that both methods admit sublinear convergence under different configurations of (r_p, r_c) and that the algorithms converge linearly when the rank conditions in <Ref> hold.
In <Ref>, we consider a benchmark combinatorial problem – Max-Cut and its standard SDP relaxation, which is likely to admit low-rank primal solutions. In this case, it is more desirable to apply
than . In <ref>, we show that
is well-suited for SDP relaxation rising from sum-of-squares (SOS) optimization, which appears to admit low-rank dual solutions. In this case, a small number of current eigenvectors is sufficient to fulfill the rank condition <Ref> that ensures fast linear convergence. To highlight the efficiency of , we further compare it with the state-of-the-art interior-point and first-order SDP solvers: we choose <cit.> and <cit.> as the interior-point solvers, and we select <cit.> and <cit.> as the first-order solvers.
All numerical experiments are conducted on a PC with a 12-core Intel i7-12700K [email protected] and 32GB RAM.
§.§ Implementation details
One major computation in each iteration of and is to solve the master problems <ref> and <ref>. As discussed in <Ref>, we have implemented an automatic transformation from <ref> and <ref> into standard conic form <ref>. Then, a subroutine is required to solve <ref>. In our current implementation, we used <cit.> to get an exact solution of <ref> when r = + > 1 and implemented the analytical solution in <Ref> when = 0, = 1. We note that customized algorithms can be developed to <ref> with varying accuracy which will further improve numerical efficiency at each iteration.
To form the lower approximation models <ref> and <ref>, we computed eigenvalues and eigenvectors using the routine in MATLAB. Faster eigenvalue/eigenvector computations can be implemented using in , similar to <cit.>.
The orthogonalization process to update the matrix P_t+1 in <ref> was implemented using the routine in MATLAB.
§.§.§ Adaptive strategy on the regularization parameter.
<Ref> are guaranteed to converge with any regularization parameter α > 0 in the subproblems <ref>. Yet, the value of α largely influences the practical convergence performance, as highlighted in <cit.> and <cit.>. In our implementation of <Ref>, the parameter α uses the adaptive updating rule below[The implementation of <Ref> uses the same updating rule and replaces , , F, and F̂_(W_t,P_t) with , , F_d and F̂_d,(W_t,P_t) respectively.]:
α =
min{2α,α_max}, if (F() - F̂_(W̅_t,P_t)()) ≥ F() - F() and N_c≥ N_min
max{α/2,α_min}, if (F() - F̂_(W̅_t,P_t)()) ≤ F() - F(),
where > β and 0 < < β are two nonnegative parameters that indicate the effectiveness of the current approximation model F̂_(W̅_t,P_t) and the candidate point , α_min and α_max are two nonnegative parameters that keep α staying in the interval [α_min,α_max], N_c counts the number of consecutive null steps, and N_min is the threshold that controls the frequency of increasing α.
In our implementation for
<Ref>,
the default parameters are chosen as = 0.001,
α_min = 10^-5,α_max = 100, and N_min = 10.
The parameter β and were tuned slightly for different classes of instances in our experiments.
The initial points are chosen as Ω_0 = I and ω_0 = 0 for
<Ref> and <Ref> respectively.
§.§.§ Suboptimality measures.
For the pair of primal and dual SDPs <ref>, we measure the feasibility and optimality of a candidate solution (X,y,Z) ∈𝕊^n ×ℝ^m ×𝕊^n using
η_1 = ‖𝒜(X) - b‖/(1 + ‖b‖), η_2 = min{0,λ_min(X) },
η_3 = ‖𝒜^*(y) + Z - C‖/(1 + ‖C‖), η_4 = min{0,λ_min(Z) },
η_5 = |⟨ C, X ⟩ - b^⊤ y|/(1 + |⟨ C, X ⟩| + |b^⊤ y|).
In <ref>, η_1 and η_2 measure the violation of the affine constraint and the conic constraint in the primal SDP, respectively. The measures η_3/η_4 in <ref> quantify the violation of the affine/conic constraints in the dual SDP. The last index η_5 in <ref> measures the duality gap.
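A direct transcription of these measures reads as follows (our own sketch; η_2 and η_4 are nonpositive by construction and vanish when the PSD constraints hold).

```python
import numpy as np

def kkt_residuals(X, y, Z, A_list, b, C):
    # Sketch of the suboptimality measures eta_1, ..., eta_5 defined above.
    AX = np.array([np.sum(Ai * X) for Ai in A_list])          # A(X)
    Aty = sum(yi * Ai for yi, Ai in zip(y, A_list))           # A^*(y)
    eta1 = np.linalg.norm(AX - b) / (1 + np.linalg.norm(b))
    eta2 = min(0.0, np.linalg.eigvalsh(X).min())
    eta3 = np.linalg.norm(Aty + Z - C) / (1 + np.linalg.norm(C))
    eta4 = min(0.0, np.linalg.eigvalsh(Z).min())
    eta5 = abs(np.sum(C * X) - b @ y) / (1 + abs(np.sum(C * X)) + abs(b @ y))
    return eta1, eta2, eta3, eta4, eta5
```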
As stated in <Ref>, all iterates (primal variable), and (dual variables) from ensure η_1 = 0 and η_4 = 0 (up to machine accuracy). On the other hand, the iterates (primal variable) and ω_t (dual variable) from guarantee η_2 = 0 and η_3 = 0 (see <Ref>).
In <Ref>, we ran and for a fixed number of iterations and report the measures η_1, …, η_5 for the final iterate.
In <Ref>, for a given tolerance ϵ_tol ≥ 0, we terminate the algorithms when
max{η_1, …, η_5}≤ϵ_tol.
The performance of was compared with the baseline solvers <cit.>, <cit.>, <cit.> and <cit.> in our last experiment. In <Ref>, f(ω_t) denotes the value of the generic cost function <ref>, and it refers to the cost function in <ref> for primal SDPs and in <ref> for dual SDPs.
§.§ SDPs with randomly generated problem data
Our first experiment demonstrates the (sub)linear convergence of and under different configurations of (,). Similar to <cit.>, we randomly generated two SDPs, satisfying strict complementarity, in the form of <ref> and <ref>. Both SDP instances have a PSD constraint of dimension n = 1000 and affine constraints of size m = 200.
The first SDP admits a low-rank dual solution (rank(Z^⋆) = 3) and the second SDP admits a low-rank primal solution (rank(X^⋆) = 3). Details of generating these SDP instances are discussed in <Ref>.
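While the exact construction is deferred to the appendix, one standard way to generate such instances with prescribed ranks and strict complementarity is to fix a common eigenbasis for X^⋆ and Z^⋆ with complementary supports and then back out b and C from the KKT conditions. The sketch below is an assumption-laden illustration of this idea (not necessarily the construction used in the appendix).

```python
import numpy as np

def random_sdp(n, m, rank_X, seed=0):
    # Pick complementary X*, Z* with rank(X*) + rank(Z*) = n (strict complementarity
    # by construction), then set b = A(X*) and C = Z* + A^*(y*).
    rng = np.random.default_rng(seed)
    A = []
    for _ in range(m):
        M = rng.standard_normal((n, n)); A.append((M + M.T) / 2)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))          # common eigenbasis
    lam = np.concatenate([rng.uniform(1, 2, rank_X), np.zeros(n - rank_X)])
    w = np.concatenate([np.zeros(rank_X), rng.uniform(1, 2, n - rank_X)])
    X_star = Q @ np.diag(lam) @ Q.T                           # rank rank_X
    Z_star = Q @ np.diag(w) @ Q.T                             # rank n - rank_X
    y_star = rng.standard_normal(m)
    b = np.array([np.sum(Ai * X_star) for Ai in A])           # b = A(X*)
    C = Z_star + sum(y_star[i] * A[i] for i in range(m))      # C = Z* + A^*(y*)
    return A, b, C, X_star, y_star, Z_star

# e.g. a small instance with a low-rank dual solution Z* (rank 3), as in the first test case
A, b, C, X_s, y_s, Z_s = random_sdp(n=50, m=20, rank_X=47)
```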
We consider two different configurations of the parameters and :
* r_p = 0, while changing r_c = r^⋆ -1, r_c = r^⋆, and r_c = r^⋆+1,
* r_c = 1, while changing r_p = r^⋆-2, r_p = r^⋆-1, and r_p = r^⋆,
where we set r^⋆ = 3 for both SDP instances (thus we have r^⋆ = rank(Z^⋆) or r^⋆ = rank(X^⋆)).
In the first setting, we do not consider any past information but only rely on the current information (r_p = 0), while in the second setting, we keep the minimum amount of current information (r_c = 1) and rely on the accumulated past information.
As discussed in <Ref>, the two methods iteratively approximate two different spaces: the null space of the primal solutions X^⋆ and the null space of the dual solutions Z^⋆, respectively. When an SDP admits low-rank dual solutions (i.e., the null space of the primal solutions X^⋆ has a low dimension), it is computationally more beneficial to solve the SDP in its primal form. On the other hand, when an SDP admits low-rank primal solutions (i.e., the null space of the dual solutions Z^⋆ has a low dimension), it is computationally more beneficial to solve the dual form.
We use the first SDP instance with a low-rank dual solution to demonstrate the benefits of and its fast convergence guarantees in <Ref>. For this SDP instance, we ran for both settings and for only the first setting. Then, we use the second SDP instance with a low-rank primal solution to highlight the benefits of and validate its fast convergence guarantees in <Ref>.
For the second SDP instance with a low-rank primal solution, we ran for both settings and for only the first setting. The penalty parameter ρ is set 2()+2 and 2()+2 for and respectively. The step-size parameters β and are chosen as 0.4 and 0.7 respectively.
In all cases, we ran and for 300 iterations. In this experiment, we also computed the cost value gap as
= (f(ω_t)-f^⋆)/f^⋆.
The convergence behaviors of the cost value gap are illustrated in <Ref>. In <Ref>, we list the suboptimality measures for the final iterates, where “Semi Feasi.” denotes the violation of PSD constraints max{η_2,η_4}, “Affine Feasi.” denotes the violation of affine constraints max{η_1,η_3}, “Dual Gap” denotes the duality gap η_5 in <Ref>, and “Cost Opt.” denotes the cost-value gap . As expected, the value of greatly affects the convergence performance for both and .
In the first SDP with a low-rank dual solution, we observe that has a fast convergence behavior when choosing ≥ 3 = dim(null()), while has a slow performance in all settings. On the other hand, in the second SDP with a low-rank primal solution, enjoys the fast convergence when ≥ 3 = dim(null()), while converges poorly in all different configurations. The numerical results confirm the theoretical convergence results in <Ref> and our discussions in <Ref>.
§.§ Max-Cut
In this experiment, we consider the maximum cut problem, which is a benchmark combinatorial optimization problem. The SDP relaxation is likely to have low-rank primal solutions, for which is better suited. Consider an undirected graph 𝒢(𝒱,ℰ) defined by a set of vertices 𝒱 = {1,2,…,n} and a set of edges ℰ⊆𝒱×𝒱, and each edge {i,j}∈ℰ has a weight w_ij = w_ji. The max-cut problem aims to find a maximum cut that separates the vertices into two different groups. This can be formulated as a binary quadratic program <cit.>
min_x_i^2 = 1, i = 1, …, n 1/4 x^ L x,
where L ∈𝕊^n is the Laplacian matrix of 𝒢(𝒱,ℰ), defined as L_ii = ∑_j ≠ i w_ij and L_ij = -w_ij for i ≠ j.
A well-known semidefinite relaxation <cit.> for the Max-Cut problem <ref> is
min_X 1/4⟨ L,X ⟩
subject to X_ii = 1, i = 1, …, n,
X ∈𝕊^n_+.
If the optimal solution of SDP relaxation <Ref> satisfies rank(X^⋆) = 1, then the SDP relaxation <Ref> is exact and one can recover a globally optimal solution to <Ref>. However, the rank-one solution may not exist. Instead, many max-cut instances admit low-rank optimal solutions: 1 < rank(X^⋆) ≪ n, as observed in <cit.>, <cit.>. For these SDP instances, we expect that exhibits faster linear convergence when choosing a small value of , while only has slower sublinear convergence for the same choice of .
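For concreteness, the following is a minimal sketch of how such a Max-Cut SDP instance can be assembled and solved with an off-the-shelf modeling package; the dense weight-matrix input format and the use of cvxpy are our own assumptions for illustration (the experiments below use the bundle solvers and the Gset instances instead), and the penalty estimate follows the trace bound discussed below.

```python
import numpy as np
import cvxpy as cp

def maxcut_sdp_instance(W):
    """Build and solve the Max-Cut SDP relaxation for a weighted graph.

    W : symmetric (n, n) array of edge weights with zero diagonal (assumed input format).
    Returns the optimal X, the optimal value, and a dual-side penalty estimate rho.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Minimize(0.25 * cp.trace(L @ X)),
                      [cp.diag(X) == 1])
    prob.solve()                                      # default conic solver (e.g., SCS)
    # tr(X) = n for any feasible X, so rho = 2n + 2 works on the primal side;
    # the dual-side estimate uses tr(L)/4 and the smallest eigenvalue of L.
    lam_min = np.linalg.eigvalsh(L)[0]
    rho_dual = 2.0 * (0.25 * np.trace(L) - 0.25 * n * min(0.0, lam_min)) + 2.0
    return X.value, prob.value, rho_dual
```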
We run both and for a fixed number of 300 iterations for two Max-Cut instances and from <cit.>. There are 800 nodes in graph and 2000 nodes in graph , so the PSD constraints in <ref> have dimensions of 800 and 2000, respectively. Despite the large value of n, we observed low-rank primal solutions: rank(X^⋆) = 12 for and rank(X^⋆) = 19 for .
Note that the SDP <Ref> has a constant trace property (X) = n. Thus, we chose the penalty parameter ρ = 2n+2 for . We can also estimate the penalty parameter for as[This estimate is due to the structure of Max-Cut problems <ref> that (∑_i=1^m A_i y_i )= b^ y, ∀ y ∈ℝ^m, and the fact that ŷ = (n/4min{0,λ_min(L)})1 is a dual feasible solution, where 1∈ℝ^m is an all one vector. Hence, we have () = 1/4(L) - ∑_i=1^m (A_i) _i = 1/4(L) - b^≤1/4(L) - b^ŷ, ∀ (, ) ∈.] ρ = 2(1/4(L)-n/4min{0,λ_min(L)})+2. We set r_p = 0 since this parameter has little impact on convergence in this case. The other parameters were chosen as those in <Ref>.
The numerical results are illustrated in <Ref> and <Ref>. In all cases, compared with , returns solutions with much higher accuracy within the same number of iterations. In particular, since the rank condition <Ref> is expected to hold, shows a faster linear convergence rate, while converges to an optimal solution slowly. Again, this is consistent with the theoretical expectation in <Ref>.
§.§ Quartic polynomial optimization on a sphere
In our last numerical experiments, we consider SOS relaxations for polynomial optimization, which are likely to admit low-rank dual solutions. We expect that is better suited than . Our numerical results further show that outperforms
a set of baseline solvers, including interior-point solvers <cit.>, <cit.>, and first-order solvers <cit.>, <cit.>.
Consider a constrained polynomial optimization problem over a sphere
min_x ∈𝒮^n-1 p_0(x),
where p_0(x): ℝ^n →ℝ is a polynomial and 𝒮^n-1 = { x ∈ℝ^n | ‖x‖^2 = 1} is the unit sphere in ℝ^n. This problem is in general NP-hard, but it can be approximated well using the moment/SOS relaxation <cit.>.
In particular, the kth-order SOS relaxation is
max_γ, σ_0, ψ_1 γ
subject to p_0(x) - γ -ψ_1 h(x) = σ_0 ,
σ_0 ∈Σ[x]_n,2k, ψ_1 ∈[x]_n,2(k-⌈ h ⌉),
where h(x) = ‖x‖^2 -1, ⌈ h ⌉ = ⌈deg(h)/2 ⌉, [x]_n,2(k-⌈ h ⌉) denotes the set of real polynomials in n variables of degree at most 2(k-⌈ h ⌉), and Σ[x]_n,2k denotes the cone of SOS polynomials in [x]_n,2k.
It is well-known that <ref> can be equivalently reformulated into the standard primal SDP in the form <ref> with some extra free variables; see e.g., <cit.> for details. We observe that these SDPs are likely to admit low-rank dual solutions (this observation is consistent with the flat extension theory on the moment side <cit.> when it is formulated as a dual SDP).
Motivated by the benchmark problems <cit.>, we consider three instances of <ref> in our numerical experiments:
* Modified Broyden tridiagonal polynomial
q_1(x) = ((3-2 x_1) x_1-2 x_2+1)^2 +
∑_i=2^d-1((3-2 x_i) x_i-x_i-1-2 x_i+1+1)^2 +((3-2 x_d) x_d-x_d-1+1)^2+(∑_i=1^d x_i)^2.
* Modified Rosenbrock polynomial
q_2(x) = 1 + ∑_i=2^d 100 ( x_i - x_i-1^2 )^2 + ( 1 - x_i )^2 + (∑_i=1^d x_i)^2.
* Random quartic polynomial
q_3(x) = ⟨ c_d,4,[x]_d,4⟩ ,
where [x]_d,4 is the standard monomial bases with d variables and degree at most 4, and c_d,4 is a randomly generated coefficient vector.
We used the package <cit.> to recast the SOS relaxation <Ref> with k = 2 into a standard SDP in the form of <ref>.
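For reference, the first two test polynomials can be evaluated directly from the formulas above. The following numpy helpers are our own (a minimal sketch, assuming d ≥ 3), useful, e.g., for comparing the lower bound γ returned by the relaxation against values of q_1 and q_2 at feasible points on the unit sphere.

```python
import numpy as np

def broyden_tridiagonal_modified(x):
    """q1: modified Broyden tridiagonal polynomial (formula above); assumes len(x) >= 3."""
    x = np.asarray(x, dtype=float)
    t = (3.0 - 2.0 * x) * x + 1.0                      # (3 - 2 x_i) x_i + 1
    val = (t[0] - 2.0 * x[1]) ** 2                     # first term, i = 1
    val += np.sum((t[1:-1] - x[:-2] - 2.0 * x[2:]) ** 2)  # middle terms, i = 2, ..., d-1
    val += (t[-1] - x[-2]) ** 2                        # last term, i = d
    return val + x.sum() ** 2

def rosenbrock_modified(x):
    """q2: modified Rosenbrock polynomial (formula above)."""
    x = np.asarray(x, dtype=float)
    val = 1.0 + np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[1:]) ** 2)
    return val + x.sum() ** 2
```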
We tested for the above three polynomials with different dimensions (the performance of was very poor in our experiments, and we omitted it here). The dimension of the SDP relaxations ranges from n = 496 to 861 and m= 45,897 to 134,889.
Recall in <ref> that we need to choose the penalty parameter ρ > for . For SDPs from <ref>, we can show that any ρ≥ (1+1)^2 = 4 is a valid exact penalty parameter (see <Ref>) thanks to the unit sphere constraint. We thus chose ρ =10 for all cases. The parameters (,) are chosen as (0,3), (0,5), and (0,7) for the three different problems. In our experiments, we ran until it reached a tolerance of 10^-4.
To further demonstrate the performance of , we compare it with <cit.>, <cit.>, <cit.>, <cit.>. For and , we used their default parameters. For , we used the solver with a maximum of 10,000 iterations and a tolerance of 10^-4; this option is customized for solving SDPs from SOS relaxations (by exploiting a property called partial orthogonality <cit.>). For , we used 10^-4 as the tolerance, turned off its stagnation detection, and ran it with its default parameters, a maximum of 20,000 iterations, and a maximum runtime of 10,000 seconds.
The computational results are listed in <ref>. To be consistent with other solvers, we report the cost value, time consumption, primal feasibility, dual feasibility, and duality gap (see <ref>) of the final outcome, i.e.,
= η_1, = η_3, = η_5.
As we can see in <ref>, our algorithm solves all SDP instances to the desired accuracy within a reasonable time, and it consistently outperforms the baseline solvers. Among the interior-point solvers, SDPT3 ran out of memory in all cases on our computer. MOSEK was able to return solutions of high accuracy for medium-size problems (d=30) but required more time, and it also encountered memory issues for larger instances (d≥ 35).
For the first-order solvers, solved all tested problems with medium accuracy, but its runtime was worse than that of (indeed, our algorithm was one order of magnitude faster than in some cases); the solver solved all SDPs to the desired accuracy in terms of the measures and , while its duality gap remained unsatisfactory. We note that the design of does not consider the duality gap as a stopping criterion, which partially explains the large duality gap of its final iterates.
§ CONCLUSION
In this paper, we have presented an overview and comparison of spectral bundle methods for solving primal and dual SDPs. All the existing results focus on solving dual SDPs. We have established a family of spectral bundle methods for solving primal SDPs directly. The algorithm developments mirror the elegant duality between primal and dual SDPs. We have presented the sublinear convergence rates for this family of spectral bundle methods and shown that the algorithm enjoys linear convergence with proper parameter choice and low-rank dual solutions. The convergence behaviors and computational complexity of spectral bundle methods for both primal and dual SDPs are in general similar, but they have different features. It is clear that the existing spectral bundle methods are well-suited for SDPs with low-rank primal solutions, and our new spectral bundle method works well for SDPs with low-rank dual solutions. These theoretical findings are supported by a range of large-scale numerical experiments. We have further demonstrated that our new spectral bundle method achieves state-of-the-art efficiency and scalability when solving the SDP relaxations from polynomial optimization.
Potential future directions include incorporating other types of constraints (such as nonnegative, second-order cone constraints, etc.), considering second-order information <cit.> for the lower approximation, and analyzing the algorithm performance when the subproblem <ref> is solved inexactly <cit.>. Finally, we remark that our current prototype implementation shows promising numerical performance, and it would also be very interesting to further develop reliable and efficient open-source implementations of these spectral bundle methods.
Appendix
The appendix is divided into five parts:
* <Ref> extends the convergence results of generic bundle methods for unconstrained optimization in <cit.> to constrained convex optimization. We present the adaptations required to prove <Ref>;
* <Ref> presents some technical proofs in <Ref>, i.e., the exact penalization for primal and dual SDPs;
* <Ref> presents some computation details in and ;
* <Ref> completes the technical proofs for the convergence guarantees of , i.e., <Ref>;
* <Ref> presents further details of our numerical experiments in <Ref>, including the generation of random SDPs and the exact penalty parameter in SOS optimizations on a sphere.
§ BUNDLE METHODS FOR CONSTRAINED CONVEX OPTIMIZATION
In this section, we show that the three conditions <Ref> ensure the convergence of the bundle method in <Ref> to solve the constrained convex optimization problem <Ref>. We restate <Ref> below for convenience,
f^⋆ = min_x ∈𝒳_0 f(x),
where f: ℝ^n →ℝ is convex but not necessarily differentiable and 𝒳_0 ⊆ℝ^n is a closed convex set.
The analysis in <cit.> focuses on unconstrained convex optimization with 𝒳_0 = ℝ^n in <ref>. Here we clarify the minor extension of their analysis to the constrained case with a closed convex set 𝒳_0, and present the adaptations needed to prove <Ref> in the main text. Specifically, we only need to establish the constrained versions of <cit.>, which are given in <Ref> below. Then, the proof of <Ref> follows the same arguments as in <cit.>. Following the notations in <cit.>, at iteration k, we define the proximal gap by
Δ_k := f(ω_k) - (f(x̅_k+1) + α/2x̅_k+1 - ω_k^2),
where x̅_k+1 = _x∈𝒳_0 {f(x) + α/2x-ω_k^2} and ω_k is the reference point. Note that x̅_k+1 is obtained using the original f rather than the lower approximation model f̂_k.
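To fix ideas, the descent/null-step mechanism analyzed throughout this appendix can be summarized by the following schematic sketch (the naming is our own; the master-problem oracle and the model update are left abstract, and the descent test is the β-condition used below):

```python
def bundle_step(f, f_model, solve_master, omega, alpha, beta):
    """One iteration of a generic proximal bundle scheme (schematic sketch).

    f            : the true objective
    f_model      : current lower approximation of f
    solve_master : oracle returning argmin_{x in X_0} f_model(x) + (alpha/2)||x - omega||^2
    """
    x_next = solve_master(f_model, omega, alpha)
    predicted_decrease = f(omega) - f_model(x_next)        # decrease predicted by the model
    if f(omega) - f(x_next) >= beta * predicted_decrease:   # sufficient actual decrease
        return x_next, "descent"                            # move the reference point
    return omega, "null"                                    # keep omega; enrich f_model next
```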
A descent step at iteration k satisfies
f(ω_k+1) ≤ f(ω_k) - βΔ_k.
The proof is the same as in <cit.>. We omit the details.
The number T of consecutive null steps following a descent step at iteration k satisfies
T≤ 8G_k+1^2/((1-β)^2αΔ_k+T),
where G_k+1=sup{g_t+1| k≤ t ≤ k+T}. If f is M-Lipschitz, the bound simplifies to
T ≤ 8M^2/((1-β)^2αΔ_k+T).
The proof is essentially the same as that in <cit.>.
Consider some descent step k, followed by T consecutive null steps. Define the proximal subproblem gap at k < t ≤ k+T by
Δ_t := f(ω_k+1) - (f̂_t( ) + α/2 - ω_k+1^2).
Note that every null step has the same reference point ω_k+1. The core of this proof is to show the inequality
Δ_t+1 ≤Δ_t - (1-β)^2αΔ_t^2/8G_k+1^2.
Before proving this inequality, let us show how it completes the proof first. After T consecutive null steps, the lower bound f̂_k+T (·) ≤ f(·) in <Ref> ensures that Δ_k+T≥Δ_k+T. Thus, to bound T, it is sufficient to show that the reversed inequality Δ_k+T < Δ_k+T holds. Indeed, upon applying the result in <cit.> that bounds the number of steps for a recursive relation and setting the target accuracy ϵ = Δ_k+T (note that Δ_k+T = Δ_k+1), we conclude the number of consecutive null steps T is at most
T≤ 8G_k+1^2/((1-β)^2αΔ_k+T).
Now, let us focus on the derivation of <Ref>. Consider some null step k < t ≤ k+T. We define the necessary lower bound given by <Ref> as
f̃_t+1(x) := max{f̂_t() + ⟨ s_t+1, x-⟩, f() + ⟨ g_t+1, x-⟩}≤f̂_t+1(x), ∀ x ∈𝒳_0,
and the solution of a proximal step made by f̃_t+1(x) as y_t+2 = _x ∈𝒳_0{f̃_t+1(x) + α/2x - ω_k+1^2 }. It admits the analytical solution
θ_t+1 = min{1,α(f()-f̂_t())/g_t+1-s_t+1^2},
y_t+2 = ω_k+1 - 1/α(θ_t+1 g_t+1 +(1-θ_t+1)s_t+1 + ĥ_t+1),
where ĥ_t+1∈𝒩_𝒳_0(y_t+2).
Hence, the objective of the proximal subproblem can be lower bounded by
f(x_t+2^⋆) + α/2x_t+2^⋆ - ω_k+1^2
≥ f̃_t+1(x_t+2^⋆) + α/2x_t+2^⋆ - ω_k+1^2
≥ f̃_t+1(y_t+2) + α/2y_t+2 - ω_k+1^2
≥ θ_t+1 (f̂_t() + ⟨ s_t+1, y_t+2-⟩) + (1-θ_t+1)(f() + ⟨ g_t+1, y_t+2-⟩) + α/2y_t+2 - ω_k+1^2
= f() + θ_t+1(f()-f̂_t()) - θ_t+1^2/2αg_t+1 -s_t+1^2 + α/2-ω_k+1^2 + 1/2αĥ_t+1^2
≥ f() + θ_t+1 (f()-f̂_t() ) - θ_t+1^2/2αg_t+1 -s_t+1^2 + α/2-ω_k+1^2,
where the equality uses the definition of y_t+2 and the fact that
y_t+2 = -1/α(ĥ_t+1+θ_t+1(g_t+1-s_t+1))
since = ω_k+1 - 1/αs_t+1 by the optimality condition of the subproblem _x ∈𝒳_0{f̂_t(x) + α/2x - ω_k+1^2 } at iteration t.
Thus, we have
Δ_t+1≤Δ_t - ( θ_t+1(f()-f̂_t()) - θ_t+1^2/2αg_t+1 -s_t+1^2 ).
The amount of decrease above can be lower bounded as follows
θ_t+1(f()-f̂_t()) - θ_t+1^2/2αg_t+1 -s_t+1^2
≥1/2min{ f()-f̂_t() , α(f()-f̂_t())^2 /g_t+1 -s_t+1^2}
≥1/2min{ (1-β) Δ_t , α (1-β)^2Δ_t^2 /g_t+1-s_t+1^2 }
≥1/2min{ (1-β) Δ_t , α (1-β)^2Δ_t^2 / 2g_t+1^2 + 2s_t+1^2},
where the first inequality uses the definition of θ_t+1, the second inequality uses the definition of the null step, and the third inequality uses Young's inequality.
It is clear that both components in the minimum above are non-negative and we have a weaker result that Δ_t+1 is non-increasing. Upon further utilizing the relation s_t+1^2 ≤ 2 αΔ_t≤ G^2_k+1 (which will be shown later), the decrease bound above is at least
= 1/2min{ (1-β) Δ_t , α (1-β)^2Δ_t^2 / 2g_t+1^2 + 2s_t+1^2}
≥1/2min{ 2 α(1-β) Δ_t^2/G_k+1^2 , α (1-β)^2Δ_t^2 / 4G_k+1^2}
≥α (1-β)^2 Δ_t^2/8G_k+1^2
≥α (1-β)^2 Δ_t^2/8M^2,
where the last inequality applies the observation that the subgradient G_k+1 is uniformly upper bounded by the M as f is M-Lipschitz. Hence, we conclude that Δ_t+1≤Δ_t - α (1-β)^2 Δ_t^2/8M^2.
We then show the inequality s_t+1^2 ≤ 2 αΔ_t≤ G^2_k+1. By the facts that f̂_t(·) + α/2· - ω_k+1^2 is α strongly convex and that the minimizer is unique, there exists a h_t∈𝒩_𝒳_0(x_t+1^⋆) and v_t∈∂f̂_t() + α ( - ω_k+1) such that 0 = v_t + h_t. It follows from the first-order condition of a strongly convex function that
f̂_t(ω_k+1) ≥f̂_t() + α/2 - ω_k+1^2 - ⟨ h_t ,ω_k+1 - ⟩+α/2- ω_k+1^2
≥f̂_t() + α/2 - ω_k+1^2 + α/2 - ω_k+1^2,
where the second inequality uses the definition of normal cone ⟨ h_t, y - x_t+1^⋆⟩≤ 0, ∀ y ∈𝒳_0. Using the lower bound property f̂_t(ω_k+1) ≤ f(ω_k+1), we have
α/2x_t+1^⋆ - ω_k+1^2 ≤ f(ω_k+1) - (f̂_t()+ α/2 - ω_k+1^2) = Δ_t≤Δ_k+1 ,
where the second inequality comes from the fact that Δ_t is non-increasing.
By the construction of the lower approximation model at step k+1 and Young's inequality, we know
f̂_k+1(x^⋆_k+2) ≥ f(ω_k+1) + ⟨ g_k+1,x^⋆_k+2-ω_k+1⟩
≥ f(ω_k+1) -1/2( g_k+1^2/α + αx^⋆_k+2-ω_k+1^2),
where g_k+1∈∂ f(ω_k+1).
Combining the above inequality and the definition Δ_k+1, we get the relationship
α/2x_t+1^⋆ - ω_k+1^2 ≤Δ_t≤Δ_k+1≤1/2αg_k+1^2,
which implies s_t+1^2=α^2 x_t+1^⋆ - ω_k+1^2 ≤ 2 αΔ_t ≤ 2 αΔ_k+1≤g_k+1^2 ≤ G_k+1^2.
Fix a minimizer x^⋆ in <ref> and let ω_k∈^n \{x^⋆}, the proximal gap is lower bounded by
Δ_k ≥ (f(ω_k)-f(x^*))^2/(2αω_k-x^*^2), if f(ω_k)-f(x^*) ≤αω_k-x^*^2,
and Δ_k ≥ f(ω_k)-f(x^*) - (α/2)ω_k-x^*^2, otherwise.
This follows <cit.> exactly and we omit the proof here.
The proof of <Ref> follows the same arguments in <cit.> via combining <Ref>. Indeed,
* <Ref> gives the progress made in the descent step with the relation to the proximal gap;
* <Ref> shows that the maximum number of null steps between two descent steps depends on the proximal gap; and
* <Ref> shows the lower bound on the proximal gap.
As a result, by the lower bound on the proximal gap given in <Ref>, the number of null steps between two descent steps can be bounded using <Ref>. Combining <Ref>, we obtain the maximum number of descent steps needed to achieve the desired accuracy. Finally, accounting for both the maximum numbers of null steps and descent steps yields the maximum total number of steps needed to achieve the desired accuracy, which gives the convergence results in <Ref>. We refer interested readers to <cit.> for more details.
§ TECHNICAL PROOFS IN <REF>
§.§ Proof of <Ref>
We first recall two technical lemmas: the first one is the KKT optimality condition for the primal and dual SDPs <ref> and <ref>, which is shown in <Ref>, and the second one is the computation of the subdifferential of the maximal eigenvalue of symmetric matrices.
Let A ∈𝕊^n be a symmetric matrix and suppose its maximal eigenvalue λ_max(A) has multiplicity t. Then, we have
∂λ_max(A) = {Q U Q^| U ∈𝕊^t_+, (U) = 1},
where the columns of Q ∈ℝ^n × t form an orthonormal set of eigenvectors for λ_max(A).
We are now ready to prove <Ref>.
Upon denoting 𝒳_0 = {X ∈𝕊^n | ⟨ A_i, X⟩ = b_i, i = 1, …, m}, it is easy to see that the primal SDP <ref> is equivalent to
min_X ⟨ C, X⟩
subject to λ_max(-X) ≤ 0,
X ∈𝒳_0,
where we have applied the fact that X ≽ 0 ⇔ -X ≼ 0 ⇔λ_max(-X) ≤ 0. <Ref> ensures that the penalty form <ref> is equivalent to <ref> if we choose ρ > ρ_0 with ρ_0 = sup_λ∈Λ |λ|, where Λ⊂ℝ is the set of Lagrange multipliers associated with the inequality λ_max(-X) ≤ 0.
Denote Λ̃ = { () ∈ℝ| (,) ∈}, where is the set of dual optimal solutions to <ref>. We only need to show that Λ̃ = Λ. To see this, we start with the KKT optimality condition for <ref>, i.e.,
0 ∈ C + α∂ (λ_max(-))+ N_𝒳_0() ⟺ 0 ∈ C + α∂ (λ_max(-))- ∑_i=1^mA_i y_i,
αλ_max(-) = 0, α≥0, λ_max(-) ≤ 0, ∈𝒳_0,
where 𝒩_𝒳_0() :={ X ∈𝕊^n | ⟨ X, Y-⟩≤ 0, ∀ Y ∈𝒳_0 } denotes the normal cone to 𝒳_0 at , and we have used the fact that 𝒩_𝒳_0() ={∑_i=1^mA_i y_i ∈𝕊^n | y ∈ℝ^m} since 𝒳_0 = {X ∈𝕊^n | ⟨ A_i, X⟩ = b_i, i = 1, …, m} is an affine space.
The set of Lagrange multipliers Λ associated with the inequality λ_max(-X) ≤ 0 is all α satisfying <ref> and <ref>.
Our proof is divided into two steps.
* We first prove that Λ̃⊆Λ. For any dual optimal solution (, ) ∈𝒟^⋆, let the corresponding primal optimal solution be . It suffices to show that α = (), , and satisfy <ref> and <ref>.
If is strictly positive definite, then = 0 according to <Ref>. Then , α = () = 0, and C - ∑_i=1^m A_i y_i^⋆ = 0 naturally satisfy <Ref>-<Ref>.
If is positive semidefinite with λ_max(-) = -λ_min()= 0, then α = () ≥ 0 naturally satisfies <Ref>. Denote the multiplicity of λ_min() = 0 as r, and the corresponding set of orthonormal eigenvectors as the columns of P ∈ℝ^n × r. Via a classical chain rule (see e.g.,<cit.>) and <Ref>, the subdifferential of λ_max(-) is
∂λ_max(-) = { -P U P^∈𝕊^n | U ∈𝕊^r_+, (U) = 1}.
By <Ref>, it is not difficult to see that ∈{() × P U P^ | U ∈𝕊^r_+, (U) = 1}, which indicates
C - ∑_i=1^m A_i y_i^⋆ =∈ - ()∂λ_max(-).
Thus, α = (), , and also satisfy <ref>.
* We then prove Λ⊆Λ̃. It suffices to show that for any α, y∈ℝ^m, satisfying <ref>-<ref>, the points , (y, Z = C - ∑_i=1^m A_i y_i) are a pair of primal and dual optimal solutions to the SDPs <ref> and <ref> (note that <ref> implies that (Z) = (C - ∑_i=1^m A_i y_i) = α).
By definition, it is clear that , y, Z = C - ∑_i=1^m A_i y_i are primal and dual feasible, i.e.,
⟨ A_i, ⟩ = b_i, i = 1, …, m, ∑_i=1^m A_i y_i + Z = C, ≽ 0, Z ≽ 0.
By <Ref>, we only need to show that the complementarity slackness Z = 0 also holds.
If λ_max(-) < 0, then α = 0 by <Ref>. Thus, we have Z = C - ∑_i=1^m A_i y_i ∈ - α∂λ_max(-) ={0}, leading to Z = 0.
If λ_max(-) = 0 with multiplicity r, then PUP^× = 0, ∀ U ∈𝕊^r, where the columns of P ∈ℝ^n × r forms an orthogonal set of the eigenvectors for λ_max(-). Combining this fact with the definition Z = C - ∑_i=1^m A_i y_i ∈ - α∂λ_max(-), we also have Z = 0.
Therefore, the points , y, Z = C - ∑_i=1^m A_i y_i satisfy the KKT condition for <ref> and <ref>.
Therefore, we have established that Λ̃ = Λ. Together with <Ref>, this completes the proof.
§.§ Constant trace property in primal and dual SDPs
In this subsection, we give a detailed derivation for the penalized primal/dual problem when the feasible set of <ref> implies a constant trace constraint on the decision variables X and Z, i.e. (X) = k and (Z) = k for some k>0.
§.§.§ Constant trace in primal SDPs: Derivation of <ref>
Without loss of generality, we can add the explicit constant trace constraint (X) = k to <ref>, leading to
min_X ⟨ C, X⟩
subject to ⟨ A_i, X⟩ = b_i, i = 1, …, m,
⟨ I,X ⟩ = k,
X ∈𝕊^n_+.
Then, the corresponding Lagrangian dual problem reads as
max_y,t b^ y + t k
subject to Z + ∑_i=1^m A_i y_i + t I = C,
Z ∈𝕊^n_+.
For every pair of optimal solutions X^⋆ and Z^⋆, we have () = k>0 and rank() ≤ n-1 due to the rank condition in <ref>. This implies λ_min() = 0. By eliminating the variable Z in <ref>, we get
λ_min(C - ∑_i=1^m A_i y_i - t I) = 0 ⇒ t = λ_min(C - (y)) = -λ_max((y) -C).
Therefore, the problem <ref> can be equivalently written as
max_y b^ y - k λ_max((y) -C),
which is also equivalent to <ref>.
§.§.§ Constant trace in dual SDPs: Derivation of <ref>
Similarly, we can add the explicit constant trace constraint, (Z) = k, to the dual SDP <ref>. Its Lagrange dual problem becomes
min_X,t,Q ⟨ C,Q ⟩ + k t
subject to ⟨ A_i,Q ⟩ = b_i, i=1,…,m,
Q + t I = X,
X ∈𝕊^n_+.
Following the same argument in <Ref>, we know
λ_min(Q + t I) = 0 ⇒ t = -λ_min(Q) = λ_max(-Q),
which leads to
min_Q ⟨ C,Q ⟩ + k λ_max(-Q)
subject to ⟨ A_i,Q ⟩ = b_i, i=1,…,m.
This is clearly equivalent to <ref>.
[Constant trace in dual SDPs]
Here, we point out a simple observation that a specific matrix completion problem admits a constant trace property in the dual variable Z. Matrix completion aims to recover a low-rank matrix M ∈ℝ^s × t from its partially observed entries {M_ij}_(i,j)∈Ω. A typical formulation with a nuclear norm regularization reads as <cit.>
min_X X_*
subject to X_ij = M_ij, ∀ (i,j) ∈Ω,
where X_* denotes the nuclear norm of X. The problem above can be cast as a standard primal SDP
min_X (X)
subject to ⟨[ 0 E_ij^; E_ij 0 ], X ⟩ = 2M_ij,∀ (i,j) ∈Ω,
X = [ W_1 U^; U W_2 ]∈𝕊^s+t_+,
where E_ij∈ℝ^s × t is zero everywhere except the (i,j) entry being 1. Letting A_ij = [ 0 E_ij^; E_ij 0 ] and re-indexing the matrices {A_ij} by integers s= 1, … , |Ω|, the dual SDP for <ref> becomes
max_y,Z b^ y
subject to ∑_s = 1^|Ω| A_sy_s + Z = I,
Z ∈𝕊^s+t_+.
Note that (A_s) = 0 and any feasible variable Z has the constant trace property (Z) = (I) = s+t.
§ COMPUTATION AND CONVERSION IN AND
§.§ Reformulation of master problem <ref> in
As indicated in <Ref>, needs to solve the following SDP with a quadratic cost function at each iteration (recall that the constraint set 𝒲̂_t is defined in <ref>)
min_W ∈𝒲̂_t, y ∈ℝ^m ⟨ W-C,Ω_t ⟩ - ⟨ b- 𝒜(Ω_t) ,y ⟩+ 1/2 αW-C+∑_i=1^mA_i y_i^2.
After dropping some constant terms and using the operator vec: 𝕊^n →^n^2 that stacks the columns of the input matrix on top of each other, the problem <ref> can be reformulated as
min_γ, S, y [ γ S^ y^ ][ Q_11 Q_12 Q_13; Q_12^ Q_22 Q_23; Q_13^ Q_23^ Q_33 ][ γ; vec(S); y ] + [ q_1^ q_2^ q_3^ ][ γ; S; y ]
subject to γ≥ 0, S ∈𝕊^r_+, γ + (S) ≤ρ, y ∈ℝ^m,
where Q_11 = ⟨W̅_t,W̅_t ⟩, Q_22 = I_r^2, Q_33 = [ (A_1), …, (A_m) ], Q_12 = P_t^W̅_t P_t ^, Q_13 = (W̅_t)^ and
Q_23 = [ P_t^ A_1 P_t P_t^ A_2 P_t ⋯ P_t^ A_m P_t ],
q_1 = -2 ⟨W̅_t, C ⟩ + 2 α⟨W̅_t, Ω_t ⟩,
q_2 = 2 αP_t^Ω_t P_t - 2 P_t^ C P_t,
q_3 = -(2 α (b-(Ω_t))+ 2𝒜(C)) ).
Although the problem <ref> can already be handled by standard conic solvers, the computation can be further simplified by eliminating the variable y. The optimality condition for y follows
y = Q_33^-1(-q_3/2 - Q_13^α - Q_23^S),
where Q_33 is invertible because of <ref>.
Therefore, the problem becomes
_γ, S [ γ S^ ][ M_11 M_12; M_12^ M_22; ][ γ; vec(S) ] + [ m_1^ m_2^ ][ γ; S ]
subject to γ≥ 0, S ∈𝕊^r_+, γ + (S) ≤ρ,
where M_11 = Q_11 - Q_13 Q_33^-1 Q_13^, M_22 = Q_22 - Q_23 Q_33^-1 Q_23^, M_12 = Q_12 - Q_13 Q_33^-1 Q_23^, m_1 = q_1 - Q_13 Q_33^-1 q_3 and m_2 =q_2 - Q_23Q_33^-1 q_3.
We note that <ref> has only one non-negative variable, one semidefinite variable, and one inequality constraint. The computational complexity for solving <ref> is low
when the dimension of the semidefinite variable is small.
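The elimination of y above is a standard Schur-complement step. A small numpy sketch of the generic computation (our own abstraction, with the blocks named generically rather than by the Q_ij above) reads:

```python
import numpy as np

def eliminate_free_block(A, B, C, a, c):
    """Eliminate the unconstrained block y from
        min_{z in K, y}  [z; y]^T [[A, B], [B^T, C]] [z; y] + [a; c]^T [z; y],
    assuming C (playing the role of Q_33) is positive definite.
    Returns (M, m, recover_y) so that the reduced problem is min_{z in K} z^T M z + m^T z."""
    C_inv = np.linalg.inv(C)
    M = A - B @ C_inv @ B.T            # cf. M_11, M_22, M_12 above
    m = a - B @ C_inv @ c              # cf. m_1, m_2 above
    recover_y = lambda z: -C_inv @ (B.T @ z + c / 2.0)
    return M, m, recover_y
```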
Aside from deriving the dual problem for <ref>, another approach to solving equality-constrained problems is to eliminate the affine constraints by finding an explicit representation of the feasible set
<cit.>. Specifically, <ref> can be reformulated as
min_x ∈ℝ^p ⟨ C-W, X_0 + ∑_i=1^p N_i x_i ⟩ + α/2 X_0 + ∑_i=1^p N_i x_i - ^2,
where X_0 ∈𝕊^n is a particular solution for the affine constraint, i.e., (X_0) = b, and {N_1, …, N_p} is a set of the orthonormal basis of the null space of . We note that the above problem
is an unconstrained quadratic program in x, and thus we have the optimality condition
x = -K^-1((1/α(C-W) + X_0-)),
where 𝒩(·) is a linear map 𝕊^n→ℝ^p as
𝒩(X) := [ ⟨ N_1, X ⟩, … , ⟨ N_p, X ⟩ ]^, and K = [ (N_1) …(N_p) ]∈𝕊^p_++.
Therefore, another equivalent problem of <ref> becomes (after dropping some constant terms)
_W ∈𝒲̂_t - ⟨ X_0 - (K(X_0-)) ,W ⟩ -1/2α(K^-1)^1/2(C-W) ^2,
where (·):ℝ^p →𝕊^n is the adjoint operation of .
§.§ Reformulation of master problem (<ref>) in
Similar to , solves the following quadratic SDP at every iteration
min_W ∈𝒲̂_t ⟨ b,⟩ + ⟨ W, C - () ⟩ + 1/2 αb - (W)^2,
where the constraint set is defined as
𝒲̂_t := {γW̅_t + P_t S P_t^∈𝕊^n | S ∈𝕊^r_+ , γ≥ 0, γ + (S) ≤ρ}.
By removing some constants and performing a scaling, the problem can be reformulated as
min_γ, S [ γ S^ ][ M_11 M_12; M_21 M_22 ][ γ; vec(S) ]
+ [ m_1^ m_2^ ][ γ; S ]
subject to γ≥ 0, S ∈𝕊^r_+, γ + (S) ≤ρ, y ∈ℝ^m ,
where the problem data are
M_11 = ⟨(W̅_t),(W̅_t) ⟩ ,
M_22 = ∑_i=1^m P_t^ A_i P_tP_t^ A_i P_t^,
M_12 = P_t^((W̅_t))P_t,
m_1 = ⟨ -2(b) + 2 α G, W̅_t ⟩ ,
m_2 = P_t^ (-2(b)+ 2 α G) P_t,
G = C - ().
It is clear that <ref> are in the same form as <ref> and have decision variables of the same dimension. Thus, solving <ref> has the same computational complexity. However, the problem data M, m_1, m_2 in <ref> are different, and their constructions require different amounts of computation.
§.§ Analytical solution when = 0 and = 1
The subproblem <ref> or <Ref> can be reformulated into a conic problem of the form <ref>. In general, the subproblem <ref> does not admit an analytical solution and needs to be solved by another algorithm (e.g., MOSEK <cit.> or SeDuMi <cit.>).
Here, we highlight that an analytical solution to <ref> exists when parameters = 0 and = 1. In this case, the conic problem <ref> is reduced to
min_γ, s [ γ s ] M [ γ; s ] + m^[ γ; s ]
subject to γ≥ 0, s ≥ 0, γ + s ≤ρ,
where M ∈𝕊^2_+ and m ∈ℝ^2 are problem data.
There are only two scalar decision variables in <Ref>, and the feasible region forms a triangle in the first quadrant.
Therefore, the solution to <ref> depends on the location of the minimum of the quadratic objective function. Precisely, the unconstrained minimum of the objective function is attained at
[ γ_obj; s_obj ] = -M^†m/2,
where M^† is the Moore-Penrose Pseudoinverse of M. If γ_obj and s_obj are feasible in <ref>, they are also the optimal solution to <ref>. If, however, γ_obj and s_obj are not feasible, the solution to <ref> will be on one of the three line segments defined by
l_1 ={ (γ,0) ∈ℝ^2 | 0 ≤γ≤ρ},
l_2 = {(0,s) ∈ℝ^2 | 0 ≤ s ≤ρ},
l_3 = { (γ , s) ∈ℝ^2 |γ + s = ρ, γ≥ 0, s ≥ 0 }.
Each line segment is one-dimensional and thus the minimizer of a quadratic function on each line segment can be calculated analytically:
* For the line segment l_1, the minimizer is
v_1 = [ γ̂; 0 ] and γ̂ = 0 if -m_1/(2M_11) < 0,
-m_1/2M_11 if 0 ≤ -m_1/(2M_11) ≤ρ,
ρ if - m_1/(2M_11) > ρ.
* For the line segment l_2, the minimizer is
v_2 =
[ 0; ŝ ] and ŝ =
0 if -m_2/(2M_22) < 0,
-m_2/2M_22 if 0 ≤ -m_2/(2M_22) ≤ρ,
ρ if -m_2/(2M_22) > ρ.
* For the line segment l_3, the minimizer is
v_3 = [ γ̂; ρ - γ̂ ] and γ̂ =
0 if ϕ < 0,
ϕ if 0 ≤ϕ≤ρ,
ρ if ϕ> ρ,
where ϕ = 2ρ M_22 - 2 ρ M_12 - m_1 + m_2/2M_11 + 2M_22 - 4M_12.
Let f(v) = v^ M v + m^ v, where v ∈ℝ^2, be the same objective function in <ref>. When γ_obj and s_obj are not feasible, the solution to <ref> can be computed as
_v ∈{v_1,v_2,v_3} f(v).
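The case analysis above translates directly into a few lines of code; the following is a minimal numpy sketch (our own helper, assuming the relevant denominators are nonzero, with degenerate cases handled separately):

```python
import numpy as np

def solve_triangle_qp(M, m, rho):
    """Minimize v^T M v + m^T v over {(gamma, s): gamma >= 0, s >= 0, gamma + s <= rho},
    where M is a 2x2 PSD matrix, via the analytical case analysis above."""
    f = lambda v: float(v @ M @ v + m @ v)
    clamp = lambda t: min(max(t, 0.0), rho)

    v_obj = -np.linalg.pinv(M) @ m / 2.0                       # unconstrained minimizer
    if v_obj[0] >= 0.0 and v_obj[1] >= 0.0 and v_obj.sum() <= rho:
        return v_obj

    # Minimizers on the three boundary segments l1, l2, l3.
    v1 = np.array([clamp(-m[0] / (2.0 * M[0, 0])), 0.0])
    v2 = np.array([0.0, clamp(-m[1] / (2.0 * M[1, 1]))])
    phi = (2.0 * rho * M[1, 1] - 2.0 * rho * M[0, 1] - m[0] + m[1]) / (
        2.0 * M[0, 0] + 2.0 * M[1, 1] - 4.0 * M[0, 1])
    g = clamp(phi)
    v3 = np.array([g, rho - g])
    return min((v1, v2, v3), key=f)
```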
§.§ Conversion between primal and dual formulations.
Here, we show that the primal and dual algorithms and can be converted from each other through a reformulation of the SDPs <ref>. Accordingly, is able to solve the dual SDP <ref> in a different form, and can solve the primal SDP <ref>. The idea is to make a conversion between the equality-form SDP <ref> and inequality-form SDP <ref>.
Consider the primal SDP <ref>. Any feasible point satisfying (X) = b can be represented as
X = X_p + y,
where X_p∈𝕊^n is a particular solution of (X) = b, y ∈ℝ^p with p denoting the dimension of the null space of , and 𝒩(·) is a linear map 𝕊^n→ℝ^p as 𝒩(X) := [ ⟨ N_1, X ⟩, … , ⟨ N_p, X ⟩ ]^ such that its adjoint y represents the null space of .
This allows us to equivalently rewrite <ref> as
min_y ⟨ C, X_p + y ⟩ + ρmax{λ_max (- X_p - y) ,0 }
= min_y ⟨ (C), y ⟩ + ρmax{λ_max (- X_p - y) ,0 },
where we drop a constant term and use the property ⟨ C, y ⟩ = ⟨(C), y ⟩.
It is clear that the problem above is in the same form as the penalized dual SDPs <ref> with problem data
b = -(C), = -, C = X_p.
Therefore, we can apply in <Ref> to solve it and convergence guarantees follow <Ref>.
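In practice, the pair (X_p, {N_i}) can be obtained from the vectorized constraint matrices. The following numpy sketch illustrates the idea under our own simplifying assumptions: it works in full vec coordinates (so the returned basis matrices need not be symmetric; a symmetric parametrization would use the half-vectorization of 𝕊^n), and a production implementation would exploit sparsity rather than a dense SVD.

```python
import numpy as np

def affine_parametrization(A_list, b, tol=1e-10):
    """Return a particular solution X_p of <A_i, X> = b_i and an orthonormal basis of the
    null space of vec(X) -> (<A_i, X>)_i, so feasible points are X_p + sum_i x_i N_i."""
    n = A_list[0].shape[0]
    A_mat = np.vstack([Ai.reshape(1, -1) for Ai in A_list])       # rows are vec(A_i)
    # Minimum-norm particular solution; symmetrizing keeps feasibility since A_i = A_i^T.
    x_p, *_ = np.linalg.lstsq(A_mat, b, rcond=None)
    X_p = x_p.reshape(n, n)
    X_p = (X_p + X_p.T) / 2.0
    # Orthonormal basis of the null space of A_mat from the SVD.
    _, s, Vt = np.linalg.svd(A_mat, full_matrices=True)
    rank = int(np.sum(s > tol))
    N_list = [Vt[i].reshape(n, n) for i in range(rank, Vt.shape[0])]
    return X_p, N_list
```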
On the other hand, the dual SDP <Ref> can also be solved by in <Ref>. Indeed, we can equivalently reformulate <ref> or <ref> into the form of <ref> or <ref>:
* We first find G_1, …,G_l ∈𝕊^n and h ∈ℝ^l, such that { Z ∈𝕊^n | 𝒢 (Z) = h } = {C - y | y∈ℝ^m }, where 𝒢 is a linear map : 𝕊^n →ℝ^l as 𝒢(X) = [ ⟨ G_1,X ⟩, …,⟨ G_m ,X⟩ ]^. In particular, {G_1, …, G_l} is a set of the basis of the null space of 𝒜 and h = 𝒢(C).
* We then find a feasible point X_f satisfying the affine constraint in <ref>, i.e., 𝒜(X_f)=b.
Consequently, we can equivalently rewrite the dual penalized nonsmooth formulation <ref> in the form of
min_ Z ⟨ X_f, Z ⟩ + ρmax{λ_max(-Z),0 }
subject to 𝒢(Z) = h,
where we have replaced b by (X_f), introduced a constant ⟨ C, X_f⟩, and defined a new variable Z = C - y.
It is clear that the problem above is in the form of <ref>, which is ready to be solved by in <Ref>. The convergence results follow <ref>.
§ TECHNICAL PROOFS IN <REF>
In this section, we complete the technical proofs in <Ref>. We first verify that the lower approximation model <ref> satisfies <ref> in <Ref>, and then prove <Ref> in <Ref>. These two results complete the proof of <Ref>. Finally, in <Ref>, we complete the proofs for <Ref>.
§.§ Verification of the lower approximation model <ref>
First, the constructions of P_t in <ref> and W̅_t in <ref> guarantee that P_t^ P_t = I_r, W̅_t ≽ 0 and (W̅_t) =1. Then, we have
𝒲̂_t:={γW̅_t + P_tSP_t^| S ∈𝕊^r_+ , γ≥ 0,γ + (S) ≤ρ}⊂{ W ∈𝕊^n_+|(W) ≤ρ},
which implies that
max_W ∈𝒲̂_t ⟨ W, -X⟩≤max_(W) ≤ρ, W ∈𝕊^n_+⟨ W, -X⟩ = ρmax{λ_max(-X),0}, ∀ X ∈𝕊^n.
Thus, F̂_(W̅_t,P_t)(·) is a global lower approximation of F(·), satisfying <ref>.
Second, the satisfaction of <ref> is due to the fact that a subgradient of F(X) at at iteration t is contained in the feasible set 𝒲̂_t+1 for the next model, when constructing P_t+1 in <ref>. In particular, the column space of P_t+1 in <ref> spans the top r_c eigenvectors of -. Then, there exists a unit vector s ∈ℝ^r
such that P_t+1s = v, where v is a top normalized eigenvector of -.
Letting γ = 0, it is easy to verify that ρ vv^ = P_t+1 (ρ s s^) P_t+1^∈𝒲̂_t+1 since ρ s s^∈𝕊^r_+ and (ρ s s^) ≤ρ. Therefore, if (-) > 0, we have
F̂_(W̅_t+1,P_t+1) (X) =
⟨ C,X ⟩ + max_W ∈𝒲̂_t+1⟨ W, -X ⟩ ≥⟨ C,X ⟩ + ρ⟨ vv^, -X ⟩
= F() + ⟨ g_t , X- ⟩, ∀ X ∈𝕊^n,
where g_t =C-ρ vv^∈∂ F(). On the other hand, if (-) ≤ 0, since 0 ∈𝒲̂_t+1, it also follows that
F̂_(W̅_t+1,P_t+1) (X) ≥⟨ C , X ⟩
= F() + ⟨ g_t , X- ⟩, ∀ X ∈𝕊^n,
where g_t = C ∈∂ F().
Hence, we have verified the subgradient lower-bound in <ref> for F̂_(W̅_t,P_t)(·).
Finally, the satisfaction of <ref> is due to the fact that the optimal solution at iteration t is contained in the feasible set 𝒲̂_t+1 at the next iteration t+1, thanks to the constructions <ref> and <ref>. In particular, since the column space of P_t+1 spans the past information P_tQ_1 as in <ref>, there exists an orthonormal matrix Q̅∈ℝ^ r × r_p such that P_t+1Q̅ = P_tQ_1. Let γ = γ_t^⋆ + (Σ_2) and S =Q_1 Σ_1 Q_1^, then W_t^⋆ = γW̅_t+1 + P_t+1SP_t+1^∈𝒲̂_t+1. Therefore, ∀ X ∈𝕊^n such that (X) = b, we have
F̂_(W̅_t+1,P_t+1) (X) =
⟨ C,X ⟩ + max_W ∈𝒲̂_t+1⟨ W, -X ⟩
≥⟨ C,X ⟩ +⟨ W_t^⋆, -X ⟩
= ⟨ C- W_t^⋆,⟩ +⟨ W_t^⋆, -X ⟩ - ⟨ C,-X ⟩ + ⟨(), -X ⟩
= F̂_(W̅_t,P_t) () + ⟨ -W_t^⋆ + C - () , X - ⟩
= F̂_(W̅_t,P_t) () + ⟨α(-) , X - ⟩
= F̂_(W̅_t,P_t) () + ⟨ s_t+1, X- ⟩,
where the second equality uses ⟨(), - X ⟩ = ⟨ ( - X ⟩) ,⟩ = 0 since and X are both feasible, the fourth equality uses the optimality condition <ref>, and the fifth equality sets s_t+1 =α(-) ∈∂F̂_(W̅_t,P_t) () + 𝒩_𝒳_0() since 𝒩_𝒳_0() = { y |∀ y ∈ℝ^m }. Therefore, we have verified the model subgradient lower bound in <ref>.
§.§ Proof of <Ref>
Before presenting the proof, we first draw a technical lemma that ensures the compactness of the sub-level set in <ref> (recall that := sup_F(Ω_t) ≤ F(Ω_0) Ω_t).
Given a closed convex set A ⊆ℝ^n and a convex function f: A →ℝ,
if the optimal solution set of min_x ∈ Af(x) is compact, then the sublevel set C_ϵ = {x ∈ A | f(x) ≤ f(x^⋆) + ϵ}, where x^⋆ is a minimizer, is also compact for all ϵ > 0.
Let 𝒳 be the optimal solution set of min_x ∈ Af(x). We first note that f is closed as f is a continuous function and the domain of f is closed <cit.>. Thus, both 𝒳 and C_ϵ are closed as the sublevel set of a closed function is closed <cit.>.
We only need to prove C_ϵ is bounded. For this, it suffices to prove that the unboundedness of C_ϵ implies the unboundedness of 𝒳.
We prove this by contradiction. For any ϵ > 0, suppose C_ϵ is unbounded. Since C_ϵ is convex and closed, there exists a direction d ∈ℝ^n such that lim_t →∞x^⋆ + t d_2 = ∞, where x^⋆ is an optimal solution, and x^⋆ + t d ∈ C_ϵ, ∀ t ≥ 0 <cit.>. If there exists a point x̂ on this half-line such that x̂ = x^⋆ + t̂ d, t̂ > 0, and f(x̂) > f(x^⋆), then according to <cit.>, we have
f(x^⋆ + td) - f(x^⋆)/t≥f(x^⋆ + t̂ d) - f(x^⋆)/t̂ =:k > 0, ∀ t ≥t̂,
which implies f(x^⋆ + td) ≥ f(x^⋆) + t k, ∀ t ≥t̂. Thus, we have
lim_t →∞ f(x^⋆ + td) = ∞.
However, this contradicts the assumption that x^⋆ + t d ∈ C_ϵ for all t ≥ 0. This indicates that f(x^⋆ + td) = f(x^⋆), ∀ t ≥ 0, and consequently 𝒳 is unbounded, contradicting its compactness; this completes the proof.
To lighten the notation, for the rest of the section, we use := F̂_(W̅_t,P_t) to denote the approximate model at step t. For convenience, we restate <Ref> below.
In , let β∈(0,1), ≥ 1, ≥ 0, α>0, r= +,
ρ > 2+1. Then, at every descent step t>0, the following results hold.
* The approximate primal feasibility for Ω_t+1 satisfies
λ_min (Ω_t+1) ≥-(F()-F())/ +1, and 𝒜(Ω_t+1) = b.
* The approximate dual feasibility for (, ) satisfies
≽ 0, and -C+ () ^2 ≤2 α/β (F()-F()).
* The approximate primal-dual optimality for (, , ) satisfies
⟨ C, ⟩ - ⟨ b,⟩ ≥ -ρ (F()-F())/ +1 - √(2 α/β (F()-F())),
⟨ C, ⟩ - ⟨ b,⟩ ≤1-β/β (F()-F()) + √(2 α/β (F()-F())),
where := sup_F(Ω_t) ≤ F(Ω_0) Ω_t
is bounded due to the compactness of (see <ref>).
First, by the construction of <ref>, we have W_t^⋆ = γ^⋆_t W̅_t + P_t S_t^⋆ P_t. Since W̅_t ∈𝕊_+^n, γ^⋆_t ≥ 0 and S_t^⋆∈𝕊^r_+, W_t^⋆ is positive semidefinite. The inequality can be established by observing
F()-F(X_⋆)/β ≥F()-F()/β
≥ F() - ()
≥α/2-^2
= α/21/α^2W_t^⋆-C+ (y_t^⋆)^2
= 1/2αW_t^⋆-C+ (y_t^⋆)^2,
where the second inequality comes from the definition of serious step in <ref>, the third inequality uses the fact that is the minimizer of (X) + α/2X-^2 and (X) ≤ F(X), ∀ X ∈𝕊^n, and the first equality comes from the optimality condition in <ref>.
Second, the feasibility of comes naturally from the problem construction <ref>. Further, the definition of serious step <ref> implies a cost drop. Hence, it follows
F()-F() ≥ F()-F()
= ⟨ C,-⟩ + ρmax{λ_max(-),0}
= ⟨ C,-⟩ - ρmin{λ_min(),0}.
To further lower bound the first term, we note that <ref> ensures that <ref> are both feasible, and for any pair of optimal primal and dual solution (,), complementary slackness holds, i.e., ⟨, ⟩ = 0. Therefore,
⟨ C, -⟩ = ⟨+𝒜^* , -⟩
= ⟨, ⟩+⟨,-⟩+⟨𝒜^* , -⟩
= ⟨, ⟩ + ⟨𝒜^* , -⟩
= ⟨, ⟩
≥_* min{λ_min(),0}
≥min{λ_min(),0},
where the third equality is due to the complementary slackness, the fourth equality uses the definition of the adjoint operator and the fact that both and are feasible. As a result, we obtain
F()-F() ≥min{λ_min(),0} - ρmin{λ_min(),0}
= (-ρ)min{λ_min(),0}
≥ -( +1)min{λ_min(),0},
where the last inequality uses the assumption ρ > 2+1. This completes the proof for the approximate primal feasibility.
Third, by the feasibility of , the duality gap follows
⟨ C, ⟩ - ⟨ b,⟩ =⟨ C,⟩ - ⟨𝒜(),⟩
= ⟨ C,⟩ - ⟨,⟩
= ⟨, ⟩ - ⟨-C+,⟩.
We first bound the second term using Cauchy inequality
|⟨𝒜^* -C+,⟩| ≤𝒜^* -C+
≤√(2 α/β (F()-F(X_⋆)))
≤√(2 α/β (F()-F(X_⋆))),
where the first inequality is due to
the approximate dual feasibility.
The lower bound on the first term follows that
⟨, ⟩ ≥_* λ_min( )
≥ -_* (F()-F(X_⋆))/+1
≥ -ρ (F()-F(X_⋆))/+1,
where the second inequality comes from
the approximate primal feasibility,
and the last inequality is by the construction _* ≤ρ in <ref>. Therefore, the duality gap is lower bounded by
⟨ C, ⟩ - ⟨ b,⟩≥ -ρ (F()-F(X_⋆))/ +1 - √(2 α/β (F()-F(X_⋆))).
Similarly, by a reformulation of descent step <ref>, we obtain
F() - β() ≤ (1-β) F().
Adding (β -1)F() from both sides and performing a simple algebra leads to
-1-β/β (F()-F()) ≤ - F() + ()
= -ρmax{λ_max(-),0} + ⟨ W_t^⋆,-⟩
≤ -⟨, ⟩,
which implies
⟨, ⟩≤1-β/β (F()-F()) ≤1-β/β (F()-F()).
Combining the last inequality with the lower bound from Cauchy inequality, we get
⟨ C, ⟩ - ⟨ b,⟩≤1-β/β (F()-F()) + √(2 α/β (F()-F(X_⋆))).
§.§ Proof of <ref>
The proof sketch of <ref> is presented in <Ref>, which relies on the results in <Ref>.
Before proving <Ref>, we draw a few technical results.
The first technical result, shown in <ref>, presents a powerful error bound that relates the distance of a point X to the optimal solution set 𝒫^⋆ with its suboptimality in F(X) in <ref>. The proof is built on the results in <cit.> and <cit.>. Given ϵ >0, we define the sublevel set for the objective function F(X) in <ref> as
𝒫_ϵ:={X ∈𝕊^n | F(X) ≤ F() + ϵ, (X) = b }.
Under <Ref>, choosing ρ as in <ref>, there exist some constants ζ≥ 1 and μ>0 such that
F(X)-F() ≥μ·^ζ(X,), ∀ X ∈𝒫_ϵ.
Furthermore, if the strict complementarity holds for <Ref>, the exponent term can be chosen as ζ = 2.
By <ref>, we know the optimal solution set of min_X∈𝒳_0 F(X) = ⟨ C, X⟩ + ρmax{λ_max (-X) ,0 } is the same as the optimal solution set . <ref> further ensures that is compact. By <ref>, we know for any ϵ > 0, the sub-level set 𝒫_ϵ is also compact.
The optimal solution set can be rewritten as 𝕊^n_+ ∩ℒ, where ℒ= {X | (X) = b, ⟨ C,X ⟩ = }.
Then, the result in <cit.> ensures that there exists two constants k_1 > 0 and k_2 > 0 such that
^2^d(X,) ≤ k_1 ((X,ℒ) + (X,𝕊^n_+))
≤ k_1(k_2 |⟨ C,X ⟩ - ⟨ C,⟩| + n max{λ_max(-X),0})
≤ k_1 max{k_2,n} ( |⟨ C,X ⟩ - ⟨ C,⟩| + max{λ_max(-X),0}), ∀ X ∈𝒫_ϵ,
where the second inequality applies the fact that (X,𝕊^n_+) ≤ n max{λ_max(-X),0}) for any X ∈𝕊^n and (X,ℒ) ≤ k_2 |⟨ C,X ⟩ - ⟨ C,X^⋆⟩| since ℒ is an affine space.
To establish the relationship of F(X)-F() and (X,𝒫^⋆) in <ref>, it would be sufficient to prove that for some constant ρ≥ 1, the following inequality holds
F(X) - F()
= ⟨ C,X ⟩ - ⟨ C,⟩ + ρmax{λ_max(-X),0}
≥|⟨ C,X ⟩ - ⟨ C,X^⋆⟩| + max{λ_max(-X),0}.
Indeed, if |⟨ C,X ⟩ - ⟨ C,⟩| = ⟨ C,X ⟩ - ⟨ C,⟩, the inequality <ref> holds naturally. On the other hand, if |⟨ C,X ⟩ - ⟨ C,⟩| = ⟨ C,⟩ - ⟨ C,X ⟩, <ref> is equivalent to
2⟨ C,X ⟩ - 2 ⟨ C,⟩ + (ρ -1) max{λ_max(-X),0}≥ 0,
which is the same as
⟨ C,X ⟩ + (ρ -1)/2max{λ_max(-X),0}≥⟨ C,⟩, ∀ X ∈𝒫_ϵ.
This inequality holds as long as (ρ -1)/2 > since the function on the left-hand side becomes an exact penalization in the form of <ref> (see <Ref>). This condition is equivalent to ρ > 2 + 1.
Therefore, upon choosing ρ > 2 +1, the inequality <ref> always holds.
Combining <ref> with <Ref> leads to
F(X)-F() ≥μ·^2^d(X,), ∀ X ∈𝒫_ϵ,
where μ = 1/(k_1 max{k_2,n}) > 0 and d is the singularity degree of <ref>, which is bounded <cit.>. Furthermore, if strict complementarity holds, the singularity degree d is at most one <cit.>.
The following technical lemma characterizes the quality of eigenvalue approximation.
Given X∈𝕊^n, define the gap between its r-th and (r+1)-th largest eigenvalues as δ = λ_r(X)-λ_r+1(X), denote Λ_r,n(X)=max{|λ_r+1(X)|,|λ_n(X)| }, and let 𝒞^+_r (X):={VSV^⊤ | (S)≤1, S≽0, S∈𝕊^r}, where V ∈ℝ^n × r contains the r orthonormal eigenvectors corresponding to the largest r eigenvalues of X. Then for any Y∈𝕊^n, the function f_X(Y) := max{λ_1(Y),0}-max_W∈𝒞^+_r (X)⟨ W,Y ⟩ satisfies
0 ≤ f_X(Y) ≤8 Y-X^2 Λ_r,n(X)/δ^2 + (8+√(2)+16) Y-X^2 /δ.
We are now ready to prove <Ref>. For convenience, we state a refined version of <Ref> below, which shows that the lower approximation model (X) becomes quadratically close to the true penalized cost function F(X) after some finite number of iterations T_0.
Suppose strong duality and strict complementarity hold for <ref> and <ref>. Let r_c≥. After T_0 iterations, there exists a constant η > 0 (independent of ϵ) such that
(X) ≤ F(X) ≤ (X) + η/2X-^2, ∀ X ∈𝒳_0.
In particular, we can choose η =4 ρmax{144 sup_∈_op/δ^2,9(8√(2)+16)/δ}, where
δ := inf_∈sup_r ≤λ_r(-) - λ_r+1(-)
is the eigenvalue gap parameter.
The proof is divided into two cases: unique solution and multiple solutions. We first consider the case when ∈ is unique. Since the primal solution is unique, we can choose T_0 large enough to ensure the iterate is sufficiently close to , in the sense that -_op≤δ/3 (see <ref> for an estimate on the number of iterations to achieve this), where δ is λ_(-) - λ_+1(-) in this case. By Weyl's inequality and the triangle inequality, we know
λ_(-) - λ_(-)≤δ/3 , λ_+1(-) - λ_+1(-)≤δ/3 , and _op - _op≤δ/3≤δ.
Summing up the first two inequalities with the definition of δ yields
λ_(-) - λ_+1(-) ≥δ/3
and the fact δ≤_op implies
_op≤ 2_op.
Let P ∈ℝ^n × denote the matrix formed by orthonormal eigenvectors corresponding to the largest eigenvalues of -, we have
F(X) - (X)
= ρmax{λ_max(-X),0} - ρmax_γ≥ 0,γ + (S) ≤ 1, S ∈𝕊^r_+⟨γW̅_t + P_tSP_t^,-X ⟩
≤ρmax{λ_max(-X),0} - ρmax_(S) ≤ 1, S ∈𝕊^_+⟨ PSP^,-X ⟩
≤ρ( 9 × 8 X-^2 Λ_r,n(-)/δ^2 + 9 ×(8+√(2)+16) X-^2 /δ)
≤ρ(144 X-^2 _op/δ^2 + 9(8√(2)+16) X-^2 /δ)
≤ 2ρmax{144_op/δ^2,9(8√(2)+16)/δ}X-^2,
where the first inequality is due to the restriction on the feasible set since we choose ≥ (recall r = +),
the second inequality applies <ref> and <ref>, and the third inequality is due to the fact Λ_r,n(-) ≤_op and <ref>. As a result, choosing η =4 ρmax{144_op/δ^2,9(8√(2)+16)/δ} does provide a quadratic closed upper bound in <ref>.
Second, we consider the case of containing multiple solutions. Similar to the case of a unique solution, we choose T_0 large enough to ensure inf_∈-_op≤δ/3. Therefore, there exists a ∈ and r≤ such that -_op≤δ/3 (see <ref> for an estimate on the number of iterations to achieve this) and λ_r(-) - λ_r+1(-) ≥δ, which implies λ_r(-) - λ_r +1(-) ≥δ/3 and _op≤ 2 sup_∈_op. As a result, a similar argument as in the case of a unique solution can be made by replacing and _op with r and sup_∈_op. We conclude that choosing η = is sufficient to ensure a quadratically close upper bound in <ref>.
<Ref> will allow us to prove the linear convergence in terms of the distance to the optimal solution set (we show it in <Ref>), i.e., <ref> in <Ref>. To further prove the linear convergence in terms of cost value <ref> in <Ref>, we need another two technical results: <ref>.
Fix a minimizer in . Let F_α(W) := min_ X ∈𝒳_0 F(X) + α/2X-W^2 denote the proximal mapping. For any W ∈𝒳_0\{}, the following result holds[We noticed a typo in <cit.> and we correct it here. Although <cit.> considers an unconstrained case, the same result holds for the case with a convex constraint set here.].
F_α(W) ≤ F(W) - α/2W - ^2 Φ(F(W)-/αW-^2),
where
Φ(t) =
t^2, if 0≤ t ≤ 1,
-1+2t, if t> 1.
This is the constrained version of the result in <cit.>. The proof follows the same argument in <cit.> and we omit it here.
<Ref> above shows that the progress made by a proximal mapping can be calculated precisely. The following result shows the linear decrease of a descent step in the objective value under the assumption of quadratic growth.
Suppose <Ref> are satisfied.
If the function further satisfies the following quadratic growth property
F(Ω_t) -≥ c ·^2(Ω_t,), ∀Ω_t∈𝒳_0 ,
where c > 0 is a constant, then it follows that
F() - ≤F()- (( )+α/2 -^2)/min{c/2 α,1/2}.
If further, a descent step occurs, then the reduction in cost value gap satisfies
F() - ≤ (1- μβ) (F() - ),
where μ = min{c/2 α,1/2}.
By definition on F_α(X), we know
F_α() = min_ X ∈𝒳_0 F(X) + α/2X-^2
≥min_X ∈𝒳_0(X) + α/2X-^2
= () + α/2-^2,
where is the minimizer. By <ref>, if F()- ≤α^2(,) = α-^2, where is the closest point to in , and ≠, we have
F_α() ≤ F() - (F()-)^2/2α-^2 .
Combining this inequality with <ref> leads to
(F()- )^2/2α-^2≤ F() - () - α/2-^2.
By applying <ref> and this inequality, we get
F()- ≤ F() - () - α/2-^2/c/2 α.
On the other hand, if F()- > α^2(,) =α-^2, where is the closest point to , <ref> ensures
F_α() ≤ + α/2 - ^2 < 1/2 (F()+).
Combining <ref> with this inequality yields
-1/2(F()-) < - () - α/2-^2.
By adding F() on both sides, we conclude
F()-≤ 2(F() - () - α/2-^2).
Combining <ref> and the above inequality completes the proof for <ref>.
If, further, a descent step occurs, then by a reformulation of the definition of the descent step, we have
F() ≤ (1-β) F() +β().
By <ref>, we have
F() - ≤F()-()-α/2 -^2/μ≤F()-()/μ.
Combining the above two inequalities leads to
F() ≤ (1-β)F() + β F()- βμ (F()-)
= F() - βμ (F()-).
Subtracting from both sides yields
F() - ≤ (1-βμ) (F()-).
With <Ref>, we are ready to prove <ref>, which is the key to proving <ref> (as discussed in <Ref>).
For convenience, we restate <ref> below
Under the conditions in <ref>, for any α≥η and t ≥T_0, <Ref> with β∈ (0, 1/2] takes only descent steps and guarantees two contractions:
(,) ≤√(α/2/μ +α/2)(,),
F() - F() ≤(1- min{μ/2 α,1/2}β) (F() - F()),
where μ is the quadratic growth constant in <ref>.
We first show that after T_0 iterations <Ref> will only take descent steps. Without loss of generality, we set η = α since we require η≤α. Since (X) +α/2X-^2 is α strongly convex, applying the first order inequality for a differentiable strongly convex function leads to the following, ∀ X ∈𝕊^n such that (X) = b,
(X) +α/2X-_F^2 ≥F̅_t() +α/2-^2 + ⟨ g_t,X-⟩ + α/2X - ^2
= F̅_t() +α/2-^2 + ⟨() ,X-⟩ + α/2X - ^2
= () +α/2-^2 + α/2-X^2,
where g_t is the gradient of (X) + α/2X-^2 evaluated at and the second equality uses the optimality condition in <ref>. By setting X = in <ref> and combining the fact F(X) ≥(X) for all X ∈𝕊^n, we obtain
F() ≥() ≥() +α/2-^2 + α/2-^2,
= F̅_t() + α-^2,
which implies
F() - () ≥α-^2.
On the other hand, letting X = in <ref> yields
F() - α/2-^2 ≤F̅_t().
Therefore, combining <ref> leads to
F() - F() ≥α/2-^2 .
By the assumption on β∈ (0,1/2 ],
β (F()-F̅_t()) ≤1/2 (F()-F̅_t())
≤1/2 (F()-F()) +α/4-^2
≤ F()-F(),
which shows that a descent step will indeed occur.
Second, we show that (,) shrinks at a linear rate. Upon applying <ref> with X=, we obtain
μ·^2(,) ≤ F()-F().
To measure the relationship between F()-F() and (,), we note that for any ∈, with the replacement of = since a descent step happens, <ref> ensures
F() +α/2-^2 ≥F̅_t() +α/2 -^2 + α/2 - ^2,
where we also use the global lower bound property of the approximation model.
Combining <ref> and <ref>, we get
F() +α/2-^2 ≥ F() - α/2-^2 +α/2 -^2 + α/2 - ^2
= F() + α/2 - ^2,
which implies
F() - F() ≤α/2-^2 - α/2 - ^2.
With <ref>, we reach
μ·^2(,) ≤α/2-^2 - α/2 - ^2.
By setting to be the closest point to , it follows
μ·^2(,) ≤α/2^2(,)- α/2 - ^2
≤α/2^2(,) - α/2^2(,),
which implies
(,) ≤√(α/2/μ +α/2)(,).
Since the contraction factor √(α/2/μ +α/2) < 1, the distance is converging geometrically.
Third, we show that the cost value gap F() - shrinks geometrically. This is in fact a direct application of <Ref> and <Ref>. Since we have shown above that only descent steps occur after T_0 iterations, the objective value decreases linearly as
F() - F() ≤(1- min{μ/2 α,1/2}β) (F() - F()).
§.§ Bound on T_0
In the proof of <ref>, we choose T_0 to be the number of iterations that ensures the iterate generated by is δ/3-close to the optimal solution set, i.e., inf_∈ - _2 ≤δ/3, where δ := inf_X ∈sup_r ≤λ_r(-X) - λ_r+1(-X).
The bound can be obtained by applying <ref>.
By the fact that the Frobenius norm dominates the spectral norm and by <ref>, we know
- _op≤ - = (,) ≤√( (F()-F())/μ),
where μ is the quadratic growth constant in <ref>.
Therefore, choosing F() - F() ≤μδ^2/9 is sufficient to ensure the iterate is δ/3 close to the optimal solution set . Using the bound for the maximal number of iterations under the quadratic growth assumption in <ref>, the bound on T_0 can be computed as
,
where we use the fact that F(X) is max{C_op, 1}-Lipschitz.
§ FURTHER COMPUTATIONAL DETAILS
§.§ Data generation for random SDPs
To examine the influence of on <Ref> and <Ref>, we randomly generate data that satisfy strict complementarity and have the constant trace property on the dual variable Z in <ref>. We first randomly generate a positive definite matrix S ∈𝕊^n_++ and the primal constraint matrices A_1, …, A_m ∈𝕊^n with (A_i) = 0, i = 1 ,…,m. The matrix S is decomposed as
S =U_1 Σ_1 U_1^ + U_2 Σ_2 U_2^,
where Σ_1 ∈𝕊^r is the diagonal matrix of the largest r eigenvalues, U_1 collects the corresponding orthonormal eigenvectors, and r is a given number. We then set = U_1 Σ_1 U_1^, =U_2 Σ_2 U_2^, and b = (). To generate C, we randomly generate y^⋆∈ℝ^m and set C = + (y^⋆). The procedure is presented in <Ref>.
Indeed, the triple (, ,) generated by <Ref> is an optimal solution. To see this, we note that both and are positive semidefinite and satisfy the affine constraints in <ref> and <ref>, respectively. Complementary slackness is satisfied as ⟨, ⟩ = 0. There also exists a strictly feasible point in the primal SDP, since ( + t I) = b and + t I becomes positive definite as long as t is sufficiently large.
The constant trace property of Z can be observed from (Z) = (C - (y)) = (C) - ∑_i=1^m y_i(A_i) = (C) as (A_i) = 0, i=1,…,m.
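A minimal numpy sketch of this generation procedure follows; the specific way of drawing S and the A_i is our own choice for illustration (any traceless symmetric A_i and positive definite S would do).

```python
import numpy as np

def generate_random_sdp(n, m, r, seed=0):
    """Generate (C, {A_i}, b) with a known strictly complementary solution pair:
    rank(X*) = r, rank(Z*) = n - r, X* Z* = 0, and tr(A_i) = 0 so that tr(Z) is
    constant over the dual feasible set."""
    rng = np.random.default_rng(seed)

    A = []
    for _ in range(m):                        # traceless symmetric constraint matrices
        G = rng.standard_normal((n, n))
        G = (G + G.T) / 2.0
        A.append(G - (np.trace(G) / n) * np.eye(n))

    G = rng.standard_normal((n, n))           # random positive definite S
    S = G @ G.T + np.eye(n)
    w, U = np.linalg.eigh(S)                  # eigenvalues in ascending order
    U1, U2 = U[:, n - r:], U[:, :n - r]
    X_star = U1 @ np.diag(w[n - r:]) @ U1.T   # top-r eigenspace
    Z_star = U2 @ np.diag(w[:n - r]) @ U2.T   # remaining eigenspace

    b = np.array([np.trace(Ai @ X_star) for Ai in A])
    y_star = rng.standard_normal(m)
    C = Z_star + sum(yi * Ai for yi, Ai in zip(y_star, A))
    return C, A, b, X_star, y_star, Z_star
```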
§.§ Exact penalty parameter for SDPs from sum-of-squares optimization
One requirement for running <Ref> is to choose a valid penalty parameter ρ in <Ref>. In the SDPs from sum-of-squares optimization over a single sphere constraint, one valid lower bound for the parameter ρ can be obtained a priori. We summarize this result in the following lemma.
Consider a kth-order SOS relaxation of a problem minimizing a polynomial p_0(x) over one single sphere constraint, i.e.,
max_γ, σ_0, ψ_1 γ
subject to p_0(x) - γ -ψ_1 h(x) = σ_0 ,
σ_0 ∈Σ[x]_n,2k, ψ_1 ∈[x]_n,2(k-⌈ h ⌉),
where h(x) = ‖x‖^2 -R̅ with R̅> 0, ⌈ h ⌉ = ⌈deg(h)/2 ⌉, [x]_n,2(k-⌈ h ⌉) denotes the set of real polynomials in n variables of degree at most 2(k-⌈ h ⌉), and Σ[x]_n,2k denotes the cone of SOS polynomials in [x]_n,2k.
Then, any ρ≥(1+R̅)^k is a valid exact penalty parameter that satisfies ρ > (), where is any dual optimal solution the SDP from <Ref>.
Let us first clarify some necessary notations: For x ∈ℝ^n and α∈ℕ^n, we define the monomial x^α = x_1^α_1x_2^α_2⋯ x_n^α_n, denote |α| = ∑_i=1^n α_i, and let ℕ^n_t = {α∈ℕ^n | |α| ≤ t }.
The proof is a simple adaptation of <cit.>.
Given k∈ℕ, let (1+‖x‖^2)^k = ∑_α∈ℕ^n_kθ_k,α x^2 α, where {θ_k,α} is the sequence of coefficients of the polynomial (1+‖x‖^2)^k, and let P_n,k = diag(√(θ_k,α))_α∈ℕ^n_k.
By the result of <cit.>, the variable P_n,k Z P_n,k^ has a constant trace property, i.e., (P_n,k Z P_n,k^) = (R̅+1)^k, where Z∈𝕊^n+kn_+ is any feasible dual point of <ref>. By the construction of P_n,k, the diagonal elements of P_n,k are always greater or equal to 1 and some of them are strictly greater than 1. Therefore,
(Z) < ( P_n,k Z P_n,k^) = (1+R̅)^k.
Since this holds for any feasible Z, taking Z = completes the proof.
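The coefficients θ_{k,α} used in this argument are easy to generate explicitly by the binomial and multinomial theorems; the following small helper is our own sketch (the exhaustive enumeration over ℕ^n_k is intended only for small n and k).

```python
from itertools import product
from math import comb, factorial

def theta_coefficients(n, k):
    """Coefficients theta_{k,alpha} of x^{2*alpha} in (1 + ||x||^2)^k for |alpha| <= k:
    theta_{k,alpha} = C(k, |alpha|) * |alpha|! / (alpha_1! ... alpha_n!)."""
    thetas = {}
    for alpha in product(range(k + 1), repeat=n):
        s = sum(alpha)
        if s > k:
            continue
        multinom = factorial(s)
        for ai in alpha:
            multinom //= factorial(ai)
        thetas[alpha] = comb(k, s) * multinom
    return thetas
```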
|
http://arxiv.org/abs/2307.07233v2 | 20230714091016 | Improving the scalability of Gaussian-process error marginalization in gravitational-wave inference | [
"Miaoxin Liu",
"Xiao-Dong Li",
"Alvin J. K. Chua"
] | astro-ph.IM | [
"astro-ph.IM",
"gr-qc"
] |
Department of Physics, National University of Singapore, Singapore 117551
School of Physics and Astronomy, Sun Yat-sen University Zhuhai Campus, 2 Daxue Road, Tangjia, Zhuhai 519082, P.R. China
CSST Science Center for Guangdong-Hong Kong-Macau Great Bay Area, Zhuhai 519082, P.R. China
[email protected]
Department of Physics, National University of Singapore, Singapore 117551
Department of Mathematics, National University of Singapore, Singapore 119076
The accuracy of Bayesian inference can be negatively affected by the use of inaccurate forward models. In the case of gravitational-wave inference, accurate but computationally expensive waveform models are sometimes substituted with faster but approximate ones. The model error introduced by this substitution can be mitigated in various ways, one of which is by interpolating and marginalizing over the error using Gaussian process regression. However, the use of Gaussian process regression is limited by the curse of dimensionality, which makes it less effective for analyzing higher-dimensional parameter spaces and longer signal durations. In this work, to address this limitation, we focus on gravitational-wave signals from extreme-mass-ratio inspirals as an example, and propose several significant improvements to the base method: an improved prescription for constructing the training set, GPU-accelerated training algorithms, and a new likelihood that better adapts the base method to the presence of detector noise. Our results suggest that the new method is more viable for the analysis of realistic gravitational-wave data.
Improving the scalability of Gaussian-process error marginalization in gravitational-wave inference
Alvin J. K. Chua
August 12, 2023
===================================================================================================
§ INTRODUCTION
The field of gravitational-wave (GW) astronomy has witnessed remarkable progress so far, with the detection of approximately 90 compact binary coalescences (stellar-mass binary mergers) by the LIGO-Virgo-KAGRA Collaboration
<cit.>. Future space-based GW detectors operating in the millihertz frequency band, namely LISA <cit.>, TianQin <cit.>, and Taiji <cit.>, will lead to the discovery of new kinds of sources such as binary white dwarfs <cit.>, massive binary black-hole mergers <cit.>, stellar-mass binary inspirals <cit.>, and extreme-mass-ratio inspirals (EMRIs) <cit.>. Gravitational waves generated by all of these extreme astronomical events carry unique information, providing novel insights into the physics and astronomy of such phenomena.
To achieve scientific goals in GW astronomy, it is essential to identify and characterize GW signals within a noisy data stream. The characterization process involves the inference of astrophysical parameters based on a certain GW source model. The accuracy of parameter estimation is constrained by two factors: the statistical error caused by the noise and the theoretical error due to the use of an inaccurate waveform model. It is known that, for both ground-based <cit.> and space-based GW detectors <cit.>, the statistical error decreases as the signal-to-noise ratio (SNR) increases, while the theoretical error remains constant; this may lead to the exclusion of the true parameter values with high statistical significance.
In GW data analysis, the deliberate incurrence of theoretical error is a common scenario, as it occurs whenever fast approximate models are used in lieu of more accurate but computationally costly models/simulations (e.g., waveforms from numerical-relativity simulations <cit.>).
To account for the presence of (known) theoretical error, Gaussian-process regression (GPR) <cit.>, a machine-learning technique, has been proposed as a method for interpolating and marginalizing over such error <cit.>. The method fits a Gaussian process to a small set of precomputed waveform differences between an accurate fiducial model and an approximate one. This process then serves as a prior distribution for the waveform difference, and can be marginalized over in the standard Bayesian likelihood with the approximate model. The GPR marginalized likelihood, which is informed by accurate waveforms, corrects the search under approximate templates and accounts for any residual model inaccuracy with (generally) more conservative error estimates. This method has since been applied in follow-up studies <cit.>.
Previous research has demonstrated the potential of the GPR marginalized-likelihood method to mitigate theoretical error in low-dimensional cases <cit.>. However, the curse of dimensionality is a major challenge that hinders the use of GPR even in general applications. The number of training points required to cover a parameter space typically increases exponentially with its dimensionality, while the computational complexity of GPR increases cubically with the size of the training set. In the marginalized-likelihood method, this not only slows down the offline training phase but also the online evaluation phase, negating the speed advantage of using approximate templates.
In this study, we propose multiple improvements to the base GPR method that better adapt it to high-dimensional cases; the most notable is the use of Fisher-information-based Latin hypercube sampling (LHS) to generate a more informative training set with fewer points. We illustrate the efficacy of our approach by applying it to EMRI parameter estimation with a representative “accurate” signal model <cit.> and an artificially constructed “approximate” template model. Even though accurate next-generation EMRI models <cit.> will not be significantly more costly than existing approximate ones (due to recent computational developments <cit.>), we choose EMRIs as our example here because the intrinsic information content and computational complexity of their waveforms epitomize most of the difficulties that inhibit the use of the base GPR method.
The remainder of the paper is organized as follows. Sec. <ref> provides a brief overview of the marginalized-likelihood method, while Sec. <ref> introduces the technique of GPR in the context of waveform interpolation. The training of the GPR model using a precomputed set of waveform differences is discussed in Sec. <ref>, while Sec. <ref> reviews the construction of the training set as described in previous studies. These sections provide important background information for understanding the application of the GPR method in GW analysis.
In Sec. <ref>, we describe and demonstrate our proposed improvements to the base GPR method. We introduce a parameter re-scaling strategy in Sec. <ref>, and discuss the new hyperparameters to be trained in Sec. <ref>. We also compare the LHS training set construction method to the old method in Sec. <ref>, concluding that the former contains more information than the latter with the same density. We describe computational enhancements to training in Sec. <ref>. We present a new form of the GPR marginalized likelihood that properly treats the presence of detector noise in Sec. <ref>, and discuss an iterative approach to the GPR method in Sec. <ref>. Our model is tested on data with simulated detector noise in Sec. <ref>. Finally, we summarize the new techniques implemented in this work and propose possible computational strategies for further extensions in Sec. <ref>.
§ BACKGROUND: THE BASE GPR METHOD
§.§ Marginalized likelihood
For a two-channel GW detector, the single source data can be expressed as
x(t) = s(t)+n(t),
where s≡(s_I,s_II) is the source signal and n≡(n_I,n_II) is the detector noise. In the standard matched-filtering framework, the data is compared against waveform templates h≡(h_I,h_II) that are parametrized by some astrophysical parameters θ, while the detector noise is treated as a Gaussian and stationary stochastic process.
The Bayesian likelihood of the source parameters is thus <cit.>
L∝exp(-1/2⟨ x-h|x-h⟩),
where the noise-weighted inner product ⟨·|·⟩ on the space of finite-length time series is given by
⟨ a|b⟩ = 4 Re∑_f>0^f_N df∑_χ=I,II ã_χ^*(f) b̃_χ(f)/S_n,χ(f),
with overtildes denoting discrete Fourier transforms, f_N denoting the Nyquist frequency, and S_n,χ denoting the one-sided power spectral density of the channel noise n_χ. The optimal SNR of a waveform template h is given in terms of this inner product as √(⟨ h|h⟩), while the overlap between two templates is defined as ⟨ h_1|h_2⟩/√(⟨ h_1|h_1⟩⟨ h_2|h_2⟩).
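To fix these conventions in code, the following minimal sketch (Python/NumPy) evaluates the discrete inner product of Eq. (<ref>), the optimal SNR, and the overlap for two-channel data; the toy waveform, sampling choices, and flat placeholder PSD are assumptions for illustration only.

import numpy as np

def inner_product(a, b, psd, df):
    # Noise-weighted inner product for two-channel frequency-domain series.
    # a, b : complex arrays of shape (2, Nf) -- channels I and II at positive frequencies.
    # psd  : real array of shape (2, Nf)     -- one-sided PSD of each channel.
    return 4.0 * np.real(np.sum(np.conj(a) * b / psd)) * df

def snr(h, psd, df):
    return np.sqrt(inner_product(h, h, psd, df))

def overlap(h1, h2, psd, df):
    return inner_product(h1, h2, psd, df) / (snr(h1, psd, df) * snr(h2, psd, df))

# Toy example: two channels of a monochromatic signal sampled at 0.2 Hz.
dt, n_samp = 5.0, 2**20
t = np.arange(n_samp) * dt
h_t = np.vstack([np.sin(2 * np.pi * 5e-3 * t), np.cos(2 * np.pi * 5e-3 * t)])
h_f = np.fft.rfft(h_t, axis=1)[:, 1:] * dt            # positive frequencies only
freqs = np.fft.rfftfreq(n_samp, dt)[1:]
df = freqs[1] - freqs[0]
psd = np.ones((2, freqs.size))                        # flat placeholder PSD
print(snr(h_f, psd, df), overlap(h_f, h_f, psd, df))  # overlap of a template with itself is 1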
If an accurate template model h_acc(θ) is used for the analysis, then the source signal s is well described by the model at the actual parameter values θ_true, i.e., s=h(θ_true). The maximum-likelihood estimate θ_ML does not generally equal θ_true due to the presence of detector noise, but the parameter error θ_ϵ=θ_ML-θ_true is purely statistical in nature as it arises only from n, and is thus fully described by the posterior. On the other hand, if an approximate template model h_app(θ) is used for the analysis, then the parameter error now contains an additional contribution from the difference h_ϵ=h_app-h_acc. For high-SNR sources, this theoretical-error term may exceed the statistical uncertainties described by the posterior, and thus become the limiting factor in obtaining accurate parameter estimates <cit.>.
The bias from theoretical error can be mitigated by specifying a suitable prior probability distribution p(h_ϵ) for the waveform difference h_ϵ, then marginalizing over h_ϵ in Eq. (<ref>). This “marginalized likelihood” is given by
ℒ∝∫_WDh_ϵ p(h_ϵ)L_acc,
where L_acc is Eq. (<ref>) with h=h_acc=h_app-h_ϵ, and W is the space of waveform differences. In <cit.>, Moore & Gair proposed using GPR to define a Gaussian prior distribution, thus allowing the above integral to be analytically approximated (since L_acc is also formally Gaussian).
§.§ Gaussian process regression
In the GPR method, h_ϵ∈ W may be modeled as a Gaussian process over the parameter space Θ:
h_ϵ(θ)∼𝒢𝒫(h̅_ϵ,k),
where h̅_ϵ is the (vector-valued) mean of the process, and k(θ,θ') is the covariance function of the process. Then the set of waveform differences {h_ϵ(θ_i)∈ W | i=1,2,...,N} corresponding to a small training set of parameter points {θ_i∈Θ | i=1,2,...,N} has a Gaussian probability distribution 𝒩(h̅_ϵ,𝐊) on W^N <cit.>:
p([h_ϵ(θ_i)])=1/((2π)^N det𝐊)exp(-1/2𝐯^T𝐊^-1𝐯),
where the covariance matrix 𝐊 and waveform difference vector 𝐯 are expressed respectively by
[𝐊]_ij=k(θ_i,θ_j),
[𝐯]_i=h_ϵ(θ_i)-h̅_ϵ.
Note that the normalization constant in Eq. (<ref>) is the square of its usual value for a multivariate Gaussian, due to the two independent channels of the process. Also, 𝐯 is a deliberate abuse of notation to cast Eq. (<ref>) in the familiar Gaussian functional form; its components are themselves vectors in W equipped with the inner product (<ref>).
The quadratic form in Eq. (<ref>) may be written as
𝐯^T𝐊^-1𝐯=tr (𝐊^-1𝐌),
with
[𝐌]_ij=[𝐯𝐯^T]_ij=⟨ h_ϵ(θ_i)-h̅_ϵ|h_ϵ(θ_j)-h̅_ϵ⟩.
In <cit.> and follow-up work, the mean of the process was taken to be the zero vector. Here we use a nonzero but constant mean h̅_ϵ, which is simply chosen to be the mean of the training set of waveform differences {h_ϵ(θ_i) | i=1,2,...,N}; doing so improves the regression performance at negligible computational cost. We also remove from Eq. (<ref>) the factor of γ that was introduced in Eq. (15) of <cit.>. This quantity is defined as the (empirical) ratio between the frequency-averaged power spectral densities of the waveform differences and the detector noise, and was added as a “fudge factor” to the base method to prevent the estimate of statistical error from being dominated by the GPR variance when the noise realisation is nonzero. Here, we treat the noise in a more principled way by modifying the definition of the marginalized likelihood in the base method; see Sec. <ref>.
For any new parameter point θ, the enlarged set {h_ϵ(θ_i),h_ϵ(θ)} is again normally distributed with mean h̅_ϵ and the covariance matrix
𝐊_*=[[ 𝐊 𝐤_*; 𝐤_*^T k_** ]],
where
[𝐤_*]_i=k(θ_i,θ), k_**=k(θ,θ).
Since {h_ϵ(θ_i)} is known, the conditional probability distribution of h_ϵ(θ) given {h_ϵ(θ_i)} is also Gaussian:
p(h_ϵ(θ))∝1/σ^2exp(-1/2⟨ h_ϵ(θ)-μ|h_ϵ(θ)-μ⟩/σ^2),
where μ(θ) and σ^2(θ) are given respectively by
μ=𝐤_*^T𝐊^-1𝐯+h̅_ϵ,
σ^2=k_**-𝐤_*^T𝐊^-1𝐤_*.
Note that 𝐊^-1𝐯 in Eq. (<ref>) and 𝐊^-1 in Eq. (<ref>) have nothing to do with θ, and can thus be precomputed. The GPR mean μ(θ) is essentially an interpolant for h_ϵ(θ), with associated (squared) error given by the GPR variance σ^2(θ). We may thus define a new GPR-informed template model as
h_GPR=h_app-μ,
which approximates h_acc since h_acc=h_app-h_ϵ. Eq. (<ref>) also provides the prior for h_ϵ in Eq. (<ref>), which evaluates to the GPR marginalized likelihood (of the base method):
ℒ∝1/(1+σ^2)exp(-1/2⟨ x-h_GPR|x-h_GPR⟩/(1+σ^2)).
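The evaluation phase can be sketched as follows (Python/NumPy; class and variable names are placeholders, the training waveform differences are assumed to be precomputed complex frequency-domain arrays of shape (2, Nf), and inner_product is the routine from the earlier sketch). It precomputes K^-1 v and K^-1 once, and then returns the GPR mean and variance together with the base marginalized log-likelihood.

import numpy as np

def sq_exp_kernel(theta1, theta2, metric, sigma_f2):
    # Squared-exponential covariance k(theta, theta') with constant metric g_ab.
    d = theta1 - theta2
    return sigma_f2 * np.exp(-0.5 * d @ metric @ d)

class WaveformDifferenceGPR:
    def __init__(self, train_thetas, train_diffs, metric, sigma_f2, sigma_n2=1e-2):
        self.thetas = train_thetas                       # (N, n_par)
        self.mean = train_diffs.mean(axis=0)             # constant (nonzero) process mean
        V = train_diffs - self.mean                      # (N, 2, Nf), mean-subtracted
        N = len(train_thetas)
        K = np.array([[sq_exp_kernel(t1, t2, metric, sigma_f2)
                       for t2 in train_thetas] for t1 in train_thetas])
        K += sigma_f2 * sigma_n2 * np.eye(N)             # fractional training-set noise
        self.K_inv = np.linalg.inv(K)                    # precomputed once
        self.alpha = np.tensordot(self.K_inv, V, axes=(1, 0))   # K^{-1} v
        self.metric, self.sigma_f2 = metric, sigma_f2

    def predict(self, theta):
        k_star = np.array([sq_exp_kernel(theta, t, self.metric, self.sigma_f2)
                           for t in self.thetas])
        mu = np.tensordot(k_star, self.alpha, axes=(0, 0)) + self.mean   # GPR mean
        var = self.sigma_f2 - k_star @ self.K_inv @ k_star               # GPR variance
        return mu, var

def log_gpr_likelihood(x, h_app, gpr, theta, psd, df):
    # Base GPR marginalized log-likelihood; h_app is the approximate template at theta.
    mu, var = gpr.predict(theta)
    resid = x - (h_app - mu)                             # x - h_GPR
    return -0.5 * inner_product(resid, resid, psd, df) / (1.0 + var) - np.log(1.0 + var)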
§.§ Training the Gaussian process
The waveform difference model (<ref>) is specified by the (fixed) mean of the training set and the covariance function k; the latter depends on hyperparameters that are determined by fitting the Gaussian process to the training set. Previous studies in the GW field have demonstrated that the GPR interpolant and the marginalized likelihood function exhibit consistent performance across various common choices for k <cit.>. In this work, we use the squared-exponential covariance function
k(θ,θ')=σ_f^2exp(-1/2τ^2),
with
τ^2=g_ab[θ-θ']^a[θ-θ']^b,
where the hyperparameters consist only of an overall scale factor σ_f^2 and the (independent) components g_ab of some constant parameter-space metric 𝐠 on Θ.
As the size of the training set grows, the covariance matrix 𝐊 tends to become ill-conditioned. However, it is common practice to add noise to the training set, which allows for some error in the GPR fit. We transform
[𝐊]_ij→[𝐊]_ij+σ_f^2σ_n^2δ_ij,
where δ_ij is the Kronecker delta, and the fractional noise variance σ_n^2 of training-set points is taken to be uniform and fixed. The introduction of noise has the side effect of reducing the condition number of 𝐊 for more robust numerical calculations. In this work, we use an empirically determined value of σ_n^2=10^-2 throughout.
The Gaussian process is fit to the training set by maximizing (the logarithm of) Eq. (<ref>) as a function of the hyperparameters, i.e., the “hyperlikelihood” Z:
lnZ=-1/2 tr (𝐊^-1𝐌)-ln det𝐊+const.
Part of this maximization may be done analytically, as lnZ with 𝐊=σ_f^2𝐊̂ is maximized over σ_f^2 at
σ_f^2=1/(2N) tr (𝐊̂^-1𝐌).
Substituting Eq. (<ref>) back into Eq. (<ref>), we may instead maximize the scale-invariant log-hyperlikelihood
lnZ=-N ln tr (𝐊^-1𝐌)-ln det𝐊+const.
over the metric components only, which reduces the dimensionality of the hyperparameter space by one.
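A sketch of the resulting training objective is given below (Python/NumPy with SciPy; for brevity the metric is taken to be diagonal here, whereas the constrained Fisher-based parametrization of Sec. <ref> is used in the actual analysis, and the Gram matrix M of mean-subtracted waveform differences is assumed to be precomputed with the inner product defined earlier).

import numpy as np
from scipy.optimize import minimize

def neg_log_hyperlikelihood(log_metric_diag, thetas, M, sigma_n2=1e-2):
    # Scale-invariant -ln Z, with sigma_f^2 profiled out analytically.
    g = np.exp(log_metric_diag)                          # positive diagonal metric entries
    d = thetas[:, None, :] - thetas[None, :, :]          # (N, N, n_par) pairwise differences
    K_hat = np.exp(-0.5 * np.einsum('ijk,k,ijk->ij', d, g, d))
    K_hat += sigma_n2 * np.eye(len(thetas))              # training-set noise (jitter)
    _, logdet = np.linalg.slogdet(K_hat)
    trace_term = np.trace(np.linalg.solve(K_hat, M))     # tr(K^{-1} M)
    return len(thetas) * np.log(trace_term) + logdet     # = -ln Z up to a constant

# Usage (thetas of shape (N, n_par) and M of shape (N, N) assumed precomputed):
# res = minimize(neg_log_hyperlikelihood, np.zeros(thetas.shape[1]), args=(thetas, M))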
§.§ Fisher-coordinate training grid
The waveform derivative ∂ h and Fisher information matrix Γ are defined respectively as
[∂ h]_a=∂ h/∂[θ]^a, [Γ]_ab=⟨[∂ h]_a|[∂ h]_b⟩.
Let {(λ_i,𝐯̂_i)} denote the eigensystem of Γ for the SNR-normalized waveform difference, h_ϵ/√(⟨ h_ϵ|h_ϵ⟩), evaluated at some reference parameter point. One can then define a new coordinate system centered on that reference point, by taking the semi-principal axes {λ_i^-1/2𝐯̂_i} of the associated covariance hyperellipse as basis vectors. A local grid-based training set may be constructed by uniformly placing points on a grid defined by the basis vectors in these “Fisher coordinates”. Previous research <cit.> has employed this grid-based design.
The main challenge of using a grid-based training set is the issue of scalability in high-dimensional parameter spaces. In this study, we adopt an alternative sampling method and compare it to the traditional grid-based training set in Sec. <ref>. Subsequently, a training set utilizing the alternative sampling method (within a hyperellipse) is employed as our final model in Sec. <ref>.
§ IMPROVEMENTS & RESULTS
As mentioned in Sec. <ref>, we choose the example of an EMRI signal to showcase our improvements to the base GPR method, and to demonstrate the scalability of our results to the typical length and complexity of EMRI waveforms. Throughout this study, the fiducial model is taken as the augmented analytic kludge <cit.> with 5PN-adiabatic evolution <cit.> (5PN AAK), which is publicly available as part of the Fast EMRI Waveforms software package <cit.>. This choice is motivated not only by the improved realism of the model relative to previous kludges, but also by its computational efficiency (which provides a tractable fiducial likelihood for comparison with the marginalized likelihood).
To construct an approximate model, we artificially modify the time evolution of the slowly evolving orbital parameters (p,e,Y) (the quasi-Keplerian semi-latus rectum, eccentricity, and cosine of the inclination) by linearly interpolating between 5PN- and 4PN-adiabatic evolution <cit.>:
ṗ = (1-c)ṗ_5PN + c ṗ_4PN,
ė = (1-c)ė_5PN + c ė_4PN,
Ẏ = (1-c)Ẏ_5PN + c Ẏ_4PN,
where c is a tunable quantity that is fixed to 0.0001 in this study. (When c = 0, the orbital evolution reduces to 5PN; when c = 1, it is equivalent to 4PN.) This construction allows for an approximate model that retains a physically motivated dependence on the EMRI parameters, while producing waveforms that have a controllable overlap with those from the fiducial model.
In this work, an EMRI with redshifted component masses (μ,M)_true=(10^1,10^6)M_⊙, dimensionless spin parameter a_true=0.9, and initial orbital parameters (p_0,e_0,Y_0)_true=(6.97,0.1,0.54) is considered as a generic example source. Other source parameters are chosen such that the fiducial signal and the approximate signal have an overlap of 0.84. For simplicity, the long-wavelength approximation for the LISA response <cit.>, h≡(h_I,h_II), is used instead of full time-delay-interferometry <cit.>. The signal is six months long and sampled at 0.2 Hz, while the source distance is adjusted to yield a high but feasible SNR of 100. The GPR marginalized likelihood Eq. (<ref>) is used to estimate six source parameters, (μ,M,a,p_0,e_0,Y_0)_true, assuming all other parameters are known and fixed at their true values. The first application of the base GPR method to EMRIs <cit.> considered only up to a two-month long signal and three estimated parameters.
To decrease the computational expense involved in initializing and evaluating the marginalized likelihood, we employ a band-pass filter as done in <cit.>. This filter is applied to restrict both the data and the templates to the frequency range 3.3–8.3 mHz, outside of which little signal information is present.
§.§ Rescaled Fisher coordinates
The construction of the training set is based on the Fisher matrix Γ for the SNR-normalized waveform difference rather than the Fisher matrix for the accurate waveform, since
the former more closely approximates the optimal hyperparameter metric (which is SNR-independent); see Sec. III in <cit.> for a more detailed discussion. Thus for SNR values >1, the bulk of the likelihood is typically comfortably contained within the span of the training set. However, the covariance hyperellipses associated with both matrices can occasionally still be comparable in scale, especially in the minor directions (corresponding to the largest Fisher eigenvalues). When sampling from the likelihood, the coverage of the training set might thus be insufficient in these directions.
To address this, we adopt a strategy of rescaling the basis vectors of the Fisher coordinates as {λ_i^-1/2𝐯̂_i}→{f_i λ_i^-1/2𝐯̂_i}, so as to boost the training-set coverage in the minor directions. An appropriate choice of f_i would depend on the discrepancy between the covariance hyperellipses for the waveform difference and the accurate waveform (essentially, the former should be rescaled such that it contains the latter). As this discrepancy is model- and signal-specific, such a procedure is necessarily somewhat ad hoc. In this study, we perform the rescaling by hand, with the following empirically determined values:
f_1 = 8.0 ,
f_2 = 4.0 ,
f_3 = 2.0 ,
f_4 = 1.0 ,
f_5 = 0.5 ,
f_6 = 0.5 ,
where the index is sorted in order of decreasing eigenvalue magnitude.
More explicitly, for a training set centered at (c_1,c_2,c_3,c_4,c_5,c_6) in the parameter coordinates, the point (x_1,x_2,x_3,x_4,x_5,x_6) in the rescaled Fisher coordinates corresponds to
(c_1, ⋯, c_6)^⊤ + [ f_1 λ_1^-1/2𝐯̂_1 ⋯ f_6 λ_6^-1/2𝐯̂_6 ] (x_1, ⋯, x_6)^⊤
in the parameter coordinates. To sum up, our rescaling strategy ensures that the resulting training set sufficiently covers the parameter region of interest, so as to avoid inaccurate inference due to regression error.
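The construction can be sketched as follows (Python/NumPy; the Fisher matrix gamma of the SNR-normalized waveform difference is assumed to have been computed beforehand, e.g. by numerical differentiation, and the function names are illustrative).

import numpy as np

def rescaled_fisher_basis(gamma, rescale):
    # Basis vectors {f_i lambda_i^{-1/2} v_i}, with f_i ordered by decreasing eigenvalue.
    lam, vecs = np.linalg.eigh(gamma)
    order = np.argsort(lam)[::-1]                        # decreasing eigenvalue magnitude
    lam, vecs = lam[order], vecs[:, order]
    return vecs * (np.asarray(rescale) / np.sqrt(lam))   # scale each eigenvector column

def to_parameter_coords(x_fisher, center, basis):
    # Map a point from (rescaled) Fisher coordinates back to parameter coordinates.
    return center + basis @ x_fisher

# Example with the empirically chosen factors of this work (gamma, theta_center assumed given):
# basis = rescaled_fisher_basis(gamma, [8.0, 4.0, 2.0, 1.0, 0.5, 0.5])
# theta = to_parameter_coords(np.array([0.3, -0.1, 0.0, 0.2, 0.0, 0.1]), theta_center, basis)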
§.§ Fisher-coordinate metric hyperparameters
The number of training hyperparameters required to specify the covariance metric in Eq. (<ref>) scales approximately with the square of the parameter-space dimensionality, which again is a challenge to the scalability of GPR. A common approach to mitigating this in many GPR applications is to use a diagonal metric. Here, we instead use the Fisher matrix Γ to place constraints on the metric g_ab in Eq. (<ref>), since the former is a good approximation to the optimal values for the latter. Specifically, given the unit eigenvectors 𝐯̂_i and eigenvalues λ_i of Γ, we demand that
g_ab = [ 𝐯̂_1 ⋯ 𝐯̂_6 ] diag( 1/(w_1^2λ_1), ⋯, 1/(w_6^2λ_6) ) [ 𝐯̂_1 ⋯ 𝐯̂_6 ]^T,
where now only the w_i are trained hyperparameters. Since σ_f^2 is fixed by Eq. (<ref>), the number of hyperparameters is simply the dimensionality of the parameter space. Restricting the covariance function in this way has a negligible impact on regression performance (relative to training the full metric), while significantly reducing the computational cost of training.
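In code, this constraint is a few lines on top of the same Fisher eigendecomposition; in the sketch below (an assumption-level illustration), w holds the trained hyperparameters w_i.

import numpy as np

def constrained_metric(gamma, w):
    # Covariance metric g_ab built from the Fisher eigensystem and the weights w_i.
    lam, vecs = np.linalg.eigh(gamma)
    return vecs @ np.diag(1.0 / (np.asarray(w) ** 2 * lam)) @ vecs.T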
§.§ Latin hypercube sampling
Although a grid-based construction of the training set is simple to implement, it has a couple of important drawbacks. The first one is that the entropy of a grid-based training set is generally lower than that of a more irregularly distributed training set with the same number of points, i.e., it contains less information <cit.>. The second drawback of using a grid-based training set in higher dimensions is that a larger number of points will lie in low-likelihood regions, since typical likelihoods have a radial fall-off in density from the maximum-likelihood point.
To address the first drawback, we adopt Latin hypercube sampling (LHS) <cit.> as an alternative method of choosing training points. This technique allows a random placement of sample points within a hypercube such that no two samples are aligned (up to a regular partition of the hypercube) in any coordinate direction.
In two dimensions, this is equivalent to the classic problem of placing non-attacking rooks on a chess board.
We adopt a maximum-distance design for generating the Latin hypercube samples, as proposed by <cit.>. This approach aims to maximize the distance between all pairs of samples, while minimizing the number of pairs that are separated by the same distance <cit.>. Thus it prevents highly clustered sample regions, and ensures a more homogeneous distribution of the samples. This variant of LHS is implemented using the Surrogate Modeling Toolbox <cit.>.
In order to compare the performance of an LHS training set against a grid-based training set in GPR, we construct a Fisher-based grid of N=4^6=4096 points centered around the signal parameters θ_true (such that the grid spans a six-dimensional hypercube of side-length 3 in the rescaled Fisher coordinates), as well as an N-sample LHS training set within the same region. The true source parameters are not included in either set of points. From the Gaussian assumption (<ref>), the entropy of each training set is given by
H([h_ϵ(θ_i)]) = -∫Dh_ϵ p([h_ϵ(θ_i)])lnp([h_ϵ(θ_i)])
=-𝔼[ln𝒩(h̅_ϵ, 𝐊)]
=N(1+ln(2π))+ ln det𝐊.
Under the same initial hyperparameter values w_1=…=w_6=1, the entropy of the grid-based training set is smaller than that of the LHS training set by 1651.
The smaller entropy value for the grid-based training set indicates that it contains less information, which turns out to be insufficient for effective training.
Although the hyperlikelihood for the grid-based training set increases asymptotically towards some optimal value during training, the Gaussian process fails to fit the waveform difference adequately, with the GPR error (<ref>) at most evaluation points of interest taking on its maximal value (σ_f^2 in Eq. (<ref>)). This is illustrated by the top plot in Fig. <ref>, which shows how the GPR variance (normalised by σ_f^2) at the true signal parameters fails to improve as training proceeds. In contrast, the same plot for the LHS training set tends toward a minimal value ≪1, indicating that the set is more optimal for regression while having the same span and number of points as the grid-based training set.
As for the second drawback of using a grid-based training set, the larger relative volume contained in the “corners” of the hypercube leads to a larger proportion of uninformative points in the set, which adds unnecessary computational cost to both the training and evaluation of the GPR model. To make this intuitive, consider a hypersphere (representing the bulk of the likelihood density) that is inscribed in a hypercube (representing the span of the training grid), in d dimensions. When d=2, the volume outside the hypersphere is 21% of the hypercube volume; this rises to ≥92% when d≥6.
To address this drawback, we implement a further hyperspherical truncation of the LHS training set in our final model (used in Sec. <ref>). In the rescaled Fisher coordinates, the covariance hyperellipse associated with the Fisher matrix is a hypersphere; we simply enlarge this such that it is inscribed in the hypercube spanned by the grid, and then remove all LHS points lying outside the enlarged hypersphere. In this way, the number of model evaluations in low-likelihood regions is greatly reduced. Both the grid-based training set and the truncated LHS training set are compared visually in Fig. <ref>.
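A sketch of the truncated training-set construction is given below (Python/SciPy; the paper's sets use a maximin-distance LHS from the Surrogate Modeling Toolbox, for which SciPy's Latin hypercube sampler is used here as a stand-in, and the half-width of 1.5 corresponds to the side-length-3 hypercube described above).

import numpy as np
from scipy.stats import qmc

def truncated_lhs_points(n_samples, dim=6, half_width=1.5, seed=0):
    # Latin hypercube samples in rescaled Fisher coordinates, truncated to the
    # hypersphere inscribed in the hypercube [-half_width, half_width]^dim.
    sampler = qmc.LatinHypercube(d=dim, seed=seed)
    pts = (sampler.random(n_samples) - 0.5) * 2.0 * half_width   # map [0,1]^d to the cube
    keep = np.linalg.norm(pts, axis=1) <= half_width             # hyperspherical truncation
    return pts[keep]

# The retained points are mapped to parameter coordinates (e.g. with to_parameter_coords
# from the earlier sketch) before the corresponding waveform differences are generated.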
§.§ Computational acceleration of training
When training the Gaussian process, the cost of evaluating the hyperlikelihood is dominated by the calculation of 𝐊^-1𝐌 at each iteration, especially for large N. The most efficient way of computing this quantity is then: i) to solve the linear systems of equations 𝐊𝐗=𝐌 for 𝐗 (instead of inverting 𝐊), and ii) to parallelize the calculation by performing it on a GPU. In previous work <cit.>, the gains from this approach were marginal even for the largest considered training sets with N∼10^2. In this work, where N≳10^3, it becomes essential. We use the conjugate gradient method <cit.> to solve for the roots of 𝐊𝐗-𝐌; this is an iterative technique that is better suited to large N than previously employed methods such as Cholesky decomposition. For a training set containing 4096 points, a single training iteration typically takes around 3.5 seconds when evaluated on a GPU, and around 200 iterations in total to converge.
GPUs can also be used to accelerate evaluation of the trained model (now with fixed 𝐊, such that 𝐊^-1𝐯 in Eq. (<ref>) and 𝐊^-1 in Eq. (<ref>) can be precomputed). A single evaluation of the GPR mean and variance takes around 0.01s on a GPU, as compared to around 0.1s on a CPU. However, note that the marginalized likelihood requires evaluation of the approximate waveform, which must also be accelerated in order to gain the full benefit from the Gaussian-process component of the model. In this work, as the cost of waveform generation is the computational bottleneck in the case of EMRIs, we do not implement the sampling of likelihoods on a GPU.
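A minimal CPU sketch of the linear solve is given below (NumPy/SciPy); the same column-by-column pattern can be run on a GPU with an array library such as CuPy, which is an implementation choice rather than a prescription.

import numpy as np
from scipy.sparse.linalg import cg

def solve_K_X_equals_M(K, M):
    # Solve K X = M column by column with conjugate gradients; K is symmetric
    # positive definite once the training-set noise (jitter) term is added.
    X = np.empty_like(M)
    for j in range(M.shape[1]):
        X[:, j], info = cg(K, M[:, j])
        if info != 0:
            raise RuntimeError(f"CG did not converge for column {j}")
    return X

# The quantity tr(K^{-1} M) in the hyperlikelihood is then np.trace(solve_K_X_equals_M(K, M)).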
§.§ Marginalized likelihood for nonzero noise
From standard noise properties, the expectation of the logarithm of Eq. (<ref>) is given (up to a constant) by
E[lnℒ] = -1/2(⟨ s-h_GPR|s-h_GPR⟩+𝒩)/(1+σ^2)+ln(1/(1+σ^2)),
where 𝒩≡ E[⟨ n|n⟩] is the expected noise power. It is the interplay between 𝒩≠0 and the size/variation of σ^2 over the signal space that is generally problematic for the practical application of the GPR marginalized likelihood with nonzero noise. Essentially, the profile of the likelihood becomes driven by the variation of σ^2 if 𝒩 is too large, and it can even have a narrowed credible region that excludes the true parameters to high significance.
This issue was recognised and addressed in <cit.>, although not explicitly described in that paper. There, σ^2 was reduced by an overall factor of γ≪1, with the value of γ chosen empirically as the ratio between the typical power of the waveform differences and the expected power of the detector noise. This works simply because the GPR likelihood approaches the accurate likelihood as γ→0, but it is rather ad hoc and does not generally yield broadened credible regions in the former.
We will instead redesign the GPR likelihood in a way that aims to recover, for general noise realizations, its behaviour when 𝒩=0 (while reducing to the accurate likelihood as σ^2→0). It is straightforward to achieve this when the likelihood is approximated by Eq. (<ref>); one such solution is simply
ℒ∝1/(1+σ^2)exp(-1/2(⟨ x-h_GPR|x-h_GPR⟩+𝒩σ^2)/(1+σ^2)).
Taking the noise expectation as before then gives E[lnℒ] = -1/2⟨ s-h_GPR|s-h_GPR⟩/(1+σ^2)-𝒩/2+ln(1/(1+σ^2)) up to a constant, so the noise-dependent term no longer varies with σ^2 over the signal space.
Here, 𝒩 needs to be estimated accurately because the noise-corrected likelihood can still be quite sensitive to any residual noise power near the maximum-likelihood point. We propose an iterative method for estimating 𝒩 (and for refining inference) in the following section.
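Reusing the names from the earlier sketches, the correction amounts to a one-term change in the exponent (noise_power is the estimate of 𝒩 discussed next; this is a sketch, not the full pipeline):

import numpy as np

def log_gpr_likelihood_noise_corrected(x, h_app, gpr, theta, noise_power, psd, df):
    # Noise-corrected GPR log-likelihood; reuses inner_product and the GPR class
    # from the earlier sketches, with h_app the approximate template at theta.
    mu, var = gpr.predict(theta)
    resid = x - (h_app - mu)                              # x - h_GPR
    return (-0.5 * (inner_product(resid, resid, psd, df) + noise_power * var) / (1.0 + var)
            - np.log(1.0 + var))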
§.§ Iterative inference
When applying the GPR method to realistic inference, the starting point is an estimate of the true source parameters θ_true, so as to construct the training set in its local vicinity. This estimate is most naturally obtained through maximum likelihood or maximum a posteriori estimation with the approximate waveform model (since the accurate model is assumed to be computationally intractable); we denote it by θ_app. At this stage, a first estimate of 𝒩 is given by
𝒩_1stGPR=⟨ x-h_app(θ_app)|x-h_app(θ_app)⟩.
Together with the training set centered on θ_app and the Gaussian process that is trained on this set, inference can then be performed using the noise-corrected GPR likelihood (<ref>), which we denote by ℒ_1stGPR.
Computation of the posterior under the first GPR model yields a first maximum a posteriori estimate, which we denote by θ_1stGPR (σ^2 taken as 0 in this optimization). Depending on the distance between θ_1stGPR and θ_true (equivalently, the error in the approximate model), the first GPR posterior may not be a sufficiently faithful approximation to the accurate posterior. In this case, another iteration of inference may be performed by constructing a second training set centered on θ_1stGPR, retraining the Gaussian process, and re-estimating the noise as
𝒩_2ndGPR = ⟨ x-h_acc(θ_1stGPR)|x-h_acc(θ_1stGPR) ⟩ .
Here we have used the accurate waveform model to compute the second noise estimate, although we could also have used h_1stGPR instead. In realistic scenarios, such an iterative usage of the GPR method for high-precision inference is unlikely to take more than two iterations; if the error in the approximate model is so large as to require this, a more prudent approach in the first place would be to improve the approximate model or construct better fits to the accurate model.
§.§ Results
In this subsection, we present illustrative results for the GPR method with all of the proposed improvements in Secs <ref>–<ref>, when applied to our example EMRI signal with simulated LISA noise <cit.>. We assume a flat prior in a suitably bounded region of parameter space, and employ the Markov chain Monte Carlo sampler emcee <cit.> to draw samples from the standard approximate likelihood L_app, the standard accurate likelihood L_acc, and the GPR marginalized likelihoods ℒ. Note that the accurate likelihood is assumed to be unavailable in the actual scenarios to which the GPR likelihood might be applied, but we include it here to showcase the efficacy of the method.
Fig. <ref> displays the GPR likelihood that is computed in the first iteration described in Sec. <ref>, along with the standard accurate and approximate likelihoods for comparison. The approximate likelihood excludes the true source parameters at more than 3-sigma significance. On the other hand, the first GPR likelihood provides a decent (but still slightly shifted) approximation to the accurate likelihood, and agrees with the true parameters to within 3 sigma.
Finally, results from a second iteration are presented in Fig. <ref>, where the second GPR likelihood is seen to be almost perfectly consistent with the accurate likelihood.
We also conducted a parameter-estimation analysis using the accurate waveform but with Y_0 excluded as a search parameter. The resulting likelihood (Fig. <ref>) reveals two significant effects of Y_0. First, its inclusion induces a strong degeneracy among the parameters, which arises from correlated terms in the low-order post-Newtonian evolution (compare Fig. <ref> with Fig. <ref>). Second, the deviation caused by the noise appears peculiar in the 6D results, where the true values still lie near the center of many 2D marginalized confidence regions, but looks more typical in the 5D results. We therefore conclude that the strong degeneracy is the primary cause of the observed difference.
§ CONCLUSION
This work improves the scalability of the GPR marginalized-likelihood scheme for high-precision GW inference <cit.>, thus extending its potential application to higher-dimensional parameter spaces and longer signal durations (six intrinsic parameters and six-month long signals, in our EMRI example). Several significant modifications have been made to the base GPR method that was developed in previous work.
In Secs <ref>–<ref>, various improvements to the training of the Gaussian process are described. These are: (i) a rescaling of the Fisher-informed training set that is better adapted to highly correlated parameters; (ii) a Fisher-informed constraint on the covariance metric such that the number of hyperparameters scales linearly rather than quadratically with the parameter-space dimensionality; (iii) the use of LHS and (Fisher-informed) hyperspherical truncation to construct the training set; and (iv) computationally efficient training through the use of the conjugate gradient method and GPU acceleration. These modifications boost the scalability of the GPR method by significantly reducing the required density of the training set in a given region of interest (such that it grows sub-exponentially with the parameter-space dimensionality), and by greatly accelerating the training process as well.
Secs <ref> and <ref> describe improvements to the marginalized-likelihood method itself. We make a crucial redefinition of the marginalized likelihood in order to render it usable in realistic inference scenarios with nonzero detector noise; this is done via an estimation of the noise power by computing the data–template residual at the maximum likelihood parameters. We also propose an iterative approach to using the GPR method, where the training set and noise estimate are refined through (a single) repetition. Finally, in Sec. <ref>, we implement all of the above improvements to perform inference on an example EMRI signal with simulated LISA noise, and to demonstrate the viability of the GPR marginalized-likelihood method for more realistic GW applications.
In future work, it is anticipated that the GPR model developed in this study will be extended to other types of waveform analysis, such as the modeling of resonance phenomena and numerical-relativity waveforms. The GPR model in this work can also be extended to include additional parameters describing the detector response. To scale up to higher-dimensional cases, additional strategies may be helpful. For certain parameters, reparametrization or dimensionality-reduction methods could minimize the size of the training set itself, as shown in the component-mass example discussed in Sec. IIIB of <cit.>. For other parameters, the number of training-set points used to cover the relevant region could still be reduced through a non-uniform placement of points, or through non-geometric methods such as stochastic placement algorithms <cit.>.
ML would like to express his gratitude to Prof. Yiming Hu and Prof. Jiandong Zhang for their insightful discussions and valuable feedback on our research. ML is also grateful to the NUS Research Scholarship for its financial support. AJKC thanks Christopher Moore for helpful comments on the manuscript, and acknowledges previous support from the NASA LISA Preparatory Science grant 20-LPS20-0005.
|
http://arxiv.org/abs/2307.04042v1 | 20230708202414 | Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training | ["Masaaki Imaizumi"] | stat.ML | ["stat.ML", "cs.LG"] |
Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training
Masaaki Imaizumi
===========================================================================================================
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme. For the nonparametric regression problem, it has been shown that an estimator using deep neural networks can achieve better performances in the sense of the L2-norm. In contrast, it is difficult for the neural estimator with least-squares to achieve the sup-norm convergence, due to the deep structure of neural network models. In this study, we develop an adversarial training scheme and investigate the sup-norm convergence of deep neural network estimators. First, we find that ordinary adversarial training makes neural estimators inconsistent. Second, we show that a deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction. We extend our adversarial training to general setups of a loss function and a data-generating function. Our experiments support the theoretical findings.
§ INTRODUCTION
We study the nonparametric regression problem.
Suppose we observe (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d ×ℝ with dimension d ∈ℕ that are independent and identical copies of a [0,1]^d ×ℝ-valued random element (X,Y) which follows the following regression model:
Y = f^*(X) + ξ,
where f^*: [0,1]^d →ℝ is an unknown function, ξ is a random noise variable with zero mean and finite variance and is independent of X, and X follows a marginal measure P_X on [0,1]^d.
Our interest is to utilize a deep neural network model and develop an estimator f̂ from the model and the n observations, then study its estimation risk in terms of the sup-norm, referred to as an L^∞-risk:
sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|,
which implies uniform convergence of the estimator.
In this study, we prove that an adversarial training framework can provide an estimator with deep neural networks whose L^∞-risk converges, then derive a convergence rate of the risk and show the minimax optimality of the rate.
§.§ Background and Question
Deep learning is a data-driven statistical method using deep neural network models <cit.>, which have multiple layers.
It has many well-known extensions, such as a deep convolutional network <cit.>, a residual network <cit.>, and an attention mechanism <cit.>.
Owing to the multiple layers and the well-designed training algorithm, deep learning has achieved quite accurate prediction performance in various tasks.
The framework of nonparametric regression has been actively used to analyze deep neural networks, and many roles of deep learning have been revealed.
A deep neural network is a model of functions f:[0,1]^d →ℝ with multiple layers such that
f(x) = g_L ∘ g_L-1∘⋯∘ g_1(x),
where g_1(·),...,g_L(·) are trainable functions by L layers.
Deep learning is a method of fitting the function by deep neural networks to observed data, hence it is obviously regarded as a method for the nonparametric regression problem.
Specifically, in most studies on the nonparametric regression with deep neural networks, the following least-square estimator has been studied:
f̂^LS∈argmin_f ∈ℱ1/n∑_i=1^n (Y_i - f(X_i))^2,
where ℱ is a set of functions by deep neural networks with the form (<ref>).
Further, performance of the estimator f̂^LS has been studied by its L^2-risk
f̂^LS - f^*_L^2^2 := 𝔼[ (f̂^LS(X) - f^*(X))^2 ].
Using this framework, seminal works <cit.> show that the multilayer structure of deep neural networks fits an internal structure of the unknown function f^* and that its estimation error achieves a faster convergence.
<cit.> investigate statistical properties of the neural estimators such as asymptotic distribution and robustness.
<cit.> show that the multilayer structure of the neural estimator is effective when the target function f^* has irregular properties such as discontinuity and heterogeneous smoothness.
<cit.> shows an adaptive property of the neural estimators to an intrinsic low-dimensionality of the observations, e.g., data concentrates on a low-dimensional manifold in its domain.
Studying a sup-norm value of the estimation error has been an important interest in nonparametric regression problems.
The sup-norm value, referred to as an L^∞-risk, is a sharper measure of accuracy and sensitivity of estimators than the L^2-risk.
Furthermore, the sup-norm convergence of errors is useful for statistical inference, such as a uniform confidence band, and is effective in the case with covariate shift of the transfer learning <cit.>.
For several conventional (non-deep) nonparametric estimators for f^*, their sup-norm convergence has been actively studied.
Classically, the convergence of kernel methods <cit.> and series methods <cit.> have been investigated.
More recently, the convergence of wavelet methods <cit.>, methods with reproducing kernel Hilbert spaces <cit.>, and Gaussian process methods <cit.> have been clarified.
Roughly speaking, when studying the sup-norm convergence of these non-deep estimators f̂^ND, the following linear-in-basis form plays an effective role:
f̂^ND = ∑_j ∈ J w_j ψ_j(·),
where J is an index set, {w_j}_j ∈ J is a set of weights in ℝ trained by the least-square approach, and {ψ_j(·)}_j ∈ J is a family of basis functions (possibly depending on covariates) such as wavelets or kernels.
Since the non-deep estimators have the linear form, it is possible to control the L^∞-risk effectively and show its convergence, with the exception of a general result by <cit.>.
Our interest is to evaluate the L^∞-risk of an estimator using deep neural networks (<ref>).
Since the deep neural network model (<ref>) does not have the linear-in-basis form (<ref>) as the non-deep methods, the existing analysis cannot study the L^∞-risk of deep neural networks.
Based on the background, we have the following questions:
Is it possible to obtain an estimator of f^* by deep neural networks whose L^∞-risk converges?
If so, is it possible to show the optimality of a convergence rate of the L^∞-risk?
§.§ Introduction to Adversarial Training
The adversarial training is a training scheme for deep neural networks, which has been developed to deal with an adversarial attack on prediction by neural networks.
An adversarial attack is a methodology to mislead deep neural networks in its predictions, by putting a tiny perturbation into a covariate for a trained deep neural network.
Since functions by trained deep neural networks are unstable, the perturbed samples, called adversarial samples, vary the outputs of deep neural networks drastically.
<cit.> reported the phenomenon by introducing a case in which a deep neural network misclassified an image of a panda as an image of a gibbon by adding very fine noise to the image.
After the finding, many adversarial attack methods have been developed <cit.>, threatening the robustness of neural networks.
A standard approach to adversarial training is to minimize a robustified empirical risk, which is measured by adding perturbations to the observed input variable <cit.>.
Rigorously, an estimator by the adversarial training for regression is defined as the minimizer of the following empirical risk:
min_f ∈ℱ1/n∑_i=1^n max_x' : x' - X_i_∞≤ h (Y_i-f(x'))^2,
with some h > 0.
The outer minimization is solved by the gradient descent method as well as the usual least-square loss, and the inner maximization is solved by a gradient ascent method.
Several efficient algorithms have been proposed to solve this problem effectively <cit.>, such as the fast gradient sign method <cit.>.
The optimization process is summarized in the following:
i. Initialize f ∈ℱ and repeat the following steps ii and iii:
ii. For each (Y_i,X_i), find x^*_i = argmax_x' ∈{x: x-X_i_∞≤ h} (Y_i - f(x'))^2.
iii. Update function f ← f - η∇ ( n^-1∑_i=1^n (Y_i - f(x^*_i))^2),
where η > 0 is a learning rate and ∇ denotes a derivative with respect to neural network parameters of f.
Note that the efficiency of the algorithm is not a primary interest of this study, hence we focus on the estimation error by the global minimizer of the adversarial risk.
Several works actively pursue a theoretical understanding of adversarial training.
One of the most significant issues is a trade-off between the robustness and accuracy of the adversarial training, which studies the possibility of balancing the predictive performance of deep neural networks with their ability to defend against adversarial samples.
A risk bound and the sample complexity of the adversarial training in general settings is widely examined <cit.>.
The predictive performance of the adversarial training has been also studied, particularly in linear regression models with over-parameterization <cit.>.
§.§ This Study
The purpose of this study is to investigate the sup-norm convergence of an error by deep neural networks using the adversarial training scheme.
For this aim, we develop a novel formulation of adversarial training and study its efficiency.
Specifically, our formulation includes a preprocessing for smoothing the output variable at the first step, then formulates a neural estimator as a minimizer of an empirical adversarial risk associated with the preprocessing.
The preprocessing has a role to reduce a bias on the estimator from the perturbation of the adversarial training scheme.
As a specific form of preprocessing, we can employ several nonparametric estimators including the nearest neighbor method and the kernel method.
As a result, we derive an upper bound on the L^∞-risk of the estimator with deep neural networks using our adversarial training scheme, then reveal some properties of its convergence rate.
Specifically, our contributions are summarized as follows.
(i) We derive a convergence rate of the L^∞-risk of the estimator when the true function f^* belongs to the Hölder space.
The derived rate achieves the minimax optimal rate with an appropriately designed preprocessing.
(ii) We show the inconsistency of the ordinary adversarial training without preprocessing.
This is due to the inability of an output variable in the regression problem to accommodate perturbations of the adversarial training.
(iii) Our approach applies to not only the adversarial training with a squared loss but also a general convex loss.
Specifically, we study an L^∞-risk of the regression problem of general loss, which is useful for handling data that have heavy-tailed noise.
(iv) We additionally study the L^∞-risk when the true function f^* has a heterogeneous smoothness, i.e. it belongs to the Besov space.
Our analysis shows the minimax optimality of the convergence rate of the L^∞-risk in this case.
(v) Our result is applicable to a wide range of architectures of deep neural networks, such as a fully-connected dense layer.
Also, it allows both finite depth networks and finite width networks.
We conduct numerical experiments and confirm that our theoretical results are consistent with the result.
Our results provide new implications for the understanding of adversarial training, which argues the trade-off between robustness and accuracy of prediction by adversarial training.
Along with this line, we show that (i) the ordinary adversarial learning is not consistent in the regression problem in the first place, (ii) the robustness obtained by adversarial learning is described by sup-norm convergence of the estimation error, and (iii) the adversarial training achieve the optimal rate with appropriate preprocessing.
Technical contributions in our proof are summarized as follows.
First, we derive an upper bound of the sup-norm of an estimation error by the adversarial risk up to constants.
This bound uses a volume of a neighborhood set of an input variable, which is utilized to design the adversarial perturbation.
Second, we develop an empirical process technique for the evaluation of preprocessing.
To control the effects of the preprocessing and the adversarial training simultaneously, we involve two levels of evaluation of biases and variances as appropriate.
§.§ Organization
The rest of this paper is organized as follows.
Section <ref> gives a setup for the nonparametric regression problem and the definition of deep neural networks.
Section <ref> gives a general formulation of adversarial training and an overview of analysis on it.
Furthermore, the section shows that naive adversarial training does not give a consistent estimator.
In Section <ref>, as a main result, we derive an upper bound on the sup-norm of the estimation error of the developed estimator.
Section <ref> gives extensions and applications.
Section <ref> gives numerical simulations, and Section <ref> concludes.
§.§ Notation
For n ∈ℕ, [n] := {1,2,...,n} is a set of natural numbers no more than n.
For a,a' ∈ℝ, a ∨ a' := max{a,a'} is the maximum.
⌊ a ⌋ denotes the largest integer which is no more than a.
The Euclidean norm of a vector b ∈ℝ^d is denoted by b_2 := √(b^⊤ b).
Let C_w be a positive finite constant depending on a variable w.
𝟙{E} denotes the indicator function. It is 1 if the event E holds and 0 otherwise.
For a matrix A ∈ℝ^N × N, A_i,j denotes an (i,j)-th element of A for i,j=1,...,N.
For a measurable function f: Ω→ℝ on a set Ω⊂ℝ^d, f_L^p(μ) := (∫ |f(x)|^p dμ(x) )^1/p denotes an L^p-norm for p ∈ [1,∞) with a measure μ, and f_L^∞ := sup_x ∈Ω|f(x)| denotes a sup-norm.
Also, L^p(Ω) denotes a set of measurable functions such that f_L^p(λ) < ∞ with the Lebesgue measure λ.
For x ∈ℝ^d, δ_x denotes the Dirac measure at x.
For a function f : ℝ^d →ℝ with a multi-variate input (x_1,...,x_d) ∈ℝ^d and a multi-index a = (a_1,...,a_d) ∈ℕ_0^d, ∂^a f(x_1,...,x_d) := ∂_x_1^a_1∂_x_2^a_2⋯∂_x_d^a_d f(x_1,...,x_d) denotes a partial derivative with the multi-index.
For a variable x, C_x denotes some positive finite constant that polynomially depends on x, and it can have different values in different places.
For sequences of reals {a_n}_n ∈ℕ and {b_n}_n ∈ℕ, a_n ≍ b_n denotes lim_n →∞ a_n/b_n = c with some c ∈ (0,∞), a_n = O(b_n) denotes |a_n| ≤ M|b_n| and a_n = Ω (b_n) denotes |a_n| ≥ M |b_n| with some M > 0 for all sufficiently large n. a_n = o(b_n) denotes |a_n| ≤ M |b_n| for any M > 0 and for all sufficiently large n.
Õ(·) and Ω̃(·) are the notations O(·) and Ω(·) ignoring multiplied polynomials of log(n), respectively.
For a sequence of random variables {X_n}_n ∈ℕ, X_n = O_P(a_n) denotes Pr(|X_n/a_n| > M) ≤ε for any ε > 0 and some M>0 for all sufficiently large n, and X_n = o_P(a_n) denotes lim_n →∞Pr(|X_n/a_n| > ε) = 0 for any ε > 0.
§ PROBLEM SETTING AND PRELIMINARIES
§.§ Nonparametric Regression and L^∞-Risk
§.§.§ Model and Observations
For the nonparametric regression, suppose that we have n observations (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d ×ℝ that are independent and identical copies of a random variable (X,Y) which follows the regression model (<ref>).
Note that the model is characterized by the unknown function f^* and the noise variable ξ.
Let P_X be a marginal measure of X.
§.§.§ Basic Assumption
We introduce a standard assumption on the regression model.
P_X has a density function that is uniformly lower bounded by C_P_X > 0 on [0,1]^d.
Assumption <ref> is important to estimate f^* on the entire domain [0,1]^d.
Both of the assumptions are commonly introduced in the nonparametric regression for neural networks <cit.>.
We suppose that f^* belongs to a function class with the Hölder smoothness with an index β > 0.
To this end, we define a ball of the Hölder space with β > 0 as
ℋ^β([0,1]^d) := { f: [0,1]^d →ℝ|
∑_b ∈ℕ_0^d: b_1 < ⌊β⌋∂^b f_L^∞ + ∑_b ∈ℕ_0^d: b_1 = ⌊β⌋sup_x,x' ∈ [0,1]^d, x ≠ x'|∂^b f(x) - ∂^b f(x')|/x - x'_∞^(β - ⌊β⌋)≤ B},
with its radius B ≥ 1.
Intuitively, ℋ^β([0,1]^d) is a set of functions on [0,1]^d that are ⌊β⌋ times partially differentiable and whose derivatives are (β - ⌊β⌋)-Hölder continuous.
There exists β > 0 such that f^* ∈ℋ^β'([0,1]^d) holds for all β' ∈ (0,β].
To impose differentiability for f^* is the usual setting for nonparametric regression (see <cit.>, for example).
Further, in the statistical studies on deep neural networks, it has also studied the estimation of functions with more complex structures <cit.>.
We will discuss an extension on this assumption in Section <ref>.
§.§.§ Goal: Sup-norm Convergence
Our goal is to estimate the true function f^* in the model (<ref>) and study an estimation error of an estimator in terms of the sup-norm ·_L^∞.
Rigorously, we will develop an estimator f̂ and study its L^∞-risk defined as follows:
f̂ - f^*_L^∞ := sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|.
The L^∞-risk is a sharp measure for the robustness of estimators and is applied to statistical inference such as a uniform confidence band.
To understand this point, we discuss its relation to the commonly used L^2-risk measured by the L^2-norm, which is a typical case with the following L^p-norm (p ∈ [1,∞)) with p=2:
f̂ - f^*_L^p(P_X)^p := 𝔼_X[ |f̂(X) - f^*(X)|^p ].
Since the L^∞-risk bounds the L^p-risk, i.e. f̂ - f^*_L^∞≥f̂ - f^*_L^p(P_X) holds for every p ≥ 1, the L^∞-risk leads to stronger convergence.
Figure <ref> illustrates the difference between the convergences in the L^2-norm and the sup-norm.
In the related studies with neural networks (e.g. <cit.>), the L^2-risk has been mainly studied, but the L^∞-risk of neural network estimators has not been proved to converge.
§.§ Deep Neural Network Model
We define a deep neural network, which is a model of functions by multiple layers.
Specifically, we consider deep neural networks with fully-connected layers and the rectified linear unit (ReLU) activation function, which is one of the most commonly used activations.
Let L ∈ℕ be a number of layers, and 𝐖 = (W_1,...,W_L+1) ∈ℕ^L+1 be a tuple of width parameters, where W_ℓ denotes the width of an ℓ-th layer.
Deep neural networks have a weight matrix A_ℓ∈ℝ^W_ℓ + 1× W_ℓ and a weight vector b_ℓ∈ℝ^W_ℓ+1 for each ℓ∈ [L].
For each d ∈ℕ, we introduce a ReLU activation function σ:ℝ^d →ℝ^d such that σ(z) = ((z_1 ∨ 0), (z_2 ∨ 0),...,(z_d ∨ 0))^⊤ for z = (z_1,...,z_d) ∈ℝ^d.
For each ℓ∈ [L-1], we define a map g_ℓ: ℝ^W_ℓ→ℝ^W_ℓ+1 by an ℓ-th layer as
g_ℓ(z) = σ(A_ℓ z + b_ℓ), z ∈ℝ^W_ℓ.
For the last L-th layer, we define g_L(z) = A_L z + b_L with z ∈ℝ^W_L.
For L and 𝐖, we define a parameter space Θ_L,𝐖 := (ℝ^W_2× W_1×ℝ^W_2) × (ℝ^W_3× W_2×ℝ^W_3) ×⋯× (ℝ^W_L+1× W_L×ℝ^W_L+1) whose element is θ = ((A_1,b_1),(A_2,b_2),...,(A_L,b_L)), then we define a function f_θ :ℝ^d →ℝ by a deep neural network with d = W_1 and W_L+1 = 1 as
f_θ(x) = g_L ∘ g_L-1∘⋯∘ g_1(x), x ∈ [0,1]^d.
Intuitively, f_θ(x) is constituted by compositions of L maps by the multiple layers with the maximum width 𝐖_∞ = max_ℓ∈ [L+1] W_ℓ.
There are at most ∑_ℓ=1^L (W_ℓ + 1) W_ℓ+1≤ L (𝐖_∞ +1)^2 parameters in the deep neural network model.
We introduce a set of functions by deep neural networks with L layers and W maximum width.
With a tuple (L, W) ∈ℕ^2 and an upper bound B ≥ 1, we define the set of functions by deep neural networks as
ℱ(L,W):= { f_θ|f_θ_L^∞≤ B , θ∈Θ_L,𝐖, 𝐖_∞≤ W }.
The condition on the upper bound B can be satisfied by a clipping operation using the ReLU activation function <cit.>.
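For concreteness, a member of this class can be sketched in PyTorch as follows (names are illustrative; the hard clipping by torch.clamp stands in for the ReLU-based clipping of the output mentioned above).

import torch
import torch.nn as nn

class ReLUNet(nn.Module):
    # Fully-connected ReLU network f_theta with depth L and uniform width W,
    # with the output clipped to [-B, B] as in the definition of the class.
    def __init__(self, d, L, W, B=1.0):
        super().__init__()
        widths = [d] + [W] * (L - 1)
        layers = []
        for w_in, w_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Linear(w_in, w_out), nn.ReLU()]
        layers.append(nn.Linear(widths[-1], 1))    # last (L-th) affine layer
        self.net = nn.Sequential(*layers)
        self.B = B

    def forward(self, x):
        return torch.clamp(self.net(x), -self.B, self.B)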
This definition of deep neural networks includes several variations of neural networks.
If the parameter matrix A_ℓ is not sparse, the defined neural network is a fully-connected neural network.
If the matrix A_ℓ is constrained to be sparse with some structure, it is equivalent to a convolutional neural network <cit.> or a residual network <cit.>.
One advantage of the definition (<ref>) is that it controls the easily manipulated values of width W and depth L of neural networks, that can be easily specified when designing neural network models.
This is in contrast to manipulating the number of nonzero parameters and the maximum parameter value, which are difficult to control in practice (for example, see <cit.>).
§ ADVERSARIAL TRAINING ESTIMATOR FOR REGRESSION
§.§ Ordinary Adversarial Training and its Inconsistency
We introduce a framework of adversarial training.
The adversarial training framework defines its loss using an input point in the neighborhood of a data point that maximizes loss, as reviewed in (<ref>).
Rigorously, with a scale multiplier h ∈ (h̲,1) with h̲ >0, we consider a neighbourhood of x ∈ [0,1]^d as
Δ_h^p(x) = {x' ∈ [0,1]^d |x - x'_p ≤ h}⊂ [0,1]^d.
Then, we consider the following empirical adversarial risk with a function f: [0,1]^d →ℝ and p ≥ 1:
R_n^o(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i) (Y_i - f(x'))^2.
We can define an estimator of f^* by the minimizer of this empirical adversarial risk as
f̌ := argmin_f ∈ℱ(L,W) R_n^o(f).
The minimax optimization in the problem (<ref>) is solved by various algorithms <cit.>.
§.§.§ Inconsistency of Ordinary Adversarial Training
In this section, we show the inconsistency of f̌ by ordinary adversarial training.
Specifically, we obtain the following result.
Suppose n ≥ 3.
There exists a sub-Gaussian noise ξ_i, f^* ∈ℋ^1([0,1]^d), P_X, and h ∈ (0,1) such that the estimator f̌ in (<ref>) satisfies the following inequality with an existing constant c^* > 0 with probability at least 0.5:
f̌ - f^*_L^2(P_X)^2 ≥ c^*.
This result shows that the L^∞-risk of f̌ does not converge to zero with the ordinary adversarial training, regardless of the sample size n and the neural network architecture.
Since the L^∞-risk is bounded below by the L^2-risk, the ordinary adversarial training also yields an inconsistent estimator in the sense of the sup-norm.
This result is not limited to the choice of model used for the estimator, hence it occurs with methods other than neural networks.
Intuitively, ordinary adversarial training produces a bias by the design of perturbations on inputs (see the middle panel of Figure <ref>).
This is because the perturbation makes f̌ fit the observed output Y_i at a shifted input x' = X_i + ς, which creates the inconsistency.
Hence, we need to correct the bias by the ordinary adversarial training in the regression problem.
§.§ Proposed Framework of Adversarial Training
We introduce an empirical risk function for adversarial training based on a quadratic loss.
We develop a random map Ŷ: [0,1]^d →ℝ for surrogate outputs, which is referred to as a preprocessed output.
This notion is a general expression of several methods, and its specific configurations will be given later.
With Ŷ, we define an empirical preprocessed adversarial risk as
R_n(f) := 1/n∑_i=1^nsup_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
for a function f ∈ L^2([0,1]^d).
This loss function is a generalized version of the ordinary adversarial risk (<ref>) with the preprocessing Ŷ.
Using this notion, we define an estimator as the minimizer of the empirical risk as
f̂∈argmin_f ∈ℱ(L,W) R_n(f).
This framework intends to perturb an output variable in response to the perturbation on the input X_i.
That is, when the input point X_i is shifted by ς = x' - X_i due to the adversarial training, the output side should also be shifted by ς.
However, the observed outputs alone cannot accommodate this shift.
To address this issue, we prepare the corresponding output using a preprocessing approach, such as the nearest neighbor method.
Figure <ref> illustrates differences between the least square estimator f̂^LS, the ordinary adversarial training f̌, and our proposal estimator by the adversarial training with preprocessing f̂.
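A sketch of one training step under this preprocessed adversarial risk is given below; it differs from the earlier adversarial-training sketch only in that the regression target is the preprocessed output Ŷ evaluated at the perturbed input. Here y_hat_fn is an assumed callable x ↦ Ŷ(x) that accepts and returns torch tensors (e.g. a nearest-neighbor predictor fitted beforehand), and gradients are not propagated through it in this sketch.

import torch

def preprocessed_adversarial_step(model, X, y_hat_fn, h, opt, n_inner=5, step=None):
    step = step if step is not None else h / n_inner
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(n_inner):
        x_pert = torch.clamp(X + delta, 0.0, 1.0)
        loss_in = ((y_hat_fn(x_pert) - model(x_pert)) ** 2).mean()   # target is Y_hat(x')
        grad, = torch.autograd.grad(loss_in, delta)
        with torch.no_grad():
            delta += step * grad.sign()
            delta.clamp_(-h, h)
    x_adv = torch.clamp(X + delta.detach(), 0.0, 1.0)
    opt.zero_grad()
    loss_out = ((y_hat_fn(x_adv) - model(x_adv)) ** 2).mean()
    loss_out.backward()
    opt.step()
    return loss_out.item()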
§.§.§ Preprocessing Design
We impose the following assumptions on the preprocessing.
[Preprocessing]
Ŷ(x) is continuous and 𝔼[Ŷ_L^∞^2] ≤ V^2 with some V > 0.
Also, there exists a non-negative sequence {ζ_n}_n ∈ℕ with ζ_n → 0 as n →∞ such that the following holds for all n ∈ℕ:
ζ_n^2 ≥𝔼[ Ŷ - f^*_L^∞^2 ].
The sequence {ζ_n}_n ∈ℕ represents a convergence rate of the preprocessing Ŷ to f^*.
Importantly, the data used to construct the preprocessed output Ŷ here may overlap with the data used for the estimator in (<ref>).
There are several examples for preprocessing as follows.
[Nearest neighbour]
First, we consider the k-nearest neighbor method.
For k ∈ℕ and x ∈ [0,1]^d,
we define a ball B_x(r) := {x' ∈ [0,1]^d |x-x'_2 ≤ r} with r>0, the k-nearest neighbour radius r_k(x) := inf{r >0 | |B_x(r) ∩𝒟| ≥ k} with the covariate set 𝒟 := {X_1,...,X_n}, and its corresponding dataset N_k(x) := B_x(r_k(x)) ∩𝒟.
With this notion, we define the k-nearest neighbor preprocessing as
Ŷ(x) = 1/|N_k(x)|∑_i=1^n Y_i 𝟙{X_i ∈ N_k(x)}.
In this example, if Assumption <ref> holds with β∈ (0,1], we have ζ_n^2 = O(n^-2β/(2β + d)log n) with k ≍ n^2β/(2β + d) by Theorem 1 in <cit.>.
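A minimal NumPy sketch of this preprocessing is given below (names are illustrative); it returns a callable x ↦ Ŷ(x) that can play the role of y_hat_fn in the training-step sketch above after wrapping its inputs and outputs as tensors.

import numpy as np

def knn_preprocess(X_train, Y_train, k):
    # k-nearest-neighbor preprocessing: average the outputs of the k covariates
    # closest to the query point in Euclidean distance.
    def y_hat(x_query):
        x_query = np.atleast_2d(x_query)                              # (m, d)
        d2 = ((x_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        idx = np.argpartition(d2, k - 1, axis=1)[:, :k]               # k nearest indices
        return Y_train[idx].mean(axis=1)
    return y_hat

# With beta in (0,1], the rate quoted above suggests choosing k of order n^{2 beta/(2 beta + d)}:
# y_hat = knn_preprocess(X, Y, k=int(np.ceil(n ** (2 * beta / (2 * beta + d)))))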
[Posterior mean by Bayesian method]
We consider a mean of a posterior distribution by a prior distribution on functions.
The method considers a B-spline series (see <cit.> for overview and specific constructions).
With some tuple of numbers of basis (J_1,...,J_d) ∈ℕ^d and orders (q_1,...,q_d) ∈ℕ^d, we consider parameters {θ_j_1,...,j_d}_j_1,...,j_d = 1^J_1,...,J_d and the B-spline series {B_j_k,q_k(x)}_j_k = 1^J_k for k=1,...,d.
Then, the method constructs a prior distribution on a function f with the form
f(x) = ∑_j_1=1^J_1⋯∑_j_d=1^J_dθ_j_1,...,j_d∏_k=1^d B_j_k,q_k(x_k),
by putting a Gaussian prior on the parameters θ_j_1,...,j_d.
If Assumption <ref> holds with β > 0, Theorem 4.4 in <cit.> shows that ζ_n^2 = O(n^-2β/(2β + d)log^2β/(2β + d) n), which is implied by a contraction of the posterior shown by the theorem.
We can pick other methods for preprocessing.
The required property is that the error in estimating a smooth function converges to zero in the sup-norm sense.
§ MAIN RESULT: L^∞-RISK ANALYSIS
We present our main results on the consistency of the estimator and a non-asymptotic upper bound on the estimation error with its convergence rate in n.
We further discuss the minimax optimality of the obtained convergence rate.
To achieve optimality, we need to discuss the design of the preprocessing Ŷ and the architecture of deep neural networks.
§.§ Consistency
We present an upper bound of an expectation of the L^∞-risk of the estimator.
The first result is consistency in the sense of the L^∞-risk.
In an asymptotic analysis with n →∞, a product of the depth and width of deep neural networks should also increase in n.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class by deep neural networks with a tuple (L,W).
Suppose that Assumptions <ref> and <ref> hold and that f^* is continuous.
Then, there exists a tuple (L,W) with LW = o(n) such that
𝔼[ ‖f̂ - f^*‖_L^∞^2 ] → 0,
as n →∞.
The results show that under divergent widths and depths and appropriate preprocessing, we obtain consistency in the sense of sup-norm.
Note that f^* need only be continuous, and conditions on its derivatives are not necessary.
Also, it provides the following important implications: (i) we can control the L^∞-risk even though the deep neural network model does not have the linear-in-feature structure, and (ii) the preprocessing solves the problem of inconsistency in adversarial training presented in Section <ref>.
Its proof is based on the procedure in Section <ref>.
We note the importance of sup-norm convergence in the context of estimation.
In the theory of approximation, the sup-norm convergence of neural networks has been an important topic, that is, inf_f∈(L,W)f - f^*_L^∞→ 0 as L →∞ or W →∞, and numerous studies have investigated this problem, e.g. <cit.>.
In contrast, in the nonparametric regression problem, sup-norm convergence has been difficult to establish due to noise in the observations.
Theorem <ref> shows that the adversarial training with preprocessing enables convergence in the sup-norm.
§.§ Non-Asymptotic Bound and Convergence Rate
As a more rigorous error evaluation, we derive a non-asymptotic upper bound for the L^∞-risk of the estimator with the adversarial training.
This result is also useful in studying convergence rates of the risk and discussing its optimality.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W) by deep neural networks.
Suppose Assumption <ref>, <ref>, and <ref> hold for some β > 0.
Then we have
𝔼[ ‖f̂ - f^*‖_L^∞^2 ] ≤ C_P_X,p,d,B,d,β h^-d( (WL)^2 log(WL) log n/n + (WL)^-4β/d + h^-dζ_n^2 ),
for every n ≥n̅ with some n̅∈ℕ.
This result gives some implications: (i) we develop an upper bound on the L^∞-risk of the estimator, and
(ii) the bound is proportional to h^-d, which appears when evaluating the L^∞-risk using the adversarial loss.
Note that we can select h as a fixed, strictly positive constant, and thus it does not affect the order of the bound in n.
More precisely, this upper bound consists of the three terms.
The first term O((WL)^2 log(WL) log n /n) is the complexity error, the second term O((WL)^-4β/d) is the approximation error of the deep neural network, and the third term O(ζ_n^2) is the error due to the preprocessing.
The complexity and approximation errors also appear in several risk bounds on an L^2-risk of deep neural network (e.g., Theorem 4.3 in <cit.>).
In contrast, the preprocessing error term is a new term needed to derive an upper bound on the L^∞-risk.
We derive the convergence rate of the L^∞-risk with respect to n.
Specifically, we select the width and depth of deep neural networks in order to balance the trade-off in the error terms presented in Theorem <ref>.
Consider the setting in Theorem <ref>.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W such that LW ≍ n^d/(4β + 2d), which balances the complexity and approximation error terms.
Then, we obtain the following as n →∞:
𝔼[ ‖f̂ - f^*‖_L^∞^2 ] = O( n^-2β / (2β + d)log^2 ∨β^* n ).
The rate obtained in Corollary <ref> is identical to the minimax optimal rate of risk measured in the sup-norm in the problem of estimating a function from ^β([0,1]^d) <cit.>.
Specifically, the derived rate corresponds to the following lower bound:
inf_f̅_nsup_f^* ∈^β([0,1]^d)[f̅_n - f^*_L^∞^2 ] = Ω̃( n^-2β / (2β + d)), (n →∞),
where f̅_n is taken from all estimators depending on the n observations.
Since the derived rate is the same as the lower bound, we show that the adversarial training estimator achieves the minimax optimal rate.
§.§ Proof Overview
We give an overview of proof of the main theorem.
As preparation, we introduce several notations related to adversarial training.
With h, an order p, and a base measure P, we define an adversarial (pseudo-)norm of f: [0,1]^d → ℝ and its empirical analogue
‖f‖_P,Δ^2 := 𝔼_X ∼ P[ max_x' ∈Δ_h^p(X) |f(x')|^2 ], ‖f‖_n,Δ^2 := n^-1∑_i=1^n max_x' ∈Δ_h^p(X_i) |f(x')|^2.
These norms correspond to the adversarial risks with a squared loss for the regression problem (<cit.>).
We also define an approximation error of deep neural networks in (L,W) as
Φ_L,W := inf_f ∈(L,W)f - f^*_L^∞.
This term represents an expressive power of neural networks in (L,W), which decreases as L or W increase (see <cit.> for an example).
We further use a uniform covering number of (L,W).
Let Q_n be an empirical measure with n samples.
Given δ∈ (0,1],
we define a δ-covering set of (L,W) as {f_1,...,f_N}⊂(L,W) and the uniform covering number from empirical process theory (e.g., <cit.>):
N_L,W(δ) := sup_Q_n N(δ, (L,W), ·_L^2(Q_n)),
where the supremum is taken over all possible empirical measures Q_n.
This notion is useful to evaluate the complexity of the set of deep neural networks, because it gives an upper bound without boundedness or sparsity of parameters of neural networks (See Lemma <ref>, for example).
Our proof consists of three main elements: (i) the derivation of an upper bound on the adversarial norm of the estimation error, (ii) the development of an upper bound on the L^∞ norm of the estimation error in terms of the adversarial norm, and (iii) a combination of the above results using the localization technique.
Each of these is described below.
In the first step, we derive an upper bound for the adversarial norm of the estimation error.
Rigorously, Lemma <ref> will state the following upper bound
[f̂ - f^*_P_X, Δ^2 ] ≤ C {[f̂ - f^*_n,Δ^2] + B^2 (log N_L,W(δ) +1)/n + δ B + δ^2 },
for any δ∈ (0,1) with some universal constant C> 0.
Furthermore, Proposition <ref> will bound the empirical adversarial norm [f̂ - f^*_n,Δ^2] as
[f̂ - f^*_n, Δ^2 ] ≤ C {([f̂ - f^*_L^∞^2 ]^1/2 +δ) ( log N_L,W(δ)/n + ζ_n )^1/2 + (Φ_L,W + ζ_n )^2 }.
We achieve these bounds by extending the empirical process technique by <cit.> to the adversarial norm.
There are several points worth noting: (i) the term Φ_L,W represents a bias and the term O(log N_L,W(δ) / n) represents a variance of the estimator, both of which are analogous to those of the least squares estimator; (ii) the variance term is described by the uniform covering number, which is useful for studying neural networks whose parameters are unbounded and non-sparse; and (iii) there is a term ζ_n representing the error of the preprocessing, unlike the case of the least squares estimator.
In the second step, we construct an upper bound for the sup-norm using the adversarial norm.
That is, we develop the following statement:
Consider the estimator as (<ref>) and the adversarial norm as (<ref>).
Suppose P_X satisfies Assumption <ref>.
Then, we have
f̂ - f^*_P_X, Δ^2≥ C_P_X,p,d h^d f̂ - f^*_L^∞^2 .
Intuitively, we utilize the similarity between the adversarial norm and the sup-norm to achieve the result.
That is, the maximization over Δ_h^p in the adversarial norm has a similar property to the sup-norm.
Using this property, we give an upper bound on the sup-norm while taking into account the volume of the hypercube.
We will give a generalized version of this result as Lemma <ref> in the supplementary material.
In the last step, we combine these results and derive the main statement of Theorem <ref>.
Here we apply the peeling argument to obtain convergence rates. Note that a simple combination of the above results would lose optimality.
To obtain the minimax optimal rate, we evaluate the approximation error and the uniform covering number based on the localization techniques.
§ APPLICATIONS
§.§ Extension to General Loss Function
§.§.§ Motivation and Setting
We can extend our adversarial training results to the case of non-squared loss functions.
Specifically, we can handle loss functions such as absolute value loss, quantile loss, and Huber loss, which are used in the presence of heavy-tailed noise.
This setting with deep neural networks is studied in <cit.>.
We introduce a generic loss function, which satisfies the following assumption:
A loss function ℓ: ℝ×ℝ→ℝ is symmetric, and ℓ(x,y) is Lipschitz-continuous in each of x and y with Lipschitz constant C_ℓ > 0.
Further, ℓ(y,x)=0 holds if and only if y=x, and there exist constants c_ℓ > 0 and q ≥ 1 such that
ℓ(y,x) ≥ c_ℓ |y-x|^q, ∀ x,y ∈ ℝ.
The class of loss functions satisfying Assumption <ref> includes several representative losses, e.g., the absolute loss ℓ(y,x) = |y-x|, the quantile loss ℓ(y,x) = (1{y ≥ x}τ + 1{y ≤ x}(τ - 1)) (y-x) for τ∈ (0,1), and the Cauchy loss ℓ(y,x) = log (1 + κ^2 (y-x)^2) for κ > 0.
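For concreteness, these three losses can be written as follows; this is a hedged Python/NumPy sketch, and the vectorized forms and default parameter values are our own choices.

import numpy as np

def absolute_loss(y, x):
    return np.abs(y - x)

def quantile_loss(y, x, tau=0.5):
    # ({y >= x} * tau + {y <= x} * (tau - 1)) * (y - x), i.e., the pinball loss
    return np.where(y >= x, tau, tau - 1.0) * (y - x)

def cauchy_loss(y, x, kappa=1.0):
    return np.log(1.0 + kappa ** 2 * (y - x) ** 2)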
We introduce an empirical risk function for adversarial training based on ℓ.
Using the neighbourhood set Δ_h^p(x) and the preprocessing Ŷ, we define an empirical risk function as
R̃_n(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ(Ŷ(x'), f(x')).
This loss function is a generalized version of the ordinary loss for the adversarial training (<ref>).
Using this notion, we define its minimizer as
f̃ ∈ argmin_{f ∈(L,W)} R̃_n(f).
§.§.§ Error Analysis
We study an L^∞-risk of this estimator by deriving a non-asymptotic upper bound.
The proof differs from that of Theorem <ref>, requiring a more general treatment of loss combined with adversarial training.
Consider the regression model (<ref>) and the adversarial estimator f̃ in (<ref>) with the function class by deep neural networks with a tuple (L,W) and h ∈ (0,1).
Suppose that Assumptions <ref> and <ref> hold for β > 0, that Assumption <ref> holds with ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0 and with Ŷ independent of {(X_i,Y_i)}_{i=1}^n,
and Assumption <ref> holds with q ∈ [1,∞).
Then, we have the following as n →∞:
𝔼[ ‖f̃ - f^*‖_L^∞^2 ] = O( h^-2d/q n^-β/(q(β + d))log^ (2/q) ∨β^* n ).
This result shows that the L^∞-risk is bounded with the setup with general loss functions.
The convergence rate of Proposition <ref> of the L^∞-risk corresponds to a convergence rate of excess risks derived by Theorem 4.2 in <cit.> under general losses.
The key to this result is the bound V on [Ŷ_L^∞^2] given in Assumption <ref>.
The independence of the preprocessing Ŷ is imposed for a technical reason; however, it is easy to satisfy.
For example, we can randomly split the observed data into two parts and then conduct the preprocessing using one of them.
The technical derivation is similar to that of Theorem <ref>.
First, we define an expected value of adversarial risk with the general loss and the preprocessing: for f ∈(L,W), we define
R(f) := _X [ sup_x' ∈Δ_h^p(X)ℓ(f(x'),Ŷ(x')) ].
Then, we derive an upper bound for an excess value of the risk R̃ (f̃) - R̃(f^*) in Proposition <ref>.
Next, we bound the L^∞-risk by properties of the expected adversarial risk as
f̃ - f^*_L^∞^q = O ( h^-d( R̃(f̃) - R̃(f^*) + Ŷ - f^*_L^∞)).
in Lemma <ref>.
This result is an extension of the bound for the L^∞-risk by the L^2-risk as shown in Lemma <ref>.
Combining the results, we obtain the result of Proposition <ref>.
§.§ Adaptation to Heterogeneous Smoothness with Besov Space
§.§.§ Motivation and Setting
In this section, we show that our proposed method can be adapted to estimate functions with heterogeneous smoothness, that is, we study the case that the true function f^* is an element of the Besov space (see <cit.> for an introduction).
The Besov space has an interesting property that linear estimators, a certain type of non-deep estimators, cannot estimate its elements with the optimal convergence rate.
First, we give the definition of the Besov space following <cit.>.
Note that there are several equivalent definitions for Besov spaces, and the following is based on the notion of difference of functions.
Consider parameters p,q ∈ (0,∞] and β > 0.
For r ∈ ℕ, h ∈ ℝ^d, and f:[0,1]^d → ℝ, we define the r-th difference of f at x ∈ [0,1]^d as
Δ_h^r[f](x) = 1{x + rh ∈ [0,1]^d}∑_j=1^r \binom{r}{j} (-1)^r-j f(x + jh).
We also define the r-th modulus of smoothness of f with u > 0 as
ω_r,p(f,u) = sup_‖h‖_2 ≤ u ‖Δ_h^r[f]‖_L^p(λ).
Recall that ·_L^p(λ) denotes the L^p-norm with the Lebesgue measure λ.
Using these notions, we define a ball in the Besov space as follows.
With r ∈ ℕ such that r > β, we define a semi-norm of f: [0,1]^d → ℝ as
‖f‖_B_p,q^β :=
( ∫_0^∞ (u^-βω_r,p(f,u))^q u^-1 du )^1/q if q < ∞,
sup_u > 0 u^-βω_r,p(f,u) if q = ∞.
Then, we define a ball of the Besov space with its radius B ≥ 1 as
B_p,q^β := { f: [0,1]^d → ℝ : ‖f‖_L^p(λ) + ‖f‖_B_p,q^β ≤ B }.
The Besov space can represent functions with discontinuity and heterogeneous smoothness, which means that the degree of smoothness of functions varies depending on x.
These properties follow from the fact that B_1,1^1 coincides with the space of functions of bounded total variation <cit.>.
An important property of heterogeneous smoothness is that deep estimators, such as deep neural networks, tend to have an advantage in estimating such functions.
Specifically, a linear estimator, which is a certain family of non-deep estimators <cit.>, becomes sub-optimal when estimating elements of the Besov space.
The linear estimator has a form f̂^lin(·) = ∑_i=1^n Ψ(·;X_1,...,X_n)Y_i with an arbitrary measurable map Ψ, and includes major estimators such as the kernel ridge estimator.
Then, Theorem 1 in <cit.> implies the following minimax lower bound with d=1 case:
min_f̂^lin max_f^* ∈ B_p,q^β 𝔼[ ‖f̂^lin - f^*‖_L^2(λ)^2 ] ≥ C n^-2 β' / (2β' + d ),
with some C > 0 and β' = β + 1/2 - 1/p.
In the case p < 2, the linear estimator is sub-optimal; its rate is slower than the minimax optimal rate Õ(n^-2 β / (2β + d )).
Several studies <cit.> show similar statements.
Therefore, it is important to estimate functions in the Besov space with deep neural networks, since it overcomes the limitations of linear estimators.
§.§.§ Error Analysis
We give a convergence rate of the adversarial estimator with deep neural networks and the preprocessing in (<ref>).
Note that we consider the adversarial risk (<ref>) based on the squared loss function.
We first give the following assumption.
There exists β > 0 such that f^* ∈ B_p,q^β' holds for every β' ∈ (0,β].
To estimate functions in the Besov space, we have to restrict a set of neural network functions.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ ℕ non-zero parameters and each parameter value is included in [-B̅, B̅] with B̅ ≥ 1, and then consider the minimizer of the empirical preprocessed adversarial risk (<ref>) over (L,W,S,B):
f̂ ∈ argmin_{f ∈(L,W,S,B)} R_n(f).
Then, we give the convergence rate of the estimator, which corresponds to the minimax optimal rate Õ(n^-2 β / (2β + d )) <cit.>.
Note that this rate is valid regardless of the values of p and q.
Fix p,q ∈ (0,∞].
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W,S,B) by deep neural networks.
Suppose that Assumption <ref>, and <ref> hold with β > d/p.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W as L ≥ C_d,p,β,Blog n, S ≍ W ≍ n^d/(2β + d), and B = O(n^a) with some a > 0.
Then, we obtain the following as n →∞:
𝔼[ ‖f̂ - f^*‖_L^∞^2 ] = O( n^-2β / (2β + d)log^3 ∨β^* n ).
The result shows that our estimator with deep neural networks inherits the advantages of both deep and non-deep estimators.
Rigorously, first, it achieves the minimax optimal rate up to log factors.
This optimality is not achieved by the linear estimator and is one of the advantages of using deep neural networks.
Next, the errors are convergent in the sup-norm sense.
This is not shown by deep neural network estimators using the least squares, and is achieved by adversarial training with preprocessing.
Note that the requirement on the preprocessing is satisfied by, for example, the wavelet estimator with β^* = 2β / (2β + d) <cit.>.
The proof of this proposition is a slight modification of the proof of Proposition <ref> in Appendix.
The main update is an analysis of the approximation error by deep neural networks to a function in the Besov space.
Here, we apply the seminal result by <cit.> on the approximation error in the sup-norm.
§ SIMULATIONS
In this section, we conduct simulation experiments to justify the theoretical results.
Specifically, we generate data from a function and then numerically compute the L^∞-risk of the proposed estimator and other standard methods.
We generate n samples from the regression model (<ref>) with the sample size n ∈{400,800,1200,1600} and the noise variance σ^2 ∈{0.0001,0.01,1.0}.
We consider the following three cases as values of f^* on [0,1]^d.
In Case 1, we set d=1 and f^*(x) = 0.3 sin(4 π x) - x + 0.5.
In Case 2, we set d=2 and f^*(x_1,x_2) = sin(4 π x_1) + cos(2 π x_2).
In Case 3, we set d=7 and f^*(x_1,x_2,...,x_7) = 2/(x_1 + 0.01) + 3 log (x_2^7 x_3 + 0.1) x_4 + 0.1 x_5^4 x_6^2 x_7.
For estimation, we use a three-layer fully-connected neural network with the ReLU activation function.
The width of each layer is 40.
For training, we use three methods: (i) adversarial training without preprocessing, (ii) adversarial training with preprocessing (our proposal), and (iii) ordinary least squares.
In the adversarial training case (i) and (ii), the value of h is set to 2^-3.
For the adversarial training, we employ the projected gradient descent algorithm <cit.>.
For the preprocessing, we employ the k-nearest neighbor with setting k=3.
To measure the L^∞-risk, we generate 10,000 uniform random points on the support [0,1]^d and use the maximum error over these points to approximate the risk.
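For reference, a minimal Python sketch of the Case-1 pipeline (n = 800, σ^2 = 0.01, h = 2^-3, k = 3) is given below, assuming PyTorch and NumPy are available. It is an illustration rather than the exact experimental code: the inner supremum over the L^∞ ball is approximated on a fixed grid of offsets instead of the projected gradient method, and the optimizer, training length, and variable names are our own assumptions.

import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n, h, k, sigma = 800, 2.0 ** -3, 3, 0.1
f_star = lambda x: 0.3 * np.sin(4 * np.pi * x) - x + 0.5            # Case 1 true function (d = 1)
X = rng.uniform(0.0, 1.0, size=(n, 1))
Y = f_star(X) + sigma * rng.standard_normal((n, 1))

def y_hat(q):                                                        # k-NN preprocessing Y_hat
    d2 = ((q[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    return Y[idx, 0].mean(axis=1, keepdims=True)

net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(),
                    nn.Linear(40, 40), nn.ReLU(), nn.Linear(40, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

offsets = np.linspace(-h, h, 9)                                      # grid approximation of the sup
cand = np.clip(X[None, :, :] + offsets[:, None, None], 0.0, 1.0)     # (9, n, 1) candidate points x'
targets = np.stack([y_hat(c) for c in cand])                         # surrogate outputs Y_hat(x')
xc = torch.tensor(cand, dtype=torch.float32)
yc = torch.tensor(targets, dtype=torch.float32)

for epoch in range(2000):
    loss = ((yc - net(xc)) ** 2).max(dim=0).values.mean()            # empirical risk R_n(f)
    opt.zero_grad(); loss.backward(); opt.step()

grid = rng.uniform(0.0, 1.0, size=(10000, 1))                        # approximate the L^inf risk
with torch.no_grad():
    pred = net(torch.tensor(grid, dtype=torch.float32)).numpy()
print("approximate sup-norm error:", float(np.abs(pred - f_star(grid)).max()))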
Figure <ref> shows the measured L^∞-risk against the sample size n.
We have mainly three findings:
(i) In approximately all cases, our proposed estimator from adversarial training with preprocessing monotonically reduces the L^∞-risk in n.
(ii) The adversarial estimators without preprocessing may or may not be as good as those with preprocessing.
This implies that the magnitude of the bias from adversarial training depends on the shape of the true function f^*.
(iii) In all cases, the L^∞-risk of the least squares estimator decreases at a slower rate or does not decrease at all.
This supports the possibility that training a deep neural network with least-squares may have difficulty in reducing the L^∞-risk.
§ CONCLUSION AND DISCUSSION
We consider the nonparametric function estimator by deep neural networks that converge in the sense of the sup-norm, i.e., L^∞-norm.
Since deep neural networks do not have a tractable structure such as a linear sum of basis functions as the conventional non-deep estimators, they are not guaranteed to converge in the sup-norm sense.
In this study, we tackle this problem by considering the estimator based on adversarial training.
For the bias due to the adversarial training, we solve this problem by introducing the preprocessing for the data.
As a result, our proposed corrected adversarial training estimator converges to the smooth true function at the minimax optimal rate in the sup-norm sense.
Our approach also remains valid for general loss functions and for functions with heterogeneous smoothness.
The experiments support our theoretical results.
Future research directions include sup-norm convergence for estimating non-smooth functions.
Although we expect that there are significant obstacles to the sup-norm convergence of estimators for the non-smooth functions, it is interesting to argue how far we can relax the conditions to estimate such functions.
Another direction is the application of uniform confidence bands for functions.
Our sup-norm convergence is useful for studying the uncertainty of neural network estimators and for constructing uniform confidence bands.
These directions may be a step toward statistical inference with deep neural networks.
§ PROOF FOR MAIN RESULT IN SECTION <REF>
§.§ Overview
We first develop a general theorem with arbitrary preprocessing, then apply the result and prove the results in Section <ref>.
For a preprocessed output Ŷ, we define its residual as
Ξ(x) := Ŷ(x) - f^*(x), x ∈ [0,1]^d.
This notion expresses an error in estimating the true function f^* by the preprocessing Ŷ.
Consider the regression model (<ref>) and the corrected adversarial estimator f̂ as (<ref>) with the function class (L,W) by deep neural networks.
Suppose that Assumption <ref> and <ref> hold.
Then, we obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d( W^2 L^2 log(WL) log n/n + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ] ).
We apply Lemma <ref> to bound the sup-norm as
f̂ - f^*_L^∞^2 ≤ 2(C_P_X,p,d h^d)^-1f̂ - f^*_P_X, Δ^2
Note that any f ∈(L,W) is continuous, since it has a form of deep neural network with the ReLU activation with continuity.
We then take an expectation of the bounds and apply Lemma <ref> and Proposition <ref> to obtain
[f̂ - f^*_P_X, Δ^2 ]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2
≤( 16[f̂ - f^*_L^∞^2 ]^1/2 + 40 δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2 + 4Φ_L,W^2+ 8 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2 ],
for δ∈ (0,1].
Note that both f ∈(L,W) and f^* are bounded, the expectations are guaranteed to exist.
We combine this fact with the above inequality to (<ref>), then obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d h^-d( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ C_P_X,p,dh^-d( B^2 log N_L,W(δ) + B^2/n + δ B + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + [Ξ_L^∞^2 ] ),
by setting δ≤ B ∨Φ_L,W, which will be verified later.
We arrange the terms in the above inequality.
For a,b ≥ 0 and z ∈ ℝ, z^2 ≤ az + b implies z^2 ≤ 3a^2 + 2b.
We apply this inequality with z = [f̂ - f^*_L^∞^2 ]^1/2 and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d{log N_L,W(δ)/n + δ + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ]
+ ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2δ}.
Further, we set δ = 1/n then Lemma <ref> shows
log N_L,W(1/n) = logsup_Q_n N(1/n, (L,W), ·_L^2(Q_n)) ≤ C W^2 L^2 log(WL) log (B n^2).
We substitute these results and obtain the statement.
Suppose P_X satisfies Assumption <ref> and f^* is continuous.
For any bounded and continuous f:[0,1]^d →, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d f - f^*_L^∞^2 .
We apply Lemma <ref> to achieve the statement.
To apply the lemma, we verify that the map x' ↦ (f(x') - f^*(x'))^2 is bounded and continuous by the compactness of the domain [0,1]^d and the assumptions.
Then, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d sup_x' ∈ [0,1]^d (f(x') - f^*(x'))^2 = C_P_X,p,d h^d f - f^*_L^∞^2 .
The inequality follows Lemma <ref> by setting g(·) = (f(·) - f^*(·))^2.
Suppose that all f ∈(L,W) are continuous, that f^* is continuous, and that f^*_L^∞≤ B holds.
Then, for any δ > 0, we have
[f̂ - f^*_P_X,Δ^2]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2.
Without loss of generality, we assume that N_L,W(δ) ≥ 3 and log N_L,W(δ) ≤ n.
Also, we define the nearest element of the covering set to f̂, that is, we define ĵ := _j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Let X_i' for i = 1,...,n be i.i.d. samples from P_X.
Note that Ŷ depends on X_1,...,X_n.
We give a bound on the following difference as
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
= | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f̂(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f̂(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2_=: g_ĵ(X_i,X_i')] |
+ 2 | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i) (f̂(x') - f_ĵ(x') + f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 [sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f^*_L^2(Q_n)^2 ]^1/2
+ 2 [ sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2]
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 δ[ f_ĵ - f^*_L^∞^2 ]^1/2+ 2 δ^2
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 8 δ B + 2δ^2.
Here, the second last inequality follows Lemma <ref> using the continuity of f^* and the f ∈.
The last inequality follows the definition of ĵ and the boundedness of f ∈ and f^* by B.
We further study the first term of the bound (<ref>).
As preparation, we define
r_j = Bmax{[f_j - f^*_P_X,Δ^2 ]^1/2 , (n^-1log N_L,W(δ))^1/2},
for j=1,...,N, and it yields
r_ĵ ≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f_ĵ(x') - f^*(x'))^2 ]^1/2 + B (n^-1log N_L,W(δ))^1/2
≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ.
Here, _X| X_1:n, Y_1:n[ · ] denotes a conditional expectation with given X_1,...,X_n and Y_1,...,Y_n.
By the law of iterated expectation, the first term of the bound is decomposed as
| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] |
= 1/n| [ ∑_i=1^n g_ĵ(X_i,X_i') /r_ĵ_=: g̃_ĵ(X_i,X_i')r_ĵ] |
≤1/n| [ ∑_i=1^n g̃_ĵ(X_i,X_i')( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ)] |
≤1/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2)] |
+ B/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( (n^-1log N_L,W(δ))^1/2 + δ)]^1/2|
≤B/n| [ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2 ]^1/2[f̂ - f^*_P_X,Δ^2 ]^1/2|
+ B/n[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]((n^-1log N_L,W(δ))^1/2 + δ)
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2+ B/n (6 log N_L,W(δ) + 11).
The first inequality follows (<ref>) and the second last inequality follows the Cauchy-Schwartz inequality.
We also apply Lemma <ref> and 1 ≤log N_L,W(δ) ≤ n to achieve the last inequality.
We substitute the result (<ref>) into the bound (<ref>), then obtain the inequality:
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2 + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2.
We rearrange the term and obtain that
[f̂ - f^*_P_X,Δ^2]
≤ 2 ([f̂ - f^*_n,Δ^2] + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2 ) + 8B^2(36 n log N_L,W(δ) + 256 n)/n^2.
Then, we obtain the statement.
Suppose that N_L,W(δ) ≥ 3.
For the function g̃_j(X_i,X_i') defined in the proof of Lemma <ref>, we have
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')] ≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2,
and
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2] ≤ 36 n log N_L,W(δ) + 256 n.
We first note that for any j = 1,...,N_L,W(δ), we have [g̃_j(X_i,X_i')] = 0, |g̃_j(X_i,X_i')| ≤ 4B^2 /r_j ≤ 4 n^1/2/ (log N_L,W(δ))^1/2 =: M, and
(g̃_j(X_i,X_i')) = 2 r_j^-2( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )
≤ 2 r_j^-2[ ( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )^2]
≤ 8 r_j^-2[f_j - f^*_P_X,Δ^2] B^2
≤ 8.
The second inequality follows Hölder's inequality.
Using the bounds above, we apply the Bernstein inequality as
( ∑_i=1^n g̃_j(X_i,X_i') ≥ t) ≤exp( - t^2/2t M/3 + 2n (g̃_j(X_1,X_1')))
≤exp( - t^2/8t n^1/2(log N_L,W(δ))^-1/2 /3 + 16n)
≤exp( - t^2/16t n^1/2(log N_L,W(δ))^-1/2 /3)
= exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2),
for t ≥ 6 (n log N_L,W(δ))^1/2.
The last inequality follows 8t n^1/2(log N_L,W(δ))^-1/2 /3 ≥ 16n for t larger than the threshold 6 (n log N)^1/2.
Using the result (<ref>) associated with t ≥ 6 (n log N_L,W(δ))^1/2, we bound the following expectation:
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2.
Then, the first statement is proved.
For the second statement, we similarly apply (<ref>) and obtain
Using the result (<ref>) associated with t ≥ 6 (n log N_L,W(δ))^1/2, we bound the following expectation:
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞exp( - 3t^1/2 (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 36 n log N_L,W(δ) + 256 n.
Then, the second statement is also proved.
Consider the setting in Theorem <ref>.
Then, for any δ∈ (0,1], we have
[f̂ - f^*_n,Δ^2] ≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ Φ_L,W^2+ 2 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2].
By the definition of the minimization problem, R_n(f̂) ≤ R_n(f) holds for any f ∈(L,W); hence we have the following basic inequality:
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
which can be rewritten as
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f(x'))^2.
We bound the both-hand side of (<ref>).
The left-hand side (LHS) of (<ref>) is lower bounded as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f̂(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f̂(x'))}
≥f^* - f̂_n,Δ^2 - Ξ_n,Δ^2 - 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))|,
by applying Lemma <ref>.
Similarly, we bound the right-hand side of (<ref>) as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f(x'))}
≤f^* - f_n,Δ^2 + Ξ_n,Δ^2 +2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|.
Combining (<ref>) and (<ref>) with (<ref>), we obtain
f^* - f̂_n,Δ^2 ≤f^* - f_n,Δ^2 + 2 Ξ_n,Δ^2 + 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))| _=: T_1
+ 2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|
≤Φ_L,W^2 + 2 Ξ_L^∞^2 + T_1 + 2 Ξ_L^∞Φ_L,W,
by the definition of Φ_L,W in (<ref>).
We will bound an expectation the terms.
Note that the expectations of the terms are guaranteed to exist, by the boundedness of f^* and f̂,f ∈(L,W), and Ŷ.
We bound [T_1].
We define the nearest element of the covering set to f̂, that is, we define ĵ := _j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Then, we bound [T_1] as
[T_1] = [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x') + f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| ] + [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') ( f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| f̂ - f^*_L^∞ + δ/f_ĵ - f^*_L^∞]
+ 2 [ sup_Q_nΞ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f̂_L^2(Q_n)^2]^1/2
≤[ (f̂ - f^*_L^∞ + δ) 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))|/f_ĵ - f^*_L^∞_=: Z_ĵ] + 2 [Ξ_L^∞^2 ]^1/2δ.
Since we have
|Z_j| ≤2/n∑_i=1^n | max_x' ∈Δ_h(X_i){| Ξ(x') | | (f^*(x') - f_j(x'))| }/f_j - f^*_L^∞| ≤ 2Ξ_L^∞,
for any j = 1,...,N,
the Cauchy-Schwartz inequality yields
[ (f̂ - f^*_L^∞ + δ) Z_ĵ] ≤[ (f̂ - f^*_L^∞ + δ)^2 ]^1/2[ Z_ĵ^2 ]^1/2
≤ 2( [f̂ - f^*_L^∞^2 ]^1/2 + δ)[ max_j=1,...,N_L,W(δ) Z_j^2 ]^1/2
≤ 4( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2.
The last inequality follows the maximal inequality (Theorem 3.1.10 in <cit.>) for the bounded random process.
Using this result, we obtain
[T_1] ≤ 4 ( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2 + 2 [Ξ_L^∞^2 ]^1/2δ
≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2.
We substitute the bound (<ref>) into the expectation of (<ref>), then obtain the statement.
Fix ε > 0 arbitrary.
Also, we fix C_* = C_P_X,p,d,B as used in the statement of Proposition <ref>.
By the universal approximation theorem (e.g. Theorem 1 in <cit.>) associate with the continuity of f^*, there exists a tuple (L',W') such that
Φ_L',W'≤√(ε h^d/( 4C_*)).
Further, by Assumption <ref>, there exists n̅∈ such that
[Ξ_L^∞^2] ≤√(ε h^2d/(4 C_*)).
Then, for all n ≥n̅, Proposition <ref> yields that
[f̂ - f^*_L^∞^2 ] ≤ C_* h^-d(W'L')^2 log(W'L') log n/n + 3 ε/4.
Then, for any n ≥ n' ∨ (4 C_* (W'L')^2 log(W'L') h^-dε^-1), we have [f̂ - f^*_L^∞^2 ] ≤ε/4 + 3ε/4 = ε, which shows the statement.
As preparation, Lemma <ref> gives the following bound
Φ_L,W≤ C_d,β (LW)^-2β/d.
With this bound on Φ_L,W, we apply Proposition <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( (WL)^2 log(WL) log n/n + (LW)^-4β/d+ [Ξ_L^∞] (LW)^-2s/d + h^-d[Ξ_L^∞^2] ).
Further, we have
(LW)^-4β/d+ [Ξ_L^∞] (LW)^-2s/d + h^-d[Ξ_L^∞^2] ≤{(LW)^-2β/d + h^-d/2[Ξ_L^∞^2]^1/2}^2,
by applying Jensen's inequality.
Arranging the terms, we obtain the statement.
We start with the inequality (<ref>) in the proof of Theorem <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( n^-2β/(2β+d) (log^2 n + 1) + [Ξ_L^∞] n^-β/(2β+d) + h^-d[Ξ_L^∞^2] )
by the setting WL ≍ n^d/(4s + 2d).
§ PROOF FOR APPLICATIONS
§.§ Proof for General Loss Setting
We give proofs of the result in Section <ref>.
Consider the setting in Proposition <ref>.
Then, we have for n such that log N(1/n) ≥ 1:
[R̃ (f̃) - R̃(f^*)] ≤C_ℓ, B ( log N_L,W(1/n) + V^2 )/n^1/2 + C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
This proof is similar to Lemma 3.1 in <cit.>.
A difference between <cit.> and our result is that a property of the loss depends on f in our setting, so we have to modify it.
Hence, we write down the proof.
We develop the proof in the following four steps: (i) a basic decomposition, (ii) bounding a variance, (iii) bounding a bias, and (iv) combining every bound.
Step 1: Basic decomposition.
We define i.i.d. copies of the observations D := {(X_i,Y_i)_i=1^n} as D' := {(X_i',Y_i')_i=1^n}, and also define an excess loss as
g(x,Ŷ,f) = sup_x' ∈Δ_h^p(x)ℓ(f(x'), Ŷ(x')) - sup_x' ∈Δ_h^p(x)ℓ(f^*(x'), Ŷ(x'))
We further define empirical means of the excess loss as G_n(f) := n^-1∑_i=1^n g(X_i,Ŷ,f) with the observations D, and G_n'(f) := n^-1∑_i=1^n g(X_i',Ŷ,f) with the copies D'.
Since f̂ is independent to D', we can rewrite the expected risk as
[R̃(f̂) - R̃(f^*)] = [ _D'[G_n'(f̂) ]].
Since f̂ is the minimizer of the empirical risk and the loss is bounded, we obtain the following inequality of expectations:
[G_n(f̂)] ≤[G_n(f) ],
for any f∈(L,W).
We set f such that f - f^* _L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Using this fact, we decompose the excess risk as
[R̃(f̂) - R̃(f) ] = [ _D'[ G_n'(f̂)]] ≤[ - 2G_n(f̂) + _D'[ G_n'(f̂)]_=:] + 2[ G_n(f)_=: ].
The inequality follows (<ref>).
Step 2: Bound the variance [].
We bound an expectation of the term .
By the boundedness of both Ŷ and f̃ by Assumption <ref> and (<ref>), the expectation [] exists.
We prepare additional notations.
Fix δ∈ (0,1].
We consider a covering set {f_j}_j=1^N_L,W(δ)⊂, then we pick f_j from the set such that sup_Q_nf_j - f̃_L^2(Q_n)≤δ.
We define a term g̃(X_i,Ŷ,f̃) by the following reform of as
= 1/n∑_i=1^n {_D'[ G_n'(f̃)] - 2 g(X_i,Ŷ,f̃) } =: 1/n∑_i=1^ng̃(X_i,Ŷ,f̃),
which yields the following form
[] = [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)]
= [1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_:= _1] + [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)- 1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_=: _2] .
We will bound both [_1] and [_2], separately.
We bound the term [_2].
Since g in (<ref>) is Lipschitz continuous in f with its Lipschitz constant C_ℓ by Lemma <ref>, we easily see that g̃ is Lipschitz continuous in f with its Lipschitz constant 6C_ℓ.
Thus, we obtain that
[_2] ≤| [1/n∑_i=1^ng̃ (X_i,Ŷ,f̃)] - [1/n∑_i=1^ng̃ (X_i, Ŷ,f_j)] | ≤ 6 C_ℓδ.
Next, we bound the term [_1].
Here, we need to consider a uniformly bounded function y:[0,1]^d → [-B,B]
For each f_j in the covering set, t > 0, and the bounded function y, we use the Bernstein inequality to derive its stochastic upper bound.
As preparation, we consider a threshold B_n ≥ 1 depending on n and define a clipped preprocessing Ŷ_B_n(·) := max{min{Ŷ(·), B_n}, -B_n}.
We firstly approximate [_1] by the Lipschitz continuity of ℓ as
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ - Ŷ_B_n_L^∞].
Since |Ŷ(x) - Ŷ_B_n(x)| = |Ŷ(x)| {|Ŷ(x)| ≥ B_n} holds, we can bound the expectation in the second term of the right-hand side as
[Ŷ - Ŷ_B_n_L^∞] = [ sup_x ∈ [0,1]^d |Ŷ(x)| {|Ŷ(x)| ≥ B_n}|]
≤[ sup_x ∈ [0,1]^d |Ŷ(x)| sup_x ∈ [0,1]^d{|Ŷ(x)| ≥ B_n}|]
≤[Ŷ_L^∞{Ŷ_L^∞≥ B_n}]
≤[Ŷ_L^∞^2 / B_n].
The last inequality follows {x ≥ 1}≤ x for any x ≥ 0.
The existence of the second moment is guaranteed by Assumption <ref>.
We put this result to (<ref>) and obtain
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ_L^∞^2 / B_n].
Then, we bound the first term [n^-1∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)].
Since we have |g(x,Ŷ_B_n,f)| ≤ C_ℓ ( B_n ∨ B) for any x ∈ [0,1]^d and f: f_L^∞≤ B, we obtain the following inequality with fixed Ŷ_B_n:
( 1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t)
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 2/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t )
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'[ g(X_i',Ŷ_B_n,f_j)] )
≤(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'(g(X_i, Ŷ_B_n, f_j))/4 C_ℓ B_n)
≤exp( - n(t')^2/2 _D'(g(X_i, Ŷ_B_n, f_j)) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/2 t' C_ℓ ( B_n ∨ B) + C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/16 t' C_ℓ ( B_n ∨ B) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - 3 n t'/64 C_ℓ ( B_n ∨ B))
≤exp( - 3 n t/128 C_ℓ ( B_n ∨ B)).
The first and third inequalities follow _D'(g(X_i, Ŷ_B_n, f_j)) ≤ 4 C_ℓ B_n _D'[g(X_i, Ŷ_B_n, f_j)], and the second and last inequalities follows a setting t' = t/2 + _D'(g(X_i, Ŷ_B_n, f_j))/(8 C_ℓ (B ∨ B_n)).
Using this inequality for a uniform bound in terms of the covering set {f_j}_j=1^N_L,W(δ) and the independent random functions Ŷ and Ŷ_B_n, we obtain
( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) ≤ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ).
Then, by the maximal inequality (Corollary 2.2.8 in <cit.>), for any η > 0, we have
[max_j=1,...,N_L,W(δ)[1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j)]]
≤η + ∫_η^∞( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) dt
≤η + ∫_η^∞ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ) dt
≤η + N_L,W(δ) (128 C_ℓ ( B_n ∨ B))/3nexp( - 3 n η/ 128 C_ℓ ( B_n ∨ B) ) .
We set B_n = n^1/2, hence we have (B ∨ B_n) ≤ C_B n^1/2.
Also, we set η = (128 C_B,ℓ n^1/2) log N_L,W(δ) / (3n) and put this result into (<ref>), we obtain
[_1] ≤[max_j=1,...,N[1/n∑_i=1^ng̃ (X_i,Ŷ,f_j)]] ≤C_ℓ,B (log N_L,W(δ) + [Ŷ_L^∞^2 ])/n^1/2.
Combining the inequalities (<ref>) and (<ref>) into (<ref>) and set δ = 1/n, we obtain
[] ≤(2 C_ℓ^2 B_2 + C_ℓ B/3) (log N_L,W(1/n) + [Ŷ_L^∞^2 ])/n^1/2.
Step 3: Bound the bias [].
By the Lipschitz continuity of the loss ℓ by Assumption <ref>, we have
[] = [ 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ( f̅(x'), Ŷ(x')) ]
≤[ sup_x ∈[0,1]^dℓ( f̅(x), Ŷ(x)) ]
≤[sup_x' ∈[0,1]^d C_ℓ |f̅(x) - Ŷ(x)| + ℓ(Ŷ(x), Ŷ(x)) ]
≤ C_ℓ[f̅ - Ŷ_L^∞]
≤ C_ℓ (f̅ -f^*_L^∞ + [f^*- Ŷ_L^∞ ])
≤ C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
The last inequality holds by setting f such that f - f^* _L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Step 4: Combining the bounds.
We combine the result in Step 3 and Step 4 into the decomposition (<ref>), then obtain the statement.
Consider the expected adversarial risk R̃(·) with general losses as (<ref>).
Then, for the estimator f̃ as (<ref>) and q ∈ [1,∞), we have
f^* - f̃_L^∞^q ≤ C_P_X,p,d,ℓ,q h^-d( R̃(f̃) - R̃(f^*) + Ξ_L^∞^q ∨Ξ_L^∞).
We develop a lower bound of R̃(f̃) - R̃(f^*) as
R̃(f̃) - R̃(f^*) = _X[sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f̃(x')) - sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f^*(x')) ]
≥ C_P_X,p,d h^d sup_x ∈ [0,1]^d |ℓ(Ŷ(x'), f̃(x'))| - C_ℓŶ - f^*_L^∞
≥ C_P_X,p,d,ℓ h^d Ŷ - f̃_L^∞^q - C_ℓΞ_L^∞
≥ C_P_X,p,d,ℓ,q h^d ( f^* - f̃_L^∞^q - Ξ_L^∞^q ) - C_ℓΞ_L^∞ .
Here, the first inequality follows Lemma <ref> and the Lipschitz continuity of ℓ by Assumption <ref>, and the last inequality follows (a+b)^q ≤ C_q (a^q + b^q) for q ∈ [1,∞) and a,b ≥ 0.
By Proposition <ref> and Lemma <ref>, we have
[f^* - f̃^2_L^∞] ≤ C_P_X, p,d,ℓ,q h^-2d/q( [(R̃(f̃) - R(f^*))^2/q] + [ Ξ_L^∞^2] )
≤ C_P_X,B, p,d,ℓ,q, h^-2d/q{(log N_L,W(1/n) /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }
≤ C_P_X,B, p,d,ℓ,q,V h^-2d/q{( W^2L^2 log(WL) log n /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }.
The last inequality follows Lemma <ref>.
We set WL ≍ n^d/(4β + 4d) and obtain the statement.
§.§ Proof of Adaptation to Besov Space
We give proof of the result in Section <ref>.
To show the statement, we slightly modify the proof of Proposition <ref>.
We start from the inequality (<ref>) with setting δ = 1/n.
Since we use (L,W,S,B) as a set of candidate functions instead of (L,W), we obtain the following updated inequality of (<ref>) as
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B h^-d{logÑ_L,W,S,B(1/n)/n + Φ̃_L,W,S,B^2 + ζ_n^2 },
which replaces N_L,W(1/n) by Ñ_L,W,S,B(1/n) := sup_Q_n N(1/n, (L,W,S,B), ·_L^2(Q_n)) and Φ_L,W by Φ̃_L,W,S,B := inf_f ∈(L,W,S,B)f - f^*_L^∞.
We study the terms Ñ_L,W,S,B(1/n) and Φ̃_L,W,S,B.
For the approximation error term Φ̃_L,W,S,B, we apply Lemma <ref> by setting r = ∞ and obtain
Φ̃_L,W,S,B≤ C_d,β N^-β/d,
for sufficiently large N such that L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N.
About the entropy term Ñ_L,W,S,B(1/n), we apply Lemma <ref> and obtain
logÑ_L,W,S,B(1/n) ≤log N(1/n, (L,W,S,B), ·_L^∞)
≤ LS log(n LB(1+S))
≤ C_d,β L^2 N log (n L^2 B N)
≤ C_d,p,β,B N log^2(N) log (nN log^2(N)),
by substituting the setup of L,S,W and B.
We substitute (<ref>) and (<ref>) into (<ref>) and obtain
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B,β h^-d{ N log^2(N) log (nN log^2(N))/n + N^-2β/d + ζ_n^2 }.
We set N ≍ n^d/(2β + d) and obtain the statement.
§ SUPPORTIVE RESULT
Consider a non-negative bounded continuous function g:[0,1]^d →_+.
Then, we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ P_X(Δ_h^p(x^*)),
with x^* ∈_x ∈ [0,1]^d g(x).
Further, if Assumption <ref> holds, then we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ h^d C_P_X,p,d.
Let A := {x ∈ [0,1]^d | g(x) = max_x' ∈ [0,1]^d g(x')} be a set of argmax of g(x), which is non-empty because of the compactness of [0,1]^d and boundedness/continuity of g.
Also, we define a union Δ_A := ∪_x ∈ AΔ_h^p({x}).
By the non-negativity of g, we obtain
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥_X[sup_x' ∈Δ_h^p(X) g(x') {X ∈Δ_A }]
= _X[sup_x ∈ [0,1]^d g(x) {X ∈Δ_A }]
= g_L^∞ P_X(Δ_A).
Hence, we obtain the first statement.
We consider that Assumption <ref> holds.
We develop a lower bound of P_X(Δ_A) as
P_X(Δ_A) ≥inf_x ∈ A P_X( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ A P_X( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ [0,1]^dλ( Δ_h^p({x})),
where C_P_X is a lower bound of a density function of P_X defined in Assumption <ref>, and λ(·) is the Lebesgue measure.
Since the Lebesgue of the L^p-ball is known, we obtain that
inf_x ∈ [0,1]^dλ( Δ_h^p({x})) = Γ(1/p + 1)^d/Γ(d/p + 1)h^d,
where Γ (·) is the Gamma function.
Then, we obtain the second statement.
We develop the following covering number bound.
The following lemma immediately holds by <cit.> and <cit.>.
Consider the set of deep neural networks as (<ref>) with the depth L, the width W, and the upper bound B.
For any δ > 0 and sufficiently large n, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ C W^2 L^2 log(WL) log (B n /δ).
Let D be the VC-dimension of , and S(≤ W^2 L) be a number of parameters in .
By Theorem 3 in <cit.>, we bound the VC-dimension as D = O(S L log(S)) ≤ O(W^2 L^2 log (WL)).
Using this inequality and Theorem 12.2 in <cit.>, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ D log( en B/δ D) ≤ C W^2 L^2 log(WH) log (B n /δ).
for n = Ω(W^2 H^2 log (WH)).
Consider a non-empty compact set T ⊂^d with some d and continuous bounded functions f,f':T →.
Then, we have
|sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t) | ≤f_L^∞f'_L^∞ + f'_L^∞^2.
We define the optimal values t^* ∈ T and t^†∈ T such that sup_t ∈ T(f(t) + f'(t))^2 = (f(t^*) + f'(t^*))^2 and sup_t ∈ Tf(t) ^2 = f(t^†)^2.
Note that such t^* ∈ T and t^†∈ T exist by the compactness of T and the continuity of f and f'.
We first derive the following inequality
sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t) ^2 ≤ f(t^*)^2 + 2 f(t^*)f'(t^*) + f'(t^*)^2 - f(t^*)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Second, we develop a bound for the opposite side as
sup_t ∈ Tf(t)^2 - sup_t ∈ T(f(t) + f'(t))^2 ≤ f(t^†)^2 - (f(t^†) + f'(t^†))^2
≤ 2f(t^†) f'(t^†) - f'(t^†)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Then, we obtain the statement.
For any continuous and bounded functions f,g on a compact set I, we have
max_t ∈ I (f(t) + g(t)) ≥max_t ∈ I f(t) - max_t ∈ I |g(t)|.
Let t' ∈ I be a point such that max_t ∈ I (f(t) + g(t)) = f(t') + g(t'), which is guaranteed to exist by the compactness of I and the boundedness/continuity of f,g.
The statement simply follows
max_t (f(t) + g(t)) = f(t') + g(t') ≥ f(t') - |g(t')| ≥max_t(f(t)) - max_t |g(t')|.
Consider functions f,f', y: [0,1]^d → [-B,B], and a loss function ℓ satisfying Assumption <ref>.
Also, consider a function g as in (<ref>).
For any x ∈ [0,1]^d, we have
g(x,y,f) - g(x,y,f') ≤ C_ℓ |f(x̅) - f'(x̅)|,
for some x̅∈ [0,1]^d.
We define x^* ∈Δ_h^p(x) such that ℓ(y(x^*), f(x^*)) = max_x' ∈Δ_xℓ(y(x'), f(x')).
Its existence follows the continuity of f, f',y, and ℓ.
For f,f' ∈ L^2([0,1]^d), we have
g(x,y,f) - g(x,y,f') = max_x' ∈Δ_h^p(x)ℓ(y(x'),f(x')) -max_x' ∈Δ_h^p(x)ℓ(y(x'),f'(x'))
≤ℓ(y(x^*),f(x^*)) - ℓ(y(x^*),f'(x^*))
≤ C_ℓ |f(x^*) - f'(x^*)|.
The first inequality follows max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')) = ℓ(y(x^*), f(x^*)), and the second inequality follows the Lipschitz continuity of ℓ in the second argument from Assumption <ref>.
Thus, we obtain the statement.
Fix N,M ∈ ℕ arbitrarily.
If (L,W) is a set of functions with W= C_d (N+2) log_2 (8N) and L= C_s (M+2) log_2 (4M) + 2d, we have
inf_f ∈(L,W)sup_f^* ∈ C^s_1([0,1]^d)f - f^*_L^∞≤ C_d,s N^-2s/d M^-2s/d.
Fix p,q,r∈ (0, ∞] and β∈ (0,∞).
Suppose that β > d max{1/p-1/r, 0} holds.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ ℕ non-zero parameters and each parameter value is included in [-B̅, B̅] with B̅ ≥ 1.
Let N be a sufficiently large number and set L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N, and B̅ is a polynomially increasing in N.
Then, we have
sup_f^0 ∈_p,q^βinf_f ∈(L,W,S,B)f^0 - f_L^r(λ)≤ C N^-β/d,
with some constant C > 0 independent of N.
For ε∈ (0,1], we obtain
log N(ε, F(L,W,S,B)) ≤ LS log(ε^-1 LB(1+S)).
§ PROOF OF INCONSISTENCY
We first specify the coordinates of the setting.
We consider two points x = (0.3, 0.5, 0.5, ...,0.5), x' = (0.7,0.5, 0.5, ...,0.5)∈ [0,1]^d, and a marginal measure as a mixture of Dirac measures on the points; P_X = 0.5 δ_{x} + 0.5 δ_{x'}.
We also specify the true function with an input x = (x_1,...,x_d) ∈ [0,1]^d as f^*(x) = - {x_1 < 0.4} + 10 (x_1 - 0.5){0.4 ≤ x_1 ≤ 0.6} + {x_1 > 0.6}, and the noise variable ξ_i as a uniform random variable on [-0.1,0.1].
For the adversarial training, we set p=∞ and h = 0.5.
We study an empirical risk minimizer in this setting.
Since the inputs X_1,...,X_n are either of x or x', we set n_1 := |{i: X_i = x}| and n_2 := |{i: X_i = x'}| such that n = n_1 + n_2.
With the specified coordinates above, we rewrite an empirical risk of f:[0,1]^d → with the adversarial training as
1/n∑_i=1^n max_x ∈Δ_h^p(X_i) (Y_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈ [0,1]^d: x_1 ∈ [0,0.8] (ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈ [0,1]^d: x_1 ∈ [0.2,1] (1 + ξ_i - f(x))^2,
which follows f^*(x) = 0 and f^*(x') = 1.
To minimize this empirical risk in terms of f, we restrict a class of f.
Specifically, we set f with an input x = (x_1,...,x_d) as having a form f(x) = c_1 {x_1 ≤ 0.2} + c_2 {0.2 < x_1 < 0.8} + c_3 {0.8 ≤ x_1} with some constants c_1,c_2,c_3 ∈.
This form of f can minimize the risk, since the risk depends only on the value of f on each region.
Then, we rewrite the risk as
(<ref>) =1/n∑_i: X_i = xmax{ (ξ_i - c_1)^2 , (ξ_i - c_2)^2} + 1/n∑_i: X_i = x'max{ (1 + ξ_i - c_2)^2 , (1 + ξ_i - c_3)^2 }.
Here, we consider an event |n_1/2 - n/2| ≤ 1, which appears with probability 1-2 exp(-2/n) ≥ 0.5 with n ≥ 3, by Hoeffding's inequality.
In this case, a simple calculation yields that c_2 ∈ [-0.2, 0.2] minimizes the (<ref>) since it prevents quadratic growth of the risk in terms of c_2, which gives (ξ_i - c_1)^2 < (ξ_i - c_2)^2 and (1 + ξ_i - c_2)^2 > (1 + ξ_i - c_3)^2.
Then, we rewrite the risk (<ref>) as
(<ref>) = 1/n∑_i: X_i = x (ξ_i - c_2)^2 + 1/n∑_i: X_i = x'(1 + ξ_i - c_2)^2,
and its minimization over c_2 yields the following optimal choice
c_2^* = n_2 - n_1/n + 1/n∑_i=1^n ξ_i.
Then, we have that the original risk (<ref>) is minimized by the following function
f̌(x) := c_1^* {x_1 ≤ 0.2} + c_2^* {0.2 < x_1 < 0.8} + c_3^* {0.8 ≤ x_1},
where c_1^* = n_1^-1∑_i: X_i = xξ_i and c_3^* = n_2^-1∑_i: X_i = x'ξ_i.
Finally, we bound the L^∞-risk of f̌ from below.
Simply, we have
f̌ - f^*_L^∞^2 ≥f̌ - f^*_L^2(P_X)^2
= _X ∼ P_X[ (f̌(X) - f^*(X) )^2 ]
= 1/2{ (f̌(x) - f^*(x) )^2 + (f̌(x') - f^*(x') )^2}
= 1/2{ (c_2^* +1 )^2 + (c_2^* - 1)^2}
= 1 + (c_2^*)^2
≥ 1.
Hence, we show the statement of Proposition <ref>.
http://arxiv.org/abs/2307.04520v1 | 20230710124155 | Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor | [
"San Jiang",
"Yichen Ma",
"Qingquan Li",
"Wanshou Jiang",
"Bingxuan Guo",
"Lelin Li",
"Lizhe Wang"
] | cs.CV | [
"cs.CV"
] |
Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor
San Jiang,
Yichen Ma,
Qingquan Li,
Wanshou Jiang,
Bingxuan Guo,
Lelin Li,
and Lizhe Wang
S. Jiang, Y. Ma, and L. Wang are with the School of Computer Science, China University of Geosciences, Wuhan 430074, China; S. Jiang is also with the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen 518060, China, and with the Hubei Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences, Wuhan 430078, China. E-mail: [email protected], [email protected], [email protected]. (Corresponding author: Lizhe Wang)
Q. Li is with the College of Civil and Transportation Engineering, Shenzhen University, Shenzhen 518060, China, and also with the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen 518060, China. E-mail: [email protected].
W. Jiang and B. Guo are with the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430072, China. E-mail: [email protected], [email protected].
L. Li is with the Provincial Key Laboratory of Geo-information Engineering in Surveying, Mapping and Remote Sensing, Hunan University of Science
and Technology, Xiangtan 411201, China. E-mail: [email protected].
August 12, 2023
SfM (Structure from Motion) has been extensively used for UAV (Unmanned Aerial Vehicle) image orientation. Its efficiency is directly influenced by feature matching. Although image retrieval has been extensively used for match pair selection, high computational costs are consumed due to a large number of local features and the large size of the used codebook. Thus, this paper proposes an efficient match pair retrieval method and implements an integrated workflow for parallel SfM reconstruction. First, an individual codebook is trained online by considering the redundancy of UAV images and local features, which avoids the ambiguity of training codebooks from other datasets. Second, local features of each image are aggregated into a single high-dimension global descriptor through the VLAD (Vector of Locally Aggregated Descriptors) aggregation by using the trained codebook, which remarkably reduces the number of features and the burden of nearest neighbor searching in image indexing. Third, the global descriptors are indexed via the HNSW (Hierarchical Navigable Small World) based graph structure for the nearest neighbor searching. Match pairs are then retrieved by using an adaptive threshold selection strategy and utilized to create a view graph for divide-and-conquer based parallel SfM reconstruction. Finally, the performance of the proposed solution has been verified using three large-scale UAV datasets. The test results demonstrate that the proposed solution accelerates match pair retrieval with a speedup ratio ranging from 36 to 108 and improves the efficiency of SfM reconstruction with competitive accuracy in both relative and absolute orientation.
structure from motion, 3D reconstruction, match pair selection, unmanned aerial vehicle, feature matching
§ INTRODUCTION
UAV (Unmanned aerial vehicle) images have become one of the primary data sources for surveying and mapping in photogrammetry and remote sensing (RS). Compared with satellite and aerial-based RS platforms, UAVs have the characteristics of high flexibility, high timeliness, and high resolution <cit.>. UAV images have been widely exploited in various applications, e.g., urban 3D modeling <cit.>, transmission line inspection <cit.>, and precision agriculture management <cit.>. With the increasing endurance of UAV platforms and the explosive usage of multi-camera instruments, efficient image orientation for large-scale UAV images has become one of the most critical modules for photogrammetric systems <cit.>.
SfM (Structure from Motion) has become a well-known technology for recovering camera poses and 3D points without the requirement of their good initial values <cit.>. SfM has been extensively adopted in 3D reconstruction <cit.> for both ordered and unordered UAV images. In the workflow of SfM, a view graph is a basic structure to guide feature matching and parameter solving, which is defined as an undirected weighted graph with the vertices and edges indicating images and their overlap relationships <cit.>. Retrieving match pairs is pre-required in view graph construction. The purpose of match pair retrieval is to find overlapped image pairs to guide subsequent feature matching, which increases the reliability and efficiency of SfM reconstruction. Thus, retrieving appropriate match pairs efficiently and accurately becomes one of the core issues in SfM for large-scale UAV images.
In the literature, existing methods for retrieving match pairs can be divided into two categories, i.e., prior knowledge-based and visual similarity-based methods. The former depends on prior information, such as the sequential constraint in data acquisitions <cit.> or depends on prior data from onboard POS (Positioning and Orientation System) sensors <cit.> to calculate image ground footprints. Although these methods are very efficient, their usage is limited to the special configurations of data acquisition or depends on the precision of the prior data from used RS platforms. Without relying on other auxiliary data, visual similarity-based methods merely use images to calculate similarity scores between two images and determine overlapped match pairs by selecting images with the highest similarity scores. The most commonly used solution is CBIR (Content-Based Image Retrieval). The core idea of CBIR is to encode detected local features, e.g., SIFT (Scale Invariant Feature Transform) <cit.>, into high-dimension vectors, and the problem of retrieving match pairs is then cast as calculating the similarity score between two of these high-dimension vectors <cit.>. In the fields of photogrammetry and computer vision, vocabulary tree <cit.> based image retrieval has become the most classic method that converts local features into high-dimension BoW (Bag-of-Words) vectors <cit.>.
In vocabulary tree-based image retrieval, the similarity calculation uses an inverted index that establishes the relationship between visual words and corresponding local features <cit.>. However, building the inverted index is time-consuming for high-resolution and large-size UAV images. On the one hand, high-resolution UAV images lead to tens of thousands of local features from an individual image, which causes high computational costs in searching the nearest visual word via ANN (Approximate Nearest Neighbor) searching; on the other hand, large-volume UAV images requires an extremely large codebook to increase the discriminative ability of aggregated BoW vectors, which causes the millions of vector dimensions and further increases computational costs in ANN searching. In addition, the codebook is usually created offline from public datasets due to the high time costs of generating a large codebook. Thus, this study proposes an efficient and accurate solution for match pair retrieval. The core idea is to adopt a global descriptor for image representation and explore graph indexing for efficient ANN searching of high-dimension vectors. Our main contributions are summarized: (1) An individual codebook is trained online using random selection and scale restriction strategies to reduce image and feature redundancies. (2) Local features of each image are aggregated into a high-dimension global descriptor through a VLAD (Vector of Locally Aggregated Descriptors) aggregation that extremely reduces the number of features and the burden of nearest neighbor searching in image indexing. (3) VLAD descriptors are indexed into an HNSW (Hierarchical Navigable Small World) based graph structure for the ANN (Approximate Nearest Neighbor) searching, and match pairs are retrieved using an adaptive threshold selection strategy, which is used to create a view graph to for divide-and-conquer based parallel SfM reconstruction. (4) The performance of the proposed solution is verified by using large-scale UAV images and compared with other well-known software packages.
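To make the retrieval pipeline concrete, the following Python sketch (our illustration, not the released implementation of this paper) aggregates per-image local descriptors into VLAD vectors with an assumed k-means codebook and indexes them with the hnswlib implementation of HNSW; the toy data, the parameter values (M, ef_construction, ef), and the fixed top-k query are assumptions, whereas the proposed method selects match pairs with an adaptive threshold.

import numpy as np
import hnswlib

def vlad(local_desc, codebook):
    # local_desc: (m, 128) local descriptors of one image; codebook: (K, 128) visual words.
    K, D = codebook.shape
    assign = np.argmin(((local_desc[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    v = np.zeros((K, D), dtype=np.float32)
    for c in range(K):
        members = local_desc[assign == c]
        if len(members):
            v[c] = (members - codebook[c]).sum(axis=0)       # sum of residuals to the visual word
    v = np.sign(v) * np.sqrt(np.abs(v))                      # power normalization
    v = v.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)                   # global L2 normalization

# toy stand-ins for SIFT descriptors and a trained codebook (both assumed for illustration)
rng = np.random.default_rng(0)
descriptors_per_image = [rng.random((500, 128), dtype=np.float32) for _ in range(100)]
codebook = rng.random((64, 128), dtype=np.float32)
vlad_matrix = np.stack([vlad(d, codebook) for d in descriptors_per_image])

# HNSW graph index over the global descriptors and top-k retrieval of candidate match pairs
index = hnswlib.Index(space="cosine", dim=vlad_matrix.shape[1])
index.init_index(max_elements=len(vlad_matrix), ef_construction=200, M=16)
index.add_items(vlad_matrix, np.arange(len(vlad_matrix)))
index.set_ef(64)
labels, dists = index.knn_query(vlad_matrix, k=20)            # nearest images for each query image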
The structure of this study is organized as follows. Section <ref> gives a literature review of match pair retrieval and nearest neighbor searching. Section <ref> presents the detailed procedures of the proposed match pair retrieval algorithm and the workflow of the parallel SfM solution. Section <ref> conducts a comprehensive evaluation and comparison using UAV datasets. Finally, Section <ref> gives the conclusions of this study and directions for future research.
§ RELATED WORK
This study focuses on match pair retrieval to improve the efficiency of SfM reconstruction. Thus, this section reviews match pair selection and nearest neighbor searching.
§.§ Prior knowledge-based methods
For photogrammetric data acquisition, there are usually two categories of prior knowledge, i.e., the data acquisition configuration and the auxiliary data from onboard sensors. For the former, image match pairs are usually obtained according to the timestamp or the data acquisition sequence <cit.>. Following this principle, Cheng et al. <cit.> proposed a strategy to connect sequential images for image localization and stereo-pair dense matching, which uses optical images sequentially acquired by a UAV to achieve real-time 3D reconstruction of disaster areas. For the latter, image match pairs are usually obtained according to camera mounting angles or onboard POS (position and orientation system) data. Using the projection centers of images, Rupnik et al. <cit.> searched for the images that lie within a specified distance threshold of the target image. With the orientation data provided by onboard navigation systems, image footprints on a specified elevation plane can be calculated, and image match pairs can be obtained through a pairwise intersection test between the image footprints <cit.>. In the work of <cit.>, ground coverages of images are calculated using POS data, and image match pairs are determined by testing the intersection of ground coverages. Although these methods are highly efficient, their accuracy depends on the prior knowledge used.
§.§ Visual similarity-based methods
Compared with prior knowledge-based methods, these methods select match pairs using image content alone. They can be grouped into two categories: the first is based on the number of matched correspondences, while the second uses similarity scores computed from image descriptors. For the former, two images are labeled as a valid match pair when the number of matches surpasses a threshold, as in the multi-scale strategy <cit.> and the preemptive matching strategy <cit.>. For the latter, images are quantified as descriptors, and the similarity score between two images is calculated as the distance between their descriptors. One of the most classic methods is vocabulary tree-based image retrieval <cit.>. Using a trained vocabulary tree, this method quantizes extracted local features into word frequency vectors, i.e., BoW (Bags-of-Words) vectors. The distance between the vectors represents the similarity score between the images <cit.>. These methods can quickly obtain correct match pairs on small datasets, but they become inefficient for large-scale datasets. In addition to the above-mentioned methods, neural network-based methods have been proposed recently. Yan et al. <cit.> proposed a match pair selection method based on a GCN (Graph Convolutional Network) and used it to judge whether overlapping areas exist between images. This method performs remarkably well on challenging datasets from ambiguous and duplicated scenes. However, its efficiency is very low for high-resolution UAV images.
§.§ Nearest neighbor searching
NN searching aims to find the vectors closest to a query vector within a large set of database vectors. In the context of match pair selection, the NN searching in vocabulary tree-based image retrieval is solved as an ANN searching problem, which determines the efficiency of image retrieval. In the literature, existing ANN searching methods can be divided into three categories, i.e., tree-based, hashing-based, and graph-based methods. Tree-based methods use a tree structure to partition the search space, and the KD-Tree is one of the most well-known data structures <cit.>; it has been used extensively in image retrieval algorithms <cit.> and software packages, e.g., COLMAP <cit.> and AliceVision <cit.>, because of the relatively low dimension of the feature descriptors used, such as the 128-dimension SIFT (Scale Invariant Feature Transform) descriptor. However, the efficiency of tree-based methods decreases dramatically for high-dimension vectors, where they are no better than brute-force searching. To increase ANN searching efficiency, hashing-based methods convert continuous real-value vectors into discrete binary codes using hashing functions. In this category, LSH (Locality-Sensitive Hashing) attempts to hash similar vectors into the same cell with high probability <cit.>. Consequently, ANN searching can be executed within the cell that the query vector also falls into. Compared with tree-based methods, the hash operation reduces high-dimensional input vectors to low-dimensional terms by using a set of hash functions whose number is much smaller than the dimension of the input vectors, which helps avoid the curse of dimensionality affecting tree-based methods. Due to their high efficiency, LSH-based methods have been used for large-scale image retrieval, such as web community and remote sensing images <cit.>. These methods, however, have lower precision caused by the binary hashing encoding, as well as high memory consumption for storing the hashing functions. In contrast to splitting the search space, graph-based methods create a graph data structure to organize database vectors and achieve efficient ANN searching based on graph indexing. NSW (Navigable Small World) <cit.> and HNSW (Hierarchical NSW) <cit.> are two typical graph-based methods. NSW adopts an approximation of the Delaunay graph and uses the same operation for vertex insertion and querying. NSW can achieve efficient and accurate searching thanks to the long-distance edges created at the beginning, which form a navigable small world and reduce the number of hops. HNSW is an improved version of NSW that builds a multi-layer structure to speed up ANN searching. In the work of <cit.>, HNSW has been used to replace the KD-Tree in image retrieval, and good acceleration has been achieved in match pair selection. However, unacceptable time consumption is still required for processing large-scale UAV images due to the large number of local features.
§ METHODOLOGY
This study proposes an efficient and accurate match pair retrieval method for large-scale UAV images and implements a parallel SfM solution guided by the view graph constructed from the retrieved match pairs. The core idea is to use global descriptors for image representation and to explore a graph indexing structure for the ANN searching of high-dimension vectors. The workflow of the complete SfM reconstruction is shown in Figure <ref>, in which the inputs are UAV images without other auxiliary data. First, a codebook is trained online by selecting a subset of UAV images and scale-restricted features. Second, with the aid of the codebook, each image's local features are aggregated into a single high-dimension vector according to VLAD. Third, the VLAD vectors are indexed into an HNSW-based graph structure to achieve highly efficient ANN searching, and match pairs are retrieved based on the HNSW index and refined using an adaptive selection strategy. Finally, after feature matching guided by the retrieved match pairs, a weighted view graph is constructed, which is used for the scene partition and parallel SfM reconstruction of large-scale UAV images.
§.§ Vocabulary tree-based image retrieval
Vocabulary tree-based image retrieval mimics text retrieval, which encodes a document as a feature vector by using trained words and casts document searching as the distance calculation between feature vectors <cit.>. The most important techniques are the inverted file for word-image indexing and the TF-IDF (Term Frequency and Inverse Document Frequency) weighting of similarity scores <cit.>.
The workflow of vocabulary tree-based image retrieval consists of four major steps. First, local features with descriptors, e.g., SIFT, are extracted from training images; second, a vocabulary tree is hierarchically built from the extracted descriptors by using a clustering algorithm, e.g., the K-means, whose leaf nodes indicate the generated visual words; third, all images are indexed by searching the nearest visual word for all extracted feature descriptors, and an inverted file is simultaneously built for each visual word, which builds the indexing relationship between visual words and image features; finally, the same indexing operation is executed for an input query image, and the similarity score between the query and database images can be calculated by using their corresponding BoW vectors. Suppose there is a vocabulary with V words, and each image can be represented by a BoW vector v_d=(t_1,...,t_i,...,t_V). The component t_i is calculated according to Equation <ref>
t_i = (n_id / n_d) · log(N / N_i)
where n_id and n_d indicate the number of occurrences of word i in image d and the total number of words in image d, respectively; N_i is the number of images that contain word i, and N is the total number of images in the database. The component t_i consists of two parts, i.e., the term frequency (TF) n_id/n_d and the inverse document frequency (IDF) log(N/N_i), which indicate the occurrence frequency of word i in image d and the importance of word i among the database images, respectively. After generating the BoW vectors, the similarity score of any two images can be quantified as the dot product of their corresponding BoW vectors.
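To make the weighting concrete, the following sketch (an illustration, not the authors' implementation) builds TF-IDF weighted BoW vectors from precomputed visual-word assignments and scores an image pair by the dot product; the function and variable names are ours.

```python
import numpy as np

def tfidf_bow(word_ids_per_image, vocab_size):
    """word_ids_per_image[d] lists the visual-word id assigned to each
    local feature of image d; returns one L2-normalised BoW row per image."""
    n_images = len(word_ids_per_image)
    counts = np.zeros((n_images, vocab_size))
    for d, words in enumerate(word_ids_per_image):
        for w in words:
            counts[d, w] += 1                                              # n_id
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)         # n_id / n_d
    idf = np.log(n_images / np.maximum(np.count_nonzero(counts, axis=0), 1))  # log(N / N_i)
    bow = tf * idf
    # L2-normalise so that the dot product of two rows acts as a similarity score
    return bow / np.maximum(np.linalg.norm(bow, axis=1, keepdims=True), 1e-12)

# usage: bow = tfidf_bow(word_ids, V); score_01 = float(bow[0] @ bow[1])
```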
As the number of database images increases, the efficiency of vocabulary tree-based image retrieval decreases dramatically. The main bottleneck is building the inverted index. On the one hand, the high resolution of UAV images leads to a large number of extracted features, which causes high computational costs in the ANN searching needed to build the inverted file; on the other hand, with an increasing number of database images, a larger codebook with more visual words must be used to preserve the discriminative power of the BoW vectors, which further increases the burden of ANN searching and the subsequent similarity calculation. Therefore, considering these issues, this study proposes an efficient image retrieval solution that combines the VLAD descriptor and HNSW indexing. The former aggregates local feature descriptors into a high-dimensional global vector using a very small codebook, which avoids the high computational costs of image indexing; the latter is utilized to accelerate the ANN searching of the high-dimensional VLAD vectors. This study integrates the proposed solution into a parallel SfM workflow for large-scale image orientation. The details are described in the following sections.
§.§ Codebook generation considering image and feature redundancy
Local features are first detected from the UAV images as training data. In recent years, UAV platforms have been able to record building facades and observe ground targets from multiple viewing directions. Due to the large differences in viewing directions and the obvious changes in illumination and scale, feature matching becomes non-trivial for oblique UAV images <cit.>. Considering these issues, the SIFT algorithm is used to extract local features. In this study, to balance the accuracy and efficiency of subsequent match pair selection, 8,129 local features with the highest scales are extracted for each image, and each feature descriptor is represented as a vector of dimension 128.
Using the extracted local features, a codebook can be generated for the aggregation of local features into the VLAD descriptor. In general, there are two ways to generate a codebook, i.e., online generation for each dataset and offline generation for all datasets. While the second way accelerates online processing by avoiding the training of an individual codebook, it cannot represent the characteristics of a specific dataset and provides inferior retrieval performance. Therefore, the preferred way is to generate a codebook for each individual UAV dataset <cit.>. However, generating such a codebook can be very time-consuming because the large data volume and high spatial resolution of UAV images produce a very large number of descriptors. For UAV images, there are two kinds of redundancy. The first is image redundancy, due to the high overlap degree required to ensure the success of subsequent image orientation; the second is feature redundancy, due to the high spatial resolution of UAV images. These two kinds of redundancy can be exploited to reduce the number of descriptors used in codebook training. On the one hand, the number of visual words required for VLAD aggregation is much smaller than that required for BoW indexing <cit.>; only a very coarse quantization of the descriptor space is needed for VLAD aggregation. On the other hand, the characteristics of an image can be represented by a subset of its features with large scales. Thus, this study adopts a random sampling strategy to select a subset p of training images and a scale restriction strategy to select a subset h of descriptors with large scales. Based on the work in <cit.>, the parameters p and h are set to 20% and 1,500, respectively.
After selecting the training descriptors, the codebook with k clusters is generated using the K-means clustering algorithm <cit.>: 1) pick k cluster centers randomly; 2) assign each descriptor to its nearest cluster center; 3) calculate the mean vector of each cluster and use it as the new cluster center; 4) repeat steps 2) and 3) until a maximum number of iterations is reached or the algorithm converges. After clustering, the k cluster centers constitute the codebook C={c_1,c_2,c_3,...,c_k}. The number of cluster centers k is closely related to the performance of the match pair retrieval algorithm. On the one hand, the accuracy of match pair retrieval is reduced when k is too small; on the other hand, when k is too large, generating the codebook consumes more memory and the efficiency of subsequent feature aggregation and image retrieval is reduced. Thus, a proper choice of k is critical for match pair retrieval.
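The sampling and clustering steps above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: scikit-learn's K-means stands in for the Lloyd's implementation they use, and the helper name, array layout and random seed are our assumptions; the 20% / 1,500 values are the ones reported above.

```python
import numpy as np
from sklearn.cluster import KMeans   # Lloyd's algorithm

def train_codebook(descriptors_per_image, scales_per_image, k=256,
                   image_ratio=0.2, feats_per_image=1500, seed=0):
    """Train a small VLAD codebook on a random subset of images and, within
    each selected image, on its largest-scale descriptors only."""
    rng = np.random.default_rng(seed)
    n = len(descriptors_per_image)
    chosen = rng.choice(n, size=max(1, int(image_ratio * n)), replace=False)
    train = []
    for i in chosen:
        keep = np.argsort(scales_per_image[i])[::-1][:feats_per_image]  # largest scales first
        train.append(descriptors_per_image[i][keep])
    train = np.vstack(train).astype(np.float32)
    return KMeans(n_clusters=k, n_init=1, random_state=seed).fit(train).cluster_centers_
```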
§.§ Adaptive match pair retrieval via global descriptor and graph indexing
§.§.§ Global descriptor from the aggregation of local features
Some solutions are designed for aggregating local features to global vectors, e.g., the BoW that counts the term frequency of words. However, the number of words in the trained codebook should increase simultaneously with the number of involved images. It would cause high time costs for large-scale image indexing. Instead of the term frequency counting, VLAD accumulates residuals between local feature descriptors and their corresponding cluster centers and achieves high discriminative power using a very small-size codebook. Based on the observation, this study uses VLAD to aggregate local features into global descriptors <cit.>.
For N extracted local features of an image, the VLAD descriptor is obtained by iterating feature descriptors assigned to the same cluster center and calculating the sum of the residuals between these feature descriptors and the cluster center. The final VLAD descriptor is a concatenation of residual vectors generated from all cluster centers. Supposing that there are k cluster centers in the trained codebook C, the VLAD descriptor v consists of k vectors with the same dimension d=128 as the used SIFT descriptor. Therefore, the calculation of an element v_k,j in the VLAD descriptor v is presented by Equation <ref>
v_k,j = ∑_i=1^N a_k(d_i) · (d_i(j) - c_k(j))
where j is the dimension index of the feature descriptors, i.e., j=1,2,...,d; a_k(d_i) is an indicator function: a_k(d_i)=1 when the feature descriptor d_i belongs to the visual word c_k, and a_k(d_i)=0 otherwise. Based on this formulation, an image is represented as a k× d VLAD descriptor. Compared with the BoW vector, the VLAD descriptor uses residual vectors to encode the input image. To generate a feature vector of the same dimension, far fewer visual words are required in the trained codebook; the reduction factor equals the dimension d=128 of the descriptors used. Besides, component-wise and global L2-normalization are applied sequentially to the generated VLAD descriptors. Noticeably, the VLAD aggregation can be executed in parallel because it is independent for each cluster center.
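A compact sketch of this aggregation, assuming the descriptors and the codebook are NumPy arrays; it is illustrative rather than the authors' C++ implementation.

```python
import numpy as np

def vlad(descriptors, centers):
    """Aggregate an image's local descriptors (N x d) into a (k*d)-dim VLAD
    vector, given the k x d codebook 'centers'."""
    # hard-assign every descriptor to its nearest cluster centre
    d2 = ((descriptors ** 2).sum(1)[:, None]
          - 2.0 * descriptors @ centers.T
          + (centers ** 2).sum(1)[None, :])
    assign = d2.argmin(axis=1)
    v = np.zeros_like(centers, dtype=float)
    for c in range(len(centers)):        # independent per centre, hence parallelisable
        sel = descriptors[assign == c]
        if len(sel):
            v[c] = (sel - centers[c]).sum(axis=0)      # residual accumulation
    v /= np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-12)  # component-wise L2
    v = v.ravel()
    return v / max(np.linalg.norm(v), 1e-12)                          # global L2
```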
§.§.§ Match pair retrieval based on Graph-indexed global descriptors
Match pairs can be selected by nearest neighbor searching between VLAD descriptors. Recently, graph-based solutions have attracted considerable attention because of their high precision and promising efficiency when dealing with high-dimension descriptors. HNSW <cit.> is one of the well-known graph-based search algorithms and is built upon the NSW (Navigable Small World) search method <cit.>. HNSW uses a hierarchical structure to build a vector index graph and increase retrieval efficiency, mimicking a coarse-to-fine searching strategy. The bottom layer includes all vertices, and the number of vertices decreases gradually from the bottom to the top layers. In the retrieval stage, once the query vector enters, the HNSW index is searched from top to bottom, which restricts the search for the next nearest neighbor to the child nodes in the next layer. The nearest neighbors in the bottom layer are the retrieval results. Thus, HNSW is used in this study for high-dimension VLAD vector indexing and match pair retrieval. The VLAD descriptors are first organized into a graph structure G={V, E}, in which V and E respectively represent the vertex set composed of the VLAD descriptors and the edge set composed of their connection relationships. To achieve efficient indexing and retrieval, the maximum number of connections for each vertex is restricted to M, termed the friend number. This parameter M influences the efficiency and precision of image retrieval.
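Since the implementation section reports using the HNSW algorithm of the FAISS package, a minimal indexing sketch could look as follows; M = 32 and the 300 retrieved neighbours follow the values used later in the paper, while the efConstruction/efSearch settings are our own illustrative choices.

```python
import numpy as np
import faiss

def build_hnsw_index(vlad_matrix, M=32, ef_construction=200, ef_search=400):
    """Index the n x (k*d) matrix of VLAD descriptors with an HNSW graph."""
    index = faiss.IndexHNSWFlat(vlad_matrix.shape[1], M)   # M = friend number per vertex
    index.hnsw.efConstruction = ef_construction
    index.hnsw.efSearch = ef_search
    index.add(np.ascontiguousarray(vlad_matrix, dtype=np.float32))
    return index

# usage: index = build_hnsw_index(vlads)
#        distances, ids = index.search(np.ascontiguousarray(vlads, dtype=np.float32), 300)
```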
In match pair retrieval, the number of returned items must be specified appropriately. The optimal value should adapt to the data acquisition configuration and is mainly affected by the image overlap degree; it varies for each data acquisition and even for each UAV image. However, it is usually set as a fixed number or ratio in the classical image retrieval pipeline. In this study, an adaptive selection strategy is adopted to select the number of retrieved images <cit.>. The core idea originates from the fact that images with larger overlap areas have higher similarity scores, and the similarity scores decrease dramatically as the overlap area decreases. Image pairs without overlap areas, in contrast, have very small similarity scores with no obvious changes among them, as illustrated in Figure <ref>. Thus, the distribution of similarity scores is fitted well by a power function with coefficients a and b, as presented in Equation <ref>
y = a · x^b
where x and y indicate the image ids and similarity scores, respectively. Using the mean μ and standard deviation δ of similarity scores between one query and database images, a horizontal separation line y=μ+kδ can be defined, and database images with similarity scores above the separation line are labeled as the retrieval results. Noticeably, in the HNSW-based image retrieval, the Euclidean distance instead of the similarity score has been returned. In this study, inverse linear normalization is used to calculate similarity scores. Suppose that m items are retrieved with distance D={d_1,d_2,d_3,...,d_m}, the similarity score is calculated based on Equation <ref>
s_i = (d_max - d_i) / (d_max - d_min)
where d_min and d_max indicate the minimal and maximal values in D, respectively. Thus, this equation converts the Euclidean distance to the similarity score that ranges from 0 to 1. Besides, the separation line y=μ+kδ is mainly influenced by the mean μ and standard deviation δ. With the increase of used samples to fit the power function, the separation line y would go down and retain more retrieved results. Thus, according to practical experiences, the number of used samples is set as 300 in this study.
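The distance-to-similarity conversion and the separation line y = μ + kδ can be sketched as below for a single query; the power-law fit is omitted here for brevity, and the function name and the default value of the coefficient are our assumptions.

```python
import numpy as np

def adaptive_select(distances, ids, kappa=1.0):
    """Convert the HNSW distances of one query into similarity scores and
    keep the database images lying above the line y = mu + kappa * delta."""
    d_min, d_max = distances.min(), distances.max()
    scores = (d_max - distances) / max(d_max - d_min, 1e-12)   # inverse linear normalisation
    mu, delta = scores.mean(), scores.std()
    return ids[scores > mu + kappa * delta]
```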
§.§ View graph construction from retrieved match pairs
False match pairs inevitably exist because of repetitive image patterns and non-optimal parameters in image retrieval. In this study, local feature matching and geometric verification are conducted to filter false matches. Guided by initial match pairs, local feature matching is performed by finding the nearest neighbors from two sets of features based on the Euclidean distance between feature descriptors, in which the cross-checking and ratio test have also been utilized. To further refine the initial matches, the epipolar geometry based on the Fundamental matrix is utilized to remove false matches, which can be robustly estimated in the RANSAC (Random Sampling Consensus) framework <cit.>. Finally, the match pairs with the number of refined matches greater than 15 are retained.
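For illustration, the matching-plus-verification step can be sketched with OpenCV (the authors use SIFTGPU features and their own pipeline); the 0.8 ratio and the 1.0-pixel RANSAC threshold are the values stated in this section, the cross-check is omitted for brevity, and the helper name is ours.

```python
import cv2
import numpy as np

def verified_matches(desc_i, kpts_i, desc_j, kpts_j, ratio=0.8, ransac_px=1.0):
    """Ratio-test matching followed by RANSAC epipolar verification; kpts_*
    are (N x 2) keypoint coordinates aligned with the descriptor rows."""
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc_i, desc_j, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 8:
        return []
    pts_i = np.float32([kpts_i[m.queryIdx] for m in good])
    pts_j = np.float32([kpts_j[m.trainIdx] for m in good])
    F, mask = cv2.findFundamentalMat(pts_i, pts_j, cv2.FM_RANSAC, ransac_px, 0.999)
    if F is None or mask is None:
        return []
    return [m for m, ok in zip(good, mask.ravel()) if ok]
```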
A view graph can be created using the retained match pairs and their feature matches. In this study, the view graph is represented as an undirected weighted graph G={V, E}, in which V and E indicate the vertex set and the edge set, respectively <cit.>. Suppose that I={i_i} and P={p_ij} are respectively the n images and the m match pairs. The graph G is constructed as follows: a vertex v_i is added for each image i_i, and all vertices form the vertex set V={v_i}; an edge e_ij connecting vertices v_i and v_j is added for each match pair p_ij, and all edges form the edge set E={e_ij}. To quantify the importance of match pairs, an edge weight w_ij is assigned to the edge e_ij. In the context of SfM-based image orientation, the number of feature matches and their distribution over the image planes directly influence the overall performance. Thus, w_ij is calculated by Equation <ref>
w_ij=R_ew× w_inlier+(1-R_ew)× w_overlap
where R_ew is the weight ratio between w_inlier and w_overlap, which is set as 0.5 similar to the work in <cit.>. w_inlier is the weight item related to the number of feature matches; w_overlap is the weight item related to the distribution of feature matches. These two items are calculated respectively according to Equations <ref> and <ref>
w_inlier = log(N_inlier)/log(N_max_inlier)
w_overlap = (CH_i + CH_j) / (A_i + A_j)
where N_inlier and N_max_inlier indicate the number of matched correspondences of the match pair and the maximum number of matched correspondences among all match pairs; CH_i and CH_j represent the convex hull areas of feature matches over two images; A_i and A_j represent the areas of two image planes. In our study, the Graham-Andrew algorithm <cit.> is used to detect convex hulls of feature matches.
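A sketch of this edge-weight computation is given below; SciPy's Qhull-based convex hull is used here instead of the Graham-Andrew algorithm mentioned above, and the function signature is our assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def edge_weight(pts_i, pts_j, n_inlier, n_max_inlier, area_i, area_j, r_ew=0.5):
    """Weight of a view-graph edge; pts_* are (m x 2) inlier keypoint
    coordinates on the two images, area_* the image-plane areas in pixels."""
    w_inlier = np.log(n_inlier) / np.log(n_max_inlier)        # assumes n_max_inlier > 1
    hull_area = ConvexHull(pts_i).volume + ConvexHull(pts_j).volume  # 2-D 'volume' is the area
    w_overlap = hull_area / (area_i + area_j)
    return r_ew * w_inlier + (1.0 - r_ew) * w_overlap
```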
§.§ Parallel SfM reconstruction guided by view graph
In this study, an incremental SfM is used to estimate camera poses and scene structures. Incremental SfM, however, suffers from low efficiency due to the sequential registration of images and the iterative local and global bundle adjustment. For large-scale scenes, this issue becomes very pronounced and limits the application of SfM in recent photogrammetric systems. To overcome the problem, this study adopts the divide-and-conquer strategy to split a large-size reconstruction into small-size sub-reconstructions. Thus, the sub-reconstructions can be handled well, and parallel techniques can be utilized to improve efficiency. Figure <ref> illustrates the basic principle of the designed parallel SfM solution <cit.>, which includes four major steps described as follows:
* First, after creating the view graph G, the scene is divided into small-size clusters {G_i} with strong inner connections. The scene clustering is implemented through the NC (Normalized Cut) algorithm <cit.>, which removes the edges with smaller weights and ensures a good connection of the vertices in each cluster (a minimal clustering sketch is given after this list).
* Second, an incremental SfM engine is then executed parallelly for each cluster G_i, which generates an individual model for each cluster. In this study, the well-known incremental SfM engine, COLMAP <cit.>, has been utilized to implement the parallel reconstruction of each cluster.
* Third, cluster merging is performed by iteratively merging two sub-models, which converts the individual models into an entire model in the same global coordinate system. In this step, the merging order is critical as it affects the robustness and precision of cluster merging. In this study, the number of common 3D points between models is used to sort the merging order, which is calculated efficiently through a correspondence graph established between two clusters <cit.>.
* Finally, a final global bundle adjustment is executed for the merged global model. Since the number of optimization parameters would be very large, a tie-point selection strategy is adopted to decrease the number of 3D points in BA optimization. As documented in <cit.>, tie-point selection is achieved based on four metrics, i.e., re-projection error, overlap degree, image coverage, and number limitation.
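As referenced in the first step above, a minimal clustering sketch is given below; spectral clustering on the weighted adjacency matrix is used as a stand-in for the normalized cut implementation of the paper, and the 500-image cluster size follows the experimental setting reported later.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering

def cluster_view_graph(G, max_cluster_size=500):
    """Partition the weighted view graph G (edge attribute 'weight') into
    clusters of at most roughly max_cluster_size images."""
    nodes = list(G.nodes())
    n_clusters = max(1, int(np.ceil(len(nodes) / max_cluster_size)))
    affinity = nx.to_numpy_array(G, nodelist=nodes, weight="weight")
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                assign_labels="discretize").fit_predict(affinity)
    return {c: [n for n, l in zip(nodes, labels) if l == c] for c in set(labels)}
```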
§.§ Algorithm implementation
This study implements the solution of match pair retrieval and parallel SfM reconstruction using the C++ programming language, as presented in Algorithm <ref>. In detail, for feature extraction, the SIFTGPU <cit.> library is used with its default parameter settings; for the generation of the codebook, Lloyd's K-means clustering algorithm <cit.> is used; in addition, we have implemented an algorithm for the aggregation of SIFT features into VLAD descriptors and adopted the HNSW algorithm in the FAISS package <cit.> for graph indexing; based on our previous work <cit.>, we have embedded the match pair retrieval and view graph construction method into the parallel SfM workflow, in which the software package ColMap <cit.> has been selected as the incremental SfM engine.
§ EXPERIMENTS AND RESULTS
In the experiment, three UAV datasets have been collected to evaluate the performance of the proposed solution. First, according to the efficiency and precision of match pair selection, we analyze the influence of key parameters, i.e., the number of cluster centers k for the codebook generation and the maximum number of neighboring vertices M in HNSW. Second, we conduct the match pair selection and SfM-based 3D reconstruction of the three UAV datasets using the selected parameter setting. Third, we compared the proposed SfM solution with four well-known software packages, i.e., two open-source software packages ColMap <cit.> and DboW2 <cit.> and two commercial software packages Agisoft Metashape and Pix4Dmapper, to evaluate the performance of match pair selection and SfM reconstruction. In the study, all experiments are executed on a Windows desktop computer with 64 GB memory, four Intel 2.40 GHz Xeon E5-2680 CPUs, and one 10 GB NVIDIA GeForce RTX 3080 graphics card.
§.§ Test sites and datasets
Three UAV datasets with different sizes are used for the performance evaluation. Figure <ref> shows the sample images in each dataset, and the detailed information is listed in Table <ref>. The description of each dataset is presented as follows:
* The first dataset consists of 3,743 images taken from a university campus covered by dense and low-rise buildings. The dataset is captured by a DJI Phantom 4 RTK UAV equipped with one DJI FC6310R camera. The images with 5,472 by 3,648 pixels are collected under the flight height of 80 m, and the GSD (Ground Sample Distance) is approximately 2.6 cm.
* The second dataset includes 4,030 images taken of a complex university building. It is captured using a DJI M300 RTK UAV equipped with one DJI Zenmuse P1 camera with a dimension of 8,192 by 5,460 pixels. It is worth mentioning that this dataset has been collected based on optimized views photogrammetry <cit.>, which adjusts camera viewpoints and directions according to the geometry of the ground objects. The GSD is approximately 1.2 cm. For absolute orientation, 26 GCPs (Ground Control Points) were collected using a total station, whose nominal accuracy is about 0.8 and 1.5 cm in the horizontal and vertical directions, respectively.
* The third dataset is recorded by a penta-view oblique photogrammetric instrument equipped with five SONY ILCE 7R cameras with 6,000 by 4,000 pixels. This test site is mainly covered by low-rise buildings and dense vegetation, and a river crosses the site. Under the flight height of 87.1 m, a total of 21,654 images has been collected with a GSD of 1.21 cm.
§.§ The influence of parameters K and M
For the proposed match pair retrieval solution, two critical parameters directly influence the efficiency and precision of image indexing and retrieval, i.e., the visual word number k in the generation of the trained codebook and the friend number M in the graph-based indexing. The former determines the dimension of the VLAD vectors; the latter determines the maximum number of connections of each vertex to others in the HNSW graph. Thus, this section analyzes their influence on retrieval efficiency and precision.
For the evaluation, dataset 1 has been selected, and two metrics are used for performance evaluation: retrieval efficiency and precision. The retrieval efficiency is the total time costs consumed in match pair selection; the retrieval precision is calculated as the ratio between the number of correct match pairs and the number of all match pairs. In this test, the retrieval time includes time costs in VLAD-based feature aggregation, HNSW-based graph construction, and image retrieval. To avoid the influence of the adaptive selection, the retrieval number is fixed as 30, and match pairs with at least 15 true matches are defined as positive results.
For the analysis of the parameter k, the values of 32, 64, 128, 256, 512, and 1024 are tested. Figure <ref> presents the statistical results of efficiency and precision in the match pair selection, in which Figure <ref> and Figure <ref> respectively indicate the efficiency and precision. It is clearly shown that with the increase of k, the time costs increase exponentially, from 45.7 seconds to 175.5 seconds, with the value ranging from 32 to 1024, respectively. The main reason is that a larger k leads to more time costs in the nearest cluster center searching for VLAD feature aggregation and increases the dimension of generated VLAD descriptors, which further poses a burden in HNSW graph indexing and retrieval. On the contrary, we can observe that the retrieval precision increases linearly with the increase of the parameter k, which increases from 0.81 to 0.94 within the specified span. To balance efficiency and precision, the parameter k is set as 256 in the following tests.
For the analysis of the parameter M, the values of 6, 8, 10, 12, 16, 32, and 64 are used, and the statistical results are presented in Figure <ref>. We can see that: (1) the changing trend of retrieval efficiency in Figure <ref> can be divided into two parts. In the first part, the retrieval efficiency is almost constant with the value M increasing from 6 to 16; in the second part, the retrieval efficiency decreases dramatically with the value M increasing from 16 to 64; (2) the changing trend of retrieval precision in Figure <ref> can be separated into three stages. In the first stage, the retrieval precision increases obviously with the value M increasing from 6 to 8; in the second stage, the retrieval precision keeps constant within the value range from 8 to 16; in the third stage, the retrieval precision decreases gradually within the value range from 16 to 64. It is worth mentioning that k has a greater impact on retrieval efficiency than M because most time costs are spent in VLAD aggregation. Besides, M affects the number of valid NN neighbors that can be retrieved. Considering that at least 300 valid NN neighbors should be retrieved in the adaptive selection, the parameter M is set as 32 in the following tests.
§.§ Match pairs selection and 3D reconstruction
§.§.§ Match pairs selection by the proposed retrieval method
By using the selected parameters k and M, the performance of match pair selection is first evaluated. Similarly, retrieval efficiency and precision are used as the metrics for performance evaluation. Table <ref> lists the statistical results of match pair selection. It is clearly shown that high retrieval precision has been achieved for the three datasets, which are 90.1%, 89.9%, and 94.4% for the three datasets, respectively. It ensures that a very large proportion of selected match pairs are overlapped images. Figure <ref> shows the results of our method to retrieve similar images for two sample images from datasets 1 and 3. It can be seen that all the retrieved images are true positive results. In addition, the time costs of match pair selection are 2.5 mins, 2.6 mins, and 12.4 mins for the three datasets, respectively, which achieves the average time costs of approximately 0.040 secs, 0.039 secs, and 0.034 secs for match pair selection. Thus, we can conclude that the proposed solution can achieve linear time complexity in image indexing and retrieval and process large-scale UAV datasets for efficient match pair selection.
§.§.§ Parallel 3D reconstruction guided by the weighted view graph
The selected match pairs are then used to guide feature matching. In this study, feature matching is achieved by searching approximate nearest neighbors, refined based on the widely used ratio test and cross-checking. The initial matches are then verified by the epipolar constraint implemented by the estimation of the fundamental matrix within the framework of RANSAC. In this study, the threshold of ratio-test is set as 0.8 as the default value in the SIFTGPU library, and the maximum distance threshold is configured as 1.0 pixels to ensure the high inlier ratio of feature matching. Using feature matching results, a view graph represented as an undirected weighted graph can be constructed for each dataset, whose vertices and edges represent images and their connection relationships, respectively. As presented in Figure <ref>, three view graphs are created for the three UAV datasets, in which vertices and edges are rendered by red dots and gray lines, respectively. It is shown that there are 59,014, 65,743, and 353,005 match pairs selected from the three datasets, respectively. The dense edges between vertices indicate a strong connection between images, which ensures the success of SfM-based image orientation.
To achieve the parallel SfM reconstruction, the entire view graph is then divided into small sub-clusters with strong inner-edge connections. In the proposed parallel SfM workflow, the normalized cut algorithm is utilized for scene clustering, and the largest size of each sub-cluster is set as 500. The scene partition results are illustrated in Figure <ref>, Figure <ref>, and Figure <ref>. We can see that 8, 9, and 44 sub-clusters are generated for the three datasets. Each cluster is represented by an identical color, which verifies the compact connections within each cluster. Based on the sub-clusters, parallel SfM is executed to create the sub-reconstructions that are finally merged into the entire reconstruction. Table <ref> shows the statistical results of the 3D reconstruction, in which the metrics precision and completeness refer to the re-projection error of the BA optimization and the numbers of oriented images and reconstructed 3D points, respectively. We can see that the precisions for the three datasets are 0.542 pixels, 0.668 pixels, and 0.752 pixels, respectively, and almost all images are oriented successfully, with 3,724, 4,029, and 21,568 oriented images, respectively. For visualization, Figure <ref>, Figure <ref>, and Figure <ref> show the reconstructed 3D points of the three datasets. It is shown that the reconstructed 3D points cover the whole test sites. Thus, the proposed solution can create stable view graphs to achieve parallel SfM.
§.§ Performance comparison with the other software packages
§.§.§ Match pair selection
The proposed solution is compared with the BoW retrieval method in ColMap and the Dbow2 retrieval method to evaluate the performance of match pair selection. The statistical results are presented in Figure <ref>. It is clearly shown that, compared with BoW and Dbow2, the proposed solution achieves the highest efficiency, with time costs of 2.5 min, 2.6 min, and 12.4 min for the three datasets. Especially for dataset 3, the time costs of BoW and Dbow2 reach 1335.5 mins and 2848.3 mins, respectively, which is unacceptable in practice. By observing the results presented in Figure <ref>, we can see that BoW achieves the highest precision in almost all cases, namely 90.3%, 92.1%, and 97.6% for the three datasets, respectively. The proposed solution ranks second with a precision of 90.1%, 89.9%, and 94.4% for the three datasets, which is higher than Dbow2. In conclusion, compared with BoW, the proposed solution achieves comparable precision with speedup ratios ranging from 36 to 108 for the three UAV datasets.
§.§.§ SfM-based reconstruction
To evaluate the performance in the workflow of SfM-based reconstruction, the proposed solution is further compared with two commercial software packages Agisoft Metashape and Pix4Dmapper. Agisoft Metashape uses multi-scale matching and GNSS data for match pair selection; Pix4Dmapper provides a vocabulary tree-based image retrieval. In this test, camera intrinsic parameters are calibrated and fixed in SfM, and the match pairs selected from Bow and Dbow2 are fed into the proposed parallel SfM for reconstruction. Besides, 26 GCPs in the second dataset are used to evaluate geo-referencing accuracy. In the following tests, the metric efficiency indicates the time costs in SfM reconstruction without feature matching.
Table <ref> presents the statistical results of SfM reconstruction without GCPs. It is shown that BoW, Dbow2, and the proposed solution have almost the same efficiency because they use the same SfM engine. Although Metashape and Pix4Dmapper can achieve the reconstruction of datasets 1 and 2, their efficiency is lower, which further verifies the advantage of the parallel SfM workflow. Noticeably, Metashape and Pix4Dmapper fail to reconstruct dataset 3 since the large data volume causes an out-of-memory error during reconstruction. Considering the metric precision, it is shown that Pix4Dmapper achieves the highest performance, followed by BoW, Dbow2, and the proposed solution. For the metric completeness, comparable performance can be observed from the evaluated software packages except for Pix4Dmapper. This is mainly caused by the relatively low precision of image retrieval.
Absolute bundle adjustment with GCPs is further executed to evaluate the geo-referencing accuracy of reconstructed models. In this test, three GCPs that are evenly distributed over test site 2 are utilized for the geo-referencing of SfM reconstructed models, and the others are used as check points (CPs). For the performance evaluation, two metrics, i.e., mean and std.dev. of CPs residuals are used in this test. In addition, Pix4dMapper has been selected as a baseline for commercial software packages.
Table <ref> presents the statistical results of absolute BA. It is shown that among all evaluated software packages, Pix4dMapper achieves the highest accuracy with the std.dev. of 0.013 cm, 0.016 cm, and 0.019 cm in the X, Y, and Z directions, respectively. Although BoW ranks second in the vertical direction with the std.dev. of 0.036 cm, its horizontal accuracy is lower than the proposed solution with the std.dev. of 0.029 cm and 0.026 cm in the X and Y directions, respectively, which can also be verified by the residual plot presented in Figure <ref> and Figure <ref>. Due to the low precision of match pair selection, the geo-referencing accuracy of Dbow2 is the lowest in the X and Z directions, as shown in Figure <ref> and Figure <ref>. Thus, we can conclude that the proposed solution can provide necessary and accurate match pairs to achieve reliable SfM reconstruction with obviously high efficiency.
§ CONCLUSIONS
In this paper, we proposed a workflow that integrates match pair retrieval and parallel SfM reconstruction to achieve the efficient and accurate 3D reconstruction of large-scale UAV images. The core idea of match pair selection is to aggregate many local features into high-dimensional global vectors that can then be indexed through a graph-based structure for efficient ANN searching. Guided by the selected match pairs, a weighted view graph is created to achieve parallel SfM through graph clustering and sub-model merging. The tests demonstrate that the proposed workflow can significantly accelerate match pair selection, with speedup ratios of tens to hundreds of times, and increase the efficiency of SfM-based reconstruction with comparable results.
In this study, some observations and possible limitations have also been made. First, the precision of match pair selection is strongly influenced by the number of words in the codebook generated through K-means clustering, as shown in Section <ref>. At the same time, a large k also decreases the image retrieval efficiency. Thus, it is non-trivial to trade off precision and efficiency, especially for large-scale datasets. Second, hand-crafted local features, i.e., SIFT, are adopted for image retrieval because of their high tolerance to scale and viewpoint changes. However, deep learning-based feature detectors have attracted considerable attention in the fields of image retrieval <cit.> and feature matching <cit.> due to their excellent representation learning ability. Therefore, it is rational to use learned descriptors to enhance the image retrieval and feature matching algorithms in the proposed workflow. Third, only the CPU is used in the implemented algorithm, which could be further accelerated using GPU parallel computing techniques. In future research, we will conduct more tests on selecting high-quality match pairs with high efficiency by exploiting learned feature descriptors and GPU acceleration techniques.
§ ACKNOWLEDGMENTS
This research was funded by the National Natural Science Foundation of China (Grant No. 42001413), the Open Research Fund from the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) (Grant No. GML-KF-22-08), the Open Research Project of The Hubei Key Laboratory of Intelligent Geo-Information Processing (Grant No. KLIGIP-2021B11), and the Provincial Natural Science Foundation of Hunan (Grant No. 2023JJ30232).
|
http://arxiv.org/abs/2307.04583v1 | 20230710142143 | Parameterised distance to local irregularity | [
"Foivos Fioravantes",
"Nikolaos Melissinos",
"Theofilos Triommatis"
] | cs.CC | [
"cs.CC",
"cs.DM",
"cs.DS"
] |
Foivos Fioravantes^1, Nikolaos Melissinos^1, Theofilos Triommatis^2
1Department of Theoretical Computer Science, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic
2School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool, L69-3BX, UK
Parameterised distance to local irregularity
The third author is supported by EP/S023445/1 EPSRC CDT in Distributed Algorithms, University of Liverpool.
A graph G is locally irregular if no two of its adjacent vertices have the same degree. In [Fioravantes et al. Complexity of finding maximum locally irregular induced subgraph. SWAT, 2022], the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph G of maximum order, or, equivalently, computing a subset S of V(G) of minimum order, whose deletion from G results in a locally irregular graph; S is denoted as an optimal vertex-irregulator of G. In this work we provide an in-depth analysis of the parameterised complexity of computing an optimal vertex-irregulator of a given graph G. Moreover, we introduce and study a variation of this problem, where S is a subset of the edges of G; in this case, S is denoted as an optimal edge-irregulator of G. In particular, we prove that computing an optimal vertex-irregulator of a graph G is in FPT when parameterised by the vertex integrity, neighborhood diversity or cluster deletion number of G, while it is W[1]-hard when parameterised by the feedback vertex set number or the treedepth of G. In the case of computing an optimal edge-irregulator of a graph G, we prove that this problem is in FPT when parameterised by the vertex integrity of G, while it is NP-hard even if G is a planar bipartite graph of maximum degree 4, and W[1]-hard when parameterised by the size of the solution, the feedback vertex set or the treedepth of G. Our results paint a comprehensive picture of the tractability of both problems studied here, considering most of the standard graph-structural parameters.
§ INTRODUCTION
A fundamental problem in graph theory is “given a graph G, find an induced subgraph H of G, of maximum order, that belongs in the family of graphs verifying a property Π”, in which case we say that H∈Π:
Largest Induced Subgraph with Property Π (ISPΠ)<cit.>
A graph G=(V,E), an integer k, a property Π.
Does there exist a set S⊆ V such that |S|≤ k and G-S∈Π?
There is a plethora of classical problems that fall under this general setting. Consider for example the Vertex Cover and the Feedback Vertex Set, where Π is the property “the graph is an independent set” and “the graph is a forest”, respectively.
In this paper we study the ISPΠ problem where Π is the property “the graph is locally irregular”, recently introduced in <cit.>. A graph G=(V,E) is called locally irregular if no two adjacent vertices in V have the same degree. We extend the work presented in <cit.>, by more thoroughly investigating the behaviour of the problem in regards to parameterised complexity. In addition, we take the first step towards the problem of finding large locally irregular (not necessarily induced) subgraphs of a given graph G. In particular, we introduce the problem where the goal is to find a subset of edges of G of maximum order, whose removal renders the graph locally irregular. Our results allow us to paint a rather clear picture concerning the tractability of both problems studied here, considering many standard graph-structural parameters (see Figure <ref> for an overview of our results).
ISPΠ and hereditarity. The ISPΠ problem has been extensively studied in the case where Π is a hereditary property. Formally, a property Π is hereditary if, for any graph G verifying that property, it holds that any induced subgraph of G also verifies that property (notice that the properties mentioned previously are indeed hereditary). It was already shown in <cit.> that ISPΠ is a hard problem for any non-trivial hereditary property. On the positive side, the ISPΠ problem always admits an FPT algorithm, when parameterised by the size of the solution, if Π is a hereditary property <cit.>. This is an important result, as it allows us to conceive efficient algorithms to solve computationally hard problems, as long as we restrict ourselves to graphs verifying such properties.
It is also worth mentioning the work in <cit.>, which provides a framework that yields exact algorithms that are significantly faster than brute-force to solve a more general version of the ISPΠ problem: given a universe, find a subset of maximum cardinality which verifies some hereditary property. On a high level, the algorithm proposed in <cit.> builds the solution which is a subset H of maximum cardinality with the wanted property, by continuously extending a partial solution X⊆ H. Note that this approach only works if Π is indeed a hereditary property.
More recently, this approach was generalised by the authors of <cit.>, who provide a framework that yields exponential-time approximation algorithms.
However, not all interesting properties are hereditary. E.g., “all vertices of the induced subgraph have odd degree”, and “the induced subgraph is d-regular”, where d is an integer given in the input (recall that a graph is d-regular if all of its vertices have the same degree d), are two non-hereditary properties. The authors of <cit.> studied the ISPΠ problem for the former property, showing that it is an NP-hard problem, and providing an FPT algorithm that solves the problem when parameterised by the rank-width.
Also, the authors of <cit.> studied the ISPΠ problem for the latter property. In particular, in <cit.> it is shown that finding a (connected) induced subgraph of maximum order that is d-regular is NP-hard to approximate, even when restricted to bipartite or planar graphs. The authors of <cit.> also provide a linear-time algorithm to solve this problem for graphs with bounded treewidth. Lastly, it is also worth mentioning <cit.>, where the authors consider the non-hereditary property “the induced subgraph is k-anonymous”, where a graph G is k-anonymous if for each vertex of G there are at least k-1 other vertices of the same degree.
An important observation is that, in the case of non-hereditary properties, the ISPΠ problem does not necessarily admit an FPT algorithm parameterised by the size of the solution.
Indeed, the authors of <cit.> proved that when considering Π as “the induced subgraph is regular”, the ISPΠ problem is W[1]-hard when parameterised by the size of the solution.
This indicates the importance of considering graph-structural parameters for conceiving efficient algorithms for such problems. This is exactly the approach followed in <cit.>, where the authors consider a generalisation of Vertex Cover, the ISPΠ problem where Π is “the graph has maximum degree k”, for an integer k given in the input.
Distance from local irregularity. In some sense, the property that interests us lies on the opposite side of the one studied in <cit.>. Recall that a graph G is locally irregular if no two of its adjacent vertices have the same degrees. The notion of locally irregular graphs was formally introduced in <cit.>, where the authors take some steps towards proving the so-called 1-2-3 Conjecture proposed in <cit.> and claimed to be solved recently in <cit.>. Roughly, this conjecture is about functions assigning weights from [k]={1,…,k} to the edges of a graph, called proper k-labellings, so that all adjacent vertices have different weighted degrees; the conjecture states that for any non-trivial graph, this should always be achievable for k≤ 3.
In <cit.>, the authors introduced and studied the problem of finding a locally irregular induced subgraph of a given graph G of maximum order (a non-hereditary property). Equivalently, given a graph, find a set of vertices of minimum cardinality, whose deletion renders the graph locally irregular; such sets are named optimal vertex-irregulators. The main focus of <cit.> was to study the complexity of computing an optimal vertex-irregulator of a given graph. Among other results, it was shown that this problem is NP-hard even for subcubic planar bipartite graphs, W[2]-hard parameterised by the size of the solution and W[1]-hard parameterised by the treewidth of the input graph. Moreover, for any constant ε <1, there cannot be a polynomial-time 𝒪(n^1-ε)-approximation algorithm. On the positive side, there are two FPT algorithms that solve this problem, parameterised by the maximum degree of the input graph plus either the size of the solution or the treewidth of the input graph. Note that the notion of vertex-irregulators proved to be fruitful in the context of proper labellings. Indeed, the authors of <cit.> observed a connection between finding large locally irregular induced subgraphs and constructing proper k-labellings that also maximise the use of weight 1 on the edges of the given graph.
Apart from improving the results of <cit.>, in this paper we also introduce the novel problem of computing a subset of a graph's edges, of minimum order, whose deletion renders the graph locally irregular; such sets are named optimal edge-irregulators. This problem is introduced as a first step towards understanding the problem of finding large locally irregular (not necessarily induced) subgraphs of a given graph. Problems concerned with finding maximum subgraphs verifying a specific property have also been extensively studied (e.g., <cit.>).
One might expect that finding edge-irregulators could be easier than finding vertex-irregulators, as is often the case with graph theoretical problems concerned with subsets of edges whose versions considering subsets of vertices are intractable (recall, e.g., the Edge Cover, the Feedback Edge Set and even the Min Weighted Lower-Upper-Cover <cit.>). As it turns out, however, finding large edge-irregulators is also a computationally hard problem.
Our contribution. In this paper we study the complexity of computing optimal vertex and edge-irregulators. Our results allow us to identify the parameters for which the tractability of the former problem changes, considering almost all standard graph-structural parameters. We also take steps towards the same goal for the latter problem. In Section <ref> we introduce the needed notation and provide some first results. In particular, we observe that computing optimal vertex-irregulators is W[1]-hard when parameterised by the treedepth or the feedback vertex set of the given graph. Section <ref> is focused on providing FPT algorithms for the problem of finding optimal vertex-irregulators, parameterised by the neighborhood diversity or the vertex integrity of the input graph. In Section <ref>, we focus on the problem of finding optimal edge-irregulators. First, we prove that this problem is NP-hard, even when restricted to planar bipartite graphs of maximum degree 4. We also show that the problem is W[1]-hard parameterised by the size of the solution or the feedback vertex set of the input graph. Lastly, we modify the FPT algorithm for computing an optimal vertex-irregulator parameterised by the vertex integrity in order to provide an FPT algorithm that solves the edge version of the problem (once more parameterised by the vertex integrity). We close the paper in Section <ref>, where we propose some directions for further research.
§ PRELIMINARIES
For notions and definitions of graph theory not explained here, we refer the reader to <cit.>.
Let G=(V,E) be a graph and G'=(V',E') be a subgraph of G (i.e., created by deleting vertices and/or edges of G). Recall first that the subgraph G' is induced if it can be created only by deleting vertices of G. That is, for each edge uv∈ E, if u,v∈ V', then uv∈ E'. For any vertex v∈ V, let N_G(v)={u∈ V : uv∈ E} denote the neighbourhood of v in G
and d_G(v)=|N_G(v)| denote the degree of v in G. Note that, whenever the graph G is clear from the context, we will omit the subscript and simply write N(v) and d(v).
Also, for S⊆ E, denote by G-S the graph G'=(V, E∖ S). That is, G' is the graph resulting from the deletion of the edges of S from the graph G.
Let G=(V,E) be a graph. We say that G is locally irregular if, for every edge uv∈ E, we have d(u)≠ d(v). Now, let S⊆ V be such that G[V∖ S] is a locally irregular graph; any set S that has this property is denoted as a vertex-irregulator of G. Moreover, let I_v(G) be the minimum order that any vertex-irregulator of G can have. We will say that S is an optimal vertex-irregulator of G if S is a vertex-irregulator of G and |S|=I_v(G). Similarly, we define an
edge-irregulator of G to be any set S⊆ E such that G-S is locally irregular. Moreover, let I_e(G) be the minimum order that any edge-irregulator of G can have. We will say that S is an optimal edge-irregulator of G if S is an edge-irregulator of G and |S|=I_e(G).
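For small instances, both quantities can be computed by brute force, which is convenient for checking examples; the following NetworkX sketch is purely illustrative (the helper names are ours) and exponential in the worst case.

```python
import itertools
import networkx as nx

def locally_irregular(G):
    return all(G.degree(u) != G.degree(v) for u, v in G.edges())

def I_v(G):
    """Minimum size of a vertex set whose deletion leaves G locally irregular."""
    nodes = list(G.nodes())
    for size in range(len(nodes) + 1):
        for S in itertools.combinations(nodes, size):
            if locally_irregular(G.subgraph(set(nodes) - set(S))):
                return size

def I_e(G):
    """Minimum size of an edge set whose deletion leaves G locally irregular."""
    edges = list(G.edges())
    for size in range(len(edges) + 1):
        for S in itertools.combinations(edges, size):
            H = G.copy()
            H.remove_edges_from(S)
            if locally_irregular(H):
                return size

# e.g. the path on 4 vertices: I_v = I_e = 1 (delete an end-vertex / an end-edge)
# print(I_v(nx.path_graph(4)), I_e(nx.path_graph(4)))
```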
The next simple observation is quite useful when proving lower bounds on an optimal vertex or edge-irregulator of a graph.
Let G=(V,E) be a graph containing two vertices u,v such that uv∈ E and d(u)=d(v). Any edge-irregulator of G contains at least one edge incident to u or v. Also, any vertex-irregulator of G contains at least one vertex in N(u)∩ N(v).
Let G=(V,E) be a graph. We say that two vertices u, v of V are twins if N(u)∖{v}=N(v)∖{u}, i.e., they have the same neighbourhoods.
Let G=(V,E) be a graph and u,v∈ V be a pair of twins of G such that uv∈ E. Any vertex-irregulator of G contains at least one vertex in {u,v}.
Indeed, by Observation <ref>, we get that any vertex-irregulator S of G includes at least one neighbour of u or v. If we assume that S∩{u,v}=∅, then u and v are once more adjacent twins in G[V∖ S], contradicting the fact that S is a vertex-irregulator.
The importance of the upcoming Lemma <ref> lies in the fact that we can repeatedly apply it and reduce the size of the graph on which we are searching for a vertex-irregulator, as long as the reduced graph contains a pair of adjacent twins. This is a core argument behind the algorithms presented in Theorems <ref> and <ref>.
Let G=(V,E) be a graph and u,v∈ V be a pair of adjacent twins. Let G'=(V',E') be the graph resulting from the deletion of either u or v from G. Then, _v(G)=_v(G')+1.
Assume w.l.o.g. that u∉ V'. We first prove that _v(G)≤_v(G')+1. Indeed, assume that _v(G)>_v(G')+1 and let S' be an optimal vertex-irregulator of G'. Next, consider the graph H=G[V∖ (S'∪{u})]. From the construction of G', it follows that H=G'[V'∖ S']. Since S' is a vertex-irregulator of G', we obtain that H is locally irregular. In other words, the set S'∪{u} is a vertex-irregulator of G and |S'∪{u}|=_v(G')+1, a contradiction.
Next, assume that _v(G)<_v(G')+1 and let S be an optimal vertex-irregulator of G. It follows from Observation <ref> that |{u,v}∩ S|≥ 1. Assume w.l.o.g. that u∈ S. Thus, and by the construction of G', we have that G'[V'∖ (S∖{u})]=G[V∖ S] and the set S∖{u} is a vertex-irregulator of G'. In other words, _v(G')≤ |S|-1=_v(G)-1, a contradiction.
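As a rough illustration of how the lemma is applied repeatedly, the following sketch (continuing with networkx graphs; the helper names are ours) deletes one vertex of a pair of adjacent twins while such a pair exists and counts the deletions; by the lemma, _v(G) then equals this count plus _v of the reduced graph.

def adjacent_twins(G):
    # Return a pair (u, v) of adjacent twins of G, or None if no such pair exists.
    for u, v in G.edges():
        if set(G[u]) - {v} == set(G[v]) - {u}:
            return u, v
    return None

def reduce_adjacent_twins(G):
    # Repeatedly delete one vertex of a pair of adjacent twins.
    H, deleted = G.copy(), 0
    pair = adjacent_twins(H)
    while pair is not None:
        H.remove_node(pair[0])
        deleted += 1
        pair = adjacent_twins(H)
    return H, deleted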
We close this section with some observations on the proof that computing _v(G) is W[1]-hard parameterised by the treewidth of G, initially presented in <cit.>, which allow us to show that this result holds even if we consider more “generous” parameters, such as the treedepth or the feedback vertex set number (i.e., size of a minimum feedback vertex set) of the input graph. Recall that the treedepth of a graph G=(V,E) can be defined recursively: if |V|=1 then G has treedepth 1. Then, G has treedepth k if there exists a vertex v∈ V such that every connected component of G[V∖{v}] has treedepth at most k-1. Given a graph G and a tree T rooted at a vertex u, by attaching T on a vertex v of G we mean the operation of adding T to G and identifying u with v.
Let G be a graph with vertex cover number (i.e., size of a minimum vertex cover) k_1 and T be a rooted tree of depth k_2. Let G' be the graph resulting from attaching an arbitrary number of copies of T directly on vertices of G. Then G' has treedepth 𝒪(k_1+k_2) and feedback vertex set number 𝒪(k_1).
The reduction presented in <cit.> starts with a graph G which is part of an instance of the List Colouring problem, and constructs a graph G' by attaching some trees of depth at most 3 on each vertex of G. The List Colouring problem was shown to be W[1]-hard in <cit.> when parameterised by the vertex cover number of the input graph. Thus, and by Observation <ref>, we obtain the following:
Given a graph G, it is W[1]-hard to compute _v(G) parameterised by either the treedepth or the feedback vertex set number of G.
§ FPT ALGORITHMS FOR VERTEX-IRREGULATORS
In this section we present two FPT algorithms that compute an optimal vertex-irregulator of a given graph G, when parameterised by the neighbourhood diversity or the vertex integrity of G. The latter algorithm is then used to show that this problem is in FPT also when parameterised by the cluster deletion number of G. We begin by recalling the needed definitions.
The twin equivalence of G is the relation on the vertices of V according to which two vertices belong to the same equivalence class if and only if they are twins.
The neighbourhood diversity of a graph G, denoted by nd(G), is the number k of classes of the twin equivalence of G.
Let G=(V,E) be a graph with nd(G)=k and let V_1,…,V_k be the partition of V defined by the twin equivalence of G. Observe that for any i∈ [k], we have that G[V_i] is either an independent set or a clique.
Given a graph G=(V,E) such that nd(G)=k, there exists an algorithm that computes _v(G) in FPT-time parameterised by k.
Let V_1,…,V_k be the partition of V defined by the twin equivalence of G. Recall that for any i∈ [k], we have that G[V_i] is either an independent set or a clique.
We begin by constructing an induced subgraph G'=(V',E') of G by applying the following procedure: for each i∈ [k], if G[V_i] is a clique on at least two vertices, then delete all the vertices of V_i except one; let D be the set of vertices that were deleted in this fashion throughout the procedure and d=|D|. Observe that this procedure terminates after k repetitions and, thus, runs in polynomial time (in regards to |V|).
Moreover, it follows from Lemma <ref> that _v(G)=_v(G')+d. Thus, in order to compute _v(G), it suffices to compute _v(G'). To achieve that, we model this problem as an ILP on a bounded number of variables. For every i∈ [k], let V'_i=V_i∩ V'.
Also, for every i∈[k], let N(i)={j∈ [k] |∃ u∈ V'_j and v∈ V'_i s.t. uv∈ E'}. That is, N(i) is the set of indices of the neighbouring partitions of vertices in V'_i. Finally, we guess a partition of [k] into S_1 and S_2 (there are at most 2^k such partitions), such that, if S' is a vertex-irregulator of G', then S'∩ V'_i=V'_i for all i∈ S_2, and S'∩ V'_i≠ V'_i for all i∈ S_1.
Variables
x_i∈ [|V'_i|] i∈ S_1 number of vertices remaining in a subset of V'_i
Objective
max∑_i=1^k x_i
Constraints
∑_ℓ∈ N(i) x_ℓ≠∑_ℓ∈ N(j) x_ℓ ∀ i,j∈ S_1 s.t. j ∈ N(i)
The variable x_i is used in the above model to represent the vertices that will remain in V'_i, for each i∈ S_1, after the deletion of an optimal vertex-irregulator S' of G'. The constraint <ref> makes sure that any two adjacent vertices u,v∈ V' have different degrees in G'[V'∖ S']. Indeed, for each uv∈ E', there exist i,j such that u∈ V'_i and v∈ V'_j. If either i∈ S_2 or j∈ S_2 (or both), then u∈ S' or v∈ S' (or both). Thus, we can assume that i,j∈ S_1. In this case, it follows from the constraint <ref> that d_G'[V'∖ S'](u)=∑_ℓ∈ N(i) x_ℓ≠∑_ℓ∈ N(j) x_ℓ=d_G'[V'∖ S'](v). In any case, G'[V'∖ S'] is locally irregular. Finally, since the model has k variables, we can solve it and obtain S' in FPT time, parameterised by k (by running, for example, the Lenstra algorithm <cit.>).
The same problem also admits an alternative ILP formulation, given below for completeness, whose variables count the deleted rather than the remaining vertices of each class V'_i.
Constants
e_ij∈{0,1} i,j∈ [k] set to 1 iff ∃ u∈ V_i and v∈ V_j s.t uv∈ E'
t_i=|V'_i| i∈[k] the number of vertices in V'_i
Variables
x_i∈ [|V_i|] i∈ [k] number of deleted vertices in V'_i
Objective
min∑_i=1^k x_i
Constraints
∑_ℓ=1^k e_iℓ(t_ℓ-x_ℓ)≠∑_ℓ=1^k e_jℓ(t_ℓ-x_ℓ) ∀ i,j∈ [k] s.t. e_ij=1
0≤ x_i≤ t_i ∀ i∈ [k]
The variable x_i is used in the above model to represent the vertices that will be included in an optimal vertex-irregulator S' of G'. The constraint <ref> makes sure that any two adjacent vertices u,v∈ V' have different degrees in G'[V'∖ S']. Indeed, for each uv∈ E', there exist i,j such that u∈ V_i and v∈ V_j and e_ij=1. Moreover, d_G'[V'∖ S'](u)=∑_ℓ=1^k e_iℓ(t_ℓ-x_ℓ) and d_G'[V'∖ S'](v)=∑_ℓ=1^k e_jℓ(t_ℓ-x_ℓ). Thus, S'∪ D is an optimal vertex-irregulator of G. Finally, since the model has k variables, we can solve it and obtain S' in FPT time, parameterised by k (by running, for example, the Lenstra algorithm <cit.>).
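For illustration only, the twin equivalence classes (and hence nd(G)) used in both formulations can be computed by a simple greedy grouping, relying on the fact used above that being twins is an equivalence relation; the naming below is ours.

def are_twins(G, u, v):
    return set(G[u]) - {v} == set(G[v]) - {u}

def twin_classes(G):
    # Partition V(G) into the classes V_1, ..., V_k of the twin equivalence.
    classes = []
    for v in G.nodes():
        for cls in classes:
            if are_twins(G, v, cls[0]):
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes  # nd(G) = len(twin_classes(G))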
We now present an FPT algorithm to compute an optimal vertex-irregulator of an input graph G when parameterised by the vertex integrity of G.
A graph G=(V,E) has vertex integrity k if there exists a set U ⊆ V such that |U| = k' ≤ k and all connected components of G[V∖ U] are of order at most k - k'.
It is known that we can find such a set in FPT-time parameterised by k <cit.>.
Given a graph G=(V,E) with vertex integrity k, there exists an algorithm that computes _v(G) in FPT-time parameterised by k.
Let U be such that |U|=k'≤ k and C_1,…, C_m be the vertex sets of the connected components of G[V∖ U]. It follows that |C_j|≤ k, j∈[m]. Assume that we know the intersection of an optimal vertex-irregulator S of G and the set U, and let S' = S ∩ U and U' = U ∖ S (there are at most 2^|U|≤ 2^k possible intersections S' of U and S.).
Notice that the graph G[V∖ S'] has an optimal vertex-irregulator that contains only vertices from ⋃_i ∈ [m]C_i. Indeed, assuming otherwise contradicts that S' is the intersection of an optimal vertex-irregulator and U. Thus, in order to find an optimal vertex-irregulator S of G, it suffices to compute S^* ⊆⋃_i ∈ [m]C_i, which is an optimal vertex-irregulator of G[V ∖ S'], for every set S' ⊆ U. Then, we return the set S^*∪ S' of minimum order. We compute S^* through an ILP with bounded number of variables. To do so, we define types and sub-types of graphs G[U'∪ C_j].
Informally, the main idea is to categorise the graphs G[U' ∪ C_j], j ∈ [m], into types based on their structure (formally defined later), whose number is bounded by k. Each type i is associated to a number no_i that represents the number of the subgraphs G[U' ∪ C_j], j ∈ [m], that belong in that type.
Then, for each type i, we will define sub-types based on the induced subgraphs G[(U' ∪ C_j) ∖ S_q], for S_q ⊆ C_j. We also define a variable no_i,q that is the number of the subgraphs G[U' ∪ C_j], j ∈ [m], that are of type i and of sub-type q in G[V∖ S].
Note that knowing the structure of these types and sub-types, together with no_i,q, is enough to compute the order of S^*. Finally, for any j ∈ [m], the graph G[U' ∪ C_j] is of order at most k. Thus, the number of types, sub-types and their corresponding variables, is bounded by a function of k. We will present an ILP formulation whose objective is to minimise the order of S^*.
We begin by defining the types. Two graphs G[U' ∪ C_i] and G[U' ∪ C_j], i,j ∈ [m], are of the same type if there exists a bijection[Recall that a function f:A→ B is a bijection if, for every a_1,a_2∈ A with a_1≠ a_2, we have that f(a_1)≠ f(a_2) and for every b∈ B, there exists an a∈ A such that f(a)=b. Recall also that the inverse function of f, denoted as f^-1, exists if and only if f is a bijection, and is such that f^-1:B→ A and for each b∈ B we have that f^-1(b)=a, where f(a)=b.]
f: C_i∪ U' → C_j∪ U' such that f(u)=u for all u∈ U' and N_G[U' ∪ C_i](u) = { f^-1(v) | v ∈ N_G[U' ∪ C_j](f(u))} for all u ∈ C_i. Note that if such a function exists, then G[U' ∪ C_i] is isomorphic to G[U' ∪ C_j].
Let p be the number of different types. Notice that p is bounded by a function of k as any graph G[U' ∪ C_i] has order at most k. Also, we can decide if two graphs G[U' ∪ C_i] and G[U' ∪ C_j], i,j ∈ [m], are of the same type in FPT-time parameterised by k. For each type i ∈ [p], set no_i to be the number of graphs G[U' ∪ C_j], j ∈ [m], of type i.
Furthermore, for each type i ∈ [p] we select a C_j, j ∈ [m], such that G[U' ∪ C_j] is of type i, to represent that type; we will denote this set of vertices by C'_i.
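Since every component has at most k vertices, the type test can, for illustration, be carried out by brute force over bijections; the sketch below assumes G is given as a networkx graph and the interface is our own.

from itertools import permutations

def same_type(G, U_prime, C_i, C_j):
    # Test whether G[U' ∪ C_i] and G[U' ∪ C_j] are of the same type.
    C_i, C_j = list(C_i), list(C_j)
    if len(C_i) != len(C_j):
        return False
    for image in permutations(C_j):
        f = {u: u for u in U_prime}          # f fixes U' pointwise
        f.update(dict(zip(C_i, image)))      # and maps C_i onto C_j
        domain = list(C_i) + list(U_prime)
        if all(G.has_edge(f[x], f[y]) == G.has_edge(x, y)
               for x in C_i for y in domain if y != x):
            return True
    return False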
We are now ready to define the sub-types.
Let i ∈ [p] be a type represented by C'_i and S^i_1… , S^i_2^|C'_i|
be an enumeration of the subsets of C'_i.
For any q ∈ [2^|C'_i|] we define a sub-type (i,q), which represents the induced subgraph G[(U' ∪ C'_i) ∖ S^i_q]. Set no_i,q to be the number of graphs G[U'∪ C_j], j∈[m], of type i that are of sub-type (i,q) in G[V∖ S^*] for a vertex-irregulator S^*, i.e., whose intersection with S^* corresponds to S^i_q.
Notice that, given a vertex-irregulator S^* ⊆⋃_j ∈ [m] C_j of G[V ∖ S'], there exists a sub-type (i,q), i∈ [p], q∈ [2^|C'_i|], such that the graph G[(U' ∪ C_j)∖ S^*] is of sub-type (i,q), for all j∈ [m]. Also, assuming that we know the order of |S^i_q| and the number no_i,q for all i∈ [p], q∈ [2^|C'_i|], then |S^*| = ∑_i ∈ [p]∑_q ∈ [2^|C'_i|] no_i,q |S^i_q|.
Before giving the ILP formulation whose goal is to find a vertex-irregulator S^* while minimising the above sum, we guess the (i,q) such that no_i,q≠ 0.
Let S_2 be the set of pairs (i,q) i ∈ [p] and q ∈ [2^|C'_i|], such that there are two vertices u,v ∈ C'_i ∖ S^i_q where uv∈ E(G[(U'∪ C_i)∖ S^i_q ]) and d_G[(U'∪ C'_i)∖ S^i_q ](u) =d_G[(U'∪ C'_i)∖ S^i_q ](v). For every (i,q)∈ S_2, we have that no_i,q=0. Indeed, assuming otherwise contradicts the fact that S^* is a vertex-irregulator.
We guess S_1 ⊆{ (i,q) | i ∈ [p], q ∈ 2^|C'_i|}∖ S_2 such that no_i,q≠ 0 for all (i,q) ∈ S_1. Observe that the number of different sets that are candidates for S_1 are at most some function of k.
Constants
no_i i ∈ [p] number of components of type i
e_uv∈{0,1} u,v∈ U' set to 1 iff uv ∈ E(G[U'])
e^i,q_u,v∈{0,1} i∈ [p], q∈ [2^|C'_i|], u ∈ U' set to 1 iff uv ∈ E(G[(U' ∪ C'_i) ∖ S^i_q])
and v∈ C'_i∖ S^i_q
b^i,q_u∈ [n] i∈ [p], q∈ [2^|C'_i|] and u ∈ U' set to d_G[(U' ∪ C'_i) ∖ S^i_q](u)
d^i,q_u∈ [n] i∈ [p], q∈ [2^|C'_i|] and u ∈ C'_i ∖ S^i_q set to d_G[(U' ∪ C'_i) ∖ S^i_q](u)
Variables
no_i,q i ∈ [p], q∈ [2^|C'_i|] number of types (i,q)
Objective
min∑_i ∈ [p]∑_q ∈ [2^|C'_i|] no_i,q |S^i_q|
Constraints
no_i,q=0 iff (i,q)∉ S_1
∑_q ∈ [2^|C'_i|] no_i,q = no_i ∀ i ∈ [p]
∑_w ∈ U' e_wv + ∑_i ∈ [p] no_i,q b^i,q_v≠∑_w ∈ U' e_wu + ∑_i ∈ [p] no_i,q b^i,q_u ∀ u,v ∈ U'
d^i,q_v≠∑_w ∈ U' e_wu + ∑_i ∈ [p] no_i,q b^i,q_u ∀ e^i,q_u,v = 1 and (i,q) ∈ S_1
Assume that we have found the values no_i,q for (i,q), i∈ [p], q∈ [2^|C'_i|].
We construct an optimal vertex-irregulator of G[V∖ S'] as follows.
Start with an empty set S^*.
For each i ∈ [p] take all components C_j of type i.
Partition them in to 2^|C'_i| sets 𝒞^i_q such that any set q ∈ [2^|C'_i|] contains exactly no_i,q of these components.
For any component C ∈𝒞^i_q, select all vertices represented by the set S^i_q (as it was defined before) and add them to S^*.
The final S^* is an optimal vertex-irregulator for G[V∖ S'].
Let S=S'∪ S^*. We show that S is a vertex-irregulator of G.
To do so, it suffices to verify that in the graph G[V∖ S] there are no two adjacent vertices with the same degree.
Let u,v be a pair of adjacent vertices in a component represented by C'_i ∖ S, which is of type (i,q).
If d_G[V∖ S](u) = d_G[V∖ S](v), then (i,q)∈ S_2. Therefore, no_i,q= 0 and we do not have such a component in G[V∖ S].
Thus, it suffices to focus on adjacent vertices such that at least one of them is in U'.
Notice that, in G[V∖ S], the degree of a vertex u ∈ U' is equal to ∑_w ∈ U' e_wu + ∑_i ∈ [p] no_i,q b^i,q_u. In other words, no two adjacent vertices in U' have the same degree due to the constraint <ref>.
Lastly, the constraint <ref> guarantees that no vertex in U' is adjacent to a vertex in C_i ∖ S (for some i∈ [p]) such that both of them have the same degree in G[V∖ S]. Moreover, both S' and S^* are constructed to be minimum such sets. Thus, S is an optimal vertex-irregulator of G. Finally, since the number of variables in the model is bounded by a function of k, we can solve it and obtain S^* in FPT time, parameterised by k (by running, for example, the Lenstra algorithm <cit.>).
The previous algorithm can be used to find an optimal vertex-irregulator of a graph G in FPT-time when parameterised by the cluster deletion number of G. Note that the cluster deletion number of a graph can be computed in FPT-time parameterised by k <cit.>.
Let G=(V,E) be a graph and S⊆ V be a set of minimum order such that all the connected components of G[V∖ S] are cliques. Then G has cluster deletion number k, where k=|S|.
Given a graph G=(V,E) with cluster deletion number k, there exists an algorithm that computes _v(G) in FPT-time parameterised by k.
Let S be such that |S|=k and G[V∖ S] is a disjoint union of cliques C_1,…, C_m for m≥ 1. Our goal is to reduce the size of these cliques so that each one of them has order at most 2^k. We achieve this through the following procedure. Let i∈[m] be such that the clique C_i=(V_C_i,E_C_i) has |V_C_i|>2^k. Let V_1,…,V_p be the partition of V_C_i defined by the twin equivalence of C_i. That is, two vertices u,v∈ V_C_i belong to a V_j, j∈[p], if and only if u and v are twins. Note that p≤ 2^k. Observe that, since C_i is a clique, the graphs C_i[V_j], j∈[p], are also cliques. In other words, for each j∈[p], all the vertices of V_j are adjacent twins. We delete all but one vertex of V_j, for each j∈[p], and repeat this process for every i∈[m] such that |V_C_i|>2^k.
Let G'=(V',E') be the resulting subgraph of G and d=|D|, where D is the set of vertices that were removed throughout this process. It follows from Lemma <ref> that _v(G)=_v(G')+d. Observe also that S⊆ V' and that each connected component of G'[V'∖ S] is a clique of at most 2^k vertices. In other words, G' has vertex integrity at most 2^k+k. To sum up, to compute _v(G) it suffices to compute _v(G'), which can be done in FPT-time by running the algorithm presented in Theorem <ref>.
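The clique-shrinking step above can be sketched as follows: within a clique component C of G[V∖ S], two vertices are adjacent twins exactly when they have the same neighbours outside C, so it suffices to keep one representative per outside-neighbourhood; the interface below is illustrative.

def shrink_clique(G, C):
    # C: list of vertices inducing a clique component of G[V ∖ S].
    keep, signatures = [], set()
    for v in C:
        outside = frozenset(set(G[v]) - set(C))  # neighbours outside the clique
        if outside not in signatures:
            signatures.add(outside)
            keep.append(v)                        # one vertex per twin class
    return keep  # len(C) - len(keep) vertices are deleted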
§ EDGE-IRREGULATORS
In this section we begin the study of finding an optimal edge-irregulator of a given graph G. It turns out that the decision version of this problem is NP-complete, even for quite restrictive classes of graphs. Furthermore, it is also W[1]-hard parameterised by the size of the solution.
Let G be a graph and k∈ℕ. Deciding if _e(G)≤ k is NP-complete, even if G is a planar bipartite graph of maximum degree 4.
The problem is clearly in NP. We focus on showing it is also NP-hard. This is achieved through a reduction from the Planar 3-SAT problem, which is known to be NP-complete <cit.>. In that problem, a 3CNF formula ϕ is given as an input. We say that a bipartite graph G'=(V,C,E) corresponds to ϕ if it is constructed from ϕ in the following way: for each literal x_i (resp. ¬ x_i) that appears in ϕ, add the literal vertex v_i (resp. v'_i) in V (for 1≤ i≤ n) and for each clause C_j of ϕ add a clause vertex c_j in C (for 1≤ j≤ m). Then the edge v_ic_j (resp. v'_ic_j) is added if the literal x_i (resp. ¬ x_i) appears in the clause C_j. Finally, we add the edge v_iv'_i for every i. A 3CNF formula ϕ is valid as input to the Planar 3-SAT problem if the graph G' that corresponds to ϕ is planar. Furthermore, we may assume that each variable appears in ϕ twice as a positive and once as a negative literal. The question is whether there exists a truth assignment to the variables of ϕ satisfying ϕ.
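For concreteness, the bipartite graph G' that corresponds to ϕ (before the gadgets of the figure are attached) could be built as below; the clause encoding, lists of signed variable indices, is our own choice for illustration.

import networkx as nx

def formula_graph(phi, n_vars):
    # phi: list of clauses, each a list of non-zero signed integers, e.g. [1, -2, 3].
    G = nx.Graph()
    for i in range(1, n_vars + 1):
        G.add_edge(('pos', i), ('neg', i))               # edge v_i v'_i
    for j, clause in enumerate(phi):
        for lit in clause:
            side = 'pos' if lit > 0 else 'neg'
            G.add_edge(('clause', j), (side, abs(lit)))  # clause-literal edge
    return G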
Starting from a 3CNF formula ϕ, we construct a graph G such that _e(G)≤ 3n if and only if ϕ is satisfiable. The construction of G is as follows: we start with the graph G' that corresponds to ϕ. Then, for each 1≤ i≤ n, we remove the edge v_iv'_i, and attach the gadget illustrated in Figure <ref> to v_i and v'_i. Let E_i denote the edges of the gadget attached to v_i and v'_i plus the edges e_i^1,e_i^2 and e_i^3. Finally, for each 1≤ j≤ m, we add the star on 5 vertices, and identify one of its leaves with the vertex c_j. Observe that the resulting graph G is planar, bipartite and Δ(G)=4.
Before we provide the reduction, let us show two claims that are going to be useful.
Let S be an edge-irregulator of G such that |S|≤ 3n. For every 1≤ i≤ n, we have that |S∩ E_i|≥ 3.
Observe that d_G(u_5)=d_G(v_i)=d_G(u_6)=d_G(u_7). It follows that S contains at least one edge a_1 incident to u_6 or u_7 and one edge a_2 incident to v_i or u_5. We distinguish cases:
* a_1=a_2=v'_iu_6. Then S also contains an edge a_3 incident to v_i or u_5. If a_3=u_5v'_i, then S contains an additional edge incident to u_2 or u_5. If a_3=u_5u_4, then S also contains the edge u_3u_4. If a_3=u_2u_5, then S contains at least one additional edge incident to u_2 and u_1. If a_3=u_5v_i, then S contains one additional edge incident to u_2 or u_5. In any one of the above cases, we have that |S_i|≥ 3. Thus, we may assume that a_3 is incident to v_i but not to u_5. If a_3=e_i^1 or a_3=e_i^2, then S contains an additional edge incident to v_i or u_6. Finally, if a_3=v_iu_6, then S contains an additional edge incident to u_6 or u_8. Thus, if a_1=a_2=v'_iu_6, then |S_i|≥ 3.
* a_1≠ a_2. We distinguish some additional cases:
* a_1=v_iu_6. If a_2∈{e_i^3,v'_iu_9,v'_iu_5}, then S contains an additional edge incident to u_7. If a_2∈{v_iu_5,u_5u_4}, then S contains an additional edge incident to u_2. Finally, if a_2=u_5u_4, then S contains an additional edge incident to u_3.
* a_1=u_6u_7. Then S contains an additional edge incident to u_9.
* a_1∈{u_7u_9,u_7u_10,u_7u_11}. Then S contains an additional edge incident to u_7.
* a_1=u_6u_8. Then S contains an additional edge incident to u_16.
Thus, if a_1≠ a_2, then |S_i|≥ 3, which finishes the proof of the claim.
Let S be an edge-irregulator of G such that |S|≤ 3n. Then, for every 1≤ i≤ n, we have that
* if |S∩{e_i^1,e_i^2}|≥ 1 then |S∩{e_i^3,e_i^4}|=0 and
* if |S∩{e_i^3,e_i^4}|≥ 1 then |S∩{e_i^1,e_i^2}|=0.
Since the proofs of the two items are highly symmetrical, we will
only prove the first item. To do that, it suffices to show that if S does not respect the statement for some 1≤ i≤ n, then |S∩ E_i|≥ 4. Then, since |S|≤ 3n, and 1≤ i≤ n, there exists a 1≤ j≤ n such that i≠ j and |S∩ E_j|≤ 2. This contradicts Claim <ref>.
Let H=G-S. Assume first that there exists an i such that, say, e_i^1∈ S and e_i^3∈ S.
Observe that S contains at least one edge e incident to u_6 or u_7, as otherwise we would have that d_H(u_6)=d_H(u_7), contradicting the fact that S is an edge-irregulator of G. Thus, if we also have that e_i^2∈ S or that e_i^4∈ S, it follows that |S∩ E_i|≥ 4, a contradiction. Thus, we may assume that S∩ E_i={e_i^1,e_i^3,e}. If e∈{u_7u_9,u_7u_10,u_7u_11}, say e=u_7u_9, then d_H(u_7)=d_H(u_10). Also, if e=u_6u_8, then S also contains u_8u_16. Finally, if e=v_iu_6 (resp. e=u_6v'_i) then d_H(u_6)=d_H(v'_i) (resp. d_H(u_6)=d_H(v_i)). It follows from Observation <ref> that in all cases, we have that |S∩ E_i|≥ 4, a contradiction.
We are now ready to give the reduction. Let G be the graph constructed from the formula ϕ as explained above. We show that there exists a satisfying truth assignment of ϕ if and only if _e(G)≤ 3n.
For the first direction, let T be a satisfying truth assignment of ϕ. Let S be the set containing the edges e_i^1,e_i^2,u_6u_7 for every 1≤ i≤ n such that T(x_i)=true and the edges e_i^3,e_i^4,v_iu_6 for each i such that T(¬ x_i)=true. Let H=G-S. Clearly, |S|=3n. Also S is an edge-irregulator of G. Indeed, the part of the graph H that corresponds to the gadget attached to v_i and v'_i is clearly locally irregular for every i. Also, for each j, we have that d_H(c_j)≤ 3 (since C_j is satisfied by at least one literal) and any vertex in N_H(c_j) has degree equal to 4.
For the reverse direction, assume that _e(G)≤ 3n and let S be an edge-irregulator of G such that |S|=3n. Recall that due to Claim <ref>, for each i∈[n], if S contains one edge in {e_i^1,e_i^2} then it contains no edge in {e_i^3,e_i^4} and vice versa. For each i∈ [n], we set T(x_i)=true if S contains one edge in {e_i^1,e_i^2} and T(¬ x_i)=true in any other case. We claim that T is indeed a truth assignment that satisfies ϕ. Indeed, due to Claim <ref>, we know that each variable will receive exactly one truth value. Also, since S is an edge-irregulator, and due to Claim <ref>, we know that for each j∈ [m], there exists an i∈ [n] such that either v_ic_j∈ S or v'_ic_j∈ S; that is, for each clause C_j, there exists either a literal x_i or a literal ¬ x_i that has been set to true. In other words, each clause of ϕ is satisfied by T. This ends the reduction.
Let G be a graph and k∈ℕ. Deciding if _e(G)≤ k is W[1]-hard parameterised by k.
The reduction is from k-Multicoloured Clique.
k-Multicoloured Clique
A graph G'=(V,E) and a partition (V_1,…,V_k) of V into k independent sets.
Does there exist a set S⊆ V such that G'[S] is a clique?
It is known that k-Multicoloured Clique is W[1]-hard parameterised by k <cit.>.
On a high level, our reduction will proceed as follows. Starting with the graph G' that is given in the input of k-Multicoloured Clique, we will first subdivide every edge of the graph G'. Then, for each i∈[k], we will attach one copy of a particular gadget to the vertices of V_i. Also, for each 1≤ i<j≤ k, we will attach a copy of our gadget to the vertices that correspond to the edges v_iv_j of G', with v_i∈ V_i and v_j∈ V_j. In total, we will add (k^2+k)/2 gadgets.
The gadgets are structured so that any edge-irregulator of the graph contains at least one edge for each gadget (so any solution has a size of at least (k^2+k)/2). Furthermore, we prove that, if we have selected only one edge from a gadget, then that edge must be incident to either a vertex of the original graph or a vertex that represents an edge of the original graph.
Finally, we show that:
* an edge-irregulator S that contains exactly one edge from each gadget (i.e. an edge-irregulator of size (k^2+k)/2) can give us a clique of size k in the original graph by selecting the vertices and edges (represented by vertices) of the original graph that are incident to the edges of S and
* if we have a clique of size k in the original graph we can construct an optimal edge-irregulator S by selecting the edges of the gadgets that are incident to the k vertices of the clique and the (k^2-k)/2 vertices that represent the edges of the clique.
We proceed with the formal proof. Assume that we are given an instance G'=(V,E) with vertex partition (V_1,…,V_k) where |V_i| = n for all i ∈ [k]. For each i ∈ [k], we denote by v_i^p, for p∈ [n], the vertices of V_i.
We construct a graph G as follows:
* Start with a copy of G'.
* Subdivide each edge e ∈ E. Let u_i,j^p,q be the vertex that corresponds to the edge v_i^pv_j^q∈ E. Also, let U_i,j be the set of vertices that corresponds to the edges between the sets V_i and V_j, i.e., the set {u_i,j^p,q| v_i^pv_j^q∈ E}.
* For each pair (i,j) where 1≤ i < j ≤ k, create a copy of the gadget H_|U_i,j|, illustrated in Figure <ref>, and add all the edges between the copy of w and the vertices of U_i,j. We denote this copy of w by w_i,j, the copy of H_|U_i,j| by H^w_i,j and the copy of y in H^w_i,j by y_i,j.
* For each i ∈ [k], create a copy of the gadget H_|V_i| and add all the edges between the copy of w and the vertices of V_i. We denote this copy of w by w_i, the copy of H_|V_i| by H^w_i and the copy of y in H^w_i by y_i.
* Finally, add leaves attached to the vertices of V_i, i ∈ [k], so that each vertex of V_i has degree kn and attached to the vertices of U_i,j, 1≤ i<j ≤ k, so that each vertex of U_i,j has degree kn + 1.
Let G be the resulting graph.
We prove that G has an edge-irregulator of order (k^2 + k)/2 if and only if G' is a yes instance of k-Multicoloured Clique.
Assume that G' is a yes instance of k-Multicoloured Clique and C= {c_1, … , c_k} is a clique in G' with c_i∈ V_i for every i∈[k]. We will construct an edge-irregulator of G as follows. Start with an empty set S.
Notice that, for each i ∈ [k], |V_i ∩ C|=1 and let p∈[n] be such that v_i^p=c_i;
we add to S the edge v_i^p w_i.
For each pair (i,j), 1≤ i<j ≤ k, let p,q∈[n] be such that v_i^p=c_i and v_j^q=c_j; we add to S the edge u_i,j^p,qw_i,j. Notice the edge v_i^p v_j^q must exist in E since C is a clique. It follows that the vertex u_i,j^p,q, and therefore the edge u_i,j^p,qw_i,j, also exists in G. By construction, |S| = (k^2+k)/2. It only remains to prove that S is an edge-irregulator of G.
Consider the graph G-S. Observe that, for every H^w_i, i ∈ [k], we have reduced the degree of w_i by exactly one. Therefore, any two adjacent vertices of H^w_i have different degree (see Figure <ref>). The same holds true for every H^w_i,j, 1≤ i<j ≤ k.
Consider now the edges xz∈ E(G) such that x∈{w_i,w_j,w_i,j}, and z∈ V_i ∪ U_i,j∪ V_j, 1≤ i<j ≤ k. Notice that d_G-S(x)=n^2-1 and kn-1≤ d_G-S(z)≤ kn+1. For sufficiently large n, we have that n^2-1 > kn+1.
It remains to consider the edges between vertices in V_i ∪ V_j and in U_i,j for any 1≤ i<j ≤ k.
Notice that, for every 1≤ i<j ≤ k, all vertices of V_i ∪ V_j, except one vertex v_i^p∈ V_i and one vertex v_j^q∈ V_j, have degree kn, and d_G-S(v_i^p)=d_G-S(v_j^q)=kn - 1.
Also, all vertices of U_i,j, except one vertex u', have degree kn +1, and d_G-S(u')=kn. So, u' is the only vertex of U_i,j that could possibly have the same degree as a vertex in V_i∖{v_i^p} or V_j∖{v_j^q}. It follows by the construction of S that u' is actually u_i,j^p,q. Also, by the construction of G, u_i,j^p,q is adjacent only to v_i^p and v_j^q, as it represents the edge between their corresponding vertices in G'. Thus, for every 1≤ i<j ≤ k, no vertex in U_i,j has the same degree as any of its neighbours in V_i or V_j. It follows from all the arguments above that S is indeed an edge-irregulator of G.
Now we show that if _e(G)=(k^2+k)/2 then G' has a clique of size k. Let S be an edge-irregulator of G of order (k^2+k)/2.
First, we notice that for each i∈ [k], d_G(w_i)=d_G(y_i) and that for each 1≤ i <j ≤ k, d_G(w_i,j)=d_G(y_i,j).
Let E_w_i be the set of edges w_iv for v ∈ V_i and E_w_i,j be the set of edges w_i,ju for u ∈ U_i,j. Also, let w ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }.
Since S is an edge-irregulator of G, it follows that |S∩ (E(H^w)∪ E_w)|≥ 1. Also, observe that for any pair of distinct vertices w, w' ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }, we have that (E(H^w)∪ E_w) ∩ ( E(H^w')∪ E_w' ) = ∅. Thus, and since |S|=(k^2+k)/2, we obtain that, actually, |S∩ (E(H^w)∪ E_w)|= 1. Next, we show that S includes only edges from the set E_w, for each w ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }. In particular we claim the following:
Let w ∈{w_i | i ∈ [k]}∪{w_i,j| 1≤ i < j ≤ k }.
It holds that S ∩ E(H^w) = ∅ and that |S ∩ E_w| = 1.
Assume that S∩ E(H^w)≠∅ and let e ∈ S ∩ E(H^w).
We distinguish cases according to which edge of H^w is e. In each case, we show that S must include an additional edge of E(H^w), which is a contradiction to the fact that |S ∩ ( E(H^w) ∪ E_w ) |= 1.
e is incident to neither w nor y: Then S must also include an additional edge incident to w or y (from previous discussion).
e is incident to y: Then, S must include an additional edge of E(H^w), as otherwise d_G-S(y)=d-1 and y would have at least one neighbour of degree d-1.
e is incident to w and e ≠ wy: Then, S must include an additional edge of E(H^w), as otherwise G-S would include a connected component isomorphic to K_2.
The previous claim also shows that S ⊆⋃_i ∈ [k]E_w_i∪⋃_1≤ i < j ≤ k E_w_i,j.
We now explain how to construct a clique of G' of order k.
Let ℓ(i)=m(i) be the index that specifies which edge incident to w_i is included in S. That is, ℓ(i) is such that w_i v_i^ℓ(i)∈ S.
Similarly, for each 1≤ i<j ≤ k, let ℓ(i,j) and m(i,j)
be the indices such that w_i,j u_i,j^ℓ(i,j) , m(i,j)∈ S.
Notice that both ℓ(i) and ℓ(i,j) are unique as S contains exactly one edge incident to each of w_i and w_i,j (by Claim <ref>).
The set C ={ v_i^ℓ(i)| i ∈[k] } induces a clique of order k in G.
First, for any 1≤ i<j ≤ k, we show that ℓ(i) = ℓ(i,j) and m(j) = m(i,j). To simplify the notation let ℓ = ℓ(i,j) and m = m(i,j). By the definition of ℓ and m we have that w_i,j u_i,j^ℓ , m∈ S.
Now, we consider the degrees of the vertices v_i^ℓ
and u_i,j^ℓ,m.
Since w_i,j u_i,j^ℓ, m ∈ S, we have that d_G-S(u_i,j^ℓ,m)=kn.
If ℓ(i) ≠ℓ,
then d_G-S(v_i^ℓ)=kn, as S would not include any edges incident to v_i^ℓ in that case. This is a contradiction since v_i^ℓ and u_i,j^ℓ, m are adjacent in G (by construction) and remain so in G-S (as S ⊆⋃_i ∈ [k]E_w_i∪⋃_1≤ i < j ≤ k E_w_i,j). Therefore, for any 1≤ i<j ≤ k, ℓ(i) = ℓ = ℓ(i,j). Similarly, we can show that for any 1≤ i<j ≤ k, m(j) = m = m(i,j).
Now we show that for every pair of distinct vertices u,v ∈{ v_i^ℓ(i)| i ∈[k] }, we have that u and v are adjacent in G'.
W.l.o.g. let u =v_i^ℓ(i) and v = v_j^ℓ(j) for some 1≤ i < j ≤ k. We know that ℓ(i) = ℓ and ℓ (j) = m(j) = m. Therefore, the vertex u_i,j^ℓ(i,j) , m(i,j) = u_i,j^ℓ , m of G is adjacent to v_i^ℓ (i) and v_j^ℓ (j). This means that v_i^ℓ (i) and v_j^ℓ (j) are incident in G' as the vertex u_i,j^ℓ(i) , m(j) corresponds to the edge between these two vertices in G' (recall the construction of G).
Thus, any pair of vertices in C is a pair of adjacent vertices in G'. It follows that C is a clique.
This completes the proof.
Unfortunately, this problem exhibits a similar behaviour to finding optimal vertex-irregulators, as it also remains intractable even for “relatively large” structural parameters.
Let G be a graph and k∈ℕ. Deciding if _e(G)≤ k is W[1]-hard parameterised by either the feedback vertex set number or the treedepth of G.
The reduction is from the General Factor problem:
General Factor
A graph H=(V,E) and a list function L: V →𝒫({0,…,Δ(H)}) that specifies the available degrees for each vertex u ∈ V.
Does there exist a set S⊆ E such that d_H-S(u) ∈ L(u) for all u ∈ V?
This problem is known to be W[1]-hard when parameterised by the vertex cover number of H <cit.>.
Starting from an instance (H,L) of General Factor, we construct a graph G such that _e(G)≤ n^2, where n=|V(H)|, if and only if (H,L) is a yes-instance.
For every vertex u∈ V(H), let us denote by L̅(u) the set {0,1,…, d_H(u)}∖ L(u) of forbidden degrees of u. In the case where {0,1,…, d_H(u)}∖ L(u)=∅, we set L̅(u)={-1}. On a high level, the graph G is constructed by adding some trees on the vertices of H. In particular, for each vertex u∈ V(H) and for each element a in L̅(u), we will attach a tree to u whose purpose is to prevent u from having degree a in G-S, for any optimal edge-irregulator S of G.
We begin by defining an arbitrary order on the vertices of H. That is, V(H)={u_1,u_2,…,u_n}. Next, we describe the trees we will use in the construction of G. In particular, we will describe the trees that we attach to the vertex u_i, for every 1≤ i≤ n. First, for each a_j∈L̅(u_i), define the value a'_j=d_H(u_i)-a_j. Also, for each j, let d_i,j=2in^4-a'_j.
For each “forbidden degree” a_j in the list L̅(u_i), we will attach a tree T_i,j to u_i. We define the tree T_i,j as follows.
First, for every 0≤ k≤ n^2-1,
create n^2
copies of S_d_i,j-k (the star on d_i,j-k vertices) and q additional copies of S_d_i,j-n^2+1
(the exact value of q will be defined in what follows). Then, choose one leaf from each one of the above stars, and identify them into a single vertex denoted as u_i,j; the value of q is such that d(u_ij)=d_i,j-1=2in^4-a'_j-1.
Let T_i,j be the resulting tree and let us say that u_i,j is the root of T_i,j.
Let us now describe the construction of G. For each vertex u_i∈ V(H) and for each a_j∈L(u_i), add the tree T_i,j to H and the edge u_i,ju_i. Then, for each vertex u_i∈ V(H), for any j such that u_i,j is a neighbour of u_i, add p_i additional copies of the tree T_i,j, as well as the edges between u_i and the roots of the additional trees, so that d_G(u_i)=2in^4.
The resulting graph is G. Note that, for each vertex of V(H), we are adding at most 𝒪(n^4)
trees, each tree containing 𝒪(n^4) vertices.
Thus, the construction of G is achieved in polynomial time.
We are now ready to present our reduction. Assume first that (H,L) is a yes-instance of General Factor, and let S⊆ E be such that d_H-S(u)∈ L(u) for all u∈ V(H). We claim that S is also an edge-irregulator of G. By the construction of G, and since S only contains edges from H, there are no two adjacent vertices in G-H that have the same degree in G-S. Thus, it remains to check the pairs of adjacent vertices x,y such that, either both x and y belong to V(H), or, w.l.o.g., x∈ V(H) and y∈ V(G-H). For the first case, let x=u_i and y=u_i', for 1≤ i<i'≤ n. Then, assuming that d_G-S(u_i)=d_G-S(u_i'), we get that 2in^4-p=2i'n^4-p', where S contains 0≤ p≤ n^2 and 0≤ p'≤ n^2 edges incident to u_i and u_i' respectively. Thus, 2n^4(i-i')=p-p', a contradiction since -n^2≤ p-p'≤ n^2 and -n≤ i-i'≤ n. For the second case, for every i, let d_G-S(u_i)=2in^4-p, where the set S contains 1≤ p≤ n^2 edges of H incident to u_i. Also, by the construction of G and since S only contains edges from H, we have that for every j, d_G-S(u_i,j)=d_G(u_i,j)=2in^4-a'_j, where, recall, a'_j=d_H(u_i)-a_j for a_j∈L̅(u_i). Assume now that there exist i,j such that d_G-S(u_i)=d_G-S(u_i,j). Then, 2in^4-p=2in^4-d_H(u_i)+a_j and thus d_H(u_i)-p=a_j. But then d_H-S(u_i)=a_j, which is a contradiction since a_j∈L̅(u_i). Thus, S is an edge-irregulator of G and |S|≤ n^2 since S only contains edges of E(H).
For the reverse direction, assume that _e(G)≤ n^2 and let S be an optimal edge-irregulator of G. We will show that S is also such that d_H-S(u_i)∈ L(u_i), for every i. Let us first prove the following claim.
Let S be an optimal edge-irregulator of G. For every i,j, let T be any copy of the T_i,j tree that is attached to u_i, and let u be the root of this T_i,j. If S contains x≥ 1 edges of E_i,j=E(T)∪{uu_i}, then x≥ n^2.
Assume there exist i,j such that |S∩ E_i,j|=x≥ 1 and x≤ n^2. Among those edges, there are x_1≥ 0 edges incident to u and x_2≥ 0 edges incident to children of u (but not to u), with x_1+x_2=x< n^2.
Assume first that x_1=0. Then x=x_2 and there is no edge of S∩ E_i,j that is incident to u. Then d_G-S(u)=d_G(u) and observe that d_G(u) is strictly larger than that of any of its children (by the construction of G). It follows that S∖ (S∩ E_i,j) is also an edge-irregulator of G, contradicting the optimality of S. Thus x_1≥ 1. It then follows from the construction of G that there exist at least n^2 children of u, denoted by z_1,…,z_n^2, such that d_G-S(u)=d_G(z_k), for every 1≤ k≤ n^2. Since x<n^2, there exists at least one 1≤ k≤ n^2 such that d_G-S(u)=d_G-S(z_k), contradicting the fact that S is an edge-irregulator. Thus x≥ n^2.
It follows directly from Claim <ref> that S contains only edges of E(H). Assume that there exist i,j such that d_H-S(u_i)=a_j and a_j∈L̅(u_i). Then d_G-S(u_i)=2in^4-a'_j. Also, by the construction of G, u_i is adjacent to a vertex u_i,j for which (since S contains only edges of E(H)) we have that d_G-S(u_i,j)=d_G(u_i,j)=2in^4-a'_j. This contradicts the fact that S is an edge-irregulator of G. Thus, for every i,j, we have that if d_H-S(u_i)=a_j, then a_j∈ L(u_i), which finishes our reduction.
Finally, if H has vertex cover number vc, then, by Observation <ref>, and since G is constructed by attaching trees of depth 3 directly on the vertices of H, we have that G has treedepth and feedback vertex set 𝒪(vc). This concludes our proof.
We close this section by observing that the proof of Theorem <ref> can be adapted for the case of edge-irregulators. Indeed, it suffices to replace the guessing of vertices and the variables defined on vertices, by guessing of edges and variables defined on the edges of the given graph. Finally, the definition of the sub-types is done through subgraphs produced only by deletion of edges. This leads us to the following:
Given a graph G with vertex integrity k, there exists an algorithm that computes _e(G) in FPT-time parameterised by k.
§ CONCLUSION
In this work we continued the study of the problem of finding optimal vertex-irregulators, and introduced the problem of finding optimal edge-irregulators. In the case of vertex-irregulators, our results are somewhat optimal, in the sense that we almost characterise exactly which are the “smallest” graph-structural parameters that render this problem tractable. The only meaningful parameter whose behaviour remains unknown is the modular-width of the input graph. The parameterised behaviour of the case of edge-irregulators is also somewhat understood, but there are still some parameters for which the problem remains open.
Another interesting direction is that of approximating optimal vertex or edge-irregulators. In particular it would be interesting to identify parameters for which either problem becomes approximable in FPT-time (recall that vertex-irregulators are not approximable within any decent factor in polynomial time <cit.>). Finally, provided that the behaviour of edge-irregulators is better understood, we would also like to propose the problem of finding locally irregular minors, of maximum order, of a given graph G.
|
http://arxiv.org/abs/2307.04384v1 | 20230710074305 | Causal Neural Graph Collaborative Filtering | [
"Xiangmeng Wang",
"Qian Li",
"Dianer Yu",
"Wei Huang",
"Guandong Xu"
] | cs.IR | [
"cs.IR"
] |
Causal Neural Graph Collaborative Filtering
Xiangmeng Wang1,
Qian Li1 2,
Dianer Yu,
Wei Huang,
Guandong Xu2, Member, IEEE
X. Wang, D. Yu and G. Xu are with Data Science and Machine Intelligence Lab, Faculty of Engineering and Information Technology, University of Technology Sydney, New South Wales, Australia.
E-mail: {Xiangmeng.Wang, Dianer.Yu, Guandong.Xu}@uts.edu.au
Q. Li is with the School of Electrical Engineering, Computing and Mathematical
Sciences, Curtin University, Perth, Australia. E-mail: [email protected].
W. Huang is with RIKEN Center for Advanced Intelligence Project (AIP). E-mail: [email protected]
* Both authors contributed equally to this research.
†Corresponding author.
August 12, 2023
Graph collaborative filtering (GCF) has gained considerable attention in recommendation systems by leveraging graph learning techniques to enhance collaborative filtering (CF) models. One classical approach in GCF is to learn user and item embeddings by modeling complex graph relations and utilizing these embeddings for CF models. However, the quality of the embeddings significantly impacts the recommendation performance of GCF models.
In this paper, we argue that existing graph learning methods are insufficient in generating satisfactory embeddings for CF models. This is because they aggregate neighboring node messages directly, which can result in incorrect estimations of user-item correlations. To overcome this limitation, we propose a novel approach that incorporates causal modeling to explicitly encode the causal effects of neighboring nodes on the target node. This approach enables us to identify spurious correlations and uncover the root causes of user preferences.
We introduce Causal Neural Graph Collaborative Filtering (CNGCF), the first causality-aware graph learning framework for CF. CNGCF integrates causal modeling into the graph representation learning process, explicitly coupling causal effects between node pairs into the core message-passing process of graph learning. As a result, CNGCF yields causality-aware embeddings that promote robust recommendations.
Our extensive experiments demonstrate that CNGCF provides precise recommendations that align with user preferences. Therefore, our proposed framework can address the limitations of existing GCF models and offer a more effective solution for recommendation systems.
Graph Representation Learning, Causal Inference, Structural Causal Model, Recommendation System
§ INTRODUCTION
Recommendation system (RS) has been a core component of many web-based services, e.g., e-commerce, facilitating information filtering for users from overwhelming data.
Benefiting from the capability to learn from relational graph data, an emerging RS paradigm built on graph learning <cit.>, i.e., graph collaborative filtering (GCF), has been studied extensively in recent years <cit.>.
GCF enhances traditional collaborative filtering <cit.> by modeling complex user-item interactions in a graph as well as auxiliary side information, e.g., user and item attributes.
Thus, GCF has shown great potential in deriving knowledge (e.g., user behavior patterns) embedded in graphs.
Existing GCF can be categorized as random walk-based and graph representation learning-based methods.
The first branch of random walk-based methods <cit.> uses user and item similarities to build random walk models that produce user-item co-occurrence information for downstream CF models.
For instance, ItemRank <cit.> performs label propagation within an interaction graph and utilizes a probability model to compute inter-user and inter-item similarities.
The similarities are then defined as transition probabilities of a random walk model, which produces item importance to enhance a CF model.
However, the random walk model is conceptually isolated from the CF model, since it does not include model parameters to be optimized with the CF learning objective.
An alternative category of graph representation learning methods utilizes graph neural networks to analyze graph connections and construct representations, commonly known as embeddings.
The fundamental concept behind these methods is to acquire vectorized user and item embeddings through the application of graph neural networks, which can subsequently be utilized to optimize the collaborative filtering model.
For instance, NGCF <cit.> exploits a graph convolutional network (GCN) to propagate neighboring node messages in the interaction graph to obtain user and item embeddings.
The learned embeddings capture user collaborative behavior and are used to predict user preference scores for CF optimization.
Following this paradigm, subsequent works <cit.> also achieve favorable performance in different tasks, e.g., sequential recommendation <cit.>, by using auxiliary information such as interaction timestamp <cit.> for user sequential behavior modeling.
Despite the efforts, we argue that existing graph representation learning methods are not sufficient to yield satisfactory embeddings to enhance CF models.
The main reason is that
they learn user and item embeddings by directly aggregating neighboring node messages, while these messages are simple correlation signals of node pairs.
Take Figure <ref> (a) as a toy example.
Given an interaction graph, existing graph representation learning generally learns user embeddings by sampling and aggregating users' correlated neighbors.
Considering that user u_1 has a neighbor set {i_1, i_2, a_1, i_3, a_2}, which heavily overlaps with user u_2's neighbor set {i_1, i_2, a_1, i_4, a_3}, the resulting embeddings of u_1 and u_2 would be very similar compared with those of other users.
The CF model takes the inner product between u_1's embedding and the embeddings of items from the item set as u_1's preference scores over items.
Similarly, u_2's preference scores are estimated based on u_2's embedding and item embeddings.
For item i_3, as u_1 and u_2's embeddings are similar, the preference scores of u_1 and u_2 on item i_3 would be similar too.
Assuming that user u_1 has previously interacted with item i_3, thereby indicating a significant preference score for i_3, the CF model would recommend i_3 to user u_2 based on this high preference score.
However, we may infer that user u_2 is truly interested in item attribute a_3, which belongs to the item i_4 the user has interacted with.
Consequently, the item i_3, recommended on the basis of attribute a_2, may not align with the personal preferences of user u_2 and thus fails to meet the user's expectations.
We claim that estimating the direct causal effects between node pairs in the graph could address this issue.
As illustrated in Figure <ref> (b), in order to determine the accurate preference of user u_2, we might consider each node within the set of neighbors of u_2 as the cause and the preference of u_2 as the effect.
For instance, measure the causal effect of a_3 on u_2 by considering a_3 as the cause and u_2's preference as the effect.
By estimating the causal effect in each of the node-preference pairs, we can obtain the causal effect of
a_3 on u_2, i.e., 0.96, and the causal effect of a_1 on u_2, i.e., 0.91.
Given the condition that a causal effect above 0.9 indicates strong causation between cause and effect nodes, we thus conclude that a_3 and a_1 attract u_2's personal interest.
As such, we can use this causation signal to refine the user embedding of u_2 towards favoring items with a_3 and a_1 and finally enhance the CF model for user interest modeling.
Following the above intuition, we propose to inject causal modeling into graph representation learning to explicitly encode the crucial causal relations within node pairs into embeddings.
Causal modeling identifies the intrinsic cause-effect relations between a node and true user preferences <cit.>.
Considering that the message-passing mechanism suffers from ambiguous correlations of node relations within calculated messages <cit.>, modeling node-level causal relations could help estimate the true user preferences to obtain causality-aware messages.
For instance, we can estimate how a user's preference (i.e., effect) is affected by the item brand (i.e., cause).
As such, by coupling with causal modeling, we could enable graph learning to uncover the true interests under user interactions, i.e., the root causes that trigger users' interests to interact with the item.
We therefore propose the first causality-aware graph representation learning framework for collaborative filtering.
We focus on a special class of neural networks for graph learning, namely the graph convolutional network (GCN), to inject the causal relations between nodes into the core message-passing process in the GCN computation.
The underlying idea is to establish a connection between the structural causal model (SCM) and the message-passing mechanism of graph convolutional network (GCN) computation, which enables the messages to encapsulate the causal relationships between the adjacent nodes and the target node.
Specifically, we construct a causal graph that induces a SCM to describe the recommendation generation process of graph representation learning that incorporates causality.
Using the SCM, we formulate the recommendation process as a generative model, in which each component in the generative model describes a structural equation.
We propose a novel Causal Neural Graph Collaborative Filtering (CNGCF), which utilizes variational inference to quantify the components of the generative model. The CNGCF framework explicitly integrates causal relationships, as defined by the structural causal model (SCM), into the message-passing mechanism of graph convolutional network (GCN)-based graph learning. This integration facilitates the generation of accurate recommendations that uncover the true user preferences.
The contributions of this work are:
* We introduce a novel approach that leverages causal model-based graph representation learning for recommendation systems.
Our proposed CNGCF is the first of its kind to explore causal relationships underlying the graph with the aim of generating causality-aware graph embeddings.
* Our CNGCF utilizes a unified framework based on variational inference, which is driven by a causal graph encoder to model the graph topology of the causal graph and a collaborative filtering decoder to reconstruct user interactions.
* We validate the effectiveness of our proposed framework through extensive experimentation. Our experimental results demonstrate that our approach outperforms existing methods in achieving satisfactory recommendation performance.
§ RELATED WORK
§.§ Graph Collaborative Filtering
Collaborative filtering (CF) <cit.> dominates recommendation research due to its simplicity and effectiveness.
Early CF models including latent factor models <cit.> and neural-based CF <cit.> use descriptive features (e.g., IDs) to calculate user similarities, assuming that users with similar historical behaviors have similar future preferences.
For example, Bayesian personalized ranking (BPR) <cit.> learns
user and item latent vectors from the interaction matrix built by implicit user feedback, e.g., clicks.
The inner products between latent vectors are used as user-item similarities to predict user preference scores.
Neural collaborative filtering (NCF) <cit.> uses a Multi-layer perceptron (MLP) to learn a user behavior similarity function based on simple user/item one-hot encodings.
Graph CF (GCF) leverages advances in graph learning <cit.> to model user-item interaction graphs as well as rich auxiliary data (e.g., text, image), thus boosting the recommendation by augmenting complex semantics under user-item interactions.
Relevant approaches can be categorized as random walk-based and graph representation learning-based methods.
The first line of random walk-based methods
builds random walk models with calculated similarities among users and items from probability models.
The learned random walk models give probability distributions over items to produce auxiliary user-item co-occurrence information for CF models.
For instance, ItemRank <cit.> computes the stationary distribution of a random walk model based on estimating inter-user and inter-item similarities from a user-item interaction graph.
The random walk model provides item importance for a CF model, in which the final ranking of items is based on the calculated item importance.
BiRank <cit.> extends ItemRank to incorporate both item features and user preferences in recommendations.
BiRank computes a joint stationary distribution over users and items in the graph, where the probability of transitioning from an item node to a user node is based on user ratings on items.
These methods are inferior to optimization-based CF methods since they do not include model parameters that can be optimized together with the CF training.
Another line of graph representation learning-based methods usually uses deep neural networks (e.g., graph convolution network) to scrutinize complex graph relations and produce user and item representations for recommendation tasks.
Neural graph collaborative filtering (NGCF) <cit.> is one of the most representative graph representation learning-based CF approaches, which incorporates two graph convolutional networks (GCNs) to learn the collaborative signal of user interactions from a user-item interaction graph.
GC-MC <cit.> uses a GCN-based auto-encoder to learn latent features of users and items from an interaction graph and reconstructs the rating links for matrix completion.
Later, LightGCN <cit.> simplifies the application of the GCN in recommendations by only including neighborhood aggregation for calculating user and item representations, which further boosts the efficiency of subsequent GCF approaches, e.g., <cit.>.
Despite the great effort, existing GCF methods only capture correlation signals of user behaviors by modeling neighboring node messages.
This would result in the limited ability of GCF models to capture the true user preferences in the presence of spurious correlations.
On the contrary, we abandon the modeling of spurious correlations to pursue the intrinsic causal relations between nodes, which estimate the causal effect of a specific item on user preferences to uncover true user interests.
§.§ Causal Learning for Recommendation
Recent recommendation research has largely favored causality-driven methods.
A burst of relevant papers has been proposed to address critical issues in RS, such as data bias and model explainability, with causal learning.
Among them, two representative causal frameworks are largely adopted, i.e., the potential outcome framework (POF) from Rubin et al. <cit.> and the structural causal model (SCM) from Pearl et al. <cit.>.
POF-based recommendation directly estimates the causal effect of a treatment (e.g., item feature) on the outcome, i.e., recommendation results.
Inverse propensity weighting (IPW) <cit.> is wildly adopted in POF-based recommendations.
Tobias et al. <cit.> adopt IPW to learn unbiased matrix factorization models, in which propensity scores are estimated by a separately learned propensity model.
Zhang et al. <cit.> integrate the learning of the propensity model and the recommendation model into a multi-task learning framework.
However, POF-based recommendation is less intuitive since it does not include graphical models to describe causal relations.
Besides, POF-based recommendation largely relies on the quality of propensity score estimation.
The estimator usually suffers from the “propensity overfitting” <cit.> due to the uncertainty of unseen variables, limiting the performance of POF-based recommendations.
SCM-based recommendation directly builds a causal graph by extracting structural equations on causal relations between deterministic variables in recommendations.
It aims to use the causal graph to conduct causal reasoning for causal effect estimation.
Using the causal graph, most relevant approaches pursue mitigating the bad effects of different data biases, e.g., exposure bias <cit.>, popularity bias <cit.>.
For instance, Wang et al. <cit.> mitigate exposure bias in the partially observed user-item interactions by regarding the bias as the confounder in the causal graph.
They propose a deconfounded model that performs Poisson factorization on substitute confounders (i.e., an exposure matrix) and partially observed user ratings.
Zheng et al. <cit.> relate the user conformity issue in recommendations with popularity bias, and use a causal graph to guide the disentangled learning of user interest embeddings.
Other approaches also achieve explainable recommendations.
Wang et al. <cit.> define a causal graph that shows how users' true intents are related to item semantics, i.e., attributes.
They propose a framework that produces disentangled semantics-aware user intent embeddings, in which each model component corresponds to a specific node in the causal graph.
The learned embeddings are able to disentangle users' true intents towards specific item semantics, which explains which item attributes are favored by users.
§ PRELIMINARIES
We provide key preliminaries, including the definition of graph-based recommendations utilizing graph convolutional networks, as well as basic concepts under causal inference.
§.§ Recommendation with Graph Convolutional Network
Let 𝒰 and ℐ denote the sets of users and items, respectively.
Graph-based recommendation formulates users and items with their features into a graph G=(𝒱, ℰ), where 𝒱 is the node set that contains all user and item nodes, with |𝒱| = |𝒰∪ℐ|, and ℰ is the edge set denoting the connections among nodes.
G induces an adjacency matrix 𝐀∈ [0,1]^N × N and a node feature matrix 𝐃∈ℝ^N × d, where N=|𝒱| is the number of nodes and d is the dimension of node features.
Each 𝐝_i ∈ℝ^d is the vector-valued sample of a specific node i ∈𝒱 containing descriptive information of the node, e.g., user/item IDs.
Using G, most graph-based recommendation models rely on graph representation learning <cit.> to scrutinize complex graph relations and produce dense vectors (a.k.a embeddings) for recommendation tasks, e.g., rating prediction.
Graph convolutional network (GCN) <cit.> is a typical method for graph representation learning.
It employs multiple graph convolutional layers to obtain the graph representation 𝐄 of G, where 𝐄∈ℝ^|𝒱| × d^'
absorbs user and item node representations as d^'-dimensional dense vectors.
Based on 𝐄, the model then infers the interaction probabilities of users over items to make recommendations.
In particular, a graph convolutional layer g(𝐃, 𝐀) calculates each representation 𝐞_i of a user/item node i based on its feature 𝐝_i ∈𝐃 and node neighbors 𝒩_i through the following equation [We present the widely-used inductive graph representation learning setting with the GCN. An inductive setting abandons the reliance on the full graph Laplacian, in contrast to the transductive setting. For a comparison between inductive and transductive learning, refer to <cit.>.]:
𝐞_i=ϕ(𝐝_i, ⊕_j ∈𝒩_iψ(𝐝_i, 𝐝_j))
where 𝐞_i denotes the representation of a user/item node i, which is calculated by aggregating (⊕) the messages ψ from its neighbors within 𝒩_i.
𝒩_i is the neighbor set of i established by visiting the adjacency matrix 𝐀 and 𝐝_j is the node feature of the neighboring node j.
The calculation of messages ψ in Eq. (<ref>) is known as message-passing <cit.>, which is the de facto standard for a class of GCN variants, e.g., graph attentional networks <cit.>.
The aggregation operator ⊕ may take various forms, e.g., element-wise mean <cit.>, max-pooling <cit.>.
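To make the message-passing computation of Eq. (<ref>) concrete, the following minimal NumPy sketch implements one such layer. The linear message map, the mean aggregator and the ReLU update are illustrative stand-ins for ψ, ⊕ and ϕ, and all node features, weights and dimensions are hypothetical toy values rather than anything used later in this paper.

import numpy as np

def gcn_layer(D, A, W_msg, W_upd):
    # One message-passing layer: for each node i, aggregate messages
    # psi(d_i, d_j) from neighbours j and combine with d_i via an update phi.
    N = D.shape[0]
    E = np.zeros((N, W_upd.shape[1]))
    for i in range(N):
        neigh = np.nonzero(A[i])[0]
        # message psi: a linear map over the concatenated feature pair
        msgs = [np.concatenate([D[i], D[j]]) @ W_msg for j in neigh]
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(W_msg.shape[1])
        # update phi: combine the self feature with the aggregated message
        E[i] = np.maximum(0.0, np.concatenate([D[i], agg]) @ W_upd)
    return E

# toy interaction graph: 2 users and 2 items with 8-dimensional features
rng = np.random.default_rng(0)
D = rng.normal(size=(4, 8))
A = np.array([[0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]])
W_msg = rng.normal(size=(16, 8))
W_upd = rng.normal(size=(16, 8))
print(gcn_layer(D, A, W_msg, W_upd).shape)   # (4, 8)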
§.§ Causal Inference
A causal graph <cit.> is a directed acyclic graph (DAG) G̃=({𝒱, Z}, ℰ) that represents causal relations among endogenous and exogenous variables.
Here, 𝒱 is a set of endogenous variables of interest, e.g., user and item nodes in the graph learning, and user preference variables.
Z is a set of exogenous variables outside the model, e.g., item exposure.
ℰ is the edge set denoting causal relations among G̃.
Each directed edge (j → i) ∈ℰ represents a causal relation from j to i, where i ∈𝒱 and j is a parent node of i, i.e., j ∈ pa(i).
G̃ induces a user causal adjacency vector 𝐀̃_u and an item causal adjacency vector 𝐀̃_v, which specify the adjacent neighbors of a user node u and an item node v, respectively.
Each element 𝐀̃_u^j =1 if j ∈ pa(u), otherwise, 𝐀̃_u^j=0.
Similarly, 𝐀̃_v^j=1 if j ∈ pa(v).
A structural causal model (SCM) <cit.> ℳ = ⟨𝒱, Z, ℱ, P(Z)⟩ is the mathematical form of the causal graph G̃ that includes a collection of structural equations ℱ on endogenous variables 𝒱 and a distribution P(Z) over exogenous variables Z.
Each structural equation f_i∈ℱ for a variable i ∈𝒱 is a mapping from i's parents and connected exogenous variables to i:
i ← f_i(pa(i), Z_i), Z_i ∼ P(Z)
where pa(i) ⊆𝒱\ i is i's parents from the causal graph G̃.
Z_i ∈ Z is a set of exogenous variables connected with i.
An intervention <cit.> is operated with the do-operator do(i = x), which forces a variable i ∈𝒱 to take the value x.
do(i) makes the intervened node i independent of its causal parents, i.e., i ⊥ pa(i).
Intervention lies at the core of causal modeling as suggested by Rubin et al. <cit.>.
Given a SCM ℳ, an intervention is to force a variable i ∈𝒱 to take a specific value x in order to observe the effect on another variable.
Through intervention, we can determine the causal relationship between endogenous variables.
For instance, in the recommendation, we want to determine the effect of a particular recommendation (e.g., a video) on user behavior (e.g., click).
We can intervene by assigning this recommendation to users, and observe users' behaviors before and after interventions.
If users who received the recommendation are more likely to click, we can conclude that the recommendation has a positive causal effect on user behaviors.
As such, interventions allow us to determine the true causal effect by intervening to recommend items, instead of passively observing user-item correlations in training data.
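To illustrate the gap between passive conditioning and the do-operator, the toy simulation below uses made-up structural equations (not the model developed in this paper): an exposure variable Z confounds the recommendation R and the click Y, so the observational conditional P(Y=1 | R=1) overstates the interventional quantity P(Y=1 | do(R=1)) obtained by forcing R.

import numpy as np

rng = np.random.default_rng(1)

def f_R(Z, do_R=None):
    # structural equation for R; do(R = x) severs the edge Z -> R
    return do_R if do_R is not None else (Z > 0.5).astype(float)

def f_Y(R, Z, noise):
    # the click depends on the recommendation and on the confounder Z
    return ((0.3 * R + 0.5 * Z + noise) > 0.5).astype(float)

def simulate(n=200_000, do_R=None):
    Z = rng.uniform(size=n)
    forced = np.full(n, do_R) if do_R is not None else None
    R = f_R(Z, do_R=forced)
    Y = f_Y(R, Z, rng.normal(scale=0.05, size=n))
    return R, Y

R, Y = simulate()
print("observational  P(Y=1 | R=1)     =", Y[R == 1].mean())
print("interventional P(Y=1 | do(R=1)) =", simulate(do_R=1.0)[1].mean())
print("interventional P(Y=1 | do(R=0)) =", simulate(do_R=0.0)[1].mean())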
§ PROBLEM FORMULATION
We put forward the causal graph for causality-aware graph-based recommendations.
We then formulate the generation process of recommendations based on structural equations under the causal graph.
§.§ A Causal View of Recommendation
Early CF resorts to user-item associative matching by assuming the causal graph in Figure <ref> (a).
They typically assume P(Y=1 | u, v) ∝𝐮^⊤𝐯, where 𝐮 and 𝐯 are user and item latent factors.
Graph CF (GCF), as shown in Figure <ref> (b), considers auxiliary data Z_u and Z_v (could be hidden) and the inner connections of users and items from their neighbors to model more complex user behavior patterns.
They first derive dense embedding vectors (i.e., E) for users and items, then use these embeddings to infer user preferences.
They assume P(Y=1 | u, v) ∝ E = NN(agg(u, z_u, msg(𝒩_u)), agg(v, z_v, msg(𝒩_v))), where 𝒩_u and 𝒩_v are neighbor sets for users and items, respectively; NN is the representation learning network (e.g., GCN), and agg and msg are the aggregation and message-passing operations, respectively.
Both Figure <ref> (a) and (b) assume the co-occurrence of users and items is independent in the observational data, i.e., there is no edge U → V or V → U.
However, this assumption is unrealistic in the real world because user behaviors are influenced by the recommended items for various reasons.
For instance, users may be more likely to click the items if they are recommended <cit.>.
Besides, the exposure of items is determined by user preferences estimated from the recommendation model <cit.>.
Thus, it is necessary to model the influence of users on items and vice versa, as shown in Figure <ref> (c), to achieve better user preference modeling.
We thus use the causal graph defined in Figure <ref> (c) for user preference modeling.
The causal graph induces a structural causal model, with structural equations defined as:
ℱ(𝒱, Z):= {[ U ← f_U(U, V, Z_u); V ← f_V(U, V, Z_v); E ← f_E(U, V); Y ← f_Y(E) ].
where {U, V, E, Y}∈𝒱 are endogenous variables in the recommendation.
f_U, f_V, f_E and f_Y are the structural equations that specify the causal modeling of U (i.e., user),
V (i.e., item), E (i.e., representation) and Y (i.e., recommendation), respectively.
For example, user node u whose causal mechanism is modeled by f_U is characterized by the structural equation f_u.
Such a structural equation models the direct causal relation from the set of causes pa(u) to user node u accounting for the effects of Z_u as indicated by Eq. (<ref>).
The ability to perform interventions lays a foundation for Eq. (<ref>), as interventions enable estimating the causal effects between endogenous variables.
For example, by using the do-operation do(·) on users, we can estimate the causal effect of user influence on items (i.e., U → V) by modeling P(y | v, do(u)).
Also, we can estimate the influence of items on users (i.e., V → U) using the u-specific causal effect P(y | u, do(v)), instead of fitting users' historical interactions by modeling P(y | u, v) without accounting for user-item causal relations.
As such, we could model user-item causal relations to allow causality-aware graph-based recommendations.
§.§ Causality-aware Recommendation Generative Process
We now present the generative process of causality-aware graph-based recommendations.
The generative process is guided by the structural equations under the causal graph (cf. Eq. (<ref>)) to capture causal relations in graph-based recommendations.
In particular,
we first assume the unobserved exogenous variables of users and items in Eq. (<ref>) are drawn from a standard Gaussian prior, denoted as d-dimension latent vectors 𝐙_u and 𝐙_v for exogenous variables Z_u and Z_v, respectively.
For each user u, we calculate the user representation 𝐮 based on latent vectors of user exogenous variables 𝐙_u and neighbor information f_φ(U | U, V) propagated by its connected users and items.
Note that we enable the neighbor information f_φ(U | U, V) to capture the causal relations between neighboring nodes and the target node, and thus propose a causality-aware message passing operation that defines f_φ as a feedforward neural network with parameter φ.
f_ϕ is a sum-aggregator for message aggregation to give the distribution of 𝐮.
Analogously, item representation 𝐯 is given by aggregating 𝐙_v and neighbor information f_φ(V | U,V) through f_ϕ.
The latent representation 𝐮 and 𝐯 are transformed via a non-linear function f_θ_3∈ℝ^I.
The output of f_θ_3 is normalized via a softmax function to produce a preference probability vector 𝐞∈𝕊^I-1,
where 𝕊^I-1 is an (I-1)-simplex with (I-1) as the size of 𝐞 and I is the total item number.
Given the total number of interactions N=∑_i y_ui from user u, the observed user interaction vector 𝐲 follows multinomial priors based on the distribution of 𝐞.
Formally,
{[ 𝐙_u ∼𝒩(0, 𝐈_K), 𝐙_v ∼𝒩(0, 𝐈_K),; 𝐮∝ f_U = {f_ϕ(𝐙_u, f_φ(U | U, V))}_θ_1,; 𝐯∝ f_V ={f_ϕ(𝐙_v, f_φ(V | U, V))}_θ_2,; 𝐞∝ f_E =softmax(f_θ_3(𝐮, 𝐯)),; 𝐲∼ f_Y = Mult(N, 𝐞); ].
The generative process in Eq. (<ref>) ensures the causality-aware graph learning for recommendations by modeling causal relations induced by structural equations in Eq. (<ref>).
Later, we will use this generative process to guide our model framework design for robust recommendations.
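A minimal NumPy sketch of this generative process is given below. The sum-style aggregator, the single weight matrices standing in for f_ϕ, f_φ and f_θ_3, and the toy dimensions are simplifying assumptions for illustration only; they are not the parameterization of the model introduced in the next section.

import numpy as np

rng = np.random.default_rng(2)
K, I = 16, 50            # latent dimension and number of items (illustrative)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def generate_user_interactions(d_u, D_items, W_phi, W_theta3, N_inter=30):
    # forward pass: exogenous noise -> aggregated representations ->
    # preference simplex e -> multinomial interaction vector y
    Z_u = rng.normal(size=K)                  # exogenous user variable
    Z_v = rng.normal(size=(I, K))             # exogenous item variables
    u = np.maximum(0.0, (d_u + Z_u + D_items.mean(axis=0)) @ W_phi)
    V = np.maximum(0.0, (D_items + Z_v) @ W_phi)
    e = softmax(V @ (W_theta3 @ u))           # preference over all items
    y = rng.multinomial(N_inter, e)           # observed interaction counts
    return e, y

d_u = rng.normal(size=K)
D_items = rng.normal(size=(I, K))
W_phi = rng.normal(size=(K, K)) / np.sqrt(K)
W_theta3 = rng.normal(size=(K, K)) / np.sqrt(K)
e, y = generate_user_interactions(d_u, D_items, W_phi, W_theta3)
print(e.shape, y.sum())                       # (50,) 30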
§ METHODOLOGY
We now introduce our Causal Neural Graph Collaborative Filtering (CNGCF) framework that delivers causality-aware graph-based recommendations.
We follow Eq. (<ref>) to design each of the components in CNGCF, i.e., implementing f_U, f_V, f_E and f_Y, respectively.
We use variational autoencoders (VAEs) <cit.> to approximate the intractable posterior distributions of parameters from the four structural equations.
In particular, as shown in Figure <ref>, CNGCF devises two major components based on the VAE structure:
1) The causal graph encoder includes a semi-implicit generative model, a user encoder and an item encoder.
The semi-implicit generative model implements a causality-aware message passing to model causal relation dependencies between nodes.
The user encoder and item encoder implement f_U and f_V to output user representation 𝐮 and item representation 𝐯, respectively.
2) The collaborative filtering decoder
implements f_E to construct the user preference vector 𝐞 through collaborative filtering, from which user's interactions f_Y is sampled.
§.§ Semi-implicit Inference for Causal Graph Encoder
Our causal graph encoder aims to learn user and item representations 𝐮 and 𝐯 by using a user encoder q_θ_1(𝐮|𝐙_u, 𝐝_u, 𝐀̃_u) and an item encoder q_θ_2(𝐯|𝐙_v, 𝐝_v, 𝐀̃_v).
However, modeling q_θ_1 and q_θ_2 is not easy, since there are inherent causal relation dependencies between a user/item node and its adjacent neighbors.
Besides, as indicated by Eq. (<ref>), those causal relations should be modeled with a neural network f_φ as dependency terms of structural equations.
Thus, the true posteriors of q_θ_1 and q_θ_2 do not follow Gaussian distributions due to the existence of complex causal relation dependencies parameterized by an additional neural network.
As a result, traditional variational inference <cit.> that directly parameterizes user and item representations to simple, tractable Gaussian random vectors is not applicable in our setting.
To approximate complex posteriors, we use semi-implicit variational inference (SIVI) <cit.> that models complex distributions through the use of implicit distributions.
§.§.§ Semi-implicit Generative Model
SIVI approximates additional implicit posteriors with a generative model and integrates them with variational encoders to enable flexible mixture modeling of complex posteriors.
Inspired by SIVI, we devise a semi-implicit generative model on top of the user and item encoder to model implicit posteriors.
Notably, our semi-implicit generative model includes a causality-aware message passing to handle neighboring node dependencies of user and item nodes in the causal graph.
As a result, our causal graph encoder not only captures causal relation dependencies, but also naturally allows the mixture modeling of complex posterior distributions.
Formally, the semi-implicit generative model f_{φ, ϕ} equips causality-aware message passing with a neural network f_φ and an aggregation operator f_ϕ to learn hidden factors 𝐡_u and 𝐡_v for a user u and an item v.
Then, the user encoder q_θ_1 takes 𝐡_u as the input to output μ_u, σ_u, from which the user representation 𝐮 is sampled.
Analogously, the item encoder uses 𝐡_v for q_θ_2 to calculate the item representation 𝐯:
𝐡_u ∼ f_{φ, ϕ}
,
𝐮∼ q_θ_1(𝐮|𝐡_u)=𝒩(𝐮|μ_u, diag(σ_u^2))
𝐡_v ∼ f_{φ, ϕ}
,
𝐯∼ q_θ_2(𝐯|𝐡_v) =𝒩(𝐯|μ_v, diag(σ_v^2))
where {φ, ϕ} parameterize the semi-implicit generative model. θ_1 and θ_2 are the parameters of the user and the item encoder.
Next, we detail the semi-implicit generative model that learns 𝐡_u and 𝐡_v by using two key components:
* Causality-aware message passing:
Causality-aware message passing models each of the dependency terms f_φ(i,j) for a node i and its neighbor j within a structural equation, such that the learned messages themselves become a descriptor of the causal relation for (i ← j).
In particular, we define f_φ(i,j) as a learnable multi-layer perception (MLP) to capture the causal relations.
Formally, for a user u, given its features 𝐝_u and its causal adjacency vector 𝐀̃_u, the messages from u's neighbors j within 𝐀̃_u is given by:
𝐦_u^(l-1) = f_φ(u,j) = ∑_j ∈𝒩_u𝐡_j^(l-1)·MLP^(l)(𝐡_u^(l-1) || 𝐡_j^(l-1)), with MLP^(l)(𝐡_u^(l-1) || 𝐡_j^(l-1)) = ReLU(𝐖_φ^(l)(𝐡_u^(l-1) || 𝐡_j^(l-1))), for l ∈{1, ⋯, L}
where 𝐦_u^(l-1) is the neighbor message calculated for user u at the l-1-th graph learning layer [The neighbor message at the 0-th layer, i.e., 𝐦_u^(0), is initialized from a normal distribution.].
𝒩_u is a set of neighbors adjacent to user u within u's causal adjacency vector 𝐀̃_u.
𝐡_j^(l-1) and 𝐡_u^(l-1) are hidden factors for a neighbor j and the user u at the l-1-th layer [𝐡_j^(0) and 𝐡_u^(0) are initialized as node features 𝐝_j and 𝐝_u.].
𝐖_φ is the learnable weight matrix for f_φ, and || denotes column-wise concatenation.
Analogously, we can calculate the neighbor message 𝐦_v for an item v follows Eq. (<ref>).
* Aggregation:
At each graph learning layer l, we perform aggregation operation on the messages 𝐦_u and user exogenous variables 𝐙_u to obtain the hidden factor 𝐡_u^(l) for u:
𝐡_u^(l)=σ(𝐖_ϕ^(l)(𝐡_u^(l-1) || 𝐦_u^(l-1), 𝐙_u ))
where 𝐡_u^(l) is the learned hidden factor for u at the l-th graph learning layer.
σ(·) is the aggregation function chosen as sum, following <cit.>; || is the concatenation operation. 𝐖_ϕ is the weight for aggregation.
At the 0-th layer, u's hidden factors 𝐡_u^(0) are initialized as the user features 𝐝_u.
Similarly, we can calculate the hidden factors 𝐡_v^(l) for an item v at the l-th graph learning layer follows Eq. (<ref>).
Having obtained the hidden factors 𝐡_u^(l) for user u and 𝐡_v^(l) for item v at each graph learning layer l ∈{1,⋯, L}, we adopt the layer-aggregation mechanism <cit.> to combine the vectors from all layers into a single vector by summation:
𝐡_u=𝐡_u^(1) + ⋯ + 𝐡_u^(L), 𝐡_v=𝐡_v^(1) + ⋯ + 𝐡_v^(L)
By performing layer aggregation, we capture higher-order connectivities of node pairs across different graph learning layers.
Finally, our semi-implicit generative model outputs 𝐡_u and 𝐡_v from Eq. (<ref>) as the semi-implicit posteriors of users and items for the latter variational encoders.
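The sketch below shows, in plain NumPy, how one layer of this causality-aware message passing and aggregation could look. The MLP is reduced to a single ReLU-activated weight matrix, the element-wise use of the learned edge score and the toy causal adjacency matrix are simplifying assumptions, and the layer outputs are combined by summation as in the layer-aggregation step above.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def causal_message_passing(H, A_causal, W_msg, W_agg, Z):
    # one layer: an MLP scores each causal edge j -> i, messages are summed
    # over causal parents, then aggregated with the exogenous variables Z
    N, d = H.shape
    H_next = np.zeros_like(H)
    for i in range(N):
        parents = np.nonzero(A_causal[i])[0]
        m_i = np.zeros(d)
        for j in parents:
            w_ij = relu(np.concatenate([H[i], H[j]]) @ W_msg)   # edge message
            m_i += H[j] * w_ij
        H_next[i] = relu(np.concatenate([H[i] + m_i, Z[i]]) @ W_agg)
    return H_next

rng = np.random.default_rng(3)
N, d = 6, 8
H = rng.normal(size=(N, d))
Z = rng.normal(size=(N, d))
A_causal = (rng.uniform(size=(N, N)) < 0.3).astype(int)
W_msg = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)
W_agg = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)

H1 = causal_message_passing(H, A_causal, W_msg, W_agg, Z)
H2 = causal_message_passing(H1, A_causal, W_msg, W_agg, Z)
h_final = H1 + H2        # layer aggregation by summation
print(h_final.shape)     # (6, 8)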
§.§.§ User and Item Encoder
Given semi-implicit posterior 𝐡_u for a user u, the user encoder outputs the mean and variance in 𝒩(μ_u, diag(σ_u^2)), from which user representation 𝐮 is sampled:
q_θ_1(𝐮|𝐡_u) =𝒩(𝐮|μ_u, diag(σ_u^2))
where μ_u and diag(σ_u^2) are the mean and variance for user u, which are obtained by sending u's hidden factors 𝐡_u to a one-layer neural network with activation function ReLU(x)=max (0, x):
μ_u=ReLU(𝐖^μ_u_θ_1𝐡_u+b), σ_u^2=exp(ReLU(𝐖^σ_u_θ_1𝐡_u+b))
where 𝐖_θ_1 = {𝐖^μ_u_θ_1, 𝐖^σ_u_θ_1} is a hidden-to-output weight matrix for the user encoder q_θ_1.
Analogously, the item encoder follows the same paradigm as the user encoder to generate the mean and variance for item v based on v's hidden factors 𝐡_v:
q_θ_2(𝐯|𝐡_v) =𝒩(𝐯|μ_v, diag(σ_v^2)),
μ_v=ReLU(𝐖^μ_v_θ_2𝐡_v+b), σ_v^2=exp(ReLU(𝐖^σ_v_θ_2𝐡_v+b))
where 𝐖_θ_2 = {𝐖^μ_v_θ_2, 𝐖^σ_v_θ_2} is the weight matrix for the item encoder q_θ_2.
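A compact sketch of this encoding step is given below; the dimensions are hypothetical, and a representation is drawn from the resulting Gaussian with the usual reparameterization trick, which the equations above leave implicit.

import numpy as np

rng = np.random.default_rng(4)

def gaussian_encoder(h, W_mu, W_sigma, b_mu, b_sigma):
    # map a semi-implicit posterior h to a diagonal Gaussian and sample from it
    mu = np.maximum(0.0, W_mu @ h + b_mu)
    sigma2 = np.exp(np.maximum(0.0, W_sigma @ h + b_sigma))
    eps = rng.normal(size=mu.shape)
    sample = mu + np.sqrt(sigma2) * eps        # u ~ N(mu, diag(sigma^2))
    return sample, mu, sigma2

d_hidden, d_out = 8, 4
h_u = rng.normal(size=d_hidden)
W_mu = rng.normal(size=(d_out, d_hidden)) / np.sqrt(d_hidden)
W_sigma = rng.normal(size=(d_out, d_hidden)) / np.sqrt(d_hidden)
u, mu_u, sigma2_u = gaussian_encoder(h_u, W_mu, W_sigma,
                                     np.zeros(d_out), np.zeros(d_out))
print(u.shape, mu_u, sigma2_u)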
§.§ Collaborative Filtering Decoder
Collaborative filtering is largely dominated by latent factor models, as evidenced by Koren et al. <cit.>. These models involve mapping users and items into latent factors in order to estimate the preference scores of users towards items.
We extend latent factor-based collaborative filtering into our decoder for modeling the user preference 𝐞, which is a probability vector over the entire item set for recommendations.
The predicted user interaction vector 𝐲 is assumed to be sampled from a multinomial distribution with probability 𝐞.
Formally, we define a generative function f_θ_3(𝐮, 𝐯) recovering classical latent factor-based CF to approximate user preference vector 𝐞:
𝐞 = f_θ_3(𝐮, 𝐯)=𝐮^⊤𝐯
where 𝐮 and 𝐯 are latent factors drawn from our user and item encoder in Eq. (<ref>) and Eq. (<ref>), respectively.
Then, the decoder p_θ_3(𝐞|𝐮, 𝐯) produces interaction probability 𝐲 by approximating a logistic log-likelihood:
log p_θ_3(𝐲|𝐞) =
∑_v y_uvlogσ(𝐞)+(1-y_uv) log(1-σ(𝐞))
where y_uv is the historical interaction between u and v, e.g., click. σ(𝐞)=1 /(1+exp (-𝐞)) is the logistic function.
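The decoder therefore reduces to inner-product scoring followed by a logistic likelihood over the observed interactions; the NumPy sketch below uses toy factor matrices and a small numerical guard inside the logarithms, both of which are assumptions made only for this illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decoder_log_likelihood(U, V, Y):
    # score every user-item pair by u^T v and evaluate the logistic
    # log-likelihood of the observed interaction matrix Y (1 = interacted)
    E = U @ V.T
    P = sigmoid(E)
    eps = 1e-9
    return np.sum(Y * np.log(P + eps) + (1 - Y) * np.log(1 - P + eps))

rng = np.random.default_rng(5)
U = rng.normal(size=(10, 4))                  # 10 users, 4-dim latent factors
V = rng.normal(size=(20, 4))                  # 20 items
Y = (rng.uniform(size=(10, 20)) < 0.1).astype(float)
print(decoder_log_likelihood(U, V, Y))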
§.§ Optimization with Counterfactual Instances
We wish our CNGCF to be robust to unseen (unknown) user preference shift to further enhance our recommendation robustness.
Catching user preferences is at the core of any recommendation model <cit.>; however, user preferences are dynamic and may change over time <cit.>.
For example, a user may once favor items from the brand Nike but later shift their taste toward Adidas.
Such a user preference shift can be captured by actively manipulating user preference through interventions on the user preference vector 𝐞, i.e., do(𝐞= 𝐞^').
The data after interventions are termed counterfactual instances <cit.>; if augmented to the original training instances, they increase the model's robustness to unseen interventions.
Following this intuition, we optimize our CNGCF by considering two different data scenarios, i.e., the clean data scenario in which our CNGCF accesses the data without interventions, and the counterfactual data scenario in which the data is generated by known interventions on user preference vectors.
Formally, for the clean data scenario, we assume that CNGCF observes only the clean data 𝐃 during training.
In this case, we retain the original value 𝐨 of user preference 𝐞 by do(𝐞=𝐨).
Then, CNGCF is trained by maximizing the likelihood function log p_θ_3(𝐲|𝐞, do(𝐞=𝐨)).
Since this marginal distribution is intractable <cit.>, we instead maximize the intervention evidence lower-bound (ELBO) with do(𝐞=𝐨), i.e., max_θ_1, θ_2,θ_3ELBO(𝐃, do(𝐞=𝐨)).
In particular,
ELBO(𝐃, do(𝐞=𝐨))
=
𝔼_θ[logp_θ_3(𝐲|𝐞, do(𝐞=𝐨) ) p(𝐮)p(𝐯)/q_θ_1(𝐮|Ξ, do(𝐞 =𝐨) )q_θ_2(𝐯|Ξ, do(𝐞=𝐨) )]
= 𝔼_θ[log p_θ_3(𝐲|𝐞, do(𝐞=𝐨) )]
- KL(q_θ_1(𝐮|Ξ) p(𝐮), q_θ_2(𝐯|Ξ) p(𝐯))
where Ξ represents required conditions for the conditional probability distributions of q_θ_1, q_θ_2 and p_θ_3, i.e., Ξ ={𝐙_u, 𝐝_u, 𝐀̃_u} for q_θ_1,
Ξ ={𝐙_v, 𝐝_v, 𝐀̃_v} for q_θ_2 and Ξ ={𝐮, 𝐯} for p_θ_3.
θ={θ_1, θ_2, θ_3} is a set of model parameters to be trained and KL( Q P ) is KL-divergence between distributions Q and P.
For the counterfactual data scenario, we assume CNGCF accesses counterfactual data 𝐃^' generated by known interventions do(𝐞=𝐞^') on user preference vectors.
The counterfactual vectors 𝐞^' hold the same dimension with 𝐞 and are drawn from a random distribution.
Then, the ELBO of CNGCF with the counterfactual data is,
ELBO (𝐃^', do(𝐞=𝐞^'))
=𝔼_θ[log p_θ_3(𝐲|𝐞, do(𝐞=𝐞^') )]
-KL(q_θ_1(𝐮|Ξ) p(𝐮), q_θ_2(𝐯|Ξ) p(𝐯))
Inspired by data augmentation and adversarial training <cit.>, we augment the clean data with counterfactual instances to enhance the robustness of our CNGCF meanwhile capturing user preference shifts.
In particular, the total loss function after augmentation is as below
ℒ_aug (Θ) = λ ELBO(𝐃, do(𝐞=𝐨)) + (1-λ) ELBO(𝐃^', do(𝐞=𝐞^'))
where ℒ_aug (Θ) is the loss function for training our CNGCF and Θ are model parameters. λ is the trade-off parameter between the clean and the counterfactual data scenario.
During the training stage, the loss function is calculated by averaging the ELBO over all users.
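Schematically, the objective combines the two ELBO terms as in Eq. (<ref>). The sketch below assumes diagonal-Gaussian encoders with standard-normal priors (so the KL terms have a closed form) and uses placeholder numbers for the two reconstruction terms; it only illustrates how the clean and counterfactual scenarios are traded off by λ.

import numpy as np

def kl_gaussian(mu, sigma2):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - np.log(sigma2))

def elbo(log_lik, mu_u, s2_u, mu_v, s2_v):
    # reconstruction term minus the KL terms of the user and item encoders
    return log_lik - kl_gaussian(mu_u, s2_u) - kl_gaussian(mu_v, s2_v)

def augmented_loss(elbo_clean, elbo_cf, lam=0.7):
    # negated, since the ELBOs are maximized while the loss is minimized
    return -(lam * elbo_clean + (1.0 - lam) * elbo_cf)

rng = np.random.default_rng(6)
mu, s2 = rng.normal(size=4), np.exp(rng.normal(size=4))
clean = elbo(-120.0, mu, s2, mu, s2)            # placeholder reconstruction terms
counterfactual = elbo(-150.0, mu, s2, mu, s2)
print(augmented_loss(clean, counterfactual, lam=0.7))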
§ EXPERIMENTS
We thoroughly evaluate the proposed CNGCF for the recommendation task to answer the following research questions:
* RQ1:
How does CNGCF perform as compared with state-of-the-art recommendation methods?
* RQ2: How do different components impact CNGCF's performance?
* RQ3: How do parameters in the causal graph encoder affect CNGCF?
§.§ Experimental Settings
We conduct our experiments on three real-world and one synthetic datasets to evaluate the effectiveness of CNGCF.
§.§.§ Datasets
We use three benchmark recommendation datasets from Amazon Product Reviews [https://nijianmo.github.io/amazon/index.html] <cit.> and Epinions [http://www.cse.msu.edu/ tangjili/trust.html] <cit.>:
* Amazon-Beauty and Amazon-Appliances: two sub-datasets selected from Amazon Product Reviews, which record large crawls of user reviews and product metadata (e.g., brand).
Following <cit.>, we use brand and price to build item features since other features (e.g., category) are too sparse and contain noisy information.
We build item neighbors based on co-purchased and co-viewed information from the product metadata.
The co-purchased and co-viewed information records item-to-item relationships, i.e., a user who bought/viewed item A also bought/viewed item B, reflecting the relations between item A and B.
We build user neighbors based on similar interactions from the review data, i.e., users who reviewed the same item are neighbors for each other.
* Epinions:
a social recommendation dataset recording social relations between users.
We convert user/item features from the dataset into one-hot embeddings.
We use social relations to build user neighbors, i.e., a user's social friends are the neighbors of the user.
Besides, items bought by the same user are neighbors to each other.
We follow <cit.> to build the synthetic dataset, which assumes that synthetic user-item interactions follow the causal relations in a causal graph.
In particular, given the causal graph in Figure <ref>(c),
we construct the Synthetic dataset in four steps (a minimal code sketch follows the list):
* Feature generation:
We simulate |𝒰|=1,000 users and |ℐ|=1,000 items, where each user has one discrete feature (gender) and one continuous feature (income), while each item has three discrete features, i.e., type, brand and location.
For discrete features, their values in {0,1} are sampled from Bernoulli distributions.
We sample continuous features from random sampling, in which random feature values are chosen from the minimum (i.e., 0) and the maximum (i.e., 1000) feature values.
For both users and items, we assume four exogenous variables (i.e., Z_u and Z_v) drawn from Gaussian distribution 𝒩(0,1).
* Causal neighbor sampling:
As the causal graph gives causal relations U → U and V → V, we synthesize the causal relations by building user/item causal neighbors, i.e., the connected users/items, for the target user/item.
In particular, we set the causal neighbor number N_c=10.
We sample user causal neighbors (U → U) through random sampling, in which a user's causal neighbors are randomly chosen from the user set 𝒰.
For item causal neighbor sampling (V → V), we first convert items with their features generated in the first step into dense vectors through item2vec <cit.>, then calculate the Euclidean distances between two items.
Those items that have the N_c smallest Euclidean distances with the target item are chosen as causal neighbors for the target item.
* User preference estimation:
For each user u and item v, the user preference 𝐮∈ℝ^d towards item property 𝐯∈ℝ^d is generated from a multivariate Gaussian distribution 𝒩(0, 𝐈), where d and 𝐈 represent the vector size and unit matrix, respectively.
Then, the preference score y_uv between user u and item v is calculated by the inner product of 𝐮 and 𝐯.
* User interaction sampling:
Once we obtain a user u's preference scores for all items (i.e., ℐ), we normalize these preference scores by exp(r_i)/∑_i^'∈ℐexp(r_i^').
We select items with the top-k scores as the interactions for the user u ∈𝒰, where k is a constant chosen randomly from the range [20, 100].
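The sketch below mirrors these four steps with NumPy. The reduced population sizes, the random stand-in for the item2vec embeddings, and the uniform sampling of the continuous feature are simplifications chosen for a quick run; they are not the exact generator behind the reported Synthetic dataset.

import numpy as np

rng = np.random.default_rng(7)
n_users, n_items, d, N_c = 200, 200, 8, 10       # smaller than the paper's 1,000

# 1) feature generation: discrete features ~ Bernoulli, continuous ~ uniform
user_feat = np.c_[rng.binomial(1, 0.5, n_users), rng.uniform(0, 1000, n_users)]
item_feat = rng.binomial(1, 0.5, size=(n_items, 3)).astype(float)

# 2) causal neighbour sampling: random user neighbours, nearest items as neighbours
user_neigh = np.array([rng.choice(n_users, N_c, replace=False) for _ in range(n_users)])
item_vec = rng.normal(size=(n_items, d))         # stand-in for item2vec embeddings
dists = np.linalg.norm(item_vec[:, None, :] - item_vec[None, :, :], axis=-1)
item_neigh = np.argsort(dists, axis=1)[:, 1:N_c + 1]

# 3) user preference estimation: Gaussian latent factors, inner-product scores
U = rng.normal(size=(n_users, d))
V = rng.normal(size=(n_items, d))
scores = U @ V.T

# 4) interaction sampling: softmax-normalise the scores and keep the top-k items
def top_k_items(score_row, k):
    p = np.exp(score_row - score_row.max())
    p /= p.sum()
    return np.argsort(p)[::-1][:k]

interactions = {u: top_k_items(scores[u], int(rng.integers(20, 101)))
                for u in range(n_users)}
print(user_feat.shape, item_feat.shape, user_neigh.shape, item_neigh.shape,
      len(interactions[0]))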
For the three real-world datasets, we regard user interactions with overall ratings above 3.0 as positive interactions.
For the synthetic dataset, we regard all user-item interactions as positive as they are top items selected based on users' preferences.
We adopt a 10-core setting, i.e., retaining users and items with at least ten interactions.
The statistics of the four datasets are shown in Table <ref>.
For model training, we split each dataset into training, validation, and test sets with the ratio of 70%, 10%, and 20%.
§.§.§ Baselines
We compare CNGCF with eight competitive recommendation methods.
* BPR <cit.>: a well-known matrix factorization-based model with a pairwise ranking loss to enable recommendation learning from implicit feedback.
* NCF <cit.>: extends the CF to neural network architecture. It maps users and items into dense vectors, then feeds user and item vectors into an MLP to predict user preference scores.
* MultiVAE <cit.>: extends the CF to VAE architecture for implicit feedback modeling.
It converts the CF learning process into a generative model and uses variational inference to model the distribution of the generative model.
* NGCF <cit.>: a graph CF that incorporates two GCNs to learn user and item representations. The learned representations are passed to a matrix factorization to capture the collaborative signal for recommendations.
* VGAE <cit.>: a representative graph learning method that extends VAE to handle graph-structured data. We use VGAE to obtain user and item representations and inner product those representations to predict user preference scores.
* GC-MC <cit.>: a graph-based auto-encoder framework for matrix completion. The encoder is a GCN that produces user and item representations. The learned representations reconstruct the rating links through a bilinear decoder.
* LightGCN <cit.>: a SOTA graph-based recommendation model that simplifies the GCN component.
It includes the essential part in GCNs, i.e., neighbor aggregation, to learn user and item representations for collaborative filtering.
* CACF <cit.>: a method that learns attention scores from individual treatment effect estimation.
The attention scores are used as user and item weights to enhance the CF model.
§.§.§ Evaluation Metrics
We use three Top-K recommendation evaluation metrics, i.e., Precision@K, Recall@K and Normalized Discounted Cumulative Gain (NDCG)@K.
The three evaluation metrics measure whether the recommended Top-K items are consistent with users' preferences in their historical interactions.
We report the average results with respect to the metrics over all users.
The Wilcoxon signed-rank test <cit.> is used to evaluate whether the improvements against baselines are significant.
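For reference, a per-user computation of the three metrics could be sketched as follows; the ranking and the held-out positive set are hypothetical, and the reported numbers in the tables are averages of such per-user values.

import numpy as np

def precision_recall_ndcg_at_k(ranked_items, relevant_items, k):
    # Top-K metrics for one user: ranked_items is the model ranking,
    # relevant_items the set of held-out positives
    top_k = ranked_items[:k]
    hits = np.array([1.0 if it in relevant_items else 0.0 for it in top_k])
    precision = hits.sum() / k
    recall = hits.sum() / max(len(relevant_items), 1)
    dcg = np.sum(hits / np.log2(np.arange(2, k + 2)))
    ideal_hits = min(len(relevant_items), k)
    idcg = np.sum(1.0 / np.log2(np.arange(2, ideal_hits + 2)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg

ranked = [3, 7, 1, 9, 4, 0, 2, 8, 6, 5]       # hypothetical ranking for one user
relevant = {7, 4, 5}                          # hypothetical held-out positives
print(precision_recall_ndcg_at_k(ranked, relevant, k=5))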
§.§.§ Parameter Settings
We implement our CNGCF using Pytorch.
The latent embedding sizes of neural networks for all neural-based methods are fixed as d=64.
The in-dimension and out-dimension of the graph convolutional layer in CNGCF, NGCF, VGAE, GC-MC and LightGCN are set to 32 and 64, respectively, for graph learning.
We apply a dropout layer on top of the graph convolutional layer to prevent model overfitting for all GCN-based methods.
The Adam optimizer is applied to all methods for model optimization, where the batch size is fixed as 1024.
The hyper-parameters of all methods are chosen by the grid search, including the learning rate l_r in {0.0001,0.0005,0.001,0.005}, L_2 norm regularization in {10^-5, 10^-4, ⋯, 10^1, 10^2}, and the dropout ratio p in {0.0,0.1, ⋯, 0.8}.
We set the maximum epoch for all methods as 400 and use the early stopping strategy, i.e., terminate model training when the validation Precision@10 value does not increase for 20 epochs.
§.§ Recommendation Performance (RQ1)
We show the recommendation performance of our CNGCF and all baselines on the four datasets in Table <ref>.
By analyzing Table <ref>, we have the following findings.
* CNGCF consistently outperforms the strongest baselines on both synthetic and real-world datasets, achieving the best recommendation performance across all three evaluation metrics.
In particular, CNGCF outperforms the strongest baselines by 23.4%, 7.0%, 34.3% and 5.7% in terms of Precision@10 on Synthetic, Amazon-Beauty, Amazon-Appliances and Epinions, respectively.
Additionally, CNGCF improves Recall@10/NDCG@10 by 2.5%/3.8%, 8.4%/22.1%, 13.3%/35.9% and 10.6%/2.8% on the four datasets, respectively.
The superiority of CNGCF can be attributed to two factors: the power of neural graph learning and the modeling of causality.
Firstly, graph learning explicitly models the interactions between users and items as a graph, and uses graph convolutional networks to capture the non-linear relations from neighboring nodes.
This allows graph learning to capture more complex user behavior patterns.
Secondly, modeling causal relations allows us to identify the causal effects of different items on users, thus capturing true user preferences on items.
By injecting causal modeling into graph representation learning, our CNGCF captures more precise user preferences to produce robust recommendations against baselines.
*
CNGCF achieves the most notable improvements (e.g., 35.9% for NDCG@10 and 43.8% for NDCG@20) on the Amazon-Appliances dataset, which is a large-scale dataset with a considerable amount of user behavior data that may be noisy and challenging to model.
CNGCF's ability to inject causality into graph learning enables the model to surpass merely capturing spurious correlations among noisy data, leading to more accurate and reliable modeling of true user preferences.
* NGCF that uses graph representation learning outperforms NCF without graph learning.
This is because NGCF models user-item interactions as a graph, and uses graph convolutional networks to capture more complex user-user collaborative behavior to enhance recommendations.
In contrast, NCF uses a multi-layer perception to learn user and item similarities, which captures only linear user-item correlations from the interaction matrix.
Moreover, GC-MC and LightGCN outperform other graph learning-based baselines (i.e., NGCF, VGAE) in most cases.
This is because GC-MC and LightGCN aggregate multiple embedding propagation layers to capture higher-order connectivity within the interaction graph.
Similarly, our CNGCF incorporates layer aggregation within our causal graph encoder, enabling us to capture higher-order connectivity and produce better graph representations for improved recommendation performance.
* CNGCF outperforms all graph learning-based baselines, including NGCF, VGAE, GC-MC and LightGCN.
This is because CNGCF models causal relations within the graph learning process.
Guided by the causality-aware recommendation generative process, CNGCF is able to inject causal relations under the structural causal model into the learning process of the graph convolutional network.
This allows CNGCF to uncover the causal effect of items on users and capture user behavior patterns more accurately.
§.§ Study of CNGCF (RQ2)
We start by exploring how replacing our causal graph encoder with other graph representation learning methods, i.e., naive GCN <cit.>, Graphsage <cit.> and Pinsage <cit.>, impacts CNGCF's performance.
We then analyze the influences of core components, including causality-aware message passing and counterfactual instance-aware ELBO.
§.§.§ Effect of Causal Graph Encoder
The causal graph encoder plays a pivotal role in CNGCF to model the causal relations of nodes.
To investigate its effectiveness, we replace our causal graph encoder with different encoders built by other graph learning methods.
In particular, we use GCN <cit.>, Graphsage <cit.> and Pinsage <cit.> to produce user and item embedding vectors for the decoder learning phase, and compare the performance of CNGCF before and after the replacements.
We present the experimental results in Table <ref>.
We find that the GCN <cit.>-, Graphsage <cit.>- and Pinsage <cit.>-based encoders all downgrade the performance of CNGCF compared with CNGCF equipped with our proposed causal graph encoder.
For instance, CNGCF with a GCN-based encoder downgrades the NDCG@10 by 28.68% on the Amazon-Beauty.
This is because GCN, Graphsage and Pinsage cannot capture the causal relations of nodes in the interaction graph, leading to insufficient representations of users and items.
On the contrary, our causal graph encoder captures the intrinsic causal relations between nodes using the causality-aware message passing; thus learns causality-aware user and item representations to
better serve the later decoder learning.
Moreover, the GCN-based encoder downgrades the CNGCF performance most severely compared with GraphSage and Pinsage-based encoders.
This is because the naive GCN performs transductive learning, which requires the full graph Laplacian, whereas GraphSage and Pinsage perform inductive learning that does not require the full graph Laplacian and therefore handles large-scale graph data well.
We thus conclude that an inductive learning setting is more desired for our CNGCF, especially when facing large-scale graph data.
§.§.§ Effect of Causality-aware Message Passing
The causality-aware message passing models the dependency terms within each of the structural equations as the causal relations between nodes.
We present CNGCF's performance after removing the causality-aware message passing in Table <ref>.
We observe that removing the component downgrades CNGCF's performance, indicating the importance of causality-aware message passing in helping CNGCF to achieve favorable recommendation performance.
We thus conclude that modeling the causal relations between nodes within the graph-structured data is essential for graph learning-based models to uncover true user preferences for improved recommendations.
§.§.§ Effect of Counterfactual Instance-aware ELBO
The counterfactual instance-aware ELBO augments counterfactual instances for CNGCF optimization.
We present CNGCF's performance after removing the counterfactual instance-aware ELBO in Table <ref>.
Apparently, removing the counterfactual instance-aware ELBO leads to the downgraded performance of CNGCF on both datasets.
This is because our counterfactual instance-aware ELBO augments counterfactual instances, i.e., the intervened data on user preference vectors, thus facilitating better model optimization to capture user preference shifts.
§.§ Parameter Analysis of Causal Graph Encoder (RQ3)
We analyze CNGCF's performance under different embedding sizes n of the semi-implicit generative model in the causal graph encoder.
We also investigate the node dropout ratios p of the dropout layer applied in the causal graph encoder.
§.§.§ Effect of Embedding Size
Figure <ref> (a) (b) (c) report the parameter sensitivity of our CNGCF w.r.t. embedding size n with n = {16, 32, 64, 128, 256, 512, 1024, 2048}.
Apparently, the performance of CNGCF on Amazon-Beauty, Amazon-Appliances and Epinions demonstrates increasing trends from n=16, then reaches the peak when n = 512, n = 64 and n=256, respectively.
This is reasonable since n controls the number of latent vectors of users and items from the semi-implicit generative model, and low-dimensional latent vectors cannot retain enough information for the encoder learning phase.
After reaching the peaks, the performance of CNGCF degrades slightly and then becomes stable.
The decrease in performance is due to the introduction of redundant information as the embedding size becomes too large, which can affect the model.
Additionally, we observe the largest Amazon-Appliances dataset requires the smallest embedding size of n = 64 to reach its peak performance compared to the other two datasets.
This is because a larger embedding size brings large-scale datasets a higher computational burden, thus limiting the model's performance.
§.§.§ Effect of Dropout Ratio
We employ a node dropout layer in the causal graph encoder to prevent model overfitting.
We show the influence of node dropout ratio p on the three datasets in Figure <ref> (d) (e) (f).
We observe that the performance of CNGCF on Amazon-Beauty, Amazon-Appliances, and Epinions exhibits a decreasing trend as we increase the node dropout ratio p from 0.0 to 0.3, but recovers at p=0.4.
After p=0.4, the performance of CNGCF decreases as the dropout ratio increases.
We believe that the reduced performance could be attributed to the removal of crucial information that the model needs to learn from the data, thus impairing the CNGCF's performance.
Nevertheless, the recovered performance at p=0.4 indicates that CNGCF is robust to balance the loss of information and overfitting.
§ CONCLUSION
We propose CNGCF, the first causality-aware graph representation learning framework for collaborative filtering.
Our CNGCF injects causal relations between nodes into GCN-based graph representation learning to derive satisfactory user and item representations for the CF model.
We craft a causal graph to describe the causality-aware graph representation learning process.
Our CNGCF quantifies each of the structural equations under the causal graph, with a semi-implicit generative model enabling causality-aware message passing for graph learning.
Finally, we capture true user preferences on items by modeling node messages as dependencies of structural equations.
Extensive evaluations on four datasets demonstrate CNGCF’s ability to produce precise recommendations that interpret user preferences and uncover user behavior patterns.
§ ACKNOWLEDGMENTS
This work is supported by the Australian Research Council (ARC) under Grant No. DP220103717, LE220100078, LP170100891 and DP200101374.
IEEEtran
§ BIOGRAPHY SECTION
Xiangmeng Wang has been a Ph.D. student at the School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS). She received her MSc degree in Computer Application Technology from Shanghai University. Her general research interests lie primarily in explainable artificial intelligence, data analysis, and causal machine learning.
Qian Li is a Lecturer at the School of Engineering, Computing and Mathematical Sciences (EECMS), Curtin University, Perth, Australia.
Her general research interests lie primarily in optimization algorithms and causal machine learning.
Dianer Yu has been a Ph.D. candidate at the School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS). He received MSc and BSc degree in Computer Science from UTS.
His general research interests lie primarily in data mining, causal inference and explainable machine learning.
Wei Huang is a postdoctoral researcher at RIKEN Center for Advanced Intelligence Project (AIP). He obtained a Ph.D. degree in Computer Science at the University of Technology Sydney (UTS). He received his Master and Bachelor degree in Statistical Physics from the University of Science and Technology of China. His research interests lie in explainable artificial intelligence, deep learning theory, and graph representation learning.
Guandong Xu is a Professor in the School of Computer Science and Advanced Analytics Institute at University of Technology Sydney. He received MSc and BSc degree in Computer Science and Engineering, and PhD in Computer Science. He currently heads the Data Science and Machine Intelligence Lab, which consists of 15+ members of academics, research fellows and HDR students. From Nov 2019, he directs the newly established Smart Future Research Centre, which is an across-disciplines industry engagement and innovation platform for AI and Data Science Application towards smart wealth management and investment, energy, food, water, living, and city.
|
http://arxiv.org/abs/2307.05930v1 | 20230712055241 | Chemical freeze-out parametrization with mean field repulsive hadron resonance gas model | [
"Sunny Kumar Singh",
"Nachiketa Sarkar",
"Deeptak Biswas"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
[email protected]
Indian Institute of Technology Gandhinagar, Palaj,
Gujarat 382355
[email protected]
School of Physical Sciences, National Institute of Science
Education and Research, An OCC of Homi Bhabha National Institute,
Jatni-752050, India
[email protected]
The Institute of Mathematical Sciences, a CI of Homi
Bhabha National Institute, Chennai, 600113, India
We have examined the chemical freeze-out surface of the heavy-ion
collision experiments within an interacting hadron resonance gas model.
By considering repulsive interactions among hadrons at the mean-field
level, we have suitably parameterized the freeze-out surface by fitting
the mid-rapidity yield data for the most central collisions at the
collision energies available in the AGS, RHIC (BES), and LHC programs. To
suitably account for the repulsive interaction among mesons and (anti-)
baryons, we have introduced phenomenological parameters K_M and K_B
in the freeze-out parametrization. Although a finite value of these two
parameters seems to be necessary to achieve an improved normalized
chi-square, the effect on the rest of the parameters, like the
temperature and relevant chemical potentials, seems to be within the
standard variance.
Chemical freeze-out parametrization with mean field repulsive hadron resonance gas model
Deeptak Biswas
August 12, 2023
===========================================================================================
§ INTRODUCTION
The investigation of the phase structure of strongly-interacting matter
stands as a pivotal and fundamental inquiry within the realm of
ultra-relativistic heavy-ion physics. To comprehend the particle spectra
observed in these experiments, statistical thermal models inspired by
quantum chromodynamics (QCD) are employed. In particular, the transverse
momentum (p_T) integrated rapidity spectra (namely dN/dY) are frozen
from the chemical freeze-out (CFO) boundary onward and help to map the
freeze-out surface on the phase diagram via the CFO parametrization with
temperature (T) and baryon chemical potentials (μ_B)
<cit.>. For the past few decades, the Hadron Resonance Gas
(HRG) model has been successfully describing the abundance of hadrons in
collisions across a wide range of energies, from the SchwerIonen-Synchrotron (SIS) to the Large Hadron Collider (LHC)
<cit.>. The success of the
HRG model, coupled with the lack of reliable first-principle theories
that can provide such parameterization for both high and low baryon
density regions of the phase diagram, has firmly established HRG as one
of the most widely utilized models in this field.
The simplest version of the HRG model is the ideal HRG model (IHRG)
<cit.>, where
attractive interactions among hadrons in a dilute hadron gas can be
approximated by treating higher mass resonances as stable particles.
Initially proposed within the relativistic virial expansion framework,
using the S-matrix approach <cit.>, this model allows for
the calculation of various thermodynamic quantities
<cit.>. However, the IHRG model encountered discrepancies
in different thermodynamic quantities when compared to lattice QCD
results <cit.>, particularly at the
temperature range above the pseudo-critical value. Additionally, an
excess in the pion number density at chemical freeze-out was
observed<cit.>, indicating the need to incorporate short-range
repulsive interactions between hadrons to achieve more accurate Equations
of State (EoS) and realistic estimations of the chemical freeze-out
boundary.
One of the frequently employed methods to model the short-range repulsion
is the Excluded Volume Hadron Resonance Gas (EVHRG) model
<cit.>. In this model, repulsive interactions are
taken into account by incorporating an impenetrable volume surrounding
the individual hadrons. Several versions of the EVHRG model have been
proposed in the literature to determine the strength of short-range
repulsive interactions through comparisons with lattice QCD calculations
or experimental data. These include the diagonal EVHRG model
<cit.>, the cross-terms EVHRG <cit.>, the mass-dependent EVHRG
<cit.>, and the flavor-dependent EVHRG model <cit.>. Another phenomenological
approach to include the interaction is the Van der Waals Hadron Resonance
Gas (vdWHRG) model, which explicitly incorporates both repulsive and
attractive interactions between baryons and anti-baryons
<cit.>.
The repulsive interactions between the various baryon-baryon and
meson-meson pairs can also be incorporated at the mean-field level
<cit.>.
The interacting part of the pressure is added along with the ideal one and
modification is introduced into the statistical model by shifting the
energy of each particle by an amount equal to U(n)=Kn where n is the
total hadron number density. One can incorporate the mean-field
coefficients K_M and K_B to scale the repulsive interaction strength
among the mesons and baryons respectively. Recent works
<cit.> have augmented the mean-field coefficients K_B
from lattice data of χ_2^B - χ_4^B and χ_2^B - χ_6^B. In
another investigation, suitable values of K_B and K_M were estimated
by fitting lattice QCD data of bulk observables, cumulants, and the speed
of sound <cit.>.
In this study, we have focused on constraining the mean-field model at
the chemical freeze-out boundary by comparing it with experimental yields
through a χ^2 minimization procedure. Previous applications of this
mean-field model at freeze-out involved fixing the repulsive strength
parameter K to explain the data of 200 AGeV S + Au collisions at
CERN-SPS <cit.>. While earlier studies consistently suggested
a value of K_B = 450 MeV fm^-3, we aim to investigate the
collision energy dependence of these phenomenological parameters by
analyzing the rapidity spectra. Within this approach, for the first time
we have obtained the collision energy dependence of mean-field
coefficients by parameterizing the chemical freeze-out surface for RHIC
and LHC energies.
We have organized the paper as follows. In Sec. <ref> we give a
short description of the ideal HRG and the MFHRG model. In Sec.
<ref> we discuss the method we have employed to extract the
various parameters in the model. In Sec. <ref> our results and the
discussion of our results are provided in the context of heavy ion
collision experiments. We conclude by giving a summary of the present
work in Sec. <ref>.
§ FORMALISM
In the ideal hadron resonance gas model, the thermodynamic potential for each species is <cit.>:
ln Z^id_i(T,μ,V) = ± Vg/(2π)^3∫ d^3p ln[1 ± e^(-(E_i-μ_i)/T)]
Where the upper(lower) sign corresponds to fermions(bosons). Here g is
the degeneracy factor and V is the volume. Considering the baryon
number (B), electric charge (Q), and strangeness (S), the chemical
potential (μ_i) of the ith hadron is determined by μ_i = Q_iμ_Q
+ S_iμ_S + B_iμ_B.
The grand thermodynamic potential for the total ensemble is given by:
ln Z^ideal=∑_iln Z_i^ideal
The number density of each species can be determined by:
n_i = T/V (∂ln Z_i/∂μ_i)_V,T = g_i/(2 π)^3∫ d^3p / (exp[(E_i-μ_i)/T] ± 1) .
One can relate the thermal abundance of the detected particles at the
chemical freeze-out surface with the corresponding rapidity densities as
follows:
d N_i/d y|_Det ≃ (d V/d y) n_i^Tot|_Det
The total number density of each species considering decays from higher
resonances can be computed as follows:
n_i^Tot = n_i(T,μ_B,μ_Q,μ_S) + ∑_j n_j(T,μ_B,μ_Q,μ_S) × Branching Ratio (j→ i)
§.§ Mean-Field HRG (MFHRG)
With the inclusion of short-range repulsive interactions between hadrons
via the mean-field approach, the effective chemical potential of each particle
species gets modified by μ_eff,i=μ_i - Kn, where K
is a phenomenological parameter that signifies the strength of the
repulsive interaction and n is the number density of the interacting
species of particles <cit.>. The pressure
of the mean-field repulsive model is given by:
P_MF(T, μ, V) = ± T ∑_i g_i/(2π)^3∫ d^3p ln[1 ± e^-(E_i-μ_eff,i)/T] + 𝒫_M, B, B̅(n_M, B, B̅)
Here, 𝒫 is the factor arising from the interacting
part, which is necessary to maintain the thermodynamic consistency
<cit.>.
𝒫_B{B̅}(n_B{B̅})
= 1/2 K_B n_B{B̅}^2, (Baryons)
𝒫_M(n_M) = 1/2 K_M n_M^2, (Mesons)
The above form of interacting pressure is written considering repulsive
interactions among meson-meson and baryon(anti-baryon)-
baryon(anti-baryon) pairs. Here the total meson number density n_M is
calculated as:
n_M=∑_i∈Mg_i/(2 π)^3∫ d^3p / (exp[(E_i-μ_eff,i) / T] - 1) .
For mesons, μ_eff, i=μ_i-K_Mn_M. K_M signifies the
strength of the repulsive interactions among the meson-meson pairs. We
have a similar equation for baryons and anti-baryon number densities:
n_B{B̅}=∑_i∈B{B̅}g_i/(2 π)^3∫ d^3p / (exp[(E_i-μ_eff,i) / T] + 1) .
B and B̅ imply baryons and anti-baryons respectively.
Here, the effective chemical potential of the ith (anti-)baryon
μ_eff,i=μ_i-K_B n_B{B̅}. The repulsive
interactions among the baryon-baryon and antibaryon-antibaryon pairs are
given by the same strength parameter K_B. These equations are
transcendental in nature and have to be solved self-consistently,
together with Eqs. [<ref>-<ref>].
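A minimal numerical sketch of this self-consistent solution is given below. It works in natural units with the (ħc)^3 conversion to fm^-3, keeps only a severely truncated baryon list in place of the full spectrum, integrates on a simple momentum grid, and uses a damped fixed-point iteration; the quoted T, μ_B and K_B values are illustrative only and are not fit results of this work.

import numpy as np

HBARC = 197.327            # MeV fm
T, MU_B = 150.0, 400.0     # illustrative temperature and baryon chemical potential (MeV)
K_B = 450.0                # baryonic mean-field coefficient (MeV fm^3)

# a few representative baryons (mass in MeV, degeneracy); the actual analysis
# uses all confirmed states up to 3 GeV
BARYONS = [(938.3, 2), (939.6, 2), (1115.7, 2), (1232.0, 16)]

def n_fermi(mass, g, T, mu_eff):
    # Fermi-Dirac number density (fm^-3) from a momentum-grid integration
    p = np.linspace(1.0, 4000.0, 4000)                     # MeV
    E = np.sqrt(p**2 + mass**2)
    integrand = p**2 / (np.exp((E - mu_eff) / T) + 1.0)
    return g / (2.0 * np.pi**2) * np.sum(integrand) * (p[1] - p[0]) / HBARC**3

def n_baryon_selfconsistent(T, mu_B, K_B, tol=1e-10):
    # damped fixed-point iteration for n_B = sum_i n_i(T, mu_B - K_B * n_B)
    n_B = 0.0
    for _ in range(500):
        mu_eff = mu_B - K_B * n_B
        n_new = sum(n_fermi(m, g, T, mu_eff) for m, g in BARYONS)
        if abs(n_new - n_B) < tol:
            break
        n_B = 0.5 * (n_B + n_new)
    return n_B

print("ideal      n_B =", sum(n_fermi(m, g, T, MU_B) for m, g in BARYONS), "fm^-3")
print("mean-field n_B =", n_baryon_selfconsistent(T, MU_B, K_B), "fm^-3")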
§ METHOD AND DATA ANALYSIS
The mid-rapidity data of hadron yields dN/dY were taken from various
experiments at 0-5% centrality (most central) and at different energies.
These consist of Pb-Pb collisions in LHC at a collision energy of 2760
GeV <cit.>. We have also included Au-Au collisions at RHIC of 200,
130, 62.4 GeV <cit.>, RHIC BES of 39, 27, 19.6, 11.5, 7.7
GeV <cit.>, and in AGS at 4.85
GeV <cit.>.
To extract the chemical freeze-out parameters i.e., T, μ_B,
μ_S, μ_Q along with the parameters scaling the strength of the
hadron-hadron interaction (K_B and K_M), we have fitted the detected
hadron yields with the thermal model estimations. Considering the initial
condition of the heavy-ion collision, it is customary to fix
μ_Q and μ_S via two constraint equations. The first constraint is
the ratio of net baryon to net charge which remains fixed throughout the
collision process considering the isentropic evolution
<cit.>.
∑_i n_i(T,μ_B,μ_S, μ_Q,K_M, K_B)B_i/∑_i
n_i(T,μ_B,μ_S,μ_Q,K_M, K_B)Q_i= r
One can evaluate this ratio r considering the number of neutrons and
protons in the incident nuclei. For heavy nuclei like Au-Au and Pb-Pb,
this ratio r is approximately 2.5 <cit.>.
The conservation of strangeness along with the strangeness neutrality
imposes another constraint:
∑_i n_i(T,μ_B,μ_S, μ_Q,K_M, K_B)S_i=0
The rest of the parameters are determined by the χ^2 minimization
procedure. The χ^2 is defined as:
χ^2=∑_i( d N_i/d y|_Expt - d N_i/d y|_Model )^2/σ_i^2
Here, we would like to emphasize that our present analysis focuses
exclusively on data from the most central events of the collisions, thus
we have chosen not to incorporate the strangeness suppression factor
γ_s assuming a state of complete chemical equilibrium. For the
present study, we have used data of π^±, K^±, p, p̅,
Λ, Λ̅, Ξ^±, as these are widely available for most
of the collision energies. To optimize numerical efficiency and reduce
the number of free parameters, we have fixed the K_M at three different
values, i.e. 0, 50, and 100 MeV fm^-3. In the considered HRG
spectrum, all confirmed hadronic states up to mass 3 GeV have been
included, with masses and branching ratios following the Particle Data
Group <cit.>.
The statistical and systematic uncertainties in a given data set have been
added in quadrature. The variance of the evaluated
parameter set for a particular minimization procedure has been calculated
from the ± 1 deviation of the minimized χ^2 per degree of
freedom <cit.>.
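To make the fitting procedure explicit, the sketch below minimizes a χ^2 of the above form for a drastically simplified toy model: Boltzmann-approximated primordial densities for π^±, p and p̅ only, with no resonance decays, no μ_Q and μ_S constraints, and no mean-field shift. The "data" values are placeholders chosen for illustration and are not the experimental yields analysed in this work.

import numpy as np
from scipy.special import kn
from scipy.optimize import minimize

HBARC = 197.327   # MeV fm

def n_boltz(mass, g, T, mu):
    # Boltzmann-approximated number density (fm^-3)
    return g * T * mass**2 * kn(2, mass / T) * np.exp(mu / T) / (2 * np.pi**2 * HBARC**3)

# (mass in MeV, degeneracy, baryon number) for pi+, pi-, p, pbar
SPECIES = [(139.6, 1, 0), (139.6, 1, 0), (938.3, 2, 1), (938.3, 2, -1)]

# purely illustrative placeholder yields dN/dy and 10% uncertainties
Y_DATA = np.array([90.0, 90.0, 25.0, 2.0])
SIGMA = 0.1 * Y_DATA

def chi2(params):
    T, mu_B, dVdy = params
    model = np.array([dVdy * n_boltz(m, g, T, B * mu_B) for m, g, B in SPECIES])
    return np.sum(((Y_DATA - model) / SIGMA) ** 2)

res = minimize(chi2, x0=[150.0, 100.0, 1500.0], method="Nelder-Mead")
print("best-fit (T, mu_B, dV/dy):", res.x, "  chi2/ndf:", res.fun / (len(Y_DATA) - 3))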
§ RESULT AND DISCUSSION
§.§ Variation of freeze-out parameters:
We have tabulated the fitted parameter sets in Table <ref>. For convenience, let us first discuss the
variation of the mean-field coefficients, as these are the most novel
output from our present study. Changes in other freeze-out parameters are
commensurate with the variation in these mean-field coefficients.
Extraction of both the K_M and K_B becomes numerically challenging
due to the slow convergence rate. We have fixed the values of K_M to be
0, 50, and 100 MeV fm^-3 and examined the corresponding
values of K_B. For a fixed value of K_M, the K_B increases with
collision energies and remains similar at higher RHIC and LHC energies as
shown in Fig. <ref>. A similar saturation of thermal parameters at
higher collision energies has been noticed earlier for temperature and
chemical potentials <cit.>. We have
found that even for K_M=0.0 MeV fm^-3 a non-zero value of
K_B helps achieve better χ^2 per degree of freedom while fitting
with yield data. With increasing the K_M to 50 and 100 MeV
fm^-3 the values of K_B increases. In the context of heavy-ion
collision, the total ensemble of baryons and mesons are connected via the
constraints like net strangeness neutrality and a fixed net
baryon-to-charge ratio. Along with these constraints, the final yield of
mesons is predominantly influenced by the decay of various higher-mass
baryon resonances <cit.>. Consequently, imposing a mean-
field repulsion in mesons necessitates a higher value of K_B to
restrict the baryon abundances, which eventually affects the final yield
of mesons and validates the required constraints.
Towards lower collision energies, the medium is mainly baryon-dominated
<cit.>, and the effect from the variation of K_M is
minimal. However, the system becomes meson dominated with increasing
√(s_NN), and the effect of K_M is much more pronounced. We
would like to emphasize that for the range of K_M considered, which
spans from 0 to 100 MeV fm^-3, the corresponding K_B values
vary between 100 and 800 (considering variances) MeV fm^-3. These
specific values were previously explored in a hydrodynamic simulation that
incorporated a hadronic equation of state, as mentioned in
Ref. <cit.>. Furthermore, recent studies conducted using the
MFHRG model have also confirmed this range of K_B values as they successfully
account for the lattice data of various charge susceptibilities
<cit.>.
In the top left panel of Fig. <ref>, we have shown the
variation of freeze-out temperature with collision energy for the three
considered values of the mesonic mean-field coefficients K_M as
mentioned earlier. For the freeze-out temperature (T), the spread
resulting from the three different values of K_M behaves similarly to that of
K_B. The differences increase towards high collision energies,
following the variation of K_B. For all three values of K_M, the
temperature increases with the collision energy and becomes constant
around 160 MeV near the higher BES energies. We want to reiterate here that,
although the qualitative behavior is similar to that of the usual
freeze-out parametrization within the ideal HRG formalism, the
result. A finite value of the mean-field repulsion parameter restricts
the number density which in turn produces a higher T to fit the yields.
The collision energy dependence of the baryon chemical potential is shown
in the top right panel. The effect of repulsion on the
freeze-out values of μ_B is almost negligible. However, at very low collision energies, the
effect of K_B seems to induce a higher μ_B, as the medium is
dominated by baryons. The chemical potential is shifted by K_B n_B, so
a higher value of K_B should be accompanied by a higher μ_B to
produce a similar estimation of the yields. The general
behavior is similar to that of the ideal HRG parametrization. With higher
collision energies the baryon stopping diminishes, so the medium tends to
form with lower net charges (B, Q, and S), which results in lower
values of the chemical potentials in the freeze-out parametrization. At lower
collision energies this behavior induces a high value of μ_B, which
tends to zero at higher RHIC and LHC energies.
The strange chemical potential follows the trend of μ_B. A finite
μ_B results in the dominance of the hyperons over the anti-hyperons;
on the other hand, the strangeness-neutrality constraint demands the
cancellation of the net strangeness arising from the baryon sector with
that from the meson sector, which requires μ_S to be proportional
to μ_B. The variation of μ_S stays within the uncertainties for
the three values of K_M, which indicates that the higher-mass strange
mesons and baryons are only weakly influenced by the mean-field
repulsion when the freeze-out parametrization is performed with yields.
The resulting values of the freeze-out volume (expressed here in terms of the
freeze-out radius) are shown in the bottom right panel. A
similar non-monotonic behavior with the collision energy was earlier
observed from the chemical freeze-out parametrization with the ideal HRG in
Refs. <cit.>. The interesting
observation here is the higher value of the freeze-out radius when a
higher value of K_M is imposed. Higher values of K_M and K_B suppress the
number density, which in turn requires a larger freeze-out
radius to fit the yields. One can see that among the above-discussed
parameters, the variation of the volume with K is the most prominent. It
seems that the repulsive interaction affects the freeze-out volume most
strongly, as the yield is directly proportional to the
volume.
§.§ Particle yields from thermal parametrization:
To examine the differences in thermal abundances resulting from different
values of K_M, it would be informative to analyze the variations in
yields. Fig. <ref> displays the number density of pions,
kaons, protons, and lambdas, calculated with the resulting
parametrization. The impact of varying K_M is more pronounced for
lighter mass pions, while the effect diminishes as the particle mass
increases. Baryons with higher masses show negligible variations across
the three cases, whereas pions demonstrate more significant alterations
when different K_M values are considered.
The effective chemical potential μ_eff,i = μ_i - K n_M,B,B̅ is
expected to have a significant impact on pions
since they carry only electric charge, and the magnitude of μ_Q is
much smaller compared to other chemical potentials. Conversely, the
effect of this shift diminishes for strange and non-strange baryons, as
their respective chemical potentials have larger magnitudes. It is worth
noting that the chemical potentials themselves are modified for different
K_M values, contributing to the observed variations.
In this context, it is important to consider the decay feed-down effect
as well. The total pion density receives a significant contribution from
the decay of higher-mass meson and baryon resonances. The suppression of
these states is also reflected in the final pion abundance, resulting in
substantial variations. This effect is similarly observed for the
lowest-mass strange hadron, kaon. On the other hand, baryons receive
contributions from higher-mass baryons that are already thermally
suppressed, leading to insignificant variations while considering
different K_M values.
At this juncture, we want to reiterate that the yield dN/dY is a product
of this thermal density and the freeze-out volume dV/dY. A reverse trend
was observed for the freeze-out volume in Fig. <ref>, i.e.
a higher value of K_M resulted in higher values of freeze-out volume.
The cumulative effect of these two ensures the agreement between the yield
data and our thermal model estimation. This indicates that the resulting
parameters (especially the freeze-out volume and K_B) depend on each
other and on the value of K_M. In our present study, it is
challenging to decouple this mutual dependence.
§.§ Particle ratios:
It would be interesting to estimate various particle ratios and compare
them with those from the experimental data. Along with checking the
efficacy of our parameterization, this will also examine the effect of
various choices of K_M on thermal yields. Here we shall discuss some of
the important particle ratios from various sectors.
The ratios of π^-/π^+ and K^-/K^+ as a function of √(s_NN)
are depicted in the upper panel of Fig. <ref>. Our
parametrization successfully reproduces the observed variation of the
experimental data. The pion ratio is greater than unity at lower
√(s_NN) due to the higher abundance of neutrons in the colliding
nuclei, which induces an isospin asymmetry favoring π^-. However, this
asymmetry diminishes at higher RHIC and LHC energies, resulting in similar
yields of π^- and π^+. In the case of kaons, the variation follows
the trend of μ_S. At lower collision energies, the positively charged
kaon (K^+) becomes more abundant than the negatively charged kaon
(K^-) to maintain strangeness neutrality. As the collision energy
increases, this effect disappears, and the yields of particles and
antiparticles become equal at the LHC. The qualitative behavior is the
same for all three values of K_M. It seems that varying K_M does
not result in a large variation of these ratios.
In the context of the heavy-ion collision, the strange to non-strange
ratios signify the relative abundance of strangeness and portray the
degree of equilibration for the strange sector <cit.>.
Deviations from the equilibrium values have earlier been observed for
non-central collisions, which necessitates the use of a strangeness saturation
factor γ_S <cit.>. Being ratios of the lightest strange to
non-strange particles, K^+/π^+ and K^-/π^- are widely studied
within the thermal model. The explanation of the non-monotonic behavior of
the K^+/π^+ was discussed as a signature of the thermalization in the
strange sector and a possible existence of initial partonic
state <cit.>. Although these details are beyond the scope of
the present thermal model, our parameterization suitably explains the data
for all three values of K_M, as shown in the middle panel of Fig. <ref>.
There is not much variation among the estimations from the three
cases, indicating that these ratios depend only weakly on the variation of
K_M and K_B.
We have shown the proton-to-pion ratio in the bottom panel of
Fig. <ref>. As we have used two different mean-field
coefficients, K_M and K_B, for the meson and baryon sectors,
this ratio will reflect their respective effects. We have
plotted p/π^+ and p/π^- to nullify the effect of the charge
chemical potential. In the context of heavy-ion collision, the abundance
of pions is mainly dominated by the temperature as they are the lowest
mass hadrons, whereas the protons mimic the variation of exponential of
μ_B/T. At lower collision energy the medium is dominated by the
baryons due to the baryon stopping, whereas at higher collision energies
the system is dominated by the mesons, and changes from a baryon-dominated
freeze-out to meson-dominated freeze-out occurs <cit.>.
This phenomenon explains the variation observed in the proton to π^+
ratio. On the other hand, the production of anti-protons increases as the
collision energy increases, and at high RHIC and LHC energies the two
ratios become similar. Here the values of the p/π^+ ratio increase as
we increase the K_M for a given collision energy. A higher value of
K_M suppresses the abundance of pions and produces a higher value of the
ratio.
To quantify the impact of the various choices of the repulsive parameters K_M
and K_B on the particle ratios, we have plotted the total proton
(p + p̅) abundance normalized to the total pion (π^++π^-) abundance
in Fig. <ref>. This ratio is affected more strongly by the various
choices of K_M than the individual ratios, as it signifies the
relative abundance of the lowest-mass baryons to that of the lowest-mass
mesons. The parametrization for K_M=0 seems to agree with the data
better than the other choices. At the freeze-out parametrization, one
should not expect much variation in the baryon yields from the variation of
K_B due to their heavier masses; on the contrary, the pion yields get
significantly suppressed for a higher value of K_M due to their lower
masses. This results in the variation shown for the ratio
((p + p̅)/(π^++π^-)), as for a given collision energy it
increases for a higher value of K_M. The difference is much more
pronounced at higher collision energies, as the thermal medium is meson
dominated, so the different choices of K_M produce a larger effect.
Motivated by the fact that the ratio of the total proton
yield to pions varies significantly with the value of the mean-field
parameter K_M, we have investigated its impact on the ratios of
susceptibilities calculated with the freeze-out parameterization. The n-th order susceptibility is defined as:
χ_x^n = (1/V T^3) ∂^n(ln Z)/∂(μ_x/T)^n
where μ_x is the chemical potential for conserved charge x. The
susceptibilities would be related to the cumulants measured in the
heavy-ion collisions as:
V T^3χ_x^n= C_n .
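To make the connection between the partition function and the cumulant ratios concrete, the sketch below evaluates χ_B^n by central finite differences of ln Z/(V T^3) with respect to μ_B/T. The Boltzmann-limit ideal-gas pressure used here is only a stand-in for the full mean-field expression, and the hadron list, step size, and parameter values are illustrative assumptions.

import numpy as np
from scipy.special import kn

def lnZ_over_VT3(T, muB, hadrons):
    """Boltzmann-limit ln Z/(V T^3) of particles plus antiparticles (ideal-gas stand-in)."""
    p = 0.0
    for m, B, g in hadrons:
        p += g * (m / T) ** 2 * kn(2, m / T) / (2.0 * np.pi**2) * 2.0 * np.cosh(B * muB / T)
    return p

def chi_B(n, T, muB, hadrons, h=1e-2):
    """n-th order baryon susceptibility from central differences in x = muB/T (n = 1..4)."""
    f = lambda x: lnZ_over_VT3(T, x * T, hadrons)
    x0 = muB / T
    if n == 1:
        return (f(x0 + h) - f(x0 - h)) / (2 * h)
    if n == 2:
        return (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2
    if n == 3:
        return (f(x0 + 2*h) - 2*f(x0 + h) + 2*f(x0 - h) - f(x0 - 2*h)) / (2 * h**3)
    if n == 4:
        return (f(x0 + 2*h) - 4*f(x0 + h) + 6*f(x0) - 4*f(x0 - h) + f(x0 - 2*h)) / h**4
    raise ValueError("order not implemented")

nucleons = [(0.938, 1, 4)]                 # (mass in GeV, |B|, degeneracy of p + n)
c2 = chi_B(2, 0.150, 0.300, nucleons)
c4 = chi_B(4, 0.150, 0.300, nucleons)
print("C4/C2 for the ideal Boltzmann baseline (close to 1):", c4 / c2)

Replacing lnZ_over_VT3 by an interacting expression (with a density-dependent shift of the chemical potentials) is what would generate deviations of C_4/C_2 from unity of the kind discussed in the following.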
As we have fixed K_M and fitted the mean-field coefficient K_B,
it will be interesting to check the variation in the baryon cumulant
ratios. We have calculated these cumulants within the Boltzmann
approximation as it provides a reasonable baseline for
the massive hadrons and resonances (except π) along the chemical
freeze-out boundary <cit.>, as m_i-μ_i >> T at the
respective freeze-out parametrization. Within this consideration, we can
approximate the interacting partition functions in the Boltzmann limit and
calculate the χ_B^n <cit.>. The
differences arising from various values of the K_M increase as we move
to ratios of higher-order cumulants. The effect is negligible for
C_2/C_1, while C_3/C_2 and C_4/C_2 decrease as we fix the K_M to
higher values. As we imply a higher value of K_M, it produces a higher
value of K_B as discussed earlier in Sec. <ref>, which
translates into these differences. We want to mention that the ratio
C_4/C_2 is 1 at all collision energies for the ideal HRG case, whereas
the impact of the interaction gives rise to the observed variation.
As a baseline, we have also plotted the net-proton cumulant
measurements from the STAR collaboration <cit.>. For
simplicity, we have not mimicked experimental specifications such as the
p_T cut and decay feed-down in the cumulant calculations, although the
effects of the decay feed-down and the p_T cut-off have been found to be minimal
in earlier studies <cit.>. We want to
reiterate that we have calculated the baryon cumulant ratios, which are
different from the net-proton ratios. Although the qualitative behavior is
similar, the quantitative difference between these two increases for higher
order cumulant ratios <cit.>. The non-monotonic variation of
C_4/C_2 is not well captured in the thermal model estimation, although
the results deviate from the ideal baseline of 1. The C_2/C_1 and
C_3/C_2 estimations agree with the data for
K_M=50 MeV fm^-3, while there are larger deviations for C_4/C_2,
which seems to match better for higher values of K_M. This behavior suggests that
a complete study of the net-proton cumulants with experimental constraints
might restrict the variation of both K_M and K_B.
§ SUMMARY
Recent advancements in incorporating repulsive interactions between
baryons and mesons in the hadron resonance gas (HRG) model have
established it as a suitable candidate for providing a bulk description of
the QCD medium below the transition temperature. Phenomenological
descriptions such as the excluded volume HRG and van der Waals HRG models
consider parameters such as a hard-core impenetrable radius of the
hadrons. On the other hand, the mean-field repulsive HRG model (MFHRG)
provides a robust representation of the medium by accounting for a density-
dependent interaction strength. However, this model requires the inclusion
of parameters such as K_B and K_M to scale the interaction strength
among baryons and mesons, which can be appropriately estimated using bulk
observables obtained from lattice QCD <cit.>. It is crucial to apply this mean-field repulsive model to
analyze data from heavy-ion collision experiments and assess its
effectiveness in comparison to other counterparts such as the ideal HRG,
evHRG, vdWHRG, and so on. Exploring the chemical freeze-out surface
provides a foundation for investigating the collision energy dependence of
the repulsive interaction strength by estimating K_M and K_B.
To parametrize the chemical freeze-out surface, we utilized the p_T-
integrated mid-rapidity yield dN/dY data for pions, kaons, protons,
Λ, and Ξ in the most central collisions. The collision energy
range available in AGS (4.85 GeV), RHIC-BES, and LHC (2.76 TeV) was
analyzed. Given that the parameters K_B and K_M are interdependent due
to relevant constraints and decay feed-down effects, evaluating them
independently can lead to larger numerical variances. To address this
issue, we fixed K_M at three representative values (0, 50, and 100
MeV fm^-3) and performed a χ^2 fitting to determine the remaining
parameters: T, μ_B, μ_Q, μ_S, K_B, and the freeze-out
radius R.
While the values of K_B were found to be finite and influenced the
goodness of fit, the other parameters were consistent with those obtained
from the ideal HRG model. Notably, K_B increases with collision energy
and becomes significantly higher at higher √(s_NN). It is
intriguing to observe that the values of K_B obtained from this freeze-
out analysis are similar to those from earlier studies using the mean-
field approach. The agreement between the estimation of K_B from lattice
QCD-motivated studies <cit.> and our analysis underscores the effectiveness
of this model in describing the bulk properties of the created medium in
heavy-ion collisions.
Studying the influence of repulsive interactions on the thermal abundance
of different states was crucial. While the effect of finite K_M and
K_B values on the number density of massive strange hadrons and baryons
was not significant, it played a more prominent role in the case of pions.
Particle ratios within the same sector, such as meson-to-meson and
baryon-to-baryon ratios, were less affected by variations in K_M and K_B.
However, the proton-to-pion ratios exhibited significant variations.
Consequently, the total proton to total pion ratio became a subject of
investigation, as it appeared to be strongly dependent on the values of
K_M. Additionally, we explored the ratio of baryon susceptibilities
using this freeze-out parameterization, as these susceptibilities are
linked to net-proton cumulants measured in heavy-ion collisions. While our
freeze-out parametrization was based on yields, there was a general
agreement between our estimations of baryon cumulant ratios and the
measurements of net-proton for lower orders. However, discrepancies
arose when considering fourth-order cumulants. Proper treatment of
cumulant ratios requires accounting for decay feed-down effects and
implementing p_T cuts within the framework of this mean-field repulsive
HRG model. This consideration will be essential for future studies,
particularly in the context of energy available in BES-II, HADES, and CBM
experiments.
§ ACKNOWLEDGEMENTS
D.B. expresses gratitude to Sayantan Sharma, Aman Kanojia, Somenath Pal and
Hiranmaya Mishra for engaging and fruitful discussions. D.B. would like to
express sincere gratitude for the support received from NISER,
Bhubaneswar, with special thanks to A. Jaiswal for the kind assistance and
hospitality during the visit, where the majority of this work was
performed.
|
http://arxiv.org/abs/2307.04437v2 | 20230710092701 | HORTENSIA, a program package for the simulation of nonadiabatic autoionization dynamics in molecules | [
"Kevin Issler",
"Roland Mitrić",
"Jens Petersen"
] | physics.chem-ph | [
"physics.chem-ph",
"physics.comp-ph"
] |
HORTENSIA, a program package for the simulation of nonadiabatic autoionization dynamics in molecules
Julius-Maximilians-Universität Würzburg, Institut für Physikalische und Theoretische Chemie, Emil-Fischer-Str. 42, 97074 Würzburg, Germany
[email protected]
Julius-Maximilians-Universität Würzburg, Institut für Physikalische und Theoretische Chemie, Emil-Fischer-Str. 42, 97074 Würzburg, Germany
[email protected]
Julius-Maximilians-Universität Würzburg, Institut für Physikalische und Theoretische Chemie, Emil-Fischer-Str. 42, 97074 Würzburg, Germany
We present a program package for the simulation of ultrafast vibration-induced autoionization dynamics in molecular anions in the manifold of the adiabatic anionic states and the discretized ionization continuum. This program, called HORTENSIA (Hopping real-time trajectories for electron-ejection by nonadiabatic self-ionization in anions), is based on the nonadiabatic surface-hopping methodology, wherein nuclei are propagated as an ensemble along classical trajectories in the quantum-mechanical potential created by the electronic density of the molecular system. The electronic Schrödinger equation is numerically integrated along the trajectory, providing the time evolution of electronic state coefficients, from which switching probabilities into discrete electronic states are determined. In the case of a discretized continuum state, this hopping event is interpreted as the ejection of an electron. The derived diabatic and nonadiabatic couplings in the time-dependent electronic Schrödinger equation are calculated from anionic and neutral wavefunctions obtained from quantum chemical calculations with commercially available program packages interfaced with our program.
Based on this methodology, we demonstrate the simulation of autoionization electron kinetic energy spectra that are both time- and angle-resolved. In addition, the program yields data that can be interpreted easily with respect to geometric characteristics such as bonding distances and angles, which facilitates the detection of molecular configurations important for the autoionization process.
Moreover, useful extensions are included, namely generation tools for initial conditions and input files as well as for the evaluation of output files both through console commands and a graphical user interface.
[
Jens Petersen
August 12, 2023
===================
For submission:
Repository link: <https://github.com/mitric-lab/HORTENSIA_LATEST.git>
Licensing: MIT
Language: Python ≥ 3.8
§ INTRODUCTION
After generation of a temporary molecular anion through electron attachment, there are three possible competing relaxation mechanisms.<cit.>
These are a) radiative deactivation, assuming that there is a lower-lying anion state that is stable with respect to ionization, b) dissociative electron attachment, in which the captured electron induces geometric changes in the molecule resulting in fragmentation into more stable products, a neutral and an anionic subsystem, and
c) autoionization, in which the metastable state decays after a finite period of time via electron ejection.
The process of dissociative electron attachment is observed for example in DNA, where capture of low-energy electrons leads to single and double strand breaks<cit.>, or in a variety of substances in nanoscale thin films<cit.>.
Prominent examples for autoionization include excited dipole- and quadrupole-bound anions with binding energies slightly below the ionization threshold<cit.>, intermolecular Coulombic decay at the FADH^- cofactor involved in DNA-photolesion repair<cit.> and autoionization induced by vibrational excitation in organic molecules<cit.>.
Generally the finite lifetime of a metastable state with respect to autoionization can vary strongly from only a few femtoseconds<cit.> up to milliseconds<cit.>.
Recently, several experiments have provided insights into the dynamics of such processes in dipole- and quadrupole-bound organic anions on a (sub-)picosecond timescale.<cit.>
Although the process of autoionization is well known and has been observed experimentally by a multitude of methods, as can be seen in the references given above, the theoretical description of autoionizing systems is challenging<cit.>, especially if one is interested in the mechanistic details of the intricate ultrafast relaxation dynamics.
Autoionization processes can follow different general mechanisms, depending on how energy is redistributed among the system's degrees of freedom. Besides a purely electronic variant, where already the electronic energy of the system lies above the ionization threshold and electron ejection may proceed via tunneling, there is also the possibility of a nonadiabatic mechanism in which rotational or vibrational energy of the nuclei is transformed into the kinetic energy of the ejected electron.
In the following, we focus on the case of vibrational autoionization. This process can thus be viewed as a nonadiabatic transition between a vibrationally excited bound N-electron system and continuum electronic states consisting of an N-1 electron molecular core and a free electron. Early theoretical treatments have focused on the computation of ionization rates<cit.> as well as on establishing propensity rules for the ionization transitions<cit.>. While a full dynamical treatment of vibrational autoionization is highly desirable, an entirely quantum-dynamical approach is computationally prohibitive. As an alternative, a mixed quantum-classical ansatz can be considered, further motivated by the success of this type of methodology in the description of bound-state nonadiabatic processes and the simulation of time-resolved spectroscopic signals.<cit.> Although to date there have been several implementations of mixed quantum-classical dynamics simulations for bound-state problems made publicly available<cit.>, no program addressing the simulation of vibration-induced autoionization processes has been published so far.
Therefore, in this work we present the program package implementing our approach to describe vibrational autoionization through quantum-classical dynamics in the framework of the surface-hopping methodology in the manifold of bound and continuum electronic states as described recently<cit.>.
Therein, nuclear motion is considered classically, while the electronic system is treated quantum-mechanically.
Nonadiabatic transitions between electronic states accompanied by change of the classical vibrational energy of the molecule describe the energy exchange between the two subsystems.
With this program package and the underlying methodology, one is able to gain insight into the geometric and electronic evolution in the course of the autoionization process as well as to calculate the time-, energy- and angle-distribution of the generated free electrons, which serve as experimental observables for monitoring autoionization dynamics.
We illustrate our program on the example of the 2-cyanopyrrolide anion, which bears a dipole-bound excited state slightly below the electron detachment threshold while the vibrationally excited states are metastable and decay via autoionization.<cit.>
In the following section a brief theoretical description of the method is given. In section <ref> an overview of the actual implementation is provided. The subsequent section <ref> details performance-related issues, namely quality of approximations in the theory and runtime and memory optimization within the program, as well as a dynamics simulation example for the 2-cyanopyrrolide anion. Finally in section <ref> a conclusion and outlook are given.
§ THEORY
Our methodological framework is based on the surface-hopping procedure as proposed by Tully<cit.>, in which the coupled electron-nuclear dynamics of molecular systems is approached in a quantum-classical fashion.
Specifically, the nuclei are propagated classically according to Newton's equations of motion,
MR̈(t)
=
𝐅_i(𝐑[t])
≡
-∇_R E_i(R[t]),
where the force 𝐅_i(𝐑[t]) is obtained as the negative gradient of the electronic potential energy surface (PES) E_i(R[t]). In the above equation, M denotes a diagonal matrix containing the nuclear masses.
For an ensemble of initial conditions, this leads to trajectories R(t) moving on the given PES.
Simultaneously, the electronic time-dependent Schrödinger equation
iħΨ̇(r;R[t])
=
Ĥ_elΨ(r;R[t])
,
with the electronic Hamiltonian Ĥ_el is solved.
The electronic wavefunction can be expanded into a set of orthonormal basis states, which in the case of autoionization includes bound states Φ_m' (denoted with a primed index) as well as continuum states Φ̃_m” (denoted with a double-primed index):
Ψ(r,R[t],t)
= ∑_m' c_m'(t) Φ_m'(r,R[t])
+
∑_m”∫ d^3k c̃_m”(k,t) Φ̃_m”(k,r,R[t]),
where k denotes the continuously varying wave vector of the free electron, while m” is the quantum number of the remaining neutral state.
We assume the wavefunctions Φ_m' and Φ̃_m” to be single Slater determinants (ground state) or an expansion of singly excited Slater determinants (excited state).
In the frame of the presented methodology we discretize the continuum states, leading to
∫ d^3k c̃_m”(k,t)
Φ̃_m”(k,r,R[t])
≈∑_i
(Δ V_k)^1/2c̃_m”(k_i,t)
(Δ V_k)^1/2Φ̃_m”(k_i,r,R[t])
≈∑_i
c_m”(k_i,t)
Φ_m”(k_i,r,R[t]),
where Δ V_k denotes the volume element in k-space and the discretized and continuum state expansion coefficients are related according to c_m”(k_i,t)=(Δ V_k)^1/2c̃_m”(k_i,t). The actual determination of the wave vectors and the implementation of the discretization procedure are explained in detail in the next chapter.
Insertion of Eq. (<ref>) into the time-dependent Schrödinger equation (<ref>), multiplication from the left by an eigenstate ⟨Φ_n| and evaluation of the arising terms leads to a set of coupled differential equations for the electronic state coefficients c_n:
ċ_n(t)
=
∑_m
[
-i/ħ H_nm(R[t]) - D_nm (R[t])
]
c_m(t),
with the matrix elements of the electronic Hamiltonian H_nm = ⟨Φ_n | Ĥ_el | Φ_m⟩ and the nonadiabatic couplings D_nm = ⟨Φ_n | Φ̇_m⟩ = Ṙ·⟨Φ_n | ∇_R | Φ_m⟩.
These can be divided into separate expressions for the bound and continuum states, resulting in the diabatic and nonadiabatic couplings between two bound anion states,
H_n'm' = ⟨Φ_n' | Ĥ | Φ_m'⟩
D_n'm' = ⟨Φ_n' | Φ̇_m'⟩,
and between a bound and a discretized continuum state,
H_n”m'(k_i) = (Δ V_k)^1/2 ⟨Φ̃_n”(k_i) | Ĥ | Φ_m'⟩
D_n”m'(k_i) = ⟨Φ_n”(k_i) | Φ̇_m'⟩ = (Δ V_k)^1/2 ⟨Φ̃_n”(k_i) | Φ̇_m'⟩.
In the above equations, the approximation to neglect the coupling terms between the continuum states has been introduced.
The discretized continuum states consist of an antisymmetrized product of a bound N-1 electron neutral state and a molecular scattering state of the free electron
Φ̃_n”(k_i)
=
A(
Φ^(n)_n”·ψ(k_i)
).
The simplest approximation to the free electron states in the presence of a neutral molecular core are plane waves
ψ(k_i)≈ Ne^ik_i·r
with a normalization constant N = (2π)^-3/2 to satisfy the orthonormality demanded in Eq. (<ref>).
Since this function would be completely independent of the electronic and nuclear configuration of the molecular core, which is a strong simplification, the plane waves are orthogonalized with respect to the anion's molecular orbitals (MOs) ϕ_m to include (at least to a certain degree) dependence on the molecular structure according to
ψ̃(k_i)
=
(2π)^-3/2 N_ortho(
e^ik_i·r
-
∑_m^occ ⟨ϕ_m | e^ik_i·r⟩ ϕ_m
)
=
N_ortho(
ψ(k_i)
-
∑_m^occ ⟨ϕ_m | ψ(k_i)⟩ ϕ_m
),
with the normalization constant
N_ortho
=
(
1
-
∑_m^occ |⟨ϕ_m | ψ(k_i)⟩|^2
)^-1/2
arising from the orthogonalization.
Notably, the summation over m includes the occupied MOs in all 'relevant' Slater determinants of all considered electronic states, that is, we considered all determinants which are needed to sufficiently represent the ground state and full CIS wavefunction of the excited state.
Beginning from the highest contribution to a wavefunction, determinants are included until a specific percentage or a user-adjusted maximum number of configurations per electronic state is reached (95 % / 10 configurations in the case of vinylidene<cit.>).
Considering for now the special case where only the anion's ground state is included, the used MOs are simply the energetically lowest ones up to the highest-occupied molecular orbital (HOMO).
The overlap integral between a plane wave and an MO present in Eq. (<ref>), ⟨ϕ_m | ψ(k_i)⟩, can be computed analytically by expanding the MO into the Gaussian atomic orbital (AO) basis, with the integral involving a single AO |ν⟩ given by
⟨ν | ψ(k)⟩ =
(2π)^-3/2 ∫ d^3𝐫 e^ik·r φ_ν(r)
=
(2α_ν)^-3/2 exp(ik·A_ν - k^2/4α_ν)
×∏_j=x,y,z
(-i√(4α_ν))^-n_ν,j
H_n_ν,j(
k_j/√(4α_ν))
,
where the H_n_ν,j are the Hermite polynomials of order n_ν,j.
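As a numerical illustration, the closed-form expression above can be coded directly for a single primitive Cartesian Gaussian. The function below is our own sketch: the name, the omission of contraction coefficients and primitive normalization, and the use of SciPy's physicists' Hermite polynomials are assumptions on our part rather than a transcription of the program.

[language=python]
import numpy as np
from scipy.special import eval_hermite

def pw_primitive_overlap(k, alpha, A, n):
    """<nu|psi(k)> for one primitive Cartesian GTO with exponent alpha, center A = (A_x, A_y, A_z)
    and angular momentum n = (n_x, n_y, n_z), following the expression quoted above."""
    k = np.asarray(k, dtype=float)
    A = np.asarray(A, dtype=float)
    val = (2.0 * alpha) ** (-1.5) * np.exp(1j * np.dot(k, A) - np.dot(k, k) / (4.0 * alpha))
    for kj, nj in zip(k, n):
        val *= (-1j * np.sqrt(4.0 * alpha)) ** (-nj) * eval_hermite(nj, kj / np.sqrt(4.0 * alpha))
    return val

# example: p_x-type primitive at the origin probed by a plane wave along x
print(pw_primitive_overlap([0.5, 0.0, 0.0], alpha=0.8, A=[0.0, 0.0, 0.0], n=[1, 0, 0]))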
§.§ Electronic coupling terms
There are anionic systems, for example the vinylidene anion<cit.>, that do not support a bound excited state, in which case the consideration of only the ground state and the continuum in the process of autoionization is sufficient.
Besides that, for example in molecules exhibiting dipole-bound excited states <cit.>, several bound anionic states and the interaction among them are relevant as well.
Nonetheless, to keep the formalism concise, if not noted otherwise we discuss in the following the electronic coupling terms for the special case of both anion and neutral molecule being in their respective electronic ground states, which in turn are represented by a single Slater determinant.
The generalization to excited states and/or multideterminantal wavefunctions is straightforward.<cit.>
We denote the bound anionic ground state wavefunction by |Φ_0⟩ and the continuum wavefunctions by |Φ_i⟩, the latter being constructed as an antisymmetrized product of the neutral ground state and a free electron state function with wave vector k_i, similar to Eq. (<ref>).
§.§.§ Diabatic couplings
In the case of two adiabatic bound anion states, the coupling matrix elements H_n'm' given in Eq. (<ref>) yield zero for all n' ≠ m' since these states are orthonormal eigenstates of the electronic Hamiltonian.
On the other hand, since in our methodology the bound and continuum state wavefunctions are constructed using separate quantum-chemical calculations for the anion and neutral, and the free electron wavefunction is taken as a plane wave, the continuum state functions are crude approximations to the actual adiabatic eigenfunctions of the electronic Hamiltonian for the N-electron system and therefore, diabatic couplings between the bound and continuum electronic states arise.
As elaborated in detail in Ref. aid, according to Eq. (<ref>) and defining V_i0^dia(k_i) as
H_i0(k_i)
≡ ⟨Φ_i | Ĥ | Φ_0⟩ ≡ (Δ V_k)^1/2 V^dia_i0(k_i),
the diabatic coupling between a bound and a continuum state can be written in terms of the AO basis as
V^dia_i0(k_i)
= ∑_λμν[
A_λμν(
⟨𝐤_i λ || μν⟩ - ∑_σ
B_σ ⟨σλ || μν⟩) +
A̅_λμν(
⟨𝐤_i λ | μν⟩ - ∑_σ
B_σ ⟨σλ | μν⟩)
].
In this formula the Greek letters denote the AO basis functions, ⟨𝐤_i λ | μν⟩ is an electron-electron repulsion integral and ⟨𝐤_i λ || μν⟩ = ⟨k_i λ | μν⟩ - ⟨k_i λ | νμ⟩ its antisymmetrized variant.
The prefactors A_λμν, A̅_λμν and B_σ comprise AO expansion coefficients and overlap integrals and are defined as follows (assuming that the extra electron of the anion has α spin):
A_λμν
= ∑_n^occ,α∑_q,p<q^occ,α
(-1)^n+p+q-1det 𝐒_in,pq
×(
c_λ^(n)
-
∑_u^occ,α
c_λ^(u)
S_nu)
c_μ^(p) c_ν^(q)
A̅_λμν
= ∑_n̅^occ,β∑_p^occ,α∑_q̅^occ,β
(-1)^n̅+p+q̅-1det 𝐒_in̅,pq̅
×(
c_λ^(n̅)
-
∑_u̅^occ,β
c_λ^(u̅)
S_n̅u̅)
c_μ^(p) c_ν^(q̅)
B_σ
= ∑_r^occ,α∑_ρ
c_σ^(r)
c_ρ^(r) ⟨k_i | ρ⟩,
where the indices (including their variants with an overbar) p, q, r refer to anion MOs, n, u to neutral MOs, and det 𝐒_in,pq denotes the minor determinant of the overlap matrix between continuum and bound state orbitals where the rows of the free electron orbital ψ̃(𝐤_i) and neutral orbital χ_n as well as the columns of anion orbitals ϕ_p and ϕ_q have been deleted.
For the full derivation of these equations the reader is referred to Ref. aid.
§.§.§ Nonadiabatic couplings
The nonadiabatic coupling terms as defined in Eqs. (<ref>) and (<ref>) are calculated using the finite-difference approximation for the time derivative, which leads to
D_i0(t)
=
⟨Φ_i(t) | d/dt Φ_0(t)⟩
≈ 1/(2Δ t) (
⟨Φ_i(t-Δ t) | Φ_0(t)⟩ - ⟨Φ_i(t) | Φ_0(t-Δ t)⟩ )
In the case of two anionic bound states, these terms are evaluated according to Refs. mitric2008, werner2008, werner2010.
One can simplify the arising terms by integrating over all but one electron coordinate. For the first term of Eq. (<ref>) this yields
⟨Φ_i(t') | Φ_0(t)⟩ =
N^-1/2 ⟨ψ̃(k_i,t') | ψ^D(t',t)⟩,
where we have abbreviated t' = t-Δt and have defined the one-electron function ψ^D(t',t), which is an analog to a molecular Dyson orbital with the N- and N-1-electron wavefunctions taken at different time steps and geometries.
Using Eqs. (<ref>) and (<ref>) the resulting nonadiabatic coupling terms read
D_i0(k_i,t)
=
(Δ V_k)^1/2 N_ortho/(2 √(N) Δ t) [
⟨ψ(k_i) | ψ^D(t',t)⟩ - ⟨ψ(k_i) | ψ^D(t,t')⟩ -
∑_n
⟨ψ(k_i) | ϕ_n(t)⟩ ⟨ϕ_n(t') | ψ^D(t',t)⟩
+
∑_n
⟨ψ(k_i) | ϕ_n(t)⟩ ⟨ϕ_n(t) | ψ^D(t,t')⟩ ].
§.§ Adiabatic ionization and electronic decay
The main focus of the above presented methodology lies on describing the nonadiabatic process of vibrational autoionization. However, in the course of the molecule's dynamical evolution instances can occur where the occupied anionic state becomes unbound as the result of changes in nuclear geometry.
In this case, ionization is possible as an exclusively adiabatic electronic process without coupling to the nuclear motion.
This process can be included approximately in our method by simulating the temporal spread of the ejected electron as a wavepacket evolving freely in space. As a quantitative measure, the electronic spatial extent, i.e., the expectation value of 𝐫̂^2, is calculated as a function of time.
Specifically, once a time step is reached where the VDE has become negative, the highest-occupied orbital of the last bound geometry, ϕ(r, t_0), is used as the initial free electronic wavepacket.
In the case where one only considers the anionic ground state, this corresponds to the HOMO.
If also an excited state is involved, natural transition orbitals (NTOs)<cit.> are calculated and the highest-occupied and lowest-unoccupied NTO (HONTO and LUNTO) are used for the anionic ground and excited state, respectively.
Such an electronic wavepacket is then propagated in time and its spatial extent is evaluated according to
⟨𝐫̂^2⟩(t)
=
⟨ϕ(𝐫,t) | 𝐫̂^2 | ϕ(𝐫,t)⟩
=
∑_μν c_μ c_ν ⟨φ_μ(𝐫,t) | 𝐫̂^2 | φ_ν (𝐫,t)⟩.
Here φ_μ,ν denote the Gaussian atomic basis functions freely propagated in time:
φ_μ(𝐫,t) = ∫ d^3𝐫' K(𝐫,𝐫',t,0) φ_μ (𝐫',0)
with the free electron propagator
K(𝐫,𝐫',t,0)
=
⟨𝐫 | e^-i𝐩̂^2 t/(2m_eħ) | 𝐫'⟩.
Using Cartesian Gaussian basis functions of s, p and d type one obtains the following analytic expression for the electronic wavepacket:
φ_μ(𝐫,t)
=
N_l_xl_yl_z e^-α/(1+iβ t) r^2 [
-Λ iβ t/2α
(1+iβ t)^-5/2
+
(1+iβ t)^-3/2 - ∑_j l_j ∏_j=x,y,z (r_j - A_j)^l_j ],
where A is the spatial center of the respective basis function, l_i denotes the angular momentum quantum number for the i'th spatial direction and Λ is a constant that is unity if one of the l_i = 2 and zero if all l_i < 2.
The AO integrals in Eq. (<ref>) are calculated with an implementation of the McMurchie-Davidson scheme<cit.>.
To relate the spatial extent in a simple way to the lifetime of the unbound state, an auxiliary spherically symmetric electron distribution is considered which within the initially determined radius r_0 = √(⟨r^2⟩(t_0)) contains a probability of 99%. Subsequently, with ⟨r^2⟩ increasing with time, the probability within r_0 decreases, giving rise to a population decay curve which can be related to a time constant τ.
The latter is incorporated into the propagation of the electronic wavefunction given by Eq. (<ref>) by adding an imaginary component to the electronic state energy,
E^(a)→
E^(a)-iħ/2τ,
which leads to an exponential population decay due to adiabatic ionization in regions where the VDE is negative for the given electronic state.
§.§ Surface-hopping procedure
Solution of the set of Eqs. (<ref>) along a nuclear trajectory yields the time-dependent electronic state coefficients c_n(t).
Within the surface-hopping methodology, a switch from the occupied bound electronic state n to any other state m is determined by the hopping probability depending on the electronic state populations ρ_nn = |c_n|^2, which is
P_n→ m
=
(-ρ̇_nn/ρ_nn) (ρ̇_mm/∑_k ρ̇_kk) Δ t
for ρ̇_nn < 0 and ρ̇_mm > 0 and zero in any other instance. In the above expression, the sum over k includes all states with ρ̇_kk>0.
In case a surface hop occurs, to ensure energy conservation the nuclear velocities are rescaled such that for kinetic energies T and electronic potential energies E_n of anion (a) and neutral (n) the following conditions are fulfilled:
T'^(a)
=
T^(a) +
E_n^(a) -
E_m^(a)
for a hop between anionic bound states and
T'^(n)
=
E_n^(a) +
T^(a) -
E_m^(n) -
E_el(k_i)
for a hop into the continuum (i.e. autoionization).
For a more detailed description of the hopping procedure the reader is referred to Ref. domckebook.
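As an illustration of the energy-conservation conditions above, a uniform rescaling of all nuclear velocities can be sketched as follows; whether the velocities are rescaled uniformly or, e.g., along a particular direction is not specified here, so this is an assumption of ours.

[language=python]
import numpy as np

def rescale_velocities(V, masses, T_target):
    """Rescale velocities so that the kinetic energy matches T_target (None signals a frustrated hop)."""
    if T_target < 0.0:
        return None                                   # insufficient kinetic energy: reject the hop
    T_now = 0.5 * np.sum(masses[:, None] * V**2)
    return V * np.sqrt(T_target / T_now)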
§ PROGRAM IMPLEMENTATION
In the following chapter a detailed account of how the theory is actually implemented in the program package will be provided.
For an easier understanding, in Fig. <ref> the program flow is displayed schematically, with a color code indicating the module handling the respective task.
Starting from the generation of an ensemble of nuclear coordinates R(t) and velocities Ṙ(t) at the time t = t_initial using the module in the folder (red), a first quantum-chemical calculation is performed by an external quantum-chemistry program - to date these include Gaussian09/Gaussian16 <cit.> and QChem <cit.> (blue) - which yields the forces from which the accelerations R̈(t) of the nuclei are computed.
The nuclei are then propagated by integration of Newton's equations of motion for one nuclear time step using the module (orange).
With the new nuclear coordinates R(t + Δ t), a new set of quantum-chemical calculations can be performed, yielding the new energy gradients necessary for the evaluation of the velocities Ṙ(t + Δ t).
With the quantum-chemical calculations at t and t + Δ t, one is now able to construct the electronic continuum states as well as the coupling matrices of the diabatic and nonadiabatic couplings using the module (green).
From this point, the electronic state coefficients c(t) are propagated in parallel to the nuclear dynamics by integrating the electronic Schrödinger equation, yielding c(t + Δ t).
These are utilized to compute hopping probabilities from the occupied bound state to all other (bound and continuum) states.
The switching between the states is induced stochastically according to the respective hopping probabilities given in Eq. (<ref>).
After writing the results into the various output files, time is shifted to t = t + Δ t, thereby completing one time step.
To make this initial overview more specific, in the following the underlying algorithms are explained in more detail.
§.§ Electronic structure calculation
All electronic structure and energy gradient calculations can be performed by using any Kohn-Sham (TD)-DFT level of theory provided within the Gaussian09, Gaussian16 or QChem program packages.
The AO basis set needs to be defined explicitly in a separate input file, thus also allowing for additional augmentation of basis sets, which is of utmost importance when describing molecular anions.<cit.>
The and modules provide an interface to the external programs by creating input files and calling the respective programs. The and modules contain classes that parse the external output files and organize the data into the form needed in the program.
§.§ Generation of initial conditions
The initial nuclear coordinates and velocities are determined by stochastic sampling of an appropriate probability distribution function for the harmonic normal modes of the system.
These can be computed from the electronic Hessian matrix at an optimized geometry of the studied molecule.
For molecules in the vibrational ground state as well as for a thermal ensemble of molecules, the Wigner function
ρ_W({Q_i,P_i})=1/(πħ)^N∏_i=1^N α_i(T) exp(-α_i(T)/ħω_i(P_i^2+ω_i^2Q_i^2))
with
α_i(T)
=
tanh(ħω_i/2k_BT)
is employed, where {Q_i,P_i} denote the normal coordinates and momenta, ω_i is the angular frequency of normal mode ν_i and T the thermodynamic temperature.
Besides these cases, in experiments investigating vibration-induced autoionization another type of initial conditions is often important in which one or more normal vibrations of the system are excited by laser irradiation.
In principle, the respective initial conditions could be also generated by using a Wigner function. However, Wigner functions for excited vibrational states can assume negative values and can thus not be directly identified with a probability distribution.
A possible approach might be to regard the positive and negative parts of the Wigner function separately as probability distributions and to run a "positive" and a "negative" ensemble of initial conditions, the final properties of the system then being obtained by appropriate averaging.
As a more efficient alternative, which gets on with only one single ensemble, we employ a positive definite probability distribution constructed from the excited-vibrational state wavefunctions in position and momentum space,
ρ^(i)_υ(Q_i,P_i)=|χ^(i)_υ(Q_i)|^2|χ̃^(i)_υ(P_i)|^2,
where χ^(i)_υ(Q_i) and χ̃^(i)_υ(P_i) are the harmonic oscillator wavefunctions for quantum state υ of normal mode ν_i in position and momentum space, respectively.
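For the vibrational ground state or a thermal ensemble, the Wigner function above is a product of Gaussians in Q_i and P_i, so sampling reduces to drawing from normal distributions. The following sketch illustrates this for a single normal mode; the function name and the use of a consistent (e.g. atomic) unit system are our assumptions. Excited-state initial conditions according to the positive-definite distribution above could be generated analogously, e.g. by rejection sampling of |χ_υ|^2 |χ̃_υ|^2.

[language=python]
import numpy as np

def sample_mode(omega, T, n_samples, hbar=1.0, kB=1.0):
    """Draw (Q, P) pairs for one harmonic normal mode from the (thermal) Wigner function."""
    alpha = np.tanh(hbar * omega / (2.0 * kB * T)) if T > 0 else 1.0
    sigma_Q = np.sqrt(hbar / (2.0 * alpha * omega))   # width of the Gaussian in Q
    sigma_P = np.sqrt(hbar * omega / (2.0 * alpha))   # width of the Gaussian in P
    Q = np.random.normal(0.0, sigma_Q, n_samples)
    P = np.random.normal(0.0, sigma_P, n_samples)
    return Q, P

# example: 100 initial conditions for a mode of ~1500 cm^-1 (about 0.00684 a.u.) at T = 0
Q, P = sample_mode(0.00684, 0.0, 100)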
§.§ Nuclear dynamics
Given Newton's equations of motion (<ref>), the nuclei are propagated by numerical solution using the velocity Verlet algorithm <cit.> for a user-defined time step.
Within this algorithm, the nuclear coordinates at t+Δ t are obtained from a Taylor series expansion around the coordinates at t:
R(t + Δ t)
≈R(t) +
Ṙ(t)Δ t +
1/2 M^-1F(t) Δ t^2,
where in the last term the acceleration has been formulated using the force F given by the electronic potential energy gradient (cf. Eq. (<ref>)).
With the new nuclear coordinates, the force at t + Δ t can be evaluated, giving rise to the new nuclear velocities
Ṙ(t + Δ t)
=
Ṙ(t) +
Δ t/2 M^-1[
F(t) + F(t + Δ t)
]
.
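A compact sketch of one such propagation step is given below; the array shapes and the force callback (which would wrap the external quantum-chemistry gradient call) are our own illustrative choices.

[language=python]
import numpy as np

def velocity_verlet_step(R, V, F, masses, dt, force_func):
    """One velocity Verlet step for coordinates R, velocities V and forces F (shape (N_atoms, 3))."""
    inv_m = 1.0 / masses[:, None]
    R_new = R + V * dt + 0.5 * inv_m * F * dt**2      # Taylor step for the coordinates
    F_new = force_func(R_new)                         # new gradient from the electronic-structure code
    V_new = V + 0.5 * dt * inv_m * (F + F_new)        # velocities from the averaged forces
    return R_new, V_new, F_new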
Due to the approximative nature of the algorithm above and the accuracy of the calculated energy gradients, it is possible that the velocities develop small overall translational or rotational components although the initial conditions were determined with these degrees of freedom set at rest.
These numerical inaccuracies are detected, in the case of translational velocity by the shift of the center of mass away from the origin of the coordinate system, in the case of rotation by the calculation of the angular velocity according to
ω_rot = I^-1 L
with the moment of inertia I and the angular momentum L.
The translational and rotational portions of the nuclear velocities are then subtracted from the total velocity and the remaining vibrational part is rescaled to ensure energy conservation.
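A sketch of this cleanup step is given below (the function name and array conventions are ours); the subsequent rescaling of the vibrational velocities to restore energy conservation is omitted here.

[language=python]
import numpy as np

def remove_translation_rotation(R, V, masses):
    """Project overall translation and rigid rotation out of the nuclear velocities."""
    m = masses[:, None]
    V = V - np.sum(m * V, axis=0) / masses.sum()          # remove centre-of-mass motion
    com = np.sum(m * R, axis=0) / masses.sum()
    r = R - com
    L = np.sum(np.cross(r, m * V), axis=0)                # total angular momentum
    I = np.zeros((3, 3))
    for ri, mi in zip(r, masses):
        I += mi * (np.dot(ri, ri) * np.eye(3) - np.outer(ri, ri))
    omega = np.linalg.solve(I, L)                         # omega_rot = I^-1 L
    return V - np.cross(omega, r)                         # subtract the rigid-rotation part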
After each nuclear dynamics step, the new nuclear coordinates and velocities are written into separate output files, the coordinates in a format of consecutive xyz files which can be visualized easily by external software (for example with the VMD program package <cit.>, which is warmly recommended).
§.§ Electronic dynamics
Since the evaluation of the electronic coupling terms in Eq. (<ref>) is, apart from the external quantum-chemistry calculations, the computationally most expensive step in the dynamics, several approximations need to be implemented, which will be discussed in the following.
§.§.§ Calculation of coupling terms
Before calculating the coupling terms, the discretization procedure for the generation of wave vectors needed to construct the continuum state wavefunctions will be discussed.
To uniformly discretize angular orientation and kinetic energy of ejected electrons, it is natural to discretize angular and energetic distribution separately.
Since the kinetic energy of a plane wave is
E_kin(k_i) = ħ^2 |k_i|^2/2 m_e
and therefore proportional to the length of the wave vector squared, this length is discretized such that the desired energy range is covered evenly.
For a given energy, the vector orientations are approximately evenly distributed according to the Fibonacci sphere algorithm <cit.>.
The volume elements Δ V_k needed for calculating the bound-continuum couplings in Eqs. (<ref>) and (<ref>) are constructed as the difference of spherical caps around the corresponding wave vectors with the base diameter as an average over the six nearest points on the sphere surrounding the vector.
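A minimal sketch of this discretization is shown below; the half-integer offset in the Fibonacci construction and the uniform energy grid are common illustrative choices on our part and not necessarily the exact conventions of the program.

[language=python]
import numpy as np

def fibonacci_sphere(n_points):
    """Approximately evenly distributed unit vectors on the sphere (golden-angle spiral)."""
    golden = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(n_points)
    z = 1.0 - 2.0 * (i + 0.5) / n_points
    rho = np.sqrt(1.0 - z**2)
    phi = golden * i
    return np.column_stack((rho * np.cos(phi), rho * np.sin(phi), z))

def k_vectors(E_max, n_E, n_s, hbar=1.0, m_e=1.0):
    """n_E * n_s wave vectors covering kinetic energies up to E_max evenly in energy."""
    energies = (np.arange(n_E) + 0.5) * E_max / n_E
    lengths = np.sqrt(2.0 * m_e * energies) / hbar
    dirs = fibonacci_sphere(n_s)
    return np.concatenate([k * dirs for k in lengths])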
In the diabatic coupling terms in the AO basis (Eq. (<ref>)) two types of four-center integrals are present: (i) those involving four Gaussian-type atomic orbitals (GTOs), ⟨σλ | μν⟩. These are evaluated by using the library <cit.> within the PySCF program package <cit.>.
(ii) integrals involving a plane wave of wave vector 𝐤_i and three GTOs, ⟨𝐤_i λ | μν⟩.
These terms can in principle be calculated analytically as outlined, e.g., in Ref. colle1987, but this is computationally unfeasible for the present purpose since an immense number of plane waves has to be included for a proper discretization of the ionization continuum. Instead, the plane waves are approximated by their Taylor expansion around the center of basis function |μ⟩, R_μ.
As will be discussed in the Performance Section later on, for sufficient accuracy in the approximation it is necessary to include not only the zero'th order term (assuming the plane wave to be constant in the vicinity of the molecule), but also the first-order term, resulting in the approximation
e^i k·r =
e^i k·R_μe^i k· (r - R_μ)
≈e^i k·R_μ[
1 + i k· (r - R_μ)
].
This leads to two terms for the two-electron integrals as follows:
⟨𝐤_i λ | μν⟩ ≈ e^i k·R_μ [
⟨λ | μν⟩ +
i k ⟨λ | μ̃ν⟩ ].
In the above expression, |μ̃⟩ is an AO basis function with an angular momentum quantum number by one higher than |μ⟩ while having the same Gaussian exponent.
This heavily reduces the amount of two-electron integrals to be computed from n_AO^3 n_PW to n_AO^2 [n_AO + n'_AO], with n_AO being the total number of AO basis functions, n'_AO the total number of basis functions with increased quantum number and n_PW the total number of plane waves. For instance, in the case of vinylidene in Ref. aid, this amounts to a reduction by a factor of ∼30000.
These terms are again evaluated using the PySCF module.
The prefactors A, A̅ and B present in Eq. (<ref>) are straightforwardly implemented in Python according to Eqs. (<ref>), (<ref>) and (<ref>).
Evaluation of the Dyson orbitals needed for the calculation of the nonadiabatic couplings is implemented as described before in Ref. humeniuk2013 for arbitrary basis sets for the anion and the neutral molecule.
After construction of the Dyson orbitals from all bound anionic states to the neutral ground state the nonadiabatic coupling terms are then calculated according to Eq. (<ref>). To ensure that the wavefunctions of bound states do not switch their arbitrary signs (which can happen, since the external quantum-chemistry calculations are independent of each other), the overlap of electronic wavefunctions of all bound states are tracked throughout the trajectories and accounted for in all formulae involving the respective states.
§.§.§ Calculation of electronic state coefficients
The electronic degrees of freedom are propagated by solving the time-dependent Schrödinger equation (<ref>) in the manifold of all considered bound anion and continuum electronic states using Adams' method as implemented in the class of Python's module <cit.> with a user-defined integration time step. For increased computational stability the equations are beforehand transformed into the interaction picture, introducing the new electronic state coefficients
a_n(t)
=
c_n(t) e^i/ħ H_nn t.
Inserting this into Eq. (<ref>) leads to the actually implemented electronic equation of motion
ȧ_n(t)
=
∑_m
[
-i/ħH̃_nm - D_nm]
a_m(t)
e^-i/ħ (H_mm - H_nn) t
where H̃_nm denotes the Hamiltonian matrix of the system with zeros on the diagonal.
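A minimal sketch of this propagation step is given below. We use SciPy's complex-valued 'zvode' integrator with the Adams method as one concrete realization; whether this matches the module actually used by the program, and the assumption that H and D stay constant during the nuclear time step, are simplifications on our part.

[language=python]
import numpy as np
from scipy.integrate import ode

def propagate_coefficients(a0, H, D, dt_nuc, dt_el):
    """Integrate the interaction-picture coefficients a_n over one nuclear time step (hbar = 1)."""
    E = np.diag(H).real
    H_off = H - np.diag(np.diag(H))                  # H-tilde: zeros on the diagonal

    def rhs(t, a):
        phase = np.exp(-1j * np.subtract.outer(E, E).T * t)   # exp(-i (H_mm - H_nn) t)
        return ((-1j * H_off - D) * phase) @ a

    solver = ode(rhs).set_integrator("zvode", method="adams")
    solver.set_initial_value(np.asarray(a0, dtype=complex), 0.0)
    while solver.successful() and solver.t < dt_nuc:
        solver.integrate(min(solver.t + dt_el, dt_nuc))
    return solver.y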
§.§.§ Hopping procedure
Hopping probabilities are directly evaluated according to Eq. (<ref>) from the state coefficients: A random number between 0 and 1 is generated using the function in the module and hopping is conducted accordingly.
Once a trajectory hops into a continuum state, it could in principle be straightforwardly continued on the potential energy surface of the neutral molecule.
Although this can be quite insightful if one is interested in the subsequent geometric changes of the ionized system, we follow a different approach and stop the trajectories after electron detachment since our focus is set on the actual autoionization process.
This allows us to implement a modification of the surface-hopping procedure that leads to a great improvement of the hopping statistics. The idea is to divide a single trajectory into 'sub-trajectories', i.e. to evaluate the stochastic hopping decision independently for n_subtraj copies of the trajectory (see Fig. <ref>).
Every time a sub-trajectory hops into the continuum, n_subtraj is reduced by one, and once it reaches zero, the underlying nuclear dynamics is stopped.
It has to be noted that this procedure is only followed for hops into the continuum, while for hops between bound anionic states only a single hopping event per trajectory and time step is possible due to the need to continue the nuclear dynamics on an unambiguously determined potential energy surface.
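The sketch below illustrates one possible way to organize this bookkeeping for a single time step; the per-sub-trajectory evaluation, the helper arrays, and the separate draw for bound-bound switches are simplifications of ours, not a literal transcription of the program.

[language=python]
import numpy as np

def hopping_step(rho_old, rho_new, n, dt, n_subtraj, continuum_mask):
    """Stochastic hopping out of bound state n for one time step.

    rho_old/rho_new: populations |c_m|^2 at t and t + dt; continuum_mask marks the
    discretized continuum states. Returns (bound state index, remaining sub-trajectories)."""
    drho = (rho_new - rho_old) / dt
    if drho[n] >= 0.0:
        return n, n_subtraj                                   # state n is not being depopulated
    gain = np.clip(drho, 0.0, None)
    if gain.sum() <= 0.0:
        return n, n_subtraj
    probs = (-drho[n] / rho_new[n]) * gain / gain.sum() * dt  # P_{n -> m} as defined above
    cum = np.cumsum(probs)
    for _ in range(n_subtraj):                                # each sub-trajectory may ionize
        m = np.searchsorted(cum, np.random.random())
        if m < len(probs) and continuum_mask[m]:
            n_subtraj -= 1
    m = np.searchsorted(cum, np.random.random())              # bound-bound switch of the parent
    if m < len(probs) and m != n and not continuum_mask[m]:
        n = m
    return n, n_subtraj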
§.§ Graphical user interface
Our program package comes with a graphical user interface (GUI) for the input generation as well as an analysis tool for trajectories.
An example of the former is displayed in Fig. <ref>.
In the input generator, which is started with
[language=bash]
hortensia --gui
in addition to all relevant settings for the actual simulation, the user may find options for the generation of a complete folder structure for the trajectories as well as bash submit scripts to be used with the Slurm Workload Manager<cit.>.
Furthermore, the above mentioned Wigner ensemble scripts can be used and initial conditions can be generated. Therefore it is highly recommended to use the GUI feature.
Additionally, through the command
hortensia --analysis
one can open the analysis tool which is able to read output files and visualize them in a sub-window using the program package <cit.>.
§.§ Installation
The most convenient way to install the program package is downloading or cloning the https://github.com/mitric-lab/HORTENSIA_LATEST.git repository on our Github page<cit.>. In the main folder, execute
[language=bash]
python cysetup.py build_ext --inplace
pip install .
to first compile the Cython modules and then install the program. The program package requires (and will automatically pip install)
*
* - for faster summation of large arrays, mainly in the calculation of the two-center integrals in Eqs. (<ref>) and (<ref>)
* - mainly in the integration of the electronic Schrödinger equation as outlined in subsection <ref>
* - for the calculation of the two-electron integrals in Eqs. (<ref>) and (<ref>)
* - for the parallelization of diabatic couplings
* - for the plots in the sub-window of the analysis tool described before
and all dependencies thereof.
Using the command
[language=bash]
pip uninstall hortensia_latest
will uninstall the program package.
§ DISCUSSION
In this section we will quantify aspects of the program related to overall performance.
This includes the quality of approximations within the methodology as well as optimization of time consumption and computational resources.
Moreover the exemplary autoionization dynamics of the 2-cyanopyrrolide anion is discussed.
§.§ Accuracy of k-vector discretization and integral approximations
The accuracy of the Fibonacci sphere algorithm for angular discretization in k-space is illustrated in Fig. <ref> by the covered surface area of a unit sphere using a given number of distributed points.
The total surface area (orange graph) is presented together with the relative error |A_fib - A_sphere|/A_sphere (green graph) with respect to the exact surface area 4π ≈ 12.566 (blue line).
The approximated area rapidly converges to a value of ∼12.243, which corresponds to a relative error of ∼2.575 %.
Since in the coverage of k-vector lengths no additional approximation is introduced and for their respective volume elements the k-space is divided energetically evenly (thus covered exactly with respect to vector length), the error in the surface area for specific vector lengths equates to the overall error of the volume elements.
Therefore the sum of these volume elements results in a total volume that deviates by less than 3 % from the actual sphere for arbitrary numbers of vector orientations n_s ≥ 30 and lengths n_E (giving a total number of wave vectors n_k = n_E · n_s). |
http://arxiv.org/abs/2307.04185v1 | 20230709142436 | Parton shower algorithm with saturation effect | [
"Yu Shi",
"Shu-Yi Wei",
"Jian Zhou"
] | hep-ph | [
"hep-ph"
] |
Key Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China
Key Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China
Key Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China
We extend the previously developed small x parton shower algorithm to include the kinematic constraint
effect and k_t resummation effect. This work enables the Monte Carlo generator to simultaneously resum large k_t and small x logarithms in the saturation regime for the first time. It is an important step towards simulating processes involving multiple well separated hard scales, such as di-jet production in eA collisions at EIC.
Parton shower algorithm with saturation effect
Jian Zhou
August 12, 2023
==============================================
§ INTRODUCTION
The study of dense gluonic matter at small x inside a large nucleus and nucleon has been and continues to be an important frontier of high-energy nuclear physics. It is also one of the main objectives of the physics program of the future Electron-Ion Collider (EIC) <cit.>. Tremendous theoretical efforts have been made to search for smoking gun evidence of saturation. To this end, hard scattering processes in eA collisions at EIC are expected to deliver crucial messages about how saturation emerges from strongly interacting gluonic matter. A Monte Carlo event generator that incorporates saturation effects could play an essential role in fully harnessing the potential of future experimental data taken from EIC.
As the core of general purpose Monte Carlo event generators, parton showers describe successive radiations from highly-energetic partons that participate in the hard scattering process. While most parton branching algorithms <cit.> are based on the soft and collinear approximation, which effectively resums the Dokshitzer-Gribov-Levin-Altarelli-Parisi (DGLAP) <cit.> type logarithms to all orders, only a few parton shower generators <cit.> have been developed to describe small x processes by simulating semi-hard emissions which give rise to logarithms of the type ln (1/x) <cit.>. Among these generators, Cascade <cit.>, which is built on the Catani-Ciafaloni-Fiorani-Marchesini (CCFM) evolution equation <cit.>, is the most widely used in phenomenological studies (see <cit.> for recent examples). However, none of the aforementioned parton showers takes into account the gluon recombination process that occurs in the dense target.
The first attempt to include the saturation effect in a parton shower was presented in Ref. <cit.>,
where both forward and backward evolution schemes were formulated. The underlying parton branching equation employed in our formulation is the folded Gribov-Levin-Ryskin (GLR) equation <cit.>. Although the GLR equation is somewhat outdated compared to modern treatments of small x evolution <cit.>, it is sufficient for simulating events in eA collisions at EIC energy. This is because the gluon density probed at EIC is not high enough for the triple pomeron vertex to dominate the gluon fusion process. In the previous work <cit.>, we performed a consistency check by comparing the transverse momentum distribution of exchanged gluons reconstructed from the parton shower generator with numerical solutions of the GLR equation. Full agreement between the two results was reached. The running coupling effect was also implemented in our Monte Carlo simulation.
In the present work, we improve this parton branching algorithm by imposing the kinematic constraint arising from the requirement that the off-shellness of the t-channel gluon should be dominated by its transverse momentum squared <cit.>. Though it is formally a sub-leading logarithmic contribution, the kinematic constraint effect is known to significantly slow down the evolution speed. It is thus a necessary component of the Monte Carlo generator for any practical phenomenological studies. Actually, the angular ordering of soft emissions is automatically imposed once the kinematic constraint is applied, since the angular ordering constraint is weaker than the kinematic constraint <cit.> in the small x limit. The coherent branching effect is thus effectively included in the parton shower. On the other hand, for the case of hard scattering processes involving multiple well-separated hard scales, like di-jet production in eA collisions, the transverse momentum dependent (TMD) type large logarithm α_s ln^2 (Q^2/k_⊥^2) and the small x logarithm α_s ln(1/x) need to be simultaneously resummed.
Such a joint resummation formalism has been established in a series of publications <cit.>. Another main objective of this work is to implement the joint resummation in the Monte Carlo simulation.
The rest of the paper is organized as follows. In Sec. II, we discuss how to integrate the kinematic constraint effect into the parton shower algorithm. The formulations of both forward and backward evolution are presented. In Sec. III, the implementation of the joint resummation in the algorithm is discussed. Our starting point is the Sudakov factor derived from a folded version of the Collins-Soper (CS) and the renormalization group equation. It is shown that the k_⊥ distribution reconstructed from the parton shower is identical to the numerical and analytical results obtained from the CS equation and renormalization group equation. The paper is summarized in Sec. IV.
§ THE KINEMATIC CONSTRAINT
In our previous work <cit.>, we developed a Monte Carlo method to simulate the parton shower at small x based on the GLR evolution equation <cit.>. Our formulation only takes into account the summation of the leading logarithm ln(1/x) contribution, which is known to result in too rapid a growth of the gluon number density towards the small x region. From a phenomenological point of view, it is crucial to go beyond the leading logarithm accuracy and include the various sub-leading logarithm contributions <cit.>, among which the
kinematic constraint effect <cit.> is a particularly interesting one. The kinematic constraint is required for the validity of the BFKL/GLR equation at small x. The constraint is
needed to ensure that the virtuality of the gluons along the chain is controlled by the transverse momenta. The implementation of the kinematic constraint can significantly slow down the small x evolution and thus lead to a better description of relevant phenomenology. Note that the angular ordering of the gluon emissions is automatically satisfied once the kinematic constraint is imposed in the small x limit. The coherent branching effect is thus effectively achieved following the steps outlined below.
The starting point of the Monte Carlo implementation for such an effect is the folded GLR equation with the kinematic constraint.
Following the arguments made in Refs. <cit.>, the transverse momentum squared of the radiated gluon, l_⊥^2, must be smaller than [(1-z)/z] k_⊥^2, where k_⊥ and z are the transverse momentum and longitudinal momentum fraction carried by the daughter gluon, respectively.
The inclusion of the kinematic constraint leads to a modified GLR equation,
∂ N(η,k_⊥)/∂η = α̅_s/π ∫ d^2 l_⊥/l_⊥^2 N( η + ln[ k_⊥^2/(k_⊥^2+l_⊥^2) ], l_⊥+k_⊥ )
- α̅_s/π ∫_0^{k_⊥} d^2 l_⊥/l_⊥^2 N(η,k_⊥) - α̅_s N^2(η,k_⊥),
with α̅_s = α_s N_c/π, η = ln(x_0/x) and x_0 = 0.01. The function N(η,k_⊥) is related to the normal TMD gluon distribution G(η,k_⊥) through N(η,k_⊥) = (2α_s π^3/(N_c S_⊥)) G(η,k_⊥), with S_⊥ being the transverse area of the nucleon/nucleus.
Converting the above equation to the folded form of the GLR equation, it reads,
∂/∂η [ N(η,k_⊥)/Δ_ns(η,k_⊥) ] = α̅_s/π ∫_{Λ_cut} d^2 l_⊥/l_⊥^2
N( η + ln[ k_⊥^2/(k_⊥^2+l_⊥^2) ], l_⊥+k_⊥ ) / Δ_ns(η,k_⊥),
where Δ_ns (η , k_⊥) represents the probability of evolving from η_0 to η without resolvable branching. It is given by,
Δ_ns (η , k_⊥) = exp{-α̅_s ∫^η_η_0 dη' [ lnk_⊥^2/Λ_ cut^2 +N(η',k_⊥) ] },
where the infrared cut off Λ_ cut is the matter of choice about what we classify as a resolvable emission.
Emitted gluons with transverse momentum l_⊥<Λ_ cut are considered as the unresolvable ones. And their contribution has been combined with the virtual correction to cancel the infrared divergence. The resolvable branchings are defined as emissions above this range. All order contributions from the virtual correction and the unresolvable real emission are resummed into Δ_ns (η , k_⊥) which reduces to the non-Sudakov form factor <cit.> in the dilute limit by neglecting the saturation term.
Eq. <ref> can be converted into an integral form,
N(η,k_⊥) = N(η_0,k_⊥) Δ_ns(η,k_⊥)
+ α̅_s/π ∫_{η_0}^{η} dη' [ Δ_ns(η,k_⊥)/Δ_ns(η',k_⊥) ] ∫_{Λ_cut} d^2 l_⊥/l_⊥^2 N( η' + ln[ k_⊥^2/(k_⊥^2+l_⊥^2) ], l_⊥+k_⊥ ).
It is evident that the kinematic constrained small x equation is no longer a local equation. Namely, the increase of gluon number density at rapidity η is driven by the gluon distribution at rapidity η +ln[ k_⊥^2/ k_⊥^2+ l_⊥^2] rather than that at the same rapidity η. The corresponding weighting factor needs to be modified dramatically for the non-local case as shown below.
§.§ Forward evolution
With these derived folded evolution equations, we are now ready to introduce the Monte Carlo algorithm starting with the forward evolution case. For a given initial condition N(η_i,k_⊥,i), the first quantity to be generated by the algorithm is the value of η_i+1. As it has been done in <cit.>, this task can be achieved by solving the equation,
ℛ = exp[ - α̅_s ∫^{η_i+1}_{η_i} dη' ( ln(k_⊥,i^2/Λ_cut^2) + N(η', k_⊥,i) ) ],
where ℛ is a random number distributed uniformly in the interval [0,1]. Throughout this paper, we always use R to denote such a random number. N(η ^', k_⊥,i) is pre-generated by numerically solving the GLR equation with the kinematic constraint.
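As a rough illustration of this step, the sketch below solves the above equation for η_i+1 by accumulating the exponent on a rapidity grid for a fixed coupling; the callable N_interp standing in for the pre-tabulated gluon distribution, as well as the grid parameters, are illustrative assumptions rather than the actual implementation.

import math, random

def sample_next_rapidity(eta_i, kt2, N_interp, alpha_bar, lam_cut2,
                         eta_max=10.0, n_grid=2000):
    """Sample eta_{i+1} from R = exp(-alpha_bar * int_{eta_i}^{eta_{i+1}}
    [ln(kt2/lam_cut2) + N(eta', kt2)] deta').  N_interp(eta, kt2) is a
    hypothetical interpolator over the pre-tabulated distribution."""
    target = -math.log(random.random()) / alpha_bar   # required integral value
    d_eta = (eta_max - eta_i) / n_grid
    acc, eta = 0.0, eta_i
    for _ in range(n_grid):                           # accumulate the exponent step by step
        integrand = math.log(kt2 / lam_cut2) + N_interp(eta, kt2)
        acc += integrand * d_eta
        eta += d_eta
        if acc >= target:
            return eta                                # branching point found
    return None                                       # no resolvable branching before eta_max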
In contrast to the DGLAP evolution, the unitarity is not preserved during the course of the small x evolution. The number of gluons increases after each step of parton branching.
The generated cascade thus needs to be re-weighted. For instance, if one neglects the saturation effect and the kinematic constraint effect, the number of gluons which vanish due to the virtual correction and the unresolved branching is proportional to α̅_s ∫_{Λ_cut}^{k_⊥,i} d^2 l_⊥/l_⊥^2, while the number of gluons produced via the real correction is proportional to α̅_s ∫_{Λ_cut}^{P_⊥} d^2 l_⊥/l_⊥^2, where P_⊥ is the UV cutoff, in the same rapidity interval. The weighting function is given by the ratio of these two contributions: W(k_⊥,i) = ln(P_⊥^2/Λ_cut^2)/ln(k_⊥,i^2/Λ_cut^2).
It is quite non-trivial to work out the correct weighting factor when the kinematic constraint is implemented in the parton branching algorithm.
Let us first discuss the derivation of the weighting factor for the case of the fixed boundary prescription. To work out the correct weighting coefficient, we first write down the expression for the fraction of gluons at [η_i+1, η_i+1+δη] that come form the branching between η_i+1 and η_i,
δη ∂/∂η_i+1 [ α̅_s/π ∫_{η_i}^{η_i+1} dη' ∫_{Λ_cut} d^2 l_⊥/l_⊥^2 e^{- α̅_s ∫_{η_i}^{η'} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ]} θ( (1-z')/z' (k_⊥,i-l_⊥)^2 - l_⊥^2 ) ]
= δη α̅_s/π ∫_{Λ_cut}^{min[P_⊥, √((k_⊥,i-l_⊥)^2 (1-z)/z)]} d^2 l_⊥/l_⊥^2 e^{- α̅_s ∫_{η_i}^{η_i+1+ln[(k_⊥,i-l_⊥)^2/((k_⊥,i-l_⊥)^2+l_⊥^2)]} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ]},
with z'=x_i+1 /x'=exp [η'-η_i+1]. The kinematic constraint is imposed by the θ-function. Note that the term originating from the derivative acting on the integral boundary is equal to 0. The entire contribution comes from the derivative acting on the θ-function. Meanwhile the fraction of gluons that leave from the rapidity interval [η_i+1, η_i+1+δη] due to the virtual correction is,
δη ∂/∂η_i+1 e^{- α̅_s ∫_{η_i}^{η_i+1} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ]} = -δη α̅_s [ ln(k_⊥,i^2/Λ_cut^2) + N(η_i+1,k_⊥,i) ] e^{- α̅_s ∫_{η_i}^{η_i+1} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ]}.
For the non-local small x evolution, one also needs the input for gluon distribution beyond the small x boundary x_0=0.01. There are two common choices for the boundary conditions: i) the fixed boundary prescription, N(η<0, k_⊥)=0; ii) the frozen boundary prescription, N(η<0, k_⊥)=N(η=0, k_⊥). The weighting functions are thus different for different rapidity boundary prescriptions.
For the fixed boundary prescription, the re-weighting function is given by
W_kc,1(η_i,η_i+1;k_⊥,i) = (η_i+1-η_i) ∫_{Λ_cut}^{min[P_⊥, √((1-z)/z (k_⊥,i-l_⊥)^2)]} d^2 l_⊥/l_⊥^2 e^{- α̅_s ∫_{η_i+1}^{η_i+1+ln[(k_⊥,i-l_⊥)^2/((k_⊥,i-l_⊥)^2+l_⊥^2)]} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ]}
/ [ (η_i+1-η_i) ln(k_⊥,i^2/Λ_cut^2) + ∫_{η_i}^{η_i+1} dη N(η, k_⊥,i) ].
Here, the values of |l_⊥| and ϕ_l can be generated by solving the following equation
R = (1/C) α̅_s/π ∫_{Λ_cut}^{l_⊥} d^2 l'_⊥/l'^2_⊥ exp{ - α̅_s ∫_{η_i}^{η_i+1+ln[(k_⊥,i-l'_⊥)^2/((k_⊥,i-l'_⊥)^2+l'^2_⊥)]} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ] },
C = α̅_s/π ∫_{Λ_cut}^{min[P_⊥, √((k_⊥,i-l'_⊥)^2 (1-z)/z)]} d^2 l'_⊥/l'^2_⊥ exp{ - α̅_s ∫_{η_i}^{η_i+1+ln[(k_⊥,i-l'_⊥)^2/((k_⊥,i-l'_⊥)^2+l'^2_⊥)]} dη [ ln(k_⊥,i^2/Λ_cut^2) + N(η,k_⊥,i) ] },
where R again is a random number and C is the normalization factor ensuring that the r.h.s. of Eq. <ref> resides in the region [0, 1]. In the practical Monte Carlo implementation, a veto algorithm is used to be more efficient, as sketched below. Once |l_⊥| and ϕ_l are generated, l and k_⊥,i+1 can then be reconstructed. We repeat the procedure outlined above until η_i+1 reaches a minimal cut-off value η_min. Once the whole cascade is generated, we are able to reconstruct the gluon k_⊥ distribution at arbitrary rapidity.
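A minimal sketch of such a veto step is given below, assuming a proposal density d^2 l_⊥/l_⊥^2 between Λ_cut and P_⊥ for which the inverse CDF is analytic; the callable accept_weight, which stands in for the exponential suppression factor appearing in the sampling equation above, is a hypothetical placeholder supplied by the caller.

import math, random

def sample_lt_veto(kt_vec, z, p_max, lam_cut, accept_weight):
    """Veto sampling of the emitted gluon transverse momentum l_t.
    Proposal: dP ~ d^2 l_t / l_t^2 between lam_cut and p_max (inverse-CDF).
    accept_weight(l_vec) in [0, 1] encodes the remaining suppression factor."""
    while True:
        r = random.random()
        lt = lam_cut * (p_max / lam_cut) ** r        # inverse CDF of 1/l_t^2, flat in ln(l_t)
        phi = 2.0 * math.pi * random.random()
        l_vec = (lt * math.cos(phi), lt * math.sin(phi))
        # kinematic constraint: l_t^2 < (1-z)/z * (k_t - l_t)^2
        kmx, kmy = kt_vec[0] - l_vec[0], kt_vec[1] - l_vec[1]
        if lt * lt < (1.0 - z) / z * (kmx * kmx + kmy * kmy):
            if random.random() < accept_weight(l_vec):
                return l_vec                          # accepted emission
        # otherwise veto and retry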
For the frozen boundary case, the weighting factor has to be modified to
𝒲_kc,2(η_i, η_i+1; k_⊥,i, k_⊥,i+1) = (η_i+1-η_i) ln(P_⊥^2/Λ_cut^2) / [ (η_i+1-η_i) ln(k_⊥,i^2/Λ_cut^2) + ∫_{η_i}^{η_i+1} dη N(η, k_⊥,i) ] × N(η_i + ln[ k_⊥,i+1^2/(k_⊥,i+1^2+l_⊥^2) ], k_⊥,i) / N(η_i, k_⊥,i),
and the radiated gluon transverse momentum l_⊥ is sampled solving the following equation
R = (1/C) α̅_s/π ∫_{Λ_cut}^{l_⊥} d^2 l'_⊥/l'^2_⊥,
where the normalization factor for this case is given by C = α̅_s/π ∫_{Λ_cut}^{P_⊥} d^2 l'_⊥/l'^2_⊥. The k_⊥ distribution of the exchanged gluons that directly attach to the hard part can be reconstructed from the forward evolution algorithm described above.
Using the recipes described above, we are now ready to generate the parton cascade. Following the conventional choice, we use the MV model <cit.> result as the initial condition at rapidity η_0=0. Since we are interested in simulating events such as di-jet production in eA collisions, it is suitable to utilize the Weizsäcker-Williams (WW) gluon distribution as the initial condition <cit.>. It is given by
N (η_0, k_⊥) = ∫d^2 r_⊥/2π e^-i k_⊥· r_⊥1/r_⊥^2(1- exp[-1/4 Q_s0^2 r_⊥^2 ln(e+1/Λ r_⊥) ] ),
with Q_s0^2 = 1 GeV^2 and Λ = 0.24 GeV. We explored the behavior of the parton cascade with both the fixed and the frozen boundary prescriptions. From Fig. <ref>, one can see that the k_⊥ distribution obtained from the forward approach is in perfect agreement with the numerical solutions of the kinematic constrained GLR equation for both boundary conditions.
§.§ Backward evolution
We now turn to discuss how to implement the kinematic constraint in the backward evolution, which is far more efficient in generating the initial state parton shower as compared to the forward approach. The rapidity η_i+1 of the gluon participating in the hard scattering is fixed by external kinematics. k_⊥,i+1 at the rapidity η_i+1 can be sampled with the distribution N(η_i+1,k_⊥,i+1), which has to be determined beforehand by numerically solving the evolution equation. The next step is to generate η_i using a modified non-Sudakov form factor.
The modified non-Sudakov form factor, Π_ns, can be related to the forward non-Sudakov form factor Δ_ns and the gluon distribution N as
Π_ns (η_i+1, η_i; k_⊥,i+1)=Δ_ns(η_i+1,k_⊥,i+1) N(η_i,k_⊥,i+1)/Δ_ns(η_i,k_⊥,i+1) N(η_i+1,k_⊥,i+1),
which looks similar to that derived in our previous work <cit.>. However, one has to keep in mind that the gluon distributions appearing in the above formula are obtained by solving the GLR equation with the kinematic constraint.
On the other hand, the non-Sudakov factor can also be expressed as <cit.>,
Π_ns (η_i+1, η_i; k_⊥,i+1)
= exp [-α̅_s/π∫_η_i^η_i+1 η∫^P_⊥_Λ_ cut^2 l_⊥/l_⊥^2 N (η+ln[ k_⊥,i+1^2/ k_⊥,i+1^2+ l_⊥^2], k_⊥,i+1+ l_⊥ )/ N(η,k_⊥,i+1) ].
Both non-Sudakov form factors can be equally well used to generate η_i for a given η_i+1 by solving the following equation,
R = Π_ns (η_i+1, η_i; k_⊥,i+1).
The transverse momentum of the radiated gluon l_⊥ can be generated according to
R = (1/C) α̅_s/π ∫_{Λ_cut}^{l_⊥} d^2 l'_⊥/l'^2_⊥ N( η_i+1 + ln[ k_⊥,i+1^2/(k_⊥,i+1^2+l'^2_⊥) ], k_⊥,i+1+l'_⊥ ),
C = α̅_s/π ∫_{Λ_cut}^{P_⊥} d^2 l'_⊥/l'^2_⊥ N( η_i+1 + ln[ k_⊥,i+1^2/(k_⊥,i+1^2+l'^2_⊥) ], k_⊥,i+1+l'_⊥ ).
Once again, R is a random number, C is the normalization factor and a veto algorithm is employed in our practical implementation to make this sampling procedure more efficient. Similar to the forward evolution case, the generated event has to be re-weighted after each branching in the backward evolution method as well. It is important to notice that the GLR equation with the kinematic constraint is a non-local evolution equation when deriving the weighting factor. The weighting factor associated with backward evolution is the ratio of the fraction of gluons that appear from branching at the rapidity η_i+ ln k_⊥,i+1^2 / k_⊥,i+1^2 + l_⊥^2 and the fraction of gluons that vanish at the rapidity η_i due to the virtual correction and the fusion process. It reads,
𝒲_kc,back(η_i+1, η_i; k_⊥,i+1) = [ (η_i+1-η_i) ln(k_⊥,i^2/Λ_cut^2) + ∫_{η_i}^{η_i+1} dη N(η, k_⊥,i) ] / [ (η_i+1-η_i) ln(P_⊥^2/Λ_cut^2) ] × N(η_i, k_⊥,i) / N(η_i + ln[ k_⊥,i+1^2/(k_⊥,i+1^2+l_⊥^2) ], k_⊥,i).
The procedure outlined above is repeated until η_i is smaller than η_0. The last step of the simulation is to construct the four-momenta of the radiated gluons. Note that the minus component of the t-channel gluon's four-momentum can only be reconstructed after the full cascade has been generated. By going from the last t-channel gluon (closest to the nucleus), which has a vanishing minus component, forward in the cascade to the hard scattering process, the true minus components of the t-channel gluons are constructed.
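The backward rapidity step can be sketched as follows, using the ratio form of the modified non-Sudakov factor with pre-tabulated N and Δ_ns (both hypothetical callables here); the downward scan exploits the fact that Π_ns decreases monotonically as η_i moves away from η_i+1.

import random

def sample_backward_rapidity(eta_ip1, kt2, N_tab, Delta_ns, eta_min, n_grid=400):
    """Backward step: find eta_i such that
    R = Delta_ns(eta_ip1, kt2) * N(eta_i, kt2) / (Delta_ns(eta_i, kt2) * N(eta_ip1, kt2)).
    N_tab and Delta_ns are pre-tabulated callables (hypothetical interface).
    Returns None if the evolution falls below eta_min (cascade terminates)."""
    R = random.random()
    prefactor = Delta_ns(eta_ip1, kt2) / N_tab(eta_ip1, kt2)
    d_eta = (eta_ip1 - eta_min) / n_grid
    eta = eta_ip1
    for _ in range(n_grid):                 # scan downwards until the ratio drops to R
        eta -= d_eta
        ratio = prefactor * N_tab(eta, kt2) / Delta_ns(eta, kt2)
        if ratio <= R:
            return eta
    return None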
In Fig. <ref>, we compare gluon k_⊥ distribution at different rapidities generated from backward evolution to the numerical solutions of the GLR equation with the kinematic constraint.
A perfect match between the gluon k_⊥ distributions obtained from the backward approach and by numerically solving the kinematic constrained GLR equation is found.
§ K_T RESUMMATION IN THE SMALL X LIMIT
Our ultimate goal is to build a parton shower generator for simulating events in eA collisions at EIC. The hard scattering processes occurring in eA collisions often involve multiple scales. For instance, loosely speaking, there are three well separated scales in back-to-back di-jet production: the center-of-mass energy √(s), the invariant mass of the di-jet Q, and the total transverse momentum of the di-jet system k_⊥. To improve the convergence of the perturbative series, the two types of large logarithms, α_s ln(s/Q^2) and α_s ln^2 (Q^2/k_⊥^2), which arise in higher order calculations of the di-jet production cross section, have to be summed to all orders. The summation of the logarithmic contribution α_s ln(s/Q^2) is achieved by solving the small x evolution equation, while the logarithmic contribution α_s ln^2 (Q^2/k_⊥^2) can be resummed by means of the CS equation. A unified framework that allows us to resum both large logarithms simultaneously in a consistent way has been developed in a sequence of papers <cit.>. The evolved
small x gluon TMD can be expressed as the convolution of the Sudakov form
factor and the renormalized dipole amplitudes. It has been stressed in Refs. <cit.> that at small x, gluon TMDs can only be matched onto dipole scattering amplitudes rather than the normal gluon PDFs of collinear factorization. We notice that such a joint resummation formalism has been studied in various different contexts <cit.>.
To simulate hard scattering processes involving multiple scales in a parton shower generator, it is necessary to develop a Monte Carlo branching algorithm that effectively resums both types of logarithms through an iteration procedure. The essential observation that enables the computer implementation of the joint resummation is as follows. In the backward approach, the evolution starts from the final t-channel gluon with the most negative virtual mass-squared, which participates in the hard process. As the parton cascade develops towards the backward direction, the virtual mass of the t-channel gluon decreases by radiating soft gluons with longitudinal momentum fraction 1-z → 0. This first stage of the evolution is described by the CS equation and the renormalization group equation, which resum the double leading k_t logarithm and the single leading k_t logarithm, respectively. When the virtual mass of the t-channel gluon goes down to a scale of the order of the saturation scale, we should perform the small x evolution. The precise value of this scale should be fixed by fitting the output of the cascade to the experimental data. During the course of the small x evolution, the virtual mass of the t-channel gluon stops decreasing monotonically, whereas its longitudinal momentum fraction increases rapidly until the small x evolution initial boundary is reached. In this second stage of the evolution, the development of the parton cascade is mainly driven by radiated gluons that carry a large longitudinal momentum fraction 1-z → 1. Therefore, the Monte Carlo algorithm based on the GLR equation should be applied to generate the parton branching at this stage.
To simulate the first stage of the evolution, our primary task is to derive a folded version of the CS equation and the renormalization group equation. To this end, we write down the CS equation in the momentum space,
∂ N(μ^2,ζ^2,η,k_⊥) /∂lnζ^2=α̅_s /2 π∫^ζ_0 d^2 l_⊥/l_⊥^2 [ N(μ^2,ζ^2,η, k_⊥+l_⊥) -N(μ^2,ζ^2,η, k_⊥) ] .
which can be converted into the conventional expression of the CS equation <cit.> after making the Fourier transform up to the leading logarithm accuracy. Here, μ is the factorization scale, and ζ is a scale introduced to regularize the light cone divergence.
The factorization scale dependence of the gluon TMD in the saturation regime is described by the normal renormalization group equation <cit.>,
∂ N(μ^2,ζ^2,η,k_⊥) /∂lnμ^2= α̅_s [β_0 -1/2lnζ^2/μ^2 ]N(μ^2,ζ^2,η,k_⊥) .
with β_0=11/12-N_f/6N_c and N_f=3 in this work. By choosing the factorization scale μ to be ζ, one can combine the CS equation and the renormalization group equation together. The combined evolution equation reads,
∂ N(Q^2,η,k_⊥) /∂ln Q^2=α̅_s /2 π∫^Q_0 d^2 l_⊥/l_⊥^2 [ N(Q^2,η, k_⊥+l_⊥)- N(Q^2 ,η, k_⊥) ] + α̅_s β_0 N(Q^2,η, k_⊥),
where N(Q^2,η,k_⊥)≡ N(μ^2=Q^2,ζ^2=Q^2,η,k_⊥). Following the standard procedure, the above evolution equation can be cast into a folded equation,
∂/∂ln Q^2N(Q^2,η,k_⊥)/Δ_s(Q^2)=α̅_s /2 π∫_Λ_ cut^Q d^2 l_⊥/l_⊥^2N(Q^2,η, k_⊥+l_⊥) /Δ_s(Q^2),
with the Sudakov form factor being given by,
Δ_s(Q^2)=exp[ - ∫_ Q_0^2^ Q^2dt/tα̅_s (t)/2 ( lnt/Λ_ cut^2-2β_0 ) ].
The Sudakov form factor is simply the probability of evolving from Q_0 to Q without branching.
Eq. <ref> can be integrated to give an integral equation for N(Q^2,η,k_⊥) in terms of the gluon TMD at the initial scale Q_0:
N(Q^2,η,k_⊥) =N(Q_0^2,η,k_⊥) Δ_s(Q^2) + ∫ ^Q^2 _Q^2_0dt/tΔ_s(Q^2)/Δ_s(t)α̅_s (t)/2 π∫_Λ_ cut^Q d^2 l_⊥/l_⊥^2N(t,η, k_⊥+l_⊥).
With the derived folded CS and renormalization group equation, we are ready to introduce the Monte Carlo implementation of the k_t resummation formulated in the framework of the CGC effective theory.
§.§ Forward evolution
To have a consistency check, we first present the formulation of the forward evolution scheme. The combined CS and renormalization group equation can be solved using the forward evolution approach. We lay out the main procedures in the following.
For a given virtuality scale Q_i, either after several steps of evolution or at the initial condition, we first generate the value of a higher virtuality scale Q_i+1, where the next branching occurs.
Following the conventional method, this can be achieved by solving the following equation,
ℛ=exp[ - ∫_ Q_i^2^ Q_i+1^2dt/tα̅_s(t) (1/2lnt/Λ_ cut^2-β_0 ) ].
where the argument of the running coupling α_s is simply chosen to be the virtual mass squared.
Once Q_i+1 is generated, the transverse momentum of the radiated gluon, l_⊥,i+1, can be determined according to the following equation
ℛ = 1/ C∫_Λ_ cut^l_⊥,i+1 d^2 l^'_⊥/l^'_⊥^2,
where the normalization factor reads C = ∫_Λ_ cut^Q_i+1 d^2 l^'_⊥/l^'_⊥^2. The four momenta of the radiated gluon and the t-channel gluon can be determined from the momentum conservation and the on-shell condition.
We will discuss the reconstruction of kinematics in more details in the next subsection.
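A minimal sketch of the Q_i+1 sampling step is shown below; it accumulates the Sudakov exponent of the equation above on a logarithmic grid in t, using the one-loop α̅_s(t) = 1/(β_0 ln(t/Λ_QCD^2)) implied by the running coupling quoted later in this section with N_f = 3. The grid size and function names are illustrative assumptions, not the actual implementation.

import math, random

BETA0 = 11.0 / 12.0 - 3.0 / (6.0 * 3.0)      # beta_0 with N_f = 3, N_c = 3
LQCD2 = 0.0578                               # Lambda_QCD^2 in GeV^2, as quoted in the text

def alpha_bar(t):
    """One-loop alpha_s * N_c / pi evaluated at scale t (in GeV^2)."""
    return 1.0 / (BETA0 * math.log(t / LQCD2))

def sample_next_Q2(Q2_i, lam_cut2, Q2_max, n_grid=4000):
    """Sample Q^2_{i+1} from R = exp[-int_{Q2_i}^{Q2_{i+1}} dt/t
    alpha_bar(t) (0.5*ln(t/lam_cut2) - beta_0)] by accumulating the
    Sudakov exponent on a log grid (a minimal sketch)."""
    target = -math.log(random.random())
    dlnt = math.log(Q2_max / Q2_i) / n_grid
    acc, lnt = 0.0, math.log(Q2_i)
    for _ in range(n_grid):
        t = math.exp(lnt + 0.5 * dlnt)       # midpoint in ln t
        acc += alpha_bar(t) * (0.5 * math.log(t / lam_cut2) - BETA0) * dlnt
        lnt += dlnt
        if acc >= target:
            return math.exp(lnt)             # branching scale found
    return None                              # evolution reaches Q2_max without branching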
The generated cascade needs to be re-weighted. This is because unitarity is no longer preserved beyond the leading double logarithm approximation. We have included the leading single logarithm contribution in the algorithm employed here, which leads to an increase of the gluon number density after each splitting. The weighting factor is given by,
W _ CS ( Q^2_i+1, Q^2_i) = ∫ ^Q_i+1^2_Q_i^2dt/tα_s(t) lnt/Λ_ cut^2/∫ ^Q_i+1^2_Q_i^2dt/tα_s(t)[ lnt/Λ_ cut^2 - 2β_0 ] .
If the single logarithm contribution associated with the β_0 term in the denominator is neglected, the weighting factor reduces to 1. With these re-weighted parton cascades, one can reconstruct the t-channel gluon k_⊥ distribution at different scales and compare with the analytical and numerical solutions of Eq. <ref>.
It is straightforward to numerically solve Eq. <ref>, while the analytical solution of Eq. <ref> can also be easily obtained in the impact parameter space. After Fourier transforming back to momentum space, the evolved gluon TMD distribution reads,
N(Q^2,η, k_⊥)=∫d^2 b_⊥/(2π)^2 e^i k_⊥· b_⊥ e^-S(μ_b^2, Q^2)∫ d^2 l_⊥ e^-i l_⊥· b_⊥ N(η, l_⊥) ,
where N(η, l_⊥) is the gluon distribution evolved with the GLR equation, or the initial condition computed in the MV model. The Sudakov factor at one loop level in the impact parameter (b_⊥) space consists of a perturbative part and a non-perturbative part. It is given by
S(μ_b^2,Q^2)= S_pert(μ_b*^2,Q^2) +S_NP (b_⊥^2, Q^2).
The perturbative Sudakov factor reads
S_pert(μ_b*^2,Q^2) =
N_c/2π∫^Q^2_μ_b*^2dμ^2/μ^2α_s(μ) [ lnQ^2/μ^2 - 2 β_0 ],
where μ_b*^2 is defined as μ_b*^2=4e^-2γ_E/b_⊥*^2, with b_⊥ *=b_⊥/√(1+b_⊥^2/b^2_max) and b_max=1.5 GeV^-1. To compare with the Monte Carlo result on the same footing, we simply neglect the non-perturbative Sudakov factor S_NP in the numerical calculation. The behaviour at large b_⊥ is regulated by N(η, b_⊥) which is the Fourier transform of N(η,l_⊥). In this work, we use the one-loop running coupling which reads
α_s (μ^2) = 1/β_ 0N_c/πln (μ^2/Λ^2_ QCD ),
with Λ^2_ QCD = 0.0578 GeV^2.
We present the t-channel gluon k_⊥ distribution constructed from the generated parton cascade and compare it with the numerical solution of the CS-renormalization group equation for the fixed coupling case in the left panel of Fig. <ref>.
In our estimation, the MV model is employed to provide the gluon distribution at the initial scale Q_0=3 GeV. In the formulation of TMD evolution, all soft radiated gluons carry exactly zero longitudinal momentum fraction. In contrast, all radiated soft gluons carry a finite longitudinal momentum fraction in the parton branching algorithm. This presents an important advantage of the Monte Carlo method compared with the conventional analytical approach. Keeping longitudinal momentum conservation exact in the parton splitting process is often crucial to correctly account for phenomenology near the threshold region <cit.>. However, to make the comparisons in a consistent way, we did not change the longitudinal momentum fraction of the t-channel gluon after each branching in our algorithm. In the right panel of Fig. <ref>, we compare the Monte Carlo simulation result with both the numerical solution of the CS-renormalization group equation and the analytical solution for the running coupling case at the scale Q=13 GeV.
It is clear from the right panel of Fig. <ref> that our algorithm yields the same k_⊥ distribution as the numerical result. On the other hand, it differs from the analytical result. Such a discrepancy is expected because the non-perturbative part of the CS kernel is treated differently in the analytical approach. In addition, the argument of the running coupling used in the parton branching algorithm and the numerical solution is the hard scale Q, whereas the scale of the running coupling is μ_b in the analytical approach. Since the analytical result can describe the relevant phenomenology very well, one should use it as guidance to model the non-perturbative part of the Sudakov factor, which will be introduced in the Monte Carlo algorithm in future work. Alternatively, one could also use a relatively large infrared cutoff value Λ_cut to mimic the effect of the non-perturbative Sudakov factor. We leave this for a future study.
§.§ Backward evolution
In this subsection, we outline the essential steps of the Monte Carlo implementation of the backward evolution based on the folded CS-renormalization group evolution equation. Unlike the forward evolution, which can be considered as a way of solving the evolution equation, the evolved parton distributions have to be pre-generated and are used to guide the backward evolution. In most parton branching algorithms, the k_t resummation is achieved by using a modified Sudakov factor incorporating the collinear Parton Distribution Functions (PDFs). However, in the saturation regime, the k_t resummation has to be formulated in terms of the unintegrated gluon distribution. The main procedures are summarized as follows.
The modified Sudakov factor in the backward evolution approach is different from that in the forward evolution approach. It reads
Π_s(Q_i+1,Q_i; k_⊥,i+1)=Δ_s( Q_i+1^2) N(Q^2_i, η, k_⊥,i+1) /Δ_s( Q_i^2) N(Q^2_i+1, η, k_⊥,i+1).
An alternative way to compute the modified Sudakov factor is given by
Π_s(Q_i+1,Q_i; k_⊥,i+1)
= exp [ -
∫_Q_i^2^Q_i+1^2d t /tα̅_s(t)/2π∫^√(t)_Λ_ cut d^2 l_⊥/l_⊥^2 N(t,η, k_⊥,i+1+l_⊥)/N(t,η, k_⊥,i+1)].
It describes the probability for gluon evolving backward from Q_i+1 to Q_i without branching. The transverse momentum dependent gluon distribution appearing in Eq. <ref> and Eq. <ref> has to be pre-generated by numerically solving the combined CS-renormalization group equation.
The backward evolution starts from the t-channel gluon with the highest virtuality Q_i. The hard scale of the partonic scattering process is denoted as Q_i+1. We first have to sample k_⊥, i+1 according to the following distribution
ℛ = 1/ C∫ ^k_⊥,i+1_Λ_ cutd^2k^'_⊥ N(Q_i+1^2,η,k_⊥^'),
with C = ∫ ^Q_i+1_Λ_ cut d^2 k^'_ ⊥ N(Q_i+1^2,η,k_⊥^') being the normalization factor. The rapidity η is fixed by external kinematics. The next quantity to be generated by the parton cascade algorithm is the value of virtuality Q_i.
Following the standard backward evolution strategy, Q_i is obtained using the backward type Sudakov factor. We can sample a Q_i by solving the following equation,
R = Π_s(Q_i+1,Q_i; k_⊥,i+1).
As the virtual mass of (i+1)th t-channel gluon, Q_i also serves as the hard probe scale at which the ith t-channel gluon's transverse momentum is measured. The transverse momentum of the radiated gluon l_⊥,i is thus sampled solving the following equation
ℛ = 1/ C∫ ^l_⊥,i_Λ_ cutd^2l'_⊥/l'^2_⊥N(Q_i^2,η,k_⊥,i+1 +l'_⊥),
C = ∫ ^Q_i_Λ_ cutd^2l'_⊥/l'^2_⊥N(Q_i^2,η,k_⊥,i+1+l'_⊥) .
The longitudinal momentum fraction of the radiated gluon is determined through the on-shell condition,
|Q_i^2| ≈ z_i l_⊥,i^2/(1-z_i) + |k_⊥,i+1^2|,
which is valid in the strong ordering region |Q_i-1^2| ≪ |Q_i^2| ≪ |Q_i+1^2|. The minus component of the emitted gluon can be fixed accordingly. The ith t-channel gluon's transverse momentum is trivially obtained: k_⊥,i = k_⊥,i+1 - l_⊥,i. The virtual mass Q_i-1 of the ith t-channel gluon is computed with Eq. <ref>. However, the t-channel gluons' four-momenta can be determined only after the whole cascade is generated. The minus component of the t-channel gluon that is directly attached to the nuclear target is set to 0. From this initial condition, the four-momenta of the t-channel gluons are retrospectively reconstructed by momentum conservation.
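For completeness, solving the above relation for z_i is a one-liner; the small helper below is purely illustrative.

def z_from_virtuality(Q2_abs, lt2, kt2_next):
    """Solve |Q_i^2| ~= z * l_t^2 / (1 - z) + k_{t,i+1}^2 for the
    longitudinal momentum fraction z appearing in the on-shell condition."""
    ratio = (Q2_abs - kt2_next) / lt2     # equals z / (1 - z); must be positive
    if ratio <= 0.0:
        raise ValueError("virtuality not dominated by the emission: |Q^2| <= k_t^2")
    return ratio / (1.0 + ratio)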
As argued in the previous subsection, the generated event has to be re-weighted after each branching since unitarity is not preserved at the single leading logarithm accuracy. In the backward evolution approach, the re-weighting function reads
W _ CS, back ( Q^2_i+1, Q^2_i) = ∫ ^Q_i+1^2_Q_i^2dt/tα_s(t) [ lnt/Λ_ cut^2- 2β_0 ] /∫ ^Q_i+1^2_Q_i^2dt/tα_s(t) lnt/Λ_ cut^2 .
We repeat the procedure outlined above until Q_i^2 reaches a minimal cut-off scale at which the TMD evolution stops. The TMD evolution is driven by soft gluon radiations which carry vanishing longitudinal momentum fractions 1-z_i → 0. In the practical Monte Carlo implementation, the cut-off is chosen to be |Q_i^2| > |l_⊥,i^2| + |k_⊥,i+1^2|, or equivalently z_i > 0.5. Meanwhile, |Q_i^2| is also required to be larger than the saturation scale Q_s^2. If these two conditions cannot be met simultaneously, we terminate the TMD evolution and start the backward small x evolution.
We test the backward evolution algorithm against the numerical method as shown in Fig. <ref>. The MV model result is applied at the initial scale Q_0=3 GeV. The gluon k_⊥ distribution at high scale Q=13 GeV is obtained by numerically solving the combined CS-renormalization group equation. The cascade is generated starting from the scale Q=13 GeV and evolve down to the initial scale with the backward approach. The t-channel gluon k_⊥ distribution reconstructed from the cascade is compared with the numerical results at different scales.
Gluon k_⊥ distributions are presented in the left panel of Fig. <ref> for the fixed coupling case, and in the right panel of Fig. <ref> for the running coupling case. It is evident that the k_⊥ distributions obtained from the Monte Carlo method are the same as the numerical results.
We conclude that the backward evolution algorithm passes this consistency check as expected.
§ CONCLUSION
In this work, we extended the small x initial state parton branching algorithm developed in the previous paper to include the kinematic constraint effect. In the small x limit, the kinematic constraint leads to a stronger suppression of soft gluon emissions than that caused by the angular ordering along the chain. The coherent branching effect is thus effectively implemented in the parton branching algorithm once the kinematic constraint is imposed. This is a nontrivial extension in the sense that the weighting factor and the way of sampling the radiated gluon's transverse momentum are drastically altered. The t-channel gluon k_⊥ distributions constructed from both the forward scheme and the backward scheme are shown to reproduce the numerical solutions of the kinematic constrained GLR equation.
We also formulated a parton branching algorithm that enables us to resum large k_t logarithms and small x logarithms following a two-step evolution picture. In the backward approach, the cascade first develops by radiating soft gluons that carry vanishing longitudinal momentum fractions. At this first stage of the evolution, the parton branching is simulated with the Sudakov factor which we obtained from the folded CS equation and the renormalization group equation. The transverse momentum dependent gluon distribution, instead of the gluon PDF, is used to guide the evolution path toward the most populated regions of (Q^2, k_⊥).
When the virtual mass of the t-channel gluon is dominated by its transverse momentum or is of the order of the saturation scale, the parton branching starts being generated according to the non-Sudakov form factor derived from the small x evolution equation. The joint k_t and small x resummation has thus been achieved in the Monte Carlo simulation by implementing such a two-step evolution. Our study represents an important step towards practical applications of the parton shower generator in simulating scattering processes that involve multiple well-separated hard scales, such as di-jet production in eA collisions at EIC. The next step is to construct a full hadron-level Monte Carlo generator with the hadronization being performed using multi-purpose generators such as PYTHIA <cit.>. We also plan to integrate the algorithm into the eHIJING framework <cit.>, aiming at the simulation of events in eA collisions over the whole x range accessible at EIC in the future.
Acknowledgments:
We thank Hai-tao Li and Shan-shan Cao for helpful discussions.
This work has been supported by the National Natural Science Foundation of China under Grant No. 1217511. Y.S. is supported by the China Postdoctoral Science Foundation under Grant No. 2022M720082. S.Y.W. is also supported by the Taishan fellowship of Shandong Province for junior scientists.
|
http://arxiv.org/abs/2307.04458v1 | 20230710101221 | Analyzing the Evolution of Inter-package Dependencies in Operating Systems: A Case Study of Ubuntu | [
"Victor Prokhorenko",
"Chadni Islam",
"Muhammad Ali Babar"
] | cs.SE | [
"cs.SE"
] |
V. Prokhorenko et al.
CREST - The Centre for Research on Engineering Software Technologies, the University of Adelaide, Australia
victor.prokhorenko, [email protected]
Cyber Security Cooperative Research Centre (CSCRC), Australia
Queensland University of Technology, Brisbane, Australia
[email protected]
Analyzing the Evolution of Inter-package Dependencies in Operating Systems: A Case Study of Ubuntu
Victor Prokhorenko1,2 Chadni Islam3 Muhammad Ali Babar1,2
August 12, 2023
==================================================================================================
An Operating System (OS) combines multiple interdependent software packages, which usually have their own independently developed architectures. When a multitude of independent packages are placed together in an OS, an implicit inter-package architecture is formed. For an evolutionary effort, designers/developers of OS can greatly benefit from fully understanding the system-wide dependency focused on individual files, specifically executable files, and dynamically loadable libraries. We propose a framework, DepEx, aimed at discovering the detailed package relations at the level of individual binary files and their associated evolutionary changes. We demonstrate the utility of DepEx by systematically investigating the evolution of a large-scale Open Source OS, Ubuntu. DepEx enabled us to systematically acquire and analyze the dependencies in different versions of Ubuntu released between 2005 (5.04) to 2023 (23.04). Our analysis revealed various evolutionary trends in package management and their implications based on the analysis of the 84 consecutive versions available for download (these include beta versions). This study has enabled us to assert that DepEx can provide researchers and practitioners with a better understanding of the implicit software dependencies in order to improve the stability, performance, and functionality of their software as well as to reduce the risk of issues arising during maintenance, updating, or migration.
This work is accepted for publication in The 17th European Conference on Software Architecture (ECSA 2023),
Istanbul, Turkey.
§ INTRODUCTION
Combining multiple independent software packages together is commonly used to form complex inter-connected ecosystems. Typical examples of such large software ecosystems are various Linux distributions. Such ecosystems tend to consist of hundreds or thousands of packages, libraries, binaries, and configuration files with an order of magnitude more dependencies among them <cit.>, <cit.>.
Developers and researchers have expressed interest in software complexity measurement in an attempt to reason about characteristics of large code bases <cit.>. Software complexity is viewed as a result of different design decisions and implementation specifics and is a crucial component of long-term effects like the maintainability of software <cit.>. Although software complexity is a crucial consideration for package managers, Linux distributors, and maintainers, we currently have limited knowledge about the evolution of this complexity over the software lifespan. While the complexity of individual packages is tamed by their corresponding developers, combining thousands of packages materializes a new emergent layer of complexity. It is also uncertain whether different metrics for measuring software complexity exhibit similar or varying patterns of evolution.
A significant amount of research has extensively explored source-level software complexity <cit.>. As a result, various complexity metrics have been defined, such as cyclomatic, branching, or data flow complexity <cit.>. These metrics are primarily used for software design, debugging, and optimization purposes <cit.>.
These metrics are, however, not applicable when analyzing closed-source software distributed only in binary form without access to the source code. In such cases, binary dependency analysis is required to understand the interactions and dependencies between compiled binary executables. Additionally, even when source code is available, there may be situations where the compiled binary may behave differently from what is expected due to specific environment configurations. Thus, binary dependency analysis can provide a more accurate and complete understanding of run-time software behavior, which can be crucial for identifying potential issues or vulnerabilities.
This work considers an OS as a whole rather than focusing on analyzing individual software binaries. Considering an OS enables the identification of cross-application relations, which make up an emergent inter-package relation architecture instead of just the intra-package software complexity. We propose a framework that enables the extraction of binary-to-library dependencies and constructs a full OS dependency graph to obtain insights on overall OS complexity which we determine through inter-package dependency coupling. By coupling we mean any type of dependency of one code fragment on another (library inclusion, function call, etc).
Our study focused on Ubuntu as a case study to examine the evolution of large software ecosystems over almost two decades. Through empirical research and evidence-based findings, we aimed to assess the current state of package, library, and binary dependencies and identify areas for improvement in management tools, policies, and ecosystem analysis platforms.
We believe that a deep understanding of emergent inter-package architecture resulting from combining a multitude of independently developed software subsystems would benefit software developers and OS maintainers. The proposed techniques and tools are expected to minimize manual labor associated with multi-package maintenance.
Following are the key contributions of our work
* We have introduced a framework for dependency coupling analysis for multi-package software to extract the inter-package relations architecture that is applicable to a broader range of OS due to the binary-level analysis.
* We have defined four techniques to quantitatively measure software coupling in terms of executable and dynamically loadable library dependencies at different granularities.
* We have investigated the evolution of Ubuntu OS in terms of the proposed library presence dependency type, which revealed the changes in OS-wide inter-package relations over time.
§ BACKGROUND AND MOTIVATION
§.§ Software Complexity
Throughout the lifetime of any software system, various code modifications must be implemented in order to adapt to ever-changing user requirements and environmental conditions. An intuitive expectation is that large and complex software systems may be more difficult to update and maintain. Thus, in efforts to gain a stricter definition of complexity, multiple code complexity measurement techniques, such as straightforward line count or cyclomatic complexity, have been proposed so far <cit.>. However, analyzing multiple diverse software systems as a whole is not trivial due to (i) lack of access to the source code of all third-party components,
(ii) lack of formal interoperability specification and
(iii) highly dynamic state of execution environment at run time.
Several techniques are typically employed to handle the growing complexity of large software systems (such as a full OS). For instance, the system package manager may track package dependency information at the OS level. This tracking enables detecting incompatibilities between separate software subsystems and repairing them if possible. Unfortunately, manual labor is commonly used in determining and maintaining information on such version-level incompatibilities <cit.>. Due to the large number of files in a typical OS, manual efforts typically target only high-level dependency definitions, such as package level only <cit.>. As each package may consist of multiple files containing executable code (i.e., executable binaries and libraries), such package dependency understanding may not represent the dependencies precisely.
Further challenges arise due to modern complex software systems commonly developed in various programming languages. For instance, purely-binary compiled languages are intertwined with interpreted script languages leading to execution flow frequently being transferred between them. The dependency chains within such complex systems may propagate through a significant portion of files in the file system through the indirect reliance of different code fragments on each other. A typical example includes PHP web pages relying on the PHP interpreter, web server, and third-party PHP libraries. Such immediately obvious (direct) dependencies, in their turn, recursively rely on other system-provided and third-party libraries. Therefore we argue that automated and precise dependency tracking would benefit software system maintainers and administrators and may provide useful insight to software developers.
§.§ Code dependency types
One piece of code can depend on another in numerous ways. For instance, within the source code analysis context, a function may call other functions. Similarly, class methods may work by invoking other class methods. These types of dependencies present in the same code base are well understood and routinely used in modern IDEs (Integrated Development Environments) to aid software developers. In contrast, cross-language code dependencies spanning across multiple independently developed software systems are less formal and challenging to identify. For instance, a PHP-oriented IDE would not detect incompatible changes in the library which is required by the PHP interpreter itself.
Focusing solely on software running within the same OS while not taking network-based dependencies into consideration, we propose the following four conceptual types of dependencies suitable in the executable code analysis context. These four types include (i) the presence of third-party libraries, (ii) the extent of library coverage, (iii) library function call occurrences, and (iv) the run-time usage of functions (Figure <ref>).
The third-party library presence dependency relates to file-level granularity. This type of dependency indicates a requirement for a dynamically loadable library to be present in the system for an executable binary to be able to load and start. In Windows-based systems, libraries and executables are denoted by .dll and .exe file extensions, while on Linux-based these are .so and typically extension-less ELF (Executable and Linkable File) correspondingly. While high-level, this file granularity is crucial as a missing library file typically causes the executable file loader to indicate an error and prevents any further file execution.
Coverage dependency focuses on the library fragments (e.g., functions or class methods) that a developer explicitly uses or relies on. This type of dependency refers to specific function existence requirements. Thus, the library coverage aspect reflects the degree of reliance on a given library by the executable. Depending on the OS, programming language, and execution environment, individual function-level requirements can be implemented in various ways. For instance, in the context of the Windows PE executable, the list of required functions is tied to a specific library. In contrast, the lists of required libraries and functions are independent in the Linux ELF executable <cit.>. These implementation specific differences complicate coverage analysis in the general case.
Function occurrence dependency type attempts to provide further insight into the code dependency by observing that a single external function can be referred to multiple times in the original code. For instance, some heavily used functions can be mentioned all over the code, while some rarely used functions may only appear once. Extracting this type of dependency is extremely complicated and involves computationally-heavy disassembling of compiled code or parsing of interpreted languages. Initial unoptimized attempts revealed a significant time overhead for extracting such occurrence-level dependencies. While certain optimizations can be taken for production-ready usage, it can be concluded that this type of analysis is currently unsuitable for real-time applications.
Lastly, dependency usage refers to the actual run-time external code flow control transfers (i.e., the actual function calls). This level of detail may, for example, reveal that one function call is contained within a high-count loop while other function calls may be a part of a condition rarely satisfied at run time. Run-time observation would reveal a deeper understanding of the level of reliance on third-party libraries in both cases. Despite seemingly most accurate and closest to reality, relying on this type of dependency suffers from a major drawback. Different executions or instances of the same executable may exhibit different behavior due to different run-time conditions. In other words, observing a single execution does not guarantee to reveal all external code usage cases.
Note that a purposefully crafted executable may incorporate external dependencies that would not be reflected using the proposed dependency measurement techniques. For instance, if an executable downloads code over the network and executes it in place, no third-party library references, function names, or function calls related to the downloaded code may be present in the original executable. Moreover, the downloaded code can be different on each program invocation, making any dependency analysis futile in such a context. Based on the identified dependency types, we propose an extensible plugin-based framework suitable for extracting code dependencies for various types of executable code.
§ OUR APPROACH AND IMPLEMENTATION
Analyzing the full file system enables a more complete and consistent understanding of the dependencies. Software developers only express a requirement for dynamically loadable library presence, but do not have actual guarantees of the library's existence in a given system. We implement a Python-based proof of concept solution to analyze system-wide dependencies.
On a conceptual level, our proposed approach for Dependency Extraction (DepEx consists of a file system scanner, a plugin dispatcher, multiple user-definable file-type-specific plugins, and the resulting database. The following steps provide an overview of the DepEx operation:
* The existing dependency extraction plugins (also Python-based) are queried to prepare the list of all supported file types
* The specified file system is iterated over and each file of a supported type is passed to a corresponding plugin for dependency extraction
* The dependencies extracted by the plugin are stored in an SQLite database
Having the knowledge of individual file type structures, each plugin is responsible for external dependency detection and extraction. Note that while the current implementation assumes one-to-one relation between file types and plugins, it is possible for multiple plugins to process the same files to extract different types of dependencies.
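A schematic of the scanning loop described above is sketched below; the plugin registry, the suffix-based dispatch, and the single-table schema are simplifying assumptions for illustration, not the actual DepEx implementation.

import sqlite3
from pathlib import Path

# Hypothetical plugin interface: each plugin is keyed by the file suffix it
# handles and returns a list of (dependency_name, dependency_type) tuples.
PLUGINS = {}          # e.g. {".so": elf_plugin, ".php": php_plugin, ...}

def scan_filesystem(root, db_path):
    """Walk the target file system, dispatch supported files to plugins
    and store extracted dependencies in an SQLite database."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS deps "
                "(source TEXT, target TEXT, dep_type TEXT)")
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        plugin = PLUGINS.get(path.suffix)   # suffix dispatch is a simplification
        if plugin is None:
            continue                        # unsupported file type
        for target, dep_type in plugin(path):
            con.execute("INSERT INTO deps VALUES (?, ?, ?)",
                        (str(path), target, dep_type))
    con.commit()
    con.close()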
While we have implemented proof-of-concept plugins for PHP, Bash, and, to a lesser degree, Python scripts, in this research we primarily focus on ELF executables and .so libraries with the library presence dependency.
Once the unattended phase of the dependency extraction is complete, several interactive analysis and usage scenarios become accessible. These include visualization, statistical reporting, and forward and reverse update impact estimation. For instance, various system health characteristics, such as "number of missing libraries" or "number of executables with unfulfilled dependencies" can be queried and plotted if necessary. Similarly, update impact calculation enables obtaining the list of executables and libraries that would be potentially affected in case a given library is updated.
In order to aid comprehension of the large amounts of data collected, we developed a visualization subsystem. Using DOT language for graph representation enables rendering the resulting graphs using existing tools as well (such as GraphViz or Gephi). While the individual executable file graphs were readable, the full-system dependency graph was too cluttered for human comprehension. At this stage, interactive filtering was implemented to allow the hiding of popular libraries responsible for most of the visual noise (as shown in Figure <ref>). We are also planning to implement automated filtering based on various features, such as node type, sub-string matching, and popularity.
Other auxiliary scripts for dependency graph exploration include querying all binaries and libraries that depend on a given library, as well as generating dependency graphs for individual binaries and libraries. Individual library dependencies can also be visualized in a more detailed view.
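As an illustration, such a reverse-dependency lookup over the collected data reduces to a single SQL query; the table layout follows the sketch above and the database file name is hypothetical.

import sqlite3

def reverse_dependencies(db_path, library_name):
    """Return all binaries/libraries that directly depend on the given
    library, assuming the 'deps(source, target, dep_type)' table sketched earlier."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT DISTINCT source FROM deps WHERE target = ?", (library_name,)
    ).fetchall()
    con.close()
    return [r[0] for r in rows]

# Example: which files would be affected if a given library is updated?
# print(reverse_dependencies("ubuntu_23.04.sqlite", "libssl.so.3"))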
§ STUDYING THE ARCHITECTURAL ASPECTS OF UBUNTU
We focus on the following Research Questions (RQs) to investigate the file-level package relation architecture in Ubuntu systems using DepEx. We considered the presence dependency in this case study. We collected and analyzed the dependencies of 84 consecutive live Ubuntu Linux images that span over 18 years of development and evolution. The research questions we primarily focus on revolve around the emergent inter-package OS-wide architecture implicitly forming as a result of combining multiple independent software packages as well as the related architectural changes observed throughout longer time periods. In addition, we investigate the complexity perception from the perspectives of individual software package developers and whole system maintainers.
* RQ1. How do binary-to-library dependencies manifest in the Ubuntu OS in terms of a system-wide dependency graph?
* RQ2. What is the difference between individual library complexity directly exposed to developers vs. overall internal system complexity that emerges as a result of combining multiple subsystems together (direct vs. recursive dependencies)?
* RQ3. How does the whole Ubuntu OS binary-to-library dependency graph evolve over a longer period?
Having high popularity, rich history, and open-source nature, Ubuntu serves as a comprehensive data source. Despite other Linux distributions, such as Alpine, gaining popularity, we were unable to find another dataset comparable in size and quality. Specifically, older Alpine versions were unavailable for download and Debian produced fewer live images.
Throughout the development of our DepEx framework, we relied on well-established existing open-source software, such as
squashfs-tools[<https://github.com/plougher/squashfs-tools>], binutils[<https://www.gnu.org/software/binutils/>] and ldd[<https://man7.org/linux/man-pages/man1/ldd.1.html>]. SquashFS-related tools were used to expose compressed live Ubuntu images for analysis. Note that different versions of the SquashFS tools had to be used depending on the age of the Ubuntu image. The binutils package was used to extract ELF-specific data such as imported library names. Lastly, ldd was used to extract library search locations. Special precautions had to be taken to look up library paths inside the mounted image rather than resolving paths within the host system that conducted the analysis. For this purpose, we relied on standard Linux functionality.
Solely mounting the Ubuntu ISO files directly does not provide access to the live file system, as another layer of compression is typically present for disk space optimization purposes. Thus, we implemented a two-step unpacking process to gain visibility of the inner live file system.
Interestingly, extracting the images generated over 18 years revealed how live image preparation changed over time. We noticed different compression techniques used throughout the time period analyzed that ranged from compressed loop files (cloop) to SquashFS versions 2.1-4.0. We also observed that modern SquashFS kernel modules could not transparently mount images compressed by older versions. Thus, we developed a supporting script to provide access to all of the downloaded images in a uniform manner.
Using our DepEx framework, we recursively built the full library dependency graph for each identified executable using the tools listed above. Extracting library dependencies requires analyzing the library search path environment variables, the system library cache, as well as the binary executable file path. Finally, we used an SQLite database to store the collected dependency data for all the scanned Ubuntu images. This data can be queried for further analysis and visualization.
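The per-binary extraction step can be approximated with a call to objdump from binutils, as sketched below; this simplified version only reads the DT_NEEDED entries and ignores RPATH/RUNPATH handling and dlopen-style loading, so it is a stand-in rather than the exact DepEx procedure.

import subprocess

def needed_libraries(elf_path):
    """Extract the DT_NEEDED entries (direct library dependencies) of an
    ELF binary by parsing 'objdump -p' output."""
    out = subprocess.run(["objdump", "-p", elf_path],
                         capture_output=True, text=True, check=True).stdout
    needed = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "NEEDED":
            needed.append(parts[1])
    return needed

# Illustrative call: needed_libraries("/usr/bin/ls")
# might return something like ['libselinux.so.1', 'libc.so.6']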
§ FINDINGS AND RESULTS
The dependency data extracted from a typical OS is a rich source of information on the high-level system architecture. In contrast to planned layer of architecture, this layer refers to the unwritten architectural aspects that emerge as a result of combining a multitude of independently-developed software packages. Coupled with temporal updates, this data can serve as a basis for a deeper system evolution trends analysis. For instance, long-term trends such as libraries gaining or losing popularity or executable complexity inflation may be detected. Predicting potential OS library or executable removal may help developers adjust the development plans. In addition, determining and removing unused libraries could be useful in optimizing disk space usage and reducing the attack surface.
Throughout the data collection conducted, we focused on three key aspects. Firstly, we investigated the OS-level dependency graph as a whole (RQ1). Secondly, we examined various aspects of complexity in binary dependencies determined through coupling analysis (RQ2). Lastly, we analyzed evolutionary trends in the OS dependency graph (RQ3).
§.§ OS-wide Dependency Graph
Analyzing the resulting SQLite database, which covers 84 Ubuntu images, revealed the following numbers of binaries, libraries and dependencies per image. We found that from Ubuntu 5.04 to 23.04 the number of binary executables ranged from 1519 to 2753 and the number of libraries ranged from 1683 to 3673. In terms of dependencies detected, the numbers ranged from 18165 to 37641 in the images scanned. A total of 408364 binary and library files were processed to extract the dependencies, which returned almost 2 million dependencies. The resulting SQLite database holds over 83MB of raw dependency data.
We noticed that highly popular libraries make the graphs unreadable. Thus, we implemented filtering out the most popular libraries from the sorted (by popularity) list of all the involved libraries. We observe that hiding the top 10-15 libraries increases the readability of the whole system graph. Notably, loosely coupled subsystems, such as the networking subsystem, become apparent. The libraries presented alongside the diagram also provide insight into the relative popularity of individual libraries within a system.
We have observed that the number of libraries imported but not present in the system varied from 20 (v5.04) to 8 (v23.04), with the highest number being 92 (v21.10b). As a consequence, the number of other libraries directly impacted by the missing dependencies varied from 4 (v17.10 and v17.10.1) to 27 (v13.04 and v9.04). Similarly, we see that the number of unused libraries (i.e., not imported by any other library or executable) ranged from 1301 (v5.04) to 1666 (v23.04). These numbers constitute a significant proportion of the total number of libraries included (around 77% and 62%, respectively). Potential explanations for such a high number of unused libraries could be a) plugin-based applications that do not import libraries directly, b) "forgotten" legacy libraries, and c) libraries shipped "just in case" for use by applications commonly installed at a later stage.
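The unused and missing sets reported above can be derived from the collected edges with a simple set computation; the sketch below assumes a mapping from each binary or library to its direct imports and is not the exact DepEx query.

def unused_and_missing(edges, present):
    """edges: dict mapping each binary/library to the set of libraries it imports;
    present: set of library names physically shipped in the image."""
    imported = set().union(*edges.values()) if edges else set()
    unused = present - imported        # shipped but never imported by anything
    missing = imported - present       # imported but not shipped in the image
    impacted = {src for src, deps in edges.items() if deps & missing}
    return unused, missing, impacted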
§.§ Dependencies Coupling Aspects
Software dependencies represent the reliance of a given piece of code on external code. In practice, software developers only deal with a subset of the code required for an application to run. A graphics-oriented library may expose a simpler set of functions to developers, while relying on a multitude of other complex hardware-specific libraries to implement the advertised functionality. Thus, a complex and large code base is made to look simple from the developer's perspective.
This perception difference opens the possibility of measuring code coupling in direct and recursive ways. The direct coupling of an application reflects how many specific libraries a developer deals with explicitly. In contrast, recursive coupling takes all the underlying dependencies into consideration as well.
In addition, there is an inherent asymmetry in dependency tracking. Forward tracking from a given binary to all the required libraries is trivial, as this information is contained within the binary. Reverse tracking from a given library to determine all the binaries and libraries that require the specified library is complicated, as this information is not stored explicitly. Reverse tracking essentially reflects the popularity of a given library and requires scanning the whole file system to be calculated. Thus we developed functionality to measure the (i) direct coupling, (ii) total (recursive) coupling, and (iii) library popularity.
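The three measures can be computed directly on the extracted dependency mapping, as in the following sketch (the mapping name and structure are illustrative):

from collections import Counter

def direct_coupling(deps, node):
    # libraries the developer of 'node' deals with explicitly
    return len(deps.get(node, set()))

def total_coupling(deps, node):
    # recursive coupling: follow transitive imports with a depth-first walk
    seen, stack = set(), list(deps.get(node, set()))
    while stack:
        lib = stack.pop()
        if lib not in seen:
            seen.add(lib)
            stack.extend(deps.get(lib, set()))
    return len(seen)

def popularity(deps):
    # reverse tracking: how many binaries/libraries import each library
    return Counter(lib for targets in deps.values() for lib in targets)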
Figures <ref> and <ref> illustrate the changes in the average and maximum number of dependencies, respectively. As can be seen from Figure <ref>, whereas the average total number of dependencies largely stays the same, developer-facing complexity tends to decrease over time. This indicates that developers tend to re-arrange code within libraries to minimize the coupling they face directly. The large spike in Figure <ref> is caused by the introduction of GNOME Shell in Ubuntu 17.10. We can therefore conclude that, while maintaining roughly the same external coupling, GNOME Shell has a complicated internal structure. In particular, we found that the GNOME Shell configuration binary has the largest number of dependencies. This is explained by the fact that the configuration tool needs to interact with most of the GNOME Shell subsystems.
A complementary aspect of dependency coupling is popularity. We define library popularity through the number of other libraries or executables that depend on it. In other words, damaging or removing a more popular library would impact a larger number of executables in a system. We identified the top 10 most used libraries (i.e., imported by other libraries and executables) in Ubuntu; the numbers reported alongside each library refer to the number of uses (i.e., library imports) averaged across all Ubuntu versions in which the library was present.
We notice that 7 out of the top 10 directly-coupled libraries relate to various GNOME subsystems while the other 3 relate to the Evolution mail client. Interestingly, the most complex executable with 100 direct dependencies was only present in two Ubuntu versions. This likely indicates that such high coupling was not tolerated, leading to the application removal.
Lastly, analyzing total coupling by taking recursive dependencies into account, we identified the top 10 most complex libraries and binaries, with recursive dependency counts of 154, 156, 273, 155, 154, 155, 158, 169, 158, and 164, respectively.
§.§ Dependency Graphs Evolutionary Trends
Running a large-scale analysis on a set of Linux distributions developed and released over 18 years revealed a number of shifts occurring in the domain. In constant efforts to attract users, Ubuntu is known for conducting experiments, such as introducing new large software packages as replacements for existing ones. For instance, the significant dip in the number of dependencies in Figure <ref> is explained by the replacement of GNOME 2 with Unity. On a longer scale, it is also visible that, despite the limited local successes of such experiments, the overall trend indicates a slow growth of the number of files and dependencies.
Interestingly, we also observed that a significant number of files that are not explicitly required are present in the system (Figure <ref>). In other words, up to 37% of libraries physically located in the file systems were not mentioned in the import tables of any of the binaries or libraries. This likely indicates that such libraries are primarily used as plugins and could be loaded at run time through dynamic directory scanning if necessary. Note that these conditional dependencies may be impossible to detect in advance due to the unpredictable nature of external factors. For instance, a user-controlled application configuration can determine whether a given plugin library should be loaded at run time. The overall trend also hints that such a dynamic plugin-based approach is gaining popularity, as the proportion of libraries not imported keeps steadily growing.
Another observation made throughout our analysis relates to the longevity of the libraries and binaries in Ubuntu. Namely, while complex binaries are periodically removed in search of better alternatives, highly popular libraries tend to stay around. Once a popular library is introduced in a particular Ubuntu version, it is unlikely to be removed, as such a removal would impact all libraries and executables that rely on the library's existence. Even internal code reorganizations affecting highly popular libraries require extra care to maintain compatibility[<https://developers.redhat.com/articles/2021/12/17/why-glibc-234-removed-libpthread>].
§ DISCUSSION
§.§ Threats to Validity
While we primarily focused on dependency-centric package management in Linux OS, other factors may explain some of the observations. Despite high popularity, packages might get removed from the system due to licensing, compatibility, security, or maintainability issues. Dependency analysis should, therefore, be coupled with change log analysis to verify and confirm the findings.
To enhance the external validity of our dependency analysis, we selected a highly popular Linux distribution. By including all of the available versions we expect our approach to be generalizable and applicable to a broader range of OSs. Widening the input data set on the time axis enabled the discovery of uncommon cases and long-term trends.
Being well-maintained, Ubuntu served as a high-quality dataset. Legacy Ubuntu versions and their corresponding change logs were still available for download[Ubuntu wiki: Releases - https://wiki.ubuntu.com/Releases]. In contrast, Alpine (another popular Linux distribution) archives did not go far back in time. Moreover, the Alpine archives contained broken links for older versions, preventing image downloading. Similarly, while considering Debian systems, we discovered different and incompatible system image layouts which would complicate the analysis.
Primary threats to external validity are abrupt changes causing significant paradigm shifts, lower granularities skewing the results, and implicit dependencies.
Abrupt changes may be introduced throughout evolution. Such changes introduce incompatibilities, forcing us to amend the scanning process accordingly. Notable examples we observed include compression algorithm changes, folder hierarchy alterations, and the replacement of core components. We noticed a different layout of binary files in the file system that required consideration due to the changes introduced in Ubuntu 19.04. Specifically, the top-level binary and library directories were converted to symbolic links to their /usr counterparts[<https://lists.ubuntu.com/archives/ubuntu-devel-announce/2018-November/001253.html>]. Depending on whether 19.04 is being installed from scratch or on top of a previously installed version, the number of binaries may appear to suddenly double in version 19.04. We alleviated this problem by resolving symbolic links.
In addition to library dependencies stored in executable binary file import tables, other types of coupling occur in practice. For instance, network communication, special files like Unix domain sockets, Inter-Process Communication (IPC) calls, message-oriented buses, and pipes provide various means of code interactions. Discovering such code coupling instances may not be possible in practice (e.g., new code fragments might be downloaded over a network). Taking into account these code coupling types may significantly skew our findings.
§.§ Challenges and Limitations
The two primary technical challenges we encountered throughout our data collection and analysis are the large data set sizes and performance issues related to extracting dependencies at lower granularities.
As the distributed Ubuntu images are growing in size, so do the number of executable files and their individual sizes. This steady growth is observed over all Ubuntu versions analyzed. For example, within 18 years analyzed, the live Ubuntu image size grew from 600MB (version 5.04) to 3.7GB (version 23.04). Likewise, the number of executable files experienced a 70% increase in size (1605 in 5.04, 2753 in 23.04).
Through practical experiments, we established that restricting the dependency granularity is crucial to achieving acceptable processing speed, as lower-granularity dependency extraction incurs large overheads. Disassembling executable binaries to identify individual third-party library function calls slows down the dependency extraction and incurs significant memory overheads. For instance, we have observed cases where the disassembly and analysis of a single executable took over 40 minutes on an average laptop-class CPU. Thus, while technically possible and potentially interesting for gaining further insights, lower-level granularity analysis is out of reach for the real-time applications we initially aimed for. At this stage, we restricted the analysis to the file level only.
§ RELATED WORK
The prior work primarily revolves around two aspects, (i) diverse conceptual complexity metrics definitions and (ii) dependency extraction and analysis.
Various types of software complexity metrics have been widely studied in the literature <cit.>. Some studies have focused on metrics that are useful in source code analysis but are not easily applicable in binary code analysis <cit.> <cit.> <cit.>. Others have discussed the deficiency of methods to obtain global dependency knowledge and the difficulty in visualizing the resulting graphs <cit.>. The use of software complexity metrics to detect vulnerabilities has also been investigated, with some studies proposing dependency-oriented and execution-time complexities <cit.>. Dependency extraction aspects and challenges have also been explored, with some studies focusing on specific languages or ecosystems <cit.> <cit.>.
Package management and dependency validation have been popular research topics, with a set of studies proposing methods to address issues arising from package evolution (e.g., splitting into multiple different packages) <cit.> <cit.> <cit.>. User questions related to package management, such as calculating the consequences of removing or modifying a package, have also been explored <cit.> <cit.>.
Efficient package management tools and query languages have been proposed, including tools for efficient package management and relations lookup <cit.>. However, similar to software complexity metrics research efforts, multiple studies have focused only on source-level rather than binary dependencies <cit.> <cit.>.
In efforts to resolve binary compatibility issues, some works have investigated relying on version ranges rather than minimum version requirements <cit.>. Unfortunately, the large downside of the proposed approach is the requirement of debug symbols availability, which is rare in commercial software. An interesting use of dependency extraction has been proposed for Windows executables for malware detection <cit.>. Taking the notion of the extent of a dependency into account enables detecting and eliminating insignificant dependencies <cit.>.
Overall, it should be noted that dependency-related studies primarily focus on source code dependency analysis and package-level relations <cit.> <cit.> and do not typically examine software package evolution over time. We, therefore, conclude that more precise file-based dependency extraction is an under-researched area that could provide better structural visibility for large-scale systems comprising multiple independently developed packages. We also see that understanding software evolution is essential for maintaining software, ensuring compatibility, and improving security. Having this understanding aids developers in making informed decisions about updates and maintenance, ensures software remains compatible with other systems, and reduces the risk of security issues. Additionally, understanding software evolution can lead to new innovations and improvements in software design and development.
§ CONCLUSION AND FUTURE WORK
In this study, we introduce automated extraction of dependency graphs for a whole system at the executable files level (as opposed to manually maintained traditional package-level dependency graphs). The resulting system-wide dependency graph provides a high-level view of the OS architecture emerging from interactions between the different subsystems and user packages. In addition, this study enabled the discovery of general high-level trends/common patterns in Ubuntu Linux architecture evolution over time.
We also differentiate between developer-facing complexity (defined through direct dependency coupling) and overall system complexity (defined through recursive dependency coupling). The motivation behind such a separation is that developers typically deal with third-party libraries without having full visibility of the back-end side of the libraries. In other words, a developer may include one library, while the library itself can have a complicated graph of dependencies not directly visible to the developer. These invisible dependencies may cause software bloating and increase the attack surface.
We believe the findings of this study will provide useful insights for software developers and OS maintainers in terms of gaining a holistic quantitative understanding of inter-package architecture management that would be useful, for example, in optimizing disk space and improving system maintainability.
We have identified two main directions for future research. Specifically, these are expanding the dependency extraction approach to a wider set of supported platforms and to more types of extracted dependencies.
For future research, we aim to perform Windows-based analysis and implement support for other levels of granularity, such as individual function dependencies. Also, in contrast to the convenient, holistic file system structure used in live editions, non-live distribution variants are composed of multiple compressed packages, complicating the dependency extraction and analysis. Implementing analysis for such non-live distributions could be a potential future research line.
As opposed to fixed library imports, code fragments interacting through various communication channels are loosely coupled. Such non-obvious dependencies are not trivial to detect. For instance, changing code on one side of a UNIX pipe may negatively affect the results of the next program in the pipeline. Furthermore, such dependencies may not be predefined in advance and are only required intermittently while being completely unnoticeable most of the time. We believe that comprehensive and accurate detection of such concealed dependencies would greatly enhance the overall system architecture, evolution, and run-time operation understanding and visibility and enable early detection of potential compatibility breaks caused by code modifications.
§ ACKNOWLEDGMENT
The work has been partially supported by the Cyber Security Research Centre Limited whose activities are partially funded by the Australian Government’s Cooperative Research Centres Programme.
§ DATA AVAILABILITY
As the current project is funded by industry partners, we are unable to publish the source code at this stage. However, aiming to increase transparency and reproducibility in research, we have made the obtained dataset available for public access <cit.>. Researchers and interested parties can access the dataset and utilize it to replicate or build upon our findings.
SoftwareMetricsT. Honglei, S. Wei and Z. Yanan, "The Research on Software Metrics and Software Complexity Metrics," 2009 International Forum on Computer Science-Technology and Applications, Chongqing, China, 2009, pp. 131-136, doi: 10.1109/IFCSTA.2009.39.
SoftwareMetricsSurvey S. Yu and S. Zhou, "A survey on metric of software complexity," 2010 2nd IEEE International Conference on Information Management and Engineering, Chengdu, China, 2010, pp. 352-356, doi: 10.1109/ICIME.2010.5477581.
InitialComplexity Yonghee Shin and Laurie Williams. 2011. An initial study on the use of execution complexity metrics as indicators of software vulnerabilities. In Proceedings of the 7th International Workshop on Software Engineering for Secure Systems (SESS '11). Association for Computing Machinery, New York, NY, USA, 1–7. https://doi.org/10.1145/1988630.1988632
PackageConflict Artho, C., Di Cosmo, R., Suzaki, K., and Zacchiroli, S. (2011). Sources of inter-package conflicts in debian. arXiv preprint arXiv:1110.1354.
DebianLinux de Sousa, O. Felicio, M. A. de Menezes, and Thadeu JP Penna. “Analysis of the package dependency on debian gnu/linux." Journal of Computational Interdisciplinary Sciences 1.2 (2009): 127-133.
LinuxPackage_IEEE Lan, Yu-Qing, et al. "Extraction methods on Linux package dependency relations." 2009 International Conference on Information Engineering and Computer Science. IEEE, 2009.
LinuxPackageVis Mithun, X. L. E., and van de Wetering, H. M. M. (2009). Linux Package Dependency Visualization. Master's Thesis at Department of Mathematics and Computer Science, Aug, 1-64.
LinuxQuality Boender, J., Di Cosmo, R., Vouillon, J., Durak, B., and Mancinelli, F. (2008, July). Improving the quality of GNU/Linux distributions. In 2008 32nd Annual IEEE International Computer Software and Applications Conference (pp. 1240-1246). IEEE.
RecoverDependency Lungu, M., Robbes, R., and Lanza, M. (2010, September). Recovering inter-project dependencies in software ecosystems. In Proceedings of the IEEE/ACM international conference on Automated software engineering (pp. 309-312).
PackageDependency_2015
Jing Wang, Qingbo Wu, Yusong Tan, Jing Xu and Xiaoli Sun, "A graph method of package dependency analysis on Linux Operating system," 2015 4th International Conference on Computer Science and Network Technology (ICCSNT), Harbin, 2015, pp. 412-415, doi: 10.1109/ICCSNT.2015.7490780.
DepOwl Jia, Z., Li, S., Yu, T., Zeng, C., Xu, E., Liu, et al. (2021, May). DepOwl: Detecting Dependency Bugs to Prevent Compatibility Failures. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE) (pp. 86-98). IEEE.
unix_evolution_TSC
D. Spinellis and P. Avgeriou, “Evolution of the Unix System Architecture: An Exploratory Case Study," in IEEE Transactions on Software Engineering, vol. 47, no. 6, pp. 1134-1163, 1 June 2021, doi: 10.1109/TSE.2019.2892149.
unix_44
D. Spinellis, “A Repository with 44 Years of Unix Evolution," 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, Florence, Italy, 2015, pp. 462-465, doi: 10.1109/MSR.2015.64.
softwareComplexity
E. J. Weyuker, “Evaluating software complexity measures," in IEEE Transactions on Software Engineering, vol. 14, no. 9, pp. 1357-1365, Sept. 1988, doi: 10.1109/32.6178.
ComplexityCC
C. Ebert, J. Cain, G. Antoniol, S. Counsell and P. Laplante, “Cyclomatic Complexity," in IEEE Software, vol. 33, no. 6, pp. 27-29, Nov.-Dec. 2016, doi: 10.1109/MS.2016.147.
ComplexityComparison
Zhang, M., Baddoo, N. (2007). “Performance Comparison of Software Complexity Metrics in an Open Source Project." In: Abrahamsson, P., Baddoo, N., Margaria, T., Messnarz, R. (eds) Software Process Improvement. EuroSPI 2007. Lecture Notes in Computer Science, vol 4764. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75381-0_15
TopologyAnalysis Martin P. Robillard. 2008. Topology analysis of software dependencies. ACM Trans. Softw. Eng. Methodol. 17, 4, Article 18 (August 2008), 36 pages. https://doi.org/10.1145/13487689.13487691
SurviveDependencyCox, Russ. "Surviving software dependencies." Communications of the ACM 62.9 (2019): 36-43.
StaticDependencyJász, Judit, et al. "Static execute after/before as a replacement of traditional software dependencies." 2008 IEEE International Conference on Software Maintenance. IEEE, 2008.
AutoDepen Ossher, Joel, Sushil Bajracharya, and Cristina Lopes. "Automated dependency resolution for open source software." 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010). IEEE, 2010.
DataLink DepEx Dataset, <https://figshare.com/s/ce3247b81fac82528495>.
interPackage LaBelle, Nathan, and Eugene Wallingford. "Inter-package dependency networks in open-source software." arXiv preprint cs/0411096 (2004).
EvolutionPackageDepen Kikas, Riivo, et al. "Structure and evolution of package dependency networks." 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR). IEEE, 2017.
DLLHell Dick, Stephanie, and Daniel Volmar. "DLL hell: Software dependencies, failure, and the maintenance of Microsoft Windows." IEEE Annals of the History of Computing 40.4 (2018): 28-51.
DLLMinerNarouei, Masoud, et al. "DLLMiner: structural mining for malware detection." Security and Communication Networks 8.18 (2015): 3311-3322.
LinuxDis Horváth, Árpád. "The software package dependency networks of some Linux distributions." 2012 IEEE 4th International Conference on Nonlinear Science and Complexity (NSC). IEEE, 2012.
EmpiricalComp Decan, Alexandre, Tom Mens, and Philippe Grosjean. "An empirical comparison of dependency network evolution in seven software packaging ecosystems." Empirical Software Engineering 24 (2019): 381-416.
PowerLaws Panagiotis Louridas, Diomidis Spinellis, and Vasileios Vlachos. 2008. Power laws in software. ACM Trans. Softw. Eng. Methodol. 18, 1, Article 2 (September 2008), 26 pages. https://doi.org/10.1145/1391984.1391986
LightWeigthDll Xie, Xiongwei, and Weichao Wang. "Lightweight examination of dll environments in virtual machines to detect malware." Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. 2016.
ELFspec TIS Committee. "Tool interface standard (TIS) executable and linking format (ELF) specification version 1.2." (1995).
MetricsFaults Alakus, T. B., Das, R., and Turkoglu, I. (2019, September). An overview of quality metrics used in estimating software faults. In 2019 International Artificial Intelligence and Data Processing Symposium (IDAP) (pp. 1-6). IEEE.
|
http://arxiv.org/abs/2307.07408v1 | 20230714153850 | Hydrodynamic Navier-Stokes equations in two-dimensional systems with Rashba spin-orbit coupling | [
"Edvin G. Idrisov",
"Eddwi H. Hasdeo",
"Byjesh N. Radhakrishnan",
"Thomas L. Schmidt"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Research Center for Quantum Physics, National Research and Innovation Agency (BRIN),
South Tangerang 15314, Indonesia
School of Chemical and Physical Sciences, Victoria University of Wellington,
P.O. Box 600, Wellington 6140, New Zealand
We study a two-dimensional (2D) electron system with a linear spectrum in the presence of Rashba spin-orbit (RSO) coupling in the hydrodynamic regime. We derive a semiclassical Boltzmann equation with a collision integral due to Coulomb interactions in the basis of the eigenstates of the system with RSO coupling. Using the local equilibrium distribution functions, we obtain a generalized hydrodynamic Navier-Stokes equation for electronic systems with RSO coupling. In particular, we discuss the influence of the spin-orbit coupling on the viscosity and the enthalpy of the system and present some of its observable effects in hydrodynamic transport.
Hydrodynamic Navier-Stokes equations in two-dimensional
systems with Rashba spin-orbit coupling
Thomas L. Schmidt
August 12, 2023
================================================================================================
§ INTRODUCTION
Hydrodynamic behavior of electrons in metals was first predicted in 1963 by Gurzhi <cit.>. It became clear that the hydrodynamic regime in conductors can be reached when the electron-electron scattering time τ_ee is the shortest time scale compared with the electron-impurity (τ_ei) and electron-phonon (τ_eph) scattering times. At that time it was a challenge to fabricate samples clean enough to satisfy this condition, so the first experimental observation of the hydrodynamic regime was demonstrated only in 1995 by de Jong and Molenkamp <cit.>.
It is well known that the scattering times τ_ee, τ_ei and τ_eph strongly depend on temperature <cit.>. Electron-impurity scattering processes are most essential at low temperatures, whereas the electron-phonon mechanism becomes dominant for high temperatures. In certain materials, a hydrodynamic regime is thus reached at intermediate temperatures if they are sufficiently clean.
In the recent past, the technological progress in the fabrication of 2D materials has reignited the interest in electron hydrodynamics <cit.>. In particular, monolayer graphene with its linear Dirac-like spectrum has become a fruitful experimental platform to investigate hydrodynamic transport <cit.>. It was shown that the hydrodynamic regime in clean graphene can be realized at temperatures on the order of 100K <cit.>. Many peculiar transport properties have been demonstrated in the hydrodynamic regime <cit.>. For instance, Johnson thermometry measurements show a significant increase in the thermal conductivity and the breakdown of the Wiedemann-Franz law in graphene <cit.>. This is possible due to a decoupling of charge and heat currents within the hydrodynamic regime and can be regarded as a signature of a Dirac fluid <cit.>. Viscous electron flow through constrictions in graphene has revealed superballistic behavior <cit.>, i.e., a conductance exceeding the maximum conductance possible for ballistic electrons in the same geometry <cit.>. Another signature of collective viscous behavior is the non-local negative resistance in a graphene strip <cit.>. A viscous flow of the Dirac fluid in graphene was also confirmed using a quantum spin magnetometer <cit.>. Besides graphene, the electron hydrodynamics regime has been theoretically investigated and in certain cases also experimentally confirmed in 2D anomalous Hall materials <cit.>, in anisotropic materials <cit.>, in Coulomb drag geometries <cit.>, in Weyl semimetals <cit.>, in gallium arsenide <cit.>, and in 2D electron gases with spin-orbit interaction <cit.>.
Nowadays, it has become possible to fabricate a plethora of hybrid systems based on graphene <cit.>, for instance by combining graphene with adatoms <cit.>, two-dimensional transition metal dichalcogenides <cit.>, or thin metallic substrates <cit.>, which make it possible to manipulate the spin degree of freedom and are promising for spintronics <cit.>. Importantly, even in these hybrid structures, graphene with globally induced spin-orbit coupling remains essentially free from defects and impurities. The intrinsic spin-orbit coupling in graphene is weak, but in these hybrid structures, proximity-induced spin-orbit interaction can be large and can change the electronic band structure substantially. Intrinsic spin-orbit coupling opens a gap at the K point and is related with the pseudospin inversion asymmetry, whereas Rashba spin-orbit (RSO) coupling preserves the gapless nature of graphene <cit.>. Typically the RSO interaction appears due to the structural inversion asymmetry brought about by the substrate or adatoms <cit.>. It is worth mentioning that the induced RSO coupling can reach larger values in other two-dimensional materials such as silicene, germanene, stanene, phosphorene, arsenene, antimonene, and bismuthene <cit.>.
In this work we study the effect of RSO coupling on hydrodynamic transport in graphene. We allow the spin-orbit coupling to be on the same order as the hydrodynamic temperature and assume that the electron-electron scattering time remains the shortest time scale. The intrinsic spin-orbit coupling can be tuned to small values <cit.>, so we ignore its influence on transport properties and assume that the system remains gapless. In order to derive the hydrodynamic equations, we use the kinetic (Boltzmann) equation in the diagonal basis of the unperturbed Hamiltonian, which includes the Dirac spectrum and the RSO interaction. The collision integral on the right-hand side of the kinetic equation accounts for two-particle scattering. The RSO coupling results in the appearance of so-called Dirac factors in the two-body electron-electron interaction Hamiltonian. For our derivation we mainly follow Ref. [Narozhny2019], where the necessary calculations were performed for pristine graphene.
The rest of this article is organized as follows. In Sec. <ref>, we introduce the Hamiltonian of the 2D system under consideration. In Sec. <ref>, we provide the Boltzmann equation for the system taking into account the spin-orbit split conduction bands. In Sec. <ref>, using the kinetic equation and thermodynamic relations, we derive the generalized hydrodynamic Navier-Stokes equation. Finally, we present our conclusions in Sec. <ref>. The details of the calculations and additional information are presented in the Appendices. Throughout the paper, we set e=ħ=k_B=1.
§ SYSTEM HAMILTONIAN
In this section, we derive the Hamiltonian of the system under consideration. Specifically, we consider a single layer of graphene on a substrate which enhances the RSO coupling <cit.>. The first-quantized one-body Hamiltonian of graphene near the K point in the momentum representation is given by
H_0=v(σ_x k_x+σ_y k_y),
where v ≃ 10^6m/s is the Fermi velocity for spinless electrons, and σ_x,y are the pseudospin Pauli matrices acting on the two-dimensional space of sublattices. This Hamiltonian describes a linear gapless spectrum near a given Dirac point. The large separation in momentum space suppresses (inter-valley) scattering between the K and K' points, so we focus on a single Dirac cone in the following.
A broken structural inversion symmetry results in the RSO interaction Hamiltonian
H_R=λ_R(σ_x s_y-σ_y s_x),
where λ_R denotes the strength of the RSO coupling and s_x,y are the Pauli matrices corresponding to the electron spin. It is worth mentioning that the RSO term in graphene does not depend explicitly on momentum in contrast to the general case for two-dimensional electron gases <cit.>. In order to simplify the calculation, we neglect two additional perturbations which can be present in graphene, namely a staggered sublattice potential and the intrinsic spin-orbit coupling. Passing on to a second-quantized description, the graphene Hamiltonian in the presence of the RSO term has the form
H=H_0+H_R=∑_n𝐤ϵ_n𝐤 c^†_n𝐤c_n𝐤.
The eigenstates of the single-particle Hamiltonian (<ref>) are denoted by |n𝐤⟩, where n denotes the band index and 𝐤=(k_x,k_y) is the wave vector in 2D momentum space. The annihilation and creation operators of an electron with momentum 𝐤 in band n satisfy the standard anti-commutation relations, {c^†_n𝐤,c_n^'𝐤^'}=δ_nn^'δ_𝐤𝐤^'. The RSO coupling leaves the spectrum gapless at |𝐤|=0 and one finds the dispersion relations for the four bands, namely
ϵ_n𝐤=
∓λ_R-√(v^2𝐤^2+λ^2_R), n=v_1,v_2,
∓λ_R+√(v^2𝐤^2+λ^2_R), n=c_1,c_2,
where the band indices c_1,2 and v_1,2 refer to the spin-split conductance and valence bands, respectively.
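For orientation, the following short Python sketch evaluates the four branches of this dispersion (with ħ=1 and illustrative parameter values); at k=0 the c_1 and v_2 branches touch at zero energy, reflecting the gapless spectrum.

import numpy as np

def bands(vk, lam_R):
    # vk = v*|k| and lam_R in the same energy units (hbar = 1)
    root = np.sqrt(vk**2 + lam_R**2)
    return {"v1": -lam_R - root, "v2": lam_R - root,
            "c1": -lam_R + root, "c2": lam_R + root}

print(bands(vk=0.0, lam_R=0.01))   # gapless: c1 and v2 meet at zero energy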
The dominant scattering mechanism to reach the hydrodynamic regime is the electron-electron interaction <cit.>. In second quantization, the electron-electron interaction in the eigenbasis of the diagonal Hamiltonian (<ref>) can be written as
H_ee =1/(2S)∑_n_1 n_3 𝐤_1 𝐤_3∑_n_2 n_4 𝐤_2 𝐤_4∑_𝐪 V_𝐪 F^n_1 n_3_𝐤_1 𝐤_3 F^n_2 n_4_𝐤_2 𝐤_4
×δ_𝐤_1+𝐪, 𝐤_3δ_𝐤_2-𝐪, 𝐤_4 c^†_n_3 𝐤_3 c^†_n_4 𝐤_4c_n_2 𝐤_2 c_n_1 𝐤_1,
where S is the 2D system volume. The interaction potential V_𝐪 is given by the Fourier transform of the real-space interaction potential, V(𝐫_1-𝐫_2)=(1/S)∑_𝐪 V_𝐪 e^i𝐪(𝐫_1-𝐫_2), and has the symmetry V_-𝐪=V_𝐪. The Dirac factors in the presence of RSO interaction are represented by the following matrix element (see App. <ref> for details)
F^nn^'_𝐤𝐤^' =
1/2ξ^n_𝐤ξ^n^'_𝐤^'[
1 + nn^' g^n_𝐤,1(g^n^'_𝐤^',1)^∗
+
g^n_𝐤,2(g^n^'_𝐤^',2)^∗ +
nn^'g^n_𝐤,3(g^n^'_𝐤^',3)^∗],
where
ξ^n_𝐤 = [1+(ζ^n_𝐤)^2]^-1/2, ζ^n_𝐤 = ϵ_n𝐤/(v|𝐤|),
g^n_𝐤,1 = iζ^n_𝐤e^-iθ_𝐤,
g^n_𝐤,2 = ζ^n_𝐤e^-iθ_𝐤,
g^n_𝐤,3 = ie^-2iθ_𝐤,
and θ_𝐤 denotes the angle of the 2D vector 𝐤 with respect to the k_x axis. The superscript n denotes the band and n=+1 (n=-1) corresponds to the v_1, c_1 (v_2,c_2) bands. It is worth mentioning that (F^nn^'_𝐤𝐤^')^∗=F^n^' n_𝐤^'𝐤, which guarantees the hermiticity of the electron-electron interaction Hamiltonian.
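A direct numerical transcription of this matrix element for the two conduction bands reads as follows (a sketch for illustration only; n=+1 corresponds to c_1 and n=-1 to c_2, and all inputs are assumed to be in units with ħ=1):

import numpy as np

def dirac_factor(k, thk, kp, thkp, n, nprime, lam_R, v=1.0):
    def spinor_parts(kk, th, band):
        eps = -band * lam_R + np.sqrt((v * kk)**2 + lam_R**2)  # conduction bands only
        zeta = eps / (v * kk)
        xi = 1.0 / np.sqrt(1.0 + zeta**2)
        g1 = 1j * zeta * np.exp(-1j * th)
        g2 = zeta * np.exp(-1j * th)
        g3 = 1j * np.exp(-2j * th)
        return xi, g1, g2, g3
    xi, g1, g2, g3 = spinor_parts(k, thk, n)
    xip, h1, h2, h3 = spinor_parts(kp, thkp, nprime)
    return 0.5 * xi * xip * (1 + n * nprime * g1 * np.conj(h1)
                             + g2 * np.conj(h2) + n * nprime * g3 * np.conj(h3))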
§ BOLTZMANN EQUATION
The kinetic equation can be obtained by applying the Keldysh technique to the total Hamiltonian given in the previous section and applying perturbation theory and a semiclassical approximation. The details of these steps are standard and explained in many reviews <cit.>. Choosing a chemical potential above the charge neutrality point, we need to consider only the two conduction bands. We denote the corresponding semiclassical distribution functions as f_n_1 𝐤_1≡ f_n_1 𝐤_1(𝐫,t), where n_1=+1 (n_1 = -1) corresponds to the lower (upper) conduction band c_1 (c_2). The exact form of the distribution functions obtained from the Keldysh Green's functions is provided in Refs. [Rammer1986] and [Kita2010]. In our effective two-band system, the Boltzmann equation for the distribution functions can be written as
∂ f_n_1𝐤_1/∂ t+∂ϵ_n_1𝐤_1/∂𝐤_1·∂ f_n_1 𝐤_1/∂𝐫-∂φ/∂𝐫·∂ f_n_1 𝐤_1/∂𝐤_1=ℐ[f_n_1𝐤_1],
where φ(𝐫,t) is an external field applied to the system and ℐ[f_n_1 𝐤_1] is the electron-electron collision integral, which is a functional of the distribution functions. Our main interest is in the hydrodynamic regime where electron-electron interactions dominate. For simplicity, we therefore omit the scattering integrals related to electron-hole recombination as well as the scattering integrals related to impurity and phonon scattering. In the two-particle scattering approximation the scattering integral has the form
ℐ[f_1]=-2π𝒩∑_23 4W^3 4_12[𝒢^3 4_12-𝒢_3 4^12],
where we used the shorthand notation 1≡(n_1_1) etc. The outgoing and incoming fluxes are denoted by 𝒢^3 4_12=f_1f_2(1-f_3)(1-f_4) and the valley degeneracy factor 𝒩=2. The two-particle scattering rate is given by Fermi's golden rule
W^3 4_12=|⟨ 3 4|H_ee|12⟩_c|^2δ(E_i-E_f),
where i=|12⟩ and f=|34⟩ denote, respectively, the initial and final states of the scattering process. Moreover, E_i=ϵ_n_1𝐤_1+ϵ_n_2𝐤_2 is the initial state energy, the subscript c means that only connected diagrams are taken into account, and the delta function is a consequence of energy conservation. Applying Wick's theorem, we obtain the following expression for the two-particle transition matrix element
⟨ 3 4|H_ee|12⟩_c
=
1/S[V_𝐤_1-𝐤_4F^n_1n_4_𝐤_1 𝐤_4F^n_2n_3_𝐤_2𝐤_3
-
V_𝐤_1-𝐤_3F^n_1n_3_𝐤_1 𝐤_3F^n_2n_4_𝐤_2𝐤_4]δ_𝐤_1+𝐤_2,𝐤_3+𝐤_4,
where the Kronecker delta ensures momentum conservation in the two-particle collision process. It is worth mentioning that in Ref. [Narozhny2019] only the first term of Eq. (<ref>) is kept, thus ignoring the interference effects. In this case, at zero spin-orbit coupling, our results reproduce those of Ref. [Narozhny2019], namely |⟨ 3 4|H_ee|12⟩_c|^2=|V_𝐤_1-𝐤_3|^2 Θ^n_1n_3_𝐤_1𝐤_3Θ^n_2n_4_𝐤_2𝐤_4δ_𝐤_1+𝐤_2,𝐤_3+𝐤_4 with Dirac factors for pristine graphene Θ^nn^'_𝐤𝐤^'=(1+nn^'𝐞_𝐤·𝐞_𝐤^')/2, where 𝐞_𝐤=𝐤/|𝐤| is a unit vector in the direction of the momentum.
The strong interactions cause the electrons to relax on a short time scale to local equilibrium distributions
f^eq_m𝐤(𝐫,t)={1+exp[(ϵ_m𝐤-μ(𝐫,t)-𝐮(𝐫,t)·𝐤)/T(𝐫,t)]}^-1,
where μ is the chemical potential and 𝐮 and T are the local drift velocity and the temperature, respectively. This form of the distribution function is a general consequence of Boltzmann's H-theorem, which states that in local equilibrium the entropy production must vanish. The latter is defined as
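In numerical work the local equilibrium distribution can be coded directly; a minimal sketch (ħ=k_B=1, all arguments in consistent units) is:

import numpy as np

def f_eq(eps, mu, u, k, T):
    # eps: band energy; u, k: 2D drift velocity and wave vector; T: local temperature
    return 1.0 / (1.0 + np.exp((eps - mu - np.dot(u, k)) / T))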
[∂ S/∂ t]_coll.=1/4∑_{n}∫d𝐤_1/(2π)^2∫d𝐤_2/(2π)^2∫d𝐤_3/(2π)^2∫d𝐤_4/(2π)^2|⟨ 3 4|H_ee|1 2⟩_c|^2δ(𝐤_i-𝐤_f)δ(E_i-E_f)[𝒢^12_3 4-𝒢_12^3 4]log[𝒢^12_3 4/𝒢_12^3 4],
where {n}=(n_1,n_2,n_3,n_4) and 𝐤_i,f, and E_i,f denote the total momentum and energy, respectively, of the initial and final states. Analogous expressions have been already derived for different systems, in particular for Fermi liquids and (non-linear) Luttinger liquids <cit.>. Using the property (𝒢^12_3 4-𝒢_12^3 4)log (𝒢^12_3 4/𝒢_12^3 4)>0, one obtains indeed [∂ S/∂ t]_coll.≥ 0 and the local distribution function (<ref>) can be deduced from the zero entropy production condition, i.e., [∂ S/∂ t]_coll.=0. The kinetic equation (<ref>) and the local distribution function (<ref>) will be used in the next section to derive the hydrodynamic equations.
§ NAVIER-STOKES EQUATION
In principle, hydrodynamic equations can be formulated on the basis of the conservation laws of particle number, momentum and energy <cit.>. Moreover, it is known that the hydrodynamic equations can be derived explicitly from the kinetic equation, as was done in Ref. [Narozhny2019] for graphene without spin-orbit coupling. We proceed along similar lines and start by considering the continuity equations and conservation laws arising from the Boltzmann equation (<ref>).
The continuity equation for the particle number can be obtained by integrating Eq. (<ref>) over momentum. In this case the right-hand side of the kinetic equation vanishes and introducing the particle number and current
n(𝐫,t) = ∑_m∫d𝐤/(2π)^2f_m𝐤(𝐫,t),
𝐣(𝐫,t) =∑_m ∫d𝐤/(2π)^2𝐯_m𝐤 f_m𝐤(𝐫,t),
where 𝐯_m𝐤=∂ϵ_m𝐤/∂𝐤=v^2 𝐤/√(v^2 𝐤^2+λ^2_R),
one arrives at the continuity equation for the particle density
∂ n/∂ t+∇·𝐣=0.
Similarly, introducing the imbalance particle number and the imbalance current
n_I(𝐫,t) = ∑_m∫d𝐤/(2π)^2 m f_m𝐤(𝐫,t),
𝐣_I(𝐫,t) =∑_m ∫d𝐤/(2π)^2 m𝐯_m𝐤 f_m𝐤(𝐫,t),
one straightforwardly obtains the continuity equation for imbalance quantities
∂ n_I/∂ t+∇·𝐣_I=0.
To obtain the continuity equation for the energy density n_E(,t) and the energy (heat) current 𝐣_E(,t), both sides of Eq. (<ref>) are multiplied by ϵ_n and integrated over momentum. As a result, one finds
∂ n_E/∂ t+∇·𝐣_E=𝐄·𝐣,
where the Joule heat term on the right-hand side contains the electric field 𝐄=-∇φ.
In order to derive the continuity equation for the momentum density, one needs to multiply the kinetic equation by the components of and integrate over momentum. Therefore, the continuity equation in this case has the form
∂ n^i_𝐤/∂ t+∑_j∂Π^ij/∂ j =nE_i, i, j ∈{ x,y},
where the momentum density and momentum flux tensor are given by
n^i_𝐤(𝐫,t) =∑_m ∫d𝐤/(2π)^2 k_if_m𝐤(𝐫,t),
Π^ij(𝐫,t) =∑_m ∫d𝐤/(2π)^2 k_i v^j_m𝐤 f_m𝐤(𝐫,t).
Next we derive the macroscopic expressions which relate the currents with the densities and the drift velocity 𝐮(,t). Using the local distribution functions (<ref>) and the definitions of the densities and the currents, one can show the following relations (see App. <ref> for details)
𝐣 = n𝐮, 𝐣_I = n_I𝐮, 𝐣_E = W𝐮,
𝐧_𝐤 = v^-2(W+λ_R n_I)𝐮,
Π^i j = P δ_i j+v^-2(W+λ_R n_I)u_iu_j+Π^ij_d,
where W is the enthalpy per volume (it has a dimension of pressure and we will call it enthalpy instead of enthalpy density below) and P is the pressure. They are related with each other through the energy density <cit.>, namely
W=n_E+P,
and, according to thermodynamics, the pressure in the local equilibrium state is given by
P=1/β∑_n ∫d𝐤/(2π)^2log[1+e^-β(ϵ_n𝐤-μ-𝐮·𝐤)],
where β=1/T is a local inverse temperature.
Finally, Π^i j_d in Eq. (<ref>) is a dissipative contribution which is related to the electron shear viscosity (see App. <ref>),
Π_d=-νW̃/v^2[ (∂_x u_x-∂_y u_y)τ_z+ (∂_x u_y +∂_y u_x)τ_x],
where τ_x,z are Pauli matrices, W̃=W+λ_R n_I, and ν is the (static) kinematic viscosity
ν=v_F^2/(4τ_ee^-1),
with τ_ee being the electron-electron scattering time and v_F=∂ϵ_n𝐤/∂ k|_ϵ_n𝐤=μ the Fermi velocity.
First, we investigate how spin-orbit coupling affects τ_ee^-1. In order to evaluate the scattering integral, we linearize the Boltzmann equation (<ref>) by assuming small u⃗ and a nonequilibrium distribution function which is localized near the Fermi level. We refer to App. <ref> for the detailed derivation.
The electron-electron (e-e) interactions conserve particle number, energy and momentum which, respectively, correspond to the zeroth and the first-harmonic angular function of the nonequilibrium distribution [see Eq. (<ref>)]. Thus, the relaxation originates from second harmonics or higher. It is known that even harmonics decay faster than the odd ones <cit.> and here we assume that the viscosity only comes from the second harmonic. We then obtain τ_ee^-1=∑_mγ_2^m/g_F^m where γ_2^m is the e-e scattering rate of the second harmonic nonequilibrium distribution and g_F^m is the density of states at the Fermi energy of the conduction band m. Note that the electron wave functions inside the Dirac factor of the Coulomb matrix elements (<ref>) play a crucial role in determining the dependence of γ_2^m on λ_R. Eventually one finds that γ_2^m scales with T^2 as
γ_2^m=32/3e^4 T^2/ v^4γ̃^m(λ̃_R, d̃),
where
γ̃^±(λ̃_R, d̃)
=
∫_0^2 dq̃ (q̃-q̃^3/4)/(q̃+d̃)^2 {√(4-q̃^2)
-
q̃[ tan^-1(√(4-q̃^2)/q̃) ∓ tan^-1(λ̃_R/q̃ √((4-q̃^2)/(1+λ̃_R^2)))] }^2 ,
is a dimensionless function whose value depends on λ̃_R=λ_R/(vk_F) and the (dimensionless) inverse screening length d̃ = d/k_F, where k_F is the Fermi wavevector. For this derivation, we have used a screened Coulomb potential with V_q=2π e^2/(q+d). We note that the ± sign in Eq. (<ref>) originates from the electron wavefunction which encodes the different dispersions of two conduction bands. Moreover, for determining τ_ee^-1, we need g_F^∓=(μ±λ_R)/(π v^2 ħ^2) for c_1 and c_2 respectively. Using as parameters λ_R=0.01 eV, μ=0.1 eV, v=10^6 m/s, and T=100 K, we obtain a typical electron-electron scattering rate of τ_ee^-1≈ 0.3/fs. This leads to a static viscosity ν_0≈ 0.7 nm^2/fs.
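The dimensionless functions γ̃^± can be evaluated by straightforward numerical quadrature, as in the sketch below (the dimensional prefactor of the γ_2^m expression and the densities of states are applied separately; the values of λ̃_R and d̃ are illustrative):

import numpy as np
from scipy.integrate import quad

def gamma_tilde(sign, lam_t, d_t):
    # sign=+1 selects the upper sign of the "-/+" in the integrand (gamma^+), sign=-1 the lower one
    def integrand(q):
        pref = (q - q**3 / 4.0) / (q + d_t)**2
        root = np.sqrt(4.0 - q**2)
        bracket = np.arctan2(root, q) - sign * np.arctan(
            lam_t / q * np.sqrt((4.0 - q**2) / (1.0 + lam_t**2)))
        return pref * (root - q * bracket)**2
    return quad(integrand, 0.0, 2.0)[0]

lam_t, d_t = 0.1, 0.5    # lambda_R/(v k_F) and d/k_F, illustrative values
gp, gm = gamma_tilde(+1, lam_t, d_t), gamma_tilde(-1, lam_t, d_t)
print("gamma^+ =", gp, " gamma^- =", gm, " average =", 0.5 * (gp + gm))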
We illustrate the dependence of γ_2^m on λ_R and d in Fig. <ref>(a). Dash-dotted lines denote the contribution from the c_1 (m=-1) band, the dashed lines are from the c_2 (m=+1) band, and the solid lines denote the average defined as γ_2^ avg=1/2∑_m γ_2^m. As λ_R increases, the scattering rate can be non-monotonic and this non-trivial behavior originates from the electron wave functions. We note that the scaled values λ̃_d and d̃ can in principle be different in the two bands as they will have different Fermi momenta k_F. Here, however, we assume μ≫λ_R so that this disparity is negligible. We also plot the static viscosity as a function of λ_R in Fig. <ref>(b). Based on these results, we predict that the viscosity can change significantly due to spin-orbit coupling.
Now, we have all ingredients to derive the Navier-Stokes equation. We find
∂_t 𝐧_𝐤 =v^-2(W̃∂_t 𝐮+𝐮∂_t W̃),
∂Π^i j/∂ j = ∂ P/∂ i +1/v^2{W̃(𝐮·∇ u_i - ν∇^2 u_i) +u_i∇· (W̃𝐮)}.
Then, from Eqs. (<ref>) and (<ref>) one obtains
∂_t(W̃-P)+∇· (W̃𝐮)=𝐄·𝐣.
Finally, substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we arrive at the hydrodynamic Navier-Stokes equation
W(∂_t+𝐮·∇ -ν∇^2) 𝐮+v^2 ∇ P+𝐮∂_t P+(𝐄·𝐣)𝐮
+λ_R n_I (∂_t+𝐮·∇ -ν∇^2)𝐮=v^2 n 𝐄.
This is the second main result of the paper. Compared to the case of pristine graphene without RSO coupling <cit.>, one finds a new term on the left-hand side of the equation. Since the spin-orbit coupling gives rise to an additional term in the enthalpy (<ref>), it affects the time derivative and the convective term.
§ CONCLUSIONS
To summarize, we have derived the hydrodynamic Navier-Stokes equation for two-dimensional graphene-like materials in the presence of RSO coupling. Compared to the result without spin-orbit coupling, the RSO interaction modifies the viscosity and gives rise to an additional term in the Navier-Stokes equation, which is similar to the convective term. The reason for this is the modification of the spectrum of the system due to the presence of the RSO interaction, which results in the addition of a term λ_R n_I to the enthalpy. As the momentum continuity equation is sensitive to the explicit form of the spectrum, this influences the final form of the hydrodynamic Navier-Stokes equation. In addition, we have derived the two-particle scattering rate explicitly in the presence of RSO coupling. This has allowed us to derive the corrections to the effective kinematic viscosity resulting from the RSO coupling.
We are grateful to Boris Narozhny and Kristof Moors for fruitful discussions. The authors acknowledge financial support from the National Research Fund Luxembourg under Grants C19/MS/13579612/HYBMES, C21/MS/15752388/NavSQM, PRIDE19/14063202/ACTIVE and INTER/MOBILITY/2022/MS/17549827/AndMTI.
§ CALCULATION OF MATRIX ELEMENT
In this Appendix we calculate the form of the matrix element ⟨ n^'^'|e^i|n ⟩ <cit.>. First we demonstrate the calculations for pristine graphene and then generalize the results for RSO interactions. The orthonormal eigenvectors of Hamiltonian (<ref>) have the form
ψ^n_𝐤(𝐫)=1/√(2)[𝒰_𝐤,A(𝐫)-ng^∗(𝐤)/|g(𝐤)|𝒰_𝐤,B(𝐫)], 𝒰_𝐤,j(𝐫)=1/√(N)∑_𝐑_j^'e^i𝐤(𝐑_j^'+τ_j)ϕ_z(𝐫-𝐑_j^'-τ_j), j=A,B,
where n=+1 (n=-1) corresponds to the conduction (valence) band, the subscripts A and B correspond to the two atoms in the unit cell, the pre-factor is a complex number g(𝐤)=k_x+ik_y=|𝐤|e^i θ_𝐤, where θ_𝐤 defines the direction of the 2D vector 𝐤, and N is a normalization coefficient (number of unit cells). The localized function ϕ_z(𝐫) corresponds to a p_z orbital of atoms A and B, 𝐑_j^' is a lattice vector, and τ_j denotes the position of the atom j ∈{A,B} within the unit cell. For instance, one can set τ_A=0, in which case τ_B denotes the vector from atom A to B. The p_z orbitals are approximated by hydrogen wave functions with an effective core charge Z ≃ 3.2 and Bohr radius a_0=0.53Å <cit.>, namely
ϕ_z(𝐫)=ϕ_n=2,l=1,m=0(r,θ,φ)=Z^3/2/(4√(2π)a^3/2_0) (Zr/a_0) e^-Zr/a_0cosθ.
To calculate the matrix element F^nn^'_𝐤𝐤^' we insert the projection operator ∫ d𝐫 |𝐫⟩⟨𝐫|=1, so that we have
F^nn^'_𝐤𝐤^'=∫ d𝐫⟨ n^'𝐤^'|𝐫⟩ e^i𝐪𝐫⟨𝐫| n𝐤⟩=∫ d𝐫 [ψ^n^'_𝐤^'(𝐫)]^∗e^i𝐪𝐫ψ^n_𝐤(𝐫)=1/2(1+nn^'g^∗(𝐤)g(𝐤^')/|g^∗(𝐤)||g(𝐤^')|) I(𝐪;𝐤,𝐤^'),
where it is assumed that the orbitals of the atoms A and B are orthogonal, and the auto-correlation term is given by the following integral expression
I(𝐪;𝐤,𝐤^')=∫ d𝐫𝒰^∗_𝐤^',A(𝐫)e^i𝐪𝐫𝒰_𝐤,A(𝐫)=∫ d𝐫𝒰^∗_𝐤^',B(𝐫)e^i𝐪𝐫𝒰_𝐤,B(𝐫)=1/N∑_𝐑_je^i(𝐤+𝐪-𝐤^')𝐑_jℐ(𝐪),
where the inner integral is denoted by ℐ(𝐪)=∫ d𝐫∑_δ e^i𝐤δϕ_z(𝐫)e^i𝐪𝐫ϕ_z(𝐫+δ) with δ=𝐑_j^'-𝐑_j. The main contribution to this integral comes from δ=0, so we approximate the integral as follows
ℐ(𝐪) ≃∫ d𝐫ϕ_z(𝐫)e^i𝐪𝐫ϕ_z(𝐫)=1/(1+|𝐐|^2)^3-6 |𝐐|^2/(1+|𝐐|^2)^4, |𝐐|=|𝐪|a_0/Z.
At small momentum transfer |𝐐| ≪ 1, the expression (<ref>) is of the order of unity, and taking into account the conservation of momentum in the scattering process, i.e., (1/N)∑_𝐑_jexp[i(𝐤+𝐪-𝐤^')𝐑_j]=1, we arrive at the final result for the matrix element in Eq. (<ref>), namely
F^nn^'_𝐤𝐤^'=1/2(1+nn^'g^∗(𝐤)g(𝐤^')/|g^∗(𝐤)||g(𝐤^')|).
In presence of RSO interaction the orthonormal eigenvectors of Eq. (<ref>) are given by
ψ^n_𝐤(𝐫)=1/(√(2)ξ^n_𝐤)[𝒰_𝐤,A↑(𝐫)+ng^n_𝐤,1𝒰_𝐤,A↓(𝐫)+g^n_𝐤,2𝒰_𝐤,B↑(𝐫)+ng^n_𝐤,3𝒰_𝐤,B↓(𝐫)],
where ξ^n_𝐤 = √(1+(ζ^n_𝐤)^2) and ζ^n_𝐤 = ϵ_n𝐤/(v|𝐤|). Further ignoring the cross correlations and repeating the same steps as in the case of pristine graphene above, we arrive at Eq. (<ref>) of the main text.
§ DERIVATION OF THE MACROSCOPIC EQS. (<REF>)
In this Appendix we derive the expressions which relate the macroscopic densities and currents using the local equilibrium distribution function (<ref>) and the definitions of particle, heat, momentum densities and currents. In our derivations we omit a degeneracy factor 𝒩=2 due to the valley degeneracy. First, we demonstrate that 𝐣(,t)=n(,t) 𝐮(,t) by considering the following difference
𝐣-n𝐮=∑_n ∫d𝐤/(2π)^2 f_n𝐤∂_𝐤(ϵ_n𝐤-𝐮·𝐤-μ)
=-1/β∑_n ∫d𝐤/(2π)^2∂_𝐤[log(1+e^-β[ϵ_n𝐤-𝐮·𝐤-μ])]=0,
where we used that ϵ_n𝐤=∓λ_R+√(v^2𝐤^2+λ^2_R) > 𝐮·𝐤, since the Fermi velocity is larger than the drift velocity, v>|𝐮|. An analogous calculation leads to 𝐣_I(𝐫,t)=n_I(𝐫,t) 𝐮(𝐫,t). Next, we turn to the energy current where we need to prove that 𝐣_E=W 𝐮. Since W=n_E+P this is equivalent to 𝐣_E-n_E 𝐮=P𝐮. This is shown by straightforward calculations
𝐣_E-n_E 𝐮
=
∑_n ∫d𝐤/(2π)^2ϵ_n𝐤 f_n𝐤∂_𝐤 (ϵ_n𝐤-𝐮·𝐤-μ)
=
-1/β∑_n ∫d𝐤/(2π)^2ϵ_n𝐤∂_𝐤log[ 1+e^-β(ϵ_n𝐤-𝐮·𝐤-μ)]
=
1/β∑_n ∫d𝐤 /(2π)^2log[1+e^-β(ϵ_n𝐤 -𝐮·𝐤-μ)] ∂_𝐤 (ϵ_n𝐤 -𝐮·𝐤-μ+𝐮·𝐤)
=
𝐮/β∑_n ∫d𝐤 /(2π)^2log[1+e^-β(ϵ_n𝐤 -𝐮·𝐤-μ)]
_=𝐮P
+
1/β∑_n ∫d𝐤 /(2π)^2∂_𝐤 (ϵ_n𝐤 -𝐮·𝐤-μ) log[1+e^-β (ϵ_n𝐤 -𝐮·𝐤-μ)]
_=0=P𝐮,
where the definition of pressure in Eq. (<ref>) was used. Thus we obtained the third expression of Eqs. (<ref>) in the main text. Next, we derive the expression for the momentum density, which can be presented as a product of enthalpy, imbalance density, and drift velocity. Let us consider the x component of the energy current
j^x_E =
∑_n ∫d𝐤 /(2π)^2ϵ_n𝐤 v^2 k_x/√(v^2 𝐤^2+λ^2_R) f_n𝐤
=
∑_n ∫d𝐤 /(2π)^2(1-n λ_R/√(v^2 𝐤^2+λ^2_R)) v^2 k_x f_n𝐤
=
v^2 ∑_n ∫d𝐤/(2π)^2 k_x f_n𝐤_=n^x_𝐤
-
λ_R ∑_n n ∫d𝐤/(2π)^2 v^2 k_x/√(v^2 𝐤^2+λ^2_R) f_n𝐤_=j^x_I
=
v^2 n^x_𝐤-λ_R j^x_I,
and analogously for the y component. Therefore, we get the fourth relation of Eqs. (<ref>) of the main text, i.e.,
𝐧_𝐤=v^-2𝐣_E+v^-2λ_R 𝐣_I=v^-2(W+λ_R n_I ) 𝐮.
Finally, we derive the last relation of Eqs. (<ref>) for the stress tensor. By definition, the diagonal element Π^xx_E of the stress tensor is given by
Π^xx_E
=
∑_n ∫d𝐤/(2π)^2 k_x v^x_n𝐤 f_n𝐤
=
∑_n ∫d𝐤/(2π)^2 k_x ∂_k_x(ϵ_n𝐤 -u_x k_x-u_y k_y -μ+u_x k_x) f_n𝐤
=
u_x ∑_n ∫d𝐤/(2π)^2 k_x f_n𝐤_=n^x_𝐤
-
1/β∑_n ∫d𝐤 /(2π)^2 k_x ∂_k_xlog[1+e^-β(ϵ_n𝐤 -u_x k_x-u_y k_y-μ)]
=
u_x n^x_𝐤_=v^-2W̃ u_x u_x
+1/β∑_n ∫d𝐤/(2π)^2log[1+e^-β(ϵ_n𝐤 -u_x k_x-u_y k_y-μ)]_=P
=P+v^-2W̃ u_x u_x,
where W̃=W+λ_R n_I. Moreover, the off-diagonal component Π^xy_E of the stress tensor is given by
Π^xy_E =
∑_n ∫d𝐤/(2π)^2 k_x v^y_n𝐤 f_n𝐤
=
∑_n ∫d𝐤/(2π)^2 k_x ∂_k_y(ϵ_n𝐤 -u_y k_y-u_x k_x -μ+u_y k_y) f_n𝐤
=
u_y ∑_n ∫d𝐤/(2π)^2 k_x f_n𝐤_=n^x_𝐤
-
1/β∑_n ∫dk_x/2π k_x ∫dk_y/2π∂_k_ylog[1+e^-β(ϵ_n𝐤 -u_x k_x-u_y k_y-μ)]_=0
=
n^x_𝐤 u_y
=
v^-2[W+λ_R n_I] u_x u_y,
and similar expressions can be obtained for Π^yy_E and Π^yx_E. In summary, this leads to the last lines in Eqs. (<ref>) of the main text.
§ EFFECTS OF SPIN-ORBIT COUPLING ON ELECTRON-ELECTRON SCATTERING RATE AND VISCOSITY
We consider the Boltzmann equation including only the electron-electron scattering integral,
∂ f_n_1𝐤_1/∂ t+∂ϵ_n_1𝐤_1/∂𝐤_1·∂ f_n_1 𝐤_1/∂𝐫-∂φ/∂𝐫·∂ f_n_1 𝐤_1/∂𝐤_1=ℐ[f_n_1𝐤_1],
where the scattering integral is given by
ℐ[f_1] =-2π𝒩∫_234δ_1234W_12^34[ f_1 f_2(1-f_3) (1- f_4) - (1-f_1) (1- f_2) f_3 f_4 ],
where 𝒩=2 accounts for the valley degeneracy.
We have shortened the indices for simplicity as {1,2,3,4}={n_1k⃗_1,n_2k⃗_2,n_3(k⃗_1+q⃗), n_4(k⃗_2-q⃗)} and δ_1234 contains the Dirac delta functions for both energy and momentum conservation. We simplify the band index degrees of freedom by neglecting interband transitions (n_1=n_3) and interband scattering (n_1=n_2). The interband transition normally occurs on fast time scales ω≫ 1/τ_ee relevant, e.g., to optics which is not amenable for hydrodynamics. Interband scattering between the lower and upper conduction band will give mutual drags that cancel each other in the Navier-Stokes equation [summing up two c bands contribution in Eq. (<ref>)]. To simplify the calculation of scattering integral, we assume a small drift velocity so that the local equilibrium distribution is given by,
f_k⃗^eq = 1/{1+ exp[β (ϵ_k⃗-μ )]}.
We linearize the collision integral (<ref>) by writing
f_k⃗ = f^eq_k⃗ +δ f =f^eq_k⃗ - (∂ f^eq_k⃗/∂ϵ) F(r⃗,θ_k⃗),
where the nonequilibrium distribution δ f is assumed to describe the low-temperature regime, in which - ∂ f^eq_k⃗/∂ϵ can be approximated as a delta function peaked at μ, and where the k⃗ dependence of F enters only through the azimuthal angle θ_k⃗ between k⃗ and the k_x axis. Expanding F into angular harmonics,
F(r⃗,θ_k⃗)=∑_n=-∞^∞ e^in θ_k⃗ F_n (r⃗)
We see that F_0 is related to the density fluctuations,
n (r⃗,t) = ∫ d^2k⃗ [f(r⃗,k⃗,t)-f^eq (ϵ_k⃗)]
= g_F F_0 (r⃗,t),
where g_F = ∫ d^2k⃗ δ (μ - ϵ_k⃗) is the local density of states at the Fermi level.
The functions F_± 1 are related to the current density,
j⃗ (r⃗,t) = ∫ d^2 k⃗ v⃗_k⃗ [f(r⃗,k⃗,t)-f^eq(ϵ_k⃗)] = 1/2 g_F v_F ( F_1 (r⃗,t) + F_-1 (r⃗,t), i [ F_1 (r⃗,t) - F_-1 (r⃗,t)] )^T≡n̅u⃗(r⃗,t),
where n̅ = ∫ d^2k⃗ f^eq (ϵ_k⃗)=g_Fμ is the equilibrium density and this equation defines the drift velocity u⃗(r⃗,t).
The functions F_± 2 are related to the stress tensor,
Π^ij = ∫ d^2 k⃗ k_i v_k⃗,j [f(r⃗,k⃗,t) -f^eq(ϵ_k⃗)],
Π^xx = g_F k_F v_F/4 [2 F_0 (r⃗,t) + F_2(r⃗,t) + F_-2(r⃗,t)],
Π^xy = Π^yx = g_F k_F v_F/4 i[ F_-2(r⃗,t) - F_2(r⃗,t)],
Π^yy = g_F k_F v_F/4 [2 F_0 (r⃗,t) - F_2(r⃗,t) - F_-2(r⃗,t)].
We can express this tensor in a compact form as
Π = g_F k_F v_F/4 [2 F_0 + ( F_2 + F_-2)τ_z+ i( F_2-F_-2)τ_x],
where τ_x,z are the Pauli matrices.
Next, we multiply Eq. (<ref>) by e^-im θ_k⃗ and integrate over k⃗ to obtain
∂_t F_m (r⃗,t)+v_F/2[∂_x ( F_m-1+ F_m+1)-i∂_y ( F_m-1- F_m+1)]
-e v_F/2[ E_x(δ_m,1+δ_m,-1)-iE_y(δ_m,1-δ_m,-1)] = -γ_m/g_F F_m.
As will become clear below, the right-hand side follows from the linearization of the scattering integral, and only even harmonics with |m|>1 give a nonzero γ_m.
For m=0, we obtain the continuity equation
∂_t n̅+∇j⃗ =0.
For m=± 1, we obtain the linearized Navier-Stokes equation
∂_t u⃗ +1/ρ∇·Π -n̅/ρE⃗ =0,
where ρ=n̅ k_F/v_F is the mass density and in graphene ρ = W̃/v^2.
We can approximately close the recursion relation (<ref>) by setting F_m=0 for |m|≥ 3 <cit.>. For n=2 and considering components at frequency ω, i.e., ∂_t F_n=-iω F_n, we obtain
v_F/2(∂_x F_1-i∂_y F_1) = -(γ_2/g_F-iω) F_2,
v_F/2(∂_x F_-1+i∂_y F_-1) = -(γ_-2/g_F -iω) F_-2.
We will see later that γ_2=γ_-2. Using the relationship in Eq. (<ref>), the stress tensor in Eq. (<ref>) becomes
Π = P - ρν[ (∂_x u_x-∂_y u_y)τ_z+ (∂_x u_y +∂_y u_x)τ_x],
and the Navier-Stokes equation becomes
∂_t u⃗ + 1/ρ∇ P -ν∇^2 u⃗ - n̅/ρE⃗ =0,
where P= μ n (r⃗,t) is the pressure, μ=k_F v_F/2 is the chemical potential, and the kinematic viscosity is given by
ν=v_F^2/[4(τ_ee^-1-iω)], τ_ee= g_F/γ_2.
The stress tensor in Eq. (<ref>) does not contain the convective term because we focus on the linear response, but it captures the dissipative term induced by the relaxation of the second harmonics F_± 2.
Next, we discuss the linear collision integral. Linearizing the collision integral means retaining the linear order of the distribution function products,
f_1 f_2 (1-f_3)(1-f_4) = (f_1^eq + δf_1)(f_2^eq + δf_2)[1-(f_3^eq+δf_3)][1-(f_4^eq+δf_4)] ≈ f_1^eq f_2^eq (1-f_3^eq)(1-f_4^eq) + δf_1 f_2^eq(1-f_3^eq)(1-f_4^eq) + f_1^eq δf_2 (1-f_3^eq)(1-f_4^eq) + f_1^eq f_2^eq(-δf_3)(1-f_4^eq) + f_1^eq f_2^eq(1-f_3^eq)(-δf_4).
Analogously, we obtain
(1-f_1)(1-f_2) f_3 f_4 ≈ (1-f_1^eq)(1-f_2^eq) f_3^eq f_4^eq + (-δf_1)(1-f_2^eq) f_3^eq f_4^eq + (1-f_1^eq)(-δf_2) f_3^eq f_4^eq + (1-f_1^eq)(1-f_2^eq)(δf_3) f_4^eq + (1-f_1^eq)(1-f_2^eq) f_3^eq (δf_4).
Therefore,
f_1 f_2 (1-f_3)(1-f_4) - (1-f_1)(1-f_2) f_3 f_4
≈ δf_1 [f_2^eq(1-f_3^eq)(1-f_4^eq) + (1-f_2^eq) f_3^eq f_4^eq] + δf_2 [f_1^eq(1-f_3^eq)(1-f_4^eq) + (1-f_1^eq) f_3^eq f_4^eq] - δf_3 [f_1^eq f_2^eq(1-f_4^eq) + (1-f_1^eq)(1-f_2^eq) f_4^eq] - δf_4 [f_1^eq f_2^eq(1-f_3^eq) + (1-f_1^eq)(1-f_2^eq) f_3^eq]
≈ f_1^eq f_2^eq(1-f_3^eq)(1-f_4^eq) [-h_1 - h_2 + h_3 + h_4],
where we have used f_1^ f_2^(1-f_3^ )( 1- f_4^) - (1-f_1^) (1-f_2^) f_3^ f_4^=0 as a consequence of energy and number conservation, and δ f_1 = -h_1 f_1^ (1-f_1^), where h_1 = β F(r⃗, θ_1).
The linearized collision integral reads
ℐ[f_1] =4π∫_234δ_1234W_12^34 f_1^ f_2^(1-f_3^) (1- f_4^) h_12^34,
where h_12^34 = (h_1+h_2) - (h_3+h_4). Substituting the harmonic expansion into h_i, we can write down the nth eigenmode as
ℐ_n[f_1] =4πβ F_n(r⃗) ∫_234δ_1234W_12^34 f_1^ f_2^(1-f_3^) (1- f_4^) (e^inθ_1+e^inθ_2-e^inθ_3-e^inθ_4),
Thus, this leads to the following definition of the electron-electron scattering rate
γ_n =4πβ∫_1234δ_1234W_12^34 f_1^ f_2^(1-f_3^) (1- f_4^) (1+e^in(θ_2-θ_1)-e^in(θ_3-θ_1)-e^in(θ_4-θ_1)).
where the last factor in Eq. (<ref>) comes from taking the integral ∫_1 e^-inθ_1ℐ[f_1].
We schematically look at the possible electron-electron (e-e) collisions in Fig. <ref> <cit.>. In Fig. <ref>, we consider a momentum transfer q⃗ along the k_x axis. We can do so without loss of generality because we can rotate Fig. <ref> by an angle θ_q⃗ and the resulting e-e collision rate is invariant. The kinematic restrictions due to momentum and energy conservation leave only a few possibilities for scattering processes. The first type is a head-on collision with k⃗_2=-k⃗_1, i.e., θ_2=π + θ_1, θ_3=-θ_1 and θ_4=π-θ_1, as depicted in Figs. <ref>(a) and (b). The second type is an exchange process where k⃗_1=k⃗_4 and k⃗_2=k⃗_3. Using this condition in Eq. (<ref>), we find that the exchange process gives ℐ_n[f_1]=0 and thus does not contribute to relaxation. Focusing on the head-on process, we note that n=0 and n=1 also give ℐ_n[f_1]=0. This is expected as a result of particle-number and energy (n=0) and momentum (n=1) conservation. Further calculations show that all odd-n modes give a vanishing ℐ_n[f_1].
For even n, we obtain
e^-inθ_1 h_12^34(n) = [1 + e^inπ - e^-2inθ_1 - e^in(π-2θ_1)] + c.c. = 4[1 - cos(2nθ_1)], where sinθ_1 = q/(2k_F).
For n=2, we obtain
e^-inθ_1 h_12^34(2) = 32[(q/2k_F)^2 - (q/2k_F)^4].
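The reduction from 4[1 - cos(2nθ_1)] to this polynomial in q/(2k_F) for n=2 is elementary trigonometry; the following sketch (added here purely for verification, not part of the original derivation) confirms it:

# Check that 4*(1 - cos(4*theta)) equals 32*[sin(theta)^2 - sin(theta)^4],
# i.e. 32*[(q/2k_F)^2 - (q/2k_F)^4] once sin(theta_1) = q/(2 k_F) is used.
import sympy as sp

theta = sp.symbols('theta', real=True)
lhs = 4 * (1 - sp.cos(4 * theta))
rhs = 32 * (sp.sin(theta)**2 - sp.sin(theta)**4)
print(sp.simplify(sp.expand_trig(lhs) - sp.expand(rhs)))   # expected: 0
print((lhs - rhs).subs(theta, 0.7).evalf())                # numerical spot check, ~0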
Next, we consider the actual e-e interaction matrix element W_12^34 = |V_12^34|^2 = |V_q ⟨3|1⟩⟨4|2⟩|^2, neglecting the interference term of Eq. (<ref>). For the 4×4 Hamiltonian of graphene with SOC we use the corresponding eigenstate,
|k⃗⟩ = (1/√2) (e^-iθ_k⃗/√(1+(ε_k⃗/vk)^2)) ( i e^-iθ_k⃗, -i ε_k⃗/(vk), ε_k⃗/(vk), e^iθ_k⃗ )^T.
For different bands, the eigenstates differ by energy dispersion. For simplicity, we can suppress the band index unless we explicitly write the dispersion, i.e. _k⃗^± = ±λ_R +√(λ_R^2+(v k)^2), where - for c_1 and + for c_2.
We obtain
|⟨k⃗_1+q⃗|k⃗_1⟩|^2 = [v^2 k_1|k⃗_1+q⃗| cos(θ_k⃗_1-θ_k⃗_1+q⃗) + ε_k⃗_1 ε_k⃗_1+q⃗] / [v^2 k_1|k⃗_1+q⃗| √(1+(ε_k⃗_1/vk_1)^2) √(1+(ε_k⃗_1+q⃗/v|k⃗_1+q⃗|)^2)],
|⟨k⃗_2-q⃗|k⃗_2⟩|^2 = [v^2 k_2|k⃗_2-q⃗| cos(θ_k⃗_2-θ_k⃗_2-q⃗) + ε_k⃗_2 ε_k⃗_2-q⃗] / [v^2 k_2|k⃗_2-q⃗| √(1+(ε_k⃗_2/vk_2)^2) √(1+(ε_k⃗_2-q⃗/v|k⃗_2-q⃗|)^2)].
We need to express cos(θ_k⃗_1+q⃗-θ_k⃗_1) in terms of ϕ=θ_q⃗-θ_k⃗_1,
cos(θ_k⃗_1+q⃗-θ_k⃗_1) = cosθ_k⃗_1+q⃗ cosθ_k⃗_1 + sinθ_k⃗_1+q⃗ sinθ_k⃗_1 = [(k⃗_1+q⃗)_x/|k⃗_1+q⃗|] cosθ_k⃗_1 + [(k⃗_1+q⃗)_y/|k⃗_1+q⃗|] sinθ_k⃗_1 = [(k_1 cosθ_k⃗_1 + q cosθ_q⃗)/|k⃗_1+q⃗|] cosθ_k⃗_1 + [(k_1 sinθ_k⃗_1 + q sinθ_q⃗)/|k⃗_1+q⃗|] sinθ_k⃗_1 = [k_1 + q(cosθ_q⃗ cosθ_k⃗_1 + sinθ_q⃗ sinθ_k⃗_1)]/|k⃗_1+q⃗| = (k_1 + q cosϕ)/|k⃗_1+q⃗|.
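As a sanity check on this angular identity, the following short numerical sketch (an addition with arbitrary test values, not part of the original derivation) confirms it:

# Numerical check of cos(theta_{k1+q} - theta_{k1}) = (k1 + q*cos(phi)) / |k1+q|,
# with phi = theta_q - theta_{k1}; all numbers below are arbitrary test values.
import numpy as np

k1_mag, th1 = 1.3, 0.4
q_mag, thq = 0.7, 2.1

k1 = k1_mag * np.array([np.cos(th1), np.sin(th1)])
q = q_mag * np.array([np.cos(thq), np.sin(thq)])
kpq = k1 + q

lhs = np.cos(np.arctan2(kpq[1], kpq[0]) - th1)
rhs = (k1_mag + q_mag * np.cos(thq - th1)) / np.linalg.norm(kpq)
print(np.isclose(lhs, rhs))  # True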
In the same way, we obtain cos(θ_k⃗_⃗2⃗-⃗q⃗-θ_k⃗_2)=(k_2-q cosϕ_2)/ |k⃗_⃗2⃗ ⃗-⃗q⃗|, where ϕ_2=θ_q⃗-θ_k⃗_⃗2⃗. As a result, we have
|⟨k⃗_1+q⃗|k⃗_1⟩|^2 = [v^2 k_1(k_1+q cosϕ) + ε_k⃗_1 ε_k⃗_1+q⃗] / [v^2 k_1|k⃗_1+q⃗| √(1+(ε_k⃗_1/vk_1)^2) √(1+(ε_k⃗_1+q⃗/v|k⃗_1+q⃗|)^2)],
|⟨k⃗_2-q⃗|k⃗_2⟩|^2 = [v^2 k_2(k_2-q cosϕ_2) + ε_k⃗_2 ε_k⃗_2-q⃗] / [v^2 k_2|k⃗_2-q⃗| √(1+(ε_k⃗_2/vk_2)^2) √(1+(ε_k⃗_2-q⃗/v|k⃗_2-q⃗|)^2)].
The energy conservation in the delta function of Eq. (<ref>) controls the allowed transitions. We split the delta function into two,
δ(_k⃗_1+_k⃗_2-_k⃗_⃗1⃗+⃗q⃗-_k⃗_⃗2⃗-⃗q⃗) = ∫ dωδ (_k⃗_1-_k⃗_⃗1⃗+⃗q⃗-ω) δ (_k⃗_2-_k⃗_⃗2⃗-⃗q⃗+ω)
With the aid of two delta functions, we change the product of Fermi distribution into a difference,
f^eq_k⃗_1(1-f^eq_k⃗_1+q⃗) = (f^eq_k⃗_1 - f^eq_k⃗_1+q⃗)/(1-e^βω),
f^eq_k⃗_2(1-f^eq_k⃗_2-q⃗) = (f^eq_k⃗_2 - f^eq_k⃗_2-q⃗)/(1-e^-βω).
Now the e-e scattering rate becomes
γ_2 = 4πβ ∫ d^2k⃗_1/(2π)^2 ∫ d^2k⃗_2/(2π)^2 ∫ d^2q⃗/(2π)^2 ∫ dω
× δ(ε_k⃗_1-ε_k⃗_1+q⃗-ω) [v^2 k_1(k_1+q cosϕ) + ε_k⃗_1 ε_k⃗_1+q⃗] / [v^2 k_1|k⃗_1+q⃗| √(1+(ε_k⃗_1/vk_1)^2) √(1+(ε_k⃗_1+q⃗/v|k⃗_1+q⃗|)^2)]
× δ(ε_k⃗_2-ε_k⃗_2-q⃗+ω) [v^2 k_2(k_2-q cosϕ_2) + ε_k⃗_2 ε_k⃗_2-q⃗] / [v^2 k_2|k⃗_2-q⃗| √(1+(ε_k⃗_2/vk_2)^2) √(1+(ε_k⃗_2-q⃗/v|k⃗_2-q⃗|)^2)] |V_q|^2
× [(f^eq_k⃗_1-f^eq_k⃗_1+q⃗)/(1-e^βω)] [(f^eq_k⃗_2-f^eq_k⃗_2-q⃗)/(1-e^-βω)] · 32[(q/2k_F)^2 - (q/2k_F)^4],
We shift the vector k⃗_1+q⃗ → -k⃗_1 so that
∫ d^2k⃗_1 [f^eq_k⃗_1 - f^eq_k⃗_1+q⃗] δ(ε_k⃗_1 - ε_k⃗_1+q⃗ - ω)… = ∫ d^2k⃗_1 [f^eq_k⃗_1 δ(ε_k⃗_1 - ε_k⃗_1+q⃗ - ω) - f^eq_-k⃗_1 δ(ε_-k⃗_1-q⃗ - ε_-k⃗_1 - ω)]… = ∫ d^2k⃗_1 f^eq_k⃗_1 [δ(ε_k⃗_1 - ε_k⃗_1+q⃗ - ω) - δ(ε_k⃗_1+q⃗ - ε_k⃗_1 - ω)]…,
and to get the last line, we used the symmetry of _k⃗_1 and f^_k⃗_1. Similarly for k⃗_2, we get,
∫ d^2k⃗_2 [f^eq_k⃗_2 - f^eq_k⃗_2-q⃗] δ(ε_k⃗_2 - ε_k⃗_2-q⃗ + ω)… = ∫ d^2k⃗_2 f^eq_k⃗_2 [δ(ε_k⃗_2 - ε_k⃗_2-q⃗ + ω) - δ(ε_k⃗_2-q⃗ - ε_k⃗_2 + ω)]….
Now we examine the energy conservation during the scattering process,
ε_k⃗_1+q⃗ - ε_k⃗_1 = √(λ_R^2 + v^2(k_1^2+q^2+2k_1q cosϕ)) - √(λ_R^2 + (vk_1)^2),
ε_k⃗_2-q⃗ - ε_k⃗_2 = √(λ_R^2 + v^2(k_2^2+q^2-2k_2q cosϕ_2)) - √(λ_R^2 + (vk_2)^2),
and perform the delta function integrations over angles,
∫ dϕ G(ϕ) {δ[√(λ_R^2 + v^2(k_1^2+q^2+2k_1q cosϕ)) - √(λ_R^2 + (vk_1)^2) + ω] - (ω→ -ω)} = ∑_j=1,2 { G(ϕ_j^+) |√(λ_R^2+(vk_1)^2)-ω| / |v^2 k_1 q sinϕ_j^+| - G(ϕ_j^-) |√(λ_R^2+(vk_1)^2)+ω| / |v^2 k_1 q sinϕ_j^-| },
where ϕ_j^± are the solutions that make the arguments in the delta function become zero
ϕ_j^+ = ±cos^-1[ (ω^2 - v^2q^2 - 2ω√(λ_R^2+(vk_1)^2)) / (2v^2k_1q) ], ϕ_j^- = ±cos^-1[ (ω^2 - v^2q^2 + 2ω√(λ_R^2+(vk_1)^2)) / (2v^2k_1q) ].
The form of Eq. (<ref>) makes the integration over ω, q and p rather intricate. To simplify the expression further, firstly we assume that ω∝ T is very small compared to the Fermi energy.
Then, Eq. (<ref>) will become
cosϕ_j^± = -q/(2k_1).
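This small-ω limit can be checked symbolically; the short sketch below (added for verification) confirms that both branches of Eq. (<ref>) collapse to -q/(2k_1), independently of λ_R:

# Check that cos(phi_j^+) and cos(phi_j^-) both tend to -q/(2*k1) as omega -> 0.
import sympy as sp

omega, v, q, k1, lamR = sp.symbols('omega v q k_1 lambda_R', positive=True)
root = sp.sqrt(lamR**2 + (v * k1)**2)
cos_plus  = (omega**2 - v**2 * q**2 - 2 * omega * root) / (2 * v**2 * k1 * q)
cos_minus = (omega**2 - v**2 * q**2 + 2 * omega * root) / (2 * v**2 * k1 * q)

print(sp.limit(cos_plus, omega, 0))   # -q/(2*k_1)
print(sp.limit(cos_minus, omega, 0))  # -q/(2*k_1)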
This simplifies the expression in Eq. (<ref>)
k_1 ≈ |k⃗_1+q⃗|, i.e., [-λ_R + √(λ_R^2 + v^2(k_1^2+q^2+2k_1q cosϕ))]^2 = [-λ_R + √(λ_R^2 + (vk_1)^2)]^2.
Performing the radial integration over k_1 yields,
∫ dk_1 k_1 f^eq_k⃗_1 { ∑_j=1,2 [ G^±(ϕ_j^+) |√(λ_R^2+(vk_1)^2)-ω| / |v^2 k_1 q sinϕ_j^+| - G^±(ϕ_j^-) |√(λ_R^2+(vk_1)^2)+ω| / |v^2 k_1 q sinϕ_j^-| ] } = ∫_q/2^k_F dk_1 k_1 f^eq_k⃗_1 G^±(ϕ) [-4ω / (v^2 k_1 q √(1-(q/(2k_1))^2))] = -(4ω k_F)/(2q v^2) { √(4-q̃^2) - q̃ tan^-1[√(4-q̃^2)/q̃] ∓ tan^-1[ (λ̃_R/q̃) √((4-q̃^2)/(1+λ̃_R^2)) ] }
where
G^±(ϕ) = [v^2 k_1(k_1+q cosϕ) + ε^±_k⃗_1 ε^±_k⃗_1+q⃗] / [v^2 k_1|k⃗_1+q⃗| √(1+(ε^±_k⃗_1/vk_1)^2) √(1+(ε^±_k⃗_1+q⃗/v|k⃗_1+q⃗|)^2)] = 1 - v^2q^2 / {2[v^2k_1^2 + (∓λ_R + √(λ_R^2+(vk_1)^2))^2]},
and q̃=q/k_F and λ̃_R=λ_R/(vk_F). The lower bound of the integral, k_1=q/2, is placed so that sinϕ is well defined. The upper bound, k_1=k_F, is due to the low-temperature limit f^eq_k⃗_1=θ(μ - ε_k⃗_1). The radial integration over k_2 gives the opposite sign of Eq. (<ref>). The opposite signs of the k_1 and k_2 integrals result in a positive value of the ω integral,
∫_-∞^∞ dω (-ω^2)/[(1-e^βω)(1-e^-βω)] = 2π^2/(3β^3).
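Since the integrand equals ω^2/[4 sinh^2(βω/2)], the value 2π^2/(3β^3) can also be confirmed by direct quadrature; the following is an added numerical sketch (β chosen arbitrarily):

# Numerical check of  ∫ dω (-ω²)/[(1 - e^{βω})(1 - e^{-βω})] = 2π²/(3β³),
# using the equivalent positive form ω² / (4 sinh²(βω/2)).
import numpy as np
from scipy.integrate import quad

beta = 1.7  # arbitrary inverse temperature

def integrand(w):
    if abs(w) < 1e-12:
        return 1.0 / beta**2  # limiting value at w = 0
    return w**2 / (4.0 * np.sinh(beta * w / 2.0)**2)

val, _ = quad(integrand, -80.0 / beta, 80.0 / beta, limit=200)
print(val, 2.0 * np.pi**2 / (3.0 * beta**3))  # the two numbers agree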
After performing the k⃗_1, k⃗_2 and ω integrals, we are left with a q integral as follows,
γ_2^± = (4π/ħ)(2π^2/(3β^2)) [4(2π)^3 e^4 k_F^2 / (ħ^4 v^4 (2π)^6)] ∫_0^2k_F q dq (1/q^2) { √(4-q̃^2) - q̃ tan^-1[√(4-q̃^2)/q̃] ∓ tan^-1[ (λ̃_R/q̃) √((4-q̃^2)/(1+λ̃_R^2)) ] }^2 × 32[(q/2k_F)^2 - (q/2k_F)^4] · 1/(q+d)^2 = (32/3) [e^4 (k_B T)^2/(ħ^5 v^4)] γ̃^±(λ̃_R, d̃).
We plot the viscosity as a function of λ_R in Fig. <ref>(b).
|
http://arxiv.org/abs/2307.04156v2 | 20230709115859 | Brylinski-Radon transformation in characteristic $p>0$ | [
"Deepam Patel",
"K. V. Shuddhodan"
] | math.AG | [
"math.AG",
"14F10, 14F20, 14F43, 14F45"
] |
Department of Mathematics, Purdue University,
150 N. University Street, West Lafayette, IN 47907, U.S.A.
[email protected]
Institut des Hautes Études Scientifiques, Université Paris-Scalay, CNRS, Laboratoire Alexandre Grothendieck, Le Bois-Marie 35 rte de Chartres, 91440 Bures-sur-Yvette, France
[email protected]
KVS was supported by the CARMIN project fellowship.
In this article, we characterize the image of the Brylinski-Radon transform in characteristic p>0 via Beilinson's theory of singular supports. We also provide an alternate proof of Brylinski's results over , which also works for sheaves with finite coefficients. Along the way, we also obtain a microlocal criterion for the descent of perverse sheaves which could be of independent interest.
Brylinski-Radon transformation in characteristic p>0
K.V. Shuddhodan
August 12, 2023
====================================================
§ INTRODUCTION
In <cit.>, Brylinski introduced topological (and geometric) versions of the classical Radon transforms and proved some fundamental properties of these transforms. The theory has had numerous applications, including to Lefschetz theory <cit.>, <cit.>. More recently, the Radon transform was crucially used by Beilinson <cit.> and Saito <cit.>, respectively, in the construction of the singular support and the characteristic cycle for constructible sheaves in the algebraic setting over arbitrary perfect fields. The main result of this article uses the theory of singular supports to characterize the image of the Radon transform, generalizing the work of Brylinski to arbitrary base fields (and finite coefficients). In particular, we answer a question raised by Brylinski <cit.>.
§.§ Summary of results:
§.§.§ Singular supports of étale sheaves:
Let k be an algebraically closed field of char. p ≥ 0, ℓ≠ p a fixed prime, Λ = ℤ/ℓ^n, and ^b_ctf(X,Λ) denote the derived category of bounded constructible étale sheaves of Λ-modules with finite tor-dimension on X. In the rest of the article, we denote this category simply by ^b_c(X). Given K ∈^b_c(X), K(n) will denote the usual Tate twist of K. If X is smooth and K ∈^b_c(X), then Beilinson <cit.> defined the singular support SS(K) ⊂^*X (see <ref> for a brief summary about singular supports). This is a closed conical subset of ^*X, and for K ≠ 0, SS(K) is equidimensional of dimension equal to dim(X). Moreover, when char(k)=0, SS(K) is Lagrangian[A closed conical subset of ^*X is said to be Lagrangian if the smooth locus of the closed subset is both isotropic and involutive with respect to the natural symplectic structure on ^*X.] <cit.>. However, this fails in positive characteristic <cit.>.
§.§.§ Main Result:
Let (X) ⊂^b_c(X) denote the abelian category of perverse sheaves (w.r.t middle perversity). In the following, given an object ∈^b_c(X), let ^i(K) ∈(X) denote the i-th perverse cohomology sheaf. If X is smooth[In this article, varieties smooth over k shall be assumed to be connected.] over k of dimension n, let (X) ⊂(X) denote the full Serre subcategory of locally constant perverse sheaves (i.e. complexes of the form [n] where is a locally constant constructible sheaf on X), and (X) the corresponding quotient category. One can realize (X) as the heart of the induced perverse t-structure on a localized triangulated category ^b_c(X)_T obtained by localizing ^b_c(X) along the multiplicative set of morphisms f such that ker(^i(f)) and coker(^i(f)) are locally constant perverse sheaves for all i. As above, let _T^i(K) ∈(X) denote the i-th `perverse cohomology' sheaf of the image of K in ^b_c(X)_T.
We now recall the Brylinski-Radon transform. Let ℙ denote projective space of dimension n ≥ 2 over k, and Y := (d) denote the denote the Grassmanian of d-planes (where 1 ≤ d ≤ n-1) in ℙ. Consider the incidence correspondence Q ⊂ℙ× Y. The Brylinski-Radon transform is defined as follows. Consider the diagram:
ℙ ⟵^p_1 Q ⟶^p_2 Y,
where p_i are the natural projections.
Given ∈^b_c(^n), let ℛ():= p_2,*p_1^†∈^b_c(Y)[Given a smooth morphism f:X → S of relative dimension d with geometrically connected fibers, we set f^†:=f^*[d]. Also, by f^†(S) we mean the full subcategory of (X) consisting of perverse sheaves of the form f^†M <cit.>, where M is a perverse sheaf on S.]. Similarly, we set () := p_1,*p_2^†().
Let C ⊂^* be a closed conical subset. The Brylinski-Radon transform of C is defined to be p_2∘p_1^∘(C) (see Section <ref> for the notation). A closed conical subset of ^*Y is said to be in the image of the Brylinski-Radon transform if it is contained in the Brylinski-Radon transform of a closed conical subset of ^*.
It follows from <cit.> that perverse sheaves whose singular support is in the image of the Brylinski-Radon transform form an abelian subcategory of (Y) (denoted below by (Y)_Rad) which is stable under extensions. Let (Y)_Rad be the full abelian subcategory of (Y) consisting of objects which are images of objects in (Y)_Rad. It is easy to see that both and naturally induce functors between ^b_c()_T and ^b_c(Y)_T. We are now ready to state the main result of this article.
With notation as above:
* is t-exact for the perverse t-structures on ^b_c()_T and ^b_c(Y)_T.
* The functor _T^d(n-d-1)∘(d(n-d)) ∘_T^0∘ is naturally equivalent to the identity functor on ().
* The functor ^0_T ∘ induces an equivalence of categories between () and (Y)_Rad.
§.§ Comparison with previous work
* If k =, and one considers constructible sheaves in the classical topology with coefficients, then this is one of the main results of Brylinski <cit.>. The problem of a characteristic p analogue of Brylinski's theorem was already posed as a question by Brylinski <cit.>. The results of this article answer this question in the affirmative albeit, with appropriate modifications to account for wild ramification.
* If char(k)=0, <cit.> and <cit.> imply that one can alternatively describe (Y)_Rad as those perverse sheaves who singular support is contained in p_2∘p_1^∘^*. In particular, the statement of Theorem <ref> is consistent with the analogous statement proved by Brylinski over the complex numbers.
* If d = n-1, then the aforementioned theorem gives an equivalence of categories between () and (). In this setting, Brylinski <cit.> also proves the result over an algebraic closure of a finite field as an application of the Deligne-Fourier transform in characteristic p>0.
§.§ Idea of the proof
In this section we briefly describe the ideas underlying the proof of Theorem <ref>.
§.§.§ Proof of Theorem <ref>, (1):
The proof is an easy application of Artin vanishing and is along the lines of the proof in <cit.>, where the case of n=d-1 is handled. The proof in <cit.> is in comparison microlocal in nature and does not carry over when the coefficients are finite.
§.§.§ Proof of Theorem <ref>, (2):
The essential point here is to understand the pushforward of the constant sheaf along the map Q ×_Y Q →×. This map is smooth outside the diagonal; however, the fibers of the map are Grassmannians. This allows us to compute ^∨∘ in the localized category ^b_c()_T (see (<ref>)) and deduce Theorem <ref>, (2). We do so without recourse to the decomposition theorem, which is technically important for us since we allow finite coefficients. As a corollary of the proof, we are able to show that ^0∘ is fully faithful and induces an isomorphism on Ext^1 (see Corollary <ref>).
§.§.§ Proof of Theorem <ref>, (3):
The proof of Theorem <ref>, (3) constitutes the technical heart of the paper. The first step is to prove a microlocal criterion for the descent of perverse sheaves. More precisely, we prove the following statement, which generalizes a result of Laumon[We thank Ahmed Abbes for pointing out the connection of our result with Laumon's work.] <cit.>. Let k be a perfect field and S/k a smooth variety. Let f: X → S be a proper and smooth morphism with geometrically connected and simply connected fibers.
Then a non-zero perverse sheaf L on X is of the form f^†M iff SS(L) ⊂ f^∘Λ, for a closed conical subset Λ⊂^*S of dimension equal to dim(S). Moreover, when char(k)=0, it suffices to assume that SS(L) ⊂ f^∘^*S.
Using this descent criterion and an inductive argument (see Proposition <ref>) we are able to show that simple objects in (Y)_Rad are in the image of the Radon transform. The inductive nature of our method naturally leads us to consider relative versions of Brylinski-Radon transforms and we develop the necessary background in Section <ref>. The base case (i.e. n-d=1) for the induction follows from the work of Laumon <cit.>. Finally using the isomorphism on Ext^1 (Corollary <ref>) we deduce Theorem <ref>, (3).
We would like to note that our proof of Theorem <ref> also applies to ℓ-adic étale sheaves using the notion of singular support for ℓ-adic sheaves as described in <cit.>. It also works when k= and one considers algebraically constructible sheaves in the analytic topology with Kashiwara-Schapira's <cit.> definition of singular supports.
Acknowledgements:
We would like to thank Ahmed Abbes for his interest and encouragement during the course of this project. KVS would like to thank Ofer Gabber and Ankit Rai for useful conversations. In particular, he is thankful to Ofer Gabber for presenting a counterexample to an optimistic form of Corollary <ref>, ultimately resulting in the formulation of Proposition <ref>. KVS would also like to thank Hiroki Kato for patiently answering his questions about sensitivity of vanishing cycles to test functions in positive characteristics.
§ BACKGROUND AND SOME PRELIMINARY OBSERVATIONS
§.§ Recollection of singular support
Let X be a smooth variety over a perfect base field k. Let C ⊂^*X denote a closed conical subset, and h: U → X a morphism with U smooth. Then h is said to be C-transversal if for all geometric points u of U,
ker(dh_u) ∩ C_h(u)∖{0} = ∅.
Note C-transversality implies that dh|_C ×_X U is finite and Beilinson defines h^∘(C) to be its image in
^*U, also a closed conical subset <cit.>. In particular, h^∘ always makes sense when h is a smooth morphism (since such morphisms are automatically C-transversal for any C). This will be the only relevant case for us. Similarly, for any closed conical subset C ⊂^*U whose base is proper over X, Beilinson defines h_∘(C) to be the image of dh^-1(C) under the natural projection ^*X ×_X U →^*X. This is a closed conical subset of ^*X.
For any sheaf K ∈^b_c(X), Beilinson defines the singular support SS(K) ⊂^*(X). We recall some properties of SS(K) which will be used in the following.
* For K ≠ 0, SS(K) is an equidimensional closed conical subset of ^*(X) of dimension equal to dim(X) <cit.> .
* Given an SS(K)-transversal morphism h: U → X, SS(h^*K) ⊂ h^∘(SS(K)) <cit.>. Moreover, one has equality if h is a smooth morphism <cit.>.
* Suppose f X → Y is a proper morphism of smooth varieties, then for any sheaf K on X, SS(f_*K) ⊂ f_0(SS(K)) <cit.>.
* SS(K) is the zero section (denoted below by 0_^*X) iff ℋ^i(K) are locally constant for all i and at least one of them is non-zero <cit.>.
* For any sheaf K one has SS(K)=⋃_αSS(K_α), where K_α runs over the various Jordan-Holder components of ^i(K) for every i <cit.>.
We record the following standard lemma for use below.
Let f: X → Y and g: Y → Z be smooth proper morphisms of smooth varieties over an algebraically closed field k.
* Given a conic Λ⊂^*X, (g_∘∘ f_∘)(Λ) = (g ∘ f)_∘(Λ).
* Given a conic Λ⊂^*Z, (f^∘∘ g^∘)(Λ) = (g ∘ f)^∘(Λ).
* Given a conic Λ⊂^*Y, (f_∘∘ f^∘)(Λ) = Λ.
* Consider a commutative square:
f: X → Y, g: X → Z, g': Y → W, f': Z → W, with g' ∘ f = f' ∘ g,
where all morphisms and varieties are smooth proper. Then, given Λ⊂^*Z, one has
((g')^∘∘ f'_∘)(Λ) =(f_∘∘ g^∘)(Λ)
The first three parts of the lemma are immediate from the definition. Using (3) we can reduce (4) to the case when the diagram is cartesian in which case the lemma is clear.
§.§ Relative Brylinski-Radon transform
In what follows, we shall fix a base scheme S which is assumed to be smooth over an algebraically closed field k.
Let be a vector bundle over S of rank n+1 ≥ 2. Let 0 ≤ d ≤ n-1 be an integer. We denote by (d,) the Grassmannian bundle parametrizing locally free quotients of ^∨ of rank d+1. In particular, given an S-scheme π: T → S, (d,)(T) consists of equivalence classes of quotients π^*^∨→ where is locally free of rank d+1. We denote by π_d, the canonical morphism from (d,) to S. It is a proper and smooth morphism of relative dimension (d+1)(n-d).
Note that we may identify (d,) with (n-d-1,^∨) by passing to duals.
Below, when working over S = (k) (where k is algebraically closed), we denote by (d,n)[We use the convention that (d,n)=∅ if d is negative.] the Grassmanian of d+1-planes in V = k^n+1. We shall also sometimes identify the latter with the d-planes in ^n.
The following decomposition theorem is well-known, and is recorded here for future use.
For any K ∈^b_c(S), there exists a functorial (in K) isomorphism
⊕_i=0^n K⊗ R^2iπ_d,Λ[-2i](i) ≃π_d,*π_d,^*K
Using the projection formula we may assume that K=Λ. In this case the result is a consequence of proper base change and <cit.> owing to the cohomology of Grassmannian satisfying hard Lefschetz (even with torsion coefficients).
§.§.§ The incidence correspondence as a Grassmannian bundle:
Given a pair of integers 0 ≤ d_1 < d_2 ≤ n-1, we denote by Q_d_1,d_2,⊂(d_1,) ×_S (d_2,) the incidence correspondence. More precisely, given a test scheme T as above, recall that an element of (d_1,) ×_S (d_2,)(T) is given by a tuple (upto equivalence) (π^*^∨→_1,π^*^∨→_2) where _i is a rank d_i +1 quotient. With this notation, Q_d_1,d_2,(T) consists of tuples such that there is a surjection _2 →_1 compatible with the maps π^*^∨→_i. Note that if such a surjection exists, it is unique. Moreover, this is a closed subscheme of (d_1,) ×_S (d_2,).
Let 0 →_n-d,^∨→π^*_d,^∨→_d+1,^∨→ 0 denote the universal exact sequence on (d,). Here
_n-d,^∨ (resp. _d+1,^∨) is the universal sub-bundle of rank n-d (resp. quotient of rank d+1). With this notation, one can identify Q_d_1,d_2,(T) as the rank n-d_2 quotients of π_T^*(_n-d_1,^∨^∨), and in particular, we may view Q_d_1,d_2,→(d_1,) as the Grasmmannian bundle (n-d_2-1,_n-d_1,^∨^∨). By the aforementioned remark, we may also view this as the Grassmannian bundle (d_2-d_1-1,_n-d_1,^∨). In a similar manner, we may view the incidence correspondence as a Grassmannian bundle (d_1, _d_2+1,^∨^∨) over (d_2,).
We denote by p_d_1,d_2, (resp. p^∨_d_1,d_2,, resp. π_d_1,d_2,) the induced map from Q_d_1,d_2, to (d_1,) (resp. (d_2,), resp. S). As noted above p_d_1,d_2, (resp. p^∨_d_1,d_2,) is a Grassmannian bundle parametrizing locally free quotients of rank d_2-d_1 (resp. of rank d_1+1) of a vector bundle of rank n-d_1 (resp. of rank d_2+1) on (d_1,) (resp. (d_2,)). Thus p_d_1,d_2, (resp. p^∨_d_1,d_2,) is proper and smooth of relative dimension
(d_2-d_1)(n-d_2) (resp. (d_1+1)(d_2-d_1)).
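As a consistency check on these relative dimensions (a small added computation, using the fact recalled above that (d,) → S has relative dimension (d+1)(n-d)), computing the relative dimension of Q_d_1,d_2, over S via either projection gives the same answer:

# dim_S of the incidence correspondence computed over G(d1,E) and over G(d2,E).
import sympy as sp

d1, d2, n = sp.symbols('d_1 d_2 n')
via_first  = (d1 + 1) * (n - d1) + (d2 - d1) * (n - d2)   # over G(d1, E)
via_second = (d2 + 1) * (n - d2) + (d1 + 1) * (d2 - d1)   # over G(d2, E)
print(sp.expand(via_first - via_second))                  # prints 0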
§.§.§ Brylinski-Radon transform
We define functors _d_1,d_2,:^b_c((d_1,)) →^b_c((d_2,)) and _d_1,d_2,:^b_c((d_2,)) →^b_c((d_1,)) as follows,
_d_1,d_2,(K):=p^∨_d_1,d_2,*p^†_d_1,d_2,K
and
_d_1,d_2,(L):=p_d_1,d_2,*p^∨†_d_1,d_2,L.
Finally, we make explicit a condition on closed conical subsets of ^*(d,) (resp. ^*Q_d_1,d_2,) which will be important in the following[See Example <ref> for a motivation to consider the condition (∗).].
We will say that a closed conical subset C ⊂^*(d,) (resp. ^*Q_d_1,d_2,) is regular over S (or just regular if S is clear from context) if the following condition is satisfied:
* Every irreducible component Λ of C contained in π_d,^∘^*S (resp. π_d_1,d_2,^∘^*S) is of the form π_d,^∘Λ' (resp. π_d_1,d_2,^∘Λ') for an irreducible closed conical subset Λ' ⊂^*S.
Note that condition (∗) above is trivially satisfied when S=(k) and C is of pure dimension of dimension equal to dim((d,)) (resp. dim(Q_d_1,d_2,)).
Let C⊂^*() be a closed conical subset. We denote by
Rad_0,d,(C):=(p^∨_0,d,)_∘(p_0,d,)^∘(C),
the Radon transform of C with respect to R_0,d,. This is a closed conical subset of ^*(d,).
Let q_0,d, and q_0,d,^∨ denote the morphism from ^*_Q_0,d,(()×_S(d,)) to ^*() and ^*(d,) respectively. We need the following, which is the relative version of <cit.>, and follows from it.
Let q̇_0,d, and q̇_0,d,^∨ respectively be the induced morphisms from ^*_Q_0,d,(()×_S(d,)) \π_0,d,^∘^*S to ^*() \π_0,^∘^*S and ^*(d,) \π_d,^∘^*S. Then
* q̇_0,d, is smooth and proper of relative dimension d(n-d-1).
* q̇_0,d,^∨ is a closed immersion.
As a consequence we have the following.
Let C ⊂^*Q_0,d, be a closed conical subset. Suppose C=p_0,d,^∘C_1=p_0,d,^∨∘C_2 for closed conical subsets C_1 and C_2 in ^*() and ^*(d,) respectively. Then C ⊂π_0,d,^∘^*S.
Note that by Remark <ref> the above corollary is also true for correspondences between (d,) and (n-1,) with d<n-1.
Let M be perverse sheaf on Q_d,n-1, (with d<n-1) that belongs to both p_d,n-1^†((d,)) and p_d,n-1^∨†((n-1,)). Then SS(M)⊂π_Q_d,n-1,^∘^*S.
The corollary is an immediate consequence of the remark above and Section <ref>, (2).
We also note the following corollary.
Let C ⊂^*() be a closed conical subset regular over S. Then Rad_0,d,(C) is also regular over S.
Let Λ⊂ C be an irreducible component of the form π_0,^∘Λ'. Then Lemma <ref> implies that Rad_0,d,(Λ)=π_d,^∘Λ'. On the other hand if Λ is not contained in π_0,^∘^*S, then Lemma <ref> implies that Rad_0,d,(Λ) is an irreducible component of Rad_0,d,(C) and is not contained in π_d,^∘^*S.
§ PROOF OF THEOREM 1: PRELIMINARY RESULTS
In this section, we collect some results which will be used in the following for the proof of part (3) of Theorem <ref>.
§.§ A criterion for descent of perverse sheaves.
As before, let k be an algebraically closed field, S/k be a smooth variety and let f X → S be a smooth morphism whose fibres are connected of dimension d.
In general, it is hard to characterise the subcategory f^†(S) of (X). If, in addition to the above assumptions, f is proper and the fibres of f are simply connected, then we have the following descent criterion.
[Our proof also works when k is only assumed to be perfect, provided f is geometrically connected.]
A (non-zero) simple perverse sheaf K ∈(X) is in the essential image of f^† iff SS(K) ⊆ f^∘Λ, for some closed conical subset Λ⊂ T^*S of dimension equal to dim(S). Moreover, when char(k)=0 it suffices to assume that SS(K) ⊂ f^∘T^*S.
Since f is smooth, the necessity results from the preservation of singular supports under pullback (see Section <ref>, (2)). Suppose now that K is a (non-zero) simple perverse sheaf on X such that SS(K) ⊆ f^∘Λ, with Λ as in the proposition. Since K is simple, there exists a triple (X',U,) consisting of an irreducible closed subset X' i X, a non-empty smooth (over k) open subset U j X' and a non-zero irreducible local system on U such that K=i_*j_!*[dim(X')] <cit.>. Note that f^∘ preserves irreducible components since f is smooth. As a consequence, by removing any extra components (if necessary), we may assume that SS(K)=f^∘Λ.
Claim 1: It is sufficient to prove the theorem after replacing S by an open dense subset S' j' S, X by X_S' := X ×_S S', and K by K|_X_S' provided K|_X_S' is non-zero.
Proof: Let j”: X_S'↪ X denote the resulting open immersion. First note that the resulting map f':X_S'→ S' satisfies the hypotheses of the theorem, and SS(K|_X_S') = SS(K)|_X_S' = (f')^∘(Λ|_S') (Section <ref>, (2)). If M is a simple perverse sheaf on S' such that (f')^†M = K|_X_S', then f^†j'_!*(M) = j”_!*((f')^†M) = j”_!*(K|_X_S') = K. Here the first equality follows from the fact that intermediate extensions commute with pull back along smooth morphisms <cit.>, and the last follows from the fact that K is a simple perverse sheaf.
Claim 2: We may assume that the base S' of Λ is smooth, X' = X ×_S S', and SS(K) = f^∘(Λ).
Proof: Let S' be the base of Λ. Since the base of SS(K) equals the support of K <cit.> we have X'=f^-1(S'). Let Z denote the singluar locus of S'. Since k is algebraically closed, this is a strict closed subset of S'. In particular, S ∖ Z is open, and by the previous claim, we may base change everything to S ∖ Z.
Claim 3: Let Λ' be an irreducible component of f^∘Λ which is not equal to ^*_X'X, the conormal bundle of X' in X. Then the base of Λ' does not dominate S'. In particular, the union of the bases of the components of SS(K) not equal to ^*_X'X (denoted by X” below) cannot dominate S' under f.
Proof: Let Z ⊂ X' be the base of Λ'. We claim that Z does not dominate S' under f. First note that, if Z ≠ X', then it does not dominate S'. We're reduced to showing that if Z = X', then Λ' = ^*_X'X. Since X' is smooth and K=i_*j_!*[dim(X')], SS(K)=i_0SS(j_!*[dim(X')]) (Combine <cit.> and <cit.>). Note that i_0 preserves bases of irreducible components, and there exists a unique component of SS(j_!*[dim(X')]) whose base equals X' (namely the zero section). It follows that there is a unique component of SS(K) whose base is X' (namely ^*_X'X).
Note that X”=f^-1(f(X”)) ⊊ X. Let U'=X' \ X”, then f|_U' U' → S' \ f(X”) is a proper morphism with connected and simply connected fibres. Thus by <cit.> there exists a local system on S' \ f(X”) such that f|_U'^*=. Thus by uniqueness K=f^*(i_S'*j_U'!*([dim(S)]), here i_S' (resp. j_U') are the immersions from S' (resp. U') into S (resp. S').
Now suppose char(k)=0, then every irreducible component (say Λ̃) of SS(K) is Lagrangian <cit.> and further the smooth locus of Λ̃ is the conormal to the smooth locus in the intersection of Λ̃ with the zero section of ^*X (<cit.>, Exercise in Section 1.3). Such a component Λ̃ is in f^0^*S iff it is the inverse image of a closed conical subset of ^*S.
It follows from the proof of Proposition <ref> that even in positive characteristic, as long as the components of the singular support are conormals (and not just Lagrangians!), the apparently weaker assumption SS(K) ⊂ f^∘T^*S suffices.
While the following corollary will not be used in what follows, we record it here since it may be of independent interest.
Let f X → S and K be as in Proposition <ref>. Then K is lisse iff ^df_*K is lisse.
We continue using the notation from Proposition <ref>. We record below an example which shows that if char(k)>0, it is in general not sufficient to assume SS(K) ⊂ f^0T^*S.
Let k be a perfect field of characteristic p>0. Let S=^1_s, X=^1_s ×^1_[t:t'][We use subscripts to denote a choice of a coordinate system], and f X → S the projection map. Let X̃:=Z(t'^p(x^p^2-x)-(s+x^p)t^p) ⊆^1_x×^1_s ×^1_[t:t'] and denote by π: X̃→ X the induced map. We denote by X̃_t ≠ 0 (resp. X_t ≠ 0) and X̃_t' ≠ 0 (resp. X_t' ≠ 0) the open cover of X̃ (resp. X) obtained from the usual cover on ^1_[t:t'].
Note that X̃ is a smooth surface over k and that π is finite étale of rank p^2 over X_t' ≠ 0. Over the line t'=0, it is a totally ramified cover of ^1_s. Thus π is finite. and we denote by K=π_*(Λ[2]) and thus by Section <ref>, (3), SS(K) ⊆π_∘(0_^*X̃).
It follows from the definition of π_∘ that π_∘(0_^*X̃)=0_T^*X∪Λ. Here Λ is f^∘^*^1_s|_t'=0. By proper base change K is not a lisse perverse sheaf, hence SS(K)=0_^*X∪Λ. Moreover, K is not the pullback of a perverse sheaf from ^1_s, since if that were the case then its restriction to s=0 would have to be trivial by proper base change. This in turn implies that the finite étale cover X̃_t' ≠ 0→ X_t' ≠ 0 is trivial restricted to s=0, which is not the case by the choice of the Artin-Schrier cover.
§.§ A key proposition
In this section, we prove a key proposition which will be used in the proof of Theorem <ref>, (3). Recall we have a base scheme S smooth over k ( assumed to be algebraically closed) and a vector bundle on S of rank n+1. We continue using the notations from Section <ref>. However, for ease of exposition, we drop from the notation. In particular we shall denote (0,) by , (d,) by (d) and (n-1,) by (n-1).
Below, we shall makes use of the following commutative diagram in order to facilitate an inductive argument.
[Diagram: a commutative cube whose upper face consists of the arrows Q_0,d,n-1 → Q_d,n-1 and Q_0,d → (d), and whose lower face consists of Q_0,n-1 → (n-1) and (0,) → S; the remaining arrows are the natural projections, and the dotted arrow Q_0,d,n-1 → Q_0,n-1 is the map constructed below.]
In diagram (<ref>), the bottom, front and right hand side faces are the correspondences described in Section <ref>. We define Q_0,d,n-1 := Q_0,d×_(d) Q_d,n-1. This induces a morphism from Q_0,d,n-1 to ×_S (n-1), which by construction factors through Q_0,n-1 (denoted in the diagram (<ref>) by the dotted arrow). We have the following lemma which follows from the description of the incidence correspondence as a Grassmannian bundle in Section <ref>.
There exists isomorphisms (as (n-1)-schemes) Q_0,n-1≃(^∨_n,^∨), Q_d,n-1≃(d,^∨_n,^∨) and Q_0,d,n-1≃ Q_0,d,^∨_n,^∨ such that commutative square
Q_0,d,n-1 → Q_d,n-1 (top row), Q_0,n-1 → (n-1) (bottom row), with vertical arrows the natural projections,
in diagram (<ref>) is the one induced by the correspondence Q_0,d,^∨_n,^∨⊂(^∨_n,^∨) ×_(n-1)(d,^∨_n,^∨).
Note that Q_0,d,n-1=Q_d,n-1×_((d) ×_S(n-1))(Q_0,d×_S(n-1)). Thus in order to prove the lemma, it suffices to show that projective sub-bundle of ×_S(n-1) defined by Q_0,n-1 induces the Grassmannian sub-bundle Q_d,n-1 of (d)×_S(n-1). But this follows from the description in Section <ref>.
More precisely using the notations from the section, Q_0,n-1 is the projective bundle (over (n-1)) defined by the sub-bundle ^∨_n,^∨ of π_n-1,^* and Q_d,n-1 is the Grassmannian bundle (d,^∨_n,^∨).
In what follows we denote the vector bundle ^∨_n,^∨ on (n-1) by . In particular there is a Radon transform (denoted by _0,d,) from ^b_c(Q_0,n-1) to ^b_c(Q_d,n-1). The following lemma is an immediate consequence of proper base change applied to the cartesian square at the top of Diagram (<ref>).
For any perverse sheaf K on we have ^i_0,d,(p_0,n-1^†K) p^†_d,n-1^i_0,d(K)[Here and in the rest of this article by ^i_d_1,d_2, we mean ^i∘_d_1,d_2,. We use a similar convention for ^∨_d_1,d_2,.] in (Q_d,n-1).
Below, for X smooth over k and Λ⊂^*X is a conical subset, then
(X,Λ) is the full subcategory of the category of perverse sheaves K such that SS(K) ⊂Λ. Note that this is is a Serre subcategory (see Section <ref>, (5)).
Let C ⊂^* be a closed conical subset equidimensional of dimension equal to dim(). For the rest of this section, we assume that closed conical subsets are regular over the base S (see Definition <ref>).
With notation as above, any simple perverse sheaf L in ((d),Rad_0,d(C)) is either in π_d^†((S)) or there exists a simple perverse sheaf K on and a (decreasing) filtration F^·^0_0,dK on ^0_0,dK such that
* SS(K) ⊆ C.
* F^iR^0_0,dK=R^0_0,dK for i ≤ 0.
* F^iR^0_0,dK=0 for i ≥ 3.
* Gr^i_F(^0_0,dK) belongs to π_d^†(S) for i=0,2 and Gr^1_F(^0_0,dK)=L.
We may assume L does not belong to π_d^†((S)). We prove the claim by descending induction on n-d (over varying choices of (S,)). Suppose n-d=1 and hence (d)=(^∨). Then (b)-(d) follow immediately from <cit.>. Moreover, (a) follows from the fact that K is in fact a sub-quotient of _0,n-1(L).
Now suppose the Proposition has been verified for n-d=r ≥ 1 and for all possible choices of (S,). We shall now prove it for n-d=r+1 by induction via Diagram (<ref>). By the induction hypothesis, we may assume that the Proposition has been verified for _0,d,.
It follows from <cit.> that L_:=p_d,n-1^†L is simple and by Section <ref>, (2) that SS(L_)=p_d,n-1^∘SS(L). Thus by Lemma <ref>, SS(L_) is contained in the Radon transform of p_0,n-1^0C with respect to R_0,d,. Moreover by Corollary <ref> it follows that L_ is not in the essential image of p^∨†_d,n-1((n-1)). Now by induction hypothesis there exists a simple perverse sheaf K_ on Q_0,n-1 with and a filtration F^·_^0_0,d, such that
* SS(K_) ⊆ p_0,n-1^0C.
* F_^iR_0,d,^0K_=R_0,d,^0K_ for i ≤ 0.
* F_^iR_0,d,^0K_=0 for i ≥ 3.
* Gr^i_F_(R_0,d,^0K_) belongs to p_d,n-1^∨†((n-1)) for i=0,2 and Gr^1_F_(R_0,d,^0K_)=L_.
Now using Proposition <ref>, (a') above implies that K_ descends to a simple perverse sheaf K on such SS(K) ⊆ C. Moreover by Lemma <ref>, R_0,d,^0K_ is in the essential image of p_d,n-1^†((d)). Thus by <cit.> so are Gr^i_F_(R_0,d,^0K_) for all i. Thus by Corollary <ref> and Proposition <ref>, Gr^i_F_(R_0,d,^0K_) for i=0,2 belongs to (π_d∘ p_d,n-1)^†(S). Hence the result.
§ PROOF OF THEOREM <REF>, (1)
In the rest of this article we work over S=(k), with a vector space over k of dimension n+1 (which we henceforth ignore from the notation) and use the following notation.
We will only consider the Brylinski-Radon transform between ℙ and (d).
* We will denote (d) by Y and the incidence correspondence Q_0,d by Q. The projections from Q to ℙ (resp. Y) are denoted by p_1 (resp. p_2).
* The morphisms from ℙ (resp. Y) to (k) are denoted by π_ℙ (resp. π_Y).
* The Brylinski-Radon transforms are denoted by and .
* Let E be the complement of the incidence variety Q ⊂× Y. Let p_1^∘ and p_2^∘ be the projections to and Y respectively from E.
* In what follows we will need the modified Brylinski-Radon transform defined as _!K :=p^∘_2!p_1^∘†K.
* For a complex K on , by we mean the complex π_*K on (k). Similarly, for complexes K on Y.
* We will use ^i(K) (resp. _!^i(K), ^i) to denote the i^th perverse cohomology of (K) (resp. _!(K), ).
§.§ Some preliminary observations
The next two lemmas are immediate consequences of the smoothness and properness of p_1 and p_2, and we state them without a proof.
For any sheaf K ∈^b_c() and L ∈^b_c(Y), D((K)) ≃(DK)(d(n-d)) and D((L))=(DL)(d) [Here D is the Verdier duality functor.].
The functors ([δ](d(n-d)),,[-δ](d))[In what follows we set δ:=d(n-d-1)] form an adjoint triple.
The following result is due to Brylinski <cit.>. Again, while this is proved in loc. cit. in the complex analytic setting, the same proof goes through in our setting.
Let and be as before. Then and preserve the localizing set T (see Section <ref>), and in particular one has induced functors : ^b_c()_T →^b_c(Y)_T and : ^b_c(Y)_T →^b_c(^n)_T.
§.§ An application of Artin vanishing
We now record the following easy consequence of Artin vanishing which is used in the proof of Theorem <ref>, (1).
Let X/k be a base scheme. Let U be the complement in ^n_X of a linear subspace[A linear subspace of ^n_X is a closed subscheme, which Zariski locally over X isomorphic to ^d_X ⊂^n_X embedded linearly.] Z of relative dimension d, and let π be the map from U to X. Then π_* maps ^p^≤ 0(U) to ^p^≤ n-d-1(X).
The proof is via a repeated application of Artin vanishing in the form of right t-exactness (for the perverse t-structure) of affine morphisms <cit.>. After replacing X with a suitable Zariski open we can consider a chain of linear subspaces Z_0 ⊊ Z_1 ⊊⋯ Z_n-d-1 of Z such that Z_0=^d_X and dim(Z_i)=d+i. Let U_i :=^n_X \ Z_i be the corresponding open subscheme. Let π_i be the map from U_i onto X, and we identify π_0 with π.
We prove the lemma by descending induction on i. For i=n-d-1 the lemma is an immediate consequence of Artin vanishing <cit.>. Assuming that the lemma has been verified up to some i ≤ n-d-1, we prove it for i-1. Let j (resp. l ) be the inclusion of U_i (resp. Z_i \ Z_i-1) inside U_i-1. Let K be a sheaf on U_i-1 in ^p^≤ 0(U_i-1). By induction hypothesis π_*(j_*j^*K) ∈ ^p^≤ n-d-1-i(X). Thus it suffices to show π_*(l_*l^!K) ∈ ^p^≤ n-d-i(X).
By construction Z_i \ Z_i-1 is at once affine over X and a complete intersection of codimension n-d-i in U_i-1, and thus <cit.> implies the result.
The following corollary will be used below to describe the image of the Brylinski-Radon transform.
With notation as above, p^∘_2! maps ^p^≥ 0(E) to ^p^≥ -(n-d-1)(Y).
§.§ Proof of <ref>, (1) and Corollaries
In fact, we prove the following more refined version of Theorem <ref>, part (1).
Let K be a sheaf on .
* If K is upper semi-perverse then for any i<0, we have ^i(K) ≃π_Y^†^i-n+d.
* If K is perverse, ^i(K) are constant for any i ≠ 0. Also the perverse sheaves ^i_!(K) are constant for i ≠ n-d+1.
* Consequently is t-exact for the perverse t-structures on ^b_c()_T and ^b_c(Y)_T (see Section <ref>).
By definition of (and _!) and proper base change, we have a triangle on Y
(K)[n-d-1] ⟶ _!K ⟶ π_Y^*[(d+1)(n-d)] ⟶^+1 .
Now, by Corollary <ref> and <cit.>, one has that for any K ∈ ^p^≥ 0(), _!K∈ ^p^≥ -(n-d-1)(Y). Taking the long exact sequence of perverse cohomologies associated to the triangle (<ref>) gives us (1).
If K is perverse, then applying the first part to DK and using Lemma <ref> we deduce (2). The constancy of _!^i(K) for i ≠ n-d+1 then follows from the fact that constant sheaves form a Serre subcategory. The t-exactness of is now clear.
We get the following corollaries by combining Lemma <ref> and Theorem <ref>.
The functor [-δ](d) (resp. [δ](d(n-d))) is left t-exact (resp. right t-exact) for the perverse t-structures on ^b_c(Y)_T and ^b_c()_T.
(^δ(d(n-d)),^0, ^-δ(d))[We denote ^i_T ∘ R by ^i and a similar notation for ^i.] form an adjoint triple between () and (Y). Moreover ^-δ(d) (resp. ^δ(d(n-d))) is left t-exact (resp. right t-exact).
§ PROOF OF THEOREM <REF>, (2) AND (3)
In this section, we prove Theorem <ref>, (2) and (3).
§.§ Proof of Theorem <ref>, (2) and corollaries
Consider the following diagram of schemes, where the central square is cartesian by definition:
Q ×_Y Q, with its two projections to the left and right copies of Q; the left copy of Q maps to ℙ via p_1 and to Y via p_2, while the right copy maps to Y via p_2 and to ℙ via p_1.
Let π: Q ×_Y Q → ℙ×ℙ denote the morphism induced by p_1 on each factor. Let s_1: ℙ×ℙ→ℙ (resp. s_2: ℙ×ℙ→ℙ) be the projection onto the first (resp. second) factor. An application of proper base change along the central cartesian square in diagram (<ref>) and the projection formula gives a natural (in K) isomorphism:
∘(K)=s_2* (s_1^*K ⊗_Λπ_*Λ[δ_+] )[In what follows we set δ_+:=d(n-d+1).].
Let Δ: ℙ↪ℙ×ℙ denote the diagonal embedding, let U be the complement of the diagonal, and let j: U ↪ℙ×ℙ be the corresponding open immersion. One has the resulting diagram with cartesian squares:
Q ⟶^i_Q Q ×_Y Q ⟵^j_W W, lying over ℙ⟶^Δ ℙ×ℙ⟵^j U via the vertical maps p_1, π and π_U, respectively.
We note that π_U is a Grassmann bundle with fibers (d-2,n-2). Consider the natural closed immersion Q ×_Y Q → Q ×, which on closed points maps (x,y,L) to (x,L,y). Here x,y are closed points of and L ⊂ is a d-plane containing them. The above commutative diagram factors as:
Q ⟶^i_Q Q ×_Y Q ⟵ W (top row), Q ⟶ Q ×ℙ⟵ V (middle row), ℙ⟶^Δ ℙ×ℙ⟵^j U (bottom row); the vertical maps are the identity, the closed immersion Q ×_Y Q → Q ×ℙ and its restriction to W in the upper half, and p_1, π̃ and π̃_U in the lower half,
where all the squares are Cartesian.
Note that π̃ is a Grassmannian bundle with fibers (d-1,n-1) and is identity along the second projection. Let Z:= V ∖ W=Q ×∖ Q ×_Y Q, and π_Z: Z → U denote the resulting morphism.
We have an exact triangle on U
π_Z!Λ[r] π̃_U*Λ[r] π_U*Λ[r]^+1 .
Since π_U is a Grassmannian bundle, Lemma <ref> implies that π_U*Λ is formal[A sheaf is said to be formal if it is isomorphic to a shifted direct sum of its cohomology sheaves] and its cohomology sheaves are locally constant. Since U is simply connected <cit.>, they are in fact constant. Let _d-2,n-2:=⊕_i M^i_d-2,n-2[-i][For any Λ-module M, by M we mean the constant local system on × with values in M.], here M_d-2,n-2^i:=H^0(U,R^iπ_U*Λ). The restriction of _d-2,n-2 to U is isomorphic to π_U*Λ[The choice of _d-2,n-2 is not unique in as much as the choice of the decomposition in Lemma <ref>, but this non-uniqueness does not play a role in what follows.]. We also denote by _d-1,n-1:=π̃_*Λ. We have exact triangles,
j_!π_Z!Λ ⟶ π̃_* Λ ⟶ π_* Λ ⟶^+1 ,
_d-1,n-1⊗ j_!Λ ⟶ _d-1,n-1 ⟶ _d-1,n-1⊗Δ_* Λ ⟶^+1
and
_d-2,n-2⊗ j_!Λ ⟶ _d-2,n-2 ⟶ _d-2,n-2⊗Δ_* Λ ⟶^+1
in ^b_c(×). Now note that for any sheaf K on and any constant sheaf (i.e. the cohomology sheaves are constant) L on ×, the sheaf s_2*(s_1^*K ⊗ L) is also constant. Thus combining triangles (<ref>)-(<ref>) and Equation (<ref>) we get a functorial (in K) exact triangle in the localized category ^b_c()_T,
∘(K) ⟶ K⊗Δ^*_d-1,n-1[δ_+] ⟶^ϕ K⊗Δ^*_d-2,n-2[δ_+] ⟶^+1 .
* For any perverse sheaf K on , there exists a natural isomorphism ^i ((K)) ≃^i(^0(K)) in () (and hence in D^b_c(,Λ)_T).
* For any perverse sheaf K on , there exists functorial (in K) isomorphisms in () (and hence in ())
^i( K⊗Δ^*_d-1,n-1[δ_+]) ≃ K ⊗Δ^*H^i+δ_+(_d-1,n-1)
and
^i( K⊗Δ^*_d-2,n-2[δ_+]) ≃ K ⊗Δ^*H^i+δ_+(_d-2,n-2).
* For i=δ-1,δ, the perverse sheaves ^i( K⊗Δ^*_d-2,n-2[δ_+]) vanish. Also ^δ-1( K⊗Δ^*_d-1,n-1[δ_+]) vanishes. Moreover when n-d>1, ^δ-2( K⊗Δ^*_d-2,n-2[δ_+]) is also zero.
* For any perverse sheaf K on , there exists a natural (in K) isomorphism in () (and hence in ()), ^δ( K⊗Δ^*_d-1,n-1[δ_+])≃ K(-d(n-d)).
Claim (a) is an immediate consequence of Theorem <ref>. Claim (b) follows from the formality of _d-1,n-1 and _d-2,n-2 and the fact that their cohomology sheaves are local systems.
For claim (c), using (b) it suffices to prove that Δ^*H^i+δ_+(_d-2,n-2) vanishes for δ-2 ≤ i ≤δ, and that Δ^*H^δ_++δ-1(_d-1,n-1)=0. In either case note that the cohomology sheaves of _d-2,n-2 and _d-1,n-1 are constant local systems and hence by their definitions it suffices to show that R^i+δ_+π_U*Λ for δ-2 ≤ i ≤δ and R^δ_++δ-1π̃_*Λ vanish. But these follow immediately from the fact that π_U is a (d-2,n-2) bundle[We require n-d>1, to ensure that dim((d-2,n-2))<d(n-d)-1.] and that π̃ is a (d-1,n-1) bundle.
For claim (d) arguing as above we conclude that ^δ( K⊗Δ^*_d-1,n-1[δ_+])≃ K ⊗Δ^*R^2d(n-d)π̃_*Λ≃ K(-d(n-d)).
Combining claims (a)-(d) above shows that there exists a natural isomorphism
^δ(d(n-d))∘^0(K) ≃ K
in (), and therefore completes the proof of Theorem <ref> (2). It is also easy to see that this map is the co-unit of the adjunction in Corollary <ref>. Finally, combining Lemma <ref> and Corollary <ref> we obtain the following.
The unit of the adjunction K →^-δ(d)∘^0(K) is an isomorphism in ().
We also have the following corollary of the method of the proof.
We have ^i_()(K_1,K_2) ≃^i_(Y)(^0(K_1),^0(K_2)) for i=0,1.
The isomorphism for i=0 is an immediate consequence of Theorem <ref>, (2) and the adjunction between ^δ(d) and ^0 (Corollary <ref>). We may now assume that n-d>1, else the result follows from the fact that ^0 induces an equivalence between () and () from Theorem <ref>, (1) and (2).
The triangle (<ref>) and Claim <ref>, (b) and (c) above imply that for K ∈(),
^-1_T([δ] ∘ K(d(n-d))) ≃ 0.
Since [δ](d(n-d)) is right t-exact and is exact, this implies that
^p_Tτ_≥ -1[δ] ∘ K(d(n-d)) ≃^δ(d(n-d))∘^0(K),
which by Theorem <ref>, (2) is isomorphic to K under the co-unit of adjunction.
We also have
^1_(Y)(^0(K_1),^0(K_2)) ≃Hom_^b_c()_T([δ] ∘ K_1(d),K_2[1])
and
Hom_^b_c()_T([δ] ∘ K_1(d),K_2[1]) ≃_^b_c()_T(^p_Tτ_≥ -1[δ] ∘ K_1(d(n-d)),K_2[1]).
The first equality being adjunction and the second since K_1 and K_2 are perverse, [δ](d(n-d)) is right t-exact and is exact. Combining this with (<ref>) gives the necessary equality.
§.§ Proof of Theorem <ref>, (3)
Thanks to <ref>, (2) and Corollary <ref>, it suffices to show that the simple objects in (Y)_Rad are in the image of ^0. This follows from Proposition <ref>.
Example <ref> naturally leads to the following question which we have been unable to answer:
Does there exist a perverse sheaf on Y with singular support inside p_2∘p_1^∘^* whose image is not in (Y)_Rad, and hence the perverse sheaf is not in the image of the Radon transform?
Note that the answer to the above question is negative in characteristic 0 (see Section <ref>, (2)) or when d=n-1.
|
http://arxiv.org/abs/2307.06218v1 | 20230712150716 | Ashaar: Automatic Analysis and Generation of Arabic Poetry Using Deep Learning Approaches | [
"Zaid Alyafeai",
"Maged S. Al-Shaibani",
"Moataz Ahmed"
] | cs.CL | [
"cs.CL"
] |
Ashaar: Automatic Analysis and Generation of Arabic Poetry Using Deep Learning Approaches
Zaid Alyafeai, Maged S. Al-Shaibani, Moataz Ahmed
==========================================================================================
Poetry holds immense significance within the cultural and traditional fabric of any nation. It serves as a vehicle for poets to articulate their emotions, preserve customs, and convey the essence of their culture. Arabic poetry is no exception, having played a cherished role in the heritage of the Arabic community throughout history and maintaining its relevance in the present era. Typically, comprehending Arabic poetry necessitates the expertise of a linguist who can analyze its content and assess its quality. This paper presents the introduction of a framework called Ashaar [<https://github.com/ARBML/Ashaar>], which encompasses a collection of datasets and pre-trained models designed specifically for the analysis and generation of Arabic poetry. The pipeline established within our proposed approach encompasses various aspects of poetry, such as meter, theme, and era classification. It also incorporates automatic poetry diacritization, enabling more intricate analyses like automated extraction of the Arudi style. Additionally, we explore the feasibility of generating conditional poetry through the pre-training of a character-based GPT model. Furthermore, as part of this endeavor, we provide three datasets: one for poetry generation, another for diacritization, and a third for Arudi-style prediction. These datasets aim to facilitate research and development in the field of Arabic poetry by enabling researchers and enthusiasts to delve into the nuances of this rich literary tradition.
§ INTRODUCTION
In a general setting, Arabic poetry can be divided into two forms: rhymed (measured) poetry and prose poetry. Rhymed poetry was first introduced and theorized by Al-Khalil ibn Ahmad al-Farahidi (711 – 786 A.D.), who categorized every poem into one of 15 different classes, later extended to 16, called meters, or Buhur as pronounced in Arabic. These meters govern how each poem should be constructed according to specific rules called Arud (the Arudi style). The main constructs of Arud are represented using Tafeelat (singular: Tafeelah) for easier memorization. Such constructs can be used to define how to create each meter using a finite set of rules. Another important part of Arabic poetry is the Qafiyah, which refers to the end rhyme pattern or the rhyme scheme used in the poem.
The construction of meters depends on diacritics, which are special symbols assigned to each letter in the poem. These diacritics are categorized as either harakah or sukun. Analyzing a poem usually requires expertise in the field to identify the underlying meter and to spot any violations. Poets, nevertheless, have an intrinsic ability to compose poems in a specific meter without the need to consult experts. In the modern era, many poets were influenced by Western culture, resulting in a new form of poetry called prose poetry.
Prose poetry is similar to English poetry in the way it is constructed but, due to its long history, Arabic poetry is richer in terms of metaphors and symbolism.
In this paper, we utilize deep learning approaches to analyze and generate poetry. A high-level pipeline is shown in Figure <ref>. We summarize our contributions as the following:
* We create four public datasets: Ashaar dataset[<https://huggingface.co/datasets/arbml/Ashaar_dataset>] is a labeled dataset with meter, theme, era, etc. that could be used for conditional poetry generation. Ashaar diacritized[<https://huggingface.co/datasets/arbml/Ashaar_diacritized>] is a cleaned dataset with diacritized poems. Ashaar arudi[<https://huggingface.co/datasets/arbml/Ashaar_ardui>] is a dataset that gives gold Arudi representations for a given set of verses. Ashaar tafeelah[<https://huggingface.co/datasets/arbml/Ashaar_tafeelah>] contains all the possible tafeelat for a given meter (a minimal loading sketch is given after this list).
* We provide five pre-trained models: three classification models for era, theme, and meter; one pre-trained model for diacritization; and a pre-trained model for conditional poetry generation.
* We introduce a framework named Ashaar for poetry analysis and generation. The analysis part uses the meter and diacritization models to predict the Arudi form, while the generation part uses the meter, qafiyah, and theme to generate poem completions.
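The following minimal sketch (not from the paper) shows one way the released datasets could be loaded with the Hugging Face datasets library, using the dataset IDs listed above; split and column names are not specified here and may differ from the actual releases.

# Load the released Ashaar resources from the Hugging Face Hub.
# The repository IDs are taken from the list above; everything else is assumed.
from datasets import load_dataset

ashaar = load_dataset("arbml/Ashaar_dataset")           # labeled poems (meter/theme/era)
diacritized = load_dataset("arbml/Ashaar_diacritized")  # diacritized poems
print(ashaar)  # inspect the available splits and columns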
§ LITERATURE REVIEW
Many studies have been proposed to analyze and study the Arabic poetry metric system. Most such efforts are directed towards linguistic libraries. <cit.>, <cit.>, <cit.>, and <cit.> are just examples of the literary work devoted to the subject.
Below is a list of the tasks found in the literature that deal with Arabic poetry from various aspects. These tasks include authorship attribution, meter classification, emotion and era classification, poetry identification from textual sources, poetry generation, and other miscellaneous tasks.
§.§ Authorship Attribution
In Arabic literature, there are many studies that dealt with authorship attribution in general text. <cit.>, <cit.>, <cit.>, and <cit.> are instances of various methods used to approach this problem for general Arabic text. For a special format of Arabic text like poetry, limited work has been proposed.
<cit.> used machine learning methods such as Support Vector Machines (SVM) and Sequential Minimal Optimization (SMO) to study the problem of the Authorship Attribution of Arabic poetry. The features they extracted from poetry cover characters, lexical syntactic, and semantic features. They applied their methods to a corpus of poems belonging to 54 poets. They achieved 98% precision for SMO as the best score.
<cit.> attempted to approach this problem using Markov chains. They conducted their experiments on characters and other syntactically crafted features. The experiments were conducted on a dataset of 33 samples from 33 authors for training and another different 33 unknown samples for testing. They achieved more than 96% accuracy score on the test set.
<cit.> developed a deep-learning model to identify poetry authors. The features they used are a fusion of the character embeddings and an LSTM-based pre-trained meter classification model. This architecture was evaluated on a dataset of more than 100k verses from 10 famous Arabic poets. They achieved around 81% accuracy.
On a different direction, <cit.> utilized Arud words encoding as binary features for prose Authorship Attribution. They compare this set of features to another baseline of only considering the most frequent 100 words. They showed that their method is superior compared to this baseline. They tested their method on two different sets of Arabic and English texts.
§.§ Meter Classification
The work on textual Arabic meter classification can be divided into two main categories based on the techniques used. The first category covers the techniques that are rule-based while the second category approached this problem using deep learning methods. The prominent drawback of the first approach is that it requires the poetry text to be diacritized either fully or partially. Another characteristic of this category is that it has been evaluated on relatively small datasets as compared to the second category. The largest evaluation study is reported by <cit.> consisting of less than 7k verses. Below, we survey the literature for both approaches.
Traditional Machine Learning Several methods have been proposed to classify Arabic poetry meters. <cit.> proposed a Naive Bayes classifier, while <cit.> proposed a knowledge-based framework. <cit.> filed a patent for a system that classifies poetry from acoustic as well as textual input. <cit.>, <cit.>, and <cit.> proposed rule-based systems. <cit.> introduced a rule-based system to analyze the rhyme of the poem. <cit.> created a pattern-matching approach where the verse is matched against a curated set of meter patterns. <cit.> suggested a system that extends the meter classification task to modern Arabic poetry, even though modern Arabic poetry, unlike classical poetry, does not need to follow a meter. <cit.> evaluated traditional machine learning techniques on a subset of the dataset proposed by <cit.>.
Deep Learning <cit.> is the first work that utilizes deep learning for this task for all 16 meter classes. They also approached this task for both English and Arabic. It is worth noting that Arabic poetry has 16 meter classes compared to only 4 meters in English, which makes the task more complex for Arabic. In their research, they introduced APCD, a large dataset of 1.5M verses of Arabic poetry. The model they proposed is RNN-based. The results they achieved are 96.38% and 82.31% for Arabic and English, respectively. <cit.> proposed a GRU-based model to classify Arabic poetry meters. The model is a 5-layer stacked GRU followed by two dense layers. The dataset introduced in this research is MetRec <cit.>, comprising more than 55.4K verses across 14 meter classes. The result they achieved is 94.3% on the accuracy score. <cit.> extended the work done by <cit.> and <cit.> on this task. They introduced a larger RNN-based model evaluated on a dataset of poetry and prose, 17 classes in total. They introduced the APCD2 dataset, which is an extended version of APCD with the prose class. In their research, they mark the use of diacritics as optional, in contrast to <cit.> where these characters are removed from the input stream. The results they reported crossed 97% accuracy on this task.
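For illustration, a stacked-GRU meter classifier of the kind described above could be sketched as follows (an assumed PyTorch re-implementation sketch: the 5 GRU layers, two dense layers, and 14 output classes follow the description, while the character vocabulary, embedding size, and hidden size are placeholders):

# Illustrative sketch of a character-level stacked-GRU meter classifier.
import torch
import torch.nn as nn

class MeterClassifier(nn.Module):
    def __init__(self, vocab_size=40, embed_dim=64, hidden=128, n_classes=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden, num_layers=5, batch_first=True)
        self.fc1 = nn.Linear(hidden, 64)
        self.fc2 = nn.Linear(64, n_classes)

    def forward(self, char_ids):            # char_ids: (batch, seq_len)
        x = self.embed(char_ids)
        _, h = self.gru(x)                  # h: (num_layers, batch, hidden)
        x = torch.relu(self.fc1(h[-1]))     # final hidden state of the top layer
        return self.fc2(x)                  # logits over the meter classes

logits = MeterClassifier()(torch.randint(0, 40, (2, 30)))
print(logits.shape)                         # torch.Size([2, 14])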
§.§ Emotion and Era classification
In the literature, era classification has received considerably more attention than theme classification.
Theme Classification <cit.> investigated the promise of machine learning methods for the task of Arabic poetry emotion classification. The dataset they collected consists of 1,231 Arabic poems of variable length with four major emotional classes: Retha (elegiac), Ghazal (love or romance), Fakhr (pride or honor), and Heja (satire). They experimented with Naive Bayes classification, SVM, Voting Feature Intervals (VFI), and Hyperpipes, and reported their results in precision, recall, and F-measure, showing that VFI outperforms the others in terms of F-measure with a result of around 73%.
Era Classification Based on a set of literary features, Arabic scholars divided the Arabic poetry timeline into a number of segments according to either the political status or the literary features specific to a period of time or location. These segments are called eras. <cit.> tried to classify Arabic poetry into its recitation era. They worked on five era classes, ranging from the pre-Islamic era to the Andalusian era. The dataset comprised more than 30k poems belonging to these different classes. Various machine learning methods were experimented with on this dataset, and they showed that Multinomial Naive Bayes achieved the best performance, with an F1-score of 68.8% and a kappa score very close to 0.4. <cit.> proposed a deep learning-based approach to address era classification. The dataset they used was scraped from the web and consists of 60,377 poems in classical Arabic recited by 739 poets. They developed two deep learning-based models and compared their performance: the first is a classification model with FastText <cit.> embeddings, while the second is a CNN-based model. They showed that the CNN model was superior, achieving more than 91% F1-score on binary classification into modern and non-modern poetry. <cit.> proposed a comprehensive study on Arabic text classification with different textual styles, including poetry. The poetry dataset they used comprised 1.95K documents with 6 different classes. They tried different feature selection methods along with different machine learning classifiers. The best classification results they achieved were for the C5.0 classifier, with 80% on average for all styles and 50% accuracy for poetry only; they attributed these low results to the difficulty of classifying creative material like poetry. <cit.> evaluated various classification models for classifying poetry from the Abbasid and Andalusian eras. The evaluated models are logistic regression, random forest, decision trees, and SVM, tested on a dataset curated from the web containing around 10,895 hemistichs (half-verses) from 15 random poems by 15 poets. Their experiments showed that SVM achieved the best performance.
§.§ Poetry Generation
With the recent advancement of deep learning approaches, there were many attempts in the literature to generate Arabic poetry.
<cit.> proposed a GRU-based approach to generate Arabic poetry. They trained their model on a dataset comprising more than 20.1k poems with 80.5K verses collected from the web. For evaluation, they conducted two types of assessments: quantitative and qualitative. For the quantitative analysis, the BLEU score was used; for the qualitative one, they involved human subjects to evaluate the generated poetry.
<cit.> proposed a GPT-based model to generate poetry. The model was trained from scratch. The methodology they followed is first training the GPT model on a newswire dataset to develop language understanding and then fine-tuning the model on a poetry dataset. The model was evaluated on BLEU as well as human evaluation. They showed that their approach outperformed other approaches that are based on elementary architectures like RNNs and GRUs.
<cit.> evaluated the poetry generation task on two transformer-based models with two different prompting settings. The evaluated models are BERTShared <cit.> and GPT-J <cit.>, and the prompting methods are rhythm- or topic-based. The dataset used for this research is a fused collection of an earlier version of Ashaar and a public dataset published on GitHub <cit.>. They found that GPT-J is better at capturing the rhyme while BERTShared is better at generating fluent poems.
<cit.> fine-tuned AraGPT2 <cit.> to generate poems. The dataset they used to fine-tune the pre-trained model is APCD. In one of the proposed experiments, the model was constrained to generate poetry in a specific meter. For evaluation, they used the BLEU score as well as human evaluation, showing that this fine-tuning procedure outperformed all approaches proposed in the literature. They also presented a study in which fake generated poetry was shown to subjects with limited poetic knowledge, and the generated poetry was able to fool at least 61% of the participants.
§.§ Poetry Identification from the Web
<cit.> proposed a system to identify poetry from a text document. The proposed system relies extensively on the structural patterns of textual poetry. The system is evaluated on collected data from the web. The dataset has 23K lines with 161 classical poem instances. The method achieved an F-measure of 95%. <cit.> extended their work by considering modern poetry that is different in style than classical poetry. The method is similar to the one with classical poetry in the sense that it focuses on the structural patterns of modern poetry. The method was evaluated on a dataset of 2,067 plain text documents containing 513 modern Arabic poems. The method achieves an accuracy of more than 99.81%.
<cit.> developed a system for recalling Arabic poetry material from the web. The system consists of two main components, a classifier and a distiller: the classifier determines whether a page contains poetic material, while the distiller extracts the poetic material from the selected page. The system achieves a precision of 94% on an initial seed list of 14 selected domains.
§.§ Miscellaneous Tasks
<cit.> applied the Arud meter system as a steganography tool. The idea is that the poem is used as a cover message, and its binary representation is used to hide the secret message with the help of some special Arabic characters like diacritics. They compared their approach with other methods in the literature and showed that their method outperforms them in terms of capacity. <cit.> investigated the model architecture proposed by <cit.>, which was designed for prose text, to automatically diacritize Arabic poetry. They evaluated the model on an extended version of the dataset proposed by <cit.>, selecting samples where the diacritization ratio is 0.5 or higher, resulting in 368.6K verses. The results they obtained are 6% DER and 20% WER, compared with 1.9% and 5.1%, respectively, for prose.
§ DATASETS
The release of large Arabic poetry datasets did not happen until recently, with the surge of deep learning. The first sufficiently large published datasets were MetRec <cit.>, APCD <cit.>, and APCD 2.0 <cit.>. MetRec is the smallest of the three: it contains verses from the 14 most frequent meters of Arabic poetry, with a total of 55.4K verses. APCD is a massive dataset compared to MetRec, with more than 1.8M verses containing samples from all 16 meters. APCD was extended by <cit.> into APCD2.0, which adds a prose class to distinguish poetry from prose in their proposed classification model. The Ashaar dataset extends APCD by adding more poetry from additional sources. We also added a column for the poem theme, which was not available in APCD. Table <ref> compares APCD with Ashaar. As can be seen from this table, Ashaar is almost an order of magnitude larger than APCD in terms of verses and poets. This abundance of poetic data, along with poet information, is useful for many tasks concerning poetry generation, such as language modeling and authorship attribution. It can also be noted from the table that Ashaar is larger in terms of diacritized verses; in this comparison, we considered verses where diacritics constitute more than 25% of the characters. This is helpful for tasks that involve diacritics prediction.
§ POETRY CLASSIFICATION
In this section, we mainly discuss three types of classifications which are era, theme, and meter classification. In each subsection, we illustrate the dataset used and the architecture for training.
§.§ Meter Classification
As discussed in the literature, there are mainly 16 meters that govern how each poem should be constructed. In this subsection, we discuss our approach to generating a system that can predict a meter for a given poem.
Preprocessing and Augmentation
We first remove from the training corpus any duplicates that also exist in the testing corpus. Then, for each verse in the training corpus, we split the two parts using a special symbol . We then remove all special characters except for the hashtag and diacritics. After that, we augment the corpus by randomly splitting each bait using , then randomly swapping the first and second parts. Also, to make the corpus more robust against partial diacritization, at each step of training we randomly remove diacritics. We end up with 1,717,948 verses for training, of which a 15% subset is used for validation. For testing, we use a dataset of 362,798 verses.
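A minimal sketch of this augmentation step is given below; the separator symbol, the diacritic set, and the removal probability are illustrative assumptions rather than the exact choices used in our pipeline.

import random
import re

# Common Arabic diacritic marks (tanween, fathah, dammah, kasrah, shaddah, sukun).
DIACRITICS = "\u064b\u064c\u064d\u064e\u064f\u0650\u0651\u0652"

def augment_verse(verse, sep="#", drop_prob=0.3):
    """Toy version of the augmentation described above; `sep` stands in for the
    (unspecified) special symbol separating the two hemistichs."""
    # Keep Arabic letters, diacritics, the separator, and spaces only.
    verse = re.sub(rf"[^\u0621-\u064a{DIACRITICS}{re.escape(sep)} ]", "", verse)
    parts = [p.strip() for p in verse.split(sep)]
    if len(parts) == 2 and random.random() < 0.5:
        parts = parts[::-1]                       # randomly swap the two halves
    verse = f" {sep} ".join(parts)
    if random.random() < drop_prob:               # simulate partial diacritization
        verse = re.sub(f"[{DIACRITICS}]", "", verse)
    return verse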
Training and Results
We use a transformer-based model with multi-head attention. We start with an embedding of size 64 and use a transformer block with two dense layers at the end with ReLU activation functions. The transformer block contains multi-head attention followed by dropout and layer normalization with a skip connection. We then add 3 bidirectional GRU layers followed by one dense layer; this last block contains the same skip connection as the previous block. In Figure <ref>, we show the main architecture of the model. We train the model for 15 epochs and save the model that achieves the best validation accuracy. In Table <ref>, we compare the results of our transformer base model to the work of <cit.>, mainly comparing training on the smaller and larger datasets with and without diacritics. Trained on a comparable corpus, our base model achieves better results than the state of the art with and without diacritics on the test set. Also, our models are 4 times faster at inference when evaluated on a Tesla T4 machine with around 16GB of memory.
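The following sketch illustrates the overall layout described above using tf.keras; the head count, feed-forward width, dropout rate, and GRU width are illustrative assumptions rather than the exact values used.

import tensorflow as tf
from tensorflow.keras import layers

def build_meter_classifier(vocab_size, num_meters=16, max_len=128, emb_dim=64):
    """Sketch of the meter classifier described above (hyper-parameters illustrative)."""
    inp = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, emb_dim)(inp)

    # Transformer block: multi-head attention + dropout + layer norm, with a skip.
    att = layers.MultiHeadAttention(num_heads=4, key_dim=emb_dim)(x, x)
    att = layers.Dropout(0.1)(att)
    x = layers.LayerNormalization()(x + att)
    ff = layers.Dense(2 * emb_dim, activation="relu")(x)
    ff = layers.Dense(emb_dim, activation="relu")(ff)
    x = layers.LayerNormalization()(x + ff)

    # Three bidirectional GRU layers and a dense layer, with the same kind of skip.
    g = x
    for _ in range(3):
        g = layers.Bidirectional(layers.GRU(emb_dim, return_sequences=True))(g)
    g = layers.Dense(emb_dim, activation="relu")(g)
    x = layers.LayerNormalization()(x + g)

    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(num_meters, activation="softmax")(x)
    return tf.keras.Model(inp, out)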
§.§ Era Classification
We group the poems into four main eras in Hijri years, corresponding to 1) before Islam - 132, 2) 132-232, 3) 232-784, and 4) 784-now. We use a maximum length of 64 verses for each poem. We then cap each class at 50,000 poems in order to avoid bias towards classes with many poems. For tokenization, we use a SentencePiece tokenizer with a vocabulary size of 10,000 and a maximum of 128 tokens per poem. We train a model with 3 bidirectional layers and two dense layers with 30% dropout for 5 epochs with batch size 128. Figure <ref> shows the confusion matrix on the test set. We notice that, in general, there is confusion, especially between consecutive eras.
§.§ Theme Classification
We group the poems into four main categories: elegy (sad) poems, lampoon (sarcasm) poems, boasting (praise) poems, and romantic poems. We use a similar training setup as in era classification. Figure <ref> shows the confusion matrix on the test set. Generally, we observe that the model finds it much more difficult to predict the correct classes than in era classification. We think the reason is contamination of the dataset, which might contain many incorrect labels.
§ POETRY DIACRITIZATION
In this section, we try to tackle the problem of diacritizing Arabic poetry. Usually, poetry contains many classical words and metaphors which makes assigning diacritics to sentences more challenging.
§.§ Training datasets
We use the Tashkeela dataset for pre-training the model <cit.>. Since the dataset doesn't contain any splits, we utilize the splits suggested by <cit.>, which contain 50k training, 2.5k validation, and 2.5k testing sentences. For Ashaar, since many sentences are not diacritized, we filter by the percentage of diacritics: if a verse is missing more than 5% of its diacritics, we discard it from training. We end up with 26,091 poems after also discarding short poems, using 23,481, 1,305, and 1,305 poems for training, validation, and testing respectively. We report the word error rate (WER), the diacritic error rate (DER), and their counterparts computed without case endings, WER* and DER*.
§.§ Results
We use a 1-D convolution bank, highway network, and bidirectional GRUs from <cit.> as our main model for pre-training. We pre-train two models, one on Tashkeela and another on the diacritized version of Ashaar. We train each model for 10,000 steps and evaluate both on the test subset of Ashaar. In Table <ref>, we compare the two training strategies for diacritization. We observe that pre-training and then evaluating on Ashaar provide better results.
§ PREDICTING ARUDI STYLE
Each given meter has a closed set of tafeelat that represent how the meter should be constructed. For example, the Taweel meter has this sequence where 1 represents harakah and 0 represents a sukun:
11010 1101010 11010 110110
When a verse or hemistich is created it should follow one of the permissible representations. If the verse doesn't follow the meter, we can map it to the original sequence by addition, deletion, or flipping. As an example, the following sequence could be mapped to the previous sequence using that coloring scheme:
110100 1101010 11011 110110
Using that representation, we can predict whether a given poem has any problems as follows.
* We first created a dataset of all permissible changes of a given meter.
* For a given poem we diacritize it using the approach mentioned in section <ref>. We then map every harakah to 1 and sukun to 0.
* Then we use our collected dataset to find the sequence with the largest cosine-similarity match. We utilize the built-in Python function which gives a similarity score between input patterns (a minimal sketch is given below). We can also use our meter classification model to reduce the cost of the search.
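A minimal sketch of this matching step is shown below, using difflib.SequenceMatcher as a stand-in for the (unspecified) built-in similarity function; the exact similarity measure used in our system may differ.

from difflib import SequenceMatcher

def best_arud_match(pattern, meter_patterns):
    """Return (similarity, reference) for the permissible sequence closest to
    the observed binary pattern."""
    return max((SequenceMatcher(None, pattern, ref).ratio(), ref)
               for ref in meter_patterns)

# Example with the Taweel-like sequences shown above.
score, ref = best_arud_match("110100 1101010 11011 110110",
                             ["11010 1101010 11010 110110"])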
This makes our algorithm robust: even if the verse doesn't follow any given Tafeelah, or the diacritized form is not accurate, we will still predict the Tafeelah with high confidence. Furthermore, using our color-coding representation we are able to indicate whether a given character needs to be added, deleted, or flipped.
In order to assess the ability of our system to correctly predict a given Arudi style, we manually created an independent test set containing 100 hemistichs. We use our system to predict the patterns and then compare the gold patterns to the output. Using that, we get an average similarity score of 93.41%, which indicates high similarity, and 43% of the cases are an exact match (i.e., a similarity score of 100%), which indicates a precise approach.
§ POETRY GENERATION
In this section, we consider training a poetry model from scratch rather than fine-tuning. Our early experiments show that usually poetry doesn't work well with word pieces (see Appendix <ref>) so instead we retrain the whole model on characters.
§.§ Data Preparation
Representation Our main objective is to train a model that can generate poetry that preserves the meter, theme, and structure of classical poetry. To do that, we introduce new types of tokens to the model as in Table <ref>.
Below we show a simple example of how to encode a given poem that contains two verses. We use an HTML-like prompting approach to be applied for a given input poem. Note that is in the and in the range .
Note that, for poems that don't have a meter we use our pre-trained meter classification to predict that . To make the prediction more robust, we use a majority vote over the poem to be more accurate. We filter out poems that don't match our meter classifier label. For the theme, we reserve a token for unknown .
Data Cleaning and Filtration We apply the following cleaning procedures for each poem
* We map characters using their Unicode representation.
* Remove poems that don't have an even number of verses.
* Remove poems that have very short verses, i.e., fewer than 5 characters.
* Remove poems with meters that are not one of the 16 classes we have.
We release our dataset pre-processed in that format in HuggingFace [<https://huggingface.co/datasets/arbml/Ashaar_dataset>].
§.§ Training
For this task, we don't remove any diacritics, and we consider this as an approach to generate poetry with diacritics as well. Training a GPT-based model using BPE tokenization would be expensive because the frequency of word pieces would be much lower, especially with partially diacritized text, so we use a character-based tokenization approach. We train the model for 450,000 steps with batch size 16 and context size 512. The maximum vocabulary size is 121, which equals the number of characters plus diacritics in the corpus in addition to the reserved tokens in Table <ref>. We use the default GPT-2 transformer-based architecture[<https://huggingface.co/docs/transformers/model_doc/gpt2>] with 10 layers.
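A rough sketch of such a character-level configuration with the HuggingFace transformers library could look as follows; the number of attention heads and the embedding width are illustrative assumptions.

from transformers import GPT2Config, GPT2LMHeadModel

# Character-level GPT-2 roughly matching the setup above: 121 tokens
# (characters + diacritics + reserved control tokens), context of 512 tokens,
# and 10 transformer layers; head and embedding sizes are illustrative.
config = GPT2Config(vocab_size=121, n_positions=512,
                    n_layer=10, n_head=8, n_embd=512)
model = GPT2LMHeadModel(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")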
§.§ Evaluation
Evaluating language models is a difficult task, let alone poetry generation which is a creative challenging task. For this purpose, we use a set of novel evaluation metrics, to evaluate the generative power of our pre-trained models.
Rhythm Evaluation
In order to evaluate how much rhythm is encoded in our generated poetry, we use meter classification. Given a generated poetry output, we can evaluate whether the model generates poetry that belongs to the intended meter with high confidence. We use the same meter classification model that we created in Section <ref>; because we cannot force the model to generate poetry in a specific meter, we use this high-accuracy classifier to evaluate how much rhythm the model is able to generate. At each step, we generate 10 verses for each of the 15 meters used in <cit.>, and repeat the process 100 times for each meter, resulting in 1,500 generated poems. Then, we pass the generated poems to our classification model to predict the meter, and use majority voting to decide if the poem meter is correct. For Top-3 and Top-5 accuracy, we count a prediction as correct if the top 3 or top 5 predicted meters contain the true meter. In Table <ref>, we show the results and compare them to the work done by <cit.>. Even though our model is much smaller, it still achieves better results at the poem level.
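The poem-level vote itself is a one-liner (a sketch; classify_verse stands for any per-verse meter classifier):

from collections import Counter

def poem_meter(verses, classify_verse):
    """Poem-level meter label by majority vote over per-verse predictions."""
    return Counter(classify_verse(v) for v in verses).most_common(1)[0][0]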
In Figure <ref>, we show the confusion matrix for meter classification on the generated poetry. We mostly observe that the more popular the meter, the better the results. Still, for 50% of the meters, we achieve more than 90%.
Zero-shot Analysis
Zero-shot evaluation is used to evaluate how well pre-trained models can generalize to new tasks without explicit pre-training or fine-tuning. The model was not pre-trained explicitly to predict diacritics for a given text in a supervised way. In Table <ref>, we evaluate the correctness of our model in predicting diacritics against our pre-trained diacritization model from Section <ref>, which we consider as the gold prediction. We pre-train an unconditional character-based model on Ashaar and evaluate its diacritization ability, sampling with different probabilities and evaluating the DER and WER metrics. We observe that the model is able to predict diacritics with at most a 50% error rate.
§ CONCLUSION
To summarize, our paper introduces a system called Ashaar capable of analyzing and generating conditional poetry. Additionally, we curate multiple datasets and assess their effectiveness in various tasks such as classification, diacritization, Arud prediction, and conditional poetry generation. Furthermore, we leverage this dataset to generate poetry and evaluate the performance of our character-based model in diacritization, where we observe a satisfactory level of proficiency.
§ ACKNOWLEDGEMENT
We would like to thank the College of Computing and Mathematics at KFUPM for providing the compute to train some of the models. Also, we would like to thank ML Collective for providing part of the compute, which was used to train the generative models. We would also like to thank Omar Hammad, with whom we discussed some of the ideas during the early phases of the project. In addition, we would like to thank Kyle McDonald for providing the compute to pre-train some of the earlier models.
§ MODERN POETRY GENERATION
In this section, we discuss our experiments of pre-training on modern poetry.
§.§ Pre-training Dataset
This is a large dataset collected from around 10 Arabic newspapers <cit.>. Arabic newspapers are written in Modern Standard Arabic (MSA), which meets our needs. One more advantage of such newspapers is their diversity: topics are written in many domains, such as sports, politics, economy, etc. The total size of the text is about 15GB with 1.5 billion words. Working with such a large dataset needs special techniques even for reading it and extracting some useful statistics; we read it in chunks in a parallel setting to speed this up. We mainly use a GPT-2 architecture with a context size of 512, 12 heads, and 12 layers. In order to fit the model on a V100 with a reasonable batch size, we reduced the context size (the number of tokens of history trained on in parallel) to half, which is a reasonable compromise because poems are usually not long. For training, we used transformer-lm, a PyTorch implementation that allows training on multiple GPUs. We trained on a 4 x V100 machine with a 200 GB HDD drive, 32 virtual CPUs, and 120 GB of memory. The speed-up was around 2.6x compared to one GPU; however, the speed-up decreased in each epoch, which might have been caused by a memory leak. The 120 GB of memory was necessary to tokenize the 14GB dataset all at once. Training used a SentencePiece tokenizer. Training on the full corpus for around 10 epochs took around 20 days.
§.§ Fine-tuning
There is a big shift in context, writing style, and vocabulary between old traditional Arabic poetry and modern standard recent poetry. This would cause undesirable results, as the model is trained on Modern Standard Arabic text, so we tried to limit or eliminate the old poetry to the best of our capabilities.
In terms of the datasets collected, we collected poetry from three famous Arabic poetry websites: Aldiwan, Abudhabi poetry encyclopedia, and Adab.
Aldiwan and the Abu Dhabi encyclopedia hold somewhat similar poetry in terms of structure and categorization methodology. On the other hand, Adab is more comprehensive and contains more recent poetry that is close to MSA in terms of vocabulary and context, as well as more prose poetry than the other two repositories, which makes it the best fit for the transfer learning task.
We fine-tuned the pre-trained model on a poetry dataset, experimenting with different approaches to see their effect on the generated poetry. First, we trained on a poetry dataset based on meters extracted from Aldiwan. However, we realized that the model was not able to preserve the meter and basically jumped between different meters. To see the effect of training on a single meter, we extracted all the poems that belong to one meter, Taweel. However, we noticed something interesting: the generated poetry was somewhat meaningless because the model tried to pick specific words to preserve the chosen meter. GPT-2 is a subword model, but to generate a poem in a certain meter it has to have some knowledge about characters. Moreover, there are 16 meters, and training on the largest class (Taweel) reduced the size of the dataset to only around 15 MB; Arabic is a morphologically rich language, and in order to capture language understanding we need a larger dataset. In addition, most of these poems were old and contained many words that are not used anymore in modern literature, which created a vocabulary shift between pre-training and fine-tuning.
To overcome these issues, we extracted only modern poems from Adab. We created an algorithm to clean the dataset and remove short poems as well as rhythmic poems, ending up with around a 26 MB dataset. We applied the same segmentation and tokenization process as in our pre-training. In addition, we added special characters to mark the end of poems (“#”) and the end of verses (“&”); training without these special characters mostly produced incoherent results. We also did a lot of cleaning in order to increase the quality of the generated poems: we made sure to normalize the characters (some characters had different Unicode representations, so we mapped them to the same set of characters), remove digits (some poems had digits indicating different parts of a given poem), and remove special characters (there was a lot of metadata and diacritics). We realized that the quality of the dataset strongly affected the generated poems in one way or another, and we went back and forth between fine-tuning, inspecting the output, cleaning, and then fine-tuning again.
We then fine-tuned our pre-trained model for around 200 epochs with the same parameters, this time on Google Colab with a single V100 GPU for around three days. To see the effect of training longer, we analyzed the results after every 50 iterations. To check whether the model had memorized some poems, we randomly extracted some poems and ran the model for inspection, and we observed that the model learned many variations in the generated poems. We applied some post-processing to increase the readability of the generated poems: we segmented the output, replaced “&” and “#” with new lines, and resolved some issues with FARASA, which created some unneeded characters. For inference, we append the special character “#” to the prefix, indicating the start of a new poem, and use the top 3 predictions for randomly selecting the next token; we tested larger values, but they resulted in poor output. In Table <ref>, we show a sample of the generated modern poetry.
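A minimal NumPy sketch of this top-k decoding step is shown below; the function name is illustrative.

import numpy as np

def sample_top_k(logits, k=3):
    """Sample the next character from the k most probable tokens only,
    mirroring the top-3 decoding described above."""
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]                # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(np.random.choice(top, p=probs))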
|
http://arxiv.org/abs/2307.04130v1 | 20230709090053 | The 21-cm forest as a simultaneous probe of dark matter and cosmic heating history | [
"Yue Shao",
"Yidong Xu",
"Yougang Wang",
"Wenxiu Yang",
"Ran Li",
"Xin Zhang",
"Xuelei Chen"
] | astro-ph.CO | [
"astro-ph.CO"
] |
* Key Laboratory of Cosmology and Astrophysics (Liaoning) & College of Sciences, Northeastern University, Shenyang 110819, China
* National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
* Key Laboratory of Radio Astronomy and Technology, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100101, China
* School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China
* Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
* National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Northeastern University, Shenyang 110819, China
* Key Laboratory of Data Analytics and Optimization for Smart Industry (Ministry of Education), Northeastern University, Shenyang 110819, China
* Center for High Energy Physics, Peking University, Beijing 100871, China
The absorption features in spectra of high-redshift background radio sources, caused by hyperfine structure lines of hydrogen atoms in the intervening structures, are known collectively as the 21-cm forest. They provide a unique probe of small-scale structures during the epoch of reionization, and can be used to constrain the properties of the dark matter (DM) thought to govern small-scale structure formation. However, the signals are easily suppressed by heating processes that are degenerate with a warm DM model. Here we propose a probe of both the DM particle mass and the heating history of the Universe, using the one-dimensional power spectrum of the 21-cm forest. The one-dimensional power spectrum measurement not only breaks the DM model degeneracy but also increases the sensitivity, making the probe actually feasible. Making 21-cm forest observations with the upcoming Square Kilometre Array has the potential to simultaneously determine both the DM particle mass and the heating level in the early Universe, shedding light on the nature of DM and the first galaxies.
The 21-cm line of neutral hydrogen (HI) traces various structures throughout cosmic history.
Complementary to the 21-cm tomographic observation, the 21-cm absorption signal against high-redshift
radio point sources probes intervening structures along individual lines of sight <cit.>.
The structures located at different distances along the sightline resemble a forest of absorption features on the background source spectrum, which is called the 21-cm forest in analogy to the Lyman α (Lyα) forest.
The high frequency resolution of radio telescopes allows the 21-cm forest to be a promising probe of small-scale structures during the epoch of reionization (EoR) <cit.>.
In warm dark matter (WDM) models, the small-scale power is suppressed by free-streaming effect compared with the standard cold dark matter (CDM) model <cit.>.
Using Lyα forest as a tracer of small-scale structures, this effect has been used to constrain the WDM particle mass
at low redshifts <cit.>.
Similarly, the 21-cm forest
can potentially be used
deep into the EoR <cit.>,
as the decreased number of low-mass halos leads to weaker 21-cm forest signals.
Methods have been developed to improve the detection of 21-cm forest signal <cit.>.
However, the 21-cm forest signal can also be suppressed by heating effects during early galaxy formation<cit.>.
While this means that it is a sensitive probe of the temperature of the intergalactic medium (IGM)
<cit.>, it is degenerate with the WDM suppression effect <cit.>, making the interpretation of observations ambiguous.
Nevertheless,
the WDM reduces mainly the number density of 21-cm absorption lines <cit.>, whereas a higher IGM temperature suppresses both the absorption depth and line number density
<cit.>. This difference makes it possible to distinguish these two effects statistically.
In this Article, we simulate 21-cm forest signals during the EoR under the influence of different dark matter (DM) particle masses and different heating histories of the IGM. We show that although the IGM heating and WDM both suppress the 21-cm signal, they behave differently. By measuring the one-dimensional (1D) power spectrum along lines of sight, it is possible to break the degeneracy, and
constrain the DM particle mass and the IGM temperature (hence the early heating history)
simultaneously.
To simulate the 21-cm forest from the EoR, a high dynamic range is required to model the large-scale structures in density and ionization fields on ≳ 100 comoving-megaparsec scales, while resolving small-scale halos and their ambient gas on approximately kiloparsec scales.
We use a hybrid approach to achieve this.
The cosmological evolution of large-scale density and ionization fields is simulated with the semi-numerical simulation 21cmFAST <cit.>, with a comoving box size of 1 Gpc with 500^3 grids, where the initial density fluctuations are set by DM properties, while each of the (2 Mpc)^3 grids is further divided into 500^3 voxels, and populated with halos of various masses according to the local grid density and the conditional halo mass function <cit.>,
which depends on the matter power spectrum regulated by the DM particle mass. The density in each voxel is determined by the Navarro-Frenk-White profile <cit.> or the infall model profile <cit.> according to its distance to the nearest halo (Methods).
§ RESULTS
Recent astrophysical observations have put lower limits on the
WDM particle mass (m_ WDM) of
a few kiloelectronvolts <cit.>.
We simulate the 21-cm forest signals assuming m_ WDM = 10 keV, 6 keV, and 3 keV, respectively, to be compared with the signals from a CDM model.
The 21-cm optical depth depends on the density, the neutral fraction of hydrogen gas, and the spin temperature T_ S.
The density field
and ionization field
are simulated according to the DM properties as described above, with more details given in Methods. T_ S is assumed to be fully-coupled to the gas kinetic temperature T_ K by the early Lyα background<cit.>, and T_ K is determined by the heating history of the IGM, or the virial temperature of halos, depending on the gas location (Methods).
The heating history of neutral IGM during the EoR is computed taking into account the adiabatic expansion of the universe, the Compton heating/cooling, and the X-ray heating.
We model the X-ray emissivity as proportional to the formation rate of early non-linear structures <cit.>, normalized by an X-ray production efficiency parameter f_ X
(Methods).
Assuming an unheated IGM (f_ X = 0), the 21-cm optical depth (top panels) and the differential brightness temperature (negative, bottom panels) along a line of sight at z=9 are shown in Fig. <ref>, for CDM (left column) and various WDM particle masses (right columns), respectively.
In the lower panels, the flux density of the background source, scaled to 150 MHz assuming a power-law spectrum, is assumed to be S_150 = 1 mJy, 10 mJy, and 100 mJy, from top to bottom respectively.
The overall 21-cm absorption depth in WDM models is comparable to the signal level in the CDM model, both corresponding to the absorption depth by the unheated IGM.
However, the small-scale fluctuations are notably reduced in WDM models, due to the more suppressed formation of low-mass halos.
Note that the major contribution to the 21-cm forest signal is from the overdense gas in the halo surroundings which is not heated by virialization shocks <cit.>.
These small-scale fluctuations are also suppressed,
resulting in sparser absorption lines in the spectra.
Figure <ref> shows the 21-cm optical depth (top panels) and brightness temperature (bottom panels) spectra at z∼ 9 in the CDM model, assuming different X-ray efficiency parameters.
As f_ X increases, the IGM is increasingly heated, increasing the spin temperature and notably reducing the 21-cm forest signal. The dotted and dashed lines in the lower panels correspond to the thermal noise levels expected for phase-one and phase-two low-frequency
arrays of the Square Kilometre Array (denoted by SKA1-LOW and SKA2-LOW), for which array sensitivities of A_ eff / T_ sys = 800 m^2 K^-1 <cit.> and 4000 m^2 K^-1 <cit.>
(with A_ eff being the total effective area and T_ sys being the system temperature) are adopted, respectively. For both arrays, we assume a maximum baseline of 65 km, a channel width of 1 kHz, and an integration time of 100 hours (hr).
For the case with negligible early X-rays, the 21-cm forest signal can be marginally detected by the SKA1-LOW for sources with S_150∼ 1 mJy, while the same signal will be
easily detected with SKA2-LOW.
However, the heating will notably diminish the detectability of individual absorption lines, weakening the probing power of the 21-cm forest on either the DM properties, or the thermal history of the IGM.
Even if f_ X = 0.1, i.e. the early star formation has only ∼ 10% X-ray productivity as that of nearby starburst galaxies, the IGM will be heated to about 56 K at z = 9, then direct measurement of the 21-cm forest would only be possible for extremely bright quasars
with S_150≳ 100 mJy for SKA1-LOW, or S_150≳ 10 mJy for SKA2-LOW, otherwise a much longer integration time would be required.
If f_ X≳ 1, the IGM would be heated to ≳ 650 K at z = 9, then direct detection of the forest signal will
be challenging even for SKA2-LOW.
The heating would be weaker at higher redshifts, but then it would be more difficult to find a suitable quasar as background source.
Moreover, only the fluctuating part of the absorption is measurable in 21-cm forest observation, while the overall absorption depth from the homogeneous IGM would be effectively subtracted when comparing with the intrinsic continuum <cit.>.
If we simply count the absorption lines with a certain threshold of optical depth or equivalent width, the effects of a WDM model and a more heated IGM would be degenerate, both reducing the number of detectable absorbers <cit.>. A statistical variable with more distinguishing power is needed. As we shall show below, the 1D power spectrum of 21-cm forest along the line of sight <cit.> can serve this purpose.
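In practice, such a line-of-sight power spectrum can be estimated from a simulated (or observed) δT_b spectrum with a simple periodogram; the sketch below assumes uniform comoving sampling and a particular normalization convention, which may differ from the exact estimator adopted in this work.

import numpy as np

def los_power_spectrum(delta_Tb_mK, dx_Mpc):
    """Periodogram estimate of the 1D power spectrum of a line-of-sight
    brightness-temperature array; returns k in Mpc^-1 and P(k) in mK^2 Mpc."""
    n = delta_Tb_mK.size
    field = delta_Tb_mK - delta_Tb_mK.mean()          # fluctuating part only
    ft = np.fft.rfft(field) * dx_Mpc                  # approximate continuous FT
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx_Mpc)    # comoving wavenumber
    power = np.abs(ft) ** 2 / (n * dx_Mpc)            # periodogram normalization
    return k[1:], power[1:]                           # drop the k = 0 mode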
The left panel of Fig. <ref> compares the 1D power spectra of 21-cm forest in the CDM model with different f_ X.
The 21-cm optical depth is inversely proportional to the gas temperature, and proportional to the density.
As f_ X increases, the IGM is increasingly heated, and the 1D power spectrum is notably suppressed on all scales.
When the IGM is cold,
the high contrast in temperature between gas in halos and gas in the IGM far from halos dominates the large-scale fluctuations in the optical depth, with typical scales corresponding to the clustering scales of halos of various masses.
As f_ X increases from 0 to 1, the IGM far from halos with the lowest temperature is heated first, suppressing the temperature contrast on scales of halo clustering, which results in the flattening of 1D power spectrum on large scales.
When f_ X = 1, the IGM temperature is about 650 K at z = 9, comparable to the virial temperature (∼ 1000 K) of the smallest halos holding gas (with mass M_ min∼ 10^6 M_⊙, Methods), then
the large-scale fluctuations in the temperature
are mostly smoothed, leaving only a flatter power spectrum originated from density fluctuations. The 21-cm forest and its 1D power spectrum are further reduced when f_ X increases from 1 to 3.
The 1D power spectra all drop off on small scales corresponding to the clustering scale of the smallest halos holding gas, and the cut-off at the small-scale end is set by the spectral resolution assumed.
The right panel of Fig. <ref> shows the results for different DM properties assuming an un-heated IGM (f_ X = 0).
The lower m_ WDM results in
a much lower level of small-scale density fluctuations, thus suppressing the small-scale 21-cm forest power spectrum. Note that with the same thermal history, the overall amplitude of the 1D power spectrum remains similar for different m_ WDM, while the slope
will be steeper for a warmer DM model. This behavior is distinct from the heating effect, which suppresses the 1D power spectrum more dramatically on all scales.
The dotted and dashed lines in Fig. <ref> indicate the thermal noise in the power spectrum, P^ N, expected for SKA1-LOW and SKA2-LOW respectively,
utilizing 10 background sources.
The error bars include both the thermal noise of SKA2-LOW and the sample variance
(see Methods). As shown in Fig. <ref>, for a background source with S_150 = 10 mJy, direct measurement of 21-cm forest becomes difficult if f_ X≳ 0.1, and almost impossible even for SKA2-LOW if f_ X≳ 1.
However, the 1D power spectrum of 21-cm forest can be measured precisely by SKA1-LOW over a broad range of wavenumber k if f_ X∼ 0.1, and it is still detectable by SKA2-LOW with S_150∼ 10 mJy sources even if f_ X = 3 at z=9.
This is because the absorption appears as an increased variance and can be measured statistically from the power spectrum even if individual absorbers are too weak to be detected with notableness <cit.>. The 1D power spectrum measurement also allows extraction of the scale-dependent information encoded in the density and temperature fields, in contrast to the flatter thermal noise. So the
observation of the 21-cm forest by 1D power spectrum is not only more feasible, but also has better discriminating power for the effects of IGM heating and the WDM.
Fig. <ref> shows the 1D power spectra for different f_ X and m_ WDM assuming S_ 150 = 1 mJy, 10 mJy, and 100 mJy, respectively. Using 1D power spectrum, with 10 background sources of S_ 150∼ 1 mJy and a moderate integration time of ∼ 100 hr, the 21-cm forest signal will be detectable by SKA2-LOW if f_ X≲ 0.1, for all DM particle masses considered here.
For brighter sources with S_ 150≳ 10 mJy, the full shape of 1D power spectrum can be well characterized, and a broader range of possible f_ X values can be probed.
Therefore the 21-cm forest 1D power spectrum will not only break the degeneracy between the effects of WDM and heating, but also be vital to make the probe feasible in practice.
The Universe may also accommodate both a heated IGM and WDM particles, both regulating the amplitude and shape of the 1D power spectrum of 21-cm forest.
We simulate the
signals for various combinations of f_ X and m_ WDM values, and measure the amplitude P and the slope β = dlogP(k)/ dlogk of the 1D power spectra at k = 40 Mpc^-1.
The top panels of Fig. <ref> show that the amplitude of the 1D power spectra roughly determines f_ X, or the IGM temperature, with a weak degeneracy between a higher f_ X and a smaller m_ WDM. On the other hand, the slope
in the bottom panels shows a different degeneracy;
a flatter power spectrum indicates a higher f_ X and/or a larger m_ WDM, while a steeper one
implies a lower f_ X and/or a smaller m_ WDM.
Therefore, the amplitude and slope of 21-cm forest 1D power spectrum can be diagnostic characters for the DM particle mass and the IGM temperature.
When combined, one can effectively break the degeneracy and determine f_ X and m_ WDM simultaneously.
With the 21-cm forest 1D power spectrum measured from 100 neutral patches of 10 comoving megaparsec at z = 9,
we use the Fisher matrix formalism to forecast constraints on m_ WDM and T_ K as expected for both SKA1-LOW and SKA2-LOW, including the thermal noise and sample variance.
Fig. <ref> shows that if the IGM was only weakly heated, then
very tight constraints can be put on both m_ WDM and T_ K, with σ_m_WDM = 1.3 keV and σ_T_K = 3.7 K for the fiducial model of m_ WDM = 6 keV and T_ K = 60 K after a total observation time of δ t = 100 hr on each source using SKA1-LOW, and σ_m_WDM = 0.3 keV and σ_T_K = 0.6 K using SKA2-LOW. σ_m_ WDM and σ_T_ K are marginalized absolute errors.
If the IGM was heated up to 600 K at z = 9 (corresponding to f_ X = 1), then SKA2-LOW would be required, and we expect to have σ_m_WDM = 0.6 keV and σ_T_K = 88 K.
The probe is more sensitive for lower values of m_ WDM.
Note that these constrains can be obtained by measurements on segments of neutral patches along sightlines against 10 background sources with S_ 150 = 10 mJy.
The constraints would be better if more sources
at different redshifts, or brighter sources,
are available.
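The forecast itself reduces to a standard two-parameter Fisher matrix once the band-power errors (thermal noise plus sample variance) are specified; below is a minimal sketch assuming independent Gaussian errors per k bin, with the partial derivatives of the band powers with respect to m_WDM and T_K evaluated numerically from the simulations.

import numpy as np

def fisher_2param(dP_dmWDM, dP_dTK, sigma_P):
    """Two-parameter Fisher forecast from 1D band powers, assuming independent
    Gaussian errors sigma_P(k); returns the marginalized 1-sigma errors
    (sigma_m_WDM, sigma_T_K), with derivatives evaluated at the fiducial model."""
    derivs = [np.asarray(dP_dmWDM), np.asarray(dP_dTK)]
    F = np.array([[np.sum(di * dj / sigma_P ** 2) for dj in derivs]
                  for di in derivs])
    return np.sqrt(np.diag(np.linalg.inv(F)))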
§ DISCUSSION
The 21-cm signal from the EoR can potentially be used to constrain DM properties <cit.>,
but the degeneracies with astrophysical effects can be an obstacle<cit.>.
During the EoR, there are various feedback effects <cit.>. Here we consider primarily radiative feedbacks, including Lyα photons coupling T_ S to T_ K, ionizing photons determining the large-scale ionization field, and X-ray photons heating the IGM. The mechanical and chemical feedbacks affect the density profiles and the cooling mechanisms, but have minor influences on the 21-cm forest.
The main focus of this work is the heating effect that is most important in reducing the 21-cm forest signal and is degenerate with the WDM effect.
Using a set of semi-numerical simulations covering a high dynamic range,
we show that both the presence of WDM and an early X-ray heating can reduce the number of observable 21-cm absorbers. This degeneracy hinders the 21-cm forest from being an effective probe to either the DM properties or the thermal history of the universe.
We have demonstrated that the 1D power spectrum of 21-cm forest is a good observable to break this degeneracy, and is even effective in high heating-rate cases in which the number of 21-cm forest lines is severely diminished.
By quantifying the fluctuations, the 1D power spectrum of the 21-cm forest is also immune to subtraction of the overall absorption from the homogeneous IGM in practical observations. The DM particle mass and the IGM temperature at a specific redshift can be simultaneously constrained.
Although in our simulation the gas density profile surrounding a halo is based on simple models,
this does not have much impact on the number density and the clustering properties of absorption lines, which determines the main characteristics of the 1D power spectrum.
We also note that the overall signal level is dependent on the local density δ_0 in the large-scale environment.
We investigate the effect of local density by computing the 21-cm forest signals on different grids, with various densities on the 2 Mpc scale.
As shown in Extended Data Figs. 1 and 2, the local density affects the overall magnitude of signals,
but the effect is much weaker than the heating, even in the extreme case of δ_0 = 2 in a grid of ∼ 2 Mpc.
Meanwhile, the local density has almost negligible effect on the shape of 1D power spectrum, making the effect distinguishable from the WDM effect.
While direct detection of individual 21-cm absorption lines will be challenging if the early IGM is heated, the 1D power spectrum measurement is more promising. The observation relies on the availability of high-redshift radio-bright sources prior to reionization.
Quite a number of radio-loud quasars have been detected beyond redshift 5 <cit.>, including nine at z > 6<cit.>.
A few hundred radio quasars with > 8 mJy at z ∼ 6 are expected to be spectroscopically observed in the near future <cit.>. As there is no evidence for the evolution in the radio loudness fraction of high-z quasars <cit.>, one can expect about ∼ 2000 sources
with > 6 mJy at 8 < z < 12 <cit.>.
The long-duration gamma-ray bursts (GRBs)
are also possible high-redshift sources.
Several cases have been discovered beyond redshift 8 <cit.>.
For future missions like the High-z Gamma-ray bursts for Unraveling the Dark Ages Mission and the Transient High-Energy Sky and Early Universe Surveyor, the expected detection rate of luminous GRBs from Population III stars is 3 – 20 yr^-1 at z > 8
<cit.>.
Given the higher sensitivity of 1D power spectrum observation, radio afterglows of high-z GRBs could also be used.
The fast radio bursts, though brighter, are however too brief to allow long integration required.
Current combination of astrophysical probes of strong gravitational lensing, Lyα forest, and luminous satellites of our Galaxy
indicates that m_ WDM may be larger than 6 keV<cit.>, but models with m_ WDM of a few keV are still not excluded.
On the other hand,
tomographic 21-cm power spectrum measurement, in combination with complementary probes, yield a constraint on the IGM temperature of 8.9 K < T_ K < 1.3× 10^3 K at z∼ 8 at 68% confidence<cit.>.
With the upcoming SKA-LOW,
the 21-cm forest observation, especially the 1D power spectrum, can improve the constraints on
both the properties of DM and the thermal history of the early universe simultaneously, providing an effective probe to the DM in an unexplored era in the structure formation history,
and to the first galaxies interplaying with the early IGM.
§ METHODS
§.§ The 21-cm forest signal.
Using high-redshift quasars or radio afterglows of GRBs as background radio sources <cit.>, the HI in halos and in the IGM absorbs 21-cm photons along the line of sight.
The 21-cm forest signal is the flux decrements due to 21-cm absorption with respect to the continuum of a background radio source, which in the Rayleigh-Jeans limit is characterized by the differential brightness temperature.
In the optically-thin limit, which is usually the case for the 21-cm transition, the observed differential brightness of the 21-cm absorption signal, relative to the brightness temperature of the background radiation T_γ(ŝ, ν_0, z) at a specific direction ŝ and redshift z, is
δ T_b(ŝ, ν) ≈ [T_S(ŝ, z) - T_γ(ŝ, ν_0, z)] / (1+z) × τ_ν_0(ŝ, z).
Here ν_0 = 1420.4 MHz is the rest-frame frequency of 21-cm photons, T_ S is the spin temperature of the absorbing HI gas, and τ_ν_0 is the 21-cm optical depth.
In terms of the average gas properties within each voxel, the 21-cm optical depth can be written as <cit.>
τ_ν_0(ŝ, z) ≈ 0.0085 [1+δ(ŝ, z)] (1+z)^{3/2} [x_HI(ŝ, z)/T_S(ŝ, z)] [H(z)/(1+z) / ( d v_∥/ d r_∥)] (Ω_b h^2/0.022) (0.14/Ω_m h^2),
where δ(ŝ, z), x_HI(ŝ, z), and H(z) are the gas overdensity, the neutral fraction of hydrogen gas, and the Hubble parameter, respectively, and d v_∥/ d r_∥ is the gradient of the proper velocity projected along the line of sight. Ω_b, Ω_m and h are the baryon density parameter, matter density parameter, and dimensionless Hubble constant, respectively.
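As an illustration, the fitting formula reduces to a few lines of Python when the velocity gradient is taken to be the pure Hubble flow, d v_∥/ d r_∥ = H(z)/(1+z), so that the bracketed velocity factor is unity; the default density parameters follow the adopted cosmology.

def tau_21cm(delta, x_HI, T_S, z, Omega_b_h2=0.02236, Omega_m_h2=0.143):
    """Voxel 21-cm optical depth from the fitting formula above, with the
    velocity gradient set to the pure Hubble flow so the bracketed term is 1."""
    return (0.0085 * (1.0 + delta) * (1.0 + z) ** 1.5 * (x_HI / T_S)
            * (Omega_b_h2 / 0.022) * (0.14 / Omega_m_h2))

# e.g. mean-density neutral gas at z = 9 with T_S = 10 K gives tau ~ 0.027
tau = tau_21cm(delta=0.0, x_HI=1.0, T_S=10.0, z=9.0)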
The brightness temperature of the background radiation at the rest frame of the 21-cm absorption T_γ(ŝ, ν_0, z) is related to the observed brightness temperature at a redshifted frequency ν, T_γ(ŝ, ν, z=0), by T_γ(ŝ, ν_0, z)=(1+z) T_γ(ŝ, ν, z=0), and it has contributions from both the background point source and the cosmic microwave background (CMB), i.e.
T_γ(ŝ, ν, z=0)=T_ rad(ŝ, ν, z=0)+T_ CMB(z=0),
where T_ rad(ŝ, ν, z=0) represents the observed brightness temperature of the point source, and it usually dominates over the CMB temperature (T_ CMB).
For a given radio telescope resolving a solid angle of Ω,
the observed brightness temperature of a source is related to the flux density S_ rad(ν) by
T_rad(ŝ, ν, z=0) = c^2 S_rad(ν) / (2 k_B ν^2 Ω),
where c is the speed of light and k_ B is the Boltzmann constant.
The flux density of the background source is modeled to have a power-law spectrum scaled to 150 MHz, i.e. S_ rad(ν) = S_150(ν / ν_150)^η <cit.>, where ν_150 = 150 MHz and a spectral index of η=-1.05 is assumed as appropriate for a powerful radio source like Cygnus A <cit.>.
Note that the spectral index of high-redshift quasars has a large scatter, and their spectra may be flatter than Cygnus A at low frequencies <cit.>, but the detailed spectral index makes only a negligible difference to our results.
In this work, we take the flux densities of S_150 = 1 mJy, 10 mJy, and 100 mJy for the background point sources as examples, and assume the maximum baseline of 65 km for both the SKA1-LOW and SKA2-LOW for calculating the angular resolution for a given redshift.
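A short sketch of this conversion (in SI units) is given below; the beam solid angle Ω must be supplied, e.g. as estimated from the maximum baseline at the observed frequency.

def T_rad_obs(S150_mJy, nu_MHz, Omega_sr, eta=-1.05):
    """Observed brightness temperature of the background point source, using
    S_rad(nu) = S_150 (nu/150 MHz)^eta and T = c^2 S_rad / (2 k_B nu^2 Omega)."""
    c, k_B = 2.998e8, 1.381e-23                       # SI units
    S = S150_mJy * 1e-29 * (nu_MHz / 150.0) ** eta    # mJy -> W m^-2 Hz^-1
    nu = nu_MHz * 1e6
    return c ** 2 * S / (2.0 * k_B * nu ** 2 * Omega_sr)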
Assuming that T_ S is fully coupled to T_ K by the early Lyα background, the 21-cm optical depth τ_ν_0 and the forest signal δ T_ b are then dependent on the density
δ, neutral fraction x_ HI, gas temperature T_ K, and the velocity gradient d v_/ d r_, of each voxel along the line of sight.
Here we account only for the Hubble expansion for the velocity field, but neglect the peculiar velocity,
as the peculiar velocity mainly shifts the contribution of the absorption from individual segments of gas. We note that the peculiar velocity may affect the individual line profiles <cit.>, but we expect that its effect on the overall amplitude of the signal and the 1D power spectrum is small. The density field, ionization field, and the gas temperature field are modeled as follows.
Throughout this study, we adopted the set of cosmological parameters consistent with the Planck 2018 results<cit.>: Ω_ m = 0.3153, Ω_ b h^2 = 0.02236, Ω_Λ = 0.6847, h = 0.6736, σ_8 = 0.8111. Ω_Λ and σ_8 are dark-energy density parameter and matter fluctuation amplitude, respectively.
§.§ The density field.
The evolution of the large-scale density field is simulated with linear theory using the
21cmFAST <cit.>, for both the CDM and WDM models.
The simulation box has a comoving size of (1 Gpc)^3, and (500)^3 grids.
The influence of DM properties on the density field is mainly on small scales.
In each of the 2 Mpc grids, the small-scale density distribution is simulated by randomly populating halos according to the conditional halo mass function and the local density of the grid from the 21cmFAST simulation, and assigning density profiles to the gas in the halos as well as in the IGM, as detailed below.
§.§.§ Halo mass function.
In the framework of the CDM model, the number density of halos per mass interval in the range (M, M + dM), in a simulation grid with mass M_0 and overdensity δ_0 at redshift z, can be modeled by the conditional halo mass function <cit.> of the Press-Schechter form <cit.>, i.e.
d n(M|δ_0,M_0;z)/ d M = √(1/(2 π)) [ρ̅_m0 (1+δ_0)/M] | d S/ d M| [δ_c(z) - δ_0]/(S-S_0)^{3/2} exp{-[δ_c(z) - δ_0]^2/[2(S-S_0)]},
where ρ̅_ m0 is the average density of matter in the universe today, S=σ^2(M) is the variance of mass scale M, S_0=σ^2(M_0), and δ_ c(z)=1.686/D(z) is the critical overdensity for collapse at redshift z extrapolated to the present time using the linear theory, in which D(z) is the linear growth factor.
In the WDM model, the structure formation is suppressed below the free streaming scale λ_ fs of DM particles, and the conditional halo mass function can be approximately written as <cit.>
d n(M|δ_0,M_0;z)/ d M = (1/2) {1+erf[log_10(M / M_fs)/σ_log M]} [ d n(M|δ_0,M_0;z)/ d M]_PS,
where σ_log M=0.5, and
M_ fs is the suppressing mass scale of halo formation corresponding to λ_ fs, i.e.
M_fs = (4 π/3) (λ_fs/2)^3 ρ̅_m0. The subscript PS denotes the Press-Schechter form in the CDM model.
The comoving free streaming scale is approximately <cit.>
λ_fs ≈ 0.11 (Ω_WDM h^2/0.15)^{1/3} (m_WDM/ keV)^{-4/3} Mpc,
where Ω_ WDM is the WDM density normalized by the critical density.
The Press-Schechter mass function [ d n (M|δ_0,M_0;z)/ d M]_ PS in Eq. (<ref>) takes the form of Eq. (<ref>), but the variance of density fluctuations is evaluated with the matter power spectrum fitted for WDM <cit.>:
P_WDM(k) = P_CDM(k) {[1+(α k)^{2 β}]^{-5/β}}^2,
where β = 1.12 and α is given by <cit.>
α = 0.049 (m_WDM/ keV)^{-1.11} (Ω_WDM/0.25)^{0.11} (h/0.7)^{1.22} h^{-1} Mpc.
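For illustration, the WDM suppression factor applied to the CDM power spectrum can be evaluated directly from the two fitting formulae above; the default Ω_WDM ≈ Ω_m − Ω_b and h follow the adopted cosmology.

def wdm_suppression(k_hMpc, m_wdm_keV, Omega_wdm=0.266, h=0.6736, beta=1.12):
    """Squared WDM transfer function multiplying P_CDM(k) in the fit above;
    k_hMpc is the comoving wavenumber in h Mpc^-1, alpha is in h^-1 Mpc."""
    alpha = (0.049 * m_wdm_keV ** (-1.11) * (Omega_wdm / 0.25) ** 0.11
             * (h / 0.7) ** 1.22)
    return (1.0 + (alpha * k_hMpc) ** (2.0 * beta)) ** (-10.0 / beta)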
Supplementary Fig. 1 shows the halo mass function, evaluated at δ_0 = 0 and S_0 = 0, for both CDM and WDM models.
The halo number is obviously suppressed below the free streaming scale in the WDM models, with the lower m_ WDM resulting in larger suppressing scale.
Especially, the WDM models notably reduce the total number of halos by suppressing the small ones, thus suppressing the small-scale fluctuations in the neutral hydrogen density, which have a major contribution to the 21-cm forest signals.
The major contribution to the 21-cm forest signal comes from the gas in and around the large number of low-mass halos that are not producing ionizing photons and reside in neutral environments<cit.>.
Therefore,
we focus on neutral patches along a given line of sight, and select neutral grids from the large-scale ionization field simulated by 21cmFAST. Then we randomly populate each of these 2 Mpc grids with halos according to the conditional mass function determined by the DM models.
We consider only halos below the mass upper limit M_4, corresponding to a virial temperature of T_ vir = 10^4 K, so that atomic cooling is not efficient enough to enable substantial star formation. The lower limit of the halo mass, M_ min, is set by the filtering mass scale, so that the halos can retain most of their gas and the gas in the ambient IGM contributes to the 21-cm absorption.
The filtering mass is mainly determined by the thermal history of the universe, and it is of order ∼ 10^6 M_⊙ at the redshifts of interest (7≲ z≲ 11) for f_ X≲ 1 in the CDM model <cit.>. It would be higher for higher f_ X, and the different density profiles in WDM models may also slightly modify its value. In the present work, we set the same M_ min = 10^6 M_⊙ for all the models for simplicity, but we expect that the dependence of the filtering mass on f_ X will make the probe more sensitive to the thermal history of the universe, while making it more challenging to discriminate WDM models in cases with high f_ X.
§.§.§ Gas profile.
Each grid along the line of sight is further divided into (500)^3 voxels, each with a size of (4 kpc)^3, then the gas density of each voxel is determined by its distance to the nearby halos.
Inside the virial radius r_ vir, we assume that the dark matter follows the NFW density profile <cit.>,
and the gas is in hydrostatic equilibrium with the dark matter <cit.>.
Thus, the gas density distribution can be derived analytically <cit.>:
lnρ_ g(r)=lnρ_ gc-[μ m_ p/(2 k_ B T_ vir)][v_ e^2(0)-v_ e^2(r)],
where ρ_ gc denotes the central gas density, μ is the mean molecular weight of the gas, m_ p is the proton mass,
and v_ e(r) is the gas escape velocity at radius r,
given by
v_ e^2(r)=2 ∫_r^∞ [G M(r^')/r^' 2] d r^' = 2 V_ c^2 [F(y x)+y x/(1+y x)]/[x F(y)].
Here V_ c^2 ≡ G M/r_ vir is the circular velocity at the virial radius, G is gravitational constant,
x ≡ r / r_ vir, y is the halo concentration, and F(y)=ln(1+y)-y/(1+y).
The central gas density is determined by normalizing the total baryonic mass fraction of the halo to the cosmic mean value, which gives
ρ_ gc =(Δ_c / 3) y^3(Ω_ b / Ω_ m) {e^A/[∫_0^y(1+t)^(A/t) t^2 d t]}ρ̅_ m(z),
where ρ̅_ m(z) is the mean matter density of the universe at redshift z, A ≡ 2 y/F(y), e is the mathematical constant (base of natural log), and Δ_c=18 π^2 + 82(Ω_ m^z-1) - 39(Ω_ m^z-1)^2 is the mean density of a virialized halo with respect to the cosmic mean value <cit.>, in which Ω_ m^z=Ω_ m(1+z)^3 /[Ω_ m(1+z)^3+Ω_Λ].
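As an illustration, the gas density profile inside the virial radius defined by the two equations above can be evaluated with a few lines of Python. This is only a sketch (not the authors' code); the halo mass, virial radius, virial temperature, concentration, and mean molecular weight used below are assumed example values.

```python
import numpy as np

G = 4.30091e-9        # gravitational constant [Mpc Msun^-1 (km/s)^2]
k_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
m_p = 1.6726e-24      # proton mass [g]

def F(t):
    return np.log(1.0 + t) - t / (1.0 + t)

def gas_profile(x, M, r_vir, T_vir, y, mu=1.22):
    """rho_g(r)/rho_gc as a function of x = r/r_vir for a halo of mass M [Msun]."""
    Vc2 = G * M / r_vir                                  # circular velocity squared [(km/s)^2]
    ve2 = 2.0 * Vc2 * (F(y * x) + y * x / (1.0 + y * x)) / (x * F(y))
    ve2_0 = 2.0 * Vc2 * y / F(y)                         # x -> 0 limit of the escape velocity squared
    fac = mu * m_p / (2.0 * k_B * T_vir) * 1.0e10        # Boltzmann factor; 1e10 converts (km/s)^2 to (cm/s)^2
    return np.exp(-fac * (ve2_0 - ve2))

x = np.linspace(0.02, 1.0, 50)
profile = gas_profile(x, M=1e7, r_vir=1.5e-3, T_vir=1e4, y=5.0)   # example low-mass halo at z ~ 9
print(profile[0], profile[-1])
```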
The gas density in the halo surroundings is enhanced because of the gravitational potential.
Outside the virial radii of halos, we assume that the gas density profile follows the dark matter distribution,
and it can be computed by using the infall model which is based on the excursion set theory <cit.>. The gas density profiles in and around halos of different masses are plotted in
Supplementary Fig. 2 for z = 9.
It is seen that there is a density discontinuity at the virial radius in our model. This is expected at the virialization shock near the virial radius <cit.>, though the exact location of the shock may vary from halo to halo <cit.>.
The infall model was developed for the matter density and velocity distribution around density peaks <cit.>. Directly applying it to arbitrary environments may over-predict the gas density in under-dense regions. Therefore,
we normalize the density field to ensure that the minimum density is 0, and the average density of the (500)^3 voxels in each 2 Mpc grid equals the grid density from the large-scale 21cmFAST simulation.
To test the reliability of the small-scale density field, we run a small-scale, high-resolution hydrodynamical simulation with GADGET (GAlaxies with Dark matter and Gas intEracT) <cit.> at high redshifts. The simulation has a box size of 4 h^-1 Mpc and 2×800^3 gas and DM particles <cit.>.
We compare the probability density distribution of our analytical gas density field with the one from the simulated gas density
in Supplementary Fig. 3,
at the same resolution at z = 17.
It shows that our gas density model closely recovers the probability distribution of the gas density fluctuations from the hydrodynamical simulations.
The line-of-sight density distribution in the CDM model is illustrated in the left panel of Extended Data Fig. 1
for three grids with different local overdensities δ_0 on the 2 Mpc scale at z=9.
The density distributions for different DM properties are shown in
Supplementary Fig. 4.
§.§ The ionization field.
The large-scale ionization field is simulated with the semi-numerical simulation 21cmFAST assuming ionizing sources with a minimum halo mass of M_4 and an ionizing efficiency parameter of ζ = 11 <cit.>.
By suppressing the formation of small-scale halos, the WDM models may possibly speed up or delay the large-scale reionization process by modifying both the abundances of ionizing sources and sinks <cit.>.
In the present work, we use the basic version of 21cmFAST in which the effect of sinks is incorporated by a homogeneous recombination number, and the reionization is delayed in the WDM models as shown in Supplementary Fig. 5.
It shows that the effect of WDM on the large-scale reionization history becomes obvious only if
m_ WDM≲ 3 keV, and this is consistent with the fact that atomic-cooling halos are effectively suppressed in WDM models with m_ WDM≲ 3 keV as shown in Supplementary Fig. 1.
Note that
the 21-cm forest signals mainly come from neutral regions,
and we pick up neutral patches in the large-scale simulation box to analyze the small-scale structures in the 21-cm forest signals.
The large-scale reionization history only determines the probability of getting a neutral patch of the IGM with a certain length along a line of sight.
In order to have consistent source properties when comparing the results for the same f_ X, we set the same ionizing efficiency parameter for all the models considered here, while the global reionization history would be slightly different among the WDM and CDM models. On the other hand, a different reionization scenario may change the minimum source mass; for example, a reionization model with stronger feedback effects would have a minimum halo mass for collapse higher than M_4, thus changing the reionization history. However,
the large-scale ionization field and the overall reionization history have only a minor effect on the small-scale 21-cm forest signals we are interested in.
For each of the neutral grids in the simulation box, we assume that the gas is in collisional ionization equilibrium (CIE), so that the ionized fraction of each voxel is determined by its local density and temperature, i.e.
n_ e n_ HIγ=α_ B n_ e n_ p,
where n_ HI, n_ e and n_ p represent the number densities of neutral hydrogen, electron and proton, respectively, γ is the collisional ionization coefficient <cit.>, and α_ B is the case B recombination coefficient <cit.> which is appropriate for low-mass halos and the incompletely ionized IGM.
Here both γ and
α_ B are functions of temperature.
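A short sketch of the resulting CIE neutral fraction is given below (not the authors' code). The fitting formulas used for γ(T) and α_B(T) are common approximations from the literature and are assumptions here; the actual coefficients adopted in the paper are those of the cited references.

```python
import numpy as np

def gamma_ci(T):
    """Collisional ionization coefficient of HI [cm^3 s^-1]; Cen (1992)-style fit, assumed."""
    return 5.85e-11 * np.sqrt(T) / (1.0 + np.sqrt(T / 1.0e5)) * np.exp(-157809.1 / T)

def alpha_B(T):
    """Case-B recombination coefficient [cm^3 s^-1]; simple power-law fit, assumed."""
    return 2.6e-13 * (T / 1.0e4) ** (-0.7)

def neutral_fraction_cie(T):
    """Under CIE, gamma * n_HI = alpha_B * n_p, so x_HI = alpha_B / (gamma + alpha_B)."""
    g, a = gamma_ci(T), alpha_B(T)
    return a / (g + a)

for T in (1.0e4, 3.0e4, 1.0e5):
    print(f"T = {T:.0e} K  ->  x_HI = {neutral_fraction_cie(T):.4f}")
```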
§.§ The temperature field.
The gas temperature T_ K of each voxel is determined by the thermal history of the early universe and the location of the voxel with respect to halos.
While the photoionization heating by the UV background dominates the gas heating in ionized regions <cit.>, it is the X-rays that can penetrate deep into the neutral IGM and dominate the heating of the neutral gas contributing to 21-cm signals.
For the gas in the neutral
IGM, its temperature is mainly determined by the cosmic expansion, the heating or cooling from the Compton scattering, and the X-ray heating.
The global evolution of the IGM temperature can be written as <cit.>
d T_ K/ d t=-2 H(z) T_ K+(2/3)ϵ_ comp/(k_ B n)+(2/3)ϵ_ X,h/(k_ B n),
where n is the total particle number density,
ϵ_ comp is the Compton heating/cooling rate per unit physical volume <cit.>, and ϵ_ X,h represents
the part of the X-ray emissivity ϵ_ X that contributes to heating,
for which we adopt a fitted formula to simulations, i.e.
ϵ_ X,h = [1-0.8751(1-x_i^0.4052)] ϵ_ X <cit.>, where x_i is the ionized fraction.
Assuming that the X-ray productivity is proportional to the star formation rate, and hence to the matter collapse rate, the total X-ray emissivity ϵ_ X can be written as <cit.>:
(2/3)ϵ_ X/[k_ B n H(z)] = 5 × 10^4 K f_ X(f_⋆/0.1)( d f_ coll / d z/0.01)[(1+z)/10].
Here f_⋆ is the star formation efficiency approximately evaluated at M_4 <cit.>, as appropriate for the most abundant star-forming halos, f_ coll is the fraction of matter collapsed into atomic-cooling halos with M>M_4, and f_ X is the normalization parameter describing the uncertain nature of X-ray productivity in the early universe as compared to the local universe<cit.>.
The global evolution of the IGM temperature T_ K is shown in
Supplementary Fig. 6 for different values of f_ X. The curve with f_ X = 0 denotes the case with purely adiabatic cooling and Compton heating.
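A minimal sketch of this thermal evolution (not the authors' code) is shown below. For brevity the Compton term is dropped (it is subdominant at the redshifts of interest), the collapsed-fraction derivative is a toy placeholder, and the initial temperature is only an approximate adiabatic value; all of these are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dfcoll_dz(z):
    """|d f_coll / d z| of atomic-cooling halos; toy placeholder, to be replaced by the EPS value."""
    return 0.002 * np.exp(-(z - 7.0) / 2.0)

def dTdz(z, T, f_X=1.0, f_star=0.1, x_i=0.0):
    """dT_K/dz from the evolution equation above, omitting the (subdominant) Compton term."""
    heat_frac = 1.0 - 0.8751 * (1.0 - x_i ** 0.4052)     # fraction of the X-ray energy going into heat
    xray = heat_frac * 5.0e4 * f_X * (f_star / 0.1) * (dfcoll_dz(z) / 0.01) * ((1.0 + z) / 10.0)
    return [2.0 * T[0] / (1.0 + z) - xray / (1.0 + z)]

T_init = 12.0          # approximate adiabatic IGM temperature at z = 25 [K] (assumed)
for f_X in (0.0, 0.1, 1.0, 3.0):
    sol = solve_ivp(dTdz, (25.0, 7.0), [T_init], args=(f_X,), dense_output=True, rtol=1e-6)
    print(f"f_X = {f_X}: T_K(z=9) = {sol.sol(9.0)[0]:.1f} K")
```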
Inside the virial radius, the gas kinetic temperature T_ K equals the virial temperature T_ vir of the halo.
As for the gas in the overdense regions near halos, it will be adiabatically heated depending on the local density.
In the absence of X-rays, the temperature profiles for halos with 10^6 M_⊙, 10^7 M_⊙, and 10^8 M_⊙ are illustrated in
Supplementary Fig. 7 for z = 9.
Similar to the density profiles, the gas temperature also shows a discontinuity at the virialization shock as expected, but the exact location of the virialization shock has negligible effects on our main results.
In the cases with X-ray heating, the gas temperature outside the halos is set by the maximum between the adiabatic temperature and the heated IGM temperature.
§.§ Thermal noise of direct measurement.
In the direct measurement of individual absorption lines,
the noise flux density averaged over two polarizations can be expressed as <cit.>:
δ S^ N≈ 2 k_ B T_ sys/[A_ eff√(2 δνδ t)],
where A_ eff is the effective collecting area of the telescope, T_ sys is the system temperature, δν is the channel width, and δ t is the integration time.
The corresponding thermal noise temperature is:
δ T^ N = δ S^ N[λ_z^2 /(2 k_ BΩ)] ≈λ_z^2 T_ sys/[A_ effΩ√(2 δνδ t)],
where λ_z is the observed wavelength, and Ω=π (θ/2)^2 is the solid angle of the telescope beam, in which θ = 1.22λ_z/D is the angular resolution with D being the longest baseline of the radio telescope/array.
For the SKA1-LOW, we adopt A_ eff / T_ sys= 800 m^2 K^-1 <cit.>,
and A_ eff / T_ sys= 4000 m^2 K^-1 is expected for SKA2-LOW <cit.>.
For both arrays, we assume D = 65 km and δ t = 100 hr, and δν = 1 kHz is assumed in order to resolve individual 21-cm lines.
Correspondingly, the synthetic spectra shown in Figs. <ref> and <ref> are smoothed with the same channel width.
At redshift z = 9, the angular resolution is about 8.17 arcsec, and the noise temperature is plotted with dotted and dashed lines in the lower panels in
Figs. <ref> and <ref>, for SKA1-LOW and SKA2-LOW respectively.
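The noise temperatures quoted above can be reproduced with a few lines of Python; the sketch below (not the authors' code) simply evaluates the two equations above using the SKA parameters assumed in the text (A_eff/T_sys, longest baseline, channel width, and integration time).

```python
import numpy as np

def delta_T_noise(z, Aeff_over_Tsys, D=65.0e3, dnu=1.0e3, dt_hours=100.0):
    """Thermal noise temperature [K] for the direct 21-cm forest measurement."""
    lam = 0.21106 * (1.0 + z)                    # observed wavelength [m]
    theta = 1.22 * lam / D                       # angular resolution [rad]; ~8 arcsec at z = 9
    Omega = np.pi * (theta / 2.0) ** 2           # beam solid angle [sr]
    dt = dt_hours * 3600.0                       # integration time [s]
    return lam ** 2 / (Aeff_over_Tsys * Omega * np.sqrt(2.0 * dnu * dt))

for name, ratio in (("SKA1-LOW", 800.0), ("SKA2-LOW", 4000.0)):
    print(f"{name}: delta T^N(z=9) = {delta_T_noise(9.0, ratio):.1f} K")
```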
§.§ 1D power spectrum of 21-cm forest.
It is seen from Fig. <ref> that the direct measurement of individual absorption lines is severely hampered by early X-ray heating. In order to improve the sensitivity for detecting the 21-cm forest signal, and to reveal the clustering properties of the absorption lines so as to distinguish the effects of heating from those of the WDM models, we follow the algorithm in Ref. <cit.> and compute the 1D power spectrum of the brightness temperature on hypothetical spectra against high-redshift background sources.
The brightness temperature δ T_b(ŝ, ν) as a function of observed frequency ν can be equivalently expressed in terms of line-of-sight distance r_z, δ T_ b^'(ŝ, r_z), and the Fourier transform of δ T_b ^'(ŝ, r_z) is
δT^'(ŝ, k_)=∫δ T_ b^'(ŝ, r_z) e^-i k_ r_z d r_z.
The 1D power spectrum along the line of sight is defined as:
P(ŝ, k_) = |δT^'(ŝ, k_)|^2(1/Δ r_z).
The term 1/Δ r_z is the normalization factor, in which Δ r_z is the length of sightline under consideration. To reveal the small-scale structures we are interested in, we select neutral patches with Δ r_z = 10 comoving Mpc, and compute the 1D power spectra from segments of 10 comoving Mpc along the line of sight. For a reasonable number of 𝒪(10) high-z background sources, the expected value of the power spectrum is obtained by averaging over 100 neutral patches on lines of sight
penetrating various environments,
i.e. P(k_) ≡⟨ P(ŝ, k_)⟩.
On each quasar spectrum, we will be able to select ∼ 10 segments of 10 comoving Mpc length in neutral patches; as the neutral patches are intermittently separated by ionized regions during the EoR, we may need a spectrum covering ∼ 200 comoving Mpc along the line of sight. A length of 200 comoving Mpc projects to a total bandwidth of about 14 MHz at redshift 9, corresponding to Δ z ∼ 0.8, which is reasonable in practice.
For the rest of the paper, we abbreviate k_ as k, as here we are always interested in the k-modes along the line of sight.
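The following sketch (not the authors' code) shows one way to implement this 1D power spectrum estimate with a discrete Fourier transform. The white-noise mock segments are placeholders standing in for the simulated δT_b sightlines, and the sampling (about 4 kpc over 10 comoving Mpc) mirrors the voxel size used above.

```python
import numpy as np

def power_1d(dTb, dr):
    """1D power spectrum P(k) [K^2 Mpc] of a brightness-temperature segment.

    dTb : delta T_b values [K] sampled every dr [Mpc] along the line of sight.
    """
    n = dTb.size
    length = n * dr                                   # Delta r_z of the segment
    dT_k = np.fft.rfft(dTb) * dr                      # discrete approximation of the Fourier integral
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dr)        # k modes along the line of sight [Mpc^-1]
    return k, np.abs(dT_k) ** 2 / length

# average over 100 mock 10-Mpc segments (white noise used purely as a placeholder signal)
rng = np.random.default_rng(0)
dr = 10.0 / 2500                                      # ~4 kpc sampling over 10 comoving Mpc
k = power_1d(np.zeros(2500), dr)[0]
P_mean = np.mean([power_1d(rng.normal(0.0, 5.0e-3, 2500), dr)[1] for _ in range(100)], axis=0)
print(k[1], P_mean[1])
```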
Supplementary Fig. 8 shows the evolution of the 1D power spectrum with redshift.
The solid lines in the left and middle panels show the power spectra in the CDM model and in the WDM model with m_ WDM = 3 keV respectively, in the absence of X-rays.
As the redshift increases, the halo abundance decreases, and the small-scale fluctuations in the forest signal decrease, resulting in steeper power spectra. The small-scale power is suppressed somewhat more strongly in the WDM model, as halo formation is further delayed.
However, the redshift evolution has only a weak effect on the 1D power spectrum in the absence of X-ray heating.
The right panel of
Supplementary Fig. 8 illustrates the evolution of the 1D power spectrum in the CDM model with f_ X = 3.
In the case of strong X-ray heating, the 1D power spectrum of the 21-cm forest is dramatically suppressed with the decreasing redshift, and the dominant reason is the rapidly increasing IGM temperature.
It implies that for the purpose of constraining DM properties, the 1D power spectrum measurement at higher redshift is preferred, as long as a radio-bright source at an even higher redshift is available.
§.§ Measurement error on 1D power spectrum.
The observational uncertainties in the 21-cm forest include the thermal noise, the sample variance, the contaminating spectral structures from foreground sources in the chromatic sidelobes, and the bandpass calibration error. The bandpass calibration error depends on specific calibration strategies, and mainly affects the broadband amplitude of the continuum, so we expect that it has a negligible effect on the small-scale features we are interested in. The contaminating spectral structures from foregrounds are unlikely to affect the small-scale structures we are aiming at, as the discriminating features are located at k ≳ 3 Mpc^-1, well within the “EoR window” <cit.>. Therefore, we consider only the thermal noise of an interferometer array and the sample variance in the power spectrum measurement.
The sample variance on the 1D power spectrum is P^S=σ_P(k)/√(N_s × N_m), where σ_P(k) is the standard deviation of P(k) from N_s× N_m measurements of the 1D power spectrum at k, in which N_s is the number of 1D power spectrum measurements on different neutral patches of Δ r_z, and N_m is the number of independent modes in each k-bin from each measurement.
Using 10 high-redshift background radio sources, it is reasonable to expect about 100 independent measurements of 1D power spectra from segments of spectra, each corresponding to a comoving length of 10 Mpc.
We adopt N_s = 100, and σ_P(k) is obtained by simulating 21-cm forest signals from N_s neutral segments of 10 comoving Mpc length penetrating various environments covering grid densities from δ = -0.7 to δ = +1.5.
As for the thermal noise error,
we follow the approach taken by Ref. <cit.>, and assume that each spectrum is measured twice, or equivalently that the total integration time is divided into two halves, and that the cross-power spectrum is measured in practice in order to avoid noise bias.
Then the observing time for each measurement of the spectrum is δ t_0.5 = 0.5 δ t, and the thermal noise on the spectrum is increased by a factor of √(2).
Then the thermal noise uncertainty on the 1D power spectrum is given by <cit.>
P^N = [1/√(N_s)][λ_z^2 T_ sys/( A_ effΩ)]^2[Δ r_z/(2 Δν_zδ t_0.5)],
where Δν_z is the total observing bandwidth corresponding to Δ r_z.
A distance of 10 comoving Mpc along the line of sight corresponds to a bandwidth of Δν_z = 0.56 MHz at z = 9.
Assuming the same telescope parameters of SKA1-LOW and SKA2-LOW as those for the direct measurement, and the same observation time of δ t = 100 hr (δ t_0.5 = 50 hr) on each source,
the expected thermal noise on the 1D power spectrum of 21-cm forest is plotted in Figs. <ref> and <ref>, as well as in
Supplementary Fig. 8, with dotted lines for SKA1-LOW and dashed lines for SKA2-LOW, respectively.
The total measurement errors including the thermal noises of SKA2-LOW and sample variance are shown with the error bars in these figures.
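For reference, this error budget can be sketched as follows (not the authors' code); the telescope parameters, bandwidth, and numbers of segments and modes are the assumed values quoted in the text.

```python
import numpy as np

def P_noise(z, Aeff_over_Tsys, N_s=100, D=65.0e3, dr_z=10.0, dnu_z=0.56e6, dt_hours=100.0):
    """Thermal-noise uncertainty P^N [K^2 Mpc] on the 1D power spectrum (cross-spectrum split)."""
    lam = 0.21106 * (1.0 + z)
    theta = 1.22 * lam / D
    Omega = np.pi * (theta / 2.0) ** 2
    dt_half = 0.5 * dt_hours * 3600.0                 # each half of the split integration [s]
    return (lam ** 2 / (Aeff_over_Tsys * Omega)) ** 2 * dr_z / (2.0 * dnu_z * dt_half) / np.sqrt(N_s)

def P_sample_variance(P_measurements, N_m=1):
    """Sample variance P^S from N_s independent P(k) measurements (rows) and N_m modes per bin."""
    N_s = P_measurements.shape[0]
    return np.std(P_measurements, axis=0, ddof=1) / np.sqrt(N_s * N_m)

print("SKA1-LOW:", P_noise(9.0, 800.0), " SKA2-LOW:", P_noise(9.0, 4000.0))
```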
We have tested the extraction of the 21-cm forest 1D power spectrum by simulating mock quasar spectra with thermal noise, and calculating the 1D power spectra from the noisy spectra. The results are shown in
Supplementary Fig. 9, with upper panels from mock spectra with SKA1-LOW noises, and lower panels from mock spectra with SKA2-LOW noises, respectively. In each row, the left panel shows the results from mock spectra with both 21-cm absorption signals and thermal noises, and the right panel shows the results from mock spectra with only thermal noises. The measured noise power spectra agree well with the theoretical predictions.
It is seen that the measurement of 1D power spectrum notably improves the observability of the 21-cm forest signals as compared to the direct measurement of individual absorption lines.
With about 10 moderately bright quasars with S_ 150≳ 10 mJy at redshift around 9,
the 1D power spectrum can be measured by SKA2-LOW even if the IGM was heated as sufficiently as in the model with f_ X = 3, and can reach a high signal-to-noise ratio if f_ X≲ 1. Note that the measurement error can be further suppressed if more sources are available beyond reionization, and more power spectra can be averaged to suppress both the thermal noise and the sample variance.
Data Availability
The main data that support the results in this work are provided with this paper, and are also available at
https://doi.org/10.57760/sciencedb.08093https://doi.org/10.57760/sciencedb.08093.
Further datasets are available from the corresponding authors upon reasonable request.
Code Availability
The code 21cmFAST used for large-scale simulation is publicly available at
https://github.com/andreimesinger/21cmFASThttps://github.com/andreimesinger/21cmFAST,
the codes for simulating small-scale structures and 21-cm forest signals are available from the corresponding authors upon reasonable request,
and the GADGET code is available at https://wwwmpa.mpa-garching.mpg.de/gadgethttps://wwwmpa.mpa-garching.mpg.de/gadget.
Additional information
Correspondence and requests for materials should be addressed to Yidong Xu (email: [email protected]), Xin Zhang (email: [email protected]) or Xuelei Chen (email: [email protected]).
* We thank the anonymous referees for very constructive comments and suggestions.
We thank Yichao Li, Peng-Ju Wu, Jing-Zhao Qi, and Bin Yue for helpful discussions.
This work was supported by National Key R&D Program of China (Grant No. 2022YFF0504300),
the National Natural Science Foundation of China (Grant Nos. 11973047, 11975072, 11835009, 11988101, and 12022306),
and the National SKA Program of China (Grant Nos. 2020SKA0110401, 2020SKA0110100, 2022SKA0110200, and 2022SKA0110203).
Y.X. and X.C. also acknowledge support by the CAS grant (Grant No. ZDKYYQ20200008).
Y.W. acknowledges support by the CAS Interdisciplinary Innovation Team (Grant No. JCTD-2019-05).
R.L. acknowledges support by the CAS grant (Grant No. YSBR-062) and
the grant from K.C.Wong Education Foundation.
Author contributions
Y.S. performed most of the computation and analysis, and wrote part of the manuscript. Y.X. led the study, contributed to the simulations, and wrote the majority of the manuscript. Y.W. and W.Y. contributed to the computation of the 1D power spectrum. Y.X. and R.L. proposed the study. X.Z. and X.C. contributed to the collaboration organization, the Fisher forecasts, and the manuscript writing, and supervised the study. All authors discussed the results and commented on the manuscript.
Competing Interests
The authors declare no competing interests.
[Extended Data Figure 1]
The density (left panel), optical depth (middle panel) and brightness temperature (right panel) for a line of sight of 2 comoving Mpc in the CDM model at z = 9.
The green, yellow and red lines correspond to local overdensities of δ_0 = 0, 1 and 2, respectively.
The flux density of the background source in the right panel is assumed to be S_150=10 mJy.
[Extended Data Figure]
1-D power spectrum of a synthetic 21-cm forest spectrum in the CDM model, for a line of sight penetrating through an un-heated IGM (f_ X = 0) with different local overdensities at z = 9.
The green, yellow and red curves correspond to δ_0 = 0, 1 and 2, respectively.
The flux density of the background source is assumed to be S_150=10 mJy.
[Supplementary Figure 1]
Halo mass function for different DM particle masses at z = 9.
The red, yellow, blue and pink curves correspond to the CDM model and WDM models with m_ WDM = 10 keV, 6 keV, and 3 keV, respectively.
[Supplementary Figure 2]
Neutral hydrogen overdensity profiles inside and outside the virial radius of a halo at z = 9.
The green, yellow and red lines correspond to halo mass of 10^6 M_⊙, 10^7 M_⊙ and 10^8 M_⊙, respectively.
[Supplementary Figure 3]
Probability density distribution of the gas overdensity at z = 17.
The black solid line is the probability density distribution from the GADGET simulation with a box size of 4 h^-1 Mpc and 2×800^3 gas and DM particles.
The blue dashed line is the one derived from our hybrid approach with the same resolution as the GADGET simulation.
[Supplementary Figure 4]
Density distribution of a patch of 10 comoving Mpc at z = 9 along the line of sight, for an un-heated IGM (f_ X = 0).
The four panels, from left to right, correspond to the CDM model and the WDM models with m_ WDM = 10 keV, 6 keV and 3 keV, respectively.
[Supplementary Figure 5]
Reionization history simulated by 21cmFAST.
The black, red, yellow and green curves correspond to the average neutral fraction x̅_ HI as a function of redshift z in the CDM model and the WDM models with m_ WDM = 10 keV, 6 keV and 3 keV, respectively.
[Supplementary Figure 6]
Evolution of the global gas temperature with redshift.
The blue, green, yellow and red lines correspond to f_ X = 0, 0.1, 1 and 3, respectively.
[Supplementary Figure 7]
Temperature profiles of gas inside and outside the virial radii of halos at z = 9 with an un-heated IGM (f_ X = 0).
The green, yellow and red lines correspond to halo masses of 10^6 M_⊙, 10^7 M_⊙ and 10^8 M_⊙, respectively.
[Supplementary Figure 8]
Evolution of the 1-D power spectrum of 21-cm forest averaged over 100 measurements on segments of 10 comoving Mpc length in neutral patches along lines of sight against background sources
with S_150 = 10 mJy.
The solid lines in the left and central panels show the power spectra in the CDM model and those in the WDM model with m_ WDM = 3 keV respectively, assuming an un-heated IGM (f_ X= 0).
The solid lines in the right panel show the power spectra in the CDM model assuming an efficiently-heated IGM (f_ X = 3).
In each panel, the blue, green and yellow lines correspond to z = 7, 9 and 11, respectively.
The dotted and dashed lines with the corresponding colors are the expected thermal noises P^ N for SKA1-LOW and SKA2-LOW, respectively, and the error bars show the total measurement errors of SKA2-LOW.
[Supplementary Figure 9]
1-D cross-power spectrum computed from mock spectra simulated with thermal noises expected for SKA1-LOW (upper panels) and SKA2-LOW (lower panels), respectively.
The left plots show the results in which the mock spectra contain both 21-cm forest signal and thermal noise, and the right plots show the results from
mock spectra with only thermal noise.
Same as Fig. 3,
the 1-D power spectra are averaged over 100 measurements on segments of 10 comoving Mpc length in neutral patches along lines of sight against 10 background sources
with S_150 = 10 mJy.
The blue, green, yellow and red curves correspond to f_ X = 0, 0.1, 1 and 3, respectively.
The dotted and dashed lines are the theoretical thermal noises P^N expected for the SKA1-LOW and SKA2-LOW, respectively.
|
http://arxiv.org/abs/2307.04036v1 | 20230708195101 | Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations | [
"Tong Steven Sun",
"Yuyang Gao",
"Shubham Khaladkar",
"Sijia Liu",
"Liang Zhao",
"Young-Ho Kim",
"Sungsoo Ray Hong"
] | cs.HC | [
"cs.HC",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
George Mason University
USA
[email protected]
Emory University
USA
[email protected]
George Mason University
USA
[email protected]
Michigan State University
USA
[email protected]
Emory University
USA
[email protected]
NAVER AI Lab
Republic of Korea
[email protected]
George Mason University
USA
[email protected]
The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs.
Through our formative study (S1), however, we captured ML engineers' ambivalent perspective on the local explanation: it is a valuable and indispensable step in building CNNs, yet the diagnosis process exhausts them due to the heuristic nature of detecting vulnerability. Moreover, steering the CNNs based on the vulnerability learned from the diagnosis seemed highly challenging. To mitigate the gap, we designed , the first interactive design that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations.
helps CNN engineers to systematically search “unreasonable” local explanations and annotate the new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotation such that the model doesn't introduce similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using , participants made a more accurate and “reasonable” model than the current state-of-the-art. Also, participants found the way guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.
[500]Computing methodologies Learning settings
[500]Human-centered computing Human computer interaction (HCI)
[500]Human-centered computing Interaction paradigms
[500]Computing methodologies Machine learning
Sungsoo Ray Hong
====================
§ INTRODUCTION
As the societal impact of Computer Vision (CV) models grows <cit.>, it has become crucial to find an effective way to steer Convolutional Neural Networks (CNNs) to align their behaviors with users' mental model <cit.>.
Using Explainable AI (XAI) techniques can be the first step to steering Machine Learning (ML) models, as spotting repeating cases that “surprise” ML engineers for a similar reason can help the engineers to generalize the cases to a bigger pattern that signals the vulnerability of their model <cit.>. While XAI techniques are increasingly becoming essential for revising ML models, there are relatively fewer options available for CNNs <cit.>.
Among few, local explanation–the technique that overlays a saliency map on a single image to visualize the attentive areas that the model referred to–has been widely used by tremendous ML engineers due to its visual straightforwardness <cit.>.
By seeing the attention of a model, a user can assess whether the rationale behind the prediction is reasonable <cit.>.
Checking the reasonableness of CNN's “attention” through local explanation can improve CNN's performance in two ways.
First, checking the attention can help ML engineers to identify the bias of a dataset used in training.
In diagnosing a gender classifier, for example, if a model is attentive to contextual objects, such as “snowboard” to predict a man <cit.> or “shopping cart” to infer a woman <cit.>, it means that these contextual objects often appear with a specific gender class in the training dataset. As a result, such an imbalanced distribution of contextual objects causes the model attention to be biased towards contextual objects rather than focusing on the person in the image to classify the gender <cit.>.
Using a biased dataset can induce a model to reference contextual objects in prediction, which is defined to be unfair <cit.>.
Therefore, diagnosing CNNs using local explanation can reduce bias ingrained in a training set, leading the forthcoming model to be fairer <cit.>.
Second, detecting unfair predictions through local explanation can lead to a more robust and generalizable model with stable accuracy. The repeated occurrence of unfair predictions is related to the vulnerability of a CNN, which can be essential for defending against malicious attacks.
For example, imagine that an attacker found a gender classifier that tends to classify images with snowboards as men. In that case, the attacker can prepare counter-contextual examples that show women riding snowboards in a backdoor attack to drop the model accuracy.
Steering CNNs to fix the found vulnerable patterns can thus yield a model that provides stable accuracy performance regardless of object types appearing in future images.
In summary, if the dataset used in training is biased <cit.>, the model fails at demonstrating reasonable attention for specific predictions, which we call to be unfair predictions <cit.>.
Such unfair cases, in turn, make the CNN model vulnerable <cit.>.
Collectively, the phenomenon of a CNN shifting attention in an unreasonable way due to biased data refers to the problem of contextual bias <cit.>.
While contextual bias has become a highly crucial issue in ML and beyond <cit.>, spotting the vulnerability and steering the model is highly challenging or not even feasible <cit.> even for experienced ML engineers <cit.>.
Detecting unreasonable attention through local explanation can be “just noticeable” from human eyes, but the current solutions are predominantly a machine-centric approach with limited human involvement <cit.>.
In Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW), despite the rich body of research dedicated to better supporting ML engineers <cit.>, little effort has been made to design interfaces that can efficiently and effectively steer CNNs to mitigate contextual bias.
Further, while there exists a breadth of empirical studies focused on understanding ML engineers' practice, challenges, and design opportunities (e.g., <cit.>), it is not well understood how ML engineers apply local explanation in steering CNNs to mitigate contextual bias or what the practical challenges are.
Through this work, we aim to bridge the technical and empirical gaps we identified in the problem of contextual bias.
Specifically, we aim to create a novel interactive system that can empower ML engineers to leverage local explanations in diagnosing the vulnerability of CNNs and steer them.
To inform our design based on real practice, we conducted a formative study (S1) with five industry CNN experts who have more than 5 years of model development.
We sought to understand how they use local explanations, what the limitations of existing tools are, and how the new design can practically help their practice.
As a result, we identified 3 challenges and 3 desires that we were able to use to streamline their process in our new design.
Based on the findings, we devised , the first interactive system that realizes a direct feedback loop that connects a user and a CNN using local explanations for model steering.
First, enables a user to systematically categorize unreasonables—the images that have overlaps between the model attention and contextual objects—among images used in validation.
Next, for the categorized unreasonables, suggests the “reasonable” attention boundary that excludes contextual objects to help a user effortlessly finish the annotation task required for steering.
Third, using the user-confirmed boundary input, steers the target model by optimizing both the prediction loss and attention loss (minimizing prediction errors and shifting the model's attention towards confirmed “reasonable” areas).
Finally, helps a user to see what has been changed before and after steering.
In particular, provides the evaluation results regarding (1) how the attention quality has become reasonable and (2) how the improved model attention quality affected the model accuracy performance.
In the summative study (S2), we evaluated with 12 experienced CNN builders, asking them to revise a gender classifier across two days.
We found that using enabled every participant to boost their model accuracy performance and model attention quality compared to applying state-of-the-art techniques.
Meanwhile, after using , we also found that over 80% of the participants perceived that using would improve their capability regarding model vulnerability assessment and performance improvement.
Based on the two studies, we provide implications for design on Beyond XAI—how the future design can convert XAI-driven insights into actionable steering plans such that the AI's behavior can gradually be aligned to the human mental model.
This work offers the following contributions:
* S1: Understanding How Local Explanation Is Used in Improving CNNs: We extend our knowledge about how field practitioners apply local explanations when working on CNNs and what the challenges are. Based on the analysis, we suggest how new design can mitigate their difficulties in steering CNNs.
* Design Contribution:
We devise and instantiate , a novel, end-to-end, and interactive design that enables ML engineers to practice a systematic case-based vulnerability diagnosis and model steering.
* S2: Understanding the Effect of : Through the study with 12 experienced CNN developers, we understand how the new design can make a difference in building more accurate and robust CNNs.
* Implications for Design for Steerable AI: Based on the results of S1 and S2, we provide how the HCI and CSCW communities can contribute to converting XAI-driven insights more useful and actionable through steerable AI design.
§ RELATED WORK
In this review, we first dive deeper into understanding the problem of contextual bias and explain how unreasonable model attention can detrimentally affect CNN's model performance.
Second, we review landmark XAI-driven systems in HCI devised for diagnosing Deep Neural Networks (DNNs) and discuss how the findings can be applied to resolve the problem of contextual bias through an interactive system.
Next, we cover how the recent advance in explanation-guided steering techniques can be applied to implement an interactive and integrated model steering environment.
Then we highlight the remained technical and empirical challenges in HCI.
When CNNs are not trained properly with generalized and representative datasets, various kinds of bias can arise and introduce weaknesses in the model performance <cit.>.
Imagine that one engineer is preparing a set of images for training a dog detection model.
In preparation of data, 50% of the images would show a dog to balance positive and negative cases <cit.>.
The problem can start when some contextual objects, such as a ball, appear more frequently in positive cases than negative <cit.>.
Using such a biased dataset, a model would establish a “spurious” correlation between a dog and a ball <cit.>.
In such a case, the model's attention visualized through local explanation is on the ball rather than a dog <cit.>.
Consequently, when bringing an image that shows a ball, the model may likely say that it detected a dog by seeing a ball regardless of a dog appearing in the image <cit.>.
As such, this phenomenon of “contextual bias” refers to the case where a model's attention is shifting to contextual objects which are not directly relevant to the model's goal <cit.>.
Consequently, using this potential vulnerability, an attacker may be able to drastically decrease model accuracy by showing the ball images without dogs <cit.>.
Furthermore, a CNN shifting its focus to a contextual object raises a fairness issue <cit.>.
While model accuracy is accepted as a “gold standard” for evaluation in modern ML research, there is growing concern that putting insufficient emphasis on the quality of model explanation can leave us with technical debt <cit.>.
This aspect of a CNN's blind decision made by referring to contextual objects has become crucial in the Fairness, Accountability, and Transparency (FAccT) community and beyond <cit.>.
In handling contextual bias, several studies outside of HCI commonly apply mathematical approaches rather than incorporating human input <cit.>.
For example, Singh et al. used Class Activation Maps as a “weak” automatic attention annotation <cit.>.
Feature augmentation <cit.> is another technique proposed for de-biasing using disentangled representation.
Hirota et al. provided a way to analyze skewed data distributions to attain unbiased human-like reasoning <cit.>.
While each method has its pros and cons, there has been no ideal breakthrough.
In recent years, ML communities' approaches are gradually shifting towards involving more human inputs <cit.>.
Aligning with this direction, local explanations, such as Grad-CAM <cit.>, started to catch attention as an XAI technique that can mitigate contextual bias. It enables a user to spot the unreasonable model attention at a glance, and perhaps this aspect makes the technique the most widely used XAI technique for investigating CNNs <cit.>.
Meanwhile, in HCI and CSCW, despite the wide range of novel systems proposed for helping ML engineers <cit.>, we didn't recognize a system directly focusing on handling contextual bias.
When we scope the approaches related to Deep Neural Networks, we found the two perspectives useful in handling contextual bias through local explanation.
The first takeaway is that a bottom-up approach—the design that helps users understand the vulnerable patterns by exploring specific cases through local explanation <cit.>—can provide a more straightforward and intuitive flow than a top-down approach which aims at helping a user to understand global structure or rules to explain how DNNs make a prediction <cit.>.
Prospector <cit.> and What-if tool <cit.> belong to the bottom-up design that can help ML engineers to see the instance-level of prediction cases to gradually realize a set of patterns for making prediction <cit.>.
On the other hand, top-down approaches include XAI techniques and visual analytic components to help a user to understand the “landscape” of prediction rules, structure, and decision boundaries.
For instance, Squares <cit.> and Blocks <cit.> are some of the earliest designs that explain how DNNs predict the multi-class problem.
MLCube Explorer <cit.>, TwoRavens <cit.>, and Visus <cit.> present the model comparison feature, helping ML engineers more easily decide the model they would like to deploy.
ActiVis <cit.>, RuleMatrix <cit.>,
CNN explore <cit.>, ExplainExplorer <cit.>, DeepEyes <cit.>, RNNVis <cit.>, NeuroCartography <cit.>, and Dodrio <cit.> fall into visual analytic approaches.
The second takeaway is that by including every feature required for assessing and steering in a single, end-to-end systems can reduce the cost of switching the context between the diagnosis to the refinement <cit.>.
EnsembleMatrix <cit.>, ModelTracker <cit.>, Tenserflow Graph Visualizer <cit.>, and explAIner <cit.> present end-to-end environments that combine diagnosis and model refinement.
This review concludes that local explanations can help a user to easily diagnose the model vulnerability for easing contextual bias in a bottom-up fashion. Meanwhile, including both diagnosis and steering in a single system can further help ML engineers. In realizing this design goal, the first technical challenge is understanding how to steer a CNN upon finding the unreasonable model attention.
In recent years, new techniques have enabled steering the AI's behavior using human input through local explanation.
For example, Attention Branch Network <cit.> is a pioneering method that allows humans to directly adjust the boundary of model attention.
More advanced techniques, such as GRADIA <cit.>, RES <cit.>, and GNES <cit.> have been proposed.
While they can be potentially effective, they have never surfaced or been used by ML engineers through interactive systems.
The second challenge is the lack of studies aimed at understanding how ML engineers practice and perceive local explanations in their CNN building workflow.
There has been a series of empirical studies aimed at learning the workflow of ML engineers and data scientists. The directions include understanding how they use XAI tools <cit.>, how ML beginners learn XAI tools to work on their model building <cit.>, how ML experts view the automated AI <cit.>, how ML experts collaborate in using XAI tools, and beyond <cit.>.
Despite the popularity of local explanations, we didn't identify the work specifically focusing on understanding ML engineers' current practices and challenges.
So, we believe that an interactive system is essential to bridge the gap between computational techniques and human-centered design to diagnose and resolve contextual bias.
Since diagnosing and steering a CNN is a deep cognitive process that requires dense and repetitive interaction with a system, conducting a formative study in advance would higher the chance of yielding a practically useful design <cit.>.
§ STUDY 1: FORMATIVE STUDY
Through the reviews, we defined our specific goal of designing an interactive system that can mitigate contextual bias embedded in CNNs.
In doing so, we learned that local explanation provided through bottom-up fashion could help a user to efficiently and effectively examine CNN's vulnerable patterns and steers it.
To situate our design considerations based on real practice, we conduct a formative study with industry practitioners.
§.§ Method
We conducted open-ended, semi-structured interviews with professional CNN developers.
In recruiting them, we first provided a flyer to a company bulletin and communicated with industry acquaintances who use local explanations.
As a result, we recruited five experts with an average of over 5 years of experience building state-of-the-art CNN solutions in their field (see Table <ref>).
In shaping the detail of the interview, we strictly followed the interview methodology in HCI <cit.>.
First, in scoping our directions of inquiry, we motivated participants to focus on sharing their lived experiences, specifically about their practice and perception of local explanation but not discouraging them from connecting their story about local explanation with other experiences.
Consequently, in designing our questions (shown in Appendix A), we started from their general background and workflow in the early phase as follows.
In particular, we asked about their (1) roles and areas of expertise, the (2) CNNs they build, and (3) their development settings and tool belts.
Then we moved to their local-explanation-related questions aiming to learn their (4) workflows, (5) reasons-of-use, (6) challenges in using local explanation, and (7) their wish lists.
Second, to construct an appropriate dialogue with our participants, two authors—who completed HCI-centered training in their PhDs and currently working on a specialized domain of Human-AI Interaction and Deep Learning in academia and industry, respectively—participated in every interview.
One author proceeded with the interview with questions, while the second author asked follow-up questions to gain more specific insights.
In our interview, we collected 4 hours and 31 minutes of video. On average, each interview lasted 54 minutes, ranging from 37 minutes to 67 minutes in total.
In our analysis, we used a qualitative coding process <cit.> which entails two authors' coding, diagramming, and consensus-based theme generation.
First, the two authors each created, using the interview records, initial sets of codes, and memos <cit.>.
Second, they shared the codes and analyzed the emerging commonalities and discrepancies related to their perceived challenges and desires. For the matters of discrepancy, the two authors discussed the reasons for the disagreement and decided whether each matter could be agreed upon or annexed into existing commonalities.
Finally, after thinking about others' code choices, they reviewed all our coded text, quotes, and memos to tweak and derive the final structure.
§.§ Results
From every participant, we heard strong reasons why they apply local explanations in their practice.
The overarching reason they apply explanation in their workflow is predominantly related to retaining the “generalizability” of their model.
The generalizability explains the degree to which the model would “shake” when it sees unexpected, different cases they didn't see in the past.
P5 mentioned: “we strongly believe that that's the way to go, those sorts of visualizations are clearly the path towards understanding how to improve the model. I think it's a required envision. If the mistake is turned out to be unreasonable, I'm going to explore my data and see why it's not robust enough.”
P4 shared his interesting observation that accurate prediction and reasonable attention might be somewhat correlated.
He believed it was more crucial for a model to focus on the right regions to make it robust to unexpected cases than to optimize performance on the test set, as we could not prepare a perfect dataset that represents every case equally.
All participants shared their experiences about the cases of spotting unreasonable attention in checking the vulnerability to remove the model's weakness.
P3 mentioned that he uses local explanation in the model comparison task mainly because it can be a good indicator of how robust the model can be:
“I see model behaves very differently task-by-task. ResNet works very well in one task, and VGG works well in a different task. I have no idea why. And the local explanation tells me why.”
While attaining a CNN's generalizability has been discussed in previous literature, our findings extend the existing in two directions.
First, we identified the three practical challenges they are encountering when applying local explanation in their workflow every day.
Second, we also identified the three future desires that the current local explanation-driven techniques cannot realize but could be achieved with future solutions.
§.§.§ Challenges
C1. Iterative and Exhaustive Diagnosis:
In diagnosing their model through local explanation, participants expressed the process as “nothing is given”.
In detecting vulnerable patterns using local explanation, participants seemed to have proactive and iterative shaping of their assumption and collecting the cases.
Generally, participants went for several rounds of iterative target image selection and local explanation generation.
This generation was made based on their dense inductive and deductive reasoning.
The aspect of iterative case-based reasoning seemed to entail nontrivial labor, which exhausts ML engineers.
P1 mentioned: “I wish I could check the (saliency) maps for every case. But coding to layout multiple maps takes some effort and does not become feasible as the dataset gets bigger. In the end, I normally have to compromise, just checking instances in an inaccurate category if I'm lucky, or even fewer.”
P3 developed a multi-classifier that has 4,000 to 5,000 classes. He mentioned that the required mental effort for detecting vulnerable attention grows exponentially as the number of classes increases. In the end, he can only consider a few “major” classes.
Many of our participants remarked that their model vulnerability analysis using local explanation is mostly a group effort, and sharing insights with colleagues also adds up even more time.
For P2's case, his group made a web-based tool where the team member can upload image groups and show the local explanation results for discussion due to the complexity of coding and positioning on a screen.
C2. Ad-Hoc Diagnosis Leads to Uncertainty:
The next challenge that our participants mentioned was the uncertainty they had to cope with in determining the vulnerable patterns.
They seemed to suffer from two types of vulnerability.
Since finding the vulnerable patterns stems from their intuition, our participants mentioned that there is no guarantee that their selection covers every major and minor vulnerability type.
In addition, upon spotting the local explanations that gaze at unreasonable objects, they had to decide if cases sow merely noise or the signal that leads to a vulnerable pattern.
Often, our participants' vulnerability determination process was done on their “gut feeling”, which made them perceive the process as heuristic and ad-hoc.
P2 mentioned: “I feel like showing the pros and cons of model's attention using local explanation is cherry picking, in many cases. Even if someone says the quality of model attention is good or bad with some examples, there is no ground one can say the cases represent a real pattern or merely subtle noise that won't likely happen in the future.”
P3 also shared similar difficulties that increasing the number of classes could result in more bad-attention cases. Even though these problematic cases were identified, they might still reoccur in the future.
P4 said that the hardship in verifying the severity of the vulnerability is closely related to the fact that there is no measure that we can rely on to see the “impact of the detected cases” from the perspective of the whole dataset.
There was a minor opinion that their feeling of uncertainty in the process was connected to the doubt about the diagnosis results.
For instance, P1 mentioned that he doesn't believe he can completely remove the bias no matter how much effort he may put in or what tools he may use.
C3. Hard to Steer as Intended: Every participant agreed that changing the model's future behavior from learned insights is challenging or often not feasible.
P5 mentioned that the insights were not actually insightful as they are often unactionable:
“Surprisingly, it wasn't really insightful when we looked at the mistakes our model made, and the saliency map was totally unreasonable. It was like it doesn't know what to do here, something is missing, architectural leap or something I don't know, we didn't quite solve a lot of the failure cases.”
He also shared his “dream tool” idea for instant attention adjustment: a drawing application with which he could manually guide CNNs to focus on previously missed features of images and retrain them through backpropagation.
P1 mentioned his current struggle to fix a model by fortifying the training set, such as adding more data to counterbalance the failure class. He still looked for alternative methods as the performance was not promising.
§.§.§ Desires
D1. The Way to Interact: Beyond Command Line:
Some mentioned that local explanation could not fully realize their potential with command line interfaces as the way to create them requires some work.
This aspect is connected to C1; participants feel making multiple queries for selecting images and examining model attention can become arduous.
From the interaction design's perspective, shifting the command line-based interface to a directly manipulatable GUI can streamline the process.
P1 remarked: “I feel like a complex task like this (vulnerability diagnosis), we would mostly benefit from GUI rather than a tool with a command line. It takes too long to create saliency maps. Showing the maps with different selection criteria and sorting can be super helpful.”
By lowering the cost of creating local explanations, participants could more effectively examine a bigger volume of model attention than the current design.
Some also mentioned the necessity of reorganizing results after each search, which was not easy with the current tools. P4 always looked for failure cases manually but struggled when there were too many cases. He suggested some summarization or pre-filtering features that prioritize interesting cases.
This finding indicates it is worth considering designing an interactive analytic system that enables a user to easily formulate the query and see the results.
D2. Evaluating Model: Model Accuracy and Beyond:
We had multiple chances to hear participants' voices regarding what they care about when it comes to evaluating their models.
In particular, we found that our participants shared the consensus regarding the model accuracy as a gold standard metric that should not be sacrificed even though the purpose of revision is not for boosting model accuracy (e.g., mitigating contextual bias).
For instance,
P4 was very curious to see whether improving model attention could improve model accuracy, and if the model were not improved, he would care less about attention quality improvement.
P5 also mentioned the tension between fairness and accuracy in model development: “I had much of a concern for fairness in my practice, it was more the kind of thing where prioritizing fairness connects to increasing failure case. This would result in my client making less money. If it was a courtroom, there's a much stronger debate here. But it's very serious in industrial cases that fairness is important, but the accuracy is still the king.”
At the same time, they shared their concern that the way the current tools provide the model accuracy is not enough to understand how accurate and how reasonable their models are.
P2 found it very difficult to check the saliency maps for accurate cases, and he felt uncomfortable making decisions solely based on overlooking accurate cases since it could penalize model generalizability. He was less focused on the test set performance than on generalizability in the long run.
This internal tension helped us realize the delicate view of the way ML experts see model accuracy. It's still the “King” that should not be compromised, but they may still need more than that to make their model generalizable and trustworthy enough.
D3. A Balance between “Pain” and “Gain”:
One aspect we learned from our participants is that ML engineers are generally more conservative about testing a new feature using a human-in-the-loop-driven approach than we thought due to its high cost.
Regarding the idea of using human input for steering CNNs, some participants mentioned that the direction has potential but would only work if the workload is manageable.
For instance, P3 mentioned that he might not likely use the new tool if the expected effort is more than what they are currently investing in for the model diagnosis.
Not surprisingly, many participants mentioned the difficulties in eliciting data from in-house annotators or workers in crowdsourcing platforms.
P5 said: “The workflow of human-in-the-loop to adjust attention using human help, no one would say it's a bad idea that you could include humans and get more data and improve it. This is an obvious virtuous aspect, but it's not like you just sign up for data bricks, and you're done. Getting human labels would probably need a little bit of training. You don't want that to be an expense to ML engineers.”
This aspect helped us realize that, for a practical tool to be readily adopted, it must automate the bulk of the work via intelligent automation and minimize the need for human outsourcing.
§.§ Design Considerations
While we found that the local explanation serves as an indispensable tool for diagnosing the vulnerability of participants' data and model, they suffered in each stage of C1: detecting cases that signal vulnerable patterns, C2: verifying them to be “real”, and C3: steering.
Meanwhile, we also found they desire to D1: have an interactive and directly manipulatable design that can cut down their effort for writing lots of queries
and parameters, D2: use the product that can improve the model accuracy while also improving the quality of model attention to be reasonable, and D3: enable users to achieve the new feature with a reasonable size of additional labor.
As D1 suggests, we found why an interactive interface would be well appreciated by ML engineers, especially when completing their task requires deep thinking and iterative interaction with their tools.
In designing the system, we further synthesize our findings and establish the design considerations as shown below. Table <ref> also shows how the participants ("PID") support the identified challenges ("C"), desires ("D"), and design considerations ("DC").
* DC1. Semantic local explanation browser:
Seeing the results of local explanations for finding the cases that signal vulnerable patterns is the first stage to mitigating contextual bias.
In this stage, providing a semantic browser—that users can see, rank, and select the dominant semantic object types observed within the model's area of attention for every image—could reduce ML engineers' uncertain feelings and save them time.
In building a dog detector, this feature may enable a user query such as “find every image attentive on treat” or “rank every object type by its occurrence in a dataset.”
Descriptive statistics, such as how frequently the object types appear, can help users understand the degree to which the object grabs the model's attention.
DC1 addresses C1, C2, and D2 (based on all 5 participants).
* DC2. Labor-efficient selection of “unreasonables” and adjustment of their attention boundaries:
Using the browser, users can diagnose a CNN by finding the cases that show unreasonable attention (“unreasonables”, hereinafter).
Then the users would annotate the areas that can make the annotation reasonable.
The system would need to provide this annotation with a lightweight interaction cost.
DC2 is related to D3 (based on 2 participants: P3 and P5).
* DC3. The fine-tuning mechanism that can boost both model accuracy and model attention quality:
One of the most evident consensuses among the participants was their difficulties in steering CNNs.
Therefore, the tool must help users clearly understand how the quality of the CNN's attention, visualized through local explanations, has changed based on the input they provided.
While doing so, the tool must not compromise the model's accuracy performance.
DC3 is derived from C3 (based on 2 participants: P1 and P5).
* DC4. Evaluation results that show what has been changed:
The last stage of the workflow would be to help users understand how their attempts made a difference.
In showing the differences, providing views of the change in model prediction accuracy and in model attention quality, plus a combined view that explains how the change in attention relates to accuracy, would facilitate users' understanding of the impact.
DC4 is derived from C3 and D2 (based on 4 participants: P1, P2, P4, and P5).
§ DEEPFUSE
Based on the DCs identified in S1, we designed DeepFuse.
DeepFuse is the first interactive system designed and built to support a CNN engineer's contextual-bias-related tasks based on their practical needs.
The early part of DeepFuse's workflow is defined based on what we learned from ML engineers:
First, a user prepares the base CNN model and datasets to be used for diagnosis (the "loading model" and "loading dataset" tabs).
Second, a user collects the cases where the model's gaze is on unreasonable objects by browsing local explanation results (the "assessing attention quality" tab in DeepFuse).
The rest of the stages follow the recent literature that proposes model steering through local explanation <cit.>.
Third, for the collected "unreasonables", a user corrects the attention boundary to shift the CNN's future gaze away from contextual objects and starts to fine-tune the base CNN model with the annotations (the "adjusting attention" tab in DeepFuse).
Finally, a user sees how the approaches made the CNN different (the "evaluation" tab in DeepFuse).
§.§ Interacting with DeepFuse
Consider a scenario for Sarah, an ML engineer who has trained a dog classifier built based on a CNN architecture.
She found that the model's accuracy was not sufficient for deployment and found a few cases where she could not understand why it failed.
She decided to examine her model using local explanations.
First, she created local explanations for a few accurate and inaccurate cases for multiple rounds to reason what could be wrong.
After her search, she found out the model's focus sometimes moves to some specific contextual objects, such as balls and treats.
To study if the cases would repeat, she decided to invest her time in generating local explanations for all the images and checking them serially. She put some effort into coding for loading and saving files (models, images, and statistics).
For the dubious cases, she decided to collect similar datasets for further testing (C1). Along the path, she started to wonder if the contextual object types she identified were comprehensive. She decided to examine other object types (C2).
Upon confirming every case and object type that signals the vulnerability of her model, she will need to find a way to steer the model's behavior (C3).
Using DeepFuse, her workflow can make better progress with less effort.
First, she uploads the base CNN and the image data she will use for diagnosis.
Leveraging the automatic local explanation object aggregation feature, DeepFuse will provide a list of object types that her CNN is gazing at, such as dogs, cats, balls, treats, and other object types, with examples.
She specifies that she wants to see every case that is attentive to objects other than "dogs".
Based on her specification, local explanation results are grouped based on object type categories (DC1).
She can quickly skim through each category (e.g., dogs, balls, treats, and cats) and confirm dubious local explanations as “unreasonables” in a few clicks.
DeepFuse will suggest automatically drawn "reasonable" boundaries for the unreasonables and ask Sarah to confirm or manually refine them (DC2).
Upon her confirmation, DeepFuse will fine-tune the base model such that it won't make the same mistakes (DC3).
After the fine-tuning, Sarah can check how the model's performance has changed in terms of model accuracy and model attention quality (DC4).
§.§ Workflow and System Components
DeepFuse supports a stage-based workflow for inspecting the model. The global navigation bar (see Fig. <ref>) on top of the screen provides access to each stage.
§.§.§ Loading Model and Data
DeepFuse allows users to upload their base CNN models and datasets.
In designing the feature for model upload, we considered compatibility with one of the most widely used Python libraries for building CNNs, PyTorch <cit.>.
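For concreteness, the following is a minimal sketch of the kind of checkpoint loading the "loading model" tab performs; the checkpoint path, class count, and the assumption that the upload is a ResNet-18 state dict are illustrative rather than DeepFuse's exact interface.

import torch
from torchvision.models import resnet18

def load_base_model(checkpoint_path: str, num_classes: int = 2):
    # rebuild the architecture, then load the user's uploaded weights
    model = resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()
    return model

base_model = load_base_model("uploads/base_cnn.pt")  # hypothetical upload path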
Next, the “loading dataset” tab helps a user to upload the image datasets for diagnosis (a validation set, hereinafter) and a final evaluation after the fine-tuning (a test set, hereinafter).
In particular, the validation set is used for diagnosing contextual bias in the next stage. Using the test set in the last stage, a user can evaluate the final model by comparing before and after treatment and more.
§.§.§ Attention Quality Assessment
This stage has two goals.
First, helping a user understand which semantic object types are causing contextual bias, and to what degree (DC1).
Second, helping a user categorize every image as reasonable or unreasonable (i.e., whether its local explanation avoids or focuses on contextual bias) (DC2), which will be used in the next stage.
For both goals, the core mission is to significantly cut down a user's labor compared to their current practice.
In achieving the first goal, DeepFuse provides a list of semantic object types observed in the model's focused area, ordered by how frequently they appear.
In detecting the semantic object types, DeepFuse adopts a pre-trained object detection model <cit.> that is capable of detecting the 80 object types defined in the Microsoft COCO dataset <cit.> (e.g., "person", "bicycle", "dog", etc.).
A user will decide if the semantic object types are relevant or contextual to a CNN's goal.
In a gender classification problem, for example, the relevant object type can be a human face, while other object types, such as neckties or bicycles, can be contextual.
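A minimal sketch of this object-type inventory step is shown below, using torchvision's COCO-pretrained Mask R-CNN; the abbreviated COCO_NAMES lookup and the score threshold are our own simplifications, not DeepFuse's exact implementation.

import torch
from collections import Counter
from torchvision.models.detection import maskrcnn_resnet50_fpn

COCO_NAMES = {1: "person", 2: "bicycle", 17: "cat", 18: "dog"}  # abbreviated lookup

detector = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def object_type_inventory(images, score_thresh=0.7):
    # count which COCO object types appear across a list of [C, H, W] float tensors
    counts = Counter()
    with torch.no_grad():
        outputs = detector(images)
    for out in outputs:
        for label, score in zip(out["labels"], out["scores"]):
            if score >= score_thresh:
                counts[COCO_NAMES.get(int(label), f"class_{int(label)}")] += 1
    return counts.most_common()  # e.g., [("person", 412), ("dog", 377), ...]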
Second, based on the relevant object types specified by a user, DeepFuse intelligently suggests whether the local explanations of the images in a validation set are reasonable or unreasonable (see Fig. <ref>; green borders suggest the local explanations are reasonable, while yellow borders suggest they are unreasonable).
The suggestions can reduce a user's time for assessing the quality of local explanations.
In positioning the suggestion results, DeepFuse separates them into two sides: inaccurate images on the left and accurate images on the right.
This layout helps determine which semantic object contributes to accurate/inaccurate records by how much.
When a user encounters a suggestion that is not right, (s)he can flip the suggestion by clicking the image, the semantic object group, or all of the accurate or inaccurate images at once.
Finally, DeepFuse provides three options for visualizing local explanation results: color-scale, gray-scale, or polygon mask (see Fig. <ref>-C).
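The sketch below illustrates the suggestion logic described above under simplifying assumptions: an image is flagged reasonable when the object type dominating the attended region belongs to the user's relevant set, and records are grouped by prediction correctness for the two-sided layout. The dominant_attended_object helper is hypothetical.

def suggest_and_group(records, relevant_types={"person"}, cam_thresh=0.5):
    # records: dicts with "cam" (Grad-CAM map), "detections", "pred", "label"
    groups = {("accurate", True): [], ("accurate", False): [],
              ("inaccurate", True): [], ("inaccurate", False): []}
    for rec in records:
        # hypothetical helper: the detected object type overlapping most with the attended region
        dominant = dominant_attended_object(rec["cam"] >= cam_thresh, rec["detections"])
        reasonable = dominant in relevant_types          # green vs. yellow border suggestion
        side = "accurate" if rec["pred"] == rec["label"] else "inaccurate"
        groups[(side, reasonable)].append(rec)
    return groups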
§.§.§ Adjusting Attention
To support the later part of DC2 (correcting the attention boundaries of images categorized as unreasonables), DeepFuse needs an efficient annotation experience, especially because boundary drawing is an expensive annotation task.
In doing so, DeepFuse shows a side-by-side comparison between the current model attention on the left and the suggested attention boundaries on the right-hand side (see Fig. <ref>).
The suggested boundaries are made based on the Mask R-CNN model <cit.> we applied in 4.2.1.
If the suggested boundaries are not enough, a user can redraw manually (see the drawing panel in Fig. <ref>).
In checking the boundary suggestions, a user can separately examine the images from (1) unreasonables that are accurate (i.e., the images that were accurately predicted based on the wrong reasons, or by “luck”) and (2) unreasonables that are inaccurate (i.e., the image group that made an inaccurate prediction potentially because of seeing wrong contextual objects <cit.>).
Upon finishing the corrections for the unreasonables, DeepFuse becomes ready for fine-tuning using the adjusted inputs.
§.§.§ Fine-Tuning
This stage is the key to maintaining an overall effective pipeline.
Based on DC3, we implemented a fine-tuning mechanism that treats attention adjustments as new guidance for revising the model and makes the process of using boundary-adjustment input straightforward.
The existing approach to optimizing a CNN’s model performance in the fine-tuning process is to minimize only the prediction loss—an error measure between model predictions and actual values.
To boost both the model performance and the interpretability of the black-box CNN model, we adopted Explanation-guided Learning framework <cit.> where the model accuracy performance and local explanation quality are jointly optimized with the prediction loss and attention loss.
Our intention for adding the attention loss during model training is based on the assumption that the model can learn to pay attention to the right semantic object types for the prediction tasks, thus naturally enhancing both the explainability and generalizability.
While the techniques in Explanation-guided Learning are in their early stage, some studies started to validate how applying both terms of explanation loss and prediction loss can benefit DNN performance using text data <cit.>, image data <cit.>, and graph-structured data <cit.>.
However, the techniques in Explanation-guided Learning have not been tested by human participants in their workflow.
Our aim in building is to understand how “real” human participants can interact with a system to leverage the techniques and if we can find evidence that using the techniques can practically help users in mitigating contextual bias in their CNN revision workflow.
For the implementation of the explanation objective for , we adopted the most recent approach called RES <cit.>, which proposed a generic robust framework for learning based on a user's boundary adjustment under the assumptions that the human annotation labels can be (1) not exactly accurate in drawing the boundary, (2) can be incomplete in the region, and (3) inconsistent with the distribution of the model explanation (i.e., binary annotation vs. the boundary with alpha channel).
Consequently, in the benchmark test, RES outperformed GRADIA <cit.> and HAICS <cit.> in leveraging human annotation boundaries and was robust against the aforementioned annotation noises <cit.>.
In implementing DeepFuse, we utilized two methods from the RES GitHub codebase[Available at: https://github.com/YuyangGao/RES]: "Baseline", the conventional state-of-the-art fine-tuning mechanism that applies a prediction loss but not an explanation loss.
This serves as a baseline to help a user understand how using DeepFuse can make a difference in model accuracy and model explanation quality.
Next, we implemented “RES-G” as the experimental attention steering mechanism that jointly optimizes the prediction loss and explanation loss.
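Conceptually, the two modes differ only in the training objective, as in the simplified sketch below; the MSE attention term stands in for RES's more robust objective, which additionally handles inaccurate, incomplete, and inconsistent boundary annotations.

import torch.nn.functional as F

def fine_tune_step(model, cam_fn, images, labels, human_masks, optimizer,
                   use_attention=True, lam=1.0):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)         # prediction loss ("Baseline")
    if use_attention:                                      # experimental mode ("RES-G"-like)
        cams = cam_fn(images)                              # attention maps computed with gradients retained
        loss = loss + lam * F.mse_loss(cams, human_masks)  # pull attention toward user boundaries
    loss.backward()
    optimizer.step()
    return loss.item()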
Upon finishing their boundary adjustment in DeepFuse, a user clicks fine-tune to activate the fine-tuning process.
Typically, our fine-tuning mechanism takes at least a few hours, and it is not possible to realize a real-time system yet.
In the system's back end, we built a schedule queue that receives the boundary input one by one. The inputs will be fine-tuned in order by a system administrator.
§.§.§ Evaluation Dashboard
Model evaluation is the last stage, where a user can check how the input has changed a model's varying performances.
Based on DC4, we designed this stage to help a user understand not only how model accuracy has been changed but also how the quality of local explanation has been shifted.
Most importantly, this stage attempts to facilitate a user's understanding of how accurate or inaccurate records relate to reasonable or unreasonable local explanations.
In doing so, we adopted the Reasonability Matrix <cit.>, an evaluative matrix that explains the model's performance using the following four groups:
* Reasonable Accurate: The group that has accurately predicted records with reasonable attention. The bigger the group is, the more generalizable the model is.
* Unreasonable Accurate: The group that has accurate records but is based on unreasonable attention. Records in this group can be considered “lucky guess”. Reducing this group can increase model generalizability.
* Reasonable Inaccurate: The group has inaccurate records, but the attention is on the right area.
* Unreasonable Inaccurate: The group that has inaccurate records whose attention is also on unreasonable objects. This group can be considered an opportunity group, as shifting the gaze to reasonable objects can flip the prediction from inaccurate to accurate.
To generate a Reasonability Matrix, one must assess whether the local explanation results are reasonable or unreasonable.
DeepFuse provides an automatic annotation feature to avoid relying on human annotation (as D3 suggests).
In particular, a user can select from 3 options.
Strict: assess local explanation as reasonable if the attention of a record includes only relevant objects and does not contain irrelevant objects;
Moderate: assess reasonable if the majority portion of an image contains relevant objects while the minor portion includes irrelevant objects; Relaxed: assess reasonable if the attentive area has any overlap with relevant objects.
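A minimal sketch of the automatic assessment and the resulting matrix is given below; attention and relevant_mask are boolean pixel masks (thresholded Grad-CAM and the union of relevant-object regions), and the 0.5 cutoff for the moderate option is our assumption rather than the system's exact value.

import numpy as np
from collections import Counter

def is_reasonable(attention, relevant_mask, option="moderate"):
    attended = attention.sum()
    if attended == 0:
        return False
    inside = np.logical_and(attention, relevant_mask).sum() / attended
    if option == "strict":
        return inside == 1.0      # attention only on relevant objects
    if option == "moderate":
        return inside >= 0.5      # majority of attention on relevant objects
    return inside > 0.0           # "relaxed": any overlap with relevant objects counts

def reasonability_matrix(records, option="moderate"):
    cells = Counter()
    for r in records:
        acc = "accurate" if r["pred"] == r["label"] else "inaccurate"
        rea = "reasonable" if is_reasonable(r["attention"], r["relevant_mask"], option) else "unreasonable"
        cells[(rea, acc)] += 1
    return cells  # counts for the four groups described above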
After a user selects the Reasonability Matrix creation option, (s)he can start the evaluation.
To help a user understand what has been changed, DeepFuse prepares the three conditions as follows:
* M: the initial model before fine-tuning.
* M_base: the state-of-the-art fine-tuned model using M without applying attention input.
* M_exp: the fine-tuned model using M that uses attention input.
Using the three conditions, DeepFuse provides two pairwise comparisons: (1) before vs. after, comparing M and M_exp, and (2) state-of-the-art vs. our approach, comparing M_base and M_exp.
For each pairwise model evaluation, there are four types of analytic views with which users can conduct in-depth evaluations.
(1) Overall interpretation: to help a user directly understand how model accuracy and attention quality have changed, the view presents a Reasonability Matrix showing percentage changes in the 4 sub-groups (see the top-left sub-figure of Fig. <ref>).
The view also shows numeric comparisons to track the overall model accuracy and attention quality changes (see the bottom-left sub-figure of Fig. <ref>).
Finally, a user can see the generated performance report and an attention explorer module to derive insights about the effectiveness of the model conditions (e.g., whether the “unreasonable inaccurate” cases have been reduced by attention steering regarding the test image data).
(2) Accuracy-related analysis: this view provides accurate/inaccurate record bar plots grouped by common objects, helping users understand which semantic object types contribute to accurate or inaccurate records.
(3) Local explanation quality analysis: In this view, we present IoU distribution charts.
IoU (Intersection over Union) helps us to understand the overlap between the model's focused gaze and relevant objects. IoU of 0% means the gaze is entirely located on contextual objects, whereas 100% means the gaze is only on relevant objects.
The higher the IoU score, the better an attention area aligns with the ground truth.
In this view, we further help users browse cases based on IoU values (e.g., show images where IoU is between 40% and 60%).
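The IoU computation behind this view reduces to the sketch below, where cam is the Grad-CAM map and gt_mask is the ground-truth mask of the user-defined relevant object ("person" in our study); the binarization threshold is illustrative.

import numpy as np

def attention_iou(cam, gt_mask, cam_thresh=0.5):
    attention = cam >= cam_thresh
    union = np.logical_or(attention, gt_mask).sum()
    return 0.0 if union == 0 else np.logical_and(attention, gt_mask).sum() / union

# browsing by IoU range, e.g., images whose IoU falls between 40% and 60%:
# subset = [r for r in records if 0.4 <= attention_iou(r["cam"], r["gt_mask"]) <= 0.6]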
(4) Record-wise attention comparison: the right screen in Fig. <ref> contains a comprehensive comparison of models’ local explanations, side-by-side for all conditions. This design helps a user quickly recognize attention quality changes among different conditions.
§.§ Implementation
DeepFuse is a browser-based user interface with a lightweight back end built with Python Flask, fully compatible with widely used ML and visualization libraries in Python (e.g., PyTorch, Grad-CAM, OpenCV, Matplotlib, etc.). The front end was developed using HTML, CSS, JavaScript, and D3.js to create dynamic and interactive elements (such as the attention-drawing feature) that communicate between users and models. More detailed technical settings and a live demo of DeepFuse can be found in our GitHub repository[Available at: https://github.com/TongStevenSun/DeepFuse].
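The division of labor between the browser front end and the Python back end follows the common pattern sketched below; the endpoint name and the in-memory explanation_store are hypothetical, not DeepFuse's actual API.

from flask import Flask, jsonify, request

app = Flask(__name__)
explanation_store = {}  # hypothetical: precomputed Grad-CAM records grouped by detected object type

@app.route("/explanations", methods=["GET"])
def explanations():
    # the D3.js front end requests local-explanation records for one object type
    object_type = request.args.get("object", "person")
    return jsonify({"object": object_type, "records": explanation_store.get(object_type, [])})

if __name__ == "__main__":
    app.run(debug=True)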
§ STUDY 2: SUMMATIVE STUDY
The core tasks integrated into DeepFuse, (1) diagnosing a CNN's vulnerable patterns through local explanation and (2) making the found patterns actionable through direct model attention adjustment, have not been introduced in previous work.
Further, our “system” has multiple sub-pieces connected together into a “single working whole” <cit.> to streamline the target task.
Due to these characteristics, we avoid a comparative or controlled experimental study with a clear baseline, as in much previous HCI work <cit.>.
Instead, we choose to derive our directions of inquiry by defining research questions (RQs), then triangulate the way we collect data in multiple ways to answer the questions.
Our goal in S2 is to create reusable pieces of knowledge about which components integrated into our system are useful, and to understand how the system as a whole can be effective in supporting ML engineers who mitigate contextual bias.
To achieve our goal, we first aimed at understanding the effect of workflow—how our new workflow of model steering using local explanations introduced through an interactive environment can make a difference for ML engineers.
The research questions (RQs) in this category are:
RQ1a. How has a user’s viewpoint about using attention as a method for model revision changed after experiencing our workflow? and RQ1b. How has a user’s viewpoint about using attention as a method for evaluating their model performance changed after experiencing our workflow?
Next, we were curious to learn the effect of using DeepFuse itself as a system: how can using DeepFuse change the outcomes of mitigating contextual bias? In particular, the RQs in this direction are: RQ2a. How did using DeepFuse in the input phase make participants' model diagnosis process different? RQ2b. How did using DeepFuse impact the outcome of contextual bias in terms of model accuracy and attention quality?
§.§ Method
We recruited 12 participants by snowball sampling through our network in industry and academia or advertising on social media.
In defining the S2 sample size, we followed the most common sample size of the past CHI publications consulted from Caine's work <cit.>.
The participants were selected via a screening survey that asked about their demographics, degree of expertise in building vision-based models using CNNs, the task goals of those vision models (if experienced), professional position, experience in using local explanation, and whether they had heard of and understood the importance of detecting "wrong" attention to handle contextual bias.
We are aware of the potential Hawthorne and novelty effects of having overestimated results when participants are being studied and new to our system <cit.>. To reduce the effects, we particularly hired experienced CNN developers who have established their own approaches in CNN fine-tuning. Later in the study, we asked them to compare the effectiveness between our approach and their current approaches and give reasoning.
We recruited 12 qualified participants (2 females and 10 males, aged between 20 and 43) out of 43 who submitted the screening survey. Six participants were academic researchers, and the other six were practitioners. Eight participants identified themselves as experienced, three as intermediate, and one as beginner developers in vision-based modeling. Although the experience distribution was imbalanced due to our consideration of including all genders' perspectives, there should not be any substantial effect of this distribution on the study, since all participants were qualified, with a good understanding of handling contextual bias and of wrong model reasoning revealed by saliency maps. Eight of the 12 participants had experience using local explanation to improve model performance in the past (see Table <ref>).
<ref> summarizes the S2 workflow. Participants joined two online sessions, the input and output sessions, on two consecutive days. Participants joined the sessions virtually on Zoom and shared their screens with us.
In the input session, we onboarded participants by explaining the purposes of DeepFuse and presenting how model evaluation could be done differently using local explanations of a standard classifier. Then participants went through a tutorial where they practiced using the interface with a toy dataset. The onboarding and tutorial took 30 minutes.
After the tutorial, participants performed the early phase of tasks using features introduced in 4.2.1, 4.2.2, and 4.2.3.
After an input session, we fine-tuned the initial model (M) into 2 conditions of models: a state-of-the-art model without users' inputs (M_base) and a model using our users' attention inputs in the validation set (M_exp).
The output session was scheduled one day after the input session since we cannot make our participants wait until fine-tuning is done.
On the following day, participants joined the output session, where they used the reviewing feature of to assess the model performance using the features introduced in 4.2.5.
After the review, we conducted semi-structured interviews with the participants.
After finishing two sessions, we provided them with 60 USD as a token of appreciation.
While the input session took 90 minutes and the output session lasted two hours, as shown in Table <ref>, participants used DeepFuse for about 25 minutes on average in the input session (Min=12, Max=47, SD=10.43) and about 20 minutes in the output session (Min=5, Max=33, SD=8.88). The average time spent on the system across both sessions was about 45 minutes (Min=17, Max=68, SD=16.83).
§.§.§ Task, Data, and Model
While DeepFuse can work with any classification task, we chose a binary gender classification problem for the study.
We are aware of the limitations of framing gender recognition as a binary classification, which cannot fully represent gender diversity.
For instance, automatic gender recognition primarily classifies gender through physical characteristics, which can disadvantage gender minorities <cit.>.
Although binary labels cannot represent the diversity in gender, we chose the task because it is one of the most widely adopted tasks in studies of contextual bias <cit.>.
We note that our choice of the binary classification task is to demonstrate the system's capability of solving contextual bias in a relatively simplistic setting with the help of well-annotated datasets used for training CNN classifiers.
We also note that we explained the possible concerns that can stem from the binary gender classification to our participants at the beginning of the study.
The dataset used in the study was selected from the Microsoft COCO dataset <cit.>, one of the most widely used datasets in ML and computer vision communities. The dataset was chosen because of its well-structured label formats and abundant 80 object classes co-appearing with humans, and it has been used for contextual bias studies <cit.>.
The image selection process has three steps.
First, the images were filtered by the segmentation labels of the “person” class for single-person images only.
Second, the images were re-filtered by the gender-related keyword in the captioning labels (i.e., “male”, “man’’, “men’’, “female”, “woman’’, “women’’).
Lastly, the filtered images were examined manually to have the best quality images for the gender classification task, excluding images with very small human figures that were unidentifiable for classification.
In total, we extracted 2,000 images and split them into 1,000 in the training set, 500 in the validation set, and 500 in the test set.
Since we wanted to test DeepFuse's capabilities of detecting and reducing contextual bias, we needed a model that had reasonable performance but was vulnerable to contextual bias.
We first manually added contextual objects (i.e., green star markers) on the top-left corners of the images.
The distribution of the star-added images is shown in Fig. <ref>, bottom.
For the training set, 1/3 of the “male” images (N = 167) were added with stars.
For both the validation and test sets, the star markers were added only on the “female” images (N = 250).
Then, we trained a standard ResNet-18 classifier (denoted as “M’’) using the biased image data.
In deciding on ResNet architecture in S2, we tested several models built based on ResNet-18 and 50.
We found no significant model accuracy improvement by adding more layers to the ResNet-18 architecture.
Therefore, we chose a less complex model architecture to make lightweight.
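The biased-data construction and base model M can be sketched as follows; for brevity we approximate the green star with a small green patch in the image corner, so the marker shape differs from the one used in the study.

import torch.nn as nn
from PIL import ImageDraw
from torchvision import models

def add_marker(img, size=24, color=(0, 255, 0)):
    # stand-in for the green star: a small green patch in the top-left corner
    ImageDraw.Draw(img).rectangle([0, 0, size, size], fill=color)
    return img

# marker placement: 1/3 of "male" training images; only "female" images in the validation/test sets
model_M = models.resnet18(weights=None)
model_M.fc = nn.Linear(model_M.fc.in_features, 2)  # two-way head: male / female
# ...train model_M on the marked training set with a standard cross-entropy objective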
Since the majority of images in the training set were original images, the model can achieve a reasonable prediction accuracy of 74% on regular images without the star markers.
Note that, during training, the model only saw star markers on "male" images.
When we tested the model on the validation set that only has star markers in the female class, the accuracy dropped to 43.8%, and 77.6% of “female” images were mispredicted.
This showed that the model used the star markers, which commonly appeared on "male" images, as a feature to make predictions for images containing the same contextual object, meaning the model (M) was vulnerable to contextual bias.
In generating local explanations, DeepFuse applies Grad-CAM <cit.> on the last convolutional layer.
Due to a CNN's hierarchical structure, and as shown by comparisons of attention maps between layers <cit.>, earlier layers' attention maps are scattered around objects' edges and corners, whereas the focus of the local explanation takes the shape of semantic objects closer to the later layers (see Fig. 5 in <cit.>).
Using the last layer, local explanations can create more semantic object-level meanings, which a human user can easily leverage for adjusting boundaries.
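Concretely, generating the map with the pytorch-grad-cam package looks roughly like the sketch below (API details may vary across releases); model stands for the ResNet-18 classifier described above.

import torch.nn as nn
from torchvision import models
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])  # last convolutional block

def local_explanation(input_tensor, class_idx):
    # returns an H x W map in [0, 1]; higher values mark the model's focused area
    return cam(input_tensor=input_tensor, targets=[ClassifierOutputTarget(class_idx)])[0]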
§.§.§ Input Session
At the beginning of the input session, we discussed the idea of using local explanations for mitigating contextual bias in a binary gender classification task.
After the discussion, we demonstrated how participants could upload their models and datasets using DeepFuse. Then we explained DeepFuse's model vulnerability diagnosis features described in 4.2.1 and 4.2.2 and the attention adjustment feature described in 4.2.3.
Upon the end of the tutorial, we gave time for participants to mimic the whole process using the same toy dataset and ask any questions.
Then, we asked participants to start the main session.
We erased all prior input and asked users to start over the process using a larger dataset (particularly assessing the local explanations of the validation set) and a base model we provided.
During the main session, participants had to use the system without help.
The main session was video-recorded.
Once participants finished their input session, we asked them to fill out an input survey with 2 questions covering "absolute" and "relative" evaluations, as follows:
* Q1: “[RQ2a, Absolute] I found understanding the model’s vulnerable aspects using DeepFuse to be _____.” (A 7-level Likert scale of usefulness. “7” is “extremely useful”.)
* Q2: “[RQ2a, Relative] Using DeepFuse, understanding the model’s vulnerable aspects was _____ than my current practice.” (A 7-level Likert scale of difficulty. “7” is “much easier”.)
§.§.§ Output Session
In this session, participants evaluated the performance change of the improved model with the test set.
In particular, DeepFuse provided two pairwise comparisons (between M and M_exp, and between M_base and M_exp) (see 4.2.5).
After the short output session tutorial using a toy test set, participants started the main output session using the model they fine-tuned from their input session and the larger test set.
Once users were finished with all the analyses and comfortable with their findings, we moved to the semi-structured exit interview. The interview had 9 question categories designed to understand (1) their general perception of DeepFuse, such as the pros and cons they felt throughout the two sessions, (2) their perception of specific perspectives, including (2-a) experiencing local explanation adjustment, (2-b) applying the reasonability matrix in assessing model performance, (2-c) features they used on day 1, (2-d) features they used on day 2, and (3) their suggestions for a better DeepFuse in the future.
Same as S1, two researchers attended every interview.
After the interview, they completed an output survey with 6 questions (see Q3 to Q8 below).
Lastly, to check the usability of , we asked participants to fill out the System Usability Scale (SUS) survey <cit.> (see Appendix B).
* Q3: “[RQ2b, Absolute] I found the capability of DeepFuse regarding improving the model performance using my input was _____.” (A 7-level Likert scale of effectiveness. “7” is “extremely effective”.)
* Q4: “[RQ2b, Relative] I found the capability of DeepFuse regarding improving the model performance was _____ than my current practice.” (A 7-level Likert scale of effectiveness. “7” is “extremely effective”.)
* Q5: “[RQ1a, Absolute] Adjusting the saliency maps (as guided) can be effective in building future models.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q6: “[RQ1a, Relative] Adjusting the saliency maps (as guided) can practically change my model-building practice to a better form in the future.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q7: “[RQ1b, Absolute] On top of a model accuracy performance, using saliency maps (as guided) can provide an effective measure for evaluating my future model performance.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q8: “[RQ1b, Relative] On top of a model accuracy performance, using saliency maps (as guided) can practically change the way I evaluate my future model performance to a better form.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
For the analysis of the exit interviews, we followed the similar process we applied in analyzing S1.
The difference from S1 was the existence of the video recordings.
The recordings were reviewed multiple times for transcription, code development, and analysis to synchronize with the notes.
The codes and memos were developed by our two authors gradually as we intake more interviews.
After the final interview, each of the authors developed the themes and shared them with each other, developing the consensus-based diagram that articulates the main insights we learned relevant to explaining the three RQs.
§.§ Results
In this section, we aggregated all survey and interview responses from the participants for the RQs we developed.
S2 results suggest that (1) the workflow of local explanation-based attention steering provided a diverse perspective in diagnosing model vulnerability, (2) the direct steering design made the process of model revision straightforward, and (3) every participant saw improvements in key model performance measures.
Specific sub-tasks, how they are improved, and why the participants perceived they are improved are in Table <ref>.
We believe these results are not merely due to the Hawthorne and novelty effects, since we have objective evidence of performance improvement and assessment efficiency.
We also organized the aspects that need improvement in Table <ref>, which we share in detail in the Discussion section.
The behavioral data we collected shows that all participants generated the model that outperforms (1) its model accuracy, (2) the overlap between the model's focus and the relevant object types (IoU), and (3) the proportion of reasonable attention out of all images in a test set.
The average accuracy of 12 users’ fine-tuned models (M_exp) was 82.95%, with an average IoU of 0.39 (“Intersection over Union” with respect to the attention ground truth of the user-defined gender-related object: “person”), and the average proportion of reasonable attention was 89.55% (see Fig. <ref>-A). All these performances outperformed both the initial model (model M: accuracy = 47.6%, IoU = 0.12, attention reasonability = 51.8%) and the model that applied the state-of-the-art fine-tuning method without attention (model M_base: accuracy = 79.0%, IoU = 0.26, attention reasonability = 79.4%).
Regarding the attitudinal survey data, every absolute and relative question's mean was over 4.
In terms of absolute questions, 100% of ratings were above 4-“neutral” (M = 6.19, SD = 0.67).
This indicates that participants were satisfied with the overall quality of the workflow and the system.
Regarding the relative questions, 89.6% of ratings were above 4-“neutral” (M = 5.94, SD = 1.24), which indicates that they felt applying the workflow and the system can practically improve their current practice.
§.§.§ [RQ1-a] Workflow: Adjusting model attention as a CNN steering method
After completing the user studies, the majority of users strongly agree that adjusting local explanations can effectively improve model performance (Q5 rating: M = 6.42 out of 7-“strongly agree”, SD = 0.64, as shown in Fig. <ref>-B). Also, people think their current modeling processes can be practically improved by considering the attention adjustment method (Q6 rating: M = 6.17 out of 7-“strongly agree”, SD = 1.07).
During interviews, all participants shared their positive impressions about the effectiveness of attention adjustment in improving model accuracy, which is the primary objective of conducting model fine-tuning. They also confirmed that the impact of contextual bias was reduced as attention quality increased by attention steering. By adding a new perspective from humans, a model also becomes fairer in making predictions for each target class (P2, P5, P10).
Participants (P1, P2, P3, P4) with experience in model attack and defense shared the possibility of using our method to improve the robustness of the models against backdoor attacks, letting the model ignore small perturbations on an image and focus on the right area. We learned that after trying our method, people gained awareness of considering human-in-the-loop and visual-based approaches in model steering since most of the ML researchers use algorithmic approaches for handling contextual bias, such as data augmentation, hyperparameter tuning, ensemble methods, etc., rather than extensively using visualization in the fine-tuning process.
§.§.§ [RQ1-b] Workflow: Adding quality of model attention in evaluating CNNs
Based on the feedback, users agree that using an attention evaluation method (e.g., the reasonability matrix as guided, based on Gao et al. <cit.>) is effective in diagnosing model vulnerabilities (Q7 rating: M = 6.33, SD = 0.47, see Fig. <ref>-B), and they are very likely to use this method for improving future practices (Q8 rating: M = 6.08, SD = 0.76).
Participants think that the attention assessment features in DeepFuse provide more diverse and rigorous perspectives for assessing a model's vulnerabilities, especially the reasonability matrix, which can be seen as an expansion of the accuracy dimension toward understanding “why” a model underperforms (P1, P3, P5, P6, P8, P9, P10, P12). P1 and P4 endorsed the necessity of including a reasonability matrix assessment step when checking the model’s decision-making.
The matrix interpretation was straightforward to most users, as it is related to the widely-used confusion matrix concept in the data science domain.
The dynamic shifts of model vulnerability were well presented as shown by the reasonability matrix (3 vulnerable sub-groups, “UIA - unreasonable inaccurate’’, “UA - unreasonable accurate’’, and “RIA - reasonable inaccurate’’).
One major task we designed for users to achieve was the recognition of a backdoor attack in the data (i.e., added green star markers which may trigger a false prediction by the model), and all participants were able to identify the impact of the attack by evaluating attention quality using the reasonability matrix.
§.§.§ [RQ2-a] System: How DeepFuse improved CNN diagnosis
After comparing DeepFuse with people's current practices, it was confirmed as a useful (Q1 rating: M = 5.92 out of 7-“extremely useful”, SD = 0.76, see Fig. <ref>-B) and easier (Q2 rating: M = 6.0, SD = 1.15) tool for understanding model vulnerability, benefiting from its labor-efficient mechanisms.
The step-by-step nature of the assessment process in DeepFuse allows users to systematically detect both contextual and manipulated bias in the data, making it easier to reduce model vulnerability (P3, P9, P12). People believe this GUI design can significantly reduce human effort in coding and visualization management for comprehensively assessing a CNN (P2, P3, P5, P6, P7, P8, P9, P10, P12). ML engineers are well aware of the advantages of using visualization to compare metrics and surface bias, but it is a cumbersome task (e.g., repetitive file creation and loading, lack of visual-based explorers for local explanations, etc.). Instead, people mostly use command lines and unintuitive numeric comparisons for checking vulnerabilities.
One important feature that people liked was the local explanation grouping by detected objects (e.g., “person”, “bicycle”, etc.), which allowed them to check attention quality and accuracy changes within the common object level (P2, P3, P6, P9, P12).
Some users pointed out that having consistent criteria for annotating attention quality regarding the classification task could be tricky with subjective uncertainty (P2, P4, P6, P9, P11). P6 mentioned that during the initial exploratory analysis of some models, users might not have good/bad attention criteria for annotating the attention.
P10 shared an experience in exploring what objects cause contextual bias, and the biggest challenge was making a reasonable assumption at first and evaluating it over time. This challenge is critical if the annotation task is outsourced to multiple people.
§.§.§ [RQ2-b] System: How DeepFuse improved CNN revision outcomes
According to survey responses, people witnessed the highly effective capability of DeepFuse in the performance steering task (Q3 rating: M = 6.08 out of 7-“extremely effective”, SD = 0.64, see Fig. <ref>-B).
Regarding the same task, people found it slightly more effective than their current approaches (Q4 rating: M = 5.5, SD = 1.66), as 2 users preferred their own approaches and rated it 2-“less effective”.
Aligning model attention with human perceptions can effectively revise model performance, and with DeepFuse's adjustment mechanisms (i.e., the attention drawing panel and boundary suggestions, as shown in Fig. <ref>), people can directly embed their intention and domain knowledge into the CNN (P2, P4, P9, P10). Regarding model performance comparison, people were able to reveal the overall context of the image data and the corresponding impact on the model (accuracy and attention quality) via DeepFuse's detected-object sub-grouping (P1, P2, P3, P5, P6, P8, P9, P11, P12).
An industry practitioner who worked primarily on model quality assurance mentioned that black-box models were usually not accessible to engineers outside the core ML team, and that DeepFuse had features that could be practical for evaluating model performance in that situation (P11).
In the last evaluation view of DeepFuse for record-wise attention comparison (as shown on the right of Fig. <ref>), P7 was curious about the opposite shift of attention quality (i.e., a change from “right” to “wrong” attention after model fine-tuning) and wanted to see some quantitative measures of it.
The IoU distribution visualization was another measure in DeepFuse that could provide a rigorous comparison between model conditions (with/without attention adjustment), revealing the positive relationship between accuracy and attention quality improvement (P2, P8, P11). As people mentioned, measuring IoU is not commonly used in classification evaluation compared to segmentation tasks, and it is typically difficult to visualize.
§.§ Discussion
Overall, the system received acceptable usability <cit.> with an average SUS score of 76.88 (SD = 14.70, see the SUS box plot in Fig. <ref>-B, the rated scores (0-4) were converted to a 0-100 scale based on Brooke's SUS guide <cit.>), exceeding the average SUS level of 68. There were 10 out of 12 participants (except P3 and P5) who gave above-average SUS scores.
Although this study is not for system-level comparison, we wanted to understand the effect of our fine-tuning mechanism collected from real users. We conducted Mann-Whitney U tests to confirm the significant performance improvement after using attention.
Across the 12 participants' results, the accuracy of the fine-tuned model using attention was significantly greater than that of the baseline condition (U = 0, n_base = n_exp = 12, p < 0.00001). The same holds for the IoU and attention reasonability proportion comparisons.
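The test itself is a one-sided Mann-Whitney U comparison of the per-participant scores, as sketched below with placeholder values rather than the study's actual per-user numbers.

from scipy.stats import mannwhitneyu

acc_exp = [0.84, 0.82, 0.85, 0.81, 0.83, 0.84, 0.82, 0.83, 0.81, 0.84, 0.82, 0.84]   # M_exp accuracies (placeholders)
acc_base = [0.79, 0.78, 0.80, 0.79, 0.78, 0.79, 0.80, 0.79, 0.78, 0.79, 0.80, 0.78]  # M_base accuracies (placeholders)

u_stat, p_value = mannwhitneyu(acc_exp, acc_base, alternative="greater")  # one-sided test
print(u_stat, p_value)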
Through the studies, we also identified disadvantages of our system that need to be improved (as shown in Table <ref>).
Regarding the interpretation of the reasonability matrix produced by users' annotation and model prediction, the guidelines can be more formally provided to be acceptable in the ML community (P4, P5, P11). The styles of attention visualization (i.e., color-scale, gray-scale, and polygon mask) need improvement, especially since the orange polygon mask was not visually clear for P3 and P10. It can be solved by having color and opacity adjustment features.
People also raised the potential inconsistency issue in attention adjustment, where users may have subjective opinions and criteria about where the “right” attention should be. DeepFuse needs to further provide more deterministic guidelines on attention adjustment for more complex task types, especially tasks that require domain expertise (e.g., TB diagnosis in chest X-ray images <cit.>).
With this uncertainty in attention adjustment, P7 and P10 suggested an instant performance comparison feature to reflect the model improvement on the fly as people annotate, which can be a future direction in active learning to have simultaneous updates while labeling in progress <cit.>.
About the attention adjustment module, people suggested that the drawing feature should be optimized for drawing curves and near image borders, as it was not easy to do so (P1, P3, P6). P5 suggested existing smart drawing features (e.g., image matting tool in Photoshop <cit.>) to be added. P7 thinks that binary mask drawings might not be enough for the best attention guidance used in fine-tuning the model. A solution could be giving higher weights toward the centroid of the attention areas.
With the current data size and task setting in S2, the trade-off between manual workload and model improvement may not be significant, since the overall workload was not overwhelming and was considered labor-efficient compared with existing assessment methods. Though evaluating attention maps could be a labor-intensive step, diagnosing and optimizing the model's vulnerability was effective and easy to do based on users' feedback. The annotation steps were incorporated with AI-supported automation (bulk annotation, object detection, object relevance filtering, adjustment recommendation, etc.) to reduce both users' cognitive and labor workloads while gaining better performance. However, as data size increases, this labor-performance trade-off becomes essential, and scalability solutions should be explored to reduce human labor while maintaining good fine-tuning performance. We discuss scalability considerations regarding this trade-off further in the next section (6.3).
§ IMPLICATIONS FOR DESIGN BEYOND XAI
Through S1 and S2, we learned several insights from our participants.
While listening to their voices and questions, and observing how they perceived the system after use, we learned that at the heart of people's pursuit of grounding models in their practice, a core challenge is understanding how to harmonize the way they believe a CNN should work with the way it actually works.
When they identify such a gap through XAI-driven tools, the next challenge seems to be knowing how to reconcile that gap efficiently and effectively.
We reflect on this aspect of going beyond XAI, that is, how to help a user turn learned insights into actionable plans, and list possible research directions that the HCI and CSCW communities can consider in designing future XAI or steerable AI tools to help practitioners "in the trench".
§.§ Correlating Model Attention and Model Accuracy
One of the overarching questions we wanted to understand was how the model attention seen as reasonable by the human mind could also result in accurate prediction.
Perhaps that was the reason we decided to use the reasonability matrix.
If reasonable attention and accurate prediction are aligned together, the reasonable accurate instances (i.e., accurate for the right reason) and unreasonable inaccurate instances (i.e., inaccurate for the wrong reason) should increase while the unreasonable accurate and reasonable inaccurate instances should decrease.
The tendency we saw was positive. We observed the reasonable accurate instances increased while the unreasonable accurate instances decreased from most participants.
At least from our setting, adding more human reasoning to the model's way of thinking has increased the model's gaze toward intrinsic objects, resulting in an accuracy increment.
However, one segment that didn't change was the reasonable inaccurate group.
We think understanding the reason when and why the model makes inaccurate predictions despite the reasonable gaze should be closely related to improving model performance.
Regarding research in Fairness, Accountability, and Transparency (FAccT), a dominant view is that human input or intervention may be required to realize a model that retains FAccT at the cost of a drop in model accuracy.
We hope that understanding effective ways to correlate the right reasons with accurate predictions can motivate the development of fair, robust, and accurate models <cit.>.
In general, we believe it is important to understand how to align human reasoning and model accuracy.
Shao et al. argue that humans “arguing” against DNNs when explanations are not reasonable can benefit the model <cit.>.
A railroad cannot be a train <cit.>, a snowboard is not a man <cit.>, and a shopping cart should not be a woman <cit.>.
Lastly, while human-guided ML has a potential and good cause <cit.>, finding a way to cut down the human-side labor is another important perspective from the two studies.
§.§ Generalizability Consideration: Beyond Binary Classification
We started testing the idea of directly steering model attention through local explanation on the binary classification problem for two reasons: the simplicity of the problem and the availability of well-annotated datasets.
After using DeepFuse, several participants shared their feedback and curiosity about how our pipeline could be applied to more advanced vision-based tasks.
The design we provided in binary classification can be relatively simpler than the aforementioned cases.
As the model's task gets more complex and diverse, new designs customized to the particular task type and application area should be required to understand the generalizability of our findings.
Methodologically, local explanation-based attention steering is not limited to binary classification tasks.
The future design can be explored to enhance CNN models for handling different tasks, such as multi-class classification, object detection, and segmentation tasks, which could possibly be expanded from processing images to videos.
The core user flow beneath DeepFuse in CNN steering is as follows:
First, the user flow allows human users to define reasonable and unreasonable types of attention depending on task goals.
Next, the user flow motivates reasonable attention types and penalizes unreasonable attention types in a fine-tuning process suggested in Explanation-guided Learning <cit.>.
Finally, the designer can provide a dashboard that helps users to understand how their indicated directions were reflected in the model revision process.
While the flow can be generally applicable, the way a designer facilitates a user's definition of reasonable and unreasonable attention type should be carefully implemented depending on the type of problem.
For example, in a multi-class classification or object detection task for different animals, users can employ attention logic that penalizes background and motivates foreground objects to build a more reasonable and high-performing model.
As mentioned in 5.1.1, local explanation methods can be applied to different layers of a CNN to produce different levels of granularity.
If the task goal requires coarse-granularity detection of a bounding box, applying local explanation visualization at the last layer of the CNN can be suitable. However, if it needs the finer granularity of a closed curve for semantic segmentation, producing local explanations at both the first convolutional layer (for edge-level detail) and the last convolutional layer (for object-level detail) can be considered, providing more depth of local explanation for users to evaluate.
Finally, we noted P7's suggestion about extending this flow to a more advanced video level of object classification, detection, and segmentation model steering.
Due to the data volume, special design considerations need to be applied in such a task.
However, upon the efficient design for indicating reasonable and unreasonable attention types, we believe that it is possible to apply the suggested flow to the problem space.
§.§ Scalability Consideration: Hundreds vs. Millions
Despite the promising performance of the model steering method, scalability remains an essential concern raised by several participants (P2, P3, P4, P8, P11), as many real-world image classification tasks involve millions of images.
Human scalability has been a crucial issue in HCI, CSCW, and beyond: while the data size can easily go up to millions and trillions in training state-of-the-art models, human cognition remains flat <cit.>.
Even if we can surface millions of images to users, it may not be possible for them to scan images serially and achieve sensemaking.
Generally, to successfully devise a scalable design, we believe that the number of images users have to go over should still not exceed thousands, and the amount of time they may spend should not exceed one hour, as recent data annotation literature suggests <cit.>.
Herbert Simon remarked that “wealth of information creates a poverty of attention” <cit.>.
As the trade-off between human labor and performance gain in human-in-the-loop applications is illustrated in Fig. <ref>, when users spend more effort as data size increases, the model will gain better performance until the workload hits the bottleneck of feasible human labor. We aim to make the curve of labor-performance trade-off steeper (from “curve 1” to “curve 2” shown in Fig. <ref>) through scalability optimization to improve the impact of human workload on performance gain. By devising “scalable” human-in-the-loop approaches, model performance could be further improved with the feasible amount of available human labor.
While every human-in-the-loop approach can suffer from bottlenecks of limited information, labor sources, session time, etc., the ultimate breakthroughs in human-in-the-loop and interactive ML designs could come from scalability strategies.
We introduce how some of the design strategies can be adopted in the design space of Beyond XAI.
First, one can consider sampling from the whole dataset.
Modern computer vision models can yield keywords of objects and context in the scene. Using such additional information extracted from the vast dataset, it is possible to define major and minor clusters of images. The new design may help users proceed with a small portion of sampled images derived from such clusters to reason the whole dataset and typify reasonable and unreasonable attention types accordingly.
Second, one can consider examining images based on the sequence built from Active Learning, a technique that chooses the fewest unlabeled data possible that could maximize the model accuracy gain <cit.>.
Applying active learning techniques is common in data annotation research, which can help reduce the required size of images to reason.
Third, devising further intelligent features that can automate the current workflow can facilitate the process as well.
Some features that need manual investigation can be automated in future designs.
Finally, if there is a strong rationale for investing more human resources, one can consider crowdsourcing.
§.§ Data Iteration and Continual Lifelong Learning
DeepFuse's capability of figuring out vulnerability through local explanation is closely related to the capability of fortifying the dataset by adding more examples that can remove the contextual bias.
Such “data iteration” is not uncommon in practice.
To improve the model, the most fundamental way is to improve data. For instance, Chameleon lets users compare data features, training/testing splits, and performance across data versions <cit.>.
Combining data iteration with model steering through local explanations could yield interesting design ideas that help ML engineers better find, search for, and add new data.
While improving the model with new data can be straightforward, a few issues need to be considered when steering models through local explanations.
First, it is necessary to understand which learning strategy is more effective: stacking every dataset in one place and retraining the model from scratch, or iteratively adding the new dataset and letting the model “evolve”.
In general, the first case can yield a higher-performing model than the second because iterative updates risk catastrophic forgetting, a problematic and almost inevitable drawback <cit.>.
In recent years, the concept of continual lifelong learning has emerged <cit.> and provided a breakthrough.
Understanding which strategy can yield what strengths and weaknesses in the scenario of data iteration with local explanation reasoning would be necessary.
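As one concrete (and purely illustrative) instance of such a strategy, an elastic-weight-consolidation style penalty anchors weights that were important for previously learned data, mitigating catastrophic forgetting when new examples are added; the function and variable names below are our own assumptions and are not part of any cited system.

import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    # params, old_params, fisher: dicts of numpy arrays with matching keys.
    # Weights with high Fisher information for previously learned data are
    # anchored near their old values, discouraging a data iteration from
    # overwriting what the model already learned.
    return 0.5 * lam * sum(
        np.sum(fisher[k] * (params[k] - old_params[k]) ** 2) for k in params
    )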
§.§ Improving Fine-Tuning
This work is the first study to observe how ML engineers experience techniques in the Explanation-guided Learning framework when fine-tuning their models, and how they perceive the difference.
While we saw participants satisfied with the progress they made with the RES framework, we introduce a few directions in which the RES framework could evolve toward an improved model steering environment in the future.
One important direction is how to design a better quantitative measurement to assess the quality of the steered attention during the fine-tuning process.
Simple distance-based metrics such as Mean Squared Error (MSE) or Intersection over Union (IoU) scores that are calculated purely based on the alignment of each feature can hardly comprehensively reflect the quality of the adjusted attention, as they completely ignore the correlations among visual features.
One potential remedy to this issue is also to leverage fidelity-based metrics, which aim at evaluating how faithful the model's attention is with respect to the model's prediction.
The assumption behind this is that the `right' attention should contain sufficient information for the model also to make the `right' prediction <cit.>; while on the other hand, removing the attention should also lead to significant negative impact for the model to make the correct prediction <cit.>.
However, it remains unclear and challenging how to design a single metric that jointly measures faithfulness and the degree of alignment with the human annotation, so as to provide a more comprehensive assessment of attention quality.
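To make the fidelity-based idea concrete, a deletion-style score can be sketched as follows; the model callable, fill value, and function name are assumptions made for illustration and are not part of the RES framework.

import numpy as np

def deletion_fidelity(model, image, attention_mask, label, fill=0.0):
    # Drop in the predicted probability of `label` when the attended region
    # is removed; a large drop suggests the attention is faithful to the
    # prediction, while a small drop suggests the model relies on other cues.
    masked = image.copy()
    masked[attention_mask] = fill
    p_full = model(image[None])[0, label]
    p_masked = model(masked[None])[0, label]
    return p_full - p_masked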
Another possible topic is how to leverage multiple annotations from different users for a single sample <cit.>.
As obtaining more than one annotation can be helpful to boost the reliability of the human boundary for attention adjustment, it poses challenges on how to align model attention with multiple ground truth boundaries.
While a simple way out can be using the 50% consensus or majority vote over all the available annotations, useful information can be lost during the aggregation. Thus, new techniques are in demand to leverage each annotation effectively.
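For reference, the 50%-consensus rule mentioned above amounts to a per-pixel majority vote over annotators, as in this short sketch (array shapes and the threshold are illustrative):

import numpy as np

def aggregate_masks(masks, threshold=0.5):
    # masks: (n_annotators, H, W) boolean attention annotations of one image.
    # A pixel is kept if at least `threshold` of the annotators marked it;
    # threshold=0.5 reproduces the majority-vote / 50%-consensus rule.
    return np.mean(masks.astype(float), axis=0) >= threshold

consensus = aggregate_masks(np.random.rand(3, 224, 224) > 0.5)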
§ CONCLUSION
In this work, we examined how to design a direct feedback loop between a human and a CNN through local explanations.
In particular, we designed and developed the first interactive system to help a user adjust the local explanation results regarding the gaze of CNNs.
We applied our interactive design in the problem space of contextual bias for CNN engineers.
Through Study 1 (S1), we learned ML engineers' practical challenges and desires, converting the insights into design considerations that could improve how we use local explanations in model diagnosis and steering.
With our system, we conducted Study 2 (S2) and found how it can provide a better workflow and experience to CNN engineers.
At the same time, we also found limitations and future research directions.
In particular, we distilled and shared implications for design beyond XAI within the categories of (1) correlating model attention and model accuracy, (2) generalizability considerations, (3) scalability considerations, (4) data iteration and lifelong learning, and (5) improving fine-tuning.
We hope this work can benefit researchers and practitioners who seek to understand how to make XAI-driven insights actionable in steering AI.
§ STUDY 1 INTERVIEW QUESTIONS
§.§ About you
* Can you explain your role in your company?
§.§ Your models and development settings
* Can you explain the purpose, input, and output of your models for which you used model saliency/attention?
* Can you walk us through your process of building your model? E.g., how to collect the training set, how to train your model, how to improve your model performance, how to debug?
§.§ Use of saliency maps
* Can you explain the way you use saliency maps in understanding your model’s behavior?
* Can you explain the way you use saliency maps in supervising/improving your model’s behavior?
§.§ Working on fair/robust/accurate models
* Can you explain your experience/effort towards building more fair DNN models?
* Can you explain if attention/saliency was useful or not?
§.§ Your tools, challenge, and wish list in the future
* Can you explain the types of tools that you use for understanding/improving your DNN models?
* Can you explain the challenges you experience while interacting with your DNN?
* What new tools/features do you wish to have in the near future to make your life better?
§ STUDY 2 SYSTEM USABILITY SCALE (SUS) SURVEY <CIT.>
§.§ Indicate your degree of agreement for each of the 10 statements (on a Likert scale from 1-“strongly disagree” to 5-“strongly agree”)
* I think that I would like to use this system frequently.
* I found the system unnecessarily complex.
* I thought the system was easy to use.
* I think that I would need the support of a technical person to be able to use this system.
* I found the various functions in this system were well integrated.
* I thought there was too much inconsistency in this system.
* I would imagine that most people would learn to use this system very quickly.
* I found the system very cumbersome to use.
* I felt very confident using the system.
* I needed to learn a lot of things before I could get going with this system.
|
http://arxiv.org/abs/2307.05743v2 | 20230710174318 | An exactly solvable dissipative spin liquid | [
"Henry Shackleton",
"Mathias S. Scheurer"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.stat-mech",
"quant-ph"
] |
Department of Physics, Harvard University, Cambridge MA 02138, USA
Institute for Theoretical Physics III, University of Stuttgart, 70550 Stuttgart, Germany
Institute for Theoretical Physics, University of Innsbruck, Innsbruck A-6020, Austria
Exactly solvable Hamiltonians with spin liquid ground states have proven to be extremely useful, not only because they unambiguously demonstrate that these phases can arise in systems of interacting spins but also as a pedagogical illustration of the concept and as a controlled starting point for further theoretical analysis. However, adding dissipative couplings to the environment—an important aspect for the realization of these phases—generically spoils the exact solvability. We here present and study a Lindbladian, describing a square-lattice spin-liquid with dissipative coupling to the environment, that admits an exact solution in terms of Majorana fermions coupled to static ℤ_2 gauge fields. This solution allows us to characterize the steady-state solutions as well as “quasiparticle” excitations within the Lindbladian spectrum. We uncover distinct types of quasiparticle excitations of the Lindbladian associated with parametrically different timescales governing the equilibration time of the expectation values of different classes of observables. Most notably, for small but non-zero dissipation, we find a separation into three different timescales associated with a three-step heating profile.
On a more general level, our exactly solvable Lindbladian is expected to provide a starting point for a better understanding of the behavior of fractionalized systems under dissipative time evolution.
An exactly solvable dissipative spin liquid
Mathias S. Scheurer
August 12, 2023
§ INTRODUCTION
Quantum spin liquids (QSLs) are exotic phases of matter characterized by emergent anyon excitations with non-trivial braiding statistics, in conjunction with the absence of any conventional long-range order <cit.>. Further interest in these states has grown due to their potential applications in fault-tolerant quantum computation <cit.> through their non-local encoding of quantum information.
The interplay between QSLs and open quantum systems has been an active area of research for many years, with a primary focus on the robustness of their information storage and on approaches to detect their presence when perturbations generic to experimental realization are introduced, such as a non-zero temperature, decoherence, and more <cit.>. Rather than taking this approach of considering generic forms of decoherence, we instead consider engineering a particular form of environmental coupling to a QSL in order to realize unique non-equilibrium physics. This general approach of leveraging dissipation has been shown to be efficient at preparing quantum states <cit.> including topologically-protected edge modes <cit.>. Recent applications of this idea to spin liquids <cit.> have yielded new insights into the behavior of emergent anyon excitations in the presence of dissipation.
We study a quantum spin-3 / 2 model on a two-dimensional square lattice, which is a particular limit of the QSL studied in <cit.>, and subject it to a certain choice of Markovian open dynamics generated by the Lindblad equation. We show that in a particular limit, the Lindbladian becomes exactly solvable through a parton construction. As such, exact statements about its steady-state solutions as well as transient behavior can be made. Exactly solvable Lindbladians have been studied previously using techniques such as third quantization <cit.>, Bethe ansätze <cit.>, operator-space fragmentation <cit.>, and through parton constructions <cit.> similar to our own. From a practical perspective, this exact solvability is especially useful as the wealth of analytic tools developed to approximately study the low-energy behavior of Hermitian Hamiltonians do not immediately carry over to these non-Hermitian Lindbladians, although several methods for approximately studying the spectrum of Lindbladians have been developed <cit.>.
A particular property of our exact solution that we emphasize is the existence of distinct quasiparticle excitations of the Lindbladian when viewed as an effective non-Hermitian Hamiltonian acting on an enlarged Hilbert space. We advocate for this as a powerful tool for understanding the non-equilibrium behavior of a generic state or density matrix as it equilibrates to its steady-state solution. We show that the imaginary energy gap associated with a particular type of quasiparticle excitation in this enlarged Hilbert space can be associated with the equilibration timescale of the expectation value of a certain class of observables. These classes of observables turn out to have a close relation to excitations of the corresponding unitary spin liquid.
An expert reader might immediately want to inspect Fig. <ref> for a summary of the spectrum. Importantly, the different time scales of these classes of operators have different parametric dependence on the strength γ of the coupling to the environment, which can be found simply by diagonalizing a quadratic Hamiltonian numerically, or in some cases is derived exactly analytically. For instance, in the limit of small γ, a certain set of operators, that are not conserved by the unitary dynamics, decay rapidly on a scale set by the exchange coupling rather than γ itself. Fractionalized string-like operators that can be interpreted as pairs of emergent Majorana fermion excitations in the unitary system, however, survive up to a time-scale ∝ 1/γ. After that, also the Majorana fermions heat up and only gauge-invariant fluxes of the emergent gauge fields or Wilson-loop operators remain in their original configuration. In this sense, our model realizes a three-step and exactly solvable analogue of the “fractionalized pre-thermalization” discussed recently <cit.> for stroboscopic time-evolution in the Kitaev model.
The remainder of the paper is organized as follows. A mathematical definition of all the involved operators and of the dissipative model we study can be found in Sec. <ref>. We derive and interpret the spectrum of the Lindbladian in Sec. <ref>. A discussion of perturbations away from the exactly solvable point and a conclusion are provided in Sec. <ref> and Sec. <ref>, respectively.
§ MODEL
The time evolution of a density matrix ρ can be described in its most general form by a completely-positive and trace preserving map Φ(ρ) →ρ'.
The Lindblad equation <cit.> is the most generic continuous Markovian map satisfying these properties,
∂ρ/∂t = ℒ[ρ] = - i [ H, ρ ] + γ ∑_j ( L_j ρ L_j^† - 1/2 { L_j^† L_j, ρ } ) ,
where the quantum jump operators L_j parameterize the nature of the environmental coupling. One may express the superoperator ℒ as an operator in a “doubled” Hilbert space, namely the Hilbert space of all operators. For a choice of basis in the original Hilbert space, |ψ_i⟩, i = 1…𝒟, we can represent any operator 𝒪 = ∑_ij 𝒪_ij |ψ_i⟩⟨ψ_j| as a state ‖𝒪 ≡ ∑_ij 𝒪_ij |ψ_i⟩⊗|ψ_j⟩ in this doubled Hilbert space, with inner product 𝒪_1 ‖ 𝒪_2 = 1/𝒟 Tr( 𝒪_1^† 𝒪_2 ). Within this doubled Hilbert space, the action of the Lindbladian superoperator is
i ℒ = H_eff⊗𝕀 - 𝕀⊗ H_eff^† + ∑_j i γ L_j ⊗ L_j^† ,
H_eff ≡ H - (i γ/2) ∑_j L_j^† L_j .
We will take L_j to be unitary, such that H_eff = H up to an overall imaginary constant.
This doubled Hilbert space construction is a powerful tool for characterizing the behavior of mixed states; notably, it has seen recent use in diagnosing the stability of quantum information stored in mixed states <cit.>. For a quantum spin model in two dimensions, it is instructive to think of this doubled Hilbert space as corresponding to a bilayer system, where the first (second) layer corresponds to the bra (ket). In this scenario, the Lindbladian consists of two copies of the Hamiltonian ± H acting on each of the two layers, with anti-Hermitian couplings i γ∑_j L_j ⊗ L_j^† between the two layers. To better connect with intuition from unitary time evolution, we will focus on the eigenvalues of the matrix iℒ rather than ℒ and refer to iℒ as “the Lindbladian”; in this convention, the imaginary components of eigenvalues correspond to dissipation, and the non-existence of exponentially growing solutions requires the imaginary part to always be negative.
§.§ Unitary time evolution
The Hermitian dynamics that we consider is a particular limit of an exactly solvable quantum spin-3 / 2 model on a square lattice first studied in <cit.>. We define this model here and review some properties of its solution, as our results are most clearly stated within this framework. Due to the four spin polarizations per site, we may express the spin-3 / 2 degrees of freedom in terms of anticommuting Gamma matrices Γ^a, a = 1… 5, which obey Γ^aΓ^b = 2 δ^ab. In terms of the physical spin operators,
Γ^1 = (1/√3) { S^y, S^z } ,  Γ^2 = (1/√3) { S^z, S^x } ,
Γ^3 = (1/√3) { S^x, S^y } ,  Γ^4 = (1/√3) [ (S^x)^2 - (S^y)^2 ] ,
Γ^5 = (S^z)^2 - 5/4 .
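As a quick numerical cross-check of these definitions (our own sketch, not part of the original construction), one can build the spin-3/2 operators explicitly and verify the Clifford algebra quoted above:

import numpy as np

# spin-3/2 operators in the basis m = 3/2, 1/2, -1/2, -3/2
m = np.array([1.5, 0.5, -0.5, -1.5])
Sz = np.diag(m).astype(complex)
Sp = np.zeros((4, 4), dtype=complex)
for i in range(1, 4):                       # <m+1|S+|m> = sqrt(s(s+1) - m(m+1))
    Sp[i - 1, i] = np.sqrt(15 / 4 - m[i] * (m[i] + 1))
Sx, Sy = (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / 2j

acomm = lambda A, B: A @ B + B @ A
Gamma = [acomm(Sy, Sz) / np.sqrt(3), acomm(Sz, Sx) / np.sqrt(3),
         acomm(Sx, Sy) / np.sqrt(3), (Sx @ Sx - Sy @ Sy) / np.sqrt(3),
         Sz @ Sz - 5 / 4 * np.eye(4)]

# Clifford algebra: {Gamma^a, Gamma^b} = 2 delta^{ab}
assert all(np.allclose(acomm(Gamma[a], Gamma[b]), 2.0 * (a == b) * np.eye(4))
           for a in range(5) for b in range(5))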
The Hamiltonian is defined on a square lattice as
H = ∑_j [ J_x Γ^1_j Γ^2_{j+x̂} + J_y Γ^3_j Γ^4_{j+ŷ} ]
+ ∑_j [ J_x' Γ^{15}_j Γ^{25}_{j+x̂} + J_y' Γ^{35}_j Γ^{45}_{j+ŷ} ] - J_5 ∑_j Γ^5_j ,
where Γ^{ab}_j ≡ [ Γ^a_j, Γ^b_j ] / (2 i). For simplicity, we will assume that the lattice has an even number of sites in both the x̂ and ŷ directions. The exact solvability of this model is a consequence of an extensive number of conserved fluxes,
W_j = Γ^{13}_j Γ^{23}_{j+x̂} Γ^{14}_{j+ŷ} Γ^{24}_{j+x̂+ŷ} ,
and can be understood most conveniently by performing a Majorana decomposition of the Γ matrices; specifically, one employs the representation
Γ_j^μ = i c_j^μ d_j , Γ_j^μ 5 = i c_j^μ d_j' , μ = 1 , 2 , 3 , 4 ,
Γ_j^5 = i d_j d_j' ,
with the constraint - i c_j^1 c_j^2 c_j^3 c_j^4 d_j d_j' = Γ_j ^1 Γ_j^2 Γ_j^3 Γ_j^4 Γ_j^5 = -1. In this representation, the Hamiltonian can be rewritten in terms of static ℤ_2 gauge fields ŵ_j,α living on the bonds of the lattice, which come from conserved bilinears of the c_j^μ operators, coupled to two species of Majorana fermions, d_j and d_j'.
We will not give a detailed review of the various properties of this solution <cit.>, as it will not be important for our analysis. However, we will emphasize the relation between these emergent degrees of freedom and physical observables, as the results of our dissipative model concisely fit into this picture. The ℤ_2 gauge fluxes - products of closed loops of ŵ_j,α operators - correspond to the conserved fluxes W_j. Pairs of Majorana fermions coupled by a string of ℤ_2 gauge fields are given by strings of Γ matrices. For a pair of d excitations, the operator can be generated by a string of bond operators:
V_{j, x} = Γ^1_j Γ^2_{j+x̂} ,   V_{j, y} = Γ^3_j Γ^4_{j+ŷ} .
A similar construction follows for a pair of d' fermions,
V_{j, x}' = Γ^{15}_j Γ^{25}_{j+x̂} ,   V_{j, y}' = Γ^{35}_j Γ^{45}_{j+ŷ} ,
as well as the combination of a d and d' fermion, a special case of which is Γ^5_j = i d_j d_j'.
Note that a closed loop of either the V_j,α or V_j,α' operators is equivalent to a product of the conserved fluxes contained inside the loop.
In order to retain the exact solvability upon the inclusion of dissipation, we take J_x' = J_y' = J_5 = 0, which causes the bond operators V_j ,α' to become conserved quantities. In the Majorana fermion language, this limit quenches the dispersion of the d_j' fermions and the ground state becomes highly degenerate as pairs of d_j' may be added in at no energy cost.
§.§ Jump operators
We now introduce jump operators L_j = Γ^5_j. Note that our Lindbladian jump operators commute with the conserved flux, [ L_j, W_k ] = 0. This property implies that the flux operators W_j constitute strong symmetries of the system, as defined in <cit.>, and means that an initial state with a definite flux configuration will remain in such a configuration. If we express our Hermitian model as free Majorana fermions coupled to a static ℤ_2 gauge field, the interpretation of this phenomenon is that the gauge fields will remain static under the Lindbladian time evolution while generically we expect the Majorana fermions to evolve to resemble a finite-temperature Gibbs state. One may think of this behavior as “fractionalized thermalization.” For a generic set of quantum jump operators that commute with W_j, we expect the steady-state solutions of the Lindbladian can be represented as the tensor product of a thermal Gibbs state of Majorana fermions with a pure state of ℤ_2 gauge fields. We note related work studying the separation of thermalization timescales in fractionalized excitations on the Kitaev honeycomb model <cit.> under stroboscopic time evolution, as well as more directly analogous work studying the Kitaev honeycomb model coupled to jump operators that commute with the conserved fluxes <cit.>. Apart from fluxes being exactly conserved under dissipative dynamics, we also uncover below an additional, less apparent regime of fractionalized thermalization in our exactly solvable model, which occurs in the limit of small dissipation.
The above discussion follows for any jump operator that commutes with the conserved fluxes, and remains true even away from the limit J'_x = J'_y = J_5 = 0. However, our particular model admits additional conserved quantities which render the full dissipative dynamics exactly solvable.
To see this, we use the doubled Hilbert space formalism, see Eq. <ref>, to express the Lindbladian superoperator as an operator acting on a bilayer spin-3/2 system, with Gamma matrices Γ^a_R , Γ^a_L for the two layers - the R , L subscript indicates that they correspond to the right and left action of the gamma matrices on the physical operator. The Lindbladian can be written as
i ℒ = H[ Γ_R ] - H[ Γ_L ] + i γ∑_j Γ_j, R^5 Γ_j,L^5 - i γ N,
where N is the number of sites.
This bilayer representation makes it clear that, in addition to the intralayer fluxes W_{j, R} , W_{j, L} which are defined in analogy to Eq. <ref> and commute with the Lindbladian separately, we have a new set of conserved interlayer fluxes U_{j,α} ≡ V_{j, α, R}' V_{j, α, L}' defined on the plaquettes connecting the two layers, shown in Fig. <ref>. These conserved quantities are “weak” symmetries <cit.>. In contrast to the strong symmetries generated by the flux operators W_j, the operators V_{j, α}' do not commute with the jump operators L_j individually, and it is exclusively the conserved superoperator consisting of the simultaneous right and left action of V_{j, α}' that commutes with the Lindbladian.
§.§ Parton construction
To elucidate the exact solvability of this model, we represent the Gamma matrices in terms of six Majorana fermions,
Γ_j, R^μ = i c_j, R^μ d_j, R , Γ_j, R^μ 5 = i c_j, R^μ d_j, R' , μ = 1 , 2 , 3 , 4 ,
Γ_j, R^5 = i d_j, R d_j, R' ,
with an analogous representation for Γ^μ_L in terms of c_j, L^μ , d_j, L , d_j, L '. This enlarges our Hilbert space, which necessitates the constraint - i c_j, R^1 c_j, R^2 c_j,R^3 c_j, R^4 d_j, R d_j, R' = Γ_j, R^1 Γ_j, R^2 Γ_j, R^3 Γ_j, R^4 Γ_j, R^5 = -1 on all physical states, and likewise for the Γ_L operators.
In this representation, the Hamiltonian H[Γ_R] becomes
H[Γ_R] = ∑_j [ J_x ŵ_{j, x, R} i d_{j, R} d_{j+x̂, R} + J_y ŵ_{j, y, R} i d_{j, R} d_{j+ŷ, R} ] ,
where ŵ_{j, x, R} ≡ -i c_{j, R}^1 c^2_{j+x̂, R} and ŵ_{j, y, R} ≡ -i c_{j, R}^3 c^4_{j+ŷ, R} are conserved quantities with eigenvalue ± 1. An analogous rewriting follows for the Hamiltonian on the second layer. Observe that the Majorana fermions d_{j, R}' , d_{j, L}' drop out of the intralayer Hamiltonian entirely. As a result, the interlayer coupling also becomes quadratic in the Majorana fermions,
i γ ∑_j Γ^5_{j, R} Γ^5_{j, L} = - i γ ∑_j d_{j, R} d_{j, R}' d_{j, L} d_{j, L}'
= i γ ∑_j d_{j, R} d_{j, L} d_{j, R}' d_{j, L}' = - γ ∑_j v̂_j d_{j, R} d_{j, L} ,
where v̂_j ≡ -i d_j, R' d_j, L' is a conserved quantity with eigenvalue ± 1. With this rewriting, our model becomes one of free fermions d_j, R , d_j, L hopping on a bilayer square lattice in the presence of a background ℤ_2 gauge field ŵ_j, α, R , ŵ_j,α, L , v̂_j living on the links. Written out explicitly,
i ℒ = ∑_{ℓ = L, R} ∑_j s_ℓ [ J_x ŵ_{j, x, ℓ} i d_{j, ℓ} d_{j+x̂, ℓ} + J_y ŵ_{j, y, ℓ} i d_{j, ℓ} d_{j+ŷ, ℓ} ] - γ ∑_j v̂_j d_{j, R} d_{j, L} - i γ N ,
where s_L = 1, s_R = -1. This Lindbladian possesses a local ℤ_2 gauge symmetry, given by the transformation d_{j, ℓ} → Λ_{j, ℓ} d_{j, ℓ}, ŵ_{j, α, ℓ} → Λ_{j, ℓ} ŵ_{j, α, ℓ} Λ_{j+α̂, ℓ}, v̂_j → Λ_{j, L} v̂_j Λ_{j, R}, where Λ_{j, ℓ} = ± 1. The gauge-invariant fluxes around a single intralayer plaquette give the conserved quantities -W_{j, R} , -W_{j, L}, and the fluxes around an interlayer plaquette give the conserved superoperator -U_{j,α}. Note the relative minus signs between the two quantities - as will be relevant later, working in a sector with U_{j,α} = 1, which is the sector to which the steady-state solutions belong, requires us to pick a gauge configuration such as v̂_j = (-1)^j.
In order to obtain physical states, we must project back to our physical (doubled) Hilbert space. This is achieved by the projection operator P = ∏_{j, ℓ} (1 + D_{j, ℓ})/2, where D_{j, ℓ} = - i c^1_{j, ℓ} c^2_{j, ℓ} c^3_{j, ℓ} c^4_{j, ℓ} d_{j, ℓ} d'_{j, ℓ}. A careful analysis of this for a single-layer Hamiltonian was performed in <cit.>, and our analysis proceeds along similar lines. We can write P = P'(1+D), where D ≡ ∏_{j, ℓ} D_{j, ℓ} and P' is a linear combination of all inequivalent gauge transformations. Since D^2 = 1 and [ D, ℒ ] = 0, we must restrict ourselves to eigenstates with D = 1. We write
D = ∏_{j, α, ℓ} ŵ_{j, α, ℓ} ∏_j v̂_j ∏_j i d_{j, L} d_{j, R} .
In order to more readily leverage the gauge constraint, we re-express the Majorana fermions d_j, R , d_j, L in terms of complex fermions. A representation that will prove to be useful for future analysis is
f_j = i^j ( d_j, L + i (-1)^j d_j, R) / 2 .
With this, 2 f_j^† f_j - 1 = (-1)^j i d_{j, L} d_{j, R} and (-1)^{N_f} ≡ (-1)^{∑_j f_j^† f_j} = ∏_j i d_{j, L} d_{j, R}. Therefore, gauge invariance restricts the total fermion parity, (-1)^{N_f}, to equal the total “gauge parity,” ∏_{j, α, ℓ} ŵ_{j, α, ℓ} ∏_j v̂_j.
§ SPECTRUM OF THE LINDBLADIAN
In the previous section, we have shown that our Lindbladian reduces down to one of free fermions coupled to a static ℤ_2 gauge field. As such, the full spectrum and eigenvectors can in principle be calculated - analytically for translationally-invariant gauge field configurations, and by diagonalizing a non-Hermitian single-particle Hamiltonian for more general gauge configurations. However, the interpretation of these properties must be done in terms of density matrices of our physical Hilbert space, rather than a more conventional analysis of Hermitian systems. We outline our general approach to understanding these properties below.
§.§ General remarks
The most important eigenstates of the Lindbladian are those with eigenvalue zero, which correspond to steady-state solutions. Since the eigenvalues λ_i of the Lindbladian obey Im[λ_i] ≤ 0, every initial density matrix will eventually evolve into some superposition of these steady-state solutions (for simplicity, we ignore the possibility of solutions with purely real eigenvalue, i.e. density matrices that do not decay but whose phase oscillates in time, as these are not present in our spectrum). Our first task will be to find these steady-state solutions and understand their properties.
Ascertaining the properties of these steady-state solutions is a non-trivial task within the doubled Hilbert space formalism. Given a density matrix ‖ρ, the expectation value of a Hermitian operator A is given by Tr[ A ρ ], which is proportional to the overlap A ‖ρ. As such, standard intuition for calculating observables of pure states in ordinary Hilbert spaces, ⟨ψ| A |ψ⟩, is not applicable here. While it is possible to develop the machinery to perform such calculations, we instead proceed with a more intuitive symmetry-based analysis. The exact solvability of our model provides an extensive number of superoperators that commute with the Lindbladian, and hence ‖ρ will be an eigenstate of them. By decomposing our Hilbert space into subspaces with definite eigenvalue under these superoperators, we can conclude that A ‖ρ must vanish unless the two have the same eigenvalue. In general, this symmetry analysis only gives us limited information about ‖ρ. However, the extensive number of conserved quantities makes this perspective especially powerful for our model, and we will find that only a small amount of additional analysis is required to fully characterize the steady-state solution.
After characterizing the steady-state solutions, we will analyze the dissipative solutions - operators with eigenvalue λ_i obeying λ_i < 0. We will be interested in eigenvalues whose imaginary components have the smallest magnitude, which defines the Liouvillian gap, and a corresponding timescale associated with the decay to the steady-state solution. As the spectrum of our Lindbladian has the interpretation of fermions coupled to a ℤ_2 gauge field, we find it insightful to define distinct types of Liouvillian gaps depending on the nature of the excitation. For example, one may inquire into the Liouvillian gap with respect to fermionic excitations, or with respect to gauge excitations (visons). This is not an arbitrary labeling, the motivation for which ties back to our symmetry-based analysis of steady-state solutions. Excitations within a given sector will have different eigenvalues under the symmetries of our Lindbladian, and hence can be characterized by distinct classes of observables that have a non-zero overlap with these excitations. The corresponding Liouvillian gap for these excitations specify a timescale which governs the rate at which the expectation values for these classes of observables asymptote to their steady-state solutions. We note that a similar hierarchy of timescales was recently studied in random local Liouvillians <cit.> and in fact observed in simulations on a quantum computer <cit.> - in this model, the separation of timescales was associated with differing spatial extents of operators.
To be more explicit with our perspective, consider a steady-state solution ‖ρ_ss and a dissipative solution ‖ a which we interpret as a quasiparticle excitation of type a. A physical density matrix can be constructed by ‖ρ_d ≡ ‖ρ_ss + c ‖ a, where c is some constant chosen to ensure Tr[ ρ_d^2 ] < 1. This density matrix asymptotes to ‖ρ_ss at late times but displays transient behavior dictated by ‖ a up to a timescale t_a = -1/Im[λ_a]. It is useful to characterize this operator a in terms of observables {𝒪_a} such that Tr[ 𝒪_a a ] ≠ 0, in which case one can say that the expectation values of observables 𝒪_a relax to their steady-state values with a timescale dictated by t_a for the density matrix ‖ρ_d. Of course, a generic initial density matrix will be more complicated than ‖ρ_d; however, if ‖ a is the lowest-energy excitation that has a non-zero overlap with the observables 𝒪_a, then t_a provides an upper bound on the equilibration timescale for the expectation value of these observables.
The utility of this picture is contingent on the operators 𝒪_a having a sufficiently simple representation. As we will show, these different classes of observables are most conveniently stated in terms of fractionalized operators acting on the original Hilbert space, such as the bond operators in Eq. <ref> and Eq. <ref>. In other words, we demonstrate a close connection between fractionalized excited states in the doubled Hilbert space formalism and fractionalized operators in the physical Hilbert space, with the imaginary energy of the former defining the equilibration timescale of expectation values of the latter.
§.§ Steady-state solutions
We now study the properties of the steady-state solutions. Recall that for isolated systems with similar Hamiltonians (free fermions coupled to static ℤ_2 gauge fields), there is a theorem due to Lieb <cit.> for bipartite lattices that fixes the gauge flux sector in which the ground state resides. In a similar spirit, we leverage general arguments given in <cit.> that allow us to deduce gauge flux sectors which support steady-state solutions.
A fact that we will use in this argument is that any dissipative eigenstate of the Lindbladian must have zero trace - if it had a non-zero trace, the dissipative nature implies that the trace would decay in time, contradicting the trace preservation of the Lindbladian time evolution. Hence, the search for steady-state solutions can be recast as a search for eigenstates with a non-zero trace. This comes with the caveat that we may miss steady-state solutions that happen to also have zero trace; however, we explicitly diagonalize the Lindbladian for a 4 × 4 lattice in each gauge sector and have found no such solutions.
We first constrain the interlayer fluxes U_{j,α}, which constitute weak symmetries. Recall that the superoperator U_{j,α} acts on density matrices as U_{j,α}[ρ] = V_{j,α}' ρ V_{j,α}'. An eigenstate of U_{j,α} with non-zero trace must have eigenvalue 1, since unitarity and Hermiticity of V_{j,α}' imply Tr[ρ] = Tr[ V_{j,α}' ρ V_{j,α}' ]. Hence, we will constrain ourselves to the U_{j,α} = 1 sector.
We now turn to the “strong” symmetries W_j. A similar argument as in the last paragraph implies that we must constrain ourselves to sectors where W_j ρ W_j = ρ. However, recall that in the doubled Hilbert space formulation, the right and left fluxes (W_{j, R} and W_{j, L}) are conserved separately. Hence, our analysis only constrains the eigenvalues of W_{j, R} and W_{j, L} to be the same. This is actually not a new constraint - the product of fluxes around any closed surface must be +1, so the constraint that all U_{j,α} = +1 automatically implies W_{j, R} = W_{j, L}. We will denote this common eigenvalue of W_{j, R} and W_{j, L} as W̅_j, to distinguish it from the operator W_j. One can prove, as in Appendix A of <cit.>, that at least one steady-state solution exists for each choice of eigenvalue.
Translating the above statements to our gauge field representation, we fix our gauge sector to be ŵ_j, α, R = ŵ_j, α, L≡ŵ_j, α and v̂_j = (-1)^j. The complex fermion representation chosen in Eq. <ref> makes the Lindbladian in the steady-state gauge sector especially simple, as
2 ( f^†_j f_{j+x̂} + f^†_{j+x̂} f_j ) = i d_{j, R} d_{j+x̂, R} - i d_{j, L} d_{j+x̂, L} ,
2 f_j^† f_j = 1 - (-1)^j i d_{j, R} d_{j, L} ,
where an identical relation as in the first line but for x̂↔ŷ also holds. As a consequence, the Lindbladian takes the simple form
i ℒ = ∑_j ( J_x ŵ_{j, x̂} f_j^† f_{j+x̂} + J_y ŵ_{j, ŷ} f_j^† f_{j+ŷ} + h.c. )
- 2 i γ ∑_j f_j^† f_j ,
see Fig. <ref>.
The non-Hermiticity of i ℒ is manifest as a simple imaginary chemical potential, and we can immediately identify the steady-state solution as the f_j^† vacuum state. The real part of the dispersion is unaffected by the dissipation, and all excitations come with the same dissipative energy penalty 2 γ.
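Since the Lindbladian in this sector is quadratic and number conserving, its single-particle spectrum follows from diagonalizing an N × N matrix; the sketch below (our own, with an illustrative lattice size and uniform couplings) builds the matrix of the equation above for a given gauge configuration and confirms that every mode decays at exactly 2γ.

import numpy as np

def steady_sector_spectrum(Lsize, J, gamma, w_x=None, w_y=None):
    # Single-particle eigenvalues of i*Lindbladian in the steady-state gauge
    # sector on an Lsize x Lsize torus; w_x, w_y are the Z2 gauge fields
    # (+/-1) on the x- and y-bonds (uniform +1 if not supplied).
    N = Lsize * Lsize
    idx = lambda x, y: (x % Lsize) * Lsize + (y % Lsize)
    w_x = np.ones((Lsize, Lsize)) if w_x is None else w_x
    w_y = np.ones((Lsize, Lsize)) if w_y is None else w_y
    M = np.zeros((N, N), dtype=complex)
    for x in range(Lsize):
        for y in range(Lsize):
            j = idx(x, y)
            M[j, idx(x + 1, y)] += J * w_x[x, y]   # J_x w_{j,x} f_j^dag f_{j+x}
            M[idx(x + 1, y), j] += J * w_x[x, y]   # + h.c.
            M[j, idx(x, y + 1)] += J * w_y[x, y]
            M[idx(x, y + 1), j] += J * w_y[x, y]
            M[j, j] = -2j * gamma                  # imaginary chemical potential
    return np.linalg.eigvals(M)

eps = steady_sector_spectrum(Lsize=20, J=1.0, gamma=0.3)
print(np.allclose(eps.imag, -2 * 0.3))   # every mode decays at exactly 2*gamma

Because the Hermitian hopping part commutes with the dissipative term, which is proportional to the identity, the imaginary parts are pinned to -2γ for any gauge configuration.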
What are the expectation values of observables in these steady-state solutions? Recall that these solutions have eigenvalue 1 under the symmetries U_jα and W_jR W_jL. Any observable with a non-zero expectation value with respect to this steady-state must have identical eigenvalues. Phrased in terms of operators on our original Hilbert space, the requirement is that observables must commute with the flux operators W_j and the bond operators V_jα'. This is a strong constraint - the only operators that satisfy this condition are precisely products of the V_jα bond operators defined in Eq. <ref>. One can check explicitly that these operators satisfy the required constraints, and the claim that these are the only operators with such a property follows from dimension counting, worked out in Appendix <ref>. Physically, these correspond to all operators that can be expressed in terms of pairs of d_j Majorana fermions connected by strings of ℤ_2 gauge fields ŵ_j α.
We now argue that among these operators, only closed loops of V_jα operators have a non-zero expectation value - recall that these correspond to products of flux operators W_j. This is a consequence of the steady-state solution being the vacuum state of the f_j^† operators, which gives an additional set of constraints: (1 - 2 f_j^† f_j) ‖ρ = ‖ρ. We can turn this into a gauge-invariant statement by the following rewriting
‖ρ = (-1)^j (1 - 2 f_j^† f_j) v̂_j ‖ρ
= d_j, R d_j, L d_j, R' d_j, L' ‖ρ
= Γ_j,R^5 Γ_j,L^5 ‖ρ .
Hence, any observable with non-zero expectation value must have eigenvalue 1 under the symmetry Γ^5_{j, R} Γ^5_{j, L} (i.e., it must commute with Γ^5_j), and such observables are precisely closed loops of V_{j, α} operators. Using the fact that the steady-state solution obeys the relation W_{j, L} ‖ρ = W_{j, R} ‖ρ ≡ W̅_j ‖ρ, we can deduce that the expectation values of the flux operators in this steady state are given precisely by the intralayer gauge fluxes W̅_j.
When our model is defined on a torus, the steady states of our Lindbladian exhibit a four-fold topological degeneracy arising from the possibility of flipping non-contractible loops of ŵ_j, α operators, shown in Fig. <ref>. Physically, this implies four distinct steady-state density matrices ρ_1-4 for each local flux configuration, which are distinguishable based on the expectation values of non-contractible strings of Γ matrices. We emphasize that, while this may be thought of as a topological degeneracy - and more generally, ℤ_2 topological order - within the doubled Hilbert space formalism, it does not constitute true mixed state topological order in the sense of being able to encode logical qubits in the steady-state solutions. What may appear to be a “quantum” superposition of different topological sectors ‖ρ_1 + ‖ρ_2 within the doubled Hilbert space formalism translates to a mere classical superposition of density matrices ρ_1 + ρ_2 within our original Hilbert space (moreover, the relative phase between the superposition of the two steady-states is not freely tunable - it is fixed by the Hermiticity and positive semi-definite constraint on the physical density matrix).
§.§ Liouvillian gaps
Moving beyond steady-state solutions, we can calculate the Liouvillian gap - the energy of the next-lowest state in imaginary energy. It is useful to draw a distinction between different types of Liouvillian gaps. The three types of degrees of freedom in our Lindbladian are complex fermions f_j, interlayer gauge fields v̂_j, and intralayer gauge fields ŵ_j, α, R, ŵ_j, α, L. Excitations with respect to any of these three variables may be considered. Recall from Eq. <ref> that gauge invariance requires an even number of excitations.
* Within a gauge field configuration with a steady-state solution, we compute the fermion gap, which is the energy associated with a fermionic excitation. In accordance with the condition of gauge invariance discussed previously, any valid state must include a pair of these excitations.
* We also compute the effects of interlayer gauge excitations, which corresponds to the energy associated with flipping a single v̂_j away from the “checkerboard” sector. We call this the interlayer gauge gap.
* Finally, we analyze intralayer gauge field excitations, which come from flipping a single ŵ_j, L operator. We choose left gauge fields for concreteness - an identical calculation follows for right gauge fields.
We will study each of these excitations in turn. In addition to calculating their Liouvillian gaps, we also identify operators whose equilibration timescales can be upper bounded by these gaps. We make this identification primarily through the symmetry-based analysis outlined previously in Section <ref>. To be precise, each of these excitations will be associated with a particular flux configuration, and the excitations can therefore only have a non-zero overlap with operators whose eigenvalues under the flux superoperators are identical.
This analysis is robust and can be applied to any excitation; however, for interlayer gauge excitations, we will find that the nature of the fermionic degrees of freedom allows us to say more about the structure of the long-lived excitations.
§.§.§ Fermion gap
We first study the Liouvillian gap associated with fermionic excitations within the steady-state gauge sector. As is clear from Eq. <ref>, the fermion gap is always 2 γ, and a pair of these excitations will cost energy 4 γ. As these excitations remain in the same gauge sector, they will still have eigenvalue 1 under the symmetries U_j,α, W_j,R W_j,L. Recalling the relation between f_j and the Majorana fermions in Eq. <ref>, we see that this fermion gap of 4 γ defines the inverse timescale under which the expectation values of pairs of d_j fermions will asymptote to their steady-state value of zero. The fact that also the Hermitian part of iℒ, the first line in Eq. <ref>, is quadratic means that the (in general ŵ_j,α dependent) exact eigenstates of the Lindbladian in the steady-state gauge sector and the time-dependent phases they pick up are characterized by all possible occupation numbers of the N Bloch states of the f_j and their band structure; the associated decay rate is just given by 2γ times the number of occupied Bloch states.
§.§.§ Interlayer gauge excitation
Creating an interlayer gauge excitation at site k gives us the free fermion Lindbladian
i ℒ = ∑_j ( J_x ŵ_{j, x̂} f_j^† f_{j+x̂} + J_y ŵ_{j, ŷ} f_j^† f_{j+ŷ} + h.c. )
- 2 i γ ∑_{j ≠ k} f_j^† f_j - 2 i γ ( 1 - f_k^† f_k ) .
The structure of the Lindbladian is the same for multiple interlayer gauge excitations - the chemical potential at each excited site k is changed from f_k^† f_k to (1 - f_k^† f_k).
A single one of these flips is not gauge-invariant; one must either flip an additional gauge degree of freedom or add in an odd number of fermions in order to recover a physical excitation. The Liouvillian gap for these excitations must be computed numerically since, as opposed to Eq. <ref>, the Hermitian and anti-Hermitian part of iℒ do not commute anymore. However, we can readily see analytically that this gap vanishes in the limit of strong dissipation, γ→∞. In this limit, we ignore the Hermitian terms in Eq. <ref> and we can obtain steady-state solutions by simply placing fermions wherever the imaginary chemical potential is negative (this automatically satisfies the gauge constraint, as we place as many fermions as we flip v̂_i's).
For general γ, the gap of a combined interlayer gauge and fermion excitation (i.e., flipping a single v̂_k and introducing a single fermion to the vacuum) is plotted in Fig. <ref>. For this and all subsequent plots, the parameters used were J_x = J_y ≡ J = 1, and N=1600. The gap depends on the background W_j flux configuration - we present results for zero flux, W_j = +1, π-flux, W_j = -1, and a random flux configuration. Note that there are two distinct contributions to the Liouvillian gap in Eq. <ref>. The first is the overall shift of 2iγ, and the second comes from the dissipative strength of the fermion excitation with the smallest imaginary energy. For small γ, the imaginary energy of this fermion excitation is positive - in other words, adding in the single fermion excitation to the vacuum is energetically unfavorable and causes the eigenstate to decay more rapidly, but one is nevertheless forced to include it by the constraint of gauge invariance. This fermion excitation energy eventually transitions from positive to negative, asymptotically approaching -2iγ.
Depending on the background flux configuration, the fermion spectrum may exhibit an anti-𝒫𝒯-symmetry breaking transition at a critical value of γ, which causes a sharp kink in the gap. In this situation, the eigenvalues with the smallest imaginary part for small γ come in pairs, with the real parts opposite in sign. The anti-𝒫𝒯-symmetry breaking transition happens when the two meet on the imaginary axis and split off. We see that in Fig. <ref>, this happens for both the uniform flux as well as the particular random flux configuration plotted, but not for the π-flux scenario. A survey of generic random flux configurations suggest that this transition is common but not necessarily guaranteed.
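Numerically, the gap discussed above can be estimated by flipping the sign of the imaginary potential on the excited site and combining the overall 2γ shift with the least-dissipative single-fermion mode; the sketch below is one way to do this (uniform gauge fields, illustrative parameters, and function names of our own choosing), not the computation used for Fig. <ref>.

import numpy as np

def interlayer_gap(Lsize, J, gamma, k=0):
    # Hopping matrix on an Lsize x Lsize torus with uniform gauge fields w = +1;
    # imaginary potential -2i*gamma everywhere except +2i*gamma at the excited site k.
    N = Lsize * Lsize
    idx = lambda x, y: (x % Lsize) * Lsize + (y % Lsize)
    M = np.zeros((N, N), dtype=complex)
    for x in range(Lsize):
        for y in range(Lsize):
            j = idx(x, y)
            for nb in (idx(x + 1, y), idx(x, y + 1)):
                M[j, nb] += J
                M[nb, j] += J
            M[j, j] = -2j * gamma
    M[k, k] = +2j * gamma
    eps = np.linalg.eigvals(M)
    # overall 2*gamma shift plus the fermion mode with the smallest decay rate
    return 2 * gamma - np.max(eps.imag)

print(interlayer_gap(Lsize=20, J=1.0, gamma=0.3))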
What is the physical interpretation of these interlayer gauge excitations? As was the case in the steady-state gauge sector, we can proceed with a symmetry analysis of the operators in this sector. In terms of gauge-invariant fluxes, the flip of a single v̂_j away from its steady-state checkerboard configuration changes the fluxes of the four neighboring U_j,α operators to be -1. Hence, operators that have a non-zero overlap with this excitation must have identical eigenvalues under these flux operators. Recall that in the steady-state sector, the operators that satisfied the flux constraint consisted of
pairs of d_j fermion excitations connected by a string of gauge fields ŵ_j,α. An interlayer gauge excitation at site k “pins” a d_k' fermion excitation to site k, and the allowed operators are gauge-invariant string-like operators that involve a d_k' fermion at site k. Therefore, the Liouvillian gap in Fig. <ref> determines the equilibration timescale of operators given by a single d' fermion coupled to a d fermion by a ℤ_2 Wilson line.
The above argument applies to all operators in this gauge sector, regardless of their energy. In the limit γ→∞, we can also analytically understand the nature of the lowest-energy (i.e. the longest lived) operator in this sector. For an interlayer gauge excitation at site k, the steady-state solution obeys f_k^† f_k ‖ψ = 1 and f_j^† f_j ‖ψ = 0 elsewhere. By leveraging this constraint using analogous manipulations as in Eq. <ref>, we find that this is only satisfied by the operator Γ^5_k, which can be interpreted as the bound state of a d and d' fermion localized on a single site [cf. Eq. <ref>]. Hence, in the limit γ→∞, we recover steady-state excitations with definite Γ^5 eigenvalue. This is a consequence of the quantum Zeno effect; if we interpret the jump operators L_j = Γ^5_j as the environment performing measurements of Γ^5 with frequency specified by γ, our state can become frozen in a Γ^5 eigenstate for large γ.
The interpretation of the lowest-energy excitation as a Γ^5_k operator also holds approximately away from the γ→∞ limit, which is a consequence of the localization of the corresponding single-particle eigenvector of Eq. <ref> around site k. As shown in Fig. <ref>, the fermion with smallest imaginary eigenvalue is highly localized around site k even for small values of γ; hence, the operator whose equilibration time is determined by Fig. <ref> retains a large overlap with Γ^5_k. We leave a detailed analysis of the extent of eigenvector localization for future work, although we mention related work <cit.> of a similar single-particle system but with a fully disordered imaginary chemical potential, rather than our case of a chemical potential that is everywhere positive expect for a single site. For their model, numerical simulations were consistent with a localization transition for arbitrarily weak disorder strength.
The above analysis has been for a single interlayer gauge field excitation. It is natural to consider multiple gauge excitations, which correspond to symmetry sectors with multiple v̂_j gauge fields flipped away from their steady-state configuration. A physically relevant quantity to consider is the Liouvillian gap associated with the f vacuum in the sector with a pair of interlayer gauge field excitations at sites k and ℓ. This determines the equilibration timescale of an operator given by a pair of d' fermions at sites k and ℓ. This state is an exact eigenstate of the Lindbladian with imaginary energy 4γ - note that for sufficiently large γ, this energy may be reduced further by including pairs of f fermions, with a quantum Zeno effect yielding a steady-state solution at γ→∞ by adding a pair of fermions at sites k and ℓ.
§.§.§ Intralayer gauge excitations
The final types of excitation we will study are intralayer gauge excitations, when we flip a gauge field on one of the two layers such that ŵ_k, α, L = - ŵ_k, α, R for some bond (k,α). Operators associated with these excitations - i.e., operators consistent with this flux configuration - are single-site operators Γ_j^μ, μ = 1, 2, 3, 4, on the two sites adjacent to the bond (k, α). A more precise identification of these operators, including the flux configurations corresponding to operators Γ^μ 5_j and Γ^μν_j, are given in Appendix <ref>.
In this gauge sector, the Lindbladian no longer has a simple expression in terms of complex fermions f_j^†, as the intralayer gauge excitation induces pairing terms into the Lindbladian - explicitly,
2 ( f_j^† f_{j+x̂}^† + f_{j+x̂} f_j ) = - i (-1)^j ( d_{j, L} d_{j+x̂, L} + d_{j, R} d_{j+x̂, R} ) .
The single-particle Lindbladian is quadratic and can thus still be easily diagonalized; we provide more details of this procedure in Appendix <ref>. However, the determination of whether the resulting ground state is physical - i.e, whether it has the odd fermion parity to not be annihilated by the projection to the physical subspace - is non-trivial due to the non-Hermiticity of the Lindbladian. We leave a full analysis of this problem as an open question and plot both the ground state energy and the energy of the first excited state in Fig. <ref>. The ground state energy gives a lower bound on the physical Liouvillian gap. However, one must be careful at large γ, since the γ→∞ limit gives a fictitious quantum Zeno effect. In this limit, the ground state approaches the f_j vacuum state, which is a steady-state solution but unphysical as its fermion parity is even. As a consequence, we also plot the first excited state, which gives a more physical lower bound for large γ.
We comment on a surprising aspect of this Liouvillian gap, which is a sudden increase when an arbitrarily small γ is turned on, with a subsequent plateau at a gap of magnitude J. For finite N, the gap smoothly evolves as a function of γ, but the slope at small γ is proportional to N, as shown in the inset of Fig. <ref>. This indicates that in the thermodynamic limit, an infinitesimally small γ causes a discontinuous jump in the Liouvillian gap to J. A possible physical explanation of this fact is that, in contrast to the fractionalized operators considered earlier which have a correspondence with coherent excitations of the closed system, the operators Γ_j^1, 2, 3, 4 have no such association, and hence destructive interference generated by the unitary dynamics of the closed system also contributes to the decay of the expectation values of these observables. Intuition on this phenomenon can also be gained from the fermion representation - by examining the single-particle eigenstates of the Lindbladian at γ = 0 expressed in the complex fermion representation, one can see that the act of exchanging a single hopping term with a pairing term causes strong hybridization between the delocalized particle-like and hole-like excitations, which in turn leads to an extensive γ shift in the Liouvillian gap when dissipation is turned on. This phenomenon of the decay rate approaching a non-zero value as γ→ 0 in the thermodynamic limit has been found in the Lindbladian dynamics of Sachdev-Ye-Kitaev models <cit.>.
This observation demonstrates a striking feature of our model in the small-γ limit. In this regime, the expectation values of string-like operators such as V_{j,α}, as well as Γ_j^5, have a γ^{-1} upper bound on their equilibration timescale, in contrast to local single-site operators such as Γ^{1, 2, 3, 4}_j, whose timescales are bounded by J^{-1}.
§ PERTURBATIONS AWAY FROM EXACT SOLVABILITY
As the exact solvability of our Lindbladian requires a precise set of couplings, it is natural to consider perturbations away from this exactly solvable point. Here, we discuss different types of perturbations and their physical effects. Our Lindbladian possesses an extensive number of strong symmetries W_j and weak symmetries V_j,α. The combination of the two gives us our exact solvability, and perturbations are conveniently classified in terms of their breaking of these symmetries.
The simplest perturbations retain both the strong and weak symmetries of our system. These terms are rather artificial - the most local terms consist of either explicitly adding in the flux terms W_j to the Hamiltonian, or adding a two-site jump operator L_j,α = V_j,α. Both these choices preserve the steady-state solutions as well as the structure of the quasiparticle excitations; however, details of the Liouvillian gaps will be modified.
Perturbations that break the weak symmetries but preserve the strong symmetries of our model include the J_x', J_y', and J_5 terms in the full Hamiltonian of Eq. <ref>. In this case, our quantum jump operators still commute with the fluxes W_j, and an initial state in a definite flux sector will remain in that sector for arbitrary time. However, while we can still make statements about the steady-state solutions of the Lindbladian, the full spectrum and consequently the Liouvillian gap is no longer analytically tractable in an exact way. For future work, it would be interesting to study whether coherent quasiparticle excitations still remain in this spectrum at low energies. Recall that in the exactly solvable limit, the existence of distinct types of quasiparticle excitations led to the interpretation of distinct Liouvillian gaps which give equilibration timescales for different observables - the manner in which this picture is modified away from the exactly solvable point is an important open question.
We may also consider perturbations that break the strong symmetries but conserve the weak ones. This is accomplished by a generic choice of quantum jump operator, such as Γ^1, 2, 3, 4_j. In this scenario, we expect our system to asymptote to a unique steady-state, ρ∝𝕀. The weak symmetries cause the Lindbladian spectrum to decompose into an extensive number of symmetry sectors, with the steady-state solution residing in a particular sector. This means that one still retains the ability to discuss Liouvillian gaps with respect to the steady-state sector versus gaps of different sectors, and a careful analysis of the sectors would allow one to identify the operators that live in these sectors. In passing, we note that particular choices of quantum jump operators such as Γ^μν_j with μ, ν∈{1, 2, 3, 4} will break the local strong symmetries W_j but preserve a global strong symmetry Q ≡∏_j Γ^5_j (this is not a “new” symmetry, as it can be re-expressed as a product of W_j operators). As such, in this case we expect a pair of steady-state solutions ρ_±∝𝕀± Q.
Finally, a fully generic choice of perturbation that breaks all symmetries will give a single steady-state solution. We again stress an important open question of to what extent quasiparticle excitations of the Lindbladian are robust to these types of perturbations. With regards to the extensive number of steady-state solutions in the exactly solvable limit, one will expect that a small generic perturbation away from this point will cause all but one of these steady-states to persist for a long timescale given by the inverse strength of the perturbation. Developing an analogous theory for the excitations is a promising research direction, as it emphasizes a physical interpretation of the Lindbladian spectrum that is already familiar in the study of closed systems.
§ SUMMARY AND DISCUSSION
In this work, we analyze the Lindbladian dynamics of a quantum spin-3/2 system which admits an exact solution in terms of Majorana fermions coupled to static ℤ_2 gauge fields. This allows us to characterize the steady-state solutions as well as identify distinct classes of Liouvillian gaps, with different gaps determining the equilibration timescale of different classes of observables, as summarized in Fig. <ref>. Crucially, these timescales fall into different categories with distinct parametric dependencies on γ. While closed loops of V_{j,α} in Eq. <ref>—i.e., the fluxes, Fig. <ref>(a), and on a torus also the non-local Wilson loops, see part (b)—do not decay at all in the exactly solvable limit, pairs of emergent Majorana fermions, Fig. <ref>(c-e), decay with rates that scale linearly with small γ; depending on whether they exhibit a quantum Zeno effect, these rates decay to zero in the large-γ limit. Finally, operators of the last category, like Γ^1,2,3,4, see Fig. <ref>(f), which are not conserved by the Hermitian dynamics, exhibit a decay rate that is singular for small γ in the thermodynamic limit; naturally, the entire dynamics is unitary at γ = 0, however, sending γ→ 0^+ after taking the thermodynamic limit N →∞, the decay rates of these operators are of the order of the exchange couplings J of the Hamiltonian (<ref>). This leads to particularly non-trivial three-step fractionalized thermalization dynamics, see Fig. <ref>(g), in the thermodynamic limit: first, at times of the order of the inverse exchange couplings 1/J, all operators of the third kind decay, which is parametrically separated from the time-scale ∝ 1/γ where (gauge invariant pairs of) the Majorana fermions d and d' decay. Then only closed loops of V_{j,α} survive, which cannot decay unless perturbations beyond our solvable limit (cf. Sec. <ref>) are included.
One promising direction for future research is the construction of additional exactly solvable Lindbladians through this fermionization technique. For closed systems, there exists a rich literature on generalizations of the Kitaev honeycomb model to other exactly solvable models <cit.>; in these cases, the exact solvability is often geometric in nature (i.e. arising from a particular choice of lattice connectivity and hopping structure) and is unaffected if a subset of couplings become non-Hermitian. One interesting phenomenon that may arise in a certain parameter regime of these models is gapless fermionic excitations, in contrast to our model where fermion excitations have a constant gap 4 γ. This would imply algebraic, rather than exponential, decay of the expectation values of certain classes of operators <cit.>. Lindbladians with gapless excitations are not new <cit.> - the intriguing new feature of this would be the ability to cleanly separate this spectrum of gapless excitations from gapped gauge excitations, implying distinct equilibration timescales of these operators.
Generalized Lindbladian constructions may also prove useful at developing a general relation between the exactly solvable open system and the underlying Hermitian dynamics. In our model, the Hermitian dynamics was given by a QSL with two species of Majorana fermions, with the dispersion of one of the fermions tuned to zero. In this limit, a particular choice of quantum jump operators admit quasiparticle excitations of the Lindbladian which display a close relation with the excitation spectrum of the closed system. It is intriguing to ask whether, in a generic system that is rendered exactly solvable through this technique, a similar relation exists between quasiparticle excitations in the doubled Hilbert space and quasiparticle operators of the physical Hilbert space. A more robust understanding of this relation, including potential violations in certain systems, is another promising direction for future research.
Note added. Just before posting our work, a related paper appeared on arXiv <cit.>, studying exactly solvable BCS-Hubbard Lindbladians. Although the starting point of their analysis involves a distinct microscopic model of complex fermions with pairing terms, a transformation to Majorana fermions yields the same Lindbladian as ours within the π-flux sector. Due to the different microscopic models, our theory also has a non-trivial gauge invariance requirement, with non-trivial consequences. For instance, the Liouvillian gap in the π-flux sector in Fig. <ref> is larger as an additional fermion has to be included.
We thank Pavel Volkov and Hanspeter Büchler for helpful feedback. M.S.S. acknowledges funding by the European Union (ERC-2021-STG, Project 101040651—SuperCorr). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. This work was partially completed at the Center for Computational Quantum Physics in the Flatiron Institute. The Flatiron Institute is a division of the Simons Foundation. H.S acknowledges funding from the U.S. Department of Energy under Grant DE-SC0019030.
§ NON-VANISHING STEADY-STATE EXPECTATION VALUES
In the main text, we claim that any operator that has eigenvalue 1 under the superoperators U_j,α and equal eigenvalues under W_j,R and W_j,L is a product of V_j,α bond operators. One can readily verify that these operators satisfy the required constraints, but a more careful argument is required to show that these are the only operators with such a property. We do so by counting the dimension of the subspace (within the doubled Hilbert space) spanned by these operators. With a square lattice having 2N bonds, there are naively 2^2N orthogonal combination of bond operators; however, this double counts the true number of operators, as the product of all bond operators is 𝕀. So, the subspace is 2^N dimensional. The full dimension of our doubled Hilbert space is 2^4N, and we have 3N independent constraints - for each site j, we have U_j,x̂ = 1, U_j,ŷ = 1, and W_j,R = W̅_j (the constraint on W_j,L is automatically satisfied under these constraints). Each constraint halves the dimension of the allowed subspace, so we find a 2^N dimensional Hilbert space, as desired.
§ DIAGONALIZATION OF THE FREE FERMION LINDBLADIAN
In this appendix, we provide more detail on the diagonalization of the free fermion Lindbladian. For a general choice of gauge sector, we work with the Lindbladian written in terms of Majorana fermions, as in Eq. <ref>. This can be re-expressed in the form
i ℒ = d^T ·A·d - i γ N
where d is a 2N-dimensional vector containing both d_j, L and d_j, R Majorana fermion operators. We follow the procedure described in <cit.> for obtaining the spectrum of this Lindbladian, which we summarize here. As A is an antisymmetric matrix, its spectrum comes in the form {β_1 , -β_1 , β_2 , -β_2 …β_N , -β_N}, where we take β_α≥ 0. One can construct N creation/annihilation operators b_α, b_α' that obey the canonical fermionic anti-commutation relations (with the caveat that b_α' is in general not the Hermitian adjoint of b_α). With this, we can write
i ℒ = - 2 ∑_α=1^N β_α b_α' b_α - (i γ N - ∑_α=1^N β_α)
The term in parenthesis gives the dissipative strength of the state with weakest dissipation within this gauge sector. Note that this Majorana fermion representation obfuscates the constraint of gauge invariance, which is most easily enforced in terms of the complex fermions f^†_j. As such, this representation is only useful in gauge sectors where pairing terms would appear if written in the f_j^† basis, in which case a proper analysis of gauge invariance is equally difficult in either representation.
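As a simple numerical illustration of this recipe, the spectrum {β_α} and the constant offset can be extracted from a given matrix A as sketched below (Python/NumPy; the routine is generic and only assumes the ± pairing of the eigenvalues discussed above, with the caveat that degenerate or purely imaginary pairs may require a more careful selection). The matrix A and the function name are placeholders, not part of the model definition.

import numpy as np

def rapidity_spectrum(A, gamma):
    # A is the 2N x 2N antisymmetric matrix of the quadratic form d^T A d,
    # gamma the dissipation strength appearing in i L = d^T A d - i gamma N
    evals = np.linalg.eigvals(A)
    # eigenvalues come in pairs {beta, -beta}; keep one representative per pair,
    # here the half with the largest real parts
    order = np.argsort(evals.real)[::-1]
    N = A.shape[0] // 2
    betas = evals[order][:N]
    # dissipative offset of the least-damped state in this gauge sector
    offset = 1j * gamma * N - betas.sum()
    return betas, offset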
§ IDENTIFICATION OF SINGLE-SITE OPERATORS WITH FLUX CONFIGURATIONS
In the main text, we emphasize that the spectrum of our Lindbladian decomposes into an extensive number of symmetry sectors, each of which is specified by a gauge flux configuration. A Liouvillian gap for each sector can be defined, and one can identify operators - which we remind the reader should be thought of as states in this doubled Hilbert space - that are contained in these symmetry sectors, which the Liouvillian gap then defines an equilibration timescale for. Here, we catalog the flux configurations associated with the set of single-site operators.
A particular flux configuration is defined by the interlayer fluxes U_jα = V_j, α = V'_j, α, R V'_j, α, L as well as the intralayer fluxes W_j, α, R, W_j, α, L. As our Lindbladian spectrum is invariant under the transformation W_j, α, R↔ W_j, α, L, we will only identify operators based on their eigenvalues under the combined flux W_j, α, R W_j, α, L. The eigenvalues of an operator under these fluxes are simply determined by whether the operators V'_j, α and W_j commute or anti-commute with it. If we take our basis of operators to be products of Γ matrices, every basis operator will either commute or anti-commute with V'_j, α and W_j.
The operator Γ_k^5 commutes with all plaquette operators W_j. It also commutes with all the bond operators V'_j, α aside from the four bonds adjacent to site k. The flux configurations associated with this operator are given precisely by the interlayer gauge excitations studied in Section <ref>.
The operators Γ_k^μ, μ = 1, 2, 3, 4, commute with all the bond operators V'_j, α except for a single one adjacent to site k which anticommutes with it. Additionally, it commutes with all but two W_j operators - these two offending plaquette operators share a bond given by the anticommuting V'_j, α operator. The flux configuration associated with these operators can be obtained starting from a steady-state gauge sector and flipping an intralayer gauge field on this bond and its spectrum is analyzed in Section <ref>.
The operators Γ_k^μ 5 have the same commutation relations with the plaquette operators as Γ_k^μ, but differ with respect to the V'_j, α operators; they now anticommute with the three V'_j, α bond operators connected to site k that are not the bond shared by the flux operators. This flux configuration can be obtained from the intralayer gauge excitation studied in Section <ref> by flipping an additional interlayer gauge field v̂_k.
Finally, we identify the operators Γ_k^μν, with μ, ν = 1, 2, 3, 4 and μ≠ν. For a given site k, there are (4 choose 2) = 6 different operators of this type. These operators will anticommute with two of the four V'_j, α bond operators, and with either the two fluxes W_j that only share a corner at site k or all four fluxes connected to site k. These flux sectors are obtained by flipping two intralayer gauge fields connected to a site k; as expected, there are (4 choose 2) = 6 ways of doing this.
The Liouvillian gap of excitations corresponding to the Γ_k^μ operators are shown in Fig. <ref>. We plot the Liouvillian gap of Γ^μ 5_k and Γ^μν_k operators in Fig. <ref> and verify that similar behavior occurs. This implies that our observation of the rapid equilibration of Γ^μ_k operators holds generically for single-site operators, with the exception of Γ^5_k due to its interpretation as the bound state of two Majorana fermion excitations, or alternatively due to the fact that Γ^5_k are precisely the quantum jump operators describing the coupling to the environment.
|
http://arxiv.org/abs/2307.06185v1 | 20230711154509 | Study on Autonomous Gravity-assists with a Path-following Control | [
"Rodolfo Batista Negri",
"Antônio Fernando Bertachini de Almeida Prado"
] | astro-ph.EP | [
"astro-ph.EP",
"cs.SY",
"eess.SY"
] |
Study on Autonomous Gravity-assists with a Path-following Control
Rodolfo B. NegriPhD Candidate, Division of Graduate Studies, National Institute for Space Research - INPE, São José dos Campos, Brazil.,
and Antônio F. B. de A. PradoPro-Rector of the Graduate School, National Institute for Space Research - INPE, São José dos Campos, Brazil.
Received / Accepted
=========================================================================================================================================================================================================================================================================================
We investigate the autonomous control of gravity-assist hyperbolic trajectories using a path following control law based on sliding mode control theory. This control strategy ensures robustness to bounded disturbances. Monte Carlo simulations in the environments of Titan and Enceladus, considering significant insertion errors on the order of 50 km, demonstrate the effectiveness of the proposed approach. The Enceladus example showcases the applicability of the control strategy for close flybys of asteroids and small moons during scientific observations. It successfully stabilizes the orbital geometry within a short time span, avoiding collisions and enabling a close approach to Enceladus' surface with a separation distance of 10 km. Furthermore, we explore its application in a Jovian tour, considering a more complex N-body problem. Results indicate that the control system, while unable to guarantee a complete tour, plays a crucial role in ensuring precise trajectory control during flybys. In such cases, the vehicle guidance system requires higher precision than what can be achieved with a patched conics model. These findings demonstrate the effectiveness of the proposed control strategy for gravity-assist maneuvers and highlight its potential for various space exploration missions involving close encounters with celestial bodies.
§ INTRODUCTION
Interest in space exploration has moved beyond purely scientific aims and increasingly reached economic domains. This trend is expected to continue growing, which would very likely result in a tremendous number of different space missions designed to explore every part of the solar system. This demand poses a great challenge for the ground operation facilities on Earth. A convenient solution is to increase the autonomous capabilities of spacecraft, eliminating or reducing the need for the ground in the loop. In addition, autonomous operations can bring other advantages, such as reduced operational costs and time.
Recent studies have focused on the automation of different aspects of space missions, but, as far as the authors know, no work has studied an important aspect of many interplanetary missions, which is the gravity-assist. Current gravity-assist operations consist of the ground team accessing the spacecraft state and uploading corrective maneuvers before and after the passage to guarantee the designed flyby trajectory. In many cases, this process does not impose concerning limitations on the mission. However, as pointed out by Reference <cit.>, outer planet tours are greatly affected by this operational profile, and an autonomous operation would bring great benefits, such as: 1) rapid turnaround and post-flyby cleanup; 2) successive and safer low-altitude gravity-assists; 3) more efficient outer planet orbit insertion; 4) an increase in the number and frequency of gravity-assists; 5) less propellant mass required.
In this sense, this work intends to assess an important aspect of this autonomous operation, which is the control law. The control law for such an operation is more likely to be a path-following law than a reference-tracking law. A path-following control is concerned with driving the vehicle to a trajectory geometry, with no time parameterization for the motion along the path. On the other hand, a trajectory tracking control would make the vehicle converge to a specific point of the trajectory at a predefined time. In the gravity-assist context, time is only an important variable in the transfer to the body where the swing-by will be performed. Once the encounter with the body takes place, the important aspect is to preserve the desired hyperbola, with no need to spend fuel avoiding a small delay or advance relative to a hypothetical predefined time.
We apply a robust Keplerian path-following law recently derived in <cit.>, which showed promising results for small-body missions <cit.>. This path-following law relies on sliding mode control theory to guarantee robustness to bounded disturbances, which could be important in multiple-body systems such as the outer planetary systems. Robustness would also be a desirable feature for low-altitude flybys in environments such as those of Enceladus <cit.>, Io <cit.> and Titan <cit.>. Although the drag levels found in these cases are generally low, robustness can allow a safer, lower flyby altitude and a less cautious approach, at least from the trajectory perspective, when facing uncertainties in the environment. We run Monte Carlo simulations for Titan and Enceladus flybys considering a reasonably large encounter error. Finally, we consider a Jovian tour in order to analyze the control in the context in which it is most likely to be applied: outer planetary system tours.
§ DYNAMICS
The gravity-assist concept is very simple. Its goal is to change the trajectory of a spacecraft about a main body through the encounter with an intermediate body that also orbits the main one (e.g. approaching a planet's moon to change the orbit about the planet). In this way, our study can be separated into two parts: one considering the hyperbolic trajectory relative to the body where the gravity-assist takes place, and another concerned with the general context in which the autonomous gravity-assist will be applied, a multiple-body dynamical environment. The first part of this section describes the models applied to the first case, while the remainder is concerned with presenting the alternatives for obtaining and describing a trajectory in an N-body system.
§.§ Hyperbolic Trajectory
The equations of motion of the spacecraft relative to the body where the gravity-assist is performed can be written as:
ṙ⃗̇ = v⃗,
v̇⃗̇ = f⃗ + d⃗ + u⃗ ,
in which r⃗ represents the position relative to the body, and v⃗ the velocity. The function f⃗ represents the known dynamics of the system, unknown bounded disturbances are represented by d⃗, while the control command is written as u⃗.
In order to stress the control law, we assume that the only known dynamics are the point mass gravitational acceleration f⃗ = - μ/r^3r⃗, in which μ is the gravitational parameter of the gravity-assist body. The third-body effects of the main body are assumed as disturbances, taking the following form:
d⃗_3B = - μ_M( R⃗/R^3 - R⃗_GA/R_GA^3),
where μ_M stands for the gravitational parameter of the main body, R⃗ denotes the position of the spacecraft relative to the main body, and R⃗_GA the position of the gravity-assist body relative to the main body.
In a close approach to Titan, the spacecraft experiences drag from Titan's atmosphere. Although this drag is of little impact for hyperbolic trajectories as close as the ones of Cassini (minimum of 880 km altitude) <cit.>, it can be more significant for closer approaches. Drag can also be experienced in a pass through Io and Enceladus' plumes <cit.>, yet of little impact in a hyperbolic trajectory also. Nevertheless, we will consider as a second source of disturbance the drag acceleration:
d⃗_D = - 1/2 m C_D ρ A v v⃗,
for C_D=2.2 representing the drag coefficient, m is the spacecraft's mass, which we will assume as 1,000 kg throughout this work, ρ is the mass density, and A is the cross-sectional area projected in the v⃗ direction, assumed here as A=18.6 m^2.
§.§ Multiple body Dynamics
Since the first works of Tsander proposing gravity-assists <cit.>, and the pioneering of Crocco <cit.> in what can be considered a tour design <cit.>, the gravity-assist concept has largely evolved, with applications in most of the interplanetary missions, and complex tours as the ones designed for the Europa clipper mission in the Jovian system <cit.>. In this section, we will describe the models to obtain and simulate the trajectory of the spacecraft in the condition where an autonomous control for a gravity-assist is most valuable, that is to the application in outer planetary systems tours <cit.>.
§.§.§ Zero-SOI Patched Conics Tour Design
The first multiple body model presented here is the simplest one, the Zero-SOI Patched Conics (0SOI-PC). In this approach, it is assumed that the point where the spacecraft meets the gravity-assist body, in its trajectory about the main body, is exactly the position of the gravity-assist body in the main body reference frame. That is the reason for the “zero-SOI” nomenclature, as the magnitude of the Sphere of Influence (SOI) of the gravity-assist body is assumed to be small enough to hold the approximation. The zero-SOI approximation allows constraining many of the variables of the problem. This is especially useful for optimization routines, as the decision variables are greatly reduced, which is our case.
We assume that the time t_j of each j-th gravity-assist is a decision variable, as well as the periapsis radius r_pj of the hyperbolic trajectory about the gravity-assist body. The position and velocity of the gravity-assist body, respectively R⃗_GA_j(t_j) and V⃗_GA_j(t_j), can reasonably be assumed to be known at the time t_j. Now, it is possible to apply a Lambert solver <cit.> to connect the current gravity-assist at R⃗_GA_j(t_j) with the previous one at R⃗_GA_j-1(t_j-1), by finding the velocity V⃗^+_j-1 with which the spacecraft leaves the (j-1)-th gravity-assist and the incoming velocity V⃗^-_j for the j-th encounter.
With all V⃗^+_j and V⃗^-_j found, the incoming and outgoing desired velocities at the infinity, with respect to the gravity-assist body, v⃗_d∞^- and v⃗_d∞^+ respectively, can be calculated in order that the transfers between the gravity-assist bodies are feasible.
Generally, for the 0SOI-PC, only the t_j are used as decision variables. In this case, and at this point, the procedure would be to find r_pj from v⃗_d∞^- and to connect the incoming and outgoing legs by an impulse at the periapsis <cit.>. A collision would be avoided with a non-linear constraint. However, we had trouble avoiding collisions with this option when using the MATLAB built-in optimization routines. As our aim here is not to obtain an optimal tour design, but only to find a reasonable tour trajectory to analyze the control, we see no problem in simply introducing the r_pj as additional decision variables to avoid collisions.
In our procedure, we assume that the spacecraft will arrive with the desired incoming velocity, v⃗_∞^-=v⃗_d∞^-, and we obtain v_∞ = ||v⃗_∞^- ||. Now, we can calculate the half turning angle:
sinδ = μ/μ + r_pj v_∞^2,
and the angular momentum unit vector of the hyperbole:
ĥ = v⃗_∞^- ×v⃗_d∞^+/||v⃗_∞^- ×v⃗_d∞^+||,
to find the actual outgoing velocity at the infinity:
v⃗_∞^+ = v_∞[ cos(2 δ) v⃗_∞^-/v_∞ + sin(2 δ) ĥ×v⃗_∞^-/||ĥ×v⃗_∞^-||].
Finally, an impulse can be imparted to the spacecraft just after leaving the gravity-assist body in order to connect the whole multiple bodies trajectory:
Δv⃗ = v⃗_d∞^+ - v⃗_∞^+.
Therefore, in summary, an optimization routine can minimize the sum of all the impulses, || ∑_jΔv⃗ ||, for the decision variables t_j and r_pj in a given sequence k=1,2,...,j,...,I of gravity-assists.
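A minimal sketch of this velocity rotation and cleanup impulse is given below (Python/NumPy; the function name and the assumption of consistent units, e.g. km and km/s, are ours and not part of the original algorithm description).

import numpy as np

def zero_soi_impulse(v_inf_minus, v_dinf_plus, r_p, mu):
    # half turning angle set by the chosen periapsis radius r_p
    v_inf = np.linalg.norm(v_inf_minus)
    delta = np.arcsin(mu / (mu + r_p * v_inf**2))
    # plane of the hyperbola spanned by the incoming and desired outgoing directions
    h_hat = np.cross(v_inf_minus, v_dinf_plus)
    h_hat /= np.linalg.norm(h_hat)
    t_hat = np.cross(h_hat, v_inf_minus)
    t_hat /= np.linalg.norm(t_hat)
    # actual outgoing excess velocity after the swing-by
    v_inf_plus = v_inf * (np.cos(2 * delta) * v_inf_minus / v_inf
                          + np.sin(2 * delta) * t_hat)
    # impulse connecting the actual and the desired outgoing legs
    return v_dinf_plus - v_inf_plus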
§.§.§ Patched Conics Tour Design
The Zero-SOI assumption works well for interplanetary trajectories, but for planetary systems it shows some limitations <cit.>. In this case, not only a large mass parameter, as in the case of Earth-Moon <cit.>, affects the accuracy of the 0SOI-PC, but also the assumption of an instantaneous maneuver is not reasonable <cit.>. For instance, consider a circular orbit for Io, the small volcanic moon moves on its orbit at an angular rate of 8.48^∘/h. Therefore, in the tens of minutes inside Io's SOI, the moon moves a considerable amount, directly affecting the calculated swing-by through 0SOI-PC <cit.>. For this reason, in this section we not only take into account the magnitude of the SOI in the transfer between the bodies, but also the predicted time inside the SOI, so that the movement of the gravity-assist body in its orbit is taken into account.
In order to accomplish the proposed goal, we have to invert the problem when compared to the 0SOI-PC. Here, the gravity-assist at each body is first completely defined; the transfer legs between the bodies are calculated and connected afterwards. Thus, given the encounter time t_j, here assumed to be the time at which the spacecraft is at the periapsis of the hyperbolic trajectory of the j-th gravity-assist, and r_pj, we also add as decision variables: the hyperbolic eccentricity e of the j-th gravity-assist, the argument of periapsis ω, the inclination i, and Ω, the longitude of the ascending node (LOAN).
This way, one can find the eccentricity unit vector:
ê = [ cosΩcosω - sinΩsinωcos i; sinΩcosω + cosΩsinωcos i; sinωsin i ],
and the angular momentum unit vector:
ĥ = [ sin i sinΩ; - sin i cosΩ; cos i ].
As well as a third unit vector defining the orthogonal system: ê_⊥ = ĥ×ê.
The true anomaly at entering and leaving the SOI is found from the conic equation:
cosν = a(1-e^2)-r_SOI/e r_SOI,
where a is the semi-major axis and found from r_pj and e, and r_SOI is defined as the Laplace sphere of influence:
r_SOI = R_GA( μ/μ_M)^2/5.
Now, it is possible to obtain the position that spacecraft enters and leaves the SOI, respectively as:
r⃗_SOI^- =r_SOI [ cos (-ν) ê + sin (-ν) ê_⊥ ],
r⃗_SOI^+ =r_SOI [ cosνê + sinνê_⊥ ].
The respective velocities at the SOI can be found, after calculating the flight-path angle cosγ = (r_p v_p)/(r_SOI v_SOI), as:
v⃗_SOI^-=v_SOI[ cosγ( ĥ×r⃗_SOI^-/r_SOI) + sinγr⃗_SOI^-/r_SOI],
v⃗_SOI^+=v_SOI[ cosγ( ĥ×r⃗_SOI^+/r_SOI) + sinγr⃗_SOI^+/r_SOI].
Half of the time spent inside the SOI can be easily found after solving the Kepler equation:
ℳ = e sinh E - E,
for tanh( E/2) = √(e-1/e+1)tan( ν/2), as:
t_GA = ℳ √( - a^3/μ).
With the time t_GA for each leg of the hyperbola, one can find the state of the spacecraft relative to the main body before and after the gravity-assist:
R⃗_j^- = R⃗_GA_j^-(t_j-t_GA) + r⃗_SOI^-,
V⃗_j^- = V⃗_GA_j^-(t_j-t_GA) + v⃗_SOI^-,
R⃗_j^+ = R⃗_GA_j^+(t_j+t_GA) + r⃗_SOI^+,
V⃗_j^+ = V⃗_GA_j^+(t_j+t_GA) + v⃗_SOI^+.
Finally, a Lambert problem is solved to obtain the transfer between all the bodies, by finding the desired incoming velocity V⃗_dj^- to the j-th gravity-assist and the outgoing desired velocity from the last flyby, V⃗_dj-1^+. This way, an impulse is imparted to the spacecraft just after and before each swing-by in order to make the transfer and to guarantee the chosen conditions for the gravity-assist:
ΔV⃗_j^- = V⃗_dj^- - V⃗_j^-,
ΔV⃗_j^+ = V⃗_dj^+ - V⃗_j^+.
Therefore, given t_j, r_pj, e_j, i_j, Ω_j, and ω_j, as decision variables, the summation of all impulses, ∑_j ( ||ΔV⃗_j^-||+||ΔV⃗_j^+|| ), can be minimized to find a tour.
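The geometry of a single gravity-assist in this model can be sketched as follows (Python/NumPy; angles in radians, a hyperbolic orbit with e > 1 and a < 0 is assumed, and the function name is merely illustrative). The routine returns the SOI entry and exit states and the half time t_GA spent inside the SOI, following the equations above.

import numpy as np

def hyperbola_soi_states(r_p, e, inc, raan, argp, mu, r_soi):
    a = r_p / (1.0 - e)                                   # semi-major axis, a < 0 for e > 1
    # eccentricity and angular-momentum unit vectors from (raan, argp, inc)
    e_hat = np.array([np.cos(raan)*np.cos(argp) - np.sin(raan)*np.sin(argp)*np.cos(inc),
                      np.sin(raan)*np.cos(argp) + np.cos(raan)*np.sin(argp)*np.cos(inc),
                      np.sin(argp)*np.sin(inc)])
    h_hat = np.array([np.sin(inc)*np.sin(raan), -np.sin(inc)*np.cos(raan), np.cos(inc)])
    e_perp = np.cross(h_hat, e_hat)
    nu = np.arccos((a*(1.0 - e**2) - r_soi) / (e*r_soi))  # true anomaly at the SOI
    r_in = r_soi*(np.cos(-nu)*e_hat + np.sin(-nu)*e_perp)
    r_out = r_soi*(np.cos(nu)*e_hat + np.sin(nu)*e_perp)
    v_p = np.sqrt(mu*(2.0/r_p - 1.0/a))                   # vis-viva at periapsis
    v_soi = np.sqrt(mu*(2.0/r_soi - 1.0/a))
    gamma = np.arccos(r_p*v_p/(r_soi*v_soi))              # flight-path angle at the SOI
    def vel(r_vec):
        r_hat = r_vec/r_soi
        return v_soi*(np.cos(gamma)*np.cross(h_hat, r_hat) + np.sin(gamma)*r_hat)
    # half of the time spent inside the SOI (hyperbolic Kepler equation)
    E = 2.0*np.arctanh(np.sqrt((e - 1.0)/(e + 1.0))*np.tan(nu/2.0))
    M = e*np.sinh(E) - E
    t_ga = M*np.sqrt(-a**3/mu)
    return (r_in, vel(r_in)), (r_out, vel(r_out)), t_ga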
§.§.§ N-2 Circular Restricted N-Body Problem
Even the most precise patched conics just presented is still an approximation of a highly perturbed N-body environment. In an outer planetary system, the spacecraft is likely to approach many of the massive moons within at least a few SOIs of distance; this renders the trajectory quite chaotic in some cases, leading to results very different from those expected with a patched conics approximation.
In order to simulate this N-body problem, we employ what we call an N-2 circular restricted N-body problem (CRNBP). This is an approximation the authors made to simulate a trajectory in an N-body environment, greatly reducing the number of equations to be integrated. We hope to present a full derivation of these equations soon, in a dedicated paper. For now, we just present the equations of motion and refer the reader to Reference <cit.>, where we present the generalization we made for the bicircular restricted four-body problem (BCR4BP) that enabled us to obtain the CRNBP.
Similarly to the BCR4BP, the CRNBP allows a great reduction in the number of equations to be integrated. This comes at the expense of the slight physical inconsistency of imposing circular orbits on the bodies (the same is already true for the BCR4BP). Nevertheless, as in the BCR4BP, this is still useful for simpler and general analyses, while retaining a great part of the dynamical complexity.
Figure <ref> represents the CRNBP in a synodic frame that is rotating with the primaries of mass M_1 and M_2, respectively, which are assumed to describe a circular orbit. The other bodies, M_j, j=3,...,N-1, are also assumed to have a circular orbit about M_1, coplanar to the one of M_1 and M_2, and totally defined by the angle ψ_j. Therefore, the equations of motion of a small body moving in this frame, in canonical units, are:
ẍ = 2 ẏ + x - μ_1/r_1^3(x+μ_2) - μ_2/r_2^3 (x-μ_1) - ∑_j=3^N-1μ_j [ 1/r_j^3 (x+μ_2-R_j cosψ_j ) + ∑_k=1,k≠ j^N-1μ_k/(R_k^2+R_j^2-2 R_k R_j cos(ψ_k-ψ_j) )^3/2 (R_j cosψ_j - R_k cosψ_k ) ],
ÿ = - 2 ẋ + y - μ_1/r_1^3 y - μ_2/r_2^3 y - ∑_j=3^N-1μ_j [ 1/r_j^3 (y-R_j sinψ_j ) + ∑_k=1,k≠ j^N-1μ_k/(R_k^2+R_j^2-2 R_k R_j cos(ψ_k-ψ_j) )^3/2 (R_j sinψ_j - R_k sinψ_k ) ],
z̈ = - μ_1/r_1^3 z - μ_2/r_2^3 z - ∑_j=3^N-1μ_j/r_j^3 z.
The mass parameter of each body is defined as μ_j=M_j/(M_1+M_2), and R_j is the distance of each body to M_1. The ψ_j can be solved in time by the analytical expression:
ψ_j = ψ_0j+ (n_j-n_12) t,
where ψ_0j represents the initial angle, n_j is the mean motion of the j-th in canonical units and n_12 is the mean motion of M_1 and M_2 about their center of mass, which is n_12=1 in canonical units.
Note that this description of the problem reduces the 6N first-order differential equations of a N-body problem to simply six differential equations and N-2 analytical expressions.
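A direct transcription of these equations of motion into a form suitable for a standard ODE integrator is sketched below (Python/NumPy; the arrays mu, R, psi0 and n are assumed to hold the primaries in the first two entries and the perturbing bodies afterwards, all in canonical units, and the routine simply mirrors the expressions above without further validation). It can be passed, for instance, to scipy.integrate.solve_ivp.

import numpy as np

def crnbp_rhs(t, s, mu, R, psi0, n):
    # s = [x, y, z, vx, vy, vz]; mu[0], mu[1] are the primaries' mass parameters,
    # mu[2:], R[2:], psi0[2:], n[2:] describe the perturbing bodies (NumPy arrays)
    x, y, z, vx, vy, vz = s
    mu1, mu2 = mu[0], mu[1]
    r1 = np.sqrt((x + mu2)**2 + y**2 + z**2)
    r2 = np.sqrt((x - mu1)**2 + y**2 + z**2)
    ax = 2.0*vy + x - mu1*(x + mu2)/r1**3 - mu2*(x - mu1)/r2**3
    ay = -2.0*vx + y - mu1*y/r1**3 - mu2*y/r2**3
    az = -mu1*z/r1**3 - mu2*z/r2**3
    psi = psi0 + (n - 1.0)*t                    # n_12 = 1 in canonical units
    for j in range(2, len(mu)):                 # perturbing bodies
        xj, yj = R[j]*np.cos(psi[j]), R[j]*np.sin(psi[j])
        rj = np.sqrt((x + mu2 - xj)**2 + (y - yj)**2 + z**2)
        sx, sy = (x + mu2 - xj)/rj**3, (y - yj)/rj**3
        for k in range(2, len(mu)):             # mutual terms of the double sum
            if k == j:
                continue
            d3 = (R[k]**2 + R[j]**2 - 2.0*R[k]*R[j]*np.cos(psi[k] - psi[j]))**1.5
            sx += mu[k]*(xj - R[k]*np.cos(psi[k]))/d3
            sy += mu[k]*(yj - R[k]*np.sin(psi[k]))/d3
        ax -= mu[j]*sx
        ay -= mu[j]*sy
        az -= mu[j]*z/rj**3
    return [vx, vy, vz, ax, ay, az]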
§ ROBUST KEPLERIAN PATH-FOLLOWING CONTROL
In the hyperbolic approach to the gravity-assist body, the most important aspect to be guaranteed is the geometry of the trajectory. The small third-body perturbation, drag, and other disturbance effects that could be present in such a scenario, together with a control to cancel them out, are very unlikely to affect the trajectory in a way that delays or advances the spacecraft by tens of minutes or hours. Moreover, a possible accumulated delay or advance caused by consecutive gravity-assists is most likely to be dealt with by the guidance algorithm (i.e., by recalculating the tour trajectory) rather than by control enforcement. Therefore, for the hyperbolic trajectory, a path-following control is much more suitable than a reference tracking one. In this way, we apply the robust Keplerian path-following control (RKPFC) derived in Reference <cit.>, which showed promising results for small-body missions in terms of fuel savings and operational requirements <cit.>.
The RKPFC is a sliding-mode control, but it uses a different set of sliding surfaces than the ones usually applied. The sliding surface s⃗ is defined in terms of the integrals of motion of the two-body problem. Once the equilibrium condition for the sliding surface is reached, s⃗ = 0, the controlled spacecraft asymptotically converges to the desired Keplerian geometry. The sliding surface is <cit.>:
s⃗ = [ ẽ⃗̃· (λ_R r̂ + θ̂); h̃; ĥ_d · (λ_N r̂ + θ̂) ]=0,
where λ_R>0 and λ_N>0 determine the rate of convergence to the desired Keplerian geometry, as shown in Proposition 1 in Reference <cit.>, ẽ⃗̃ = e⃗-e⃗_d represents the error between, respectively, the current and desired eccentricity vectors, ĥ_d is the desired specific angular momentum unit vector and h̃=h-h_d is the error in the magnitude of the specific angular momentum. The unit vectors r̂ and θ̂ are the unit vectors of the radial-transverse-normal (RTN) coordinates, r̂=r⃗/||r⃗|| and θ̂=ĥ×r̂.
The desired unit vector ĥ_d, defining the orbital plane, can be obtained from Eq. (<ref>) by choosing a desired inclination i_d and LOAN Ω_d. The current and desired angular momentum are obtained as h=|| r⃗×v⃗ || and h_d=√(μ a_d (1-e_d^2)), respectively. Finally, the current and desired eccentricity vectors are respectively: e⃗=1/μ (v⃗×h⃗ - μr̂) and e⃗_d=e_d ê_d, with ê_d found from Eq. (<ref>) for Ω_d, ω_d and i_d.
Using the sliding surface in Eq. (<ref>), robustness to bounded disturbances and asymptotic convergence to the geometry of a Keplerian orbit can be obtained using the control:
u⃗ = - [RTN]^-1 F^-1 ( G + K sgn(s⃗)) - f⃗,
K ∈ℝ^3× 3 is a diagonal positive definite matrix, the function sgn(s⃗) ∈ℝ^3 × 1 represents the sign function taken in each component of s⃗, the matrices F and G are defined by:
F = 1/hμ[ -h^2 [2λ_R h-(v⃗·r̂)r]h -μ r (e⃗_d·ĥ); 0 μ rh 0; 0 0 μ r (ĥ_d·ĥ) ],
G =h/r^2[ ẽ⃗̃·(λ_Rθ̂-r̂) -1; 0; ĥ_d·(λ_Nθ̂-r̂) ],
and the matrix [RTN] is a matrix that transforms from the Cartesian coordinates to RTN, defined as:
[RTN] = [ r̂^𝕋; θ̂^𝕋; ĥ^𝕋 ]
with the superscript 𝕋 representing the transpose.
The control in Eq. (<ref>) relies on the assumption that the magnitudes of h⃗ and r⃗ are not zero and that, defining an angle β such that cosβ = ĥ·ĥ_d, this angle is bounded by β<90^∘. Although this assumption generally causes little to no harm in the orbit-keeping problem <cit.>, it can cause issues for the gravity-assist control. We will deal with this point later.
The diagonal gain matrix K can be chosen to guarantee convergence for unknown bounded disturbances | d_R | < D_R, | d_T | < D_T, and | d_N | < D_N, for d⃗_RTN= [ d_R d_T d_N ]^𝕋=[RTN]d⃗, as <cit.>:
K_1,1 ≥h/μ D_R + |2λ_R h-(v⃗·r̂)r/μ| D_T + r |e⃗_d·ĥ|/h D_N,
K_2,2 ≥ r D_T,
K_3,3 ≥ r ĥ_d·ĥ/h D_N,
in which K_j,j, j=1,2,3, are the diagonal elements of the matrix K.
In order to simplify the analysis and simulations, we can approximate the control in Eq. (<ref>) by an impulse:
ΔV⃗ = Δ t u⃗ ,
where Δ t is the control update time step.
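A sketch of one control evaluation is given below (Python/NumPy; the function and variable names are ours, angles are in radians, and a_d < 0 for a hyperbolic target). It assembles the sliding surface s⃗, the matrices F and G, and returns the acceleration of Eq. (<ref>); the impulse follows by multiplying by the update time Δ t, and the sign function can be replaced by the saturation discussed in the next subsection.

import numpy as np

def rkpfc_accel(r, v, mu, a_d, e_d, i_d, raan_d, argp_d, lamR, lamN, Kgain):
    rmag = np.linalg.norm(r)
    r_hat = r/rmag
    h_vec = np.cross(r, v)
    h = np.linalg.norm(h_vec)
    h_hat = h_vec/h
    th_hat = np.cross(h_hat, r_hat)
    e_vec = (np.cross(v, h_vec) - mu*r_hat)/mu           # current eccentricity vector
    # desired geometry from the target orbital elements
    e_d_hat = np.array([np.cos(raan_d)*np.cos(argp_d) - np.sin(raan_d)*np.sin(argp_d)*np.cos(i_d),
                        np.sin(raan_d)*np.cos(argp_d) + np.cos(raan_d)*np.sin(argp_d)*np.cos(i_d),
                        np.sin(argp_d)*np.sin(i_d)])
    h_d_hat = np.array([np.sin(i_d)*np.sin(raan_d), -np.sin(i_d)*np.cos(raan_d), np.cos(i_d)])
    h_d = np.sqrt(mu*a_d*(1.0 - e_d**2))
    e_til = e_vec - e_d*e_d_hat
    s = np.array([e_til @ (lamR*r_hat + th_hat),
                  h - h_d,
                  h_d_hat @ (lamN*r_hat + th_hat)])      # sliding surface
    F = (1.0/(h*mu))*np.array([[-h**2, (2.0*lamR*h - (v @ r_hat)*rmag)*h, -mu*rmag*(e_d*e_d_hat @ h_hat)],
                               [0.0, mu*rmag*h, 0.0],
                               [0.0, 0.0, mu*rmag*(h_d_hat @ h_hat)]])
    G = (h/rmag**2)*np.array([e_til @ (lamR*th_hat - r_hat) - 1.0,
                              0.0,
                              h_d_hat @ (lamN*th_hat - r_hat)])
    RTN = np.vstack([r_hat, th_hat, h_hat])
    f = -mu*r/rmag**3                                    # known two-body dynamics
    u = -np.linalg.solve(RTN, np.linalg.solve(F, G + Kgain @ np.sign(s))) - f
    return u, s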
§.§ Practical considerations
The discontinuity in the function sgn(s⃗) in the sliding-mode control is known to cause chattering in many applications <cit.>. However, this can be easily circumvented by substituting a continuous function for it, at a small cost in performance. Here, we replace the sign function with a saturation function sat(s⃗,Φ⃗), defined as:
sat(s_j,Φ_j) = 1, s_j>Φ_j
s_j/Φ_j, -Φ_j≤ s_j≤Φ_j
-1, s_j<-Φ_j,
for j=1,2,3 representing each component of s⃗ and Φ⃗.
As discussed in Reference <cit.>, a great advantage of the RKPFC is its ability to easily accommodate periods with the thrusters turned off. If they are turned back on after a long period of idling, there is no concern about which reference to choose or whether it could cause adverse behavior (e.g., a reference so far away that it puts the spacecraft at risk of collision or wastes fuel unnecessarily), as the RKPFC will only recover the Keplerian geometry.
In this work we propose an alternative control switch, inspired by the Schmitt trigger, as follows:
hys(χ⃗) = 1, if any χ_j>χ_j^+
0, if all χ_j<χ_j^-
1, if χ_j^- ≤χ_j ≤χ_j^+ and hys_p(χ⃗)=1
0, if χ_j^- ≤χ_j ≤χ_j^+ and hys_p(χ⃗)=0,
j=1,...,5, in which hys_p is the value of hys in the previous iteration and χ⃗_j is:
χ⃗ = [ |a - a_d|; |e - e_d|; |i - i_d|; |ω - ω_d|; |Ω - Ω_d| ].
The vectors χ⃗^- and χ⃗^+ represent the lower and upper bounds of the hysteresis in each component of χ⃗.
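In code form, this switch can be sketched as below (Python/NumPy; illustrative only), returning whether the thrusters should be active given the current error vector χ⃗ and the previous state of the switch.

import numpy as np

def hys_switch(chi, chi_lo, chi_hi, prev_on):
    # turn on if any error exceeds its upper bound,
    # turn off only when all errors fall below the lower bounds,
    # otherwise keep the previous state (hysteresis)
    if np.any(chi > chi_hi):
        return True
    if np.all(chi < chi_lo):
        return False
    return prev_on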
§.§ B-Plane Linear Quadratic Regulator
As mentioned earlier, the RKPFC relies on the assumption that the angle β defined by cosβ = ĥ·ĥ_d is bounded as β<90^∘. However, for a gravity-assist, this condition is not guaranteed to hold. In fact, violations might occur with some frequency, depending on the accuracy of the guidance algorithm in predicting the spacecraft state upon arrival at the gravity-assist body. To address this issue, instead of relying on a reference tracking strategy, we employ an infinite-horizon linear quadratic regulator (LQR) to regulate the impact parameter vector of the approaching spacecraft.
Here, we use the b-plane, represented in Figure <ref>, which is extensively applied in gravity-assist design. It is defined as the plane perpendicular to the incoming velocity at infinity (v⃗_∞^-) and containing the gravity-assist body center. With this definition, it is possible to define an orthogonal frame centered at the gravity-assist body as:
η̂ = v⃗_∞^-/v_∞,
ξ̂ = V⃗_GA×η̂/||V⃗_GA×η̂||,
ζ̂ = ξ̂×η̂.
The impact parameter is a vector represented in the b-plane that indicates the point where the velocity at infinity pierces the plane. If the spacecraft is far from the gravity-assist body, i.e., roughly outside its SOI, the impact parameter can be easily described on the b-plane as:
b⃗ = J r⃗,
in which the matrix J is:
J = [ ζ̂^𝕋; ξ̂^𝕋 ].
The desired impact parameter can be found using two-body problem relations as:
b⃗_d = J a_d ( √(e_d^2 - 1)/e_dê_d⊥ - e_d^2-1/e_dê_d )
Defining X⃗ = [ b⃗-b⃗_d ḃ⃗̇ ]^𝕋, considering that d/dt ( J ) ≈ 0 and w⃗=u⃗+f⃗, and assuming that the function f⃗ is nearly constant, the system can be described as a linear time invariant system:
Ẋ⃗̇ = A X⃗ + B w⃗,
for the matrices A and B respectively defined as:
A = [ 0_2× 2 I_2 × 2; 0_2× 2 0_2× 2 ] ,
B = [ 0_2× 3; J ] .
Therefore, an infinite horizon LQR controller can be easily obtained as <cit.>:
u⃗ = - K X⃗ - f⃗,
where K is given by:
K = R^-1 B^𝕋 P,
with P being the solution for the Riccati equation:
A^𝕋 P + P A - PBR^-1B^𝕋 P + Q = 0,
considering the cost function:
𝒥 = ∫_0^∞ (X⃗^𝕋Q X⃗ + u⃗^𝕋R u⃗) dt.
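Since A and B are constant once the b-plane frame is fixed, the gain can be computed with an off-the-shelf Riccati solver, as sketched below (Python/SciPy; J is the 2×3 projection matrix of Eq. (<ref>), and the weighting matrices Q and R are left to the user, e.g. the diagonal choices used later in the simulations).

import numpy as np
from scipy.linalg import solve_continuous_are

def bplane_lqr_gain(J, Q, R):
    # state X = [b - b_d, db/dt] (4-dimensional), control w = u + f (3-dimensional)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [np.zeros((2, 2)), np.zeros((2, 2))]])
    B = np.vstack([np.zeros((2, 3)), J])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)      # K = R^{-1} B^T P

# the commanded acceleration then follows as u = -K @ X - f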
§ ANALYSIS AND DISCUSSION
In April 2017, Cassini made its 126th and last flyby of Titan, ultimately leading to its disintegration in Saturn's atmosphere in September 2017 to satisfy planetary protection requirements <cit.>. Among the more than one hundred Titan flybys, the closest approach occurred in June 2010, when the spacecraft had its encounter periapsis at 880 km altitude <cit.>. At this altitude, Titan's atmosphere has little impact on the spacecraft trajectory, and small corrective burns on the order of cm/s would compensate for its effects <cit.>. As the altitude decreases, its effects grow exponentially, rapidly reaching the same order of magnitude as Titan's gravity. It is unlikely that a spacecraft would be brought into such severe conditions with no atmospheric modelling and no preparation to deal with it. However, under an autonomous gravity-assist, a somewhat challenging unexpected environment cannot be ruled out. In fact, for the Cassini mission itself, unexpected density variations were reported in the first Titan gravity-assists, concerning the engineers and leading to a reassessment of the minimum safety altitude, originally set at 950 km at that time <cit.>.
Given this context, we choose Titan as the first simulation scenario for our control. We assume the following desired orbital elements for the gravity-assist: e_d=25, i_d=30^∘, ω_d=90^∘, and Ω_d=90^∘, with a desired semi-major axis a_d=-128.1 km for a periapsis altitude of 500 km. We run simulations with the spacecraft approaching Titan at 3 SOIs from its center, with the position and velocity solved for the corresponding desired orbital elements, integrated until the spacecraft leaves Titan's SOI. Monte Carlo simulations are conducted with 200 samples, considering a normal distribution of error for the initial states. The 3D encounter errors in the last 10 Titan flybys by Cassini were within 3 km <cit.>. However, as an autonomous gravity-assist involves diverse forms of guidance algorithms for transferring the spacecraft between multiple bodies (e.g., an embedded simplified approximation, embedded complex and precise dynamical model, or an uploaded batch of highly precise guidance calculated by ground orbit determination), we consider a higher magnitude of error for the encounter. We assume that the magnitude of the impact parameter, Eq. (<ref>), has a 1-σ dispersion of 50 km, while the velocity has an error distribution of 2% in the magnitude of the desired velocity.
The disturbances considered are the gravitational acceleration of Saturn, Eq. (<ref>), and the drag force of Titan's atmosphere, Eq. (<ref>), applied when the spacecraft is within 1,500 km altitude. The atmospheric density profile is obtained from a rough exponential fit to figure 11 in Reference <cit.>, which is good enough for our purposes, as follows:
ρ(h) = exp[ Θexp( Ξ h ) + Λexp( Π h ) ],
where h is the altitude, and Θ=-19.0254, Λ=17.5748, Ξ=3.0747×10^-7, and Π=-1.2258×10^-6.
Figure <ref> shows the results obtained if no control authority is considered. Note in Figure <ref> that the dispersion of the outgoing hyperbola leg is so large that it is even visually perceptible. The two outlier trajectories easily identified in Figures <ref> and <ref> are the result of special interaction conditions with Titan's atmosphere, reducing the trajectory eccentricity by more than 10, while the other trajectories are within 10. The mean and standard deviation of the outgoing orbital elements are presented in Table <ref>, as Titan unc. 500.
Figure <ref> presents the results obtained with the control command. We assume that, in the incoming leg outside the SOI, the LQR is working to bring the spacecraft at least close to piercing the b-plane at the right position. After that, the RKPFC takes over as the control input, until the spacecraft reaches the SOI on the outgoing leg. In the LQR, Q and R are assumed as the diagonal matrices: diag([ 10^-6 10^-6 10^-2 10^-2 ]) and diag([ 10^5 10^5 10^5 ]). For the RKPFC, the matrix K is continuously calculated using Eqs. <ref>, for D_R=D_T=D_N=50 m/s^2, with λ_R=λ_N=2. It is also assumed that each element of Φ⃗ is 50 times the corresponding element in K, and the boundaries for the control switch are: χ⃗^-= [ 100 0.5 0.1 0.1 0.1 ] and χ⃗^+= [ 1000 1.0 0.5 0.5 0.5 ] (in meters or degrees). A control update time of 20 seconds is also assumed.
One can distinguish three clear phases in Figure <ref>. In the first one, the LQR controls the position of the spacecraft in the b-plane, up to 1-2 h. After that, a large spike in the budget Δ V, Figure <ref>, indicates that the spacecraft entered Titan's SOI and had to change its velocity, as there is no control of the velocity in the η̂ direction in the LQR. A second spike occurs at about 3 h, when the spacecraft is inside Titan's atmosphere trying to compensate for the drag force. While within Titan's atmosphere, small bumps in the semi-major axis and eccentricity are noticeable, Figure <ref>. In order to completely remove these bumps, the gain matrix K would have to be much larger, or Φ⃗ much lower, which would make the control effort much less efficient in other parts of the trajectory. Nevertheless, these bumps have a much lower magnitude than those observed in the uncontrolled case; e.g., the eccentricity stays within 1, much lower than the order of 10 of the uncontrolled simulation. It should also be considered that the drag disturbance reaches levels one order of magnitude larger than the gravitational term. In a real scenario, if such close proximity to Titan were attempted, it is very likely that a good enough drag model would be available, with the control needing to compensate only for small deviations. In this case, a much lower gain matrix and Φ⃗ could be chosen, improving the overall performance. The mean and standard deviation of the outgoing parameters are presented in Table <ref> as Titan con. 500. The mean budget Δ V for this case is 783.7 m/s, with a standard deviation of 106.7 m/s. Assuming a much more reasonable periapsis altitude of 750 km, and for D_R=D_T=D_N=0.01 m/s^2, λ_R=λ_N=1.5 and Φ⃗ as 15 times each corresponding element of K, one can note in Table <ref> that the control retains the same performance presented before (case Titan con. 750), with a mean budget Δ V of 212.7 m/s and a standard deviation of 100.9 m/s.
We now move to the small Saturnian moon Enceladus, keeping the same desired orbital elements as before, except for e=100 and an altitude above Enceladus of 10 km (below the closest Cassini approach of 25 km in October 2008). The disturbances considered are Saturn's gravitational effects and the drag from Enceladus' plumes, with a mass density ρ= 5.5×10^-11 kg/m^3 <cit.>. The RKPFC parameters are adjusted to: D_R=D_T=D_N=10.0 m/s^2, λ_R=λ_N=2 and Φ⃗ equal to 30 times each corresponding element of K, with the switch parameters: χ⃗^-= [ 10 1 0.1 0.1 0.1 ] and χ⃗^+= [ 100 5.0 0.5 0.5 0.5 ] (in meters or degrees).
Figure <ref> shows the simulated trajectories with no control input, resulting in tens of collisions with Enceladus, Figure <ref>. The largest dispersion in the orbital elements was observed for the eccentricity (20) and the inclination (20^∘), Figures <ref> and <ref>. Figure <ref> shows the hyperbolic trajectory stabilized by the proposed control. Even in the drastic condition of having to stabilize the orbit in a short time period, just a few Enceladus radii from the surface (the SOI of Enceladus is 1.93 Enceladus radii), the control proves its efficiency by rapidly stabilizing the orbit. The budget Δ V in this scenario, as can be checked in Table <ref>, Enceladus con. 10, is 306 m/s with a 1-σ dispersion of 219.9 m/s.
One might correctly point out that the Δ V budgets found in the previous simulations are indeed large, probably nullifying the advantages of the gravity-assist. First, the proposed control scheme is advantageous for flybys in general, with no need for a gravity-assist. In the case of an asteroid or small moon flyby for scientific observation, the drastic circumstances of the presented Enceladus hyperbolic encounter indicate that the proposed control allows for a close approach to the target body, rapidly converging to the desired geometry. For instance, consider a hypothetical asteroid flyby. As the spacecraft finds the asteroid in its optical navigation cameras, a hyperbolic trajectory with the desired geometrical features (such as a desired periapsis, and a geometry convenient enough to avoid a great change in the spacecraft's velocity) is instantaneously calculated. A good estimate of the distance and velocity relative to the target asteroid might only be available a few kilometers from the target, with little time for corrections. As the Enceladus example indicates, the proposed control might handle such a scenario. Secondly, as we previously pointed out, much of an autonomous gravity-assist depends on the guidance algorithm calculating a trajectory as close to the real one as possible, avoiding insertion errors as large as the ones considered here. This is beyond the scope of this work, but the next section serves as a preliminary assessment, providing guidelines for future research.
§.§ Jovian Tour
The communication delay with a spacecraft in the Jovian system can span from 30 to 50 minutes, which compromises the ability to safely execute robust and efficient consecutive short-transfer-time flybys <cit.>. So here we make a brief analysis of the control applied to a Jovian system tour. We consider a tour of the Galilean moons, with the spacecraft departing from a position X=-1.2× 10^6 km in the inertial frame, and having to arrive at the final moon of the tour with no other specified condition. A first optimal solution is obtained using the 0SOI-PC model with the MATLAB built-in genetic algorithm. The resulting optimal solution is used as an initial guess for a second optimization using the patched conics. The solution obtained in the PC optimization is taken as the guidance solution available to the spacecraft, and a simulation in the CRNBP is performed to assess the performance.
Figure <ref> presents the results for a Callisto-Ganymede-Europa-Io tour obtained in the PC model as a free fall, with periapsis encounters at the times: 3.1682, 17.8318, 34.0092 and 54.4843 days. The trajectory predicted with the PC is depicted in Figure <ref>, where dashed lines represent transfer orbits with more than one orbital period. The colors of the transfer ellipses are chosen, following the order of the transfers, as: blue, orange, yellow and purple. Figure <ref> shows the obtained optimal trajectory simulated in the CRNBP, with the approaches to each of the bodies shown in Figure <ref>, normalized by the SOI radius of the respective moons. As one can note, the real trajectory largely diverges from the one obtained with the PC, with the encounter with Ganymede occurring outside the SOI, and no encounter with Europa at all.
Figures <ref> and <ref> present the case with the control. The encounter with Ganymede under the desired conditions is guaranteed by the control. However, the encounter with Europa occurs earlier than expected, at 32.1289 days, resulting in the divergence of the real trajectory from the nominal one thereafter. This type of behavior is quite common. The Jovian system is quite chaotic; we found no case in which the flyby control by itself can guarantee a tour with guidance given by a patched conics solution. The Δ V budget is also quite prohibitive; in this tour example it amounts to 37.7 km/s.
These results indicate that, for an autonomous spacecraft to operate in an outer planetary system, it is more likely that its trajectory would be uploaded from time to time by a ground orbit determination team. The control would only compensate for small deviations, with the advantage of allowing consecutive shorter-time transfers between flybys (which is not the case for current missions). If autonomy is also considered for the guidance, the spacecraft should be able to calculate the trajectory with no supervision, probably running optimization routines on a highly demanding and complex model.
§ CONCLUSION
We have proposed a robust path following control strategy for stabilizing flyby trajectories. Our analysis has demonstrated the effectiveness of this control approach even in challenging scenarios, such as approaching within Titan's atmosphere or performing a close-approach to Enceladus with limited response time. Furthermore, we have examined a Jovian tour trajectory calculated using a patched conics model, revealing the sensitivity and chaotic nature of outer planetary systems. Importantly, our results highlight that a flyby control strategy alone cannot guarantee the successful completion of a tour based on a patched conics model. Therefore, autonomous spacecraft operating under these conditions would require a high-fidelity guidance model, which could be periodically uploaded by a ground orbit determination team or implemented as an embedded model. These findings emphasize the additional challenges faced by autonomous operations in outer planetary systems and the need for precise guidance systems in such missions.
§ ACKNOWLEDGMENT
The authors wish to express their appreciation for the support provided by grants # 406841/2016-0 and 301338/2016-7 from the National Council for Scientific and Technological Development (CNPq); grants # 2017/20794-2, 2015/19880-6 and 2016/24561-0 from São Paulo Research Foundation (FAPESP) and the financial support from the Coordination for the Improvement of Higher Education Personnel (CAPES).
|
http://arxiv.org/abs/2307.04519v1 | 20230710123906 | Energy-based model order reduction for linear stochastic Galerkin systems of second order | [
"Roland Pulch"
] | math.NA | [
"math.NA",
"cs.NA",
"65L05, 37H05, 93D30"
] |
Energy-based model order reduction
for linear stochastic Galerkin systems
of second order
Roland Pulch
Institute of Mathematics and Computer Science,
Universität Greifswald,
Walther-Rathenau-Straße 47, 17489 Greifswald, Germany.
Email: [email protected]
Abstract
We consider a second-order linear system of ordinary differential
equations (ODEs) including random variables.
A stochastic Galerkin method yields a larger deterministic linear
system of ODEs.
We apply a model order reduction (MOR) of this high-dimensional
linear dynamical system, where its internal energy represents a
quadratic quantity of interest.
We investigate the properties of this MOR with respect to
stability, passivity, and energy dissipation.
Numerical results are shown for a system modelling a
mass-spring-damper configuration.
§ INTRODUCTION
Mathematical models typically include physical parameters or other
parameters, which are often affected by uncertainties.
A well-known approach is to change the parameters into random variables
to address their variability, see <cit.>.
Consequently, an uncertainty quantification (UQ) can be performed.
We study second-order linear systems of ordinary differential equations
(ODEs), which contain independent random variables.
Each second-order linear system of ODEs together with its
internal energy is equivalent to a first-order port-Hamiltonian (pH)
system, where the Hamiltonian function represents the internal energy,
see <cit.>.
We use a stochastic Galerkin technique, see <cit.>,
which produces a larger deterministic system of second-order linear ODEs.
The stochastic Galerkin projection is structure-preserving.
Hence the stochastic Galerkin system also features an internal energy,
which represents a quadratic output of the linear dynamical system.
Since the stochastic Galerkin system is high-dimensional,
we employ a model order reduction (MOR), see <cit.>,
to diminish the dimensionality.
MOR of linear stochastic Galerkin systems with linear outputs was applied
in <cit.>, for example.
Now we investigate an MOR, where the internal energy is defined as
the quantity of interest (QoI).
In <cit.>, a balanced truncation technique was derived
to reduce a first-order linear system of ODEs with quadratic output.
We apply the balanced truncation to the canonical first-order system,
which is equivalent to the second-order stochastic Galerkin system.
A reduced system of ODEs exhibits a quadratic output,
which approximates the underlying internal energy.
An a posteriori error bound is computable for the quadratic output
in any MOR method, provided that the systems are asymptotically stable.
Moreover, we study the properties of the reduced systems
with respect to dissipation inequalities and passivity.
A concept to measure a loss of passivity is introduced.
Finally, we present results of numerical experiments
using a model of a mass-spring-damper system.
§ PROBLEM DEFINITION
A stochastic modelling is applied to second-order linear dynamical systems,
which include uncertain parameters.
§.§ Second-order linear ODEs including Parameters
We consider second-order linear systems of ODEs in the form
M(μ) p̈ + D(μ) ṗ + K(μ) p = B(μ) u ,
where the symmetric matrices M,D,K ∈^n × n and
the matrix B ∈^n × n_ in depend on parameters
μ∈ℳ⊆^q.
Input signals u : [0,∞) →^n_ in
are supplied to the system.
The state variables p : [0,∞) ×ℳ→^n
depend on time as well as the parameters.
We assume that the matrices M and K are positive definite and
the matrix D is positive definite or semi-definite
for all μ∈ℳ.
It follows that each linear dynamical system (<ref>) is
Lyapunov stable.
A positive definite matrix D is sufficient for the asymptotic stability
of a system (<ref>), see <cit.>.
The linear dynamical system (<ref>) features an internal energy
V(p,ṗ,μ) = 12( ṗ^⊤ M(μ) ṗ + p^⊤ K(μ) p ) ,
which represents the sum of kinetic energy and potential energy.
In <cit.>, it is shown that a second-order linear system
of ODEs, which satisfies the above assumptions on the definiteness of the
matrices, is equivalent to a first-order pH system.
Consequently, the internal energy (<ref>) is identical to
the Hamiltonian function of the pH system.
§.§ Stochastic Modelling and Polynomial Chaos
Expansions
Often the parameters are affected by uncertainties.
In UQ, a typical approach consists in replacing the parameters
by random variables, see <cit.>.
Thus we substitute the parameters in the system (<ref>) by
independent random variables μ : Ω→ℳ,
ω↦ (μ_1(ω),…,μ_q(ω))
on a probability space (Ω,𝒜,𝒫).
We use traditional probability distributions for each parameter
like uniform distribution, beta distribution, Gaussian distribution, etc.
Let a joint probability density function ρ : ℳ→ℝ be given.
A measurable function f : ℳ→ exhibits
the expected value
𝔼 [f] = ∫_Ω f(μ(ω)) d𝒫(ω)
= ∫_ℳ f(μ) ρ(μ) dμ .
The expected value (<ref>) implies an inner product
⟨ f,g ⟩ = 𝔼[fg] for two square-integrable
functions f,g.
We denote the associated Hilbert space by .
Let an orthonormal basis (Φ_i)_i ∈ℕ be given,
which consists of polynomials Φ_i : ℳ→ℝ.
It holds that ⟨Φ_i , Φ_j ⟩ = δ_ij
with the Kronecker-delta.
The number of basis polynomials up to a total degree d is
s = (d+q)!/d!q!.
This number becomes high for larger q even if d is moderate,
say d ≤ 5.
This orthonormal basis allows for expansions in the so-called
polynomial chaos (PC), see <cit.>.
A function f ∈ can be represented as
a PC expansion
f(μ) = ∑_i=1^∞ f_i Φ_i(μ)
f_i = ⟨ f, Φ_i ⟩ .
We apply this expansion to the state variables in (<ref>)
separately for each component p_1,…,p_n and each
time point t ≥ 0.
§.§ Stochastic Galerkin System
Using the expansion (<ref>) for the state variables,
we arrange a finite sum with s terms including a priori unknown
approximations of the coefficients.
Inserting the finite sum into (<ref>) generates a residual.
The Galerkin approach requires that this residual is orthogonal to
the subspace {Φ_1,…,Φ_s} spanned by the basis polynomials.
The orthogonality is defined using the inner product of the
Hilbert space .
The stochastic Galerkin projection yields a deterministic
second-order linear system of ODEs
M̂p̈̂̈ + D̂ṗ̂̇ + K̂p̂ =
B̂ u
with larger matrices M̂,D̂,K̂∈^ns × ns,
and B̂∈^ns × n_ in.
The solution of the system is p̂ : [0,∞) →^ns
with p̂ = (p̂_1^⊤,…,p̂_s^⊤)^⊤,
where p̂_i represents an approximation of the exact
PC coefficients with respect to the ith basis polynomial.
More details on the stochastic Galerkin projection for linear ODEs
can be found in <cit.>, for example.
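In practice, the Galerkin matrices consist of blocks of expected values, e.g. the (i,j)-block of M̂ is 𝔼[Φ_i Φ_j M(μ)] of size n × n, and analogously for D̂, K̂ and B̂. The following generic sketch (Python/NumPy; it assumes a quadrature rule with nodes μ_k and weights w_k that integrate against the density ρ, and the function name is ours) assembles such a matrix; the cited references may of course use other, e.g. analytic, assembly strategies.

import numpy as np

def galerkin_matrix(mat_fun, basis, nodes, weights):
    # blocks (i, j) of size n x n approximate E[Phi_i Phi_j mat_fun(mu)]
    s = len(basis)
    n = mat_fun(nodes[0]).shape[0]
    M_hat = np.zeros((n*s, n*s))
    for mu_k, w_k in zip(nodes, weights):
        Phi = np.array([phi(mu_k) for phi in basis])   # basis values at the node
        M_hat += w_k * np.kron(np.outer(Phi, Phi), mat_fun(mu_k))
    return M_hat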
The stochastic Galerkin projection is structure-preserving.
Thus the matrices M̂,D̂,K̂ are symmetric again and also
inherit the definiteness of the original matrices M,D,K.
The stochastic Galerkin system (<ref>) exhibits the
internal energy
V̂(p̂,ṗ̂̇) = 12( ṗ̂̇^⊤M̂ṗ̂̇ +
p̂^⊤K̂p̂) .
The linear dynamical system (<ref>) without input
(u ≡ 0) satisfies the dissipation property
ddtV̂(p̂,ṗ̂̇)
= - ṗ̂̇^⊤D̂ṗ̂̇≤ 0 ,
since we assume that the matrix D̂ is positive (semi-)definite.
Furthermore, the second-order linear system (<ref>)
has an equivalent linear explicit first-order system
[ v̇̂̇_1; v̇̂̇_2; ] =
[ 0 I_n; -M̂^-1K̂ -M̂^-1D̂; ][ v̂_1; v̂_2; ] +
[ 0; M̂^-1B̂; ]
u
with v̂_1 = p̂, v̂_2 = ṗ̂̇,
and identity matrix I_n ∈^n × n.
The internal energy (<ref>) represents a
quadratic output of (<ref>) due to
V̂ ( v̂_1 , v̂_2 )
= 12[ v̂_1; v̂_2; ]^⊤[ K̂ 0; 0 M̂; ][ v̂_1; v̂_2; ] .
This relation is shortly written as
V̂(v̂) = 1/2v̂^⊤N̂v̂.
§ DISSIPATION INEQUALITY AND PASSIVITY
Let a linear dynamical system be given in the form ẋ = A x + B u
with A ∈^n × n and B ∈^n × n_ in.
The quadratic output V = 1/2 x^⊤ N x with N ∈^n × n
satisfies the dissipation inequality
ddt x^⊤ N x ≤
u^⊤ R u + 2 u^⊤ S x + x^⊤ L x
with two symmetric matrices L ∈^n × n,
R ∈^n_ in× n_ in,
and matrix S ∈^n_ in× n,
if and only if the symmetric matrix
( [ A^⊤ N + N A - L , N B - S^⊤ ; B^⊤ N - S , - R ])
is negative definite or semi-definite, see <cit.>.
We select R=0 and S=B^⊤ N.
Advantageous is a bound (<ref>) with L=0,
because this case implies a dissipation inequality
ddt12 x^⊤ N x ≤ u^⊤ y
including the linear output y = B^⊤ N x,
as in pH systems.
Consequently, the linear dynamical system is passive,
see <cit.>.
Usually, the term u^⊤ y is interpreted as supplied power
and the term 12 x^⊤ N x as internal energy or stored energy.
Thus we insert R=0, L=0, S = B^⊤ N in (<ref>).
It follows that the passivity condition (<ref>)
is satisfied, if and only if the matrix A^⊤ N + N A
is negative definite or semi-definite.
§ MODEL ORDER REDUCTION
We perform an MOR of the stochastic Galerkin system,
where the internal energy represents the QoI.
§.§ Model Order Reduction for Linear Systems with Quadratic Output
The full-order model (FOM) is a general first-order linear system of ODEs
with quadratic output
ẋ = A x + B u
y = x^⊤ N x
including a symmetric matrix N.
Let n be the dimension of this system again.
In <cit.>, a balanced truncation method was introduced
for systems of the form (<ref>).
This technique requires that the system is asymptotically stable.
We outline this method.
The two Lyapunov equations
A P + P A^⊤ + B B^⊤ = 0
A^⊤ Q + Q A + N P N = 0
are solved successively, which yields the controllability Gramian P and
the observability Gramian Q.
Now symmetric decompositions P = Z_P Z_P^⊤ and Q = Z_Q Z_Q^⊤
are applied.
The singular value decomposition (SVD)
Z_P^⊤ Z_Q = U Σ V^⊤
yields orthogonal matrices U,V and a diagonal matrix Σ,
which includes the singular values in descending order.
We choose a reduced dimension r.
Let U=(U_1,U_2), V=(V_1,V_2),
and Σ = diag(Σ_1,Σ_2)
with U_1,V_1 ∈^n × r, and Σ_1 ∈^r × r.
We obtain projection matrices
V = Z_P U_1 Σ_1^-1/2
W = Z_Q V_1 Σ_1^-1/2 .
The reduced-order model (ROM) of dimension r becomes
ẋ̅̇ = A̅x̅ + B̅ u
y̅ = x̅^⊤N̅x̅
with the smaller matrices
A̅ = W^⊤ A V, B̅ = W^⊤ B, N̅ = V^⊤ N V.
The linear dynamical system (<ref>) inherits the asymptotic
stability of the linear dynamical system (<ref>)
in the balanced truncation technique.
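A compact prototype of this balanced truncation for quadratic outputs can be written with standard dense solvers; the sketch below is our illustration (not the code used for the experiments) and assumes an asymptotically stable system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def sym_factor(X, tol=1e-12):
    """Symmetric decomposition X ~ Z Z^T for a symmetric positive semi-definite X."""
    w, U = np.linalg.eigh(X)
    w = np.clip(w, 0.0, None)
    keep = w > tol * max(w.max(), 1e-300)
    return U[:, keep] * np.sqrt(w[keep])

def balanced_truncation_quadratic_output(A, B, N, r):
    P = solve_continuous_lyapunov(A, -B @ B.T)        # A P + P A^T + B B^T = 0
    Q = solve_continuous_lyapunov(A.T, -N @ P @ N)    # A^T Q + Q A + N P N = 0
    Zp, Zq = sym_factor(P), sym_factor(Q)
    U, s, Vt = svd(Zp.T @ Zq, full_matrices=False)    # Hankel-type singular values s
    S1 = np.diag(s[:r] ** -0.5)
    V = Zp @ U[:, :r] @ S1
    W = Zq @ Vt[:r, :].T @ S1
    return W.T @ A @ V, W.T @ B, V.T @ N @ V, s       # reduced A, B, N and singular values
```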
Furthermore, an a posteriori error bound can be computed for
the quadratic output in any MOR method, see <cit.>.
We denote the linear dynamical systems (<ref>) and (<ref>)
by H and H̅, respectively.
The error of the MOR for the quadratic output is measured in
the ℋ_2-norm.
The norm of the system (<ref>) reads as
‖ H ‖_ℋ_2 = √( trace(B^⊤ Q B))
with the observability Gramian satisfying (<ref>).
Likewise, we obtain the ℋ_2-norm of the system (<ref>).
It holds that
‖ y - y̅ ‖_ℒ^∞ ≤ ‖ H - H̅ ‖_ℋ_2 ‖ u ⊗ u ‖_ℒ^2
using the norms of Lebesgue spaces in the time domain.
The error bound can be computed directly by
‖ H - H̅ ‖_ℋ_2 =
√( trace( B^⊤ Q B + B̅^⊤Q̅B̅ - 2 B^⊤ Z B̅ ) ) .
Therein, the matrix Q̅∈^r × r satisfies the Lyapunov
equation (<ref>) associated to the ROM (<ref>).
The matrix Z ∈^n × r solves the Sylvester equation
A^⊤ Z + Z A̅ + N X N̅ = 0 ,
while X ∈^n × r represents the solution of
the Sylvester equation
A X + X A̅^⊤ + B B̅^⊤ = 0 .
Lyapunov equations and Sylvester equations can be solved numerically
either by direct methods or iterative methods.
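Under the same assumptions, the bound can be evaluated directly with SciPy's dense Lyapunov and Sylvester solvers; again this is only a sketch of the computation described above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

def error_bound(A, B, N, Ar, Br, Nr):
    """A posteriori bound sqrt(trace(B^T Q B + Br^T Qr Br - 2 B^T Z Br)) for the quadratic output."""
    P  = solve_continuous_lyapunov(A, -B @ B.T)
    Q  = solve_continuous_lyapunov(A.T, -N @ P @ N)
    Pr = solve_continuous_lyapunov(Ar, -Br @ Br.T)
    Qr = solve_continuous_lyapunov(Ar.T, -Nr @ Pr @ Nr)
    X  = solve_sylvester(A, Ar.T, -B @ Br.T)          # A X + X Ar^T + B Br^T = 0
    Z  = solve_sylvester(A.T, Ar, -N @ X @ Nr)        # A^T Z + Z Ar + N X Nr = 0
    val = np.trace(B.T @ Q @ B) + np.trace(Br.T @ Qr @ Br) - 2.0 * np.trace(B.T @ Z @ Br)
    return np.sqrt(max(val, 0.0))
```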
§.§ Application to Stochastic Galerkin System
The second-order stochastic Galerkin system (<ref>)
and its internal energy (<ref>) is equivalent to
the first-order system (<ref>)
with quadratic output (<ref>).
The dissipation analysis of Section <ref> can be
applied to
(<ref>), (<ref>).
We obtain
Â^⊤N̂ + N̂Â =
[ 0 0; 0 - 2 D̂; ] .
The positive (semi-)definiteness of the matrix D̂ is equivalent
to the negative definiteness of the
matrix (<ref>).
Thus the stochastic Galerkin system features the desired
dissipation inequality (<ref>)
and is therefore passive.
This property of the matrix (<ref>)
is related to the counterpart (<ref>).
We employ the MOR method from Section <ref> to the
high-dimensional system (<ref>)
with quadratic output (<ref>).
The balanced truncation technique preserves the asymptotic stability
of the FOM, i.e.,
each ROM is asymptotically stable again.
However, the balanced truncation technique does not preserve the
passivity with respect to the internal energy,
as demonstrated by a test example in Section <ref>.
Hence the matrix
T̅ := A̅^⊤N̅ + N̅A̅
is not negative (semi-)definite in general.
Let λ_max > 0 be the largest eigenvalue of T̅.
A shift of the spectrum via T̅ - λ_max I_r with
identity matrix I_r ∈^r × r yields a
negative semi-definite matrix.
Choosing R̅=0, S̅ = B̅^⊤N̅,
L̅ = λ_max I_r implies the dissipation inequality,
cf. (<ref>),
d/dt ( x̅^⊤N̅x̅ ) ≤ 2 u^⊤B̅^⊤N̅x̅ + λ_max x̅^⊤x̅
= 2 u^⊤B̅^⊤N̅x̅ + λ_max ‖ x̅ ‖_2^2 .
The desired property would be the case of λ_max≤ 0.
Hence the magnitude of λ_max > 0 measures the loss of
passivity.
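This discrepancy measure is straightforward to evaluate from the reduced matrices; a brief sketch (our illustration) reads:

```python
import numpy as np

def passivity_loss(Ar, Nr):
    """Largest eigenvalue of T = Ar^T Nr + Nr Ar; non-positive values indicate passivity,
    positive values quantify the loss of passivity of the reduced-order model."""
    T = Ar.T @ Nr + Nr @ Ar
    return float(np.linalg.eigvalsh(0.5 * (T + T.T)).max())   # symmetrize against round-off
```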
§ NUMERICAL RESULTS
As test example, we employ a mass-spring-damper system
from <cit.>.
Figure <ref> shows the configuration.
The system contains 4 masses, 6 springs, and 4 dampers, in total
q=14 physical parameters.
A single input u is supplied by an excitation at the lowest spring.
This test example was also used in <cit.>.
The mathematical model consists of n=4 second-order ODEs (<ref>).
The matrices M,K,D are symmetric as well as positive definite for
all positive parameters.
In the stochastic modelling, we replace the parameters by random variables
with independent uniform probability distributions, which vary
10% around their mean values.
Consequently, the PC expansions (<ref>) include the
(multivariate) Legendre polynomials.
We study two cases of total degree: two and three.
Table <ref> demonstrates the properties of the
resulting second-order stochastic Galerkin systems (<ref>).
In particular, the sparsity of the system matrices is specified
by the percentage of non-zero entries.
The stochastic Galerkin systems are asymptotically stable,
since the Galerkin projection preserves the definiteness of matrices.
Now we perform an MOR of the equivalent first-order
system (<ref>)
with quadratic output (<ref>)
using the balanced truncation technique from Section <ref>.
We solve the Lyapunov equations
(<ref>), (<ref>)
and the Sylvester equations (<ref>), (<ref>)
by direct methods of numerical linear algebra.
Figure <ref> (a) depicts the Hankel-type
singular values of the SVD (<ref>),
which rapidly decay to zero.
We compute the ROMs (<ref>) of dimension r=1,…,100.
The error of the MOR is measured in the relative ℋ_2-norm,
i.e., ‖ H - H̅ ‖_ℋ_2 / ‖ H ‖_ℋ_2,
see (<ref>).
The relative errors are shown for r ≤ 50
in Figure <ref> (b).
We observe that a high accuracy is achieved already for relatively
small reduced dimensions.
Furthermore, we examine the dissipation properties of the
ROMs (<ref>), as described in Section <ref>.
All reduced systems lose passivity,
because their matrices (<ref>)
are not negative (semi-)definite.
The maximum eigenvalues of the matrices are illustrated by
Figure <ref>.
The maxima tend to zero for increasing reduced dimension.
Yet the decay becomes slower for larger total polynomial degree.
It follows that the dissipation inequality (<ref>)
is valid for a small eigenvalue λ_max.
Finally, we present a comparison.
We reduce the stochastic Galerkin system (<ref>)
for polynomial degree two by the Arnoldi method,
which is a specific Krylov subspace technique, see <cit.>.
This scheme is a Galerkin-type MOR method, i.e., the
projection matrices satisfy V=W.
However, the asymptotic stability may be lost in this technique.
The Arnoldi method does not include any information about
the definition of a QoI.
We use a single (real) expansion point ω=1 in the complex
frequency domain,
because other real-valued choices ω = 10^k with
k ∈ {-2, -1, 1, 2} cause worse approximations.
Figure <ref> (a) depicts the
relative ℋ_2-error of the internal energy for the ROMs
of dimension r ≤ 60.
Higher reduced dimensions produce larger errors due to
an accumulation of round-off errors in the orthogonalisation,
which is a well-known effect in the Arnoldi algorithm.
If an ROM (<ref>) is unstable, then the error is not computable
and thus omitted.
As expected, the accuracy of the Arnoldi method is not as good
as the accuracy of the balanced truncation.
Again the passivity is lost in all ROMs.
Figure <ref> (a) shows
the maximum eigenvalue of the matrices (<ref>).
We observe that these positive maxima do not decay,
even though small errors are achieved for reduced dimensions
55 ≤ r ≤ 60.
§ CONCLUSIONS
We applied a stochastic Galerkin projection to a second-order linear
system of ODEs including random variables.
The high-dimensional stochastic Galerkin system possesses an internal
energy as a quadratic output.
We performed an MOR of an equivalent first-order system of ODEs,
where the used balanced truncation method is specialised to approximate
a quadratic output.
However, the passivity of the dynamical systems with respect to the
internal energy may be lost in this reduction.
We proposed a concept to quantify the discrepancy of a non-passive
dynamical system to the passive case.
Numerical results of a test example demonstrated that this
discrepancy measure tends to zero for increasing reduced dimensions
in the balanced truncation method.
00
antoulas
A. C. Antoulas,
Approximation of Large-Scale Dynamical Systems (SIAM, Philadelphia, 2005).
beattie-etal
C. Beattie, V. Mehrmann, H. Xu, and H. Zwart,
Linear port-Hamiltonian descriptor systems,
Math. Control Signals Syst. 30, no. 4 (2018).
benner-goyal-duff
P. Benner, P. K. Goyal, and I. Pontes Duff,
Gramians, energy functionals, and balanced truncation for
linear dynamical systems with quadratic outputs,
IEEE Trans. Autom. Control 67, no. 2, 886-893 (2022).
freitas-etal
F. D. Freitas, R. Pulch, and J. Rommes,
Fast and accurate model reduction for spectral methods
in uncertainty quantification,
Int. J. Uncertain. Quantificat. 6, no. 3, 271-286 (2016).
lohmann-eid
B. Lohmann and R. Eid,
Efficient order reduction of parametric and nonlinear models
by superposition of locally reduced models,
in: Methoden und Anwendungen der Regelungstechnik,
edited by G. Roppenecker and B. Lohmann
(Shaker, Aachen, 2009).
inman
D. J. Inman,
Vibration and Control
(John Wiley & Sons Ltd, Chichester, 2006).
pulch-matcom
R. Pulch,
Model order reduction and low-dimensional representations for
random linear dynamical systems,
Math. Comput. Simulat. 144, 1-20 (2018).
pulch-jmi
R. Pulch,
Stability-preserving model order reduction for linear
stochastic Galerkin systems,
J. Math. Ind. 9, no. 10 (2019).
pulch2023
R. Pulch,
Stochastic Galerkin method and port-Hamiltonian form for
linear dynamical systems of second order,
arXiv:2306.11424v1 (2023).
schaft-jeltsema
A. van der Schaft and D. Jeltsema,
Port-Hamiltonian Systems Theory: An Introductory Overview
(New Publishers Inc, 2014).
sullivan:book
T. J. Sullivan,
Introduction to Uncertainty Quantification
(Springer, Cham, 2015).
willems
J. C. Willems,
Dissipative dynamical systems,
Eur. J. Control 13, 134-151 (2007).
|
http://arxiv.org/abs/2307.04010v1 | 20230708164045 | Understanding the Efficacy of U-Net & Vision Transformer for Groundwater Numerical Modelling | [
"Maria Luisa Taccari",
"Oded Ovadia",
"He Wang",
"Adar Kahana",
"Xiaohui Chen",
"Peter K. Jimack"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cs.CE",
"cs.LG"
] |
School of Civil Engineering, University of Leeds, Leeds, UK, Email: [email protected].
Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel.
School of Computing, University of Leeds, Leeds, UK.
Department of Applied Mathematics, Tel-Aviv University, Tel-Aviv, Israel.
School of Civil Engineering, University of Leeds, Leeds, UK.
School of Computing, University of Leeds, Leeds, UK.
This paper presents a comprehensive comparison of various machine learning models, namely U-Net <cit.>, U-Net integrated with Vision Transformers (ViT) <cit.>, and Fourier Neural Operator (FNO) <cit.>, for time-dependent forward modelling in groundwater systems. Through testing on synthetic datasets, it is demonstrated that U-Net and U-Net + ViT models outperform FNO in accuracy and efficiency, especially in sparse data scenarios. These findings underscore the potential of U-Net-based models for groundwater modelling in real-world applications where data scarcity is prevalent.
§ INTRODUCTION
Groundwater numerical models, such as MODFLOW <cit.>, are crucial for water resource management, although they are computationally demanding. To alleviate this, surrogate modelling through data-driven methods offers efficient approximations of these complex numerical techniques.
Neural Operators <cit.>, particularly the Fourier Neural Operator (FNO) <cit.>, have been at the forefront of recent advances, having shown potential to approximate arbitrary continuous functions.
However, the computational demand of FNO is particularly high during training phase while these neural operators require architectural enhancements to deliver promising results in subsurface problems <cit.>. This is evident in the work of Wen et al. <cit.>, where the integration of FNO with U-Net architecture showed improved accuracy, speed, and data efficiency in multiphase flow problems. However, Gupta and Brandstetter's work <cit.>, showing that U-Net outperforms FNOs across various fluid mechanics problems, raises a question about the necessity of neural operators when the vanilla U-Net architecture already exhibits remarkable performance.
Recently, transformers <cit.> have seen considerable success in various fields, including physical systems <cit.>, for which the datasets are typically smaller than in other domains. Only one study explores the use of transformers in groundwater modeling <cit.>; there, the transformer models were outperformed by both GRU and LSTM models when predicting groundwater levels across various stations in France from meteorological and hydrological data.
Finally, the integration of U-Net with Transformers, as exemplified in studies like TransUNet <cit.> and ViTO <cit.>, has demonstrated their utility across a broad range of applications, particularly in the field of medical image segmentation and operator learning for inverse PDE problems. Yet, the applicability of these combinations in addressing time-dependent forward problems, real-world data scenarios, and in situations with sparse data, remain areas yet to be fully explored.
Several studies, such as the one by Brakenhoff et al. <cit.>, primarily focus on individual time series when analysing the impact of various hydrological stressors, including pumping rates, precipitation excess, and river stage variations, on groundwater levels of individual monitoring wells. While this approach provides valuable insights, it does not account for spatial correlations, thereby limiting its use to existing time series or monitoring wells. Similarly, previous comparisons have been predominantly limited to specific models like LSTM, CNNs and NARX in the context of groundwater level forecasting <cit.>, leaving room for broader explorations.
In this paper, we present a comprehensive comparison among models—specifically U-Net, U-Net integrated with Vision Transformers (U-Net+ViT), and Fourier Neural Operator (FNO)—for their efficacy in modeling time-dependent forward and inverse problems in groundwater systems. We test our model extensively on synthetic datasets, simulating conditions from the Overbetuwe region in the Netherlands, including sparse data scenarios. We show that both U-Net and U-Net+ViT are particularly well-suited to these important sparse data scenarios, with the addition of the Transformer providing enhanced predictive capability in many cases.
§ METHODOLOGY
§.§ Example of study site and data
This subsection provides context and rationale for our study via an example case study based upon the polder region of Overbetuwe in the Netherlands (Figure <ref>). This region showcases the characteristic Dutch system of water management where the area is divided into several polders in a mix of agriculture, nature, and urban environments. Alongside its sparse data and heterogeneous soil, these unique characteristics underscore the inherent complexities of water management in similar settings, making this dataset a suitable choice for our research. The subsoil is primarily composed of clay and sandy clay, with soil properties being determined via borehole and cone penetration tests. The study area features numerous observation wells for monitoring groundwater heads while well fields (indicated as groundwater usage facilities in the figure) are utilized for the extraction of drinking water. The work of Brakenhoff et al. <cit.> considers a dataset consisting of 250 head time series, with daily recordings starting from the year 1990 and drawdown attributed to the extraction from up to four well fields.
For the purposes of this study, we employ synthetic data to validate the proposed methodology, with the intention to subsequently apply the validated method to the real-world data of the Overbetuwe region. Figure <ref> represents a sample of the high-fidelity labeled dataset, which is constructed using the U.S. Geological Survey (USGS) finite-difference flow model, MODFLOW. The model is composed of a single-layer representation of a confined aquifer with a 128×128 grid.
The aquifer's heterogeneity is reflected through varying horizontal hydraulic conductivity within the bounds k ∈[0.1, 0.5] m/d. The hydraulic conductivity fields in our study are created using random fields which are then thresholded to delineate different classes.
A maximum of ten pumping wells are extracting water with variable rates in the range Q ∈[0, 30] m^3/d over a simulation period of T = 10 days. The pumping wells are located in random locations which vary for each sample. The boundary conditions are delineated as Dirichlet, with the head equal to zero, mimicking a polder encircled by ditches where a stable water level is maintained through a comprehensive network of pumping stations.
The datasets consist of N_train = 5000 training instances and N_test = 1000 testing instances. To mirror the inherent sparsity of real-world data, a data selection strategy is adopted for the test dataset. The locations of the boreholes for estimating the hydraulic conductivity are chosen following a radial distribution pattern, and a helical pattern is used for the wells monitoring hydraulic head (Figure <ref>).
§.§ Architectures
The architectures of the three models under comparison in this study encompass the U-Net structure, a U-Net with attention mechanism in the bottleneck, and the Fourier neural operator (FNO).
The U-Net architecture is designed with an encoder-decoder structure where the decoder receives the upsampled feature map, which is then concatenated with the corresponding feature map from the encoder through a skip connection. Detailed diagrams of the U-Net encoder and decoder can be found in Figures <ref> and <ref> in Appendix A. The encoder consists of three bottleneck blocks, where each block utilizes three layers of Conv2d, Instance Normalization, and GELU activation to extract spatial features. These blocks increase the number of channels by a factor of 2 and perform downsampling with a stride of 2. The decoder is composed of a series of upsampling blocks, where each block consists of a bilinear upsampling operation (Upsample), followed by a double convolution operation. Each convolution within the decoder is followed by Instance Normalization and a GELU activation function. The bottleneck consists of a single convolutional layer. In the time-dependent scenario, the time series data of the historical pumping rates is processed through two layers of a feed-forward neural network (FNN) prior to being concatenated to the input for the latent space representation (Figure <ref>).
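A schematic PyTorch fragment of one encoder bottleneck block as described above is given below; layer names and exact hyperparameters are our illustrative assumptions rather than the released implementation.

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Three Conv2d + InstanceNorm + GELU layers; the first convolution downsamples by 2
    (c_out is typically 2 * c_in, so the block doubles the channel count)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        layers, c = [], c_in
        for i in range(3):
            stride = 2 if i == 0 else 1            # downsampling with a stride of 2
            layers += [nn.Conv2d(c, c_out, kernel_size=3, stride=stride, padding=1),
                       nn.InstanceNorm2d(c_out),
                       nn.GELU()]
            c = c_out
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)
```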
The second model, here called UNet+ViT, employs the Vision Transformer (ViT) <cit.>, in the latent space representation of the U-Net, as per implementation of TransUNet <cit.> and ViTO <cit.>. The input is tokenized into a sequence of flattened 2D patches, each of size 1×1. Positional information is retained by employing trainable convolutional projection to learn and add specific position embeddings to the patch embeddings. The structure of the Transformer includes L blocks, with each block comprising Multi-Head Attention (MSA) and FNN. This configuration involves the use of 2 blocks, each with 2 Multihead Self-Attentions, and a FNN composed of 128 neurons. For a more detailed visualization of the Vision Transformer, attention block, and multihead attention, please refer to Appendix A, Figure <ref>.
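The bottleneck attention stage can be sketched as follows (a simplified fragment assuming 1×1 patches, so the feature map is flattened into a token sequence; the learned convolutional position embeddings are omitted for brevity):

```python
import torch.nn as nn

class ViTBottleneck(nn.Module):
    """Flatten a (B, C, H, W) feature map into H*W tokens and apply a small Transformer encoder."""
    def __init__(self, channels, n_blocks=2, n_heads=2, ffn_dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=n_heads,
                                           dim_feedforward=ffn_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_blocks)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C): 1x1 patches as tokens
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```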
The Fourier neural operator (FNO) <cit.> model leverages the fast Fourier Transform to parameterize the integral kernel directly in the Fourier space. The implementation of FNO for the 2D Darcy Flow problem as presented in <cit.> is followed in this study. The total number of parameters of FNO is 2.38 million, which is 15 times more than UNet+ViT (151k) and 17 times more than UNet (137k).
§ RESULTS
§.§ Forward problem with sparse observations
This section presents the prediction of the hydraulic head at sparse monitoring wells after a constant 10-day pumping period under two different training conditions. We employ distinct sampling strategies for both input and output data in our methodology. Our training data is sampled from a regular quadratic grid, while for testing we have explored other arrangements, such as radial and helical, to understand their potential impact on the prediction performance.
In the first scenario, training is conducted using sparse data, with a spacing of 20 grid points for the input hydraulic conductivity field and a spacing of 8 for the output hydraulic head. Testing is then carried out on sparse data points, following the radial and helical patterns delineated in subsection <ref>. The resulting root mean square error (RMSE) is found to be 5.2 × 10^-2, 3.5 × 10^-2 and 8.1 × 10^-2 for the vanilla U-Net, the UNet+ViT models and FNO respectively. These results underline the superior performance of the UNet+ViT model in handling sparse data, exhibiting a lower RMSE compared to both the vanilla U-Net and the FNO models.
In contrast, when training is performed using the entire field and testing on the same sparse dataset, the error increases to 3.9 × 10^-1 for FNO, 3.8 × 10^-1 for UNet and 3.6 × 10^-1 for the UNet+ViT model. This outcome is anticipated considering the training set exhibits sparsity in the first scenario, but not in the latter. Additionally, Figure <ref> displays the prediction over the entire domain, resulting in a lower RMSE of 1.0 × 10^-2 for FNO, and 1.7 × 10^-2 and 1.9 × 10^-2 for the vanilla U-Net and UNet+ViT models, respectively. The FNO model, while superior when dealing with full data, exhibits the highest predictive error under sparse data observations. These results highlight the practical advantages of the U-Net and especially the UNet+ViT model in real-world scenarios for which data sparsity is common.
It should be noted that traditional simpler neural networks and other machine learning techniques may not provide adequate solutions for this specific problem. This assertion is backed by a comparison of the results from a fully connected neural network, a linear regression model and a random forest, detailed in Appendix <ref>. Despite the substantial number of trainable parameters, reaching 51.17 million, inherent to the fully connected neural network and the application of linear regression and random forest, these methods significantly underperform compared to the U-Net, the UNet+ViT models, and FNO.
§.§ Identification of pumping wells
In this section, we focus on an inverse problem: specifically the identification of pumping wells. This task requires determining the locations and rates of pumping wells based on the observed hydraulic heads. Throughout these experiments, we employ a single hydraulic conductivity field, which, while spatially varying, remains identical across all samples within the dataset.
In evaluating the performance of our models, we use both RMSE and accuracy. The RMSE calculates the average difference between the true and the predicted value for each pump location in the test dataset, giving a quantitative measure of the prediction error. Complementing this, the accuracy was determined by counting the proportion of correct pump predictions, where a prediction is considered correct if the predicted and actual pump locations align. This gives a sense of how often the model correctly identifies the location of pumps.
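Both metrics are simple to compute from predicted and true pump-rate maps; a possible implementation is sketched below (array shapes, the per-sample matching rule, and the rate threshold are our assumptions):

```python
import numpy as np

def pump_metrics(pred, true, rate_threshold=1e-3):
    """RMSE over pump rates, plus the fraction of samples whose predicted pump locations
    (cells with a non-negligible rate) exactly match the true locations."""
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    pred_loc, true_loc = pred > rate_threshold, true > rate_threshold
    acc = float(np.mean([np.array_equal(p, t) for p, t in zip(pred_loc, true_loc)]))
    return rmse, acc
```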
The U-Net model performs optimally, achieving an RMSE of 5.6 × 10^-2. Interestingly, the integration of the Vision Transformer with the U-Net model does not confer any additional precision in this scenario, yielding a near RMSE of 6.1 × 10^-2. The FNO model exhibits a higher RMSE of 1.1 × 10^-1, indicating a somewhat lower accuracy in identifying the pumping well locations.
To visually illustrate these results, Figure <ref> presents a test sample using the U-Net + ViT model. It demonstrates an accuracy of 93% in locating the pumps, calculated across the entire test dataset. The figure visualizes the model's ability to accurately identify the positions and the pumping rate of the wells. In comparison, the FNO model achieved a notably lower detection accuracy of 79% in the same task.
§.§ Example results for time series data
This section unveils the results achieved from the analysis of time series data, starting with a simplified scenario, for which the inputs are the varying hydraulic conductivity field and the pumping rate of a single pump which varies over a 10-day simulation period. Results are evaluated in terms of root mean square error (RMSE) with a focus on the comparison of different configurations of the U-Net architecture with transformers.
Figure <ref> presents a comparison of results over 5 time frames for the U-Net with the Vision Transformer under autoregressive testing conditions.
The RMSE for each method was calculated to quantify the models' performance. The U-Net architecture alone yielded an RMSE of 1.79 × 10^-2. When supplemented with a Vision Transformer, consisting of 2 attention blocks and 2 heads, the performance improves, registering an RMSE of 1.67 × 10^-2. However, increasing the complexity of the Vision Transformer to 8 blocks and 8 heads did not further improve the performance, instead, it led to a slight degradation in the RMSE (1.77 × 10^-2). Adding an Axial Transformer <cit.> to the U-Net architecture also did not enhance the performance, yielding an RMSE of 1.83 × 10^-2.
These results suggest that while adding a Vision Transformer to the U-Net architecture leads to performance improvement, increasing the complexity of the latent space does not necessarily do so.
§ CONCLUSION
This paper explores and evaluates the capabilities of different machine learning models, with a particular focus on U-Net, U-Net integrated with Vision Transformers (ViT), and Fourier Neural Operator (FNO), in the context of predicting hydraulic head in groundwater studies.
Our analysis and testing, conducted on synthetic datasets designed to simulate the conditions from the Overbetuwe region in the Netherlands and including scenarios with sparse data, firmly establish that both U-Net and U-Net + ViT models are particularly adept at dealing with such tasks. Importantly, these models are also preferred due to their fewer requisite parameters.
Specifically, in the case of sparse observation scenarios, the vanilla U-Net and the U-Net + ViT models outperformed the FNO model. In particular, the performance of the UNet+ViT model was superior when handling sparse data, highlighting the potential of the model in real-world applications, where data scarcity is a common issue. The U-Net model demonstrated optimal performance in identifying pumping wells. Interestingly, the integration of the Vision Transformer with the U-Net model did not confer any additional accuracy in this scenario. As for the analysis of time series data, supplementing the U-Net architecture with a Vision Transformer improved the model performance, recording an RMSE of 1.67 × 10^-2 compared to 1.79 × 10^-2 for the vanilla U-Net. However, increasing the complexity of the Vision Transformer did not further enhance the model performance, indicating that a more complex architecture does not necessarily yield better results.
Future research will involve applying this validated methodology to real-world data, beginning with the Overbetuwe region in the Netherlands. This will offer an opportunity to further validate and refine the model, accounting for the sparsity and uncertainties inherent in real-world data.
§ BROADER IMPACT
The implications of this research span a wide range of potential societal impacts, with a primary focus on improving the efficiency and reliability of groundwater level forecasting. Given that groundwater is a crucial resource for approximately 2.5 billion people worldwide, fulfilling their daily water needs, and a significant source of global irrigation water, the importance of reliable forecasts cannot be overstated. Our work, through enhancing the performance of groundwater numerical models, offers an opportunity to revolutionize the management and distribution of this vital resource. By providing more accurate and data-efficient predictions, we can aid in the formulation of informed and sustainable water management strategies. This is particularly crucial considering the pressing challenges of population growth and climate change.
§ ACKNOWLEDGEMENTS
This work was carried out with support of the Leeds-York-Hull Natural Environment Research Council (NERC) Doctoral Training Partnership (DTP) Panorama under grant NE/S007458/1.
Our sincere appreciation is extended to Professor Karniadakis of Brown University. The financial assistance provided by the Leeds Institute of Fluid Dynamics and Deltares, which made possible the research visit to Brown University, is also gratefully acknowledged. Lastly, we would like to express our gratitude to the reviewers. Their critiques and suggestions have greatly enhanced the overall clarity of our work.
99
brakenhoff Brakenhoff, D. A., Vonk, M. A., Collenteur, R. A., Van Baar, M., & Bakker, M. (2022). Application of Time Series Analysis to Estimate Drawdown From Multiple Well Fields. Frontiers in Earth Science, 10.
modflow Hughes, J. D., Russcher, M. J., Langevin, C. D., Morway, E. D., & McDonald, R. R. (2022). The MODFLOW Application Programming Interface for simulation control and software interoperability. Environmental Modelling & Software, 148.
gupta2022multispatiotemporalscale Gupta, J. K., & Brandstetter, J. (2022). Towards Multi-spatiotemporal-scale Generalized PDE Modeling. arXiv preprint arXiv:2209.15616.
li2020fourier Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., & Anandkumar, A. (2020). Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895.
vito Ovadia, O., Kahana, A., Stinis, P., Turkel, E., & Karniadakis, G. E. (2023). ViTO: Vision Transformer-Operator. arXiv preprint arXiv:2303.08891.
transunet Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., ... & Zhou, Y. (2021). TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv preprint arXiv:2102.04306.
WEN2022104180 Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO—An enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163.
DINO_loket DINO loket. (2023). Retrieved from https://www.dinoloket.nl/en/subsurface-data
li2023transformer Li, Z., Meidani, K., & Farimani, A. B. (2023). Transformer for Partial Differential Equations' Operator Learning. arXiv preprint arXiv:2205.13671.
cao2021choose Cao, S. (2021). Choose a Transformer: Fourier or Galerkin. arXiv preprint arXiv:2105.14995.
dosovitskiy2021image Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929.
ronneberger2015unet Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv preprint arXiv:1505.04597.
wen2022ufno Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., & Benson, S. M. (2022). U-FNO – An enhanced Fourier neural operator-based deep-learning model for multiphase flow. arXiv preprint arXiv:2109.03697.
francestudy Mellouli, N., Rabah, M. L., & Farah, I. R. (2022). Transformers-based time series forecasting for piezometric level prediction. In 2022 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS).
Wunsch_comparison Wunsch, A., Liesch, T., & Broda, S. (2021). Groundwater level forecasting with artificial neural networks: a comparison of long short-term memory (LSTM), convolutional neural networks (CNNs), and non-linear autoregressive networks with exogenous input (NARX). Hydrology and Earth System Sciences, 25(3), 1671-1687.
jiang2023fouriermionet Jiang, Z., Zhu, M., Li, D., Li, Q., Yuan, Y. O., & Lu, L. (2023). Fourier-MIONet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration. arXiv preprint arXiv:2303.04778.
ho2019axial Ho, J., Kalchbrenner, N., Weissenborn, D., & Salimans, T. (2019). Axial Attention in Multidimensional Transformers. arXiv preprint arXiv:1912.12180.
seidman2022nomad Seidman, J. H., Kissas, G., Perdikaris, P., & Pappas, G. J. (2022). NOMAD: Nonlinear Manifold Decoders for Operator Learning. arXiv preprint arXiv:2206.03551.
deeponet Lu, L., Jin, P., Pang, G., Zhang, Z., & Karniadakis, G. E. (2021). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3), 218-229.
vaswani2017attention Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention Is All You Need. arXiv preprint arXiv:1706.03762.
§ APPENDIX A
This appendix provides detailed diagrams of the model structures.
§ APPENDIX B
This appendix sets out to examine whether simpler machine learning models, specifically a fully connected neural network, a linear regression model, and a Random Forest model, can achieve the same level of accuracy as more advanced models like the U-Net, the UNet+ViT models, and FNO in predicting groundwater levels.
The particular Random Forest model tested here used 30 estimators. The fully connected neural network, employed for this comparison, comprises three hidden layers, each containing 1000 nodes and using ReLU activation functions. The model holds an impressive count of 51.17 million trainable parameters.
Unfortunately, none of the models was able to accurately predict the groundwater levels or capture the locations of the wells. Specifically, the fully connected neural network and the linear regression model yielded high RMSEs of 1.17 × 10^-1 and 1.24 × 10^-1, respectively. The Random Forest model fared slightly better, achieving a lower RMSE of 1.02 × 10^-1, but it still fell short of the U-Net, the UNet+ViT models, and FNO.
Figure <ref> visually contrasts the predictions of these simpler models against the ground truth. Their significant underperformance becomes evident when compared to more sophisticated models. For a comparison of these results with the accurate outcomes produced by the UNet+ViT model, the reader is directed to Figure <ref>.
|
http://arxiv.org/abs/2307.03963v1 | 20230708122517 | An observational signature for extremal black holes | [
"Stefanos Aretakis",
"Gaurav Khanna",
"Subir Sabharwal"
] | gr-qc | [
"gr-qc",
"hep-th",
"math-ph",
"math.MP"
] | |
http://arxiv.org/abs/2307.04412v1 | 20230710083245 | Enhancing Biomedical Text Summarization and Question-Answering: On the Utility of Domain-Specific Pre-Training | [
"Dima Galat",
"Marian-Andrei Rizoiu"
] | cs.CL | [
"cs.CL"
] |
2023
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).
CLEF 2023: Conference and Labs of the Evaluation Forum, September 18–21, 2023, Thessaloniki, Greece
University of Technology Sydney participation in BioASQ Task 11b Phase B
Dima Galat (orcid: 0000-0003-3825-2142, email: dima.galat [@] student.uts.edu.au, url: https://github.com/dimagalat/)
Marian-Andrei Rizoiu (orcid: 0000-0003-0381-669X, email: Marian-Andrei.Rizoiu [@] uts.edu.au, url: https://www.rizoiu.eu/)
University of Technology Sydney (UTS), Australia
Biomedical summarization requires large datasets to train for text generation. We show that while transfer learning offers a viable option for addressing this challenge, in-domain pre-training does not always offer advantages in a BioASQ summarization task. We identify a suitable model architecture and use it to show the benefit of general-domain pre-training followed by task-specific fine-tuning in the context of a BioASQ summarization task, leading to a novel three-step fine-tuning approach that works with only a thousand in-domain examples. Our results indicate that a Large Language Model without domain-specific pre-training can have a significant edge in some domain-specific biomedical text generation tasks.
natural language processing biomedical summarization biomedical question answering transfer learning language modeling domain-specific pre-training BioASQ CEUR-WS
§ INTRODUCTION
The fields of question-answering and summarization have witnessed significant advancements in recent years, with a shift from classification-based extractive approaches to the emergence of abstractive summarization models.
This transition has been driven by the superior performance and enhanced generalization capabilities exhibited by abstractive models, effectively blurring the boundary between long-form question answering and summarization.
This paper addresses the summarization challenge presented by BioASQ Task B Phase B in the biomedical domain, for which we propose a novel approach.
The healthcare sector holds immense potential for leveraging health research data sharing to enhance clinical care, informed decision-making, and scientific discovery <cit.>.
Sharing biomedical and healthcare studies and research data with the wider public requires robust and efficient methods.
Large pre-trained language models (LLMs) have emerged as promising candidates for this purpose.
LLMs have the potential to store medical knowledge while accommodating variations in data and application tasks <cit.>.
This paper aims to analyze the impact of the training process on LLMs' ability to store biomedical knowledge, explicitly focusing on their utilization for a question-answering and summarization task.
Traditionally, achieving state-of-the-art performance on natural language processing tasks involves a two-phase approach <cit.> that is shown in blue in the top row of <ref>:
pre-training the models on an extensive range of texts and topics, followed by task-specific fine-tuning <cit.>.
This approach has revolutionized various areas of natural language processing <cit.>, with LLMs such as BERT, GPT, and BART demonstrating remarkable capabilities.
However, pre-training models is a time-consuming and a resource-intensive process, and the literature lacks comprehensive insights into the performance of these models for domain-specific applications with limited data availability.
Therefore, this study aims to address this gap by examining the performance of LLMs in the context of the BioASQ summarization task.
This paper investigates two open questions concerning biomedical domain question-answering and text summarization tasks.
Over the past five years, the biomedical domain has increasingly relied on in-domain pre-training and fine-tuning of BERT <cit.> for a wide range of datasets and benchmarks <cit.>.
In-domain pre-training has proven effective in enhancing performance for discriminatory biomedical tasks.
However, BERT's architecture is not optimized for text generation tasks <cit.>, lacking an autoregressive decoder to generate tokens based on previously generated ones.
Consequently, BERT is suboptimal for generation tasks, necessitating exploring alternative approaches.
Previous studies evaluating biomedical models across diverse tasks have not reported results on generation problems due to using non-autoregressive models <cit.>.
The first question is: is there a better-suited architecture for biomedical text generation tasks?
A significant amount of research suggests that domain-specific pre-training significantly outperforms mixed-domain pre-training.
However, we could not find any convincing evidence for supporting this belief when it comes to text generation problems <cit.>.
The second question is: do LLMs need to be pre-trained in-domain to achieve optimal performance?
We answer the above two questions.
To investigate the efficacy of domain-specific pre-training and fine-tuning for biomedical text generation, we propose an alternative three-step approach (shown in the bottom row of <ref>).
In this approach, we initially train a general-domain LLM, followed by fine-tuning for a specific task in the general domain (text summarization) and subsequent fine-tuning for the target biomedical domain task.
Contrary to established theories in the biomedical domain <cit.>, our findings suggest that having a large task-specific dataset can be more valuable than domain-specific pre-training for biomedical text generation tasks.
This approach aligns with studies indicating that diverse pre-training objectives, larger and more diverse datasets, and tasks contribute to the robustness of the fine-tuning process even without domain adaptation <cit.>.
We explore alternative architectures for biomedical text generation.
In this study, we focus on BART <cit.>, a comprehensive architecture that incorporates pre-training objectives from both BERT <cit.> and GPT <cit.> models.
BART has demonstrated state-of-the-art performance in abstractive dialogue, question-answering, and summarization tasks, making it particularly effective for text generation and comprehension.
Our experimental results showcase the benefits and effectiveness of utilizing the BART architecture for transfer learning techniques in a context of a biomedical summarization task.
The main contributions of this work can be summarized as follows:
* Evaluating the advantages of domain-specific pre-training in the context of text generation tasks.
* Evaluating the impact of task-specific training on improving text generation tasks.
* Assessing the performance of BART, an encoder with an auto-regressive decoder architecture, in the biomedical question answering task B of BioASQ 11.
§ RELATED WORK
We are looking for an LLM that has an architecture suitable for long-form question answering and has been trained on relevant in-domain data. Several important model architectures, and the pre-training objectives used to optimize them, are worth considering <cit.>.
First, let us briefly mention BERT <cit.> in the context of text generation, since most biomedical Transformer-based <cit.> models still rely on this architecture. BERT does not have an autoregressive decoder, preventing it from generating text. Despite this fact, a well-known summarisation approach called PreSumm <cit.> uses this architecture by inserting additional tokens for teaching models which sentences should be included in the summary. We followed the process proposed by the authors while using a BioBERT <cit.> model; we first trained an extractive summariser, which did perform a little better on BioASQ data than a regular BERT trained the same way. Unfortunately, when training an abstractive summarization architecture, the PreSumm <cit.> process uses a randomly initialised Transformer <cit.> for a decoder. It appears that there is a significant mismatch between this decoder and a BioBERT <cit.> encoder, leading to an unstable abstractive fine-tuning process and poor generation outputs in our experiments. Based on these findings, we have concluded that BERT is the wrong architecture to use for text generation tasks.
BART <cit.> is an architecture that uses an encoder with an auto-regressive decoder, similarly to the original Transformer <cit.>. BART relies on an architecture which can be seen as generalising BERT (because it also uses a bi-directional encoder) and GPT <cit.> (because it also uses the left-to-right decoder). This model is using a masked language modeling objective (also known as denoising) introduced by BERT <cit.> and adds two additional denoising objectives (token deletion and sentence permutation). Authors conduct experiments that are focused on text generation, and show that denoising objectives are particularly well-suited for summarization tasks. Because it can be easily fine-tuned directly for generation tasks, authors achieved a remarkable success on a wide range of abstractive summarization and long-form question answering problems <cit.>.
BioBART <cit.> is a BART model pre-trained on PubMed <cit.> abstracts. The authors report that they trained without one of the objectives proposed by BART, namely sentence permutation, showing that models trained without this objective perform better. Overall, this is the only study that we are aware of that applies an LLM to a range of generation tasks and reports the results (another BioGPT <cit.> study we found has not reported any numeric results on text generation problems). We are also not completely convinced that some of the results, like those reported for a BioASQ task, could not simply be the result of random chance, since the differences in the scores are very small and there are a few possible sources of non-determinism in the training and generation procedures, which we discuss later in this paper.
§ OUR CONTRIBUTION
In the biomedical domain, the majority of models we have reviewed are focused on the pre-training process, perhaps because pre-training data is readily available <cit.>. However, question answering and summarization are plagued by the lack of a large domain-specific dataset for fine-tuning LLMs directly for text generation problems. More specifically, when we look at biomedical text generation tasks, it is hard to find a large (and clean) sequence-to-sequence dataset for fine-tuning for long-form question answering and summarization. BioASQ is the closest dataset currently available; however, it is still a few orders of magnitude away from what we would require to fine-tune an LLM for a previously unseen generation task. Therefore, we conclude that this two-step fine-tuning process offers limited utility for this problem.
Following a conventional transfer learning definition we use a task to refer to training on labeled data, seeking to transfer the knowledge from a source task and a source domain (𝒯_S and 𝒟_S) to a target task and a target domain (𝒯_T and 𝒟_T) <cit.>. One of the common transfer learning scenarios involves learning the tasks sequentially, one after another; and we could also have an intermediate fine-tuning task making it a three-step fine tuning process, where a second step is only used to get a representation that is more suitable for the task of summarization in a biomedical domain. This means that an intermediate 𝒯_inter (which could be both in/out domain) should lead to a performance improvement in 𝒯_T. This could be potentially useful, since task-domain specific data is hard to come by.
Since we need to perform text generation, a reasonable option is to train for a 𝒯_inter which teaches the model to perform this task. Unfortunately, large question answering and summarization datasets like SQUAD <cit.> and CNN/DM <cit.> have nothing to do with the biomedical domain, but because we need 10-100 times more biomedical summarization data than what we have available, we believe that task-specific datasets could offer just as much value as domain-specific pre-training. We believe that CNN/DM is the most suitable (clean, large, easily available) task-specific dataset, especially because summaries there are typically closely related to source sentences, which is also the case with the BioASQ data. Moreover, the lengths of summaries are similar to those in BioASQ. Therefore, we are interested in this task, even though a news-media domain would likely have completely different marginal probability distributions of generated text. This approach means that in addition to sequential transfer learning (the two- and three-step fine-tuning processes described above), models competing with a two-step fine-tuning strategy would also have to adapt for the domain difference (i.e., differences in prior and conditional distributions). The second 𝒯_inter we considered for training is Pubmed <cit.> article-abstract pairs. While these are not summaries in the stricter sense of the word, this is the closest domain-specific dataset that we could find, and we would like to understand if it adds useful information to an LLM.
§ MODELS COMPARED
We select LLMs that reduce the amount of overall training required.
We select a mix of domain-specific pre-training and general pre-training datasets, and we attempt different 𝒯_inters to see how well the resulting models generalize to 𝒯_T, namely BioASQ Task 11b Phase B. Hence, the final list of LLMs we are considering are:
* BART - a baseline two-step LLM (without additional fine-tuning) used to establish a baseline for a general domain model without specialized domain knowledge or 𝒯_inter fine-tuning
* BioBART - a biomedical two-step LLM (without fine-tuning 𝒯_inter), used to establish a baseline for an in-domain model
* BART CNN - a baseline LLM three-step LLM with task-specific fine-tuning 𝒯_inter but without any deep domain knowledge
* BioBART CNN - a biomedical three-step LLM with task-specific fine-tuning 𝒯_inter
* BART CNN Pubmed - a general domain three-step LLM fine-tuned for 𝒯_inter summarisation task, and then further fine-tuned on a domain-specific 𝒯_inter dataset containing Pubmed articles
Based on the data available, we believe that these tasks and LLMs offer the greatest benefit for biomedical summarization, and we limit our selection to 5 models that will participate in the BioASQ competition.
We are only considering large models because we want the model to analyze as much context as possible, and therefore having a large model helps to double the context length (1024 tokens vs. 512 tokens).
We are using pre-trained BART, BioBART, and BART CNN models available via Huggingface[<https://huggingface.co/models>]; and we are fine-tuning BART CNN on Pubmed data and BioBART on CNN data for one epoch each (our 𝒯_inter). Subsequently, all models are fine-tuned on the latest complete BioASQ 11 dataset (𝒯_T) for five epochs using a 10-fold cross-validation process. We empirically chose the number of training epochs to maximize the final model scores.
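For reference, a single fine-tuning step of this pipeline can be reproduced with the standard Hugging Face sequence-to-sequence tooling; the snippet below is a simplified sketch (the field names, hyperparameters, and the commented-out trainer call are illustrative rather than the exact competition settings):

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

checkpoint = "facebook/bart-large-cnn"   # swap in the BioBART checkpoint for the BioBART runs
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    # "snippets": concatenated relevant passages; "ideal_answer": target summary
    model_inputs = tokenizer(batch["snippets"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["ideal_answer"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

args = Seq2SeqTrainingArguments(output_dir="bart-bioasq", num_train_epochs=5,
                                per_device_train_batch_size=4, learning_rate=3e-5,
                                predict_with_generate=True)
# trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer,
#                          train_dataset=bioasq_train.map(preprocess, batched=True))
# trainer.train()
```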
We have tried training on Pubmed (with and without training on CNN) and found it beneficial when using a general-domain model. Despite this, the CNN dataset is a much better 𝒯_inter for the BioASQ 𝒯_T. Using Pubmed (summarisation) data for fine-tuning BioBART before or after CNN training did not offer advantages (<ref>, <ref>), so these variants were excluded from the top five models under consideration.
§ RESULTS
Our experiments have revealed a substantial (over 10%) variation in ROUGE <cit.> score results based on a simple choice of a seed parameter for cross-validation.
This indicates that the fine-tuning process is susceptible to changes in data.
Future studies should consider which BioASQ information snippets are passed to the model as the input for summarization training. Working with small healthcare question-answering datasets can require a more careful knowledge extraction process <cit.>.
We have experimented with fine-tuning for up to 10 epochs on the 𝒯_T, and found that this problem consistently persists across a range of training scenarios. In-domain studies we have reviewed show that the generation results can often differ by a minimal margin, significantly lower than the variation in scores we have observed in cross-validation.
To our knowledge, this research is the first to draw attention to this specific problem, and we decided to overcome this by repeating the 10-fold cross-validation training process 𝒯_T four times using a different seed value.
Therefore, we effectively report the average of 400 runs for each model (95% t-test confidence interval is given in parentheses), with 100 runs for each seed choice (ten for each fold).
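The repeated cross-validation protocol can be made explicit as follows (a schematic fragment; the fine-tuning and scoring routine is a placeholder for the pipeline described above):

```python
import numpy as np
from sklearn.model_selection import KFold

def repeated_cv(examples, train_and_score, seeds=(0, 1, 2, 3), n_splits=10):
    """Average a score over n_splits folds, repeated for several shuffling seeds."""
    scores = []
    for seed in seeds:
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for train_idx, test_idx in kf.split(examples):
            scores.append(train_and_score(train_idx, test_idx))
    return float(np.mean(scores)), float(np.std(scores))
```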
We are primarily focused on SU4-F1 scores (<ref>) since they have been shown to correlate with human scores the best <cit.>.
However, ROUGE is recall-oriented; therefore, we also look at Recall results separately (<ref>).
Our experiments (<ref>) suggest that LLMs without domain-specific pre-training show a better capacity for domain-specific text generation. This becomes particularly clear when comparing BART and BioBART results before any additional task-specific fine-tuning, suggesting that BioASQ data is not as similar to Pubmed pre-training data as we would expect based on other results reported on discriminatory tasks.
Moreover, we believe that currently a non-domain specific CNN summarization task 𝒯_inter is required to accomplish the best results on a BioASQ task.
Adding in-domain Pubmed data improves Recall; however, Pubmed data is unsuitable for training for a summarization task from scratch. ROUGE Recall scores (<ref>) show one notable difference, BART CNN has a higher recall, whereas BART CNN Pubmed has a higher precision, likely because the Pubmed training after the task-specific training introduces a task-specific vocabulary to the model.
Overall, LLMs have established some remarkable results in various practical applications.
However, since LLMs require task-specific datasets to train to generate text, and such domain-specific datasets are scarce, we need to find ways to overcome these challenges.
We have presented an approach that focuses on applications of transfer learning to a domain with limited task-specific training data.
§ CONCLUSION AND FUTURE WORK
In this work, we have observed that task-specific data is critical for generating text in a biomedical domain.
Based on our experiments, models without in-domain pre-training are better at summarizing BioASQ data.
Unfortunately, our models have achieved fairly modest automated ROUGE scores during BioASQ 11 runs, and we are waiting for the final results to determine how the models have performed overall. The generation process is non-deterministic, and while the answers generated by the models appear sensible, we need better ways to evaluate the candidates.
We have discussed how transfer learning can overcome challenges with data availability.
We see a lot of exciting possibilities for using generator models (more specifically paraphrasing, simplification, and rewriting models <cit.>) for creating synthetic training data, as well as for providing a differentiable loss function which allows sampling a wider space of possible answers without over-penalizing exploration.
Abstractive summarization models are trained to generate specific gold sequences, even when they start making errors in the first steps (a problem known as exposure bias <cit.>).
One recent improvement over BART proposes generating multiple candidates and comparing them, showing a new SOTA on several popular summarization datasets <cit.>.
This could address a common shortcoming of autoregressive models, leading to further performance improvements.
Another possibility that shows a significant promise would be generating synthetic data to augment BioASQ.
This approach has recently shown good results in machine translation <cit.>, and we believe it can be used for other text-generation problems.
§ ADDITIONAL RESULTS
|